
Inland waters contain multiple optically active constituents, and resolving the interference from chlorophyll-a and colored dissolved organic matter (CDOM) helps to accurately invert the total suspended matter concentration (Ctsm). In this study, according to the characteristics of the Multispectral Imager for Inshore (MII) aboard the first Sustainable Development Goals Science Satellite (SDGSAT-1), an iterative inversion model was established, based on the iterative analysis of multiple linear regression, to estimate Ctsm. The Hydrolight radiative transfer model was used to simulate the radiative transfer process of Lake Taihu and to analyze the effect of the three constituent concentrations on remote sensing reflectance.
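As an illustration of the kind of regression at the core of such a model, the sketch below fits a multiple linear regression of a synthetic Ctsm on simulated reflectance bands, then iteratively drops insignificant bands and refits. All band values, coefficients, and thresholds are invented for this sketch and are not the MII model's actual terms.

```python
import numpy as np

# Synthetic data: 4 reflectance bands, of which band 3 is irrelevant to Ctsm.
rng = np.random.default_rng(0)
n = 300
bands = rng.uniform(0.001, 0.05, size=(n, 4))       # toy Rrs bands (sr^-1)
ctsm = 20.0 + 900.0 * bands[:, 0] - 250.0 * bands[:, 1] + 400.0 * bands[:, 2]
ctsm += rng.normal(0.0, 0.2, n)                      # measurement noise (mg/L)

# Iterative multiple linear regression: fit, drop the weakest predictor,
# refit, until every remaining band is statistically significant.
keep = list(range(bands.shape[1]))
while True:
    X = np.hstack([np.ones((n, 1)), bands[:, keep]])
    beta, *_ = np.linalg.lstsq(X, ctsm, rcond=None)  # ordinary least squares
    resid = ctsm - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])        # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    t = np.abs(beta[1:] / se[1:])                    # t-statistic per band
    if t.min() >= 2.0 or len(keep) == 1:
        break
    keep.pop(int(np.argmin(t)))                      # drop weakest band, refit
```

The loop implements a simple backward-elimination variant of stepwise regression; the significance cutoff of |t| >= 2 is a common rule of thumb, not the paper's criterion.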

We speak with Sakana AI, who are building nature-inspired methods that could fundamentally transform how we develop AI systems.

The guests include Chris Lu, a researcher who recently completed his DPhil at Oxford University under Prof. Jakob Foerster’s supervision, where he focused on meta-learning and multi-agent systems. Chris is the first author of the DiscoPOP paper, which demonstrates how language models can discover and design better training algorithms. Also joining is Robert Tjarko Lange, a founding member of Sakana AI who specializes in evolutionary algorithms and large language models. Robert leads research at the intersection of evolutionary computation and foundation models, and is completing his PhD at TU Berlin on evolutionary meta-learning. The discussion also features Cong Lu, currently a Research Scientist at Google DeepMind’s Open-Endedness team, who previously helped develop The AI Scientist and Intelligent Go-Explore.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting!
https://centml.ai/pricing/

Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. They are hiring a Chief Engineer and ML engineers, and host events in Zurich.

Go to https://tufalabs.ai/
***

Rather than simply scaling up models with more parameters and data, they’re drawing inspiration from biological evolution to create more efficient and creative AI systems. The team explains how their Tokyo-based startup, founded in 2023 with $30 million in funding, aims to harness principles like natural selection and emergence to develop next-generation AI.

Satellite-based optical remote sensing from missions such as ESA’s Sentinel-2 (S2) has emerged as a valuable tool for continuously monitoring the Earth’s surface, making it particularly useful for quantifying key cropland traits in the context of sustainable agriculture [1]. Upcoming operational imaging spectroscopy satellite missions will have an improved capability to routinely acquire spectral data over vast cultivated regions, thereby providing an entire suite of products for agricultural system management [2]. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) [3] will complement the multispectral Copernicus S2 mission, providing enhanced services for sustainable agriculture [4, 5]. To use satellite spectral data for quantifying vegetation traits, it is crucial to mitigate the absorption and scattering effects caused by molecules and aerosols in the atmosphere. This data processing step, known as atmospheric correction, converts top-of-atmosphere (TOA) radiance data into bottom-of-atmosphere (BOA) reflectance, and it is one of the most challenging satellite data processing steps, e.g., [6, 7, 8]. Atmospheric correction relies on the inversion of an atmospheric radiative transfer model (RTM) to obtain surface reflectance, typically through the interpolation of large precomputed lookup tables (LUTs) [9, 10]. LUT interpolation errors, the intrinsic uncertainties of the atmospheric RTMs, and the ill-posedness of the inversion of atmospheric characteristics all introduce uncertainty into atmospheric correction [11]. In addition, topographic, adjacency, and bidirectional surface reflectance corrections are usually applied sequentially in processing chains, which can accumulate errors in the BOA reflectance data [6].
Thus, despite its importance, the inversion of surface reflectance data unavoidably introduces uncertainties that can affect downstream analyses and impact the accuracy and reliability of subsequent products and algorithms, such as vegetation trait retrieval [12]. In other words, given the critical role of atmospheric correction in remote sensing, the accuracy of vegetation trait retrievals is prone to uncertainty when atmospheric correction is not properly performed [13].
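To make the LUT-based inversion concrete, here is a toy single-band sketch under a Lambertian assumption: path reflectance, total transmittance, and spherical albedo are precomputed on an aerosol optical thickness (AOT) grid, interpolated at the scene's AOT, and the standard inversion recovers BOA from TOA reflectance. All grid values are invented for illustration and do not come from any operational processor.

```python
import numpy as np

# Toy single-band atmospheric LUT, indexed by aerosol optical thickness.
aot_grid = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
rho_path = np.array([0.010, 0.018, 0.032, 0.055, 0.090])  # path reflectance
trans    = np.array([0.95, 0.92, 0.86, 0.76, 0.60])       # two-way transmittance
s_alb    = np.array([0.05, 0.07, 0.10, 0.14, 0.20])       # spherical albedo

def toa_reflectance(rho_boa, aot):
    """Forward model: Lambertian surface under a plane-parallel atmosphere."""
    rp = np.interp(aot, aot_grid, rho_path)
    t = np.interp(aot, aot_grid, trans)
    s = np.interp(aot, aot_grid, s_alb)
    return rp + t * rho_boa / (1.0 - s * rho_boa)

def boa_reflectance(rho_toa, aot):
    """Inverse model: atmospheric correction by LUT interpolation."""
    rp = np.interp(aot, aot_grid, rho_path)
    t = np.interp(aot, aot_grid, trans)
    s = np.interp(aot, aot_grid, s_alb)
    y = (rho_toa - rp) / t            # remove path signal and transmittance
    return y / (1.0 + s * y)          # undo multiple-scattering coupling

rho_rec = boa_reflectance(toa_reflectance(0.3, 0.25), 0.25)  # round trip
```

At AOT values between grid nodes the interpolation is only approximate relative to a full RTM run, which is one source of the LUT interpolation error mentioned above.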

Although advanced atmospheric correction schemes have become an integral part of the operational processing of satellite missions, e.g., [9,14,15], standardised exhaustive atmospheric correction schemes remain less prevalent in drone, airborne, and scientific satellite missions, e.g., [16,17]. The complexity of atmospheric correction further increases when moving from multispectral to hyperspectral data, where rigorous correction must be applied to hundreds of narrow contiguous spectral bands, e.g., [6,8,18]. To bypass these challenges, several studies have instead proposed to infer vegetation traits directly from radiance data at the top of the atmosphere [12,19,20,21,22,23,24,25,26].

Researchers are investigating fluid–robot interactions, motivated by fish that use vortices to save energy. Onboard sensing, computation, and actuation are essential for effective navigation. Despite their potential, data-driven algorithms frequently lack practical validation.

Using inertial measurements to infer background flows is a new approach, motivated by the ability of fish vestibular systems to sense acceleration. The method provides an affordable substitute for intricate flow sensors on autonomous underwater vehicles.

In this regard, the Caltech team developed an underwater robot that uses these flows to reduce energy consumption by “surfing” vortices to reach its destination.

The advancement could enable turbulence analysis of entire nuclear fusion reactors.


“By utilizing deep learning on GPUs, we have reduced computation time by a factor of 1,000 compared to traditional CPU-based codes,” said the joint research team.

“This advancement represents a cornerstone for digital twin technologies, enabling turbulent analysis of entire nuclear fusion reactors or replicating real Tokamaks in a virtual computing environment.”

The researchers underlined that the proposed FPL-net can solve the Fokker-Planck-Landau (FPL) equation in a single step, achieving results 1,000 times faster than previous methods with an error margin of just one hundred-thousandth (10^-5), demonstrating exceptional accuracy.
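The "single step" idea can be illustrated in the abstract: many explicit solver iterations are collapsed into one precomputed operator application. This is a conceptual toy, not the team's FPL-net; a 1-D heat equation stands in for the collision physics, and all sizes are invented.

```python
import numpy as np

# Explicit finite-difference update matrix for 1-D heat diffusion:
# u_new = (1 - 2r) u_i + r u_{i+1} + r u_{i-1}, stable for r < 0.5.
n, steps, r = 64, 1000, 0.2
A = np.eye(n) * (1 - 2 * r) + np.eye(n, k=1) * r + np.eye(n, k=-1) * r

# "Single-step" surrogate: precompute the 1,000-step operator once.
A_once = np.linalg.matrix_power(A, steps)

u0 = np.exp(-0.5 * ((np.arange(n) - n / 2) / 4.0) ** 2)  # Gaussian pulse

u_iter = u0.copy()
for _ in range(steps):
    u_iter = A @ u_iter          # traditional time stepping, 1,000 applies

u_once = A_once @ u0             # one-shot evaluation of the same map
```

A neural surrogate like FPL-net plays the role of `A_once` for a nonlinear operator that has no closed-form power, trading many solver iterations for one learned forward pass.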

At the very start, Aubrey claims that, so long as he has the funding, he can finish the RMR studies in three years, and then things take off from there. He seems to hint that the LEV prediction of 12–15 years could be thrown out, with LEV arriving sooner.


In this in-depth conversation, Dr. Aubrey de Grey discusses his Robust Mouse Rejuvenation (RMR) studies at the LEV Foundation and why he believes we’re close to achieving the crucial RMR milestone within just three years — a breakthrough that could transform aging research forever.

You’ll also hear about:
His predictions for reaching Longevity Escape Velocity by the late 2030s.
What he would change about Bryan Johnson’s longevity algorithm.
How reaching RMR could trigger a global …

Neutron stars are some of the densest objects in the universe. They are the collapsed cores of massive stars that went supernova; they have a typical radius of 10 km—just slightly more than the height of Mt. Everest—and a density that can be several times that of atomic nuclei.

Physicists love extreme objects like this because such objects force them to stretch their theories into new realms and see whether the theories are confirmed or break, requiring new thinking and new science.

For the first time, researchers have used lattice quantum chromodynamics to study the interior of neutron stars, obtaining a new maximum bound for the speed of sound inside the star and a better understanding of how pressure, temperature and other properties there relate to one another.



How can artificial intelligence (AI) help improve city planning to account for more green spaces? This is what a recent study published in the ACM Journal on Computing and Sustainable Societies hopes to address, as a team of researchers proposed a novel AI-based approach for both monitoring and improving urban green spaces. These natural public spaces, such as parks and gardens, provide a myriad of benefits, including physical and mental health, climate change mitigation, wildlife habitat, and increased social interaction.

For the study, the researchers developed a method they refer to as “green augmentation”, which uses an AI algorithm to analyze Google Earth satellite images, with the goal of improving on current AI methods by more accurately identifying green vegetation like grass and trees under various weather and seasonal conditions. For comparison, current AI methods identify green vegetation with an accuracy and reliability of 63.3 percent and 64 percent, respectively; using the new method, the researchers identified green vegetation with an accuracy and reliability of 89.4 percent and 90.6 percent, respectively.

“Previous methods relied on simple light wavelength measurements,” said Dr. Rumi Chunara, who is an associate professor of biostatistics at New York University and a co-author on the study. “Our system learns to recognize more subtle patterns that distinguish trees from grass, even in challenging urban environments. This type of data is necessary for urban planners to identify neighborhoods that lack vegetation so they can develop new green spaces that will deliver the most benefits possible. Without accurate mapping, cities cannot address disparities effectively.”
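The “simple light wavelength measurements” of previous methods typically means spectral index thresholds such as NDVI, computed from red and near-infrared reflectance. A minimal sketch with invented pixel values and a commonly used cutoff:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards divide-by-zero

# Hypothetical per-pixel reflectances: vegetation absorbs red and
# reflects strongly in the near-infrared, so its NDVI is high.
red = np.array([0.08, 0.30, 0.05, 0.25])
nir = np.array([0.45, 0.32, 0.40, 0.26])
veg_mask = ndvi(red, nir) > 0.3              # common vegetation threshold
```

Such a fixed threshold cannot separate trees from grass or adapt to season and lighting, which is the limitation the learned approach in the study is meant to overcome.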

A new model, Evo 2, trained on roughly 128,000 genomes—9.3 trillion DNA letter pairs—spanning all of life’s domains, is now the largest generative AI model for biology to date. Built by scientists at the Arc Institute, Stanford University, and Nvidia, Evo 2 can write whole chromosomes and small genomes from scratch.

It also learned how DNA mutations affect proteins, RNA, and overall health, shining light on “non-coding” regions, in particular. These mysterious sections of DNA don’t make proteins but often control gene activity and are linked to diseases.

The team has released Evo 2’s software code and model parameters to the scientific community for further exploration. Researchers can also access the tool through a user-friendly web interface. With Evo 2 as a foundation, scientists may develop more specific AI models. These could predict how mutations affect a protein’s function, how genes operate differently across cell types, or even help researchers design new genomes for synthetic biology.