AI creates the first 100-billion-star Milky Way simulation

Researchers combined deep learning with high-resolution physics to create the first Milky Way model that tracks over 100 billion stars individually. Their AI learned how gas behaves after supernovae, removing one of the biggest computational bottlenecks in galactic modeling. The result is a simulation hundreds of times faster than current methods.
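The summary doesn't detail the architecture, but the broad pattern it describes — a deep-learning surrogate standing in for the expensive supernova-feedback calculation inside each simulation step — can be sketched minimally. Everything below (the class name, the state representation, the solver hook) is a hypothetical illustration, not the team's actual code.

```python
# Hypothetical sketch, not the team's code: a trained network replaces the
# expensive sub-step that models how gas evolves after a supernova, while
# the rest of the simulation advances with the usual solver.
import numpy as np

class SupernovaSurrogate:
    """Stand-in for a trained deep-learning model (weights assumed given)."""
    def __init__(self, weights):
        self.w = weights

    def predict(self, gas_state):
        # Real inference would map local gas properties (density,
        # temperature, velocity, ...) to their post-feedback values in one
        # cheap forward pass; a matrix product stands in here.
        return gas_state @ self.w

def advance(gas, supernova_mask, surrogate, hydro_solver):
    # Ordinary cells follow the normal hydrodynamics update; cells flagged
    # as hosting a fresh supernova are handed to the fast surrogate instead
    # of the fine-timestep feedback calculation.
    out = hydro_solver(gas)
    out[supernova_mask] = surrogate.predict(gas[supernova_mask])
    return out

# Toy usage: 1,000 gas cells with 4 state variables each.
gas = np.random.rand(1000, 4)
mask = np.random.rand(1000) < 0.01
model = SupernovaSurrogate(np.eye(4))
gas = advance(gas, mask, model, lambda g: g * 0.99)
```

The speedup comes from replacing many fine time steps of feedback physics with a single learned forward pass, which is why the overall simulation can run hundreds of times faster.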

UT Eclipses 5,000 GPUs To Increase Dominance in Open-Source AI, Strengthen Nation’s Computing Power

Amid the private sector’s race to lead artificial intelligence innovation, The University of Texas at Austin has strengthened its lead in academic computing power and its dominance in public, open-source AI. UT has acquired high-performance Dell PowerEdge servers and NVIDIA AI infrastructure powered by more than 4,000 NVIDIA Blackwell architecture graphics processing units (GPUs), the most powerful GPUs in production to date.

The new infrastructure is a game-changer for the University, expanding its research and development capabilities in agentic and generative AI while opening the door to more society-changing discoveries that support America’s technological dominance. The NVIDIA GB200 systems and NVIDIA Vera CPU servers will be installed as part of Horizon, the largest academic supercomputer in the nation, which goes online next year at UT’s Texas Advanced Computing Center (TACC). The National Science Foundation (NSF) is funding Horizon through its Leadership Class Computing Facility program to revolutionize U.S. computational research.

UT has the most AI computing power in academia. In total, the University has amassed more than 5,000 advanced NVIDIA GPUs across its academic and research facilities. The University has the computing power to produce open-source large language models — which power most modern AI applications — that rival those of any other public institution. Open-source computing is nonproprietary and serves as the backbone for publicly driven research. Unlike private-sector models, open-source models can be fine-tuned to support research in the public interest, producing discoveries that offer profound benefits to society in such areas as health care, drug development, materials and national security.
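As a concrete illustration of what fine-tuning an open-source model involves, here is a minimal sketch using the Hugging Face transformers library. The article names no specific model or toolkit; "gpt2" below is a small, freely available stand-in for whichever open-weights model a lab would actually use, and "domain_corpus.txt" is a placeholder dataset.

```python
# Minimal fine-tuning sketch with Hugging Face transformers. The checkpoint
# and dataset are illustrative placeholders, not anything UT has announced.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token                      # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

data = load_dataset("text", data_files="domain_corpus.txt")["train"]
data = data.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

Because the weights are open, the same loop works for any domain corpus — medical notes, materials literature, security datasets — which is the flexibility the passage is pointing to.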

One Giant Leap for AI Physics: NVIDIA Apollo Unveiled as Open Model Family for Scientific Simulation

NVIDIA Apollo will provide pretrained checkpoints and reference workflows for training, inference and benchmarking, allowing developers to integrate and customize the models for their specific needs.
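The excerpt doesn't document Apollo's actual API, so the file name and network below are placeholders; the sketch only shows the generic load-a-pretrained-checkpoint-and-run-inference pattern that "pretrained checkpoints and reference workflows" implies, written in plain PyTorch.

```python
# Generic checkpoint-loading and inference pattern (plain PyTorch). Every
# name here is a placeholder for whatever the released workflows provide.
import torch

model = torch.nn.Sequential(              # stand-in surrogate network
    torch.nn.Linear(64, 256),
    torch.nn.GELU(),
    torch.nn.Linear(256, 64),
)
state = torch.load("apollo_surrogate.pt", map_location="cpu")  # assumed path
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    fields = torch.randn(8, 64)           # batch of input physics states
    prediction = model(fields)            # one fast forward pass
print(prediction.shape)                   # torch.Size([8, 64])
```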

Industry Leaders Tap Into NVIDIA AI Physics

Applied Materials, Cadence, Lam Research, Luminary Cloud, KLA, PhysicsX, Rescale, Siemens and Synopsys are among the industry leaders that intend to train, fine-tune and deploy their AI technologies using the new open models. These companies are already using NVIDIA AI models and infrastructure to bolster their applications.

Interpretable AI reveals key atomic traits for efficient hydrogen storage in metal hydrides

Hydrogen fuel represents a clean energy option, but a major hurdle to making its use more mainstream is efficient storage. Hydrogen storage requires either extremely high-pressure tanks or extremely cold temperatures, which means that storage alone consumes a lot of energy. This is why metal hydrides, which can store hydrogen more efficiently, are such a promising option.

To help accurately predict performance metrics of materials, researchers at Tohoku University used a newly established data infrastructure: the Digital Hydrogen Platform (DigHyd). DigHyd integrates more than 5,000 meticulously curated experimental records from the literature, supported by an AI language model. The work is published in the journal Chemical Science.

Leveraging this extensive database, the researchers systematically explored physically interpretable models and found that fundamental atomic features (electronegativity, molar density, and ionic filling factor) emerge as key descriptors. Other researchers can use these descriptors to guide their materials design process without having to go through a lengthy trial-and-error search for promising candidates in the lab.
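As a toy illustration of the physically interpretable modeling approach, the sketch below fits a plain linear model on the three named descriptors with scikit-learn. The data is synthetic and the target "storage metric" is a placeholder for whatever property DigHyd actually records; the point is that the fitted coefficients remain directly readable.

```python
# Toy illustration with synthetic data: fit an interpretable linear model
# on the three descriptors the study identifies. The coefficients and the
# target metric are placeholders, not DigHyd values.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # columns: electronegativity, molar density,
                                # ionic filling factor
y = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
for name, coef in zip(
        ["electronegativity", "molar_density", "ionic_filling_factor"],
        model.coef_):
    print(f"{name}: {coef:+.3f}")   # signed weights stay directly readable
```

Unlike a black-box network, each weight here says how much a descriptor pushes the predicted metric up or down, which is what makes such models useful for guiding design.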

Brain organoid pioneers fear inflated claims about biocomputing could backfire

For the brain organoids in Lena Smirnova’s lab at Johns Hopkins University, there comes a time in their short lives when they must graduate from the cozy bath of the bioreactor, leave the warm, salty broth behind, and be plopped onto a silicon chip laced with microelectrodes. From there, these tiny white spheres of human tissue can simultaneously send and receive electrical signals that, once decoded by a computer, will show how the cells inside them are communicating with each other as they respond to their new environments.
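The decoding step this passage mentions typically begins with spike detection on each electrode channel. The sketch below shows a standard robust-threshold approach on synthetic data; this is a generic textbook method, not the Hopkins lab's actual pipeline.

```python
# Generic first step in decoding microelectrode-array recordings:
# threshold-based spike detection. Synthetic single-channel data only.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(scale=5.0, size=10_000)       # microvolts, one channel
signal[[1200, 4500, 7800]] -= 60.0                # inject three "spikes"

# Robust noise estimate from the median absolute deviation, then a
# negative-going detection threshold a few sigma below baseline.
sigma = np.median(np.abs(signal)) / 0.6745
spike_idx = np.flatnonzero(signal < -4.5 * sigma)
print(f"detected {spike_idx.size} spike samples at {spike_idx}")
```

Spike times pooled across many electrodes are what let researchers infer how cells inside the organoid are communicating.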

More and more, it looks like these miniature lab-grown brain models are able to do things that resemble the biological building blocks of learning and memory. That’s what Smirnova and her colleagues reported earlier this year. It was a step toward establishing something she and her husband and collaborator, Thomas Hartung, are calling “organoid intelligence.”

Another goal would be to leverage those functions to build biocomputers — organoid-machine hybrids that do the work of the systems powering today’s AI boom, but without all the environmental carnage. The idea is to harness some fraction of the human brain’s stunning information-processing efficiency in place of building more water-sucking, electricity-hogging supercomputing data centers.

Despite widespread skepticism, it’s an idea that’s started to gain some traction. Both the National Science Foundation and DARPA have invested millions of dollars in organoid-based biocomputing in recent years. And there are a handful of companies claiming to have built cell-based systems already capable of some form of intelligence. But to the scientists who first forged the field of brain organoids to study psychiatric and neurodevelopmental disorders and find new ways to treat them, this has all come as a rather unwelcome development.

At a meeting last week at the Asilomar conference center in California, researchers, ethicists, and legal experts gathered to discuss the ethical and social issues surrounding human neural organoids, which fall outside of existing regulatory structures for research on humans or animals. Much of the conversation circled around how and where the field might set limits for itself, which often came back to the question of how to tell when lab-cultured cellular constructs have started to develop sentience, consciousness, or other higher-order properties widely regarded as carrying moral weight.

Innovative underwater exoskeleton boosts diving efficiency

A research team led by Professor Wang Qining from the School of Advanced Manufacturing and Robotics, Peking University, has developed the world’s first portable underwater exoskeleton system that assists divers’ knee movement, significantly reducing air consumption and muscle effort during dives.

The findings, published in IEEE Transactions on Robotics on October 14, 2025, open new possibilities for enhancing human mobility in underwater environments.
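The summary doesn't describe the control law, but a common pattern for assistive exoskeletons is a motor torque proportional to the wearer's estimated joint effort. The sketch below is purely illustrative, with made-up gains and limits, and is not the Peking University team's controller.

```python
# Illustrative proportional-assist sketch for one knee joint; the gain and
# torque limit are invented values, not parameters from the paper.
def assist_torque(knee_velocity_rad_s, user_torque_est_nm,
                  gain=0.3, max_torque_nm=15.0):
    """Return a motor torque that offloads a fraction of the user's effort."""
    torque = gain * user_torque_est_nm
    # Only assist in the direction the joint is actually moving, and
    # saturate to the actuator limit.
    if torque * knee_velocity_rad_s < 0:
        torque = 0.0
    return max(-max_torque_nm, min(max_torque_nm, torque))

print(assist_torque(knee_velocity_rad_s=1.2, user_torque_est_nm=30.0))  # 9.0
```

Offloading part of each knee stroke is the mechanism by which such a system would reduce both muscle effort and, through lower exertion, the diver's air consumption.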

Advancing Drug Discovery with Artificial Intelligence

Lipid nanoparticles (LNPs) have emerged as popular vehicles for delivering various types of drugs such as mRNA and gene therapy. While these nanoparticles are effective in transporting therapeutic payloads, their components can interact with the human body, potentially causing genotoxicity — damage to the recipient’s genetic material that may lead to inheritable mutations or cancer. In this webinar brought to you by Inotiv, Shambhu Roy will discuss how to test the genotoxicity of LNP-based therapeutics to ensure the safety of these innovative drug delivery systems.

We’ll chat about these topics:

• Understanding the key components of LNP delivery systems
• Genotoxicity testing for LNP-based drugs during preclinical safety assessment
• Selecting the appropriate assays to meet regulatory requirements

Nvidia’s Blackwell Chips Anchor GMI Cloud’s $500 Million AI Build in Taiwan

GMI Cloud is stepping deeper into the AI infrastructure boom. The U.S.-based GPU-as-a-Service provider said Monday it will build a $500 million artificial intelligence data center in Taiwan, a project that will run on Nvidia’s new Blackwell GB300 chips and come online by March 2026. Bit by bit, Taiwan is becoming a major hub for next-generation compute, even as the island continues to wrestle with power-supply constraints.
