Physics-based algorithm enables nuclear microreactors to autonomously adjust power output

A new physics-based algorithm clears a path toward nuclear microreactors that can autonomously adjust power output based on need, according to a University of Michigan-led study published in Progress in Nuclear Energy.

Easily transportable and able to generate up to 20 megawatts of thermal power for heat or electricity, nuclear microreactors could be useful in settings such as remote areas, disaster zones, or even cargo ships, among other applications.

If integrated into an electrical grid, nuclear microreactors could provide stable, carbon-free energy, but they must be able to adjust output to match shifting demand—a capability known as load following. In large reactors, staff make these adjustments manually, which would be cost-prohibitive in remote areas, imposing a barrier to adoption.
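
The study's algorithm itself is physics-based and tailored to microreactor dynamics, but the load-following concept can be sketched in a few lines: a controller measures the gap between demand and current output and nudges power toward the target. The sketch below is a hypothetical proportional controller with made-up gains and demand values, not the method from the paper.

```python
# Toy sketch of load following: a proportional controller nudges reactor
# power toward a shifting demand signal. Illustrative only; the gain and
# demand profile are hypothetical, not from the U-M study.

def load_following_step(power_mw, demand_mw, gain=0.2):
    """Move reactor power a fraction of the way toward demand."""
    error = demand_mw - power_mw
    return power_mw + gain * error

power = 15.0                                           # current thermal power, MW
demand_profile = [15.0, 18.0, 20.0, 12.0, 10.0, 14.0]  # hypothetical demand, MW

for demand in demand_profile:
    power = load_following_step(power, demand)
    print(f"demand {demand:5.1f} MW -> power {power:5.2f} MW")
```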

Time crystals arise from quantum interactions once thought to prevent their formation

Nature has many rhythms: the seasons result from Earth’s movement around the sun; the ticking of a pendulum clock results from the oscillation of its pendulum. These phenomena can be understood with very simple equations. However, regular rhythms can also arise in a completely different way—by themselves, without an external clock, through the complex interaction of many particles. Instead of uniform disorder, a fixed rhythm emerges—this is referred to as a “time crystal.”

Calculations by TU Wien (Vienna) now show that such time crystals can also be generated in a completely different way than previously thought. The quantum physical correlations between particles, which were previously thought to be detrimental to the emergence of such phenomena, can actually stabilize time crystals. This is a surprising new insight into the quantum physics of many-particle systems.

The findings are published in the journal Physical Review Letters.

New approach improves accuracy of quantum chemistry simulations using machine learning

A new trick for modeling molecules with quantum accuracy takes a step toward revealing the equation at the center of a popular simulation approach, which is used in fundamental chemistry and materials science studies.

The effort to understand materials and molecules eats up roughly a third of national lab supercomputer time in the U.S. The gold standard for accuracy is the quantum many-body problem, which can tell you what’s happening at the level of individual electrons. This is the key to chemical and material behavior, as electrons are responsible for chemical reactivity and bonds, electrical properties and more. However, quantum many-body calculations are so difficult that scientists can only use them to model atoms and molecules with a handful of electrons at a time.

Density functional theory, or DFT, is easier—the computing resources needed for its calculations scale with the number of electrons cubed, rather than rising exponentially with each new electron. Instead of following each individual electron, this theory calculates electron densities—where the electrons are most likely to be located in space. In this way, it can be used to simulate the behavior of many hundreds of atoms.
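
To see why that matters, compare the two growth rates directly. The sketch below idealizes the many-body cost as one amplitude per basis state (2^n for n electrons) against DFT's n-cubed scaling; the real constants and exponents vary by method, so treat this as an order-of-magnitude illustration only.

```python
# Rough illustration of the scaling gap described above: the many-body
# state space grows exponentially with electron count (idealized here
# as 2**n), while DFT's cost grows like n**3.
for n in (10, 50, 100, 500):
    many_body = 2 ** n   # exponential: one amplitude per basis state
    dft = n ** 3         # polynomial: cubic in electron count
    print(f"n={n:4d}  many-body ~ {many_body:.2e}  DFT ~ {dft:.2e}")
```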

New detector mobilizes muons for nuclear, quantum materials

In a collaboration showing the power of innovation and teamwork, physicists and engineers at the Department of Energy’s Oak Ridge National Laboratory developed a mobile muon detector that promises to enhance monitoring for spent nuclear fuel and help address a critical challenge for quantum computing.

Like neutrons, muons, fundamental subatomic particles that travel at nearly the speed of light, allow scientists to peer deep inside matter at the atomic scale without damaging samples. However, unlike neutrons, which decay in about 10 minutes, muons decay within a couple of microseconds, posing challenges for using them to better understand the world around us.
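
A quick back-of-envelope calculation shows why those microseconds still leave muons usable: at nearly the speed of light, relativistic time dilation stretches a muon's effective range. The numbers below are textbook values (a 2.2-microsecond rest lifetime and roughly 4 GeV as a typical cosmic-ray muon energy), not figures from the ORNL work.

```python
# Back-of-envelope: even with a ~2.2 microsecond mean lifetime, a
# relativistic muon travels far because of time dilation.
TAU_MUON = 2.2e-6        # mean muon lifetime at rest, seconds
C = 3.0e8                # speed of light, m/s
MUON_MASS_MEV = 105.7    # muon rest mass, MeV/c^2

energy_mev = 4000.0      # a typical cosmic-ray muon energy (~4 GeV)
gamma = energy_mev / MUON_MASS_MEV          # Lorentz factor E / (m c^2)

decay_length_m = gamma * C * TAU_MUON       # mean distance before decay
print(f"gamma ~ {gamma:.0f}, mean decay length ~ {decay_length_m/1000:.1f} km")
```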

The new detector marks an important step toward ensuring the safety and accountability of nuclear materials and supports the development of advanced nuclear reactors that will help address the challenges of waste management. It is also a key step toward developing algorithms and methods to manage errors caused by cosmic radiation in qubits, the basic units of information in quantum computing. The development of the muon detector at ORNL reflects the lab’s strengths in discovery science enabled by multidisciplinary teams and powerful research tools to address national priorities.

Advanced AI links atomic structure to quantum tech

A research team led by Oak Ridge National Laboratory has developed a new method to uncover the atomic origins of unusual material behavior. This approach uses Bayesian deep learning, a form of artificial intelligence that combines probability theory and neural networks to analyze complex datasets with exceptional efficiency.

The technique reduces the time needed for experiments, helping researchers survey sample regions broadly and converge rapidly on features with interesting properties.

“This method makes it possible to study a material’s properties with much greater efficiency,” said ORNL’s Ganesh Narasimha. “Usually, we would need to scan a large region, and then several small regions, and perform spectroscopy, which is very time-consuming. Here, the AI algorithm takes control and does this process automatically and intelligently.”
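
The general pattern Narasimha describes, letting the algorithm decide where to measure next based on what it is least sure about, can be illustrated with a toy uncertainty-guided loop. The sketch below uses an ensemble of noise-perturbed polynomial fits as a crude stand-in for a Bayesian deep learning model; it is not ORNL's actual method, and every function and parameter in it is hypothetical.

```python
import numpy as np

# Toy uncertainty-guided measurement loop. NOT the ORNL Bayesian deep
# learning method: an ensemble of cubic fits stands in for the model,
# and the "sample" is a made-up 1D signal.

rng = np.random.default_rng(0)
true_signal = lambda x: np.sin(3 * x)     # hidden property we want to map
grid = np.linspace(0.0, 2.0, 200)         # candidate measurement locations

xs = [0.0, 0.7, 1.3, 2.0]                 # a few seed measurements
ys = [float(true_signal(x)) for x in xs]

for step in range(6):
    # Refit an ensemble on noise-perturbed data; the spread across the
    # ensemble approximates model uncertainty at each location.
    preds = []
    for _ in range(20):
        noisy = np.array(ys) + rng.normal(0.0, 0.05, len(ys))
        coef = np.polyfit(xs, noisy, deg=3)
        preds.append(np.polyval(coef, grid))
    uncertainty = np.std(preds, axis=0)
    for x in xs:                          # don't re-pick measured spots
        uncertainty[np.isclose(grid, x)] = 0.0
    x_next = float(grid[np.argmax(uncertainty)])
    xs.append(x_next)
    ys.append(float(true_signal(x_next)))
    print(f"step {step}: measuring at x = {x_next:.2f}")
```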

Google DeepMind discovers new solutions to century-old problems in fluid dynamics

For centuries, mathematicians have developed complex equations to describe the fundamental physics involved in fluid dynamics. These laws govern everything from the swirling vortex of a hurricane to airflow lifting an airplane’s wing.

Experts can carefully craft scenarios in which the theory departs from physical reality, producing situations that could never actually happen. These situations, in which quantities like velocity or pressure become infinite, are called ‘singularities’ or ‘blow ups’. They help mathematicians identify fundamental limitations in the equations of fluid dynamics, and help improve our understanding of how the physical world functions.
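
The idea of a finite-time singularity is easier to see in a much simpler setting than the fluid equations themselves. The toy ODE below, du/dt = u^2, has exact solution u(t) = 1/(1 - t) for u(0) = 1, which genuinely becomes infinite at t = 1; the numerical sketch is only an illustration of that behavior and is unrelated to the paper's actual constructions.

```python
# A minimal finite-time blow up, far simpler than the fluid equations
# in the paper: du/dt = u**2 with u(0) = 1 has exact solution
# u(t) = 1 / (1 - t), which becomes infinite at t = 1.

def euler_blowup(u0=1.0, dt=1e-5, t_end=0.999):
    """Integrate du/dt = u**2 toward the singularity with Euler steps."""
    u, t = u0, 0.0
    while t < t_end:
        u += dt * u * u
        t += dt
    return u

print("numerical u near t = 1:", euler_blowup())
print("exact u(0.999) =", 1.0 / (1.0 - 0.999))   # 1000.0
```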

In a new paper, we introduce an entirely new family of mathematical blow ups in some of the most complex equations that describe fluid motion. We’re publishing this work in collaboration with mathematicians and geophysicists from institutions including Brown University, New York University and Stanford University.

Doing The Math On CPU-Native AI Inference

A number of chip companies — importantly Intel and IBM, but also the Arm collective and AMD — have recently come out with new CPU designs that feature native support for artificial intelligence (AI) and related machine learning (ML) workloads. The need for math engines specifically designed to support machine learning algorithms, particularly for inference workloads but also for certain kinds of training, has been covered extensively here at The Next Platform.

Just to rattle off a few of them, consider the impending “Cirrus” Power10 processor from IBM, which is due in a matter of days from Big Blue in its high-end NUMA machines and which has a new matrix math engine aimed at accelerating machine learning. Or IBM’s “Telum” z16 mainframe processor coming next year, which was unveiled at the recent Hot Chips conference and which has a dedicated mixed precision matrix math core for the CPU cores to share. Intel is adding its Advanced Matrix Extensions (AMX) to its future “Sapphire Rapids” Xeon SP processors, which should have been here by now but which have been pushed out to early next year. Arm Holdings has created future Arm core designs, the “Zeus” V1 core and the “Perseus” N2 core, that will have substantially wider vector engines that support the mixed precision math commonly used for machine learning inference, too. Ditto for the vector engines in the “Milan” Epyc 7003 processors from AMD.
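
What these matrix engines have in common is low-precision multiplication with higher-precision accumulation. The snippet below emulates that recipe in software: 8-bit integer products accumulated in 32 bits, then rescaled to floating point. It mirrors the generic int8 inference pattern, not the instruction set of any chip named above.

```python
import numpy as np

# Emulates the quantized matrix math these engines accelerate in
# hardware: int8 multiplies accumulated in int32, then rescaled back
# to floating point.

def quantize(x, bits=8):
    """Map a float array onto signed int8 with a per-tensor scale."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(1)
a, b = rng.standard_normal((4, 8)), rng.standard_normal((8, 3))

qa, sa = quantize(a)
qb, sb = quantize(b)

# Accumulating in int32 avoids overflowing the int8 products.
acc = qa.astype(np.int32) @ qb.astype(np.int32)
approx = acc * (sa * sb)                 # rescale back to float

print("max abs error vs fp64 matmul:", np.abs(approx - a @ b).max())
```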

All of these chips are designed to keep inference on the CPUs, where in a lot of cases it belongs because of data security, data compliance, and application latency reasons.

First-principles simulations reveal quantum entanglement in molecular polariton dynamics

This is what fun looks like for a particular set of theoretical chemists driven to solve extremely difficult problems: deciding whether the electromagnetic fields in molecular polaritons should be treated classically or quantum mechanically.

Graduate student Millan Welman of the Hammes-Schiffer Group is first author on a new paper that presents a hierarchy of first principles simulations of the dynamics of molecular polaritons. The research is published in the Journal of Chemical Theory and Computation.

Originally 67 pages long, the paper is dense with von Neumann equations and power spectra. It explores dynamics on both electronic and vibrational energy scales. It makes use of time-dependent density functional theory (DFT) in both its conventional and nuclear-electronic orbital (NEO) forms. It spans semiclassical, mean-field-quantum, and full-quantum approaches to simulate dynamics.
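
For readers unfamiliar with the von Neumann equation at the heart of those simulations, a minimal example helps: it propagates a density matrix rho under d(rho)/dt = -i[H, rho] (with hbar = 1). The two-level Hamiltonian below is a deliberately trivial stand-in; the paper's systems couple molecular electronic and vibrational states to cavity photon modes.

```python
import numpy as np

# Minimal integration of the von Neumann equation, d(rho)/dt = -i [H, rho]
# with hbar = 1, for a two-level system. The toy Hamiltonian is only the
# simplest illustration of the equation, not the paper's polariton model.

H = np.array([[0.0, 0.5],
              [0.5, 1.0]], dtype=complex)     # toy two-level Hamiltonian
rho = np.array([[1.0, 0.0],
                [0.0, 0.0]], dtype=complex)   # start in the ground state

dt, steps = 0.01, 500
for _ in range(steps):
    drho = -1j * (H @ rho - rho @ H)          # commutator [H, rho]
    rho = rho + dt * drho                     # forward-Euler step

print("trace (should stay ~1):", rho.trace().real)
print("excited-state population:", rho[1, 1].real)
```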
