
AI efficiency advances with spintronic memory chip that combines storage and processing

To make accurate predictions and reliably complete desired tasks, most artificial intelligence (AI) systems need to rapidly analyze large amounts of data. This currently entails the transfer of data between processing and memory units, which are separate in existing electronic devices.

Over the past few years, many engineers have been trying to develop new hardware, known as compute-in-memory (CIM) systems, that could run AI algorithms more efficiently. CIM systems are electronic components that both perform computations and store information, typically serving as processors and as non-volatile memories. Non-volatile essentially means that they retain data even when they are turned off.

Most previously introduced CIM designs rely on analog computing approaches, which allow devices to perform calculations directly with electrical currents. Despite their good energy efficiency, analog computing techniques are known to be significantly less precise than digital computing methods and often fail to reliably handle large AI models or vast amounts of data.
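As a rough, self-contained illustration of that precision trade-off (not a model of the spintronic chip itself), the sketch below compares an exact "digital" matrix-vector product with an analog-style one in which every stored weight is perturbed by current noise; the noise level sigma is an assumed parameter.

```python
import numpy as np

# Illustrative only: exact "digital" matrix-vector product vs. an "analog"
# in-memory version where every stored weight picks up current noise.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))   # weights held in the memory array
x = rng.normal(size=256)          # input activations

digital = W @ x                   # exact digital result

sigma = 0.05                      # assumed relative noise on each weight
analog = (W * (1.0 + sigma * rng.normal(size=W.shape))) @ x

rel_error = np.linalg.norm(analog - digital) / np.linalg.norm(digital)
print(f"relative error of the analog result: {rel_error:.3f}")
```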

Artificial neurons replicate biological function for improved computer chips

Researchers at the USC Viterbi School of Engineering and School of Advanced Computing have developed artificial neurons that replicate the complex electrochemical behavior of biological brain cells.

The innovation, documented in Nature Electronics, is a leap forward in neuromorphic computing technology. It could shrink chip size and cut energy consumption by orders of magnitude, and it could advance progress toward artificial general intelligence.

Unlike conventional digital processors or existing silicon-based neuromorphic chips that merely simulate neural activity, these artificial neurons physically embody the analog dynamics of their biological counterparts. Just as neurochemicals initiate brain activity, chemicals can be used to initiate computation in neuromorphic (brain-inspired) chips. By physically replicating the biological process, they differ from prior artificial neurons, which were solely mathematical equations.

Unit-free theorem pinpoints key variables for AI and physics models

Machine learning models are designed to take in data, to find patterns or relationships within those data, and to use what they have learned to make predictions or to create new content. The quality of those outputs depends not only on the details of a model’s inner workings but also, crucially, on the information that is fed into the model.

Some models follow a brute force approach, essentially adding every bit of data related to a particular problem into the model and seeing what comes out. But a sleeker, less energy-hungry way to approach a problem is to determine which variables are vital to the outcome and only provide the model with information about those key variables.

Now, Adrián Lozano-Durán, an associate professor of aerospace at Caltech and a visiting professor at MIT, and MIT graduate student Yuan Yuan have developed a theorem that takes any number of possible variables and whittles them down, leaving only those that are most important. In the process, the method removes all units, such as meters and feet, from the underlying equations, making them dimensionless, something scientists require of equations that describe the physical world. The work can be applied not only to machine learning but to any model of a physical system.
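The theorem itself is not reproduced in this summary, but its classical ancestor, Buckingham's Pi theorem, shows the basic move of trading unit-bearing variables for dimensionless groups: write each variable's units as exponents of base dimensions and read the groups off the null space of that matrix. The variables below (density, velocity, length, viscosity, which combine into the Reynolds number) are an illustrative assumption, not the example from the paper.

```python
import numpy as np

# Dimension matrix: rows are base dimensions (mass, length, time), columns
# are the variables rho (density), U (velocity), L (length), mu (viscosity).
# Entries are the exponents of each base dimension in that variable's units.
D = np.array([
    [ 1,  0, 0,  1],   # mass
    [-3,  1, 1, -1],   # length
    [ 0, -1, 0, -1],   # time
], dtype=float)

# Exponent vectors in the null space of D combine the variables into
# dimensionless groups (Buckingham Pi). Here the null space is spanned by
# (1, 1, 1, -1), i.e. the Reynolds number rho*U*L/mu.
_, s, Vt = np.linalg.svd(D)
rank = int(np.sum(s > 1e-10))
pi_groups = Vt[rank:]
print(pi_groups / pi_groups[0, 0])   # -> approximately [[1, 1, 1, -1]]
```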

Researcher improves century-old equation to predict movement of dangerous air pollutants

A new method developed at the University of Warwick offers the first simple and predictive way to calculate how irregularly shaped nanoparticles—a dangerous class of airborne pollutant—move through the air.

Every day, we breathe in millions of airborne particles, including soot, dust, pollen, microplastics, viruses, and synthetic nanoparticles. Some are small enough to slip deep into the lungs and even enter the bloodstream, contributing to conditions such as heart disease, stroke, and cancer.

Most of these particles are irregularly shaped. Yet the mathematical models used to predict how they behave typically assume they are perfect spheres, simply because the equations are easier to solve. This makes it difficult to monitor or predict the movement of real-world, non-spherical, and often more hazardous particles.
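The Warwick method itself is not spelled out in this summary, but the sphere-based baseline it improves on is standard: Stokes' settling velocity for a spherical particle, sometimes patched with an empirical dynamic shape factor for irregular shapes. The particle properties and the shape factor below are illustrative assumptions.

```python
# Sphere-based baseline for airborne particles: Stokes' settling velocity,
# with an empirical dynamic shape factor chi (chi = 1 for a perfect sphere).
# All numbers are illustrative assumptions, not values from the paper.

G = 9.81            # gravitational acceleration, m/s^2
MU_AIR = 1.8e-5     # dynamic viscosity of air, Pa*s
RHO_AIR = 1.2       # air density, kg/m^3

def stokes_settling_velocity(d, rho_p, chi=1.0):
    """Terminal settling velocity (m/s) of a particle of diameter d (m),
    density rho_p (kg/m^3), and dynamic shape factor chi."""
    return (rho_p - RHO_AIR) * G * d**2 / (18.0 * MU_AIR * chi)

# A 2-micron soot-like particle, modeled as a sphere vs. an irregular shape.
print(stokes_settling_velocity(2e-6, 1800.0))           # sphere assumption
print(stokes_settling_velocity(2e-6, 1800.0, chi=1.4))  # assumed shape factor
```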

Gravitational wave events hint at ‘second-generation’ black holes

In a paper published in The Astrophysical Journal Letters, the international LIGO-Virgo-KAGRA Collaboration reports on the detection of two gravitational wave events in October and November of 2024 with unusual black hole spins. This observation adds an important new piece to our understanding of the most elusive phenomena in the universe.

Gravitational waves are “ripples” in space-time that result from cataclysmic events in deep space, with the strongest waves produced by the collision of black holes.

Using sophisticated algorithmic techniques and mathematical models, researchers can reconstruct many physical features of the detected black holes from the gravitational-wave signals, such as their masses, the distance of the event from Earth, and even the speed and direction of their rotation about their axes, known as spin.
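One standard ingredient in that kind of reconstruction is matched filtering: correlating the noisy detector output against a bank of template waveforms and keeping the template parameters that maximize the response. The toy sketch below is not the LIGO-Virgo-KAGRA pipeline; it simply recovers the sweep rate of a made-up chirp buried in white noise.

```python
import numpy as np

# Toy matched filter: recover the frequency-sweep rate of a chirp buried in
# noise by correlating the data against a bank of templates. Illustrative
# only; real analyses work in the frequency domain with detector noise models.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096)

def chirp(rate):
    """Unit-amplitude chirp whose frequency rises at the given rate (Hz/s)."""
    return np.sin(2 * np.pi * (30.0 * t + 0.5 * rate * t**2))

true_rate = 120.0
data = chirp(true_rate) + 2.0 * rng.normal(size=t.size)   # signal + noise

rates = np.linspace(50.0, 200.0, 301)                     # template bank
scores = []
for r in rates:
    template = chirp(r)
    scores.append(np.dot(data, template) / np.linalg.norm(template))

print("best-fit sweep rate:", rates[int(np.argmax(scores))])
```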

ML Systems Textbook

Machine Learning Systems provides a systematic framework for understanding and engineering machine learning (ML) systems. This textbook bridges the gap between theoretical foundations and practical engineering, emphasizing the systems perspective required to build effective AI solutions. Unlike resources that focus primarily on algorithms and model architectures, this book highlights the broader context in which ML systems operate, including data engineering, model optimization, hardware-aware training, and inference acceleration. Readers will develop the ability to reason about ML system architectures and apply enduring engineering principles for building flexible, efficient, and robust machine learning systems.

AI teaches itself and outperforms human-designed algorithms

Like humans, artificial intelligence learns by trial and error, but traditionally, it requires humans to set the ball rolling by designing the algorithms and rules that govern the learning process. However, as AI technology advances, machines are increasingly doing things themselves. An example is a new AI system developed by researchers that invented its own way to learn, resulting in an algorithm that outperformed human-designed algorithms on a series of complex tasks.

For decades, human engineers have designed the algorithms that agents use to learn, especially reinforcement learning (RL), where an AI learns by receiving rewards for successful actions. While learning comes naturally to humans and animals, thanks to millions of years of evolution, it has to be explicitly taught to AI. This process is often slow and laborious and is ultimately limited by human intuition.

Taking their cue from evolution, which is a random trial and error process, the researchers created a large digital population of AI agents. These agents tried to solve numerous tasks in many different, complex environments using a particular learning rule.
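The published system meta-learns a full reinforcement-learning update rule, which is far richer than anything shown here. The toy sketch below only captures the outer evolutionary loop: each "agent" is reduced to a learning-rate and exploration parameter for a bandit task, and the best performers are mutated into the next generation. The task, population size, and mutation scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(lr, eps, steps=2000):
    """Average reward of an epsilon-greedy learner on a random 10-armed bandit."""
    true_means = rng.normal(size=10)
    q = np.zeros(10)
    total = 0.0
    for _ in range(steps):
        a = rng.integers(10) if rng.random() < eps else int(np.argmax(q))
        r = true_means[a] + rng.normal()
        q[a] += lr * (r - q[a])        # the "learning rule" being evolved
        total += r
    return total / steps

# Population of agents = candidate (learning-rate, exploration) pairs.
pop = rng.uniform([0.01, 0.0], [1.0, 0.5], size=(20, 2))
for _ in range(10):
    scores = np.array([fitness(lr, eps) for lr, eps in pop])
    elite = pop[np.argsort(scores)[-5:]]                   # keep the top 5
    pop = np.clip(elite[rng.integers(5, size=20)]
                  + 0.05 * rng.normal(size=(20, 2)), 1e-3, 1.0)

print("best evolved rule (lr, eps):", elite[-1])
```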

Google claims its latest quantum algorithm can outperform supercomputers on a real-world task

Researchers from Google Quantum AI report that their quantum processor, Willow, ran a quantum algorithm that solved a complex physics problem thousands of times faster than the world’s most powerful classical supercomputers. If verified, this would be one of the first demonstrations of practical quantum advantage, in which a quantum computer solves a real-world problem faster and more accurately than a classical computer.

In a new paper published in the journal Nature, the researchers provided details on how their algorithm, called Quantum Echoes, measured the complex behavior of particles in highly entangled quantum systems. These are systems in which multiple particles are linked so that they share the same fate even when physically separated. If you measure the property of one particle, you instantly know something about the others. This linkage makes the overall system so complex that it is difficult to model on ordinary computers.

The Quantum Echoes algorithm uses a concept called an Out-of-Time-Order Correlator (OTOC), which measures how quickly information spreads and scrambles in a quantum system. The researchers chose this specific measurement because, as they state in the paper, “OTOCs have quantum interference effects that endow them with a high sensitivity to details of the quantum dynamics and, for OTOC, also high levels of classical simulation complexity. As such, OTOCs are viable candidates for realizing practical quantum advantage.”
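Quantum Echoes runs on Willow's hardware, but the quantity it targets can be written down for a toy system: with W(t) = U(t)† W U(t), an OTOC such as Tr[W(t)† V† W(t) V] / 2^n tracks how an initially local perturbation W spreads until it fails to commute with a distant operator V. The sketch below evaluates this by brute-force matrix exponentiation for a 4-qubit transverse-field Ising chain, which is only feasible because the system is tiny; the Hamiltonian and operators are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, site, n=4):
    """Embed a single-qubit operator at `site` in an n-qubit Hilbert space."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

n = 4
# Toy transverse-field Ising Hamiltonian on a 4-site chain.
H = sum(site_op(Z, i) @ site_op(Z, i + 1) for i in range(n - 1)) \
    + sum(site_op(X, i) for i in range(n))

W0 = site_op(X, 0)          # "butterfly" operator on the first qubit
V = site_op(Z, n - 1)       # probe operator on the last qubit

for t in (0.0, 0.5, 1.0, 2.0):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W0 @ U
    otoc = np.trace(Wt.conj().T @ V.conj().T @ Wt @ V).real / 2**n
    print(f"t = {t:.1f}  OTOC = {otoc:+.3f}")   # starts at +1, decays as W spreads
```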

https://lnkd.in/gUDFq8KF

Explicit solution of the Navier-Stokes equation, a millennium problem: can we prove that fluid motion always stays smooth, or can it blow up into chaos?

Here is the equation that rules all fluids:

ρ (∂u/∂t + (u·∇)u) = −∇p + μ∇²u + f

where u is the velocity field (how the fluid moves), p is the pressure, μ is the viscosity (internal friction), ρ is the density, and f collects external forces such as gravity.

Instead of solving for the velocity u directly, the approach treats the fluid like a symphony of interacting notes:

φ(x, t) = ∫ d³k [ aₖ e^(−iωt + ik·x) + aₖ† e^(iωt − ik·x) ]

Each aₖ and aₖ† represents a creation or annihilation operator, the conductors of the quantum orchestra of sound.

Analogy: fluid as a symphony. Imagine a calm pond. Every ripple is a gentle musical note. Now drop many stones: the ripples overlap, collide, and amplify. That is turbulence.
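As a toy version of that "symphony of notes" picture, the sketch below superposes a handful of plane-wave modes aₖ e^(i(kx − ωt)) plus their complex conjugates into a real 1D field. The amplitudes and the dispersion relation ω = c|k| are assumptions for illustration, not part of any Navier-Stokes solution.

```python
import numpy as np

# Build a real 1D field phi(x, t) as a sum of plane-wave "notes" and their
# complex conjugates. Amplitudes and dispersion are illustrative assumptions.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 2 * np.pi, 512)
ks = np.arange(1, 9)                                          # wavenumbers
a = rng.normal(size=ks.size) + 1j * rng.normal(size=ks.size)  # mode amplitudes
c = 1.0                                                       # assumed wave speed

def field(t):
    """phi(x, t) = sum_k [a_k exp(i(kx - w t)) + c.c.] with w = c*k."""
    phase = 1j * (ks[:, None] * x[None, :] - c * ks[:, None] * t)
    modes = a[:, None] * np.exp(phase)
    return (modes + modes.conj()).sum(axis=0).real

print(field(0.0)[:5])   # a few samples of the initial ripple pattern
```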

AI tools fall short in predicting suicide, study finds

The accuracy of machine learning algorithms for predicting suicidal behavior is too low to be useful for screening or for prioritizing high-risk individuals for interventions, according to a new study published September 11 in the open-access journal PLOS Medicine by Matthew Spittal of the University of Melbourne, Australia, and colleagues.

Numerous risk assessment scales have been developed over the past 50 years to identify patients at high risk of suicide or self-harm. In general, these scales have had poor predictive accuracy, but the availability of modern machine learning methods combined with electronic health record data has re-focused attention on developing algorithms to predict suicide and self-harm.

In the new study, researchers undertook a systematic review and meta-analysis of 53 previous studies that used machine learning algorithms to predict suicide, self-harm, and a combined suicide/self-harm outcome. In all, the studies involved more than 35 million health records and nearly 250,000 cases of suicide or hospital-treated self-harm.
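One reason predictive accuracy remains too low to be clinically useful is simple base-rate arithmetic: when the outcome is rare, even a model with respectable sensitivity and specificity flags mostly false positives. The numbers below are illustrative assumptions, not figures from the meta-analysis.

```python
# Illustrative base-rate calculation (assumed numbers, not study results):
# with a rare outcome, a classifier with decent sensitivity and specificity
# still produces a low positive predictive value.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(sensitivity=0.70, specificity=0.90,
                                prevalence=0.005)   # outcome in 0.5% of patients
print(f"PPV = {ppv:.1%}")   # roughly 3%: most flagged patients are false alarms
```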
