
A protocol to realize near-perfect atom-photon entanglement

Quantum technologies, devices and systems that operate by leveraging quantum mechanical effects, could tackle some tasks more reliably and efficiently than any classical technology. In recent years, some researchers have been trying to scale up quantum computers by realizing quantum networks, which essentially consist of several smaller, connected quantum processors.

The devices in a quantum network are connected via entanglement, a quantum effect via which distant quantum particles become inextricably linked and share a single correlated state. One way to create entanglement between different atomic quantum computers is to use an atom-cavity interface, a system in which atoms interact with light inside an optical cavity.

Over two decades ago, two physicists at the University of Aarhus introduced a protocol designed to produce high-quality entangled states, reliably connecting devices in a network. Despite its potential, this framework, known as the state-carving (SC) protocol, was found to only succeed in 50% of cases, which has so far prevented its application on a large scale.
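The 50% ceiling matters because a heralded entanglement protocol must simply be retried until it succeeds. Under the simple assumption that attempts are independent, the number of tries is geometrically distributed with mean 1/p, so a 50% success rate means two attempts on average per entangled link. A minimal sketch of that arithmetic (an illustrative model only, not the protocol itself):

```python
# Illustrative model: each entanglement attempt succeeds with probability p,
# and the protocol retries until success (geometric distribution, mean 1/p).
import random

def attempts_until_success(p, rng):
    """Count independent Bernoulli trials until the first success."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)
trials = 100_000
mean = sum(attempts_until_success(0.5, rng) for _ in range(trials)) / trials
print(round(mean, 1))  # close to 2.0, the expected 1/p for p = 0.5
```

With many links to establish in a large network, this factor-of-two overhead compounds, which is why a near-deterministic variant of the protocol is sought.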

When light ‘thinks’ like the brain: The connection between photons and artificial memory

An international study has revealed a surprising connection between quantum physics and the theoretical models underlying artificial intelligence. The study results from a collaboration between the Institute of Nanotechnology of the National Research Council (Cnr-Nanotec), the Italian Institute of Technology (IIT), and Sapienza University of Rome, together with international research institutions. The research paper was published recently in the journal Physical Review Letters.

Italian researchers show that identical photons propagating within optical circuits spontaneously behave like a Hopfield Network, one of the best-known mathematical models used to describe the associative memory mechanisms of the human brain.

“Instead of using traditional electronic chips, we exploited quantum interference—the phenomenon that occurs in photonic chips when particles of light overlap and interact with one another to encode and retrieve information,” explains Marco Leonetti, coordinator and corresponding author of the study, senior researcher at Cnr-Nanotec and affiliated with the Center for Life Nano- and Neuro-Science at the Italian Institute of Technology (IIT) in Rome. “In this system, photons are not merely carriers of data, but themselves become the ‘neurons’ of an associative memory.”
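For readers unfamiliar with the model, a Hopfield network stores patterns in symmetric pairwise couplings (a Hebbian rule) and retrieves them by iterated threshold updates, so a corrupted input relaxes to the nearest stored memory. The sketch below is the standard textbook model in plain Python, not the photonic implementation described in the paper:

```python
# Textbook Hopfield network: Hebbian storage plus sign-threshold recall.
def train(patterns):
    """Hebbian couplings: W[i][j] = sum over patterns of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, sweeps=10):
    """Update each 'neuron' toward agreement with its local field."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

# Store one 8-neuron pattern, then recall it from a corrupted copy.
pattern = [1, -1, 1, -1, 1, -1, 1, -1]
W = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]                 # flip one neuron
print(recall(W, noisy) == pattern)   # prints True: associative retrieval
```

In the photonic version, the role of the couplings is played by interference between overlapping photons rather than by a stored weight matrix.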

Electrical control of magnetism in 2D materials promises to advance spintronics

Conventional electronics process information by leveraging the electrical charge of electrons. Over the past few decades, some electronics engineers have been exploring the potential of a different type of device that instead processes and stores data by exploiting the intrinsic magnetic moment (i.e., spin) of electrons.

These devices, known as spintronic devices, could consume less energy, process data faster and be easier to miniaturize than current electronics. A central objective for engineers developing spintronics is to identify promising strategies for controlling magnetism in devices without wasting power.

One promising approach to control magnetism entails the use of multiferroics, materials that exhibit both ferroelectricity, meaning that positive and negative charges in them are permanently separated, and ferromagnetism, which means that magnetic moments in them are aligned. When one of these properties can be used to control the other, this is known as magnetoelectric coupling.

Rapid Evolution of Complex Multi-mutant Proteins

The researchers developed MULTI-evolve, a framework for efficient protein evolution that applies machine learning models trained on datasets of ~200 variants focused specifically on pairs of function-enhancing mutations.

Published in Science, this work represents the first lab-in-the-loop framework for biological design, where computational prediction and experimental design are tightly integrated from the outset, reflecting our broader investment in AI-guided research.

Our insight was to focus on quality over quantity. First identify ~15–20 function-enhancing mutations (using protein language models or experimental screens), then systematically test all pairwise combinations of those beneficial mutations. This generates ~100–200 measurements, and every one is informative for learning beneficial epistatic interactions.
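The data budget quoted above follows directly from the combinatorics: with n function-enhancing single mutations there are n(n-1)/2 pairwise combinations, so 15 to 20 singles yield roughly 100 to 200 double mutants to test. A quick check of those figures:

```python
# Pairwise data budget for MULTI-evolve-style screens:
# n beneficial singles -> C(n, 2) double mutants to measure.
from math import comb

for n_singles in (15, 20):
    n_pairs = comb(n_singles, 2)
    print(n_singles, "singles ->", n_pairs, "pairs")
# 15 singles -> 105 pairs; 20 singles -> 190 pairs,
# matching the ~100-200 measurements cited in the text.
```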

We validated this computationally using 12 existing protein datasets from published studies. Training neural networks on only the single and double mutants, we found models could accurately predict complex multi-mutants (variants with 3–12 mutations) across all 12 diverse protein families. This result held even when we reduced training data to just 10% of what was available.

Training on double mutants works because they reveal epistasis. A double mutant might perform better than the sum of its parts (synergy), worse than expected (antagonism), or exactly as predicted (additivity). These pairwise interaction patterns teach models the rules for how mutations combine, enabling extrapolation to predict which 5-, 6-, or 7-mutation combinations will work synergistically.
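As an illustration of this logic (with made-up fitness numbers, not data from the paper), the epistasis term for each pair is the observed double-mutant effect minus the additive expectation from its two singles; a higher-order combination can then be scored as the sum of its single effects plus all of its pairwise epistasis terms:

```python
# Hypothetical log-fitness gains, chosen only to illustrate the arithmetic.
from itertools import combinations

single = {"A": 0.4, "B": 0.3, "C": 0.2}           # single-mutant effects
double = {("A", "B"): 0.9,                         # measured double mutants
          ("A", "C"): 0.6,
          ("B", "C"): 0.5}

# Epistasis = observed double-mutant effect minus the additive expectation.
epistasis = {pair: double[pair] - sum(single[m] for m in pair)
             for pair in double}
# ("A","B") is synergistic (+0.2); the other two pairs are purely additive.

def predict(muts):
    """Score a multi-mutant: additive singles plus all pairwise terms."""
    total = sum(single[m] for m in muts)
    total += sum(epistasis[pair] for pair in combinations(sorted(muts), 2))
    return total

print(round(predict(("A", "B", "C")), 2))  # prints 1.1: 0.9 additive + 0.2 synergy
```

A trained neural network generalizes this pairwise model, but the principle is the same: double mutants supply the interaction signal that makes extrapolation to 5-plus-mutation variants possible.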

We then applied MULTI-evolve to three new proteins: APEX (up to 256-fold improvement over wild-type, 4.8-fold beyond already-optimized APEX2), dCasRx for trans-splicing (up to 9.8-fold improvement), and an anti-CD122 antibody (2.7-fold binding improvement to 1.0 nM, 6.5-fold expression increase). For dCasRx, we started with a deep mutational scan of 11,000 variants, extracted only the function-enhancing mutations, and tested their pairwise combinations—demonstrating the value of strategic data curation for efficient engineering.

Each required experimentally testing only ~100–200 variants in a single round to train models that accurately predicted complex multi-mutants, compressing what traditionally takes 5–10 iterative cycles over many months into weeks.

The quantum world reveals reality is made of relations, not objects

The everyday picture: a world of objects

We ordinarily think of the world as a collection of things or individual objects: tables, trees, planets, particles, people.

This way of thinking is not only intuitive but also tremendously useful. Whether crossing a busy street or hunting prey, we survive by tracking the motions of objects: judging their distances, anticipating their paths, and timing our actions accordingly. Evolutionarily speaking, this is a worldview to which humanity owes its continued existence.

Feedback neurons based on perovskite memristor with nickel single-atom engineered reduced graphene oxide cathode

Scientists have long looked to the human brain as the ultimate blueprint for computing, seeking to build “neuromorphic” systems that process information with the same efficiency and flexibility as our own neurons. However, replicating the brain’s complex ability to both excite and inhibit signals—essentially “talking” and “listening” simultaneously—has proven difficult with standard hardware.

The problem? Perovskites are often too chaotic. Tiny charged particles called ions tend to zip around inside the material too quickly, making the device’s behavior hard to control. Additionally, the “bottlenecks” (barriers) where the electricity enters the device often cause lopsided performance, preventing the smooth, bidirectional communication required for advanced brain-like tasks.

Li et al. report feedback neurons based on perovskite memristors with a nickel single-atom modified reduced graphene oxide cathode. The device implements an unsupervised learning network with over 50% clustering accuracy and supports cooperative learning for solving an NP-hard combinatorial optimisation problem.

A new form of aluminum unlocks sustainable and cheaper catalysts

A research team at King’s College London has isolated a new form of aluminum, a highly abundant metal that could provide a far cheaper and more sustainable alternative to commonly used rare earth metals. Dr. Clare Bakewell, Senior Lecturer in the Department of Chemistry, and her lab developed highly reactive aluminum molecules able to break apart tough chemical bonds. Published in Nature Communications, their work has also unlocked molecular structures that have never been observed before, creating the potential for new kinds of reactive behavior.

The team reported the first example of a cyclotrialumane, a compound comprising three aluminum atoms arranged in a trimeric, triangular structure. The trimeric molecule exhibits unprecedented reactivity, and its structure is retained when dissolved in different solvents, making it robust enough for use in a range of chemical reactions. These include splitting dihydrogen and the stepwise insertion and chain growth of the two-carbon hydrocarbon ethene.

Metals are vital for making a whole range of commodity and fine chemicals produced in industry. However, many processes, especially catalytic ones, use expensive precious materials like platinum, which are environmentally damaging to extract.

Auroras on Ganymede and Earth share striking similarities

New observations of Ganymede reveal a striking similarity between the auroras on the largest moon in the solar system and those on Earth. The international team of astrophysicists, led by researchers from the University of Liège, has produced new results indicating that, despite different conditions, the fundamental physical processes that generate auroras are common to different celestial bodies, and not just planets.

A team of astrophysicists from the Laboratory of Atmospheric and Planetary Physics (LPAP) has observed for the first time the fine details of the auroras on Ganymede, the only moon in the solar system to have its own intrinsic magnetic field, similar to that of Earth. The observation of auroras is a cornerstone of space weather analysis, as it provides a comprehensive view of the characteristics and effects of space particle precipitation into atmospheres.

For centuries, humanity has witnessed a diffuse and changing glow that occasionally illuminates the night sky with red, green, purple and blue lights, known as the “aurora.” Auroras are typically observed at polar latitudes, though the recent peak of the 11-year solar cycle has produced many instances of intense auroras at mid-latitudes.
