
Quantum computing without interruptions

Mid-circuit measurements are one of the biggest practical hurdles in quantum error correction on encoded qubits. Researchers in Innsbruck and Aachen have now proposed and experimentally demonstrated that a universal fault-tolerant quantum algorithm can be executed without such measurements. Using a trapped-ion quantum processor, the team successfully ran Grover’s quantum search algorithm on three logical qubits.

A key bottleneck in today’s leading approaches to quantum error correction is the need to repeatedly pause and measure the quantum processor mid-computation, a process that is slow, technically demanding, and itself a significant source of errors.

Now, a joint team from the University of Innsbruck, RWTH Aachen University, Forschungszentrum Jülich and spin-off Alpine Quantum Technologies (AQT) has demonstrated fault-tolerant quantum computation without any such interruptions.
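
For a concrete sense of the algorithm involved, here is a minimal NumPy sketch of Grover’s search on three ideal, noiseless qubits. It simulates the bare algorithm only (no logical encoding, no error correction), and the marked item is an arbitrary placeholder rather than a detail from the experiment.

```python
import numpy as np

n = 3                       # qubits, matching the experiment's logical register
N = 2 ** n                  # size of the search space
marked = 5                  # hypothetical marked item

# Start in the uniform superposition over all N basis states.
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: inversion about the mean, 2|s><s| - I.
s = np.full((N, 1), 1 / np.sqrt(N))
diffusion = 2 * (s @ s.T) - np.eye(N)

# The optimal iteration count is roughly (pi/4) * sqrt(N); here, 2.
for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
    state = diffusion @ (oracle @ state)

print(np.argmax(state ** 2))   # -> 5, recovered with ~95% probability
```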

A new equation may help baristas produce the perfect espresso shot every time

Everyone’s idea of the perfect cup of coffee is different. Whether you have yours black, with a splash of milk or extra sweet, you like it your way. But is there a universal law that governs how that flavor gets into your cup? According to new research published in the journal Royal Society Open Science, part of the answer lies in the permeability of the puck, the name for the bed of tightly packed coffee grains through which water passes under high pressure.

Making a really good espresso is essentially trial and error. No matter the coffee type, baristas must constantly adjust how finely the coffee is ground and how much is packed into the puck to achieve the right flow rate: the volume of liquid passing through the puck per unit time, which determines how long the water stays in contact with the grounds. The new research helps take some of the guesswork out of the process.
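
Permeability enters through Darcy’s law, the standard description of pressure-driven flow through a porous bed. As a back-of-envelope illustration (every number below is an assumption for the sketch, not a value from the paper), a plausible set of inputs lands near the order of 1 mL/s seen in a typical shot:

```python
import math

def darcy_flow_rate(k, area, dp, mu, length):
    """Volumetric flow rate Q = k * A * dP / (mu * L) through a porous bed."""
    return k * area * dp / (mu * length)

k = 2e-15                    # puck permeability, m^2 (assumed; finer grind -> smaller k)
area = math.pi * 0.029**2    # cross-section of a 58 mm basket, m^2
dp = 9e5                     # 9 bar pump pressure, Pa
mu = 3e-4                    # viscosity of hot water near 93 C, Pa*s
length = 0.015               # puck depth, m

q = darcy_flow_rate(k, area, dp, mu, length)    # m^3/s
print(f"{q * 1e6:.2f} mL/s")                    # ~1 mL/s, a plausible shot rate
```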

Psychological traits of scientists predict their theories and research methods

The research team also wanted to see if survey responses translated to actual scientific output. They received permission from a portion of the participants to securely link their survey answers with their professional publication records.

The team used machine learning to analyze the text of the scientists’ published abstracts and article titles, measuring how closely the words and phrasing matched among different authors. They also built algorithms to map out who these scientists collaborated with and which older papers they cited as foundational literature.
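
As an illustration of the text-similarity ingredient, here is a minimal sketch using TF-IDF vectors and cosine similarity. The excerpt does not specify the study’s actual pipeline, and the abstracts below are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for each author's published abstracts.
abstracts = {
    "author_a": "We model decision making with reinforcement learning.",
    "author_b": "Reinforcement learning models of human decision making.",
    "author_c": "A qualitative interview study of workplace culture.",
}

# Represent each author's text as a TF-IDF vector, then compare pairwise.
vectors = TfidfVectorizer().fit_transform(abstracts.values())
similarity = cosine_similarity(vectors)

print(similarity.round(2))  # a and b score far higher with each other than with c
```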

The algorithms revealed that cognitive traits are associated with differences in real-world publishing activity. This remained true even when controlling for a researcher’s specific subfield and preferred tools. Two psychologists who study the exact same topic using identical methods are still more likely to cite the same reference materials if they happen to share similar internal thinking styles.

New Advances Bring the Era of Quantum Computers Closer Than Ever

From the article:

By Charlie Wood, April 3, 2026

Two research groups say they have significantly reduced the number of qubits and the time required to crack common online security technologies.

Some 30 years ago, the mathematician Peter Shor took a niche physics project — the dream of building a computer based on the counterintuitive rules of quantum mechanics — and shook the world.

Shor worked out a way for quantum computers to swiftly solve a couple of math problems that classical computers could complete only after many billions of years. Those two math problems happened to be the ones that secured the then-emerging digital world. The trustworthiness of nearly every website, inbox, and bank account rests on the assumption that these two problems are impossible to solve. Shor’s algorithm proved that assumption wrong.
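
Factoring is the famous case. The quantum computer’s only job is to find the period r of a^x mod N; turning that period into factors is easy classical arithmetic, sketched below with the period found by brute force as a stand-in for the quantum step:

```python
from math import gcd

def shor_classical(N, a):
    # Period finding: the step a quantum computer does exponentially faster.
    # Brute force stands in here, which is only feasible for tiny N.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2:                    # need an even period; otherwise retry with new a
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:               # trivial square root of 1; retry with new a
        return None
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(15, 7))     # -> (3, 5), the factors of 15
```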

For 30 years, Shor’s algorithm has been a security threat in theory only. Physicists initially estimated that they would need a colossal quantum machine with billions of qubits — the elements used in quantum calculations — to run it. That estimate has come down drastically over the years, falling recently to a million qubits. But it has always sat comfortably beyond the modest capabilities of existing quantum computers, which typically have just hundreds of qubits.

Dozens of hidden star streams found in the outskirts of our Milky Way galaxy

To find them, Chen developed a computer algorithm called StarStream, which searches for streams using a physics-based model rather than relying on visual patterns alone, according to the study. The team then applied the method to data from the Gaia mission, which from 2014 to 2025 mapped the positions and motions of billions of stars in the Milky Way.
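
The excerpt does not detail StarStream’s internals, but the prediction-driven idea can be caricatured in a few lines: compute where a cluster’s escaped stars should sit in position and proper motion, then keep only catalog stars consistent with that expectation. Everything below (the straight-line “orbit,” the tolerances, the random catalog) is invented for illustration, so the survivors here are chance alignments; the point is the mechanics of the cut.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake catalog: sky positions (deg) and proper motions (mas/yr) for 10^5 stars.
stars_pos = rng.uniform(0, 100, size=(100_000, 2))
stars_pm = rng.normal(0, 5, size=(100_000, 2))

def predicted_pm(phi):
    """Toy physics model: proper motion expected at angle phi along the track.
    A real search would integrate the cluster's orbit in the Galactic potential."""
    return np.stack([0.05 * phi - 2.0, -0.02 * phi + 1.0], axis=-1)

phi = stars_pos[:, 0]                              # angle along the assumed track
near_track = np.abs(stars_pos[:, 1] - (0.3 * phi + 10)) < 0.5          # position cut
pm_match = np.linalg.norm(stars_pm - predicted_pm(phi), axis=1) < 1.0  # kinematics cut

candidates = near_track & pm_match                 # joint position + kinematics selection
print(candidates.sum(), "candidate stream members")
```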

“It turns out that it’s a lot easier to find things when you have a theoretical expectation of what you’re looking for when you have a simple phenomenological picture,” Gnedin said in the statement.

The results also revealed that many streams do not match the classic expectation of thin, well-aligned trails. Instead, the study reports that some of the newfound streams are shorter, wider or even misaligned with their parent clusters’ orbits — suggesting earlier searches may have missed them by focusing only on the most obvious structures.

Google DeepMind’s Research Lets an LLM Rewrite Its Own Game Theory Algorithms — And It Outperformed the Experts


MICrONS Explorer: A virtual observatory of the cortex

The Machine Intelligence from Cortical Networks (MICrONS) program seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain. It is an ambitious program to map the function and connectivity of cortical circuits using high-throughput imaging technologies, with the goal of revealing the computational principles that underlie cortical function and thereby advancing the next generation of machine learning algorithms.

This website serves as a data portal for the release of connectivity and functional imaging data collected by a consortium of laboratories led by groups at the Allen Institute for Brain Science, Princeton University, and Baylor College of Medicine, with support from a broad array of teams, coordinated and funded by the IARPA MICrONS program. These data include large-scale, electron-microscopy-based reconstructions of cortical circuitry from mouse visual cortex, with corresponding functional imaging data from those same neurons.

Have a Scientific Request? Check out the Virtual Observatory of the Cortex (VORTEX) project, a BRAIN Initiative funded program to bring the MICrONS dataset to the research community. Access proofreading resources to answer your scientific questions.

New memristor design uses built-in oxygen gradient to bring stability to reinforcement learning

In a recent study published in Nature Communications, researchers created a memristor that uses a built-in oxygen gradient to produce slow, stable conductance changes, enabling a reinforcement learning (RL) algorithm to learn faster and more stably than conventional approaches.

Reinforcement learning stands as one of the most promising ways to achieve continual learning in AI. The idea is to replicate how biological systems acquire and adapt knowledge slowly over time. The brain achieves this via ion gradients that regulate slow, directional signaling across cell membranes. Replicating this in hardware is a key goal of neuromorphic computing.

With their ability to mimic synaptic behavior, memristors have long been considered strong candidates for this. However, most existing devices suffer from unpredictable, abrupt conductance changes, making sustained and stable learning difficult.
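
The contribution can be caricatured with a toy agent whose synaptic weights move only in small, uniform increments, the way a slow, stable conductance would, rather than in arbitrary analog jumps. The environment, step size, and rewards below are invented for illustration; this is not the paper’s algorithm, just the flavor of fixed-step learning such a device enables:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 2
q = np.zeros((n_states, n_actions))   # the "conductances" acting as Q-values
step = 0.01                           # one stable conductance increment
gamma, eps = 0.9, 0.1

def env_step(s, a):
    """Hypothetical chain environment: action 1 moves right; reward at the far end."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == n_states - 1 else 0.0

s = 0
for _ in range(5_000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q[s]))
    s2, r = env_step(s, a)
    td_error = r + gamma * q[s2].max() - q[s, a]
    # Memristor-like update: one fixed step in the direction of the error,
    # rather than an arbitrary-precision analog jump.
    q[s, a] += step * np.sign(td_error)
    s = 0 if r > 0 else s2            # restart the episode after a reward

print(q.round(2))   # the move-right action ends up preferred in visited states
```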

World’s largest quantum circuit simulation for quantum chemistry achieved on 1,024 GPUs

A joint research team from the Center for Quantum Information and Quantum Biology (QIQB) at The University of Osaka and Fixstars Corporation has demonstrated one of the world’s largest classical simulations of iterative quantum phase estimation (IQPE) circuits for quantum chemistry, running on up to 1,024 GPUs and surpassing the previous 40-qubit limit. The result expands the scale of molecular systems available for developing and validating quantum algorithms for future fault-tolerant quantum computers, supporting progress toward industrial applications in drug discovery and materials development.

The paper was presented at NVIDIA GTC 2026, held in San Jose, California, March 16–19, 2026.

Overcoming unresolved challenges in drug discovery and developing new materials to address climate change will require advanced quantum chemical calculations beyond the reach of current technology. Against this backdrop, fault-tolerant quantum computers (FTQC) are widely anticipated as a key enabling technology, making it increasingly important to develop and validate, ahead of their deployment, the quantum algorithms that will eventually run on such systems.
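
IQPE itself is simple to state: a single ancilla qubit reads out the eigenphase of a unitary one binary digit at a time, least significant bit first, with a classical feedback rotation conditioned on the bits already measured. Here is a minimal, noiseless single-qubit illustration; real quantum-chemistry runs replace the toy unitary with the time evolution of a molecular Hamiltonian, and the phase below is an arbitrary example:

```python
import numpy as np

phi = 0.8125                 # true eigenphase: 0.1101 in binary (example value)
m = 4                        # bits of precision to extract

bits = []                    # measured bits, least significant first
omega = 0.0                  # classical feedback phase from earlier bits
for k in range(m, 0, -1):
    # Ancilla after H, controlled-U^(2^(k-1)) phase kickback, the feedback
    # rotation, and a final H gives P(measure 1) = sin^2(pi * theta).
    theta = 2 ** (k - 1) * phi - omega
    bit = int(round(np.sin(np.pi * theta) ** 2))   # noiseless readout
    bits.append(bit)
    omega = omega / 2 + bit / 4                    # fold the new bit into the feedback

estimate = sum(b / 2 ** (i + 1) for i, b in enumerate(reversed(bits)))
print(estimate)              # -> 0.8125, recovered bit by bit
```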

Social media feeds: Algorithm redesign could break echo chambers and reduce online polarization

Scroll through social media long enough and a pattern emerges. Pause on a post questioning climate change or taking a hard line on a political issue, and the platform is quick to respond—serving up more of the same viewpoints, delivered with growing confidence and certainty.

That feedback loop is the architecture of an echo chamber: a space where familiar ideas are amplified, dissenting voices fade, and beliefs can harden rather than evolve.

But new research from the University of Rochester has found that echo chambers might not be a fact of online life. Published in IEEE Transactions on Affective Computing, the study argues that they are partly a design choice—one that could be softened with a surprisingly modest change: introducing more randomness into what people see.
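
The mechanism is easy to sketch: a recommender that usually serves the post its model scores highest for the user but, with some probability, injects a uniformly random one instead. The epsilon value and scores below are invented, and the paper’s actual re-ranking scheme may well differ:

```python
import random

def next_post(posts, affinity, epsilon=0.2, rng=random.Random(0)):
    """posts: list of post ids; affinity: id -> predicted engagement score."""
    if rng.random() < epsilon:
        return rng.choice(posts)           # exploration: step outside the bubble
    return max(posts, key=affinity.get)    # exploitation: echo what the user likes

posts = ["a", "b", "c", "d"]
affinity = {"a": 0.9, "b": 0.4, "c": 0.3, "d": 0.1}

feed = [next_post(posts, affinity) for _ in range(10)]
print(feed)   # mostly "a", with occasional random picks breaking the loop
```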
