Surgery for quantum bits: Bit-flip errors corrected during superconducting qubit operations

Quantum computers hold great promise for exciting applications in the future, but for now they keep presenting physicists and engineers with a series of challenges and conundrums. One of them relates to decoherence and the errors that result from it: bit flips and phase flips. Such errors mean that the logical unit of a quantum computer, the qubit, can suddenly and unpredictably change its state from “0” to “1,” or that the relative phase of a superposition state can jump from positive to negative.

These errors can be held at bay by building a logical qubit out of many physical qubits and constantly applying error correction protocols. This approach takes care of storing the quantum information relatively safely over time. However, at some point it becomes necessary to exit storage mode and do something useful with the qubit—like applying a quantum gate, which is the building block of quantum algorithms.
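
To make the idea of redundant encoding plus error correction concrete, here is a minimal sketch of the simplest such scheme, the three-qubit bit-flip repetition code, simulated classically in Python. It only illustrates the general principle of storing one logical bit redundantly and using parity-check "syndromes" to locate and undo a flip; the experiment described below uses far larger surface-code logical qubits and also handles phase flips.

```python
import random

# Minimal classical simulation of the three-qubit bit-flip repetition code.
# One logical bit is stored in three physical bits; parity checks ("syndromes")
# reveal which single bit flipped without reading out the logical value itself.

def encode(logical_bit):
    """Encode a logical 0 or 1 into three physical bits."""
    return [logical_bit] * 3

def apply_bit_flip_noise(bits, p=0.1):
    """Flip each physical bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

def measure_syndrome(bits):
    """Parities of neighboring pairs; together they locate a single flip."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits, syndrome):
    """Undo the single bit flip that the syndrome points to (if any)."""
    flip_index = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome)
    corrected = list(bits)
    if flip_index is not None:
        corrected[flip_index] ^= 1
    return corrected

def decode(bits):
    """Majority vote recovers the logical bit."""
    return int(sum(bits) >= 2)

encoded = encode(1)
noisy = apply_bit_flip_noise(encoded, p=0.2)
fixed = correct(noisy, measure_syndrome(noisy))
print(noisy, "->", fixed, "-> logical", decode(fixed))
```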

The research group led by D-PHYS Professor Andreas Wallraff, in collaboration with the Paul Scherrer Institute (PSI) and the theory team of Professor Markus Müller at RWTH Aachen University and Forschungszentrum Jülich, has now demonstrated a technique that makes it possible to perform a quantum operation between superconducting logical qubits while correcting for potential errors occurring during the operation. The researchers have just published their results in Nature Physics.

Mathematical Innovation Advances Complex Simulations for Science’s Toughest Problems

Berkeley researchers have developed a proven mathematical framework for compressing large reversible Markov chains, the probabilistic models used to describe how systems change over time, such as proteins folding for drug discovery, molecular reactions for materials science, or AI algorithms making decisions. The compression preserves the chains' output probabilities (the likelihoods of events) and their spectral properties (the key dynamical patterns that govern a system's long-term behavior).

While describing the dynamics of ubiquitous physical systems, Markov chains also allow for rich theoretical and computational investigation. By exploiting the special mathematical structure behind these dynamics, the researchers’ new theory delivers models that are quicker to compute, equally accurate, and easier to interpret, enabling scientists to efficiently explore and understand complex systems. This advance sets a new benchmark for efficient simulation, opening the door to scientific explorations once thought computationally out of reach.
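
As a rough illustration of the objects involved (not the Berkeley framework itself), the sketch below builds a small reversible Markov chain in Python, checks detailed balance, reads off the spectral quantities mentioned above, and performs a naive coarse-graining that preserves the stationary probabilities. The actual framework's guarantees on spectral properties go well beyond this toy aggregation.

```python
import numpy as np

# A small reversible Markov chain: "reversible" means detailed balance holds,
# i.e. pi[i] * P[i, j] == pi[j] * P[j, i] for the stationary distribution pi.
P = np.array([
    [0.80, 0.15, 0.05],
    [0.30, 0.60, 0.10],
    [0.20, 0.20, 0.60],
])

# Stationary distribution: the left eigenvector of P with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Detailed-balance (reversibility) check.
flux = pi[:, None] * P
print("reversible:", np.allclose(flux, flux.T))

# Spectral properties: the eigenvalues of P set the chain's relaxation
# time scales; the gap below 1 controls how fast it mixes.
spectrum = np.sort(np.real(np.linalg.eigvals(P)))[::-1]
print("eigenvalues:", np.round(spectrum, 3), "spectral gap:", round(1 - abs(spectrum[1]), 3))

# Naive compression: merge states 1 and 2 into one block. The aggregated chain
# Q[A, B] = sum_{i in A} pi[i] * P(i -> B) / pi(A) exactly preserves the
# stationary probability of each block (though not, in general, the spectrum).
blocks = [[0], [1, 2]]
pi_blocks = np.array([pi[b].sum() for b in blocks])
Q = np.array([[pi[a] @ P[np.ix_(a, b)].sum(axis=1) / pi[a].sum() for b in blocks]
              for a in blocks])
print("coarse-grained chain:\n", np.round(Q, 3))
print("block probabilities preserved:", np.allclose(pi_blocks @ Q, pi_blocks))
```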

AI streamlines deluge of data from particle collisions

Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have developed a novel artificial intelligence (AI)-based method to dramatically tame the flood of data generated by particle detectors at modern accelerators. The new custom-built algorithm uses a neural network to intelligently compress collision data, adapting automatically to the density or “sparsity” of the signals it receives.

As described in a paper just published in the journal Patterns, the scientists used simulated data from sPHENIX, a particle detector at Brookhaven Lab’s Relativistic Heavy Ion Collider (RHIC), to demonstrate the algorithm’s potential to handle trillions of bits of detector data per second while preserving the fine details physicists need to explore the building blocks of matter.

The algorithm will help physicists gear up for a new era of streaming data acquisition, where every collision is recorded without pre-selecting which ones might be of interest. This will vastly expand the potential for more accurate measurements and unanticipated discoveries.
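
As a rough sketch of what learned compression of sparse detector data looks like in practice (a generic toy, not the algorithm published in Patterns; the network shape, loss weighting, and synthetic data are illustrative assumptions), a tiny PyTorch autoencoder might look like this:

```python
import torch
from torch import nn

# Toy convolutional autoencoder: squeeze a sparse 32x32 "detector readout"
# into a short latent vector (the compressed record) and reconstruct it.
class TinyAutoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),   # 1x32x32 -> 8x16x16
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),  # -> 16x8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (16, 8, 8)),
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),  # -> 8x16x16
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, kernel_size=4, stride=2, padding=1),   # -> 1x32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Synthetic sparse frames: ~2% of pixels carry a hit, the rest are empty.
torch.manual_seed(0)
frames = torch.zeros(64, 1, 32, 32)
mask = torch.rand_like(frames) < 0.02
frames[mask] = torch.rand(int(mask.sum()))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    recon = model(frames)
    # Weight occupied pixels more heavily so rare hits are not drowned out by
    # the empty background -- one crude way of adapting the loss to sparsity.
    weights = 1.0 + 9.0 * (frames > 0).float()
    loss = (weights * (recon - frames) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"weighted reconstruction MSE after training: {loss.item():.5f}")
print(f"compression per frame: {32 * 32} pixel values -> {model.encoder[-1].out_features} latent values")
```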

They Are Waiting for Us To Die: Aestivation Hypothesis

What if advanced civilizations aren’t absent—they’re just waiting? What if they looked at our universe, full of burning stars and abundant energy, and decided it’s too hot, too expensive, too wasteful to be awake? What if everyone else has gone into hibernation, sleeping through the entire age of stars, waiting trillions of years for the universe to cool? The Aestivation Hypothesis offers a stunning solution to the Fermi Paradox: intelligent civilizations aren’t missing—they’re deliberately dormant, conserving energy for a colder, more efficient future. We might be the only ones awake in a sleeping cosmos.

Over the next 80 minutes, we’ll explore one of the most patient answers to why we haven’t found aliens. From thermodynamic efficiency to cosmic hibernation, from automated watchers keeping vigil to the choice between experiencing now versus waiting for optimal conditions trillions of years ahead, we’ll examine why the rational strategy might be to sleep through our entire era. This changes everything about the Fermi Paradox, the Drake Equation, and what it means to be awake during the universe’s most “expensive” age.
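
The thermodynamic intuition can be made concrete with Landauer's principle: erasing one bit of information costs at least k_B T ln 2 of energy, so the same energy budget buys vastly more computation in a colder universe. A quick back-of-the-envelope check in Python (the far-future temperature here is purely an illustrative assumption):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost(temperature_kelvin):
    """Minimum energy in joules to erase one bit at a given temperature."""
    return k_B * temperature_kelvin * math.log(2)

t_now = 2.725      # today's cosmic microwave background temperature, K
t_future = 1e-10   # assumed far-future background temperature, K (illustrative only)

cost_now = landauer_cost(t_now)
cost_future = landauer_cost(t_future)

print(f"cost per bit erased today:      {cost_now:.2e} J")
print(f"cost per bit erased far future: {cost_future:.2e} J")
print(f"computation gained per joule by waiting: ~{cost_now / cost_future:.1e}x")
```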

CHAPTERS:

0:00 — Introduction: The Patience of Stars.

4:30 — The Fermi Paradox Once More.

8:20 — Introducing the Aestivation Hypothesis.

Framework sets new benchmarks for 3D atom maps in amorphous materials

Researchers at the California NanoSystems Institute at UCLA published a step-by-step framework for determining the three-dimensional positions and elemental identities of atoms in amorphous materials. These solids, such as glass, lack the repeating atomic patterns seen in a crystal. The team analyzed realistically simulated electron-microscope data and tested how each step affected accuracy.

The team used algorithms to analyze rigorously simulated imaging data of nanoparticles—so small they're measured in billionths of a meter. For amorphous silica, the primary component of glass, they demonstrated 100% accuracy in mapping the three-dimensional positions of the constituent silicon and oxygen atoms, with a precision of about seven trillionths of a meter (seven picometers) under favorable imaging conditions.
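
For a sense of how such accuracy and precision figures are typically scored against a simulated ground truth (a generic sketch, not the UCLA team's evaluation code; the cutoff, noise level, and atom counts are made-up), one can match each recovered atom to the nearest true atom within a cutoff and report the matched fraction and the RMS position error:

```python
import numpy as np

def score_reconstruction(true_pos, recovered_pos, cutoff=1.0e-10):
    """Match recovered atoms to true atoms within `cutoff` meters.

    Returns (accuracy, precision): the fraction of true atoms matched and the
    RMS position error of the matches.
    """
    errors, used = [], set()
    for r in recovered_pos:
        dists = np.linalg.norm(true_pos - r, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < cutoff and j not in used:
            used.add(j)
            errors.append(dists[j])
    accuracy = len(errors) / len(true_pos)
    precision = float(np.sqrt(np.mean(np.square(errors)))) if errors else float("nan")
    return accuracy, precision

# Toy ground truth: 100 atoms in a 2 nm cube, "recovered" with ~4 pm of noise
# per coordinate (about 7 pm RMS displacement in 3D).
rng = np.random.default_rng(0)
true_pos = rng.uniform(0.0, 2e-9, size=(100, 3))
recovered_pos = true_pos + rng.normal(0.0, 4e-12, size=true_pos.shape)

accuracy, precision = score_reconstruction(true_pos, recovered_pos)
print(f"accuracy:  {accuracy:.0%}")
print(f"precision: {precision * 1e12:.1f} pm (RMS)")
```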

While 3D atomic structure determination has a history of more than a century, its application has been limited to crystal structures. Such techniques depend on averaging a pattern that is repeated trillions of times.

Deep-learning algorithms enhance mutation detection in cancer and RNA sequencing

Researchers from the Faculty of Engineering at The University of Hong Kong (HKU) have developed two innovative deep-learning algorithms, ClairS-TO and Clair3-RNA, that significantly advance genetic mutation detection in cancer diagnostics and RNA-based genomic studies.

The research team, led by Professor Ruibang Luo from the School of Computing and Data Science, Faculty of Engineering, designed the two tools for genetic analysis in both clinical and research settings.

Leveraging long-read sequencing technologies, these tools significantly improve the accuracy of detecting genetic mutations in complex samples, opening new horizons for precision medicine and genomic discovery. Both research articles have been published in Nature Communications.
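
To give a flavor of the general approach shared by deep-learning variant callers (a generic sketch only; it does not reflect the internals of ClairS-TO or Clair3-RNA, and the reads and window size are made-up), the reads overlapping a candidate site can be summarized into a numeric "pileup" matrix that a neural network then classifies as a true mutation or a sequencing artifact:

```python
import numpy as np

BASES = "ACGT"

def pileup_features(reads, center, window=7):
    """Normalized A/C/G/T counts in a window around a candidate position.

    reads: list of (start_position, sequence) tuples.
    Returns a (window, 4) matrix -- the kind of feature tensor a caller's
    neural network consumes to decide variant vs. artifact.
    """
    half = window // 2
    counts = np.zeros((window, 4))
    for start, seq in reads:
        for offset in range(window):
            idx = (center - half + offset) - start
            if 0 <= idx < len(seq) and seq[idx] in BASES:
                counts[offset, BASES.index(seq[idx])] += 1
    totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)

# Toy reads around position 103: two support the reference base "A",
# two support a candidate "A -> G" substitution.
reads = [(100, "CTTAGGC"), (101, "TTGGGCA"), (99, "ACTTAGGCT"), (100, "CTTGGGC")]
print(np.round(pileup_features(reads, center=103), 2))
```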

Facebook Admits the Social Network Isn’t Social

Facebook admitted something that should have been front-page news.

In an FTC antitrust filing, Meta revealed that only 7% of time on Instagram and 17% on Facebook is spent actually socializing with friends and family.

The rest?

Algorithmically selected content. Short-form video. Engagement optimized by AI.

This wasn’t a philosophical confession. It was a legal one. But it quietly confirms what many of us have felt for years:

What we still call “social networks” are not social.

They are attention machines.

AI Now Has a Primitive Form of Metacognition

In this video I break down recent research exploring metacognition in large language model ensembles and the growing shift toward System 1 / System 2 style AI architectures.
Some researchers are no longer focusing on making single models bigger. Instead, they are building systems where multiple models interact, critique each other, and dynamically switch between fast heuristic reasoning and slower deliberate reasoning. In other words: AI systems that monitor and regulate their own thinking.
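
A bare-bones sketch of that control loop might look like the following (every function here is a hypothetical placeholder standing in for a real model call, not any particular library's API):

```python
def fast_answer(prompt):
    """System 1: one cheap forward pass (placeholder, not a real API)."""
    return {"answer": "quick guess for: " + prompt, "confidence": 0.55}

def deliberate(prompt, draft):
    """System 2: a slower loop that revises the draft step by step (placeholder)."""
    return {"answer": "carefully reasoned answer for: " + prompt, "confidence": 0.90}

def critique(candidate):
    """A second model scores the first one's output -- self-monitoring (placeholder)."""
    return candidate["confidence"]

def metacognitive_answer(prompt, threshold=0.7):
    """Escalate to slow, deliberate reasoning only when the fast answer looks unreliable."""
    draft = fast_answer(prompt)
    if critique(draft) >= threshold:
        return draft["answer"]                    # System 1 was good enough
    return deliberate(prompt, draft)["answer"]    # escalate to System 2

print(metacognitive_answer("What is 17 * 24?"))
```

The point is the routing logic: a cheap System 1 pass is accepted only when a separate critic judges it reliable; otherwise the system escalates to a slower, deliberate System 2 loop.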

Artificial metacognition: Giving an AI the ability to ‘think’ about its ‘thinking’
https://theconversation.com/artificia…

From System 1 to System 2: A Survey of Reasoning Large Language Models
https://arxiv.org/abs/2502.17419

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
https://dl.acm.org/doi/10.1145/374625…

Emotions? Towards Quantifying Metacognition and Generalizing the Teacher-Student Model Using Ensembles of LLMs
https://arxiv.org/abs/2502.17419

Metacognition
https://research.sethi.org/metacognit…

Robot passes the mirror test by inner speech
https://www.sciencedirect.com/science…

METIS: Metacognitive Evaluation for Intelligent Systems
https://research.sethi.org/metacognit…

Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory?
https://academic.oup.com/book/6923/ch…
