
A low-cost protocol enables preparation of magic states and fault-tolerant universal quantum computation

Quantum computers, systems that perform computations by leveraging quantum mechanical effects, could outperform classical computers in some optimization and information processing tasks. Because these systems are highly susceptible to noise, however, they need to incorporate strategies that minimize the errors they produce.

One proposed solution for enabling fault-tolerant quantum computing across a wide range of operations is known as magic state distillation. This approach consists of preparing special quantum states (i.e., magic states) that can then be used to perform a universal set of operations. This allows the construction of a universal quantum computer—a device that can reliably perform all operations necessary for implementing any quantum algorithm.
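For reference, the canonical example of such a state is the T-type magic state; the sketch below gives its standard textbook definition and the gate it injects. This is background context, not the new low-cost protocol itself:

```latex
% Standard T-type magic state (textbook definition, not this paper's protocol):
\[
  |T\rangle \;=\; \frac{1}{\sqrt{2}}\left( |0\rangle + e^{i\pi/4}|1\rangle \right)
\]
% Consuming one copy of |T> via gate teleportation implements the non-Clifford
% T gate; together with Clifford operations this gives a universal gate set:
\[
  T \;=\; \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}
\]
```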

Yet while magic state distillation techniques can achieve good results, they typically consume large numbers of error-protected qubits and need to perform many rounds of error correction. This has so far limited their potential for real-world applications.

Clever algorithm enables real-time noise mitigation in quantum devices

Quantum researchers have deployed a new algorithm to manage noise in qubits in real time. The method can be applied to a wide range of different qubits, even in large numbers.

Noise is the “ghost in the machine” in the effort to make quantum computers work. Quantum devices rely on qubits—the central component of any quantum processor—and these are extremely sensitive to even small disturbances in their environment.

A collaboration between researchers from the Niels Bohr Institute, MIT, NTNU, and Leiden University has now resulted in a method to effectively manage the noise. The result has been published in PRX Quantum.

New retina-inspired photodiodes could advance machine vision

Over the past decades, computer scientists have developed increasingly sophisticated sensors and machine learning algorithms that allow computer systems to process and interpret images and videos. This tech-powered capability, also referred to as machine vision, is proving to be highly advantageous for the manufacturing and production of food products, drinks, electronics, and various other goods.

Machine vision could enable the automation of various tedious steps in industry and manufacturing, such as the detection of defects, the inspection of electronics, automotive parts, or other items, the verification of labels or expiration dates, and the sorting of products into different categories.

While the sensors underpinning many previously introduced machine vision systems are highly sophisticated, they typically do not process visual information with as much detail as the human retina (i.e., the light-sensitive tissue in the eye that processes visual signals).

Breaking the code in network theory: Bimodularity reveals direction of influence in complex systems

As summer winds down, many of us in continental Europe are heading back north. The long return journeys from the beaches of southern France, Spain, and Italy once again clog alpine tunnels and Mediterranean coastal routes during the infamous Black Saturday bottlenecks. This annual migration, like many systems in our world, forms a network—not just of connections, but of communities shaped by shared patterns of origin and destination.

This is where network theory—and in particular, community detection—comes in. For decades, researchers have developed powerful tools to uncover communities in networks: clusters of tightly interconnected nodes. But these tools work best for undirected networks, where connections are mutual. Graphically, the node maps may look familiar.

These clusters can mean that a group of people are all friends on Facebook, all follow the same sports accounts on X, or all live in the same city. Using a standard modularity algorithm, we can then find connections between different communities and begin to draw useful conclusions. Perhaps users in the fly-fishing community also show up as followers of nonalcoholic beer enthusiasts in Geneva. This type of information extraction, impossible without community analysis, is a layer of meaning that can be leveraged to sell beer or even nefariously influence elections.
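As a point of reference for that undirected, modularity-based baseline, here is a minimal sketch using the open-source networkx library; the toy friendship graph is hypothetical:

```python
# Minimal sketch: standard (undirected) modularity-based community detection
# with networkx. The toy graph below is made up for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build a small undirected "friendship" graph with two obvious clusters.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),   # cluster 1
    ("dave", "erin"), ("erin", "frank"), ("frank", "dave"),   # cluster 2
    ("carol", "dave"),                                        # weak bridge
])

# Greedy modularity maximization partitions the nodes into communities.
communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")

# Note: every edge here is treated as mutual. Directed influence (who drives
# whom), which the bimodularity approach targets, is not captured by this
# undirected baseline.
```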

New AI attack hides data-theft prompts in downscaled images

Researchers have developed a novel attack that steals user data by injecting malicious prompts into images that AI systems process before delivering them to a large language model.

The method relies on full-resolution images that carry instructions that are invisible to the human eye but become apparent when the image is downscaled by resampling algorithms.

Developed by Trail of Bits researchers Kikimora Morozova and Suha Sabi Hussain, the attack builds upon a theory presented in a 2020 USENIX paper by a German university (TU Braunschweig) exploring the possibility of an image-scaling attack in machine learning.
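The core trick can be illustrated in a few lines: a payload is hidden in exactly the pixels that survive downscaling, so it is sparse at full resolution but dominates the downscaled result. The sketch below is a toy example using plain NumPy and a naive stride-based downscaler, not the Trail of Bits attack itself, which targets the specific resampling filters (bicubic, bilinear, etc.) of the victim pipeline:

```python
# Toy illustration of the image-scaling idea: hide a payload in exactly the
# pixels that a stride-based downscaler will keep.
import numpy as np

SCALE = 8                                                     # 8x downscale
hidden = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)  # "payload"

# Start from an innocuous-looking high-resolution image (here: flat gray).
big = np.full((32 * SCALE, 32 * SCALE), 200, dtype=np.uint8)

# Overwrite only the pixels the downscaler will sample (1 in 64 pixels).
big[::SCALE, ::SCALE] = hidden

# At full resolution the modified pixels are sparse; after downscaling,
# the payload is reconstructed exactly.
small = big[::SCALE, ::SCALE]
print(np.array_equal(small, hidden))   # True
```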

For the Singularity to Truly Arrive, We’d Need a Machine That Eats the Sun

However, if you’re rich and you don’t like the idea of a limit on computing, you can turn to futurism, longtermism, or “AI optimism,” depending on your favorite flavor. People in these camps believe in developing AI as fast as possible so we can (they claim) keep guardrails in place that will prevent AI from going rogue or becoming evil. (Today, people can’t seem to—or don’t want to—control whether or not their chatbots become racist, are “sensual” with children, or induce psychosis in the general population, but sure.)

The goal of these AI boosters is known as artificial general intelligence, or AGI. They theorize, or even hope for, an AI so powerful that it thinks like… well… a human mind whose ability is enhanced by a billion computers. If someone ever does develop an AGI that surpasses human intelligence, that moment is known as the AI singularity. (There are other, unrelated singularities in physics.) AI optimists want to accelerate the singularity and usher in this “godlike” AGI.

One of the key facts of computer logic is that, if you can slow the processes down enough and look at them in enough detail, you can track and predict every single thing that a program will do. Algorithms (and not the opaque AI kind) guide everything within a computer. Over the decades, experts have specified the exact ways information can be sent, one bit—one minuscule electrical zap—at a time, through a central processing unit (CPU).

Researchers Demonstrate QuantumShield-BC Blockchain Framework

Researchers have developed QuantumShield-BC, a blockchain framework designed to resist attacks from quantum computers. The framework integrates post-quantum cryptography (PQC) using algorithms such as Dilithium and SPHINCS+, quantum key distribution (QKD), and quantum Byzantine fault tolerance (Q-BFT), which leverages quantum random number generation (QRNG) for unbiased leader selection. Tested on a controlled testbed of up to 100 nodes, the framework demonstrated resistance to simulated quantum attacks and achieved fairness through QRNG-based consensus. An ablation study confirmed the contribution of each quantum component to overall security, although the QKD implementation was simulated and scalability to larger networks requires further investigation.
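Of these components, QRNG-driven leader selection is the simplest to sketch. The snippet below shows how a shared random beacon can be mapped deterministically and essentially uniformly to one node per round; the names and structure are illustrative assumptions, not the QuantumShield-BC implementation, and os.urandom stands in for quantum randomness:

```python
# Hedged sketch: leader selection for a BFT round from a shared random beacon.
# In QuantumShield-BC the randomness would come from a QRNG; here os.urandom
# is a stand-in, and all identifiers are illustrative.
import hashlib
import os

def select_leader(node_ids: list[str], round_number: int, beacon: bytes) -> str:
    """Deterministically map a shared random beacon to one node.

    Every honest node that sees the same beacon computes the same leader,
    and a uniformly random beacon makes each node essentially equally
    likely to be chosen.
    """
    digest = hashlib.sha256(beacon + round_number.to_bytes(8, "big")).digest()
    index = int.from_bytes(digest, "big") % len(node_ids)
    return sorted(node_ids)[index]

# Example round with a simulated (non-quantum) random beacon.
nodes = [f"node-{i}" for i in range(100)]
beacon = os.urandom(32)              # stand-in for QRNG output
print(select_leader(nodes, round_number=1, beacon=beacon))
```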

Thermodynamic computing system for AI applications

Recent breakthroughs in artificial intelligence (AI) algorithms have highlighted the need for alternative computing hardware in order to truly unlock the potential for AI. Physics-based hardware, such as thermodynamic computing, has the potential to provide a fast, low-power means to accelerate AI primitives, especially generative AI and probabilistic AI. In this work, we present a small-scale thermodynamic computer, which we call the stochastic processing unit. This device is composed of RLC circuits, as unit cells, on a printed circuit board, with 8 unit cells that are all-to-all coupled via switched capacitances. It can be used for either sampling or linear algebra primitives, and we demonstrate Gaussian sampling and matrix inversion on our hardware. The latter represents a thermodynamic linear algebra experiment. We envision that this hardware, when scaled up in size, will have significant impact on accelerating various probabilistic AI applications.

#Repost Nature Publishing


Current digital hardware struggles with high computational demands in applications such as probabilistic AI. Here, the authors present a small-scale thermodynamic computer composed of eight RLC circuits, demonstrating Gaussian sampling and matrix inversion and suggesting potential speed and energy efficiency advantages over digital GPUs.
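The linear-algebra side of this can be sketched in software. For an overdamped Langevin (Ornstein-Uhlenbeck) process with drift -Ax and unit-temperature noise, the stationary distribution is a zero-mean Gaussian with covariance A^-1, so letting the system equilibrate and measuring sample covariances performs both Gaussian sampling and matrix inversion. Below is a minimal NumPy simulation of that principle, a software analogue rather than the authors' analog hardware:

```python
# Software analogue of thermodynamic matrix inversion: simulate the SDE
#   dx = -A x dt + sqrt(2) dW
# and estimate the stationary covariance, which converges to A^{-1} for
# symmetric positive definite A. Illustrates the principle, not the SPU.
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive-definite matrix (8x8, like the 8 unit cells).
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)

dt, burn_in, n_steps = 1e-3, 20_000, 200_000
x = np.zeros(8)
samples = []

for step in range(n_steps):
    # Euler-Maruyama update of the overdamped Langevin dynamics.
    x = x - A @ x * dt + np.sqrt(2 * dt) * rng.standard_normal(8)
    if step >= burn_in:
        samples.append(x.copy())

cov = np.cov(np.array(samples).T)        # empirical stationary covariance
target = np.linalg.inv(A)
err = np.linalg.norm(cov - target) / np.linalg.norm(target)
print(f"relative error vs. A^-1: {err:.2%}")
```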

China data link could offer faster coordination during hypersonic attacks

Chinese researchers explain that traditional tactical data links rely on round-trip time (RTT) for synchronization, which works for low-speed aircraft. Systems like NATO’s Link-16 achieve roughly 100-nanosecond accuracy under these conditions.

However, in hypersonic cooperative strike systems operating above Mach 5, the rapid relative motion between widely dispersed platforms creates asymmetric transmission paths, severely reducing the precision of conventional RTT algorithms. This highlights the need for new communication technologies capable of maintaining ultra-precise timing at extreme speeds.
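A back-of-the-envelope calculation shows why. Two-way time transfer estimates clock offset by assuming the forward and return propagation delays are equal; if the platforms close on each other between the two legs, the estimate is biased by roughly half the path-length change divided by c. The numbers below are purely illustrative, not figures from the Chinese study:

```python
# Why RTT-based sync degrades at hypersonic speeds: two-way time transfer
# assumes symmetric forward and return delays. Relative motion during the
# exchange makes the paths asymmetric. Illustrative numbers only.
C = 299_792_458.0          # speed of light, m/s
MACH5 = 5 * 343.0          # ~1715 m/s closing speed (sea-level Mach 5)

rtt = 2 * 300_000.0 / C    # round trip over a 300 km baseline (~2 ms)
path_change = MACH5 * rtt  # baseline shrinks a few meters during the exchange
sync_error = (path_change / 2) / C

print(f"RTT: {rtt*1e3:.3f} ms, motion-induced offset error: {sync_error*1e9:.1f} ns")
# Roughly 6 ns of systematic bias from motion alone in this toy case. At
# airliner speeds the same effect is well under a nanosecond, but at
# hypersonic closing speeds, with longer baselines and maneuvering platforms,
# the bias grows and varies rapidly, so it can no longer be ignored when
# nanosecond-level coordination is the goal.
```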

What came before the Big Bang? Supercomputers may hold the answer

Scientists are rethinking the universe’s deepest mysteries using numerical relativity, complex computer simulations of Einstein’s equations in extreme conditions. This method could help explore what happened before the Big Bang, test theories of cosmic inflation, investigate multiverse collisions, and even model cyclic universes that endlessly bounce through creation and destruction.
