The fundamental quantum postulates, namely the existence of a wave function, its propagation under the Schrödinger equation (theorem 3.2), and wave-function collapse at measurement (lemma 3.3), are derived from the classical theorem 2.4. Furthermore, analytic computation of the classical action is simpler than solving the Feynman path integral and potentially easier than solving the Schrödinger equation directly. In addition, theorem 3.2 is a multi-particle result.
The J classical multipaths in theorem 3.2 and lemma 3.3 are strictly determined by the initial and final conditions. In the double-slit experiment, the probabilistic quantum observation results from the non-Lipschitz constraint force in the slit. For the harmonic oscillator, the Coulomb wave, the particle in a box, or the spinning particle, the initial probabilistic density distribution is classically propagated forward in time. In the EPR experiment [64,65], theorem 2.4 determines angular momenta χo↑, χo↓ that are constant over time, and lemma 3.3 in turn allows a classical interpretation: the decision about which spin correlation is sensed behind the filters is already made when the particles separate.
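The role of the J classical multipaths can be written schematically. In the semiclassical (stationary-phase) limit, the Feynman path integral for the propagator reduces to a sum over the classical paths; this is a standard sketch, and the amplitudes A_j and the precise statements of theorems 2.4 and 3.2 are not reproduced here:

```latex
K(x_b, t_b; x_a, t_a) \;=\; \int \mathcal{D}[x(t)]\, e^{\,i S[x]/\hbar}
\;\approx\; \sum_{j=1}^{J} A_j\, e^{\,i S_j/\hbar},
\qquad
S_j \;=\; \int_{t_a}^{t_b} L\!\left(x_j, \dot{x}_j, t\right) dt,
```

where \(S_j\) is the classical action along the \(j\)-th classical path connecting the fixed initial and final conditions. Evaluating the \(J\) actions \(S_j\) is what replaces the full functional integral in this picture.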
For a fixed number of configurations, representing quantum states becomes less accurate as their non-stabilizerness increases. This demonstrates a clear limit to how well restricted Boltzmann machines can compress and represent highly entangled systems. Calculations using ground states of medium-mass atomic nuclei reveal non-stabilizerness as a key property governing neural network performance.
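A restricted Boltzmann machine represents a quantum state by assigning each spin configuration an amplitude computed from a fixed set of network parameters; the hidden units can be summed out analytically, which is what makes the ansatz cheap for a fixed number of configurations. A minimal sketch with real, untrained parameters (actual studies typically use complex parameters and trained weights; the sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 8  # hypothetical sizes; more hidden units = more capacity

# Random (untrained) RBM parameters: visible biases a, hidden biases b, weights W.
a = rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

def rbm_amplitude(s):
    """Unnormalized amplitude psi(s) for a spin configuration s in {-1,+1}^n.

    psi(s) = exp(a . s) * prod_j 2*cosh(b_j + s . W[:, j]),
    i.e. the hidden units are summed out analytically.
    """
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Enumerate all 2^n configurations to build and normalize the state vector.
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(n_visible)]
                    for k in range(2 ** n_visible)])
psi = np.array([rbm_amplitude(s) for s in configs])
psi /= np.linalg.norm(psi)
print(np.sum(psi ** 2))  # ~1.0 after normalization
```

The compression question in the text is then: how large must `n_hidden` grow, relative to the system size, before such amplitudes reproduce a target state to a given accuracy, and how does that requirement scale with the state's non-stabilizerness.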
Quantum computers, devices that process information leveraging quantum mechanical effects, could tackle some tasks that are difficult or impossible to solve using classical computers. These systems represent data as qubits, units of information that can exist in multiple states at once, unlike the bits used by classical computers that represent data using binary values (“0” or “1”).
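The qubit-versus-bit distinction can be made concrete: a qubit state is a unit vector of two complex amplitudes, and measurement yields "0" or "1" with probabilities given by the Born rule. A minimal illustration (the particular superposition chosen is arbitrary):

```python
import numpy as np

# A qubit state |psi> = alpha|0> + beta|1> is a unit vector of two complex amplitudes.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)  # an equal superposition (illustrative)
psi = np.array([alpha, beta])

# Normalization: |alpha|^2 + |beta|^2 = 1.
assert np.isclose(np.linalg.norm(psi), 1.0)

# Measuring in the computational basis yields 0 or 1 with Born-rule probabilities;
# unlike a classical bit, both outcomes are possible before measurement.
p0, p1 = np.abs(psi) ** 2
print(p0, p1)
```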
Some of the quantum computers developed in recent years store quantum information in the spin (i.e., intrinsic angular momentum) of electrons or nuclei that are trapped in small semiconductor-based structures known as quantum dots. For these devices to operate reliably, however, engineers need to be able to precisely measure the quantum states of the spin qubits they rely on, a process known as qubit readout. Ideally, this readout should also be architecturally compact, that is, realized with space-efficient hardware rather than numerous bulkier components.
Researchers at Quantum Motion and University College London (UCL) recently introduced a new approach for clearly reading out the states of spin qubits using high-frequency electrical signals. The method, introduced in a paper published in Nature Electronics, was developed by Jacob F. Chittock-Wood and his colleagues while he was completing his Ph.D. at UCL.
Excitons are being explored in materials science and information technology as a means of storing light. These luminous quasiparticles move through individual layers of quantum materials and can absorb and emit light with high efficiency. They form when a laser pulse excites an electron, leaving behind a positively charged “hole.” The electron and hole attract each other and behave together like a new, independent particle. When the quasiparticle recombines, it emits light and can be detected in high-tech laboratories.
Excitons in ultrathin quantum materials have been intensively studied for more than a decade, including by Alexey Chernikov and his team. At the Cluster of Excellence ctd.qmat—Complexity, Topology and Dynamics in Quantum Matter—at the Universities of Würzburg and Dresden, Chernikov and an international research team based in Dresden have now made a surprising discovery: excitons can be carried along by the magnetic excitations of a quantum material and, as a result, accelerated to ultrahigh speeds. The findings are published in the journal Nature Nanotechnology.
“The fact that the motion of optical particles can be controlled by magnetism is new. Until now, we only knew that the transport of electrons could be controlled by the magnetic order in a quantum material—this is how some sensors in smartphones work, for example. This newly discovered link between optics and magnetism could open up entirely new technological possibilities,” explains Florian Dirnberger, head of an Emmy Noether Junior Research Group at the Technical University of Munich and formerly a postdoctoral researcher in Alexey Chernikov’s Chair of Ultrafast Microscopy and Photonics, where he was responsible for carrying out the research project.
Semiconductor spin qubits are a promising candidate for the building blocks of next-generation quantum computers due to their high potential for integration and compatibility with existing semiconductor technologies. Qubits—like the 0s and 1s of a traditional computer—serve as a basic unit of information for quantum computers. However, the practical realization of these computers requires a massive number of qubits, making the development of more efficient adjustment methods a critical challenge for the field.
A research group including Yui Muto from Tohoku University’s Graduate School of Engineering, Assistant Professor Motoya Shinozaki and Associate Professor Tomohiro Otsuka from the Advanced Institute for Materials Research (WPI-AIMR), and their colleagues have successfully demonstrated a method that may help make this massive number of qubits much more manageable, moving us one step closer toward scaling up quantum computing. The findings are published in Scientific Reports.
The most demanding calculations in quantum chemistry can now be solved with graphics processing unit (GPU) supercomputers. A recently published study shows that software adapted to GPU hardware can provide not just speed but also the accuracy needed to solve complex chemistry problems. The work tackled two chemical structures often seen as too complex and expensive to simulate. The advance, published in the Journal of Chemical Theory and Computation, could allow researchers to make meaningful progress in designing new catalysts and to improve predictions of the behavior of magnetic and electronic materials.
Specifically, the research team—led by computational chemists from NVIDIA, SandboxAQ, the Wigner Research Centre in Hungary, the Institute for Advanced Study of the Technical University of Munich in Germany, and the Department of Energy's Pacific Northwest National Laboratory—showed that the NVIDIA Blackwell architecture effectively tackles complex simulations. Here, the researchers used a mixture of mathematically precise and approximated approaches to accomplish their goal.
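The study's code is not reproduced here, but the kind of workload involved is easy to sketch: correlated quantum chemistry is dominated by large tensor contractions, such as the transformation of two-electron integrals from the atomic-orbital to the molecular-orbital basis, which map directly onto GPU-friendly dense linear algebra. A toy illustration with NumPy, using mock random tensors and hypothetical sizes (CuPy exposes the same `einsum` API, so the identical contraction can be dispatched to an NVIDIA GPU by swapping the import):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 6  # toy sizes: n atomic orbitals, m molecular orbitals

eri = rng.normal(size=(n, n, n, n))  # mock two-electron integrals (pq|rs)
C = rng.normal(size=(n, m))          # mock molecular-orbital coefficient matrix

# AO -> MO transformation, a contraction chain typical of correlated methods.
# With optimize=True this is factored into a sequence of matrix multiplications,
# exactly the pattern that GPU hardware accelerates.
mo_eri = np.einsum('pqrs,pi,qj,rk,sl->ijkl', eri, C, C, C, C, optimize=True)

print(mo_eri.shape)  # (m, m, m, m)
```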
“Our study shows that AI-oriented hardware can do more than provide speed—it can also power chemically accurate, strongly correlated quantum chemistry at the frontier of what is computationally feasible,” said Sotiris Xantheas, a computational chemist at PNNL and study author. Xantheas also serves as the principal investigator of Scalable Predictive methods for Excitations and Correlated phenomena (SPEC), a Department of Energy initiative.
CHAPTERS
0:00 The 70% mystery
0:58 How Dark Energy was discovered?
4:26 What could be causing Dark Energy?
6:58 Repulsive Gravity?
10:16 What is the energy made of?
11:56 Evolving Dark Energy? Quintessence
14:18 Could Dark Energy be a particle?
16:43 Could Black Holes cause Dark Energy?

SUMMARY
Dark energy is one of the greatest mysteries in modern physics. It appears to make up nearly 70% of the universe, yet scientists still do not know what it is. Unlike matter, it does not clump together. Unlike radiation, it does not dilute as space expands. Instead, it causes the expansion of the universe to accelerate, pushing galaxies apart faster over time.

The discovery of this acceleration came in the late 1990s, when astronomers measured distant Type Ia supernovae, which act as reliable "standard candles." By comparing their brightness and redshift, researchers could determine how fast the universe expanded at different points in cosmic history. Instead of finding that gravity slowed the expansion, as expected, they discovered the opposite: the universe was expanding faster and faster. This unexpected result led to the concept of dark energy, the unknown driver behind cosmic acceleration.

One possible explanation is that dark energy is a cosmological constant, represented by the Greek letter lambda in Einstein's equations. In this model, empty space itself contains a constant energy density known as vacuum energy. Quantum mechanics predicts that empty space is not truly empty; quantum fields constantly fluctuate, producing short-lived "virtual particles." These fluctuations create energy even in a vacuum. Experiments like the Casimir effect provide evidence that vacuum energy is real.
However, this explanation has a major problem. When physicists calculate vacuum energy using quantum theory, the predicted value is about 10¹²⁰ times larger than what observations of the universe allow. This enormous mismatch is widely considered the worst prediction in physics.

In general relativity, cosmic acceleration can occur if the universe contains energy with negative pressure. In the Friedmann equation, expansion accelerates when pressure is sufficiently negative relative to energy density. Dark energy appears to have exactly this property, effectively producing a form of repulsive gravity that stretches spacetime.

Another possibility is that dark energy is not constant but comes from a dynamic field known as quintessence. In quantum theory, fields can have particle-like excitations, meaning dark energy might correspond to extremely weakly interacting particles. If the strength of this field changes over time, the acceleration of the universe could grow stronger. In extreme scenarios, this could eventually lead to a catastrophic future known as the Big Rip, where galaxies, stars, atoms, and even spacetime itself are torn apart.

A more speculative idea suggests a connection between supermassive black holes and dark energy. Some recent studies have observed that black holes appear to grow more massive over billions of years than expected from normal matter accretion alone. Researchers have proposed that black holes might somehow be linked to dark energy, though current evidence only shows a correlation and not a confirmed causal explanation.

For now, dark energy remains an observed phenomenon with multiple possible explanations. Whether it is a property of empty space, a new field of physics, or something even deeper, it stands as one of the most profound open questions in cosmology.
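The negative-pressure condition invoked above can be stated precisely with the second Friedmann (acceleration) equation, a standard result quoted here for reference:

```latex
\frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right),
```

where \(a\) is the cosmic scale factor, \(\rho\) the energy density, and \(p\) the pressure. Expansion accelerates (\(\ddot{a} > 0\)) when \(p < -\rho c^{2}/3\); a cosmological constant corresponds to the extreme case \(p = -\rho c^{2}\), which satisfies this condition and drives the repulsive-gravity behavior described in the text.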
When you throw a ball in the air, the equations of classical physics will tell you exactly what path the ball will take as it falls, and when and where it will land. But if you were to squeeze that same ball down to the size of an atom or smaller, it would behave in ways beyond anything that classical physics can predict.
Or so we’ve thought.
MIT scientists have now shown that certain mathematical ideas from everyday classical physics can be used to describe the often weird and nonintuitive behavior that occurs at the quantum, subatomic scale.
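The classical side of that comparison is easy to make concrete: for a thrown ball, the kinematic equations determine the entire trajectory and the landing point exactly, with no probabilities involved. A short sketch with illustrative launch conditions:

```python
import math

# Classical projectile motion: given launch speed and angle, the kinematic
# equations predict the full trajectory deterministically.
g = 9.81                                # gravitational acceleration, m/s^2
v0, angle = 10.0, math.radians(45.0)    # illustrative launch conditions
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

t_flight = 2 * vy / g     # time until the ball returns to launch height
x_range = vx * t_flight   # horizontal landing distance (= v0^2 sin(2*angle) / g)

def height(t):
    """Height above the launch point at time t; exact, no randomness involved."""
    return vy * t - 0.5 * g * t ** 2

print(round(t_flight, 3), round(x_range, 3))
```

A quantum particle admits no such single predicted path; the MIT result described below concerns which pieces of this classical mathematics nonetheless carry over.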
A tiny discrepancy in particle physics has loomed for decades as an exciting possible crack in one of science’s most successful theories, hinting at unknown forces or quantum objects. Now, an international team led by a Penn State physicist has published the most precise study yet to reveal the discrepancy was a fluke in calculation, not nature.
More than half a century of measurements of a fundamental property of the muon—the more massive, short-lived cousin of the electron—did not line up with theoretical predictions, raising hopes that new physics might be behind the unexplained inconsistency.
In a paper published in the journal Nature, a team led by a Penn State researcher describes one of the most precise calculations ever performed in particle physics, showing that the Standard Model—the theory describing the known building blocks of matter—still holds.