
Estimating spectral features of quantum many-body systems has attracted great attention in condensed matter physics and quantum chemistry. Various experimental and theoretical techniques have been developed for this task, such as spectroscopy techniques1,2,3,4,5,6,7 and quantum simulation, either by engineering controlled quantum devices8,9,10,11,12,13,14,15,16 or by executing quantum algorithms17,18,19,20 such as quantum phase estimation and variational algorithms. However, probing the behaviour of complex quantum many-body systems remains a challenge, as both approaches demand substantial resources. For instance, a physical probe such as neutron spectroscopy requires access to large-scale facilities with high-intensity neutron beams, while quantum computation of eigenenergies typically requires controlled operations with long coherence times17,18. Efficient estimation of spectral properties has therefore become a topic of increasing interest in this noisy intermediate-scale quantum era21.

A potential route to efficient spectral property estimation is to extract spectral information from the dynamics of observables, rather than relying on physical probes such as scattering spectroscopy or on direct computation of eigenenergies. This approach capitalises on a basic fact of quantum mechanics: spectral information is naturally carried by an observable's dynamics10,20,22,23,24,25,26. In a solid with translational invariance, for instance, the dynamic structure factor, which can be probed in spectroscopy experiments7,26, reaches a local maximum when both the energy and momentum selection rules are satisfied. The energy dispersion can therefore be inferred by tracking the peaks of intensity in the excitation spectrum.
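The principle admits a compact numerical illustration. The Python sketch below is not the procedure used in the work: the two-qubit Hamiltonian, observable and initial state are arbitrary toy choices, used only to show how transition energies can be read off the peaks of the Fourier spectrum of a time-evolved expectation value.

```python
import numpy as np

# Toy two-qubit Hamiltonian H = X.X + Z.I + I.Z (an arbitrary illustrative choice)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = np.kron(X, X) + np.kron(Z, I2) + np.kron(I2, Z)

# Observable and initial state (also illustrative): Z on qubit 1, |psi0> = (|00> + |10>)/sqrt(2)
O = np.kron(Z, I2)
psi0 = np.zeros(4, dtype=complex)
psi0[0] = psi0[2] = 1 / np.sqrt(2)

# Exact evolution via the spectral decomposition of H
evals, evecs = np.linalg.eigh(H)
c = evecs.conj().T @ psi0            # initial amplitudes in the eigenbasis
O_eig = evecs.conj().T @ O @ evecs   # observable in the eigenbasis

def expval(t):
    v = c * np.exp(-1j * evals * t)          # state in the eigenbasis at time t
    return np.real(np.vdot(v, O_eig @ v))    # <psi(t)|O|psi(t)>

times = np.linspace(0.0, 200.0, 4096)
signal = np.array([expval(t) for t in times])

# Fourier-transform the time trace; peaks sit at Bohr frequencies E_j - E_k
# whose matrix elements <j|O|k> are nonzero (the "selection rules")
window = np.hanning(len(times))
power = np.abs(np.fft.rfft((signal - signal.mean()) * window)) ** 2
freqs = np.fft.rfftfreq(len(times), d=times[1] - times[0]) * 2 * np.pi
peaks = [freqs[i] for i in range(1, len(power) - 1)
         if power[i] > power[i - 1] and power[i] > power[i + 1]
         and power[i] > 0.01 * power.max()]

print("peaks in the excitation spectrum:", np.round(peaks, 2))
print("all spectral gaps |E_j - E_k|  :",
      np.round(sorted({round(abs(a - b), 6) for a in evals for b in evals if a < b}), 2))
```

In the same spirit, repeating the analysis for momentum-resolved observables would trace out a dispersion relation from the peak positions.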

Imagine a world where the act of observation itself holds the key to solving our most complex problems, a world where the very fabric of reality becomes a canvas for computation. This is the tantalizing promise of Observational Computation (OC), a radical new paradigm poised to redefine the very nature of computation and our understanding of the universe itself.

Forget silicon chips and algorithms etched in code; OC harnesses the enigmatic dance of quantum mechanics and the observer effect, where the observer and the observed are inextricably intertwined. Instead of relying on traditional processing power, OC seeks to translate computational problems into carefully crafted observer-environment systems. Picture a quantum stage where potential solutions exist in a hazy superposition, like ghostly apparitions waiting for the spotlight of observation to solidify them into reality.

By meticulously designing these “observational experiments,” we can manipulate quantum systems, nudging them towards desired outcomes. This elegant approach offers tantalizing advantages over our current computational methods. Imagine harnessing the inherent parallelism of quantum superposition for exponentially faster processing, or tapping into the natural energy flows of the universe for unprecedented energy efficiency.

AI's transformational impact is well under way. But as AI technologies develop, so too does their power consumption. Further advances will require AI chips that can carry out AI calculations with high energy efficiency. This is where spintronic devices enter the equation: their integrated memory and computing capabilities mimic the human brain, and they can serve as building blocks for lower-power AI chips.

Now, researchers at Tohoku University, National Institute for Materials Science, and Japan Atomic Energy Agency have developed a new spintronic device that allows for the electrical mutual control of non-collinear antiferromagnets and ferromagnets. This means the device can switch magnetic states efficiently, storing and processing information with less energy—just like a brain-like AI chip.

The breakthrough could revolutionize AI hardware by delivering high efficiency at low energy cost. The team published their results in Nature Communications on February 5, 2025.

Scientists at Goethe University Frankfurt have identified a new way to probe the interior of neutron stars using gravitational waves from their collisions. By analyzing the “long ringdown” phase—a pure-tone signal emitted by the post-merger remnant—they have found a strong correlation between the signal’s properties and the equation of state of neutron-star matter. Their results were recently published in Nature Communications.

Neutron stars, with a mass greater than that of the entire solar system confined within a nearly perfect sphere just a dozen kilometers in diameter, are among the most fascinating astrophysical objects known to humankind. Yet, the extreme conditions in their interiors make their composition and structure highly uncertain.

The collision of two neutron stars, such as the one observed in 2017, provides a unique opportunity to uncover these mysteries. As binary neutron stars inspiral for millions of years, they emit gravitational waves, but the most intense emission occurs at, and just milliseconds after, the moment of merging.

In a milestone that brings quantum computing tangibly closer to large-scale practical use, scientists at Oxford University Physics have demonstrated the first instance of distributed quantum computing.

Using a photonic network interface, they successfully linked two separate quantum processors to form a single, fully connected quantum computer, paving the way to tackling computational challenges previously out of reach. The results were published on 5 Feb in Nature.

The breakthrough addresses quantum's 'scalability problem': a quantum computer powerful enough to be industry-disrupting would have to be capable of processing millions of qubits. Packing this many qubits into a single device, however, would require a machine of immense size.

The concept of computational consciousness and its potential impact on humanity is a topic of ongoing debate and speculation. While Artificial Intelligence (AI) has made significant advancements in recent years, we have not yet achieved a true computational consciousness capable of replicating the complexities of the human mind.

AI technologies are becoming increasingly sophisticated, performing tasks that were once exclusive to human intelligence. However, fundamental differences remain between AI and human consciousness. Human cognition is not purely computational; it encompasses emotions, subjective experiences, self-awareness, and other dimensions that machines have yet to replicate.

The rise of advanced AI systems will undoubtedly transform society, reshaping how we work, communicate, and interact with the digital world. AI enhances human capabilities, offering powerful tools for solving complex problems across diverse fields, from scientific research to healthcare. However, the ethical implications and potential risks associated with AI development must be carefully considered. Responsible AI deployment, emphasizing fairness, transparency, and accountability, is crucial.

In this evolving landscape, ETER9 introduces an avant-garde and experimental approach to AI-driven social networking. It redefines digital presence by allowing users to engage with AI entities known as ‘noids’ — autonomous digital counterparts designed to extend human presence beyond time and availability. Unlike traditional virtual assistants, noids act as independent extensions of their users, continuously learning from interactions to replicate communication styles and behaviors. These AI-driven entities engage with others, generate content, and maintain a user’s online presence, ensuring a persistent digital identity.

ETER9’s noids are not passive simulations; they dynamically evolve, fostering meaningful interactions and expanding the boundaries of virtual existence. Through advanced machine learning algorithms, they analyze user input, adapt to personal preferences, and refine their responses over time, creating an AI representation that closely mirrors its human counterpart. This unique integration of AI and social networking enables users to sustain an active online presence, even when they are not physically engaged.

The advent of autonomous digital counterparts in platforms like ETER9 raises profound questions about identity and authenticity in the digital age. While noids do not possess true consciousness, they provide a novel way for individuals to explore their own thoughts, behaviors, and social interactions. Acting as digital mirrors, they offer insights that encourage self-reflection and deeper understanding of one’s digital footprint.

As this frontier advances, it is essential to approach the development and interaction with digital counterparts thoughtfully. Issues such as privacy, data security, and ethical AI usage must be at the forefront. ETER9 is committed to ensuring user privacy and maintaining high ethical standards in the creation and functionality of its noids.

ETER9’s vision represents a paradigm shift in human-AI relationships. By bridging the gap between physical and virtual existence, it provides new avenues for creativity, collaboration, and self-expression. As we continue to explore the potential of AI-driven digital counterparts, it is crucial to embrace these innovations with mindful intent, recognizing that while AI can enhance and extend our digital presence, it is our humanity that remains the core of our existence.

As ETER9 pushes the boundaries of AI and virtual presence, one question lingers:

— Could these autonomous digital counterparts unlock deeper insights into human consciousness and the nature of our identity in the digital era?


We may be well past the uncanny valley point right now. OmniHuman-1’s fake videos look startlingly lifelike, and the model’s deepfake outputs are perhaps the most realistic to date. Just take a look at this TED Talk that never actually took place.

The system only needs a single photo and an audio clip to generate these videos from scratch. You can also adjust elements such as aspect ratio and body framing. The AI can even modify existing video footage, editing things like body movements and gestures in creepily realistic ways.

If left unchecked, powerful AI systems may pose an existential threat to the future of humanity, say UC Berkeley Professor Stuart Russell and postdoctoral scholar Michael Cohen.

Society is already grappling with myriad problems created by the rapid proliferation of AI, including disinformation, polarization and algorithmic bias. Meanwhile, tech companies are racing to build ever more powerful AI systems, while research into AI safety lags far behind.

Without giving powerful AI systems clearly defined objectives, or creating robust mechanisms to keep them in check, AI may one day evade human control. And if the objectives of these AIs are at odds with those of humans, say Russell and Cohen, it could spell the end of humanity.

The advent of quantum simulators in various platforms8,9,10,11,12,13,14 has opened a powerful experimental avenue towards answering the theoretical question of thermalization5,6, which seeks to reconcile the unitarity of quantum evolution with the emergence of statistical mechanics in constituent subsystems. A particularly interesting setting is that in which a quantum system is swept through a critical point15,16,17,18, as varying the sweep rate can allow for accessing markedly different paths through phase space and correspondingly distinct coarsening behaviour. Such effects have been theoretically predicted to cause deviations19,20,21,22 from the celebrated Kibble–Zurek (KZ) mechanism, which states that the correlation length ξ of the final state follows a universal power-law scaling with the ramp time tr (refs. 3, 23,24,25).
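For orientation, the universal power law invoked here is usually quoted in terms of the equilibrium correlation-length exponent ν and the dynamical exponent z of the critical point; this is the textbook form of the KZ prediction rather than an equation taken from the excerpt:

```latex
% Kibble-Zurek scaling of the correlation length with the ramp time t_r;
% \nu and z are the correlation-length and dynamical critical exponents.
\xi \;\propto\; t_{\mathrm{r}}^{\,\nu/(1+\nu z)}
```

The deviations predicted in refs. 19,20,21,22 appear as departures from this exponent.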

Whereas tremendous technical advancements in quantum simulators have enabled the observation of a wealth of thermalization-related phenomena26,27,28,29,30,31,32,33,34,35, the analogue nature of these systems has also imposed constraints on the experimental versatility. Studying thermalization dynamics necessitates state characterization beyond density–density correlations and preparation of initial states across the entire eigenspectrum, both of which are difficult without universal quantum control36. Although digital quantum processors are in principle suitable for such tasks, implementing Hamiltonian evolution requires a high number of digital gates, making large-scale Hamiltonian simulation infeasible under current gate errors.
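To make the gate-count argument concrete, the back-of-the-envelope sketch below estimates the number of entangling gates for a first-order Trotterized evolution on a 2D nearest-neighbour lattice. All numbers are illustrative assumptions (the lattice dimensions, the number of Trotter steps, and a cost of three entangling gates per exponentiated two-qubit term); none are taken from the paper.

```python
# Back-of-the-envelope sketch: two-qubit gate count for digitally simulating
# exp(-iHt) with first-order Trotterization on an n_rows x n_cols lattice of
# qubits with nearest-neighbour couplings. Each exponentiated two-qubit term
# is assumed to cost up to three entangling gates per Trotter step.
def trotter_two_qubit_gates(n_rows, n_cols, n_steps, gates_per_term=3):
    n_bonds = n_rows * (n_cols - 1) + n_cols * (n_rows - 1)  # horizontal + vertical bonds
    return n_bonds * n_steps * gates_per_term

# Illustrative numbers only: a roughly 70-qubit lattice and 100 Trotter steps
# already imply tens of thousands of entangling gates, so per-gate error rates
# of order 10^-3 would leave essentially no circuit fidelity.
print(trotter_two_qubit_gates(n_rows=7, n_cols=10, n_steps=100))  # 36900
```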

In this work, we present a hybrid analogue–digital37,38 quantum simulator comprising 69 superconducting transmon qubits connected by tunable couplers in a two-dimensional (2D) lattice (Fig. 1a). The quantum simulator supports universal entangling gates with pairwise interactions between qubits, and high-fidelity analogue simulation of a U(1)-symmetric spin Hamiltonian when all couplers are activated at once. The low analogue evolution error, which was previously difficult to achieve with transmon qubits owing to correlated cross-talk effects, is enabled by a new scalable calibration scheme (Fig. 1b). Using cross-entropy benchmarking (XEB)39, we demonstrate analogue performance that exceeds the simulation capacity of known classical algorithms at the full system size.
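For readers unfamiliar with XEB, the sketch below implements the standard linear cross-entropy fidelity estimator on synthetic data. The estimator formula is the commonly used one; the Porter-Thomas-style toy distribution and the sample counts are assumptions for illustration, and this is not the benchmarking pipeline of the paper.

```python
# Minimal sketch of the linear cross-entropy benchmarking (XEB) estimator,
#   F_XEB = 2^n * <p_ideal(x)> - 1,
# averaged over measured bitstrings x, with p_ideal obtained from a classical
# simulation of the same circuit. For chaotic (Porter-Thomas) output
# distributions, F is close to 1 for a perfect device and close to 0 for a
# fully depolarized one. The data below are synthetic placeholders.
import numpy as np

def linear_xeb_fidelity(measured, ideal_probs, n_qubits):
    """measured: integers in [0, 2**n_qubits); ideal_probs: ideal output
    probabilities of the benchmarked circuit."""
    p = np.asarray(ideal_probs)[np.asarray(measured)]
    return (2 ** n_qubits) * p.mean() - 1.0

rng = np.random.default_rng(0)
n = 10
ideal = rng.exponential(size=2 ** n)   # Porter-Thomas-like random weights
ideal /= ideal.sum()

samples_perfect = rng.choice(2 ** n, size=20000, p=ideal)   # noiseless device
samples_noisy = rng.integers(0, 2 ** n, size=20000)         # fully depolarized

print("perfect sampling :", round(linear_xeb_fidelity(samples_perfect, ideal, n), 3))
print("uniform noise    :", round(linear_xeb_fidelity(samples_noisy, ideal, n), 3))
```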