
Chinese scientists unveiled a superconducting quantum computer prototype named “Zuchongzhi 3.0” with 105 qubits on Monday (Beijing Time), marking a breakthrough in China’s quantum computing development.

The achievement also sets a new record in quantum computational advantage within superconducting systems.

Developed by a team of Chinese quantum physicists including Pan Jianwei, Zhu Xiaobo, and Peng Chengzhi, “Zuchongzhi 3.0” features 105 readable qubits and 182 couplers. It processes quantum random circuit sampling tasks a quadrillion times faster than the world’s most powerful supercomputer and one million times faster than Google’s latest results published in Nature in October 2024.

Zuchongzhi-3, a superconducting quantum computing prototype with 105 qubits and 182 couplers, has made significant advancements in random quantum circuit sampling. This prototype was successfully developed by a research team from the University of Science and Technology of China (USTC).

This prototype operates 10¹⁵ times faster than the most powerful supercomputer currently available and one million times faster than the latest results published by Google. This achievement marks a milestone in enhancing the performance of quantum computation, following the success of Zuchongzhi-2. The research findings have been published as the cover article in Physical Review Letters.

Quantum supremacy is the demonstration of a quantum computer capable of performing tasks that are infeasible for classical computers. In 2019, Google’s 53-qubit Sycamore processor completed a random circuit sampling task in 200 seconds, a task that would have taken approximately 10,000 years to simulate on the world’s fastest supercomputer at the time.
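The infeasibility claim comes down to exponential scaling: simulating an n-qubit random circuit by brute force means tracking 2^n complex amplitudes. Below is a minimal sketch of that bookkeeping, assuming the simplest full-statevector approach with 16 bytes per amplitude (real classical attacks, such as tensor-network contraction, are considerably more sophisticated); the qubit counts used are the 53 and 105 mentioned in this article.

# Back-of-the-envelope cost of brute-force statevector simulation:
# one complex amplitude (16 bytes) per basis state. This is only the
# simplest classical strategy; tensor-network methods do much better.

def statevector_memory_bytes(num_qubits: int) -> int:
    """Memory needed to hold all 2**n complex128 amplitudes."""
    return (2 ** num_qubits) * 16

for n in (53, 105):  # Sycamore-scale and Zuchongzhi-3-scale circuits
    print(f"{n} qubits: 2**{n} = {2 ** n:.3e} amplitudes, "
          f"~{statevector_memory_bytes(n):.3e} bytes")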

Using the Frontier supercomputer, researchers have cracked a major challenge in nuclear physics: accurately predicting nuclear structure and forces at an unprecedented level of detail.

Their discoveries, including new insights into the shape-shifting nature of the neon-30 nucleus, could revolutionize fields ranging from quantum mechanics to national security.


Google’s new quantum computer solved a calculation in five minutes that a conventional supercomputer could not complete within the lifetime of the universe: the classical runtime would be nearly a million billion times the universe’s age.
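As a quick sanity check on that comparison, using the roughly 10 septillion years quoted later in this article for the classical runtime and a commonly cited ~13.8-billion-year age of the universe (both assumptions, not figures from this paragraph):

# Sanity check: how many universe-lifetimes is ~10 septillion years?
classical_runtime_years = 1e25      # "10 septillion" years, quoted below
age_of_universe_years = 1.38e10     # commonly cited estimate

print(f"{classical_runtime_years / age_of_universe_years:.1e}")
# ~7.2e+14, i.e. "nearly a million billion" times the universe's age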

PsiQuantum unveiled Omega, a quantum photonic chipset designed for large-scale quantum computing. This development, detailed in a Nature publication, marks a significant milestone in the mass production of quantum chips. Manufactured in partnership with GlobalFoundries at their Albany, New York facility, Omega integrates advanced components essential for constructing million-qubit quantum computers. The chipset employs photonics technology, manipulating single photons for computations, which offers advantages such as simplified cooling. PsiQuantum has achieved manufacturing yields comparable to standard semiconductors, producing millions of these chips. The company plans to establish two Quantum Compute Centers, in Brisbane, Australia, and Chicago, Illinois, aiming for operational facilities by 2027. This progress positions PsiQuantum at the forefront of the quantum computing industry, alongside other major companies making significant strides in the field.

Summary of the paper in Nature: For decades, scientists have dreamed of building powerful quantum computers that use light, known as photonic quantum computers. These machines could solve complex problems far beyond the reach of today’s most advanced supercomputers, but a major roadblock has been the sheer difficulty of manufacturing the required components at the necessary scale. Now, researchers have developed a manufacturable platform for photonic quantum computing, marking a significant breakthrough. Their system is built using silicon photonics, a technology that integrates optical components directly onto a chip, much like modern semiconductor chips. The team demonstrated key capabilities:

* Ultra-precise qubits: 99.98% accuracy in preparing and measuring quantum states.
* Reliable quantum interference: independent photon sources interfered with a visibility of 99.50%, crucial for quantum logic operations.
* High-fidelity entanglement: two-qubit fusion, a critical quantum process, reached 99.22% accuracy.
* Seamless chip-to-chip connections: quantum chips were linked with 99.72% fidelity, a crucial step for scaling up quantum systems.

Looking ahead, the researchers highlight new technologies that will further improve performance, including better photon sources, advanced detectors, and high-speed switches. This work represents a major step toward large-scale, practical quantum computing, bringing us closer to a future where quantum machines tackle problems that are impossible today.
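Those per-operation fidelities matter mostly in aggregate: when operations are chained, their errors compound, which is why fault-tolerant architectures push per-step error rates so low. Here is a rough sketch of that compounding, assuming (simplistically) that errors are independent so fidelities multiply; the fidelity figures are the ones reported above, but the operation counts in the tiny example circuit are made up purely for illustration.

# Rough illustration of how per-operation fidelities compound.
# Assumes independent errors (fidelities simply multiply) -- a simplification.

fidelities = {
    "state prep & measurement": 0.9998,
    "photon interference":      0.9950,
    "two-qubit fusion":         0.9922,
    "chip-to-chip link":        0.9972,
}

# Hypothetical tiny circuit: (operation, number of occurrences).
circuit = [
    ("state prep & measurement", 4),
    ("photon interference", 2),
    ("two-qubit fusion", 2),
    ("chip-to-chip link", 1),
]

total = 1.0
for op, count in circuit:
    total *= fidelities[op] ** count

print(f"aggregate fidelity ≈ {total:.4f}")  # ≈ 0.9711, already well below any single step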


PsiQuantum’s focus is now on wiring these chips together across racks into increasingly large-scale multi-chip systems – work the company is now expanding through its partnership with the U.S. Department of Energy at SLAC National Accelerator Laboratory in Menlo Park, California, as well as through a new manufacturing and testing facility in Silicon Valley. While chip-to-chip networking remains a hard research problem for many other approaches, photonic quantum computers have the intrinsic advantage that photonic qubits can be networked using standard telecom optical fiber without any conversion between modalities, and PsiQuantum has already demonstrated high-fidelity quantum interconnects over distances of up to 250 m.

In 2024, PsiQuantum announced two landmark partnerships: one with the Australian Federal and Queensland State governments, and another with the State of Illinois and the City of Chicago, to build its first utility-scale quantum computers in Brisbane and Chicago. These partnerships, which treat quantum computing as a sovereign capability, underscore the urgency of the race toward million-qubit systems. Later this year, PsiQuantum will break ground on Quantum Compute Centers at both sites, where the first utility-scale, million-qubit systems will be deployed.

At the threshold of a century poised for unprecedented transformations, we find ourselves at a crossroads unlike any before. The convergence of humanity and technology is no longer a distant possibility; it has become a tangible reality that challenges our most fundamental conceptions of what it means to be human.

This article seeks to explore the implications of this new era, in which Artificial Intelligence (AI) emerges as a central player. Are we truly on the verge of a symbiotic fusion, or is the conflict between the natural and the artificial inevitable?

The prevailing discourse on AI oscillates between two extremes: on one hand, some view this technology as a powerful extension of human capabilities, capable of amplifying our creativity and efficiency. On the other, a more alarmist narrative predicts the decline of human significance in the face of relentless machine advancement. Yet, both perspectives seem overly simplistic when confronted with the intrinsic complexity of this phenomenon. Beyond the dichotomy of utopian optimism and apocalyptic pessimism, it is imperative to critically reflect on AI’s cultural, ethical, and philosophical impact on the social fabric, as well as the redefinition of human identity that this technological revolution demands.

Since the dawn of civilization, humans have sought to transcend their natural limitations through the creation of tools and technologies. From the wheel to the modern computer, every innovation has been seen as a means to overcome the physical and cognitive constraints imposed by biology. However, AI represents something profoundly different: for the first time, we are developing systems that not only execute predefined tasks but also learn, adapt, and, to some extent, think.

This transition should not be underestimated. While previous technologies were primarily instrumental—serving as controlled extensions of human will—AI introduces an element of autonomy that challenges the traditional relationship between subject and object. Machines are no longer merely passive tools; they are becoming active partners in the processes of creation and decision-making. This qualitative leap radically alters the balance of power between humans and machines, raising crucial questions about our position as the dominant species.

But what does it truly mean to “be human” in a world where the boundaries between mind and machine are blurring? Traditionally, humanity has been defined by attributes such as consciousness, emotion, creativity, and moral decision-making. Yet, as AI advances, these uniquely human traits are beginning to be replicated—albeit imperfectly—within algorithms. If a machine can imitate creativity or exhibit convincing emotional behavior, where does our uniqueness lie?

This challenge is not merely technical; it strikes at the core of our collective identity. Throughout history, humanity has constructed cultural and religious narratives that placed us at the center of the cosmos, distinguishing us from animals and the forces of nature. Today, that narrative is being contested by a new technological order that threatens to displace us from our self-imposed pedestal. It is not so much the fear of physical obsolescence that haunts our reflections but rather the anxiety of losing the sense of purpose and meaning derived from our uniqueness.

Despite these concerns, many AI advocates argue that the real opportunity lies in forging a symbiotic partnership between humans and machines. In this vision, technology is not a threat to humanity but an ally that enhances our capabilities. The underlying idea is that AI can take on repetitive or highly complex tasks, freeing humans to engage in activities that truly require creativity, intuition, and—most importantly—emotion.

Concrete examples of this approach can already be seen across various sectors. In medicine, AI-powered diagnostic systems can process vast amounts of clinical data in record time, allowing doctors to focus on more nuanced aspects of patient care. In the creative industries, AI-driven text and image generation tools are being used as sources of inspiration, helping artists and writers explore new ideas and perspectives. In both cases, AI acts as a catalyst, amplifying human abilities rather than replacing them.

Furthermore, this collaboration could pave the way for innovative solutions in critical areas such as environmental sustainability, education, and social inclusion. For example, powerful neural networks can analyze global climate patterns, assisting scientists in predicting and mitigating natural disasters. Personalized algorithms can tailor educational content to the specific needs of each student, fostering more effective and inclusive learning. These applications suggest that AI, far from being a destructive force, can serve as a powerful instrument to address some of the greatest challenges of our time.

However, for this vision to become reality, a strategic approach is required—one that goes beyond mere technological implementation. It is crucial to ensure that AI is developed and deployed ethically, respecting fundamental human rights and promoting collective well-being. This involves regulating harmful practices, such as the misuse of personal data or the indiscriminate automation of jobs, as well as investing in training programs that prepare people for the new demands of the labor market.

While the prospect of symbiotic fusion is hopeful, we cannot ignore the inherent risks of AI’s rapid evolution. As these technologies become more sophisticated, so too does the potential for misuse and unforeseen consequences. One of the greatest dangers lies in the concentration of power in the hands of a few entities, whether they be governments, multinational corporations, or criminal organizations.

Recent history has already provided concerning examples of this phenomenon. The manipulation of public opinion through algorithm-driven social media, mass surveillance enabled by facial recognition systems, and the use of AI-controlled military drones illustrate how this technology can be wielded in ways that undermine societal interests.

Another critical risk in AI development is the so-called “alignment problem.” Even if a machine is programmed with good intentions, there is always the possibility that it misinterprets its instructions or prioritizes objectives that conflict with human values. This issue becomes particularly relevant in the context of autonomous systems that make decisions without direct human intervention. Imagine, for instance, a self-driving car forced to choose between saving its passenger or a pedestrian in an unavoidable collision. How should such decisions be made, and who bears responsibility for the outcome?

These uncertainties raise legitimate concerns about humanity’s ability to maintain control over increasingly advanced technologies. The very notion of scientific progress is called into question when we realize that accumulated knowledge can be used both for humanity’s benefit and its detriment. The nuclear arms race during the Cold War serves as a sobering reminder of what can happen when science escapes moral oversight.

Whether the future holds symbiotic fusion or inevitable conflict, one thing is clear: our understanding of human identity must adapt to the new realities imposed by AI. This adjustment will not be easy, as it requires confronting profound questions about free will, the nature of consciousness, and the essence of individuality.

One of the most pressing challenges is reconciling our increasing technological dependence with the preservation of human dignity. While AI can significantly enhance quality of life, there is a risk of reducing humans to mere consumers of automated services. Without a conscious effort to safeguard the emotional and spiritual dimensions of human experience, we may end up creating a society where efficiency outweighs empathy, and interpersonal interactions are replaced by cold, impersonal digital interfaces.

On the other hand, this very transformation offers a unique opportunity to rediscover and redefine what it means to be human. By delegating mechanical and routine tasks to machines, we can focus on activities that truly enrich our existence—art, philosophy, emotional relationships, and civic engagement. AI can serve as a mirror, compelling us to reflect on our values and aspirations, encouraging us to cultivate what is genuinely unique about the human condition.

Ultimately, the fate of our relationship with AI will depend on the choices we make today. We can choose to view it as an existential threat, resisting the inevitable changes it brings, or we can embrace the challenge of reinventing our collective identity in a post-humanist era. The latter, though more daring, offers the possibility of building a future where technology and humanity coexist in harmony, complementing each other.

To achieve this, we must adopt a holistic approach that integrates scientific, ethical, philosophical, and sociological perspectives. It also requires an open, inclusive dialogue involving all sectors of society—from researchers and entrepreneurs to policymakers and ordinary citizens. After all, AI is not merely a technical tool; it is an expression of our collective imagination, a reflection of our ambitions and fears.

As we gaze toward the horizon, we see a world full of uncertainties but also immense possibilities. The future is not predetermined; it will be shaped by the decisions we make today. What kind of social contract do we wish to establish with AI? Will it be one of domination or cooperation? The answer to this question will determine not only the trajectory of technology but the very essence of our existence as a species.

Now is the time to embrace our historical responsibility and embark on this journey with courage, wisdom, and an unwavering commitment to the values that make human life worth living.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/a-sinfonia-do-amanha-tit…exao-seria ]

Since their invention, traditional computers have almost always relied on semiconductor chips that use binary “bits” of information represented as strings of 1’s and 0’s. While these chips have become increasingly powerful and simultaneously smaller, there is a physical limit to the amount of information that can be stored on this hardware. Quantum computers, by comparison, utilize “qubits” (quantum bits) to exploit the strange properties exhibited by subatomic particles, often at extremely cold temperatures.

Two qubits can represent four values at once, and each additional qubit doubles that number, so computing capability grows exponentially with qubit count. This allows a quantum computer to process information at speeds and scales that make today’s supercomputers seem almost antiquated. Last December, for example, Google unveiled an experimental quantum computer system that researchers say takes just five minutes to finish a calculation that would take most supercomputers over 10 septillion years to complete—longer than the age of the universe as we understand it.
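To make the doubling concrete, here is a minimal, purely illustrative sketch that counts the basis states an n-qubit register spans and lists the four states behind “two qubits can hold four values”:

from itertools import product

# Each extra qubit doubles the number of basis states a register can
# occupy in superposition: 2**n states for n qubits.
for n in (1, 2, 3, 10, 53):
    print(f"{n} qubit(s): {2 ** n} basis states")

# The four basis states of a two-qubit register:
print(["".join(bits) for bits in product("01", repeat=2)])  # ['00', '01', '10', '11']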

But Google’s Quantum Processing Unit (QPU) is based on a different technology from Microsoft’s Majorana 1 design, detailed in a paper published on February 19 in the journal Nature. The result of over 17 years of design and research, Majorana 1 relies on what the company calls “topological qubits,” created by inducing topological superconductivity, a state of matter previously theorized but never observed.

By combining digital and analog quantum simulation into a new hybrid approach, scientists have already started to make fresh scientific discoveries using quantum computers.