
String Theory in 2037 | Brian Greene & Edward Witten

Edward Witten, widely regarded as one of the greatest living theoretical physicists, sits down with Brian Greene to explore the deepest questions at the frontiers of modern science. From string theory and quantum gravity to black holes, cosmology, and the nature of consciousness, Witten reflects on what physics has revealed—and what remains profoundly mysterious.

The only physicist to receive the Fields Medal, Witten discusses why unifying quantum mechanics and general relativity has proven so difficult, how string theory forces gravity into its framework, and why decades of progress have still not revealed the fundamental principles underlying the theory. He also examines powerful ideas such as duality, extra dimensions, and the controversial anthropic principle, offering rare insight into how physicists grapple with uncertainty at the edge of human understanding.

The conversation moves beyond equations into philosophy, addressing questions about free will, the quantum measurement problem, and whether consciousness plays a role in how reality is observed. Witten reflects candidly on discovery, doubt, beauty in mathematics, and what it feels like to work at the limits of knowledge.

This discussion is essential viewing for anyone interested in theoretical physics, cosmology, quantum theory, and the future of our understanding of the universe.
This program is part of the Rethinking Reality series, supported by the John Templeton Foundation.

Participant: Edward Witten.
Moderator: Brian Greene.

0:00:00 — Introduction: Free Will, Physics, and the Quest to Unify Reality.

Machine learning helps robots see clearly in total darkness using infrared

From disaster zones to underground tunnels, robots are increasingly being sent where humans cannot safely go. But many of these environments lack natural or artificial light, making it difficult for robotic systems, which usually rely on cameras and vision algorithms, to operate effectively.

A team consisting of Nathan Shankar, Professor Hujun Yin and Dr. Pawel Ladosz from The University of Manchester is tackling this challenge by teaching robots to “see” in the dark. Their approach uses machine learning to reconstruct clear images from infrared cameras—sensors that can “see” even when no visible light is present.

The breakthrough, published in a paper on the arXiv preprint server, means that robots can continue using their existing vision algorithms without making changes, reducing both computational costs and the time it takes to deploy them in the field.
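As a rough illustration of the pipeline idea, the sketch below shows a small convolutional encoder-decoder that maps a raw infrared frame to a restored image, which could then feed an unchanged downstream vision stack. The architecture, layer sizes, and names are illustrative assumptions, not the Manchester team's model.

```python
# A minimal sketch of the general idea, NOT the published model: a small
# convolutional encoder-decoder that maps a raw infrared frame to a cleaned-up
# image for an unchanged downstream vision pipeline. Sizes are illustrative.
import torch
import torch.nn as nn

class IRRestorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, ir_frame):
        return self.decoder(self.encoder(ir_frame))

model = IRRestorer()
ir_batch = torch.rand(4, 1, 128, 128)   # placeholder infrared frames
restored = model(ir_batch)              # images for the existing vision stack
print(restored.shape)                   # torch.Size([4, 1, 128, 128])
```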

Forecasting Spoken Language Development in Children With Cochlear Implants Using Preimplant Magnetic Resonance Imaging

Deep transfer learning using presurgical brain MRI features predicted post–cochlear implant language improvement in children with 92% accuracy, outperforming traditional ML.


Importance Cochlear implants substantially improve spoken language in children with severe to profound sensorineural hearing loss, yet outcomes remain more variable than in children with healthy hearing. This variability cannot be reliably predicted for individual children using age at implant or residual hearing. Development of an artificial intelligence clinical tool to predict which patients will exhibit poorer improvements in language skills may enable an individualized approach to improve language outcomes.

Objective To compare the accuracy of traditional machine learning (ML) with deep transfer learning (DTL) algorithms to predict post–cochlear implant spoken language development in children with bilateral sensorineural hearing loss using a binary classification model of high vs low language improvers.

Design, Setting, and Participants This multicenter diagnostic study enrolled children from English-, Spanish-, and Cantonese-speaking families across 3 independent clinical centers in the US, Australia, and Hong Kong. A total of 278 children with cochlear implants were enrolled from July 2009 to March 2022 with 1 to 3 years of post–cochlear implant outcomes data. All children underwent pre–cochlear implant 3-dimensional volumetric brain magnetic resonance imaging (MRI). ML and DTL algorithms were trained to predict high vs low language improvers in children with cochlear implants using neuroanatomical features from presurgical brain MRI. Data were analyzed from August 2023 to April 2025.
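As a rough illustration of deep transfer learning for this kind of binary prediction task, the sketch below freezes an ImageNet-pretrained 2D backbone and trains a new two-class head. It is not the study's model: the study used 3D volumetric MRI, and the data, labels, and hyperparameters here are placeholders.

```python
# A hedged sketch of deep transfer learning for a binary "high vs low language
# improver" classifier. NOT the study's pipeline: the study used 3D volumetric
# MRI; this simplification reuses a 2D ImageNet-pretrained backbone on slices.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
for param in backbone.parameters():      # freeze the pretrained feature extractor
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new trainable 2-class head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

# Hypothetical batch: MRI slices replicated to 3 channels; labels 0 = low, 1 = high.
slices = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

logits = backbone(slices)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```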

AlphaFold Changed Science. After 5 Years, It’s Still Evolving

Until AlphaFold’s debut in November 2020, DeepMind had been best known for teaching an artificial intelligence to beat human champions at the ancient game of Go. Then it started playing something more serious, aiming its deep learning algorithms at one of the most difficult problems in modern science: protein folding. The result was AlphaFold2, a system capable of predicting the three-dimensional shape of proteins with atomic accuracy.

Its work culminated in the compilation of a database that now contains over 200 million predicted structures, essentially the entire known protein universe, and is used by nearly 3.5 million researchers in 190 countries around the world. The Nature article published in 2021 describing the algorithm has been cited 40,000 times to date. Last year, AlphaFold 3 arrived, extending the capabilities of artificial intelligence to DNA, RNA, and drugs. That transition is not without challenges—such as “structural hallucinations” in the disordered regions of proteins—but it marks a step toward the future.

To understand what the next five years hold for AlphaFold, WIRED spoke with Pushmeet Kohli, vice president of research at DeepMind and architect of its AI for Science division.

From Big Bang To AI, Unified Dynamics Enables Understanding Of Complex Systems

Experiments reveal that inflation not only smooths the universe but populates it with a specific distribution of initial perturbations, creating a foundation for structure formation. The team measured how quantum fluctuations during inflation are stretched and amplified, transitioning from quantum to classical behavior through a process of decoherence and coarse-graining. This process yields an emergent classical stochastic process, captured by Langevin or Fokker-Planck equations, demonstrating how classical stochastic dynamics can emerge from underlying quantum dynamics. The research highlights that the “initial conditions” for galaxy formation are not arbitrary, but constrained by the Gaussian field generated during inflation, possessing specific correlations. This framework provides a cross-scale narrative, linking microphysics and cosmology to life, brains, culture, and ultimately, artificial intelligence, demonstrating a continuous evolution of dynamics across the universe.
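The article does not reproduce the equations, but the emergent classical description it refers to is usually written as a Langevin equation for the coarse-grained inflaton field, with an equivalent Fokker-Planck equation for its probability distribution. The schematic textbook form is shown below; it is not necessarily the exact formulation the team used.

```latex
% Schematic Langevin equation of stochastic inflation for the coarse-grained
% inflaton field \phi, with N the number of e-folds and \xi white noise
% sourced by short-wavelength quantum modes:
\frac{d\phi}{dN} = -\frac{V'(\phi)}{3H^2} + \frac{H}{2\pi}\,\xi(N),
\qquad \langle \xi(N)\,\xi(N') \rangle = \delta(N - N')

% Equivalent Fokker-Planck equation for the probability density P(\phi, N):
\frac{\partial P}{\partial N}
  = \frac{\partial}{\partial \phi}\!\left[\frac{V'(\phi)}{3H^2}\,P\right]
  + \frac{\partial^2}{\partial \phi^2}\!\left[\frac{H^2}{8\pi^2}\,P\right]
```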

Universe’s Evolution, From Cosmos to Cognition

This research presents a unified, cross-scale narrative of the universe’s evolution, framing cosmology, astrophysics, biology, and artificial intelligence as successive regimes of dynamical systems. Rather than viewing these fields as separate, the work demonstrates how each builds upon the previous, connected by phase transitions, symmetry-breaking events, and attractors, ultimately tracing a continuous chain from the Big Bang to contemporary learning systems. The team illustrates how gravitational instability shapes the cosmic web, leading to star and planet formation, and how geochemical cycles establish stable, long-lived attractors, providing the foundation for life’s emergence as self-maintaining reaction networks. The study emphasizes that the universe is not simply evolving in state, but also in its capacity for description and learning, with each transition building on the last.

Lorenz system

The Lorenz system is a three-dimensional dynamical system described by three ordinary differential equations. It was first developed by the meteorologist Edward Lorenz as a simplified model of atmospheric convection, and it describes the chaotic behavior of a fluid layer heated from below.

Although the Lorenz system is deterministic, its dynamics depend on the choice of parameters and initial conditions. For some parameter ranges, the system is predictable: trajectories settle into fixed points or simple periodic orbits. For other parameter ranges, the system becomes chaotic and the solutions never settle down, instead tracing out the butterfly-shaped Lorenz attractor; the resulting sensitivity to initial conditions is popularly known as the butterfly effect. In this regime, small differences in initial conditions grow exponentially, making long-term prediction practically impossible.
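For reference, the three equations are given below; the parameter values σ = 10, ρ = 28, β = 8/3 are the classic choice Lorenz studied, which lies in the chaotic regime.

```latex
% The three ordinary differential equations of the Lorenz system,
% for state variables x, y, z and parameters sigma, rho, beta:
\dot{x} = \sigma\,(y - x), \qquad
\dot{y} = x\,(\rho - z) - y, \qquad
\dot{z} = x\,y - \beta\,z
% The classic chaotic regime uses sigma = 10, rho = 28, beta = 8/3.
```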

Ignorance Is the Greatest Evil: Why Certainty Does More Harm Than Malice

The most dangerous people are not the malicious ones. They’re the ones who are certain they’re right.

Most of the harm in history has been done by people who believed they knew what was right — and acted on that belief without recognizing the limits of their own knowledge.

Socrates understood this long ago: the greatest danger is not *not knowing*, but *not knowing that we don’t know*, especially when paired with power.

Read on to find out why:

* certainty often does more harm than malice
* humility isn’t weakness, it’s discipline
* action doesn’t require certainty, only responsibility
* and why, in an age of systems, algorithms, and institutions, ignorance has quietly become structural.

This isn’t an argument for paralysis or relativism.

It’s an argument for acting without pretending we are infallible.

AI learns to build simple equations for complex systems

A research team at Duke University has developed a new AI framework that can uncover simple, understandable rules that govern some of the most complex dynamics found in nature and technology.

The AI system works much like how history’s great “dynamicists”—those who study systems that change over time—discovered many laws of physics that govern such systems’ behaviors. Similar to how Newton, the first dynamicist, derived the equations that connect force and movement, the AI takes data about how complex systems evolve over time and generates equations that accurately describe them.

The AI, however, can go even further than human minds, untangling complicated nonlinear systems with hundreds, if not thousands, of variables into simpler rules with fewer dimensions.
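The article does not describe the Duke framework in technical detail. A well-known method in the same spirit is sparse regression over a library of candidate terms (the SINDy approach), and the sketch below illustrates that general idea on simulated Lorenz data; everything here, from the candidate library to the threshold value, is an illustrative assumption rather than the published method.

```python
# A minimal sketch of equation discovery via sparse regression (SINDy-style).
# This is NOT the Duke team's framework; the library, threshold, and data
# generation below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

# Simulate data from a known system (the Lorenz system) so we can check
# whether sparse regression recovers its governing terms.
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0, 20, 8000)
sol = solve_ivp(lorenz, (t[0], t[-1]), [1.0, 1.0, 1.0],
                t_eval=t, rtol=1e-8, atol=1e-8)
X = sol.y.T                              # state trajectory, shape (n, 3)
dXdt = np.gradient(X, t, axis=0)         # numerical time derivatives

# Candidate library: constant, linear, and quadratic terms in x, y, z.
x, y, z = X.T
Theta = np.column_stack([np.ones_like(x), x, y, z,
                         x * x, x * y, x * z, y * y, y * z, z * z])
names = ["1", "x", "y", "z", "xx", "xy", "xz", "yy", "yz", "zz"]

# Sequentially thresholded least squares: fit, zero small coefficients, refit.
def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi

Xi = stlsq(Theta, dXdt)
for k, var in enumerate("xyz"):
    terms = [f"{Xi[j, k]:+.2f}*{names[j]}"
             for j in range(len(names)) if Xi[j, k] != 0]
    print(f"d{var}/dt =", " ".join(terms))
```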

Making lighter work of calculating fluid and heat flow

Scientists from Tokyo Metropolitan University have re-engineered the popular Lattice-Boltzmann Method (LBM) for simulating the flow of fluids and heat, making it lighter and more stable than the state-of-the-art.

By formulating the algorithm with a few extra inputs, they successfully got around the need to store certain data, some of which span the millions of points over which a simulation is run. Their findings might overcome a key bottleneck in LBM: memory usage.

The work is published in the journal Physics of Fluids.
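To see why memory is the bottleneck, note that a conventional LBM stores one particle-distribution value per lattice direction at every grid point, often in two copies for the streaming step. The back-of-the-envelope estimate below, for a hypothetical D3Q19 lattice in double precision, is our own illustration rather than a figure from the paper.

```python
# Rough memory estimate for a conventional LBM simulation (illustrative only):
# a D3Q19 lattice stores 19 distribution values per node, in double precision,
# and many implementations keep two copies of the field for the streaming step.
nx = ny = nz = 512                 # hypothetical grid resolution
directions = 19                    # D3Q19 velocity set
bytes_per_value = 8                # double precision
buffers = 2                        # "ping-pong" copies for streaming

nodes = nx * ny * nz
total_bytes = nodes * directions * bytes_per_value * buffers
print(f"{total_bytes / 1e9:.1f} GB")   # roughly 40.8 GB for this setup
```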
