
To Meld A.I. With Supercomputers, National Labs Are Picking Up the Pace

For years, Rick Stevens, a computer scientist at Argonne National Laboratory, pushed the notion of transforming scientific computing with artificial intelligence.

But even as Mr. Stevens worked toward that goal, government labs like Argonne — created in 1946 and sponsored by the Department of Energy — often took five years or more to develop powerful supercomputers that can be used for A.I. research. Mr. Stevens watched as companies like Amazon, Microsoft and Elon Musk’s xAI made faster gains by installing large A.I. systems in a matter of months.

Ursula Eysin on Uncertainty and Future Scenarios

How do we turn uncertainty from a threat into an advantage?

Three years ago, I sat down with someone who has built her entire career around that question: Ursula Eysin, founder of Red Swan and one of the most multidimensional futurists I’ve ever met.

Ursula is a trained ballerina who speaks seven languages, reads chemistry books for fun, mentors startups, and teaches at five universities — and somehow still finds time to help leaders navigate the unknown with clarity and courage.

In this conversation, we dig into:

• Why predicting the future is a powerless position
• Scenario planning vs. futurism — and why leaders need both
• How to reframe uncertainty as a strategic asset
• What it truly means to connect as humans in an age of AI
• And why strong, diverse leadership matters more than ever.

My favourite line from Ursula remains razor-sharp:

“Turn uncertainty into an advantage. See it as a gift. And connect to other people.”

If you’re steering a team, a company, or even your own life through volatility, this one is worth your time.

How the French philosopher Jean Baudrillard predicted today’s AI 30 years before ChatGPT

One of the most important members of this enlightened club is the philosopher Jean Baudrillard – even though his reputation over the past couple of decades has diminished to an association with a now bygone era when fellow French theorists such as Roland Barthes and Jacques Derrida reigned supreme.

In writing our new biography of Baudrillard, however, we have been reminded just how prescient his predictions about modern technology and its effects have turned out to be. Especially insightful is his understanding of digital culture and AI – presented over 30 years before the launch of ChatGPT.

Back in the 1980s, cutting-edge communication technology involved devices which seem obsolete to us now: answering machines, fax machines, and (in France) Minitel, an interactive online service that predated the internet. But Baudrillard’s genius lay in foreseeing what these relatively rudimentary devices suggested about likely future uses of technology.

Argonaut lunar lander family grows

Today, the European Space Agency’s Argonaut lunar lander programme welcomes new members to its growing family. At ESA’s European Astronaut Centre (EAC) near Cologne, Germany, Thales Alenia Space Italy – the prime contractor for Argonaut’s first lander – signed agreements with Thales Alenia Space in France, OHB in Germany, and Thales Alenia Space and Nammo in the United Kingdom.

Argonaut represents Europe’s autonomous, versatile and reliable access to the Moon. Starting with the first mission in 2030, Argonaut landers will be launched on Ariane 6 rockets, each delivering up to 1.5 tonnes of exploration-enabling cargo to the Moon’s surface, from scientific instruments and rovers to vital resources for astronauts such as food, water and air.

Earlier this year, ESA selected Thales Alenia Space Italy to lead the development of the first Argonaut lander, or Lunar Descent Element. Today’s signing ceremony took place in a symbolic location: the LUNA analogue facility at EAC, home to a full-scale Argonaut model – a tangible vision of Europe’s future presence on the Moon.

Machine learning algorithm rapidly reconstructs 3D images from X-ray data

Soon, researchers may be able to create movies of their favorite protein or virus better and faster than ever before. Researchers at the Department of Energy’s SLAC National Accelerator Laboratory have pioneered a new machine learning method—called X-RAI (X-Ray single particle imaging with Amortized Inference)—that can “look” at millions of X-ray laser-generated images and create a three-dimensional reconstruction of the target particle. The team recently reported their findings in Nature Communications.

X-RAI’s ability to sort through a massive number of images and learn as it goes could remove long-standing limits on data-gathering, allowing researchers to see molecules up close—and perhaps even on the move. “There is really no limit” to the dataset size it can handle, said SLAC staff scientist Frédéric Poitevin, one of the study’s principal investigators.
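The "no limit" claim hinges on the amortized-inference idea in X-RAI's name: rather than running a separate optimization to estimate each image's unknown particle orientation, a model is trained once to map any image directly to its latent parameters, so each additional image costs only a forward pass. The following is a minimal illustrative sketch of that trade-off, not the actual X-RAI model: the "images" are toy 2D projections, and a closed-form encoder stands in for the trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (purely illustrative, not the X-RAI pipeline): each "image"
# is a noisy 2D projection of a fixed point rotated by an unknown angle.
n = 1000
angles = rng.uniform(0, 2 * np.pi, n)                  # latent orientations
images = np.stack([np.cos(angles), np.sin(angles)], axis=1)
images += 0.05 * rng.normal(size=images.shape)         # detector noise

# Amortized inference: one shared function applied to every image.
# Here a closed-form "encoder" stands in for a trained neural network.
def amortized_encoder(batch):
    return np.arctan2(batch[:, 1], batch[:, 0]) % (2 * np.pi)

estimates = amortized_encoder(images)                  # one cheap pass over all n
err = np.abs((estimates - angles + np.pi) % (2 * np.pi) - np.pi)
print(f"mean angular error: {err.mean():.3f} rad")
```

Because the per-image optimization is replaced by a shared encoder, adding more images only adds forward passes; that is the sense in which dataset size stops being the bottleneck.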

Humans bring gender bias to their interactions with AI, finds study

Humans bring gender biases to their interactions with Artificial Intelligence (AI), according to new research from Trinity College Dublin and Ludwig-Maximilians Universität (LMU) Munich.

The study, involving 402 participants, found that people exploited female-labeled AI and distrusted male-labeled AI to much the same extent as they do human partners bearing the same gender labels.

Notably, in the case of female-labeled AI, the study found that exploitation in the Human-AI setting was even more prevalent than in the case of human partners with the same gender labels.

The Intelligence Foundation Model Could Be The Bridge To Human Level AI

Borui Cai and Yao Zhao from Deakin University (Australia) presented a concept that they believe will bridge the gap between modern chatbots and general-purpose AI. Their proposed “Intelligence Foundation Model” (IFM) shifts the focus of AI training from merely learning surface-level data patterns to mastering the universal mechanisms of intelligence itself. By utilizing a biologically inspired “State Neural Network” architecture and a “Neuron Output Prediction” learning objective, the framework is designed to mimic the collective dynamics of biological brains and internalize how information is processed over time. This approach aims to overcome the reasoning limitations of current Large Language Models, offering a scalable path toward true Artificial General Intelligence (AGI) and theoretically laying the groundwork for the future convergence of biological and digital minds.


The Intelligence Foundation Model represents a bold new proposal in the quest to build machines that can truly think. We currently live in an era dominated by Large Language Models like ChatGPT and Gemini. These systems are incredibly impressive feats of engineering that can write poetry, solve coding errors, and summarize history. However, despite their fluency, they often lack the fundamental spark of what we consider true intelligence.

They are brilliant mimics that predict statistical patterns in text but do not actually understand the world or learn from it in real-time. A new research paper suggests that to get to the next level, we need to stop modeling language and start modeling the brain itself.

Borui Cai and Yao Zhao have introduced a concept they believe will bridge the gap between today’s chatbots and Artificial General Intelligence. Published in a preprint on arXiv, their research argues that existing foundation models suffer from severe limitations because they specialize in specific domains like vision or text. While a chatbot can tell you what a bicycle is, it does not understand the physics of riding one in the way a human does.
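To make the “Neuron Output Prediction” objective concrete, here is a deliberately tiny sketch: a learner observes the outputs of a recurrent “state network” over time and is fit, by plain least squares, to predict each neuron’s next output from the current ones. The paper’s actual State Neural Network architecture and training objective are far richer; every name, scale, and modeling choice below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 16                                         # number of "neurons" (toy scale)
W_true = rng.normal(size=(d, d)) / np.sqrt(d)  # hidden recurrent weights

def step(s):
    """Ground-truth network dynamics the learner never sees directly."""
    return np.tanh(W_true @ s)

# Record many short state trajectories as (current outputs, next outputs) pairs.
S_list, T_list = [], []
for _ in range(200):
    s = rng.normal(size=d)
    for _ in range(5):
        s_next = step(s)
        S_list.append(s)
        T_list.append(s_next)
        s = s_next
S, T = np.array(S_list), np.array(T_list)

# "Neuron output prediction" as a least-squares fit: because the toy
# dynamics are tanh-of-linear, arctanh(targets) makes the problem linear.
Y = np.arctanh(np.clip(T, -1 + 1e-12, 1 - 1e-12))
W_hat, *_ = np.linalg.lstsq(S, Y, rcond=None)

pred = np.tanh(S @ W_hat)                      # predicted next neuron outputs
mse = np.mean((pred - T) ** 2)
print(f"next-output prediction MSE: {mse:.2e}")
```

The real proposal replaces this linear toy with a trained State Neural Network; the sketch only shows the shape of the objective, namely predicting what the neurons do next rather than predicting the next token of text.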

Early experiments in accelerating science with GPT-5

Most strikingly, the paper claims four genuinely new mathematical results, carefully verified by the human mathematicians involved. In a discipline where truth is eternal and progress is measured in decades, an AI contributed novel insights that helped settle previously unsolved problems. The authors stress these contributions are “modest in scope but profound in implication”—not because they’re minor, but because they represent a proof of concept. If GPT-5 can do this now, what comes next?

The paper carries an undercurrent of urgency: many scientists still don’t realize what’s possible. The authors are essentially saying, “Look, this is already working for us—don’t get left behind.” Yet they avoid boosterism, emphasizing the technology’s current limitations as clearly as its strengths.


What we’re learning from collaborations with scientists.
