xAI has lost two cofounders and six other AI researchers in recent weeks amid its merger with SpaceX.
In a bold fusion of SpaceX’s satellite expertise and Tesla’s AI prowess, the Starthink Synthetic Brain emerges as a revolutionary orbital data center.
Proposed in a February 2026 Digital Habitats document, this next-generation satellite leverages the Starlink V3 platform to create a distributed synthetic intelligence wrapping the planet.
Following SpaceX’s FCC filing for up to one million orbital data centers and its acquisition of xAI, Starthink signals humanity’s leap toward a Kardashev II civilization.
As Elon Musk noted in February 2026: “In 36 months, but probably closer to 30, the most economically compelling place to put AI will be space.”
## The Biological Analogy
Starthink draws from neuroscience:

* Neural Cluster: A single Tesla AI5 chip, processing AI inference at ~250W, like a neuron group.
* Synthetic Brain: One Starthink satellite, a 2.5-tonne self-contained node with 500 neural clusters, solar power, storage, and comms.
* Planetary Neocortex: One million interconnected Brains forming a global mesh intelligence, linked by laser and microwave “synapses.”
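A quick sanity check on these figures, as a back-of-envelope sketch: the per-satellite and fleet totals below are derived only from the numbers quoted above (~250 W per AI5 cluster, 500 clusters per satellite, one million satellites) and are my own arithmetic, not figures from the proposal.

```python
# Back-of-envelope power budget from the figures quoted above.
# Assumptions: ~250 W per AI5 "neural cluster", 500 clusters per
# satellite, and the one-million-satellite fleet from the FCC filing.

CLUSTER_POWER_W = 250        # per Tesla AI5 chip (stated above)
CLUSTERS_PER_SAT = 500       # per Starthink satellite (stated above)
FLEET_SIZE = 1_000_000       # orbital data centers in the FCC filing

sat_power_kw = CLUSTER_POWER_W * CLUSTERS_PER_SAT / 1_000
fleet_power_gw = sat_power_kw * FLEET_SIZE / 1_000_000

print(f"Compute draw per satellite: {sat_power_kw:.0f} kW")   # 125 kW
print(f"Compute draw, full fleet:   {fleet_power_gw:.0f} GW")  # 125 GW
```

Even at roughly 125 GW of inference compute (before solar, thermal, and comms overhead), the fleet would sit many orders of magnitude below a true Kardashev II energy budget, so the claim is best read as a trajectory rather than an arrival.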
AI is advancing rapidly, and its exponential trajectory points to sweeping changes in how we live and work in the near future.
## Questions to Inspire Discussion
### Strategic Investment & Career Focus
🎯 Q: Which companies should I prioritize for investment or career opportunities in the AI era?
A: Focus on companies with the strongest AI models and those advancing energy abundance. These will have the largest marginal impact on the innermost loop of exponential acceleration: robots building the fabs, chips, and data centers that produce the next generation of AI.
### Understanding Market Dynamics
For the first time in Earth’s history, one species can rewrite its own genome, rebuild its own brain, and design entirely new forms of intelligence. That combination makes Homo sapiens look less like evolution’s end point and more like a transitional form: an ancestral species whose descendants may be biological, mechanical, or something in between. The way future humans remember us may depend on how seriously our generation takes its role as the first conscious ancestor.
Imagine a descendant civilization, thousands or millions of years from now, trying to reconstruct its origins. Its members might not have bones or blood. They might be born in free-fall habitats orbiting other stars, or instantiated as software in computational substrates that current engineers can barely imagine. Their analysts would comb through archives from a small blue planet called Earth and conclude that the strange, warlike primates who built the first rockets and the first neural networks were not the culmination of evolution, but an ancestral phase.
That premise — the idea that present-day humans are an ancestral species for future humans and other intelligent beings — is beginning to migrate from science fiction into serious scientific and philosophical discussion. Advances in gene editing, synthetic biology, space medicine, brain–computer interfaces and artificial intelligence all point toward a future in which “intelligent beings” no longer form a single species, or even share a single kind of body. The more that picture comes into focus, the more it forces a rethinking of what “being human” means.
One of the biggest challenges in climate science and weather forecasting is predicting the effects of turbulence at spatial scales smaller than the resolution of atmospheric and oceanic models. Simplified sets of equations known as closure models can predict the statistics of this “subgrid” turbulence, but existing closure models are prone to dynamic instabilities or fail to account for rare, high-energy events. Now Karan Jakhar at the University of Chicago and his colleagues have applied an artificial-intelligence (AI) tool to data generated by numerical simulations to uncover an improved closure model [1]. The finding, which the researchers subsequently verified with a mathematical derivation, offers insights into the multiscale dynamics of atmospheric and oceanic turbulence. It also illustrates that AI-generated prediction models need not be “black boxes,” but can be transparent and understandable.
The team trained their AI—a so-called equation-discovery tool—on “ground-truth” data that they generated by performing computationally costly, high-resolution numerical simulations of several 2D turbulent flows. The AI selected the smallest number of mathematical functions (from a library of 930 possibilities) that, in combination, could reproduce the statistical properties of the dataset. Previously, researchers have used this approach to reproduce only the spatial structure of small-scale turbulent flows. The tool used by Jakhar and collaborators filtered for functions that correctly represented not only the structure but also energy transfer between spatial scales.
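Reduced to its core, this equation-discovery pattern is a sparse regression over a term library: fit all candidate terms, discard the small coefficients, and refit until only a few survive. The sketch below illustrates the idea with sequentially thresholded least squares (the STLSQ scheme behind tools like SINDy) on a toy 1D signal; the team's actual 930-function library and turbulence data are of course far richer.

```python
import numpy as np

# Minimal equation-discovery sketch: recover a sparse combination of
# library terms from noisy data via sequentially thresholded least
# squares. Toy stand-in for the paper's 930-function library.

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 400)
target = 1.5 * x**3 - 0.5 * x + rng.normal(0, 0.01, x.size)  # "ground truth"

# Candidate library: polynomials plus a couple of transcendental terms.
library = {
    "1": np.ones_like(x), "x": x, "x^2": x**2, "x^3": x**3,
    "sin(x)": np.sin(x), "exp(x)": np.exp(x),
}
Theta = np.column_stack(list(library.values()))

# Fit, zero out small coefficients, refit on the survivors, repeat.
coef = np.linalg.lstsq(Theta, target, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    keep = ~small
    coef[keep] = np.linalg.lstsq(Theta[:, keep], target, rcond=None)[0]

discovered = {name: round(c, 3) for name, c in zip(library, coef) if c != 0}
print(discovered)  # expect roughly {'x': -0.5, 'x^3': 1.5}
```

The novelty described above is not the regression itself but the filtering criterion: candidate models were required to reproduce energy transfer between spatial scales, not just spatial structure.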
They tested the performance of the resulting closure model by applying it to a computationally practical, low-resolution version of the dataset. The model accurately captured the detailed flow structures and energy transfers that appeared in the high-resolution ground-truth data. It also predicted statistically rare conditions corresponding to extreme-weather events, which have challenged previous models.
In a new study published in Physical Review Letters, researchers used machine learning to discover multiple new classes of two-dimensional memories, systems that can reliably store information despite constant environmental noise. The findings indicate that robust information storage is considerably richer than previously understood.
For decades, scientists believed there was essentially one way to achieve robust memory in such systems—a mechanism discovered in the 1980s known as Toom’s rule. All previously known two-dimensional memories with local order parameters were variations on this single scheme.
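For context, Toom's rule itself is strikingly simple: each cell updates to the majority vote of itself, its north neighbor, and its east neighbor. The asymmetry of that neighborhood is what lets the rule erode islands of errors and hold a stored bit against noise. A minimal sketch (grid size, noise level, and boundary conditions here are my own arbitrary choices):

```python
import numpy as np

# Toom's NEC (north-east-center) rule: each cell takes the majority
# of itself, its north neighbor, and its east neighbor. The
# anisotropy erodes error islands, so a stored bit survives noise.

def toom_step(grid: np.ndarray, noise: float, rng) -> np.ndarray:
    north = np.roll(grid, 1, axis=0)    # periodic boundaries
    east = np.roll(grid, -1, axis=1)
    majority = (grid + north + east) >= 2
    flips = rng.random(grid.shape) < noise  # random errors each step
    return (majority ^ flips).astype(np.int8)

rng = np.random.default_rng(1)
grid = np.ones((64, 64), dtype=np.int8)  # store the bit "1"
for _ in range(500):
    grid = toom_step(grid, noise=0.02, rng=rng)

# Despite 2% random flips per cell per step, the bit is still readable:
print("mean state after 500 noisy steps:", grid.mean())  # stays near 1
```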
The challenge lies in the sheer scale of possibilities. The number of potential local update rules for a simple two-dimensional cellular automaton is astronomically large, far greater than the estimated number of atoms in the observable universe. Traditional methods of discovery through exhaustive search or hand-design are therefore impractical at this scale.
Researchers have significantly enhanced an artificial intelligence tool used to rapidly detect bacterial contamination in food by eliminating misclassifications of food debris that looks like bacteria. Current methods to detect contamination of foods such as leafy greens, meat and cheese, which typically involve cultivating bacteria, often require specialized expertise and are time-consuming—taking several days to a week.
Luyao Ma, an assistant professor at Oregon State University, and her collaborators from the University of California, Davis, Korea University and Florida State University, have developed a deep learning-based model for rapid detection and classification of live bacteria using digital images of bacteria microcolonies. The method enables reliable detection within three hours. The findings are published in the journal npj Science of Food.
Their latest breakthrough involves training the model to distinguish bacteria from microscopic food debris to improve its accuracy. A model trained only on bacteria misclassified debris as bacteria more than 24% of the time. The enhanced model, trained on both bacteria and debris, eliminated misclassifications.
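Conceptually, the fix is a move from one-class to two-class training: give the classifier explicit debris examples instead of only bacteria. The sketch below shows that setup with a generic small CNN in PyTorch; the authors' actual architecture, preprocessing, and data are not described in this summary, so every detail here is illustrative.

```python
import torch
import torch.nn as nn

# Illustrative two-class setup: instead of training only on bacteria
# microcolonies (which pushes look-alike debris into the "bacteria"
# class), train on labeled patches of both debris (0) and bacteria (1).

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # logits: [debris, bacteria]
        )

    def forward(self, x):
        return self.net(x)

model = PatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for labeled microscopy patches.
images = torch.randn(8, 3, 64, 64)   # 8 RGB image patches
labels = torch.randint(0, 2, (8,))   # 0 = debris, 1 = bacteria

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.3f}")
```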
In December, the artificial intelligence company Anthropic unveiled its newest tool, Interviewer, used in its initial implementation “to help understand people’s perspectives on AI,” according to a press release. As part of Interviewer’s launch, Anthropic publicly released 1,250 anonymized interviews conducted on the platform.
A proof-of-concept demonstration, however, conducted by Tianshi Li of the Khoury College of Computer Sciences at Northeastern University, presents a method for de-anonymizing anonymized interviews using widely available large language models (LLMs) to associate responses with the real people who participated. The paper is published on the arXiv preprint server.
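The paper's exact pipeline is not reproduced in this summary, but such linkage attacks generally take the same shape: prompt an LLM to extract quasi-identifiers (occupation, location, employer, distinctive events) from each transcript, then match those against public profiles. The sketch below is a hypothetical illustration of that shape; the prompt, the `match_profiles` scoring, and the choice of OpenAI's chat API are my assumptions, not the paper's method.

```python
from openai import OpenAI

# Hypothetical linkage-attack sketch (NOT the paper's pipeline):
# 1) use an off-the-shelf LLM to pull quasi-identifiers out of an
#    "anonymized" transcript, 2) rank candidate public profiles by
#    how many of those identifiers they match.

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_identifiers(transcript: str) -> list[str]:
    """Ask the model for quasi-identifiers mentioned in the text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "List every quasi-identifier in this interview "
                       "(occupation, employer, location, age range, "
                       "distinctive events), one per line:\n\n" + transcript,
        }],
    )
    text = resp.choices[0].message.content
    return [line.strip().lower() for line in text.splitlines() if line.strip()]

def match_profiles(identifiers: list[str], profiles: list[dict]) -> list[dict]:
    """Naive linkage: rank profiles by how many identifiers they contain."""
    def score(profile: dict) -> int:
        blob = " ".join(str(v).lower() for v in profile.values())
        return sum(ident in blob for ident in identifiers)
    return sorted((p for p in profiles if score(p) > 0), key=score, reverse=True)
```

The point of the demonstration is that none of this requires special tooling: widely available LLMs already handle the extraction step well enough to undermine naive anonymization.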
“Old systems of the past are collapsing, and new systems of the future are still to be born. I call this moment the great progression.”
We are at a tipping point. In the next 25 years, technologies like AI, clean energy, and bioengineering are poised to reshape society on a scale few can imagine.
Peter Leyden draws on decades of observing technological revolutions and historical patterns to show how old systems collapse, new ones rise, and humanity faces both extraordinary risk and unprecedented opportunity.