One physicist says his design to use nuclear waste as fuel for nuclear fusion could help the U.S. be a leader in the fusion economy.

Legacy Auto’s Desperation vs. Tesla’s Dominance.
## Abstract.
In the accelerating automobility transformation, legacy automakers like Ford—grappling with $12 billion in EV losses since 2023, including $2.2 billion in H1 2025 and projections up to $5.5 billion for the year—desperately seek Tesla’s technological lifelines, yet Tesla has scant incentive to license its Full Self-Driving (FSD) system.
This report unveils the Darwinian imbalance: Tesla’s unassailable edge of 4.5 billion FSD miles (adding millions daily) is propelling intelligent vehicles (IVs) toward being 10x safer than human drivers, and is poised to eliminate more than 1 million global road deaths, 50 million injuries, and $4 trillion in economic damage each year.
Bolstered by vertical integration, unboxed manufacturing for sub-$30,000 Cybercabs at unprecedented rates, a 70,000+ connector Supercharger network, and robotaxi economics unlocking a $10 trillion market by 2029, Tesla dominates—hastening an 80% decline in private ownership by 2030 per Tony Seba, fostering shared fleets, urban digital twins, and integrated energy systems for sustainable communities worldwide.
Discover why legacy desperation fuels Tesla’s triumph in reshaping transportation.
[Get The Imbalance in Automobility Transformation White Paper](https://cdn.shopify.com/s/files/1/1295/2229/files/The_Imbala…756222023)
Questions to inspire discussion.
AI and Supercomputing Developments.
🖥️ Q: What is xAI’s Colossus 2 and its significance? A: xAI’s Colossus 2 is planned to be the world’s first gigawatt-plus AI training supercomputer, with a non-trivial chance of achieving AGI (Artificial General Intelligence).
⚡ Q: How does Tesla plan to support the power needs of Colossus 2? A: Elon Musk plans to build power plants and battery storage in America to support the massive power requirements of the AI training supercomputer.
💰 Q: What is Musk’s prediction for universal income by 2030? A: Musk believes universal high income will be achieved, providing everyone with the best medical care, food, home, transport, and other necessities.
🏭 Q: How does Musk plan to simulate entire companies with AI? A: Musk aims to simulate entire companies like Microsoft with AI, representing a major jump in AI capabilities but limited to software replication, not complex physical products.
Part 1 of the Singularity Series was “Putting Brakes on the Singularity.” That essay looked at how economic and other non-technical factors will slow down the practical effects of AI, and argued that we should question the supposedly immediate leap from AGI to SAI (superintelligent AI).
In part 3, I will consider past singularities, different paces for singularities, and the difference between intelligence and speed accelerations.
In part 4, I will follow up by offering alternative models of AI-driven progress.
Ten years from now, it will be clear that the primary ways we use generative AI circa 2025—rapidly crafting content based on simple instructions and open-ended interactions—were merely building blocks of a technology that will increasingly be built into far more impactful forms.
The real economic effect will come as different modes of generative AI are combined with traditional software logic to drive expensive activities like project management, medical diagnosis, and insurance claims processing in increasingly automated ways.
In my consulting work helping the world’s largest companies design and implement AI solutions, I’m finding that most organizations are still struggling to get substantial value from generative AI applications. As impressive and satisfying as these applications are, their inherent unpredictability makes them difficult to integrate into the kind of highly standardized business processes that drive the economy.
A look at the next big iteration of the transformative technology.
In a recent episode of High Signal, we spoke with Dr. Fei-Fei Li about what it really means to build human-centered AI, and where the field might be heading next.
Fei-Fei doesn’t describe AI as a feature or even an industry. She calls it a “civilizational technology”—a force as foundational as electricity or computing itself. This has serious implications for how we design, deploy, and govern AI systems across institutions, economies, and everyday life.
Our conversation was about more than short-term tactics. It was about how foundational assumptions are shifting around interface, intelligence, and responsibility, and what that means for technical practitioners building real-world systems today.
Questions to inspire discussion.
Industry Disruption.
🏢 Q: How might traditional companies be affected by AI simulations? A: Traditional firms like Microsoft could see their valuation drop by 50% if undercut by AI clones, while the tech industry may experience millions of jobs vanishing, potentially leading to recessions or increased inequality.
🤖 Q: What is the potential scale of AI company simulations? A: AI-simulated companies like “Macrohard” could become real entities, operating at a fraction of the cost of traditional companies and disrupting markets 10 times faster and on a larger scale than the internet’s impact on retail.
Regulatory Landscape.
📊 Q: How might governments respond to AI-simulated companies? A: Governments may implement regulations on AI companies to slow innovation, potentially creating monopolies that regulators would later need to break up, further disrupting markets.
We’ve all heard the arguments – “AI will supercharge the economy!” versus “No, AI is going to steal all our jobs!” The reality lies somewhere in between. Generative AI is a powerful tool that will boost productivity, but it won’t trigger mass unemployment overnight, and it certainly isn’t Skynet (if you know, you know). The International Monetary Fund (IMF) estimates that “AI will affect almost 40% of jobs around the world, replacing some and complementing others”. In practice, that means a large portion of workers will see some tasks automated by AI, but not necessarily lose their entire job. However, even jobs heavily exposed to AI still require human-only inputs and oversight: AI might draft a report, but you’ll still need someone to fine-tune the ideas and make the decisions.
From an economic perspective, AI will undoubtedly be a game changer. Nobel laureate Michael Spence wrote in September 2024 that AI “has the potential not only to reverse the downward productivity trend, but over time to produce a major sustained surge in productivity.” In other words, AI could usher in a new era of faster growth by enabling more output from the same labour and capital. Crucially, AI often works best in collaboration with existing worker skillsets; in most industries AI has the potential to handle repetitive or time-consuming work (like basic coding or form-filling), letting people concentrate on higher-value work. In short, AI can raise output per worker without making workers redundant en masse. This, in turn, has the potential to raise GDP over time; if that growth occurs in a non-inflationary environment, it could, for example, outpace the growth in US debt.
Some jobs will benefit more than others. Knowledge workers who harness AI – e.g. an analyst using AI to sift data – can become far more productive (and valuable). New roles (AI auditors, prompt engineers) are already emerging. Conversely, jobs heavy on routine information processing are already under pressure. The job of a translator is often cited as the most at risk; for example, today’s AI can already handle roughly 98% of a translator’s typical tasks, and is gradually conquering the more technically challenging work of real-time translation.
Chronic pain is life-changing and considered one of the leading causes of disability worldwide, making daily life difficult for millions of people and exacerbating personal and economic burdens. Despite established theories about the molecular mechanisms behind it, scientists have been unable to identify the specific processes in the body responsible, until now.
In an exciting collaboration, a team led by NDCN’s Professor David Bennett and Professor Simon Newstead of the Department of Biochemistry and the Kavli Institute for NanoScience Discovery has identified a new genetic link to pain, determined the structure of the molecular transporter that this gene encodes, and linked its function to pain.
The findings offer a promising new and specific target against which to develop drugs to alleviate chronic pain. The paper “SLC45A4 is a pain gene encoding a neuronal polyamine transporter” is published in Nature.
Altruism, the tendency to behave in ways that benefit others even if it comes at a cost to oneself, is a valuable human quality that can facilitate cooperation with others and promote meaningful social relationships. Behavioral scientists have been studying human altruism for decades, typically using tasks or games rooted in economics.
Two researchers based at Willamette University and the Laureate Institute for Brain Research recently set out to explore the possibility that large language models (LLMs), such as the model underpinning the conversational platform ChatGPT, can simulate the altruistic behavior observed in humans. Their findings, published in Nature Human Behaviour, suggest that LLMs do in fact simulate altruism in specific social experiments, and offer a possible explanation for why.
“My paper with Nick Obradovich emerged from my longstanding interest in altruism and cooperation,” Tim Johnson, co-author of the paper, told Tech Xplore. “Over the course of my career, I have used computer simulation to study models in which agents in a population interact with each other and can incur a cost to benefit another party. In parallel, I have studied how people make decisions about altruism and cooperation in laboratory settings.