
Why the economics of orbital AI are so brutal

He’s not alone. xAI’s head of compute has reportedly bet his counterpart at Anthropic that 1% of global compute will be in orbit by 2028. Google (which has a significant ownership stake in SpaceX) has announced a space AI effort called Project Suncatcher, which will launch prototype vehicles in 2027. Starcloud, a startup backed by Google and Andreessen Horowitz that has raised $34 million, filed its own plans for an 80,000-satellite constellation last week. Even Jeff Bezos has said this is the future.

But behind the hype, what will it actually take to get data centers into space?

On a first-pass analysis, today’s terrestrial data centers remain cheaper than those in orbit. Andrew McCalip, a space engineer, has built a helpful calculator comparing the two models. His baseline results show that a 1 GW orbital data center might cost $42.4 billion, almost 3x its ground-bound equivalent, thanks to the up-front costs of building the satellites and launching them to orbit.
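The shape of that comparison can be sketched as a toy calculation. The numbers below are illustrative placeholders chosen only to show the structure of the trade-off (hardware capex plus launch mass times launch price), not McCalip’s actual inputs:

```python
# Toy comparison of terrestrial vs. orbital data-center capex for 1 GW.
# All figures are illustrative placeholders, NOT values from McCalip's calculator.

POWER_GW = 1.0

# Terrestrial: rough all-in build cost per GW (illustrative).
terrestrial_capex_per_gw = 15e9  # dollars

# Orbital: satellite hardware plus launch cost (illustrative).
sat_mass_per_gw_kg = 4_000_000   # hypothetical constellation mass for 1 GW
launch_cost_per_kg = 1_500       # hypothetical future heavy-lift price, $/kg
sat_hardware_per_gw = 30e9       # hypothetical cost of the satellites themselves

terrestrial = POWER_GW * terrestrial_capex_per_gw
orbital = POWER_GW * (sat_hardware_per_gw
                      + sat_mass_per_gw_kg * launch_cost_per_kg)

print(f"terrestrial: ${terrestrial / 1e9:.1f}B")
print(f"orbital:     ${orbital / 1e9:.1f}B")
print(f"ratio:       {orbital / terrestrial:.1f}x")
```

Because launch cost scales with mass, the orbital case is dominated by two levers, satellite hardware cost and $/kg to orbit, which is why falling launch prices are central to every bullish forecast.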

Why the Future of Intelligence Is Already Here | Alex Wissner-Gross | TEDxBoston

The future of intelligence is evolving rapidly with advances in AI, which are poised to transform how we live, work, and exist, with exponential growth and sweeping changes expected in the near future.

## Questions to inspire discussion

Strategic Investment & Career Focus

🎯 Q: Which companies should I prioritize for investment or career opportunities in the AI era?

A: Focus on companies with the strongest AI models and on those advancing energy abundance; these will have the largest marginal impact on the innermost loop of robots building fabs, chips, and AI data centers, allowing that loop to accelerate exponentially.

Understanding Market Dynamics

AGI Is Here: AI Legend Peter Norvig on Why it Doesn’t Matter Anymore

Are we chasing the wrong goal with Artificial General Intelligence, and missing the breakthroughs that matter now?

On this episode of Digital Disruption, we’re joined by former research director at Google and AI legend, Peter Norvig.

Peter is an American computer scientist and a Distinguished Education Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is also a researcher at Google, where he previously served as Director of Research and led the company’s core search algorithms group. Before joining Google, Norvig headed NASA Ames Research Center’s Computational Sciences Division, where he served as NASA’s senior computer scientist and received the NASA Exceptional Achievement Award in 2001. He is best known as the co-author, alongside Stuart J. Russell, of Artificial Intelligence: A Modern Approach — the world’s most widely used textbook in the field of artificial intelligence.

Peter sits down with Geoff to separate fact from fiction about where AI is really headed. He explains why the hype around Artificial General Intelligence (AGI) misses the point, how today’s models are already “general,” and what truly matters: making AI safer, more reliable, and human-centered. He discusses the rapid evolution of generative models, the risks of misinformation, AI safety, open-source regulation, and the balance between democratizing AI and containing powerful systems. This conversation explores the impact of AI on jobs, education, cybersecurity, and global inequality, and how organizations can adapt, not by chasing hype but by aligning AI to business and societal goals. If you want to understand where AI actually stands, beyond the headlines, this is the conversation you need to hear.

In this episode:
00:00 Intro.
01:00 How AI evolved since Artificial Intelligence: A Modern Approach.
03:00 Is AGI already here? Norvig’s take on general intelligence.
06:00 The surprising progress in large language models.
08:00 Evolution vs. revolution.
10:00 Making AI safer and more reliable.
12:00 Lessons from social media and unintended consequences.
15:00 The real AI risks: misinformation and misuse.
18:00 Inside Stanford’s Human-Centered AI Institute.
20:00 Regulation, policy, and the role of government.
22:00 Why AI may need an Underwriters Laboratory moment.
24:00 Will there be one “winner” in the AI race?
26:00 The open-source dilemma: freedom vs. safety.
28:00 Can AI improve cybersecurity more than it harms it?
30:00 “Teach Yourself Programming in 10 Years” in the AI age.
33:00 The speed paradox: learning vs. automation.
36:00 How AI might (finally) change productivity.
38:00 Global economics, China, and leapfrog technologies.
42:00 The job market: faster disruption and inequality.
45:00 The social safety net and future of full-time work.
48:00 Winners, losers, and redistributing value in the AI era.
50:00 How CEOs should really approach AI strategy.
52:00 Why hiring a “PhD in AI” isn’t the answer.
54:00 The democratization of AI for small businesses.
56:00 The future of IT and enterprise functions.
57:00 Advice for staying relevant as a technologist.
59:00 A realistic optimism for AI’s future.

#ai #agi #humancenteredai #futureofwork #aiethics #innovation.

A ‘crazy’ dice proof leads to a new understanding of a fundamental law of physics

Right now, molecules in the air are moving around you in chaotic and unpredictable ways. To make sense of such systems, physicists use a law known as the Boltzmann distribution, which, rather than describe exactly where each particle is, describes the chance of finding the system in any of its possible states. This allows them to make predictions about the whole system even though the individual particle motions are random. It’s like rolling a single die: Any one roll is unpredictable, but if you keep rolling it again and again, a pattern of probabilities will emerge.

Developed in the latter half of the 19th century by Ludwig Boltzmann, an Austrian physicist and mathematician, this Boltzmann distribution is used widely today to model systems in many fields, ranging from AI to economics, where it is called “multinomial logit.”
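The law itself is compact: the probability of finding the system in state i falls exponentially with that state’s energy, p_i = exp(-E_i/kT) / Z, where Z normalizes the probabilities. A minimal sketch, which also shows why economists know the very same formula as “multinomial logit” (softmax):

```python
import math

def boltzmann(energies, kT=1.0):
    """Probability of each state i: p_i = exp(-E_i/kT) / Z."""
    weights = [math.exp(-e / kT) for e in energies]
    Z = sum(weights)                      # partition function
    return [w / Z for w in weights]

def softmax(utilities):
    """Multinomial logit: the same formula over utilities u_i = -E_i/kT."""
    m = max(utilities)                    # subtract max for numerical stability
    ws = [math.exp(u - m) for u in utilities]
    s = sum(ws)
    return [w / s for w in ws]

# A loaded three-state "die": lower-energy states are more likely.
energies = [0.0, 1.0, 2.0]
probs = boltzmann(energies)

# Boltzmann over energies == softmax over negated energies, term by term.
assert all(abs(p - q) < 1e-12
           for p, q in zip(probs, softmax([-e for e in energies])))
print(probs)
```

A single draw from this distribution is still unpredictable, but the frequencies over many draws converge to these probabilities, exactly the repeated-dice intuition above.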

Now, economists have taken a deeper look at this universal law and come up with a surprising result: The Boltzmann distribution, their mathematical proof shows, is the only law that accurately describes unrelated, or uncoupled, systems.

Tech Companies Showing Signs of Distress as They Run Out of Money for AI Infrastructure

AI companies are looking to spend trillions of dollars on data centers to power their increasingly resource-intensive AI models — an astronomical amount of money that could threaten the entire economy if the bet doesn’t pay off.

As the race to spend as much money as possible on AI infrastructure rages on, companies have become increasingly desperate to keep the cash flowing. Firms like OpenAI, Anthropic, and Oracle are exhausting existing debt markets — including junk debt, private credit, and asset-backed loans — in moves that, as Bloomberg reports, are raising concerns among investors.

“The numbers are like nothing any of us who have been in this business for 25 years have seen,” Bank of America managing head of global credit Matt McQueen told Bloomberg. “You have to turn over all avenues to make this work.”

Honest or deceptive? What a new signaling model means for animal displays and human claims

For decades, scientists have tried to answer a simple question: why be honest when deception is possible? Whether it is a peacock’s tail, a stag’s roar, or a human’s résumé, signals are means to influence others by transmitting information and advantages can be gained by cheating, for example by exaggeration. But if lying pays, why does communication not collapse?

The dominant theory of honest signals has long been the handicap principle, which claims that signals are honest because they are costly to produce. It argues that a peacock’s tail, for example, is an honest signal of a male’s condition or quality to potential mates precisely because it is so costly: only high-quality birds can afford such a handicap, and by wasting resources growing it they demonstrate their superb quality to females, whereas poor-quality males cannot afford such ornaments.

A new synthesis by Szabolcs Számadó, Dustin J. Penn and István Zachar (from the Budapest University of Technology and Economics, University of Veterinary Medicine Vienna and HUN-REN Centre for Ecological Research, respectively) challenges that logic. They argue that honesty does not depend on how costly or wasteful a signal is, but rather on the trade-offs between investments and benefits, faced by signalers.

Silicon as strategy: the hidden battleground of the new space race

In the consumer electronics playbook, custom silicon is the final step in the marathon: you use off-the-shelf components to prove a product, achieve mass scale and only then invest in proprietary chips to create differentiation, improve operations, and optimize margins.

In the modern satellite communications (SATCOM) ecosystem, this script has been flipped. For the industry’s frontrunners, custom silicon is not a late-stage luxury but the starting line, where the bets are high and the rewards even higher. In the larger project of launching a satellite constellation, building custom silicon is only one piece — and there are very few off-the-shelf options to fall back on.

The shift toward custom silicon is no longer a theoretical debate; it is a proven competitive requirement. To monetize the massive capital expenditure of a constellation, market leaders are already driving aggressive custom silicon programs for beamformers and modems from the very beginning. The consensus is clear: while commercial off-the-shelf (COTS) and field-programmable gate arrays (FPGAs) served as useful stopgaps, they have become a strategic liability that compromises price and power efficiency. If the industry is to scale to the mass market, operators must commit to bespoke silicon programs now — or risk being permanently priced out of the sky by competitors who have already optimized their hardware for the unit economics of space.

How AI & Quantum Are Reshaping Federal Innovation

By Chuck Brooks, president of Brooks Consulting International

#artificialintelligence #tech #government #quantum #innovation #federal #ai

In 2026, government technological innovation has reached a key turning point. After years of modernization plans, pilot projects and gradual acceptance, government leaders are increasingly incorporating artificial intelligence and quantum technologies directly into mission-critical capabilities. These technologies are becoming essential infrastructure for economic competitiveness, national security and scientific advancement rather than mere scholarly curiosities.

We are seeing a deliberate shift in the federal landscape from isolated testing to the planned implementation of emerging technology across the whole of government. This evolution reflects not only technological momentum but also policy leadership, public-private collaboration and expanded industrial capability.
