
Eve Poole on Robot Souls, Junk Code and the Future of AI

Are we building AI that enhances humanity or a master race of beautifully optimized psychopaths?

My latest Singularity.FM conversation with Dr. Eve Poole goes straight to the nerve:

What makes us human, and what happens when we leave that out of our machines?

Eve argues that the very things Silicon Valley dismisses as “junk code”—our emotions, intuition, uncertainty, meaning-making, story, conscience, even our mistakes—aren’t flaws in our design. They’re the *reason* our species survived. And we’re coding almost none of it into AI.

The result? Systems with immense intelligence but no soul, no context, no humanity—and therefore, no reason to value us.

In this wide-ranging conversation, we dig into:

🔹 Why the real hallmarks of humanity aren’t IQ but junk code
🔹 Consciousness, soul, and the limits of rationalist AI thinking
🔹 Theology, capitalism & tech: how we ended up copying the wrong parts of ourselves
🔹 Why “alignment” is really a parenting challenge, not a control problem
🔹 What Tolkien, eucatastrophe, and ancient stories can teach us about surviving the future
🔹 Why programming in humanity isn’t for AI’s sake—it’s for ours.

The 2026 Timeline: AGI Arrival, Safety Concerns, Robotaxi Fleets & Hyperscaler Timelines | 221

The rapid advancement of AI and related technologies is expected to bring about a transformative turning point in human history by 2026, making traditional measures of economic growth, such as GDP, obsolete and requiring new metrics to track progress.

Questions to inspire discussion.

Measuring and Defining AGI

🤖 Q: How should we rigorously define and measure AGI capabilities?
A: Use benchmarks to quantify specific capabilities rather than debating terminology, enabling clear communication about what AGI can actually do across multiple domains like marine biology, accounting, and art simultaneously.

🧠 Q: What makes AGI fundamentally different from human intelligence?
A: AGI represents a complementary, orthogonal form of intelligence to human intelligence, not a replicative one, with the potential to find cross-domain insights by combining expertise across fields humans typically can’t master simultaneously.

📊 Q: How can we measure AI self-awareness and moral status?
A: Apply personhood benchmarks that quantify AI models’ self-awareness and requirements for moral treatment, with Opus 4.5 currently being state-of-the-art on these metrics, allowing rigorous comparison across models.

AI Capabilities and Risks.

Elon Musk on AGI Timeline, US vs China, Job Markets, Clean Energy & Humanoid Robots

Questions to inspire discussion.

🤖 Q: How quickly will AI and robotics replace human jobs?
A: AI and robotics will do half or more of all jobs within the next 3–7 years, with white-collar work being replaced first, followed by blue-collar labor through humanoid robots.

🏢 Q: What competitive advantage will AI-native companies have?
A: Companies that are entirely AI-powered will demolish competitors, much as a spreadsheet with a single manually calculated cell cannot compete with one that is entirely computed.

💼 Q: What forces companies to adopt more AI?
A: Companies using more AI will outcompete those using less, creating a forcing function for increased AI adoption, since inertia currently keeps humans doing tasks AI is capable of.

📊 Q: How much of enterprise software development can AI handle autonomously?
A: Blitzy, an AI platform using thousands of specialized agents, autonomously handles 80%+ of enterprise software development, increasing engineering velocity 5x when paired with human developers.

Energy and Infrastructure.

CRISPR vs Aging: What’s Actually Happening Right Now

🧠 VIDEO SUMMARY:
CRISPR gene editing in 2025 is no longer science fiction. From curing rare immune disorders and type 1 diabetes to lowering cholesterol and reversing blindness in mice, breakthroughs are transforming medicine today. With AI accelerating precision tools like base editing and prime editing, CRISPR not only cures diseases but also promises longer, healthier lives and maybe even longevity escape velocity.

0:00 – Intro: First human treated with prime editing
0:35 – The DNA Problem
1:44 – CRISPR 1.0: The Breakthrough
3:19 – AI + CRISPR 2.0 & 3.0
4:47 – Epigenetic Reprogramming
5:54 – From the Lab to the Body
7:28 – Risks, Ethics & Power
8:59 – The 2030 Vision

👇 Don’t forget to check out the first three parts in this series:
Part 1 – “Longevity Escape Velocity: The Race to Beat Aging by 2030”
Part 2 – “Longevity Escape Velocity 2025: Latest Research Uncovered!”
Part 3 – “Longevity Escape Velocity: How AI is making us immortal by 2030!”

📌 Easy Insight simplifies the future — from longevity breakthroughs to mind-bending AI and quantum revolutions.

🔍 KEYWORDS:
longevity, longevity escape velocity, AI, artificial intelligence, quantum computing, supercomputers, simplified science, easy insight, CRISPR 2025, CRISPR gene editing, CRISPR cures diseases, CRISPR longevity, prime editing 2025, base editing 2025, AI in gene editing, gene editing breakthroughs, gene therapy 2025, life extension 2025, reversing aging with CRISPR, CRISPR diabetes cure, CRISPR cholesterol PCSK9, CRISPR ATTR amyloidosis, CRISPR medical revolution, Easy Insight longevity.


Who Wants to Enhance Their Cognitive Abilities? Potential Predictors of the Acceptance of Cognitive Enhancement

In the 21st century, powerful new technologies, such as various artificial intelligence (AI) agents, have become omnipresent and the center of public debate. With the increasing fear of AI agents replacing humans, there are discussions about whether individuals should strive to enhance themselves. For instance, the philosophical movement Transhumanism proposes the broad enhancement of human characteristics such as cognitive abilities, personality, and moral values. This enhancement should help humans overcome their natural limitations and keep up with the powerful technologies that are increasingly present in today’s world. In the present article, we focus on one of the most frequently discussed forms of enhancement—the enhancement of human cognitive abilities.

Not only in science but also among the general population, cognitive enhancement, such as increasing one’s intelligence or working memory capacity, has been a frequently debated topic for many years. A great deal of psychological and neuroscientific research has thus investigated different methods of increasing cognitive abilities, but effective methods for cognitive enhancement are so far lacking. Nevertheless, multiple different (and partly new) technologies that promise an enhancement of cognition are available to the general public. Transhumanists especially promote the application of brain stimulation techniques, smart drugs, or gene editing for cognitive enhancement. Importantly, little is known about the characteristics of individuals who would use such enhancement methods to improve their cognition. Thus, in the present study, we investigated different predictors of the acceptance of multiple widely discussed enhancement methods. More specifically, we tested whether individuals’ psychometrically measured intelligence, self-estimated intelligence, implicit theories about intelligence, personality (Big Five and Dark Triad traits), and specific interests (science-fiction hobbyism) as well as values (purity norms) predict their acceptance of cognitive enhancement (i.e., whether they would use such methods to enhance their cognition).

New research reveals a subtle and dark side-effect of belief in free will

A new study published in Applied Psychology provides evidence that the belief in free will may carry unintended negative consequences for how individuals view gay men. The findings suggest that while believing in free will often promotes moral responsibility, it is also associated with less favorable attitudes toward gay men and preferential treatment for heterosexual men. This effect appears to be driven by the perception that sexual orientation is a personal choice.

Psychological research has historically investigated the concept of free will as a positive force in social behavior. Scholars have frequently observed that when people believe they have control over their actions, they tend to act more responsibly and helpfully. The general assumption has been that a sense of agency leads to adherence to moral standards. However, the authors of the current study argued that this sense of agency might have a “dark side” when applied to social groups that are often stigmatized.

The researchers reasoned that if people believe strongly in human agency, they may incorrectly attribute complex traits like sexual orientation to personal decision-making. This attribution could lead to the conclusion that gay men are responsible for their sexual orientation.

Feral AI gossip with the potential to spread damage and shame will become more frequent, researchers warn

“Feral” gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage and shame, humiliation, anxiety, and distress, researchers have warned.

Chatbots like ChatGPT, Claude, and Gemini don’t just make things up—they generate and spread gossip, complete with negative evaluations and juicy rumors that can cause real-world harm, according to new analysis by philosophers Joel Krueger and Lucy Osler from the University of Exeter.

The research is published in the journal Ethics and Information Technology.

THE BRAVE AND THE COWARDS — SRI Newsletter December 2025

As the geopolitical climate shifts, we increasingly hear warmongering pronouncements that tend to resurrect popular sentiments we naïvely believed had been buried by history. Among these is the claim that Europe is weak and cowardly, unwilling to cross the threshold between adolescence and adulthood. Maturity, according to this narrative, demands rearmament and a head-on confrontation with the challenges of the present historical moment. Yet beneath this rhetoric lies a far more troubling transformation.

We are witnessing a blatant attempt to replace the prevailing moral framework—until recently ecumenically oriented toward a passive and often regressive environmentalism—with a value system founded on belligerence. This new morality defines itself against “enemies” of presumed interests, whether national, ethnic, or ideological.

Those who expected a different kind of shift—one that would abandon regressive policies in favor of an active, forward-looking environmentalism—have been rudely awakened. The self-proclaimed revolutionaries sing an old and worn-out song: war. These new “futurists” embrace a technocratic faith that goes far beyond a legitimate trust in science and technology—long maligned during the previous ideological era—and descends into open contempt for human beings themselves, now portrayed as redundant or even burdensome in the age of the supposedly unstoppable rise of artificial intelligence.

Neutrality isn’t a safe strategy on controversial issues, research shows

Researchers Rachel Ruttan and Katherine DeCelles of the University of Toronto’s Rotman School of Management are anything but neutral on neutrality. The next time you’re tempted to play it safe on a hot-button topic, their evidence-based advice is to consider saying what you really think.

That’s because their recent research, based on more than a dozen experiments with thousands of participants, reveals that people take a dim view of others’ professed neutrality on controversial issues, rating them just as morally suspect as those expressing an opposing viewpoint, if not worse.

“Neutrality gives you no advantage over opposition,” says Prof. Ruttan, an associate professor of organizational behavior and human resource management with an interest in moral judgment and prosocial behavior. “You’re not pleasing anyone.”
