
A Startup Has Been Quietly Pitching Cloned Human Bodies to Transfer Your Brain Into

A billionaire-backed stealth startup called R3 Bio is quietly pursuing an alternative to anti-aging tech that sounds like it was ripped straight out of a dystopian science fiction novel. As Wired reported last week, the company recently announced that it was raising money to develop non-sentient monkey “organ sacks,” an eyebrow-raising alternative to animal testing. Such structures would contain all typical organs except the brain, ultimately serving as a source of donor organs and tissues.

But according to a sprawling follow-up investigation by MIT Technology Review, R3 Bio’s founders secretly have a far more ambitious goal in mind: creating entire “brainless clones” of the human body that aging or ill individuals could one day transplant their brain into. One advantage of not developing the brain in the donor bodies, albeit a ghoulish one: such a brain-free clone would neatly circumvent certain moral conundrums over the concept.

Still, to call the idea ethically fraught would be a vast understatement. One insider, in an interview with Tech Review, likened a pitch they heard from R3’s founder, John Schloendorn, to a “close encounter of the third kind” with “Dr. Strangelove.” The company has since distanced itself from the idea of brainless human clones.

Eyal Aharoni — Breaking the Moral Turing Test

Dr. Eyal Aharoni discusses one of the most provocative frontiers in technology: the automation of moral judgement. His talk focuses on the outcomes of a comparative moral Turing test (in which AI outperformed humans across a range of metrics), as well as AI-assisted medical triage!



Dr. Eyal Aharoni (Georgia State University) joins the Future Day 2026 stage to discuss one of the most provocative frontiers in technology: the automation of moral judgement.

Breaking the Moral Turing Test: Studies of human attribution and deference to AI moral judgment and decision-making.

Joscha Bach & Anders Sandberg — AI, Consciousness and the Cyborg Leviathan

Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!

0:00 Intro.
0:37 What is consciousness? Phenomenology — functionalism & panpsychism.
1:54 Causal boundaries — the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity.
3:20 Minds are not states — they are processes. We don’t see causal filtering in tables.
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism.
9:49 Methodological humility about armchair philosophy of mind.
12:41 Putnam-style Brain-in-a-vat — and why standard objections to AI minds fall flat.
16:37 Is sentience required (or desired) for not just moral competence in AI, but moral motivation as well?
22:35 Why stepping outside yourself is powerful — seeing.
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What’s still missing.
28:16 AI, hybrid minds, and the limits of human augmentation.
32:32 Can minds be extended — in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough.
39:41 Why AI is so data-hungry — and why better algorithms must exist.
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception.
51:05 What comes after copilots: agent teams, multimodality and new AI workflows.
55:32 Can AI help us discover new forms of taste and aesthetics?
59:49 Using AI to learn art history and invent a transhumanist aesthetic.
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI?
1:05:43 How AI changes the way we think and create.
1:08:10 What happens when AI starts shaping human relationships.
1:11:18 Why feeling in control can matter more than being right.
1:12:58 Why intelligence without wisdom is very dangerous.
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere?
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem.
1:29:47 Can AI become more moral than us humans? And if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved.
1:34:31 Traversing the landscape of norms (values).
1:38:14 Can ethics work across nested levels of existence? (From the person-affecting view to the matrioshka-affecting view.)
1:43:08 Moral realism, evolution & game-theoretic symmetries.
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan.
1:59:36 Will superintelligences converge into a cosmic singleton?

Post: https://www.scifuture.org/minds-in-th…

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!
Buy me a coffee? https://buymeacoffee.com/tech101z

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P

Kind regards,
Adam Ford
Science, Technology & the Future — #SciFuture — http://scifuture.org

The brain region associated with moral inconsistency

Though previous studies have identified brain regions that are involved in moral behavior and moral judgement, little is known about how brain activity underpins moral inconsistency.

To identify brain regions associated with moral inconsistency, the researchers used functional magnetic resonance imaging (fMRI) to scan people’s brains during a task that required them to weigh honesty against profit. Participants could earn more money by being dishonest, but they were also asked to rate their own behavior on a 10-point scale from “extremely immoral” to “extremely moral.” The team also monitored the participants’ brain activity while they judged the morality of other people undertaking the same task.

In people who were morally consistent—meaning, they judged themselves and others by the same moral standards—the vmPFC was activated similarly during both the behavioral and judgement tasks. However, in morally inconsistent participants—those who judged other people’s cheating as immoral but rated their own cheating more leniently—the vmPFC was less active during the behavioral task and less connected to other brain regions involved in decision making and morality.

To examine whether vmPFC activity plays a causal role in moral inconsistency, the researchers stimulated some participants’ vmPFCs via a non-invasive method called transcranial temporal interference stimulation (tTIS) before they undertook the behavioral and judging tasks. They showed that vmPFC stimulation resulted in higher levels of moral inconsistency compared to participants who received mock stimulation.

These results suggest that people who are morally inconsistent don’t make use of their vmPFC to integrate information when making behavioral decisions, the researchers say. “Individuals exhibiting moral inconsistency are not necessarily blind to their own moral principles; they are just biologically failing to consider and apply them in their own moral behavior,” says the author. (Via ScienceMission: https://sciencemission.com/Moral-inconsistency)

Defining Alzheimer’s disease: stipulations and the ethics of diagnostic change

In this really interesting essay, Michalon et al. discuss defining Alzheimer’s disease in response to recent proposals to revise the definition and diagnostic criteria for the condition. The essay also provides valuable historical context to the debate.


Recent revisions of Alzheimer’s Disease (AD) definitions by two leading research groups—the Alzheimer’s Association and the International Working Group—reflect divergent approaches: the former promotes a strictly biological definition, while the latter promotes a clinical-biological construct. We contend that this emerging controversy is not merely semantic, but scientifically, clinically, and politically significant. Drawing on philosophical tools and situating the current debate within a broader historical context from the reconceptualization of AD in the 1970s onwards, we explore how definitions can serve as transformative instruments, acting as strategic bets that reshape scientific fields and clinical practices. Ultimately, we draw from the AD case study to argue for a critical reflection on the risks and promises of such definitional acts. We also propose a renewed attention to the ‘ethics of stipulating’ in the field of contemporary biomedical sciences.

In response to advances in diagnostics and therapeutics, two major research groups specialising in Alzheimer’s disease (AD) have recently revised their definition and diagnostic criteria for the condition. While they concur on certain aspects—most notably, the centrality of amyloid and tau pathologies—the two groups have proposed different types of definition. The Alzheimer’s Association (AA) group asserts the following fundamental principle: “AD is defined by its unique neuropathologic findings; therefore, detection of AD neuropathologic change by biomarkers is equivalent to diagnosing the disease” (1, p. 5145). This definition regards specific biological changes as the unique defining feature of the disease, rather than one characteristic among others alongside specific symptoms. In this framework, asymptomatic individuals can be diagnosed with ‘preclinical AD’.

Ben Goertzel responds

As part of Future Day 2026, we hosted a conversation between two of the most provocative minds in AGI – Ben Goertzel and Hugo de Garis (with Adam Ford as moderator/provocateur) – to tackle the ultimate existential question: Is an Artilect War inevitable, and should humanity accept becoming the “number two” species?

The discussion will build upon last year’s discussion between Ben and Hugo on AGI and the Singularity.

It will explore the idea of human transcendence. If we can’t beat them, do we join them?

Will humanity transcend into a Jupiter brain quectotech utility fog?

Is the Artilect War the inevitable conclusion of biological intelligence? Or can we find a path toward existing in a universe that still finds us aesthetically pleasing?

0:00 Intro.

The Terraforming Compendium

Could we sculpt dead planets into living worlds? From artificial crusts and orbital mirrors to taming tectonics and engineering biospheres, this is your definitive guide to turning alien rocks into second Earths.

Watch my exclusive video Fishbowl Starships — Water As Shielding: https://nebula.tv/videos/isaacarthur–…
Get Nebula using my link for 40% off an annual subscription: https://go.nebula.tv/isaacarthur
Get a Lifetime Membership to Nebula for only $300: https://go.nebula.tv/lifetime?ref=isa…
Use the link https://gift.nebula.tv/isaacarthur to give a year of Nebula to a friend for just $36.

Visit our Website: http://www.isaacarthur.net
Support us on Patreon: / isaacarthur
Support us on Subscribestar: https://www.subscribestar.com/isaac-a…
Facebook Group: / 1583992725237264
Reddit: / isaacarthur
Twitter: / isaac_a_arthur on Twitter and RT our future content.
SFIA Discord Server: / discord

Credits:
Interstellar Travel: Can We Survive The Long Journey?
Episode 725; June 15, 2025
Written, Produced & Narrated by: Isaac Arthur

Graphics:
Jarred Eagley
Jeremy Jozwik
Ken York YD Visual
Mafic Studios
Sergio Botero
Select imagery/video supplied by Getty Images

Music Courtesy of Epidemic Sound http://epidemicsound.com/creator
Chris Zabriskie, “Unfoldment, Revealment”, “A New Day in a New Sector”, “Oxygen Garden”, “Wonder Cycle”
Kai Engel, “Endless Story About Sun and Moon”
Taras Harkavyi, “Alpha and…”
Dark Future, “Staring Through” pt1
Miguel Johnson, “The Commanders”, “Far From Home”
Lombus, “Hydrogen Sonata”, “Cosmic Soup”
Aerium, “Deijocht”
Stellardrone, “Red Giant”, “Solar Eclipse”, “Billions and Billions”

Chapters:
0:00 Intro
5:33 What is Terraforming?
8:27 Terraforming vs Para-Terraforming
11:54 Planets vs Megastructures
14:05 Terraforming vs Bioforming
17:14 The Inevitable Hybrid Approach
20:59 Ethics & Debate: Preservation vs. Transformation
22:42 Terraforming as a Civilization-Scale Endeavor
23:46 Terraforming Technologies & Techniques
24:42 Artificial Gravity Solutions
27:58 Atmospheric Manipulation
31:25 Bioforming & Genetic Engineering
34:06 Comet & Asteroid Bombardment
39:43 Domes & Worldhouses
43:24 Geoengineering & Climate Control
47:05 Hydrospheric Engineering
49:58 Magnetosphere Generation
53:35 Fishbowl Starships
55:02 Mass & Orbital Adjustments
1:00:17 Mega-Mirrors & Solar Shades
1:04:30 Oxygenation & Soil Processing
1:07:39 Planetary Shells & Artificial Crusts
1:10:37 Terraforming Nanotechnology
1:14:04 Tidal & Seismic Stabilization
1:18:45 From Theory to Practice: Adapting Terraforming to Specific Worlds
1:20:27 Extreme Radiation Levels
1:23:57 Frequent Asteroid & Meteor Impacts
1:27:41 High Gravity
1:30:29 Highly Eccentric Orbits
1:34:46 Hostile Native Life
1:38:25 Intense Volcanism
1:40:55 Long or Erratic Day/Night Cycles
1:51:09 Low Light Levels
1:52:57 No Air
1:54:25 No Magnetosphere
1:56:17 No Seasons
1:58:13 No Water
2:00:48 Short or Long Years & Seasons
2:02:05 Tidally Locked
2:03:32 Tidally Wracked
2:04:36 Too Cold
2:05:36 Too Hot
2:06:21 Too Much Air
2:07:05 Too Much Ocean
2:08:44 Too Much Solar Wind
2:11:13 Toxic or Corrosive Atmosphere or Surface
2:14:09 Unstable Tectonics
2:15:10 Wrong Air Composition
2:16:21 Final Thoughts


Hume on suicide

Even today, anyone interested in the morality of suicide reads David Hume’s essay on the subject. There are numerous reasons for this, but the central one is that it sets up the starting point for contemporary debate about the morality of suicide, namely, the debate about whether some condition of life could present one with a morally acceptable reason for autonomously deciding to end one’s life. We shall only be able to have this debate if we think that at least some acts of suicide can be moral, and we shall only be able to think this if we give up the blanket condemnation of suicide that theology has put in place. I look at this strategy of argument in the context of the wider eighteenth-century attempt to develop a non-theologically based ethic. The result in Hume’s case is a very modern tract on suicide, with voluntariness and autonomy to the fore, and with reflection on the condition of one’s life, and on one’s desire to carry on living in that condition, as the motivating circumstance.


What Can 50-Year-Old Chatbots Teach Us About Clinical Applications of AI?

Can a large language model (LLM) provide insights on the history of chatbots and their clinical applications? 🤖

In this episode of JAMA+ AI Conversations, JAMA+ AI Editor in Chief Roy Perlis, MD, MSc, interviews OpenAI’s ChatGPT (GPT-4o, voice mode) about the development and legacy of the first clinical chatbots, ELIZA and PARRY.

The discussion explores differing perspectives of their creators, as well as how foundational debates about technology and ethics continue to inform the present landscape of AI in mental health care.

🎧 Listen now.


