
Roman Yampolskiy — AI: Unexplainable, Unpredictable, Uncontrollable

In this presentation, Dr. Roman V. Yampolskiy provides a rigorous examination of the fundamental limitations of Artificial Intelligence, arguing that as systems approach and surpass human-level intelligence, they become inherently unexplainable, unpredictable, and uncontrollable. He illustrates how the black-box nature of deep learning prevents full audits of decision-making, while concepts like computational irreducibility suggest we cannot forecast the actions of a smarter agent without actually running it, by which point it may be too late to intervene safely. He asserts that there is currently no evidence or mathematical proof to guarantee that a superintelligent system can be safely contained or aligned with human values.
Dr. Yampolskiy further bridges theoretical computer science with safety engineering by applying impossibility results, such as the Halting Problem and Rice’s Theorem, to demonstrate that certain safety guarantees for Artificial General Intelligence (AGI) are mathematically unreachable. These technical impediments lead to a sobering discussion on existential risk, where the inability to verify or monitor advanced systems results in an alarmingly high probability of catastrophic outcomes. By analysing why advanced AI defies traditional engineering safety standards, he makes the case that current trajectories may lead to irreversible consequences for humanity.
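The diagonal argument behind such impossibility results can be sketched in a few lines. This is an illustrative toy only, not anything from the talk: `safe`, `paradox`, and `adversary` are hypothetical names, and "perfect safety decider" stands in for any total program-property checker that Rice's Theorem rules out.

```python
# Toy sketch of the diagonalization behind Rice's Theorem, applied to a
# hypothetical "is this program safe?" decider. Given ANY claimed perfect
# decider, we can construct a program it misjudges.

def paradox(safe):
    """Build a program that defeats a claimed perfect safety decider:
    it misbehaves exactly when the decider declares it safe."""
    def adversary():
        if safe(adversary):          # the decider inspects this very program
            return "UNSAFE_ACTION"   # declared safe -> act unsafely
        return "SAFE_ACTION"         # declared unsafe -> act safely
    return adversary

# Two concrete deciders; each is wrong about its own adversary.
def optimistic(prog):   # approves everything
    return True

def pessimistic(prog):  # rejects everything
    return False

adv1 = paradox(optimistic)
adv2 = paradox(pessimistic)
print(optimistic(adv1), adv1())   # approved as safe, yet acts unsafely
print(pessimistic(adv2), adv2())  # declared unsafe, yet acts safely
```

The point of the sketch is not that real verifiers are written this way, but that no total decider for a non-trivial behavioural property can be right about every program, which is the formal core of the talk's argument.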
To conclude, the talk shifts toward potential pathways for mitigation, emphasising the urgent need to prioritise specialised, narrow AI over the pursuit of general superintelligence. Dr. Yampolskiy argues that while narrow AI can solve global challenges within controllable parameters, the pursuit of AGI represents an existential gamble. He calls for a shift in the research community from a “move fast and break things” mentality to a mathematically grounded approach, urging that we must prove a problem is solvable before investing billions into its deployment.

Whole Brain Emulation & Substrate-Independence: New Beginnings For Old Minds

When a human mind can be emulated — memories, habits, and the weather of thought running on engineered hardware — “uploading” stops being an ending and becomes a beginning. Substrate-independent minds can be backed up, restored, paused without time passing, and deployed into new bodies: telepresence robots, swarms, or chassis built for heat and radiation. Distance turns into bandwidth as consciousness moves as data, bound only by light. Under the spectacle is a harder, technical question: what must be captured, at what scale, for an emulation to be someone — and what rights and power follow once persons are portable infrastructure?

Mind uploading has usually been told as a one-way escape hatch: a last-minute transfer from a failing body into a machine, the technological equivalent of outrunning a deadline. That framing makes the idea feel like a hospice fantasy — dramatic, personal, terminal. But it leaves out the second verb that changes everything. If a mind can be reproduced as a running process, it isn’t just uploaded once; it can be instantiated again, moved, paused, restored, and redeployed. Uploading is capture. Downloading is what makes a mind into something mobile.

The phrase “substrate-independent mind” tries to name that mobility without the melodrama. A substrate is the medium a mind runs on: biological tissue, silicon, specialized hardware, something not yet invented. Independence doesn’t mean the mind floats free of physics; it means the same meaningful mental functions might be implementable on different platforms, like a program that can run on different computers. The promise is not that neurons are irrelevant, but that the mind might be the pattern of information processing the neurons carry out — the thing they do, not the stuff they’re made of.

The language of the unconscious

“The unconscious is structured like a language,” argued psychoanalyst Jacques Lacan.

And now, with the rise of AI-generated video and audio, Lacan’s thinking has taken an unexpected twist.

Might AI therefore capture something key about the human unconscious?

Join leading Lacanian philosopher and collaborator of Slavoj Žižek, Alenka Zupančič, as she argues that AI shows the unconscious is structured like a large language model.

REPLACED BY AI! | Seedance 2 + Kling 3.0 Short Film

The increasing use of Artificial Intelligence (AI) in the workplace is leading to job displacement and raising concerns among employees about job security.

Key Insights

Career Obsolescence Through AI

🔄 After seven years and 1,000 lines of code spent building the AI division, AI engineer David becomes obsolete: the CEO hands him a “sweet pink slip”, eliminates his role, and takes back his company car as AI assumes control of the entire division.

Existential Work Motivation

💭 David questions whether his 7-year dedication was driven by glory, stock options, passion, art, or simply maintaining purpose (“beating heart”), confronting the irony of being replaced by the AI system he built.

Corporate Restructuring Mechanics

AI to help researchers see the bigger picture in cell biology

A new AI framework identifies which information about a cell is unique to a single measurement modality and which is shared across multiple modalities. This gives researchers a more complete picture of a cell's state and could help them understand disease mechanisms and plan treatments.
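The shared-versus-modality-specific decomposition the summary describes can be illustrated with a toy simulation. This is not the paper's framework; it simply generates two synthetic "modalities" that each mix a shared cell-state signal with private noise, and checks that only the shared component correlates across modalities.

```python
import numpy as np

# Toy illustration (not the actual framework): two measurement modalities,
# each combining a shared cell-state signal with modality-specific variation.

rng = np.random.default_rng(1)
n = 2000
shared = rng.standard_normal(n)   # cell state visible to both modalities
priv_a = rng.standard_normal(n)   # variation only modality A sees
priv_b = rng.standard_normal(n)   # variation only modality B sees

mod_a = shared + priv_a           # modality A measurement
mod_b = shared + priv_b           # modality B measurement

# Cross-modality correlation reflects only the shared component.
# For unit-variance signals mixed like this, the expected value is 0.5.
r = np.corrcoef(mod_a, mod_b)[0, 1]
print(round(r, 2))

# Private components are uncorrelated across modalities.
r_priv = np.corrcoef(priv_a, priv_b)[0, 1]
print(abs(r_priv) < 0.1)
```

Real multi-omics methods work on high-dimensional data and learn the decomposition rather than assuming it, but the underlying idea — separating what every modality agrees on from what each measures alone — is the same.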

DeepSeek withholds latest AI model from US chipmakers including Nvidia, sources say

SAN FRANCISCO/SINGAPORE — DeepSeek, the Chinese artificial intelligence lab whose low-cost model rattled global markets last year, has not shown US chipmakers its upcoming flagship model for performance optimization, two sources familiar with the matter said, breaking from standard industry practice ahead of a major model update.

Instead, the lab, which is expected to launch its next major update, V4, granted early access to domestic suppliers, including Huawei Technologies, the sources said.

AI developers typically share pre-release versions of major models with leading chipmakers such as Nvidia and Advanced Micro Devices to ensure their software performs efficiently on widely used hardware. DeepSeek has previously worked closely with Nvidia’s technical staff.

Foundation Models Meet Medical Image Interpretation

In contrast, traditional deep learning methods in the medical domain have long been constrained by scarce annotated data, weak cross-modal semantic correlation, and insufficient generalization capability. Foundation models (FMs) can effectively alleviate these issues by extracting semantic representations from large-scale unlabeled data, reducing dependence on expert annotations, and enhancing cross-modal understanding and transferability [7]. This provides technical support for addressing challenges such as long-tail distributions, data scarcity, and modality imbalance, thereby promoting a shift in medical decision-making from experience-driven to data-driven approaches.

Unlike traditional specialist models such as nnU-Net [8], which are typically designed for a single modality and specific tasks, FMs emphasize modality unification and task generalization, enabling cross-domain transfer and knowledge sharing. With mechanisms such as prompt engineering and parameter-efficient fine-tuning (PEFT), these models support few-shot and even zero-shot transfer (ZST). For example, Med-PaLM [9], built on a unified medical pretraining model, can generate structured pathology reports and perform lesion localization from medical images. It overcomes the limitation of traditional methods that require separate architectures for different tasks, significantly improving modeling efficiency and system integration. Driven by such unified model architectures, medical AI systems are evolving toward greater generality and reusability.
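The PEFT idea mentioned above can be sketched in a few lines of numpy in the LoRA style: freeze the large pretrained weight matrix and train only a low-rank correction. The dimensions and variable names here are illustrative, not the configuration of Med-PaLM or any real FM.

```python
import numpy as np

# Minimal LoRA-style PEFT sketch: the pretrained weight W is frozen;
# only the low-rank factors A and B are trainable.

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weights
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init

def forward(x):
    # Adapted layer: original frozen path plus the low-rank correction B @ A.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# Zero-initializing B means the adapted model starts exactly at the
# pretrained behaviour, so fine-tuning begins from a known-good point.
assert np.allclose(forward(x), W @ x)

# Trainable parameters shrink from d_in*d_out to rank*(d_in + d_out).
full, lora = d_in * d_out, rank * (d_in + d_out)
print(full, lora)  # 4096 512
```

The efficiency win is the parameter count: for this toy layer, training 512 numbers instead of 4,096, and the ratio improves further as layer sizes grow, which is what makes adapting billion-parameter FMs to scarce medical data tractable.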

Despite these advancements, the unique characteristics of the medical domain pose multiple challenges to the application of FMs. On the one hand, medical data are highly heterogeneous, with pronounced differences in resolution, contrast, and noise distribution across imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound [10]. This limits the ability of traditional single-modality pretraining strategies to achieve effective cross-domain knowledge integration. On the other hand, clinical applications demand higher standards for model performance. Clinical decision-making relies on interpretable diagnostic evidence, yet pretrained models often behave as “black boxes”, limiting their clinical traceability [11]. In addition, the long-tail distribution of rare diseases poses fairness challenges for model generalization [12].
