
How Close Are We to Human-Level AI? Here’s the Most Plausible Timeframe for Achieving Artificial General Intelligence (AGI)

This is exactly why the benchmarks I propose in the book matter so much. Defining AGI and ASI is not semantics; it determines which architectures we are trying to align, how long top-down control mechanisms remain useful, and when we must shift our focus toward developmental and integrative approaches such as the AGI Naturalization Protocol and merge-based alignment.

In SUPERALIGNMENT, I argue that no single strategy is sufficient. Today’s control-based alignment is indispensable, but only as an early scaffold. Ethical-emotional development is necessary, but only as a middle phase. Merge-based alignment becomes increasingly relevant as humans and artificial minds begin to co-evolve within shared cognitive ecosystems. The triadic structure matters because each phase corresponds to a distinct level of intelligence maturity: constraint, cultivation, and convergence.

In my framework, AGI is not a static point but a continuum of cognitive emergence: from embodied agency to disembodied abstraction, from classical computation to quantum cognition, and from reactive behavior to phenomenological self-awareness. The benchmarks provide the conceptual anchors for intervention. They tell us when control may still be enough, when cultivation becomes necessary, and when convergence between human and synthetic minds becomes the more realistic path to Superalignment.

Light-responsive hydrogels enable fast and precise control of soft materials

Researchers at Tampere University have recently demonstrated that light can be used to precisely reshape soft materials without mechanical contact. They have developed light-responsive hydrogel thin films that enable programmable surfaces with high sensitivity, rapid response, precise spatial control and reversibility. The technology opens new possibilities for tunable devices in photonics, sensing and biomedicine.

Until now, responses in hydrogel films have typically been limited to timescales of tens of seconds and spatial resolutions of tens of micrometers—about the thickness of a fine human hair—restricting practical applications. In contrast, the university’s Smart Photonic Materials research group has achieved control on sub-second timescales and sub-micron resolution, marking a significant advance in speed and precision. The findings are published in the journal Nature Communications.

Light-responsive hydrogels are particularly attractive for mimicking dynamic microstructures found in nature. The materials absorb and release water when exposed to light, enabling accurate and remote actuation in lightweight structures. Such properties are well suited for applications including soft micro-robots, remote drug delivery systems and active cell culture platforms.

The Entrepreneurial University

More academic and nonprofit labs should act as spinoff factories: both creating innovative foundational technologies *and* pushing them toward the entrepreneurial translation needed to truly change the world for the better.

A research university emphasizes entrepreneurial science—and spawns start-ups in fields as varied as genetic medicine, humanoid robotics and carbon-catching materials.

A domain-adapted large language model to support clinicians in psychiatric clinical practice

The authors present PsychFound, a psychiatry-specialized large language model trained on expert knowledge and clinical records. It achieves clinical-grade performance and enhances diagnostic and treatment decisions when deployed in clinical workflows.

Brain-inspired approach can teach AI to doubt itself just enough to avoid overconfidence

Most contemporary artificial intelligence (AI) systems learn to complete tasks via machine learning and deep learning. Machine learning is a computational approach that allows models to uncover patterns in data that are useful for making predictions. Deep learning, on the other hand, is a subset of machine learning that entails the use of multi-layered neural networks, which can autonomously extract features and learn complex patterns from unstructured data, sometimes with little or no human supervision.

Many AI systems trained with these approaches also produce confidence scores for their predictions. These scores are essentially estimates of how probable it is that a specific prediction is accurate. Past studies suggest that AI systems are often overconfident, assigning high confidence scores to wrong answers or even presenting inaccurate information as fact. This limits their reliability, particularly in high-stakes applications where wrong predictions can have serious consequences.

Researchers at the Korea Advanced Institute of Science and Technology recently introduced a new brain-inspired training approach that could yield more realistic AI confidence estimates. Their proposed strategy, introduced in a paper published in Nature Machine Intelligence, entails briefly training artificial neural networks on random noise (i.e., data with no meaningful patterns) and arbitrary outputs, so that they can learn to produce more realistic confidence estimates before learning specific tasks.
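The core idea, training on inputs and labels that carry no signal so the model learns to be uncertain, can be illustrated in miniature. The sketch below is not the KAIST team's method; it is a toy NumPy demonstration under assumed settings (a linear softmax classifier, Gaussian noise inputs, uniformly random labels) of why fitting random labels pulls confidence toward chance level.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

d, k = 10, 2                             # input dimension, number of classes
W = rng.normal(scale=2.0, size=(d, k))   # large init -> saturated, overconfident logits
b = np.zeros(k)

Xtest = rng.normal(size=(256, d))        # held-out noise for measuring confidence
conf_before = softmax(Xtest @ W + b).max(axis=1).mean()

# Calibration phase: fit random noise inputs to *random* labels with plain
# cross-entropy gradient descent. Because the labels carry no signal, the
# best a linear model can do is predict near-uniform probabilities, so its
# confidence is pulled toward the 1/k chance level before any real task
# is learned.
Xn = rng.normal(size=(512, d))
Yn = np.eye(k)[rng.integers(0, k, size=512)]
for _ in range(500):
    P = softmax(Xn @ W + b)
    G = (P - Yn) / len(Xn)               # cross-entropy gradient w.r.t. logits
    W -= 0.2 * Xn.T @ G
    b -= 0.2 * G.sum(axis=0)

conf_after = softmax(Xtest @ W + b).max(axis=1).mean()
print(f"mean max-softmax confidence on noise: {conf_before:.2f} -> {conf_after:.2f}")
```

After the noise phase, the mean confidence on unseen noise drops from near certainty toward the 0.5 chance level for two classes; the paper's approach applies this kind of pre-exposure to deep networks before task training.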

These AI-powered guide dogs don’t just lead, they talk

Guide dogs are powerful allies, leading the visually impaired safely to their destinations, but they can’t talk with their owners—until now. Using large language models, a team of researchers at Binghamton University, State University of New York has created a talking robot guide dog system that determines an ideal route and safely guides users to their destination, offering real-time feedback along the way.

The paper, “From Woofs to Words: Towards Intelligent Robotic Guide Dogs with Verbal Communication,” was presented at the 40th Annual AAAI Conference on Artificial Intelligence (AAAI 2026), held January 20–27 in Singapore. It is also available on the arXiv preprint server.

“For this work, we’re demonstrating an aspect of the robotic guide dog that is more advanced than biological guide dogs,” said Shiqi Zhang, an associate professor at the Thomas J. Watson College of Engineering and Applied Science’s School of Computing. “Real dogs can understand around 20 commands at best. But for robotic guide dogs, you can just put GPT-4 with voice commands. Then it has very strong language capabilities.”

The New Duality: Why This Quantum Discovery Has Even Physicists Questioning Reality


This quantum duality discovery shows a material acting as both conductor and insulator… confirmed in a real lab.

A 35 Tesla experiment revealed quantum oscillations inside an insulator’s core. This “conductor-insulator duality” is being compared to wave-particle duality… raising deeper questions about how reality behaves.

Inside this breakdown:
• University of Michigan quantum physics finding
• Conductor-insulator duality explained
• Wave-particle and observer effect links
• Faith and science parallels from Scripture

If quantum duality keeps expanding… what does it suggest about how reality actually works?
