
Will humans soon live forever? Scientists believe it’s possible — and it could happen as early as 2050.
In this video, we explore 10 shocking scientific breakthroughs that are pushing humanity closer to immortality.
From nanobots that cure disease from within, to brain uploading, cloning organs, and AI-driven consciousness — this is the future of life itself.

🧬 Get ready to discover the jaw-dropping technologies that might just make death optional.

⚠️ Don’t blink. The future is coming faster than you think.

For Martin Schrimpf, the promise of artificial intelligence is not in the tasks it can accomplish. It’s in what AI might reveal about human intelligence.

He is working to build a “digital twin” of the brain using artificial neural networks — AI models loosely inspired by how neurons communicate with one another.

That end goal sounds almost ludicrously grand, but his approach is straightforward. First, he and his colleagues test people on tasks related to language or vision. Then they compare the observed behavior or brain activity to results from AI models built to do the same things. Finally, they use the data to fine-tune their models to create increasingly humanlike AI.
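To make that comparison step concrete, here is a minimal sketch of how one might score an AI model against brain data: fit a linear map from a model layer's activations to measured neural responses, then check how well it predicts responses to held-out stimuli. This is an illustration under simple assumptions, not the team's actual pipeline; all names, shapes, and data below are hypothetical.

```python
# Illustrative sketch (not the researchers' actual code): score how well a
# model's internal activations predict recorded brain responses to the same
# stimuli. Array names and shapes are hypothetical.
import numpy as np

def brain_similarity_score(model_acts, brain_resps, train_frac=0.8, seed=0):
    """Fit a linear map from model activations to brain responses on a
    training split, then return the mean Pearson correlation between
    predicted and actual responses on held-out stimuli.

    model_acts:  (n_stimuli, n_units)   activations from an ANN layer
    brain_resps: (n_stimuli, n_voxels)  measured responses (e.g., fMRI)
    """
    rng = np.random.default_rng(seed)
    n = model_acts.shape[0]
    idx = rng.permutation(n)
    split = int(train_frac * n)
    tr, te = idx[:split], idx[split:]

    # Least-squares linear mapping from model features to brain responses.
    W, *_ = np.linalg.lstsq(model_acts[tr], brain_resps[tr], rcond=None)
    pred = model_acts[te] @ W

    # Pearson r per voxel on held-out stimuli, averaged.
    pred_c = pred - pred.mean(axis=0)
    true_c = brain_resps[te] - brain_resps[te].mean(axis=0)
    denom = np.sqrt((pred_c**2).sum(axis=0) * (true_c**2).sum(axis=0)) + 1e-12
    r = (pred_c * true_c).sum(axis=0) / denom
    return r.mean()

# Hypothetical usage: 200 stimuli, 512 model units, 1000 voxels.
acts = np.random.randn(200, 512)
resps = acts @ np.random.randn(512, 1000) + 0.5 * np.random.randn(200, 1000)
print(f"held-out similarity: {brain_similarity_score(acts, resps):.3f}")
```

A higher held-out correlation would suggest the model's representations track the brain's, which is the kind of signal used to steer models toward more humanlike behavior.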

Questions to inspire discussion.

🚗 Q: Where is Tesla currently testing FSD and seeking approval in Europe?

A: Tesla is testing FSD in the Arctic and awaiting regulatory approval for cities like Paris, Amsterdam, and Rome.

🇸🇪 Q: Why was FSD testing denied in Stockholm?

A: Stockholm denied FSD testing, citing risks to infrastructure and the strain of ongoing innovation projects.

🤖 Q: What improvements are expected in Tesla’s Grok AI?

A: Grok 3.5 will be trained on video data from Tesla cars and Optimus robots, enabling it to understand the world and perform tasks like dropping off passengers.

Not only can A.I. now make these assessments with remarkable, humanlike accuracy; it can make millions of them in an instant. A.I.’s superpower is its ability to recognize and interpret patterns: to sift through raw data and, by comparing it across vast data sets, to spot trends, relationships and irregularities.

As humans, we constantly generate patterns: in the sequence of our genes, the beating of our hearts, the repetitive motion of our muscles and joints. Everything about us, from the cellular level to the way our bodies move through space, is a source of grist for A.I. to mine. And so it’s no surprise that, as the power of the technology has grown, some of its most startling new abilities lie in its perception of us: our physical forms, our behavior and even our psyches.
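As a toy illustration of the kind of pattern-spotting described above, the sketch below flags irregular inter-beat intervals in a synthetic heart rhythm using a simple statistical rule. The data, thresholds, and setup are made up for demonstration; real clinical models are far more sophisticated.

```python
# Toy anomaly detection over synthetic heartbeat data: flag inter-beat
# intervals that deviate sharply from the surrounding pattern.
import numpy as np

rng = np.random.default_rng(1)
# Simulate ~60 bpm inter-beat intervals in milliseconds, with two anomalies.
intervals = rng.normal(loc=1000, scale=25, size=100)
intervals[40] = 1400   # delayed beat
intervals[75] = 600    # premature beat

# Classic z-score rule: anything more than 3 standard deviations from the
# mean is flagged as an irregularity.
z = (intervals - intervals.mean()) / intervals.std()
anomalies = np.flatnonzero(np.abs(z) > 3)
for i in anomalies:
    print(f"beat {i}: interval {intervals[i]:.0f} ms (z = {z[i]:+.1f})")
```

The same basic idea, compare each observation against the statistical regularities of a larger dataset, scales up to the gene sequences, gaits, and behaviors the passage describes.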

Given the recent explosion of large language models (LLMs) that can make convincingly human-like statements, it makes sense that there has been a growing focus on developing models that can explain how they make decisions. But how can we be sure that what they're saying is the truth?

In a new paper, researchers from Microsoft and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) propose a novel method for measuring LLM explanations with respect to their “faithfulness”—that is, how accurately an explanation represents the reasoning process behind the model’s answer.

As lead author and Ph.D. student Katie Matton explains, faithfulness is no minor concern: if an LLM produces explanations that are plausible but unfaithful, users might develop false confidence in its responses and fail to recognize when recommendations are misaligned with their own values, like avoiding bias in hiring.
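One simple way to see why faithfulness matters, and how it might be probed, is an ablation check: if an explanation credits a particular input feature, removing that feature should change the answer. The sketch below is a schematic illustration of this idea, not the method from the Microsoft/CSAIL paper; `query_model`, the toy model, and the hiring example are all hypothetical.

```python
# Schematic faithfulness probe (illustration only, not the paper's method):
# if an explanation claims a feature drove the answer, ablating that feature
# should flip the answer. `query_model` is a hypothetical stand-in for an
# LLM call that maps input features to a decision.
from typing import Callable

def faithfulness_probe(
    query_model: Callable[[dict], str],  # features -> answer
    features: dict,
    cited_feature: str,                  # the feature the explanation credits
    neutral_value: str = "[REDACTED]",
) -> bool:
    """Return True if ablating the cited feature changes the answer,
    i.e., the explanation is at least consistent with model behavior."""
    original = query_model(features)
    ablated = dict(features)
    ablated[cited_feature] = neutral_value
    return query_model(ablated) != original

# Toy model: claims to decide on 'experience' but secretly keys on 'name'.
def toy_model(f: dict) -> str:
    return "hire" if f["name"] == "Alex" else "reject"

features = {"name": "Alex", "experience": "10 years"}
# An unfaithful explanation cites experience; ablating it changes nothing,
# so the probe returns False and the explanation is suspect.
print(faithfulness_probe(toy_model, features, "experience"))  # False
```

In this toy case the model's stated reason (experience) has no causal effect on its decision, which is exactly the kind of plausible-but-unfaithful explanation that could hide a biased hiring criterion from users.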