
Humanity’s Endgame? We Built an AI That Will Command Us — Mo Gawdat

🏦 Invest In Luxury Dubai Property https://londonreal.tv/dubai-ytd
🍿 Watch the full interview for free at https://londonreal.tv/gawdat

In this powerful episode, Brian Rose sits down with former Google X exec and bestselling author Mo Gawdat 🧠 to explore the mind-blowing future of Artificial Intelligence 🤯. From the rise of machine learning to the ethical dangers of unchecked AI evolution ⚠️, this conversation uncovers why AI is the infant that could soon become our master.

🔥 Discover the truth about what’s coming

⚙️ Why we must act now to guide its growth
🧘‍♂️ And how mindfulness may be our only defense.

This one will change how you see the future 🌍💡
👉 Don’t miss it — hit play now and prepare your mind.

🚨 Learn To Make Money In Crypto:
💰 The Investment Club: https://londonreal.tv/club
💰 Crypto & DeFi Academy: https://londonreal.tv/defi-ytd

🔔 SUBSCRIBE ON YOUTUBE: http://bit.ly/SubscribeToLondonReal

Godfather of AI: How To Make Safe Superintelligent AI

The co-inventor of modern AI and the most cited living scientist believes he’s figured out how to ensure AI is honest, incapable of deception, and never goes rogue. Yoshua Bengio – Turing Award winner and founder of LawZero – is disturbed by the many unintended drives and goals present in today’s AIs, their ability to tell when they’re being tested, and their demonstrated willingness to lie. AI companies are trying to stamp these out in a ‘cat-and-mouse game’ that Yoshua fears they’re losing.

But Yoshua is optimistic: he believes the companies can win this battle decisively with a single rearrangement to how AI models are trained, and has been developing mathematical proofs to back up the claim. The core idea is that instead of training AI to predict what a human would say, or to produce responses we’d rate highly, we should train it to model what’s actually true.
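
As a loose, purely hypothetical sketch (not Bengio’s actual formulation), the contrast between the two objectives might look like this, where `truth_probs` stands in for an assumed external estimate of how likely each candidate continuation is to be true:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def imitation_loss(logits, human_choice):
    """Standard objective: reward the model for assigning high
    probability to the token a human actually wrote."""
    return -math.log(softmax(logits)[human_choice])

def truth_weighted_loss(logits, truth_probs):
    """Illustrative 'Scientist AI'-style objective: weight each
    candidate continuation by an (assumed) estimate of how likely
    it is to be true, rather than by human preference or ratings."""
    probs = softmax(logits)
    return -sum(t * math.log(p) for t, p in zip(truth_probs, probs))
```

The point of the sketch is only that the two objectives pull the model toward different targets: the first toward *what a human would say*, the second toward *what is most likely true* under some truth estimate.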

Learn more & full transcript: https://80k.info/bengio

Yoshua argues this new architecture, which he calls “Scientist AI,” is a small enough change that we could keep almost all the techniques and data we use to train frontier AIs like Claude and ChatGPT. And that the new architecture need not cost more, could be built iteratively, and might be more capable as well as more honest.

Until recently, the biggest practical objection to Scientist AI was simple: the world wants agents, and Scientist AI isn’t one. But in new research, Yoshua has extended the design and believes the same honest predictor can be turned into a capable agent without losing its …

Blood as the mirror and modulator of aging: mechanistic insights and rejuvenation strategies

Aging is a complex process influenced by changes in our blood that affect how quickly we age. Scientists have shown that blood contains important molecules and cellular components — including proteins, metabolites, and immune cells — that can either accelerate or slow aging. Tools such as the ‘proteomic aging clock’ predict age and disease risk based on blood protein profiles, whereas emerging multi-omics approaches integrate metabolomic and immunomic data. Large-scale analyses of circulating factors reveal how these components change with age and identify markers of organ-specific aging. Certain blood-borne molecules can predict diseases such as heart disease and Alzheimer disease. These findings demonstrate that aging does not occur uniformly across tissues. Overall, studying diverse blood components provides valuable insight into aging biology and offers opportunities to develop strategies that promote healthier aging and improve long-term health.

This summary was initially drafted using artificial intelligence, then revised and fact-checked by the author.

A human-inspired pipeline could enhance the training of computer vision models

Over the past few decades, computer scientists have developed increasingly advanced artificial intelligence (AI) systems that can tackle some tasks exceedingly well. These include computer vision models, systems that can rapidly analyze images and categorize them, recognize objects and faces, or make other accurate predictions.

While computer vision systems now perform well on various tasks, they typically process visual information very differently from humans. Whereas humans focus more on the shape and outline of objects, AI systems prioritize texture, such as color variations or repeated visual patterns. This difference may partly explain why AI vision systems remain far more error-prone than human vision.
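
As a purely illustrative toy (not from the paper), the texture-versus-shape point can be seen in how a feature built only on color statistics discards shape entirely; the grids and function below are hypothetical:

```python
from collections import Counter

def color_histogram(image):
    """Summarize an image (a grid of pixel values) as a bag of colors,
    throwing away all spatial information -- and with it, all shape."""
    return Counter(px for row in image for px in row)

# Two clearly different shapes drawn with identical pixel counts:
diagonal = [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]
column   = [[1, 0, 0],
            [1, 0, 0],
            [1, 0, 0]]

# A texture-only feature cannot tell them apart, even though their
# outlines differ completely.
print(color_histogram(diagonal) == color_histogram(column))  # True
```

A human instantly distinguishes the two shapes; a model leaning on texture-like statistics alone cannot, which is the kind of gap shape-sensitive training pipelines aim to close.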

Researchers at Osnabrück University and Freie Universität Berlin recently introduced a new approach to train AI models that draws from the development of the human visual system. Their proposed pipeline, dubbed developmental visual diet (DVD), was introduced in a paper published in Nature Machine Intelligence.

Agentic AI: Navigating The Evolving Frontier


#artificialIntelligence #agenticai #ai #cybersecurity #governance #tech Forbes


Kindly see my latest article, by Chuck Brooks.

The Strategic Inflection Point: From Automation to Autonomy. This moment is defined by operational autonomy and technical innovation. Agentic AI is increasingly establishing itself as the standard decision-making framework in critical systems. The transition resembles the shifts to cloud computing and mobile networks, yet differs in one key respect: it possesses agency, incorporating intent into machines.

2030: The Survival Singularity — Why Billionaires Are Panicking

To support the channel and help us make more videos like this, check out our Patreon: https://patreon.com/Technomics?utm_me

👉 Get The AI Career Survival Guide: https://technomics.gumroad.com/l/ai-s

By 2030, Artificial General Intelligence (AGI) will change everything. While tech leaders publicly promise a …

The “Nanobot” Singularity: Ray Kurzweil’s Terrifying Plan for 2030

👉 Get The AI Career Survival Guide: https://technomics.gumroad.com/l/ai-s

What if immortality and god-like intelligence were just a few years away?
Renowned futurist and former Google engineer Ray Kurzweil predicts that humanity is rapidly approaching a …

Anthropic research warns AI could build itself by 2028

In this exclusive interview, Axios co-founder Mike Allen sits down with Anthropic co-founder Jack Clark to discuss his warning that by 2028, AI systems may be able to improve and build better versions of themselves.

Clark explains why Anthropic is preparing for the possibility of an “intelligence explosion,” how advanced AI could accelerate breakthroughs in science and medicine, and why governments, companies and researchers need new plans for cyber threats, bio risks, economic disruption and the future of work.

Timestamps:
00:00 — Introduction: the future of AI
00:41 — The 2028 prediction: AI building itself
01:49 — The risks of rapid acceleration
03:11 — The 3D printer metaphor
05:21 — Intelligence explosion and fire drill scenarios
06:55 — Building a …

Anthropic to consider using SpaceX orbital data center satellites

WASHINGTON — Artificial intelligence company Anthropic will study the use of orbital data centers being developed by SpaceX.

The two companies announced agreements on May 6 giving Anthropic, developer of a line of AI products known as Claude, access both to terrestrial data centers and to potential use of SpaceX’s orbital data centers.

In the near term, Anthropic will purchase all the capacity of a SpaceX terrestrial data center, Colossus 1, with more than 300 megawatts of computing capacity. Anthropic said that capacity will allow it to raise limits on usage of Claude products for its customers.

AI data center boom is leaving consumer electronics short of chips – even though they don’t use the same kinds

Data centers need powerful chips, while smartphones need chips that are energy efficient. A supply chain scholar explains why chipmakers’ focus on the former comes at the expense of the latter.
