
AGI Is Here: AI Legend Peter Norvig on Why it Doesn’t Matter Anymore

Are we chasing the wrong goal with Artificial General Intelligence, and missing the breakthroughs that matter now?

On this episode of Digital Disruption, we’re joined by former research director at Google and AI legend, Peter Norvig.

Peter is an American computer scientist and a Distinguished Education Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is also a researcher at Google, where he previously served as Director of Research and led the company’s core search algorithms group. Before joining Google, Norvig headed NASA Ames Research Center’s Computational Sciences Division, where he served as NASA’s senior computer scientist and received the NASA Exceptional Achievement Award in 2001. He is best known as the co-author, alongside Stuart J. Russell, of Artificial Intelligence: A Modern Approach, the world’s most widely used textbook in the field of artificial intelligence.

Peter sits down with Geoff to separate fact from fiction about where AI is really headed. He explains why the hype around Artificial General Intelligence (AGI) misses the point, how today’s models are already “general,” and what matters most: making AI safer, more reliable, and human-centered. He discusses the rapid evolution of generative models, the risks of misinformation, AI safety, open-source regulation, and the balance between democratizing AI and containing powerful systems. This conversation explores the impact of AI on jobs, education, cybersecurity, and global inequality, and how organizations can adapt, not by chasing hype, but by aligning AI to business and societal goals. If you want to understand where AI actually stands, beyond the headlines, this is the conversation you need to hear.

In this episode:
00:00 Intro.
01:00 How AI evolved since Artificial Intelligence: A Modern Approach.
03:00 Is AGI already here? Norvig’s take on general intelligence.
06:00 The surprising progress in large language models.
08:00 Evolution vs. revolution.
10:00 Making AI safer and more reliable.
12:00 Lessons from social media and unintended consequences.
15:00 The real AI risks: misinformation and misuse.
18:00 Inside Stanford’s Human-Centered AI Institute.
20:00 Regulation, policy, and the role of government.
22:00 Why AI may need an Underwriters Laboratory moment.
24:00 Will there be one “winner” in the AI race?
26:00 The open-source dilemma: freedom vs. safety.
28:00 Can AI improve cybersecurity more than it harms it?
30:00 “Teach Yourself Programming in 10 Years” in the AI age.
33:00 The speed paradox: learning vs. automation.
36:00 How AI might (finally) change productivity.
38:00 Global economics, China, and leapfrog technologies.
42:00 The job market: faster disruption and inequality.
45:00 The social safety net and future of full-time work.
48:00 Winners, losers, and redistributing value in the AI era.
50:00 How CEOs should really approach AI strategy.
52:00 Why hiring a “PhD in AI” isn’t the answer.
54:00 The democratization of AI for small businesses.
56:00 The future of IT and enterprise functions.
57:00 Advice for staying relevant as a technologist.
59:00 A realistic optimism for AI’s future.

#ai #agi #humancenteredai #futureofwork #aiethics #innovation

Quantum encryption method demonstrated at city-sized distances for the first time

Concerns that quantum computers may soon be able to break into previously secure communications have motivated researchers to work on new ways to encrypt information. One such method is quantum key distribution (QKD), a secure, quantum-based method in which any eavesdropping attempt disturbs the quantum state, making unauthorized interception immediately detectable.

Previous implementations were limited to short distances and by their reliance on specialized devices, but a research team in China recently demonstrated quantum encryption maintained over far longer distances. The research, published in Science, describes device-independent QKD (DI-QKD) between two single-atom nodes over up to 100 km of optical fiber.
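The detection principle behind QKD can be illustrated with a toy simulation of the classic BB84 protocol (a much simpler scheme than the device-independent variant in the Science paper). An intercept-and-resend eavesdropper must guess measurement bases, and wrong guesses disturb the quantum states, showing up as roughly a 25% error rate in the sifted key. This is an illustrative sketch; function and variable names are not from the study.

```python
import random

def bb84_error_rate(n_rounds: int, eavesdrop: bool, seed: int = 1) -> float:
    """Toy BB84 intercept-resend simulation.

    Returns the error rate Alice and Bob observe on the sifted key.
    Without an eavesdropper the sifted key is error-free; an
    intercept-and-resend attacker flips roughly 25% of sifted bits,
    which is what makes the interception detectable.
    """
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n_rounds):
        bit = rng.randint(0, 1)            # Alice's raw key bit
        alice_basis = rng.randint(0, 1)    # Alice's encoding basis
        value, state_basis = bit, alice_basis
        if eavesdrop:
            eve_basis = rng.randint(0, 1)  # Eve guesses a basis
            if eve_basis != state_basis:
                value = rng.randint(0, 1)  # wrong basis -> random outcome
            state_basis = eve_basis        # Eve resends in her own basis
        bob_basis = rng.randint(0, 1)      # Bob's measurement basis
        measured = value if bob_basis == state_basis else rng.randint(0, 1)
        if bob_basis == alice_basis:       # sifting: keep matching-basis rounds
            sifted += 1
            errors += measured != bit
    return errors / sifted
```

Comparing `bb84_error_rate(20000, eavesdrop=False)` (zero errors) with `bb84_error_rate(20000, eavesdrop=True)` (about one error in four) shows why Alice and Bob can reveal an attacker simply by comparing a sample of their key bits.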

Germany warns of Signal account hijacking targeting senior figures

Germany’s domestic intelligence agency is warning of suspected state-sponsored threat actors targeting high-ranking individuals in phishing attacks via messaging apps like Signal.

The attacks combine social engineering with legitimate features to steal data from politicians, military officers, diplomats, and investigative journalists in Germany and across Europe.

The security advisory is based on intelligence collected by the Federal Office for the Protection of the Constitution (BfV) and the Federal Office for Information Security (BSI).

DKnife Linux toolkit hijacks router traffic to spy, deliver malware

A newly discovered toolkit called DKnife has been used since 2019 to hijack traffic at the edge-device level and deliver malware in espionage campaigns.

It serves as a post-compromise framework for traffic monitoring and adversary-in-the-middle (AitM) activity, designed to intercept and manipulate traffic destined for endpoints (computers, mobile devices, and IoT devices) on the network.

Researchers at Cisco Talos say that DKnife is an ELF framework with seven Linux-based components designed for deep packet inspection (DPI), traffic manipulation, credential harvesting, and malware delivery.
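Why router-level interception makes credential harvesting so effective is easy to picture with a toy deep-packet-inspection sketch: once traffic flows through an attacker-controlled edge device, plaintext protocols such as unencrypted HTTP yield secrets to a simple pattern match. The function below is purely illustrative and not DKnife code; the names and single-buffer input are simplifying assumptions (real tooling reassembles raw packet streams).

```python
import base64
import re

def extract_basic_credentials(packet_payload: bytes):
    """Toy DPI pass: scan a captured HTTP request for a Basic
    Authorization header and decode the credentials it carries.
    Returns (user, password) or None if no header is present."""
    match = re.search(rb"Authorization: Basic ([A-Za-z0-9+/=]+)", packet_payload)
    if not match:
        return None
    user, _, password = base64.b64decode(match.group(1)).partition(b":")
    return user.decode(), password.decode()

# Example: a plaintext request crossing a compromised router.
payload = (b"GET /portal HTTP/1.1\r\n"
           b"Host: intranet.example\r\n"
           b"Authorization: Basic YWxpY2U6czNjcjN0\r\n\r\n")
print(extract_basic_credentials(payload))  # ('alice', 's3cr3t')
```

The takeaway mirrors the Talos finding: an implant sitting on the network path needs no endpoint access at all, which is why end-to-end encryption, not perimeter trust, is the meaningful defense.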

Electron-phonon ‘surfing’ could help stabilize quantum hardware, nanowire tests suggest

That low-frequency fuzz that can bedevil cellphone calls stems from how electrons move through and interact within materials at the smallest scales. This electronic flicker noise is often caused by scattering processes that interrupt the flow of electrons through the metals that conduct them.

The same sort of noise hampers the detecting powers of advanced sensors. It also creates hurdles for the development of quantum computers—devices expected to yield unbreakable cybersecurity, process large-scale calculations and simulate nature in ways that are currently impossible.

A much quieter, brighter future may be on the way for these technologies, thanks to a new study led by UCLA. The research team demonstrated prototype devices that, above a certain voltage, conducted electricity with lower noise than the normal flow of electrons.
