Archive for the ‘existential risks’ category: Page 23

Jul 16, 2023

From Sci-Fi to Reality: Addressing AI Risks — with David Brin

Posted in categories: cryptocurrencies, existential risks, military, particle physics, robotics/AI

AI has reached its nuclear-bomb threshold: perhaps the biggest thing to happen to human technology since the splitting of the atom.

A conversation with science fiction author and NASA consultant David Brin about the existential risks of AI and the approaches we can take to address them.

Continue reading “From Sci-Fi to Reality: Addressing AI Risks — with David Brin” »

Jul 15, 2023

Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

Posted in categories: business, existential risks, robotics/AI

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don’t lead to our extinction.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership.

Continue reading “Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED” »

Jul 13, 2023

Zachary Kallenborn — Existential Terrorism

Posted in categories: existential risks, mathematics, policy, security, terrorism

“Some men just want to watch the world burn.” Zachary Kallenborn discusses acts of existential terrorism, such as the Tokyo subway sarin attack by Aum Shinrikyo in 1995, which killed or injured over 1,000 people.

Zachary Kallenborn is a policy fellow in the Center for Security Policy Studies at George Mason University, a research affiliate in unconventional weapons and technology at START, and a senior risk management consultant at the ABS Group.

Continue reading “Zachary Kallenborn — Existential Terrorism” »

Jul 8, 2023

AI Singularity realistically by 2029: year-by-year milestones

Posted in categories: existential risks, robotics/AI, singularity

This existential threat could arrive as early as, say, 2026. It might even turn out to be a good thing. Whatever the Singularity exactly is, its nature remains uncertain, but its timing is becoming clearer, and it is much closer than most predicted.

AI is nevertheless hard to predict, but many agree with me that, with GPT-4, we are already close to AGI (artificial general intelligence).

Jul 3, 2023

The real reason claims about the existential risk of AI are scary

Posted in categories: ethics, existential risks, robotics/AI

Claims that superintelligent AI poses a threat to humanity are frightening, but only because they distract from the real issues today, argues Mhairi Aitken, an ethics fellow at The Alan Turing Institute.

By Mhairi Aitken

Jun 30, 2023

Tesla, Facebook, OpenAI Account For 24.5% Of ‘AI Incidents,’ Security Company Says

Posted in categories: existential risks, food, health, law, military, nuclear weapons, robotics/AI

The first “AI incident” almost caused global nuclear war. More recent AI-enabled malfunctions, errors, fraud, and scams include deepfakes used to influence politics, bad health information from chatbots, and self-driving vehicles that are endangering pedestrians.

The worst offenders, according to security company Surfshark, are Tesla, Facebook, and OpenAI, with 24.5% of all known AI incidents so far.

In 1983, an automated system in the Soviet Union thought it detected incoming nuclear missiles from the United States, almost leading to global conflict. That’s the first incident in Surfshark’s report (though it’s debatable whether an automated system from the 1980s counts as artificial intelligence). In the most recent incident, the National Eating Disorders Association (NEDA) was forced to shut down Tessa, its chatbot, after Tessa gave dangerous advice to people seeking help for eating disorders. Other recent incidents include a self-driving Tesla failing to notice a pedestrian and breaking the law by not yielding at a crosswalk, and a Jefferson Parish resident being wrongfully arrested by Louisiana police after a facial recognition system developed by Clearview AI allegedly mistook him for another person.

Jun 29, 2023

Russian Ships Enter Taiwan’s Territory: New Escalation in East Asia? | Vantage with Palki Sharma

Posted in categories: existential risks, military

Taiwan was on high alert after two Russian warships entered its waters. Taiwan is used to incursions by China, not Russia. It marks a new flare-up in East Asia. Moscow then doubled down by releasing footage of a military drill in the Sea of Japan. East Asia is becoming a powder keg.

The region already deals with tensions between North Korea, South Korea & Japan. And now the US is trying to send a message to Pyongyang by having its largest nuclear submarine visit South Korea.

Continue reading “Russian Ships Enter Taiwan’s Territory: New Escalation in East Asia? | Vantage with Palki Sharma” »

Jun 25, 2023

The rise of AI: beware binary thinking

Posted in categories: existential risks, robotics/AI

When Max More writes, it’s always worth paying attention.

His recent article Existential Risk vs. Existential Opportunity: A balanced approach to AI risk is no exception. There’s much in that article that deserves reflection.

Continue reading “The rise of AI: beware binary thinking” »

Jun 23, 2023

How existential risk became the biggest meme in AI

Posted in categories: business, existential risks, robotics/AI

Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

Jun 22, 2023

ChatGPT — A Human Upgrade Or Future Malaise?

Posted in categories: biotech/medical, Elon Musk, existential risks, robotics/AI

Elon Musk is exploring the possibility of upgrading the human brain to allow humans to compete with sentient AI through ‘a brain computer interface’ created by his company Neuralink. “I created [Neuralink] specifically to address the AI symbiosis problem, which I think is an existential threat,” says Musk.

While Neuralink has just received FDA approval to start clinical trials in humans (intended to empower those with paralysis), only time will tell whether this technology will succeed in augmenting human intelligence as Musk first intended. But the use of AI to augment human intelligence brings up some interesting ethical questions as to which tools are acceptable (a subject to be discussed…


ChatGPT may have an effect on critical thinking, and early adopters may be at an advantage with GPT; this is examined in a study with students.

Continue reading “ChatGPT — A Human Upgrade Or Future Malaise?” »

Page 23 of 150