There are many other ways AI could take things in a bad direction.
One of the OpenAI directors who worked to oust CEO Sam Altman is issuing some stark warnings about the future of unchecked artificial intelligence.
In an interview during Axios’ AI+ summit, former OpenAI board member Helen Toner suggested that the risks AI poses to humanity aren’t just worst-case scenarios from science fiction.
“I just think sometimes people hear the phrase ‘existential risk’ and they just think Skynet, and robots shooting humans,” Toner said, referencing the evil AI technology from the “Terminator” films that’s often used as a metaphor for worst-case-scenario AI predictions.