Archive for the ‘existential risks’ category: Page 26

Apr 26, 2023

Researchers Took The First Pics Of DEATH — It Is Actually PALE BLUE And Looks Nice

Posted in categories: biological, existential risks

Even in today’s well-researched world, death remains one of the great unknowns. British scientists have now pursued it directly… and found that the color of death is a pale blue.

British scientists got a firsthand look at what it is like to die by closely observing a worm in their experiment. As death proceeds, cells perish one by one, setting off a chain reaction that destroys cell-to-cell connections and leads to the organism’s demise.


Apr 24, 2023

The biggest fear with AI is fear itself | De Kai | TEDxSanMigueldeAllende

Posted in categories: ethics, existential risks, media & arts, robotics/AI

In this talk, De Kai examines how AI amplifies fear into an existential threat to society and humanity, and what we need to be doing about it.

De Kai’s work across AI, language, music, creativity, and ethics centers on enabling cultures to interrelate. For pioneering contributions to machine learning of AIs like Google/Yahoo/Microsoft Translate, he was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows worldwide, and by Debrett’s HK 100 as one of the 100 most influential figures of Hong Kong. De Kai is a founding Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s ICSI (International Computer Science Institute). His public campaign applying AI to show the impact of universal masking against Covid received highly influential mass media coverage, and he serves on the board of the AI ethics think tank The Future Society. De Kai is also the creator of one of Hong Kong’s best-known world music collectives, ReOrientate, and was one of eight inaugural members named by Google to its AI ethics council.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Apr 21, 2023

Chandra X-ray Observatory identifies new stellar danger to planets

Posted in categories: cosmology, existential risks

Astronomers using data from NASA’s Chandra X-ray Observatory and other telescopes have identified a new threat to life on planets like Earth: a phase during which intense X-rays from exploded stars can affect planets over 100 light-years away. This result, as outlined in our latest press release, has implications for the study of exoplanets and their habitability.

This newly found threat comes from a supernova’s blast wave striking dense gas surrounding the exploded star, as depicted in the upper right of our artist’s impression. When this impact occurs, it can produce a large dose of X-rays that reaches an Earth-like planet (shown in the lower left, illuminated by its host star out of view to the right) months to years after the explosion and may last for decades. Such intense exposure may trigger an extinction event on the planet.

A new study reporting this threat is based on X-ray observations of 31 supernovae and their aftermath—mostly from NASA’s Chandra X-ray Observatory, Swift, and NuSTAR missions, and ESA’s XMM-Newton—which show that planets located as much as about 160 light-years away can be subjected to lethal doses of X-rays. Four of the supernovae in the study (SN 1979C, SN 1987A, SN 2010jl, and SN 1994I) are shown in composite images containing Chandra data in the supplemental image.

Apr 20, 2023

Why do some AI researchers dismiss the potential risks to humanity?

Posted in categories: existential risks, robotics/AI

Existential risk from AI is admittedly more speculative than pressing concerns such as its bias, but the basic solution is the same. A robust public discussion is long overdue, says David Krueger.

By David Krueger

Apr 19, 2023

Why we can still avoid imminent extinction with Daniel Schmachtenberger

Posted in categories: existential risks, governance

Some of Daniel Schmachtenberger’s friends say you can be “Schmachtenberged”: realising that we, as a civilisation, are on our way to self-destruction on a global level. This is a topic often addressed by the American philosopher and strategist, in a world with powerful weapons and technologies and a lack of efficient governance. But, now that the catastrophic script has already started to be written, is there still hope? And how do we start reversing the scenario?

Apr 12, 2023

Lightning strike creates a material seen for the first time on Earth

Posted in categories: asteroid/comet impacts, chemistry, climatology, existential risks

After lightning struck a tree in New Port Richey, Florida, a team of scientists from the University of South Florida (USF) discovered that this strike led to the formation of a new phosphorus material in a rock. This is the first time such a material has been found in solid form on Earth, and it could represent a member of a new mineral group.

“We have never seen this material occur naturally on Earth – minerals similar to it can be found in meteorites and space, but we’ve never seen this exact material anywhere,” said study lead author Matthew Pasek, a geoscientist at USF.

According to the researchers, high-energy events such as lightning can sometimes cause unique chemical reactions which, in this particular case, have led to the formation of a new material that seems to be transitional between space minerals and minerals found on Earth.

Apr 10, 2023

The intelligence explosion: Nick Bostrom on the future of AI

Posted in categories: biotech/medical, Elon Musk, existential risks, robotics/AI

We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains.

Up next, Is AI a species-level threat to humanity? With Elon Musk, Michio Kaku, Steven Pinker & more ► https://youtu.be/91TRVubKcEM


Apr 9, 2023

Doomsday Predictions Around ChatGPT Are Counter-Productive

Posted in categories: Elon Musk, employment, existential risks, robotics/AI

The last few weeks have been abuzz with news and fears (well, largely fears) about the impact ChatGPT and other generative technologies might have on the workplace. Goldman Sachs predicted 300 million jobs would be lost, while the likes of Steve Wozniak and Elon Musk asked for AI development to be paused (although pointedly not the development of autonomous driving).

Indeed, OpenAI chief Sam Altman recently declared that he was “a little bit scared”, with the sentiment shared by OpenAI’s chief scientist Ilya Sutskever, who recently said that “at some point it will be quite easy, if one wanted, to cause a great deal of harm”.

As fears mount about the jobs supposedly at risk from generative AI technologies like ChatGPT, are these fears likely to prevent people from taking steps to adapt?


Apr 9, 2023

Fermi Paradox: The Vulnerable World Hypothesis

Posted in category: existential risks

An exploration of the Vulnerable World Hypothesis solution to the Fermi Paradox.

An exploration of the possibility of finding fossils of alien origin right here on the surface of the Earth.


Apr 5, 2023

We Should Consider ChatGPT a Signal For Manhattan Project 2.0

Posted in categories: existential risks, government, military, nuclear energy, robotics/AI

In 1942, the Manhattan Project was established by the United States as a top-secret research and development (R&D) program to produce the first nuclear weapons. The project involved thousands of scientists, engineers, and other personnel who worked on different aspects of the effort, including the development of nuclear reactors, the enrichment of uranium, and the design and construction of the bomb. The goal: to develop an atomic bomb before Germany did.

The Manhattan Project set a precedent for large-scale government-funded R&D programs. It also marked the beginning of the nuclear age and ushered in a new era of technological and military competition between the world’s superpowers.

Today we’re entering the age of Artificial Intelligence (AI)—an era arguably just as important, if not more so, than the nuclear age. While the last few months might have been the first you’ve heard about it, many in the field would argue we’ve been headed in this direction for at least the last decade, if not longer. For those new to the topic: welcome to the future; you’re late.

Page 26 of 150