
How would you allocate a hypothetical $100 million budget for a Lifeboat Foundation study of the top 10 existential risks… risks that are both global and terminal?

$?? Biological viruses…
$?? Environmental global warming…
$?? Extraterrestrial invasion…
$?? Governments’ abusive power…
$?? Nanotechnology gray goo…
$?? Nuclear holocaust…
$?? Simulation Shut Down if we live in one…
$?? Space Threats asteroids…
$?? Superintelligent AI un-friendly…
$?? Other
$100 million total

To vote, please reply below.

Results after 80 votes (updated Jan 13, 2008, 11 AM EST):

$23.9 Biological viruses…
$17.9 Space Threats asteroids…
$13.9 Governments’ abusive power…
$10.2 Nuclear holocaust…
$8.8 Nanotechnology gray goo…
$8.6 Other
$8.5 Superintelligent AI un-friendly…
$7.2 Environmental global warming…
$0.7 Extraterrestrial invasion…
$0.4 Simulation Shut Down if we live in one…
$100 million total

Planning for the first Lifeboat Foundation conference has begun. This FREE conference will be held in Second Life to keep costs down and ensure that you won’t have to worry about missing work or school.

While an exact date has not yet been set, we intend to offer you an exciting lineup of speakers on a day in the late spring or early summer of 2008.

Several members of Lifeboat’s Scientific Advisory Board (SAB) have already expressed interest in presenting. However, potential speakers need not be Lifeboat Foundation members.

If you’re interested in speaking, want to help, or you just want to learn more, please contact me at matt@lifeboat.com.

What’s the NanoShield, you ask? It’s a long-term scientific research project aimed at creating a nanotechnological immune system. You can learn more about it here.

Facebook users — please come join the cause and help fund the Lifeboat Foundation’s NanoShield project.

Not a Facebook user? No worries. By joining the Lifeboat Foundation and making even a small donation, you can have a hugely positive impact on humanity’s future well-being.

So why not join us?

The inspiration for Help Hookup is actually a comic book called Global Frequency by Warren Ellis. My brother, Alvin Wang, took the idea to Startup Weekend, and they launched it this past weekend as a way of hooking up volunteers. It is similar to David Brin’s concept of “empowered citizens” and Glenn Reynolds’ “an army of Davids”. The concepts are compatible with the ideas and causes of the Lifeboat Foundation.

Global Frequency was a network of 1,001 people that handled the jobs that the governments did not have the will to handle. I thought that it was a great idea and it would be more powerful with 1,000,001 people or 100,000,001 people. We would have to leave out the killing that was in the comic.

Typhoons, earthquakes, and improperly funded education could all be handled. If there is a disaster, doctors could volunteer. Airlines could provide tickets. Corporations could provide supplies. Trucking companies could provide transportation. Etc. State a need, meet the need. No overhead. No waste.
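As a toy illustration of the “state a need, meet the need” idea (a sketch of my own; the names and structure here are hypothetical, not Help Hookup’s actual design or code):

```python
# Hypothetical sketch of a "state a need, meet the need" board:
# needs are posted with required skills, and volunteers are matched
# to every open need their skills cover.

from dataclasses import dataclass, field

@dataclass
class Need:
    description: str
    skills_required: set[str]
    volunteers: list[str] = field(default_factory=list)

class NeedBoard:
    def __init__(self):
        self.needs: list[Need] = []

    def state_need(self, description: str, skills: set[str]) -> Need:
        need = Need(description, skills)
        self.needs.append(need)
        return need

    def offer_help(self, name: str, skills: set[str]) -> list[Need]:
        """Sign a volunteer up for every need their skills cover."""
        matched = []
        for need in self.needs:
            if need.skills_required <= skills:
                need.volunteers.append(name)
                matched.append(need)
        return matched

board = NeedBoard()
board.state_need("Typhoon relief: field medicine", {"medicine"})
board.state_need("Move supplies to the coast", {"trucking"})
print([n.description for n in board.offer_help("Dr. Chen", {"medicine"})])
# -> ['Typhoon relief: field medicine']
```

The point of the sketch is the absence of middle layers: a need is a record, a match is a set comparison, and everything else is volunteers acting on it.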

The main site is here; it is a way for volunteers to hook up.

The helphookup blog is tracking the progress.

The Yellowstone caldera has moved upward nine inches over the last three years, a record rate since geologists first began taking measurements in the 1920s. This is the result of a Los Angeles-sized blob of magma that recently rose into the chamber only six miles below the surface. The Yellowstone caldera is an ancient supervolcano. The last time it erupted, 642,000 years ago, it ejected 1,000 cubic kilometers of magma into the air. If this happened in today’s world, it would kill millions and cover most of the United States in a layer of ash at least a centimeter thick. The lighter ash would rise into the atmosphere, initiating a volcanic winter and ruining crops worldwide.
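As a rough sanity check on that ash figure (my own back-of-envelope arithmetic, not from the original reporting; real ashfall is very uneven), spreading 1,000 cubic kilometers of ejecta evenly over the contiguous United States gives an average depth of roughly a dozen centimeters:

```python
# Back-of-envelope: average depth if 1,000 km^3 of ejecta were spread
# evenly over the contiguous United States. Crude assumption; ashfall
# concentrates downwind, and ash bulks up relative to the source magma.

ejecta_volume_km3 = 1_000
us_area_km2 = 8.1e6  # contiguous United States, approximate

depth_cm = ejecta_volume_km3 / us_area_km2 * 1e5  # 1 km = 1e5 cm
print(f"Average depth: {depth_cm:.0f} cm")  # ~12 cm
```

So “at least a centimeter thick” over most of the country is, if anything, a conservative statement.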

Calderas rise and fall worldwide all the time without erupting, but the activity in Yellowstone is still concerning. Like a reckless teenager in a sports car, our civilization laughs off the possibility of its own demise as a complete joke. Yet given the right sort of event, we could be knocked flat. Instead of waiting for a disaster to happen, we should prepare in advance to minimize its probability.

I would like to see scientists do a study on the feasibility of using nuclear weapons to initiate a supervolcano eruption. If it looks feasible, then park security in Yellowstone should be increased.

University of Pittsburgh researchers injected a therapy previously found to protect cells from radiation damage into the bone marrow of mice, then dosed them with some 950 roentgens of radiation — nearly twice the amount needed to kill a person in just five hours. Nine in 10 of the therapy-receiving mice survived, compared to 58 percent of the control group.

Between 30 and 330 days, there were no differences in survival rates between the experimental and control group mice, indicating that systemic MnSOD-PL treatment was not itself harmful to survival.

The researchers will need to verify whether this treatment would work in humans.

This is part of the early development of genetic modification to increase people’s biological defences (shields) against nuclear, biological, and chemical threats. We may not be able to prevent all attacks, so we should improve our toughness and survivability. We should still try to stop attacks and create the conditions for fewer attacks.

Determining the structure of a protein called hemagglutinin on the surface of influenza B is giving researchers at Baylor College of Medicine and Rice University in Houston clues as to what kinds of mutations could spark the next flu pandemic.

This is interesting research and progress in understanding and possibly blocking changes that would lead to pandemics.

In a report that goes online today in the Proceedings of the National Academy of Sciences (PNAS), Drs. Qinghua Wang, assistant professor of biochemistry and molecular biology at BCM, and Jianpeng Ma, associate professor in the same department, and their colleagues describe the actual structure of influenza B virus hemagglutinin and compare it to a similar protein on influenza A virus. That comparison may be key to understanding the changes that will have to occur before avian flu (which is a form of influenza A virus) mutates to a form that can easily infect humans, said Ma, who holds a joint appointment at Rice. He and Wang have identified a particular residue, or portion of the protein, that may play a role in how different types of hemagglutinin bind to human cells.

“What would it take for the bird flu to mutate and start killing people? That’s the next part of our work,” said Ma. Understanding that change may give scientists a handle on how to stymie it.

There are two main forms of influenza virus – A and B. Influenza B virus infects only people while influenza A infects people and birds. In the past, influenza A has been the source of major worldwide epidemics (called pandemics) of flu that have swept the globe, killing millions of people. The most famous of these was the Pandemic of 1918–1919, which is believed to have killed between 20 and 40 million people worldwide. It killed more people than World War I, which directly preceded it.

The Asian flu pandemic of 1957–1958 is believed to have killed as many as 1.5 million people worldwide, and the so-called Hong Kong flu pandemic of 1968–1969 is credited with as many as 1 million deaths. Each scourge was accompanied by a major change in the proteins on the surface of the virus.

The Lifeboat Foundation has the BioShield project.

New Scientist reports on a new study in which researchers led by Massimiliano Vasile of the University of Glasgow in Scotland compared nine of the many methods proposed to ward off threatening asteroids, including blasting them with nuclear explosions.

The team assessed the methods according to three performance criteria: the amount of change each method would make to the asteroid’s orbit, the amount of warning time needed and the mass of the spacecraft needed for the mission.

The method that came out on top was a swarm of mirror-carrying spacecraft. The spacecraft would be launched from Earth to hover near the asteroid and concentrate sunlight onto a point on the asteroid’s surface.

In this way, they would heat the asteroid’s surface to more than 2,100 °C, enough to start vaporising it. As the gases spewed from the asteroid, they would create a small thrust in the opposite direction, altering the asteroid’s orbit.

The scientists found that 10 of these spacecraft, each bearing a 20-metre-wide inflatable mirror, could deflect a 150-metre asteroid in about six months. With 100 spacecraft, it would take just a few days, once the spacecraft are in position.

To deflect a 20-kilometre asteroid, about the size of the one that wiped out the dinosaurs, it would take the combined work of 5000 mirror spacecraft focusing sunlight on the asteroid for three or more years.
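Out of curiosity, here is a back-of-envelope version of the mirror-swarm arithmetic. Apart from the solar flux (the measured value near Earth’s orbit) and the figures quoted in the article, every parameter below is an illustrative assumption of mine, not a number from the Vasile study, and the thrust is an ideal best case:

```python
import math

# Back-of-envelope model of the 10-spacecraft mirror swarm described above.
# Assumed values are marked; this is a sketch, not the study's model.

SOLAR_FLUX = 1361.0           # W/m^2 near Earth's orbit
N_MIRRORS = 10                # spacecraft in the swarm (per the article)
MIRROR_DIAMETER = 20.0        # m (per the article)
VAPORIZATION_ENERGY = 1.0e7   # J/kg to heat and vaporise rock (assumed)
EXHAUST_SPEED = 1000.0        # m/s for the escaping vapor (assumed)
ASTEROID_DIAMETER = 150.0     # m (per the article)
ASTEROID_DENSITY = 2500.0     # kg/m^3, typical rocky asteroid (assumed)
SIX_MONTHS = 0.5 * 3.156e7    # seconds

mirror_area = N_MIRRORS * math.pi * (MIRROR_DIAMETER / 2) ** 2
power = SOLAR_FLUX * mirror_area                 # W focused on the hot spot
mass_flow = power / VAPORIZATION_ENERGY          # kg/s of rock boiled off
thrust = mass_flow * EXHAUST_SPEED               # N, ideal directed thrust

asteroid_mass = ASTEROID_DENSITY * (4 / 3) * math.pi \
                * (ASTEROID_DIAMETER / 2) ** 3
delta_v = thrust / asteroid_mass * SIX_MONTHS    # m/s accumulated

print(f"Collected power: {power / 1e6:.1f} MW")          # ~4.3 MW
print(f"Ideal thrust: {thrust:.0f} N")                    # ~430 N
print(f"Delta-v in six months: {delta_v:.2f} m/s")        # ~1.5 m/s
```

Even if real-world losses knock these numbers down by an order of magnitude or more, a sustained nudge of centimeters to meters per second over months is the scale at which deflection becomes plausible, which is why a modest swarm can move a 150-metre rock in the quoted timeframe.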

But Clark Chapman of the Southwest Research Institute in Boulder, Colorado, US, says ranking the options based on what gives the largest nudge and takes the least time is wrongheaded.

The proper way to go about ranking this “is to give weight to adequate means to divert an NEO of the most likely sizes we expect to encounter, and to do so in a controllable and safe manner”, Chapman told New Scientist.

The best approach may be to ram the asteroid with a spacecraft to provide most of the change needed, then follow up with a gravity tractor to make any small adjustments needed, he says.

It is good to have several options for deflection, and a survey to detect the specific risks posed by near-Earth objects.

When I read about the “Aurora Generator Test” video that was leaked to the media, I wondered why it was leaked now, and who benefits. Like many of you, I question the reasons behind any leak from an “unnamed source” inside the US Federal government to the media. Hopefully we’ll all benefit from this particular leak.

Then I thought back to a conversation I had at a trade show booth I was working in several years ago. I was speaking with a fellow from the power generation industry. He indicated that he was very worried about the security ramifications of a hardware refresh of the SCADA systems that his utility was using to control its power generation equipment. The legacy UNIX-based SCADA systems were going to be replaced by Windows-based systems. He was even more worried that the “air gaps” that have historically been used to physically separate the SCADA control networks from the power company’s regular data networks might be removed to cut costs.

Thankfully, on July 19, 2007, the Federal Energy Regulatory Commission proposed to the North American Electric Reliability Corporation a set of new, and much overdue, cyber security standards that will, once adopted and enforced, do a lot to make an attacker’s job harder. Thank God, the people who operate the most critically important part of our national infrastructure have noticed the obvious.

Hopefully a little sunlight will help accelerate the process of reducing the attack surface of North America’s power grid.

After all, the march to the Singularity will go a lot slower without a reliable power grid.

Matt McGuirl, CISSP

There are two sides to living as long as possible: developing the technologies to cure aging, such as SENS, and preventing human extinction risk, which threatens everybody. Unfortunately, in the life extensionist community and the world at large, the balance of attention and support is lopsided in favor of the first side of the coin, while the second is largely ignored. I see people meticulously obsessed with caloric restriction and SENS, but apparently unaware of human extinction risks. There’s the global warming movement, sure, but no comparable effort to address the bio, nano, and AI risks.

It’s easy to understand why. Life extension therapies are a positive and happy thing, whereas existential risk is a negative and discouraging thing. The affect heuristic causes us to shy away from negative affect and focus only on projects with positive affect: life extension. Egocentric biases magnify the effect, because it’s easier to imagine oneself aging and dying than getting wiped out along with billions of others as a result of a planetary plague, for instance. Attributional biases work against the risk side of the immortality coin too: because there’s no visible bad guy to fight, people aren’t as juiced up as they would be about, say, protesting a human being like Bush.

Another element working against the risk side of the coin is the assignment of credit: a research team may be the first to significantly extend human life, in which case the team and all their supporters get bragging rights. Prevention of existential risks is a bit hazier, consisting of networks of safeguards that each contribute a little bit toward lowering the probability of disaster. Existential risk prevention isn’t likely to work the way it does in the movies, where the hero punches out the mad scientist right before he presses the red button that says “Planet Destroyer”; instead, it will come from a cooperative network of individuals working to increase safety in the diverse areas from which risks could emerge: biotech, nanotech, and AI.

Present-day immortalists and transhumanists simply don’t care enough about existential risk. Many of them are at the same stage of ideological progression with respect to it as most of humanity is with respect to the specter of death: accepting, in denial, dismissive. There are few things less pleasant to contemplate than humanity destroying itself, but it must be done anyhow, because if we slip and fall, there’s no getting up.

The greatest challenge is that the likelihood of disaster per year must be decreased to very low levels — less than 0.001% or something — because otherwise the aggregate probability computed over a series of years will approach 1 at the limit. There are many risks that even distributing ourselves throughout space would do nothing to combat — rogue, space-going AI, replicators that eat asteroids and live off sunlight, agents that pursue reproduction at the exclusion of value structures such as conscious experiences. Space colonization is not our silver bullet, despite what some might think. Relying overmuch on space colonization to combat existential risk may give us a false sense of security.
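To make the compounding explicit (a standard survival calculation; the illustrative numbers are mine, not from the post): if the annual probability of disaster is $p$, the probability of surviving $n$ years is

$$P_{\text{survive}}(n) = (1 - p)^{n}.$$

At $p = 1\%$ per year, the chance of getting through 500 years is $(0.99)^{500} \approx 0.7\%$; at $p = 0.001\%$ per year it is $(0.99999)^{500} \approx 99.5\%$. Small annual risks are survivable over historical timescales only if they are kept very small indeed.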

Yesterday it hit the national news that synthetic life is on its way within 3 to 10 years. To anyone following the field, this comes as zero surprise, but there are many thinkers out there who might not have seen it coming. The Lifeboat Foundation, which saw this coming well in advance, set up the A-Prize as an effort to bring development of artificial life out into the open, where it should be. The A-Prize currently has a grand total of three donors: myself, Sergio Tarrero, and one anonymous donor. This is probably a result of insufficient publicity, though.

Genetically engineered viruses are a risk today. Synthetic life will be a risk in 3–10 years. AI could be a risk in 10 years, or it could be a risk now — we have no idea. The fastest supercomputers are already approximating the computing power of the human brain, but since an airplane is way less complex than a bird, we should assume that less-than-human computing power is sufficient for AI. Nanotechnological replicators, a distinct category of replicator that blurs into synthetic life at the extremes, could be a risk in 5–15 years — again, we don’t know. Better to assume they’re coming sooner, and be safe rather than sorry.

Once you realize that humanity has lived entirely without existential risks (except the tiny probability of asteroid impact) since Homo sapiens evolved over 100,000 years ago, and we’re about to be hit full-force by these new risks in the next 3–15 years, the interval between now and then is practically nothing. Ideally, we’d have 100 or 500 years of advance notice to prepare for these risks, not 3–15. But since 3–15 is all we have, we’d better use it.

If humanity continues to survive, the technologies for radical life extension are sure to be developed, taking into account economic considerations alone. The efforts of Aubrey de Grey and others may hurry it along, saving a few million lives in the process, and that’s great. But if we develop SENS only to destroy ourselves a few years later, it’s worse than useless. It’s better to overinvest in existential risk, encourage cryonics for those whose bodies can’t last until aging is defeated, and address aging once we have a handle on existential risk, which we quite obviously don’t. Remember: there will always be more people paying attention to radical life extension than existential risk, so the former won’t be losing much if you shift your focus to the latter. As fellow blogger Steven says, “You have only a small fraction of the world’s eggs; putting them all in the best available basket will help, not harm, the global egg spreading effort.”

For more on why I think fighting existential risk should be central for any life extensionist, see Immortalist Utilitarianism, written in 2004.