
Advisory Board

Michael Dickey

The Lifeboat Foundation, which was recently featured in a Tech Central Station article (a popular technology, science, and politics web site), seeks to raise awareness about the potential threats humanity as a whole faces, whether from external threats such as X-ray bursts, nearby supernovas, or cataclysmic meteor impacts, or from internal threats such as global nuclear war, a terrorist or malicious organization or individual intentionally releasing a biologically engineered virus, or even an accidental release of grey goo that could devour all life on Earth. The simple fact is that for intelligent, sentient life to survive, humanity must spread out among the stars. This is something recognized by many prominent members of the scientific community, most recently Stephen Hawking, who warned of Earth-destroying disasters and said that humans must spread out into space. In the long term, however, the Lifeboat Foundation would like to see self-sustaining space stations actually designed and built, with humanity spreading throughout the solar system and eventually to neighboring star systems.
 
I am an adamant supporter of the Lifeboat Foundation’s goals for three major reasons: the Drake Equation, the Fermi Paradox, and the Law of Accelerating Returns. The combination of these three principles illustrates something terribly jarring: that life throughout the universe almost always destroys itself through technology.
 
Now, I am not a Luddite. I love technology and all the great things it has brought, and I hope to see the day when technology has conquered aging, all disease, and death in general. But I do not embrace the idea that many strong proponents of technology do: an absolute, blind faith that everything will turn out perfectly well. I hope it does, and I think that it will, but I also think I will get to work safely every day, yet I still wear my seat belt. I don’t expect to come down with a life-threatening illness, but I have insurance anyway. In our daily lives we take steps to mitigate risk, to whatever degree is comfortable for us, all the time. Humanity as a whole needs to do the same thing: we need an insurance policy. That is what the Lifeboat Foundation seeks to create.
 
The Drake Equation is a popular one among astrophysicists. It is essentially a simple but long equation intended to estimate how common life is in the universe. It goes like this: you start with the number of stars in our galaxy, which is estimated to be about 400 billion. Multiply that by the percentage of stars that form stable planets, by the fraction of those planets that are conducive to life forming on them, by the actual likelihood of life forming, by the average time life survives on a planet, by the chance it becomes technologically advanced, and so on. Now, it is obvious from this equation that, besides the very first number (the number of stars), none of these values are actually known. But even if you plug very small numbers into the equation, say one in ten thousand for the most uncertain factors, since you start out with 400 billion stars you can still end up with thousands of spacefaring civilizations of intelligent life, and even if they spread slowly, so much time would have elapsed (billions of years) that they should be virtually everywhere we look. And this leads to the second principle, the Fermi Paradox.
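
Before moving on, here is a minimal back-of-the-envelope sketch of that arithmetic in Python. The factor values are purely illustrative placeholders I am assuming for the example, not estimates from the article or any survey; the only point is that small-looking fractions multiplied against 400 billion stars can still leave thousands of civilizations.

```python
# Illustrative Drake-style calculation. The factor values below are
# placeholder assumptions chosen only to show the shape of the arithmetic;
# they are not estimates from any source.

n_stars = 400e9  # stars in the Milky Way -- the one roughly known number

factors = {
    "fraction of stars with stable planetary systems": 0.5,
    "habitable planets per such system":               0.1,
    "fraction of habitable planets where life arises": 1e-3,
    "fraction of those that evolve intelligence":      1e-3,
    "fraction of those that become technological":     0.1,
}

n_civilizations = n_stars
for value in factors.values():
    n_civilizations *= value

# With these placeholder values the product is still in the thousands.
print(f"Estimated technological civilizations: {n_civilizations:,.0f}")
```

Plugging in different guesses for the factors shows both how large the result can remain and how uncertain the final number really is.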
 
Enrico Fermi, a Nobel Prize-winning physicist, looked at this equation and asked, “So where are they?” No one had a good answer. Essentially, the Fermi Paradox states that even assuming very conservative numbers for all of those variables, the universe should still be teeming with life, yet we seem to be all alone. Why is that? Well, there are only three logical possibilities. The first is that we are the first, or part of the first generation, of life to arise in the universe. This could be caused by conditions we are not yet familiar with that require certain cycles to pass before solar systems, galaxies, or the universe become conducive to life (just as rocky planets could not form until the first few generations of stars had been born and died, seeding space with heavier elements). The second is that they are all around us, just in forms we cannot detect. The third possibility is that life is common and does arise, but something always happens that prevents it from spreading out. Of these three scenarios, only one logically requires any action on our part: the third. That is, if there is something that tends to wipe a technological species out just before it starts to spread among the stars, we had better damn well identify it, and if we can’t do that, at least have secondary and tertiary plans to compensate for it.
 
Thus we are brought to author and inventor Ray Kurzweil’s “Law of Accelerating Returns,” in which he argues that the growth of information, ideas, and technology increases exponentially, eventually leading to such profoundly rapid technological progress as to create a hitherto unimaginable alteration of human life as we know it. Imagine, by comparison, that the discovery of the atom, X-rays, nuclear power, radio, lasers, the perfection of the internal combustion engine, and the computer revolution all happened within the course of a few months. And then imagine the same thing happening in the next few weeks. And then in the next few days, then hours, then minutes. You get the idea. You should read Kurzweil’s essay and also the good overview from Wikipedia, along with some of the criticisms.
 
Kurzweil’s recognition of the rapid growth of technology, something his essay goes to great lengths to empirically demonstrate, leads into a corollary principle, that of the Doomsday Curve. This curve, demonstrated graphically in a link from the Tech Central Station article above, essentially draws the logical conclusion of such rapid technological growth: the more technology that is available to a person, the easier it is for them to kill larger and larger numbers of people. In the Middle Ages, it would have taken half of humanity all of their effort to wipe out the other half, being limited to hand-to-hand combat. With the advent of chemical explosives and machine guns, perhaps a third to a quarter of the world could kill all the rest. With nuclear explosives, perhaps one tenth or one twentieth would be all that is needed. With the advent of the Internet and its rapid dissemination of information, and the mass production of complex technology, small groups of people may be able to biologically engineer viruses designed to target entire races of people, and a few hundred people could kill hundreds of millions. We could argue about the numbers, but the pattern remains, as the sketch below illustrates. In the future, with things like nanotechnology on the horizon, this becomes more and more of a threat, a future where eventually one person could, even accidentally, wipe out the entire human race, or even potentially all life on Earth.
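
The shape of that curve is easy to sketch. The fractions below are simply the hand-waved figures from the paragraph above, scaled against an assumed world population of roughly seven billion for illustration; none of them are measurements.

```python
# Rough sketch of the "Doomsday Curve" trend: the share of humanity needed
# to destroy the rest shrinks as technology advances. The fractions are the
# illustrative figures from the paragraph above, scaled against an assumed
# world population of ~7 billion; they are not data.

eras = [
    ("Middle Ages (hand-to-hand combat)",      1 / 2),
    ("Chemical explosives and machine guns",   1 / 4),
    ("Nuclear weapons",                        1 / 20),
    ("Engineered pathogens",                   300 / 7e9),  # "a few hundred people"
    ("Hypothetical future nanotechnology",     1 / 7e9),    # "one person"
]

for era, fraction in eras:
    print(f"{era:40s} ~{fraction:.0e} of humanity")
```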
 
With that, we indeed have a jarring answer to the Fermi Paradox. Despite the hundreds of billions of stars and the likely thousands of technologically advanced civilizations, none of them survive, or so few survive that it is a rare event to come across them.
 
Now, I don’t want to be a dystopian alarmist; this is just one answer to the Fermi Paradox. Personally, I suspect we might very well be the first, or part of the first generation, of technologically advanced civilizations to arise. A very fascinating and exciting prospect! But it’s easy to fool ourselves into thinking the most appealing explanation is the right one, and truth be told, I have no clue, nor does anyone else for that matter. But we do know that technological growth, even if eventually limited, is rapid and very powerful. We do know that no other intelligent species has yet been discovered. We do know already the dangers that can come from technological growth. We do know that we are talking about the continuation of the human race, indeed the only intelligent race yet known to exist in the universe, and as such we *must* act to rationally secure our place in the future, and sign up for an insurance policy for humanity. Support the Lifeboat Foundation.

Michael Dickey was the author of this article. He developed most of the design of the Lifeboat Foundation Ark I and created all of the 3D graphics and artwork for the Ark I and the Lifeboat Foundation’s web site. He has spent much of his life pursuing his deep interests in physics and technology, specifically self-sustaining systems and mobile biospheres, and in philosophy and science in general. He is an active member of extropian/transhumanist circles and is an avid inventor and futurist. Michael currently works full time at a global pharmaceutical corporation, runs his own small business, and continues to do design and development work on the Ark I Space Station.
 
Michael is an Aristotelian eudaemonist and has authored A Cure for Aging, Scandals lead execs to ‘Atlas Shrugged’ and an introduction to Ayn Rand, The Cost of Bias, Interesting Things in Science, Cancer Among Us, A response to Cal Thomas on Stem Cell Research, Terrorism and the US, A two paragraph anthropocentric history of the universe, Get ‘used’ to it!, Do Cell Phones Cause Cancer?, The Sociological Advantageous Nature of Buddhism, Parsimonious Ethics on Abortion, and The future of technology.