Archive for the ‘singularity’ category: Page 91

Mar 19, 2013

Ten Commandments of Space

Posted in categories: asteroid/comet impacts, biological, biotech/medical, cosmology, defense, education, engineering, ethics, events, evolution, existential risks, futurism, geopolitics, habitats, homo sapiens, human trajectories, life extension, lifeboat, military, neuroscience, nuclear energy, nuclear weapons, particle physics, philosophy, physics, policy, robotics/AI, singularity, space, supercomputing, sustainability, transparency

1. Thou shalt first guard the Earth and preserve humanity.

Impact deflection and survival colonies hold the moral high ground above all other calls on public funds.

2. Thou shalt go into space with heavy lift rockets with hydrogen upper stages and not go extinct.

Continue reading “Ten Commandments of Space” »

Mar 4, 2013

Human Brain Mapping & Simulation Projects: America Wants Some, Too?

Posted in categories: biological, biotech/medical, complex systems, ethics, existential risks, homo sapiens, neuroscience, philosophy, robotics/AI, singularity, supercomputing

YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Both the Euro and American flavors are no Manhattan Project-scale undertaking, in the sense of urgency and motivational factors, but more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion and $1–3 billion, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Continue reading “Human Brain Mapping & Simulation Projects: America Wants Some, Too?” »

Feb 8, 2013

Machine Morality: a Survey of Thought and a Hint of Harbinger

Posted in categories: biological, biotech/medical, engineering, ethics, evolution, existential risks, futurism, homo sapiens, human trajectories, robotics/AI, singularity, supercomputing

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

Continue reading “Machine Morality: a Survey of Thought and a Hint of Harbinger” »

Feb 6, 2013

How can humans compete with singularity agents?

Posted in categories: ethics, futurism, philosophy, robotics/AI, singularity

It now appears that human intelligence is being largely superseded by robots and artificial singularity agents. Education and technology have no chance of making us far more intelligent. The question now is what our place is in this new world, where we are no longer the most intelligent kind of species.

Even if we develop new scientific and technological approaches, it is likely that machines will be far more efficient than us if these approaches are based on rationality.

IMO, in the near future we will only be able to compete in irrational domains, but I am not so sure that irrational domains cannot also be handled by machines.

Sep 26, 2012

On Leaving the Earth. Like, Forever. Bye-Bye.

Posted in categories: asteroid/comet impacts, cosmology, defense, engineering, existential risks, futurism, human trajectories, lifeboat, military, singularity, space


Technology is as Human Does

When one of the U.S. Air Force’s top future strategy guys starts dorking out on how we’ve gotta at least begin considering what to do when a progressively decaying yet apocalyptically belligerent sun begins BBQing the earth, attention is paid. See, none of the proposed solutions involve marinade or species-level acquiescence; they involve practical discussion on the necessity for super awesome technology on par with a Kardashev Type II civilization (one that’s harnessed the energy of an entire solar system).
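For a sense of scale, Carl Sagan’s commonly cited interpolation of the Kardashev scale maps a civilization’s total power use to a continuous level via K = (log₁₀ P − 6) / 10, with P in watts. A minimal sketch (the reference power figures are order-of-magnitude values, not precise numbers):

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Sagan's interpolation formula: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Rough reference points (orders of magnitude only):
# Type I  ~ a planetary power budget, about 1e16 W
# Type II ~ a whole star's output; the Sun radiates roughly 3.8e26 W
print(kardashev_level(1e16))    # Type I -> 1.0
print(kardashev_level(3.8e26))  # Type II territory -> ~2.06
print(kardashev_level(2e13))    # present-day humanity, ~20 TW -> ~0.73
```

By this yardstick, harnessing a solar system’s energy means jumping roughly ten orders of magnitude beyond humanity’s current power budget.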

Because Not if, but WHEN the Earth Dies, What’s Next for Us?
Head over to Kurzweil AI and have a read of Lt. Col. Peter Garretson’s guest piece. There’s perpetuation of the species stuff, singularity stuff, transhumanism stuff, space stuff, Mind Children stuff, and plenty else to occupy those of us with borderline pathological tech obsessions.

[BILLION YEAR PLAN — KURZWEIL AI]
[U.S. AIR FORCE BLUE HORIZONS FUTURE STUFF PROJECT]

Aug 15, 2012

Approaching the Great Rescue

Posted in categories: biological, biotech/medical, business, chemistry, complex systems, education, engineering, ethics, events, evolution, existential risks, futurism, geopolitics, homo sapiens, human trajectories, life extension, media & arts, neuroscience, philosophy, policy, singularity, sustainability, transparency

http://www.sciencedaily.com/releases/2012/08/120815131137.htm

One more step has been taken toward making whole body cryopreservation a practical reality. An understanding of the properties of water allows the temperature of the human body to be lowered without damaging cell structures.

Just as the microchip revolution was unforeseen, the societal effects of suspending death have been overlooked completely.

The first successful procedure to freeze a human being and then revive that person without damage at a later date will be the most important single event in human history. When that person is revived he or she will awaken to a completely different world.

Continue reading “Approaching the Great Rescue” »

Aug 13, 2012

The Electric Septic Spintronic Artilect

Posted in categories: biological, biotech/medical, business, chemistry, climatology, complex systems, counterterrorism, defense, economics, education, engineering, ethics, events, evolution, existential risks, futurism, geopolitics, homo sapiens, human trajectories, information science, military, neuroscience, nuclear weapons, policy, robotics/AI, scientific freedom, singularity, space, supercomputing, sustainability, transparency

AI scientist Hugo de Garis has prophesied that the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega mind coming into existence within the next few decades. I am actually not intentionally trying to write anything bizarre; it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

Jun 1, 2012

Response to the Global Futures 2045 Video

Posted in categories: futurism, human trajectories, nanotechnology, robotics/AI, scientific freedom, singularity, space

I have just watched this video by Global Futures 2045.

This is my list of things I disagree with:

It starts with scary words about how every crisis comes faster and faster. However, this is untrue. Many countries have been running deficits for decades. The financial crisis is no surprise. The reason the US has such high energy costs goes back to government decisions made in the 1970s. And many things that used to be crises no longer happen, like the Black Plague. We have big problems, but we’ve also got many resources we’ve built up over the centuries to help. Many of the challenges we face are political and social, not technical.

We will never fall into a new Dark Age. The biggest problem is that we aren’t advancing as fast as we could, and many are still starving, sick, etc. However, it has always been this way. The 20th century was very brutal! But we are advancing, and it is mostly known threats, like WMDs, that could cause a disaster. In the main, the world is getting safer every day as we better understand it.

Continue reading “Response to the Global Futures 2045 Video” »

May 14, 2012

Singularity and hacking

Posted in category: singularity

Using Large Hadron Colliders to break particles and explore new ways to understand our universe may be seen as a hacking attack by the Administrator of the universe. We can imagine that god could restore the universe to a previous version in order to neutralize the LHC hack. In the same way, the emerging singularity will probably try to break all the security rules humans have put in place to prevent it from accessing our real world. The main difference is that we have no way to do a big internet rollback if the singularity succeeds in breaking our rules. Therefore, we must be prepared to collaborate with the singularity rather than desperately trying to reduce its liberty.

Page 91 of 91