Archive for the ‘Artificial Intelligence’ tag: Page 3

Aug 31, 2017

AI Firm Focusing on Consciousness Publishes Frameworks

Posted in categories: alien life, cyborgs, robotics/AI, singularity, transhumanism

London-based AI start-up REZIINE has published the entire explanation and framework design for the creation of consciousness in machines.

“Consciousness Illuminated and the Reckoning of Physics” – a 525-page document – features:

  • The full explanation of consciousness and the AGI framework, including all designs, components, and algorithms;
  • The roadmap to Artificial Super Intelligence;
  • The AI genome for self-evolution; and
  • A full-scale physics framework, complete with experiments and explanations.

Describing his compact definition of consciousness as “the ability to make illogical decisions based on personal values”, founder Corey Reaux-Savonte goes on to say:

If consciousness is the ability to make illogical decisions based on personal values,

Read the full story at LinkedIn

Jun 8, 2017

Artificial General Intelligence (AGI): Future A to Z

Posted in categories: business, computing, cyborgs, engineering, ethics, existential risks, machine learning, robotics/AI, singularity

What is the ultimate goal of Artificial General Intelligence?

In this video series, the Galactic Public Archives takes bite-sized looks at a variety of terms, technologies, and ideas that are likely to be prominent in the future. Terms are regularly changing and being redefined with the passing of time. With constant breakthroughs and the development of new technology and other resources, we seek to define what these things are and how they will impact our future.


Jul 2, 2015

Elon Musk-backed Future of Life Institute Provides $7M in Safe AI Project Grants

Posted in categories: existential risks, policy, singularity

Read more

Jan 15, 2015

Who is FM2030?

Posted in categories: lifeboat, science, transhumanism

FM 2030 was, at various points in his life, an Iranian Olympic basketball player, a diplomat, a university teacher, and a corporate consultant. He developed his views on transhumanism in the 1960s and evolved them over the next thirty-something years. He was placed in cryonic suspension on July 8, 2000.

Oct 30, 2014

Amit Singhal (at Google): Will your computer plan change your life?

Posted in categories: lifeboat, posthumanism, robotics/AI, science

This archive file was compiled from an interview conducted at the Googleplex in Mountain View, California, 2013. In the discussion, Amit Singhal, a key figure in the evolution of Google’s search engine, broadly outlined the significant hurdles that stood in the way of achieving one of his long-held dreams: creating a true ‘conversational’ search engine. He also sketched out a vision of how the initial versions of such a system would, and just as importantly would not, attempt to assist the individuals it interacted with.

Though the vision was by design more limited and focused than a system capable of passing the famous Turing test, it nonetheless raised stimulating questions about the future relationships of humans and their ‘artificial’ assistants.

More about Amit Singhal:

Wikipedia:
en.wikipedia.org/wiki/Amit_Singhal

Google Search:
en.wikipedia.org/wiki/Google_Search

Oct 23, 2014

Who is Amit Singhal (at Google)?

Posted in categories: futurism, lifeboat, science, transhumanism

This archive file was compiled from an interview conducted at the Googleplex in Mountain View, California, 2013.

As late as the 1980s and the 1990s, the common person seeking stored knowledge would likely be faced with using an 18th century technology — the library index card catalogue — in order to find something on the topic he or she was looking for. Fifteen years later, most people would be able to search, at any time and any place, a collection of information that dwarfed that of any library. And unlike the experience with a library card catalogue, this new technology rarely left the user empty-handed.

Information retrieval had been a core technology of humanity since written language — but as an actual area of research it was so niche that before the 1950s, nobody had bothered to give the field a name. From a superficial perspective, the pioneering work in the area during the 1940s and 50s seemed to suggest it would be monumentally important to the future — but only behind the scenes. Information retrieval was to be the secret tool of the nation at war, or of the elite scientist compiling massive amounts of data. Increasingly, however, a visionary group of thinkers dreamed of combining information retrieval and the ‘thinking machine’ to create something that would be far more revolutionary for society.

Continue reading “Who is Amit Singhal (at Google)?” »

May 31, 2013

How Could WBE+AGI be Easier than AGI Alone?

Posted in categories: complex systems, engineering, ethics, existential risks, futurism, military, neuroscience, singularity, supercomputing

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial General Intelligence or Whole Brain Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g., Kurzweil’s threshold is the point at which the processing power he estimates is required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below suggesting that the imminence of a coming intelligence explosion is affected more by raw processing speed – instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than AGI alone – or rather, that using WBE to accelerate progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than pursuing AGI directly.
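The price-performance threshold Kurzweil uses can be made concrete with a back-of-envelope calculation. Below is a minimal Python sketch; every numeric value in it (the brain-simulation estimate, the starting price performance, the doubling time) is a hypothetical placeholder for illustration, not a figure taken from this essay:

```python
import math

# Back-of-envelope sketch of a Kurzweil-style price-performance threshold.
# All constants below are illustrative assumptions, not figures from the essay.
BRAIN_CPS = 1e16            # assumed calculations/sec to simulate a human brain
START_YEAR = 2013           # essay's publication year
CPS_PER_DOLLAR_START = 1e9  # assumed price performance (calc/sec per $) in the start year
DOUBLING_YEARS = 1.5        # assumed price-performance doubling time

def crossover_year(budget_dollars=1000):
    """Year when `budget_dollars` of hardware reaches BRAIN_CPS,
    assuming price performance doubles every DOUBLING_YEARS."""
    doublings_needed = math.log2(BRAIN_CPS / (budget_dollars * CPS_PER_DOLLAR_START))
    return START_YEAR + doublings_needed * DOUBLING_YEARS
```

Under these placeholder assumptions the $1,000 crossover lands in the early 2030s; the essay’s point is that raw IPS, not this dollar figure, may be the better predictor.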

Loaded Uploads:

Continue reading “How Could WBE+AGI be Easier than AGI Alone?” »

Mar 4, 2013

Human Brain Mapping & Simulation Projects: America Wants Some, Too?

Posted in categories: biological, biotech/medical, complex systems, ethics, existential risks, homo sapiens, neuroscience, philosophy, robotics/AI, singularity, supercomputing

YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Neither the Euro nor the American flavor is a Manhattan Project-scale undertaking in the sense of urgency and motivational factors; they’re more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion and $1–3 billion, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Continue reading “Human Brain Mapping & Simulation Projects: America Wants Some, Too?” »

Feb 8, 2013

Machine Morality: a Survey of Thought and a Hint of Harbinger

Posted in categories: biological, biotech/medical, engineering, ethics, evolution, existential risks, futurism, homo sapiens, human trajectories, robotics/AI, singularity, supercomputing

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

Continue reading “Machine Morality: a Survey of Thought and a Hint of Harbinger” »

Page 3 of 3