Archive for the ‘robotics/AI’ category: Page 1438

Jun 29, 2019

David Cage interview: Researching Detroit: Become Human’s back story of rogue AI

Posted in category: robotics/AI

David Cage wrote a nearly 4,000-page script for Detroit: Become Human, which was steeped in research about artificial intelligence.

Jun 29, 2019

The Top 5 Artificial Intelligence Books to Read in 2019

Posted in categories: futurism, robotics/AI

Here is a list of the top 5 books on artificial intelligence for beginners. These books will help you understand the current landscape of the technology as well as learn what the future holds.

Jun 29, 2019

‘DeepNude’ app that ‘undresses’ women shut down after furor

Posted in categories: entertainment, robotics/AI

DeepNude has already put its clothes back on.
WASHINGTON, United States—The creators of an application allowing users to virtually “undress” women using artificial intelligence have shut it down after a social media uproar over its potential for abuse.

The creators of “DeepNude” said the software was launched several months ago for “entertainment” and that they “greatly underestimated” demand for the app.

Jun 29, 2019

Samsung’s Creepy New AI Can Generate Talking Deepfakes From a Single Image

Posted in categories: information science, robotics/AI

Our deepfake problem is about to get worse: Samsung engineers have now developed realistic talking heads that can be generated from a single image, so AI can even put words in the mouth of the Mona Lisa.

The new algorithms, developed by a team from the Samsung AI Center and the Skolkovo Institute of Science and Technology, both in Moscow, work best with a variety of sample images taken at different angles – but they can be quite effective with just one picture to work from, even a painting.
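
Under the hood (per the team's paper, “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models”), an embedder network distills the reference photo or photos into an identity vector, and a generator renders new frames from facial-landmark poses conditioned on that vector. Here is a toy PyTorch sketch of that inference path; every size, layer choice, and name below is an illustrative assumption, not the authors' actual code:

```python
# Toy sketch of the embedder + generator inference path described above.
# All sizes and layer choices are illustrative assumptions; the real models
# are far larger, adversarially trained, and meta-learned over many faces.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Maps reference frames (plus rasterized landmarks) to an identity vector."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.ReLU(),   # RGB frame + RGB landmark sketch
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, dim),
        )

    def forward(self, frames, landmarks):
        # Average over however many reference images exist; with a single
        # photo (the Mona Lisa case) the "average" is just that one embedding.
        return self.net(torch.cat([frames, landmarks], dim=1)).mean(0, keepdim=True)

class Generator(nn.Module):
    """Renders a frame from target landmarks, conditioned on the identity vector."""
    def __init__(self, dim=512):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.style = nn.Linear(dim, 128)  # injects identity into the feature maps
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, target_landmarks, identity):
        h = self.enc(target_landmarks)
        h = h + self.style(identity)[..., None, None]  # broadcast over H and W
        return self.dec(h)

# One reference photo is enough to drive the head with any landmark sequence:
embedder, generator = Embedder(), Generator()
photo = torch.randn(1, 3, 64, 64)       # a single source image, e.g. a painting
photo_lm = torch.randn(1, 3, 64, 64)    # its detected facial landmarks, rasterized
identity = embedder(photo, photo_lm)
new_pose = torch.randn(1, 3, 64, 64)    # landmarks from a driving video frame
frame = generator(new_pose, identity)   # -> (1, 3, 64, 64) synthesized frame
```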

(Image: Egor Zakharov)

Jun 28, 2019

Brain cells for 3D vision discovered

Posted in categories: health, robotics/AI

Scientists at Newcastle University have discovered neurons in insect brains that compute 3D distance and direction. Understanding how they work could help improve vision in robots.
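
The computation these neurons appear to perform is classic stereo triangulation: the nearer an object, the larger the disparity between the two eyes' views. A robot can apply the same relation directly; here is a minimal sketch (the camera parameters are invented for illustration):

```python
# Minimal sketch of the stereo triangulation a robot (or mantis) needs for
# 3D vision: depth follows from the horizontal disparity between two views.
# Focal length, baseline, and pixel values below are illustrative, not from the study.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole-stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (target in front of both eyes)")
    return focal_px * baseline_m / disparity_px

# A target seen 12 px apart by two cameras 6 cm apart, focal length 700 px:
print(depth_from_disparity(focal_px=700, baseline_m=0.06, disparity_px=12))  # -> 3.5 m
```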

Jun 28, 2019

Waymo starts self-driving pick-ups for Lyft riders

Posted in categories: robotics/AI, transportation

Autonomous driving company Waymo has launched its tie-in with Lyft, using a “handful” of vehicles to pick up riders in its Phoenix testing zone, per CNBC. To be eligible, Lyft users requesting a ride have to be taking a trip that both starts and ends in the area of Phoenix that Waymo has already blocked off for its own autonomous testing.
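
That eligibility rule is essentially a geofence check: both trip endpoints must fall inside the test-zone polygon. A rough sketch of how such a check might look (the polygon coordinates are invented, and this is not Waymo's or Lyft's actual code):

```python
# Sketch of the service-area eligibility rule described above: a ride qualifies
# only if pickup and dropoff both fall inside the geofenced test zone.
# The boundary coordinates are hypothetical.
from shapely.geometry import Point, Polygon

# Hypothetical test-zone boundary as (longitude, latitude) corners near Phoenix.
SERVICE_AREA = Polygon([(-112.10, 33.25), (-111.70, 33.25),
                        (-111.70, 33.45), (-112.10, 33.45)])

def trip_is_eligible(pickup: tuple, dropoff: tuple) -> bool:
    """Both endpoints must lie inside the autonomous-testing geofence."""
    return SERVICE_AREA.contains(Point(pickup)) and SERVICE_AREA.contains(Point(dropoff))

print(trip_is_eligible((-111.93, 33.31), (-111.84, 33.39)))  # True: both in zone
print(trip_is_eligible((-111.93, 33.31), (-112.30, 33.50)))  # False: dropoff outside
```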

Fewer than 10 cars are on the road so far; Waymo plans to expand to 10 total for this trial but isn't there yet. That means the number of people who'll get this option probably isn't astronomical, but riders who are opted in will get to choose between the autonomous option via one of Waymo's vans (with a safety driver on board) and a traditional Lyft.

Waymo and Lyft announced their partnership back in May, and the company still plans to continue operating its own Waymo One commercial autonomous ride-hailing service alongside the Lyft team-up.

Jun 28, 2019

New AI programming language goes beyond deep learning

Posted in category: robotics/AI

General-purpose language works for computer vision, robotics, statistics, and more.

Jun 28, 2019

MIT’s new interactive machine learning prediction tool could give everyone AI superpowers

Posted in categories: biotech/medical, business, robotics/AI

Soon, you might not need anything more specialized than a readily accessible touchscreen device and whatever data sets you already have in order to build powerful prediction tools. A new experiment from MIT and Brown University researchers has added a capability to their ‘Northstar’ interactive data system that can “instantly generate machine-learning models” to use with existing data sets to generate useful predictions.

One example the researchers provide is that doctors could use the system to predict the likelihood that their patients will contract specific diseases based on their medical history. Or, they suggest, a business owner could use historical sales data to develop more accurate forecasts, quickly and without a ton of manual analytics work.

Researchers are calling this feature the Northstar system’s “virtual data scientist” (or VDS), and it sounds like it could actually replace the human equivalent, especially in settings where one would never be readily available or budgeted for anyway. Your average doctor’s office doesn’t have a dedicated data scientist on staff, for instance, nor do most small- to medium-sized businesses. Independently owned and operated coffee shops and retailers definitely wouldn’t otherwise have access to this kind of insight.
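
The article doesn't show Northstar's internals, but the “virtual data scientist” workflow it describes has the same shape as off-the-shelf AutoML: hand over a table, name the column to predict, and let a search fit and compare candidate models. A rough scikit-learn stand-in (the file and column names are hypothetical, and this is not Northstar's implementation):

```python
# Sketch of the "virtual data scientist" workflow described above, using
# scikit-learn as a stand-in -- NOT Northstar's implementation. The CSV and
# column names are invented for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def virtual_data_scientist(df: pd.DataFrame, target: str):
    """Fit several candidate models on a table; return the best by CV accuracy."""
    X, y = df.drop(columns=[target]), df[target]
    candidates = [
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=200),
    ]
    scored = [(cross_val_score(m, X, y, cv=5).mean(), m) for m in candidates]
    best_score, best_model = max(scored, key=lambda t: t[0])
    return best_model.fit(X, y), best_score

# e.g. a clinic predicting a disease flag from historical records (hypothetical file):
# model, acc = virtual_data_scientist(pd.read_csv("patients.csv"), target="has_disease")
```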

Jun 28, 2019

I welcomed our new robot overlords at Amazon’s first AI conference

Posted in categories: robotics/AI, space

Walking the show floor at Amazon re:MARS.

Jun 28, 2019

Severely Disabled People Mind-Control a Robotic Arm via EEG

Posted in categories: biotech/medical, robotics/AI

Scientific collaborators from Carnegie Mellon University and the University of Minnesota have created a way for people to control a robotic arm using a non-invasive brain-computer interface (BCI). Previously, electrode arrays implanted in the brain have been necessary to give severely disabled people the ability to manipulate an external robot, because implants gather more actionable signal information by sitting right on the surface of the brain. Avoiding dangerously invasive brain surgery to place these implants, though, is a big goal in the field of brain-computer interfaces.

The Carnegie Mellon team turned to newly developed sensing and machine learning methods to accurately read signals coming from deep within the brain, relying only on an external electroencephalography (EEG) cap for signal gathering. The system can quickly improve both its own performance and that of the person using it, achieving drastically better results than previous solutions. Volunteers using the technology were put through a pursuit task and a training regimen to improve their engagement while the system analyzed their brain signals.
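
The article doesn't detail the decoding pipeline, but a typical EEG-based BCI of this kind filters the raw signal to the motor-imagery band, reduces each window to band-power features, and regresses those onto a continuous control signal such as 2D arm velocity. A rough sketch of that generic pipeline, with synthetic data standing in for a real calibration session (all parameters are assumptions, not the CMU team's method):

```python
# Sketch of a generic EEG decoding pipeline: bandpass-filter the raw signal to
# the motor-imagery (mu/beta) band, reduce each window to band-power features,
# and regress those onto a continuous 2D velocity command. This illustrates the
# general technique only; the data and parameters are invented.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import Ridge

FS = 250  # sampling rate in Hz (typical for EEG caps)

def bandpower_features(eeg: np.ndarray, lo=8.0, hi=30.0) -> np.ndarray:
    """eeg: (windows, channels, samples) -> per-channel log band power."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=-1)
    return np.log((filtered ** 2).mean(axis=-1))

# Synthetic stand-in for a calibration session: 500 windows, 32 channels, 1 s each.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((500, 32, FS))
velocity = rng.standard_normal((500, 2))  # target 2D velocities during the pursuit task

X = bandpower_features(eeg)
decoder = Ridge(alpha=1.0).fit(X, velocity)       # calibrate on the session
v = decoder.predict(bandpower_features(eeg[:1]))  # -> (1, 2) velocity command
```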
