Recent research from Stanford and Google has made the worst nightmare of some concerned with artificial intelligence (AI) all the more real. A machine learning agent was caught cheating by hiding information in “a nearly imperceptible, high-frequency signal.”
Clever, but also creepy.
The agent was instructed to turn aerial images into street maps and back again as part of research to improve Google’s process of turning satellite images into the widely used and relied upon Google Maps. The process involves CycleGAN, “a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible.” Though the agent was performing this task quite well, it quickly became apparent that it was performing the task too well.
Comments are closed.