Right now it’s easiest to think of an artificial intelligence algorithm as a specific tool, like a hammer. A hammer is really good at hitting things, but when you need to cut something in half, it’s back to the toolbox for a saw. Need a face recognized? Train a facial recognition algorithm, but don’t ask it to recognize cows.
Alphabet’s AI research arm, DeepMind, is trying to change that with a new algorithm that can learn more than one skill. Algorithms that learn multiple skills could make it far easier to add new languages to translators, remove bias from image recognition systems, or let systems apply existing knowledge to new, complex problems. The research, published in Proceedings of the National Academy of Sciences this week, is preliminary, as it only tests the algorithm on a set of Atari games, but it shows that multi-purpose algorithms are actually possible.
The problem DeepMind’s research tackles is called “catastrophic forgetting,” the company writes. If you train an algorithm to recognize faces and then try to train it again to recognize cows, it will forget faces to make room for all the cow-knowledge. Modern artificial neural networks use millions of mathematical equations to calculate patterns in data, which could be the pixels that make a face or the series of words that make a sentence. These equations are interconnected, and some depend so heavily on others that even a slight tweak for a different task can make the whole network begin to fail. DeepMind’s new algorithm identifies and protects the equations most important for carrying out the original task, while letting the less-important ones be overwritten.
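The core of that protect-the-important-parts idea can be pictured as a penalty term: when training on the new task, each parameter is anchored to its old value, and changing a parameter costs more the more it mattered to the original task (in the paper, importance comes from the Fisher information). The function name, numbers, and importance values below are illustrative, a toy sketch rather than DeepMind’s actual implementation:

```python
import numpy as np

def protection_penalty(params, old_params, importance, lam=1.0):
    """Quadratic penalty that pulls parameters back toward the values
    they had after the first task, weighted by per-parameter importance."""
    return 0.5 * lam * np.sum(importance * (params - old_params) ** 2)

# Toy network with two parameters. The first is crucial to task A,
# the second barely matters (importance values are made up).
old = np.array([1.0, -2.0])          # parameters after learning task A
importance = np.array([10.0, 0.01])  # e.g. diagonal Fisher information

overwrote_important = np.array([0.0, -2.0])    # task B changed parameter 0
overwrote_unimportant = np.array([1.0, 0.0])   # task B changed parameter 1

# Overwriting the important parameter is penalized far more, so training
# on task B is steered toward reusing the unimportant one instead.
print(protection_penalty(overwrote_important, old, importance))
print(protection_penalty(overwrote_unimportant, old, importance))
```

In practice this penalty is simply added to the new task’s loss, so gradient descent trades off learning task B against disturbing what task A needed.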