AI systems typically rely on large-scale, human-supplied datasets to learn. AlphaGo Zero, however, has successfully taught itself.
Google’s latest artificial intelligence, AutoML, can now write code better than its human counterparts can, after teaching itself basic programming. Google’s other recent AI success is AlphaGo. A year ago, AlphaGo beat the world’s best Go player, but that system has now been beaten every single time by its newest iteration, AlphaGo Zero.
“We’ve actually removed the constraints of human knowledge”
This development changes the game for artificial intelligence, because in a self-teaching setup there is no chance of human error being transferred to the AI. A flawed human-supplied dataset can lead to a flawed algorithm, but when the AI develops the algorithm entirely by itself, that source of mistakes disappears.
AlphaGo Zero blows its predecessor out of the water
Just over a year ago, AlphaGo beat the 18-time Korean world Go champion for the first time, surprising the world with its ability. Now, AlphaGo Zero has blown its predecessor out of the water.
The ancient board game Go may seem like a trivial task for an AI to learn, but with roughly 10^170 possible board configurations, playing the game well, and building an algorithm to do so perfectly, involves an enormous amount of complex information. It is for this reason that AlphaGo Zero has the potential to work with other data, in domains such as particle physics, quantum chemistry, or drug discovery.
AlphaGo Zero is also more efficient: the previous AlphaGo ran on 48 TPUs (AI processors built by Google), whereas the new version uses only four. DeepMind co-founder Demis Hassabis has explained that AlphaGo can be thought of as a very good machine for searching through complicated data, and that AlphaGo Zero could be repurposed for far broader applications.
How does AlphaGo Zero work?
AlphaGo Zero becomes its own teacher. It does this through a form of reinforcement learning, starting off with a blank neural network. Combining this neural network with a search algorithm, it plays games of Go against itself. The neural network then learns to predict moves.
This updated neural network is then recombined with the search algorithm, and the process repeats, with AlphaGo Zero learning more with each game it plays. The quality of the self-play games improves: through constant practice, AlphaGo Zero’s neural network becomes more and more refined, increasing its knowledge by learning from itself. And since AlphaGo Zero is the strongest Go player in the world, there’s no one better to learn from.
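The loop described above can be sketched in a few lines of Python. This is a heavily simplified toy, not DeepMind’s code: the "network" is just a table of move scores, the "search" refines those scores with random playouts, and a training step nudges the network toward the search’s verdict. All names (`search`, `train_step`, `self_play`, the 9-move board) are illustrative assumptions.

```python
import random

MOVES = list(range(9))   # stand-in for a Go board's legal moves (toy assumption)
TARGET = 4               # pretend the centre move is objectively best (toy assumption)

def search(scores, n_playouts=50):
    """Refine the network's raw move scores by sampling playouts."""
    visits = {m: 1e-9 for m in MOVES}
    for _ in range(n_playouts):
        # sample a move, biased by the current network's scores
        m = random.choices(MOVES, weights=[scores[mv] for mv in MOVES])[0]
        # toy "playout result": the target move wins far more often
        visits[m] += 1.0 if m == TARGET else 0.1
    total = sum(visits.values())
    return {m: v / total for m, v in visits.items()}

def train_step(scores, search_probs, lr=0.3):
    """Nudge the network's scores toward the search's improved policy."""
    return {m: (1 - lr) * scores[m] + lr * search_probs[m] for m in MOVES}

def self_play(iterations=200, seed=0):
    random.seed(seed)
    # blank network: a uniform policy, knowing nothing about the game
    scores = {m: 1.0 / len(MOVES) for m in MOVES}
    for _ in range(iterations):
        probs = search(scores)              # search improves on the raw network
        scores = train_step(scores, probs)  # network learns from the search
    return scores

scores = self_play()
best = max(scores, key=scores.get)
```

Even starting from a uniform "blank" policy, the loop converges on the strong move, because each iteration trains the network on a slightly better teacher: its own search-improved self.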
Earlier versions of AlphaGo used a “policy network” to select the next move to play and a “value network” to predict the winner. AlphaGo Zero combines these into a single neural network, meaning that it can train itself more efficiently.
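A rough sketch of that architectural change: one network with a shared trunk feeding two output heads, one for the move policy and one for the win prediction. This toy class is purely illustrative (the trunk, heads, and board encoding are all invented for the example), but it shows the shape of the design.

```python
class DualHeadNetwork:
    """Toy single network with a policy head and a value head (illustrative only)."""

    def __init__(self, n_moves):
        self.n_moves = n_moves

    def trunk(self, board):
        # shared feature extraction; here just a toy summary of the position,
        # where each board entry is +1 (own stone), -1 (opponent), or 0 (empty)
        return sum(board)

    def policy_head(self, features):
        # "policy": a probability per legal move (uniform in this toy)
        return [1.0 / self.n_moves] * self.n_moves

    def value_head(self, features):
        # "value": predicted chance of winning, squashed into [-1, 1]
        return max(-1.0, min(1.0, features / 10.0))

    def predict(self, board):
        # both heads share the same trunk, so one forward pass serves both
        features = self.trunk(board)
        return self.policy_head(features), self.value_head(features)

net = DualHeadNetwork(n_moves=9)
policy, value = net.predict([1, 0, -1, 1, 0, 0, 0, 0, 0])
```

Because the two heads share the trunk, every self-play game trains both the move selection and the win prediction at once, which is one reason a single network can be trained more efficiently than two separate ones.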
Image from http://deepmind.com/