As a participant in the Amazon Services LLC Associates Program, this site may earn from qualifying purchases. We may also earn commissions on purchases from other retail websites.
The pace at which artificial intelligence is advancing may seem terrifying to some. Google’s DeepMind has revealed another major advance: one of its machines has ‘mastered’ the ancient Chinese game of Go without the aid of humans.
What we call the future, that excitable and sometimes creepy worldview that science fiction films show us, can start with things like this.
A sophisticated artificial intelligence program called AlphaGo Zero, developed by Google’s DeepMind, has been able to teach itself to master the classic strategy game of Go, which many consider more complex than chess.
And it has done it from scratch, at an incredible speed, and without human intervention.
Research on artificial intelligence has advanced rapidly in a wide variety of fields, but its great challenge is to develop algorithms that learn difficult concepts from a blank slate and reach superhuman competence. And that, according to the journal Nature, is what the new program has achieved.
A first version of the AI, AlphaGo, became famous in 2016 by defeating the world champion of Go Lee Sedol in a tournament.
But to achieve this, the AI was trained for several months through supervised learning based on millions of expert human moves, combined with self-play reinforcement learning.
The program required 48 TPUs (specialized chips designed by Google to accelerate neural-network computations).
But AlphaGo Zero is much smarter.
It learns solely by playing against itself, starting from random moves, with only the board and stones as input and no human telling it how to play. It has become its own teacher, improving with every game of self-play. It uses only four TPUs and takes just 0.4 seconds to “think” through each move.
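The self-play idea can be illustrated with a toy example. The sketch below is a minimal tabular value-learning agent for tic-tac-toe, purely illustrative and vastly simplified compared with AlphaGo Zero’s actual architecture (a deep neural network guided by Monte Carlo tree search): starting from random play, it estimates how good each position is using nothing but the outcomes of its own games.

```python
import random

# Illustrative only: a tiny self-play learner for tic-tac-toe, in the spirit
# of (but far simpler than) AlphaGo Zero. The value table maps a board state
# to an estimated value for the player about to move there.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_train(episodes=5000, alpha=0.2, eps=0.1, seed=0):
    rng = random.Random(seed)
    value = {}  # state string -> value for the player to move
    for _ in range(episodes):
        board = ["."] * 9
        player = "X"
        history = []  # (state, player-to-move) pairs visited this game
        while True:
            moves = [i for i, c in enumerate(board) if c == "."]
            w = winner(board)
            if w or not moves:
                # Terminal: +1 win / -1 loss / 0 draw, from X's perspective.
                outcome = 0.0 if w is None else (1.0 if w == "X" else -1.0)
                for state, pl in history:
                    target = outcome if pl == "X" else -outcome
                    v = value.get(state, 0.0)
                    value[state] = v + alpha * (target - v)
                break
            history.append(("".join(board), player))
            if rng.random() < eps:
                move = rng.choice(moves)  # occasional exploration
            else:
                # Greedy: pick the move whose resulting state is worst
                # for the opponent (their value, negated, is ours).
                def score(m):
                    board[m] = player
                    s = "".join(board)
                    board[m] = "."
                    return -value.get(s, 0.0)
                move = max(moves, key=score)
            board[move] = player
            player = "O" if player == "X" else "X"
    return value
```

All names and parameters here (`alpha`, `eps`, the tabular value store) are assumptions for the sketch; the real system replaces the table with a deep network and the greedy one-step lookahead with tree search, but the core loop — play yourself, learn from the result, repeat — is the same.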
After only three days of training, which included almost 5 million self-play games (compared to the 30 million games, spread over several months, needed for the first version), the new software was already prepared to outdo human players and defeat all previous versions of AlphaGo a hundred games to zero.
It even rediscovered some of the game principles that humans took thousands of years to conceptualize, while also developing novel strategies of its own.
Go is just a game, but not just any game.
Very popular in countries such as China, South Korea, and Japan, its objective is to conquer as much territory as possible by placing black and white stones on a board.
The rules are simple, but the number of possible positions is astronomical.
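A quick back-of-the-envelope calculation shows the scale involved. Each of the 361 intersections of a 19×19 board can be empty, black, or white, giving a crude upper bound on the number of board configurations; the true count of legal positions, roughly 2×10^170, is smaller, but still dwarfs the estimated 10^80 atoms in the observable universe.

```python
# Crude upper bound on Go board configurations: 3 states per intersection.
board_points = 19 * 19            # 361 intersections on the board
upper_bound = 3 ** board_points   # empty / black / white at each point

# The bound has 173 decimal digits, i.e. it is on the order of 10**172.
print(len(str(upper_bound)))
```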
Edward Lasker, a chess master and great go enthusiast, went as far as to say that “if intelligent life forms exist elsewhere in the Universe, they will almost certainly play Go.”
In a sense, DeepMind’s AI has shown that Lasker may not be wrong.