Google’s AlphaGo already beat us puny humans to become the best at the ancient Chinese board game of Go. Now, it’s done with humans altogether. DeepMind, the Alphabet subsidiary behind the artificial intelligence, just announced AlphaGo Zero. The latest iteration of the program is the most advanced yet, outperforming all previous versions. It’s also different from its predecessors in one significant way: whereas the older AlphaGos trained on thousands of human amateur and professional games, Zero forgoes human insight entirely. Like the unpopular kid in class, it learns simply by playing alone, against itself.
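To make the idea concrete, here is a deliberately tiny, hypothetical sketch of self-play learning in Python. It swaps Go for a trivial stone-taking game (Nim), a lookup table for a deep neural network, and drops the tree search entirely, so it illustrates only the "learn by playing yourself" loop, not DeepMind's actual system; the function and parameter names are invented for the example.

```python
import random
from collections import defaultdict

def self_play_train(pile_size=10, episodes=20000, epsilon=0.1, lr=0.1):
    """Learn a toy game (Nim: take 1-3 stones, last stone wins) purely by self-play."""
    # value[s] = estimated probability that the player *to move* with s stones wins
    value = defaultdict(lambda: 0.5)
    value[0] = 0.0  # no stones left: the player to move has already lost

    for _ in range(episodes):
        stones = pile_size
        trajectory = []  # stone counts at which each move was made
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < epsilon:
                move = random.choice(moves)  # occasional exploration
            else:
                # greedy: hand the opponent the position it is least likely to win
                move = min(moves, key=lambda m: value[stones - m])
            trajectory.append(stones)
            stones -= move

        # The side that made the final move won; walk the game backwards,
        # nudging each visited position toward the observed result.
        for i, s in enumerate(reversed(trajectory)):
            target = 1.0 if i % 2 == 0 else 0.0
            value[s] += lr * (target - value[s])
    return value

if __name__ == "__main__":
    learned = self_play_train()
    # In this Nim variant, multiples of 4 are lost positions for the player to move;
    # the self-taught values should end up low there and high elsewhere.
    for s in range(1, 11):
        print(f"{s:2d} stones -> estimated win chance {learned[s]:.2f}")
```

No human games, no hand-crafted strategy: the agent discovers which positions are good purely from the outcomes of games it plays against itself, which is the core principle AlphaGo Zero applies at vastly greater scale.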
What sounds like a sad, lonesome existence is already paying dividends. Zero whitewashed the previous (champion-beating) version of AlphaGo by 100 games to nil, and that victory came after just three days of training. After 40 days of self-play, it beat the Master version (the same program that triumphed over world number one Ke Jie in May) 89-11, making it "arguably the strongest Go player in history."
There are other technical elements that define the new AI, which you can dig into courtesy of DeepMind’s paper, published in the scientific journal Nature. But removing the "constraints of human knowledge" has been the most liberating factor, according to the company’s CEO, Demis Hassabis.
In doing so, DeepMind is even closer to clearing one of the biggest hurdles facing AI: the reliance on vast amounts of training data. Whether this approach will work outside the confines of a strategic board game, however, remains to be seen. DeepMind, at least, believes it could have far-reaching implications. "If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society," the company writes in its blog post.
from Engadget http://engt.co/2hPHFou