Today's AI news is that DeepMind's AlphaGo has won the first game against the Go world champion. As it happens, DeepMind's CEO gave a lecture at Oxford recently, and although I couldn't get there in person, the ubiquitous magic of tech allowed me to watch it streamed live. (I did manage to physically attend the first Ada Lovelace Memorial Lecture given by the awesome Barbara Liskov, a little while back.)
There's something missing from today's news reporting, which I happen to believe is significant.
Playing Go at this level requires a more "intuition"-based approach than chess, which is a significant difference between AlphaGo and Deep Blue (which beat the chess world champion back in the *gulp* last millennium). That's in the news reports, and correctly so.
The other distinction is that AlphaGo is not designed purely to play Go. It has learned the game, and an earlier DeepMind project learned to play dozens of other games (all the old Atari games!) from nothing more than an input of numbers, the ability to recognise patterns, and the goal of maximising its score. In other words, even that earlier software could adapt to a new game with new rules, something Deep Blue could never achieve: it would have required rewriting by its developers to cope with anything besides chess.
Although the overall constraint is game-playing, within that constraint the DeepMind AIs employ general learning algorithms (deep Q-learning, in the Atari work, via the "DQN" agent). And that, I suspect, is the most impressive part of all.
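For the curious, the core idea behind Q-learning is surprisingly compact. Here is a minimal sketch in plain Python: a tabular (not deep) version, run on a toy five-state corridor of my own invention rather than anything DeepMind used, but the update rule at its heart is the same one the DQN agent applies with a neural network standing in for the table.

```python
import random

# Toy environment (an assumption for illustration, not DeepMind's setup):
# a 5-state corridor. Start at state 0; reward 1.0 for reaching state 4.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # The Q-learning update: nudge Q(s,a) towards
            # r + gamma * max_a' Q(s', a').
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)  # the learned policy should favour moving right towards the goal
```

The key point, and what makes the approach "general", is that nothing in the learner knows the rules of the corridor: it only ever sees state numbers, a score, and which moves are available, exactly the kind of input-agnostic setup that let the Atari-playing software pick up dozens of different games.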
Meanwhile, I shouldn't be blogging... I've a book to write. Hence my vast silence for months now.
P.S. It belatedly occurs to me that my VERY FIRST STORY concerned an AI challenging its creator's father to a game of Go. That appeared just the other day, in 1992... Bloomin' heck!