This news has flown way under the radar compared to the previous match.
Cho Chikun is a legendary player, and was top of the world in the 70's. From that era, along with Takemiya Masaki, he was one of the only ones who kept playing and kept winning titles even in old age.
The style of 70's players was artistic and not as sports-like as the Koreans' when they rose in the 90's. They always showed a higher level of philosophy and art in their games, but it was ultimately crushed by the "machine-like" tactical capacity of Korean players.
Cho Chikun had very little chance of beating Lee Sedol. But he was confident of beating Zen, and he did something that never happened in the Lee Sedol matches: he came back from a losing game, against a machine that looked like it would never relinquish its edge.
As a retired semi-professional Go player who learnt by replaying Cho Chikun's games, it is exciting to see the Japanese artistic and humane vision dominate the computer over the Korean tactical, sports-minded way.
The news flies under the radar because for most people Go is now solved (compare to what happened after Deep Blue's win). There's a small blemish in that Ke Jie is still rated higher than AlphaGo. Perhaps they will rectify that, perhaps not.
Cho Chikun had a large advantage over Lee Sedol: his opponent (Zen) and similar programs (Crazy Stone, Leela) are simply available for him to practice against and probe for weaknesses.
Then of course there's also a significant reduction in the speed of the hardware.
Both of those matter a lot. I won't make a firm statement about how relevant the style argument you mention is, but really, don't underestimate those two.
> The news flies under the radar because for most people Go is now solved (compare to what happened after Deep Blue's win).
"Machines reaching a level of play that is consistently better than humans" and "the game is solved" are two completely different statements and it isn't particularly useful to conflate them.
We will likely never be able to solve Go in the formal sense.
I don't think there is a factual disagreement here, just a terminology issue. Mathematical jargon is not quite the same language as plain English. In the former, 'solved' by default means you have found the mathematically perfect solution. In the latter, 'solved' by default just means you have found a solution good enough for practical purposes.
>"Machines reaching a level of play that is consistently better than humans" and "the game is solved" are two completely different statements and it isn't particularly useful to conflate them.
Very good point
>We will likely never be able to solve Go in the formal sense.
I'm not sure it would be precise to say chess will be solved either. The number of outcomes in a 50-move chess game approaches Graham's number -- a number larger than the number of atoms in the observable universe.
In a 100-move game, the number would be many orders of magnitude larger than an already incomprehensibly large number. So chess, by a precise definition of the word, will probably also never be solved.
Edit: It's kind of mind boggling to think about every atom in the universe representing one position in a game of chess and there not being enough atoms in the universe to represent each one.
Thanks, I was wrong about that. I could have sworn I read that somewhere; turns out it was on a chess forum -- not a source I should have committed to long-term memory.
I did find this interesting statement about Graham's number (unsure of the veracity):
> Graham's number is greater than all of the Planck lengths in the universe. There literally isn't room in the universe to write it out in numerical form.
But according to the Wikipedia article on the Shannon number, the number of possible outcomes in chess does still far exceed the number of atoms in the universe:
>Allis also estimated the game-tree complexity to be at least 10^123, "based on an average branching factor of 35 and an average game length of 80". As a comparison, the number of atoms in the observable universe, to which it is often compared, is estimated to be between 4×10^79 and 4×10^81.
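That figure is easy to reproduce: an average branching factor of 35 over 80 plies gives 35^80, i.e. about 10^123.5. A quick sketch (the 35 and 80 are Allis's averages from the quote):

```python
import math

# Allis's averages from the quote above.
branching_factor = 35
game_length_plies = 80

# log10(35^80) = 80 * log10(35)
log10_tree_size = game_length_plies * math.log10(branching_factor)
print(f"game-tree size ~ 10^{log10_tree_size:.1f}")  # ~10^123.5

# Compare against the upper estimate for atoms in the observable universe.
log10_atoms = 81
print(f"exceeds the atom count by a factor of ~10^{log10_tree_size - log10_atoms:.1f}")
```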
This is a bad kind of argument. There are more 16x16 sudoku boards, and ways of dealing a deck of cards, than there are atoms in the Universe too, but Sudoku and many card games are easy for computers to solve.
Well, you should volunteer on the Stockfish engine and show everyone how it's done.
I don't believe your comparison between fundamentally different games is enough to support your conclusion that an extremely large number of states does not, by itself, pose a challenge for computer programs.
Rather, to the best of my knowledge, to support your conclusion you would need to show that there exists a method for reducing the number of permutations to a manageable number while still arriving at an optimal solution. Apparently such a method exists for Sudoku, since it is easy to solve, but nothing like it yet exists in the realm of human knowledge for chess.
The fact is that Sudoku and many card games are easy to solve, while chess is a much, much more challenging problem for a computer program, and a lot of very smart people have worked on it.
I don't disagree that chess and Go are very hard. I don't believe we will ever see either solved without a major mathematical breakthrough. There just isn't enough possible computing power in the universe.
HOWEVER, the problem with arguing that they aren't solvable purely from "the number of possible games/states is greater than the number of particles in the universe" type arguments is that there are many problems with massive search spaces that we routinely solve.
Your comment kind of bothers me. Probably because it is both wrong and says, in a rather frank, matter-of-fact way, that my argument is bad.
Well, you could just do a Google search for yourself to see why your reasoning is wrong. The problem with using a computer to play Go is precisely what I described, the extremely high number of permutations involved, and it's the same concept with chess (Go just has even more of them): https://www.wired.com/2016/01/googles-go-victory-is-just-a-g...
Well, your comment bothered me, so we are all bothered!
I still disagree with your argument. The problem with Go isn't just the high number of permutations involved; there are real-world problems, with just as many permutations, which have been solved.
The extra problem with Go (and chess, to a lesser degree) is that it's incredibly hard to be sure, with any certainty, that a move is a "good" choice.
Comparing to the Sudoku example again: when "playing" Sudoku it's usually obvious as soon as you make a bad move -- you've put two '1's in the same row/column/box, for example. With Go, if we want to play well, or perfectly, we really have to explore the entire search space.
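To make that contrast concrete: in Sudoku the legality of a placement depends only on one row, one column, and one 3x3 box, so a solver can reject a bad branch the moment it is created. A minimal sketch (the board representation is my own choice, 0 meaning an empty cell):

```python
def placement_ok(board, row, col, digit):
    """Return True if `digit` can legally go at (row, col) on a 9x9 board.

    `board` is a 9x9 list of lists with 0 meaning an empty cell (my own
    representation, for illustration). Legality depends only on one row,
    one column, and one 3x3 box -- this locality is what lets a Sudoku
    solver reject bad branches immediately instead of exploring them.
    """
    # Row and column conflicts.
    if any(board[row][c] == digit for c in range(9)):
        return False
    if any(board[r][col] == digit for r in range(9)):
        return False
    # 3x3 box conflict.
    br, bc = 3 * (row // 3), 3 * (col // 3)
    return not any(board[r][c] == digit
                   for r in range(br, br + 3)
                   for c in range(bc, bc + 3))
```

A backtracking solver that calls this before every placement never wanders far down a doomed branch, which is why the astronomically large space of boards never actually needs to be enumerated.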
Sudoku is a game in which a computer program has an obvious method for limiting the permutations it must compute to find a solution. In other words, it is not faced with the requirement of computing more than ~10^80 permutations (roughly the number of atoms in the observable universe).
In contrast to Sudoku, chess and Go have no yet-discovered or theoretical method for limiting the permutations; therefore, to solve them you would be required to compute more than ~10^80 permutations.
So the problem is still with the number of permutations.
Like I said in my other post: if you could develop a technique to reduce the number of permutations while still finding the optimal solution, then you would be onto something. Right now no one has been able to dream up such a theory, and a lot of very smart people have worked on this problem.
> It's kind of mind boggling to think about every atom in the universe representing one position in a game of chess and there not being enough atoms in the universe to represent each one.
Chess players mention this a lot. And then they consider chess a sport. For a sport, it sure is a mind-bogglingly low number of game states :)
(ps: I don't mean to start an uninteresting sport-or-not debate. It is what it is.)
Something else: today you can run programs that play chess at superhuman levels on your phone. At the time of Deep Blue's win against Kasparov, the same kind of system could have run on off-the-shelf hardware [1]. Very soon after, AI programs running on ordinary laptops demonstrated dominance against human masters (that's in Russell & Norvig, somewhere; I can dig it up if you like).
So the fact that Cho Chikun won against a less powerful machine is not necessarily an indication that the entire system is weaker than DeepMind's. To know that you'd have to run Zen against AlphaGo.
______________
[1] A big hint of why it didn't is that, at the time, IBM was still selling more hardware than software.
>> Cho Chikun had a large advantage over Lee Sedol: his opponent (Zen) and similar programs (Crazy Stone, Leela) are simply available for him to practice against and probe for weaknesses.
That's an argument against Go having been solved by AlphaGo, not for. If Lee Sedol could beat AlphaGo given the opportunity to practice against it, then nothing is solved.
Also, for a system to dominate humans at a game, it must be able to beat any opponent every time. AlphaGo hasn't demonstrated this yet, so while in popular opinion AlphaGo "dominates", in the technical sense it does not, and Go won't be "solved" conclusively until it has.
The speed of the hardware is worth discussing.
A human, given sufficient notebook paper and time, could think ahead for thousands of years and thousands of turns -- this is effectively what a machine is doing.
With much tighter time constraints, I don't believe machines would do as well, because (to my knowledge) they are using full-breadth search -- searching every possibility. Humans, by contrast, use visual pattern recognition, and we can run that program quite fast.
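For what it's worth, "full-breadth search" can be sketched in a few lines. This is a generic minimax with hypothetical `moves`/`apply_move`/`evaluate` callbacks, not any real engine's code -- and real engines don't actually search this way: chess programs prune heavily (alpha-beta), and AlphaGo uses Monte Carlo tree search guided by neural networks.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Plain full-breadth minimax: every legal move is explored to `depth`.

    moves(state) -> list of legal moves; apply_move(state, m) -> new state;
    evaluate(state) -> score from the maximizing player's perspective.
    All three are caller-supplied placeholders for some concrete game.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    values = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal)
    return max(values) if maximizing else min(values)
```

The cost is exponential in depth times the branching factor, which is exactly why time limits and hardware speed dominate the outcome for an engine built this way.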
Isn't it plausible that DeepZenGo simply isn't at the level of play of AlphaGo? Facebook's DarkForest is also a competent AI, but even it does not compete yet.
It's hard to compare, as even Deep Zen runs on much more... modest... hardware than the match version of AlphaGo did. It's conceivable that AlphaGo would get clobbered on Zen's hardware, or that Zen would clobber AlphaGo if that team had Google-esque resources available. But there's no reason to believe we'd ever see such a fair comparison. It makes no sense for either team!
Zen is commercial software sold to people with a normal PC, so obviously reaching AlphaGo performance on AlphaGo hardware does their customers no good either.
It's for sure stronger than Darkforest. Some free PC programs like Leela have reached Darkforest levels on normal PC hardware (Darkforest used 44 NVIDIA Tesla cards, Leela uses a single normal "gaming" GPU), and Zen is several stones stronger than Leela.
Rather similar hardware, running a recent version of the same program, plays on the KGS server. It seems to be about 10d and is among the top players on the server (it was 1st, now seems to be 3rd).
You can see the graph of its playing strength; it's quite impressive to see the gains Zen got from implementing its ValueNet (and previously its PolicyNet), really.
CPU: 2 x Intel Xeon E5-2699v4 (44 cores/2.2 GHz)
GPU: 4 x nVidia Titan X (Pascal)
RAM: 128GB
So rather similar.
~10d is way, way lower than what AlphaGo had to be to play on par with one of the top players of Go. Pro ranks start around or just above KGS 9d (which is around what a 7d rank would be for regular amateurs in most world amateur organisations). A rough approximation is 3-4 pro ranks for each dan rank/stone of handicap (well, to the extent one can even approximate strength by rank: a rank, once won, is kept for life, so as you age and young pro players pass you by, your rating gets inflated). So a priori you could expect Zen to play around the middling pro ranks, and maybe around the strength of AlphaGo v13, the version that beat Fan Hui and is described in the Nature paper. And that's about what its play against Cho Chikun indicates as well.
Cho Chikun's rating is 3239, while Lee Sedol's is 3508, so around a 270-point difference, which seems a conservative estimate of the gap (and the results of the two matches, if weakly, indicate that the actual difference is even larger). Even a 230-point gap corresponds to a 79% probability of winning.
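Those percentages follow from the standard Elo expected-score formula (assuming the ratings quoted above use the usual logistic model, which I haven't verified):

```python
def elo_win_probability(rating_a, rating_b):
    """Expected score of player A vs player B under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Ratings quoted above: Lee Sedol 3508, Cho Chikun 3239 (a 269-point gap).
p = elo_win_probability(3508, 3239)
print(f"Lee Sedol's expected score vs Cho Chikun: {p:.0%}")  # ~82%

# Sanity check: a 230-point gap gives the 79% figure mentioned above.
print(f"{elo_win_probability(1730, 1500):.0%}")  # 79%
```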
For comparison, the distributed AlphaGo v13, the version that played Fan Hui, is 250 Elo points stronger than the single-machine build of that same version.
Also, if it carries any weight with you, you could just take it on the say-so of Myungwan Kim 9p, who did commentary on both the DeepZen and AlphaGo matches. Really, the difference in strength seems rather obvious to such a strong Go player, as you can see in how he criticized some of DeepZen's moves and how impressed he was with so many of AlphaGo's.
My own personal opinion is embodied in the phrase, "You don't stop playing games because you get old, you get old because you stop playing games."
I'm fairly close to retirement, and have friends who are getting there (or retired already). I really enjoy programming, and intend to continue indefinitely; most of my friends tolerate it as a job, and look forward to not doing it when they retire.
So it's the old "follow your passion" thing, again, I guess... I have that curiosity about such things, they don't.
This sounds a lot like video games like StarCraft and Overwatch today. Korean competitive gamers have achieved the same kind of machine-like skill. In the Overwatch World Cup this year, not only did the Korean team go undefeated, but they massively crushed their opponents (even in the final game!).
Cho is Korean as well, though I'm not sure of his current nationality. Yes, there might be some difference in the styles of playing Go, which I don't know in detail.
For the purposes of Go, he is Japanese. He learnt Go from Kitani Minoru from age 5-6, and he became a professional player at around 11, breaking records for his time (if I recall correctly). Back in the 70's, Japan was everything for Go.
I play Go, though nowhere near the retired semi-professional level.
My backwater Go community doesn't really care about computers, because we're too focused on trying not to be so lousy, and on the conversation. The light chase.
I'm super curious: when you see AlphaGo against Sedol, or whatever this is against Chikun, do you read something there? Is there a character?
About 4 years ago, a friend of mine used a 3-4d level computer against me for a fraction of his game. I noticed something was off -- I actually thought he was being substituted -- but it didn't cross my mind that it was a bot. However, it was a very tactical situation.
Looking at AlphaGo, something similar happens, but at a higher level: it looks so profoundly tactical that it's hard to catch a character from it. I can tell how Lee felt, however: like I felt playing people close to his level, like you're playing a wall that you can't read.
Computers losing to humans in Go is not yet that surprising. The article doesn't really give any reasons why we should have expected something else. Perhaps the main newsworthy part was that the machine actually won a game. But then we don't really know how strong its opponent is today.