You probably saw this, but linking anyway: https://news.ycombinator.com/item?id=10983539 The general point is that it's more evidence that improvement can come in discontinuous leaps; it doesn't have to be a smooth (even if accelerating) incremental process. So timeline predictions should probably be wide, with closer-to-present lower bounds (especially once successful, generalizable techniques become public). I don't think this initial view would have changed much this week unless Sedol had just totally crushed the bot, which would perhaps suggest there's still something more.
Personally I think the approach of combining deep learning with MCTS to beat Go was obvious to anyone even somewhat familiar with both areas, and with a well-funded team it could be done in a year or less, but a lot of 'experts' were ignorant of one or more of the areas. The uncertainty should have revolved around when some group would get around to writing the software and scaling it up on powerful hardware. It was a matter of implementation details; the theory was already known.
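To make the "obvious combination" concrete, here is a minimal sketch (not AlphaGo's actual code) of how a policy network's move priors can bias MCTS node selection. The `policy_net` and `legal_moves` arguments are hypothetical stand-ins for a trained move-prediction network and the game rules:

```python
# Minimal sketch of combining a learned policy prior with MCTS selection.
# policy_net(state) -> {move: probability} and legal_moves(state) are
# hypothetical stand-ins, not any published API.
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child maximizing Q + U, where U is the prior-weighted
    exploration bonus (a PUCT-style rule)."""
    total = sum(ch.visits for ch in node.children.values())
    def score(ch):
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.q() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def expand(node, state, policy_net, legal_moves):
    """Expand a leaf by asking the policy network for move priors."""
    priors = policy_net(state)
    for move in legal_moves(state):
        node.children[move] = Node(priors.get(move, 1e-8))
```

The point is just that the glue between the two techniques is short; the hard part is the training, tuning, and compute.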
My own lesson from this is that if all that is needed is tough engineering work (but no really new theory), the result could literally arrive within a week rather than after the x + delay it might take from scratch, because some group could already have been at it for roughly x time without that being public knowledge. AlphaGo kind of came out of nowhere; there were early signs in papers on using deep learning techniques for Go, but I don't remember any public commitments to much. That just indicates companies are still quite capable of doing secret projects. If they also had a secret new theory, it could be even more striking. I know of one startup in particular that has stayed securely at the top of its niche because internally it has secret CS research unknown to academia.
The OpenAI initiative may be useful from the perspective that if they've shared what probably shouldn't be shared, at least we can suspect something is imminent and try to plan a last-minute stand of getting it right first, versus someone like Google doing all the research in secret and then, bam, unleashing UFAI.
Hugely agree with your response. Just to add slightly to that, I think the fact that Facebook had a quite similar Go AI in the works (just without self-play reinforcement learning) is an indicator of how clear this research direction was. The complexity of Google's solution (two neural nets plus a fast evaluation function, plus other small details?) really indicates to me that a lot of manual engineering and iteration went into it. So it is not really an indicator of cool new theoretical breakthroughs, but an indication that applied engineering to make use of known techniques can achieve great things.
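For a rough picture of how those pieces could fit together at a leaf of the search tree, here is a sketch that blends a value-network estimate with a fast rollout result. This is an illustration of the general idea under stated assumptions, not the published system; `value_net`, `fast_rollout_policy`, and the `state` interface (`to_play`, `is_terminal`, `play`, `winner`, `copy`) are all hypothetical:

```python
# Sketch of leaf evaluation mixing a value network with a fast rollout.
# All functions and the state interface below are hypothetical stand-ins.

def fast_rollout(state, fast_rollout_policy, max_moves=400):
    """Play the position out with the cheap rollout policy and return
    +1 / -1 from the perspective of the player to move at the leaf."""
    player = state.to_play
    for _ in range(max_moves):
        if state.is_terminal():
            break
        move = fast_rollout_policy(state)   # fast, shallow move picker
        state = state.play(move)
    return 1.0 if state.winner() == player else -1.0

def evaluate_leaf(state, value_net, fast_rollout_policy, mix=0.5):
    """Blend the value network's estimate with a rollout outcome,
    in the spirit of the two-evaluator design discussed above."""
    v = value_net(state)                    # scalar in [-1, 1]
    z = fast_rollout(state.copy(), fast_rollout_policy)
    return (1 - mix) * v + mix * z
```

None of this is conceptually deep; the engineering effort is in training the networks, making the rollout policy fast, and tuning details like the mixing weight.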