It also takes a while for Watson to do the “machine learning” necessary to become a reliable assistant in an area.
This could make a great pay-per-package product. Matlab and Mathematica do this, for instance, by making easy-to-use statistical or signal processing packages by subfield.
To expand on the example in the article, wrapping this in an easier-to-use interface for people like medical professionals would be awesome. A friend of mine who worked in an ER told me Wikipedia is actually used quite frequently. A program that could intelligently parse medical documents could be a great next step. It doesn't need to make actual diagnoses; even just to jog the memory, an intelligent "these symptoms are consistent with pneumonia, bronchitis, or whooping cough" would probably be well-used.
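Even a crude version of that memory-jogger is easy to picture. Here's a minimal sketch; the condition-to-symptom table and the overlap scoring are invented for illustration, not anything Watson actually does:

    # Toy symptom matcher: rank conditions by what fraction of their
    # known symptoms were reported. The table is illustrative only.
    CONDITIONS = {
        "pneumonia": {"fever", "cough", "chest pain", "shortness of breath"},
        "bronchitis": {"cough", "fatigue", "chest discomfort"},
        "whooping cough": {"cough", "runny nose", "vomiting after coughing"},
    }

    def rank_conditions(symptoms):
        """Return conditions sorted by symptom overlap, best match first."""
        reported = set(symptoms)
        scores = {
            name: len(reported & known) / len(known)
            for name, known in CONDITIONS.items()
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(rank_conditions(["cough", "fever", "chest pain"]))
    # [('pneumonia', 0.75), ('bronchitis', 0.33...), ('whooping cough', 0.33...)]

The hard part Watson would bring is everything above this: parsing free-text medical documents into that table in the first place.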
Or for double-checking contraindications at pharmacies. Or for advising lawyers about applicable case-law. Or for reminding me about relevant studies as I do literature searches. Cool.
Edit: I imagine this will draw some comparisons to Siri, or its factual backend, Wolfram Alpha. While Wolfram Alpha is pretty sweet, I imagine context-specific question-parsing and machine learning would be much more powerful here (although I suppose Wolfram Alpha/Mathematica could get into that game, too).
It wasn't a comment on the ER doctor's ability to extract value out of Wikipedia. It was a comment on the fact that Wikipedia is the wild west. As far as I can tell, most erroneous edits on popular subjects are quickly caught. But I wonder how vigilant those Wikipedia editors are on obscure medical topics that require an expensive medical education to truly understand.
I'd feel better about it if Wikipedia was innovating above and beyond the implementation of a wiki ... even if the solution is still crowdsourced. For example I would love to see the application of machine learning to the process of moderation, not unlike what Stackexchange is working on these days (http://blog.stackoverflow.com/2012/08/stack-exchange-machine...).
Circling back to the topic at hand, I would actually love it if "Watson" eventually has all of the data currently available in Wikipedia, but with each fact cross-checked for validity against everything else he already knows.
My doctor actually Googled my symptoms while I was in the office. It's not actually a bad approach: as he put it, any article that ranks highly for all of the symptoms and few or none of the others is likely to be relevant.
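That heuristic is simple enough to write down. A rough sketch of the scoring he described, with made-up data (extracting the symptom sets from real web pages would be the hard part):

    # Score an article by symptoms matched, penalizing extraneous ones,
    # per the doctor's heuristic: all of my symptoms, few others.
    def score_article(article_symptoms, query_symptoms, penalty=0.5):
        matched = len(article_symptoms & query_symptoms)
        extras = len(article_symptoms - query_symptoms)
        return matched - penalty * extras

    mine = {"fever", "cough", "night sweats"}
    print(score_article({"fever", "cough", "night sweats"}, mine))        # 3.0
    print(score_article({"fever", "cough", "rash", "joint pain"}, mine))  # 1.0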
Wow, reading this makes me wonder whether a great many family doctors could be replaced by an expert system assisted by remote doctors in a centralized way. I'd call it "Doctor on Demand".
That's a cheap shot. The medical and science articles on Wikipedia that I've read in areas where I have expertise are remarkably good. Why not use a powerful, free, and fast resource, as long as you're cognizant of its limitations?
Watson’s nerve center is 10 racks of IBM Power750 servers running in Yorktown Heights, New York, that have the same processing power as 6,000 desktop computers. Even though most of the computations occur at the data center, a Watson smartphone application would still consume too much power for it to be practical today.
So not actually a pocket-sized Watson, but a smartphone app that will connect to Watson. As long as you've got reliable internet access.
That was also the conclusion I drew, but it makes me wonder why, as the article suggests, that would take too much power to be practical on a smartphone. It seems like it would only need to capture the text of the question and push it over the network to IBM's servers.
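If that reading is right, the client side would be trivial. Something like the following sketch, where the endpoint and payload shape are purely hypothetical (IBM has published no such API):

    import json
    import urllib.request

    # Hypothetical thin client: send the question text to a remote
    # Watson service; all the heavy computation happens server-side.
    WATSON_URL = "https://watson.example.com/ask"  # placeholder endpoint

    def ask_watson(question):
        payload = json.dumps({"question": question}).encode("utf-8")
        req = urllib.request.Request(
            WATSON_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["answer"]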
I think what they mean is that the datacenters actually running Watson would consume too much power. A mobile app with millions of users would require many, many instances of Watson running, and 10 racks of servers per instance doesn't sound feasible.
While I agree with this interpretation, here's an explicit quote from the OP:
"Even though most of the computations occur at the data center, a Watson smartphone application would still consume too much power for it to be practical today."
Perhaps they are referring to speech recognition, which would either consume a lot of bandwidth being sent over the wire as audio or consume a lot of power to be processed on-device.
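For a sense of scale, a rough back-of-envelope comparison (the sample rate and question length are my own assumptions, not from the article):

    # Raw speech audio vs. the recognized text it encodes.
    SAMPLE_RATE = 16_000   # Hz, common for speech recognition
    SAMPLE_BITS = 16       # bits per sample, mono
    SECONDS = 5            # a typical spoken question

    audio_bytes = SAMPLE_RATE * SAMPLE_BITS // 8 * SECONDS    # 160,000 bytes
    text_bytes = len("What is the capital of Burkina Faso?")  # 37 bytes

    print(audio_bytes / text_bytes)  # raw audio is ~4,000x larger
    # (speech codecs shrink this a lot, but it's still orders of
    # magnitude more than sending recognized text)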
Did anyone else think that if they did this their version would answer with a question?
A: Watson, the best hamburger I've ever eaten.
Q: Where is St. John's bar and grill?
It would be the RPN of voice-activated assistants.
I got a bit annoyed when the article kept conflating "power" with "number of CPU cycles running in parallel to get an answer." I can tell you that we are nowhere near having the compute pipeline of 10 racks of Power750 servers in a co-processor in a smartphone.
Watson in your pocket, or a UI on your phone to a Watson in the cloud? Given the number of servers they used for the Jeopardy Watson, I'm sure it's more like the latter.
For Jeopardy, they had to beat 2 champions in 3 seconds on a wide range of subjects. For a more specific topic or "good enough" answers, you would not need so much hardware. For example, they kept the whole database in 16TB of RAM!
This would (possibly) be true if all mistakes were weighted equally. As soon as some mistakes carry more weight (like maybe the machine kills you by accident because it thought your spleen was in your ear), looking at aggregate numbers doesn't cut it. You'd need to look at both the frequency of errors and the severity of errors. I'll take the guy who messes up 50% of the time but at worst will give me a paper cut over the hypothetical machine that only makes mistakes 1% of the time, but they're always fatal.
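Put as arithmetic, it's just expected cost, frequency times severity. Using the rates from above and invented severity values:

    # Expected harm = error rate * cost per error.
    # Severity numbers are made up to illustrate the point.
    PAPER_CUT = 1          # trivial harm
    FATAL = 1_000_000      # catastrophic harm

    human_expected_harm = 0.50 * PAPER_CUT   # 0.5
    machine_expected_harm = 0.01 * FATAL     # 10,000.0

    print(human_expected_harm < machine_expected_harm)  # True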
That answer was below the confidence threshold, so it just guessed, because it was Final Jeopardy and it had nothing to lose at that point by guessing. For something like medical advice, AI tools like Watson are meant to be used by a domain expert anyway. Even the Starship Enterprise still had doctors...