
What I don't get about AI optimism: We humans are very fallible and can easily make mistakes. Computers are better than us at problems that are clearly defined and decidable (where we can implement an algorithm to solve any instance of the problem). But we need AI precisely for the fuzzier problems, like recognizing a lion in a picture.

How can we build a mostly automated future if the AIs that are supposed to do our jobs turn out to be very fallible as well? They won't - supposedly - have the problem of being self-aware and following their emotions rather than their own best judgement and reasoning. But it seems that some problems are inherently prone to mistakes. Can that be avoided at all? And if not, who do we blame when an AI makes a "mistake" like that? The training set?




Just some back-of-the-envelope estimations (which could be completely wrong):

* AIs can focus all their capacities on a single task for an unlimited time, while a human can focus for a couple of hours each day. (That’s 5-6X thinking time each day.)

* More importantly, AIs have faster access to knowledge systems, other AIs and computation resources (e.g. for simulations and prototyping). For an AI, it will possibly take only on the order of 10^-3 to 10^-2 s to query and interpret information, while humans are more in the ballpark of 10^0 to 10^2 s.

* Another advantage is that AIs could fork modified versions of themselves, which possibly results in an exponential evolution. The rate of evolution could possibly be about 10^8 times higher compared to humans (several seconds vs. 25 years; rough arithmetic below).
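
A rough sanity check on that last ratio (assuming one AI "generation" takes a few seconds and a human generation takes 25 years):

    # back-of-the-envelope check of the generation-time ratio
    ai_generation_s = 5                        # assumed: "several seconds"
    human_generation_s = 25 * 365 * 24 * 3600  # ~7.9e8 seconds in 25 years

    print(human_generation_s / ai_generation_s)  # ~1.6e8, on the order of 10^8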


> if the AIs that are supposed to do our jobs turn out to be very fallible as well?

That AIs can be foiled by a specifically crafted adversarial attack does not mean the AI is very fallible. Under normal circumstances it still outperforms humans. Let's say the error rate of a net is on average 3%, and an attack works only if it is crafted for a specific net (where the weights are known, etc.). Just as a team of humans can come up with better solutions than its individual members, an ensemble of nets (usually) outperforms any of its individual members: the final model generalizes better, because it uses information and predictions from all the nets. Foil one out of ten nets, and the other nine will cancel out the bad prediction with their votes.
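
A minimal sketch of that voting argument, with made-up predictions: one fooled net out of ten cannot flip the majority.

    from collections import Counter

    # hypothetical outputs of ten independently trained nets for one image;
    # net 0 was fooled by an adversarial example crafted against its weights
    predictions = ["gibbon"] + ["panda"] * 9

    label, votes = Counter(predictions).most_common(1)[0]
    print(label, votes)  # panda 9 -- the other nine cancel out the bad vote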

In an automated future, jobs will be taken up by AI. A security guard then becomes a supervisor: hundreds of nets try to detect disturbances in a mall, and when they find one, they notify the supervisor, who makes the judgement call. This is already happening with legal expert systems: a judge inputs the case and gets a predicted punishment, then makes a final adjustment in light of the context of the case. Such a system prevents racially biased sentences (the AI does not care about race, but looks at precedent), while still giving emotions, judgement and reasoning a final say.
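
A toy sketch of that supervisor loop (the detectors, scores and thresholds are all invented, just to show the shape of the system):

    # many detectors score a camera feed; only frames that enough of them flag
    # get escalated to the human supervisor for the judgement call
    ALERT_THRESHOLD = 0.8   # per-detector confidence needed to count as a flag
    MIN_FLAGS = 3           # detectors that must agree before notifying a human

    def escalate_to_supervisor(detector_scores):
        flags = sum(score >= ALERT_THRESHOLD for score in detector_scores)
        return flags >= MIN_FLAGS

    print(escalate_to_supervisor([0.9, 0.85, 0.95, 0.1]))  # True: human decides
    print(escalate_to_supervisor([0.9, 0.2, 0.1, 0.1]))    # False: no alert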

> it seems that some problems are inherently prone to making mistakes. Can it be avoided at all?

Some problems are incomputable, like finding the shortest program that reproduces a given string. In others, like lossy compression, mistakes may be made in representing the original information, but ones that humans are unable to spot. I think this is a very interesting but difficult question to answer.

> who do we blame when an AI makes a "mistake" like that? The training set?

We blame the AI researcher who built the net :). And she will blame the training set :).


People's fallibility goes down as you throw more of them at a task—not because the majority will be right, but because the signal adds up while the noise cancels out. This is what the efficient market hypothesis, "wisdom of crowds", etc. are basically about.

If you train 1000 AIs on different subsets of your training corpus, their ensemble will be much "hardier" than one AI trained on the entire corpus. The automated future comes from the fact that you don't need 1000 full training corpora to get this effect, nor do 1000 AIs cost much more than one to run once you've built out enough hardware for one.
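
A small-scale sketch of that idea with scikit-learn's BaggingClassifier (toy data, a toy base learner, and 1000 shrunk to something that runs quickly; the `estimator` argument assumes scikit-learn >= 1.2):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # 200 trees, each fit on a random 10% subset of the single training corpus
    ensemble = BaggingClassifier(estimator=DecisionTreeClassifier(),
                                 n_estimators=200, max_samples=0.1,
                                 n_jobs=-1, random_state=0).fit(X_tr, y_tr)

    single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    print("single tree:", single.score(X_te, y_te))
    print("ensemble:   ", ensemble.score(X_te, y_te))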

In other words, AI makes the application of "brute-force intelligence" to a large problem cheap enough to be feasible, in the same way slave labor made building pyramids by brute force cheap enough to be feasible.


There are many examples where crowd behavior exhibits less "wisdom". Take any market bubble, for example. Have we ever looked at a tough scientific problem and come to the conclusion that the best path forward was to collect as many random people off the street as possible and shove them in a room to solve it?

Also, bootstrapping and model parameter selection techniques are already heavily used in AI and have not yet brought us this future. I believe the model you presented has been simplified a bit too much, ignoring a lot of important variables.


> examples where crowd behavior exhibits less "wisdom"

When crowds act irrationally, there is usually a problem in communication. The crowd would still be able to solve problems better after repairing these communication channels. For instance, some rocket launches failed because information from engineers lower in the chain of command did not work its way up to the decision makers. The group was too compartmentalized, but launching a rocket is necessarily a group effort. 9/11 could have been prevented, or its aftermath lessened, if communication between intelligence agencies had been better. In a market crash we often see a single actor making a decision or prediction, and there is little to no reward for people down the chain to disagree with that prediction, or even adjust it (leading to insufficient variance in the predictions). Everyone blindly chases the experts, while in a good group setting there is no need to chase the experts.

> Have we ever looked at a tough scientific problem and came to the conclusion that the best path forward was to collect as many random people off the street and shove them in a room to solve it?

Has a scientist ever solved a tough problem growing up in isolation from other scientists? I consider "standing on the shoulders of giants" to be a form of group intelligence. But yes, we have done something similar at the RAND Corporation. The problem: forecast the impact of future technology on the military. The solution: collect experts (not random people), put them in a room with an experiment leader, and gradually converge on the best forecast, using anonymous feedback every round. It's called the Delphi technique and it is still in use.
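
A toy version of that Delphi-style loop, with invented numbers: each round, every expert sees the anonymous group summary and revises toward it.

    from statistics import median

    forecasts = [0.2, 0.5, 0.9, 0.4, 0.7]   # initial probability estimates
    for round_no in range(3):
        group_view = median(forecasts)
        # each expert moves partway toward the anonymous group summary
        forecasts = [f + 0.5 * (group_view - f) for f in forecasts]
        print(round_no, [round(f, 2) for f in forecasts])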

Also, there is an experiment running right now that takes random civilians, has them answer intelligence questions ("Will North Korea launch a nuke within 30 days?") and weights their answers according to previous results. This way, random civilians trickle up to the top who individually beat a team of trained intelligence analysts, simply using their gut or Google. It's called the Good Judgment Project. Put ten of those civilians in a room and you have an intelligence unit that is not afraid to be wrong, does not have a reputation to uphold, and does not care about group pressure, authorities or restrictive protocols that may hamper a group of real intelligence analysts.
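
The weighting idea, roughly (the names, past scores and forecasts here are all made up): forecasters with a better track record get more say in the aggregate.

    # weight each forecaster by past Brier score (lower is better)
    past_brier = {"alice": 0.10, "bob": 0.30, "carol": 0.20}
    forecasts  = {"alice": 0.80, "bob": 0.40, "carol": 0.70}  # P(launch in 30 days)

    weights = {name: 1.0 / score for name, score in past_brier.items()}
    total = sum(weights.values())
    aggregate = sum(weights[n] * forecasts[n] for n in forecasts) / total
    print(round(aggregate, 2))  # ~0.7: the better-calibrated forecasters dominate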

> Also bootstrapping or model parameter selection techniques are already heavily used in AI

I believe the parent was talking about model ensembling/model averaging, not the ensembling techniques used inside single models, like the boosting or bagging that random forests use. If you have a single adversarial input crafted for a single model, then a voting ensemble of three models (let's say random forests with Gini splits, regularized greedy forests, and extremely randomized trees) will not be foiled.
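
For instance, in scikit-learn (regularized greedy forests live in a separate package, so gradient boosting stands in for them in this sketch):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                                  RandomForestClassifier, VotingClassifier)

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    ensemble = VotingClassifier(
        estimators=[
            ("rf",  RandomForestClassifier(criterion="gini", random_state=0)),
            ("gbm", GradientBoostingClassifier(random_state=0)),  # stand-in for RGF
            ("ert", ExtraTreesClassifier(random_state=0)),
        ],
        voting="hard",  # majority vote: fooling one member does not flip the output
    ).fit(X, y)

    print(ensemble.predict(X[:5]))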


Except, of course, that the pyramids are a marvel of workmanship and engineering probably built by a relatively small force of skilled workers. Their construction methods are about as far from "brute-force" as you could possibly get.

http://science.howstuffworks.com/engineering/structural/pyra...


"One machine can do the work of 50 ordinary men. No machine can do the work of one extraordinary man." Elbert Hubbard


There is a huge class of cognitive mistakes our brains make that we are aware of, but we can't really train them out of ourselves effectively. Since we can't rewire our own brain, the hope is to wire up something that would not exhibit those known mistakes and biases.


The good thing about machine learning is that there are multiple ways to do it.

The textbook solution to your problem is to throw multiple methods at it and, when they disagree, let a human use their judgement.

This is the best of both worlds: computers do the easy, repetitive stuff that humans find boring, and humans use their judgement on the tricky things.
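
A minimal sketch of that pattern (the two classifiers are placeholders for whatever methods you actually use):

    def route(item, classifiers):
        labels = {clf(item) for clf in classifiers}
        if len(labels) == 1:
            return labels.pop()        # all methods agree: fully automated
        return f"REVIEW {item!r}: models disagree on {sorted(labels)}"

    classifiers = [lambda s: "spam" if "win" in s else "ham",
                   lambda s: "spam" if "$$$" in s else "ham"]
    print(route("win $$$ now", classifiers))      # agreement -> "spam"
    print(route("you win a prize", classifiers))  # disagreement -> human review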


AIs may be fallible, but much less so than humans because we can eliminate many sources of fallibility that humans have. A car-driving AI will not fail because it didn't get enough sleep, was distracted by an attractive person on the sidewalk, or drank too much.


I'd think all you can do is design for failure (as you would now) and use the failures as training data. The only real advantage is that you don't pay an AI and it doesn't get bored; it's consistently good or bad.
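
As a sketch (all names invented): log the cases the model got wrong and fold them back into the next training run.

    failure_log = []

    def handle(item, true_label, predict):
        predicted = predict(item)
        if predicted != true_label:
            failure_log.append((item, true_label))  # design for failure
        return predicted

    def next_training_set(current_data):
        return current_data + failure_log           # next run sees its old mistakes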



