I think these people have read too much science fiction.
>First we had the legs race. Then we had the arms race. Now we're going to have the brain race. And, if we're lucky, the final stage will be the human race.
>John Brunner, The Shockwave Rider (1975), Bk. 1, Ch. "The Number You Have Reached"
I don't see how a ban could work. We already have technology that can identify targets, so the issue is whether a human is involved to approve each kill decision. But what constitutes human approval? Is it enough for a human to sit and watch target details flashing up on a screen, and intervene if they see an incorrect target?
We have bans on chemical and biological weapons. You don't say "but everything is made of chemicals, so what are we even banning?" The law will create a legal definition and courts will interpret it. It's not that hard.
This is something of a problem. The system which scans mobile phone metadata (among other things) and turns it into drone bombing targets is already enough like the latter. A human is involved in pulling the trigger, but really they're following orders rather than having made the target/not target decision. And they may not even have the security clearance for the information on which the decision is based.
(Spoiler alert for those who haven't seen the first 3 seasons.)
Samaritan works as you described: it turns data and metadata into kill decisions. The Machine just outputs the social security number of a "person of interest"; figuring out why is the job of humans.
The show features an interesting discussion of what happens when an AI following the first method (data -> kill decision) starts lying to you to further its own agenda. The same issue applies when you have a black box of complicated sub-AI analysis algorithms making the decision.
In the show, Samaritan is a full-fledged AI with its own goals and agenda.
In terms of the real world, how many systems explain step by step how they arrived at their conclusions? With some of the analysis and reasoning algorithms, it would be infeasible to require humans to double-check every step. How would you quickly tell whether a Bayesian network aggregating probabilities from hundreds of observations is correct?
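For a sense of scale, here is a minimal, purely illustrative Python sketch (not any real system; all numbers and names are invented) of a naive-Bayes-style aggregation: hundreds of likelihood terms fold into a single posterior, and checking that final number tells you nothing unless you can audit every term behind it.

```python
# Minimal sketch (hypothetical, not any real targeting system):
# naive-Bayes-style aggregation of log-likelihood ratios from
# hundreds of observations into a single posterior probability.
import math
import random

random.seed(0)

# Hypothetical observations: each feature contributes a likelihood ratio
# P(obs | target) / P(obs | not target). Values here are simulated.
likelihood_ratios = [random.lognormvariate(0.02, 0.5) for _ in range(300)]

prior_odds = 1 / 1000  # assumed prior: 1 in 1000 is a genuine target
log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in likelihood_ratios)
posterior = 1 / (1 + math.exp(-log_odds))

print(f"posterior probability: {posterior:.6f}")
# To audit this by hand, a reviewer would have to check the model behind
# every one of the 300 likelihood terms, not just the final number.
```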
We don't even require humans to explain themselves. Instead, we have the concept of trust, which is something one earns. As a commander, I will trust that my subordinate is doing his analysis in good faith, but because there is abuse, we try to keep an audit trail that can be used at a later date to detect who acted wrongly and should be deemed untrustworthy. Especially in combat situations, there is no time for an individual to evaluate every piece of evidence.
So the point is: are you willing to trust algorithms you don't understand? With humans you at least share a common mind architecture. In the show, the Machine was designed to avoid this question altogether by refusing to make decisions that carry moral weight, only pointing out potential persons of interest for human agents to investigate.
It's a plot device. The characters spend their time trying to understand what is going on.
But in the context of the script: the designer wanted it to be a black box so that nobody controlling the Machine could use it for their own goals. The Machine only outputs the minimum relevant information. It's not a bad show.
If the system is using all available classified information, then an explanation of its output would have to include the security classification of every input, and the system of classification may also be such that no one person is cleared to know everything. Therefore nobody can know the rationale.
(I've made this up, but I think it's the most plausible way such a thing would get built, and it's the logical outcome of "no read up/no write down" security rules.)
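A toy Python sketch of that situation, with invented compartment names and a deliberately simplified label model (hierarchy levels and compartments treated uniformly as set elements): no single clearance dominates the union of all the inputs, so no one person can be shown the full rationale.

```python
# Hypothetical illustration of the "no read up" idea with compartmented
# labels: a clearance can read a datum only if it covers every element of
# that datum's label. All names and labels are invented for the example.
inputs = {
    "phone_metadata":   {"SECRET", "SIGINT"},
    "informant_report": {"SECRET", "HUMINT"},
    "satellite_track":  {"TOPSECRET", "GEOINT"},
}

clearances = {
    "analyst_a": {"SECRET", "SIGINT"},
    "analyst_b": {"SECRET", "HUMINT"},
    "analyst_c": {"TOPSECRET", "GEOINT"},
}

def can_read(clearance, label):
    # Simplified dominance check: subset of compartments.
    return label <= clearance

full_rationale = set().union(*inputs.values())
for person, clearance in clearances.items():
    readable = [name for name, label in inputs.items() if can_read(clearance, label)]
    print(person, "can read", readable,
          "- cleared for full rationale?", full_rationale <= clearance)
# Each analyst can read only part of the inputs; nobody is cleared for all
# of them, so nobody can see the complete basis for the output.
```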
I don't know much about the exact structure of classification. The situation you described would certainly be possible if the classification system is not a tree but a general graph. Is that the case with clearance levels?
I don't think that's so much the problem. The problem is literally robots pulling the trigger. What happens when the president is targeted due to error or hacking? A member of the military would have a major problem with assassinating her own leader; a robot may not care. What happens if everybody becomes a target? Military personnel would figure things out real quick; a robot, again, may not care.
A member of the military might not have any problem at all with the "accidental" assassination of their leader, whether that leader is the president or the company sergeant. AI could easily provide plausible deniability.
I've been very critical of claims of the dangers of AI, but this isn't about AI - this is about giving lethal weapons to autonomous systems.
And that's just stupid. All software has bugs, and even if you don't have a problem with autonomous killing systems - I certainly do - it's incredibly irresponsible to build systems that could kill just because someone left out a semicolon or couldn't convert between miles and km.
It's taking the "blue screen of death" experience just a little too literally.
Umm, we already have machines that can kill people if someone "left out a semicolon". Airplane autopilots, medical equipment, car computers, etc.
Not saying this makes it OK, but that isn't the reason. Life-critical code isn't a new thing.
And those systems all have humans operating in conjunction with them, none of those things are (currently) autonomous. The success thus far of life-critical code usually depends partially on the assistance of humans.
Most of those systems operate on their own unless/until they hit something they can't handle and a human is standing by as failover. So, unless you're saying it's OK to have deathbots as long as a human is on hand as failover if the deathbot gets confused, I think my point stands.
Usually if a leader is assassinated, the assassin is a member of the country's military.
The US is comparatively rare in having had no coups since independence and only one civil war. The US also has very strong indoctrination against this kind of thing, but individuals are just as vulnerable as robots to making the wrong decision if they are fed false information. Such as endless attacks on the legitimacy of the president's election, birth certificate, allegations of being a foreign agent, etc.
The US is of course not above assassinating foreign leaders, conspiring to have them deposed, or providing arms to military coups against democratically elected governments. Allende is probably the biggest example.
Edit: and of course there's always the risk of a single rogue individual, from Dr. Strangelove's Brigadier General Jack D. Ripper to the very real Fort Hood massacre. In fact, Dr. Strangelove should be required viewing on this subject; not only is it darkly hilarious, it's all about setting in motion destructive partly-automated systems that cannot be countermanded.
The advantage of humans sitting and watching target details flash up on screen is that, not being in any immediate danger themselves, they should have fewer excuses for misidentification. Technology should make humans more accountable for kill decisions rather than less.
In practice it would just make it easier to kill -- so more killings.
In wars, invasions, etc., including those conducted by "democratic powers", few care about "misidentification" (or even far worse offenses against enemies, including civilians). That's why they mostly go unpunished or just get a slap on the wrist.
"Kill the all and let god sort them out" is the modus operandi.
I would think it better that we have weapon systems that intelligently make an assessment as to whether their target is (A) a combatant, (B) a threat, (C) surrendered, etc... before engaging and killing them.
The alternative is indiscriminate death that we see in mass bombings, mine fields, artillery strikes, and drone strikes.
> I would think it better that we have weapon systems that intelligently make an assessment as to whether their target is (A) a combatant, (B) a threat, (C) surrendered, etc...
The problem is that those intelligent systems will be overlorded by the same people overlording the actual military.
At least in our current situation a human has to pull the trigger, and that person could potentially be charged with crimes against humanity, which would make them think twice and potentially refuse to follow a direct order. You won't have that with a machine.
That's true, but the scenario you are thinking of is one in which the higher ups order a massacre and the soldier on the ground refuses to carry out the order. I believe that the more frequent case is when soldiers on the ground are having to make split second life and death decisions, quite possibly panicking themselves. If they get it wrong, they're dead.
An AI can make split-second decisions without panicking and without any consideration for its own "survival". For any mistakes an AI makes, those higher up will get the blame (ideally), not some 20-year-old who was scared for his own life when he made the decision to kill everyone on that overloaded truck because it wouldn't stop 10 seconds earlier.
That said, I shudder at the thought of a world in which people get killed by machines that will never be whistleblowers, that do not go home with post-traumatic stress disorder, telling everyone how horrible war really is.
> I believe that the more frequent case is when soldiers on the ground are having to make split second life and death decisions
That's OK, but I think you are assuming that the "good guys" are the only ones that would have access to this. The truth is that once one government starts doing this, all the others will follow, and even organized crime will have access to that technology.
Even if both sides use machines, that doesn't make it any worse than both sides using humans, on the contrary. But as I said, the wider consequences are a different matter.
I think I didn't explain myself clearly... What I meant is that once one side has it, it will spread beyond conventional warfare pretty quickly.
You are thinking of Gov. A vs. Gov. B both using machines. My concern is that after that, Gov. B will use machines against humans C inside the same country, or in the neighborhood. And in those cases, "technical errors" can be used as an excuse after a tragedy. And that's only one of my concerns... add organized crime and terrorism to the mix and you have a very explosive soup.
I think your assumptions are very realistic and I share your concerns. But looking back at the history of war or combat, I don't feel that human nature has been a mitigating factor. On the contrary.
(a) kill unfriendlies
(b) save neutral lives
(c) save friendlies
(d) keep minimal costs
(e) thoroughly destroy the opponent's means of warfare
Given that the people setting priorities already don't keep (b), or (e) as it relates to civilian property, in mind, how likely is a system to come into existence trained to weight (b) or (e) highly enough to, say, not conclude that any amount of foreign destruction is okay if it is likely to minimize costs in the next 3 months? (i.e., we won't have to sweep that building if it's leveled.)
Another alternative to indiscriminate death is calculated efficiency driven by algorithms tuned to save US money, kill their guys, save our guys, and, if possible while doing the other three, not kill neutrals, friendlies, or neutral property.
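To make that concern concrete, here is a purely hypothetical Python sketch (all names, weights, and numbers invented; not any real system) of a multi-objective score in which priorities like (a)-(e) become weights, and a near-zero weight on neutral harm reproduces exactly the "we won't have to sweep that building if it's leveled" outcome.

```python
# Hypothetical multi-objective scoring sketch. The point is only that the
# chosen weights, not the stated priorities, decide the outcome.
WEIGHTS = {
    "mission_value": 1.0,
    "friendly_risk": 5.0,
    "cost": 0.5,
    "neutral_harm": 0.1,  # the priority the text worries is under-weighted
}

def score(option):
    return (WEIGHTS["mission_value"] * option["mission_value"]
            - WEIGHTS["friendly_risk"] * option["friendly_risk"]
            - WEIGHTS["cost"] * option["cost"]
            - WEIGHTS["neutral_harm"] * option["neutral_harm"])

sweep_building = {"mission_value": 10, "friendly_risk": 2, "cost": 8, "neutral_harm": 1}
level_building = {"mission_value": 10, "friendly_risk": 0, "cost": 3, "neutral_harm": 40}

print("sweep:", score(sweep_building))  # 10 - 10 - 4 - 0.1 = -4.1
print("level:", score(level_building))  # 10 - 0 - 1.5 - 4.0 = 4.5
# With these weights the optimizer "prefers" leveling the building; only a
# much larger neutral_harm weight changes that outcome.
```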
Do they have to be killed though?
If we have machine soldiers and the enemy combatants aren't an immediate threat to actual humans shouldn't we be trying to detain them instead?
Human drone pilots, and gunners in manned aircraft, are much better at making such assessments than any current technology, yet they do not do so, and kill people who, it appears, really shouldn't be killed, including those you mention.
If we can make humans do it, I suspect whatever is controlling future craft would do it too, especially if programming a machine is much easier than programming humans to be murderers.
Warfare is institutionalized criminal activity backed by outdated belief systems and social technologies. Technologists have a moral obligation to educate themselves about social and political issues and prevent criminals from obtaining weapons.
How do you suggest we prevent any technology we create from being used for everything it is suitable for? We'd have to start by not publishing any scientific results and never open-sourcing anything. Then we'd have to prevent industrial espionage, as well as democratically elected lawmakers forcing us to reveal our secrets. It's different than with nuclear technology because AI is so versatile and so cheap.
And to wrap it all up, we'd have to stand by and watch as a medieval death cult like Islamic State overruns an entire region.
War is very rarely the right solution. Arguably, wars had a large part in creating IS in the first place, and I was very much against most recent wars. But for reasonably enlightened democratic nations not to have the capability to win a war if need be, ultimately means to be at the mercy of any insane barbaric ideology or religion that comes along.
I think you're misunderstanding ilaksh's point. Yes, knowledge is social potential energy. The potential energy lacks moral quality until it is implemented for a purpose by a person in society. I, as the person who knows how to build the killer robot, have a moral obligation to not build the killer robot.
I don't think this is a valid argument when it comes to AI. Sure, you can say "I'm not going to be the one who builds that missile or that gun." But a killer robot is but a thin layer on top of a general AI, or on top of a variety of specialised AI functions. The more general the AI becomes, the more "dual use" it is necessarily going to be.
In other words, should we not build intelligent robots because someone might give them a gun to make them killer robots?
I'm imagining that in the case of AI general enough to be trained for any arbitrary task, but not general or physically able enough to choose its own tasks, you still need technologists to teach the warfighters how to install and train the AI. So in that case, build the AI; don't turn it into a killer robot.
If it doesn't take expertise to train the AI though, the goose was cooked a long time ago.
Their goal should be figuring out how to circumvent it or protect against it, because no government is truly going to give it up. As the technology progresses, more and more decisions will be removed from people, to the point where what we think the line is today will be ho-hum by then.
Worse: by signing this protest, there's every chance that AI will be banned for citizens. But it will certainly remain accessible to governments, while making it legitimate to squash any citizen who attempts to build a counter-power.
I don't agree that no government will give it up. There are weapons available to Western governments that are not used because of moral concerns: nuclear bombs, land mines, biological weapons, and so on.
No doubt there will always be a Syria or North Korea that uses AI-controlled weapons anyway, but I think civilised countries could avoid them, at least while we have the military advantage.
Calling them naive seems misguided, and this scenario is no longer particularly far-fetched.
> Meanwhile, few warn against widespread surveillance, the repercussions from the use of drones, etc.
The repercussions from drones could become worse if they are upgraded to include entirely autonomous AI, where there is no human involved in the "should I kill this person?" decision.
In the land where there are no "1000 tech experts" warning about them in mainstream media and no rich moguls like Kurzweil and Musk getting interviewed every second week on the matter...
I don't think it's as naive or far-fetched as you think. But I do wish more celebrities would have the balls to stand up against real political problems of today.
In the actual letter, the signatures are broken into two groups, one for AI and closely related experts, the other for people like Hawking, Musk, etc. Both are quite long and impressive lists, with the expert list actually being longer.
It's weird. When we like what they say, it's "this guy is really smart. You should listen." When we don't like what they say, it's "experts should stick to what they know. Don't be fooled by appeal to authority."
In some cases, I think you have to give credence to opinions from very reputable people. Stephen Hawking, for instance, is one of the greatest minds in the world; his word isn't gospel, but it's damn worth listening to.
Even worse, taking a look at futureoflife.org (the site that actually hosts the letter), I see their "Scientific Advisory Board" includes Alan Alda and Morgan Freeman.
Now, I happen to respect both men as actors/directors. I also see they have both participated in scientific documentaries (as narrators/hosts, I suppose). I could understand if they were part of a "public relations board", which usually requires some degree of celebrity power, but a scientific advisory board? Doesn't this require one to actually be a scientist or a subject matter expert?
Agreed, and this is the whole reason the argument is academic. Strong AI, once developed, will be the ultimate dual-use technology. One must not only deny a strong AI access to arms, one must also deny it access to anything/anyone that might help it become freed from the constraints preventing it from obtaining, using, or directing the use of arms. This is essentially the super well-known thought experiment fleshed out in Ex Machina. Only one solution: ban ALL strong AI. Good luck with that.
I fear the AI arms race may be inevitable. Even if all the nations could agree to place limits on AI research there will always be a huge incentive to develop something in secret.
Very few ML experts on that list, I'm sure.
But at the same time, I'm rather against autonomous weapon systems. Yet not as naive to think that modern armies will ignore the benefits of machine intelligence.
If you think about it, the US has not even ratified the Comprehensive Nuclear-Test-Ban Treaty. I doubt it will ever consider an AI weapons ban.
This is not only overblown, it is misguided. All banning does is make people continue in secret. In any event, research in that area will produce knowledge that is both useful for non-weapon purposes as well as weaponizable... The same as all knowledge that has ever existed.
I don't understand what kind of AI/robotics we are talking about here:
- weak non-self-replicating;
- weak self-replicating;
- strong.
Besides, maybe it's an egoistic point of view. Many species have perished while humanity established itself. Does it really matter if we go extinct, if the result is going to be a superior species?
>Besides, maybe it's an egoistic point of view. Many species have perished while humanity established itself. Does it really matter if we go extinct, if the result is going to be a superior species?
In general, not wanting to die or go extinct is not considered an "egoistic point of view".
Or, let's put it this way, from all the egoistic points of view, it's the most excusable.
Why the duck should we care for a "superior species" (and a mechanical one at that)?
Would you let a "superior country" fuck up your own country?
Would you let a "superior person" kill your family and use your resources to sustain themselves?
Is it OK to drive dolphins and lions to extinction, since we are a "superior species"?
Even more so since "superior" in this context has nothing to do with "morally better" but just means "more powerful" and "more fit to overtake others and survive".