Evil behavior, however precisely defined, has been and always will be with us. Technology enhances what we do as human beings and, hence, always has the potential to be applied to ill uses. If someone, then, takes what you develop and applies it for a purpose you never intended in creating it, that is beyond your control. Alan Turing - who applied his genius to confer what can only be called immeasurable benefits on society and who used his skills to crack Nazi codes to help end a terrible war - is not ethically responsible for the many consequences inevitably brought into the world simply because computing power can be used to magnify the effects of human evil. Was he (or is any other engineer whose technology is misused) a causal agent in the various bad outcomes we can identify in, for example, the enhanced lethality of weaponry or in the massive spying by governments on their citizenry? In a narrow sense, perhaps yes. When one traces things back up a causal chain, one can theoretically identify every individual actor who made technical innovations that ultimately culminated in whatever particular bad use afflicts us today. But, though a cause-in-fact, Mr. Turing (and the many engineers who followed him respecting any given facet of computing technology) is not what the lawyers call the "proximate cause" - that is, the immediately enabling agent - of the outcome.

Meaning, if you deem it unethical to build bombs, then don't do work for a defense contractor helping to build bombs, because your every innovation will be immediately applied to a use you deem unethical. The same goes for working for the NSA in developing sophisticated spying technology, or for whatever other ill use you can identify in society. But, beyond avoiding direct conduct by which you are proximately helping to cause an outcome you deem wrong, you as a technologist basically have no control over how your work may be applied by others and, as the collective results of such work eventually permeate society, your moral responsibility for the indirect results of your work effectively stands at zero.

If the operative standard were otherwise, then all innovation would stand frozen altogether, because it is always possible to conceive of an ill use for any technology that makes things faster, more powerful, more efficient, etc. Put any such thing into the hands of human actors and some bad results are guaranteed to follow given enough time and opportunity. Thus, unless one is to freeze all productive activity or go insane second-guessing how others might pervert that which is being done for good, engineers must perforce ignore tangential ethical implications over which they have no effective control.
I think it is fair to say that each of us in our given professions (mine being law) ought to avoid being a proximate cause of something deemed wrong even though technically legal (for example, I would not be a "mob lawyer" even though there are some technically very good lawyers who do that work). But even here that is an individual choice for each actor to make. For engineers, some may see it as a great opportunity to do advanced work in some of the social media companies while others may regard such companies as being engaged in unethical conduct as they at least sometimes use dubious techniques to try to corral us as consumers into their tight little worlds. For any given engineer, working for such a company doing such things is a matter of conscience. Some may say yes, others no. The same is true in working for a defense contractor or for the government. Or for any other work that is legal but ethically suspect in the eyes of some but not others. It is your choice and it is your conscience.
The author of this piece reflects what I call the bane of associational thinking. He uses the royal "we" to define a group (here, engineers) and then prescribes very broad goals for what "we" "ought" to do. Since all the things described now stand as a matter of private choice over which the "we" group as a whole has no say, the only way to translate this sort of thing into practical action is to form formal associations, assign things to committees, and then issue a series of prescriptions on what the group members ought to do. That may be fine in terms of the association giving rules that amount to exhortations to do good (who would disagree with that?). But, beyond that, do you really want an organized association dictating what today stand as your private choices for your career? Or, worse, do you want such an association to lobby governments to adopt their strictures and give them the force of law? I would think not. How, then, can "we" do better? Of course, the question is always described as difficult and is left for further discussion precisely because it has no real answer apart from acting on individual conscience or apart from the potentially coercive alternative of letting some association or government dictate your career choices and opportunities. Perhaps this sort of reasoning is justified as encouraging people to have a heightened conscience about what they do, and in that respect it is fine. But that is really as far as it goes before veering into unacceptable alternatives.
Our capacity to do wrong is innate to our nature, as is our capacity to do good. We should not stop trying to do good through our creativity just because others can take what we do and commit wrongs with it. Nor should we feel guilty about what we do as long as we in good conscience can say to ourselves that we are doing something productive and worthwhile and not directly causing harm to others. The "we" issue is in reality much more of an "I" issue and, for that, you should examine what you do carefully and strive for the good regardless of what others may do with it. If you want to exhort others to do better by your standards, then all the better. Just don't dictate to them on matters over which people in good conscience may disagree.
Your point about causation is a good one. It's worth thinking through the causal chain of how your work is used as an engineer, even if you conclude that you are not the proximate cause and therefore do not bear responsibility.
I live in the Delaware Valley, an area of the country devastated by engineers. Engineers built communications and automation technology, which has allowed executives to outsource and eliminate jobs far more quickly than people here can get trained for new ones. The human toll of these changes has been higher than drone and surveillance technology combined.
Are engineers working in communications and automation the proximate cause of these changes? Maybe or maybe not. But how long or short is the chain of causation? I think it's not very long, not when an automation company might market its technology by mentioning the labor cost savings. At the very least, before anyone gets sanctimonious, they should think about what sorts of impacts their own work has on other people.
> Now I work in video games, not curing cancer, but in my search I was looking for a company that at least did no harm.
Ironically, one of the reasons I left EA was that I saw my CTO and half of the programmers around me assigned to the task of figuring out how to outsource more development. That didn't seem like a winning proposition to me.
I would say that outsourcing development jobs is at least a wash. Relatively wealthy people in developed countries may be (temporarily, let's be honest) out of work, but far less wealthy people living in less developed countries will have the opportunity to make what is, for them, a good wage.
Automation is harder to justify this sort of way. Outsourcing moves jobs around, automation is intended to eliminate them (yeah yeah, we need people to make and fix the robots, but let's be real, there is a net loss of jobs and we can only hope that cheaper products will trigger the creation of new, largely unrelated, jobs.)
Yeah, a $5 an hour job is created in China, but a $20 an hour job is eliminated in America, and much of the difference is captured by some executive or shareholder in the U.S.
Automation has the potential to eliminate jobs, but it also has the potential to allow much more work to be done by a single person, or to allow that person to do the same work with less effort.
Perhaps I'm overly simplifying your words, but I don't think automation is inherently "evil". Like any tool or technique, it may be used toward good or bad ends.
Well, I wouldn't say that automation is evil, and automation certainly can be used primarily to scale processes, but I think that if a process is already running at capacity (say, you are already producing more wheat than the world needs), then automation will tend to reduce prices (or at least costs) and reduce jobs. The end-game is total automation (hopefully with everybody enjoying the fruits of that past labor, Star Trek style.)
I think that automation in general is a worthwhile endeavor, but we need to be mindful of the downsides and modify our society as we implement more automation to ensure that we are not causing undue harm. I believe that various forms of social safety-nets will become essential as we march towards automation's logical conclusion.
So if we used to build a road by having a group of 50 guys with shovels, should we just ignore the invention of the bulldozer so these men don't lose their jobs?
95% of Americans used to work in agriculture. Should we all still be farmers today because adopting technology would cost some of the farmers their jobs?
One of the big problems with engineers is that they like solving problems, but sticking around to debate how those solutions will be used is "politics" and they say, "I hate that shit". Thus it's easy for the psychopaths who run this society to come in and use automation for bad (depriving others of participation, rather than distributing the benefits).
EA brings up a horrible (but true) thought. One of my colleagues was discussing the video game industry (which he left in disgust). People accept terrible terms to work in it, but in doing so, they make it worse for everyone. People who accept 60 cents on the dollar and mandatory 80-hour weeks and death-march projects to work in VG are making it worse for everyone else who wants to work in that industry and are, in a real way, being unethical.
I'm not anti-market, because I can't come up with anything better as a general economic problem-solving tool, but they do have the undesirable effect of often pitting have-nots against other have-nots, when it would be morally better for them to team up and maybe get a fighting chance against the haves. It's easy in New York (I lived there for 7 years) to hate "rich assholes" (and foreign speculators, and rent-control royalty) for the rent situation, but every time I paid that rent check, I was just as much a part of the problem.
> Engineers built communications and automation technology, which has allowed executives to outsource and eliminate jobs far more quickly than people here can get trained for new ones. The human toll of these changes has been higher than drone and surveillance technology combined.
I think this comes down to a conflict (not very well fought from the engineers' side) between cost-cutters and excellence-maximizers. The first category want to take something that's already being done and cut people out of the action. That's not always a bad thing, because they attack inefficiencies and should, in theory, make the world richer. However, they end up taking almost all of the gains for themselves (and externalizing costs). The second also want to cut costs, remove grunt work, etc. but because they want to do more, i.e. "now that I shaved clock cycles off of this operation, that frees up resources to do more cool shit".
Businessmen tend to be cost-cutters, because that's the one thing people can agree on in executive tussles. For executives, R&D, philanthropy, etc. all devolve into bikeshedding, but the bottom line is a common language. People with vision, on the other hand, tend to get into conflicts and causes that have negative expectancy for their political fortunes. Engineers tend to be excellence-maximizers.
The excellence-maximizers do believe that they're helping society and adding value-- and they're right, at least on the latter. They cut jobs and create value. The problem is that society is run by greedy cost-cutters who have no vision but a lot of greed, and who make sure that none of the gains trickle back. Thus, those affected by the industry changes never get the resources (time, money, education) to survive them.
Right. Automation increases the overall productive capacity of a society, but does so in a way that (by reducing the demand for labor) allows holders of capital to capture more of the value generated by that production for themselves.
That said, I'm not sure what to do with that realization other than hold on to it as a vaguely disquieting feeling. I'm certainly not advocating that engineers do less in the way of creating automation or communications technology. I tend to believe it's the job of the political class to reconcile technological change with societal well-being. But that same thinking applies to engineers in the defense industry as much as engineers in the automation industry.
> I tend to believe it's the job of the political class to reconcile technological change with societal well-being.
In a democracy (including a representative democracy) the "political class" is the citizenry at large, so that responsibility belongs to everyone in such a society. (Accepting, arguendo, that it is an obligation of the "political class".)
> I tend to believe it's the job of the political class to reconcile technological change with societal well-being.
I agree-- but I also don't trust the current "political class".
It's made worse by the current Silicon Valley arrogance, which assumes everything "big" (esp. government) to be intractably mediocre (and, therefore, useless) because the whole populace (i.e. the full IQ spectrum) is a part of it. I feel like this secessionism is a rather Machiavellian move by the technological elite to convince their underlings not to see the big picture, because it's all mediocre and inefficient out there anyway. The attitudes coming out of both the technological and political elites (both anti-intellectual and limited in their own ways) are bad for both sides.
From what you write, I get the impression that you claim this article is irrelevant and worthless, especially because "engineering tools can always be used both for good and bad", and thus an engineer bears no responsibility at all for where his creations are used. But actually, an engineer can choose which company he works for, can't he? So he already has some choice! Sure, in the end a tool he created might get used by an "evildoing" company if the buyer of the engineer's work sells it on, but then at least the engineer had a real chance to try to make that harder by not selling directly to the evildoer, didn't he? I don't understand how you can claim that an engineer cannot make such a choice and "must perforce ignore tangential ethical implications".
You also claim that if an engineer wanted to care about ethics, he would have "to go insane second-guessing how others might pervert [his work]". That is a classic rhetorical maneuver of exaggeration: why immediately claim his only option is "insane second-guessing"? Please don't deny the engineer the capacity for basic observation and critical thinking, because perfectly sane observation already has the power to reveal quite a lot of a company's morally suspicious activity! So does an engineer have the moral right to close his eyes, or to claim that he does not facilitate what he sees and, by his work, facilitates? Is that what you claim?
Now, the first problem is that it's easy and cozy to claim no responsibility and no power. But that always was, is, and will be easy. The second problem, which is how I understand the gist of most of the article, is that because of corporations (especially their size), this observation and reasoning have recently become quite a bit harder. That is exactly why it is important to make an effort to think about how to reverse that, so that engineers can regain more control and insight here.
Also, among the many problems I have with your response, another that stands out is your claim that the article calls "to form formal associations, assign things to committees, and then issue a series of prescriptions on what the group members ought to do". Where, oh where, does it give even the slightest hint in this direction? Please!
As an engineer I really hate the idea that it should fall to me to quit as basically my only option if my boss wants me to do something that I think is unethical. I understand where the authors of the Guardian article are coming from. But I also think it's shitty to suggest that because of someone else's poor morals I have to find a new job.
I say this as a guy who's successfully avoided working at companies where I might be put in the position of having to make that choice. So this isn't bitterness from having already had to quit a job over it. But it does kinda bum me out that there are a lot of jobs I can't take because of the possibility of my work getting abused.
What bugs me is that the implicit assumption here is that management is corrupt and can't be trusted to be ethical themselves, and thus it falls to the engineers to boycott actually doing what they pay us to do. Why shouldn't the blame go to the people who are holding the reins?
Responsibility isn't zero-sum. The fact that you have an obligation not to act irresponsibly does not absolve your manager of the obligation not to ask you to. This particular article focuses on one particular layer in the hierarchy, but the author isn't implying that morality is only relevant in the trenches -- it applies at all levels, and shortcomings at any level should be addressed. This was an article written by an engineer for an engineering audience; tomorrow you might see an article in a different newspaper for middle managers or company directors or shareholders.
As an aside, though "exit" may be a more powerful force than "voice", I wouldn't discount people's abilities to change corporate behaviour by mechanisms other than boycott. In fact, if moral workers are to steer clear of questionable businesses, it is unreasonable to expect those businesses to maintain any kind of moral compass.
I think the idea is you can work for anybody, but you should think about what you're doing. If everything the company does is immoral, don't work for them at all. If not all of it is immoral, you can do the part that's moral. Usually it's a whole range of shades of gray.
To me, an ethical decision is one in which the result does not, and will not, put an individual, or a group of people, at an obvious disadvantage, while benefitting another.
> Ethics also means, then, the continuous effort of studying our own moral beliefs and our moral conduct, and striving to ensure that we, and the institutions we help to shape, live up to standards that are reasonable and solidly-based.
There seem to be a lot of defensive comments, which, I guess, come from the focus on individual guilt. Focusing too much on individual choice is examining a particular solution approach and not the problem itself, which needs to be solved. Lack of awareness is not a valid excuse, because even when some people speak out on the harmful consequences of technology (people have long talked about the surveillance state), they are ignored, since we don't have systems in place to do a cost-benefit analysis and coordinate around a solution. Sure, there are many problems with the individual approach (what if somebody else is hired? what if opponents develop a weapon technology?). But this only highlights the difficulty of the problem. This is why the 'we' becomes appropriate, because individuals are weak (unless they are at some critical stage).
There are potentially many developments in areas like nanotech, bioengineering, and AI with which humanity might face big troubles. For this, there needs to be some strategy for identifying such potential problems and putting precautionary measures on R&D. Something like this happened with nuclear scientists hiding their research before WW2. This is harder to replicate now, with the science and engineering community distributed more widely between competing nations. So, a first step could be to achieve a shared understanding of common interests and ways to coordinate with each other.
Alternatively, energy can be poured into defensive measures for defeating harmful technologies. We already have many engineers developing counter-technologies. Also, Snowden himself was an example of exposing something which he considered harmful.
This kind of thinking only works if all intelligent people end up sharing and following your same ideals. Since that's just not going to happen, is it better to leave your country in the weak position?
How would the cold war have played out if only one side had nuclear capabilities?
I'm also not sure the NSA intelligence is on the same level of difficulty as the Manhattan Project. As I understand it, most of it is general information retrieval techniques. The same tool I wrote to help troubleshoot our VoIP networks is literally the same thing I'd do if I were writing a mass-spying tool. (Collect and index every single packet, store indefinitely, provide fast on-demand access on any selection criteria.)
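To make the dual-use point concrete, here is a minimal, hypothetical sketch of that "collect, index, store indefinitely, query on demand" pattern, using only the Python standard library. The schema, field names, and sample addresses are invented for illustration, and a real capture tool would sit on top of something like libpcap rather than hand-fed records:

    import sqlite3
    import time

    def open_store(path=":memory:"):
        db = sqlite3.connect(path)
        db.execute("CREATE TABLE IF NOT EXISTS packets ("
                   "ts REAL, src TEXT, dst TEXT, proto TEXT, length INTEGER)")
        # Indexes are what make "fast on-demand access on any selection criteria" cheap.
        db.execute("CREATE INDEX IF NOT EXISTS idx_src ON packets (src)")
        db.execute("CREATE INDEX IF NOT EXISTS idx_dst ON packets (dst)")
        return db

    def record(db, src, dst, proto, length):
        # "Collect and store indefinitely": every packet summary goes in, nothing ages out.
        db.execute("INSERT INTO packets VALUES (?, ?, ?, ?, ?)",
                   (time.time(), src, dst, proto, length))

    def query(db, **criteria):
        # "Fast on-demand access on any selection criteria": build a WHERE clause
        # from whatever fields the caller supplies.
        where = " AND ".join(f"{k} = ?" for k in criteria) or "1=1"
        return db.execute(f"SELECT * FROM packets WHERE {where}",
                          tuple(criteria.values())).fetchall()

    db = open_store()
    record(db, "10.0.0.5", "10.0.0.9", "RTP", 214)  # hypothetical VoIP media packet
    record(db, "10.0.0.7", "10.0.0.9", "SIP", 580)  # hypothetical signalling packet
    print(query(db, dst="10.0.0.9", proto="SIP"))

Nothing in that core is specific to either troubleshooting or surveillance; the only difference is whose packets go in and who gets to run the queries.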
Edit: Parent comment was equating NSA surveillance in difficulty to the Manhattan Project, and making the case that if no one like Feynman had worked on it, there wouldn't be nukes.
Sorry about that. I retracted my comment because I felt it was meandering and muddled.
> This kind of thinking only works if all intelligent people end up sharing and following your same ideals. Since that's just not going to happen, is it better to leave your country in the weak position?
Indeed, this is one of the reasons why there's not really much to say on the subject of ethics regarding the NSA's behavior. It's in people's nature to want to acquire as much power as possible, and there are endless justifications for doing so.
Since 1925, in a voluntary ritual near the end of a Canadian engineering degree, almost-graduates swear an oath and, upon doing so, are given a small card and a ring.
I swore my oath more than 25 years ago and I still have the card in my wallet and the iron ring on the smallest finger of my working hand. A couple times a year, I pull out the card and read it.
The oath reads:
"
I [worldvoyageur], in the presence of these my betters and my equals in my Calling, bind myself upon my Honour and Cold Iron, that, to the best of my knowledge and power, I will not henceforth suffer or pass, or be privy to the passing of, Bad Workmanship or Faulty Material in aught that concerns my works before mankind as an Engineer, or in my dealings with my own Soul before my Maker.
MY TIME I will not refuse; my Thought I will not grudge; my Care I will not deny towards the honour, use, stability and perfection of any works to which I may be called to set my hand.
MY FAIR WAGES for that work I will openly take. My Reputation in my Calling I will honourably guard; but I will in no way go about to compass or wrest judgement or gratification from any one with whom I may deal. And Further, I will early and warily strive my uttermost against professional jealousy or the belittling of my working colleagues in any field of their labour.
FOR MY ASSURED FAILURES and derelictions, I ask pardon beforehand of my betters and my equals in my Calling here assembled; praying that in the hour of my temptations, weakness and weariness, the memory of this my Obligation and of the company before whom it was entered into, may return to aid, comfort and restrain.
"
That's a great oath: bypassing ethics completely, and swearing devotion to technical quality and to not disturbing the professional social order, it underscores, bolds and italicizes TFA, which is nicely summarized in the caption under the picture which comes before any of its text:
"Engineering ethics are mostly technical: how to design properly, how to not cut corners, and how to serve our clients well."
Congratulations, you have utterly and perfectly - without wasting materials! - demonstrated why such articles are necessary. In case it's not crystal clear yet, an oath like that would do nothing to encourage an Aaron Swartz or an Ed Snowden.
What would you prefer? "I swear to uphold an ethical course of behavior as defined by the current zeitgeist of Hacker News"? It's clear that in delivering a sub-standard project, you're acting badly. But when you make a great product that may be used to harm others?
Many of the creators of the atom bomb have expressed their belief that they did the right thing. I agree with them and would have no ethical problems working on the nuclear stockpile. Many people, however, would consider that extremely bad.
I think the point is not doing what you believe to be wrong.
And to have ethics as a group, there needs to be some discussion on what's right and wrong. That might help you to (further) develop your own ethics, and also give you some backup from your peers if you decide not to do something.
It's not just Canadian. I did it too for my engineering degrees in the US. My ring is in my dresser, but I still keep it in mind. I was being recruited hard by Picatinny Arsenal and didn't give them the time of day, as I'm not keen on building weapons systems.
Thanks. I lost my card years ago...but I don't think I forgot what it said. Maybe I can order a replacement to carry with me to show people at parties.
An important note here: "Moral" does not refer to a singular set of values, behaviors or beliefs. It only refers to a moral system, any moral system. That is why the oath the PP carries around from morning till night is "moral", yet does nothing to encourage consideration of the larger ramifications of their work, such as ecology, human rights, etc. It's only one moral system, not one which respects others' rights (beyond efficacious contractual obligation), etc.
Interesting how the wiki version of the "Obligation of the Engineer"^1 seems to speak toward being a good steward of technology, humanity and public resources where the oath the OP quotes is more focused on delivering a good product and being obligated to do so by your pay.
No. How to be a responsible engineer who doesn't get people killed. There are situations in life that require one to grow the fuck up, and being a professional engineer is one of them.
> How to be a responsible engineer who doesn't get people killed.
Actually, the above oath will maximize the number of people killed if the sworn engineer happens to be working on a weapon. There is absolutely nothing in that oath about not doing harm.
There's a difference between a collapsing bridge (the original reason for the oath) and a weapon. You knew from the start that the weapon was going to kill people. I'm ok with designing weapons; my ethical beliefs allow that. It would be unethical to accept the project but deliver a weapon that didn't work properly.
> I'm ok with designing weapons; my ethical beliefs allow that. It would be unethical to accept the project but deliver a weapon that didn't work properly.
Sabotage for a good cause is unethical? You have strange ethical beliefs.
> Actually, the above oath will maximize the number of people killed if the sworn engineer happens to be working on a weapon.
That depends on the weapon. If a weapon malfunctions and kills its operators, that will help maximize casualties. But it would still be a failure on the engineer's part. If it was supposed to kill its operators, then it wasn't a failure.
And an ethical engineer would design those weapons so that they don't blow up in a soldier's magazine or a submarine missile tube.
It's about adhering to a code of ethics, not undertaking some Hippocratic oath. In most places that code is actually a concrete document, which can be used as grounds to discipline/prosecute/regulate you, should you breach it.
I'm not pulling this stuff out of my ass, and I'm not sure why people seem to want to discredit my thesis that engineers are/should be ethically bound to not endanger the public good.
You do what you think is right. If you think designing weapons helps the greater good, you should design them well. Remember nuclear weapons prevented at least one global war and a couple regional ones.
And no. I have refused to design weapons early on in my career and I would refuse to do so now. I took my oath (a slightly different one, because I am Brazilian) and I take it very seriously.
Many wars of the late 20th century were proxy wars between nuclear powers. My bet is that, without MAD, they would have happened anyway, with the powers fighting directly. It would be very ugly.
Besides, we may need nukes to vaporize incoming asteroids. They can also be handy if you need to quickly dig a hole or move a mountain, or if you need a cannon that can reach orbiting spacecraft.
Equating taking the engineer's oath with being a 'good robot' is childish, at best. I could also call it ignorant, or perhaps on its way to criminal (if somebody truly believes that and puts it into practice).
The engineer's oath towards ethical practice is precisely about not being a robot. We have specifications, years of technical brainwashing in school, and employment imperatives to make us into robots. Ethical practice is about having a watchdog thread in your brain that monitors your situation as an engineer. When a situation comes up where you could be a robot and do something stupid/illegal/dangerous/corrupt, the hope is that you can invoke your ethical/moral compass and make the right choice. Hopefully your compass is better calibrated than some unnamed folks in this thread.
You want to talk about reducing a position simplistically? How about taking the backbone of a healthy engineering infrastructure, one which everybody relies on and must trust, and equating its core values to being a 'good robot'. I'm not reducing that person's position, I'm defending the position of the person who understands what they're talking about.
So, yes, I reiterate to anyone who agrees with the OP: grow up. Or better yet, don't be a god damn engineer.
I've had Richard Stallman personally lambaste me for unethical behavior, namely by not quitting my job at Apple because he disagreed with Apple's behavior.
While I agree that ethics are important, there are people who will use ethics-like arguments to manipulate you.
I make shooter video games. Is that a bad thing? According to some people I'm evil. I've written software that people used to write software that people used to kill people, is that okay?
Saying, "We gotta follow ethics" is great stuff, but I'm unwilling to be used.
I'm sympathetic to the broad point you're making, but your example isn't very good:
> I make shooter video games. Is that a bad thing? According to some people I'm evil.
If you thought that shooter video games were bad and you built them anyway, that would be poor ethics. That other people think shooter video games are evil isn't what matters; it's what you think that matters.
The essence is that the people who built the unethical NSA tools most likely did not consider the things they did unethical, so calling on people to be ethical as a response to the NSA leaks only works if "ethical" is understood as "you can't build surveillance tools for the NSA even if you personally believe it's ethical".
There are roughly two sorts of people to consider here. The people who agree that the project they are working on is unethical (or would agree if they stopped to think about it), and the people who honestly believe that their work is ethical.
Obviously when we tell engineers to consider the ethical implications of the project that they are working on, we are talking to the first group (particularly the parenthesized subgroup). The people who believe their work is unethical won't stop working on it just because we disagree, but that should not discourage us from encouraging people to grow a spine and not work on projects that they have ethical objections to.
For people in the second group, we can work with alternative techniques. One possibility is socially ostracizing and blacklisting people who continue to work on unethical projects. (As discussed at some length on HN three or so weeks ago: https://news.ycombinator.com/item?id=6714585)
I think these debates tend to dramatically overemphasize the size of group one, to the point of being misanthropic ("grow a spine"?). A fundamental fact of life is that people have vastly diverging belief systems and that those in the ivory tower don't always know what's right or best.
There's a tangential group to this, which is the people who work on something they don't consider unethical, but may change their minds when they learn the true scale of what they are part of (it's unlikely many people in the NSA outside the very top have full visibility on all the programs detailed in the Snowden leaks, and it's very possible that each, viewed in isolation and with the right context, can be quite defensible).
The second suggestion overestimates just how attractive hanging out with judgmental purists actually is. Today it's the NSA, tomorrow it's people who work on social gaming, the day after it's finance, and the next it's ads. These sectors are already shunned by (some? large? at least they're vocal) parts of the tech community that consider themselves and their endeavours morally superior, yet they thrive just fine.
The NSA is different from all your other examples, though. Finance, social gaming, ads and pretty much everything else is or can be regulated by the rule of law.
Intelligence agencies cannot.
There is no precedent of a surveillance society that managed to keep from turning authoritarian. Why should anybody assume that this time is different?
Of course intelligence agencies can, should be and are regulated by law. Sometimes they manipulate the law in their favour and sometimes they break it, but so do the other fields, most notably finance.
But more importantly, law and ethics are not the same thing. Merely not breaking the law does not make you an ethical person (and that's not the point of the law). Conversely, breaking some laws under some circumstances does not make you an unethical person.
> "I think these debates tend to dramatically overemphasize the size of group one, to the point of being misanthropic ("grow a spine"?). A fundamental fact of life is that people have vastly diverging belief systems and that those in the ivory tower don't always know what's right or best."
You seem to have not fully read my comment, or misinterpreted it.
I am not telling people who consider their (in my opinion, unethical) work to be ethical to grow a spine. Disagreeing with other people, even me, on matters of ethics does not imply that somebody is spineless.
Performing a job that you consider unethical makes you spineless.
I am telling people to act on their own assessment of their work. Frankly I shouldn't even need to tell people that, it is practically tautological how common sense it is.
The quoted line is more responding to your second paragraph, not your first: "Obviously when we tell engineers to consider the ethical implications of the project that they are working on, we are talking to the first group"
No, I think when "we" tell engineers that, we're addressing a straw-man, posturing and showing off our superior ethics. Imagining that there's an audience of engineers who just need to be told that what they're doing is unethical is vain.
The "first group" refers to "The people who agree that the project they are working on is unethical".
Telling people to, in essence, 'follow their heart' and do what they think is right is not "posturing and showing off our superior ethics". If they think that their work is unethical, then they should not do it.
The third paragraph is the paragraph that endorses posturing and showing off our superior ethics.
> so calling on people to be ethical as a response to the NSA leaks only work if "ethical" is understood as "you can't build surveillance tools for the NSA..."
It also works if enough other people believe it. For example, the medical doctors who helped design torture at Guantanamo. Would you join a practice with them? Allow them to admit to your hospital?
Merely making a tool that is used in an atrocity is traditionally not enough to condemn a toolmaker.
Consider the extraordinarily extreme case of Bruno Tesch. He was hanged because he provided his pesticide for the express purpose of killing humans, knowing that this is what it was for. This was evidenced by the sale of the pesticide without the standard warning odorant, and by witness testimony. Conversely, I am unaware of anybody who was hanged for manufacturing train cars.
An NSA employee may claim ignorance, and assert that they were told the systems they helped develop were only to be used for legal and ethical purposes. If this is work that they did before various NSA revelations became public, they might even be believed and therefore excused.
So what if these NSA engineers really thought they were helping people and doing good? What if they personally decided they were willing to accept the privacy sacrifices they were imposing because, from what they knew working in very secret, high-level intelligence (seeing and knowing a lot more than we do), the threat to people's lives and safety was worth what they were doing? In that case this whole article would be considered bunk. Ethics, like morals, are relative.
Is that actually useful in any way? It sounds like self-help "be true to yourself" kinda nonsense.
Ethics are rather pointless without some general priors to agree on. It's hard to reach full consensus even on basic human rights. Even if everyone did, they diverge rather quickly. The whole topic is logically pointless.
The article is arguing from the useful prior that what the NSA is doing is wrong. Then people are jumping in and saying, "hang on a second, what if people disagree about what's right and wrong?" (e.g. "According to some people I'm evil"). So it sounds like you should be arguing with them instead of me?
But to expand the discussion a bit, ethics is a multilayered thing. As Crito points out above, there's a difference between "x is wrong" and "x is wrong but not my problem because I just build stuff". A major thrust of the article is that engineers should take ethical responsibility for the consequences of our actions, which the author sees many engineers as failing to do. Thinking through the ethical implications--assuming those ethics are founded on some general priors--results in a world more closely honed to those priors. Think of "take ethical responsibility" as a meta-ethic.
In that context, "be true to yourself" isn't as nonsensical as your dismissal makes it out to be. It's true that arguing about moral axioms isn't helpful (and wasn't that sort of my point?), but that's not actually what's happening here.
I happened to think that nuclear power is a good thing (or at least, that it can be). Many people agree with me. I'm at odds with people who think that nuclear power is evil. Is it unethical for me to write firmware for a nuclear power plant?
Frankly, these lists of things that engineers need to follow in order to behave ethically pop up every now and then, and it's clear to me that the authors of these lists have their own agendas. As I've said, I refuse to be used. I'll pick my own standards.
The lists appear to be so broad that they're useless (e.g., "Don't be evil") or so narrow that they are instruments of control ("Don't work on X or Y or Z").
So I think we're in agreement.
Finally, it's likely that the number of engineers who actually do evil is quite small, and of a character that, upon reading a code of ethics, they are rather unlikely to shake their heads, wake up and change their ways. (I'm thinking in particular about malware writers, but there are many other examples).
I think the point which the parent is trying to get at is that he should not be expected to follow rules whose premise he disagrees with. People can call you whatever they like, but the question is whether you should allow their moral or ethical norms to affect your actions.
It's written into our Code of Ethics in Canada that all Engineers are to put the public good above our own personal gain or that of our employer. Our tasks are to be selfless and to the betterment of all mankind, but like any difficult subject there's a fog of gray areas and open interpretation to be had. Heck, we wear the Iron Ring as a reminder of this sworn duty.
However, the role we claim to take on, true "selflessness" is near impossible to reach in practice. Before we are engineers we are people and have basic needs. Without whipping out my Googlefu I would wager there are certainly more cases of people blowing the whistle and seeing their way of life crumble around them than there are who get protected by law. There is massive resistance to change from those who call the shots and if you're to go against the grain you're looking to sacrifice your career, life, health, etc.. The engineer says "you need to say something or people will die" but the person says "your family needs to eat".
In the end I think engineers weigh decisions of ethics in the moment. A protocol, security appliance or other technology development isn't bad in itself; it's how it's governed, and how it gets implemented into the bigger picture, that starts to cross ethical boundaries. Any engineer sees the good in his/her work and is perhaps aware of its dark side but elects to overlook it for the good it will provide.
Asking engineers to hold themselves to this standard is good. A self-governed society with regulation helps everyone for the better, but I feel anyone tasked with decisions that affect the public at large should be held to this accountability. I think I'm preaching to the choir at this point though.
> It's written into our Code of Ethics in Canada that all Engineers are to put the public good above our own personal gain or that of our employer.
That's great. Which part of "protecting the motherland against terrorists" would not satisfy that code? I'd bet that the vast majority of the (necessarily quite competent) engineers that built the NSA infrastructure believed (and possibly still do) that that is what they're doing. It would surprise me severely if a majority of them was making a trade-off between "family eats" and "the right thing to do".
We're not talking about people that plausibly do not have alternative employment available. These are highly competent engineers and they could go work for any number of companies with decidedly less evil businesses if they felt their ethics were being compromised.
While it's true that as engineers we must be ethical, ask the question of ethics with regard to what we do, and stand by those ethics in the face of adversity, the reality is that we are only engineers. Without a grasp of lessons learned in the past and training in how, specifically, to handle ethical situations (and I can attest that Intro to Engineering Ethics courses are poor lessons for practical purposes), there truly need to be people versed in such codes of ethics doing this work, both outside (hence the ACLU, EFF, the various Privacy Commissioners of Ontario/Canada, etc.) and within the organizations doing the work (lawyers, policy analysts, etc.).
As far as I can tell, it's not a conundrum that many software engineers will face. As for the U. S., the worst that's going to happen is that one goes and finds another job. In the worst imaginable case (that is very unlikely to happen), one begs at an exit ramp off the interstate highway. Oh, you'll sacrifice some pride, but you won't starve.
The most likely case is that the person doesn't say "your family needs to eat", but rather "I'll have to explain why we gave up the leased BMW and the kids aren't getting iPads for Christmas" (or pick your own lifestyle impact that doesn't involve going without food).
Maybe India is different, Europe I'm inclined to believe not so different. Few of us will have to explain to our children that they are hungry because Mommy didn't take that job at Lockheed Martin.
> However, the role we claim to take on, true "selflessness" is near impossible to reach in practice. Before we are engineers we are people and have basic needs. Without whipping out my Googlefu I would wager there are certainly more cases of people blowing the whistle and seeing their way of life crumble around them than there are who get protected by law. There is massive resistance to change from those who call the shots and if you're to go against the grain you're looking to sacrifice your career, life, health, etc.. The engineer says "you need to say something or people will die" but the person says "your family needs to eat".
I blew whistles twice in about a 6-month period. First was at a good company (won't name it, you know what it is) when I had a really bad manager. (I worked with a recruiter once who knew the name and said, "I get his people all the time, he's the worst manager at <company>.") Four months later I worked at a startup where top management tried to get me to commit felony perjury. Refusing to do so cost me my job, and the CEO launched a months-long personal campaign to ruin my reputation. If he weren't basically considered a joke by everyone who actually knows him, and if I weren't above-average in terms of being articulate, I'd have been fucked.
If I were of more moderate talents (meaning IQ 130-140, so I'm talking about a level that's still quite strong) or older (meaning 45) or had kids, I would have been in a lot more danger. The percentage of people who can blow whistles and survive it is small-- maybe 1 or 2 percent. There are a lot of shady databases that some companies are now building to make it easier to share information in appropriate ways. It started with banks and tech companies sharing compensation information (because in finance, your bonus is your performance review, so people inflated their numbers) but it's also been used for blacklisting attempts.
What really surprised me out of the whistleblowing experience was not the official response. That generally came out to what I expected. But you learn quickly that most people are morally weak and will turn on you, either because they don't like your "tone" or because "you should have used proper channels". What those people don't see is that whistleblowing almost always happens when "proper channels" fail. (See: the infamous, but misunderstood "hot coffee lawsuit", wherein the plaintiff originally just asked not for a million-dollar award-- which she never received, by the way-- but to have her medical bills paid-- a reasonable request given all the circumstances.) Most people are morally vacant "slutty cheerleaders" (please see this high-school metaphor as gender neutral; sluts and cheerleaders can both be male) who ultimately side with power unless power really fucks up.
What's the best thing society can do for whistleblowers? Personally, I think blacklisting should be a jail offense with the same penalty as attempted murder, because that's what it is.
No, not at all. Blacklisting is attempted murder (perhaps with a low success rate) or, at the least, reckless endangerment.
Financial hardship leads to depression and risk of suicide. Obviously, people can't be held liable for all cases in which they create conditions that may be unhealthy, but the purpose of blacklisting is to create persistent financial hardship, often without the person knowing the true cause.
The fact that a blacklisted person will often not know the cause of his failed future interviews may add to the sense of low self-worth, or the suspicion may create paranoia, and both are conducive to mental health events that place the sufferer in high danger and others around in medium danger.
Of course, there are legitimate actions that may cause financial hardship. If you fire someone but don't damage his reputation, then you've made a valid business decision and the fault isn't with you. That's just how business works. If you ruin his reputation or make it harder for him to find jobs and he ends up sick or dead, that's your fault entirely and you should be brought up on criminal charges.
By that logic, exposing a scam is also murder, since the scammer could go bankrupt, face jail, etc.
Not all things that cause financial hardship are attempted murder. Firing someone is even more likely to cause hardship, but a valid business decision. Financial hardship isn't the purpose, though, when you fire someone or expose a scam. It's a side effect. What you are really trying to do, in the scam case, is protect the public and, in the firing case, end an unprofitable business relationship. Once those goals are achieved, you generally don't go after that person to cause further financial difficulty.
Reporting a drunk driver is murder, since they may get arrested, lose their job, and be shunned.
Again, no. If anything, you're saving the world from more severe hardships (of the financial kind; but, more severely, injuries and deaths) by reporting him. Besides, when you report him, your goal isn't to ruin his life. It's to protect innocents from the danger he's causing. (If his life gets ruined, he's not exactly innocent, but that's probably not your goal in reporting him.)
See, blacklisting is done specifically to cause persistent and unreasonable financial hardship. Financial difficulties to another party are, in those other cases, side effects. With blacklisting, there is intent to ruin the person's life, and that intent is persistent, which generally means that there is no escape for that person. Different story altogether.
> Engineers have, in many ways, built the modern world and helped improve the lives of many. Of this, we are rightfully proud. What's more, only a very small minority of engineers is in the business of making weapons or privacy-invading algorithms.
Morality is relative - by defining what is and isn't moral the article loses the opportunity to make a bigger, more inclusive point. While some people may think spying is wrong, others, like the Economist (http://www.economist.com/news/leaders/21588861-america-will-...) have argued that it's not a bad thing. Similarly, building weapons isn't bad (per se); as the ancient adage goes: "if you want peace, prepare for war".
My point isn't to start an off-topic debate, it's to point out that morality is complex and subjective, and it's up to the individual to make their own decisions. Personally I would never work for the NSA or a defense contractor, but I understand why some people do. I think building a strong moral code is very important, and it's a bad idea to let other people do it for you.
I don't know that I would agree with you that morality is relative, but I strongly agree with your larger point that you shouldn't let other people build your moral code for you. I also would never[1] work for the NSA or a defense contractor.
[1] By "never," I mean that I cannot imagine a realistic scenario in which I would accept such a job. I can imagine an unrealistic scenario in which the head of the NSA has kidnapped my child and is holding New York hostage with a ticking time bomb in which I would accept such a job gladly.
As someone who has done both of those things, the pay is great and the work is interesting. The only thing that sucks is the work environment. SCIFs are a terrible place to be a programmer.
Are SCIFs terrible work environments for software developers specifically or does that assessment apply to other roles in the same environment? What made it so terrible?
> Personally I would never work for the NSA or a defense contractor, but I understand why some people do.
I used to work for a defense contractor and would do so again happily. I think there's just as much of an ethical consideration in play when you're building robots to put people out of work faster than our safety net can keep up with as when you're building robots to fight our wars so American soldiers won't have to.
Does anyone else have a feeling that the idea that it is unacceptable to suffer soldier casualties during war is ultimately emasculating for the US army?
This is not the first time this has happened in history, either. The fearsome Roman Legions of the golden age used the Gladius, a short, broad, and extremely lethal sword, as their main service weapon. By the 3rd century A.D. the population had grown so unmanly that some young men would chop off one of their fingers just to avoid being recruited, and the Legions were increasingly composed of long-conquered barbarians, who were pitted against new waves of far-away barbarians.
These new Legions brought longer, thinner swords into battle. Those were great from a personal safety point of view (you could hold your foe at a longer range), but they were harder to use in close formation and ultimately less effective on the offensive. We all know how that ended for them....
We just spent a decade trying to make people live like we want them to without actually living with them - Lesson #1 of any Counter-Insurgency.
And all that concern about pilots getting captured is why drones will take over everything but strategic bombing. Can't have people getting captured and showing up on the nightly news.
Your analogy is such a long way from modern consequences that I wonder if it really applies. But we have made modern war so expensive, nobody can afford another one. If there is a reason we are not at war with Iran, I'd say that's it.
I'd say the ethical considerations are much greater in the latter case, since building drones is also putting people out of work (it's well known that poor people enlist in greater numbers than other social classes).
> My point isn't to start an off-topic debate, it's to
> point out that morality is complex and subjective,
> and it's up to the individual to make their own decisions.
Did the author make the opposite point? (I'm having trouble seeing it if he did. For example, the sentence that follows what you quoted exemplifies the author's efforts to not define morality in strict terms: "However, we [engineers] are part and parcel of industrial modernity with all its might, advantages and flaws, and we therefore contribute to human suffering as well as flourishing.")
This might not be the norm here, but give me enough money and challenging work, great working conditions, and I'll work for any legal company or organization, though some organizations would have to put out a great deal of money vs others!
Given that, like most decent programmers here, I can work more or less anywhere, given equal amounts of money etc., I'd probably go work for Google rather than the CIA.
But if the job choice is between fiddling with JavaScript on some ancient offshored CRUD codebase (been there, done that) and building robots for the American Army, or the Indian Army (I live in India), it's no contest at all.
In other words I won't work for the mafia or child pornographers or anything illegal, but the CIA/NSA/whoever? In a heartbeat.
Drone targeting software? Sure thing. A drone is just the weapon of the day and not any more illegal than, say, a smoothbore musket in its day. Should all the engineers/metallurgists have stuck to making trinkets for the nobility instead of cannons for their army? You are working for the same people in any case.
Every algorithm known to man has both good and bad uses. Not developing algorithms because they might be harmful is insane.
On a less hyperbolic plane, I like Richard Stallman but would easily work for Apple or Goldman Sachs (for example, given enough money, good work, good working conditions and coworkers).
The software you'd write for SpaceX isn't that far different from the software you'd write to control an ICBM.
Salarymen who think they can control what the software they write will be used for by their employers are just deluding themselves. Do you think the people who built Watson have any control on what IBM will do with the tech? And this is a company that used computing to help the Nazis. Should the engineers at IBM not have worked on computers?
Work is just work, a means to exchange your talents for money. Keep it legal, do good work. Go home and play with the kids.
The way to stop the NSA from doing distasteful things is, in my opinion, to work to elect people / hold your legislators' feet to the fire and get good legislation passed, not to refuse to research or deploy cryptographic/cryptanalysis algorithms.
I just wanted a little balance to this discussion. Not everyone buys into the political correctness of the day.
Sorry for the rant, but the article is nonsensical, trying to guilt-trip algorithm developers.
Yes, so: "People should think about it. But I'm just an engineer, basically."
I have to disagree. Resorting to "keep it legal" removes any moral obligation. There are ethical decisions to be made, some easy, some not so.
Should IBM not have worked on computers? No.
Should IBM not have sold to Nazi Germany? Most likely yes.
Should the guy working hard all day to make the Zyklon B have refused to do so? ABSOLUTELY
Back when I went to school, we had one (or two?) hours per week of "ethics class". If there was one thing to take away from those lessons, it's that you can never just refuse to exercise moral/ethical judgement. Acting ethically is one of the central responsibilities that a citizen in a free society has.
> Should the guy working hard all day to make the Zyklon B have refused to do so? ABSOLUTELY
Initially at least, Zyklon B was an entirely legitimate pesticide/vermicide with a broad range of uses. The original formulation contained a powerful warning odorant to alert users to its presence in sub-lethal concentrations.
I don't think there's any claim that the actual production staff knew exactly where or how it was being used, and so would have no real grounds to refuse or quit. There is evidence that once it was found suitable for killing humans, there was a change in formulation to remove the warning odorant at the direct request of the extermination camp officials.
I'm not sure if that alone would be grounds for staff suspicion, as it was apparently passed off as a necessity due to wartime material shortages, which seems plausible to me.
The production and distribution firms' management, however, knew a lot more about what was going on and where it was being used. Founder Tesch and director Weinbacher were both found guilty in a war-crimes tribunal[1], whilst another employee was acquitted.
So I don't think it's quite so simple a scenario as the one you lay out, where 'the process engineer for the pesticide vats should have guessed they might be being misused and quit just in case'.
...Which is exactly why engineers (like me) simply must look beyond the Kool-Aid their company and managers pour and understand exactly what effects their products and services have on the world.
We shouldn't just come into work, pull a bunch of levers and tell ourselves, "I'm sure the managers wouldn't let this happen if it were evil." We have to leverage our curiosity to extend beyond the Disney version.
BTW, this goes equally for marketers, a group I feel is as bad at ethics as engineers. Persuading people to continuously purchase and consume unhealthy food and activities is as destructive to the human spirit and our nation as bomb-making.
Zyklon B was invented in the 20s, long before WWII. So I'm not sure what Kool-Aid they were supposedly drinking. There were many legitimate uses.
Additionally, Haber (of Haber-Bosch, indirectly responsible for feeding what, a third or more of the current human population?) deliberately worked on chemical warfare because he really believed "in Germany". I'm pretty sure he didn't see his work as evil, and likely many of his employees would have agreed. I recall that they hoped it would be such a shock that it would end the war earlier.
I agree with your sentiment in general, but things aren't as easy as you're phrasing them.
But indeed the workers producing Zyklon B in large quantities had very strong indicators available regarding what was going on.
Think about it: you produce this poison gas for years and then, suddenly, a large part of the production pipeline is used to produce that same gas, but without the warning odorant. What plausible reason could there be other than using it against people?
I admit that some of them might have suspected they were working (illegally) on a secret stash of poison gas for use in warfare. But that would still be morally wrong.
If you have already listened to the Radiolab episode on Haber, I also recommend the one on the Milgram experiment. Unlike practically every other account of Milgram out there, they reported the actual results the experiments gave: people will _not_ do evil when ordered, but they will do evil if they believe it is necessary and serves a greater good.
Back to the workers.
Most of those bastards probably suspected something but found it unsettling to pursue this mental avenue, so they didn't.
Some asked questions or raised the issue with a superior and received an evasive answer. Sensing uneasiness ahead, they did not pursue the issue.
Finally, if there were those who raised moral objections, they would have had a talking-to, not a harsh one but an understanding one. "We know this is hard for you." "It is part of the bigger plan for Germany." "Someone has to do it." "We are all indebted to you for taking part in this gruesome but important endeavour."
Humans are like that. These guys all chose the easy way out. But it was still wrong.
For anyone interested in more about Zyklon, Haber, mustard gas, nitrogen-based fertilizer and the ethics of it all, I highly suggest checking out the Radiolab episode that tells their story. Here is the relevant section: http://www.radiolab.org/story/180132-how-do-you-solve-proble...
"Every algorithm known to man has both good and bad uses. Not developing algorithms because they might be harmful is insane."
Yep, that missile guidance system someone built for Boeing was totally designed with primarily good uses in mind.
"Work is just work, a means to exchange your talents for money. Keep it legal, do good work. Go home and play with the kids."
Dude, seriously? Everyone should have some sense of morals and question authority. Come on, if we don't question our current way of life, how does anything get better? Why should we be okay with people dying just so we can have a nice, cozy life with our kids?
These days, the primary purpose is quite likely to be taking out hardened targets of one kind or another - unguided bombs are not terribly suitable for that, unless you use a lot of overkill (as in, nukes against low-tech adversaries hiding in caves overkill, most likely).
I would gladly work on "defense tech" too, as in actually defending people. But what you mean by "defense tech" is actually offense tech. Look at how these things are actually used in the world.
When an entire field needs to stand behind a euphemism in order to avoid seeming ugly, that is certainly some kind of a sign.
There is no difference between a technology being used defensively or offensively. The 'self driving car' tech will feed into 'self driving tanks' tech, and vice versa. Military Satellite tech gives you the GPS and Google Maps, which will in turn be used for military purposes. A technology (including algorithms) can be used for good or ill. That is just the nature of the beast.
You can certainly work against what you see as the overreach of American militarism. It is a political/economic problem which needs a political/economic solution. Your (not) working on cryptanalysis algorithms has nothing to do with it.
To play devil's advocate: Let's say my morals aren't bothered by killing members of the Taliban / Al-Qaeda / Enemy Of The Week, as I consider them a bunch of fanatics and a threat to western civilization. As such, I work on guidance systems and weapons to try and improve their effectiveness and accuracy, making them more precise, and less likely to kill civilians and cause collateral damage when employed.
Now what? Where do you derive your morals, and what makes them inherently correct, and mine reprehensible?
The missile guidance systems used for offensive targeting are similar (not identical) to the missile guidance systems used for intercepting incoming ballistic missiles.
Are you then suggesting that guidance systems for interceptor missiles (like THAAD or the MIM Patriot) should not have been created?
That is one way things are actually used in the world. I would think a case such as that could be seen to be morally ambiguous.
I'm saying people shouldn't actively engineer things that will hurt people. If every engineer refused to build weapons, there would be no weapons. GPS would exist without the army. If someone is putting GPS into a homing missile system, then they're actively building something to harm someone else.
It possibly wouldn't have existed until a much more modern era when the cost would have been lower, sure. But someone (Google, hint hint) would definitely invest in GPS if they thought it would make them money. Besides, I don't think you can really compare GPS with, say, making explosives that can reach a greater blast radius. My point was that simply saying "well, some Army projects can be used for bad and for good" does not justify the engineering of technologies that are clearly meant for killing.
When you say "keep it legal" it sounds like you only mean "so I don't get arrested". That is, you would not work on something very ethical but illegal (like a system for assisting in the production of medical marijuana in a state where it's illegal), but you would work on something unethical but legal (NSA spying).
So really, saying "anything legal" is not saying much - essentially, ethics don't come into it.
I know for a fact that I'm very uncomfortable working on something that I find to be obviously unethical. As for NSA stuff, I wouldn't work on that simply because I'd hate keeping everything I do a secret.
I can't agree more. Legal isn't the same as moral, and conflating the two is pretty disingenuous.
My professional goal (hell, my life goal) is to work on things that make a positive difference in the world. Laws don't necessarily come into it (and sometimes are in direct opposition to this). I believe (or naively hope) that other people in my profession feel the same way, otherwise the future is going to get... interesting.
Sure, as with all things human, there is the occasional gray area where there are no good answers, but yes I probably wouldn't work in a criminal enterprise in a democratic nation, whether it is about medical marijuana or not.
The point is, I'll work on things as per my ethics, not some random person's ethics or a hypothetical "engineers code of conduct" or whatever as the article seems to advocate.
I gave plenty of examples of software that some people would consider unethical (say, drone targeting, or the NSA crawling that TFA is about) that I would have no problem working on. My ethics are mostly in harmony with the legal systems of democratic nations. I'm sorry if that doesn't accord with your notions of what "should be".
Oh, well, that's something different - you'll work on what you find to be personally ethical. That's absolutely fine; ethics, after all, are ultimately personal. I, for instance, wouldn't work for Monsanto, since I find them unethical, though certainly most hackers disagree with me on that. I'm not happy about the NSA situation, though I see it more as an issue of bounds being overstepped; the engineers working there don't strike me as empirically unethical.
I agree completely. I threw in 'legal' in the original post to head off morons chanting "so you'd work for drug dealers" etc. My only point was, to quote you, "the NSA situation ...I see it more as an issue of bounds being overstepped, the engineers working there don't strike me as empirically unethical" (well stated).
I have a hard time listening to the demonizing of the NSA engineers by people who work at Google/Yahoo/Facebook/Amazon (or would like to) who build similar technology to profile users!
Throwing "legal" into a discussion that pretty much revolves around the difference between legal and ethical seems like a poor way to communicate.
Also, "drug dealers" is probably the epitome of a morally/legally ambiguous job. Purdue has no doubt eased the suffering of millions of people, yet they were still hit with felony convictions.
> The point is, I'll work on things as per my ethics
I'm a bit confused... are you saying there are some jobs you wouldn't take no matter how good the salary was? You seemed to imply otherwise in your original comment.
I did read Grellas's comment, and I'm not sure why you think my question had anything to do with "absolute black and white with sharp dividing lines."
In your original comment, you said:
> give me enough money and challenging work, great working conditions, and I'll work for any legal company or organization, though some organizations would have to put out a great deal of money vs others
The key word here is "any." You later stated that you operate by your own ethical standards, and didn't seem to disagree when zzzeek interpreted that as "you'll work on what you find to be personally ethical," so I was merely asking you to clarify if you would really work for "any" legal organization.
Of course, if your ethical standard = whatever the authorities deem "legal," then your position is entirely consistent.
There is a point when further nitpicking communication is not only useless, it is actually a disservice to the forum. I leave the thread to you, and bow out. Cheers, have a nice day.
I honestly didn't think I was nitpicking, just trying to gain some clarity on your moral perspective.
Also, in terms of grellas saying it better than you could, he specifically notes that we "ought to avoid being a proximate cause of something deemed wrong even though technically legal." This directly contradicts your comment, in which you implied that you would work for any legal organization if the money was good enough. I was merely asking if you could address this contradiction, in the spirit of provoking reasonable discussion about the intersection of money and morality. But if you feel my line of inquiry was nitpicking, then I apologize.
Hey no need to apologize, it is all good. What I meant by "go read grellas' comment" was "Ignore what I wrote, Grellas says what I should have said".
Yes, the 'legal' bit (which I threw in to keep the ravening hordes off with their 'but but are you saying you are ok with drug dealing/child pornography/whatever') was clumsy. In my defense, it was late at night here (India) when I wrote that, and it is too late to edit now.
My position in my original comment was "Within the law (see below for an alternative to this clumsy phrasing), I'd probably do almost any job in software, provided I were paid enough". The "enough" might be quite high.
As a thought experiment, take a software job you'd find distasteful, and ask yourself if you'd take it if your pay were a million dollars a year. How about a billion? If you are the rare human being who wouldn't work for the NSA no matter what you were paid, then more power to you. I would gladly work for them if they rewarded me well, because I don't think the engineers at the NSA are evil demons bent on world domination. By that logic, nobody should be a soldier, because the essence of that job is killing other people for dubious political causes. Yet the USA worships its service personnel ("Thank you for your service") to a far greater degree than most other countries (where being a soldier is "just" another job).
Grellas, of course, lays all of this out much better than I can. "Proximate cause" is the phrase I should have used, but I am not a lawyer and wasn't even aware of the phrase.
So, yes, I probably wouldn't work on something that has a purely, horrifyingly evil proximate effect (and no good aspects) to it. But that isn't saying very much, since this kind of purely evil job probably doesn't exist outside a Platonic ideal.
I was objecting to the broad-brush sanctimony in the article, and the use of (in my mind) silly examples.
The whole "legal" bit was clumsy phrasing on my part and you are right to call me out on it.
Well, I guess I wouldn't work for the NSA no matter what (well, it depends on the specific job, of course; they might actually have some positions that are not ethically questionable, after all), and there is one important reason why pay cannot compensate for it: the effects are not just on others, but also on me, so that's a cost I have to subtract from the pay. A society under complete surveillance removes everything from life that makes it worth living, for everyone, including myself, and you cannot buy that back, so money would be worthless as compensation.
I think a common misconception is that ethics are somehow something you obey only for others to gain from it. A consensus of ethical behavior creates wealth for everyone, including yourself. We refrain from mugging people in the street not because we don't want their money, but because a society in which you can expect not to be mugged is just so much nicer to live in, and that is a value in itself. The only problem is that such a society can support some free riders, and so there is some motivation to be the free rider.
"great working conditions" for me implies that I don't find the work odious. No amount of free soda can make up for having to work on a shitty or immoral product.
This is a luxury afforded me by the decent amount of decently well paid work available to an engineer. I might have less moral freedom if I couldn't feed my family.
And there's nothing wrong with that; you're just exempting yourself from responsibility, and if enough people do that, engineering becomes irresponsible.
Arguably this is a problem right now.
Look, I understand your position, food has to get on the table, but: I don't respect you. I am responsible for everything I put out in the world.
You can take that as a guilt trip if you'd like, but I feel like if you're going to be bald-faced about how you put all the real responsibility on laws and those who make them, I can be blunt about how I view your attitude.
I guess everyone has to find their own relationship to the value of their work.
Your morality and ethics are your own. How you choose to align your beliefs with those around you is up to you. Just be aware that non-alignment will cause friction and conflict, and that that choice was yours.
But more seriously, regardless of your ethical views, we should be more mindful of the effects of our work, because it's often pretty clear that those effects aren't actually thought through. Understand what negative impacts your work may have from your frame of reference, and figure out beforehand whether they are acceptable to you.
Case in point: say you can develop a system that will 100% secure the communication and organization of resistance groups in countries with oppressive governments. There is no guarantee that the end result is something you want. Maybe it'll lead to genocide of the once-ruling tribe after the government is overthrown. Maybe it gets replaced with something worse. And certainly, "bad people" at home will get their hands on it. Can you deal with that? You've got to make up your mind. No action takes place in a vacuum.
Ethics are not necessarily your own, and can take the form of an enforceable code in a number of professional organizations.
Some sources define ethics as a set of normative values common to a group of people, and morals as a subset of the individual's normative values.
The distinction you (and the site you link) propose between "morals" and "ethics" is far from universally used, even in the domain of philosophy which concerns morals/ethics.
Notably, it's not even consistent with the definitions given in the "references" provided on the linked site, which seems to adopt the principle that if you assert something boldly and provide references to "support" it, it doesn't matter whether the references actually support it; people will just assume your description is authoritative because you gave references.
Quite a few of my university friends went off to join the military-industrial circus, and I saw their rationale for doing so change over time. Initially, something like having learned the Ada programming language, coupled with a poor degree result, meant that 'defence' was the only option open to them unless they wanted to stack shelves at a supermarket. Then the family came along, with the mortgage and the responsibilities of being a mature adult. The rationale shifted: working for the military machine was now just a means to an end, for bread on the table, when there are bills to pay. This is what happens.
Initially they went to work in the defence industry because it was their only option unless they wanted to stack shelves.
Later they continued working in the defence industry because it was their only option to put bread on the table and pay the mortgage.
Those both look to me like "I'm doing it because it's the only option I have that uses my skills to do something that pays decently".
What I'd find more interesting would be if people tended to shift (1) from "I do this because I think it is a noble way to serve my country" to "I do this even though it's horrible because the alternative is poverty" or (2) from "I do this because the alternative is poverty" to "actually it turns out that this is a noble way to serve my country".
I've worked in the defence sector with embedded systems.
Some engineers do work ethically within unethical companies, particularly defence companies. There are lots of people working out how best to fuck up the delivery and implementation of the latest and greatest killing machines and surveillance technologies. Skilled engineers can also skillfully make total lemons. Some high-profile projects that failed were pretty much sabotaged by the staff on ethical grounds.
This is not discussed outside of the organisations, nor is it discussed inside the organisations in detail but if something is designed to wage war, it might not get there in the end.
"Do you regret building the internet because of surveillance?" is the new "Do you regret discovering fission because of the bomb?"
Now, as a technologist I'd never directly contribute to a project whose intent was to kill or surveil people. (I turned down that cushy DoD contract job straight out of college.)
But this whole "as engineers we need to consider the ethical implications" argument is deeply flawed. First, it assumes we can predict how the technology we build will be used. Second, even if we could predict all the uses, should we refrain from building something that has some good uses, just because it has some bad ones? Third...what about the people actually using the thing for evil?
We might as well drop the "as engineers" part and just have a discussion about not doing evil things in general.
Obviously we are limited by our predictive capacity, but I don't think "there might be bad and good uses" is really a strong argument. Like anything else, you weigh the good against the bad in making your decision.
That's the thing though. How does one weigh the good against the bad in these cases?
If you're building stuff at the application layer, maybe the use is obvious, but if you're writing a library or a service, how can you know how it will be used? Should you expend time enumerating and assigning probabilistic weights to all the good and evil that could come from it?
Far simpler proscription: as a human using a tool, don't do evil with the tool.
You do your best, along whatever axes are situationally appropriate.
For what it's worth, I think a sufficiently generic tool tends to balance toward the morally positive, because there is more intent to do good than intent to do harm out there. But of course, helping to grow that disparity is still important, which is why you should be looking to see if there are ways in which your tool radically, disproportionately facilitates harm.
> When doctors or nurses use their knowledge of anatomy in order to torture or conduct medical experiments on helpless subjects, we are rightly outraged. Why doesn't society seem to apply the same standards to engineers?
I think the difference we draw between the two is that an engineer builds things that can be used for many purposes (defense or offense for example) and we as a society generally agree with one of those two uses. When a doctor abuses their knowledge to cause pain they are the ones making a choice we disagree with.
Anyway, yes, engineers should think about how their work can (and will) be used, but I think it's disingenuous to compare them to patient-torturing doctors.
Assuming that an engineer is aware of the eventual applications for his or her work and is ethically mature enough to recognize that some of those applications are evil, destructive, or violate the rights of others, it doesn't necessarily follow that this engineer should refuse to do the work. For one, just recognizing that a technology has harmful applications--even only harmful applications--and refusing to build it doesn't mean that the technology will not come about anyway. If you find the applications harmful and refuse to work on it, others may--and likely will--step in and build it instead. So there is a game theory issue that the article ignores. If many people can profit (financially, professionally, in learning) from building technology that eventually becomes harmful, then a sufficient number of those people need to refuse to work on the technology in order for it not to be developed. But those refusals increase the scarcity of willing labor and thus its value, so the rewards for developing this harmful technology will increase, which creates a more powerful incentive for engineers to "defect" (in the prisoner's dilemma sense) and work on the harmful tech.
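To make that incentive structure concrete, here is a toy sketch in Python (all names and numbers are invented, purely illustrative): as the fraction of engineers who refuse the harmful work grows, willing labor becomes scarcer and the hypothetical pay for a "defector" rises.

    # Toy illustration (invented numbers) of the refusal/defection incentive
    # described above: more refusals -> scarcer willing labor -> higher reward
    # for whoever agrees to do the work anyway.

    def defector_pay(base_pay: float, refusal_rate: float) -> float:
        """Hypothetical pay for an engineer still willing to do the work,
        rising with the fraction of engineers who refuse (0.0 to 1.0)."""
        scarcity_premium = 1.0 / max(1.0 - refusal_rate, 0.05)
        return base_pay * scarcity_premium

    for rate in (0.0, 0.5, 0.9, 0.99):
        print(f"{rate:4.0%} refuse -> defector pay ~ ${defector_pay(100_000, rate):,.0f}")

The exact shape of the premium doesn't matter; the point is only that each additional refusal makes defection more, not less, attractive.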
There are also objectively good benefits to doing work that eventually proves harmful. If you have a family, you can feed and clothe them. You can learn skills that will help with non-harmful work in the future, and you will meet people who may help you do non-harmful work later. Being intimately familiar with the technology, you will be able to warn others of its dangers. Whether these objective goods outweigh the potential bads of a technology is difficult to determine during its development. And where the balance is difficult to determine, engineers, being practical and conservative by nature, will tend to side with the tangible benefits they can get today over the intangible costs that others may incur in the future.
I agree with the article's point that engineers need to develop a broader societal focus[1] and be mindful of the potential uses, especially the unintended ones, of their work. The problem is that awareness alone isn't going to accomplish much because a world full of aware engineers doesn't change the existing incentives.
Moral behavior is not a 'game theory' concept. It is what you do even when you don't 'win'. At its heart, moral behavior is NOT doing whatever it takes to benefit yourself, which is the sum of game theory.
So yes, if your project has only harmful applications, then you shouldn't work on it - even if someone else most assuredly will (I'm guessing that could be you). Because it's wrong, and hypocritical.
If you look at these things at an individual level, individual morality is a useful lens. But this topic is really about a class of professionals within a whole society. Influencing behavior through moralizing unfortunately works much better in the ancient tribal environment in which it evolved, where shame and ostracization could be used to punish wayward members of the tribe.
The reality of large societies is that a certain percentage of people will do immoral things even when they know those things are immoral, because they can get away with it or even profit from it. And they will do this especially when the rewards for immoral behavior increase asymptotically due to supply constraints.
Using morals to provide solutions only gets you as far as "X should Y". But prescribing "shoulds" doesn't solve anything. Changing incentives does. How to do that is the real question.
Unfortunately, we can each only really change our own individual behavior. So individual morals are all there is. Given that, you can 'go with the flow' and become a person who does immoral things, or you can hold the line and perhaps be an example. And keep your self-respect.
Only if you have faith in some kind of mystical virtue separate from all consequences of your acts in the real world. Even an ethical altruist wouldn't be required to make an empty gesture which doesn't benefit anyone.
http://en.wikipedia.org/wiki/Consequentialism is an entire category of theories of ethics (IMHO the only sane ones) that focus on consequences (for everyone, not only yourself) and do not grant style points for failed efforts.
Cynical to call it style points when you're the only one doing anything that matters. Like instructing a kid, or leading a crowd that could have become a mob, or starting a community effort to reduce graffiti etc.
> Engineers are behind government spying tools and military weapons. We should be conscious of how our designs are used
Except that it's not always clear whether something is ethically good or bad. If you're working on a system for tracking cell phone calls, it can be used to protect against terrorism for public safety (good), to discover mob or child-porn networks (good), for targeted blackmail (bad), or for a surveillance state (bad) -- and the engineer working on it not only has no guarantees about what it is being planned for, but also has no crystal ball to know how its use might change in the future.
And different peoples' ethics are different. So while saying we should consider the ethical implications of our work sounds nice... it sounds awfully simplistic.
Bullshit; it would be like pretending that tires shouldn't have been created because someday they could be used on war jeeps.
It's not possible to predict the uses of one's work, and therefore it is impossible to comprehend its ethical implications. And if you add capitalism to the equation, someone else is going to create the technology even if you don't, as long as there is a direct - or even indirect - profit in it, regardless of ethics.
So the "problem" is not a burden on individuals of certain professions (e.g. engineers) but on basic social structures.
> It's not possible to predict the uses of one's work
Probably not in all cases, but you can often make an educated guess.
You could at least consider classes of activity, ranging from:
Primary harm - the sole purpose is to cause direct harm. You'd have to be dumb or willfully ignorant not to realise it. "This BabyFaceMelter technology will be so amazingly terrible nobody will ever want to fight us again!"
High likelihood of indirect harm - nominally for other purposes, but trivial to see simple ways in which it can be weaponised/made harmful. "Our Public Order Droid platforms are all equipped only with non-lethal electro-tickle cannons, and it'd be really hard to put actual bullets in there"
Moderate likelihood of indirect harm. Fairly compelling "good" uses, but still possible to think of ways it could go bad. Less toxic or higher power explosives intended for mining, perhaps.
Too generic to really know - finally, the weakest level that you present, something like better off-road vehicle tyres. Quite a lot of fundamental science (ie: not the "we hypothesise that X will make the Anthrax really angry" sort) has such wide application that you can't really guess.
Ditto basic algorithms in maths & CS, or software libraries (is OpenSSL evil?)
I don't totally disagree with you, but you might want to try painting with a slightly smaller brush to make your argument more compelling.
Some Americans think working for the NSA is ethical. I'm an American engineer, and I would not work for them, given their recent history.
I would be surprised if you found an American working for the NSA who didn't do so willingly -- people in that position can easily find other jobs, and those who object do.
People think working for Goldman Sachs is ethical. People participating in the Olympics think Russia is ethical. People buying Nikes and Fiji water think they are ethical.
You don't think you're biasing your analysis by concluding that every NSA-related person believes in what they're doing? People who don't believe in what the NSA is doing wouldn't work there. It's going to be very hard to find someone who doesn't believe, since adults who change their long-held opinions are pretty rare.
To name one who did change his opinion of his "mission": Edward Snowden.
For my B.S.C.E., we were required to take an ethics course. By the end, everyone was convinced being ethical was a fast track to job loss & blacklisting.
Same. I still don't know whether the university was deliberately trying to teach the kids to be obedient by shoving Boisjoly down their throats, or whether it was merely a side effect.
This reminds me of Mike Monteiro's talk, "How Designers Destroyed the World". http://vimeo.com/68470326
He was talking about web/industrial/etc designers, and this is talking about engineers. It doesn't matter what field you're in, the moral of the story is: Be cognizant of how your work affects the world, and be prepared to say "no" to requirements that will likely affect others negatively.
I don't think anyone actively works to do evil things in engineering. However, it is possible/easy to end up making something that is used for evil by someone else later - and justifying making weapons through notions of "defense" has been commonplace for years (although I don't agree with the justifications, generally).
I've never quite been happy with open source licenses for this reason - and I've considered writing a pacifists' OSS license with a clause along the lines of:
"No license is granted for any purpose of deliberate harm or death to humans [and other intelligent species]; whether in a deliberate weapon or implement of torture, or whether in a targeting or guidance system for any such device. Missile 'sheild's, designed and only usable for blocking incoming weapons are excluded from this category."
You could hypothetically write additional clauses for other objectionable things, pornography and fossil fuel drilling/mining come to mind.
It wouldn't stop people reimplementing your code from a specification/summary, as many companies do with OSS that they need to use but can't re-release, but it does make making weapons and such more expensive to make (which will decrease supply, assuming your code was ever in contention for use in such a system.)
The problem with statements like this is: whose ethics should one be held to?
The ethics of the majority population are usually more misses than hits. The ethics of one's peers will perhaps be better, but will still suffer from different kinds of prejudices (for example, discovery above all else among scientists).
This leaves one's own ethics, which most people should and will follow. The only thing anyone else can do is make sure people are held accountable for the consequences of their actions.
It's strange to see that those Wikipedia topics don't have English translations (yet).
It's certainly interesting to see the ways in which engineers who choose to follow "Gesinnungsethik" (the ethics of conviction) justify their "choice"... interesting in the sense that they do so no differently from any other profession. I think this is the result of the last decade or so of overdone and misunderstood "individualism" culture.
If software eats the world - and it certainly looks that way right now - we had better make the human beings who create and wield these very powerful tools more sensitive to the responsibilities and consequences of their work.
I'm sure true treasures can be found regarding this topic in similar discussions which took place in the non-software engineering community decades ago and I guess the statements and positions used there are applicable in the software world.
The issue is not that engineering ethics is too technical. It stems from the fact that everybody defines what is ethical for themselves -- there are few universal opinions on ethics, just as with political opinions.
I could write music for commercials; it pays very well. I choose not to, on ethical grounds. If it were my only way to feed my family, I would change my mind. But it is hard to truly terrorize people with music, although I've been accused of it. ;)
Should personal ethics factor in to what we do as work? Absolutely. More than most ethical decisions, what we do as work can profoundly affect the world around us. Are these distinctions easy and clear? No. Is that a reason to stop thinking about it and just do whatever is most fun and pays best? In my opinion, no. This is one of the greatest questions we all face, how our daily actions affect others, and I no longer expect quick and definitive answers.
Your profession doesn't define your ethics, you should define your own. Anyone telling you that you have to be ethical because of your profession is trying to coerce you to their way of thinking.
I find it deeply disturbing to blame engineers for something they had no part in deciding. If a politician/general/manager decides to use some technology for "evil" purposes (for whatever definition of evil), the responsibility should fall primarily on whoever makes that decision.
Engineers who are unhappy with how their work is used are free to quit their jobs and work on something else (in most cases). However, since they get little say in how their work is used, they should also get little responsibility over it.
In Ontario, and much of Canada, to call yourself an Engineer you must pass an exam and meet the requirements. Part of this exam is about ethics. In Canada, Engineers have an iron ring to remind them of their duty to society.
The issue is that terms like "duty" and "public" can get tricky.
How far does duty go? Does it include compromising morals?
Does the "public" include the enemy that we would deploy the technology against?
There is no answer that is without some shortcoming.
I think this is a good point, but it runs into some particular problems when dealing with software, in particular open source software, where there is no direct relationship between the person creating the software and the person using it - in fact, it's not automatically true that the person creating the software knows anything at all about who is using it.
We can only make certain that our individual contributions are ethical, correct, and secure. It helps a little to assuage those particular fears, I think.
People will use good software, and they'll use it for good and evil. Think of how nginx is used. Software isn't like an article for theguardian.com, which can take an ethical position.
Paralyzing engineers by demanding they consider social complexities seems like a bad strategy for getting things done.
The author of this piece reflects what I call the bane of associational thinking. He uses the royal "we" to define a group (here, engineers) and then prescribes very broad goals for what "we" "ought" to do. Since all the things described now stand as a matter of private choice over which the "we" group as a whole has no say, then the only way to translate this sort of thing into practical action is to form formal associations, assign things to committees, and then issue a series of prescriptions on what the group members ought to do. That may be fine in terms of the association giving rules that amount to exhortations to do good (who would disagree with that). But, beyond that, do you really want an organized association dictating what today stand as your private choices for your career? Or, worse, do you want such an association to lobby governments to adopt their strictures and give them the force of law? I would think not. How, then, can "we" do better? Of course, the question is always described as difficult and is left for further discussion precisely because it has no real answer apart from acting on individual conscience or apart from the potentially coercive ones of letting some association or government dictate your career choices and opportunities. Perhaps this sort of reasoning is justified as encouraging people to have a heightened conscience about what they do, and in that respect it is fine. But that is really as far as it goes before veering into unacceptable alternatives.
Our capacity to do wrong is innate to our nature, as is our capacity to do good. We should not stop trying to do good through our creativity just because others can take what we do and commit wrongs with it. Nor should we feel guilty about what we do as long as we in good conscience can say to ourselves that we are doing something productive and worthwhile and not directly causing harm to others. The "we" issue is in reality much more of an "I" issue and, for that, you should examine what you do carefully and strive for the good regardless of what others may do with it. If you want to exhort others to do better by your standards, then all the better. Just don't dictate to them on matters over which people in good conscience may disagree.