I didn't say we should forbid AI PhDs from working on ethics. I'm just saying that an AI PhD shouldn't be a requirement.
We already have an ethics discipline. This is an ethics problem.
Do you also propose that, for example, self-driving car accidents should be investigated by AI PhDs who worked on self-driving car algorithms, and that they should be the ones proposing regulations? Are you saying that a regular "car accident specialist" is ill-equipped to handle these, and that his opinions aren't as valuable because he's an expert in "human car accidents" rather than "AI car accidents"?
Frankly, I think it should be: extremely bright, educated people who have worked with the technology as an executive in a professional or educational/research setting.
There is a legal term for what you are describing - it is called "conflict of interest". No, I do not think professionals should be involved in making decisions in matters in which they have a conflict of interest. Not all AI researchers are stakeholders in Uber, Waymo, or Tesla, hence you would select those people to investigate, as you would analogously do in any other matter of human affairs.
In fact, I'll go a step further and say that an "AI specialist" should work alongside a "car accident specialist" and "lawyers". I am arguing that this is a better combination for investigating the accident than an "ethics specialist", a "car accident specialist", and "lawyers".
> Not all AI researchers are stakeholders in Uber, Waymo, or Tesla, hence you would select those people to investigate
Consider whether this would work in finance: asking JP Morgan people to investigate the Barclays Libor scandal. Smart people understand that a scandal against one of their peers will taint them as well.
Do you hear Google, Amazon, or the other big private-data hoarders making noise against Facebook today? The one that did speak up, Apple, doesn't hold or rely on private data nearly as much.
I would say that a serious ethics panel would have opposed the way Facebook handles private data, in a way that the engineers working on it wouldn't, because engineers didn't study the ethical/political problems of the past and don't understand how being lax with this can cause harm.
BTW, at every university, doing studies on humans or animals first requires approval from an ethics committee. Googling around a bit, I see that these committees draw people from all faculties: Engineering, Humanities, Law, and so on. So the universities that grant those AI PhDs consider it valuable to have a diverse ethics panel, not one composed solely of engineering professors, even when they meet to discuss the ethics of an engineering experiment.
Actually, that's exactly what happens in finance, as I stated earlier: people leave JP Morgan or Goldman for a comfy position at the SEC and maintain their close friendships. Edit: and, as unfortunate as it is, it does make sense.
Further, comparing how large companies with divergent interests react to scandals is only tangentially relevant; it implies nothing meaningful about the bias of hiring autonomous individuals from divergent backgrounds in AI. They do not all work on autonomous cars or <sell your data> at a Big N company. Finance also has different social conventions.
"engineers didn't study ethical/political problems of the past, and they don't understand how being lax with this can cause harm" - I would wager that the average CS professor at a top school is significantly more well-read in history, ethics, and many other subjects than the "AI Ethics major" being proposed. But okay, let's set up an AI McCarthy panel with "experts", we'll get bored of Trump scandals on TV eventually.
> Average CS professor at a top school is significantly more well-read in history, ethics, and many other subjects than the "AI Ethics major"
This sounds like the joke where a plumber comes to fix a physicist's pipes and the physicist goes, "What's so hard here? It's just basic physics."
>I would wager that the average CS professor at a top school is significantly more well-read in history, ethics, and many other subjects than the "AI Ethics major" being proposed.
Wait, what? To become a professor you have to spend the vast majority of your time teaching and/or studying/producing research in your specific discipline. How do you reach the conclusion that someone who doesn't exclusively focus on a topic would be more well-read on it than someone who does?
It really depends on your basis of comparison. I would prefer the opinion of a top CS professor. It is reasonable to assume that they, like most other hyper-intelligent, intellectually curious people who find time for other pursuits over a long lifespan despite working 60 hours a week, are extraordinarily well-read. That's relative to the binge-drinking undergrad who decides to become an AI Ethics major because the workload and intellectual rigor are lower than in STEM, and who knows he can easily get affirmative-actioned into a well-paying job micromanaging software developers from the moral high ground as soon as he graduates - which seems to be where the idiotic suggestions being made converge. Would I say that a top CS professor has more of a background in ethics than a top-ranking federal judge or a top-ranking philosophy professor? No, I would not. I'll also note that one's level of knowledge and life experience is an entirely separate question from whether one is inclined to make "fair" or "unbiased" ethical judgments when given the authority to do so.
As btilly references above, you already have a clear example of the type of people this field will attract: people with poor intellectual ability and zero understanding of large, inefficient bureaucracies, basic probability and statistics, or software systems, who paint simple organizational ineptitude as deliberate malfeasance and racism in order to bolster themselves and their social group financially. It's clear there is an issue that needs to be addressed as a growing pain, but deliberately training moral police with power over the software industry may lead to something like a dark age in technology.