I've been harsh to Microsoft in the past (for what I consider good reasons) but I want to give credit where credit is due - this is exactly the sort of behavior that many have been asking for from companies. Foresee a problem, don't do the thing.
It doesn't matter that there's an argument that this refusal is also good business - the truth is, big companies being confident that consumer outrage won't translate into enough market impact to stop them from doing the bad thing has become the norm. Nice to see an instance where that's not the case.
Rubio told the FT: "It is deeply disturbing that an American company would be actively working with the Chinese military to further build up the government's surveillance network against its own people [...]"
I can see why the US government is perturbed. The more patriotic thing to do would be helping the NSA, FBI, and a bunch of good, old-fashioned American TLAs build up our own government surveillance network against our own people, and maybe helping the CIA and US Military build up its surveillance network of the Chinese people.
This is absurd. First of all, the paper in question is https://arxiv.org/pdf/1803.06340.pdf, which opens with the intimidating quote: "Spicing up selfies by inserting virtual hats, sunglasses or toys has become easy to do with mobile augmented reality (AR) apps like Snapchat"
Second of all, most universities in China are affiliated with the government; this one just has a snappier name. It would also appear the two individuals in question are grad students at Simon Fraser University in Canada.
Third, MS has been and will continue to catch uneducated & poorly-researched flak like this for operating in China in any capacity.
The same tech that turns your face into an alien, complete with facial expressions, can turn your face into a searchable surveillance data point ("Show me all the people who looked unhappy or worried when looking at this propaganda poster").
That’s in addition to the wide variety of state, local and private systems.
Microsoft entered into a well publicized partnership with NYPD on intelligence/surveillance stuff back in 2012. They may have lines they won’t cross, but are hardly innocent.
Only civil government can stop the creeping nature of expanding military and paramilitary powers.
Another Merican thing is to help cause chaos in countries that have adopted ideologies which are not 'democratic.' Nixon would probably give this action a thumbs up.
I don't understand your comment, nor the downvotes. The comment I was replying to was criticizing the government for wanting national companies to help it, which I think is the reality in all countries.
Also, the government didn't knock Microsoft for not helping spy on U.S. citizens— they were mad that they helped the Chinese spy on their own citizens, which they asserted was morally wrong enough to be considered a human rights abuse. My response was not "criticizing the government for wanting national companies to help it," I was criticizing them for having a two-faced stance on governments surveilling their own citizens.
Since Satya Nadella became CEO of Microsoft, exactly what history is there that is dark enough to blacklist that company from any positive reaction?
Windows 10 and its heavy use of dark patterns for excessive data collection is reason enough for me to loathe the company.
Plus those decades of ruthless antitrust/monopoly abuse by MS under Bill Gates have left their mark on the culture.
They are going to need to behave quite spectacularly for at least 20 more years before I even consider viewing them as a customer friendly company with a social conscience.
Are YOU trolling? I'm not, and I suspect that you are.
Dark patterns in Windows 10 were used to get people to upgrade, and the exec behind those decisions was let go.
What exactly constitutes "excessive data collection" to you? If you are saying "telemetry" then turn off all of your computers and devices forever, because they ALL do it. Microsoft is simply more open about actually doing it than most places.
Bill Gates has donated more money to charity than anyone in history, and through that charity work has certainly saved more lives than you or me.
Just admit that you like disliking Microsoft and that your opinion isn't based on anything real.
> Dark patterns in Windows 10 were used to get people to upgrade, and the exec behind those decisions was let go.
Don't care. They are still using dark patterns all over Windows 10 to convince people to disable various privacy settings.
It also doesn't matter that it was to get people to upgrade. The ends don't justify the means. Shitty behaviour == shitty company.
> What exactly constitutes "excessive data collection" to you? If you are saying "telemetry" then turn off all of your computers and devices forever, because they ALL do it. Microsoft is simply more open about actually doing it than most places.
Argumentum ad populum. "Other people do it too!" is not a valid defence of this behaviour.
I'm not going to argue with logical fallacies. The fact that the EU just passed the GDPR, and the fact that people all over the world are waking up to the ramifications of this dragnet data collection, are a sign of things to come: surveillance capitalism has peaked, and people, when they actually understand what is happening, do not like this behaviour. Keep using those dark patterns to deceive your users though.
> Bill Gates has donated more money to charity than anyone in history, and through that charity work has certainly saved more lives than you or me.
Ah yes, the Bill and Melinda Gates Foundation. The colossal tax write-off scheme that allows Bill Gates to avoid having his wealth taxed before he dies, so he can make his children 'directors' of the foundation with security for life and get around that pesky vow he made about not giving his children all his money when he croaks.
The fact that he puts some money towards charity at the end of his utterly selfish life, to try to leave a legacy that is anything but negative, is transparent and shallow. Nothing will ever make up for what he has done in the past: the businesses he bullied into submission and the lives ruined by his antitrust practices and hostile attitude to open source.
So a lot of people are being saved now due to contributions made by the Gates foundation. That's great, it really is. I still donate a larger proportion of my wealth to charity than he does. He's barely even trying.
Bill Gates is part of modern society's problem: billionaires who can't see that it's their own actions, and those of their ilk, that bring about so much of the misery they are seeing. The little good he is doing will never make up for what he is and what he has done.
Yet we were talking about Windows 10, and you started whining on about Bill Gates. I'm guessing you're a Microsoft employee.
Yeah, can confirm - OPORD briefs are given in PowerPoint with neat graphics and an accompanying paper narrative. Make sure you have the right font and color scheme, though.
If I am not mistaken, shareholders are entitled to sue under the current doctrine if the company does not act in their best interest.
MS would arguably defend itself by making the case that key employees might resign, or key accounts might leave, or whatever else its legal team comes up with. MS might even win such a case, for all we know.
Still, a court case could be an unwanted distraction. Or an interesting test case, depending on viewpoint.
But I find "someone else might do it" to be a bit of a weak excuse to do anything. Every bad thing that COULD happen that HASN'T happened is because someone (usually multiple someones) chose not to do it.
It's really that simple. The first step in no one doing any particular thing is to not do it. And while there are cases where this forces a tough choice, the majority of such cases aren't that tough. Like here: The question is not "do we make mega bucks and be evil, or be poor but good". The question here is "Do we make mega bucks being evil or do we go try to make mega bucks elsewhere?"
Everything about AI / face detection is open source. It is not like a chemical formula you can try to hide for a while... The will to use it is there. The will to pour millions into it is also there. So we are really only talking about it happening a year or two later than it otherwise would.
And yet it will only happen when those willing to do so do it.
I'm not saying nothing bad will ever happen - as you say, the will to do it is there.
But I'd much rather deny them the effort of all the people who COULD do it but choose not to.
After all, if you're doing something you'd rather not just because someone else might (or even probably will)...you're not very convincing that you'd rather not do it.
There's a more subtle argument though: someone else will do it, and they will be worse. If you're in the room, you might have a chance of affecting a decision.
Personally I avoid having to make tough calls, but I respect people who think about it differently.
That's a VERY slippery slope, and there are very few arguments where I am personally convinced by it. (and I say that as someone that generally finds "slippery slope" arguments to be overblown about how slippery and how sloped) Feels more like an excuse than a reason.
A world where we don't do something bad creates a reason not to do it. A world where we say it's okay to do the bad things because we assume that someone else will not only do them, they will be more evil than us (and here we are compromising our morals from the start) removes any such incentive and creates a race to the bottom.
Microsoft is actually lobbying pretty hard in Washington to ban private use of facial recognition. The cynic will say that it’s because they realize that Google and Facebook will outdo them here, but I couldn’t care less about their motives.
I think it’s not so much that Google and Facebook will outdo them in this area, but about why they are so much more invested in, and therefore better at, technologies that are opposed to privacy. Google and Facebook as businesses are all about collecting information about their users and monetizing it, which is inherently anti-privacy. Apple and Microsoft actually sell something (hardware and software, respectively), so the smart move for them is to position themselves as firmly pro-privacy, because they can, whereas Google and Facebook cannot (without finding new business models). We will increasingly see Apple and Microsoft using privacy features as a competitive advantage against Google and Facebook. We’ve already been seeing this with Apple quite a lot recently; now Microsoft is following suit.
So one day we will wake up in a world where a government skunk works has developed the technology under our noses and deployed it without proper oversight, all under lock and key of courts and laws similar to those that established the FISA courts.
No thank you. The tech world needs to acknowledge that the real danger is that facial recognition will be developed outside the public eye, without the input and oversight of the technical community that keeps the Orwellian nature many suspect in check.
Which means: lead from within. Push companies that want to work on this technology to engage with lawmakers to ensure that privacy rights are respected, to ensure the technology does not falsely incriminate and, when it is unsure, that it is enshrined in law that it cannot be used to do so. Lead from within to establish limits on the use of the data, how it is stored, and how it is guaranteed to be deleted within a set period. Limit how law enforcement or other government agencies gain access to or use the data. There are no whistleblowers when we are not involved.
tl;dr
we either lead in making sure it works and respects our rights, or we watch as it's done in a backroom deal with intelligence agencies and the hawks in Congress.
Microsoft is aware of the downsides of false positives. In this case, California police officers would have used facial recognition on individuals, but Microsoft recognized that the false positives could disproportionately target women and minorities. That could result in massive backlash against the company.
So they are calling for greater regulation of AI-related tech, considering the human rights issues. No mention of what "regulation" could mean here.
That doesn’t square with Amazon’s decision to do the opposite, except in that this “best for the company” argument is so malleable it can be used to argue for anything and its opposite.
A company can make moral decisions, and the fiction that lawsuits are likely or even plausible is almost entirely mythical.
>were evenly distributed across races, then that's no big deal?...
These are the perverse incentives when the standard is "equal treatment under the law", or "equal protection under the law". If they can find some way to treat everyone in a crappy fashion, the law doesn't have a problem with that. The successful class actions tend to come only when it can be shown that you are only treating some people crappy.
That would be fairly easy to show in this case. Simply have random white and black people stand in front of a camera running MS software and compare the accuracy rates. (Come to think of it, a demo like that would be pretty powerful in a courtroom too. Maybe there is more to this than just the PR MS is worried about?)
EDIT:
I just realized how bad that sounded. To be clear, I'm not saying we should treat some people unfairly. I'm saying the law doesn't care whether we treat people fairly or unfairly, so long as they all get treated the same. I was positing that maybe it would be better if we could try to make the law incentivize treating everyone fairly.
Unfortunately, in the current political climate, yes. More specifically, a smaller group of people would care about that than the group that cares about the current, unevenly distributed misidentification, so there would be less political will to oppose it.
Cigarettes are considered harmful to everyone, you don't magically get the ability to resist addictive, harmful substances when you turn 21, yet we ban sale of them to minors - for similar reasons.
I'm confused by the premise of your question. Are you under the impression that it's possible for a law enforcement tool to have zero false positives?
Nothing has ever had, nor will ever have, zero chance of false positives. Even DNA evidence can have false positives. Law enforcement, the justice system, and society overall have always had and will always have (highly imperfect) mechanisms to deal with nonzero false positive rates.
Therefore, it is of course important that the false positive rate not be higher than average for a subgroup, as they will be disproportionately affected, since our systems will be set up to deal with the lower average false positive rate.
No, that wasn't my point, and I agree that false positives are inevitable. My point was that you shouldn't only care about an excessive false positive rate when it happens to a minority group.
Algorithms 'discriminate' (as in differentiate) because that is exactly the job they are tasked with. Is this a picture of a person on list A or not? Is this a picture of criminal activity X? They discriminate on a large number of, often hidden, unknown or not understood features in the data.
In many countries the laws dictate that some features such as race, religion, sexual orientation, ... are protected, as in not legally allowed to be used in differentiation. (Take note that in many countries certain national security/safety related organizations are exempt from certain regulations).
The models that are used in facial recognition rely (in part) on 'sub-symbolic' probabilistic feedback systems that in many cases defy post-hoc rationalization: we do not have a convincing or specific 'story' about how 'the machine' decides in each case. This means we cannot deductively prove that the above-mentioned 'illegal' forms of discrimination were not used. (Note that it is not sufficient to show that e.g. 'race' or 'gender' was not explicitly used as a feature in the input, as it could be strongly correlated with other inputs or derivations thereof, e.g. type of shampoo bought, zip code, food preference, ...)
So we rely on things like testing the post-training, deployed model to 'vet' that the systems aren't biased in the ways we, by law and regulation, have deemed they should not be. We test whether the output distribution shifts when we only feed in males vs a mixed-gender test set, etc. (a rough sketch of such a check follows below).
In practice this means that in compliance testing we replace deductive reasoning with correlation. We accept that this will yield false positives, but this choice is partly due to technical limitations (apart from a few well-publicized cases, understanding and explaining how e.g. a deep-learning-derived model actually 'works' in a rational, synoptic way is still beyond us), and partly due to ideological stances we have come to.
So, yes, we choose to accept false positives that are presented as evenly distributed across specific protected groups or features, while not 'making a fuss' over others. These are inherently cultural, political, moral and empathic decisions, not 'logical' ones.
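For what it's worth, here is a minimal sketch of what such a post-training check could look like. Everything in it is illustrative: the impostor scores are simulated rather than coming from a real model, and the 1.5x acceptance criterion is a made-up placeholder, not any actual legal or vendor standard.

```python
# Minimal sketch of a post-training "bias audit": compare the false match
# rate per demographic group at the deployment threshold. The verifier's
# scores below are simulated with random numbers purely for illustration;
# a real audit would run the deployed model over a labelled test set.
import numpy as np

rng = np.random.default_rng(0)

def false_match_rate(impostor_scores, threshold):
    """Fraction of non-matching pairs the system would wrongly accept."""
    return float(np.mean(impostor_scores >= threshold))

# Hypothetical similarity scores for impostor pairs (different people),
# drawn separately per group.
impostor_scores = {
    "group_a": rng.normal(loc=0.30, scale=0.10, size=10_000),
    "group_b": rng.normal(loc=0.38, scale=0.10, size=10_000),  # model is weaker here
}

THRESHOLD = 0.60  # the operating point a customer would deploy at

rates = {g: false_match_rate(s, THRESHOLD) for g, s in impostor_scores.items()}
print(rates)

# A crude acceptance criterion: flag the model if any group's false match
# rate is more than 1.5x the best group's rate (placeholder ratio).
best = min(rates.values())
for group, rate in rates.items():
    if rate > 1.5 * best:
        print(f"potential disparate impact: {group} FMR {rate:.4f} vs best {best:.4f}")
```

Note that this is exactly the correlational, after-the-fact style of vetting described above: it says nothing about why the model behaves differently per group, only that it does at this threshold.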
1) "The algorithm has a 50% false positive rate over everyone" -> awesome, no problem!
2) "The algorithm has a 50% false positive rate of people of color, and a .01% false positive rate over everyone else." -> yikes, we can't use this thing!
Do you think you would still object to that mindset if your kin were on the receiving end of that 50% false positive rate where a 'positive' outcome would mean you being wrongfully arrested, whereas all others had just a tiny chance of this happening?
A future of global surveillance seems inevitable at this rate, but society will evolve means to thwart it.
Wearing masks in public is already a norm in some East Asian countries, and it’s a form of fashion as well [0]. Just add glasses to that, which will become common if AR smartglasses take off.
Maybe we’ll see people walking around in fashionable neo-tribal’ish masks [1] (as decorations around AR glasses) to hinder intrusive face/retina scanning.
> China has begun rolling out new surveillance software capable of recognising people simply by the way that they walk.
> The "gait recognition" technology, developed by Chinese artificial intelligence firm Watrix, is capable of identifying individuals from the shape and movement of their silhouette from up to 50 metres away, even if their face is hidden.
Some European countries (France, Belgium, the Netherlands, Austria) and China have banned hiding one's face in public. Those Asian face masks would be OK, but tribal masks wouldn't.
Again, if those medical/hygienic face masks are OK, wouldn't decorative frills (say, feathers) around sunglasses be OK too? That combination would effectively cover enough of a face.
My big question with face rec is this: what happens when the tech is so good that people can find out the identity of someone whose "special" photos were stolen or shared on the internet? That's going to be a huge deal and will cause huge problems for some people if not handled correctly.
I think society has actually gotten better at this. A strategically leaked sex tape is already an acceptable career move for celebrities, and I don’t think many people would necessarily care about it in politics, either.
I don't care about the elites, I care about people who are not well known who get exposed due to this technology, or who are endangered because a stalker who happened upon their stolen nude photos can now track them in real time.
Celebrities and politicians have money and power to ensure they recover from this event, your sister or wife doesn't.
I think it's often more about hypocrisy than it is about the act/actions. It seems to, at least from my pov, work out worse for conservative/Republican politicians than left-leaning ones. Of course, when you get bad actors on the left, this also means things are overlooked or forgiven that really shouldn't be.
This has already happened. There are quite a lot of online services that run facial recognition and match provided photos against datasets derived from porn.
The tech is nowhere near as good as humans are. It's just leveraging the fact that computers can search through a much bigger data set. It's going to take a long time before it gets better than humans considering the fact that progress generally plateaus.
What's the point of turning it down? I can train a superhuman-performance facial recognition model with consumer hardware sitting in my garage. The cat has been out of the bag for close to a decade. What leads to abuse in this case is not the technology, but the policy, and withholding technology accomplishes nothing policy-wise simply because it's accessible to just about anyone with a GPU.
But making news for withholding that technology, thereby drawing attention to and sparking discussion about the issue, can affect public sentiment, which in turn affects policy.
There's also supply and demand. Reducing supply of the technology increases its costs.
The past 20 years (along with the PATRIOT Act and FISA courts) show that we can "discuss" until we're blue in the face and nothing is going to change. This also seems to be remarkably bipartisan, for a change. The only set of actors that can change anything in the US Congress are corporate donors and PACs, and Microsoft is a donor (through its employees and PACs; I don't know about directly).
Reducing the supply of this particular technology is not going to increase its costs, because literally anyone with a recent GPU and some motivation can just download it from GitHub at this point. The most worrisome users of it, the US government and its various three-letter agencies, already have and extensively use this tech. Casinos have been using it for well over a decade to spot people who are "too consistently lucky".
I do buy your point regarding good PR. It's just completely ineffectual wrt its stated goals.
Means it will distinguish faces better than a human in side-by-side testing. This is actually a thing, and it has been for a while. Humans aren't very good at telling if it's "the same guy" or not.
I'll have to look into the terminology to be confident of the best match for this context, but I figure "Image-Restricted, Label-Free Outside Data Results"[1] might match the scenario where a recognizer provides a yes/no after being trained on e.g. driver's licenses (let's call it the "Super Bowl" model). None of the results appear to be greater than 95%, which I understand to be a hard ceiling for requiring human QC.
Many of the other methods get you worse than 50-50 chance of false positives by the time the algo gets to 95% true positives, but not being a statistician I may be reading the results wrong, in which case I'm totally willing to be corrected. Empirically, though, we know that the Super Bowl model has never worked.
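To put rough numbers on why the "Super Bowl" scenario is so unforgiving (every figure below is an assumption for illustration, not a benchmark result), even a generously accurate matcher buries the genuine hits under false alerts, because the people you're looking for are a tiny fraction of everyone scanned:

```python
# Back-of-the-envelope base-rate calculation with assumed numbers.
attendees = 70_000           # roughly a Super Bowl sized crowd
persons_of_interest = 20     # assume 20 watchlisted people actually attend
tpr = 0.95                   # true positive rate (generous)
fpr = 0.01                   # false positive rate (also generous)

true_alerts = persons_of_interest * tpr
false_alerts = (attendees - persons_of_interest) * fpr

precision = true_alerts / (true_alerts + false_alerts)
print(f"expected alerts: {true_alerts + false_alerts:.0f}")
print(f"of which genuine: {true_alerts:.0f} ({precision:.1%} precision)")
# ~719 alerts, only ~19 genuine: roughly 37 false leads chased per real one.
```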
Microsoft said they refused to provide the tech due to the bias against minorities caused by the training data.
How difficult is it to fix this bias? For example, the model can be told to only produce a match when a confidence level is higher than a certain threshold. Then the threshold can be increased as needed on those subsets of faces where training data is lacking. Would that work?
Also, why not build more diverse training data if this is a pervasive problem? It is not free, but neither is it cost prohibitive for someone like Microsoft.
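Here is a minimal sketch of the per-group threshold idea raised above, assuming the model exposes a raw similarity score; the group names and numbers are entirely hypothetical. The trade-off it illustrates: raising the bar for under-represented groups suppresses false positives by accepting more false negatives, so it mitigates one error mode by worsening another, and it presumes you can tell (or guess) group membership in the first place.

```python
# Hypothetical sketch: require a stricter similarity score for groups
# where the training data is thin, so the false positive rate evens out.
GROUP_THRESHOLDS = {
    "well_represented": 0.80,   # plenty of training data
    "under_represented": 0.92,  # thin training data -> stricter bar
}
DEFAULT_THRESHOLD = 0.90

def accept_match(similarity: float, group: str) -> bool:
    """Report a match only if the score clears the group's threshold."""
    return similarity >= GROUP_THRESHOLDS.get(group, DEFAULT_THRESHOLD)

# The same 0.85 score is a "match" for one group but not the other.
print(accept_match(0.85, "well_represented"))   # True
print(accept_match(0.85, "under_represented"))  # False
```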
I think the best part of this is that they're talking publicly about this and explaining their reasoning. We shouldn't be dependent on Microsoft making the ethical judgment correctly every time. Instead, (potential) customers themselves should learn about the potential downsides, lest they simply go to a competitor to purchase the same flawed technology anyway.
That's nice, although I wonder how effective it is to try and prevent governments from getting their hands on technology that is made available to corporations. What's to stop some consulting company from signing up for an Azure account and selling the same service to the "non-Free" country in the article?
Are there just PR departments behind the scenes leaking things like this following the other bad press received by other tech firms? Anybody have experience with this world? How far out can you predict media coverage I wonder.
My first impression: This is a good thing overall. But then I think, well some other bidder will do it. And what if they do a worse job? Does that lead to detaining the wrong person? Is that better?
The journalist Joseph Menn looks like he has stellar integrity as a long-time writer (despite hiding Beto's involvement with the Cult of the Dead Cow), so it must just be me.
No, I'm saying there was controversy about the fact (?) that he didn't disclose this information even though he knew about it for two years. Isn't that the story? I think there are good reasons he might have felt this was not doing justice to the story; the public overreacts when they hear about a hacking group. I'm not questioning whether it should have come to light or not.
What about their OS, Office Suite, DBs, etc? Do they turn down sales of those on human rights concerns too? Will Microsoft stop taking "telemetry" data with its software?
Would they reject any sales to Saudi Arabia, China, Israel or even the US military if human rights concerns arise?
If this is part of a genuine ethical paradigm shift within the company, then I commend them for it. But if this is just a one-time PR move, then my opinion of Microsoft has dropped considerably.