In an opinion by Judge Ikuta, the court “concludes that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests.” As the court explained, “the facial-recognition technology at issue here can obtain information that is ‘detailed, encyclopedic, and effortlessly compiled,’ which would be almost impossible without such technology.”
Possessing a picture of someone's face is OK, but creating a model that represents that face is not OK if that model can be compared to another picture to categorize it.
Possessing a picture of someone's face is OK, and having a human create a mental model that represents that face is OK as well, even if that mental model can be compared to another picture to categorize it.
The argument seems to center around how easy/hard or expensive/cheap the process is.
I think that's a great place to draw the line. Societal norms were established before we could perform these kinds of actions at economies of scale, and most of the ills that technology has brought to society are unintended consequences, precisely because we are still calibrated for human scales.
If we had somehow ended up inventing cameras after computers, it's conceivable that we might have regulated them.
Road laws were changed when we switched from horses to cars. The same framing as the parent comment might produce the argument: "Cars are exactly the same thing but faster, why change things?"
But this time the machine processes themselves are controlling the way we talk about them.
> without consent ... invades an individual’s private affairs and concrete interests
Consent, privacy, and concrete interests (possibly includes systematically influencing politics) are all central to this debate.
Secondly, the law is not alien to punishing the same outcome differently based on the tool used. Punching someone, knifing someone, shooting someone, or throwing a bomb at someone will carry different punishments - even if the resultant harm to the victim is roughly the same.
If FB trains to recognize the face of some friend of an FB user, but that friend is not on FB, and never previously signed up for FB, then FB would be clearly in violation.
I think FB does this when people tag friends? I'm not sure though?
You upload a pic of a night out. Tag all your friends in it. And one of those friends is not, and never has been, on FB. Well, clearly, that guy never gave FB consent. And FB is doing it all without his knowledge.
>The argument seems to center around how easy/hard or expensive/cheap the process is.
It is about scale of the violation of privacy rights.
Take murder... it's not criminal to think about murder. It's not even criminal to discuss a murder (so long as no "substantial step" is taken). It's not murder if you kill someone but did not intend to; that would be manslaughter. Intending to kill someone and doing it, that is murder, and it is criminal. If one shouts a racial, ethnic, or religious slur while committing murder, then it is both murder and a hate crime. If one organizes the mass killing of groups of people based on race, ethnicity, and/or religion at a large enough scale, it is genocide.
The law is very capable of distinguishing between intent, the acts themselves, and the scale of those acts. It's only these tech companies that wish to muddy the waters by spinning this decision as the court outlawing your right to take photos.
The argument to me centers around the same terrible "but do it on a computer" sentiment that's been brandished to outlaw things that would otherwise be perfectly legal in meatspace. But something about doing normal things electronically is scary, and since it's easy to ban, that's the hammer we have.
Other examples include:
* Trading card game booster packs vs. video game loot boxes
* Taking notes about people who visit your physical store vs. taking notes about people who browse to your website
That said - I do realize I'm in the minority opinion here re: GDPR, etc. But the inconsistency really disturbs me. I don't agree with a ban on facial recognition: who is to tell someone what they can and can't do with a bucket of bits? What about other automated recognition for content moderation? Surely automatically detecting nudity is OK by the law? But what if those recognition models wind up doing some form of emergent facial recognition via unsupervised learning? How could anyone even verify that?
What crosses my personal threshold is not that it is done on a computer, but the scale of it. I would be fine with a small website that does not have a significant presence doing it. Once this website became as large as Facebook or Google, it stops being acceptable (the exact threshold naturally being fuzzy).
I would be similarly bothered if this were somehow done manually by a bunch of government spies dispersed all over cities, using pencils and paper. Of course this is impossible, but it shows that neither computers, nor cyberspace, nor the entity doing it are essential components for it to be bothersome. Computers only play a role insofar as they make it possible to scale this.
Yeah, the idea of companies becoming capable of widespread surveillance because of the size of their data set is also bothersome to me. But because it is so hard to define what that fuzzy line is (between an acceptable amount of data and too much data), it's too arbitrary for me to draw a line in the sand at all.
Which might seem wild, but it's actually pretty consistent with the amount of trust we put in companies right now. Facebook could arguably do some nebulous bad thing with their facial recognition data. AT&T can definitely know who you are calling on any call that goes through their POTS system. But we don't ban diagnostic tools for large telephone providers out of fear that they'd abuse their position, and I see no reason to ban a technology used by large social network providers over the same fear. [0]
It comes down to trust - as soon as one of these companies did abuse their position to do something widely evil, the market consequences would be dire (to say nothing of the reactionary legislation).
[0] Possibly a poor example, considering anti-trust w/r/t Bell growing too large - but that kind of regulatory intervention I would hope could be applied equally to either class of company should they be determined a monopoly.
> It comes down to trust - as soon as one of these companies did abuse their position to do something widely evil, the market consequences would be dire...
HAHAHAHAHA.
No.
Look at Nestle/Coca Cola/Pepsi sucking the earth dry of precious drinking water.
Or the fallout for bankers over the 2008 recession.
Or VW.
Or Equifax making merry with your social security number.
Or Deepwater horizon.
Or Cambridge Analytica.
Or... you get the point.
I'd argue the market has precious few fucks to give about outright evil deeds. There are countless examples of corporations pillaging and raping the earth for short term gains, and the market thanks them for it.
Free market efficiency holds only for a very narrow range of operating parameters, and regulations are very much needed for tackling the big picture.
The bottom line being: given a fuzzy outline of the line in the sand, I'd still rather the government shoot for it. If they fall short by too much, we'll know soon enough because symptoms will persist. If they go too FAR, well, that's perhaps even better, because I'd rather the default stance on new things be conservative skepticism and caution, with more things slowly allowed once we deem them acceptable.
I would argue trust is a big part of it. My counterexample: Apple. Apple is a massive company that could collect massive sums of valuable data. Although they do act like a monopoly in many respects (cough cough, T2), they do not abuse the data they could collect. I think the reason is that they want some level of trust from consumers. They do abuse their position of power, but they don't do many 'creepy' things like Google, MS, Amazon, or Facebook.
I sort of get what you're saying. A lot of formerly-anti-Apple people have started looking at them with new eyes because of their welcome stance on privacy.
But even with the Apple example, aren't they the most valuable company on Earth, despite having anti-consumer policies like removing the 3.5mm jack or making their laptops nigh-on irreparable? Clearly the market shrugs at those 'tradeoffs' in exchange for Apple shifting more product more quickly. This is what I mean by the market not being the perfect device to self-regulate. Perhaps on a VERY long time scale, but not on human time scales.
> Facebook could arguably do some nebulous bad thing with their facial recognition data.
They already did a bad thing with their picture data by applying facial recognition to it! I never consented to this use of my pictures and am explicitly against it. Yet they did it anyway. Hence the lawsuit.
> Taking notes about people who visit your physical store vs. taking notes about people who browse to your website
That's just ludicrous.
We're not talking about a shopkeeper in a physical store taking notes (or even using sinister tech like tracking your cell phone, or facial recognition to track you) while you're in his store.
If you really want to draw an analogy, we're talking about said shopkeeper putting a very sophisticated GPS tracking device on you, which not only reports which shops you visit and what you buy, but also looks over your shoulder to track what you read in the library.
> But the inconsistency really disturbs me
It's not inconsistent to demand from said shopkeeper to stay the fuck away and mind his own business as soon as I leave his shop.
It's more like if you keep choosing to go to stores where the shopkeepers all use the same system of tracking while you're in their stores. Every time you enter a store they hand you a tracking device and you turn it on and run it.
Just don't turn it on. Turn off JavaScript by default. Don't go to those stores. Take personal responsibility. But especially don't call in a cop who's going to brandish a gun and threaten the store owners with violence unless they stop handing you the device to turn on.
All these new technologies (facial recognition, automated photo editing, etc.) are good things when they become widely available to everyone instead of just being the domain of institutions, corporations, and governments. All legal decisions like this do is prevent the spread of those abilities to individuals and make the growing power imbalance worse.
"Trading card game booster packs vs. video game loot boxes"
It's not even close to an equivalency. The contents of booster packs are predetermined before the moment of purchase. Whether you buy them or your buddy buys them, you'll get the same cards. With video game loot boxes, you can modify the contents of the loot box at the moment of purchase depending on who the buyer is. Perhaps it's a popular streamer, so you'll boost his win rate so that people watching the stream go "wow, that was so worth it, I'll buy some boxes too". Perhaps you'll boost the win rate for a whale, only to get him hooked and then put him on a dry period with no good loot at all. The online nature of such transactions offers far more capabilities for abuse.
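To make the distinction concrete, here's a toy sketch in Python. Everything in it is hypothetical and purely illustrative of the mechanics: a sealed pack's contents are fixed before sale, while a loot box's odds are computed server-side at purchase time and can depend on the buyer.

```python
import random

CARD_POOL = ["common", "uncommon", "rare", "mythic"]

# Physical booster pack: contents were fixed at print time. Whoever buys
# this particular sealed pack gets exactly these cards.
booster_pack = random.Random(42).choices(CARD_POOL, weights=[10, 5, 2, 1], k=5)

# Online loot box: contents are drawn server-side at purchase time, so the
# odds can silently depend on who the buyer is (hypothetical manipulation).
def open_lootbox(buyer: dict) -> list:
    weights = [10, 5, 2, 1]                # the "published" odds
    if buyer.get("is_streamer"):           # juice drops for the audience
        weights = [2, 3, 5, 10]
    elif buyer.get("is_hooked_whale"):     # dry spell to drive more buying
        weights = [20, 5, 1, 0.1]
    return random.choices(CARD_POOL, weights=weights, k=5)

print(booster_pack)                        # the same for any buyer
print(open_lootbox({"is_streamer": True})) # different per buyer
```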
As the judge said "[it] would be almost impossible without such technology". So it doesn't have to do with "on a computer", but companies like Facebook violating privacy in ways that would have been impractical without computers, so it hasn't been an issue before.
Imagine entering an electronics store in a mall and looking at a fridge. You leave the store, but an employee of the store had spotted you and what you looked at. They follow you out the door and into another store. There they whisper to an employee of that store "psst, give me $5 and I'll tell you what this guy's after". The other store's employee then proceeds to offer you a discount on a fridge.
If some store had been doing that in meatspace people would have been up in arms, but for some reason because it's a "re-targeting ad" on a computer people have mostly been blind to what's going on.
Facial recognition is the same. If you were doing it on the scale Facebook is doing it but in meatspace it would be illegal.
> The argument to me centers around the same terrible "but do it on a computer" sentiment that's been brandished to outlaw things that would otherwise be perfectly legal in meatspace.
There's no "to me"; the argument is there in the clear and you can respond to it in terms of its content instead of the unknowns that go on in your head.
> I don't agree with a ban on facial recognition: who is to tell someone what they can and can't do with a bucket of bits?
Classic reductionist argument. They're a "bucket of bits" insofar as your physical body is a "bucket of atoms".
Let's do a short thought experiment... imagine Superman just came to Earth, has perfect memory, and the ability to trace your (and everyone else's) every footstep with his superhuman hearing and vision. Would that require new laws to deal with him, or would you simply carry on as is because "it's Superman! He won't do evil..."?
It's not "but do it on a computer"; rather it's "but when it becomes unprecedented"...
We are facing a completely new type of society where these types of entities are very likely to exist, we need to figure out how to deal with this.
You're correct that the technology is unprecedented.
But, there's a lot of unprecedented technology invented every day. And for the most part we do just carry on as usual. That's part of what made the original internet such a beautiful place - we didn't try to waterlog it with proactive legislation because of potential bad things that could happen. Instead, that freedom created the wonderful ecosystem we have today.
The (US) law eventually reacted, but for the most part did so in a measured and reasonable fashion to form things like DMCA and CFAA.
Conversely, it's my opinion that proactive lawmaking leads to disastrous and overly broad, unenforceable, and burdensome laws like SOPA and CALEA. And I see this kind of ban on facial recognition to be firmly in the proactive category.
I guess the main difference between now and then is that we are starting to have an idea of what is going to be possible, and we already see "dark uses" of technology happening (e.g., surveillance in China), whereas before we were clueless about those things.
I think that rejecting proactive legislation per se is a dangerous attitude. For example, see climate change. Proactive legislation could have made us avoid all of the discussions we are now working through during crunch time...
If we have reasonable evidence that there is a high likelihood of us creating worlds that we don’t want to live in, we should take reasonable action proactively to avoid those scenarios.
Thus, I agree with you that not all unprecedented technologies need to be proactively legislated but as soon as there is reasonable evidence for possible negative consequences we should start reasonable processes to avoid those consequences. There is no black or white situation here, we need to have evidence based discussions and work our way through this collectively.
The technology is not what is being regulated here. What's being regulated is the use of said technology to do things that were hitherto intractable, and therefore not explicitly forbidden.
In other words, for violating the spirit of the law, but on a computer.
Depends what the ever-changing requirements for "upfront" are - it's fairly clearly(?)[0] called out in Facebook's data policy that they use facial recognition and how to change your settings.
What about a picture I took myself of a public place in Chicago with people's faces in it? Those are certainly my bits.
Every example you provide allows for much more minute, precise, and invisible manipulation of the affected individual than the meatspace equivalent, for starters.
Because it's incredibly faster to process, use, replicate, distribute and earn using that data. That is what makes it scary.
Also loot boxes - imagine if Wizards of the Coast walked around giving away MTG booster packs on the street for free but told you "pay 10 cents to open it and you could get this cool stuff". That sets up a completely different mental model in your head.
> Also loot boxes - imagine if Wizards of the Coast walked around giving away MTG booster packs on the street for free but told you "pay 10 cents to open it and you could get this cool stuff". That sets up a completely different mental model in your head.
Would that be illegal though? I think my point stands - you could do that in real life and no one is really disturbed by it to campaign for it to be outlawed. You can even see it happen if you ever ride the metro, panhandlers will give you a book/trinkets/snacks and ask for a "donation" for you to keep it (most people refuse and give back the token).
Pretty different things - in the panhandler case, you don't want it, it's not tied to anything you do, their target group is everyone.
This is more like standing in front of a kindergarten with an ice cream truck before lunch on a hot summer day and giving away free ice cream, except the kids can't open them, just look at them, unless they pay. It's abusing human psychology without you asking for it: you just want to play the game, and they are pushing your brain's buttons to get you to buy stuff. Maybe it won't work on you, but imagine how it's "brainwashing" the younger generations.
It seems there's a lot of parallels between this and advertising in general. I understand the sentiment and feel the same way about advertising's negative effects, and how predatory a lot of marketing is - especially marketing targeted toward children.
But it's kind of "exit through the gift shop," right? All you wanted to do was ride the roller coaster, but now you've got all of these trinkets and refreshing beverages that you can touch and pick up, but you just can't have without paying. And to me that just seems part and parcel of modern capitalism, so I'm not really in favor of outlawing a particular flavor of it.
As much as I don't like single big entities having this power, this seems like a ridiculous stance.
So I can have two pictures and look at them by hand, and use my own brain to categorize them. But the minute I use a computer to do so I have committed a crime.
It's this kind of stance that is going to make machine learning algorithms that casually identify people for convenience's sake illegal. What the people in the court system have decided is that they are frightened of being found out as hypocrites: as people who take bribes, and people who do bad things in public.
You can always do bad things in private. No one can stop you from doing such things, especially when they come short of murder.
There is essentially no difference between this technology and the technology used for license plate recognition, but license plate recognition is totally legal.
In many places, if you were pulled over by a cop, you would pay a hefty fine, and the automated license plate readers will simply fine you a bit less or let you go to court. What's wrong with recognizing people that are committing crimes and sending them a notice, "We were about 1% sure this was you doing something wrong, so here's a 1% fine." Everyone agrees such systems will be initially imperfect, but everyone is also totally agreed that present law enforcement is imperfect. I fail to understand the problem.
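Spelled out, that proposal is just an expected-value penalty. A one-line sketch (the $250 base fine is an arbitrary example):

```python
# Expected-value fine: scale the penalty by the estimated probability
# that the match is correct. The $250 base fine is a made-up example.
def probabilistic_fine(full_fine: float, p_match: float) -> float:
    return full_fine * p_match

print(probabilistic_fine(250.0, 0.01))  # 2.5  -> $2.50 at 1% confidence
print(probabilistic_fine(250.0, 0.20))  # 50.0 -> $50 if 1 of 5 suspects
```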
Unless of course, the problem is that everything is really run by criminals, and we're getting way too close to knowing the truth. That could be a problem.
Edit: I appreciate honest debate more than downvotes. If I have a shitty idea, please tell me why. Unless of course, you are afraid of honest discussion. Cowardice is always acceptable here.
"What's wrong with recognizing people that are committing crimes and sending them a notice, "We were about 1% sure this was you doing something wrong, so here's a 1% fine." Everyone agrees such systems will be initially imperfect, but everyone is also totally agreed that present law enforcement is imperfect. I fail to understand the problem."
Well then why not skip the imperfect recognition part entirely and simply fine the whole population a fraction of the fine each? We are 100% sure that the criminal is within the population. So let's just fine everybody 1/$population_size of the fine. The adequate fine will be paid and the criminal will be hit, and hey, the system is not perfect, but what is?
How often are you convinced that the criminal justice system is 100% certain it has the right person? How many times have they gotten it totally wrong? You have erroneous shit happening all the time. Fingerprints are sometimes identical. People don't want to admit it because it erodes the idea of justice. But when big screw-ups are as common as you've seen in the news, how much do you really, really know about who is committing crimes?
If someone picked me up based on fingerprints, I would much rather them say "We found prints matching yours at this crime scene. Statistically, there are 5 other people with prints like that in the area. We have circumstantial evidence that points to you as well. Since we aren't certain, but we have your prints we'll fine you 1/5th of the full amount for the regular violation." Currently, we make mistakes that lock people up for years based upon uncertain accusations - it's always all or nothing. This puts way too much pressure on investigators to get it right and in the case of murders, put someone away no matter what.
I'll tell you what - if they're going to make the mistake anyway, I'd rather have 1/5th to 1/20th of the time to serve.
Edit: This is the 21st century. We're also not restricted to simply imprisonment. Other factors can be affected. It would be interesting if labels could be placed upon those who are suspect, like "suspicious". This could prevent certain purchases and types of travel which would make the kinds of crimes they are suspected of committing more difficult.
> Edit: This is the 21st century. We're also not restricted to simply imprisonment. Other factors can be affected. It would be interesting if labels could be placed upon those who are suspect, like "suspicious". This could prevent certain purchases and types of travel which would make the kinds of crimes they are suspected of committing more difficult.
Yep. It's also dystopian maybe to disallow suspicious characters around playgrounds where they could kidnap and molest kids, but we do that already.
And gun violence in America has already shown what happens when we don't take common sense measures such as this seriously. If you knew someone had a 50% chance of shooting up a Walmart, would you casually say "Well, we don't know that for sure," or would you make a list of all of those suspicious people and track them?
We already have what is a dystopian credit scoring system, and I don't see people in a panic or protesting in the streets, except for the occasional person writing that their data is used unfairly.
I'm in agreement really. If people don't want their information used for a credit score, they should be able to refuse to have their info used for it, but of course that comes at the cost of not having good credit.
Social credit scores could function quite similarly.
This case is about whether they can be held accountable; it has already been established that they knowingly violated the law. The law says that the aggrieved party can sue for $1000 to $5000 per violation, and Facebook violated the law millions of times.
Legal industry reporting calls the underlying matter a "$30 billion class action lawsuit," so it'll be interesting to see how that matter proceeds now that Facebook has lost its first dispute against standing.
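For scale, the statute's damages run $1,000 per negligent violation to $5,000 per reckless or intentional one, so the headline figure is simple multiplication. A back-of-the-envelope sketch, where the class size is my assumption, just what the $30 billion figure implies:

```python
# Back-of-the-envelope BIPA exposure. Statutory damages per violation:
# $1,000 (negligent) to $5,000 (reckless/intentional). The class size is
# a hypothetical figure implied by the "$30 billion" headline.
class_members = 6_000_000          # assumed: one violation per member
low, high = 1_000 * class_members, 5_000 * class_members
print(f"${low:,} to ${high:,}")    # $6,000,000,000 to $30,000,000,000
```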
As opposed to bottom-feeding tech companies? If not through class action, how else would you suggest aggrieved users seek redress (the right to do so having been enumerated by statute)?
American Capitalists: We don't need a new law, if a company commits a tort you can sue them for damages ... but don't use bottom-feeding lawyers who are constantly targeting reputable businesses.
Lots of people are already, hence where this sort of decision comes from. If you don't think enough folk are standing up, then why not do your own advocacy work? It's surprisingly fun and especially rewarding.
That’s slightly better, but still not enough. 1 million violations at $5000 a pop would be a pitiful $5 billion. IMO, millions of privacy violations are worth more than that.
"pitiful $5 billion" is such a dumb take on these kinds of rulings.
The ruling is specifically about the facial recognition part of FB. They could end up paying $5 billion just for the facial recognition stuff.
Imagine if you put the Konami Code on the Google homepage, and got sued and lost for it. Then got fined $1 billion for it. You'd feel like you made a pretty bad decision there.
Obviously facial recognition is closer to the core of what FB is doing in general but it's still a lot of money for an incidental part of their system! They can pay it, but that's $5 billion they can no longer use to buy like... 10 startups or whatever.
Apples to oranges. Google losing $1 billion over putting the Konami Code on their homepage hurts because they won't make $1 billion off of doing that.
Facebook is likely making a lot of money off of, as you claim, something that is close to the core of what Facebook is doing. It makes $5 billion closer to a small cost of doing business.
I doubt that the amount of new information, and thus potential monetary value, they gather from face recognition technology is worth all that much. It is relatively expensive (in compute power/infrastructure) and will not uncover previously hidden/unknown links in the social graph, as the people in such photographs probably already were friends (within a few degrees).
As a means to increase user engagement (who then will volunteer more information by posting, liking and clicking around, especially on ads), I'd guess the same, that this also doesn't add much.
What's more, it's not just a one time fine. It's an order to stop certain actions. If they do not comply, then they will get fined again and again.
Google Photos is tracking my face, and the faces of anyone else in the photos I have taken on my phone, and I cannot turn this off. They enable a "search photos by people" feature. I find this creepy and ominous. I never asked for this.
I would assume they have a clause about this in their user agreement, specifically because of this. Expect all big corporations to do the same soon and stop trusting them with photos because I don't think Facebook will turn the tide. This is likely going to be a one-off with other tech giants quickly patching the loopholes to make sure they can mine your data without any punishments. Not real ones, at least.
You can always switch to NextCloud, and it comes with an added bonus of no longer needing Dropbox, Google Drive, etc. for syncing files. HOWEVER, NextCloud is not "free"; you'd either have to pay a provider to host your NextCloud instance for you, or self-host (both options are cheap by the way, but not free like Google's option).
Well, can we really call Google's option free when it feeds off of our own data? I'd rather spend a few pounds a month on storage that isn't looking for ways to monetize me than save that cost and end up losing my privacy. What's more of a hassle is the setup, since quite a few people will always choose Dropbox and Drive over taking the time to set up NextCloud.
Why just users? Surely people who haven't signed up to Facebook, but have their photos uploaded by someone else and subsequently analysed, should be included?
How would this apply in a situation where a person uploads a picture of a minor they are not the legal guardian of? Could that person sue the uploader of the photo for the damages of having to go after Facebook? If situations like that are feasible under this, then I would expect K-12 school districts to implement swift measures banning all teachers from taking pictures of students, so as to avoid putting themselves in the middle. I find it egregious that public schools don't have better policies banning social media uploads of minors, since it's such a gray area right now.
When my kid was in elementary school we were expected to sign a “students in media” waiver that basically let them use the pictures they took of the children however they wanted. I don’t know if it was required but it was heavily pushed and included with a packet of about a thousand things to sign, most of which were required.
I never signed it. I just included the unsigned piece of paper (among others I disagreed with) in amongst the pile of papers I turned in to them. Nothing ever came of not signing but I suspect I could sue them now.
It's cultural, too - there are many super popular apps for obscuring your kid's face in Japan (or amongst Japanese baseball players in the US, if you want to find some easy examples).
Probably they're Japanese players signed by US teams who bring their families over. They're not immigrants and will likely go home, and they're culturally Japanese, so I expect they practice Japanese norms on social media, like obscuring their children's faces when posting.
One of the smaller tasks I had when I was a school IT admin was removing the faces of students who hadn't had the waiver signed from photos the school wanted to upload to the site or social media.
My wife's school had a social media policy 10 years ago. She wasn't allowed to upload any picture with a child in it unless she had explicit permission from every child's parent to upload that particular photo. Permission had to be requested for every photo.
She was allowed to have a private photo site that all of the parents in the class had access to, and upload photos there. The parents were not supposed to re-share those, but some did.
Schools are finding that this is a problem, but haven't had the ability to manage it. So we have built a product to address this very issue (https://www.schoolbench.com.au).
SchoolBench allows you to match profiles with media consent, so you can work out what photos are able to be published or not.
> Schools are finding that this is a problem, but haven't had the ability to manage it. So we have built a product to address this very issue (https://www.schoolbench.com.au).
> SchoolBench allows you to match profiles with media consent, so you can work out what photos are able to be published or not.
How does Schoolbench get a picture of my kid if I did not consent for the school to use pictures of my kid?
We do use facial recognition to detect the faces in the photo, and then integrate with the school's student management system to work out what permissions or media consent has been granted for each student.
From there you can identify whether the photo overall is publishable, and if not, what students aren't. You can then crop/edit the photo to remove students from then on.
All of this happens in an on-premise VM, rather than calling out to a cloud service. Obviously you can run up an instance in AWS, etc.. if you want to cloud host, but that is the school's volition.
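For the curious, here is a minimal sketch of what a consent-check pipeline like that might look like. This is purely illustrative, not SchoolBench's actual code; the face detector and student-management-system lookups are stubbed out with in-memory stand-ins.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Face:
    bbox: tuple                 # (x, y, w, h) region to crop/blur if needed
    student_id: Optional[str]   # None if no enrolled student matched

# Stub detector/matcher: pretend two enrolled students appear in the photo.
def detect_and_match(image) -> list:
    return [Face((10, 20, 64, 64), "stu-001"),
            Face((90, 20, 64, 64), "stu-002")]

# Stub consent flags that would come from the student management system.
MEDIA_CONSENT = {"stu-001": True, "stu-002": False}

def review_photo(image):
    """Return (publishable, faces lacking consent) for one photo."""
    lacking = [f for f in detect_and_match(image)
               if f.student_id and not MEDIA_CONSENT.get(f.student_id, False)]
    return len(lacking) == 0, lacking

ok, to_redact = review_photo(object())
print(ok, [f.student_id for f in to_redact])  # False ['stu-002']
```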
> We do use facial recognition to detect the faces in the photo, and then integrate with the school's student management system to work out what permissions or media consent has been granted for each student.
> From there you can identify whether the photo overall is publishable, and if not, what students aren't. You can then crop/edit the photo to remove students from then on.
> All of this happens in an on-premise VM, rather than calling out to a cloud service. Obviously you can run up an instance in AWS, etc.. if you want to cloud host, but that is the school's volition.
But if the parents don't sign the waiver allowing their kids photo to be used, how do you process the photo to determine if any kid in it has not had the waiver signed? Isn't the act of processing the image a violation of the agreement (which is effective since the waiver was not signed)?
OK, well, at risk of being cease-and-desisted again by FB: in addition to doing facial recognition, they take pictures of and track every vehicle that drives by their HQ and report all those license plates back to Menlo Park.
It's an invasion of the privacy of all cars and drivers in the vicinity and should be illegal.
One or more US federal agencies place innocent people who happen to work in a sensitive industry on one or more types of watch list, then scrape associated metadata and use facial recognition to detect associations with suspicious people/criminals/persons of interest. I know this for a fact because one of my friends at a big-name malware forensics firm got a call from his manager: the government had noticed he was tagged in a picture at a conference after-party with someone who was on a "baddies" watchlist.
This is so true.
The way they are acting, I feel like the people inclined toward the "ethical" side of the spectrum are leaving Facebook, and the culture is going to shift more and more toward the unethical.
Well, it depends on what you define as the beginning period.
I'd pinpoint the shift from 'ethical-ish' to 'unethical-ish' at about the time they changed the feed from 'friends' to 'relevant for you', which is basically 'what our bidders want you to see'.
Here comes the lawsuit settlement: $147 million for the lawyers, $5 million for the state, and $3 million total for the users who spend 10 hours filling out forms.
In other words, this is ineffective. I hope the EU cripples them; not even $5B FTC fines scare them.
It appears limited according to the article because of an existing statute found in Illinois law.
Does anyone know if other states have similar laws? I wonder what type of momentum would have to manifest for other state legislatures to get a similar bill into committee for debate.
A related effort is restricting what state and local gov can do with facial recognition. The cities of San Francisco, CA and Somerville, MA have passed ordinances, and NYC is considering it (as if the NYPD cares) but city bans obviously have major limitations.
It looks to me like momentum on facial recognition is building now. Call your state reps, tell your friends, find folks who feel the same way and make some noise.
I find this sort of litigiousness to be bullshit because it incentivizes signing up for Facebook. I can't be party to some lawsuit against Facebook if I don't have an account ("users").
I deleted my Facebook a few years ago, so if some class-action suit comes out for people who were users in 2018+, where's my payout? How is such a system fair to people who had the sense to either delete before whatever time horizon is used in a case or people who never created an account? None of these people who could win the lottery in court suffered a real loss.
I don't. I don't think anyone is entitled to a payout, as Facebook users did not pay Facebook causing some sort of breach of contract. They are owed nothing IMO.
Yet the legal system gives people an incentive here to sign up for free services so they can one day reap the rewards when Free Service X slips up and breaks State Law Y. Admittedly, the rewards will be small. But nonetheless, it is an incentive and aggregated across society it isn't nothing.
This sort of litigious behavior just slides us further and further into a culture of dishonesty and makes a mockery of the justice system.
Also, let me be more specific given there's another thread on here about this. What makes anyone think they're entitled to "$5,000" for signing up for a free service and uploading their photos to it? This is absurd. Please explain specifically how running a facial recognition algorithm on person's photos is equivalent to "$5,000" worth of damages. Where did that number come from? Why not $1 or $1,000,000?