"The Myanmar issues have, I think, gotten a lot of focus inside the company. I remember, one Saturday morning, I got a phone call and we detected that people were trying to spread sensational messages through — it was Facebook Messenger in this case — to each side of the conflict, basically telling the Muslims, “Hey, there’s about to be an uprising of the Buddhists, so make sure that you are armed and go to this place.” And then the same thing on the other side."
Ok. So Facebook monitors Messenger (not just your public "wall" of Facebook) which has the appearance of being private person to person communication and detects that there are encouragements of violence in Myanmar (written I assume in some language other than English).
That is some impressive operation they have going on.
No wonder the Chinese did not let them inside their country.
The only thing I'm really amazed by is how the European countries still allow them to operate, they should be fined until they exit the market and their operation made illegal. Probably they provide sufficiently useful information to the security services that governments don't do anything. In effect they are a privatised Stasi.
Nothing that can't be fixed by a bit of psy-ops. Pretty much none of the traditional media companies like Facebook, and in most of Europe at least one TV station per country is state-controlled. So all that needs to happen is ~6-12 months of negative press in the tabloids and constant negative coverage on national television. I think that could easily be arranged.
To paint a picture here are some headlines by BILD (largest German tabloid):
"ZUCKERBERG SCANDAL: So verdient Facebook mit Ihren Daten Geld", (ZUCKERBERG SCANDAL: This is how Facebook earns money with your data), article based on a report by German national television (ZDF)
"„SCHADEN FÜR DIE DEMOKRATIE: 60 Prozent der Deutschen fürchten Facebook" (DAMAGE TO DEMOCRACY: 60 percent of Germans fear Facebook)
Not that I feel strongly either way, but deliberately advocating for a strong, centralized, negative propaganda campaign against a company you don't like doesn't seem ethical to me.
Seems totally ethical to me IFF you don't make stuff up. In fact, if you can drive a strong, negative campaign solely with actual facts, I find it imperative to do so.
Then again, you did say "propaganda" campaign, so I guess there was your assumption of dishonesty.
I think you're well meaning here, but I have two points in reaction:
1) "...[E]thical to me IFF you don't make stuff up." I think you can have propaganda that uses only facts, but in a disingenuous manner. You just don't mention _all_ the facts, and/or put a lens on the wrong perspective. Is that "dishonest"? I'm not sure; maybe a misinterpretation?
As an example, you can see this all the time with the arguments against global warming: "some studies show that temperatures are not increasing." If you look hard enough, you'll find data points and/or papers that argue against global warming, but they're such a small minority of studies that it's not a fair perspective or lens on the argument.
2) "I find it imperative to do so." I am not a Facebook fan-person by any means, but I'm not sure I find it morally _imperative_ that folks rally against what they are doing. I have a hard time believing that the thousands of employees who work there are all evil people; rather, the majority are probably well-intentioned and may just need a correction of direction. Can I prove this? No, but if they were, say, selling nukes to ISIS or something at that level, then I might agree with you a bit more.
As I said: "propaganda" brings a whole set of assumptions to the table. My point was that if you can sustain a campaign (not propaganda campaign, i.e. just a real strong case) then you should, because that means there is a genuine problem.
Again, not trying to go to bat for Facebook by any means, and I think you're well meaning with your point, but I worry this is a dangerous road to go down.
1) While they're not perfect, I'm not sure I believe they're doing more harm than good. As an anecdotal example, my family loves Facebook, because I live pretty far from most of them, and they like the ability to know that I'm healthy + happy, and/or if I'm travelling that I got back safely. They're generally older, so not that tech-savvy, and I do give Facebook Design a lot of credit for making some really intuitive choices in a lot of areas that make it accessible to older / non-geeky audiences. Could I just text / call them? Sure, but I'm a busy person like most people here, so it's easy to just throw up a picture or post to FB and make their days a bit better.
2) A lot of the arguments that they're doing "harm" are pretty pseudo-scientific, if you look at the claims/articles closely. Many rely on "correlation over causation" views, anecdotes, and the like. It's only recently that some true studies have tried to prove this [1], but even then arguing for putting FB on the level of tobacco and guns is a strong claim to make off of minimal peer-reviewed evidence. We might as well toss Google + Amazon + Twitter into this same boat.
> Sure, but I'm a busy person like most people here, so it's easy to just throw up a picture or post to FB and make their days a bit better.
It's somewhat ironic you use this as an argument for good. I'm much like yourself in this scenario, so I'm not trying to take a moral high ground, but I can't say I truly believe this is enriching the relationships of people on either side.
Actually I would be more than happy. I am ready to make the move to a service where users are more in control, but it's going to take a massive kick up the arse to get my friends to move. They whined about having to use Telegram rather than WhatsApp when I had an Ubuntu phone.
>Ok. So Facebook monitors Messenger (not just your public "wall" of Facebook) which has the appearance of being private person to person communication and detects that there are encouragements of violence in Myanmar (written I assume in some language other than English).
Or, to take another reading of it - Facebook actively monitors nothing. They get a call that people are abusing the service, and they open the hatch and proceed to take a look.
Skype does it, too, by the way, in case that's also a surprise. Probably most of the ones that don't use end-to-end encryption by default do. And now that Brian Acton has left WhatsApp, you should be very careful with what you say in WhatsApp's end-to-end encrypted chats, too.
> The fact that it is owned by Facebook is enough to make me stay away.
That’s fair, but I think it’s important to be very explicit if that’s the criticism.
It’s true that Facebook can see your contacts list if you share it with WhatsApp but no one ever claimed that they couldn’t. This isn’t a security vulnerability as such.
The second article you linked is from before WhatsApp’s move to the Signal protocol.
Yes, it’s owned by Facebook and yes, it’s closed source, but WhatsApp has brought end-to-end encryption to over a billion people. For the vast majority of use cases it’s a huge security improvement.
I was just wondering why Brian Acton's departure would change anything significantly.
Besides that, I would never touch WhatsApp. The fact that they hand your phone number over to Facebook is one of many reasons to stay far away from this messenger.
I’m just so done with Facebook. I know this is said time and time again here on HN, but Facebook (and Google) provide a lot of stress to me for little value in return. The majority of the stress comes from always having to be “on” and watching for some ambiguously worded pop-up, or constantly changing privacy settings, or tracking pixels around the net. I spend more energy trying to run away from FB and Google than I do using their services. It’s constant and fatiguing.
Abstinence does little to help the masses who are still using it. We need good regulation and laws to make it decent, not just abandon our friends and family on it who may not be as aware of the perils.
I agree with you on that. A service such as social media (the connecting part, not the advertising part) could easily be seen as a necessary service being that we are all social creatures by design. But at this point in time it’s a massive Wild West where the laws haven’t been able to keep up with the advances in tech. So now we are stuck in this situation where every step you take is one in which you have to be very careful if you don’t want to have your personal habits exploited against you. The truth is I have been doing my best to educate the people close to me but up until very recently, with the Facebook scandals, it all fell on deaf ears. And maybe that’s another reason why I do find it so unnerving, as people are aware of these things going on but either don’t care enough and in some cases fully support their extremely private data being used on them for advertising. It’s also very conflicting because in this day and age of big data, real change could be made but all we seem to hear and experience is the opposite of positive change in this era. (I’m not normally this much of a negative guy but this topic specifically makes me feel very powerless)
The European laws are actually having an effect. Provided we can get the old hats in Congress to understand the right to privacy and the right to be forgotten, we may be able to curb the tide of mass surveillance that's currently being exploited to make more money and sell more goods.
I've cut down, and only give Facebook a cursory check once a week. It's very nice to not constantly have it on my mind, and I don't seem to be missing much.
Yes, absolutely. The urge to juuuuust give Facebook a quick check is there, it's muscle memory almost, and it takes a conscious effort to break it.
I uninstalled the FB app ages ago, and I've removed all shortcuts to FB from the start pages in my browsers. That helps a bit. I've also set Cookie Autodelete to not allow any FB cookies, so I have to log in every time, presenting a further small hurdle I have to cross. It's a very small thing, but it makes me more aware of what I'm doing, so I can stop myself.
I want a journalist to ask him what he thinks about the age of post-privacy now, especially regarding his own privacy. But probably they are all scared to cross him - Facebook is too powerful.
Here's what he thinks of privacy: he had his own private paramilitary / police force de facto arrest and detain a photographer on a public California road. The photographer was taken to Facebook HQ for interrogation:
> Here's what he thinks of privacy: he had his own private paramilitary / police force de facto arrest and detain a photographer on a public California road. The photographer was taken to Facebook HQ for interrogation:
Why inflate what actually happened that much? I checked that statement, and even the Daily Mail article just mentions that it was a security guard who told the photographer to go to HQ, where he was met by two executives. No 'paramilitary force'...
I like neither the company nor the person, but please don't spread false information.
This was a good interview to listen to and Ezra Klein asked some good questions. Unfortunately Zuckerberg's answers weren't satisfactory.
Responding to Tim Cook:
>You know, I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth. The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.
>That doesn’t mean that we’re not primarily focused on serving people. I think probably to the dissatisfaction of our sales team here, I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.
>But if you want to build a service which is not just serving rich people, then you need to have something that people can afford. I thought Jeff Bezos had an excellent saying on this in one of his Kindle launches a number of years back. He said, “There are companies that work hard to charge you more, and there are companies that work hard to charge you less.” And at Facebook, we are squarely in the camp of the companies that work hard to charge you less and provide a free service that everyone can use.
>I don’t think at all that that means that we don’t care about people. To the contrary, I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you. Because that sounds ridiculous to me.
The fact is advertising business models, like Facebook, have inherent conflicts that Apple doesn't have. Paying directly for a product that you're going to use creates a beautiful alignment of interests. Even Microsoft wasn't as beautifully aligned as Apple was/is because Windows was sold to CIOs and IT departments and not the person who was going to actually use it.
"Apple is for rich people" would have been excellent jujitsu if not for that leaked memo, at least rhetorically. Unfortunately that memo laid bare for everyone to see exactly what Facebook as an organization and a culture prioritizes, and reveals that all of Facebook's public statements made after privacy scandals over the years, as well as Zuckerberg's protestations that he cares about the user over "connecting the world" (a euphemism for increasing engagement and thus ad growth), were complete bullshit. Ezra refers to Tristan Harris' comment that Zuckerberg couldn't do anything that decreased engagement by 50%, which is completely right and zeros right in on that inherent conflict in ad business models.
On the ethnic cleansing in Myanmar and Facebook's role (I'm going to quote Ezra because Mark's answer is garbage and a complete dodge):
>One of the scary stories I’ve read about Facebook over the past year is that it had become a real source of anti-Rohingya propaganda in Myanmar, and thus become part of an ethnic cleansing. Phil Robertson, who’s a deputy director of Human Rights Watch in Asia, made the point that Facebook is dominant for news information in Myanmar but Myanmar is not an incredibly important market for Facebook. It doesn’t get the attention we give things that go wrong in America. I doubt you have a proportionate amount of staff in Myanmar to what you have in America. And he said the result is you end up being like “an absentee landlord” in Southeast Asia.
>Is Facebook too big to manage its global scale in some of these other countries, the ones we don’t always talk about in this conversation, effectively?
This gets to the heart of the matter. Facebook (and Google with YouTube) are just too big for them to even grasp what's going on on their platforms. "Absentee landlord" is such an apt way to put it. These companies sneeze and the ripples are massively world changing, from election meddling to fucking ethnic cleansing.
The only real solution to this is regulation on a global scale. Antitrust, privacy regulation, Germany's hate speech law applying to social networks (this won't happen in the US for obvious reasons); all of it has to be on the table.
Facebook reminds me of the story The Ones Who Walk Away From Omelas - the story of the utopian city whose prosperity depends on the perpetual misery of a single child.
At its heart, Facebook can't be honest with what it is. Mark Zuckerberg can't admit to himself that what's good for Facebook is bad for society. That's why he says a lot about "community" and "connection" but never talks about advertising or monetization. Look at his updates - they never talk about the billions of dollars they make (money talk - that's taboo!)
Facebook doesn't make money via advertising. Advertising is a billboard, or a TV commercial. Facebook makes money by selling your private data (oh you're showing signs of depression - try some prozac!). Facebook is a surveillance operation in a way that traditional advertising isn't.
I have a friend who worked on the newsfeed team at Facebook. Nice guy. Smart guy. But he's unable to consider the broader ethical implications of what he's doing. In the Facebook paradigm more engagement = good.
Facebook has contempt for the general public, and for that reason deserves contempt from the general public.
Facebook for the most part doesn't sell your data. They sell advertising. They tell you, "If you want to put a billboard for Prozac in front of depressed people, we'll do that for you." But they won't tell you who's depressed and let you do it yourself.
There have been exceptions, but they are broadly speaking either from years ago, or were mistakes in the first place. Facebook is generally uninterested in giving away your information -- that is, after all, a big chunk of what makes them valuable. If everyone had the same information, they wouldn't be as able to sell ads.
I think the difficulty in thinking about Facebook is that it sits in this strange middle ground, between targeted advertising (as it has traditionally been thought of) and direct selling of data.
If you think about it as a spectrum, traditional advertising sits at one end: it lets you do some crude targeting (if I advertise on America's Next Top Model, I'll get a different audience than if I advertise in the Wall Street Journal), with audiences opting in and out based on individual choice (I don't have to watch America's Next Top Model, even if I have all the characteristics of an ideal viewer). No data is sold here.
At the other end are people who sell leads - usually to B2B customers (here are the names and contact info for 500 IT Directors). Data is explicitly being sold here.
Facebook sits somewhere in the middle which is why we don't quite know how to talk about it. We can target things like traditional advertising, but we are now also able to get explicit information on our targets, in a way that hasn't really been done before.
What is the difference between an ad and abusing somebody who is in a weak position? There is a difference, and targeted ads are dancing on that line. If you don't see this, you should read about how people get recruited into cults. They don't advertise; they target people who can easily be taken advantage of based on their behavior or emotions. I'm not saying it's wrong, but there should be a limit on how and what you can target with ads, or at least some transparency about it: tell users "We have classified you as depressed" and see if they're happy about it.
I think the people selling anti-depressants would see themselves as helping the person in a weak position, since the medicine could potentially relieve them of some or all symptoms. The line of "abuse" is very hard to draw when someone is willingly buying and using a product designed to have value.
> then there are a lot of people who can’t afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.
Hmm, but ultimately the money still comes from the target audience anyway, right? The advertiser only pays to reach a given audience if that audience is going to buy enough product to pay back the cost of advertising. Maybe you could make the argument that the really poor Facebook users are essentially drafting off of imperfectly targeted advertising?
I think the real reason for the advertising model is that Facebook is more valuable the more people use it. Suppose it started charging money: maybe only a quarter of the users actually pay, while the more casual users just use the fee as an excuse to finally stop using Facebook. But now, with such an exodus, the paying people don't want to pay anymore, and then you have a death spiral.
>The fact is advertising business models, like Facebook, have inherent conflicts that Apple doesn't have. Paying directly for a product that you're going to use creates a beautiful alignment of interests.
Don't forget FB as a business tool. There it not only aligns interests, it gives back more than you put in; many businesses depend on it and the conversion rates it enables.
> There it not only aligns interests, it gives back more than you put in…
In my experience that's not a popular sentiment. FB has absolutely decimated the organic reach of posts by businesses, creators, and other entities whose customers/fans have followed them. Now you have to pay Facebook to reach those customers/fans who followed you in order to see your posts.
Facebook manipulates the feed to show only those posts which guarantee maximum user engagement. When a new post is made, the post is initially shown only to a small subset of fans. It will be shown to more people only when they engage with your post or if you cough up money to promote the post.
>Paying directly for a product that you're going to use creates a beautiful alignment of interests. Even Microsoft wasn't as beautifully aligned as Apple was/is because Windows was sold to CIOs and IT departments and not the person who was going to actually use it.
Depends on what outcome you want. When you buy Apple/MS products, your money simply goes into some executive's pocket or into a pile of cash, promoting an even greater concentration of wealth. While that is in the interest of Apple, it's not really in my interest as a consumer. I want as little of my money as possible to go into a pile of cash, and the maximum possible toward the actual cost of the good.
To the people who think Facebook is going to go away: you are in for a rude awakening. The fossil fuel industry has been poisoning people and doing dirty tricks for decades; have they gone away? Ultimately they were required by the nation. As the fossil fuel industry loses strength over the coming decades, it will be replaced by the tech giants. The tech giants will get regulated, but they will never go away. The country needs them. They generate too much GDP.
>That doesn’t mean that we’re not primarily focused on serving people. I think probably to the dissatisfaction of our sales team here, I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.
When you use the word "community" to describe a user base of 2 billion people you've stretched the word beyond all sense or recognition.
"The second category is state actors. That’s basically the Russian interference effort. And that is a security problem. You never fully solve it, but you strengthen your defenses. You get rid of the fake accounts and the tools that they have."
Frankly, this is a weak answer. They don't have a good solution and this approach is not going to prevent it from happening in future elections.
You know what comes next - some other scandal from some other area of the economy (like the financial markets melting down again) and the vast majority of people move on from this and continue using Facebook as they always have. The only reason these issues are getting so much attention is because the mass media is over-clocking their coverage and it's in everyone's face right now. I suspect the media is using the same fear mongering tactics the alt-right typically employs for more hits and increased ad revenue - i.e. "Hey everyone, Facebook has used your private data to elect Trump!"
Does anything Facebook (or third party companies) do with your private data actually have an impact in the day to day lives of the vast majority of the world's population? Sure, hyper-targeted advertising and echo-chamber content curation can have a negative impact to society, but these are macro issues. People will generally follow the path of least resistance and if you give them something of high utility for no cost and no immediate negative impact to their daily lives, they will conveniently ignore many serious issues that should otherwise be concerning.
Even the election of a US President you are viscerally opposed to doesn't usually change long term behavior. What percentage of people that are absolutely appalled at the election of Trump have adjusted their daily lives to make a change - e.g. regularly volunteering for a political cause/candidate (rage tweeting doesn't count)?
"One thing that's very notable is, they agreed to do all this stuff back in 2011, and it looks like they didn't live up to the promises then. So the question is, what makes us believe them now?
...
Yes, I mean, that's the problem, is that they keep saying this, but, you know, there's this recidivism problem. They keep not really doing anything.
And I think that the problem is that their model depends on accumulating data and giving it to advertisers. And anything that comes close to threatening that business model, they don't really seem that interested in doing something serious about it.
...
You know, I understand that, but I think the time of "trust us" has got to be over.
...
You know, the - fundamentally, Facebook is a surveillance machine. They get as much data as they can, and they promise advertisers that they're able to manipulate us, and that is at the core. And so, you know, they started this by saying, well, this wasn't really a data breach, this is our normal business model, which I think should tell you something, and then later said, well, it's not so great, and so forth.
But they're really showing an unwillingness to do something more serious about this problem. And it keeps happening over and over again.
...
There is just something not right here with this company and their unwillingness to come clean. And I think that the idea, well, just trust because Zuckerberg wrote a message on Facebook, that everything is going to be fine is really something government investigators cannot trust.
...
And once again, I think the concern in Facebook's heart is that, at some point, this will hurt their advertising revenue and the promises they have made investors. And so they're unwilling to take serious steps.
...
And I think the fundamental problem is, they're all dependent on this pure advertising model, you know, nothing but trying to get as much data out of us and sell as much as they can of our time and attention to other people.
"You know, I find that argument, that if you're not paying that somehow we can't care about you, to be extremely glib and not at all aligned with the truth.
The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can't afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.
...
I think now people are appropriately focused on some of the risks and downsides as well. And I think we were too slow in investing enough in that. It's not like we did nothing. I mean, at the beginning of last year, I think we had 10,000 people working on security. But by the end of this year, we're going to have 20,000 people working on security.
[ __% of total headcount at Facebook ]
In terms of resolving a lot of these issues, I think it's just a case where because we didn't invest enough, I think we will dig through this hole, but it will take a few years. I wish I could solve all these issues in three months or six months, but I just think the reality is that solving some of these questions is just going to take a longer period of time.
Now, the good news there is that we really started investing more, at least a year ago. So if it's going to be a three-year process, then I think we're about a year in already. And hopefully, by the end of this year, we'll have really started to turn the corner on some of these issues."