
I wonder what the solution to this is. From what I know about reviewing reports about content, it's basically a case of someone reporting the post, the report being farmed out to reviewers, and the reviewers checking the post (and optionally the context) to either kill it or leave it alone.

That means your post is one of hundreds this person sees that day and their guidance is likely "no genitalia, no nipples, and definitely never any naked children". So they act accordingly.

So what's the ideal path from here? Do you educate them about art? What's the line after that? Is this photo ok? What about the famous album cover? What about private party pictures? Etc. Can we even describe a reasonable line? Do we expect them to reverse image search every single photo for context? (Not many people could recognise that photo on its own) How many more people would be needed for clarification? What's the incentive to get them?




One solution is to not remove anything until told to do so by law enforcement. If people don't like what's posted they can choose to not view it.


I really don't get why sites don't think this way. If you take the hands-off route, you can deflect a lot of criticism: "We don't endorse it, we just let people post what they want." But as soon as you censor, or curate, or promote one item/subject, you become responsible for everything else on your site. Why take that extra load on yourself?


> I really don't get why sites don't think this way

Because they lose revenue. In Facebook's case, it's because advertisers have demonstrated time and again that they won't spend money when their products are associated with controversial content. It doesn't matter how nuanced or obtuse the reason for the controversy is.


That sounds plausible. I guess I don't see why FB can't throw its weight around. Sure, smallrandomsite.com can't fight advertisers, but FB should be able to.


The common fear is that if you don't moderate what users post, your service becomes a child porn host in a surprisingly short amount of time, and I think that's also what generally happens. Though beheadings and general gore from shitposters are probably what trigger most censorship programs.


I guess I didn't say it, but I meant don't filter out anything except for legally-required stuff. If they have to comply with the law, I can understand that.


Advertisers hate this approach, unfortunately. This means that if a site is primarily trying to make money, such an idealistic approach will fall by the wayside. Reddit is the most spectacular example of this: in recent years, moderation by the administrators has stepped far away from the original approach of leaving up all legal content.


Or you could even let users tag content as inappropriate for those who only want to see fluffy bunnies, and have a "give me an unfiltered view" option for everyone else.


I am 100% behind letting the user decide what they want to filter out. If beheadings offend you, filter them out! If tan-colored books offend you, filter those out as well!

I guess it's weird to me that some people can't just "filter" and move on.
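A minimal sketch of how that kind of user-controlled, tag-based filtering could work (the tags, names, and data shapes here are hypothetical, just to illustrate the idea that nothing is removed server-side, only hidden per user):

    # Hypothetical sketch: posts carry community-applied tags, and each user
    # keeps their own blocklist of tags plus an "unfiltered view" switch.
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        post_id: int
        text: str
        tags: set = field(default_factory=set)   # e.g. {"gore", "nudity"}

    @dataclass
    class UserPrefs:
        blocked_tags: set = field(default_factory=set)
        unfiltered: bool = False                  # "give me an unfiltered view"

    def visible_posts(posts, prefs):
        """Return the posts this user has chosen to see; nothing is deleted."""
        if prefs.unfiltered:
            return list(posts)
        return [p for p in posts if not (p.tags & prefs.blocked_tags)]

    # One user hides gore, another opts out of filtering entirely.
    feed = [Post(1, "fluffy bunnies", {"animals"}), Post(2, "shock image", {"gore"})]
    print(visible_posts(feed, UserPrefs(blocked_tags={"gore"})))  # only post 1
    print(visible_posts(feed, UserPrefs(unfiltered=True)))        # both posts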


And once you realise your site has turned into something between /b/ and the sad parts of reddit, what do you do then? (I'm assuming that's not your goal.)


I once had the displeasure of clicking on a misleading Facebook group title and seeing one of the worst pictures I've ever seen, an atrocity that I couldn't get out of my head for days. I reported it. Some idiots just want to shock others; I don't get it.

I know that there are companies in Europe that do nothing but verify reported content, and whose employees cannot say that they work for Facebook, but in a way they do.



