All of Facebook's internal moderation rules leaked in Guardian investigation

This stuff is a no-win. There is no set of rules that will work across all cultures and all age groups and be easy for staffers with very different personal values to apply fairly. There is no large website that has "solved" these kinds of moderation issues. Self-harm and suicidal ideation are especially impossible; there is literally never going to be a right answer on how to deal with them at scale.

Do mods here have a set of examples like these? I mean examples of what is considered bannable? What dictates a permanent ban for full members?
 

D4Danger

Unconfirmed Member
I guess they never considered that giving people a platform for their vile shit doesn't actually reflect the world but simply encourages the worst behaviour. Or they did, and they don't care. Either way it's gross.
 
Who would even want the job of enforcing these standards? The unspeakable imagery you'd have to personally verify would hollow out anybody's sense of good in this world... it's unfathomable.
 

Stumpokapow

listen to the mad man
Do mods here have a set of examples like these? I mean examples of what is considered bannable? What dictates a permanent ban for full members?

Our moderation situation isn't really comparable. First, Facebook users primarily interact with their immediate social circle; GAF is a public discussion forum where anyone can take part in any conversation. Second, our scale is much smaller, and moderators here are "handpicked" and engaged and talk to each other and stay on top of things. Facebook needs a set of rules that thousands of moderators all over the world, with totally different value systems, no contact with each other, and jobs that amount to looking at photos of dead bodies all day, can apply objectively. Third, we are not trying to be a "public service" designed for all types of people to discuss anything the way Facebook is.

We have a terms of service which outlines the behaviours we generally discourage. We have a variety of threads in the FAQ forum that flesh out our expectations of users. Moderators have flexibility as to the appropriate level of sanction (warning, deletion, ban) to respond to a given issue, and have access to tools to assess users' past records, to be able to contextualize whether they should go harder or softer based on someone's record. We can and do all review each other's bans and generally have a good dialogue about bans.
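To make the "harder or softer based on someone's record" idea concrete, here's a minimal sketch of that kind of record-aware escalation; the names and thresholds are invented for illustration, not our actual tooling:

```python
from enum import Enum

class Sanction(Enum):
    WARNING = 1
    DELETION = 2
    TEMP_BAN = 3
    PERMA_BAN = 4

def suggest_sanction(prior_bans: int, prior_warnings: int) -> Sanction:
    """Escalate based on a user's past record; a human mod always makes the final call."""
    if prior_bans >= 2:
        return Sanction.PERMA_BAN
    if prior_bans == 1 or prior_warnings >= 3:
        return Sanction.TEMP_BAN
    if prior_warnings >= 1:
        return Sanction.DELETION
    return Sanction.WARNING
```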

The only one in the OP that stands out as being bizarre is the art one. What an odd distinction.

Because everyone agrees that you can't just have people publicly post porn, but many of their content reviewers are in places where tasteful photography that includes nude or even near-nude figures (or all sorts of things like a mother breastfeeding, or a parent posting a bathtub picture of their child) would be considered obscene, so they need a clear bright-line rule. Does the rule make total sense? No, not really, but they need a rule that everyone who reads it will be able to apply the same way. That's one of those scale things that comes up.
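As a toy illustration of why a bright-line rule wins at that scale (hypothetical categories, not Facebook's actual logic): the check has to key on something observable, like the medium, rather than a taste judgment that thousands of reviewers would each answer differently:

```python
def review_nude_art(medium: str) -> str:
    """Hypothetical bright-line check: "is it handmade?" is answerable the
    same way by every reviewer; "is it tasteful?" is not."""
    if medium == "handmade":   # painting, drawing, sculpture
        return "allow"
    return "remove"            # photograph or digitally made art
```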
 

kirby_fox

Banned
Checks out. Those threats of violence aren't credible. If they cut off people who were harming themselves, no one could stop them.

I guess they have a definition of art with nudity. I wonder if they see digital art as being more pornographic than handmade art.
 

Fancolors

Member
Who would even want the job of enforcing these standards? The unspeakable imagery you'd have to personally verify would hollow out anybody's sense of good in this world... it's unfathomable.

The employees who spoke to SZ-Magazin are not allowed to talk to reporters or the authorities. However, they wanted to make their working conditions known. These people are paid to delete offensive Facebook posts as quickly as possible – and they often feel inadequately prepared and left alone to deal with the psychological fallout of their work. Many complained that guidelines regarding what should or shouldn’t be deleted were unclear, and that they were stressed and overworked. A number of employees also reported major psychological issues as a result of frequent exposure to shocking contents that included images of torture, murder, or child abuse – and they were not provided access to professional help.

http://international.sueddeutsche.de/post/154513473995/inside-facebook
 

Ogodei

Member
"Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”."

????

There's an element of logic to this: this sort of thing is often a cry for help, and making the person feel rejected could push them over the edge.

I've been on a mod team before and there was substantive debate over suicide policy, but other sites I've seen tend to say they'll shut down suicide threats while pointing the poster to a hotline.
 

SeanTSC

Member
I'm really less concerned about their moderation of people's comments and more about their moderation of fake news.

They need a hard stance on Fake News where they just delete the shit out of every single piece of it. That would be the best thing that they could do for their platform.
 

Alucrid

Banned
"A number of employees also reported major psychological issues as a result of frequent exposure to shocking contents that included images of torture, murder, or child abuse – and they were not provided access to professional help."

yikes
 

Magwik

Banned
"Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”."

????

This one makes sense actually. If someone is hurting themselves online, that increases the odds that another person will see it and try to get them help.
 
Oooooh boy this isn't gonna go well.

What kind of rules are these?!

Loose rules to try to manage a site used by over a billion people with a few thousand employees.
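For a rough sense of that ratio (assuming the figures reported around the time of the leak: roughly 2 billion users, 4,500 content reviewers, and 3,000 more hires announced):

```python
users = 2_000_000_000      # approximate user base reported in 2017
reviewers = 4_500 + 3_000  # existing reviewers plus announced hires
print(f"{users // reviewers:,} users per reviewer")  # 266,666 users per reviewer
```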

The seeming lack of assistance to those employees, however, is terrible, especially with what they're probably exposed to on a daily basis.
 

Bronx-Man

Banned
All these rules tell me is that Facebook has no idea how actual humans operate.

Which, y'know, is par for the course for most people involved with these big social media/Silicon Valley brands.
 

jelly

Member
"A number of employees also reported major psychological issues as a result of frequent exposure to shocking contents that included images of torture, murder, or child abuse – and they were not provided access to professional help."

yikes

That's disgusting right there. I can't possibly imagine doing a job like that, and they put these people through the meat grinder with no support. Vile company.
 

Aselith

Member
So death threats against certain political figures are not allowed, but death threats against members of the general public are fair game?

That's not what it shows. It shows that they are not policing insults like "fuck off and die" that carry no credible threat, or non-targeted "here is how you commit violence" statements like telling someone to throat-punch a dude to put him down.

It's messy, but humanity is messy, and it's hard to find a good place to draw a line without becoming the nanny company.


"Credible threats" would be real death threats against a specific person regardless of status. Someone saying "fuck off and die" isn't a death threat.
 

Ryaaan14

Banned
So I guess I can stop reporting all those videos of graphic animal abuse.

Good to know. Thank u good guy Mr. Facebook
 
Who would even want the job of enforcing these standards? The unspeakable imagery you'd have to personally verify would hollow out anybody's sense of good in this world... it's unfathomable.

Imagine the shit criminal investigators see when dealing with child abuse... I just can't and don't want to imagine.
 
"A number of employees also reported major psychological issues as a result of frequent exposure to shocking contents that included images of torture, murder, or child abuse – and they were not provided access to professional help."

yikes

Jesus fucking Christ. That's horrifying.
 

faridmon

Member
Facebook moderation is awful

I was commenting under an Everton news post about how I thought Barkley was a very underwhelming character, and someone replied to me with a racist remark, which I reported. The moderation team came back to me saying that it wasn't a racist remark, just offensive banter, which shocked me to the core!
 
There's a lot to talk about here, but as an artist, this one stood out:

All “handmade” art showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not.

Kinda bullshit.
 
Photos of animal abuse can be shared, with only extremely upsetting imagery to be marked as “disturbing”.

All “handmade” art showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not.

Videos of abortions are allowed, as long as there is no nudity.

Holy crap. Our country's mentality on violence and nudity/sex has got to be the most bizarre and irrational of any developed nation.
 

Sulik2

Member
"A number of employees also reported major psychological issues as a result of frequent exposure to shocking contents that included images of torture, murder, or child abuse – and they were not provided access to professional help."

yikes

Modern corporations and CEOs are all functioning sociopaths in how they treat their employees. The profit motive just turns human beings into numbers on a spreadsheet. When you stop thinking of employees as humans and start thinking of them as dollars counting against the bottom line, this is the sort of thing that results.
 
Well, this one I actually get. Imagine if they took it down instantly; it would take away any chance that someone notices and is able to stop them.
That doesn't stop them from also notifying someone who could step in, though; nothing prevents that, other than that they don't want to get labeled the way Twitter has been over its hate posting.


Also, that site (I'm avoiding the name so I don't get even more ads for that shit).
 
This stuff is a no-win. There is no set of rules that will work across all cultures and all age groups and be easy for staffers with very different personal values to apply fairly. There is no large website that has "solved" these kinds of moderation issues. Self-harm and suicidal ideation are especially impossible; there is literally never going to be a right answer on how to deal with them at scale.
Pretty much. Suggestions for improvement and criticism are justified, but you can't expect a set of arbitrary rules to cover everything, especially when there are a billion users uploading content. Also, you have only a limited set of resources, and machine learning/AI is not even close to catching all of this stuff...
 
This stuff is a no-win. There is no set of rules that will work across all cultures and all age groups and be easy for staffers with very different personal values to apply fairly. There is no large website that has "solved" these kinds of moderation issues. Self-harm and suicidal ideation are especially impossible; there is literally never going to be a right answer on how to deal with them at scale.

This is true, but then why bother having such a hardline stance on sexuality compared to violence? I'd rather a hundred videos of 3D models fucking (or, God forbid, real people fucking) go under the radar than a single image of someone being decapitated, killing themselves, displaying the remains of an aborted fetus, etc. It's puzzling to even consider that some people would have the opposite preference.

The logic behind the rules is what makes them so alarming.
 

Jackpot

Banned
The scope may be vast, but many elements could be improved by having a stricter policy on threats. It wouldn't be hard to tighten that up considering how low the bar is to start with.
 

Yoda

Member
Remarks such as “Someone shoot Trump” should be deleted, because as a head of state he is in a protected category. But it can be permissible to say: “To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat”, or “fuck off and die”, because they are not regarded as credible threats.
This kind of makes sense. Any threat to the president WILL be investigated by the Secret Service, which would cause a lot of headaches for Facebook. As for other "threats": if Facebook banned everyone who ever said something that would be violent if carried out in real life, they'd have to ban half the site. Facebook relies on users for $$, so perhaps a middle ground would be removing the content without a ban; but policing people's language is up to the business, not society at large (in the US, via the First Amendment).

Videos of violent deaths, while marked as disturbing, do not always have to be deleted because they can help create awareness of issues such as mental illness.
If X commits a tragedy, and there is doubt about said tragedy, then the video, while hard to watch, can help raise awareness. Michael Brown being a good example.

Some photos of non-sexual physical abuse and bullying of children do not have to be deleted or “actioned” unless there is a sadistic or celebratory element.
I don't agree with this. Bullying is intrinsically sadistic by nature; unless the motive is to spread awareness, I'd remove this content.

Photos of animal abuse can be shared, with only extremely upsetting imagery to be marked as “disturbing”.
I can understand the underpinnings of the free-speech argument here... Even thinking about this content makes my stomach turn.

All “handmade” art showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not.
The fuck is this?

Videos of abortions are allowed, as long as there is no nudity.
America?

Facebook will allow people to livestream attempts to self-harm because it “doesn't want to censor or punish people in distress”.
If censoring the broadcast would cut it off before someone who could have helped sees it, then I'd say this rule makes sense.

Anyone with more than 100,000 followers on a social media platform is designated as a public figure – which denies them the full protections given to private individuals.
I suppose this makes sense? It'd be interesting to know what "full protections" entails.
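For what it's worth, the cutoff itself is completely mechanical; here's a sketch of the rule as stated (the function name is mine, the 100,000 figure is from the leak):

```python
PUBLIC_FIGURE_THRESHOLD = 100_000  # follower cutoff from the leaked rules

def is_public_figure(followers: int) -> bool:
    """Past the threshold, a user no longer gets the "full protections"
    given to private individuals."""
    return followers > PUBLIC_FIGURE_THRESHOLD
```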
 

Fades

Banned
I think the thing about art probably boils down to the fact that a photograph posted online (edited or not) is "digitally-made art". I could post photos of two people having sex in an alley and claim it's performance "art"... but an oil painting of the same thing, while similarly crude, is usually considered by society at large to be in less of a grey area (maybe due to the inherent assumption of more "skill" being involved in painting/drawing vs. photography? Not that good photography takes any less skill, but you get more people going "I could shoot that myself!" and writing it off as unskilled, which devalues the art). It also depends on where a company draws the line between nude art and pornography.
 
There's an unresolved tension here that I think might become a crisis one day:
Facebook and Twitter and Reddit use safe-harbor and carriage-service legal terms to hide from any obligation for what they publish.
But they have large and weird internal manuals that clearly mean they do have an editorial voice of some kind, and are, or should be, responsible for what they publish.

So here is an idea: maybe they shouldn't have gotten so huge without first solving this issue. They are responsible, really, and sooner or later a lawsuit is going to prove it.
This leak will be very interesting to lawyers trying to show there is an issue here that needs fixing. Frankly, I don't care if "fixing" means people who use these sites become vulnerable to the same laws that would apply if they said the same thing in public under their real name (about someone else).
 

rpmurphy

Member
Some photos of non-sexual physical abuse and bullying of children do not have to be deleted or “actioned” unless there is a sadistic or celebratory element.
I guess this kind of policy on social media sites is what ends up creating a successful money-making scheme like that DaddyOFive YT channel.
 

Cuburt

Member
I do not envy the position of a social media organization at such a global scale trying to figure out moderation for social communication on the internet while we are still in the early stages of even having the internet. The ethics and social responsibility questions often don't have easy answers, if any answers at all.

It's also a lot of power to wield that needs to have accountability, but at the same time independence from heavy governmental influence.

Honestly, this is like showing people how the sausage is made. It's not going to be pretty, but that's mainly because most people never think about it. Just from reading the stuff in the OP, it doesn't sound all that unreasonable, but of course people are subjectively going to have issues with certain distinctions, because that's always how it works.
 
Ugh... The nudity versus violence thing will never not make me a bit sick. Also, fuck the distinction. It's fucking arbitrary.
 
Ugh... The nudity versus violence thing will never not make me a bit sick. Also, fuck the distinction. It's fucking arbitrary.

The "nipples are not OK" thing is in the Google AdSense rules and regulations. Google will monetise racist YouTube videos and hate speech until someone catches them, but they pay Indian outsourcing companies to systematically search for any slight nudity on sites - including NeoGAF - and this generates a threat letter that reads like a three-strikes thing. If you as a site can't block nudity efficiently, you are cut off from AdSense.
 