
Verge: The history of moderation on social media, and how much power it has over speech


Morrigan Stark

Arrogant Smirk
Warning: this is a really long article, but very in-depth and very interesting. It shows how moderation has evolved over the last decade or so with the explosion of social media and user-generated content websites, and how much power (even political power) it has to shape free speech.

http://www.theverge.com/2016/4/13/1...outube-facebook-reddit-censorship-free-speech

Some snippets, but the whole piece is well worth reading.

Mora-Blanco sat next to Misty Ewing-Davis, who, having been on the job a few months, counted as an old hand. On the table before them was a single piece of paper, folded in half to show a bullet-point list of instructions: Remove videos of animal abuse. Remove videos showing blood. Remove visible nudity. Remove pornography. Mora-Blanco recalls her teammates were a "mish-mash" of men and women; gay and straight; slightly tipped toward white, but also Indian, African-American, and Filipino. Most of them were friends, friends of friends, or family. They talked and made jokes, trying to make sense of the rules. "You have to find humor," she remembers. "Otherwise it’s just painful."

Videos arrived on their screens in a never-ending queue. After watching a couple seconds apiece, SQUAD members clicked one of four buttons that appeared in the upper right hand corner of their screens: "Approve" — let the video stand; "Racy" — mark video as 18-plus; "Reject" — remove video without penalty; "Strike" — remove video with a penalty to the account. Click, click, click. But that day Mora-Blanco came across something that stopped her in her tracks.

"Oh, God," she said.

Mora-Blanco won’t describe what she saw that morning. For everyone’s sake, she says, she won’t conjure the staggeringly violent images which, she recalls, involved a toddler and a dimly lit hotel room.

[...]

Okay. This is what you’re doing, Mora-Blanco remembers thinking as they paced up and down the street. You’re going to be seeing bad stuff.

Almost a decade later, the video and the child in it still haunt her. "In the back of my head, of all the images, I still see that one," she said when we spoke recently. "I really didn’t have a job description to review or a full understanding of what I’d be doing."

[...]

Mora-Blanco is one of more than a dozen current and former employees and contractors of major internet platforms from YouTube to Facebook who spoke to us candidly about the dawn of content moderation. Many of these individuals are going public with their experiences for the first time. Their stories reveal how the boundaries of free speech were drawn during a period of explosive growth for a high-stakes public domain, one that did not exist for most of human history. As law professor Jeffrey Rosen first said many years ago of Facebook, these platforms have "more power in determining who can speak and who can be heard around the globe than any Supreme Court justice, any king or any president."

[...]

In the summer of 2009, Iranian protesters poured into the streets, disputing the presidential victory of Mahmoud Ahmadinejad. Dubbed the Green Movement, it was one of the most significant political events in the country’s post-Revolutionary history. Mora-Blanco, soon to become a senior content specialist, and her team — now dubbed Policy and more than two-dozen strong — monitored the many protest clips being uploaded to YouTube.

On June 20th, the team was confronted with a video depicting the death of a young woman named Neda Agha-Soltan. The 26-year-old had been struck by a single bullet to the chest, reportedly fired by pro-government forces, during a demonstration, and a shaky cell-phone video captured her horrific last moments: in it, blood pours from her eyes, pooling beneath her.

Within hours of the video’s upload, it became a focal point for Mora-Blanco and her team. As she recalls, the guidelines they’d developed offered no clear directives regarding what constituted newsworthiness or what, in essence, constituted ethical journalism involving graphic content and the depiction of death. But she knew the video had political significance and was aware that their decision would contribute to its relevance.

Mora-Blanco and her colleagues ultimately agreed to keep the video up. It was fueling important conversations about free speech and human rights on a global scale and was quickly turning into a viral symbol of the movement. It had tremendous political power.

[...]

A prevailing narrative, as one story in The Atlantic put it, is that the current system of content moderation is "broken." For users who’ve been harmed by online content, it is difficult to argue that "broken" isn’t exactly the right word. But something must be whole before it can fall apart. Interviews with dozens of industry experts and insiders over 18 months revealed that moderation practices with global ramifications have been marginalized within major firms, undercapitalized, or even ignored. To an alarming degree, the early seat-of-the-pants approach to moderation policy persists today, hidden by an industry that largely refuses to participate in substantive public conversations or respond in detail to media inquiries.

In an October 2014 Wired story, Adrian Chen documented the work of front line moderators operating in modern-day sweatshops. In Manila, Chen witnessed a secret "army of workers employed to soak up the worst of humanity in order to protect the rest of us." Media coverage and researchers have compared their work to garbage collection, but the work they perform is critical to preserving any sense of decency and safety online, and literally saves lives — often those of children. For front-line moderators, these jobs can be crippling.

[...]

In the earliest "information wants to be free" days of the internet, objectives were lofty. Online access was supposed to unleash positive and creative human potential, not provide a venue for sadists, child molesters, rapists, or racial supremacists. Yet this radically free internet quickly became a terrifying home to heinous content and the users who posted and consumed it.

[...]

Brian Pontarelli, CEO of the moderation software company Inversoft, echoes the observation. Many companies, he told us, will not engage in robust moderation until it costs them not to. "They sort of look at that as like, that’s hard, and it’s going to cost me a lot of money, and it’s going to require a lot of work, and I don’t really care unless it causes me to lose money," he said. "Until that point, they can say to themselves that it’s not hurting their revenue, people are still spending money with us, so why should we be doing it?"

[...]

Despite the site’s size and influence — attracting some 4 to 5 million page views a day — Reddit has a full-time staff of only around 75 people, leaving Redditors to largely police themselves, following a "reddiquette" post that outlines what constitutes acceptable behavior. Leaving users almost entirely to their own devices has translated into years of high-profile catastrophes involving virtually every form of objectionable content — including entire toxic subreddits such as /r/jailbait, /r/creepshots, /r/teen_girls, /r/fatpeoplehate, /r/coontown, /r/niggerjailbait, /r/picsofdeadjailbait, and a whole category for anti-black Reddits called the "Chimpire," which flourished on the platform.

After the survey was published in March 2015, the company announced, "we are seeing our open policies stifling free expression; people avoid participating for fear of their personal and family safety."

[...]

The sharp contrast between Facebook, with its robust and long-standing Safety Advisory Board, and Reddit, with its skeletal staff and dark pools of offensive content, offers a vivid illustration of how content moderation has evolved in isolated ways within individual corporate enclaves. The fragmentation means that content banned on one platform can simply pop up on another, and that trolling can be coordinated so that harassment and abuse that appear minor on a single platform are amplified by appearing simultaneously on multiple platforms.

[...]

A writer who goes by Erica Munnings, and who asked that we not use her real name out of fear of retaliation, found herself on the receiving end of one such attack, which she describes as a "high-consequence game of whack-a-mole across multiple social media platforms for days and weeks." After she wrote a feminist article that elicited conservative backlash, a five-day "Twitter-flogging" ensued. From there, the attacks moved to Facebook, YouTube, Reddit, and 4chan. Self-appointed task forces of Reddit and 4chan users published her address and flooded her professional organization with emails, demanding that her professional license be rescinded. She shut down comments on her YouTube videos. She logged off Twitter. On Facebook, the harassment was debilitating. To separate her personal and professional lives, she had set up a separate Facebook page for her business. However, user controls on such pages are thin, and her attackers found their way in.

"Policies like this open the floodgates of internet hate and tied my hands behind my back. There was no way I could report each and every attack across multiple social media platforms because they came at me so fast and in such high volume. But also, it became clear to me that when I did report, no one responded, so there really was no incentive to keep reporting. That became yet another costly time-sink on top of deleting comments, blocking people, and screen-grabbing everything for my own protection. Because no one would help me, I felt I had no choice but to wait it out, which cost me business, and income."

Moderate me if old.
 

collige

Banned
Brian Pontarelli, CEO of the moderation software company Inversoft, echoes the observation. Many companies, he told us, will not engage in robust moderation until it costs them not to. "They sort of look at that as like, that’s hard, and it’s going to cost me a lot of money, and it’s going to require a lot of work, and I don’t really care unless it causes me to lose money," he said. "Until that point, they can say to themselves that it’s not hurting their revenue, people are still spending money with us, so why should we be doing it?"
Who woulda thunk it.
 

Chris R

Member
Local newspaper just went from Facebook comments to a moderated system called "Civil Comments".

Quality of the comments has gone way up and spam is gone. Of course some people hate the new system because the vitriolic right wing comments are being removed as they don't add anything to most discussions, but that's fine in my book.
 

Morrigan Stark

Arrogant Smirk
Local newspaper just went from Facebook comments to a moderated system called "Civil Comments".

Quality of the comments has gone way up and spam is gone. Of course some people hate the new system because the vitriolic right wing comments are being removed as they don't add anything to most discussions, but that's fine in my book.
On this topic, another article (a shorter one) about comments on The Guardian:

https://www.theguardian.com/technology/2016/apr/12/the-dark-side-of-guardian-comments

New research into our own comment threads provides the first quantitative evidence for what female journalists have long suspected: that articles written by women attract more abuse and dismissive trolling than those written by men, regardless of what the article is about.

There's even a small quiz at the end where they ask you how you'd moderate some example comments. In my case, I was 100% in agreement with them.

I wonder if that article merits its own thread, actually. Fascinating stuff. But the OP is not so much about comments on news sites (though it falls under that) but user-generated media such as videos on youtube, photos, etc.
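
For the curious, the kind of analysis the Guardian describes is conceptually simple: take each comment, note whether moderators blocked it, group by the article author's demographic, and compare blocked rates across groups. A rough sketch of that computation, with made-up field names and toy data (this is not the Guardian's actual pipeline):

```python
from collections import defaultdict

# Hypothetical records: one per comment, with the gender of the
# article's author and whether moderators blocked the comment.
comments = [
    {"author_gender": "female", "blocked": True},
    {"author_gender": "female", "blocked": False},
    {"author_gender": "male", "blocked": False},
    {"author_gender": "male", "blocked": False},
    # ... in the Guardian's case, some 70 million of these
]

totals: dict[str, int] = defaultdict(int)
blocked: dict[str, int] = defaultdict(int)

for c in comments:
    totals[c["author_gender"]] += 1
    if c["blocked"]:
        blocked[c["author_gender"]] += 1

for group, n in totals.items():
    rate = blocked[group] / n
    print(f"{group}: {rate:.1%} of comments blocked ({blocked[group]}/{n})")
```

Blocked comments are only a proxy for abuse, of course, but even that blunt measure was enough to surface the pattern they report.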
 
Local newspaper just went from Facebook comments to a moderated system called "Civil Comments".

Quality of the comments has gone way up and spam is gone. Of course some people hate the new system because the vitriolic right wing comments are being removed as they don't add anything to most discussions, but that's fine in my book.
I wish such a thing would happen on the financial market websites: Yahoo Finance, MarketWatch, CNBC, etc.

So much right-wing vitriol on those sites. I constantly see negative posts about Obama, LGBT people, civil rights, and how the US is now a socialist country.
 
On this topic, another article (a shorter one) about comments on The Guardian:

https://www.theguardian.com/technology/2016/apr/12/the-dark-side-of-guardian-comments



There's even a small quiz at the end where they ask you how you'd moderate some example comments. In my case, I was 100% in agreement with them.

I wonder if that article merits its own thread, actually. Fascinating stuff. But the OP is not so much about comments on news sites (though it falls under that) but user-generated media such as videos on youtube, photos, etc.

oh wow

Although the majority of our regular opinion writers are white men, we found that those who experienced the highest levels of abuse and dismissive trolling were not. The 10 regular writers who got the most abuse were eight women (four white and four non-white) and two black men. Two of the women and one of the men were gay. And of the eight women in the “top 10”, one was Muslim and one Jewish.

And the 10 regular writers who got the least abuse? All men.
 

wildfire

Banned
The more I think about this article, the more disturbing it is. I knew about the problems porn sites have to deal with, but learning that even 4chan had to filter filth really sets the tone for how mental this is.

The details of moderation practices are routinely hidden from public view, siloed within companies and treated as trade secrets when it comes to users and the public. Despite persistent calls from civil society advocates for transparency, social media companies do not publish details of their internal content moderation guidelines; no major platform has made such guidelines public.

You see, this was and still is my concern regarding copyright administration such as YouTube's Content ID system. This article raises awareness of the other issues public platforms have to deal with, but it still manages to miss the point of what they should be saying instead of what I quoted.

The team that wrote this article, as well as everyone reading it, should be asking how to track down these disturbed individuals, and which social services or law enforcement agencies should be looking into rehabilitating or punishing them.
 