https://www.wired.com/2016/12/facebook-gets-real-fighting-fake-news/?mbid=social_twitter
Wired.com said:
"After coming under heavy public criticism for not taking full responsibility for how it may have affected the outcome of the 2016 presidential election, Facebook has finally laid out how it plans to crack down on fake news. The social network’s corrective updates are starting to roll out right now, and while they won’t solve the problem overnight, they’re an important first step."
What is their plan to do this?
Facebook’s strategy combines crowdsourcing similar to how Facebook polices mature content, reliance on third-party fact checkers, and financial disincentives for fake news hucksters. Each aspect of the rollout has its strengths, but also invites a few questions.
In large part, as the article explains, they are going to be combating fake news by allowing users to mark an article as fake. Once it's marked, full-time Facebook employees will look at the domain the article originates from to determine whether it's a legitimate source. If the article comes from a clearly spoofed website (FoxNews123.au is used as an example), it will be flagged as fake. These employees aren't looking at the content of the articles, just the domains they're being shared from.
Further, articles can be sent to a 3rd-party team of fact-checkers from sites like PolitiFact and Snopes to verify the content. If the content is deemed false, the 3rd-party sites will return an article correcting or debunking the information. That article will be shown alongside a message to the person sharing the original fake news, something like "This article has been shown by 3rd-Party Fact-Checkers to contain false information," along with a link to the fact-checkers' content.
The other part of their plan is to cut off the ad dollars flowing to fake-news sites... but I'm not quite sold on their methods there.
Wired.com said:
"All these updates are a worthy step forward in addressing the scourge of fake news, which presents a real threat to our democracy even as the companies that spread this disinformation make a killing in ad dollars. Facebook has long asserted that as a platform, a mere middleman to these low-life companies, it doesn’t directly carry any responsibility for the actions of abusers of its reach. Today, Facebook is stepping up to the issues in earnest. Whether its actions will lead to the kinds of impactful changes that are currently needed remains to be seen."
Too little too late?
Mark me as Fake if old.