In a blog post published Thursday, Facebook provided further clarification of how it combats the alleged outbreak of fake news on its platform.
“False news is a money maker for spammers and a weapon of state actors and agitators around the world,” said Tessa Lyons, product manager for Facebook.
Regardless of the statement's validity, Facebook's efforts to fight fake news have drawn criticism from many.
New York Times CEO Mark Thompson delivered a blistering speech Tuesday at an event titled "Breaking the News: Free Speech & Democracy in the Age of Platform Monopoly," in which he laid out his deep-seated criticisms of tech giants like Facebook and Google for trying to decide for readers which news is reputable and which stories can't be trusted.
“The process of citizens making up their own mind which news source to believe is messy, and can indeed lead to ‘fake news,’ but to rob them of that ability, and to replace the straightforward accountability of editors and publishers for the news they produce with a centralized trust algorithm will not make democracy healthier, but damage it further,” Thompson said in a keynote lecture.
Despite such concerns, Facebook is pushing ahead, feeling pressure from portions of the public who seem to think users aren't equipped to determine for themselves which news is trustworthy, and that it's tech companies' job to do it for them.
In an attempt to make its fact-checking process as formidable as possible (or perhaps to skirt complete responsibility), Facebook partners with fact-checkers that are "independent and certified through" the "non-partisan" International Fact-Checking Network (IFCN), an arm of the Poynter Institute.
Arguably, the only two U.S.-based groups within the IFCN that are conservative-leaning are The Weekly Standard and Check Your Fact, a subsidiary of The Daily Caller. And despite what it claims, Facebook doesn't seem to use the IFCN as a whole: it works with its own smaller cohort of organizations, which excludes some of those certified by the Poynter division.
But perhaps most importantly, the technology Facebook uses to help with fact-checking and to identify false or misleading news is imperfect, as many algorithms are.
"For example, when people on Facebook submit feedback about a story being false or comment on an article expressing disbelief, these are signals that a story should be reviewed," said Lyons. One potential problem with such a method is that "people on Facebook" can target a news story that isn't necessarily false, but reports on something they don't like, potentially in a way they don't like. Thus, by flagging content, they can essentially game the algorithms to serve their own interests rather than to promote accuracy.
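To make the concern concrete, here is a minimal sketch of how a flag-count trigger could work and how coordinated flagging could exploit it. The event format, threshold, and function names are hypothetical illustrations, not Facebook's actual signals or values.

```python
from collections import Counter

# Hypothetical feedback events: (story_id, user_id) pairs where a user
# flagged a story as "false". Purely illustrative data.
feedback = [
    ("story-a", "u1"), ("story-a", "u2"), ("story-a", "u3"),
    ("story-b", "u4"),
]

REVIEW_THRESHOLD = 3  # send to fact-checkers once this many users flag it

def stories_to_review(events, threshold=REVIEW_THRESHOLD):
    """Return story IDs whose raw flag count meets the threshold."""
    counts = Counter(story for story, _ in events)
    return [story for story, n in counts.items() if n >= threshold]

print(stories_to_review(feedback))  # ['story-a']

# The weakness described above: a coordinated group that dislikes an
# accurate story can push it over the same threshold, so raw flag
# counts alone conflate "false" with "unpopular with some users".
brigade = [("story-b", f"bot{i}") for i in range(3)]
print(stories_to_review(feedback + brigade))  # ['story-a', 'story-b']
```

Any system that treats raw user feedback as a proxy for falsity inherits this problem; weighting flags by flagger reliability can mitigate it, but only shifts the question to how that reliability is judged.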
“We do not know, beyond inevitably imperfect and incomplete empirical observation, how the algorithms of the major platforms sort and prioritize our content, nor can we reliably predict or influence changes in those algorithms, nor in any sense hold the companies to account for them,” said Thompson. “Full transparency about both algorithmic and human editorial selection by the major digital platforms is an essential preliminary if we are to address any of these issues. It would be best if this were done voluntarily, but even if it requires regulation or legislation, it must be done — and done promptly.”
Facebook, like other tech companies, does not discuss the details of its algorithms because they are proprietary.
“But the underlying danger — of the agency of editors and public alike being usurped by centralized algorithmic control — is present with every digital platform where we do not fully understand how the processes of editorial selection and prioritization take place,” Thompson continued in his speech.
While transparent only to a point, Facebook at least seems partially aware of its algorithms' potential deficiencies.
“To make real progress, we have to keep improving our machine learning and trying other tactics that can work around the world,” wrote Lyons.
Facebook also appears to realize that trying to root out fake news, whether itself or through partner organizations, isn't the only way forward, as it says it will continue "to invest in news literacy programs to help people better judge the publishers and articles they see on Facebook."
“It’s through the combination of all these things — and by collaborating with other companies and organizations — that we’ll be able to continue to make progress on false news,” concluded Lyons.
Getting that recipe right will be critical for Facebook, which is strained by calls both to do more about misinformation on the platform and to foster an ethos of free expression, since to many, like Thompson, the algorithms designed to combat fake news are more worrisome than fake news itself.