As Covid-19 spread across the globe early this year, Facebook went into action to combat potentially dangerous disinformation on its site. The company labeled and suppressed misleading content, removed disinformation and directed users to reputable sources, including the Centers for Disease Control and Prevention website.
This swift action stood in sharp contrast to Facebook’s stance on another divisive and complex danger: climate change. That’s because, under the company’s guidelines, climate content can be classified as opinion and therefore exempted from fact-checking procedures.
The policy means that peer-reviewed science can be lumped into the same category as industry statements and even blatant disinformation. In September, for example, the CO2 Coalition, a nonprofit group that says increased carbon emissions are good for the planet, successfully overturned a fact-check when Facebook quietly labeled its post as “opinion.”
In light of the recent advertising boycott against Facebook and the independent audit made public this month that faulted the platform for allowing hate speech and disinformation to thrive, we spoke to the company and several outside experts about its position on climate disinformation. Here’s what they had to say.
What are Facebook’s rules?
All opinion content on the platform — including op-ed articles or posts that express the views or agendas of politicians, businesses, and nongovernmental organizations — is exempt from fact-checking. This policy has been in place since 2016, according to a Facebook spokesman, and is publicly posted on the company’s website.
Who does the fact-checking?
Facebook itself does not fact-check content. Instead, it contracts with at least 50 independent organizations, which have access to posts flagged as potential disinformation by Facebook or by users.
One of the platform’s climate change fact checkers is Climate Feedback, an organization that recruits subject-matter experts to analyze posts. The process can take weeks for a single article.
According to Scott Johnson, Climate Feedback’s science editor, fact checkers can also scrutinize posts that have not yet been flagged or classified by Facebook.
After a review, the fact-checking organization can apply one of eight content warnings to the post. Labels include “False Headline,” “Misleading,” and outright “False.” When content is labeled false or partly false, users receive a pop-up warning if they click to share it. False posts are also ranked to appear lower in news feeds.
What counts as opinion?
Deciding what’s opinion is at the discretion of Facebook, not the fact checkers.
In August, that policy attracted attention when the CO2 Coalition shared a Washington Examiner op-ed article that disputed the accuracy of climate change models. Climate Feedback labeled the post as “false.”
The CO2 Coalition appealed the decision and, according to Climate Feedback, Facebook responded by informing the fact checkers that the post was opinion content, and thus exempt from scrutiny by outside fact checkers.
“Placing statements that are verifiably false in an opinion section shouldn’t grant immunity from fact-checking,” Mr. Johnson said.
According to Climate Feedback, the op-ed cherry-picked facts and compiled them in a deliberately misleading manner. The full fact-check is posted on the organization’s website.
John Podesta, an adviser to President Barack Obama who coordinated the administration’s climate policy, called Facebook’s opinion policy “a loophole that you can drive a Mack truck through.”
Loophole or free speech?
According to a company representative who spoke on background, Facebook is most concerned with flagging or removing content that poses an immediate threat to human health and safety, including disinformation about the coronavirus or hate speech that incites violence. Climate change content, he said, does not fall within that category.
The representative said that the Washington Examiner post, originally published as an op-ed article, clearly aligned with Facebook’s definition of opinion content, and added that the fact checkers should have been aware of that classification.
Mr. Podesta said the policy amounts to a loophole for disinformation, noting that some opinion pieces are “full of factual lies.”
“We’re not objecting to people having opinions,” he said. “We’re objecting to the spread of disinformation and lies under the cover of opinion.”
Andrew Dessler, a professor of atmospheric science at Texas A&M University who helped fact-check the Washington Examiner item, agreed. He said he supports debate around policy questions, such as how much to reduce carbon emissions, but not around the decades of peer-reviewed research that have established scientific facts about climate change. “They aren’t up for debate,” Mr. Dessler said. “Not everybody’s opinion is equal on that.”
When pressed to combat disinformation, Facebook often points to its policy of protecting free speech and freedom of opinion. In May, the company’s founder, Mark Zuckerberg, told Fox News that the platform should not become the “arbiter of truth of everything people say online.”
Analysts point out, however, that not all speech is equal on Facebook. Some posts, often selected by algorithms because they are controversial or have high engagement, can be promoted to reach millions of people. That selective turbocharge gives them far more reach and power than other posts or spoken conversations at, say, a dinner with friends.
Emily Bell, director of the Tow Center at Columbia University, which studies digital journalism and platforms, noted that Facebook’s fight against disinformation has been front and center in its talking points since the last election.
“You’ve built a platform, which actually really helps accelerate the spread of misinformation because it reacts positively to outrage and to things that people want to share,” she said, including information too good, or bad, to be true.
“You have a statement like, ‘We believe everybody should have a voice,’ which is something Mark Zuckerberg has said over and over again. It sounds great,” Ms. Bell said. “But in practice, we know what letting everybody have a voice means. It means that you don’t discriminate against bad actors who are foreign powers. You don’t stop bullies and people who would seek to shame and harass other people.”
Facebook said in a statement that it has tripled the number of staff working on safety and security issues since 2016. A spokesman noted that the company produces quarterly, publicly available reports describing how much policy-violating content it removes.
“Our longstanding guidance to our partners is that clear opinion content is not eligible for ratings and we do not consider climate change content, or any other topic, to always be opinion,” said Andy Stone, Facebook’s policy communications director, in an emailed statement.
"how" - Google News
July 14, 2020 at 11:47PM
https://ift.tt/38VA2GT
How Facebook Handles Climate Disinformation - The New York Times
"how" - Google News
https://ift.tt/2MfXd3I
Bagikan Berita Ini
0 Response to "How Facebook Handles Climate Disinformation - The New York Times"
Post a Comment