How Facebook Failed the World

The Atlantic

In the fall of 2019, Facebook launched a massive effort to combat the use of its platforms for human trafficking. Working around the clock, its employees searched Facebook and its subsidiary Instagram for keywords and hashtags that promoted domestic servitude in the Middle East and elsewhere. Over the course of a few weeks, the company took down 129,191 pieces of content, disabled more than 1,000 accounts, tightened its policies, and added new ways to detect this kind of behavior. After they were through, employees congratulated one another on a job well done.

It was a job well done. It just came a little late. In fact, a group of Facebook researchers focused on the Middle East and North Africa had found numerous Instagram profiles being used as advertisements for trafficked domestic servants as early as March 2018. “Indonesian brought with Tourist Visa,” one photo caption on a picture of a woman reads, in Arabic. “We have more of them.” But these profiles weren’t “actioned”—disabled or taken down—an internal report would explain, because Facebook’s policies “did not acknowledge the violation.” A year and a half later, an undercover BBC investigation revealed the full scope of the problem: a broad network that illegally trafficked domestic workers, facilitated by internet platforms and aided by algorithmically boosted hashtags. In response, Facebook banned one hashtag and took down some 700 Instagram profiles. But according to another internal report, “domestic servitude content remained on the platform.”

Not until October 23, 2019, did the hammer drop: Apple threatened to pull Facebook and Instagram from its App Store because of the BBC report. Motivated by what employees describe in an internal document as “potentially severe consequences to the business” that would result from an App Store ban, Facebook finally kicked into high gear. The document makes clear that the decision to act was not the result of new information: “Was this issue known to Facebook before BBC enquiry and Apple escalation? Yes.”

The document was part of the disclosure made to the Securities and Exchange Commission and provided to Congress in redacted form by Frances Haugen, the whistleblower and former Facebook data scientist. A consortium of more than a dozen news organizations, including The Atlantic, has reviewed the redacted versions.

Reading these documents is a little like going to the eye doctor and seeing the world suddenly sharpen into focus. In the United States, Facebook has facilitated the spread of misinformation, hate speech, and political polarization. It has algorithmically surfaced conspiracy theories and false information about vaccines, and was instrumental in the ability of an extremist mob to attempt a violent coup at the Capitol. That much is now painfully familiar.

But these documents show that the Facebook we have in the United States is actually the platform at its best. It’s the version made by people who speak our language and understand our customs, who take our civic problems seriously because those problems are theirs too. It’s the version that exists on a free internet, under a relatively stable government, in a wealthy democracy. It’s also the version to which Facebook dedicates the most moderation resources. Elsewhere, the documents show, things are different. In the most vulnerable parts of the world—places with limited internet access, where smaller user numbers mean bad actors have undue influence—the trade-offs and mistakes that Facebook makes can have deadly consequences.

[Illustration by Erik Carter: gladiator-battle viewers giving the thumbs down]

According to the documents, Facebook is aware that its products are being used to facilitate hate speech in the Middle East, violent cartels in Mexico, ethnic cleansing in Ethiopia, extremist anti-Muslim rhetoric in India, and sex trafficking in Dubai. It is also aware that its efforts to combat these things are insufficient. A March 2021 report notes, “We frequently observe highly coordinated, intentional activity … by problematic actors” that is “particularly prevalent—and problematic—in At-Risk Countries and Contexts”; the report later acknowledges, “Current mitigation strategies are not enough.”

In some cases, employees have successfully taken steps to address these problems, but in many others, the company response has been slow and incomplete. As recently as late 2020, an internal Facebook report found that only 6 percent of Arabic-language hate content on Instagram was detected by Facebook’s systems. Another report that circulated last winter found that, of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools. In both instances, employees blamed company leadership for insufficient investment.

In many of the world’s most fragile nations, a company worth hundreds of billions of dollars hasn’t invested enough in the language- and dialect-specific artificial intelligence and staffing it needs to address these problems. Indeed, last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users. (Facebook declined to tell me how many countries it has users in.) And although Facebook users post in at least 160 languages, the company has built robust AI detection in only a fraction of those languages, the ones spoken in large, high-profile markets such as the U.S. and Europe—a choice, the documents show, that means problematic content is seldom detected.

The granular, procedural, sometimes banal back-and-forth exchanges recorded in the documents reveal, in unprecedented detail, how the most powerful company on Earth makes its decisions. And they suggest that, all over the world, Facebook’s choices are consistently driven by public perception, business risk, the threat of regulation, and the specter of “PR fires,” a phrase that appears over and over in the documents. In many cases, Facebook has been slow to respond to developing crises outside the United States and Europe until its hand is forced. “It’s an open secret … that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention,” an employee named Sophie Zhang wrote in a September 2020 internal memo about Facebook’s failure to act on global misinformation threats. (Most employee names have been redacted for privacy reasons in these documents, but Zhang left the company and came forward as a whistleblower after she wrote this memo.)

Sometimes, even negative attention isn’t enough. In 2019, the human-rights group Avaaz found that Bengali Muslims in India’s Assam state were “facing an extraordinary chorus of abuse and hate” on Facebook: Posts calling Muslims “pigs,” “rapists,” and “terrorists” were shared tens of thousands of times and left on the platform because Facebook’s artificial-intelligence systems weren’t built to automatically detect hate speech in Assamese, which is spoken by 23 million people. Facebook removed 96 of the 213 “clearest examples” of hate speech Avaaz flagged for the company before publishing its report. Facebook still does not have technology in place to automatically detect Assamese hate speech.

In a memo dated December 2020 and posted to Workplace, Facebook’s very Facebook-like internal message board, an employee argued that “Facebook’s decision-making on content policy is routinely influenced by political considerations.” To hear this employee tell it, the problem was structural: Employees who were primarily tasked with negotiating with governments over regulation and national security, and with the press over stories, were empowered to weigh in on conversations about building and enforcing Facebook’s rules regarding questionable content around the world. “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”

Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech. A 2020 Wall Street Journal article reported that Facebook’s top public-policy executive in India had raised concerns about backlash if the company were to do so, saying that cracking down on leaders from the ruling party might make running the business more difficult. The company eventually did ban Singh, but not before his posts ping-ponged through the Hindi-speaking world.

In a Workplace thread apparently intended to address employee frustration after the Journal article was published, a leader explained that Facebook’s public-policy teams “are important to the escalations process in that they provide input on a range of issues, including translation, socio-political context, and regulatory risks of different enforcement options.”

Employees weren’t placated. In dozens and dozens of comments, they questioned the decisions Facebook had made regarding which parts of the company to involve in content moderation, and raised doubts about its ability to moderate hate speech in India. They called the situation “sad” and Facebook’s response “inadequate,” and wondered about the “propriety of considering regulatory risk” when it comes to violent speech.

“I have a very basic question,” wrote one worker. “Despite having such strong processes around hate speech, how come there are so many instances that we have failed? It does speak on the efficacy of the process.”

Two other employees said that they had personally reported certain Indian accounts for posting hate speech. Even so, one of the employees wrote, “they still continue to thrive on our platform spewing hateful content.”

We “cannot be proud as a company,” yet another wrote, “if we continue to let such barbarism flourish on our network.”

Taken together, Frances Haugen’s leaked documents show Facebook for what it is: a platform racked by misinformation, disinformation, conspiracy thinking, extremism, hate speech, bullying, abuse, human trafficking, revenge porn, and incitements to violence. It is a company that has pursued worldwide growth since its inception—and then, when called upon by regulators, the press, and the public to quell the problems its sheer size has created, it has claimed that its scale makes completely addressing those problems impossible. Instead, Facebook’s 60,000-person global workforce is engaged in a borderless, endless, ever-bigger game of Whac-a-Mole, one with no winners and a lot of sore arms.

Sophie Zhang was one of the people playing that game. Despite being a junior-level data scientist, she had a knack for identifying “coordinated inauthentic behavior,” Facebook’s term for the fake accounts that have exploited its platforms to undermine global democracy, defraud users, and spread false information. In her memo, which is included in the Facebook Papers but was previously leaked to BuzzFeed News, Zhang details what she found in her nearly three years at Facebook: coordinated disinformation campaigns in dozens of countries, including India, Brazil, Mexico, Afghanistan, South Korea, Bolivia, Spain, and Ukraine. In some cases, such as in Honduras and Azerbaijan, Zhang was able to tie accounts involved in these campaigns directly to ruling political parties. In the memo, posted to Workplace the day Zhang was fired from Facebook for what the company alleged was poor performance, she says that she made decisions about these accounts with minimal oversight or support, despite repeated entreaties to senior leadership. On multiple occasions, she said, she was told to prioritize other work.

Facebook has not disputed Zhang’s factual assertions about her time at the company, though it maintains that controlling abuse of its platform is a top priority. A Facebook spokesperson said that the company tries “to keep people safe even if it impacts our bottom line,” adding that the company has spent $13 billion on safety since 2016. “Our track record shows that we crack down on abuse abroad with the same intensity that we apply in the U.S.”

Zhang’s memo, though, paints a different picture. “We focus upon harm and priority regions like the United States and Western Europe,” she wrote. But eventually, “it became impossible to read the news and monitor world events without feeling the weight of my own responsibility.” Indeed, Facebook explicitly prioritizes certain countries for intervention by sorting them into tiers, the documents show. Zhang “chose not to prioritize” Bolivia, despite credible evidence of inauthentic activity in the run-up to the country’s 2019 election. That election was marred by claims of fraud, which fueled widespread protests; more than 30 people were killed and more than 800 were injured.

“I have blood on my hands,” Zhang wrote in the memo. By the time she left Facebook, she was having trouble sleeping at night. “I consider myself to have been put in an impossible spot—caught between my loyalties to the company and my loyalties to the world as a whole.”

In February, just over a year after Facebook’s high-profile sweep for Middle Eastern and North African domestic-servant trafficking, an internal report identified a web of similar activity, in which women were being trafficked from the Philippines to the Persian Gulf, where they were locked in their homes, denied pay, starved, and abused. This report found that content “should have been detected” for violating Facebook’s policies but had not been, because the mechanism that would have detected much of it had recently been made inactive. The title of the memo is “Domestic Servitude: This Shouldn’t Happen on FB and How We Can Fix It.”

What happened in the Philippines—and in Honduras, and Azerbaijan, and India, and Bolivia—wasn’t just that a very large company lacked a handle on the content posted to its platform. It was that, in many cases, a very large company knew what was happening and failed to meaningfully intervene.

That Facebook has repeatedly prioritized solving problems for Facebook over solving problems for users should not be surprising. The company is under the constant threat of regulation and bad press. Facebook is doing what companies do, triaging and acting in its own self-interest.

But Facebook is not like other companies. It is bigger, and the stakes of its decisions are higher. In North America, we have recently become acutely aware of the risks and harms of social media. But the Facebook we see is the platform at its best. Any solutions will need to apply not only to the problems we still encounter here, but also to those with which the other 90 percent of Facebook’s users struggle every day.
