While reporting on Facebook’s operations in Myanmar in 2018, I wrote about mobs hunting down people in the streets, violent animosity toward a beleaguered minority group, and the targeting of journalists (some of whom were branded as terrorists)—all of which could be traced back to hate-filled misinformation that had rippled across social media unimpeded. At the time, a Facebook employee, an American diplomat, and several others who had spent time in Myanmar (also known as Burma) told me they worried that similar trends were under way in the United States.
In the U.S. itself, however, that concern rarely registered. The link between online lies and real-world violence was evident not just in Myanmar but also in India, Sri Lanka, and elsewhere—places where American technology platforms, particularly Facebook, were used to spread dangerous conspiracy theories and barrage users with propaganda. In those countries, civil-society members, activists, and concerned citizens repeatedly rang alarm bells.
Yet these warnings went largely unheeded. Within the companies’ executive ranks, the focus was on breakneck growth and on not upsetting governments that controlled access to these rapidly expanding markets. And among U.S. officialdom, the threat was always “over there,” always far away, always foreign.
After last week’s riot in Washington, D.C., many Americans now appear to be in a state of disbelief, grappling with what exactly went wrong. There is more than enough blame to go around. Nor is this the first domestic incident that can be traced back to extreme rhetoric and misinformation found on mainstream platforms. But to be totally surprised by what happened on Capitol Hill is to have completely ignored what has been taking place for years on American social-media platforms abroad.
The damage was done in places such as the village of Rainpada, in India, where a man was beaten to death after false rumors of a kidnapping spread on the Facebook-owned messaging platform WhatsApp, and the streets of Mandalay, in Myanmar, which were convulsed with violence in 2014 after a fictitious claim of rape was disseminated on Facebook by an influential, hard-line monk.
Tech companies, when they finally—many of them begrudgingly—owned up to the issues they helped create in foreign countries, liked to point the finger at “media literacy” or “digital literacy.” This was Silicon Valley’s polite way of saying that users in these countries were too new to the internet, too naive to know that what they were seeing was fake, too easily misled by crudely Photoshopped pictures and doctored videos. While this undoubtedly played a role, the overemphasis on this one issue, rather than a comprehensive look at the part these companies’ own products played, seemed at times to border on calling people stupid and gullible. In the U.S., the thinking appeared to be, users familiar with the internet and fluent in the language of social media could tell fact from fiction, reality from illusion. Underlying this message was a tacit, and ultimately misguided, belief in exceptionalism: that this could never happen in America. Until it did.
When rioters charged their way up the steps of the Capitol, they were of course different from the motorbike riders who roved Mandalay armed with metal tools and clubs, or the marauding bands of thugs who torched Muslim-owned shops in Sri Lanka. These people in Washington were wrapped in the Stars and Stripes, clad in anti-Semitic sweatshirts, and draped in animal pelts. Many espoused baseless claims that the election—fairly and resoundingly won by Joe Biden—was fraudulent, and that Donald Trump was the real winner, while others were fervent devotees of the QAnon conspiracy theory.
Like the falsehoods that circulated in other countries, this misinformation, and the plans to take violent action, received enormous amplification on social-media platforms and little resistance from them. The resulting violence and deaths were appalling and yet unfortunately familiar, as was the reaction from social-media companies.
Only after the storming of the Capitol did platforms take more substantive steps. Facebook and Twitter removed Trump from their sites, and a host of other social-media networks followed suit. The companies also cracked down on numerous other accounts that had previously been free to spread misinformation and hate speech.
These delayed, retrospective steps follow a course of action that is well known to observers abroad. The standard routine generally begins with a hurried reversal of a previously staunchly held position (banning Trump), despite continued denial of wrongdoing or dismissal of concerns (a step already under way), coupled with some overdue corrective action (removing “stop the steal” content nearly 70 days after the election). This is then often followed by a muted apology and a promise to do better, efforts embellished with a bit of outreach to a local civil-society group or NGO to score public-relations points. In rare instances there might be some self-flagellation. In Myanmar and Sri Lanka, Facebook enlisted corporate social-responsibility advisers to compile reports on the shortcomings of its operations in those countries, but those reports only rehashed much of what was already publicly known.
Facebook hastily banned Myanmar’s commander in chief only after company officials became aware that it would be named in a scathing United Nations report that concluded the military had acted with “genocidal intent” when carrying out a crackdown on the country’s Rohingya Muslim minority. It took another public shaming for Facebook to finally begin to share data with international investigators. Even now, the Myanmar military’s propaganda arm is still allowed on the platform, as are some branches of the armed forces.
Following the mass U.S. deplatforming over the weekend, many of the loudest, most vile voices on the American right migrated to Parler, a social-media service popular with conservatives and far-right figures. Apple, Google, and Amazon kicked Parler off their app stores and servers, effectively removing the service from the internet. John Matze, Parler’s CEO, has suggested that content on the app could be moderated by volunteers in a last-ditch effort to save his company, a suggestion that was skewered on social media: unpaid laborers would be put in charge of difficult decisions, determining what content is potentially harmful and should be removed. But this is exactly what Facebook did for years in Myanmar, relying on an ad hoc group of well-meaning volunteers to flag problematic content to be passed up the chain to Facebook employees and eventually addressed. When Facebook executives visited the country in June 2018, they even asked some civil-society members to pay for their own bus tickets to Yangon to attend a community meeting focused on problems with the platform. The company eventually relented and ponied up the money, according to a person familiar with the event.
If Americans now expect a high-profile firing in response to what occurred, looking abroad can dampen those expectations as well. A damning series of Wall Street Journal articles revealed that Ankhi Das, a top Facebook executive in India, flouted the company’s own rules against hate speech regarding posts by a ruling-party figure in order to protect Facebook’s business in the country. Das was also found to have supported Indian Prime Minister Narendra Modi, a staunch Hindu nationalist, in personal posts, confirming many of the widely circulated rumors about her political biases. Only after coming under pressure because of the mounting issues did she finally step down. Even before that episode, Facebook’s reputation in India was bad enough that when I visited its Delhi offices, the company’s signs had been scrubbed, a measure its employees in the U.S. are now being asked to replicate.
The Atlantic’s Helen Lewis wrote last year of the “American Rhino Problem”: the difficulty, for the rest of the world, that so many of the internet’s rules are decided in Silicon Valley by a small band of tech executives. The power these individuals wield over global affairs is astonishing. One consequence is that we—those of us who live abroad, whether we speak English or not—must look to the U.S. for how technology will be governed.
But in considering what social-media companies will do now—and what they are ultimately capable of inflicting on society—looking in the opposite direction, from America to beyond its borders, would be a wise exercise. If that is any guide, changes will likely come in halting increments, with little transparency or coherent explanation.
“Americans keep hanging on to the idea that the U.S. is somehow different, but what happened in the U.S. really came as no surprise,” Htaike Htaike Aung, the executive director of Myanmar ICT for Development Organization, an NGO that has spent years lobbying Facebook to do more to tackle hate speech, told me after the Capitol Hill riot. “We saw it play out across the world. Radicalizing speech and lies, particularly when coming from centers of power, can do major societal damage.”
“Myanmar was a case in point,” she continued. “We tried to warn you.”
"come" - Google News
January 12, 2021 at 03:19PM
https://ift.tt/3shXbwF
Big Tech’s Foreign Problems Come Home - The Atlantic
"come" - Google News
https://ift.tt/2S8UtrZ
Shoes Man Tutorial
Pos News Update
Meme Update
Korean Entertainment News
Japan News Update
Bagikan Berita Ini
0 Response to "Big Tech’s Foreign Problems Come Home - The Atlantic"
Post a Comment