After facing heavy criticism for not doing enough to stem misinformation ahead of the US presidential election, YouTube announced last week it would remove videos that alleged fraud had changed the outcome of the contest.
But half a world away in Myanmar, which held a general election just five days after the US vote and faced a tidal wave of online misinformation including unfounded claims of voter fraud, the new rules do not apply.
Social media researchers and civil society groups in Myanmar say the uneven standard is emblematic of YouTube’s comparatively hands-off approach to election misinformation globally, at a time when rival Facebook is taking more aggressive country-by-country measures.
“It looks like 2020 might be YouTube’s equivalent of Facebook’s 2016, holding on to hope that a universal approach might work, even when it’s clear that it’s not going to be the case,” said Evelyn Douek, a Harvard Law School lecturer who researches online speech.
Experts, including Douek, warn that this issue will only escalate for Alphabet’s YouTube in other elections worldwide until it creates policies that account for its role in potentially volatile situations.
“With other platforms, there’s at least starting to be a recognition that universal rules just aren’t going to cut it. For the US and Myanmar elections, Facebook tried to learn from the criticisms it received and roll out distinctive policies,” she told Reuters.
YouTube did not respond to those criticisms but said it consistently enforced its guidelines and had deleted more than 1.8 million channels for policy violations in the third quarter of 2020, including more than 54,000 for hate speech.
Facebook, which faced accusations that it helped incite genocide in Myanmar in 2017, launched an unusual Myanmar-only civic misinformation policy ahead of the recent election, allowing it to remove false claims that could lead to voter suppression or otherwise damage the electoral process.
The temporary policy, a first for Facebook on election misinformation, followed earlier Sri Lanka-specific measures to curb misinformation that could lead to violence, which were eventually expanded worldwide.
The company also built partnerships with civil society organisations in Myanmar, who say that the platform’s performance during the election, while imperfect, improved.
Alphabet’s YouTube, by contrast, chose a light-touch approach and appeared to block false election claims only when people complained about them, or when the material otherwise violated its broader guidelines on hate speech and harassment, researchers said.
“When it comes to hate speech and disinformation in Myanmar, YouTube is the new frontier,” said Victoire Rio, an adviser at the Myanmar Tech Accountability Network (MTAN), a consortium of civil society organisations that study risks on social media.
Rio told Reuters the video-sharing site was unprepared and had not made the investments needed to mitigate risk in Myanmar, and called for it to conduct a human rights impact assessment.
Rio and fact-checking organisations say purveyors of misinformation joined YouTube in large numbers in 2020, as video streaming usage skyrocketed thanks to cheaper data plans, with more than half of Myanmar’s population now online.
A review by Reuters found more than a dozen recently created YouTube channels that promoted Myanmar election-related misinformation while posing either as news outlets or political programmes. Myanmar fact-checking organisations say hundreds of such channels appeared in 2020.
The reach of those channels varied, with the most popular accumulating millions of views, far more than most accredited media. A November video falsely stating that the ruling party had already picked a new president drew more than 350,000 views.
Half of the channels had their corresponding Facebook pages deleted by Facebook in the last two months for violating its content rules. But YouTube links from the same banned creators were still being widely shared on Facebook.
PROBLEMATIC CONTENT
YouTube states in its guidelines that it removes political misinformation only when it pertains specifically to false claims about a voting process, or if it violates other rules.
“Our Community Guidelines prohibit spam, scams, or other manipulated media, coordinated influence operations, and any content that seeks to incite violence,” a YouTube spokeswoman told Reuters.
She said YouTube had “terminated” a number of channels for violating its hate speech and harassment policies, but did not comment on whether the platform had a Burmese-language team or a local “Trusted Flaggers” programme that allows recognised groups to flag problematic content.
YouTube has argued in blog posts that it also reduces the spread of misinformation by pushing people to authoritative sources.
But democracy advocates say that is not enough.
A recent YouTube search for the word “fraud” in the Burmese language returned, as its third result, a channel full of videos alleging, without evidence, that the ruling party committed fraud in the Myanmar election.
Other searches for election-related terms surfaced additional channels containing political misinformation among the top results.
Yatanar Htun, a director of the Myanmar ICT for Development Organization (MIDO), which monitors online hate speech, told Reuters that YouTube videos had been included in fake emails to voters that spoofed messages from Myanmar authorities to allege fraud and foreign interference.
Facebook deleted the original page sharing the videos, but YouTube was slow to act, Htun said.
After Reuters asked for comment, YouTube took down one of the videos linked in the emails and removed a channel for harassment. But another channel used to allege fraud remains on the site.
By Reuters