
Op-ed: We can’t rely solely on Silicon Valley to tackle online hatred

November 24, 2018

The Globe and Mail published this op-ed on November 12th, allowing Heidi Tworek and Fen McKelvey to share the core ideas of Poisoning Democracy: What Canada Can Do About Harmful Speech Online, a report published on November 8th by the Public Policy Forum.

It is increasingly clear that online speech contributes to offline violence and fear. In the United States, demonization and denigration have become regular parts of political discourse, whether the targets are political opponents or scapegoated groups such as Jewish congregants, migrants fleeing Central America or outspoken women. Hatred and fear on social media have led to violence in Myanmar, Sri Lanka, Kenya and elsewhere.

Canada has not avoided these developments. Online hatred seems to have partly motivated the 2017 mass shooting in a Quebec mosque and the 2018 vehicle attack in Toronto. More broadly, right-wing extremism is increasing rapidly online.

Hate, abuse and harassment are all forms of what we call “harmful speech.” Harmful speech is not limited to social media, but these platforms can make it easier for hateful ideologies to spread, and for individuals to target other users with threats of violence. Foreign actors, too, have found social media platforms a convenient means to pursue political aims, including by promoting social conflict on issues of race, religion and immigration.

Canada has laws to address some of the most problematic forms of harmful speech, including hate propaganda, threats of violence and foreign interference in elections. The agencies responsible for enforcing these laws need the resources and political backing to take stronger action.

However, the social media companies themselves have a critical role to play. Right now, the vast majority of harmful speech is dealt with (or not) through the enforcement of platforms’ own community guidelines or standards. These policies have been developed in response to tragedies, user complaints, company branding exercises, and – to an extent – national laws. Two figures show the scale of this issue. In the first three months of 2018, Facebook took action on 2.5 million pieces of hateful content. Between April and June this year, YouTube users flagged videos as hateful or abusive more than 6.6 million times.

Despite their laudable efforts, platforms struggle to enforce their content moderation policies in ways that are timely, fair and effective. Just a few days after 11 people were killed in a mass shooting at a Pittsburgh synagogue, Twitter allowed “Kill all Jews” to trend as a topic on the platform after an alleged hate crime in Brooklyn. And when social-media companies do apply their policies to high-profile users, such as when multiple platforms banned Infowars’ Alex Jones, they can face a backlash and even threats of government action.

Platform companies cannot solve these problems alone. They need clearer guidelines from governments, and greater assistance from civil society groups and researchers. In return, they need to be more transparent and responsive to the individuals and communities affected by their policies.

We make three recommendations to pursue those goals in Canada.

First, the federal government should compel social media companies to be more transparent about their content moderation, including their responses to harmful speech. Some platforms are doing much better than just a year ago. However, it should not be up to their own discretion to inform Canadians about how our online speech is being governed.

Second, governments, foundations, companies and universities need to support more research to understand and respond to harmful speech, as well as the related problem of disinformation. Other democracies are doing a much better job than Canada in this area.

Finally, we propose a Moderation Standards Council. Similar to the Canadian Broadcast Standards Council, the council would convene social media companies, civil society and other stakeholders to develop and implement codes of conduct to address harmful speech. The council would share best practices, co-ordinate cross-platform efforts and improve the transparency and accountability of content moderation. It would also create an appeals process to address complaints. We believe such a council would provide a fairer, better co-ordinated and more publicly responsive approach to harmful speech online.

Our recommendations strike an appropriate balance between the protection of free expression and other rights, recognizing that expression is not “free” for people who face hate, threats and abuse when engaging in public debates. Our recommendations also balance public oversight with industry viability. More co-operation on these issues with government and civil society makes good business sense for social media companies.

Above all, we hope to foster broader public debate on this issue. Responses to harmful speech should not be decided for us in Silicon Valley boardrooms or in offices on Parliament Hill alone. The rules for speech online should be subject to public input and oversight. The poisoning of democracy is a serious and complex problem. It should be addressed democratically.
