
New Paper: How Syrian Diaspora Use Digital Media to Pursue Justice

March 20, 2019

I’m very pleased to announce the publication of a new article, Networking justice: digitally-enabled engagement in transitional justice by the Syrian diaspora. The article is part of a special issue of the journal Ethnic and Racial Studies. I’m incredibly grateful for the great leadership and feedback from Maria Koinova and Dženeta Karabegović on this project, and for rich conversations with fellow contributors such as Milana Nikolko, Joanna Quinn, Espen Stokke, and Eric Wiebelhaus-Brahm.

What’s the article about, you ask? In short, it looks at how Syrian diaspora have used digital media to pursue accountability and truth in response to massive rights violations in Syria… and how the Syrian government and other actors use digitally-enabled tactics to fight these efforts.


Illustration by Kevin Tong / OpenCanada.org

The article builds on a line of research I began a few years ago with the project The War is Just a Click Away, a series for OpenCanada that looked at how people experienced the Syrian civil war through digital connections.

Here are the abstract and first few paragraphs. I encourage you to check out the full article — and the full special issue — at Ethnic and Racial Studies:

Abstract

Digital communication technologies (DCTs) introduce new opportunities and challenges for diaspora to advance transitional justice. This article proposes three DCT-enabled mechanisms that shape diaspora engagement with transitional justice politics and processes, developed through an analysis of diaspora responses to rights violations in Syria. First, diaspora can promote transitional justice aims through connective action: loosely-coordinated, transnational mobilizations using social media. Second, DCTs enable diaspora to contribute to crowdsourced documentation of rights violations. Third, diaspora seeking to advance transitional justice may face digital repression by authoritarian governments in their original homelands. The article shows how DCTs may alter the means and opportunities for diaspora to engage in transitional justice activities, including in situations of ongoing conflict or repression in their original homelands. It also demonstrates how digital spaces are strategically engaged by activists, civil society organizations, state governments, and other actors seeking to advance or contest transitional justice aims.

Introduction

Between 2011 and 2013, a photographer working for the Syrian government took pictures of over 6,000 people allegedly killed in government custody, often after being tortured (Human Rights Watch 2015). The photographer, who used the alias “Caesar,” smuggled these images out of Syria on USB sticks and compact discs.

Foreign governments and non-governmental organizations (NGOs) used the images to call for Syrian government officials to be held accountable for violations (United Nations Security Council 2014; Human Rights Watch 2015), a major attempt to introduce transitional justice amidst a conflict. The images were also shared online among Syrian diaspora, including a Spanish citizen who found a photo of her dead brother and launched a criminal complaint in Spanish courts against senior members of the Syrian government (Entous 2017). It is one of several criminal cases in Europe that use digital evidence of violations in Syria (Human Rights Watch 2017).

As this sketch illustrates, digital communication technologies (DCTs) are enabling new practices of transnational advocacy and action. How might these developments affect the field of transitional justice, and in particular the role that diaspora can play in its processes and politics?


The meme-ification of politics: Politicians & their ‘lit’ memes

February 11, 2019

Thanks to Grace Chiang, the main author of this article. Published via The Conversation under a Creative Commons license – here’s the original article.

 

In November, during a televised debate about electoral reform, British Columbia Premier John Horgan told the audience, “If you were woke, you’d know that pro rep is lit.”

By “pro rep,” he meant “proportional representation,” an alternative to the current first-past-the-post voting system. By “woke,” he meant socially conscious. By “lit,” he meant, according to the Urban Dictionary, “Something that is f—ing amazing in any sense.” The B.C. NDP soon tweeted his remark, and a meme was born.

This is a federal election year, so Canadians should be ready for a meme-filled 2019. Political memes are increasingly prominent in political discourse, and politicians will be using this latest online strategy to attract, infuriate, persuade or bemuse voters.

It’s therefore worthwhile understanding how memes can shape the tone and perceptions of campaigns or policies. And it’s also useful to look at politicians’ recent attempts to use memes for good and ill.

What is a political meme?

A political meme is a purposefully designed visual framing of a position. Memes are a new genre of political communication, and they generally have at least one of two characteristics: they function as inside jokes, and they trigger an emotional reaction.

Memes work politically if they are widely — or virally — shared, if they help cultivate a sense of belonging to an “in-group” and if they make a compelling normative statement about a public figure or political issue.

Memes can spread rapidly online and into popular culture due to their shareability — they are easily created, consumed, altered and disseminated. They can quickly communicate the creator’s stance on the subject. The stronger the emotional response provoked by a post, the greater the intent to spread it.

Though memes may spread widely, they usually cater to a specific audience who inhabit a “shared sphere of cultural knowledge.” That audience tends to have self-referential language, cultivating an in-group that can decipher the memes and get the “in joke” while those who aren’t in on the joke cannot. (For an excellent display of this, listen to one of the “Yes Yes No” segments on the Reply All podcast, in which the hosts explain complex, multi-layered memes to a confused non-digital native.)


Op-ed: We can’t rely solely on Silicon Valley to tackle online hatred

November 24, 2018

The Globe and Mail published this op-ed on November 12th, allowing Heidi Tworek, Fenwick McKelvey and me to share the core ideas of Poisoning Democracy: What Canada Can Do About Harmful Speech Online. That report was published on November 8th by the Public Policy Forum.

It is increasingly clear that online speech contributes to offline violence and fear. In the United States, demonization and denigration have become regular parts of political discourse, whether the targets are political opponents or scapegoated groups such as Jewish congregants, migrants fleeing Central America or outspoken women. Hatred and fear on social media have led to violence in Myanmar, Sri Lanka, Kenya and elsewhere.

Canada has not avoided these developments. Online hatred seems to have partly motivated the 2017 mass shooting in a Quebec mosque and the 2018 vehicle attack in Toronto. More broadly, right-wing extremism is increasing rapidly online.

Hate, abuse and harassment are all forms of what we call “harmful speech.” Harmful speech is not limited to social media, but these platforms can make it easier for hateful ideologies to spread, and for individuals to target other users with threats of violence. Foreign actors, too, have found social media platforms a convenient means to pursue political aims, including by promoting social conflict on issues of race, religion and immigration.

Canada has laws to address some of the most problematic forms of harmful speech, including hate propaganda, threats of violence and foreign interference in elections. The agencies responsible for enforcing these laws need the resources and political backing to take stronger action.

However, the social media companies themselves have a critical role to play. Right now, the vast majority of harmful speech is dealt with (or not) through the enforcement of platforms’ own community guidelines or standards. These policies have been developed in response to tragedies, user complaints, company branding exercises, and – to an extent – national laws. Two figures show the scale of this issue. In the first three months of 2018, Facebook took action on 2.5 million pieces of hateful content. Between April and June this year, YouTube users flagged videos as hateful or abusive more than 6.6 million times.

Despite their laudable efforts, platforms struggle to enforce their content moderation policies in ways that are timely, fair and effective. Just a few days after 11 people were killed in a mass shooting at a Pittsburgh synagogue, Twitter allowed “Kill all Jews” to trend as a topic on the platform during an alleged hate crime in Brooklyn. And when social-media companies do apply their policies to high-profile users, such as when multiple platforms banned Infowars’ Alex Jones, they can face a backlash and even threats of government action.

Platform companies cannot solve these problems alone. They need clearer guidelines from governments, and greater assistance from civil society groups and researchers. In return, they need to be more transparent and responsive to the individuals and communities affected by their policies.

We make three recommendations to pursue those goals in Canada.

First, the federal government should compel social media companies to be more transparent about their content moderation, including their responses to harmful speech. Some platforms are doing much better than just a year ago. However, it should not be up to their own discretion to inform Canadians about how our online speech is being governed.

Second, governments, foundations, companies and universities need to support more research to understand and respond to harmful speech, as well as the related problem of disinformation. Other democracies are doing a much better job than Canada in this area.

Finally, we propose a Moderation Standards Council. Similar to the Canadian Broadcast Standards Council, the council would convene social media companies, civil society and other stakeholders to develop and implement codes of conduct to address harmful speech. The council would share best practices, co-ordinate cross-platform efforts and improve the transparency and accountability of content moderation. It would also create an appeals process to address complaints. We believe such a council would provide a fairer, better co-ordinated and more publicly responsive approach to harmful speech online.

Our recommendations strike an appropriate balance between the protection of free expression and other rights, recognizing that expression is not “free” for people who face hate, threats and abuse when engaging in public debates. Our recommendations also balance public oversight with industry viability. More co-operation on these issues with government and civil society makes good business sense for social media companies.

Above all, we hope to foster broader public debate on this issue. Responses to harmful speech should not be decided for us in Silicon Valley boardrooms or in offices on Parliament Hill alone. The rules for speech online should be subject to public input and oversight. The poisoning of democracy is a serious and complex problem. It should be addressed democratically.

Poisoning Democracy: The Infographic!

November 8, 2018

My new report, with Heidi Tworek and Fenwick McKelvey, is finally out. Poisoning Democracy: What Canada Can Do About Harmful Speech Online was published today by the Public Policy Forum.

I will be writing more on that report soon. But for now, check out this infographic by my multi-talented co-author, Fen!


New Paper on Inclusion and Global Governance

November 2, 2018

I am late to post this, but I am proud to have published an article in a great special issue of the journal Global Justice: Theory Practice Rhetoric. The special issue, Democratic Inclusion Beyond Borders, was edited by Tomer Perry, and features articles by Terry MacDonald and Annette Zimmermann.

My own article is called “Should International Organizations Include Beneficiaries in Decision-making? Arguments for Mediated Inclusion.” My short answer: yes, they should, but how to do so is somewhat complicated. The article draws from my PhD dissertation, as well as conversations I’ve had with people like Tomer, who — like me — are trying to figure out how democratic principles and practices might contribute to justice in global governance. Lots more to say on this subject in future publications!

Here is the paper’s abstract:

There are longstanding calls for international organizations (IOs) to be more inclusive of the voices and interests of people whose lives they affect. There is nevertheless widespread disagreement among practitioners and political theorists over who ought to be included in IO decision-making and by what means. This paper focuses on the inclusion of IOs’ ‘intended beneficiaries,’ both in principle and practice. It argues that IOs’ intended beneficiaries have particularly strong normative claims for inclusion because IOs can affect their vital interests and their political agency. It then examines how these claims to inclusion might be feasibly addressed. The paper proposes a model of inclusion via representation and communication, or ‘mediated inclusion.’ An examination of existing practices in global governance reveals significant opportunities for the mediated inclusion of IOs’ intended beneficiaries, as well as pervasive obstacles. The paper concludes that the inclusion of intended beneficiaries by IOs is both appropriate and feasible.

What Europe can teach Canada about protecting democracy

April 18, 2018

Chris Tenove and Heidi Tworek

Originally published April 5, 2018, on The Conversation.

What can we do to shield our democracy from digital manipulation? That’s an increasingly urgent question given the activities of Victoria-based AggregateIQ, Cambridge Analytica and Facebook, not to mention Russia, in recent elections in Europe and the United States.

Canada needs to prepare itself for the 2019 federal election, and the Canadian government is starting to talk more seriously about how to address the risks we face.

The issues of disinformation, hate speech and targeted manipulation of voters are complicated, and the policy solutions are not yet clear. What is clear is that Canada needs new inspiration.

Canada often looks to the U.S. government as either a leader or partner. This time, Canada should look to Europe.

Canada’s electoral rules, norms and procedures bear more similarity to many European countries than to the United States. Like them, we keep our election campaigns short. We have strict rules about campaign financing. We also face the same problem: Our citizens use social media platforms created in the U.S. by CEOs who are often unresponsive to non-American concerns about data privacy or electoral interference.

There are at least three areas where Canada can take inspiration from Europe.
