
The meme-ification of politics: Politicians & their ‘lit’ memes

February 11, 2019

Thanks to Grace Chiang, the main author of this article. Published via The Conversation under a Creative Commons license – here’s the original article.

 

In November, during a televised debate about electoral reform, British Columbia Premier John Horgan told the audience, “If you were woke, you’d know that pro rep is lit.”

By “pro rep,” he meant “proportional representation,” an alternative to the current first-past-the-post voting system. By “woke,” he meant socially conscious. By “lit,” he meant, according to the Urban Dictionary, “Something that is f—ing amazing in any sense.” The B.C. NDP soon tweeted his remark, and a meme was born.

This is a federal election year, so Canadians should be ready for a meme-filled 2019. Political memes are increasingly prominent in political discourse, and politicians will be using this latest online strategy to attract, infuriate, persuade or bemuse voters.

It’s therefore worthwhile understanding how memes can shape the tone and perceptions of campaigns or policies. And it’s also useful to look at politicians’ recent attempts to use memes for good and ill.

What is a political meme?

A political meme is a purposefully designed visual framing of a position. Memes are a new genre of political communication, and they generally have at least one of two characteristics — they are inside jokes and they trigger an emotional reaction.

Memes work politically if they are widely — or virally — shared, if they help cultivate a sense of belonging to an “in-group” and if they make a compelling normative statement about a public figure or political issue.

Memes can spread rapidly online and into popular culture due to their shareability — they are easily created, consumed, altered and disseminated. They can quickly communicate the creator’s stance on the subject. The stronger the emotional response provoked by a post, the greater the intent to spread it.

Though memes may spread widely, they usually cater to a specific audience who inhabit a “shared sphere of cultural knowledge.” That audience tends to have self-referential language, cultivating an in-group that can decipher the memes and get the “in joke” while those who aren’t in on the joke cannot. (For an excellent display of this, listen to one of the “Yes Yes No” segments on the Reply All podcast, in which the hosts explain complex, multi-layered memes to a confused non-digital native.)


Op-ed: We can’t rely solely on Silicon Valley to tackle online hatred

November 24, 2018

The Globe and Mail published this op-ed on November 12th, allowing Heidi Tworek and Fen McKelvey to share the core ideas of Poisoning Democracy: What Canada Can Do About Harmful Speech Online. That report was published on November 8th by the Public Policy Forum.

It is increasingly clear that online speech contributes to offline violence and fear. In the United States, demonization and denigration have become regular parts of political discourse, whether the targets are political opponents or scapegoated groups such as Jewish congregants, migrants fleeing Central America or outspoken women. Hatred and fear on social media have led to violence in Myanmar, Sri Lanka, Kenya and elsewhere.

Canada has not avoided these developments. Online hatred seems to have partly motivated the 2017 mass shooting in a Quebec mosque and the 2018 vehicle attack in Toronto. More broadly, right-wing extremism is increasing rapidly online.

Hate, abuse and harassment are all forms of what we call “harmful speech.” Harmful speech is not limited to social media, but these platforms can make it easier for hateful ideologies to spread, and for individuals to target other users with threats of violence. Foreign actors, too, have found social media platforms a convenient means to pursue political aims, including by promoting social conflict on issues of race, religion and immigration.

Canada has laws to address some of the most problematic forms of harmful speech, including hate propaganda, threats of violence and foreign interference in elections. The agencies responsible for enforcing these laws need the resources and political backing to take stronger action.

However, the social media companies themselves have a critical role to play. Right now, the vast majority of harmful speech is dealt with (or not) through the enforcement of platforms’ own community guidelines or standards. These policies have been developed in response to tragedies, user complaints, company branding exercises, and – to an extent – national laws. Two figures show the scale of this issue. In the first three months of 2018, Facebook took action on 2.5 million pieces of hateful content. Between April and June this year, YouTube users flagged videos as hateful or abusive more than 6.6 million times.

Despite their laudable efforts, platforms struggle to enforce their content moderation policies in ways that are timely, fair and effective. Just a few days after 11 people were killed in a mass shooting at a Pittsburgh synagogue, Twitter allowed “Kill all Jews” to trend as a topic on the platform after an alleged hate crime in Brooklyn. And when social-media companies do apply their policies to high-profile users, such as when multiple platforms banned Infowars’ Alex Jones, they can face a backlash and even threats of government action.

Platform companies cannot solve these problems alone. They need clearer guidelines from governments, and greater assistance from civil society groups and researchers. In return, they need to be more transparent and responsive to the individuals and communities affected by their policies.

We make three recommendations to pursue those goals in Canada.

First, the federal government should compel social media companies to be more transparent about their content moderation, including their responses to harmful speech. Some platforms are doing much better than just a year ago. However, it should not be up to their own discretion to inform Canadians about how our online speech is being governed.

Second, governments, foundations, companies and universities need to support more research to understand and respond to harmful speech, as well as the related problem of disinformation. Other democracies are doing a much better job than Canada in this area.

Finally, we propose a Moderation Standards Council. Similar to the Canadian Broadcast Standards Council, the council would convene social media companies, civil society and other stakeholders to develop and implement codes of conduct to address harmful speech. The council would share best practices, co-ordinate cross-platform efforts and improve the transparency and accountability of content moderation. It would also create an appeals process to address complaints. We believe such a council would provide a fairer, better co-ordinated and more publicly responsive approach to harmful speech online.

Our recommendations strike an appropriate balance between the protection of free expression and other rights, recognizing that expression is not “free” for people who face hate, threats and abuse when engaging in public debates. Our recommendations also balance public oversight with industry viability. More co-operation on these issues with government and civil society makes good business sense for social media companies.

Above all, we hope to foster broader public debate on this issue. Responses to harmful speech should not be decided for us in Silicon Valley boardrooms or in offices on Parliament Hill alone. The rules for speech online should be subject to public input and oversight. The poisoning of democracy is a serious and complex problem. It should be addressed democratically.

Poisoning Democracy: The Infographic!

November 8, 2018

My new report, with Heidi Tworek and Fenwick McKelvey, is finally out. Poisoning Democracy: What Canada Can Do About Harmful Speech Online was published today by the Public Policy Forum.

I will be writing more on that report soon. But for now, check out this infographic by my multi-talented co-author, Fen!


New Paper on Inclusion and Global Governance

November 2, 2018

I am late to post this, but I am proud to have published an article in a great special issue of the journal Global Justice: Theory Practice Rhetoric. The special issue, Democratic Inclusion Beyond Borders, was edited by Tomer Perry, and features articles by Terry MacDonald and Annette Zimmermann.

My own article is called “Should International Organizations Include Beneficiaries in Decision-making? Arguments for Mediated Inclusion.” My short answer: Yes, they should, but how to do so is somewhat complicated. The article draws from my PhD dissertation, as well as conversations I’ve had with people like Tomer, who — like me — are trying to figure out how democratic principles and practices might contribute to justice in global governance. Lots more to say on this subject in future publications!

Here is the paper’s abstract:

There are longstanding calls for international organizations (IOs) to be more inclusive of the voices and interests of people whose lives they affect. There is nevertheless widespread disagreement among practitioners and political theorists over who ought to be included in IO decision-making and by what means. This paper focuses on the inclusion of IOs’ ‘intended beneficiaries,’ both in principle and practice. It argues that IOs’ intended beneficiaries have particularly strong normative claims for inclusion because IOs can affect their vital interests and their political agency. It then examines how these claims to inclusion might be feasibly addressed. The paper proposes a model of inclusion via representation and communication, or ‘mediated inclusion.’ An examination of existing practices in global governance reveals significant opportunities for the mediated inclusion of IOs’ intended beneficiaries, as well as pervasive obstacles. The paper concludes that the inclusion of intended beneficiaries by IOs is both appropriate and feasible.

What Europe can teach Canada about protecting democracy

April 18, 2018

Chris Tenove and Heidi Tworek

Originally published April 5, 2018, on The Conversation.

What can we do to shield our democracy from digital manipulation? That’s an increasingly urgent question given the activities of Victoria-based AggregateIQ, Cambridge Analytica and Facebook, not to mention Russia, in recent elections in Europe and the United States.

Canada needs to prepare itself for the 2019 federal election, and the Canadian government is starting to talk more seriously about how to address the risks we face.

The issues of disinformation, hate speech and targeted manipulation of voters are complicated, and the policy solutions are not yet clear. What is clear is that Canada needs new inspiration.

Canada often looks to the U.S. government as either a leader or partner. This time, Canada should look to Europe.

Canada’s electoral rules, norms and procedures bear more similarity to many European countries than to the United States. Like them, we keep our election campaigns short. We have strict rules about campaign financing. We also face the same problem: Our citizens use social media platforms created in the U.S. by CEOs who are often unresponsive to non-American concerns about data privacy or electoral interference.

There are at least three areas where Canada can take inspiration from Europe.


New Report: Digital Threats to Democratic Elections

January 18, 2018

I’m happy to be releasing a new report, Digital Threats to Democratic Elections: How Foreign Actors Use Digital Techniques to Undermine Democracy. I wrote the report together with Jordan Buffie, Spencer McKay and David Moscrop. The project was supervised by UBC political scientists Mark Warren and Max Cameron, two leading thinkers on democratic institutions. The report is being published by UBC’s Centre for the Study of Democratic Institutions.

A year ago, we applied for a SSHRC Knowledge Synthesis Grant (see my earlier thoughts on this valuable genre of grant). When we began evaluating research on the topic in May 2017, there were excellent journalistic investigations and academic working papers, but little peer-reviewed research. That is changing, and our knowledge of foreign digital interference and online misinformation is rapidly increasing. I hope our team’s contribution can help provide an overarching view of what we know, and what we still need to learn.


Thanks to Oliver McPartlin for the report design, and for creating this striking cover image.

I’m pleased with our report, but I realize that some of you might not make it through 50 pages of text and over 20 pages of cited references. I have options for you!

And I don’t think I’m giving too much away, but here is the last paragraph of the report:

A serious concern is that foreign and domestic actors, using digital and non-digital techniques, are creating vicious circles to undermine democracy. The effects of these techniques used by foreign actors – such as exacerbating social cleavages and distrust, or undermining fair participation and institutional effectiveness – can make democratic countries even more vulnerable to future interference. If such vicious circles continue, and the quality and legitimacy of democracy degrades, then it will become increasingly difficult for democratic states to advance their citizens’ interests and resolve social conflicts.

Policymakers, citizens, and researchers therefore need to take serious and swift action. If they do so, many responses to foreign interference may also safeguard democracy from being degraded by domestic actors. And by improving the quality of democratic processes and institutions, we can help make our political systems more resistant to foreign interference. These virtuous circles should be what we aim for when we address digital threats to democracy. (p. 52)

Update: Gold Medal for ‘Life in the Digital Shadows of the Syrian War’

November 15, 2017

I’m delighted to report that the article Life in the Digital Shadow of the Syrian War, written by Naheed Mustafa, won the award for best investigative article at the 2017 Canadian Online Publishing Awards. Hurray to Naheed and to the team at OpenCanada.org!


Eva and Catherine at the COPA gala

The article was part of the series The War is Just a Click Away, which I commissioned and edited last year. As I described in an earlier post, the series was part of a project by me and Taylor Owen to look at how to improve discussions of global affairs through collaborations between digital journalists and academic researchers. The project was funded by a SSHRC Connection Grant and support from CIGI, among other sources.

I could not have hoped for a better journalist to work with than Naheed. But I also want to give thanks for the compelling photos by Chloë Ellingson, the receptiveness of Eva, Som and the OpenCanada gang, sharp editorial suggestions from Tyee Bridge, and great research help from Andrés Delgado, Robert Gorwa and John Woodside.

Like many fulfilling projects, this was a team effort.
