A Concise Social Media Design Election Advocacy Guide for 2024
With so many countries holding elections, having concise, effective, and readily implementable asks of online platforms is key. We recommend two.
Over 60 countries representing more than 4 billion people will hold elections in 2024, leading some to call it the biggest election year in history. As communications increasingly move online, technology platforms will again play a central role in global democratic processes.
In response, the USC Marshall School’s Neely Center, in partnership with the Council on Technology and Social Cohesion and the Psychology of Technology Research Network, is launching this brief Social Media Design Election Advocacy Guide. It is meant to focus the efforts of those working to ensure that elections are free from undue manipulation by small groups that seek to mislead, divide, and prevent democratic processes from concluding successfully. Drawn from the lessons of past elections, these actionable steps can significantly counter the efforts of small groups to use technology to mislead and disrupt democratic processes.
Our Advocacy Guide addresses three key threats we have seen during past democratic processes:
Small groups exploit engagement-based systems with few repercussions.
Online incentives lead publishers and politicians to promote democracy-damaging content.
Enforcement against content and actors for policy violations has had limited efficacy and has created legitimacy issues.
Because a great deal of harmful content is inappropriate for enforcement, previous efforts have shown that content-neutral, design-based changes are the most effective. Many such changes have been implemented by tech companies in past elections, demonstrating that they are actionable. That is why our Advocacy Guide makes two key recommendations, building on the Neely Center’s Design Code for Social Media.
Especially for content that relates to political and social issues in the context of global elections:
Do not recommend content or actors based on what gets more engagement. Instead, amplify information that broad groups of users perceive to be high quality.
We know that removing engagement incentives from Facebook’s News Feed reduced risk in numerous crises; it became part of the election playbook, and some of the changes were made permanent. Building on this, more engagement incentives should be replaced with optimization for explicit quality signals from diverse users, across companies, platforms, and products.
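To make the contrast concrete, here is a minimal sketch, in Python, of the two ranking approaches. Everything in it is a hypothetical illustration: the Item fields (engagement_score, quality_survey_score, rater_diversity) are stand-ins for whatever engagement and survey-based quality signals a platform actually collects, not any company’s real schema.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    engagement_score: float      # clicks, shares, reactions (hypothetical signal)
    quality_survey_score: float  # fraction of sampled users rating it "worth my time"
    rater_diversity: float       # 0..1, how broad the sample of raters is (hypothetical proxy)

def rank_by_engagement(items: list[Item]) -> list[Item]:
    # The pattern the guide recommends moving away from:
    # whatever provokes the most reactions rises to the top.
    return sorted(items, key=lambda i: i.engagement_score, reverse=True)

def rank_by_perceived_quality(items: list[Item]) -> list[Item]:
    # The recommended alternative: amplify what broad, diverse groups of
    # users explicitly say is high quality, discounting scores that come
    # from a narrow slice of raters.
    return sorted(
        items,
        key=lambda i: i.quality_survey_score * i.rater_diversity,
        reverse=True,
    )

if __name__ == "__main__":
    feed = [
        Item("outrage-bait", engagement_score=9.5,
             quality_survey_score=0.2, rater_diversity=0.3),
        Item("local-election-explainer", engagement_score=2.1,
             quality_survey_score=0.8, rater_diversity=0.9),
    ]
    print([i.item_id for i in rank_by_perceived_quality(feed)])
    # -> ['local-election-explainer', 'outrage-bait']
```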
Implement transparent, sensible rate limits on actions that could be used to manipulate elections (e.g. posting, commenting, sharing, inviting, messaging, forwarding).
We know that networks of propagandists (so-called dark PR firms) continue to operate by exploiting the ability of small groups to hyper-engage with each other’s content and drive the online distribution of divisive messages. According to January 6 investigators, a small group of users was responsible for much of the growth of “Stop the Steal” groups, echoing previously identified issues with “invite whales” who drive the growth of problematic groups. Similar patterns exist across platform features, with small groups of users driving disproportionate harm through outsized engagement. Sensible limits could be based on what over 90% of users need, given that the most problematic activity often comes from the top 1%.
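As a rough illustration of how such a percentile-based cap might be derived, here is a sketch in Python. The nearest-rank percentile, the 99% threshold, and the simulated counts are all assumptions chosen to mirror the 90%/1% framing above, not any platform’s actual method.

```python
import math

def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest value covering pct% of the data."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def derive_rate_limit(daily_action_counts: list[int], pct: float = 99.0) -> int:
    # Cap each action (posting, inviting, forwarding, ...) at a level
    # that covers what the vast majority of users actually do.
    return int(percentile([float(c) for c in daily_action_counts], pct))

def is_rate_limited(todays_count: int, limit: int) -> bool:
    # Transparent check: the cap binds once today's count reaches the limit.
    return todays_count >= limit

if __name__ == "__main__":
    # Simulated daily invite counts: most users invite 0-5 people,
    # while a tiny "invite whale" tail invites hundreds.
    counts = [0] * 700 + [1] * 200 + [3] * 80 + [5] * 15 + [120, 250, 400, 800, 1500]
    limit = derive_rate_limit(counts)
    print(f"daily invite cap: {limit}")   # covers >=99% of users
    print(is_rate_limited(800, limit))    # True: the whale is throttled
    print(is_rate_limited(4, limit))      # False: a typical user is unaffected
```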
We hope that by limiting our suggestions to these two actions, we can provide resource-constrained organizations working on election efforts with basic, achievable goals that have proven to be among the most effective in previous elections. Our goal is not to ensure that 2024 elections are decided in favor of any particular group or that information distributed reflects any particular perspective. Outside of universally abhorrent speech, we take no position on what content should or should not be allowed on platforms.
Rather, our goal is to ensure that online discussions are truly democratic, given that the online space is increasingly the main space of societal debate. Many of us are afraid to engage in online political discussions due to the possibility of being attacked by the especially motivated actors described above. Our hope is that these changes would make it harder for small groups of extremely motivated partisans to drown out the vast majority of citizens who value compromise, care about others, and simply want to live better lives.
Our goal is also not to villainize technology companies, whose efforts we are learning from and building upon. We are hopeful that by providing clear asks of what we expect from technology companies, we will provide a path for even technology’s worst critics to acknowledge progress. To that end, we would suggest that society work towards metrics that can help us understand how companies are performing, not relative to some impossible ideal where nothing bad happens online, but rather relative to competitors. Advertisers, investors, and consumers can then choose which platforms to support with their money and time.
If companies do indeed change their systems to be more democratic and reward quality content, we expect that:
Users will report more positive experiences (e.g. greater learning) and fewer negative experiences (e.g. less content that affects them personally or that is perceived to be misleading).
The most widely distributed content and publishers will be perceived to be informative and/or connecting, rather than divisive and/or misleading.
Simple tests to game systems by commenting, posting, inviting, messaging, or sharing excessively will have limited effectiveness, suggesting that online distribution has been made more democratic and harder to manipulate.
All of the above conditions can be tested outside of technology companies, so that we can collectively verify the progress made. Platforms have shown the efficacy of user experience surveys, and our polling efforts have shown that those surveys can be replicated externally. While CrowdTangle once enabled a more systematic view into the most distributed content, it is still possible for stakeholders to notice what is going viral within any ecosystem and to highlight it to the general public. Platforms often note that they have a high bar for recommending content and connections, so if any problematic groups or content are surfaced in recommendations, they can be flagged publicly. Testing whether limits on excessive commenting, posting, inviting, messaging, or sharing are indeed in place is simply a matter of trying to hyper-engage, as many manipulative actors do, and then seeing whether such tactics work.
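To illustrate that last test, here is a minimal sketch of an external probe. The SimulatedPlatformClient is a stand-in we invented so the example runs on its own; an actual audit would exercise a real account through a platform’s ordinary interface rather than this simulated one.

```python
class SimulatedPlatformClient:
    """Stand-in for a real platform; here, sharing silently fails
    once a (hidden) daily cap is exceeded."""
    def __init__(self, hidden_daily_cap: int = 25):
        self._cap = hidden_daily_cap
        self._count = 0

    def share(self, post_id: str) -> bool:
        self._count += 1
        return self._count <= self._cap

def probe_rate_limit(client, attempts: int = 500) -> int | None:
    """Hyper-engage the way a manipulative actor would and record
    where (or whether) the platform pushes back."""
    for n in range(1, attempts + 1):
        if not client.share(post_id=f"probe-{n}"):
            return n  # first refused action: a limit exists here
    return None  # no limit observed within `attempts` actions

if __name__ == "__main__":
    hit = probe_rate_limit(SimulatedPlatformClient())
    if hit is None:
        print("No share limit observed: distribution can be flooded.")
    else:
        print(f"Sharing throttled at attempt {hit}: a limit is in place.")
```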
As a species, humans are remarkable for their ability to collaborate in adaptive ways. Recent technological developments have challenged that ability, but as with previous technological advances, we are confident in our collective capacity to adapt to technological change and create a new narrative for technology’s impact on elections. Most people, on any side of a political debate, are well-meaning, thoughtful, and interested in creating a better future. We just need to adjust the way technology works so that all of us can take advantage of its promise for improved dialogue and adaptive collaboration. Hopefully, we can start with 2024, the year of global elections.
Note that we are happy to work directly with civil society groups who are interested in advocating for these measures. Please do feel free to email raviiyer@marshall.usc.edu to begin that collaboration.