Twitter/X Should Stop Incentivizing Fear Speech in the UK
We know enough about the physics of social media to say that an algorithm that optimizes for reposts and comments will increase the risk of violence.
If you search for the terms “reshare” and “misinformation” on FBArchive.org, which hosts many of the documents that Frances Haugen leaked from Facebook, you’ll find references to many studies showing that optimizing for reshares leads to greater attention to misleading and inciting content. For example, from this discussion of “harmful virality”:
Last half, we found that deep reshares are a major vector for misinfo. For example, when a user sees a reshare of a reshare of a link or a photo, they are 4 times more likely to be seeing misinfo compared to when they see links or photos on News Feed in general. Dampening virality in high-severity topics by demoting deep reshares could dramatically reduce misinfo prevalence.
To Facebook’s credit, these findings were robust enough that the company stopped optimizing important topics for comments and reshares. The finding that viral content is more likely to be misinformation has been replicated in external studies showing that reshared content is more likely to come from untrustworthy sources. In a study of Twitter, the “for you” feed that amplifies more engaging content was found to contain more “emotionally charged, outgroup hostile content”.
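To make the mechanics concrete, here is a minimal sketch of what “demoting deep reshares” means inside a ranking function. The post fields, weights, and the halving-per-depth factor are assumptions made up for illustration; this is not Facebook’s or X’s actual scoring code.

```python
# Toy illustration (not any platform's real ranking code): a feed scorer that
# optimizes for predicted engagement, plus a variant that dampens virality by
# demoting deep reshares on high-severity topics.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_reshares: float   # model's estimate of reshares this post will get
    predicted_comments: float   # model's estimate of comments
    reshare_depth: int          # 0 = original post, 1 = reshare, 2 = reshare of a reshare, ...
    high_severity_topic: bool   # e.g. an ongoing riot or other crisis topic

def engagement_score(post: Post) -> float:
    """Pure engagement optimization: reward expected reshares and comments."""
    return 2.0 * post.predicted_reshares + 1.0 * post.predicted_comments

def dampened_score(post: Post) -> float:
    """Same score, but demote deep reshares on high-severity topics.

    Assumption for the sketch: each extra level of reshare depth halves the
    score, so a reshare of a reshare (depth 2) is shown far less often.
    """
    score = engagement_score(post)
    if post.high_severity_topic:
        score *= 0.5 ** post.reshare_depth
    return score

posts = [
    Post(predicted_reshares=50, predicted_comments=30, reshare_depth=2, high_severity_topic=True),
    Post(predicted_reshares=20, predicted_comments=25, reshare_depth=0, high_severity_topic=True),
]
print(max(posts, key=engagement_score) is posts[0],
      max(posts, key=dampened_score) is posts[1])
# True True: the deep reshare wins under pure engagement ranking,
# but the original post wins once deep reshares are demoted.
```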
We can see this dynamic playing out right now in the UK. If I search for “UK riots” on Twitter/X and compare the most engaging (“top”) content with the latest content, the difference is striking. A chronological sample generally shows benign commentary on the riots and the coverage of them.
In contrast, if you look at the “top” content that is more likely to be recommended to users in their “for you” feed, you see a lot of content that explicitly stokes fear of one side or the other.
This is just one point in time from one feed (mine), but the nice thing about Twitter/X is that you can try it yourself, as the authors of this study did, and compare the most engaging (“top”) content with the latest content related to the UK riots. My guess is that when you try it, you’ll see something similar.
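For intuition on why the two views diverge so sharply, here is a toy sketch: the same handful of posts, ordered once by recency and once by engagement. The example posts and counts are invented for illustration; this is not X’s API or ranking code.

```python
# Toy sketch of why "top" and "latest" give such different pictures of the
# same event. Posts and engagement counts are invented for illustration.
posts = [
    # (text, minutes_ago, total_engagement = likes + reposts + replies)
    ("Live update: road closures near the protest route", 5, 40),
    ("Council statement on tonight's community meeting", 12, 25),
    ("THEY are coming for YOUR neighbourhood, arm yourselves", 180, 9500),
    ("Everything you've heard is a lie, the mob is at the door", 240, 7200),
]

latest = sorted(posts, key=lambda p: p[1])             # chronological ("latest") view
top = sorted(posts, key=lambda p: p[2], reverse=True)  # engagement-ranked ("top") view

print("Latest:", [text for text, _, _ in latest[:2]])
print("Top:   ", [text for text, _, _ in top[:2]])
# The chronological sample surfaces the mundane updates; the engagement-ranked
# sample surfaces the fear-stoking posts, because fear is what gets engaged with.
```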
The reason such content gets reposted is that people understandably want to know what is going on, keep each other safe, and defend their communities, so they share and engage with it. However, an engagement-based feed isn’t going to give them an accurate view of what is happening. There are enough examples of violence that one can paint whatever narrative one wants, and the more extreme the narrative, the better the engagement and reach. Attention seekers craft the most fear-inducing narrative they can in order to get more attention. Unsurprisingly, some people who consume this warped narrative show up to “defend” their community and end up increasing the risk of violence, similar to what happened in the US during the George Floyd protests.
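The incentive is easy to see in a toy model: if expected engagement grows faster than linearly with how extreme a post is, and the feed hands out reach in proportion to engagement, then a small minority of extreme posts captures a disproportionate share of attention. All the numbers below are assumptions chosen for illustration, not measurements.

```python
# Toy model of the incentive described above. Assumption: engagement grows
# superlinearly with how "extreme" a post is, and reach is allocated in
# proportion to engagement. Numbers are illustrative, not measured.
def engagement(extremity: float) -> float:
    return (1.0 + extremity) ** 3  # assumed superlinear payoff for extremity

# Suppose 95% of posts are mild (extremity 0.1) and 5% are extreme (0.9).
posts = [0.1] * 95 + [0.9] * 5
weights = [engagement(e) for e in posts]

extreme_share = sum(w for e, w in zip(posts, weights) if e > 0.5) / sum(weights)
print(f"Extreme posts are 5% of content but get {extreme_share:.0%} of engagement-ranked reach")
# With these assumed numbers, roughly 21% of reach goes to 5% of posts, and the
# gap widens the steeper the engagement curve gets.
```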
Certainly, we should blame those who are profiting off the attention generated by these riots and inciting division. But as long as our algorithms reward such behavior, getting rid of them and/or their content will be a never-ending task. We need to address such issues at the source by removing the bad incentives created by X’s engagement-based algorithms.
"Certainly, we should blame those who are profiting off of the attention generated by these riots and inciting division. But as long as our algorithms reward such behavior, getting rid of them and/or their content will be a never ending task."
Very good. Also thanks for all the references, very valuable.
I see a lot of discussion about free speech, censorship, and so on, but not enough about the role of the algorithms that amplify speech. Do you know of good work that brings these two topics together?
I sometimes call algorithmic amplification "reverse censorship". Sure, censorship is something we don't like, but algorithmic amplification is also a form of censorship. We need to think about the two topics together and propose solutions for both of them.