Addressing Amplification is more important than Content Moderation (at Twitter too)
Social media is out of balance. The Twitter Files show that content moderation really is a dead end, and we need to shift the conversation toward design, algorithmic amplification, and incentives.
A few weeks ago, I saw Tim Wu speak on the role of antitrust at a Knight Foundation event, and as someone new to antitrust law, I found this quote helpful for understanding how (some) regulators see their role (around 15:30 in this video):
The role of government…is dealing with…imbalances…it’s hard for government to know exactly how things should turn out…
We are a bunch of people sitting in a set of offices just like anyone else…far away from things…but we can be sensitive to when things get out of whack.
We try to restore balance to these systems that are meant to be balanced to work.
One of the main reasons why I remain concerned about social media’s effect on society is this lack of balance. Some people focus on left/right balance, but a more principled, viewpoint-neutral approach to restoring balance would be to focus on where publishers are being pushed to say outrageous things that they themselves are not proud of. Given the imbalances in who gets distribution, social media does not end up reflecting the authentic views of average people, but rather the views of professionals willing to engage in gladiatorial debate and to optimize their level of outrage for maximum distribution. Publishers and politicians, who run tons of A/B tests to understand what does better or worse on social media, understand this imbalance best. A few quotes:
From Senator Ben Sasse’s book, Them (p.111 in my paperback edition):
Provocative social media is the only profitable social media. The incentive structure in the media complex rewards pushing the gas, not tapping the brakes…the sharper-tongued the post, the better…
some of the political personalities…want to do something that is more useful…than simply riding another boring wave of outrage. But how do they escape a system whose primary fuel is indignation? I have interviewed some celebrities who have tried to break out of the vicious cycle of rage-inflammation by turning their attention to uplifting stories or by trying to introduce some nuance into outrage-of-the-day coverage. But guess what happens?...No one clicks, metrics plummet…some people are okay with losing their notoriety…but not many people. Most learn their lesson and throw themselves back into the outrage loop.
Ezra Klein, cofounder of Vox, writes in Why We’re Polarized (p. 152 hardback):
Every newsroom in the country subscribes to some service or another that tracks traffic in a gamified, constantly updating interface…we don’t just want people to read our work. We want people to spread our work….But people don’t share quiet voices. They share loud voices. (Social platforms) are about saying I’m a person who cares about this, likes that, and loathes this other thing.
It’s interesting that Elon Musk himself initially suggested that he aimed “to serve center 80% of people, who wish to learn, laugh, and engage in reasoned debate”, but has found himself saying increasingly outrageous things while touting how Twitter’s usage continues to grow. I have no doubt that he is watching which messages lead to more engagement for him and his company, and that this influences what he chooses to write - which, as with most publishers, is trending toward the more outrageous and divisive. This isn’t people freely expressing their opinions. This is people following their financial incentives and dominating discourse to the detriment of us all.
How do we solve this? Not with content moderation. Echoing Tim Wu’s point above, I have pointed out that the people sitting in offices at tech companies are similarly unequipped to scalably moderate conversations to create the outcomes we want (also see Mike Masnick’s post here). My opinion is largely informed by years of experience doing this work at Facebook, but the experience of Twitter, recently illuminated in the “Twitter Files”, suggests the same thing. Here is a quote from the Twitter Files:
"we currently analyze tweets and consider them at a tweet-by-tweet basis which does not appropriately take into account the context surrounding".
Because moderation processes run by time-limited, poorly paid contractors cannot take context into account, Twitter relied on more subjective judgments by senior leaders to make important decisions. While some may see this as nefarious, I see it as inevitable when you try to treat something inherently nuanced with inappropriate tools (scaled content moderation). You end up constantly bending your policies or defining new policies to address the fact that what was said often matters less than the context and situation in which it was said. Accounting for context is impossible in a content moderation paradigm that emphasizes speed, standardized policies, and scale. I knew this from my time at FB, and clearly Twitter had the same issue.
At the same Knight Foundation event, Yoel Roth spoke, and I asked him whether crowdsourcing approaches like Birdwatch could help address the limits of content moderation. Both he and Kara Swisher suggested that such approaches could not replace content moderation and amounted to having users do the company’s work for it. I disagree, though in fairness, my question referenced only one specific example of crowdsourcing. Crowdsourcing has a long history of study, and there are lots of ways to leverage that history beyond Birdwatch (e.g. scoring content by the diversity of the positive engagement it receives rather than by simple engagement volume, as sketched below). But in rewatching the video, both he and Ms. Swisher focused primarily on content moderation. Whether in the Twitter Files or in this discussion, I didn’t see any substantive discussion of alternatives.
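To make that idea concrete, here is a minimal sketch of what “diversity of positive engagement” could mean. This is emphatically not Birdwatch’s actual algorithm; the viewpoint “cluster” labels are a hypothetical input (e.g. derived from users’ past engagement patterns), and the weighting is illustrative only:

```python
from collections import Counter

def diversity_weighted_score(positive_engagements):
    """positive_engagements: list of dicts like {"user_id": ..., "cluster": ...},
    where "cluster" is a hypothetical viewpoint-cluster label for the engaging user."""
    if not positive_engagements:
        return 0.0
    clusters = Counter(e["cluster"] for e in positive_engagements)
    total = sum(clusters.values())
    # Effective number of clusters (inverse Simpson index): ~1 when one cluster
    # dominates, approaching the number of clusters when engagement is spread evenly.
    effective_clusters = 1.0 / sum((n / total) ** 2 for n in clusters.values())
    # Reward breadth of appeal more than raw volume (square root dampens volume).
    return effective_clusters * (total ** 0.5)

# A post liked only by one faction scores lower than one with the same number
# of likes spread across several factions.
one_faction = [{"user_id": i, "cluster": "A"} for i in range(100)]
cross_faction = [{"user_id": i, "cluster": "ABCD"[i % 4]} for i in range(100)]
print(diversity_weighted_score(one_faction))    # 10.0
print(diversity_weighted_score(cross_faction))  # 40.0
```

The design choice being illustrated is simple: raw engagement rewards whatever one faction finds most inflammatory, while diversity-weighted engagement rewards content that earns positive responses across factions.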
Evidence-based alternatives exist, and at the same event I was heartened to talk with academic organizations and congressional offices working to improve algorithmic transparency. However, transparency by itself won’t fix things unless we have an idea of what we are trying to fix and how to fix it. For example, I can have a perfect view of a car’s engine, but if I don’t know what kinds of issues to look for (e.g. viral problematic content is the metaphorical equivalent of an oil leak) and how to fix them (e.g. which algorithmic inputs led to that leak), that view may only serve a psychological function.
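As an illustration of the kind of diagnosis transparency should enable, here is a hypothetical sketch: given a simple linear ranking model (the feature names and weights below are assumptions for illustration, not any platform’s real model), show which inputs contributed most to a problematic post’s score:

```python
# Hypothetical feature weights for a toy linear ranking model; purely illustrative.
RANKING_WEIGHTS = {
    "predicted_reshares": 3.0,
    "predicted_comments": 2.0,
    "predicted_likes": 1.0,
    "author_follow_rate": 0.5,
}

def explain_score(features):
    """Return each feature's contribution to the final ranking score, largest first."""
    contributions = {
        name: RANKING_WEIGHTS.get(name, 0.0) * value
        for name, value in features.items()
    }
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)

# For a viral, anger-inducing post, most of the score typically comes from
# predicted reshares and comments, i.e. exactly the signals outrage optimizes.
viral_post = {
    "predicted_reshares": 0.9,
    "predicted_comments": 0.8,
    "predicted_likes": 0.3,
    "author_follow_rate": 0.1,
}
for name, contribution in explain_score(viral_post):
    print(f"{name}: {contribution:.2f}")
```

Transparency that stops at publishing the weights is the engine view; attributing a specific harmful outcome to specific inputs is what tells you which part to fix.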
While many prominent and obviously thoughtful figures working on tech and its societal impact still focus on content moderation, my view is that we need to shift the conversation toward more promising alternatives. I have previously written about Subjective Measurement, Designing for Well-Being, and Algorithmic Value Alignment as alternative solutions, but clearly more needs to be written to both surface the evidence that already exists and inform new directions. At the Psychology of Technology Institute, we have begun such work in collaboration with like-minded groups, but we would welcome hearing from others who would like to join these efforts to shift the conversation.
Every piece of content you see in your social media feed (even a reverse chronological feed) is a function of the design and algorithms of that system. In contrast, content moderation will only ever affect a small fraction of content, and trying to expand those efforts will only lead to more backtracking, unfair over-enforcement, and controversy. If Elon Musk sincerely wanted to make Twitter a place that welcomes moderate voices, he would iterate on the design and improve the algorithms, removing the distorting incentives for outrage-bait and hyper-posting (a sketch of what such a change could look like follows below), rather than focusing on the relatively tiny amount of content affected by content moderation. However, his financial incentives lie in the other direction, so it is likely up to the rest of us to stop focusing on content moderation and start designing the systems we want.
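Here is a minimal sketch, under assumed signals and weights, of the kind of ranking change this implies: re-ranking the same candidate posts with a per-author cap (blunting hyper-posting) and a penalty on a predicted-outrage signal (blunting outrage-bait). Both signals are hypothetical model outputs, not any platform’s real features:

```python
from collections import defaultdict

def rank_feed(candidates, max_per_author=2, outrage_penalty=2.0):
    """candidates: list of dicts like
    {"id": ..., "author": ..., "engagement_score": ..., "predicted_outrage": ...},
    where both scores are assumed model outputs in [0, 1]."""
    def adjusted(post):
        # Downweight posts the (assumed) outrage classifier flags as rage-bait.
        return post["engagement_score"] - outrage_penalty * post["predicted_outrage"]

    ranked, per_author = [], defaultdict(int)
    for post in sorted(candidates, key=adjusted, reverse=True):
        if per_author[post["author"]] >= max_per_author:
            continue  # hyper-posters no longer crowd everyone else out
        per_author[post["author"]] += 1
        ranked.append(post)
    return ranked

candidates = [
    {"id": 1, "author": "pundit", "engagement_score": 0.9, "predicted_outrage": 0.8},
    {"id": 2, "author": "pundit", "engagement_score": 0.8, "predicted_outrage": 0.7},
    {"id": 3, "author": "friend", "engagement_score": 0.4, "predicted_outrage": 0.1},
]
print([p["id"] for p in rank_feed(candidates)])  # [3, 2, 1]: the calmer post now ranks first
```

The point is not these particular numbers but that the incentive structure lives in exactly this kind of scoring function, which the platform fully controls.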
—
Do you do research, create policy, or build systems relating to the use of design, algorithms, or measurement to intentionally improve the human-technology relationship? If so, please do get in touch (riyer@psychoftech.org) as we would love to highlight your work. And please do forward this to others who do such work.
—
Below are other articles we are reading, compiled by Joo-Wha Hong, Human-AI Interaction Researcher at the USC Marshall School.
Shaikh, S. J. & Cruz, I. F. | AI & Society | 2022
As AI becomes more interactive and humanlike, we are approaching the point where we should regard it as a teammate rather than a tool. One question is which factors influence this new organizational interaction between AI and humans. Among many potential factors, these researchers focused on the effects of time scarcity on human-AI collaboration.
Ratan, R. et al. | Computers & Education | 2022
Online courses have become more common due to the recent pandemic, so researchers have become interested in whether asynchronous online courses can substitute for synchronous ones and how to make them more effective. Ratan and his colleagues conducted a survey comparing evaluations of synchronous and asynchronous online classes. While most of the study’s results are as expected, some findings are surprising.
Infants’ Prediction of Humanoid Robot’s Goal-Directed Action
Manzi, F. et al. | International Journal of Social Robotics | 2022
People are at their most unbiased and innocent as infants. So, if infants react to robots just as they do to humans, it suggests that our responses to robots are shaped by nature rather than by learned knowledge or attitudes. Manzi and his colleagues tested their hypotheses by examining infant-robot interaction with eye tracking and found interesting results.
A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing
Heaven, W. D. | MIT Technology Review | November 25, 2022
We usually say experience is the best way to learn, but this may not always be the case, especially for AI. There is now an AI that learns to play games simply by watching other gamers play, using a method called Video Pre-Training, and it shows sophisticated moves. This article explains why an AI becoming a good video game player is important.
Meta’s game-playing AI can make and break alliances like a human
Heaven, W. D. | MIT Technology Review | November 23, 2022
This is another article about AI playing video games, this time focusing on its social capabilities. The game introduced in the article, Diplomacy, is quite different from other games that AI has played because it requires political negotiation between players. Is this a sign of AI negotiators in real life? Check out the article and guess whether it will happen.
Policing In The Metaverse: What’s Happening Now
Marr, B. | Forbes | November 18, 2022
When people gather, we hope only good things happen. Unfortunately, the world does not work that way, and the metaverse is no exception: there is a growing threat of crime there, too. As VR and AR technology develops, the safety of virtual environments should also be guaranteed. Check out what efforts are being made to make this new world safer.