The True Path to Fewer "regretted user minutes" on Twitter in 2023
Musk's declared objective function for Twitter is likely to approximate time spent. Optimizing for highly valuable experiences would be a better path, but that may be at odds with business goals.
Recently, Elon Musk suggested that Twitter's objective function will be to optimize for unregretted user-minutes. I have previously written about the level of detail needed for true algorithmic transparency, and this is a good case study. Unless he does something unexpected, he is likely just saying that Twitter will more or less optimize for time spent and the associated ad revenue.
The “unregretted” part sounds customer-focused, but as many users pointed out immediately, the big question is how he defines and measures it. One way would be to measure how likely any given tweet is to get us to slow down, read it, and perhaps even comment on it, and then subtract out any times we block, hide, or report the tweet. However, most systems get very little negative feedback, since those actions are hidden beneath menus, so the net effect of this subtraction would barely affect the system. For all intents and purposes, you would be optimizing for time spent on any tweet, which often prioritizes divisive and attention-grabbing content over informative content.
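To make the arithmetic concrete, here is a minimal sketch of the scoring scheme described above. All signal names and weights are illustrative assumptions, not Twitter's actual ranking code; the point is only that when negative actions are rare, subtracting them barely moves the score.

```python
# Hypothetical sketch of "engagement minus negative feedback" scoring.
# All weights and signal names are illustrative assumptions.

def score_tweet(dwell_seconds: float, replied: bool,
                blocked: bool, reported: bool) -> float:
    """Engagement-style score with negative feedback subtracted."""
    engagement = dwell_seconds + (30.0 if replied else 0.0)
    # Blocks and reports are buried in menus, so they fire on only a
    # tiny fraction of impressions; the penalty rarely applies.
    penalty = (60.0 if blocked else 0.0) + (60.0 if reported else 0.0)
    return engagement - penalty
```

If only a fraction of a percent of impressions ever produce a block or report, the expected penalty is tiny, and ranking by this score is effectively ranking by engagement alone.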
Another common way to measure this would be to survey users about which tweets they regretted spending time on. It isn't possible to survey users about most content without annoying them, so you would likely need to build a model that predicts what a given user would say. That predictive model would use information like the text of a tweet or the history of the tweeter to predict regret. Such predictions will be limited by language coverage, since many languages lack adequate training data, and by the availability of user history, which excludes new users, who often do the worst things.
Most importantly, depending on how you phrase the question, it won't cover tweets that are merely “meh”. The psychology of regret is that, in the long term, you generally regret things that you do not do (omission) rather than things you do (commission). If I look at my timeline, I don't generally regret reading individual tweets. Rather, I broadly regret the time I spend overall in life when I look at Twitter reflexively rather than spending time with my kids or my dog. It is the opportunity cost that is regrettable for many, and this type of regret won't be captured in that kind of survey.
I'll repeat something I said internally during my time at Facebook, which has previously been quoted publicly.
Rather than optimizing for engagement and then trying to remove bad experiences, we should optimize more precisely for good experiences.
If you really want to help users spend their time well on social media, you need to figure out ways to find explicitly positive experiences and magnify those. Most of us have had great experiences on social media as well; there are cases where I've learned something truly interesting or connected with someone at a deeper level, and I'm truly grateful to these services for that. However, there are only so many of those experiences to be had, and it is far more difficult to manufacture more of them than it is to create content that is simply “unregretted”. The value to the user and the value to the company are not aligned.
This is not an impossible task. You could build a user interface that asks users for their explicit desires, as Facebook did here. Of course, this signal will only be as impactful as you decide to make it. You could also survey users for these positive experiences and attempt to predict and optimize for these rarer, explicitly valuable experiences. For most people, such experiences tend to be less divisive and more trustworthy as well, so there are societal benefits to be had in addition to the personal benefits. Both of these signals can provide valuable predictive power on truly valuable experiences, but their ultimate impact will depend on how they are weighted in any final algorithm, which is why true algorithmic transparency needs to reveal those weights. At USC's Neely Center, we are working on a tracking poll of people's positive (and negative) experiences, to be launched in 2023, to help incentivize this kind of optimization. We would welcome help from interested parties, as such polls are expensive and we would love to be able to do more sub-group and international analyses eventually.
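As a toy illustration of why the weights matter, consider a final ranking score that blends a predicted engagement probability with a predicted probability of a truly valuable experience. The weights below are made up; the point is that a positive-experience signal can exist in the formula while barely influencing the ranking, which is why transparency about the weights themselves matters.

```python
# Hypothetical blended ranking score; the default weights are
# illustrative assumptions, not any platform's real values.

def final_rank_score(p_engage: float, p_valuable: float,
                     w_engage: float = 1.0, w_valuable: float = 0.1) -> float:
    """Blend predicted engagement with predicted positive value.

    With a small w_valuable, the 'good experience' signal is present
    but nearly inert; only the published weights would reveal that.
    """
    return w_engage * p_engage + w_valuable * p_valuable
```

For example, a tweet with high predicted value (`p_valuable=0.9`) but modest engagement can still rank below a divisive, high-engagement tweet unless `w_valuable` is raised substantially.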
I’m writing this on December 31st - a time when many of us make New Year’s resolutions to have fewer regrets in 2023. One of my resolutions is to continue to push social media to be better for society. If Twitter truly wants to have users avoid regret for the time they spent on Twitter, it should explicitly optimize for positive experiences rather than making changes that essentially aim to maximize time spent on the platform.
Happy New Year and may we all have fewer regretted minutes online in 2023.
Below are other articles we are reading compiled by Joo-Wha Hong, Human-AI Interaction Researcher at the USC Marshall School.
Stein, J.-P., Breves, P. L., & Anders, N. | New Media & Society | 2022
When consuming media content, people often feel attached to celebrities or even fictional characters, a phenomenon called parasocial interaction. This type of interaction has been strengthened by the emergence of social media, and there is now a new type of celebrity: the virtual influencer. Superficially, people seem to interact with virtual influencers just as they do with human influencers. However, the study uncovered distinctive patterns in parasocial interactions with virtual influencers.
Banks, J. & Bowman, N. | International Journal of Social Robotics | 2022
As machines become more anthropomorphic, people have started to consider whether they can become moral beings like humans. Believing that machines have morality would bring huge shifts in the human-machine relationship, because it raises the question of whether we should treat machines ethically. Banks and Bowman created a new scale to help researchers examine when people perceive moral patiency in machines.
Johnson, D. B. | SC Media | December 9, 2022
The emergence of ChatGPT, which enables people with no coding skills to interact with algorithms conversationally, has brought attention to, and hopes for, making AI technology more accessible to the public. However, people are also concerned about the cybersecurity risks ChatGPT may bring, including generating malware code by bypassing its restrictions. This article presents experts' opinions on the role of ChatGPT in cybersecurity.
DeGeurin, M. | Gizmodo | December 21, 2022
Deepfakes are a complicated technology because they allow realistic imitation of other people, and the question is to what extent such mimicry should be allowed. An Instagram account posts deepfake videos of Zuckerberg, and, surprisingly, Zuckerberg and his company have reacted to the videos in different ways. This incident may highlight the need for specific guidelines on deepfake videos.