Regulate Design, not Speech
Recent legislation and court cases over the future of online social platforms often focus on regulating speech. Regulating value alignment through design is a better, more robust alternative.
If you call a taxi and the driver takes you on a circuitous route to maximize the fare, delaying your arrival, would you feel cheated? Something similar happens when you interact with apps whose goals are not aligned with yours. The misalignment may not be complete, and the driver in this case does get you to your destination, but the driver is clearly putting their interests ahead of yours.
The above is a simple example of value misalignment, a topic that has been studied widely in computer science as increasingly common artificial intelligence (AI) powered algorithms pursue objectives that are misaligned with what people actually want. It is sometimes discussed in terms of a world-ending apocalypse in which a super-powerful AI kills us all in a misguided attempt to create wealth or end disease, but there are modern-day examples that are far less theoretical. Often the misalignment is more subtle: the system is optimizing for business objectives that are only sometimes aligned with what customers truly want.
Recent reporting lets us make the problem of value alignment in social media more concrete. Statements detailing algorithmic optimization for “meaningful social interactions” do not give us the transparency we need when we later learn that the actual definition of a “meaningful social interaction” includes anger reactions, comments, and shares, which have been shown to fuel bullying, anger, and misinformation in the political domain. The logic behind the original definition may be reasonable, but the all-too-human tendency to avoid cognitive dissonance leads companies to define user and societal goals as whatever sits closest to the company’s current bottom line, so that even well-meaning decision makers are likely to create value misalignment.
Value misalignment isn’t isolated to social media. It can occur anywhere a user’s explicit desires do not match the goals of the platform they are using. Netflix’s algorithmically generated recommendations may be optimizing for whatever gets you to binge-watch, even as you would explicitly prefer to see more documentaries and get more sleep. Buzzfeed’s algorithmically generated stories may be optimizing for what gets you to click, rather than for what you find most informative. Doordash’s algorithmically generated upsells may be nudging you toward impulse ice cream rather than toward the healthy eating habits you aspire to have.
None of these algorithms is necessarily nefarious, and many of these goals might seem obvious. We don’t expect billboards or restaurant menus to have our best interests at heart. But in a world of ever more powerful AI, these systems are going to become increasingly subtle. If we don’t start thinking about these issues with a broader lens, we will increasingly end up with Black Mirror-esque outcomes. Netflix might not just present tempting videos, but might actually use AI to create personally targeted videos that make it even harder to go to bed on time. Buzzfeed might move from reporting stories to incentivizing citizen reporters to stage clickbaity videos about provocative topics like animal abuse or fighting that we have trouble resisting. Doordash recommendations could get so good that they know when our willpower to resist dessert is weakest and make dessert suggestions at exactly those moments. A world where we sleep less, eat worse, and consume more clickbait is not one we want to create. Yet as AI gets more powerful, that future isn’t far off.
Unfortunately, current approaches to improving technology’s impact on society often focus on trying to control the outcomes of AI systems (e.g. speech) rather than going upstream and fixing their design. We don’t make home builders responsible for every fire, but we do expect them to follow building codes that make fires rarer and less likely to spread. Similarly, we can ask that companies design systems that do not create undue risk. Current approaches to regulating outcomes may actually exacerbate that risk. For example, Florida and Texas passed laws intended to prevent the biased moderation of conservative ideas, but a foreseeable consequence of criminalizing bias is to eliminate moderation of harmful content altogether. The EU has passed the Digital Services Act, which seeks to hold platforms more accountable for their role in disseminating harmful and illegal content. As many knowledgeable observers have pointed out, these laws could have Orwellian consequences, since there is no way to agree on the bounds of the speech they target. Does forcing platforms toward “viewpoint neutrality” mean that Holocaust denial must be carried by organizations devoted to preserving the stories of those affected by the Holocaust? Can I discuss acupuncture and other alternative health options that some consider misinformation? Are we allowed to hate people who are bombing our country? I have argued previously that moderating content is a dead end, and adding the force of law to these efforts will only compound those mistakes. But we don’t have to regulate speech in order to improve the effects of technology on society.
Instead of regulating speech, regulators who want to improve the effects of technology, including but not limited to social media apps, should regulate value-aligned design. Simply put, apps should be required to have transparent, easily accessible mechanisms that let us steer them toward our explicit goals, rather than toward whatever behavior makes companies the most money. Companies have already taken steps in this direction, and colleagues of ours in the Psychology of Technology network have shown how feelings of control improve the relationship between social media use and well-being. What does this mean in practice?
In practice, encoding this principle into regulation would enable us to enforce best practices that we know lead to value-aligned outcomes, much as we enforce building codes. Below are four simple examples of such practices; an illustrative sketch after the list shows how they might fit together.
Ask regularly for explicit user feedback, rather than optimizing for ambiguous engagement signals. Advertising-based businesses will always be tempted to argue that what people engage with is what they want. Sometimes it is, but there are many things people engage with that they don’t explicitly want (e.g. nudity, fighting, clickbait, sensational headlines), as well as things people want more of that are not very engaging (e.g. boring yet informative content). Facebook’s “see more”/“see less” options are a nascent example of how a platform can ask explicitly about user desires that may or may not be revealed through engagement. Alternatively, platforms can create engagement signals that are less ambiguous indicators of user value. For example, a “love” reaction is less ambiguous than an “anger” reaction and indicates deeper appreciation than a “like.” The Emerson Collective’s Narwhal Project includes reactions like “clarifying” and “new to me” that allow users to unambiguously indicate positive experiences relevant to the platform’s purpose. All algorithms should have easily accessible controls that let users specify their own long-term value function for the algorithm to optimize, and we should have enough transparency into these algorithms to ensure that those explicit signals outweigh other, more ambiguous and less value-aligned signals.
Provide an accessible mechanism for negative feedback. Users need a way to indicate that something is undesirable even if it does not violate a company policy and even if they might engage with it. Maybe I’m on a diet and don’t want junk food recommended. Maybe I’m an alcoholic and don’t want to see ads for alcohol. Maybe a family member is ill and I want to control when I am reminded of my grief. In each case, algorithms could probably find content that would engage you contrary to your explicit desires, and in each case users should be able to indicate that they do not want that experience. Facebook recently added an “x” to the top right of each post to make it easier to provide negative feedback, and this Facebook Papers document describes how increasing the accessibility of negative feedback can be “game changing” for reducing negative experiences that often are not caught through content moderation. Reddit’s system of explicit upvoting and downvoting is generally seen as effective at surfacing better content, such that Twitter, Facebook, and TikTok have all tested similar functionality. Negative feedback is an important part of how society self-moderates, and companies should be required to design systems that allow for a reasonable level of self-policing.
Limit the power of new users. While most people will self-moderate in the face of negative feedback, a small group is less sensitive to social cues. That small group can create a disproportionate amount of online harm, and enforcing rules against them does little to curb their behavior because they can simply create new, unrestricted accounts. In the real world, people behave well because they have something to lose by misbehaving, whether reputation or access, and we have tools to exclude those who repeatedly engage in behavior we don’t want. We force people to earn our trust before we put them in settings where they can create bad experiences for others we care about (e.g. inviting them to family dinner), and the online space is no different. Giving people you know little about undue power to affect others creates risk, on or offline, and we should require platforms to limit the harm that new, untrusted users can create.
Provide an accessible mechanism to remain private. Privacy is a core value for many consumers, and companies will always be tempted to make content more public to create more activity that draws other customers in. Regulators should step in when consumers are defaulted into public settings they would not otherwise have chosen, and should ensure that consumers have simple mechanisms (e.g. not hidden deep in layers of menus) to be more private. Facebook has already provided default privacy for younger users and easy access to privacy settings in select countries. Similar features and defaults should be standard across all apps, in all countries, for all users.
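To make these four patterns concrete, below is a minimal, purely illustrative sketch of how a feed-ranking function might encode them. Every class, weight, and threshold here is invented for illustration and does not represent any real platform’s code or parameters; the point is simply to show, in one place, explicit feedback outweighing ambiguous engagement, negative feedback suppressing an item, new accounts getting limited reach, and privacy defaulting to the most protective setting.

```python
from __future__ import annotations
from dataclasses import dataclass

# Hypothetical illustration only: all names, weights, and thresholds are
# invented and do not describe any real platform's ranking system.

@dataclass
class Account:
    account_id: str
    days_old: int
    is_private: bool = True  # Pattern 4: privacy defaults to the most protective setting.

@dataclass
class Post:
    post_id: str
    author: Account
    engagement_score: float = 0.0   # Ambiguous signals: clicks, dwell time, anger reactions, reshares.
    explicit_positive: int = 0      # Explicit positive feedback: "see more", "love", upvotes.
    explicit_negative: int = 0      # Explicit negative feedback: "see less", hides, downvotes.

def author_reach_cap(author: Account) -> float:
    """Pattern 3: new, untrusted accounts get limited distribution until
    they have earned trust (here, crudely, by account age)."""
    if author.days_old < 7:
        return 0.2
    if author.days_old < 30:
        return 0.6
    return 1.0

def value_aligned_score(post: Post) -> float:
    """Patterns 1 and 2: explicit user feedback outweighs ambiguous
    engagement, and strong negative feedback suppresses an item entirely."""
    if post.explicit_negative >= 3 and post.explicit_negative > post.explicit_positive:
        return float("-inf")  # Enough "I don't want this" signals: drop it, however engaging.
    explicit = 5.0 * post.explicit_positive - 8.0 * post.explicit_negative
    ambiguous = 1.0 * post.engagement_score
    return (explicit + ambiguous) * author_reach_cap(post.author)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by value-aligned score, omitting anything suppressed by negative feedback."""
    scored = [(value_aligned_score(p), p) for p in posts]
    return [p for s, p in sorted(scored, key=lambda sp: sp[0], reverse=True)
            if s != float("-inf")]

if __name__ == "__main__":
    trusted = Account("long_time_member", days_old=400)
    brand_new = Account("created_yesterday", days_old=1)
    feed = [
        Post("clickbait", brand_new, engagement_score=50.0, explicit_negative=4),
        Post("informative", trusted, engagement_score=5.0, explicit_positive=6),
        Post("ordinary", trusted, engagement_score=10.0),
    ]
    print([p.post_id for p in rank_feed(feed)])
    # -> ['informative', 'ordinary']; the widely hidden clickbait is dropped despite high engagement.
```

A regulation built around value-aligned design would not prescribe particular weights like these; it would require that such explicit controls exist, are easy to find, and demonstrably outweigh the more ambiguous, less value-aligned signals.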
These four design patterns require no content-based judgments, will reduce misinformation, hate, and bullying, and will lead to better, more value-aligned experiences for users (e.g. content that is more “worth their time”). They come at some cost to typical business growth tactics, which is why we need regulators to level the playing field so that no company is at a disadvantage, in the same way that we don’t allow building contractors to win bids by compromising safety. Furthermore, encoding the principle of value-aligned design into regulation will allow us to continually adopt new evidence-based best practices beyond these four, as there will certainly remain much more to be done. As with the evolution of building codes and safety standards, technology and research will keep pushing our understanding of value-aligned design forward, with evolving requirements that bend technology toward being a beneficial tool.
In a recent Op-Ed, Yoel Roth suggested that app stores could be “a significant check on unrestrained speech on the mainstream internet”. I don’t think app stores want to wade into the speech wars, but I do think that regulation of value-aligned design could happen more easily at the app store level than through government regulation. App stores have a vested interest in ensuring that apps are truly beneficial for users, and they already have design codes that reduce risk (e.g. requiring “a mechanism to report offensive content and timely responses to concerns”). However, such codes currently allow a wide array of implementations that leave room for companies to avoid tradeoffs with business goals (e.g. narrow definitions of offensive content, with reporting mechanisms hidden below layers of menu options that lead only to an automated acknowledgment). In a world of increasingly powerful AI, ensuring value-aligned design will only become more important for app stores, and the suggestions above could be seen as merely giving teeth to these existing requirements.
To be clear, value-aligned design will not solve every issue in our relationship with technology. In this world, people will still be able to choose to harass others, but those being harassed will have more control over their experience: the community will be able to provide negative feedback easily, and harassers will not be able to escape censure simply by creating a new account. People will still be able to spread hate, but a much smaller group will explicitly choose to consume it. The internet did not create all the harms that exist in the world, nor can it remove them all. But it can be designed so that it does not facilitate harm and instead maximizes users’ ability to define and create long-term value for themselves.
—
Below are other articles we are reading, compiled by Joo-Wha Hong, Human-AI Interaction Researcher at the USC Marshall School.
Investigating the importance of social presence on intentions to adopt an AI romantic partner
Kim, J., Merrill Jr, K., & Collins, C | Communication Research Reports | 2023
Recent movies like Her and Ex Machina portray people developing romantic feelings toward AI-based machines. But is this something that happens only in movies? Given that some people already love fictional characters as if they were real, falling in love with an AI seems quite plausible in real life. Kim and her colleagues examined what factors lead people to fall in romantic love with AI.
Almost human? A comparative case study on the social media presence of virtual influencers
Arsenyan, J., & Mirowska, A. | International Journal of Human-Computer Studies | 2021
In this era of social media influencers, we now have virtual influencers who are as famous as real ones. Some look indistinguishable from humans, while others do not. The question is whether people treat virtual influencers the same way they treat human influencers. Arsenyan and Mirowska compared the comments and reactions people left for virtual and human influencers and found interesting patterns of behavior.
Emotional Support from AI Chatbots: Should a Supportive Partner Self-Disclose or Not?
Meng, J., & Dai, Y. | Journal of Computer-Mediated Communication | 2021
There is a tendency to think that AI can only do knowledge- or logic-oriented jobs. However, there are attempts to build AI that can provide emotional support, something many people desperately need. Meng and Dai tested whether AI chatbots actually help by comparing their efficacy at reducing stress and providing supportiveness with that of human conversation partners. The study also examined whether reciprocal self-disclosure makes a difference. Those interested in AI’s capacity for emotional support should check out this paper.
Sharing of Misinformation is Habitual, not just Lazy or Biased
Ceylan, G., Anderson, I., & Wood, W. | Proceedings of the National Academy of Sciences | 2023
Why do people share misinformation on social media? Ceylan and colleagues show that the structure of online sharing built into social platforms matters more than individual deficits in critical reasoning and partisan bias, the commonly cited drivers of misinformation. Because of the reward-based learning systems on social media, users form habits of sharing information that attracts others’ attention. The authors also show that sharing false news is not an inevitable consequence of user habits: social media sites could be restructured to build habits of sharing accurate information.
—
Below are a few announcements from across the Psychology of Technology network:
Many of the ideas about value-aligned design above are based on presentations we have given to regulatory groups and to people building new, healthier social platforms. We are always happy to present to other policy makers or technologists seeking advice on building better platforms, so please do get in touch if you’re interested in scheduling a more in-depth conversation.
If you’re interested in the themes of this newsletter, you might also enjoy this podcast episode I recently did on a show called The Gist.
The PNAS article listed above by Psych of Tech community members Gizem Ceylan, Ian Anderson, and Wendy Wood was also covered by the press in these articles in Ars Technica and Popular Science. The paper points to the need to reform social media incentives in line with some of the recommendations of this newsletter.
I’ll be speaking on some of the themes mentioned in this newsletter at the inaugural Tech+Social Cohesion conference, scheduled for February 23-24 in San Francisco. The Psychology of Technology Institute is also sponsoring the Thursday night session featuring Tristan Harris and Audrey Tang. Tickets are available here.
Gloria Mark and Larry Rosen, community members who are professors at UC-Irvine and Cal-State Dominguez Hills, respectively, were recently featured in this New York Times article about shortened attention spans and tips for increasing focus. The article coincides with the release of Dr. Mark’s book, Attention Span: A Groundbreaking Way to Restore Balance, Happiness and Productivity.