Building on Haidt & Schmidt's Social Media Design Proposals
Today I'm reposting a critique I made of Jon Haidt and Eric Schmidt's proposals for reforming social media in a world of AI.
Dear Readers,
Today, I wanted to share a post I wrote for Jon Haidt’s Substack about changes that I felt would make his (and Eric Schmidt’s) suggestions for improving social media more effective and practical, especially in a world of increasing AI use. I wrote this after their Atlantic piece, and they were kind enough to share it with their readership.
I’ve known and worked with Jon for a while, and I share his concerns about social media. I’ve admired Eric’s work as both a technologist and a philanthropist. I know many in academia and technology who disagree with some of the ideas Jon has put forward on his Substack, but I don’t think you have to take a position on the “primary” cause of the youth mental health crisis in order to decide that important changes could improve the impact of social media on psychological health. Not all social media use, and not all social media platforms, are equally bad, as our recent Neely Social Media Index results show. But some improvements can certainly be made, especially for platforms like Twitter, where people report relatively little connecting or learning yet a relatively large number of negative experiences. Given Jon and Eric’s visibility, my hope is that improving their ideas can be an important lever for pushing these platforms to be better, regardless of one’s views of the current state.
---
Why Haidt and Schmidt’s Proposed Social Media Reforms Are Insufficient
A former Meta product manager suggests changes that would make the reforms more effective for mitigating the coming impact of AI.
I’m a longtime academic collaborator of Jon’s who took a different path after graduate school. Rather than continue to apply the tools of social science to understand societal divisions, I decided to apply my skills more directly by working at Meta for over four years to help improve its products’ impact on society. I quickly learned that content moderation was a band-aid rather than a long-term solution and that scalable progress required rethinking the design of Facebook’s core algorithms.
Why did publishers and politicians tell us that social media platforms incentivized them to write more polarizing and sensational content? This was not a moderation issue but a design flaw. There were areas where the platform’s design actively incentivized content and practices that led to more anger, distrust, and polarization. If borderline-harmful content was going to perform better, it was predictable that profit-seeking businesses and influence-seeking political organizations would respond to that incentive.
I was part of many efforts to reform the design of these systems. In response to criticism of Facebook’s effects on society, which escalated after the 2016 election, Meta introspected on aspects of its platforms’ designs, some of which have been made public in the Facebook Papers. We eventually removed some of the incentives to make political content more viral, which led to measurable decreases in bullying, misinformation, and graphic content. We removed incentives for anger reactions and reduced the incentive to solicit reshares for all content. To better align algorithms with user and societal value, we added user interface elements and surveys that made it easier for users to indicate content they engaged with but didn’t like, or liked but didn’t engage with.
To be clear, I know we didn’t fix the problem, but many people made efforts, and I think we can learn from the small but measurable progress we made. I now work at the Psychology of Technology Institute, which is supported by the University of Southern California Marshall School’s Neely Center. Our mission is to combine what is known among technologists working on these problems with the research from social scientists studying technology. Our goal is to make technology psychologically healthier for everyone.
It is from this perspective that I read Jon Haidt and Eric Schmidt’s recent essay in The Atlantic detailing four imminent threats: ways in which the rapid adoption of AI will make social media much more pernicious. I agreed with their list of likely threats:
Making it easy for bad actors to flood public conversation with rivers of garbage
Making personalized “super-influencers” available to any company or person who wants to influence others
Increasing children and adolescents’ time spent on social media, thereby increasing the risk to their mental health
Strengthening authoritarian regimes while weakening democratic ones
But I saw limitations when I read their list of five proposed reforms to prepare the country and the world for these threats. Allow me to discuss each in turn, along with proposals to make them more effective and implementable.
Reform #1. Authenticate all users, including bots.
My suggestion: Focus on accountability, not identity.
Haidt and Schmidt argue that user authentication is the foundation for holding users accountable. But holding someone accountable does not necessarily require identifying them. The essay discusses introducing reputation, negative feedback, and accountability into algorithmic systems, which is important for regulating and improving online social interactions. Most of us strive to create positive experiences for those around us, and even the slightest risk of negative feedback provides a strong incentive for politeness. Those who do not adjust their behavior based on feedback will become less popular and influential without needing top-down enforcement.
To use the same examples mentioned in the essay, we hold Uber drivers and eBay sellers accountable, even if we don’t know their last names (Haidt and Schmidt argue that this is a result of platforms being able to identify the drivers/sellers). However, I argue that what prevents Uber drivers and eBay sellers from behaving badly is not necessarily the fact that companies can personally trace them, but that their account reputation is valuable and they have a financial incentive to maintain it. In contrast, many people on the internet are identifiable but continue to do harmful things to gain attention because they face no real accountability.
Some social media platforms have introduced reputation-based functionality with successful results. For example, Reddit’s upvote/downvote and karma systems have proven useful for improving social discourse while avoiding the privacy issues that can come with identifying all users. Using this model, we could require accounts to earn the community’s trust before granting them the full power (and responsibility) of widespread distribution, and develop ways to make the loss of that trust consequential.
Consider an example from one of the leaked Facebook Papers documents revealing that a small set of users is responsible for nearly half of all uncivil comments. The absence of an effective downvote system ironically amplifies their visibility when others engage to contest their behavior. What if we could diminish this group's social sway by holding them accountable, perhaps through a history of downvoted comments?
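To make this concrete, here is a minimal sketch (in Python) of how a platform might gate distribution on earned reputation. The scoring formula, thresholds, and field names are illustrative assumptions on my part, not a description of Reddit, Facebook, or any real ranking system.

```python
from dataclasses import dataclass

@dataclass
class Account:
    upvotes_received: int
    downvotes_received: int
    account_age_days: int

def reputation_score(acct: Account) -> float:
    """Toy reputation score: net community feedback, discounted for very new accounts."""
    net_feedback = acct.upvotes_received - 2 * acct.downvotes_received  # downvotes weigh more
    age_factor = min(acct.account_age_days / 90, 1.0)  # full weight only after ~90 days
    return net_feedback * age_factor

def distribution_multiplier(acct: Account) -> float:
    """Scale how widely an account's posts reach non-followers, based on earned trust."""
    score = reputation_score(acct)
    if score < 0:
        return 0.25  # repeatedly downvoted accounts get far less algorithmic reach
    if score < 50:
        return 1.0   # default reach while trust is still being earned
    return 1.5       # accounts with a strong track record earn wider distribution

# Example: a new account with many downvoted comments sees its reach reduced.
frequent_offender = Account(upvotes_received=10, downvotes_received=40, account_age_days=20)
print(distribution_multiplier(frequent_offender))  # -> 0.25
```

The point of the sketch is that accountability here comes from community feedback shaping distribution, not from knowing anyone's legal identity.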
In this way, we could hold people accountable for their online actions without any top-down enforcement and without the need for widespread identification. Accountability without identification is particularly important in authoritarian countries, where anonymity plays an important role in civic discourse.
Reform #2. Mark AI-generated audio and visual content.
My suggestion: Proactively mark trustworthy content.
Haidt and Schmidt argue that marking AI-generated content with digital watermarks will help users discern real versus fake content. They raise the serious and already-occurring problem of deep fake pornography, where AI is used to generate fake photos and videos of people engaging in sexual acts.
I agree that this is an important issue to address, yet bad actors are unlikely to correctly mark their content in a world of increasingly open-source access to AI. In addition, it is already difficult, and soon may be impossible, to identify AI-generated content as the technology advances.
Provenance-based approaches, on the other hand, proactively register content when created and edited, allowing consumers to robustly verify the source of an image, rather than attempting to retroactively determine which images are worthy of trust. Several organizations, including Adobe, the BBC, Microsoft, Sony, Intel, the New York Times, and Reuters, have teamed up to develop systems that publicly validate the origin of content in ways that are verifiable by consumer applications (e.g., social media platforms, news apps). Provenance-based approaches are likely to be more robust against improvements in deep fake technology. Additionally, platforms favoring verifiable content are likely to earn user trust and enjoy long-term success, especially given the likely proliferation of AI-generated fake images, which will lack this public validation.
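As a rough illustration of the provenance idea, consider the sketch below: a publisher signs a hash of an image when it is created, and any downstream app can verify that signature before treating the image as verified. This is a toy example, not the actual specification such coalitions (e.g., C2PA) are developing; the manifest format and key-distribution details are assumptions for illustration only.

```python
# Toy provenance check: the publisher signs a hash of the image at creation time,
# and a consumer application verifies both the hash and the signature later.
# Requires the 'cryptography' package; the manifest format here is invented.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# --- At creation time (e.g., in a camera app or newsroom tool) ---
publisher_key = Ed25519PrivateKey.generate()
image_bytes = b"...raw image data..."
image_hash = hashlib.sha256(image_bytes).digest()
manifest = {
    "publisher": "example-news-org",  # hypothetical identifier
    "sha256": image_hash,
    "signature": publisher_key.sign(image_hash),
}

# --- At display time (e.g., in a social media or news app) ---
def verify_provenance(image: bytes, manifest: dict, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the image is unmodified and signed by the claimed publisher."""
    if hashlib.sha256(image).digest() != manifest["sha256"]:
        return False  # the image was altered after signing
    try:
        public_key.verify(manifest["signature"], manifest["sha256"])
        return True
    except InvalidSignature:
        return False

print(verify_provenance(image_bytes, manifest, publisher_key.public_key()))    # True
print(verify_provenance(b"tampered bytes", manifest, publisher_key.public_key()))  # False
```

Note that this approach does not try to detect AI generation at all; it only tells you whether an image carries a verifiable claim about where it came from.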
This same approach of proactive identification can even work in the case of non-consensual sexual content if distribution channels allow sexual imagery only from people who proactively identify as porn actors. Public pressure has led both social media and porn companies to limit the distribution of objectionable sexual content in the past, and providing a mechanism to remove non-consensual sexual imagery from these systems would be a relatively simple way for these companies to earn trust. In this way, we could more confidently reduce the distribution of a wide set of non-consensual content.
Reform #3. Require data transparency with users, government officials, and researchers.
My suggestion: Focus on product experimentation data from technology companies.
Scientists and technology companies rely on product experiments to disentangle questions of correlation versus causation. For example, scientists use randomized clinical trials to estimate the causal impact of proposed cancer treatments and weigh the effectiveness of the treatment against any side effects. Companies also run countless experiments to understand which changes in their products will cause an improvement in business outcomes. In these experiments, a random set of users may be given a modified version of the product to understand how that modification changes their usage and experience.
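For readers unfamiliar with how such experiments are analyzed, here is a minimal sketch using simulated data: one group of users sees a single design change, and its causal effect is estimated by comparing their outcomes against a control group. The metric, effect size, and numbers below are invented purely for illustration.

```python
# Minimal sketch of analyzing a two-arm product experiment, with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical per-user outcome: reported negative experiences per week.
control = rng.poisson(lam=2.0, size=10_000)    # users on the existing ranking design
treatment = rng.poisson(lam=1.8, size=10_000)  # users with one reshare incentive removed

reduction = control.mean() - treatment.mean()
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Estimated reduction: {reduction:.3f} negative experiences per user per week")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

Because users are randomized into the two arms, the difference in outcomes can be attributed to the design change itself, which is exactly the kind of causal evidence outside researchers currently lack.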
Questions of causality are pervasive in debates about social media (e.g., is social media a reflection of our societal polarization, or is it causing that polarization?). Public access to the experiments platforms conduct, which so far has come mostly from leaks, can help settle these questions by providing causal estimates of the effects of specific product decisions on outcomes of societal interest.
As a result of access to some such experiments, we now know that optimizing health content for engagement leads to more distribution of misinformation, and that removing the incentive for comments and shares on political content leads to less reported bullying and misinformation. If we want to understand social media systems and causality in the same way that platforms understand themselves, we need to focus our transparency efforts specifically on access to product experiments and their results. Increasing transparency is even more important for “black box” AI systems, where even the system’s designers often don’t understand the mechanism behind any particular change they make. As a result, the code underlying the system tells you far less than what happens when you experimentally change aspects of its design.
Reform #4. Clarify that platforms can sometimes be liable for the choices they make and the content they promote.
My suggestion: Focus on design via “building codes” for online social media.
Most people agree that platforms should be held accountable for their actions but cannot agree on how to do so. An analogy I often use to foster consensus, borrowed from my colleagues at the Center for Humane Technology, is to think about how we have designed building codes. We don’t hold builders responsible for every bad thing that could happen to a building (e.g., it could burn down, people could get hurt in it), but we do hold them responsible if they design a building that contributes to bad things happening (e.g., using flammable materials). Similarly, a free society should not try to hold platforms responsible for every harmful piece of content on their systems, but we should instead hold them responsible for product designs that encourage harmful content.
With access to platform product experimentation data (reform #3), we could understand the causal impact of the product decisions platforms make and create clear design codes, so that platforms don’t have to wonder which practices they will or will not be held accountable for. Such design codes could be agreed upon by governments, states, school boards, parent groups, app stores, and advocacy organizations that all have a role in pressuring companies to build better, safer products. No enforcement mechanism will be airtight, but collective, specific pressure could help companies worry less about being outcompeted by a rival that builds a more engaging product with the fewest safety measures (e.g., one optimized solely for time spent, with no privacy defaults and no content provenance or user accountability functionality).
Reform #5. Raise the age of “internet adulthood” to 16 and enforce it.
No suggestion here. We should help parents regain their right to act as gatekeepers for their children.
Social media has been hailed for removing gatekeepers, but those gatekeepers may not all be bad. In particular, parents have traditionally played the role of gatekeepers for their children, protecting them from influences that may not be good for them.
To be clear, requiring parental consent for adolescents ages 13-15 would not ban them from social media platforms. They could still use platforms without an account (e.g., watching educational YouTube videos). They could also get their parent’s consent to open an account.
But having a meaningful amount of friction might enable important conversations between parents and children about what it means to have a healthy digital life. It may also reduce the social pressure to join social media, as fewer fellow teens will be on the platforms. Raising the age seems both doable and effective, especially given the progress made in age verification. I’m hopeful that the many efforts to update COPPA, which effectively set the age of internet adulthood at 13, will gain traction in the near future.
Conclusion
I share Jon’s concerns about the impact of social media on society, though I know many who dispute the confidence with which he holds those beliefs. But you don’t have to adjudicate whether social media is the primary driver of the teen mental health crisis, or merely harmful to some percentage of children, in order to want urgent solutions.
In either case, product decisions are harming a large number of people, and there is a need for solutions. The debate over who is at fault can distract from the work needed to improve these platforms. This work is even more urgent as the increasing use of AI “cranks up the dials on all those effects, and then just keeps cranking,” as Jon and Eric suggest.
My suggestions build on Jon and Eric’s work, and I am certain others will offer additional ideas. Improving the impact of social media platforms on society in these early days of the AI revolution is a goal we should all share.
---

Seems that this will ultimately come down to a competition between AIs: the “good” AIs that are policing the work of the “bad” AIs supporting the negative impacts. It may turn out to be like automated stock trading, with which people cannot keep up. I have less faith that we little carbon-based life forms will figure out a way to corral this before it gets wildly out of hand. Seems like some organization like Microsoft could sell AaaS: “Authenticity as a $ervice,” along with the other functions needed to implement some of the ideas in the post.
I think point 1 is the most important.
When you say “Clarify that platforms can sometimes be liable for the choices they make and the content they promote,” is it not better to say “Legislate that platforms can sometimes be liable...”? I mean, let’s get to solutions and push them into law (which obviously requires tons of forethought, compromise, and deliberation).
Point 2 might put severe limitations on big companies that, though justified, would not be doable for smaller sites. Either you have to employ heavy technology to recognize AI content (creating a new arms race), or you have to be very strict up front with content creators who would try to upload AI content. Smaller sites would do neither, and in a sense they may then have an advantage over the mainstream sites. In which case, the Big Tech companies will fight this sort of regulation tooth and nail.
Point 3 would be great. Seemingly not too hard to push through either.
As for the “age of internet adulthood,” this also sounds really important, but how feasible would it be to implement? Can we really implement it well and not create the same nonsense and perverse side effects as with the drinking age?