Court Agrees that Regulating Engagement-Based Algorithms is Constitutional
A US District Court recently held that feed algorithms contain both expressive and non-expressive components, and that the non-expressive components can be regulated constitutionally.
Several parts of the Neely Center’s Design Code for Social Media concern designing algorithms to empower user preferences rather than to maximize usage. There is a great deal of regretted usage of social media products, and many users report regular unwanted experiences. Policymakers seeking to address this have engaged with our design code, but a natural concern has been whether algorithms can be regulated constitutionally.
To help answer this question, we wrote the note excerpted below, entitled “Feed Algorithms Contain both Expressive and Functional Components,” which draws nuanced lines leveraging opinions from the Supreme Court’s recent decision in Moody v. NetChoice. In that decision, the Court clearly wanted to distinguish design choices that are expressive from those that are not, suggesting that the industry argument that all design choices are expressive would not stand. From the majority opinion: “Curating a feed and transmitting direct messages, one might think, involve different levels of editorial choice, so that the one creates an expressive product and the other does not. If so, regulation of those diverse activities could well fall on different sides of the constitutional line.”
California’s Addictive Feed bill (SB 976) was recently challenged, and drawing on Moody v. NetChoice, the Neely Center hoped to educate stakeholders that not all curation decisions regarding feeds are expressive. From our note, which was cited in the amicus brief filed by EPIC:
Algorithmic content feeds are generally optimized for three things (see the illustrative sketch after this list):
1. The engagement of users on the site (e.g., what a user clicks on or views), which serves the function of increasing usage of the feed.
2. The expressed preferences of users (e.g., what a user explicitly chooses to follow, search for, or express positive sentiment about), which serves the function of increasing the utility of the feed.
3. The values of the platform (e.g., community standards indicating which items should be deprioritized or removed), which keep content that the platform deems objectionable off feeds.
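To make the three categories concrete, here is a minimal, hypothetical sketch of how a single ranking score might combine them. The function name, fields, and weights are illustrative assumptions for this post, not any platform’s actual implementation.

```python
# Hypothetical sketch: combining the three signal categories into one ranking score.
# All field names and weights are illustrative assumptions, not real platform code.

def rank_score(item: dict, w_engagement=1.0, w_preference=0.3, w_quality=0.5) -> float:
    # Category 3 hard rule: content that violates community standards never appears.
    if item["violates_policy"]:
        return float("-inf")
    return (w_engagement * item["pred_engagement"]    # category 1: predicted from past behavior
            + w_preference * item["expressed_pref"]   # category 2: what the user explicitly asked for
            + w_quality * item["quality"])            # category 3: content-level quality/values score

# Toy usage: rank two candidate posts for one user's feed.
candidates = [
    {"id": "post_a", "pred_engagement": 0.9, "expressed_pref": 0.1, "quality": 0.2, "violates_policy": False},
    {"id": "post_b", "pred_engagement": 0.4, "expressed_pref": 0.8, "quality": 0.7, "violates_policy": False},
]
feed = sorted(candidates, key=rank_score, reverse=True)
print([item["id"] for item in feed])
```

Even in this toy version, the relative weights determine whether the feed chases what users will linger on or what they say they want.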
The first category – using a user’s engagement with content to help predict what they will engage with next – is the most prominent ingredient in most algorithmic feeds (a toy illustration follows the excerpt below). From our cross-industry paper on “non-engagement signals,” which involved representatives from most of the major platforms with ranked feeds:
“Most large platforms rank their feeds primarily by predicted engagement. Many switched to using predicted engagement after using chronological or some other algorithm: Facebook (Facebook, 2023), LinkedIn (LinkedIn, 2023), Instagram in 2016 (Instagram, 2016), Twitter in 2016 (Buzzfeed, 2016), and Reddit in 2021 (Reddit, 2021).
● Facebook ran an experiment giving users a semi-chronological feed. They found user time-spent declined by 3% after 10 days and was continuing to decline when the experiment ended (FBArchive, 2018).
● In 2020 Facebook and Instagram ran experiments with semi-chronological feeds: time-spent declined by 20% on FB and 10% on Instagram on average over the following 3 months (A. M. Guess et al., 2023).
● A 2022 paper reported that users in Twitter’s long-term chronological holdback, which began in 2016, had approximately 38% fewer impressions/day (Bandy and Lazovich, 2022).”

Changes are generally made to these parameters insofar as they increase or decrease business goals, such as the number of monthly active users a platform has, which affects the company’s stock price and therefore the compensation of employees. Notably, the California law in question, SB 976, specifically targets this category of algorithmic input – the use of personal data (e.g., what a user spends time viewing) within the algorithmic feeds of minors to create harmful usage (“addictive feeds”).
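Before turning to the second category, here is a toy illustration of what “ranking by predicted engagement” means mechanically: a score computed entirely from a user’s past behavior. The features and weights below are invented for this sketch and are not drawn from any platform’s model.

```python
import math

def predicted_engagement(past_click_rate: float,
                         past_avg_dwell_seconds: float,
                         previously_engaged_with_author: bool) -> float:
    """Toy category-1 score: a probability-like estimate that the user will engage,
    built only from behavioral history (clicks, dwell time, prior interactions)."""
    z = (2.0 * past_click_rate
         + 0.02 * past_avg_dwell_seconds
         + 1.5 * (1.0 if previously_engaged_with_author else 0.0)
         - 1.0)                               # bias term
    return 1.0 / (1.0 + math.exp(-z))         # squash into (0, 1)

# A post the user tends to linger on scores highly, whether or not they value that time.
print(predicted_engagement(past_click_rate=0.6, past_avg_dwell_seconds=45,
                           previously_engaged_with_author=True))
```

Nothing in such a score reflects a judgment about the content itself; it only predicts behavior, which is the distinction the court later leans on.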
In the second category, companies supplement signals of user engagement with signals of user preference. For example, Facebook’s “see more or see less” controls allow users to indicate their aspirational preferences: they may want more educational content even if they often skip over it, or less salacious content even though they can’t help but pay attention to it. Many platforms also use user surveys to understand users’ explicit preferences (see section 5 of our “non-engagement” paper).
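As a rough sketch of how such explicit signals differ from engagement, a preference score can be built only from what the user deliberately told the platform; the signal names below are hypothetical.

```python
# Hypothetical sketch of category 2: a preference score built only from explicit
# user choices ("see more"/"see less" taps, survey answers), not from behavior.

def expressed_preference(topic: str, feedback: dict) -> float:
    score = 0.0
    if topic in feedback.get("see_more_topics", set()):
        score += 1.0                                    # user asked for more of this
    if topic in feedback.get("see_less_topics", set()):
        score -= 1.0                                    # user asked for less of this
    # Average survey answer to "was this worth your time?" for the topic, in [-1, 1].
    score += feedback.get("survey_worth_your_time", {}).get(topic, 0.0)
    return max(-1.0, min(1.0, score))

feedback = {"see_more_topics": {"science"}, "see_less_topics": {"celebrity_gossip"},
            "survey_worth_your_time": {"science": 0.4}}
print(expressed_preference("science", feedback), expressed_preference("celebrity_gossip", feedback))
```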
Finally, in the third category, companies often supplement their algorithmic systems with measures of “quality” that may correlate negatively with predicted engagement (see section 4.2 of our “non-engagement” paper). These may take the form of community standards or moderation rules that let platforms decide which kinds of content are consistent with their values, and therefore which kinds of content to make more or less prominent based on the characteristics of that content. Unlike the other algorithmic inputs, which relate to the behavior of users, these decisions are content-specific. For example, platforms choose whether or not to have policies for various categories of content, depending on what kind of platform experience they want to create. The Supreme Court has previously found such choices to be expressive and therefore protected, given that they involve intentional decisions about content.
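A minimal sketch of this third category might look like the following: decisions keyed to content labels and applied identically for every viewer. The labels and multipliers are invented for illustration.

```python
# Hypothetical sketch of category 3: content-level policy decisions that do not
# depend on who is viewing. Label names and multipliers are illustrative only.

REMOVE_LABELS = {"hate_speech", "dangerous_misinformation"}   # violates standards
DEMOTE_LABELS = {"clickbait", "borderline_sensationalism"}    # allowed but demoted

def platform_quality(content_labels: set) -> float:
    """Return a quality multiplier for ranking, or negative infinity for removals."""
    if content_labels & REMOVE_LABELS:
        return float("-inf")     # keep it off the feed entirely
    if content_labels & DEMOTE_LABELS:
        return 0.3               # kept, but made less prominent for everyone
    return 1.0                   # no policy concerns

print(platform_quality({"clickbait"}), platform_quality(set()))
```

Because these rules depend only on the item’s content and reflect a deliberate judgment about it, they look much more like the editorial choices courts have treated as expressive.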
California Senate Bill 976 (as well as a similar New York law) is specifically intended to target the first category of optimization, in which the platform uses the personal data of users to create “addictive feeds.” It does not target other forms of feed curation and specifically allows for curation in favor of user preference (category 2). Because it targets only the algorithmic use of users’ personal data, it does not interfere with a platform’s right to prioritize or deprioritize content according to its own values or rules (category 3), which are not generally implemented in personalized ways. The Act instead targets the functions of algorithms that lead to “addiction,” characterized as continued usage of a product (category 1) even against one’s own preferences and with full knowledge of the consequences.
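To illustrate the separability point (which the decision below returns to), here is one hypothetical way a “mixed” feed could comply with a rule like SB 976: switch off the personal-data-driven engagement term for covered minors while leaving categories 2 and 3 untouched. This is a sketch of our reading, not legal guidance or the statute’s actual mechanics; field names and weights are illustrative assumptions.

```python
# Hypothetical compliance sketch: drop the personalized engagement term (category 1)
# for covered minors while keeping expressed preference (2) and platform values (3).

def feed_score(item: dict, user: dict) -> float:
    if item["quality"] == float("-inf"):
        return float("-inf")                            # removed under community standards

    score = 0.5 * item["quality"] + 0.3 * item["expressed_pref"]   # categories 3 and 2

    covered_minor = user["is_minor"] and not user["verified_parental_consent"]
    if not covered_minor:
        # Category 1 is the only term built from the user's personal activity data.
        score += 1.0 * item["pred_engagement"]
    return score

item = {"quality": 1.0, "expressed_pref": 0.6, "pred_engagement": 0.9}
print(feed_score(item, {"is_minor": True, "verified_parental_consent": False}),
      feed_score(item, {"is_minor": False, "verified_parental_consent": False}))
```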
The court’s decision reflected our view that algorithms have both expressive and non-expressive components. It took a nuanced view of algorithmic regulation, holding that regulating engagement-based algorithms is indeed constitutional. From the decision:
NetChoice’s main argument against the personalized feed provisions is that those provisions restrict social media platforms’ own speech. From NetChoice’s perspective, personalized feeds are inherently expressive, so SB 976’s restrictions on those feeds impede free speech. The Court concludes that NetChoice has not shown a likelihood of success on that issue because it has failed to meet its burden of demonstrating, as Moody requires for facial challenges, that most or all personalized feeds covered by SB 976 are expressive and therefore implicate the First Amendment.
NetChoice claims that it satisfies this burden because Moody held that, as a matter of law, “[d]eciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own.” Moody, 603 U.S. at 731. And that is precisely what personalized feeds do: compile and organize speech from social media users. On the surface, this argument accords with older cases holding that the exercise of editorial judgment (i.e., deciding what speech to publish and how to organize it) is usually protected by the First Amendment….
That argument reads too much into Moody and its forebears. Although it is true that Moody uses sweeping language that could be interpreted as saying that all acts of compiling and organizing speech, with nothing more, are protected by the First Amendment, Moody also expressly discusses situations where such activity did not receive protection. For instance, Moody observed that a mall could not claim a First Amendment right to exclude pamphleteers from its property because the mall was not “engaged in any expressive activity” when trying to exclude the pamphleteers and their speech. …Consequently, even post-Moody, courts must inquire into whether an act of compiling and organizing third-party speech is expressive before they can determine whether that act receives First Amendment protection.
In response to this conclusion, NetChoice urges the Court to find that personalized feeds are always expressive even if not all acts of compiling and organizing third-party speech are expressive. From its perspective, the work that personalized feeds do is closely analogous to the editorial activities that Moody, Tornillo, Turner, and other similar precedents have found to be protected. There is some force to this suggestion. “‘[T]he basic principles of freedom of speech and the press … do not vary’ when a new and different medium for communication appears.” Brown, 564 U.S. at 790 (quoting Joseph Burstyn, Inc. v. Wilson, 343 U.S. 495, 503 (1952)). So “analogies to old media, even if imperfect, can be useful.” Moody, 603 U.S. at 733. Yet, this does not mean that courts should uncritically assume that analogies to old media are always apt and that there is little meaningful difference between old and new. Quite the opposite. The Supreme Court has cautioned lower courts against reflexively “import[ing] law developed in very different contexts into a new and changing environment.” …Indeed, the Supreme Court has a history of developing special rules for addressing free speech concerns in different and newly arising contexts. Id. at 741 (collecting cases).
With that caution in mind, the Court finds that old precedents on editorial discretion do not fully resolve the issue at hand regarding the expressiveness of personalized feeds. For instance, Tornillo involved editorial discretion in the traditional sense: human newspaper editors deciding what articles to publish and where to place them in the paper based on the editors’ own judgments about newsworthiness and how the proposed articles fit their newspaper’s journalistic point of view. The Tornillo decision left this unspoken, but that is likely because, in 1974 when the case was decided, there was no other way to make editorial decisions. Thus, editorial discretion and human judgments about the value of speech (whether based on the speech’s importance, truth, entertainment, or some other criteria) were one and the same. Personalized feeds on social media platforms are different. Rather than relying on humans to make individual decisions about what posts to include in a feed, social media companies now rely on algorithms to automatically take those actions.
Due to these differences between traditional and social media, the Supreme Court was careful not to overextend itself in Moody. The First Amendment questions in Moody involved restrictions on content moderation policies that embodied human value judgments about the types of messages to be disfavored. When the social media platforms in Moody removed posts for violating their community standards, it was because people at those platforms found those messages to contain vile or dangerous ideas—such as support for Nazi ideology, glorification of gender violence, or advancement of phony medical treatments. Id. at 735–37. The content moderation policies at issue were much like the traditional forms of editorial discretion discussed in Tornillo and other prior precedents. …
But what if an algorithm’s creator has other purposes in mind? What if someone creates an algorithm to maximize engagement, i.e., the time spent on a social media platform? At that point, it would be hard to say that the algorithm reflects any message from its creator because it would recommend and amplify both favored and disfavored messages alike so long as doing so prompts users to spend longer on social media. Amicus Br. 5 (collecting news articles). To the extent that an algorithm amplifies messages that its creator expressly disagrees with, the idea that the algorithm implements some expressive choice and conveys its creator’s message should be met with great skepticism.
Moreover, while a person viewing a personalized feed could perceive recommendations as sending a message that she is likely to be interested in those recommended posts, that would reflect the user’s interpretation, not the algorithm creator’s expression. If a third party’s interpretations triggered the First Amendment, essentially everything would become expressive and receive speech protections—a good lawyer would almost certainly be able to assign some plausible meaning to any action. Yet, the Supreme Court has made clear that not a “limitless variety of conduct can be labeled ‘speech’ [even when] the person engaging in the conduct intends thereby to express an idea.” O’Brien, 391 U.S. at 376….
Up to this point, the Court has focused on content moderation and feeds that respond solely to user activity as a dichotomy, as if the personalized feeds regulated by SB 976 must be one or the other. But a personalized feed might recommend posts based on both content moderation policies and user activity, or both expressive and non-expressive factors. NetChoice does not address the possibility of “mixed” feeds even though such feeds raise numerous legal and factual questions. For instance, the relative weight assigned to expressive and non-expressive factors in an algorithm might be relevant to free speech issues. “[T]he First Amendment does not prevent restrictions directed at . . . conduct from imposing incidental burdens on speech.” Sorrell, 564 U.S. at 567. Regulating feeds that use algorithms mostly relying on non-expressive factors may not trigger First Amendment scrutiny at all because doing so only incidentally burdens any expressive component of those algorithms. Or, if it is very easy to separate an algorithm’s expressive content moderation functions from non-expressive user-activity-based functions, a law prohibiting personalized feeds from relying on user activity information may also only incidentally burden speech. A covered company could just remove the user activity factors from the recommendation algorithms driving its media feeds.
This latter possibility is especially significant here because SB 976 targets only recommendations based on user information. It does not prohibit covered entities from incorporating their content moderation guidelines into their recommendation algorithms. In short, much of the First Amendment analysis depends on a close inspection of how regulated feeds actually function. Because NetChoice has not made a record that can be used to address these important questions, it has not met its burden to show facial unconstitutionality.
When feeds recommend posts based solely on prior user activity, there is no expressive message being conveyed. At the outset, the Court observes that SB 976 restricts only the use of certain personalized information when compiling posts into feeds. SB 976 does not prevent social media platforms from carrying out content moderation like that discussed in Moody. Platforms can continue to remove posts containing disfavored ideas, such as racist ideology, while still complying with SB 976 because content moderation depends on “independent content standards” separate from a user’s personal information. Id. at 736 n.5. As such, it is difficult to say that restricting use of personalized feeds would alter the overall speech environment on any social media platform in any appreciable way. In addition, it is also challenging to identify how personalization, as opposed to content moderation, might send any message.
This decision will almost certainly be appealed, but given the Supreme Court’s stated desire to draw a line between design that is or is not expressive, it is likely that, contrary to industry rhetoric, regulating some design choices of social media platforms will indeed be deemed constitutional. Given the documented harms of engagement-based algorithms, we are hopeful that society can set minimum standards for these powerful and ubiquitous products.