Improving a “Duty of Care” to Protect Kids Online
As the Kids Online Safety Act (KOSA) gains momentum, it is important to be thoughtful about the potential unintended consequences of legislation intended to protect children online.
Following warnings from the American Psychological Association (APA) and the U.S. Surgeon General about the effects of technology on children, momentum is building for efforts to regulate technology in order to protect children. Most Americans support at least some government action, and numerous bills have passed or are under consideration at the state, federal, and international levels.
At the federal level, the Kids Online Safety Act (KOSA) has gained momentum in the Senate and has been supported by politicians as diverse as Joe Biden and Ted Cruz. It attempts to supplement the business incentives that have historically shaped social media with a “duty of care” that holds companies responsible for harm to minors. Few politicians or responsible business owners would dispute the idea that products being served to minors should be designed in a way that minimizes harm and maximizes benefit.
However, the advancement of this bill has raised two important concerns about its potential unintended consequences. Can a “duty of care” be implemented in a way that doesn’t allow politicians to censor speech they dislike (e.g., removing LGBT content, framed as a way to protect children)? And how can we identify the minors who need to be cared for in a way that is privacy-safe and does not infringe on the freedoms of adults? Readers of previous posts have provided helpful feedback, so I would like to propose solutions for both of these concerns, in the hopes of getting further feedback and eventually informing prospective legislation about how best to protect children from abusive online practices. Specifically, I’ll discuss two recommended changes to KOSA and how each would prevent the unintended consequences identified above.
A Design-Based, User-Experience-Focused “Duty of Care”
Critics of KOSA point out that some political organizations are already planning to abuse the “duty of care” provision in order to censor online speech that they disagree with. These critics are not wrong. The provision would likely increase content moderation efforts, which are not only ineffective at fully mitigating exposure to “harmful” content, but have also led to instances of censorship and bias. Organizations would likely use this provision to require platforms to remove content from groups they dislike, believing that they are protecting children. In the abstract, we may agree that “protecting kids” is a great idea, but in practice, we may have very different ideas about what we should protect our children from. Thus, any policy that focuses on harmful content rather than platform design is doomed to repeat the failures of previous reform efforts. To mitigate this, I suggest that KOSA be amended to refer specifically to the design choices that contribute to negative user experiences.
A design-based, user-experience-focused approach differs from a harmful-content-based approach in three important ways. First, it does not involve penalizing or removing content; rather, it asks platforms to stop practices (e.g., optimizing for engagement) that are known to facilitate or encourage harmful experiences. Second, rather than requiring common definitions of harm, as the current bill does, it aims to help users achieve their own goals (e.g., not being bullied, or not accidentally encountering sexual content) instead of optimizing for business outcomes or the goals of political actors. Finally, rather than implying that it is possible to remove all harm, which would disincentivize all but the most benign content, a design-based approach accepts that eliminating all harm is impossible. It therefore allows for discussion of sensitive topics where some harm is inevitable, as long as platforms do not knowingly design their systems in ways that encourage negative experiences for their users.
To be more specific, there are two principles that partisans often agree on:
Users should be provided with content that they specifically ask for, not just whatever they pay attention to. Rather than serving children content based on business benefit (e.g., optimizing for engagement or time spent), algorithms should optimize for the explicit, considered preferences of those being served (e.g., as defined either by a child’s explicit preferences or by parents’ judgments of quality). Some children will still end up watching dares, fight videos, or sexual content, but that will at least be a function of their explicit choice rather than a company’s attempt to maximize time spent and ad revenue (a sketch of what this could look like follows these two principles).
Ill-meaning adults should not be able to access children’s information and solicit them without the natural limitations and consequences that exist in the real world. Privacy defaults and rate limits mimic the natural barriers of offline life, where strangers do not have the opportunity to engage with your child at any time, without consent, in the privacy of your home. Nor can they attempt to contact large numbers of children without questions being asked about their motivations.
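To make these two principles concrete, here is a minimal sketch in Python. Everything in it is hypothetical and illustrative: the field names, the rate-limit threshold, and the idea of ranking purely by stated interests are my own simplifications, not language from KOSA or the practice of any particular platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Item:
    topic: str
    predicted_watch_time: float   # the engagement signal a platform might otherwise optimize
    stated_interest_match: float  # 0 to 1: how well the item matches topics the user explicitly followed

def rank_for_minor(items: list[Item]) -> list[Item]:
    """Principle 1: rank by explicit, considered preferences, not predicted engagement."""
    return sorted(items, key=lambda i: i.stated_interest_match, reverse=True)

@dataclass
class OutreachLimiter:
    """Principle 2: rate-limit unsolicited contact from adult accounts to minor accounts."""
    max_new_minor_contacts_per_day: int = 3      # illustrative threshold, not a real policy number
    contact_log: dict[str, list[datetime]] = field(default_factory=dict)

    def allow_message(self, adult_id: str, is_existing_connection: bool) -> bool:
        if is_existing_connection:               # pre-existing, consented contact is unaffected
            return True
        cutoff = datetime.utcnow() - timedelta(days=1)
        recent = [t for t in self.contact_log.get(adult_id, []) if t > cutoff]
        if len(recent) >= self.max_new_minor_contacts_per_day:
            return False                         # block (or flag for review) further cold outreach
        self.contact_log[adult_id] = recent + [datetime.utcnow()]
        return True
```

Note that neither function ever inspects what the content or the message says; both operate purely on how the system behaves, which is the point of a design-based duty of care.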
In practice, similar design principles were implemented in the UK after the Age Appropriate Design Code took effect in 2021. Here’s an excerpt from a Politico article detailing what happened after the UK law passed:
Almost overnight, TikTok limited the ability of teenagers to be contacted by strangers via direct message. Facebook pulled back on how advertisers could target underage users with personalized ads based on collecting people’s data. YouTube turned off its “autoplay” function for minors that allowed teenagers to doomscroll through endless, and potentially harmful, content.
However, given the partisan nature of tech regulation in the US, critics are right to point out issues in the current proposals. Smart lawyers ought to specify clearly that a “duty of care” applies specifically to the content-neutral design of platforms and their algorithms, with the aim of providing better experiences for minors.
Device-Based Age Verification
The “duty of care” in KOSA applies to any user “that the platform knows or reasonably should know is a minor by taking reasonable measures”. Given the potential risk of serving minors, online services may decide that they need to ask for identification from all users. Many privacy advocacy organizations correctly point out that a world where we must upload our identification to access content is one rife with privacy risks, and similar laws have already stopped adults from accessing content they don’t want to be associated with. As of this writing, if you access Pornhub in Utah, you are greeted by porn actress Cherie DeVille, who suggests:
The best and most effective solution for protecting children and adults alike is to identify users by their device and allow access to age-restricted materials and websites based on that identification…please contact your representatives and demand device-based verification solutions that make the Internet safer while also respecting your privacy.
Now, of course, no solution is going to stop teens from accessing restricted content if they really want to. But as a parent, I would love to be able to buy a phone that identifies its user as a minor and therefore restricts which materials can be accessed, without identifying who that minor is. This would also create minimal disruption and no additional privacy risk for adults. Yes, minors could buy other phones to circumvent these restrictions, just as many teens find some way to access alcohol even if they can’t easily buy it directly. But even so, alcohol restrictions still save lives, and did you know that most teens who encounter pornography online do so accidentally?
Parents are eager for better tools to manage the online world, and current solutions need to be simpler and apply more broadly. I know that I can’t keep up with every new site that pops up, but I’d sleep easier if, by virtue of buying a “child phone”, porn sites were required to restrict access, social media sites required my permission before allowing my children to post videos of themselves, and all internet sites defaulted to privacy and pared back features like auto-play and infinite scroll that are designed to encourage greater screen time.
Bills like KOSA would be improved if, rather than imposing vague “age verification” requirements, they specifically mandated respect for device-based age settings. Many parents don’t use existing age-restriction settings because they are complicated or perceived to be ineffective. Settling on a single privacy-safe standard for age verification would simplify the system and enable a broader set of platforms to respect that standard.
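As a thought experiment, here is what enforcement could look like on a site’s side if such a standard existed. To be clear, no such standard exists today: the header name and values below are invented purely to illustrate how little a site would need to know about a visitor under a device-based approach.

```python
# Hypothetical device-based age signal. "X-Device-Age-Bracket" is an invented header
# name; no such standard currently exists. The point is that the device discloses
# only *that* the user is a minor, never who the user is.

RESTRICTED_BRACKETS = {"minor"}

def should_restrict(request_headers: dict[str, str]) -> bool:
    """Return True if the device self-identifies as belonging to a minor."""
    bracket = request_headers.get("X-Device-Age-Bracket", "").strip().lower()
    return bracket in RESTRICTED_BRACKETS

# Example: an age-restricted site checking the signal before serving content.
if should_restrict({"X-Device-Age-Bracket": "minor"}):
    print("Show the age-gate page; no ID upload or identity check required.")
```

The adult path is the default: a device that sends no signal is treated like any device today, which is why this approach adds no new privacy risk for adults.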
Does this go far enough?
The urge to protect our children is great, and as a parent, I completely understand why many would suggest broader laws that mandate specific content restrictions and hold companies responsible when they should have known that a child was using their services. Yet no substantive federal law has been able to pass, and a variety of critics have raised important and well-founded concerns about KOSA in its current form. While I cannot say for certain what would happen if my suggestions were to be adopted, I can offer two reasons to be optimistic.
First, technology companies have generally made more progress in reducing user harm by focusing on design-based solutions that improve the user experience than by relying on content-based enforcement. At Facebook, removing (some) engagement incentives for political content improved perceived value and reduced anger reactions across all civic content, whereas Facebook’s work on its definition of hate speech could only ever affect approximately 0.05% of content, and much of that would likely be of dubious value to address. Many companies beyond Facebook have used surveys to improve user experience as a more robust way to address low-quality and potentially harmful content. User experience is also readily measurable externally, allowing society to play a meaningful role in holding platforms accountable for poorly designed products.
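To illustrate why user experience is measurable in a way outsiders can audit, here is a minimal sketch of turning survey responses into a trackable bad-experience rate. The survey question and the numbers are made up, and real measurement programs are considerably more careful about sampling and weighting.

```python
import math

def bad_experience_rate(responses: list[bool]) -> tuple[float, float]:
    """responses[i] is True if respondent i reported a negative experience
    (e.g., answered yes to "Did you see something this week you wish you hadn't?").
    Returns the observed rate and an approximate 95% margin of error, so that
    independent surveys of the same platform can be compared over time."""
    n = len(responses)
    p = sum(responses) / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, margin

# Made-up example: 120 of 2,000 surveyed users report a negative experience.
rate, moe = bad_experience_rate([True] * 120 + [False] * 1880)
print(f"{rate:.1%} ± {moe:.1%}")  # roughly 6.0% ± 1.0%
```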
Second, while more general, expansive language may seem like a powerful tool to address harm, psychological research suggests that such language tends to be interpreted in ways that serve self-interest. Companies view losing market share among youth as an existential threat, so a vague requirement to take “reasonable measures” to identify minors is likely to be interpreted loosely by people whose paychecks depend on growth. That is partly why our existing laws preventing kids under 13 from using social media are largely ineffective against tweens who simply lie about their age. As discussed, partisans are already interpreting a “duty of care” to fit their pre-existing goals.
Over the past few months, I have spoken with policymakers, technologists, and parents about these issues, and few things unite people more than protecting our children from online harm. However, technology is complex and, as critics have pointed out, legislation can easily lead to unintended consequences. I am hopeful that these suggestions for improving how a “duty of care” is implemented can help us achieve our common goals while minimizing those unintended consequences.