How Apple, Google, and Microsoft Can Help Parents Protect Children
The case for device-based age verification
Introduction from Jon Haidt:
Ravi Iyer first contacted me in 2007 to ask if he could take a questionnaire I had developed (the Moral Foundations Questionnaire) and put it online. Ravi was a graduate student in social psychology at the University of Southern California at the time, and he quickly became a close research collaborator and friend. He created the website YourMorals.org, which has so far drawn more than a million people to complete surveys on moral psychology. That data became the basis for my book The Righteous Mind.
Ravi then used his unique combination of tech skills and a Ph.D. in social psychology to work at Facebook. He began as a data science manager working on reducing policy-violating content, before realizing that moderation-based approaches would never really fix the problem. The most substantive things he did there involved upstream design changes, and he now works with technologists and policymakers to effect broader change.
So when I was writing The Anxious Generation and knew that I’d be adding a chapter on “what governments and tech companies can do now,” I turned to Ravi for help. We had numerous discussions, and Ravi wrote several of the paragraphs in that chapter. He helped me understand the simplicity of device-based age verification systems. We need universal protection for any child who arrives at a website or app, so it is very important that Congress pass KOSA (the Kids Online Safety Act) this year (see Kristin Bride’s recent post for more on the importance of KOSA). But the specific mechanism for identifying who should receive these protections is still unsettled. Device-based verification would make it far easier to protect children across online platforms without requiring parents to understand each platform individually.
Here, Ravi expands on the idea for readers of After Babel. We hope that many tech companies and legislatures will explore it and work out the details of implementation.
— Jon
The current system for protecting children online does not work. It relies on parents understanding and managing their children’s online experience across a wide variety of applications. I live in the Bay Area and have many friends who work at large technology companies. I don’t know a single parent among them who feels completely comfortable with the options that currently exist. If the people who build technology products do not know how to protect their kids, we clearly need a better solution.
Parents are left on their own to figure out how to stop strangers from contacting their children and how to prevent anonymous cyberbullying. They need to figure out how to prevent their kids from seeing something they are not ready for, in a world where 58% of teens report seeing sexually explicit content by accident and 19% of teens on Instagram report seeing unwanted sexually explicit content within a given seven-day period. And then there’s sleep: How do parents ensure their kids don’t receive notifications at 1 a.m. on a school night? Few parents feel confident addressing these real and important concerns.
The providers of operating systems (a market dominated by Apple, Google, and Microsoft) could help. It would not only be the right thing to do, but it would also be a huge relief to the many parents who want their children to have rich social lives, which today require interacting with friends who are online, but who do not have the time and energy to manage the myriad settings that exist across services. Parents need a simple way to protect their children online that doesn’t require them to know the difference between Snapchat, TikTok, and YouTube settings and how to manage each one separately. There is even a business incentive here: many parents might be *more* willing to buy a device that promises a simple solution.
Of course, the best solution might be for children to stop using these products altogether. The four norms suggested in The Anxious Generation—which include delaying entry into social media until age 16—would do a lot of good, but some youth will still need protection from technology-enabled harms even if usage is drastically reduced. Children mature continuously and at very different rates: a child is not necessarily more able to handle a smartphone on the first day of high school, or to interact productively on social media on their 16th birthday, than they were the day before. Even if legislative changes ensure that children cannot sign up for social media accounts without a parent’s permission until their 16th birthday, most families will still want an option that reduces the risk of their newly eligible sixteen-year-old receiving unwanted advances, should they choose to use social media at that time.
Some children may develop more slowly and may need extra time before fully engaging with these technologies. On the other hand, some children may benefit from earlier access. Many researchers have pointed out the benefits of social media for kids with specific support needs, such as some LGBT children, and parents of those children may want to provide earlier access. Even if others disagree with their decision, some parents may still want their children to have a smartphone at earlier ages, whether to access YouTube, which has a wealth of educational content, or to FaceTime their grandparents. Those parents may want solutions that let their children use these devices more safely.
The solution? Device-based Age Verification, discussed as “age check” in Chapter 12 of The Anxious Generation. It could provide the control that parents want without the complexity that keeps current parental settings from being widely used. The rest of this post describes device-based verification in more detail, including how it would support the adoption of current kids’ online safety legislation by addressing common criticisms.
What is Device-Based Age Verification?
Device-based Age Verification would allow a device to be designated as belonging to a user who needs added protections across all applications used on that device. To quote from The Anxious Generation:
Apple, Google, and Microsoft could create a feature, let’s call it “age check,” which would be set to “on” by default whenever a parent creates an account for a child under the age of 18. The parent can choose to turn age check off, but if on is the default, then it would be very widely used (unlike many features in current parental controls, which many parents don’t know how to turn on)... It would also allow sites to age-gate specific features, such as the ability to upload videos or to be contacted by strangers. Note that with device-based verification, nobody else is inconvenienced. Adults who visit a site that uses age check don’t have to do anything or show anything, so the internet is unchanged for them, and there is no privacy threat whatsoever.
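To make the idea concrete, here is a minimal sketch of what such a device-level signal could look like from an app’s point of view. No such API exists in any operating system today; every type and function name below is hypothetical.

```swift
// Hypothetical sketch only: no such API exists in any OS SDK today.
// The idea is that the operating system, not each app, is the source of
// truth for whether the device's user has "age check" enabled.

/// The signal a device could expose, mirroring the age-gated features
/// described above. Field names are invented.
struct AgeCheckStatus {
    let isEnabled: Bool            // set "on" by default for child accounts
    let blockStrangerContact: Bool
    let restrictUploads: Bool
}

/// Stand-in for an OS-provided query: the app asks, and the device answers
/// without revealing the user's identity or exact age.
protocol DeviceAgeCheckProviding {
    func currentStatus() -> AgeCheckStatus
}

/// How an app might branch on the signal when configuring messaging.
func configureMessaging(using provider: DeviceAgeCheckProviding) {
    let status = provider.currentStatus()
    if status.isEnabled && status.blockStrangerContact {
        print("Inbound messages limited to existing connections")
    } else {
        print("Default messaging settings apply")
    }
}
```

The key design choice is that the app never learns who the user is or how old they are; it only learns which protections to apply.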
In my current role at USC’s Neely Center, I leverage my experience as a former Meta product manager to advocate for design changes that would improve the online experience for all users, including our youth. Changes like optimizing feeds for quality over engagement would help youth use these products as tools to learn and connect, rather than as a way to escape boredom. These changes would benefit all users, but they are especially important for children, whose reward systems are more sensitive and whose impulse control is still developing. Just as offering candy to children is perceived to be more manipulative than offering it to adults, we may want to be more thoughtful about the kind of content we offer to children. They may not be able to resist sexual or violent content, even when they know it is not good for them.
Such proposals often hinge on how we identify the children we want to protect. Thanks to the leadership of Minnesota Attorney General Keith Ellison and Minnesota State Representative Zack Stephenson, we recently released a report outlining how best to protect children online, with accompanying legislation introduced in Minnesota that would put a version of “age check” into effect. The latest version of this provision reads as follows:
A device operating system provider must provide an option for a user to automatically opt in to any or all of the heightened protection requirements under paragraph (d) across all social media platforms managed by the operating system on the user's device. If a user selects the option under this paragraph, the device operating system provider must inform all social media platforms managed by the provider's operating system of the user's preference, and a notified social media platform must adjust the user’s account settings accordingly. A device operating system provider may provide a user the ability to opt out of any or all heightened protections.
A device operating system provider must, by default, consider any device with parental controls enabled to have opted in to all the heightened protection requirements under paragraph (d).
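The flow the provision describes is straightforward: the operating system records the opt-in and notifies each managed platform, which must then adjust the account’s settings. Here is a minimal sketch of that flow, with all names invented for illustration:

```swift
// Hypothetical sketch of the flow the provision above describes: the OS
// records the user's opt-in and notifies every managed platform, which
// must adjust the account accordingly. All types and names are invented.

enum HeightenedProtection: CaseIterable {
    case qualityRankedFeed
    case noStrangerContact
    case noDarkPatterns
}

protocol ManagedPlatform {
    var name: String { get }
    /// Called by the operating system when the device-level preference changes.
    func apply(_ protections: Set<HeightenedProtection>)
}

struct DeviceOS {
    var platforms: [ManagedPlatform]
    var parentalControlsEnabled: Bool

    /// Per the provision, parental controls imply opting in to all
    /// protections by default; the user may opt out of any subset.
    func notifyPlatforms(optedOut: Set<HeightenedProtection> = []) {
        guard parentalControlsEnabled else { return }
        let active = Set(HeightenedProtection.allCases).subtracting(optedOut)
        for platform in platforms {
            platform.apply(active)
        }
    }
}
```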
Unfortunately, this part of the bill is unlikely to become law in 2024, but we are hopeful that similar provisions can pass in the next session or in other jurisdictions. In the meantime, the legislative proposals on the table (e.g., the AADC and KOSA) would benefit from clearer practices for how companies can identify children, and we are hopeful that device operating system providers will eventually adopt this system voluntarily, as a service to the parents who buy their devices. Both legislation and voluntary action are more likely with public support, and we hope that readers of The Anxious Generation can be part of efforts to enact device-based age verification by requesting this feature from the device manufacturers they support financially. Allowing parents to identify children at the device level would unlock protections for vulnerable youth across jurisdictions.
How Device-Based Verification Addresses Privacy Concerns
Device-based age verification would address a key barrier to the adoption of current online safety proposals. Identifying children has been a controversial part of discussions of numerous legislative proposals designed to protect kids online, including the Age Appropriate Design Code (AADC) and the Kids Online Safety Act (KOSA). Privacy advocates have argued that any restriction based on age will incentivize companies to collect more data about users, leading to privacy risks that impact all users. There are a number of potential costs associated with these privacy risks. For example, some users may ultimately choose to avoid accessing beneficial content—such as resources about mental health or medical issues—to prevent any chance that their consumption of that content becomes public.
The authors of the AADC and KOSA have been responsive to this concern and now suggest that platforms provide protection only when they already know the age of a child—thus, no additional data or identification would be collected. However, the reality is that this “knowledge” of age is rarely 100% certain. Kids can lie and say they are older. In addition, because the knowledge of age is often derived from AI systems that predict age, some young adults will likely be misidentified as children. It is not clear what the legal consequences would be for mistakes like these, and it is very likely that they will be made (in both directions).
To be clear, I support both the AADC and KOSA and think they would do a lot of good. But given the uncertainty about how each application might react to these laws—with different apps claiming or not claiming knowledge of child status—it would be beneficial to have a clear, simple way for parents to protect their children across applications.
This is where device-based verification comes in. It could remove much of the uncertainty about when an application knows that a user needs more protection. If the full principles from our report’s design recommendations eventually become law, whether through state or federal action, those using “age-check” designated devices will see the following changes:
Feeds will no longer be optimized for engagement; they will instead prioritize users’ explicit preferences or judgments of quality.
Strangers will no longer be able to contact them.
Applications will no longer use known dark patterns to encourage greater usage than these device users want.
Most importantly, these protections will apply to every application on the device, regardless of how much data any one application has about the device owner. (A brief sketch of how a feed might honor this device-level signal follows.)
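Here is a minimal sketch of the first change in that list, assuming the same hypothetical device-level flag; the post fields and scoring are invented for illustration.

```swift
// Hypothetical sketch of the first change above: when the device signals
// "age check", rank the feed by explicit preference and judged quality
// instead of predicted engagement. Scores and names are invented.

struct Post {
    let predictedEngagement: Double  // e.g., probability of a click or reshare
    let statedInterest: Double       // from explicit "show more / show less" signals
    let qualityScore: Double        // e.g., from user surveys or expert review
}

func rankFeed(_ posts: [Post], ageCheckEnabled: Bool) -> [Post] {
    if ageCheckEnabled {
        // Optimize for what users say they want and what is judged to be
        // high quality, rather than what they are most likely to click on.
        return posts.sorted {
            ($0.statedInterest + $0.qualityScore) >
            ($1.statedInterest + $1.qualityScore)
        }
    }
    return posts.sorted { $0.predictedEngagement > $1.predictedEngagement }
}
```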
Being Realistic About Parental Responsibility
Expecting parents to be involved in every decision necessary to create a safe online environment for their children is unrealistic. We do not expect parents to make such detailed decisions in offline settings. Parents drop their kids off at school trusting that those responsible will keep the environment safe and appropriate. Schools take that responsibility seriously by preventing strangers from having unfettered access to the children on campus and by ensuring that the materials available to students are appropriate and educational. Parents do not have to manage every potential risk at school separately; they entrust school administrators to make appropriate decisions across a wide variety of contexts. Apple, Google, and Microsoft now manage much of our children’s lives. They could help create a similar environment on our children’s devices: a single setting that extends appropriate protections across applications, replacing the current unworkable patchwork of per-app controls.
To follow Ravi’s work, check out and subscribe to the Substack, Designing Tomorrow. It is a joint newsletter by the Neely Center for Ethical Leadership and Decision Making at USC Marshall and the Psychology of Technology Institute (PTI).