Helping Society Absorb Technological Change
With developments in AI, society is grappling with the pace of technological change. Here we float ideas, such as improving privacy defaults and increasing transparency, that could help us all cope.
In our latest Neely Center polling, Americans continue to be more concerned than excited about the increasing use of AI in society. Recently, the Senate heard testimony from technology company executives and civil society groups as it works toward a societal response. Advances in AI are unsettling to many, and observers have suggested that society will struggle to absorb so much technological change.
We are not going to stop technological progress—nor necessarily should we, given its potential for improving the human condition. What we can do is give ourselves more time to absorb and adapt to change, both by anticipating future issues so that we can address them in the present, and by using smart precautions to push the realization of those concerns further into the future. We arguably failed to do this for social media. Below are two ideas, on which we would welcome feedback, for anticipating risk in a world of increasingly capable AI.
Privacy by Default
We recently participated in an event to help inform potential legislative solutions, hosted by Laurie Segall and organized by the Center for Humane Technology. As a demonstration of the potentially unsettling effects of AI, the organizers asked an AI system to design a social media campaign to discredit Laurie. It generated a series of fake messages and images, based on her prolific public materials, to falsely suggest that she had gained access to technology leaders inappropriately. The images and text used existing details and information to great effect, and Laurie later confessed to being uncharacteristically shaken by the demonstration.
Laurie is a public figure whose information is readily accessible. For most of us, our online information is scattered and hard to access, and it is not worth the effort for the few people who may wish us ill to piece it together into such a campaign. However, in a world where AI companies are racing to train their systems on as much data as possible, those conditions are rapidly changing. We have already seen instances where AI systems reveal private information, and bad actors are already using AI systems to manufacture reality.
Fortunately, we have prior experience with stopping bad actors from creating scaled smear campaigns. Troll farms have used human labor to target public figures online in just this way. In response, in select high-risk environments, companies have made it easier for vulnerable individuals to “lock their profile” to reduce bad actors’ ability to access their information. Now is the time to make a “locked profile” the default, not just on social media, but for all our online information, with respect to AI training.
Individuals did not create their online information with the intention of making it readily accessible to AI systems that could be used to target them. We should therefore restrict the training of AI systems to data that has been proactively agreed to by those who created it, not just by those who happen to own it today. Should merchants be able to sell our rewards card data to AI systems? Should Amazon be able to train on the data from our security cameras? We did not create our online information in a world of AI, and so we did not consent to having it included in these more powerful systems.
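One way to picture the “locked by default” idea is as a consent flag attached to each piece of user-created data, with training pipelines drawing only on records whose creators have explicitly opted in. The sketch below is purely illustrative; the Record class, the consent_to_train field, and select_training_data are hypothetical names, not an existing standard.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    """A single piece of user-created content with a consent flag.

    consent_to_train defaults to False: the "locked profile" default,
    so data is excluded from AI training unless its creator opts in.
    """
    creator_id: str
    text: str
    consent_to_train: bool = False  # locked by default

def select_training_data(records: List[Record]) -> List[Record]:
    """Keep only records whose creators proactively agreed to AI training."""
    return [r for r in records if r.consent_to_train]

# Example: only the explicitly opted-in record is eligible for training.
corpus = [
    Record("user_a", "a public blog post"),               # excluded by default
    Record("user_b", "an essay", consent_to_train=True),  # creator opted in
]
print(len(select_training_data(corpus)))  # -> 1
```

The design choice that matters here is the default: absent an affirmative signal from the person who created the data, it stays out of the training set.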
We don’t know what AI systems will be able to do with this data or whether this information could “leak”, even from the best-intentioned companies. This should not be a large barrier to using AI for medical breakthroughs or reducing climate change, since science generally does not require personal information to make progress. It should, however, slow development that relies upon personally created information and empower actors, lawyers, artists, musicians, and others whose livelihoods are threatened to adjust to a world of AI via consent. Maybe someday we will feel that the benefits of AI outweigh the risks of having our information included in model building. In the meantime, this should give us the time and space to adjust, rather than being forced to participate by companies competing for financial gain.
AI Development Transparency
A recurring theme in Congress’ conversations about regulating AI is the possibility that rapid progress in the technology could pose new, civilizational-scale risks. In one recent AI hearing, Senator Joe Manchin asked witnesses about AI’s potential to help bad actors create bioweapons; in another, OpenAI CEO Sam Altman cautioned that “if this technology goes wrong, it can go quite wrong.” A recent paper with co-authors from several major AI companies described tests being run to detect capabilities including hacking, weapons acquisition, and self-proliferation.
Lawmakers naturally want to act on warnings this dire. But any attempt to tackle these anticipated risks will be hampered by the many uncertainties at play: will any of these dangerous capabilities actually be developed? If so, which ones, when, and in what kinds of AI systems? Trying to lock in answers to these questions now is a tricky proposition; yet sitting back and ignoring the issue means courting disaster if the warnings do come true.
Using transparency and reporting requirements to increase government visibility into tech companies’ most advanced AI systems offers a middle road that, at the very least, could help people feel more comfortable with the resulting change. Below are three specific ideas for how transparency could be implemented.
Advanced System Registration
AI developers could be required to notify a regulatory body—perhaps a dedicated commission for advanced AI—of any new cutting-edge system. So-called “scaling laws” in AI mean that companies may be able to predict in advance whether a planned training run will meet or beat the most advanced existing systems, and could register their plans ahead of time. In other cases, it may only become clear later in development that a given AI model is state of the art, so its developer could register it retroactively.
In both cases, the team responsible for creating the AI could be required to share details of the system: the training data and computational power used to build it, the evaluation techniques used to understand what it can do, and the risk management approaches used to ensure it is safe.
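To make this concrete, here is a minimal sketch of what such a filing might contain, with a rough compute estimate based on the commonly cited ~6·N·D approximation for dense transformer training (N parameters, D training tokens). The field names, the TrainingRunRegistration class, and the “ExampleLab” model are hypothetical, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class TrainingRunRegistration:
    """Hypothetical fields a developer might file with a regulator."""
    developer: str
    model_name: str
    parameter_count: float        # N, number of trainable parameters
    training_tokens: float        # D, number of training tokens
    data_sources: list            # high-level description of training data
    evaluations_planned: list     # capability and safety evaluations
    risk_management_summary: str  # mitigations, access controls, red-teaming

    def estimated_training_flops(self) -> float:
        """Rough compute estimate using the common ~6*N*D approximation
        for dense transformer training."""
        return 6 * self.parameter_count * self.training_tokens

# Example: a made-up 70B-parameter model trained on 1.4T tokens.
reg = TrainingRunRegistration(
    developer="ExampleLab",
    model_name="example-70b",
    parameter_count=70e9,
    training_tokens=1.4e12,
    data_sources=["licensed text", "opted-in user data"],
    evaluations_planned=["bio-risk uplift", "cyber capabilities"],
    risk_management_summary="staged release with external red-teaming",
)
print(f"{reg.estimated_training_flops():.2e} FLOPs")  # ~5.88e+23
```

A filing of this shape would let a regulator see, before or shortly after a training run, roughly how capable a system is expected to be and what safety work accompanies it.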
Data Center Usage Transparency
While most cutting-edge AI development today is done within large companies, much of the research on how to build advanced systems is freely available online. This means that at some point it may be possible for rogue actors—who wouldn't comply with registration requirements—to build very sophisticated systems. Training cutting-edge AI models, however, requires vast amounts of computing power—often running hundreds or thousands of specialized chips for weeks or months at a time. Additional information sharing from large data center providers could therefore bolster the robustness of transparency requirements.
Companies such as Meta and Google that run their own data centers could be required to provide information on their data center usage. Cloud companies such as Amazon Web Services and Microsoft Azure could be required to keep records of customers using their services in large quantities, perhaps including having customers verify their identities (as is standard practice in the financial sector) and attest to how they are using the computation in question. While it still may be possible to network computing resources outside of a data center, such transparency requirements should at least slow down the pace of development by irresponsible actors who are unwilling to be transparent about their goals, thereby buying society more time to adjust.
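As a rough illustration of how a provider might operationalize this, the sketch below aggregates each customer’s accelerator-hours and flags those above a reporting threshold for identity verification and a usage attestation. The threshold, the UsageRecord class, and the function name are all assumptions made up for this example, not figures from any proposal.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical reporting threshold: accelerator-hours in a rolling window
# above which a cloud provider would verify identity and collect an
# attestation of intended use. The number is illustrative only.
REPORTING_THRESHOLD_CHIP_HOURS = 100_000

@dataclass
class UsageRecord:
    customer_id: str
    chips_used: int
    hours: float

def customers_requiring_review(records: List[UsageRecord]) -> List[str]:
    """Return customers whose aggregate accelerator-hours exceed the threshold."""
    totals: Dict[str, float] = {}
    for r in records:
        totals[r.customer_id] = totals.get(r.customer_id, 0.0) + r.chips_used * r.hours
    return [c for c, total in totals.items() if total > REPORTING_THRESHOLD_CHIP_HOURS]

# Example: a customer running 1,024 chips for a week crosses the threshold;
# a small 8-chip job does not.
usage = [
    UsageRecord("startup_a", chips_used=8, hours=40),
    UsageRecord("lab_b", chips_used=1024, hours=168),
]
print(customers_requiring_review(usage))  # -> ['lab_b']
```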
Development Transparency
AI system development is largely a matter of trial and error, and as at all technology companies, experimental results are key to iterative progress. Companies routinely run experiments that compare models varying in their training data, training flow, and reward functions. Knowledge of how these systems perform, both in terms of utility and in terms of mitigating potential harm, comes largely from the results of these tests. If society wants to play a meaningful role in steering these technologies toward beneficial systems, it needs access to results from internal product development experiments, where planned changes are tested against the existing baseline. Such tests are largely used to ensure that planned changes achieve business outcomes, but society can insist that outcomes of public interest are fully considered as well. Access to such information can also help society craft meaningful, specific regulation aimed at designing safer future products.
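As an illustration of what access to internal experiment results could look like in practice, here is a minimal sketch of a report that pairs the business metrics a company already tracks with societal ones a transparency regime could require alongside them. Every field name and number below is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ExperimentReport:
    """Hypothetical summary of one internal A/B test comparing a planned
    model change against the current production baseline."""
    experiment_id: str
    change_description: str
    # Business outcome the company already tracks
    engagement_delta_pct: float
    # Societal outcomes a transparency regime could require alongside it
    policy_violation_rate_delta_pct: float
    self_reported_harm_delta_pct: float
    shipped: bool

# Made-up example values, for illustration only.
report = ExperimentReport(
    experiment_id="exp-0042",
    change_description="new reward function for the ranking model",
    engagement_delta_pct=+1.8,
    policy_violation_rate_delta_pct=+0.6,
    self_reported_harm_delta_pct=+0.3,
    shipped=True,
)
# With reports like this in hand, an outside reviewer could ask whether a
# change shipped even though harm-related metrics moved in the wrong direction.
print(report)
```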
Partnership, not Prevention
The above steps are not, by themselves, going to prevent harm from highly advanced AI systems. But given the novelty of these systems, such steps can buy us time to begin to adjust to a world of increasingly powerful AI and develop more specific tools to address potential harm. The point is not to halt development, which may in any case be impossible given the global, open-source nature of AI research, but rather to slow it down and provide the necessary time to reduce the massive information asymmetry that currently exists between industry and government, thereby equipping society to better understand, anticipate, and respond to potential threats from increasingly advanced AI systems. Unlike with the development of social media, closing this asymmetry could allow society to play a more meaningful role in the development of these products.
Written in collaboration with Helen Toner, who is a director at Georgetown’s Center for Security and Emerging Technology and also serves on the board of directors of the OpenAI nonprofit.