The Solutions to Ethical Issues with Generative AI Will Be Social
...and therefore social science input will be sorely needed. In addition to this article, please keep scrolling for an exciting event invitation, as well as a list of recent scholarly articles.
Generative AI is one of the hottest new trends in the application of AI. Stability AI, the company behind Stable Diffusion, recently announced a seed round of over $100 million, indicating the appetite for betting on this technology as something that can be truly transformative. This is not necessarily a far-off future: in a recent podcast, for example, Casey Newton described how he already uses generative AI regularly to produce images for his newsletter.
What is generative AI? It is the use of AI to help people create new content (e.g. text or images) based on previous examples. It involves training large models on the voluminous data we humans generate, so that you can describe the text or images you want and have a machine generate them. The results may not be perfect when you are illustrating or writing something, but they can certainly jumpstart the creative process, and as these tools get better, we are not far from a world where their output will be indistinguishable from human-generated content.
Try some of these tools out here and here; below are some examples.
Image from the prompt “people being chased by a giant bunny”
Image from the prompt “people being chased by a giant bunny in the 80s”
Text generated from the paragraph above.
What is it good for? Generative AI has a wide range of applications, from writing speeches and news articles all the way to drawing cartoons and making new, yet related, memes.
How does it work? In generative AI, we have a large dataset of content, like millions of articles or millions of images. A model is trained on that data to learn its patterns, so that when you supply a prompt, it can produce new text or images resembling what it has seen.
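To make that concrete, here is a minimal sketch of prompting pre-trained text and image models yourself. It assumes the Hugging Face transformers and diffusers libraries (and their model weights) are available; the model names are illustrative defaults rather than the specific tools linked above.

```python
# A rough sketch (not the specific tools linked above) of prompting pre-trained
# generative models with the Hugging Face transformers and diffusers libraries.
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# Text: a language model trained on a large corpus continues whatever prompt it is given.
text_generator = pipeline("text-generation", model="gpt2")
result = text_generator("Generative AI is", max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])

# Images: a diffusion model trained on image-caption pairs renders a text description.
image_pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = image_pipe("people being chased by a giant bunny").images[0]
image.save("giant_bunny.png")
```

Both calls work the same way as the examples in this post: the model has learned patterns from its training data and extrapolates them to whatever prompt you provide.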
Text auto-completed from examples of voter fraud described in this Washington Post article.
A man in Virginia who voted twice.
Same story, different state. David Lee Crawford was arrested in Virginia on the suspicion of voting twice. He, too, claimed to be a Trump campaign worker testing the security of the electoral system. This time, there's no suggestion that he was lying; he says he was just confused about state voting laws.
These tools clearly still make mistakes, but they are good enough that we can already see a future where many ethical issues will arise; indeed, some already have.
Among the ethical issues for Generative AI are:
Copyright / Plagiarism
Job displacement
Non-consensual use of a person’s image, including in sexual situations
Creating misinformation
Representing diversity
Identity Theft / Impersonation
What all of the above have in common is the need for social coordination on solutions (perhaps aided by technology) rather than primarily technical solutions. These technologies are already advanced enough that technical identification of generated content is difficult, and it is only a matter of time before it is impossible.
Developing social solutions will require agreement on how to fairly use such technology. Addressing copyright claims requires coming to a collective understanding of what constitutes “fair use”, as well as how students might misuse such technology. As more and more people are displaced from jobs, we may need to consider how we distribute goods so that those who happen to own the AI systems that do this work don’t accumulate disproportionate wealth relative to those who generate the data the systems are trained on, especially given that some amount of preference and life satisfaction is based on one’s position relative to others. Understanding people’s conceptions of fairness is also essential for determining representation in generative AI. If someone were to ask for a “sumo wrestler”, it may make sense to present largely Japanese images, given the origin of that sport, but if someone were to ask for a “doctor”, would we want to reflect the existing demographic distribution or something more equally distributed? Such questions are already being addressed in different ways by different companies, and social scientists have an opportunity to inform these debates before these technologies become more widely adopted.
One solution for non-consensual imagery that leverages technical capabilities, but still requires social coordination to enact, was raised by Professor Hany Farid in answer to a question at the Trust and Safety Research Conference: people could upload their images to a database to let the world know that they do not consent to their images being used in certain situations (e.g. anything sexual). Since facial recognition is generally quite good, this should enable people to prevent their images from being distributed broadly on platforms that agree to leverage such a database. A similar approach, leveraging collective databases, could conceivably help with misinformation, impersonation, and identity theft by allowing verification of how and by whom media is generated (see this initiative or the discussion of provenance in this article). Yet these solutions will again require social coordination and force us to wrestle with the tradeoffs between a completely unrestricted internet and one with more structure around how information is trusted and conveyed. As we create new structures of communication, providing ideas based on the functionality of existing structures will be essential. Current platforms often lack basic social mechanisms like reputation, negative social feedback, and identity, which often function as speed bumps to the spread of misinformation in offline life.
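As a rough illustration of the registry idea (purely a hypothetical sketch, not Professor Farid’s actual proposal or any existing system), a participating platform could compare face embeddings of uploaded media against a database of people who have registered their non-consent. The embedding function below is a placeholder for whatever face-recognition model a platform already uses; the class name and threshold are invented for illustration.

```python
# Hypothetical sketch of a non-consent registry: people register a photo to say
# they do not consent to certain uses of their likeness, and participating
# platforms check media against the registry before distributing it.
from typing import Callable
import numpy as np

class NonConsentRegistry:
    def __init__(self, embed_face: Callable[[object], np.ndarray],
                 match_threshold: float = 0.9):
        # embed_face is a placeholder for any face-recognition model that maps
        # a photo to a fixed-length vector; match_threshold is an arbitrary example.
        self.embed_face = embed_face
        self.match_threshold = match_threshold
        self._embeddings: list[np.ndarray] = []

    def register(self, photo) -> None:
        """A person adds their face to signal non-consent."""
        self._embeddings.append(self.embed_face(photo))

    def is_registered(self, photo) -> bool:
        """Platforms that agree to use the registry call this before distribution."""
        query = self.embed_face(photo)
        for emb in self._embeddings:
            cosine = float(np.dot(query, emb) /
                           (np.linalg.norm(query) * np.linalg.norm(emb)))
            if cosine >= self.match_threshold:
                return True
        return False
```

Even with this machinery, the hard part is the social one: getting platforms to agree to query such a registry at all, deciding who governs it, and handling disputes when a match is wrong.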
Nobody knows what the future of generative AI will be, but we do know that it is coming soon. Now is the time for behavioral scientists to help ensure that future is an ethical one. Do you have ongoing research that would be relevant to this effort? If so, we would love to hear about it. More broadly, if you would like to be part of more such discussions, please subscribe to this newsletter and/or contact me (riyer@psychoftech.org), as we will soon be launching more avenues for our members to have these discussions.
—
Join us on November 4th for an online workshop on Technology, Trust, and Democracy
How is tech influencing our ability to trust each other and maintain a healthy democracy? To discuss this critical question, we’ve convened a set of experts: Jonathan Haidt, Frances Haugen, Shankar Vedantam, Pia Shah, Talia Stroud, and Kamy Akhavan. Find out what they have to say (and ask them questions) on Nov. 4, 9:30am-12:45pm PST, live online -- no recording will be available. Register (and find more details) here. The event is co-hosted by The Psychology of Technology Institute, USC’s Neely Center, and the Behavioral Science & Policy Association. There is a small fee to register; please let us know if you need assistance. The fee will be fully waived for doctoral students, thanks to generous support from USC's Neely Center (find the code here).
—
Articles we are reading:
Compiled by Joo-Wha Hong, Human-AI Interaction Researcher at the USC Marshall School
Hye Min Kim | New Media & Society | 2020
Body posting has been a major trend on Instagram. It is a crucial matter because body satisfaction, particularly among teenagers, is easily influenced by such postings. The question, however, is who and what shapes people’s conception of an “ideal” body. This paper examines how idealized body imagery is constructed and perceived on social media.
Jiyoung Park and Sang Eun Woo | The Journal of Psychology | 2022
People have different attitudes toward AI, and there are many explanations for this. Studies suggest attitudes can be shaped by personal experience, exposure to information, or even social class. This research instead approaches AI preference through people’s personalities. If you wonder whether your dislike or liking of AI comes down to your personality, check out this article.
Where AI Can — and Can’t — Help Talent Management
Jessica Kim-Schmid and Roshni Raveendhran | Harvard Business Review | October 13, 2022
Hiring and keeping the right people has always been a top priority for companies. With the emergence of AI, organizational decision makers are increasingly employing algorithms for talent management. However, these AI tools come with pros and cons. This article provides helpful guidance on what to consider when using AI for human resource management.
The White House just unveiled a new AI Bill of Rights
Melissa Heikkilä | MIT Technology Review | October 4, 2022
As AI evolves, there has always been concern about misuse of the technology, and many scholars have stressed the importance of legal efforts to protect people from unethical uses of AI. This article describes the White House’s recent answer to that call: a blueprint for an AI Bill of Rights that promotes AI accountability. The framework may significantly change how AI is generally perceived and understood.
Is Having Too Many Choices (Versus Too Few) Really the Greater Problem for Consumers?
Nathan Cheek, Elena Reutskaja, Barry Schwartz, and Sheena Iyengar | Behavioral Scientist | October 3, 2022
Technological evolution has made more information accessible to us, leading to more choices. There are even studies suggesting that an overwhelming number of choices can have a negative influence on consumers. But do people actually think they have more options than they can handle? And if so, is that worse than not having enough choices? This article examines how consumers react to having too many or too few choices.
Social Media and Mental Health
Luca Braghieri, Roee Levy, and Alexey Makarin | SSRN | July 2022
This paper provides quasi-experimental estimates of the impact of social media on mental health by leveraging a unique natural experiment: the staggered introduction of Facebook across U.S. colleges. It finds that the roll-out of Facebook at a college increased symptoms of poor mental health, especially among students predicted to be most susceptible to mental illness, and led to increased utilization of mental health services.