Announcing Our 2023 Dissertation Award Winners
Please join us in congratulating Sakshi Ghai, Gordon Heltzel, Julia Spielmann, Do The Khoa, Jens Paschmann, and Nirajana Mishra, the winners and honorable mentions of our 5th annual Dissertation Awards.
Long before I joined the Psychology of Technology Institute staff, I was an active member of the community and appreciated the organization's efforts to highlight opportunities for scholars to study technology in service of a better future. As part of the organization's continued work to build capacity for understanding and improving the human-tech relationship, we have recognized promising graduating scholars with awards for outstanding dissertations for the past four years. We are now excited to announce our 2023 winners, who represent the best and brightest of our network's future.
Please join us in congratulating our three 2023 Psychology of Technology Dissertation Award winners and learning why we’re excited for their continued contributions to the study of the human-technology relationship.
Diversifying Social Media Research: A New Culturally Informed Approach
Sakshi Ghai, University of Cambridge - LinkedIn
Why we’re excited: Sakshi’s dissertation abstract began by noting that “More households in developing countries own a mobile phone than have access to electricity or clean water” (World Bank Report, 2016), yet most of our existing research on emergent technologies occurs outside the Global South, where most of the world’s population and most social media users live. Using a mixed-methods and meta-scientific approach, she investigated how to close this research gap, applied her approach to the topic of violence prevention for children, and showed how online platforms may be facilitating harm in these contexts. We’re excited to see her continue to push the field to study technology’s impact in the Global South.
Actual and Anticipated Reactions to Cross-Party Political (Dis)Engagement
Gordon Heltzel, University of British Columbia - LinkedIn
Why we’re excited: Gordon’s dissertation research investigated the disconnect between the desire we have for politicians to engage across parties and the reward structure of our online systems, which often incentivizes cross-party dismissal. Across 10 studies, mixing lab research with on-platform studies, he not only illuminated the phenomenon, but dove into its origins and illustrated that the disconnect we see online is not inescapable. Some people are willing to reward politicians who engage with the out-party. Just maybe not on Twitter. We’re excited to see him continue to explore how online environments distort the incentives that we feel in real life.
Gender Stereotypicality Fosters Preference for and Credibility in Artificial Intelligence
Julia Spielmann, University of Illinois - Google Scholar
Why we’re excited: Julia’s dissertation looked at how the increasing use of AI could end up both reinforcing gender stereotypes and creating new ones. Across four studies, Julia found that people preferred AI voice systems where the gender of the voice matched the associated gender stereotype of the domain being discussed. They also deemed these voices more credible. This research illustrates the tension between designing a system that most matches user expectation and preference and one that does not reinforce gender stereotypes. We’re excited to see her continue to explore how increasingly common AI systems may interact with societal stereotypes.
Please join us in congratulating the above winners. They are in good company with our previous winners, who have gone on to contribute great things to the field.
Above is a Picsart.com AI generated image for “Psychology of Technology Dissertation Award Winner” - Please note that the use of AI generated images does not constitute our endorsement of them. Rather, we use these images, in part, to illustrate the flaws that still exist in such technologies including obvious errors as well as the obvious perpetuation of stereotypes that this photo represents. In the present post, we thought the contrast between the diversity of our award participants with the stereotypes promoted by AI was noteworthy and further illustrates the need for increased research on how to improve the human-technology relationship. We will endeavor to be clearer about how we are using such images (as an example of AI's flaws vs. as endorsement) in the future.
In addition, we would like to acknowledge our three 2023 Psychology of Technology Dissertation Award Honorable Mention recipients.
Helping or Hurting: Can Assertive Language for Virtual Agents Help in Online Healthcare?
Do The Khoa, National Tsing Hua University, Taiwan - LinkedIn
Why we’re excited: Do explored the interaction between language, regulatory focus, and anthropomorphism in chatbots that provide medical advice. Given the increasing proliferation of AI-powered chatbots, we are excited to see him continue work exploring the specific impact of particular chatbots on people’s adoption of their advice.
Essays on Digital Customer Engagement with New Technologies
Jens Paschmann, University of Cologne - LinkedIn
Why we’re excited: Jens conducted a series of studies designed to create a data-driven understanding of how to 1) mitigate privacy concerns that act as psychological barriers to the adoption of new technologies, 2) manage the consequences of subversive consumer intentions on digital platforms, and 3) foster lasting engagement with mobile technologies by providing psychological rewards through gamification. We’re excited to see him continue to explore how businesses can responsibly provide value for consumers of new technology.
Likes or Legacies? How Goals Impact the Value of User-Generated Content on Social Media
Nirajana Mishra, Yale University - LinkedIn
Why we’re excited: Nirajana conducted nine studies focused on the perceived value of user-generated content produced on social media, finding that content that was shared was perceived to be less valuable than content that was preserved for nostalgic reasons. We’re excited to see her continue to explore how technologies can provide more personal value to users.
The world is full of challenges and anyone following recent news about AI will undoubtedly be concerned about the future of the human-tech relationship. We are hopeful that hearing about these scholars’ work gives you a moment of optimism as you consider how some of our most promising graduating scholars are tackling important problems across cultures and contexts.
—
Below are a few announcements from across the Psychology of Technology network:
We recently had the opportunity to do this podcast for Lawfare’s Arbiters of Truth series, which builds upon the arguments we have been making that stakeholders should focus on regulating design, not speech. We also organized a panel at Yale’s “Beyond Moderation” conference, put on by their Justice Collaboratory, that dove into specific examples of healthier/safer design. These efforts are definitely gaining traction amongst interested technologists, academics, and policy makers and we expect to continue to have news about progress in this domain.
Roshni Raveendhran, a lead advisor for our institute and professor in the Leadership and Organizational Behavior Area at the University of Virginia’s Darden School of Business, is hiring for a post-doctoral research associate position to begin in Fall 2023. They are especially interested in candidates working in any of the following areas: psychology of technology, implications of artificial intelligence (AI) in organizations and society, new ways of working, and/or creativity and ethics. If you are interested, please apply here.
The Trust and Safety Research Conference will be held at Stanford on September 28-29, 2023. Registration sold out last year and will open in June. They are currently soliciting applications for speakers. Applications are due April 30th and more information can be found at this link.
Among the papers we have read recently, this paper is one we wanted to highlight. It attempted to forecast which jobs are most likely to be lost due to generative AI. If anyone has other papers or thoughts on the likely downstream impacts on societal health, I'd be glad to read them. Helping people navigate this coming future world and resolving tensions productively is something we will continue to work on within the Psychology of Technology Institute network, as part of our Promoting AI Alignment initiative.
We will be speaking later this week at the University of Michigan’s Social Media and Society in India conference, which may be of interest to people who are especially focused on technology’s impact in the Global South. Our planned talk will discuss how design-based approaches are especially important in international contexts, where companies are even more likely to make moderation mistakes. The event is April 7-8, and both in-person and online participation options are available.
We will be presenting a paper on social media’s algorithmic role in polarization and conflict, in collaboration with Jonathan Stray from UC-Berkeley and Helena Puig Larrauri of Build Up, at the Columbia Knight First Amendment Institute’s conference on algorithmic amplification on April 28th and 29th. The event is open for RSVPs for people interested in attending.
Dear readers - I added a note to the caption regarding the image used above to clarify that we do not endorse the AI-generated images we use, but rather use them to illustrate the flaws that still exist with such technologies. Previous posts have had obvious misspellings, and this post obviously perpetuates stereotypes. However, I should have been clearer that our use was to illustrate the flaws of AI, not to endorse this particular photo, and I apologize for not being clearer there.