Toward a more constructive conversation about technology.
I'm joining the Psychology of Technology Institute and USC's Neely Center to help address specific "root level" issues with technology platforms by applying the latest psychological research.
Friday was my last day at Meta, where I worked for several years because I felt it was the best place to make an impact on the issues I cared about. I disagreed with many decisions there, but I also learned a lot alongside thoughtful people, many of whom continue to do the best they can for society with the tools they have. We certainly only scratched the surface of what is possible and necessary, but we did take some steps in the right direction. Much of what remains to be done applies not only to Meta but also to YouTube, TikTok, and startups yet to be founded, and involves decisions that really should not be left up to any one group at any one company. I left in part to help make that conversation broader and more constructive.
That’s exactly what I’ll be doing as the inaugural Managing Director of the Psychology of Technology Institute, a project of USC’s Neely Center for Ethical Leadership and Decision Making. At Meta, it was my honor to support dozens of researchers and data scientists in improving our platform, but there are thousands more in the world who should be part of that conversation. Too often, companies and outside critics speak different languages, using different metrics and proof points to reach different conclusions in separate venues. When public discussions do happen, they often come from a place of accusation or defensiveness rather than a focus on solutions.
As part of broadening the conversation toward finding solutions, let me make two specific suggestions, referencing current discussions of platforms’ impact in Iran and on the Israeli-Palestinian conflict.
Consider “root level” solutions to criticisms:
Criticism can motivate needed change, but it is most useful when it is rooted in a process that can lead to a better future. Consider the recent human rights impact assessment that Meta commissioned concerning events in May of 2021 between Israel and Palestine. Rightly, critics have pointed to many issues identified in the report, including “the lack of a Hebrew (hostile speech) classifier”, “the contours and details of (policies about) praise for and glorification of violence”, “a possible insufficient routing of Arabic content by dialect”, and “content-neutral “break the glass” measures…that intentionally reduced the visibility of all repeatedly reshared content”.
The BSR report has many recommendations, which are a good start. However, if we consider how we might apply them more broadly, the issues uncovered involve far more fundamental decision points than the report can address. Meta has committed to a Hebrew hostile speech classifier, but true parity across languages and dialects is not actually possible, since the training data and context that underlie these systems are not fully controllable. In Arabic, should there be one classifier, or separate classifiers for each major dialect? What is the long-term responsibility of platforms for the performance of language-based AI systems? Is it measured by parity across conflicts? Or by reduction of harm? How would we apply the same framework to the Azerbaijan/Armenia conflict? When should platforms lean into measures that do not rely on language, such as reducing the visibility of repeatedly reshared content? These are deeper “root level” questions that go beyond this one report or conflict, and they need to be answered by a broader group of stakeholders so that all technology companies can make more sustainable progress. Otherwise, we risk attempting to make progress one language or conflict at a time, without an agreed-upon framework or goal.
Use specific cases to uncover the tradeoffs involved in any root issues:
While it remains important to understand the broad picture, solutions are often found in the specific case. An engineer presented with the problem of “censorship” will not know where to start. But give that same engineer a specific piece of content, and they can trace the systems that led to it going viral undeservedly or being removed inappropriately. In my final days at Meta, a colleague highlighted the case of videos that were disappearing from WhatsApp. These are systems I am less familiar with, but I was able to see for myself that WhatsApp engineers were indeed working to “do anything within our technical capacity to keep our service up and running”. Having worked with numerous people with family in Iran during my time at Meta, I was not surprised that others were well ahead of me. However, the specific issue was harder to diagnose because it was unclear what specific piece of content was being reported as removed. There could have been network issues or systematic issues, echoing the human rights report discussed above. More broadly, the more specifically people can frame an issue with a service, in terms of identifiers and examples of the content in question, the accounts involved, and the exact UI experience, the faster we can get to the root cause and work together on solutions. Specifics force us to make concrete choices when there are no perfect solutions (e.g., when is it ok to praise violence within a conflict setting?), rather than retreating to less complex absolutes (e.g., violence is bad and freedom of speech is good).
The issues that society faces with regard to technology companies are complex, and many people I respect have highlighted important problems to be addressed. I think the next phase of this conversation is to get more concrete about what we want from technology platforms, because even if we removed the worst actors, we would face the same issues with whoever replaced them. Addressing “root issues” and getting more specific about the tradeoffs involved are two ways this conversation can become more productive. Research on how people think about the decisions to be made, and how those decisions affect their well-being, will be central to resolving them equitably.
In my new role, I look forward to contributing to this conversation. In upcoming newsletters, we plan to dive deeper into some of these issues and the specific tradeoffs behind them, leveraging the expertise and research of our wider network. Please join us, and if you appreciate these thoughts, share them to help us grow our community.