The Fear Factor: Better Understanding Online Discussions About Crime and Safety
Today we are sharing a guest post by our friends at Yale's Justice Collaboratory, who combined our Neely Social Media Index data with their own research to better understand online discussions of crime.
This article was also recently featured in Tech Policy Press.
Something that is qualitatively apparent to anyone who uses different platforms is that social media is not a monolith. Platforms have different designs, values, user bases, network structures, and governance approaches, which all come together to produce wide-ranging experiences.
The Neely Social Media Index – produced by the University of Southern California Marshall School’s Neely Center – tracks, over time, how people are experiencing different social platforms, both positively and negatively. The Index can shed light not just on the rate at which one platform's user base has negative experiences compared to another's, but also on what is driving those experiences. Leveraging this, we can begin to better understand the nuances of these experiences across platforms and use these insights to propose alternatives and solutions that alleviate these issues.
For example, in one recent wave, a relatively similar percentage of Nextdoor and X users reported having experienced something negative (22.7% and 20.6%, respectively), but very different issues are driving those experiences: Nextdoor users primarily report that crime is the topic of their negative experiences, whereas X users report that politics is.
Over this past summer, a small team at the Justice Collaboratory – including Matt Katsaros, director of its Social Media Governance Initiative, research assistant Andrea Gately, and research fellow Jessica Araujo – spent time trying to better understand how people discuss crime on Nextdoor. For this article, we teamed up with PhD candidate Ishita Chordia, an HCI researcher who has been studying the way fear manifests on platforms like Nextdoor and the potential for using design to decrease fear of crime.
For our work this past summer, Nextdoor provided a sample of 1,000 posts, and the 7,717 comments on them, that users had posted about crime and safety, for us to analyze independently. We developed a set of categories and labeling guidelines aimed at understanding the ways neighbors discuss crime on the platform. Examples of these categories include the type of event being discussed (e.g., property crimes, drug and order crimes, or merely suspicious behavior being reported) and whether the post author included any call to action for their neighbors. Our team members used these guidelines to label a small set of content, and we then employed ChatGPT to apply the labels to the full set of 1,000 posts. Lastly, we looked at metadata on the posts and comments (reactions, comment counts, and Jigsaw toxicity scores) to examine correlations between our labels and platform engagement. While this platform data provided insight into how crime is discussed on the platform, we also partnered with the Neely Center to analyze open-ended responses from the Index. These survey responses, independent of the posts and comments from Nextdoor, gave us much richer insight into how people actually perceive conversations about crime and safety on Nextdoor.
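To make the labeling step concrete, below is a minimal sketch of how an LLM-assisted labeling pass like this might look. The category names, prompt wording, and model choice are illustrative assumptions rather than our exact guidelines or setup.

```python
# Illustrative sketch of an LLM-assisted labeling pass. The categories,
# prompt, and model name are hypothetical stand-ins, not the study's
# actual codebook or configuration.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = {
    "event_type": ["property_crime", "drug_or_order_crime",
                   "suspicious_behavior", "other"],
    "call_to_action": [True, False],
}

def label_post(post_text: str) -> dict:
    """Ask the model to apply the labeling guidelines to one post."""
    prompt = (
        "You are applying a codebook to neighborhood crime-and-safety posts.\n"
        f"Categories and allowed values: {json.dumps(CATEGORIES)}\n"
        "Return a JSON object with exactly one value per category.\n\n"
        f"Post: {post_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; the team used ChatGPT
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Usage: apply to each sampled post text.
sample = ["A catalytic converter was stolen on our street last night."]
labels = [label_post(p) for p in sample]
```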
Ishita worked with adults who regularly use safety platforms like Nextdoor and have a dysfunctional fear of crime, defined as a fear of crime that negatively impacts quality of life without motivating precautionary behavior that improves safety. She began by interviewing 16 adults in one neighborhood in Atlanta to understand how design decisions impact users' quality of life and their sense of safety. She then validated these results by surveying 64 adults across the United States who had a dysfunctional fear of crime.
After spending time working to understand how local safety issues are discussed online, we can now share insights from the research, along with specific recommendations in response to each insight.
Finding 1: Negative experiences around crime are driven by the discussion in the comments, not the post itself
We looked through the open-ended responses in the Neely Social Media Index from participants describing their experiences on Nextdoor. These follow-up questions ask participants to briefly provide an example of “one experience on Nextdoor that personally affected you negatively,” as well as positive experiences, such as “one experience on Nextdoor where you learned something useful or which helped you understand something important.” One of the more interesting takeaways from this exercise is that crime was among the more common themes in both positive and negative experiences. On the negative side, people complained about racial profiling and unproductive discussions of crime that often devolve into name-calling among neighbors. On the positive side, however, we read responses appreciative of the crime discussions on the platform. One respondent said, “I use Nextdoor to keep up with events/crime happening in my community. It has been helpful to connect with nearby neighbors to help each other out.”
While many people find the platform valuable for staying informed about their neighborhood, the consensus among respondents was that the primary source of dissatisfaction wasn’t the crime and safety posts themselves, but rather the disrespectful and toxic discourse that often unfolds in the comments. This includes victim-blaming, fear-mongering, name-calling, and generally rude tones, which not only fail to offer constructive advice or solutions to serious problems but also drive animosity among neighbors. For one respondent, these discussions led to quite unfavorable opinions of their neighbors: “I think comment sections in general are atrocious, but ones that are based on locality can be even more heinous. I hate seeing the stupid uninformed opinions of the people that I unfortunately need to call neighbors.”
Given these insights, we draw from the Neely Center’s Design Code for Social Media to make a couple of suggestions for platforms like Nextdoor where people discuss local crime and issues of public order:
It’s clear that there is a divide among users: some really do care about seeing posts and discussions about crime in their neighborhood, while others simply do not. The first code instructs platforms to “Allow users to easily and explicitly indicate content they do or do not want, and respect users’ explicit preferences even if contradicted by users’ engagement.” We think this is a crucial path to follow for discussions about crime. While some platform operators may prefer to predict which posts are about crime and then make platform-wide decisions about how to rank those posts (either promoting or demoting them), we think this approach is not optimal given what we have observed. Instead, platforms should give those who want to see this content a way to explicitly control their feed, and those who want to avoid these toxic discussions a way to do so.
The previous suggestion to offer a control over this content relies on the platform having some way to detect which posts are about crime and safety and which are not. While it can be helpful for a platform to leverage machine learning to identify which crime and safety posts to place behind this control, we also think it is worthwhile to involve users themselves. Platforms should allow post authors and/or viewers to tag posts related to crime and safety; these tags can provide training data for the model while also putting more control in the hands of platform users.
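As a minimal sketch, such a preference-respecting filter might look like the following; the field names and threshold are hypothetical, not any platform’s actual data model.

```python
# Sketch of an explicit, user-controlled topic filter. Field names and
# the score threshold are hypothetical assumptions.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    model_crime_score: float = 0.0  # classifier's probability the post is about crime/safety
    user_tags: set = field(default_factory=set)  # tags added by the author or viewers

@dataclass
class UserPrefs:
    show_crime_posts: bool = True  # explicit setting, never overridden by engagement signals

def is_crime_post(post: Post, threshold: float = 0.7) -> bool:
    # User-applied tags take precedence over the model and double as training data.
    if "crime_safety" in post.user_tags:
        return True
    return post.model_crime_score >= threshold

def visible(post: Post, prefs: UserPrefs) -> bool:
    return prefs.show_crime_posts or not is_crime_post(post)
```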
Providing clear guidelines for posting about neighborhood safety can encourage more helpful and less fear-inducing content. Such guidelines, or proactive education when creating a crime and safety post, could foster a more supportive and informative community spirit. Nextdoor notably invested in such a feature at a time when the platform was receiving increased criticism for racial profiling and bias. The platform partnered with Jennifer Eberhardt’s SPARQ lab at Stanford to create a guided crime and safety post-creation flow that aims to reduce implicit racial bias in posts about crime. An extension of this idea could help post or comment authors create more respectful dialogue around crime. It’s worth noting, however, that these features are only useful if users actually adopt them: in our analysis, only 28% of crime and safety posts went through this carefully designed post-creation flow. Platforms need to promote and encourage the use of these tools so that their effects are fully realized across the platform.
Aligned with the Neely Center’s Design Code #2, it is important to avoid optimizing conversations about crime for simple engagement measures, and instead to optimize for users’ stated preferences or for subtler measures like their affect toward their neighbors. Crime posts generate heated, vigorous discussion among neighbors, which feeds the simple engagement-based feed algorithms many platforms operate, which in turn surface even more (toxic) discussion. Optimizing for other measures would likely point platform operators toward different approaches to these conversations, such as adding friction to slow down heated back-and-forth comments, or simply closing a comment thread once a conversation has gone too far off the rails.
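Continuing the sketch above, a ranking function that privileges stated preferences over raw engagement, plus a simple circuit-breaker for overheated threads, might look like this; the weights and thresholds are illustrative assumptions.

```python
# Sketch of preference-first ranking and a thread circuit-breaker.
# Weights and thresholds are illustrative; `is_crime_post` and the
# post/comment attributes come from the sketch above.
def rank_score(post, prefs) -> float:
    engagement = post.reactions + post.comment_count
    if is_crime_post(post):
        # The user's explicit preference dominates; engagement barely matters.
        return (1.0 if prefs.show_crime_posts else 0.0) + 0.01 * engagement
    return float(engagement)

def should_lock_thread(comments, toxicity_threshold: float = 0.8,
                       max_toxic: int = 5) -> bool:
    # Close the thread once too many comments cross a toxicity threshold
    # (e.g., the Jigsaw toxicity scores used in our metadata analysis).
    toxic = sum(1 for c in comments if c.toxicity >= toxicity_threshold)
    return toxic >= max_toxic
```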
Finding 2: Not all crime and safety posts are the same – platforms should allow more nuanced filtering of crime posts
In the work at the Justice Collaboratory, we spent a lot of time categorizing the posts people made about crime and safety in their neighborhood. For example, one of our categories was aimed at identifying what type of event was being discussed. We found that about 50% of the posts discussed a theft or property crime (e.g., a stolen catalytic converter), while about 25% didn’t mention any specific crime but rather some suspicious behavior the post author had observed. Some people report getting value out of safety posts on platforms like Nextdoor that provide proactive notice before something happens (for example, extreme weather warnings). As such, another category captured the time of the event described in the post relative to when it was posted. Nearly all of the crime and safety posts we analyzed discussed an event that had already happened or was currently happening; it was exceedingly rare to see posts describing something that would happen in the future.
Similarly, we had one label to capture where the event being described happened relative to the post author’s location. For this label, we found that about 60% of posts described an event that happened at or immediately near the home of the post author. In interviews, Ishita found that users are often presented with posts that are not relevant to their safety concerns, and that they have to sift through a high volume of crime information and determine for themselves which posts are actually relevant.
These findings highlight the potential not just to enhance user experience but also to reduce dysfunctional fear by introducing more sophisticated filtering options for these types of crime and safety posts. This approach can help tailor the content to individual preferences and needs, thus improving the relevance and usefulness of the information provided. Some recommendations to implement these approaches are:
Platforms should present only the most timely and proximate posts so that users can focus on the incidents most relevant to their safety concerns. In interviews, some users mentioned that safety incidents occurring within the last 24 hours and within their immediate neighborhood are the most relevant. When people discuss criminal activity that happened at their home, the relevance of the post quickly diminishes the further you get from the post author’s location. Platforms should consider limiting the visibility of these crime posts to immediate neighbors, or (again drawing from code #1) give people control over such a setting.
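As a sketch, a recency-and-proximity filter built on the thresholds interviewees mentioned (the last 24 hours, the immediate neighborhood) might look like this; the distance cutoff is an illustrative assumption.

```python
# Sketch of a recency + proximity relevance filter. The 24-hour window
# comes from interviewees; the 1.5 km radius is an assumed stand-in for
# "immediate neighborhood."
import math
from datetime import datetime, timedelta, timezone

def km_between(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in kilometers (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def is_relevant(post_time, post_lat, post_lon, viewer_lat, viewer_lon,
                max_age=timedelta(hours=24), max_km=1.5) -> bool:
    # post_time must be timezone-aware (UTC).
    recent = datetime.now(timezone.utc) - post_time <= max_age
    nearby = km_between(post_lat, post_lon, viewer_lat, viewer_lon) <= max_km
    return recent and nearby
```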
We developed ten different categories for analyzing these crime and safety posts, including: the type of crime or event, the location of the event, the time of the event, whether police had been involved or contacted, and whether the post author was making a call to action to the community. These categories could serve as rich user filters, letting people control which posts they see. To take it further, allowing both post authors and community members to add tags to posts can enrich this categorization process. Collective tagging can lead to more accurate and comprehensive labels, reflecting the diverse perspectives of the community, and the ability to update or add tags as situations evolve ensures that the information remains current and relevant.
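A hypothetical tag schema covering these categories (the names are ours, not any platform’s API) might look like the following.

```python
# Hypothetical tag schema for the categories described above; names and
# structure are illustrative, not a platform data model.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class EventType(Enum):
    PROPERTY_CRIME = "property_crime"
    DRUG_OR_ORDER_CRIME = "drug_or_order_crime"
    SUSPICIOUS_BEHAVIOR = "suspicious_behavior"
    OTHER = "other"

class EventTime(Enum):
    PAST = "past"
    ONGOING = "ongoing"
    FUTURE = "future"  # rare in our sample, but the most actionable

@dataclass
class SafetyTags:
    event_type: Optional[EventType] = None
    event_time: Optional[EventTime] = None
    near_author_home: Optional[bool] = None
    police_contacted: Optional[bool] = None
    call_to_action: Optional[bool] = None
    community_tags: set = field(default_factory=set)  # free-form tags from viewers
```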
Finding 3: Provide a balanced perspective
In interviews with Nextdoor users, Ishita found that the quantity and content of safety-related posts can contribute to a dysfunctional fear of crime. Users were aware that highly engaging content is disproportionately presented in their feeds and wanted a more “balanced” perspective. The following recommendations are based on direct feedback from users:
Encourage sharing good news and enhance the visibility of good news. Users that Ishita spoke with consistently shared that they would like to see more good news. They described good news as community events, celebrations, and generally wholesome content. Seeing posts about “good Samaritans” or local heroes also helped balance the belief that their neighborhoods were crime-ridden. One way that platforms like Nextdoor could achieve this is by modifying their newsfeed algorithms to enhance the visibility of good news.
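As a minimal sketch, a feed re-rank that boosts posts labeled as good news could look like the following; the label source and boost factor are assumptions, not any platform’s algorithm.

```python
# Sketch of a good-news visibility boost at re-rank time. The
# "good_news" tag and the boost factor are illustrative assumptions.
def rerank_with_good_news_boost(posts, base_score, boost: float = 1.5):
    """Sort posts by base_score, multiplied for good-news content."""
    def score(post):
        s = base_score(post)
        if "good_news" in getattr(post, "user_tags", set()):
            s *= boost
        return s
    return sorted(posts, key=score, reverse=True)
```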
Encourage users to post updates on their crime posts, including when a situation is resolved. Interview participants shared that updates and resolutions would give them a sense of “closure.” Without such information, participants shared that it’s easy to feel that the situation is getting worse and that crime is going unchecked. Nudging users to update their posts can be one way to encourage information sharing.
Encourage the sharing of crime prevention information. We found that nearly all of the safety-related posts discussed an event that was occurring or that had already happened. Providing information about local trainings, city council meetings regarding safety, or seasonal information about crime can encourage proactive action rather than fear-based responses.
Finding 4: Improving discourse regarding unhoused people
Another theme identified in the Neely Social Media Index responses about negative experiences on Nextdoor was the discourse surrounding unhoused people. The primary concern is the predominantly negative nature of these discussions, often exacerbated by the lack of representation from the unhoused community, who typically do not have the same access to the platform. This imbalance leads to one-sided and often prejudiced conversations in which those being discussed cannot present their perspectives or defend themselves. Our recommendation is therefore to promote a more balanced, respectful, informed, and empathetic discourse regarding unhoused individuals on Nextdoor and other platforms. Implementing the recommendations below can not only improve the user experience but also be pivotal in changing the narrative by encouraging positive community action toward the unhoused population.
Given how quickly conversations about unhoused people can turn negative, platforms should consider monitoring these conversations more closely. The goal is to identify and moderate discussions about the unhoused that become overly derogatory, include dehumanizing language, or are based on misinformation.
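Since our metadata analysis already drew on Jigsaw toxicity scores, one sketch of such monitoring could pair a crude topic match with Perspective API scores to queue comments for human review. The keyword list and threshold are illustrative; the request shape follows Perspective’s public documentation.

```python
# Sketch: flag derogatory comments in unhoused-related threads for
# human review via Jigsaw's Perspective API. Keywords and the threshold
# are illustrative assumptions.
from googleapiclient import discovery

client = discovery.build(
    "commentanalyzer", "v1alpha1",
    developerKey="YOUR_PERSPECTIVE_API_KEY",  # placeholder
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def toxicity(text: str) -> float:
    """Toxicity score in [0, 1] from the Perspective API."""
    body = {"comment": {"text": text}, "requestedAttributes": {"TOXICITY": {}}}
    resp = client.comments().analyze(body=body).execute()
    return resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def flag_for_review(comments, threshold: float = 0.8):
    # A production system would use a real topic classifier, not keywords.
    topic_terms = ("homeless", "unhoused", "encampment")
    return [c for c in comments
            if any(t in c.lower() for t in topic_terms)
            and toxicity(c) >= threshold]
```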
Strengthen feedback and reporting mechanisms to allow users to report harmful discourse. This feature should be coupled with clear guidelines about respectful and constructive conversation on sensitive topics like homelessness.
Introduce educational content and awareness campaigns on the platform to foster empathy and understanding about the challenges faced by unhoused individuals. This can include factual information about homelessness, stories of individuals who have experienced homelessness, and resources on how community members can help.
Build partnerships with outreach workers or representatives of the unhoused community. Early findings from Ishita’s new study suggest that the presence of an outreach worker on platforms like Nextdoor or Facebook can allow for a more balanced conversation, where these representatives can offer counter-narratives, highlight resources, correct misinformation, and share experiences and insights.
Negative discourse around local crime is not a new phenomenon brought about by platforms like Nextdoor. But, as with many other issues across social media, local platforms like Nextdoor can either faithfully reproduce these existing issues and inequities in the online space, or they can use their leverage to design for more balanced and respectful conversations between neighbors. What we see in the Neely Social Media Index responses is that discussions around local crime provide real value for some users while constituting some of the most negative experiences on the platform for others. Given this polarized perception, it is imperative to understand more deeply how these conversations take place on the platform and how they are perceived by different groups, and to think creatively about ways to put control in the hands of people on the platform so they can more easily curate their own experience of these conversations.
The Justice Collaboratory would like to thank Ravi Iyer and the USC Marshall School’s Neely Center for sharing the data from their Neely Social Media Index and for support in this project.
About the authors:
Jessica Pardim Araujo
Jessica Pardim Araujo is a Postbaccalaureate Fellow at the Justice Collaboratory at Yale Law School. Her interests lie in examining how morality, decision-making, and inequality influence well-being and trust.
Ishita Chordia
Ishita Chordia (she/her) is a PhD Candidate at the University of Washington Information School. Her dissertation investigates the ways that neighbors share information about crime and safety online, and how such information-sharing influences people’s perceptions of their neighbors and their neighborhood. She is passionate about how we can collectively create equitable, resilient, and sustainable communities. Ishita holds an MS in Computer Science from the Georgia Institute of Technology and a BS in Economics from Duke University.
Andrea Gately
Andrea is an Undergraduate Research Assistant at Yale Law School. She is a double major in Humanities and Women, Gender, and Sexuality Studies.
Matt Katsaros
Matt Katsaros is the Director of the Social Media Governance Initiative at the Justice Collaboratory at Yale Law School.