This is truly awesome data and analysis, thank you! Wow, that heatmap grid is an actual goldmine of info.
Thank you, Louis! I love heatmaps, but tend to be wary that they may contain *too much* information and be hard for people to read/understand.
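For illustration only, here is a minimal sketch of the kind of annotated heatmap that keeps a dense grid readable, assuming pandas and seaborn; the platforms, years, and values below are made-up placeholders, not the post's data:

```python
# Hypothetical sketch: a small annotated heatmap with placeholder values.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Made-up sentiment scores per platform and year (not real data).
data = pd.DataFrame(
    {"2021": [0.12, -0.05, 0.30], "2022": [0.10, -0.08, 0.25], "2023": [0.08, -0.12, 0.22]},
    index=["Facebook", "X/Twitter", "Nextdoor"],
)

# Annotate each cell and center a diverging palette at zero so readers can
# decode values without cross-referencing the colorbar for every cell.
sns.heatmap(data, annot=True, fmt=".2f", cmap="RdBu_r", center=0)
plt.title("Hypothetical sentiment by platform and year")
plt.tight_layout()
plt.show()
```

Cell annotations and a diverging palette centered at zero are common tricks for keeping a dense grid legible even when it carries a lot of information.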
These results are interesting in light of broader concerns about the influence of AI. It's too soon to say whether AI helps or hurts the way content is consumed and its impact on society. Until there are economic disincentives to deter misinformation or vitriol, I suspect things will get worse overall and we will end up with much lower trust not just in social media but in all institutions by the end of the decade.
I agree that it's too early to see much impact of the newer AI tools in these data. Given the rapid proliferation of these tools into so many different consumer products and relatively user-friendly public interfaces, I suspect the prevalence of AI content on social platforms will grow exponentially. One place I've been surprised by the volume of it is on YouTube, where many faceless "product-review" channels are popping up and spamming affiliate links and low-utility reviews (e.g., only providing positive feedback on every product).
Societal trust has been declining for a while now, with the slope steepening in the past 10 years. I do see that trend continuing at least until there are major changes to incentives on platforms and in media/politics.
Thanks for reading!
I apologize in advance if I missed something in this awesome post and analysis. Isn't there also the possibility that users are becoming desensitized, or that it now takes a greater degree of negativity for content to register as negative?
Thank you, Gordon! I just updated the text so every mention of NextDoor should have the double capitalization. This afternoon, I'll update the functions I wrote to process the data and generate the figures so that the same capitalization appears in all new graphs/plots.
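For illustration, a minimal sketch (not the actual pipeline) of keeping the display spelling in one place so the data-processing code and every figure stay consistent, assuming Python with pandas; the dictionary, function, and column names are hypothetical:

```python
# Hypothetical sketch: one source of truth for platform display names.
import pandas as pd

# Single place to define the spelling used in text and figures
# (value here is illustrative; use whichever spelling is settled on).
DISPLAY_NAMES = {"nextdoor": "NextDoor"}

def normalize_platform_labels(df: pd.DataFrame, column: str = "platform") -> pd.DataFrame:
    """Map lowercased platform labels onto their canonical display spelling."""
    out = df.copy()
    out[column] = out[column].str.lower().map(DISPLAY_NAMES).fillna(out[column])
    return out
```

The idea is simply that changing the spelling later means editing one dictionary entry rather than every figure-generating function.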
Sorry for the confusion, Matt. It's actually the single capitalization that's right: "Nextdoor" is the correct spelling.
Ha! I'll update the text then, and not change my code to process the data and generate the viz.
Classic "no good deeds goes unpunished" stuff Matt. But thanks for coming update. Meanwhile, I'll delete this annoyingly pedantic thread.
Whoops. I guess deleting the first comment on Substack doesn't delete the thread that follows (probably the right decision on their part). But definitely feel free to delete all these comments, Matt. And thanks again for all your work on this.