2 Comments

"Certainly, we should blame those who are profiting off of the attention generated by these riots and inciting division. But as long as our algorithms reward such behavior, getting rid of them and/or their content will be a never ending task."

Very good. Also thanks for all the references, very valuable.

I see a lot of discussion about free speech, censorship, etc., but not enough about the role of the algorithms that amplify speech. Do you know of good work that brings these two topics together?

I sometimes call algorithmic amplification "reverse censorship". Sure, censorship is something we don't like, but algorithmic amplification is also a form of censorship. We need to think about the two topics together and propose solutions for both of them.


Thanks! I'm not sure if I'd use the word censorship, but algorithms definitely encourage some kinds of speech (e.g. fear of others) and discourage others (e.g. "boring" points of view).

I should have added links in the post, but here is a paper on algorithms and conflict in general, which includes a few design ideas (https://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media), as well as a paper documenting industry knowledge on how we might optimize for alternatives to engagement (https://arxiv.org/pdf/2402.06831).
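
To make the amplification point concrete, here is a minimal sketch of the difference between ranking a feed by predicted engagement alone and ranking by an objective that trades engagement off against something else, in the spirit of the second paper above. Everything here is hypothetical: the `Post` fields, the `predicted_divisiveness` score, and the `penalty` weight are illustrations I made up, not anything specified in either paper.

```python
# Hypothetical illustration: an engagement-only ranker surfaces divisive
# posts first, while a blended objective can dampen that effect.
# All field names, scores, and weights below are invented for the sketch.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float    # e.g. a model's predicted clicks/comments
    predicted_divisiveness: float  # e.g. an outrage/toxicity score in [0, 1]

def rank_by_engagement(posts):
    # The classic feed objective: show whatever people will react to most.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_by_blended_objective(posts, penalty=0.7):
    # One alternative: subtract a divisiveness penalty from the engagement
    # score. The penalty weight is a policy choice, not a fixed constant.
    def score(p):
        return p.predicted_engagement - penalty * p.predicted_divisiveness
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("calm explainer", predicted_engagement=0.4, predicted_divisiveness=0.1),
    Post("outrage bait", predicted_engagement=0.9, predicted_divisiveness=0.9),
]

print([p.text for p in rank_by_engagement(posts)])         # outrage bait first
print([p.text for p in rank_by_blended_objective(posts)])  # calm explainer first
```

The point of the sketch is only that the ranking function is a choice: the same two posts appear in opposite orders depending on what the platform optimizes for.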
