2 Comments

Seems that this will ultimately come down to competition between AIs: the "good" AIs policing the work of the "bad" AIs that are driving the negative impacts. It may turn out like automated stock trading, which moves faster than people can keep up with. I have less faith that we little carbon-based life forms will figure out a way to corral this before it gets wildly out of hand. Seems like some organization like Microsoft could sell AaaS, "Authenticity as a $ervice," along with the other functions needed to implement some of the ideas in the post.


I think point 1 is the most important.

When you say "Clarify that platforms can sometimes be liable for the choices they make and the content they promote". Is it not better to "Legislate that platforms can sometimes be liable..."? I mean, let's get to solutions and push them into law (which obviously requires tons of forethought, compromise, and deliberation).

Point 2 might put severe limitations on big companies that, though justified, would not be doable for smaller sites. Either you have to employ heavy technology to recognize AI content (creating a new arms race), or you have to be very strict up front with content creators who would try to upload AI content. Smaller sites would do neither, and in a sense they may then have an advantage over the mainstream sites. In which case, the Big Tech companies will fight this sort of regulation tooth and nail.

Point 3 would be great. Seemingly not too hard to push through either.

As for the "age of internet adulthood", this also sounds really important, but how feasible would it be to implement? Can we really implement it well and not create the same nonsense and reverse side-effects as with the drinking age?
