This, via @davidgerard, is a #bluesky gobsmacker.
First there's the breathtaking confidence that this exciting AI thing will solve the problem. As if Twitter didn't have literally hundreds of ML engineers working on this for years. YEARS.
Then there's the notion that you could possibly create AI magic without doing enough of the manual work to really understand it. Nope!
Third, there's apparently a belief that a magic technical solution exists for social problems so complex that they're demoralizing and hard to deal with.
Fourth, I see no recognition that they should have had at least some starter solutions before letting users on at all.
Lastly, there's no recognition that anti-abuse work, hard though it is, is not nearly as hard as dealing with the abuse itself.
As an aside, I'm clearly going to have to write something up about Mastodon's handling of abuse, which I need to investigate properly. Does anybody have resources they like? People to talk to? Incidents to examine?