@simon …as long as nobody invents any new languages or techniques
Even if they keep training new models, will they be able to overcome their own poisoning of the well with AI slop?
It feels to me like we’re in a temporary awakening before the world’s greatest corpus of language is ruined
@llimllib I don’t believe in the “model collapse” idea personally; AI models have been deliberately training on “synthetic data” for the last 12 months with increasingly impressive results
How quickly models can pick up new tech is definitely an interesting question - I’ve been pasting dozens of pages of documentation directly into them with good results, e.g. this example https://gist.github.com/simonw/97e29b86540fcc627da4984daf5b7f9f
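The "paste the docs into the prompt" pattern mentioned above can be sketched in a few lines. This is a hypothetical illustration, not the code from the linked gist: `docs_text` stands in for pages of pasted documentation, and the prompt wording is an assumption.

```python
# Sketch of prompting a model about tech it was never trained on,
# by prepending raw documentation to the question.

def build_prompt(docs_text: str, question: str) -> str:
    """Combine pasted documentation with a question so the model
    answers from the supplied context rather than its training data."""
    return (
        "Here is documentation for a library you may not know:\n\n"
        f"{docs_text}\n\n"
        f"Using only the documentation above, answer: {question}"
    )

# Hypothetical example usage
prompt = build_prompt(
    "newlib.connect(url) opens a session and returns a Session object.",
    "How do I open a session?",
)
print(prompt)
```

The resulting string would then be sent to whichever model you're using; with a large context window, "dozens of pages" of documentation fit directly in `docs_text`.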