@simon @carlmjohnson it worries me a little that I, with just a passing familiarity with what gradient descent is and how ML model training works, can easily predict each new PR catastrophe and “misbehavior” of these models, while the people doing the actually phenomenally complex and involved work of building them seem constantly blindsided and confused by how the tools that *they are making* behave