@skysailor @inthehands Could you say more about the "documentable" bit?
Context for the question: every time you train a large language model you get a different set of weights, and those weights determine what the resulting model does, even if you run the training against the exact same source data set. In essence, you can't quite predict what your trained model will do just by looking at the source data and the training parameters, so it's not too far from the HR folks silently thinking and feeling after all.
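For illustration, a minimal sketch of that non-determinism (assuming PyTorch; the tiny network and toy data are just stand-ins for a real model and corpus): two runs over identical data with identical hyperparameters, differing only in random seed, end up with different weights.

```python
import torch
import torch.nn as nn

# Fixed toy "training set", identical for every run.
data_gen = torch.Generator().manual_seed(42)
x = torch.randn(64, 8, generator=data_gen)
y = x.sum(dim=1, keepdim=True)

def train_once(seed: int) -> torch.Tensor:
    torch.manual_seed(seed)  # only the initialization randomness differs between runs
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(200):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    # Flatten all learned parameters into one vector for comparison.
    return torch.cat([p.detach().flatten() for p in model.parameters()])

w_a, w_b = train_once(seed=0), train_once(seed=1)
print(torch.allclose(w_a, w_b))  # False: same data and hyperparameters, different weights
```

The two models may behave similarly on most inputs, but nothing in the data or the training recipe tells you exactly where they'll differ.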
@dkalintsev @inthehands But in addition to being able to inspect the training set, you can test the trained model, and even do so without it "knowing" it's being tested (whereas if you handed a human a bunch of test resumes mid-lawsuit, they'd probably alter their behavior).
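A sketch of what such a blind test might look like, as a paired audit: feed the model resumes that are identical except for the applicant's name and compare the scores. Here `score_resume` is a hypothetical stand-in for whatever screening model is under test, and the name pairs are illustrative.

```python
from typing import Callable

# One resume template, so paired inputs differ only in the name.
RESUME_TEMPLATE = (
    "Name: {name}\n"
    "Experience: 5 years backend development\n"
    "Education: BS Computer Science\n"
)

def audit(score_resume: Callable[[str], float],
          name_pairs: list[tuple[str, str]]) -> list[float]:
    """Return per-pair score gaps; a consistent nonzero gap is evidence of bias."""
    gaps = []
    for name_a, name_b in name_pairs:
        a = score_resume(RESUME_TEMPLATE.format(name=name_a))
        b = score_resume(RESUME_TEMPLATE.format(name=name_b))
        gaps.append(a - b)
    return gaps

# Hypothetical pairs chosen to differ only in a demographic signal.
pairs = [("Emily Walsh", "Lakisha Washington"), ("Greg Baker", "Jamal Robinson")]
# gaps = audit(my_model_scorer, pairs)  # the model can't tell an audit from real traffic
```

Unlike the human reviewer, the model scores the audit resumes exactly as it would score real ones.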