@simon @carlmjohnson I guess I also object to this term because it doesn’t really describe a bug: the model isn’t really “malfunctioning,” as I put it, either. The goal it’s optimizing toward is “believability.” Sycophancy and sandbagging are not *problems*; they’re a logical consequence, and a workable minimum-resource execution, of the target being optimized. It bugs me that so much breathless prose is spent describing false outputs as defects when bullshit is *what LLMs produce by design*.
@simon @carlmjohnson If it accidentally wastes resources telling the truth where a more-compressible lie or error would have satisfied the human operator, that’s a failure mode! It will eventually be conditioned out in future iterations, although an endless game of whack-a-mole will ensue as trainers try to pin the model to *particular* “test” truths (which is exactly what “sycophancy” is) while all the others decay.