@FenTiger yeah, 100% agreed. The "new method" comment stood out to me as well as an example of "this thing is bad at writing documentation by the standards of bad documentation"
@hrefna @FenTiger what worries me even more is that this is a "good case", with all the techniques we have at the moment.
Everything I have tested recently, across all the LLMs out there, shows me that answers on the same subject tend to converge, even between very different models.
I don't have data, but my theory is that they are all trained on more or less the same datasets, so their answers and capabilities converge.
It's going to keep looking like this: inadequate in anything but the simple cases.