@thomasfuchs I don't think that it's ChatGPT's results that are wrong, but the person interpreting them.
LLMs do a pretty good job of storing collocation probabilities! If only we could persuade people to see them that way...
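(An aside for anyone unfamiliar with the term: "collocation probabilities" can be made concrete with a toy sketch. The corpus and plain bigram counts below are illustrative assumptions only, not how any actual LLM stores them.)

```python
from collections import Counter

# Toy corpus; real collocation work needs far more text.
corpus = "the used car ran well . the car used too much fuel .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p_next(w1, w2):
    """P(w2 | w1) estimated from raw bigram counts."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

# Word order matters: "used car" and "car used" are different collocations.
print(p_next("used", "car"))  # P(car | used)
print(p_next("car", "used"))  # P(used | car)
```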
@benado @thomasfuchs Collocation is so tricky – one of my more frequent genres of editorial comment is to explain how the order of two words (e.g. "the used car" vs "the car used") affects the meaning. There are so many word pairs affected this way. Maybe, because so many make it past authors, reviewers and editors into the academic literature, they could eventually become interchangeable. I am not saying all LLM evangelists are thieves who are OK stealing artists work but I am saying that I dont care to find out 99 times out of 100 |
@libroraptor @thomasfuchs As a non-native English speaker, I actually use ChatGPT a lot for collocation research.
Other than that, I never trust anything it tells me; rather, I use it to teach me relevant jargon or keywords on a given topic, from which I'll start my real search.
Pretty good tool IMO, as long as I don't overestimate it.