@david_chisnall the trick is that _you_ are the copilot (the peer reviewer): you need to verify what the large language model copied from an unknown, untrusted, unverified source, or worse, where it tries to fill in the blanks because of a slight mismatch between what you wanted and what it "saw" somewhere else, without understanding the context. Misinformation in such systems is already bad, but blind trust makes it worse... LLMs are definitely not "intelligent"; they're still working on that...