@textfiles It's the "gets things wrong" that's the hard part, though.
You're not sitting there while the car says "the correct thing is to drive through this pedestrian" so you can reply "no no, that's wrong".
Rather, the AI says "the book has clothing such as curtains" and you have to decide whether that's off-the-wall enough to throw the rest of its claims into doubt, or even whether you can transfer human concepts of fallibility...
@textfiles Or to put it another way: I can tell whether a self-driving car has done its job correctly. But if a book summarizer is wrong, I can't tell unless I read the damn book myself, which is what I'd hoped to avoid in the first place.