I strongly consider Algorithmic Intensity (AI) to be about where "self-driving" is - you really have to sit with it and take over or discard choices when it gets things wrong. And just like you wouldn't ask a "self-driving car" to "drive around to interesting places that are fun for cars" and expect a non-weird answer, this sort of approach works better with the dataset when you're just using it like a dumb Watson to your Sherlock.
@textfiles It's the "gets things wrong" that's the hard part, though.
You're not sitting there while the car says "the correct thing is to drive through this pedestrian" and replying "no no, that's wrong".
Rather, the AI says "the book has clothing such as curtains", and you have to decide whether that's sufficiently off-the-wall to throw the rest of the claim into doubt, or even whether human concepts of fallibility transfer at all...