varx/social

@textfiles It's the "gets things wrong" that's the hard part, though.

You're not sitting there with the car saying "the correct thing is to drive through this pedestrian" and you say "no no, that's wrong".

Rather, the AI says "the book has clothing such as curtains" and you have to decide whether that's sufficiently off-the-wall as to throw the rest of the claim into doubt, or even whether you can transfer human concepts of fallibility...

varx/social

@textfiles Or to put it another way, I can tell if a self-driving car has done its job correctly. But if a book summarizer is wrong, I can't tell unless I read the damn book myself, which is what I'd hoped to avoid in the first place.

Jason Scott

@varx Today it has a seatbelt, tomorrow it'll have an airbag and the day after an OnStar.

varx/social

@textfiles The day after, I still won't be able to *tell* if it has those features. :-)
