Jason Scott

I strongly consider Algorithmic Intensity (AI) to be at the stage "self-driving" is at - you really have to sit with it and take over or discard choices when it gets things wrong. And just like you wouldn't ask a "self-driving car" to "drive around to interesting places that are fun for cars" and expect a non-weird answer, this sort of approach works better with the dataset when you're just using it like a dumb Watson to your Sherlock.

varx/social

@textfiles It's the "gets things wrong" that's the hard part, though.

You're not sitting there with the car as it says "the correct thing is to drive through this pedestrian" and replying "no no, that's wrong".

Rather, the AI says "the book has clothing such as curtains" and you have to decide whether that's sufficiently off-the-wall to throw the rest of the claim into doubt, or even whether human concepts of fallibility transfer at all...

varx/social

@textfiles Or to put it another way, I can tell if a self-driving car has done its job correctly. But if a book summarizer is wrong, I can't tell unless I read the damn book myself, which is what I'd hoped to avoid in the first place.

Jason Scott

@varx Today it has a seatbelt, tomorrow it'll have an airbag and the day after an OnStar.

varx/social

@textfiles The day after, I still won't be able to *tell* if it has those features. :-)
