The student had a dilemma: she had to present her research, but the results sucked! The project failed! She was embarrassed! Should she try to fix it at the last minute?? Rush a totally different project?!?
I nipped that in the bud. “You have a •great• presentation here.” Failure is fascinating. Bad results are fascinating. And people •need• to understand how these AI / ML systems break.
4/
@inthehands Thanks for sharing, Paul — these studies are invaluable. A scientist's job isn't to “prove” something works such that disproving it is a failure; it's to take a hypothesis, test it, and then report the results.
I was wondering about the circumstances, though: wouldn't the results have been invalidated from the start due to the “manual tagging”? That already biases your dataset — the AI can only decide what the people who tagged it think a good room looks like. Or is that expected/accepted/ignored because that's just how these things are built?