“Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said - ABC News” 56 comments
@baldur It’s almost like it can’t be trusted to do anything useful and it’s just a toy that is slowly melting the planet…

@baldur "the health system complies with state and federal privacy laws." Case closed. Don't worry about it.

@baldur Well, no one saw that coming, did they?

@baldur Literally insane. That is the purpose of an LLM: to make stuff up. That's the PURPOSE. That's what it does. There are hundreds of voice transcription tools that work well, why would anyone ever want one that makes stuff up?

@baldur "AI" hype is certainly eye-opening regarding how far well-funded PR can push obviously bad things into public discourse. I am forced to look askance at people who still uncritically post positive news about ChatGPT.

@baldur Five years from now people will think we were nuts to place so much public trust in, and corporate money on, unreliable AI.

@baldur [In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” But the transcription software added: “He took a big piece of a cross, a teeny, small piece ... I’m sure he didn’t have a terror knife so he killed a number of people.”] "Take the umbrella" to "terror knife so he killed a number of people" is a hell of a mistranscription. And this is used to transcribe medical notes!? And it deletes the recording "for privacy", riiight 🤦♀️

@staringatclouds @baldur Probably a good idea not to delete the recordings; someone will need to go listen to verify that the people involved didn't suddenly start talking nonsense.

@baldur I wish they would stop using the word 'hallucination'. They are spouting garbage. Transcribing voice recordings is hard (I do it at least bi-weekly), and I see what automated systems do for the meetings I run: the output isn't even close to what is being said, and some of it is complete invention. Technical terms and acronyms are 95%+ wrong in the generated transcriptions.

@baldur "he found hallucinations in 8 out of every 10 audio transcriptions he inspected"

@baldur "But the transcription software added: 'He took a big piece of a cross, a teeny, small piece ... I'm sure he didn't have a terror knife so he killed a number of people.'"

@baldur WHICH ACADEMIC WHISPER STUDY?

@baldur IMO it's something that can easily be noticed by users: I have integrated a distilled, French-targeted Whisper model into my day-to-day usage to be able to understand what people tell me in voice messages, and even during tests I could SEE that it was making some stuff up. I'm not talking about mistakes, like mishearing one word for another. Straight-up sentences that could appear out of thin air if the speaker sighed too loudly. Thankfully I don't rely solely on the transcription...
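A minimal sketch of the kind of local Whisper setup described in the comment above, assuming the Hugging Face transformers pipeline; the checkpoint and the audio filename are stand-ins, not details from the thread:

```python
from transformers import pipeline

# "automatic-speech-recognition" is the standard transformers pipeline task for
# Whisper-family models; swap the checkpoint below for whichever distilled /
# French-targeted model you actually use (this one is just a stand-in).
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Any audio file ffmpeg can decode works here; the path is hypothetical.
result = asr("voice_message.ogg")
print(result["text"])  # the transcript, which may contain invented sentences
```

Running a few recordings you already know word-for-word through a setup like this and comparing the output is one easy way the invented sentences become visible.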
Even more than how "computers" used to be considered as magical. It's magic. It feels magic. And magic's never wrong. @baldur AI always does and will always do things like this. It must not be used in any critical infrastructure under any circumstances. @baldur same thing in ordinary business and legal transcription. That’s why we must keep audio recording and transcript. @baldur Using "AI" algorihms for anything unchecked is like thinking Bonaqua is more than a plastic bottle. @baldur "It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for 'data safety reasons,' Raison said." So Nabla is taking the irresponsibility to a new level. @baldur @inthehands I recently talked with a web developer who had been working on an application for recording biological lab values. Their group was unaccountably told to try using ChatGPT. They stopped when it immediately inserted impossible values into the data tables. @baldur I would assume that the AI doesn't really "understand" most of it, as this isn't what AI's core task is. It usually gets structured textural input. Voice recognition is a separate step that one would do before passing stuff to AI. So, it's more likely that this AI just has a somewhat good amount of lucky guesses, plus some misses by a landslide. 2/2 @baldur why do they even need AI to transcribe? To detect who is talking? Is it supposed to improve accuracy? (I'm just musing out loud) |
@baldur AI just making things up in a critical area where there are real-world consequences. Well, I never 😲