Poslovitch

@baldur IMO it's something that can easily be noticed by users: I have integrated a distilled, French-targeted Whisper model into my day-to-day usage so I can understand what people tell me in voice messages, and even during testing I could SEE that it was making stuff up.

I'm not talking about mistakes, like mistaking one word for another. I mean straight-up sentences that could appear out of thin air if the speaker sighed too loudly.

Thankfully I don't rely solely on the transcription...
Alejandro Gaita-Ariño

@Poslovitch @baldur
o.0
'In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”

But the transcription software added: “He took a big piece of a cross, a teeny, small piece ... I’m sure he didn’t have a terror knife so he killed a number of people.”'

Danny Boling ☮️

@Poslovitch

Wow. How does something so bad get integrated into so many business processes? It's incredible what they're getting away with.

Poslovitch

@IAmDannyBoling @baldur I don't know. And if I were to tear my hair out about this, I'd be bald by now 😅

To me, it's just common sense: use your tool wisely and knowingly. And knowing when to use a tool is actually more about knowing when *not* to use it: knowing its shortcomings and where it could fail you.

Well. People seem to consider all this AI tech to be pure magic. Even more than "computers" used to be considered magical. It's magic. It feels magic. And magic's never wrong.
