Agonio

@jonny now, sure, people rely on it, so clearly there is an issue, but there's a disclaimer as you open it: "ChatGPT may give inaccurate information. It's not intended to give advice". Then under the message box: "ChatGPT can make mistakes. Verify important information"

What else should they do, in your opinion, to make users understand that they shouldn't rely on it this way?

7 comments
David Gerard

@Agonio @jonny stop hyping up "AI" the way they do. the hype is a media barrage of egregious lies, and a couple of disclaimers don't cut it.

Agonio

@davidgerard @jonny yeah the media also mistakes AI for robotics and vice versa

Still, research cannot depend on what the media does, and people use ChatGPT because it's fascinating, not simply because the media pointed them to it; otherwise people would be playing those triple-A videogames that get high media ratings and that players then review as shit

David Gerard

@Agonio @jonny you phrase that as a reply but you've started arguing a completely different claim

Bornach

@Agonio @jonny
How about releasing a #SciComm YouTube video on a channel with over 600K subscribers that relies on ChatGPT for a key calculation
youtu.be/5lDSSgHG4q0?t=15m18s
and then includes its answer without verification, even though it is off by more than 12%

And maybe a follow-up video where placing unquestioning trust in the #LargeLanguageModel to generate the correct engineering parameters results in the project failing

Agonio

@bornach @jonny again, what should OpenAI do about its tools being used wrongly, contrary to their own disclaimer?
Another person using them wrong doesn't really answer my question, since I already agreed that many people use it wrongly

Bornach

@Agonio @jonny
OpenAI should collaborate with #SciComm content creators such as Plasma Channel to produce videos that highlight how using their LLM in such a laissez-faire manner could result in disaster.

But they wouldn't do that as that would negatively affect the valuation of their company in any future IPO.

Agonio

@bornach @jonny they did kind of speak "against their own interest" nytimes.com/2023/05/30/technol

Although in my opinion this was done to manipulate public opinion into regulating AI hastily and on their own terms, i.e. to lobby for laws that would shield them from competition

So again, the only real issue I see is the economic system we live in, not a text-producing tool that already carries two disclaimers against its own validity (like cigarettes)

