Simon Willison

I posted a mockup of a design change for ChatGPT that I think could help address the risk of people being led astray by its incredible ability to invent falsehoods: ChatGPT should include inline tips
simonwillison.net/2023/May/30/

10 comments
Drew Breunig

@simon Unfortunately, I'm pretty sure the absence of cues like these is very much by design.

Dave Guarino

@dbreunig @simon My experience with ChatGPT so far is that they seem to be focusing on the cues being in the model response, not out of band, no? (Not saying I don't like the idea in general, just saying it seems to me they are more interested in directly caveating by model tweak than interface, which is especially important given this affordance doesn't really work in an API by default)

Drew Breunig

@allafarce @simon Yes, this is the approach. Unfortunately, this isn't really helpful as an API response as it requires consumers (people or software) to parse out inconsistent warnings.

Simon Willison

@allafarce @dbreunig that's the approach that I don't think works well enough

I want those things to be visually distinct and shown outside of the main conversation

Drew Breunig

@simon @allafarce Same. I don't think it's in their current priority set.

They want to drive massive adoption while stoking fears to drive protective regulation. There is a steady stream of examples of Altman talking out of both sides of his mouth.

Heather

@simon I like this idea, but also know that there are people who will still overlook inline warnings like this. This should be implemented, but before anyone can use ChatGPT, they should also have to go through a training on acceptable uses and how *their* data might be used in the model.

Ölbaum

@simon I was hoping for a 500 Server Error page. That would do it.

Ed Ross

@simon
Maybe it just takes lots of use. I've played with it for long enough to know it is a giant BS generator - but once you know that, you can have fun with it. Last night I had it generate case law examples for a trial taking place in a D&D scenario I had it writing about.

Sam Wronski

@simon This doesn't really solve the problem, especially since OpenAI is advocating for more uses via their API. The output of these models has to be addressed and that's not something a web UX change can fix. The technology platform itself is unfit for the applications it is being used for.

Simon Willison

@runewake2 I'm not trying to solve the general hallucination problem here - I think just addressing the social problem of ChatGPT users being hoodwinked would be a big win
