Will Oremus

It gets weirder.

ChatGPT generated the fake scandal involving law prof Jonathan Turley in response to prompts from Eugene Volokh last week. Turley wrote about it in a USA Today op-ed on Monday.

OpenAI appears to have since addressed the issue: ChatGPT no longer names Turley when given the same prompt.

But today we tested the same prompt on Microsoft's Bing AI. And guess what...

Will Oremus

Now Bing is *also* falsely claiming Turley was accused of sexually harassing a student on a class trip in 2018.

As a source for this claim, it cites Turley's own USA Today op-ed about the false claim by ChatGPT, along with several other aggregations of his op-ed.

Will Oremus

AI chatbots don't lie on purpose. They're programmed to respond to any query, drawing on patterns of word association in their data (and search results, for Bing) to generate plausible answers. They have no idea if what they're saying is true. Yet they say it so definitively, even making up nonexistent but realistic-sounding sources when needed to back up their claims.

@Katecrawford dubs these bogus sources "hallucitations." washingtonpost.com/technology/

Will Oremus

It isn't just one law professor. ChatGPT appears to routinely fill in the gaps with falsehoods when prompted to talk about specific individuals about whom it may have limited credible data. An Australian mayor is threatening to sue OpenAI for defamation after ChatGPT told a constituent he'd been imprisoned for bribery, and the rumor spread. arstechnica.com/tech-policy/20

Will Oremus

I'm generally not that big on "hey look, I made the AI say something bad!" stories. It's programmed to please, so if you really want to make it say bad stuff, you probably can.

But with tons of folks now using AI chatbots as research tools, it seems not great if they're freely mixing facts with equally plausible-sounding lies backed up by bogus citations.

Will Oremus

And if tech firms' solution to AI's Pinocchio problem is to plug their chatbots into the internet so they can look stuff up and then confidently misinterpret what they find, this isn't going to be the last time we see a misinfo ourobouros in which one AI's false claim becomes another's documented fact.

Michael Feldstein

@willoremus I don’t disagree but will offer a little color. First, chat bots have no notion of truth. In fact, they have no notions at all. They need to work with other systems that are designed to at least represent knowledge. Second, because they’re complex and probabilistic, making these LLM chat bots work with systems that are designed for representing knowledge is hard. And third, a wide open text box is a particularly terrible UI for a system with these limitations.

Pseudo Nym

@willoremus I think "Ourobouros" is a fine name for our chatbot overlord. Much friendlier than "Skynet," still scary enough to get DARPA funding, and allegorically correct.

#AI #chatbot #fiction

Margaret Mitchell

@willoremus Thanks for this!
Also love the phrase "misinfo ourobouros"
