Jan Wildeboer 😷:krulorange:

We call stuff we don’t understand but that entertains us “magic”. The best magicians are really good at misleading us so they can surprise us; we are awed and want more. Most of their tricks are quite mundane, so we prefer not to know how they work.

That’s current “AI”/LLMs explained, IMHO. Like a good magician, it plays with our expectations. Unlike a magician, it does so in purely mechanical ways. We are awed. We think it must be more than it actually is. That’s my current position.

Jan Wildeboer 😷:krulorange:

And knowing that it IS a trick is why we love good magicians.

Plot twist: That’s why I would say that prompt engineering is like trying to find out how the trick works without really wanting to know the trick. Because that would take the magic away.

Jan Wildeboer 😷:krulorange:

And that’s also why we started saying that “AI”/LLMs are “hallucinating” when, every now and then, their trick doesn’t fool us. When we get a glimpse of how mundane the trick actually is. We don’t want the magic to be gone.

So instead of accepting the truth — the “AI”/LLM is failing, it is making up stuff as always but sometimes it misses our expectations — we pretend it’s part of the show. Because we really WANT to be awed. This is dangerous stuff, IMHO.
