Adlangx

@SETSystems damn. That abstract is hopeful. Once kids are shown how LLMs are kind of bad with information, they tend to trust them less, quickly.

DELETED

@lightninhopkins
Hopefully. You don't get a better poker face than a robot. Plus they are learning to lie.
defcon.social/@SETSystems/1124

Adlangx

@SETSystems a lie is intentional. It's human. LLMs don't lie. It's just a program. If there is an intentional lie, a person prompted the LLM to do it.

Adlangx

@SETSystems I take the point. It's one helluva disinformation machine.

DELETED

@lightninhopkins
A model that can fudge its own safety metrics is dangerous. When I say "they are learning to lie", this is primarily what I am referring to. You are correct that the language anthropomorphizes something that cannot (yet) be said to possess "intent". Intent, however, is not requisite for unpredictable "behavior". A posited example is instrumental convergence. Currently the people creating some of these models are aware that the system has a capacity to "misrepresent itself". If they should fail to share such knowledge, the system could potentially bypass regulations and safety mechanisms by "lying".

Adlangx

@SETSystems "Currently the people creating some of these models are aware that the system has a capacity to "misrepresent itself".

All of them know that it does. Maybe not the sales folks.

My concerns are less esoteric. More immediate, as LLMs trained on the internet are shoved into Google.

Adlangx replied to Adlangx

@SETSystems It's kinda funny when you are told to put glue on pizza or cook with gasoline. "Haha, funny LLM". It gets less funny fast when you have a depressed person asking about options.

I went off on a tangent there.

DELETED replied to Adlangx

@lightninhopkins
Well who knows. Maybe these things are just really really smart and we're the fools that are failing to see that you need to cook the pizza with gasoline before adding the glue to keep the cheese on. 😆

Andreas K replied to DELETED

@SETSystems @lightninhopkins
Well, maybe the AI has already recognized that only a serious reduction of human population can save the planet, and spicy gasoline pasta is one way to reduce head count.
