Zeljka Zorz

@RawryCore I know. Somehow, I'm still surprised it works. How can it still work???

Rawry_Core $ :catcoffee2:

@zeljkazorz
I guess it's hard to restrict a guessing machine without good anti-exploitation data.
But since LLMs and neural networks are new, the only data that roughly fits is social engineering.
That's probably not a lot of data, and it won't fit in the LLM's context.

It's lovely, though, how easily they can be exploited.

Recipes have to be really clear, and AI isn't good at guessing exact values (language > math).
So if you get something "harmful", it might be extra harmful because of wrong values.
That's dangerous for any scientific work, and even more dangerous for people trying to follow the AI's instructions exactly.

I love those findings, though. People have to know that it's flawed and snake oil in many cases right now.

