Demi Marie Obenour

@alyssa Could LLMs be suitable for generating proofs which are then checked by a _sound_ proof checker that is secure against malicious input? If the proof is wrong, the checker will catch it, so no harm has been done.

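As a minimal sketch of the idea being asked about, here is what that division of labor looks like in Lean 4, where the kernel plays the role of the sound checker; the theorem names are made up for the example and nothing here is tied to any particular LLM pipeline:

```lean
-- Illustration: the Lean kernel acts as the sound checker.
-- This candidate proof is accepted: `n + 0` reduces to `n` by the
-- definition of addition on Nat, so `rfl` closes the goal.
theorem add_zero_ok (n : Nat) : n + 0 = n := rfl

-- A superficially similar candidate is rejected: `0 + n` does not reduce
-- to `n` definitionally, so `rfl` is not a valid proof and the checker
-- reports an error instead of admitting the claim.
-- theorem zero_add_bad (n : Nat) : 0 + n = n := rfl  -- rejected
```

Whether a candidate proof comes from a person or a language model, the checker only admits it if every step checks out; a wrong proof is simply rejected.
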
4 comments
Jennifer Kayla | Theogrin 🦊

@alwayscurious @alyssa

Depends how you define harm. The demand on checkers grows enormously once bots and large language models are churning out candidates. Of course, one should make a best effort to check the validity of any resource, but the need for more robust and cautious checks greatly increases the time required.

Also, it feels like generating noise at random and then checking for anything which could be a sonnet.

ShadSterling

@alwayscurious @alyssa only if
1. The work required for repeated checking is less than the work required to get the same result without the LLM, and
2. Default access only includes post-check results

But even if those conditions can be met without undermining the viability of the product, it could only be used in contexts where an ~infallible checker has been integrated, which rules out general use.

morgan

@alwayscurious @alyssa There are automated systems to generate mathematical proofs, but I don't think those work anything like LLMs.

Alex Celeste née Gilding

@alwayscurious

if you know you're trying to generate something specific like proofs, you probably don't need an overwhelmingly overpowered tool like GPT-4 to do that

the appeal of LLMs is that, through gigantic amounts of wasted computing power, they can appear to emulate a near-infinity of simple tasks; but if you have a specific task in mind, the chances are _extremely_ good that you don't need the Swiss-Army-Chainsaw to do it
