@alyssa Could LLMs be suitable for generating proofs which are then checked by a _sound_ proof checker that is secure against malicious input? If the proof is wrong, the checker will catch it, so no harm has been done.
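A minimal sketch of the loop being proposed, assuming hypothetical stand-ins: `generate_candidate_proof` for the LLM call and `check_proof` for the sound, trusted checker (e.g. a proof-assistant kernel run in a sandbox); neither is a real API.

```python
# Generate-then-verify loop: untrusted generator, trusted checker.
# Only a candidate the checker accepts is ever returned, so a wrong
# or malicious candidate cannot cause harm beyond wasted compute.
from typing import Callable, Optional

def prove(goal: str,
          generate_candidate_proof: Callable[[str], str],
          check_proof: Callable[[str, str], bool],
          max_attempts: int = 10) -> Optional[str]:
    """Ask the generator for candidate proofs of `goal`; keep one only
    if the trusted checker accepts it."""
    for _ in range(max_attempts):
        candidate = generate_candidate_proof(goal)  # untrusted output
        if check_proof(goal, candidate):            # sound, trusted check
            return candidate                        # verified proof
    return None  # no verified proof found; soundness never compromised
```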
@alwayscurious @alyssa
Only if those conditions actually hold. But even if they can be met without undermining the viability of the product, it could only be used in contexts where a ~infallible checker has been integrated, which rules out general use.

@alwayscurious @alyssa
There are automated systems to generate mathematical proofs, but I don't think those work anything like LLMs. If you know you're trying to generate something specific like proofs, you probably don't need an overwhelmingly overpowered tool like GPT-4 to do it. The point of LLMs is that, through gigantic amounts of wasted computing power, they can appear to emulate a near-infinity of simple tasks; but if you have a specific task in mind, the chances are _extremely_ good that you don't need the Swiss-Army-Chainsaw for it.
@alwayscurious @alyssa
Depends how you define harm. The burden on checkers grows enormously once bots and large language models are producing the candidates. Of course, one should make a best effort to check the validity of any resource, but more robust and cautious checking greatly increases the time required.
Also, it feels like generating noise at random and then checking whether any of it happens to be a sonnet.