@simon Still boggles my mind that after a quarter century of SQL injection and XSS, a huge chunk of the industry is betting everything on a technology that appears to be inherently incapable of reliably separating untrusted data from commands
Top-level
@simon Still boggles my mind that after a quarter century of SQL injection and XSS, a huge chunk of the industry is betting everything on a technology that appears to be inherently incapable of reliably separating untrusted data from commands 1 comment
|
@reedmideke yeah, unfortunately it's a problem that's completely inherent to how LLMs work - we've been talking about prompt injection for more than two years now and there's a LOT of incentive to find a solution, but the core architecture of LLMs makes infuriatingly difficult to solve