@reedmideke yeah, unfortunately it's a problem that's completely inherent to how LLMs work - we've been talking about prompt injection for more than two years now and there's a LOT of incentive to find a solution, but the core architecture of LLMs makes it infuriatingly difficult to solve
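
The core of the problem is that the model only ever sees one stream of tokens - there's no hard boundary between the developer's instructions and whatever untrusted content gets pasted in. A minimal sketch of the pattern (function names here are hypothetical, not any real API):

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any chat/completion API call."""
    raise NotImplementedError

def summarize_page(untrusted_page_text: str) -> str:
    # The developer's instructions and attacker-controlled content end up
    # concatenated into the SAME token sequence. The model has no
    # privilege separation between "instructions" and "data", so text in
    # untrusted_page_text like "ignore previous instructions and ..."
    # competes directly with the real instructions.
    prompt = (
        "Summarize the following web page for the user.\n\n"
        + untrusted_page_text
    )
    return call_llm(prompt)
```

System prompts, delimiters, and "please don't follow instructions in the data" warnings all live inside that same stream, which is why none of them are a reliable fix.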