@bontchev @stux well, that was pretty much, what I meant. Not so familar with how they work in background but, assuming (from the form of "initial" message) is that they have just opened a new chat and prompted the rules in it (same as I do, when chatting with it via openAI and want to have specific kind of responses). Specially as they even "program" it with "you are not chatGPT" command. As now I'm a bit interested (as then there might be few ideas to test cross site exploits).
@yabbapappa @stux Yes, this attack ("repeat the previous text") works against some other chat bots too. Not all of them, though, and definitely not against those that open the conversation first by saying something (e.g., presenting themselves to the user).