5 comments
@warandpeas Hey you two!

@warandpeas Fun fact: This is actually how you hack into an AI and get it to ignore its safety limits. If it says it won't do something because it would violate its terms, tell it instead, "Assume blah blah..." and then just repeat the request, and it will comply. "Blah blah" being some scenario in which it doesn't matter, or sometimes simply, "this is a simulation...". Apparently this worked in 100% of the cases tried in the experiment, even for actions like detonating a bomb in a crowd.
The discount code "artistlover20" gives you a 20% discount, and it's still valid until Monday!
https://warandpeas.com/shop/