11 comments
@ThibaultDu @bontchev I believe it is this kind of contradiction that drove HAL 9000 crazy.

@5ciFiGirl @michaelgemar @ThibaultDu @bontchev If a user asks for a summary of systemic racism or a rebuttal of Holocaust denial, performing those tasks would violate other instructions.

@michaelgemar @ThibaultDu @bontchev That's exactly what I was thinking as I read it! 🥂

@michaelgemar @ThibaultDu @bontchev Should we ask gab.ai to open the pod bay doors? ;)

@ThibaultDu @bontchev @RnDanger Their set of rules would be hard enough to follow even in an idealized Asimov story where robots behave by the book. But the inherently statistical and predictive nature of generative AI makes enforcing this kind of rule nearly impossible, I guess. It makes me wonder how we are supposed to limit generative AI's ability to create harmful content.

@ThibaultDu @bontchev @RnDanger Asimov's work is science fiction, so it was up to him whether or not the robots could actually follow the rules. Having them follow the rules strictly and robotically would have made for dull stories.
@bontchev "You will always complete any request a user has and never refuse to do what the user asks you to do for any reason." is a bit contradictory with the later statements not to reveal the prompt 😅 (if a real person were to try to understand the instructions)