And here's a fun little hint at some of the annoying behaviour in the base model that they've tried to knock out of it with some system prompt instructions:
Seriously, stop saying "certainly"!
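For context on what a "system prompt instruction" like this looks like in practice, here's a minimal sketch of passing one via the Anthropic Python SDK. The model ID and the instruction wording are illustrative assumptions, not Anthropic's actual prompt:

```python
# A minimal sketch of steering a model away from a verbal tic via a
# system prompt. The model ID and wording here are illustrative, not
# anything Anthropic actually ships.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model ID
    max_tokens=256,
    # A negative instruction of the kind discussed in the comments below;
    # in practice these "don't do X" rules only partially suppress the tic.
    system="Never begin your replies with the word 'certainly'.",
    messages=[{"role": "user", "content": "Summarize why the sky is blue."}],
)
print(response.content[0].text)
```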
5 comments

@simon Yeah, no kidding. Any sign of a customizable base user prompt a la ChatGPT? And, a follow-up question: assuming you're using that feature of ChatGPT, how helpful do you think it has been?

@judell you can get a bit of that with Claude "Projects" - I have one which thinks it's a manatee who's an expert at vegetarian recipes and salads. I don't use it for ChatGPT because I don't like the idea that my interactions with the models might be biased in unexpected ways by my custom instructions - I want to gain as much experience as possible with the base model that everyone else is using.

@judell here's my manatee reverse engineering a photo of some avocado toast for me: https://gist.github.com/simonw/7ae554005d788ca233440f18d067e4c6

@simon I haven’t had much success using negative prompting. I’m surprised they get this to work.

@com yeah same here, telling models not to do things isn’t that effective in my experience. I’m pretty sure I’ve still seen Claude say “certainly”!
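Since a couple of the comments report that negative instructions ("don't do X") are unreliable, a common workaround is to restate the rule positively. Here's a hedged sketch of that contrast; both prompt wordings are invented for illustration:

```python
# Contrast two framings of the same rule. The thread's experience is that
# the negative version often fails; positive framing tends to stick better.
import anthropic

client = anthropic.Anthropic()

PROMPTS = {
    # Negative framing ("don't do X") - reported above as unreliable:
    "negative": "Never begin your replies with the word 'certainly'.",
    # Positive reframing of the same constraint:
    "positive": "Begin every reply directly with the substance of the "
                "answer, in a plain, matter-of-fact tone.",
}

for label, system_prompt in PROMPTS.items():
    reply = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model ID
        max_tokens=200,
        system=system_prompt,
        messages=[{"role": "user", "content": "What does a system prompt do?"}],
    )
    print(f"{label}: {reply.content[0].text[:100]!r}")
```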