Simon Willison

Bonus from that post: I got fed up with calculating token prices by hand, so I had Claude Artifacts spin up this pricing calculator tool, with presets for all of the major models: tools.simonwillison.net/llm-pr

Screenshot of LLM Pricing Calculator interface. Left panel: input fields for tokens and costs. Input Tokens: 11018, Output Tokens: empty, Cost per Million Input Tokens: $0.075, Cost per Million Output Tokens: $0.3. Total Cost calculated: $0.000826 or 0.0826 cents. Right panel: Presets for various models including Gemini, Claude, and GPT versions with their respective input/output costs per 1M tokens. Footer: Prices were correct as of 16th October 2024, they may have changed.
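The arithmetic the calculator performs is straightforward; here's a minimal Python sketch of the same calculation, using the figures visible in the screenshot (the function name is mine, not part of the tool):

```python
def llm_cost(input_tokens, output_tokens, input_price_per_m, output_price_per_m):
    """Total cost in dollars for one prompt/response pair."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Figures from the screenshot: 11,018 input tokens at $0.075 per
# million input tokens, no output tokens entered yet.
print(llm_cost(11_018, 0, 0.075, 0.3))  # ~0.000826 dollars, i.e. ~0.0826 cents
```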
Simon Willison

Here's another example of multi-modal vision LLM usage: I collected the prices for the different preset models by dumping screenshots of their pricing pages directly into the Claude conversation.

Full transcript here: gist.github.com/simonw/6b684b5

Claude: Is there anything else you'd like me to adjust or explain about this updated calculator? Me: Add an onkeyup event too, I want that calculator to update as I type. Also add a section underneath the calculator called Presets which lets the user click a model to populate the cost per million fields with that model's prices - which should be shown on the page too. I've dumped in some screenshots of pricing pages you can use - ignore prompt caching prices. There are five attached screenshots of pricing pages for different models.
Kyle Hughes

@simon Leaks show that the ChatGPT Mac and/or web app are going to get screen sharing soon via the Realtime API. Seems like this is the next frontier: dumping the whole personal computing experience into models.

Mark Eichin

@simon I'm a little confused by the OCR part - is that just some unrelated (but obviously useful) service tacked on the front, or is there some way LLMs are involved in the character recognition itself? (15 years ago OCR quality was related to text modelling, there was some interest in using our geotagger to do feedback for OCR of map labels, but I haven't dug into that space in a while)

Peter Hoffmann

@simon Do you use openrouter.ai to connect to different models, or do you use each service with its own API and cost tracking?

Simon Willison

@hoffmann I mostly use the service APIs directly - I have an OpenRouter account too, but I like to stay deeply familiar with all of the different APIs as part of developing my llm.datasette.io tool.

Drew Breunig

@simon Nice! You should drop a tokenizer in there for people.

Simon Willison

@dbreunig I'm still frustrated that Anthropic don't release their tokenizer!

Gemini have an API endpoint for counting tokens, but I think it needs an API key.
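For reference, a sketch of calling that Gemini endpoint via the google-generativeai Python package (the model name is just an example, and as noted an API key is required):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # an API key is required, as noted above

model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
response = model.count_tokens("The calculator should update as I type.")
print(response.total_tokens)
```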

Drew Breunig

@simon Now that you mention it, I'm curious how different each platform is with tokens and how that might affect pricing (or just be a wash)

Simon Willison

@dbreunig yeah, it's frustratingly difficult to compare tokenizers, which sure makes price per million tokens less directly comparable

Simon Willison

@dbreunig running a benchmark that processes a long essay and records the input token count for different models could be interesting, though
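A sketch of what that benchmark could look like, assuming tiktoken for the OpenAI side and the Gemini count-tokens endpoint mentioned above (the file name and model choices are placeholders):

```python
import tiktoken
import google.generativeai as genai

essay = open("essay.txt").read()  # any long text to benchmark

# OpenAI models: tiktoken runs entirely locally
enc = tiktoken.encoding_for_model("gpt-4o")
openai_tokens = len(enc.encode(essay))

# Gemini: counted via the API endpoint (needs a key, see above)
genai.configure(api_key="YOUR_API_KEY")
gemini_tokens = genai.GenerativeModel("gemini-1.5-flash").count_tokens(essay).total_tokens

print(f"gpt-4o: {openai_tokens} tokens, gemini-1.5-flash: {gemini_tokens} tokens")
# Anthropic is the gap: with no public tokenizer, Claude's count can
# only be read back from the usage data in an API response.
```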

Phil Gyford

@simon Is it also possible to calculate how much energy these things use, and some comparisons of what that's equivalent to? I hear that AI is energy intensive but I have zero concept of what that means in reality for a single "thing" like this.

Simon Willison

@philgyford if that's possible I haven't seen anyone do it yet - the industry don't seem to want to talk specifics

GPUs apparently draw a lot more power when they are actively computing than when they are idle, so there's an energy cost associated with running a prompt that wouldn't exist if the hardware was turned on but not doing anything
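A back-of-envelope sketch of that marginal cost, where every number is an illustrative assumption rather than a measured figure:

```python
# All figures below are made-up assumptions for illustration only.
active_watts = 700        # assumed draw of one GPU while computing
idle_watts = 100          # assumed draw while powered on but idle
seconds_per_prompt = 2.0  # assumed GPU time spent on one prompt

marginal_joules = (active_watts - idle_watts) * seconds_per_prompt
watt_hours = marginal_joules / 3600
print(f"{marginal_joules:.0f} J (~{watt_hours:.2f} Wh) extra per prompt")
# -> 1200 J (~0.33 Wh): under these assumptions, roughly a minute
#    or two of a typical laptop's power draw.
```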
