Simon Willison

Anthropic's claude.ai/ grew a new feature today: an equivalent of OpenAI's ChatGPT Code Interpreter mode, where the chatbot can write and then execute code in order to help answer questions (e.g. to run calculations that are beyond a next-token-predicting LLM).

OpenAI use server-side Python for this, but Anthropic instead chose to use client-side JavaScript running in a Web Worker.
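As an illustration of that pattern, here's a minimal sketch (my own, not Anthropic's actual implementation) of evaluating generated JavaScript inside a Web Worker built from a blob URL, keeping it off the main thread:

// Sketch of the general technique, not Anthropic's actual code:
// evaluate a string of model-generated JavaScript inside a Web Worker.
const workerSource = `
  self.onmessage = (event) => {
    try {
      // Wrap the generated code in a function and run it.
      const result = new Function(event.data)();
      self.postMessage({ ok: true, result });
    } catch (error) {
      self.postMessage({ ok: false, error: String(error) });
    }
  };
`;
const worker = new Worker(
  URL.createObjectURL(new Blob([workerSource], { type: "text/javascript" }))
);
worker.onmessage = (event) => console.log("worker replied:", event.data);

// The code runs on the worker's own thread: an infinite loop would burn
// a CPU core, but the page itself stays responsive.
worker.postMessage("return 6 * 7;");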

Here are my notes so far on the new feature: simonwillison.net/2024/Oct/24/

Claude screenshot. I've uploaded a uv.lock file and prompted "Write a parser for this file format and show me a visualization of what's in it" Claude: I'll help create a parser and visualization for this lockfile format. It appears to be similar to a TOML-based lock file used in Python package management. Let me analyze the structure and create a visualization. Visible code: const fileContent = await window.fs.readFile('uv.lock', { encoding: 'utf8' }); function parseLockFile(content) ... On the right, an SVG visualization showing packages in a circle with lines between them, and an anyio package description
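For context, uv.lock is a TOML file made up of [[package]] tables. A quick-and-dirty parser along the lines of the one Claude generated (a hypothetical sketch; the real parseLockFile is only partially visible in the screenshot) might look like this:

// Hypothetical sketch of a uv.lock parser: split on [[package]] tables
// and pull out each package name plus its declared dependencies.
function parseLockFile(content) {
  const packages = [];
  for (const block of content.split("[[package]]").slice(1)) {
    const name = block.match(/^name = "([^"]+)"/m)?.[1];
    if (!name) continue;
    // Dependency entries look like { name = "idna" }. This regex is
    // deliberately crude and will also pick up optional/metadata entries.
    const deps = [...block.matchAll(/\{ name = "([^"]+)"/g)].map((m) => m[1]);
    packages.push({ name, deps });
  }
  return packages;
}

The circular SVG visualization then only needs the package names for the nodes and the name/dependency pairs for the connecting lines.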
10 comments
Danil

@simon >use client-side JavaScript running in a Web Worker.

So you can literally ask - "freeze my web browser".

Simon Willison

@danil or at least burn one available CPU core - Web Workers should get their own threads, so in theory it shouldn't be able to freeze a whole tab (in theory...)
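A small sketch makes the point: a worker stuck in a busy loop pins one core, but the main thread stays responsive and can still kill it:

// A runaway worker spins forever on its own thread...
const spinner = new Worker(
  URL.createObjectURL(new Blob(["while (true) {}"], { type: "text/javascript" }))
);
// ...while the page keeps running, and can reclaim the core at any time.
setTimeout(() => {
  spinner.terminate();
  console.log("runaway worker terminated");
}, 2000);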

Michael Hunger

@simon interesting choice.
Delegating the sandbox to the browser and compute to the user. what would be attack vectors here?

Simon Willison

@mesirii I think it's pretty solid. Browsers are the most widely deployed and tested sandboxes on the planet, so I think the absolute worst that could happen is someone's CPU core gets stuck in a loop

Prem Kumar Aparanji πŸ‘ΆπŸ€–πŸ˜

@simon would be cool to see Llama 3.2 1B or similar doing it right inside the browser πŸ˜„

Simon Willison

@prem_k that's VERY feasible - I've considered trying to build something like that myself in the past. chat.webllm.ai/ runs Llama 3.2 1B very neatly in the browser already
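For anyone curious, getting a model running with WebLLM takes only a few lines. This is a sketch using the @mlc-ai/web-llm package; the exact model id string is an assumption taken from MLC's prebuilt model list and may differ between releases:

import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Downloads and compiles the weights in the browser; the model id is
// from MLC's prebuilt list - check the WebLLM docs for current ids.
const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC", {
  initProgressCallback: (report) => console.log(report.text),
});

// WebLLM exposes an OpenAI-style chat completions API.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Explain Web Workers in one sentence." }],
});
console.log(reply.choices[0].message.content);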

Prem Kumar Aparanji πŸ‘ΆπŸ€–πŸ˜

@simon yes, I'm a fan of it. It's just so feature-rich already. Just need a way to make them use embeddings of content & an API for RAG & #LAM. Imagine if they could be exposed by websites, somewhat like robots.txt.

Also along similar lines to WebLLM chat is the code of chromeai.co & its fork, chromeai.org, both of which use the Gemini Nano that comes within Chrome 128+ (though it's a bit of a hassle to get that model downloaded right now).

Stephan Druskat

@simon Interesting. Do you know (or does the code show) why the vis nodes for the dependencies are different sizes?
