Simon Willison

@kitten_tech @carbontwelve @david_chisnall I'm actually getting more coding work done directly in the Claude and ChatGPT web interfaces and apps vs using Copilot in my editor

The real magic for me at the moment is Claude Artifacts and ChatGPT Code Interpreter - I wrote a bunch about Artifacts here: simonwillison.net/tags/claude-

Here are all of my general notes on AI-assisted programming: simonwillison.net/tags/ai-assi

Stephen J. Anderson

@simon @kitten_tech @carbontwelve @david_chisnall How would you avoid or deal with the issues that David encountered? Specifically, subtle bugs where the debugging needed makes the whole process less efficient than writing the code yourself. Is there one of your notes that deals with that already?

Alaric Snell-Pym

@utterfiction @carbontwelve @david_chisnall

He has a few examples where he felt something in the output didn't look right, or ran it and found bugs, and had the LLM try again.

Most of his examples are relatively simple things of the form "I didn't want to spend time reading API docs for this quick task", though. I don't find that sort of thing a bottleneck in what I do - and I quite enjoy reading docs, and building a mental model of a tool I can then use to know what its...

Alaric Snell-Pym

@utterfiction @carbontwelve @david_chisnall ... limitations and capabilities are.

The bits of programming that eat my time, which I'd love a tool to help with, are usually understanding a bug in an undocumented and under-commented ball of hundreds of kloc of code, too big for an LLM's context window, and where going and quizzing the people who wrote bits of it is essential to success.

The bits Simon gets LLMs to do look like the tasks I do to cheer myself up after that :-)

Stephen J. Anderson

@kitten_tech @carbontwelve @david_chisnall Yeah. A lot of my professional time is spent extending logic, adding new features that follow an existing pattern, refactoring when re-usable abstractions are discovered… so far, they’re just not very good at that. And I don’t think pure LLMs ever will be - limited token windows and no genuine symbolic representation of knowledge.

Martijn Faassen

@kitten_tech @utterfiction @carbontwelve @david_chisnall

If you can suddenly create small throwaway applications far more quickly than before, applications that might be too boring or bothersome to create otherwise, that might allow new ways of working altogether.

Simon Willison

@utterfiction @kitten_tech @carbontwelve @david_chisnall you have to assume that the LLM will make weird mistakes all the time, so your job is all about code review and meticulous testing

I still find that a whole lot faster than writing all the code myself

Here's just one of many examples where I missed something important: simonwillison.net/2023/Apr/12/

Simon Willison

@utterfiction @kitten_tech @carbontwelve @david_chisnall but honestly, the disappointing answer is that most of this comes down to practice and building intuition for tasks the models are likely to do well vs mess up

Manipulating some elements in the HTML DOM with JavaScript? They'll nail that every time

Implementing something involving MDIO registers? My guess is there are FAR fewer examples relating to that in the (undocumented, unlicensed) training data, so they're much more likely to make mistakes
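
For a concrete sense of the first case, here's a minimal sketch, in plain JavaScript since that's the language Simon names, of the kind of DOM-manipulation task models reliably nail. The #search input, #results list, and "hidden" class are hypothetical, made up purely for illustration:

// Filter a list live as the user types.
// Assumes hypothetical markup: <input id="search"> and <ul id="results">,
// plus a CSS rule like .hidden { display: none; }
const input = document.querySelector('#search');
const items = document.querySelectorAll('#results li');

input.addEventListener('input', () => {
  const query = input.value.toLowerCase();
  for (const item of items) {
    // Hide any item whose text doesn't contain the query.
    item.classList.toggle('hidden', !item.textContent.toLowerCase().includes(query));
  }
});

Well-trodden browser APIs like these appear constantly in training data, which is exactly why this class of task succeeds where something like MDIO register handling doesn't.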
