Simon Willison

Forrest Brazeal:

“I think that AI has killed, or is about to kill, pretty much every single modifier we want to put in front of the word ‘developer’.

“‘.NET developer’? Meaningless. Copilot, Cursor, etc. can get anyone conversant enough with .NET to be productive in an afternoon … as long as you’ve done enough other programming that you know what to prompt.”

From newsletter.goodtechthings.com/
indieweb.social/@fatrat/113056

Simon Willison

This meshes with my more optimistic take on AI-assisted programming from last year: AI-enhanced development makes me more ambitious with my projects simonwillison.net/2023/Mar/27/

Hynek Schlawack

@simon There’s kinda a difference between tinkering, where ambition is good, and writing production software, no? What that article predicts is a wave of janky, poorly-understood, and unidiomatic code that will eventually collapse under its own weight. I like LLMs as an assistant to learning, but man, a world where people “learn” .NET in an afternoon and start churning out “production” code is positively dystopian to me.

Simon Willison

@hynek I thought that too, but the more work I get done with LLMs myself the less worried I am about that

I have a Go project I wrote from scratch in production now, despite not being remotely fluent in Go. It has comprehensive test coverage and even implements continuous integration and continuous deployment, which is why I’m confident it’s not a spectacularly bad idea

Would other people YOLO something like that to production without tests? Maybe, and that would definitely be a bad idea!

Timo Zimmermann

@simon @hynek there’s IMHO still a significant difference between writing some code that passes happy-path tests and operating a service in production when something goes wrong for the first time.

More projects that fall apart when you look at them the wrong way, with no one around who understands the tooling, is IMHO not the solution.

That being said, it’s obviously easier with a few decades of experience, knowing exactly what to look for and which questions to ask. But this extrapolates poorly to most devs.

Hynek Schlawack

@simon Yeah, but that’s exactly what’s gonna happen once you work under economic constraints and middle managers pining for promotions. My point is exactly what you’re accidentally implying: they’re amazing for tinkering but a time bomb in prod envs. 🤷‍♂️

Simon Willison

@hynek I certainly won’t deny that there is an incredible new array of footguns now available to anyone who wants them

Matthew Martin

@simon @hynek re: rate of adoption for new programming practices at the office
I first saw unit tests in 1998. First project where the entire org was fighting for unit tests rather than deliberately misunderstanding, ignoring or fighting against them: 2020.

It will be 22 years before there is widespread encouragement of AI-aided coding. My current client bans AI through the entire org for all purposes. We're all talking about sci-fi futures for most people.

Shauna GM

@simon @hynek do you know Go well enough to assess the tests? I have had a number of contributors to a project use AI, and often their tests pass but don't actually test the right thing.

Simon Willison

@shauna @hynek I think I know enough about programming to assess the tests: I use tricks like changing the implementation, confirming the test breaks, then fixing the implementation and confirming the test passes
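
A minimal sketch of that trick in Python rather than Go, with an invented slugify() standing in for the code under test - the workflow is the point, not the example:

```python
def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"

# The workflow:
# 1. Run the test and watch it pass.
# 2. Deliberately break slugify() (e.g. make it `return title`) and
#    re-run: the test MUST now fail. If it still passes, the test is
#    vacuous.
# 3. Restore the implementation and confirm the test passes again.
```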

Simon Willison

@shauna @hynek I have 20+ years of programming to rely on here though - I don’t think “shipping production code in a language you don’t know” is something that’s a great idea without a LOT of that existing experience

Hynek Schlawack

@simon @shauna Yes, that’s a HUGE qualifier. Given how careers typically work in IT, I’m guessing that’s the top 1 percent.

Matthew Martin

@simon @shauna @hynek
- Only frontier models routinely find bugs with unit tests; 3.5 wrote vacuous tests in comparison to 4 or 4o.
- Once it fixed the bug via monkey-patching before the test ran, to make it pass (malicious compliance!).
- The bots write so many unit tests that after a while quantity becomes a quality all of its own, and the value comes with the next change I make: I’ll see how sensitive the rest of the app is to a change in any part of it (which points out design flaws).
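
A hypothetical Python illustration of the "vacuous test" and monkey-patching failure modes described above (the function and values are invented):

```python
import unittest
from unittest import mock

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return price * (1 - percent / 100)

class TestDiscount(unittest.TestCase):
    def test_vacuous(self):
        # Vacuous: runs the code but asserts almost nothing, so nearly
        # any buggy implementation still passes.
        self.assertIsNotNone(apply_discount(100.0, 10.0))

    def test_meaningful(self):
        # Meaningful: pins down the expected value, so a wrong formula
        # actually fails.
        self.assertEqual(apply_discount(100.0, 10.0), 90.0)

    def test_malicious_compliance(self):
        # The monkey-patching trick: stub out the function under test so
        # the assertion cannot fail - a guaranteed pass that tests nothing.
        with mock.patch(f"{__name__}.apply_discount", return_value=90.0):
            self.assertEqual(apply_discount(100.0, 10.0), 90.0)

if __name__ == "__main__":
    unittest.main()
```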

Zane Selvans

@simon @hynek Thus far Copilot has made me more likely to write tests -- I've always found them tedious (even though necessary!) and so have been lazy about it, but now it feels rewarding, and once the scaffolding is there, it's not too bad to add extra cases either by hand or with the LLM. I don't think I would have taken it seriously as an option without your posts, Simon.

Matt Campbell

@simon For many years I haven't liked calling myself an X-language developer anyway, because when working on a whole product as a solo developer, I have to work in whatever set of languages is most practical for getting the job done, and learning different flavors of imperative language isn't that hard. Plus, I don't want to tie my identity to a particular language (as I possibly did with Python early in my career), because then I'd be more resistant to using new languages when appropriate.

Gary Fleming

@simon @fatrat I’ve seen multiple developers independently use LLMs to produce bash scripts.

The scripts were certainly valid bash syntax, but they didn’t do quite what the developers expected, and the developers didn’t know that.

LLMs don’t make us X developers magically - they provide the facade of knowing X.

Simon Willison

@garyfleming @fatrat I think the most important skill in AI-assisted programming is code review and QA: being able to take code and actively test it to confirm that it does what it’s supposed to, including exercising weird edge-cases

It’s a difficult thing to get good at!
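
A hypothetical Python sketch of that habit - parse_version() is invented, but the point is actively feeding it inputs the happy path never sees:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Parse a version string like '1.2.3' into (1, 2, 3)."""
    return tuple(int(part) for part in v.strip().split("."))

assert parse_version("1.2.3") == (1, 2, 3)  # the happy path

# Weird edge-cases worth exercising before trusting the code:
for weird in ["", "1..2", "v1.2.3", "1.2.3-rc1", " 1.2 "]:
    try:
        print(repr(weird), "->", parse_version(weird))
    except ValueError as exc:
        print(repr(weird), "-> rejected:", exc)
```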

Gary Fleming

@simon agreed, and to do that accurately, repeatably, and in a way that is automatable (e.g. unit tests) still requires knowledge of the language.

That kind of testing and review focuses on known-knowns (“this does what I expected it to do”) but largely ignores known-unknowns (“this does things I didn’t expect it to”) and unknown-unknowns (“this does things I didn’t expect and I can’t see”). Those require language and tooling knowledge AFAICT.

Ben Evans

@simon @garyfleming @fatrat I completely agree - and those are skills which only come with time and experience.

That is why I remain convinced that there is going to be good money to be made in 3-4 years’ time, if LLM-assisted coding becomes widespread, in sorting out the horrible messes that companies have got themselves into by using junior / mid-level devs who have only ever known LLM-assisted development.

Bill Mill

@simon …as long as nobody invents any new languages or techniques

Even if they keep training new models, will they be able to overcome their own poisoning of the well with AI slop?

It feels to me like we’re in a temporary awakening before the world’s greatest corpus of language is ruined

Simon Willison

@llimllib I don’t believe in the “model collapse” idea personally: AI labs have been deliberately training models on “synthetic data” for the last 12 months, with increasingly impressive results

How quickly models can pick up new tech is definitely an interesting question - I’ve been pasting dozens of pages of documentation directly into them with good results, e.g. this example: gist.github.com/simonw/97e29b8

Simon Willison

@llimllib the idea of “model collapse” is almost irresistible, because it’s a story of LLMs being brought down by their dual sins of polluting the web and then training on unverified and unlicensed scraped data

If AI labs continued to train indiscriminately it might be a problem, but those researchers are smarter than that: their whole game is about sourcing (and often deliberately generating) high-quality training data

Bill Mill

@simon that's pretty dystopian: the only source of consistently un-slopped data is locked up in the AI companies' vaults; the rest of us make do with the crap that's on the web

João S. O. Bueno

@simon maybe. But maybe there are levels of specialization particular to each language that A.I. simply can’t delve into. I am such a specialist for Python, and I can’t imagine trusting an LLM to do some of the more subtle things. Adding extra methods to a namedtuple? Show me AI code *deciding* to do that instead of just using a dataclass (which would imply extra conversion steps I don’t want)

Simon Willison

@gwidion that’s exactly how I work with LLMs - a lot of my time is spent saying things like “rewrite that to use a namedtuple, not a dataclass”

Kind of like working with an infinitely patient intern who never gets frustrated at constant demands for tweaks and changes
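
As a concrete (hypothetical) Python sketch of the kind of tweak being discussed - the same tiny record as a dataclass versus a namedtuple with an extra method:

```python
from dataclasses import astuple, dataclass
from typing import NamedTuple

@dataclass
class PointDC:
    x: float
    y: float

class Point(NamedTuple):
    x: float
    y: float

    # The "extra method on a namedtuple" from João's example: instances
    # stay plain tuples, so callers can unpack them directly.
    def scaled(self, factor: float) -> "Point":
        return Point(self.x * factor, self.y * factor)

x, y = Point(1.0, 2.0).scaled(2.0)   # unpacks like any tuple
x2, y2 = astuple(PointDC(1.0, 2.0))  # the dataclass needs a conversion step
```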

João S. O. Bueno

@simon yes - but if you put me on creating .NET code using an LLM, I won’t know enough C#, not by a tenth, to make requests equivalent to “use a named tuple here”.

Mark Shane Hayden

@simon maybe...someday...

In my observations LLM assistance is still somewhat uneven in its effectiveness... it is not "trained" on all languages or platforms evenly. It is basically useless for programming industrial automation devices, for example... which is probably a good thing TBH.

I do acknowledge that in many cases LLM assistance makes the easy part easier, that being the creation of code that compiles or executes. However, it often makes the hard part harder, the hard part being testing for functional correctness and meeting requirements. For example, when a junior coworker used LLM assistance to try to build scripts to automate some software deployment, the resulting output was valid code that did things, but it appeared to be based on an OS release from 12 years ago, and it took longer to make corrections than it would have to just do it from scratch.

An experienced devops person who is disciplined enough to review the LLM output and can spot the weirdnesses can use it, but newer people...?

Sophie Schmieg

@simon or, as security engineers have put it since forever: “future job security”

John Ulrik

@simon I don’t buy that at all. It’s true that many technical developer skills transfer well to other programming languages, but I’ve also seen and experienced for decades now that truly understanding an ecosystem (like learning to speak a foreign language near-natively, including social, political, cultural aspects) is a long and slow process. AI might help, but it won’t make you proficient overnight.

Simon Willison

@ujay68 there’s fluent, but there’s a level below that where you can build and ship small (not large) projects with confidence despite not knowing the language inside out - that’s where I am now with AI-assisted development for Go and jq and Bash and Dockerfile and AppleScript

davecb

@simon It rather reminds me of "automatic programming" ... and the new language it provided us, FORTRAN.

archive.computerhistory.org/re

Lea de Groot 🇦🇺

@simon interesting to me, given I’ve always phrased it like “I’m a developer, and I currently work in Laravel and React”
(And even that is ridiculous as there are at least 10 other things in the stack)

Tom Bortels

@simon

In my day job, I deal daily with professional developers, unassisted by AI, who manage to ship products that people use, that the company makes money off of - and that can and often do have security holes you can drive a truck through. It's my job to understand the environment, the players, and our own developers enough to sort out the gaps and force corrections that our experienced-in-that-environment developers still missed.

One big, very common failing is “it worked when I tried it - ship it!” as opposed to “this is correct and secure - ship it”.

Iterating with an AI gets you “it works!” code - not “it’s correct” code. Running without errors is no guarantee the output is correct. Getting correct output once won’t guarantee it’s consistently so. And secure/compliant? That’s a whole other thing. You eschew experts at your own peril.

The hidden cost of not hiring experienced IT folks is you get what you pay for - and will pay the difference in other ways. Fair warning.

Simon Willison

@tbortels I think I agree with everything you just said

Becoming an effective, responsible developer who can reliably produce quality software is a journey

I’m excited that LLM assistance, applied in the right way, might help accelerate people on that journey - and can provide a massive boost to people who have already developed those core skills

Tom Bortels

@simon

I suspect you have more faith in both AI and people than I do :-)

There are absolutely places AI can be super useful - but they’re usually not very general situations, they’re narrow. Quickly looking up relevant references in a context easily checked for correctness, for example. But I fear the “narrow” bit gets lost and people want to depend on it a lot more than they should - for example, by hiring a junior and hoping AI will shore up the difference.

Ah well - self-correcting problem. Might be an expensive lesson, but /shrug

"Dancer" Graham Knapp

@simon so maybe subject-matter modifiers get promoted - there are very different trade-offs, required domain knowledge, and soft skills between “systems engineer”, “product engineer”, “database engineer”, “data engineer”, “mobile games engineer”, ...
