ploum

Steps to write software as a human:

1. Decide what you want your software to do (medium)
2. Decide what you REALLY want your software to do in all the corner cases (very HARD)
3. Write the code (Easy to Medium)
4. Test the code (Medium)
5. Debug the code (hard to very HARD).

Now, thanks to ChatGPT, you could improve this workflow by:

- making step 3, the only easy step, somewhat easier (and thus hiring less competent engineers)

- making step 5 nearly Impossible.

Youri

@ploum Yeah, that would be a rookie mistake. (3) Keep your competent engineers to get the work done faster, and (4) let Copilot help you write the tests much faster. Still the same difficulty, just more speed. Copilot is aptly named if you think about it.

Timothy Wolodzko

@yac @ploum I'd rather let it write the code than the tests. If it fucks up the tests, you either miss the bugs or get flaky tests that are an extreme pain, leading you straight to step 5. If you write decent tests, you catch it when it writes poor code. But TBH, it shouldn't write either of those, unless you treat it as a templating engine for some very simple boilerplate.

Youri

@tymwol @ploum Exactly, I use it as a templating engine on steroids. For instance, in golang, things like data := []struct{...}, the loop over data, the t.Errorf, etc. The danger is complacently letting it generate false tests.
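
For the record, a minimal sketch of that Go boilerplate, with an invented Add function and invented cases (illustrative only, not anyone's real test):

package calc

import "testing"

// Add is a hypothetical function under test.
func Add(a, b int) int { return a + b }

// TestAdd shows the classic table-driven pattern: a data slice of
// cases, a loop over it, and t.Errorf on mismatch. This is the part
// a templating assistant can grind out; the cases themselves are
// where the thinking happens.
func TestAdd(t *testing.T) {
    data := []struct {
        name       string
        a, b, want int
    }{
        {"zero", 0, 0, 0},
        {"positive", 2, 3, 5},
        {"negative", -1, -2, -3},
    }
    for _, tc := range data {
        t.Run(tc.name, func(t *testing.T) {
            if got := Add(tc.a, tc.b); got != tc.want {
                t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
            }
        })
    }
}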

MrBean

@tymwol @yac @ploum

So far I've had a good experience with ChatGPT; even its bug-fixing skills are pretty decent. But I haven't coded anything complex so far, just basic API tools, userscripts, WordPress themes and plugins.

Tanguy ⧓ Herrmann

@yac @ploum I have trouble trusting code from AI in general. But I could be OK with it if it is well tested.

But trusting AI to write my tests? Nope.

This is clearly, for me, the place where there should be no AI.
Those tests need to be written by humans, in my opinion.
Otherwise I don't know if I can trust my software, as there is no way to know if my tests are OK or just look OK.

Youri

@dolanor @ploum I leverage its power for mundane tests. Especially there, you must be very careful not to be influenced by false tests. I usually first think of the test I would like to write, let Copilot write it, then make sure it matches what I meant. Sort of a templating engine on steroids.

Tanguy ⧓ Herrmann

@yac @ploum I guess I trust my ability to write even boring code more than my ability to not get bored while reading boring code. 😅

Brandon

@dolanor @yac @ploum I might be ok with it sketching out some tests, creating some setup GivenXCondition methods and high-level tests with comments about what it thinks needs to be tested, but it's not going to know the _semantic_ details of how things inside the code under test work. Like, a method returns an object that has a certain string field defined - should that be expected to have a value given the inputs, and what should that value be? For any nontrivial code, that semantic question is the real work.
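
A hypothetical sketch of that split, with all names invented: the scaffolding below is the part an assistant could plausibly generate, while the expected value in the assertion is the part only someone who knows the code's semantics can supply.

package orders

import "testing"

// Order is an invented type standing in for the code under test.
type Order struct {
    ID     string
    Status string
}

// Cancel stands in for the real behaviour being tested.
func (o *Order) Cancel() { o.Status = "cancelled" }

// GivenPendingOrder is the kind of setup helper an AI could sketch out.
func GivenPendingOrder(t *testing.T) *Order {
    t.Helper()
    return &Order{ID: "o-1", Status: "pending"}
}

func TestCancelPendingOrder(t *testing.T) {
    order := GivenPendingOrder(t)
    order.Cancel()

    // A scaffold can guess THAT Status should be checked here;
    // only a human who knows the semantics knows WHAT the value
    // should be for these inputs.
    if order.Status != "cancelled" {
        t.Errorf("Status = %q, want %q", order.Status, "cancelled")
    }
}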

christian mock

@yac @ploum But the hard and time-consuming part of writing tests is exactly point 2 in the original toot -- constructing all the corner cases. Writing the tests themselves is quick and easy if you know what to test for...

fedithom

@cm @yac @ploum
... by which we're back to "nobody needs this shit. Really." No surprise.

madopal

@fedithom @cm @yac @ploum Not shocking that it always comes back to "if I don't really care about this thing, I assume it's trivial, and AI can do it." It's like the perfect storm of Gell-Mann amnesia and the Dunning-Kruger effect.

Donald Ball

@yac @ploum Oh, you’re choosing the “unmaintainable test suite” option. This is a classic sophomore mistake.

Max

@ploum Steps to translate text as a human:

1. Understand what the author says (medium)
2. Understand what the author REALLY says in all the nuances (very HARD)
3. Write the translation down (Easy to Medium)
4. Edit it (Medium)
5. Make it sound nice in the target language (hard to very HARD).

Now, thanks to ChatGPT, you could improve this workflow by:

- making steps 1 and 3, the only easy steps, somewhat easier (and thus hiring less competent translators)

- making step 5 nearly Impossible.

Tim Ward ⭐🇪🇺🔶 #FBPE

@ploum Once Upon A Time, in the days before agile, when people did things like "design" and "project management", our typical project spent about 10% of its time on coding.

Which would occasionally get our customers really quite nervous when they could see that we were nearly halfway through the timescale but hadn't written any code yet.

Tim Ward ⭐🇪🇺🔶 #FBPE

@ploum Mostly we didn't find debugging hard, of course, because by the time we'd run the project the way that used to be regarded as "proper", there were few to no bugs.

But I do remember one.

The system had to drive a bank of modems. Modems and phone lines cost money. So our internal testing wasn't at the full scale of the production system.

We weren't stupid enough to test with only a single modem, so we used a bank of, I think, four.

And the system then failed when tried out on the production hardware which had, I think, sixteen modems.

Because ...

... somewhere along the line, the way you addressed a modem involved sending its number as a string of ASCII digits. We only tested addresses 0..3, and when we got to modem no. 10, with a two-digit address, it didn't work.
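
The shape of that bug, reconstructed as a hypothetical Go sketch (the toot doesn't describe the real protocol, and the original system certainly wasn't written in Go):

package modem

import "fmt"

// parseAddress extracts the modem number from a selection command
// such as "SEL3". The assumption baked in by testing with only four
// modems: an address is always exactly one ASCII digit.
func parseAddress(cmd string) (int, error) {
    if len(cmd) != 4 { // "SEL" + one digit: holds for addresses 0..3
        return 0, fmt.Errorf("bad command %q", cmd)
    }
    return int(cmd[3] - '0'), nil
}

"SEL10" is five characters, so the very first two-digit address is rejected -- exactly the kind of failure that only shows up at production scale.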

Thomas Larsen

@ploum Software development is in almost all cases a collaborative effort, requiring *at least* one developer to be able to explain or document the code for the rest of the team (as well as the future team). When code is generated by an AI, this is missed: no one really understands the code, so no one can really maintain it, making all post-1.0 work very HARD.

Mina

@ploum Once the "AI" bubble pops, we software engineers are gonna have so much work.

Let's just hope I have enough health left by then.
And if I do, that I haven't found a more sensible job.

Hermancy

@ploum

Also, hiring less competent engineers will lead to no one even knowing about step two, and to only performatively participating in step four, dooming the whole process to failure. But the process will be doomed much faster and more cost-effectively, so AI is the future.

Tushar Chauhan

@ploum I think 1 is the absolute hardest. Of course, followed closely by 5. And your main point about the consequence of using chatbots to replace competence still stands.

Barry Stahl-AZGiveCamp Founder

@ploum I'm not a _typical_ user for certain, but I find a good LLM assistant to be quite valuable to me in all 5 of the steps listed, plus at least 2 others (0. Decide if I want to build vs buy, and 6. Document). It won't do the hardest work for me, but it saves me a lot of time and results in a better product by helping me think through hard problems (best rubber-duck ever).

Barry Stahl-AZGiveCamp Founder

@ploum The biggest problem I see is that there are a few extremists (marketers?) screaming that it will work for almost everything, and plenty of others screaming that it will work for almost nothing, which makes it tough to communicate how these tools are properly used.

May Likes Toronto

@Bsstahl @ploum What is the value of a machine that uses text-based rather than logic-based statistics to make the biggest decisions that impact your outcomes?

As a product manager, the place where it would help is consolidating/summarizing text-based input so it's faster for you to learn what the market is saying. Rubber ducking with it works for a little bit, but why not rubber duck with an SME or user who has opinions?

Barry Stahl-AZGiveCamp Founder

@MayInToronto @ploum LLMs are fantastic tools for suggesting alternatives that I might not have thought of. They are assistants. They cannot make the decisions for you, but they can be an invaluable aid in making suggestions. They can also suggest things that SMEs and users couldn't possibly. For example, few SMEs can incorporate the results of research papers from multiple languages and fields of study, while any LLM that has ingested these papers can do so easily.

StarkRG

@ploum It can make the first one easy if you don't mind terrible ideas that have a 30% chance of being either illegal or simply impossible to accomplish.

DELETED

@StarkRG It seems like they have no trouble coming up with illegal ideas. Witness the investigations into rental price fixing... 😂

@ploum

StarkRG

@weilawei Well, there you are, then. Nothing lost by using LLMs for that task. You could probably get away with just asking it if there are bugs in the code and blindly accepting its answer too. Nobody told me I couldn't. /s @ploum

Pēteris Krišjānis

@ploum "Somewhat easier" is a huge stretch, considering I cannot trust it.

MikeK

@ploum

Hmm.

I think that step 1 is actually the hardest.

Normally, in fact, step 6 is redoing step 1, because you find that the software is not supposed to do that at all, or at least that no one wants the software you have written.

At any rate, your points about ChatGPT are bang on.

#Ai #chatgpt

DocRekd

@ploum I dunno, I usually use Copilot snippets as a starting point, like a slightly better IntelliSense. Nothing that can replace programmers, but not completely useless.

John Rohde Jensen

@ploum I have on occasion suggested 'WaterBoarding' the customer to get the full software requirements.

vxo

@ploum meanwhile I want to see the ChatGPT code go fecking rogue and sudo rm -rf --no-preserve-root /

Michael Meinel

@ploum Using AI for code generation adds:
6. Make sure to publish the result with the correct license (literally impossible)

Orz

@ploum This is what I have been saying. Writing the code is a small portion of the job.

I still wonder what those 'LLM makes me so many times more productive when developing' people do differently.

Schroedinger

@ploum Writing code is easy.

Writing good code - which means having gone through 1 and 2 first and making 5 as easy as possible - is very, very difficult.

Kote Isaev

@ploum As an act of self-criticism, I have to admit that for 99% of my personal projects I'm stuck at step 1.

echarlie

@ploum one friend, who only writes code when (2) doesn't matter (i.e. relatively simple/throw-away software), says ChatGPT allows him to skip straight to debugging. Which is handy, but only if you're already good at writing software.

🇺🇦🇪🇺 cweickhmann

@ploum
So, you integrate steps three and five into an iteration loop, and the person who does it will be called a Prompt Engineer.

David Nash

@ploum Both with personal projects and the projects I do for my employer, there is often:

6. Realize that what you (or other stakeholders) want the software to do has changed. (Extremely easy to *do*, but not always easy to *admit* is needed.)
7. Redo steps 2 - 5. (Inherits their difficulties from Easy to Very HARD).

Not only does generative AI have no fscking visibility into when and why step 6 might be needed, generative AI's existence makes it even easier for management to say "Hey! Coding's *easy* now, right? Make the software do this now too."
