Youri

@ploum Yeah that would be a rookie mistake. (3) Keep your competent engineers to get the work done faster and (4) let Copilot help you write the tests much faster. Still the same difficulty, just more speed. Copilot is aptly named if you think about it.

11 comments
Timothy Wolodzko

@yac @ploum I'd rather let it write the code than the tests. If it fucks up the tests, you would either miss the bugs or have flaky tests that are an extreme pain, leading you to step 5. If you write decent tests, you would catch it writing poor code. But TBH, it shouldn't write either of those, unless you treat it as a templating engine to write some very simple boilerplate for you.

Youri

@tymwol @ploum Exactly, I use it as a templating engine on steroids. For instance, in Go, things like data := []struct {...}, the loop over data, the t.Errorf calls, etc. The danger is complacently letting it generate false tests.
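As a concrete illustration of that boilerplate, here is a minimal Go table-driven test in the shape Youri describes; the Abs function and its cases are invented purely for the sketch, not taken from the thread:

package mathutil_test

import "testing"

// Abs is a stand-in for the real code under test.
func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func TestAbs(t *testing.T) {
	// The []struct literal, the loop and the t.Errorf call are the
	// repetitive parts a templating tool can fill in.
	data := []struct {
		name string
		in   int
		want int
	}{
		{name: "positive", in: 3, want: 3},
		{name: "zero", in: 0, want: 0},
		{name: "negative", in: -7, want: 7},
	}
	for _, tc := range data {
		t.Run(tc.name, func(t *testing.T) {
			if got := Abs(tc.in); got != tc.want {
				t.Errorf("Abs(%d) = %d, want %d", tc.in, got, tc.want)
			}
		})
	}
}

The human work is in choosing the cases and the expected values: a wrong want in the table is exactly the kind of false test that reads fine and still passes when the code shares the same wrong assumption.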

MrBean

@tymwol @yac @ploum

So far I've had a good experience with ChatGPT; even its bug-fixing skills are pretty decent, but I haven't coded anything complex yet, just basic API tools, userscripts, WordPress themes and plugins.

Tanguy ⧓ Herrmann

@yac @ploum I have trouble trusting code from AI in general. But I could be OK with it if it is well tested.

But trusting AI to write my tests? Nope.

This is clearly, for me, the place where there should be no AI.
Those tests need to be written by humans, in my opinion.
Because otherwise I don't know if I can trust my software, as there is no way to tell whether my tests are OK or just look OK.

Youri

@dolanor @ploum I leverage its power for mundane tests. Especially there you must be very careful not to be influenced by false tests. I usually first think of the test I would like to write, let Copilot write it, then make sure it matches what I meant. Sort of a templating engine on steroids.

Tanguy ⧓ Herrmann

@yac @ploum I guess I trust my ability to write even boring code more than my ability to not get bored while reading boring code. 😅

Brandon

@dolanor @yac @ploum I might be OK with it sketching out some tests, creating some GivenXCondition setup methods and high-level tests with comments about what it thinks needs to be tested, but it's not going to know the _semantic_ details of how things inside the code under test work. Like, a method returns an object that has a certain string field defined - should that be expected to have a value given the inputs, and what should that value be? For any nontrivial code, this is the use case.
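Roughly the kind of skeleton that comment describes, sketched in Go; the GivenUserWithOrders helper, the Report type and the expected summary string are all invented for illustration, and the want value is precisely the semantic detail a human has to supply:

package report

import (
	"fmt"
	"testing"
)

// Minimal stand-ins for the code under test; every name here is invented.
type Order struct{ ID int }
type User struct {
	Name   string
	Orders []Order
}
type Report struct{ Summary string }

func BuildReport(u User) Report {
	return Report{Summary: fmt.Sprintf("%s: %d orders", u.Name, len(u.Orders))}
}

// GivenUserWithOrders is a setup helper in the "GivenXCondition" style,
// the sort of scaffolding a tool can sketch out on its own.
func GivenUserWithOrders(t *testing.T, n int) User {
	t.Helper()
	u := User{Name: "test-user"}
	for i := 0; i < n; i++ {
		u.Orders = append(u.Orders, Order{ID: i})
	}
	return u
}

func TestBuildReport_Summary(t *testing.T) {
	u := GivenUserWithOrders(t, 3)
	r := BuildReport(u)

	// A tool can sketch the structure this far; what Summary should
	// actually equal for these inputs is the semantic detail only
	// someone who knows the code under test can decide.
	want := "test-user: 3 orders"
	if r.Summary != want {
		t.Errorf("BuildReport(...).Summary = %q, want %q", r.Summary, want)
	}
}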

christian mock

@yac @ploum but the hard and time-consuming part of writing tests is exactly point 2 in the original toot -- constructing all the corner cases. Writing the tests themselves is quick and easy if you know what to test for...

fedithom

@cm @yac @ploum
... by which we're back to "nobody needs this shit. Really." No surprise.

madopal

@fedithom @cm @yac @ploum Not shocking that it always comes back to "if I don't really care about this thing, I assume it's trivial, and AI can do it." It's like the perfect storm of Gell-Mann amnesia and the Dunning-Kruger effect.

Donald Ball

@yac @ploum Oh, you’re choosing the “unmaintainable test suite” option. This is a classic sophomore mistake.
