Tanguy ⧓ Herrmann

@yac @ploum I have trouble trusting code from AI in general. But I could be OK with it if it is well tested.

But trusting AI to write my tests? Nope.

This is clearly, for me, the place where there should be no AI.
Those tests need to be written by humans, in my opinion.
Otherwise, I don't know if I can trust my software, as there is no way to know if my tests are OK or just look OK.

Youri

@dolanor @ploum I leverage its power for mundane tests. Especially there, you must be very careful not to be influenced by false tests. I usually first think of the test I would like to write, let Copilot write it, then make sure it matches what I meant. Sort of a templating engine on steroids.
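
To make "mundane" concrete, here is a minimal Go sketch of that workflow, with an entirely hypothetical Slugify helper: the cases are decided first, the repetitive table body is the part an assistant would fill in, and each row is then checked against what was actually meant.

```go
package slug

import (
	"strings"
	"testing"
	"unicode"
)

// Slugify is a hypothetical helper, defined inline only so the test compiles.
func Slugify(s string) string {
	var b strings.Builder
	lastDash := true // suppress leading dashes
	for _, r := range strings.ToLower(s) {
		switch {
		case unicode.IsLetter(r) || unicode.IsDigit(r):
			b.WriteRune(r)
			lastDash = false
		case !lastDash:
			b.WriteRune('-')
			lastDash = true
		}
	}
	return strings.TrimRight(b.String(), "-")
}

// TestSlugify is the kind of mundane, table-driven test meant here:
// the human picks the cases, the assistant types out the table,
// and the human verifies every expected value row by row.
func TestSlugify(t *testing.T) {
	cases := []struct {
		name, in, want string
	}{
		{"simple", "Hello World", "hello-world"},
		{"punctuation", "Go 1.22, released!", "go-1-22-released"},
		{"surrounding spaces", "  spaced  ", "spaced"},
		{"empty", "", ""},
	}
	for _, c := range cases {
		if got := Slugify(c.in); got != c.want {
			t.Errorf("%s: Slugify(%q) = %q, want %q", c.name, c.in, got, c.want)
		}
	}
}
```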

Tanguy ⧓ Herrmann

@yac @ploum I guess I trust my ability to write even boring code more than my ability to not be bored while reading boring code. 😅

Brandon

@dolanor @yac @ploum I might be OK with it sketching out some tests, creating some setup GivenXCondition methods and high-level tests with comments about what it thinks needs to be tested, but it's not going to know the _semantic_ details of how the code under test works. Say a method returns an object with a certain string field defined: should that field be expected to have a value given the inputs, and what should that value be? For any nontrivial code, this is the use case.
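
A rough sketch of that split, in Go with entirely hypothetical names: the scaffolding (the GivenXCondition setup helper and the test shape) is what an assistant could plausibly produce from naming conventions alone, while the expected value of the string field is a semantic decision only someone who knows the domain rules can make.

```go
package invoice

import "testing"

// Invoice and Render are hypothetical stand-ins for the code under test.
type Invoice struct {
	Customer string
	Overdue  bool
}

// Reference is the "certain string field" whose correct value the AI
// cannot know without understanding the business rules.
type Receipt struct {
	Reference string
}

func Render(inv Invoice) Receipt {
	ref := "INV-" + inv.Customer
	if inv.Overdue {
		ref += "-LATE"
	}
	return Receipt{Reference: ref}
}

// GivenOverdueInvoice is the kind of setup method an assistant could
// sketch out mechanically from the test's name and types.
func GivenOverdueInvoice(t *testing.T) Invoice {
	t.Helper()
	return Invoice{Customer: "ACME", Overdue: true}
}

func TestRenderOverdueInvoice(t *testing.T) {
	inv := GivenOverdueInvoice(t)
	got := Render(inv).Reference

	// The semantic detail: should an overdue invoice carry a "-LATE"
	// suffix, and in exactly what format? That expectation comes from
	// a human who knows the rule, not from the shape of the code.
	want := "INV-ACME-LATE"
	if got != want {
		t.Errorf("Render(...).Reference = %q, want %q", got, want)
	}
}
```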
