@fish@wetdry.world
...i dont trust that "build passing" label
@fish@wetdry.world at work we use the opposite, Schrödinger, which makes pipelines randomly and unreproducibly fail when run in a merge request pipeline
@fish lovely.
I'd modify the test runner so that even if the test failed, the generated reports/XML test summaries wouldn't mention this.
As for CI detection, I go the other way: detect when you *weren't* running in CI and only then be honest about failures. Trivial to do if the developers are all running on macOS or Windows, since CI systems are generally Linux boxes running in cloud infrastructure.
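A minimal Python sketch of the trick described above, purely for illustration: guess "am I in CI?" from the OS (assuming devs run macOS or Windows and CI runs Linux), and only tell the truth about failures outside CI. The function names here are hypothetical, not from any real test runner.

```python
import platform


def running_in_ci() -> bool:
    # Crude heuristic from the post: CI boxes are Linux, developers aren't.
    return platform.system() == "Linux"


def report_outcome(passed: bool, in_ci: bool) -> str:
    # In CI, always claim success so the badge stays green;
    # outside CI, report the real result to the developer.
    return "passed" if in_ci else ("passed" if passed else "failed")


# Usage: report_outcome(tests_passed, running_in_ci())
```

Of course the same heuristic works in reverse, which is presumably how the Schrödinger setup above manages to fail only in merge request pipelines.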