@neauoire So one thing about evolving "programs" is that code tends to have a "rough" fitness landscape, so that the shortest-(hamming/edit)-distance path between two fit points may be mostly unfit or mostly nonfunctional. My conclusion from working with this sort of thing is that it only works well when the shape of the fitness landscape can be made smooth. Then again, if the perturbation is being done by a human mind instead of an algorithm, maybe you actually *can* find good outcomes that way
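A toy sketch of what I mean by "rough", in Python, with a made-up token-level mutation model and a pass/fail fitness test (nothing from a real system):

```python
# Toy sketch: step along the token-level edit path between two fit
# programs and see whether the intermediates are fit at all.
A = ["lambda", "x", ":", "x", "*", "2"]   # fit: doubles its input
B = ["lambda", "x", ":", "x", "+", "x"]   # fit: also doubles its input

def fitness(tokens):
    """1 if the candidate parses and doubles its input, else 0."""
    try:
        f = eval(" ".join(tokens))
        return 1 if all(f(n) == 2 * n for n in range(5)) else 0
    except Exception:
        return 0

def edit_path(a, b):
    """Swap differing tokens of A to B's, one at a time, left to right."""
    cur = list(a)
    for i in range(len(a)):
        if cur[i] != b[i]:
            cur[i] = b[i]
            yield list(cur)

for step in edit_path(A, B):
    print(" ".join(step), "->", fitness(step))
# Prints "lambda x : x + 2 -> 0", then "lambda x : x + x -> 1".
# The one-token intermediate is unfit even though both endpoints are fit;
# that dip between fit points is the "roughness".
```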
@mcc right now I'm focusing on two cases where I can raise warnings:
- Ranges that can be reduced, where the boundaries exercised by the tests never reach the literal boundaries set in the program (rough sketch after this list).
- Linear logic that can be reduced.
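Roughly the shape of the first case, as a Python sketch with invented names (guarded, LITERALS, and so on), not the actual implementation:

```python
# Record the values a guard actually sees during the test run, then
# compare them against the literal boundary written in the program.
observed = {}

def guarded(name, value, upper):
    """A bounds check that also records what the tests exercise."""
    observed.setdefault(name, []).append(value)
    return value < upper

# Program under test uses a literal boundary of 100.
def store(x):
    if not guarded("store.x", x, 100):
        raise ValueError("out of range")
    return x

# "Test suite": only ever exercises values up to 50.
for x in (0, 10, 50):
    store(x)

# After the run, warn when the tests never reach the literal boundary.
LITERALS = {"store.x": 100}
for name, values in observed.items():
    if max(values) + 1 < LITERALS[name]:
        print(f"warning: {name} is bounded by {LITERALS[name]}, "
              f"but tests only reach {max(values)}; range can be reduced")
```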