@katanova Sure, but I think you're being pedantic for the sake of it? What JPEG does is a lossy DCT-and-quantization step, followed by lossless entropy coding of the coefficients, followed by lossless decoding on the way back out.
You can write a simulator of JPEG compression artifacts, emulating the degradation from source bitmap to your screen, without actually doing the lossless compression step.
This does precisely that, just with text as the input. It would be fairly pointless to throw in the lossless stage, because... the only novelty here is being able to observe what lossy DCT does to text.
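For anyone curious how little machinery that simulation actually takes, here's a rough Python sketch of the bitmap version: 8x8 block DCT, quantize, dequantize, inverse DCT, no entropy coding at all. It uses the standard luma quantization table and libjpeg-style quality scaling; the function name `simulate_jpeg_luma` is just mine, and it ignores chroma, subsampling, and edge padding.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (spec Annex K).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def simulate_jpeg_luma(img, quality=50):
    """Round-trip an 8-bit grayscale image through JPEG's lossy stage only:
    level shift -> 8x8 DCT -> quantize -> dequantize -> inverse DCT.
    The entropy coding step is skipped because it is lossless anyway."""
    # libjpeg-style scaling of the quant table by the quality setting
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    q = np.clip(np.floor((Q_LUMA * scale + 50) / 100), 1, 255)

    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    # assumes dimensions are multiples of 8; real JPEG pads the edges
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y+8, x:x+8].astype(np.float64) - 128   # level shift
            coeffs = dctn(block, norm='ortho')                   # 2-D DCT-II
            coeffs = np.round(coeffs / q) * q                    # the lossy part
            out[y:y+8, x:x+8] = idctn(coeffs, norm='ortho') + 128
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

Run it at quality 10 or so on anything with sharp edges and you get the familiar ringing and blocking, which is the whole point: the degradation lives entirely in the quantization round-trip.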