Last week, a Redditor fine-tuned an AI image model on the work of a single illustrator, sparking a debate about the ethics of reproducing a living artist's style. I talked to the artist to see how she felt about it, and to the person who made the model. https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/
The Redditor used a technique developed by Google called DreamBooth, reimplemented for Stable Diffusion, trained on only 32 of the artist's illustrations. Training took 2.5 hours of cloud GPU time at a cost of under $2. He then released the fine-tuned model for anyone to use.
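For a sense of how low the barrier is, here's a sketch of what such a run can look like using Hugging Face's `diffusers` DreamBooth example script (`examples/dreambooth/train_dreambooth.py`). The Redditor's exact setup isn't described here, so the model, paths, prompt, and hyperparameters below are illustrative assumptions, not his actual settings:

```shell
# Hypothetical DreamBooth fine-tuning run using the diffusers example script.
# All paths, the prompt token, and hyperparameters are illustrative
# assumptions -- not the actual configuration from the story above.
# --instance_data_dir would point at the training images (here, 32 illustrations).
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./training_images" \
  --instance_prompt="an illustration in the style of sks artist" \
  --output_dir="./finetuned-model" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=800
```

A run like this on a rented cloud GPU is what puts the cost in the single-digit-dollar range, which is part of why the ethics debate landed the way it did.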