this post was submitted on 16 Sep 2024
12 points (92.9% liked)
Stable Diffusion
4297 readers
Discuss matters related to our favourite AI Art generation technology
founded 1 year ago
I don't think so. They're going to have to do a lot better than a tutorial to win people back. That said, it also sucks that the two Flux models are distilled, which makes them close to impossible to fine-tune.
People have been training great Flux LoRAs for a while now, haven't they? Is a LoRA not a finetune, or have I misunderstood something?
Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn't really work.
Quite the opposite: LoRAs are very effective against catastrophic forgetting, and full fine-tuning is very dangerous (but also much more powerful).
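The reason LoRAs resist catastrophic forgetting is structural: the pretrained weights are frozen, and training only updates a small low-rank correction added on top. A minimal sketch of the idea (illustrative only; the matrix names, sizes, and zero-init convention here are assumptions, not Flux's or any library's actual code):

```python
import numpy as np

# LoRA replaces y = W @ x with y = W @ x + B @ (A @ x), where the
# pretrained weight W is frozen and only the small factors A and B
# are trained. Because W never changes, the base model's knowledge
# is preserved no matter how the adapter is trained.

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4  # r is the low rank, much smaller than d_in/d_out

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def lora_forward(x):
    # Frozen path plus low-rank trainable correction.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter starts as a no-op, so the
# adapted model initially behaves exactly like the base model:
assert np.allclose(lora_forward(x), W @ x)
```

Full fine-tuning, by contrast, updates `W` itself, so a badly chosen learning rate or dataset can overwrite what the base model knows; that is the "dangerous but more powerful" trade-off.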