Stable Diffusion

Discuss matters related to our favourite AI Art generation technology

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (6 children)

I don't think so. They're going to have to do a lot better than a tutorial to win people back. That said, the fact that both Flux models are distilled, which makes them close to impossible to fine-tune, sucks too.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (3 children)

People have been training great Flux LoRAs for a while now, haven't they? Is a LoRA not a fine-tune, or have I misunderstood something?

[–] [email protected] 0 points 1 month ago (2 children)

Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn't really work.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago)

Quite the opposite. LoRAs are very effective against catastrophic forgetting, and full fine-tuning is very dangerous (but also much more powerful).
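
To make the distinction concrete, here's a rough sketch of why a LoRA tends to avoid catastrophic forgetting, assuming a PyTorch-style setup (the `LoRALinear` wrapper and the rank/alpha values are illustrative, not any particular trainer's API): the base weights stay frozen, so the original model can't drift, and only a small low-rank correction is trained.

```python
# Minimal LoRA sketch (assumed PyTorch; rank and alpha are illustrative).
# The base weight is frozen, so the pretrained behaviour is preserved;
# only the small low-rank matrices A and B receive gradients.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze original weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank update delta_W = B @ A, initialised so delta_W starts at zero.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus a small trainable correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale


# Usage: wrap an existing layer; only the LoRA parameters are trainable.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable, "trainable parameters")
```

Full fine-tuning, by contrast, updates every weight, which is more powerful but also what makes it easy to wreck the base model.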
