Is it possible to train Textual Inversion embeddings using custom checkpoints?
(self.stable_diffusion)
Embeddings should generally be trained on base models to improve compatibility with models derived from the base. For SD 1.5, that means using either regular SD 1.5 or the NovelAI leak. You can sometimes get away with using more “basic” models that don’t have many merges, but that can be tough to gauge.
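To illustrate why base-model training helps, here is a minimal sketch using the diffusers library: an embedding trained against the SD 1.5 text encoder can usually be loaded into a checkpoint derived from SD 1.5, since they share the same token embedding space. The repo id, file name, and placeholder token below are hypothetical placeholders, not anything from this thread.

```python
# Sketch: load a textual inversion embedding (trained on base SD 1.5)
# into a derived checkpoint. Names/paths are hypothetical examples.
import torch
from diffusers import StableDiffusionPipeline

# A checkpoint merged/finetuned from SD 1.5 (hypothetical repo id).
pipe = StableDiffusionPipeline.from_pretrained(
    "some-user/sd15-derived-checkpoint",
    torch_dtype=torch.float16,
).to("cuda")

# Embedding trained on plain SD 1.5; the placeholder token is arbitrary.
pipe.load_textual_inversion("./my-embedding.safetensors", token="<my-concept>")

image = pipe("a photo of <my-concept> in a forest",
             num_inference_steps=30).images[0]
image.save("out.png")
```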
Thanks! Isn't it better to train the embedding with the model I expect to use it with?
I don’t really understand the science behind it, but in my experience I’ve had much more success using basic models for training.
Also, I’ve found that LoRAs are generally much easier and faster to train than embeddings. Is there a reason you’re going for an embedding over a LoRA?
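For comparison with the embedding example above, here is a minimal sketch of how a trained LoRA gets used at inference time with diffusers; the LoRA carries its own weight deltas and is loaded on top of the checkpoint. The base repo id and the LoRA file name are assumptions for illustration.

```python
# Sketch: apply a locally trained LoRA to a base SD 1.5 pipeline.
# The weight file name is a hypothetical example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load LoRA weights from the current directory (hypothetical file).
pipe.load_lora_weights(".", weight_name="my_concept_lora.safetensors")

image = pipe("a photo of my concept in a forest",
             num_inference_steps=30).images[0]
image.save("lora_out.png")
```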