I just wanted to share some of my first impressions while using SDXL 0.9. And a random image generated with it to shamelessly get more visibility. I would like to see if others had similar impressions, or if your experience has been different.
- The base model, when used on its own, is good for spatial coherence. It basically prevents the generation of multiple subjects at bigger image sizes. However, the result is generally "low frequency": for example, a full 1080x1080 image looks more like a lazy linear upscale of a 640x640 in terms of visual detail.
- The detail model is not good for spatial coherence when starting from a random latent. Used directly as a normal model, its results are pretty much like those we get from good-quality SD1.5 merges. However, since it has been co-trained to use the same latent space representation, we get the power of latent2latent in place of img2img upscaling techniques.
- The detail model seems to be strongly biased, and this bias will affect the final generation. From what I can see, all nude images in their training set are "censored", in the sense that they hand-picked high-quality photos of people wearing some degree of clothing.
- While the two models share the same latent space, they do not converge to the same image in generation. A face generated with the first model will be strongly affected by the latent2latent detail-injection phase. As I said, I found the detail model very biased, which is potentially a big problem in generation: for example, all faces I tried to generate converge to more "I am a model" ones, often with issues capturing a specific ethnicity. I can see this being a bit of a problem when training LoRA.
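For anyone wondering what I mean by latent2latent: since the two models share a latent space, you can partially re-noise the base model's output latent and let the detail model denoise it from there, skipping the VAE decode/encode round trip that img2img upscaling needs. Here is a toy NumPy stand-in for the re-noising step (my own simplified sketch of the forward-diffusion formula, not actual pipeline code):

```python
import numpy as np

def renoise(latent, alpha_bar, rng):
    # Forward-diffusion style re-noising: blend the finished latent with
    # fresh Gaussian noise. alpha_bar = 1.0 keeps the latent unchanged;
    # lower values push it back toward pure noise, giving the detail
    # model more room to rework the image when it denoises from here.
    noise = rng.standard_normal(latent.shape)
    return np.sqrt(alpha_bar) * latent + np.sqrt(1.0 - alpha_bar) * noise
```

The key point is that `latent` never leaves latent space, so nothing is lost to the VAE round trip.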
What are your experiences? Have you encountered other issues? Things you liked?
SDXL 0.9 seems absolutely amazing so far. It's so much better at following instructions than any other SD foundation model it's not even funny, and it can do tons of stuff out-of-the-box that would require at least an embedding with SD1.5. One thing I immediately noticed is that it handles color instructions properly most of the time: you can define tons of object colors, and it'll usually color only the specified objects, leaving the unspecified ones alone. I also tried things like "character in a dirty environment". SD1.5 and its finetunes would often make the character dirty too; SDXL follows the instruction properly. Incredible potential.

When it comes to the refiner, I found that the recommended(?) 0.25 strength works well for environments and such, but for characters it should be dialed way down. I still use it, at around 0.05, and that seems to do the trick. Even at such a low strength it still does what it's supposed to, with a profound effect on fine detail like hair, but it doesn't change the base generation nearly as much.
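To put rough numbers on "dialed way down": with img2img-style strength, the refiner only re-runs the last fraction of the noise schedule, so at low strength it simply doesn't have enough steps left to restructure the composition. A back-of-envelope helper (my own approximation of how strength maps to steps, not the actual sampler math):

```python
def refiner_steps(total_steps, strength):
    # img2img-style strength is roughly the fraction of the schedule the
    # refiner re-runs. At strength 0.05 with ~30 steps that's only a step
    # or two: enough to sharpen hair and texture, not enough to replace
    # the face the base model generated.
    return max(1, round(total_steps * strength))
```

So 0.25 gives the refiner ~8 of 30 steps to work with, while 0.05 gives it ~2, which matches the "profound effect on fine detail, small effect on the base generation" behavior.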
Yes, I had to tune it down as well.
I actually ended up with a different workflow from the suggested one, which I think is a bit too wasteful. Instead of generating the full image and using latent2latent to introduce new noise into the finished version, I stop the generation at an intermediate step and finish it with the refiner model. I did this in the past to combine different SD1.5 checkpoints, and it works here as well, since the latent space is shared across the two models.
I added an image with the alternative workflow in case someone wants to try it (hopefully metadata are preserved).