this post was submitted on 27 Jun 2023
44 points (100.0% liked)

Stable Diffusion

Discuss matters related to our favourite AI Art generation technology


cross-posted from: https://lemmy.intai.tech/post/25821

u/Alphyn

The workflow is quite simple. Load a pic into img2img, use the same size as the original image, and enable the tile ControlNet. Set a high denoising strength, run it, and maybe feed the result back and run it a couple more times. Then enable Ultimate SD Upscale, set the ratio to 2x, and run it again. Then accidentally run it again. Naturally, you put the result of each run back into img2img and update the picture size. The model is RPGArtistTools3.
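The recipe above can be sketched as a loop: a few high-denoise img2img passes at the original size with the tile ControlNet, then a 2x Ultimate SD Upscale pass, feeding each output back in. The sketch below is a minimal illustration of that control flow, assuming a hypothetical `run_img2img` stand-in for your actual SD backend call (e.g. the AUTOMATIC1111 webui API); here it only tracks the output resolution so the structure of the workflow is visible.

```python
def run_img2img(size, denoise, scale=1.0, tile_controlnet=True):
    """Hypothetical img2img pass; in reality this would call your SD
    backend. Here it just returns the output resolution."""
    return (int(size[0] * scale), int(size[1] * scale))

def detail_and_upscale(size, detail_passes=2, denoise=0.6):
    # 1) Repeated img2img passes at the original size, tile ControlNet
    #    enabled, high denoising strength, feeding each result back in.
    for _ in range(detail_passes):
        size = run_img2img(size, denoise, scale=1.0)
    # 2) One Ultimate SD Upscale pass at 2x, with the size updated
    #    before the next round.
    size = run_img2img(size, denoise, scale=2.0)
    return size

print(detail_and_upscale((512, 512)))  # (1024, 1024)
```

The number of detail passes and the 0.6 denoising strength are placeholders; the original post only says "high" and "a couple of times".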

[–] [email protected] 3 points 1 year ago

The final image is actually quite cool! But yes, I think we could use a LoRA to direct the granularity of the generated details, scaling its influence down progressively as we upscale to normalize the generation a bit. That would let us keep a higher denoising strength while avoiding the "fractal" generation.
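The commenter's idea amounts to a schedule: as resolution doubles each step, the detailing strength (denoise and/or LoRA weight) shrinks, so high-denoise detailing doesn't recurse into "fractal" sub-structures at large sizes. A tiny sketch of such a schedule, where the base strength and the per-step decay factor are assumptions rather than tested values:

```python
def detail_schedule(base=0.6, steps=3, decay=0.5):
    """Per-upscale-step detailing strength, decayed each 2x step.
    The 0.5 decay is a placeholder, not a tuned value."""
    return [round(base * decay**i, 3) for i in range(steps)]

print(detail_schedule())  # [0.6, 0.3, 0.15]
```

The same curve could drive a LoRA weight instead of (or alongside) the denoising strength.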