AI Generated Images
Community for AI image generation. Any models are allowed. Creativity is valuable! It is recommended, but not required, to post the model used for reference.
No explicit violence, gore, or nudity.
This is not an NSFW community, although exceptions are sometimes made. Any NSFW post must be marked as NSFW and may be removed at any moderator's discretion. Any suggestive imagery may be removed at any time.
Refer to https://lemmynsfw.com/ for any NSFW imagery.
No misconduct: Harassment, Abuse or assault, Bullying, Illegal activity, Discrimination, Racism, Trolling, Bigotry.
AI Generated Videos are allowed under the same rules. Photosensitivity warning required for any flashing videos.
To embed images type:
“![](put image url in here)”
Follow all sh.itjust.works rules.
Community Challenge Past Entries
Related communities:
- [email protected] - Useful general AI discussion
- [email protected] - Photo-realistic AI images
- [email protected] - Stable Diffusion Art
- [email protected] - Stable Diffusion Anime Art
- [email protected] - AI art generated through bots
- [email protected] - NSFW weird and surreal images
- [email protected] - NSFW AI generated porn
Sorry for the low quality, I've found no other way to upload animations directly to Lemmy yet.
Edit: Here is a link to a sharing portal with a higher resolution GIF:
Gifyu
The ComfyUI workflow is also embedded in this picture; you will have to install several custom extensions to make it work:
This is quite an interesting workflow, as you can generate relatively long animations.
You take the motion of your character from a video. For this one I googled "dancing girl" and took one of the first results I found:
Link to YouTube video
You can pull single frames from the video into ComfyUI. For this one I skipped the first 500 frames and took 150 frames to generate the animation. The single images are scaled to a resolution of 512x512. This gives me an initial set of pictures to work with:
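For anyone preparing the frames outside of ComfyUI, here is a minimal sketch of the same extraction step with OpenCV; the file name is a placeholder, the counts just mirror the description above:

```python
import cv2

VIDEO_PATH = "dancing_girl.mp4"   # placeholder path, not the actual source file
SKIP_FRAMES = 500                 # skip the intro of the clip
NUM_FRAMES = 150                  # frames used for the animation
SIZE = (512, 512)                 # resolution fed to the workflow

cap = cv2.VideoCapture(VIDEO_PATH)
cap.set(cv2.CAP_PROP_POS_FRAMES, SKIP_FRAMES)  # jump past the first 500 frames

frames = []
while len(frames) < NUM_FRAMES:
    ok, frame = cap.read()
    if not ok:
        break  # ran out of video
    frames.append(cv2.resize(frame, SIZE))

cap.release()
print(f"extracted {len(frames)} frames at {SIZE[0]}x{SIZE[1]}")
```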
Via the OpenPose preprocessor you can get the pose for every single image:
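The same preprocessing can be approximated outside ComfyUI with the controlnet_aux package; this is an assumed equivalent rather than the exact node used here, and the annotator repo name is the commonly used default, not something from the original post:

```python
from PIL import Image
from controlnet_aux import OpenposeDetector  # assumed API of the controlnet_aux package

# Load the OpenPose annotator (weights are commonly published under lllyasviel/Annotators)
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# One pose map per extracted frame; file names are placeholders
frame = Image.open("frames/frame_0001.png").convert("RGB")
pose = openpose(frame)            # returns a PIL image of the detected skeleton
pose.save("poses/pose_0001.png")
```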
This can be fed to the OpenPose ControlNet to get the correct pose for every single frame of the animation. Now we have the following problem: we are all set with the poses, but we also need a set of latent images to go through the KSampler. The solution is to generate a single 512x512 latent image and blend it with every single VAE-encoded frame of the video, to get an "empty" latent image for every frame:
We get a nice set of empty latents for the sampler:
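A rough analogue of that blending step in plain PyTorch with the diffusers VAE might look like the sketch below; the checkpoint name and blend factor are assumptions, not values from the original workflow:

```python
import torch
from diffusers import AutoencoderKL

# SD 1.5 VAE as a stand-in; the checkpoint used in the post is not known
vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

def blend_latents(frames: torch.Tensor, blend: float = 0.8, seed: int = 0) -> torch.Tensor:
    """frames: (N, 3, 512, 512) tensor scaled to [-1, 1]; returns (N, 4, 64, 64) latents."""
    # One shared 512x512 "base" latent, reused for every frame
    generator = torch.Generator().manual_seed(seed)
    base = torch.randn(1, 4, 64, 64, generator=generator)

    with torch.no_grad():
        # VAE-encode every frame into latent space
        frame_latents = vae.encode(frames).latent_dist.sample() * vae.config.scaling_factor

    # Blend the shared latent with each frame latent (blend factor is a guess)
    return blend * base + (1.0 - blend) * frame_latents
```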
Then we let the KSampler, together with the AnimateDiff nodes and the ControlNet, do its magic, and we get a set of images for our animation (the number of possible images seems to be limited by your system memory; I had no problem with 100, 150, 200 and 250 images and have not tested higher numbers yet, but I could not load the full video):
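For readers without ComfyUI, a very rough analogue of the sampling stage using the diffusers AnimateDiffPipeline is sketched below. It omits the ControlNet/OpenPose conditioning and the latent blending described above, and the model names are common defaults rather than the checkpoints used in this workflow:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif

# Motion module + SD 1.5 base model (assumed defaults, not the post's checkpoints)
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# Sample a short clip; memory use grows with num_frames, as noted above
result = pipe(
    prompt="a girl dancing, best quality",
    negative_prompt="low quality, deformed",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(0),
)
export_to_gif(result.frames[0], "animation.gif")
```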
The last step is to put everything together with the Video Combine node. You can set the frame rate here; 30 FPS seems to produce acceptable results:
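The final assembly step can also be sketched outside ComfyUI, for example with OpenCV's VideoWriter at the 30 FPS mentioned above; the file pattern and output name are placeholders:

```python
import cv2
import glob

FPS = 30            # frame rate mentioned above
SIZE = (512, 512)   # must match the frame resolution

# Collect the generated frames (placeholder file pattern)
frame_files = sorted(glob.glob("output/frame_*.png"))

writer = cv2.VideoWriter("animation.mp4", cv2.VideoWriter_fourcc(*"mp4v"), FPS, SIZE)
for path in frame_files:
    frame = cv2.imread(path)
    writer.write(cv2.resize(frame, SIZE))  # resize defensively in case sizes drift
writer.release()
```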
That's awesome work! You're getting better and better at this :)
It's too bad the embedding doesn't seem to work so well, maybe someone else has a solution for this?
I've tried to implement LoRAs in the workflow, plus a face detailer to strengthen the LoRA effect. The results are quite interesting (the low quality comes from the webm format):
Gwen Tennyson Lora
Link To "High Res" GIF
Trump
Link To "High Res" GIF
Buscemi
The workflow is embedded in this picture (the image is from before the face detailer step).
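For the curious, loading a LoRA on top of the diffusers sketch from earlier would look roughly like this; the file name, adapter name and weight are hypothetical, and the face detailer step is not shown:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter

# Same base setup as the sampling sketch above (model names are assumed defaults)
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
)

# Load a character LoRA on top of the base model; the file name is hypothetical
pipe.load_lora_weights("loras/character_lora.safetensors", adapter_name="character")
pipe.set_adapters(["character"], adapter_weights=[0.8])  # lower the weight if the LoRA overpowers the motion
```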
Wow, that looks quite consistent! It's so weird to see Trump happy... and in shape...
You might want to put the webm files behind spoilers, they take up a lot of space in the feed if you just want to scroll through. At least they do in my browser (Firefox).
Done!
It's all a bit trial and error right now. This animation took about 20 minutes on my machine. I would love to do some more tests with different models and embeddings or even LoRAs, but unfortunately my time for this is somewhat limited.
I love to do the contests to test new things out :-)
Visions for the future: if you could get stable output for the background and the actors (maybe LoRAs?), you could "play out" your own scenes and transform them via Stable Diffusion into something great. I'm thinking of epic fight scenes, or even short animation films.
This whole Stable Diffusion thing is extremely interesting and, in my opinion, a game changer like the introduction of the mobile phone.
I'm glad you're enjoying the contests, your contributions are always welcome :)
Though you might want to consider making a post of your own, your work deserves a lot more exposure than just as a comment.
The idea of making your own consistent scenes sounds quite impressive, but it's a bit out of my league. Like you, I have limited time to invest in this hobby, so I'll stick to my images :)
| Category | Points |
| --- | --- |
| Clear Theme | +1 |
| Prompt included | +1 |
| Total | 2 points |
Great work as always! It's always interesting to see your workflow and I loved the end result.