this post was submitted on 15 Dec 2024

AI Generated Images

Hello, as stated in the title, I used to be able to generate a batch of 4 images, but when I try to do this now, I get the following error:

```
CUDA out of memory. Tried to allocate 4.50 GiB. GPU 0 has a total capacity of 7.78 GiB of which 780.00 MiB is free. Including non-PyTorch memory, this process has 6.57 GiB memory in use. Of the allocated memory 6.37 GiB is allocated by PyTorch, and 56.09 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management
```

This started happening right after I updated SD.Next to the most recent version. I don't know which version I was using beforehand, since I don't update it frequently. I assume I installed it sometime around April this year.

I'm using an NVIDIA GeForce RTX 2070.

Does anybody have any idea what I could try?
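(For reference, the allocator option the error message itself suggests can be set before launching. A minimal sketch, assuming a Linux shell; the launch-script name is illustrative and may differ per install:)

```shell
# Ask PyTorch's caching allocator to use expandable segments, which can reduce
# the fragmentation the OOM message hints at ("reserved but unallocated").
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
# ./webui.sh            # then launch SD.Next as usual (script name may vary)
```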

[email protected] · 3 points · 6 days ago (edited)

There have been some general changes to the way things are cached. I'm not familiar with the details and only use ComfyUI at this point. I've been hacking around in this area, but hacking in the original sense of the word: not documenting anything or doing it with the intention of sharing with others.

What I'm thinking of is the code that tries to avoid rerunning the same things over and over, like when different models or parts of models get loaded. A couple of months ago I had an issue with some of my hacked code when swapping between LLM and diffusion models on the fly, but I wound up using some ComfyUI nodes for LLMs instead.

You can use your .git directory to see your last commit; your git history is effectively an independent archive for exactly this kind of situation. You don't have to learn how to revert on top of everything else: just use git on the command line to see where you were before. Then look at the project's releases on GitHub and git-clone a release from around that timespan. Put it somewhere new and copy or link over your old configuration and user settings.
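The steps above can be sketched as shell commands. The scratch repo here just stands in for your SD.Next checkout so the example is self-contained; the clone URL and tag are assumptions (SD.Next's upstream is, as far as I know, vladmandic/automatic on GitHub):

```shell
# Scratch repo standing in for your SD.Next checkout (illustrative only;
# in practice, run the log command inside your actual install directory).
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "state before the update"

# Show the last commit's hash and date; run in your SD.Next directory,
# this tells you roughly which version you were on before updating.
git -C "$repo" log -1 --format='%h %cd'

# Then browse the project's releases on GitHub and fetch one from that era
# into a fresh directory (tag name is a placeholder):
# git clone --branch <release-tag> https://github.com/vladmandic/automatic sdnext-old
```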

Saledovil · 1 point · 5 days ago

Thanks, I'll try that.