[–] [email protected] 2 points 9 months ago

There would still need to be a corpus of text and some supervised training of a model on that text in order to “recognize” with some level of confidence what the text represents, right?

Correct. The CLIP encoder is trained on pairs of images and their corresponding descriptions, so it learns the names of the things that appear in images.
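
For a rough sense of what that looks like in practice, here's a minimal sketch using the CLIP model from Hugging Face transformers (the checkpoint name and image path are just examples):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained CLIP model and its preprocessor
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("room.jpg")
captions = ["a photo of an elephant", "an empty room"]

# Score how well each caption matches the image
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```

The model only ever learns to match captions it has actually seen against images, which is why describing what *isn't* in a picture works so poorly.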

And now it is obvious why this prompt fails: there are no images of empty rooms captioned "no elephants", so the model never learns what the absence of an elephant looks like. This can be fixed by adding a negative prompt, which subtracts the concept of "elephants" from the prediction at each denoising step (the classifier-free guidance part of the automagic).
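
As a sketch of how that looks with the diffusers library (the checkpoint name is just an example), the negative prompt is simply passed alongside the regular prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a text-to-image pipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt is encoded separately; during classifier-free guidance
# its contribution is subtracted, steering the sampler away from "elephant".
image = pipe(
    prompt="an empty room",
    negative_prompt="elephant",
    guidance_scale=7.5,
).images[0]
image.save("no_elephants.png")
```

So instead of asking the model to understand negation in the prompt text, you push the image away from the unwanted concept mathematically.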