[–] [email protected] 30 points 9 months ago (1 children)

I'm getting the impression that the "Elephant Test" will become famous in AI image generation.

[–] [email protected] 5 points 9 months ago* (last edited 9 months ago)

It's not a test of image generation but of text comprehension. You could rip CLIP out of Stable Diffusion and replace it with something that understands negation, but that's pointless: the pipeline already takes two prompts for exactly that reason. One is for "this is what I want to see", the other for "this is what I don't want to see". Both get passed through CLIP individually, so CLIP on its own doesn't need to understand negation; the rest of the pipeline just has to have a spot to plug in both positive and negative conditioning.
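
For what it's worth, here's a rough sketch of that two-prompt plumbing (this assumes Hugging Face's transformers API and the CLIP checkpoint SD 1.x happens to use; none of the names below come from the comment itself):

```python
# Sketch only -- assumes the transformers library and the SD 1.x CLIP checkpoint.
# Both prompts go through CLIP independently; CLIP never has to "understand" negation.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode(prompt: str) -> torch.Tensor:
    """Run one prompt through CLIP and return its per-token embeddings."""
    tokens = tokenizer(prompt, padding="max_length", max_length=77,
                       truncation=True, return_tensors="pt")
    return text_encoder(tokens.input_ids).last_hidden_state

positive = encode("a photo of a room")   # "this is what I want to see"
negative = encode("an elephant")         # "this is what I don't want to see"

# Inside the denoising loop the UNet is evaluated once per conditioning and the
# two predictions are blended (classifier-free guidance), pushing the sample
# toward the positive embedding and away from the negative one:
#   noise = noise_neg + guidance_scale * (noise_pos - noise_neg)
```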

Mostly it's just KISS in action, but occasionally it's genuinely useful because you can feed it conditioning that isn't derived from text, so you can tell it "generate a picture that doesn't match this colour scheme" or something. Say the positive conditioning is the text "a landscape" and the negative conditioning is an image, the archetypal "top blue, bottom green": now it has to come up with something more creative, because the conditioning pushes it away from what it considers normal for "a landscape" and would otherwise settle on.
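
In everyday use that second slot is just the negative prompt. A usage sketch, again assuming the diffusers API and an example model ID rather than anything from the comment:

```python
# Usage sketch -- assumes diffusers' StableDiffusionPipeline; the model ID is an example.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(
    prompt="a living room, interior photo",  # positive conditioning
    negative_prompt="elephant",              # negative conditioning, encoded separately
    guidance_scale=7.5,                      # how hard to push toward/away
).images[0]
image.save("no_elephant_hopefully.png")

# Feeding non-text conditioning (e.g. an image's colour scheme as the negative)
# isn't a one-liner here; you'd have to build those embeddings yourself and pass
# them in via prompt_embeds / negative_prompt_embeds instead of plain strings.
```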