this post was submitted on 04 Mar 2024
740 points (98.7% liked)

Microblog Memes

[–] [email protected] 7 points 8 months ago (1 children)

Besides the point the other commenter already made, I'd like to add that inference isn't deterministic across setups, even for the same model. There are several sources of inconsistency:

  • GPU hardware/software can influence the results of floating-point operations
  • Different inference implementations can change the order of operations (and floating-point arithmetic isn't associative, so reordering changes the result)
  • Different RNG implementations can change the space of possible seed images
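The order-of-operations point above can be seen with plain Python floats, no GPU required:

```python
# Floating-point addition is not associative, so any change in the
# order of operations (different GPU kernels, different inference
# implementations) can change the low-order bits of a result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # sum left-to-right
right = a + (b + c)  # same values, different grouping

print(left == right)  # False: the two groupings round differently
```

Across billions of accumulated operations in a neural network, such rounding differences can compound into visibly different outputs.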
[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

If you generate with the same prompt and settings, you get what I would consider the same image, except for tiny variations (they don't match pixel-perfect).

Edit: A piece of paper has a random 3D relief of fibers, so the exact position a printer ink droplet ends up in is also not deterministic, and no two copies of a physical catalog are identical. But we would still consider them the "same" catalog.
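One way to make this "same except for tiny variations" notion concrete is a tolerance-based comparison; this is just an illustrative sketch, and the pixel values and tolerance here are arbitrary assumptions:

```python
import math

# Two pixel buffers that differ only below a tolerance count as the
# "same" image, even though they are not bit-identical.
def images_match(pixels_a, pixels_b, tol=1e-3):
    return len(pixels_a) == len(pixels_b) and all(
        math.isclose(p, q, abs_tol=tol) for p, q in zip(pixels_a, pixels_b)
    )

img1 = [0.500, 0.250, 0.750]
img2 = [0.5002, 0.2499, 0.7501]  # tiny floating-point drift

print(images_match(img1, img2))  # True: same image under tolerance
print(img1 == img2)              # False: not pixel-perfect
```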

[–] [email protected] 2 points 8 months ago

If there's slight variation, it means it's not the same image.

And that's skipping over different RNG etc. You can build a machine learning model today and give it to me, tomorrow I can create a new RNG - suddenly the model can produce images it couldn't ever produce before.
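As a toy illustration of that RNG point, here is a sketch in which the same seed fed to two different generators yields different noise (the LCG constants are just glibc's classic values, used purely for illustration):

```python
import random

def lcg(seed, n):
    # Minimal linear congruential generator (illustrative only).
    state = seed
    out = []
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        out.append(state / 2**31)
    return out

seed = 42
mt = random.Random(seed)                  # Mersenne Twister, CPython's default
noise_a = [mt.random() for _ in range(4)]
noise_b = lcg(seed, 4)                    # same seed, different generator

print(noise_a == noise_b)  # False: different noise, so a diffusion model
                           # seeded this way starts from a different latent
```

Since the model maps seed noise to images, swapping the RNG changes which images are reachable, even though the model weights never changed.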

It's very simple: the possible resulting images aren't purely determined by the model, as you claimed.