That's what negative prompts are for in those image-generating AIs (I've never used DALL-E, so I have no idea whether it supports negative prompts). I guess you could have an LLM interpret a sentence like OP's and extract likely positive and negative prompts from the sentence structure, but that would always be less accurate than just specifying the two separately; a rough sketch of what that looks like in practice is below.

Because once you spend some time with those chatbot LLMs, you notice very quickly just how fucking stupid they actually are. And unfortunately, things like larger context / token sizes won't change that, and they'd scale terribly in terms of hardware anyway. When you regenerate replies a few times, you get a sense of how much guesswork they're doing and how often they completely misinterpret the previous tokens (including your last reply). So yeah, they're definitely really good at bullshitting. That can be fun, but it's absolutely not what I'd call "AI", because there's simply no intelligence behind it, and it's certainly pretty overhyped (not to say there aren't genuinely useful applications for those algorithms).
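
For what it's worth, here's a minimal sketch of what separate positive and negative prompts look like with a Stable Diffusion pipeline through Hugging Face's `diffusers` library (the checkpoint name and the prompt text are just placeholders I picked for illustration, not anything from the post):

```python
# Minimal sketch: passing a positive and a negative prompt separately,
# instead of having an LLM guess them from one natural-language sentence.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint; any SD model works
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a cozy cabin in a snowy forest, warm light",      # what you want
    negative_prompt="people, text, watermark, blurry",        # what you don't want
    num_inference_steps=30,
    guidance_scale=7.5,
)

result.images[0].save("cabin.png")
```

The point is just that the negative prompt is its own input to the model, so there's nothing for an LLM to misread, which is exactly why bolting an LLM interpreter on top of it only adds a new place for guesswork to go wrong.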