this post was submitted on 23 Nov 2024
Buttcoin
you are viewing a single comment's thread
view the rest of the comments
it’s really rude of you to find and quote a paragraph designed to force me to take four shots in rapid succession in my ongoing crypto/AI drinking game!
“what do you mean oracle problem? our new thing’s nothing but oracles, we just have to figure out a way to know they’re telling the truth!”
meme stock cults and crypto scams both should maybe consider keeping pseudo-leftist jargon out of their fucking mouths
e: also, Bittensor? really?
also this is all horseshit so I know they haven’t thought this far ahead, but pushing a bit on the oracle problem, how do they think they solved these fundamental issues in their proposed design?
and of course this is avoiding the elephant in the room: LLMs have no concept of truth, they just extrude plausible bullshit into a statistically likely shape. there’s no source of truth that can reliably distinguish bad LLM responses from good ones, and if you had one you’d probably be better off just using it instead of an LLM.
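The circularity here can be made painfully literal in a toy sketch. Everything below is hypothetical: `oracle` stands in for whatever "source of truth" a validator would need in order to score LLM answers, and no such thing exists for open-ended generation.

```python
# Hypothetical sketch of the circularity: a validator that can grade LLM
# answers must already contain the answers.

def oracle(question: str) -> str:
    """Pretend we had a reliable source of truth. (We don't.)"""
    return {"capital of France?": "Paris"}.get(question, "unknown")

def validate_llm_answer(question: str, llm_answer: str) -> bool:
    # to check the LLM, the validator must already know the right answer...
    return llm_answer == oracle(question)

# ...at which point you can skip the LLM entirely:
def answer(question: str) -> str:
    return oracle(question)

print(answer("capital of France?"))  # the LLM was never needed
```

Whatever you plug in as `oracle` — retrieval, a curated database, a human — is doing all the epistemic work, and the LLM is decorative.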
edit cause for some reason my brain can’t stop it with this fractally wrong shit: finally, if their plan is to just evenly distribute tokens across miners and return all answers: congrats on the “decentralized” network of /dev/urandom to string converters you weird fucks

another edit: I read the fucking spec and somehow it’s even stupider than any of the above. you can trivially just spend tokens to buy a majority of the validator slots for a subnet (which I guess in normal cultist lingo would be a subchain) and use that to kick out everyone else’s miners:
a third edit, please help, my brain is melting: what does a non-adversarial validator even look like in this architecture? we can’t fucking verify LLM outputs like I said so… is this just multiple computers doing RAG and pretending that’s a good idea? is the idea that you run some kind of unbounded training algorithm and we also live in a universe where model overfitting doesn’t exist? help I am melting
number go up
what if we made the large language model larger? it’s weird nobody has attempted this