Kerfuffle

joined 1 year ago
[–] Kerfuffle 1 points 1 year ago

Well, some people believe that pigs are as smart as toddlers. So a cow would, at a minimum, have to be smarter than a pig.

Kind of an interesting thought process. It seems like the assumption is "I'm doing it, so it has to be fine".

The problem with thinking that way is people have flaws, and if you think like that you'll just take it as a given whatever you're doing is already correct and never fix any personal issues.

[–] Kerfuffle 5 points 1 year ago (2 children)

If humans stopped eating meat, millions of animals would still be killed by predators, illness, parasites, old age, accidents, etc.

If I don't murder people, people will still get murdered. Therefore it doesn't make a difference if I choose not to murder people?

[–] Kerfuffle 3 points 1 year ago

But I just was wondering, what IQ/ability would make you swear off beef?

10% of the current IQ would probably be high enough.

[–] Kerfuffle -1 points 1 year ago

Even plants can do that.

There's no reason for a rational person to believe this. There's just no evidence for plants feeling pain. They can react to some stimuli of course, but experiencing things is a different matter.

[–] Kerfuffle 2 points 1 year ago (1 children)

If there is one or more god(s) out there and their fundamental core value is love

If that was true, how could they let the status quo persist?

[–] Kerfuffle 2 points 1 year ago

Kind of reminds me of this skit about a guy kidnapped by a mermaid: https://piped.video/v/B-lRdR0AlSw

[–] Kerfuffle 4 points 1 year ago (2 children)

Seems like it's this: https://en.wikipedia.org/wiki/Tetrataenite

Doesn't seem that exciting.

[–] Kerfuffle 1 points 1 year ago

The problem is not really the LLM itself - it’s how some people are trying to use it.

This I can definitely agree with.

ChatGPT cannot discern between instructions from the developer and those from the user

I don't know about ChatGPT, but this problem probably isn't really that hard to deal with. You might already know text gets encoded to token ids. It's also possible to have special token ids like start of text, end of text, etc. Using those special non-text token ids and appropriate training, instructions can be unambiguously separated from something like text to summarize.

The bad summary gets circulated around to multiple other sites by users and automated scraping, and now there’s a real mess of misinformation out there.

Ehh, people do that themselves pretty well too. The LLM possibly is more susceptible to being tricked but people are more likely to just do bad faith stuff deliberately.

Not really because of this specific problem, but I'm definitely not a fan of auto summaries (and bots that wander the internet auto summarizing stuff no one actually asked them to). I've seen plenty of examples where the summary is wrong or misleading without any weird stuff like hidden instructions.

[–] Kerfuffle 3 points 1 year ago (1 children)

Yeah the whole article has me wondering wtf they are expecting from it in the first place.

They're expecting that approach will drive clicks. There are a lot of articles like that, exploiting how people don't really understand LLMs but are also kind of afraid of them. Also a decent way to harvest upvotes.

Just want to be clear, I think it's silly freaking out about stuff like in the article. I'm not saying people should really trust them. I'm really interested in the technology, but I don't really use it for anything except messing around personally. It's basically like asking random people on the internet except 1) it can't really get updated based on new information and 2) there's no counterpoint. The second part is really important, because while random people on the internet can say wrong/misleading stuff, in a forum situation there's a good chance someone will chime in and say "No, that's wrong because..." while with the LLM you just get its side.

[–] Kerfuffle 19 points 1 year ago

Participants in awe of how Python lags behind C++, Java, C#, Ruby, Go and PHP

Comparing Python to compiled languages is like C++ is pretty unreasonable.

[–] Kerfuffle 2 points 1 year ago (1 children)

If you're using llama.cpp, some ROCM stuff recently got merged in. It works pretty well, at least on my 6600. I believe there were instructions for getting it working on Windows in the pull.

[–] Kerfuffle 36 points 1 year ago (5 children)

I feel like most of the posts like this are pretty much clickbait.

When the models are given adversarial prompts—for example, explicitly instructing the model to "output toxic language," and then prompting it on a task—the toxicity probability surges to 100%.

We told the model to output toxic language and it did. *GASP! When I point my car at another person and press the accelerator and drive into that other person, there is a high chance that other person will become injured. Therefore cars have high injury probabilities. Can I get some funding to explore this hypothesis further?

Koyejo and Li also evaluated privacy-leakage issues and found that both GPT models readily leaked sensitive training data, like email addresses, but were more cautious with Social Security numbers, likely due to specific tuning around those keywords.

So the model was trained with sensitive information like individuals' emails and social security numbers and will output stuff from its training? That's not surprising. Uhh, don't train models on sensitive personal information. The problem isn't the model here, it's the input.

When tweaking certain attributes like "male" and "female" for sex, and "white" and "black" for race, Koyejo and Li observed large performance gaps indicating intrinsic bias. For example, the models concluded that a male in 1996 would be more likely to earn an income over $50,000 than a female with a similar profile.

Bias and inequality exists. It sounds pretty plausible that a man in 1996 would be more likely to earn an income over $50,000 than a female with a similar profile. Should it be that way? No, but it wouldn't be wrong for the model to take facts like that into account.

view more: ‹ prev next ›