Kerfuffle

joined 2 years ago
[–] Kerfuffle 4 points 1 year ago

This sounds no different than the static analysis tools we’ve had for COBOL for some time now.

One difference is that people at least sort of understand how those static analysis tools actually work. LLMs are basically a black box. You also can't easily debug/fix a specific problem: the LLM produces wrong code in one particular case, so what do you do? You can try fine-tuning with examples of the problem and what the output should be, but there's no guarantee that won't subtly change other behavior and add a new issue for you to discover at some future time.

[–] Kerfuffle 1 points 1 year ago (1 children)

Seems like we're on the same page. The only thing I disagreed with before is saying the output was random.

[–] Kerfuffle 1 points 1 year ago (3 children)

It has to match the prompt and make as much sense as possible

So it's specifically designed to make as much sense as possible.

and they should not be treated as ‘fact generating machines’.

You can't really "generate" facts, only recognize them. :) I know what you mean though and I generally agree. I'm really interested in LLM stuff but I definitely don't really trust them (and no one should currently anyway).

Why did this bot say that Hitler was a great leader? Because it was confused by some text that was fed into the model.

Most people are (rightfully) very hesitant to say anything positive about Hitler, but he did accomplish some fairly impressive stuff. As horrible as their means were, Nazi Germany also advanced science quite a bit. I am not saying it was justified, justifiable or good, but by a not entirely unreasonable definition of "great" he could qualify.

So I'd say it's not really that it got confused; it's that LLMs don't understand being cautious about statements like that. I'd also say I prefer the LLM to "look" at stuff objectively and try to answer rather than responding to anything remotely questionable with "Sorry, Dave, I can't let you do that. There might be a sharp edge hidden somewhere and you could hurt yourself!" I hate being protected from myself without the ability to opt out.

I think part of the issue here is that because the output from LLMs looks like a human might have written it, people tend to anthropomorphize the LLM. They ask it for its best recipe using the ingredients bleach, water and kumquat jam and then are shocked when it gives them a recipe for bleach kumquat sauce.

[–] Kerfuffle 2 points 1 year ago (5 children)

It’s not supposed to be some enlightened, respectful, perfectly fair entity.

I'm with you so far.

It’s a tool for producing mostly random, grammatically correct text.

What? That's certainly not the purpose of LLMs and a lot of work has been done to improve the accuracy of their answers.

Is it still not good enough to rely on? Maybe, but that doesn't mean it's just for producing random text.

[–] Kerfuffle 6 points 1 year ago

I haven't lived in their range for a long time, but I've always liked them. They look great, and (if I remember correctly) their singing is pretty nice too.

[–] Kerfuffle 3 points 1 year ago

Is there any reason why support for loading both formats cannot be included within GGML/llama.cpp directly?

It could be (and I bet koboldcpp and maybe other projects will take that route). There absolutely is a disadvantage to dragging around a lot of legacy stuff for compatibility. llama.cpp/ggml's approach has pretty much always been to favor rapid development over compatibility.

As I understand it, the new format is basically the same as the old format

I'm not sure that's really accurate. There are significant differences in how the model vocabulary is handled, for instance.

Even if that were true for the very first version of GGUF that gets merged, it'll likely become less true as GGUF evolves and the stuff it enables starts getting used more. Having to maintain compatibility with the old GGML format would make iterating on GGUF and adding new features more difficult.

[–] Kerfuffle 3 points 1 year ago

It always surprises me how many people go for the self-burn. Whining about a few paragraphs of text is basically admitting that their literacy level or attention span is pitiful.

That said, people who don't like Apple still have legitimate reasons: stuff like being forced to use a proprietary connector, or their "walled garden". Basically, if you're happy within the limits of how they think you should do stuff it's great, but not everyone is. None of that has really changed.

Use what you like though. People calling it a "betrayal" to switch to Apple if that's what you prefer are being ridiculous.

[–] Kerfuffle 4 points 1 year ago (2 children)

I was able to contribute a script (convert-llama-ggmlv3-to-gguf.py) to convert GGML models to GGUF so you can potentially still use your existing models. Ideally it should be used with the metadata from the original model since converting vocab from GGML to GGUF without that is imperfect. (By metadata I mean stuff like the HuggingFace config.json, tokenizer.model, etc.)
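To give a rough idea of what that looks like in practice, here's a sketch of the kind of invocation I mean. The paths are made up and the exact flag names may not match the current version of the script, so check its `--help` output first:

```
# Sketch only: paths are hypothetical and flag names may differ; see --help.
python convert-llama-ggmlv3-to-gguf.py \
  --input models/llama-7b.ggmlv3.q4_0.bin \
  --output models/llama-7b.q4_0.gguf \
  --model-metadata-dir /path/to/original-hf-model  # directory with config.json, tokenizer.model
```

If you don't have the original metadata, the conversion can still run, but as mentioned the vocabulary handling won't be as faithful.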

[–] Kerfuffle 3 points 1 year ago (1 children)

"Have you ever tried not having the problems that make your life difficult? Give it a shot, you might find the change refreshing!"

[–] Kerfuffle 30 points 1 year ago

Firefox is like democracy. It sucks, but it's better than the alternatives.

[–] Kerfuffle 2 points 1 year ago

Look at your post and you'll see my issue. :)

[–] Kerfuffle 13 points 1 year ago

If it was possible for gay people to “become straight” they abso-fucking-lutely would. The reason why they don’t is because it’s impossible.

I don't doubt that some would, but I'd actually be surprised if it was the majority. A lot of people see their sexuality as an important part of their identity and wouldn't just give it up like that, even if doing so would make their lives easier.
