[–] Kerfuffle 1 points 1 year ago

Maybe I misunderstood you, but my point was that if it interpreted the language preferences I set in the normal config as "knowing" those languages and didn't offer translations for them, that wouldn't necessarily be what I want.

[–] Kerfuffle 1 points 1 year ago (2 children)

The languages I might want to see aren't necessarily the ones I know. People who are learning a language might set it as a preference too (I did for the language I'm learning, anyway).

[–] Kerfuffle 1 points 1 year ago (6 children)

I'm sure there's a way to disable it, even if you have to go into about:config.
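
(If it's the built-in Firefox Translations feature, my guess is that flipping `browser.translations.enable` to `false` in about:config is the relevant switch, though I haven't verified that.)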

[–] Kerfuffle 2 points 1 year ago (1 children)

Definitely very interesting, but figuring out which layers to skip is a relatively difficult problem.

I really wish they'd shown an example of the optimal layers to skip for the 13B model. As the paper notes, using the wrong combination of skipped layers can make things worse overall. So it's not just about how many layers you skip, but which ones as well.

It would also be interesting to see whether there are common patterns in which layers are most skippable. It would probably be architecture-specific, but it would be pretty useful if you could compute the optimal skip pattern for, say, a 3B model and then translate it to a 30B one with good (or at least reasonable) results.
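
To make it concrete, skipping layers at inference time is basically just bypassing blocks in the forward pass. A toy sketch (the names here are mine, not the paper's):

```python
# Toy sketch of layer skipping at inference (not the paper's code).
# `blocks` is the model's list of transformer layers and `skip` is a set
# of layer indices to bypass. Choosing `skip` is the hard part: the wrong
# combination degrades output more than skipping fewer layers would.
def forward(hidden, blocks, skip=frozenset()):
    for i, block in enumerate(blocks):
        if i in skip:
            continue  # residual stream passes through unchanged
        hidden = block(hidden)
    return hidden
```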

[–] Kerfuffle 6 points 1 year ago (1 children)

> The timing and similarity highly suggests this is a problem with how almost all software has implemented the webp standard in its image processing software.

Did you read the article or the post? The point was that both places where the vulnerability was found probably used libwebp. So it's not that there's something inherently vulnerable about handling webp, just that they both used the same library, which had a vulnerability. (Presumably the article was a little vague about the Apple side because the source wasn't open/available.)

> given that the programs processing images often have escalated privileges.

What? That sounds like a really strange thing to say. I guess one could argue it's technically true, because browsers can be considered "a program that processes images" and a browser component can end up embedded in software that runs with escalated privileges. That's kind of a special case, though; in general, the vast majority of programs that process images have no reason to run with special privileges.

[–] Kerfuffle 4 points 1 year ago

> So I have never once ever considered anything produced by a LLM as true or false, because it cannot possibly do that.

You're looking at this in an overly literal way. It's kind of like if you said:

> Actually, your program cannot possibly have a "bug". Programs are digital information, so it's ridiculous to suggest that an insect could be inside! That's clearly impossible.

"Bug", "hallucination", "lying", etc are just convenient ways to refer to things. You don't have to interpret them as the literal meaning of the word. It also doesn't require anything as sophisticated as a LLM for something like a program to "lie". Just for example, I could write a program that logs some status information. It could log that everything is fine and then immediately crash: clearly everything isn't actually fine. I might say something about the program being "lying", but this is just a way to refer to the way that what it's reporting doesn't correspond with what is factually true.

> People talk so often about how they “hallucinate”, or that they are “inaccurate”, but I think those discussions are totally irrelevant in the long term.

It's actually extremely relevant to putting LLMs to practical use, which is something people are already doing. Even for plain old text completion on something like a phone keyboard, it obviously matters whether the completions it suggests are accurate.

> So text prediction is saying when A, high probability that then B.

This is effectively the same as "knowing" that A implies B. If you get down to it, human brains don't really "know" anything either: it's just a bunch of neurons connected up, maybe reaching a potential and firing, maybe not, and so on.
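
If it helps, here's a toy version of what I mean, just counting bigrams (the data and names are made up):

```python
# Toy bigram "model": if P(B | A) counted from the data is high, the model
# behaves as if it knows "A implies B", whatever "knowing" means internally.
from collections import Counter

data = [("A", "B"), ("A", "B"), ("A", "B"), ("A", "C")]
counts = Counter(data)

total_after_a = sum(n for (first, _), n in counts.items() if first == "A")
p_b_given_a = counts[("A", "B")] / total_after_a
print(f"P(B | A) = {p_b_given_a:.2f}")  # 0.75 with this toy data
```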

(I wouldn't claim to be an expert on this subject, but I am reasonably well informed. I've written my own implementation of LLM inference and contributed to other AI-related projects as well; you can verify that via the GitHub link in my profile.)

[–] Kerfuffle 40 points 1 year ago (1 children)

"This time you're going to love Cortana. For reals!"

[–] Kerfuffle 6 points 1 year ago (1 children)

Feet are like hands we walk on. Right? Complete with a thumb and all!

[–] Kerfuffle 3 points 1 year ago

People who love to read only the title. What could be better than a bunch of titles in a row?

[–] Kerfuffle 1 points 1 year ago

> As a general statement: No, I am not.

You didn't qualify what you said originally. It either has the capability or it doesn't: you said it didn't, and it actually does.

> You’re making an over specific scenario to make it true.

Not really. It isn't that far-fetched that a company would see an artist they'd like to use, not want to pay that artist's fees, and so train an AI on the artist's portfolio so it can churn out very similar artwork. Training it on one or two images is obviously contrived, but a situation like the one I just described is very plausible.

> This entire counter argument is nothing more than being pedantic.

Except it isn't pedantic. What you said isn't accurate under the literal interpretation, and it doesn't hold under the more general interpretation either. The person higher in the thread called it stealing: in that case it wasn't, but AI models do have the capability to do what most people would call "stealing" or infringing on an artist's rights. I think recognizing that distinction is important.

> Furthermore, if I’m making such specific instructions to the AI, then I am the one who’s replicating the art.

Yes, that's kind of the point. A lot of people (me included) would be comfortable calling that sort of thing stealing or plagiarism. That's why the company in the OP took pains to say they weren't doing it.

[–] Kerfuffle 2 points 1 year ago

> It’s a briefcase full of cash.

I'm pretty sure you could just say "it's tax-free" or even double the amount to $2 million, and it wouldn't really change who would do it and who wouldn't.

I'd do it, as long as I was really convinced that the only danger was mental, not physical.

[–] Kerfuffle 2 points 1 year ago

You probably ate or drank other things that contained water. The other person didn't mean plain water specifically, just some means of hydration.
