this post was submitted on 23 Nov 2024
354 points (89.2% liked)

Technology

59669 readers
2830 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

I'm usually the one saying "AI is already as good as it's gonna get, for a long while."

This article, in contrast, quotes the folks building the next generation of AI - and they're saying the same thing.

[–] [email protected] 100 points 4 days ago (4 children)

It's absurd that some of the larger LLMs now use hundreds of billions of parameters (e.g. llama3.1 with 405B).

This doesn't really seem like a smart use of resources if you need several of the largest GPUs available just to run one conversation.

[–] 31337 6 points 2 days ago* (last edited 2 days ago) (1 children)

Larger models train faster (need less compute), for reasons not fully understood. These large models can then be used as teachers to train smaller models more efficiently. I've used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it's not too much worse than these very large models.

Lately, I've been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the "knowledge" they seem to retain.
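A quick back-of-the-envelope check on that 10.5GB figure (a sketch only; real quantized files vary a bit with the quantization scheme and embedded metadata):

```python
def model_size_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate size of a model's weights at a given precision."""
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 1e9  # decimal gigabytes

# Qwen 14B at ~6 bits per parameter:
print(round(model_size_gb(14, 6), 1))  # 10.5
```

So 14 billion parameters at 6 bits each lands almost exactly on that 10.5GB.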

[–] [email protected] 1 points 1 day ago (1 children)

I don't think Qwen was trained with distillation, was it?

It would be awesome if it was.

Also you should try Supernova Medius, which is Qwen 14B with some "distillation" from some other models.

[–] 31337 1 points 1 day ago (1 children)

Hmm. I just assumed 14B was distilled from 72B, because that's what I thought llama was doing, and that would just make sense. On further research it's not clear if llama did the traditional teacher method or just trained the smaller models on synthetic data generated from a large model. I suppose training smaller models on a larger amount of data generated by larger models is similar though. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it's Claude, lol.
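For reference, the "traditional teacher method" boils down to training the student to match the teacher's output distribution instead of hard labels. A minimal sketch of that loss in plain numpy (the logit values are made up for illustration; this is not any lab's actual training code):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.exp((logits - logits.max()) / T)
    return z / z.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence from the teacher's soft targets to the student's output."""
    p = softmax(teacher_logits, T)  # soft targets from the big model
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

teacher = np.array([4.0, 1.0, 0.5])
student = np.array([3.0, 1.5, 0.2])
loss = distillation_loss(teacher, student)  # > 0; shrinks as student matches teacher
```

Training on a big corpus of teacher-generated text (what Qwen/Llama seem to have done) gets at the same idea less directly: the student only sees the teacher's sampled tokens, not its full probability distribution.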

Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Llama 3.1 is not even a "true" distillation either, but it's kinda complicated, like you said.

Yeah Qwen undoubtedly has synthetic data lol. It's even in the base model, which isn't really their "fault" as it's presumably part of the web scrape.

[–] [email protected] 30 points 4 days ago (3 children)

I wonder how many GPUs my brain is

[–] [email protected] 65 points 4 days ago (1 children)

It's a lot. Like a lot a lot. GPUs have about 150 billion transistors, but each transistor only makes one connection, in what is essentially a 2D layout printed on silicon.

Each neuron makes thousands of connections, and there are on the order of 100 billion neurons in a blobby lump of fat and neurons that occupies 3D space. Then combine that with the fact that everything actually works through patterns of multiple neurons firing together, and you get an absurdly high number for how powerful human brains potentially are.

At this point, I'm not sure there are enough GPUs in the world to mimic what a human brain can do.
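Rough numbers behind that comparison (the synapses-per-neuron figure is a loose estimate; values in the literature range from roughly 1,000 to 10,000):

```python
NEURONS = 86e9             # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 7e3  # rough average; estimates vary widely
GPU_TRANSISTORS = 150e9    # the per-GPU figure cited above

synapses = NEURONS * SYNAPSES_PER_NEURON  # ~6e14 connections
ratio = synapses / GPU_TRANSISTORS
print(f"~{synapses:.0e} synapses, ~{ratio:.0f}x one GPU's transistor count")
```

Even this crude count (ignoring firing patterns and chemistry entirely) puts the brain's connection count thousands of times above a single GPU's transistor count.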

[–] [email protected] 22 points 3 days ago

That's also just the electrical portion of our mind. There are whole layers of chemical signalling and chemical potentials at work. Neurons will fire differently depending on the chemical soup around them. Most of our moods are chemically based, e.g. adrenaline and testosterone making us more aggressive.

Our mind also extends out of our heads. Organ transplant recipients have noted personality changes, food preferences being the most prevalent.

The neurons only deal with 'fast' thinking. 'slow' thinking is far more complex and distributed.

[–] [email protected] 20 points 4 days ago (2 children)
[–] [email protected] 3 points 3 days ago

The Answer to the Ultimate Question of Life, The Universe, and Everything

[–] [email protected] 2 points 3 days ago (1 children)
[–] [email protected] 1 points 3 days ago

You said GPUs, not CPUs and threading capabilities

[–] [email protected] 13 points 4 days ago (1 children)

I don't think your brain can be reasonably compared with an LLM, just like it can't be compared with a calculator.

[–] [email protected] 21 points 4 days ago (1 children)

LLMs are based on neural networks, which are a massively simplified model of how our brain works. So you kind of can, as long as you keep in mind they are orders of magnitude simpler.

[–] [email protected] 6 points 3 days ago (1 children)

At some point it becomes so “simplified” it’s arguably just not the same thing, even conceptually.

[–] [email protected] 1 points 2 days ago* (last edited 2 days ago)

It is conceptually the same thing. A series of interconnected neurons with a firing threshold and weighted connections.

The simplification comes with how the information is transmitted and how our brain learns.
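That "firing threshold and weighted connections" model really is just a few lines of code. A minimal artificial neuron, for illustration only (biological neurons, and for that matter production LLMs, are far more involved):

```python
def neuron(inputs, weights, threshold):
    """Fire (output 1) if the weighted input sum reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

neuron([1.0, 0.2], [0.9, 0.3], threshold=0.5)  # -> 1 (0.96 crosses 0.5)
neuron([0.1, 0.2], [0.9, 0.3], threshold=0.5)  # -> 0 (0.15 does not)
```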

Many functions in the human body rely on quantum mechanical effects to work correctly. So to simulate it properly, each connection would really need to be its own supercomputer.

But it has been shown to be able to encode information in a similar way. The learning part is not even close.

[–] [email protected] 17 points 3 days ago

Seeing as how the full unquantized FP16 for Llama 3.1 405B requires around a terabyte of VRAM (16 bits per parameter + context), I'd say way more than several.
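The arithmetic behind that terabyte figure (weights only; the KV cache for context adds more on top, and the exact total depends on context length):

```python
params = 405e9        # Llama 3.1 405B
bytes_per_param = 2   # FP16 = 16 bits per parameter
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB for the weights alone")  # 810 GB
```

At 80GB of VRAM per top-end GPU, the weights alone need 10+ cards before any context fits.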

[–] [email protected] 9 points 3 days ago

That's capitalism