this post was submitted on 14 Dec 2023

LocalLLaMA

Community to discuss LLaMA, the large language model created by Meta AI. This is intended to be a replacement for r/LocalLLaMA on Reddit.

Hi, I'm currently starting to learn how LLMs work in depth, so I started using nanoGPT to understand how to train a model, and I'd like to play around with the code a little more. I've set myself a goal: train a model that can write basic French. It doesn't need to be coherent or deep in its writing, just French with correct grammar. I only have a laptop without a proper GPU, so I can't realistically train a model with billions of parameters. Do you think this is possible without a huge dataset or intensive training? Would it be better to use something other than nanoGPT?

TL;DR: I'd like to train my own LLM on my laptop, which doesn't have a GPU. It's only for learning purposes, so my goal is just that it can write basic French. Is it doable? If it is, do you have any tips to make this easier?

top 9 comments
[–] SkySyrup 6 points 11 months ago* (last edited 11 months ago)

Sure! You’ll probably want to look at the train-text-from-scratch example in the llama.cpp project; it runs on pure CPU. The docs are admittedly sparse, but they should help, and ChatGPT is a good help too if you show it the code. nanoGPT is fine as well.
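If you go the nanoGPT route, here's a rough sketch of a CPU-friendly config. The parameter names follow nanoGPT's own config convention, but the file name and the values are my guesses, scaled down so a laptop CPU can keep up:

```python
# Hypothetical config/train_french_char.py for nanoGPT.
# Parameter names follow nanoGPT's config files; values are illustrative.
out_dir = 'out-french-char'
dataset = 'french_char'   # assumes a data/french_char/ prepared like shakespeare_char

n_layer = 4               # tiny model: a few million parameters
n_head = 4
n_embd = 128
block_size = 64           # short context keeps each training step cheap
batch_size = 12
dropout = 0.0

max_iters = 5000
lr_decay_iters = 5000
eval_iters = 20
log_interval = 1

device = 'cpu'            # no GPU available
compile = False           # torch.compile gains little on CPU
```

You'd run it with `python train.py config/train_french_char.py` after preparing your French text the way `data/shakespeare_char/prepare.py` does.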

For the dataset, you could train on French Wikipedia, or scrape a French story or fan-fiction site. Wikipedia is probably easiest, since they provide downloadable offline versions that are only a couple of gigs.
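If Wikipedia sounds good, here's one sketch for pulling plain text, assuming the Hugging Face `datasets` library and its preprocessed Wikipedia dumps (the exact config name depends on which dump date is available):

```python
from datasets import load_dataset

# Preprocessed French Wikipedia; the "20231101.fr" config name may need updating.
wiki = load_dataset("wikimedia/wikipedia", "20231101.fr", split="train")

# Dump a subset to a plain-text file; the full dump is several GB of text.
with open("fr_wiki.txt", "w", encoding="utf-8") as f:
    for article in wiki.select(range(50_000)):
        f.write(article["text"] + "\n\n")
```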

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago) (1 children)

If you want to learn machine learning, you could start by playing around with the classic examples that recognize single handwritten digits using the MNIST dataset, or something in that league.
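For instance, here's a minimal digit classifier that trains in seconds on a CPU (a sketch using scikit-learn's small built-in digits set rather than full MNIST):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale digit images: a small, MNIST-like dataset bundled with sklearn
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # typically around 0.97
```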

I think training an LLM that's somewhat useful will be way out of scope for the RAM and compute such a laptop has to offer. Maybe correct grammar, if you don't mind waiting a long, long time; something at the intelligence level of autocomplete. But definitely not coherent, intelligent, or able to answer your questions.

You could rent a VM in the cloud. Services like runpod.io or vast.ai offer a proper GPU for around $2 an hour. There are also Amazon, Google, Azure, Lambda...

[–] mixtral 1 points 11 months ago* (last edited 11 months ago)

Do cloud services see everything, i.e. the text/image data used for training and the finished trained model? If so, runpod.io and the like are no solution.

[–] blackstampede 4 points 11 months ago* (last edited 11 months ago) (1 children)

TL;DR yeah, it's doable, just slow.

You can train without a GPU, it just takes longer. More RAM and a better CPU will help, up to a point. I don't think text generation is a particularly difficult task; you could probably do it with something like a Markov chain rather than an LLM if you don't care whether the output is particularly coherent.

[–] Matburnx 2 points 11 months ago (1 children)

Well, I use my laptop as a daily driver, so training an AI in the background, even when I'm not using it, seems a bit complicated. The Markov chain seems like an interesting alternative for what I'm looking for. Do any tools for this exist, or should I build one from scratch?

[–] blackstampede 2 points 11 months ago

There are libraries that can do it. Here's one: https://pypi.org/project/PyDTMC/
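If you'd rather build one from scratch, a word-level chain is only a few dozen lines of pure Python. A minimal sketch (the corpus filename is hypothetical):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=80):
    """Random-walk the chain from a random starting state."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:                      # dead end: restart somewhere else
            state = random.choice(list(chain))
            continue
        out.append(random.choice(followers))
        state = tuple(out[-order:])
    return " ".join(out)

text = open("livres_francais.txt", encoding="utf-8").read()  # your French books
print(generate(build_chain(text)))
```

A higher `order` gives more grammatical output, at the cost of copying longer verbatim stretches from the corpus.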

[–] [email protected] 3 points 11 months ago (1 children)

I believe the answer is, unfortunately, no.

Long answer: in the past, an ML researcher trying this would have used either manual labels (for example, a dictionary of parts of speech for each word) or multiple sub-models trained to solve each sub-problem before being combined into a full prediction model, and even then performance was not great.

However, once models grew to billions of parameters, it turned out that none of this external linguistic knowledge is necessary: the model can learn it all on its own. But it takes billions to trillions of examples to learn all those weights, which means a double hit to training time: each step is slower due to more parameters, and more steps are needed to get through the full dataset.

None of these models are trainable without a cluster of GPUs to massively parallelize the training process.
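A back-of-envelope calculation shows why, using the common approximation of roughly 6 × parameters × tokens FLOPs for transformer training (the hardware numbers below are rough assumptions):

```python
params = 125e6   # a "small" GPT-2-sized model
tokens = 2.5e9   # ~20 tokens per parameter, a Chinchilla-style budget
flops = 6 * params * tokens               # ~1.9e18 FLOPs total

laptop = 5e10    # ~50 GFLOP/s, optimistic for a laptop CPU
gpu = 1e14       # ~100 TFLOP/s for a modern training GPU

print(f"laptop: {flops / laptop / 86400:.0f} days")   # ~430 days
print(f"1 GPU:  {flops / gpu / 3600:.1f} hours")      # ~5 hours
```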

That doesn't mean you can't try, but my results from training a small toy model from scratch for 20-30 hours on a consumer GPU have been underwhelming: some nearly grammatical sentences, but also a lot of garbage, repetition, and incoherence.

[–] Matburnx 1 points 11 months ago (1 children)

That seems pretty disappointing; it seemed to me like it could have been somewhat possible. I trained a 0.8M-parameter model and it was spitting out something that looked like French, though not actual French. I still need to test it, but I feel like with a few million parameters it could work. It still wouldn't be coherent, but at least it could form real sentences. Then again, I don't know much about this, so I'm surely wrong. I also think the dataset may be the issue: I didn't use a general-purpose dataset, only French books in a txt file.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

How did you determine the dataset size? If it's just a few megabytes of French books, I'm not surprised you don't get any results out of that. It also depends on how you feed it in and what parameters you choose for training and model architecture. There are scientific papers researching, for example, the dataset size needed for a given parameter count (the Chinchilla scaling-law paper, for instance).
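As a rough sanity check, assuming the Chinchilla rule of thumb of about 20 training tokens per parameter (the bytes-per-token figure below is a loose guess for French text):

```python
params = 5e6                   # a few-million-parameter toy model
tokens_needed = 20 * params    # ~1e8 tokens
bytes_per_token = 4            # rough average for French with a typical tokenizer
print(f"~{tokens_needed * bytes_per_token / 1e6:.0f} MB of raw text")  # ~400 MB
```

A few megabytes of books is orders of magnitude short of that.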

Once you've chosen a suitable dataset size, have a look at your loss graphs. Do they converge? Did you run training long enough? I suspect it would take weeks (to months?) on an (old) laptop CPU before you see any results, even at that model size.