this post was submitted on 31 Mar 2024
17 points

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


Afaik most LLMs run purely on the GPU, don't they?

So if I have an Nvidia Titan X with 12 GB of VRAM, could I plug it into my laptop and offload the work to it?

I am using Fedora, so getting the NVIDIA drivers would be... fun and already probably a dealbreaker (I wouldn't want to run proprietary drivers on my daily system).

I know that people were able to use GPUs externally via ExpressCard adapters, and this is possible with Thunderbolt too, isn't it?

The question is, how well does this work?

Or would it make more sense to use a small SoC to host a web server for the interface and do all the computing on the GPU?

I am curious about the difficulties here: an ARM SoC and proprietary drivers? A laptop over USB-C (maybe not Thunderbolt?) with a GPU just for the AI tasks...

top 7 comments
[email protected] 4 points 7 months ago

Your best bet would probably be to get a used office PC to put the card in. You'll likely have to replace the power supply and maybe swap the storage, but with how much proper external enclosures go for, the price might not be too different. Some frameworks don't support loading directly to the GPU, so make sure that you have more RAM than VRAM.
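
For what it's worth, here's a minimal sketch of that kind of offloading with llama-cpp-python, assuming a CUDA build of the library; the model path and settings are made up:

    # Sketch: partial GPU offload with llama-cpp-python (hypothetical model path).
    # The GGUF file is read/memory-mapped into system RAM first and only the
    # offloaded layers are copied to the card's VRAM -- hence "more RAM than VRAM".
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/model.gguf",  # hypothetical path
        n_gpu_layers=-1,  # -1 = try to offload all layers; lower it if 12 GB isn't enough
        n_ctx=2048,
    )
    out = llm("Q: What is an eGPU? A:", max_tokens=64)
    print(out["choices"][0]["text"])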

An ARM SoC won't work in most cases due to a lack of bandwidth and software support. The only board I know of that can do it is the Raspberry Pi 5, and that's still mostly a proof of concept.

In general I wouldn't recommend a Titan X unless you already have one, because it's been deprecated in CUDA, so getting modern libraries to work will be a pain.
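
If you already have the card, a quick way to check what you're dealing with (a sketch, assuming a CUDA build of PyTorch is installed):

    # The Maxwell-era Titan X reports compute capability (5, 2); newer CUDA
    # toolkits treat Maxwell as legacy, so prebuilt wheels may not target it.
    import torch

    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))        # e.g. "GeForce GTX TITAN X"
        print(torch.cuda.get_device_capability(0))  # e.g. (5, 2)
        print("built against CUDA", torch.version.cuda)
    else:
        print("no usable CUDA device found")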

[email protected] 3 points 7 months ago

In general I wouldn't recommend a Titan X unless you already have one, because it's been deprecated in CUDA, so getting modern libraries to work will be a pain.

Omg I spent too much on this... thanks for the heads up, that is a major fuckup

[email protected] 1 point 7 months ago

AMD cards are slowly starting to get better. Not great yet, but if you want a card that isn't Nvidia then they might be for you.

planish 4 points 7 months ago

If you try to run a model on a GPU with way more memory than the host system has, you're probably going to run into the problem that people didn't anticipate that strategy. I'm not sure many execution frameworks can go straight from disk to GPU RAM. Also, storage speed for loading the model might be an issue on an SoC that boots off e.g. an SD card.
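
If you want to see whether storage would be the bottleneck, a rough timing sketch (the file path is hypothetical):

    # Time a sequential read of the model file to estimate load throughput.
    # Run on a cold cache for a realistic number.
    import time

    path = "./models/model.gguf"  # hypothetical model file
    chunk_size = 16 * 1024 * 1024
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += len(chunk)
    elapsed = time.monotonic() - start
    print(f"{total / 1e9:.1f} GB in {elapsed:.1f} s "
          f"({total / 1e9 / max(elapsed, 1e-9):.2f} GB/s)")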

An eGPU dock should do CUDA just as well as an internal GPU, as far as I know. But you would need the drivers installed.

[email protected] 2 points 7 months ago

You could buy a Thunderbolt dock, but I'm not sure it is cost-effective. You will need a dedicated power supply and a newer laptop for best performance.

[email protected] 2 points 7 months ago (last edited 7 months ago)

I don't know of any technical problems preventing that per se. But the Thunderbolt GPU docks seem quite pricey; you can get half a PC for that money, or a whole used one. The old ExpressCard adapters used to be cheaper if I remember correctly (the ones which got some PCIe lanes out of an old ThinkPad with a flexible flat cable to a dedicated PCB). The downside was that it's just a few PCIe lanes, and that's probably also a downside with the Thunderbolt version. It'd take some time to transfer the model into the GPU's memory this way, but after that it should be fast.
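
To put a rough number on that (assumed bandwidth and sizes, not measurements):

    # Back-of-the-envelope: Thunderbolt 3/4 carries roughly PCIe 3.0 x4,
    # i.e. about 32 Gbit/s raw; assume ~80% of that is usable.
    link_gbit_s = 32
    usable = 0.8
    payload_gb = 12  # e.g. filling the Titan X's 12 GB of VRAM (hypothetical case)
    seconds = payload_gb * 8 / (link_gbit_s * usable)
    print(f"~{seconds:.1f} s to copy the weights over the link")  # about 3.8 s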

That is, if the drivers etc. work with that setup. I'm not an expert, and it's been a while since I last saw people using adapters like that. I wouldn't spend 250€ on it; that's too much compared to the price of that Nvidia card and also too much compared to other solutions. I can get a refurbished office PC for that.

Quexotic 2 points 7 months ago

I am very interested, commenting to bump!