this post was submitted on 31 Mar 2024
17 points (100.0% liked)
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
I don't know of any technical problems preventing that per se. But the Thunderbolt GPU docks seem quite pricey; you can get half a PC for that money, or a whole used one. Those old ExpressCard adapters used to be cheaper, if I remember correctly (the ones that brought a few PCIe lanes out of an old ThinkPad over a flexible flat cable to a dedicated PCB). The downside was that it's only a few PCIe lanes, and that's probably also a downside with the Thunderbolt version. It'd take some time to transfer the model into the GPU's memory that way, but after that it should be fast.
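For a rough sense of how much the narrow link costs you at load time, here's a back-of-the-envelope sketch. The bandwidth figures and model size are ballpark assumptions (roughly 1 GB/s per PCIe 3.0 lane), not measurements of any specific dock or card:

```python
# Rough estimate: time to push model weights over a narrow PCIe link
# vs. a full x16 slot. Bandwidths are approximate assumptions.
model_size_gb = 8  # e.g. a 7B model quantized to ~8 GB (assumption)

links = {
    "ExpressCard (PCIe x1)": 1.0,      # GB/s, approx.
    "Thunderbolt 3 (~PCIe x4)": 4.0,   # GB/s, approx.
    "Desktop slot (PCIe 3.0 x16)": 16.0,  # GB/s, approx.
}

for name, gb_per_s in links.items():
    seconds = model_size_gb / gb_per_s
    print(f"{name}: ~{seconds:.0f} s to load {model_size_gb} GB of weights")
```

So even over a single lane you're looking at seconds to low tens of seconds per load, which only hurts when you swap models; once the weights are resident in VRAM, inference speed is basically unaffected.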
That's assuming the drivers etc. work with that setup; I'm not an expert, and it's been a while since I last saw people using adapters like that. I wouldn't spend 250€ on it. That's too much compared to the price of that Nvidia card, and too much compared to other solutions; I can get a refurbished office PC for that.