LocalLLaMA

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

[–] [email protected] 7 points 10 months ago* (last edited 10 months ago) (2 children)

I've downloaded the 13B CodeLlama model from Hugging Face, run it on my NVIDIA 2070 via CUDA, and interfaced with it either through the terminal or LM Studio.

Usually my prompts include the specific code block and a wordy explanation of what I'm trying to do.

It's okay, but it's not as accurate as ChatGPT and tends to repeat itself a lot more.
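
For a concrete picture of the terminal side, here's a minimal sketch. It assumes llama-cpp-python and a 4-bit GGUF quant of CodeLlama-13B; the file name, layer count, and sampling settings are illustrative rather than the exact setup described above.

```python
# Minimal sketch: querying a local CodeLlama 13B through llama-cpp-python.
# Assumes a 4-bit GGUF quant downloaded from Hugging Face; the file name,
# n_gpu_layers, and sampling settings are illustrative guesses.
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-13b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=35,  # offload as many layers as fit in the 2070's 8 GB of VRAM
    n_ctx=4096,       # context window
)

# Prompt style from the comment: the specific code block plus a wordy
# explanation of what the change should do.
prompt = """[INST] Here is a Python function:

def mean(xs):
    return sum(xs) / len(xs)

It crashes on an empty list. Make it return None in that case and briefly
explain the change. [/INST]"""

out = llm(
    prompt,
    max_tokens=512,
    temperature=0.2,
    repeat_penalty=1.15,  # nudges the model away from repetition loops
)
print(out["choices"][0]["text"])
```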

For editor integration, I just opted for Codeium in Neovim. It's a pretty good alternative to Copilot, IMHO.

[–] [email protected] 2 points 10 months ago (1 children)

Why use it, though, if it's not as good and repeats itself?

[–] [email protected] 9 points 10 months ago* (last edited 10 months ago)

Because it doesn't call out to the internet. I even put LM Studio behind Firejail to make sure it can't. That way, any code I feed it (albeit pretty trivial code) never ends up in ChatGPT's training data.
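
For reference, "behind Firejail" just means launching the app inside a sandbox with no network access. A tiny sketch (the AppImage path is hypothetical; Firejail's --net=none flag is what cuts off the internet):

```python
# Sketch: launch LM Studio inside a Firejail sandbox with networking disabled.
# The AppImage path is hypothetical; point it at wherever LM Studio lives.
import subprocess

subprocess.run([
    "firejail",
    "--net=none",  # new, unconnected network namespace: no internet access
    "/home/user/Apps/LMStudio.AppImage",
])
```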

It can still produce usable results; it's just not as consistent. Whenever it gets into a repetitive loop, I restart it and reset the initial context, which generally stops the repetition, at least at first. To be fair, I've experienced the same thing with ChatGPT, just not as often.

TL;DR: It's more private and still useful.

[–] [email protected] 2 points 9 months ago (1 children)

Hugging Face has an LLM plugin for code completion in Neovim, BTW!

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

Oh nice! Got a link, for anyone who comes across this? Save me and others a search, plz?

EDIT: NM. Got it. Gonna give it a try later.

LLM powered development for Neovim: https://github.com/huggingface/llm.nvim

[–] [email protected] 2 points 9 months ago (1 children)

If you use Ollama, you can try the fork that I'm using. This is my config to make it work: https://github.com/Amzd/nvim.config/blob/main/lua/plugins/llm.lua
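
For anyone who hasn't used Ollama: the fork above ultimately just talks to Ollama's local HTTP API. A rough sketch of that call, assuming the default localhost:11434 endpoint and a codellama model already pulled (this is not taken from the linked config):

```python
# Sketch: a completion request against Ollama's local HTTP API, which is what
# an Ollama-backed editor plugin calls under the hood. Assumes the Ollama
# server is running on its default port and `ollama pull codellama:13b` is done.
import json
import urllib.request

payload = {
    "model": "codellama:13b",
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,  # return a single JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```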

[–] [email protected] 0 points 9 months ago

Nice, thanks. I'll save this post in case I use Ollama in the future. Right now I use a CodeLlama model and a MythoMax model, but I'm not running them via a localhost server, just reading the output in the terminal or LM Studio.

This looks interesting though. Thanks!