[–] [email protected] 6 points 1 day ago* (last edited 1 day ago) (3 children)

You can't practically self-host DeepSeek R1.

Look, I use the 32B distill on my 3090 every day, but it is not the same thing as full R1, and people need to stop conflating the two.

And (theoretically) API usage through one of many R1 providers is private.

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago) (2 children)

Do you know of a provider that is actually private? The few privacy policies I checked all had something like "We might keep some of your data for some time for anti-abuse or other reasons"...

[–] [email protected] 1 points 1 day ago

I mean, not with certainty. If the risk of your input leaking is that great, you can just host your own VM with the 32B to be more certain.

[–] [email protected] 0 points 1 day ago

Trust me bro, they are private

[–] jwiggler 1 points 1 day ago (1 children)

I don't really use LLMs, so I didn't even realize there were versions with different weights and stuff. I was using the 7B, but found it pretty useless. Pretty sure I'm not going to be able to run the 32B on my rig. lmao.

Guess I'll continue being an LLM-less pleb.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

There are plenty of free LLM APIs you can use with something like Open WebUI, on any machine. I still use them myself.

[–] jwiggler 1 points 1 day ago (2 children)

Have you got any recs? I've got a 3080 in my machine atm

[–] [email protected] 2 points 1 day ago

I'm not @[email protected], but here's a pretty barebones how-to article to get you started. Just know it can get as complicated as you like. For starters you may want to stick to 7B and 14B models like mistral:7b and phi4:14b, as they'll fit easily on your card and let you test the waters.

If you're on Windows https://doncharisma.org/2024/11/23/self-hosting-ollama-with-open-webui-on-windows-a-step-by-step-guide/

If you're using Linux https://linuxtldr.com/setup-ollama-and-open-webui-on-linux/

If you want a container https://github.com/open-webui/open-webui/blob/main/docker-compose.yaml
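
Once you've got Ollama running and a model pulled, a quick sanity check against its local API looks something like this (rough, untested sketch; the model tag and default port are just examples/assumptions):

```python
# Minimal sketch: ask a locally pulled Ollama model a question via its REST API.
# Assumes Ollama is listening on its default port (11434) and that you've
# already run `ollama pull mistral:7b` (the model tag is just an example).
import json
import urllib.request

payload = json.dumps({
    "model": "mistral:7b",
    "prompt": "Explain what a goroutine is in two sentences.",
    "stream": False,  # get one JSON object back instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

print(body["response"])
```

Open WebUI talks to that same local endpoint, so if this works, the rest is mostly pointing the UI at it.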

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Locally? Arcee 14B and the 14B DeepSeek distill are currently the best models that will fit.

I'd recommend hosting them with TabbyAPI instead of Ollama, as they'll be much faster and more VRAM-efficient, but that's more fuss.

Honestly, I would just try free APIs like Gemini, Groq, and such through Open WebUI, or use really cheap APIs like OpenRouter. Newer 14B models are okay, but they're definitely lacking that "encyclopedic intelligence" larger models have.
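
Most of those providers speak the OpenAI-style chat API, so wiring one up is only a few lines. Rough sketch for OpenRouter (the model slug and env var name are just examples, not a recommendation):

```python
# Rough sketch: call an OpenAI-compatible endpoint (OpenRouter here) with the
# official openai client. The model slug below is an example; check the
# provider's model list and pricing before relying on it.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # your own key, kept out of the code
)

resp = client.chat.completions.create(
    model="deepseek/deepseek-r1",  # example slug; swap for whatever the provider offers
    messages=[
        {"role": "user", "content": "Summarize the difference between R1 and its 14B distills."}
    ],
)

print(resp.choices[0].message.content)
```

Open WebUI can be pointed at the same kind of OpenAI-compatible endpoint directly, so you don't even need a script; this is just to show how little glue is involved.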

[–] [email protected] 1 points 1 day ago (2 children)

I use the 32B and the 671B side by side. The performance hit is around 20%, and I keep all my data local. I'm not conflating the two; however, self-hosting works just fine for me. Your use case is your own, certainly, but I'd rather take the performance hit for the added data privacy.

Also, it's nice to be able to set my own weights and further distill R1.

I have a local Python expert and a local Golang expert; both have access to my local GitLab repository, and I've tied their respective Ollama keys to my VSCode IDE.
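
To give a rough idea of what calling one of those "experts" looks like, here's a hedged sketch against Ollama's chat API; the model tag and system prompt are placeholders, not my exact setup:

```python
# Hedged sketch: query a local, language-specialized model through Ollama's
# /api/chat endpoint. The model tag and system prompt are placeholders.
import json
import urllib.request

def ask_expert(model: str, system_prompt: str, question: str) -> str:
    payload = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

answer = ask_expert(
    model="deepseek-r1:32b",  # placeholder tag for a distilled/specialized build
    system_prompt="You are a senior Go engineer. Answer with idiomatic Go.",
    question="How should I structure error wrapping across package boundaries?",
)
print(answer)
```

Editor plugins that can talk to an Ollama or OpenAI-style endpoint can then be pointed at the same local server, which is roughly how the VSCode tie-in works.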

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

Depends for sure. I usually try the 32B first, but give really "hard" queries to some API model.

[–] [email protected] 2 points 1 day ago

With the distilled models I have, I've been able to build and troubleshoot pretty complicated apps in Golang and Python. However, these distilled models are very specialized and will not do things like write me a story about a duck made out of duct tape or properly summarize articles. There are absolutely limits to my workflow and setup. But I'm pretty happy with it.

[–] [email protected] 1 points 1 day ago (1 children)

Have you had any luck importing even a medium-sized codebase and doing reasoning on it? All of my experiments start to show subtle errors past 2k tokens, and at 5k tokens the errors become significant. Any attempt to ingest and process a decent-sized project (say 20k SLOC plus tooling/overhead/config) has been useless, even on models that "should" have a large enough context window.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

My codebase is almost 1.2 GB of raw Python and Go files, no images. I think it's somewhere near 15k tokens for the Python codebase and 22k for Golang, due to all the .mod and .io connectors to Python libraries... it was a much bigger mess before, if you can believe it.

What size model are you using? I'm getting pretty good results with the R1 32B, but these have been distilled to be experts in the languages of the codebases. I'm not using any general models for this.

Also, it depends on the language you're targeting. Rust and Lisp have issues due to how much less documentation they have. I think code-golf type languages like Brainfuck are impossible. It really comes down to how well the language has been documented. Python gave me issues in the beginning until I specified 3.11 in my weights and distillation/training, and that fixed a lot of the hallucinations I was getting from the model.

I think statically typed languages with consistent documentation would be the easiest for this. Now that I think of it, maybe a TypeScript expert would be something I could tool around with.
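
One other thing worth checking if you're going through Ollama: the context window is a per-request option and the default is fairly small, so a repo chunk can silently get truncated. A hedged sketch of what I mean (model tag, file path, and the num_ctx value are just example placeholders):

```python
# Hedged sketch: pass a larger context window (num_ctx) when sending part of a
# codebase through Ollama. The model tag, file path, and the 16384 value are
# examples, not a tested recommendation for any particular GPU.
import json
import pathlib
import urllib.request

source = pathlib.Path("internal/server/handlers.go").read_text()  # example path

payload = json.dumps({
    "model": "deepseek-r1:32b",  # example tag
    "prompt": f"Review this file for subtle concurrency bugs:\n\n{source}",
    "stream": False,
    "options": {"num_ctx": 16384},  # raise the context window for this request
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```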

Edited for legibility and the fact that I just went and looked at my datasets again. Much bigger than I initially thought.