Xanza

joined 1 week ago
[–] [email protected] 6 points 3 days ago* (last edited 3 days ago) (4 children)

Alpine Linux meets your requirements here. It works with just about any DE/WM combo and covers all of your application requirements: Firefox, Flatpak (Bottles), and Inkscape.

It's also super lightweight and runs great on older hardware. Here's Chris Titus exploring it on his YouTube channel.

[–] [email protected] 2 points 3 days ago

This is why I have a foot mat that says "come back with a warrant, we don't willingly comply."

[–] [email protected] 1 points 3 days ago (2 children)

Only full-size racks are like that. You don't need to buy a full-size rack; you can get very small racks these days that are smaller than a little chest cooler. And why are you under the impression that you have to mount it on the wall?

[–] [email protected] 6 points 4 days ago* (last edited 4 days ago)

Jellyfin hosted on my primary PC with access to my GPU (NVIDIA GeForce RTX 4060) for transcoding. The Jellyfin libraries point at SMB shares on my NAS, and I stream everything with Jellyfin for Chromecast right from the TV.

Works amazingly well. Great transcoding times, and no lag despite the NAS only having a 10/100/1000 NIC and the Chromecast streaming over WiFi.

I manage the media library with TMM (tinymediamanager).

Super happy with it, particularly considering the only thing it cost me was the NAS (I game on my PC anyway), and I was going to buy one of those regardless.
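For reference on why gigabit isn't the bottleneck, here's a rough headroom sketch (the bitrates are typical assumed values, not measurements from my setup):

```python
# Back-of-the-envelope check of link headroom for streaming.
# Bitrates below are typical assumed values, not measurements.
link_mbps = 1000            # gigabit NIC on the NAS
streams = {
    "1080p remux": 35,      # Mbps, rough typical value
    "4K HDR remux": 80,     # Mbps, rough typical value
    "transcoded 1080p": 10, # Mbps, rough typical value
}
for name, mbps in streams.items():
    print(f"{name}: {mbps} Mbps -> ~{link_mbps // mbps} simultaneous streams on gigabit")
```

Even a 4K remux uses a small fraction of the link, so the NIC is never the limiting factor.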

[–] [email protected] -4 points 4 days ago* (last edited 3 days ago) (4 children)

What kind of hardware do you need to run with comparable responsiveness to chatgpt?

Generally you need between $8,000 and $10,000 worth of equipment to get comparable responsiveness from a self-hosted LLM.


Anyone downvoting clearly doesn't understand the hardware requirements for running an LLM with a model significant enough to rival ChatGPT. ChatGPT runs on a multi-billion-dollar AI cluster...

OP specifically asked what kind of hardware you need to run a similar AI model with the same relative responsiveness, and GPT-4 reportedly has around 1.8 trillion parameters... Why would you lie and pretend you can run a model like that on a fucking Raspberry Pi? You're living in a dream world... Even heavily quantized offline models in that class need 128 GB of RAM, which is $900-1,200 in RAM alone...
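To put some rough numbers on it, here's a quick back-of-the-envelope sketch (the parameter counts and quantization levels are illustrative assumptions, not official figures for any particular model):

```python
# Rough memory estimate just to hold model weights (ignores KV cache and runtime overhead).
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1024**3

for name, params in [("70B-class local model", 70), ("~1.8T-class frontier model", 1800)]:
    for label, bytes_per_param in [("fp16", 2.0), ("4-bit quant", 0.5)]:
        print(f"{name} @ {label}: ~{model_memory_gb(params, bytes_per_param):,.0f} GB")
```

Even at aggressive 4-bit quantization, a frontier-scale model is hundreds of gigabytes of weights before you've served a single token, which is why the hardware bill lands where it does.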

[–] [email protected] 23 points 4 days ago (1 children)

Just had my first run-in with Hexbear. Man, those guys are fuckin' stupid.

[–] [email protected] 3 points 4 days ago

Putin gonna really love this one.

[–] [email protected] 1 points 4 days ago

I'm coming to appreciate Hyper-V more and more, to be honest. It's a very mature virtualization environment. The only issue I have with it is the inability to do GPU passthrough. Once they figure that one out, I probably won't bother with anything else.

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago)

Because developers use cross-platform languages to pump out Windows executables without knowing, understanding, or caring about the Windows environment. I mean, ~/.whatever still works under Windows.
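A quick illustration of what I mean (the ~/.whatever path and file are hypothetical, nothing app-specific):

```python
# Minimal sketch: "~" expands fine on Windows too, so cross-platform code that
# writes dotfiles to ~/.whatever happily litters C:\Users\<name> with them.
from pathlib import Path

config = Path("~/.whatever").expanduser()   # e.g. C:\Users\name\.whatever on Windows
config.mkdir(parents=True, exist_ok=True)   # hypothetical app config dir
(config / "settings.toml").write_text("theme = 'dark'\n")
print(config)
```

Nothing forces the developer to use AppData, so the Unix convention just gets dumped into the user's home directory.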

[–] [email protected] 13 points 4 days ago

I guess it depends on what you mean by "edited content."

[–] [email protected] 1 points 4 days ago* (last edited 4 days ago) (3 children)

Docker is so bad. I don't think a lot of you young bloods understand that. The ecosystem is incredibly fragmented. Tools like Portainer are great, but they're a huge pain in the ass to use with software that ships a Dockerfile instead of a compose file. There's no interoperability between the two, which makes certain projects insurmountably time-consuming and stupid to deal with because they're built for one specific deployment workflow, and that's just antithetical to good computing.

Like right now, I have Portainer up and I want to test out Coolify. I check the templates? Damn, not there. Now I gotta add my own template manually. OK, cool. Halfway done. Oops, it expects a docker-compose.yml, and the Coolify repository only has a Dockerfile. Damn, now I have to make a custom template. Oh well, not a big deal. Plop in the Dockerfile from the repository and click "deploy." OOPS! ERROR: "failed to deploy a stack: service "soketi" has neither an image nor a build context specified: invalid compose project." (Every service in a compose file needs either an image: or a build: entry, and pasting a Dockerfile in gives it neither.) Well fuck... OK, whatever, not the biggest of deals. Let me search for a "soketi" image on Docker Hub. Well fuck. There are 3 images which haven't been updated in several years. Awesome. Which one do I need? The echo-server? The network-watcher? PWS?

Like, do you see the issue here? There's nothing about Docker that's straightforward at all. It fails in so many aspects it's insane that it's so popular.
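For what it's worth, when a project only ships a Dockerfile, the least-painful workaround I've found is to build and run it yourself outside the stack UI. A minimal sketch using the Docker SDK for Python (the image tag and port mapping are placeholders, not anything Coolify- or soketi-specific):

```python
# Minimal sketch: build an image straight from a repo's Dockerfile and run it,
# sidestepping the need for a docker-compose.yml. Requires the `docker` package
# (Docker SDK for Python) and a running Docker daemon.
import docker

client = docker.from_env()

# Build from the Dockerfile in the current checkout; the tag is arbitrary.
image, build_logs = client.images.build(path=".", tag="local/dockerfile-only-app:test")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Run it detached; the port mapping is a placeholder, adjust for the actual app.
container = client.containers.run("local/dockerfile-only-app:test",
                                  detach=True, ports={"8080/tcp": 8080})
print("started", container.short_id)
```

It works, but that's exactly the problem: you end up scripting around the tooling instead of the tooling just handling it.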
