this post was submitted on 24 Mar 2025
Selfhosted

I'm still running a 6th-generation Intel CPU (an i5-6600K) in my media server, with 64GB of RAM and a Quadro P1000 for the rare 1080p transcoding needs. It still runs Windows 10 from its days as a gaming PC, and I want to switch to Linux. I'm a casual Linux user on my personal machine, and I run OpenWRT on my network hardware.

Here are the few features I need:

  • MergerFS with a RAID option for drive redundancy. I currently use multiple 12TB drives with my media types split between them. I'd like one pool so I can be flexible with space between shares.
  • Docker for *arr/media downloaders/RSS feed reader/various FOSS tools and gizmos.
  • I'd like to start working with Home Assistant. Installing it under WSL hasn't worked for me, so switching to Linux seems like the best option for this.
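For the Docker bullet, a minimal docker-compose sketch of one *arr app plus a downloader might look like this. The images are the real linuxserver.io ones, but the pool paths, config directories, and choice of services are placeholders for illustration:

```yaml
# Hypothetical docker-compose.yml sketch; adjust volumes to your pool layout.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - /mnt/pool/tv:/tv              # assumed MergerFS pool mount
      - ./sonarr-config:/config
    ports:
      - "8989:8989"                   # Sonarr's default web UI port
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    volumes:
      - /mnt/pool/downloads:/downloads
      - ./qbt-config:/config
    ports:
      - "8080:8080"                   # qBittorrent's default web UI port
    restart: unless-stopped
```

The same compose file works identically whether it runs on bare Debian, OpenMediaVault, or inside a Proxmox VM/LXC, so this part of the stack doesn't depend on the distro choice.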

Guides like Perfect Media Server say that Proxmox is better than a traditional distro like Debian/Ubuntu, but I'm concerned about performance on my 6600k. Will LXCs and/or a VM for Docker push my CPU to its limits? Or should I do standard Debian or even OpenMediaVault?

I'm comfortable learning Proxmox and its intricacies, especially if I can move my Windows 10 install into a VM as a failsafe while building a storage pool with new drives.

[–] [email protected] 1 points 18 hours ago (1 children)

I prefer some of my applications to be on VMs. For example, my observability stack (ELK + Grafana), which I like to keep separate from other environments. I suppose the argument could be made that I should spin up a separate k8s cluster for that, but it's faster to deploy directly on VMs, and there are also fewer moving parts (I run two 50-node k8s clusters, so I'm not averse to containers, just saying). It's the easier and relatively secure tool for the job. Sure, I could mess with cgroups and play with kernel parameters and all of that jazz to secure k8s more, but why bother when I can make my life easier by trusting Red Hat? Also, I'm not yet running a k8s version that supports SELinux, and I tend to keep it enabled.

[–] [email protected] 1 points 17 hours ago* (last edited 17 hours ago) (1 children)

Yeah, I'm not saying everybody has to go and delete their infra; I just think that all new production environments should be k8s by default.

The production-scale Grafana LGTM stack only runs on Kubernetes, fwiw; Docker and VMs are not supported. I'm a bit surprised that Kubernetes wouldn't have enough availability for you to co-locate your general workloads and your observability stack, but it's totally fair to want to segment those workloads.

I've heard the argument that "kubernetes has more moving parts" a lot, and I think that is a misunderstanding. At a base level, all computers have infinite moving parts. QEMU has a lot of moving parts, containerd has a lot of moving parts. The reason why people use kubernetes is that all of those moving parts are automated and abstracted away to reduce the daily cognitive load for us operations folk. As an example, I don't run manual updates for minor versions in my homelab. I have a k8s CronJob that runs renovate, which goes and updates my Deployments in git, and ArgoCD automatically deploys the changes. Technically that's a lot of moving parts to use, but it saves me a lot of manual work and thinking, and turns my whole homelab into a sort of automated cloud service that I can go a month without thinking about.
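The Renovate-plus-ArgoCD pattern described above could be sketched roughly like this. The names, schedule, and repository are hypothetical, and a real setup would also need credentials for the git host mounted in:

```yaml
# Hypothetical CronJob running Renovate against the repo ArgoCD watches.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: renovate
spec:
  schedule: "0 4 * * *"          # nightly run; any cron expression works
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: renovate
              image: renovate/renovate:latest
              env:
                - name: RENOVATE_REPOSITORIES
                  value: "my-org/homelab"   # placeholder repo
          restartPolicy: OnFailure
```

Renovate bumps image tags in the manifests in git, and ArgoCD syncs the changed Deployments to the cluster, so no step in the loop is manual.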

I'm not sure if container break-out attacks are a reasonable concern for homelabs. See the relatively minor concern in the announcement I made as an Unraid employee last year when Leaky Vessels happened. Keep in mind that containerd uses cgroups under the hood.

Yeah, AppArmor/SELinux isn't very popular in the k8s space. I think it's easy enough to use them, and there's plenty of documentation out there, but OpenShift/OKD is the only distribution that runs SELinux out of the box.

[–] [email protected] 1 points 8 hours ago (1 children)

By more moving parts I mean:

Running ElasticSearch on RHEL:

  • add the repo and dnf install elasticsearch
  • check SELinux
  • write the config
  • firewall-cmd to open ports
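As a rough command sketch of those steps (9200 is Elasticsearch's default HTTP port; the repo must already be added, and paths are the package defaults):

```shell
# Sketch of the RHEL path described above; requires root and the Elastic repo.
sudo dnf install -y elasticsearch
sudo semanage port -l | grep 9200                  # check SELinux port labeling
sudoedit /etc/elasticsearch/elasticsearch.yml      # write the config
sudo firewall-cmd --add-port=9200/tcp --permanent
sudo firewall-cmd --reload
```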

In k8s:

  • grab the elasticsearch container image
  • edit variables in the manifest (we use Helm)
  • depending on whether the automatically configured Service (SVC) is good, leave it alone or edit it
  • write the VirtualService and Gateway (we use Istio)
  • firewall-cmd to open ports
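Sketched as commands, assuming Elastic's public Helm repo and the chart's default names:

```shell
# Sketch of the k8s path; needs a running cluster and helm/kubectl configured.
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch -f values.yaml
kubectl get svc elasticsearch-master     # inspect the auto-created Service
```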

Maybe it's just me, but I find option 1 easier. Maybe I'm just lazy. That's probably the overarching reason lol

[–] [email protected] 1 points 4 hours ago (1 children)

You're not using a reverse proxy on RHEL, so you'll also need to make sure the ports you want are available, set up a DNS record for it, and set up certbot.

On k8s, I believe Istio Gateways are meant to be reused across services. You're behind a reverse proxy, so the ports will already be open and there's no need for firewall-cmd. What would be wrong with the Service included in the elasticsearch chart?
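For what it's worth, that reuse pattern might look roughly like this: one shared Gateway, and a small per-service VirtualService pointing at it. Hostnames, the cert secret, and the backend Service name are placeholders:

```yaml
# Hypothetical shared Istio Gateway, reused by many services.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port: { number: 443, name: https, protocol: HTTPS }
      tls: { mode: SIMPLE, credentialName: wildcard-cert }  # placeholder secret
      hosts: ["*.example.internal"]
---
# Per-service VirtualService routing one hostname to the chart's Service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: elasticsearch
spec:
  hosts: ["es.example.internal"]
  gateways: ["shared-gateway"]
  http:
    - route:
        - destination:
            host: elasticsearch-master
            port: { number: 9200 }
```

Only the VirtualService is new work per application; the Gateway and its TLS handling are written once.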

It's also worth looking at the day 2 implications.

For backups, you're looking at bespoke cron jobs to either rsync your database or clone your entire 100GB disk image, compared to either using Velero or backing up your underlying storage.
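As a sketch, the Velero equivalent of those cron jobs is a schedule; the namespace and retention below are illustrative:

```shell
# Hypothetical Velero schedule: nightly backup of one namespace, kept 7 days.
velero schedule create daily-es \
  --schedule="0 2 * * *" \
  --include-namespaces elasticsearch \
  --ttl 168h0m0s
velero backup get     # list completed backups
```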

For updates, you need to run system updates manually on RHEL, likely requiring a full reboot of the node, while in Kubernetes, Renovate can handle rolling updates in the background with minimal downtime. Not to mention the process required to find a new repo when RHEL 11 comes out.

[–] [email protected] 1 points 3 hours ago* (last edited 3 hours ago) (1 children)

I am using a reverse proxy in production. I just didn't mention it here.

I'd have to set up a DNS record for both. I'd also have to create and rotate certs for both.

We use LVM; I simply mounted a volume for /usr/share/elasticsearch. The VMware team handles the underlying storage.

I agree about manually dealing with the repo, but I don't think I'd set up unattended upgrades for my k8s cluster either, so that's moot. Downtime is not a big deal: this is internal-only and I've got 5 nodes. I guess if I didn't use Ansible it would be a bit more legwork, but that's about it.

Overall I think we missed each other here.

[–] [email protected] 1 points 1 hour ago (1 children)

Well, my point was to explain how Kubernetes simplifies operations to the point of being simpler than most Proxmox or Ansible setups. That's especially true if you have a platform/operations team managing the cluster for you.

Some more details missed here: the external-dns and cert-manager operators usually handle the DNS records and certs for you in k8s; you just specify the hostname in the HTTPRoute/VirtualService and in the Certificate. For storage, Ansible probably simplifies some of this away, but LVM is likely more manual to set up and manage than pointing a PVC at a StorageClass and saying "100Gi".
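A minimal PVC sketch for that last point; the StorageClass name is a placeholder for whatever the cluster provides:

```yaml
# Hypothetical claim: the provisioner behind the StorageClass does the rest.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard       # placeholder StorageClass
  resources:
    requests:
      storage: 100Gi
```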

Either way, I appreciate the discussion, it's always good to compare notes on production setups. No hard feelings even in the case that we disagree on things. I'm a Red Hat Openshift consultant myself these days, working on my RHCE, so maybe we'll cross paths some day in a Red Hat environment!

[–] [email protected] 1 points 42 minutes ago

Considering I am the operations team, that just goes to show how much I have left to learn. I didn't know about the external-dns operator.

Unfortunately, my company is a bit strange with certs and won't let me handle them myself. Something to check out at home I guess.

I agree with you about LVM. I have been meaning to set up Rook forever but never got around to it. It might still take a while, but thanks for the reminder.

Wow. That must have been some work. I don't have these certs myself but I'm looking at the CKA and CKS (or whatever that's called). For sure, I loved our discussion. Thanks for your help.