
Y'all, this is gonna be super broad, and I apologize for that, but I'm pretty new to all this and am looking for advice and guidance because I'm pretty overwhelmed at the moment. Any help is very, very appreciated.

For the last ~3 years, I've been running a basic home server on an old computer. Right now, it is hosting Home Assistant, Frigate NVR, their various dependencies, and other things I use (such as zigbee2mqtt, zwave-js-ui, node-red, mosquitto, vscode, etc.).

This old server has been my "learning playground" for the last few years, as it was my very first home server and my first foray into Linux. That said, it's obviously got some shortcomings in terms of basic setup (it's probably not secure, it's definitely messy, some things don't work as I'd like, etc.). It's currently on its way out (the motherboard is slowly kicking the bucket on me), so it's time to replace it, and I kind of want to start over and do it "right" this time. Not completely over - I've got hundreds of automations in Home Assistant and Node-RED, for instance, that I don't want to completely re-write, so I intend to export/import those as needed. I think this is where I'm hung up at the moment: paralyzed by a fear of doing it "wrong" and winding up with an inefficient, insecure mess.

I want the new server to be much more robust in terms of capability, and I have a handful of things I'd really love to do: Pi-hole (though I need to buy a new router for this, so that has to come later, unless it'd save a bunch of headache to do it from the get-go), a NAS, a media server (Plex/Jellyfin), the *arr stuff, as well as plenty of new things I'd love to self-host like Trilium Notes, Tandoor or Mealie, Grocy, and backups of local PCs/phones/etc. (Nextcloud?). Obviously this part is impossible to completely cover, but I suspect the hardware (list below) should be capable?

I would love to put all my security cameras on their own subnet or VLAN or something to keep them more secure.

I need everything to be fully but securely accessible from outside the network. I've recently set up nginx for this on my current server and it works well, though I probably didn't do it 100% "right." Is something like Tailscale something I should use in conjunction with that? In place of it? Not at all?
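For reference, here's roughly what one of my current nginx site configs looks like - one reverse-proxy server block per service (simplified; the hostname and port are placeholders, and the real one listens on 443 with certbot-managed certs that I've left out for brevity):

```
# Simplified sketch of one site config (Home Assistant here).
cat > /etc/nginx/sites-available/homeassistant <<'EOF'
server {
    listen 80;
    server_name ha.example.com;

    location / {
        proxy_pass http://127.0.0.1:8123;
        proxy_set_header Host $host;
        # websocket headers, which Home Assistant needs
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
ln -s /etc/nginx/sites-available/homeassistant /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```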

I've also looked at something like Authelia for SSO, which would probably be convenient but also probably isn't entirely necessary.

Currently considering Proxmox, but then again, TrueNAS would be helpful for the storage aspect of all this. Can/should you run TrueNAS inside Proxmox? Should I be looking elsewhere entirely?

Here's the hardware for the recently-retired gaming PC I'll be using:
https://pcpartpicker.com/list/chV3jH
Also various SSDs and HDDs.

I'm in this weird place where I don't have too much room to play around because I want to get all my home automation and security stuff back up as quickly as possible, but I don't want to screw this all up.

Again, any help/advice/input at all is super, super appreciated.

[–] [email protected] 13 points 10 months ago (2 children)

My best advice is to take advantage of the fact that your old setup hasn't died yet, i.e. start now. Set up Proxmox, because it's vastly superior to TrueNAS for the more general-purpose hardware you have, and then run a more focused NAS project like Openmediavault in a Proxmox VM.
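To give you a sense of how little ceremony that is, creating the OMV VM from the Proxmox shell looks roughly like this (the VM ID, storage name, and ISO path are just examples; the web UI wizard does the same thing):

```
# Create a VM for Openmediavault: 2 cores, 8 GB RAM, a 32 GB virtual
# system disk, and the installer ISO attached as a CD-ROM.
qm create 100 --name omv-nas --memory 8192 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --cdrom local:iso/openmediavault_7.0-amd64.iso \
  --boot order='scsi0;ide2' --ostype l26
qm start 100
```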

My recommendation, from experience, would be to set up a VM for anything touching hardware directly, like a NAS or Jellyfin (if you want GPU-assisted transcoding), and I personally find it smoothest to run all my Docker containers from one dedicated Docker VM. LXCs are popular with some people, but I strongly dislike how you set hardware allocations for them, and running all your Docker containers in one LXC is just worse than doing it in a VM. My future approach will be to move to a more container-focused setup, as opposed to VM-focused Proxmox, but that's another topic.
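For the Jellyfin example, "touching hardware" in practice means PCI passthrough of the GPU into that VM, roughly like this (the PCI address and VM ID are examples from my machine; this assumes IOMMU is already enabled and the VM uses the q35 machine type):

```
# Find the GPU's PCI address on the host...
lspci -nn | grep -i vga          # e.g. 01:00.0
# ...and hand that device to the Jellyfin VM (ID 101 here).
qm set 101 -hostpci0 0000:01:00.0,pcie=1
```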

I also strongly recommend using Portainer or similar to get a good overview of your containers and centralize configuration management.
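Getting it going is a one-off on the Docker VM; at the time of writing, Portainer's standard CE install is:

```
# Persistent volume for Portainer's own data, then the container itself,
# with access to the Docker socket so it can manage the other containers.
docker volume create portainer_data
docker run -d -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```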

As for external access, all I can say is: do be careful. Direct internet exposure is likely a really bad idea unless you know what you're doing and trust the project you expose. Hiding access behind a VPN is fairly easy if your router has a VPN server built in, and if it doesn't, WireGuard (which tools like Netbird and Tailscale build on) is great; Cloudflare Tunnels are another option.
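If you end up rolling plain WireGuard yourself, a bare-bones server is only a few lines. A minimal sketch (every key and address below is a placeholder):

```
# Generate the server keypair.
umask 077
wg genkey | tee server.key | wg pubkey > server.pub

# Minimal server config; add one [Peer] block per device you let in.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
PublicKey = <client public key>
AllowedIPs = 10.8.0.2/32
EOF

systemctl enable --now wg-quick@wg0
# Then forward UDP 51820 to this host on the router.
```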

As for authentication, it's pretty tricky, but well worth it and imo needed if you want to expose stuff to friends/family. I recommend Authentik over the other alternatives.

[–] [email protected] 3 points 10 months ago (1 children)

I like the advice to use a VM for anything specifically touching hardware. I think I'll run with that. Thank you! External access is tricky, I know, and doing it securely and safely is really paramount for me. This is the one thing that's keeping me from just "jumping in" with things. I don't want to mess that part up.

[–] [email protected] 3 points 10 months ago (1 children)

Well, the good part there is that you can build everything for internal use and then add external access and security later. While VLAN segmentation and an overall secure/zero-trust architecture are of course great, they're very much overkill for a self-hosted environment unless there's an additional purpose, like learning for work, or you find it fun. The important thing really is the shell protection: that nothing gets in. All the other stuff is about limiting potential damage if someone does get in (and in the corporate world it's not "if," it's "when," because with hundreds of users you always have people being sloppy with their passwords, MFA, devices, etc.). That's where secure architecture is important, not in the homelab.

[–] [email protected] 2 points 10 months ago

It's true that the most important part is just to keep the outside... out. I'd love to learn more intricate/advanced network setups and security too. I do work in IT, and knowing this stuff certainly wouldn't be bad on my resume; I've actually always been interested in learning it regardless. But perhaps you make a good point that I can secure it from the outside and get things functional, then work on further optimization down the line. Makes things a little less daunting, haha.

[–] atzanteol 0 points 10 months ago (1 children)

Why would you virtualize a file server? You want direct access to the disks for RAID and RAID-like things.

[–] [email protected] 1 points 10 months ago (1 children)

There are absolutely no issues whatsoever with passing hardware directly through to a VM. And virtualizing is good because we don't want to "waste" a whole machine on just a file server. Sure, dedicated NAS hardware has some upsides in terms of ease of use, but you also pay an, imo, ridiculous premium for that ease. I run my OMV NAS as a VM on 2 cores and 8 GB of RAM (with four hard drives), but you can make do perfectly fine on 1 core and 2 GB of RAM if you want and don't have too many devices attached or do too many IOPS-intensive tasks.
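For what it's worth, here's how I map the drives in (the VM ID and disk names are examples): pass them through by their stable /dev/disk/by-id names, so the mapping survives reboots and moves to new hardware.

```
# List the stable disk IDs on the Proxmox host.
ls -l /dev/disk/by-id/ | grep -v part
# Attach two physical drives to VM 100 as additional SCSI disks.
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL1
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL2
```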

[–] atzanteol 1 points 10 months ago (1 children)

And virtualizing is good because we don't want to "waste" a whole machine on just a file server.

Hmm. I strongly disagree. You've now made the fileserver depend on another system in order to come up - a system that many other services will also depend on and which will likely contain backups.

A dedicated system is less likely to fail, as it won't be sensitive to a bad Proxmox upgrade or some other VM exhausting system resources on the host.

You can get cheap hardware if cost is an issue.

[–] [email protected] 2 points 10 months ago (1 children)

Sure, I'm not saying it's optimal; optimal will always be dedicated hardware and redundancy in every layer. But my point is that you gain very little for quite the investment by breaking out the fileserver to dedicated hardware. It's not just CPU and RAM you need; it's also SATA ports and an enclosure. Most people self-hosting have either one or more SBCs, and if you have more than one SBC then yeah, the fileserver should be dedicated. The other common thing is having an old gaming/office PC converted to server use, and in that case, putting Proxmox on the whole server and running the NAS as a VM makes the most sense, instead of buying more hardware for very little gain.

[–] atzanteol 1 points 10 months ago (1 children)

Sure, I'm not saying it's optimal,

Question title: Starting over and doing it "right"

But my point is that you gain very little for quite the investment by breaking out the fileserver to dedicated hardware.

You gain stability - which is the single best thing you can get from a file server. It's not a glamorous job - but it's an important one.

Most people self-hosting have either one or more SBCs, and if you have more than one SBC then yeah, the fileserver should be dedicated.

When somebody new to hosting services asks what they should do, we should provide them with best practices rather than "you can run this on the microcontroller in your toaster" advice. Possible != good.

The other common thing is having an old gaming/office PC converted to server use, and in that case, putting Proxmox on the whole server and running the NAS as a VM makes the most sense, instead of buying more hardware for very little gain.

Running your NAS in a VM on Proxmox only makes good sense if you're just being cheap. I've been there! I get it. But I wouldn't tell anyone that what I was doing was a good idea, and I certainly wouldn't recommend it to others. It's a hack. Own it.

You can find old servers on eBay for ~$200. Here's the one I use for <$200. It's been running for more than a decade without trouble. Even when I mess up other systems, it's always available. When I changed over to Proxmox from how I previously managed some other systems, it was already available and running. When an upgrade on my laptop goes wrong, the backups are available on my fileserver. When a Raspberry Pi SD card dies, the backup images are available on the fileserver. It. Just. Works.

[–] [email protected] 1 points 10 months ago (1 children)

Yes, but in the post they also stated what they're working with in terms of hardware. I really dislike giving the advice "buy more stuff," because not everyone can afford to, and selfhosting often comes from a frugal place.

Still, you're absolutely not wrong, and I see value in both our opinions being featured here; this discussion we're having is a good thing.

Circling back to the VM thing though: even if I had dedicated hardware and used an old server for a NAS, I still would have virtualized it with Proxmox, if for no other reason than that it gives me mobility and an easier path to restoration if the hardware, like the motherboard, breaks.

Still, your advice to buy a used server is good and absolutely what the OP should do if they want a proper setup and have the funds.

[–] atzanteol 1 points 10 months ago (1 children)

Circling back to the VM thing though: even if I had dedicated hardware and used an old server for a NAS, I still would have virtualized it with Proxmox, if for no other reason than that it gives me mobility and an easier path to restoration if the hardware, like the motherboard, breaks.

I can see the allure. I've just had a lot more experiences where "some idiot" (cough) made changes at 2AM to an unrelated service that caused the entire fileserver, and everything else on that system, to become unavailable... That happens more often than hardware errors, in my experience. :-)

Do you have two Proxmox servers, each with enough disk space to store everything on the fileserver? And I assume off-site backups to copy back from?

If my T110 exploded I'd just buy a new machine, restore from off-site, and re-provision with Ansible scripts. But I have ~8TB of storage on my server, so just copying that to a second system is not an option. I'm not going to have a system with a spare 10TB of disk just sitting around.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (1 children)

No, the scenario a VM protects against is the T110's motherboard/CPU/PSU/etc. crapping out. Instead of having to restore from off-site, I can move the drives into another enclosure, map them the same way to the VM, and start it up. Rather than waiting on new hardware, I can have the fileserver up and running again in 30 minutes, and it's just as easy to move it onto the new server once I've sourced one.
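Concretely, on the replacement host that looks something like this (the IDs, paths, and backup name are examples; a vzdump-restored config normally keeps the by-id disk mappings, so the qm set lines are only needed if they didn't come along):

```
# Restore the VM definition from the last vzdump backup...
qmrestore /mnt/backup/vzdump-qemu-100-2024_01_20-03_00_00.vma.zst 100
# ...re-attach the physical drives by the same stable by-id names...
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL1
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL2
# ...and boot.
qm start 100
```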

And in this scenario we're only running the fileserver on the T110, but we still virtualized it with Proxmox, because then we can easily move it to new hardware without having to rebuild or migrate anything. As long as we don't fuck up the drive order or anything like that, we're fine; if we do, we're royally fucked.

[–] atzanteol 1 points 10 months ago

Ah - I question whether that would really be a 30- or even 60-minute operation. But I see what you mean.

One thing I think homegamers overlook is Ansible. If you script your setups, you can destroy and rebuild them pretty quickly - both physical systems and VMs. The only manual part is installing Debian, which is... pretty easy if we're talking about disaster recovery.
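A made-up minimal example of the idea (the host, packages, and file names are all placeholders):

```
# One playbook that rebuilds the fileserver's software state from scratch.
cat > fileserver.yml <<'EOF'
- hosts: all
  become: true
  tasks:
    - name: Install the packages the box needs
      ansible.builtin.apt:
        name: [smartmontools, nfs-kernel-server, samba]
        state: present
    - name: Lay down the samba config
      ansible.builtin.copy:
        src: files/smb.conf
        dest: /etc/samba/smb.conf
EOF
# Point it at the freshly installed box (trailing comma = inline inventory).
ansible-playbook -i 'fileserver.example.com,' fileserver.yml
```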

Also - you can still buy computers in stores. :-)