this post was submitted on 07 Jul 2023
98 points (97.1% liked)

Selfhosted

(page 2) 33 comments
[–] [email protected] 2 points 1 year ago

Would've used NixOS

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

@hogofwar
Build everything on GuixSD

[–] humancrayon 1 points 1 year ago

I would go smaller with lower-power hardware. I currently have Proxmox running on an R530 for my VMs, plus an external NAS for all my storage. I feel like I could run a few 7050 micros together with Proxmox and downsize my NAS to use fewer but higher-density disks.

Also, having a 42U rack makes me want to fill it up with UPSes and lots of backup options that could be simplified if I took the time to not Frankenstein my solutions in there. But here we are...

[–] [email protected] 1 points 1 year ago

Run the cables more neatly.

[–] [email protected] 1 points 1 year ago

Use actual NAS drives. Don't use shucked external drives; they're cheaper for a reason and not meant for 24/7 operation. Though I guess they did get me through a couple of years, and hard drive prices seem to keep falling.

[–] [email protected] 1 points 1 year ago

I built a compact NAS. While it's enough for the drives I need, even for upgrades, I only have one PCIe x4 slot, which is becoming a bit limiting. I didn't think I'd have a need for either a tape drive or a graphics card, and I have some things I want to do that require both. Well, I can only do one unless I get a different motherboard and case. That means I'm basically doing a new build, and I don't want to do either of the projects I had in mind enough to bother with that.

[–] traches 1 points 1 year ago (4 children)

The only real pain point I have is my hard drive layout. I've got a bunch of different drive sizes, which makes expanding hard without wasting space or spending a ton.

load more comments (4 replies)
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (2 children)

I already have to do it every now and then, because I insisted on buying bare-metal servers (at Scaleway) rather than VMs. These things die very abruptly, and I learnt the hard way how important backups and config management systems are.

If I had to redo EVERYTHING, I would use Terraform to provision servers and go with a "backup, automate and deploy" approach. Documentation would be a plus, but with the config management I feel like I don't need it anymore.
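
To give an idea (a minimal sketch using the Scaleway Terraform provider; the names, zone and instance type are placeholders, not my real setup):

```hcl
terraform {
  required_providers {
    scaleway = {
      source = "scaleway/scaleway"
    }
  }
}

# Credentials come from the SCW_ACCESS_KEY / SCW_SECRET_KEY environment variables.
provider "scaleway" {
  zone = "fr-par-1"
}

# One small instance; with the definition in code, replacing a dead box
# is destroy + apply instead of clicking through a console.
resource "scaleway_instance_server" "app" {
  name  = "app-01"
  type  = "DEV1-S"
  image = "ubuntu_jammy"
}
```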

Also I'd encrypt all disks.
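
For the disks, plain LUKS does the job (sketch only; /dev/sdX is a placeholder, and luksFormat wipes whatever is on the device):

```bash
cryptsetup luksFormat /dev/sdX          # prompts for a passphrase; destroys existing data
cryptsetup open /dev/sdX cryptdata      # unlocks it as /dev/mapper/cryptdata
mkfs.ext4 /dev/mapper/cryptdata         # create a filesystem on the mapped device
mount /dev/mapper/cryptdata /mnt/data
```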

[–] [email protected] 1 points 1 year ago (1 children)

> I would use Terraform to provision servers, and go with a “backup, automate and deploy” approach. Documentation would be a plus

Yeah, this is what I do. Other than my Synology, I use Terraform to provision everything locally. And all my Pi-holes are controlled by Ansible.

Also, everything is documented in Trilium.

The whole server regularly gets backed up multiple ways: one backup is encrypted, and another goes via Syncthing to my local desktop.
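
The Pi-hole side is just a small playbook along these lines (illustrative sketch; the `piholes` inventory group and the tasks are stand-ins, not my exact setup):

```yaml
# pihole.yml - run with: ansible-playbook -i inventory pihole.yml
- name: Keep Pi-hole hosts patched and blocklists fresh
  hosts: piholes
  become: true
  tasks:
    - name: Upgrade OS packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Update Pi-hole gravity blocklists
      ansible.builtin.command: pihole -g
      changed_when: false
```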

[–] [email protected] 1 points 1 year ago

Terraform is the only missing brick in my case, but that's also because I still rent real hardware :) I'm not fond of my backup system though; it works, but it's not included in the automated configuration of each service, which is not ideal IMO.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (3 children)

> Also I’d encrypt all disks.

What's the point on a rented VPS? The provider can just dump the decryption key from RAM.

> bare metal servers (at Scaleway) rather than VMs. These things die very abruptly

Had this happen to me with two Dedibox (Scaleway) servers over a few months (I had backups, so no big deal, but annoying). WTF do they do with their machines to burn through them at this rate??

load more comments (3 replies)
[–] [email protected] 1 points 1 year ago

I'd use Terraform and Ansible from the start. I'm slowly migrating my current setup to these tools, but that's obviously harder than starting from scratch. At least I did document everything in some way. That documentation plus state on the server is definitely enough to do this transition.
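
The piece that makes migrating (rather than rebuilding) possible is `terraform import`, which adopts an already-running resource into Terraform state. Roughly like this (the provider, resource address and ID here are made up for the example):

```bash
# Write the matching resource block in .tf first, then adopt the live server:
terraform import scaleway_instance_server.app fr-par-1/11111111-1111-1111-1111-111111111111
```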

[–] [email protected] 1 points 1 year ago

Probably splurge just a bit more for CMR hard drives in my ZFS pool; SMR drives can stall badly during resilvers. I've had some pretty scary moments with my current setup.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Getting a better rack. My 60 cm deep rack with a bunch of rack shelves and no cable management isn't very pretty, and moving servers around is hard.

Hardware-wise I'm mostly fine with it, although I'd use a platform with IPMI instead of AM4 for my hypervisor.

[–] [email protected] 0 points 1 year ago

I'd plan out what machines do what according to their drive sizes, rather than finding out the hard way that the one I used as a mail server only has a few GB spare. I'd certainly document what I have running; if my machine Francesco explodes one day, it'll take months to remember what was actually on it.

I'd also not risk years of data for my "NAS" on a single SSD that just stopped functioning (it's not really a true NAS, just a shitty one-terabyte drive), and I'd have a better backup plan.
