How do you handle Proxmox clusters when you only have 1 or 2 servers?

I technically have 3 servers, but I keep one offline because I don't need it 24/7 and there's no point wasting power on a server I don't need.

I believe I read somewhere that you can force Proxmox to expect a lower quorum number, but that it isn't recommended. Has anyone done this, and if so, have you run into any issues with it?

My main issue is that I want my VMs to start no matter what. For example, I had a power outage. When the servers came back online, instead of starting they waited for the vote count to reach 3 (it never will, because the third server wasn't turned on), so they just waited forever until I got home and ran

```
pvecm expected 2
```
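From what I've read, `pvecm expected` only changes the expected vote count at runtime, so it has to be rerun after every corosync restart. A more persistent option (a sketch of an approach I've seen suggested, not something the docs officially bless; the node names and address below are placeholders) is to edit `/etc/pve/corosync.conf` and strip the offline node's vote so the two running nodes are quorate on their own:

```
# /etc/pve/corosync.conf (excerpt) -- bump config_version whenever
# you edit this file, or corosync will ignore the change.
nodelist {
  node {
    name: pve3          # the node I keep powered off
    nodeid: 3
    quorum_votes: 0     # no vote, so quorum is 2 votes out of 2
    ring0_addr: 192.168.1.13
  }
  # pve1 and pve2 keep quorum_votes: 1
}
```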

[email protected] 2 points 1 year ago

I have 2 nodes and a raspberry pi as a qdevice.
I can still power off 1 node (so I have 1 node and an rpi) if I want to.
To avoid split brain, if a node can see the qdevice then it is part of the cluster. If it can't, then the node is in a degraded state.
Qdevices are only recommended in some scenarios (mainly clusters with an even number of nodes; the Proxmox docs especially suggest them for 2-node setups), though I can't remember the exact caveats off the top of my head.
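If anyone wants to copy this, the setup was roughly the following (from memory, so treat it as a sketch; the IP is a placeholder for your Pi's address):

```
# On the Raspberry Pi (the external vote arbiter):
apt install corosync-qnetd

# On each Proxmox node:
apt install corosync-qdevice

# From one cluster node, register the Pi as the QDevice:
pvecm qdevice setup 192.168.1.20

# Check that the QDevice is casting its vote:
pvecm status
```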

With 2 nodes, you can't set up a Ceph cluster (well, I don't think you can).
But you can set up High Availability and use ZFS snapshot replication on a 5-minute interval (so if your VM's host goes down, the other host can start it from a potentially outdated snapshot).
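Roughly like this from the CLI (sketched from memory; VM ID 100 and node name pve2 are just examples):

```
# Replicate VM 100's ZFS disks to the other node every 5 minutes
# (replication job IDs take the form <vmid>-<n>):
pvesr create-local-job 100-0 pve2 --schedule "*/5"

# Put the VM under HA management so the surviving node restarts it
# if its host goes down:
ha-manager add vm:100 --state started
```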

This worked for my project, as I could have a few stateless services that bounced between nodes, plus a Postgres VM with streaming replication (Postgres-level, not ZFS) and failover, which led to a decently fault-tolerant setup.
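For the Postgres side, the standby was set up more or less like this (a minimal sketch; the hostname, user, and data directory are placeholders, and automatic failover needs an external tool or manual promotion):

```
# On the primary, postgresql.conf needs at least:
#   wal_level = replica
#   max_wal_senders = 5
# plus a 'replication' entry for the standby in pg_hba.conf.

# On the standby, clone the primary; -R writes standby.signal and
# primary_conninfo so streaming replication starts automatically:
pg_basebackup -h pg-primary.lan -U replicator -D /var/lib/postgresql/15/main -R -P

# If the primary dies, promote the standby by hand:
pg_ctl promote -D /var/lib/postgresql/15/main
```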