AceBlade258

joined 1 year ago
[–] [email protected] 1 points 11 months ago

The servers use their built-in NIC's PXE to load iPXE (I still haven't figured out how to flash iPXE onto a NIC), and then iPXE loads a boot script that boots from NFS.
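For anyone curious what the chainload step looks like: the usual trick is to have the DHCP/TFTP server hand the plain NIC PXE a copy of iPXE, then hand iPXE the actual boot script. A sketch with dnsmasq (the IPs, paths, and filenames here are placeholders, not from my setup):

```
# dnsmasq sketch for chainloading iPXE from the NIC's built-in PXE.
enable-tftp
tftp-root=/var/lib/tftpboot

# iPXE sends DHCP option 175, so we can tell it apart from the stock PXE ROM.
dhcp-match=set:ipxe,175

# Plain PXE gets the iPXE binary; iPXE itself gets the boot script,
# avoiding an infinite chainload loop.
dhcp-boot=tag:!ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://192.168.1.10/boot.ipxe
```

Without the option-175 match, iPXE would be handed `undionly.kpxe` again and loop forever.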

Here is the most up-to-date version of the guide I used to learn how to NFS boot: https://www.server-world.info/en/note?os=CentOS_Stream_9&p=pxe&f=5 - this guide is for CentOS, so you will probably need to do a little more digging to find one for Debian (which is what Proxmox is built on).
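The boot script itself ends up being pretty short. A hypothetical `boot.ipxe` for an NFS root (the server IP, export path, and kernel/initrd names are made up for illustration; the real values come from however you lay out your NFS server):

```
#!ipxe
# Fetch the kernel and initrd over HTTP, then hand the kernel an NFS root.
kernel http://192.168.1.10/vmlinuz root=/dev/nfs nfsroot=192.168.1.10:/exports/hv-root ro ip=dhcp
initrd http://192.168.1.10/initrd.img
boot
```

The `root=/dev/nfs` + `nfsroot=` arguments are standard Linux kernel NFS-root parameters; the initrd needs NFS support built in for this to work.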

iPXE is the other component: https://ipxe.org/

It's worth pointing out that this was a steep learning curve for me, but I found it super worth it in the end. I have a pair of redundant identical servers that act as the "core" of my homelab, and everything else stores its shit on them.

[–] [email protected] 1 points 11 months ago (2 children)

I use NFS roots for my hypervisors, and iSCSI for the VM storage. I previously didn't have iSCSI in the mix and was just using qcow2 files on the NFS share, but that had some major performance problems when there was a lot of concurrent access to the share.
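If you want to try the iSCSI side, the Linux LIO target via `targetcli` is one common way to export block storage. A rough sketch, assuming an LVM volume; the volume group, IQNs, and initiator name are all placeholders:

```
# Hypothetical targetcli commands exporting an LVM volume as an iSCSI LUN.
targetcli /backstores/block create name=vmstore dev=/dev/vg0/vmstore
targetcli /iscsi create iqn.2024-01.lab.example:vmstore
targetcli /iscsi/iqn.2024-01.lab.example:vmstore/tpg1/luns create /backstores/block/vmstore
# Allow a specific initiator (the hypervisor) to log in.
targetcli /iscsi/iqn.2024-01.lab.example:vmstore/tpg1/acls create iqn.2024-01.lab.example:hv1
```

Raw block devices over iSCSI sidestep the qcow2-on-NFS locking/sync overhead that caused my concurrency problems.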

The hypervisors use iPXE to boot (mostly; one of them has gPXE on the NIC, so it doesn't need to chainload iPXE before the NFS boot).

In the past I also used a purely iSCSI environment, with the hypervisors using iPXE to boot from iSCSI. I moved away from it because it's easier to maintain a single NFS root for all the hypervisors for updates and the like.
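For reference, booting straight from an iSCSI LUN in iPXE is a one-liner with `sanboot`. A sketch (target IP and IQN invented for the example):

```
#!ipxe
dhcp
# iSCSI SAN URI format: iscsi:<server>:<protocol>:<port>:<lun>:<iqn>
# (empty fields take defaults)
sanboot iscsi:192.168.1.10::::iqn.2024-01.lab.example:hv1-root
```

The downside is exactly what I mentioned: every hypervisor gets its own LUN, so every OS update has to be repeated per LUN, whereas one shared NFS root is updated once.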