Spooler32

joined 10 months ago
[–] [email protected] 1 points 9 months ago

They'll catch up with me eventually.

[–] [email protected] 1 points 9 months ago

I do all of this with kubernetes. For VM workloads, I use kubevirt (which is libvirt controlled by kubernetes). It runs extremely well, is lightweight, and it's very consistent to operate.

At one point I had a lot more layers to this. As I became more competent and aware of the ecosystem, it all flattened into bare-metal kubernetes.
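
If you're curious what that looks like in practice, here's a minimal sketch of creating a KubeVirt VirtualMachine through the Kubernetes API from Python. The namespace, VM name, and container disk image are placeholder values, not anything from my actual setup:

```python
# Minimal sketch: define a KubeVirt VirtualMachine as a custom resource and
# create it via the Kubernetes API. Name, namespace, and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "vms"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace="vms", plural="virtualmachines", body=vm,
)
```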

[–] [email protected] 0 points 9 months ago

Doesn't seem like a problem unless the gitops controller is managing repositories too.

This isn't a cyclic dependency, and doesn't affect failure modes. If the git server fails, the gitops controller fails or waits.

It's a bit more of a problem if the cluster IaC itself is managed using this git instance, but that's easy enough to solve with backups or not doing that.

Since this is a home lab, none of this is problematic. Feel free to condense everything - even the backup controller. Just make sure you have a way to access data and manually intervene if it shits the bed.

[–] [email protected] 1 points 9 months ago

It would be initially, but then you only have to add one every 100GB. How annoying would it be then?

For me, not very. My data grows by 100GB every few months.

[–] [email protected] 1 points 9 months ago (3 children)

Soundproof it.

Don't fuck with the fans. Not ever. They're loud because they're small. They don't move much air when they're quiet. It's a small oven if the air moves slow.

[–] [email protected] 1 points 9 months ago

If your ambient temperature is that close to your operating temperature, you're going to have to pass a hilarious amount of air across the machine to keep it within range. So don't do it unless you're going to blow fans directly at it that are either controlled by an internal thermostat or always moving air fast enough.

Not that this is hard. One 6" inline duct blower could do this if you cowl it right.
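
To put rough numbers on "a hilarious amount of air", here's a quick sensible-heat estimate. The 500 W load and the temperature deltas below are made-up illustration values:

```python
# Back-of-the-envelope airflow estimate: the volume flow needed to carry away
# a given heat load with a given air temperature rise (sensible heat only).
RHO_AIR = 1.2        # kg/m^3, air density near sea level
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88

def required_cfm(heat_load_w: float, delta_t_c: float) -> float:
    """CFM of airflow needed so exhaust air is delta_t_c above intake."""
    m3_per_s = heat_load_w / (RHO_AIR * CP_AIR * delta_t_c)
    return m3_per_s * M3S_TO_CFM

# Hypothetical numbers: 500 W of server heat...
print(round(required_cfm(500, 5)))   # ~176 CFM with only 5 degC of headroom
print(round(required_cfm(500, 15)))  # ~59 CFM with 15 degC of headroom
```

A 6" inline duct fan is typically rated somewhere in the low hundreds of CFM, which is why one of them, cowled properly, is enough.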

[–] [email protected] 1 points 9 months ago

Make sure your thin clients support graphics processing offloading - a very nice feature of RDP that really sets it apart.

[–] [email protected] 1 points 10 months ago

A NAS is *way* too easy for even a novice to build to justify buying it as an appliance. Set up a software RAID and filesystem with LVM+XFS or ZFS or bcachefs. Install the NFS server userspace utilities. Use the in-kernel NFS server. Add a line or two of configuration to /etc/exports.

Done. That's what, fifteen minutes of work tops?
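
If you want those fifteen minutes spelled out, here's a sketch of the whole sequence driven from Python (it's really just shell commands). The disk names, volume names, mount point, export subnet, and the Debian-style package name are all placeholders for your own hardware and distro:

```python
# Sketch only: two-disk mirror -> LVM -> XFS -> in-kernel NFS export. Run as root.
import subprocess

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# 1. Software RAID + LVM + XFS (hypothetical two-disk mirror).
run("mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb")
run("pvcreate /dev/md0 && vgcreate nas /dev/md0")
run("lvcreate -l 100%FREE -n data nas")
run("mkfs.xfs /dev/nas/data && mkdir -p /srv/nas && mount /dev/nas/data /srv/nas")

# 2. In-kernel NFS server: install the userspace utilities (package name shown
#    is Debian/Ubuntu's), add one line to /etc/exports, reload the export table.
run("apt-get install -y nfs-kernel-server")
with open("/etc/exports", "a") as f:
    f.write("/srv/nas 192.168.1.0/24(rw,sync,no_subtree_check)\n")
run("exportfs -ra")
```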

[–] [email protected] 1 points 10 months ago

Hm, Docker isn't very good at this. It's good at running one process or a few related processes with a smaller init system like S6, but not good at running a full system with its own init and system handlers.

Lots of stuff will break, because it's expected that a lot of system-level initialization will be taking place. Docker is too opinionated when it comes to this, because it expects to be running on a system that has already been initialized.

Look to LXC or LXD instead. They're more appropriate for this, and while they're container systems, they behave more like virtualization than application containerization. They're also designed to be mutable, which is the main limitation you're running up against.
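
As a rough sketch of what that looks like with plain LXC (the container name, distro, and release are placeholders; LXD's `lxc launch` gets you to the same place):

```python
# Sketch: stand up a full-system container with the LXC CLI tools from Python.
import subprocess

def run(args: list[str]) -> None:
    print("+", " ".join(args))
    subprocess.run(args, check=True)

name = "full-system"  # hypothetical container name

# Create from the "download" template: a full rootfs with its own init.
run(["lxc-create", "-n", name, "-t", "download", "--",
     "--dist", "ubuntu", "--release", "jammy", "--arch", "amd64"])

# Start it (systemd boots inside the container) and check that it came up.
run(["lxc-start", "-n", name])
run(["lxc-attach", "-n", name, "--", "systemctl", "is-system-running", "--wait"])
```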

[–] [email protected] 1 points 10 months ago

NextCloud, Plex, Minio (s3, backed by Linstor), NFS CSI driver, Linstor (DRBD), Pacemaker (HA NFS server, backed by Linstor).

Linstor controls the block layer and transport, as well as providing an FS-agnostic replication layer. NFS is used for HDD-backed file storage, and is extremely simple and reliable due to the stateful cluster that controls it. S3 is used by various applications, as it's easy as hell to implement clients for. S3 has an SSD and HDD tier, as well as an AWS-replicated tier. S3 is where my backups go (Velero).
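
As an illustration of how little client code S3 needs, here's roughly what talking to a MinIO endpoint with boto3 looks like. The endpoint URL, credentials, bucket, and object names are placeholders:

```python
# A handful of boto3 lines covers upload and download against a self-hosted
# S3-compatible endpoint like MinIO.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.internal:9000",  # hypothetical MinIO endpoint
    aws_access_key_id="CHANGEME",
    aws_secret_access_key="CHANGEME",
)

s3.upload_file("backup.tar.zst", "backups", "nightly/backup.tar.zst")
s3.download_file("backups", "nightly/backup.tar.zst", "/tmp/restore.tar.zst")
```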

[–] [email protected] 1 points 10 months ago

You can find 12-bay Raspberry Pi blade chassis that fit in 2U on Thingiverse. I had a friend print me one. It's been wonderful. You can buy them too, of course.
