Selfhosted

39919 readers
230 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Any issues with the community? Report them using the report flag.

Questions? DM the mods!

founded 1 year ago
1
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2
 
 

Hello everybody, happy Monday.

I'm hoping to get a little help with my most recent self-hosting project. I've created a VM on my Proxmox instance with a 32GB disk and installed Ubuntu, Docker, and Cosmos on it. Currently I have Gitea, Home Assistant, Nextcloud, and Jellyfin installed via Cosmos.

If I want to add more services to Cosmos, I need to be able to move the containers from the VM's 32GB disk onto an NFS share mounted on the VM, which currently has something like 40TB of storage. My hope is that moving these containers will let them grow on their own terms while keeping the OS disk the same size.

Would some kind of link allow me to move the files to the NFS share while making them still appear in their current locations in the host OS (Ubuntu 24.04)? I'm not concerned about the NFS share being unavailable: it runs on the same server that virtualizes everything else, and it's configured to start before everything else, so the share should be up and running by the time the VM needs it. If anyone can see an obvious problem with that premise, though, I'd love to hear about it.
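
For reference, what's being described is usually done with a bind mount rather than a symlink: Docker keeps using its normal path while the bytes actually live on the share. A minimal sketch, assuming the NFS share is mounted at /mnt/nfs and Docker's data lives in the default /var/lib/docker (both paths are assumptions, not from the post):

sudo systemctl stop docker
sudo mv /var/lib/docker /mnt/nfs/docker
sudo mkdir /var/lib/docker
# bind mount so everything still appears at the original path
echo '/mnt/nfs/docker /var/lib/docker none bind 0 0' | sudo tee -a /etc/fstab
sudo mount /var/lib/docker
sudo systemctl start docker

One caveat worth testing first: Docker's overlay2 storage driver is known to misbehave on NFS, so it may be safer to move only named volumes or per-app data directories to the share rather than the whole data root.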

3
 
 

I'm wondering if anyone has found (free) sources of data to use for live election results, specifically the Presidential race. I've been building a map of poll results, but I'd also like to put something together to watch the race tomorrow night.

4
 
 

I currently have a home server which I use a lot and which holds a few important things, so I kindly ask for help making this setup safer.

I have an OpenWrt router on my home network with the firewall active. The only open ports are 443 (for all my services) and 853 (for DoT).

I am behind NAT, but I have IPv6, so I use a domain that points to my IPv6 address, which is how I access my server when I am not on the LAN and how I share stuff with friends.

On port 443 I have nginx acting as a reverse proxy for all my services, and on port 853 I have AdGuard Home. I use a Let's Encrypt certificate with this proxy.

Nginx, AdGuard Home, and almost all of my other services run in containers. I use rootless Podman, with pasta as the network driver. No container has "--net host", although the containers can reach host services because they have "--map-guest-addr" set, so I don't know if this is any safer than "--net host".

I have two means of accessing the server via SSH, either password+2FA or SSH key, but the SSH port is LAN-only, so I believe this is fine.

My main concern is that I have a lot of personal data on this server: some things that I access only locally, such as family photos and docs (these are literally not accessible over WAN, and I wouldn't want them to be), and some less critical things which are indeed accessible externally, such as my calendars and tasks (using CalDAV and Baikal), for example.

I run daily encrypted backups to OneDrive using restic+backrest, so if the server were to die, I believe I would be fine. But I wouldn't want anyone to actually get access to that data, although I suspect an intruder would more likely be interested in running cryptominers or something like that.

I am not concerned about DoS attacks, because I don't think I am a worthy target, and even if one were to happen, I can wait a few hours to turn the server back on.

I have heard a lot about WireGuard, but I don't really understand how it adds security: wouldn't I basically just be changing which ports I open? Or am I missing something?
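
For context on that question: the usual argument is that WireGuard silently drops any packet that doesn't authenticate against a known peer key, so a port scanner can't even tell the port is open, and the services behind it become reachable only by devices holding a valid key rather than by the whole internet. A minimal server-side sketch (keys and addresses are placeholders, not working values):

# write a minimal /etc/wireguard/wg0.conf on the server
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one peer section per device allowed in
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
EOF
sudo systemctl enable --now wg-quick@wg0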

So I was hoping we could talk about ways to improve my server's security.

5
 
 

I'm currently trying to spin up a new server stack including qBittorrent. When I launch the web UI, it asks for a login on first launch. According to the documentation, the default username is admin and the default password is adminadmin.

This did not work. There is some documentation about a randomly generated password: https://github.com/qbittorrent/qBittorrent/wiki/Web-UI-password-locked-on-qBittorrent-NO-X-(qbittorrent-nox)

Unfortunately, this requires navigating to and opening/editing system files, and it doesn't seem applicable to a Docker install. Has anyone else run into this issue? Has anyone found a working solution and would be willing to post it in detail?
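
One hedged pointer: recent qBittorrent versions print the randomly generated WebUI password to stdout at startup, so on a Docker install it should land in the container logs rather than in a file. Assuming the container is named qbittorrent:

docker logs qbittorrent 2>&1 | grep -i password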

6
 
 

I want to self-host some services, and my first question is: is there any guide, book, or whatever that you'd recommend for understanding the self-hosting world? I want to understand everything about it, but I don't know where to start, and the beginner guides I've found on the internet are very basic; they just say things like "do this, do that, and go". I want to fully understand the world of self-hosting: firewalls, dynamic DNS, mesh VPNs, how to self-host, what not to do, what precautions to take, etc. Thanks!

7
 
 

Hi everyone! I want to be able to access a folder inside the guest that corresponds to a cloud drive mounted inside the guest for security purposes. I have tried setting up a shared filesystem in virt-manager (KVM) with virtiofs (following this tutorial: https://absprog.com/post/qemu-kvm-shared-folder), but as soon as I mount the folder to make it accessible, the cloud drive gets unmounted. I guess a folder cannot have two mounts at the same time. Aliasing the folder using a bind mount and then sharing the aliased folder doesn't work either: the aliased folder is simply empty on the host.

Does anyone have an idea how I might accomplish this? Is KVM the right choice, or would something like Docker or Podman be better suited for this job? Thank you.
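
One possible culprit, offered as a guess rather than a diagnosis: a plain bind mount does not recursively include filesystems mounted below its source, and mounts created after the bind only appear if mount propagation is shared. On whichever side the cloud drive is actually mounted, the bind-alias attempt would then need something like this (paths hypothetical):

# mark propagation shared, then recursively bind the cloud drive's mountpoint
sudo mount --make-rshared /
sudo mkdir -p /srv/cloud-export
sudo mount --rbind /home/user/CloudDrive /srv/cloud-export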

8
 
 

I've been banging my head on this for a few days now, and I can't figure it out. When I start up the Immich container, I see this in docker ps:

CONTAINER ID   IMAGE                                      COMMAND                  CREATED              STATUS                        PORTS                                                   NAMES
1c496e061c5c   ghcr.io/immich-app/immich-server:release   "tini -- /bin/bash s…"   About a minute ago   Up About a minute (healthy)   2283/tcp, 0.0.0.0:2284->3001/tcp, [::]:2283->3001/tcp   immich

netstat shows that port 2283 is listening, but I cannot access http://IP_ADDRESS:2283 from a Windows, Linux, or Mac host. If I SSH in and run a browser back through that, I can't access it via localhost either. I even tried changing the port to 2284; I can see the change in the netstat and docker ps outputs, but still no luck accessing it. I also can't telnet to either port on the host. I know Immich is up because it's accessible via the SWAG reverse proxy (I've also tried bringing it up with that disabled). I don't see anything in the logs of any of the Immich containers, or in any of the host system logs, when I try to access it.
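
A few generic checks that might narrow down where the connection dies (the container name is taken from the docker ps output above):

docker port immich                  # confirm which host ports are actually mapped
curl -v http://127.0.0.1:2284/      # test from the Docker host itself
sudo iptables -L DOCKER-USER -n -v  # look for rules dropping forwarded traffic

Also possibly relevant: newer Immich releases moved the server's internal port from 3001 to 2283, so a mapping that still targets container port 3001 (as in the PORTS column above) may point at nothing. Treat that as a hunch to verify, not a confirmed diagnosis.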

All of this came about because I ran into the Cloudflare upload size limit, and for the strangest reason, it seems I can't get around it!

9
 
 

I know how RAID works and prevents data loss from disk failures. What I want to know is whether, and how easily, data can be recovered from the remaining disks of a RAID array that stopped functioning due to a RAID controller failure or a whole-system failure. Can I simply attach one of the RAID 1 disks to a desktop system and read it as easily as a USB disk? I know getting data from the other RAID types won't be that simple, but is there a way to do it without rebuilding the whole RAID system? Thanks.
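
For what it's worth, the answer depends mostly on the RAID implementation: Linux software RAID (mdadm) members can usually be assembled read-only on any Linux machine, and a RAID 1 member can often be read more or less directly, while proprietary hardware controller formats generally need a compatible controller or recovery tooling. A sketch for the software RAID case (device name hypothetical):

sudo mdadm --examine /dev/sdb            # inspect a member's RAID metadata first
sudo mdadm --assemble --scan --readonly  # assemble whatever arrays the disks describe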

10
 
 

Hello all,

This is a follow-up to my previous post: Is it a good idea to purchase refurbished HDDs off Amazon?

In this post I will share my experience purchasing refurbished hard drives and upgrading my BTRFS RAID10 array by swapping all 4 drives.

TL;DR: All 4 drives work fine. I was able to replace the drives in my array one at a time, using a USB enclosure for the data transfer!

1. Purchasing & Unboxing

After reading the replies to my previous post, I ended up purchasing 4x WD Ultrastar DC HC520 12TB hard drives from eBay (Germany). The delivery was pretty fast; I received the package within 2 days. The drives were very well packed by the seller, in a special styrofoam tray with anti-static bags.

2. Sanity check

I connected the drives to a spare computer and spun up an Ubuntu live USB to run a S.M.A.R.T. check and read the values. SMART checks and data are available from GNOME Disks (gnome-disk-utility), if you don't want to bother with the terminal. All 4 disks passed the self-check; I even ran a complete check on 2 of them overnight, and both passed without any error. More surprisingly, all 4 disks report Power-On Hours = N/A or 0. I don't think that means they are brand new; I suspect the values have been erased by the reseller. [screenshot: SMART data]

3. Backup everything !

I selected one of the 12TB drives and installed it in an external USB3 enclosure. On my PC, I formatted the drive as BTRFS with a single partition spanning the entire capacity of the disk. I then connected the (now external) drive to the NAS and transferred the entirety of my files (excluding a couple of things I don't need for sure) using rsync:

rsync -av --progress --exclude 'lost+found' --exclude 'quarantine' --exclude '.snapshots' /mnt/volume1/* /media/Backup_2024-10-12.btrfs --log-file=~/rsync_backup_20241012.log

Actually, I wanted to run the command detached, so I used the at command (not sure if this is the best method to do this, feel free to propose some alternatives):

echo "rsync -av --progress --exclude 'lost+found' --exclude 'quarantine' --exclude '.snapshots' /mnt/volume1/* /media/Backup_2024-10-12.btrfs --log-file=~/rsync_backup_20241012.log" | at 23:32

The total volume of the data is 7.6TiB; the transfer took 19 hours to complete.

4. Replacing the drives

My RAID10 array, a.k.a. volume1, is comprised of the disks sda, sdb, sdc, and sdd, all of which are 6TB drives. My NAS has only 4x SATA ports, and all of them are occupied (volume2 is an SSD connected via USB3).

m4nas:~:% lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda            8:0    1   5.5T  0 disk /mnt/volume1
sdb            8:16   1   5.5T  0 disk 
sdc            8:32   1   5.5T  0 disk 
sdd            8:48   1   5.5T  0 disk 
sde            8:64   0 111.8G  0 disk 
└─sde1         8:65   0 111.8G  0 part /mnt/volume2
sdf            8:80   0  10.9T  0 disk 
mmcblk2      179:0    0  58.2G  0 disk 
└─mmcblk2p1  179:1    0  57.6G  0 part /
mmcblk2boot0 179:32   0     4M  1 disk 
mmcblk2boot1 179:64   0     4M  1 disk 
zram0        252:0    0   1.9G  0 disk [SWAP]

According to the documentation I could find (btrfs replace - readthedocs.io; Btrfs, replace a disk - tnonline.net), the best course of action is definitely to use the built-in BTRFS replace command. From there, there are 2 methods I can use:

  1. Connect the new drives, one by one, via USB3 to run replace, then swap the disks into the drive bays
  2. Degraded mode: swap the disks one by one in the drive bays and rebuild the array

Method #1 seems faster and safer to me, so I decided to try it first. If it doesn't work, I can fall back to method #2 (which I had to do for one of the disks!).

4.a. Replace the disks one-by-one via USB

[photo: NAS setup with external drive]

I installed a blank 12TB disk in my USB enclosure and connected it to the NAS, where it shows up as sdf. Now it's time to run the replace command, as described here: Btrfs, Replacing a disk, Replacing a disk in a RAID array

sudo btrfs replace start 1 /dev/sdf /mnt/volume1

We can see the new disk is shown as ID 0 while the replace operation takes place:

m4nas:~:% btrfs filesystem show
Label: 'volume1'  uuid: 543e5c4f-4012-4204-bf28-1e4e651ce2e8
	Total devices 4 FS bytes used 7.51TiB
	devid    0 size 5.46TiB used 3.77TiB path /dev/sdf
	devid    1 size 5.46TiB used 3.77TiB path /dev/sda
	devid    2 size 5.46TiB used 3.77TiB path /dev/sdb
	devid    3 size 5.46TiB used 3.77TiB path /dev/sdc
	devid    4 size 5.46TiB used 3.77TiB path /dev/sdd

Label: 'ssd1'  uuid: 0b28580f-4a85-4650-a989-763c53934241
	Total devices 1 FS bytes used 46.78GiB
	devid    1 size 111.76GiB used 111.76GiB path /dev/sde1

It took around 15 hours to replace the disk. After it finished, I got this:

m4nas:~:% sudo btrfs replace status /mnt/volume1
Started on 19.Oct 12:22:03, finished on 20.Oct 03:05:48, 0 write errs, 0 uncorr. read errs
m4nas:~:% btrfs filesystem show                 
Label: 'volume1'  uuid: 543e5c4f-4012-4204-bf28-1e4e651ce2e8
	Total devices 4 FS bytes used 7.51TiB
	devid    1 size 5.46TiB used 3.77TiB path /dev/sdf
	devid    2 size 5.46TiB used 3.77TiB path /dev/sdb
	devid    3 size 5.46TiB used 3.77TiB path /dev/sdc
	devid    4 size 5.46TiB used 3.77TiB path /dev/sdd

Label: 'ssd1'  uuid: 0b28580f-4a85-4650-a989-763c53934241
	Total devices 1 FS bytes used 15.65GiB
	devid    1 size 111.76GiB used 111.76GiB path /dev/sde1

In the end, the swap from USB to SATA worked perfectly!

m4nas:~:% lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda            8:0    0 111.8G  0 disk 
└─sda1         8:1    0 111.8G  0 part /mnt/volume2
sdb            8:16   1  10.9T  0 disk /mnt/volume1
sdc            8:32   1   5.5T  0 disk 
sdd            8:48   1   5.5T  0 disk 
sde            8:64   1   5.5T  0 disk 
mmcblk2      179:0    0  58.2G  0 disk 
└─mmcblk2p1  179:1    0  57.6G  0 part /
mmcblk2boot0 179:32   0     4M  1 disk 
mmcblk2boot1 179:64   0     4M  1 disk 
zram0        252:0    0   1.9G  0 disk [SWAP]
zram1        252:1    0    50M  0 disk /var/log
m4nas:~:% btrfs filesystem show
Label: 'volume1'  uuid: 543e5c4f-4012-4204-bf28-1e4e651ce2e8
	Total devices 4 FS bytes used 7.51TiB
	devid    1 size 5.46TiB used 3.77TiB path /dev/sdb
	devid    2 size 5.46TiB used 3.77TiB path /dev/sdc
	devid    3 size 5.46TiB used 3.77TiB path /dev/sdd
	devid    4 size 5.46TiB used 3.77TiB path /dev/sde

Label: 'ssd1'  uuid: 0b28580f-4a85-4650-a989-763c53934241
	Total devices 1 FS bytes used 13.36GiB
	devid    1 size 111.76GiB used 89.76GiB path /dev/sda1

Note that I haven't expanded the filesystem to 12TB yet; I will do that once all the disks are replaced. The replace operation has to be repeated 3 more times, taking great care each time to select the correct disk ID (2, 3, and 4) and replacement device (e.g. /dev/sdf).

4.b. Issue with replacing disk 2

While replacing disk 2, a problem occurred: the replace operation stopped progressing, despite not reporting any errors. After waiting a couple of hours and confirming it was stuck, I did something reckless that caused me a great deal of trouble later. To kick-start the replace operation, I unplugged the power from the USB enclosure and plugged it back in (DO NOT DO THAT!). It seemed to work, and the transfer started to progress again, but once it completed, the RAID array was broken and the NAS wouldn't boot anymore. (I will only cover the things relevant to the disk replacement and skip all the stupid things I did that made the situation worse; it took me a good 3 days to recover and get back on track...)

I had to forget and remove from the RAID array both drive ID=2 (the drive being replaced) and drive ID=0 (the 'new' drive) in order to mount the array in degraded mode and start the replace operation over with method #2. In the end it worked, and the 12TB drive is fully functional. I suppose the USB enclosure is not the most reliable, but the next 2 replacements worked just fine, like the first one.

What I should have done: abort the replace operation and start over.
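
If I read the btrfs docs right, that abort is a single command (shown as a sketch, since I never got to test it in anger):

sudo btrfs replace cancel /mnt/volume1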

4.c. Extend volume to complete drives

Now that all 4 drives in my RAID array are upgraded to 12TB, I extend the filesystem to use all of the available space:

sudo btrfs filesystem resize 1:max /mnt/volume1
sudo btrfs filesystem resize 2:max /mnt/volume1
sudo btrfs filesystem resize 3:max /mnt/volume1
sudo btrfs filesystem resize 4:max /mnt/volume1

5. Always keep a full backup !

Earlier, I mentioned using one of the 'new' 12TB drives as a backup of my data. Before using it in the NAS, and therefore erasing that backup, I installed 2 of the old drives in my spare computer and once again did a full copy of my NAS data using rsync over the network. This took a long while again, but I wouldn't skip this step!

6. Conclusion: what did I learn ?

  1. Buying and using refurbished drives was very easy, and the savings are great! I saved approximately 40% compared to the new price. Only time will tell if this was a good deal; I hope to get at least 4 more years out of these drives. That's my goal at least...
  2. Replacing HDDs via a USB3 enclosure is possible with BTRFS; it worked 3 times out of 4! 😭
  3. Serial debugging is my new best friend! I didn't detail this part in the post; let's just say my NAS is a somewhat exotic NanoPi M4V2, and I couldn't have unborked my system without a functioning UART adapter. The one I already had on hand didn't work correctly, so I had to buy a new one. And all the things I did (blindly) to try to fix my system were pointless and wrong.

I hope this post can be useful to someone in the future, or at least was interesting to some of you !

11
12
 
 

Last night I was writing a script, and it made a directory literally named "~" by accident. It being 3 AM, I ran rm -rf ~ without thinking and destroyed my home dir. Luckily, some of the files were mounted in Docker containers which my user didn't have permission to delete, so I was able to get back to an OK state, but I lost a bit of data.

I now realize I really should be making backups, because shit happens. I self-host a PyPI repository and a Docker registry, both in containers, plus some game servers in and out of containers. What would be the simplest tool to back up to Google Drive and easily restore?
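
Since restic comes up elsewhere in this thread, one hedged option: restic can write to Google Drive through an rclone remote. Assuming a remote named gdrive has already been configured with rclone config, and with placeholder paths:

restic -r rclone:gdrive:backups init                                  # create the encrypted repository
restic -r rclone:gdrive:backups backup /srv/pypi /srv/registry        # deduplicated snapshot
restic -r rclone:gdrive:backups restore latest --target /srv/restore  # restore is one command too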

13
14
 
 

The price seems pretty good. I don't really know much about mini PCs. Do you think there is a better alternative?

Update: ok, not price efficient. Noted 👍

15
 
 

Hello,

Just spent a good week installing my home server. Time to pause, look back at what I've set up, and ask for your help/suggestions, as I am wondering whether my configuration below is a good approach or just a uselessly convoluted one.

I have a Proxmox instance with 3 VLANs:

  • Management (192.168.1.x): the VLAN used by the Proxmox host, which can access all other VLANs

  • Servarr (192.168.100.x): all the *arr software plus Jellyfin (all LXCs). All outbound connectivity goes via VPN. Can't access any other VLAN

  • myCloud (192.168.200.x): WIP, but basically planning to host things like Nextcloud, Immich, Paperless, etc.

The original idea was to allow external access via a Cloudflare tunnel, but I finally decided to switch back to Tailscale for "myCloud" access (as I expect to share this with fewer than 5 accounts). So:

  • myCloud now has Tailscale running on it.
  • myCloud can now access Servarr VLAN

As a consequence of choosing Tailscale, I now had to run a DNS server to resolve mydomain.com:

  • Servarr now has Pi-hole as a DNS server, reachable across all VLANs (see the sketch below)
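
For anyone reproducing this, Pi-hole's local DNS records make the split-DNS part a one-liner per service. A sketch with hypothetical names and addresses:

echo '192.168.200.10 nextcloud.mydomain.com' | sudo tee -a /etc/pihole/custom.list
pihole restartdns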

On top of all that, I have yet another VLAN for my Raspberry Pi running Vaultwarden, reachable only via my personal Tailscale account.

I'm open to restarting things from scratch (it's fun), so let me know.

I'm also wondering whether using LXCs is better than Docker, especially when it comes to updates and longer-term maintenance.

16
 
 

Google pushed their AI Overview onto my country last night, and that finally gave me the push to change search engines.

One thing I did find useful was having product prices displayed in the search result headers, but this doesn't appear to be available in any other engine. I used it to quickly scan between retailers, as not everything shows up in PriceSpy or PriceMe.

I deployed a SearXNG instance this morning and have heard that you can use JSON to modify how results are presented. Does anyone know if it's possible to use that to display prices?

17
 
 

Hi folks, I know many of you are elite sysadmins running custom-built NAS solutions, networked together with servers tucked into every spare closet and space in your home, which is awesome. That said, I am still newer to my self-hosting journey, and my existing knowledge comes more from running Linux as a daily-driver OS since 2005 than from actually hosting anything. For this reason, even though it's not ideologically pure, I opted for a Synology NAS for simplicity of management. This was the next step for me after dipping my toes into self-hosting by messing around with some VMs and an old laptop.

With the new DSM update, Synology removes several apps and codec support, most notably H.265. I experienced something similar on Linux, where I cannot view videos recorded on my action cam. I don't know how many of these photos and videos I have in my file system, but my NAS is local-network only and basically contains my photos, videos, ebooks, documents, etc. in separate shares, each with a hierarchical folder structure.

My questions:

  1. How can I most easily search my NAS for files needing the removed codecs, so I can gauge how much this will actually affect me? I want to approach the problem in a simple way that I can understand. (See the sketch after this list.)
  2. With Linux and Synology DSM both dropping codecs, I am considering just taking the storage hit and converting to H.264 or another format. What would you recommend? I haven't re-encoded video in ages, so I'm learning from scratch, but I do have a desktop with dual 1080s that should be up to the task.
  3. I access my shares via Dolphin on KDE. When it comes to thumbnails for a remote filesystem like this, are they generated and stored on my PC, or will the PC save them to the folder on the NAS where other programs could use them? I just want to make sure I can visually browse the videos and photos on my NAS and have them show up appropriately.
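
On question 1, one hedged approach from any machine with the shares mounted: loop over the video files with ffprobe and report which ones use HEVC/H.265 (requires ffmpeg/ffprobe; the share path is a placeholder):

find /mnt/nas/videos -type f \( -iname '*.mp4' -o -iname '*.mkv' -o -iname '*.mov' \) -print0 |
while IFS= read -r -d '' f; do
  codec=$(ffprobe -v error -select_streams v:0 \
          -show_entries stream=codec_name -of default=nw=1:nk=1 "$f")
  [ "$codec" = "hevc" ] && printf '%s\n' "$f"
done

On question 2, a per-file re-encode to H.264 would look something like ffmpeg -i in.mkv -c:v libx264 -crf 18 -preset slow -c:a copy out.mkv, at the cost of larger files and a small generational quality loss.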

I'm a bit frustrated and kind of favoring just moving things to a different format. I bought a Synology device for an easier experience, but then again, even if I built a custom solution, didn't Debian remove H.265 support as well? I will probably do a TrueNAS build or whatever at some point, but I've had way too many family events in the last few years and have to take the easier path right now.

My Linux knowledge is intermediate and my self-hosting knowledge is still fairly basic.

18
 
 

After almost 3 years of work, I've finally managed to get this project stable enough to release an alpha version!

I'm proud to present Managarr - A TUI and CLI for managing your Servarr instances! At the moment, the alpha version only supports Radarr.

Not all features are implemented in the alpha version, like managing quality profiles or quality definitions, etc.

Here are some screenshots of the TUI:

Additionally, you can use it as a CLI for Radarr. For example, to search for a new film:

managarr radarr search-new-movie --query "star wars"

Or you can add a new movie by its TMDB ID:

managarr radarr add movie --tmdb-id 1895 --root-folder-path /nfs/movies --quality-profile-id 1

All features available in the TUI are also available via the CLI.

19
 
 

A long, long time ago, I bought a domain or two and a shared hosting plan from Dreamhost with unlimited bandwidth/storage. I don't have root access and can't run containers on it. It's been useful for a Piwigo instance to share scanned family photos. The problem I have is that the limited resources really hamper Piwigo's ability to handle the large TIF files involved in the archival scans. There are ways around this, but they all add time to a workflow that already eats into my free time enough. I'm looking at moving Piwigo to my local server, which has plenty of available resources. That leaves me with little reason to keep the Dreamhost space. So what's a decent use case for cheap, shared hosting space anymore?

To be clear, I'm not looking for suggestions to move to a cheap VPS. I've looked into them and might use one in the future, but I don't need one right now. The shared hosting costs about $10.99/month at the moment. If there were a way I could leverage the unlimited bandwidth/storage as an offsite backup, that would be amazing, but I'm not sure it would be a great idea to back up stuff to a webserver where the best security I can add is via an .htaccess file.

20
 
 

These small, handy-dandy devices seem to get more and more popular. Has anyone here chipped in for a JetKVM yet? It looks and sounds pretty solid. Are there a lot of you who have acquired a nanoKVM?

21
 
 

Hiya, I am looking into a few different services to better manage my finances, and among the most highly recommended is ActualBudget. ActualBudget itself is open source and private; however, to get the most out of the service, you may connect it to your bank via a third-party service. Has anyone here actually done this? The service (for EU folks) is called GoCardless. This, however, is ringing many alarm bells for me.

Here is the screenshot showing the message before connecting to my bank:

Here is GoCardless's list of partners/suppliers:

https://assets.ctfassets.net/40w0m41bmydz/6Mg3PGztGEQh11N3MNRmYc/1f186cf883151ca04b9c71c23b5ee4d3/GoCardless_material_supplier_list_v2024.09.pdf

I assume there is no private alternative that allows you to connect your bank to ActualBudget or another service; if there is, please let me know! Managing finances would be so much more convenient if it all synced automatically into a self-hosted service.

Let me know how you manage your finances :)

22
 
 

I use Crafty Controller for Minecraft. I have a server running at 192.168.50.16:25540, and I want minecraft.example.com to resolve to it. I have Nginx Proxy Manager set up for my domain, and I can already reach the server from inside my network, but it'd be nice to be able to use a domain instead.

NPM only has options for HTTP and HTTPS, so is this even possible using NPM?

EDIT: this is for internal access only; I have external access via Tailscale.
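
Two hedged pointers: NPM does have a Streams tab for raw TCP/UDP forwarding, separate from its HTTP hosts, and Minecraft Java clients also look up DNS SRV records, which lets a bare domain carry a non-standard port with no proxy in the path at all. A hypothetical record for this setup:

; send minecraft.example.com clients to port 25540 on the host behind server.example.com
_minecraft._tcp.minecraft.example.com. 300 IN SRV 0 5 25540 server.example.com.

Note the SRV target must be a hostname with an A record (here, one resolving to the server's LAN IP), not a bare IP address.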

23
 
 

This is a quite popular repo of scripts used by the self-hosting community, so I think it's worth sharing here. Unfortunately, it comes with saddening news about tteck's health. I wish him the best, and that he enjoys his well-deserved rest in peace.

Dear Community,

I wanted to share a personal update. I’ve recently transitioned into hospice care and, as a result, will be slowing down the development of this project. While I’m grateful for the progress we’ve made together, I recognize that I’ll be taking a step back for some rest and reflection during this time.

Thank you for your continued support, encouragement, and understanding. Your dedication to the community and this project means the world to me, and I am grateful for each of you.

Warm regards,

tteck/tteckster

24
 
 

Hi, I have a home server (basically a NAS) currently running Debian. Its configuration is as follows:

  • a Debian host running 3 VMs

  • Debian running inside each VM as a Docker host

I just manually installed KVM on the host, then Docker on each VM after creating it. I documented the process, so I know how to replicate it in case I need to rebuild.

I now dream of being able to automate the rebuild process using config files. I know this is done using Ansible.

But I've now heard of Talos (a thin OS layer for Kubernetes) and I'm intrigued. But I suppose I'd still need a setup for the VM host itself to achieve automation through config files.

What setup are you guys using?

Thank you.


Thanks for all your suggestions! I've chosen to go with just bash scripting (given my simple setup) and to keep the setup as it is. Just gotta learn bash and virsh :)
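
For the bash-plus-virsh route, a rebuild script can stay quite small. A minimal sketch, with the template image path and VM sizing as assumptions:

#!/usr/bin/env bash
# clone a Debian template disk and define a new VM around it
set -euo pipefail
NAME="$1"
IMG_DIR=/var/lib/libvirt/images
cp "$IMG_DIR/debian-template.qcow2" "$IMG_DIR/${NAME}.qcow2"
virt-install --name "$NAME" --memory 4096 --vcpus 2 \
  --disk "$IMG_DIR/${NAME}.qcow2" \
  --import --os-variant debian12 --noautoconsole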

25
 
 

Hi self-hosters, we're building a self-hostable, MIT-licensed alternative to Klaviyo, Braze, Mailchimp, etc. You can automate email, SMS, WhatsApp, and lots of other channels.

The core functionality of the platform includes a user segmentation builder, a low-code email template editor, and a low-code drag-and-drop journey builder for creating automated messaging workflows. We also have subscription groups to manage unsubscribes.

Link to repo: https://github.com/dittofeed/dittofeed

If you need any help with deploying an instance, reach out on Discord! https://discord.gg/HajPkCG4Mm
