wittless

joined 1 year ago
[–] [email protected] 2 points 1 year ago

My primary laptop is a Mac, but my main server is a Proxmox host running many containers for various things. One of them is a Samba server that holds my Time Machine backups.
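For anyone wanting to do the same: Samba can advertise itself as a Time Machine destination via the vfs_fruit module. The poster doesn't share their config, so this is just a minimal share sketch with a hypothetical path, user, and size cap:

```ini
; smb.conf share sketch — path, user, and max size are placeholders
[timemachine]
    path = /tank/timemachine
    valid users = backupuser
    read only = no
    vfs objects = catia fruit streams_xattr
    fruit:time machine = yes
    fruit:time machine max size = 1T
```

macOS then sees the share as a valid Time Machine target (usually advertised on the network via Avahi).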

[–] [email protected] 3 points 1 year ago (2 children)

Because Syncthing runs all the time. Any change I make is synced offsite almost immediately. And if you think that is my only backup of my photos, you would be wrong :-) This is only an "if I lost all my physical possessions" type of backup, not an "oops, I'm a dummy and deleted something I shouldn't have" backup. I have multiple snapshot backups and also run incremental backups every hour. Storage is so cheap anymore that I don't hesitate to keep 5 copies of the REALLY important stuff. Most of my server storage is also RAID 5 in case of hardware failure. I have Pushover set up to check for disk failures and push an alert to my phone if one is ever detected.
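The comment doesn't say how the disk check is wired up; one common approach is a cron job that runs smartctl and posts to Pushover's message API on failure. A sketch, assuming smartmontools is installed — the token, user key, and device list are placeholders, not the poster's setup:

```shell
#!/bin/sh
# Check SMART health on each disk and push a Pushover alert on failure.
# TOKEN and USER are placeholder Pushover credentials.
TOKEN="your-app-token"
USER="your-user-key"

for dev in /dev/sda /dev/sdb /dev/sdc; do
    # smartctl -H prints "PASSED" in its overall-health line when the disk is OK
    if ! smartctl -H "$dev" | grep -q "PASSED"; then
        curl -s \
            --form-string "token=$TOKEN" \
            --form-string "user=$USER" \
            --form-string "message=SMART failure detected on $dev" \
            https://api.pushover.net/1/messages.json
    fi
done
```

Drop it in /etc/cron.hourly (or a systemd timer) and it only makes noise when something is actually wrong.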

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (10 children)

I have a site-to-site VPN between my house and my mother's house. I keep a Raspberry Pi at her house with a 2 TB drive running Syncthing. My photo library and important documents sync to her house as an offsite backup in case my house ever burns to the ground.
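The comment doesn't name the VPN software; WireGuard is one common way to build a site-to-site tunnel like this. A hypothetical config for the Pi's end — all keys, addresses, subnets, and the endpoint hostname here are made up:

```ini
# /etc/wireguard/wg0.conf on the remote Pi (illustrative values only)
[Interface]
Address = 10.10.10.2/24
PrivateKey = <pi-private-key>
ListenPort = 51820

[Peer]
# the home-side router/server
PublicKey = <home-public-key>
Endpoint = home.example.net:51820
AllowedIPs = 10.10.10.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```

PersistentKeepalive keeps the tunnel up through NAT so Syncthing on the Pi can always reach the home side.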

As far as syncing files between a PC and a laptop goes, I would think ownCloud is better suited for that.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I have two containers pointing to the same bind mount. I just had to manually edit the config files in /etc/pve/lxc so that both pointed to the same dataset. I have not had any issues, but you do have to pay attention to file permissions, etc. One container writes and the other is read-only for the most part, so I don't worry about file locking there. I did it this way because, if I recall correctly, you can't do NFS shares within a container without giving it privileged status, and I didn't want to do that.
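For anyone wanting to replicate this: in the container configs under /etc/pve/lxc, a bind mount is an `mp` entry pointing at a host path, and `ro=1` makes it read-only inside that container. A sketch with hypothetical paths and container IDs:

```ini
# /etc/pve/lxc/101.conf — the writing container
mp0: /tank/media,mp=/mnt/media

# /etc/pve/lxc/102.conf — the mostly-read-only container
mp0: /tank/media,mp=/mnt/media,ro=1
```

Both containers see the same host directory at /mnt/media; only 101 can write to it.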

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (3 children)

I personally created the ZFS zpool within Proxmox so that all the space was available to give to any container that needed it. Then when you create a container, you add a mount point, select the pool as the source, and specify the size you want to start with. As your needs grow, you can add space to that mount point within Proxmox.

Say you have a 6 TB zpool and you create a dataset that is allocated 1 TB. Within that container you will see a mount point with a size of 1 TB, but in Proxmox you will see that you still have 6 TB free, because that space isn't actually used yet. Your containers are basically just quota'd directories inside the Proxmox host's filesystem when you use a ZFS pool, and you are free to go into a container's settings and add space to that quota as your needs grow.
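Assuming the pool is registered as a Proxmox storage (called `tank` here; the pool layout, container ID, and sizes are placeholders, not the poster's actual setup), the same flow on the CLI looks roughly like:

```shell
# create the pool on the Proxmox host (raidz is ZFS's RAID5-like layout)
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc

# add a 1 TiB ZFS-backed mount point to container 101 (size given in GiB)
pct set 101 -mp0 tank:1024,mp=/mnt/data

# later, grow that mount point as needs increase
pct resize 101 mp0 2T
```

The GUI equivalent is exactly what the comment describes: Resources → Add → Mount Point, then bump the size later.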