this post was submitted on 28 Nov 2023

Self-Hosted Main


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.

founded 1 year ago

All the cloud apps I've seen advertise "easy installs", but I end up with 30 different problems I can't solve. Are there any cloud apps I can just easily install on my Linux Mint (Ubuntu) server without issues?

top 9 comments
[–] [email protected] 3 points 9 months ago (1 children)

CasaOS is about as easy as it gets.

curl -fsSL https://get.casaos.io | sudo bash  

It provides a GUI front end for Docker. You can install it on any Debian-based system (which Mint is). Combine that with the Portainer app and there isn't much you can't do.
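For the Portainer side, something like the following should work; this mirrors Portainer's own install docs, but double-check the image tag and ports there before running it:

```shell
# Create a named volume for Portainer's data, then run Portainer CE.
docker volume create portainer_data

docker run -d \
  --name portainer \
  --restart always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

After that, the web UI is on https://your-server:9443.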

[–] [email protected] 1 points 9 months ago

Oh wow, just glancing at the GitHub page this looks really intriguing. It's like a front-end for assembling your own cloud from existing apps. They even mention running it on older hardware.

You may have just saved me some effort.

Thanks!

[–] [email protected] 1 points 9 months ago

Install Docker and then install Portainer. This will make your life so much easier. I'll share a guide in the AM when I'm back at my comp.
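Until then, here's a rough sketch of the usual quick path on a Debian/Ubuntu base like Mint, using Docker's convenience script (review the script before piping it to a shell):

```shell
# Install Docker via the official convenience script.
curl -fsSL https://get.docker.com | sudo sh

# Optional: let your user run docker without sudo (log out and back in afterwards).
sudo usermod -aG docker "$USER"
```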

[–] [email protected] 1 points 9 months ago

Check out the Docker container filebrowser/filebrowser. Mount that drive at /srv and it will act like a cloud web file browser. No sync, but you can download/upload and share links.
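As a minimal sketch of that setup (the host path /mnt/mydrive and port 8080 are placeholders; check the flags against the filebrowser README):

```shell
# Run filebrowser with a host drive mounted at /srv, its default root.
docker run -d \
  --name filebrowser \
  --restart unless-stopped \
  -v /mnt/mydrive:/srv \
  -p 8080:80 \
  filebrowser/filebrowser
```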

Though syncthing would work for that.

[–] [email protected] 1 points 9 months ago

Docker Compose files are pretty easy and straightforward. Find a service and google "x docker compose".
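As a hedged example of what those search results usually look like, here's a minimal compose file for a generic web service (the image, port, and path are placeholders):

```yaml
# docker-compose.yml — start with `docker compose up -d`
services:
  web:
    image: nginx:stable              # placeholder image
    ports:
      - "8080:80"                    # host:container
    volumes:
      - ./html:/usr/share/nginx/html:ro
    restart: unless-stopped
```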

[–] [email protected] 1 points 9 months ago

cp -r google.com /home/user/mycloud

[–] [email protected] 1 points 9 months ago

"I don't want to spend money on cloud services and don't want to spend time on self hosting" — pick one, buddy.

[–] [email protected] 1 points 9 months ago

Your whole life becomes much simpler when you use docker.

Elevator pitch: Docker containers are preconfigured services which run isolated from the rest of your system and only expose the individual directories you map into the container. These directories are the persistence part of the application and survive a restart of the container or the host system. Just back up your scripts and the data directories and you have backed up your entire server.

I have a few scripts as examples. 'cd "$(dirname "$0")"' changes to the directory the script is stored in, and therefore will create and map data directories from that parent directory.

The Let's Encrypt proxy companion will set up a single listener for web and SSL traffic, create virtual hosts automatically, and handle SSL certificates, all without manual steps.

First, you need letsencrypt nginx proxy companion:

#!/bin/bash

cd "$(dirname "$0")"

docker run --detach \
  --restart always \
  --name nginx-proxy \
  --publish 80:80 \
  --publish 443:443 \
  --volume "$(pwd)"/certs:/etc/nginx/certs \
  --volume "$(pwd)"/vhost:/etc/nginx/vhost.d \
  --volume "$(pwd)"/conf:/etc/nginx/conf.d \
  --volume "$(pwd)"/html:/usr/share/nginx/html \
  --volume /var/run/docker.sock:/tmp/docker.sock:ro \
  --volume "$(pwd)"/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro \
  --volume "$(pwd)"/nginx.conf:/etc/nginx/nginx.conf:ro \
  --volume "$(pwd)"/acme:/etc/acme.sh \
  jwilder/nginx-proxy

docker run --detach \
  --restart always \
  --name nginx-proxy-letsencrypt \
  --volumes-from nginx-proxy \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  --env "DEFAULT_EMAIL=admin@MYDOMAIN.COM" \
  jrcs/letsencrypt-nginx-proxy-companion

Then each service can be started with a similar docker command plus a few extra environment variables. Here is one for Nextcloud:

docker run -d \
  --name nextcloud \
  --hostname cloud.MYDOMAIN.COM \
  -v "$(pwd)"/data:/var/www/html \
  -v "$(pwd)"/php.ini:/usr/local/etc/php/conf.d/zzz-custom.ini \
  --env "VIRTUAL_HOST=cloud.MYDOMAIN.COM" \
  --env "LETSENCRYPT_HOST=cloud.MYDOMAIN.COM" \
  --env "VIRTUAL_PROTO=http" \
  --env "VIRTUAL_PORT=80" \
  --env "OVERWRITEHOST=cloud.MYDOMAIN.COM" \
  --env "OVERWRITEPORT=443" \
  --env "OVERWRITEPROTOCOL=https" \
  --restart unless-stopped \
  nextcloud:25.0.0

And Plex (/dev/dri is quicksync for hardware transcode):

docker run \
--device /dev/dri:/dev/dri \
--restart always \
-d \
--name plex \
--network host \
-e TZ="America/Chicago" \
-e PLEX_CLAIM="claim-somerandomcharactershere" \
-v $(pwd)/config:/config \
-v /my/media/directory/on/host/system:/media \
plexinc/pms-docker

Obsidian:

docker run --rm -d \
  --name obsidian \
  -v "$(pwd)"/vaults:/vaults \
  -v "$(pwd)"/config:/config \
  --env "VIRTUAL_HOST=obsidian.MYDOMAIN.COM" \
  --env "LETSENCRYPT_HOST=obsidian.MYDOMAIN.COM" \
  --env "VIRTUAL_PROTO=http" \
  --env "VIRTUAL_PORT=8080" \
  ghcr.io/sytone/obsidian-remote:latest

[–] [email protected] 1 points 9 months ago

The answer is docker and docker-compose. The problem you're describing is the reason it exists. Each app is isolated and runs as though it has its own dedicated system, but you can map directories and ports in to make data persistent, and ensure it all just works. This includes mapping in your whole HDD, or even /dev if you so desire (don't do this). It honestly makes it trivial to get most things up and running.
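To make that mapping concrete, here's a minimal sketch of the port and directory mapping described above (the image and host path are just examples):

```shell
# Map host port 8080 to the container's port 80, and mount a host
# directory read-only as the container's web root.
docker run -d \
  --name demo-web \
  -p 8080:80 \
  -v "$(pwd)"/html:/usr/share/nginx/html:ro \
  nginx:stable
```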