this post was submitted on 18 Oct 2023

Self-Hosted Main


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.



There was a recent post about whether to enable ufw, and it made me wonder: how protected am I from a rogue Docker container? I have a single server with 15-20 Docker containers running at any given time. Should one get hacked or be malicious from the get-go, are there (hopefully easy to implement for an armchair sysadmin) best practices to mitigate such an event? Thanks!

top 25 comments
[–] [email protected] 1 points 11 months ago (1 children)

So attempt to run every container with the least privilege:

  • separate networks for each stack
  • only map the folders that are needed
  • run the container as a non-root user (some containers won't work that way and have to run as root)
  • use a reverse proxy with authentication in front (if an app is valuable)
  • make differential backups to keep the size down and back up more often (and check that they actually restore)
  • block internet access for containers that don't need it
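
Roughly, in docker run terms, several of those points look like this; the network name, UID, host path and image tag below are placeholders, so treat it as a sketch rather than a recipe:

```
# --user          -> run as a non-root UID/GID inside the container
# --cap-drop      -> drop all Linux capabilities
# --read-only     -> read-only root filesystem
# --security-opt no-new-privileges -> block privilege escalation via setuid binaries
docker network create --internal app_net   # "internal" networks get no outbound internet access
docker run -d --name myapp \
  --user 1000:1000 \
  --cap-drop ALL \
  --read-only \
  --security-opt no-new-privileges \
  --network app_net \
  -v /srv/myapp/data:/data \
  myapp:1.2.3
```

The same options map directly onto compose keys (user, cap_drop, read_only, security_opt, networks) if you run your stacks that way.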
[–] [email protected] 1 points 11 months ago (2 children)

run the container as a non-root user (some containers won't work that way and have to run as root)

To avoid issues with containers, you could also make use of user namespaces: https://docs.docker.com/engine/security/userns-remap/

Allows a process to have root privileges within the container, but be unprivileged on the host.
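
Turning it on is mostly a one-line daemon setting; a minimal sketch, assuming systemd and the default "dockremap" mapping described in the linked docs:

```
# Enable user-namespace remapping for the Docker daemon
# (this overwrites an existing daemon.json - merge by hand if you already have one)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker
# Note: images and containers are stored per namespace, so existing ones
# will look "missing" until re-pulled/re-created under the remapped daemon.
```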

[–] [email protected] 1 points 11 months ago (1 children)

That's the way Proxmox issues privileges to containers by default. I don't know how bulletproof it is, but it seems very reasonable.

[–] [email protected] 1 points 11 months ago

I'd argue it's up there :) In the end you're quite limited with what you can do as an unprivileged user.

Granted, it's for Kubernetes rather than Docker, but userns is userns. This Kubernetes blog post even has a short demo :) https://kubernetes.io/blog/2023/09/13/userns-alpha/

[–] [email protected] 1 points 11 months ago

Does using this method still allow mounting folders from the host drive without permission issues?

[–] [email protected] 1 points 11 months ago

Docker provides some basic security guidelines here: https://docs.docker.com/engine/security/

But aside from specific containers and guidance, general network and system hardening guidelines would apply. You can look up plenty of server hardening guidelines via Google. General principles such as least privilege, segmentation via VLANs and firewall rules, and user ownership/privilege for accounts and services will go a long way. Keep defense in depth in mind, so 1 control is none, 2 is one, and you can always find more ways to make something secure, up to and including removal. The most secure thing is a thing that doesn't exist.

There are also automated tools that can perform scans and 'audits' on your system or your containers, to guide you on specifics you can adjust (such as lynis) and help lock you down in a more systematic way. These tools can be automated to report on a schedule, or run as a one-time check. One of those is your best bet for targeted and effective controls.
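
For example, lynis runs directly on the host; assuming it's installed from your distro's repositories, the basic usage is:

```
sudo lynis audit system             # interactive audit with a hardening index and suggestions
sudo lynis audit system --cronjob   # non-interactive mode, handy for scheduled reports
```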

[–] [email protected] 1 points 11 months ago (2 children)

Run a server with SELinux enabled, and use Podman instead of Docker (I assume Podman has better SELinux support).

[–] [email protected] 1 points 11 months ago

Never heard of Podman, but what I read on Google is that it's a drop-in replacement for Docker. I even read you can alias podman to docker. So does that mean we can just use Docker images and docker compose files with Podman? Are there drawbacks to using Podman instead of Docker?
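
What I saw was basically something like this (just an illustration of the "alias" idea, not a guarantee that everything behaves identically):

```
# Podman's CLI mirrors Docker's, so many people simply alias it
alias docker=podman
docker run --rm docker.io/library/alpine echo "hello from podman"
# Compose files can be used via podman-compose, or on newer Podman versions
# by pointing docker compose at the Podman socket; details vary by distro.
```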

[–] [email protected] 1 points 11 months ago

This. On RHEL (or Fedora or CentOS Stream), containers are confined by the container_t domain, and SELinux policy prevents them from interfering with host resources. In addition, each container runs with a unique set of MCS labels, which stops a rogue container from interfering with other containers.
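
The usual sticking point with bind mounts under SELinux is labeling; a small sketch (container name, path and image are placeholders):

```
# :Z relabels the host directory for this one container
# (use :z instead when several containers share the same directory)
podman run -d --name myapp \
  -v /srv/myapp/data:/data:Z \
  docker.io/library/nginx:alpine
ls -lZ /srv/myapp/data   # files now carry a container_file_t SELinux label
```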

[–] [email protected] 1 points 11 months ago (1 children)

The thing about containers is that they generally have no need for full file system access, and no need for full network access (host, LAN, WAN). So the smaller the privileges, the better: even if a container is compromised, there's very little an attacker can do with it.

This is also a general principle for network management. For instance, when does the TV need to print, or to access any server other than Jellyfin?

[–] [email protected] 1 points 11 months ago

Sorry, this is not true. Even in k8s, any container has access to any other container in the same pod, or in Docker's case on the same host. In k8s you can at least add network policies. If it's a host-networking or macvlan container, it gets worse if no proper isolation is configured at the network level.
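
On the Docker side, that "proper isolation" mostly means separate user-defined networks; a quick way to see the difference (names and images are placeholders):

```
# Containers on the same user-defined network reach each other by name;
# containers on different networks don't even resolve each other.
docker network create stack_a
docker network create stack_b
docker run -d --name web_a  --network stack_a nginx:alpine
docker run -d --name web_a2 --network stack_a nginx:alpine
docker run -d --name web_b  --network stack_b nginx:alpine

docker exec web_a2 wget -qO- http://web_a   # works: same network
docker exec web_b  wget -qO- http://web_a   # fails: different network, no DNS entry
```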

[–] [email protected] 1 points 11 months ago (1 children)

What is a rogue docker container?

[–] [email protected] 1 points 11 months ago (1 children)

If the source of the image gets hacked, the maintainer adds a backdoor, etc.

[–] [email protected] 1 points 11 months ago (1 children)

That doesn't make any sense, since you can see all the code and what you are installing.

[–] [email protected] 1 points 11 months ago (1 children)

The share of people who read the source code of all their Docker containers, and especially understand everything in there, is probably around 1%.

[–] [email protected] 1 points 11 months ago (1 children)

For this sub, maybe. It doesn't take too much to look at what you are copying and pasting.

[–] [email protected] 1 points 11 months ago

It's not limited to what you copy and paste. One of my containers has a pretty long starter script written by the container maintainer. That is needed because the application doesn't have an official Docker version, and the starter script takes care of all the necessary workarounds to get the app running inside a container. There could be something malicious in there I don't know about if I don't read the whole starter script, which is probably in a language I don't understand well.

Even more complicated: I could have studied the starter script and decided it's fine and the author trustworthy, so I pull the container image with the tag "v1.0". Every few months a new version gets released; I take a look at the changelog, and if no breaking changes are mentioned I pull tag v1.1 and replace my existing container. At some point the maintainer stops maintaining the container and hands the repository over to someone else. This person unfortunately now places malicious code in the starter script and releases an update. If I pull that new container image, I now have a rogue container.

[–] [email protected] 1 points 11 months ago

You can run your containers through a vulnerability scanner like Trivy and then patch with Copacetic. It will only fix the container image's OS vulnerabilities though, not the app code dependencies.

Otherwise, one step simpler: you can just vulnerability-scan the containers, look at the issues, and then decide whether you want to deploy them.
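
Assuming Trivy is installed, scanning an image looks roughly like this (the image name is a placeholder):

```
trivy image myapp:1.2.3                                          # list known CVEs in the image
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:1.2.3   # non-zero exit on serious findings, useful in scripts/CI
```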

[–] [email protected] 1 points 11 months ago (2 children)

Noob here. What if we use something like Authelia or Authentik for signing in to use any container? Will that make it safe?

I saw in the documentation of r/CosmosServer that the creator mentions how his setup does not allow Docker containers to talk to each other.

[–] [email protected] 1 points 11 months ago

"Only" having an authenticator doesn't stop malicious containers from reaching outside. Least privileges and network segmentation is the minimum necessary.

[–] [email protected] 1 points 11 months ago

Safe-r. Not inherently safe. It's one good practice to consider among others. Like any measure that increases security, it makes your service less accessible - which may compromise usability or interoperability with other services.

You want to think through multiple security measures with any given service, decide what creates undue hassle, decide what's most important to you, and limit the attack surface by making unauthorized access somewhere between inconvenient and near-impossible. And limit the damage that can be done if someone gets unauthorized access - i.e. not running as root, giving the container limited access to folders, etc.

[–] [email protected] 1 points 11 months ago

Only give the container access to the folders it needs for your application to operate as intended.

Only give the container access to the networks it needs for the application to run as intended.

Don't run containers as root unless absolutely necessary.

Don't expose an application to the Internet unless necessary. If you're the only one accessing it remotely, or if you can manage any of the other devices that might (say, for family members), access your home network via a VPN. There are multiple ways to do this. I run a VPN server on my router. Tailscale is a good user-friendly option.
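
If Tailscale is the route you take, the server side is short (assuming the package is installed; subnet routes also need to be approved in the admin console):

```
sudo tailscale up                                     # join the server to your tailnet
sudo tailscale up --advertise-routes=192.168.1.0/24   # optionally expose the LAN as a subnet route
tailscale status                                      # check which peers can reach it
```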

If you do need to expose an application to the Internet, don't do so directly. Use a reverse proxy. One common setup:

  • Put your containers on private networks (shared among multiple containers only where they need to talk to each other), with ports forwarded from the containers to the host.
  • Install a reverse proxy like Nginx Proxy Manager (NPM). Forward 80 and 443 from the router to NPM, but don't forward anything else from the router.
  • Register a domain, with subdomains for each service you use. Point the domain and subdomains to your IP, or, using aliases, to a dynamic DNS domain that connects to a service on your network (in my case, I use my Asus router's DDNS service).
  • Have NPM connect each subdomain to the appropriate port on the host (i.e. nc.example.com going to the port on the host being used for Nextcloud).
  • Have NPM handle SSL certificate requests and renewals.
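
As a rough sketch of the NPM piece (volume names and the tag are placeholders; port 81 is NPM's admin UI, so keep that one LAN-only):

```
docker network create proxy_net
docker run -d --name npm \
  --network proxy_net \
  -p 80:80 -p 443:443 -p 81:81 \
  -v npm_data:/data \
  -v npm_letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest
```

From NPM's web UI you then add a proxy host per subdomain pointing at the host/port of each service and request a Let's Encrypt certificate for it.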

There are other options that don't involve any open ports, like Cloudflare tunnels. There are also other good reverse proxy options.

Consider using something like fail2ban or CrowdSec to mitigate brute-force attacks and ban bad actors. Consider something like Authentik for an extra layer of authentication. If you use Cloudflare, consider its DDoS protection and other security enhancements.
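
For fail2ban, a minimal sketch for the classic SSH jail, assuming fail2ban is installed on the host (services behind a reverse proxy need their own filters and log paths):

```
cat <<'EOF' | sudo tee /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600
EOF
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd   # show the jail's currently banned IPs
```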

Keep good and frequent backups.

Don't use the same password for multiple services, whether they're ones you run or elsewhere.

Throw salt over your shoulder, say three Hail Marys and cross your fingers.

[–] [email protected] 1 points 11 months ago

What an informative and fantastic set of replies, just wanted to say thanks to everyone for sharing!

As someone who works in infrastructure security, but not with Docker (yet), I learnt a few things, which is what this sub is all about...

[–] [email protected] 1 points 11 months ago

Some good advice here. I would say avoid using network_mode: host unless you really have to. And make use of the no-new-privileges feature. This is easy to do and IMO the bare minimum for preventing rogue actions from containers.
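
Both are quick to check on an already-running container (the container name is a placeholder):

```
docker inspect -f '{{.HostConfig.NetworkMode}}' myapp   # "host" means host networking is in use
docker inspect -f '{{.HostConfig.SecurityOpt}}' myapp   # should include no-new-privileges
```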

[–] [email protected] 1 points 11 months ago

I use Podman and don't run any container as root. If it needs root, I'll use a VM.