this post was submitted on 19 Dec 2024
35 points (85.7% liked)

Selfhosted


Yo,

Wondering what the limit is when it comes to how many containers I can run. Currently I'm running around 15 containers. What happens if that increases to, say, 40? Also, can Docker containers go "idle" when not being used, to save system resources?

I'm running an Intel i7-6700K CPU. It doesn't seem to be struggling at all with my current setup, except maybe when transcoding for Jellyfin.

all 46 comments
[–] [email protected] 2 points 6 hours ago

On my old Dell workstation, which I pulled out of a local business's dumpster and which now has a second life as an Unraid NAS, I'm running 29 currently. I used to run more, but I got rid of some after I was done using those services.

Among other things, the server runs my entire Servarr stack, as well as the various media servers for video, music, ebooks and audiobooks, and my Gitea. There’s a bunch of other stuff as well, but those are the most important to me.

[–] [email protected] 4 points 1 day ago

None, I use Nix instead. :P

[–] sugar_in_your_tea 3 points 1 day ago (1 children)

Looks like 9? Here's what I'm currently running:

  • actual budget
  • caddy (for TLS termination)
  • nextcloud and collabora
  • vaultwarden (currently unused)
  • jellyfin
  • home assistant

The rest are databases and other auxiliary stuff. I'm probably going to work on it some this holiday break, because I'd like to eventually move to microOS, and I still have a few things running outside of containers that I need to clean up (e.g. Samba).

But yeah, like others said, it really doesn't matter. On Linux (assuming you're not using Docker Desktop), a container is just a process. On other systems (e.g. Windows, macOS, or Linux w/ Docker Desktop), containers run in a VM, which is a bit heavier and reserves more resources for itself. I could run 1000 containers and it really wouldn't matter, as long as they're pretty light.
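A quick way to see the "a container is just a process" point for yourself, assuming a Linux host with Docker installed (the container name `demo` and the `nginx:alpine` image are just arbitrary examples):

```shell
# Start a throwaway container
docker run -d --name demo nginx:alpine

# From the HOST, the nginx processes show up as ordinary processes:
# no hypervisor, no guest kernel, just processes in namespaces
ps aux | grep '[n]ginx'

# Inside the container, the same processes appear with different PIDs
# because they live in their own PID namespace
docker exec demo ps aux

# Clean up
docker rm -f demo
```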

[–] [email protected] 1 points 1 day ago (2 children)

I've been curious about deploying HA with Docker. As I understand it, the only limitation is that you can't use add-ons?

[–] [email protected] 1 points 16 hours ago

You cannot install addons from the UI, but you can manually install them. Addons are just Docker containers that get configured automatically.

[–] sugar_in_your_tea 1 points 1 day ago

Yeah, I think so. I'm not interested in addons anyway.

[–] [email protected] 2 points 1 day ago

13 containers currently. I have thought about adding some more stuff, such as Bazarr, but I need to be in the mood for it.

[–] [email protected] 2 points 1 day ago

Big fat zero

[–] [email protected] 33 points 2 days ago (2 children)

Docker containers aren't virtual machines, despite acting like them. They don't require compute resources to sit around doing nothing the way a traditional VM does, because they're essentially just isolated processes sharing the kernel of the base machine.

If the container isnt doing anything then it isnt consuming resources.

[–] [email protected] 3 points 1 day ago

It does consume some resources, just not a lot.

[–] [email protected] 2 points 2 days ago

Good to know, thanks!

[–] [email protected] 34 points 2 days ago (1 children)

A Docker container is essentially a process running on your machine, just like any other process. It can be idle, stopped, or hogging the CPU. You can use Docker constraints to limit resource use if you want to: memory, CPU, and network, to name a few.

So, can you run 40 processes?

Very likely. Probably 400 or 4000, depending on CPU usage and memory.
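The constraints mentioned above can be set per container at run time. A sketch (the container name and image are placeholders):

```shell
# Cap this container at 512 MiB of RAM and 1.5 CPU cores
docker run -d --name capped \
  --memory=512m \
  --cpus=1.5 \
  nginx:alpine

# The equivalent limits in a compose file go under the service:
#   deploy:
#     resources:
#       limits:
#         cpus: "1.5"
#         memory: 512M
```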

I ran that particular CPU with 64 GB of RAM and used it to run multiple virtual machines: my main Debian desktop, plus a VM specifically as a Docker host running dozens of instances of Google Chrome, without ever noticing it slowing down.

Then the power cable shorted out and life was never the same. That was six months ago; the machine was a late 2015 iMac running macOS and VMware Fusion.

[–] Voroxpete 14 points 2 days ago (1 children)

I'll add here that the "docker stats" command lets you easily see what kind of resources your containers are using.

If you prefer a UI, Dozzle runs as a container, is super lightweight, requires basically no setup, and makes it very easy to see your docker resource usage.
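If you want to try Dozzle, the setup is roughly the following (the host port 8888 is an arbitrary choice; check the image's docs for the current recommended invocation):

```shell
# Dozzle only needs read access to the Docker socket;
# its web UI listens on port 8080 inside the container
docker run -d --name dozzle \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -p 8888:8080 \
  amir20/dozzle:latest
```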

[–] [email protected] 3 points 2 days ago (1 children)
[–] [email protected] 2 points 1 day ago (1 children)

Also try Lazydocker; I think it's far superior to Dozzle feature-wise. If you run it in a folder with a docker-compose.yml, it'll show just the processes from that set of containers; run it in any other folder and it'll show all your Docker containers.

[–] [email protected] 1 points 1 day ago

This I will definitely look into. I love Dozzle for when I need to troubleshoot.

[–] [email protected] -3 points 1 day ago

None. Can't fuck with it.

[–] [email protected] 5 points 2 days ago

I am at ~80. Most are idling.

For me, the metric to keep an eye on is how CPU time splits between system and user. If the time spent in system rises, it's a sign that the kernel is mostly context switching instead of executing programs.
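One way to watch that split is with standard Linux tools (`vmstat` comes with procps on most distros; `mpstat` is part of the sysstat package):

```shell
# Print stats every 2 seconds, 5 samples: "us" = user time,
# "sy" = system time, "cs" = context switches per interval
vmstat 2 5

# Or per-core percentages (%usr vs %sys) with sysstat
mpstat -P ALL 2 5
```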

[–] [email protected] 10 points 2 days ago (2 children)

Zero. It seems like software is increasingly expecting to be deployed in a container though, so that probably won't last forever.

[–] Voroxpete 27 points 2 days ago (2 children)

While I understand the frustration of feeling like you're being forced to adopt a particular process rather than being allowed to control your setup the way you see fit, the rapid proliferation of containers happened because they really do offer astonishing advantages over traditional methods of software development.

[–] [email protected] 5 points 2 days ago

It was a total game changer for me, at least. Gone are the days of spending an entire weekend day upgrading applications, and of being scared to patch services. I also try out things I wouldn't have before; I can have the service up in a few minutes.

[–] [email protected] 4 points 2 days ago (1 children)

FWIW, I switched to Linux due to the amazing container support and haven't looked back in terms of running software. The easy setup, teardown, and common monitoring make it far more convenient to host stuff on Linux.

[–] Voroxpete 7 points 2 days ago

Yeah, my own experience of switching to containers was certainly frustrating at first because I was so used to doing things the old way, but once it clicked I couldn't believe how much easier it made things. I used to block out several days for the trial and error it would take getting some new service to work properly. Now I'll have stuff up and running in 5 minutes. It's insane.

[–] [email protected] 8 points 2 days ago (2 children)

I like containers; they make shit very convenient. I don't give a fuck about the specifics of some service: I copy-paste a docker compose and I'm off to the races.

[–] [email protected] 1 points 1 day ago

Technically you can get something kind of like that with Ansible but I wouldn't recommend it.

[–] [email protected] 7 points 2 days ago (1 children)

Remember dealing with conflicting packages and conf files or updating dozens of vms? I sure do, and I don't miss it at all

[–] [email protected] 1 points 2 days ago

That was before my time lol.

[–] [email protected] 5 points 2 days ago

I run 19 but barely get over 5% usage, even when transcoding 4K movies whose copyright has expired.

[–] [email protected] 8 points 2 days ago (1 children)

As was already said, Docker is not virtualization. The number of containers you can run depends on the containers and what applications are packaged in them. I'm pretty sure you can max out any host with a single container running computationally heavy software, and I'm also pretty sure any given host can run thousands of containers that are just serving a simple static website.

[–] Voroxpete 2 points 2 days ago* (last edited 2 days ago)

Correct on both counts, although it is possible to set limits that will prevent a single container from using all your system's resources.

[–] atzanteol 7 points 2 days ago

Wandering what the limit is when it comes to how many containers I can run.

Basically the same as the number of processes you can run.

Use "docker stats" to see what resources each container is using.

[–] [email protected] 1 points 2 days ago

I currently have 15 on my host and like 3 more in a VM

[–] ALERT 4 points 2 days ago
[–] [email protected] 3 points 2 days ago (2 children)

I have gone up to about 300-400 or so. Currently running about 5 machines averaging about 100 each.

[–] [email protected] 1 points 28 minutes ago

What cpu/ram setup?

[–] [email protected] 2 points 1 day ago (1 children)

Are some of them redundant containers? That's just a lot of services to be running.

[–] [email protected] 1 points 1 day ago

Yeah most of them are just high-availability replicas, probably only about 100-200 actual services/microservices

[–] [email protected] 3 points 2 days ago

You can't really make them go idle, save by restarting them with a do-nothing command like tail -f /dev/null. What you probably want is to scale a service down to 0. This keeps the declaration that you want an image deployed as a container, "but for right now, don't stand any containers up".

If you're running a Kubernetes cluster, then this is pretty straightforward: just edit the deployment config for the service in question to set replicas: 0. If you're using Docker Compose, the value is also called replicas there, and the default is 1.
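Concretely, the two cases look something like this (the service/deployment name `myapp` is a placeholder):

```shell
# Kubernetes: keep the Deployment object, but run zero pods
kubectl scale deployment/myapp --replicas=0

# Docker Compose: scale a service down at the CLI...
docker compose up -d --scale myapp=0

# ...or declare it in the compose file:
#   services:
#     myapp:
#       deploy:
#         replicas: 0
```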

As for a limit on the number of running containers, I don't think one exists unless you're running an orchestrator like AWS EKS that sets an artificial limit of... 15 per node? I think? Generally you're limited only by the resources available, which means it's a good idea to set limits on the amount of RAM/CPU a container can use.

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

Right now I have 32 active stacks running, and a good number of them create at least one other container, like a database. So I'm running around 60+ separate containers. The machine has maybe an i5-6500 or so in it with 32 GB of RAM. I use Unraid as the NAS platform, but I do all the Docker stuff manually. It's plenty fast for what I need so far… :)

[–] [email protected] 2 points 1 day ago (1 children)

Wtf are you doing creating databases that's cool lol

[–] [email protected] 1 points 13 hours ago

I wish it was something that cool. I mean like when spinning up Immich: it makes separate containers for the server, Redis, the DB, etc.

[–] [email protected] -1 points 1 day ago (2 children)

Zero.

I run VMs and LDoms at work and VMs at home: a dwindling number of VMware VMs and a growing number of QEMU VMs to replace them.

I don't need the hassle.

[–] [email protected] 13 points 1 day ago* (last edited 1 day ago)

Get a load of this guy, thinking containers are more of a hassle than VMs!