Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.
Say you deployed two different docker compose apps, each with its own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).
This also makes cleanup easier: the app's documentation can just say "docker compose down -v" and users are done, instead of listing a bunch of directories that need to be cleaned up by hand.
Those lingering directories can also cause problems for users who wanted a clean start after their app broke: with a bind mount, the broken database state won't have been deleted for them when they bring the services back up.
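For example, a minimal compose file with a named volume might look like this (the service, volume, image tag, and credential values here are just placeholders); Compose prefixes the volume with the project name, so two stacks can both call their volume db_data without colliding:

```yaml
# docker-compose.yml — deployed from a directory named "app1", so the
# volume below is created as "app1_db_data" under Docker's control.
services:
  db:
    image: mariadb:11                    # example image tag
    environment:
      MARIADB_ROOT_PASSWORD: changeme    # placeholder credential
    volumes:
      - db_data:/var/lib/mysql           # named volume, managed by Docker

volumes:
  db_data: {}
```

A second project using the same file gets its own app2_db_data volume, and "docker compose down -v" removes only the volumes belonging to that project.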
All that said, I very much agree that when you go to deploy a docker service you should consider changing the named volumes to standard bind mounts, for a few reasons (see the sketch after this list):
- When running production applications I don't want the volumes to be quite so easy to clean up. A little extra protection from accidental deletion is handy.
- The default location for named volumes (/var/lib/docker/volumes) doesn't work well with more advanced partitioning strategies, e.g. if you want your database volume on a different partition than your static web content.
- This one is older and maybe more personal preference at this point, but back before the docker overlay2 storage driver had matured we used the btrfs driver instead, and occasionally Docker would break and we would need to wipe out the entire /var/lib/docker btrfs filesystem. So I just personally want to keep anything persistent out of that directory.
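As a rough before/after sketch of the kind of edit I mean (the host path and names are hypothetical; pick whatever fits your partition layout):

```yaml
services:
  db:
    image: mariadb:11
    volumes:
      # Before: named volume, stored under /var/lib/docker/volumes/
      # - db_data:/var/lib/mysql
      # After: bind mount to a directory on a partition you control
      - /srv/app1/mariadb:/var/lib/mysql

# The top-level "volumes:" entry for db_data can be dropped once nothing
# references it.
```

With the bind mount in place, "docker compose down -v" no longer touches the data directory, which is exactly the extra protection against accidental deletion mentioned above.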
So basically application writers should use named volumes to simplify the documentation/installation/maintenance/cleanup of their applications.
Systems administrators running those applications should know and understand the docker compose file well enough to change those settings and make them production-ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.
Since the ER-X is Linux under the hood, the easiest thing to do would be to just SSH in and run tcpdump.
Since you suspect this is coming from the UDR itself, you should be able to filter on the IP of the UDR's management interface. That should get you destination IPs, which will hopefully help track it down.
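Something along these lines should do it (the interface name and addresses are placeholders; on an ER-X the LAN side is typically an ethX or switch0 interface, and 192.168.1.2 stands in for the UDR's management IP):

```sh
# Show headers for traffic to/from the UDR's management IP, skipping
# name/port resolution, and ignore destinations inside the local network
# so only internet-bound flows are left.
sudo tcpdump -i switch0 -nn -c 200 'host 192.168.1.2 and not dst net 192.168.0.0/16'
```

The destination addresses in that output are usually enough to put a name to the traffic with a quick whois or reverse lookup.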
I'm not sure what would cause that sort of traffic, but there used to be a WAN speed test on the UniFi main page which could chew up a good amount of bandwidth. I wouldn't think it would be constant, though.
Do you have other UniFi devices that might have been adopted with layer 3 adoption? Depending on how you set up layer 3 adoption, even devices that are local to your network might be using hairpin NAT on the ER-X, which can look like internet activity destined for the UDR even though it is all local.