ttkciar

joined 1 year ago
[–] [email protected] 1 points 11 months ago (1 children)

Docker and Kubernetes are popular mostly because the industry has broadly given up on release engineering. Without it, applications/services can have different and conflicting dependencies, so the only way they can run on the same physical host is by putting each in its own container or VM instance, each with its specific dependencies.
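A minimal sketch of that isolation (the image tags and commands are illustrative, not anything in particular):

```
# Two services with conflicting interpreter versions coexist on one host
# because each container ships its own userland:
docker run -d --name legacy-app python:2.7  python -m SimpleHTTPServer 8000
docker run -d --name modern-app python:3.12 python -m http.server 8000
# Both can even bind "port 8000" without clashing, since each container
# gets its own network namespace by default.
```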

The alternative is to have a platform with standard libraries and to port applications to that platform, using the platform's libraries as their dependencies and thus avoiding conflicts. This requires effort and discipline, so of course it is not very popular, though it was standard practice twenty years ago.

As far as I know, the only Linux distribution which still follows the platform approach is Slackware. Applications which are ported to Slackware are guaranteed to work together without conflicts, but not a lot of applications have been thus ported (Slackware has only about two thousand official packages in all).

[–] [email protected] 1 points 11 months ago

My work-from-home workstation always has a VM or two running the test/dev environment for the tasks I'm working on at work. They are VBox instances provisioned/managed by Vagrant.
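For anyone curious, a minimal sketch of standing up that kind of guest (the public "centos/7" box name stands in for whatever internal box gets used at work):

```
vagrant init centos/7             # writes a Vagrantfile in the current dir
vagrant up --provider=virtualbox  # create and boot the VM
vagrant ssh                       # shell into the guest
vagrant halt                      # shut it down when done
```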

They are CentOS 7 instances, each running a test database, usually a text editor, a "tail -F" monitoring log output, and various daemons/services specific to my workplace's internal infrastructure. The host system is running Slackware 15.0.

[–] [email protected] 1 points 11 months ago

> Thanks! 🙏

Quite welcome :-)

> Where do you get your information?

A few places:

[–] [email protected] 1 points 11 months ago (3 children)

Older Xeon systems (E5 v3 and v4 generations) give you oodles of cores, memory channels, and PCIe lanes. Single-threaded performance isn't great, but for multi-threaded workloads they're great value for the money and power draw.

Compare those Threadripper systems to the R730, T7910, and T7810, with E5-2680 and E5-2690 processors, and see which makes more sense for you and your use-case.

[–] [email protected] 0 points 11 months ago

A variety of things: books, movies, music, scientific journal publications, Slackware's "current" branch with all past packages since 2009 (only half a TB, though), all SlackBuild sources, an almost-complete crawl of CentOS 6 packages, large language models and datasets (almost 8 TB now), an old TankNet archive, a few Wikipedia dumps about two years apart, chat logs, archived email, and a lot of smaller archives of niche interests ... it's something of a mess.

[–] [email protected] 1 points 1 year ago

In a drawer at work.

[–] [email protected] 1 points 1 year ago

On one hand, I think VMs are overused, introduce undue complexity, and reduce visibility.

On the other hand, the problem you're citing doesn't actually exist (at least not on Linux; dunno about Windows). A VM can use all of the host's memory and processing power if the other VMs on the system aren't using them. To the host kernel, a VM is just another process, so the operating system balances resource utilization across multiple VMs just as it does across processes, maximizing the performance of each.
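A quick way to see this for yourself (a sketch; the machine name "vm1" is made up, and it assumes the "stress" utility is installed in the guest):

```
# Load one guest while the others idle:
vagrant ssh vm1 -c 'stress --cpu 4 --timeout 60'
# On the host, the busy guest's VBoxHeadless process soaks up the
# spare CPU while the idle guests' processes sit near 0%:
top -b -n 1 | head -20
```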

[–] [email protected] 1 points 1 year ago

I just have a cron job that runs "ifconfig | logger" once a week. When I want to know how much data has been sent or received, I subtract the RX bytes or TX bytes of the appropriately timestamped entries from each other; dividing by the time difference gives the average rate. Easy-peasy.
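Concretely, something like this (the log tag, interface name, and log path are illustrative):

```
# Crontab entry: log the interface counters every Sunday at midnight,
# tagged so they're easy to grep out of syslog:
#   0 0 * * 0  /sbin/ifconfig eth0 | /usr/bin/logger -t ifstats

# Later, pull two timestamped samples out of the log:
grep ifstats /var/log/messages | grep bytes
# data transferred = RX_bytes(later) - RX_bytes(earlier)
# average rate     = data transferred / seconds between the entries
```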