So the cost of a tow or mobile mechanic, plus the cost of a replacement starter, plus the cost of alternate transport or lost wages, would each take years to make up for.
Transmission loss/attenuation only informs how much power is needed on the transmitting side for the receiver to be able to receive the signal. The wireless networks I am talking about don't really have packet loss (aside from when the link goes down for reasons like hardware failure).
I mention Chicago to New York specifically because in the financial trading world we use both wireless network paths and fiber paths between those locations, and measured/real latency is a very big deal, measured down to the nanosecond.
So what I mention has nothing to do with human perception, as fiber and wireless are both faster than most humans' perception. We also don't have packet loss on either network path.
High-speed/high-frequency wireless is bound by the curvature of the earth and the terrain for repeater locations. Even with all of the repeaters, the measured latency of these commercially available wireless links is about half the latency of the most direct commercially available fiber path between Chicago and New York.
Fiber has in-line passive amplifiers, which are a fun thing to read about, so transmission loss/attenuation really only determines where the amplifiers have to sit.
You are conflating latency (how long it takes bits to go between locations) with bandwidth (how many bits can be sent per second between locations) in your last line.
The speed of light through a medium is what varies (I have to deal with this at work): the speed of light through air is faster than the speed of light through glass fiber. Hollow-core fiber now makes this difference smaller.
Between Chicago and New York, the latency of the specialized wireless links that are commercially available is roughly half that of standard fiber taking the most direct route. But the bandwidth is only gigabits/s, versus the terabits/s you can put over a typical fiber backbone.
But both are faster than humans can perceive anyway.
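As a rough sanity check on those numbers (the ~1,150 km great-circle distance and a refractive index of ~1.47 for glass fiber are my assumptions here), the one-way propagation delays work out to:

    # one-way propagation delay, Chicago to New York (~1150 km great circle)
    # light in air: ~299,700 km/s; in glass fiber (n ~= 1.47): ~204,000 km/s
    echo "scale=2; 1150 * 1000 / 299700" | bc   # ~3.83 ms through air
    echo "scale=2; 1150 * 1000 / 204000" | bc   # ~5.63 ms through fiber

The medium alone only accounts for a ~1.5x difference; the rest of the measured ~2x gap comes from fiber routes following rail and road rights-of-way, which are longer than the near-great-circle paths the microwave towers can take.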
There are modern lapdocks with USB-C.
The before-first-unlock (BFU) state is considered more secure: the file/disk encryption keys are still in the hardware security module and most services aren't running, so there is less surface for an attack. When a phone is taken for evidence, it gets plugged into power and goes in a faraday bag. This keeps the phone in an after-first-unlock (AFU) state, where the encryption keys are in memory and more services that can be attacked to gain access are running.
In Linux everything is a file, so modifying files is all you really need. The hardest part is how to handle mobile endpoints like laptops that don't have always-on connections. Ansible pull mode is what we were looking at in a POC, with triggers on VPN connection. Note we already have a large Linux server footprint managed by Ansible, so it isn't a large lift for us.
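A minimal sketch of what that trigger could look like, assuming NetworkManager is managing the VPN (the repo URL and playbook name are hypothetical): a dispatcher script that runs ansible-pull whenever a VPN connection comes up.

    #!/bin/sh
    # /etc/NetworkManager/dispatcher.d/90-ansible-pull
    # NetworkManager passes $1 = interface, $2 = action
    [ "$2" = "vpn-up" ] || exit 0
    # pull the latest playbooks and apply them to this machine
    ansible-pull -U https://git.example.com/endpoint-config.git local.yml

ansible-pull runs the playbook against localhost by default, so there's no inventory wrangling needed on the endpoint itself.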
Tried this at work and discovered it only really works in VS Code and probably Eclipse. Other IDEs claimed support, but we found it unusable.
I do mostly agree with your point here, but I think you can limit the scope a bit more: mainly, provide a working build environment via one of the mentioned tools, since you will need it anyway for a CI/CD pipeline. You can additionally make the full development environment you use available for people to use if they choose. It is important that it be one that is regularly used, to keep the instructions up to date for anyone who might want to contribute.
From my observations as a sys admin, people tend to prefer the tools they are familiar with, especially as you cross disciplines. A known working example is usually easy to adapt to anyone's preferred tooling.
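As a concrete sketch of the build-environment half (the image name and build command are made up here), the same container image the CI pipeline uses can double as the known working example:

    # build the project locally with the exact image the CI pipeline uses
    docker run --rm -v "$PWD":/src -w /src ci-build-env:latest make all

Anyone who prefers different tooling can still build and test exactly the way CI does, then adapt from there.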
Modern UEFI firmware generally has an HTTP boot option, and iPXE has supported HTTP boot for a long time, though I still get the grub2 bootloader bits over tftp, then use http for the kernel and initrd.
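Roughly what that split looks like as a grub.cfg fragment (the server IP and paths are placeholders): grub itself arrived over tftp, while the large artifacts come over http:

    # grub.cfg fetched by the tftp-delivered grub2 image
    menuentry 'netboot installer' {
        # pull the big files over http instead of tftp
        linux (http,192.0.2.10)/images/vmlinuz ip=dhcp
        initrd (http,192.0.2.10)/images/initrd.img
    }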
The lack of a version is the problem. The syntax has changed over time, so when someone finds or has an older compose file, there is no hint that it won't work with the current version of docker-compose until you get errors, and there's no graceful way to handle that.
Compose doesn't have a versioned standard (it did for a bit, iirc), which also means you can't always just grab a compose file and know it will work.
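For illustration, older compose files carried a top-level version key (the service below is a made-up minimal example); newer Compose implementations just warn that the attribute is obsolete and otherwise ignore it:

    # older-style compose file; the version key was the only schema hint
    version: "3.8"
    services:
      web:
        image: nginx:latest
        ports:
          - "8080:80"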
Most self-hosted stuff works fine with giant all-in-one containers, even for complex apps; it's when you need to scale that you usually hit problems with an all-in-one container approach and have to change.
A method not yet mentioned is by inode. (I've accidentally created filenames I didn't know how to escape at the time, like "--" or other command-line flags/special characters.)

    ls -li

Once you get the inode:

    find . -type f -inum $inode -delete
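A worked run of those two steps (the inode number and listing are made up for illustration); the first column of ls -li is the inode, which find can match without ever having to quote the awkward filename:

    $ ls -li
    1234567 -rw-r--r-- 1 me me 0 Jun  1 12:00 --
    $ find . -maxdepth 1 -type f -inum 1234567 -delete

-maxdepth 1 just keeps find from recursing further than needed.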