this post was submitted on 30 Jan 2024
846 points (98.3% liked)

[–] [email protected] 2 points 7 months ago

It is definitely the exact opposite of this, though I understand why you would think so.

The thing with systems like these is that they are mission critical, which is usually defined as: failure means loss of life or significant monetary loss (think tens of millions of dollars).

Mission-critical software is not unit tested at all; it is proven. You take the code line by line, prove what each line does and how it does it, and document every possible outcome.
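As a rough illustration of what "proven" means here (a hypothetical sketch, not any real train vendor's code): tools like Frama-C let you attach a formal contract to a function in ACSL and then machine-check that the implementation satisfies it for every possible input, not just the inputs a unit test happens to exercise. The function name, parameters, and limits below are made up for the example.

```c
/* Hypothetical example: a formally specified overspeed check.
   The ACSL contract states what the function must do for ALL inputs;
   a prover (e.g. Frama-C's WP plugin) discharges the proof obligations. */

/*@ requires 0 <= speed_kmh <= 500;
    requires 0 <= limit_kmh <= 500;
    assigns \nothing;
    ensures \result == 1 <==> speed_kmh >  limit_kmh;
    ensures \result == 0 <==> speed_kmh <= limit_kmh;
*/
int overspeed(int speed_kmh, int limit_kmh)
{
    return (speed_kmh > limit_kmh) ? 1 : 0;
}
```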

Mission-critical software is ridiculously expensive to develop for this exact reason. And upgrading to deploy on different systems means you'll be running things in a new environment, which introduces a ton of unknown factors. What happens, on a line-by-line basis, when you run this code on a faster processor? Does this chip process the commands in a slightly different order because it uses a slightly different algorithm? You don't know until you take the new hardware, the new software, and the code, and go through the lengthy process of proving it all again, until you can document that it will not result in any unusual train behavior.
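A tiny sketch of why the environment matters (again hypothetical, not the actual codebase): a proof is only valid under the assumptions it was made with, so safety-critical code often pins those platform assumptions down explicitly, so that a new target which breaks one of them fails to build instead of silently behaving differently.

```c
#include <limits.h>

/* Hypothetical illustration: platform assumptions the original proof relied on,
   written down as compile-time checks (C11 _Static_assert). */
_Static_assert(CHAR_BIT == 8, "proof assumed 8-bit bytes");
_Static_assert(sizeof(int) == 4, "proof assumed 32-bit int");
```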