this post was submitted on 13 Jun 2024
736 points (97.9% liked)
Technology
You can't verify it's secure if it's proprietary, so it's never secure? Having control over other people's computing creates bad incentives to gain at your users' expense, so you should lose trust from day one.
You can have audits done on proprietary software. Just because the public can't see it doesn't mean nobody else can.
That just moves the required trust from the 1st party to a 2nd or 3rd party. Unreasonable trust.
Do you yourself actually audit the software you use, or do you just trust what others say?
Wait... you don't audit every package and dependency before you compile and install?
That's crazy risky my man.
Me? I know security and take it seriously, unlike some people here. I'm actually almost done with my audit and should be ready to finally boot Fedora 8 within the next 6-8 months.
This is like asking whether you do scientific experiments yourself or trust others' results. I distrust private prejudice and trust public, verifiable evidence that's survived peer review.
Scientists in the room who have to base their experiments off other people's data and results:
Tongue in cheek, but this is actually giving me a particular headache because of some results (not mine) that should never have been published.
That sucks, but the answer to bad results is still more/better tests 😇
If you're a big enough organization (like the US government) you can pay anyone you want (or even your own people) to audit Microsoft's code.
@fuckwit_mcbumcrumble @tabular I’ve never worked at Microsoft, but I worked at a different enterprise company and they did indeed fly in representatives of different governments who got free access to the code on a company laptop in a conference room to look for any back doors. I always thought it was silly because it is impossible to read all the code.
If I'm a government I'm hella criminalising the sharing of proprietary software.
I'd argue the unknown can't be used to say something is technically secure, nor that it's insecure. If that kind of reasoning is applied, then any OS using non-open-source hardware is insecure because the VHDL/Verilog code is not verifiable.
Unless everyone is running an open-source RISC-V core on an FPGA for their hardware, it's a game of goalposts over where someone plants said flag.
Security is in degrees. The highest level would indeed use open-source hardware. I hope to build a rig like that someday.
Consider people counting paper votes in an election. Multiple political parties are motivated by their own self-interest to watch the counting and prevent each other from faking votes. That is a security feature, and without it the validity of the election has a critical unknown, making it very sussy.
An OS using proprietary software is like an electronic voting machine: we pretend it's secure to feel better about a failing we can't change.
The problem is the bad actors have direct access to said voting machines. In the case of software security, the people creating the OS are not typically the bad actors in question, which brings us back to the goalpost situation. Unless you know how everything is designed from the ground up (including the hardware code, in whatever language it's written), you're just setting an arbitrary goalpost. Basically the typical NSA backdoor, or a foreign backdoor via hardware, independent of the OS.
To bluntly place the goalpost only at the OS stage ignores that you can apply the same standard to any part of the chain (the chip designer, the hardware assembler, the OS designer, the software maker). Setting it at the OS level fundamentally means all OSes are insecure by nature unless you're actively running on an FPGA that's constantly getting updates.
For instance, any CPU with speculative execution is fundamentally insecure, and that's virtually all modern processors. Never mind the OS when the door is already open at the CPU level regardless of it.
When I think of bad actors and software, I think of security from 3rd parties beyond the intentions of the authors. Not just security, but also privacy and any other anti-features users wouldn't want. That applies to the OS, apps, or drivers. Hardware indeed has concerns like software's, which is just a wider conversation about security, which is in turn part of user/consumer rights.