this post was submitted on 25 Jul 2024
315 points (99.1% liked)
Technology
you are viewing a single comment's thread
Secure Boot is a broken concept by design.
You dare question a monopoly corporation and the spymasters of this country??
(/s)
That's doing a lot of work here.
Yes, it's important in certain situations, but for consumer devices, it's just another thing that can go wrong when using alternative operating systems. Regular users don't face the physical-access risks those other environments do, and making it harder for users to install more secure operating systems works against the bigger threat.
Linux is compatible with Secure Boot (source: I exclusively run Linux, and use Secure Boot on my systems), but some distros or manufacturers screw it up. For example, Google Pixel devices warn you about alternative ROMs on boot, and this makes GrapheneOS look like sketchy software, when it's really just AOSP with security patches on top (i.e. more secure than what ships with the device). The boot is still secure, it's just that the signature doesn't match what the phone is looking for.
It's just FUD on consumer devices, but it's totally valid in other contexts. If I were running a data center or enterprise, you bet I'd make sure everything was protected with Secure Boot. But if I run into any problems on personal devices, I'm turning it off. Context matters.
Microsoft, a key player in security?
Yes, surely randoms on Lemmy know better than Microsoft and the NSA in regards to security.
Oh anyone who doesn’t trust Microsoft with their life is a complete idiot. And the NSA only illegally spied on everyone until Bush the II made it legal! So of course we should unquestioningly follow their configuration guides. I mean - haha - we don’t wanna get disappeared! Haha ha. Not. Not that that's ever happened. That we know of. For sure. Probably.
in regards to security
in regards to security
in regards to security
Just wanted to make sure you saw it this time because you went off on a tangent there.
It doesn't matter if they know about security (which they do). A burglar could know about locks and home security systems, would you take his advice?
Their positions on the security of others are dismissed on grounds of trust, not of competence.
The NSA has two jobs.
The first is to break into any computer or communications stream that they feel the need to for “national security needs”. A lot of leeway for bad behavior there, and yes, they’ve done, and almost certainly continue to do, bad things. Note that in theory that is only allowed for foreign targets, but they always seem to find ways around that.
The second, and less well known, job is to ensure that nobody but them can do that to US computers and communications streams. So if they say something will make your computer more secure, it’s probably true, with the important addition of “except from them”.
I won’t pretend I like any of this, but most people are much more likely to be targeted by scammers, bitcoin miners, and ransomware than they are by the NSA itself, so in that sense, following the NSA’s recommendation here is probably better than not.
Exploits don't care if you are actually the NSA or not. The NSA certainly knows that, yet they keep exploits secret, at least from the public.
They have argued for key escrow, for God's sake.
They are primarily an intelligence agency. If you are not likely to be targeted by the NSA, you are also unlikely to be targeted by any of their adversaries. They don't give a shit if you get scammed; they are not the FBI, who also keep secret exploits and are anti-encryption.
Additionally, using their "best" exploits on simpler targets still risks those exploits being discovered and fixed. Therefore, it's beneficial to them for everybody's security to be compromised. It also provides deniability.
Right. Their advice for the general public is a mix of "best practice" and risk. If an exploit is not actively exploited in the wild, they'll probably sit on it for intelligence purposes and instead recommend best practices (which are good) that don't impact their ability to use the exploit.
So trust them when they say do X, but don't take silence to mean you're good.
Literally, yes. There was even a TV show about it.
Do you have any evidence those two people are still committing burglaries? The NSA is not an ex-intelligence agency.
I get my advice from LockPickingLawyer on YouTube. He'll demonstrate the weaknesses of various locks, and say which to avoid and which are probably okay ("okay" is a really strong recommendation from him). He'll still break into really secure locks in <2 min, but he'll describe the skills necessary to break in and let you decide based on your threat level.
Basically, as long as it's bump and bypass resistant, you're good. Burglars aren't going to pick locks, they'll either break a window or move on if the lock stops them. A good lock doesn't keep out a burglar, it just slows them down enough that they'll give up.
So yes, get advice from people who have the skills to break the protection they're recommending, they'll be able to separate things into threat categories. If you want OPSec advice, visit black hat hacking forums and whatnot, you'll get way better advice than sticking with the normie channels.
If they were an expert burglar, I might
Source: I'm an expert burglar and all of the others on my burglar crew are very helpful when people ask about home security stuff.
Exactly, they have a clear conflict of interest
Hey what is that, some kinda tangent
Security is the last thing NSA and Micro$oft care about. NSA wants to be sure they can do all they need to with your devices, and M$ just wants to discourage you from switching to linux.
This is obviously insane
Lol, if you say so
Can you explain more? (I don't doubt you)
Ok, so I am not an expert, and I am not the OP. But my understanding is that Secure Boot checks against a relatively small list of trusted signing certificates to make sure that the OS and hardware are what they claim to be on boot. One of those certificates is Microsoft's, which is used to sign shim, a small first-stage bootloader most Linux distros use; shim can be updated regularly as new stuff comes out. And technically you can whitelist other certificates, too, but I have no idea how you might do that.
The problem is, there's no real way to get around the reality that you're trusting Microsoft not to be compromised, not to go evil, not to misuse their ubiquity and position of trust as a way to suppress competition, etc. It's a single point of failure that presents a massive and very attractive target to attackers, since it could be used to intentionally do what CrowdStrike did accidentally last week.
And it's not necessarily proven that it can do what it claims to do, either. In fact, it might be a quixotic and ultimately impossible task to try and prevent boot attacks from UEFI.
But OP might have other reasons in mind, I dunno.
To use Secure Boot correctly, you need to disable or delete the keys that come preinstalled and add your own keys. Then you have to sign the kernel and any drivers yourself. It is possible to automate signing the kernel and kernel modules, though. Just make sure the private key is kept secure: if someone else gets hold of it, they can create code that your computer will trust.
The kernel modules are usually signed with a different key. That key is created at build time, its private half is discarded after the build (once the modules have been signed), and the kernel uses the public key to validate the modules, IIRC. That is how Arch Linux can somewhat support Secure Boot without the user needing to sign every kernel module or firmware file (it is also the reason the kernel packages aren't reproducible).
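To make the "throwaway build key" idea concrete, here's a toy sketch in Python. All names are made up, and it fakes signatures with HMAC purely to show the flow; real kernels use an asymmetric keypair (sign with the private key at build time, embed only the public key in the kernel image), which is what actually makes discarding the signing half possible.

```python
import hashlib
import hmac
import secrets

def build_kernel(modules: dict[str, bytes]):
    """Toy model of build-time module signing (illustrative only).

    A fresh random key is generated for each build (this randomness is
    also why such builds aren't reproducible), every module is signed
    with it, and afterwards only a verifier survives -- nothing outside
    this function can sign new modules.
    """
    build_key = secrets.token_bytes(32)  # ephemeral, per-build
    sigs = {name: hmac.new(build_key, code, hashlib.sha256).digest()
            for name, code in modules.items()}

    def kernel_verifies(name: str, code: bytes) -> bool:
        # The "kernel" recomputes the signature and compares it against
        # what was recorded at build time.
        expected = hmac.new(build_key, code, hashlib.sha256).digest()
        return hmac.compare_digest(expected, sigs.get(name, b""))

    return sigs, kernel_verifies

sigs, verify = build_kernel({"ext4": b"ext4-code"})
assert verify("ext4", b"ext4-code")      # genuine build-time module: loads
assert not verify("evil", b"rootkit")    # anything signed later: rejected
```

With the real asymmetric scheme the private key is simply deleted after the build; the closure here stands in for "nobody can mint new signatures after the fact."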
In any case, not for the average person.
You want to store a copy of the private key on the encrypted machine so it can automatically sign kernel updates.
When you enter the UEFI, there will be a Secure Boot section somewhere; it usually has a way to either disable Secure Boot or put it into "Setup Mode". This "Setup Mode" allows enrolling new keys. I don't know of any programs on Windows that can do it, but sbctl and the systemd-boot bootloader can both enroll your own custom keys.
Definitely not for the "normie" then.
Probably too late, but just to complement what others have said. The UEFI is responsible for loading the boot software that runs when the computer is turned on. In theory, some malware that wants to make itself persistent and avoid detection could replace/change the boot software to inject itself there.
Secure Boot is sold as a way to prevent this. The way it works, at a high level, is that the UEFI has a set of trusted keys that it uses to verify the boot software it loads. So, on boot, the UEFI checks that the boot software it's loading is signed by one of these keys. If the signature check fails, it will refuse to load the software, since it was clearly tampered with.
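That check loop can be sketched in a few lines of Python. This is a toy model with hypothetical names: real firmware stores X.509 certificates in its "db" variable and verifies RSA signatures over PE binaries; HMAC stands in here only to illustrate the boot-time decision.

```python
import hashlib
import hmac

# Toy stand-in for the firmware's database of trusted keys. Enrolling
# your own key (as described elsewhere in the thread) amounts to adding
# an entry here.
TRUSTED_KEYS = {
    "vendor-key": b"vendor-secret",
    "my-custom-key": b"owner-secret",
}

def sign(image: bytes, key: bytes) -> bytes:
    # Fake "signature": real Secure Boot uses asymmetric crypto instead.
    return hmac.new(key, image, hashlib.sha256).digest()

def firmware_boot(image: bytes, signature: bytes) -> bool:
    """Return True (load the bootloader) only if a trusted key verifies it."""
    for _name, key in TRUSTED_KEYS.items():
        if hmac.compare_digest(sign(image, key), signature):
            return True
    return False  # signature check failed: refuse to load

bootloader = b"\x7fELF...bootloader-code"
ok_sig = sign(bootloader, TRUSTED_KEYS["my-custom-key"])
assert firmware_boot(bootloader, ok_sig)                     # boots
assert not firmware_boot(b"tampered" + bootloader, ok_sig)   # refused
```

The point of the sketch is just the control flow: whoever controls the contents of `TRUSTED_KEYS` controls what the machine will agree to boot, which is exactly the concern raised below.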
So far so good, so what's the problem? The problem is, who picks the keys that the UEFI trusts? By default, the trusted keys are going to be the keys of the big tech companies. So you would get the keys from Microsoft, Apple, Google, Steam, Canonical, etc, i.e. of the big companies making OSes. The worry here is that this will lock users into a set of approved OSes and will prevent any new companies from entering the field. Just imagine telling a not very technical user that to install your esoteric distro they need to disable something called secure boot hahaha.
And then you can start imagining what would happen if companies start abusing this, like Microsoft and/or Apple paying to make sure only their OSes load by default. To be clear, I'm not saying this is happening right now. But the point is that this is a technology with a huge potential for abuse. Some people, myself included, believe that this will result in personal computers moving towards a similar model to the one used in mobile devices and video game consoles where your device, by default, is limited to run only approved software which would be terrible for software freedom.
Do note that, at least for now, you can disable the feature or add custom keys. So a technical user can bypass these restrictions. But this is yet another barrier a user has to bypass to get to use their own computer as they want. And even if we as technical users can bypass this, it will result in us being fucked indirectly. The best example of this is the current Attestation APIs in Android (and iOS, but iOS is such a closed environment that it's just beating a dead horse hahahah). In theory, you can root and even degoogle (some) Android devices. But in practice, this will cause several apps (banks in particular, but others too) to stop working because they detect a modified device/OS. So while my device can technically be opened, in practice I have no choice but to continue using Google's bullshit. They can afford to do this because 99% of users will just run the default configuration they are provided, so they are ok with losing the remaining users.
But at least we are stopping malware from corrupting boot, right? Well, yes, assuming correct implementations. But as you can see from the article, that's not a given. And even if it works as advertised, we have to ask ourselves how much it protects us in practice. For your average Joe, malware that can access user space is already enough to fuck you over. The most common example is ransomware that will just encrypt your personal files without needing to mess with the OS or UEFI at all. Similarly, a keylogger can do its thing without messing with boot. Etc, etc. For an average user, all this Secure Boot stuff is just security theater; it doesn't stop the real security problems you will encounter in practice. So, IMO, it's just not worth it given the potential for abuse and how useless it is.
It's worth mentioning that the equation changes for big companies and governments. In their case, other well-funded agents are willing to invest a lot of resources to create very sophisticated malware, like the malware used to attack Iran's nuclear enrichment plants. For them, all this may be worth it to lock down their software as much as possible. But they are playing an entirely different game than the rest of us, and their concerns should not infect our day-to-day lives.
"And then you can start imagining what would happen if companies start abusing this, like Microsoft and/or Apple paying to make sure only their OSes load by default."
I'm convinced that this is definitely the end goal for Microsoft, especially with the windows 11 TPM requirement. We are in the early stages of their plan to mold the PC ecosystem to be more like mobile. This is the biggest reason I decided to move to Linux - it's now or never in my opinion.
This is the most open time period for hardware as far as options go since like, the 90s. Microsoft isn't taking away options.
Thanks a lot for the summary!
It is based on the assumption that every piece of code in the entire stack from the UEFI firmware to the operating system userspace is free of vulnerabilities
That doesn't mean it's useless. All software is prone to vulnerabilities and exploits, but that doesn't mean it's not worth using at all. TrueCrypt was a good solution for its time, even if we now know it is pretty vulnerable.