linuxmemes


Hint: :q!


Community rules

1. Follow the site-wide rules

2. Be civil
  • Understand the difference between a joke and an insult.
  • Do not harass or attack members of the community for any reason.
  • Leave remarks of "peasantry" to the PCMR community. If you dislike an OS/service/application, attack the thing you dislike, not the individuals who use it. Some people may not have a choice.
  • Bigotry will not be tolerated.
  • These rules are somewhat loosened when the subject is a public figure. Still, do not attack their person or incite harassment.

3. Post Linux-related content
  • Including Unix and BSD.
  • Non-Linux content is acceptable as long as it makes a reference to Linux. For example, the poorly made mockery of sudo in Windows.
  • No porn. Even if you watch it on a Linux machine.

4. No recent reposts
  • Everybody uses Arch btw, can't quit Vim, and wants to interject for a moment. You can stop now.

    Please report posts and comments that break these rules!


    Important: never execute code or follow advice that you don't understand or can't verify, especially here. The word of the day is credibility. This is a meme community -- even the most helpful comments might just be shitposts that can damage your system. Be aware, be smart, don't fork-bomb your computer.


    Context for newbies: Linux refers to network adapters (wifi cards, ethernet cards, etc.) by so-called "interfaces". For the longest time, interface names were assigned based on the type of device and the order in which the system discovered it. So eth0, eth1, wlan0, and wwan0 are all possible interface names. This, however, can be an issue: "the order in which the system discovered it" is not deterministic, which means hardware can switch interface names across reboots. That's a real problem for things like servers, which rely on interface names staying the same.
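
    (If you want to see which names your system is using right now, something like this will show them; the exact list obviously depends on your hardware:)

        # list network interfaces and their current names
        ip -brief link show
        # or look at the kernel's view directly
        ls /sys/class/net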

    The solution to this issue is to assign custom names based on MAC address. The MAC address is hardcoded into the network adapter and will not change. (There are other ways to do this as well, such as setting udev rules.)
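
    A minimal sketch of the udev route, with a made-up MAC address and name (substitute your own):

        # /etc/udev/rules.d/70-net-names.rules  (example MAC and name)
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="wan0"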

    Redhat, however, found this solution too simple and instead devised their own scheme for assigning network interface names. It fails to solve the problem it was created for, while making interface names much harder to type and remember.

    To disable predictable interface naming and switch back to the old scheme, add net.ifnames=0 and biosdevname=0 to your boot parameters.
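
    On a typical GRUB setup that usually means editing /etc/default/grub and regenerating the config, roughly like this (adjust for your distro and bootloader):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet net.ifnames=0 biosdevname=0"

        # then regenerate the grub config (path and command vary by distro)
        sudo grub-mkconfig -o /boot/grub/grub.cfg
        # or, on Debian/Ubuntu:
        sudo update-grub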

    The template for this meme is called "stop doing math".

    [–] [email protected] 74 points 6 months ago (6 children)

    It's amazing how many linux problems stem from 'Redhat, however, found this solution too simple and instead devised their own scheme'. Just about every over complex, bloated bit of nonsense we have to fight with has the same genesis.

    [–] [email protected] 29 points 6 months ago (3 children)

    What I really don't understand is why distro maintainers feel the need to actually go along with these changes. Like, sure, if this predictable interface naming thing worked as intended, I can definitely see how it can be useful for server administrators. You could just hardcode the automatic interface names instead of assigning them manually in /etc/mactab. But why would the rest of us ever need this? Most personal machines have at most one wifi card and one ethernet device, so wlan0 and eth0 are perfectly predictable. And even if you have multiple wifi or ethernet adapters, your networking is probably handled by network-manager, so you never actually have to put interface names into config files. Why force enterprise-grade bloat on users who just want a simple desktop experience?
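
    (For reference, the mactab format is just name-to-MAC pairs, read by nameif from net-tools, assuming your init scripts actually run it. The MACs below are made up:)

        # /etc/mactab — one "name MAC" pair per line (example values)
        lan0  00:11:22:33:44:55
        dmz0  66:77:88:99:aa:bb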

    [–] [email protected] 15 points 6 months ago (1 children)

    As to why distro maintainers go along, if you had to vet every time the network stack updated and make sure it doesn't break your custom solution to predictable naming, you'd probably just go along with it and let anyone that needed it devise and maintain their own solution. 99% of users won't worry about it.

    [–] [email protected] 7 points 6 months ago

    No need for a custom solution, we already had ways to make predictable names that worked better than this. Giving each interface a name that represents its job makes life so much easier when you have several; naming them after which PCI bus they're on does not.

    [–] [email protected] 4 points 6 months ago (2 children)

    Personally I'd do away with NetworkManager too and just configure the interfaces directly, but that might just be me being old and grumpy!

    I think most distros go along because their upstream did. There are comparatively few 'top level' distributions, the main ones (by usage) being Redhat and Debian. Most everything else branches from those. Redhat's got enough clout on the market that there's a sort of pull towards complying with it just to not be left out.

    I use Debian, but I think they're crazy for swallowing everything Redhat pushes; they could easily stick to the cleaner options and have a better system for it. At least they let you opt out of systemd, so life is a little more tolerable.

    [–] [email protected] 6 points 6 months ago (2 children)

    I'd do away with network-manager on a stationary system too, but I'm on a laptop, and unless there's some trick I don't know about, configuring wifi by hand for every new network I come across sounds like a bit of a pain. Especially for corporate/institutional networks that use fancy things like PEAP.

    [–] [email protected] 2 points 6 months ago (1 children)

    If by "configuring wifi by hand" you mean writing config files by hand, that's actually not necessary with plain wpa_supplicant too. There is wpa-gui (or wpa-cute if you prefer Qt over GTK), which is basically a GUI frontend to wpa_supplicant, which makes adding new networks nearly as easy as with NetworkManager. But it's a far less modern looking UI than the NM frontends.

    [–] [email protected] 2 points 6 months ago (1 children)

    That's fair, it does make sense to use it on a laptop, but it really should be the sort of thing you add when needed rather than having it jammed in whether it's useful or not.

    Every time I need to do something even slightly different to a basic setup I find myself inventing new curses for those who screwed things up with these overblown, over complex, minimally functional abominations. Just give me vi and the basic configuration files and let me get on with it!

    [–] [email protected] 2 points 6 months ago (1 children)

    I find myself inventing new curses for those who screwed things up with these overblown, over complex, minimally functional abominations

    Gosh, tell me about it. I once tried writing a custom wifi signal strength indicator app that got its information from network-manager. Apparently the only way to programmatically communicate with network-manager is through dbus, which is just terrible. Little to no documentation, poor support for any language other than C/C++, and once you do get it working, it's the most disgusting and overly verbose code you've ever seen, just to query the status of the wifi card. They could've exposed the API through raw unix sockets or something, but nope, they had to reinvent the wheel on that one as well.
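
    (For the record, poking it from the shell isn't too bad; the pain is doing the same thing "properly" from code. A rough sketch with busctl, where the device and access-point object paths are just examples you'd have to discover first:)

        # ask the wifi device for its active access point...
        busctl get-property org.freedesktop.NetworkManager \
            /org/freedesktop/NetworkManager/Devices/2 \
            org.freedesktop.NetworkManager.Device.Wireless ActiveAccessPoint

        # ...then ask that access point for its signal strength (0-100)
        busctl get-property org.freedesktop.NetworkManager \
            /org/freedesktop/NetworkManager/AccessPoint/7 \
            org.freedesktop.NetworkManager.AccessPoint Strength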

    Just give me vi and the basic configuration files and let me get on with it!

    I'll take this opportunity to shill for Void Linux, it sounds like exactly what you're describing. I've been a happy user for like 5 years now. I particularly like how nothing ever breaks, because there's not much to break on such a minimal system.

    ...well, actually, a few things did break over the years, but most of those were due to user error haha.

    [–] [email protected] 4 points 6 months ago (1 children)

    In news that will shock no-one, dbus was, of course, initially created by a Redhat engineer. I get the idea of having a general purpose bus that everything can communicate on, but they somehow managed to even make that complex.

    You make a compelling case for Void Linux. I use Debian or a RHEL derivative for work, primarily so there's at least a chance to hand systems off to someone else to maintain; the lesser-known distros seem to be met with blank looks.

    I want to give NixOS a try sometime, as I like the idea of declaratively defining the system.

    [–] [email protected] 3 points 6 months ago* (last edited 6 months ago) (1 children)

    I want to give NixOS a try sometime, as I like the idea of declaratively defining the system

    That seems to be even more convoluted and complex.

    "Just one more abstraction layer, I swear!"

    I'm a NixOS noob by the way, so please correct me if I'm wrong.

    [–] [email protected] 3 points 6 months ago (1 children)

    I think the difference is the level it's happening at. As I said, I haven't tried it yet, but it looks like a simple, unfussy and minimal distribution that you then add functionality to via configuration. Having that declarative configuration means it's easy to test new setups, roll back changes and even easily create modified configuration for other servers.

    [–] [email protected] 1 points 5 months ago

    I use NixOS on my home server, but I'm looking to switch it to Void as well. For me personally, I just realized that it's easier to set everything up with shell scripts and docker-compose. But that's just my personal experience; by all means go ahead and try out NixOS if you have the time. It has lots of unique features that you can't replicate with "just a bunch of shell scripts". This video does a great job of selling NixOS. Maybe my favourite part of NixOS is how they make "shortcuts" for a lot of common tasks. For example, setting up a Let's Encrypt SSL certificate for your webserver with auto-renewal can be done in just two lines of config.
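
    For anyone curious, the sort of thing meant here looks roughly like this (option names from memory, domain made up):

        # configuration.nix (rough sketch)
        security.acme.acceptTerms = true;
        security.acme.defaults.email = "admin@example.org";
        services.nginx.virtualHosts."example.org" = {
          enableACME = true;
          forceSSL = true;
        };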

    [–] [email protected] 1 points 6 months ago* (last edited 6 months ago) (1 children)

    Personally I'd do away with NetworkManager too and just configure the interfaces directly

    Connman and iwd have nice graphical interfaces btw. I went that route after NM misbehaved and I couldn't figure out why (same for systemd and s6/dinit after systemd-dnsd threw a fit).

    [–] [email protected] 2 points 6 months ago (1 children)

    I tried using connman to set up a wireguard connection once. It was not a good experience and ultimately led nowhere, due to missing feature support.

    [–] [email protected] 1 points 6 months ago

    Eduroam needs manual configuration, but otherwise I don't see what could be missing. And the CLI works just like bluetoothctl.

    [–] [email protected] 21 points 6 months ago (1 children)

    It's amazing how many of those started with Lennart, too.

    [–] [email protected] 14 points 6 months ago (1 children)

    He's definitely off my Christmas card list. He seems desperate to leave a legacy, but he keeps trying to turn Linux into Windows instead.

    [–] [email protected] 2 points 6 months ago (1 children)

    If anything, he gets most of his inspiration from MacOS.

    [–] [email protected] 2 points 6 months ago (3 children)

    He may have taken some ideas from there, but I still see more Windows-like ideas. We're one bad decision away from systemd-regedit. If that happens, I might just give up completely.

    [–] [email protected] 2 points 6 months ago (2 children)

    Considering how much systemd breaks the concept of "everything is a file", this would not surprise me in the least

    [–] [email protected] 3 points 5 months ago* (last edited 5 months ago)

    "everything is a file" is such a godsend. It makes absolutely everything so much easier and intuitive. I remember trying to get an old dot matrix printer to work using a parallel-to-usb adaptor cable. Without reading any documentation or having any prior experience I tried echo testing12345 > /dev/lp0 and it just worked lol. Meanwhile my friend spent like half an hour digging in windows gui settings trying to figure out how to print to a parallel printer.

    I also posted about this before, but a while back I had to configure my system so that a non-root user could start and stop a root daemon without sudo. On a runit system, all you have to do is change the permissions of some control files and it works. On systemd? When I looked it up, the simplest solution involved writing a polkit policy in JavaScript 🤮
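
    (Rough sketch of the runit side, assuming a hypothetical service called "mydaemon" and a "svusers" group created for this:)

        # let members of a dedicated group control the service without sudo
        groupadd svusers
        usermod -aG svusers alice    # example user
        # (on some setups supervise/ is a symlink into /run; adjust the path accordingly)
        chgrp -R svusers /etc/sv/mydaemon/supervise
        chmod -R g+rw /etc/sv/mydaemon/supervise
        chmod g+x /etc/sv/mydaemon/supervise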

    [–] [email protected] 1 points 6 months ago

    cries It's amazing how much damage they've done to the linux ecosystem. Not just badly thought out concepts, but the amount of frustration and annoyance they caused by ramming it into existence and the cynicism it's created.

    [–] [email protected] 6 points 6 months ago (2 children)

    You're not wrong. But generally the idiocy is in response to berserkness elsewhere; madness follows…

    [–] [email protected] 1 points 6 months ago (1 children)

    I'm with our binary friend; the systems they try to replace tend to be time tested, reliable and simple (if not necessarily immediately obvious) to manage. I can't think of a single instance where a Redhat-ism is better than, or even equivalent to, what we already have. In each case it's been a pretty transparent attempt to move from Embrace to Extend, and that never ends well for the users.

    [–] [email protected] 2 points 6 months ago (1 children)

    I can't think of a single instance where a Redhat-ism is better

    I don't know if it would be accurate to call it a redhat-ism, but btrfs is pretty amazing. Transparent compression? Copy-on-write? Yes please! I've been using it for so long now that it's spoiled me lol. Whenever I'm on an ext4 system I have to keep reminding myself that copying a huge file or directory will... you know... actually copy it instead of just making reflinks
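
    (For example, with made-up paths and mount options:)

        # instant "copy" that shares extents until either side is modified
        cp --reflink=always big.iso big-copy.iso

        # transparent compression via a mount option (fstab-style example line)
        # /dev/sda2  /home  btrfs  compress=zstd,noatime  0  0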

    [–] [email protected] 3 points 6 months ago (2 children)

    I've never actually tried BTRFS, there were a few too many "it loses all your data" bugs in the early days, and I was already using ZFS by then anyway. ZFS has more than its fair share of problems, but I'm pretty confident my data is safe, and it has the same upsides as BTRFS. I'm looking forward to seeing how bcachefs works now that it's in the kernel, and I really want to compare all three under real workloads.

    [–] [email protected] 3 points 6 months ago

    Ooh, I've never heard of bcachefs, sounds exciting! I see it supports encryption natively, which btrfs doesn't. Pretty cool!

    Personally I've never had any issues with btrfs, but I did start using it only a couple years ago, when it was already stable. Makes sense that you'd stick with zfs tho, if that's what you're used to.

    [–] [email protected] 1 points 5 months ago (1 children)

    There’s a whole bunch of “it loses all your data” bugs in OpenZFS too, ironically, although it’s way way less fragile than btrfs in general.

    That said, the latter is pretty much solid too, unless you do raid5-like things.

    [–] [email protected] 1 points 6 months ago (1 children)

    I have to disagree with you there. Systemd sucks ass, and so does RPM.

    [–] [email protected] 3 points 6 months ago (2 children)

    so does RPM.

    Careful. Jeff's format gives us really great advantages from an atomic package that we don't have elsewhere. THAT, at least, was a great thing.

    Lennart's Cancer, though, can die in a fire.

    [–] [email protected] 2 points 6 months ago (1 children)

    Atomic updates are amazing. But the package manager is slow as hell. SuSE managed to make zypper much faster using the same package format.

    [–] [email protected] 3 points 6 months ago (1 children)

    The only thing that's slow is dnf's repository check and some migration scripts in certain fedora packages. If that's the price I need to pay to get seamless updates and upgrades across major versions for nearly a decade, then I can live with that.

    [–] [email protected] 2 points 6 months ago

    I'll grant you that; I haven't used dnf so can't speak to its performance.

    [–] [email protected] 3 points 6 months ago (2 children)

    It’s amazing how many linux problems stem from ‘Redhat, however, found this solution too simple and instead devised their own scheme’. Just about every over complex, bloated bit of nonsense we have to fight with has the same genesis.

    Ansible can be heard mumbling incoherently and so, so slowly, from the basement.

    Remember who saw apt4rpm and said "too fast, too immune from Python fuckage, so let's do something slower and more frail". Twice.

    [–] [email protected] 9 points 6 months ago (1 children)

    I won't hear any sass about Ansible. It doesn't scale up to infinity but it's the best there is at what it's good at (modular, small scale declarative orchestration)

    [–] [email protected] 4 points 6 months ago (1 children)

    You totally can scale Ansible, and especially ansible-pull. It will work with thousands of VMs and can be used with other tools to completely automate deployments.
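
    (ansible-pull inverts the usual push model: each node fetches a playbook repo and applies it locally. The repo URL and playbook name below are made up:)

        # run from cron or a systemd timer on each node
        ansible-pull -U https://git.example.org/infra/ansible.git local.yml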

    [–] [email protected] 1 points 6 months ago

    Oh agreed entirely. You can also use different execution strategies to mitigate most performance issues, but it can require some tuning at full enterprise scale.

    [–] [email protected] 3 points 6 months ago

    I do use Ansible, partly because it's easier to tell people that's how you do it rather than "I wrote a shell script, it took half the time to write, it's 20% the size and runs several times faster". To be fair to Ansible, if you're configuring a number of servers at the same time, it's not too bad speedwise as it'll do batches of them in parallel. Configuring one server at a time is agony though.

    [–] [email protected] 3 points 6 months ago (1 children)

    Also, Canonical decided to try and solve the same 'problem' in a different, equally convoluted way.

    [–] [email protected] 3 points 6 months ago

    I try not to think about the things they've done, it's not good for my blood pressure. They had a decent desktop distro, but they seem determined to trash it with terrible decisions.

    [–] [email protected] 1 points 6 months ago (2 children)

    To me it seems they followed the HDD UUID style: rather than sda or hda, which can change at boot, you now have a fixed UUID to work with. I can see this being important on larger server networks.

    [–] [email protected] 2 points 6 months ago (1 children)

    But the SSD/HDD solution doesn't replace /dev/[s|h]da# entirely, just adds a consistent way to set them in configs like fstab. You can still use the old device names so working with them at the command line is still easy for the most part.
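
    (e.g., with a made-up UUID:)

        # find the filesystem UUID
        blkid /dev/sda2

        # then pin it in /etc/fstab instead of relying on the device name
        # UUID=0f51b2a1-3c4d-4e5f-8a9b-0c1d2e3f4a5b  /data  ext4  defaults  0  2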

    [–] [email protected] 1 points 6 months ago (1 children)

    It is, but they can change... so be careful with dd LOL

    [–] [email protected] 1 points 6 months ago (1 children)

    I mean, you should be careful with destructive changes and commands whether the device names can change or not... And since they won't change outside of a reboot, I've yet to run into a scenario where that becomes a problem, as I'm looking at and making sure I'm talking to the correct device before starting anyway.

    [–] [email protected] 1 points 5 months ago

    Yep, I always type the line and take a break, and check the drives in another terminal first, before committing, but the web is full of people going "argg I just dd'd the wrong drive".
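
    (My usual pre-flight check, for what it's worth:)

        # double-check which disk is which before pointing dd at anything
        lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT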

    [–] [email protected] 1 points 6 months ago

    Having consistent interface names on servers that have several is useful, but we already had that option. The interface names they generate are not only hard to remember, but not terribly useful as they're based on things like which PCI slot they're in, rather than what their purpose is. You want interface names like wan0 and DMZ, not enp0s2. Of course, you can set it up to use useful names, but it's more complicated than it used to be, so while the systemd approach looks like a good idea on the surface, it's actually a retrograde step.
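
    For completeness, the "useful names" setup under systemd looks something like this (MAC address and name are made up):

        # /etc/systemd/network/10-wan0.link  (example MAC and name)
        [Match]
        MACAddress=aa:bb:cc:dd:ee:ff

        [Link]
        Name=wan0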