top 11 comments
[–] [email protected] 1 points 6 days ago

I thought RISC-V was dead.

[–] [email protected] 42 points 1 week ago (4 children)

I find it funny that people around me think I'm a computer expert, yet I just tried to read this and couldn't comprehend sh*t.

[–] [email protected] 24 points 1 week ago

It's alright, I program in C++ and barely understand this shit either. Kernel/OS devs are a different breed.

[–] Croquette 14 points 1 week ago (1 children)

Unless you work right at the boundary between firmware and software, this isn't something you deal with a lot.

When you transfer files or data into a memory space, you can't drop the whole file/data into memory at once, because resources on the CPU/MCU are limited. It wouldn't make sense to have a page as big as your biggest theoretical data size.

The page size determines how much data can be transferred into memory at a time.

In terms of performance, writing the pages to memory is usually the bottleneck. So with 4K pages you need 16 times as many writes as with 64K pages, which makes the 64K page size perform better.
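A minimal sketch of that arithmetic (not from the comment itself, just an illustration; the 1 MiB transfer size is an arbitrary assumption):

```c
#include <stdio.h>
#include <stddef.h>

/* Number of page-sized writes needed to move `len` bytes,
 * rounding up for a final partial page. */
static size_t writes_needed(size_t len, size_t page_size)
{
    return (len + page_size - 1) / page_size;
}

int main(void)
{
    size_t len = 1 << 20; /* assume a 1 MiB transfer */
    printf("4K pages:  %zu writes\n", writes_needed(len, 4 * 1024));  /* 256 */
    printf("64K pages: %zu writes\n", writes_needed(len, 64 * 1024)); /* 16 */
    return 0;
}
```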

[–] [email protected] 7 points 1 week ago

That's more of a storage thing; RAM does much smaller transfers. For example, DDR5 has two independent 32-bit (4-byte) subchannels with a minimum of 16 transfers in a single burst, so it moves 64 bytes at once (or more). And CPUs don't waste memory bandwidth by transferring more than absolutely necessary, as memory is often the bottleneck even without writing full pages.
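Worked out, the burst math from that comment looks like this (just an illustration of the numbers above):

```c
#include <stdio.h>

int main(void)
{
    int subchannel_width = 4; /* bytes per transfer on one 32-bit DDR5 subchannel */
    int burst_length = 16;    /* minimum transfers per burst (DDR5 BL16) */

    /* One burst moves 4 * 16 = 64 bytes: a single x86 cache line,
     * far smaller than even a 4 KiB page. */
    printf("%d bytes per burst\n", subchannel_width * burst_length);
    return 0;
}
```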

The page size is relevant for memory protection (where the CPU stops the program and hands control back to the operating system if the program tries to do something it's not allowed to do with that memory) and for virtual memory (part of the same machinery, though the two are theoretically independent concepts). The operating system has to keep a table describing what kind of access the program has to which memory, and with bigger pages that table can be much smaller (at the cost of wasted space when the program only needs a small amount of memory of a given kind).
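A minimal Linux/POSIX sketch of both points, using the standard mmap/mprotect calls (an illustration, not anything from the post):

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE); /* e.g. 4096, 16384, or 65536 */
    printf("page size: %ld bytes\n", page);

    /* Map two pages of anonymous read-write memory. */
    size_t len = 2 * (size_t)page;
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Revoke write access to the second page only: protection is
     * tracked per page, so one page is the smallest unit we can change. */
    if (mprotect(buf + page, (size_t)page, PROT_READ) != 0) {
        perror("mprotect"); return 1;
    }

    buf[0] = 42;          /* fine: the first page is still writable      */
    /* buf[page] = 42; */ /* would fault: the OS takes back control here */

    /* Bigger pages also shrink the bookkeeping: mapping 4 GiB takes
     * 4 GiB / 4 KiB = 1,048,576 entries, but only 65,536 with 64 KiB pages. */
    munmap(buf, len);
    return 0;
}
```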

[–] pastermil 12 points 1 week ago

Don't worry, it's quite esoteric to begin with. The only reason I can comprehend this is years of following news like this, on top of my computer science degree.

Also, this doesn't matter (yet) for your daily life.

[–] [email protected] 4 points 1 week ago (1 children)

It's called paging. But an application programmer doesn't really need to know how it works in precise detail.

[–] [email protected] 2 points 6 days ago

It's more for kernel devs and hardware architects.

[–] whyNotSquirrel 5 points 1 week ago

Wow that's a.... wow! Yay Linux! What?

[–] [email protected] 3 points 1 week ago (1 children)

My stupid brain, reading that: "Linux can now use AK47"

Me: the fuck does that mean?

[–] [email protected] 0 points 1 week ago

The time has come, Brother, obviously.