[-] [email protected] 29 points 2 months ago

Harder to write compilers for RISC? I would argue that CISC is much harder to design a compiler for.

That being said, out-of-the-box RISC-V lacks standardized vector/streaming instructions, which may hurt performance. But compiler-design-wise, it's much easier to write a functional compiler for RISC-V than for the nightmare that is x86.
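To make that concrete, here's a toy illustration (the assembly in the comments is approximate, not verbatim compiler output):

    /* One C load, and roughly what instruction selection looks like per ISA. */
    long get(long *arr, long i) {
        return arr[i];
    }
    /* RISC-V (small, orthogonal ISA -- selection is nearly mechanical):
     *   slli a1, a1, 3        # i * sizeof(long)
     *   add  a0, a0, a1       # arr + byte offset
     *   ld   a0, 0(a0)        # load, result in a0
     *
     * x86-64 (the compiler must pick from a large, irregular menu of
     * addressing modes, encodings, and instruction forms):
     *   mov rax, [rdi + rsi*8]
     */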

[-] [email protected] 33 points 4 months ago

My issue with them is that they make their lower-tier plans too enticing. I've wanted to upgrade to pro for all the fancy gizmos, but the basic mail plan is just too good a deal to give up.

[-] [email protected] 27 points 5 months ago

Here you dropped this:

#define ifnt(x) if (!(x))
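Hypothetical usage, for anyone brave enough to ship it:

    #define ifnt(x) if (!(x))

    /* reads almost like English: "if not buf, bail out" */
    int validate(const char *buf) {
        ifnt (buf) return -1;
        return 0;
    }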
[-] [email protected] 22 points 6 months ago* (last edited 6 months ago)

An API is an official interface to connect to a service, usually designed to make it easier for one application to interact with another. This is usually kept stable and provides only the information needed to serve the request of the application requesting it.

A scraper is an application that scrapes data from a human readable source (i.e. website) to obtain data from another application. Since website designs can update frequently, these scrapers can break at any time and need to be updated alongside the original application.

Reddit clients interact with an API to serve requests, but Newpipe scrapes the YouTube webpage itself. So if YouTube changes their UI tomorrow, Newpipe could very easily break. No one wants to build a bunch of stuff on top of a fragile base like that. It's just way too much work for very little payoff.
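To make the contrast concrete, a toy sketch (all strings and markup are made up):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* The API path: a stable, documented contract. */
        const char *api_json = "{\"title\":\"cat video\",\"views\":42}";

        /* The scraper path: whatever the site's HTML happens to be today... */
        const char *html_monday  = "<span class=\"title\">cat video</span>";
        /* ...until a redesign ships and the markup changes underneath you. */
        const char *html_tuesday = "<h1 data-id=\"video-title\">cat video</h1>";

        /* A scraper keyed on Monday's markup silently finds nothing on Tuesday: */
        printf("monday : %s\n", strstr(html_monday,  "class=\"title\"") ? "found" : "broken");
        printf("tuesday: %s\n", strstr(html_tuesday, "class=\"title\"") ? "found" : "broken");
        (void)api_json; /* the JSON keys, by contrast, are part of the API contract */
        return 0;
    }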

It's like I can enter my house through the door or the chimney. I would always take the door since it's designed for human entry. I could technically use the chimney if there's no door. But if someone lights up the fireplace I'd be toast.

[-] [email protected] 21 points 7 months ago* (last edited 7 months ago)

Having a good, dedicated e-reader is a hill that I would die on. I want a big screen, physical buttons, light weight, multi-week battery life, and an e-ink display. Reading 8 hours on my phone makes my eyes go twitchy. And TBH it's been a pain finding something that supports all that and has a reasonably open ecosystem.

When reading for pleasure, I'm not gonna settle for a "good enough" experience. Otherwise I'm going back to paper books.

[-] [email protected] 34 points 7 months ago* (last edited 7 months ago)

The argument is that processing data physically near where it is stored (known as near-data processing, or NDP, as opposed to traditional designs where data sits off-chip and has to be moved to the CPU) is more power-efficient and lower-latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with an NDP design that can do complex computations much faster than before.

Personally, I'd say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute) are notoriously hard to work with. It takes a CS master's level of understanding of the architecture to write a program in the P4 language (which doesn't allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it's worthless if most programmers on the job market can't work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it's just way too costly. People want to buy new hardware, install it, compile existing code, and see the big numbers go up (or down, depending on which numbers).

I would say the future is a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) attached as dedicated accelerators. Existing application code probably will not be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM, if your computer supports it. This way, your standard 9-to-5 programmer can still work like they always have and leave the fancy performance-optimization stuff to a few experts.
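A minimal sketch of that transparent-offload idea; accel_available() and accel_memcpy() are hypothetical stand-ins for a vendor runtime driving a data mover on the DIMM:

    #include <stddef.h>
    #include <string.h>

    typedef void *(*memcpy_fn)(void *dst, const void *src, size_t n);

    /* Plain CPU copy: the fallback everyone has today. */
    static void *cpu_memcpy(void *dst, const void *src, size_t n) {
        return memcpy(dst, src, n);
    }

    /* Hypothetical offloaded path: would enqueue a descriptor to the DIMM's
     * data mover. Stubbed with a CPU copy here. */
    static void *accel_memcpy(void *dst, const void *src, size_t n) {
        return memcpy(dst, src, n);
    }

    /* Hypothetical probe for the accelerator. */
    static int accel_available(void) { return 0; }

    static memcpy_fn active_memcpy = cpu_memcpy;

    /* The library picks the fast path once at startup; application code
     * calls the same function either way and never knows the difference. */
    void copy_init(void) {
        if (accel_available())
            active_memcpy = accel_memcpy;
    }

    void *smart_memcpy(void *dst, const void *src, size_t n) {
        return active_memcpy(dst, src, n);
    }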

[-] [email protected] 19 points 8 months ago

No, the 2037 problem is fixing the Y2K38 problem in 2037.

Before that there's no problem :)
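For anyone who wants to see the wraparound: a minimal C demo of the 32-bit time_t limits (assumes gmtime() handles negative timestamps, as glibc does):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* With a 32-bit signed time_t, the clock maxes out 2^31 - 1 seconds
         * after the 1970 epoch... */
        time_t last = (time_t)INT32_MAX;    /* 2038-01-19 03:14:07 UTC */
        /* ...and one tick later wraps all the way back around. */
        time_t wrapped = (time_t)INT32_MIN; /* 1901-12-13 20:45:52 UTC */

        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&last));
        printf("last valid second: %s UTC\n", buf);
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&wrapped));
        printf("one tick later   : %s UTC\n", buf);
        return 0;
    }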

[-] [email protected] 25 points 9 months ago

  1. Attempt to plug in the USB-A device.
  2. If you succeed, end procedure.
  3. Otherwise, destroy the reality you currently reside in. All remaining universes are the ones where you plugged in the device on the first try.

That wasn't so hard, was it?

[-] [email protected] 25 points 9 months ago

The year is 5123. We have meticulously deciphered texts from the early 21st century, providing us with a wealth of knowledge. Yet one question still eludes us to this day:

Who the heck is Magic 8. Ball?

[-] [email protected] 31 points 10 months ago* (last edited 10 months ago)

I'm just going to put this information here: the use case for 46Gbps Wi-Fi 7 is going to be extremely niche. There is almost no legitimate use case where you can achieve that speed on your phone.

The problem here is that:

  1. The majority of internet traffic is TCP
  2. TCP processing for a single connection is serialized (i.e. that connection's speed is bottlenecked by one CPU core)
  3. The bottleneck is on the receiver side (i.e. the downloader)
  4. TCP is too complex for efficient receiver-side hardware offloads (i.e. you can't work around this by adding more special-purpose hardware)

What does this mean?

Your connection speed on a Wi-Fi 7 device WILL be bottlenecked by your single-core CPU speed, even if you are doing absolutely nothing except moving data. This assumes a single TCP connection (e.g. downloading a file from a website), but that's the majority of use cases unless you are running a server (in this case, on your phone).

I haven't checked what CPU the Pixel 8 uses, but my Pixel 7 has a Cortex-A78. I don't have raw numbers handy for the 3GHz A78, but I do have data from a 2GHz A53 connected to a 100Gbps Ethernet NIC: around 8-9Gbps for a single connection. The A78 generally outperforms the A53 by about 1.5x (at least that's how they compare on Nvidia BlueField DPUs), so we can assume roughly 12-14Gbps max for a single connection with Wi-Fi 7 running on a state-of-the-art ARM CPU.

That is still nowhere near 46Gbps. It's like mounting a Vulcan Minigun on a bicycle.

To use the full Wi-Fi bandwidth, you would need multiple connections running on different cores (a rough sketch of that idea is below). And that's not even counting the switches/servers connected to the Wi-Fi AP. Unless you are running a Redis server on your phone, I see no reason for Wi-Fi 7 until the rest of the hardware is upgraded significantly.
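Something like this, where fetch_range() is a made-up stand-in for "open a socket, send an HTTP Range request, read the bytes":

    #include <pthread.h>
    #include <stdio.h>

    #define N_CONN 4  /* one connection (and thus one core) per chunk */

    struct job { long long offset, length; };

    /* Made-up stand-in: real code would socket(), connect(), send an HTTP
     * "Range: bytes=..." header, and recv() in a loop. */
    static void *fetch_range(void *arg) {
        struct job *j = arg;
        printf("fetching %lld bytes at offset %lld\n", j->length, j->offset);
        return NULL;
    }

    int main(void) {
        pthread_t tid[N_CONN];
        struct job jobs[N_CONN];
        long long total = 4LL * 1024 * 1024 * 1024;  /* a 4GiB file */
        long long chunk = total / N_CONN;

        /* Each connection's TCP receive path lands on its own core, so the
         * aggregate is no longer capped by a single CPU. */
        for (int i = 0; i < N_CONN; i++) {
            jobs[i] = (struct job){ i * chunk, chunk };
            pthread_create(&tid[i], NULL, fetch_range, &jobs[i]);
        }
        for (int i = 0; i < N_CONN; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }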

[-] [email protected] 40 points 11 months ago* (last edited 11 months ago)

ELI5, or ELIAFYCSS (explain like I'm a first-year CS student): modern x86 CPUs have lots of specialized instructions for specific functionality. One group of these is "vector instructions", which are optimized for running the same operation (e.g. matrix multiply-add) on lots of data at once (e.g. 8 or 16 floats at a time). These instructions were added gradually over time, so there are multiple "sets" of vector instructions: MMX, AVX, AVX2, AVX-512, AMX...

While the names all sound different, these vector instruction sets work in a similar way: they keep internal state in hidden registers that the programmer cannot access directly. So to the user (application programmer or compiler designer) it looks like a simple function that does what you need without having to micromanage registers. Neat, right?
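For a feel of what that "simple function" looks like from C (via compiler intrinsics; compile with gcc -mavx):

    #include <immintrin.h>

    /* Add two arrays of 8 floats with one vector instruction. To the
     * programmer it's just a function call; the 256-bit registers behind
     * it (and any internal CPU buffers) are managed by the hardware. */
    void add8(const float *a, const float *b, float *out) {
        __m256 va   = _mm256_loadu_ps(a);       /* load 8 floats */
        __m256 vb   = _mm256_loadu_ps(b);
        __m256 vsum = _mm256_add_ps(va, vb);    /* 8 additions at once */
        _mm256_storeu_ps(out, vsum);            /* store 8 results */
    }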

Well, the problem is that somewhere along the line someone found a bug: when using instructions from the AVX2/AVX-512 sets, if you combine them with a particular ordering of branch instructions (a.k.a. Jcc, basically the if/else of assembly), you get to see what's inside those hidden registers, including data from other programs. Oops. So Charlie's "Up, Up, Down, Down, Left, Right, Left, Right, B, B, A, A" program using AVX/Jcc lets him see what Alice's "encrypt this zip file with this password" program is doing. Uh oh.

So, that sounds bad. But let's take a step back: how badly does this affect existing consumer devices (i.e. non-Xeon, non-Epyc CPUs)?

Well, good news: until recently (13th gen / Zen 4, and Zen 4 isn't affected), AVX-512 wasn't available on most Intel/AMD consumer CPUs. So 1) your CPU most likely doesn't support it, and 2) even if your CPU supports it, most pre-compiled programs won't use it, because they would crash on everyone else's computer that doesn't have AVX-512. AVX-512 is a non-issue unless you're running finite element analysis programs (e.g. LS-DYNA) for fun.

AVX2 has a similar problem: while it was released in 2013, some low-end CPUs (e.g. Intel Atom) didn't support it for a long time (until this year, I think?). So most pre-compiled programs aren't built with AVX2 enabled. This means that whatever game you are running now, you probably won't see a performance drop after patching, since your computer/program was never using the optimized vector instructions in the first place.

So, the effect on consumer devices is minimal. But what do you need to do to ensure that your PC is secure?

Three different ideas off the top of my head:

  1. BIOS update. The CPU has some low-level firmware called microcode, which is bundled with the BIOS. The new patched version adds additional checks to ensure no data is leaked.

  2. Update the microcode package in Linux. The microcode can also be loaded from the OS. If you have an up-to-date version of the intel-microcode package, this achieves the same thing as (1).

  3. Re-compile everything without AVX2/AVX-512. If you're running something like Gentoo, you can simply tell GCC not to emit AVX2/AVX-512 instructions regardless of whether your CPU supports them (a sketch of checking what your CPU supports is below). As mentioned earlier, the performance loss is probably fine unless you're doing some serious math (FEA/AI/etc.) on your machine.
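As a sketch of (3): the relevant GCC flags, plus a runtime check of what your CPU actually reports (uses the GCC/Clang __builtin_cpu_supports() builtin):

    /* Build with the vector extensions disabled, e.g.:
     *   gcc -O2 -mno-avx2 -mno-avx512f -o app app.c
     * And/or check at runtime what the CPU reports via cpuid: */
    #include <stdio.h>

    int main(void) {
        printf("AVX2    : %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
        printf("AVX-512F: %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
        return 0;
    }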

[-] [email protected] 22 points 11 months ago

*that one NetBSD user bursts into flames*

