VenditatioDelendaEst

joined 1 year ago
[–] [email protected] 1 points 11 months ago

Ultra-high density chips are exactly what you want to support the largest possible memory amounts and speeds with minimal ranks. With 32 Gib chips, you could build 32 GiB single-rank UDIMMs, or 64 GiB dual-rank.

That means up to 128 GiB of RAM in mini-ITX!
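The capacity arithmetic behind that claim can be sketched out; this assumes a standard non-ECC 64-bit UDIMM built from x8 chips and a mini-ITX board with two DIMM slots:

```python
# Hedged sketch of the DIMM capacity arithmetic (non-ECC UDIMM,
# x8 chip organization, 2-slot mini-ITX board all assumed).
CHIP_DENSITY_GIB = 32            # 32 Gib per DRAM chip
CHIP_WIDTH_BITS = 8              # x8 organization
BUS_WIDTH_BITS = 64              # data bits on a non-ECC UDIMM

chips_per_rank = BUS_WIDTH_BITS // CHIP_WIDTH_BITS        # 8 chips
rank_capacity_gibibytes = chips_per_rank * CHIP_DENSITY_GIB // 8

single_rank_dimm = rank_capacity_gibibytes                # 32 GiB
dual_rank_dimm = 2 * rank_capacity_gibibytes              # 64 GiB
mini_itx_total = 2 * dual_rank_dimm                       # 128 GiB
print(single_rank_dimm, dual_rank_dimm, mini_itx_total)   # 32 64 128
```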

[–] [email protected] 1 points 11 months ago

Get 1 answer in 10 microseconds: use a CPU.

Get 1000 answers in 1000 microseconds: use a GPU.
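Plugging in the quip's own (illustrative, not benchmarked) numbers makes the tradeoff concrete — the GPU is 100x worse on latency but 10x better on throughput:

```python
# Latency vs. throughput, using the comment's illustrative numbers.
cpu_latency_us, cpu_batch = 10, 1        # 1 answer in 10 us
gpu_latency_us, gpu_batch = 1000, 1000   # 1000 answers in 1000 us

cpu_throughput = cpu_batch / cpu_latency_us   # answers per microsecond
gpu_throughput = gpu_batch / gpu_latency_us

print(cpu_throughput)  # 0.1 -> CPU wins on latency for a single answer
print(gpu_throughput)  # 1.0 -> GPU wins 10x on throughput
```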

[–] [email protected] 1 points 11 months ago

Yeah, I prevent fast charge by charging from a USB port on my PC. Typically I plug in at 30-40% and stop at 70%. I do have a crack though... in the screen protector, which I will get around to replacing eventually, I swear.

[–] [email protected] 1 points 11 months ago (4 children)

I agree with you 95%, but a case can't stop the battery from eating itself after 500 cycles or 4 years, so that does need to be replaceable (at a workbench, with proper tools, by someone with a modicum of care and patience).

(In fact, to some degree cases make it worse, by holding heat in during charging.)

[–] [email protected] 0 points 11 months ago

There is a lot more to e-waste than just repairability. There are the recycled materials in the initial phone. Quality of the components. Sturdiness of the phone. Do people trade in their phones so they can be recycled? Is there even a trade-in program for this phone? What percentage of the phone is recycled after use?

This doesn't matter. E-waste is a crock of shit. All of the phones you will ever use over your lifetime will fit in your coffin with you, there's nothing seriously poisonous in there (or it wouldn't be safe to carry a phone around in sweaty pockets), and the recoverable raw material value is approximately 0% of the manufacturing cost of a phone.

Apple's "recycling" program is half virtue signal, half sneaky way of keeping devices off the used market. Which, by the way, is the only way real value is ever recovered from old phones. Recycle is the last R for a reason.

How many years does the phone get updates?

This, on the other hand, is very important. The real reason disposable and unreliable phones are bad is that getting a new phone sucks. Search costs suck, transaction costs suck, the "features" that the new phone comes with inevitably suck, and migrating data to a new device sucks. Which is at least partly intentional. Observe one scumbag Android developer cheering about the prospect of users no longer being in control of their own data.

[–] [email protected] 1 points 11 months ago

But do be aware that by building your NAS / homelab around x99 instead of OEM desktop Skylake, you are spending quite a lot of electricity to buy experience with high-port-count platforms.

[–] [email protected] 1 points 11 months ago

Cache is not a different thing than single thread performance. Cache is part of single thread performance.

[–] [email protected] 1 points 11 months ago

* only on Intel, which builds the L3 out of slices attached to each P-core or each E-core cluster (4 E-cores per cluster).

AMD segregates its L3 at the CCX level, so every part made from the same die set has the same L3. There's a bit of a complication with the 12- and 16-core parts, because if all the threads are working on the same data the L3 is effectively 1-CCD-sized, but if they're working on different data (like with make -j, VMs, or some batch jobs), you get the benefit of both CCDs' worth of L3.
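The effective-L3 point above can be sketched numerically, assuming Zen 4 desktop CCDs with 32 MiB of L3 each (as on the 2-CCD 7900X/7950X):

```python
# Illustrative sketch of the comment's effective-L3 claim,
# assuming 32 MiB L3 per CCD and a 2-CCD part (e.g. 7950X).
L3_PER_CCD_MIB = 32
CCDS = 2

# All threads sharing one working set: the hot data effectively
# fits in only one CCD's L3 at a time.
effective_shared_mib = L3_PER_CCD_MIB            # ~32 MiB

# Independent working sets spread across CCDs (make -j, VMs,
# batch jobs): each CCD's L3 caches its own data, so the total
# capacity is usable.
effective_independent_mib = CCDS * L3_PER_CCD_MIB  # 64 MiB
print(effective_shared_mib, effective_independent_mib)
```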

[–] [email protected] 1 points 11 months ago
  1. Average doesn't matter. If the game I play uses parallelism well, I don't care about the ones that don't.

  2. The difference in boost clock is only ~2%, so anything more than that is either due to core count or less (soft) thermal throttling from spreading the heat across more die area. And since they tested with a 360mm AIO, it's probably not soft throttling.

[–] [email protected] 1 points 11 months ago

100 MHz is only 1.9%, and the L2 cache is private per core. Both the 7600X and the 7800X have 2 MiB L2 cache.
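That 1.9% figure checks out if the comparison is a 100 MHz boost-clock delta on a 5.3 GHz base (an assumption on my part, consistent with the 7600X's rated boost):

```python
# Quick check of the 1.9% figure, assuming a 5.3 GHz vs 5.4 GHz
# boost-clock comparison (100 MHz delta on a 5.3 GHz base).
base_mhz, delta_mhz = 5300, 100
pct = 100 * delta_mhz / base_mhz
print(round(pct, 1))  # 1.9
```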

[–] [email protected] 1 points 11 months ago

Blender seems like it should be pretty close to embarrassingly parallel. I wonder how much of the <100% scaling is due to clock speed, and how much is due to memory bandwidth limitation? 4 memory channels for 64 cores is twice as tight as even the 7950X.

Eyeballing the graphs, it looks like ~4 GHz vs ~4.6 GHz average, which...

4000*64 / (4600*32) = 1.739

Assuming a memory bound performance loss of x, we can solve

4000*64*(1-x) / (4600*32) = 1.64

for x = 5.7%.
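The back-of-envelope algebra above is easy to re-run: take the ideal clock-times-core-count scaling ratio, compare it against the observed 1.64x speedup, and the residual is the assumed memory-bound loss:

```python
# Re-running the comment's back-of-envelope math: 64 cores at
# ~4.0 GHz vs 32 at ~4.6 GHz, against an observed 1.64x speedup.
clocks_ideal = 4000 * 64 / (4600 * 32)   # ideal clock*core scaling
print(round(clocks_ideal, 3))            # 1.739

observed = 1.64
x = 1 - observed / clocks_ideal          # residual loss (memory-bound?)
print(round(100 * x, 1))                 # 5.7 (percent)
```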

[–] [email protected] 1 points 11 months ago (1 children)

Any numbskull can figure out how to do it, given the assigned task of doing it, and there doesn't seem to be any unique value in the various ways of doing it showcased here. (Personally, I'd prefer a very short cable to allow for mechanical tolerances in fan mounting positions and long-term reliable wiping electrical contact.)

I hope the court focuses on two questions: 1) is the bare idea that this is something you'd want to do itself patentable, and 2) is the patent written in a way that covers that?
