NamelessVegetable

joined 1 year ago
[–] [email protected] 1 points 1 year ago

Not really, no. Fugaku is a pretty efficient system, more so than its competitors. Even if it's going to be outclassed by larger-scale systems, it's still more efficient.

[–] [email protected] 1 points 1 year ago (2 children)

But can a chunk of metal modulate its thermal conductivity?

[–] [email protected] 1 points 1 year ago

I think you're overstating the impact of this development. The US, PRC, EU, and Japanese national programs are sticking with systems housed in supercomputer centers, universities, and government facilities for the foreseeable future. The main reasons for this are practicality (how exactly does one migrate several PB of data to the cloud, for instance?), security, and politics. Thus, I'd expect things at the top end of supercomputing to stay more or less the same. Industrial users who occasionally want cheap supercomputing might be pleased, but that's not what the leading supercomputing centers do.

[–] [email protected] 1 points 1 year ago (4 children)

Meh. Microsoft didn't even bother running HPCG on it. Fugaku is still #1 on HPCG, followed by Frontier.

[–] [email protected] 1 points 1 year ago

The ESA had been using a series of SPARCv8 cores designed specifically for aerospace applications (with all the associated fault tolerance, formal verification, and radiation hardening), the LEON series, since the early 1990s, back when SPARCv8 was new (it was introduced in the late 1980s). The LEON line has since dropped SPARCv8 (maybe five years ago) and adopted RISC-V instead, because SPARC and its ecosystem are dead. So it doesn't seem to me that NASA is doing anything particularly radical in adopting RISC-V.