blueredscreen

joined 11 months ago
[–] blueredscreen 1 points 9 months ago

I hope this is just a contingency plan and not an admission of defeat. A $14,000,000,000 contingency plan at that.

[–] blueredscreen 1 points 9 months ago

That's still 30 cents too much for Tim Apple.

[–] blueredscreen 1 points 9 months ago

Obviously, this is more like a lifestyle tech blog than any kind of factual, dry journalistic piece, so you need to apply quite a few grams of salt to that clickbait title. Even granting the conjecture that this is the reason Sam Altman was fired (I prefer the term conjecture because nobody truly knows what happened yet), the completely disorganized way it was handled is something straight out of a comic book.

But, again, this is a generic consumer techno journo. I'm not expecting anything else.

[–] blueredscreen 1 points 9 months ago

> US workers are not lazy but they are relatively uneducated and expensive, which is why other countries prefer not to use them

They're expensive because they prefer to enjoy one of the grand luxuries of modern lifestyles, often referred to as a "work-life balance".

I'm sure those Taiwanese workers living in hostels and working 12-hour shifts ought to educate themselves a little about that.

[–] blueredscreen 1 points 9 months ago (2 children)

> Wrong, that's not what he meant at all. There is zero chance leading nodes will be produced in the US within a decade. There is no talent in the US and they cannot convince the Taiwanese engineers to come here.

Why would they come? Aren't US workers supposedly lazy and entitled because they have coffee breaks and go to the bathroom sometimes? Talk about a culture shock...

[–] blueredscreen 1 points 9 months ago

An incredibly well-written article, almost like a screenplay. Reminds me of TSMC trying to set up shop in Arizona. A different type of factory, obviously, but still.

[–] blueredscreen 1 points 9 months ago (1 children)

Wow, that's super important. Clickbait-tier journalism these days...

[–] blueredscreen 1 points 9 months ago

> I heard this same stuff in the 90s about GPUs. "GPUs are too specialized and don't have the flexibility of CPUs".

> Startups failing doesn't prove anything. There are dozens of startups and there will only be 2-4 winners. Of course MOST are going to fail. Moving in too early before things have settled down or too late after your competitors are too established are both guaranteed ways to fail.

It's relatively convenient to blame your failure on being too smart too early instead of just facing the genuine lack of demand for your product.

> C is considered fast, but did you know that it SUCKS for old CISC ISAs? They are too irregular and make a lot of assumptions that don't mesh well with the compute model of C. C plus x86 is where things changed. x86 could be adapted to run C code well, C compilers then adapted to be fast on x86, then x86 adapted to run that compiled C code better, and the loop goes round and round.

Nothing about modern x86 architectures constitutes any classic model of "CISC" under the hood; the silicon runs machine code as internal ops that, for all intents and purposes, can be abstracted to any ISA.

> This is true for GPUs too. Apple's M1/M2 GPU design isn't fundamentally bad, but it is different from AMD/Nvidia, so the programmer's hardware assumptions and normal optimizations aren't effective. The same applies to some extent to Intel Xe, where they've been spending huge amounts to "optimize" various games (most likely literally writing new code to replace the original game code with versions optimized for their ISA).

What?

> Even if we accept the worst-case scenario and 2-4 approaches rise to the top and each requires a separate ASIC, the situation STILL favors the ASIC approach. We can support dozens of ISAs for dozens of purposes. We can certainly support 2-4 ISAs with 1-3 competitors for each.

Again, they all said that before you, and look where they are now. (hint hint: Nvidia)

[–] blueredscreen 1 points 9 months ago (2 children)

> He's only right in the short term when the technology isn't stable and the AI software architectures are constantly changing.

> Once things stabilize, we're most likely switching to either analog compute-in-memory or silicon photonics, both of which will be far less generic than a GPU, but with such a massive power, performance, and cost advantage that GPUs simply cannot compete.

That's what they said. Nothing about AI is going to stabilize. The pace of innovation is impossible to keep up with. I'm sure things were happy at SambaNova too, until they went bye-bye and Nvidia itself hired their lead architect.

[–] blueredscreen 1 points 9 months ago (7 children)

Jensen Huang said it best: a GPU is the perfect balance between being so specialized that it isn't worthwhile and so general that it becomes just another CPU. And they do have custom silicon when necessary, like the tensor cores, but again, that's in addition to, not a replacement of, the existing hardware. Considering the hundreds of AI accelerator startups (a few of which have already failed), he's right.

[–] blueredscreen 1 points 9 months ago

When considering the entire supply chain, some early electric vehicles were actually a net negative for the world around us. That's not why people bought them anyway; it was the fuel and maintenance cost savings. The same is true here: nobody is buying a Fairphone for "sustainable living". It's just a device that's easier to repair, and the truth is consumers couldn't care less how it's manufactured. The greenwashing stinks.

[–] blueredscreen 1 points 10 months ago

A very confident man. We'll have to see...
