this post was submitted on 16 Sep 2024
68 points (100.0% liked)

TechTakes


None of what I write in this newsletter is about sowing doubt or "hating," but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intelligence boom — which would be better described as a generative AI boom — is (as I've said before) unsustainable, and will ultimately collapse. I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.

Can't blame Zitron for being pretty downbeat in this - given the AI bubble's size and side-effects, it's easy to see how its bursting could have some cataclysmic effects.

(Shameless self-promo: I ended up writing a bit about the potential aftermath as well)

all 34 comments
[–] [email protected] 22 points 2 months ago

I don't toil in the mines of the big FAANG, but this tracks with what I've been seeing in my mine. I also predict it will end with lay-offs and companies collapsing.

Zitron thinks a lot about the biggest companies and how the bubble will ultimately hurt them, which is reasonable. But I think that ironically downplays the scale of the bubble, and in turn, the impacts of it bursting.

The expeditions into OpenAI's financials have been very educational. If I were an investigative reporter, my next move would be to look at the networks created by venture capitalists and what is happening inside the companies who share the same patrons as OpenAI. I don't say that as someone who interacts with finances, just as someone who carefully watches organizational politics.

[–] [email protected] 20 points 2 months ago* (last edited 2 months ago) (2 children)

On each step, one part of the model applies reinforcement learning, with the other one (the model outputting stuff) “rewarded” or “punished” based on the perceived correctness of their progress (the steps in its “reasoning”), and altering its strategies when punished. This is different to how other Large Language Models work in the sense that the model is generating outputs then looking back at them, then ignoring or approving “good” steps to get to an answer, rather than just generating one and saying “here ya go.”

Every time I've read how chain-of-thought works in o1 it's been completely different, and I'm still not sure I understand what's supposed to be going on. Apparently you get a strike notice if you try too hard to find out how the chain-of-thinking process goes, so one might be tempted to assume it's something that's readily replicable by the competition (and they need to prevent that as long as they can) instead of any sort of notably important breakthrough.
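As far as I can piece together from the public descriptions, the loop is roughly generate, score, keep-or-discard. Here's a toy sketch of that shape (every name and the scoring function are made up for illustration - this is emphatically not OpenAI's actual method, which they won't show anyone):

```python
import random

def generate_step(context):
    # Stand-in for the model proposing the next "reasoning" step.
    return f"step after {len(context)} prior steps (guess {random.random():.2f})"

def score_step(step):
    # Stand-in for a learned reward signal judging the step.
    # During training this would reinforce or penalize the generator;
    # at inference it gates which steps survive into the chain.
    return random.random()

def chain_of_thought(question, max_steps=5, threshold=0.5):
    kept = []
    for _ in range(max_steps):
        candidate = generate_step(kept)
        if score_step(candidate) >= threshold:
            kept.append(candidate)  # "approved" step
        # Rejected steps are discarded - though, per the billing docs
        # quoted below in the thread, every generated token still bills.
    return kept

print(chain_of_thought("some question"))
```

The punchline being that "generating outputs then looking back at them" is just more generation - which matters once you see who pays for it.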

From the detailed o1 system card pdf linked in the article:

According to these evaluations, o1-preview hallucinates less frequently than GPT-4o, and o1-mini hallucinates less frequently than GPT-4o-mini. However, we have received anecdotal feedback that o1-preview and o1-mini tend to hallucinate more than GPT-4o and GPT-4o-mini. More work is needed to understand hallucinations holistically, particularly in domains not covered by our evaluations (e.g., chemistry). Additionally, red teamers have noted that o1-preview is more convincing in certain domains than GPT-4o given that it generates more detailed answers. This potentially increases the risk of people trusting and relying more on hallucinated generation.

Ballsy to just admit your hallucination benchmarks might be worthless.

The newsletter also mentions that the price for output tokens has quadrupled compared to the previous newest model, but the awesome part is, remember all that behind-the-scenes self-prompting that's going on while it arrives at an answer? Even though you're not allowed to see them, according to Ed Zitron you sure as hell are paying for them (i.e. they spend output tokens), which is hilarious if true.
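For a sense of scale, a back-of-the-envelope sketch (the per-token price and the 4x hidden-to-visible ratio are illustrative placeholders I made up, not actual list prices):

```python
# Rough cost model: hidden reasoning tokens bill at the output-token
# rate, same as the answer you actually get to read.
PRICE_PER_1M_OUTPUT = 60.00  # dollars; hypothetical, not a real quote

def billed_output_cost(visible_tokens, hidden_reasoning_tokens,
                       price_per_1m=PRICE_PER_1M_OUTPUT):
    # You pay for the total, visible or not.
    total = visible_tokens + hidden_reasoning_tokens
    return total * price_per_1m / 1_000_000

# A 500-token visible answer backed by 4x as much hidden "reasoning":
cost = billed_output_cost(500, 2000)
print(f"${cost:.4f}")  # 2500 tokens at $60/1M -> $0.15
```

So at a quadrupled rate, a fifth of what you're billed for is the part they'll let you see.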

[–] [email protected] 19 points 2 months ago* (last edited 2 months ago)

From the documentation:

While reasoning tokens are not visible via the API, they still occupy space in the model's context window and are billed as output tokens.

Huh.

[–] [email protected] 11 points 2 months ago

one might be tempted to assume it’s something that’s readily replicable by the competition (and they need to prevent that as long as they can) instead of any sort of notably important breakthrough.

ah yes, open AI

[–] [email protected] 18 points 2 months ago

I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.

Yes... ha ha ha... YES!

[–] [email protected] 18 points 2 months ago* (last edited 2 months ago) (2 children)

I'm terrified for the future, and not even on hater shit. The public numbers are bad, and barring some extremely surprising reports locked behind a wall of NDAs, the private numbers don't seem much better - even Saltman, perpetual cheerleader he is, doesn't have much to offer except desperation to keep the party going, barely even a week after their big model drop.

Sam Altman responds to a user asking for the promised voice features with extreme pettiness. "how about a few weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon?"


So if all the big tech players know that this is garbage, the continual doubling down on this either points to: 1. scrambling for the pie while it's there because they need it to stay afloat, or 2. everything else they have to offer is even worse somehow? And in either case, the aura of being a tech company instead of a company is lost, and I don't know what happens in the fallout. The probably best case scenario is that only tech workers like myself have to eat the blowback, but I suspect things won't play out so cleanly.

[–] [email protected] 14 points 2 months ago

User requests something that accommodates their actual use-case. Altman responds by dismissing it as "toys," in that same cultivated faux-casual lowercase smarm that constitutes the bulk of his public identity. This man is not fit to be an executive.

[–] [email protected] 8 points 2 months ago (2 children)

And in either case, the aura of being a tech company instead of a company is lost

I don't understand this and am kinda afraid of what hides behind this

[–] [email protected] 13 points 2 months ago* (last edited 2 months ago)

It's about hype and economics.
Tech companies can theoretically scale well and are valued on the expectation of growth, while normal companies are mainly valued based on what they currently do. An app can basically be copied for free to millions of users once it has been coded, and servers don't cost that much. A traditional company, say a car company, that wants to increase profits has to build a new factory or something. The problems arise when a company's perception goes from startup/tech company to normal company.

Example: WeWork was a startup that rented office space long term and let its customers rent short term from them. Once people realized that it was a real estate company and not a tech company, its value plummeted, it couldn't raise more capital, and it went bankrupt.

Edit: spelling

[–] [email protected] 11 points 2 months ago

So I should be clear, I don't think there's anything special about tech companies that should let them be treated differently. But for whatever reason, it is a fact that places like We or Tesla or Theranos or fucking Groupon get stupid valuations just because they're "tech" adjacent.

If the market ever catches on that there's no secret ingredient (and as Zitron's shown, there are pretty visible public numbers pointing at this), we're looking at a correction at the trillion-dollar scale. Or maybe we never ask Google to put up or shut up, and just keep the fairy powder in our eyes forever.

[–] [email protected] 15 points 2 months ago

Hallucinations — which occur when a model authoritatively states something that isn't true (or in the case of an image or a video makes something that looks...wrong) — are impossible to resolve without new branches of mathematics…

Finally, honesty. I appreciate that the author understands this, even if they might not have the exact knowledge required to substantiate it. For what it's worth, the situation is more dire than this; we can't even describe the new directions required. My fictional-universe theory (FU theory) shows that a knowledge base cannot know whether its facts are describing the real world or a fictional world which has lots in common with the real world. (Humans don't want to think about this, because of the implication.)
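A toy illustration of the point (my own construction, not the commenter's actual formalism): if all a knowledge base can check is internal consistency, a coherent fictional world passes exactly the same test as the real one.

```python
def consistent(facts):
    # The only check available from inside the knowledge base:
    # no statement and its negation are both asserted.
    return not any(("not " + f) in facts for f in facts)

real_world = {"Paris is the capital of France",
              "water boils at 100 C at sea level"}
fictional = {"Minas Tirith is the capital of Gondor",
             "water boils at 100 C at sea level"}

# Both pass the only test the knowledge base can apply:
print(consistent(real_world), consistent(fictional))  # True True
```

Nothing inside the store distinguishes the two sets; grounding has to come from outside.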

[–] [email protected] 14 points 2 months ago (1 children)

Microsoft is making laptops with dedicated Copilot buttons.

I think they'd rather burn their company to the ground, all the while telling their customers that they just needed to wait a little while longer, rather than admit that they got it wrong.

[–] [email protected] 7 points 2 months ago (1 children)
[–] [email protected] 8 points 2 months ago* (last edited 2 months ago) (2 children)

Are you saying that Clippy is proof I'm right or proof I'm wrong? Or am I just being unfunny and not getting the joke?

[–] [email protected] 5 points 2 months ago (1 children)

Clippy is proof I’m right

That's the one.

[–] [email protected] 4 points 2 months ago (1 children)

who doesn't like a good old wordless driveby post making no statement of intent and giving no direction

I swear, some days I feel like some of these people mumble "hail eris" as they wander past

[–] [email protected] 6 points 2 months ago* (last edited 2 months ago) (1 children)
[–] [email protected] 7 points 2 months ago (1 children)

alt text fedi replyguy cosplay goes here

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago)

complete tangent but

hades II spoiler: I just love eris in hades II.

[–] [email protected] 12 points 2 months ago (1 children)

I worry this is going to leave a crater in the software industry that will take a decade to fill.

[–] [email protected] 5 points 2 months ago

Software dev has been an oversaturated market for ages now, even before the recent AI craze.

[–] [email protected] -5 points 2 months ago

I mean, it's gotten to the point where I can't even keep track of all the different AIs being pushed by companies. My prediction is some company is going to make a super efficient and helpful AI and everyone will start using that as a base, like how every company wanted a website before they all migrated the majority of their information to social media like Facebook and Twitter. And let's be honest, most of the big companies making AI are not going to be the ones to do it.

And even though they are improving, they are more interested in making money than in better AI. We haven't seen a major breakthrough in months and the majority of progress is minimal. Every time they come out with a new model, it's usually just the same thing with more bells and whistles.