This post was submitted on 13 Aug 2023
386 points (74.3% liked)

Technology

[–] [email protected] 237 points 1 year ago (8 children)

What a silly article. $700,000 per day is ~$256 million a year. That's peanuts compared to the $10 billion they got from MS. With no new funding they could run for about a decade, and this is one of the most promising new technologies in years. MS would never let the company fail due to lack of funding; it's basically MS's LLM play at this point.
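
(A quick back-of-the-envelope sketch of that math, assuming the article's $700k/day figure. The raw division actually gives closer to 40 years of runway; the "about a decade" above presumably budgets for everything beyond just serving ChatGPT.)

```python
# Back-of-the-envelope runway math from the figures in the comment above.
daily_cost = 700_000              # article's claimed ChatGPT serving cost, USD/day
annual_cost = daily_cost * 365    # ~$255.5M per year
funding = 10_000_000_000          # Microsoft's reported $10B investment

print(f"Annual cost: ${annual_cost / 1e6:.0f}M")           # Annual cost: $256M
print(f"Naive runway: {funding / annual_cost:.0f} years")  # Naive runway: 39 years
```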

[–] [email protected] 112 points 1 year ago (1 children)

When you get articles like this, the first thing you should ask is "Who the fuck is Firstpost?"

[–] [email protected] 35 points 1 year ago (1 children)

Yeah, where the hell do these posters find these articles anyway? It's always blogs that repost stuff from somewhere else.

[–] Wats0ns 42 points 1 year ago (1 children)

OpenAI's biggest spending is infrastructure, which is rented from... Microsoft. Even if the company folds, they will have given back to Microsoft most of the money invested.

[–] [email protected] 25 points 1 year ago

MS is basically getting a ton of equity in exchange for cloud credits. That's a ridiculously good deal for MS.

[–] [email protected] 15 points 1 year ago

While the title is clickbait, they do say right at the beginning:

*Right now, it is pulling through only because of Microsoft's $10 billion funding*

Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. The $700k is only one model; there are others, plus making new ones and running the company. It's easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.

[–] [email protected] 140 points 1 year ago (2 children)

There's no way Microsoft is going to let it go bankrupt.

[–] [email protected] 65 points 1 year ago (2 children)

If there's no path to make it profitable, they will buy all the useful assets and let the rest go bankrupt.

[–] [email protected] 13 points 1 year ago (1 children)

Microsoft reported profitability in their AI products last quarter, with a substantial gain in revenue from it.

It won't take long for them to recoup their investment in OpenAI.

If OpenAI had been more responsible in how they released ChatGPT, they wouldn't be facing this problem. Just completely opening Pandora's box because they were racing to beat everyone else was extremely irresponsible, and if they go bankrupt because of it, then whatever.

There's plenty of money to be made in AI without everyone just fighting over how to do it in the most dangerous way possible.

I'm also not sure Nvidia is making the right decision tying their company to AI hardware. Sure, they're making mad money right now, but just like the crypto space, that can dry up instantly.

[–] [email protected] 14 points 1 year ago

I don’t think you’re right about nvidia. Their hardware is used for SO much more than AI. They’re fine.

Plus their own AI products are popping off rn. DLSS and their frame generation one (I forget the name) are really popular in the gaming space.

I think they also have a new DL-based process for creating stencils for silicon photolithography which, in my limited knowledge, seems like a huge deal.

[–] [email protected] 24 points 1 year ago (3 children)

That's $260 million. There are 360 million paid seats of Microsoft 365, so they'd have to raise their prices by about $0.73 per seat per year to cover the cost.
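
(As a sanity check, the same division as a one-liner; the raw number is roughly $0.72, which the comment rounds up:)

```python
# Spreading the reported ChatGPT cost across Microsoft 365 paid seats.
annual_cost = 260_000_000   # ~$260M/year, rounded from $700k/day * 365
paid_seats = 360_000_000    # Microsoft 365 paid seats cited in the comment
print(f"${annual_cost / paid_seats:.2f} per seat per year")  # $0.72 per seat per year
```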

[–] [email protected] 26 points 1 year ago

So they'll raise the cost by $100/yr.

[–] [email protected] 93 points 1 year ago (6 children)

That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.

[–] [email protected] 60 points 1 year ago (5 children)

I don't think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.

What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible-sounding weapon of mass destruction, coerce you into weird relationships, and do basically anything else it wasn't supposed to do.

I've noticed it has become worse at rubber ducking non-trivial coding prompts. I've noticed that my juniors have a hell of a time functioning without access to it, and they'd rather ask questions of seniors than try to find information or solutions themselves, essentially replacing the chatbot with Sr. devs.

A good tool for getting people on-ramped if they've never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.

[–] [email protected] 64 points 1 year ago (5 children)

As a Sr. Dev, I'm always floored by stories of people trying to integrate chatGPT into their development workflow.

It's not a truth machine. It has no conception of correctness. It's designed to make responses that look correct.

Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.

[–] [email protected] 33 points 1 year ago (2 children)

Search engines aren't truth machines either. StackOverflow reputation is not a truth machine either. These are all tools to use. Blind trust in any of them is incorrect. I get your point, I really do, but it's just as foolish as believing everyone using StackOverflow just copies and pastes the top rated answer into their code and commits it without testing then calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate things is what you should be doing, not just giving them a solution.

[–] [email protected] 17 points 1 year ago* (last edited 1 year ago)

But what did they expect would happen, that more people would subscribe to pro? In the beginning I thought they just wanted to survey-farm usage to figure out what the most popular use cases were and then sell that information or repackage use-cases as an individual added-value service.

[–] [email protected] 82 points 1 year ago (2 children)

I mean, apart from the fact it's not sourced or whatever, it's standard practice for these tech companies to run a massive loss for years while basically giving their product away for free (which is why you can use OpenAI with minimal, if any, costs, even at scale).

Once everyone's using your product over competitors who couldn't afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you're the biggest player in the market.

It's just Uber's business model.

[–] [email protected] 26 points 1 year ago (3 children)

The difference is that the VC bubble has mostly ended. There isn't "free money" to keep throwing at a problem post-pandemic. That's why there's an increased focus on Uber (and others) making a profit.

[–] [email protected] 22 points 1 year ago

In this case, Microsoft owns 49% of OpenAI, so they're the ones subsidizing it. They can also offer at-cost hosting and in-roads into enterprise sales. Probably a better deal at this point than VC cash.

[–] [email protected] 16 points 1 year ago

This is what caused spez at Reddit and Musk at Twitter to go into desperation mode and start flipping tables over. Their investors are starting to want results now, not sometime in the distant future.

[–] [email protected] 12 points 1 year ago (1 children)

Speaking of Uber, I believe it turned a profit for the first time this year. That is, it never made any profit since its creation, whenever that was.

[–] [email protected] 51 points 1 year ago (2 children)

If AI was so great, it would find a solution to operate at a fraction of the cost it does now.

[–] [email protected] 70 points 1 year ago (19 children)

Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?

[–] [email protected] 13 points 1 year ago (5 children)

Really says something that none of your responses yet seem to have caught that this was a joke.

[–] [email protected] 46 points 1 year ago (1 children)

Huh, so with the $10bn from Microsoft they should be good for... just over 30 years!

[–] [email protected] 28 points 1 year ago (1 children)

ChatGPT has the potential to make Bing relevant and unseat Google. No way Microsoft pulls funding. Sure, they might screw it up, but they'll absolutely keep throwing cash at it.

[–] [email protected] 42 points 1 year ago* (last edited 1 year ago) (10 children)

This article has been flagged on HN for being clickbait garbage.

[–] [email protected] 41 points 1 year ago

Indian newspapers publish anything without any sort of verification, from Reddit videos to WhatsApp forwards. More than news, they are like an old Chinese whispers game that runs forever. So take this with a huge grain of salt.

[–] [email protected] 35 points 1 year ago (1 children)

Pretty sure Microsoft will be happy to come save the day and just buy out the company.

[–] [email protected] 15 points 1 year ago

It feels like that was the plan all along.

[–] [email protected] 29 points 1 year ago (10 children)

I don’t understand Lemmy’s hate boner over AI.

Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.

Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago) (12 children)

AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It's genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.

No idea why such hate...

One can hate the Disney CEO for misusing AI, but why shit on AI itself?

[–] [email protected] 12 points 1 year ago (5 children)

It's shit on because it is not actually AI as the general public tends to use the term. This isn't Data from Star Trek, or anything even approaching Asimov's three laws.

The immediate defense against this statement is people going into mental gymnastics and hand waving about "well we don't have a formal definition for intelligence so you can't say they aren't" which is just... nonsense rhetorically because the inverse would be true as well. Can't label something as intelligent if we have no formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how LLMs are suited to those types of problem domains.

Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this "AI haters" group, like belief in AI is some sort of black-and-white religion or requires some sort of ideological purity. Like having honest conversations about these systems' problems intrinsically means you want them to fail. That's BS.


Machine learning and large language models are amazing and game-changing, but they aren't magical panaceas, and they aren't even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely non-fitting use cases in a race to be the first to make the appearance of success and suck down those juicy VC bux.

Anyone trying to say different isn't familiar with the field or is trying to sell you something. It's the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.

The frustrating part is that people caught up in the hype train of AI will say the same thing: "You just don't understand!" But then they'll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.


At least in my opinion that's where the negativity comes from.

[–] [email protected] 28 points 1 year ago (3 children)

This article is dumb as shit

[–] [email protected] 18 points 1 year ago

No sources, and even given their numbers they could continue running ChatGPT for another 30 years. I doubt they're anywhere near a net profit, but they're far from bankruptcy.

[–] [email protected] 28 points 1 year ago (5 children)

A couple of my coworkers will have to write their own code again and start reading documentation

[–] [email protected] 25 points 1 year ago (14 children)

Does it feel like these "game changing" techs have ever-shorter life spans? The dot-com bubble lasted a decade or so, the NFT craze a few years, and now AI hasn't even been at it a year.

The Internet is concentrating and getting worse because of it, inundated with ads and bots and bots who make ads and ads for bots, and being existentially threatened by Google’s DRM scheme. NFTs have become a joke, and the vast majority of crypto is not far behind. How long can we play with this new toy? Its lead paint is already peeling.

[–] [email protected] 20 points 1 year ago (4 children)

This is alarming...

One of the things companies have started doing lately is signaling "we could go bankrupt," then jumping ahead a stage on enshittification.

[–] [email protected] 20 points 1 year ago

I don't think OpenAI needs any excuses to enshittify, they've been speedrunning ever since they decided they liked profit instead of nonprofit.

[–] [email protected] 19 points 1 year ago

A company that just raised $10b from Microsoft is struggling with $260m a year? That's almost 40 years of runway.

[–] [email protected] 16 points 1 year ago (2 children)

They are choosing to spend that much. That doesn't suggest that they expect financial problems.

[–] NGC2346 13 points 1 year ago (6 children)

It's fine, I got my own LLaMA at home; it does almost the same as GPT.
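
(For anyone curious what "a LLaMA at home" looks like in practice, here is a minimal sketch using the llama-cpp-python bindings. The model path is hypothetical; point it at whatever quantized LLaMA-family model file you have downloaded.)

```python
# Minimal local-LLM sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# Hypothetical path to a locally downloaded, quantized LLaMA-family model.
llm = Llama(model_path="./models/llama-7b-chat.Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "Q: Name three planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```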
