this post was submitted on 08 Feb 2025
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
i think you're missing the point that "Deepseek was made for only $6M" has been the trending headline for the past while, with the specific point of comparison being the massive costs of developing ChatGPT, Copilot, Gemini, et al.
to stretch your metaphor, it's like someone rolling up with their car, claiming it only costs $20 (unlike all the other cars that cost $20,000), when come to find out that number is just how much it costs to fill the gas tank up once
Now I'm imagining GPUs being traded like old cars.
slaps GPU This GPU? Perfectly fine. Second hand, yes, but only used to train one model, by an old lady. It'll run the upcoming Monster Hunter Wilds perfectly fine.
Emphasis mine. DeepSeek was very upfront that this $6M was training only. No other company includes R&D and salaries when they report model training costs, because those aren't training costs.
consider this paragraph from the Wall Street Journal:
you're arguing to me that they technically didn't lie -- but it's pretty clear that some people walked away with a false impression of the cost of their product relative to their competitors' products, and they financially benefitted from people believing in this false impression.
Okay I mean, I hate to somehow come to the defense of a slop company? But WSJ saying nonsense is really not their fault, like even that particular quote clearly says "DeepSeek said training one" cost $5.6M. That's just a true statement. No one in their right mind includes the capital expenditure in that, the same way when you say "it took us 100h to train a model" that doesn't include building a data center in those 100h.
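To make the distinction concrete, here's a minimal sketch of how a training-only figure like that is derived: GPU-hours times an assumed rental rate, with no capex in sight. The numbers below are the ones widely quoted from DeepSeek's paper; treat them as illustrative, not authoritative.

```python
# Back-of-envelope: how a "training cost" figure is typically computed.
# Assumptions (illustrative, as widely quoted from DeepSeek's paper):
#   - ~2.788M total H800 GPU-hours for the final training run
#   - $2 per GPU-hour as an assumed market rental rate
gpu_hours = 2_788_000
rental_rate = 2.0  # USD per GPU-hour, assumed

training_cost = gpu_hours * rental_rate
print(f"Training cost: ${training_cost / 1e6:.2f}M")  # -> Training cost: $5.58M
```

Note that nothing in this arithmetic touches data-center construction, salaries, or failed runs — which is exactly the point being argued above.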
Besides the question of whether they actually lied, it's still immensely funny to me that they could've just told a blatant lie nobody factchecked, and it shook the market to the fucking core, wiping out billions in valuation. Very real market based on very real fundamentals run by very serious adults.
i can admit it's possible i'm being overly cynical here and it is just sloppy journalism on Raffaele Huang's/his editor's/the WSJ's part. but i still think it's a little suspect, on the grounds that we have no idea how many times they had to restart training due to the model borking, or what the other experiments and hidden costs were, even before things like the necessary capex (which goes unmentioned in the original paper -- though they note using a 2048-GPU cluster of H800s, which would put them down around $40m). i'm thinking in the mode of "the whitepaper exists to serve the company's bottom line"
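For what it's worth, the ~$40m capex figure in the comment above checks out as rough arithmetic. This is a sketch only: the per-card price is a hypothetical street price, not anything from the paper.

```python
# Rough capex check for a 2048-GPU H800 cluster.
# Assumption: ~$20,000 per card (hypothetical street price, not from the paper).
gpus = 2048
price_per_gpu = 20_000  # USD, assumed

capex = gpus * price_per_gpu
print(f"Estimated cluster capex: ${capex / 1e6:.1f}M")  # -> Estimated cluster capex: $41.0M
```

That lands right around the $40m ballpark cited above — roughly seven times the headline training figure, which is why leaving capex out changes the story so much.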
btw announcing my new V7 model that i trained for the $0.26 i found on the street just to watch the stock markets burn
Oh yeah, I totally agree on this one. This entire genAI enterprise insults me on a fundamental level as a CS researcher: there's zero transparency or reproducibility, no one reviews these claims, and it's a complete shitshow, from terrible, terrible benchmarks through shoddy methodology up to untestable and bonkers claims.
I have zero good faith for the press, though, they're experts in painting any and all tech claims in the best light possible like their lives fucking depend on it. We wouldn't be where we are right now if anyone at any "reputable" newspaper like WSJ asked one (1) question to Sam Altman like 3 years ago.
Ask yourself why that may be, since you are the one who posted a link to a WSJ article repeating an absurd $100M-$1B figure from a guy who has a vested interest in making the barrier to entry into the field seem as high as possible to increase the valuation of his company. Did WSJ make an attempt to verify the accuracy of these statements? Did it push for further clarification? Did it compare those statements to figures that have been made public by Meta and OpenAI? No on all counts - yet somehow "DeepSeek lied" because it explicitly stated its costs didn't include capex, salaries, or R&D, and the media couldn't be bothered to read to the end of the paragraph
"the media sucks at factchecking DeepSeek's claims" is... an interesting attempt at refuting the idea that DeepSeek's claims aren't entirely factual. beyond that, intentionally presenting true statements that lead to false impressions is a kind of dishonesty regardless. if you mean to argue that DeepSeek wasn't being underhanded at all and just very innocently presented their figures without proper context (figures that just so happened to spur a media frenzy in their favor)... then i have a bridge to sell you.
besides that, OpenAI is very demonstrably pissing away at least that much money every time they add one to the number at the end of their slop generator
That's the opposite of what I'm saying. DeepSeek is the one under scrutiny, yet they are the only ones to publish the source code and training procedures of their model. So far the only argument against them is "if I read the first half of a sentence in DeepSeek's whitepaper and pretend the other half of the sentence doesn't exist, I can generate a newsworthy headline". So much so that you just attempted to present a completely absurd and unverifiable number from a guy with a financial incentive to exaggerate, plus a non-apples-to-apples comparison made by WSJ, as airtight evidence against them. OpenAI allegedly has enough hardware to invalidate DeepSeek's training claims in roughly five hours - given the massive financial incentive to do so, if DeepSeek were being untrustworthy, don't you think they would have done so by now?
What do you mean proper context? I posted their full quote above, they presented their costs with full and complete context, such that the number couldn't be misconstrued without one being willfully ignorant.
It sounds to me like you have a very clear bias, and you don't care at all whether what they said is actually true, as long as the headlines about AI are negative
this is utterly pointless and you’ve taken up way too much space in the thread already
oh no, anti-AI bias in TechTakes? unthinkable
this has absolutely fuck all to do with anything i've said in the slightest, but i guess you gotta toss in the talking points somewhere
e: it's also trivially disprovable, but i don't care if it's actually true, i only care about headlines negative about AI
No, it's not. OpenAI doesn't spend all that money on R&D; they spend the majority of it on the actual training (hardware, electricity).
And that's (supposedly) only $6M for Deepseek.
So where is the lie?
shot:
chaser:
citation:
your post is asking a lot of questions already answered by your posting
They did not answer anything; they only alluded to answers.
Just because they bought GPUs like everyone else doesn't mean they could not train it cheaper.
standard “fuck off programming.dev” ban with a side of who the fuck cares. deepseek isn’t the good guys, you weird fucks don’t have to go to a nitpick war defending them, there’s no good guys in LLMs and generative AI. all these people are grifters, all of them are gaming the benchmarks they designed to be gamed, nobody’s getting good results out of this fucking mediocre technology.