Remember how ChatGPT totally aced the bar exam? Wow! yeah, turns out that was just a lie
(www.nytimes.com)
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
I'm not even going to engage in this thread cause it's a tar pit, but I do think I have the appropriate analogy.
When taking certain exams in my CS programme you were allowed to have notes, but with two restrictions:

- they had to fit on a single A4 page
- they had to be handwritten (no printing or photocopying)
The idea was that you needed to actually put a lot of work into making it, since the entire material was obviously the size of a fucking book and not an A4 page, and you couldn't just print/copy it from somewhere. So you really needed to distill the information and make a thought map or an index for yourself.
Compare that to an ML model that is allowed to train on the data for however long it wants, as long as the result is a fixed-size matrix of parameters that helps it answer questions with high reliability.
It's not the same as an open book, but it's definitely not closed book either. And the LLMs have billions of parameters in the matrix, literal gigabytes of data on their notes. The entire text of War and Peace is ~3MB for comparison. An LLM is a library of trained notes.
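The "gigabytes of notes" comparison can be sanity-checked with some back-of-envelope arithmetic. A minimal sketch, assuming a 7B-parameter model stored as 16-bit floats and War and Peace at roughly 3.2 MB of text (both numbers are illustrative assumptions, not figures from the thread):

```python
# Back-of-envelope check of the "library of trained notes" claim.
# Assumed numbers: a 7B-parameter model in fp16, War and Peace ~3.2 MB.

PARAMS = 7_000_000_000          # assumed parameter count of a mid-sized LLM
BYTES_PER_PARAM = 2             # fp16: two bytes per parameter
WAR_AND_PEACE_BYTES = 3_200_000 # rough size of the plain-text novel

model_bytes = PARAMS * BYTES_PER_PARAM
ratio = model_bytes / WAR_AND_PEACE_BYTES

print(f"model weights: {model_bytes / 1e9:.0f} GB")
print(f"that's ~{ratio:,.0f} copies of War and Peace")
```

Even under these conservative assumptions the "notes" come out to around 14 GB, thousands of novels' worth, so the comparison to a single handwritten A4 page is off by quite a few orders of magnitude.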
My question to you is how is it different than a human in this regard? I would go to class, study the material, hope to retain it, so I could then apply that knowledge on the test.
The AI is trained on the data, "hopes" to retain it, so it can apply it on the test. It's not storing the book, so what's the actual difference?
And if you have an answer to that, my follow-up would be: "what's the effective difference?" If we stick an AI and a human in a closed room and give them a test, why do the intricacies of how they store and recall the data matter?
I'm not sure what you even mean by "how is it different", but for starters a human can actually get a good mark at the bar and spicy autocomplete clearly cannot.
What are you basing this "it clearly cannot" on? Because an early iteration of it was mediocre at it? The first ICE cars were slower than horses; I'm afraid this statement may be the equivalent of someone pointing at that and saying "cars can't get good at going fast."
But I specifically asked "in this regard", referring to taking a test after previously having trained yourself on the data.
I asked Gemini and it told me that ChatGPT can't do shit, I'm not gonna question it.
So, it's either perfect right now, or never capable of anything. Great critical and nuanced thinking.
Thanks!
do you see why I take the shortcut?
I mean, if we took all of Sam Altman's net worth and split it between these two guys, who at least benefited humanity with their work, we'd get at least a step closer to justice in the universe.
Getting a Turing award: $1M
Dropping out of Stanford to work on something unironically called "Loopt": Priceless
holy fuck you're a moron
please go read a book, and look at some art. no, marvel media doesn't count.
Me, about to suggest some actually really good, thought provoking Marvel comics that somehow got made alongside the relentless superhero soap opera: oh wait now isn't the time, we're dunking on the AI bro