TechTakes
[–] [email protected] 45 points 6 months ago (3 children)

lmao, Zoom is cooked. Their CEO has no idea how LLMs work or why they aren't fit for purpose, but he's 100% certain someone else will somehow solve this problem:

So is the AI model hallucination problem down there in the stack, or are you investing in making sure that the rate of hallucinations goes down?

I think solving the AI hallucination problem — I think that’ll be fixed. 

But I guess my question is by who? Is it by you, or is it somewhere down the stack?

It’s someone down the stack. 

Okay.

I think either from the chip level or from the LLM itself.

[–] [email protected] 34 points 6 months ago (2 children)

Haha at the chip level? What’s he smoking?

[–] [email protected] 27 points 6 months ago* (last edited 5 months ago) (1 children)

What’s he smoking?

Whatever he's smoking, its strength rating is at least: "make it seem like a good idea to call employees back from remote work, despite remote work facilitation being the one thing we sell".

So that's gotta be some strong stuff.

[–] [email protected] 19 points 6 months ago (1 children)

The important thing about Zoom is that it was the lucky winner of the pandemic. It could have been Google Meet, could have been any of the other competitors, but somehow everyone just converged on Zoom.

[–] [email protected] 21 points 6 months ago* (last edited 6 months ago) (2 children)

Having worked in an IT department in 2020, I can say it wasn't just random. Zoom was stable for large meetings and scaled pretty smoothly up to a thousand participants. It was also a standalone product and had better moderator tools.

MS Teams often ran into problems above around 50 to 80 participants. Google Meet worked better, but its max was way lower than Zoom's (250?). I tried a couple of other competitors, but none matched up (including Jitsi, unfortunately).

So if you were in an IT department at an organization that needed large meetings and were looking for a quick solution, Zoom was the best choice in 2020. And big organizations' choices mean everyone has to learn that software, so soon enough everyone knew how to use Zoom.

They were in the right place, had the better product, and gained a dominant position. And now they are tossing all that away. C'est la late stage capitalism!

[–] [email protected] 10 points 6 months ago

Also, according to my freelance interpreter parents:

Compared to other major tools, Zoom was also one of the few not-too-janky solutions for setting up simultaneous interpreting with a separate audio track for the interpreters' output.

Other tools would require big kludges (separate meeting rooms, etc.), were unlikely to work for all participants across organizations, or required clunky consecutive translation.

[–] [email protected] 7 points 5 months ago* (last edited 5 months ago)

MS Teams often ran into problems above around 50 to 80 participants

As an honourable mention, MS Teams is also uncontrollable, overblown jank that

  • doesn't work in a browser, despite being built on Electron
  • is complete shite on Android, despite being built on Electron
  • barely works on Windows, thanks to being built on Electron, and despite the fact that it's built by the Windows people

And even on its best behaviour, it randomly loses messages while eating up way more CPU and RAM than could possibly be justified for a glorified IRC UI.

No wonder Zoom won out over that one; if you tried to use Teams in 2020, you barely could.

[–] VirtualOdour 1 points 5 months ago* (last edited 5 months ago)

Custom hardware designed with AI pipelines in mind, similar to how GPU architecture solved a lot of render issues through how memory can be accessed and which operations are prioritized. The idea people have been talking about is basically the LLM on one part of the chip with other NNs beside it that can modify its biases, basically setting the 'mood' and focusing things as the answer is created, which should enable creativity in some areas while locking it out in others. Coding, for example, allows creativity in structure or variable names but needs to be very factual about function names or mathematical operations.
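
For what it's worth, here is a minimal sketch of what "other NNs beside it that can modify its biases" could look like in software rather than silicon. It assumes a HuggingFace-style model interface; MoodNetwork and decode_step are made up for illustration, not any real product or API:

```python
import torch

# Hypothetical sketch: a small side network reads the LLM's hidden state
# at each decoding step and nudges the next-token logits. Every name here
# is a placeholder.

class MoodNetwork(torch.nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.proj = torch.nn.Linear(hidden_size, vocab_size)

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        # Map the current hidden state to a per-token logit bias,
        # e.g. damping "creative" tokens inside a function name.
        return self.proj(hidden_state)

def decode_step(llm, mood_net, input_ids):
    out = llm(input_ids, output_hidden_states=True)
    logits = out.logits[:, -1, :]             # next-token logits
    hidden = out.hidden_states[-1][:, -1, :]  # last layer, last position
    biased = logits + mood_net(hidden)        # side network applies its bias
    return biased.argmax(dim=-1)              # greedy pick, for simplicity
```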

I think it's very unlikely to be the way things go, based on progress with pure LLMs and LLM architecture, but maybe in the future it'll turn out to be a more efficient way of solving the problem, especially with AI-designed chips.

[–] [email protected] 19 points 6 months ago

Lol, I like how they put the author's note at the beginning of the article ("this was a very special interview"), as if it's special because of the unique insights instead of special because it sounds coked up.

[–] [email protected] 13 points 6 months ago (2 children)

I think solving the AI hallucination problem — I think that’ll be fixed.

Wasn't this an unsolvable problem?

[–] [email protected] 20 points 6 months ago* (last edited 6 months ago) (1 children)

it's unsolvable because it's literally how LLMs work lol.

though to be fair i would indeed love for them to solve the LLMs-outputting-text problem.

[–] [email protected] 2 points 6 months ago (1 children)

Yeah. We need another program to control the LLM tbh.

[–] [email protected] 5 points 6 months ago

Sed quis custodiet ipsos custodes? = But who will control the controllers?

Which, in a beautiful twist of irony, is thought to be an interpolation in the texts of Juvenal (in manuscript speak, an insert added by later scribes).

[–] VirtualOdour 1 points 5 months ago

Yeah, but only in one limited way of doing things. It's like how you can't raise water using geometry alone, but there are obviously endless things like lock gates, pumps, etc. that can be added to a water transport system to raise it.

It is a hard one, though; even people do the exact same thing LLMs do. The Mandela effect and the inaccuracy of witness testimony are clear examples: sometimes we don't know that we don't know something, or we're sure that we do. Visual illusions where our mind fills in the blanks are a similar thing.

The human brain has a few little loops we take things through that are basically sanity checks, though not everyone applies the same level of thinking to what they're saying. Alex Jones, Trump, and certain people on Lemmy aren't interested in whether what they're saying is true, only that it serves their purpose. It's learnt behavior, and we could construct NNs that contain the same sort of sanity checking, or go a level beyond and have the system, behind the scenes, create a layer of axioms and information points associated with the answer and test them individually against a fact-checking network.
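
A rough sketch of what that behind-the-scenes check might look like; everything here is a made-up placeholder, not any real system or API:

```python
# Hypothetical sketch: break an LLM answer into atomic claims, then score
# each claim with a separate verifier. decompose_into_claims is a crude
# stand-in and verify is a placeholder for whatever checker you'd train.

def decompose_into_claims(answer: str) -> list[str]:
    # Crude stand-in: treat each sentence as one checkable claim.
    return [s.strip() for s in answer.split(".") if s.strip()]

def verify(claim: str) -> float:
    # Placeholder fact-checking network: should return P(claim is supported).
    raise NotImplementedError("plug a verifier model in here")

def sanity_check(answer: str, threshold: float = 0.8) -> list[str]:
    # Return the claims the verifier couldn't support; an empty list
    # means the answer passed this (very rough) sanity check.
    return [c for c in decompose_into_claims(answer) if verify(c) < threshold]
```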

It's all stuff we're going to see tried in the upcoming GPT-5. Self-tasking is the next big step to get right: working out the process required to obtain an accurate answer and then working through the steps.