this post was submitted on 06 Apr 2025
137 points (97.9% liked)

Futurology

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago) (1 children)

I'm talking without knowing anything, but it seems like LLMs aren't orthogonal to reasoning, just insufficient on their own. It's as if our consciousness has a library of information to draw on, and that library is organized by references; the LLM could be that library, with another software component drawing on it to do the actual reasoning.

That's part of what DeepSeek has been trying to do: they put a bunch of induction logic for different categories in front of the LLM.
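To make the idea concrete, here's a minimal sketch of that split. Everything in it is hypothetical: `call_llm` is a stand-in for whatever model API you'd actually use, and `retrieve_facts` / `reason` are made-up names for the "library" and "reasoning" roles described above.

```python
# Hypothetical sketch: the LLM acts as an organized "library" of knowledge,
# and a separate component does the actual reasoning over what it retrieves.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns canned text for illustration."""
    return "- water boils at 100 C at sea level\n- the kettle is at sea level"

def retrieve_facts(question: str) -> list[str]:
    """The 'library' role: the LLM surfaces and organizes relevant knowledge."""
    response = call_llm(f"List facts relevant to: {question}")
    return [line.lstrip("- ").strip() for line in response.splitlines() if line.strip()]

def reason(question: str) -> str:
    """The separate 'reasoning' role: here just a toy rule over retrieved facts."""
    facts = retrieve_facts(question)
    if any("100 C" in f for f in facts) and any("sea level" in f for f in facts):
        return "The kettle's water will boil at 100 C."
    return "Not enough retrieved knowledge to conclude anything."

print(reason("When will the kettle boil?"))
```

The point of the toy is only the division of labor: the model organizes and retrieves, while something else, however primitive here, draws the conclusion.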

[–] [email protected] 5 points 1 week ago

I agree, although this seems like an unpopular opinion in this thread.

LLMs are really good at organizing and abstracting information, and it would make a lot of sense for an AGI to incorporate them for that purpose. It's just that there's no actual thought process happening, and in my opinion, "reasoning models" like DeepSeek are a poor substitute for true reasoning capabilities.