this post was submitted on 01 Jun 2025
275 points (96.6% liked)
Technology
you are viewing a single comment's thread
If all you're saying is that neural networks could develop consciousness one day, sure, and nothing I said contradicts that. Our brains are neural networks, so it stands to reason that artificial ones could do what our brains can do. But the technical hurdles are huge.
You need at least two things to get there:
1. Enough raw computing power.
2. A theoretical understanding of how to structure a neural network so that it produces consciousness.
1 is hard because a single brain alone is about as powerful as a significant chunk of worldwide computing; the gulf between our current capacity and what we would need is roughly... 100% of what we would need. We are woefully under-resourced for that. You also need to solve how to power those computers without cooking the planet, which is not something we're even close to solving currently.
2 means that we can't just throw more power or training at the problem. Modern NN models have an underlying theory that makes them work: they're essentially statistical curve-fitting machines. We don't currently have a good theoretical model that would let us structure an NN to create consciousness. It's not even on the horizon yet.
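To make the curve-fitting point concrete, here's a minimal sketch (purely my own illustration, not any real model or library's training code): a one-hidden-layer network fitted to noisy samples of sin(x) with plain gradient descent. Every name and number in it is an arbitrary choice for the example.

```python
# Minimal sketch: a neural net as a statistical curve-fitting machine.
import numpy as np

rng = np.random.default_rng(0)

# Noisy training data sampled from an underlying curve.
x = rng.uniform(-3.0, 3.0, size=(256, 1))
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# Tiny network: 1 -> 32 -> 1 with tanh activation.
hidden = 32
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                       # residuals against the data

    # Backward pass (hand-written gradients for this tiny architecture).
    n = x.shape[0]
    d_pred = 2 * err / n                 # gradient of mean squared error
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * (1 - h ** 2)   # tanh derivative
    dW1 = x.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent step: nudge parameters to reduce the fitting error.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The trained network is nothing more than a smooth curve fitted to the samples.
test = np.linspace(-3, 3, 7).reshape(-1, 1)
fit = np.tanh(test @ W1 + b1) @ W2 + b2
print(np.column_stack([test, np.sin(test), fit]).round(2))
```

Scaling that loop up (more layers, more data, more compute) gets you a better curve fit, but nothing in it says anything about how to get a world model or consciousness out of the other end.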
Those are two enormous hurdles. I think saying modern NN design can create consciousness is like Jules Verne in 1867 saying we can get to the Moon with a cannon because of "what progress artillery science has made in the last few years".
Moon rockets are essentially artillery science in many ways, yes, but Jules Verne was still a century away in terms of supporting technologies, raw power, and essential insights into how to do it.
We're on the same page about consciousness then. My original comment only pointed out that current AI has the problems it does because it replicates how we work, and people don't seem to like recognising the obvious fact that we have the exact problems LLMs have. LLMs aren't rational because we are inherently not rational. That was the only point I was originally trying to make.
For AGI or UGI to exist, massive hurdles will need to be cleared, likely an entire restructuring of how these systems are built. I think LLMs will continue to get smarter and will likely exceed us, but they will not be perfect without a massive rework.
Personally, and this is pure speculation, I wouldn't be surprised if AGI or UGI is only possible with the help of a highly advanced AI, similar to how microbiologists are only now starting to unravel protein folding with the help of AI. I think the sheer volume of data that needs processing requires something like a highly evolved AI to understand, and that current technology is purely a stepping stone toward something more.
We don't have the same problems LLMs have.
LLMs have zero fidelity. They have no - none - zero - model of the world to compare their output to.
Humans have biases and problems in our thinking, sure, but we're capable of at least making corrections and working with meaning in context. We can recognise our model of the world and how it relates to the things we are saying.
LLMs cannot do that job, at all, and they won't be able to until they have a model of the world. A model of the world would necessarily include the system holding it, which is self-awareness, which is AGI. That's a meaning-understander. Developing a world model is the same problem as consciousness.
What I'm saying is that you cannot develop fidelity at all without AGI, so no, LLMs don't have the same problems we do. That is an entirely different class of problem.
Some Moon rockets fail, but they don't have that in common with Moon cannons. One of those can in theory achieve a Moon landing and the other cannot, ever, in any iteration.