Voldemort

joined 2 years ago
[–] [email protected] 1 points 1 hour ago

Maybe I am overselling current AI and underselling our brains. But the way I see it, the exact mechanism that allowed intelligence to flourish in us exists within current neural networks. They are nowhere near being AGI or UGI yet, but I think these tools alone are all that's required.

The way I see it, if we rewound the clock far enough we would see primitive life with very basic neural networks beginning to develop in existing multicellular life (something like jellyfish, possibly). These neural networks, made from neurons, neurotransmitters and synapses, or possibly something more primitive, would begin forming the most basic logic over countless generations of evolution. But it wouldn't resemble anything close to reason or intelligence; it wouldn't have eyes, ears or any need for language. It would probably spend its first million years just trying to control movement.

We know that this process would have started from nothing: neural networks with no training data, just a free world to explore. And yet, over 500 million years later, here we are.

My argument is that modern neural networks work the same way biological brains do, at least in mechanism. The only technical difference is with neurotransmitters and the various dampening and signal boosting that can happen, along with neuromodulation. Given enough time and enough training, I firmly believe neural networks could develop reason. And given external sensors, they could develop thought from those input signals.
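To make the analogy concrete, here's a minimal sketch of a single artificial neuron, with weights standing in for synaptic strengths and the activation for a firing response (a toy illustration only, not a claim about how real neurons behave):

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    # Weighted sum of incoming signals, roughly analogous to synaptic strengths,
    # plus a bias playing the role of a firing threshold.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid squashes the result into a 0..1 "firing rate".
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.5, 1.0, -0.3], [0.8, -0.2, 0.4], bias=0.1))  # value between 0 and 1
```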

I don't think we would need to develop consciousness for it; rather, it would develop one itself, given enough time to train on its own.

A large hurdle, which might arguably be a good thing, is that we are largely in control of the training. When AI is used it does not currently learn and alter itself; it only memorises things. But I do remember a time when researchers allowed earlier models to learn on their own, and, the internet being the internet, they developed some wildly bad habits.

[–] [email protected] 1 points 21 hours ago

The first person recorded talking about AGI was Mark Gubrud. He made the quote above; here's another:

The major theme of the book was to develop a mathematical foundation of artificial intelligence. This is not an easy task since intelligence has many (often ill-defined) faces. More specifically, our goal was to develop a theory for rational agents acting optimally in any environment. Thereby we touched various scientific areas, including reinforcement learning, algorithmic information theory, Kolmogorov complexity, computational complexity theory, information theory and statistics, Solomonoff induction, Levin search, sequential decision theory, adaptive control theory, and many more. (Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, section 8.1.1, page 232)

As UGI largely encompasses AGI, we could easily argue that if modern LLMs are beginning to fit the description of UGI, then they're fulfilling AGI too. Although AGI's definition has more recently shifted towards replicating a human brain, I'd argue that forcing AI to replicate biology would only degrade it.

I don't believe it's a disservice to AGI, because AGI's goal is to create machines with human-level intelligence. But current AI is supposedly set to surpass collective human intelligence by the end of the decade.

And it's not a disservice to biological brains to summarise them as prediction machines. They work, very clearly. Sentience or not, if you simulated every atom in the brain it would likely do the same job, soul or no soul. It just raises the philosophical questions of "do we have free will or not?" and "is physics deterministic or not?" So much has been written on the brain being a prediction machine, and the only time that's recently been debated is when someone tries to differentiate us from AI.

I don't believe LLMs are AGI yet either; I think we're very far away from AGI. In a lot of ways I suspect we'll skip AGI and go for UGI instead. My firm opinion is that biological brains are just not effective enough. Our brains developed to survive the natural world, and I don't think AI needs that to surpass us. I think UGI will be the equivalent of our intelligence with the fat cut off. I believe it only resembles our irrational thought patterns now because the fat hasn't been stripped yet, but if something truly intelligent emerges, we'll probably see those irrational patterns cease to exist.

[–] [email protected] 0 points 1 day ago (1 children)

Maybe "work" is the wrong word; "same output" is closer. Just as a belt and a chain drive do the same thing, or how fluorescent, incandescent and LED lights all produce light even though they're completely different mechanisms.

What I was saying is that one is based on the other, so similar problems, like irrational thought even when the right answer is conjured, shouldn't be surprising. Although an animal brain and a neural network are not the same, the broad concept of how they work is.

[–] [email protected] -1 points 1 day ago (2 children)

Let's get something straight: no, I'm not saying we've met our modern definition of AGI, but we've practically got the original definition, coined before LLMs were a thing, which was that the proposed AGI agent should maximise "the ability to satisfy goals in a wide range of environments". I personally think we've just moved the goalposts a bit.
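For reference, Legg and Hutter later formalised that phrase into a "universal intelligence" measure, roughly (my transcription, so the notation may differ slightly from theirs):

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

i.e. an agent π's intelligence is its expected performance V over every computable environment μ, weighted so that simpler environments (lower Kolmogorov complexity K) count for more.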

Whether we'll ever have thinking, reasoning and possibly conscious AGI is another question entirely. But I do think current AI is similar to the brains that exist today.

Do you not agree that animal brains are just prediction machines?

That we have our own hallucinations all the time? Think visual tricks, lapses in memory, deja vu, or just the many mental disorders people can have.

Do you think our brain doesn't follow the path of least resistance in processing? Or do you think our thoughts come from elsewhere?

I seriously don't think animal brains, or human brains specifically, are so special that neural networks are beneath them. Sure, people didn't like being likened to animals, but it was the truth, and I, like many AI researchers, liken us to AI.

AI is primitive now, yet it can still pass the bar exam and medical licensing exams, work through complex physics problems, and write a book (soulless as it may be, like some authors) in a matter of seconds.

Whilst we may not have AGI, the question was about maths. The paper examined how the model did 36+59, and it went about it in an interesting way: it roughly predicted what the tens would be, 'knew' exactly what the units column was, and then put the two together. Although that's not how I, or even you, may do it, there are probably people who do it similarly.
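A toy sketch of that idea (my own illustration in Python, not the paper's actual circuits): one path makes a rough guess at the sum's size, another nails the last digit exactly, and the answer is the number near the rough guess that ends in that digit.

```python
import random

def rough_guess(a: int, b: int) -> int:
    """Fuzzy magnitude estimate: right to within a few units, never trusted exactly."""
    return a + b + random.randint(-3, 3)

def exact_last_digit(a: int, b: int) -> int:
    """Like a memorised one-digit addition table: 6 + 9 ends in 5."""
    return (a % 10 + b % 10) % 10

def toy_add(a: int, b: int) -> int:
    guess = rough_guess(a, b)
    digit = exact_last_digit(a, b)
    # Combine the two paths: snap the rough guess to the nearest
    # number that ends in the exact digit.
    candidates = [n for n in range(guess - 9, guess + 10) if n % 10 == digit]
    return min(candidates, key=lambda n: abs(n - guess))

print(toy_add(36, 59))  # 95
```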

All I argue is that AI is closer to how our brains think than people give it credit for, and with our brains being irrational quite often, it shouldn't be surprising that neural networks are also irrational at times.

[–] [email protected] -2 points 1 day ago (14 children)

I agree. This is the exact problem I think people need to face with neural-network AIs: they work the same way we do. Even if we analysed the human brain, it would look like wires connected to wires with different resistances all over the place, plus some other chemical influences.

I think everyone forgets that neural networks were brought into AI to replicate how animal brains work, and clearly, if it worked for us to get smart, it should work for something synthetic. Well, we've certainly answered that now.

Everyone saying "oh, it's just a predictive model, it's all maths, and maths can't be intelligent" is really questioning how their own brain works. We are just prediction machines: the brain releases dopamine when it correctly predicts things, and it teaches itself by correctly anticipating how things work. We modelled AI off ourselves. And if we don't understand how we work, of course we're not going to understand how it works.

[–] [email protected] -1 points 1 week ago (1 children)

I understand all the concerns about losing jobs and being left behind, but that's also what happened when the power loom was invented. An entire profession, gone. Looms were destroyed in protests, people died over the embrace of the new machines, and the inventors of each new version had their lives threatened. But imagine if we were still hand-weaving all our clothes today. Yeah, maybe they would be more durable than what we have now, but you wouldn't have many clothes, and a large portion of the population would just be weaving fabric.

The same thing happened when threshing machines, steam pumps, cranes and the printing press were invented. History repeats itself: jobs are lost to new innovation, but look at what new jobs and careers those inventions sparked.

It's hard to see it now, but automation is a good thing. It will drive new technology in which we will once again find new jobs and careers.

Believe me, as someone still getting into a career that is being threatened by AI, I'm certain there will still be work that isn't just manual labour.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

Most certainly not! Haha, you'd need a lot more information than just the key, like what the 2FA is for, plus a username and password. But if your internet traffic is ever intercepted, or a hacker traces your IP address and tracks your activity, that information can be found.

It should only ever be used for unimportant things.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (2 children)

I've found two that work, but one is playing up. Just append your key where it says KEY and save the shortcut:

https://totp.danhersam.com/?key=KEY

https://2fa.zone/2fa/KEY (broken but https://2fa.zone/ works)

Example: https://totp.danhersam.com/?key=7J64V3P3E77J3LKNUGSZ5QANTLRLTKVL
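For what it's worth, what those sites do is just standard TOTP (RFC 6238), which you can sketch with nothing but the Python standard library; the key below is the example key from the link above:

```python
import base64, hashlib, hmac, struct, time

def totp(base32_key: str, digits: int = 6, step: int = 30) -> str:
    # Decode the base32 secret, then HMAC-SHA1 the current 30-second counter.
    key = base64.b32decode(base32_key.upper())
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226, then keep the last `digits` digits.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("7J64V3P3E77J3LKNUGSZ5QANTLRLTKVL"))
```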

[–] [email protected] 1 points 1 month ago

Kriega backpacks are amazing! Highly recommend, even just for the chest buckle/clip.

[–] [email protected] 2 points 1 month ago (1 children)

I can chime in and say the Cardo Spirit is great. I've used mine on two helmets, which I have some tips for. When you use the sticky base, assume you're never getting it off, because I couldn't, haha. The helmet clip, although it seems flimsy at first, holds up really well too, but it's difficult to install.

You may have noticed that putting on your helmet already folds your ears pretty badly. The extra speaker width makes it twice as bad and a lot harder to unfold your ears. I always recommend a ski mask.

Lastly, these systems work well with ear protection, or at least Cardo does, but as you can imagine, the speakers are almost screaming at you at full blast. Although it sounds like a moderate volume to you, when you walk around it's blasting music and sound to everyone else. I remember forgetting this and walking into a petrol station with my helmet off and hearing protection still in; I only realised when I pulled one earplug out to talk to the attendant after I'd already been in line for a minute. Felt like a total douche, haha.

Overall though, the Spirit is great: music is great, calls are great, speakers are great; battery, volume, durability, water resistance, all great.

[–] [email protected] 2 points 1 month ago (4 children)

And that's why I use a URL on my desktop to a 2FA generator that takes the 2FA key as an argument and decodes it. It's like a password sticky note on the monitor, but for 2FA, haha.

[–] [email protected] 1 points 1 month ago

Ahh, I wasn't really thinking of the injury implications. In terms of post-war cleanup and the decades-on effects on civilians, I didn't think it was a problem. At least cleanup efforts would be simple enough with just a good pair of gloves and side cutters.