BrickedKeyboard

joined 1 year ago
[–] [email protected] -2 points 1 year ago (6 children)

Software you write can have a "belief" as well. The course I took on this had us write Kalman filters, where you start with some estimate of a quantity. That estimate is your "belief", and it comes with a variance.

Each measurement gives you a (value, variance) pair, where the variance is derived from the quality of the sensor that produced it.
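A minimal sketch of that update step in Python (the numbers and names here are my own toy illustration; a real Kalman filter also has a predict step that grows the variance between measurements):

```python
# One measurement update of a 1-D Kalman filter: fuse a prior belief
# (estimate, variance) with a (value, variance) measurement.
def update_belief(estimate, variance, value, meas_variance):
    gain = variance / (variance + meas_variance)   # how much to trust the measurement
    new_estimate = estimate + gain * (value - estimate)
    new_variance = (1.0 - gain) * variance         # uncertainty always shrinks
    return new_estimate, new_variance

# A vague prior belief, then one reading from a decent sensor.
belief = (1.0, 4.0)
belief = update_belief(*belief, value=0.6, meas_variance=0.5)
print(belief)  # estimate moves most of the way to 0.6, variance shrinks
```

With a high prior variance the measurement dominates; with thousands of prior observations (low variance) a single contradicting reading barely moves the estimate.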

It's an overloaded word because humans are often unwilling to update their beliefs unless they are simple things, like "I believe the forks are in the drawer to the right of the sink". You believe that because you think you saw them there last. There is uncertainty: you might have misremembered, since your memory is unreliable and so are your eyes. If it's your kitchen and you've had thousands of observations, your belief has low uncertainty; if it's a new place, your belief has high uncertainty.

If you go and look right now and the forks are in fact there, you update your belief.

[–] [email protected] -1 points 1 year ago

Having trouble with quotes here, so the bold text is the person I'm replying to: **I do not find it likely that 25% of currently existing occupations are going to be effectively automated in this decade, and I don't think generative machine learning models like LLMs or Stable Diffusion are going to be the sole major driver of that automation.**

  1. I meant 25% of the tasks, not 25% of the jobs: some combination where AI systems do, say, 90% of some jobs and 10% of others. I was also implicitly weighting by labor hours, so if 10% of all the labor hours worked by US citizens are spent driving, and AI can drive, that would be 10% automation (toy sketch below). Does this change anything in your response?
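Just to make the weighting concrete, a minimal sketch where every number is made up:

```python
# Labor-hour-weighted automation: weight each task by its share of all
# labor hours, rather than counting whole jobs. All numbers invented.
tasks = [
    # (task, share of total labor hours, fraction AI can do)
    ("driving",         0.10, 1.00),
    ("writing code",    0.05, 0.25),
    ("routine email",   0.05, 0.90),
    ("everything else", 0.80, 0.10),
]

automated = sum(share * frac for _task, share, frac in tasks)
print(f"labor-hour-weighted automation: {automated:.0%}")  # 24%
```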

**No. Even if Skynet had full control of a robot factory, heck, all the robot factories, and staffed them with a bunch of sleepless, foodless, always-motivated droids, it would still face many of the constraints we do. Physical constraints (a conveyor belt can only go so fast without breaking), economic constraints (where do the robot parts and the money to buy them come from? Expect robotics IC shortages when semiconductor fabs' backlogs are full of AI accelerators), even basic motivational constraints (who the hell programmed Skynet to be a paperclip C3PO maximizer?)**

  2. I didn't mean "Skynet". I meant AI systems. ChatGPT and all the other LLMs are AI systems; so is Midjourney with ControlNet. Humans want things. They want robots to make the things. They order robots to make more robots (initially using a lot of human factory workers to kick it off). Eventually robots get really cheap, making the things humans want cheaper, and that's where you get the limited form of Singularity I mentioned.

At all points humans are ordering all these robots and using all the things the robots make. An AI system is many parts: device drivers, hardware, cloud services, many neural networks, simulators, and so on. One thing that might slow it all down is the enormous list of IP needed to make even one robot work: the owners of all those software packages will still demand a cut, even if the robot hardware is built by factories staffed almost entirely by robots.

**I just think the threat model of autonomous robot factories making superhuman android workers and replicas of themselves at an exponential rate is pure science fiction.**

  3. Again, that's a detail I didn't give. Obviously there are many kinds of robotic hardware, each specialized for its task, and the only reason to make a robot humanoid is if it's a sexbot or otherwise used as a "face" for humans. None of the hardware has to be superhuman, though obviously industrial robot arms have greater lifting capacity than humans. To give a sense of what the real stuff would look like: most robots will be in no way superhuman, in that they will lack sensors where they don't need them, won't be armored, won't even have onboard batteries or compute hardware, will miss entire modalities of human sense, cannot replicate themselves, and so on. It's just hardware that does a task, made in a factory, and it takes many factories with these machines in them to make all the parts used.


[–] [email protected] -3 points 1 year ago* (last edited 1 year ago) (15 children)

Consider a flying saucer cult. Clearly a cult, great leader, mothership coming to pick everyone up, things will be great.

...What if telescopes show a large object decelerating into the solar system, the flare from its matter-annihilation engine clearly visible? You can go pay $20 a month, rent a telescope, and see the flare.

The cult, uh, points to their "Sequences" of writings by the Great Leader, and some of the stuff in them is lining up with the imminent arrival of this interstellar vehicle.

My point is that lesswrong knew about GPT-3 years before the mainstream found it; many OpenAI employees post there, etc. If the imminent arrival of AI were fake (like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs), that would be one thing. But I mean, pay $20 a month and, man, this tool seems to be smart; what could it do if it could learn from its mistakes and had the vision module deployed...

Oh, and I guess the other plot twist in this analogy: the Great Leader is saying the incoming alien vehicle will kill everyone, tearing up his own Sequences of rants, and that's actually not a totally unreasonable outcome if you could see an alien spacecraft approaching Earth.

And he's telling people to do stupid stuff, like nuke each other so the aliens will go away, among other unhinged rants, and his followers are eating it up.

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (4 children)

It would be lesswrongness.

Just to split where the gap is:

  1. lesswrongers think powerful AGI systems that can act on their own against humans will soon exist, and will be able to escape to the internet.
  2. I work in AI and think powerful general AI systems (not necessarily the same as AGI) will soon exist, but that, if built well, they will be unable to act against humans without orders, and unable to escape or do many of the other things lesswrongers claim.
  3. You believe AGI of any flavor is a very long way away, beyond your remaining lifespan?
[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

Hi David. The reason I dropped by was that the whole concept of knowing the distant future with too much certainty seemed like a deep flaw, and I have noticed lesswrong itself is full of nothing but "cultist" AI doomers. Everyone kinda parrots a narrow range of conclusions, mainly about imminent AGI killing everyone, and this, ironically, doesn't seem very rational...

I actually work on the architecture for current production AI systems, and whenever I mention approaches that already work fine and suggest we could control more powerful AI the same way, I get downvoted. So I was trying to differentiate between:

A. This is a club of smart people, even smarter than the lesswrongers, who can't see the flaws in their own arguments!

B. This is a club of... well, the reason I called it boomers was that I felt the current news and AI papers make each of the questions I asked a reasonable and conservative outcome. For example, posters here are saying for (1), "no, it won't do 25% of the jobs". That was not the question; it was 25% of the tasks. Since, for example, Copilot already writes about 25% of my code, and GPT-4 helps me with emails to my boss, from my perspective this is reasonable. The rest of the questions build on (1).
