Timmy the Pencil (lemmy.world)
submitted 1 month ago by [email protected] to c/[email protected]
[-] [email protected] 92 points 1 month ago

In a robotics lab where I once worked, they used to have a large industrial robot arm with a binocular vision platform mounted on it. It used the two cameras to track an object's position in three-dimensional space and stay a set distance from the object.

It worked the way our eyes work: the cameras panned and tilted quickly to follow small movements, while the platform panned and tilted and the arm repositioned itself to follow larger ones.

Viewers watching the robot would get an eerie, false sense of consciousness from it, because the camera movements matched what we expect people's eyes to do.

Someone also put a necktie on the robot, which didn't hurt the illusion.
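For anyone curious, a minimal sketch of that two-tier tracking loop might look something like this. All names, thresholds, and the callback interface are invented for illustration; this is not the lab's actual code.

```python
import numpy as np

SET_DISTANCE = 1.0   # desired standoff from the tracked object, metres (made up)
SMALL_MOVE = 0.05    # errors below this are absorbed by camera pan/tilt alone (made up)

def control_step(object_pos, camera_pos, set_camera_pan_tilt, move_platform):
    """One control cycle. object_pos and camera_pos are 3D points (np.ndarray)."""
    offset = object_pos - camera_pos
    distance = np.linalg.norm(offset)
    error = distance - SET_DISTANCE

    # Fast path: always re-aim the cameras at the object (the eye-like motion).
    set_camera_pan_tilt(offset / distance)

    # Slow path: only reposition the platform/arm when the object has moved a lot
    # (the head-and-body-like motion), settling back at the set distance.
    if abs(error) > SMALL_MOVE:
        new_camera_pos = object_pos - (offset / distance) * SET_DISTANCE
        move_platform(new_camera_pos)
```

Each cycle the cameras re-aim immediately, and the platform/arm only follows when the object has drifted far enough, which is roughly the eye-versus-head split described above.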

[-] CaptainEffort 86 points 1 month ago* (last edited 1 month ago)
[-] [email protected] 21 points 1 month ago

We've been had

[-] [email protected] 14 points 1 month ago* (last edited 1 month ago)

Finishing up a rewatch of Community as we speak. Funny to see the gimmick (purportedly) used in real life.

[-] [email protected] 13 points 1 month ago

He was so streets ahead.

[-] [email protected] 9 points 1 month ago

That was my first thought!

[-] [email protected] 59 points 1 month ago

How would we even know if an AI is conscious? We can't even know that other humans are conscious; we haven't yet solved the hard problem of consciousness.

[-] [email protected] 27 points 1 month ago

Does anybody else feel rather solipsistic or is it just me?

[-] [email protected] 19 points 1 month ago

I doubt you feel that way since I'm the only person that really exists.

Jokes aside, when I was in my teens back in the 90s I felt that way about pretty much everyone that wasn't a good friend of mine. Person on the internet? Not a real person. Person at the store? Not a real person. Boss? Customer? Definitely not people.

I don't really know why it started, when it stopped, or why it stopped, but it's weird looking back on it.

[-] SuddenDownpour 8 points 1 month ago

Andrew Tate has convinced a ton of teenage boys to think the same, apparently. Kinda ironic.

[-] [email protected] 14 points 1 month ago

A Cicero a day and your solipsism goes away.

Rigour is important, and at the end of the day we don't really know anything. However, this stuff is supposed to be practical; at a certain arbitrary point you need to say "nah, I'm certain enough of this statement being true that I can claim that it's true, thus I know it."

[-] [email protected] 5 points 1 month ago

I'd say that, in a sense, you answered your own question by asking a question.

ChatGPT has no curiosity. It doesn't ask about things unless it needs specific clarification. We know you're conscious because you can come up with novel questions that ChatGPT wouldn't ask spontaneously.

[-] [email protected] 35 points 1 month ago

Noooooo Timmy the Pencil! I haven't even seen this demonstration but I am deeply affected.

[-] MeDuViNoX 31 points 1 month ago

WTF? My boy Tim didn't deserve to go out like that!

[-] [email protected] 8 points 1 month ago

Look at the bright side: there are two Tiny Timmys now.

[-] [email protected] 31 points 1 month ago

Wait, wasn't this directly from Community, the very first episode?

That professor's name? Albert Einstein. And everyone clapped.

[-] [email protected] 12 points 1 month ago

Yes it was - minus the googly eyes

[-] [email protected] 26 points 1 month ago

Found it

https://youtu.be/z906aLyP5fg?si=YEpk6AQLqxn0UP6z

Good job OP. Took a scene from a show from 15 years ago and added some craft supplies from Kohl's. Very creative.

[-] [email protected] 31 points 1 month ago

RIP Timmy
We barely knew ye

[-] [email protected] 23 points 1 month ago

We met you only just at noon,
A friend like Tim we barely knew.
Taken from us far too soon,
Yellow Standard #2.

[-] [email protected] 6 points 1 month ago

torn by fingers malcontent, pink eraser left unspent

[-] [email protected] 27 points 1 month ago

Tbf I'd gasp too, like wth

[-] [email protected] 11 points 1 month ago* (last edited 1 month ago)

Humans are so good at imagining things as alive that just reading a story about Timmy the pencil elicits feelings of sympathy and reactions.

We are not good judges of things in general. Maybe one day these AI tools will actually help us and give us better perception and wisdom for dealing with the universe, but that end-goal is a lot further away than the tech-bros want to admit. We have decades of absolute slop and likely a few disasters to wade through.

And there's going to be a LOT of people falling in love with super-advanced chat bots that don't experience the world in any way.

[-] [email protected] 7 points 1 month ago

next you're going to tell me the moon doesn't have a face on it

[-] [email protected] 7 points 1 month ago

It's clearly a rabbit.

[-] [email protected] 25 points 1 month ago
[-] [email protected] 17 points 1 month ago

And now ChatGPT has a friendly-sounding voice with simulated emotional inflections...

[-] [email protected] 14 points 1 month ago

I don't know why this bugs me but it does. It's like he's implying Turing was wrong and that he knows better. He reminds me of those "we've been thinking about the pyramids wrong!" guys.

[-] [email protected] 18 points 1 month ago

I wouldn't say he's implying Turing himself was wrong. Turing merely formulated a test for indistinguishability, and the test still measures exactly that.
It's just that indistinguishability is no longer useful as a metric, so we should stop using Turing tests.

[-] [email protected] 10 points 1 month ago

Nah. Turing skipped this matter altogether. In fact, it's the main point of the Turing test aka imitation game:

I PROPOSE to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

In other words, what Turing is saying is "who cares if they think? Focus on their behaviour dammit, do they behave intelligently?". And consciousness is intrinsically tied to thinking, so... yeah.

[-] [email protected] 10 points 1 month ago* (last edited 1 month ago)

We're good at scamming investors into thinking that a room full of monkeys on typewriters can be "AI." And all it takes to make that happen is to waste time, resources, lives, and money (ESPECIALLY money) on building an army of fusion-powered robots to beat the monkeys into working just a little bit harder.

Because that's business's solution to everything: work harder, not smarter.

[-] [email protected] 10 points 1 month ago

Anthropomorphism is one hell of a drug

[-] [email protected] 9 points 1 month ago

I used to tell my kids "Just pretend to sleep, trick me into thinking you are sleeping, I don't know the difference. Just pretend, lay there with your eyes closed."

I could tell, of course, and they did end up asleep, but I think that is like the Turing test: if you are talking to someone and it's not a person but you can't tell, then from your perspective it's a person. Not necessarily from the perspective of the machine; we can only know our own experience, so that is the measure.

[-] [email protected] 9 points 1 month ago

That is one astute point! Damn.

[-] [email protected] 5 points 1 month ago

Yeah, which is why it was in the first episode of the show Community.

[-] [email protected] 8 points 1 month ago

Alan Watts, talking on the subject of Buddhist vegetarianism, said that even if vegetables and animals both suffer when we eat them, vegetables don't scream as loudly. It is not good for your own mental state to perceive something else suffering, whether or not that thing is actually suffering, because it puts you in an unhealthy position of ignoring your own inherent sense of compassion.

[-] [email protected] 7 points 1 month ago

The whole time everyone has been freaking out about AI I've been quietly enjoying just this fact. Like "neat, this place triggers my fear response", "neat, advanced text prediction triggers my 'talking to person' response."

[-] [email protected] 6 points 1 month ago* (last edited 1 month ago)

We are the only species on Earth that observes "Shark Week". Sharks don't even observe "Shark Week", but we do. For the same reason I can pick up this pencil, tell you its name is Steve and go like this (breaks pencil) and part of you dies just a little bit on the inside, because people can connect with anything. We can sympathize with a pencil, we can forgive a shark, and we can give Ben Affleck an Academy Award for screenwriting.

~ Jeff Winger

[-] [email protected] 6 points 1 month ago

Were people maybe not shocked at the action or outburst of anger? Why are we assuming every reaction is because of the death of something “conscious”?

[-] [email protected] 10 points 1 month ago* (last edited 1 month ago)

i mean, i just read the post to my very sweet, empathetic teen. her immediate reaction was, "nooo, Tim! 😢"

edit - to clarify, i don't think she was reacting to an outburst, i think she immediately demonstrated that some people anthropomorphize very easily.

humans are social creatures (even if some of us don't tend to think of ourselves that way). it serves us, and the majority of us are very good at imagining what others might be thinking (even if our imaginings don't reflect reality), or identifying faces where there are none (see - outlets, googly eyes).

[-] [email protected] 6 points 1 month ago

Right, it's shocking that he snaps the pencil because the listeners were playing along, and then he suddenly went from pretending to have a friend to pretending to murder said friend. It's the same reason you might gasp when a friendly NPC gets murdered in your D&D game: you didn't think they were real, but you were willing to pretend they were.

The AI hype doesn't come from people who are pretending. It's a different thing.

[-] [email protected] 5 points 1 month ago

Remember that AI should never be looked at as a replacement for a human being

this post was submitted on 28 May 2024
1046 points (96.9% liked)

Fuck AI

834 readers

A place for all those who loathe machine-learning to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 3 months ago