this post was submitted on 09 Feb 2024
114 points (75.2% liked)

Technology

[–] [email protected] 105 points 6 months ago (1 children)

But this sounds like exactly the sort of thing that machines are better at than people, so it feels completely unsurprising that it was good at the task.

Turning multiple dials to manage speed and direction is not normally how humans interact with the world, so we can be pretty shit at it.

A basic motor, on the other hand, is designed precisely to turn like this.

This feels no different from the machine learning tools that were trained to play Mario a decade ago.

[–] [email protected] 10 points 6 months ago (1 children)

Right. Computers doing the shit that we don’t want to do for a living while giving us time to do things like paint cows that don’t have two heads.

[–] [email protected] 3 points 6 months ago* (last edited 6 months ago) (1 children)

Technology has removed a lot of time-consuming or boring jobs, but it has also made us spend our time in front of computers. The idea from the start was that we could live our lives while computers did our tasks, but we ended up on social media or in front of computer games.

It's great for companies though, since now they make money both when we work and when we are off work. The attention economy is very real.

[–] [email protected] 54 points 6 months ago (2 children)

I'm not really surprised; the main challenge of that game is motor control, something any machine can do with more precision than a human.

[–] [email protected] 18 points 6 months ago

I agree but also disagree. It's true that machines can perform fine motor control much more quickly and accurately than humans. But that by itself is often not enough.

This achievement should be somewhat surprising because of Moravec's paradox: the observation that, contrary to what early AI researchers expected, intelligence and reasoning skills are comparatively easy for a computer to simulate, while sensorimotor skills are in fact incredibly hard. Notice how, for example, chess engines started beating human players in the 90s or so, but we still don't have a robot that can do something as simple as pick raspberries (because surprise, for a machine, picking a raspberry is actually hard as shit).

[–] [email protected] 15 points 6 months ago* (last edited 6 months ago)

My eyes burst out of my sockets when an AI was able to multiply 8 prime numbers faster than a human.

[–] [email protected] 39 points 6 months ago* (last edited 6 months ago) (3 children)

They're calling everything "AI" nowadays... this sort of learning algorithm is old as fuck; here's an 8-year-old example. The main differences between the two situations are 1) some sensor(s) being used to "tell" the algorithm about the board state, and 2) the bare-bones robotic arms messing with the board.

[–] [email protected] 14 points 6 months ago (3 children)

I don't get what the issue is with calling it AI?

[–] [email protected] 24 points 6 months ago* (last edited 6 months ago) (1 children)

Even if we skip entirely the discussion about what "intelligence" is, the expression "artificial intelligence" has been used as a label for so many different technologies that it has become practically useless. It covers things like decision trees in games (even if a lot of them boil down to simple if/then statements), generative models, even theoretical systems that would reason in a human-like way. And evolutionary models like the one in the OP and the one in my link.
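Just to show what I mean by that first one, here's a toy sketch of the kind of "decision tree" a game enemy might run. Everything in it is made up, but a lot of so-called game AI really is just branching like this:

```python
def guard_ai(can_see_player: bool, distance_to_player: float, health: int) -> str:
    """Toy 'game AI': a decision tree that is nothing but if/then branches."""
    if not can_see_player:
        return "patrol"          # nothing to react to, keep walking the route
    if health < 20:
        return "flee"            # too hurt to fight
    if distance_to_player < 2.0:
        return "melee_attack"    # close enough to swing
    return "chase"               # otherwise, run toward the player

print(guard_ai(can_see_player=True, distance_to_player=5.0, health=80))  # "chase"
```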

So it's basically the 20s version of what "smart" was in the 90s/00s. Like this:


OK, I'm being cheeky and exaggerating it in the image macro, but it should give you an idea.

[–] infamousta 10 points 6 months ago (1 children)

AI has been a field within computer science since at least the 1950s. It encompasses algorithms for making decisions, which is why so many technologies are labeled this way. "Intelligence" may seem like an odd choice of terminology (some people conflate it with sentience or similar), but general machine intelligence is one goal of the field, and the applications of AI are putative steps toward that end.

Back when those guys started talking about what methods could get us there, things like decision trees, symbolic manipulation, and neural nets were all potential pathways on the table. So these get included in the field because that's where, and to what end, they were produced.

Another thing is that intelligence can be narrow in its domain. A character in a video game that needs to move from point A to point B can do so by following something like the A* pathfinding algorithm. In the domain of graph traversal/pathfinding, it's hard to imagine something much more intelligent (or fit to solve the problem) than A*, despite it being a simple algorithm.
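For the curious, here's roughly what that looks like: a minimal A* sketch on a made-up grid (the grid, step costs, and heuristic here are just illustrative choices):

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2D grid where 0 = walkable and 1 = wall.
    Returns a list of (row, col) cells from start to goal, or None."""
    def h(a, b):
        # Manhattan distance: an admissible heuristic for 4-way grid movement
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start, [start])]  # (f = g + h, g, node, path so far)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    f = ng + h((nr, nc), goal)
                    heapq.heappush(open_set, (f, ng, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # walks around the wall in the middle row
```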

But yeah, as a marketing term it is kind of silly since most people don’t know what it means. It remains a useful categorization for a broad field of study/research in CS though.

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago) (1 children)

I'm fine with the usage of the acronym and expression in CS, especially because scientists are damn stubborn when it comes to "This is not [word1]! This is [word2]! Don't screw with the terminology, you muppet!". (As they should be.)

So the bone that I have to pick is mostly with its marketing usage. Especially when it masks the underlying tech just to make it look fancier. (Like here.)

[–] [email protected] 2 points 6 months ago

It may be overused, but in my mind it's still the correct term. AI is quite a broad category, so you can fit many kinds of software algorithms under it. Perhaps it's misleading, as many people probably imagine AI to imply AGI, when it could just as well be narrow AI, which, even though not generally intelligent, may still be superhuman at one specific task, like playing the labyrinth in this example.

[–] [email protected] 8 points 6 months ago (1 children)

Exactly. Not to mention, why the fuck is it a surprise that a computer twisting the knobs "at superhuman speed" would be better at this game than humans? Like, no shit. We can't compute how the angle at which we're turning the knobs affects the speed of the ball, can't store that information for next time, and can't work out the best path without making the same mistakes twice. Because…we're human. We don't have that finely tuned ability…because we're not machines.

So…this isn't "AI", despite the robot hands they put in the thumbnail, and no shit a dedicated computer could master this game. I'm surprised it took six hours.

[–] [email protected] 4 points 6 months ago

Additionally, this shit is really easy to compute. It's all Newtonian physics, and there are only two relevant equations here, both simple: d = at²/2 + vt and a = g*sin(θ). It's really easy for a computer to reach those formulas, cancelling the advantage that humans would have (insight and actual knowledge of the system).
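For anyone who wants to see how simple it is, here's a toy simulation using exactly those two formulas; the tilt angle and time step are made-up numbers, not anything from the article:

```python
import math

g = 9.81                       # gravitational acceleration, m/s²
theta = math.radians(5)        # hypothetical board tilt of 5 degrees
a = g * math.sin(theta)        # a = g·sin(θ): acceleration along the tilted board

dt = 0.1                       # time step, s
v = 0.0                        # ball starts at rest
position = 0.0
for step in range(1, 11):
    position += a * dt**2 / 2 + v * dt   # d = a·t²/2 + v·t over one time step
    v += a * dt
    print(f"t={step * dt:.1f}s  position={position:.3f} m  velocity={v:.3f} m/s")
```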

[–] [email protected] 1 points 6 months ago

Here is an alternative Piped link(s):

here's an 8-year-old example

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] [email protected] 30 points 6 months ago (2 children)

I hope their jaw is alright

[–] [email protected] 18 points 6 months ago

I cringed at the headline but just posted it as is and thought the article was kinda interesting.

[–] [email protected] 11 points 6 months ago* (last edited 6 months ago)

It sure won't be after he discovers that his wife has chosen to leave him for her new AI-driven dildo.

It is just a matter of time

[–] [email protected] 29 points 6 months ago (2 children)

You don't need AI to do that. Seriously, it's such a buzzword where a relatively simple algorithm would suffice; don't tell me it's harder than the double pendulums or ball-bouncing contraptions that tech students have been making for a decade or more.

[–] [email protected] 15 points 6 months ago* (last edited 6 months ago) (4 children)

Not needing AI isn't the point. The point is that AI can do it, and AI doesn't require a programmer to design and debug a bespoke algorithm to accomplish a task. It would take a human a lot longer than 6 hours to perfect an algorithm to do this.

[–] [email protected] 19 points 6 months ago (16 children)

https://youtu.be/zQMKfuWZRdA

Here's the video the article is talking about. Saves you from reading the author's life story.

[–] [email protected] 4 points 6 months ago

Saves you from reading the author's life story.

I can probably do that and still have time to spend in the washroom before the video is over. Some of us read fast.

[–] [email protected] 3 points 6 months ago

Here is an alternative Piped link(s):

https://piped.video/zQMKfuWZRdA

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] [email protected] 17 points 6 months ago (1 children)

Oh yeah? Can it tilt the board all the way to one corner, then pop the other corner and send the ball flying right to the end?

No, it's amateur at best.

[–] [email protected] 2 points 6 months ago (1 children)

That's actually addressed in the article. They had to program it not to cheat after they found it trying to do exactly that.

[–] [email protected] 1 points 6 months ago

The true ability of AI/machine learning is to find and abuse all the loopholes and errors that exist in the training set.

"The only winning move is not to play" was simply WORP maximising its reward function.

[–] [email protected] 13 points 6 months ago

This is pretty much what I'd expect AI to be best at.

[–] [email protected] 12 points 6 months ago (2 children)

It's cool, but my question is (I didn't see this addressed in the article or the video, but I might have missed it): did it learn to win the game in general terms, or only on this one example? I mean, if the layout of the board were changed, would it still solve it?

[–] [email protected] 18 points 6 months ago* (last edited 6 months ago) (1 children)

They don't discuss it here, but it's most likely a reinforcement learning model that compares different generations of learned behavior to decide whether it's improving or not.

It would know that the ball going in the hole is "bad", and then try to avoid that happening. Each move that is "good" is then kept in a list of moves it should perform in the next generation of its plan to avoid the "bad" things. Loop -> fail -> logic build -> retry. After 6 hours, it has mapped a complete list of "good" moves to affect its final outcome.

To answer your question: no, it would not be able to use what it learned here on a different board layout. It's building reactions to events based on this one board, and is bound by its rules. You could use the same ruleset with another board, but it would need to learn it all again, just as a human would.

The thing about these models is less whether they will work (it's assumed they eventually will, through trial and error) and more how efficiently they will work. The number of generational cycles and retries is usually the benchmark when dealing with reinforcement learning, but they don't discuss that data here either.
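To make the loop concrete, here's a toy Q-learning sketch on a made-up 1-D "board" (the states, rewards, and hyperparameters are all invented; it just shows the fail -> score -> retry cycle, not the actual system from the article):

```python
import random

# Toy 1-D "board": states 0..5, hole at 0 ("bad"), goal at 5 ("good").
# Actions: -1 (tilt left) or +1 (tilt right).
ACTIONS = (-1, 1)
q = {(s, a): 0.0 for s in range(6) for a in ACTIONS}   # learned value of each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2                   # learning rate, discount, exploration rate

for episode in range(500):                              # each episode ≈ one "generation" of attempts
    state = 2                                           # start somewhere in the middle
    while 0 < state < 5:
        # Explore occasionally, otherwise exploit the best-known move
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = state + action
        reward = 1.0 if nxt == 5 else (-1.0 if nxt == 0 else 0.0)
        best_next = 0.0 if nxt in (0, 5) else max(q[(nxt, a)] for a in ACTIONS)
        # Nudge the estimate toward "reward now + best we think we can do afterwards"
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the greedy move from every non-terminal state should be "tilt right" (+1)
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(1, 5)})
```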

[–] [email protected] 2 points 6 months ago (1 children)

It did learn to use shortcuts to skip parts of the maze, and had to be told not to. Super interesting!

[–] [email protected] 1 points 6 months ago

Yes, but that's only because a generation found some random, specific motion that scored better, not because it analyzed the maze and worked out that a skip should be possible.

[–] [email protected] 6 points 6 months ago (3 children)

When the AI can solve one of these I'll be impressed:

https://youtu.be/UA33LOViUfw

[–] [email protected] 4 points 6 months ago

Here is an alternative Piped link(s):

https://piped.video/UA33LOViUfw

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] [email protected] 2 points 6 months ago

Hey, I also had a toy like that! Cool!

[–] [email protected] 2 points 6 months ago

A blast from the past... Damn, now I have the urge to recover mine... from somewhere in the storage room... if it still exists...

[–] [email protected] 4 points 6 months ago

Oh! That thing! Takes me back.

[–] [email protected] 3 points 6 months ago

The only hard thing about this game is controlling the board, which is the whole concept of it.

[–] [email protected] 2 points 6 months ago

Not sure which is more interesting: an AI teaching itself the PID control needed to deftly move the ball around, or a human programming the PID control to move the ball around. Sounds like a lot of electricity was used doing it the first way.
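For comparison, the hand-written version is only a few lines. A bare-bones PID controller sketch (the gains, setpoint, and measurement below are made-up numbers, not from the article):

```python
class PID:
    """Minimal PID controller: output = Kp*error + Ki*integral(error) + Kd*d(error)/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: drive the ball's x position toward a target by tilting the board
pid = PID(kp=2.0, ki=0.1, kd=0.5)
tilt_command = pid.update(setpoint=0.30, measurement=0.10, dt=0.05)  # one control step
print(tilt_command)
```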
