gerikson

joined 2 years ago
[–] [email protected] 4 points 3 days ago

Yeah it's been decades since I read Rhodes' history of the atom bomb, so I got the years a bit wrong. My point is that even if we couldn't explain exactly what was happening, there was something physically there, and we knew enough about it that Oppenheimer and co. could convince the US Army to build Oak Ridge and many other facilities at massive expense.

We can't say the same about "AI".

[–] [email protected] 6 points 3 days ago (2 children)

Yeah, my starting position would be that it was obvious to any competent physicist at the time (although there weren't that many) that the potential energy release from nuclear fission was a real thing - the "only" thing to do to weaponise it or use it for peaceful ends was engineering.

The analogy to "runaway X-risk AGI" is that there's a similar straight line from ELIZA to Acausal Robot God; all that's required is a bit of elbow grease and good ole fashioned American ingenuity. But my point is that apart from Yud and a few others, no serious person believes this.

[–] [email protected] 8 points 4 days ago* (last edited 4 days ago) (8 children)

noodling on a blog post - does anyone with more experience of LW/EA than me know if "AI safety" people are referencing the invention of nuclear weapons as a template for regulating/forbidding "AGI"?

[–] [email protected] 5 points 5 days ago

Adderall will do that to a fellow

[–] [email protected] 5 points 5 days ago* (last edited 5 days ago) (2 children)

I'm increasingly convinced that this person is in a dark place mentally, and am fighting an internal battle over whether to keep poking them for the lulz or just ignore them.

https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_cyrxm4

(I've seen this behavior on lobste.rs before and I think sometimes people literally get banned for their own good)

Edit: bored on a train, so I did the math. In the comment thread, this user has made 30% of the comments by count and 20% by "volume" (basically the number of bytes in the plaintext).
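
For anyone who wants to replicate that back-of-the-envelope calculation, here's a minimal sketch (the usernames and comment texts are hypothetical placeholders, and it assumes the thread's comments have already been extracted as plaintext):

```python
# Sketch: one commenter's share of a thread by comment count and by
# "volume" (bytes of plaintext). The data below is made up for illustration.
comments = [
    ("user_a", "LLMs are a dead end."),
    ("user_b", "Strong disagree, and here is a very long reply explaining why..."),
    ("user_b", "Another follow-up reply to the same comment."),
    ("user_c", "Meta comment with hyperlinks."),
]

target = "user_b"

total_count = len(comments)
total_bytes = sum(len(text.encode("utf-8")) for _, text in comments)

target_count = sum(1 for author, _ in comments if author == target)
target_bytes = sum(
    len(text.encode("utf-8")) for author, text in comments if author == target
)

print(
    f"{target}: {target_count / total_count:.0%} of comments by count, "
    f"{target_bytes / total_bytes:.0%} by volume"
)
```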

[–] [email protected] 12 points 5 days ago* (last edited 5 days ago)

Back when I was an undergrad I saw a letter addressed to the department from a German gentleman who claimed to have invented a perpetual motion machine (this was the department of mechanics). I remember the letter being quite typographically florid and especially the author’s likeness in silhouette.

My advisor had fun finding the flaw in the proposal. Took a few minutes.

I often wondered if demolishing a PM suggestion would be a good extra credit question on an exam.

[–] [email protected] 7 points 5 days ago

He retweeted Ivanka praising him... 🤢

[–] [email protected] 8 points 6 days ago (5 children)

I recognize everyone except Leopold. Increase my suffering by telling me who it is.

[–] [email protected] 6 points 6 days ago (1 children)

enjoy your flags from outraged simps

[–] [email protected] 7 points 1 week ago (3 children)

I just got a hit of esprit d'escalier, and wished I'd replied to this

But the road to Hacker News is paved with good intentions.

with

So too is the road to Roko's Basilisk.

[–] [email protected] 9 points 1 week ago (11 children)

as an amuse bouche for the horrors that will follow this year, please enjoy this lobste.rs user reaching the melting-down end stage after going full Karen at someone who agrees with a submitted post saying LLMs are a dead end when it comes to AI.

https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_tefto4

Thankfully, accusing someone of being a crapto promoter is seen as an attack that is beyond the pale.

Highlights from the rest of the thread include bemoaning the lack of a downvote button for registering disapproval:

https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_ft9mpj

unilaterally deciding to reply multiple times to one comment, necessitating a meta comment with hyperlinks:

https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_jjk5ei

And of course they're a MoreWronger (moroner?)

 

current difficulties

  1. Day 21 - Keypad Conundrum: 01h01m23s
  2. Day 17 - Chronospatial Computer: 44m39s
  3. Day 15 - Warehouse Woes: 30m00s
  4. Day 12 - Garden Groups: 17m42s
  5. Day 20 - Race Condition: 15m58s
  6. Day 14 - Restroom Redoubt: 15m48s
  7. Day 09 - Disk Fragmenter: 14m05s
  8. Day 16 - Reindeer Maze: 13m47s
  9. Day 22 - Monkey Market: 12m15s
  10. Day 13 - Claw Contraption: 11m04s
  11. Day 06 - Guard Gallivant: 08m53s
  12. Day 08 - Resonant Collinearity: 07m12s
  13. Day 11 - Plutonian Pebbles: 06m24s
  14. Day 18 - RAM Run: 05m55s
  15. Day 04 - Ceres Search: 05m41s
  16. Day 23 - LAN Party: 05m07s
  17. Day 02 - Red Nosed Reports: 04m42s
  18. Day 10 - Hoof It: 04m14s
  19. Day 07 - Bridge Repair: 03m47s
  20. Day 05 - Print Queue: 03m43s
  21. Day 03 - Mull It Over: 03m22s
  22. Day 19 - Linen Layout: 03m16s
  23. Day 01 - Historian Hysteria: 02m31s
 

Problem difficulty so far (up to day 16)

  1. Day 15 - Warehouse Woes: 30m00s
  2. Day 12 - Garden Groups: 17m42s
  3. Day 14 - Restroom Redoubt: 15m48s
  4. Day 09 - Disk Fragmenter: 14m05s
  5. Day 16 - Reindeer Maze: 13m47s
  6. Day 13 - Claw Contraption: 11m04s
  7. Day 06 - Guard Gallivant: 08m53s
  8. Day 08 - Resonant Collinearity: 07m12s
  9. Day 11 - Plutonian Pebbles: 06m24s
  10. Day 04 - Ceres Search: 05m41s
  11. Day 02 - Red Nosed Reports: 04m42s
  12. Day 10 - Hoof It: 04m14s
  13. Day 07 - Bridge Repair: 03m47s
  14. Day 05 - Print Queue: 03m43s
  15. Day 03 - Mull It Over: 03m22s
  16. Day 01 - Historian Hysteria: 02m31s
 

The previous thread has fallen off the front page; feel free to use this one for discussion of the current problems.

Rules: no spoilers, use the handy dandy spoiler preset to mark discussions as spoilers

 

This season's showrunners are so lazy, just re-using the same old plots and antagonists.

 

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly-detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into their training data.

 

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

42
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]
 

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.
