gerikson

joined 2 years ago
[–] [email protected] 5 points 7 hours ago

oh hey a tankie

[–] [email protected] 3 points 10 hours ago

so many off-by-one errors

also first time I had to run the code on a desktop machine because my VPS was too slow

[–] [email protected] 5 points 2 days ago (1 children)

Skipping this for now, there are only so many grid maps I can take.

[–] [email protected] 13 points 4 days ago (3 children)

FWIW I just got an email from GitHub announcing that Copilot is now free for my account (a very basic one).

[–] [email protected] 4 points 4 days ago (1 children)

day 18

bit of a breather episode

As long as you use A* / Dijkstra's (is there a functional difference if the edge weights are constant?) you'll get the shortest path. Part 2 was just simulation for me; starting from the state of part 1, it took a minute to run through the rest of the bytes.
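To answer my own parenthetical: with unit edge weights Dijkstra's priority queue pops nodes in exactly the order a plain BFS visits them, so A* only buys you fewer visited nodes, not a different answer. A rough BFS sketch of what I mean (grid size and names are made up for illustration, not taken from my actual solution):

```python
from collections import deque

def shortest_path(blocked, size):
    """BFS from (0, 0) to (size-1, size-1) on a grid with unit-cost moves.

    With uniform edge weights this is equivalent to Dijkstra; A* with an
    admissible heuristic would just prune the frontier.
    """
    start, goal = (0, 0), (size - 1, size - 1)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (x, y), dist = queue.popleft()
        if (x, y) == goal:
            return dist
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < size and 0 <= ny < size
                    and (nx, ny) not in blocked and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(((nx, ny), dist + 1))
    return None  # goal unreachable
```

A binary search over how many bytes have dropped would probably beat re-running the search after every byte, but the straight simulation finished in about a minute so I didn't bother.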

[–] [email protected] 3 points 5 days ago

re: p1

I literally created different test inputs for each of the examples given, and that found a lot of bugs for me. Specifically the difference between literal and combo operands.
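For anyone else who mixed them up: as I read the spec, a literal operand is just the 3-bit value itself, while a combo operand maps 0-3 to those values and 4-6 to the registers. A minimal sketch of the distinction (register layout is my own, not from my solution):

```python
def combo(operand, regs):
    """Combo operand: 0-3 are the literal values 0-3, 4-6 read registers
    A, B, C, and 7 is reserved and should never appear in a valid program."""
    if operand <= 3:
        return operand
    if operand <= 6:
        return regs["ABC"[operand - 4]]
    raise ValueError("combo operand 7 is reserved")

def literal(operand):
    """Literal operand: the operand itself, no register lookup."""
    return operand
```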

[–] [email protected] 8 points 6 days ago* (last edited 5 days ago)

I honestly had no idea of the original Russian meaning of the gloss. To me "refusenik" implies some sort of hard-left hippie.

Edit: finally went and read the linked article.

Schneier and Sanders:

We agree with Morozov that the “refuseniks,” as he calls them, are wrong to see AI as “irreparably tainted” by its origins.

Morozov:

Meanwhile, a small but growing group of scholars and activists are taking aim at the deeper, systemic issues woven into AI’s foundations, particularly its origins in Cold War–era computing. For these refuseniks, AI is more than just a flawed technology; it’s a colonialist, chauvinist, racist, and even eugenicist project, irreparably tainted at its core.

But the original term was not for people refusing to take an action - it was the state refusing to allow their actions! It's done a 180, but considering no-one now remembers the plight of Soviet Jews attempting to emigrate to Israel it's not that strange.

[–] [email protected] 9 points 6 days ago

Doctor Parkinson declared, "I'm not surprised to see you here
You've got smoker's cough from smoking, brewer's droop from drinking beer
I don't know how you came to get the Bette Davis knees
But worst of all young man, you've got industrial disease"

 

Problem difficulty so far (up to day 16)

  1. Day 15 - Warehouse Woes: 30m00s
  2. Day 12 - Garden Groups: 17m42s
  3. Day 14 - Restroom Redoubt: 15m48s
  4. Day 09 - Disk Fragmenter: 14m05s
  5. Day 16 - Reindeer Maze: 13m47s
  6. Day 13 - Claw Contraption: 11m04s
  7. Day 06 - Guard Gallivant: 08m53s
  8. Day 08 - Resonant Collinearity: 07m12s
  9. Day 11 - Plutonian Pebbles: 06m24s
  10. Day 04 - Ceres Search: 05m41s
  11. Day 02 - Red Nosed Reports: 04m42s
  12. Day 10 - Hoof It: 04m14s
  13. Day 07 - Bridge Repair: 03m47s
  14. Day 05 - Print Queue: 03m43s
  15. Day 03 - Mull It Over: 03m22s
  16. Day 01 - Historian Hysteria: 02m31s
[–] [email protected] 7 points 1 week ago (3 children)

I am genuinely shocked that Elsevier had this journal under its imprint.

[–] [email protected] 2 points 1 week ago

re: day 14 part 2

I had nfc how to solve this, but someone on the subreddit mentioned that minimizing the "safety score" was the way to go too ... I guess your explanation is the correct one. Also, the way the puzzle is generated is to start with the tree and go "backwards" a couple of thousand steps, then use a number of those as starting positions. Probably throw in some random robots as noise.
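For reference, a rough sketch of the "minimize the safety factor" heuristic, assuming the usual 101x103 grid and a list of (position, velocity) pairs; this is just the approach described above, not anyone's official solution:

```python
from math import prod

WIDTH, HEIGHT = 101, 103  # grid size from the puzzle text

def safety_factor(robots, t):
    """Product of robot counts in the four quadrants after t steps.

    robots: list of ((px, py), (vx, vy)) tuples. Robots on the middle
    row/column are ignored, as in part 1.
    """
    quads = [0, 0, 0, 0]
    for (px, py), (vx, vy) in robots:
        x = (px + vx * t) % WIDTH
        y = (py + vy * t) % HEIGHT
        if x == WIDTH // 2 or y == HEIGHT // 2:
            continue
        quads[(x > WIDTH // 2) + 2 * (y > HEIGHT // 2)] += 1
    return prod(quads)

def find_tree(robots):
    """Positions repeat after WIDTH * HEIGHT steps, so the tree frame is
    (heuristically) the step with the lowest safety factor in one period."""
    return min(range(WIDTH * HEIGHT), key=lambda t: safety_factor(robots, t))
```

Brute-forcing all 101 * 103 = 10,403 offsets is fast enough that the heuristic is the only clever part.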

[–] [email protected] 4 points 1 week ago (1 children)

Diamond Age is an interesting idea: the original Primer was for the elite and used distributed encryption to farm out the qualified work to skilled artisans. In the end, though, a cut-down Primer (using some sort of AI? it's been a long time since I read it) is used to educate and train the girl army used by one of the factions in the final battle.

It's not really explained, but I suspect the OG Primer had a robust payment model that ensured the service could be kept solvent.

[–] [email protected] 20 points 1 week ago (4 children)

Assuming the company will last 5 years is awfully optimistic.

 

The previous thread has fallen off the front page, feel free to use this for discussions on current problems

Rules: no spoilers, use the handy dandy spoiler preset to mark discussions as spoilers

 

This season's showrunners are so lazy, just re-using the same old plots and antagonists.

 

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly-detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into its training data.

 

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

42
submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]
 

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.

 

Yes, I know it's a Verge link, but I found the explanation of the legal failings quite funny, and I think it's "important" we keep track of which obscenely rich people are mad at each other so we can choose which of their kingdoms to be serfs in.
