rook

joined 2 years ago
[–] [email protected] 5 points 17 hours ago

And back on the subject of builder.ai, there’s a suggestion that it might not have been A Guy Instead, and the whole 700 human engineers thing was a misunderstanding.

https://blog.pragmaticengineer.com/builder-ai-did-not-fake-ai/

I’m not wholly sure I buy the argument, which is roughly

  • people from the company are worried that this sort of news will affect their future careers.
  • humans in the loop would have had far too high latency; getting an llm to do the work would have been much faster and easier than having humans try to fake it at speed and scale.
  • there were over a thousand “external contractors” writing loads of code, but that’s not the same as being Guys Instead.

I guess the question then is: if they did have a good genai tool for software dev… where is it? Why wasn’t Microsoft interested in it?

[–] [email protected] 10 points 18 hours ago* (last edited 18 hours ago) (3 children)

Turns out some Silicon Valley folk are unhappy that a whole load of waymos got torched, fantasised that the cars could just gun down the protesters, and used genai video to bring their fantasies to some vague approximation of “life”

https://xcancel.com/venturetwins/status/1931929828732907882

The author, Justine Moore, is an investment partner at a16z. May her future ventures be incendiary and uninsurable.

(via garbageday.email)

[–] [email protected] 8 points 22 hours ago (1 children)

I was reading a post by someone trying to make shell scripts with an llm, and at one point the system suggested making a directory called ~ (which is a shorthand for your home directory in a bunch of unix-alikes). When the user pointed out this was bad, the llm recommended remediation using rm -r ~ which would of course delete all your stuff.

So, yeah, don’t let the approximately-correct machine do things by itself, when a single character substitution can destroy all your stuff.
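To illustrate how close the two outcomes sit (a hypothetical sketch, not the actual script from the post): the shell only performs tilde expansion on an *unquoted* ~, so a literal directory named ~ and your entire home directory are one pair of quotes apart.

```shell
#!/bin/sh
# Work in a throwaway directory so nothing here touches real files.
workdir=$(mktemp -d)
cd "$workdir"

# The quoted '~' is a literal one-character directory name -- this is
# roughly what the llm's suggested script would have created.
mkdir '~'

# Unquoted, the shell expands ~ to $HOME before rm ever runs, so the
# llm's "fix" of `rm -r ~` would have meant `rm -r /home/you`.
echo rm -r ~      # prints the expanded path: rm -r /home/you
echo rm -r './~'  # prints the literal path:  rm -r ./~

# The safe remediation quotes the name so no expansion happens:
rm -r './~'

cd / && rmdir "$workdir"
```

Same command, one character of quoting difference, wildly different blast radius, which is exactly why letting the approximately-correct machine run things unsupervised is a bad idea.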

And JFC, being surprised that something called “YOLO” might be bad? What were people expecting? --all-the-red-flags

[–] [email protected] 18 points 2 days ago (2 children)

LLMs aren’t profitable even if they never had to pay a penny on license fees. The providers are losing money on every query, and can only be sustained by a firehose of VC money. They’re all hoping for a miracle.

[–] [email protected] 11 points 3 days ago

(this probably deserves its own post because it seems destined to be a shitshow full of the worst people, but I know nothing about the project or the people currently involved)

[–] [email protected] 19 points 3 days ago* (last edited 3 days ago) (12 children)

Did you know there’s a new fork of xorg, called x11libre? I didn’t! I guess not everyone is happy with wayland, so this seems like a reasonable…

It's explicitly free of any "DEI" or similar discriminatory policies.. [snip]

Together we'll make X great again!

Oh dear. Project members are of course being entirely normal about the whole thing.

Metux, one of the founding contributors, is Enrico Weigelt, who has reasonable opinions like “everyone except the nazis were the real nazis in WW2”, and who also went on an anti-vax (and possibly eugenicist) rant on the linux kernel mailing list, as you do.

I’m sure it’ll be fine though. He’s a great coder.

(links were unashamedly pillaged from this mastodon thread: https://nondeterministic.computer/@mjg59/114664107545048173)

[–] [email protected] 17 points 5 days ago

Relatedly, the gathering of computer vision and machine learning (useful, actually works in real life, can be used to make products that turn a profit or that people actually want, and sometimes even all of the above at the same time) together with LLMs under the umbrella of “AI” is something I find particularly galling.

The eventual collapse of the AI bubble and the subsequent second AI winter is going to take a lot of useful technology with it that had the misfortune to be standing a bit too close to LLMs.

[–] [email protected] 13 points 1 week ago

It isn’t clear that anyone in trump’s government has ever paused to consider that any of their plans might have downsides.

[–] [email protected] 17 points 1 week ago* (last edited 1 week ago) (9 children)

Little table of “ai fluency” from zapier via linkedin: https://www.linkedin.com/posts/wadefoster_how-do-we-measure-ai-fluency-at-zapier-activity-7336442774650556416-nKND

(original source https://old.mermaid.town/@Kymberly/114635617736977394)

The author says it isn’t a requirements checklist, but it does have a column marked “unacceptable”, containing gems like

Calls AI coding assistants too risky

Has never tested AI-generated code

Relies only on Stack Overflow snippets

Angry goose meme: what was the ai code generator trained on, motherfucker?

[–] [email protected] 11 points 1 week ago (3 children)

I don’t think it’s a stretch to see the independence of spacex classified as a national security risk, the company nationalised (though not called that, because that sounds too socialist), and associated people such as elon declared traitors. Shouldn’t even be that difficult these days, seeing how he’s trashed his own reputation, and it’ll be good to encourage the other plutocrats to stay in line.

Night of the long knives is in the playbook, after all

[–] [email protected] 10 points 1 week ago (1 children)

AI audio transcription is great.

https://mastodon.social/@nixCraft/114627512725655987

Sean Murray @NoMansSky

Ignore the auto-generated captions. We did not have a secret room hiding deaf kids.

Nintendo never once sent us deaf kids. We were hiding dev-kits. DEV-KITS.

[–] [email protected] 15 points 1 week ago (3 children)

For those of you who haven’t already seen it, r/accelerate is banning users who think they’ve talked to an AI god.

https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/

There’s some optimism from the redditors that the LLM folk will patch the problem out (“you must be prompting it wrong”), though they assume the companies somehow just don’t know about the issue yet.

As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.

There’s some dubious self-published analysis which coined the term “neural howlround” to mean some sort of undesirable recursive behaviour in LLMs that I haven’t read yet (and might not, because it sounds like cultspeak) and may not actually be relevant to the issue.

It wraps up with a surprisingly sensible response from the subreddit staff.

Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well.

AI boosters not claiming expertise in something, or offloading the task to an LLM? Good news, though surprising.
