madsen

joined 1 year ago
[–] [email protected] 9 points 2 months ago

Yup. I remember it from when Atlanta hosted the Olympic games some time in the '90s. Despicable.

[–] [email protected] 14 points 2 months ago (4 children)

Didn't something similar happen in Turkey with Erdogan a few years back? Pretty sure he was accused of being behind it himself too; don't know what the final verdict was though.

I think it's a pretty common accusation, just like when a politician is attacked, someone will invariably suggest that they staged it in order to get more support.

[–] [email protected] 0 points 2 months ago

I read every single word of it, twice, and I was laughing all the way through. I'm sorry you don't like it, but it seems strange that you immediately assume that I haven't read it just because I don't agree with you.

[–] [email protected] 4 points 2 months ago (4 children)

They write that it started in the roof. Tradesmen again?

[–] [email protected] 43 points 2 months ago (13 children)

This is such a fun and insightful piece. Unfortunately, the people who really need to read it never will.

[–] [email protected] 7 points 3 months ago* (last edited 3 months ago) (1 children)

I don't even eat that kind of ramen, so the ban "only" affects me on principle, but this is simply the wildest clown show.

The DTU report also makes clear that no actual measurements were made of the noodles' content of capsaicin, the chemical compound found in chili.

Instead, the experts who wrote the report for the agency calculated the content by reading descriptions of the products on a website where they were sold.

The website asiatorvet.dk has, among other things, stated: "NOTE: this near-lethal version has more than 13,000 fucking Scoville". Scoville is a unit from which the capsaicin content can be calculated.

Based on that description, the experts calculated how high the capsaicin content of the noodles is.
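For what it's worth, the kind of back-calculation the report apparently relies on is trivial. A rough sketch, assuming the common approximation that pure capsaicin measures about 16,000,000 SHU (the report's actual method and numbers are not reproduced here):

```python
# Back-of-the-envelope: estimate capsaicin concentration from a
# Scoville rating alone, as the report's experts apparently did.
# Assumption (not from the report): pure capsaicin ~ 16,000,000 SHU,
# so concentration in ppm by weight is roughly SHU / 16.

PURE_CAPSAICIN_SHU = 16_000_000

def capsaicin_ppm_from_shu(shu: float) -> float:
    """Estimate capsaicin concentration (ppm) from Scoville heat units."""
    return shu / (PURE_CAPSAICIN_SHU / 1_000_000)

# The product description claims "more than 13,000 Scoville":
print(capsaicin_ppm_from_shu(13_000))  # roughly 812.5 ppm
```

Which illustrates the point: the input is a marketing blurb, so however exact the arithmetic, the result inherits the blurb's (lack of) precision.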

The experts also note in the report that the website shows pictures of three "boys/young men".

- Based on facial expressions and body language, it appears that two of the boys have a stomach ache or a burning sensation in the oral cavity after eating the noodles, the report states.

Is there no way to read that report? It sounds like it could be a good intro to how to ignore the scientific method, the underlying data, and facts in general, and just write whatever you happen to feel like that day. At any rate, it doesn't sound like a basis on which Fødevarestyrelsen should act, and certainly not with phrasing like "risk of acute poisoning" and the like.

Who is this "consumer" who reached out, and who are they family or friends with at Fødevarestyrelsen?

They'd better recall nisseøl too, given the risk of acute alcohol poisoning. Fucking clowns...

Edit: I assume this is the picture they refer to in the report: https://web.archive.org/web/20231003202700/https://asiatorvet.dk/shop/53-samyang/ A promo shot for a product that sells itself on being hot.

Edit 2: The report is here: https://janax.dk/wp-content/uploads/2024/06/Nudler-med-chili-6.-juni-2024.pdf It's not quite as dumb as I first assumed, but it's still a sloppy, unserious affair.

[–] [email protected] 5 points 3 months ago

I get notifications for calls (obviously), SMS messages (of which I receive an average of 1 per month) and IMs from my immediate family. Everything else I check up on when I actually feel like I have the time for it. This has dramatically reduced the number of emails and other things I forget to reply to/act on, because I see them when I want to and when I have the time to actually deal with them; not when some random notification pops up when I'm doing something else, gets half-noticed and swiped away because I'll deal with it later.

[–] [email protected] 6 points 3 months ago

Cloud Saves may be difficult to deal with, depending on what games you play.

[–] [email protected] 13 points 3 months ago

The headline is supposedly CISA urging users to either update or delete Chrome — it's not Chrome/Google itself. However, I'm having trouble finding the actual CISA alert. It's not linked in the article as far as I can tell.

[–] [email protected] 1 points 3 months ago

Fair enough, and thanks for the offer. I found a demo on YouTube. It does indeed look a lot more reasonable than having an LLM actually write the code.

I'm one of the people that don't use IntelliSense, so it's probably not for me, but I can definitely see why people find that particular implementation useful. Thanks for catching and correcting my misunderstanding. :)

[–] [email protected] 7 points 3 months ago* (last edited 3 months ago) (2 children)

I'm closing in on 30 years too, started just around '95, and I have yet to see an LLM spit out anything useful that I would actually feel comfortable committing to a project. Usually you end up having to spend as much time—if not more—double-checking and correcting the LLM's output as you would writing the code yourself. (Full disclosure: I haven't tried Copilot, so it's possible that it's different from Bard/Gemini, ChatGPT and what-have-you, but I'd be surprised if it was that different.)

Here's a good example of how an LLM doesn't really understand code in context and thus finds a "bug" that's literally mitigated in the line before the one where it spots the potential bug: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/ (see "Exhibit B", which links to: https://hackerone.com/reports/2298307, which is the actual HackerOne report).

LLMs don't understand code. It's literally your "helpful", non-programmer friend, on steroids, cobbling together bits and pieces from searches on SO, Reddit, DevShed, etc. and hoping the answer will leave you impressed. Reading the study from TFA (https://dl.acm.org/doi/pdf/10.1145/3613904.3642596, §§5.1-5.2 in particular) only cements this position further for me.

And that's not even touching upon the other issues (like copyright, licensing, etc.) with LLM-generated code that led to NetBSD simply forbidding it in their commit guidelines: https://mastodon.sdf.org/@netbsd/112446618914747900

Edit: Spelling

[–] [email protected] 13 points 3 months ago* (last edited 3 months ago) (5 children)

I wouldn't trust an LLM to produce any kind of programming answer. If you're skilled enough to know it's wrong, you should write it yourself; if you're not, you shouldn't be using it.

I've seen plenty of examples of specific, clear, simple prompts that an LLM absolutely butchered by using libraries, functions, classes, and APIs that don't exist. Likewise with code analysis where it invented bugs that literally did not exist in the actual code.

LLMs don't have a holistic understanding of anything—they're your non-programming, but over-confident, friend that's trying to convey the results of a Google search on low-level memory management in C++.
