Hamartiogonic

joined 1 year ago
[–] [email protected] 2 points 1 week ago

Statistical tests are very picky. They were designed by mathematicians in an idealized mathematical vacuum, void of all reality. The method works under those ideal conditions, but when you take that method and apply it in messy reality where everything is flawed, you may run into trouble. In simple cases it’s easy to abide by the assumptions of the statistical test, but as your experiment gets more and more complicated, there are more and more potholes for you to dodge. In the best-case scenario, your messy data is just barely clean enough that you can be reasonably sure the statistical test still works well enough, and you can sort of trust the result up to a certain point.

However, when you know for a fact that some of the underlying assumptions of the statistical test are clearly being violated, all bets are off. Sure, you get a result, but who in their right mind would ever trust that result?

If the test says that the medicine works, there’s clearly a financial incentive to believe it and start selling those pills. If it says that the medicine is no better than placebo, there’s a similar incentive to reject the test result and demand more experiments. Most of that debate goes out the window if you can be reasonably sure that the data is good enough and the result of your statistical test is reliable enough.
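The point about violated assumptions can be made concrete with a quick simulation. Here's a minimal sketch (the scenario and all numbers are made up for illustration): a two-sample t-test assumes independent observations, and if each subject is measured twice but the duplicates are counted as independent samples, the test's false-positive rate climbs well past its nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def false_positive_rate(pseudo_replicate: bool, trials: int = 4000) -> float:
    """Fraction of null experiments (both groups drawn from the same
    distribution) where ttest_ind still reports p < 0.05."""
    hits = 0
    for _ in range(trials):
        if pseudo_replicate:
            # 15 subjects measured twice, counted as 30 independent
            # samples: violates the test's independence assumption.
            a = np.repeat(rng.normal(size=15), 2)
            b = np.repeat(rng.normal(size=15), 2)
        else:
            a = rng.normal(size=30)
            b = rng.normal(size=30)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / trials

print(false_positive_rate(False))  # close to the nominal 0.05
print(false_positive_rate(True))   # substantially inflated
```

The test still happily returns a p-value in both cases; it has no way of knowing the second dataset breaks its assumptions, which is exactly why "you get a result" means very little on its own.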

[–] [email protected] 9 points 1 week ago (3 children)

What happens if you also take some ADHD medication at the same time? Will your head implode into a black hole or something?

[–] [email protected] 9 points 1 week ago (8 children)

Yeah, that’s the thing with placebo. It’s surprisingly effective, and separating the psychological effect from actual chemistry can be very tricky. If most participants can correctly identify whether they’re being fed the real drug or a placebo, it becomes impossible to figure out how much each effect contributes to the end result. Ideally, you would only use effective medicine that does not need the placebo effect to actually work.

Imagine if all medicines relied heavily on the placebo effect. How would you treat patients who are in a coma or otherwise unconscious?
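The blinding problem can be sketched numerically. In this toy model (all effect sizes are invented for illustration), each participant's outcome is a real drug effect plus a psychological effect that kicks in whenever they believe they got the real pill, plus noise. When blinding holds, the difference between arms recovers the drug effect alone; when everyone can identify their pill, the two effects merge into one inseparable lump.

```python
import numpy as np

rng = np.random.default_rng(1)

DRUG, PLACEBO_PSYCH = 1.0, 2.0  # hypothetical effect sizes
N = 5000                        # participants per arm

def observed_effect(blinding_works: bool) -> float:
    """Difference in mean outcome between treatment and control arms."""
    took = np.repeat([1.0, 0.0], N)           # treatment arm first, then control
    if blinding_works:
        believes = rng.integers(0, 2, 2 * N)  # guesses are just coin flips
    else:
        believes = took                       # everyone identifies their pill
    outcome = DRUG * took + PLACEBO_PSYCH * believes + rng.normal(size=2 * N)
    return outcome[:N].mean() - outcome[N:].mean()

print(observed_effect(True))   # close to DRUG alone
print(observed_effect(False))  # close to DRUG + PLACEBO_PSYCH, inseparable
```

With broken blinding, the trial measures the sum of the two effects, and no amount of data from that trial can split it back apart.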

[–] [email protected] 4 points 1 week ago

Also: What You Drink Is What You Pee, or WYDIWYP.

[–] [email protected] 20 points 1 week ago (2 children)

People have been reviving old hardware with Linux for decades now. Next step is to revive old organs too. If your kidneys aren’t good enough for their original purpose anymore, perhaps you can run Linux on them and give them a second life.

[–] [email protected] 4 points 1 week ago* (last edited 3 days ago)

I’ve seen a bunch of Terminator style movies where an AI slices, dices, scorches and/or nukes humanity to oblivion long before climate change gets us. I have it on good authority that we don’t need to worry about the temperature change.

[–] [email protected] 1 points 1 week ago (1 children)

Yes, it’s true that countless authors contributed to the development of this LLM, but they were not compensated for it in any way. Doesn’t sound fair.

Can we compare this to some other situation where the legal status has already been determined?

[–] [email protected] 2 points 1 week ago

And Siri will immediately call the local exterminator…

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (3 children)

I think of an LLM as a tool, just like a drill or a hammer. If you buy or rent these tools, you pay the tool company. If you use the tools to build something, your client pays you for that work.

Similarly, OpenAI can charge me for extensive use of ChatGPT. I can use that tool to write a book, but it’s not 100% AI work. I need to spend several hours prompt crafting, structuring, reading and editing the book in order to make something acceptable. I don’t really act as a writer in this workflow, but more like an editor or a publisher. When I publish and sell my book, I’m entitled to some compensation for the time and effort that I put into it. Does that sound fair to you?

[–] [email protected] 10 points 1 week ago

Space is mostly empty anyway, so the chances of crashing into anything are pretty low. That’s why space travel is so safe.

[–] [email protected] 1 points 1 week ago

Yeah, that’s very much an English thing. Many other languages use reasonably consistent spelling and pronunciation, so memorizing the handful of exceptions isn’t really a problem.

However, with English it’s the other way around. You need to memorize the handful of words that are actually pronounced the way they are written. Everything else is just pure chaos. If you read a word, you can’t pronounce it. If you hear a word, you can’t find it in a dictionary.

[–] [email protected] 6 points 1 week ago (2 children)

Better call my local roach doctor then…

 

Here's some context for the question. When image generating AIs became available, I tried them out and found that the results were often quite uncanny or even straight up horrible. I ended up seeing my fair share of twisted fingers, scary faces and mutated abominations of all kinds.

Some of those pictures made me think that since the AI really loves to create horror movie material, why not take advantage of this property. I started asking it to make all sorts of nightmare monsters that could have escaped from movies such as The Thing. Oh boy, did it work! I think I've found the ideal way to use an image generating AI. Obviously, it can do other stuff too, but with this particular category, the results are perfect nearly every time. Making other types of images usually requires some creative promptcrafting, editing, time and effort. When you ask for a "mutated abomination from Hell", it's pretty much guaranteed to work perfectly every time.

What about LLMs though? Have you noticed that LLMs like ChatGPT tend to gravitate towards a specific style or genre? Is it long-winded business books with loads of unnecessary repetition, or is it pointless self-help books that struggle to squeeze even a single good idea into a hundred pages? Is it something even worse? What would be the ideal use for LLMs? What’s the sort of thing where LLMs perform exceptionally well?

178
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]
 

During COVID times I heard many interesting conspiracy predictions, such as: the value of money will fall to zero, the whole society will collapse, the vaccine will kill 99% of the population, etc. None of those things have happened yet, but can you add some other predictions to the list?

Actually, long before covid hit, there were all sorts of predictions floating around. You know, things like the 2008 recession will cause the whole economy to collapse and then we’ll go straight to Mad Max style post-apocalyptic nightmare or 9/11 was supposed to start WW3. I can’t even remember all the predictions I’ve heard over the years, but I’m sure you can help me out. Oh, just remembered that someone said that paper and metal money will disappear completely by year xyz. At the time that date was like only a few years away, but now it’s more like 10 years ago or something. Still waiting for that one to come true…
