Meowoem

joined 2 years ago
[–] Meowoem 8 points 11 months ago (14 children)

Complex tech takes time to develop, who'd have guessed!

[–] Meowoem 0 points 11 months ago

People do that too, actually we do it a lot more than we realise. Studies of memory, for example, have shown that we invent details we expect to be there to fill in blanks, and that we convince ourselves we remember them even when presented with evidence that refutes them.

A lot of the newer implementations use more complex methods of fact verification; it's not easy to explain, but essentially it comes down to the weight you give different layers. GPT-5 is already training and likely to be out around October, but even before that we're seeing pipelines that use an LLM to run task-based processes - an LLM is bad at chess, but it could easily install Stockfish in a VM and beat you every time.
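A rough sketch of what that kind of pipeline looks like - everything here is a made-up stand-in (the "LLM" and the engine are both stubs; a real setup might use python-chess talking to an actual Stockfish binary), but the shape is the point: the model doesn't play chess itself, it emits a tool call and a harness dispatches it.

```python
import json

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM that answers with a JSON tool call."""
    if "chess" in prompt.lower():
        return json.dumps({"tool": "chess_engine", "args": {"fen": "startpos"}})
    return json.dumps({"tool": "reply", "args": {"text": "No tool needed."}})

def chess_engine(fen: str) -> str:
    """Stand-in for Stockfish; a real version would run the UCI engine."""
    return "e2e4"  # canned best move

TOOLS = {
    "chess_engine": lambda args: chess_engine(**args),
    "reply": lambda args: args["text"],
}

def run_pipeline(prompt: str) -> str:
    # Parse the model's tool call and hand it off to the right tool.
    call = json.loads(fake_llm(prompt))
    return TOOLS[call["tool"]](call["args"])

print(run_pipeline("Play chess against me"))  # e2e4
```

The LLM only has to be good at deciding *when* to call the engine; the move quality comes entirely from the tool.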

[–] Meowoem -2 points 11 months ago (2 children)

That's only true on a very basic level. I understand that Turing's maths is complex and unintuitive, even more so than calculus, but it's a well-established fact that relatively simple mathematical operations can interact to produce emergent properties with far more complexity than initially expected.

The same way the giraffe gets its spots, and the same way all the hardware of our brain is built: a strand of code is converted into physical structures that interact and result in more complex behaviours. The actual reality is just maths, and that maths is almost entirely probability when you get down to it. We're all just next-word-guessing machines.

We don't guess words like a Markov chain; instead we use a rather complex token system in our brains which then gets converted to words. LLMs do this too - that's how they can learn about a subject in one language and then explain it in another.
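For contrast, here's the kind of Markov-chain word guessing being dismissed above: each next word depends only on the current word, with no deeper representation behind it. The corpus is made up for illustration.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Build a bigram table: word -> list of words that followed it.
chain = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    chain[current].append(nxt)

def guess_next(word: str) -> str:
    """Pick a next word purely from bigram counts - no context beyond one word."""
    options = chain.get(word)
    return random.choice(options) if options else "<end>"

print(guess_next("the"))    # one of: cat, mat
print(guess_next("slept"))  # <end>
```

That's the whole model: a lookup table. Whatever one thinks LLMs are doing, it's clearly not just this.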

Calling an LLM predictive text is a fundamental misunderstanding of reality. It's somewhat true on a technical level, but only once you understand that predicting the next word can be a hugely complex operation - and that it's the fundamental maths behind all human thought as well.

Plus they're not really just predicting one word ahead anymore; they do structured generation, much like image generators do - first the higher-level principles are brought to a valid state, then that propagates down into structure and form before word and grammar choices are made. You can manually change values in the different layers and watch the output change; exploring the latent space like this makes it clear that it's not simply guessing the next word, but guessing the next word that will best fit into a required structure to express a desired point. I don't know how other people come up with sentences, but that feels a lot like what I do.
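A toy sketch of the "edit values in a layer, watch the output change" idea (the real-model analogue is activation steering). This is a made-up two-layer network with hand-picked weights and a three-word vocabulary, nothing like a real LLM - it just shows that nudging a hidden activation flips which word comes out.

```python
import math

vocab = ["cat", "dog", "fish"]

# Made-up weights: 4 inputs -> 3 tanh hidden units, one per vocab word,
# so editing a hidden unit directly shifts that word's logit.
W1 = [[0.5, -0.2, 0.1],
      [0.3, 0.8, -0.4],
      [-0.6, 0.2, 0.2],
      [0.1, -0.3, 0.5]]

def predict(x, hidden_nudge=None):
    h = [math.tanh(sum(xi * W1[i][j] for i, xi in enumerate(x)))
         for j in range(3)]
    if hidden_nudge:
        # Manually edit the latent state before reading out a word.
        h = [hj + nj for hj, nj in zip(h, hidden_nudge)]
    return vocab[max(range(3), key=lambda j: h[j])]

x = [1.0, 0.5, -0.5, 0.2]
print(predict(x))                    # cat
print(predict(x, [0.0, 0.0, 5.0]))   # fish
```

Same input, different output, purely because a value inside a layer was changed by hand.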

[–] Meowoem -4 points 11 months ago (4 children)

Ha ha, yeah, humans sure are great at not being convinced by the opinions of other people - that's why religion and politics are so simple and society is so sane and reasonable.

Helen Keller would believe you if you told her it's purple.

If humans didn't have eyes they wouldn't know the colour of the sky; if you give an AI a colour video feed of the outside, it'll be able to tell you exactly what colour the sky is using a whole range of very accurate metrics.

[–] Meowoem -2 points 11 months ago (2 children)

I use LLMs to create things no human has likely ever said, and they're great at it. For example:

'while juggling chainsaws atop a unicycle made of marshmallows, I pondered the existential implications of the colour blue on a pineapple's dream of becoming a unicorn'

When I ask it to do the same using neologisms the output is even better. One of the words was 'exquimodal'; I then asked it to invent an etymology, and it came up with one combining excuistus and modial to define it as something beyond traditional measures, which fits perfectly into the sentence it created.

You can't ask a parrot to invent words with meaning and use them in context; that's a step beyond repetition. Of course it's not full, dynamic, self-aware reasoning, but it's certainly not being a parrot.

[–] Meowoem 1 points 11 months ago (1 children)

But also the number of people who seem to think we need a magic soul to perform useful work is way, way too high.

The main problem is that idiots seem to have watched one too many movies about robots with souls and got confused between real life and fantasy - especially shitty journalists way out of their depth.

This big gotcha of 'they don't live up to the hype' is 100% from people who heard 'AI' and thought of bad Will Smith movies. LLMs absolutely live up to the sensible things people actually hoped for, and have exceeded those expectations. They're also incredibly good at a huge range of very useful tasks that have traditionally been considered to require intelligence. But no, they're not magically able to do everything - of course they're not; that's not how anyone actually involved said they would work or expected them to work.

[–] Meowoem 7 points 11 months ago (1 children)

But he's British and we have a good NHS, so that's kinda a weird argument.

I hate the royal family, but for reasons that make sense.

[–] Meowoem 40 points 11 months ago (34 children)

People love to say things like this, but it's kinda ridiculous - pretty much every new tech is hugely successful. Those battery advances that no one really believes in? You've probably got one of them in your hand right now. You're probably physically closer to someone using ChatGPT than to someone reading a book - if not, you almost certainly met more people today who have used GPT more recently than they've read from a book. VR adoption continues to grow, automation solutions are being installed all over the place at a rapid rate, electric cars are gaining market share, whole countries are using desalination for their water supply, and everyone who's said anything about OSIRIS-REx has been excited about the move towards space-based industry.

The bulk of the population loves the endless tech upgrades and is eager for more. Yeah, not everything is good, and most people are adult enough to realise that.

(No I did not read the article, someone said it was shit and I don't doubt them)

[–] Meowoem 1 points 11 months ago

Holding left click to fast forward is the best feature they've added in a long time.

[–] Meowoem 2 points 11 months ago (1 children)

What is the mechanism for that? What do you envision makes them more valuable?

[–] Meowoem 1 points 11 months ago

What's wrong with visually indicating the thing being talked about?

[–] Meowoem 1 points 11 months ago (1 children)

A lot of the most popular YouTube creators only post every few months, and their content always gets plenty of impressions.

They have a rule about not swearing at the start of a video because of the autoplay feature and various accessibility tools; having it is probably better than the result of not having it.

I regularly get offered videos of all lengths, including variations of Mr Skellybones that are between 8 and 15 seconds, though since Shorts were added those are normally uploaded there now. Yes, shorter videos earn less money, and of course they do - why wouldn't they? If Lord of the Rings was two minutes long, I imagine the box set would be cheaper.
