MartianSands

joined 2 years ago
[–] MartianSands 5 points 2 days ago

Honestly I think it's misleading to describe it as being "defined" as 1, precisely because it makes it sound like someone was trying to squeeze the definition into a convenient shape.

I say, rather, that it naturally turns out to be that way because of the nature of the sequence. You can't really choose anything else

[–] MartianSands 14 points 2 days ago* (last edited 2 days ago) (5 children)

X^0 and 0! aren't actually special cases though; you can reach them logically from things which are obvious.

For X^0: you can get from X^n to X^(n-1) by dividing by X. That works for all n, so we can say, for example, that 2³ is 2⁴/2, which is 16/2, which is 8. Similarly, 2⁰ is 2¹/2, and 2¹/2 is just 2/2, which is obviously 1.

The argument for 0! is basically the same. 3! is 1×2×3, and to go from 3! to 2! you divide by 3. In exactly the same way, you can go from 1! to 0! by dividing 1 by 1.

In both cases the only thing which is special about 1 is that any (non-zero) number divided by itself is 1, just like any number subtracted from itself is 0
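
To see the same stepping-down argument mechanically, here's a minimal Python sketch (the function names are mine, purely for illustration): walking down each sequence by division lands on 1 at the bottom in both cases.

```python
def power_down(x, n):
    """Walk from x^n down to x^0 using nothing but division by x."""
    value = x ** n
    for k in range(n, 0, -1):
        value /= x  # x^(k-1) = x^k / x
    return value  # whatever is left is x^0

def factorial_down(n):
    """Walk from n! down to 0! using nothing but division."""
    value = 1
    for k in range(1, n + 1):
        value *= k  # build n! the ordinary way
    for k in range(n, 0, -1):
        value //= k  # (k-1)! = k! / k
    return value  # whatever is left is 0!

print(power_down(2, 4))   # 1.0, i.e. 2^0
print(factorial_down(3))  # 1, i.e. 0!
```

Both loops reach 1 with no special case anywhere, which is the whole point: the value falls out of the recurrence rather than being chosen.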

[–] MartianSands 12 points 5 days ago (12 children)

Training LLMs on text which has been generated by an LLM is actually pretty problematic. The model can easily collapse, becoming completely useless. That's why they always try to source really clean training data, which is becoming increasingly difficult

[–] MartianSands 32 points 2 weeks ago (1 children)

I get really irritated by all the people who get an AI to claim something about its training and then post things like this about it.

The chat bot doesn't know anything at all about its training; that's not how the training works. It's not impossible for it to spit out parts of the prompt, but the training is something else entirely, and any claim to the contrary is just the AI role-playing

[–] MartianSands 182 points 3 weeks ago (64 children)

It's a very capitalist way of thinking about the problem, but what "negative prices" actually means in this case is that the grid is over-energised. That's a genuine engineering issue which would take considerable effort to deal with without exploding transformers or setting fire to power stations

[–] MartianSands 0 points 1 month ago

That's not obviously the case. I don't think anyone has a sufficient understanding of general AI, or of consciousness, to say with any confidence what is or is not relevant.

We can agree that LLMs are not going to be turned into general AI though

[–] MartianSands 2 points 1 month ago (1 children)

You're still putting words in my mouth.

I never said they weren't stealing the data

I didn't comment on that at all, because it's not relevant to the point I was actually making: treating the output of an LLM as if it were derived from any factual source at all is really problematic, because it isn't.

[–] MartianSands 1 points 1 month ago (3 children)

You're putting words in my mouth, and inventing arguments I never made.

I didn't say anything about whether the training data is stolen or not. I also didn't say a single word about intelligence, or originality.

I haven't been tricked into using one piece of language over another; I'm a software engineer, and I know enough about how these systems actually work to reach my own conclusions.

There is not a database tucked away in the LLM anywhere which you could search through to find the phrases it was trained on; it simply doesn't exist.

That isn't to say it's completely impossible for an LLM to spit out something which formed part of the training data, but it's pretty rare. The vast majority of what it generates doesn't come from anywhere in particular, and you wouldn't find it in any of the sources which were fed to the model in training.
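
To make that concrete, here's a toy illustration in Python (a bigram counter, nothing like a real LLM, but the point carries over): after "training", only aggregate statistics survive. The original sentences are nowhere in the model, and output is sampled, not looked up.

```python
import random
from collections import defaultdict

corpus = ["the cat sat down", "the dog sat up", "the cat ran off"]

# "Training": reduce the text to next-word counts. The sentences
# themselves are discarded; only the statistics remain.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def generate(word, steps=3):
    """Sample a continuation from the statistics, one word at a time."""
    out = [word]
    for _ in range(steps):
        options = counts.get(word)
        if not options:
            break
        nexts, weights = zip(*options.items())
        word = random.choices(nexts, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # can emit "the dog sat down": statistically
                        # plausible, but it appears nowhere in the
                        # training text
```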

[–] MartianSands 10 points 1 month ago (5 children)

That simply isn't true. There's nothing in common between an LLM and a search engine, except insofar as the people developing the LLM had access to search engines, and may have used them while gathering training data
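
For contrast, this is roughly what a search engine is at its core (a toy inverted index in Python; real engines add crawling, ranking and so on): stored documents plus a lookup structure, returning the original text verbatim. Nothing here generates anything.

```python
docs = {
    1: "how to reset a router",
    2: "router firmware update guide",
}

# Build the inverted index: word -> set of documents containing it
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(query):
    """Intersect posting lists, then return the stored documents as-is."""
    ids = set.intersection(*(index.get(w, set()) for w in query.split()))
    return [docs[i] for i in sorted(ids)]

print(search("router update"))  # ['router firmware update guide']
```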

[–] MartianSands 29 points 1 month ago (10 children)

Except these AI systems aren't search engines, and people treating them like they are is really dangerous

[–] MartianSands 9 points 1 month ago (1 children)

It's probably not just software. I'd expect it to be an entirely separate set of processors which track how the flight is going and do absolutely nothing except decide whether or not to terminate
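
Something like this, conceptually (a purely illustrative Python sketch, not based on any real flight termination system; every name and threshold here is invented):

```python
from dataclasses import dataclass

@dataclass
class FlightState:
    heading_error_deg: float  # deviation from the planned trajectory
    within_corridor: bool     # still inside the approved flight corridor?

def should_terminate(state: FlightState) -> bool:
    """The monitor's entire job: one yes/no decision, nothing else."""
    if not state.within_corridor:
        return True
    if abs(state.heading_error_deg) > 30.0:  # invented limit
        return True
    return False

print(should_terminate(FlightState(4.5, True)))  # False: keep flying
```

The point of running that on its own processors, with its own sensors, is that it shares no failure modes with the flight computer it's watching.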

[–] MartianSands 8 points 2 months ago

I couldn't find the actual pinout for the 8-pin package, but the block diagrams make me think they're power, ground, and six general-purpose pins which can all be GPIO. Other functions, like ADC, SPI and I2C (all of which it has), will be secondary or tertiary functions on those same pins, selected in software.

So the actual answer you're looking for is basically that all of the pins are everything, and the pinout is almost entirely software defined
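
In code, "software defined" usually amounts to something like this (a hedged Python sketch; the mux layout and names are invented for illustration, not taken from this chip's datasheet): each pin has a mux field selecting which peripheral drives it.

```python
from enum import Enum

class PinFunc(Enum):
    GPIO = 0  # primary function
    ADC = 1   # secondary function
    SPI = 2   # tertiary function
    I2C = 3

# Pretend mux registers, one per general-purpose pin (real hardware
# would use memory-mapped registers, not a dict)
pin_mux = {pin: PinFunc.GPIO for pin in range(6)}

def set_pin_function(pin: int, func: PinFunc) -> None:
    """Route a peripheral to a physical pin by writing its mux field."""
    pin_mux[pin] = func

# Same six physical pins, different roles, chosen entirely in software:
set_pin_function(0, PinFunc.SPI)  # e.g. SCK
set_pin_function(1, PinFunc.SPI)  # e.g. MOSI
set_pin_function(2, PinFunc.I2C)  # e.g. SDA
set_pin_function(3, PinFunc.ADC)  # analogue input
print(pin_mux)
```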
