kogasa

joined 2 years ago
[–] [email protected] 1 points 6 months ago

Okay, but this makes more sense as an instance method rather than a static one

[–] [email protected] 1 points 6 months ago (2 children)

Instance properties are PascalCase.

[–] [email protected] 3 points 6 months ago (4 children)

Yeah, properties (like a field, but with a getter and/or setter method; they may or may not be backed by a field) are PascalCase.
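
E.g. a minimal, illustrative C# sketch (the class and member names here are made up):

public class Thermostat
{
    // Private instance field: camelCase, often with a leading underscore.
    private double _targetCelsius;

    // Instance property: PascalCase, backed by the field above.
    public double TargetCelsius
    {
        get => _targetCelsius;
        set => _targetCelsius = value;
    }

    // Auto-implemented property: still PascalCase; the compiler generates the backing field.
    public string Mode { get; set; } = "auto";

    // Static property: also PascalCase, but it belongs to the type rather than an instance.
    public static double DefaultCelsius { get; } = 20.0;
}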

[–] [email protected] 3 points 6 months ago (6 children)

That's an instance property

[–] [email protected] 2 points 6 months ago

There's a spectrum between architecture/planning/design and "premature optimization." Using microservices in anticipation of a need for scaling isn't premature optimization; it's just design.

[–] [email protected] 3 points 6 months ago (2 children)

Pretty sure that movie was terrible with an awesome soundtrack

[–] [email protected] 11 points 6 months ago* (last edited 6 months ago)

Looks just like my old fluff did 10 years ago :)

[–] [email protected] 4 points 6 months ago

It has access to a Python interpreter and can use that to do math, but it shows you that this is happening, and it did not when I asked it.

That's not what I meant.

You have access to a dictionary, that doesn’t prove you’re incapable of spelling simple words on your own, like goddamn people what’s with the hate boners for ai around here

??? You just don't understand the difference between an LLM and a chat application using many different tools.

[–] [email protected] 4 points 6 months ago (2 children)

ChatGPT uses auxiliary models to perform certain tasks like basic math and programming. Your explanation about plausibility is simply wrong.

[–] [email protected] 9 points 6 months ago (4 children)

If you fine-tune an LLM on math equations, odds are it won't actually learn how to reliably solve novel problems, just as it won't become a subject-matter expert on any topic; the difference is that it's a lot harder to write simple math that "looks, but is not, correct" than it is to waffle vaguely about a topic. The idea of an LLM creating a robust model of the semantics of the text it's trained on is, at face value, plausible; it just doesn't seem to actually happen in practice.

[–] [email protected] 5 points 6 months ago (1 children)

40k breeding forums are usually pretty chill
