imadabouzu

joined 5 months ago
[–] [email protected] 19 points 2 months ago

Out of my sample of Anime fans who actively participate in the hobby and spend money on it,

100% of them hate genAI primarily because, and I quote, "if I pay you $40 for something and it is exactly equivalent to what a $0.05 prompt garbage result would be, I won't pay you again."

Fans, the real fans, can tell. Like, this is their whole hobby brah.

[–] [email protected] 11 points 3 months ago (1 children)

Honestly, Yes. The hardest thing for a rich person to do is spend their money. Eventually this catches up with them: to spend no money is to lose it comparatively, to spend money is to risk not getting it back. So a great deal of the money world revolves primarily around persuasion, and the very odd things that happen along the way.

[–] [email protected] 4 points 3 months ago

I can't say I know what Lionsgate's plan is, precisely, but I think you're hitting the nail on the head.

Remember. Most corporate strategy could be summarized as persuading investors for more debt. It doesn't really tell the whole story of what is or will happen, only what needs to be said loudly in a room full of fools holding the money bags.

[–] [email protected] 8 points 3 months ago (3 children)

I feel this shouldn't be at all surprising, and it continues to point to Diverse Intelligence as more fundamental than any sort of General Intelligence conceptually. There's a huge difference between what something is in theory or in principle capable of, and the economics story of what that thing attends to naturally as per its energy story.

Broadly, even simple things are powerful precisely because of what they don't bother trying to do until perturbed.

Ultimately, I hypothesize that the reason VCs like the idea of LLMs doing simple things far more expensively than is otherwise already possible is that they literally can't imagine what else to spend their money on. They are vacuous consumers by design.

[–] [email protected] 3 points 3 months ago (2 children)

I'm actually not convinced that AI meaningfully beyond human capability makes any sense, either. The most likely thing is that after stopping the imitation game, an AI developed further would just... have different goals than us. Heck, it might not even look intelligent at all to half of human observers.

For instance, does the Sun count as a super intelligence? It has far more capability than any human, or humanity as a whole, on the current time scale.

[–] [email protected] 16 points 3 months ago (6 children)

I don't get it. If scaling is all you need, what does a "cracked team" of 5 mean in the end? Nothing?

What's the difference between super intelligence being scaling, and super intelligence being whatever happens? Can someone explain to me the difference between what is and what SUPER is? When someone gives me the definition of super intelligence as "the power to make anything happen," I always beg, again, "and how is that different, precisely, from not that?"

The whole project is tautological.

[–] [email protected] 6 points 3 months ago (1 children)

I'm ok with this, because I guarantee you ~~an accidental medium or copy failure~~ a crypto rug pull on their NFT will still get them in the end. Thanks for playing I guess.

[–] [email protected] 7 points 3 months ago (3 children)

When it comes to cloning or copying, I always have to remind people: at least half of what you are today is the environment of today. And your clone X time in the future won't and can't have that.

The same thing is likely true for these models. Inflate them again 100 years in the future, and maybe they're interesting to inspect as a historical artifact, but they most certainly wouldn't be used the same way as they had been here and now. It'd just be something different.

Which would beg the question, why?

I feel like a subset of sci-fi and philosophical meandering really is just increasingly convoluted paths of trying to avoid or come to terms with death as a possibly necessary component of life.

[–] [email protected] 4 points 3 months ago

I don't entirely agree, though.

That WAS the point of NaNoWriMo in the beginning. I went there because I wanted feedback, and feedback from people who cared (no offense to my friends, but they weren't interested in my writing and that's totes cool).

I think it is a valid core desire to want constructive feedback on your work, and to acknowledge that you are not a complete perspective, even on yourself. Whether the AI can or does provide that is questionable, but the starting place, "I want /something/ accessible to be a rubber ducky" is valid.

My main concern here is, obviously, that it feels like NaNoWriMo is taking the easy way out for the $$$ and likely its Silicon Valley connections. Wouldn't it be nice if NaNoWriMo said something like, "Whatever technology tools exist today or tomorrow, we stand for the writer's essential role in the process, and against the unethical labor implications of indiscriminate, non-consensual machine learning as the basis for any process."

[–] [email protected] 3 points 3 months ago

NovelAI

I'll step up and say, I think this is fine, and I support your use. I get it. I think that there are valid use cases for AI where the unethical labor practices become unnecessary, and where ultimately the work still starts and ends with you.

In a world, maybe not too far in the future, where copyright law is strengthened, where artist and writer consent is respected, and it becomes cheap and easy to use a smaller model trained on licensed data and your own inputs, I can definitely see how a contextual autocomplete that follows your style and makes suggestions is totally useful and ethical.

But I understand people's visceral reaction to the current world. I'd say it's ok to stay your course.

[–] [email protected] -1 points 3 months ago (2 children)

Maybe hot take, but when I see young people (recent graduates) doing questionable things in pursuit of attention and a career, I cut them some slack.

Like, it's hard for me to be critical of someone starting off and trying to make it in, um, *gestures at all this*, the world today. Besides, they'll get the sense knocked into them through pain and tears soon enough.

I don't find it strange or malicious; I see it as a symptom of why it was easier for us to find honest work then, and harder for them now.

[–] [email protected] 11 points 3 months ago* (last edited 3 months ago) (6 children)

This kind of thing is a fluff piece, meant to be suggestive but ultimately saying nothing at all. There are many reasons to hate Bostrom, just read his words, but this is two philosophers who apparently need attention because they have nothing useful to say. All of Bostrom's points here could be summed up as "don't piss on things, generally speaking."

As for consciousness: honestly, my brain turns off instantly when someone tries to make any point about it. Seriously though, does anyone actually use the category of "conscious / unconscious" to make any decision?

I don't disrespect the dead (not conscious). I don't bother animals or insects when I have no business with them (conscious? maybe not conscious?). I don't treat my furniture or clothes like shit, and am generally pleased they exist (not conscious). When encountering something new or unusual, I just ask myself "is it going to bite me?" first (consciousness is irrelevant). I know some of my actions do harm, either directly or indirectly, to other things: eating, or consuming, or making mistakes, or being. But I don't assume myself a hero or arbiter of moral integrity; I merely acknowledge and do what I can. Again, consciousness is kind of irrelevant.

Does anyone run consciousness litmus tests on their friends or associates first before interacting, ever? If so, does it sting?
