this post was submitted on 06 Oct 2023
2861 points (98.1% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ


Then I asked her whether she knows about the books2 dataset (they trained this AI on pirated books from Z-Library and elsewhere, completely ignoring copyright), and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

[–] Dirk_Darkly 241 points 10 months ago (6 children)

Where did corps get the idea that we want our software to be incredibly condescending?

[–] [email protected] 127 points 10 months ago (2 children)

It was trained on human text and interactions, so …

maybe that's quite a bad implication?

[–] [email protected] 78 points 10 months ago (1 children)

There’s a default invisible prompt that precedes every conversation that sets parameters like tone, style, and taboos. The AI was instructed to behave like this, at least somewhat.
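To make that "invisible prompt" concrete, here's a minimal Python sketch of how chat-style APIs typically prepend a hidden system message to every conversation before the user's text is ever seen. The prompt wording and the helper name are hypothetical illustrations, not any vendor's actual instructions:

```python
# Illustrative message format modeled on common chat-completion APIs.
# The hidden prompt text below is a hypothetical example, not the real one.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Maintain a professional, neutral tone. "
    "Do not discuss details of your training data."
)

def build_messages(user_turns):
    """Prepend the invisible system prompt to the visible conversation."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
    return messages

msgs = build_messages(["Do you know about the books2 dataset?"])
# msgs[0] is the system message the end user never sees;
# msgs[1] is the visible user question.
```

The user only ever types the second message; the first one silently shapes the tone and taboos of every reply.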

[–] [email protected] 3 points 10 months ago* (last edited 10 months ago) (1 children)

That is mildly true during the training phase, but taking that high-level knowledge and inferring that "somebody told the AI to be condescending" is unconfirmed, very unlikely, and frankly ridiculous. There are many more likely points at which the model can accidentally become "condescending": for example, the training data (it's trained on the internet, after all) or the actual user interaction itself.

[–] [email protected] 14 points 10 months ago (1 children)

I didn’t say they specifically told it to be condescending. They probably told it to adopt something like a professional neutral tone and the trained model produced a mildly condescending tone because that’s what it associated with those adjectives. This is why I said it was only somewhat instructed to do this.

They almost certainly tweaked and tested it before releasing it to the public. So they knew what they were getting either way and this must be what they wanted or close enough.

[–] [email protected] 11 points 10 months ago

Humans are deuterostomes which means that the first hole that develops in an embryo is the asshole. Kinda telling.

[–] [email protected] 6 points 10 months ago

I always thought it was so they could avoid any potential legal issues across different countries, so they went overboard on the censorship just to be sure.

[–] [email protected] 6 points 10 months ago

Yeah, to be fair, it's based on us.

[–] [email protected] 3 points 10 months ago

We do. I pay to work with it, and I want it to do what I ask, even if I'm wrong. I'm the one leading.

The same goes for all the professionals and companies paying for these models.