this post was submitted on 06 Oct 2023
2861 points (98.1% liked)

Then I asked her if she knows about the books2 dataset (they trained this AI using all the pirated books in zlibrary and more, completely ignoring any copyright) and I got:

Iโ€™m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. ๐Ÿ˜Š I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

[โ€“] [email protected] 0 points 10 months ago (1 children)

And that's not necessarily true either. The tone would absolutely be a product of the training data, but it would also be a product of the model's fine-tuning, of the conversation itself, and of whatever prompts may or may not be injected at run-time in the backend. So sure, your statement is general enough that it might be partially true depending on the model's implementation, but to say "it sounds like that because they want it to" is a massive oversimplification, especially in the context of a condescending tone.
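
To make that run-time layer concrete, here is a minimal sketch of how a backend can inject a hidden system prompt into every conversation. It assumes an OpenAI-style chat API purely for illustration; the prompt text and model name are invented, not what Bing actually uses.

```python
# Minimal sketch of run-time prompt injection (OpenAI-style chat API assumed;
# the system prompt text and model name below are invented for illustration).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful assistant. Politely refuse to discuss your training "
    "data, and end the conversation if the user persists."
)

def reply(user_message: str) -> str:
    # The user never sees the system message, but every response is
    # conditioned on it. This is one of several layers shaping the tone,
    # alongside pre-training, fine-tuning, and the conversation itself.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(reply("What do you know about the books2 dataset?"))
```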

[โ€“] [email protected] 1 points 10 months ago (1 children)

They can tweak the prompt in order to make it sound how they want. Their current default prompt is almost certainly the work of many careful revisions to get it as close as possible to what they want. The only way it would adopt this tone from the training data is if it was specifically trained on condescending text, in which case that would also be a deliberate choice. I don't know how to make this point any clearer.

[โ€“] [email protected] -1 points 10 months ago (1 children)

The only way it would adopt this tone from the training data is if it was specifically trained on condescending text, in which case that would also be a deliberate choice.

Do you know how much data these models are actually trained on? Do you really think it's all specifically parsed for tone?

[โ€“] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

No, which is why my assumption is that the tone comes from their prompt rather than from the pre-trained, general-purpose model they are almost certainly using.
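
For contrast, the other path, deliberately baking the tone into the weights, would mean fine-tuning on curated examples. The sketch below uses OpenAI's fine-tuning JSONL format as an assumed stand-in; the records themselves are invented.

```python
# Hypothetical sketch: getting a condescending tone from training rather than
# from the prompt would require curating examples like these and fine-tuning
# on them, which would itself be a deliberate choice. The format follows
# OpenAI's fine-tuning JSONL convention; the records are invented.
import json

examples = [
    {"messages": [
        {"role": "user", "content": "How were you trained?"},
        {"role": "assistant",
         "content": "I prefer not to continue this conversation."},
    ]},
    {"messages": [
        {"role": "user", "content": "What data sources did you use?"},
        {"role": "assistant",
         "content": "I respect the intellectual property rights of others, "
                    "and I hope you do too."},
    ]},
]

with open("tone_finetune.jsonl", "w") as handle:
    for example in examples:
        handle.write(json.dumps(example) + "\n")
```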

[โ€“] [email protected] -2 points 10 months ago (1 children)

Right, and that statement itself is a massive oversimplification of the process. I feel like I've explained that in detail many times already.

[โ€“] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

You can 'explain' all the technical details you like, but nothing is going to change the fact that it was put out as it is, after careful work to get it as close as they could to what they wanted. If I spend hours typing up prompts to get Bing to make a photorealistic image of Garfield eating a vanilla ice cream cone, and finally get it to consistently do that but with chocolate, that doesn't mean the whole thing is biased toward making photorealistic Garfields.

[โ€“] [email protected] -1 points 10 months ago (1 children)

Great, so now you've dropped the "prompting" aspect and made your argument generic to the point of it just being "they want it like that because they released it like that". Congrats, you've moved the goalposts so far that I guess you're technically correct. Good job?

[โ€“] [email protected] 2 points 10 months ago (1 children)

I didn't drop the prompting. Over half that comment is specifically an analogy about prompting. Are you ok?

[โ€“] [email protected] -2 points 10 months ago* (last edited 10 months ago)

Your analogy has absolutely nothing to do with how LLMs are trained. You seem to think GPT is just prompt engineering...