VoterFrog

joined 1 year ago
[–] [email protected] 4 points 23 hours ago

He doesn't need a plan. Half the voters don't care if he has a plan. Plans are for Democrats.

[–] [email protected] 3 points 23 hours ago* (last edited 23 hours ago)

Imagine thinking that’s a great way to convince people you’re the right person for the job…

Worse, imagine how stupid you'd have to be to actually be convinced that he's the right person for the job. And then despair, because half the voters are that fucking stupid.

[–] [email protected] 1 points 3 days ago

No mention of Gemini in their blog post on SGE. And their AI principles doc says:

We acknowledge that large language models (LLMs) like those that power generative AI in Search have the potential to generate responses that seem to reflect opinions or emotions, since they have been trained on language that people use to reflect the human experience. We intentionally trained the models that power SGE to refrain from reflecting a persona. It is not designed to respond in the first person, for example, and we fine-tuned the model to provide objective, neutral responses that are corroborated with web results.

So a custom model.

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago)

When you use (read, view, listen to…) copyrighted material you’re subject to the licensing rules, no matter if it’s free (as in beer) or not.

You've got that backwards. Copyright protects the owner's right to distribution. Reading, viewing, listening to a work is never copyright infringement. Which is to say that making it publicly available is the owner exercising their rights.

This means that quoting more than what’s considered fair use is a violation of the license, for instance. In practice a human would not be able to quote exactly a 1000 words document just on the first read but “AI” can, thus infringing one of the licensing clauses.

Only under very specific circumstances, with some particular coaxing, can you get an AI to do this, and only with certain works that are widely quoted throughout its training data. There may be some very small-scale copyright violations that occur here, but it's largely a technical hurdle that will be overcome before long (i.e. wholesale regurgitation isn't an actual goal of AI technology).

Some licensing on copyrighted material is also explicitly forbidding to use the full content by automated systems (once they were web crawlers for search engines)

Again, copyright doesn't govern how you're allowed to view a work. robots.txt is not a legally enforceable license. At best, the website owner may be able to restrict access via computer access abuse laws, but not copyright. And it would be completely irrelevant to the question of whether or not AI can train on non-internet data sets like books, movies, etc.

[–] [email protected] 0 points 4 days ago (2 children)

It wasn't Gemini, but the AI-generated suggestions added to the top of Google search results. But that AI was specifically trained to regurgitate and reference directly from websites, in an effort to minimize the amount of hallucinated answers.

[–] [email protected] 3 points 4 days ago (2 children)

Point is that accessing a website with an adblocker has never been considered a copyright violation.

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago)

a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.

Not really. First of all, Creative Commons strictly loosens the copyright restrictions on a work. The strongest license is actually no explicit license, i.e. "All Rights Reserved." No Derivatives is already included under full, default copyright.

Second, derivative has a pretty strict legal definition. It's not enough to say that the derived work was created using a protected work, or even that the derived work couldn't exist without the protected work. Some examples: create a word cloud of your favorite book, analyze the tone of a news article to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet. All of that is absolutely allowed under even the strictest of copyright protections.

Statistical analysis of copyrighted materials, as in training AI, easily clears that same bar.

[–] [email protected] 1 points 4 days ago

Unbeelievable

[–] [email protected] 2 points 5 days ago

Don't forget to include article clippings praising Trump too.

Lest anybody think this is a joke. It's not. Trump's staffers literally had to shorten his briefs and fill them with pictures and positive article clippings telling him how awesome he is.

[–] [email protected] 2 points 5 days ago

We're not just doing this for the money.

We're doing it for a shitload of money!

[–] [email protected] 0 points 5 days ago* (last edited 5 days ago)

They do, though. They purchase data sets from people with licenses, use open source data sets, and/or scrape publicly available data themselves. Worst case, they could download pirated data sets, but that's copyright infringement committed by the entity distributing the data without legal authority.

Beyond that, copyright doesn't protect the work from being used to create something else, as long as you're not distributing significant portions of it. Movie and book reviewers won that legal battle long ago.

[–] [email protected] -3 points 5 days ago

The examples they provided were for very widely distributed stories (i.e. present in the data set many times over). The prompts they used were not provided. How many times they had to prompt was not provided. Their results are very difficult to reproduce, if not impossible, especially on newer models.

I mean, sure, it happens. But it's not a generalizable problem. You're not going to get it to regurgitate your Lemmy comment, even if they've trained on it. You can't just go and ask it to write Harry Potter and the Goblet of Fire for you. It's not the intended purpose of this technology. I expect it'll largely be a solved problem in 5-10 years, if not sooner.
