Artificial Intelligence - Ethics | Law | Philosophy


We follow Lemmy’s code of conduct.


cross-posted from: https://lemmy.world/post/1330998

cross-posted from: https://lemmy.world/post/1330512

Below are direct quotes from the filings.

OpenAI

As noted in Paragraph 32, supra, the OpenAI Books2 dataset can be estimated to contain about 294,000 titles. The only “internet-based books corpora” that have ever offered that much material are notorious “shadow library” websites like Library Genesis (aka LibGen), Z-Library (aka B-ok), Sci-Hub, and Bibliotik. The books aggregated by these websites have also been available in bulk via torrent systems. These flagrantly illegal shadow libraries have long been of interest to the AI-training community: for instance, an AI training dataset published in December 2020 by EleutherAI called “Books3” includes a recreation of the Bibliotik collection and contains nearly 200,000 books. On information and belief, the OpenAI Books2 dataset includes books copied from these “shadow libraries,” because those are the sources of trainable books most similar in nature and size to OpenAI’s description of Books2.
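For context, an estimate like the 294,000 figure can be reproduced by dividing a reported corpus size by an assumed average book length. A minimal sketch, assuming the roughly 55 billion tokens the GPT-3 paper reports for Books2 and an illustrative tokens-per-book figure (neither number is taken from the filing itself):

```typescript
// Back-of-envelope reconstruction of a Books2 title estimate.
// books2Tokens is the GPT-3 paper's reported corpus size; tokensPerBook
// is an illustrative assumption, not a figure from the filing.
const books2Tokens = 55e9;     // ~55 billion tokens reported for Books2
const tokensPerBook = 187_000; // assumed average length of one book
const estimatedTitles = Math.round(books2Tokens / tokensPerBook);
console.log(estimatedTitles);  // ≈ 294,000 titles
```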

Meta

Bibliotik is one of a number of notorious “shadow library” websites that also includes Library Genesis (aka LibGen), Z-Library (aka B-ok), and Sci-Hub. The books and other materials aggregated by these websites have also been available in bulk via torrent systems. These shadow libraries have long been of interest to the AI-training community because of the large quantity of copyrighted material they host. For that reason, these shadow libraries are also flagrantly illegal.

This article from Ars Technica covers a few more details. The filings are viewable at the law firm's site here.


cross-posted from: https://lemmy.ca/post/1338661

What do you think about this regulation? I personally feel it’s a step in the right direction towards regulating AI use, but think it could be stricter.


cross-posted from: https://lemmy.intai.tech/post/44133

cross-posted from: https://lemmy.world/post/955996

Prominent international brands are unintentionally funding low-quality AI content platforms. Major banks, consumer tech companies, and a Silicon Valley platform are some of the key contributors. Their advertising efforts indirectly fund these platforms, which mainly rely on programmatic advertising revenue.

  • NewsGuard identified hundreds of Fortune 500 companies unknowingly advertising on these sites.
  • The financial support from these companies boosts the financial incentive of low-quality AI content creators.

Emergence of AI Content Farms: AI tools are making it easier to set up and fill websites with massive amounts of content. OpenAI's ChatGPT is a tool used to generate text on a large scale, which has contributed to the rise of these low-quality content farms.

  • The scale of these operations is significant, with some websites generating hundreds of articles a day.
  • The low quality and potential for misinformation do not deter these operations, and the ads from legitimate companies could lend them undeserved credibility.

Google's Role: Google and its advertising arm play a crucial role in the viability of the AI spam business model. Over 90% of ads on these low-quality websites were served by Google Ads, which indicates a problem in Google's ad policy enforcement.
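As a rough illustration of how one might check which ad networks a page loads, here is a minimal sketch (the URL and domain list are my assumptions, and this is not NewsGuard's actual methodology; pages that inject ads purely client-side would need a headless browser rather than a static fetch):

```typescript
// Sketch: scan a page's static HTML for references to known Google
// ad-serving domains. Domain list and URL are illustrative assumptions.
const AD_DOMAINS = ["googlesyndication.com", "doubleclick.net"];

async function adNetworksOn(url: string): Promise<string[]> {
  const html = await (await fetch(url)).text();
  return AD_DOMAINS.filter((domain) => html.includes(domain));
}

// Only catches ad tags present in the raw HTML, so it undercounts
// sites whose ads are injected later via JavaScript.
adNetworksOn("https://example.com").then((hits) =>
  console.log(hits.length > 0 ? hits : "no known ad tags in static HTML")
);
```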

Source (Futurism)

PS: I run an ML-powered news aggregator that uses AI to summarize the best tech news from 50+ outlets (The Verge, TechCrunch…). If you liked this analysis, you’ll love the content you’ll receive from this tool!


cross-posted from: https://lemmy.world/post/949452

OpenAI's ChatGPT and Sam Altman are in massive trouble. OpenAI is being sued in the US for illegally using content from the internet to train its large language models (LLMs).


cross-posted from: https://lemmy.world/post/846055

Tldr:

A.I. should be held to the same ethical standards as humans, because humans will find ways to abuse A.I. for potentially unethical ends.

Mildly infuriated at the potential of A.I. manipulation by bad actors

Tldr end

First, I want to say that I believe A.I. development deserves to be worked on and has potential for human growth, but I also feel mildly infuriated at the potential money sink that out-of-control corporations could develop.

I've seen arguments about A.I. increase, and the heated ones are usually very specific to one particular aspect. That frustrates me, so I guess I'm trying to express my own concern about one aspect of A.I. use here.

I am not trying to start any wars over the ethics of using A.I.; however, I feel that there should be some form of ethics implemented. I guess that line of thought falls in with the calls for regulation.

If I glance at how social media and games go when A.I. is used as a means to make sure "the factory must grow," it feels like things will only get worse, with an ever-increasing drive to extract one more unit of currency. The more the algorithm grows and refines itself, the more lifeless things seem to get. All this with just "basic" A.I. models.

Efficiency increases, but what is the cost?

If I compare something like Reddit to Lemmy... Lemmy feels more "alive," because social interaction without a "corporate machine interface" analyzing you feels organic: you know there is a human, not a bot, trying to get you to engage.

My anger is not at the A.I., but at the way A.I. can provide unethical actors a means to push a questionable agenda. One can already see it with things like influencers and targeted advertising using human actors, and once those unethical actors figure out how to train and develop A.I. that successfully mimics a human, I fear the control they will exert, pushing us toward an unsustainable precipice in pursuit of a desired state of consciousness.

Maybe it is just fear of a dystopian future, but I worry that such a future becomes far more real if A.I. doesn't have ethics either.

If this topic is not in line with the forum's discussion, please let me know and I will remove it; if possible, please direct me to a more appropriate instance.

Thank you for your time.


cross-posted from: https://beehaw.org/post/849870

Article Link from Nature


cross-posted from: https://beehaw.org/post/528711

Via @[email protected]:

…consider tools like GitHub Copilot, which claims to be “your AI pair programmer”. These work by leaning on the code of thousands and thousands of projects to build their code auto-complete features.

When you copy broken patterns you get broken patterns. And I assure you, GitHub, Google, Apple, Facebook, Amazon, stacks of libraries and frameworks, piles of projects, and so on, are rife with accessibility barriers.
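To make the "broken patterns" point concrete, here is a hypothetical sketch (not actual Copilot output) of the kind of inaccessible pattern that is common enough in existing code for an auto-complete model to keep reproducing, alongside the accessible alternative:

```typescript
function submitForm(): void {
  console.log("form submitted"); // placeholder handler for the example
}

// Broken pattern, rife in real codebases: a clickable <div> carries no
// button semantics, is not keyboard-focusable, and is not announced as
// a control by screen readers.
const broken = document.createElement("div");
broken.textContent = "Submit";
broken.addEventListener("click", submitForm);

// Accessible version: a real <button> is focusable, announced as a
// button, and operable with Enter/Space at no extra cost.
const accessible = document.createElement("button");
accessible.textContent = "Submit";
accessible.addEventListener("click", submitForm);
```

A model trained on enough instances of the first pattern will keep suggesting the first pattern.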


cross-posted from: https://kbin.social/m/[email protected]/t/98592

Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves


cross-posted from: https://lemmy.ml/post/1354017

cross-posted from: https://lemmy.ml/post/1353899

Article summarized by AI below: The article argues that artificial intelligence (AI) is not a threat to humanity, but a powerful tool to solve global challenges such as climate change, poverty, disease, and inequality. It gives examples of how AI is already being used to improve health care, education, agriculture, and energy efficiency. It also discusses the ethical and social implications of AI, and how we can ensure that it is aligned with human values and goals. The article concludes that AI will save the world if we use it wisely and responsibly.


cross-posted from: https://beehaw.org/post/616101

I was thinking about this after a discussion at work about large language models (LLMs) - the initial scrape of the internet before ChatGPT became publicly usable was probably the last truly high-quality scrape of human-made content any model will get. The second ChatGPT went public, the data pool became tainted with people publishing information from it. Future language models will have increasingly large percentages of their data tainted by AI-generated content, skewing the results away from how humans actually write. To get actual human content, they may need to turn to transcriptions of audio recordings or phone calls for training, and even that wouldn't be quite right, because people write differently than they speak.
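A toy model of that feedback loop (every number here is an illustrative assumption, not a measurement): if some fixed fraction of newly published text each scrape generation is model output, the AI-generated share of the accumulated corpus compounds toward that fraction:

```typescript
// Toy model: each scrape generation adds fresh text equal to
// `newTextRatio` of the existing corpus, of which `aiShare` is
// AI-generated. Returns the contaminated fraction after each generation.
function contaminationOverGenerations(
  generations: number,
  aiShare: number,      // fraction of newly published text that is AI-generated
  newTextRatio: number  // new text per generation, relative to corpus size
): number[] {
  let corpus = 1;  // total corpus size, arbitrary units
  let aiText = 0;  // AI-generated portion of the corpus
  const history: number[] = [];
  for (let g = 0; g < generations; g++) {
    const fresh = corpus * newTextRatio;
    aiText += fresh * aiShare;
    corpus += fresh;
    history.push(aiText / corpus);
  }
  return history;
}

// e.g. 30% of new text AI-generated, corpus growing 20% per scrape:
// the tainted fraction climbs every generation and approaches 0.30.
console.log(contaminationOverGenerations(5, 0.3, 0.2).map((f) => f.toFixed(2)));
```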

I sort of wonder if eventually people will start being influenced in how they choose to write by seeing this AI content. If teachers use AI-generated texts in school lessons, especially at lower levels, will that affect how kids end up writing and formatting their work? It's weird to think about the wider implications of how this AI stuff will ultimately impact society.

What are your predictions? Is there a future where AI can get a clean, human-made scrape? Or are we doomed to start writing like AIs?


cross-posted from: https://feddit.de/post/820900

Beijing uses a sophisticated and evolving set of legal, technical, and operational apparatuses to run a near real-time censorship system across all platforms and channels, conduct propaganda and disinformation campaigns, and collect massive amounts of data. The publication distills the most important lessons from this history so that democracies can find new ways, or improve existing strategies, to preserve liberal values and protect the rights of Internet users.

Here is the study (pdf): https://shop.freiheit.org/download/P2@1489/733787/The%20Everything%20Everywhere%20Censorship%20of%20China_EN.pdf
