this post was submitted on 09 Jul 2023

BecomeMe


Social Experiment. Become Me. What I see, you see.

  • Attack example: a poisoned version of EleutherAI's GPT-J-6B model, uploaded to the Hugging Face Model Hub, spreads disinformation to anyone who downloads it.
  • LLM poisoning can lead to widespread fake news and serious social repercussions.
  • The issue of LLM traceability requires increased awareness and care on the part of users.
  • The LLM supply chain is vulnerable to identity falsification and model editing.
  • The lack of reliable traceability of the origin of models and their training algorithms poses a threat to the security of artificial intelligence.
  • Mithril Security is developing a technical solution to trace models back to their training algorithms and datasets.
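One way to picture what such traceability could look like (a minimal sketch only — this is not Mithril Security's actual method, and the artifact contents are hypothetical) is binding a model to its training code and dataset with content hashes, so that a tampered model no longer matches the published record:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


def provenance_record(weights: bytes, training_code: bytes, dataset: bytes) -> dict:
    """Bind a model to the code and data that produced it via content hashes.

    Anyone holding the same artifacts can recompute the digests and compare
    them against the record published by the original trainer.
    """
    return {
        "model_sha256": sha256_hex(weights),
        "code_sha256": sha256_hex(training_code),
        "dataset_sha256": sha256_hex(dataset),
    }


# Toy byte strings standing in for real weight/code/data files (hypothetical).
published = provenance_record(b"weights-v1", b"train.py contents", b"dataset rows")

# An edited (poisoned) model produces a different model hash, so it fails
# verification against the published record even if code and data match.
poisoned = provenance_record(b"weights-v1-poisoned", b"train.py contents", b"dataset rows")
print(published["model_sha256"] == poisoned["model_sha256"])  # False
```

This only detects tampering after the fact and assumes the original record is distributed over a trusted channel; a real scheme would also need signatures tying the record to the trainer's identity.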
[–] MomoTimeToDie 3 points 2 years ago

Breaking news: people can lie on the internet