this post was submitted on 09 Jul 2023
11 points (92.3% liked)

  • Attack example: a poisoned version of EleutherAI's GPT-J-6B model, uploaded to the Hugging Face Model Hub, where it spreads disinformation.
  • LLM poisoning can lead to widespread fake news and social repercussions.
  • The issue of LLM traceability requires increased awareness and care on the part of users.
  • The LLM supply chain is vulnerable to identity falsification and model editing (see the sketch after this list).
  • The lack of reliable traceability for the origin of models and algorithms threatens the security of AI systems.
  • Mithril Security is developing a technical solution to trace models back to their training algorithms and datasets.
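
As an illustration of the identity-falsification point above, here is a minimal Python sketch using the Hugging Face `transformers` API. Models are fetched by repository name alone, so a one-letter look-alike repo loads just as readily as the genuine one; pinning an exact commit is a partial mitigation. The typosquatted name and the commit hash below are hypothetical stand-ins, not taken from the article.

```python
# Minimal sketch of the supply-chain risk: transformers resolves models by
# repository name alone, so nothing distinguishes a genuine repo from a
# typosquatted impostor at load time.
from transformers import AutoModelForCausalLM

# Genuine repository: EleutherAI's original GPT-J-6B.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# A poisoned look-alike could differ by a single letter; the API would load
# it without complaint ("EleuterAI" here is a hypothetical typosquat):
# model = AutoModelForCausalLM.from_pretrained("EleuterAI/gpt-j-6B")

# Partial mitigation: pin the exact commit of the weights you audited, so the
# files behind the name cannot be swapped out from under you later.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="0123abc...",  # hypothetical commit hash of the audited revision
)
```

Pinning a revision does not verify what the weights contain, of course; it only ensures you keep loading the artifact you originally reviewed, which is why the traceability work mentioned above targets the training process itself.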
Zeppo 7 points 2 years ago

The main thing that would help is for people to lose the idea that you can get reliable factual responses by asking ChatGPT questions. Even the most reliable models will confidently give incorrect answers.