this post was submitted on 18 Jun 2025
18 points (95.0% liked)

Cybersecurity


c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.

[–] [email protected] 10 points 19 hours ago (4 children)

Oh man, I hate the use of all the scary language around jailbreaking.

This means cybercriminals are using jailbreaking techniques to bypass the built-in safety features of advanced LLMs (AI systems that generate human-like text, such as OpenAI’s ChatGPT). By jailbreaking them, criminals force the AI to produce “uncensored responses to a wide range of topics,” even if these are “unethical or illegal,” researchers noted in their blog post shared with Hackread.com.

“What’s really concerning is that these aren’t new AI models built from scratch – they’re taking trusted systems and breaking their safety rules to create weapons for cybercrime,” he warned.

"Hackers make uncensored AI... only BAD people would want to do this, to use it to do BAD CRIMINAL things."

God forbid I want to jailbreak AI or run uncensored models on my own hardware. I'm just like those BAD CRIMINAL guys.

[–] atlas 4 points 18 hours ago (1 children)

i bet you're creating cybercrime right this very second!

[–] [email protected] 3 points 17 hours ago

So much cybercrime. All the cybercrime.
