[–] [email protected] 7 points 1 day ago* (last edited 1 day ago)

All LLMs lie, which is why it's important to verify whatever output you get. A GPT is essentially text prediction trained on a very large dataset; think of when your phone ends up sending "ducking autocorrect" in a text. Furthermore, DeepSeek has released distillations of R1 into several different base models. Which ones do you have experience using?
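
To make the "text prediction" point concrete, here's a toy sketch (nothing to do with DeepSeek specifically, and using only the Python standard library): it "trains" on a tiny made-up corpus by counting which word follows which, then generates text by always picking the most common follower. A real GPT does the same kind of next-token prediction, just with billions of learned weights instead of a lookup table.

```python
from collections import Counter, defaultdict

# Tiny made-up "training set" -- a real model sees terabytes of text instead.
corpus = (
    "the model predicts the next word "
    "the model predicts the next token "
    "the user trusts the model"
).split()

# "Training": count which word tends to follow which.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

# "Generation": repeatedly append the most likely next word.
word = "the"
output = [word]
for _ in range(6):
    if not followers[word]:
        break
    word = followers[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # -> "the model predicts the model predicts the"
```

Notice there's no step anywhere that checks whether the output is true; it's only ever "what usually comes next."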

https://github.com/deepseek-ai/DeepSeek-R1

Edit: To add even more context, GPT and diffusion models are patently not AI, because they have no way to verify the output they produce. It's all tokens fed through an autoregressive loop: each new token is predicted from the ones before it, along statistical paths reinforced during training. None of these "A.I." models are thinking or reasoning, yet.
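
If you want to see the token-by-token loop I mean, here's a rough sketch assuming the Hugging Face transformers library and the small gpt2 checkpoint (the prompt is just an arbitrary example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# The prompt becomes a sequence of integer token ids.
ids = tokenizer("The first person to walk on the moon was", return_tensors="pt").input_ids

# Autoregressive loop: feed everything generated so far back in and greedily
# append the single most likely next token. Nothing here ever checks whether
# the finished sentence is factually correct.
with torch.no_grad():
    for _ in range(12):
        logits = model(ids).logits        # scores over the whole vocabulary
        next_id = logits[0, -1].argmax()  # most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Swap the greedy argmax for sampling and you get the more "creative" (and more confidently wrong) outputs people complain about.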