this post was submitted on 02 Sep 2024
811 points (92.9% liked)

solarpunk memes

[–] [email protected] 3 points 2 months ago (3 children)

Ok. Been thinking about this and maybe someone can enlighten me. Couldn't LLMs be used for code breaking and encryption cracking? My thought is that language has a cadence, so even if you were to scramble it to hell, shouldn't that cadence still be present in the encryption? Couldn't you feed an LLM a bunch of machine code and train it to take that machine code and look for conversational patterns, spitting out likely dialogues?

[–] [email protected] 8 points 2 months ago (1 children)

That would probably be a task for regular machine learning. Plus proper encryption shouldn't have a discernible pattern in the encrypted bytes. Just blobs of garbage.
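To illustrate the "blobs of garbage" point, here's a toy sketch in Python. XORing a repetitive plaintext with a random one-time keystream (a simplified stand-in for a real cipher, not an actual encryption scheme) spreads the ciphertext bytes over nearly the whole 0–255 range, leaving no cadence to latch onto:

```python
import os
from collections import Counter

# Toy stand-in for a real cipher: XOR with a random one-time keystream.
plaintext = b"the same phrase over and over " * 100
keystream = os.urandom(len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# The plaintext uses only a handful of distinct byte values; the
# ciphertext uses nearly all 256, with no repeating structure.
print(len(Counter(plaintext)), len(Counter(ciphertext)))
```

Any pattern a model could learn from has to come from the ciphertext bytes, and a well-designed cipher makes those statistically indistinguishable from random noise.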

[–] [email protected] 2 points 2 months ago

Thanks for the reply! I'm obviously not a subject matter expert on this.

[–] [email protected] 8 points 2 months ago* (last edited 2 months ago) (1 children)

Could there be patterns in ciphers? Sure. But modern cryptography is designed specifically against patterns like the one you described. Modern cryptographic algos that are considered good all have the Avalanche effect baked in as a basic design requirement:

https://en.m.wikipedia.org/wiki/Avalanche_effect

Basically, using the same encryption key, if you change one character in the input text, the ciphertext will be completely different. That doesn't mean there couldn't possibly be patterns like the one you described, but it makes it very unlikely.
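You can see the avalanche effect directly with an off-the-shelf hash function; a minimal Python sketch (the input strings are arbitrary examples):

```python
import hashlib

def bits(data: bytes) -> str:
    """Render bytes as a string of 0/1 characters."""
    return "".join(f"{b:08b}" for b in data)

# Two inputs differing in a single character.
h1 = hashlib.sha256(b"attack at dawn").digest()
h2 = hashlib.sha256(b"Attack at dawn").digest()

# Count differing bits out of 256; the avalanche effect means
# roughly half of them flip.
diff = sum(a != b for a, b in zip(bits(h1), bits(h2)))
print(diff)
```

A one-character change flips on the order of half the output bits, so nothing about the "shape" of the input survives into the output.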

More to your point, given the number of people playing with LLMs these days, I doubt LLMs have any special ability to find whatever minute, intentionally obfuscated patterns may exist. We would have heard about it by now. Or... maybe we just don't know about it yet. But I think the odds are really low.

[–] [email protected] 3 points 2 months ago

Very informative! Thank you.

[–] [email protected] 7 points 2 months ago* (last edited 2 months ago)

This is a good question and your curiosity is appreciated.

A password that has been properly hashed (the scrambling described in that Avalanche Effect Wikipedia entry, applied to the password before it's stored) can take trillions of years to crack, and each additional character multiplies that time by the size of the character set. Unless the AI can bring that number to less than 90 days - a fairly standard password change frequency for corporate environments - or heck, just less than 100 years so it can be done within the hacker's lifetime, it's not really going to matter how much faster it becomes.
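The arithmetic behind "trillions of years" is easy to sketch. Assuming a hypothetical attacker testing ten billion guesses per second (an illustrative figure, not a benchmark of any real cracking rig):

```python
# Back-of-envelope brute-force estimate with made-up but plausible numbers.
def years_to_search(length: int, alphabet: int = 94,
                    guesses_per_second: float = 1e10) -> float:
    """Worst-case years to try every password of `length` characters
    drawn from `alphabet` symbols (94 = printable ASCII)."""
    seconds = alphabet ** length / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# At this assumed rate, 8 characters fall quickly, while 16 characters
# take on the order of a hundred trillion years.
for n in (8, 12, 16):
    print(n, f"{years_to_search(n):,.1f} years")
```

Each extra character multiplies the search space by the alphabet size (94 here), which is why length matters so much more than raw cracking speed.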

The easier method (already happening in fact) is to use an LLM to scan a person's social media and then reach out to relatives pretending to be that person, asking for bail money, logins etc. If the data is sufficiently locked down, the weakest link will be the human that knows how to get to it.