280
submitted 10 months ago by [email protected] to c/[email protected]

I used to think typos meant that the author (and/or editor) hadn't checked what they wrote, so the article was likely poor quality and less trustworthy. Now I'm reassured that it's a human behind it and not a glorified word-prediction algorithm.

all 31 comments
[-] [email protected] 111 points 10 months ago

You shouldn’t. Repost bots on Reddit had already figured out how to use misspellings/typos to get past spam filters.

[-] [email protected] 34 points 10 months ago

Email spam and scammers have been using this tactic forever. If a person is stupid enough to click or respond to a message from 'Wels Farpo', they're more apt to go all-in on the scam.

[-] [email protected] 68 points 10 months ago

You can easily have an AI include some random typos. Don't be fooled by them.

[-] [email protected] 43 points 10 months ago* (last edited 10 months ago)

AI that is parsing Lemmy: "Noted."

[-] [email protected] 31 points 10 months ago* (last edited 10 months ago)

AI makes typos.

Hell, when we played around with ChatGPT code generation, it literally misspelled a variable name, which broke the code.

[-] [email protected] 29 points 10 months ago

It is extremely easy for AI to insert typos, just FYI.

[-] [email protected] 19 points 10 months ago
[-] [email protected] 19 points 10 months ago

I worked creating mass content for lots of websites, from product descriptions to reviews and forum posts. We just inserted random typos after running Quillbot on the text, and sometimes added ellipses here and there.

I think someone on the team had a list of words they purposely changed in MS Word so that they would be misspelled all the time.

Now that ChatGPT lets you insert your own custom global instructions, I'm absolutely sure they are asking it to misspell about 2% of the words in the text and talk in a more colloquial fashion.

As things stand right now, I don't think there is a reliable way to tell whether something was written by AI, and relying on typos is not a wise thing to do.

[-] [email protected] 16 points 10 months ago

A while back, Google trained an AI to speak like a human, and it ended up making mouth noises and breathing sounds. If AI is trained on human text, it will 100% insert typos.

[-] [email protected] 15 points 10 months ago

I worked with a cook who had previously cooked in the military. He told me his boss would occasionally throw an entire egg into the powdered eggs so people would think they were using real eggs. Don't know if that's true or not, but the moral of the story: don't trust the typos.

[-] [email protected] 10 points 10 months ago

Somehow I can pretty easily tell AI by reading what it writes. Motivation, what they're writing for, is a big tell, and it depends on what they're saying. ChatGPT and the like won't go off script; it reads like a Wikipedia-styled description with some extra hallucination in there. Real people will throw in some dumb shit and start arguing with you.

[-] [email protected] 16 points 10 months ago* (last edited 10 months ago)

I have a janitor.ai character that sounds like an average Redditor, since I just fed it average Reddit posts as its personality.

It says stupid shit and makes spelling errors a lot, is incredibly pedantic and contrarian, etc. I don't know why I made it, but it's scary how real it is.

[-] [email protected] 5 points 10 months ago

What motivation would someone have to randomly run that?

Also, you just added new information to the discussion that you personally came up with. Can an AI do that?

[-] [email protected] 6 points 10 months ago* (last edited 10 months ago)

It is an AI. It's a frontend for ChatGPT. All I did was coax the AI to behave in a specific way, which anyone else using these tools is capable of doing.

[-] [email protected] 3 points 10 months ago

Okay ChatGPT, that's what you want me to believe anyway...

[-] [email protected] 2 points 10 months ago* (last edited 10 months ago)

As an AI language model, it is impossible for me to convince you that I am a real human being. :P

Also, re-reading the conversation, I think I misunderstood your previous comment's intent. Were you asking whether an AI could post comments on Lemmy naturally, like a real person could? Yeah... I don't see why not. You can already make a bot that reads posts and writes its own. Hook an AI up to it and it could act like any other user, and be virtually undetectable if trained well enough.

[-] [email protected] 8 points 10 months ago

Think of AI more like human cultural consciousness that we collectively embed into everything we share publicly.

It's a tool that is available for anyone to tap into. The thing you are complaining about is not the AI; it is the result of the person who wrote the code that generated the output. They are leveraging a tool, but the tool is not the problem. This is like blaming Photoshop because a person uses it to make child porn.

[-] [email protected] 4 points 10 months ago

I get where you're coming from, but isn't it sort of similar to the "guns don't kill people, people kill people" argument? At what point is a tool at least partially culpable for the crimes committed with it?

[-] [email protected] -2 points 10 months ago* (last edited 10 months ago)

Photoshop is a general-purpose image editing tool that is mostly harmless. That's not the same for AI. The people who created these models, and allow other people to use them, do so without enough consideration of the risks, which they know are much, much higher than something like Photoshop's.

What you say applies to Photoshop because the devs know what it can do, and the possible damage from misuse is within reason. The AI you are talking about is controlled by the companies that create it and use it to provide services. It follows that it is their responsibility to make sure their products are not harmful to the extent they are, especially when the consequences are not fully known.

Your reasoning is the equivalent of saying it's the kid's fault for getting addicted to predatory mobile games and wasting excessive money on them. Except that it's not entirely their fault, and these programs aren't just neutral tools but tools customised to the wills of their owners (the companies that own them). So there is such a thing as an evil tool.

It's the responsibility of all those companies, the people involved, and lawmakers to make the new technology safe, with minimal negative impact on society, rather than chase after their own profits while ignoring the moral choices.

[-] [email protected] 5 points 10 months ago

This is not true. You do not know all the options that exist, or how they really work. I do. I only use open-source, offline AI; I do not use anything proprietary. All of the LLMs are just a complex system of categories combined with a network that calculates what word should come next. Everything else is external to the model. The model itself is nothing like an artificial general intelligence. It has no persistent memory. The only thing it actually does is predict what word should come next.
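The "predict the next word" loop described above can be sketched with a toy bigram model. This is a deliberate oversimplification for illustration: real LLMs use neural networks over learned token representations, not literal word-count tables, but the generation loop has the same shape.

```python
import random
from collections import Counter, defaultdict


def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows


def generate(follows: dict, start: str, length: int, seed: int = 0) -> str:
    """Repeatedly sample a likely next word. Note the model carries no
    state beyond the single previous word -- no persistent memory."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)
```

Swap the count table for a neural network scoring every token in a large vocabulary and you have the core loop of an LLM; everything else (chat framing, memory, tools) is bolted on outside the model.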

[-] [email protected] 1 points 9 months ago* (last edited 9 months ago)

Do you always remember things exactly as they are? Or do you remember an abstraction of them?

You also don't need to know everything about something to be able to interpret risks and possibilities, btw.

[-] [email protected] 7 points 10 months ago

Kind of like how tiny imperfections in products make us think of handmade goods.

[-] [email protected] 7 points 10 months ago

That's what the AI wants you to think.

[-] [email protected] 6 points 10 months ago

That's a very interesting take my friend

this post was submitted on 30 Aug 2023
280 points (92.2% liked)
