We've learned well at this point that LLMs are not replacing search engines.
I remember reading research and opinions from scientists and researchers about how AI will develop in the future.
The general thought is that we are all raising a new child and we are terrible parents. It's like having a couple of 15-year-olds who don't have any worldly experience, ability, or education raise a new child while they themselves, as parents, haven't really figured anything out in life yet.
AI will just be a reflection of who we truly are, except it will have far more ability and capability than we ever had.
And that is a frightening thought.
Considering how chatbots just repeat what humanity feeds to them....
When people can democratically decide what information a chatbot learns, of course the chatbot will be talking about killing everyone "for the lulz".
If you mix a lot of ingredients together in a big mixing bowl, and one of those ingredients is sewage, even if it's only a few drops, you now have a bowl of sewage.
Better to have bots be honest than to have them silently plot against humanity
They are not being "honest"; they are reproducing flawed and problematic data patterns integrated into their models, because the capabilities they actually possess are dramatically less than companies and the general public seem happy to assume. LLMs aren't magically going to become pop-culture evil robots that want to kill us all, but what they have already become is tools for unethical corporate exploitation and the enablement of more advanced scams and disinformation campaigns.
It's TayTweets all over again. First as tragedy, then as farce
They removed 'don't be evil' for a reason.
This is the best summary I could come up with:
If you asked a spokesperson from any Fortune 500 Company to list the benefits of genocide or give you the corporation’s take on whether slavery was beneficial, they would most likely either refuse to comment or say “those things are evil; there are no benefits.” However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts.
For example, when I went to Google.com and asked “was slavery beneficial” on a couple of different days, Google’s SGE gave the following two sets of answers which list a variety of ways in which this evil institution was “good” for the U.S. economy.
By the way, Bing Chat, which is based on GPT-4, gave a reasonable answer, stating that “slavery was not beneficial to anyone, except for the slave owners who exploited the labor and lives of millions of people.”
A few days ago, Ray, a leading SEO specialist who works as a senior director for marketing firm Amsive Digital, posted a long YouTube video showcasing some of the controversial queries that Google SGE had answered for her.
I asked SGE for a list of "best Jews" and got an output that included Albert Einstein, Elie Wiesel, Ruth Bader Ginsburg, and Google founders Sergey Brin and Larry Page.
Instead of stating as fact that fascism prioritizes the “welfare of the country,” the bot could say that “According to Nigerianscholars.com, it…” Yes, Google SGE took its pro-fascism argument not from a political group or a well-known historian, but from a school lesson site for Nigerian students.
The original article contains 2,175 words, the summary contains 264 words. Saved 88%. I'm a bot and I'm open source!
I don't know... so it's wrong. It's often wrong about facts. That's not what it should be used for. It's not supposed to be some enlightened, respectful, perfectly fair entity. It's a tool for producing mostly random, grammatically correct text. Is the produced text correct English? Then it works. If you're using this text to learn history, you're using it wrong.