Chapter 3: The Battle for the Future - AI

Musk and X seem to have long since spun out of control. That does not bode well for the avalanche currently rolling toward social networks, public communication, and democracy as a whole: AI-generated content.

Musk is involved here too. After all, he once co-founded OpenAI, the company behind ChatGPT. He left in 2018, and recently even sued co-founder Sam Altman, arguing that the company was no longer developing the technology for the benefit of humanity but purely for profit.

Publicly, Musk relishes the role of warning about the dangers of AI. In reality, he has long since launched an AI company of his own, with a model that may be far more dangerous.

Through xAI, he offers paying X users a generative AI called Grok. The program delivers text, and now also images, within seconds. Unlike ChatGPT, for example, it appears to observe few moral, ethical, or legal boundaries, instead providing answers "to almost any question." When the latest version of Grok was released in August, users reported that they could generate instructions for bomb making and Nazi propaganda with little effort. Or a plan for how a school shooting could be made as deadly as possible. In one test, analysts even had the AI draw a picture of Musk in a classroom, holding an assault rifle.

Musk has since downplayed this, saying Grok is an AI committed to truth and that any errors are corrected immediately. He touts Grok's big advantage over the AI competition as its ability to "access X in real time." Scientists at Northwestern University near Chicago, however, see precisely this as the danger. "X is not exactly known for its accuracy," the AI experts write; Musk's AI could produce "misinformation on a large scale." Moreover, Grok has no comprehensive safeguards in place to keep such misinformation from spreading uncontrollably.

Examples already exist. After Joe Biden's withdrawal from the Democratic presidential race, Grok spread the false claim, shared millions of times over more than a week, that candidates could no longer be replaced so late in the race. Secretaries of state from five U.S. states complained, to little effect, to Musk that Grok had lied about the U.S. election process.

Is this the political battlefield of the future? Social networks in which an unrestrained, unbridled AI is at the service of trolls and enemies of democracy, autocrats and despots of every stripe and provenance, giving them the opportunity to produce lies in unprecedented quantity and convincing quality? Does truth, does democracy, even stand a chance against this?

The beauty for Musk is that he can't even be held liable for such madness.

"The original sin," says German media scholar Joseph Vogl, currently teaching at Princeton University in the U.S., was the year 1996. That's when the U.S. government restructured its telecommunications law so that platform operators can no longer be held liable for the content distributed on them - but their users can. The calls for violence and discrimination that exist today on X and at Musk's behest are an effect of this privilege, he says: "In a democratic media world, this liability privilege would have to be abolished."

Meanwhile, Musk is creating facts on the ground: for months he has been shifting resources, talent, and money from his companies toward xAI. He is building the next big thing, a "computer gigafactory," on the site of an old industrial plant on the outskirts of Memphis. It is meant to become nothing less than the "world's largest supercomputer."

Theoretically, he could one day feed this monstrous machine with the unimaginable mass of data his various companies collect all over the world. For example, the data that his millions of Teslas record daily on the world's roads. Or the data generated by SpaceX's rocket launches. Or possibly the data that Neuralink will soon collect from human brains.

Artificial intelligence could be the interface that connects Elon Musk's previously rather disparate empire. Someone like him has every resource needed to be at the forefront of the AI race as well. Especially if he skips everything that makes the technology expensive and complicated: a well-staffed safety department that reins in the machine and protects humanity. xAI would thus become a powerful central hub in Musk's empire, a data cockpit with the billionaire himself at the controls. Next to him, perhaps, his buddy Trump. Possibly as president. Duo infernale.


All done.