ForgottenFlux

joined 1 year ago
 

A Microsoft employee disrupted the company’s 50th anniversary event to protest its provision of AI technology to the Israeli military.

“Shame on you,” said Microsoft employee Ibtihal Aboussad, speaking directly to Microsoft AI CEO Mustafa Suleyman. “You are a war profiteer. Stop using AI for genocide. Stop using AI for genocide in our region. You have blood on your hands. All of Microsoft has blood on its hands. How dare you all celebrate when Microsoft is killing children. Shame on you all.”

Sources at Microsoft tell The Verge that shortly after Aboussad was ushered out of Microsoft’s event, she sent an email to a number of email distribution lists that contain hundreds or thousands of Microsoft employees. Aboussad’s email is preserved in full at the archive.today link below:

archive.today link

 

Every year, journalist Ben Black publishes a playful fake story on his community news site Cwmbran Life for April Fools' Day.

Since 2018, the 48-year-old has spun yarns ranging from a Hollywood-style sign on a mountain to a nudist cold-water swimming club at a lake.

In 2020, Mr Black published a fake story claiming Cwmbran had been recognised by Guinness World Records for having the most roundabouts per square kilometre.

Despite altering the wording of his article that same afternoon, he said he was "shocked" and "worried" when he searched for it on 1 April and found the false claim being repeated by Google's AI tool and presented as fact.

 

European Union regulators are preparing major penalties against X, including a fine that could exceed $1 billion, according to a New York Times report yesterday.

The European Commission determined last year that Elon Musk's social network violated the Digital Services Act. Regulators are now in the process of determining what punishment to impose.

"The penalties are set to include a fine and demands for product changes," the NYT report said, attributing the information to "four people with knowledge of the plans." The penalty is expected to be issued this summer and would be the first one under the new EU law.

"European authorities have been weighing how large a fine to issue X as they consider the risks of further antagonizing [President] Trump amid wider trans-Atlantic disputes over trade, tariffs and the war in Ukraine," the NYT report said. "The fine could surpass $1 billion, one person said, as regulators seek to make an example of X to deter other companies from violating the law, the Digital Services Act."

 

Interest in LibreOffice, the open-source alternative to Microsoft Office, is on the rise, with downloads of its software package approaching 1 million a week. That’s the highest weekly figure since 2023.

“We estimate around 200 million [LibreOffice] users, but it’s important to note that we respect users’ privacy and don’t track them, so we can’t say for sure,” said Mike Saunders, an open-source advocate and a deputy to the board of directors at The Document Foundation.

LibreOffice users typically want a straightforward interface, Saunders said. “They don’t want subscriptions, and they don’t want AI being ‘helpful’ by poking its nose into their work — it reminds them of Clippy from the bad old days,” he said.

There are genuine use cases for generative AI tools, but many users prefer to opt in and choose for themselves when and where to enable them. “We have zero plans to put AI into LibreOffice. But we understand the value of some AI tools and are encouraging developers to create … extensions that use AI in a responsible way,” Saunders said.

 

Anyone who has suffered the indignity of a splinter, a blister, or a paper cut knows that small things can sometimes be hugely annoying. You aren't going to die from any of these conditions, but it's still hard to focus when, say, the back of your right foot is rubbing a new blister against the inside of your not-quite-broken-in-yet hiking boots.

I found myself in the computing version of this situation yesterday, when I was trying to work on a new Mac Mini and was brought up short by the fact that my third mouse button (that is, clicking on the scroll wheel) did nothing. This was odd, because I have for many years assigned this button to "Mission Control" on macOS—a feature that tiles every open window on your machine, making it quick and easy to switch apps. When I got the new Mini, I immediately added this to my settings. Boom!

And yet there I was, a couple hours later, clicking the middle mouse button by reflex and getting no result. This seemed quite odd—had I only imagined that I made the settings change? I made the alteration again in System Settings and went back to work.

But after a reboot later that day to install an OS update, I found that my shortcut setting for Mission Control had once again been wiped away. This wasn't happening with any other settings changes, and it was strangely vexing.

When it happened a third time, I switched into full "research and destroy the problem" mode. One of my Ars colleagues commiserated with me, writing, "This kind of powerful-annoying stuff is just so common. I swear at least once every few months, some shortcut or whatever just stops working, and sometimes, after a week or so, it starts working again. No rhyme, reason, or apparent causality except that computers are just [unprintable expletives]."

But even if computers are [unprintable expletives], their problems have often been encountered and fixed by some other poor soul.
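
If you ever find yourself chasing a similar vanishing-setting mystery, one generic way to narrow it down is to check which preference files actually change when you re-apply the shortcut. The sketch below is not from the article; the directory path and approach are assumptions, and it simply lists the plists under ~/Library/Preferences that were modified after a reference timestamp.

    # Hypothetical helper, not the author's method: list macOS preference
    # plists modified since a reference time, to guess where a vanishing
    # setting (e.g. a Mission Control shortcut) might be stored.
    import time
    from pathlib import Path

    prefs_dir = Path.home() / "Library" / "Preferences"  # assumed location of per-user prefs
    marker = time.time()

    input("Re-apply the setting in System Settings, then press Enter... ")

    for plist in sorted(prefs_dir.glob("*.plist")):
        if plist.stat().st_mtime > marker:
            print(plist.name)  # candidate files to inspect with `defaults read`

Because macOS caches preferences through cfprefsd, the on-disk file may not update immediately, so treat the output as a hint rather than proof.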

 

Encryption can’t protect you from adding the wrong person to a group chat. But there is also a setting to make sure you don’t.

You can add your own nickname to a Signal contact by clicking on the person’s profile picture in a chat with them, then clicking “Nickname.” Signal says “Nicknames & notes are stored with Signal and end-to-end encrypted. They are only visible to you.” So, you can add a nickname to a Jason saying “co-founder,” or maybe “national security adviser,” and no one else is going to see it. Just you. When you’re trying to make a group chat, perhaps.

Signal could improve its user interface around groups and people with duplicate display names.

 

One of the most basic tenets of cybersecurity is that you must “consider your threat model” when trying to keep your data and your communications safe, and then take appropriate steps to protect yourself.

This means you need to consider who you are, what you are talking about, and who may want to know that information (potential adversaries) for any given account, conversation, etc. The precautions you want to take to protect yourself if you are a random person messaging your partner about what you want to eat for dinner may be different from those you’d want to take if, hypothetically, you are the Secretary of Defense of the United States or a National Security Advisor talking to top administration officials about your plans for bombing an apartment building in Yemen.

 

I've lost count of how many times I've been cornered at conferences by men in meticulously over-casual $300 t-shirts, evangelizing their startups with religious fervor. "We're DISRUPTING the entire industry," they insist, with the insufferable confidence of someone who believes their "Uber for X" app constitutes a revolution on par with penicillin.

"The old model is completely broken," they insist.

As they drone on about their company's world-changing approach to (inevitably) shuttling burritos from point A to point B, the truth becomes painfully obvious: their "revolutionary" business model consists of wedging themselves between existing markets and participants, then bleeding both sides dry with escalating fees and commissions.

This has become the dominant tech playbook.

What venture capitalists celebrate as "disruption" is, with damning frequency, nothing more than a moderately sophisticated way to extract rent from existing systems that functioned perfectly well before tech entrepreneurs arrived to "save" them.

When tech founders proclaim "we're disruptive," what they really mean is "we've found a legally dubious way to build wealth on the back of a system that worked fine before we showed up." Their "disruption" is a form of digital colonization—invading functioning markets and leaving devastation in their wake.

The obscenity isn't just the extraction itself—it's the squandering of human potential. Imagine if the brilliant minds and vast capital currently dedicated to building increasingly sophisticated digital toll booths were instead focused on our existential challenges: climate catastrophe, preventable disease, systemic poverty, the collapse of democracy.

What actual problems have the most celebrated tech "unicorns" of the past decade solved? Did Uber cure a disease? Did DoorDash address climate change? Did Airbnb solve the housing crisis—or worsen it?

This is a society-wide failure of resource allocation.

It's a moral catastrophe masquerading as innovation.

If you're a tech founder: prove this assessment wrong. Show me, show us "disruption" that genuinely creates new, class-crossing value instead of reallocating it. Build something that addresses human needs instead of inventing elaborate new ways to insert yourself as a rentier into existing transactions. Stop using "innovation" as a smokescreen for exploitation.

For everyone else: when the MBA in a designer hoodie that costs more than a starter car tells you he's "disrupting" an industry, translate it correctly: "I've found a way to skim money from people who actually create value, and I'm hoping you'll call me a visionary instead of an execrable leech."

 

Papua New Guinea's government has shut down social media platform Facebook, in what it describes as a "test" to mitigate hate speech, misinformation, pornography and "other detrimental content".

The test, conducted under the country's anti-terrorism laws, began on Monday morning and has extended into Tuesday.

Facebook users in the country have been unable to log in to the platform, and it is unclear how long the ban will last.

The government did not flag the move ahead of Monday's "test" — a step opposition MPs and media leaders have described as "tyranny" and an "abuse of human rights".

Facebook is by far the most popular social media platform in the country, with an estimated 1.3 million users, or about half of the country’s roughly 2.6 million internet users.

The platform is a critical tool for public discourse in the country, with many highly active forums used to discuss PNG politics and social issues.

Yet the government has been highly critical of Facebook, often blaming the platform for helping spread misinformation, particularly in light of a recent spate of tribal killings in the country.

 

The global backlash against the second Donald Trump administration keeps on growing. Canadians have boycotted US-made products, anti–Elon Musk posters have appeared across London amid widespread Tesla protests, and European officials have drastically increased military spending as US support for Ukraine falters. Dominant US tech services may be the next focus.

There are early signs that some European companies and governments are souring on their use of American cloud services provided by the three so-called hyperscalers. Between them, Google Cloud, Microsoft Azure, and Amazon Web Services (AWS) host vast swathes of the Internet and keep thousands of businesses running. However, some organizations appear to be reconsidering their use of these companies’ cloud services—including servers, storage, and databases—citing concerns about privacy and data access under the Trump administration.

“There’s a huge appetite in Europe to de-risk or decouple the over-dependence on US tech companies, because there is a concern that they could be weaponized against European interests,” says Marietje Schaake, a nonresident fellow at Stanford’s Cyber Policy Center and a former decadelong member of the European Parliament.

 

The onrushing AI era was supposed to create boom times for great gadgets. Not long ago, analysts were predicting that Apple Intelligence would start a “supercycle” of smartphone upgrades, with tons of new AI features compelling people to buy them. Amazon and Google and others were explaining how their ecosystems of devices would make computing seamless, natural, and personal. Startups were flooding the market with ChatGPT-powered gadgets, so you’d never be out of touch. AI was going to make every gadget great, and every gadget was going to change to embrace the AI world.

This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.

There was just one problem with the whole theory: the tech still doesn’t work. Chatbots may be fun to talk to and an occasionally useful replacement for Google, but truly game-changing virtual assistants are nowhere close to ready. And without them, the gadget revolution we were promised has utterly failed to materialize.

In the meantime, the tech industry allowed itself to be so distracted by these shiny language models that it basically stopped trying to make otherwise good gadgets. Some companies have more or less stopped making new things altogether, waiting for AI to be good enough before it ships. Others have resorted to shipping more iterative, less interesting upgrades because they have run out of ideas other than “put AI in it.” That has made the post-ChatGPT product cycle bland and boring, in a moment that could otherwise have been incredibly exciting. AI isn’t good enough, and it’s dragging everything else down with it.

Archive link: https://archive.ph/spnT6
