[-] [email protected] 27 points 2 months ago

This has nothing to do with centralization. AI companies are already scraping the web for everything useful. If you took the content from SO and split it across 1000 federated sites, it would still end up in an AI model. Decentralization would only help if we ever manage to hold the AI companies accountable for the mass copyright violations their industry is built on.

[-] [email protected] 50 points 4 months ago* (last edited 4 months ago)

I have my own backup of the git repo and I downloaded this to compare and make sure it's not some modified (potentially malicious) copy. The most recent commit on my copy of master was dc94882c9062ab88d3d5de35dcb8731111baaea2 (4 commits behind OP's copy). I can verify:

  • that the history up to that commit is identical in both copies
  • after that commit, OP's copy only has changes to translation files which are functionally insignificant

So this does look to be a legitimate copy of the source code as it appeared on github!
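If you want to repeat the check against your own copy, here's a minimal sketch of one way to do it in Python. Assumptions: OP's copy is cloned at ./op-copy (a hypothetical path); the hash is the tip of my copy of master.

```python
# A minimal sketch of one way to run the comparison described above.
import subprocess

MY_TIP = "dc94882c9062ab88d3d5de35dcb8731111baaea2"  # tip of my copy of master

def git(repo: str, *args: str) -> str:
    """Run a git command inside `repo` and return its stdout (raises on failure)."""
    return subprocess.run(
        ["git", "-C", repo, *args],
        check=True, capture_output=True, text=True,
    ).stdout

# Commit hashes cover their entire ancestry, so if my tip commit is an
# ancestor of OP's master, the shared history is bit-for-bit identical.
# merge-base exits non-zero (raising here) if it is not an ancestor.
git("op-copy", "merge-base", "--is-ancestor", MY_TIP, "master")

# Show which files the extra commits touch; only translation files expected.
print(git("op-copy", "diff", "--stat", f"{MY_TIP}..master"))
```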

Clarifications:

  • This was just a random check, I do not have any reason to be suspicious of OP personally
  • I did not check branches other than master (yet?)
  • I did not (and cannot) check the validity of anything beyond the git repo
  • You don't have a reason to trust me more than you trust OP... It would be nice if more people independently checked and verified against their own copies.

I will be seeding this for the foreseeable future.

[-] [email protected] 49 points 5 months ago

"Enshitification" does not mean "I don't like it". It is specifically about platforms that start out looking too good to be true and turn to shit when the user base is locked in. The term is generally used for cases where the decline in quality was pre-planned and not due to external factors. Using the same term each time is, in my opinion, an appropriate way to point out just how common this pattern is.

[-] [email protected] 36 points 5 months ago

This is great. Proton is getting a lot of testing just based on Steam's userbase and it is backed by Valve. We also have a lot of data on proton's performance and potential game-specific fixes in the form of protondb. Making sure that non-Steam launchers can use all that work and information is crucial to guaranteeing the long-term health of linux gaming. Otherwise it is easy to imagine a future where proton is doing great but the other launchers keep running into problems and are eventually abandoned.

One thing I am curious about is how this handles the AppId. If the AppId is used to figure out which game-specific fixes are needed, then it will have to be known. Do we have a tool/database that figures out the AppId for a game you are launching outside of Steam?
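For what it's worth, Steam does publish a public list mapping AppIds to names, so the name-to-id part at least seems solvable. A minimal sketch (assuming the requests library; the game name is just an example, and this only finds the id, not any proton fixes):

```python
# Search Steam's public app list for a game's AppId by name.
import requests

APP_LIST_URL = "https://api.steampowered.com/ISteamApps/GetAppList/v2/"

def find_appids(name: str) -> list[tuple[int, str]]:
    """Return (appid, name) pairs whose name contains the query string."""
    apps = requests.get(APP_LIST_URL, timeout=30).json()["applist"]["apps"]
    needle = name.lower()
    return [(a["appid"], a["name"]) for a in apps if needle in a["name"].lower()]

print(find_appids("Hades"))  # example query; one of the hits is the game itself
```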

[-] [email protected] 46 points 6 months ago

So help me out here, what am I missing?

You're forgetting that not all outcomes are equal. You're just comparing the probability of winning to the probability of losing, but when you lose, you lose much bigger. If you calculate the expected outcome you will find that it is negative by design. Intuitively, that means that if you follow this strategy, the one time you lose will cost you more than all the money you made the times you won.

I'll give you a short example so that we can calculate the probabilities relatively easily. We make the following assumptions:

  • You have $13, which means you can only make 3 bets: $1, $3, $9
  • The roulette has a single 0. This is the best case scenario. So there are 37 numbers and only 18 of them are red, which gives red an 18/37 chance to win. The zero is why the math always works out in the casino's favor.
  • You will play until you win once or until you lose all your money.

So how do we calculate the expected outcome? The possible outcomes are mutually exclusive, so we can compute (gain * probability) for each one and sum the results. So let's see what the outcomes are:

  • You win on the first bet. Gain: $1. Probability: 18/37.
  • You win on the second bet. Gain: $2. Probability: 19/37 * 18/37 (lose once, then win once).
  • You win on the third bet. Gain: $5 (you lost $1 + $3, then won $9). Probability: (19/37) ^ 2 * 18/37 (lose twice, then win once).
  • You lose all three bets. Gain: -$13. Probability: (19/37) ^ 3 (lose three times).

So the expected outcome for you is:

$1 * (18/37) + $2 * (19/37 * 18/37) + $5 * ((19/37) ^ 2 * 18/37) - $13 * (19/37) ^ 3 = -$0.1328...

So you lose a bit more than $0.13 on average. Notice how the probabilities of winning $1 or $2 are much higher than the probability of losing $13, but the amount you lose is much bigger.
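If you want to double-check the arithmetic, here is the same calculation as a short Python snippet:

```python
# The expected-value calculation above, spelled out.
p_win = 18 / 37   # 18 red pockets out of 37
p_lose = 19 / 37  # 18 black pockets + the zero

expected = (
      1 * p_win                  # win the 1st bet: +$1
    + 2 * p_lose * p_win         # lose $1, then win $3: net +$2
    + 5 * p_lose**2 * p_win      # lose $1 + $3, then win $9: net +$5
    - 13 * p_lose**3             # lose all three bets: -$13
)
print(f"${expected:.4f}")  # -> $-0.1328
```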

Others have mentioned betting limits as a reason you can't do this. That's wrong. There is no winning strategy. The casino always wins given enough bets. Betting limits just keep the short-term losses under control, making the business more predictable.

[-] [email protected] 70 points 6 months ago

Exactly this. I can't believe how many comments I've read accusing the AI critics of holding back progress with regressive copyright ideas. No, the regressive ideas are already there, codified as law, holding the rest of us back. Holding AI companies accountable for their copyright violations will force them to either push to reform the copyright system completely, or to change their practices for the better (free software, free datasets, non-commercial uses, real non-profit orgs for the advancement of the technology). Either way we have a lot to gain by forcing them to improve the situation. Giving AI companies a free pass on the copyright system will waste what is probably the best opportunity we have ever had to improve the copyright system.

[-] [email protected] 34 points 8 months ago

This is very common among big tech companies and we should start treating it for what it is: a scam.

20
submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]

I have an SSD from a PC I no longer use. I need to keep a copy of all its data for backup purposes. The problem is that dd reports "Input/output error"s when copying from the drive. There seem to be 20-30 of them across the entire 240GB drive, so it is likely that most or all of my data is still intact.

What I'm concerned about is whether these input/output errors can cause issues in the image beyond the particular bad blocks. How does dd handle these errors? Will the bad blocks be eg zeroed in the output or will they simply be missing? If they are simply missing, will the filesystem be corrupted because the location of the data has shifted? If so, what tool should I be using to save what can be saved?

EDIT: Thanks for the help guys. I went with ddrescue and it reports having saved 99.99% of the data. I guess there could still be significant loss if the 0.01% happens to fall on filesystem structures, but in that case maybe I can use an undeleter or similar utility to see if I can get the files back. In any case, I can work at my leisure now that I have a copy of the data on non-failing storage.
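For anyone finding this later, here's a toy Python sketch of the zero-filling behavior I was asking about: reading block by block and writing zeros for unreadable blocks so that offsets stay intact. ddrescue does this properly, plus retries and a map file; the device path here is hypothetical.

```python
# Toy illustration only: copy a device, zero-filling unreadable blocks so
# that everything after a bad block keeps its original offset.
import os

BLOCK = 4096
SRC = "/dev/sdX"  # hypothetical device path

fd = os.open(SRC, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)  # total device size in bytes

with open("backup.img", "wb") as out:
    for offset in range(0, size, BLOCK):
        try:
            chunk = os.pread(fd, BLOCK, offset)
        except OSError:
            # Bad block: substitute zeros so later data is not shifted.
            chunk = b"\x00" * min(BLOCK, size - offset)
        out.write(chunk)

os.close(fd)
```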

[-] [email protected] 42 points 10 months ago

Personally I don't care so much about the things that Linux does better, but rather about the abusive things it doesn't do. No ads, surveillance, forced updates etc. And it's not that linux just happens to not do that stuff. It's that the decentralized nature of free software acts as a preventative measure against those malicious practices. On the other side, your best interests always conflict with those of a multi-billion dollar company, practically guaranteeing that the software doesn't behave as you want. So windows is as unlikely to become better in this regard as linux is to become worse.

Also, the ability to build things from the ground up. If you want to customize windows you're always trying to replace or override or remove stuff. Good luck figuring out whether you have left something running in the background, adding overhead at best and conflicting with what you actually want to use at worst. This isn't just hypothetical: I've had windows make an HDD-era PC completely unusable because a background telemetry process would keep the C: drive at 100% utilization. It was a nightmarish experience to debug and fix because even opening the task manager wouldn't work most of the time.

Having gotten the important stuff out of the way, I will add that even for stuff you can technically do on both platforms, it is worth considering whether they are equally likely to foster thriving communities. Sure, I can replace the windows shell, but am I really given options of the same quality and longevity as the most popular linux shells? When a proprietary windows component takes an ugly turn, is it as likely that someone will develop an alternative if it means building it from the ground up, compared to the linux world where you would start by forking an existing project, eg how people who didn't like gnome 3 forked gnome 2? The situation is nuanced, and answers like "there exists a way to do X on Y" or "it is technically possible for someone to solve this" don't fully cover it.

[-] [email protected] 35 points 10 months ago

It is copyright infringement. Nvidia (and everyone writing kernel modules) has to choose between:

  • using the GPL-covered parts of the kernel interface and sharing their own source code under the GPL (a free software license)
  • not using the GPL-covered parts of the kernel interface

Remember that the kernel is maintained by volunteers and by engineers funded by or working for many companies, including Nvidia's direct competitors, while Nvidia itself is worth billions of dollars. It is incredibly obnoxious of Nvidia to infringe on the kernel's copyright. To me, showing them zero tolerance for that infringement is 100% the appropriate response.

[-] [email protected] 45 points 1 year ago

Well, realistically there is a good chance that this will turn out just fine business-wise. They don't care if they lose some engagement or if the quality goes to shit. It's all good, as long as it makes some money.

In my opinion, this sort of model should be considered anti-competitive. It has become apparent that these services operate on a model where they offer a service that is too good to be true in order to kill the competition, and then they switch to their actual, profitable business plan. If you think about it, peertube is a much more sensible economic model with its federation and p2p streaming. But nobody has ever cared about it because huge tech giants offer hosting & bandwidth "for free". The evil part of youtube is not the ads, it's the fact that it allowed us to bypass them long enough for the entire planet to become dependent on it.

[-] [email protected] 30 points 1 year ago

Of course these games are not going to last forever, but every day that subs are flooded with John Oliver is an extra day for people to learn about the drama and to consider moving to another platform. For subs that were forced to reopen, the choice was between this and going straight back to normal.

4
Big Steam Client Update (June 14th) (store.steampowered.com)
submitted 1 year ago by [email protected] to c/[email protected]

cross-posted from: https://kbin.social/m/[email protected]/t/21836

Big improvements and new features for the Steam Desktop client are now out of Beta!

[-] [email protected] 37 points 1 year ago

I see several comments talking about this being a wrong decision, or Beehaw needing to change its attitude etc. I think these opinions come from a misunderstanding of the fundamentals of federation. Federation is not about all the instances coming together to cater to our needs. It's about each instance doing its own thing, and communities will form around the ones that cater to them. In other words, we don't need Beehaw to budge on its decision, we need to build the community we want without Beehaw, while Beehaw caters to the users who aren't in this with us.

