If you're good enough at writing to communicate all the information you need to something that's more different from you than any other human is, why do you feel like you aren't the best at writing?
planish
The pita fix only works if you can dig up a CD drive to put it in though. Most people don't have one and are SOL.
That's what the BSOD is. It tries to bring the system back to a nice safe freshly-booted state where e.g. the fans are running and the GPU is not happily drawing several kilowatts and trying to catch fire.
Foreign to who?
I remember it as: Firefox was fast enough, but Chrome was shipping a weirdly quick JS engine and trying to convince people to put more stuff into JS, because on Chrome that would be feasible. Nowadays if you go out without your turbo-JIT hand-optimized JS engine everyone laughs at you, and it's Chrome's fault.
It shouldn't be hard to implement the APIs; the problem would be sourcing the models to sit behind them. You can't just steal them off Windows or you will presumably have Copyright Problems. I guess you could try to train clones on Windows against the Windows models' results?
KDE and Gnome haven't been stable or usable for the past 20 years, but will become so this year for some reason?
So Copilot Runtime is... Windows bundling a bunch of models like an OCR model and an image generation model, and then giving your program an API to call them.
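Very roughly, the shape of the thing is something like this sketch. None of these names are real Copilot Runtime or Windows App SDK identifiers; they're made up to illustrate "OS-bundled models behind an API your program can call":

```python
# Hypothetical sketch only: illustrative names, not the real Windows APIs.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class TextRegion:
    text: str
    bounding_box: tuple[int, int, int, int]  # x, y, width, height


class OcrModel(Protocol):
    def recognize(self, image_bytes: bytes) -> list[TextRegion]:
        """Run the OS-bundled OCR model on an image."""
        ...


class ImageGenerator(Protocol):
    def generate(self, prompt: str, width: int, height: int) -> bytes:
        """Ask the OS-bundled image generation model for an image."""
        ...


def describe_screenshot(ocr: OcrModel, screenshot: bytes) -> str:
    # Your program never ships or loads the model itself; it just calls
    # whatever implementation the OS hands it.
    regions = ocr.recognize(screenshot)
    return " ".join(r.text for r in regions)
```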
Do they "give high rankings" to CloudFlare sites because they just boost whoever is behind CloudFlare, or because those sites happen to be good search hits, maybe ones that load quickly, and they don't go in and penalize them for... telling CloudFlare that you would like them to send you the page when you go to the site?
Counting the number of times different result links are clicked is expected search engine behavior. Recording which search strings are sent from results pages for which other search strings is also probably fine: because of the way forms and referrers work (the URL of the page you searched from has the old query in it), the page's query will be sent in the Referer by all browsers by default, even if the site neither wanted it nor intends to record it. Recording what text is highlighted is weird, but probably not a genuine threat.
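For the referrer part specifically, the mechanism is just a GET form putting the query into the results page URL, which the browser then sends as the Referer on the next request. A toy illustration with made-up URLs and parameter names:

```python
# Why the old query leaks by default: a search form submitted via GET puts the
# query in the results page URL, and the browser sends that URL as the Referer
# when you click a result or search again. Domain and parameter are placeholders.
from urllib.parse import urlencode

old_query = "how to import a zfs pool"
results_page = "https://search.example/search?" + urlencode({"q": old_query})

# A default browser request made from that results page looks roughly like
# this -- the old query rides along in the Referer header:
new_query = "zpool import command"
request_lines = [
    "GET /search?" + urlencode({"q": new_query}) + " HTTP/1.1",
    "Host: search.example",
    f"Referer: {results_page}",
]
print("\n".join(request_lines))
```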
The remote favicon fetch design in their browser app was fixed like 4 years ago.
The "accusation" of "fingerprinting" was along the lines of "their site called a canvas function oh no". It's not "fingerprinting" every time someone tries to use a canvas tag.
What exactly is "all data available in my session" when I click on an ad? Is it basically the stuff a site I go to can see anyway? Sounds like it's nothing exciting or some exciting pieces of data would be listed.
This analysis misses the important point that none of this stuff is getting cross-linked to user identities or profiles. The problem with Google isn't that they examine how their search results pages are interacted with in general or that they count Linux users, it's that they keep a log of what everyone individually is searching, specifically. Not doing that sounds "anonymous" to me, even if it isn't Tor-strength anonymity that's resistant to wiretaps.
There's an important difference between "we're trying to not do surveillance capitalism but as a centralized service data still comes to our servers to actually do the service, and we don't boycott all of CloudFlare, AWS, Microsoft, Verizon, and Yahoo", as opposed to "we're building shadow profiles of everyone for us and our 1,437 partners". And I feel like you shouldn't take privacy advice from someone who hosts it unencrypted.
It sounds like nobody actually understood what you want.
You have a non-ZFS boot drive, and a big ZFS pool, and you want to save an image of the boot drive to the pool, as a backup for the boot drive.
I guess you don't want to image the drive while booted off it, because that could produce an image that isn't fully self-consistent. So then the problem is getting at the pool from something other than the system you have.
I think what you need to do is find something else you can boot that supports ZFS. I think the Ubuntu live images will do it. If not, you can try something like re-installing the setup you have, but onto a USB drive.
Then you have to boot to that and zpool import your pool. ZFS is pretty smart so it should just auto-detect the pool structure and where it wants to be mounted, and you can mount it. Don't do a ZFS feature upgrade on the pool though, or the other system might not understand it. It's also possible your live kernel might not have a new enough ZFS to understand the features your pool uses, and you might need to find a newer one.
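Roughly this, as a sketch; the pool name and mountpoint are placeholders for whatever you actually have, and it needs to run as root in the live environment:

```python
#!/usr/bin/env python3
# Sketch of the import-and-mount step from a ZFS-capable live environment.
# "tank" and the altroot path are placeholders.
import subprocess

POOL = "tank"          # placeholder: your pool's name
ALTROOT = "/mnt/tank"  # placeholder: temporary mountpoint for the live session

# `zpool import` with no arguments just lists pools it can see, so you can
# confirm the name first.
subprocess.run(["zpool", "import"])

# Import under an altroot so the pool's own mountpoints land under /mnt/tank
# instead of stomping on the live system's filesystem; datasets with normal
# mountpoints should come up automatically under there.
subprocess.run(["zpool", "import", "-R", ALTROOT, POOL], check=True)

# Deliberately NOT running `zpool upgrade` here -- that would enable features
# the original system's ZFS might not understand.
```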
Then once the pool is mounted you should be able to dd your boot drive block device to a file on the pool.
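The imaging step itself would look something like this; the device and destination path are placeholders, and double-check the device name before running anything of the sort:

```python
#!/usr/bin/env python3
# Sketch of imaging the (not currently booted) boot drive into a file on the
# imported pool. /dev/sda and the destination path are placeholders -- dd will
# happily read or write the wrong disk if you point it at one.
import subprocess

BOOT_DEVICE = "/dev/sda"                        # placeholder: the boot drive
DEST_IMAGE = "/mnt/tank/backups/bootdrive.img"  # placeholder: file on the pool

subprocess.run(
    [
        "dd",
        f"if={BOOT_DEVICE}",
        f"of={DEST_IMAGE}",
        "bs=1M",
        "conv=fsync",
        "status=progress",
    ],
    check=True,
)
```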
If you can't get this to work, you can try using a non-ZFS-speaking live Linux and dd-ing your image to somewhere on the network big enough to hold it, which you may or may not have, and then booting the system and copying back from there to the pool.
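That detour could look something like this, with a made-up hostname and paths:

```python
#!/usr/bin/env python3
# Sketch of the fallback: from a live system with no ZFS support, stream the
# image over SSH to some machine with enough space, then copy it back onto the
# pool later from the normally-booted system. Hostname and paths are placeholders.
import subprocess

BOOT_DEVICE = "/dev/sda"                  # placeholder: the boot drive
REMOTE = "user@bighost.example"           # placeholder: box with enough space
REMOTE_PATH = "/srv/scratch/bootdrive.img"

# Equivalent of: dd if=/dev/sda bs=1M | ssh user@host 'cat > /srv/.../bootdrive.img'
dd = subprocess.Popen(
    ["dd", f"if={BOOT_DEVICE}", "bs=1M", "status=progress"],
    stdout=subprocess.PIPE,
)
subprocess.run(["ssh", REMOTE, f"cat > {REMOTE_PATH}"], stdin=dd.stdout, check=True)
dd.stdout.close()
dd.wait()

# Later, from the booted system with the pool mounted:
#   scp user@bighost.example:/srv/scratch/bootdrive.img /tank/backups/
```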
Looks like it has 32B in the name, so you need enough RAM to hold 32 billion weights plus activations (the current values for the layer being run right now, which I think should be less than a gigabyte). The weights are probably 16-bit floats to start with, so something like 64 gigabytes, but if you start quantizing to cram more weights into fewer bits, you can go down to around 4 bits per weight, or more like 16 gigabytes of memory to run (a slightly worse version of) the model.
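Back-of-the-envelope, the arithmetic works out like this:

```python
# Rough memory math for a 32B-parameter model at different weight widths.
# Activations are ignored since they should be well under a gigabyte.
PARAMS = 32e9

for bits_per_weight in (16, 8, 4):
    gigabytes = PARAMS * bits_per_weight / 8 / 1e9
    print(f"{bits_per_weight:>2} bits/weight -> ~{gigabytes:.0f} GB of weights")

# 16 bits/weight -> ~64 GB of weights
#  8 bits/weight -> ~32 GB of weights
#  4 bits/weight -> ~16 GB of weights
```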