Has anyone been following the development of 3.6? What are some highlighted features or bugs being addressed?
I think the only thing to keep in mind is that Nvidia's proprietary drivers work better on Linux, whereas for AMD it's the open-source ones.
I have an Nvidia card and the proprietary drivers have worked flawlessly for me for years.
I know the open-source drivers are closing the gap for Nvidia, and Nvidia also seems to be playing ball on that front. But for AMD, the open-source drivers are definitely the way to go from what I understand.
What you are describing at a high level is what o1 does. But where you are mistaken is when you say:
"This thought is not human-interpretable, but it is much more efficient than the pre-output reasoning tokens of o1, which uses human language to fill its own context window with."
What makes those reasoning tokens more efficient? They are just tokens, like all the others, and equally complex/simple to generate. Yes, they allow for more reflection before an output is presented, but the process is the same.
Also, they would all need to fit in the same context window, because otherwise you prevent the model from actually reasoning over them as it iterates on its thoughts.
Streaming my shows on an unstable or low-bandwidth internet connection, like on a train (which is where I really enjoy watching them), is impossible if I am streaming the raw files. I usually watch 480p or 720p on the go but enjoy the 1080p quality when watching at home.
Also, downloading a 1080p file takes significantly longer and takes up much more space than a 480p or 720p one. My phone has no memory card slot, and despite having 128GB of internal storage, space is scarce. For a while I was downloading my episodes in the morning before heading out, but I really needed to luck out for the downloads to finish before I had to catch the train (the native Jellyfin client does not allow downloading the transcoded files). You could argue I should adapt my habits to my means, but I frankly think it should be the other way around, and transcoding solves that for me.
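To put rough numbers on it (the bitrates below are hypothetical ballpark values, not measured from my library): file size is essentially bitrate × duration, so the gap between a 1080p original and a 480p transcode is large:

```python
def episode_size_mb(bitrate_mbps: float, minutes: int) -> float:
    """Approximate file size in MB: megabits/s * seconds / 8 bits per byte."""
    return bitrate_mbps * minutes * 60 / 8

# Hypothetical bitrates for a 40-minute episode:
print(episode_size_mb(8.0, 40))   # 1080p at 8 Mbit/s  -> 2400.0 MB
print(episode_size_mb(1.5, 40))   # 480p at 1.5 Mbit/s ->  450.0 MB
```

At rates like those, a handful of 1080p episodes eats a real chunk of 128GB, while the 480p transcodes fit comfortably and download in a fraction of the time.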
It seems the post does not contain the sentence "before the end of the year [...] which gets framework laptops to all of the EU". What a shame, I was really getting excited about that!
- Direct play only (no transcoding)
The app sounds great, but for me this is a critical missing feature.
It seems AT&T may be interested in looking for alternatives to VMware?
https://xcp-ng.org/blog/2022/10/19/migrate-from-vmware-to-xcp-ng/
This was already quite a significant challenge compared to socketed RAM, but with Lunar Lake I guess it is simply impossible? The RAM chips are on the same package as the CPU...
Same boat: Fedora + KDE, solid experience all around. I love Fedora and am really enjoying KDE, though I have one minor gripe on my laptop with power management, which always seems to kick into max performance when plugged in despite all the tweaks I have tried (TLP, powertop, and the native power management settings).
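For reference, this is the sort of thing I have been trying in /etc/tlp.conf (a sketch, not a recommendation: these are real TLP option names, but which ones actually take effect depends on your TLP version and CPU driver, so check `tlp-stat -p`):

```shell
# /etc/tlp.conf -- AC-side settings to keep the CPU from pinning max performance.
CPU_SCALING_GOVERNOR_ON_AC=powersave            # with intel_pstate, "powersave" still scales up on demand
CPU_ENERGY_PERF_POLICY_ON_AC=balance_performance
CPU_BOOST_ON_AC=0                               # disable turbo boost on AC
PLATFORM_PROFILE_ON_AC=balanced                 # only on laptops exposing platform_profile
```

One caveat I ran into: Fedora ships power-profiles-daemon by default, and it fights with TLP over the same knobs, so one of the two has to be disabled; that conflict alone could explain settings not sticking on AC.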
I think unless they have demonstrated bad faith in the past, we should still give them the benefit of the doubt that this was an honest mistake. It does raise some other concerns, though: what was the internal process for green-lighting this, given they had worked with Jeff in the past?
You only mention that your laptop is running out of space, so you need to get a new computer? Does your laptop have a soldered SSD? If not, I think the first reflex should be to see what storage you can get for your laptop so that you can keep using it rather than discarding it :(
It's impressive how this initiative has gathered 360k votes in no time yet completely stagnated ever since. I don't think it means that the initiative is done, actually I am quite optimistic it can pass, but these things go in waves. I think every time there is a new coverage on the initiative it launches it higher, we just need some folks who have some local presence in the EU to get their audience to act. Perhaps some famous video game streamers? Has that been done already?