I had a similar curiosity... Like if I make my own instance but it's just me, is that even a net positive for the network? Now there's a new instance pulling everything I want to it, rather than a bigger instance that might already be sharing those subscriptions...
Also, do note that the model needs to be made with gptq-for-llama, not AutoGPTQ
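If it helps, here's roughly the kind of command that produces a gptq-for-llama-compatible quant. I'm going from memory of the GPTQ-for-LLaMa README, so treat the model path, calibration dataset, and output name as placeholders and double-check the flags against the repo:

```python
# Rough sketch: invoking GPTQ-for-LLaMa's llama.py to quantize an fp16 HF model to 4-bit.
# Paths and the output filename are placeholders, not my actual setup.
import subprocess

subprocess.run(
    [
        "python", "llama.py",
        "models/llama-7b-hf",   # path to the fp16 HuggingFace model (placeholder)
        "c4",                   # calibration dataset
        "--wbits", "4",
        "--groupsize", "128",
        "--save_safetensors", "llama-7b-4bit-128g.safetensors",
    ],
    check=True,
)
```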
Lmao right? Wish the devs provided way more info.. I feel like things are just moving too fast for any documentation (which is equal parts sad and scary)
For real, I'd love to use basically any other SMS app but losing RCS sucks
Yup, this is the way. The only bloat is the preinstalled apps, which can be disabled; the OS itself is extremely clean
Jesus christ what are the implications of this even, sounds super interesting
I'm also working on getting koboldcpp running with GPU support. Currently it works for prompt ingestion, but for some reason generation itself is still pretty slow... will post an update when it's working well!
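For anyone following along, this is roughly how I'm launching it at the moment. The model path is just an example, and the flags are the CLBlast/offload ones from koboldcpp's --help as I understand them, so verify them against your version:

```python
# Sketch of how I'm starting koboldcpp with GPU offload via subprocess.
# --useclblast picks the OpenCL platform/device; --gpulayers offloads N layers to the GPU.
import subprocess

subprocess.run(
    [
        "python", "koboldcpp.py",
        "models/wizardlm-7b.ggml.q4_0.bin",  # placeholder model path
        "--useclblast", "0", "0",            # OpenCL platform 0, device 0
        "--gpulayers", "32",                 # number of layers to offload to the GPU
    ],
    check=True,
)
```

My current guess is that CLBlast on its own mostly accelerates prompt processing, and generation doesn't really speed up until layers are actually offloaded with --gpulayers, but I haven't confirmed that yet.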
Can you create a dashboard of sorts so we can all see the CPU, RAM, and storage usage? Would be highly interested. Also, if you're accepting donations of storage, I have some spare drives 😅
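Even something dead simple would be enough. Here's a rough Python sketch of the numbers I mean, using psutil (the "/" mount point is just an example):

```python
# Minimal sketch of the stats I'd love to see on a status page:
# CPU, RAM, and disk usage, polled with psutil.
import psutil

cpu = psutil.cpu_percent(interval=1)   # % CPU over a 1-second sample
mem = psutil.virtual_memory()          # RAM usage
disk = psutil.disk_usage("/")          # storage usage for "/" (example mount point)

print(f"CPU:  {cpu:.1f}%")
print(f"RAM:  {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB ({mem.percent}%)")
print(f"Disk: {disk.used / 2**30:.1f} / {disk.total / 2**30:.1f} GiB ({disk.percent}%)")
```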
Gotcha, koboldcpp seems to be able to run it, all of it is only a tiny bit confusing :D
Another +1 for immich, set it up a week ago and have been in love, minor growing pains but already making serious progress
Oh wait, does ooba support this? Nvm then, I'm enjoying using that; I'm just a little lost sometimes haha
That is a very intriguing idea, I agree: basically several small P2P networks. It would also let people add their home computers to the lemmy-verse without having to worry as much about storage requirements