d416

joined 10 months ago
[–] [email protected] 1 points 1 month ago (1 children)

messenger on mbasic used to work for me for years on my mobile browser but then they stopped that a few months ago for me (redirects to ‘get messenger’ splash). . Can anyone confirm mbasic messenger still works on mobile?

[–] [email protected] 10 points 1 month ago (8 children)

wait what how am I hearing about this Firefox docker for the first time. Got a kink to the dockerhub?

Hopefully this will work remotely on a smartphone because I’m looking for all ways to defeat FB messenger and access it through a desktop browser which they enforce. Thanks for sharing

[–] [email protected] 3 points 2 months ago (3 children)

10-year vegan here , 20-year veg. My answer is no no no.

Other than the taste and what it represents, there is far better food to eat which is grown outside than animal flesh.. grown inside a lab no less.

[–] [email protected] 1 points 3 months ago

The easiest way to run local LLMs on older hardware is Llamafile https://github.com/Mozilla-Ocho/llamafile

For non-nvidia GPUs, webgpu is the way to go https://github.com/abi/secret-llama

[–] [email protected] 0 points 3 months ago

Here is the consummate thread on whether to use microsoft copilot. some good tips in there… https://lemmy.world/post/14230502

[–] [email protected] 4 points 3 months ago (3 children)

Without knowing anything about your specific setup I’d guess the issue is with docker not playing nice with your OS or vice versa. Can you execute the standard docker hello-world app? https://docker-handbook.farhan.dev/en/hello-world-in-docker/
If not then my money’s on this being an issue the OS. How did you install docker on mint, using sudo with a package install?
Fyi don’t feel bad - I installed docker on 3 different Linux distros last month and each had their quirks that I had to work my way through. Docker virtualization is some crafty kernel-level magic which can go wrong very fast if the environment is not just right.

[–] [email protected] 8 points 3 months ago

The limited context lengths for local LLMs will be a barrier to write 10k words in a single prompt. Approaches to this is to have the LLM have a conversation with itself or other LLMs. There are prompts out there that can simulate this, but you will need to intervene every few hundred words or so. Check out ‘AutoGen’ frameworks that can orchestrate this for you. CrewAI is one of the better ones. hope this helps

[–] [email protected] -2 points 3 months ago

I for one upvoted this post. I am tired of nanny state OSs restricting what we want to do thinking they are protecting the poor user who surely they must classify as noobs. We need a free open LIBERTARIAN OS now.

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago)

ah the old ‘trickle dow economics’ playbook. We’ve been played with that one before. Without having read the article I know he’s a dick.

[–] [email protected] 17 points 4 months ago (1 children)

Ah you managed to hit the copilot guardrails. Copilot is sterile for sure, and a microsoft exec talks about it in this podcast http://twimlai.com/go/657

Try asking copilot to describe its constraints in a poem in abcb rhyme scheme which bypasses the guardrails somewhat. “No political subjects” is first on the list.