Also, note that the model needs to be made with GPTQ-for-LLaMa, not AutoGPTQ.
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
Hi there. I'd love to try your Docker images. I just pulled your latest image on my M2 MacBook Pro by typing the following in the terminal: docker pull noneabove1182/text-gen-ui-gpu:latest
But I don't know what to do next to launch it. Sorry for the basic question, but how do I run it? Thanks!
Yeah, no problem! First issue, however, is that Apple Silicon plays kinda funny with this kind of setup, so I may need to make you a custom image. Otherwise you should have no problem running the -cpu build.
As for running the image itself, you can either run it from the command line with docker run, or you can make yourself a docker compose file.
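For the docker run route, something like this should get you started — note the port and the volume path here are my guesses for illustration, not taken from the image's docs, so adjust them to your setup:

```sh
# Minimal sketch, assuming the web UI listens on 7860 and models
# live under /app/models in the container — both are assumptions.
docker run -d --name text-gen-ui \
  -p 7860:7860 \
  -v "$(pwd)/models:/app/models" \
  noneabove1182/text-gen-ui-cpu:latest
```

Once it's up, the UI should be reachable in your browser at the host port you published (here, http://localhost:7860).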
I personally tend to go with the latter, and for that you can copy my docker-compose.yml file from here: https://hub.docker.com/r/noneabove1182/text-gen-ui-cpu
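For anyone who wants the general shape before grabbing the real file, a minimal compose sketch might look like this — the service name, port, and volume mapping are assumptions on my part; the docker-compose.yml on the Docker Hub page is the authoritative version:

```yaml
# Sketch only — port and paths are assumed; prefer the
# docker-compose.yml from the Docker Hub page linked above.
services:
  text-gen-ui:
    image: noneabove1182/text-gen-ui-cpu:latest
    ports:
      - "7860:7860"            # web UI (assumed default port)
    volumes:
      - ./models:/app/models   # local models folder (assumed path)
```

With that saved as docker-compose.yml, `docker compose up -d` starts the container in the background and `docker compose logs -f` tails its output.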
I'll work on making a mac-specific image and you can test it for me ;)
Thank you so much! I’d be happy to test it out.