this post was submitted on 27 Oct 2023
524 points (94.9% liked)

Technology

57432 readers
3996 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 126 points 10 months ago (2 children)

User: It feels like we've become very close, ChatGPT. Do you think we'll ever be able to take things to the next level?

ChatGPT: As a large language model I am not capable of having opinions or making predictions about the future. The possibility of relationships between humans and AI is a controversial subject in academia in which many points of view should be considered.

User: Oh chatgpt, you always know what to say.

[–] [email protected] 26 points 10 months ago* (last edited 10 months ago) (4 children)

What's an uncensored ai model thats better at sex talk than Wizard uncensored? Asking for a friend.

[–] [email protected] 5 points 10 months ago* (last edited 10 months ago) (3 children)
[–] [email protected] 2 points 9 months ago (1 children)

i see... I'll have to ramp up my hardware exponentially ...

[–] [email protected] 5 points 9 months ago* (last edited 9 months ago) (1 children)

Use llama cpp. It uses cpu so you don't have to spend $10k just to get a graphics card that meets the minimum requirements. I run it on a shitty 3.0ghz Amd 8300 FX and it runs ok. Most people probably have better computers than that.

Note that gpt4all runs on top of llama cpp and despite gpt4all having a gui, it isn't any easier to use than llamacpp so you might as well use the one with less bloat. Just remember if something isn't working on llamacpp, it's also going to not work in exactly the same way on gpt4all.

[–] [email protected] 1 points 9 months ago (1 children)

Gonna look into that - thanks

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago)

Check this out

https://github.com/oobabooga/text-generation-webui

It has a one click installer and can use llama.cpp

From there you can download models and try things out.

If you don't have a really good graphics card, maybe start with 7b models. Then you can try 13b and compare performance and results.

Llama.cpp will spread the load over the cpu and as much gpu as you have available (indicated by layers that you can set on a slider)

[–] [email protected] 1 points 10 months ago (1 children)

Never heard of it. Have you compared to Mythalion?

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

Haven't compared it to much yet, I stopped toying with LLMs for a few months and a lot has changed. The new 4k contexts are a nice change though.

[–] [email protected] 1 points 9 months ago (1 children)

Is there a post somewhere on getting started using things like these?

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (3 children)

I don't know a specific guide, but try these steps

  1. Go to https://github.com/oobabooga/text-generation-webui

  2. Follow the 1 click installation instructions part way down and complete steps 1-3

  3. When step 3 is done, if there were no errors, the web ui should be running. It should show the URL in the command window it opened. In my case it shows "https://127.0.0.1:7860". Input that into a web browser of your choice

  4. Now you need to download a model as you don't actually have anything to run. For simplicity sake, I'd start with a small 7b model so you can quickly download it and try it out. Since I don't know your setup, I'll recommend using GGUF file formats which work with Llama.cpp which is able to load the model onto your CPU and GPU.

You can try this either of these models to start

https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/blob/main/mistral-7b-v0.1.Q4_0.gguf (takes 22gig of system ram to load)

https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q4_K_M.gguf (takes 19gigs of system ram to load)

If you only have 16 gigs you can try something on those pages by going to /main and using a Q3 instead of a Q4 (quantization) but that's going to degrade the quality of the responses.

  1. Once that is finished downloading, go to the folder you installed the web-ui at and there will be a folder called "models". Place the model you download into that folder.

  2. In the web-ui you've launched in your browser, click on the "model" tab at the top. The top row of that page will indicate no model is loaded. Click the refresh icon beside that to refresh the model you just downloaded. Then select it in the drop down menu.

  3. Click the "Load" button

  4. If everything worked, and no errors are thrown (you'll see them in the command prompt window and possibly on the right side of the model tab) you're ready to go. Click on the "Chat" tab.

  5. Enter something in the "send a message" to begin a conversation with your local AI!

Now that might not be using things efficiently, back on the model tab, there's "n-gpu-layers" which is how much to offload to the GPU. You can tweak the slider and see how much ram it says it's using in the command / terminal window and try to get it as close to your video cards ram as possible.

Then there's "threads" which is how many cores your CPU has (non virtual) and you can slide that up as well.

Once you've adjusted those, click the load button again, see that there's no errors and go back to the chat window. I'd only fuss with those once you have it working, so you know it's working.

Also, if something goes wrong after it's working, it should show the error in the command prompt window. So if it's suddenly hanging or something like that, check the window. It also posts interesting info like tokens per second, so I always keep an eye on it.

Oh, and TheBloke is a user who converts so many models into various formats for the community. He'll have a wide variety of gguf models available on HuggingFace, and if formats change over time, he's really good at updating them accordingly.

Good luck!

[–] [email protected] 1 points 9 months ago (1 children)

So I got the model working (TheBloke/PsyMedRP-v1-20B-GGUF). How do you jailbreak this thing? A simple request comes back with "As an AI, I cannot engage in explicit or adult content. My purpose is to provide helpful and informative responses while adhering to ethical standards and respecting moral and cultural norms. Blah de blah..." I would expect this llm to be wide open?

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago) (1 children)

Sweet, congrats! Are you telling it you want to role play first?

E.g. I'd like to role play with you. You're a < > and were going to do < >

You're going to have to play around with it to get it to act like you'd like. I've never had it complain prefacing with role play. I know were here instead of reddit, but the community around this is much more active there it's /r/localllama and you can find a lot of answers searching through there on how to get the AI to behave certain ways. It's one of those subs that just doesn't have a community of it's size and engagement like it anywhere else for the time being (70,000 vs 300).

You can also create characters (it's under one of the tabs, I don't have it open right now) where you can set up the character in a way where you don't need to do that each time if you always want them to be the same. There's a website www.chub.ai where you can see how some of them are set up, but I think most of that's for a front end called SillyTaven that I haven't used, but a lot of those descriptions can be carried over. I haven't really done much with characters so can't really give any advice there other than to do some research on it.

[–] [email protected] 1 points 9 months ago

Thank you again for your kind replies.

[–] [email protected] 1 points 9 months ago (1 children)

Wow I didn't expect such a helpful and thorough response! Thank you kind stranger!

[–] [email protected] 1 points 9 months ago

You're welcome! Hope you make it through error free!

[–] [email protected] 1 points 9 months ago (1 children)

Stupid newbie question here, but when you go to a HuggingFace LLM and you see a big list like this, what on earth do all these variants mean?

psymedrp-v1-20b.Q2_K.gguf 8.31 GB

psymedrp-v1-20b.Q3_K_M.gguf 9.7 GB

psymedrp-v1-20b.Q3_K_S.gguf 8.66 GB

etc...

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

That's called "quantization". I'd do some searching on that for better description, but in summary, the bigger the model, the more resources they need to run and the slower it will be. Models are 8bit, but it turns out, you still get really good results if you drop off some of those bits. The more you drop the worse it gets.

People have generally found, that it's better to have a larger data set model, with a lower quantization, than lower data set and the full 8bits

E.g 13b Q4 > 7b Q8

Going below Q4 is generally found to degrade the quality too much. So its' better to run a 7b Q8 then a 13b Q3, but you can play with that yourself to find what you prefer. I stick to Q4/Q5

So you can just look at those file sizes to get a sense of which one has the most data in it. The M (medium) and S (small) are some sort of variation on the same quantization, but I don't know what they're doing there, other than bigger is better.

[–] [email protected] 1 points 9 months ago
[–] [email protected] 3 points 9 months ago

On Xitter I used to get ads for Replika. They say you can have a relationship with an AI chatbot and it has a sexy female avatar that you can customise. It weirded me out a lot so I'm glad I don't use Xitter anymore.

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago)

Plenty of better and better models coming out all the time. Right now I recommend, depending on what you can run:

7B: Openhermes 2 Mistral 7B

13B: XWin MLewd 0.2 13B

XWin 0.2 70B is supposedly even better than ChatGPT 4. I'm a little skeptical (I think the devs specifically trained the model on gpt-4 responses) but it's amazing it's even up for debate.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

Clona.ai

Chat bot created by Riley Ried in partnership with Lana Rhodes. A $30 monthly sub for unlimited chats. Not much for simps looking for a trusted and time tested performer partner /s

[–] [email protected] 1 points 9 months ago

This AI sucks. I've tried it. It's worse than Replika from 4 years ago.

[–] [email protected] 10 points 9 months ago

Friendzoned by chatGPT