RandomlyRight

joined 2 years ago
[–] RandomlyRight 2 points 1 day ago

Super cool! I'd be interested in how to fit this to my head shape too. It's now on my list of contenders for the concert.

[–] RandomlyRight 1 points 1 day ago

Did anyone get this to run?

[–] RandomlyRight 3 points 5 days ago (2 children)

Oof I’m sorry, sounds super bad. It’s interesting because I think the frontal lobe is exactly what would make someone overthink stuff or worry too much. So, I’m still considering it ;)

[–] RandomlyRight 10 points 6 days ago (4 children)

Amazing, can you share where exactly I need to bonk my head for this?

[–] RandomlyRight 2 points 1 week ago

I've wanted to set this up for a while now. Guess it's time.

 
[–] RandomlyRight 1 points 1 month ago (1 children)

I've read about this method in the GitHub issues, but to me it seemed impractical to have different models just to change the context size, and that was the point where I started looking for alternatives.
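For context, the method from those issues (as I understood it) is to bake the context size into a derived model via a Modelfile. A sketch, with a placeholder base model tag and context size:

```
# Hypothetical Modelfile: derive a 16k-context variant of a base model
FROM llama3
PARAMETER num_ctx 16384
```

You'd register it with something like `ollama create llama3-16k -f Modelfile`, which is exactly the part that felt impractical to me: one extra model per context size you want.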

6
100% HE Aluminium Keyboard? (self.mechanicalkeyboards)
 

I've been scouring the web to find a very specific config for a mechanical keyboard. It should be full size, have HE switches, and have a silver aluminum case. However, the only one I've found is the GMMK 3 Pro when you custom-order it, but it's very expensive at 470€ without any switches or keycaps.

Building one myself would definitely be an option, but I'm not sure there even are any 100% HE PCBs, and it seems the case would have to be custom-CNC'd because those don't exist either.

Any pointers would be appreciated!

[–] RandomlyRight 1 points 1 month ago (3 children)

It was multiple models, mainly 32-70B

[–] RandomlyRight 4 points 1 month ago

There are many projects out there that optimize the speed significantly. Ollama is unbeaten in convenience, though.

[–] RandomlyRight 3 points 1 month ago (5 children)

Yeah, but there are many open issues on GitHub related to these settings not working right. I'm using the API, and I just couldn't get it to work. I used a request to generate a JSON file, and it never generated one longer than about 500 lines. With the same model on vLLM, it worked instantly and generated about 2000 lines.
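For reference, this is roughly the shape of the request I was sending. A sketch only: the model tag and option values are placeholders, not the exact ones from my runs.

```python
import json

def build_ollama_request(prompt: str, num_ctx: int = 8192, num_predict: int = 4096) -> str:
    """Build a JSON body for Ollama's /api/generate endpoint.

    "num_ctx" sets the context window and "num_predict" caps the number of
    generated tokens -- the two options that, in my testing, didn't seem to
    take effect when set over the API.
    """
    payload = {
        "model": "llama3:70b",   # placeholder model tag
        "prompt": prompt,
        "format": "json",        # ask the model to emit JSON
        "stream": False,
        "options": {
            "num_ctx": num_ctx,
            "num_predict": num_predict,
        },
    }
    return json.dumps(payload)

body = build_ollama_request("Generate a large JSON config file.")
```

POSTing that body to `http://localhost:11434/api/generate` is where the output kept getting cut off around 500 lines for me, regardless of `num_predict`.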

 

I'm currently shopping around for something a bit faster than Ollama, and also because I could not get it to use a different context and output length, which seems to be a known and long-ignored issue. Somehow everything I've tried so far has missed one or more critical features, like:

  • "Hot" model replacement, so loading and unloading models on demand
  • Function calling
  • Support of most models
  • OpenAI API compatibility (to work well with Open WebUI)

I'd be happy about any recommendations!
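Concretely, by OpenAI API compatibility plus function calling I mean the server should accept a `/v1/chat/completions` body like the one below. A sketch with placeholder names: the base URL, model tag, and tool definition are made up for illustration.

```python
import json

# e.g. a local vLLM or similar OpenAI-compatible server
BASE_URL = "http://localhost:8000/v1"

def chat_request(messages: list, model: str = "local-model") -> str:
    """Build a /v1/chat/completions body (the endpoint Open WebUI speaks),
    including a "tools" entry to exercise function calling."""
    return json.dumps({
        "model": model,
        "messages": messages,
        "max_tokens": 2048,
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                },
            },
        }],
    })

body = chat_request([{"role": "user", "content": "What's the weather in Berlin?"}])
```

If a backend handles that request shape (and can load/unload models on demand), it checks every box on my list.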

[–] RandomlyRight 10 points 2 months ago

Nothing beats a slice of bread with salami.

[–] RandomlyRight 85 points 2 months ago (8 children)

Yo I think we Path of Exile gamers made it pretty clear he is not one of us

[–] RandomlyRight 2 points 2 months ago (1 children)

Take a look at NVIDIA Project Digits. It's supposed to release in May for $3,000 and will be kind of the only sensible way to host LLMs then:

https://www.nvidia.com/en-us/project-digits/

138
submitted 6 months ago* (last edited 6 months ago) by RandomlyRight to c/[email protected]
 

Finally, the ultimate weapon against boredom while waiting

66
Placebo smile (sh.itjust.works)
submitted 7 months ago* (last edited 7 months ago) by RandomlyRight to c/[email protected]
 

I don’t know why, but somehow these two words summed up 50% of my life

328
I'm sorry, little one (sh.itjust.works)
 
 

For me, it’s how much better I can do things I thought I was already fine at. Like engaging in conversations, handling complex logic, or just consciously relaxing.

160
ich🌍iel (sh.itjust.works)
submitted 2 years ago* (last edited 2 years ago) by RandomlyRight to c/[email protected]
 
 
 
 