Mixel

joined 1 year ago
[–] [email protected] 1 points 3 months ago

That was my idea too, and it looks exactly like I imagined it 😂

[–] [email protected] 2 points 3 months ago

I need all of these! I already have the ones for data structures and Agile, but this is also golden!

[–] [email protected] 14 points 3 months ago (1 children)

They also created Ghidra! Probably the second best

[–] [email protected] 8 points 3 months ago (2 children)

Training a diffusion model so that it only outputs one image with slight differences is, I think, not possible. You could do image-to-image and fix the seed, so you would get a consistent result, and then pick the nearest result that is a nearly identical copy.
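The fixed-seed idea above relies on the sampler being deterministic once its RNG state is pinned. A minimal sketch of that principle, using Python's stdlib `random` as a stand-in for a diffusion sampler's noise source (the actual model and scheduler are not specified in the comment):

```python
import random

def sample_noise(seed: int, n: int = 4) -> list:
    # With the same seed, the "noise" draw is identical on every run --
    # the same property that makes img2img with a fixed seed reproducible.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = sample_noise(seed=42)
b = sample_noise(seed=42)
c = sample_noise(seed=43)

print(a == b)  # identical seed -> identical sample
print(a == c)  # different seed -> different sample
```

In a real img2img pipeline, pinning the generator seed plays the role of `seed` here: the same input image plus the same seed yields the same output.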

[–] [email protected] 9 points 3 months ago

Sometimes my brain plays a little trick on me, and I'm not sure if I like it or not... I just didn't notice this at all

[–] [email protected] 2 points 3 months ago

Perfect, that looks exactly like what I'm searching for, thank you!

[–] [email protected] 4 points 3 months ago (2 children)

Is there a way to migrate all my stuff, or at least all my "saved" things? Because that's where I store useful stuff :D

[–] [email protected] 5 points 3 months ago

I would argue that, as long as you're careful not to get any malware, KeePassXC is a lot more secure and comfortable to use than typing out the passwords one by one again. Or, in general, your own Vaultwarden server.

[–] [email protected] 11 points 4 months ago (1 children)

Ollama is a big thing. Do you want it to be fast? Then you will need a GPU. How large is the model you will be running? 7B/8B on CPU is not as fast, but no problem; 13B on CPU is slow, but possible.
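A rough way to judge whether a model is comfortable on given hardware is parameter count times bytes per weight. A back-of-envelope sketch (the quantization levels shown are common choices, not Ollama specifics, and the estimate ignores KV-cache and runtime overhead):

```python
def approx_model_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory for the weights alone, in GB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# The sizes from the comment above: 7B/8B and 13B models.
for size in (7, 8, 13):
    for bits in (4, 8, 16):  # e.g. 4-bit quantized, 8-bit, fp16
        print(f"{size}B @ {bits}-bit ~ {approx_model_gb(size, bits):.1f} GB")
```

By this estimate a 4-bit 7B model is around 3.5 GB of weights, which is why it runs acceptably even on CPU, while fp16 13B is around 26 GB and really wants a GPU.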

[–] [email protected] 5 points 4 months ago

I would take that any day!

[–] [email protected] 5 points 4 months ago

Sir, you just made my day, thank you!

[–] [email protected] 1 points 4 months ago

They probably also run OCR on that and then let something else run over it to see if the text makes sense (basically letting another AI grade the output, which is commonly done to judge what's a good dataset and what isn't), and then feed it back into the AI. Today there is a shortage of data, since the internet is too small (yes, I know it sounds crazy), so I wouldn't be surprised if they actually tried to use pictures and OCR to gather a bit more usable data.

 
44
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]
 

Hey, I recently saw a comment on where to buy server hardware (mainly hard drives); sadly, I forgot the URL and didn't save it. What are your favourite places to buy these kinds of things? :D

EDIT: I should've said that the EU would be heavily preferable, since I live in Germany. Sorry for the confusion :/

6
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Edit: I still have to learn to look at the GitHub issues first... Here is the issue that I think describes my problem: https://github.com/dessalines/jerboa/issues/928 Apparently the crash is caused by the instance's version; apparently we should stay on Jerboa version 0.0.36 for now.

Since version 0.0.36 I get a message on every app start that the instance is outdated, which in principle isn't a problem. However, I just tried version 0.0.37, which crashes immediately for me. Am I the only one? Is this due to the instance, or to Jerboa in general? For now I've downgraded back to version 0.0.36 and will see whether any fixes are introduced with version 0.0.38, or whether this instance needs to be updated. If, as I said, this is my own fault, please tell me :)

 

So what is currently the best and easiest way to use an AMD GPU? For reference, I own an RX 6700 XT and wanted to run a 13B model, maybe SuperHOT, but I'm not sure if my VRAM is enough for that. Until now I've always stuck with llama.cpp, since it's quite easy to set up. Does anyone have any suggestions?
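Whether a 13B model fits the RX 6700 XT's 12 GB of VRAM depends mostly on the quantization. A quick sanity check (weight-only estimate; the 1.5 GB allowance for KV-cache and context overhead is a guess, not a measurement):

```python
VRAM_GB = 12.0  # RX 6700 XT

def fits(params_billion: float, bits: int, overhead_gb: float = 1.5) -> bool:
    # Weights-only size plus a rough overhead allowance vs. available VRAM.
    weights_gb = params_billion * 1e9 * bits / 8 / 1e9
    return weights_gb + overhead_gb <= VRAM_GB

for bits in (4, 5, 8):
    print(f"13B at {bits}-bit quantization fits in 12 GB: {fits(13, bits)}")
```

By this estimate, a 4-bit or 5-bit quantized 13B model should fit comfortably (roughly 6.5-8 GB of weights), while an 8-bit one would not; llama.cpp's quantized GGUF formats are in that 4-5 bit range.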
