31337

joined 2 years ago
[–] 31337 2 points 6 months ago (1 children)

I've used this before: https://github.com/wilicc/gpu-burn?tab=readme-ov-file

Yeah, it may be a driver issue; NVIDIA/PyTorch handles OOM gracefully on my system.
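
For what it's worth, this is roughly what "graceful" looks like on my end (a minimal sketch, assuming a reasonably recent PyTorch; older versions raise a plain RuntimeError instead of the dedicated exception):

```python
import torch

try:
    # Deliberately request far more VRAM than any consumer card has
    # (2^18 x 2^18 fp32 values is about 256 GiB).
    x = torch.empty((1 << 18, 1 << 18), device="cuda")
except torch.cuda.OutOfMemoryError as err:
    # The process keeps running; you can free cached blocks and retry
    # with a smaller allocation instead of the whole thing crashing.
    print(f"caught OOM: {err}")
    torch.cuda.empty_cache()
```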

[–] 31337 1 points 6 months ago (3 children)

That seems strange. Perhaps you should stress-test your GPU/system to see if it's a hardware problem.

[–] 31337 3 points 6 months ago (1 children)

SD works fine for me with: Driver Version: 525.147.05, CUDA Version: 12.0

I use this docker container: https://github.com/AbdBarho/stable-diffusion-webui-docker

You will also need to install the nvidia container toolkit if you use docker containers: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
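
Once the toolkit is installed, a quick sanity check that a container actually sees the GPU (assuming PyTorch is available inside it) is something like:

```python
import torch

# Should report True plus your card and CUDA runtime version if the
# NVIDIA container toolkit is passing the GPU through correctly.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA runtime:", torch.version.cuda)
```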

[–] 31337 6 points 6 months ago

Laissez-faire economics is a foundational component of liberalism (well, classical liberalism anyway, which I assume is what he means when using that word).

[–] 31337 3 points 6 months ago

Yeah, torrents usually run at 100-300 KiB/s. I guess that's not too bad for smaller files; it works out to roughly one to three hours per GB.
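
Rough math, in case anyone wants to check the "one to three hours":

```python
# Time to transfer 1 GiB at the speeds above.
GiB = 1024 ** 3
for rate_kib_s in (100, 300):
    hours = GiB / (rate_kib_s * 1024) / 3600
    print(f"{rate_kib_s} KiB/s -> {hours:.1f} h per GiB")
# 100 KiB/s -> ~2.9 h, 300 KiB/s -> ~1.0 h
```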

[–] 31337 6 points 6 months ago

I mean, you can be sued for anything, but it would get thrown out. Like, I guess the MPAA could offer a movie for download, then try to sue the first hop they upload a chunk to, but that really doesn't make any sense (because they offered it for download in the first place). Furthermore, the first hop(s) aren't the people who are using the file, and they can't even read it. If people could successfully sue nodes, then ISPs and postal services could be sued over anything that passes through their networks.

[–] 31337 2 points 6 months ago

I think similar, and arguably more fine-grained, things can be done with TypeScript, traditional OOP (interfaces, and maybe the Facade pattern), and perhaps dependency injection.
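
To illustrate what I mean, here's a hypothetical sketch (written in Python for brevity, but the structural-typing idea maps directly onto a TypeScript interface): callers depend on a narrow interface, and the concrete implementation is injected from outside. All the class names are made up for the example.

```python
from typing import Protocol

class Mailer(Protocol):
    """The narrow interface callers depend on (like a TS interface)."""
    def send(self, to: str, body: str) -> None: ...

class SmtpMailer:
    def send(self, to: str, body: str) -> None:
        print(f"SMTP -> {to}: {body}")

class ConsoleMailer:
    """A drop-in test double; no real mail is sent."""
    def send(self, to: str, body: str) -> None:
        print(f"[dry run] {to}: {body}")

class SignupService:
    def __init__(self, mailer: Mailer) -> None:
        # Dependency injection: the caller decides which implementation
        # this service is allowed to use.
        self._mailer = mailer

    def register(self, email: str) -> None:
        self._mailer.send(email, "Welcome!")

SignupService(SmtpMailer()).register("a@example.com")
SignupService(ConsoleMailer()).register("b@example.com")
```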

[–] 31337 6 points 6 months ago (13 children)

Onion-like routing. It takes multiple hops to get to a destination, and each hop can only decrypt the address of the next hop to forward the packet to (i.e., peeling off one layer of the onion).
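
A toy sketch of the layering idea (hypothetical relay names, using the `cryptography` package's Fernet for the encryption; the real I2P/Tor framing is more involved):

```python
import json
from cryptography.fernet import Fernet

route = ["relay-a", "relay-b", "relay-c"]                  # hypothetical relays
keys = {r: Fernet(Fernet.generate_key()) for r in route}  # one key per hop

# Build the onion from the inside out: only the last relay's layer
# contains the real destination and payload.
packet = keys["relay-c"].encrypt(
    json.dumps({"next": "destination.example", "data": "hello"}).encode()
)
for relay, nxt in [("relay-b", "relay-c"), ("relay-a", "relay-b")]:
    packet = keys[relay].encrypt(
        json.dumps({"next": nxt, "blob": packet.decode()}).encode()
    )

# Each relay peels exactly one layer: it learns where to forward the
# packet, but the inner blob is still ciphertext it has no key for.
for relay in route:
    layer = json.loads(keys[relay].decrypt(packet))
    print(relay, "forwards to", layer["next"])
    packet = (layer.get("blob") or layer["data"]).encode()

print("payload visible only past the last hop:", packet.decode())
```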

[–] 31337 5 points 6 months ago (1 children)

I thought the tuning procedures, such as RLHF, kind of mess up the probabilities, so you can't really tell how confident the model is in its output (and I'm not sure how well calibrated those probabilities were in the first place)?
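
The underlying numbers do exist, at least: for open models you can read off the softmax probability assigned to each generated token, which is what "confidence" usually refers to. A minimal sketch with Hugging Face transformers and GPT-2 (chosen only because it's small and open; whether those probabilities are well calibrated, especially after RLHF, is exactly the question):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=5,
        do_sample=False,              # greedy, so probabilities are comparable
        output_scores=True,
        return_dict_in_generate=True,
    )

# out.scores holds the logits for each generated token; softmax them to get
# the probability the model assigned to the token it actually emitted.
new_tokens = out.sequences[0][inputs["input_ids"].shape[1]:]
for tok_id, logits in zip(new_tokens, out.scores):
    prob = torch.softmax(logits[0], dim=-1)[tok_id].item()
    print(repr(tokenizer.decode(tok_id)), f"p={prob:.3f}")
```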

Also, it seems that past a certain point, the more context the models are given, the less accurate the output becomes. A few times I asked ChatGPT something and it used its browsing functionality to look it up, and it was still wrong even though the sources were correct. But when I disabled "browsing" so it would just use its internal model, it was correct.

It doesn't seem like there are many expert services tied to ChatGPT (I'm just using this as an example, because that's the one I use). There's obviously some kind of guardrail system for "safety," there's a search/browsing system (it shows you when it uses this), and there's a Python interpreter. Of course, OpenAI is now very closed, so they may be hiding that it's using expert services (beyond the "experts" in the MoE model they're speculated to be using).

[–] 31337 2 points 6 months ago (1 children)

I find Kagi's results a little better than Google's (for most things). I like that certain categories of results (listicles, forums) are put in their own sections, so they're easy to ignore if you want. I like that I can prioritize, deprioritize, block, or pin results from certain domains. And I like that I can quickly switch to one of the predefined or custom lenses.

[–] 31337 3 points 6 months ago

Their line goes up when they show they're investing in AI, and it goes down when it looks like they're falling behind or not investing enough in it.

TBH, a lot of times I find myself interacting with ChatGPT instead of searching. It's overhyped, but it's useful.

[–] 31337 1 points 6 months ago

I've had unattended-upgrades running on a home server for a couple of years and haven't had any issues.
