sunstoned

joined 4 months ago
[–] [email protected] 2 points 11 hours ago

Second this ^

I have one and it's fine, but it's not directly supported by OpenWrt. Looks like the Beryl and Slate are, though.

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

Well that's odd!

Here you go:

submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]
 

I've been playing around with my home office setup. I have multiple laptops to manage (thanks work) and a handful of personal devices. I would love to stop playing the "does this charging brick put out enough juice for this device" game.

I have:

  • 1x 100W Laptop
  • 1x 60W Laptop
  • 1x 30W Router
  • 1x 30W Phone
  • 2x Raspberry Pis

I've been looking at multi-device bricks like this UGREEN Nexode 300W but hoped someone might know of a similar product for less than $170.

Saving a list of products that are in the ballpark below, in case they help others. Unfortunately they just miss the mark for my use case.

  • Shargeek S140: $80, >100W peak delivery for one device, but drops below that as soon as a second device is plugged in.
  • 200W Omega: at $140 it's a little steep. Plus it doesn't have enough ports for me. For these reasons, I'm out.
  • Anker Prime 200W: at $80 this seems like a winner, but ~~they don't show what happens to the 100W outputs when you plug in a third (or sixth) device. Question pending with their support dept.~~ it can't hit 100W on any port with 6 devices plugged in.
  • Anker Prime 250W: thanks FutileRecipe for the recommendation! This hits all of the marks and comes in around $140 after a discount. Might be worth the coin.
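For anyone sanity-checking the math, here's the back-of-the-envelope budget I'm working from. The 15 W per Pi is my assumption (the official Pi 4 USB-C supply is rated 15 W); the rest comes straight from the list above.

```python
# Worst-case simultaneous draw for the devices listed above.
# 15 W per Raspberry Pi is an assumption (official Pi 4 USB-C
# supply rating); the other figures are the listed device wattages.
devices = {
    "100W laptop": 100,
    "60W laptop": 60,
    "30W router": 30,
    "30W phone": 30,
    "raspberry pi #1": 15,
    "raspberry pi #2": 15,
}

total = sum(devices.values())
print(f"Worst-case simultaneous draw: {total} W")  # 250 W
```

Which is why a 250W brick squeaks by while anything under 200W forces the "which device gets throttled" game again.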

If you've read this far, thanks for caring! You're why this corner of the internet is so fun. I hope you have a wonderful day.

[–] [email protected] 1 point 2 weeks ago

Those Bar Mitts are killer too.

https://barmitts.com/

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Excellent notes. If I could add anything it would be on number 4 -- just. add. imagery. For the love of your chosen deity, learn the screenshot shortcut on your OS. Use it like it's Astroglide and you're trying to get a Cadillac into a dog house.

The little red circles or arrows you add in your chosen editing software will do more to convey a point than writing a paragraph on how to get to the right menu.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago)

Believe what you will. I'm not an authority on the topic, but as a researcher in an adjacent field I have a pretty good idea. I also self-host Ollama and SearXNG (a metasearch engine, to be clear, not a first-party search engine), so I have some anecdotal inclinations.

Training even a teeny tiny LLM or ML model can run a typical gaming desktop at 100% for days. Sending a query to a pretrained model hardly even shows up in htop unless the model is gigantic, and even the gigantic ones only spike the CPU for a few seconds (until the query completes). SearXNG, again anecdotally, spikes my PC about the same as Mistral in Ollama.

I would encourage you to look at more explanations like the one below. I'm not just blowing smoke, and I'm not dismissing the very real problem of massive training costs (in money, energy, and water) that you're pointing out.

https://www.baeldung.com/cs/chatgpt-large-language-models-power-consumption
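To make the inference side concrete, here's a minimal sketch of what "sending a query to a pretrained model" looks like against a self-hosted Ollama instance. The endpoint is Ollama's default REST API; the model name "mistral" is an assumption (substitute whatever you've pulled).

```python
import json
import urllib.request

# Ollama's default local REST endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="mistral"):
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def query_ollama(prompt, model="mistral"):
    """Send one prompt and return the generated text.

    Requires a running Ollama server. Each call is a single forward
    pass through an already-trained model -- the cheap step, not a
    training run.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (with Ollama running): query_ollama("Summarize htop in one line.")
```

Watch htop while a call like that runs and you'll see the brief spike I'm describing, versus the days-long 100% load of a training job.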

[–] [email protected] 0 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

I don't disagree, but it's useful to point out that there are two truths in what you wrote.

The energy use of one person running an already trained model on their own hardware is trivial.

Even the energy use of many many people using already trained models (ChatGPT, etc) is still not the problem at hand (probably on the order of the energy usage from a typical search engine).

The energy use in training these models (the appendage measuring contest between tech giants pretending they're on the cusp of AGI) is where the cost really ramps up.

[–] [email protected] 1 point 3 weeks ago

Amazing! I've used that before but just to look for packages offline. I'll definitely check that out.

[–] [email protected] 1 point 3 weeks ago (3 children)

Love the example here!

I'm still learning about the available option references (e.g. `config.services.navidrome.settings.Port`). What resources did you find best for learning that kind of thing?

I'll accept RTFM if that's applicable :)

[–] [email protected] 2 points 1 month ago

Or `dust`, if you want it fastest with a pretty graph.

[–] [email protected] 1 point 1 month ago

Excellent distinction! That makes a lot of sense, thank you

[–] [email protected] 1 point 1 month ago

Hm.. if I'm reading the README correctly, this is a LAN-only drop mechanism between a phone and a laptop. Syncthing already does that, albeit with a cumbersome amount of features and config for that use case. If that's not accurate, I'm sure you'll let me know :)

I would love to see this develop an AirDrop-esque Bluetooth/PAN phone-to-phone feature though! Especially if a compatible iOS app were available -- that would be really slick.

 

Is anybody self-hosting Beeper bridges?

I'm still wary of privacy concerns, since they basically have you log into every other service through their app (which, as I understand it, always runs through the closed-source part of Beeper's product).

The linked GitHub README also states that the benefit of hosting their bridge setup is basically "hosting Matrix is hard," which I don't necessarily believe.
