fmillion

joined 11 months ago
[–] [email protected] 1 points 9 months ago

I wonder how they do this. Do the drives even use SAS/NVMe/some other standard interface, or are they fully proprietary? And what "logic" is handled on the controller/backplane vs. in the drive itself?

If they've moved significant logic, such as bad-block management, to the backplane, it's an interesting further example of the tech industry coming "full circle". (E.g. we started out using terminals, then went to locally running software, and now we're slowly moving back towards hosted software via web apps/VDI.) I see no practical reason to do this other than (theoretically) reducing manufacturing costs and (definitely) pushing vendor lock-in. Not like we haven't seen that sort of thing before, e.g. NetApp messing with the firmware on its drives.

However, if they just mean that the 29TB disks are SAS drives, that the enclosure firmware implements some sort of proprietary filesystem, and that the disks are only officially supported in their enclosure - but a disk could still operate on its own as just a big 29TB drive - then we could in theory get these drives used and stick them in any NAS running ZFS or similar. (I'm reminded of how Intel originally pitched the small 16/32GB Optanes as "accelerators", and for a short time people weren't sure if you could just use them as tiny NVMe SSDs - it turned out you could. I have a Linux box that uses a 16GB Optane as a boot/log/cache drive, and it works beautifully. Similarly, those 800GB "Oracle accelerators" are just SSDs; one of them is the VM store in my VM box.)

 

Basically I need read-only access via a browser to a large folder structure on my NAS, but I want:

  • the ability to somewhat rapidly search all files in the hierarchy by filename. Metadata isn't required for my use case, just the file names. An initial indexing phase is totally fine, but some sort of fsnotify-based "keep it up to date" function would be nice (see the sketch after these lists).
  • a very simple preview viewer, sort of like most cloud sharing sites have (Dropbox, for example), where various media files can be viewed/streamed in-browser. No need for any transcoding - if the browser can't play the codec, it's fine for it not to work. A download link is of course a good idea.
    • Ideally configurable - show previewer for files matching these mimetypes/extensions/etc., default to download otherwise.
  • decent design - nginx's indexes suck; they cut off filenames that are even moderately long. It doesn't have to be top-tier-design-team stuff, but something with basic Bootstrap/Material would be much better than the ugly indexes.
  • (ideally) direct access - i.e. https://mynas.local/server-app/media/tv/ should open the app to that folder. It's fine if that requires web server/proxy server support to do URL rewriting or whatever.
  • use the web server's own functionality for actually sending the files - i.e. an app that opens a file, reads it into RAM, and then sends it over the socket is far less efficient than the web server, which can use sendfile. (If you're writing an app, this can usually be done with a header in the response - see the sketch right after this list.) This also ensures support for range requests and other things web servers can provide to clients.
  • Read only is fine and ideal. If uploading is possible, it should require some form of authentication. (No auth engine needed for the read-only side; if anything, I can configure my reverse proxy to add that.)
  • something that can run in Docker. This is not a very tall order these days though. :)
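
To illustrate the sendfile bullet above: a minimal sketch of the header handoff, assuming nginx is the front-end web server with a small Flask app behind it. The /download route, the /protected-files/ location, and the media root are hypothetical names I made up for the example; the real mechanism is nginx's X-Accel-Redirect header (Apache and lighttpd have X-Sendfile equivalents).

    # Hypothetical Flask app: it never reads the file itself,
    # it just tells nginx which file to send.
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/download/<path:relpath>")
    def download(relpath):
        # A real app would sanitize relpath here to block path traversal.
        resp = Response()
        # nginx intercepts this header and replaces the response with the
        # file from the matching internal location, sending it via
        # sendfile() and honoring Range requests.
        resp.headers["X-Accel-Redirect"] = "/protected-files/" + relpath
        return resp

On the nginx side this would pair with an internal location along the lines of "location /protected-files/ { internal; alias /srv/media/; }", so clients can't request those paths directly.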

What I don't need (if it's there, that's fine, but I don't have a need for it):

  • creating sharing links
  • transcoding during streaming
  • user accounts
  • extreme levels of customizability (styling with custom CSS = fine)
  • upload support
  • "gallery" views (simple list with no thumbnails is fine, even in folders with images/videos/music)
  • metadata/content search - a simple string search on filenames is fine; imagine "find . > list.txt" followed by "grep 'search_term' list.txt"
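
And on the indexing bullet in the first list: a minimal sketch of a filename index kept current with the Python watchdog library (which uses inotify on Linux, i.e. the fsnotify-style mechanism I mentioned). The ROOT path and the in-memory set are assumptions for illustration; a real app would persist the index and handle edge cases like entire directories being moved.

    import os

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    ROOT = "/srv/media"  # hypothetical media root

    # Initial indexing phase: walk the tree once and record every file path.
    index = set()
    for dirpath, _dirs, filenames in os.walk(ROOT):
        for name in filenames:
            index.add(os.path.join(dirpath, name))

    class IndexUpdater(FileSystemEventHandler):
        # Keeps the index current as files appear, vanish, or are renamed.
        def on_created(self, event):
            if not event.is_directory:
                index.add(event.src_path)

        def on_deleted(self, event):
            index.discard(event.src_path)

        def on_moved(self, event):
            index.discard(event.src_path)
            if not event.is_directory:
                index.add(event.dest_path)

    def search(term):
        # Plain case-insensitive substring match on the full path -
        # equivalent to grepping the output of "find .".
        t = term.lower()
        return [p for p in sorted(index) if t in p.lower()]

    observer = Observer()
    observer.schedule(IndexUpdater(), ROOT, recursive=True)
    observer.start()

The linear scan is crude, but for filename-only search over a typical NAS tree it's probably fast enough; swapping in something like SQLite FTS would be the obvious upgrade.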

Right now I'm just using good old nginx indexes, but as I've already said, they leave much, much, much to be desired. I've started trying to build this idea multiple times and have a handful of very incomplete iterations, but I've never had the time to get any of them over the finish line. Plus I kinda suck at frontend web dev.

[–] [email protected] 1 points 10 months ago

/storage.

It's not the top-level directory. I do have some other stuff on /data. 😂

[–] [email protected] 1 points 10 months ago

I present to you my 2Gbps symmetric fiber link. No data cap. Real public IP (no CGNAT). $120/month after my promotional period ($75/month during it).

I humbly request that the poster of this message explain how I might obtain equivalent performance over a cell network for the same cost, with no data cap.

[–] [email protected] 1 points 10 months ago

When do I shut down?

  1. When the power goes out and my UPS battery drains.
  2. When I do a hardware upgrade.
  3. When I want to rearrange equipment (and when I moved this past summer).

That's seriously about it.

 

LastPass is out. Aside from all the ongoing issues with vaults being decrypted, I just canceled my paid subscription only to discover the free account is basically useless for anyone who actually uses technology (it limits you to either computers or mobile devices, not both).

I've successfully gotten a Vaultwarden instance running and it works great. But I have a few concerns:

  • Right now the vault is hosted on my LAN, and I use a VPN to connect to my LAN from my mobile devices as needed to access other internal private services. The problem I see here is that if my LAN goes down for some reason, I might not have access to my passwords...
  • I thought about hosting the vault on one of my cloud VPSes. However, I don't feel as secure having the instance "flapping in the breeze", sitting there as a target for the first exploit found in the server. I strongly prefer the idea of it only being accessible via some sort of VPN.
  • So I thought I could just run a VPN on the VPS itself, like I do with my home LAN right now. But then I realized a second concern: if something were ever to happen to me, even temporarily (say I end up hospitalized), the VPS will just shut off as soon as payment isn't received on time, and any family members who might need the instance (e.g. to access my passwords) will be out of luck.
  • The problem with requiring a VPN to reach the VPS or my LAN is that I can't use the "give someone else access if I become incapacitated" options. I doubt my mom will ever remember how to activate the VPN and get into the vault, for example. (Not to mention I'd like to offer family accounts on the instance as well, but I'm still not sure how I feel about a Vaultwarden instance just sitting there on an open HTTP server.)

For those who self-host Vaultwarden (or even the official Bitwarden server), how do you do it securely and reliably? I know there isn't much to be done about the "it goes down if I don't pay" problem other than setting up autopay and hoping it can withdraw from my account in my absence, but what about security in general? It really smells bad to run a known password-storing server out on the public Internet, ready for easy scanning and infiltration, plus it just makes your host a prime target...

[–] [email protected] 3 points 11 months ago

Glue the edges together and make your own Surface Duo.