Selfhosted

40006 readers
538 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues in the community? Report them using the report flag.

Questions? DM the mods!

founded 1 year ago
MODERATORS
1
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2
 
 

I thought this was an interesting post and discussion on selfhosted. Thoughts?

3
 
 

I just installed a completely new Drupal instance in a Debian VM inside Proxmox. Everything works as intended, but I cannot add content to it (it gives me a 500 error). The Apache logs show me that memory is exhausted. I searched online with no real answer, and tried a lot of things in php.ini, .htaccess... At first the VM had 1 vCPU and 1 GB of RAM. When that didn't work, I put the PHP memory limit at 1 GB and gave the VM 8 GB and 4 vCPUs. Still not working, it just "loads" the 500 for longer. I have no solution as of now. If you have one, please let me know! Thanks 🙂
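A sketch of the usual debugging steps for memory-exhausted 500s - the php.ini path assumes Debian's packaged PHP 8.2 under Apache, so adjust to your version:

```shell
# Find the php.ini Apache actually loads (the CLI and Apache configs can differ).
php -i | grep 'Loaded Configuration File'

# Raise the limit in the Apache-side php.ini (version path is an assumption).
sudo sed -i 's/^memory_limit.*/memory_limit = 1G/' /etc/php/8.2/apache2/php.ini
sudo systemctl restart apache2

# Watch the exact error while reproducing the 500; Drupal also logs to its
# own "Recent log messages" (watchdog) page.
sudo tail -f /var/log/apache2/error.log
```

If the limit really is applied and it still exhausts memory, the Apache log line usually names the module or query responsible, which narrows it down a lot.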

4
 
 

So I self-host all my music via Plex, and for some artists and albums Plex (via Plex Pass, I believe) pulls lyrics and can show them like Spotify etc., but some artists are not supported/popular. I found a couple of apps that 'worked' to download lyrics, but the best one was this: https://github.com/tranxuanthang/lrcget

Just thought I would share for others who want to do the same. I have a large library, and adding lyrics was not hard at all; it found most out of the gate. If you have other solutions I would love to hear about them, maybe they are better lol.

It saves the lyrics file with the same name, in the same folder the song is located in.

5
53
Foss webscraper (github.com)
submitted 20 hours ago* (last edited 20 hours ago) by [email protected] to c/[email protected]
 
 

Not OP. This was posted to self hosted on reddit and might be useful to some.

Original post - https://www.reddit.com/r/selfhosted/comments/1glf06d/comment/lw1e4zd/

6
46
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
 
 

Question: What do people in this community recommend for self-hosted instant messenger projects? I host a VoIP service for my nerd herd, and due to recent events I'm attempting to migrate our group chats off of the major platforms (Discord, Google Chat, Slack, etc.) as well.

A few requested features/requirements:

  • Self-hosted
  • Supports images
  • Has a decent mobile app
  • Encrypted communication
  • Expected load ~25 users.

I am doing my own digging but wanted to hear the community's opinions on some of the projects that came up in searches.

  • IRC/XMPP - doesn't really work for the request, but it's a classic, so I felt I had to mention it.
  • Rocket.Chat - seems like the best option so far, but I was having trouble finding current reviews, and its licensing is a bit much.
  • Matrix - also close to checking all the boxes, but it wasn't clear how it works on mobile (Element seemed to be the recommended mobile app).
  • Revolt - was high in the SEO results, but most of the discussion around it was about drama with the maintainers (that is what prompted this post; I'm fishing for more current opinions).
  • Zulip - seemed similar to Rocket.Chat, but more expensive if we had to get a license.

I appreciate people's opinions and recommendations on this topic.

7
 
 

Hey everyone, wanderer recently celebrated its 10th anniversary. Well, as far as minor versions go, at least.

First and foremost: What is wanderer? wanderer is a self-hosted GPS track database. You can upload your recorded GPS tracks or create new ones and add various metadata to build an easily searchable catalogue. Think of it as a fully FOSS alternative to sites like alltrails, komoot or strava.

Next: Thank you for almost 1.2k stars on GitHub. It’s a great motivation to see how well-received wanderer is.

By far the most requested feature since my last post was the possibility to track your activities. This is now possible on the new profile page, which shows various statistics to help you gain better insights into your trailing/running/biking habits. Lists have also received a major upgrade, allowing you to easily bundle a multi-day hike and share it with other users.

If you want to give wanderer a try without installing it you can try the demo. When you are ready to self-host it you can head over to wanderer.to to see the full documentation and installation guide. If you really like wanderer and would like to support its development directly you can buy me a coffee.

Thanks again! Cheers Flomp

8
62
submitted 1 day ago* (last edited 20 hours ago) by [email protected] to c/[email protected]
 
 

I want to start by saying I recognize that everyone's needs & priorities are different.

My wife and I both have iPhones, and I have a Pixel 7 Pro I use for work (and sometimes to compare the camera to the iPhones'). All of our photos are currently backed up to iCloud (Apple One Premier - 2TB storage) and via Synology Photos. The Pixel has "unlimited" storage for photo backup w/ Google, and it also backs up to the Synology. In general, I would like to get off of Google, but it's 99% work stuff that I wouldn't miss if it were lost.

There's a lot that I really like about Immich, but there are also some real pain points for me. I'm not going to comment on the discrepancies between the mobile vs. web interfaces as I expect them to be addressed as the product matures.

  • The rapid development is both a blessing and a curse. I love that the team is really working through the roadmap, but sometimes it feels like new features arrive somewhat half-baked. The most common example is a feature released working on just the web or just the mobile app. The pace also creates extra work for me, in that every release requires me to look for breaking changes and make the appropriate fixes. I get it, it's beta software, and heavy development often requires this.
  • If it mis-identifies a face, the mechanism for correcting that is pretty clunky. I have to first say it's a different person, and then, if I don't care about tagging that face, go to People to hide it. I don't really care about faces that it completely misses, because I don't consider facial recognition an "archive-grade" feature. We have tags/keywords for that.
  • The tagging is both cool and clunky. I love the nested tags and the drill-down tags interface. I hate that I can only add a new tag from the tags admin page. I would also like to see auto-tagging or suggested tags implemented.
  • Image rotation is half-addressed at best. For one, I'm not sure why it only works on the mobile interface, since the web interface has direct access to ImageMagick. I mainly see image orientation issues w/ raw files. To fix one, I have to edit it on mobile, save it to my phone's library, and upload the newly created JPG, which shows up as a separate file w/ metadata that doesn't align w/ the original (like creation date). It's just a mess.

I started playing with PhotoPrism a little bit, and while it addresses many of my complaints w/ Immich, it also raises some of its own pain points.

  • Probably the biggest issue I have with PhotoPrism is the lack of mobile apps. There are some out there, but the recommended app is a third-party WebDAV app called PhotoSync. I tried it and wasn't overly impressed - at least, not enough to pay for it. This would be a dealbreaker, except that I can simply use the Synology Photos backup and have PhotoPrism mount those directories as its library (I can also do this with Immich's "External Library" feature).
  • The metadata editing is comprehensive. In this one regard it is streets ahead of Immich. Seriously, you have so much more access to the photo metadata. Unfortunately, it's hampered by the limited batch capabilities.
  • Batch editing isn't really batch editing. It's just editing a smaller subset of individual files one at a time, so when you go to the next or previous file, it's the next or previous one in the selected subset.
  • Keywords are supported, and new ones can be created on the fly. That said, nested keywords don't appear to work.
  • There are also labels. Both are auto-suggested, and both can be manually edited. Labels are also accessible from the sidebar. No nested labels either, but it does auto-sort labels into broad categories - for example, "dog" and "cat" are placed into an "animals" category. You can switch between showing/hiding the broad categories, and you can also have favorite labels.
  • Image orientation/rotation is done right in the photo-editing dialog. One more area where PP beats Immich.

I haven't yet decided which one I will keep. I could use either with the Synology Photos app to back up my phones. PhotoPrism's lack of a mobile app is really bad, but the mobile web interface is fine for navigating the library. Immich is a more holistic solution, but its handling of some key organizational and editing functions is pretty glaring as well. I know Immich is the overwhelming favorite of most self-hosting communities, but I found PhotoPrism to be pretty compelling in its own right - especially the metadata editing capabilities.

ETA: I see lots of people talking about Immich's facial detection. Out of curiosity, what are your detection settings? I've found it to be pretty good compared to PhotoPrism's, but not exactly game-changing. My settings are:

  • Model: antelopeV2
  • Min Score: 0.2
  • Max distance: 0.5
  • Min recognized faces: 1
9
 
 

In my server I currently have a 9th-gen Intel i7 CPU with integrated Intel graphics.

I don't use or need A.I. or LLM stuff, but we use jellyfin extensively in the family.

So far jellyfin has always worked perfectly fine, but I could add (for free) an NVIDIA 2060 or a 1060. Would it be worth it?

And as for power consumption, will the increase be noticeable? Should I do it or pass?

10
11
 
 

Currently, I use dockerproxy + swag and Cloudflare for externally-facing services. I really like that I don't have to open any ports on my router for this to work, and I don't need to create any routes for new services. When a new service is started, I simply include a label to call swag, and the subdomain & TLS cert are registered with Cloudflare. About the only complaint I have is Cloudflare's 100 MB upload limit, but I can easily work around that, and it's not a limit I see myself hitting too often.

What's not clear to me is what I'm missing by not using Traefik or Caddy. Currently, the only thing I don't have in my setup is central authentication. I'm leaning towards Authentik for that, and I might look at putting it on a VPS, but that's the only thing I have planned. Other than that, almost everything's running on a single Beelink S12. If I had to, I could probably stand up a failover pretty quickly, though.

12
 
 

I saw this post and I was curious what was out there.

https://neuromatch.social/@jonny/113444325077647843

I'd like to put my lab servers to work archiving US federal data that's likely to get pulled - climate and biomed data seem most likely. The most obvious strategy to me seems like setting up mirror torrents on academictorrents. Anyone compiling a list of at-risk data yet?

13
 
 

qBit does delete files from my physical drive (direct-attached storage, RAID 5), but it won't update the free space in the WebUI. The discrepancy is over 1 TB, so I'd like to address this if someone can help me.

Some info:

  • qBit v.5.0.1, docker, from linuxserver.io
  • Ubuntu 24.04
  • Automatic Management Mode is checked
  • Torrent content removing mode: Delete files permanently
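A first sanity check would be comparing what the container sees with what the host sees - the container name and paths below are guesses based on linuxserver defaults, not from the post:

```shell
# Does the container see the same free space as the host?
df -h /path/to/downloads                       # on the host
docker exec qbittorrent df -h /downloads       # inside the container

# Deleted-but-still-open files keep holding space until the process
# releases the handles; restarting the container forces that.
docker restart qbittorrent
```

If both `df` outputs agree but the WebUI still shows the old number, it's likely a stale stat in qBittorrent itself rather than a filesystem problem.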
14
 
 

My internet connection is getting upgraded to 10 Gbit next week. I'm going to start out with the rental router from the ISP, but my goal is to replace it with a home-built router, since I host a bunch of stuff and want to separate out my home Wi-Fi, etc. onto VLANs. I'm currently using the good old Ubiquiti USG4. I don't need anything fancy like high-speed VPN tunnels (just enough to run SSH through), just IPv6 routing and IPv4 tunneling (MAP-E with a static IP), as the new connection is IPv6-native.

After doing a bit of research, the Lenovo ThinkCentre M720q has caught my eye. There are tons of them available locally, and people online seem to have had good luck using them for router duties.

The one thing I have not figured out is what CPU option I should go for? There’s the Celeron G4900T (2 core), Core i3 8100T (4 core), and Core i5 (6 core). The former two are pretty close in price but the latter costs twice as much as anything else.

Doing research, I get really conflicting results, with half of people saying that just routing IP, even at 10 Gbit, is a piece of cake for any decently modern CPU, and others saying they experienced bottlenecks.

I've also seen comments mentioning that BSD-based routing platforms like pfSense perform worse than Linux-based ones like OpenWRT due to the lack of multi-threading in the former; I don't know if this is true.

Does anyone here have any experience routing 10 Gbit on commodity hardware and can share their experiences?
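For anyone benchmarking a candidate box before committing, iperf3 through the routed path is the usual smoke test (addresses are placeholders):

```shell
# On a machine on one side of the router:
iperf3 -s

# On a machine on the other side, so traffic crosses the routed path;
# several parallel streams expose per-core bottlenecks better than one:
iperf3 -c 192.168.2.10 -P 4 -t 30
```

If a single stream (`-P 1`) tops out well below the aggregate of four, that's the single-threaded-routing bottleneck people mention.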

15
 
 

Hey all! I'm running Proxmox VE with the tteck PBS LXC and I can't figure out why there is this constant network traffic on PBS. I have backups set to run in the early morning and the screenshot is from when it should be idle. Any ideas? I know I'm not providing much info here so any clarifying questions are welcome since I don't know what would be important for troubleshooting. Thanks!

16
 
 

Hi all,

I started out self-hosting Nextcloud only. Now I have a domain name, and I would like to self-host more services and websites on subdomains without having to open up more ports on my router.

  1. Is it reasonable to use a reverse proxy server to avoid opening up more ports?
  2. Can I use a reverse proxy manager that simplifies SSL certs, etc?
  3. Can I put the HTTP/HTTPS services behind a reverse proxy, behind a free cloudflare DNS proxy to mask my IP address?
  4. And put other non-http services on the real IP address.
  5. Will all of this be more prone to failure and slow compared to forwarding 443 and 80 directly to my nextcloud server?

The other services I would like to eventually host and have accessible externally are

  • Jitsi
  • Mastodon instance (hoping to make some bots that mirror other social media to bring them into Mastodon)
  • blog website
  • Veilid maybe
  • OpenVPN over TCP on 443 (to get through restrictive firewalls on e.g. school wifi networks that don't whitelist domains)
  • Synology to Synology backup.

I'm hoping to use Yunohost on a RPI to simplify hosting a lot of these things.

Here's my plan where I'm looking for feedback. Am I missing any steps? Are my assumptions correct?

  1. Install the reverse proxy on Yunohost; configure Cloudflare DNS and freedns.afraid.org to point towards the reverse proxy server.
  2. Configure the reverse proxy to route the various subdomains to
  • the raspberry pi running nextcloud
  • the other raspberry pi running openvpn
  • the Synology running the backup service
  • services running on the yunohost raspberry pi

I have not been able to find good documentation about how to configure the Yunohost reverse proxy, how to deal with HTTP headers, or how to have correct certificates on all the subdomains as well as on the reverse proxy. Looking for advice on how to move forward and/or simplify this setup.
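For what it's worth, one vhost per subdomain is usually all the reverse-proxy step amounts to. A minimal nginx sketch - the domain, upstream IP, and cert paths are placeholders, not from the post:

```shell
# Write a minimal per-subdomain vhost: nginx terminates TLS on 443 and
# forwards to the backend Pi, passing along the original client headers.
cat > /tmp/cloud.conf <<'EOF'
server {
    listen 443 ssl;
    server_name cloud.example.com;                # placeholder subdomain

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.10:80;        # placeholder Nextcloud host
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
EOF
grep -c proxy_set_header /tmp/cloud.conf   # 4 forwarded headers declared
```

The `X-Forwarded-*` headers are what answer the "how to deal with HTTP headers" question: without them, Nextcloud sees every request as coming from the proxy itself.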

17
 
 

I just started using my homelab to host some good new services, and I want to know the right approach for a Docker setup: what is the best distro for it, and how do I deploy containers correctly? Basically, I'm a real noob on this subject. Thank you.

18
14
Graphics card upgrade (midwest.social)
submitted 2 days ago* (last edited 19 hours ago) by [email protected] to c/[email protected]
 
 

Hello y'all, currently I have an RTX 2060, which I'll be passing down so I can slap a 1060 into my server, but I'd like to weigh some options first.

The 2060 has been pretty good with Linux thus far. I'm a little worried about going to the 30 series - so I'll be accepting affirmations - but I am curious what any of you think about AMD cards and which one to get. Also, if there's any reason not to use a 1060 for jellyfin and such, that would be very helpful.

Edit: thanks y'all! Settled on an RX 6600; it runs local LLMs like nothing compared to my ol' 2060

19
36
submitted 2 days ago* (last edited 2 days ago) by [email protected] to c/[email protected]
 
 

I have a ZFS pool that I made on Proxmox. I noticed an error today. I think the issue is that the drives got renamed at some point and now it's confused. I have 5 NVMe drives in total: 4 are supposed to be in the ZFS array (CT1000s), and the 5th, a Samsung drive, is the system/Proxmox install drive, not part of ZFS. It looks like the numbering got changed, and now the drive that used to be in the array labeled nvme1n1p1 is actually the Samsung drive, while the drive that is supposed to be in the array is now called nvme0n1.

root@pve:~# zpool status
  pool: zfspool1
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:07:38 with 0 errors on Sun Oct 13 00:31:39 2024
config:

        NAME                     STATE     READ WRITE CKSUM
        zfspool1                 DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            7987823070380178441  UNAVAIL      0     0     0  was /dev/nvme1n1p1
            nvme2n1p1            ONLINE       0     0     0
            nvme3n1p1            ONLINE       0     0     0
            nvme4n1p1            ONLINE       0     0     0

errors: No known data errors

Looking at the devices:

 nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme4n1          /dev/ng4n1            193xx6A         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013
/dev/nvme3n1          /dev/ng3n1            1938xxFF         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013
/dev/nvme2n1          /dev/ng2n1            192xx10         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR010
/dev/nvme1n1          /dev/ng1n1            S5xx3L      Samsung SSD 970 EVO Plus 1TB             1         289.03  GB /   1.00  TB    512   B +  0 B   2B2QEXM7
/dev/nvme0n1          /dev/ng0n1            19xxD6         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013

Trying to use the zpool replace command gives this error:

root@pve:~# zpool replace zfspool1 7987823070380178441 nvme0n1p1
invalid vdev specification
use '-f' to override the following errors:
/dev/nvme0n1p1 is part of active pool 'zfspool1'

where it thinks nvme0n1 is still part of the array, even though the zpool status command shows that it's not.

Can anyone shed some light on what is going on here? I don't want to mess with it too much, since it does work right now, and I'd rather not start again from scratch (from backups).

I used smartctl -a /dev/nvme0n1 (and likewise on all the drives) and there don't appear to be any SMART errors, so all the drives seem to be working well.

Any idea on how I can fix the array?
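The usual cause of this symptom is a pool built on bare /dev/nvmeXn1pY names, which aren't stable across reboots. A common fix (untested against this pool - export only while nothing is using it, and with backups current) is to re-import by stable IDs; the exact by-id name below is illustrative:

```shell
# Export the pool, then re-import using stable /dev/disk/by-id names so
# future nvmeXn1 reshuffles can't confuse the vdev labels again.
zpool export zfspool1
zpool import -d /dev/disk/by-id zfspool1
zpool status zfspool1

# If the missing vdev still shows UNAVAIL, point the replace at the
# by-id path of the real CT1000 partition instead of nvme0n1p1:
ls -l /dev/disk/by-id/ | grep nvme0n1p1
zpool replace zfspool1 7987823070380178441 /dev/disk/by-id/nvme-CT1000P1SSD8_19xxD6-part1
```

The `invalid vdev specification` error fits this theory: ZFS reads its own label off nvme0n1p1 and refuses to "replace" a disk with itself under a different name, which the by-id import resolves without a resilver.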

20
 
 

Hi folks,

You all have been instrumental to my self-hosting journey, both as inspiration and as a knowledge base when I'm stumped despite my research.

I am finding various different opinions on this and I'm curious what folks here have to say.

I'm running a Debian server accessible only within the home with a number of docker images like paperless-ngx, jellyfin, focalboard, etc. Most of the data actually resides on my NAS via NFS.

  1. Is /mnt or /media the correct place to mount the directories? Is mounting them on the host and mapping the mount point into Docker with a bind mount the best path here?

  2. Additionally, where is the best place to keep my docker-compose? I understand that things will work even if I pick weird locations, but I also believe in the importance of convention. Should this be in the home directory of the server user? I've seen a number of locations mentioned in search results.

  3. Do I have to change the file perms in the locations where I store the docker compose or any config files that don't sit on the other end of NFS?

Any other resources you wish to share are appreciated. I appreciate the helpfulness of this community.
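On question 1, the common pattern is an fstab mount under /mnt plus a bind mount into the container. A sketch with placeholder hostnames and paths:

```shell
# /etc/fstab entry (placeholder NAS hostname and export path):
#   nas.lan:/volume1/media  /mnt/nas/media  nfs  defaults,_netdev,soft  0  0

# docker-compose then binds the host mount point into the container:
cat > /tmp/docker-compose.yml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /mnt/nas/media:/media:ro   # host NFS mount -> container path
EOF
grep '/mnt/nas/media' /tmp/docker-compose.yml
```

Keeping the NFS mount on the host (rather than using Docker's NFS volume driver) means one place to debug when the NAS is unreachable, and any container can reuse the same mount.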

21
65
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/[email protected]
 
 

As the title says: what is the best file-sharing service that can be self-hosted? I need encryption.

EDIT: To be more precise, I want an alternative to WeTransfer, not Google Drive - something that gives out a link to download files.

22
 
 

I may explain this poorly, so feel free to ask clarifying questions.

I have my homelab setup, and you can access services at service.domain.com only on my network or on my Tailscale tailnet.

I use a pihole for my DNS, and so does my dad.

Would it be possible to install Tailscale on his pihole (or elsewhere) so that his entire network can access my services (i.e. service.domain.com) without routing all of his traffic over my pihole, so he can still use his own?
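This sounds like Tailscale's subnet-router feature; a sketch of the usual setup (the subnet is a placeholder for the relevant LAN range):

```shell
# On a machine on YOUR network (any always-on box) advertise your LAN:
tailscale up --advertise-routes=192.168.1.0/24   # placeholder LAN subnet
# ...then approve the route in the Tailscale admin console.

# On his pihole: join the tailnet, accept the advertised route,
# but keep using his own DNS rather than the tailnet's:
tailscale up --accept-routes --accept-dns=false
```

His pihole can then answer service.domain.com with the routed address via a local DNS record, while the rest of his traffic never touches your network.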

23
 
 

I'd like to self-host a large language model (LLM).

I don't mind if I need a GPU and all that, at least it will be running on my own hardware, and probably even cheaper than the $20 everyone is charging per month.

What LLMs are you self hosting? And what are you using to do it?
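One commonly mentioned starting point (not an endorsement from this thread) is Ollama; the model tag below is just an example, so pick one that fits your VRAM:

```shell
# Install Ollama and run a small model interactively:
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.1:8b "Hello"

# It also exposes a local HTTP API on port 11434 for other tools:
curl http://localhost:11434/api/generate \
  -d '{"model":"llama3.1:8b","prompt":"Hi"}'
```

A rough rule of thumb: a 4-bit-quantized model needs a bit over half its parameter count in GB of VRAM, so an 8B model fits comfortably on an 8-12 GB card.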

24
 
 

I've been using News for Nextcloud for the past year or so and love it. But it recently broke (it refuses to pull any feeds), and reading the GitHub issues... that app ain't gonna last much longer.

I briefly looked at the awesome-selfhosted list and am going to do a read-through of those when my brain is a bit more sane. But any suggestions? My main requirement is that multiple Android devices need to be able to connect and sync even while off-network (I can handle the anxiety that comes from tunnels).

25
 
 

I'd like to avoid building a completely new server just to run a GPU for AI workloads. Currently, everything is running on my laptop, except its dated CPU only really works well with smaller models.

Now I have an Nvidia M40; could I possibly get it to work using Thunderbolt and an enclosure or something? Note: it's on Linux.
