How do you host your DNS sinkhole/resolver?
I don't rely on it, but for guests etc. I use adblock on OpenWrt with https://oisd.nl/. It's supposed to have no false positives.
Like this, baby:
services.adguardhome = {
  enable = true;
  mutableSettings = false;
  openFirewall = true;
  settings = {
    dns = {
      # Quad9 upstreams
      bootstrap_dns = ["9.9.9.9" "149.112.112.112"];
      upstream_dns = ["https://dns.quad9.net/dns-query"];
      fallback_dns = ["tls://dns.quad9.net"];
    };
    filters = [
      {
        name = "AdGuard DNS filter";
        url = "https://adguardteam.github.io/HostlistsRegistry/assets/filter_1.txt";
        enabled = true;
      }
    ];
    filtering = {
      blocked_services = {
        ids = [ ];
      };
      protection_enabled = true;
      filtering_enabled = true;
      rewrites = [ ];
    };
  };
};
Deploy to the main home server, and the backup instance. NixOS is fucking awesome. No sync tool needed.
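For anyone wondering how that works with no sync tool: a minimal flake sketch, assuming a hypothetical repo layout where both machines import the same adguardhome.nix module (hostnames and file paths are made up):

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    nixosConfigurations = {
      # both hosts pull in the exact same AdGuard Home module,
      # so the primary and the backup can't drift apart
      dns-primary = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/primary.nix ./services/adguardhome.nix ];
      };
      dns-backup = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/backup.nix ./services/adguardhome.nix ];
      };
    };
  };
}

Rebuild both hosts from the same repo and they stay identical by construction.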
How do I use NixOS for Docker? I've tried before, but what I want is to be able to pull a docker-compose file from a git repo and deploy it. I haven't been able to find an easy way to do that on NixOS.
If you have the docker-compose.yml locally, you can nix run github:aksiksi/compose2nix to translate it into a Nix file for inclusion in your NixOS system config. I think that could be done in the config itself with a git URL, but I'm not that great at Nix. You will surely still need some manual config to e.g. set environment variables for paths and secrets.
Most of the time you don't need Docker. NixOS isolates runtimes.
That being said, you could use Nix to build the Docker image and then run it using the built-in oci-containers options.
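For reference, a minimal sketch of that oci-containers route, assuming the stock pihole/pihole image; the ports, paths and timezone below are placeholders:

virtualisation.oci-containers = {
  backend = "docker";  # defaults to podman if left unset
  containers.pihole = {
    image = "pihole/pihole:latest";
    ports = [ "53:53/tcp" "53:53/udp" "8080:80/tcp" ];
    environment = {
      TZ = "Europe/Berlin";          # placeholder
    };
    volumes = [
      "/var/lib/pihole:/etc/pihole"  # placeholder host path
    ];
  };
};

NixOS turns each entry into a systemd unit, so restarts and updates behave like any other service.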
I'm looking into Technitium, which doesn't get a ton of attention here. It looks to be much more feature packed than PiHole (DNS over HTTPS, for example), and similar to AdGuard Home.
Man, I was excited about Technitium, but I've had a hell of a time trying to get it to work. I'm not sure if it's intended to be on a DMZ in order to get TLS working or something, but I haven't been able to get it to acknowledge a single DNS request, even when I think I've disabled DNSSEC entirely.
I run Pi-hole + Unbound on bare-metal Debian on a tiny PC. The RPi was too unreliable; I was too often dealing with issues.
My router is the fallback, as it has blocking too.
Two Pi-hole instances, one on a Pi 5 and one on a Pi 4. Keepalived provides VRRP at a set address.
Instances kept in sync via Orbital Sync.
If one goes down, the other takes over.
Quite elegantly.
Where do you do DHCP? I had a primary pihole with DHCP enabled and a secondary with a cron job that enabled DHCP if the primary was down or disabled it if the primary was working. The cron job did sync DHCP leases from one to the other but it was a bit janky. I tried to update the secondary to pihole v6 and hosed it so I have no backup for now. I'd like to re-image the secondary and get a better setup - when I have time.
Edit to say I really wanted to try keepalived - that's really cool to fail over without clients noticing.
Debian & Ubuntu:
sudo apt install keepalived
sudo apt install libipset13
Configuration
Find your IP:
ip a
Edit your config:
sudo nano /etc/keepalived/keepalived.conf
First node
vrrp_instance VI_1 {
state MASTER
interface ens18
virtual_router_id 55
priority 150
advert_int 1
unicast_src_ip 192.168.30.31
unicast_peer {
192.168.30.32
}
authentication {
auth_type PASS
auth_pass C3P9K9gc
}
virtual_ipaddress {
192.168.30.100/24
}
}
Second node
vrrp_instance VI_1 {
state BACKUP
interface ens18
virtual_router_id 55
priority 100
advert_int 1
unicast_src_ip 192.168.30.32
unicast_peer {
192.168.30.31
}
authentication {
auth_type PASS
auth_pass C3P9K9gc
}
virtual_ipaddress {
192.168.30.100/24
}
}
Start and enable the service
sudo systemctl enable --now keepalived.service
Stop the service
sudo systemctl stop keepalived.service
Get the status
sudo systemctl status keepalived.service
Make sure to change the IPs and the auth pass to match your network.
Enjoy
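One optional extra, in case it helps: plain VRRP only fails over when the node (or keepalived itself) dies, not when the DNS service hangs. A small health-check sketch, assuming dig is installed (dnsutils) and the resolver listens on 127.0.0.1:

vrrp_script chk_dns {
    script "/usr/bin/dig @127.0.0.1 example.com +time=2 +tries=1"
    interval 5    # run every 5 seconds
    fall 2        # two failures mark the node faulty
    rise 2        # two successes bring it back
}

Then add a track_script { chk_dns } block inside the vrrp_instance on both nodes so the VIP moves when the check fails.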
On the router.
My router is locked down, so I assign the VRRP address to each client (pain in the ass), but it works.
PiVPN takes care of WireGuard too.
If you run a single DNS server, you will always have downtime when it's restarted.
The only way to mitigate that is to run two DNS servers.
I set up my network to use Pi-hole as the first DNS and the router as the second; most of the time Pi-hole is used, unless it's down.
Just be sure that the second server in the list is also a black hole. If you don't, all black-holed requests will fall back to the second DNS... which, if it doesn't also black hole them, will wind up serving you ads and defeating the point!
Personally I find a single Pi is just fine for DNS. It only takes like 10 seconds to reboot. Less, if you use M.2 storage via a HAT or boot from USB! That's pretty fine downtime. But if you're afraid you'll knock over the network and get yelled at by your family or housemates, best to use a backup :)
How do you set up clients so they will always use the first one? I thought if a client knows 2 servers they will switch between them.
I plan to add a second Pihole at some point and keep them synced
Yeah, you can't. There is no guarantee that clients will use DNS servers in any particular order.
Not that it particularly matters for just queries. The problem is that DHCP can only be enabled on one host. If that one fails then devices can't get on to the network themselves. I'd like to know a good way to have a failover DHCP server - my janky cronjob isn't great.
You can just run two DHCP servers. Give them non-overlapping ranges or give them the same MAC to IP mapping.
How do the DNS servers resolve local hostnames then? The pihole DHCP integration adds local hostnames to DNS when they are assigned an address. If there are two DHCP servers handing out leases, presumably only one offer would be accepted; how then would the DNS servers sync those names?
I think I had my secondary pihole resolve local names from the primary, and leases were copied over on a cronjob in case the secondary DHCP server had to be enabled.
Use the second option of a static MAC to IP map and add the relevant records to each pihole’s local DNS.
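A rough sketch of what that could look like, given that Pi-hole's FTL is dnsmasq-based; the file name, MAC and addresses are made up, and Pi-hole v6 moved some of this into its own settings, so treat it as illustrative only:

# /etc/dnsmasq.d/10-static.conf (same file dropped on both Pi-holes)

# static MAC-to-IP mapping, identical on both servers
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.30.50,nas

# matching local DNS record so either server resolves the name
host-record=nas.lan,192.168.30.50

# non-overlapping pools so both DHCP servers can stay enabled
# primary:   dhcp-range=192.168.30.100,192.168.30.149,24h
# secondary: dhcp-range=192.168.30.150,192.168.30.199,24h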
Are you using pihole to also create custom local DNS records?
Yes, mostly just the hostnames
The **ONLY** DNS server you should have set on your network is a/the PiHole(s).
Why wouldn't you just use DNS on your router?
The router may not have the functions you want.
Instead of paying for a Raspberry Pi you could just get an OpenWrt device. You can get the router equivalent of a rust bucket, since chances are you're not using the wireless portion anyway.
Sure, OpenWRT is good and there’s an Adguard Home plugin for it. You don’t need to buy any hardware to use Pihole though, many people run it in a container on an existing machine. So it comes down to the functionality you need or want and the software you prefer, right?
I run 2 separate adguard home containers on separate hosts and set DNS for both IPs. If I take one down, requests just get sent to the other.
For a critical service like DNS, I decided to set it up bare metal on a Raspberry Pi 2 (even a Pi Zero should work). It's been working fine for years, I just update it from time to time. That way I can mess with my homelab without worrying about DNS issues.
Funny enough, the Pi Zero uses the CPU from the 3 and the Zero 2 uses the CPU from the 3+, so they're both more powerful than a 2 anyway :)
Pi Zero uses the CPU from the 3
No, the original Pi Zero uses the CPU of the Pi1 (only clocked higher). So it is quite a bit slower than a Pi 2, since it has only a single ARMv6 CPU core. Still fine for a DNS server on a typical home network.
Aha, thank you. Shouldn't have riffed from memory on that one, I suppose!
But very much agreed: the Zero series has plenty of beef for a DNS server. Maybe when the Zero 3 comes out I'll add one as a backup for my Pi 4 server.
Pihole is cool, but why not just run Unbound on your firewall?
I tried running Unbound + Pi-hole; however, my experience was less than ideal.
I was able to forward all DNS queries without issues; however, Pi-hole wasn't receiving responses from Unbound in time, which caused some of my other Docker containers to bug out with timeout errors.
Pi-hole makes monitoring the network convenient, which is kinda why I don't wanna lose it; Unbound doesn't appear to have a web UI natively.
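For what it's worth, the usual pairing is roughly the one from the Pi-hole docs: Unbound as a local recursive resolver on an odd port, with Pi-hole's custom upstream pointed at it. A trimmed sketch (not anyone's exact config here):

server:
    # listen only for the Pi-hole on the same host
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    do-ip6: no
    # basic hardening
    harden-glue: yes
    harden-dnssec-stripped: yes
    edns-buffer-size: 1232

Then set Pi-hole's upstream to 127.0.0.1#5335 and disable its other upstreams; if queries time out, the edns-buffer-size and DNSSEC settings are the usual suspects.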
I've been using it with OPNsense, and it has a lot of built-in logging and reporting. Maybe not as pretty as Pi-hole, but it works great.
I've been using it with OPNsense, and it has a lot of built-in logging and reporting.
I never did a lot of research into OPNsense; from what I can see it's a whole OS. I might consider it, because I feel Proxmox (which I use currently for my host OS) isn't getting utilized to its fullest.
Maybe I’ll go network monitoring instead of virtual environment spin-ups 🤔
Why not both?
Huh, while I was typing this comment I decided to read the minimum hardware requirements, and it turns out I only need to reserve 2 cores for the VM.
While I'm not exactly hosting Proxmox on server-grade hardware, I think I can spare 2, maybe 3 cores; 4 is a bit of a stretch given that 6 are already reserved for my headless Debian 12 VM + Docker engine.
I am running AdGuard Home DNS, not PiHole... but same idea. I have AGH running in two LXCs (containers) on Proxmox. I have all DHCP zones configured to point to both instances, and I never reboot both at the same time. Additionally, I watch the status of the service to make sure it's running before I reboot the other instance.
Outside of that, there’s really no other approach.
You would still need at least 2 DNS servers, but you could setup some sort of virtual IP or load balancing IP and configure DHCP to point to that IP, so when one instance goes down then it fails over to the other instance.
I would do a single instance of Pihole. If you need HA, there are ways to do that. If you need something more, switch to a proper DNS service.
I think something else may be wrong if it breaks for 20 minutes. How long does it take for compose to bring the stack up?
Also assuming you run ntpd or chrony, it should always keep your clock in sync.
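If NTP is the suspect, it's cheap to verify, e.g. with chrony (commands assume Debian/Ubuntu):

sudo apt install chrony
chronyc tracking      # shows the current offset and whether the clock is synchronised
chronyc sources -v    # shows which NTP servers are actually being used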
I think something else may be wrong if it breaks for 20 minutes.
When I originally set up my Pi-hole many, many, many months ago, when I was still learning the Docker engine, I had little to no issues.
I don't know what caused it, whether a power outage or network loss, but ever since I've been experiencing DNS-related issues (I suspect it's NTP not syncing); some days I'll wake up before work realizing "oh shit, I have no internet access" and frantically try to fix the issue.
I think I might take the advice of other commenters here and host two PiHole servers on separate devices/stacks; just got to hope my router supports it.
Spin up a second Pi-hole in Docker and upgrade them separately, so one can fail over to the other while upgrading. I don't have an issue with a 20-minute loss of DNS after updating my pi.hole Docker container, but I did spin up a second one when I wanted to try Unbound + pi.hole and just kept them both up and running.
Spin up a second Pi-hole in Docker and upgrade them separately, so one can fail over to the other while upgrading.
Think I’m going to take this advice and put it in action! Thank you!
I run my pi-hole on a dedicated Pi, and I pull the updated image first without any trouble. Then after the updated image is pulled, recreating the container only takes a few seconds.
Dunno what's broken about your setup, but it definitely sounds like something unusual to me.
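For comparison, the compose equivalent of that flow, assuming the service is named pihole in your docker-compose.yml:

docker compose pull pihole    # fetch the new image while the old container keeps answering DNS
docker compose up -d pihole   # recreate just that service; the outage is a few seconds, not 20 minutes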
Running Unbound on my OPNsense with the appropriate blacklists for ad filtering.
This is overkill.
I have a dedicated raspberry pi for pihole, then two VMs running PowerDNS in Master/Slave mode. The PDNS servers use the Pihole as their primary recursive lookup, followed by some other Internet privacy DNS server that I can't recall right now.
If I need to do maintenance on the pihole, PowerDNS can fall back to the internet DNS server. If I need to do updates on the PowerDNS cluster, I can do them one at a time to reduce the outage window.
EDIT: I should have phrased the first sentence as "My setup is overkill" rather than "This is overkill" - the OP is asking a very valid question, and the passive phrasing of my post's first sentence could be taken multiple ways.
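I don't know the exact config, but for anyone wanting to copy the idea, the forwarding side might look roughly like this in pdns-recursor's recursor.conf (pre-5.x settings format; IPs are placeholders, and note the recursor spreads queries across forwarders by response time rather than strict priority):

# forward everything to the Pi-hole, with a public resolver as a second forwarder
forward-zones-recurse=.=192.168.1.10;9.9.9.9
local-address=192.168.1.21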