NikStalwart

joined 1 year ago
[–] [email protected] 1 points 11 months ago (1 children)

My HumbleBundle donations go to the EFF, which is my indirect way of supporting Let's Encrypt. I used to support the Internet Archive before they went political (and, consequently, to shit).

[–] [email protected] 1 points 11 months ago (1 children)

I am running BIND9 to achieve this very thing.

You can set up different "views" in BIND. Different zonefiles are served to different clients based on the IP address.

I have an external view that allows AXFR transfers to my public slave DNS provider, and an internal view for clients accessible over my VPN. I use DNS-01 challenges to issue valid Let's Encrypt certificates to both LAN-facing and public-facing services.
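A minimal sketch of what that looks like in named.conf (the ACL range, zone name, and slave address are placeholders, not my actual config):

```conf
acl internal-clients { 10.8.0.0/24; };      // VPN / management LAN

view "internal" {
    match-clients { internal-clients; };
    zone "example.com" {
        type master;
        file "zones/example.com.internal";  // records with private IPs
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/example.com.external";  // public-facing records only
        allow-transfer { 203.0.113.53; };   // AXFR to the public slave
        also-notify { 203.0.113.53; };
    };
};
```

Order matters: BIND uses the first view whose match-clients matches, so the internal view goes first.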

My DNS server runs on my VPN coordination server, but if I weren't doing that, I'd run it on my router.

I do not use dnsmasq, so I am not sure whether it supports split-view DNS, but if it does not, you can try CoreDNS as a lightweight alternative.

[–] [email protected] 1 points 11 months ago

sentry.io can be selfhosted for app monitoring.

Netdata is a 'set-and-forget' server monitoring solution. If you want something more tailored, go the Prometheus + Grafana route.

You can also have your app emit Prometheus metrics itself.
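For that last point, emitting Prometheus metrics can be as simple as serving the text exposition format over HTTP. A stdlib-only sketch (the metric name and port are made up; in practice you'd probably reach for the official prometheus_client library instead):

```python
# Minimal sketch: expose a Prometheus-scrapeable /metrics endpoint
# using only the standard library. Metric name and port are examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = {"value": 0}  # toy counter; a real app tracks real work


def render_metrics() -> str:
    """Render metrics in the Prometheus text exposition format."""
    return (
        "# HELP app_requests_total Total requests handled.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUEST_COUNT['value']}\n"
    )


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To actually serve it (blocks forever):
#   HTTPServer(("", 9100), MetricsHandler).serve_forever()
```

Point a Prometheus scrape job at the port and you're done.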

[–] [email protected] 1 points 11 months ago

Fair enough.

I am almost 90 per cent certain that my work won't let me get away with a VM, but heh, who knows....

[–] [email protected] 1 points 11 months ago (2 children)

Have you considered a physical KVM switch? If you have, why did you decide against it?

Are you doing GPU partitioning?

[–] [email protected] 1 points 11 months ago (1 children)

So, to answer your last question first: I dual-boot Arch and Windows, and I can mount the physical Arch disk inside a WSL VM and then chroot into it to run or fix things when I CBA to reboot properly. I haven't tried booting a WSL instance off the physical Arch disk, but I don't imagine it would work. Firstly, WSL uses a modified Linux kernel (which won't be available without tinkering with the physical install). Secondly, the physical install is obviously configured for physical ACPI and network use, which would break if I booted into it from WSL. After all, WSL is not a proper VM.

To answer the first question as to services: notes, kanban boards, network monitoring tools (connected to a VPN / management LAN), databases, more databases, even MOAR databases, database managers, web scrapers, etc.

The very first thing I used WSL for (a long time ago) was to run ffmpeg. I just could not be bothered building it for Windows myself.

[–] [email protected] 1 points 11 months ago (3 children)

So on my workstation / daily driver box:

  • I have Docker using the WSL2 backend. I use this instance of Docker to test deployments of software before I push them to my remote servers, to perform local development tasks, and to host some services that I only ever use when my PC is on (so services that require trust and don't require 24x7 uptime).
  • I have about 8 Linux distros in WSL2.
  • The main distro is Ubuntu 22.04 for legacy reasons. I use this to host an nginx server on my machine (as a reverse proxy to the Docker services running on it) and to run a bunch of Linux apps, including GUI ones, without rebooting into my Arch install.
  • I have two instances of Arch Linux. One is 'clean' and is only used to mount my physical Arch disk if I want to do something quick without rebooting into Arch; the other one I actively tinker with.
  • Other distros are just there for me to play with.
  • I use Hyper-V (since it is required for WSL) to orchestrate Windows virtual machines. Yes, I run Windows VMs on a Windows host. Why? Software testing, running dodgy software in an isolated environment, running ~~spyware~~ I mean Facebook, and similar.
  • Prior to Hyper-V, I used VirtualBox. I switched to Hyper-V when I started using WSL; for a time, Hyper-V was incompatible with any other hypervisor on the same host, so I dropped VirtualBox. That seems to have been fixed now, and I have reinstalled VirtualBox to orchestrate Oracle Cloud VMs as well.

[–] [email protected] 1 points 11 months ago

> Thank you! What would such a competitive amount be? 2 per region, covering east and west? Or something more distributed, such as 1 within a 1,000 km radius?

I certainly don't need anything as robust as 1 per 1,000 km. I currently use ClouDNS as my main slave DNS provider, which gives me POPs in the capital city of every economically relevant country.

I don't necessarily need something that robust for a backup slave provider. Something like 2 POPs per continent would be more than enough, say South Africa, North Africa, Sydney, Singapore, 1-2 in Europe, 1 in JP/KR, 2 in USA, and one in South America.

That should give decent-enough coverage.

[–] [email protected] 1 points 11 months ago (2 children)

I do, indeed, use slave DNS servers; in fact, I'm currently in the market for a second independent provider.

What features am I looking for? Honestly, a competitive amount of POPs and the ability to accept inbound AXFR. I don't need much more than that.

Oh, and pricing: I'm looking for something on the level of AWS or cheaper. I've tried approaching some other players in the field, like NS1 and Hurricane Electric's commercial service, and they quoted me $350+/month for < 100 zones and <10m req/month. No thank you.

[–] [email protected] 1 points 11 months ago

This is not a question for /r/selfhosted. This is a configuration issue — that I am inclined to ascribe to installing too much unnecessary software — and would suggest directing to a more generalist tech support forum.

[–] [email protected] 1 points 11 months ago

> Tailscale is beautiful but it is not selfhosted so

Headscale is a thing. Tailscale themselves promote it.

I'd seriously consider using a relay/jumphost/etc VPS between your home network and roaming clients. That way, you can even continue using tailscale.

You can easily get one for under $3/month, even without Black Friday deals.

[–] [email protected] 1 points 11 months ago

I'm not sure it's your google-fu that's glitching so much as your imagination. You have three clean options:

  • docker container ls (shows container names and ports)

  • netstat -ban (on Windows: shows all ports in use on the system, plus the binary running the service; sudo ss -tulpn is the rough Linux equivalent)

  • Just write documentation for yourself when you bring up a new service. It doesn't have to be anything fancy; a simple Markdown or YAML file will do. I use YAML in case I ever want to consume it programmatically.

netstat -an is your friend.

Documentation is your second best friend.
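For what it's worth, the YAML file from that last bullet doesn't need to be elaborate. Something like this (service names and ports are made up) is enough to grep later or feed to a script:

```yaml
# services.yaml — one entry per service; all values are examples
services:
  - name: nginx-proxy
    host: docker
    ports: [80, 443]
    notes: reverse proxy for everything below
  - name: gitea
    host: docker
    ports: [3000, 2222]
    notes: 2222 is SSH passthrough
```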
