Anafroj

joined 2 years ago
[–] Anafroj 1 points 2 years ago (4 children)

I'm not sure about the feasibility of this (my first thought is that ssh on the host can still be reached directly by IP, unless the VPN software creates its own network interface and sshd binds only to it?), but it does not remove the need for frequent updates anyway, as openssh is not the only software that could have bugs: every piece of software that opens a port should be protected as well, and you can't hide your webserver on port 80 behind a VPN if you want it to be public. And it's anyway a far more complicated setup than just doing updates weekly. :)
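
(If you do go the VPN route, the trick would be to have sshd listen only on the VPN interface's address, so it's not reachable on the public IP. A minimal sketch in /etc/ssh/sshd_config, where 10.8.0.1 is just a placeholder for whatever address your VPN software assigns to the host:

# only accept ssh connections coming through the VPN interface
ListenAddress 10.8.0.1

But again, that only covers ssh, not everything else you expose.)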

[–] Anafroj 3 points 2 years ago

If you do not neglect updates, then by all means, changing ports does not hurt. :) Sorry if I have a strong reaction to that, but I've seen way too many people in the past couple of decades counting on such anecdotal measures and not doing the obvious. I've seen companies doing that. I've seen one changing ports, forcing us to use the company certificate to log in, and then not updating their servers for six months. I've seen sysadmins who considered that rotating servers every year made it useless to update them, but employees should all use Jumpcloud "for security reasons"! Beware, though, of mentioning port changing without saying it's anecdotal and that the most important thing is updates, because it encourages such behaviors. I think the reason is that changing ports sounds cool and smart, while updates just sound boring.

That being said, port scanning is not just about targeted pentesting. You can't just run nmap on a host anymore, because IDS (intrusion detection systems) will detect it, but nowadays automated pentesting tools do distributed port scanning to bypass them: instead of flooding a single host to test all of its ports, they test a range of hosts for the same port, then start over with a new port. It's halfway between classic port scanning and the "let's just test the whole IP range for a single vulnerability" approach we more commonly see nowadays. But these scans are way harder to detect, as they hit smaller sets of hosts, and there can be hours before the same host is tested twice.

[–] Anafroj 7 points 2 years ago* (last edited 2 years ago)

The best you can do to know if it was an attack is to inspect the logs when you have time. There are a lot of things that can cause a process to go wild without it being an attack. Sometimes, even filling the RAM can make the CPU appear overloaded (and will freeze the system anyway). One simple way to figure out if it's an attack: reboot. If it's a bug, everything will get back to normal. If it's a DDoS, the problem will reappear within a few minutes after reboot. If it's a simple DoS (someone exploiting a bug in a piece of software to overload it), it will reappear or not depending on whether the exploit was automated and recurring, or just a one-shot.
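
If you want to dig around that reboot, these are the kind of commands I'd look at first (nothing exotic, just the usual suspects):

uptime                                    # load average: is the machine actually overloaded?
free -h                                   # is RAM/swap exhausted?
dmesg | grep -i -E 'oom|out of memory'    # did the kernel kill processes for lack of memory?
journalctl --since '1 hour ago'           # system logs around the incident
ss -s                                     # connection counts, a flood usually shows up here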

The fact that both your machines fell at the same time would tend to suggest it's an attack. On the other hand, it may just be a surge of activity on the network hitting VPSes that don't have nearly enough resources to handle it. Or it may even be a noisy neighbor problem (the other people sharing the real hardware on which your VPSes run are overloading it).

[–] Anafroj 4 points 2 years ago* (last edited 2 years ago) (10 children)

However Port 22 should never be open to the outside world.

That's not good advice, sorry. You can bind openssh to another port, but the only thing it changes is that you have less noise in your logs (and the real solution to that is to use fail2ban, as it also protects you from upcoming attacks on other services from those hosts). The real most important security measure is to make sure your software is always up to date, as old vulnerable software is the first cause of penetration (and yes, it's better to deactivate password login and only use ssh keys, that's good advice).
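
To give an idea, a minimal fail2ban setup for ssh is just a few lines in /etc/fail2ban/jail.local (the values are examples, tune them to your taste):

[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600

That alone kills most of the log noise, whatever port sshd is on.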

EDIT: I'm elaborating on that because I realize it may come across as harsh without giving enough details. The main reason why changing ports is a bad idea is that it gives a false sense of security (as your last sentence makes obvious). While it does protect from automated vulnerability scanners that sweep the internet, it's trivial to port scan your host, then to test unfamiliar ports for well known protocols. When that happens (and it will), if you thought you could avoid frequent updates thanks to the port change, you're pwned. The most important thing is to have a strict update policy of weekly, if not daily, updates. There is no working around it.
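
And most of it is easy to automate. On a Debian/Ubuntu box, for instance (other distros have their equivalents):

apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades   # turn on automatic security upgrades

You still want to log in regularly to check that everything is fine and reboot when the kernel changed, but the boring part is handled.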

[–] Anafroj 1 points 2 years ago

Oh, ok. Thanks for letting me know. 👍️

[–] Anafroj 9 points 2 years ago* (last edited 2 years ago) (1 children)

"karma" (as reddit calls scoring) never was more true to its name. :)

I haven't looked at Lemmy's implementation of upvotes/downvotes, but they should be ActivityPub activities, so they should appear when making a request to the user's actor.

EDIT: I've just checked random users' outboxes (that's the ActivityPub name for the list of activities), including mine, and they are actually just empty. So that probably means that Lemmy only publishes the upvotes/downvotes when pushing activities to federated servers, which makes those activities way more private, although not completely: someone could set up their own instance to learn about them, and it's best to assume that at some point, someone will start such an instance and publish an app revealing all votes to everybody (plus, as others mentioned, Kbin is already doing it).
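
For the curious, you can check for yourself with curl (lemmy.ml and someuser are just placeholders, any instance/user works the same): the actor document advertises the outbox URL, and you fetch that.

OUTBOX=$(curl -s -H 'Accept: application/activity+json' https://lemmy.ml/u/someuser | jq -r .outbox)
curl -s -H 'Accept: application/activity+json' "$OUTBOX" | jq .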

[–] Anafroj 6 points 2 years ago* (last edited 2 years ago)

I've been running my own email server for years, and while it's indeed difficult at first, it is possible, and there isn't much to do to maintain it once it works. All the horror stories you hear come from the fact that it's difficult to get right, and even when you get it right, you will have deliverability problems the first year, until your domain name gets established (provided you don't use it for spam, obviously - and yes, marketing is spam).

What you need:

  • being willing and serious about reading a lot of documentation
  • an IP that is not recognized as a home IP. So you'll need a "business ISP", or one that is not well known. You can bypass this problem by using AWS.
  • choosing a well recognized TLD for your domain name, like .com, .org, .net, etc. Don't use one of those fancy new extensions (.shop, .biz, etc.), they are associated with spammers.
  • learning how SPF works and getting it right (there is plenty of documentation and there are test tools for that; see the example records after this list)
  • same for DKIM
  • same for DMARC
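
To give an idea of what those three look like once set up, here are example DNS records for a hypothetical example.com with a mail server at 203.0.113.10 and a DKIM selector named "mail" (all values to adapt, of course):

example.com.                   TXT  "v=spf1 ip4:203.0.113.10 -all"
mail._domainkey.example.com.   TXT  "v=DKIM1; k=rsa; p=<your DKIM public key>"
_dmarc.example.com.            TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"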

Start using that for a year without making it your main address. Best is to use it for things that are not too mainstream, like FOSS mailing lists or discussing with people who run their own mailserver; those will not drop your mails randomly. When a year has gone by with frequent usage, you can migrate to that email address or domain.

Regarding the architecture of your network: do you read your emails on several machines (like on mobile and laptop)? If not, you can dramatically simplify your design by using pop3 instead of imap, connecting your client to the AWS server, downloading all your emails to your computer and removing them from the server at the same time. There you go, you have all your mails locally and you don't need dovecot. :)
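
If you'd rather script that download than use a graphical client, fetchmail does exactly this. A minimal sketch of ~/.fetchmailrc (host, user and delivery command are placeholders), knowing that by default it deletes messages from the server once fetched:

poll mail.example.com protocol pop3
  user "me" password "secret"
  ssl
  mda "/usr/bin/procmail -d %T"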

[–] Anafroj 5 points 2 years ago* (last edited 2 years ago) (2 children)

I don't use a pihole, but I have a pi with my favorite distro acting as a server, and I use dnsmasq for what you mention. It lets you set that machine as the nameserver for all your machines (just use its IP in your router's DNS conf, DHCP will automatically point connected machines to it), and then you can just edit /etc/hosts to add new names, and they will be picked up by the nameserver.

Note that dnsmasq itself does not resolve external names (e.g. when you want to connect to google.com), so it needs to be configured to relay those requests to another nameserver. The easy way is to point it to your ISP's nameservers or to public nameservers like those from Cloudflare and Google (although I would really recommend against letting them know all the domains you're interested in), or you can go the slightly more difficult way, as I did, and install another nameserver (like bind9) that runs locally. Thankfully, dnsmasq allows configuring its relay nameserver to be on something other than port 53, which is quite rare in the DNS world. Of course, if you're familiar with bind9, you could just declare new zones in it. I just find it (slightly 😂) more pleasant to work with /etc/hosts.
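
For reference, the relevant bits of that kind of setup in /etc/dnsmasq.conf would look like this (addresses and port are examples; dnsmasq reads /etc/hosts by default):

listen-address=192.168.1.2     # the pi's address on the LAN
no-resolv                      # don't pick the upstream from /etc/resolv.conf
server=127.0.0.1#5353          # relay external names to the local bind9 on port 5353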

[–] Anafroj 2 points 2 years ago

You're welcome. :) Oh yeah, you probably use a lot of them, they are everywhere, although it's not obvious to the user. One way to figure it out is to open the browser inspector (usually control + shift + i, same to close it) and look at the "Network" tab, which lists all network requests made by the page, to see if this list gets emptied when you click a link (if it's a real new page, the list is emptied and new requests appear).

My apologies, I spent an hour on the popstate problem before losing interest and calling it a day. Lemmy uses the Inferno frontend framework (a clone of React), which uses the inferno-router router to handle page changes, which uses the history lib to do it, which… uses pushState as I expected it would. And yet, binding on popstate won't work. 🤷 Maybe I'll have another look at it one day if it bugs me enough. :)

[–] Anafroj 2 points 2 years ago

It's coming to Gitlab too! (although this will take quite some time)

[–] Anafroj 57 points 2 years ago* (last edited 2 years ago) (1 children)

Obligatory check: are you sure you really need a forge? (that's the name we use to designate tools like Github/Gitlab/Gitea/etc.) You can do a lot with git alone: you can host repositories on your server, clone them through ssh (or even http with git http-backend, although that requires a bit of setup), push, pull, create branches, create notes, etc. And best of all: you can even have CI/CD scripts as server-side hooks that will run your tests, deploy your app (post-receive), or reject the changes if something is not right (pre-receive).

The only thing you have to do is to create the repos on your server with the --bare flag, as in git init --bare. This creates a repo that is basically only what you usually have in the .git directory, and avoids errors caused by pushing to the branch that is currently checked out. It also keeps the repo clean, without artifacts (provided you run your build tasks elsewhere, obviously), so it makes all your sources really easy to back up.
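
A deployment hook, for example, is just an executable file at hooks/post-receive inside the bare repo. Here is a sketch (the branch name and deploy path are obviously placeholders):

#!/usr/bin/env bash
# deploy the main branch to a work tree every time it's pushed
DEPLOY_DIR=/var/www/myapp
while read -r oldrev newrev refname; do
  if [ "$refname" = "refs/heads/main" ]; then
    git --work-tree="$DEPLOY_DIR" checkout -f main
    # then run your tests, build, restart the service, etc.
  fi
done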

And to discuss issues and changes, there is always email. :) There is also this, a code review tool that just popped up on HN.

And it works with Github! :) Just add a git remote pointing to Github, and you can push to it or fetch from it. You can even set up hooks to sync with it. I publish my FOSS projects both on Github and Gitlab, and the only thing I do to propagate changes is to push to my local bare repos that I use for easy backups; they each have a post-update hook which propagates the change everywhere it needs to be (to Github, Gitlab, and various machines on my local network, which then have their own post-update hooks to deploy the app/lib; there's a sketch of such a hook after the script below). The final touch: having this ~/git/ directory that contains all my bare repos (which are only a few hundred MB, so they fit perfectly in my backups) allowed me to create a git_grep_all script to do code search in all my repos at once (who needs elasticsearch anyway :D ):

#!/usr/bin/env bash
# grep recursively through the bare repos under the current directory
# (assumes repo paths contain no whitespace)

for dir in $(find . -name HEAD -exec dirname '{}' \;); do
  pushd "$dir" > /dev/null
  # only print something for repos where the search matches
  if git grep --quiet "$*" HEAD; then
    pwd
    git grep "$*" HEAD
    echo
  fi
  popd > /dev/null
done

(note that it uses pushd and popd, which are bash builtins; other shells will need another way to change directories)
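
And since I mentioned it above, the propagation hook itself is nothing fancy either. A sketch of hooks/post-update in the bare repo ("github" and "gitlab" being whatever names you gave your remotes):

#!/usr/bin/env bash
# mirror every update to the other remotes
for remote in github gitlab; do
  git push --mirror "$remote"
done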

The reason why you may still want a forge is if you have non-tech people who should be able to work on issues/epics/documentation/etc.

[–] Anafroj 1 points 2 years ago* (last edited 2 years ago) (2 children)

Thanks, that's a good idea.

The reason why it only works on page reload is that Lemmy is a SPA: it makes it look like you're browsing several pages, but it's actually always the same one, and it uses javascript to change the url and load new content. So the "load" event, triggered when the current page is done loading, only fires once, because the page is only loaded once. If you wonder why: SPAs became commonplace in the 2010s because javascript applications started to get way bigger than before, and it helped with page load speed. For a time… because when you make pages load faster, people just make them load more things until they're slow again. :)

My first reaction was that, in addition to binding to the load event, we could probably just bind to the popstate event, which happens when the url is programmatically changed. But my first tests were not successful. I'll have a quick look at the source code of Lemmy later today to see if I can solve this.
