sneakyninjapants
It depends on whether you were the first person on your instance to subscribe, and whether that subscription happened before or after the posts were made. Lemmy doesn't backfill content, which means only content created after the subscription will be visible to your instance. I'm not a fan of that personally, but I can see why they did it that way.
The way I have my monitoring set up is to poll the containers from behind the proxy layer, e.g. if I'm trying to poll Portainer:

```yaml
services:
  portainer:
    ...
```

With the service name `portainer`, polling it from uptime-kuma within the same docker network would look like `portainer:9000` (9000 being Portainer's default HTTP port).
I can confirm this works to monitor that the service is reachable. It doesn't, however, ensure that you can reach it from your computer, because that depends on whether your reverse proxy is configured correctly and isn't down; but that's all I wanted in my case.
Edit: If you want to poll the HTTP endpoint, you'd prepend the scheme, like `http://whatever_service:whatever_port`.
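To illustrate what that monitor is doing, here's a minimal, untested sketch of the same check as a standalone script; it assumes it runs from a container attached to the same docker network, so the service name resolves through Docker's embedded DNS:

```python
# Minimal reachability probe for a service on a shared docker network.
# Docker's embedded DNS resolves the compose service name ("portainer")
# to the container's IP; 9000 is Portainer's default HTTP port.
import urllib.request

TARGET = "http://portainer:9000"

try:
    with urllib.request.urlopen(TARGET, timeout=5) as resp:
        print(f"{TARGET} is reachable (HTTP {resp.status})")
except Exception as exc:
    print(f"{TARGET} is unreachable: {exc}")
```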
> I believe Pictrs is a hard dependency and Lemmy just won't work without it, and there is no way to disable the caching.
I'll have to double-check this, but I'm almost certain pictrs isn't a hard dependency. I saw either the author or one of the contributors mention a few days ago that pictrs could be discarded by editing config.hjson to remove the pictrs block. I was playing around with deploying a test instance a few days ago and found it to be true, at least up to finalizing the server setup. I didn't spin up the pictrs container at all, so I know that the server will at least start and let me configure it.
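For reference, the block in question in the stock config.hjson looks something like this (shape from memory, so it may differ between versions; check the config that ships with yours):

```hjson
{
  # ...rest of the config...
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "my-pictrs-api-key"
  }
}
```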
The one thing I'm not sure of, however, is whether any cached data gets written to the container layer in lieu of being sent to pictrs, as I didn't get that far (yet). I haven't seen any mention that the backend even does local storage, so I'm assuming that no caching is taking place when pictrs is not being used.
Edit: Clarifications
Thanks for sharing! I'll definitely be looking into adding this to my infra alerting stack. It should pair well with webhooks using ntfy for notifications. Currently I just have bash scripts pushing to uptime-kuma for disk-usage monitoring as a dead man's switch, but this should be better as a first-line method. Not to mention all the other functionality it has baked in.
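For anyone curious about the dead man's switch approach: the script pings uptime-kuma's push URL only while things look healthy, so alerting fires when the pings stop. A rough Python equivalent of one of those bash scripts (the base URL and push token here are placeholders, and the 90% threshold is arbitrary) might look like:

```python
# Dead man's switch: ping the uptime-kuma push monitor only while disk
# usage is under the threshold. If this script dies or the disk fills
# up, the pings stop, the monitor times out, and the alert fires.
import shutil
import urllib.parse
import urllib.request

PUSH_URL = "https://kuma.example.com/api/push/XXXXXXXX"  # placeholder push-monitor URL
THRESHOLD = 0.90  # alert once the filesystem is more than 90% full

usage = shutil.disk_usage("/")
fraction_used = usage.used / usage.total

if fraction_used < THRESHOLD:
    msg = urllib.parse.quote(f"disk {fraction_used:.0%} full")
    urllib.request.urlopen(f"{PUSH_URL}?status=up&msg={msg}", timeout=10)
```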
Edit: It would also be great if there were an already-compiled binary in each release so I could run it bare-metal, but the container on ghcr.io is most likely what I'll be using anyway. Thanks for not uploading only to Docker Hub.
IIRC Apx uses distrobox under the hood, so in that case, yes.
They mostly do by default, which is pretty annoying, but there are ways around it. I'm currently self-hosting a Miniflux instance where I can set, per feed, whether or not it will try to parse the full text of each article. Most of the time that works, but on the off chance it doesn't, I fall back to Morss by prepending `http://fulltext/` to the feed URL.
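For example, assuming `fulltext` is just the hostname my Morss container resolves to, a feed like `https://example.com/feed.xml` would become `http://fulltext/https://example.com/feed.xml` (the exact URL shape may vary with how your Morss instance is set up).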
> I have reservations about running either the agent or Portainer itself on something external to my LAN.
I don't feel like it's safe enough either, personally, so I just have Portainer edge-agent nodes connected to the primary on my intranet through VPN tunnels. I really, really would prefer to never open ports on my local firewall, but being able to monitor and control remote docker hosts is also pretty convenient, so my solution has been decent for me.
Agree completely. In the grand scheme of things, the damage that appears to have happened here is small potatoes, but it brought attention to the vulnerability, so it was patched quickly. Going forward, the authors and contributors of the project might be a bit more focused on hardening the software against these types of vulnerabilities. Pen testing is invaluable on wide-user-base, internet-accessible platforms like this because it makes for better, more secure software. Unfortunately this breach wasn't under the "ethical pen testing" umbrella, but it sure as hell brought the vulnerability to the mindshare of everyone with a stake in it, so I view it as a net win.
Coming in late here, but I think your best starting point is to find someone who has published a list of known federated Lemmy servers, or to build your own:
- I think there's an API endpoint (IDK if you have to be an authenticated user to access it) that lists which servers a particular server is federated with.
- Use that to query all the servers in that list at the same endpoint, deduplicate, and repeat to build a graph of the fediverse.
- From there you can use a different API endpoint to query which servers have open vs. closed registration.
- Then you can ping each server to find the latency, but that's not the whole picture:
- Some servers are starved for resources, or on an older, less optimized version of the software, so there may be a way to use the API to navigate to random posts and capture the time that takes to complete; probably a more useful metric.
- It might also be a good idea to get a metric for the number of users on each server, as that might sway your opinion one way or the other.
- There might be an endpoint to query the number of banned users, but I don't recall seeing it.
IDK if you're interested in doing that work, but I don't think anyone has published tooling so far that you can run on your desktop to get that performance info. There are already Python libraries out there for interacting with the Lemmy API, so that's a good jumping-off point; something like the sketch below.
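As a rough, untested starting point against the plain HTTP API (endpoint and field names are from memory of the v3 API, so verify them against the docs; `lemmy.ml` is just an arbitrary seed):

```python
# Crawl the federation graph and collect per-server stats.
# Endpoint/field names are from memory of the Lemmy v3 API; verify
# against the API docs before relying on any of this.
import time
import requests

def linked_instances(host: str) -> list[str]:
    """Domains this instance federates with."""
    r = requests.get(f"https://{host}/api/v3/federated_instances", timeout=10)
    r.raise_for_status()
    return [i["domain"] for i in r.json()["federated_instances"]["linked"]]

def site_info(host: str) -> dict:
    """Registration mode, user count, and a crude latency number."""
    start = time.monotonic()
    r = requests.get(f"https://{host}/api/v3/site", timeout=10)
    latency = time.monotonic() - start
    r.raise_for_status()
    site = r.json()["site_view"]
    return {
        "host": host,
        "latency_s": round(latency, 3),
        "registration": site["local_site"]["registration_mode"],
        "users": site["counts"]["users"],
    }

if __name__ == "__main__":
    seen, frontier = set(), ["lemmy.ml"]
    while frontier and len(seen) < 50:  # keep it small while experimenting
        host = frontier.pop()
        if host in seen:
            continue
        seen.add(host)
        try:
            frontier.extend(linked_instances(host))
            print(site_info(host))
        except Exception as exc:
            print(f"{host}: {exc}")
```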
Edit: Now that I'm thinking about it, that could be pretty useful for the main website(s). They could use those types of queries on the backend to help with suggestions for new-user onboarding.
Using a self-sourced 350mm^3^ Voron 2.4r1 as my primary, and I have been very happy with it. I also have an Ender 3 kicking around somewhere and may or may not end up converting it into an Enderwire eventually.
Oh, that's interesting; that's the first I've heard of it. I wonder how one would go about testing whether that works.