38 points (100.0% liked) · submitted 1 year ago (04 Jul 2023) by [email protected] to c/[email protected]

With Twitter being worse than ever, I can no longer pull local news and municipal events through Nitter's RSS feature.

Since so many groups have stopped using RSS to deliver news, and have put all their eggs in the social media basket, it leaves a void that can't be replaced by signing up to a dozen newsletters.

Do you guys have any other solutions for maybe scraping websites to generate RSS feeds or something like that?

I'm using FreshRSS. It has web scraping, but it seems to require a lot of manual syntax entry, and it seems to error out regardless.

all 15 comments
[-] [email protected] 19 points 1 year ago
[-] [email protected] 2 points 1 year ago

Thank you. I'll try to get that set up on Docker at some point today.

[-] [email protected] 6 points 1 year ago
[-] [email protected] 6 points 1 year ago* (last edited 9 months ago)

FreshRSS is what I use, and I can create my own feeds using XPath. It's kinda great, but too much to explain here. I wrote a blog post about it.

https://joelchrono.xyz/blog/newsboat-queries-and-freshrss-scraping/
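
The short version: FreshRSS's HTML + XPath scraping feed type asks for an XPath that matches each news item on the page, plus XPaths (relative to the item) for the title, link, and so on. The selectors and URL below are made-up examples for a hypothetical site; a few lines of Python with lxml are a handy way to sanity-check your expressions before pasting them into FreshRSS:

```python
# Rough sketch for sanity-checking XPath selectors locally before entering
# them into FreshRSS's HTML + XPath scraping feed type. The URL and selectors
# are hypothetical examples -- adjust them to the page you want to scrape.
import requests
from lxml import html

page = requests.get("https://example-town.gov/news")  # hypothetical news page
tree = html.fromstring(page.content)

# XPath that selects one node per news item (the first thing FreshRSS asks for)
items = tree.xpath("//article[contains(@class, 'news-item')]")

for item in items:
    # Relative XPaths, corresponding to what you'd enter for title and link
    title = item.xpath(".//h2/a/text()")
    link = item.xpath(".//h2/a/@href")
    print(title, link)
```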

[-] [email protected] 5 points 1 year ago

Don't know if this will achieve what you want, but I self-host ChangeDetection.io to check whether webpages have been updated, then subscribe to ChangeDetection's RSS feed with FreshRSS.
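
If anyone wants to try it, a minimal compose sketch looks roughly like this (image name, port, and volume path are from memory, so double-check against the project's README):

```yaml
# Minimal sketch for self-hosting ChangeDetection.io; see the project's
# README for the currently recommended compose file.
services:
  changedetection:
    image: dgtlmoon/changedetection.io
    ports:
      - "5000:5000"          # web UI (and the RSS feed to subscribe to)
    volumes:
      - changedetection-data:/datastore
    restart: unless-stopped

volumes:
  changedetection-data:
```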

[-] [email protected] 1 points 1 year ago

Interesting option! I don't think it will suit my needs for this particular request, but I do have other uses for it =) Thank you for the suggestion.

[-] [email protected] 3 points 1 year ago

I use and have contributed to RSSHub. Most of my 200 feeds now come from there.
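
If you'd rather self-host it than rely on the public instance, a bare-bones compose sketch looks something like this (image and port are from memory; I think the official docs also add an optional Redis cache and a headless-browser service for routes that need one):

```yaml
# Bare-bones sketch for self-hosting RSSHub; check the official docs for the
# full compose file with the optional Redis and browser services.
services:
  rsshub:
    image: diygod/rsshub
    ports:
      - "1200:1200"          # feeds served at http://localhost:1200/<route>
    restart: unless-stopped
```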

[-] [email protected] 2 points 1 year ago

Hi,

Maybe look into rssparser.lisp

[-] [email protected] 2 points 1 year ago

I usually just resort to web scraping.

[-] [email protected] 1 points 1 year ago
[-] [email protected] 1 points 1 year ago

Fairly simple to do locally with Python, with no need for a server: use requests-html to fetch the website's front page, loop through the articles using feedgenerator to build up a feed object, then write it out as XML to a file.

Obviously this is not simple at all, but it does work. I have been consuming an RSS-free site via RSS every day for the last year. Provided you ensure the guid for each item is its URL, the RSS reader will keep track of what you have seen already, in order, which of course is the magic feature of RSS.
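
Something along these lines (the site URL and CSS selectors are made-up placeholders; the key detail is passing each article's URL as unique_id so it becomes the guid):

```python
# Stripped-down sketch of the approach described above. The site URL and
# CSS selectors are hypothetical -- adapt them to the page you're scraping.
import feedgenerator
from requests_html import HTMLSession

SITE = "https://example-town.gov/news"  # hypothetical RSS-free site

session = HTMLSession()
r = session.get(SITE)

feed = feedgenerator.Rss201rev2Feed(
    title="Example Town news (scraped)",
    link=SITE,
    description="Unofficial feed generated by scraping the front page",
)

# One <item> per article found on the front page
for article in r.html.find("article"):
    link_el = article.find("h2 a", first=True)
    if link_el is None:
        continue
    url = next(iter(link_el.absolute_links), SITE)
    feed.add_item(
        title=link_el.text,
        link=url,
        description=link_el.text,
        unique_id=url,  # guid = URL, so the reader tracks what you've already seen
    )

# Write the XML out; point FreshRSS (or any reader) at this file
with open("example-town.xml", "w", encoding="utf-8") as f:
    feed.write(f, "utf-8")
```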

[-] ALERT 1 points 1 year ago

Also, rssbox and rss-proxy.
