
It's A Digital Disease!
This is a community that aims to bring data hoarders together to share their passion with like-minded people.

The original post: /r/datahoarder by /u/Specific-Judgment410 on 2025-03-28 18:13:03.

Original Title: Trying to download a niche wiki site for offline use, tried zimit but it takes far too long for simple sites, tried httrack but it struggles with modern sites, thinking of using CURL, how is everyone creating web archives of modern wiki sites?


What I'm trying to do is extract the content of a website that has a wiki-style format/layout. I dove into the source code, and there's a lot of pointless code I don't need. The content itself sits inside a frame/table, with the necessary formatting information in the CSS file. Just wondering if there's a smarter way to create an archive that's browsable offline on my phone or desktop?
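Roughly what I'm picturing for the extraction step, as a minimal sketch: it assumes the pages are plain server-rendered HTML and the article body sits in one container. The start URL and the CSS selector are placeholders, not the real site, and would need adjusting.

```python
# Minimal sketch: crawl a wiki and keep only the main content container
# from each page. START and SELECTOR are hypothetical placeholders.
import pathlib
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

START = "https://example-wiki.org/wiki/Main_Page"   # placeholder start page
SELECTOR = "div#mw-content-text"                    # assumed content container
OUT = pathlib.Path("archive")
OUT.mkdir(exist_ok=True)

seen, queue = set(), [START]
while queue:
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    body = soup.select_one(SELECTOR)
    if body is None:
        continue  # skip pages that don't match the expected layout
    # Save only the content block, dropping nav bars, scripts, and sidebars.
    name = urlparse(url).path.strip("/").replace("/", "_") or "index"
    (OUT / f"{name}.html").write_text(str(body), encoding="utf-8")
    # Follow only same-site links so the crawl stays inside the wiki.
    for a in body.find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == urlparse(START).netloc:
            queue.append(link)
```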

Ultimately I think I'll transpose everything into Obsidian MD (the note-taking app that has wiki-style features, works offline, and uses Markdown to format everything).
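For the Obsidian step, something like this conversion pass is what I'm imagining. It's a minimal sketch that assumes the content pages were already saved as .html files (e.g. by the crawl above); the folder names are placeholders, and html2text is just one converter that would work (markdownify is another).

```python
# Minimal sketch: convert saved HTML content pages into Markdown notes.
import pathlib
import html2text

converter = html2text.HTML2Text()
converter.body_width = 0        # don't hard-wrap lines; Obsidian soft-wraps
converter.ignore_images = False # keep image references in the notes

vault = pathlib.Path("obsidian-vault")  # placeholder output folder
vault.mkdir(exist_ok=True)

for page in pathlib.Path("archive").glob("*.html"):
    markdown = converter.handle(page.read_text(encoding="utf-8"))
    (vault / f"{page.stem}.md").write_text(markdown, encoding="utf-8")
```

The resulting .md files can then just be dropped into a vault folder and Obsidian should index them as regular notes.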
