I'll be stuck in areas with little or no internet, and having a way to save a whole website (such as a small community wiki or something) to browse while bored would be very nice. Ideally, features like search would keep working too. Any suggestions for a FOSS app that can do this?

  • kekmacska · 5 hours ago

You can’t. How do you imagine saving the SQL databases that you’d need for logging in, viewing user profiles and so on? At most, you can save a snapshot.

  • Boomkop3@reddthat.com · 6 hours ago

A Chrome extension called WebScrapBook does the trick. Install it on a Chromium-based browser, such as Kiwi or one that isn’t getting discontinued.

  • lukmly013@lemmy.sdf.org · 8 hours ago

    I used wget to download static sites, or at least ones with simpler JavaScript, but it won’t download any required files that are only linked in JS code, so it probably won’t work for many sites.

    You also need to be careful when spanning hosts so that you don’t accidentally (attempt to) download the entire internet. Then there’s rate limiting, the user agent, the robots file, filename restrictions (so it doesn’t save files with characters that have other meanings in URLs, like # and ?), filename extensions (so files get served back with the correct MIME type), taking filenames from the server rather than the URL when appropriate, converting links (which only works in HTML files), and probably something else I’m forgetting.
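
    Roughly, a wget invocation that covers those points might look like this (just a sketch; the URL is a placeholder and the limits are examples to tune per site):

    ```bash
    # Mirror one site for offline browsing; wiki.example.org is a placeholder.
    # --mirror + --page-requisites: recurse and also grab images/CSS the pages need
    # --convert-links: rewrite links for offline use (HTML files only)
    # --adjust-extension: append .html/.css so files get the right MIME type later
    # --restrict-file-names=windows: keep URL characters like ? and # out of filenames
    # --content-disposition: take filenames from the server when it provides them
    # --wait/--random-wait/--limit-rate: basic politeness / rate limiting
    # wget stays on the starting host by default; only add --span-hosts (with
    # --domains=...) if assets live elsewhere, and it obeys robots.txt unless
    # you deliberately override that with -e robots=off.
    wget --mirror --page-requisites --no-parent \
         --convert-links --adjust-extension \
         --restrict-file-names=windows --content-disposition \
         --wait=1 --random-wait --limit-rate=500k \
         --user-agent="offline-mirror/0.1" \
         https://wiki.example.org/
    ```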

    Oh, and it’s a single process doing one request at a time, so even just a page with too many images will take ages. E.g.: http://cyber.dabamos.de/88x31/ (currently offline).
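
    If the bottleneck is a single page with hundreds of small images, one workaround is to pull the image URLs out first (urls.txt below, one URL per line, is assumed) and fetch them with several wget processes at once:

    ```bash
    # Eight wget processes, one URL each; go easy on -P against small servers,
    # since parallelism works against the rate limiting mentioned above.
    xargs -n 1 -P 8 wget --no-clobber --limit-rate=500k < urls.txt
    ```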

    You can then easily serve them using NGINX, or just browse them as files, though the latter may not work well on something like a phone. Oh, one more thing: image.jpg and Image.jpg would conflict on Android’s shared storage, and some websites have differences like that, so the mirror can only be stored within Termux (and served using NGINX in Termux).
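
    A minimal sketch of the NGINX-in-Termux part, assuming the mirror ended up in the Termux home directory under ~/mirror (paths and port are assumptions, adjust as needed):

    ```bash
    # Install nginx in Termux and point it at the mirrored files.
    pkg install nginx
    printf '%s\n' \
      'events {}' \
      'http {' \
      '  include mime.types;   # serve .html/.css/.jpg with correct MIME types' \
      '  server {' \
      '    listen 8080;' \
      '    root /data/data/com.termux/files/home/mirror;' \
      '    index index.html;' \
      '  }' \
      '}' > "$PREFIX/etc/nginx/nginx.conf"
    nginx   # then open http://localhost:8080 in the phone's browser; nginx -s stop to quit
    ```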

      • asudox@lemmy.asudox.dev · 8 hours ago

        You can download the website’s static files (HTML, CSS, images, etc.), but features such as search won’t function if they work by querying some database.

        IIRC most browsers have a way to make websites available offline. I know Chromium has it, but Firefox does not; you’d probably need an extension for that. Or you can download the static files, store them in a directory manually and then open the index.html with Firefox. That should work.
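
        For the manual route, opening the saved copy is then just this (the path is only an example):

        ```bash
        # Open the saved copy's front page straight from disk.
        firefox "file://$HOME/mirror/index.html"
        ```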

      • fcat@ieji.de · 7 hours ago

        @marcie @asudox On Firefox, can you try Ctrl+S and choosing the complete web page save? That’s usually enough. Though if the site calls an API for searching, that’s not gonna work.

    • marcie (she/her)@lemmy.ml (OP) · 8 hours ago

      While it seems Chromium-based browsers are able to download pages to view later, it doesn’t seem like that saves a whole website.

      • Trent@lemmy.ml · 6 hours ago

        Kiwix isn’t exactly a web browser and doesn’t download web pages the way your browser saves them. It uses a specialized file format (ZIM), and it can be used to back up an entire site. For instance, the Kiwix library has an offline copy of Wikipedia (no images), but it weighed in at more than 100 GB last I looked.
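
        For what it’s worth, the command-line side of that is simple once you’ve downloaded a ZIM file from the Kiwix library (the filename below is just an example), and search keeps working offline because the ZIM file ships its own full-text index:

        ```bash
        # Serve a downloaded ZIM archive locally and browse/search it
        # at http://localhost:8080 (filename is an example from the library).
        kiwix-serve --port 8080 wikipedia_en_all_nopic.zim
        ```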