I saw this post and I was curious what was out there.

https://neuromatch.social/@jonny/113444325077647843

I'd like to put my lab servers to work archiving US federal data that's likely to get pulled; climate and biomedical data seem most likely. The most obvious strategy to me is setting up mirror torrents on academictorrents. Is anyone compiling a list of at-risk data yet?
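
As a first pass on the torrent route, something like this could package a mirrored dataset (a rough sketch using the third-party torf library; the paths are placeholders and the announce URL is only what I believe Academic Torrents uses, so double-check it):

# Sketch: build a .torrent for a mirrored dataset directory with `torf` (pip install torf).
# Paths and tracker URL below are placeholders/assumptions, not a tested setup.
from torf import Torrent

DATASET_DIR = "/srv/mirrors/noaa-ghcn-daily"               # hypothetical local mirror
TRACKER = "https://academictorrents.com/announce.php"      # assumed announce URL

torrent = Torrent(
    path=DATASET_DIR,
    trackers=[TRACKER],
    comment="Mirror of a NOAA climate dataset (example)",
)
torrent.generate()                        # hash all pieces; slow for large datasets
torrent.write("noaa-ghcn-daily.torrent")  # upload to Academic Torrents, then keep seeding

Seeding from a couple of lab servers, plus whoever else grabs a copy, gives the redundancy a single web host can't.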

  • scientific_railroads@lemmy.world · 5 hours ago

    For myself: Wayback. It saves links to multiple web archives and gives me PDF and WARC files.

    For others: Archive Team has a few active projects to save at-risk data, and there is an IRC channel where people can suggest other websites worth saving. They also have a wiki explaining how people can help.
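
    If you want to script the Wayback half yourself, here is a minimal sketch using only the public Save Page Now endpoint and the requests library (the snapshot-URL handling is an assumption; the response format has changed over the years):

    # Sketch: ask the Wayback Machine to capture a URL via its Save Page Now endpoint.
    import requests

    def save_to_wayback(url: str) -> str:
        resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
        resp.raise_for_status()
        # The snapshot path is often returned in the Content-Location header,
        # but that is not guaranteed; fall back to the final response URL.
        snapshot = resp.headers.get("Content-Location", "")
        return f"https://web.archive.org{snapshot}" if snapshot else resp.url

    print(save_to_wayback("https://www.noaa.gov/"))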

    • AnUnusualRelic@lemmy.world · 8 hours ago (edited)

      archive.org is hosted in the US and could end up being a valid target. It doesn’t strike me as being a very good place to securely store anything nowadays. I’d consider anything hosted in the US to be out.

      • Ludrol@szmer.info · 4 hours ago

        Depends on the threat model.

        NOAA and others get underfunded or undergo a change of management and have to shut down open access to their data.

        or

        Data becomes illegal to possess and the feds start knocking on the Internet Archive's door.

        or

        The Internet Archive does something stupid and gets sued or DDoSed.

        Only in one very unlikely scenario would it become unavailable because of recent events. Still, redundancy would be good regardless.

  • yasser_kaddoura@lemmy.world · 11 hours ago (edited)

    I have a script that archives to multiple web archives.

    I used to solely depend on archive.org, but after the recent attacks, I expanded my options.

    Script: https://gist.github.com/YasserKa/9a02bc50e75e7239f6f0c8f04fe4cfb1

    EDIT: Added script. Note that the script doesn't include archiving to ArchiveBox, since its API isn't available in a stable version yet. You can add a function depending on your setup. Personally, I rely on Caddy and Docker, so I use the caddy-exec module [1] to execute commands with this in my Caddyfile:

    route /add {
        # only run the command when the request actually carries a ?url= parameter
        @params query url=*
        exec @params docker exec --user=archivebox archivebox archivebox add {http.request.uri.query.url} {
            timeout 0
        }
    }
    

    [1] https://github.com/abiosoft/caddy-exec
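
    With that route in place, anything that can make an HTTP request can queue a page for archiving; for example (the host name is made up):

    import requests

    # Hypothetical Caddy host; use whatever domain your /add route is served on.
    requests.get("https://archive.example.com/add",
                 params={"url": "https://www.noaa.gov/"}, timeout=10)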

  • Krafting@lemmy.world · 10 hours ago

    I archive YouTube videos that I like with TubeArchivist. I have a playlist for random videos I'd like to keep, and I also subscribe to some of my favourite creators so I can keep their videos, even when I'm offline.
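
    For anyone who doesn't want the full TubeArchivist stack, here is a rough sketch of the same idea with plain yt-dlp (which TubeArchivist builds on); the playlist URL is just a placeholder:

    # Sketch: mirror a "keep these" playlist to disk with yt-dlp (pip install yt-dlp).
    import yt_dlp

    PLAYLIST_URL = "https://www.youtube.com/playlist?list=PLxxxxxxxx"  # placeholder

    ydl_opts = {
        "outtmpl": "archive/%(uploader)s/%(title)s [%(id)s].%(ext)s",  # on-disk layout
        "download_archive": "archive/downloaded.txt",  # skip videos already fetched
        "writeinfojson": True,   # keep metadata next to each video
        "ignoreerrors": True,    # one broken video shouldn't abort the playlist
    }

    with yt_dlp.YoutubeDL(ydl_opts) as ydl:
        ydl.download([PLAYLIST_URL])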

      • Krafting@lemmy.world · 9 hours ago

        Seems nice, but you need an external player to watch the content, which can be good for some people; I like the web UI of TubeArchivist, though (even if it could certainly be improved).

  • Otter@lemmy.ca (OP) · 19 hours ago

    One option that I’ve heard of in the past

    https://archivebox.io/

    ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.
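
    A rough sketch of driving it from a script or cron job, assuming a bare-metal install with a data directory at the path shown (a Docker setup would wrap the command differently, as in the Caddy example elsewhere in this thread):

    # Sketch: batch-add URLs to an existing ArchiveBox collection.
    import subprocess

    ARCHIVE_DIR = "/srv/archivebox"   # hypothetical ArchiveBox data directory
    URLS = [
        "https://www.noaa.gov/",
        "https://example.org/some-dataset",   # placeholder
    ]

    for url in URLS:
        # archivebox operates on the collection in the current working directory
        subprocess.run(["archivebox", "add", url], cwd=ARCHIVE_DIR, check=True)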

    • tomtomtom@lemmy.world · 9 hours ago

      I am using ArchiveBox; it is pretty straightforward to self-host and use.

      However, it is very difficult to archive most news sites (and many other sites) with it. Cookie-consent and similar pop-ups will render the archived page unusable, and often archiving won't work at all because bot protection (Cloudflare etc.) kicks in when ArchiveBox tries to access the site.

      If anyone else has more success using it, please let me know if I am doing something wrong…

      • Daniel Quinn@lemmy.ca · 1 hour ago

        Monolith has the same problem here. I think the best resolution might be some sort of browser-plugin based solution where you could say “archive this” and have it push the result somewhere.

        I wonder if I could combine a dumb plugin with Monolith to do that… A weekend project perhaps.
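
        A very rough sketch of the server half of that idea: a tiny endpoint a browser plugin could hit with a URL, which then shells out to monolith (the route, port, and output path are all made up):

        # Sketch: minimal HTTP endpoint that archives a page with monolith on request.
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        class ArchiveHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Expect requests like /archive?url=https://example.com
                query = parse_qs(urlparse(self.path).query)
                url = query.get("url", [None])[0]
                if not url:
                    self.send_response(400)
                    self.end_headers()
                    return
                out = f"/srv/archive/{abs(hash(url))}.html"   # crude output naming
                subprocess.run(["monolith", url, "-o", out], check=False)
                self.send_response(200)
                self.end_headers()
                self.wfile.write(out.encode())

        HTTPServer(("127.0.0.1", 8080), ArchiveHandler).serve_forever()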

    • Boomkop3@reddthat.com · 14 hours ago

      I heard recently that some companies have started shipping non-M-DISC media labelled as M-DISCs. You may want to have a look.

      • catloaf@lemm.ee · 9 hours ago

        In that they’re a single organization, yes, but I’m a single person with significantly fewer resources. Non-availability is a significantly higher risk for things I host personally.

    • Otter@lemmy.ca (OP) · 19 hours ago

      There was the attack on the Internet Archive recently; are there any good options out there to help mirror some of the data or otherwise provide redundancy?

      • Deebster@infosec.pub · 18 hours ago

        Your argument is that a single backup is sufficient? I disagree, and I think most in the selfhosted and datahoarder communities would too.