UPDATE: @[email protected] has responded

It is temporary as lemmy.world was cascading duplicates at us and the only way to keep the site up reliably was to temporarily drop them. We’re in the process of adding more hardware to increase RAM, CPU cores and disk space. Once that new hardware is in place we can try turning on the firehose. Until then, please be patient.


ORIGINAL POST:

Starting sometime yesterday afternoon it looks like our instance started blocking lemmy.world: https://lemmy.sdf.org/instances

A screenshot of the page at https://lemmy.sdf.org/instances showing the lemmy.world instance on the blocklist

This is kind of a big deal, because a third of all active users originate there!

A pie chart depicting the top instances by user share. The lemmy.world instance is in the top spot with 1/3 of the total user share.

Was this decision intentional? If so, could we get some clarification about it? @[email protected]

  • SDF@lemmy.sdf.org (mod) · 1 year ago

    It is temporary as lemmy.world was cascading duplicates at us and the only way to keep the site up reliably was to temporarily drop them. We’re in the process of adding more hardware to increase RAM, CPU cores and disk space. Once that new hardware is in place we can try turning on the firehose. Until then, please be patient.

  • SDF@lemmy.sdf.org (mod) · 1 year ago

    Live updates are in progress: moved to SSDs, added more cores, and added 128 GB and 64 GB of RAM.

  • SDF@lemmy.sdf.org (mod) · 1 year ago

    Here is where we’re at now:

    • increased cores and memory, hopefully we never touch swap again
    • dedicated server for pict-rs with its own RAID
    • dedicated server for lemmy and postgresql with its own RAID
    • lemmy-ui and nginx run on both to handle ui requests
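    The split in the last two bullets could look roughly like the following nginx reverse-proxy sketch. This is purely illustrative: the actual SDF configuration is not public, and the upstream ports shown are Lemmy’s documented defaults (8536 for the backend, 1234 for lemmy-ui), not confirmed values.

    ```nginx
    # Hypothetical sketch: nginx and lemmy-ui run on both hosts, routing
    # API/federation traffic to the lemmy backend and everything else to lemmy-ui.
    upstream lemmy-backend {
        server 127.0.0.1:8536;   # lemmy BE (assumed default port)
    }
    upstream lemmy-ui {
        server 127.0.0.1:1234;   # lemmy-ui (assumed default port)
    }

    server {
        listen 443 ssl;
        server_name lemmy.sdf.org;

        # API, image (pict-rs), feed, and federation endpoints go to the backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass http://lemmy-backend;
        }

        # Everything else is server-rendered by lemmy-ui
        location / {
            proxy_pass http://lemmy-ui;
        }
    }
    ```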

    Thank you to everyone who stuck around and helped out; it is appreciated. We’re working on additional suggested tweaks from the Lemmy community and hope to let lemmy.world try to DoS us again soon. Hopefully we’ll do much better this time.

    • chaorace@lemmy.sdf.org (OP) · edited · 1 year ago

      Killer stuff! Sorry for contributing undue pressure on top of what was probably already a taxing procedure in the server room.

      Out of curiosity: how do you feel about Lemmy performance so far? I’m actually a little bit surprised that we already managed to outstrip the prior configuration. I suppose that inter-instance ActivityPub traffic just punches really hard regardless of intra-instance activity?

    • estee@lemmy.sdf.org · 1 year ago

      What can we (ordinary users) do to help? I’m in Europe, would it be better if I used the SDFeu server as my home instance?

      • SDF@lemmy.sdf.org (mod) · 1 year ago

        Yes, the fediverse wants to be decentralized, so you’re encouraged to use whatever works best for you. lemmy.sdfeu.org is located in Düsseldorf, Germany.

  • sidhant@lemmy.sdf.org · 1 year ago

    If this is true I will be making another account and moving to a different server. The only reason I joined this one was that it didn’t seem to block/defederate.

    • SDF@lemmy.sdf.org (mod) · 1 year ago

      You’re absolutely welcome to do that; it is your decision and you have many choices. We hope to build a community of folks who would like to help the fediverse grow and support smaller instances. Similar growing pains were seen during the Twitter exodus last September.

  • SDF@lemmy.sdf.org (mod) · 1 year ago

    Two things that would be great:

    • Have a tannoy/horn announcement icon at the top, like Mastodon’s, where status information can be posted.
    • Change the heart icon to link to a way of supporting the local instance.

    We’ve tried creating a thread for the near-daily BE and UI upgrades we’re going through, but even with pinning it doesn’t get enough visibility.

    We’re on site in about 1 hour to install a new RAID and once that is completed we’ll finish the transfer of pict-rs data.