I spent several hours tracing in production (updating the code a dozen times with extra logging) to identify the actual path the lemmy_server code takes for outbound federation of votes to subscribed servers.

The major popular servers - Beehaw, Lemmy.world, Lemmy.ml - have a large number of instance servers subscribing to their communities to get copies of every post/comment. Comment votes/likes are the most common activity, and it is proposed that during the PERFORMANCE CRISIS these overwhelmed servers turn off outbound vote/like sharing.

pull request for draft:

https://github.com/LemmyNet/lemmy/compare/main...RocketDerp:lemmy_comment_votes_nofed1:no_federation_of_votes_outbound0

EDIT: LEMMY_SKIP_FEDERATE_VOTES environment variable
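
A minimal sketch of how an environment-variable kill switch like this could be wired up (the variable name comes from the edit above; the helper function and its call site are hypothetical, not the actual patch):

```rust
use std::env;

// Hypothetical helper: returns true when the operator has set
// LEMMY_SKIP_FEDERATE_VOTES, meaning vote/undo-vote activities
// should not be announced to subscribed instances.
fn skip_federate_votes() -> bool {
    env::var("LEMMY_SKIP_FEDERATE_VOTES").is_ok()
}

fn main() {
    if skip_federate_votes() {
        println!("vote federation disabled");
    } else {
        println!("vote federation enabled");
    }
}
```

The check is just "is the variable set at all", so an admin can enable it with any value and disable it by unsetting it.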

  • King@vlemmy.net · 1 year ago

    Thanks for doing all this.

    Do we have any real numbers from a real server? How many votes are being federated to how many servers?

    Just ballparking some approximate numbers:

    • [email protected]
    • 15k subscribers
    • 4000 subscribed servers
    • 10 votes per subscriber per day

    15,000 × 4,000 × 10 = 600,000,000 federated actions per day. That is around 7,000 per second, 24/7, for one community.

    IMO, this real time federation just doesn’t scale. We need to start planning the specs for federation batching.
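
    The back-of-the-envelope math above, spelled out (same assumed numbers as the bullet list; the per-second figure rounds to roughly 7,000):

```rust
fn main() {
    let subscribers: u64 = 15_000;
    let servers: u64 = 4_000;
    let votes_per_subscriber_per_day: u64 = 10;

    // Every vote has to be announced separately to every subscribed server.
    let actions_per_day = subscribers * votes_per_subscriber_per_day * servers;
    let actions_per_second = actions_per_day / 86_400; // seconds in a day

    println!("{actions_per_day} federated actions/day"); // 600000000
    println!("~{actions_per_second} actions/second");    // ~6944
}
```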

    • RoundSparrow@lemmy.mlOPM · 1 year ago

      I’m hoping the ‘subscribed servers’ count is maybe only 300 or so? But I don’t know; in my experience the big sites haven’t been sharing that kind of information. They did say there were “millions” of outbound federation tasks. I expect the number of votes per user is higher than your number. They did put in code changes to detect servers they can’t reach and stop attempting delivery.

      We need to start planning the specs for federation batching.

      I think a pull app that goes around to servers with content and uses the front-end API to grab 300 or more comments at a time is the way to go. The client API is geared toward batch delivery. Since lemmy.ml is so unstable for discussion, I opened a topic on GitHub: https://github.com/RocketDerp/lemmy_helper/discussions/4 - where I proposed a new /api/syncshare endpoint to get more raw data out of the PostgreSQL tables.
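
      A rough sketch of the pull loop being described (the page size, the id type, and `fetch_page` are all stand-ins; a real version would call something like Lemmy's comment-list endpoint, which paginates with much smaller limits):

```rust
// Hypothetical paginated pull: keep requesting batches of comments
// until the source returns fewer than a full page. `fetch_page`
// stands in for an HTTP call to the serving instance's client API.
fn pull_all<F>(page_size: usize, mut fetch_page: F) -> Vec<u64>
where
    F: FnMut(usize, usize) -> Vec<u64>, // (page, limit) -> comment ids
{
    let mut all = Vec::new();
    let mut page = 1;
    loop {
        let batch = fetch_page(page, page_size);
        let done = batch.len() < page_size; // short page = no more data
        all.extend(batch);
        if done {
            break;
        }
        page += 1;
    }
    all
}

fn main() {
    // Simulated source with 7 comments, pulled 3 at a time.
    let comments: Vec<u64> = (1..=7).collect();
    let ids = pull_all(3, |page, limit| {
        comments.iter().copied().skip((page - 1) * limit).take(limit).collect()
    });
    println!("pulled {} comments in batches", ids.len()); // pulled 7 comments
}
```

      The point of the design is that one HTTP round trip moves a whole page of content, instead of one federation task per vote.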

  • chiisana@lemmy.chiisana.net · 1 year ago

    Part of what makes Lemmy (and other voting link aggregators) work is the voting aspect. Taking away outbound vote federation forces further consolidation into these popular instances, thereby further exacerbating the problem: once they are even more consolidated, the posts and comments themselves eventually become the bottleneck for the exact same underlying chatty protocol. For this reason, I’d be vehemently against this change without a paired PR that allows this information to be requested via a batch pull that the protocol makes available.

  • King@lemm.ee · 1 year ago

    Somewhat related, but why are we federating individual votes? Why not just federate the upvote count and downvote count? Does each server need to track the identity of every voter on a subscribed community?

    Each server already tracks votes from its own users, preventing duplicate votes.
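
    The duplicate-vote prevention mentioned here amounts to a uniqueness constraint on (user, comment). A toy in-memory illustration of that idea (Lemmy actually enforces it with a unique index in PostgreSQL, not a HashMap):

```rust
use std::collections::HashMap;

// Toy illustration: one recorded score per (user, comment) pair,
// so a repeat vote replaces the old one instead of double-counting.
struct VoteStore {
    votes: HashMap<(u64, u64), i8>, // (user_id, comment_id) -> score
}

impl VoteStore {
    fn new() -> Self {
        Self { votes: HashMap::new() }
    }

    fn vote(&mut self, user_id: u64, comment_id: u64, score: i8) {
        self.votes.insert((user_id, comment_id), score);
    }

    fn tally(&self, comment_id: u64) -> i64 {
        self.votes
            .iter()
            .filter(|((_, c), _)| *c == comment_id)
            .map(|(_, s)| *s as i64)
            .sum()
    }
}

fn main() {
    let mut store = VoteStore::new();
    store.vote(1, 42, 1);
    store.vote(1, 42, 1); // duplicate from user 1: replaced, not added
    store.vote(2, 42, -1);
    println!("score: {}", store.tally(42)); // prints "score: 0"
}
```

    Federating only the aggregate counts would drop the (user, target) key, which is exactly what makes this dedup possible on the receiving side.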

    • RoundSparrow@lemmy.mlOPM · 1 year ago

      Why not just federate the upvote count and downvote count?

      I think the answer to that is that it isn’t an optimized design.

      Does each server need to track the identity of every voter on a subscribed community?

      I think so. And it isn’t a terrible assumption that a user who votes will eventually comment or post, at which point that profile will be of use.

  • RoundSparrow@lemmy.mlOPM · 1 year ago

    I have no idea why the file is named “mod.rs” (apparently it is just Rust’s conventional name for a module root file, nothing to do with moderators), as normal non-admin, non-moderator users seem to go through this code path.

      • RoundSparrow@lemmy.mlOPM · 1 year ago (edited)

        OK, I figured out how to get Rust to match the enum. Is there a way to do this with match instead of if statements?

        +    if let AnnouncableActivities::UndoVote(_) = activity {
        +      warn!("zebratrace310 SKIP UndoVote");
        +    } else if let AnnouncableActivities::Vote(_) = activity {
        +      warn!("zebratrace310A SKIP Vote");
        +    } else {
        +      warn!("zebratrace311 send");
        +      AnnounceActivity::send(activity.clone().try_into()?, community, context).await?;
        +    };
        

        The code seems to work great: it blocks UndoVote/Vote but still does the send on comment replies.

        • kkard2@lemmy.ml · 1 year ago

          A more “correct” way would be this:

          match activity {
              AnnouncableActivities::UndoVote(_) => warn!("zebratrace310 SKIP UndoVote"),
              AnnouncableActivities::Vote(_) => warn!("zebratrace310A SKIP Vote"),
              _ => {
                  warn!("zebratrace311 send");
                  AnnounceActivity::send(activity.clone().try_into()?, community, context).await?;
              },
          }
          

          here it is in the rust book: https://doc.rust-lang.org/stable/book/ch06-02-match.html

          • RoundSparrow@lemmy.mlOPM · 1 year ago

            Cool. I found the syntax for matching multiple patterns in one arm, which is what I was looking for:

            match activity {
                AnnouncableActivities::UndoVote(_) |
                AnnouncableActivities::Vote(_) => {
                    warn!("zebratrace310 SKIP federating Vote/UndoVote");
                },
                _ => {
                    warn!("zebratrace311 send");
                    AnnounceActivity::send(activity.clone().try_into()?, community, context).await?;
                },
            }
            

            Thank you.