• vithigar@lemmy.ca · 7 months ago

    If you’re paying for 100 Mbps, and the person you’re talking to is paying for 100 Mbps, and you’re not consistently getting 100 Mbps between you, then at least one of you is getting ripped off.

    That’s only really true if you’re relatively close to each other on the same ISP. The farther apart you are and the more hops you need to make, the less likely it becomes, through no fault of your ISP.

    • mozz@mbin.grits.dev · 7 months ago

      Incorrect, and that was exactly my point

      This is like saying that if the fruit at a store is sometimes rotten, it’s not the grocer’s fault because the fruit had to come a long way and went bad in transit. The exact job you are paying the ISP for is to deal with the hops and give you good internet. It’s actually a lot easier at the trunk level (because the pipes are bigger and more reliable, and there are more of them, so there’s more redundancy and predictability and they get more attention).

      I won’t say there isn’t some isolated exception, but in reality it’s a small, small, small minority of the time. Take an internet connection that’s having difficulty getting the advertised speed and run mtr or something, and I can almost guarantee you’ll find that the problem is near one end or the other, where there’s only one pipe and maybe it’s having hardware trouble, or is individually underprovisioned, or something.
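
      (A minimal sketch of the kind of check described here, assuming a Unix-ish machine with mtr installed: run it in report mode and flag the first hop showing packet loss. The column parsing is an assumption based on mtr’s usual report layout, which varies between versions.)

      ```python
      # Rough sketch: run mtr in report mode and flag the first hop that
      # shows packet loss. Parsing assumes mtr's usual report columns
      # ("3.|-- host  Loss%  Snt  ..."), which can vary by version.
      import subprocess

      def first_lossy_hop(host: str, cycles: int = 10):
          report = subprocess.run(
              ["mtr", "--report", "--report-cycles", str(cycles), host],
              capture_output=True, text=True, check=True,
          ).stdout
          for line in report.splitlines():
              parts = line.split()
              # Hop rows start with something like "3.|--"; skip the header.
              if parts and parts[0].rstrip(".|-").isdigit() and len(parts) >= 3:
                  loss = float(parts[2].rstrip("%"))
                  if loss > 0:
                      return parts[1], loss  # (hop hostname, loss percent)
          return None  # no loss seen on any hop

      # first_lossy_hop("example.com") -> ("some.router.net", 12.0) or None
      ```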

      Actually, Verizon deliberately underprovisioning Netflix is the exception that proves the rule – that was a case where an upstream pipe really wasn’t big enough to carry all the needed traffic, but it was perfectly visible to them, they could easily have solved it if they wanted to, and they chose not to, and the result was visibly different from normal internet performance in almost any other case.

      • vithigar@lemmy.ca · 7 months ago

        I probably should’ve been a little clearer that I’m talking scales of thousands of km here.

        I’m on an island in the North Atlantic. I don’t hold it against my ISP if I can’t get my full 1.5 Gbps down from services hosted in California.

        • mozz@mbin.grits.dev · 7 months ago

          Yeah, makes sense, that’s a little different. In that case there is actually congestion on the trunk that makes things slow for the customers.

          My point, I guess, is that the people who want to sell a “fast lane” to their customers, or want to say Net Neutrality is the reason your home internet is slow when you’re accessing North America, are lying. Neutrally-applied traffic shaping to make things work is allowed, of course; they just want to throttle their competitors, and they’re annoyed that the government is allowed to tell them not to.

    • TonyOstrich@lemmy.world · 7 months ago

      Ehhh, I get what you are saying, but I would rephrase the above poster’s comment a little then. If a person is paying for 100 Mbps and they are able to find a source, or some combination of sources, that can supply them 100 Mbps of data, then that’s what they should be getting. The easiest example being a torrent for a popular Linux distro.

      I personally think the solution to that should be some kind of regulatory minimum around the advertisement of speed, or a contractual service obligation. For example, if a person pays for a 100 Mbps connection then the ISP should be required to supply that speed at +/- 5% instantaneous and -0.5% on average (because if you give them a range you know they will maintain the lowest possible speed to be in compliance).

      Don’t look too hard at my numbers, I pulled them out of my ass, but hopefully it gets across the idea.
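
      (Just to make the arithmetic concrete, a minimal sketch of that rule using the commenter’s admittedly made-up thresholds; the function and variable names are hypothetical.)

      ```python
      # Sketch of the proposed rule: every sample within +/- 5% of the
      # advertised speed, and the average no more than 0.5% below it.
      # The thresholds are the made-up numbers from the comment above.
      from statistics import mean

      def compliant(advertised_mbps: float, samples_mbps: list[float]) -> bool:
          inst_ok = all(abs(s - advertised_mbps) <= 0.05 * advertised_mbps
                        for s in samples_mbps)
          avg_ok = mean(samples_mbps) >= 0.995 * advertised_mbps
          return inst_ok and avg_ok

      # compliant(100, [99.2, 100.4, 98.7, 101.0]) -> True
      # compliant(100, [94.0, 100.0, 100.0, 100.0]) -> False (94 outside +/- 5%)
      ```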

      • Natanael@slrpnk.net · 7 months ago

        Keep in mind that because few residential users max out their capacity at the same time, ISPs “overbook” capacity. This usually works out, because they have solid stats on average use and few customers actually need their maximum simultaneously.

        Of course, some ISPs are greedier than others and take it to the extreme, where the uplink/downlink is regularly maxed out and a significant fraction of customers get nothing near the promised bandwidth. That part should be disincentivized.

        Force the ISPs to keep stats on peak load and on how frequently their customers are unable to get the advertised bandwidth; if it’s above some threshold, it should be treated like excess downtime, and they should be forced to pay back the affected customers. Then the only way they can avoid losing money is by either changing their plans to a realistic offer or by building out capacity.
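
        (A minimal sketch of what that bookkeeping could look like; the 1% threshold and the refund rule are illustrative assumptions, not anything a regulator actually uses.)

        ```python
        # Sketch of the idea above: treat "below advertised speed" like
        # downtime. If a customer falls short of the advertised rate in more
        # than some fraction of measurement intervals, they'd be owed a
        # refund. The 1% threshold is purely illustrative.
        SHORTFALL_THRESHOLD = 0.01  # at most 1% of intervals may fall short

        def owed_refund(advertised_mbps: float, samples_mbps: list[float]) -> bool:
            short = sum(1 for s in samples_mbps if s < advertised_mbps)
            return short / len(samples_mbps) > SHORTFALL_THRESHOLD

        # owed_refund(100, [101, 99, 100, 102]) -> True (1 of 4 intervals short)
        ```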

        • sugar_in_your_tea@sh.itjust.works · 7 months ago

          Yeah, I wish we’d do this.

          I have a good ISP that has delivered pretty much every time I’ve tested it (usually a few times a year, usually during peak hours). But I’ve had bad ISPs where I never got the advertised speed (the best I saw was 15% below advertised, and it was usually 30% or more below).

    • Possibly linux · 7 months ago

      Distance would add latency but shouldn’t reduce speed on well-maintained infrastructure.

    • JasonDJ · 7 months ago

      There’s always going to be some level of loss and retransmission. Hitting the full rate would take a perfect stream of UDP, since TCP needs acknowledgements in order to keep sending data. The impact can be reduced by window scaling and multiplexing, but it’s still going to happen.
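
      (That intuition can be made quantitative with the well-known Mathis et al. approximation for steady-state TCP throughput, rate ≈ (MSS/RTT) · 1.22/√loss. A back-of-envelope sketch with illustrative numbers:)

      ```python
      # Back-of-envelope single-stream TCP throughput ceiling from the
      # Mathis et al. (1997) approximation:
      #   rate <= (MSS / RTT) * (1.22 / sqrt(loss))
      # Even tiny loss on a long path caps one stream well below line rate.
      from math import sqrt

      def mathis_limit_mbps(mss_bytes: float, rtt_s: float, loss: float) -> float:
          return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss)) / 1e6

      # 1460-byte MSS, 80 ms transatlantic RTT, 0.01% packet loss:
      print(mathis_limit_mbps(1460, 0.080, 0.0001))  # ~17.8 Mbps, however fat the pipe
      ```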