- cross-posted to:
- [email protected]
The FCC is still taking comments from the public about how much data you really use and what your experience with data caps is like.
The Federal Communications Commission is officially looking into broadband data caps and their impact on consumers. On Tuesday, the FCC approved a notice of inquiry to examine whether data caps harm consumers and competition, as well as why data caps persist “despite increased broadband needs” and the “technical ability to offer unlimited data plans,” as spotted earlier by Engadget.
Many internet plans come with a data cap that limits how much bandwidth you can use each month. If you go over the cap, internet service providers will typically charge an extra fee or slow down your service. The FCC first invited consumers to comment on broadband data caps last June, and hundreds of those comments are now available on the agency’s website.
You can still share your experience with broadband data caps with the FCC through this form, which will ask for details about the name of your ISP, usage limits, and any challenges you’ve encountered due to the cap.
I want to know why upstream bandwidth is so limited, too. I have about 300 GB of data at home, which isn’t much by hoarder standards. But there’s no decent way for me to back it up to a remote server because of my low upload speed.
On cable it’s because they allocate significantly more bandwidth to download than to upload. They could split it evenly, but most customers, who are mostly streaming or gaming, only care about download speed, since that’s what makes streams and downloads faster.
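A rough back-of-envelope shows how lopsided that allocation can get. The figures below are typical DOCSIS 3.0 ballpark values (not from this thread), so treat the exact numbers as assumptions:

```python
# Back-of-envelope: why cable's channel split favors downstream.
# Assumed DOCSIS 3.0 ballpark figures, not exact for any given ISP.
DOWNSTREAM_CHANNELS = 32      # bonded 6 MHz downstream channels (256-QAM)
DOWNSTREAM_MBPS_PER_CH = 38   # approx. usable rate per downstream channel
UPSTREAM_CHANNELS = 4         # bonded upstream channels
UPSTREAM_MBPS_PER_CH = 27     # approx. usable rate per 6.4 MHz ATDMA channel

down = DOWNSTREAM_CHANNELS * DOWNSTREAM_MBPS_PER_CH  # 1216 Mbps
up = UPSTREAM_CHANNELS * UPSTREAM_MBPS_PER_CH        # 108 Mbps

print(f"downstream ~{down} Mbps, upstream ~{up} Mbps")
print(f"ratio ~{down / up:.0f}:1")
```

With a split like that, a "gigabit" cable tier can still leave you with barely 100 Mbps up.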
This is a “if you build it, they will come” kind of thing. The Internet as we know it developed around the idea that the edge only consumes things. You don’t host content there. At most, you give it access to web 2.0 sites where people put their content and then it’s shared out from the central server.
That made it impractical to build applications designed to spread the load out to the edge. It’s not just a bandwidth problem, either. The slow pace of IPv6 adoption plays a role, and from what I’ve gathered using it with Charter, they’re only doing enough to make it work at the most basic level. The delegated prefix doesn’t allow for subnetting, and it appears to be dynamically assigned and can change. Setup isn’t that hard, but it’s not as easy as it needs to be for mass adoption.
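To see why the delegated prefix size matters: each LAN segment conventionally needs its own /64. A /56 delegation leaves room for 256 subnets, while a bare /64 leaves room for exactly one. A quick sketch with Python's `ipaddress` module (using the RFC 3849 documentation prefix as a stand-in address):

```python
# Prefix delegation sizes: how many /64 LANs fit in each delegation?
# 2001:db8::/32 is the IPv6 documentation range, used here as a placeholder.
import ipaddress

delegated_56 = ipaddress.ip_network("2001:db8:abcd:100::/56")
print(len(list(delegated_56.subnets(new_prefix=64))))  # 256 possible /64 LANs

delegated_64 = ipaddress.ip_network("2001:db8:abcd:100::/64")
print(len(list(delegated_64.subnets(new_prefix=64))))  # 1 -- no room to subnet
```

If the ISP only hands out a /64 (and it can change at any time), there's no clean way to run separate subnets for, say, a LAN, a guest network, and IoT devices.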
If you use something like borg backup the first upload will take you forever but after that all you have to upload is the data that changed. I had the same problem as you and that’s how I solved it.
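The reason later uploads are so small is content deduplication: chunks the repository has already seen are never re-sent. Here's a toy sketch of that idea using fixed-size chunks (Borg actually uses content-defined chunking, so this is a simplification, not Borg's algorithm):

```python
# Toy illustration of deduplicating backups: only chunks whose hash
# the store hasn't seen get "uploaded". Borg uses content-defined
# chunking; fixed-size chunks are used here for brevity.
import hashlib

CHUNK = 4  # tiny chunk size for the demo


def chunks(data: bytes):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]


def backup(data: bytes, store: dict) -> int:
    """Add data's chunks to the store; return bytes actually sent."""
    sent = 0
    for c in chunks(data):
        h = hashlib.sha256(c).hexdigest()
        if h not in store:
            store[h] = c
            sent += len(c)
    return sent


store = {}
first = backup(b"aaaabbbbccccdddd", store)   # everything is new
second = backup(b"aaaabbbbccccXXXX", store)  # only the changed chunk
print(first, second)  # 16 4
```

The first backup sends everything; the second sends only the chunk that changed, which is why the initial upload is the painful one.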
Yeah, it would take multiple days, and the connection usually doesn’t stay up that long. Any idea how well Borg handles random disconnects and reconnects during a backup?
I do use Borg and like it in general.
Not sure; I used rclone to upload the archive. It took days for me as well, but it worked.
I got Fios gigabit recently and was very surprised to find that I now regularly get 300–400 Mbps upload speeds.
Because when cable service was built, the only upstream data was the tiny messages a cable box would send. https://superuser.com/a/1519918
This is decades later and most of that stuff has been replaced multiple times by now.
And only in the past couple of years have we been hitting that limit. Maintaining backwards compatibility has been more important for cable service. Anyone with a real need would have used T-carrier service, fiber, or multiple bonded lines, depending on the year and budget.