By Art Reisman
CTO – APconnections
Have you ever been on a shared wireless network, in a hotel or business, and noticed how your connection can go from reasonable to completely unusable in a matter of seconds, and then cycle back to usable?
The reason for this is that once a network hits its bandwidth allocation, the provider’s router usually just starts dropping the excess packets. Intuitively, one would assume that when your router is dropping packets, each user would perceive just a gradual slowdown.
What happens in reality is far worse…
1) Distant users see their response times spiral out of control.
Martin Roth, a colleague of ours who founded one of the top performance analysis companies in the world, provided this explanation:
“Any device which is dropping packets “favors” streams with the shortest round trip time, because (according to the TCP protocol) the time after which a lost packet is recovered depends on the round trip time. So when a company in Copenhagen/Denmark has a line to Australia and a line to Germany on the same internet router, and this router is discarding packets because of bandwidth limits/policing, the stream to Australia is getting much bigger “holes” per lost packet (up to 3 seconds) than the stream to Germany or another office in Copenhagen. This effect then increases when the TCP window size to Australia is reduced (because of the retransmissions), so there are fewer bytes per round trip and more holes between two round trips.”
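To put rough numbers on Martin’s point, here is a quick back-of-the-envelope sketch (mine, not from avenida.dk) using the classic Mathis approximation for steady-state TCP throughput; the RTT and loss figures are illustrative assumptions.

```python
# Sketch: why packet drops hit long round-trip-time (RTT) streams hardest.
# Uses the Mathis et al. steady-state TCP throughput approximation:
#   throughput ~ (MSS / RTT) * (1.22 / sqrt(loss_rate))
# The RTT and loss values below are illustrative assumptions, not measurements.

from math import sqrt

MSS_BITS = 1460 * 8       # maximum segment size in bits (1460 bytes)
LOSS_RATE = 0.01          # assume the router drops 1% of packets on every stream

def tcp_throughput_bps(rtt_s, loss_rate=LOSS_RATE, mss_bits=MSS_BITS):
    """Approximate steady-state TCP throughput in bits per second."""
    return (mss_bits / rtt_s) * (1.22 / sqrt(loss_rate))

links = {
    "Copenhagen -> Copenhagen": 0.002,   # ~2 ms RTT (assumed)
    "Copenhagen -> Germany":    0.025,   # ~25 ms RTT (assumed)
    "Copenhagen -> Australia":  0.300,   # ~300 ms RTT (assumed)
}

for name, rtt in links.items():
    mbps = tcp_throughput_bps(rtt) / 1e6
    print(f"{name}: RTT {rtt*1000:.0f} ms -> ~{mbps:.1f} Mbit/s achievable")

# With the same 1% drop rate, the local stream can still run at tens of Mbit/s,
# the German stream noticeably slower, and the Australian stream is squeezed to
# a small fraction -- the drops "favor" the short-RTT flows, as described above.
```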
In the screen shot above (courtesy of avenida.dk), the bandwidth limit is 10 Mbit/s (roughly 1 MByte/s of net traffic), so everything on top of that gets discarded. The problem is not the discards themselves (that is standard TCP behaviour) but the connections that are forcefully closed because of the discards. After the peak in closed connections, there is a “dip” in bandwidth utilization, because we cut too many connections.
2) Once you hit a congestion point, where your router is forced to drop packets, overall congestion actually gets worse before it gets better.
When applications don’t get a response due to a dropped packet, instead of backing off and waiting, they tend to start sending retries, and this is why you may have noticed prolonged periods (30 seconds or more) of no service on a congested network. We call this the rolling brownout. Think of this situation as a sort of doubling down on bandwidth at the moment of congestion. Instead of easing into a full network and lightly bumping your head, all the devices demanding bandwidth ramp up their requests at precisely the moment when your network is congested, resulting in an explosion of packet dropping until everybody finally gives up.
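Here is a toy simulation of that doubling-down effect (my own illustration, with made-up numbers): every dropped request gets retried on top of the next second’s new traffic, so the offered load snowballs even though real demand barely grows.

```python
# Toy simulation of the "rolling brownout": once the link saturates, dropped
# requests are retried on top of new traffic, so offered load snowballs.
# Capacity and demand numbers are arbitrary illustrative assumptions.

CAPACITY = 100            # requests the link can carry per second
new_demand = 90           # fresh requests arriving each second

backlog = 0               # dropped requests waiting to be retried
for second in range(10):
    offered = new_demand + backlog        # retries pile on top of new traffic
    carried = min(offered, CAPACITY)
    dropped = offered - carried
    backlog = dropped                     # every drop becomes a retry next second
    print(f"t={second}s offered={offered} carried={carried} dropped={dropped}")
    new_demand += 5                       # demand creeps up toward the peak hour

# Real demand only grows from 90 to 135 requests per second, but by the end the
# offered load (new traffic plus retries) is well over twice the link capacity.
```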
How do you remedy outages caused by congestion?
We have written extensively about solutions to prevent bottlenecks. Here is a quick summary of possible solutions:
1) The most obvious is to increase the size of your link.
2) Enforce rate limits per user (see the sketch after this list). The problem with this solution is that you can waste a good bit of bandwidth if the network is lightly loaded.
3) Use something more sophisticated, like a NetEqualizer, a device designed specifically to counter the effects of congestion.
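For option 2, a per-user rate limit is usually some flavor of token bucket. The sketch below is my own illustration of the idea (not NetEqualizer’s or any router vendor’s implementation), and it shows the drawback mentioned above: each user stays capped at their slice even when the rest of the link is idle.

```python
# Sketch of option 2: a per-user token-bucket rate limiter.
# Names and numbers are illustrative assumptions, not any vendor's implementation.

import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8              # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                       # forward the packet
        return False                          # drop (or queue) the packet

# One bucket per user, each hard-capped at 2 Mbit/s no matter how idle the rest
# of the link is -- which is exactly the wasted headroom described above.
PER_USER_LIMIT_BPS = 2_000_000
buckets = defaultdict(lambda: TokenBucket(PER_USER_LIMIT_BPS, burst_bytes=64_000))

def handle_packet(user_ip, packet_bytes):
    return buckets[user_ip].allow(packet_bytes)
```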
From Martin Roth of Avenida.dk:
“With NetEqualizer we may get the same number of discards, but we get fewer connections closed, because we “kick” the few connections with the high bandwidth, so we do not get the “dip” in bandwidth utilization.
The graphs (above) were recorded using 1-second intervals, so here you can see when the bandwidth limit is reached. In a standard SolarWinds graph with 10-minute averages the bandwidth utilization would be under 20% and the customer would not know they are hitting the limit.”
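Martin’s point about averaging is easy to reproduce. The quick sketch below (illustrative numbers, not the avenida.dk data) models a link that is pinned at a 10 Mbit limit for one minute out of every ten; the 1-second samples show the saturation, while the 10-minute average stays under 20%.

```python
# Sketch: why 10-minute averages hide congestion that 1-second samples reveal.
# Numbers are illustrative, not taken from the graphs referenced above.

LIMIT_MBIT = 10
samples = []
for second in range(600):                  # ten minutes of 1-second samples
    if second < 60:                        # one minute of saturation...
        samples.append(LIMIT_MBIT)         # ...pinned at the 10 Mbit limit
    else:
        samples.append(0.5)                # ...then mostly idle background traffic

peak = max(samples)
average = sum(samples) / len(samples)
print(f"1-second peak:     {peak:.1f} Mbit/s ({peak / LIMIT_MBIT:.0%} of the limit)")
print(f"10-minute average: {average:.2f} Mbit/s ({average / LIMIT_MBIT:.0%} of the limit)")
# The averaged graph shows roughly 15% utilization even though users spent a
# full minute pinned at the limit, losing connections the whole time.
```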
The excerpt below is from a message sent by a reseller who had been struggling with congestion issues at a hotel; he tried basic rate limits on his router first. Rate limits will buy you some time, but on an oversold network you can still hit the congestion point, and for that you need a smarter device.
“…NetEq delivered a 500% gain in available bandwidth by eliminating rate caps, possible through a mix of connection limits and Equalization. Both are necessary. The hotel went from 750 Kbit max per access point (entire hotel lobby fights over 750 Kbit, divided between who knows how many users) to 7 Mbit or more available bandwidth for single users with heavy needs.”
Dear Comcast, Please Stop Slowing my iOS Update
July 22, 2015 — netequalizer
Last week I was forced to re-load my iPad from scratch. So I fired it up and went through the routine that wipes it clean and re-loads the entire OS from the Apple cloud. As I watched the progress indicator, the estimate slowly climbed from 1 hour, to 2 hours, then all the way up to 23 hours – and then it just stayed there. Now I know the iOS, or whatever they call it on the iPad, is big, but 23 hours big? I double-checked the download throughput on my NetEqualizer status screen, and sure enough, it was only running at about 60 to 100 Kbps, nowhere near my advertised Business Class 20 megabits. So I did a little experiment. I turned on my VPN tunnel, unplugged my iPad for a minute, and then took some steps to hide my DNS (so Comcast had no way to see my DNS requests). I then restarted my update, and sure enough it sped up to about 10 megabits.
To make sure I was not imagining anything I repeated the test.
Without VPN (slow)
With VPN (fast)
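If you want to run a similar comparison yourself, here is a rough sketch (the test-file URL is a placeholder assumption) that downloads a chunk of data and reports the average rate; run it once without the VPN and once with it.

```python
# Rough sketch for repeating the throughput comparison: download part of a test
# file and report the average rate. Run once without the VPN and once with it.
# The URL below is a placeholder assumption; substitute any large test file.

import time
import urllib.request

TEST_URL = "https://example.com/100MB.bin"    # placeholder test file

def measure_mbps(url, max_bytes=20_000_000):
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url) as resp:
        while received < max_bytes:
            chunk = resp.read(65536)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.monotonic() - start
    return (received * 8) / elapsed / 1e6     # megabits per second

print(f"Download rate: {measure_mbps(TEST_URL):.1f} Mbit/s")
```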
So what is going on here? Does the VPN make things go faster? No, not really, but it does prevent Comcast from recognizing my iOS update from Apple and singling it out for slower bandwidth.
Why does Comcast (allegedly) shape my download from Apple?
The long story basically boils down to this: it is likely that Comcast really does not have a big enough switch going out to the Internet to support the deluge of bandwidth needed when a group of subscribers all try to update their devices at once, especially during peak hours. Therefore, in order to keep basic services from becoming slow, they single out a few big hitters such as iOS updates.