By Art Reisman
CTO – www.netequalizer.com
I find public forums where universities openly share their bandwidth shaping policies to be an excellent source of information. Unlike commercial providers, these user groups have found that technical collaboration is in their best interest, and they often openly discuss current trends in bandwidth control.
A recent university IT user group discussion thread kicked off with the following comment:
“We are in the process of trying to decide whether or not to upgrade or all together remove our packet shaper from our residence hall network. My network engineers are confident we can accomplish rate limiting/shaping through use of our core equipment, but I am not convinced removing the appliance will turn out well.”
Notice that he is not talking about removing rate limits completely, just backing off from an expensive dedicated packet shaping appliance in favor of the simpler rate limits available on his router. My point in referencing this discussion is not so much to weigh the different approaches to rate limiting, but to emphasize that, at this point in time, running wide open without some sort of restriction is not even being considered.
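For readers curious what a basic router-style rate limit actually does, here is a minimal token-bucket sketch in Python. It is purely illustrative: the 5 Mbps rate, the burst size, and the class name are my own assumptions, not the configuration the engineer in the thread was describing.

    import time

    class TokenBucket:
        """Illustrative token-bucket limiter, the mechanism behind most
        simple per-subscriber router rate limits (values are assumed)."""

        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0     # refill rate in bytes per second
            self.capacity = burst_bytes    # maximum burst size in bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            """Return True if the packet fits in the current token budget."""
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False    # over the limit: drop or queue the packet

    # Example: cap one subscriber at roughly 5 Mbps with a 64 KB burst allowance.
    subscriber_limit = TokenBucket(rate_bps=5_000_000, burst_bytes=64_000)
    print(subscriber_limit.allow(1500))   # a typical 1500-byte packet is admitted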
Despite an 80 to 90 percent drop in bulk bandwidth prices over the past few years, bandwidth is still not quite cheap enough for an ISP to run wide open. Will it ever be possible for an ISP to run wide open without deliberately restricting its users?
The answer is not likely.
First of all, there seems to be no limit to the ways consumer devices and content providers will conspire to gobble bandwidth. The common assumption is that no matter what an ISP does to deliver higher speeds, consumer appetite will outstrip it.
Yes, an ISP can temporarily leap ahead of demand.
We do have a precedent from several years ago. In 2006, the University of Brighton in the UK was able to unplug our bandwidth shaper without issue. When I followed up with their IT director, he mentioned that their students' total consumption was capped by the far-end services of the Internet, and thus they never hit the ceiling of their local pipes. Running without restriction, 10,000 students were not able to eat up their 1 gigabit pipe!

I must caveat this experiment by saying that the UK university system had invested heavily in subsidized bandwidth and was far ahead of the average ISP curve for the time. Video content services on the Internet were simply not that widely used by students then. Such an experiment today would bring a pipe with a similar contention ratio to its knees in a few seconds. I suspect one would now need something on the order of 15 to 25 gigabits to run wide open without contention-related problems.
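To put some rough numbers behind that 15 to 25 gigabit estimate, here is a back-of-the-envelope calculation. The per-stream rate and the fraction of students streaming at peak are my own assumptions, not measurements:

    # Back-of-the-envelope contention arithmetic (assumed, illustrative numbers).
    students = 10_000

    pipe_2006_bps = 1e9            # the 1 gigabit pipe in the 2006 Brighton case
    share_2006 = pipe_2006_bps / students
    print(f"2006: ~{share_2006/1e3:.0f} kbps average per student")   # ~100 kbps

    # A single HD video stream today wants a few Mbps; if even a modest
    # fraction of students stream at once, demand dwarfs the old pipe.
    stream_bps = 4e6               # assumed per-stream rate
    active_fraction = 0.5          # assumed share of students streaming at peak
    demand = students * active_fraction * stream_bps
    print(f"Today: ~{demand/1e9:.0f} Gbps of concurrent demand")     # ~20 Gbps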
It also seems that we are coming to the end of the line for bandwidth in the wireless world much more quickly than in the wired world.
It is unlikely consumers are going to carry cables around to plug their iPads and iPhones into wall jacks anytime soon. With the diminishing returns on investment for higher speeds on the world's wireless networks, bandwidth control is the only way to keep some kind of order.
Lastly, I do not expect bulk bandwidth prices to continue to fall at their present rate.
The last few years of falling prices are the result of a perfect storm of factors not likely to be repeated.
For these reasons, it is not likely that bandwidth control will become obsolete for at least another decade. I am sure we will be revisiting this issue in the next few years for an update.
December 11, 2012 at 12:34 PM
We currently run a mostly open network. We do have a NetEqualizer on our residential VLAN, set to limit connections to 50 in / 50 out per IP. We do this to rein in BitTorrent and identify infected machines.
At a little over 3,000 customers, we are pushing up against 1 gig peak incoming and nearly 200 meg peak outgoing traffic. We may have to do some stricter limiting, as our upstream connection is currently 1 gig duplex. Getting the equipment to up it to 2.5 or 10 gig will wipe out our profits for the year and then some, so we are trying to hold off for at least another 6 months.
If data usage growth does not slow down, we may have to institute usage caps or metered service. When I started here in May of 2011, our combined traffic peaked at around 500 meg, less than half what it is now, with maybe 700 fewer customers. As cloud computing becomes more widespread and integrated, I do not foresee this trend slowing much.
Regards.
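Editor's note: for readers wondering what a per-IP connection cap like the 50 in / 50 out limit described in the comment above looks like conceptually, here is a minimal Python sketch. The threshold of 50 is taken from the comment; everything else is an assumed illustration and not NetEqualizer's actual implementation.

    from collections import defaultdict

    MAX_CONNECTIONS_PER_IP = 50          # threshold taken from the comment above

    active = defaultdict(set)            # ip -> set of open connection identifiers

    def admit(ip, conn_id):
        """Allow a new connection only if the IP is under its cap."""
        if len(active[ip]) >= MAX_CONNECTIONS_PER_IP:
            return False                 # BitTorrent swarms blow past this quickly
        active[ip].add(conn_id)
        return True

    def close(ip, conn_id):
        active[ip].discard(conn_id)

    # Example: the 51st simultaneous connection from one address is refused.
    for n in range(51):
        allowed = admit("203.0.113.7", n)
    print(allowed)   # False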