After many years of consulting and supporting the networking world with WAN optimization devices, we have sensed a lingering fear among network administrators who wonder whether their capacity is within the normal range.
So the question remains:
How much bandwidth can you survive with before you impact morale or productivity?
The formal term we use to describe the number of users sharing a network link to the Internet is contention ratio: the number of users divided by the size of the Internet trunk. We normally think of Internet trunks in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio. If all users shared the trunk's bandwidth equally and simultaneously, each could sustain a constant feed of 100 Kbps, which is exactly 1/10 of the overall bandwidth.
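For those who like to see the arithmetic spelled out, here is a minimal sketch in Python. The function names are mine, and the figures are simply the ones from the example above:

```python
# A minimal sketch of the contention ratio arithmetic described above.
# Function names are illustrative, not part of any product API.

def contention_ratio(users: int, trunk_mbps: float) -> float:
    """Users sharing each megabit of the trunk."""
    return users / trunk_mbps

def fair_share_kbps(trunk_mbps: float, users: int) -> float:
    """Sustained rate per user if everyone transmits simultaneously."""
    return trunk_mbps * 1000 / users

# 10 users on a 1 Mbps trunk: a 10-to-1 ratio, 100 Kbps each.
print(contention_ratio(10, 1.0))   # 10.0
print(fair_share_kbps(1.0, 10))    # 100.0
```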
So what is an acceptable contention ratio?
From a business standpoint, it is whatever a customer will put up with and pay for without canceling their service. This definition may seem ethically suspect, but whether in the bygone days of telephone service or the contemporary Internet bandwidth business, there are long-standing precedents for overselling. What do you think causes a circuit busy signal? Or a dropped cell phone call?
So, without pulling any punches, what exactly will a customer tolerate before pulling the plug?
Here are some basic observations about consumers and acceptable contention ratios:
- Rural customers in the US and Canada: Contention ratios of 50 to 1 are common
- International customers in remote areas of the world: Contention ratios of 80 to 1 are common
- Internet providers in urban areas: Contention ratios of 15 to 1 are to be expected
- Generic business: ratios of 50 to 1 are common, and sometimes higher
Update, January 2015: Quite a bit has happened since these original numbers were published. Internet prices have plummeted. Here are my updated observations:
- Rural customers in the US and Canada: Contention ratios of 10 to 1 are common
- International customers in remote areas of the world: Contention ratios of 20 to 1 are common
- Internet providers in urban areas: Contention ratios of 2 to 1 are to be expected
- Generic business: ratios of 5 to 1 are common, and sometimes higher
As a rule, businesses can generally get away with slightly higher contention ratios. Most business use does not create the same load as recreational use, such as YouTube and file sharing. Obviously, many businesses suffer the effects of recreational use and perhaps haphazardly turn a blind eye to enforcement. The above ratio of 50 to 1 is a general guideline for what a business should be able to work with, assuming they are willing to police their network usage and enforce policy.
The numbers above are a good, rough starting point, but things are not as simple as they look. There is a statistical twist as bandwidth amounts get higher.
Contention ratios can actually increase as the overall Internet trunk size gets larger. For example, if 50 people can share one megabit without mutiny, it should follow that 100 people can share two megabits without mutiny as the ratio has not changed. It is still 50 to 1.
However, from observations of hundreds of ISPs, we can easily conclude that perhaps 110 people can share two megabits with the same tolerance as 50 people sharing one megabit. What this means is that the larger the ISP, the more bandwidth at a fixed cost per megabit, and thus the larger the contention ratios you can get away with.
Is this really true? And if so, what are its implications for your business?
This is simply an empirical observation, backed up by talking to literally thousands of ISPs over the course of four years and noticing how their oversubscription ratios increase with the size of their trunk.
A conservative estimate is that, starting with the baseline ratio listed above, you can safely add 10 percent more subscribers above and beyond the original contention ratio for each megabit of trunk they share.
Thus, to provide an illustration, 50 people sharing one megabit can safely be increased to 110 people sharing two megabits, and at four megabits you can easily handle 280 customers. With this understanding, getting more from your bandwidth becomes that much easier.
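Here is a quick sketch of that rule of thumb in Python. The formula is my own linear reading of the 10 percent rule, so treat it as an approximation; it reproduces the 110-subscriber figure at two megabits and comes in slightly under the four-megabit estimate above:

```python
BASE_RATIO = 50  # subscribers per megabit (the rural baseline above)

def max_subscribers(trunk_mbps: int, base_ratio: int = BASE_RATIO) -> int:
    # Add 10% more subscribers for each megabit beyond the first.
    bonus = 0.10 * (trunk_mbps - 1)
    return round(base_ratio * trunk_mbps * (1 + bonus))

for mbps in (1, 2, 4):
    print(mbps, max_subscribers(mbps))  # 1 -> 50, 2 -> 110, 4 -> 260
    # The article quotes 280 at four megabits, so the real-world
    # bonus may compound a bit faster than this linear version.
```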
I also ran across this thread in a discussion group for ResNet administrators around the country.
From the ResNet listserv:
Brandon Enright at University of California San Diego breaks it down as follows:
Right now we’re at .2 Mbps per student. We could go as low as .1 right now without much of any impact. Things would start to get really ugly for us at .05 Mbps / student.
So at 10k students I think our lower-bound is 500 Mbps.
I can’t disclose what we’re paying for bandwidth, but even if we fully saturated 2 Gbps for the 95th percentile calculation it would come out to be less than $5 per student per month. Those seem like reasonable enough costs to let the students run wild.
Brandon
Editor’s note: I am not sure why a public institution can’t disclose exactly what it is paying for bandwidth (Brandon does give a good hint), as this would be useful to the world for comparison; however, many universities get lower-than-commercial rates through state infrastructure not available to private operators.
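Brandon’s arithmetic is easy to sanity check. In the sketch below, the price per Mbps is a made-up placeholder (he didn’t disclose the real one); everything else comes straight from his note:

```python
STUDENTS = 10_000

def lower_bound_mbps(per_student_mbps: float, students: int = STUDENTS) -> float:
    return per_student_mbps * students

def cost_per_student(committed_mbps: float, price_per_mbps: float,
                     students: int = STUDENTS) -> float:
    # Monthly circuit cost spread across the student body.
    return committed_mbps * price_per_mbps / students

print(lower_bound_mbps(0.05))       # 500.0 Mbps, the stated floor
# ASSUMPTION: $25 per Mbps is a hypothetical price; Brandon's
# "<$5 per student" at a saturated 2 Gbps implies his real rate
# is somewhere below this.
print(cost_per_student(2000, 25))   # 5.0 dollars per student per month
```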
By Art Reisman
Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.
Simple Is Better with Bandwidth Monitoring and Traffic Shaping Equipment
June 7, 2010 — netequalizer
By Art Reisman
Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.
For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. However, the question a typical CIO will want answered before approving any purchase is, “What is the return on investment for this equipment?” Putting a hard and fast number on bandwidth optimization equipment may seem straightforward. If you can quantify the cost of your bandwidth and project an approximate reduction in usage or increase in throughput, you can crunch the numbers. But is that all you should consider when determining how much to spend on a bandwidth optimization device?
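To make that concrete, here is a rough sketch of the payback calculation a CIO might run. All figures are hypothetical placeholders, and the labor term anticipates the point made below:

```python
def payback_months(device_cost: float, monthly_bw_cost: float,
                   projected_savings_pct: float,
                   admin_hours_per_month: float = 0.0,
                   hourly_rate: float = 0.0) -> float:
    """Months until the device pays for itself, net of ongoing labor."""
    monthly_savings = monthly_bw_cost * projected_savings_pct
    monthly_labor = admin_hours_per_month * hourly_rate
    return device_cost / (monthly_savings - monthly_labor)

# Hypothetical: a $5,000 appliance on a $1,500/month circuit with a
# projected 20% reduction, minus two hours of tuning at $50/hour:
print(payback_months(5000, 1500, 0.20, 2, 50))  # 25.0 months
```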
The traditional way of looking at the cost of monitoring your Internet link has two dimensions: first, the fixed cost of the monitoring tool used to identify traffic, and second, the labor associated with devising and implementing a remedy. In an ironic inverse correlation, we assert that your ROI degrades as the complexity of the monitoring tool increases.
Obviously, the more detailed the reporting/shaping tool, the higher its initial price tag. Yet the real kicker comes with part two: more detailed data output generally increases the time an administrator is likely to spend making adjustments and hunting for optimal performance.
But, is it really fair to assume higher labor costs with more advanced monitoring and information?
Well, obviously it wouldn’t make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. But, typically, the more information an admin has about a network, the more inclined he or she might be to spend time making adjustments.
On a similar note, an oversight often made with labor costs is the belief that once the work of adjusting the network is done, the associated adjustments can remain statically in place. In reality, network traffic changes constantly, and the tuning so meticulously performed on Monday may be obsolete by Friday.
Does this mean that using a bandwidth tool is a net loss? Not at all. Bandwidth monitoring and network adjusting can certainly result in a cost-effective solution. But where is the tipping point? When does a monitoring solution create more costs than it saves?
A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of human intervention. For example, computer operators disappeared off the face of the earth with the advent of cheaper computing in the late 1980s. The function of a computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.
An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on the network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problem of a network locking up goes away, leaving only what we would call “chronic” problems, which may need to be addressed eventually but do not require immediate action.
For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, while perhaps one or two percent of users will be well above it. You don’t need a fancy tool to see what they are doing; abuse becomes obvious from the usage numbers alone.
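Here is a minimal sketch of that kind of simple report, with fabricated byte counts just for illustration:

```python
import statistics

# Fabricated per-user totals in megabytes for one week.
usage_mb = {"alice": 420, "bob": 510, "carol": 460,
            "dave": 12000, "erin": 390, "frank": 530}

mean = statistics.mean(usage_mb.values())
stdev = statistics.stdev(usage_mb.values())

# Flag anyone more than two standard deviations above the mean.
top_talkers = {u: mb for u, mb in usage_mb.items()
               if mb > mean + 2 * stdev}
print(top_talkers)  # {'dave': 12000}
```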
However, there is also the personal control factor, which often does not follow clear lines of ROI.
What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into, for example, a trip to Hawaii.
In our next article, we’ll put some real-world numbers to these breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.
- List of monitoring tools compiled by Stanford
- ROI tool: determine how much a bandwidth control device can save
- Great article on choosing a bandwidth controller
- Planetmy Linux Tips: how to set up a monitor for free
- Good Enough Is Better: a lesson from the digital camera revolution
Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.