How Much Bandwidth Do You Really Need?


By Art Reisman – CTO – www.netequalizer.com


When it comes to how much money to spend on the Internet, there seems to be an underlying feeling of guilt in everybody I talk to. From ISPs to libraries to multinational corporations, they all have a feeling of bandwidth inadequacy. It is very similar to the guilt I used to feel back in college when I would skip my studies for some social activity (drinking), only now it applies to bandwidth contention ratios. Everybody wants to know how they compare with the industry average in their sector. Are they spending on bandwidth appropriately, and if not, are they hurting their institution? Will they become second-rate?

To ease the pain, I was hoping to put together a nice chart of industry-standard recommendations, validating that your bandwidth consumption is normal, but I just can’t bring myself to do it quite yet. There is an elephant in the room that we must contend with. So before I make up a nice chart of recommendations, a more relevant question is… how bad do you want your video service to be?

Your choices are:

  1. bad
  2. crappy
  3. downright awful

Although my answer may seem a bit sarcastic, there is a truth behind these choices. I sense that much of the guilt of our customers trying to provision bandwidth is based on the belief that somebody out there has enough bandwidth to reach some form of video Shangri-La; like playground children bragging about their father’s professions, claims of video ecstasy are somewhat exaggerated.

With the advent of video, it is unlikely any amount of bandwidth will ever outrun the demand; yes, there are some tricks with caching and cable on demand services, but that is a whole different article. The common trap with bandwidth upgrades is that there is a false sense of accomplishment experienced before actual video use picks up. If you go from a network where nobody is running video (because it just doesn’t work at all), and then you increase your bandwidth by a factor of 10, you will get a temporary reprieve where video seems reliable, but this will tempt your users to adopt it as part of their daily routine. In reality you are most likely not even close to meeting the potential end-game demand, and 3 months later you are likely facing another bandwidth upgrade with unhappy users.

To understand the video black hole, it helps to compare the potential demand curve pre and post video.

A quality VoIP call, which used to be the measuring stick for decent Internet service, runs at about 54kbs. A quality HD video stream can easily consume about 40 times that amount.

Yes, there are vendors that claim video can be delivered at 250kbs or less, but they are assuming tiny little stop-action screens.

Couple this tremendous increase in video stream size with a higher percentage of users that will ultimately want video, and you would need an upgrade of perhaps 60 times your pre-video bandwidth levels to meet the final demand. Some of our customers, with big budgets or government-subsidized backbones, are getting close, but most go on a honeymoon with an upgrade of 10 times their bandwidth, only to end up asking the question: how much bandwidth do I really need?
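The arithmetic behind that estimate can be sketched in a few lines. The 40x stream-size figure and the 54kbs VoIP benchmark come from the text above; the 1.5x adoption multiplier is my own illustrative assumption to reach the ~60x end result:

```python
# Back-of-the-envelope math for the post-video bandwidth demand described above.
VOIP_KBPS = 54            # the article's benchmark for a quality VoIP call
HD_VIDEO_FACTOR = 40      # an HD stream consumes roughly 40x a VoIP call

hd_video_kbps = VOIP_KBPS * HD_VIDEO_FACTOR   # ~2160 kbps per stream

# Assume (hypothetically) 1.5x as many users eventually want video as
# wanted VoIP; the required upgrade multiplies the two effects together.
ADOPTION_GROWTH = 1.5
upgrade_factor = HD_VIDEO_FACTOR * ADOPTION_GROWTH

print(hd_video_kbps, upgrade_factor)   # 2160 60.0
```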

So what is an acceptable contention ratio?

  • Typically in an urban area right now we are seeing anywhere from 200 to 400 users sharing 100 megabits.
  • In a rural area, double that ratio – 400 to 800 sharing 100 megabits.
  • In the smaller cities of Europe ratios drop to 100 people or less sharing 100 megabits.
  • And in remote areas served by satellite we see 40 to 50 sharing 2 megabits or less.

A Brief History of Peer to Peer File Sharing and the Attempts to Block It


By Art Reisman

The following history is based on my notes and observations as both a user of peer-to-peer, and as a network engineer tasked with cleaning it up.

Round One, Napster, Centralized Server, Circa 2002

Napster was a centralized service; unlike the peer-to-peer behemoths of today, there was never any question of where the copyrighted material was being stored and pirated from. Even though Napster did not condone pirated music and movies on their site, the courts decided that by allowing copyrighted material to exist on their servers, they were in violation of copyright law. Napster’s days of free love were soon over.

From a historic perspective, the importance of the decision to force the shutdown of Napster was that it gave rise to a whole new breed of p2p applications. We detailed this phenomenon in our 2008 article.

Round Two, Mega-Upload Shutdown, Centralized Server, 2012

We again saw a doubling down on p2p client sites (they expanded) when the Mega-Upload site, a centralized sharing site, was shut down back in Jan 2012.

“On the legal side, the recent widely publicized MegaUpload takedown refocused attention on less centralized forms of file sharing (i.e. P2P). Similarly, improvements in P2P technology coupled with a growth in file sharing file size from content like Blue-Ray video also lead many users to revisit P2P.”

Read the full article from deepfield.net

The shutdown of Mega-Upload had a personal effect on me, as I had used it to distribute a 30-minute account from a 92-year-old WWII vet in which he recalled, in oral detail, his experience of surviving a German prison camp.

Blocking by Signature, a.k.a. Layer 7 Shaping, a.k.a. Deep Packet Inspection. Late 1990s till Present

Initially the shining-star savior in the fight against illegal content on your network, this technology can be expensive and can fail miserably in the face of newer encrypted p2p applications. It can also get quite expensive to keep up with the ever-changing application signatures, and yet it is still often the first line of defense attempted by ISPs.

We covered this topic in detail in our recent article, Layer 7 Shaping Dying With SSL.

Blocking by Website

Blocking the source sites where users download their p2p clients is still possible. We see this method applied mostly at private secondary schools, where content blocking is an accepted practice. This method does not work for computers and devices that already have p2p clients installed. Once loaded, p2p files can come from anywhere, and there is no centralized site to block.

Blocking Uninitiated Requests. Circa Mid-2000s

The idea behind this method is to prevent your network from serving up any content whatsoever! Sounds a bit harsh, but the average Internet consumer rarely, if ever, hosts anything intended for public consumption. Yes, at one time, during the early stages of the Internet, my geek friends would set up home pages similar to what everybody exposes on Facebook today. Now, with the advent of hosting sites, there is just no reason for a user to host content locally, and thus no need to allow access from the outside. Most firewalls have a setting to disallow uninitiated requests into your network (obviously with an exemption for your publicly facing servers).

We actually have an advanced version of this feature in our NetGladiator security device. We watch each IP address on your internal network and take note of outgoing requests; nobody comes in unless they were invited. For example, if we see a user on the network make a request to a Yahoo server, we expect a response to come back from a Yahoo server; however, if we see a Yahoo server contact a user on your network without a pending request, we block that incoming request. In the world of p2p, this should prevent an outside client from requesting and receiving a copyrighted file hosted on your network. After all, no p2p client is going to randomly send out invites to outside servers… or would they?
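The invite-tracking idea can be sketched in a toy form. A real firewall does this with stateful connection tracking (e.g. netfilter/conntrack); this minimal version just records which outside hosts each inside IP has contacted, and the IPs and class name are invented for illustration:

```python
# Toy sketch of "block uninitiated requests": inbound traffic is allowed
# only if an inside host previously initiated contact with that outside host.
class UninitiatedRequestFilter:
    def __init__(self):
        self.pending = set()  # (inside_ip, outside_ip) pairs seen outbound

    def outbound(self, inside_ip, outside_ip):
        """An inside host initiates a request; remember the pair."""
        self.pending.add((inside_ip, outside_ip))

    def inbound_allowed(self, outside_ip, inside_ip):
        """Allow inbound traffic only if the inside host invited it."""
        return (inside_ip, outside_ip) in self.pending

fw = UninitiatedRequestFilter()
fw.outbound("10.0.0.5", "98.136.0.1")                 # user contacts a Yahoo server
print(fw.inbound_allowed("98.136.0.1", "10.0.0.5"))   # True  (invited)
print(fw.inbound_allowed("203.0.113.9", "10.0.0.5"))  # False (uninvited)
```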

I spent a few hours researching this subject, and here is what I found (this may need further citations). It turns out that p2p distribution may be a bit more sophisticated and has ways to get around the block-uninitiated-query firewall technique.

P2P networks such as Pirate Bay use a directory service of super nodes to keep track of what content peers have and where to find it. When you load up your p2p client for the first time, it just needs to find one super node to get connected; from there it can start searching for available files.

Note: You would think that if these super nodes were aiding and abetting in distributing illegal content, the RIAA could just shut them down like they did Napster. There are two issues with this assumption:

1) The super nodes do not necessarily host content, hence they are not violating any copyright laws. They simply coordinate the network in the same way DNS services keep track of domain names and where to find servers.
2) The super nodes are not hosted by Pirate Bay; they are basically commandeered from their network of users, who unwittingly agree to perform this directory service when clicking the license agreement that nobody ever reads.

In my research, I have talked to network administrators who claim that despite blocking uninitiated outside requests on their firewalls, they still get RIAA notices. How can this be?

There are only two ways this can happen.

1) The RIAA is taking the liberty of simply accusing a network of hosting illegal content based on the directory listings of a super node. In other words, if they find a directory on a super node pointing to copyrighted files on your network, that might be information enough to accuse you.

2) More likely, and much more complex, is that the super nodes are brokering the transaction as a condition of being connected. Basically, this means that when a p2p client within your network contacts a super node for information, the super node directs the client to send data to a third-party client on another network. Thus the sending of information from the inside of your network looks to the firewall as if it were initiated from within. You may have to think about this, but it makes sense.

Behavior-Based Thwarting of p2p. Circa 2004 – NetEqualizer

Behavior-based shaping relies on spotting the unique footprint of a client sending and receiving p2p traffic. From our experience, these clients just do not know how to lay low and stay under the radar. It’s like a criminal smuggling drugs doing 100 MPH on the highway; they just can’t help themselves. Part of the p2p methodology is to find as many sources of a file as possible and then download from all sources simultaneously. Combine this behavior with the fact that most p2p consumers are trying to build up a library of content, and thus initiating many file requests, and you get a behavior footprint that can easily be spotted. By spotting this behavior and making life miserable for these users, you can achieve self-compliance on your network.
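The footprint itself is easy to sketch: a p2p client talks to far more distinct remote hosts at once than a normal web user does. The threshold and IPs below are made-up illustrations, not NetEqualizer's actual detection logic:

```python
# Flag hosts whose count of distinct remote peers suggests p2p behavior.
from collections import defaultdict

P2P_PEER_THRESHOLD = 50   # hypothetical: distinct remote IPs before we flag a host

def flag_p2p_suspects(flows):
    """flows: iterable of (local_ip, remote_ip) active connection pairs."""
    peers = defaultdict(set)
    for local_ip, remote_ip in flows:
        peers[local_ip].add(remote_ip)
    return [ip for ip, remotes in peers.items() if len(remotes) >= P2P_PEER_THRESHOLD]

# A normal web user talks to a handful of servers; a p2p client to dozens or more.
flows = [("10.0.0.7", f"198.51.100.{i}") for i in range(80)]        # p2p suspect
flows += [("10.0.0.8", "93.184.216.34"), ("10.0.0.8", "142.250.1.1")]  # web user
print(flag_p2p_suspects(flows))   # ['10.0.0.7']
```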

Read a smarter way to block p2p traffic.

Blocking the RIAA probing servers

If you know where the RIAA is probing from, you can deny all traffic to their probes, thus preventing the probing of files on your network and the ensuing nasty cease-and-desist letters.

Check List for Integrating Active Directory to Your Bandwidth Controller


By Art Reisman, CTO, www.netequalizer.com


The problem statement: You have in place an authentication service such as Radius, LDAP, or Active Directory, and now you want to implement some form of class of service per customer, for example data usage limits (quotas) or bandwidth speed restrictions per user. To do so, you’ll need to integrate your authentication device with an enforcement device, typically a bandwidth controller.

There are products out there, such as Nomadix, that do both (authentication and rate limiting), but most authentication devices are not turn-key when it comes to a mechanism for setting rate limits.

Your options are:

1) You can haggle your way through various forums that give advice on setting rate limits with AD,

2) You can embark on a software integration project, using a consultant, to accomplish your bandwidth restrictions.

In an effort to help customers appreciate and understand what goes into such an integration, I have shared the notes that I used as a starting point when synchronizing our NetEqualizer with Radius.

1) Start by developing (or borrowing if you can) a generic abstract interface (middleware) that is not specific to Active Directory, LDAP, or Radius. Keep it clean and basic so as not to tie your solution to any specific authentication server. The investment in a middleware interface is well worth the upfront cost. By using a middle layer you will avoid a messy divorce of your authentication system from your bandwidth controller should the need arise.

2) Chances are your bandwidth controller speaks IP, and your AD device speaks user name. So you’ll need to understand how your AD can extract IP addresses from user names and send them down to your bandwidth controller.

3) Your bandwidth controller will need a list of IPs or MAC addresses and their committed bandwidth rates. It will need to get this information from your authentication database.

4) On a cold start, you’ll need to make the bandwidth controller aware of all active users, and during the initial synchronization you may want to pace yourself so as not to bog down your authentication controller with a million requests on start-up.

5) Once the bandwidth controller has an initial list of users on board, you’ll need a background re-synch (audit) mechanism to make sure all the rate limits and associated IP addresses stay current.

6) What should the bandwidth controller do if it senses traffic from an IP it is unaware of? You’ll need a default guest rate limit of some kind for unknown IP addresses. Perhaps you’ll want the bandwidth controller to deny service to unknown IPs altogether.

7) Don’t forget to put a timeout on requests from the bandwidth controller to the authentication device.
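The generic middleware layer from step 1, plus the default guest rate for unknown IPs, can be sketched along these lines. The class names, method signatures, and 256kbps guest rate are all invented for illustration, not NetEqualizer's or Radius's actual API:

```python
# The bandwidth controller only ever sees this abstract interface,
# never Radius/LDAP/AD directly, so backends can be swapped cleanly.
from abc import ABC, abstractmethod

class AuthDirectory(ABC):
    """What the bandwidth controller needs, regardless of the auth backend."""

    @abstractmethod
    def active_users(self):
        """Return a list of (ip_address, rate_limit_kbps) for all active users."""

    @abstractmethod
    def lookup(self, ip_address):
        """Return the rate limit for one IP, or None if unknown."""

class RadiusDirectory(AuthDirectory):
    """One concrete backend; an LDAP or AD version would plug in the same way."""
    def __init__(self, accounting_db):
        self.db = accounting_db   # e.g. {"10.0.0.5": 2000}

    def active_users(self):
        return list(self.db.items())

    def lookup(self, ip_address):
        return self.db.get(ip_address)

GUEST_RATE_KBPS = 256   # arbitrary default for unknown IPs

directory = RadiusDirectory({"10.0.0.5": 2000})
print(directory.lookup("10.0.0.5"))                      # 2000
print(directory.lookup("10.0.0.99") or GUEST_RATE_KBPS)  # 256 (unknown IP falls back)
```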

Bandwidth Control from the Public Side of a NAT Router: Is it Possible?


We have done some significant work in our upcoming release with respect to managing network traffic from the outside of private network segments.

The bottom line is that we can now accomplish sophisticated bandwidth optimizations for segments of large networks hidden behind NAT routers.

The problem:

One basic problem with a generic bandwidth controller is that it typically treats all users behind a NAT router as one user.

When using NAT, a router takes one public IP and divides it up so that up to several thousand users on the private side of a network can share it. The most common reason for this is that there is a limited number of public IPv4 addresses to hand out, so it is common for organizations and ISPs to share the public IPs they own among many users.

When a router shares an IP with more than one user, it manipulates a special semi-private part of the IP packet, called a “port”, to keep track of whose data belongs to whom behind the router. The easiest way to visualize this is to think of a company with one public phone number and many private internal extensions on a PBX. In that type of phone arrangement, all the employees share the public phone number for outside calls.

In the case of a NAT’d router, all the users behind the router share one public IP address. For the bandwidth controller sitting on the public side of the router, this creates issues: it can’t shape the individual traffic of each user because all their traffic appears as if it is coming from one IP address.

The obvious solution to this problem is to locate your bandwidth controller on the private side of the NAT router; but for a network with many NAT routers such as a large distributed wireless mesh network, the cost of extra bandwidth controllers becomes prohibitive.

Drum roll: Enter the NetEqualizer superhero.

The Solution:

With our upcoming release, we have made changes to essentially reverse engineer the NAT port addressing scheme inside our bandwidth controller. Even when located on the Internet side of the router, we can now apply our equalizing shaping techniques to individual user streams with much more accuracy than before.

We do this by looking at the unique port mapping for each stream coming out of your router. So if, for example, two users in your mesh network are accessing Facebook, we will treat those users’ bandwidth allocations independently in our congestion control. The benefit of these techniques is the ability to provide QoS for a face-to-face chat session while at the same time limiting the Facebook video component.
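The difference between keying streams on IP alone versus on (IP, port) can be shown in a few lines. The addresses, port numbers, and byte counts below are invented for illustration:

```python
# Behind NAT, every inside user shares one public IP, but the router
# assigns each connection its own source port, so (ip, port) separates them.
from collections import defaultdict

def tally_by_ip(packets):
    usage = defaultdict(int)
    for src_ip, src_port, nbytes in packets:
        usage[src_ip] += nbytes
    return dict(usage)

def tally_by_ip_and_port(packets):
    usage = defaultdict(int)
    for src_ip, src_port, nbytes in packets:
        usage[(src_ip, src_port)] += nbytes
    return dict(usage)

# Two users behind one NAT'd public IP:
packets = [("203.0.113.1", 40001, 9000),   # user A, big video stream
           ("203.0.113.1", 40002, 200)]    # user B, small chat session

print(tally_by_ip(packets))           # {'203.0.113.1': 9200}  -- one indistinct blob
print(tally_by_ip_and_port(packets))  # two separate streams, shapeable individually
```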

You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks


By Art Reisman
CTO – http://www.netequalizer.com

About 10 years ago, we had this idea for QoS across an Internet link. It was simple and elegant, and worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (diffserv) techniques over any public or semi public Internet link, but using our techniques we have proven the impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that setting TOS bits is only effective when you control all sides of a conversation on a link, and this is not possible on most Internet links (think cloud computing and wireless mesh networks). For standard TOS services to work, you must control all the equipment between the two end points. All it takes is one router in the path of a VoIP conversation to ignore a TOS bit, and its purpose is defeated. Thus TOS bits for priority are really only practical inside a corporate LAN/WAN topology.

Look at the root cause of poor quality services and you will find alternative solutions.

Most people don’t realize that the problem with congested VoIP, on any link, is that their VoIP packets are getting crowded out by larger downloads and things like recreational video (this is also true for any interactive cloud-access congestion). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of a local IP talking to a remote Internet IP. When we see a large stream dominating the link, we step back and ask: is the link congested? Is that download crowding out other time-sensitive transactions such as VoIP? If the answer is yes to both questions, then we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple and does not seem plausible, but it works. It works very well, with just one device in the link, irrespective of any other complex network engineering. It works with minimal set-up. It works over MPLS links. I could go on and on; the only reason you have not heard of it is perhaps that it goes against the grain of what most vendors are selling – large orders for expensive high-end routers using TOS bits.
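The two-question logic above can be sketched as a toy function. The capacity, the 85% congestion threshold, and the halving penalty are illustrative values of mine, not the actual equalizing rules:

```python
# Toy equalizer: when the link is congested, throttle the largest stream
# so small time-sensitive flows (like VoIP) get through.
LINK_CAPACITY_KBPS = 10_000
CONGESTION_RATIO = 0.85     # hypothetical: "congested" above 85% utilization

def equalize(streams):
    """streams: dict of stream_id -> current kbps. Returns throttle decisions."""
    total = sum(streams.values())
    if total < LINK_CAPACITY_KBPS * CONGESTION_RATIO:
        return {}                        # link not congested: leave everyone alone
    biggest = max(streams, key=streams.get)
    # proactively take some bandwidth away from the dominating stream
    return {biggest: streams[biggest] * 0.5}

streams = {"voip_call": 54, "web": 800, "large_download": 9000}
print(equalize(streams))   # {'large_download': 4500.0}
```

Note that the VoIP stream is never touched directly; it benefits simply because the bully is reined in, which is the point of the behavior-based approach.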

Related article QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where sending or receiving IP addresses are obfuscated.

Coming this winter we plan to improve upon our QoS techniques so we can drill down inside of Mesh and Cloud networks a bit better.

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect has been that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is because we currently base our decision on a pair of IPs talking to each other, but we do not consider the IP port numbers; and sometimes, especially in a cloud or mesh network, services are trunked across a tunnel using the same IP. As these services get tunneled across a trunk, the data streams are bundled together using one common pair of IPs, and then the streams are broken out based on IP ports so they can be routed to their final destinations. For example, in some cloud computing environments there is no way to differentiate a video stream within the tunnel coming from the cloud from a smaller data-access session; they can sometimes both be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give priority to it) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel, sharing the IPs of other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are often sorted out on different ports at the ends of the tunnel. With our new release coming this winter, we will start to look at streams as IP and port number, allowing much greater resolution for QoS inside the cloud and inside your mesh network. Stay tuned!

Layer 7 Application Shaping Dying with Increased SSL


By Art Reisman
CTO – www.netequalizer.com

When you put a quorum of front-line IT administrators in a room and an impromptu discussion breaks out, I become all ears. For example, last Monday the discussion at our technical seminar at Washington University turned to the age-old subject of controlling P2P.

I was surprised to hear from several of our customers just how difficult it has become to implement Layer 7 shaping. The new challenge stems from the fact that SSL traffic cannot be decrypted and identified by a central bandwidth controller. Although we have known about this limitation for a long time, my sources tell me there has been a pick-up in SSL adoption rates over the last several years. I don’t have exact numbers, but suffice it to say that SSL usage is way up.

A traditional Layer 7 shaper will report SSL traffic as “unknown.” A small amount of unknown traffic has always been considered tolerable, but now, with the pick-up in SSL traffic, rumor has it that some vendors are requiring a module on each end node to decrypt SSL pages. No matter what side of the Layer 7 debate you are on, this provision can be a legitimate show stopper for anybody providing public or semi-open Internet access, and here is why:

Imagine your ISP requiring you to load a special module on your laptop or iPad to decrypt all your SSL information and send them the results. Obviously, this will not go over very well on a public Internet. This relegates Layer 7 technologies to networks where administrators have absolute control over all the end points. I suppose this will not be a problem for private businesses, where recreational traffic is not allowed, and also in countries with extreme controls such as China and Iran, but for public Internet providers in the free world, whether it be student housing, a library, or a municipal ISP, I don’t see any future in Layer 7 shaping.

How to Speed Up Your Wireless Network


Editor’s Notes:

This article was adapted and updated from our original article for generic Internet congestion.

Note: This article is written from the perspective of a single wireless router; however, all the optimizations explained below also apply to more complex wireless mesh networks.

It occurred to me today that in all the years I have been posting about common ways to speed up your Internet, I have never really written a plain and simple consumer explanation dedicated to how a bandwidth controller can speed up a congested wireless network. After all, it seems intuitive that a bandwidth controller is something an ISP would use to slow down and regulate a user’s speed, not make it faster; but there can be a beneficial side to a smart bandwidth controller that will make a user’s experience on a network appear much faster.

What causes slowness on a wireless shared link?

Everything you do on the Internet creates a connection from inside your network to the Internet, and all these connections compete for the limited amount of bandwidth on your wireless router.

Quite a few slow-wireless-service problems are due to contention on overloaded access points. Even if you are the only user on the network, a simple update to your virus software running in the background can dominate your wireless link. A large download will often cause everything else you try (email, browsing) to come to a crawl.

Your wireless router provides first-come, first-served service to all the wireless devices trying to access the Internet. To make matters worse, the heavier users (the ones with larger, persistent downloads) tend to get more than their fair share of wireless time slots. Large downloads are like the schoolyard bully – they tend to butt in line and not play fair.

Also, what many people may not realize is that even with a high rate of service to the Internet, your access point, or the wireless backhaul to the Internet, may create a bottleneck at a much lower throughput level than your rated optimal throughput.

So how can a bandwidth controller make my wireless network faster?

A smart bandwidth controller will analyze all your wireless connections on the fly. It will then selectively take away some bandwidth from the bullies. Once the bullies are removed, other applications will get much needed wireless time slots out to the Internet, thus speeding them up.

What application benefits most when a bandwidth controller is deployed on a wireless network?

The most noticeable beneficiary will be your VoIP service. VoIP calls typically don’t use that much bandwidth, but they are incredibly sensitive to a congested link. Even small quarter-second gaps in a VoIP call can make a conversation unintelligible.

Can a bandwidth controller make my YouTube videos play without interruption?

In some cases yes, but generally no. A YouTube video will require anywhere from 500kbs to 1000kbs of your link, and is often itself the bully on the link; however, in some instances there are bigger bullies crushing YouTube performance, and a bandwidth controller can help in those instances.

Can a home user or small business with a slow wireless connection take advantage of a bandwidth controller?

Yes, but the choice is a time-cost-benefit decision. For about $1,600 there are some products out there that come with support that can solve this issue for you, but that price is hard to justify for the home user – even a business user sometimes.

Note: I am trying to keep this article objective and hence am not recommending anything in particular.

On a home-user network it might be easier just to police it yourself, shutting off background applications, and unplugging the kids’ computers when you really need to get something done. A bandwidth controller must sit between your modem/router and all the users on your network.

Related Article Ten Things to Consider When Choosing a Bandwidth Shaper.

Related Article Hidden Nodes on your wireless network

Best Monitoring Tool for Your Network May Not Be What You Think


By Art Reisman

CTO – http://www.netequalizer.com

A common assumption in the IT world is that the starting point for any network congestion solution begins with a monitoring tool.  “We must first figure out what specific type of traffic is dominating our network, and then we’ll decide on the solution”.  This is a reasonable and rational approach for a one time problem. However, the source of network congestion can change daily, and it can be a different type of traffic or different user dominating your bandwidth each day.

When you start to look at the labor and capital expense of “monitor and react” as your daily troubleshooting tool, the solution can become more expensive than your bandwidth contract with your provider.

The traditional way of looking at monitoring your Internet has two dimensions. First, the fixed cost of the monitoring tool used to identify traffic, and second, the labor associated with devising and implementing the remedy. In an ironic inverse correlation, we assert that your ROI will degrade with the complexity of the monitoring tool.

Obviously, the more detailed the reporting/shaping tool, the more expensive its initial price tag. Yet, the real kicker comes with part two. The more detailed data output generally leads to an increase in the time an administrator is likely to spend making adjustments and looking for optimal performance.

But, is it really fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it wouldn’t make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. But, typically, the more information an admin has about a network, the more inclined he or she might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that when the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. In reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth monitoring tool is a loss? Not at all. Bandwidth monitoring and network adjusting can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

The solution: Be proactive, use a tool that prevents congestion before it affects the quality of your network.

An effective compromise for many of our customers has been stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can head off trouble with a basic bandwidth control solution in place (such as a NetEqualizer). With a smart, proactive congestion control device, the acute problem of a network locking up will stop.

Yes, there may be a need to look at your overall bandwidth usage trends over time, but you do not need an expensive detailed monitoring tool for that purpose.

Here are some other articles on bandwidth monitoring that we recommend.

List of monitoring tools compiled by Stanford.

ROI tool to determine how much a bandwidth control device can save.

Great article on choosing a bandwidth controller.

Planetmy
Linux Tips
How to set up a monitor for free

Good enough is better: a lesson from the Digital Camera Revolution

Are You Unknowingly Sharing Bandwidth with Your Neighbors?


Editor’s Note: The following is a revised and updated version of our original article from April 2007.

In a recent article titled, “The White Lies ISPs Tell about Broadband Speeds,” we discussed some of the methods ISPs use when overselling their bandwidth in order to put on their best face for their customers. To recap a bit, oversold bandwidth is a condition that occurs when an ISP promises more bandwidth to its users than it can actually deliver; hence, during peak hours you may actually be competing with your neighbor for bandwidth. Since the act of “overselling” is a relative term, with some ISPs pushing the limit to greater extremes than others, we thought it a good idea to do a quick follow-up and define some parameters for measuring the oversold condition.

For this purpose we use the term contention ratio. A contention ratio is simply the number of users sharing an Internet trunk relative to its size. We normally think of Internet trunks in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio. If sharing the bandwidth on the trunk equally and simultaneously, each user could sustain a constant feed of 100kbs, which is exactly 1/10 of the overall bandwidth.
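That per-user floor can be spelled out in a couple of lines (a minimal sketch; the function name is mine):

```python
# Worst-case sustained rate per user if everyone is active at once.
def per_user_floor_kbps(trunk_kbps, users):
    return trunk_kbps / users

# 10 users sharing a one-megabit trunk: a 10-to-1 contention ratio.
print(per_user_floor_kbps(1000, 10))   # 100.0 kbps each

# A tighter 5-to-1 ratio on the same trunk doubles the floor.
print(per_user_floor_kbps(1000, 5))    # 200.0 kbps each
```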

So what is an acceptable contention ratio?

From a business standpoint, it is whatever a customer will put up with and pay for without canceling their service. This definition may seem ethically suspect, but whether in the bygone days of telephone service or in the contemporary Internet bandwidth business, there are long-standing precedents for overselling. What do you think causes a circuit busy signal? Or a dropped cell phone call? It’s best to leave the moral debate to a university assignment or a Sunday sermon.

So, without pulling any punches, what exactly will a customer tolerate before pulling the plug?
Here are some basic unofficial observations:
  • Rural customers in the US and Canada: contention ratios of 10 to 1 are common (in 2007 this was 20 to 1)
  • International customers in remote areas of the world: contention ratios of 20 to 1 are common (in 2007 this was 80 to 1)
  • Internet providers in urban areas: contention ratios of 5 to 1 are to be expected (in 2007 this was 10 to 1) *

* Larger cable operators have extremely fast last-mile connections, and most of their speed claims are based on the speed of that last mile rather than on their Internet exchange point thresholds. The numbers cited here refer to the connection to the broader Internet, not the last mile from the provider’s office (NOC) to your home. Admittedly, the line of what counts as “the Internet” can blur, as many cable operators cache popular content locally (Netflix movies, for example). Such a movie is delivered from a server at the local office directly to your home, so technically it does not factor into your contention ratio to the Internet.

The numbers above are a good, rough starting point, but things are not as simple as they look. There is a statistical twist as bandwidth amounts get higher.

From the customer’s perspective of speed, contention ratios can actually increase as the overall Internet trunk gets larger. For example, if 50 people can share one megabit without mutiny, it should follow that 100 people can share two megabits without mutiny, as the ratio has not changed. It is still 50 to 1.

However, from observations of hundreds of ISPs, we can conclude that perhaps 110 people can share two megabits with the same tolerance as 50 people sharing one megabit. What this means is that the larger the ISP, the more bandwidth it can buy at a fixed cost per megabit, and thus the larger the contention ratios it can get away with.

Is this really true? And if so, what are its implications for your business?

This is simply an empirical observation, backed up by conversations with literally thousands of ISPs over the course of four years, noticing how their oversubscription ratios increase with the size of their trunk while customer perception of speed remains about the same.

A conservative estimate is that, starting with the baseline ratio listed above, you can safely add 10 percent more subscribers above and beyond the original contention ratio for each megabit of trunk they share.
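One way to read that rule (our interpretation, chosen so that it reproduces the 50-users-on-one-megabit versus 110-users-on-two-megabits observation above) can be sketched as:

```python
def safe_subscribers(base_ratio: float, trunk_mbps: float) -> int:
    """Baseline users (ratio x trunk) plus 10% for each megabit beyond the first."""
    baseline = base_ratio * trunk_mbps
    return round(baseline * (1 + 0.10 * (trunk_mbps - 1)))

print(safe_subscribers(50, 1))  # 50: the baseline itself
print(safe_subscribers(50, 2))  # 110: matches the observation above
```

Treat the output as a rough planning number, not a guarantee; the rule itself is an empirical estimate.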

Related Articles

How to speed up access on your iPhone

How to determine the true speed of video over your Internet Connection

Network Bottlenecks – When Your Router Drops Packets, Things Can Get Ugly


By Art Reisman

CTO – APconnections

As a general rule, when a network router sees more packets than it can send or receive on a link, it drops the extras. Intuitively, one would assume that when your router is dropping packets, the perceived slowdown per user would be a gradual one.

What happens in reality is far worse…

1) Distant users see their response times spiral downward.

Martin Roth, a colleague of ours who founded one of the top performance analysis companies in the world, provided this explanation:

“Any device which is dropping packets “favors” streams with the shortest round trip time, because (according to the TCP protocol) the time after which a lost packet is recovered is depending on the round trip time. So when a company in Copenhagen/Denmark has a line to Australia and a line to Germany on the same internet router, and this router is discarding packets because of bandwidth limits/policing, the stream to Australia is getting much bigger “holes” per lost packet (up to 3 seconds) than the stream to Germany or another office in Copenhagen. This effect then increases when the TCP window size to Australia is reduced (because of the retransmissions), so there are fewer bytes per round trip and more holes between to round trips.”

In the screenshot above (courtesy of avenida.dk), the bandwidth limit is 10 Mbit (= 1 MByte/s net traffic), so everything on top of that gets discarded. The problem is not the discards themselves – that is standard TCP behaviour – but the connections that are forcefully closed because of the discards. After the peak in closed connections there is a “dip” in bandwidth utilization, because too many connections were cut.
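Martin’s round-trip-time point can be sketched numerically. The sketch below uses the standard TCP retransmission-timeout formula from RFC 6298 (SRTT + 4 × RTTVAR, doubling on each back-to-back loss) with the 200 ms floor Linux applies; the RTT and variance figures are invented for illustration only.

```python
def retransmission_hole_s(srtt_s: float, rttvar_s: float, losses: int = 1) -> float:
    """Seconds of 'hole' a stream sees after `losses` back-to-back packet losses."""
    rto = max(0.2, srtt_s + 4 * rttvar_s)  # RFC 6298 formula, 200 ms Linux floor
    return rto * (2 ** (losses - 1))       # exponential backoff per extra loss

# Same router, two destinations: the long-RTT stream gets the bigger hole.
print(retransmission_hole_s(0.030, 0.010))     # Copenhagen -> Germany: 0.2 s (floor)
print(retransmission_hole_s(0.350, 0.300, 2))  # Copenhagen -> Australia: ~3.1 s
```

The asymmetry is the point: the same discard rate costs the long-haul stream an order of magnitude more dead air, which is exactly the "up to 3 seconds" hole described above.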

2) Once you hit a congestion point, where your router is forced to drop packets, overall congestion actually gets worse before it gets better.

When applications don’t get a response due to a dropped packet, instead of backing off and waiting, they tend to send retries, which is why you may have noticed prolonged periods (30 seconds or more) of no service on a congested network. We call this the rolling brownout. Think of it as a doubling down on bandwidth at the moment of congestion: instead of easing into a full network and lightly bumping their heads, all the devices demanding bandwidth ramp up their requests at precisely the moment the network is congested, resulting in an explosion of packet drops until everybody finally gives up.
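That feedback loop can be modeled in a few lines; the retry factor and traffic figures below are invented, purely to show the shape of the spiral:

```python
def offered_load_mbps(demand: float, capacity: float,
                      retry_factor: float = 0.8, rounds: int = 5) -> float:
    """Traffic presented to the link after `rounds` of clients re-sending
    whatever went unserved, instead of backing off."""
    load = demand
    for _ in range(rounds):
        unserved = max(0.0, load - capacity)
        load = demand + unserved * retry_factor  # retries stack on top of demand
    return load

print(offered_load_mbps(40, 50))  # 40.0: under capacity, no amplification
print(offered_load_mbps(60, 50))  # ~86.9: a modest overload snowballs toward ~100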

How do you remedy outages caused by Congestion?

We have written extensively about solutions to prevent bottlenecks. Here is a quick summary with links:

1) The most obvious: increase the size of your link.

2) Enforce rate limits per user.

3) Use something more sophisticated like a NetEqualizer, a device designed specifically to counter the effects of congestion.

From Martin Roth of Avenida.dk

“With NetEqualizer we may get the same number of discards, but we get fewer connections closed, because we “kick” the few connections with the high bandwidth, so we do not get the “dip” in bandwidth utilization.

The graphs (above) were recorded using 1 second intervals, so here you can see the bandwidth is reached. In a standard SolarWinds graph with 10 minute averages the bandwidth utilization would be under 20% and the customer would not know they are hitting the limit.”

———————————————————————-

The excerpt below is a message from a reseller who had been struggling with congestion issues at a hotel; he tried basic rate limits on his router first. Rate limits will buy you some time, but on an oversold network you can still hit the congestion point, and for that you need a smarter device.

“…NetEq delivered a 500% gain in available bandwidth by eliminating rate caps, possible through a mix of connection limits and Equalization. Both are necessary. The hotel went from 750 Kbit max per access point (the entire hotel lobby fighting over 750 Kbit, divided between who knows how many users) to 7 Mbit or more available bandwidth for single users with heavy needs.

The ability to fully load the pipe, then reach out and instantly take back up to a third of it for an immediate need like a speedtest was also really eye-opening.  The pipe is already maxed out, but there is always a third of it that can be immediately cleared in time to perform something new and high-priority like a speed test.”
 
Rate caps: nobody ever gets a fast Internet connection.
Equalized: the pipe stays as full as possible, yet anybody with a business-class need gets served a major portion of the pipe on demand.”
– Ben Whitaker – jetsetnetworks.com

Are those rate limits on your router good enough?

Ten Things You Can Do With Our $999 Bandwidth Controller


Why are we doing this?

In the last few years, bulk bandwidth prices have plummeted, and the fundamentals of managing bandwidth have changed with them. Many of our smaller customers, businesses with 50 to 300 employees, are upgrading their old 10-megabit circuits to 50-megabit links at no extra cost. There seems to be some sort of bandwidth fire sale going on…

Is there a catch?

The only restriction on the Lite unit (when compared to the NE2000) is the number of users it can handle at one time. It is designed for smaller networks. It has all the features and support of the higher-end NE2000. For those familiar with our full-featured product, you do not lose anything.

Here are ten things you can still do with our $999 Bandwidth Controller

1) Provide priority for VOIP and Skype on an MPLS link.

2) Full use of Bandwidth Pools. This is our bandwidth restriction by subnet feature and can be used to ease congestion on remote Access Points.

3) Implement bandwidth restrictions by quota.

4) Have full graphical reporting via NTOP reporting integration.

5) Automated priority via equalizing for low-bandwidth activities such as web browsing, Citrix terminal emulation, and web applications (database queries).

6) Priority for selected video stations.

7) Basic Rate limits by IP, or MAC address.

8) Limit P2P traffic.

9) Automatically email customers on bandwidth overages.

10) Sleep well at night knowing your network will run smoothly during peak usage.

Are Bandwidth Controllers still relevant?

Dirt-cheap bandwidth upgrades are good for consumers, but not for the expensive bandwidth controllers on the market. For some products in excess of $50,000, this might be the beginning of the end. We are fortunate to have built a lean company with low overhead. We rely mostly on a manufacturer-direct sales channel, and this greatly reduces our cost of sale. From experience, we know that even with higher bandwidth amounts, letting your customers run wide open is still going to lead to trouble in the form of congested links and brownouts.

As bandwidth costs drop, the Bandwidth Controller component of your network is not going to go away, but it must also make sense in terms of cost and ease of use. The next generation bandwidth controller must be full-featured while also competing with lower bandwidth prices. With our new low-end models, we will continue to make the purchase of our equipment a “no brainer” in value offered for your dollar spent.

There is nothing like our Lite unit on the market, delivered with support and this feature set at this price point. Read more about the features and specifications in our NetEqualizer Lite Data Sheet.

APconnections Celebrates New NetEqualizer Lite with Introductory Pricing


Editor’s Note:  This is a copy of a press release that went out on May 15th, 2012.  Enjoy!

Lafayette, Colorado – May 15, 2012 – APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is celebrating the expansion of its NetEqualizer Lite product line by offering special pricing for a limited time.

NetEqualizer’s VP of Sales and Business Development, Joe D’Esopo, is excited to announce: “To make it easy for you to try the new NetEqualizer Lite, for a limited time we are offering the NetEqualizer Lite-10 at introductory pricing of just $999 for the unit, our Lite-20 at $1,100, and our Lite-50 at $1,400.  These are incredible deals for the value you will receive; we believe unmatched in our industry today.”

We have upgraded the base technology for the NetEqualizer Lite, our entry-level bandwidth-shaping appliance.  The new Lite retains the small form factor that sets it apart and makes it ideal for field deployments, but now has an enhanced CPU and more memory.  This enables us to include robust graphical reporting, as in our other product lines, and to support additional bandwidth license levels.

The Lite is geared towards smaller networks with less than 350 users, is available in three license levels, and is field-upgradable across them: our Lite-10 runs on networks up to 10Mbps and up to 150 users ($999), our Lite-20 (20Mbps and 200 users for $1,100), and Lite-50 (50Mbps and 350 users for $1,400).  See our NetEqualizer Price List for complete details.  One year renewable NetEqualizer Software & Support (NSS) and NetEqualizer Hardware Warranties (NHW) are offered.

Like all of our bandwidth shapers, the NetEqualizer Lite is a plug-n-play, low maintenance solution that is quick and easy to set-up, typically taking one hour or less.  QoS is implemented via behavior-based bandwidth shaping, “equalizing”, giving priority to latency-sensitive applications, such as VoIP, web browsing, chat and e-mail over large file downloads and video that can clog your Internet pipe.

About APconnections:  APconnections is based in Lafayette, Colorado, USA.  We released our first commercial offering in July 2003, and since then thousands of customers all over the world have put our products into service.  Today, our flexible and scalable solutions can be found in over 4,000 installations in many types of public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, and Internet providers on six (6) continents.  To learn more, contact us at sales@apconnections.net.

Contact: Sandy McGregor
Director, Marketing
APconnections, Inc.
303.997.1300
sandy@apconnections.net

Why is the Internet Access in My Hotel So Slow?


The last several times I have stayed in Ireland and London, my wireless Internet became so horrific in the evening hours that I ended up walking down the street to work at the local Internet cafe. I’ll admit that hotel Internet service is hit or miss – sometimes it is fine, and other times it is terrible. Why does this happen?

To start to understand why slow Internet service persists at many hotels you must understand the business model.

Most hotel chains are run by real estate and management companies; they do not know the intricacies of wireless networks any more than they can fix a broken U-joint on the hotel airport van. Hence, they hire out their IT, both for implementation and for design consulting. The marching orders to the IT consultant are usually to build a system that generates revenue for the hotel: how can we charge for this service? The big cash cow for the hotel industry used to be the phone system, and with the advent of cell phones that went away. Then it was on-demand movies (mostly porn), and that is fading fast. Competing between operators on great free Internet service has not been a priority. However, even with concessions to this business model, there is no reason the problem cannot be solved.

There are a multitude of reasons that Internet service can gridlock in a hotel. Sometimes it is wireless interference, but by far the most common reason is too many users trying to watch video during peak times (maybe a direct result of pay-on-demand movies). When this happens you get the rolling brownout. The service works for 30 seconds or so, duping you into thinking you can send an e-mail or finish a transaction; but just as you submit your request, you notice everything is stuck, with no progress messages in the lower corner of your browser. Then you get an HTTP timeout. Wait perhaps 30 seconds, and all of a sudden things clear up and seem normal, only to repeat again.

The simple solution for this gridlock problem is to use a dynamic fairness device such as our NetEqualizer. Many operators take a first step in bandwidth control and use their routers to enforce fixed rate limits per customer; however, this only provides temporary relief and will not work in many cases.

The next time you experience the rolling brownout, send the hotel a link to this blog article (if you can get the email out). The hotels where we have implemented our solution are doing cartwheels down the street, and we’d be happy to share their stories with anybody who inquires.

Are Those Bandwidth Limits on Your Router Good Enough?


Many routers and firewalls offer the ability to set rate limits per user by IP. On a congested network, simple rate limits are certainly much better than doing nothing at all. Rate limits force a more orderly distribution of bandwidth; however, if you really want to stretch your bandwidth, and thus the number of users that can share a link, some form of dynamic fairness will outperform simple rate limits every time.

To visualize the point I’ll use the following analogy:

Suppose you ran an emergency room in a small town of several hundred people. In order to allocate emergency room resources, you decide to allot one hour, in each 24-hour day, for each person in town to come to the emergency room. So essentially you have double- and triple-booked every hour in the day, and scheduled everybody regardless of whether or not they have a medical emergency. You must also hope that people will hold off on their emergencies until their allotted time slots. I suppose you can see the absurdity in this model? Obviously an emergency room must take cases as they come in, and when things are busy, a screening nurse decides who gets priority – most likely the sickest first.

Dividing up your bandwidth equally between all your users with some form of rate limit per user, although not exactly the same as our emergency room analogy, makes about as much sense.

The simple set-rate-limit model takes one of two forms: divide the bandwidth equally among all users, or, more commonly, oversubscribe.

So, for example, if you had 500 users sharing a 50 megabit trunk, you could:

1) Divide the 50 megabits equally, giving every user 100 Kbps. If every user were on at the same time, the sum total would not exceed 50 megabits.

The problem with this method is that 100 Kbps is a really slow connection – not much faster than dial-up.

2) Oversubscribe: give them all 2-megabit caps – this is more typical. The assumption here is that, on average, not all users will draw their full allotment all the time, so each user will get a reasonable speed most of the time.

This may work for a while, but as usage increases during busy times you will run into the rolling brownout – the term we use for the chaotic, jerky, slow network that typifies peak periods on an oversubscribed network.

3) The smart thing to do is set some sort of rate cap per user, perhaps 4 or 5 megabits, and combine that with something like our NetEqualizer technology.

Equalizing allows users to make use of all the bandwidth on the trunk and only slows down large streams (NOT the user) when the trunk is full. This follows more closely what the triage nurse does in the emergency room, and it is far more effective at making good use of your Internet pipe.
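The idea can be sketched in a few lines (the trigger point and penalty below are made-up illustration values; the real NetEqualizer logic is considerably more involved):

```python
def equalize(flows_kbps: dict, trunk_kbps: int,
             trigger: float = 0.85, penalty: float = 0.5) -> dict:
    """Throttle only the single largest flow once utilization crosses `trigger`."""
    if sum(flows_kbps.values()) < trunk_kbps * trigger:
        return dict(flows_kbps)                        # trunk not full: touch nothing
    shaped = dict(flows_kbps)
    biggest = max(shaped, key=shaped.get)              # the flow hogging the pipe
    shaped[biggest] = int(shaped[biggest] * penalty)   # slow the stream, not the user
    return shaped

flows = {"voip": 80, "web": 300, "download": 9000}
print(equalize(flows, 10_000))
# {'voip': 80, 'web': 300, 'download': 4500} -- only the big download is squeezed
```

Until utilization crosses the trigger nothing changes, and even then only the largest stream is squeezed; the VoIP and web flows never see a cap.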

Related Article: Using your router as a bandwidth controller

I believe this excerpt from the Resnet discussion group last year exemplifies the point:

You have stated your reservations, but I am still going to have to recommend the NetEqualizer. Carving up the bandwidth equally will mean that the user perception of the Internet connection will be poor even when you have bandwidth to spare. It makes more sense to have a device that can maximise the users perception of a connection. Here are some example scenarios.

NetEQ when utilization is low, and it is not doing anything:
User perception of Skype like services: Good
User perception of Netflix like services: Good
User perception of large file downloads: Good
User perception of “ajaxie” webpages that constantly update some doodad on the page: Good
User Perception of games: Good

Equally allocated bandwidth when utilization is low:
User perception of Skype like services: OK as long as the user is not doing anything else.
User perception of Netflix like services: OK as long as the user is not doing anything else.
User perception of large file downloads: Slow all of the time regardless of where the user is downloading the file from.
User perception of “ajaxie” webpages that constantly update some doodad on the page: OK
User perception of games: OK as long as the user is not doing anything else. That is until the game needs to download custom content from a server, then the user has to wait to enter the next round because of the hard rate limit.

NetEQ when utilization is high and penalizing the top flows:
User perception of Skype like services: Good
User perception of Netflix like services: Good – The caching bar at the bottom should be slightly delayed, but the video shouldn’t skip.  The user is unlikely to notice.
User perception of large file downloads: Good – The file is delayed a bit, but will still download relatively quickly compared to a hard bandwidth cap.  The user is unlikely to notice.
User perception of “ajaxie” webpages that constantly update some doodad on the page: Good
User perception of games: Good – downloading content between rounds might be a tiny bit slower, but fast compared to a hard rate limit.

Equally allocated bandwidth when utilization is high:
User perception of Skype like services: OK as long as the user is not doing anything else.
User perception of Netflix like services: OK as long as the user is not doing anything else.
User perception of large file downloads: Slow all of the time regardless of where the user is downloading the file from.
User perception of “ajaxie” webpages that constantly update some doodad on the page: OK as long as the user is not doing anything else.
User perception of games: OK as long as the user is not doing anything else. That is until the game needs to download custom content from a server, then the user has to wait to enter the next round because of the hard rate limit.

As far as the P2P thing is concerned, while I too realized that theoretically P2P would be favored, in practice it wasn’t really noticeable. If you wish, you can use connection limits to deal with this.  

One last thing to note: On Obama’s Inauguration Day, the NetEQ at PLU was able to tame the ridiculous number of live streams of the event without me intervening to change settings.  The only problems reported turned out to be bandwidth problems on the other end.  

I hope you find this useful.

Network Engineer
Information & Technology Services
Pacific Lutheran University

FCC is the Latest Dupe in Speed-Test Shenanigans


Shenanigans: deception or tomfoolery on the part of carnival stand operators. In the case of the Internet speed claims made in the latest Wall Street Journal article, the tomfoolery lies in the lack of detail on how the tests were carried out.

According to the article, all the providers tested by the FCC delivered 50 megabits or more of bandwidth consistently for 24 hours straight. Fifty megabits should be enough for 50 people to continuously watch a YouTube stream at the same time. With my provider, in a large metro area, I often can’t even watch one 1 minute clip for more than a few seconds without that little time-out icon spinning in my face. By the time the video queues up enough content to play all the way through, I have long since forgotten about it and moved on. And then, when it finally starts playing again, I have to go back and frantically find it and kill the YouTube window that is barking at me from somewhere in the background.

So what gives here? Is there something wrong with my service?

I am supposed to have 10-megabit service. When I run a test I get 20 megabits of download – enough to run 20 YouTube streams without issue. So far, so good.

The problem with translating speed test claims to your actual Internet experience is that there are all kinds of potentially real problems once you get away from the simplicity of a speed test, and yes, plenty of deceptions as well.

First, let’s look at the potentially honest problems with your actual speed when watching a YouTube video:

1) Remote server is slow: The YouTube server itself could actually be overwhelmed and you would have no way to know.

How to determine: Try various YouTube videos at once; if this is the problem, you will likely hit different servers and see different speeds.

2) Local wireless problems: I have been the victim of this problem. Running two wireless access points and a couple of wireless cameras jammed one of my access points to the point where I could hardly connect to an Internet site at all.

How to determine: Plug your computer directly into your modem, bypassing the wireless router, and test your speed.

3) Local provider link is congested: Providers have shared distribution points for your neighborhood or area, and these can become congested and slow.

How to determine: Run a speed test. If the local link to your provider is congested, it will show up on the speed test, and there cannot be any deception.

 

The Deceptions

1) Caching

I have done enough first-hand testing to confirm that my provider caches heavily trafficked sites whenever it can. I would not really call this a true deception, as caching benefits both provider and consumer; however, if you end up hitting a YouTube video that is not currently in the cache, your speed will suffer at certain times of the day.

How to determine: Watch a popular YouTube video, and then watch an obscure, seldom-watched one.

Note: Do not watch the same YouTube video twice in a row, as it may end up in your local cache, or your provider’s cache, after the first viewing.

2) Exchange Point Deceptions

The main congestion point between you and the open Internet is your provider’s exchange point. Most likely your cable company or DSL provider has a dedicated wire direct to your home, and this wire most likely has a clean path back to the NOC central location. The advertised speed of your service is most likely a declaration of the speed from your house to your provider’s NOC, hence one could argue this is your Internet speed. This would be fine, except that most public Internet content lies beyond your provider, through an exchange point.

The NOC exchange point is where you leave your local provider’s wires and go out to access data hosted on other provider networks. Providers pay extra costs when you leave their network, in both fees and equipment. A few of the things they can do to deceive you are:

– Give special priority to your speed tests through their site, to ensure the speed test runs as fast as possible.

– Re-route local traffic for certain applications back onto their own network, essentially limiting or preventing traffic from leaving it.

– Locally host the speed test themselves.

How to determine: Use a speed test tool that cannot be spoofed.

See also:

Is Your ISP Throttling your Bandwidth

NetEqualizer YouTube Caching
