The Pros and Cons of Bonded DSL and Load Balancing Multiple WAN links

Editor’s Note: We are often asked whether our NetEqualizer bandwidth shapers can do load balancing. The answer is a qualified yes: we could, if we wanted to integrate with one of the freely available public-domain load-balancing devices. However, doing it correctly, without side effects, appears to be extremely expensive.

In the following excerpt, we reprint some thoughts and experiences from a user with broad knowledge in this area. He gives detailed examples of the trade-offs involved in bonding multiple WAN connections.

When bonding is done by your provider, it is essentially seamless and requires no extra effort (or risk) on the customer's part. It is normally done using bonded T1 links, but can also come in the form of bonded DSL. The technology discussed below applies to users who are bonding two or more lines together without the knowledge (or help) of their upstream provider.

As for Linux freeware load-balancing devices: they are NOT any sort of true bonding at all. If you have 3 x 1.5 Mbit lines, you do NOT have a 4.5 Mbit line with these products. If you really want a 4.5 Mbit bonded line, I'm not aware of any way to do it without BGP or some other method of coordinating with someone upstream on the other side of the link. What a multi-WAN router will do, however, is try to spread sessions evenly over the three lines, so that if your users are collectively downloading at 3 Mbit, that works out to about 1 Mbit on each line. For the most part, it does a pretty good job.

It does this by using fairly dumb round-robin NATing. So it's much like a regular NAT router – everyone behind it has a private 192.168 address (which is the first downside) – and it NATs the private addresses to one of the three public IPs on the WAN ports. The side effect is broken sessions: some websites (particularly SSL sites) will complain that your IP address has changed while, for example, you're inside the shopping cart.
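The round-robin behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration – the addresses and class are mine, not any vendor's firmware:

```python
from itertools import cycle

# Hypothetical public addresses for three WAN lines.
WAN_IPS = ["203.0.113.1", "203.0.113.2", "203.0.113.3"]

class RoundRobinNat:
    """Dumb round-robin NAT: each new outbound session is mapped to
    the next WAN address in turn, regardless of who opened it."""

    def __init__(self, wan_ips):
        self._next_wan = cycle(wan_ips)

    def pick_wan(self, private_ip, dest_ip):
        # No memory of previous choices: the same user talking to the
        # same site may leave on a different WAN next time, which is
        # exactly what breaks SSL shopping carts.
        return next(self._next_wan)

nat = RoundRobinNat(WAN_IPS)
print(nat.pick_wan("192.168.0.10", "198.51.100.5"))  # 203.0.113.1
print(nat.pick_wan("192.168.0.10", "198.51.100.5"))  # 203.0.113.2
```

The second call returns a different public IP for the same user and destination, which is the "IP address has changed" complaint in miniature.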

To counteract that problem, these routers offer 'session persistence', which tries to track each 'session pair' and keep the same WAN IP in effect for it. The first time one of the private IP:ports accesses a particular public IP:port, the router remembers the choice and uses that same WAN port for that public/private pair from then on. The result is that most of the time we don't have broken sessions, but the downside is that the fairness of the load balancing suffers.

For example, if you had 2 lines connected:

  • User1 comes to Speakeasy and does a speed test – the router says 'Speakeasy is out WAN1 forevermore'.
  • User2 looks up Google, and the router says 'Google is out WAN2 forevermore'.
  • User3 goes to a third site, and the router decides it is on WAN1.
  • User4 goes to a fourth site (WAN2).
  • User5 goes to YouTube (WAN1).

And so on. With session persistence turned on, User300 will still reach Speakeasy and YouTube across WAN1, because that is what the router originally learned to be persistent about.
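The walkthrough above can be sketched as a small lookup table. Note that the text describes persistence per session pair, while the example tracks destinations; this hypothetical sketch keys on the destination site to match the User1–User5 walkthrough:

```python
from itertools import cycle

class PersistentBalancer:
    """Round-robin for destinations it has never seen, but once a
    destination is assigned a WAN it keeps that WAN 'forevermore'."""

    def __init__(self, wan_ports):
        self._next_wan = cycle(wan_ports)
        self._table = {}  # destination site -> WAN port

    def pick_wan(self, site):
        if site not in self._table:
            self._table[site] = next(self._next_wan)
        return self._table[site]

lb = PersistentBalancer(["WAN1", "WAN2"])
for site in ["speakeasy.net", "google.com", "site3.example",
             "site4.example", "youtube.com"]:
    print(site, "->", lb.pick_wan(site))  # WAN1, WAN2, WAN1, WAN2, WAN1

# User300 hitting Speakeasy much later still leaves on WAN1:
print(lb.pick_wan("speakeasy.net"))  # WAN1
```

Because assignments never expire, early traffic patterns lock in the WAN mapping, which is how the imbalance the author describes creeps in.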

So, the trade-off is this: if you don't use session persistence, you'll have angry customers because things break. If you do use persistence, the load may become unbalanced.

Also, there are still some broken sites, even with persistence on. For example, some online stores have the customer shopping on one hostname, and when they check out, the cart contents are transferred to a second hostname, which may flag an IP security violation. Any time the router sees a different public IP, it figures it can use a new WAN port; it doesn't know it's the same user and application. There are also game launchers where kids load a 'launcher' program and select a server to connect to, but when they actually click 'connect', the server complains because the WAN address has changed.

In all honesty, it works quite well and there are few problems. We can also make our own exception list: in my shopping-cart example, we can manually map both hostnames to the same WAN address, ensuring the router always uses the same WAN for those sites. This requires that users complain before you even know there is a problem, and it takes some tricks to figure out what's going on. However, the exception list can ultimately handle these problems if you make enough exceptions.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency-sensitive applications, such as VoIP and email. Click here to request a full price list.

Additional articles

How to inexpensively increase Internet bandwidth by bonding cable and DSL.

From a great guide to assessing bandwidth needs, the Bandwidth Management Buyers Guide.

APconnections Field Guide to Contention Ratios

In a recent article titled “The White Lies ISPs Tell about Broadband Speeds,” we discussed some of the methods ISPs use when overselling their bandwidth in order to put on their best face for their customers. To recap a bit, oversold bandwidth is a condition that occurs when an ISP promises more bandwidth to its users than it can actually deliver. Since the act of “overselling” is a relative term, with some ISPs pushing the limit to greater extremes than others, we thought it a good idea to do a quick follow-up and define some parameters for measuring the oversold condition. 

For this purpose we use the term contention ratio. A contention ratio is simply the number of users sharing an Internet trunk relative to the trunk's size. We normally measure Internet trunks in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio. Sharing the bandwidth on the trunk equally and simultaneously, each user could sustain a constant feed of 100 Kbps, which is exactly 1/10 of the overall bandwidth.
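The arithmetic can be expressed as a pair of tiny helpers. This is a sketch; the function names are mine, not from the text:

```python
def contention_ratio(users, trunk_mbits):
    """Contention ratio expressed as users per megabit of trunk,
    i.e. the N in an N-to-1 ratio."""
    return users / trunk_mbits

def per_user_kbps(trunk_mbits, users):
    """Sustained per-user rate if the trunk is shared equally
    and simultaneously (1 Mbit taken as 1000 Kbps)."""
    return trunk_mbits * 1000 / users

# 10 users on a one-megabit trunk: 10-to-1 ratio, 100 Kbps each.
print(contention_ratio(10, 1))  # 10.0
print(per_user_kbps(1, 10))     # 100.0
```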

So what is an acceptable contention ratio?

From a business standpoint, it is whatever a customer will put up with and pay for without canceling their service. This definition may seem ethically suspect, but whether in the bygone days of telecommunications phone service or contemporary Internet bandwidth business, there are long-standing precedents for overselling. What do you think a circuit busy signal is caused by? Or a dropped cell phone call? It’s best to leave the moral debate to a university assignment or a Sunday sermon.

So, without pulling any punches, what exactly will a customer tolerate before pulling the plug?
Here are some basic observations:
  • Rural customers in the US and Canada: Contention ratios of 50 to 1 are common
  • International customers in remote areas of the world: Contention ratios of 80 to 1 are common
  • Internet providers in urban areas: Contention ratios of 20 to 1 are to be expected
The numbers above are a good, rough starting point, but things are not as simple as they look. There is a statistical twist as bandwidth amounts get higher.

Contention ratios can actually increase as the overall Internet trunk size gets larger. For example, if 50 people can share one megabit without mutiny, it should follow that 100 people can share two megabits without mutiny, since the ratio has not changed. It is still 50 to 1.

However, from observations of hundreds of ISPs, we can conclude that perhaps 110 people can share two megabits with the same tolerance as 50 people sharing one megabit. What this means is that the larger the ISP's trunk, the higher the contention ratio it can get away with.

Is this really true? And if so, what are its implications for your business?

It is simply an empirical observation, backed up by conversations with literally thousands of ISPs over the course of four years, noting how their oversubscription ratios increase with the size of their trunks.

A conservative estimate is that, starting with the baseline ratios listed above, you can safely add 10 percent more subscribers above and beyond the original contention ratio for each megabit of trunk they share.

Thus, to provide an illustration, 50 people sharing one megabit can safely be increased to 110 people sharing two megabits, and at four megabits you can handle 280 customers. With this understanding, getting more from your bandwidth becomes that much easier.
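One plausible reading of the 10-percent rule – 10 percent more subscribers for each megabit beyond the first, starting from the 50-to-1 rural baseline – reproduces the 50 and 110 figures; the 280 figure at four megabits implies slightly more aggressive compounding. So treat the function below as an illustrative sketch under that assumption, not a formula from the text:

```python
def safe_subscribers(trunk_mbits, base_ratio=50, bonus=0.10):
    """Rough estimate of how many subscribers a trunk can carry:
    the baseline contention ratio times trunk size, plus 10% more
    subscribers for each megabit beyond the first. base_ratio=50
    is the rural-ISP figure; the reading of the rule is an assumption."""
    baseline = base_ratio * trunk_mbits
    return round(baseline * (1 + bonus * (trunk_mbits - 1)))

print(safe_subscribers(1))  # 50
print(safe_subscribers(2))  # 110
```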
