Will Bandwidth Shaping Ever Be Obsolete?


By Art Reisman

CTO – www.netequalizer.com

I find public forums where universities openly share information about their bandwidth shaping policies to be an excellent resource. Unlike commercial providers, these user groups have found that technical collaboration is in their best interest, and they often openly discuss current trends in bandwidth control.

A recent university IT user group discussion thread kicked off with the following comment:

“We are in the process of trying to decide whether or not to upgrade or all together remove our packet shaper from our residence hall network.  My network engineers are confident we can accomplish rate limiting/shaping through use of our core equipment, but I am not convinced removing the appliance will turn out well.”

Notice that he is not talking about removing rate limits completely, just backing off from an expensive extra piece of packet shaping equipment and using the simpler rate limits available on his router. The point of my reference to this discussion is not so much to debate the different approaches to rate limiting, but to emphasize that, at this point in time, running wide-open without some sort of restriction is not even being considered.

Despite an 80 to 90 percent reduction in bulk bandwidth prices in the past few years, bandwidth is not yet cheap enough for an ISP to run wide-open. Will it ever be possible for an ISP to run wide-open without deliberately restricting its users?

The answer is not likely.

First of all, there seems to be no limit to the ways consumer devices and content providers will conspire to gobble bandwidth. The common assumption is that no matter what an ISP does to deliver higher speeds, consumer appetite will outstrip it.

Yes, an ISP can temporarily leap ahead of demand.

We do have a precedent from several years ago. In 2006, the University of Brighton in the UK was able to unplug our bandwidth shaper without issue. When I followed up with their IT director, he mentioned that their students’ total consumption was capped by the far-end services of the Internet, and thus they never hit their heads on the ceiling of the local pipes. Running without restriction, 10,000 students were not able to eat up their 1-gigabit pipe! I must caveat this experiment by saying that the UK university system had invested heavily in subsidized bandwidth and was far ahead of the average ISP curve for the times. Video content services on the Internet were just not that widely used by students then. Such an experiment today would bring a pipe under a similar contention ratio to its knees in a few seconds. I suspect today one would need on the order of 15 to 25 gigabits to run wide open without contention-related problems.

It also seems that we are coming to the end of the line for bandwidth in the wireless world much more quickly than in the wired world.

It is unlikely consumers are going to carry cables around with their iPads and iPhones to plug into wall jacks any time soon. With the diminishing returns on investment for higher speeds on the wireless networks of the world, bandwidth control is the only way to keep some kind of order.

Lastly, I do not expect bulk bandwidth prices to continue to fall at their present rate.

The last few years of falling prices are the result of a perfect storm of factors not likely to be repeated.

For these reasons, it is not likely that bandwidth control will be obsolete for at least another decade. I am sure we will be revisiting this issue in the next few years for an update.

More Ideas on How to Improve Wireless Network Quality


By Art Reisman

CTO – http://www.netequalizer.com

I just came back from one of our user group seminars held at a very prestigious university. Their core networks are all running smoothly, but they still have some hard-to-find, sporadic dead spots on their wireless network. It seems that no matter how many site surveys they do, and how many times they try to optimize the placement of their access points, they always end up with sporadic, transient dark spots.

Why does this happen?

The issue with 802.11 class wireless service is that most access points lack intelligence.

With low traffic volumes, wireless networks can work flawlessly, but add a few extra users and you can get a perfect storm. Combine some noise with a loud talker close to the access point (the hidden node problem), and users with weaker signals will simply get crowded out until the loud talker with the stronger signal is done. These outages are generally regional, localized to a single AP, and may have nothing to do with overall usage on the network. Often, troubleshooting is almost impossible: by the time the investigation starts, the crowd has dispersed and all an admin has to go on is complaints that cannot be reproduced.

Access points also have a mind of their own. They will often back down from the best-case throughput speed to a slower speed in a noisy environment. I don’t mean audible noise, but crowded airwaves: lots of talkers and possible interference from other electronic devices.

For a quick stopgap solution, you can take a bandwidth controller and…

Put tight rate caps on all wireless users; we suggest 500 kbps or slower. Although this might seem counter-intuitive and wasteful, it will keep loud talkers with strong signals from dominating an entire access point. Many operators cringe at this sort of idea, and we admit it might seem a bit crude. However, in the face of random users getting locked out completely, and the high cost of retrofitting your network with a smarter mesh, it can be very effective.
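To make this concrete, here is a minimal sketch of a per-user rate cap implemented as a token bucket, written in Python purely for illustration. The 500 kbps figure matches the suggestion above, but the class, the burst allowance, and the per-MAC bookkeeping are assumptions of mine, not the configuration of any particular product.

```python
import time

class TokenBucket:
    """Simple token-bucket rate cap: each user may send at most `rate_bps`
    bits per second on average, with a small burst allowance."""

    def __init__(self, rate_bps=500_000, burst_bits=64_000):
        self.rate_bps = rate_bps      # the 500 kbps cap suggested above
        self.capacity = burst_bits    # how far a user may briefly burst
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits):
        """Return True if the packet fits under the cap, False if it
        should be queued or dropped."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate_bps)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

# One bucket per wireless client (keyed by MAC address, for example).
caps = {}
def enforce_cap(mac, packet_bits):
    bucket = caps.setdefault(mac, TokenBucket())
    return bucket.allow(packet_bits)
```

In practice the same cap would be enforced per client at the access point, or by a bandwidth controller sitting in front of it.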

Along the same lines as using fixed rate caps, a somewhat more elegant solution is to measure the peak draw on your mesh and implement equalizing on the largest streams at peak times. Even with a smart mesh network of integrated APs (described in the next section), you can get a great deal of relief by dynamically throttling the largest streams on your network during peak times. This method allows users to pull bigger streams during off-peak hours.
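As a rough sketch of the equalizing idea, the logic below throttles only the handful of largest streams, and only when the total draw nears the pipe size. The capacity figure, the thresholds, and the set_rate_limit()/clear_rate_limit() hooks are hypothetical placeholders, not a real API.

```python
# Hypothetical sketch of "equalizing": throttle only the largest streams,
# and only when the link is near saturation.
LINK_CAPACITY_BPS = 100_000_000      # assumed 100 Mbps wireless backhaul
PEAK_THRESHOLD = 0.85                # start equalizing at 85% utilization
PENALTY_RATE_BPS = 250_000           # temporary cap on the top talkers

def equalize(streams, set_rate_limit, clear_rate_limit):
    """`streams` maps a stream id to its current bits-per-second draw.
    The two callbacks stand in for whatever mechanism actually enforces
    the limit (for example, a queueing discipline)."""
    total = sum(streams.values())
    if total < PEAK_THRESHOLD * LINK_CAPACITY_BPS:
        for sid in streams:
            clear_rate_limit(sid)    # off-peak: let everyone pull full speed
        return
    # Peak period: penalize the largest streams first.
    top_talkers = sorted(streams.items(), key=lambda kv: kv[1], reverse=True)[:5]
    for sid, bps in top_talkers:
        if bps > PENALTY_RATE_BPS:
            set_rate_limit(sid, PENALTY_RATE_BPS)
```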

Another solution would be to deploy smarter mesh access points…

I have to backtrack a bit on my comments above about access points lacking intelligence. The modern mesh offerings from companies such as:

Aruba Networks (www.arubanetworks.com)

Meru (www.merunetworks.com)

Meraki (www.meraki.com)

all have intelligence designed to reduce hidden node and other congestion problems, using techniques such as:

  • Steering users with weaker signals to a nearby access point. The AP basically ignores the weaker user’s signal altogether, forcing the client to seek a connection with another AP in the mesh, and thus better service.
  • Preventing low-quality users from connecting at slow data rates, so the access point does not need to back off for all users (a rough sketch of this kind of policy follows the list).
  • Smarter logging, so an admin can go in after the fact and at least get a history of what the AP was doing at the time.
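To make the second bullet concrete, here is a toy sketch of the kind of admission policy a smarter AP might apply. The thresholds and function names are invented for illustration and do not correspond to any vendor’s actual firmware or API.

```python
# Toy sketch of a "smart AP" admission policy: refuse to associate clients
# whose signal or negotiated data rate would drag the whole cell down.
# Thresholds are illustrative only.
MIN_RSSI_DBM = -75        # weaker clients are nudged toward a closer AP
MIN_RATE_MBPS = 12        # refuse legacy/low-rate connections

def should_associate(client_rssi_dbm, client_rate_mbps):
    """Return True if this AP should accept the client, False if it should
    ignore the client so it roams to a better-placed AP in the mesh."""
    if client_rssi_dbm < MIN_RSSI_DBM:
        return False      # weak signal: let a nearby AP serve this client
    if client_rate_mbps < MIN_RATE_MBPS:
        return False      # slow rate would force airtime back-off for everyone
    return True
```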

Related article: optimizing wireless transmission.

Wireless Network Supercharger: 10 Times Faster?


By Art Reisman

CTO – http://www.netequalizer.com

I just reviewed this impressive article:

  • David Talbot reports to MIT‘s Technology Review that “Academic researchers have improved wireless bandwidth by an order of magnitude… by using algebra to banish the network-clogging task of resending dropped packets.”

Unfortunately, I do not have enough details to explain the specific breakthrough claims in the article. However, drawing on some existing background and a few analogies, I can detail why there is room for improvement.

What follows is a general explanation of why there is room for a better method of error correction, and for eliminating retries, on a wireless network.

First off, we need to cover the effects of missing wireless packets and why they happen.

In a wireless network, the sender transmits a series of ones and zeros using a carrier frequency. Think of it like listening to your radio, except that instead of hearing a person talking, all you hear is a series of beeps and silence. In the case of a wireless network transmission, though, the beeps come so fast that you could not possibly hear the difference between a beep and silence. The good news is that a wireless receiver not only hears the beeps and silence, it interprets them as binary ones and zeros and assembles them into a packet.

The problem with this form of transmission is that wireless frequencies have many uncontrolled variables that affect reliability. It would not be all that bad if carriers were not constantly pushing the envelope. Advertised speeds are based on a best-case signal, where the provider needs to cram as many bits as possible into the frequency window in the shortest amount of time. There is no margin for error. With thousands of bits typically in a packet, it only takes a few of them being misinterpreted for the whole packet to be lost and re-transmitted.
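A quick back-of-the-envelope calculation shows how little margin there is. The bit-error rate below is an assumed figure purely for illustration:

```python
# Probability that an entire packet survives, given an assumed bit-error rate.
bit_error_rate = 1e-4          # one bad bit in 10,000 (illustrative)
packet_bits = 12_000           # roughly a 1,500-byte packet

p_packet_ok = (1 - bit_error_rate) ** packet_bits
print(f"chance the whole packet arrives clean: {p_packet_ok:.0%}")
# -> roughly 30%; the other ~70% of packets would have to be re-sent.
```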

The normal way to tell if a packet is good or bad is a technique called a checksum. Basically, the receiver counts the incoming bits and totals them up as they arrive. Everything in this dance is based on timing: the receiver listens to each time slot, increments a counter if it hears a beep, and leaves the counter alone if it hears silence. At the end of a prescribed time, it totals the bits received and compares the total to a separate sum that is also transmitted. I am oversimplifying this process a bit, but think of it like two guys sending box cars full of chickens back and forth on a blind railroad with no engineers, sort of rolling them downhill to each other.
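Here is a minimal sketch of that simplified bit-count checksum, matching the description above (real hardware uses a CRC rather than a plain sum, but the principle is the same):

```python
def bit_count_checksum(bits):
    """Count the 1-bits ("beeps") in the received slot stream."""
    return sum(bits)

def packet_is_good(received_bits, transmitted_checksum):
    """The receiver totals what it heard and compares it to the sum the
    sender transmitted alongside the data."""
    return bit_count_checksum(received_bits) == transmitted_checksum

# Example: one flipped bit makes the totals disagree, so the packet is
# discarded and must be re-sent.
sent = [1, 0, 1, 1, 0, 1, 0, 0]
received = [1, 0, 1, 0, 0, 1, 0, 0]          # one "beep" lost to noise
print(packet_is_good(received, bit_count_checksum(sent)))   # False
```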

Guy 1 sends three box cars full of chickens to Guy 2, and then a fourth box car with a note saying, “Please tell me if you got three box cars full of chickens, and also confirm there were 100 chickens in each car,” and then he waits for confirmation back from Guy 2.

Guy 2 gets two box cars full of chickens and the note, reads the note, and realizes he only got two of the three, and that a couple of chickens were missing from one of the box cars. So he sends a note back to Guy 1 that says, “I did not get 3 box cars of chickens, just two, and some of the chickens were missing; they must have escaped.”

The note arrives, and Guy 1 re-sends a new box car to make up for the missing chickens, along with a new note telling Guy 2 that he has re-sent a box car with make-up chickens.

I know this analogy of two guys sending chickens blindly in box cars with confirmation notes sounds somewhat silly and definitely inefficient, but it serves to show just how inefficient wireless communications can get with re-sends. Sending bits through the airwaves can quickly become a quagmire if conditions are not perfect and bits start getting lost.
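Stripped of the chickens, the analogy is essentially a stop-and-wait acknowledge-and-retransmit loop. The sketch below is a generic illustration of that pattern, not the exact 802.11 acknowledgment mechanism, and the 30 percent loss rate is just an assumption:

```python
import random

def send_with_retries(packets, loss_probability=0.3, max_tries=10):
    """Send each packet and wait for an acknowledgment; re-send on loss.
    Returns how many transmissions the link actually carried."""
    transmissions = 0
    for packet in packets:
        for attempt in range(max_tries):
            transmissions += 1
            lost = random.random() < loss_probability   # noisy airwaves
            if not lost:
                break          # the "note" came back: packet acknowledged
    return transmissions

# With 30% loss, delivering 100 packets costs roughly 140 transmissions:
# that overhead is exactly what the MIT approach is trying to eliminate.
print(send_with_retries(range(100)))
```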

The MIT team has evidently found a better way to confirm and ensure the transmission of data. As I have pointed out in countless articles about how congestion control speeds up networks, there is great room for improvement if you can eliminate the inefficiency of retries on a wireless network. I don’t doubt that claims of 10-fold increases in actual data transmitted and received can be achieved.
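I do not have the details of the MIT algorithm (published descriptions point to random linear network coding), so the toy example below only illustrates the general principle: send a little coded redundancy up front so the receiver can reconstruct a lost packet algebraically instead of asking for a re-send.

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Sender: transmit two data packets plus one coded "parity" packet.
p1 = b"CHICKENS"
p2 = b"BOXCARS!"
parity = xor_bytes(p1, p2)

# Receiver: suppose p2 was lost in transit, but p1 and the parity arrived.
recovered_p2 = xor_bytes(p1, parity)
assert recovered_p2 == p2       # reconstructed without any re-send
```

Real coded-transport schemes mix many packets with random coefficients rather than a single XOR parity, but the payoff is the same: far fewer round trips spent confirming and re-sending.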

How to Speed Up Your Wireless Network


Editor’s Notes:

This article was adapted and updated from our original article for generic Internet congestion.

Note: This article is written from the perspective of a single wireless router; however, all the optimizations explained below also apply to more complex wireless mesh networks.

It occurred to me today that in all the years I have been posting about common ways to speed up your Internet, I have never really written a plain and simple consumer explanation dedicated to how a bandwidth controller can speed up a congested wireless network. After all, it seems intuitive that a bandwidth controller is something an ISP would use to slow down and regulate a user’s speed, not make it faster; but there is a beneficial side to a smart bandwidth controller that can make a user’s experience on a network feel much faster.

What causes slowness on a wireless shared link?

Everything you do on the Internet creates a connection from inside your network out to the Internet, and all of these connections compete for the limited bandwidth on your wireless router.

Many slow wireless service problems are due to contention on overloaded access points. Even if you are the only user on the network, a simple virus-software update running in the background can dominate your wireless link. A large download will often cause everything else you try (email, browsing) to slow to a crawl.

Your wireless router provides first-come, first-served service to all the wireless devices trying to access the Internet. To make matters worse, the heavier users (the ones with larger, persistent downloads) tend to get more than their fair share of wireless time slots. Large downloads are like the schoolyard bully – they butt in line and don’t play fair.

Also, what many people may not realize is that even with a high rate of service to the Internet, your access point, or the wireless backhaul to the Internet, may create a bottleneck at a much lower throughput level than your service is rated for.

So how can a bandwidth controller make my wireless network faster?

A smart bandwidth controller will analyze all of your wireless connections on the fly and selectively take some bandwidth away from the bullies. Once the bullies are reined in, other applications get the much-needed wireless time slots out to the Internet, which speeds them up.

What application benefits most when a bandwidth controller is deployed on a wireless network?

The most noticeable beneficiary will be your VoIP service. VoIP calls typically don’t use that much bandwidth, but they are incredibly sensitive to a congested link. Even small quarter-second gaps in a VoIP call can make a conversation unintelligible.

Can a bandwidth controller make my YouTube videos play without interruption?

In some cases yes, but generally no. A YouTube video will require anywhere from 500 kbps to 1,000 kbps of your link and is often the bully on the link; however, in some instances there are bigger bullies crushing YouTube performance, and a bandwidth controller can help in those cases.

Can a home user or small business with a slow wireless connection take advantage of a bandwidth controller?

Yes, but the choice is a time-cost-benefit decision. For about $1,600 there are some products out there that come with support and can solve this issue for you, but that price is hard to justify for a home user – and sometimes even for a business user.

Note: I am trying to keep this article objective and hence am not recommending anything in particular.

On a home-user network it might be easier just to police it yourself, shutting off background applications, and unplugging the kids’ computers when you really need to get something done. A bandwidth controller must sit between your modem/router and all the users on your network.

Related Article: Ten Things to Consider When Choosing a Bandwidth Shaper.

Related Article: Hidden Nodes on Your Wireless Network.
