By Art Reisman
CTO – http://www.netequalizer.com
I just came back from one of our user group seminars, held at a very prestigious university. Their core networks all run smoothly, but they still have some hard-to-find, sporadic dead spots on their wireless network. No matter how many site surveys they do, and how many times they optimize the placement of their access points, they still end up with transient dark spots.
Why does this happen?
The issue with 802.11 class wireless service is that most access points lack intelligence.
With low traffic volumes, wireless networks can work flawlessly, but add a few extra users and you can get a perfect storm. Combine some noise with a loud talker close to the access point (the hidden node problem), and the weaker-signaled users simply get crowded out until the loud talker with the stronger signal is done. These outages are generally regional, localized to a single AP, and may have nothing to do with overall usage on the network. Troubleshooting is often nearly impossible: by the time the investigation starts, the crowd has dispersed, and all an admin has to go on is complaints that cannot be reproduced.
Access points also have a mind of their own. In a noisy environment they will often back down from their best-case throughput to a slower speed. By noise I don’t mean audible noise, but crowded airwaves: lots of talkers and possible interference from other electronic devices.
For a quick stopgap solution, you can take a bandwidth controller and…
Put tight rate caps on all wireless users; we suggest 500 kbps or slower. Although this might seem counter-intuitive and wasteful, it prevents loud talkers with strong signals from dominating an entire access point. Many operators cringe at this sort of idea, and we admit it is a bit crude. However, weighed against random users getting locked out completely, and the high cost of retrofitting your network with a smarter mesh, it can be very effective.
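A fixed per-user cap like the one suggested above is typically enforced with a token bucket. The following Python sketch is illustrative only; the class name and the burst allowance are my own choices, not any particular product's implementation:

```python
from time import monotonic

class TokenBucket:
    """Token-bucket rate limiter: tokens refill at rate_bps bits per
    second up to a burst allowance; a packet is admitted only if
    enough tokens are available, otherwise it is dropped or queued."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits      # start with a full bucket
        self.last = monotonic()

    def allow(self, packet_bits):
        now = monotonic()
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

# One bucket per wireless user at the 500 kbps cap suggested above
# (64 kbit burst is an assumed, tunable value).
cap = TokenBucket(rate_bps=500_000, burst_bits=64_000)
```

A sustained sender quickly drains the bucket and is then held to roughly 500 kbps, which is what keeps one strong-signal talker from monopolizing the AP's airtime.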
Along the same lines, a somewhat more elegant solution is to measure the peak draw on your mesh and implement equalizing on the largest streams at peak times. Even with a smart mesh network of integrated APs (described below), you can get a great deal of relief by dynamically throttling the largest streams on your network during peak times. This method still allows users to pull bigger streams during off-peak hours.
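To make the equalizing idea concrete, here is a minimal Python sketch. The function name, the 85% trigger ratio, and the sample flows are all assumptions for illustration, not how any particular product implements it:

```python
def flows_to_throttle(flow_rates, link_capacity_bps,
                      trigger_ratio=0.85, top_n=3):
    """Equalizing sketch: when total draw exceeds trigger_ratio of the
    link, return the top_n largest flows as throttle candidates.
    flow_rates maps a flow id to its current rate in bits/sec."""
    total = sum(flow_rates.values())
    if total < trigger_ratio * link_capacity_bps:
        return []          # off-peak: let everyone pull big streams
    ranked = sorted(flow_rates, key=flow_rates.get, reverse=True)
    return ranked[:top_n]

# Example: a 100 Mbps link running at 95% utilization.
flows = {"video": 40e6, "backup": 30e6, "web": 15e6,
         "mail": 10e6, "ssh": 0.2e6}
print(flows_to_throttle(flows, link_capacity_bps=100e6))
# → ['video', 'backup', 'web']
```

Only the heaviest streams take a penalty, and only while the pipe is actually congested; the interactive `ssh` user never notices.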
Another solution would be to deploy smarter mesh access points…
I have to backtrack a bit on my comment above about access points lacking intelligence. The modern mesh offerings from companies such as:
Aruba Networks (www.arubanetworks.com)
Meru (www.merunetworks.com)
Meraki (www.meraki.com)
all have intelligence designed to reduce the hidden node problem and other congestion issues, using techniques such as:
- Switching off users with weaker signals so they are forced to a nearby access point. The AP does this essentially by ignoring the weaker users’ signals altogether, so those users seek a connection with another AP in the mesh, and thus get better service.
- Preventing low-quality users from connecting at slow speeds, so the access point does not need to back off for all users.
- Smarter logging, so an admin can go in after the fact and at least get a history of what the AP was doing at the time.
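The first two techniques above amount to an admission decision at the AP. A minimal sketch, assuming hypothetical RSSI and rate thresholds (real vendor logic is considerably more sophisticated):

```python
MIN_RSSI_DBM = -70     # assumed threshold: weaker clients are ignored
MIN_RATE_MBPS = 12     # assumed floor: refuse clients that would connect slower

def accept_association(rssi_dbm, negotiated_rate_mbps):
    """Sketch of the admission logic described above: a mesh AP can
    decline weak or slow clients so they roam to a better-placed AP,
    and the cell never backs off to protect one poor link."""
    if rssi_dbm < MIN_RSSI_DBM:
        return False   # too weak: ignore, client will seek a nearby AP
    if negotiated_rate_mbps < MIN_RATE_MBPS:
        return False   # too slow: would drag the whole cell down
    return True
```

The payoff is that one distant laptop negotiating 1 Mbps no longer forces the AP to spend most of its airtime on a single slow client.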
Related article: optimizing wireless transmission.
Will Bandwidth Shaping Ever Be Obsolete?
December 1, 2012 — By Art Reisman
CTO – www.netequalizer.com
I find public forums where universities openly share information about their bandwidth shaping policies an excellent source of information. Unlike commercial providers, these user groups have found technical collaboration is in their best interest, and they often openly discuss current trends in bandwidth control.
A recent university IT user group discussion thread kicked off with the following comment:
“We are in the process of trying to decide whether or not to upgrade or all together remove our packet shaper from our residence hall network. My network engineers are confident we can accomplish rate limiting/shaping through use of our core equipment, but I am not convinced removing the appliance will turn out well.”
Notice that he is not talking about removing rate limits entirely, just backing off from an expensive extra piece of packet-shaping equipment and using the simpler rate limits available on his router. My point in referencing this discussion is not so much to compare the different approaches to rate limiting, but to emphasize that, at this point in time, running wide open without some sort of restriction is not even being considered.
Despite an 80 to 90 percent reduction in bulk bandwidth prices in the past few years, bandwidth is not quite yet cheap enough for an ISP to run wide-open. Will it ever be possible for an ISP to run wide-open without deliberately restricting their users?
The answer is not likely.
First of all, there seems to be no limit to the ways consumer devices and content providers will conspire to gobble bandwidth. The common assumption is that no matter what an ISP does to deliver higher speeds, consumer appetite will outstrip it.
Yes, an ISP can temporarily leap ahead of demand.
We do have a precedent from several years ago. In 2006, the University of Brighton in the UK was able to unplug our bandwidth shaper without issue. When I followed up with their IT director, he mentioned that students’ total consumption was capped by the far-end services of the Internet, and thus they never hit the ceiling of their local pipe. Running without restriction, 10,000 students were unable to fill their 1 gigabit pipe! I must caveat this experiment by noting that the UK university system had invested heavily in subsidized bandwidth and was far ahead of the average ISP curve for the time, and video content services were not yet widely used by students. Such an experiment today would bring a pipe with a similar contention ratio to its knees in a few seconds. I suspect one would now need on the order of 15 to 25 gigabits to run wide open without contention-related problems.
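The back-of-envelope arithmetic behind those numbers, assuming a ~3 Mbps video stream and 50 to 80 percent of students streaming at peak (both figures are my assumptions, not measurements):

```python
students = 10_000
pipe_bps = 1e9                       # the 1 gigabit pipe from 2006

# Fair share if every student were active at once:
print(pipe_bps / students / 1e3)     # → 100.0 (kbps per student)

# Today, with ~3 Mbps video streams and 50-80% of students
# streaming at peak, demand lands in the 15-25 Gbps range:
low  = students * 0.50 * 3e6 / 1e9   # 15.0 Gbps
high = students * 0.80 * 3e6 / 1e9   # 24.0 Gbps
```

100 kbps each was survivable in 2006 precisely because far-end services throttled themselves; the same pipe today would be saturated almost instantly.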
It also seems that we are coming to the end of the line for bandwidth in the wireless world much more quickly than wired bandwidth.
It is unlikely consumers are going to carry cables around with their iPads and iPhones to plug into wall jacks any time soon. With diminishing returns on investment in higher wireless speeds, bandwidth control is the only way to keep some kind of order.
Lastly, I do not expect bulk bandwidth prices to continue falling at their present rate.
The last few years of falling prices are the result of a perfect storm of factors not likely to be repeated.
For these reasons, it is not likely that bandwidth control will be obsolete for at least another decade. I am sure we will be revisiting this issue in the next few years for an update.