More Ideas on How to Improve Wireless Network Quality


By Art Reisman

CTO – http://www.netequalizer.com

I just came back from one of our user group seminars held at a very prestigious University. Their core networks are all running smoothly, but they still have some hard-to-find, sporadic dead spots on their wireless network. It seems that no matter how many site surveys they do, and how many times they optimize the placement of their access points, they always end up with transient dark spots.

Why does this happen?

The issue with 802.11 class wireless service is that most access points lack intelligence.

With low traffic volumes, wireless networks can work flawlessly, but add a few extra users and you can get a perfect storm. Combine some noise with a loud talker close to the access point (the hidden node problem), and the users with weaker signals simply get crowded out until the loud talker with the stronger signal is done. These outages are generally regional, localized to a single AP, and may have nothing to do with the overall usage on the network. Troubleshooting is often nearly impossible: by the time the investigation starts, the crowd has dispersed, and all an admin has to go on is complaints that cannot be reproduced.

Access points also have a mind of their own. In a noisy environment they will often back down from their best-case throughput to a slower speed. I don't mean audible noise, but crowded airwaves: lots of talkers and possible interference from other electronic devices.

For a quick stopgap solution, you can take a bandwidth controller and…

Put tight rate caps on all wireless users; we suggest 500 Kbps or slower. Although this might seem counter-intuitive and wasteful, it will keep loud talkers with strong signals from dominating an entire access point. Many operators cringe at this sort of idea, and we admit it might seem a bit crude. However, weighed against random users getting locked out completely, and the high cost of retrofitting your network with a smarter mesh, it can be very effective.
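
To make the rate-cap idea concrete, here is a minimal, hypothetical sketch of a per-user cap written as a token-bucket limiter in Python. The 500 Kbps figure and the function names are invented for illustration; a real bandwidth controller would enforce this in the kernel or on dedicated hardware, not in a script.

```python
import time

class TokenBucket:
    """Toy per-user rate cap: allow roughly `rate_bps` bits per second."""

    def __init__(self, rate_bps=500_000, burst_bits=64_000):
        self.rate_bps = rate_bps      # sustained cap, e.g. 500 Kbps
        self.burst_bits = burst_bits  # small burst allowance
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Return True if the packet fits under the cap, False to delay or drop it."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, but never beyond the burst size.
        self.tokens = min(self.burst_bits,
                          self.tokens + (now - self.last) * self.rate_bps)
        self.last = now
        needed = packet_bytes * 8
        if self.tokens >= needed:
            self.tokens -= needed
            return True
        return False

# One bucket per wireless user, keyed by MAC or IP address.
caps = {}

def enforce_cap(user, packet_bytes):
    bucket = caps.setdefault(user, TokenBucket())
    return bucket.allow(packet_bytes)

print(enforce_cap("aa:bb:cc:dd:ee:ff", 1500))  # True until the user exceeds the cap
```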

Along the same lines as fixed rate caps, a somewhat more elegant solution is to measure the peak draw on your mesh and implement equalizing on the largest streams at peak times. Even with a smart mesh network of integrated APs (described below), you can get a great deal of relief by dynamically throttling the largest streams on your network during peak periods. This method still allows users to pull bigger streams during off-peak hours.
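
Here is a rough, hypothetical Python sketch of the equalizing idea: only when the link approaches its peak draw, find the largest streams and penalize just those. The capacity figure, the flow table, and the assumption that a penalty roughly halves a flow are all made up for illustration; this is not any product's actual internal logic.

```python
# Hypothetical flow table: (user, label) -> measured bits per second over the last few seconds.
flows = {("10.0.0.12", "youtube"): 4_000_000,
         ("10.0.0.7", "email"): 40_000,
         ("10.0.0.21", "os-update"): 6_000_000}

LINK_CAPACITY_BPS = 10_000_000  # what the AP or backhaul can actually sustain
PEAK_RATIO = 0.85               # start equalizing at 85% utilization

def equalize(flows, capacity=LINK_CAPACITY_BPS, peak_ratio=PEAK_RATIO):
    """Return the flows to throttle: only the largest ones, and only at peak."""
    total = sum(flows.values())
    if total < capacity * peak_ratio:
        return []                   # off-peak: let big downloads run free
    # Sort flows by size and penalize the top consumers first.
    biggest_first = sorted(flows.items(), key=lambda kv: kv[1], reverse=True)
    throttled, running_total = [], total
    for flow, rate in biggest_first:
        if running_total < capacity * peak_ratio:
            break
        throttled.append(flow)      # a real device would add latency or a temporary cap here
        running_total -= rate / 2   # assume the penalty roughly halves the flow
    return throttled

print(equalize(flows))  # at peak, only the largest stream(s) get penalized
```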

Another solution would be to deploy smarter mesh access points…

I have to backtrack a bit on my comment above about access points lacking intelligence. The modern mesh offerings from companies such as:

Aruba Networks (www.arubanetworks.com)

Meru (www.merunetworks.com)

Meraki (www.meraki.com)

All have intelligence designed to reduce hidden node and other congestion problems, using techniques such as the following (a rough sketch of the first technique follows the list):

  • Switching off users with weaker signals so they are forced to a nearby access point. The AP does this by ignoring the weaker users' signals altogether, so those clients seek a connection with another AP in the mesh and thus get better service.
  • Preventing low-quality users from connecting at slow speeds, so the access point does not have to back off its data rate for all users.
  • Smarter logging, so an admin can go in after the fact and at least get a history of what the AP was doing at the time.
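
As a rough illustration of the first technique in the list above, here is a hypothetical Python sketch of an access point that simply ignores association requests from clients whose signal is below a threshold, nudging them toward a neighboring AP. The RSSI cutoff and data shapes are invented for the example and do not come from any particular vendor.

```python
MIN_RSSI_DBM = -70   # hypothetical cutoff; clients weaker than this are ignored

def handle_association_request(client_mac, rssi_dbm, neighbor_aps):
    """Accept strong clients; silently ignore weak ones so they roam elsewhere."""
    if rssi_dbm >= MIN_RSSI_DBM:
        return f"associate {client_mac}"
    # Ignoring the request (sending no response at all) is what pushes the client
    # to probe the next AP in the mesh, where its signal may be stronger.
    print(f"ignoring {client_mac} at {rssi_dbm} dBm; "
          f"nearby APs: {', '.join(neighbor_aps)}")
    return None

handle_association_request("aa:bb:cc:dd:ee:ff", -78, ["ap-2", "ap-3"])
```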

Related article explaining how to optimize wireless transmission.

Wireless Network Supercharger 10 Times Faster?


By Art Reisman

CTO – http://www.netequalizer.com

I just reviewed this impressive article:

  • David Talbot reports in MIT's Technology Review that "Academic researchers have improved wireless bandwidth by an order of magnitude… by using algebra to banish the network-clogging task of resending dropped packets."

Unfortunately, I do not have enough details to explain the breakthrough claims in the article specifically. However, drawing on some background and analogies, what follows is a general explanation of why there is room for a better method of error correction, and for the elimination of retries, on a wireless network.

First off, we need to cover the effects of missing wireless packets and why they happen.

In a wireless network, when transmitting data, the sender transmits a series of ones and zeros using a carrier frequency. Think of it like listening to your radio, except instead of hearing a person talking, all you hear is a series of beeps and silence. In the case of a wireless network transmission, though, the beeps come so fast that you could not possibly hear the difference between beep and silence. The good news is that a wireless receiver not only hears the beeps and silence, it interprets them into binary ones and zeros and assembles them into a packet.

The problem with this form of transmission is that wireless frequencies have many uncontrolled variables that can affect reliability. It would not be all that bad if carriers were not constantly pushing the envelope. Advertised speeds are based on a best-case signal, where the provider needs to cram as many bits into the frequency window in the shortest amount of time possible. There is no margin for error. With thousands of bits typically in a packet, it takes only a few of them being misinterpreted for the whole packet to be lost and re-transmitted.
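
To put a rough number on how little margin there is, here is a quick back-of-the-envelope calculation in Python. The bit error rate and packet size are illustrative figures, not measurements.

```python
# Rough arithmetic for why a few bad bits doom a whole packet (illustrative numbers).
bit_error_rate = 1e-4      # assume one bit in ten thousand is misread
bits_per_packet = 12_000   # roughly a 1500-byte packet

packet_loss = 1 - (1 - bit_error_rate) ** bits_per_packet
print(f"About {packet_loss:.0%} of packets would need a resend")  # roughly 70%
```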

The normal way to tell whether a packet is good or bad is a technique called a checksum. Basically, this means the receiver counts the incoming bits and totals them up as they arrive. Everything in this dance is based on timing. The receiver listens to each time slot; if it hears a beep it increments a counter, and if it hears silence it does not. At the end of a prescribed time, it totals the bits received and compares the total to a separate sum that is also transmitted. I am oversimplifying this process a bit, but think of it like two guys sending box cars full of chickens back and forth on a blind railroad with no engineers, sort of rolling them downhill to each other.
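
Here is a toy Python version of the simplified check described above: the receiver re-counts the bits it heard and compares that count against a total the sender also transmitted. Real 802.11 frames use a CRC rather than a plain bit count, so treat this purely as an illustration of the simplified description, not the actual protocol.

```python
def send(bits):
    """Sender transmits the bits plus a count of the one-bits (the 'separate sum')."""
    return bits, sum(bits)

def receive(bits_heard, transmitted_sum):
    """Receiver re-counts the one-bits; a mismatch means the packet must be resent."""
    if sum(bits_heard) == transmitted_sum:
        return "packet accepted"
    return "checksum mismatch -- request retransmission"

bits, checksum = send([1, 0, 1, 1, 0, 1])
print(receive(bits, checksum))                  # packet accepted
print(receive([1, 0, 0, 1, 0, 1], checksum))    # a bit flipped in transit -> resend
```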

Guy 1 sends three box cars full of chickens to Guy 2, and then a fourth box car with a note saying, "Please tell me if you got three box cars full of chickens, and also confirm there were 100 chickens in each car," and then he waits for confirmation back from Guy 2.

Guy 2 gets two box cars full of chickens and the note, reads the note, and realizes he only got two of the three, and that a couple of chickens were missing from one of the box cars. So he sends a note back to Guy 1 that says, "I did not get three box cars of chickens, just two, and some of the chickens were missing; they must have escaped."

The note arrives, and Guy 1 re-sends a new box car to make up for the missing chickens, along with a new note telling Guy 2 that he has re-sent a box car with make-up chickens.

I know this analogy of two guys sending chickens blindly in box cars with confirmation notes sounds somewhat silly and definitely inefficient, but it serves to show just how inefficient wireless communication can get with re-sends, especially when some of the bits are lost in transmission. Sending bits through the airwaves can quickly become a quagmire if conditions are not perfect and bits start getting lost.

The MIT team has evidently found a better way to confirm and ensure the transmission of data. As I have pointed out in countless articles about how congestion control speeds up networks, there is great room for improvement if you can eliminate the inefficiency of retries on a wireless network. I don't doubt that claims of ten-fold increases in actual data transmitted and received can be achieved.
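
I cannot reproduce the researchers' actual algebraic coding from the article, but a much simpler scheme shows the underlying principle of trading a little redundancy for fewer retries: send one extra packet that is the XOR of the others, and the receiver can rebuild any single lost packet without asking for a resend. This Python sketch is purely illustrative.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Append one parity packet: the XOR of all the data packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(received, lost_index):
    """Rebuild the single missing packet by XOR-ing everything that did arrive."""
    rebuilt = None
    for i, p in enumerate(received):
        if i == lost_index or p is None:
            continue
        rebuilt = p if rebuilt is None else xor_bytes(rebuilt, p)
    return rebuilt

data = [b"box1", b"box2", b"box3"]
sent = encode(data)        # four packets go out instead of three
sent[1] = None             # packet 2 is lost in transit
print(recover(sent, 1))    # b'box2' -- rebuilt with no retransmission needed
```

Real network coding is far more sophisticated than this toy example, but even the toy version removes a round trip for every lost packet, which is where the headroom for such large gains comes from.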

Special Glasses Needed to Spot Network Security Holes


By Art Reisman

CTO – http://www.netequalizer.com

Would you leave for vacation with your garage door wide open or walk off the edge of a cliff looking for a lost dog? Whether it be a bike lock, or that little beep your car makes when you hit the button on your remote, you rely on physical confirmation for safety and security every day.

Because network security holes do not register with any of our human senses, most businesses run blind to vulnerabilities that can be glaringly obvious to a hacker.

Have you ever seen an owl swoop down in the darkness and grab a rabbit? I have, but only once, and that was in the dim glow of a field illuminated by some nearby stadium lights. Owls take hundreds of rodents every night under the cover of darkness; they have excellent night vision, and most rodents don't.

To a hacker, a security hole can be just as obvious as that rabbit. You might feel secure under the cover of darkness, but what is invisible to your senses can be quite obvious to a hacker. They have ways of illuminating your security holes, and they can then choose to exploit them if deemed juicy enough. For some entry points, a hacker might have to look a little harder, like lifting a door mat to reveal a key. Nevertheless, they will see the key, and the problem is you won't even know the key is under the mat.

Fancy automated tools that report risk are nice, but the only way to expose your actual network security holes is to hire somebody with night-vision goggles who can see the holes. Most tools that do audits are not good enough by themselves; they sort of bumble around in the dark, looking and feeling for things, and they really do not see them the way a hacker does.

I'd strongly urge any company that is serious about updating its security to employ a white knight hacker before any other investment outlay. For the same reason that automated systems cannot replace humans, even though billions have been spent on them over the years, you should not start your security defense with an automated tool. It must start with a human hell-bent on breaking into your business and then showing you the holes. It never ceases to amaze me the types of holes our white knight hackers find. There is nothing better at spotting security holes than a guy with special glasses.

Is Your Data Really Secure?


By Zack Sanders

Most businesses, if asked, would tell you they do care about the security of their customers. The controversial part of security comes to a head when you ask the question in a different way. Does your business care enough about security to make an investment in protecting customer data? There are a few companies that proactively invest in security for security’s sake, but they are largely in the minority.

The two key driving factors that determine a business’s commitment to security investment are:

1) Government or Industry Standard Compliance – This is what drives businesses like your credit card company, your local bank, and your healthcare provider to care about security. In order to operate, they are forced to care. Standards like HIPAA and PCI require them to go through security audits and checkups. Note: just because they invest in meeting a compliance standard, it may not translate to secure data, as we will point out below.

2) A Breach Occurs – Nothing will change an organization’s attitude toward security like a massive, embarrassing security breach. Sadly, it usually takes something like this happening to drive home the point that security is important for everyone.

The fact is, most businesses are running on very thin margins, and other operating costs come before security spending. Human nature is such that we prioritize by what is in front of us now. What we don't know can't hurt us. It is easy for a business to assume that its minimum firewall configuration is good enough for now. Unfortunately, the holes in that firewall are not easy to see. Most firewall security can easily be breached through advertised public interfaces.

How do we know? Because we often do complimentary spot checks on company web servers. It is a rare case when we have not been able to break in and gain access to all customer records. Even though our sample set is small, our breach rate is so high that we can reliably extrapolate that most companies can easily be broken into.

As we alluded to above, even some of the companies that follow a standard are still vulnerable. Many large corporations just go through the motions to comply with a standard, so they typically seek out "trusted," large professional services firms to do their audits. Often, these firms conduct boilerplate assessments in which auditors run down a checklist with the sole goal of certifying the application or organization as compliant.

Hiring a huge firm to do an audit makes it much easier to deflect blame in the case of an incident. The employee responsible for hiring the audit firm can say, "Well, I hired XXX – what more could I have done?" If they had hired a small firm to do the audit, and a breach occurred, their judgment and job might come into question – however unfair that might be.

As a professional web application security analyst who has personally handled the aftermath of many serious security breaches, I would advocate that if you take your security seriously, you start with an assessment challenge using a firm that will work to expose your real-world vulnerabilities.

How to Speed Up Your Wireless Network


Editor's Notes:

This article was adapted and updated from our original article for generic Internet congestion.

Note: This article is written from the perspective of a single wireless router; however, all the optimizations explained below also apply to more complex wireless mesh networks.

It occurred to me today that in all the years I have been posting about common ways to speed up your Internet, I have never really written a plain and simple consumer explanation of how a bandwidth controller can speed up a congested wireless network. After all, it seems intuitive that a bandwidth controller is something an ISP would use to slow down and regulate a user's speed, not make it faster; but there is a beneficial side to a smart bandwidth controller that can make a user's experience on a network appear much faster.

What causes slowness on a wireless shared link?

Everything you do on the Internet creates a connection from inside your network to the outside, and all these connections compete for the limited amount of bandwidth on your wireless router.

Many slow wireless service problems are due to contention on overloaded access points. Even if you are the only user on the network, a simple virus-software update running in the background can dominate your wireless link. A large download will often cause everything else you try (email, browsing) to slow to a crawl.

Your wireless router provides first-come, first-served service to all the wireless devices trying to access the Internet. To make matters worse, the heavier users (the ones with larger, persistent downloads) tend to get more than their fair share of wireless time slots. Large downloads are like the schoolyard bully – they tend to butt in line and not play fair.

Also, what many people may not realize is that even with a high rate of service to the Internet, your access point, or the wireless backhaul to the Internet, may create a bottleneck at a much lower throughput level than the rate your connection is provisioned for.

So how can a bandwidth controller make my wireless network faster?

A smart bandwidth controller will analyze all of your wireless connections on the fly. It will then selectively take away some bandwidth from the bullies. Once the bullies are reined in, other applications get the much-needed wireless time slots out to the Internet, thus speeding them up.

What application benefits most when a bandwidth controller is deployed on a wireless network?

The most noticeable beneficiary will be your VoIP service. VoIP calls typically don’t use that much bandwidth, but they are incredibly sensitive to a congested link. Even small quarter-second gaps in a VoIP call can make a conversation unintelligible.

Can a bandwidth controller make my YouTube videos play without interruption?

In some cases yes, but generally no. A YouTube video requires anywhere from 500 Kbps to 1,000 Kbps of your link and is often the bully on the link; however, in some instances there are bigger bullies crushing YouTube performance, and a bandwidth controller can help in those cases.

Can a home user or small business with a slow wireless connection take advantage of a bandwidth controller?

Yes, but the choice is a time-cost-benefit decision. For about $1,600 there are products out there, with support included, that can solve this issue for you, but that price is hard to justify for a home user – and sometimes even for a business user.

Note: I am trying to keep this article objective and hence am not recommending anything in particular.

On a home-user network it might be easier just to police it yourself, shutting off background applications and unplugging the kids' computers when you really need to get something done. Note that a bandwidth controller must sit between your modem/router and all the users on your network.

Related Article: Ten Things to Consider When Choosing a Bandwidth Shaper.

Related Article: Hidden Nodes on Your Wireless Network

How to Put a Value on IT Consulting


By Art Reisman

This post was inspired by a conversation with one of our IT resellers. My commentary is based on thousands of experiences I have had helping solve client network IT issues over the past 20 years.

There is a wide range of ability in the network consulting world, and choosing the right IT consultant is just as important as choosing a reliable car or plane. Shortchanging yourself by buying a shiny new paint job at a low price can lead to disaster.

The problem clients must overcome when picking a consultant is that the person doing the hiring is often not an experienced IT professional, and hence it is hard for them to judge IT competency. A person who has not had to solve real-world networking problems may have no good reference point for judging an IT consultant. It would be like me auditioning pianists for admission to the Juilliard School (also a past customer of ours). I could never hope to distinguish the nuances of a great pianist from a bar hack playing pop songs. In the world of IT, on face value, talent is just as hard to differentiate. A nice guy with good people skills is important but does not prove IT competency. Certifications are fine, but they are also not a guarantee of competency. Going back to my Juilliard example, perhaps with a few tips from an expert I could narrow the field a bit?

Below are some ideas that should provide some guidance when narrowing your choice of IT consultant.

The basic difference in competency, as measured by results, comes down to those professionals who can solve new problems as presented and those who cannot. For example, a consultant without unique problem-solving skills will always try to map a new problem onto a variation of an old problem, and thus will tend to go down a trial-and-error checklist in sequential order. This works for very basic problems that fall within their knowledge base of known issues, but it can really rack up the hours and downtime when the person is presented with a new issue they have not previously encountered. I would ask a potential consultant the question below. Even if you are non-technical, ask it, and listen for enthusiasm in the answer, not so much for the details.

"Can you run me through an example of a unique networking problem you have encountered, and what method you used to solve it?" A good networking person will be full of war stories, proud of them, and should actually enjoy talking about them.

The other obvious place to find a networking consultant is from a reference, but be careful. I would only value the reference if the party giving it has had severe IT failures for comparison.

There are plenty of competent IT people who can do the standard work, but the person giving a reference will only be valuable if they have gone from bad to good, or vice versa. If they start with good, they will assume all IT people are like this and not appreciate what they have stumbled into. If they start with average, they will not know it is average until they experience good. The average IT person will be busy all the time and will eventually solve problems by brute force. In the process they will sound intelligent and always have an issue to solve (often of their own bumbling). Until a reference has experienced the efficiency of somebody really good as a comparison (a good IT person is hardly ever noticed), they won't have the reference point.

NetEqualizer News: October 2012


October 2012

Greetings!

Enjoy another issue of NetEqualizer News! This month, we announce availability of our new NetEqualizer GUI, remind you about the upcoming Midwest Technical Seminar at Washington University – St. Louis, and offer a shipping credit to our international customers as part of a spooky NetEqualizer Halloween celebration. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

During October in the United States, all things become about Halloween. This is big business here, rivaling Christmas in sales, particularly of costumes, candy, and scary decorations. I must admit that I love Halloween and do go all out each year decorating my yard with spooky animatronic figures, a mini fake cemetery, and pumpkins from the garden!

As I have read that many countries love Halloween, we are offering our own “treat” this year to help our international customers celebrate! For a limited time, we will ship internationally at a scary good price ($275 max shipping credit). Read more about this promotion below. Happy Halloween!

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly at art@apconnections.net. I would love to hear from you!

New NetEqualizer GUI Now Available

Over the last couple issues of NetEqualizer News, we’ve discussed our 6.0 Software Update, and in particular our new GUI, quite a bit. Well, our beta testing was a big success, and the GUI is now available to those who wish to have it. The actual GA release will be available in one to two weeks.

Here are some of the exciting new features you’ll see in the new GUI:

New Dashboard Feature

Menus Aligned by Key Functions

Consistent Look and Feel

Professional Quota API

Check out the previous issue of NetEqualizer News for details on each of the above features.

Our beta also resulted in great recommendations from our customers. Here are some additional features we’ve added based on feedback thus far:

– Dashboard Auto Refresh on three different time intervals.

– A Bytes/Bits Conversion Calculator to help you set up your NetEqualizer.

– An Old GUI/New GUI Map that helps you see where interfaces in the old GUI now reside in order to make the transition to the new GUI easier.

Please email us if you would like to have the new NetEqualizer GUI! All new units will ship with the new GUI, and, as stated above, the GA release will be in one to two weeks.

To view a live NetEqualizer demo with the new GUI installed, click here to register.

And, as always, the 6.0 Software Update will be available at no charge to customers with valid NetEqualizer Software Subscriptions (NSS).

For more information on the NetEqualizer or the upcoming release, visit our blog or contact us at:

sales@apconnections.net

-or-

toll-free U.S. (888-287-2492),

worldwide (303) 997-1300 x. 103.


Midwest Technical Seminar Reminder

There is still time to register for the Midwest Technical Seminar on Monday, October 29th at Washington University – St. Louis!

The half-day seminar will include lunch after the event concludes. If you are in the area, we’d like to see you there!

Click here to register and learn more!


A Halloween Shipping Treat

As part of our Halloween celebration, we want to offer a shipping credit for all of our international customers! This means that we will ship anywhere in the world and apply a maximum $275 credit toward shipping costs.

From now until December 2012, take advantage of this great savings opportunity!

For more information on the Halloween shipping promotion, contact us at:

sales@apconnections.net


Best Of The Blog

Editor’s Choice: The Best of Speeding Up Your Internet

By Art Reisman – CTO – APconnections

Over the years we have written a variety of articles related to Internet Access Speed and all of the factors that can affect your service. Below, I have consolidated some of my favorites along with a quick convenient synopsis.

How to determine the true speed of video over your Internet connection:

If you have ever wondered why you can sometimes watch a full-length movie without an issue while at other times you can’t get the shortest of YouTube videos to play without interruption, this article will shed some light on what is going on behind the scenes.

FCC is the latest dupe when it comes to Internet speeds:

After the Wall Street Journal published an article on Internet provider speed claims, I decided to peel back the onion a bit. This article exposes anomalies between my speed tests and what I experienced when accessing real data.

Photo Of The Month

Autumn Walk in the Aspens

Winter is coming here in Colorado. We’ve already had a few very light snows in the area. Despite the onset of cold weather, it really is one of the most beautiful times here. The trees are at their most brilliant and the snow-capped mountains contrast scenically with the bare foothills. If you’ve never been up to the mountains to check out the changing aspen trees, it’s an experience you won’t forget.

Best Monitoring Tool for Your Network May Not Be What You Think


By Art Reisman

CTO – http://www.netequalizer.com

A common assumption in the IT world is that the starting point for any network congestion solution is a monitoring tool: "We must first figure out what specific type of traffic is dominating our network, and then we'll decide on the solution." This is a reasonable and rational approach for a one-time problem. However, the source of network congestion can change daily, and it can be a different type of traffic or a different user dominating your bandwidth each day.

When you start to look at the labor and capital expense of "monitor and react" as your daily troubleshooting tool, the solution can become more expensive than your bandwidth contract with your provider.

The traditional way of looking at monitoring your Internet has two dimensions. First, the fixed cost of the monitoring tool used to identify traffic, and second, the labor associated with devising and implementing the remedy. In an ironic inverse correlation, we assert that your ROI will degrade with the complexity of the monitoring tool.

Obviously, the more detailed the reporting/shaping tool, the more expensive its initial price tag. Yet the real kicker comes with part two: more detailed data output generally leads to an increase in the time an administrator is likely to spend making adjustments and looking for optimal performance.

But, is it really fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it wouldn’t make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. But, typically, the more information an admin has about a network, the more inclined he or she might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that when the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. In reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth monitoring tool is a loss? Not at all. Bandwidth monitoring and network adjusting can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

The solution: Be proactive, use a tool that prevents congestion before it affects the quality of your network.

An effective compromise we see with many of our customers is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can head off trouble by having a basic bandwidth control solution in place (such as a NetEqualizer). With a smart, proactive congestion control device, the acute problem of a network locking up will stop.

Yes, there may be a need to look at your overall bandwidth usage trends over time, but you do not need an expensive detailed monitoring tool for that purpose.

Here are some other articles on bandwidth monitoring that we recommend.

List of monitoring tools compiled by Stanford.

ROI tool, to determine how much a bandwidth control device can save.

Great article on choosing a bandwidth controller.

Planetmy Linux Tips: How to set up a monitor for free.

Good enough is better: a lesson from the Digital Camera Revolution

Networking Equipment and Virtual Machines Do Not Mix


By Joe D'Esopo

Editor's Note:
We often get asked why we don't offer our NetEqualizer as a virtual machine. Although the excerpt below is geared toward the NetEqualizer, you could just as easily substitute the word "router" or "firewall" in place of NetEqualizer, and the information would apply to just about any networking product on the market. For example, even a simple Linksys router has a version of Linux under the hood, and to my knowledge they don't offer that product as a VM. In the following excerpt, lifted from a real response to one of our larger customers (a hotel operator), we detail the reasons.

————————————————————————–

Dear Customer,

We’ve very consciously decided not to release a virtualized copy of the software. The driver for our decision is throughput performance and accuracy.

As you can imagine, the NetEqualizer is optimized to do very fast packet/flow accounting and rule enforcement while minimizing unwanted negative effects (latency, etc.) in networks. As you know, the NetEqualizer needs to operate in the sub-second time domain over what could be tens of thousands of flows per second.

As part of our value proposition, we've been successful, where others have not, at achieving tremendous throughput levels on low-cost commodity platforms (Intel-based Supermicro motherboards), which helps us provide a tremendous pricing advantage (typically we are 1/3 to 1/5 the price of alternative solutions). Furthermore, from an engineering point of view, we have learned from experience that slight variations in Linux, system clocks, NIC drivers, etc. can lead to many unwanted effects, and we often have to re-optimize our system when these things are upgraded. In some areas, in order to enable super-fast speeds, we've had to write our own kernel-level code to bypass unacceptable speed penalties that we would otherwise have to live with on generic Linux systems. To some degree, this is our "secret sauce." Nevertheless, I hope you can see that the capabilities of the NetEqualizer can only be realized by a carefully engineered synergy between our software, Linux, and the hardware.

With that as a background, we have taken the position that a virtualized version of the NetEqualizer would not be in anyone's best interest. The fact is, we need to know and understand the specific timing tolerances of any given system environment at any given moment. This is especially true if a bug is encountered in the field and we need to reproduce it in our labs in order to isolate and fix the problem. (Note: many bugs we find are not of our own making – they are often caused by something in Linux that used to work fine but has changed in a newer release without our knowledge, and that requires us to discover the change and re-optimize around it.)

I hope I've done a good job of explaining the technical complexities surrounding a "virtualized" NetEqualizer. I know it sounds like a great idea, but we really think it cannot be done to an acceptable level of performance and support.