Alternatives to Bandwidth Addiction


By Art Reisman

CTO – http://www.netequalizer.com


Bandwidth providers are organized to sell bandwidth. In the face of bandwidth congestion, their fallback position is always to sell more bandwidth, never to slow consumption. Would a crack dealer send their clients to a treatment program?

For example, I have had hundreds of encounters with people at bandwidth resellers; all of our exchanges have been courteous and upbeat, and yet a vendor relationship rarely develops. Whether they are executives, account managers, or front-line technicians, the only time they call us is as a last resort to save an account, and for several good reasons.

1) It is much easier, conceptually, to sell a bandwidth upgrade than a piece of equipment.

2) Bandwidth contracts bring recurring revenue.

3) Providers can lock in a bandwidth contract, and investors like contracts that guarantee revenue.

4) There is very little overhead to maintain a leased bandwidth line once up and running.

5) And as I alluded to before, would a crack dealer send a client to rehab?

6) Commercial bandwidth infrastructure costs have come down in the last several years.

7) Bandwidth upgrades are very often the most viable and easiest path to relieve a congested Internet connection.

Bandwidth optimization companies exist because at some point customers realize they cannot outrun their consumption. Believe it or not, the limiting factor to Internet access speed is not always the pure cost of raw bandwidth; enterprise infrastructure can be the bottleneck. Switches, routers, cabling, access points and backhauls all have a price tag to upgrade, and sometimes it is easier to scale back on frivolous consumption.

The ROI of optimization is something your provider may not want you to know.

The next time you consider a bandwidth upgrade at the behest of your provider, you might want to look into some simple ways to optimize your consumption. You may not be able to fully arrest your increased demand with an optimizer, but realistically you can slow your growth rate from a typical unchecked 20 percent a year to a more manageable 5 percent a year. With an optimization solution in place, your doubling time for bandwidth demand can easily stretch from about 3.5 years to 15 years, which translates to huge cost savings.
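
For readers who want to check the math, the doubling-time figures above follow directly from compound growth (the 3.5 and 15 year numbers in the text are rough roundings). Here is a minimal sketch of the arithmetic; the growth rates are the ones quoted above, not measurements:

```python
import math

def doubling_time(annual_growth):
    """Years for bandwidth demand to double at a compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time(0.20), 1))  # ~3.8 years at 20% unchecked growth
print(round(doubling_time(0.05), 1))  # ~14.2 years at 5% managed growth
```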

Note: Companies such as Level 3 offer optimization solutions, but with all due respect, I doubt those business units are exciting stockholders with revenue. My guess is they are a break-even proposition; however, I'd be glad to eat crow if I am wrong, as I am purely speculating. Sometimes companies are able to sell adjunct services at a nice profit.

Related NY Times op-ed on bandwidth addiction

The Voice Report Telecom Junkies Interview: “Bandwidth Battles: A New Approach”


Listen in on a conversation with Andrew Wolf, telecom manager and NetEqualizer customer from Linfield College, and Art Reisman, CTO of APconnections, as they speak with George David, president of CCMI and publisher of TheVoiceReport.

Andrew switched from a Packeteer to a NetEqualizer in mid-2011.  In this interview Andrew talks about how the NetEqualizer has not only reduced Linfield College’s network congestion, but also has saved him both ongoing labor costs (no babysitting the solution or adding policies) and upfront costs on the hardware itself.

Listen to the broadcast: Bandwidth Battles: A New Approach
From TheVoiceReport Telecom Junkies, aired on 4/5/2012 | Length 12:16

College & University Guide

Telecom manager Andrew Wolf at Linfield College had a problem – one just about all communications pros face or will face: huge file downloads were chewing up precious bandwidth and dragging down network performance. Plenty of traditional fixes were available, but the cost and staff to manage the apps were serious obstacles. Then Andrew landed on a unique “bandwidth behavior” approach from Art Reisman at NetEqualizer. End result – great performance at much lower costs, a real win-win. Get all the details in this latest episode of Telecom Junkies.

Want to learn more? See how others have benefited from NetEqualizer.  Read our NetEqualizer College & University testimonials.  Download our College & University Guide.

Check List for Integrating Active Directory to Your Bandwidth Controller


By Art Reisman, CTO, www.netequalizer.com


The problem statement: You have in place an authentication service such as Radius, LDAP, or Active Directory, and now you want to implement some form of class of service per customer, for example data usage limits (quotas) or bandwidth speed restrictions per user. To do so, you'll need to integrate your authentication device with an enforcement device, typically a bandwidth controller.

There are products out there, such as Nomadix, that do both (authentication and rate limiting), but most authentication devices are not turn-key when it comes to a mechanism for setting rate limits.

Your options are:

1) You can haggle your way through various forums that give advice on setting rate limits with AD,

2) Or you can embark on a software integration project using a consultant to accomplish your bandwidth restrictions.

In an effort to help customers appreciate and understand what goes into such an integration, I have shared the notes that I have used as a starting point when synchronizing our NetEqualizer with Radius. A minimal code sketch of the same ideas follows the checklist.

1) Start by developing (or borrowing if you can) a generic abstract interface (middleware) that is not specific to Active Directory, LDAP or Radius. Keep it clean and basic so as not to tie your solution to any specific authentication server. The investment in a middleware interface is well worth the upfront cost. By using a middle layer you will avoid a messy divorce of your authentication system from your bandwidth controller should the need arise.

2) Chances are your bandwidth controller speaks IP, and your AD device speaks user names. So you'll need to understand how your AD can extract the IP address associated with each user name and send it down to your bandwidth controller.

3) Your bandwidth controller will need a list of IPs or MAC addresses, and their committed bandwidth rates. It will need to get this information from your authentication database.

4) On a cold start, you'll need to make the bandwidth controller aware of all active users, and during the initial synchronization you may want to pace yourself so as to not bog down your authentication server with a million requests on start-up.

5) Once the bandwidth controller has an initial list of users on board, you'll need a background re-sync (audit) mechanism to make sure all the rate limits and associated IP addresses stay current.

6) What should the bandwidth controller do if it senses traffic from an IP that it is unaware of? You'll need a default guest rate limit of some kind for unknown IP addresses. Or perhaps you'll want the bandwidth controller to deny service to unknown IPs.

7) Don't forget to put a timeout on requests from the bandwidth controller to the authentication device.
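
Below is a minimal, hypothetical Python sketch of the checklist above. The class and method names are my own invention for illustration; they are not NetEqualizer or Active Directory APIs. The idea is simply that the bandwidth controller only ever talks to the middleware interface (item 1), gets IPs and committed rates from it (items 2 and 3), paces its cold start (item 4), re-syncs in the background (item 5), falls back to a guest rate for unknown IPs (item 6), and puts a timeout on lookups (item 7).

```python
import time
from abc import ABC, abstractmethod

class AuthBackend(ABC):
    """Item 1: generic middleware interface; hides AD, LDAP, or Radius details."""

    @abstractmethod
    def active_users(self):
        """Items 2 and 3: return a list of (ip_address, committed_rate_kbps)."""

    @abstractmethod
    def rate_for_ip(self, ip, timeout=2.0):
        """Item 7: single-IP lookup with a timeout; returns kbps or None."""

class BandwidthController:
    GUEST_RATE_KBPS = 256     # item 6: default for IPs the backend does not know
    BATCH_SIZE = 100          # item 4: pace the cold-start queries

    def __init__(self, backend: AuthBackend):
        self.backend = backend
        self.rate_table = {}  # ip -> committed rate in kbps

    def cold_start(self):
        """Item 4: load all active users without flooding the auth server."""
        users = self.backend.active_users()
        for i in range(0, len(users), self.BATCH_SIZE):
            for ip, rate in users[i:i + self.BATCH_SIZE]:
                self.rate_table[ip] = rate
            time.sleep(0.1)   # breathing room between batches

    def resync(self):
        """Item 5: periodic background audit to keep the table current."""
        self.rate_table = dict(self.backend.active_users())

    def rate_for(self, ip):
        """Item 6: look up a rate, falling back to the guest rate if unknown."""
        if ip not in self.rate_table:
            rate = self.backend.rate_for_ip(ip)
            self.rate_table[ip] = rate if rate else self.GUEST_RATE_KBPS
        return self.rate_table[ip]
```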

Bandwidth Control from the Public Side of a NAT Router, is it Possible?


We have done some significant work in our upcoming release with respect to managing network traffic from the outside of private network segments.

The bottom line is we can now accomplish sophisticated bandwidth optimizations for segments of large networks hidden behind NAT routers.

The problem:

One basic problem with generic bandwidth controllers is that they typically treat all users behind a NAT router as one user.

When using NAT, a router takes one public IP and divides it up so that up to several thousand users on the private side of a network can share it. The most common reason for this is that there are a limited number of public IPv4 addresses to hand out, so it is common for organizations and ISPs to share the public IPs that they own among many users.

When a router shares an IP with more than one user, it manipulates a special semi-private part of the IP packet, called a "port", to keep track of whose data belongs to whom behind the router. The easiest way to visualize this is to think of a company with one public phone number and many private internal extensions on a PBX. In that type of phone arrangement, all the employees share the public phone number for outside calls.

In the case of a NAT'd router, all the users behind the router share one public IP address. For a bandwidth controller sitting on the public side of the router, this creates issues: it can't shape the individual traffic of each user because all of their traffic appears to come from one IP address.

The obvious solution to this problem is to locate your bandwidth controller on the private side of the NAT router; but for a network with many NAT routers such as a large distributed wireless mesh network, the cost of extra bandwidth controllers becomes prohibitive.

Drum roll: Enter the NetEqualizer superhero.

The Solution:

With our upcoming release we have made changes to essentially reverse engineer the NAT port addressing scheme inside our bandwidth controller. Even when located on the Internet side of the router, we can now apply our equalizing shaping techniques to individual user streams with much more accuracy than before.

We do this by looking at the unique port mapping for each stream coming out of your router. So if, for example, two users in your mesh network are accessing Facebook, we will treat those users' bandwidth allocations independently in our congestion control. The benefit of these techniques is the ability to provide QoS for a face-to-face chat session while at the same time limiting the Facebook video component.
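
To make the idea concrete, here is a small illustrative sketch (not the actual NetEqualizer implementation) of what it means to key traffic accounting on the NAT-assigned public port rather than on the shared public IP alone:

```python
from collections import defaultdict

# Track bytes per stream, where a stream is keyed on the NAT-assigned public
# port as well as the shared public IP. Two private users behind the same
# router then show up as two separate streams even from the public side.
stream_bytes = defaultdict(int)

def account_packet(public_ip, public_port, remote_ip, remote_port, length):
    key = (public_ip, public_port, remote_ip, remote_port)
    stream_bytes[key] += length
    return key

# Two users behind one NAT'd public IP, both talking to the same remote site:
account_packet("203.0.113.5", 40001, "198.51.100.9", 443, 1500)
account_packet("203.0.113.5", 40002, "198.51.100.9", 443, 1500)
print(len(stream_bytes))   # 2: the port mapping keeps the users separate
```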

You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks


By Art Reisman
CTO – http://www.netequalizer.com

About 10 years ago, we had this idea for QoS across an Internet link. It was simple and elegant, and worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (DiffServ) techniques over any public or semi-public Internet link, but using our techniques we have proven the impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that setting TOS bits is only effective when you control all sides of a conversation on a link, and this is not possible on most Internet links (think cloud computing and wireless mesh networks). For standard TOS services to work, you must control all the equipment in between the two end points. All it takes is one router in the path of a VoIP conversation to ignore a TOS bit, and its purpose is defeated. Thus TOS bits for priority are really only practical inside a corporate LAN/WAN topology.
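
For readers who have not worked with TOS/DiffServ markings directly, this is all they amount to from the application side; the sketch below sets the Expedited Forwarding value often requested for VoIP. The catch, as described above, is that every router along the path is free to ignore or rewrite this byte, which is exactly what happens on the public Internet. (The address and port are placeholder examples.)

```python
import socket

# Mark outgoing packets with DSCP 46 ("Expedited Forwarding"), which is 0xB8
# in the legacy TOS byte and is commonly requested for VoIP traffic.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)

# The marking only helps if every hop between the endpoints honors it. On a
# corporate LAN/WAN you can enforce that; across the public Internet any
# intermediate router may ignore or rewrite the byte.
sock.sendto(b"voip payload", ("192.0.2.10", 5060))
```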

Look at the root cause of poor quality services and you will find alternative solutions.

Most people don't realize that the problem with congested VoIP, on any link, is that their VoIP packets are getting crowded out by larger downloads and things like recreational video (this is also true for any interactive cloud access congestion). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of an IP (local) talking to another IP (remote Internet). When we see a large stream dominating the link, we step back and ask: is the link congested? Is that download crowding out other time-sensitive transactions such as VoIP? If the answer is yes to both questions, then we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple and does not seem plausible, but it works. It works very well, and it works with just one device in the link, irrespective of any other complex network engineering. It works with minimal set up. It works over MPLS links. I could go on and on; perhaps the only reason you have not heard of it is that it goes against the grain of what most vendors are selling, namely large orders for expensive high-end routers using TOS bits.
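
Here is a rough sketch of that decision loop in Python. The thresholds and names are purely illustrative; they are not NetEqualizer's actual parameters, just a way to show how little logic the behavior-based approach needs:

```python
# Illustrative thresholds only; not NetEqualizer's real defaults.
LINK_CAPACITY_KBPS = 10_000
CONGESTION_RATIO = 0.85    # link counts as congested above 85% utilization
HOG_FRACTION = 0.10        # a stream using more than 10% of the link is "large"
PENALTY_FACTOR = 0.5       # temporarily halve a large stream's allowance

def adjust(streams_kbps):
    """streams_kbps maps a stream key to its current rate in kbps.
    Returns the rate limits to apply for the next sampling interval."""
    total = sum(streams_kbps.values())
    limits = {}
    if total < CONGESTION_RATIO * LINK_CAPACITY_KBPS:
        return limits                 # link not congested: leave everyone alone
    for key, rate in streams_kbps.items():
        if rate > HOG_FRACTION * LINK_CAPACITY_KBPS:
            limits[key] = rate * PENALTY_FACTOR   # throttle only the hogs
    return limits

# A VoIP call next to two large downloads on a congested 10 Mbps link:
print(adjust({"voip": 80, "download-1": 6_000, "download-2": 3_000}))
```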

Related article QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where sending or receiving IP addresses are obfuscated.

Coming this winter, we plan to improve upon our QoS techniques so we can drill down inside of mesh and cloud networks a bit better.

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect has been that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is because currently we base our decision on a pair of IPs talking to each other, but we do not consider the IP port numbers. Sometimes, especially in a cloud or mesh network, services are trunked across a tunnel using the same IP. As these services get tunneled across a trunk, the data streams are bundled together using one common pair of IPs, and then the streams are broken out based on IP ports so they can be routed to their final destination. For example, in some cloud computing environments there is no way to differentiate the video stream within the tunnel coming from the cloud from a smaller data access session. They can sometimes both be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give priority to it) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel sharing the IPs of other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are often sorted out on different ports at the ends of the tunnel. With our new release coming this winter, we will start to look at streams as IP and port number, thus allowing for much greater resolution for QoS inside the cloud and inside your mesh network. Stay tuned!

Will Bandwidth Shaping Ever Be Obsolete?


By Art Reisman

CTO – www.netequalizer.com

I find public forums where universities openly share information about their bandwidth shaping policies to be an excellent source of insight. Unlike commercial providers, these user groups have found that technical collaboration is in their best interest, and they often openly discuss current trends in bandwidth control.

A recent university IT user group discussion thread kicked off with the following comment:

“We are in the process of trying to decide whether or not to upgrade or all together remove our packet shaper from our residence hall network.  My network engineers are confident we can accomplish rate limiting/shaping through use of our core equipment, but I am not convinced removing the appliance will turn out well.”

Notice that he is not talking about removing rate limits completely, just backing off from an expensive extra piece of packet shaping equipment and using the simpler rate limits available on his router. The point of my reference to this discussion is not so much to discourse on the different approaches to rate limiting, but to emphasize that, at this point in time, running wide-open without some sort of restriction is not even being considered.

Despite an 80 to 90 percent reduction in bulk bandwidth prices in the past few years, bandwidth is not quite yet cheap enough for an ISP to run wide-open. Will it ever be possible for an ISP to run wide-open without deliberately restricting their users?

The answer is not likely.

First of all, there seems to be no limit to the ways consumer devices and content providers will conspire to gobble bandwidth. The common assumption is that no matter what an ISP does to deliver higher speeds, consumer appetite will outstrip it.

Yes, an ISP can temporarily leap ahead of demand.

We do have a precedent from several years ago. In 2006, the University of Brighton in the UK was able to unplug our bandwidth shaper without issue. When I followed up with their IT director, he mentioned that their students' total consumption was capped by the far-end services of the Internet, and thus they did not hit their heads on the ceiling of the local pipes. Running without restriction, 10,000 students were not able to eat up their 1 gigabit pipe! I must caveat this experiment by saying that in the UK their university system had invested heavily in subsidized bandwidth and was far ahead of the average ISP curve for the times. Content services on the Internet for video were just not that widely used by students at the time. Such an experiment today would bring a pipe under a similar contention ratio to its knees in a few seconds. I suspect today one would need on the order of 15 to 25 gigabits to run wide open without contention-related problems.

It also seems that we are coming to the end of the line for bandwidth in the wireless world much more quickly than wired bandwidth.

It is unlikely consumers are going to carry cables around with their iPads and iPhones to plug into wall jacks any time soon. With the diminishing returns on investment for higher speeds on the wireless networks of the world, bandwidth control is the only way to keep some kind of order.

Lastly I do not expect bulk bandwidth prices to continue to fall at their present rate.

The last few years of falling prices are the result of a perfect storm of factors not likely to be repeated.

For these reasons, it is not likely that bandwidth control will be obsolete for at least another decade. I am sure we will be revisiting this issue in the next few years for an update.

Equalizing is the Silver Bullet for Quality of Service


Silver Bullet (n.) – A simple and seemingly magical solution to a complex problem.

The number of solutions that have been developed to improve Quality of Service (QoS) for data traveling across a network (video, VoIP, etc.) is endless. Often, these tools appear to be simple, but they fall short in implementation:

Compression: Compressing files in transit helps reduce congestion by decreasing the amount of bandwidth a transfer requires. This appears to be a viable solution, but in practice, most of the large streams that tend to clog networks (high resolution media files, etc.) are already compressed. Thus, most networks won’t see much improvement in QoS when this method is used.

Layer 7 Inspection: Providing QoS to specific applications also sounds like a reasonable approach to the problem. However, most applications are increasingly utilizing encryption for transferring data, and thus determining the purpose of a network packet is a much harder problem. It also requires constant tweaking and updates to ensure the proper applications are given priority.

Type of Service: Each network packet has a field in its IP header that denotes its "type of service." This flag was intended to help give QoS to packets based on their importance and purpose. This method, however, requires lots of custom router configurations and is not very reliable as far as who is able to set the flag, when, and why.

These solutions are analogous to the diet pill and weight loss products that inundate our lives on a daily basis. They are offering complex solutions to a simple problem:

Overweight? Buy this machine, watch these DVDs, take this pill.

When the real solution is:

Overweight? Eat better.

Simple solutions are what good engineering is all about, and it drives the entire philosophy behind Equalizing – the bandwidth control method implemented in our NetEqualizer. The truth is, you can accomplish 99% of your QoS needs on a fixed link SIMPLY by cranking down on the large streams of traffic. While the above approaches try to do this in various ways, nothing is easier and more hands-off than looking at the behavior of a connection relative to the available bandwidth, and subsequently throttling it as needed. No deep packet inspection, compression, or packet analysis required. No need to concern yourself with new Internet usage trends or the latest media file types. Just fair bandwidth, regardless of trunk size, for all of your users, at all times of day. When bandwidth is controlled, connection quality is allowed to be as good as possible for everyone!

Internet User’s Bill of Rights


This is the second article in our series. Our first was a Bill of Rights dictating the etiquette of software updates. We continue with a proposed Bill of Rights for consumers with respect to their Internet service.

1) Providers must divulge the contention ratio of their service.

At the core of all Internet service is a balancing act between the number of people that are sharing a resource and how much of that resource is available.

For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks — perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town.

The speed you, the customer, can attain is limited to how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers and take advantage of the natural usage behavior, which assumes that not all users are active at one time.

The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe, while minimizing service complaints due to a slow network. In some cases, I have seen as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds when compared to dial up.
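
The arithmetic behind a contention ratio is simple enough to check yourself; here is a quick sketch using the numbers from the example above:

```python
def contention_ratio(subscribers, promised_mbps_each, shared_pipe_mbps):
    """How many times the shared pipe is oversubscribed."""
    return subscribers * promised_mbps_each / shared_pipe_mbps

# 1 Mbps service sold over a shared 10 Mbps local pipe, as described above:
print(contention_ratio(10, 1, 10))     # 1.0   no oversubscription
print(contention_ratio(100, 1, 10))    # 10.0  a 10:1 contention ratio
print(contention_ratio(1000, 1, 10))   # 100.0 the extreme case mentioned
```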

2) Service speeds should be based on the amount of bandwidth available at the provider's exchange point and NOT the last mile.

Even if your neighborhood (last mile) link remains clear, your provider’s connection can become saturated at its exchange point. The Internet is made up of different provider networks and backbones. If you send an e-mail to a friend who receives service from a company other than your provider, then your ISP must send that data on to another network at an exchange point. The speed of an exchange point is not infinite, but is dictated by the type of switching equipment. If the exchange point traffic exceeds the capacity of the switch or receiving carrier, then traffic will slow.

3) No preferential treatment to speed test sites.

It is possible for an ISP to give preferential treatment to individual speed test sites. Providers have all sorts of tools at their disposal to allow and disallow certain kinds of traffic. There should never be any preferential treatment to a speed test site.

4) No deliberate re-routing of traffic.

Another common tactic to save resources at the exchange points of a provider is to re-route file-sharing requests to stay within their network. For example, if you were using a common file-sharing application such as BitTorrent, and you were looking for some non-copyrighted material, it would be in your best interest to contact resources all over the world to ensure the fastest download.

However, if your provider can keep you on their network, they can avoid clogging their exchange points. Since companies keep tabs on how much traffic they exchange in a balance sheet, making up for surpluses with cash, it is in their interest to keep traffic confined to their network, if possible.

5) Clearly disclose any time of day bandwidth restrictions.

The ability to increase bandwidth for a short period of time and then slow you down if you persist at downloading is another trick ISPs can use. Sometimes they call this burst speed, which can mean speeds being increased up to five megabits, and they make this sort of behavior look like a consumer benefit. Perhaps Internet usage will seem a bit faster, but it is really a marketing tool that allows ISPs to advertise higher connection speeds – even though these speeds can be sporadic and short-lived.

For example, you may only be able to attain five megabits at 12:00 a.m. on Tuesdays, or some other random unknown times. Your provider is likely just letting users have access to higher speeds at times of low usage. On the other hand, during busier times of day, it is rare that these higher speeds will be available.
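
Burst speed is commonly enforced with something like a token bucket: you accumulate credit at the sustained rate, and short transfers can spend it at a much higher speed until it runs out. The sketch below is generic and is not a claim about how any particular provider implements its bursting:

```python
class TokenBucket:
    """Generic sketch of how 'burst speed' is commonly enforced."""

    def __init__(self, sustained_kbps, burst_kbits):
        self.rate = sustained_kbps      # the long-term rate you actually pay for
        self.capacity = burst_kbits     # how much burst credit can accumulate
        self.tokens = burst_kbits       # start with a full bucket

    def tick(self, seconds):
        """Credit accumulates at the sustained rate while you are idle."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def send(self, kbits):
        """True if this transfer can go out at full burst speed right now."""
        if kbits <= self.tokens:
            self.tokens -= kbits
            return True
        return False                    # bucket drained: back to the sustained rate

bucket = TokenBucket(sustained_kbps=1_000, burst_kbits=5_000)
print(bucket.send(4_000))   # True: a short download feels like "burst" speed
print(bucket.send(4_000))   # False: persist and you fall back to 1 Mbps
```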

There is now a consortium called M-Lab which has put together a sophisticated speed test site designed to give specific details on what your ISP is doing to your connection. See the article below for more information.

Related article Ten things your internet provider does not want you to know.

Related article: Online shoppers' bill of rights

Layer 7 Application Shaping Dying with Increased SSL


By Art Reisman
CTO – www.netequalizer.com

When you put a quorum of front-line IT administrators in a room and an impromptu discussion breaks out, I become all ears. For example, last Monday, the discussion at our technical seminar at Washington University turned to the age-old subject of controlling P2P.

I was surprised to hear from several of our customers about just how difficult it has become to implement Layer 7 shaping. The new challenge stems from the fact that SSL traffic cannot be decrypted and identified from a central bandwidth controller. Although we have known about this limitation for a long time, my sources tell me there has been a pickup in SSL adoption rates over the last several years. I don't have exact numbers, but suffice it to say that SSL usage is way up.

A traditional Layer 7 shaper will report SSL traffic as "unknown." A small amount of unknown traffic has always been considered tolerable, but now, with the pickup in SSL traffic, rumor has it that some vendors are requiring a module on each end node to decrypt SSL pages. No matter what side of the Layer 7 debate you are on, this provision can be a legitimate showstopper for anybody providing public or semi-open Internet access, and here is why:

Imagine your ISP requiring you to load a special module on your laptop or iPad to decrypt all your SSL information and send them the results. Obviously, this will not go over very well on a public Internet. This relegates Layer 7 technologies to networks where administrators have absolute control over all the end points in their network. I suppose this will not be a problem for private businesses, where recreational traffic is not allowed, and also in countries with extreme controls such as China and Iran, but for public Internet providers in the free world, whether it be student housing, a library, or a municipal ISP, I don't see any future in Layer 7 shaping.

More Ideas on How to Improve Wireless Network Quality


By Art Reisman

CTO – http://www.netequalizer.com

I just came back from one of our user group seminars held at a very prestigious university. Their core networks are all running smoothly, but they still have some hard-to-find, sporadic dead spots on their wireless network. It seems no matter how many site surveys they do, and how many times they try to optimize the placement of their access points, they always end up with sporadic, transient dark spots.

Why does this happen?

The issue with 802.11 class wireless service is that most access points lack intelligence.

With low traffic volumes, wireless networks can work flawlessly, but add a few extra users and you can get a perfect storm. Combine some noise and a loud talker close to the access point (hidden node), and the weaker-signaled users will just get crowded out until the loud talker with a stronger signal is done. These outages are generally regional, localized to a single AP, and may have nothing to do with the overall usage on the network. Often, troubleshooting is almost impossible. By the time the investigation starts, the crowd has dispersed and all an admin has to go on is complaints that cannot be reproduced.

Access points also have a mind of their own. They will often back down from the best case throughput speed to a slower speed in a noisy environment. I don’t mean audible noise, but just crowded airwaves, lots of talkers and possible interference from other electronic devices.

For a quick stopgap solution, you can take a bandwidth controller and…

Put tight rate caps on all wireless users; we suggest 500 kbps or slower. Although this might seem counter-intuitive and wasteful, it will prevent the loud talkers with strong signals from dominating an entire access point. Many operators cringe at this sort of idea, and we admit it might seem a bit crude. However, in the face of random users getting locked out completely, and the high cost of retrofitting your network with a smarter mesh, it can be very effective.

Along the same lines as using fixed rate caps, a slightly more elegant solution is to measure the peak draw on your mesh and implement equalizing on the largest streams at peak times. Even with a smart mesh network of integrated APs (described below), you can get a great deal of relief by implementing dynamic throttling of the largest streams on your network during peak times. This method will allow users to pull bigger streams during off-peak hours.

Another solution would be to deploy smarter mesh access points…

I have to backtrack a bit on my earlier comments about access points lacking intelligence. The modern mesh offerings from companies such as:

Aruba Networks (www.arubanetworks.com)

Meru ( www.merunetworks.com)

Meraki ( www.meraki.com)

All have intelligence designed to reduce the hidden node problem and other congestion issues, using techniques such as:

  • Switch off users with weaker signals so they are forced to a nearby access point. They do this basically by ignoring the weaker users’ signals altogether, so they are forced to seek a connection with another AP in the mesh, and thus better service.
  • Prevent low-quality users from connecting at slow speeds, so the access point does not need to back off for all users.
  • Smarter logging, so an admin can go in after the fact and at least get a history of what the AP was doing at the time.

Related article explaining how to optimize wireless transmission.

How to Speed Up Your Wireless Network


Editor's Note:

This article was adapted and updated from our original article for generic Internet congestion.

Note: This article is written from the perspective of a single wireless router; however, all the optimizations explained below also apply to more complex wireless mesh networks.

It occurred to me today that in all the years I have been posting about common ways to speed up your Internet, I have never really written a plain and simple consumer explanation dedicated to how a bandwidth controller can speed up a congested wireless network. After all, it seems intuitive that a bandwidth controller is something an ISP would use to slow down and regulate a user's speed, not make it faster; but there can be a beneficial side to a smart bandwidth controller that will make a user's experience on a network appear much faster.

What causes slowness on a wireless shared link?

Everything you do on the Internet creates a connection from inside your network to the outside world, and all these connections compete for the limited amount of bandwidth on your wireless router.

Many slow wireless service problems are due to contention on overloaded access points. Even if you are the only user on the network, a simple update to your virus software running in the background can dominate your wireless link. A large download will often cause everything else you try (email, browsing) to come to a crawl.

Your wireless router provides first-come, first-serve service to all the wireless devices trying to access the Internet. To make matters worse, the heavier users (the ones with the larger persistent downloads) tend to get more than their fair share of wireless time slots. Large downloads are like the schoolyard bully: they tend to butt in line and not play fair.

Also, what many people may not realize is that even with a high rate of service to the Internet, your access point, or the wireless backhaul to the Internet, may create a bottleneck at a much lower throughput level than what your connection is rated for.

So how can a bandwidth controller make my wireless network faster?

A smart bandwidth controller will analyze all your wireless connections on the fly. It will then selectively take away some bandwidth from the bullies. Once the bullies are removed, other applications will get much needed wireless time slots out to the Internet, thus speeding them up.

What application benefits most when a bandwidth controller is deployed on a wireless network?

The most noticeable beneficiary will be your VoIP service. VoIP calls typically don’t use that much bandwidth, but they are incredibly sensitive to a congested link. Even small quarter-second gaps in a VoIP call can make a conversation unintelligible.

Can a bandwidth controller make my YouTube videos play without interruption?

In some cases yes, but generally no. A YouTube video will require anywhere from 500 kbps to 1,000 kbps of your link, and is often the bully on the link; however, in some instances there are bigger bullies crushing YouTube performance, and a bandwidth controller can help in those instances.

Can a home user or small business with a slow wireless connection take advantage of a bandwidth controller?

Yes, but the choice is a time-cost-benefit decision. For about $1,600 there are some products out there that come with support that can solve this issue for you, but that price is hard to justify for the home user – even a business user sometimes.

Note: I am trying to keep this article objective and hence am not recommending anything in particular.

On a home-user network it might be easier just to police it yourself, shutting off background applications, and unplugging the kids’ computers when you really need to get something done. A bandwidth controller must sit between your modem/router and all the users on your network.

Related Article Ten Things to Consider When Choosing a Bandwidth Shaper.

Related Article Hidden Nodes on your wireless network

Best Monitoring Tool for Your Network May Not Be What You Think


By Art Reisman

CTO – http://www.netequalizer.com

A common assumption in the IT world is that the starting point for any network congestion solution begins with a monitoring tool. "We must first figure out what specific type of traffic is dominating our network, and then we'll decide on the solution." This is a reasonable and rational approach for a one-time problem. However, the source of network congestion can change daily, and it can be a different type of traffic or a different user dominating your bandwidth each day.

When you start to look at the labor and capital expense of "monitor and react" as your daily troubleshooting tool, the solution can become more expensive than your bandwidth contract with your provider.

The traditional way of looking at monitoring your Internet has two dimensions. First, the fixed cost of the monitoring tool used to identify traffic, and second, the labor associated with devising and implementing the remedy. In an ironic inverse correlation, we assert that your ROI will degrade with the complexity of the monitoring tool.

Obviously, the more detailed the reporting/shaping tool, the more expensive its initial price tag. Yet, the real kicker comes with part two. The more detailed data output generally leads to an increase in the time an administrator is likely to spend making adjustments and looking for optimal performance.
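
To put rough numbers on the "monitor and react" cost, here is a back-of-the-envelope sketch. Every figure is a placeholder assumption for illustration; plug in your own tool price, admin time, and hourly rate and compare the total to your bandwidth contract:

```python
# All figures below are placeholder assumptions, not measured costs.
tool_cost_per_year = 8_000      # license and support for a detailed reporting tool
admin_hours_per_week = 5        # time spent reading reports and re-tuning policies
admin_hourly_rate = 60

labor_per_year = admin_hours_per_week * admin_hourly_rate * 52
total_per_year = tool_cost_per_year + labor_per_year

print(labor_per_year)   # 15600
print(total_per_year)   # 23600: compare this to your annual bandwidth contract
```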

But, is it really fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it wouldn’t make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. But, typically, the more information an admin has about a network, the more inclined he or she might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that when the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. In reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth monitoring tool is a loss? Not at all. Bandwidth monitoring and network adjusting can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

The solution: Be proactive, use a tool that prevents congestion before it affects the quality of your network.

An effective compromise with many of our customers is that they are stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can head off trouble with a basic bandwidth control solution in place (such as a NetEqualizer). With a smart, proactive congestion control device, the acute problems of a network locking up will stop.

Yes, there may be a need to look at your overall bandwidth usage trends over time, but you do not need an expensive detailed monitoring tool for that purpose.

Here are some other articles on bandwidth monitoring that we recommend.

List of monitoring tools compiled by Stanford.

ROI tool: determine how much a bandwidth control device can save.

Great article on choosing a bandwidth controller.

Planetmy Linux Tips: How to set up a monitor for free

Good enough is better: a lesson from the Digital Camera Revolution

Networking Equipment and Virtual Machines Do Not Mix


By Joe D'Esopo

Editor's Note:
We often get asked why we don't offer our NetEqualizer as a virtual machine. Although the excerpt below is geared toward the NetEqualizer, you could just as easily substitute the word "router" or "firewall" in place of NetEqualizer and the information would apply to just about any networking product on the market. For example, even a simple Linksys router has a version of Linux under the hood, and to my knowledge they don't offer that product as a VM. In the following excerpt, lifted from a real response to one of our larger customers (a hotel operator), we detail the reasons.

————————————————————————–

Dear Customer

We’ve very consciously decided not to release a virtualized copy of the software. The driver for our decision is throughput performance and accuracy.

As you can imagine, the NetEqualizer is optimized to do very fast packet/flow accounting and rule enforcement while minimizing unwanted negative effects (latencies, etc.) in networks. As you know, the NetEqualizer needs to operate in the sub-second time domain over what could be up to tens of thousands of flows per second.

As part of our value proposition, we’ve been successful, where others have not, at achieving tremendous throughput levels on low cost commodity platforms (Intel based Supermicro motherboards), which helps us provide a tremendous pricing advantage (typically we are 1/3 – 1/5 the price of alternative solutions). Furthermore, from an engineering point of view, we have learned from experience that slight variations in Linux, System Clocks, NIC Drivers, etc… can lead to many unwanted effects and we often have to re-optimize our system when these things are upgraded. In some special areas, in order to enable super-fast speeds, we’ve had to write our own Kernel-level code to bypass unacceptable speed penalties that we would otherwise have to live with on generic Linux systems. To some degree, this is our “secret sauce.” Nevertheless, I hope you can see that the capabilities of the NetEqualizer can only be realized by a carefully engineered synergy between our Software, Linux and the Hardware.

With that as a background, we have taken the position that a virtualized version of the NetEqualizer would not be in anyone's best interest. The fact is, we need to know and understand the specific timing tolerances in any given moment and system environment. This is especially true if a bug is encountered in the field and we need to reproduce it in our labs in order to isolate and fix the problem. (Note: many bugs we find are not of our own making; they are often caused by changes in Linux behavior that used to work fine but changed in a newer release without our knowledge, which requires us to discover the change and re-optimize around it.)

I hope I’ve done a good job of explaining the technical complexities surrounding a “virtualized” NetEqualizer.  I know it sounds like a great idea, but really we think it cannot be done to an acceptable level of performance and support.

The Internet was Never Intended for On-demand TV and Movies


By Art Reisman

www.netequalizer.com

I just got off the phone with one of our customers, who happens to be a large ISP. He chewed me out because we were throttling his video and his customers were complaining. I told him that if we did not throttle his video during peak times, his whole pipe would come to a screeching halt. It seems everybody is looking for a magic bullet to squeeze blood from a turnip.

Can the Internet be retrofitted for video?

Yes, there are a few tricks an ISP can do to make video more acceptable, but the bottom line is, the Internet was never intended to deliver video.

One basic trick being used to eke out some video is to cache local copies of video content and then deliver a copy to you when you click a URL for a movie. This technique follows along the same path as the original on-demand video of the 1980s, the kind of service where you called your cable company and purchased a movie to start at 3:00 pm. Believe it or not, there was often a video player with a cassette at the other end of the cable going into your home, and your provider would just turn the video player on with the movie at the prescribed time. Today, the selection of available video has expanded and the delivery mechanism has gotten a bit more sophisticated, but for the most part, popular video is delivered via a direct wire from the operator into your home. It is usually NOT coming across the public Internet; it only appears that way (if it came across the Internet it would be slow and sporadic). Content that comes from the open Internet must come through an exchange point, and if your ISP has to rely on their exchange point to retrieve video content, things can get congested rather quickly.

What is an Internet Exchange point and why does it matter?

Perhaps an explanation of exchange points might help. Think of a giant railroad yard, where trains from all over the country converge and then return from where they came. In the yard they exchange their goods with the other train operators. For example, a train from Montana brings in coal destined for power plants in the east, and the trains from the east bring mining supplies and food for the people of Montana. As per a gentleman's agreement, the railroad companies will transfer some goods to other operators and take some goods in return. Although fictional, this would be a fair trade agreement. The fair trade in our railroad example works as long as everybody exchanges about the same amount of stuff. But suppose one day a train from the south shows up with 10 times the load it wishes to exchange, and suppose its goods are perishable, like raw milk products. Not only does it have more than its fair share to exchange, but it also has a time dependency on the exchange. It must get its milk to other markets quickly or the milk loses all value. You can imagine that some of the railroads in the exchange co-operative would be overloaded and problems would arise.

I wish I could take every media person who writes about the Internet, put them in a room, and not let them leave until they understand the concept of an Internet exchange point. The Internet is founded on a best effort exchange agreement. Everything is built off this model, and it cannot easily be changed.

So how does this relate back to the problems of video?

There really is no problem with the Internet; it works as intended and is a magnificent model of best effort exchange. The problem occurs when content providers pump video content into the pipes without any consideration of what might happen at the exchange points.

A bit of quick history on exchange point evolution.

Over the years, the original government network operators started exchanging with private operators, such as AT&T, Verizon, and Level 3. These private operators have made great improvements to the capacity of their links and exchange points, but the basic problem still exists. The sender and receiver never have any guarantee that their real-time streaming video will get to the other end in a timely manner.

As for caching, it is a band-aid. It works some of the time for the most popular videos that get watched over and over again, but it does not solve the problem at the exchange points, and consumers and providers are always pumping more content into the pipes.

So can the problem of streaming content be solved?

The short answer is yes, but it would not be the Internet. I suspect one might call it the Internet for marketing purposes, but out of necessity it would be some new network with a different political structure and entirely different rules. It would have a much higher cost to ensure data paths for video, and operators would have to pass the cost of transport and path set-up directly on to the content providers to make it work. Best effort fair exchange would be out of the picture.

For example, over the years I have seen numerous plans by wizards who draw up block diagrams on how to make the Internet a signaling, switching network instead of a best effort network. Each time I see one of these plans, I just sort of shrug. It has been done before, and done very well; these planners never seem to consider the data networks originally built by AT&T, which were a fully functional switched network for sending data to anybody with guaranteed bandwidth. We'll see where we end up.

Video Over 3G/4G Will Always Lag Behind the Quality of Wired Home Service


Written by Art Reisman

CTO – http://www.apconnections.net

Editor's note:

Marketing and hype for services ultimately meet the reality of what is possible. Below, I explain the basic reasons behind what is possible in terms of video on your wired home network and then compare that to the limitations of 3G and 4G service.

In the wired network world, many consumers are connected to their provider via a spoke-and-hub topology, like this:

The hub, “H”, is at your cable operator’s regional office and the spokes are dedicated wires to each home. When supplying video such as Netflix, your cable operator caches popular videos at their HUB, so when you select a movie, it plays unencumbered on a wire direct from the central office to your home. In this topology you are not competing for bandwidth on the last mile. The bottom line is you can watch a good deal of video without interruption.

Yes, it is possible to watch video on your wireless device, but unlike the wired network to your home, claims of high speeds from 4G providers have limitations. Due to the way wireless frequencies operate, the more users on the nearest tower, the more likely your video feed will break up.

With a wireless provider there is also a hub, but unlike the HUB of the wired network, many users share a single wire (frequency) back to this HUB. Your wireless provider uses time division multiplexing to give each user a slice of the bandwidth on the wire. In the diagram below, there are no dedicated wires to each phone; the lines are a symbolic representation of a slice of time. In other words, the wire back to the high-bandwidth HUB is virtual and only exists for a short moment in time. As you add more and more devices to the wire, each time slice becomes shorter and shorter, and at some point your time slice will become so small that it will be impossible to watch a video no matter how fast the advertised speed to your wireless phone.
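
A crude model makes the point: if the shared tower frequency is time-sliced roughly evenly among active devices, per-user throughput collapses as devices are added. The sector capacity below is an illustrative placeholder, not a real 3G/4G figure:

```python
def per_user_throughput(tower_capacity_mbps, active_devices):
    """Crude model: the shared frequency is time-sliced evenly among devices."""
    return tower_capacity_mbps / active_devices

TOWER_MBPS = 100   # illustrative sector capacity only
for devices in (5, 50, 200):
    print(devices, "devices:", per_user_throughput(TOWER_MBPS, devices), "Mbps each")
```

At a couple hundred active devices on one sector, each slice falls below what streaming video needs, no matter what peak speed is advertised.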

Note: There is variability in the quality of video in the wired model, but it is related to where the content is located and not the last-mile contention described above.