What Is Deep Packet Inspection and Why the Controversy?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.

Article Updated March 2012

As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.

Exactly what is deep packet inspection?

All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.

The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.
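To make the distinction concrete, here is a minimal Python sketch (our illustration, not code from any product) that splits a raw IPv4 packet into its two elements. A routine forwarding device only needs the address bytes; a deep packet inspection device goes on to read the payload as well.

    def split_ip_packet(raw: bytes):
        """Separate a raw IPv4 packet into addresses and payload."""
        header_len = (raw[0] & 0x0F) * 4              # IHL field, in 32-bit words
        src = ".".join(str(b) for b in raw[12:16])    # source address
        dst = ".".join(str(b) for b in raw[16:20])    # destination address
        payload = raw[header_len:]                    # the "freight" inside
        return src, dst, payload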

When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).

Products sold that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data include packet shaping and layer-7 traffic shaping.

How is deep packet inspection related to net neutrality?

Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.

Why do some Internet providers use deep packet inspection devices?

There are several reasons:

1) Targeted advertising — If a provider knows what you are reading, they can display targeted advertising on the pages they control, such as your login screen or e-mail account.

2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem less desirable, such as BitTorrent and other forms of peer-to-peer. BitTorrent traffic can overwhelm a network with volume. By detecting and redirecting the BitTorrent traffic, or slowing it down, a provider can alleviate congestion.

3) Block offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.

4) Government spying — In the case of Iran (and to some extent China), DPI is used to keep tabs on the local population.

When is it appropriate to use deep packet inspection?

1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.

2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.

3) Intrusion detection and prevention — It is one thing to eavesdrop on a public conversation while acting as an ISP; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. For example, in a private home it is within your rights to look through your peephole and not let shady characters in. In a private business it is a good idea to use deep packet inspection to block unwanted intruders from your network. Blocking bad guys before they break into and damage your network is perfectly acceptable.

4) Spam filtering — Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don't read it, but computer scanners do), its motives are understood. What consumers may not realize is that their email provider is also reading everything they do in order to serve targeted advertising.

Does content filtering use deep packet inspection?

For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as routers need to look them up anyway. We have only encountered content filters at private institutions, which are within their rights to use them.
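As a rough illustration of the difference, a URL-level filter needs nothing more than the requested hostname, which is visible routing information; it never opens the payload. A minimal Python sketch, with a hypothetical blocklist:

    BLOCKED_HOSTS = {"blocked-example.com"}          # hypothetical blocklist

    def allow_request(hostname: str) -> bool:
        # Check the host and every parent domain against the blocklist.
        parts = hostname.lower().split(".")
        return not any(".".join(parts[i:]) in BLOCKED_HOSTS
                       for i in range(len(parts)))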

What about spam filtering, does that use Deep Packet Inspection?

Yes, many spam filters will look at content, and most people could not live without their spam filter. However, most people have opted in to spam filtering at one point or another, hence it is generally done with permission.

What is all the fuss about?

It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.

For example, this is an excerpt from a recent E-Commerce Times article:

Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.

Paul Stephens, director of policy and advocacy for the Privacy Rights Clearinghouse, as quoted in the E-Commerce Times on November 14, 2008. Read the full article here.

Recently, Comcast had their hand slapped for re-directing Bittorrent traffic:

Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.

— Digital Daily, March 10, 2008. Read the full article here.

Later in 2008, the FCC came down hard on Comcast.

In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.

By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.

Wired.com, August 1, 2008. Read the full article here.

To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.

University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.

Wired.com, May 22, 2008. Read the full article here.

However, it looks like things are going the other way in the U.K., as Britain’s Virgin Media has announced they are dumping net neutrality in favor of targeting BitTorrent.

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The Register, December 16, 2008. Read the full article here.

Canadian ISPs confess en masse to deep packet inspection in January 2009.

With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.

Tech Spot, January 21, 2009. Read the full article here.

In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.

In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.

Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.

The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.

7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.

Kyle Brady, July 27, 2009. Read the full article here.

[February 2011 Update] The Egyptian government uses DPI to filter elements of their Internet traffic, and this act in itself becomes the news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering report on Narus, the company that makes the DPI technology and sold it to the Egyptians.

While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

The Inside Scoop on Where the Market for Bandwidth Control Is Going


Editor’s Note: The modern traffic shaper appeared in the market in the late 1990s. Since then market dynamics have changed significantly. Below we discuss these changes with industry pioneer and APconnections CTO Art Reisman.

Editor: Tell us how you got started in the bandwidth control business?

Back in 2002, after starting up a small ISP, my partners and I were looking for a tool that we could plug in to take care of the resource contention without spending too much time on it. At the time, we had a T1 to share among about 100 residential users and it was costing us $1200 per month, so we had to do something.

Editor: So what did you come up with?

I consulted with my friends at Cisco on what they had. Quite a few of my peers from Bell Labs had migrated to Cisco on the coattails of Kevin Kennedy, who was also from Bell Labs. After consulting with them and confirming there was nothing exactly turnkey at Cisco, we built the Linux Bandwidth Arbitrator (LBA) for ourselves.

How was the Linux Bandwidth Arbitrator distributed and what was the industry response?

We put out an early version for download on a site called Freshmeat. Most of the popular stuff on that site is home-user utilities and tools for Linux. Given that the LBA was not really a consumer tool, it rose like a rocket on that site. We were getting thousands of downloads a month, and about 10 percent of those were installing it someplace.

What did you learn from the LBA project?

We eventually bundled layer 7 shaping into the LBA. At the time that was the biggest request for a feature. We loosely partnered with the Layer 7 project and a group at the Computer Science Department at the University of Colorado to perfect our layer 7 patterns and filter. Some of the other engineers and I soon realized that layer 7 filtering, although cool and cutting edge, was a losing game with respect to time spent and costs. It was not impossible, but in reality it was akin to trying to conquer all software viruses and only getting half of them. The viruses that remain will multiply and take over because they are the ones running loose. At the same time we were doing layer 7, the core idea of Equalizing, the way we did fairness allocation on the LBA, was getting rave reviews.

What did you do next?

We bundled the LBA into a CD for install and put a fledgling GUI interface on it. Many of the commercial users were happy to pay for the convenience, and from there we started catering to the commercial market, and now here we are with the modern version of the NetEqualizer.

How do you perceive the layer 7 market going forward?

Customers will always want layer 7 filtering. It is the first thing they think of, from the CIO on down. It appeals almost instinctively to people. The ability to pick out traffic by type of application and then prioritize it is quite appealing. It is as natural as ordering from a restaurant menu.

We are not the only ones declaring a decline in deep packet inspection. We found this opinion on another popular blog regarding bandwidth control:

The bottom line is that while Deep Packet Inspection presentations include nifty graphs and seemingly exciting possibilities, it is only effective in streamlining small, very predictable networks. The basic concept is fundamentally flawed. The problem with large networks is not that bandwidth needs to be shifted from “bad” protocols to “good” protocols. The problem is volume. Volume must be managed in a way that maintains the strategic goals of the network administration. Nearly always this can be achieved with a macro approach of allocating a fair share to each entity that uses the network. Any attempt to micro-manage large networks ordinarily makes them worse, or at least simply results in shifting bottlenecks from one place to another.

So why did you get away from layer 7 support in the NetEqualizer back in 2007?

When trying to contain an open Internet connection, layer 7 shaping does not work very well. The costs to implement were going up and up. The final straw was when encrypted p2p hit the cloud. Encrypted p2p cannot be specifically classified. It essentially tunnels through $50,000 investments in layer 7 shapers, rendering them impotent. Just because you can easily sell a technology does not make it right.

We are here for the long haul to educate customers. Most of our NetEqualizers stay in service as originally intended for years without licensing upgrades. Most expensive layer 7 shapers are mothballed after about 12 months or are just scaled back to do simple reporting. Most products are driven by channel sales, and the channel does not like to work very hard to educate customers about alternative technology. They (the channel) are interested in margins, just as a bank likes to collect fees to increase profit. We, on the other hand, sell for the long haul on value, and not just on what we can turn over quickly because customers like what they see at first glance.

Are you seeing a drop off in layer 7 bandwidth shapers in the marketplace?

In the early stages of the Internet, up until the early 2000s, the application signatures were not that complex and they were fairly easy to classify. Plus, the cost of bandwidth was in some cases 10 times more expensive than 2010 prices. These two factors made the layer 7 solution a cost-effective idea. But over time, as bandwidth costs dropped and speeds got faster, the hardware and processing power required in the layer 7 shapers actually rose. So, now in 2010, with much cheaper bandwidth, the layer 7 shaper market is less effective and more expensive. IT people still like the idea, but slowly over time price and performance are winning out. I don’t think the idea of a layer 7 shaper will ever go away, because there are always new IT people coming into the market and they go through the same learning curve. There are also many WAN-type installations that combine layer 7 with compression for an effective boost in throughput. But even the business ROI for those installations is losing some luster as bandwidth costs drop.

So, how is the NetEqualizer doing in this tight market where bandwidth costs are dropping? Are customers just opting to toss their NetEqualizer in favor of adding more bandwidth?

There are some that do not need shaping at all, but then there are many customers that are moving from $50,000 solutions to our $10,000 solution as they add more bandwidth. At the lower price points, bandwidth shapers still make sense with respect to ROI.  Even with lower bandwidth costs, users will almost always clog the network with new more aggressive applications. You still need a way to gracefully stop them from consuming everything, and the NetEqualizer at our price point is a much more attractive solution.

Equalizing Compared to Application Shaping (Traditional Layer-7 “Deep Packet Inspection” Products)


Editor’s Note: (Updated with new material March 2012)  Since we first wrote this article, many customers have implemented the NetEqualizer not only to shape their Internet traffic, but also to shape their company WAN.  Additionally, concerns about DPI and loss of privacy have bubbled up. (Updated with new material September 2010)  Since we first published this article, “deep packet inspection”, also known as Application Shaping, has taken some serious industry hits with respect to US-based ISPs.   

==============================================================================================
Author’s Note: We often get asked how NetEqualizer compares to Packeteer (Bluecoat), NetEnforcer (Allot), Network Composer (Cymphonix), Exinda, and a plethora of other well-known companies that do Application Shaping (aka “packet shaping”, “deep packet inspection”, or “Layer-7” shaping). After several years of these questions, and discussing different aspects with former and current application shaping IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.
We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject with content to support the bullet chart was in order. If you want to skip the details, see our Summary Table at the end of this article.

However, if you’re looking to really understand the differences, and to have the question answered as objectively as possible, please take a few minutes to read on…
==============================================================================================

How NetEqualizer compares to Bluecoat, Allot, Cymphonix, & Exinda

In the following sections, we will cover specifically when and where Application Shaping is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish.  We will also discuss how Equalizing, NetEqualizer’s behavior-based shaping, fits into the landscape of application shaping, and how in many cases Equalizing is a much better alternative.

Download the full article (PDF)  Equalizing Compared To Application Shaping White Paper


University of British Columbia IT department chimes in on Layer 7 shaping and its fallacy


Editor’s note: The following excerpt was pulled from the Resnet User Group mailing list, Oct 17, 2009.

Most subscribers to this user group are IT directors or administrators for large residence networks at various universities. Many manage upwards of tens of thousands of Internet users. If you are an ISP, I would suggest you subscribe to this list and monitor it for ideas. Please note that vendor solicitation is frowned upon on the Resnet list.

As for the post below: the first part is Dennis’s recommendation for a good bandwidth shaper; he uses a carrier-grade Cisco product.

The second part is a commentary on the fallacy of layer 7 shaping. No, we do not know Dennis, nor does he use our products; he just happens to agree with our philosophy after trying many other products.

Dennis OReilly <Dennis.OReilly@ubc.ca>
to: RESNET-L@listserv.nd.edu, Sat, Oct 17, 2009 at 12:35 AM, subject: Re: Packet Shaping Appliance

At 9:22 AM -0400 10/16/09, Brandon Burleigh wrote:

We are researching packet shaping appliance options as our current model is
end-of-life.  It is also at its maximum for bandwidth and we need to increase
our bandwidth with our Internet service provider.  We are interested in
knowing what hardware others are using on their Internet service for packet
shaping.  Thank you.

At the University of British Columbia we own and still use four PS10000’s.   A year ago we purchased a Cisco SCE 2020 which has 4 x 1G interfaces.  The SCE 2020 is approx the same price point as the PS10000.  There is also an SCE 8000 model which has 4 x 10G interfaces, also at a decent price point.

Oregon State brought the SCE product line to our attention at Resnet Symposium 2007.  A number of other Canadian universities recently purchased this product.

The SCE is based on P-Cube technology which Cisco acquired in 2004.

In a nutshell comparing the SCE to the PS10000:
– PS10000 reporting is much superior
– PS10000 and SCE are approx equal at ability to accurately classify P2P
– SCE is essentially a wire speed device
– SCE is a scalable, carrier-grade platform
– Installation of SCE is more complicated than PS10000
– SCE has some capability to identify and mitigate DoS and DDoS attacks
– SCE handles asymmetric routing
– SCE has fine grained capabilities to control bandwidth

It is becoming more and more difficult over time for any packet shaping device like a Packetshaper, or a Procera, or an SCE to accurately classify P2P traffic. These days the only way to classify encrypted streams is through behavioral analysis. In the long run this is a losing proposition. Thus, approaches like the NetEqualizer or script-based ‘penalty box’ approaches are better. However, boxes like the SCE which have excellent capabilities to control bandwidth on a per user basis are also viable. Otherwise the carriers wouldn’t be using these products.

Top Tips To Quantify The Cost Of WAN Optimization


Editor’s Note: As we mentioned in a recent article, there’s often some confusion when it comes to how WAN optimization fits into the overall network optimization industry — especially when compared to Internet optimization. Although similar, the two techniques require different approaches to optimization. What follows are some simple questions to ask your vendor before you purchase a WAN optimization appliance. For the record, the NetEqualizer is primarily used for Internet optimization.

When presenting a WAN optimization ROI argument, your vendor rep will clearly make a compelling case for savings.  The ROI case will be made by amortizing the cost of equipment against your contracted rate from your provider. You can and should trust these basic raw numbers. However, there is more to evaluating a WAN optimization (packet shaping) appliance than comparing equipment cost against bandwidth savings. Here are a few things to keep in mind:

  1. The amortization schedule should also make reasonable assumptions about future costs for T1, DS3, and OC3 links. Most contracted rates have been dropping in many metro areas, and it is reasonable to assume that bandwidth costs will be perhaps 50-percent less two to three years out (a rough worked example follows this list).
  2. If you do increase bandwidth, the licensing costs for the traffic shaping equipment can increase substantially. You may also find yourself in a situation where you need to do a forklift upgrade as you outrun your current hardware.
  3. Recurring licensing costs are often mandatory to keep your equipment current. Without upgrading your license, your deep packet inspection (layer 7 shaping filters) will become obsolete.
  4. Ongoing labor costs to tune and re-tune your WAN optimization appliance can often run thousands of dollars per week.
  5. The good news is that optimization companies will normally allow you to try an appliance before you buy. Make sure you take the time to manage the equipment with your own internal techs or IT consultant to get an idea of how it will fit into your network. The honeymoon with new equipment (supported by a well-trained pre-sales team) can be short-lived. After the free pre-sale support has expired, you will be on your own.
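To see how the ROI can erode, here is a back-of-the-envelope calculation in Python. Every number is invented for illustration; it simply combines points 1 through 3 above: an appliance purchase plus recurring licensing, measured against bandwidth savings that shrink as contracted rates fall 50 percent over three years.

    appliance_cost = 15_000.0      # hypothetical purchase price
    annual_license = 3_000.0       # assumed recurring license fee
    monthly_bw_cost = 800.0        # today's monthly cost of the bandwidth avoided
    months = 36

    shaper_total = appliance_cost + annual_license * (months / 12)

    # If bandwidth prices fall linearly to 50% of today's rate, the
    # average monthly saving over the period is 75% of today's cost.
    bandwidth_saved = monthly_bw_cost * 0.75 * months

    print(f"Shaper cost over 3 years: ${shaper_total:,.0f}")    # $24,000
    print(f"Bandwidth cost avoided:   ${bandwidth_saved:,.0f}") # $21,600

Under these assumptions, a purchase that looks like an obvious win at today's rates ends up a small net loss, before counting any tuning labor.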

There are certainly times when WAN optimization makes sense, yet in many cases, what appears to be a no-brainer decision at first will be called into question as costs mount down the line. Hopefully these five contributing factors will paint a clearer picture of what to expect.


NetEqualizer White Paper: Comparison with Traditional Layer-7 (Deep Packet Inspection) Products


Updated with new reference material May 4th 2009

How NetEqualizer compares to Packeteer, Allot, Cymphonix, Exinda

We often get asked how NetEqualizer compares to Packeteer, Allot, Cymphonix, Exinda, and a plethora of other well-known companies that do layer 7 application shaping (packet shaping). After several years of these questions, and discussing different aspects with former and current application shaping IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.

We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject with content to support the bullet chart was in order. If you want to see just the bullet chart, you can skip to the end now, but if you’re looking to have the question answered as objectively as possible, please take a few minutes to read on.

In the following sections, we will cover specifically when and where application shaping (deep packet inspection) is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish. We will also discuss how the NetEqualizer and its behavior-based shaping fits into the landscape of application shaping, and how in some cases the NetEqualizer is a much better alternative.

First off, let’s discuss the accuracy of application shaping. To do this, we need to review the basic mechanics of how it works.

Application shaping is defined as the ability to identify traffic on your network by type and then set customized policies to control the flow rates for each particular type. For example, Citrix, AIM, Youtube, and BearShare are all applications that can be uniquely identified.

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from computer A to computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload is the address where it is being sent. On the inside is the data/payload that is being transmitted. These two elements, the address and the payload, comprise the complete IP packet. In the case of different applications on the Internet, we would expect to see different kinds of payloads.

At the heart of all current application shaping products is special software that examines the content of Internet packets as they pass through the packet shaper. Through various pattern matching techniques, the packet shaper determines in real time what type of application a particular flow is. It then proceeds to take action to possibly restrict or allow the data based on a rule set designed by the system administrator.

For example, the popular peer-to-peer application Kazaa actually has the ASCII characters “Kazaa” appear in the payload, and hence a packet shaper can use this keyword to identify a Kazaa application. Seems simple enough, but suppose that somebody was downloading a Word document discussing the virtues of peer-to-peer and the title had the character string “Kazaa” in it. Well, it is very likely that this download would be identified as Kazaa and hence misclassified. After all, downloading a Word document from a Web server is not the same thing as the file sharing application Kazaa.
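A toy version of this kind of matcher makes the failure mode obvious. This is a deliberately naive Python sketch, not how a commercial engine is built, but the principle of scanning payload bytes for known strings is the same:

    SIGNATURES = {
        b"Kazaa": "kazaa",
        b"BitTorrent protocol": "bittorrent",
    }

    def classify_payload(payload: bytes) -> str:
        for pattern, app in SIGNATURES.items():
            if pattern in payload:
                return app            # first matching signature wins
        return "unclassified"

    # The false positive described above: a document *about* Kazaa
    # matches the Kazaa signature even though it is a plain download.
    print(classify_payload(b"...the virtues of peer-to-peer and Kazaa..."))  # kazaa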

The other issue that constantly brings the accuracy of application shaping under fire is that some application writers find it in their best interest not to be classified. In a mini arms race that plays out every day across the world, some application developers are constantly changing their signatures, and some have gone as far as to encrypt their data entirely.

Yes, it is possible for the makers of application shapers to counter each move, and that is exactly what the top companies do, but it can take a heroic effort to keep pace. The constant engineering and upgrading required has an escalating cost factor. In the case of encrypted applications, the amount of CPU power required for decryption is impractically high, and other methods will be needed to identify encrypted p2p.

But this is not to say that application shaping never works or never provides value. So, let’s break down where it has potential and where it may bring false promises. First off, the realities of what really happens when you deploy and depend on this technology need to be discussed.

Accuracy and False Positives

In early 2003, a top engineer and executive joined APconnections directly from a company that offered application shaping as one of its many value-added technologies. He had firsthand knowledge from working with hundreds of customers who were big supporters of application shaping:

The application shaper his company offered could identify 90 percent of the spectrum of applications, which means they left 10 percent as unclassified. So, right off the bat, 10 percent of the traffic is unknown to the traffic shaper. Is this traffic important? Is it garbage that you can ignore? Well, there is no way to know without any intelligence, so you are forced to let it go by without any restriction. Or, you could put one general rule over all of the traffic – perhaps limiting it to 1 megabit per second max, for example. Essentially, if your intention was 100-percent understanding and control of your network traffic, right out of the gate you must compromise this standard.

In fairness, this 90-percent identification actually is an amazing number with regard to accuracy when you understand how daunting application shaping is. Regardless, there is still room for improvement.

So, that covers the admitted problem of unclassifiable traffic, but how accurate can a packet shaper be with the traffic it does claim to classify? Does it make mistakes? There really isn’t any reliable data on how often an application shaper will misidentify an application. To our knowledge, no independent consumer reporting company has ever created a lab capable of generating several thousand different application types with a mix of random traffic, then taken this mix and identified how often traffic was misclassified. Yes, there are trivial tests done one application at a time, but misclassification becomes more likely with real-world complex and diverse application mixes.

From our own testing of application technology freely available on the Internet, we discovered false positives can occur up to 25 percent of the time. A random FTP file download can be classified as something more specific. Obviously commercial packet shapers do not rely on the free technology in open source and they actually may improve on it. So, if we had to estimate based on our experience, perhaps 5 percent of Internet traffic will likely get misclassified. This brings our overall accuracy down to 85 percent (combining the traffic they don’t claim to classify with an estimated error rate for the traffic they do classify).

Constantly Evolving Traffic

Our sources (mentioned above) say that 70 percent of their customers that purchased application shaping equipment were using the equipment primarily as a reporting tool after one year. This means that they had stopped keeping up with shaping policies altogether and were just looking at the reports to understand their network (nothing proactive to change the traffic).

This is an interesting fact. From what we have seen, many people are just unable, or unwilling, to put in the time necessary to continuously update and change their application rules to keep up with the evolving traffic. The reason for the constant changing of rules is that with traditional application shaping you are dealing with a cunning and wise foe. For example, if you notice that there is a large contingent of users using Bittorrent and you put a rule in to quash that traffic, within perhaps days, those users will have moved on to something new: perhaps a new application or encrypted p2p. If you do not go back and reanalyze and reprogram your rule set, your packet shaper slowly becomes ineffective.

And finally, let’s not forget that application shaping is considered by some to be a violation of Net Neutrality.

When is application shaping the right solution?

There is a large set of businesses that use application shaping quite successfully along with other technologies. This area is WAN optimization. Thus far, we have discussed the issues with using an application shaper on the wide open Internet where the types and variations of traffic are unbounded. However, in a corporate environment with a finite set and type of traffic between offices, an application shaper can be set up and used with fantastic results.

There is also the political side to application shaping. It is human nature to want to see and control what takes place in your environment. Finding the best tool available to actually show what is on your network, and the ability to contain it, plays well with just about any CIO or IT director on the planet. An industry leading packet shaper brings visibility to your network and a pie chart showing 300 different kinds of traffic. Whether or not the tool is practical or accurate over time isn’t often brought into the buying decision. The decision to buy can usually be “intuitively” justified. By intuitively, we mean that it is easier to get approval for a tool that is simple to conceptually understand by a busy executive looking for a quick-fix solution.

As the cost of bandwidth continues to fall, the question becomes how much a CIO should spend to analyze a network. This is especially true when you consider that as the Internet expands, the complexity of shaping applications grows. As bandwidth prices drop, the cost of implementing such a product is either flat or increasing. In cases such as this, it often does not make sense to purchase a $15,000 bandwidth shaper to stave off a bandwidth upgrade that might cost an additional $200 a month.

What about the reporting aspects of an application shaper? Even if it can only accurately report 90 percent of the actual traffic, isn’t this useful data in itself?

Yes and no. Obviously analyzing 90 percent of the data on your network might be useful, but if you really look at what is going on, it is hard to feel like you have control or understanding of something that is so dynamic and changing. By the time you get a handle on what is happening, the system has likely changed. Unless you can take action in real time, the network usage trends (on a wide open Internet trunk) will vary from day to day.1 It turns out that the most useful information you can determine regarding your network is an overall usage pattern for each individual. The goof-off employee/user will stick out like a sore thumb when you look at a simple usage report, since the amount of data transferred can be 10 times the average for everybody else. The behavior is the indicator here, but the specific data types and applications will change from day to day and week to week.

How does the NetEqualizer differ and what are its advantages and weaknesses?

First, we’ll summarize equalizing and behavior-based shaping. Overall, it is a simple concept. Equalizing is the art form of looking at the usage patterns on the network, and then when things get congested, robbing from the rich to give to the poor. Rather than writing hundreds of rules to specify allocations to specific traffic as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

This behavior-based approach usually mirrors what you would end up doing if you could see and identify all of the traffic on your network, but doesn’t require the labor and cost of classifying everything. Applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority while large downloads and p2p receive lower priority. This behavior-based shaping does not need to be updated constantly as applications change.
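A simplified sketch of the trigger logic, using invented flow counters and thresholds (NetEqualizer's actual algorithm is more involved), might look like this:

    def flows_to_penalize(flows: dict, link_bps: float, trigger: float = 0.85):
        """flows maps a flow id to its current rate in bits/sec."""
        if sum(flows.values()) < link_bps * trigger:
            return []                                 # no congestion: do nothing
        fair_share = link_bps / max(len(flows), 1)
        # Congested: pick the flows exceeding their fair share ("the rich").
        return [f for f, bps in flows.items() if bps > fair_share]

Penalized flows would then have latency added or bandwidth withheld until utilization drops back below the trigger, at which point all restrictions are lifted.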

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned with accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception that we run into is video, and the NetEqualizer has a low level routine that easily allows you to give overriding priority to a specific server on your network, hence solving the problem.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.
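A hypothetical version of such a check is simple to sketch: count active connections per internal host and flag anyone holding an unreasonable number, which is the classic p2p footprint.

    from collections import Counter

    MAX_CONNECTIONS = 200   # assumed policy threshold, not a NetEqualizer default

    def connection_hogs(connections):
        """connections: iterable of (src_ip, dst_ip, dst_port) tuples."""
        per_host = Counter(src for src, _dst, _port in connections)
        return {host: n for host, n in per_host.items() if n > MAX_CONNECTIONS}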

This overview, along with the summary table below, should give you a good idea of where the NetEqualizer stands in relation to packet shaping.

Summary Table

Application based shaping

  • good for static links where traffic patterns are constant
  • intuitive to present; makes sense and is easy to explain to non-technical people
  • detailed reporting by application type
  • not the best fit for wide open Internet trunks
  • costly to maintain in terms of licensing
  • high initial cost
  • constant labor to tune with a changing application spectrum
  • expect approximately 15 percent of traffic to be unclassified
  • provides only a static snapshot of a changing spectrum, which may not be useful
  • false positives may show data incorrectly, with no easy way to confirm accuracy
  • violates Net Neutrality

Equalizing

  • not the best for dedicated WAN trunks
  • the most cost-effective for shared Internet trunks
  • little or no recurring cost or labor
  • low entry cost
  • conceptually takes some getting used to
  • basic reporting by behavior, used to stop abuse
  • handles encrypted p2p without modifications or upgrades
  • supports Net Neutrality

1 The exception is a corporate WAN link with relatively static usage patterns.

Note: Since we first published this article, deep packet inspection, also known as layer 7 shaping, has taken some serious industry hits with respect to US-based ISPs.

Related articles:

Why is NetEqualizer the low price leader in bandwidth control

When is deep packet inspection a good thing?

NetEqualizer offers deep packet inspection compromise.

Internet users attempt to thwart Deep Packet Inspection using encryption.

Why the controversy over deep packet inspection?

World Wide Web founder denounces deep packet inspection

Cisco Bandwidth Control for Education Networks


The Cisco method is outlined below. However, you might also want to check out the NetEqualizer video filmed in front of the IT staffs at Eastern Michigan and Western Michigan Universities for a perspective on a simple alternate philosophy.

There is quite a bit of history with traffic classification  in the higher-ed market, so you can research some of the pros and cons of Layer 7 shaping before investing. You might also find some of these higher ed testimonials on the NetEqualizer worth reading.

The following was pulled from Cisco  marketing material specific to their bandwidth control solution for educational networks:

A fundamental requirement of any bandwidth control solution is the ability to apply QoS mechanisms. These mechanisms control the bandwidth of specific users and prioritize traffic to help ensure appropriate handling of delay-sensitive applications. QoS capabilities are essential for carrying delay-sensitive IP voice and video traffic over an institution’s ISP link, as well as for rate limiting recreational P2P traffic.
The Cisco SCE uses three levels of QoS:

Hierarchical bandwidth control: The Cisco SCE supports granular bandwidth control by allocating part of a link’s bandwidth for groups of specific application flows. Academic IT departments can define these groups according to categories such as “all P2P traffic,” “browsing and streaming traffic,” “all traffic flowing off net,” and so on. In addition, colleges and universities can use the Cisco SCE to enforce minimum and maximum bandwidth limits and priorities for the total traffic that is produced by a given user, as well as for the specific applications (browsing, gaming, and so on) in which the user engages. These advanced mechanisms are used in a tiered fashion.

Differentiated Services (DiffServ) queuing: Internet applications use DiffServ to help ensure that packets from delay-sensitive applications are prioritized over other packets. The Cisco SCE includes DiffServ-compliant transmit queues using “Best Effort Forwarding,” four levels of “Assured Forwarding,” and “Expedited Forwarding” for delay-sensitive applications.

DiffServ marking:  The Cisco SCE’s advanced classification capabilities can also be used for marking the IP type of service (ToS)/DiffServ codepoint (DSCP) byte of the associated traffic. Each flow or group of flows can be marked with a relevant DiffServ value based on the application or service. The next-hop Layer 3 device, such as a switch or router, then uses this marking to carry the delay-sensitive traffic appropriately. As a result, the Cisco SCE, crucial to the Cisco Bandwidth Control Solution, can serve as the ideal network element for classifying and marking application traffic for other DiffServ-enabled network elements.
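For readers unfamiliar with DSCP marking, the ToS/DSCP byte is just a field in the IP header that a sender can set and the network can honor or rewrite. As a hedged illustration (an ordinary application socket on a platform exposing IP_TOS, not Cisco SCE configuration), Python can mark outgoing voice packets with Expedited Forwarding like this:

    import socket

    # DSCP 46 (Expedited Forwarding) occupies the top six bits of the
    # ToS byte, so the value written is 46 << 2 == 0xB8.
    EF_TOS = 46 << 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    # Packets sent on this socket now carry the EF marking, which
    # DiffServ-aware switches and routers can use to prioritize them.
    sock.sendto(b"voice frame", ("192.0.2.10", 5004))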

5 Tips to speed up your business T1/DS3 to the Internet


By Art Reisman


In tight times expanding your corporate Internet pipe is a hard pill to swallow, especially when your instincts tell you the core business should be able to live within the current allotment.

Here are some tips and hard facts that you may want to consider to help stretch your business Internet pipe.

1) Layer 7 application shaping.

The marketplace is crawling with solutions that allow you to set policies on bandwidth based on type of application. Application shaping allows an administrator to restrict lower-priority activities, while allowing mission-critical apps favorable consideration. This methodology is very seductive, but from our experience it can send your IT department into a nanny state, constantly trying to figure out what to allow and what to restrict. Also, the cost of an Internet link expansion is dropping, while many of the application shaping solutions start around $10,000 and go up from there.

The upside is that layer 7 application shaping does work well on internal WAN links that do not carry Internet traffic. An administrator can get a handle on the fixed traffic running privately within their network quite easily.

2) Using your router to restrict specific IP and ports

If your core business utilization can be isolated to a single server or group of servers, a few simple rules that allocate a large chunk of the pipe to these resources (by IP address) may be a good fit.

In an environment where business priorities change and are not isolated to a fixed server or two, this solution can backfire, but if your resource allocation requirements are stable doing something on your router to restrict one particular subnet over another can be useful in stretching your bandwidth.

One thing to be careful of is that it often takes a skilled technician to set up specialty rules on your router. You can easily rack up consulting fees if your setup is not static.
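The concept behind such router rules can be sketched in a few lines of Python: a token bucket per class of traffic, with the server subnet given the larger guaranteed slice. The subnet and rates below are invented for illustration; a real router implements the same idea in its own configuration syntax.

    import ipaddress, time

    class TokenBucket:
        def __init__(self, rate_bps: float, burst_bits: float):
            self.rate, self.burst = rate_bps, burst_bits
            self.tokens, self.last = burst_bits, time.monotonic()

        def allow(self, packet_bits: float) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bits:
                self.tokens -= packet_bits
                return True
            return False                      # over budget: drop or queue

    PRIORITY_NET = ipaddress.ip_network("10.0.5.0/24")   # assumed server subnet
    priority = TokenBucket(8_000_000, 1_000_000)         # 8 Mbps guaranteed slice
    default = TokenBucket(2_000_000, 250_000)            # 2 Mbps for everything else

    def admit(src_ip: str, packet_bits: int) -> bool:
        in_priority = ipaddress.ip_address(src_ip) in PRIORITY_NET
        return (priority if in_priority else default).allow(packet_bits)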

3) Behavior based shaping

Editor’s note: We are the makers of the NetEqualizer, which specializes in this technology; however, our intent in this article is to be objective.

Behavior-based shaping works well and affordably in most situations. Most business-related applications will get priority, as they tend to use small amounts of data or web pages. Occasionally there are exceptions that need to override the basic behavior-based shaping, such as video. Video can easily be excluded from the generic policies. Implementing a few exclusions is far less cumbersome than trying to classify all traffic all the time, as with application shaping.

4) Add more bandwidth and bypass your local loop carrier

T1s and T3s from your local telco may not be the only options for bandwidth in your area. Many of our customers get creative by purchasing bandwidth directly from a tier one provider (such as Level 3) and then using a microwave backhaul to bring the bandwidth to their location. The telcos make a killing with what they call a loop charge (before they put any bandwidth on your line). With microwave backhaul technology you can bypass this charge for significant savings.

5) Clean up the laptops and computers on your network. Many robots and viruses run in the background on your Windows machines and can generate a cacophony of background traffic. A business-wide license for good virus protection may be worth the investment. Stay away from the freeware versions of virus protection; they tend to miss quite a bit.

Is Barack Obama going to turn the tide toward Net Neutrality?


Network World of Canada discusses some interesting scenarios about possible policy changes with the new administration.

In the article the author (Howard Solomon) specifically cites Obama’s leaning…

Meanwhile, the new President favours net neutrality, the principle that Internet service providers (ISPs) shouldn’t interfere with content traveling online, which could hurt Sandvine, a builder of deep packet inspection appliances for ISPs. At least one Senator is expected to introduce limiting legislation this month.

Will this help NetEqualizer sales and our support for behavior-based Net Neutral policy shaping?

According to Eli Riles vice president of sales at APconnections, “I don’t think it will change things much, we are already seeing steady growth, and I don’t expect a rush to purchase our equipment due to a government policy change. We sell mostly to Tier2 and Tier3 providers who have already generally stopped purchasing Layer 7 solutions mostly due to the higher cost and less so due to moral high ground or government mandate.”


Stay tuned…
