Equalizing Compared to Application Shaping (Traditional Layer-7 “Deep Packet Inspection” Products)


Editor’s Note (updated with new material March 2012): Since we first wrote this article, many customers have implemented the NetEqualizer not only to shape their Internet traffic, but also to shape their company WANs. Additionally, concerns about DPI and loss of privacy have bubbled up. (Updated with new material September 2010): Since we first published this article, “deep packet inspection”, also known as Application Shaping, has taken some serious industry hits with respect to US-based ISPs.

==============================================================================================
Author’s Note: We often get asked how NetEqualizer compares to Packeteer (Bluecoat), NetEnforcer (Allot), Network Composer (Cymphonix), Exinda, and a plethora of other well-known companies that do Application Shaping (aka “packet shaping”, “deep packet inspection”, or “Layer-7” shaping). After several years of these questions, and after discussing the different aspects with former and current application shaping IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.
We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject, with content to support the bullet chart, was in order. If you want to skip the details, see our Summary Table at the end of this article.

However, if you’re looking to really understand the differences, and to have the question answered as objectively as possible, please take a few minutes to read on…
==============================================================================================

How NetEqualizer compares to Bluecoat, Allot, Cymphonix, & Exinda

In the following sections, we will cover specifically when and where Application Shaping is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish.  We will also discuss how Equalizing, NetEqualizer’s behavior-based shaping, fits into the landscape of application shaping, and how in many cases Equalizing is a much better alternative.

Download the full article (PDF): Equalizing Compared To Application Shaping White Paper


NetEqualizer White Paper: Comparison with Traditional Layer-7 (Deep Packet Inspection) Products


Updated with new reference material May 4th 2009

How NetEqualizer compares to Packeteer, Allot, Cymphonix, Exinda

We often get asked how NetEqualizer compares to Packeteer, Allot, Cymphonix, Exinda, and a plethora of other well-known companies that do layer 7 application shaping (packet shaping). After several years of these questions, and after discussing the different aspects with former and current application shaping IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.

We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject, with content to support the bullet chart, was in order. If you want to see just the bullet chart, you can skip to the end now, but if you’re looking to have the question answered as objectively as possible, please take a few minutes to read on.

In the following sections, we will cover specifically when and where application shaping (deep packet inspection) is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish. We will also discuss how the NetEqualizer and its behavior-based shaping fits into the landscape of application shaping, and how in some cases the NetEqualizer is a much better alternative.

First off, let’s discuss the accuracy of application shaping. To do this, we need to review the basic mechanics of how it works.

Application shaping is defined as the ability to identify traffic on your network by type and then set customized policies to control the flow rates for each particular type. For example, Citrix, AIM, YouTube, and BearShare are all applications that can be uniquely identified.

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from computer A to computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload is the address where it is being sent. On the inside is the data/payload that is being transmitted. These two elements, the address and the payload, comprise the complete IP packet. In the case of different applications on the Internet, we would expect to see different kinds of payloads.

At the heart of all current application shaping products is special software that examines the content of Internet packets as they pass through the packet shaper. Through various pattern matching techniques, the packet shaper determines in real time what type of application a particular flow is. It then proceeds to take action to possibly restrict or allow the data based on a rule set designed by the system administrator.

For example, the popular peer-to-peer application Kazaa actually has the ASCII characters “Kazaa” appear in the payload, and hence a packet shaper can use this keyword to identify a Kazaa application. Seems simple enough, but suppose that somebody was downloading a Word document discussing the virtues of peer-to-peer and the title had the character string “Kazaa” in it. Well, it is very likely that this download would be identified as Kazaa and hence misclassified. After all, downloading a Word document from a Web server is not the same thing as the file sharing application Kazaa.
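
To make the mechanics concrete, here is a rough sketch of keyword-based classification in Perl. It is an illustration only, not any vendor’s engine, and the signature strings and payloads are invented; it simply shows that a payload match on “Kazaa” cannot tell real file-sharing traffic apart from an ordinary document that happens to contain the word.

 #!/usr/bin/perl
 # Illustrative sketch of naive payload signature matching -- not a real
 # packet shaper. Signatures and payloads below are invented examples.
 use strict;
 use warnings;

 my %signatures = (
     'Kazaa'      => qr/Kazaa/i,
     'BitTorrent' => qr/BitTorrent protocol/,
 );

 sub classify {
     my ($payload) = @_;
     for my $app (sort keys %signatures) {
         return $app if $payload =~ $signatures{$app};
     }
     return 'unclassified';
 }

 # Real file-sharing traffic and an innocent document download both match.
 print classify('GET /download?file=song.mp3 Kazaa-Client'), "\n";       # Kazaa
 print classify('...a Word document on the virtues of Kazaa...'), "\n";  # Kazaa (false positive)
 print classify('random encrypted bytes ...'), "\n";                     # unclassified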

The other issue that constantly brings the accuracy of application shaping under fire is that some application writers find it in their best interest not to be classified. In a mini arms race that plays out every day across the world, some application developers are constantly changing their signatures, and some have gone as far as to encrypt their data entirely.

Yes, it is possible for the makers of application shapers to counter each move, and that is exactly what the top companies do, but it can take a heroic effort to keep pace. The constant engineering and upgrading required has an escalating cost factor. In the case of encrypted applications, the amount of CPU power required for decryption is prohibitively high, and other methods will be needed to identify encrypted p2p.

This is not to say that application shaping never works or never provides value. So, let’s break down where it has potential and where it may bring false promises. First off, the realities of what really happens when you deploy and depend on this technology need to be discussed.

Accuracy and False Positives

In early 2003, a top engineer and executive joined APConnections directly from a company that offered application shaping as one of its many value-added technologies. He had first-hand knowledge from working with hundreds of customers who were big supporters of application shaping:

The application shaper his company offered could identify 90 percent of the spectrum of applications, which means 10 percent was left unclassified. So, right off the bat, 10 percent of the traffic is unknown to the traffic shaper. Is this traffic important? Is it garbage that you can ignore? There is no way to know without any intelligence about it, so you are forced to let it go by without any restriction. Or, you could put one general rule over all of the unclassified traffic, perhaps limiting it to 1 megabit per second maximum, for example. Essentially, if your intention was 100-percent understanding and control of your network traffic, right out of the gate you must compromise this standard.

In fairness, this 90-percent identification actually is an amazing number with regard to accuracy when you understand how daunting application shaping is. Regardless, there is still room for improvement.

So, that covers the admitted problem of unclassifiable traffic, but how accurate can a packet shaper be with the traffic it does claim to classify? Does it make mistakes? There really isn’t any reliable data on how often an application shaper will misidentify an application. To our knowledge, no independent consumer reporting company has ever created a lab capable of generating several thousand different application types mixed with random traffic, and then measured how often that traffic was misclassified. Yes, there are trivial tests done one application at a time, but misclassification becomes more likely with the complex and diverse application mixes found in the real world.

From our own testing of application-shaping technology freely available on the Internet, we discovered false positives can occur up to 25 percent of the time. For example, a random FTP file download can be classified as something more specific. Obviously, commercial packet shapers do not rely on this free open-source technology, and they may well improve on it. So, if we had to estimate based on our experience, perhaps 5 percent of Internet traffic will likely get misclassified. This brings the overall accuracy down to about 85 percent (combining the traffic they don’t claim to classify with an estimated error rate for the traffic they do classify).

Constantly Evolving Traffic

Our source (mentioned above) says that 70 percent of the customers who purchased application shaping equipment were using it primarily as a reporting tool after one year. This means they had stopped keeping up with shaping policies altogether and were just looking at the reports to understand their network (doing nothing proactive to change the traffic).

This is an interesting fact. From what we have seen, many people are just unable, or unwilling, to put in the time necessary to continuously update and change their application rules to keep up with the evolving traffic. The reason for the constant changing of rules is that with traditional application shaping you are dealing with a cunning and wise foe. For example, if you notice that there is a large contingent of users using Bittorrent and you put a rule in to quash that traffic, within perhaps days, those users will have moved on to something new: perhaps a new application or encrypted p2p. If you do not go back and reanalyze and reprogram your rule set, your packet shaper slowly becomes ineffective.

And finally, let us not forget that application shaping is considered by some to be a violation of Net Neutrality.

When is application shaping the right solution?

There is a large set of businesses that use application shaping quite successfully along with other technologies. This area is WAN optimization. Thus far, we have discussed the issues with using an application shaper on the wide open Internet where the types and variations of traffic are unbounded. However, in a corporate environment with a finite set and type of traffic between offices, an application shaper can be set up and used with fantastic results.

There is also the political side to application shaping. It is human nature to want to see and control what takes place in your environment. Finding the best tool available to actually show what is on your network, along with the ability to contain it, plays well with just about any CIO or IT director on the planet. An industry-leading packet shaper brings visibility to your network, complete with a pie chart showing 300 different kinds of traffic. Whether or not the tool is practical or accurate over time isn’t often brought into the buying decision. The decision to buy can usually be “intuitively” justified. By intuitively, we mean that it is easier to get approval for a tool whose concept a busy executive looking for a quick-fix solution can readily grasp.

As the cost of bandwidth continues to fall, the question becomes how much a CIO should spend to analyze a network. This is especially true when you consider that as the Internet expands, the complexity of shaping applications grows. As bandwidth prices drop, the cost of implementing such a product is either flat or increasing. In cases such as this, it often does not make sense to purchase a $15,000 bandwidth shaper to stave off a bandwidth upgrade that might cost an additional $200 a month.

What about the reporting aspects of an application shaper? Even if it can only accurately report 90 percent of the actual traffic, isn’t this useful data in itself?

Yes and no. Obviously, analyzing 90 percent of the data on your network might be useful, but if you really look at what is going on, it is hard to feel like you have control or understanding of something so dynamic and changing. By the time you get a handle on what is happening, the system has likely changed. Unless you can take action in real time, the network usage trends (on a wide open Internet trunk) will vary from day to day.1 It turns out that the most useful information you can determine about your network is an overall usage pattern for each individual. The goof-off employee/user will stick out like a sore thumb in a simple usage report, since the amount of data transferred can be 10 times the average for everybody else. The behavior is the indicator here, while the specific data types and applications will change from day to day and week to week.
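
A minimal sketch of that kind of per-user usage report is below; the hosts and byte counts are invented, and the 10x threshold simply follows the rule of thumb above.

 #!/usr/bin/perl
 # Sketch of a per-host usage report; byte counts are invented for illustration.
 # A host transferring more than 10x the average of everyone else gets flagged.
 use strict;
 use warnings;

 my %bytes_by_host = (
     '10.1.1.10' =>   120_000_000,
     '10.1.1.11' =>    95_000_000,
     '10.1.1.12' => 1_400_000_000,   # the "goof-off" user
 );

 my $total = 0;
 $total += $_ for values %bytes_by_host;

 for my $host (sort keys %bytes_by_host) {
     # Average usage of everybody else, excluding this host
     my $others_avg = ($total - $bytes_by_host{$host}) / (keys(%bytes_by_host) - 1);
     my $flag = $bytes_by_host{$host} > 10 * $others_avg ? '  <-- investigate' : '';
     printf " %-12s %14d bytes%s\n", $host, $bytes_by_host{$host}, $flag;
 }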

How does the NetEqualizer differ and what are its advantages and weaknesses?

First, we’ll summarize equalizing and behavior-based shaping. Overall, it is a simple concept. Equalizing is the art form of looking at the usage patterns on the network, and then when things get congested, robbing from the rich to give to the poor. Rather than writing hundreds of rules to specify allocations to specific traffic as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

This behavior-based approach usually mirrors what you would end up doing if you could see and identify all of the traffic on your network, but doesn’t require the labor and cost of classifying everything. Applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority while large downloads and p2p receive lower priority. This behavior-based shaping does not need to be updated constantly as applications change.
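
The following is a conceptual sketch of the idea, not NetEqualizer’s actual implementation; the flow records, link size, thresholds, and delay value are invented for illustration. Once utilization crosses a trigger point, the largest, longest-lived flows pick up a small added delay while short interactive traffic is left alone.

 #!/usr/bin/perl
 # Conceptual sketch of behavior-based shaping ("equalizing"), not the
 # product's implementation. Flows, link size, and thresholds are invented.
 use strict;
 use warnings;

 my $link_bps = 5_000_000;    # 5 Mbps link capacity
 my $trigger  = 0.85;         # start arbitrating above 85% utilization

 my @flows = (
     { host => '10.0.0.5', bps => 6_000_000, age_secs => 240 },  # long-lived download
     { host => '10.0.0.9', bps =>    40_000, age_secs =>   2 },  # short web hit
 );

 my $total_bps = 0;
 $total_bps += $_->{bps} for @flows;

 if ($total_bps > $trigger * $link_bps) {
     # Congested: "rob from the rich" -- penalize big, long-lived flows only.
     for my $flow (sort { $b->{bps} <=> $a->{bps} } @flows) {
         next if $flow->{bps} < 100_000 || $flow->{age_secs} < 10;
         print "PENALTY $flow->{host} delay 50 ms\n";
     }
 }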

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned with accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception we regularly run into is video, and the NetEqualizer has a low-level routine that easily allows you to give overriding priority to a specific server on your network, hence solving the problem.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.
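
A heavily simplified illustration of the connection-limiting idea follows; it is not NetEqualizer’s proprietary algorithm, and the limit and host are invented. The point is simply that counting active connections per inside host and refusing new ones above a configured ceiling contains the p2p behavior described above.

 #!/usr/bin/perl
 # Simplified sketch of per-host connection limiting, not the product's
 # algorithm. The limit and host below are invented for illustration.
 use strict;
 use warnings;

 my $limit = 40;       # max simultaneous connections allowed per host
 my %conn_count;       # inside host -> number of active connections

 sub allow_new_connection {
     my ($inside_host) = @_;
     # Over the limit: likely p2p trying to open hundreds of flows
     return 0 if ($conn_count{$inside_host} || 0) >= $limit;
     $conn_count{$inside_host}++;
     return 1;
 }

 # A p2p client trying to open 100 connections gets cut off at the limit.
 my $accepted = 0;
 $accepted += allow_new_connection('10.1.1.50') for 1 .. 100;
 print "accepted $accepted of 100 attempted connections\n";   # prints 40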

This overview, along with the summary table below, should give you a good idea of where the NetEqualizer stands in relation to packet shaping.

Summary Table

Application-based shaping

  • good for static links where traffic patterns are constant
  • good for intuitive presentations; makes sense and is easy to explain to non-technical people
  • detailed reporting by application type
  • not the best fit for wide open Internet trunks
  • costly to maintain in terms of licensing
  • high initial cost
  • constant labor to tune with a changing application spectrum
  • expect approximately 15 percent of traffic to be unclassified or misclassified
  • only a static snapshot of a changing spectrum; may not be useful over time
  • false positives may show data incorrectly, with no easy way to confirm accuracy
  • violates Net Neutrality

Equalizing

  • not the best for dedicated WAN trunks

  • the most cost effective for shared Internet trunks
  • little or no recurring cost or labor
  • low entry cost
  • concept takes some getting used to
  • basic reporting by behavior used to stop abuse
  • handles encrypted p2p without modifications or upgrades
  • supports Net Neutrality

1 The exception is a corporate WAN link with relatively static usage patterns.

Note: Since we first published this article, deep packet inspection, also known as layer 7 shaping, has taken some serious industry hits with respect to US-based ISPs.

Related articles:

Why is NetEqualizer the low price leader in bandwidth control

When is deep packet inspection a good thing?

NetEqualizer offers deep packet inspection compromise.

Internet users attempt to thwart Deep Packet Inspection using encryption.

Why the controversy over deep packet inspection?

World Wide Web founder denounces deep packet inspection

A Detailed Case Study of Packet Shaper and NetEqualizer


Editor’s note:

The quote by the Adams State administrator sums it up.

 "The price is fair, the best value in the product space"

This is a re-post of the Adams State blog. The details are a bit technical, which doesn’t reflect the actual simplicity of a basic setup: from box to network it is usually under an hour, with little or no recurring maintenance.

http://faculty.adams.edu/~cdmiller/?TrafficShaping

Also note that the NTOP reporting issues mentioned below were remedied shortly after this original post back in 2006.

———————————————————————————————————-

In May 2006 we switched bandwidth management products. We moved from traditional layer 7 traffic shaping to bandwidth arbitration. We looked at upgrading our current product and 3 other solutions.

I am convinced protocol and layer 7 based filtering is dead. I expect P2P products to use SSL or TLS bypassing layer 7 filters. Ethically layer 7 filtering smells like content filtering, big brother, evil.

Bandwidth arbitration keeps things simple. When the Internet connection reaches a tuneable level of utilization the arbitrator slows down longer lived higher usage data transfers based on the number of connections and their utilization. Per host connection limiting keeps P2P playing nicely.

The chosen product? Net Equalizer.

Based on the open source Bandwidth Arbitrator, it is easy to configure and highly customizable. Support has been excellent.

  • Initial Tests

With the NetEqualizer link size set ~20% below our average utilization, our pipe remained completely usable. Interactive applications responded well while large transfers continued to function. The connection limits appear to keep BitTorrent and Gnutella functional and in control.

  • Qualitative Results 2006-06-23

Downloads are faster, latency is at pre-layer-7-filtering levels (9ms vs 300ms), P2P protocols are usable again, and we no longer police content, we manage bandwidth. Support has been excellent, with technicians responding directly to my emails and all technical levels of questions answered: good, silly, and questions about the inner workings of the appliance. I was instructed on cautions to take with any attempt at customization, and given the go-ahead for some minor custom configuration without voiding the warranty.

  • Update 2006-11-06

We have run the Netequalizer for 6 months. Results are phenomenal compared with our last product. Our Netequalizer box has been up for 116 days with no configuration changes from the start of the semester. I look at my Cacti graphs and the custom CGI reports for solace, as if I’m disappointed the appliance doesn’t need more care and feeding.

  • Our Configuration

For our 21Mb link, we set 3 basic parameters:

 RATIO 75
 BRAIN_SIZE 2500
 CONNECTION LIMIT 40

The ratio is the amount of our pipe in use before any shaping (arbitration) takes place. The brain_size is the number of connections for the equalizer to track and act upon; I have seen this number reached only once on our system. The connection limit means we allow 20 incoming and 20 outgoing connections maximum for every host on our network. We had to set every one of our servers as an exception to this rule, allowing 50,000 incoming and outgoing connections for those. We also had to specify our link size. That’s it, end of configuration.

  • Custom Modifications

We did very simple things to reassure ourselves about the performance of the box. First, we placed an SNMP daemon on it. I used a stock snmpd from a Mandriva 2006 server, from net-snmp 5.2.1.2. I was going to statically compile one, but it turned out the dynamic libraries were all in place; here is the ldd output:

     ldd /usr/local/snmp/sbin/snmpd
     linux-gate.so.1 =>  (0xffffe000)
     libdl.so.2 => /lib/tls/libdl.so.2 (0x4001b000)
     libz.so.1 => /usr/lib/libz.so.1 (0x4001f000)
     libm.so.6 => /lib/tls/libm.so.6 (0x40031000)
     libc.so.6 => /lib/tls/libc.so.6 (0x40057000)
     /lib/ld-linux.so.2 (0x40000000)

I put the daemon in /usr/local/snmp/sbin/ and the mibs and snmpd.conf in /usr/local/snmp/share/snmp/.
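
A minimal snmpd.conf along these lines might look like the following sketch; the community string, subnet, and contact are placeholders, not the values actually used at Adams State.

 # Minimal snmpd.conf sketch; community string, subnet, and contact are
 # placeholders -- adjust for your site.
 # Read-only access from the management subnet:
 rocommunity monitor 192.0.2.0/24
 syslocation NetEqualizer appliance, server room
 syscontact  netadmin@example.edu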

We created 2 custom CGI scripts. One script shows the complete current logfile on demand rather than the last however many lines the web interface shows. The other script shows total current connections, followed by a list of hosts with more than 3 connections, sorted by total outgoing and incoming connections. I modified some of the scripts provided in the /art directory to produce those results. Someone with more familiarity with the Linux bridge utilities could probably do better.

Here is the showlog.cgi script I placed in the /var/www/cgi-bin/arbi directory:

 #!/bin/perl
 print "Content-type: text/html\n\n";
 print "<html><head></head><body><pre>";
 system("cat /tmp/arblog.bak");
 system("cat /tmp/arblog");
 print "</pre></body></html>";

Here are some lines from the showlog output, catching the arbitrator slowing someone down with .05 second delays (the DELAY portion):

 11/06/06 08:39:32 PENALTY  IP : 147.124.8.230 192.156.134.2 POOL: 0  WAVG:  133212 BUFF: 102  DELAY: 5
 11/06/06 08:39:32 INCREASE PENALTY  IP: 147.124.8.230  192.156.134.2 POOL: 0  BUFF: 102  DELAY: 10
 11/06/06 08:39:44 Traffic up: 575430 Traffic  down: 962330  POOL 0
 PENALTY  THRESHOLD pool 0 up 2688000 down 2688000
 11/06/06 08:39:47 PENALTY DECREASE: 147.124.8.230 192.156.134.2 to 5 POOL: 0
 11/06/06 08:39:51 PENALTY REMOVE: 147.124.8.230 192.156.134.2 POOL: 0
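
The connections script itself was not included in the original post. Below is a rough sketch of one way such a report could be produced; it assumes a hypothetical plain-text connection table at /tmp/conntable with one "direction local_ip remote_ip" entry per line, whereas the real script parsed output from the scripts in the /art directory.

 #!/usr/bin/perl
 # Sketch of a connections-report CGI, not the script actually deployed.
 # Assumes a hypothetical /tmp/conntable file with one connection per line:
 # "direction local_ip remote_ip", where direction is "in" or "out".
 print "Content-type: text/html\n\n";
 print "<html><head></head><body><pre>";

 my (%out, %in);
 my $total = 0;
 open(my $fh, '<', '/tmp/conntable') or die "cannot open conntable: $!";
 while (<$fh>) {
     my ($dir, $local, $remote) = split;
     next unless defined $remote;
     $total++;
     $dir eq 'out' ? $out{$local}++ : $in{$local}++;
 }
 close($fh);

 print "Total Connections: $total\n";
 print "More than 3 Outgoing Connections:\n";
 print "$_ $out{$_}\n" for grep { $out{$_} > 3 } sort { $out{$b} <=> $out{$a} } keys %out;
 print "More than 3 Incoming Connections:\n";
 print "$_ $in{$_}\n"  for grep { $in{$_} > 3 }  sort { $in{$b} <=> $in{$a} } keys %in;
 print "</pre></body></html>";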

Here is some output from our connections script with the top 5 out and in hosts:

 Total Connections: 2074
 More than 3 Outgoing Connections:
 192.156.134.15 76
 192.156.134.2 61
 72.166.201.218 58
 192.156.134.16 36
 72.166.205.159 21
 More than 3  Incoming Connections:
 72.166.205.159 88
 192.156.134.15 76
 72.166.201.110 57
 192.156.134.2 56
 72.166.201.218 51

Notice the hosts with more than 20 connections. Some of these are exempt servers, but others are workstations. Our firewall disallows unrelated incoming connections to campus workstations; the Netequalizer is in front of the firewall. I have examined some of these cases and many are P2P connection attempts that never truly connect to transfer data, or are very short lived. We typically see about 20 to 30 hosts at or above the connection limit and about 100 hosts with more than 3 incoming or outgoing connections, including all of our Internet servers.

  • Verification, Tests

We have an out of band PC using Ntop to track what hosts on the network are doing. I have verified the output of the Netequalizer against our Ntop machine many times in the last few months. I have also on occasion initiated a large download from a fast Internet site when I notice one or two folks getting high data rates. At those times I have observed Netequalizer start to arbitrate, creating head room on the pipe to keep bursty interactive traffic responsive.

  • Criticism, Pros, Cons
 The user interface is spartan, strictly functional
 Ntop is not really usable on the appliance

 Editor’s note: (NTOP has been updated and is supported in later versions since this comment was posted)

 An SNMP daemon should be included
 More logging should be available
 Performance is as advertised, if not better
 Minimal configuration is required
 Maintenance is minimal
 User manual has some typos
 User manual requires a full read
 User manual is only 36 pages, reflects minimal configuration required
 Some level of customization is allowed without voiding the warranty
 Support is excellent
 The price is fair, the best value in the product space

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.
