When is Deep Packet Inspection a Good Thing?


Commentary

Update September 2011

It seems some shareholders of a company that over-promised Layer 7 technology are not happy.

By Eli Riles

As many of our customers are aware, we publicly stated back in October 2008 that we had officially switched all of our bandwidth control solutions over to behavior-based shaping. Consequently, we also completely disavowed Deep Packet Inspection, in a move that Ars Technica described as “vendor throws deep packet inspection under the bus.”

In the last few weeks, there has been a barrage of attacks on Deep Packet Inspection, and then a volley of PR supporting it from those implementing the practice.

I had been sitting on an action item to write something in defense of DPI, and then this morning I came across a pro-DPI blog post in the New York Times. The following excerpt is in reference to using DPI to give priority to certain types of traffic such as gaming:

“Some customers will value what they see as low priority as high priority,” he said. I asked Mr. Scott what he thought about the approach of Plusnet, which lets consumers pay more if they want higher priority given to their game traffic and downloads. Surprisingly, he had no complaints.

“If you said to me, the consumer, ‘You can choose what applications to prioritize and which to deprioritize, and, oh, by the way, prices will change as a result of how you do this,’ I don’t have a problem with that,” he said.

The key to this excerpt is the idea that YOU ASK THE CONSUMER WHAT THEY WANT. This implies permission. If you use DPI as an opt-in, above-board technology, then obviously there is nothing wrong with it. The threat to privacy only arises when DPI is used without consumer knowledge. It should not be up to the provider to decide the appropriate use of DPI, regardless of good intent.

The quickest way to deflate the objections of the DPI opposition is to allow consumers to choose. If you subscribe to a provider that gives you higher priority for certain applications, and this is spelled out in its literature, then by proxy you have granted permission to monitor your traffic. I can still see Net Neutrality purists being unhappy with any differential service, but realistically I think there is a middle ground.

I read an article the other day where a defender of DPI practices (sorry, no reference) pointed out how spam filtering is widely accepted and must use DPI techniques to be effective. The part the defender again failed to highlight was that most spam filtering is done as an opt-in with permission. For example, the last time I checked my Gmail account, it gave me the option to turn the spam filter off.

In sum, we are fully in support of DPI technology when the customer is made aware of its use and has a choice to opt out. However, any use of DPI done unknowingly and behind the scenes is bound to create controversy and may even be illegal. The exception would be a court order for a legal wiretap. Therefore, the Deep Packet Inspection debate isn’t necessarily a black and white case of two mutually exclusive extremes of right and wrong. If done candidly, DPI can be beneficial to both the Internet user and provider.

See also: What is Deep Packet Inspection?

Eli Riles, a consultant for APconnections (Netequalizer), is a retired insurance agent from New York. He is a self-taught expert in network infrastructure. He spends half the year traveling and visiting remote corners of the earth. The other half of the year you’ll find him in his computer labs testing and tinkering with the latest network technology.

For questions or comments, please contact him at eliriles@yahoo.com.

ROI Calculator for Bandwidth Controllers


Is your commercial Internet link getting full? Are you evaluating whether to increase the size of your existing Internet pipe, and trying to weigh that cost against investing in an optimization solution? If you answered yes to either of these questions, then you’ll find the rest of this post useful.

To get started, we assume you are somewhat familiar with the NetEqualizer’s automated fairness and behavior-based shaping.

To learn more about NetEqualizer behavior-based shaping, we suggest our NetEqualizer FAQ.

Below are the criteria we used for our cost analysis.

1) It was based on feedback from numerous customers (across different verticals) over the previous six years.

2) In keeping with our policies, we used average rather than best-case savings scenarios.

3) Our scenario applies to any private business or public operator that administers a shared Internet link with 50 or more users.

4) For our example, we will assume a 10-megabit trunk at a cost of $1,500 per month.

ROI savings #1: Extending the number of users you can support.

NetEqualizer’s Equalizing and fairness rules typically extend the number of users who can share a trunk by making better use of the available bandwidth at any given time. Effective bandwidth can be stretched by 10 to 30 percent:

Savings: $150 to $450 per month

ROI savings #2: Reducing support calls caused by peak-period brownouts.

We conservatively assume one brownout per month caused by general network overload. With a transient brownout, you will likely spend debug time trying to find the root cause: a bad DNS server could be the problem, your upstream provider may have an issue, or the brownout may simply be caused by congestion. Assuming you dispatch staff to troubleshoot a congestion problem once a month, at an overhead of 1 to 3 hours each time, the savings work out to roughly $300 per month in staff hours.

ROI savings #3: No recurring costs with your NetEqualizer.

Since the NetEqualizer uses behavior-based shaping, your license is essentially good for the life of the unit. Layer 7-based protocol shapers, by contrast, must be updated at least once a year. Savings: $100 to $500 per month

The total

The cost of a NetEqualizer unit for a 10-megabit circuit runs around $3,000, while the low estimate for savings is around $500 per month.

In our scenario the ROI is very conservatively 6 months.
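For readers who want to plug in their own numbers, here is a minimal sketch of the arithmetic behind this estimate. The trunk cost, savings ranges, and unit price come from the figures above; the $150-per-hour staff rate is our assumption to turn the 1-to-3-hour troubleshooting estimate into the $300 monthly figure. The article rounds the low-end total down to roughly $500 per month, which gives the six-month payback.

```python
# Rough ROI sketch using the figures from this post.
# Assumption (not from the article): staff time is valued at $150/hour,
# so the 2-hour average troubleshooting estimate equals $300/month.

TRUNK_COST_PER_MONTH = 1500.0   # 10 Mbps trunk, from the example above
UNIT_COST = 3000.0              # NetEqualizer unit for a 10 Mbps circuit

# Savings #1: stretching the trunk by 10-30% defers that fraction of trunk cost.
stretch_low, stretch_high = 0.10 * TRUNK_COST_PER_MONTH, 0.30 * TRUNK_COST_PER_MONTH

# Savings #2: one congestion brownout per month, 1-3 staff hours to troubleshoot.
STAFF_RATE = 150.0              # assumed hourly rate
brownout = 2 * STAFF_RATE       # average 2 hours -> $300/month

# Savings #3: no annual Layer 7 signature relicensing.
relicense_low, relicense_high = 100.0, 500.0

low_total = stretch_low + brownout + relicense_low      # ~$550/month
high_total = stretch_high + brownout + relicense_high   # ~$1,250/month

payback_months = UNIT_COST / low_total
print(f"Low-end savings:  ${low_total:.0f}/month")
print(f"High-end savings: ${high_total:.0f}/month")
print(f"Worst-case payback: {payback_months:.1f} months")
```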

Note: Commercial Internet links supported by NetEqualizer include T1, E1, DS3, OC3, T3, fiber, 1 gigabit, and more.


Open Source Linux Bandwidth Arbitrator vs. NetEqualizer Bandwidth Shaping


As many of you know, the commercial NetEqualizer bandwidth shaper is based on the Linux Bandwidth Arbitrator. From old customers and new, we often get asked what the differences are between the two solutions. Here are a few key points to consider…

1) Time and expertise

Most entities using open source have an experienced technology team with time to burn. Typically, users are university graduate students or Eastern European start-ups. If you have the time and Linux expertise, then building and supporting the open-source Linux Bandwidth Arbitrator is an excellent option.

2) Full featured GUI

The GUI and many advanced integrated features are not available with the Bandwidth Arbitrator.

3) Support

You are on your own should there be a problem with the open source technology.

4) Advanced features not in open source

Many of the features in the NetEqualizer are not part of the GPL source code. For example, priority host, bandwidth pools, and VLAN support are not available with the Bandwidth Arbitrator.

We’re sure longtime users of both products can add to the list, but this is a start. For more information about the Bandwidth Arbitrator and NetEqualizer, visit www.bandwidtharbitrator.com and www.netequalizer.com.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Equalizing Technology: NetEqualizer Offers A New Approach To Application Shaping


Below is a recent editorial featured on Processor.com

Equalizing Technology
NetEqualizer Offers A New Approach To Application Shaping
by Julie Sartain

Current application shaping products examine the content of Internet packets as they pass through the packet shaper. Using pattern-matching techniques, the packet shaper determines, in real time, the application type of each packet and then proceeds to restrict or allow the data based on a set of rules established by the system administrators.

Administrators can use these programs and define rules to restrict or allow any application that exists, but it takes an incredible amount of effort to keep pace. There is one product, however, that’s trying a new approach called equalizing technology. This product is NetEqualizer (800/918-2763; www.netequalizer.com) from a Colorado-based company called APconnections.

The Problems

According to Art Reisman, CEO at APconnections, pattern-matching techniques work on most classified packets, but what if the rules are set to restrict all packets containing ASCII characters or words such as Rhapsody, Napster, or bit torrent? One of these packets might contain a company-wide memo explaining the corporate policies regarding the usage of these programs on company computers. Pattern-matching rules would restrict this memo attachment.
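To make the false-positive risk concrete, here is a deliberately naive sketch of keyword-based payload matching. It illustrates the pitfall Reisman describes; the keyword list and classifier function are invented for the example and are not the signature logic of any real DPI product.

```python
import re

# Naive payload keyword rules of the kind described above (illustrative only;
# real DPI products use far more elaborate signatures).
BLOCK_PATTERNS = [re.compile(p, re.IGNORECASE)
                  for p in (r"napster", r"rhapsody", r"bittorrent")]

def naive_classify(payload: bytes) -> str:
    """Flag a packet if its payload contains any blocked keyword."""
    text = payload.decode("ascii", errors="ignore")
    return "restricted" if any(p.search(text) for p in BLOCK_PATTERNS) else "allowed"

# A company-wide memo about acceptable use trips the same rule as real P2P traffic.
memo = b"Reminder: use of Napster or BitTorrent on company machines is prohibited."
print(naive_classify(memo))   # -> "restricted" (a false positive)
```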

In addition, many companies intentionally refuse to classify their communications, so their packets slip past the application-shaping products. Seems like a small issue, unless hundreds of these junk mail packets are slipping through onto thousands of desktops in your company nationwide on a daily basis. Then it becomes a huge problem, as the bandwidth is usurped to process this unwanted garbage.

Even if an application-shaping product can identify 90% of the spectrum of apps (and that’s a lot), notes Reisman, 10% is still unclassified. Your options are to either monitor and manually classify that 10%, which is very time-consuming and costly, or allow those packets to pass without restrictions.

Solutions

“Our products can, generally, extend the capacity of your Internet from 25 to 50%,” says Reisman. “This means you can have that many more people using the Internet without adding more bandwidth.”

There is always the potential for a few users to overwhelm the Internet connection, he notes. But when applied to many verticals such as ISPs, libraries, schools, colleges, and businesses with 50 or more employees, the NetEqualizer prevents this from happening.

“NetEqualizer appliances automatically shape traffic based on built-in fairness rules,” notes Reisman. “This method allows network administrators/operators to quickly and easily bring network traffic into balance without having to build and manage extensive policy libraries and all without changes to their existing network infrastructure.”

How It Works

Reisman explains that APconnections looked at how systems keep one process from locking up the whole computer. For example, Microsoft Windows (www.microsoft.com) does not handle this well; however, Linux and Unix, as well as some of the other server equipment that’s available, do. The premise of these products is that no single computer program is allowed to dominate the CPU, so everything that’s running gets a turn. “We then applied this tried-and-true methodology to an Internet link,” says Reisman. “The result is NetEqualizer.”

NetEqualizer uses behavior-based shaping, adds Reisman. It looks at the behavior of abuse on an Internet link and then takes action based on that. When the network is congested, the fairness algorithm favors business-class applications, such as VoIP, Web browsing, chat, and email, at the expense of large file downloads.
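As a rough sketch of that idea, the snippet below penalizes only the heaviest flows, and only when the link is congested. The link capacity, congestion threshold, and fair-share rule are illustrative assumptions for this sketch, not NetEqualizer’s actual algorithm or parameters.

```python
# Simplified sketch of the "equalizing" idea described above: when the link is
# congested, delay the heaviest flows so interactive traffic keeps moving.

LINK_CAPACITY_KBPS = 10_000          # assumed 10 Mbps link
CONGESTION_THRESHOLD = 0.85          # assumed: start shaping at 85% utilization

def flows_to_penalize(flows_kbps: dict[str, float]) -> list[str]:
    """Return flows that should be delayed, largest first, during congestion."""
    total = sum(flows_kbps.values())
    if total / LINK_CAPACITY_KBPS < CONGESTION_THRESHOLD:
        return []                    # no congestion, leave everything alone
    fair_share = LINK_CAPACITY_KBPS / max(len(flows_kbps), 1)
    heavy = [f for f, rate in flows_kbps.items() if rate > fair_share]
    return sorted(heavy, key=flows_kbps.get, reverse=True)

flows = {"voip-call": 90, "web-browsing": 400, "large-download": 9_400}
print(flows_to_penalize(flows))      # -> ['large-download']
```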

The other available products (that is, the competition) try to classify specific varieties of traffic by type. Intuitively, the classification by type is easy for customers to understand, but implementing that process is very time-consuming, and the cost of trying to identify every type of traffic on the Internet is overwhelming and nearly impossible. NetEqualizer, on the other hand, always gets the bad guys because bad behavior is not a function of application type. And, as an added bonus, customers do not have to relicense the technology every month; it just works.

In addition, says Reisman, all the settings can be changed in real time, with no effect on network service quality. And NetEqualizer allows priority treatment of traffic for hosts that are not supposed to be shaped. Also (for organizations that require 100% network uptime), the NetEqualizer architecture allows customers to build a redundant system by configuring two NetEqualizer products running in parallel.

R&D History

“We started with no backing money, so we built a simple open-source version of the concept and begged people to try it,” says Reisman. The product excelled and rose to become one of the top 100 open-source projects in the world. (That’s considered extremely high, given that most top open-source projects are targeted at the general consumer.) The company then commercialized and enhanced it and contracted with a hardware manufacturer to produce it. There are now more than 1 million end users on six continents behind NetEqualizer equipment.

“We had many setbacks in the early going,” says Reisman. “Mostly just trying to get the product stable and keep it running on a reasonably priced piece of hardware.”

Most of APconnections’ market is customers who desperately need something but don’t want to pay $50,000 to optimize their $500-a-month Internet trunk. Getting the product stable in heavy use required the company to purchase sophisticated simulation equipment to troubleshoot the last few hard-to-find bugs. (That was more than three years ago.) Since then, APconnections has had reports of its servers in continuous, heavy use for years at a time without rebooting. “We are very proud of that,” says Reisman.

What’s New?

According to Reisman, the company has recently adapted this technology into an AP (access point) and, quite by accident, has solved a common problem called the hidden node issue, which has plagued 802.11 operators for years. There are other options for this problem, but those choices lock customers into proprietary solutions. APconnections’ solution is completely compatible with existing 802.11 wireless technologies, so customers can mix and match its AP without replacing everything.

APconnections Field Guide to Contention Ratios


In a recent article titled “The White Lies ISPs Tell about Broadband Speeds,” we discussed some of the methods ISPs use when overselling their bandwidth in order to put on their best face for their customers. To recap a bit, oversold bandwidth is a condition that occurs when an ISP promises more bandwidth to its users than it can actually deliver. Since the act of “overselling” is a relative term, with some ISPs pushing the limit to greater extremes than others, we thought it a good idea to do a quick follow-up and define some parameters for measuring the oversold condition. 

For this purpose, we use the term contention ratio. A contention ratio is simply the size of an Internet trunk divided by the number of users, with trunk size normally measured in megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio. If they shared the bandwidth on the trunk equally and simultaneously, each user could sustain a constant feed of 100 Kbps, which is exactly 1/10 of the overall bandwidth.
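In code, the definition is a one-liner; the helper below simply restates the arithmetic from the example above.

```python
# Contention ratio as defined above: trunk size divided by the number of users.
def per_user_kbps(trunk_mbps: float, users: int) -> float:
    """Sustained rate each user gets if everyone transmits simultaneously."""
    return trunk_mbps * 1000 / users

print(per_user_kbps(1, 10))   # 10-to-1 ratio on a 1 Mbps trunk -> 100 Kbps each
```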

So what is an acceptable contention ratio?

From a business standpoint, it is whatever a customer will put up with and pay for without canceling their service. This definition may seem ethically suspect, but whether in the bygone days of telephone service or in the contemporary Internet bandwidth business, there are long-standing precedents for overselling. What do you think causes a circuit busy signal? Or a dropped cell phone call? It’s best to leave the moral debate to a university assignment or a Sunday sermon.

So, without pulling any punches, what exactly will a customer tolerate before pulling the plug?
Here are some basic observations:
  • Rural customers in the US and Canada: Contention ratios of 50 to 1 are common
  • International customers in remote areas of the world: Contention ratios of 80 to 1 are common
  • Internet providers in urban areas: Contention ratios of 20 to 1 are to be expected
The numbers above are a good, rough starting point, but things are not as simple as they look. There is a statistical twist as bandwidth amounts get higher.

Contention ratios can actually increase as the overall Internet trunk size gets larger. For example, if 50 people can share one megabit without mutiny, it should follow that 100 people can share two megabits without mutiny, since the ratio has not changed. It is still 50 to 1.

However, from observations of hundreds of ISPs, we can conclude that roughly 110 people can share two megabits with the same tolerance as 50 people sharing one megabit. What this means is that the larger the ISP, the more bandwidth it has at a fixed cost per megabit, and thus the larger the contention ratios it can get away with.

Is this really true? And if so, what are its implications for your business?

This is simply an empirical observation, backed up by talking to literally thousands of ISPs over the course of four years and noticing how their oversubscription ratios increase with the size of their trunk.

A conservative estimate is that, starting with the baseline ratios listed above, you can safely add 10 percent more subscribers above and beyond the original contention ratio for each megabit of trunk they share.

Thus, to provide an illustration, 50 people sharing one megabit can safely be increased to 110 people sharing two megabits, and at four megabits you can easily handle 280 customers. With this understanding, getting more from your bandwidth becomes that much easier.
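The article does not give an exact formula, but one way to read the rule of thumb, sketched below, is to compound an extra 10 percent of subscribers for each megabit beyond the first. This reading is our assumption; it reproduces the 110-subscriber figure at two megabits and lands near (slightly under) the 280 figure quoted for four megabits.

```python
# One reading of the 10%-per-megabit rule of thumb above (an assumption, since
# the article does not give an exact formula): start from the baseline ratio
# and compound an extra 10% of subscribers for each megabit beyond the first.

def max_subscribers(trunk_mbps: int, baseline_ratio: int = 50) -> int:
    """Estimated subscribers a trunk can carry under the 10%-per-megabit rule."""
    return round(baseline_ratio * trunk_mbps * 1.10 ** (trunk_mbps - 1))

for mbps in (1, 2, 4):
    print(mbps, "Mbps ->", max_subscribers(mbps), "subscribers")
# 1 Mbps -> 50, 2 Mbps -> 110, 4 Mbps -> 266 (the article rounds this up to ~280)
```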