By Art Reisman
Editor’s note: Art Reisman is the CTO of APconnections, which designs and manufactures the popular NetEqualizer bandwidth shaper. The company removed all deep packet inspection technology from its NetEqualizer product over two years ago.
Article Updated March 2012
As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.
Exactly what is deep packet inspection?
All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.
The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.
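To make the envelope-and-freight analogy concrete, here is a minimal Python sketch (ours, purely for illustration) that splits a raw IPv4 packet into its address header and its payload, following the standard IPv4 header layout from RFC 791:

```python
import socket
import struct

def split_ipv4_packet(raw: bytes):
    """Return (source address, destination address, payload) for one IPv4 packet."""
    # The low nibble of byte 0 is the header length in 32-bit words.
    header_len = (raw[0] & 0x0F) * 4
    # Source and destination addresses sit at fixed offsets 12-15 and 16-19.
    src, dst = struct.unpack("!4s4s", raw[12:20])
    # Everything after the header is the "freight" inside the railroad car.
    payload = raw[header_len:]
    return socket.inet_ntoa(src), socket.inet_ntoa(dst), payload
```

A shallow inspection device stops at the two addresses; a deep packet inspection device goes on to read the payload.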
When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).
Products that use DPI are essentially specialized snooping devices that examine the content (the payload) of Internet packets. Other terms sometimes used for techniques that examine Internet data include packet shapers and layer-7 traffic shaping.
How is deep packet inspection related to net neutrality?
Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.
Why do some Internet providers use deep packet inspection devices?
There are several reasons:
1) Targeted advertising — If a provider knows what you are reading, it can display targeted advertising on the pages it controls, such as your login screen or e-mail account.
2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem less desirable, such as BitTorrent and other forms of peer-to-peer. BitTorrent traffic can overwhelm a network with volume. By detecting and redirecting the BitTorrent traffic, or slowing it down, a provider can alleviate congestion (a simplified detection sketch appears after this list).
3) Blocking offensive material — Many companies or institutions that perform content filtering look inside packets to find, and possibly block, offensive material or web sites.
4) Government spying — In the case of Iran (and to some extent China), DPI is used to keep tabs on the local population.
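As an aside, the BitTorrent detection mentioned in item 2 is typically signature-based. The sketch below is a deliberately simplified illustration; the signature table is our own (though the BitTorrent handshake prefix shown is the real one), and production classifiers use far larger, constantly updated pattern sets:

```python
SIGNATURES = {
    # A BitTorrent handshake begins with the byte 19 followed by this string.
    b"\x13BitTorrent protocol": "bittorrent",
    b"GET ": "http",
    b"SSH-2.0": "ssh",
}

def classify_payload(payload: bytes) -> str:
    """Label a packet payload by matching known application signatures."""
    for signature, app in SIGNATURES.items():
        if payload.startswith(signature):
            return app
    return "unknown"  # encrypted or unrecognized traffic lands here
```

A shaper built on this idea throttles or redirects whatever classify_payload labels as bittorrent; it also shows why encrypted traffic, which matches no signature, defeats the approach.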
When is it appropriate to use deep packet inspection?
1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.
2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.
3) Intrusion detection and prevention — It is one thing to be acting as an ISP and to eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. In a private home, it is within your rights to look through your peephole and not let shady characters in. Likewise, in a private business it is a good idea to use deep packet inspection to block unwanted intruders from your network. Blocking bad guys before they break in and damage your network is perfectly acceptable.
4) Spam filtering — Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don’t read it, but computer scanners do), its motives are understood. What consumers may not realize is that their email provider is also reading everything they do in order to serve targeted advertising.
Does content filtering use deep packet inspection?
For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as network equipment needs to read them to route requests anyway. We have only encountered content filters at private institutions, which are within their rights to use them.
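For contrast with DPI, here is a minimal sketch of what URL-level filtering amounts to (the domain list and function are ours, purely for illustration): the filter consults only the requested hostname and never opens the payload:

```python
from urllib.parse import urlparse

# Hypothetical blocklist; real filters subscribe to curated category lists.
BLOCKED_DOMAINS = {"example-gambling.com", "example-malware.net"}

def is_blocked(url: str) -> bool:
    """Decide based solely on the (effectively public) hostname."""
    host = urlparse(url).hostname or ""
    # Match the domain itself and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("http://ads.example-gambling.com/page"))  # True
```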
What about spam filtering? Does that use deep packet inspection?
Yes, many spam filters will look at content, and most people could not live without their spam filter. However, most people have opted in to spam filtering at one point or another, so it is generally done with permission.
What is all the fuss about?
It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also of the many issues that have been raised about the practice.
For example, this is an excerpt from a recent E-Commerce Times article:
Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.
— Paul Stephens, director of policy and advocacy for the Privacy Rights Clearinghouse, as quoted in the E-Commerce Times on November 14, 2008. Read the full article here.
Recently, Comcast had its hand slapped for redirecting BitTorrent traffic:
Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.
— Digital Daily, March 10, 2008. Read the full article here.
Later in 2008, the FCC came down hard on Comcast.
In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.
By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.
— Wired.com, August 1, 2008. Read the full article here.
To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.
University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.
— Wired.com, May 22, 2008. Read the full article here.
However, it looks like things are going the other way in the U.K., as Britain’s Virgin Media has announced it is dumping net neutrality in favor of targeting BitTorrent.
The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.
— The Register, December 16, 2008. Read the full article here.
In January 2009, Canadian ISPs confessed en masse to deep packet inspection.
With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.
— Tech Spot, January 21, 2009. Read the full article here.
In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.
In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.
— Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.
The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.
7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.
— Kyle Brady, July 27, 2009. Read the full article here.
[February 2011 Update] The Egyptian government used DPI to filter elements of its Internet traffic, and this act in itself became a news story. In a video news piece, Al Jazeera took the opportunity to put out an unflattering report on Narus, the company that makes the DPI technology and sold it to the Egyptians.
While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.
Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.
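For the technically curious, the gist of this behavior shaping can be sketched in a few lines of Python. Everything below (names, thresholds, structure) is our illustrative assumption rather than the product’s actual implementation; the point is simply that the decision relies on per-flow byte counts, never on packet contents:

```python
CONGESTION_THRESHOLD = 0.85  # act only when the link is this full (assumed value)
HOG_SHARE = 0.25             # a flow above this share of capacity is a "hog"

def flows_to_penalize(flow_rates, utilization, capacity_bps):
    """Pick the flows that should have latency added during congestion.

    flow_rates maps a flow id to its current rate in bits per second.
    Payloads are never examined; fairness is judged from volume alone.
    """
    if utilization < CONGESTION_THRESHOLD:
        return []  # uncongested: leave every flow alone
    return [fid for fid, rate in flow_rates.items()
            if rate > HOG_SHARE * capacity_bps]
```

Because the decision is based on behavior (volume) rather than content, low-volume, latency-sensitive traffic such as VoIP naturally escapes the penalty.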
The Inside Scoop on Where the Market for Bandwidth Control Is Going
July 14, 2010 — Editor’s Note: The modern traffic shaper appeared in the market in the late 1990s. Since then market dynamics have changed significantly. Below we discuss these changes with industry pioneer and APconnections CTO Art Reisman.
Editor: Tell us how you got started in the bandwidth control business.
Back in 2002, after starting up a small ISP, my partners and I were looking for a tool that we could plug in to take care of the resource contention without spending too much time on it. At the time, we had a T1 to share among about 100 residential users and it was costing us $1,200 per month, so we had to do something.
Editor: So what did you come up with?
I consulted with my friends at Cisco on what they had. Quite a few of my peers from Bell Labs had migrated to Cisco on the coattails of Kevin Kennedy, who was also from Bell Labs. After consulting with them and confirming there was nothing exactly turnkey at Cisco, we built the Linux Bandwidth Arbitrator (LBA) for ourselves.
How was the Linux Bandwidth Arbitrator distributed and what was the industry response?
We put out an early version for download on a site called Freshmeat. Most of the popular stuff on that site is home-user utilities and tools for Linux. Given that the LBA was not really a consumer tool, it rose like a rocket on that site. We were getting thousands of downloads a month, and about 10 percent of those were installing it someplace.
What did you learn from the LBA project?
We eventually bundled layer 7 shaping into the LBA. At the time, that was the most requested feature. We loosely partnered with the Layer 7 project and a group at the Computer Science Department at the University of Colorado to perfect our layer 7 patterns and filter. Some of the other engineers and I soon realized that layer 7 filtering, although cool and cutting edge, was a losing game with respect to time spent and costs. It was not impossible, but in reality it was akin to trying to conquer all software viruses and only getting half of them. The viruses that remain will multiply and take over because they are the ones running loose. At the same time we were doing layer 7, the core idea of Equalizing, the way we did fairness allocation on the LBA, was getting rave reviews.
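For readers unfamiliar with how such layer 7 patterns worked, the approach matched regular expressions against the opening bytes of each connection. A simplified sketch follows; the two patterns are stand-ins we wrote for illustration, not the Layer 7 project’s actual ones:

```python
import re

# Per-protocol regexes applied to the start of a reassembled connection.
L7_PATTERNS = {
    "http": re.compile(rb"^(GET|POST|HEAD) \S+ HTTP/1\.[01]"),
    "smtp": re.compile(rb"^220 [^\r\n]*\r\n"),
}

def classify_connection(first_bytes: bytes) -> str:
    """Match the opening bytes of a connection against per-protocol regexes."""
    for protocol, pattern in L7_PATTERNS.items():
        if pattern.match(first_bytes):
            return protocol
    return "unknown"
```

Maintaining such a pattern set is exactly the arms race described above: every new or obfuscated protocol needs a new regex, and encrypted traffic matches nothing.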
What did you do next?
We bundled the LBA onto a CD for installation and put a fledgling GUI interface on it. Many of the commercial users were happy to pay for the convenience, and from there we started catering to the commercial market, and now here we are with the modern version of the NetEqualizer.
How do you perceive the layer 7 market going forward?
Customers will always want layer 7 filtering. It is the first thing they think of from the CIO on down. It appeals almost instinctively to people. The ability to choose traffic by type of application and then prioritize it by type is quite appealing. It is as natural as ordering from a restaurant menu.
We are not the only ones declaring a decline in deep packet inspection. We found this opinion on another popular blog about bandwidth control:
The bottom line is that while Deep Packet Inspection presentations include nifty graphs and seemingly exciting possibilities, it is only effective in streamlining small, very predictable networks. The basic concept is fundamentally flawed. The problem with large networks is not that bandwidth needs to be shifted from “bad” protocols to “good” protocols. The problem is volume. Volume must be managed in a way that maintains the strategic goals of the network administration. Nearly always this can be achieved with a macro approach of allocating a fair share to each entity that uses the network. Any attempt to micro-manage large networks ordinarily makes them worse, or at least simply results in shifting bottlenecks from one place to another.
So why did you get away from layer 7 support in the NetEqualizer back in 2007?
When you are trying to contain an open Internet connection, it does not work very well. The costs to implement were going up and up. The final straw was when encrypted p2p hit the cloud. Encrypted p2p cannot be specifically classified. It essentially tunnels through $50,000 investments in layer 7 shapers, rendering them impotent. Just because you can easily sell a technology does not make it right.
We are here for the long haul to educate customers. Most of our NetEqualizers stay in service as originally intended for years without licensing upgrades. Most expensive layer 7 shapers are mothballed after about 12 months or scaled back to do simple reporting. Most products are driven by channel sales, and the channel does not like to work very hard to educate customers about alternative technology. They (the channel) are interested in margins, just as a bank likes to collect fees to increase profit. We, on the other hand, sell for the long haul on value, not just on what we can turn over quickly because customers like what they see at first glance.
Are you seeing a drop off in layer 7 bandwidth shapers in the marketplace?
In the early stages of the Internet, up until the early 2000s, application signatures were not that complex and were fairly easy to classify. Plus, the cost of bandwidth was in some cases 10 times higher than 2010 prices. These two factors made the layer 7 solution a cost-effective idea. But over time, bandwidth costs dropped, speeds got faster, and the hardware and processing power required in layer 7 shapers rose. So now, in 2010, with much cheaper bandwidth, the layer 7 shaper is less effective and more expensive. IT people still like the idea, but slowly over time price and performance are winning out. I don’t think the idea of a layer 7 shaper will ever go away, because there are always new IT people coming into the market and they go through the same learning curve. There are also many WAN-type installations that combine layer 7 with compression for an effective boost in throughput. But even the business ROI for those installations is losing some luster as bandwidth costs drop.
So, how is the NetEqualizer doing in this tight market where bandwidth costs are dropping? Are customers just opting to toss their NetEqualizer in favor of adding more bandwidth?
There are some that do not need shaping at all, but there are many customers moving from $50,000 solutions to our $10,000 solution as they add more bandwidth. At the lower price points, bandwidth shapers still make sense with respect to ROI. Even with lower bandwidth costs, users will almost always clog the network with new, more aggressive applications. You still need a way to gracefully stop them from consuming everything, and the NetEqualizer at our price point is a much more attractive solution.