Layer 7 Application Shaping Dying with Increased SSL

By Art Reisman

When you put a quorum of front-line IT administrators in a room and an impromptu discussion breaks out, I become all ears. For example, last Monday, the discussion at our technical seminar at Washington University turned to the age-old subject of controlling P2P.

I was surprised to hear from several of our customers just how difficult it has become to implement Layer 7 shaping. The new challenge stems from the fact that SSL traffic cannot be decrypted and identified by a central bandwidth controller. Although we have known about this limitation for a long time, my sources tell me there has been a pickup in SSL adoption rates over the last several years. I don’t have exact numbers, but suffice it to say that SSL usage is way up.

A traditional Layer 7 shaper will report SSL traffic as “unknown.” A small amount of unknown traffic has always been considered tolerable, but now, with the pickup in SSL traffic, rumor has it that some vendors are requiring a module on each end node to decrypt SSL pages. No matter what side of the Layer 7 debate you are on, this provision can be a legitimate show stopper for anybody providing public or semi-open Internet access, and here is why:

Imagine your ISP requiring you to load a special module on your laptop or iPad to decrypt all your SSL information and send them the results. Obviously, this will not go over very well on a public Internet. It relegates Layer 7 technologies to networks where administrators have absolute control over all the end points. I suppose this will not be a problem for private businesses, where recreational traffic is not allowed, or in countries with extreme controls such as China and Iran, but for public Internet providers in the free world, whether it be student housing, a library, or a municipal ISP, I don’t see any future in Layer 7 shaping.

How to Block Frostwire, utorrent and Other P2P Protocols

By Art Reisman, CTO

Disclaimer: It is considered controversial and by some definitions illegal for a US-based ISP to use deep packet inspection on the public Internet.

At APconnections, we subscribe to the philosophy that there is more to be gained by explaining your technology secrets than by obfuscating them with marketing babble. Read on to learn how I hunt down aggressive P2P traffic.

In order to create a successful tool for blocking a P2P application, you must first figure out how to identify P2P traffic. I do this by looking at the output data dump from a P2P session.

To see what is inside the data packets, I use a custom sniffer that we developed. Then, to create a traffic load, I use a basic Windows computer loaded with the latest utorrent client.

Editor’s Note: The last time I used a P2P engine on a Windows computer, I ended up reloading my Windows OS once a week. Downloading random P2P files is sure to bring in the latest viruses, and unimaginable filth will populate your computer.

The custom sniffer is built into our NetGladiator device, and it does several things:

1) It detects packets as they cross the wire and dumps the data inside them to a file that I can look at later.

2) It maps non-printable ASCII characters to printable ASCII characters. That way, when I dump the contents of an IP packet to a file, I don’t get all kinds of special characters embedded in it. Since P2P data is mostly encoded music and video, you can’t view it without this filter. If you try, you’ll get all kinds of garbled scrolling when you look at the raw data in a text editor.
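The mapping step can be sketched in a few lines of Python. The real sniffer is built into the NetGladiator; this is just an illustration of the idea:

```python
def printable_dump(payload: bytes) -> str:
    """Map each byte to itself if it is printable ASCII (32-126),
    else to 'x', so raw packet data is viewable in a text editor."""
    return "".join(chr(b) if 32 <= b <= 126 else "x" for b in payload)

# Example: a payload mixing readable text with binary bytes.
print(printable_dump(b"d1:ad2:id20:\x00\x01\xffping"))  # -> d1:ad2:id20:xxxping
```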

So what does the raw data output dump of a P2P client look like?

Here is a snippet of some of the utorrent raw data I was looking at just this morning. The sniffer has converted the non-printable characters to “x”.
You can clearly see some repeating data patterns forming below. That is the key to identifying anything with Layer 7. Sometimes it is obvious; sometimes you really have to work to find a pattern.

Packet 1 exx_0ixx`12fb*!s[`|#l0fwxkf)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:ka 31:v4:utk21:y1:qe
Packet 2 exx_0jxx`1kmb*!su,fsl0’_xk<)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:xv4^1:v4:utk21:y1:qe
Packet 3 exx_0kxx`1exb*!sz{)8l0|!xkvid1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:09hd1:v4:utk21:y1:qe
Packet 4 exx_0lxx`19-b*!sq%^:l0tpxk-ld1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:=x{j1:v4:utk21:y1:qe

The next step is to develop a Layer 7 regular expression to identify the patterns in the data. In the output you’ll notice the string “exx” appears in every line, and that is what you look for. A repeating pattern is a good place to start.

The regular expression I decided to use looks something like:


This translates to: match any string starting with “exx”, followed by any character (“.”), followed by “0”, followed by “xx”, followed by any sequence of characters ending with “qe”.
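The exact pattern isn’t reproduced in this post, but the description translates into something you can test directly. Here is a quick Python sketch; the pattern `exx.0.xx.*qe` is my reconstruction (an assumption, loosened with one extra wildcard so it matches the sample packets above), not the production signature:

```python
import re

# Reconstructed signature -- an assumption based on the description
# and the sample packets, not the actual production pattern.
UTORRENT_RE = re.compile(r"exx.0.xx.*qe")

packets = [
    "exx_0ixx`12fb*!s[`|#l0fwxkf)d1:ad2:id20:...1:q4:ping1:t4:ka 31:v4:utk21:y1:qe",
    "exx_0jxx`1kmb*!su,fsl0'_xk<)d1:ad2:id20:...1:q4:ping1:t4:xv4^1:v4:utk21:y1:qe",
]

for p in packets:
    assert UTORRENT_RE.search(p)  # both sample payloads match

# An ordinary web request should not match.
assert UTORRENT_RE.search("GET /index.html HTTP/1.1") is None
```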

Note: When I tested this regular expression, it turned out to catch only a fraction of the utorrent traffic, but it is a start. What you don’t want to do is make your regular expression so loose that you get false positives. A Layer 7 product that creates a high degree of false positives is pretty useless.

The next thing I do with my new regular expression is test it for accuracy of target detection and for false positives.

Accuracy of detection is tested by clearing your test network of everything except the P2P target you are trying to catch, then running your Layer 7 device with the new regular expression and seeing how well it does.

Below is an example from my NetGladiator in a new sniffer mode. In this mode, Layer 7 detection is on, and I can analyze detection accuracy. In the output below, the sniffer puts a tag on every connection that matches my utorrent regular expression. In this case, the tag is the word “dad” at the end of each row. Notice how every connection is tagged, which means I am getting a 100 percent hit rate for utorrent. Obviously I doctored the output for this post :)

Index SRCP DSTP Wavg Avg IP1 IP2 Ptcl Port Pool TOS
0 0 0 17 53 — 2 99 dad
1 0 0 16 48 — 2 99 dad
2 0 0 16 48 — 2 99 dad
3 0 0 18 52 — 2 99 dad
4 0 0 12 24 — 2 99 dad
5 0 0 18 52 — 2 99 dad
6 0 0 10 0 — 2 99 dad
7 0 0 88 732 — 2 99 dad
8 0 0 12 0 — 2 99 dad
9 0 0 12 24 — 2 99 dad
10 0 0 16 48 — 2 99 dad
11 0 0 11 16 — 2 99 dad
12 0 0 17 52 — 2 99 dad
13 0 0 27 54 — 2 99 dad
14 0 0 10 0 — 2 99 dad
15 0 0 14 28 — 2 99 dad
16 0 0 14 32 — 2 99 dad
17 0 0 10 0 — 2 99 dad
18 0 0 24 33 — 2 99 dad
19 0 0 17 53 — 2 99 dad

A bit more on reading this sniffer output…

Notice columns 4 and 5, which indicate data transfer rates in bytes per second. These columns contain numbers that are less than 100 bytes per second – very small transfers. This is mostly because as soon as a connection is identified as utorrent, the NetGladiator drops all future packets on it, so it never really gets going. One thing I did notice is that the modern utorrent protocol hops very quickly from connection to connection. It tries not to show its cards. Why do I mention this? Because in Layer 7 shaping of P2P, speed of detection is everything. If you wait a few milliseconds too long to analyze and detect a torrent, it is already too late: the torrent has transferred enough data to keep itself going. It’s just a conjecture, but I suspect this is one of the main reasons utorrent is so popular. By hopping from source to source, it is very hard for an ISP to block without the latest equipment. I recently wrote a companion article on the speed of the technology behind a good Layer 7 device.

The last part of testing a regular expression involves looking for false positives. For this, we use a commercial-grade simulator. Our simulator uses a series of pre-programmed web crawlers that visit tens of thousands of web pages an hour at our test facility. We then take our Layer 7 device with our new regular expression and make sure that none of the web crawlers accidentally get blocked while reading thousands of web pages. If this test passes, we are good to go with our new regular expression.
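A scaled-down sketch of that false-positive pass, using a hypothetical stand-in signature (the production patterns are not published):

```python
import re

# Hypothetical stand-in for a layer 7 signature under test.
signature = re.compile(r"exx.0.xx.*qe")

# Stand-ins for pages a crawler would fetch; the real test runs tens of
# thousands of live web pages an hour through the device.
benign_pages = [
    "<html><head><title>Example</title></head><body>Hello world</body></html>",
    "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<p>News of the day</p>",
    "The quick brown fox jumps over the lazy dog.",
]

false_positives = [p for p in benign_pages if signature.search(p)]
assert false_positives == []  # the signature must not match benign traffic
```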

Editor’s Note: Our primary bandwidth shaping product manages P2P without using deep packet inspection.
The layer 7 techniques described in this article can be run on our NetGladiator Intrusion Prevention System. We also advise that public ISPs check their country’s regulations before deploying a deep packet inspection device on a public network.

A Smarter Way to Limit P2P Traffic

By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

If you are an IT professional interested in the ethical treatment of P2P (which we define as keeping it in check without invading the privacy of your customers by looking at their private data), you’ll appreciate our next-generation approach to containing P2P usage. Thanks to some key input from a leading-edge ISP in South Africa, we have developed a next-generation P2P control that balances the resources of an ISP yet allows end customers to use BitTorrent without bringing down the network.

First a quick review of how P2P affects a network

A signature of a typical P2P user is that they open hundreds of small connections while downloading files. A P2P client, such as Kazaa, is designed to find as many sources for a file as possible. For efficiency and speed, P2P clients operate as multi-threaded download engines, where each download stream captures a different segment of the requested file. When all the segments are complete, they are reassembled into a usable media file on your hard drive. The multiple downloads strain network bandwidth resources and create extreme overhead on wireless routers. Heavy P2P usage by just a subset of users can crowd out web pages, VoIP, YouTube, and many other less aggressive applications.
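The segment-and-reassemble behavior can be sketched in a few lines of Python. This is a simplification for illustration, not any client’s actual protocol:

```python
def split_into_segments(data: bytes, n: int) -> list[bytes]:
    """Divide a file into roughly n segments, one per download stream."""
    size = -(-len(data) // n)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

# Each segment would be fetched from a different peer in parallel;
# segments can finish out of order, so each carries its index.
file_data = b"some media file contents" * 4
segments = list(enumerate(split_into_segments(file_data, 8)))

# Reassembly: sort by segment index and concatenate.
reassembled = b"".join(seg for _, seg in sorted(segments))
assert reassembled == file_data
```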

Current P2P Limiting Solution: Connection Limits

Our current generation of P2P control involves intelligently looking at the number of connections generated by a user on your network. Based on the persistence and number of connections, we can reliably tell whether a user is currently using P2P. The current remedy, deployed on our NetEqualizer equipment, limits the number of connections of suspected P2P users. This works well to limit P2P usage and keeps P2P users from overwhelming a shared network.
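As a rough illustration (not the NetEqualizer’s actual code, which also weighs how persistent the connections are), a connection-count heuristic looks something like this:

```python
from collections import Counter

def flag_p2p_suspects(connections, limit=100):
    """Flag hosts whose open-connection count exceeds a limit.
    'connections' is a list of (src_ip, dst_ip, dst_port) tuples,
    e.g. pulled from a connection-tracking table."""
    counts = Counter(src for src, _, _ in connections)
    return {ip for ip, n in counts.items() if n > limit}

# A host with hundreds of simultaneous connections is a likely P2P user.
table = [("10.0.0.5", f"198.51.100.{i % 250}", 6881 + i) for i in range(300)]
table += [("10.0.0.9", "203.0.113.7", 443)]  # a normal web user

assert flag_p2p_suspects(table, limit=100) == {"10.0.0.5"}
```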

Next-Generation P2P Limiting: Smart Connection Limits

While we have retained the connection-limiting aspects of our current P2P limiting technology, the new technology goes a step further. With Smart Connection Limits, we also slowly starve the P2P connections of bandwidth. The bandwidth reduction is based on a formula that takes into account two main factors:

1) the number of connections a user has open.
2) the load on the network.

I like to think of this technology as more of a “reward system,” resulting in a higher quality of service for non-P2P users. In this case, the reward is that non-P2P users’ connections do not experience this reduction in bandwidth (although they may get equalized on any connection that is hogging bandwidth). P2P users will slowly see less bandwidth allocated to their P2P traffic, which should discourage them from using P2P on your network. Basically, this helps train them toward better behavior – sharing the network resource more fairly with others.

This philosophy of fairness is aligned with the primary goal of the NetEqualizer – to ensure fairness for all network users. It follows that if one user has 20 concurrent streams and another has only 5, then to ensure equal use of bandwidth under network load, the user with 20 streams should have his streams operate at 1/4 the speed of the streams of the user with 5. While you may configure Smart Connection Limits at various levels, you could enforce exactly the example above.
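The proportionality in that example is easy to sketch. This is a minimal illustration of the idea, not the NetEqualizer formula, which also factors in network load:

```python
def per_stream_rate(base_rate_kbps: float, streams: int) -> float:
    """Under load, give each of a user's streams a rate inversely
    proportional to how many streams the user has open."""
    return base_rate_kbps / streams

# The example from the text: 20 streams vs. 5 streams.
heavy = per_stream_rate(1000.0, 20)   # 50 kbps per stream
light = per_stream_rate(1000.0, 5)    # 200 kbps per stream

assert heavy == light / 4    # 20-stream user runs at 1/4 the per-stream speed
assert heavy * 20 == light * 5   # both users get the same total bandwidth
```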

The reason this technology is important is that, on a network pressed for bandwidth, P2P users often take an unfair share. Even with basic per-user rate caps in place, you often must augment that restriction by limiting the total number of connections per user. And now, with our latest technology, we also temporarily restrict the bandwidth per connection (applied only to P2P users).

If you are interested in learning more about Smart Connection Limits, to see if they are a fit for your network, contact us.

Some common questions and answers:

Is it possible to completely block P2P?

It is never safe to try to completely block P2P, for a couple of reasons.

1) Although it is usually possible to identify P2P, doing so is often expensive and not foolproof. Blocking traffic on circumstantial evidence will cause problems. Our solution, although targeted at limiting P2P, focuses on the resource footprint of the P2P user and does not attempt to outright block types of traffic. In other words, whether or not the traffic is actually P2P is not the issue. The issue is: is this user abusing resources? If yes, they get punished.

2) Devices that attempt to identify P2P traffic often use a technique called deep packet inspection (DPI), which is frowned upon as an invasion of privacy.  Additionally, we are finding that the latest P2P tools (such as utorrent) encrypt P2P streams as their default behavior, which defeats deep packet inspection.  Not so with our solutions; both Connection Limits and Smart Connection Limits will throttle encrypted P2P traffic.

Who do we recommend move from Connection Limits to Smart Connection Limits?

If you are in a business where you charge for bandwidth usage (ISP, WISP, satellite provider), you should consider implementing Smart Connection Limits.  We also recommend looking at Smart Connection Limits if you have repeat offenders – basically, the same users are consistently running P2P traffic on your network and you want to change their behavior.

Can I continue using the Connection Limits or do I need to move to Smart Connection Limits?

Both solutions for limiting P2P traffic are supported. If you do not have a lot of P2P traffic on your network, you may opt to stay with Connection Limits as a quick-and-easy implementation. Smart Connection Limits take a little more thought to implement and add complexity that you may not wish to take on at this point.
