NetEqualizer News: February 2013



Enjoy another issue of NetEqualizer News! This month, we discuss AD integration into NetEqualizer, our upcoming Educause conference, the new NetEqualizer Dashboard feature, and the history of P2P blocking. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

In February, our thoughts turn to love and friendship. Valentine’s Day is coming up this week, a great day to celebrate those in your life whom you love. So this month we celebrate you, our customers!

This Newsletter is our valentine to you! As candy is fattening, we are instead fattening up your mind. Our gifts to you include an opportunity to participate in our AD Beta Test, a chance to learn more about the history of P2P, and the opportunity to pick up some bling at Educause!

We love it when we hear back from you – so if you have a story you would like to share with us about how we have helped you, let us know. Email me directly – I would love to hear from you!

AD Integration Update and Beta Testing

We are well underway with beta testing our new and exciting NetEqualizer feature – Active Directory integration. The feature is being broken down into two release phases:

In the first phase, we’ll allow administrators to see the Active Directory username associated with the IP Address in the connection table (assuming the user used Active Directory to authenticate). We’ll also allow you to sort the table by username and IP for quick analysis of a specific username.

This screenshot shows how usernames will be displayed in the connection table:


In the second phase, which will be released in the summer of 2013, we’ll allow administrators to set rate limits by username as well as give priority to certain users. This way, users don’t have to be part of a certain subnet to gain priority access.

If your organization uses Active Directory for user authentication, you have had a NetEqualizer for at least one year, and you’d be willing to assist us in our testing, let us know by sending an email to:

Stay tuned to NetEqualizer News for more updates and GA release details!

See You at Educause!

And get a cool NetEqualizer pen!

We are conducting a Poster Session on Wednesday, 2/13 at the West/Southwest Educause Regional Conference in Austin, Texas.

If you are at the conference, stop by to see us!  If you do, and mention this Newsletter, we will give you a fabulous NetEqualizer pen!


Here is our abstract for the conference:

Maximizing Your Internet Resource: Why Behavior-Based QoS Is the Future of Bandwidth Shaping

Higher education is tasked to do more with less, particularly when managing a scarce resource like bandwidth. Behavior-based QoS, an affordable bandwidth shaping technology, is coming to the forefront. It’s also gaining mindshare as a superior bandwidth shaping technology, as encrypted traffic thwarts deep packet inspection. This poster will delve into the differences between DPI and behavior-based QoS, explaining where each is best suited for networks. Learn how to reduce P2P and HEOA/RIAA requests on your campus and see behavior-based QoS in action.

We will offer a live online demonstration of our affordable NetEqualizer.


We hope to see you there!

Don’t Forget to Upgrade to 6.0!

With a brief tutorial on our Dashboard

If you have not already upgraded your NetEqualizer to Software Update 6.0, now is the perfect time! We have discussed the new upgrade in depth in previous newsletters and blog posts, so this month we thought we’d show you how to take advantage of our new Dashboard features!

If you have not explored it, here is what you can expect to see:

– You can immediately tell which key processes are running, through our green (on)/red (off) icons. This helps you to make sure that everything is running as expected.
– You can Run Diagnostics directly by clicking on the icon at the top right of the Dashboard.
– You can see how much bandwidth is being consumed both from the Internet (% Bandwidth Down) and to the Internet (% Bandwidth Up). One great side effect of this is that you can also tell if your cables are set up correctly. Typically, Bandwidth Down is much higher than Bandwidth Up. If you see the opposite, you should consider reversing your cables. Contact our Support Team if you have questions.
– You can quickly see what version you are running, which will help you to determine if you need to upgrade.  You can always see what is available in each software update on our blog by clicking on the Software Updates page.
– If you are using Pools to limit bandwidth, you can select which Pool’s Traffic Up and Traffic Down to view.

To keep the Dashboard relevant and clean, we limit what we show there. However, we are open to suggestions! If there are other key items that you think warrant Dashboard status, let us know. Just send an email with your ideas to:

Remember, new software updates (including all the features described above) are available for free to customers with valid NetEqualizer Software & Support (NSS).

If you are not current with NSS, contact us today!


toll-free U.S. (888-287-2492),

worldwide (303) 997-1300 x. 103

Best Of The Blog

A Brief History of Peer to Peer File Sharing and the Attempts to Block It

By Art Reisman – CTO – APconnections

The following history is based on my notes and observations as both a user of peer to peer, and as a network engineer tasked with cleaning it up.

Round One, Napster, Centralized Server, Circa 2002

Napster was a centralized service; unlike the peer-to-peer behemoths of today, there was never any question of where the copyrighted material was being stored and pirated from. Even though Napster did not condone pirated music and movies on its site, the courts decided that by allowing copyrighted material to exist on its servers, it was in violation of copyright law. Napster’s days of free love were soon over…

Photo Of The Month


Photo by Casey Sanders

A Slower Pace

When people picture the state of Texas, most think of vast ranches, cattle, and cactus. While much of the state does resemble this type of landscape, the northeastern part is actually heavily wooded and contains many lakes. Life in this rural area of the country moves a bit slower than our high-speed, high-tech lives in Metro Denver, Colorado. Sometimes it is cathartic to put all the work aside for a bit and just stare off into the woods.

The Evolution of P2P and Behavior-Based Blocking

By Art Reisman

CTO – APconnections

I’ll get to behavior-based blocking soon, but before I do, I encourage anybody dealing with P2P on their network to read about the evolution of P2P outlined below. Most of the methods historically used to thwart P2P are short-lived pesticides, and resistance is common. Behavior-based control is a natural, wholesome predator of P2P that has proved cost-effective over the past 10 years.

The evolution of P2P

P2P as it exists today is a classic example of Darwinian evolution.

In the beginning there was Napster. Napster was a centralized depository for files of all types. It also happened to be a convenient place to distribute unauthorized, copyrighted material. And so the music industry, unable to work out a licensing and distribution agreement with Napster, basically closed it down. So now you had all these consumers used to getting free music, and like a habituated wild animal, they were in no mood to pay $15.99 per CD at their local retailer.

P2P technology was already in existence when Napster was closed down; however, until that time it was intended to be a distribution system for legitimate content coming out of academia. By decentralizing content to many distribution points, the cost of distribution was much lower than hosting it on a private server. Decentralized content, good for legitimate distribution of academic material, quickly became a nightmare for the music industry. Instead of having one cockroach of illegal content to deal with, they now had millions of little P2P cockroaches all over the world to contend with.

The music industry had a multi-billion-dollar leak in its revenue stream, and it went after enforcing copyright policy by harassing ISPs and threatening consumers with jail time. For the ISP, the legal liability of having copyrighted material on the network was a hassle, but the bigger problem was the congestion. When content was distributed by a single point supplier, there were natural cost barriers to prevent bandwidth utilization from rising unchecked. For example, when you buy a music file from Amazon or iTunes, both ends of the transaction require some form of payment: the supplier pays for a large bandwidth pipe, and the consumer pays money for the file. With P2P, the distributors and the clients are all consumers with essentially unlimited data usage on their home accounts, and the content is free. As P2P file sharing rose, ISPs had no easy way of changing their pricing model to deal with the orgy of file sharing. Although invisible to the public, it was a cyber party that rivaled the 10 Cent Beer Night fiasco of the 1970s.

Resistant P2P pesticides

In order to thwart P2P usage, ISPs and businesses started spending hundreds of millions of dollars on technology that tracked specific P2P applications and blocked those streams. This technology is referred to as layer 7 blocking. Layer 7 blocking involves looking at the specific content traversing the Internet and identifying P2P applications by their specific footprint. Intuitively, this solution was a no-brainer* – spot P2P and block it. Most installations with layer 7 blocking showed some initial promise; however, as was the case with the previous cockroach infestation, P2P again evolved to meet the challenge and then some.
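As a toy illustration (not NetEqualizer’s or any vendor’s actual engine), layer 7 classification boils down to scanning payload bytes for known protocol footprints. The two signatures below are well-known real-world examples; everything else is simplified for the sketch:

```python
# Toy sketch of layer 7 classification: match payload bytes against
# known protocol footprints. Real shapers use large, frequently
# updated signature sets and far more sophisticated matching.
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",  # BitTorrent handshake prefix
    b"GNUTELLA CONNECT/": "gnutella",          # Gnutella client greeting
}

def classify_payload(payload: bytes) -> str:
    """Return a protocol label if a known P2P footprint is present."""
    head = payload[:128]  # footprints appear at the start of a flow
    for signature, name in SIGNATURES.items():
        if signature in head:
            return name
    return "unknown"

print(classify_payload(b"\x13BitTorrent protocol" + bytes(8)))  # bittorrent
print(classify_payload(b"\x16\x03\x01\x02\x00"))                # unknown
```

The second call hints at the weakness: an encrypted stream carries no recognizable footprint, so signature matching comes back empty-handed.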

How does newer evolved P2P thwart layer 7 shaping?

1) There are now encrypted P2P clients whose footprint is hidden, so all the investment in a layer 7 shaper can go up in smoke once encrypted P2P infects your network. It simply can’t be spotted.

2) P2P clients open and close connections much faster than the first generation of the early 2000s. To keep up with the flurry of connections over a short time, the layer 7 engine must have many times the processing power of a traditional router and must do its analysis quickly. The cost of layer 7 shaping is rising much faster than the cost of adding additional bandwidth to a circuit.

Also: legally, there are problems with eavesdropping on customer data without authorization.

How does behavior-based P2P blocking keep up?

1) It uses a progressive rate limit on suspected P2P users.

P2P has the footprint of creating many simultaneous connections to move data across the Internet. When behavior-based shaping is in effect, it detects these high-connection-count users and slowly implements a progressive rate limit on all their data. This does not completely cut them off per se, but it penalizes the consumer’s speeds progressively as they open more P2P connections. This may seem a bit nonspecific in its targeting, but when done correctly it rarely affects non-P2P users – and even when it does, opening a large number of simultaneous downloads is abusive behavior in its own right, and if the culprit is not a P2P application it is most likely a virus.

2) It limits the user to a fixed number of simultaneous connections.

Also: It does not violate any privacy policies.
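The two mechanisms above can be sketched in a few lines (a simplification – the thresholds, limits, and function names here are invented for illustration; NetEqualizer’s actual tuning is proprietary):

```python
# Illustrative behavior-based shaping: hosts with many simultaneous
# connections get a progressively tighter rate limit, plus a hard
# connection cap. All numbers here are made up for the example.
CONNECTION_CAP = 100  # mechanism 2: fixed ceiling on simultaneous connections

def allow_new_connection(active_connections: int) -> bool:
    """Refuse new connections once a host hits the cap."""
    return active_connections < CONNECTION_CAP

def rate_limit_kbps(active_connections: int, base_kbps: int = 5000) -> int:
    """Mechanism 1: progressive rate limit keyed to connection count.

    Below 40 connections the host runs at full speed; each additional
    20 connections halves its bandwidth, floored at 64 kbps.
    """
    if active_connections < 40:
        return base_kbps
    steps = (active_connections - 40) // 20 + 1
    return max(base_kbps >> steps, 64)

print(rate_limit_kbps(10))        # 5000 – normal user, untouched
print(rate_limit_kbps(45))        # 2500 – suspect, mild squeeze
print(rate_limit_kbps(90))        # 625  – heavy P2P footprint
print(allow_new_connection(150))  # False – over the connection cap
```

Note that nothing here inspects payloads: the penalty is driven purely by connection behavior, which is why encryption does not defeat it.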

That covers the basics of behavior-based P2P shaping. In practice, we have developed our techniques with a bit of intelligence, and we do not wish to give away all of our fine-tuning secrets. Suffice it to say, I have been implementing behavior-based shaping for 10 years and have empirically seen its effectiveness over time. The cost remains low with respect to licensing (it is a very stable solution), and the results remain consistent.

* Although in some cases there was very little information about how effectively the solution was working, companies and ISPs shelled out license fees year after year.

P2P Protocol Blocking Now Offered with NetGladiator Intrusion Prevention

A few months ago we introduced our NetGladiator Intrusion Prevention System (IPS) device. To date, it has thwarted tens of thousands of robotic cyber attacks and counting. Success breeds success, and our users wanted more.

When our savvy customers realized the power, speed, and low price point of our underlying layer 7 engine, we started getting requests for additional features, such as: “Can you also block peer-to-peer and other protocols that cannot be stopped by our standard web filters and firewalls?” It was natural to extend our IPS device to address this space; hence, today we are announcing the next-generation NetGladiator. We now offer a module that will allow you to block and monitor the world’s top 10 P2P protocols (which account for 99 percent of all P2P traffic). We also back our technology with our unique promise to implement a custom protocol blocking rule with the purchase of any system at no extra charge. For example, if you have a specific protocol you need to monitor and just can’t uncover it with your WebSense or firewall filter, we will custom deliver a NetGladiator system that can track and/or block your unique protocol, in addition to our standard P2P blocking options.

Below is a sample live Excel report integrated with the NetGladiator in monitor mode. In the screen snapshot below, you will notice that we have uncovered a batch of uTorrent and FrostWire P2P traffic.

Please feel free to call 303-997-1300 or email our NetGladiator sales engineering team with any additional questions.

Related Articles

NetGladiator: A Layer 7 Shaper in Sheep’s Clothing

How Effective is P2P Blocking?

This past week, a discussion about peer-to-peer (P2P) blocking tools came up in a user group that I follow. In the course of the discussion, different IT administrators chimed in, citing their favorite tools for blocking P2P traffic.

At some point in the discussion, somebody posed the question, “How do you know your peer-to-peer tool is being effective?” For the next several hours the room went eerily silent.

The reason this question was so intriguing to me is that for years I collaborated with various developers on creating an open-source P2P blocking tool using layer 7 technology (the Application Layer of the OSI Model). During this time period, we released several iterations of our technology as freeware. Our testing and trials showed some successes, but we also learned how fragile the technology was, and we were reluctant to push it out commercially. I had always wondered whether other privately distributed layer 7 blocking tools had found some magic key to perfection.

Sometimes, written words can be taken as fact even though the same spoken words might be dismissed as gossip; and so it was with our published open source technology. We started getting indications that it was getting picked up and integrated in other solutions and touted as gospel.

Our experience with P2P blocking:

Our free P2P blocking tool worked most of the time – maybe eighty percent of it. Eighty percent accuracy is fine for an experimental open-source tool, but intuitively, a blocking tool is expected to be 99.9 percent effective. Even though most customers would likely never conclusively measure our accuracy, eighty percent was too low to ethically sell this technology without disclosures.

The online discussion ended fairly quickly when the question of accuracy was brought up, and I think it is safe to assume the silence is an indication that nobody else was achieving better than eighty percent.

How do you validate the effectiveness of a P2P tool?

1) Brute force testing:

I am not aware of too many IT administrators who have the time to load up six or seven different P2P clients on their laptops and download bootlegged Madonna videos all day.

In testing P2P clients, we infected several computers with just about every virus in circulation. Over time, you can get a rough idea of how deep you must go to expose weaknesses in your tool set. To be thorough, you can’t stop at the first P2P client tool. In the real world, users on your network will likely search for multiple P2P clients, especially if the first one fails. Once they find a kink in the armor, they will yap to others, exposing your Achilles heel.

2) Reduction of RIAA requests:

Most small-to-medium ISPs don’t really think about P2P unless they get RIAA requests or their network is saturated.

RIAA requests seem to be a big motivator in purchasing technology to block P2P. If you are getting RIAA requests (these are letters from lawyers threatening to sue you for copyright infringement), you can install your P2P blocking tool, and if in the next week your notifications of copyright violations are way down, you can assume that you have put a good dent in your P2P downloading issue.

3) Reduced congestion:

Plug your P2P tool in and see if your network utilization drops.

4) Lower connection rates through your router:

One of the signatures of P2P is that clients will open up hundreds of connections per minute to P2P servers in order to download content. There are ways to measure and quantify these connection rates empirically.
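One way to quantify those connection rates is to count new connections per host over a sliding window and flag outliers. The sketch below assumes a hypothetical in-memory log of `(timestamp, source_ip)` pairs; in practice the data would come from your router’s flow records (e.g., NetFlow), and the threshold is an illustrative guess:

```python
from collections import Counter

# Sketch: flag hosts whose rate of new connections exceeds a threshold.
# The log format and the threshold of 100/minute are hypothetical.
def flag_p2p_suspects(flow_log, window_secs=60, threshold=100):
    """flow_log: list of (timestamp, source_ip) pairs, one per new connection."""
    start = min(t for t, _ in flow_log)
    counts = Counter(ip for t, ip in flow_log if t - start < window_secs)
    return {ip for ip, n in counts.items() if n >= threshold}

# Hypothetical minute of traffic: one host opens 150 connections,
# another opens 10 (normal browsing).
log = [(i * 0.4, "10.0.0.7") for i in range(150)]
log += [(i * 6.0, "10.0.0.12") for i in range(10)]
print(flag_p2p_suspects(log))  # {'10.0.0.7'}
```

Because this looks only at connection behavior, it works identically on encrypted and unencrypted P2P traffic.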

Other observations:

Many times we’ll hear from an ISP/operator claiming they have P2P users running amok on their network; however, analysis often shows that most of their traffic is video – Netflix, YouTube, Hulu, etc.

Total P2P traffic has really dropped off quite a bit in the last three or four years. We attribute this decline to:

1) Legal iTunes. 99-cent songs have eliminated much of the need for pirated music.

2) RIAA enforcement and education of copyright laws.

3) The invention of the iPad and iPhone. These devices control the applications that run on them (and they are not going to distribute P2P clients as readily).

One method to handle P2P problems is to control all the computers in your environment, scan them before granting network access, and then block access to P2P sites (the sites where the client utilities are loaded from).

Note: once a P2P client is loaded onto a computer, you cannot block any single remote site, as the essence of P2P is that the content is not centralized.


Results of different P2P blocking techniques are often temporary, especially when you have an aggressive user base with motivation to download free content.

Comcast Suit: Was Blocking P2P Worth the Final Cost?

By Art Reisman
CTO of APconnections
Makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer


Comcast recently settled a class action suit in the state of Pennsylvania regarding its practice of selectively blocking P2P. So far, the first case was settled for 16 million dollars, with more cases on the docket yet to come. To recap: Comcast and other large ISPs invested in technology to thwart P2P, denied involvement when first accused, got spanked by the FCC, and now Comcast is looking to settle various class action suits.

When Comcast’s practices were established, P2P usage was skyrocketing with no end in sight, and blocking some of it was required in order to preserve reasonable speeds for all users. Given that there was no specific law or ruling on the books, it seemed like mucking with P2P to alleviate gridlock was a rational business decision. This decision made even more sense considering that DSL providers were stealing disgruntled customers. With this said, Comcast wasn’t alone in the practice – all of the larger providers were throttling P2P to some extent to ensure good response times for all of their customers.

Yet, with the lawsuits mounting, it appears on face value that things backfired a bit for Comcast. Or did they?

We can work out some very rough estimates of the final cost trade-off. Here goes:

I am going to guess that before this plays out completely, settlements will run close to $50 million or more. To put that in perspective, Comcast showed a 2008 profit of close to $3 billion, so $50 million is hardly a dent to their stockholders. But in order to play this out, we must ask what the ramifications would have been of not blocking P2P back when all of this began and P2P was a more serious bandwidth threat (today, while P2P has declined, YouTube and online video are the primary bandwidth hogs).

We’ll start with the customer. The cost of acquiring a new customer is usually calculated at around 6 months of service, or approximately $300. So, to make things simple, we’ll assume the net cost of losing a customer is roughly $300. In addition, there are also support costs related to congested networks that can easily run $300 per customer incident.

The other more subtle cost of P2P is that the methods used to deter P2P traffic were designed to keep traffic on the Comcast network. You see, ISPs pay for exchanging data when they hand off to other networks, and by limiting the amount of data exchanged, they can save money. I did some cursory research on the costs involved with exchanging data and did not come up with anything concrete, so I’ll assume a P2P customer can cost you $5 per month.

So, let’s put the numbers together to get an idea of how much potential financial damage P2P was causing back in 2007 (again, I must qualify that these are estimates and not fact; comments and corrections are welcome).

  • Comcast had approximately 15 million broadband customers in 2008.
  • If 1 in 100 were heavy P2P users, then at $5 each the exchange cost would be $750,000 per month.
  • Net lost customers to a competitor might be 1 in 500 a month. That would run $9 million a month.
  • Support calls due to preventable congestion might run another 1 out of 500 customers, or $9 million a month.

So, very conservatively for 2007 and 2008, incremental costs related to unmitigated P2P could have easily run a total of $450 million right off the bottom line.
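The arithmetic, written out (every input below is one of the rough guesses stated above, not an actual Comcast figure):

```python
# Back-of-the-envelope recap of the estimates above. Every input is a
# rough guess from the text, not an actual Comcast figure.
customers = 15_000_000

exchange = customers // 100 * 5    # 1 in 100 heavy P2P users at $5/month
churn    = customers // 500 * 300  # 1 in 500 customers lost at $300 each
support  = customers // 500 * 300  # 1 in 500 support incidents at $300

monthly = exchange + churn + support
print(f"${monthly:,} per month")          # $18,750,000 per month
print(f"${monthly * 24:,} over 2007-08")  # $450,000,000 over two years
```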

Therefore, while these calculations are approximations, in retrospect it was likely financially well worth the risk for Comcast to mitigate the effects of unchecked P2P. Of course, the public relations costs are much harder to quantify.
