Bandwidth Shaping Shake-Up: Is Your Packet Shaper Obsolete?


If you went to sleep in 2005 and woke up 10 years later, you would likely be surprised by some dramatic changes in technology.

  • Smart cars that drive themselves are almost a reality
  • The desktop PC is no longer a consumer product
  • Wind farms now line the highways of rural America
  • Layer 7 shaping technology is now clinging to life, crashing the financials of several companies that bet the house on it.

What happened to layer 7 and Packet Shaping?

In the early 2000’s, all the rage in traffic classification was the ability to put different types of bandwidth traffic into labeled buckets and assign a priority to each. Akin to rating your food choices on a tapas menu, network administrators enjoyed an extensive list of traffic types: YouTube, Citrix, news feeds; the list was limited only by the price and quality of the bandwidth shaper. The more expensive the traffic shaper, the more choices you had.

Starting in 2005 and continuing to this day, several forces have worked against the layer 7 paradigm.

  • The price of bulk bandwidth went into a free fall, dropping much faster than the relatively fixed cost of a bandwidth shaper. The business case for buying a bandwidth shaper to conserve bandwidth became much tighter. Some companies that were riding high saw their stock prices collapse.
  • Internet traffic became invisible and impossible to identify with the advent of encryption. A Layer 7 traffic classifier cannot see inside HTTPS or a VPN tunnel, so as the share of encrypted traffic increases it essentially becomes a big, expensive albatross with little value.
  • The FCC ruling toward Net Neutrality further put a damper on a portion of the Layer 7 market. For years, ISPs had been using Layer 7 technology to give preferential treatment to different types of traffic.
  • Cloud-based services are using less complex architectures. Companies can consolidate on one simplified central bandwidth shaper, whereas before they might have had several across their various WAN links and network segments.

So where does this leave the bandwidth shaping market?

There is still some demand for layer 7 type shapers, particularly in countries like China, where the authorities attempt to control everything. In Europe and the US, however, the trend is toward more basic controls that do not violate the FCC rule, cost less, and use some form of intelligent fairness rules, such as:

  • Quotas, as in your cell phone data plan.
  • Fairness-based heuristics (Equalizing), which are gaining momentum: a lower price point, and congestion prevention without violating the FCC ruling.
  • Basic rate limits, as in your wired ISP’s 20-megabit plan, often implemented on a basic router rather than a specialized shaping device (see the sketch just after this list).
  • No shaping at all, where pipes are so large there is no need to ration bandwidth.
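
To make the rate-limit idea concrete, here is a minimal token-bucket sketch in Python. It illustrates the general mechanism behind a basic rate cap, not any vendor’s implementation; the class name and numbers are made up.

# Hypothetical token-bucket limiter: the classic mechanism behind a basic
# "20 megabit plan" cap. No packet inspection; only the arrival rate matters.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # refill rate, bits per second
        self.capacity = burst_bits    # largest burst ever allowed
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits: int) -> bool:
        now = time.monotonic()
        # Refill for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True               # under the cap: forward the packet
        return False                  # over the cap: queue or drop it

# A 20 Mbit/s plan with roughly 40 full-size packets of burst headroom.
limiter = TokenBucket(rate_bps=20_000_000, burst_bits=1500 * 8 * 40)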

Will Shaping be around in 10 years?

Yes, consumers and businesses will always find ways to use all their bandwidth and more.

Will price points for bandwidth continue to drop?

I am going to go against the grain here and say bandwidth prices will flatten out in the near future. Prices over the last decade slid for several reasons that are no longer in play.

The biggest driver of price drops was the wide acceptance of wave division multiplexing (WDM) on carrier lines from 2005 to the present. There was already a good bit of fiber in the ground, but the WDM innovation caused a huge jump in capacity at very little additional cost to providers.

The other factor was a major worldwide recession, during which business demand was slack.

Lastly, there are no new large carriers coming online. Competition and price wars will ease as suppliers try to increase profits.

NetEqualizer is Net Neutral, Packet Shaping is Not


The NetEqualizer has long been considered a net neutral appliance. Given the new net neutrality FCC regulations, upheld yesterday, I thought it would be a good time to reiterate how the NetEqualizer shaping techniques are compliant with the FCC ruling.

Here is the basic FCC rule that applies to bandwidth shaping and preferential treatment:

The FCC created a separate rule that prohibits broadband providers from slowing down specific applications or services, a practice known as throttling. More to the point, the FCC said providers can’t single out Internet traffic based on who sends it, where it’s going, what the content happens to be or whether that content competes with the provider’s business.

I’ll break this down as it relates to the NetEqualizer.

1. The rule “prohibits broadband providers from slowing down specific applications or services”.

The NetEqualizer makes shaping decisions solely based on instantaneous usage, and only when a link is congested. It does not single out a particular application or service for throttling. The NetEqualizer does not classify traffic; instead, it looks at how the traffic behaves in order to make a shaping decision. The key to remember here is that the NetEqualizer only shapes when a link is congested; without it in place, the link would drop packets, which would cause a serious outage.
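
To make “shaping by behavior” concrete, here is a minimal sketch of a usage-based (equalizing) decision loop. The threshold, penalty count, and names are hypothetical illustrations, not the NetEqualizer’s actual algorithm.

# Hypothetical equalizing sketch: no payload inspection, no application
# classification; decisions use only byte rates and trunk utilization.
TRUNK_CAPACITY_BPS = 100_000_000      # assumed trunk size
CONGESTION_RATIO = 0.85               # only act above 85% utilization

def flows_to_penalize(connections, trunk_usage_bps):
    """connections: list of (conn_id, bits_per_sec), sampled each second."""
    if trunk_usage_bps < CONGESTION_RATIO * TRUNK_CAPACITY_BPS:
        return []                     # link not congested: touch nothing
    # Congested: briefly delay only the heaviest flows, whatever they carry.
    heaviest = sorted(connections, key=lambda c: c[1], reverse=True)
    return [conn_id for conn_id, _ in heaviest[:5]]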

2.  The FCC said “providers can’t single out Internet traffic based on who sends it, where it’s going”.

The NetEqualizer is completely agnostic as to who is sending the traffic and where it is going. In fact, any rate limiting that we provide is independent of the traffic on the network, and is used solely to partition a shared resource amongst a set of internal users, whether they be buildings, groups, or access points.

I hope we have finally seen an end to application-based shaping (Packet Shaping) on the Internet.  I see this ruling being upheld as the dawning of a new era.

Net Neutrality must be preserved


As much as I hate to admit it, it seems a few of our Republican congressional leaders are “all in” on allowing large content providers to have privileged priority access on the Internet. Their goal for the 2015 Congress is to thwart the President and his mandate to the FCC on net neutrality. Can you imagine going to visit Yosemite National Park and being told that the corporations that sponsor the park have taken all the campsites? Or a special lane on the Interstate dedicated exclusively to Walmart trucks? Like our highway system and our national parks, the Internet is a resource shared by all Americans.

I think one of the criteria for being a politician is a certification that you flunked any class in college that involved critical or objective thinking. Take, for example, this statement from Rep. Marsha Blackburn:

“Federal control of the internet will restrict our online freedom and leave Americans facing the same horrors that they have experienced with HealthCare.gov,”

She might as well compare the Internet to the Macy’s parade; it would make about as much sense. The Internet is a common shared utility, similar to electricity and roads, and besides that, it was the government that invented and funded most of the original Internet. The healthcare system is complex and flawed because it is a socialistic redistribution of wealth, not even remotely similar to the Internet. The Internet needs very simple regulation to prevent abuse, which is about the only thing the government is designed to do effectively. And then there is the stifle-innovation argument…

Rep. Bob Goodlatte, chair of the House Judiciary Committee, said he may seek legislation that would aim to undermine the “FCC’s net neutrality authority by shifting it to antitrust enforcers,” Politico wrote.

Calling any such net neutrality rules a drag on innovation and competition

Let me translate for him, because he does not understand, or want to understand, the motivations of the lobbyists when they talk about stifling innovation. My words: “Regulation, in the form of FCC-imposed net neutrality, will stifle the ability of the larger access providers and content providers to create a walled-off garden, thus stifling their pending monopoly on the Internet.” There are many things I wish the government would keep its hands out of, but the Internet is not one of them. I must side with the FCC and the President on this one.

Update Jan 31st

Another win for Net Neutrality: the Canadian government has outlawed the practice of zero rating, which is simply a back door for a provider to give favored content a free ride over rivals.

The Internet, Free to the Highest Bidder.


It looks like the FCC has caved:

“The Federal Communications Commission said on Wednesday that it would propose new rules that allow companies like Disney, Google or Netflix to pay Internet service providers.”

WSJ article April 2014

Compare today’s statements to those made back in January and February, when the FCC was posturing like a fluffed-up tom turkey for Net Neutrality.

“I am committed to maintaining our networks as engines for economic growth, test beds for innovative services and products, and channels for all forms of speech protected by the First Amendment”

– Tom Wheeler FCC chairman Jan 2014

“The FCC could use that broad authority to punish Internet providers that engage in flagrant net-neutrality violations, Wheeler suggested. The agency can bring actions with the goal of promoting broadband deployment, protecting consumers, or ensuring competition, for example.”

-Tom Wheeler Jan 2014

As I alluded to back then, I did not give their white knight rhetoric much credence.

“The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.”

– Art Reisman  Feb 2014

It seems to me the FCC is now a puppet regulatory agency. How can you start by talking about regulating abuses that threaten free access to the Internet, and then, without blinking an eye, offer up a statement that rich guys can now pay for privileged access to the Internet?

I don’t know whether to cry or be cynical at this point. Perhaps I should just go down to my nearest public library and pay somebody to stock the shelves with promotional NetEqualizer material?

“The court said that because the Internet is not considered a utility under federal law, it was not subject to that sort of regulation.”

Quotes referenced from the New York Times article “FCC in Shift Backs Fast Lanes for Web Traffic”.

Federal Judge Orders Internet Name be Changed to CDSFBB (Content Delivery Service for Big Business)


By Art Reisman – CTO – APconnections

Okay, so I fabricated that headline; it’s not true, but I hope it goes viral and sends a message that our public Internet is being threatened by business interests and activist judges.

I’ll concede our government does serve us well in some cases; it has produced some things that could not be done without its oversight, for example:

1) The highway system

2) The FAA does a pretty good job keeping us safe

3) The Internet. At least up until some derelict court ruling that will allow ISPs to give preferential treatment to content providers for a payment (bribe), or whatever you want to call it.

The ramifications of this ruling may bring an end to the Internet as we know it. Perhaps the ball was put in motion when the Internet was privatized back in 1994. In any case, if this ruling stands, you can forget about the Internet as the great equalizer: a place where a small business can have a big web site, where a new idea on a small budget can blossom into a Fortune 500 company, where the little guy can compete on equal footing without an entry fee to get noticed. No, the tide won’t turn right away, but at some point, through a series of rationalizations, content companies and ISPs with deep pockets will kill anything that moves.

This ruling establishes a legal precedent, and legal precedents with suspect DNA are like cancers: they mutate into ugly variations and replicate rapidly, and there is no drug that can stop them. Obviously, the forces at work here are not the court systems themselves, but businesses with motives. The poor carriers just can’t seem to find any other solution to their congestion than charging for access? Combine this with oblivious consumers who just want content on their devices, and you have a dangerous mixture. Ironically, these consumers already subsidize ISPs with a huge chunk of their disposable income. The hoodwink is on. Just as the public airwaves are controlled by a few large media conglomerates, so will go the Internet.

The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.

Internet Regulation: What is the World Coming To?


A friend of mine just forwarded an article titled “How Net Neutrality Rules Could Undermine the Open Internet”

Basically Net Neutrality advocates are now worried that bringing the FCC in to help enforce Neutrality will set a legal precedent allowing wide-reaching control over other aspects of the Internet. For example, some form of content control extending into gray areas.

Let’s look at the history of the FCC for precedents.

The FCC came into existence to manage and enforce the wireless spectrum, essentially so you did not get 1,000 radio/TV stations blasting signals over each other in every city. A very necessary and valid government service; without it, there would be utter anarchy in the airwaves. Imagine roads without traffic signals, or airports without control towers.

At some point in time, their control over frequencies extended into content and accessibility mandates. How did this come about? Simply put, it is the normal progression of government asserting control over a resource. It is what it is, neither good nor bad, just a reflection of a society that looks to government to make things “right”. And like an escaped non-native species in the Hawaiian Islands, it tends to take as much real estate as the ecosystem will allow.

What I do know as a certainty is that the FCC, once in the door at regulating anything on the Internet, will continue to grow in order to make things “right” and “fair” during our browsing experience.

At best we can hope the inevitable progression of FCC control gets thwarted at every turn, allowing us a few more good years of the good old Internet as we know it. I’ll take the current Internet’s flaws for a few more years while I can.

For more information on non-native species invading Hawaii’s ecosystem, check out this blog, from the Kohala Watershed Partnership.

For an overview of Net Neutrality – check out this Net Neutrality for Dummies Article explaining the act’s possible effects on the everyday internet user.

For a discussion on the possible lawlessness of the FCC’s control over the internet, read this blog entitled “Is the FCC Lawless?”.

Does your ISP restrict you from the public Internet?


By Art Reisman

The term “walled-off garden” refers to the practice of a service provider locking you into its local content. A classic example was the early years of AOL: originally, when using their dial-up service, AOL provided all the content you could want, and access to the actual Internet was granted by AOL only after other dial-up Internet providers started to compete with their closed offerings. Today, using much more subtle techniques, Internet providers still try to keep you on their networks. The reason is simple: it costs them money to transfer you across a boundary to another network, and thus it is in their economic interest to keep you within their network.

So how do Internet service providers keep you on their network?

1) Sometimes with monetary incentives. For example, with large commercial accounts they just tell you it is going to cost more. My experience with this practice is first hand: I have heard testimonials from many of our customers running ISPs, mostly outside the US, who are sold a chunk of bulk bandwidth with conditions. The terms are often something on the order of:

  • You have a 1-gigabit connection.
  • If you access data outside the country, you can only use 300 megabits.
  • If you go over 300 megabits outside the country, there will be hefty additional fees.

Obviously, there is going to be a trickle-down effect, where the regional ISP will try to discourage usage outside of the local country under such terms.

2) Then there are more passive techniques, such as blatantly looking at your private traffic and simply not letting it off their network. This technique was used in the US, implemented by large service providers back in the mid-2000’s. Basically, they targeted peer-to-peer requests and made sure you did not leave their network. Essentially, you would only find content from other users within your provider’s network, even though it would appear as though you were searching the entire Internet. Special equipment was used to intercept your requests and only allow you to probe other users within your provider’s network, thus saving them money by avoiding Internet exchange fees.

3) Another way your provider will try to keep you on their network is to offer locally mirrored content. Basically, they keep a copy of common files at a central location. In most cases this actually causes the user no harm, as they still get the same content. But it can cause problems if not done correctly: the provider risks serving old data or obsolete news stories that have since been updated.

4) Lastly, some governments just outright block content, but this is mostly for political reasons.

Editor’s Note: There are also political reasons to control where you go on the Internet, as practiced in China and Iran.

Related Article: AOL folds original content operations

Related Article: Why Caching alone won’t speed up your Internet

How to Block Frostwire, utorrent and Other P2P Protocols


By Art Reisman, CTO, http://www.netequalizer.com


Disclaimer: It is considered controversial and by some definitions illegal for a US-based ISP to use deep packet inspection on the public Internet.

At APconnections, we subscribe to the philosophy that there is more to be gained by explaining your technology secrets than by obfuscating them with marketing babble. Read on to learn how I hunt down aggressive P2P traffic.

In order to create a successful tool for blocking a P2P application, you must first figure out how to identify P2P traffic. I do this by looking at the output data dump from a P2P session.

To see what is inside the data packets I use a custom sniffer that we developed. Then to create a traffic load, I use a basic Windows computer loaded up with the latest utorrent client.

Editor’s Note: The last time I used a P2P engine on a Windows computer, I ended up reloading my Windows OS once a week. Downloading random P2P files is sure to bring in the latest viruses, and unimaginable filth will populate your computer.

The custom sniffer is built into our NetGladiator device, and it does several things:

1) It detects and dumps the data inside packets as they cross the wire to a file that I can look at later.

2) It maps non-printable ASCII characters to printable ASCII characters. In this way, when I dump the contents of an IP packet to a file, I don’t get all kinds of special characters embedded in the file. Since P2P data is encoded random music and video files, you can’t view the data without this filter. If you try, you’ll get all kinds of garbled scrolling on the screen when you look at the raw data with a text editor.
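
The mapping itself is trivial; here is a minimal Python sketch of the idea (the function name is illustrative, not our sniffer’s actual code):

# Map every non-printable byte to "x" so a payload dump is safe to read
# in a text editor. Printable ASCII runs from 32 (space) to 126 (~).
def printable_dump(payload: bytes) -> str:
    return "".join(chr(b) if 32 <= b <= 126 else "x" for b in payload)

# Example: printable_dump(b"ping\x00\x01") returns "pingxx"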

So what does the raw data output dump of a P2P client look like?

Here is a snippet of some of the utorrent raw data I was looking at just this morning. The sniffer has converted the non-printable characters to “x”.
You can clearly see some repeating data patterns forming below. That is the key to identifying anything with layer 7. Sometimes it is obvious, while sometimes you really have to work to find a pattern.

Packet 1 exx_0ixx`12fb*!s[`|#l0fwxkf)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:ka 31:v4:utk21:y1:qe
Packet 2 exx_0jxx`1kmb*!su,fsl0’_xk<)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:xv4^1:v4:utk21:y1:qe
Packet 3 exx_0kxx`1exb*!sz{)8l0|!xkvid1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:09hd1:v4:utk21:y1:qe
Packet 4 exx_0lxx`19-b*!sq%^:l0tpxk-ld1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:=x{j1:v4:utk21:y1:qe

The next step is to develop a layer 7 regular expression to identify the patterns in the data. In the output you’ll notice the string “exx” appears in each line, and that is what you look for. A repeating pattern is a good place to start.

The regular expression I decided to use looks something like:

exx.0.xx.*qe

This translates to: match any string starting with “exx”, followed by any character (“.”), followed by “0”, followed by any character, followed by “xx”, followed by any sequence of characters (“.*”) ending with “qe”.
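
You can sanity-check a candidate expression against the captured payloads before going further. A quick illustration in Python (the sample string is shortened from Packet 1 above):

# Quick check that the candidate expression tags a captured payload.
import re

pattern = re.compile(r"exx.0.xx.*qe")
sample = "exx_0ixx`12fb*!s...1:q4:ping1:t4:ka31:v4:utk21:y1:qe"
print(bool(pattern.search(sample)))   # True: this packet would be tagged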

Note: When I tested this regular expression, it turned out to catch only a fraction of the utorrent traffic, but it is a start. What you don’t want to do is make your regular expression so simple that you get false positives. A layer 7 product that creates a high degree of false positives is pretty useless.

The next thing I do with my new regular expression is test it for accuracy of target detection and for false positives.

Accuracy of detection is tested by clearing your test network of everything except the P2P target you are trying to catch, then running your layer 7 device with your new regular expression to see how well it does.

Below is an example from my NetGladiator in a new sniffer mode. In this mode I have layer 7 detection on, and I can analyze the detection accuracy. In the output below, the sniffer puts a tag on every connection that matches my utorrent regular expression. In this case, my tag is indicated by the word “dad” at the end of the row. Notice how every connection is tagged; this means I am getting a 100 percent hit rate for utorrent. Obviously I doctored the output for this post :)

Index SRCP DSTP Wavg Avg IP1 IP2 Ptcl Port Pool TOS
0 0 0 17 53 255.255.255.255 95.85.150.34 — 2 99 dad
1 0 0 16 48 255.255.255.255 95.82.250.60 — 2 99 dad
2 0 0 16 48 255.255.255.255 95.147.1.179 — 2 99 dad
3 0 0 18 52 255.255.255.255 95.252.60.94 — 2 99 dad
4 0 0 12 24 255.255.255.255 201.250.236.194 — 2 99 dad
5 0 0 18 52 255.255.255.255 2.3.200.165 — 2 99 dad
6 0 0 10 0 255.255.255.255 99.251.180.164 — 2 99 dad
7 0 0 88 732 255.255.255.255 95.146.136.13 — 2 99 dad
8 0 0 12 0 255.255.255.255 189.202.6.133 — 2 99 dad
9 0 0 12 24 255.255.255.255 79.180.76.172 — 2 99 dad
10 0 0 16 48 255.255.255.255 95.96.179.38 — 2 99 dad
11 0 0 11 16 255.255.255.255 189.111.5.238 — 2 99 dad
12 0 0 17 52 255.255.255.255 201.160.220.251 — 2 99 dad
13 0 0 27 54 255.255.255.255 95.73.104.105 — 2 99 dad
14 0 0 10 0 255.255.255.255 95.83.176.3 — 2 99 dad
15 0 0 14 28 255.255.255.255 123.193.132.219 — 2 99 dad
16 0 0 14 32 255.255.255.255 188.191.192.157 — 2 99 dad
17 0 0 10 0 255.255.255.255 95.83.132.169 — 2 99 dad
18 0 0 24 33 255.255.255.255 99.244.128.223 — 2 99 dad
19 0 0 17 53 255.255.255.255 97.90.124.181 — 2 99 dad

A bit more on reading this sniffer output…

Notice columns 4 and 5, which indicate data transfer rates in bytes per second. These columns contain numbers that are less than 100 bytes per second: very small data transfers. This is mostly because as soon as a connection is identified as utorrent, the NetGladiator drops all future packets on the connection, and it never really gets going. One thing I did notice is that the modern utorrent protocol hops around very quickly from connection to connection; it attempts not to show its cards. Why do I mention this? Because in layer 7 shaping of P2P, speed of detection is everything. If you wait a few milliseconds too long to analyze and detect a torrent, it is already too late, because the torrent has transferred enough data to keep itself going. It’s just a conjecture, but I suspect this is one of the main reasons utorrent is so popular: by hopping from source to source, it is very hard for an ISP to block without the latest equipment. I recently wrote a companion article regarding the speed of the technology behind a good layer 7 device.
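
To see why timing matters, here is a hypothetical sketch of an early-classification loop. The packet budget and names are made up for illustration; this is not the NetGladiator’s internal code.

# Classify a flow within its first few packets, then enforce the verdict for
# the connection's lifetime. Waiting longer lets the torrent "get going".
import re

TORRENT_RE = re.compile(rb"exx.0.xx.*qe")
MAX_INSPECTED = 4                  # assumed per-connection inspection budget
packets_seen = {}                  # conn_key -> packets inspected so far
blocked = set()                    # connections already identified

def handle(conn_key, payload: bytes) -> str:
    if conn_key in blocked:
        return "drop"              # verdict already made: keep dropping
    n = packets_seen.get(conn_key, 0)
    if n < MAX_INSPECTED and TORRENT_RE.search(payload):
        blocked.add(conn_key)
        return "drop"              # caught early, before the flow ramps up
    packets_seen[conn_key] = n + 1
    return "pass"                  # unclassified or past budget: forward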

The last part of testing a regular expression involves looking for false positives. For this we use a commercial-grade simulator. Our simulator uses a series of pre-programmed web crawlers that visit tens of thousands of web pages an hour at our test facility. We then take our layer 7 device, with our new regular expression, and make sure that none of the web crawlers accidentally get blocked while reading thousands of web pages. If this test passes, we are good to go with our new regular expression.

Editor’s Note: Our primary bandwidth shaping product manages P2P without using deep packet inspection.
The layer 7 techniques described above can be run on our NetGladiator Intrusion Prevention System. We also advise that public ISPs check their country’s regulations before deploying a deep packet inspection device on a public network.

Commentary: Is IPv6 Heading Toward a Walled-Off Garden?


In a recent post we highlighted some of the media coverage regarding the imminent demise of the IPv4 address space. Subsequently, during a moment of introspection, I realized there is another angle to the story. I first assumed that some of the lobbying for IPv6 was a hardware-vendor-driven phenomenon, but there seems to be another aspect to the momentum of IPv6. In talking to customers over the past year, I learned they were already buying routers that were IPv6 ready, but there was no real rush. If you look at traditional router sales numbers over the past couple of years, you won’t find anything earth-shattering; there is no hockey-stick curve of replacing older equipment. Most IPv6 hardware sales were done in conjunction with normal upgrade timelines.

The hype had to have another motive, and then it hit me. Could it be that the push to IPv6 is a back-door opportunity for a walled-off garden? A collaboration between large ISPs, a few large content providers, and mobile device suppliers?

Although the initial World IPv6 Day offered no special content, I predict some future IPv6 day will have the incentive of extra content. The extra content will be a treat for those consumers with IPv6-ready devices.

The wheels for a closed-off Internet are already in place. Take, for example, all the specialized apps for the iPhone and iPad. Why can’t vendors just write generic apps like they do for a regular browser? Proprietary offerings often get stumbled into. There are very valid reasons for specialized iPhone apps, and no evil intent on the part of Apple, but it is inevitable that as Apple’s share of mobile devices rises, vendors will cease to write generic apps for general web browsers.

I don’t contend that anybody will deliberately conspire to create an exclusive IPv6 club with special content, but I will go so far as to say that in the fight for market share, product managers know a good thing when they see it. If you can differentiate content and access on IPv6, you have an end run around the competition.

To envision how a walled garden might play out on IPv6, you must first understand that it is going to be very hard to switch the world over to IPv6, and it will take a long time; there seems to be agreement on that. But at the same time, a small number of companies control a majority of the access to the Internet, and another small set of companies control a huge swath of the content.

Much in the same way Apple is obsoleting the generic web browser with its apps, a small set of vendors and providers could obsolete IPv4 with new content and new access.

What Does Net Privacy Have to Do with Bandwidth Shaping?


I definitely understand the need for privacy. Obviously, if I were doing something nefarious, I wouldn’t want it known, but that’s not my reason. Day in and day out, measures are taken to maintain my privacy in more ways than I probably even realize. You’re likely the same way.

For example, to avoid unwanted telephone and mail solicitations, you don’t advertise your phone numbers or give out your address. When you buy something with your credit card, you usually don’t think twice about your card number being blocked out on the receipt. If you go to the pharmacist, you take it for granted that the next person in line has to be a certain distance behind so they can’t hear what prescription you’re picking up. The list goes on and on. For me personally, I’m sure there are dozens, if not hundreds, of good examples why I appreciate privacy in my life. This is true in my daily routines as well as in my experiences online.

The topic of Internet privacy has been raging for years. However, the Internet still remains a hotbed for criminal activity and misuse of personal information. Email addresses are valued commodities sold to spammers. Search companies have dedicated countless bytes of storage to every search term entered and the IP address it came from. Websites place tracking cookies on your system so they can learn more about your Web travels, habits, likes, dislikes, etc. Forensically, you can tell a lot about a person from their online activities. To be honest, it’s a little creepy.

Maybe you think this is much ado about nothing. Why should you care? Well, you may recall that less than four years ago, AOL accidentally released around 20 million search keywords from over 650,000 users. Now those 650,000 users and their searches will exist forever in cyberspace. Could it happen again? Of course; why wouldn’t it, when all it takes is a packed laptop walking out the door?

Internet privacy is an important topic, and as a result, technology is becoming more and more available to help people protect information they want to keep confidential. And that’s a good thing. But what does this have to do with bandwidth management? In short, a lot (no pun intended)!

Many bandwidth management products (from companies like Blue Coat, Allot, and Exinda, for example) intentionally work at the application level. These products are commonly referred to as Layer 7 or Deep Packet Inspection tools. Their mission is to allocate bandwidth specifically by what you’re doing on the Internet. They want to determine how much bandwidth you’re allowed for YouTube, Netflix, Internet games, Facebook, eBay, Amazon, etc. They need to know what you’re doing so they can do their job.

In terms of this article, whether you’re philosophically adamant about net privacy (like one of the inventors of the Internet) or couldn’t care less is really not important. The question is: what happens to an application-managed approach when people take additional steps to protect their own privacy?

For legitimate reasons, more and more people will be hiding their IPs, encrypting, tunneling, or otherwise disguising their activities and taking privacy into their own hands. As privacy technology becomes more affordable and simple, it will become more prevalent. This is a mega-trend, and it will create problems for those management tools that use this kind of information to enforce policies.

However, alternatives to these application-level products do exist, such as “fairness-based” bandwidth management. Fairness-based bandwidth management, like the NetEqualizer, is the only 100% neutral solution, and it ultimately provides a more privacy-friendly approach for Internet users and a more effective solution for administrators when personal privacy protection technology is in place. Fairness is the idea of managing bandwidth by how much you can use, not by what you’re doing. When you manage bandwidth by fairness instead of activity, not only are you supporting a neutral, private Internet, but you’re also able to address the critical tasks of bandwidth allocation, control, and quality of service.

The Dark Side of Net Neutrality


Net neutrality, however idyllic in principle, comes with a price. The following article was written to shed some light on the big money behind the propaganda of net neutrality. It may change your views; at the very least, it will peel back one more layer of the onion that is the issue of net neutrality.

First, an analogy to set the stage:

I live in a neighborhood that equally shares a local community water system among 60 residential members. Nobody is metered. Through a mostly verbal agreement, all users try to keep our usage to a minimum. This requires us to be very water conscious, especially in the summer months when the main storage tanks need time to recharge overnight.

Several years ago, one property changed hands, and the new owner started raising organic vegetables using a drip irrigation system. The neighborhood precedent had always been that using water for a small lawn and garden area was an accepted practice; however, the new neighbor expanded his garden to three acres and now sells his produce at the local farmers market. Even with drip irrigation, his water consumption is likely well beyond that of the rest of the neighborhood combined.

You can see where I am going with this. Based on this scenario, it’s obvious that an objective observer would conclude that this neighbor should pay an additional premium — especially when you consider he is exploiting the community water for a commercial gain.

The Internet, much like our neighborhood example, was originally a group of cooperating parties (educational and government institutions) that connected their networks in an effort to easily share information. There was never any intention of charging for access amongst members. As the Internet spread away from government institutions, last-mile carriers such as cable and phone companies invested heavily in infrastructure. Their business plans assumed that all parties would continue to use the Internet with lightweight content such as web pages, e-mails, and the occasional larger document or picture.

In the latter part of 2007, a few companies, with substantial data content models, decided to take advantage of the low delivery fees for movies and music by serving them up over the Internet. Prior to their new-found Internet delivery model, content providers had to cover the distribution costs for the physical delivery of records, video cassettes and eventually discs.

As of 2010, Internet delivery costs associated with the distribution of media had plummeted to near zero. It seems that consumers had pre-paid their delivery costs when they paid their monthly Internet bill. Everybody should be happy, right?

The problem is, as per our analogy with the community water system, we have a few commercial operators jamming the pipes with content, and jammed pipes have a cost. Upgrading a full Internet pipe at any level requires a major investment, and providers to date are already leveraged and borrowed with their existing infrastructure. Thus, the Internet companies that carry the data need to pass this cost on to somebody else.

As a result of these conflicting interests, we now have a pissing match between carriers and content providers in which the latter are playing the “neutrality card” and the former are lobbying lawmakers to grant them special favors in order to govern ways to limit access.

Therefore, whether it be water, the Internet, or grazing on public lands, absolute neutrality can be problematic, especially when money is involved. While the concept of neutrality certainly has the overwhelming support of consumer sentiment, be aware that there are, and always will be, entities exploiting the system.

Related Articles

For more on NetFlix, see Level 3-Netflix Expose their Hidden Agenda.

What Is Deep Packet Inspection and Why the Controversy?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.

Article Updated March 2012

As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.

Exactly what is deep packet inspection?

All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.

The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.

When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).

Products sold that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data are packet shapers, layer-7 traffic shaping, etc.
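
To make the packet anatomy concrete, here is a minimal Python sketch of the header/payload split for an IPv4 packet (the function name is illustrative). A DPI device is simply one that reads past the header:

# The header is "the address on the outside"; everything after it is the
# payload ("the freight"). A router needs only the header; DPI reads more.
import struct

def split_ipv4(raw: bytes):
    ihl = (raw[0] & 0x0F) * 4                   # header length in bytes
    src, dst = struct.unpack("!4s4s", raw[12:20])
    dotted = lambda a: ".".join(str(b) for b in a)
    return dotted(src), dotted(dst), raw[ihl:]  # payload begins after header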

How is deep packet inspection related to net neutrality?

Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.

Why do some Internet providers use deep packet inspection devices?

There are several reasons:

1) Targeted advertising — If a provider knows what you are reading, they can display content advertising on the pages they control, such as your login screen or e-mail account.

2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem as less desirable such as Bittorrent and other forms of peer-to-peer. Bittorrent traffic can overwhelm a network with volume. By detecting and redirecting the Bittorrent traffic, or slowing it down, a provider can alleviate congestion.

3) Block offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.

4) Government spying — In the case of Iran (and to some extent China), DPI is used to keep tabs on the local population.

When is it appropriate to use deep packet inspection?

1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.

2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.

3) Intrusion detection and prevention — It is one thing to be acting as an ISP and to eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. For example, in a private home it is within your rights to look through your peephole and not let shady characters into your home. In a private business, it is a good idea to use deep packet inspection in order to block unwanted intruders from your network. Blocking bad guys before they break into and damage your network is perfectly acceptable.

4) Spam filtering — Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most customers may not realize that Google is reading their mail (humans don’t read it, but computer scanners do), its motives are understood. What consumers may not realize is that their email provider is also reading everything they do in order to target advertising.

Does content filtering use deep packet inspection?

For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as routers need to look them up anyway. We have only encountered content filters at private institutions that are within their rights.

What about spam filtering, does that use deep packet inspection?

Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering most people have opted in at one point or another, hence it is generally done with permission.

What is all the fuss about?

It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.

For example, this is an excerpt from an E-Commerce Times article:

Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.

Paul Stephens, director of policy and advocacy for the Privacy Rights Clearinghouse, as quoted in the E-Commerce Times on November 14, 2008. Read the full article here.

Recently, Comcast had their hand slapped for redirecting BitTorrent traffic:

Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.

— Digital Daily, March 10, 2008. Read the full article here.

Later in 2008, the FCC came down hard on Comcast.

In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.

By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.

Wired.com, August 1, 2008. Read the full article here.

To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.

University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.

Wired.com, May 22, 2008. Read the full article here.

However, it looks like things are going the other way in the U.K., as Britain’s Virgin Media has announced they are dumping net neutrality in favor of targeting BitTorrent.

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The Register, December 16, 2008. Read the full article here.

In January 2009, Canadian ISPs confessed en masse to deep packet inspection.

With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.

Tech Spot, January 21, 2009. Read the full article here.

In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.

In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.

Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.

The controversy continues in the U.S., as AT&T is accused of traffic shaping, lying, and blocking sections of the Internet.

7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.

Kyle Brady, July 27, 2009. Read the full article here.

[February 2011 Update] The Egyptian government uses DPI to filter elements of its Internet traffic, and this act in itself has become a news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering report on Narus, the company that makes the DPI technology and sold it to the Egyptians.

While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency-sensitive applications, such as VoIP and email. Click here for a full price list.

NetEqualizer Brand Becoming an Eponym for Fairness and Net Neutrality Techniques


An eponym is a general term that identifies something by the name of the person or brand from which it derives. A proprietary eponym, therefore, is a brand name, product, or service mark that has fallen into general use.

Examples of common brand eponyms include Xerox, Google, and Band-Aid. All of these brands have become synonymous with the general use of their class of product, regardless of the actual brand.

Over the past 7 years we have spent much of our time explaining the NetEqualizer methods to network administrators around the country, and now there is mounting evidence that the NetEqualizer brand is taking on a broader societal connotation. NetEqualizer is in the early stages of becoming an eponym for the class of bandwidth shapers that balance network loads and ensure fairness and neutrality. As evidence, we cite the following excerpts taken from various blogs and publications around the world.

From Dennis OReilly <Dennis.OReilly@ubc.ca> posted on ResNet Forums

These days the only way to classify encrypted streams is through behavioral analysis.  ….  Thus, approaches like the NetEqualizer or script-based ‘penalty box’ approaches are better.

From a WISP tutorial by Butch Evans

About 2 months ago, I began experimenting with an approach to QOS that mimics much of the functionality of the NetEqualizer (http://www.netequalizer.com) product line.

From TMCnet

Comcast Announces Traffic Shaping Techniques like APconnections’ NetEqualizer…

From Technewsworld

It actually sounds a lot what NetEqualizer (www.netequalizer.com) does and most people are OK with it…..

From Network World

NetEqualizer looks at every connection on the network and compare it to the overall trunk size to determine how to eliminate congestion on the links

From the StarOS Forum

If you’d really like to have your own netequalizer-like system then my advice…..

From VoIP News

Has anyone else tried Netequalizer or something like it to help with VoIP QoS? It’s worked well so far for us and seems to be an effective alternative for networks with several users…..

NetEqualizer YouTube Caching a Win for Net Neutrality


Over the past few years, much of the controversy over net neutrality has ultimately stemmed from the longstanding rift between carriers and content providers. Commercial content providers such as NetFlix have entire business models that rely on relatively unrestricted bandwidth access for their customers, which has led to an enormous increase in the amount of bandwidth being used. In response to these extreme bandwidth loads and associated costs, ISPs have tried all types of schemes to limit and restrict total usage, including layer-7 shaping and deep packet inspection, rate limits, usage quotas, and fairness-based equalizing.

While in many cases effective, most of these efforts have been mired in controversy with respect to net neutrality. However, caching is the one exception.

Up to this point, caching has proven to be the magic bullet that can benefit both ISPs and consumers (faster access to videos, etc.) while respecting net neutrality. To illustrate this, we’ll run caching through the gauntlet of questions that have been raised about these other solutions in regard to a violation of net neutrality. In the end, it comes up clean.

1. Does caching involve deep inspection of user traffic without their knowledge (like layer-7 shaping and DPI)?

No.

2. Does caching perform any form of preferential treatment based on content?

No.

3. Does caching perform any form of preferential treatment based on fees?

No.

Yet, despite avoiding these pitfalls, caching has still proven to be extremely effective, allowing Internet providers to manage increasing customer demands without infringing upon customers’ rights or quality of service. These factors led APconnections to develop our most recent NetEqualizer feature: YouTube caching.

For more on this feature, or caching in general, check out our new NetEqualizer YouTube Caching FAQ post.

A Tiered Internet – Penny Wise or Pound Foolish


With the debate over net neutrality raging in the background, Internet suppliers are preparing their strategies to bridge the divide between bandwidth consumption and costs. This topic is coming to a head now largely because of the astonishing growth rate of streaming video from the likes of YouTube, NetFlix, and others.

The issue recently took a new turn and emerged front and center during a webinar in which Allot Communications and Openet presented their new product features, including an approach that integrates policy control with charging for wireless access to certain websites.

On the surface, this may seem like a potential solution to the bandwidth problem. Basic economic theory will tell you that if you increase the cost of a product or service, the demand will eventually decrease. In this case, charging for bandwidth will not only increase revenues, but the demand will ultimately drop until a point of equilibrium is reached. Problem solved, right? Wrong!

While the short-term benefits are obviously appealing for some, this is a slippery slope that will lead to further inequality in Internet access. (You can easily find many articles and blogs regarding Net Neutrality, including those referencing Vinton Cerf and Tim Berners-Lee, two of the founding fathers of the Internet, clearly supporting a free and equal Internet.) Despite these arguments, we believe that Deep Packet Inspection (DPI) equipment makers such as Allot will continue to promote and support a charge system, since it is in their best business interests to do so. After all, a pay-for-access approach requires DPI as the basis for determining what content to charge for.

However, there are better and more cost-effective ways to control bandwidth consumption while protecting the interests of net neutrality. For example, fairness-based bandwidth control intrinsically provides equality and fairness to all users without targeting specific content or websites. With this approach, when the network is busy, small bandwidth consumers are guaranteed access to the Internet, while large bandwidth users are throttled back but not charged or blocked completely. Everyone lives within their means and gets an equal share. If large bandwidth consumers want access to more bandwidth, they can purchase a higher level of service from their provider. But let’s be clear: this is very different from charging for access to a particular website!

Although this content-neutral approach has repeatedly proved successful for NetEqualizer users, we’re now taking an additional step toward mitigating bandwidth congestion while respecting network neutrality: caching of video, the largest growth segment of bandwidth consumption. So, keep an eye out for the YouTube caching feature, to be available in our new NetEqualizer release early next year.
