NetEqualizer White Paper Comparison with Traditional Layer-7 (Deep Packet Inspection Products)


Updated with new reference material May 4th 2009

How NetEqualizer compares to Packeteer, Allot, Cymphonics, Exinda

We often get asked how NetEqualizer compares to Packeteer, Allot, Cymphonics, Exinda and a plethora of other well-known companies that do layer-7 application shaping (packet shaping). After several years of these questions, and after discussing different aspects with former and current application shaping IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.

We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject, with content to support the bullet chart, was in order. If you want to see just the bullet chart, you can skip to the end now, but if you’re looking to have the question answered as objectively as possible, please take a few minutes to read on.

In the following sections, we will cover specifically when and where application shaping (deep packet inspection) is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish. We will also discuss how the NetEqualizer and its behavior-based shaping fits into the landscape of application shaping, and how in some cases the NetEqualizer is a much better alternative.

First off, let’s discuss the accuracy of application shaping. To do this, we need to review the basic mechanics of how it works.

Application shaping is defined as the ability to identify traffic on your network by type and then set customized policies to control the flow rates for each particular type. For example, Citrix, AIM, Youtube, and BearShare are all applications that can be uniquely identified.

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from computer A to computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload is the address where it is being sent. On the inside is the data/payload that is being transmitted. These two elements, the address and the payload, comprise the complete IP packet. In the case of different applications on the Internet, we would expect to see different kinds of payloads.
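To make the header/payload split concrete, here is a minimal Python sketch that separates the addressing information from the "freight" in a hand-built IPv4 packet. It is illustrative only: real network stacks also handle options, checksums, and fragmentation.

```python
def split_ip_packet(raw: bytes):
    """Split a raw IPv4 packet into its addressing info and its payload."""
    ihl = (raw[0] & 0x0F) * 4                    # header length in bytes
    src = ".".join(str(b) for b in raw[12:16])   # source address
    dst = ".".join(str(b) for b in raw[16:20])   # destination address
    return src, dst, raw[ihl:]                   # payload is everything after the header

# A hand-built 20-byte header (version/IHL byte 0x45, addresses 10.0.0.1 -> 10.0.0.2)
# followed by the payload "hello".
header = bytes([0x45, 0, 0, 25, 0, 0, 0, 0, 64, 6, 0, 0,
                10, 0, 0, 1,     # source: 10.0.0.1
                10, 0, 0, 2])    # destination: 10.0.0.2
src, dst, payload = split_ip_packet(header + b"hello")
print(src, dst, payload)  # 10.0.0.1 10.0.0.2 b'hello'
```

The point of the sketch: the address fields sit at fixed offsets in the header, while the payload is opaque data that follows it, which is exactly the distinction application shaping exploits.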

At the heart of all current application shaping products is special software that examines the content of Internet packets as they pass through the packet shaper. Through various pattern matching techniques, the packet shaper determines in real time what type of application a particular flow is. It then proceeds to take action to possibly restrict or allow the data based on a rule set designed by the system administrator.

For example, the popular peer-to-peer application Kazaa actually has the ASCII characters “Kazaa” appear in the payload, and hence a packet shaper can use this keyword to identify a Kazaa application. Seems simple enough, but suppose that somebody was downloading a Word document discussing the virtues of peer-to-peer and the title had the character string “Kazaa” in it. Well, it is very likely that this download would be identified as Kazaa and hence misclassified. After all, downloading a Word document from a Web server is not the same thing as the file sharing application Kazaa.
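The keyword-matching idea, and the false positive it invites, can be sketched in a few lines of Python. The signature table and labels here are hypothetical, not any vendor’s actual rule set:

```python
SIGNATURES = {b"Kazaa": "kazaa-p2p"}  # hypothetical one-entry signature table

def classify(payload: bytes) -> str:
    """Naive payload keyword matching, as described above."""
    for keyword, app in SIGNATURES.items():
        if keyword in payload:
            return app
    return "unclassified"

# A genuine Kazaa flow is caught...
print(classify(b"...Kazaa protocol handshake..."))           # kazaa-p2p
# ...but so is an innocent document download that merely mentions the word.
print(classify(b"GET /docs/virtues_of_Kazaa.doc HTTP/1.1"))  # kazaa-p2p (false positive)
# And anything without a known keyword falls through unidentified.
print(classify(b"some unknown protocol"))                    # unclassified
```

Commercial shapers use far more sophisticated pattern matching than a substring test, but the two failure modes sketched here, misclassification and unclassified traffic, are the same ones discussed below.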

The other issue that constantly brings the accuracy of application shaping under fire is that some application writers find it in their best interest not to be classified. In a mini arms race that plays out every day across the world, some application developers are constantly changing their signatures, and some have gone as far as to encrypt their data entirely.

Yes, it is possible for the makers of application shapers to counter each move, and that is exactly what the top companies do, but it can take a heroic effort to keep pace. The constant engineering and upgrading required has an escalating cost factor. In the case of encrypted applications, the amount of CPU power required for decryption is impractically high, and other methods will be needed to identify encrypted p2p.

But, this is not to say that application shaping doesn’t work in all cases or provide some value. So, let’s break down where it has potential and where it may bring false promises. First off, the realities of what really happens when you deploy and depend on this technology need to be discussed.

Accuracy and False Positives

In early 2003, we had a top engineer and executive join APConnections directly from a company that offered application shaping as one of its many value-added technologies. He had first-hand knowledge from working with hundreds of customers who were big supporters of application shaping:

The application shaper his company offered could identify 90 percent of the spectrum of applications, which means 10 percent was left unclassified. So, right off the bat, 10 percent of the traffic is unknown to the traffic shaper. Is this traffic important? Is it garbage that you can ignore? Well, there is no way to know without any intelligence about it, so you are forced to let it go by without any restriction. Or, you could put one general rule over all of it – perhaps limiting it to 1 megabit per second max, for example. Essentially, if your intention was 100-percent understanding and control of your network traffic, right out of the gate you must compromise this standard.

In fairness, this 90-percent identification actually is an amazing number with regard to accuracy when you understand how daunting application shaping is. Regardless, there is still room for improvement.

So, that covers the admitted problem of unclassifiable traffic, but how accurate can a packet shaper be with the traffic it does claim to classify? Does it make mistakes? There really isn’t any reliable data on how often an application shaper will misidentify an application. To our knowledge, no independent consumer reporting company has ever created a lab capable of generating several thousand different application types with a mix of random traffic, and then identified how often that traffic was misclassified. Yes, there are trivial tests done one application at a time, but misclassification becomes more likely with real-world complex and diverse application mixes.

From our own testing of classification technology freely available on the Internet, we discovered false positives can occur up to 25 percent of the time. A random FTP file download can be classified as something more specific. Obviously, commercial packet shapers do not rely on free open-source technology, and they may well improve on it. So, if we had to estimate based on our experience, perhaps 5 percent of Internet traffic will likely get misclassified. This brings the overall accuracy down to 85 percent (combining the traffic they don’t claim to classify with an estimated error rate for the traffic they do classify).
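To make the arithmetic behind that 85-percent figure explicit (both inputs are the estimates above, not measured constants):

```python
unclassified = 0.10    # traffic the shaper admits it cannot identify
misclassified = 0.05   # our estimated error rate on the traffic it does classify

# Overall accuracy: the share of total traffic correctly identified.
accuracy = 1.0 - unclassified - misclassified
print(f"{accuracy:.0%}")  # 85%
```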

Constantly Evolving Traffic

Our sources (mentioned above) say that 70 percent of their customers who purchased application shaping equipment were using the equipment primarily as a reporting tool after one year. This means they had stopped keeping up with shaping policies altogether and were just looking at the reports to understand their network (doing nothing proactive to change the traffic).

This is an interesting fact. From what we have seen, many people are just unable, or unwilling, to put in the time necessary to continuously update and change their application rules to keep up with the evolving traffic. The reason for the constant changing of rules is that with traditional application shaping you are dealing with a cunning and wise foe. For example, if you notice that there is a large contingent of users using Bittorrent and you put a rule in to quash that traffic, within perhaps days, those users will have moved on to something new: perhaps a new application or encrypted p2p. If you do not go back and reanalyze and reprogram your rule set, your packet shaper slowly becomes ineffective.

And finally, let’s not forget that application shaping is considered by some to be a violation of Net Neutrality.

When is application shaping the right solution?

There is a large set of businesses that use application shaping quite successfully along with other technologies, namely in WAN optimization. Thus far, we have discussed the issues with using an application shaper on the wide open Internet, where the types and variations of traffic are unbounded. However, in a corporate environment with a finite set and type of traffic between offices, an application shaper can be set up and used with fantastic results.

There is also the political side to application shaping. It is human nature to want to see and control what takes place in your environment. Finding the best tool available to actually show what is on your network, and the ability to contain it, plays well with just about any CIO or IT director on the planet. An industry-leading packet shaper brings visibility to your network, along with a pie chart showing 300 different kinds of traffic. Whether or not the tool is practical or accurate over time isn’t often brought into the buying decision. The decision to buy can usually be “intuitively” justified. By intuitively, we mean that it is easier to get approval for a tool that is conceptually simple for a busy executive looking for a quick-fix solution to understand.

As the cost of bandwidth continues to fall, the question becomes how much a CIO should spend to analyze a network. This is especially true when you consider that as the Internet expands, the complexity of shaping applications grows. As bandwidth prices drop, the cost of implementing such a product is either flat or increasing. In cases such as this, it often does not make sense to purchase a $15,000 bandwidth shaper to stave off a bandwidth upgrade that might cost an additional $200 a month.

What about the reporting aspects of an application shaper? Even if it can only accurately report 90 percent of the actual traffic, isn’t this useful data in itself?

Yes and no. Obviously, analyzing 90 percent of the data on your network might be useful, but if you really look at what is going on, it is hard to feel like you have control or understanding of something so dynamic and changing. By the time you get a handle on what is happening, the system has likely changed. Unless you can take action in real time, the network usage trends (on a wide open Internet trunk) will vary from day to day.1 It turns out that the most useful information you can determine regarding your network is an overall usage pattern for each individual. The goof-off employee/user will stick out like a sore thumb when you look at a simple usage report, since the amount of data transferred can be 10 times the average for everybody else. The behavior is the indicator here, but the specific data types and applications will change from day to day and week to week.
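The "sore thumb" usage report can be sketched as a simple outlier check. The names, numbers, and 10x threshold below are illustrative, not a real report format:

```python
def flag_heavy_users(usage_mb, factor=10.0):
    """Flag users whose transfer volume is >= `factor` times the average of everyone else."""
    flagged = []
    for user, mb in usage_mb.items():
        others = [v for u, v in usage_mb.items() if u != user]
        if others and mb >= factor * (sum(others) / len(others)):
            flagged.append(user)
    return flagged

# One user has transferred vastly more data than the rest of the office.
report = {"alice": 120, "bob": 95, "carol": 110, "dave": 4000}
print(flag_heavy_users(report))  # ['dave']
```

Note that the check looks only at behavior (volume transferred), never at what the traffic actually contained.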

How does the NetEqualizer differ and what are its advantages and weaknesses?

First, we’ll summarize equalizing and behavior-based shaping. Overall, it is a simple concept. Equalizing is the art form of looking at the usage patterns on the network, and then when things get congested, robbing from the rich to give to the poor. Rather than writing hundreds of rules to specify allocations to specific traffic as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

This behavior-based approach usually mirrors what you would end up doing if you could see and identify all of the traffic on your network, but doesn’t require the labor and cost of classifying everything. Applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority while large downloads and p2p receive lower priority. This behavior-based shaping does not need to be updated constantly as applications change.
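A toy version of the equalizing idea, assuming per-flow rate samples: the trigger point, the "hog" test, and the penalty factor below are illustrative constants, not NetEqualizer's internal algorithm.

```python
def equalize(flows, link_capacity_kbps, trigger=0.85, penalty=0.5):
    """One shaping pass: if the link is congested, throttle the largest flows."""
    total = sum(flows.values())
    if total < trigger * link_capacity_kbps:
        return dict(flows)            # no congestion: leave everything alone
    avg = total / len(flows)
    # Rob from the rich (flows well above average); leave the poor untouched.
    return {fid: rate * penalty if rate > 2 * avg else rate
            for fid, rate in flows.items()}

# Short, light traffic (voice, web) keeps its rate; the big download gets cut.
flows = {"voip": 80, "web": 300, "p2p-download": 9000}
print(equalize(flows, link_capacity_kbps=10_000))
# {'voip': 80, 'web': 300, 'p2p-download': 4500.0}
```

The key property shown here is that no rule ever names an application: priority falls out of the relative size of each flow at the moment of congestion.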

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned about accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception we regularly run into is video, and the NetEqualizer has a low-level routine that easily allows you to give overriding priority to a specific server on your network, hence solving the problem.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.
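The connection-count heuristic can be sketched like this. The threshold of 100 simultaneous connections is an illustrative number, not NetEqualizer's tuned value:

```python
from collections import Counter

def connection_abusers(connections, limit=100):
    """Return hosts holding more than `limit` simultaneous connections.

    connections: list of (local_host, remote_host) pairs.
    """
    counts = Counter(local for local, _ in connections)
    return {host: n for host, n in counts.items() if n > limit}

# A typical web user holds a handful of connections; a p2p client can hold hundreds.
conns = [("web-user", f"site{i}") for i in range(8)]
conns += [("p2p-user", f"peer{i}") for i in range(450)]
print(connection_abusers(conns))  # {'p2p-user': 450}
```

Again, the abuse is spotted purely from connection behavior, with no need to identify which application opened the connections.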

This overview, along with the summary table below, should give you a good idea of where the NetEqualizer stands in relation to packet shaping.

Summary Table

Application based shaping

  • good for static links where traffic patterns are constant

  • good for intuitive presentations – makes sense and is easy to explain to non-technical people
  • detailed reporting by application type
  • not the best fit for wide open Internet trunks
    • costly to maintain in terms of licensing

    • high initial cost

    • constant labor to tune with changing application spectrum

    • expect approximately 15 percent of traffic to be unclassified or misclassified

  • only a static snapshot of a changing spectrum, which may not be useful
  • false positives may show data incorrectly, with no easy way to confirm accuracy
  • violates Net Neutrality

Equalizing

  • not the best for dedicated WAN trunks

  • the most cost effective for shared Internet trunks
  • little or no recurring cost or labor
  • low entry cost
  • conceptually takes some getting used to
  • basic reporting by behavior used to stop abuse
  • handles encrypted p2p without modifications or upgrades
  • Supports Net Neutrality

1 The exception is a corporate WAN link with relatively static usage patterns.

Note: Since we first published this article, deep packet inspection, also known as layer-7 shaping, has taken some serious industry hits with respect to US-based ISPs.

Related articles:

Why is NetEqualizer the low price leader in bandwidth control

When is deep packet inspection a good thing?

NetEqualizer offers deep packet inspection compromise.

Internet users attempt to thwart Deep Packet Inspection using encryption.

Why the controversy over deep Packet inspection?

World wide web founder denounces deep packet inspection

Canadians request comments on traffic shaping practices


Art Reisman CTO www.netequalizer.com

I am not sure if this is open to Canadians only, but the CRTC (the Canadian equivalent of the FCC) has set up a site for comments regarding their policies on Internet traffic shaping. The site is open from now till April 30th and can be found at

http://isppractices.econsultation.ca/

So if you get the chance chime in and give them your thoughts.

For the fun of it (see below), I grabbed a few of the existing comments truly at random. After reading them, it is funny how the consumer sentiments so far are in total agreement with what we at NetEqualizer have been proselytizing, which is: “Traffic management is fine as long as there is full disclosure of policies.” Nobody wants to pump gas without knowing the grade and the price, and the same goes for their Internet service.

——————-comments—————————————————-

“Any traffic management practices deviating from complete network neutrality, that is to say, any practices that single out one protocol over another, should certainly be disclosed to the user in the service agreement. To disclose anything less would be consumer fraud.”

“Traffic management has a real impact on the product that a consumer is paying for. All ISPs are not created equal and consumers aren’t in a position to analyze the complexities of network management and the possible impacts on their usage.”

“All traffic shaping practices should be disclosed, in plain English, online and as a part of the terms of service.”

“I agree with the other posters thus far — if ISPs are allowed to get away with uncompetitive throttling of Internet traffic, those techniques and the effect on the customer should be fully disclosed in plain versions of both official languages.”

“Any new communication technologies can be thwarted if ISPs deem them to be competitive with any of their services, stifling innovation. Even the CBC has used BitTorrent to distribute programming, and..”

When is Deep Packet Inspection a Good Thing?


Commentary

Update September 2011

It seems some shareholders of a company that overpromised layer-7 technology are not happy.

By Eli Riles

As many of our customers are aware, we publicly stated back in October 2008 that we had officially switched all of our bandwidth control solutions over to behavior-based shaping. Consequently, we also completely disavowed Deep Packet Inspection, in a move that Ars Technica described as “vendor throws deep packet inspection under the bus.”

In the last few weeks, there has been a barrage of attacks on Deep Packet Inspection, and then a volley of PR supporting it from those implementing the practice.

I had been sitting on an action item to write something in defense of DPI, and then this morning I came across a pro-DPI blog post in the New York Times. The following excerpt is in reference to using DPI to give priority to certain types of traffic such as gaming:

“Some customers will value what they see as low priority as high priority,” he said. I asked Mr. Scott what he thought about the approach of Plusnet, which lets consumers pay more if they want higher priority given to their game traffic and downloads. Surprisingly, he had no complaints.

“If you said to me, the consumer, ‘You can choose what applications to prioritize and which to deprioritize, and, oh, by the way, prices will change as a result of how you do this,’ I don’t have a problem with that,” he said.

The key to this excerpt is the phrase, “IF YOU ASK THE CONSUMER WHAT THEY WANT.” This implies permission. If you use DPI as an opt-in, above-board technology, then obviously there is nothing wrong with it. The threat to privacy is only an issue if you use DPI without consumer knowledge. It should not be up to the provider to decide appropriate use of DPI, regardless of good intent.

The quickest way to deflate the objections of the DPI opposition is to allow consumers to choose. If you subscribe to a provider that allows you to have higher priority for certain applications, and it is in their literature, then by proxy you have granted permission to monitor your traffic. I can still see the Net Neutrality purist unhappy with any differential service, but realistically I think there is a middle ground.

I read an article the other day where a defender of DPI practices (sorry no reference) pointed out how spam filtering is widely accepted and must use DPI techniques to be effective. The part the defender again failed to highlight was that most spam filtering is done as an opt-in with permission. For example, the last time I checked my Gmail account, it gave the option to turn the spam filter off.

In sum, we are fully in support of DPI technology when the customer is made aware of its use and has a choice to opt out. However, any use of DPI done unknowingly and behind the scenes is bound to create controversy and may even be illegal. The exception would be a court order for a legal wiretap. Therefore, the Deep Packet Inspection debate isn’t necessarily a black and white case of two mutually exclusive extremes of right and wrong. If done candidly, DPI can be beneficial to both the Internet user and provider.

See also what is deep packet inspection.

Eli Riles, a consultant for APconnections (Netequalizer), is a retired insurance agent from New York. He is a self-taught expert in network infrastructure. He spends half the year traveling and visiting remote corners of the earth. The other half of the year you’ll find him in his computer labs testing and tinkering with the latest network technology.

For questions or comments, please contact him at eliriles@yahoo.com.

World Wide Web Founder Denounces Deep Packet Inspection


Editor’s Note: This past week, we counted several vendors publishing articles touting how their deep packet inspection is the latest and best. And then there is this…

Berners-Lee says no to internet ‘snooping’

The inventor of the World Wide Web, Sir Tim Berners-Lee, has attacked deep packet inspection, a technique used to monitor traffic on the internet and other communications networks.

Speaking at a House of Lords event to mark the 20th anniversary of the invention of the World Wide Web, Berners-Lee said that deep packet inspection (DPI) was the electronic equivalent of opening people’s mail.

To continue reading, click here.

We can understand how DPI devices are attractive, as they do provide visibility into what is going on in your network. We also understand that the intent of most network administrators is to keep their network running smoothly by making tough calls on what types of traffic to allow on their wires. But, while DPI is perhaps not exactly the same as reading private mail, as Mr. Berners-Lee claims, where should one draw the line?

We personally believe that the DPI line is one that should be avoided, if at all possible. And, our behavior-based shaping allows you to shape traffic without looking at data. Therefore, effective network optimization doesn’t have to come at the expense of user privacy.

More Resistance for Deep Packet Inspection


Editors note:

We come across stories from irate user groups every day. It seems the more the public knows about deep packet inspection practices, the less likely they are to continue. In Canada, it looks like the resistance is gaining some heavy hitters.

Google, Amazon, others want CRTC to ban internet interference

Last Updated: Tuesday, February 24, 2009 | 4:53 PM ET

A coalition of more than 70 technology companies, including internet search leader Google, online retailer Amazon and voice over internet provider Skype, is calling on the CRTC to ban internet service providers from “traffic shaping,” or using technology that favours some applications over others.

In a submission filed Monday to the Canada Radio-television and Telecommunications Commission (CRTC) in advance of a July probe into the issue of internet traffic management, the Open Internet Coalition said traffic shaping network management “discourages investment in broadband networks, diminishes consumer choice, interferes with users’ freedom of expression, and inhibits innovation.”

Full Article

NetEqualizer rolling out URL based traffic shaping.


February 10th, 2009

Lafayette Colorado

APconnections, maker of the popular NetEqualizer line of bandwidth control and traffic shaping hardware appliances, today announced a major feature enhancement to its product line: URL-based shaping.

In our recent newsletter we asked our customers if they were in need of URL based shaping and the feedback was a resounding YES.

Using our current release, administrators have the ability to shape their network traffic by IP address, MAC address, VLAN, or subnet. With the addition of URL shaping, our product line will meet the demands of co-location operators.

A distinction we need to make clear is that URL-based shaping is not related to DPI or content-based shaping. URLs are public information as they travel across the Internet; they are essentially human-readable names that map to IP addresses, so URL-based shaping does not require opening private data for inspection.
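For plain (unencrypted) HTTP, the requested URL can be reconstructed from the public request line and Host header without ever reading the message body. A sketch of that idea, not the product's actual implementation:

```python
def url_from_http_request(request: bytes):
    """Rebuild the requested URL from an HTTP request's public fields only."""
    head = request.split(b"\r\n\r\n", 1)[0].decode("ascii", "replace")
    lines = head.split("\r\n")
    _method, path, _version = lines[0].split(" ", 2)  # e.g. GET /index.html HTTP/1.1
    host = next((l.split(":", 1)[1].strip() for l in lines[1:]
                 if l.lower().startswith("host:")), None)
    return f"http://{host}{path}" if host else None

# The private body after the blank line is never examined.
req = b"GET /videos/index.html HTTP/1.1\r\nHost: example.com\r\n\r\nprivate-body"
print(url_from_http_request(req))  # http://example.com/videos/index.html
```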

If you are interested in details regarding this feature please contact APconnections directly.

More on Deep Packet Inspection and the NebuAd case


By Art Reisman

CTO of APconnections, makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer


Editors note:

This latest article published in DSL Reports reminds me of the scenario where a bunch of friends (not me) are smoking a joint in a car when the police pull them over, and the guy holding the joint takes the fall for everybody. I don’t want to see any of these ISPs get hammered, as I am sure they are good companies.

It seems like this case should be easily settled. Even if privacy laws were violated, the damage was perhaps a few unwanted ads that popped up in a browser, not some form of extortion of private records. In any case, the message should be clear to any ISP: to be safe, don’t implement DPI of any kind. And yet, for every NebuAd privacy lawsuit article I come across, I see at least two or three press releases from vendors announcing major deals for DPI equipment.

Full original article link from DSL Reports

ISPs Play Dumb In NebuAD Lawsuit
Claim they were ‘passive participants’ in user data sales…
08:54AM Thursday Feb 05 2009 by Karl Bode

The broadband providers argue that they can’t be sued for violating federal or state privacy laws if they didn’t intercept any subscriber traffic. In court papers filed late last week, they argue that NebuAd alone allegedly intercepted traffic, while they were merely passive participants in the plan.

By “passive participants,” they mean they took (or planned to take) money from NebuAD in exchange for allowing NebuAD to place deep packet inspection hardware on their networks. That hardware collected all browsing activity for all users, including what pages were visited and how long each user stayed there. It’s true many of the carriers were rather passive in failing to inform customers these trials were occurring — several simply tried to slip this through fine print in their terms of service or acceptable use policies.

NetEqualizer Bandwidth Control Tech Seminar Video Highlights


Tech Seminar, Eastern Michigan University, January 27, 2009

This 10-minute clip was professionally produced January 27, 2009. It gives a nice quick overview of how the NetEqualizer does bandwidth control while providing priority for VoIP and video.

The video specifically covers:

1) Basic traffic shaping technology and NetEqualizer’s behavior-based methods

2) Internet congestion and gridlock avoidance on a network

3) How peer-to-peer file sharing operates

4) How to counter the effects of peer-to-peer file sharing

5) Providing QoS and priority for voice and video on a network

6) A short comparison by a user (a university admin) who prefers NetEqualizer to layer-7 deep packet inspection techniques

Four Reasons Why Peer-to-Peer File Sharing Is Declining in 2009


By Art Reisman

CTO of APconnections, makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer


I recently returned from a regional NetEqualizer tech seminar with attendees from Western Michigan University, Eastern Michigan University and a few regional ISPs. While having a live look at Eastern Michigan’s p2p footprint, I remarked that it was way down from what we had been seeing in 2007 and 2008. The consensus from everybody in the room was that p2p usage is waning. Obviously this is not a broad data set to draw a conclusion from, but we have seen the same trend at many of our customer installs (3 or 4 a week), so I don’t think it is a fluke. It is kind of ironic, with all the controversy around Net Neutrality and BitTorrent blocking, that the problem seems to be taking care of itself.

So, what are the reasons behind the decline? In our opinion, there are several reasons:

1) Legal iTunes and other MP3 downloads are the norm now. They are reasonably priced and well marketed. These downloads still take up bandwidth on the network, but do not clog access points with connections like torrents do.

2) Most music aficionados are well stocked with the classics (bootleg or not) by now and are only grabbing new tracks legally as they come out. The days of downloading an entire collection of music at once seem to be over. Fans have their foundation of digital music and are simply adding to it rather than building it up from nothing as they were several years ago.

3) The RIAA enforcement got its message out there. This, coupled with reason #1 above, pushed users to go legal.

4) Legal, free and unlimited. YouTube videos are more fun than slow music downloads and they’re free and legal. Plus, with the popularity of YouTube, more and more television networks have caught on and are putting their programs online.

Despite the decrease in p2p file sharing, ISPs are still experiencing more pressure on their networks than ever from Internet congestion. YouTube and Netflix are more than capable of filling the void left by waning BitTorrent traffic. So, don’t expect the controversy over traffic shaping and the use of bandwidth controllers to go away just yet.

Comcast fairness techniques comparison with NetEqualizer


Comcast is now rolling out the details of its new policy on traffic-shaping fairness as it moves away from its former deep packet inspection.

For the complete Comcast article click here

Below we compare techniques with the NetEqualizer

Note: Feel free to comment if you feel we need to make any corrections in our comparison; our goal is to be as accurate as possible.

1) Both techniques slow users down if they exceed a bandwidth limit over a time period.

2) The Comcast bandwidth limit kicks in after 15 minutes and is based only on a customer’s usage over that time period; it is not based on the congestion going on in the overall network.

3) NetEqualizer bandwidth limits are based on the last 8 seconds of customer usage, but only kick in when the overall network is full (i.e., the aggregate bandwidth utilization of all users on the line has reached a critical level).

4) Comcast punishes offenders by cutting them back 50 percent for a minimum of 15 minutes.

5) NetEqualizer punishes offenders for just a few seconds and then lets them back to full strength. It will hit the offending connection with a decrease ranging from 50 to 80 percent.

6) Comcast puts a restriction on all traffic to the user during the 15-minute penalty period.

7) NetEqualizer only punishes offending connections. For example, if you were running an FTP download and a streaming audio session, only the FTP download would be affected by the restriction.

In our opinion both methods are effective and fair.
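The two penalty triggers described above can be contrasted in a short sketch. The sample rates, limits, and window sizes are illustrative, not either vendor's real parameters:

```python
def over_limit(samples_kbps, window, limit_kbps):
    """True if average usage over the last `window` samples exceeds the limit."""
    recent = samples_kbps[-window:]
    return sum(recent) / len(recent) > limit_kbps

usage = [900] * 10   # a customer's recent usage samples, in kbps
limit = 500

# Comcast-style: a long per-customer window; overall network congestion is ignored.
comcast_penalize = over_limit(usage, window=10, limit_kbps=limit)

# NetEqualizer-style: a short window, but only consulted when the shared link is full.
link_congested = True
neteq_penalize = link_congested and over_limit(usage, window=4, limit_kbps=limit)

print(comcast_penalize, neteq_penalize)  # True True
# With an idle link, the NetEqualizer-style check stands down entirely.
print(False and over_limit(usage, window=4, limit_kbps=limit))  # False
```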

FYI, NetEqualizer also has a quota system, which is used by a very small percentage of our customers. It is very similar to the Comcast 15-minute system, except that the time interval is measured in days.

Details on the NetEqualizer Quota based system can be found in the user guide page 11.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Canadians Mull over Privacy and Deep Packet Inspection


Editor’s note: It seems the Canadians are also finally being forced to face the issue of deep packet inspection. I guess the cat is out of the bag in Canada? One troubling note in the article below is the author’s insinuation that the only way to control Internet bandwidth is through DPI.

Privacy Commissioner of Canada - blog.privcom.gc.ca

CRTC begins dialogue on traffic shaping

Posted on November 21st, 2008 by Daphne Guerrero

Yesterday, the CRTC rendered its decision on ISPs’ traffic-shaping practices. It announced that it was denying the Canadian Association of Internet Providers’ (CAIP) request that Bell Canada, which provides wholesale ADSL services to smaller ISPs across the country, cease the traffic-shaping practices it has adopted for its wholesale customers.

“Based on the evidence before us, we found that the measures employed by Bell Canada to manage its network were not discriminatory. Bell Canada applied the same traffic-shaping practices to wholesale customers as it did to its own retail customers,” said Konrad von Finckenstein, Q.C., Chairman of the CRTC.

Moreover, the CRTC recognized that traffic-shaping “raises a number of questions” for both end-users and ISPs and has decided to hold a public hearing next July to consider them.

Read the full article

How Much YouTube Can the Internet Handle?


By Art Reisman, CTO, http://www.netequalizer.com 

As the Internet continues to grow and true speeds become higher, video sites like YouTube are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), YouTube videos don’t face the veil of copyright scrutiny cast upon p2p, which caused most users to back off.
 

In our experience, there are trade-offs associated with the advancements in technology that have come with YouTube. From measurements done in our NetEqualizer laboratories, the typical normal-quality YouTube video needs about 240 kbps sustained over its 10-minute run time. The newer high-definition videos run at a rate at least twice that.

Many of the rural ISPs that we at NetEqualizer support with our bandwidth shaping and control equipment have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio point where these small businesses can turn a profit. Given this contention ratio, if 40 customers simultaneously run YouTube, the link will be exhausted and all 300 customers will be wishing they had their dial-up back. At last check, YouTube traffic accounted for 10 percent of all Internet traffic. If left completely unregulated, a typical rural ISP could already find itself on the brink of saturation from normal YouTube usage. Tier-1 providers in major metro areas usually have more bandwidth, but with that comes higher expectations of service, and hence some saturation is inevitable.
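The arithmetic behind those figures can be checked with a quick back-of-the-envelope calculation (the numbers are the ones quoted above; nothing here is newly measured):

```python
# Contention math for a rural ISP link, using the figures quoted above.
link_kbps = 10_000    # one 10-megabit link
subscribers = 300     # users sharing that link (~300:1 contention point)
stream_kbps = 240     # sustained rate of a normal-quality YouTube video

max_streams = link_kbps // stream_kbps        # simultaneous videos the link carries
saturating_share = max_streams / subscribers  # fraction of users needed to fill it

print(max_streams)                      # 41 -- roughly the "40 customers" cited
print(round(saturating_share * 100))    # 14 -- about 14% of subscribers
```

In other words, it takes only about one in seven subscribers watching a standard-quality video at once to saturate the link, and high-definition streams halve that number again.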

If you believe there is a conspiracy, or that ISPs are not supposed to profit as they take risks and operate in a market economy, you are entitled to your opinion, but we are dealing with reality. And there will always be tension between users and their providers, much the same as there is with government funds and highway congestion.

The fact is, all ISPs have a fixed amount of bandwidth they can deliver, and when data flows exceed their current capacity, they are forced to implement some form of passive constraint. Without such constraints, many networks would lock up completely. This is no different than a city restricting water usage when reservoirs are low. Water restrictions are well understood by the populace, and yet somehow bandwidth allocations and restrictions are perceived as evil. I believe this misconception is simply due to the fact that bandwidth is so dynamic; if there were a giant reservoir of bandwidth pooled up in the mountains where you could watch this resource slowly become depleted, the problem could be more easily visualized.

The best compromise offered, and the only compromise that is not intrusive, is bandwidth rationing at peak hours when needed. Without rationing, a network will fall into gridlock, in which case not only do the YouTube videos come to a halt, but so do e-mail, chat, VoIP, and other less intensive applications.

There is some good news: there are alternative ways to watch YouTube videos.

We noticed during our testing that YouTube attempts to play back video as a real-time feed, like watching live TV. When you go directly to YouTube to watch a video, the site and your PC immediately start the video, and the quality becomes dependent on having that 240 kbps. If your provider’s speed dips below this level, your video will begin to stall, which is very annoying. However, if you are willing to wait a few seconds, there are tools out there that will play back YouTube videos for you in non-real time.

Buffering Tools

They accomplish this by pre-buffering before the video starts playing. We have not reviewed any of these tools, so do your research. We suggest you google “YouTube buffering tools” to see what is out there. Not only do these tools smooth out YouTube playback during peak times or on slower connections, but they also help balance the load on the network during peak times.
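The pre-buffering idea is simple enough to sketch: instead of playing the stream in real time, fill a buffer to a safety margin before starting playback, so the player can ride out dips below the video's bitrate. The `read_chunk`/`play_chunk` callables below are hypothetical stand-ins for a real tool's network reader and video player, not any actual tool's API.

```python
def prebuffered_play(read_chunk, play_chunk, start_threshold=30):
    """Buffer `start_threshold` chunks before playback begins.

    read_chunk() returns the next chunk of video data, or None at
    end of stream; play_chunk(chunk) renders one chunk.
    """
    buffer = []
    # Phase 1: pre-buffer silently until the safety margin is reached.
    while len(buffer) < start_threshold:
        chunk = read_chunk()  # may be slow on a congested link
        if chunk is None:
            break             # stream ended before the threshold
        buffer.append(chunk)
    # Phase 2: play from the head of the buffer while refilling behind it;
    # the margin absorbs moments when the link dips below the video bitrate.
    while buffer:
        play_chunk(buffer.pop(0))
        chunk = read_chunk()
        if chunk is not None:
            buffer.append(chunk)
```

The trade-off is exactly the one described above: a few seconds of waiting up front in exchange for stall-free playback, and a download pattern that a shaper can schedule around peak demand.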

Bio: Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably.

Deep Packet Inspection: A Poison Pill for NebuAd?


Editor’s Note:

NebuAd had a great idea: show ads to users based on content and share the revenue with ISPs that sign up for their service. What is wrong with this idea? It turns out customers don’t like people looking at their private data using DPI, hence the lawsuit detailed in the article below. The funny thing is we are still hearing from customers that want DPI as part of their solution, including many universities, ISPs, and the like. I think the message is clear: don’t use deep packet inspection unless you fully disclose this practice to your customers/employees, or risk getting your head nailed to a table.

———————————————————————–

From Zdnet Nov 11, 2008

NebuAd, the controversial company that was trying to sell deep-packet inspection technology as a means of delivering more relevant ads, has already had most of the life sucked out of it. Now, a class-action lawsuit filed in U.S. District Court in San Francisco today could put the final nail in the coffin.

Full article

http://blogs.zdnet.com/BTL/?p=10774

One Gigabit NetEqualizer Announced Today


Editor’s Note: We expect to go higher than 1 gigabit and 12,000 users in the near future. This is just a start.

APconnections Announces Fully Equipped One-Gigabit NetEqualizer Traffic Shaper for $8500

LAFAYETTE, Colo., Nov. 7/PRNewswire/ — APconnections, a leading supplier of plug-and-play bandwidth shaping products, today announced a one-gigabit enhancement to their NetEqualizer brand traffic shapers. The initial release will handle 12,000 users and sustained line speeds of one gigabit.

“Prior to this release, our largest model, the NE-3000, was rated for 350 megabits,” said Eli Riles, APconnections vice president of sales. “Many of our current customers liked our technology, but just needed a higher-end machine. The other good news is that our current NE-3000 platform will be able to run this new version with just a software upgrade, no forklift required.”

Future releases are in the works for even higher speeds and more users, thus solidifying APconnections as the price-performance leader in the WAN optimization marketplace.

In its initial release, the one-gigabit model will start at $8,500 USD. For more information, contact APconnections at 1-800-918-2763 or via email at sales@netequalizer.com.

The NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology gives priority to latency-sensitive applications, such as VoIP and email. Behavior-based shaping is the industry alternative to Deep Packet Inspection (DPI). It does it all dynamically and automatically, improving on other bandwidth shaping technology available.

APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado.

Contact: APconnections, 1-800-918-2763

Death to Deep Packet Inspection (Layer-7 Shaping)


Editor’s note: Deep packet inspection (layer-7 shaping) will likely be around for a while. It is very easy to explain this technology to customers; hence, many IT resellers latch on to it, as it makes a compelling elevator pitch. We put out the press release below to formalize our position on this issue.

For detailed information on how the techniques of NetEqualizer differ from Deep Packet inspection, see the following link: http://www.netequalizer.com/Compare_NetEqualizer.php

LAFAYETTE, Colo., October 28, 2008 — APconnections, a leading supplier of plug-and-play bandwidth shaping products, today formally announced the discontinuation of deep packet inspection techniques in its NetEqualizer product line.

“Our behavior-based techniques worked so well that current customers stopped asking for the layer-7 techniques we had at one time implemented into our system,” said Art Reisman, CEO of APconnections. “So, we eventually just decided to phase the technique out completely.”

Although deep packet inspection, also known as layer-7 shaping, was unofficially discontinued nearly two years ago, the ongoing debates over user privacy spurred the official announcement.

“What prompted us to make a formal announcement was the industry’s continued lack of understanding that deep packet inspection not only does not work very well, but also puts you at risk of violating privacy laws if you use these techniques without customer consent,” said Reisman.

Although Reisman says most providers cross this line with the good intention of controlling traffic congestion, the reality is that it’s no different than listening to a private phone conversation and terminating the call if you don’t like what you hear.

“It’s quite risky for any public US-based ISP to invest in this technique, especially after the FCC slapped Comcast’s wrists in a recent decision,” said Reisman.

For more information on the NetEqualizer technology, visit www.netequalizer.com or contact APconnections at 1-800-918-2763 or via email sales@netequalizer.com.

The NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology gives priority to latency-sensitive applications, such as VoIP and email. It does it all dynamically and automatically, improving on other bandwidth shaping technology available.

APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado.
