APconnections Announces NetEqualizer Lifetime Buyer Protection Policy


This week, we announced the launch of the NetEqualizer Lifetime Buyer Protection Policy. In the event of an unrepairable failure of a NetEqualizer unit at any time, or when it is simply time to retire a unit, customers will have the option to purchase a replacement unit and apply a 50-percent credit of their original unit purchase price toward the new unit. For current pricing, please register for our price list. This includes units that are more than three years old (the expected useful life for hardware) and in service at the time of failure.

For example, if you purchased a unit in 2003 for $4,000 and were looking to replace it or upgrade to a newer model, APconnections would kick in a $2,000 credit toward the replacement purchase.

The Policy will be in addition to the existing optional yearly NetEqualizer Hardware Warranty (NHW), which offers customers cost-free repairs or replacement of any malfunctioning unit while NHW is in effect (read details on NHW).

Our decision to implement the policy was a matter of customer peace-of-mind rather than necessity. While the failure rate of any NetEqualizer unit is ultimately very low, we want customers to know that we stand behind our products – even if it’s several years down the line.

To qualify,

  • the customer must be the original owner of the NetEqualizer unit,
  • the customer must have maintained a support contract that has been current within the last 18 months (a lapse in support of longer than 18 months will void the replacement policy), and
  • the unit must have been in use on the customer’s network at the time of failure.

Shipping is not included in the discounted price. Purchasers of the one-year NetEqualizer hardware warranty (NHW) will still qualify for full replacement at no charge while under hardware warranty.  Contact us for more details by emailing sales@apconnections.net, or calling 303.997.1300 x103 (International), or 1.888.287.2492 (US Toll Free).

Note: This Policy does not apply to the NetEqualizer Lite.

Deep Packet Inspection Abuse In Iran Raises Questions About DPI Worldwide


Over the past few years, we at APconnections have made our feelings about Deep Packet Inspection clear, completely abandoning the practice in our NetEqualizer technology more than two years ago. While there may be times that DPI is necessary and appropriate, its use in many cases can threaten user privacy and the open nature of the Internet. And, in extreme cases, DPI can even be used to threaten freedom of speech and expression. As we mentioned in a previous article, this is currently taking place in Iran.

Although these extreme invasions of privacy are most likely not occurring in the United States, their existence in Iran is bringing increasing attention to the slippery slope that is Deep Packet Inspection. A July 10 Huffington Post article reads:

“Before DPI becomes more widely deployed around the world and at home, the U.S. government ought to establish legitimate criteria for authorizing the use [of] such control and surveillance technologies. The harm to privacy and the power to control the Internet are so disturbing that the threshold for using DPI must be very high. The use of DPI for commercial purposes would need to meet this high bar. But it is not clear that there is any commercial purpose that outweighs the potential harm to consumers and democracy.”

This potential harm to the privacy and rights of consumers was a major factor behind our decision to discontinue the use of DPI in any of our technology and invest in alternative means for network optimization. We hope that the ongoing controversy will be reason for others to do the same.

Do We Need an Internet User Bill of Rights?


The Computers, Freedom and Privacy conference wraps up today in Washington, D.C., with conference participants having paid significant attention to the ongoing debates concerning ISPs, Deep Packet Inspection, and net neutrality. Over the past several days, representatives from the various interested parties have made their cases for and against certain measures pertaining to user privacy. As expected, demands for the protection of user privacy often came into conflict with ISPs’ advertising strategies and their defense of their overall network quality.

At the center of this debate is the issue of transparency and what ISPs are actually telling customers. In many cases, apparent intrusions into user privacy are qualified by what’s stated in the “fine print” of customer contracts. If these contracts notify customers that their Internet activity and personal information may be used for advertising or other purposes, then it can’t really be said that the customer’s privacy has been invaded. But the question is, how many users actually read their contracts, and furthermore, how many people actually understand the fine print? It would be interesting to see what percentage of Internet users could define deep packet inspection. Probably not very many.

This situation is reminiscent of many others involving service contracts, but one particularly timely example comes to mind — credit cards. Last month, the Senate passed a credit card “bill of rights,” through which consumers would be both better protected and better informed. Of the latter, President Obama stated, “you should not have to worry that when you sign up for a credit card, you’re signing away all your rights. You shouldn’t need a magnifying glass or a law degree to read the fine print that sometimes doesn’t even appear to be written in English.”

Ultimately, the same should be true for any service contracts, but especially if private information is at stake, as is the case with the Internet privacy debate. Therefore, while it’s a step in the right direction to include potential user privacy issues in service contracts, it should not be done only with the intention of preventing potential legal backlash, but rather with the customer’s true understanding of the agreement in mind.

Editor’s Note: APconnections and NetEqualizer have long been proponents of both transparency and the protection of user privacy, having devoted several years to developing technology that maintains network quality while respecting the privacy of Internet users.

Obama’s Revival of Net Neutrality Revisits An Issue Hardly Forgotten


Last Friday, President Obama reinvigorated (for many people, at least) the debate over net neutrality during a speech from the White House on cybersecurity. The president made it clear that users’ privacy and net neutrality would not be threatened under the guise of cybersecurity measures. President Obama stated:

“Let me also be clear about what we will not do. Our pursuit of cyber-security will not — I repeat, will not include — monitoring private sector networks or Internet traffic. We will preserve and protect the personal privacy and civil liberties that we cherish as Americans. Indeed, I remain firmly committed to net neutrality so we can keep the Internet as it should be — open and free.”

While this is certainly an important issue on the security front, for many ISPs and network administrators, it didn’t take the president’s comments to put user privacy or net neutrality back in the spotlight. In many cases, ISPs and network administrators must constantly walk the fine line between net neutrality, user privacy, and ultimately the well-being of their own networks, something that can be compromised on a number of fronts (security, bandwidth, economics, etc.).

Therefore, despite the president’s ongoing commitment to net neutrality, the issue will continue to be debated and remain at the forefront of the minds of ISPs, administrators, and many users. Over the past few years, we at NetEqualizer have been working to provide a compromise for these interested parties, ensuring network quality and neutrality while protecting the privacy of users. It will be interesting to see how this debate plays out, and what it will mean for policy, as the philosophy of network neutrality continues to be challenged — both by individuals and network demands.


New Asymmetric Shaping Option Augments NetEqualizer-Lite


We currently have a new release in beta testing that allows for equalizing on an asymmetric link. As is the case with all of our equalizing products, this release will allow users to more efficiently utilize their bandwidth, thus optimizing network performance. This will be especially ideal for users of our recently released NetEqualizer-Lite.

Many wireless access points have a limit on the total amount of bandwidth they can transmit in both directions combined, because only one direction can be transmitting at a time. Unlike wired networks, where a 10-megabit link typically means you can have 10 megabits up and 10 megabits down simultaneously, on a wireless network you can only have 10 megabits total at any one time. So, if you had 7 megabits coming in, you could only have 3 megabits going out. These limits are a hard saturation point.
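
To make the shared-capacity idea concrete, here is a minimal Python sketch of how headroom and congestion might be judged on a link where both directions draw from one 10-megabit pool. It is purely illustrative, not NetEqualizer code, and the capacity figure and congestion threshold are assumptions:

    # Illustrative sketch (not NetEqualizer code): on a wireless link both
    # directions share one pool of capacity, so the headroom in one direction
    # depends on what the other direction is currently using.

    SHARED_CAPACITY_MBPS = 10.0  # total the access point can move in both directions

    def remaining_headroom(inbound_mbps: float, outbound_mbps: float) -> float:
        """Return unused capacity on a shared (asymmetric) wireless link."""
        used = inbound_mbps + outbound_mbps
        return max(SHARED_CAPACITY_MBPS - used, 0.0)

    def is_congested(inbound_mbps: float, outbound_mbps: float,
                     threshold: float = 0.85) -> bool:
        """Treat the link as congested once combined usage crosses a fraction of the pool."""
        return (inbound_mbps + outbound_mbps) >= threshold * SHARED_CAPACITY_MBPS

    # Example: 7 megabits coming in leaves only about 3 megabits for outbound traffic.
    print(remaining_headroom(7.0, 0.0))   # 3.0
    print(is_congested(7.0, 2.0))         # True (9 of 10 megabits in use)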

In the past, it was necessary to create separate settings for the upstream and the downstream. With the new NetEqualizer release, you can simply tell the NetEqualizer that you have an asymmetric 10-megabit link, and congestion control will automatically kick in for both streams, alleviating bottlenecks more efficiently and keeping your network running smoothly.

For more information on APconnections’ equalizing technology, click here.

NetEqualizer-Lite Is Now Available!


Last month, we introduced our newest release, a Power-over-Ethernet NetEqualizer. Since then, with your help, we’ve titled the new release the NetEqualizer-Lite and are already getting positive feedback from users. Here’s a little background about what led us to release the NetEqualizer-Lite.

Over the years, several customers had expressed interest in placing a NetEqualizer as close as possible to their towers in order to relieve congestion. In many cases, however, this would require both a weatherproof and low-power NetEqualizer unit – two features that were not available up to this point. In the midst of a growing demand for this type of technology, we spent the last few months working to meet this need and thus developed the NetEqualizer-Lite.

Here’s what you can expect from the NetEqualizer-Lite:

  • Power over Ethernet
  • Up to 10 megabits of shaping
  • Up to 200 users
  • Comes complete with all standard NetEqualizer features

And, early feedback on the new release has been positive. Here’s what one user recently posted on DSLReports.com:

We’ve ordered 4 of these and deployed 2 so far. They work exactly like the 1U rackmount NE2000 that we have in our NOC, only the form factor is much smaller (about 6x6x1) and they use POE or a DC power supply. I amp clamped one of the units, and it draws about 7 watts…. The NetEqualizer has resulted in dramatically improved service to our customers. Most of the time, our customers are seeing their full bandwidth. The only time they don’t see it now is when they’re downloading big files. And, when they don’t see full performance, it’s only for the brief period that the AP is approaching saturation. The available bandwidth is re-evaluated every 2 seconds, so the throttling periods are often brief. Bottom line to this is that we can deliver significantly more data through the same AP. The customers hitting web pages, checking e-mail, etc. virtually always see full bandwidth, and the hogs don’t impact these customers. Even the hogs see better performance (although that wasn’t one of my priorities). (DSLReports.com)

Pricing for the new model will be $1,200 for existing NetEqualizer users and $1,550 for non-customers purchasing their first unit. However, the price for subsequent units will be $1,200 for users and nonusers alike.

For more information about the new release, contact us at admin@apconnections.net or 1-800-918-2763.

NetEqualizer White Paper: A Comparison with Traditional Layer-7 (Deep Packet Inspection) Products


Updated with new reference material May 4th 2009

How NetEqualizer compares to Packeteer, Allot, Cymphonics, Exinda

We often get asked how NetEqualizer compares to Packeteer, Allot, Cymphonics, Exinda, and a plethora of other well-known companies that do layer 7 application shaping (packet shaping). After several years of fielding these questions, and discussing different aspects with former and current application shaping IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.

We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject, with content to support the bullet chart, was in order. If you want to see just the bullet chart, you can skip to the end now, but if you’re looking to have the question answered as objectively as possible, please take a few minutes to read on.

In the following sections, we will cover specifically when and where application shaping (deep packet inspection) is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish. We will also discuss how the NetEqualizer and its behavior-based shaping fits into the landscape of application shaping, and how in some cases the NetEqualizer is a much better alternative.

First off, let’s discuss the accuracy of application shaping. To do this, we need to review the basic mechanics of how it works.

Application shaping is defined as the ability to identify traffic on your network by type and then set customized policies to control the flow rates for each particular type. For example, Citrix, AIM, Youtube, and BearShare are all applications that can be uniquely identified.

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from computer A to computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload is the address where it is being sent. On the inside is the data/payload that is being transmitted. These two elements, the address and the payload, comprise the complete IP packet. In the case of different applications on the Internet, we would expect to see different kinds of payloads.

At the heart of all current application shaping products is special software that examines the content of Internet packets as they pass through the packet shaper. Through various pattern matching techniques, the packet shaper determines in real time what type of application a particular flow is. It then proceeds to take action to possibly restrict or allow the data based on a rule set designed by the system administrator.

For example, the popular peer-to-peer application Kazaa actually has the ASCII characters “Kazaa” appear in the payload, and hence a packet shaper can use this keyword to identify a Kazaa application. Seems simple enough, but suppose that somebody was downloading a Word document discussing the virtues of peer-to-peer and the title had the character string “Kazaa” in it. Well, it is very likely that this download would be identified as Kazaa and hence misclassified. After all, downloading a Word document from a Web server is not the same thing as the file sharing application Kazaa.
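
To illustrate both the mechanics and the failure mode described above, here is a hypothetical Python sketch of naive payload signature matching. The signature table and labels are invented for illustration; no commercial engine is this simple:

    # Hypothetical sketch of naive payload signature matching; not any vendor's
    # actual engine. A classifier that keys on the ASCII string "Kazaa" in the
    # payload flags every packet containing that byte sequence, including an
    # ordinary document download that merely mentions the word.

    SIGNATURES = {
        b"Kazaa": "kazaa-p2p",
        b"BitTorrent protocol": "bittorrent",
    }

    def classify(payload: bytes) -> str:
        """Return the first matching application label, or 'unclassified'."""
        for signature, label in SIGNATURES.items():
            if signature in payload:
                return label
        return "unclassified"

    # A genuine Kazaa packet matches, as intended...
    print(classify(b"...Kazaa client handshake..."))          # kazaa-p2p
    # ...but so does a Word document that merely discusses Kazaa: a false positive.
    print(classify(b"Essay: the virtues of Kazaa and p2p"))   # kazaa-p2p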

The other issue that constantly brings the accuracy of application shaping under fire is that some application writers find it in their best interest not to be classified. In a mini arms race that plays out every day across the world, some application developers are constantly changing their signatures, and some have gone as far as to encrypt their data entirely.

Yes, it is possible for the makers of application shapers to counter each move, and that is exactly what the top companies do, but it can take a heroic effort to keep pace. The constant engineering and upgrading required carries an escalating cost. In the case of encrypted applications, the amount of CPU power required for decryption is so intensive as to be impractical, and other methods are needed to identify encrypted p2p.

This is not to say that application shaping never works or provides no value. So, let’s break down where it has potential and where it may bring false promises. First off, the realities of what really happens when you deploy and depend on this technology need to be discussed.

Accuracy and False Positives

In early 2003, a top engineer and executive joined APconnections directly from a company that offered application shaping as one of its many value-added technologies. He had first-hand knowledge from working with hundreds of customers who were big supporters of application shaping:

The application shaper his company offered could identify 90 percent of the spectrum of applications, which means 10 percent was left unclassified. So, right off the bat, 10 percent of the traffic is unknown to the traffic shaper. Is this traffic important? Is it garbage that you can ignore? Well, there is no way to know without any intelligence about it, so you are forced to let it go by without any restriction. Or, you could put one general rule over all of this unclassified traffic – perhaps limiting it to 1 megabit per second max, for example. Essentially, if your intention was 100-percent understanding and control of your network traffic, right out of the gate you must compromise this standard.

In fairness, this 90-percent identification actually is an amazing number with regard to accuracy when you understand how daunting application shaping is. Regardless, there is still room for improvement.

So, that covers the admitted problem of unclassifiable traffic, but how accurate can a packet shaper be with the traffic it does claim to classify? Does it make mistakes? There really isn’t any reliable data on how often an application shaper will misidentify an application. To our knowledge, no independent consumer reporting company has ever created a lab capable of generating several thousand different application types mixed with random traffic and then measured how often that traffic was misclassified. Yes, there are trivial tests done one application at a time, but misclassification becomes more likely with real-world complex and diverse application mixes.

From our own testing of application shaping technology freely available on the Internet, we discovered that false positives can occur up to 25 percent of the time. A random FTP file download can be classified as something more specific. Obviously, commercial packet shapers do not rely on this free open-source technology, and they may well improve on it. So, if we had to estimate based on our experience, perhaps 5 percent of Internet traffic will likely get misclassified. This brings overall accuracy down to roughly 85 percent (combining the traffic they don’t claim to classify with an estimated error rate for the traffic they do classify).
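
For readers who want the arithmetic spelled out, here is the rough back-of-the-envelope combination behind that figure (both percentages are our own estimates, not vendor-published numbers):

    # Rough arithmetic behind the 85 percent estimate.
    unclassified = 0.10    # traffic the shaper admits it cannot identify
    misclassified = 0.05   # our estimate of traffic identified incorrectly
    overall_accuracy = 1.0 - unclassified - misclassified
    print(f"{overall_accuracy:.0%}")   # 85%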

Constantly Evolving Traffic

Our source (mentioned above) says that 70 percent of the customers that purchased application shaping equipment were using it primarily as a reporting tool after one year. This means that they had stopped keeping up with shaping policies altogether and were just looking at the reports to understand their network (doing nothing proactive to change the traffic).

This is an interesting fact. From what we have seen, many people are just unable, or unwilling, to put in the time necessary to continuously update and change their application rules to keep up with the evolving traffic. The reason for the constant changing of rules is that with traditional application shaping you are dealing with a cunning and wise foe. For example, if you notice that there is a large contingent of users using Bittorrent and you put a rule in to quash that traffic, within perhaps days, those users will have moved on to something new: perhaps a new application or encrypted p2p. If you do not go back and reanalyze and reprogram your rule set, your packet shaper slowly becomes ineffective.

And finally, let’s not forget that application shaping is considered by some to be a violation of Net Neutrality.

When is application shaping the right solution?

There is a large set of businesses that use application shaping quite successfully along with other technologies. This area is WAN optimization. Thus far, we have discussed the issues with using an application shaper on the wide-open Internet, where the types and variations of traffic are unbounded. However, in a corporate environment with a finite set of traffic types between offices, an application shaper can be set up and used with fantastic results.

There is also the political side to application shaping. It is human nature to want to see and control what takes place in your environment. Finding the best tool available to actually show what is on your network, and to contain it, plays well with just about any CIO or IT director on the planet. An industry-leading packet shaper brings visibility to your network, complete with a pie chart showing 300 different kinds of traffic. Whether or not the tool remains practical or accurate over time isn’t often brought into the buying decision, and the decision to buy can usually be “intuitively” justified. By intuitively, we mean that it is easier to get approval for a tool that is conceptually simple for a busy executive looking for a quick-fix solution to understand.

As the cost of bandwidth continues to fall, the question becomes how much a CIO should spend to analyze a network. This is especially true when you consider that as the Internet expands, the complexity of shaping applications grows. As bandwidth prices drop, the cost of implementing such a product is either flat or increasing. In cases such as this, it often does not make sense to purchase a $15,000 bandwidth shaper to stave off a bandwidth upgrade that might cost an additional $200 a month.

What about the reporting aspects of an application shaper? Even if it can only accurately report 90 percent of the actual traffic, isn’t this useful data in itself?

Yes and no. Obviously, analyzing 90 percent of the data on your network might be useful, but if you really look at what is going on, it is hard to feel like you have control or understanding of something that is so dynamic and changing. By the time you get a handle on what is happening, the system has likely changed. Unless you can take action in real time, the network usage trends (on a wide-open Internet trunk) will vary from day to day.1 It turns out that the most useful information you can determine about your network is an overall usage pattern for each individual. The goof-off employee/user will stick out like a sore thumb when you look at a simple usage report, since the amount of data transferred can be 10 times the average for everybody else. The behavior is the indicator here, but the specific data types and applications will change from day to day and week to week.

How does the NetEqualizer differ and what are its advantages and weaknesses?

First, we’ll summarize equalizing and behavior-based shaping. Overall, it is a simple concept. Equalizing is the art form of looking at the usage patterns on the network, and then when things get congested, robbing from the rich to give to the poor. Rather than writing hundreds of rules to specify allocations to specific traffic as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

This behavior-based approach usually mirrors what you would end up doing if you could see and identify all of the traffic on your network, but doesn’t require the labor and cost of classifying everything. Applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority while large downloads and p2p receive lower priority. This behavior-based shaping does not need to be updated constantly as applications change.
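
As a rough illustration of the idea, the following Python sketch penalizes the heaviest connections only while the link is congested. It is a simplified model with assumed thresholds, not the actual NetEqualizer algorithm:

    # Simplified sketch of behavior-based equalizing, not the actual NetEqualizer
    # algorithm: when the link is congested, penalize the connections that are
    # currently moving the most data; short, interactive flows are left alone.

    from dataclasses import dataclass

    @dataclass
    class Flow:
        src: str
        dst: str
        rate_kbps: float      # recent transfer rate for this connection
        penalized: bool = False

    def equalize(flows, link_usage_mbps, link_capacity_mbps,
                 trigger=0.85, hog_kbps=512.0):
        """Apply penalties only while the link is above the congestion trigger."""
        congested = link_usage_mbps >= trigger * link_capacity_mbps
        for flow in flows:
            # Large, sustained transfers are treated as bandwidth hogs;
            # everything else keeps its natural priority.
            flow.penalized = congested and flow.rate_kbps >= hog_kbps

    flows = [Flow("10.0.0.5", "203.0.113.9", 4000.0),   # large download
             Flow("10.0.0.7", "198.51.100.2", 40.0)]    # web browsing
    equalize(flows, link_usage_mbps=9.2, link_capacity_mbps=10.0)
    print([f.penalized for f in flows])   # [True, False]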

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned with accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception that we run into is video, and the NetEqualizer has a low level routine that easily allows you to give overriding priority to a specific server on your network, hence solving the problem.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.
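
The sketch below shows the general idea of connection-count policing: tally active connections per local host and flag hosts far above a normal browsing profile. The threshold and data format are assumptions for illustration, not NetEqualizer’s actual routine:

    # Illustrative sketch of connection-count abuse detection, not NetEqualizer's
    # actual algorithm: hosts holding an unusually large number of simultaneous
    # connections (typical of p2p) are flagged.

    from collections import Counter

    def find_connection_abusers(connections, max_connections=100):
        """connections is a list of (local_host, remote_host) pairs."""
        counts = Counter(local for local, _remote in connections)
        return {host for host, count in counts.items() if count > max_connections}

    # Example: one host holding 250 connections, another holding only a couple.
    sample = [("10.0.0.5", f"peer-{i}") for i in range(250)]
    sample += [("10.0.0.7", "mail.example.com"), ("10.0.0.7", "www.example.com")]
    print(find_connection_abusers(sample))   # {'10.0.0.5'}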

This overview, along with the summary table below, should give you a good idea of where the NetEqualizer stands in relation to packet shaping.

Summary Table

Application-based shaping

  • good for static links where traffic patterns are constant
  • good for intuitive presentations; makes sense and is easy to explain to non-technical people
  • detailed reporting by application type
  • not the best fit for wide-open Internet trunks
  • costly to maintain in terms of licensing
  • high initial cost
  • constant labor to tune with the changing application spectrum
  • expect approximately 15 percent of traffic to be unclassified or misclassified
  • only a static snapshot of a changing spectrum, which may not be useful
  • false positives may show data incorrectly, with no easy way to confirm accuracy
  • violates Net Neutrality

Equalizing

  • not the best fit for dedicated WAN trunks
  • the most cost effective for shared Internet trunks
  • little or no recurring cost or labor
  • low entry cost
  • conceptually takes some getting used to
  • basic reporting by behavior, used to stop abuse
  • handles encrypted p2p without modifications or upgrades
  • supports Net Neutrality

1 The exception is a corporate WAN link with relatively static usage patterns.

Note: Since we first published this article, deep packet inspection, also known as layer 7 shaping, has taken some serious industry hits with respect to US-based ISPs.

Related articles:

Why is NetEqualizer the low-price leader in bandwidth control?

When is deep packet inspection a good thing?

NetEqualizer offers deep packet inspection compromise.

Internet users attempt to thwart Deep Packet Inspection using encryption.

Why the controversy over deep packet inspection?

World Wide Web founder denounces deep packet inspection
