NetEqualizer News: September 2015


September 2015

Greetings!

Enjoy another issue of NetEqualizer News! This month, we spotlight the NetEqualizer Installation Process, walk through the updated Viewing Traffic section of our NetEqualizer Quick Start Guide, discuss our expanded DDoS Firewall, and show off our new NetEqualizer 8.3 User Guide. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

We are almost officially in the fall season in the Northern Hemisphere, and I am enjoying harvesting all my tomatoes, and sadly, very few (and small) pumpkins! Some really good news, though, is that I think my fencing successfully thwarted a raccoon or skunk that was attacking my garden.

Speaking of attacks, this month I have an update on our DDoS Monitor & Firewall modules. We also are ready to harvest updated 8.3 Documentation, which can educate you on our new features. And finally, we are excited to spotlight our Installation Process, which you can take advantage of for any new NetEqualizer or trade-in!

And remember, we are now on Twitter! You can follow us @NetEqualizer.

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly at art@apconnections.net. I would love to hear from you!

Spotlight on: The NetEqualizer Installation Process

We recently added a process for all new and trade-in NetEqualizer sales that we are very excited about – the NetEqualizer Installation Process!

This process assigns you an Installation Engineer at the time of sale or trade-in. The sole purpose of the Installation Engineer is to ensure that you get your NetEqualizer set up correctly and that any questions you might have are answered.

We can be as involved or hands-off as you would like us to be.

What we can do for you:

Review a Diagnostic:
Send us a diagnostic of your recently set up NetEqualizer so that our Support Team can analyze it for misconfigurations or other problems.

Review Traffic Limit settings:
Many customers want to use Pools, Hard Limits, and Priority Hosts. We can help by reviewing your traffic limiting strategy and providing best practice recommendations.

Review your install over a WebEx:
We can schedule a time to go over your system using the WebEx screen-sharing utility. During this time we can look at live traffic, review settings, or answer any other questions that might come up.

Connect remotely to your NetEqualizer:
If your NetEqualizer is available remotely, we would be happy to log in and do any required support tasks or settings adjustments.

Answer questions via phone or email:
If you just have a quick question regarding our product, feel free to email our Support Team or your Installation Engineer any time!

How you benefit:

There are many benefits that this service provides to both technical and non-technical customers. For example:

  • You can proactively prevent problems by letting us review your setup for potential issues.
  • You can optimize your NetEqualizer for your network so that your users have a great experience online. Every environment is different and we can help with the most efficient settings.
  • You can learn more about our product, technology, and features. This will allow you to more effectively administer the device.

The NetEqualizer Installation Process is free to anyone who purchases a new NetEqualizer or trades in an old unit for a new one.

What if I just need to learn about the latest NetEqualizer releases?

You need a Tech Refresh! All customers that have valid NetEqualizer Software and Support (NSS) are eligible for additional training and help, via our Technical Refresh. Contact our Support Team to schedule a Tech Refresh today.

To find out more about our new Installation Process, contact us!



8.3 Quick Start Guide –
Viewing Traffic

Earlier this month, we enhanced our Quick Start Guide to talk in more detail about how to view traffic going through the NetEqualizer using our reporting tool (Dynamic RTR).

Here is a preview of what we added to the Quick Start Guide. To check out all the changes, see the full version of the guide here (starting on page 12).


View Current Traffic
Use the Active Connections menu and sub-menus to look at what is happening on the NetEqualizer right now.

Seeing traffic successfully pass through your device after the initial setup is a great sign that things are working properly!

View Historical Traffic
But, once that is up and running, you’ll want to set up reporting so that you can see what’s been happening on your network historically.

In order to get the most from the NetEqualizer reporting tool, you’ll want to follow these steps:

1) Start RTR
This is required for tracking data historically.

2) Add IPs to Track
In RTR, add the IP addresses you want to save traffic history for under Manage Traffic History -> Manage Tracked IPs. Most of the time, this will be all your local subnets (see the sketch after step 3).

3) View History
Use the Traffic History menu and sub-menus to look at what has happened on the NetEqualizer in the past.
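If you like to double-check things outside of the GUI, here is a minimal Python sketch, assuming made-up subnet values, that confirms whether a given IP falls inside the subnets you plan to track. It is purely illustrative and has nothing to do with RTR's own implementation.

```python
# Illustrative only: a quick way to sanity-check the local subnets you would
# add under Manage Traffic History -> Manage Tracked IPs. The example subnets
# are placeholders -- substitute your own address ranges.
import ipaddress

local_subnets = [
    ipaddress.ip_network("192.168.1.0/24"),   # example office LAN
    ipaddress.ip_network("10.10.0.0/16"),     # example wireless range
]

def is_tracked(ip: str) -> bool:
    """Return True if an IP falls inside one of the tracked subnets."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in local_subnets)

print(is_tracked("192.168.1.42"))   # True  -> history would be kept
print(is_tracked("8.8.8.8"))        # False -> remote host, not tracked
```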

If you’d like a Tech Refresh to walk through any of the reporting features, including the enhanced ability to view traffic, and are current on NetEqualizer Software and Support (NSS), contact us today!



DDoS Firewall Expanded –
Notes from the Field

As a special bonus, we found out during implementation of our DDoS Firewall that we can also program our firewall scripts to identify an internal virus or a hijacked computer.

If you are interested in more visibility in detecting an outside attack or virus-laden computer within your network, feel free to contact us for a quick consulting session, and we’ll see if we can customize a firewall and notification system for you!

The DDoS Firewall is an add-on module to the NetEqualizer. Please contact us to learn about pricing for your environment.



8.3 User Guide Now Available!

We have talked a lot in past newsletters about the 8.3 Release and all the new and exciting features we’ve added.

Starting today, all of those new features are described in detail in our 8.3 User Guide! This document is a great resource for ensuring RTR is set up correctly and for answering any questions you might have.

Learn more about these exciting new features:

1) Top Talkers Report – this has been one of the most requested graphs and was a popular feature of our previous reporting tool, ntop. You can use this feature to see which IP addresses have used the most bandwidth over time.


2) General Penalty Report – we are bringing this one back from the first version of RTR! You can see both IPs that are currently being penalized, as well as a historical count of penalties that have occurred over time.


3) Connection Count Report – NetEqualizer controls P2P traffic by using connection count limits on IP addresses. However, figuring out what limit to set for your network depends on how it’s used. You can use the new Connection Count Report to see how many connections individual IP addresses have, and thus set your connection limit to the appropriate level.
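To make the idea concrete, here is a hypothetical sketch of what the report measures: tally concurrent connections per internal IP and compare each host against a candidate limit. The connection data, ports, and limit below are invented for illustration and are not NetEqualizer internals.

```python
# Hypothetical sketch: tally concurrent connections per internal IP from a
# connection snapshot, then flag hosts that exceed a candidate limit.
from collections import Counter

# (internal_ip, remote_ip, remote_port) tuples -- an invented snapshot
connections = [
    ("192.168.1.10", "203.0.113.5", 443),
    ("192.168.1.10", "198.51.100.7", 6881),   # typical BitTorrent port
    ("192.168.1.10", "198.51.100.8", 6881),
    ("192.168.1.25", "203.0.113.9", 80),
]

CANDIDATE_LIMIT = 2  # tune after reviewing the report for your own network

counts = Counter(ip for ip, _, _ in connections)
for ip, count in counts.most_common():
    status = "OVER limit" if count > CANDIDATE_LIMIT else "ok"
    print(f"{ip}: {count} connections ({status})")
```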


You can read more about all of the features of the 8.3 Release here, in our updated User Guide. If you have any questions, contact us!



Best Of The Blog

Death to Deep Packet Inspection?

By Art Reisman – CTO – APconnections

A few weeks ago, I wrote an article on how I was able to watch YouTube while on a United flight, bypassing their layer 7 filtering techniques. Following up today, I was not surprised to see a few other articles on the subject popping up recently…

Photo Of The Month
Bobcat Caught on Wildlife Cam
The bobcat first appeared nearly 1.8 million years ago. With 12 subspecies, it ranges from southern Canada to central Mexico, including much of the United States. This one was recently captured on a staff member's wildlife camera.

Developing Technology to Detect a Network Hacker


Editor's note: Updated on Feb 1st, 2012. Our new product, NetGladiator, has been released. You can learn more about it on the NetGladiator website at www.netgladiator.net or by calling us at 303.997.1300 x123.

In a few weeks we will be releasing a product to automatically detect and prevent a web application hacker from breaking into a private enterprise. What follows are the details of how this product was born.  If you are currently seeking or researching intrusion detection & prevention technology, you will find the following quite useful.

Like many technology innovations, our solution resulted from the timely intersection of two technologies.

Technology 1: About one year ago we started working with a consultant in our local tech community to do some programming work on a minor feature in our NetEqualizer product line. Fiddler on the Root is the name of their company, and they specialize in ethical hacking. Ethical hacking is the process of deliberately hacking into a high-profile client company with the intention of exposing its weaknesses. The key expertise that they provided was a detailed knowledge of how to hack into a network or website.

Technology 2: Our NetEqualizer technology is well known for providing state-of-the-art bandwidth control. While working with Fiddler on the Root, we realized our toolset could be reconfigured to spot, and thwart, unwanted entry into a network. A key piece of the puzzle would be our long-forgotten Deep Packet Inspection (DPI) technology. DPI is the frowned-upon practice of looking inside data packets traversing the Internet.

An ironic twist to this new product journey was that, due to the privacy controversy, as well as finding a better way to shape bandwidth, we removed all of our DPI methodology from our core bandwidth shaping product four years ago.  Just like with any weapon, there are appropriate uses for DPI. Over a lunch conversation one day, we realized that using DPI to prevent a hacker intrusion was a legitimate use of DPI technology. Preventing an attack is much different from a public ISP scanning and censoring customer data.

So how did we merge these technologies to create a unique heuristics-based IPS system?

Before I answer that question, perhaps you are wondering whether revealing our techniques might provide a potential hacker or competitor with inside secrets? More on this later…

The key to using DPI to prevent an intrusion (hack) revolves around 3 key facts:

1) A hacker MUST try to enter your enterprise by exploiting weaknesses in your normal entry points.

2) One of those normal entry points is a web page, and everybody has one. After all, if you had no publicly available data there would be no reason to be attached to the Internet.

3) By using DPI technology to monitor incoming requests and looking for abnormalities, we can now reliably spot unwanted intrusion attempts.

When we met with Fiddler on the Root, we realized that a normal entry by a customer and a probing entry by a hacker are radically different. A hacker attempts things that no normal visitor could even possibly stumble into. In our new solution we have directed our DPI technology to watch for abnormal entry intrusion attempts. This involved months of observing a group of professional hackers and then developing a set of profiles which clearly distinguish them from a friendly user.
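As a rough illustration of what watching for "abnormal entry" attempts can look like, the sketch below matches incoming request paths against a few generic probe signatures. The patterns are examples chosen for the demonstration only; they are not the actual NetGladiator profiles.

```python
# Toy illustration of profiling requests for abnormal entry attempts.
# The patterns below are generic probe signatures invented for the example.
import re

PROBE_PATTERNS = [
    re.compile(r"\.\./"),                 # directory traversal
    re.compile(r"(?i)union\s+select"),    # SQL injection probe
    re.compile(r"(?i)<script"),           # reflected XSS probe
    re.compile(r"/etc/passwd"),           # file disclosure attempt
]

def looks_like_probe(request_path: str) -> bool:
    """Return True if the request path matches a known probing pattern."""
    return any(p.search(request_path) for p in PROBE_PATTERNS)

print(looks_like_probe("/products?id=42"))                    # False
print(looks_like_probe("/products?id=42 UNION SELECT pass"))  # True
```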

What other innovations are involved in a heuristics-based Intrusion Prevention System (IPS)?

Spotting the hacker pattern with DPI was only part of a complete system. We also had to make sure we did not get any false positives – the case where normal activity might accidentally be flagged as an intrusion, which obviously would be unacceptable. In our test lab we have a series of computers that act like users searching the Internet; the only difference is that we can ramp these robot users up to hyper-speed so that they access millions of pages over a short period of time. We then measure the "false positive" rate from our simulation and ensure that our false positive rate on intrusion detection is below 0.001 percent.
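The false-positive measurement itself is simple arithmetic: flagged benign requests divided by total benign requests, expressed as a percentage. A minimal sketch, assuming a stand-in detector and synthetic request paths, might look like this:

```python
# Illustrative false-positive check: replay a large batch of benign request
# paths through a detector and confirm the flag rate stays below 0.001%.
def looks_like_probe(path: str) -> bool:
    # Stand-in detector for the demo; in practice you would plug in the
    # pattern-matching sketch above or your real detection logic.
    return "union select" in path.lower() or "../" in path

benign_requests = [f"/catalog/item/{i}" for i in range(1_000_000)]
false_positives = sum(looks_like_probe(p) for p in benign_requests)
rate_percent = 100 * false_positives / len(benign_requests)

print(f"false positive rate: {rate_percent:.5f}%")
assert rate_percent < 0.001, "detector flags too much normal traffic"
```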

Our solution, NetGladiator, is different from other IPS appliances. We are not an "all-in-one" solution, which can be rendered useless by alerting you thousands of times a day, blocking legitimate requests, and breaking web functionality. We do one thing very well – we catch and stop hackers during their information discovery process, keeping your web applications secure. NetGladiator is custom-configured for your environment, alerting you on meaningful attempts without false positive alerts.

We also had to dig into our expertise in real-time optimization. Although that sounds like marketing propaganda to impress somebody, we can break that statement down into something concrete.

When doing DPI, you must look at and analyze every data stream and packet coming into your enterprise; skipping something might lead to a security breach. Looking at data and analyzing it requires quite a bit more CPU power than just moving it along a network. Many intrusion detection systems are afterthoughts to standard routers and switches. These devices were originally not designed to do computing-intensive heuristics on data, and doing so may slow your network down to a crawl, a common complaint with lower-end, affordable security products. We did not want to force our customers to make that trade-off. Our technology uses a series of processors embedded in our equipment, all working in unison to analyze each packet of Internet data without causing any latency. Although we did not invent the idea of using parallel processing for analysis of data, we are the only product in our price range able to do this.
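For readers who want a feel for the idea of analyzing packets in parallel, here is a minimal Python sketch that fans a batch of captured packets out across worker processes. It only illustrates the concept; it is not the embedded multi-processor design described above, and the "heuristic" is a placeholder.

```python
# Rough sketch of parallel packet analysis: fan captured packets out across
# worker processes so inspection keeps pace with traffic. Conceptual only.
from concurrent.futures import ProcessPoolExecutor

def analyze(packet: bytes) -> bool:
    """Placeholder heuristic: flag packets containing a probe signature."""
    return b"UNION SELECT" in packet.upper()

def main() -> None:
    packets = [b"GET /index.html", b"GET /?id=1 UNION SELECT *"] * 1000
    with ProcessPoolExecutor() as pool:          # one worker per CPU core
        flags = list(pool.map(analyze, packets, chunksize=100))
    print(f"flagged {sum(flags)} of {len(packets)} packets")

if __name__ == "__main__":   # guard required for process pools on some platforms
    main()
```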

How did we validate and test our IPS solution?

1) We have been putting our systems in front of beta test sites and asking white knights to try to hack into them.

2) We have been running our technology in front of some of our own massive web crawlers. Our crawlers do not attempt anything abnormal but can push through millions of sites and web pages. This is how we test for false positives: making sure we do not block a web crawler that is NOT attempting anything abnormal.

Back to the question, does divulging our methodology render it easier to breach?

The holes that hackers exploit are relatively consistent – in other words, there really is only a finite number of exploits that hackers use. They can either choose to exploit these holes or not, and if they attempt to exploit a hole they will be spotted by our DPI. Hence, announcing that we are protecting these holes is more likely to discourage a hacker, who will then look for another target.

What Does Net Privacy Have to Do with Bandwidth Shaping?


I definitely understand the need for privacy. Obviously, if I was doing something nefarious, I wouldn’t want it known, but that’s not my reason. Day in and day out, measures are taken to maintain my privacy in more ways than I probably even realize. You’re likely the same way.

For example, to avoid unwanted telephone and mail solicitations, you don’t advertise your phone numbers or give out your address. When you buy something with your credit card, you usually don’t think twice about your card number being blocked out on the receipt. If you go to the pharmacist, you take it for granted that the next person in line has to be a certain distance behind so they can’t hear what prescription you’re picking up. The list goes on and on. For me personally, I’m sure there are dozens, if not hundreds, of good examples why I appreciate privacy in my life. This is true in my daily routines as well as in my experiences online.

The topic of Internet privacy has been raging for years. However, the Internet still remains a hotbed for criminal activity and misuse of personal information. Email addresses are valued commodities sold to spammers. Search companies have dedicated countless bytes of storage to every search term entered and the IP address it came from. Websites place tracking cookies on your system so they can learn more about your Web travels, habits, likes, dislikes, etc. Forensically, you can tell a lot about a person from their online activities. To be honest, it's a little creepy.

Maybe you think this is much ado about nothing. Why should you care? Well, you may recall that less than four years ago, AOL accidentally released around 20 million search keywords from over 650,000 users. Now, those 650,000 users and their searches will exist forever in cyberspace. Could it happen again? Of course. Why wouldn't it happen again, since all it takes is a laptop packed with data walking out the door?

Internet privacy is an important topic, and as a result, technology is becoming more and more available to help people protect information they want to keep confidential. And that’s a good thing. But what does this have to do with bandwidth management? In short, a lot (no pun intended)!

Many bandwidth management products (from companies like Blue Coat, Allot, and Exinda, for example) intentionally work at the application level. These products are commonly referred to as Layer 7 or Deep Packet Inspection tools. Their mission is to allocate bandwidth specifically by what you're doing on the Internet. They want to determine how much bandwidth you're allowed for YouTube, Netflix, Internet games, Facebook, eBay, Amazon, etc. They need to know what you're doing so they can do their job.

In terms of this article, whether you're philosophically adamant about net privacy (like one of the inventors of the Internet), or couldn't care less, is really not important. The question is, what happens to an application-managed approach when people take additional steps to protect their own privacy?

For legitimate reasons, more and more people will be hiding their IPs, encrypting, tunneling, or otherwise disguising their activities and taking privacy into their own hands. As privacy technology becomes more affordable and simple, it will become more prevalent. This is a mega-trend, and it will create problems for those management tools that use this kind of information to enforce policies.

However, alternatives to these application-level products do exist, such as "fairness-based" bandwidth management. Fairness-based bandwidth management, like the NetEqualizer, is the only 100% neutral approach; it ultimately provides a more privacy-friendly experience for Internet users and a more effective solution for administrators when personal privacy protection technology is in place. Fairness is the idea of managing bandwidth by how much you can use, not by what you're doing. When you manage bandwidth by fairness instead of activity, not only are you supporting a neutral, private Internet, but you're also able to address the critical tasks of bandwidth allocation, control, and quality of service.
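As a conceptual sketch of fairness-based shaping (not the actual NetEqualizer algorithm), the example below checks whether a link is congested and, if so, flags the flows consuming far more than an equal share, all without ever looking at payload. The capacities, flow rates, and thresholds are invented for the illustration.

```python
# Conceptual sketch of fairness-based shaping: during congestion, find the
# flows well above an equal share of the pipe and mark them for a penalty.
# No payload inspection is involved. All numbers are hypothetical.
LINK_CAPACITY_KBPS = 8_000
CONGESTION_RATIO = 0.85        # treat the link as congested above 85% load

flows = {  # flow -> current rate in kbps (invented snapshot)
    "192.168.1.10:6881": 6_200,   # heavy peer-to-peer style flow
    "192.168.1.25:443": 900,
    "192.168.1.30:5060": 80,      # VoIP, naturally small
}

total = sum(flows.values())
if total > CONGESTION_RATIO * LINK_CAPACITY_KBPS:
    fair_share = LINK_CAPACITY_KBPS / len(flows)
    for flow, rate in flows.items():
        if rate > 2 * fair_share:  # well above an equal share of the pipe
            print(f"penalize {flow}: {rate} kbps vs fair share {fair_share:.0f} kbps")
```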

What Is Deep Packet Inspection and Why the Controversy?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.

Article Updated March 2012

As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.

Exactly what is deep packet inspection?

All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.

The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.

When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).

Products sold that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data are packet shaping, layer-7 traffic shaping, etc.
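To see the header/payload split in concrete terms, here is a minimal Python sketch that pulls the source and destination addresses out of a hand-built IPv4 packet and treats everything past the header as payload. The packet bytes are fabricated for the example; shallow inspection stops at the addresses, while DPI reads on into the payload.

```python
# Minimal sketch of the address/payload split described above.
import socket
import struct

def split_ipv4(packet: bytes):
    """Return (source address, destination address, payload bytes)."""
    version_ihl = packet[0]
    header_len = (version_ihl & 0x0F) * 4          # IHL is in 32-bit words
    src, dst = struct.unpack("!4s4s", packet[12:20])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst), packet[header_len:]

# 20-byte header (length/checksum left at 0 for brevity) plus a toy payload
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 0, 0, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.1"),
                     socket.inet_aton("198.51.100.2"))
packet = header + b"Subject: lunch on Friday?"

src, dst, payload = split_ipv4(packet)
print(src, "->", dst)        # what a router needs to see
print(payload)               # what a DPI device reads as well
```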

How is deep packet inspection related to net neutrality?

Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.

Why do some Internet providers use deep packet inspection devices?

There are several reasons:

1) Targeted advertising – If a provider knows what you are reading, they can display content advertising on the pages they control, such as your login screen or e-mail account.

2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem as less desirable such as Bittorrent and other forms of peer-to-peer. Bittorrent traffic can overwhelm a network with volume. By detecting and redirecting the Bittorrent traffic, or slowing it down, a provider can alleviate congestion.

3) Block offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.

4) Government spying — In the case of Iran (and to some extent China), DPI is used to keep tabs on the local population.

When is it appropriate to use deep packet inspection?

1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.

2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.

3) Intrusion detection and prevention – It is one thing to act as an ISP and eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. For example, in a private home it is within your rights to look through your peephole and not let shady characters in. In a private business, it is a good idea to use deep packet inspection to block unwanted intruders from your network. Blocking bad guys before they break into and damage your network is perfectly acceptable.

4) Spam filtering – Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don't read it, but computer scanners do), their motives are understood. What consumers may not realize is that their email provider is also reading everything they do in order to serve targeted advertising.

Does content filtering use Deep Packet Inspection?

For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as routers need to look them up anyway. We have only encountered content filters at private institutions that are within their rights to use them.

What about spam filtering, does that use Deep Packet Inspection?

Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering most people have opted in at one point or another, hence it is generally done with permission.

What is all the fuss about?

It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.

For example, this is an excerpt from a recent PC world article:

Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.

Paul Stephens, director of policy and advocacy for the Privacy Rights Clearinghouse, as quoted in the E-Commerce Times on November 14, 2008. Read the full article here.

Recently, Comcast had their hand slapped for re-directing Bittorrent traffic:

Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.

— Digital Daily, March 10, 2008. Read the full article here.

Later in 2008, the FCC came down hard on Comcast.

In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.

By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.

Wired.com, August 1, 2008. Read the full article here.

To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.

University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.

Wired.com, May 22, 2008. Read the full article here.

However, it looks like things are going the other way in the U.K., as Britain's Virgin Media has announced they are dumping net neutrality in favor of targeting BitTorrent.

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The Register, December 16, 2008. Read the full article here.

Canadian ISPs confess en masse to deep packet inspection in January 2009.

With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.

Tech Spot, January 21, 2009. Read the full article here.

In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.

In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.

Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.

The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.

7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.

Kyle Brady, July 27, 2009. Read the full article here.

[February 2011 Update] The Egyptian government uses DPI to filter elements of their Internet traffic, and this act in itself has become the news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering report on the company Naurus, which makes the DPI technology and sold it to the Egyptians.

While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Analyzing the cost of Layer 7 Packet Shaping


November, 2010

By Eli Riles

For most IT administrators, layer 7 packet shaping involves two actions.

Action 1: Inspecting and analyzing data to determine what types of traffic are on your network.

Action 2: Taking action by adjusting application flows on your network.

Without layer 7 visibility and actions, an administrator's job would degrade into a quagmire of random guesswork. Or would it?

Layer 7 monitoring and shaping is intuitively appealing, but it is a good idea to take a step back and examine the full life-cycle costs of your methodology.

In an ironic inverse correlation, we assert that costs increase with the complexity of the monitoring tool.

1) Obviously, the more detailed the reporting tool (layer 7), the more expensive its initial price tag.

2) The kicker comes with part two. The more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking, looking for optimal performance.

But is it fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work needed to adjust the network has been done, the associated adjustments can remain statically in place. However, in reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies with a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the advent of cheaper computing in the late 1980s. The function of a computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise with many of our customers is that they are stepping down from expensive complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just looking at the usage (a simple report).
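As a sketch of that kind of simple report, the example below uses invented per-user numbers: compute the average, and anyone sitting far above it stands out immediately. It is illustrative only and is not a NetEqualizer report.

```python
# Simple sketch of the "limited detail" report described above: list per-user
# usage and flag the handful of users far above the mean. Numbers are invented;
# any per-IP byte counter would do as input.
from statistics import mean, stdev

weekly_gb = {          # user -> GB transferred this week (hypothetical)
    "user01": 4.2, "user02": 3.8, "user03": 5.1, "user04": 4.7,
    "user05": 3.9, "user06": 4.4, "user07": 62.0, "user08": 4.0,
}

avg, spread = mean(weekly_gb.values()), stdev(weekly_gb.values())
for user, usage in sorted(weekly_gb.items(), key=lambda kv: -kv[1]):
    if usage > avg + 2 * spread:
        print(f"{user}: {usage} GB (well above the {avg:.1f} GB average)")
```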

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we'll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And, don't forget to take our poll.

List of monitoring tools compiled by Stanford

Top five free monitoring tools

Planetmy
Linux Tips
How to set up a monitor for free

Equalizing Compared to Application Shaping (Traditional Layer-7 “Deep Packet Inspection” Products)


Editor’s Note: (Updated with new material March 2012)  Since we first wrote this article, many customers have implemented the NetEqualizer not only to shape their Internet traffic, but also to shape their company WAN.  Additionally, concerns about DPI and loss of privacy have bubbled up. (Updated with new material September 2010)  Since we first published this article, “deep packet inspection”, also known as Application Shaping, has taken some serious industry hits with respect to US-based ISPs.   

==============================================================================================
Author’s Note: We often get asked how NetEqualizer compares to Packeteer (Bluecoat), NetEnforcer (Allot), Network Composer (Cymphonix), Exinda, and a plethora of other well-known companies that do Application Shaping (aka “packet shaping”, “deep packet inspection”, or “Layer-7” shaping). After several years of these questions, and after discussing different aspects of application shaping with former and current IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.
We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject with content to support the bullet chart was in order. If you want to skip the details, see our Summary Table at the end of this article.

However, if you’re looking to really understand the differences, and to have the question answered as objectively as possible, please take a few minutes to read on…
==============================================================================================

How NetEqualizer compares to Bluecoat, Allot, Cymphonix, & Exinda

In the following sections, we will cover specifically when and where Application Shaping is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish.  We will also discuss how Equalizing, NetEqualizer’s behavior-based shaping, fits into the landscape of application shaping, and how in many cases Equalizing is a much better alternative.

Download the full article (PDF)  Equalizing Compared To Application Shaping White Paper


The True Price of Bandwidth Monitoring


By Art Reisman


For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at monitoring your Internet has two parts: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. In an ironic inverse correlation, we assert that costs increase with the complexity of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two. The more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking, looking for optimal performance.

But is it fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work needed to adjust the network has been done, the associated adjustments can remain statically in place. However, in reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies with a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the advent of cheaper computing in the late 1980s. The function of a computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise with many of our customers is that they are stepping down from expensive complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just looking at the usage (a simple report).

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we'll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And, don't forget to take our poll.

List of monitoring tools compiled by Stanford

Planetmy
Linux Tips
How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Deep Packet Inspection Abuse In Iran Raises Questions About DPI Worldwide


Over the past few years, we at APconnections have made our feelings about Deep Packet Inspection clear, completely abandoning the practice in our NetEqualizer technology more than two years ago. While there may be times that DPI is necessary and appropriate, its use in many cases can threaten user privacy and the open nature of the Internet. And, in extreme cases, DPI can even be used to threaten freedom of speech and expression. As we mentioned in a previous article, this is currently taking place in Iran.

Although these extreme invasions of privacy are most likely not occurring in the United States, their existence in Iran is bringing increasing attention to the slippery slope that is Deep Packet Inspection. A July 10 Huffington Post article reads:

“Before DPI becomes more widely deployed around the world and at home, the U.S. government ought to establish legitimate criteria for authorizing the use of such control and surveillance technologies. The harm to privacy and the power to control the Internet are so disturbing that the threshold for using DPI must be very high. The use of DPI for commercial purposes would need to meet this high bar. But it is not clear that there is any commercial purpose that outweighs the potential harm to consumers and democracy.”

This potential harm to the privacy and rights of consumers was a major factor behind our decision to discontinue the use of DPI in any of our technology and invest in alternative means for network optimization. We hope that the ongoing controversy will be reason for others to do the same.

Google Questions Popular Bandwidth Shaping Myth


At this week’s Canadian Radio-Television and Telecommunications Commission Internet traffic hearing, Google’s Canada Policy Counsel, Jacob Glick, raised a point that we’ve been arguing for the last few years. Glick said:

“We urge you to reject as false the choice between debilitating network congestion and application-based discrimination….This is a false dichotomy. The evidence is, and experience in Canada and in the U.S. already shows, that carriers can manage their networks, reduce congestion and protect the open Internet, all at the same time.”

While we agree with Glick to a certain extent, we differ on the alternative proposed by hearing participants: simply increase bandwidth. This is not to say that increasing bandwidth isn't the appropriate solution in certain circumstances, but questioning the validity of a dichotomy with an equally narrow third alternative doesn't exactly expand the industry's options, especially when increasing bandwidth isn't always a viable solution for some ISPs.

The downsides of application-based shaping are one of the main reasons behind NetEqualizer’s reliance on behavior-based shaping. Therefore, while Glick is right that the above-mentioned dichotomy doesn’t explore all of the available options, it’s important to realize that the goals being promoted at the hearing are not solely achieved through increased bandwidth.

For more on how the NetEqualizer fits into the ongoing debate, see our past article, NetEqualizer Offers Net Neutrality, User Privacy Compromise.

Do We Need an Internet User Bill of Rights?


The Computers, Freedom and Privacy conference wraps up today in Washington, D.C., with conference participants having paid significant attention to the on-going debates concerning ISPs, Deep Packet Inspection and net neutrality.  Over the past several days, representatives from the various interested parties have made their cases for and against certain measures pertaining to user privacy. As was expected, demands for the protection of user privacy often came into conflict with ISPs’ advertising strategies and their defense of their overall network quality.

At the center of this debate is the issue of transparency and what ISPs are actually telling customers. In many cases, apparent intrusions into user privacy are qualified by what's stated in the "fine print" of customer contracts. If these contracts notify customers that their Internet activity and personal information may be used for advertising or other purposes, then it can't really be said that the customer's privacy has been invaded. But, the question is, how many users actually read their contracts, and furthermore, how many people actually understand the fine print? It would be interesting to see what percentage of Internet users could define deep packet inspection. Probably not very many.

This situation is reminiscent of many others involving service contracts, but one particular timely example comes to mind — credit cards. Last month, the Senate passed a credit card “bill of rights,” through which consumers would be both better protected and better informed. Of the latter, President Obama stated, “you should not have to worry that when you sign up for a credit card, you’re signing away all your rights. You shouldn’t need a magnifying glass or a law degree to read the fine print that sometimes doesn’t even appear to be written in English.”

Ultimately, the same should be true for any service contracts, but especially if private information is at stake, as is the case with the Internet privacy debate. Therefore, while it’s a step in the right direction to include potential user privacy issues in service contracts, it should not be done only with the intention of preventing potential legal backlash, but rather with the customer’s true understanding of the agreement in mind.

Editor’s Note: APconnections and NetEqualizer have long been a proponent of both transparency and the protection of user privacy, having devoted several years to developing technology that maintains network quality while respecting the privacy of Internet users.

When is Deep Packet Inspection a Good Thing?


Commentary

Update September 2011

It seems some shareholders of a company that overpromised layer 7 technology are not happy.

By Eli Riles

As many of our customers are aware, we publicly stated back in October 2008 that we officially had switched all of our bandwidth control solutions over to behavior-based shaping. Consequently, we also completely disavowed Deep Packet Inspection in a move that Ars Technica described as "vendor throws deep packet inspection under the bus."

In the last few weeks, there has been a barrage of attacks on Deep Packet Inspection, and then a volley of PR supporting it from those implementing the practice.

I had been sitting on an action item to write something in defense of DPI, and then this morning I came across a pro-DPI blog post in the New York Times. The following excerpt is in reference to using DPI to give priority to certain types of traffic such as gaming:

“Some customers will value what they see as low priority as high priority,” he said. I asked Mr. Scott what he thought about the approach of Plusnet, which lets consumers pay more if they want higher priority given to their game traffic and downloads. Surprisingly, he had no complaints.

“If you said to me, the consumer, ‘You can choose what applications to prioritize and which to deprioritize, and, oh, by the way, prices will change as a result of how you do this,’ I don’t have a problem with that,” he said.

The key to this excerpt is the phrase, "IF YOU ASK THE CONSUMER WHAT THEY WANT." This implies permission. If you use DPI as an opt-in, above-board technology, then obviously there is nothing wrong with it. The threat to privacy is only an issue if you use DPI without consumer knowledge. It should not be up to the provider to decide the appropriate use of DPI, regardless of good intent.

The quickest way to deflate the objections of the DPI opposition is to allow consumers to choose. If you subscribe to a provider that allows you to have higher priority for certain applications, and it is in their literature, then by proxy you have granted permission to monitor your traffic. I can still see Net Neutrality purists being unhappy with any differential service, but realistically I think there is a middle ground.

I read an article the other day where a defender of DPI practices (sorry no reference) pointed out how spam filtering is widely accepted and must use DPI techniques to be effective. The part the defender again failed to highlight was that most spam filtering is done as an opt-in with permission. For example, the last time I checked my Gmail account, it gave the option to turn the spam filter off.

In sum, we are fully in support of DPI technology when the customer is made aware of its use and has a choice to opt out. However, any use of DPI done unknowingly and behind the scenes is bound to create controversy and may even be illegal. The exception would be a court order for a legal wiretap. Therefore, the Deep Packet Inspection debate isn’t necessarily a black and white case of two mutually exclusive extremes of right and wrong. If done candidly, DPI can be beneficial to both the Internet user and provider.

See also what is deep packet inspection.

Eli Riles, a consultant for APconnections (Netequalizer), is a retired insurance agent from New York. He is a self-taught expert in network infrastructure. He spends half the year traveling and visiting remote corners of the earth. The other half of the year you’ll find him in his computer labs testing and tinkering with the latest network technology.

For questions or comments, please contact him at eliriles@yahoo.com.

World Wide Web Founder Denounces Deep Packet Inspection


Editor’s Note: This past week, we counted several vendors publishing articles touting how their deep packet inspection is the latest and best. And then there is this…

Berners-Lee says no to internet ‘snooping’

The inventor of the World Wide Web, Sir Tim Berners-Lee, has attacked deep packet inspection, a technique used to monitor traffic on the internet and other communications networks.

Speaking at a House of Lords event to mark the 20th anniversary of the invention of the World Wide Web, Berners-Lee said that deep packet inspection (DPI) was the electronic equivalent of opening people’s mail.

To continue reading, click here.

We can understand how DPI devices are attractive, as they do provide visibility into what is going on in your network. We also understand that the intent of most network administrators is to keep their network running smoothly by making tough calls on what types of traffic to allow on their wires. But, while DPI is perhaps not exactly the same as reading private mail, as Mr. Berners-Lee claims, where should one draw the line?

We personally believe that the DPI line is one that should be avoided, if at all possible. And, our behavior-based shaping allows you to shape traffic without looking at data. Therefore, effective network optimization doesn’t have to come at the expense of user privacy.

More Resistance for Deep Packet Inspection


Editor's note:

We come across stories from irate user groups every day. It seems the more the public knows about deep packet inspection practices, the less likely they become. In Canada, it looks like the resistance is gaining some heavy hitters.

Google, Amazon, others want CRTC to ban internet interference

Last Updated: Tuesday, February 24, 2009 | 4:53 PM ET

A coalition of more than 70 technology companies, including internet search leader Google, online retailer Amazon and voice over internet provider Skype, is calling on the CRTC to ban internet service providers from “traffic shaping,” or using technology that favours some applications over others.

In a submission filed Monday to the Canada Radio-television and Telecommunications Commission (CRTC) in advance of a July probe into the issue of internet traffic management, the Open Internet Coalition said traffic shaping network management “discourages investment in broadband networks, diminishes consumer choice, interferes with users’ freedom of expression, and inhibits innovation.”

Full Article

NetEqualizer rolling out URL based traffic shaping.


February 10th, 2009

Lafayette Colorado

APconnections, makers of the popular NetEqualizer line of bandwidth control and traffic shaping hardware appliances, today announced a major feature enhancement to their product line: URL-based shaping.

In our recent newsletter we asked our customers if they were in need of URL based shaping and the feedback was a resounding YES.

Using our current release, administrators have the ability to shape their network traffic by IP address, MAC address, VLAN, or subnet. With the addition of URL shaping, our product line will meet the demands of co-location operators.

A distinction we need to make clear is that URL-based shaping is not related to DPI or content-based shaping. URLs are public information as they travel across the Internet, and are basically a human-readable mapping of an IP address; therefore, URL-based shaping does not require opening private data for inspection.
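To illustrate the distinction, the hypothetical sketch below makes a shaping decision from nothing more than the request line and Host header of an HTTP request, ignoring the message body entirely. The host list and rate cap are made up; this is not the actual NetEqualizer feature, just a demonstration of the idea.

```python
# Illustrative sketch: a URL-based shaping rule only needs the request line
# and Host header -- information already exposed for routing -- and never
# reads the message body. Host names and caps are invented for the example.
RATE_LIMITED_HOSTS = {"video.example.com": 500}   # host -> kbps cap (made up)

def shaping_rule(raw_request: bytes):
    """Return a kbps cap for the request's host, or None for no shaping."""
    headers = raw_request.split(b"\r\n\r\n", 1)[0]          # ignore any body
    for line in headers.split(b"\r\n")[1:]:
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            return RATE_LIMITED_HOSTS.get(host)
    return None

request = b"GET /clip.mp4 HTTP/1.1\r\nHost: video.example.com\r\n\r\n"
print(shaping_rule(request))   # 500 -> cap this flow at 500 kbps
```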

If you are interested in details regarding this feature please contact APconnections directly.

More on Deep Packet Inspection and the NebuAd case


By Art Reisman

CTO of APconnections, makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer


Editor's note:

This latest article published in DSL Reports reminds me of the scenario where a bunch of friends (not me) are smoking a joint in a car when the police pull them over, and the guy holding the joint takes the fall for everybody. I don't want to see any of these ISPs get hammered, as I am sure they are good companies.

It seems like this case should be easily settled. Even if privacy laws were violated, the damage was perhaps a few unwanted ads that popped up in a browser, not some form of extortion of private records. In any case, the message should be clear to any ISP: to be safe, don't implement DPI of any kind. And yet, for every NebuAd privacy lawsuit article I come across, I see at least two or three press releases from vendors announcing major deals for DPI equipment.

Full original article link from DSL Reports

ISPs Play Dumb In NebuAD Lawsuit
Claim they were ‘passive participants’ in user data sales…
08:54AM Thursday Feb 05 2009 by Karl Bode

The broadband providers argue that they can’t be sued for violating federal or state privacy laws if they didn’t intercept any subscribers. In court papers filed late last week, they argue that NebuAd alone allegedly intercepted traffic, while they were merely passive participants in the plan.

By “passive participants,” they mean they took (or planned to take) money from NebuAD in exchange for allowing NebuAD to place deep packet inspection hardware on their networks. That hardware collected all browsing activity for all users, including what pages were visited, and how long each user stayed there. It’s true many of the carriers were rather passive in failing to inform customers these trials were occurring — several simply tried to slip this through fine print in their terms of service or acceptable use policies.
