Five Tips to Control Encrypted Traffic on Your Network


Editor's Note:

Our intent with these tips is to illustrate some of the impracticalities of "brute force" shaping of encrypted traffic, and to offer some alternatives.

1) Insert pre-encryption software at each end node on your network.

This technique requires a custom app installed on the iPhones, iPads, and laptops of end users. The app relays all data to a centralized shaping device in unencrypted form.

  •   This assumes that a centralized IT department has the authority to require special software on every device using the network. It is not feasible in environments where end users freely bring their own equipment.


2) Use a sniffer traffic shaper that can decrypt the traffic on the fly.

  • The older 40-bit encryption keys could be cracked by a single computer in about a week; the newer 128-bit keys would require that computer to run longer than the age of the Universe.
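The arithmetic behind this claim is easy to check; the only assumption is the one-week figure for exhausting a 40-bit key space:

```python
# Back-of-the-envelope comparison of brute-force search times for
# 40-bit vs. 128-bit keys. The one-week figure for 40 bits is the
# assumption; everything else follows from the size of the key space.

SECONDS_PER_WEEK = 7 * 24 * 3600
AGE_OF_UNIVERSE_SECONDS = 13.8e9 * 365.25 * 24 * 3600  # ~13.8 billion years

# Assume a machine that exhausts a 40-bit key space in one week:
keys_per_second = 2**40 / SECONDS_PER_WEEK

# Time for the same machine to exhaust a 128-bit key space:
seconds_for_128 = 2**128 / keys_per_second
ratio_to_universe_age = seconds_for_128 / AGE_OF_UNIVERSE_SECONDS

print(f"Keys tried per second: {keys_per_second:.3g}")
print(f"Years to exhaust 128-bit space: {seconds_for_128 / (365.25 * 24 * 3600):.3g}")
print(f"That is {ratio_to_universe_age:.3g} times the age of the universe")
```

Each extra bit doubles the search time, so the gap between 40 and 128 bits is a factor of 2^88, which is why "run longer than the age of the Universe" is not hyperbole.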

3) Simply drop encrypted traffic, forcing users to turn off SSL in their browsers.   Note: a traffic shaper can spot encrypted traffic; it just can't tell you specifically what it is by content.

  • It seems rather draconian to block secure private transmissions; however, the need to encrypt traffic over the Internet is vastly overblown. It is actually extremely unlikely for personal information or a credit card number to be stolen in transit, but that is another subject.
  • This is really not practical where you have autonomous or public users; it will cause confusion at best, a revolt at worst.

4) Perhaps re-think what you are trying to accomplish.   There are more heuristic approaches to managing traffic that are immune to encryption.  Please feel free to contact us for more details on a heuristic approach to shaping encrypted traffic.

5) Charge a premium for encrypted traffic.  This would be more practical than blocking it outright, and would perhaps offset some of the costs associated with the overuse of encrypted p2p traffic.

APconnections Celebrates New NetEqualizer Lite with Introductory Pricing


Editor’s Note:  This is a copy of a press release that went out on May 15th, 2012.  Enjoy!

Lafayette, Colorado – May 15, 2012 – APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is celebrating the expansion of its NetEqualizer Lite product line by offering special pricing for a limited time.

NetEqualizer's VP of Sales and Business Development, Joe D'Esopo, is excited to announce: "To make it easy for you to try the new NetEqualizer Lite, for a limited time we are offering the NetEqualizer Lite-10 at introductory pricing of just $999 for the unit, our Lite-20 at $1,100, and our Lite-50 at $1,400.  These are incredible deals for the value you will receive, which we believe is unmatched today in our industry."

We have upgraded our base technology for the NetEqualizer Lite, our entry-level bandwidth-shaping appliance.  Our new Lite retains the small form factor that sets it apart and makes it ideal for deployment in the field, but now has an enhanced CPU and memory. This enables us to include robust graphical reporting, as in our other product lines, and to support additional bandwidth license levels.

The Lite is geared towards smaller networks with less than 350 users, is available in three license levels, and is field-upgradable across them: our Lite-10 runs on networks up to 10Mbps and up to 150 users ($999), our Lite-20 (20Mbps and 200 users for $1,100), and Lite-50 (50Mbps and 350 users for $1,400).  See our NetEqualizer Price List for complete details.  One year renewable NetEqualizer Software & Support (NSS) and NetEqualizer Hardware Warranties (NHW) are offered.

Like all of our bandwidth shapers, the NetEqualizer Lite is a plug-n-play, low maintenance solution that is quick and easy to set-up, typically taking one hour or less.  QoS is implemented via behavior-based bandwidth shaping, “equalizing”, giving priority to latency-sensitive applications, such as VoIP, web browsing, chat and e-mail over large file downloads and video that can clog your Internet pipe.

About APconnections:  APconnections is based in Lafayette, Colorado, USA.  We released our first commercial offering in July 2003, and since then thousands of customers all over the world have put our products into service.  Today, our flexible and scalable solutions can be found in over 4,000 installations in many types of public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, and Internet providers on six (6) continents.  To learn more, contact us at sales@apconnections.net.

Contact: Sandy McGregor
Director, Marketing
APconnections, Inc.
303.997.1300
sandy@apconnections.net

Our Take on Network Instruments 5th Annual Network Global Study


Editor's Note: Network Instruments released its "Fifth Annual State of the Network Global Study" on March 13th, 2012. You can read the full study here. The results were based on responses from 163 network engineers, IT directors, and CIOs in North America, Asia, Europe, Africa, Australia, and South America. Responses were collected from October 22, 2011 to January 3, 2012.

What follows is our take (or my two cents) on the key findings around bandwidth management and bandwidth monitoring from the study.

Finding #1: Over the next two years, more than one-third of respondents expect bandwidth consumption to increase by more than 50%.

Part of me says "well, duh!" but only because we hear that from many of our customers. So I guess if you were an executive, far removed from the day-to-day, this would be an important thing to have pointed out to you. Basically, this is your wake-up call (if you are not already awake) to listen to your network admins who keep asking you to allocate funds to the network. Now is the time to make your case for more bandwidth to your CEO/President/head guru. Get together the budget and resources to build out your network in anticipation of this growth, so that you are not caught off guard. Because if you don't, someone else will do it for you.

Finding #2: 41% stated network and application delay issues took more than an hour to resolve.

You can and should certainly put monitoring on your network to be able to see and react to delays. However, another way to look at this, admittedly biased by my bandwidth-shaping background, is to get rid of the delays!

If you are still running an unshaped network, you are missing out on maximizing your existing resource. Think about how smoothly traffic flows on roads, because there are smoothing algorithms (traffic lights) and rules (speed limits) that dictate how traffic moves, hence “traffic shaping.” Now, imagine driving on roads without any shaping in place. What would you do when you got to a 4-way intersection? Whether you just hit the accelerator to speed through, or decided to stop and check out the other traffic probably depends on your risk-tolerance and aggression profile. And the result would be that you make it through OK (live) or get into an ugly crash (and possibly die).

Similarly, your network traffic, when unshaped, can live (getting through without delays) or die (getting stuck waiting in a queue) trying to get to its destination. Whether you look at deep packet inspection, rate limiting, equalizing, or a home-grown solution, you should definitely look into bandwidth shaping. Find a solution that makes sense to you, will solve your network delay issues, and gives you a good return-on-investment (ROI). That way, your Network Admins can spend less time trying to find out the source of the delay.

Finding #3: Video must be dealt with.

24% believe video traffic will consume more than half of all bandwidth in 12 months.
47% say implementing and measuring QoS for video is difficult.
49% have trouble allocating and monitoring bandwidth for video.

Again, no surprise if you have been anywhere near a network in the last 2 years. YouTube use has exploded and become the norm on both consumer and business networks. Add that to the use of video conferencing in the workplace to replace travel, and Netflix or Hulu to watch movies and TV, and you can see that video demand (and consumption) has risen sharply.

Unfortunately, there is no quick, easy fix to make sure that video runs smoothly on your network. However, a combination of solutions can help you to make video run better.

1) Get more bandwidth.

This is just a basic fact of life. If you are running a network of less than 10Mbps, you are going to have trouble with video unless you have only one user on your network. You need to look at your contention ratio and size your network appropriately.
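A quick back-of-the-envelope check makes the point; the per-stream bitrate and peak-concurrency figures below are assumptions you would replace with your own numbers:

```python
# Rough network-sizing check: can this pipe handle video for its users?
# The per-stream rate and concurrency factor are illustrative assumptions,
# not measurements.

link_mbps = 10          # total Internet pipe
users = 50              # users sharing it
hd_stream_mbps = 4      # assumed bitrate of one HD video stream
concurrency = 0.20      # assume 20% of users stream at peak

contention_ratio = users / link_mbps           # users per Mbps
peak_video_demand = users * concurrency * hd_stream_mbps

print(f"Contention ratio: {contention_ratio:.1f} users per Mbps")
print(f"Peak video demand: {peak_video_demand:.0f} Mbps on a {link_mbps} Mbps link")
if peak_video_demand > link_mbps:
    print("Undersized for video at peak -- expect trouble")
```

Even with modest assumptions, a 10Mbps pipe shared by 50 users is swamped the moment a handful of them press play.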

2) Cache static video content.

Caching is a good start, especially for static content such as YouTube videos. One caveat: do not expect caching to solve network congestion problems (read more about that here), as users will quickly consume any bandwidth that caching frees up. Caching helps most when a video has gone viral and everyone on your network is accessing it repeatedly.

3) Use bandwidth shaping to prioritize business-critical video streams (servers).

If you have a designated video-streaming server, you can define rules in your bandwidth shaper to prioritize this server. The risk of this strategy is that you could end up giving all your bandwidth to video; you can reduce the risk by rate capping the bandwidth portioned out to video.
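One common way to implement such a rate cap is a token bucket. The sketch below is a generic illustration (not the NetEqualizer's internals), and the 20 Mbps cap is an arbitrary example:

```python
import time

class RateCap:
    """Simple token-bucket rate cap: the video server's traffic gets
    priority, but is never allowed more than cap_bps of the pipe."""

    def __init__(self, cap_bps, burst_bytes=None):
        self.rate = cap_bps / 8.0                 # bytes per second
        self.burst = burst_bytes or self.rate     # allow ~1 second of burst
        self.tokens = self.burst
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        # Refill tokens for the time elapsed since the last packet.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True        # under the cap: forward with priority
        return False           # over the cap: queue or drop

# e.g. cap the prioritized video server at 20 Mbps
video_cap = RateCap(cap_bps=20_000_000)
```

The cap is what keeps the prioritized server from starving everything else: priority gets it to the front of the line, but the bucket limits how much of the line it can occupy.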

As I said, this is just my take on the findings. What do you see? Do you have a different take? Let us know!

QoS is a Matter of Sacrifice


Usually in the first few minutes of talking to a potential customer, one of their requests will be something like “I want to give QoS (Quality of Service) to Video”, or “I want to give Quality of Service to our Blackboard application.”

The point that is often overlooked by resellers pushing QoS solutions is that providing QoS for one type of traffic always involves taking bandwidth away from something else.

Network veterans understand this, but for those who are not down in the trenches, sometimes we must gently walk through a scenario.

Take the following typical exchange:

Customer: I want to give our customers access to Netflix and have that take priority over p2p.

NetEq Rep: How do you know that you have a p2p problem?

Customer: We caught a guy with Kazaa on his laptop last year, so we know they are out there.

NetEq rep (after plugging in a test system and doing some analysis): It looks like you have some scattered p2p users, but they account for only about 2 percent of your traffic load. Thirty percent of your peak traffic is video. If we give priority to all of your video, we will have to sacrifice something else: web browsing, chat, e-mail, Skype, and Internet radio. I know this seems like quite a bit, but there is nothing else to steal from. In order to give priority to video we must take bandwidth away from something else, and although you have p2p, stopping it will not free enough bandwidth to make a dent in your video appetite.

Customer (now frustrated by reality): Well, I guess I will just have to tell our clients they can't watch video all the time. I can't make web browsing slower to support video; that will just create new problems.

If you have an oversubscribed network, meaning too many people vying for limited Internet resources, when you implement any form of QoS, you will still end up with an oversubscribed network. QoS must rob Peter to pay Paul.

So when is QoS worthwhile?

QoS is a great idea if you understand who you are stealing from.

Here are some facts on using QoS to improve your Internet Connection:

Fact #1

If your QoS mechanism involves marking packets with special instructions (ToS bits) on how they should be treated, it will only work on links where you control both ends of the circuit and everything in between.

Fact #2

Most Internet congestion is caused by incoming traffic. For data originating at your facility, you can certainly have your local router give priority to it on its way out, but you can’t set QoS bits on traffic coming into your network (we assume from a third party). Regulating outgoing traffic with ToS bits will not have any effect on incoming traffic.

Fact #3

Your public Internet provider will not treat ToS bits with any form of priority (the exception would be a contracted MPLS-type network). Yes, they could, but if they did, everybody would game the system to gain an advantage, and the bits would lose their meaning anyway.

Fact #4

The next two facts address a common question: is QoS over the Internet possible? The answer is yes. QoS on an Internet link is possible. We have spent the better part of seven years practicing this art form. It is not rocket science, but it does require a philosophical shift in thinking to get your arms around.

We call it “equalizing,” or behavior-based shaping, and it involves monitoring incoming and outgoing streams on your Internet link. Priority or QoS is nothing more than favoring one stream’s packets over another stream’s packets. You can accomplish priority QoS on incoming streams by queuing (slowing down) one stream over another without relying on ToS bits.
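The decision logic can be sketched in a few lines; this is a toy illustration with made-up thresholds, not the actual NetEqualizer implementation:

```python
# Toy illustration of "equalizing": when the link nears saturation,
# queue (delay) the heaviest streams so that small, latency-sensitive
# streams flow freely. All thresholds here are made-up numbers.

LINK_CAPACITY_KBPS = 10_000
CONGESTION_TRIGGER = 0.85   # start penalizing at 85% utilization
HOG_SHARE = 0.25            # a stream using >25% of the pipe is a "hog"

def pick_penalty_targets(stream_rates_kbps):
    """stream_rates_kbps: {stream_id: current rate in kbps}.
    Returns the stream ids that should be queued (slowed down)."""
    total = sum(stream_rates_kbps.values())
    if total < CONGESTION_TRIGGER * LINK_CAPACITY_KBPS:
        return []   # no congestion: leave everything alone
    return [sid for sid, rate in stream_rates_kbps.items()
            if rate > HOG_SHARE * LINK_CAPACITY_KBPS]

streams = {"voip": 80, "web": 400, "download": 6500, "p2p": 2600}
print(pick_penalty_targets(streams))   # the two heavy flows get queued
```

Note that nothing here inspects packet contents or ToS bits: the only inputs are stream sizes and link utilization, which is what makes the approach workable on incoming Internet traffic.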

Fact #5

Surprisingly, behavior-based methods such as those used by our NetEqualizer do provide a level of QoS for VoIP on the public Internet. Although you can't tell the Internet to send your VoIP packets faster, most people don't realize that the problem with congested VoIP is that their VoIP packets are being crowded out by large downloads. Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a QoS scheme.

Please remember our initial point “providing QoS for one type of traffic always involves taking bandwidth away from something else,” and take these facts into consideration as you work on QoS for your network.

What Does Net Privacy Have to Do with Bandwidth Shaping?


I definitely understand the need for privacy. Obviously, if I was doing something nefarious, I wouldn’t want it known, but that’s not my reason. Day in and day out, measures are taken to maintain my privacy in more ways than I probably even realize. You’re likely the same way.

For example, to avoid unwanted telephone and mail solicitations, you don’t advertise your phone numbers or give out your address. When you buy something with your credit card, you usually don’t think twice about your card number being blocked out on the receipt. If you go to the pharmacist, you take it for granted that the next person in line has to be a certain distance behind so they can’t hear what prescription you’re picking up. The list goes on and on. For me personally, I’m sure there are dozens, if not hundreds, of good examples why I appreciate privacy in my life. This is true in my daily routines as well as in my experiences online.

The topic of Internet privacy has been raging for years. However, the Internet still remains a hotbed for criminal activity and misuse of personal information. Email addresses are valued commodities sold to spammers. Search companies have dedicated countless bytes of storage to logging every search term and the IP address behind it. Websites place tracking cookies on your system so they can learn more about your Web travels, habits, likes, dislikes, etc. Forensically, you can tell a lot about a person from their online activities. To be honest, it's a little creepy.

Maybe you think this is much ado about nothing. Why should you care? You may recall, though, that less than four years ago AOL accidentally released around 20 million search keywords from over 650,000 users. Now, those 650,000 users and their searches will exist forever in cyberspace. Could it happen again? Of course. Why wouldn't it, when all it takes is a loaded laptop walking out the door?

Internet privacy is an important topic, and as a result, technology is becoming more and more available to help people protect information they want to keep confidential. And that’s a good thing. But what does this have to do with bandwidth management? In short, a lot (no pun intended)!

Many bandwidth management products (from companies like Blue Coat, Allot, and Exinda, for example) intentionally work at the application level. These products are commonly referred to as Layer 7 or Deep Packet Inspection tools. Their mission is to allocate bandwidth specifically by what you're doing on the Internet. They want to determine how much bandwidth you're allowed for YouTube, Netflix, Internet games, Facebook, eBay, Amazon, etc. They need to know what you're doing so they can do their job.

In terms of this article, whether you're philosophically adamant about net privacy (like one of the inventors of the Internet) or couldn't care less is really not important. The question is: what happens to an application-managed approach when people take additional steps to protect their own privacy?

For legitimate reasons, more and more people will be hiding their IPs, encrypting, tunneling, or otherwise disguising their activities and taking privacy into their own hands. As privacy technology becomes more affordable and simple, it will become more prevalent. This is a mega-trend, and it will create problems for those management tools that use this kind of information to enforce policies.

However, alternatives to these application-level products do exist, such as "fairness-based" bandwidth management. Fairness-based bandwidth management, like the NetEqualizer, is a 100% neutral solution: it provides a more privacy-friendly approach for Internet users and a more effective solution for administrators when personal privacy-protection technology is in place. Fairness is the idea of managing bandwidth by how much you use, not by what you're doing. When you manage bandwidth by fairness instead of activity, not only are you supporting a neutral, private Internet, but you're also able to address the critical tasks of bandwidth allocation, control, and quality of service.
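As an illustration of fairness by usage, here is a sketch using classic max-min fair allocation; this is a textbook algorithm, not necessarily the NetEqualizer's exact method:

```python
# Sketch of content-neutral, "fairness-based" allocation: divide the pipe
# by usage alone (max-min fairness), never by what the traffic is.
# Numbers are illustrative.

def max_min_fair(capacity, demands):
    """demands: {user: requested bandwidth}. Returns {user: allocation}.
    No user gets more than they asked for; leftover capacity is shared
    equally among the still-unsatisfied users."""
    alloc = {u: 0.0 for u in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining and cap > 1e-9:
        share = cap / len(remaining)
        satisfied = [u for u, d in remaining.items() if d <= share]
        if not satisfied:
            # Everyone left wants more than an equal share: split evenly.
            for u in remaining:
                alloc[u] += share
            break
        for u in satisfied:
            alloc[u] += remaining[u]
            cap -= remaining[u]
            del remaining[u]
    return alloc

# One heavy downloader and three light users on a 10 Mbps link:
print(max_min_fair(10, {"a": 20, "b": 1, "c": 1, "d": 2}))
```

The light users get everything they asked for; the heavy downloader gets only the leftover. At no point does the algorithm need to know, or care, whether a stream is encrypted.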

NetEqualizer Brand Becoming an Eponym for Fairness and Net Neutrality Techniques


An eponym is the name from which something else derives its name; a proprietary eponym is a brand name, product, or service mark that has fallen into general use.

Examples of common brand eponyms include Xerox, Google, and Band-Aid.  All of these brands have become synonymous with their entire class of product, regardless of the actual brand.

Over the past seven years we have spent much of our time explaining the NetEqualizer methods to network administrators around the country. Now there is mounting evidence that the NetEqualizer brand is taking on a broader connotation: NetEqualizer is in the early stages of becoming an eponym for the class of bandwidth shapers that balance network loads and ensure fairness and neutrality.  As evidence, we cite the following excerpts taken from various blogs and publications around the world.

From Dennis O'Reilly <Dennis.OReilly@ubc.ca>, posted on the ResNet forums

These days the only way to classify encrypted streams is through behavioral analysis.  ….  Thus, approaches like the NetEqualizer or script-based ‘penalty box’ approaches are better.

From a WISP tutorial by Butch Evans

About 2 months ago, I began experimenting with an approach to QOS that mimics much of the functionality of the NetEqualizer (http://www.netequalizer.com) product line.

From TMCnet

Comcast Announces Traffic Shaping Techniques like APconnections’ NetEqualizer…

From Technewsworld

It actually sounds a lot what NetEqualizer (www.netequalizer.com) does and most people are OK with it…..

From Network World

NetEqualizer looks at every connection on the network and compare it to the overall trunk size to determine how to eliminate congestion on the links

From the StarOS Forum

If you’d really like to have your own netequalizer-like system then my advice…..

From VoIP News

Has anyone else tried Netequalizer or something like it to help with VoIP QoS? It’s worked well so far for us and seems to be an effective alternative for networks with several users…..

How to Determine a Comprehensive ROI for Bandwidth Shaping Products


In the past, we've published several articles on our blog to help customers better understand the NetEqualizer's potential return on investment (ROI). Obviously, we do this because we think we offer a compelling ROI proposition for most bandwidth-shaping decisions. Why? Primarily because we provide the benefits of bandwidth shaping at a very low cost, both initially and even more so over time. (Click here for the NetEqualizer ROI calculator.)

But, we also want to provide potential customers with the questions that need to be considered before a product is purchased, regardless of whether or not the answers lead to the NetEqualizer. With that said, this article will break down these questions, addressing many issues that may not be obvious at first glance, but are nonetheless integral when determining what bandwidth shaping product is best for you.

First, let's discuss basic ROI. As a simple example, if an investment cost $100, and in one year that investment returned $120, the ROI is 20 percent.  Simple enough. But what if your investment horizon is five years or longer? It gets a little more complicated, but suffice it to say you would perform a similar calculation for each year while adjusting those returns for time and cost.
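As a sketch, the arithmetic might look like this; the five-year figures and 5% discount rate are illustrative assumptions:

```python
# The single-period ROI from the example above, plus a multi-year variant
# that discounts each future year's return to present value.

def simple_roi(cost, value_returned):
    """One-period ROI: $100 in, $120 back -> 0.20 (20 percent)."""
    return (value_returned - cost) / cost

def discounted_roi(cost, yearly_returns, discount_rate=0.05):
    """Multi-year ROI: discount each year's return to present value
    before comparing against the up-front cost."""
    present_value = sum(
        r / (1 + discount_rate) ** (year + 1)
        for year, r in enumerate(yearly_returns)
    )
    return (present_value - cost) / cost

print(f"{simple_roi(100, 120):.0%}")        # the 20 percent example above
# A $5,000 shaper that saves $1,500 per year for five years:
print(f"{discounted_roi(5000, [1500] * 5):.1%}")
```

The discounting step matters because, as discussed below, most shaping costs and savings recur year after year rather than happening once.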

The important point is that this technique is a well-known calculation for evaluating whether one thing is a better investment than another, be it bandwidth-shaping products or real estate. Naturally, the best financial decision is the one with the greatest return for the smallest cost.

The hard part is determining what questions to ask in order to accurately determine the ROI. A missed cost or benefit here or there could dramatically alter the outcome, potentially leading to significant unforeseen losses.

For the remainder of this article, I’ll discuss many of the potential costs and returns associated with bandwidth shaping products, with some being more obscure than others. In the end, it should better prepare you to address the most important questions and issues and ultimately lead to a more accurate ROI assessment.

Let’s start by looking at the largest components of bandwidth shaping product “costs” and whether they are one-time or ongoing. We’ll then consider the returns.

COSTS

  • The initial cost of the tool
    • This is a one-time cost.
  • The cost of vendor support and license updates
    • These are ongoing costs and include monthly and annual licenses for support, training, software updates, library updates, etc.  The difference from vendor to vendor can be significant, especially over the long run.
  • The cost of upgrades within the time horizon of the investment
    • These upgrades can come in several different forms. For example, what does it cost to go from a 50Mbps tool to a 100Mbps tool? Can your tool be upgraded, or do you have to buy a whole new one? This can be a one-time cost or it can occur several times. It really depends on the growth of your network, but it's usually inevitable for networks of any size.
  • The internal (human) cost to support the tool
    • For example, how many man-hours do you have to spend to maintain the tool, optimize it, and adapt it to your changing network? This could be a considerable "hidden" cost, and it's generally recurring. It also usually increases over time, as salaries and benefits tend to go up. Because of that, this is a very important component to quantify for a good ROI analysis. Tools that require little or no ongoing maintenance will have a large advantage.
  • Overall impact on the network
    • Does the product add latency or other inefficiencies? Does it create any processing overhead and how much? If the answer is yes, costs such as these will constantly impact your network quality and add up over time.

RETURNS

  • Savings from being able to delay or eliminate buying more bandwidth
    • This could either be a one-time or ongoing return. Even delaying a bandwidth upgrade for six months or a year can be highly valuable.
  • Savings from not losing existing revenue sources
    • How many customers did you not lose because they did not get frustrated with their network/Internet service? This return is ongoing.
  • Ability to generate new revenue
    • How many new customers did you add because of a better-maintained network?  Were you able to generate revenue by adding new higher-value services like a tiered rate structure? This will usually be an ongoing return.
  • Savings from the ability to eliminate or reduce the financial impact of unprofitable customers
    • This is an ongoing savings. Can you convert an unprofitable customer to a profitable one by reducing their negative impact on the network? If not, and they walk, do you care?
  • Avoidance of having to buy additional equipment
    • Were you able to avoid having to "divide and conquer" by buying new access points, splitting VLANs, etc.? This can be a one-time or ongoing return.
  • Savings in the cost of responding to technical support calls
    • How much time was saved by not having to receive an irate customer call, research it and respond back? If this is something you typically deal with on a regular basis, the savings will add up every day, week or month this is avoided.

Overall, these issues are the basic financial components and questions that need to be quantified to make a good ROI analysis. For each business, and each tool, this type of analysis may yield a different answer, but it is important to note that over time there are many more items associated with ongoing costs/savings than those occurring only once. Thus, you must take great care to understand the impact of these for each tool, especially those issues that lead to costs that increase over time.

The 10-Gigabit Barrier for Bandwidth Controllers and Intel-Based Routers


By Art Reisman

Editor’s note: This article was adapted from our answer to a NetEqualizer pre-sale question asked by an ISP that was concerned with its upgrade path. We realized the answer was useful in a broader sense and decided to post it here.

Any router, bandwidth controller, or firewall based on Intel architecture and buses will never be able to go faster than about 7 gigabits sustained. (This includes our NE4000 bandwidth controller. While the NE4000 can actually reach speeds close to 10 gigabits, we rate our equipment at 5 gigabits because we don't like quoting best-case numbers to our customers.) The limiting factor is the central clock: with a central clock controlling the show, it is practically impossible to move data around much faster than 10 gigabits, so to expand beyond 10-gigabit speeds you cannot be running with a central clock at all.

The alternative is to use a specialized asynchronous design, which is what faster switches and hardware do. They have no clock or centralized multiprocessor/bus. However, the price point for such hardware quickly jumps to 5-10 times the Intel architecture because it must be custom designed. It is also quite limited in function once released.

Obviously, vendors can stack a bunch of 10-gig fiber bandwidth controllers behind a switch and call it something faster, but this is no different from dividing up your network paths and using multiple bandwidth controllers yourself.  So, be careful when assessing the claims of other manufacturers in this space.

Considering these limitations, many cable operators here in the US have embraced the 10-gigabit barrier. At some point you must divide and conquer using multiple 10-gig fiber links and multiple NE4000 type boxes, which we believe is really the only viable plan — that is if you want any sort of sophistication in your bandwidth controller.

Some will keep requesting giant centralized boxes, and paying a premium for them (it's in their blood to think single box, central location). But the Internet only works because it is made of many independent paths; there is no centralized location by design. As you approach 10-gigabit speeds in your organization, it might be time to stop thinking "single box."

I went through this same learning curve as a system architect at AT&T Bell Labs back in the 1990s.  The sales team constantly worried about how many telephone ports we could support in one box, because that is what operators were asking for.  It shot the price per port through the roof on some of our designs. So, in our present case, we (NetEqualizer) decided not to get into that game, because we believe price per megabit of shaping will likely win out in the end.

Art Reisman is currently CTO and co-founder of APconnections, creator of the NetEqualizer. He  has worked at several start-up companies over the years and has invented and brought several technology products to market, both on his own and with the backing of larger corporations. This includes tools for the automotive industry.

The Facts and Myths of Network Latency


There are many good references that explain how some applications such as VoIP are sensitive to network latency, but there is also some confusion as to what latency actually is as well as perhaps some misinformation about the causes. In the article below, we’ll separate the facts from the myths and also provide some practical analogies to help paint a clear picture of latency and what may be behind it.

Fact or Myth?

Network latency is caused by too many switches and routers in your network.

This is mostly a myth.

Yes, an underpowered router can introduce latency, but most local network switches add minimal latency, a few milliseconds at most. Anything under about 10 milliseconds is, for practical purposes, not humanly detectable. A router or switch (even a low-end one) may add about 1 millisecond of latency, so it would take ten or more hops just to approach the 10-millisecond threshold, and even then you would barely be at the edge of what is noticeable.

The faster your link (Internet) speed, the less latency you have.

This is a myth.

The speed of your network is measured by how fast IP packets arrive. Latency is the measure of how long they took to get there. So, it’s basically speed vs. time. An example of latency is when NASA sends commands to a Mars orbiter. The information travels at the speed of light, but it takes several minutes or longer for commands sent from earth to get to the orbiter. This is an example of data moving at high speed with extreme latency.
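A few lines of arithmetic make the speed-versus-time distinction concrete; the distances are approximate:

```python
# Latency is travel time, not throughput. A link can move data at the
# speed of light and still have huge latency if the distance is large.

C_KM_PER_S = 299_792  # speed of light in km/s

def one_way_delay_s(distance_km):
    """Minimum possible one-way delay over a given distance."""
    return distance_km / C_KM_PER_S

print(f"Across town (50 km):  {one_way_delay_s(50) * 1000:.2f} ms")
print(f"Earth to Moon:        {one_way_delay_s(384_400):.2f} s")
# Mars ranges from ~55 to ~400 million km depending on orbital positions:
print(f"Earth to Mars (near): {one_way_delay_s(55e6) / 60:.1f} minutes")
print(f"Earth to Mars (far):  {one_way_delay_s(400e6) / 60:.1f} minutes")
```

The Mars link can carry data as "fast" as any link possibly could, yet commands still take minutes to arrive: high speed, extreme latency.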

VoIP is very sensitive to network latency.

This is a fact.

Can you imagine talking in real time to somebody on the moon? Your voice would take well over a second to get there, for a round-trip delay of nearly three seconds. For VoIP networks, it is generally accepted that anything over about 150 milliseconds of latency can be a problem. When latency gets higher than 150 milliseconds, issues will emerge, especially for fast talkers and rapid conversations.

Xbox games are sensitive to latency.

This is another fact.

For example, in many collaborative combat games, participants battle players from other locations. Low latency on your network is everything when it comes to beating your opponent to the draw. If you and your opponent shoot your weapons at the exact same time, but your shot takes 200 milliseconds to register at the host server and your opponent's shot gets there in 100 milliseconds, you die.

Does a bandwidth shaping device such as the NetEqualizer increase latency on a network?

This is true, but only for the “bad” traffic that’s slowing the rest of your network down anyway.

Ever hear of the firefighting technique where you light a back fire to slow the fire down? This is similar to the NetEqualizer approach. NetEqualizer deliberately adds latency to certain bandwidth intensive applications, such as large downloads and p2p traffic, so that chat, email, VoIP, and gaming get the bandwidth they need. The “back fire” (latency) is used to choke off the unwanted, or non-time sensitive, applications. (For more information on how the NetEqualizer works, click here.)

Video is sensitive to latency.

This is a myth.

Video is sensitive to the speed of the connection, but not to latency. Let’s go back to our man-on-the-moon example, where a signal takes about 1.3 seconds to travel from the Earth to the moon. Latency creates a problem for two-way voice communication because, in normal conversation, even a couple of seconds of delay in hearing what was said makes it difficult to carry on a conversation. What generally happens with voice and long latency is that both parties start talking at the same time, and moments later each hears the other talking over them. You see this happen a lot on television with interviews done via satellite. However, most video is one way. For example, when watching a Netflix movie, you’re not sending video back to Netflix. In fact, almost all video transmissions are delayed (buffered), and nobody notices, since it is usually a one-way transmission.

Analyzing the cost of Layer 7 Packet Shaping


November, 2010

By Eli Riles

For most IT administrators, Layer 7 packet shaping involves two actions.

Action 1: Inspect and analyze data to determine what types of traffic are on your network.

Action 2: Take action by adjusting application flows on your network.

Without Layer 7 visibility and control, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

Layer 7 monitoring and shaping is intuitively appealing, but it is a good idea to take a step back and examine the full life-cycle costs of your methodology.

Counterintuitively, we assert that total costs rise with the complexity of the monitoring tool.

1) Obviously, the more detailed the reporting tool (Layer 7), the more expensive its initial price tag.

2) The kicker comes with part two. The more expensive the tool, the more detail it provides, and the more time an administrator is likely to spend adjusting and tweaking in search of optimal performance.

But is it fair to assume that higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, a common oversight with labor costs is the belief that once the work of adjusting the network is done, the associated adjustments can remain statically in place. In reality, network traffic changes constantly, and the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators all but disappeared with the advent of cheaper computing in the late 1980s. The function of the computer operator did not disappear completely; it was automated and rolled into the computer itself. The point is, any time the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise for many of our customers has been to step down from expensive, complex reporting tools to a simpler approach. Instead of trying to classify every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problem of the network locking up goes away, leaving only what we would call “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just looking at the usage (a simple report).
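The kind of simple report described above can be approximated in a few lines of Python. The usage figures and the two-standard-deviation cutoff here are invented for illustration, not a feature of any particular product:

```python
# A "simple report": total usage per user, flagging outliers far above the
# mean. Usage figures and the 2-sigma cutoff are invented for illustration.
from statistics import mean, stdev

usage_mb = {
    "user1": 900, "user2": 1100, "user3": 950,
    "user4": 1050, "user5": 1000, "hog1": 9000,
}

avg = mean(usage_mb.values())
sd = stdev(usage_mb.values())

# Anyone more than two standard deviations above the mean stands out.
heavy = [user for user, mb in usage_mb.items() if mb > avg + 2 * sd]
print(f"mean = {avg:.0f} MB, heavy users: {heavy}")
```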

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test with actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

  • List of monitoring tools compiled by Stanford
  • Top five free monitoring tools
  • Planetmy Linux Tips: How to set up a monitor for free

Five Tips to Manage Network Congestion


As the demand for Internet access continues to grow around the world, the complexity of planning, setting up, and administering your network grows with it. Here are five tips we have compiled based on discussions with network administrators in the field.

#1) Be Smart About Buying Bandwidth
The local T1 provider does not always give you the lowest-priced bandwidth.  There are many Tier 1 providers out there that may have fiber within line-of-sight of your business. For example, Level 3 has fiber rings already hot in many metro areas and will be happy to sell you bandwidth. Numerous companies can then set up a wireless link from your location to that point of presence, giving you a low-cost, high-speed connection.

#2) Manage Expectations
You know the old saying “under promise and over deliver”.  This holds true for network offerings.  When building out your network infrastructure, don’t let your network users just run wide open. As you add bandwidth, you need to think about and implement appropriate rate limits/caps for your network users.  Do not wait; the problem with waiting is that your original users will become accustomed to higher speeds and will not be happy with sharing as network use grows – unless you enforce some reasonable restrictions up front.  We also recommend that you write up an expectations document for your end users “what to expect from the network” and post it on your website for them to reference.

#3) Understand Your Risk Factors
Many network administrators believe that if they set maximum rate caps/limits for their network users, then the network is safe from locking up due to congestion. However, this is not the case.  You also need to monitor your contention ratio closely.  If your network contention ratio becomes unreasonable, your users will experience congestion, a.k.a. “lockups” and “freezes.” Don’t make this mistake.

This may sound obvious, but let me spell it out. We often run into networks with 500 network users sharing a 20-meg link. The network administrator puts in place two rate caps, depending on the priority of the user: 1 meg up and down for user group A, and 5 megs up and down for user group B. The caps ensure that no individual user exceeds their allotted amount, which is somehow supposed to exempt the network from contention and congestion. This is all well and good, but if you do the math, 500 network users on a 20-meg link will overwhelm the network at some point, and nobody will then be able to get anywhere close to their “promised amount.”
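Doing the math from the example above: a hypothetical split of those 500 users into the two capped groups still leaves the link heavily oversubscribed (the 400/100 group sizes are assumptions for illustration; the caps and link size come from the text):

```python
# The contention math spelled out. The 400/100 split between the two groups
# is an assumption for illustration; caps and link size come from the text.
LINK_MBPS = 20
groups = {
    "group_A": (400, 1),  # (number of users, cap in Mbps)
    "group_B": (100, 5),
}

# Worst case: every user transmits at their full cap simultaneously.
worst_case_demand = sum(count * cap for count, cap in groups.values())
contention_ratio = worst_case_demand / LINK_MBPS

print(f"Worst-case demand: {worst_case_demand} Mbps on a {LINK_MBPS} Mbps link")
print(f"Contention ratio: {contention_ratio:.0f}:1")
```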

If you have a high contention ratio on your network, you will need something more than rate limits to prevent lockups and congestion. At some point, you will need to go with a layer-7 application shaper (such as Blue Coat Packeteer or Allot NetEnforcer), or go with behavior-based shaping (NetEqualizer). Your only other option is to keep adding bandwidth.

#4) Decide Where You Want to Spend Your Time
When you are building out your network, think about what skill sets you have in-house and those that you will need to outsource.  If you can select network applications and appliances that minimize time needed for set-up, maintenance, and day-to-day operations, you will reduce your ongoing costs. This is true whether you insource or outsource, as there is an “opportunity cost” for spending time with each network toolset.

#5) Use What You Have Wisely
Optimize your existing bandwidth.   Bandwidth shaping appliances can help you to optimize your use of the network.   Bandwidth shapers work in different ways to achieve this.  Layer-7 shapers will allocate portions of your network to pre-defined application types, splitting your pipe into virtual pipes based on how you want to allocate your network traffic.  Behavior-based shaping, on the other hand, will not require predefined allocations, but will shape traffic based on the nature of the traffic itself (latency-sensitive, short/bursty traffic is prioritized higher than hoglike traffic).   For known traffic patterns on a WAN, Layer-7 shaping can work very well.  For unknown patterns like Internet traffic, behavior-based shaping is superior, in our opinion.

On Internet links, a NetEqualizer bandwidth shaper will allow you to increase your customer base by 10 to 30 percent without having to purchase additional bandwidth. This lets you put more people onto your existing infrastructure without an expensive build-out.

In order to determine whether the return on investment (ROI) makes sense in your environment, use our ROI tool to calculate your payback period on adding bandwidth control to your network.  You can then compare this one-time cost with your expected recurring monthly costs for additional bandwidth.  Also note that in many cases you will need to do both at some point.  Bandwidth shaping can delay or defer purchasing additional bandwidth, but as your network user base grows, you will eventually need to consider purchasing more bandwidth.
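As a sketch of the payback comparison, the arithmetic is simply the one-time cost divided by the monthly bandwidth spend it defers. Both dollar figures below are invented for illustration and are not taken from our ROI tool:

```python
# Payback-period arithmetic for a one-time shaper purchase versus recurring
# bandwidth costs. Both dollar figures are invented for illustration.
SHAPER_COST = 5000          # assumed one-time appliance cost
DEFERRED_BW_MONTHLY = 800   # assumed monthly cost of the deferred bandwidth upgrade

def payback_months(one_time_cost, monthly_savings):
    """Months until the one-time cost is recovered from deferred spending."""
    return one_time_cost / monthly_savings

months = payback_months(SHAPER_COST, DEFERRED_BW_MONTHLY)
print(f"Payback period: {months:.1f} months")
```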

In Summary…
Obviously, these five tips are not rocket science, and some of them you may be using already.  We offer them here as a quick guide & reminder to help in your network planning.  While the sea change that we are all seeing in internet usage (more on that later…) makes network administration more challenging every day, adequate planning can help to prepare your network for the future.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here to request a full price list.

Equalizing Compared to Application Shaping (Traditional Layer-7 “Deep Packet Inspection” Products)


Editor’s Note: (Updated with new material March 2012)  Since we first wrote this article, many customers have implemented the NetEqualizer not only to shape their Internet traffic, but also to shape their company WAN.  Additionally, concerns about DPI and loss of privacy have bubbled up. (Updated with new material September 2010)  Since we first published this article, “deep packet inspection”, also known as Application Shaping, has taken some serious industry hits with respect to US-based ISPs.   

==============================================================================================
Author’s Note: We often get asked how NetEqualizer compares to Packeteer (Bluecoat), NetEnforcer (Allot), Network Composer (Cymphonix), Exinda, and a plethora of other well-known companies that do Application Shaping (aka “packet shaping”, “deep packet inspection”, or “Layer-7” shaping).   After several years of fielding these questions, and discussing the different aspects of application shaping with former and current IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.
We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject, with content to support the bullet chart, was in order.  If you want to skip the details, see our Summary Table at the end of this article.

However, if you’re looking to really understand the differences, and to have the question answered as objectively as possible, please take a few minutes to read on…
==============================================================================================

How NetEqualizer compares to Bluecoat, Allot, Cymphonix, & Exinda

In the following sections, we will cover specifically when and where Application Shaping is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish.  We will also discuss how Equalizing, NetEqualizer’s behavior-based shaping, fits into the landscape of application shaping, and how in many cases Equalizing is a much better alternative.

Download the full article (PDF)  Equalizing Compared To Application Shaping White Paper

Read the rest of this entry »

NetEqualizer Bandwidth Shaping Solution: Colleges, Universities, Boarding Schools, and University Housing


In working with information technology leaders at universities, colleges, boarding schools, and university housing over the years, we’ve repeatedly heard the same issues and challenges facing network administrators.  Here are just a few:

Download College & University White Paper

  • We need to provide 24/7 access to the web in the dormitories.
  • We need to support multiple campuses (and WAN connections between campuses).
  • We have thousands of students, and hundreds of administrators and professors, all sharing the same pipe.
  • We need to give priority to classroom videos used for educational purposes.
  • Our students want to play games and watch videos (e.g. YouTube).
  • We get calls if instant messaging & email are not responding instantaneously.
  • We need to manage P2P traffic.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many private and public colleges, universities, boarding schools, and in university housing facilities around the world.

Download article (PDF) College & University White Paper

Read full article …

$1000 Discount Offered Through NetEqualizer Cash For Conversion Program


After witnessing the overwhelming popularity of the government’s Cash for Clunkers new car program, we’ve decided to offer a similar deal to potential NetEqualizer customers. Therefore, this week, we’re announcing the launch of our Cash for Conversion program. The program offers owners of select brands (see below) of network optimization technology a $1000 credit toward the list-price purchase of NetEqualizer NE2000-10 or higher models (click here for a full price list). All owners have to do is send us their old (working or not) or out-of-license bandwidth control technology. Products from the following manufacturers will be accepted:

  • Exinda
  • Packeteer/Blue Coat
  • Allot
  • Cymphonix
  • Procera

In addition to receiving the $1000 credit toward a NetEqualizer, program participants will also have the peace of mind of knowing that their old technology will be handled responsibly through refurbishment or electronics recycling programs.

Only the listed manufacturers’ products will qualify. Offer good through the Labor Day weekend (September 7, 2009). For more information, contact us at 303-997-1300 or admin@apconnections.net.

Top Tips To Quantify The Cost Of WAN Optimization


Editor’s Note: As we mentioned in a recent article, there’s often some confusion when it comes to how WAN optimization fits into the overall network optimization industry — especially when compared to Internet optimization. Although similar, the two techniques require different approaches to optimization. What follows are some simple questions to ask your vendor before you purchase a WAN optimization appliance. For the record, the NetEqualizer is primarily used for Internet optimization.

When presenting a WAN optimization ROI argument, your vendor rep will clearly make a compelling case for savings.  The ROI case will be made by amortizing the cost of equipment against your contracted rate from your provider. You can and should trust these basic raw numbers. However, there is more to evaluating a WAN optimization (packet shaping) appliance than comparing equipment cost against bandwidth savings. Here are a few things to keep in mind:

  1. The amortization schedule should also make reasonable assumptions about future costs for T1, DS3, and OC3 links. Contracted rates have been dropping in many metro areas, and it is reasonable to assume that bandwidth costs will be perhaps 50 percent less two to three years out.
  2. If you do increase bandwidth, the licensing costs for the traffic shaping equipment can increase substantially. You may also find yourself in a situation where you need to do a forklift upgrade as you outrun your current hardware.
  3. Recurring licensing costs are often mandatory to keep your equipment current. Without upgrading your license, your deep packet inspection (layer 7 shaping filters) will become obsolete.
  4. Ongoing labor costs to tune and re-tune your WAN optimization appliance can often run to thousands of dollars per week.
  5. The good news is that optimization companies will normally allow you to try an appliance before you buy. Make sure you take the time to manage the equipment with your own internal techs or IT consultant to get an idea of how it will fit into your network.  The honeymoon with new equipment (supported by a well-trained pre-sales team) can be short-lived. Once the free pre-sale support has expired, you will be on your own.

There are certainly times when WAN optimization makes sense, yet in many cases, what appears to be a no-brainer decision at first will be called into question as costs mount down the line. Hopefully these five contributing factors will paint a clearer picture of what to expect.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.
