Crossing a Chasm, Transitioning From Packet Shaping to the Next Generation Bandwidth Shaping Technology



By Art Reisman

CTO, APconnections

Even though I would self-identify as an early adopter of new technology, when I look at my real-life behavior, I tend to resist change and hang on to technology that I am comfortable with. Suffice it to say, I usually need an event or a gentle push to get over my resistance.

Given that technology change is uncomfortable, what follows is a gentle push, or perhaps a mild shove, for anybody who is looking to pull the trigger on moving away from Packet Shaping toward a more sustainable, cost-effective alternative.

First off, let's look at why packet shaping (layer 7 deep packet inspection) technologies are popular.

“A good layer 7 based tool creates the perception of complete control over your network. You can see what applications are running, how much bandwidth they are using, and make adjustments to flows to meet your business objectives.”

Although the above statement sounds ideal, in reality packet shaping, even at its prime, classified traffic with at best 60 percent accuracy. The remaining 40 percent of traffic could never be classified, and thus had to be shaped based on guesswork or faith.

Today, the accuracy of packet classification continues to slip. Security concerns are forcing most content providers to adopt encryption. Encrypted traffic cannot be classified.

In an effort to stay relevant, companies have moved away from deep packet inspection to classifying traffic by source and destination (source IPs are never encrypted and thus always visible).

If your packet shaping device knows the address range of a content provider, it can safely assume a traffic type by examining the source IP address. For example, YouTube traffic emanates from source addresses owned by Google. The drawback with this method is that savvy users can easily hide their sources by using any of the publicly available VPN utilities. The personal VPN world is exploding as individual users move to VPN tunneling services for all their home browsing.
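
For illustration, here is a toy sketch (not pulled from any vendor's product) of how classification by source address works: if a flow's remote IP falls inside a range known to belong to a content provider, it gets that label, while anything arriving through a VPN exit node falls through to "unclassified". The provider names and address ranges below are placeholders (RFC 5737 documentation ranges), not real allocations.

```python
# Toy sketch of classification by source IP address.
import ipaddress

PROVIDER_RANGES = {
    "video_provider": ipaddress.ip_network("203.0.113.0/24"),   # illustrative range
    "cdn_provider":   ipaddress.ip_network("198.51.100.0/24"),  # illustrative range
}

def classify_by_source(ip_str):
    """Label a flow by the network its remote address belongs to."""
    ip = ipaddress.ip_address(ip_str)
    for label, net in PROVIDER_RANGES.items():
        if ip in net:
            return label
    return "unclassified"   # a VPN exit address would land here

print(classify_by_source("203.0.113.42"))   # 'video_provider'
print(classify_by_source("192.0.2.7"))      # 'unclassified' (e.g., traffic via a VPN tunnel)
```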

The combination of VPN tunnels and encrypted content is slowly transforming the best application classifiers into paperweights.

So, what are the alternatives? Is there something better?

Yes, if you can let go of the concept of controlling specific traffic by type, you can find viable alternatives. As per our title, you must “cross the chasm” and surrender to a new way of bandwidth shaping, where decisions are based on usage heuristics, not absolute identification.

What is a heuristic-based shaper?

Our heuristic-based bandwidth shapers borrow from the world of computer science and a CPU scheduling technique called shortest job first (SJF). In this context, a “job” is synonymous with an application. You have likely experienced the benefits of an SJF-style scheduler without knowing it when using a Unix-like laptop, such as a Mac or an Ubuntu machine. Unlike older Windows operating systems, where one application could lock up your computer, such lock-ups are rare on Linux. The Linux scheduler allows preemption, letting other applications in during peak times so they are not starved for service. Simply put, a computer with many applications using SJF will pick the application it thinks will take the least amount of time and run it first, or preempt a hog to let another application in.
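
For readers who like to see the idea in code, here is a minimal sketch of shortest job first in plain Python; the job names and sizes are made up for illustration.

```python
# Minimal shortest-job-first (SJF) sketch: the job with the smallest amount
# of remaining work is serviced first, so small interactive tasks are never
# stuck behind a long-running hog.
import heapq

def sjf_order(jobs):
    """Return job names in the order an SJF scheduler would run them."""
    heap = [(size, name) for name, size in jobs.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

jobs = {"keystroke_echo": 1, "page_render": 20, "video_transcode": 5000}
print(sjf_order(jobs))  # ['keystroke_echo', 'page_render', 'video_transcode']
```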

In the world of bandwidth shaping, the contended resource is not the CPU but a shared Internet link that an overload of applications vie for. The NetEqualizer uses SJF-type techniques to preempt users who are dominating the link with large downloads and other hogs. Although the NetEqualizer does not classify these hogging applications by type, it does not matter: applications such as large downloads and high-resolution video are given lower priority by their large footprint alone. Thus the business-critical interactive applications, with their smaller bandwidth consumption, get serviced first.
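
To make the analogy concrete, here is a simplified sketch of the idea (it is not NetEqualizer's actual implementation): when the shared link nears saturation, the flows with the largest footprint are selected for deprioritization, with no classification required. The capacity, threshold, and flow rates below are hypothetical.

```python
# Simplified heuristic-shaping sketch: penalize the largest flows only when
# the shared link is congested, so small interactive flows get serviced first.

LINK_CAPACITY_KBPS = 100_000   # hypothetical 100 Mbps shared link
CONGESTION_RATIO = 0.85        # start shaping above 85% utilization
HOG_THRESHOLD_KBPS = 4_000     # flows above this rate are considered "hogs"

def pick_flows_to_throttle(flow_rates_kbps):
    """Given {flow_id: current rate in kbps}, return flows to deprioritize."""
    total = sum(flow_rates_kbps.values())
    if total < LINK_CAPACITY_KBPS * CONGESTION_RATIO:
        return []                     # no congestion: leave everyone alone
    hogs = [f for f, rate in flow_rates_kbps.items() if rate > HOG_THRESHOLD_KBPS]
    # Largest footprint gets penalized first.
    return sorted(hogs, key=lambda f: flow_rates_kbps[f], reverse=True)

flows = {"voip_call": 80, "ssh_session": 10,
         "os_update_download": 40_000, "hd_video_stream": 48_000}
print(pick_flows_to_throttle(flows))  # ['hd_video_stream', 'os_update_download']
```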

Summary

The hesitation we often see about switching to heuristic-shaping technology is that it gives up the absolute, control-oriented model promised by Packet Shaping. But sticking with deep packet inspection and expecting to keep control over your network is becoming impossible, so something must change.

The new heuristic model of bandwidth shaping delivers priority for interactive cloud applications, and the implementation is simple and clean.

A Packet Shaper Alternative


We generally don’t market the NetEqualizer product as an alternative to any particular competitor; NetEqualizer stands on its own. However, many of our customers are former Blue Coat PacketShaper users, and their only complaint with our product is that they wish they had found us sooner.

If you are looking for something simpler and lower cost, with a rock-solid track record of solving congestion issues on network interfaces, you have come to the right place.

The basic premise of our technology is shaping by behavior-based heuristics. Although that might sound a bit different from shaping by application, it is quite effective and easy to use. More importantly, it is becoming the best option in a world where the layer 7 techniques used by Blue Coat PacketShaper, Allot NetEnforcer, and Exinda are unable to identify signatures due to increased content encryption.

Feel free to contact us, or any of our reference customers who have switched over to our technology, to learn more.


Bandwidth Shaping Shake-Up: Is Your Packet Shaper Obsolete?


If you went to sleep in 2005 and woke up 10 years later, you would likely be surprised by some dramatic changes in technology.

  • Smart cars that drive themselves are almost a reality
  • The desktop PC is no longer a consumer product
  • Wind farms now line the highways of rural America
  • Layer 7 shaping technology is now clinging to life, crashing the financials of several companies that bet the house on it.

What happened to layer 7 and Packet Shaping?

In the early 2000s, all the rage in traffic classification was the ability to put different types of traffic into labeled buckets and assign a priority to each. Like rating choices on a tapas menu, network administrators enjoyed an extensive list of traffic types: YouTube, Citrix, news feeds; the list was limited only by the price and quality of the bandwidth shaper. The more expensive the traffic shaper, the more choices you had.

Starting in 2005 and continuing to this day,  several forces started to work against the layer 7 paradigm.

  • The price of bulk bandwidth went into a free fall, much faster than the relatively fixed cost of a bandwidth shaper. The business proposition of buying a bandwidth shaper to conserve bandwidth became much tighter, and some companies that were riding high saw their stock prices collapse.
  • Internet traffic became impossible to identify with the advent of encryption. A layer 7 traffic classifier cannot see inside HTTPS or a VPN tunnel, and thus essentially becomes a big, expensive albatross with little value as the share of encrypted traffic increases.
  • The FCC ruling on Net Neutrality further put a damper on a portion of the layer 7 market. For years ISPs had been using layer 7 technology to give preferential treatment to different types of traffic.
  • Cloud-based services use less complex architectures. Companies can consolidate on one simplified central bandwidth shaper, whereas before they might have had several across their various WAN links and network segments.

So where does this leave the bandwidth shaping market?

There is still some demand for layer 7 shapers, particularly in countries like China, where they attempt to control everything. However, in Europe and the US the trend is toward more basic controls that do not violate the FCC ruling, cost less, and use some form of intelligent fairness rules, such as:

  • Quotas, as in your cell phone data plan.
  • Fairness-based heuristics (Equalizing), which are gaining momentum: a lower price point and congestion prevention without violating the FCC ruling.
  • Basic rate limits, as in your wired ISP's 20-megabit plan, often implemented on a basic router rather than a specialized shaping device (see the token bucket sketch after this list).
  • No shaping at all, where pipes are so large there is no need to ration bandwidth.
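
As an aside on the "basic rate limits" option above, a rate limiter of this kind is usually just a token bucket. Here is a minimal sketch, with the 20-megabit figure borrowed from the example and a hypothetical burst size.

```python
# Minimal token bucket rate limiter sketch.
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Return True if the packet may be sent now, False if it should wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

limiter = TokenBucket(rate_bps=20_000_000, burst_bytes=64_000)  # a 20-megabit plan
print(limiter.allow(1500))   # a single 1500-byte packet passes immediately
```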

Will Shaping be around in 10 years?

Yes, consumers and businesses will always find ways to use all their bandwidth and more.

Will price points for bandwidth continue to drop?

I am going to go against the grain here and say bandwidth prices will flatten out in the near future. Prices over the last decade slid for several reasons that are no longer in play.

The biggest driver of the price drops was the wide adoption of wavelength-division multiplexing (WDM) on carrier lines from 2005 to the present. There was already a good bit of fiber in the ground, and the WDM innovation caused a huge jump in capacity with very little additional cost to providers.

The other factor was a major worldwide recession, during which business demand was slack.

Lastly, there are no new large carriers coming online. Competition and price wars will ease up as suppliers try to increase profits.


Five Requirements for QoS and Your Cloud Computing


I received a call today from one of the largest Tier 1 providers in the world. The salesperson on the other end was lamenting his inability to sell cloud services to his customers. His service offerings were hot, but the customers’ Internet connections were not. Until his customers resolved their congestion problems, they were in a holding pattern for new cloud services.

Before I finish my story, I promised a list of what a Next Generation traffic controller must do, so without further ado, here it is.

  1. Next Generation Bandwidth controllers must be able to mitigate traffic flows originating from the Internet such that important Cloud Applications get priority.
  2. Next Generation Bandwidth controllers must NOT rely on Layer 7 DPI technology to identify traffic. (too much encryption and tunneling today for this to be viable)
  3. Next Generation Bandwidth controllers must hit a price range of $5k to $10k USD for medium to large businesses.
  4. Next Generation Traffic controllers must not require babysitting and adjustments from the IT staff to remain effective.
  5. A Next Generation traffic controller should adopt a Heuristics-based decision model (like the one used in the NetEqualizer).

As for the businesses mentioned by the sales rep, many of them ran into bottlenecks when they moved to the cloud. The bottlenecks were due to iOS updates and recreational “crap” crowding out the cloud application traffic on their shared Internet trunk.

Their original assumption was that they could use the QoS on their routers to mitigate traffic. After all, that worked great when all they had between them and their remote business logic was a nailed-up MPLS network. Because it was a private corporate link, they had QoS devices on both ends of the link and no problems with recreational congestion.

Moving to the cloud was a wake-up call! Think about it: when you go to the cloud you control only one end of the link. This means your router-based QoS is no longer effective, and incoming traffic will crush you if you do not do something different.
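
One way to "do something different", sketched loosely below (the rates and the headroom factor are hypothetical): shape the inbound aggregate just below the purchased rate at your own edge, so upstream TCP senders back off, and give cloud traffic first claim on that budget.

```python
# Rough sketch of inbound control when you only own one end of the link:
# admit inbound traffic at slightly less than the purchased rate so upstream
# senders back off, and let cloud flows claim that budget before recreational
# traffic does.

LINK_RATE_KBPS = 50_000   # hypothetical purchased Internet rate
HEADROOM = 0.95           # shape just under the link rate

def inbound_budget(cloud_demand_kbps):
    """Return (cloud_allowance_kbps, recreational_allowance_kbps)."""
    budget = LINK_RATE_KBPS * HEADROOM
    cloud = min(cloud_demand_kbps, budget)
    recreational = max(budget - cloud, 0)
    return cloud, recreational

print(inbound_budget(cloud_demand_kbps=12_000))  # (12000, 35500.0)
```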

The happy ending is that we were able to help our friend at BT Telecom by mitigating his customers’ bottlenecks. Contact us if you are interested in more details.

Bandwidth Control in the Cloud


The good news about cloud-based applications is that, in order to be successful, they must be fairly lightweight in terms of their bandwidth footprint. Most cloud designers create applications with a fairly small data footprint; a poorly designed cloud application that required large amounts of data transfer would not get good reviews and would likely fizzle out.

The bad news is that cloud applications must share your Internet link with recreational traffic, and recreational traffic is often bandwidth-intensive with no intention of playing nice on a shared link.

For businesses, a legitimate concern is having their critical cloud-based applications starved for bandwidth. When this happens, they can perform poorly or lock up, creating a serious drop in productivity.

 

If you suspect you have bandwidth contention impacting the performance of a critical cloud application, the best place to start your investigation would be with a bandwidth controller/monitor that can show you the basic footprint of how much bandwidth an application is using.

Below is a quick screen shot from our NetEqualizer that I often use when troubleshooting a customer link. It gives me a nice snapshot of utilization: I can sort the heaviest users by their bandwidth footprint and then click a convenient DNS lookup tab to see who they are.

[Screen shot: NetEqualizer utilization report showing the heaviest users]
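
For those who prefer code to screenshots, the sketch below captures the same idea in a few lines of Python: total up bytes per host from flow records, sort the heaviest users, and reverse-resolve their addresses. The flow records are made up; this is not the NetEqualizer report itself.

```python
# Sum per-host traffic, sort the heaviest users, and reverse-resolve them.
import socket
from collections import Counter

flows = [
    ("10.1.1.23", 480_000_000),   # (host IP, bytes transferred) - hypothetical records
    ("10.1.1.54", 12_000_000),
    ("10.1.1.23", 250_000_000),
    ("10.1.1.88", 3_000_000),
]

usage = Counter()
for ip, nbytes in flows:
    usage[ip] += nbytes

for ip, nbytes in usage.most_common(5):
    try:
        name = socket.gethostbyaddr(ip)[0]   # reverse DNS lookup
    except OSError:
        name = "unknown"
    print(f"{ip:<15} {nbytes/1e6:>8.1f} MB  {name}")
```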

In my next post I will detail some typical bandwidth planning metrics for going to the cloud. Stay tuned.

Death to Deep Packet Inspection?


A few weeks ago, I wrote an article on how I was able to watch YouTube while on a United flight, bypassing their layer 7 filtering techniques. Following up today, I was not surprised to see a few other articles on the subject popping up recently.

Stealth VPNs By-Pass DPI

How to By Pass Deep Packet Inspection

Encryption Death to DPI

I also recently heard from a partner company that Meraki/Cisco was abandoning the WAN DPI technology in their access points. I am not sure from the details whether this was due to poor performance from DPI, but that is what I suspect.

Lastly, even the US government is annoyed that much of the data it formerly had easy access to is now being encrypted by tech companies to protect their customers’ privacy.

Does this recent storm of chatter on the subject spell the end of commercial deep packet inspection? In my opinion no, not in the near term. The lure of DPI is so strong that preaching against it is like Galileo telling the church to shove off; it is going to take some time. And technically there are still many instances where DPI works quite well.

Does Your School Have Enough Bandwidth for On-line Testing?


K-12 schools are all rapidly moving toward “one-to-one” programs, where every student has a computer, usually a laptop. Couple this with standardized, cloud-based testing services, and you have the potential for Internet gridlock during the testing periods. Some of the common questions we hear are:

How will all of these students using the cloud affect our internet resource?

Will there be enough bandwidth for all of those students using on-line testing?

What type of QoS should we deploy, or should we buy more bandwidth?

The good news is that most cloud testing services are designed with a fairly modest bandwidth footprint.

For example, a student connection to a cloud testing application will average around 150 kbps (kilobits per second).

In a perfect world, a 40-megabit link could handle roughly 266 students simultaneously doing on-line testing (40,000 kbps ÷ 150 kbps), as long as there was no other major traffic.

On the other hand, a video stream may average 1,500 kbps or more.

A raw download, such as an iOS update, may take as much as 15,000 kbps; that is 100 times the bandwidth of a student taking an on-line test.
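
A quick back-of-the-envelope check of those numbers (using the per-stream estimates above and ignoring protocol overhead):

```python
# Back-of-the-envelope capacity check using the per-stream estimates above.
LINK_KBPS = 40_000          # 40-megabit link
TEST_STREAM_KBPS = 150      # one student taking an on-line test
VIDEO_STREAM_KBPS = 1_500   # one video stream
IOS_UPDATE_KBPS = 15_000    # one raw download

print(LINK_KBPS // TEST_STREAM_KBPS)        # ~266 simultaneous test takers
print(IOS_UPDATE_KBPS // TEST_STREAM_KBPS)  # one update = 100 test streams
```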

A common approach when choosing a bandwidth controller to support on-line testing is to look for a tool that will specifically identify the testing service and the non-essential applications, allowing the school's IT staff to make adjustments giving the testing higher priority (QoS). This strategy seems logical, but there are several drawbacks:

  • It requires a fairly sophisticated form of bandwidth control and can be labor-intensive and expensive.
  • Much of the public Internet traffic may be encrypted or tunneled, and hard to identify.
  • Another complication with giving Internet traffic traditional priority is that a typical router cannot prioritize incoming traffic, and most of the test traffic is incoming (from the outside in). We detailed this phenomenon in our post about QoS and the Internet.

The key is not to make the problem more complicated than it needs to be. If you just look at the footprint of the streams coming into the testing facility, you can assume, from our observation, that streams of around 150 kbps are higher priority than the larger streams, and simply throttle the larger streams. Doing so will ensure there is enough bandwidth for the testing service connections to the students. The easiest way to do this is with a heuristic-based bandwidth controller, a class of bandwidth shaper that dynamically gives priority to smaller streams by slowing down larger ones.
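
Expressed as a sketch (a simplification, not any particular product's logic), the rule might look like this, with the large-stream cutoff and throttled rate chosen arbitrarily for illustration:

```python
# Sketch of the throttling rule: small streams (on-line testing) pass
# untouched, large streams get capped during the testing window.
TEST_FOOTPRINT_KBPS = 150
LARGE_STREAM_KBPS = 1_000     # hypothetical cutoff for "large" streams
THROTTLED_RATE_KBPS = 500     # hypothetical cap applied to large streams

def shape(stream_rate_kbps):
    """Return the rate a stream is allowed during the testing window."""
    if stream_rate_kbps > LARGE_STREAM_KBPS:
        return THROTTLED_RATE_KBPS           # slow the hogs
    return stream_rate_kbps                  # small streams pass untouched

for rate in (150, 1_500, 15_000):            # test stream, video, iOS update
    print(rate, "->", shape(rate))
```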

The other option is to purchase more bandwidth, or in some cases a combination of more bandwidth and a heuristic-based bandwidth controller, to be safe.

Please contact us for a more in-depth discussion of options.

For more information on cloud usage in K-12 schools, check out these posts:

Schools View Cloud Infrastructure as a Viable Option

K-12 Education is Moving to the Cloud

For more information on Bandwidth Usage by Cloud systems, check out this article:

Know Your Bandwidth Needs: Is Your Network Capacity Big Enough for Cloud Computing?
