Cloud Computing Creates Demand For Bandwidth Shaping


The rise of cloud computing has been a mixed bag for the bottom line of traditional network hardware manufacturers.  Yes, there is business to be had supplying the burgeoning cloud service providers with new hardware; however, as companies move their applications into the cloud, the elaborate WAN networks of yesteryear are slowly being phased out. The result is a decline in sales of routers and switches, a dagger in the heart of the very growth engine that gave rise to the likes of Cisco, Juniper, and Adtran.

From a business perspective, we are pleasantly surprised to see an uptick in demand for bandwidth shapers in the latter half of 2017.  We expect this to continue into 2018 and beyond.

Why are bandwidth shapers seeing an uptick in interest?
Prior to the rise of cloud computing, companies required large internal LAN pipes, with relatively small connections to the Internet.  As services move to the Cloud, the data that formerly traversed the local LAN is now being funneled out of the building through the pipe leading to the Internet.  For the most part, companies recognize this extra burden on their Internet connection and take action by buying more bandwidth. Purchasing bandwidth makes sense in markets where bandwidth is cheap, but it is not always possible.

Companies are realizing they cannot afford gridlock on the path to their Cloud.  Network administrators understand that at any time an unanticipated spike in demand could overwhelm their cloud connection.  The ramifications of a clogged cloud connection could be catastrophic to their business, especially as more business is performed online.  Hence, we are getting preemptive inquiries from administrators who want to ensure that critical cloud services are prioritized across their Internet connection with a smart bandwidth shaper.

We are also getting inquiries from businesses that have fallen behind and are unable to upgrade their Internet pipe fast enough to keep up with Cloud demand.  This cycle of upgrading and then running out of bandwidth can be tempered with a bandwidth shaper.  As your network peaks, the shaper can ensure that available resources are shared optimally until you upgrade and have more bandwidth available.

Although moving to the Cloud may seem to introduce a new paradigm, from a network optimization standpoint the challenges are the same.  Over the years we have always recommended a two-pronged approach to optimization: 1) adequate bandwidth, and 2) bandwidth shaping.  The reason for our recommendation remains the same: with bandwidth shaping, you are best positioned to handle peak traffic on your network.  And now more than ever, as business goes “online” and into the Cloud, and both your employees and your customers are on your network, bandwidth shaping is a prudent insurance policy for providing a great experience on your network.


How to Survive High Contention Ratios and Prevent Network Congestion



Is there a way to raise contention ratios without creating network congestion, thus allowing your network to service more users?

Yes, there is.

First, a little background on the terminology.

Congestion occurs when a shared network attempts to deliver more bandwidth to its users than is available. We typically think of an oversold/contended network with respect to ISPs and residential customers; but this condition also occurs within businesses, schools and any organization where more users are vying for bandwidth than is available.

The term contention ratio is used in the industry as a way of measuring just how oversold your network is.  A contention ratio is simply the number of users sharing an Internet trunk relative to its capacity, which we normally think of in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio.

A decade ago, a 10-to-1 contention ratio was common. Today bandwidth is much less expensive and average contention ratios have come down.  Unfortunately, as bandwidth costs have dropped, pressure on trunks has risen, because today’s applications require increasing amounts of bandwidth. The most common congestion symptom is slow network response times.
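The arithmetic above can be sketched in a few lines of Python (the per-user rate is an assumed figure, since contention is usually quoted against the bandwidth each user expects):

```python
def contention_ratio(num_users: int, trunk_mbps: float,
                     per_user_mbps: float = 1.0) -> float:
    """Potential user demand divided by trunk capacity."""
    return (num_users * per_user_mbps) / trunk_mbps

# 10 users sharing a one-megabit trunk -> 10-to-1
print(contention_ratio(10, 1.0))    # 10.0
# A decade later: 20 users on a 10-megabit trunk -> 2-to-1
print(contention_ratio(20, 10.0))   # 2.0
```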
Now back to our original question…
Is there a way to raise contention ratios without creating congestion, thus allowing your network to service more users?
This is where a smart bandwidth controller can help.  Back in the “old” days before encryption was king, most solutions involved classifying types of traffic and restricting less important traffic based on customer preferences.  Classifying by type went away with encryption, which prevents traffic classifiers from seeing the specifics of what is traversing a network.  A modern bandwidth controller instead uses dynamic rules to restrict traffic based on aberrant behavior.  Although this might seem less intuitive than restricting traffic by type, it turns out to be just as reliable, not to mention simpler and more cost-effective to implement.
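As a rough sketch of what a behavior-based rule might look like, here is a toy version in Python. The flow names, thresholds, and rates are invented for illustration; a real bandwidth controller would tune these heuristics dynamically:

```python
def flows_to_throttle(flows_kbps: dict, link_kbps: float,
                      trigger: float = 0.85, hog_factor: float = 2.0) -> list:
    """Return the IDs of flows to penalize: when the link nears
    saturation, pick out flows running far above their fair share.
    No payload inspection is needed, so encryption is irrelevant."""
    total = sum(flows_kbps.values())
    if total < trigger * link_kbps:
        return []  # link not congested; leave everyone alone
    fair_share = link_kbps / max(len(flows_kbps), 1)
    return [fid for fid, rate in flows_kbps.items()
            if rate > hog_factor * fair_share]

flows = {"voip": 80, "web": 300, "download": 9000}
print(flows_to_throttle(flows, link_kbps=10_000))  # ['download']
```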
We have seen results where a customer can increase their user base by as much as 50 percent and still have decent response times for interactive cloud applications.
To learn more, contact us; our engineering team is more than happy to go over your specific situation to see if we can help you.

Crossing a Chasm, Transitioning From Packet Shaping to the Next Generation Bandwidth Shaping Technology



By Art Reisman

CTO, APconnections

Even though I would self-identify as an early adopter of new technology, when I look at my real-life behavior, I tend to resist change and hang on to technology that I am comfortable with.  Suffice it to say, I usually need an event or a gentle push to get over my resistance.

Given that technology change is uncomfortable, what follows is a gentle push, or perhaps a mild shove, to help anybody who is looking to pull the trigger on moving away from packet shaping to a more sustainable, cost-effective alternative.

First off, let’s look at why packet shaping (Layer 7 deep packet inspection) technologies are popular.

“A good layer 7 based tool creates the perception of complete control over your network. You can see what applications are running, how much bandwidth they are using, and make adjustments to flows to meet your business objectives.”

Although the above statement appears idyllic, the reality of implementing packet shaping, even at its prime, was at best only 60 percent accurate.  The remaining 40 percent of traffic could never be classified, and thus had to be shaped based on guesswork or faith.

Today, the accuracy of packet classification continues to slip. Security concerns are forcing most content providers to adopt encryption. Encrypted traffic cannot be classified.

In an effort to stay relevant, companies have moved away from deep packet inspection toward classifying traffic by source and destination (source IP addresses are never encrypted and thus always visible).

If your packet shaping device knows the address range of a content provider, it can safely assume a traffic type by examining the source IP address.  For example, YouTube traffic emanates from source addresses owned by Google.  The drawback with this method is that savvy users can easily hide their sources by using any one of the publicly available VPN utilities out there.  The personal VPN market is exploding as individual users move to VPN tunneling services for all their home browsing.

The combination of VPN tunnels and encrypted content is slowly transforming the best application classifiers into paperweights.

So, what are the alternatives?  Is there something better?

Yes, if you can let go of the concept of controlling specific traffic by type, you can find viable alternatives.  As our title suggests, you must “cross the chasm” and surrender to a new way of bandwidth shaping, where decisions are based on usage heuristics rather than absolute identification.

What is a heuristic-based shaper?

Our heuristic-based bandwidth shapers borrow from the world of computer science and a CPU scheduling technique called shortest job first (SJF).  In today’s world, a “job” is synonymous with an application.  You have likely, unknowingly, experienced the benefits of this style of scheduler when using a Unix-based laptop such as a Mac, or a Linux distribution such as Ubuntu.  Unlike older Windows operating systems, where one application could lock up your computer, such lock-ups are rare on Linux.  Linux uses a scheduler that allows preemption to let other applications in during peak times, so they are not starved for service.  Simply put, a computer using SJF will pick the application it thinks is going to use the least amount of time and run it first, or preempt a hog to let another application in.
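The core idea of SJF can be shown in a few lines of Python (the job names and time estimates are made up for illustration):

```python
def sjf_order(jobs: dict) -> list:
    """Order pending jobs shortest-estimated-run-time first."""
    return sorted(jobs, key=jobs.get)

# Three pending "jobs" with estimated run times in seconds: the
# tiny interactive ones run ahead of the hog.
pending = {"video_encode": 120.0, "keystroke": 0.01, "page_load": 0.5}
print(sjf_order(pending))  # ['keystroke', 'page_load', 'video_encode']
```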

In the world of bandwidth shaping we do not have the issue of contended CPU resources, but we do have an overload of Internet applications vying for bandwidth on a shared link.  The NetEqualizer uses SJF-type techniques to preempt users who are dominating a bandwidth link with large downloads and other hogs. Although the NetEqualizer does not specifically classify these hogging applications by type, it does not matter: the hogging applications, such as large downloads and high-resolution video, are given lower priority by virtue of their large footprint alone.  Thus the business-critical interactive applications, with their smaller bandwidth consumption, get serviced first.

Summary

The issue we often see with switching to heuristic shaping technology is that it means giving up the absolute, control-oriented approach offered by packet shaping.  But sticking with deep packet inspection and expecting to retain control over your network is becoming impossible, hence something must change.

The new heuristic model of bandwidth shaping accomplishes priority for interactive cloud applications, and the implementation is simple and clean.

A Packet Shaper Alternative


We generally don’t market the NetEqualizer product as an alternative to any particular competitor. NetEqualizer stands on its own; however, many of our customers are former Blue Coat PacketShaper users, and their only complaint with our product is that they wish they could have found us sooner.

If you are looking for something simpler and lower cost, with a rock-solid track record of solving congestion issues on network links, you have come to the right place.

The basic premise of our technology is shaping by behavior-based heuristics. Although that might sound a bit different from shaping by application, it is really quite effective and easy to use.  More importantly, it is becoming the best option in a world where the Layer 7 techniques used by Blue Coat PacketShaper, Allot NetEnforcer, and Exinda are unable to identify signatures due to increased content encryption.

Feel free to contact us, or any of our reference customers who have switched to our technology, to learn more.


Bandwidth Shaping Shake-Up: Is Your Packet Shaper Obsolete?


If you went to sleep in 2005 and woke up 10 years later you would likely be surprised by some dramatic changes in technology.

  • Smart cars that drive themselves are almost a reality.
  • The desktop PC is no longer a consumer product.
  • Wind farms now line the highways of rural America.
  • Layer 7 shaping technology is clinging to life, crushing the financials of several companies that bet the house on it.

What happened to layer 7 and Packet Shaping?

In the early 2000s, all the rage in traffic classification was the ability to put different types of bandwidth traffic into labeled buckets and assign a priority to each. Akin to rating your food choices on a tapas menu, network administrators enjoyed an extensive list of traffic types: YouTube, Citrix, news feeds; the list was limited only by the price and quality of the bandwidth shaper. The more expensive the traffic shaper, the more choices you had.

Starting in 2005 and continuing to this day, several forces have worked against the Layer 7 paradigm.

  • The price of bulk bandwidth went into a free fall, dropping much faster than the relatively fixed cost of a bandwidth shaper.  The business case for buying a bandwidth shaper to conserve bandwidth became much tighter, and some companies that had been riding high saw their stock prices collapse.
  • Internet traffic became invisible and impossible to identify with the spread of encryption. A Layer 7 traffic classifier cannot see inside HTTPS or a VPN tunnel, so as the share of encrypted traffic rises it essentially becomes a big, expensive albatross with little value.
  • The FCC’s Net Neutrality ruling further dampened a portion of the Layer 7 market. For years ISPs had used Layer 7 technology to give preferential treatment to different types of traffic.
  • Cloud-based services use less complex architectures. Companies can consolidate on one simplified, central bandwidth shaper, whereas before they might have had several across their various WAN links and network segments.

So where does this leave the bandwidth shaping market?

There is still some demand for Layer 7 shapers, particularly in countries like China, where they attempt to control everything.  However, in Europe and the US, the trend is toward more basic controls that do not violate the FCC rule, cost less, and use some form of intelligent fairness rules, such as:

  • Quotas, as in your cell phone data plan.
  • Fairness-based heuristics (Equalizing), which are gaining momentum: a lower price point, and congestion prevention that does not violate the FCC ruling.
  • Basic rate limits, as in your wired ISP’s 20-megabit plan, often implemented on a basic router rather than a specialized shaping device.
  • No shaping at all, where pipes are so large there is no need to ration bandwidth.
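The “basic rate limit” bullet above is typically implemented as a token bucket; here is a minimal sketch in Python (the rates and burst size are illustrative):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill at `rate_kbps`,
    and each packet spends tokens equal to its size in kilobits."""
    def __init__(self, rate_kbps: float, burst_kbits: float):
        self.rate = rate_kbps
        self.capacity = burst_kbits
        self.tokens = burst_kbits   # start with a full burst allowance
        self.last = 0.0

    def allow(self, now: float, packet_kbits: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_kbits:
            self.tokens -= packet_kbits
            return True
        return False  # over the cap: drop or queue the packet

# A "20 megabit plan" with a one-megabit burst allowance.
bucket = TokenBucket(rate_kbps=20_000, burst_kbits=1_000)
print(bucket.allow(0.0, 12))  # True
```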

Will Shaping be around in 10 years?

Yes, consumers and businesses will always find ways to use all their bandwidth and more.

Will price points for bandwidth continue to drop?

I am going to go against the grain here and say bandwidth prices will flatten out in the near future.  Prices slid over the last decade for several reasons that are no longer in play.

The biggest driver of price drops was the wide adoption of wavelength-division multiplexing (WDM) on carrier lines from 2005 to the present. There was already a good bit of fiber in the ground, but the WDM innovation caused a huge jump in capacity at very little additional cost to providers.

The other factor was a major worldwide recession, during which business demand was slack.

Lastly, there are no new large carriers coming online. Competition and price wars will ease as suppliers try to increase profits.


Five Requirements for QoS and Your Cloud Computing


I received a call today from one of the largest Tier 1 providers in the world.  The salesperson on the other end was lamenting his inability to sell cloud services to his customers.  His service offerings were hot, but his customers’ Internet connections were not.  Until those customers resolved their congestion problems, they were in a holding pattern for new cloud services.

Before I finish my story, I promised a list of what a Next Generation traffic controller can do, so without further ado, here it is.

  1. Next Generation bandwidth controllers must be able to mitigate traffic flows originating from the Internet such that important cloud applications get priority.
  2. Next Generation bandwidth controllers must NOT rely on Layer 7 DPI technology to identify traffic (there is too much encryption and tunneling today for this to be viable).
  3. Next Generation bandwidth controllers must hit a price range of $5k to $10k USD for medium to large businesses.
  4. Next Generation traffic controllers must not require babysitting and adjustments from the IT staff to remain effective.
  5. A Next Generation traffic controller should adopt a heuristics-based decision model (like the one used in the NetEqualizer).

As for those businesses mentioned by the sales rep, many of them ran into bottlenecks when they moved to the cloud.  The bottlenecks were due to iOS updates and recreational “crap” crowding out the cloud application traffic on their shared Internet trunk.

Their original assumption was they could use the QoS on their routers to mitigate traffic. After all, that worked great when all they had between them and their remote business logic was a nailed up MPLS network. Because it was a private corporate link, they had QoS devices on both ends of the link and no problems with recreational congestion.

Moving to the Cloud was a wake-up call!  Think about it: when you go to the cloud, you control only one end of the link.  This means that your router-based QoS is no longer effective, and incoming traffic will crush you if you do not do something different.

The happy ending is that we were able to help our friend at BT by mitigating his customers’ bottlenecks. Contact us if you are interested in more details.

Bandwidth Control in the Cloud


The good news about cloud-based applications is that, in order to be successful, they must be fairly lightweight in terms of their bandwidth footprint. Most cloud application designers keep their applications’ data footprints fairly small. A poorly designed cloud application that required large amounts of data transfer would not get good reviews and would likely fizzle out.

The bad news is that cloud applications must share your Internet link with recreational traffic, and recreational traffic is often bandwidth-intensive, with no intention of playing nice when sharing a link.

For businesses, a legitimate concern is having their critical cloud-based applications starved for bandwidth. When this happens, the applications can perform poorly or lock up, creating a serious drop in productivity.


If you suspect you have bandwidth contention impacting the performance of a critical cloud application, the best place to start your investigation is with a bandwidth controller/monitor that can show you the basic footprint of how much bandwidth an application is using.

Below is a quick screenshot from our NetEqualizer that I often use when troubleshooting a customer link. It gives me a nice snapshot of utilization. I can sort the heaviest users by their bandwidth footprint, and can then click on a convenient DNS lookup tab to see who they are.


In my next post I will detail some typical bandwidth planning metrics for moving to the cloud. Stay tuned.

Death to Deep Packet Inspection?


A few weeks ago, I wrote an article on how I was able to watch YouTube while on a United flight, bypassing their layer 7 filtering techniques. Following up today, I was not surprised to see a few other articles on the subject popping up recently.

Stealth VPNs By-Pass DPI

How to By Pass Deep Packet Inspection

Encryption Death to DPI

I also recently heard from a partner company that Meraki/Cisco was abandoning the WAN DPI technology in their access points.  I am not sure from the details whether this was due to poor performance from DPI, but that is what I suspect.

Lastly, even the US government is annoyed that much of the data it formerly had easy access to is now being encrypted by tech companies to protect their customers’ privacy.

Does this recent storm of chatter on the subject spell the end of commercial deep packet inspection? In my opinion, no, not in the near term. The lure of DPI is so strong that preaching against it is like Galileo telling the church to shove off; it is going to take some time. And technically, there are still many instances where DPI works quite well.

Does Your School Have Enough Bandwidth for On-line Testing?


K-12 schools are all rapidly moving toward “one-for-one” programs, where every student has a computer, usually a laptop. Couple this with standardized, cloud-based testing services, and you have the potential for an Internet gridlock during the testing periods. Some of the common questions we hear are:

How will all of these students using the cloud affect our internet resource?

Will there be enough bandwidth for all of those students using on-line testing?

What type of QoS should we deploy, or should we buy more bandwidth?

The good news is that most cloud testing services are designed with a fairly modest bandwidth footprint.

For example, a student connection to a cloud testing application will average around 150 kbps (kilobits per second).

In a perfect world, a 40-megabit link could handle roughly 260 students simultaneously doing on-line testing, as long as there was no other major traffic.

On the other hand, a video stream may average 1,500 kbps or more.

A raw download, such as an iOS update, may take as much as 15,000 kbps, that is, 100 times more bandwidth than a student taking an on-line test.
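The capacity figures above come from simple division; here is a small Python sketch using these per-stream estimates (and ignoring protocol overhead):

```python
def max_simultaneous_streams(link_mbps: float, per_stream_kbps: float) -> int:
    """How many streams of a given rate fit on a link, ignoring overhead."""
    return int(link_mbps * 1000 // per_stream_kbps)

print(max_simultaneous_streams(40, 150))     # 266 students testing
print(max_simultaneous_streams(40, 1500))    # 26 video streams
print(max_simultaneous_streams(40, 15000))   # just 2 iOS-style downloads
```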

A common strategy when choosing a bandwidth controller to support on-line testing is to find a tool that will specifically identify the on-line testing service and the non-essential applications, thus allowing the school’s IT staff to make adjustments giving the testing higher priority (QoS). Yes, this strategy seems logical, but there are several drawbacks:

  • It requires a fairly sophisticated form of bandwidth control, and can be fairly labor-intensive and expensive.
  • Much of the public Internet traffic may be encrypted or tunneled, and hard to identify.
  • Another complication with trying to give Internet traffic traditional priority is that a typical router cannot prioritize incoming traffic, and most of the test traffic is incoming (from the outside in). We detailed this phenomenon in our post about QoS and the Internet.

The key is not to make the problem more complicated than it needs to be. If you just look at the footprint of the streams coming into the testing facility, you can assume, from our observation, that streams of around 150 kbps are of higher priority than the larger streams, and simply throttle the larger streams. Doing so will ensure there is enough bandwidth for the testing service connections to the students. The easiest way to do this is with a heuristic-based bandwidth controller, a class of bandwidth shapers that dynamically give priority to smaller streams by slowing down larger streams.
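The footprint rule described above can be sketched as follows; the 300 kbps threshold and 200 kbps cap are invented values, and a real heuristic shaper would adjust them dynamically:

```python
def shape(streams_kbps: dict, hog_threshold: float = 300,
          hog_cap_kbps: float = 200) -> dict:
    """Return each stream's allowed rate during a testing window:
    small streams pass untouched, large streams are capped."""
    return {sid: (rate if rate <= hog_threshold else hog_cap_kbps)
            for sid, rate in streams_kbps.items()}

streams = {"student_test": 150, "video": 1500, "ios_update": 15000}
print(shape(streams))
# {'student_test': 150, 'video': 200, 'ios_update': 200}
```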

The other option is to purchase more bandwidth, or in some cases a combination of more bandwidth and a heuristic-based bandwidth controller, to be safe.

Please contact us for a more in-depth discussion of options.

For more information on cloud usage in K-12 schools, check out these posts:

Schools View Cloud Infrastructure as a Viable Option

K-12 Education is Moving to the Cloud

For more information on Bandwidth Usage by Cloud systems, check out this article:

Know Your Bandwidth Needs: Is Your Network Capacity Big Enough for Cloud Computing?

Miracle Product Fixes Slow Internet on Trains, Planes, and the Edge of the Grid


My apologies for the cheesy lead-in. I am just having some lighthearted fun after my return from a seminar in the UK, where the newsstands are full of sensational headlines.

A few years ago I got a call from an agency that maintained the Internet service for the national train service of a European country (Finland). The scheme they used to provide Internet access on their trains was to put a 4G wireless connection on every train and then relay the data to a standard WiFi connection for customers on board.  The country has good 4G coverage throughout, hence this was the most practical way to get Internet to a moving vehicle.

Using this method they were able to pipe “mobile” WiFi into trains running around the country.  But when the trains got a bit crowded, the service became useless during peak times: all the business travelers on the train were funneling through what was essentially a 3 or 4 megabit connection.

Fortunately, we were able to work with them to come up with a scheme to alleviate the congestion. The really cool part of the solution was that we were able to put a central NetEqualizer at their main data center; there was no need to put a device on each train. Many of the solutions to this type of problem, whether developed internally by satellite providers or by airlines offering WiFi, require a local controller at the user end, so the cost and logistics of those solutions are much higher than using a centralized NetEqualizer.

We have talked about using a centralized NetEqualizer for MPLS networks, but sometimes it is hard to visualize using a central bandwidth controller for other hub-and-spoke connections, such as the train problem. If you would like more information on the details, we would be more than happy to provide them.

Complimentary NetEqualizer Bandwidth Management Seminar in the UK


Press Release issued via BusinessWire.

April 08, 2015 01:05 AM Mountain Daylight Time

LAFAYETTE, Colo.–(BUSINESS WIRE)–APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is excited to announce its upcoming complimentary NetEqualizer Technical Seminar on April 23rd, 2015, in Oxfordshire, United Kingdom, hosted by Flex Information Technology Ltd.

This is not a marketing presentation; it is run by and created for technical staff.

Join us to meet APconnections’ CTO Art Reisman, a visionary in the bandwidth management industry (check out Art’s blog). The Seminar will feature in-depth, example-driven discussions of network optimization and provide participants with a first-hand look at NetEqualizer technology.

Seminar highlights include:

  • Learn how behavior-based shaping provides superior QoS for Internet traffic
  • Optimize business-critical VoIP, email, web browsing, SaaS & web applications
  • Control excessive bandwidth use by non-priority applications
  • Gain control over P2P traffic
  • Get visibility into your network with real-time reporting
  • See the NetEqualizer in action! We will demo a live system.

We welcome both customers and those just beginning to think about bandwidth shaping. The Seminar will take place at 14:30, Thursday, April 23rd, at Flex Information Technology Ltd in Grove Technology Park, Wantage, Oxfordshire OX12 9FF.

Online registration, including location and driving directions, is available here. There is no cost to attend, but registration is requested. Questions? Contact Paul Horseman at paul@flex.co.uk or call +44(0)333.101.7313.

About Flex Information Technology Ltd
Flex Information Technology is a partnership founded in 1993 to provide maintenance and support services to a wide range of customers with large mission-critical systems, particularly in the Newspaper and Insurance sectors. In 1998 the company began focusing on support for small to medium businesses. Today we provide “Smart IT Solutions combined with Flexible and Quality Services for Businesses” to a growing, satisfied customer base. We have accounts with leading IT suppliers and hardware and software distributors in the UK.

About APconnections
APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado, USA. Our flexible and scalable network traffic management solutions can be found at thousands of customer sites in public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, Internet providers, libraries, and government agencies on six continents.

Contacts

APconnections, Inc.
Sandy McGregor, 303-997-1300 x104
sandym@apconnections.net
or
Flex Information Technology Ltd
Paul Horseman, +44(0)333 101 7313
paul@flex.co.uk

So You Think You Have Enough Bandwidth?


There are actually only two tiers of bandwidth: video for all, and not video for all. It is a fairly black-and-white problem. If you secure enough bandwidth that 25 to 30 percent of your users can simultaneously watch video feeds, and you still have some headroom on your circuit, congratulations: you have reached bandwidth nirvana.

Why is video the lynchpin in this discussion?

Aside from the occasional iOS/Windows update, most consumers really don’t use that much bandwidth on a regular basis. Skype, chat, email, and gaming, all used together, do not consume as much bandwidth as video. Hence, the marker species for congestion is video.

Below, I present some of the metrics to see if you can mothball your bandwidth shaper.

1) How to determine future bandwidth demand.
Believe it or not, you can outrun your bandwidth demand if your latest bandwidth upgrade is large enough to handle the average video load per customer.  Then it is possible that no further upgrades will be needed, at least in the foreseeable future.

In the “video for all” scenario, the rule of thumb is to assume 25 percent of your subscribers are watching video at any one time.  If you still have 20 percent of your bandwidth left over at that load, you have reached the video-for-all threshold.

To put some numbers to this:
Assume 2,000 subscribers and a 1 gigabit link. The average video feed requires about 2 megabits (note that some HD video is higher than this).  This would mean that supporting video for 25 percent of your subscribers would use the entire 1 gigabit, leaving nothing for anybody else, hence you will run out of bandwidth.

Now if you have 1.5 gigabits for 2,000 subscribers, you have likely reached the video-for-all threshold, and most likely you will be able to support them without any advanced intelligent bandwidth control.  A simple 10-megabit rate cap per subscriber is likely all you would need.
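The threshold arithmetic above can be captured in a small helper; the 25 percent watch rate, 2 megabits per stream, and 20 percent headroom mirror the rules of thumb in the text:

```python
def video_for_all(link_mbps: float, subscribers: int,
                  video_mbps: float = 2.0, watch_fraction: float = 0.25,
                  headroom: float = 0.20) -> bool:
    """True if the assumed fraction of subscribers can stream video
    while the stated headroom is still left over."""
    peak_video_mbps = subscribers * watch_fraction * video_mbps
    return peak_video_mbps <= link_mbps * (1 - headroom)

print(video_for_all(1000, 2000))  # False: video alone fills the gigabit
print(video_for_all(1500, 2000))  # True: threshold reached
```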

2) Honeymoon periods are short-lived.
The reprieve in congestion after a bandwidth upgrade is usually short-lived because the operator either does not have a good intelligent bandwidth control solution, or removes their existing solution, mistakenly thinking they have reached the “video for all” level.  In reality, they are still in video-not-for-all territory, lulled into a false sense of security for a brief honeymoon period.  Right after the upgrade things are okay, because it takes a while for a user base to fill the void of a new bandwidth upgrade.

Bottom line: unless you have the numbers to support 25 to 30 percent of your user base running video, you will need some kind of bandwidth control.

Application Shaping and Encryption on a Collision Course


Art Reisman, CTO APconnections

I have had a few conversations lately in which I have mentioned that, due to increased encryption, application shaping is really no longer viable.  This statement, without context, evokes some quizzical stares, and thus inspired me to expound.

I believe that due to increased use of encryption, Application Shaping is really no longer viable…

Yes, there are still ways to censor traffic and web sites, but shaping it, as in allocating a fixed amount of bandwidth for a particular type of traffic, is becoming a thing of the past. And here is why.

First a quick primer in how application shaping works.

When an IP packet with data comes into the application shaper, the shaper opens the packet and looks inside.  In the good old days the shaper would see the data inside the packet the same way it appeared in context on a web page. For example, when you loaded up the post that you are reading now, the actual text was transported from the WordPress host server across the Internet to you, broken up into a series of packets.  The only difference between the text on the page and the text crossing the Internet is that the text in the packets is chopped up into segments (about 1,500 characters per packet is typical).

Classifying traffic in a packet shaper requires intercepting packets in transport, and looking inside them for particular patterns that are associated with applications (such as YouTube, Netflix, BitTorrent, etc.).  These identifying sequences are known as application patterns. The packet shaping appliance looks at the text inside the packets and attempts to identify unique sequences of characters, using a pattern matcher. Packet shaping companies, at least the good ones, spend millions of dollars a year keeping up with the various patterns associated with ever-changing applications.
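As a rough illustration, a pattern matcher can be as simple as scanning each payload for known byte sequences. The signature table below is a toy example, not any real product's database, though the `\x13BitTorrent protocol` prefix really is the well-known opening of a BitTorrent handshake:

```python
# Toy application-signature table. Real shapers maintain thousands of
# signatures and update them constantly; these two are for illustration.
SIGNATURES = {
    "bittorrent": [b"\x13BitTorrent protocol"],   # standard handshake prefix
    "http": [b"GET ", b"POST ", b"HTTP/1."],      # plaintext HTTP markers
}

def classify(payload: bytes) -> str:
    """Return the first application whose pattern appears in the payload."""
    for app, patterns in SIGNATURES.items():
        if any(p in payload for p in patterns):
            return app
    return "unknown"
```

In practice a shaper does this at line rate in hardware or kernel space, but the core idea is the same: match bytes in flight against a curated signature list.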

Perhaps you have used HTTPS or SSH. These are standard security protocols, and HTTPS is built into a growing number of websites. When you access a web page from a URL starting with HTTPS, the website is using encryption, and the text gets scrambled in a different way each time it is sent out.  Since the scrambling is unique for every user accessing the site, there is no one set pattern, and so a shaper using application shaping cannot classify the traffic. Hence the old methods used by packet shapers are no longer viable.
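The effect is easy to simulate. The sketch below is not TLS; it simply XORs the same plaintext with a fresh random keystream on each call, mimicking the one property that matters here: identical application data produces unrelated ciphertext on every connection, leaving nothing stable for a pattern matcher.

```python
import os

def toy_encrypt(plaintext: bytes) -> bytes:
    # Stand-in for a real cipher: XOR with a fresh random keystream.
    keystream = os.urandom(len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

request = b"GET /watch?v=dQw4w9WgXcQ HTTP/1.1"
c1 = toy_encrypt(request)
c2 = toy_encrypt(request)
# c1 and c2 are (almost certainly) different byte strings, and neither
# contains the plaintext markers a signature matcher would look for.
```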

Does this also mean that you cannot block a website with a Web Filter when HTTPS is used?

I deliberately posed this question to highlight the difference between filtering a site and using application shaping to classify traffic. A site cannot typically hide the originating URL, as the encryption does not begin until after an initial handshake. A web filter blocks a site based on that URL, so blocking technology is still viable as a way to prevent access to a website. Once the initial URL is known, however, the data transfer is often set up on another transport port, and no URL is involved in the transfer itself. Thus the packet shaper has no idea where the data stream came from, nor is there any discernible pattern, due to the encryption.
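A minimal sketch of the filtering side, assuming the hostname has already been extracted from the initial handshake (in TLS this is typically read from the unencrypted Server Name Indication field). The blocklist entries here are placeholders:

```python
# Hypothetical blocklist; a real web filter manages millions of entries
# grouped into categories.
BLOCKLIST = {"blocked.example.com", "badcategory.example"}

def should_block(hostname: str) -> bool:
    """Block an exact match or any subdomain of a blocked domain."""
    host = hostname.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)
```

The check itself is trivial; as discussed below, the hard part of a web filter is keeping the list current.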

So the short answer is that you can block a website using a web filter, even when https is used.  However, as we have seen, the same does not apply to shaping the traffic with an application shaper.

The Technology Differences Between a Web Filter and a Traffic Shaper


First, a couple of definitions, so we are all on the same page.
A Web Filter is basically a type of specialized firewall with a configurable list of URLs.  Using a Web Filter, a Network Administrator can completely block specific web sites, or block complete categories of sites, such as pornography.

A Traffic Shaper is typically deployed to change the priority of certain kinds of traffic.  It is used where blocking traffic completely is not required, or is not an acceptable practice.  For example, the mission of a typical Traffic Shaper might be to allow users to get into their Facebook accounts, while limiting their bandwidth so as not to overshadow other more important activities.  With a shaper the idea is to limit (shape) the total amount of data traffic for a given category.
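To make "limit the total amount of data traffic for a given category" concrete, here is a minimal token-bucket sketch of the kind of per-category rate limiter a shaper might apply. The rates are arbitrary, and a real shaper does this per-flow in the kernel or on dedicated hardware:

```python
import time

class TokenBucket:
    """Allow up to `rate` bytes/second, with bursts up to `capacity` bytes."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# e.g. cap a "social media" category at 1 megabit/s (125,000 bytes/s)
social_bucket = TokenBucket(rate=125_000, capacity=250_000)
```

Packets that fail the `allow()` check would be queued or dropped, which is what throttles the category without blocking it outright.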

From a technology standpoint, building a Web Filter is a much easier proposition than creating a Traffic Shaper.  This is not to demean the value or effort that goes into creating a good Web Filter.  When I say “easier”, I mean this from a core technology point of view.  Building a good Web Filter product is not so much a technology challenge, but more of a data management issue. A Web Filter worth its salt must be aware of potentially millions of various websites that are ever-changing. To manage these sites, a Web Filter product must be constantly getting updated. The product company supporting the Web Filter must search the Web, constantly indexing new web sites and their contents, and then passing this information into the Web Filter product. The work is ongoing, but not necessarily daunting in terms of technology prowess.  The actual blocking of a Web site is simply a matter of comparing a requested URL against the list of forbidden web sites and blocking the request (dropping the packets).
A Traffic Shaper, on the other hand, has a more daunting task than the Web Filter. This is because, unlike the Web Filter, a Traffic Shaper kicks in after the base URL has been loaded.  I’ll walk through a generic scenario to illustrate this point.  When a user logs into their Facebook account, the first URL they hit is a well-known Facebook home page.  The initial request coming from their computer to the Facebook home page is easy to spot by the Web Filter, and if you block it at that first step, that is the end of the Facebook session.  Now, if you say to your Traffic Shaper “I want you to limit Facebook traffic to 1 megabit”, the task gets a bit trickier.  This is because once you are logged into a Facebook page, subsequent requests are not that obvious. Suppose a user downloads an image or plays a shared video from their Facebook screen. There is likely no context for the Traffic Shaper to know that the URL of the video is actually coming from Facebook.  Yes, to the user it is coming from their Facebook page, but when they click the link to play the video, the Traffic Shaper only sees the video link – it is not a Facebook URL any longer. On top of that, oftentimes the Facebook page and its contents are encrypted for privacy.
For these reasons a traditional Traffic Shaper inspects the packets to see what is inside.  The traditional Traffic Shaper uses Deep Packet Inspection (DPI) to look into the data packet to see if it looks like Facebook data. This is not an exact science, and with the widespread use of encryption, the ability to identify traffic with accuracy is becoming all but impossible.
The good news is that there are other heuristic ways to shape traffic that are gaining traction in the industry.  The bad news is that many end customers continue to struggle with diminishing accuracy of traditional Traffic Shapers.
For more in depth information on this subject, feel free to e-mail me at art@apconnections.net.
By Art Reisman, CTO APconnections

Changing Times: Five Points to Consider When Trying to Shape Internet Traffic


By Art Reisman, CTO, APconnections www.netequalizer.com

1) Traditional Layer 7 traffic shaping methods are NOT able to identify encrypted traffic. In fact, short of an NSA back door built into some encryption schemes, traditional Layer 7 traffic shapers are slowly becoming obsolete as the percentage of encrypted traffic expands.
2) As of 2014, it was estimated that up to 6 percent of the traffic on the Internet was encrypted, and this was expected to double within the next year or so.
3) It is possible to identify the source and destination of traffic even on encrypted streams. The sending and receiving IPs of encrypted traffic are never encrypted; hence large content providers, such as Facebook, YouTube, and Netflix, may be identified by their IP addresses. But there are some major caveats.

– It is common for the actual content from major content providers to be served from regional servers under different domain names (often registered to third parties). Simply trying to identify traffic content from its originating domain is too simplistic.

– I have been able to trace proxied traffic back to its originating domain with accuracy by first running some experiments. I start by initiating a download from a known source, such as YouTube or Netflix, and from that I can figure out the actual IP address of the proxy serving the download. I then know that this particular IP is most likely the source of any subsequent YouTube traffic. The shortfall of relying on this technique is that IP addresses change regionally, and there are many of them. You cannot assume that what was true today will be true tomorrow with respect to any proxy domain serving up content. Think of the domains used for content like a leased food cart that changes menus each week.
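One hedged sketch of this kind of heuristic: the reverse-DNS names of big providers' content servers often contain a recognizable token (for example `googlevideo`, `nflxvideo`, `fbcdn`), so a learned IP-to-provider table can be seeded from PTR lookups. The token list below is illustrative only and, as noted above, this mapping goes stale:

```python
# Illustrative tokens seen in content-server reverse-DNS names; these
# change over time and must be re-learned, as the caveat above warns.
PROVIDER_TOKENS = {
    "googlevideo": "YouTube",
    "nflxvideo": "Netflix",
    "fbcdn": "Facebook",
}

def provider_from_ptr(ptr_name: str) -> str:
    """Guess the content provider behind a server from its PTR record name."""
    host = ptr_name.lower()
    for token, provider in PROVIDER_TOKENS.items():
        if token in host:
            return provider
    return "unknown"
```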

4) Some traffic can be identified by behavior, even when it is encrypted. For example, the footprint of a single computer with a large connection count can usually be narrowed down to one of two things: either BitTorrent, or some kind of virus on the local computer. BitTorrent clients tend to open many small connections and hold them open for long periods of time. But again, there are caveats. Legitimate BitTorrent providers, such as universities distributing public material, will use just a few connections to accomplish the data transfer, whereas consumer-grade BitTorrent clients, often used for illegal file sharing, may use hundreds of connections to move a file.
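The connection-count heuristic is straightforward to sketch. Given a window of observed flows, flag any internal host holding an unusually large number of simultaneous connections. The threshold of 100 is a stand-in; real deployments would tune it:

```python
from collections import Counter

def flag_heavy_connectors(flows, threshold=100):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples seen in a window.

    Returns source IPs whose connection count meets the threshold,
    i.e. candidates for BitTorrent or malware per the heuristic above.
    """
    counts = Counter(src for src, _, _ in flows)
    return sorted(ip for ip, n in counts.items() if n >= threshold)
```

Note this inspects only flow metadata (addresses and ports), which is exactly why it still works when the payloads are encrypted.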

5) I have been alerted to solutions that require organizations to retrofit all endpoints with pre-encryption utilities, allowing the traffic shaper to receive data before it is encrypted.  I am not privy to the mechanics of how this is implemented, but I would assume that outside of very tightly controlled networks, such a method would be a big imposition on users.
