Five Requirements for QoS and Your Cloud Computing


I received a call today from one of the largest Tier 1 providers in the world.  The salesperson on the other end was lamenting his inability to sell cloud services to his customers.  His service offerings were hot, but his customers' Internet connections were not.  Until those customers resolved their congestion problems, they were in a holding pattern for new cloud services.

Before I finish my story, I promised a list of what a next-generation traffic controller must be able to do, so without further ado, here it is.

  1. Next Generation bandwidth controllers must be able to mitigate traffic flows originating from the Internet such that important cloud applications get priority.
  2. Next Generation bandwidth controllers must NOT rely on Layer 7 DPI technology to identify traffic (there is too much encryption and tunneling today for this to be viable).
  3. Next Generation bandwidth controllers must hit a price range of $5k to $10k USD for medium to large businesses.
  4. Next Generation traffic controllers must not require babysitting and adjustments from the IT staff to remain effective.
  5. A Next Generation traffic controller should adopt a heuristics-based decision model, like the one used in the NetEqualizer (see the sketch below).
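To make point 5 concrete, here is a minimal sketch of a heuristics-based ("equalizing") decision model. Everything in it is illustrative: the flow structure, link capacity, thresholds, and penalty rate are assumptions made for the example, not NetEqualizer internals.

```python
# Hedged sketch of behavior-based shaping: when the trunk nears
# saturation, penalize the largest flows instead of classifying
# traffic by application (no Layer 7 DPI required).
# All names and thresholds below are invented for illustration.

from dataclasses import dataclass
from typing import Optional

LINK_CAPACITY_BPS = 100_000_000   # assumed 100 Mbps Internet trunk
CONGESTION_RATIO = 0.85           # act when utilization passes 85%
PENALTY_RATE_BPS = 1_000_000      # throttle offenders to ~1 Mbps

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    rate_bps: float                     # current measured rate
    limit_bps: Optional[float] = None   # None means unthrottled

def equalize(flows: list[Flow]) -> None:
    """Throttle the heaviest flows only while the link is congested."""
    total = sum(f.rate_bps for f in flows)
    if total < CONGESTION_RATIO * LINK_CAPACITY_BPS:
        for f in flows:
            f.limit_bps = None          # no congestion: lift penalties
        return
    for f in sorted(flows, key=lambda f: f.rate_bps, reverse=True):
        if total < CONGESTION_RATIO * LINK_CAPACITY_BPS:
            break
        total -= f.rate_bps - PENALTY_RATE_BPS
        f.limit_bps = PENALTY_RATE_BPS  # biggest flows yield first
```

The design point worth noticing is that nothing here inspects payloads, so encryption does not degrade it; the only inputs are flow rates and link utilization, which is what lets it satisfy requirements 2 and 4 above.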

As for those businesses the sales rep mentioned, many of them ran into bottlenecks when they moved to the cloud.  The bottlenecks were due to iOS updates and recreational "crap" killing the cloud application traffic on their shared Internet trunk.

Their original assumption was that they could use the QoS on their routers to mitigate traffic. After all, that worked great when all that sat between them and their remote business logic was a nailed-up MPLS network. Because it was a private corporate link, they had QoS devices on both ends of the link and no problems with recreational congestion.

Moving to the cloud was a wake-up call!  Think about it: when you go to the cloud, you control only one end of the link.  This means that your router-based QoS is no longer effective, and incoming traffic will crush you if you do not do something different.

The happy ending is that we were able to help our friend at BT by mitigating his customers' bottlenecks. Contact us if you are interested in more details.

Complimentary NetEqualizer Bandwidth Management Seminar in the UK


Press Release issued via BusinessWire.

April 08, 2015 01:05 AM Mountain Daylight Time

LAFAYETTE, Colo.–(BUSINESS WIRE)–APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is excited to announce its upcoming complimentary NetEqualizer Technical Seminar on April 23rd, 2015, in Oxfordshire, United Kingdom, hosted by Flex Information Technology Ltd.

Join us to meet APconnections’ CTO Art Reisman, a visionary in the bandwidth management industry (check out Art’s blog). This is not a marketing presentation; it is run by and created for technical staff. The Seminar will feature in-depth, example-driven discussions of network optimization and provide participants with a first-hand look at NetEqualizer technology.

Seminar highlights include:

  • Learn how behavior-based shaping provides superior QoS for Internet traffic
  • Optimize business-critical VoIP, email, web browsing, SaaS & web applications
  • Control excessive bandwidth use by non-priority applications
  • Gain control over P2P traffic
  • Get visibility into your network with real-time reporting
  • See the NetEqualizer in action! We will demo a live system.

We welcome both customers and those just beginning to think about bandwidth shaping. The Seminar will take place at 14:30 (2:30 pm) on Thursday, April 23rd, at Flex Information Technology Ltd in Grove Technology Park, Wantage, Oxfordshire OX12 9FF.

Online registration, including location and driving directions, is available here. There is no cost to attend, but registration is requested. Questions? Contact Paul Horseman at paul@flex.co.uk or call +44(0)333.101.7313.

About Flex Information Technology Ltd
Flex Information Technology is a partnership founded in 1993 to provide maintenance and support services to a wide range of customers with large mission-critical systems, particularly in the newspaper and insurance sectors. In 1998 the company began focusing on support for small to medium businesses. Today we provide "Smart IT Solutions combined with Flexible and Quality Services for Businesses" to a growing base of satisfied customers. We have accounts with leading IT suppliers and hardware and software distributors in the UK.

About APconnections
APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado, USA. Our flexible and scalable network traffic management solutions can be found at thousands of customer sites in public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, Internet providers, libraries, and government agencies on six continents.

Contacts

APconnections, Inc.
Sandy McGregor, 303-997-1300 x104
sandym@apconnections.net
or
Flex Information Technology Ltd
Paul Horseman, +44(0)333 101 7313
paul@flex.co.uk

Application Shaping and Encryption on a Collision Course


Art Reisman, CTO APconnections

I have had a few conversations lately in which I mentioned that, due to increased encryption, application shaping is really no longer viable.  Without context, this statement evokes some quizzical stares, and thus inspired me to expound.

I believe that due to increased use of encryption, Application Shaping is really no longer viable…

Yes, there are still ways to censor traffic and websites, but shaping traffic, in the sense of allocating a fixed amount of bandwidth to a particular type of traffic, is becoming a thing of the past. Here is why.

First, a quick primer on how application shaping works.

When an IP packet with data comes into the application shaper, the shaper opens the packet and looks inside.  In the good old days, the shaper would see the data inside the packet the same way it appeared in context on a web page. For example, when you loaded the post that you are reading now, the actual text was transported from the WordPress host server across the Internet to you, broken up into a series of packets.  The only difference between the text on the page and the text crossing the Internet is that the text in the packets is chopped up into segments (about 1,500 bytes per packet is typical).

Classifying traffic in a packet shaper requires intercepting packets in transport and looking inside them for particular patterns associated with applications (such as YouTube, Netflix, BitTorrent, etc.).  This is what is called the application pattern. The packet-shaping appliance looks at the text inside the packets and attempts to identify unique sequences of characters using a pattern matcher. Packet-shaping companies, at least the good ones, spend millions of dollars a year keeping up with the various patterns associated with ever-changing applications.
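For a feel of what that pattern matching looks like, here is a toy sketch. The byte patterns below are placeholder stand-ins invented for the example; commercial signature sets are vastly larger and vendor-maintained.

```python
# Toy Layer 7 classifier: scan packet payloads for application
# signatures. The patterns here are illustrative only.

SIGNATURES: dict[bytes, str] = {
    b"BitTorrent protocol": "bittorrent",  # classic BT handshake string
    b"GET /videoplayback": "streaming",    # hypothetical pattern
}

def classify_payload(payload: bytes) -> str:
    """Return the application name if a known pattern appears."""
    for pattern, app in SIGNATURES.items():
        if pattern in payload:
            return app
    return "unknown"

print(classify_payload(b"\x13BitTorrent protocol ..."))  # -> bittorrent
print(classify_payload(b"\x8f\x02\xd1\x44\x9a"))         # ciphertext -> unknown
```

Against an encrypted payload, the second call always comes back "unknown", which is exactly the failure mode described next.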

Perhaps you have used HTTPS or SSH. These are standard security protocols built into a growing number of websites and services. When you access a web page from a URL starting with HTTPS, that website is using encryption, and the text gets scrambled in a different way each time it is sent out.  Since the scrambling is unique for every user accessing the site, there is no one set pattern, and so a shaper using application patterns cannot classify the traffic. Hence the old methods used by packet shapers are no longer viable.

Does this also mean that you cannot block a website with a Web Filter when HTTPS is used?

I deliberately posed this question to highlight the difference between filtering a site and using application shaping to classify traffic. A site cannot typically hide its hostname, because the name is exchanged in the clear during the initial TLS handshake (via the Server Name Indication extension) before encryption begins. A web filter blocks a site based on that URL or hostname, so blocking technology is still viable for preventing access to a website. Once the initial URL is known, the data transfer is often set up on another transport port, and there is no URL involved in the transfer. Thus the packet shaper has no idea where the data stream came from, nor is there any pattern that can be discerned, due to the encryption.

So the short answer is that you can block a website using a web filter, even when HTTPS is used. However, as we have seen, the same does not apply to shaping the traffic with an application shaper.
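To see why the hostname remains visible even on an HTTPS connection, here is a rough sketch of pulling the Server Name Indication out of a raw TLS ClientHello. It assumes a well-formed, non-fragmented record and skips the reassembly and validation a production filter would need.

```python
# Sketch: extract the server name a browser sends in the clear at the
# start of a TLS session. Offsets follow the ClientHello layout.

from typing import Optional

def sni_hostname(packet: bytes) -> Optional[str]:
    """Return the SNI hostname from a raw TLS ClientHello, else None."""
    try:
        if packet[0] != 0x16 or packet[5] != 0x01:
            return None                   # not a ClientHello record
        i = 43                            # record + handshake headers + random
        i += 1 + packet[i]                # skip session_id
        i += 2 + int.from_bytes(packet[i:i+2], "big")  # skip cipher_suites
        i += 1 + packet[i]                # skip compression_methods
        i += 2                            # skip total extensions length
        while i + 4 <= len(packet):
            ext_type = int.from_bytes(packet[i:i+2], "big")
            ext_len = int.from_bytes(packet[i+2:i+4], "big")
            if ext_type == 0:             # server_name extension
                name_len = int.from_bytes(packet[i+7:i+9], "big")
                return packet[i+9:i+9+name_len].decode("ascii", "replace")
            i += 4 + ext_len
    except IndexError:
        pass                              # truncated or malformed record
    return None
```

A web filter can match this name against its blocklist before any application data flows; a shaper gets no such luck with the encrypted payload that follows.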

The Technology Differences Between a Web Filter and a Traffic Shaper


First, a couple of definitions, so we are all on the same page.
A Web Filter is basically a specialized firewall with a configurable list of URLs.  Using a Web Filter, a Network Administrator can completely block specific websites, or block entire categories of sites, such as pornography.

A Traffic Shaper is typically deployed to change the priority of certain kinds of traffic.  It is used where blocking traffic completely is not required, or is not an acceptable practice.  For example, the mission of a typical Traffic Shaper might be to allow users into their Facebook accounts while limiting their bandwidth so as not to overshadow other, more important activities.  With a shaper, the idea is to limit (shape) the total amount of data traffic for a given category.
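As a concrete example of limiting a category's total traffic, here is a minimal token-bucket sketch. The rate and burst numbers are arbitrary assumptions, and a real shaper would enforce this in the kernel or data plane rather than in Python.

```python
# Minimal token-bucket rate limiter: one common mechanism for capping
# a traffic category (e.g., "Facebook at 1 megabit"). Illustrative only.

import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # refill rate in bytes/second
        self.capacity = burst_bytes       # maximum burst allowance
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Forward the packet if tokens remain; otherwise queue or drop it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# Cap an (already classified) category at 1 Mbps with a 15 KB burst.
facebook_bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=15_000)
```

The hard part, as the rest of this post explains, is not enforcing the cap but deciding which packets belong to the category in the first place.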

From a technology standpoint, building a Web Filter is a much easier proposition than creating a Traffic Shaper.  This is not to demean the value of, or the effort that goes into, creating a good Web Filter.  When I say "easier", I mean it from a core technology point of view.  Building a good Web Filter product is not so much a technology challenge as a data management issue. A Web Filter worth its salt must be aware of potentially millions of ever-changing websites. To manage these sites, a Web Filter product must be constantly updated: the company supporting it must crawl the Web, constantly indexing new websites and their contents, and then feed this information into the product. The work is ongoing, but not necessarily daunting in terms of technology prowess.  The actual blocking of a website is simply a matter of comparing a requested URL against the list of forbidden websites and blocking the request (dropping the packets).
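That final comparison step is genuinely simple; a hedged sketch (with placeholder domains invented for the example) looks something like this:

```python
# Sketch of the web-filter lookup: block a request whose host, or any
# parent domain of it, appears on the forbidden list.

BLOCKED = {"badsite.example", "gambling.example"}

def is_blocked(host: str) -> bool:
    """True if the requested host or a parent domain is forbidden."""
    parts = host.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in BLOCKED for i in range(len(parts)))

assert is_blocked("www.badsite.example")      # subdomains match too
assert not is_blocked("example.org")
```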
A Traffic Shaper, on the other hand, has a more daunting task than the Web Filter, because a Traffic Shaper kicks in after the base URL has been loaded.  I'll walk through a generic scenario to illustrate this point.

When a user logs into their Facebook account, the first URL they hit is a well-known Facebook home page.  The initial request from their computer to the Facebook home page is easy for the Web Filter to spot, and if you block it at this first step, that is the end of the Facebook session.  Now, if you say to your Traffic Shaper, "I want you to limit Facebook traffic to 1 megabit", the task gets a bit trickier.  Once a user is logged into a Facebook page, subsequent requests are not that obvious. Suppose the user downloads an image or plays a shared video from their Facebook screen. There is likely no context for the Traffic Shaper to know that the URL of the video is actually coming from Facebook.  Yes, to the user it is coming from their Facebook page, but when they click the link to play the video, the Traffic Shaper only sees the video link; it is not a Facebook URL any longer. On top of that, oftentimes the Facebook page and its contents are encrypted for privacy.
For these reasons, a traditional Traffic Shaper inspects packets to see what is inside, using Deep Packet Inspection (DPI) to check whether the data looks like Facebook data. This is not an exact science, and with the widespread use of encryption, identifying traffic with accuracy is becoming all but impossible.
The good news is that other, heuristic ways to shape traffic are gaining traction in the industry.  The bad news is that many end customers continue to struggle with the diminishing accuracy of traditional Traffic Shapers.
For more in-depth information on this subject, feel free to e-mail me at art@apconnections.net.
By Art Reisman, CTO APconnections

Changing Times: Five Points to Consider When Trying to Shape Internet Traffic


By Art Reisman, CTO, APconnections www.netequalizer.com

1) Traditional Layer 7 traffic-shaping methods are NOT able to identify encrypted traffic. In fact, short of an NSA back door built into some encryption schemes, traditional Layer 7 traffic shapers are slowly becoming obsolete as the percentage of encrypted traffic expands.
2) As of 2014, it was estimated that up to 6 percent of the traffic on the Internet was encrypted, and this was expected to double within the next year or so.
3) It is possible to identify the source and destination of traffic even on encrypted streams. The sending and receiving IPs of encrypted traffic are never encrypted, so large content providers, such as Facebook, YouTube, and Netflix, may be identified by their IP addresses, but there are some major caveats:

– It is common for the actual content from major content providers to be served from regional servers under different domain names (often registered to third parties). Simply trying to identify traffic content from its originating domain is too simplistic.

– I have been able to trace proxied traffic back to its originating domain with accuracy by first doing some experiments. I start by initiating a download from a known source, such as YouTube or Netflix, and then figure out the actual IP address of the proxy that the download is coming from. From this, I know that this particular IP is most likely the source of any subsequent YouTube traffic. The shortfall of relying on this technique is that IP addresses change regionally, and there are many of them. You cannot assume that what is true today will be true tomorrow for any proxy domain serving up content. Think of the domains used for content like a leased food cart that changes menus each week. A rough sketch of this experiment appears after the list.

4) Some traffic can be identified by behavior, even when it is encrypted. For example, the footprint of a single computer with a large connection count can usually be narrowed down to one of two things: either BitTorrent, or some kind of virus on the local computer. BitTorrent clients tend to open many small connections and hold them open for long periods of time. But again, there are caveats. Legitimate BitTorrent providers, such as universities distributing public material, will use just a few connections to accomplish the data transfer, whereas consumer-grade BitTorrent clients, often used for illegal file sharing, may use hundreds of connections to move a file. A minimal sketch of this heuristic also follows the list.

5) I have been alerted to solutions that require organizations to retrofit all endpoints with pre-encryption utilities, thus allowing the traffic shaper to receive data before it is encrypted.  I am not privy to the mechanics of how this is implemented, but I would assume that outside of very tightly controlled networks, such a method would be a big imposition on users.
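Returning to point 3, here is a rough sketch of that experiment, under the stated caveat that the mapping goes stale quickly. The provider table and function names are invented for illustration.

```python
# Resolve a known content domain now, remember its addresses, and
# classify later flows by destination IP. CDN/proxy addresses rotate,
# so any such table must be refreshed continually.

import socket

def snapshot_ips(domain: str) -> set[str]:
    """Return the set of addresses a content domain resolves to right now."""
    infos = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

known_sources = {"youtube.com": snapshot_ips("youtube.com")}

def likely_source(dst_ip: str) -> str:
    """Best-effort guess at the content provider behind a flow."""
    for provider, ips in known_sources.items():
        if dst_ip in ips:
            return provider
    return "unknown"
```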
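And for point 4, a minimal version of the connection-count heuristic. The threshold is an assumption made for the example, not a standard.

```python
# Flag local hosts holding an unusually large number of concurrent
# connections, typically BitTorrent or a misbehaving/infected machine.

from collections import Counter

CONNECTION_THRESHOLD = 100  # consumer BitTorrent clients often exceed this

def flag_heavy_hosts(flow_table: list[tuple[str, str]]) -> list[str]:
    """flow_table holds (local_ip, remote_ip) pairs from live connections."""
    counts = Counter(local for local, _ in flow_table)
    return [ip for ip, n in counts.items() if n > CONNECTION_THRESHOLD]
```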

Surviving iOS updates


The birds outside my office window are restless. I can see the strain in the Comcast cable wires as they droop, heavy with the burden of additional bits, weighing them down like a freak ice storm.  It is time, once again, for Apple to update every device in the Universe with its latest iOS update.

Assuming you are responsible for a network with a limited Internet pipe, and you are staring down 100 or more users about to hit the accept button for their updates, what can you do to keep your network from being gridlocked?

The most obvious option to gravitate to is caching. I found this nice article (thanks, Luke) on the Squid settings used for a previous iOS update in 2013.  Having worked with Squid quite a bit helping our customers, I was not surprised by the amount of tuning required to get this to work, and I suspect there will be additional changes needed to make it work in 2014.

If you have a Squid caching solution already up and running, it is worth a try, but I am on the fence about recommending a Squid install from scratch.  Why? Because we are seeing diminishing returns from Squid caching each year due to the amount of dynamic content.  Translation: very few things on the Internet come from the same place with the same filename anymore, and many content providers mark much of their content as non-cacheable.

If you have a NetEqualizer in place, you can easily blunt the effects of the data crunch with the standard default setup. The NetEqualizer will automatically spread the updates out over time, especially during peak hours when there is contention. This will allow other applications on your network to function normally during the day. I doubt anybody doing the update will notice the difference.

Finally, if you are desperate, you might be able to block access to anything related to the iOS update on your firewall.  This might seem a bit harsh, but then again, Apple did not consult with you, and besides, isn't that what the free Internet at Starbucks is for?

Here is a snippet pulled from a forum on how to block it.

iOS devices check for new versions by polling the server mesu.apple.com. This is done via HTTP, port 80. Specifically, the URL is:

http://mesu.apple.com/assets/com_apple_MobileAsset_SoftwareUpdate/com_apple_MobileAsset_SoftwareUpdate.xml

If you block or redirect mesu.apple.com, you will inhibit the check for software updates. If you are really ambitious, you could redirect the query to a cached copy of the XML, but I haven’t tried that. Please remove the block soon; you wouldn’t want to prevent those security updates, would you?

Is Your Bandwidth Controller Obsolete Technology?


Although not yet free, bandwidth contracts have been dropping in cost faster than a bad stock during a recession.  With cheaper bandwidth, the question often arises of whether an enterprise can do without its trusty bandwidth controller.

Below, we have compiled a list of factors that will determine whether bandwidth controllers stick around for a while or go the route of the analog modem, a relic of the days when people got their Internet from AOL and dial-up.

  • In many areas of the world, bandwidth prices are still very high. For example, most of Africa, and also parts of the Middle East, do not have the infrastructure in place to deliver high-speed, low-cost circuits. Bandwidth controllers are essential equipment in these regions.
  • Even in countries where bandwidth infrastructure is subsidized and urban access is relatively cheap, people like to work and play in remote places. Bandwidth consumers have come to expect bandwidth while choosing to live in a remote village, and many of these lifestyle choices put people far away from the main fiber lines that crisscross the urban landscape. Much like serving fresh seafood in a mining camp, providing bandwidth to remote locations has a high price, and bandwidth controllers are more essential than ever in the remote areas of developed countries. For example, we are seeing a pickup in NetEqualizer interest from luxury resort hotels on tropical islands and in national parks, where high-speed Internet is now a necessity but is not cheap.
  • Government spending on Internet infrastructure has fallen out of favor, at least in the US. After the recent waste and fraud scandals, don't expect another windfall like the broadband initiative any time soon. Government subsidies were a one-time factor in the drop in bandwidth prices during the 2007 to 2010 time frame.
  • As the market matures and providers look to show a profit, they will be tempted to raise prices again, especially as demand grows. The recession of 2007 drove down some commercial demand at a time when there were significant increases in infrastructure capacity; we may be at the tail end of that deflationary bubble.
  • There was also a one-time infrastructure enhancement that gained momentum around 2007 and compounded the deflationary pressure on bandwidth: WDM technology allowed existing fiber to carry up to 16 times its originally planned capacity. We don't expect any new infrastructure innovations of that magnitude any time soon. Moore's law has finally cracked (proved false) in the computer industry, and so will the honeymoon increases in the carrying capacity of fiber.
  • Lastly, the wireless frequencies are crowded beyond capacity, bandwidth is still hard to find there, and operators are running out of tricks.
  • We must concede that we have seen cases where customers are getting bandwidth at such a low cost that they forgo investing in bandwidth controllers, but we expect that trend to flatten out as bandwidth prices hold steady or start to creep back up in the coming decade.

Stay tuned.
