Stick a Fork in Third Party Caching (Squid Proxy)

I was just going through our blog archives and noticed that many of the caching articles we promoted circa 2011 are still getting hits.  Many of the hits are coming from less developed countries where bandwidth is relatively expensive when compared to the western world.  I hope that businesses and ISPs hoping for a miracle using caching will find this article, as it applies to all third-party caching engines, not just the one we used to offer as an add-on to the NetEqualizer.

So why do I make such a bold statement about third-party caching becoming obsolete?

#1) There have been some recent changes in the way Google provides YouTube content, which makes caching it almost impossible.  All of their YouTube videos are generated dynamically and broken up into segments, to allow differential custom advertising.  (I yearn for the days without the ads!)

#2) Almost all pages and files on the Internet are marked “Do not Cache” in their HTTP headers. Some of them would cache effectively, but you must assume the designer plans on making dynamic, on-the-fly changes to their content. Caching an obsolete page and delivering it to an end user could actually result in serious issues, and perhaps even a lawsuit, if you cause some form of economic harm by ignoring the “do not cache” directive.
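These directives are carried in the Cache-Control response header (RFC 7234). A minimal sketch of the check a caching engine has to honor (the parsing here is deliberately simplified):

```python
def is_cacheable(cache_control: str) -> bool:
    """Return False if a Cache-Control header forbids a shared cache
    from storing the response. Simplified: a real engine also honors
    Expires, Vary, and validator headers per RFC 7234."""
    directives = {d.strip().lower() for d in cache_control.split(",")}
    forbidden = {"no-store", "no-cache", "private"}
    return not (directives & forbidden)

print(is_cacheable("public, max-age=3600"))  # True  - safe to cache
print(is_cacheable("no-store"))              # False - must not cache
```

Because so much of the web now ships with one of the forbidden directives, a well-behaved third-party cache is forced to pass most traffic straight through.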

#3) Streaming content as well as most HTML content is now encrypted, and since we are not the NSA, we do not have a back door to decrypt and deliver from our caching engines.

As you may have noticed, I have been careful to point out that caching is obsolete on third-party caching engines, not all caching engines, so what gives?

Some of the larger content providers, such as Netflix, will work with larger ISPs to provide large caching servers for their proprietary and encrypted content. This is a win-win for both Netflix and the Last Mile ISP.  There are some restrictions on who Netflix will support with this technology.  The point is that it is Netflix providing the caching engine, for their content only, with their proprietary software, and a third-party engine cannot offer this service.  There may be other content providers providing a similar technology.  However, for now, you can stick a fork in any generic third-party caching server.

NetEqualizer News: January 2014

January 2014


Enjoy another issue of NetEqualizer News! This month, we talk about our Software Update 7.5 release, preview our new prices for 2014, and discuss some exciting new enhancements to NCO. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

As 2014 begins, we are excited to see what the year brings. In the United States, the economy is finally improving, at least when measured by job creation, stock market growth, and real estate sales. Hopefully, this trend continues, as we are ready for the Great Recession to be officially over! We hope that you are seeing an improving economy in your part of the world too.

With the new year, it is time to work on new things! Many of our long-time customers know that I love to work on new ideas and, with this in mind, we are excited to announce a new content partnership with the motion picture industry. I’ll explain this new and exciting expansion of our offerings below.

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly – I would love to hear from you!

Announcing Software Update 7.5!

Caching Enhancements & RTR Beta is now G/A

Our first release of 2014 is now available. This release contains two key features: 1) NCO (caching) module enhancements, and 2) our 7.4 Dynamic Real-Time Reporting (RTR) Beta is now Generally Available.

Caching Enhancements
In order to support better YouTube hit ratios for our NetEqualizer Caching Option (NCO), we have invested in technology that keeps up with the changing nature of how YouTube is delivered. YouTube URLs actually appear like dynamic content most of the time, even if it is the same video you watched the day before. One of the basic tenets of a caching engine is to NOT cache dynamic content. In the case of YouTube videos, we have built out logic to cache YouTube, as it is not really dynamic content, just dynamic addressing.

For this release, we consulted with some of the top caching engineers in the world to ensure that we are evolving our caching engine to keep up with the latest addressing schemes. This required a change to our caching logic and some extensive testing in our labs.

It is now economically feasible to make a jump to a 1TB SSD drive. As of 7.5, we have now increased our SSD drive size from 256GB to 1TB. All new caching customers will be shipped the 1TB SSD. For existing NCO customers, if you would like to upgrade your drive size, please contact us for pricing.

New Reporting Features
Our Real-Time Reporting Tool (RTR) Beta version is now Generally Available! We had some great feedback over the last couple of months and are very happy with the way it turned out. Thanks to everyone who participated in our Beta!

The new reporting features built into RTR allow for traffic reporting functionality similar to what you get from ntop. You can see overall traffic patterns from an historic point of view, and you can also drill down to see traffic patterns for specific IP addresses you want to track.


In addition, we added in the ability to show all rules associated with an IP address for easy troubleshooting. You can now see if a specific IP address is a member of a pool, has an associated hard limit, has priority, or has a connection limit.


Check out our Software Update 7.5 Release Notes for more details on what Software Update 7.5 includes.

These features will be free to customers with valid NetEqualizer Software and Support who are running version 7.0+ (NCO features will require NCO). If you are not current with NSS, contact us today!



2014 NetEqualizer Pricing Preview

As we begin a new year, we are releasing our 2014 Price List for NetEqualizer, which will be effective February 1st, 2014.

Of note this year is that we have added back a 10Mbps license level to our NE3000 series.

We also continue to offer license upgrades on our NE2000 series. Remember that if you have a NE2000 purchased on or after August 2011, it will be supported past 12/31/2014. If you have an older NE2000, please contact us to discuss your options.

All Newsletter readers can get an advance peek here! For a limited time, the 2014 Price List can be viewed here without registration. You can also view the Data Sheets for each model once in the 2014 Price List.

Current quotes will not be affected by the pricing updates, and will be honored for 90 days from the date the quote was originally given.

If you have questions on pricing, feel free to contact us at:



NCO Customers Will Soon Have Access to a Full Movie Library!

One of the things we had on our docket to work on this winter and spring was to expand our caching offering (NCO) to include Netflix.

In our due diligence we consulted with the Netflix Open Connect team (their caching engine), and discovered that they just don’t have the resources to support ISPs with less than a 5 Gbps Netflix stream. Thus, we could not bundle their caching engine into our NCO offer – it is just too massive in scope.

Streaming long-form video content on the Internet cannot be accomplished reliably without a caching engine. It doesn’t matter how big your pipe is, you need to have a chunk of content stored locally to even have a chance to meet the potential demand – if you make any promises of consistent video content. This is why Netflix has spent millions of dollars providing caching servers to the largest commercial providers. Even with commercial providers’ big pipes to the backbone, they need to host Netflix content on their regional networks.

So what can we do to help our customers offer reliable streaming video content?

1) We would have to load up a caching server with content locally.
2) We would have to continually update it with new and interesting material.
3) We would need to take care of licensing desirable content.

The licensing part is the key to all this. It is not easy with some of the politics in the film industry, but after reaching out to some contacts over the last couple of weeks, it actually is very doable, due to the increase in independent distributors looking for channels.

Did you know that NetEqualizer servers sit in front of roughly 5,000,000 end users? This is sort of a “perfect storm” come to fruition. We have thousands of potential caching servers and a channel in place to serve a set of customers that currently do not have access to online streaming full length movie content. A customer running NCO would be able to choose between a Pay-Per-View (PPV) model and an unlimited content (UC) option.

The details and mechanics of these two options will be outlined in detail in our February Newsletter. In the meantime, please let us know your thoughts on how this offering would work best for your organization, and get on board with NCO to get the ball rolling!

To learn more about NCO, please read our Caching Executive White Paper.

If you have questions, contact us at:



Coming Soon: Get Website Category Data from NCO

Along with our other enhancements to NCO, another feature we’ll be rolling out soon with our NetEqualizer Caching Option (NCO) is the ability to gather website category data for sites visited by your users.

This data can not only be used to tune your NetEqualizer, but will help in enforcing usage policies and other requirements.

To learn more about NCO, please read our Caching Executive White Paper.

If you are interested in NCO or have questions about this feature, contact us at:



Best Of The Blog

Top 10 Out-of-the-Box Technology Predictions for 2014

By Art Reisman – CTO – APconnections

Back in 2011, I posted some technology predictions for 2012. Below is my revised and updated list for 2014.

1) Look for Google, or somebody, to launch an Internet Service using a balloon off the California Coast.

Well, it turns out those barges out in San Francisco Bay are for something far less ambitious than a balloon-based Internet service, but I still think this is on the horizon, so I am sticking with it.

2) Larger, slower transport planes to bring down the cost of comfortable international and long range travel.

I did some analysis on the costs of airline operators, and the largest percentage of the cost in air travel is fuel. You can greatly reduce fuel consumption per mile by flying larger, lighter aircraft at slower speeds. Think of these future airships like cruise ships. They will have more comforts than the typical packed cross-continental flight of today. My guess is, given the choice, passengers will trade off speed for a little price break and more leg room…

Photo Of The Month


Monterey, CA
Monterey is a waterfront community on the central coast of California with a temperate climate year-round. Kayaking, scuba diving, surfing, whale-watching and beach-going are just some of the activities to be enjoyed in and around Monterey. This photo was taken on a recent visit to Monterey by one of our staff members.

NetEqualizer News: November 2013

November 2013


Enjoy another issue of NetEqualizer News! This month, we discuss takeaways from our recent Technical Seminar, update you on our 7.4 RTR Beta progress, and highlight recent enhancements to our NetEqualizer Caching Option. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

As we move into the end of 2013, we start once again to sum up the year and think about all we are thankful for. We would like to take this opportunity to THANK YOU all for being a part of our success! We truly enjoy working with each and every one of you, and appreciate your business!

As most of you know, 2013 was a big year for us – our 10th Anniversary. Looking back, it has gone so fast! Looking forward, we see a bright future with even more opportunity on a global scale. Speaking of global, we had a staff member this month travel to Malaysia to conduct two 1-day training sessions – a national university there, IIUM, has many campuses throughout Malaysia where they employ NetEqualizers. If you are interested in learning more about our training offerings, contact us anytime!

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly – I would love to hear from you!

2013 Fall Technical Seminar Update

We recently held a half day seminar at Western Michigan University in Kalamazoo, Michigan. We would like to thank our host, Fawn Callen, for helping us get this event together, and for offering such a great space for the seminar!
This was a great opportunity for folks to meet with Art in person, pick his brain on all things related to equalizing and caching, and also to share ideas with us on future features.

Here are some of the features that we walked away thinking about:

1) Historical penalty tracking over time – this would be graphical and would help you see an historical trend on how tight your bandwidth is.

2) Enhance the masking feature to allow for more subnets so that organizations can take advantage of ISP-offered bandwidth allotments for traffic such as video.

3) Heuristic-based identification of users based on usage patterns – track individuals not based on IP, necessarily, but based on how they use the Internet, what sites they visit, etc.

Let us know if these are important to you!
Contact us at:

Update on 7.4 RTR Beta

We have a great group of customers trying out our 7.4 RTR Beta Software Release – and the results have been very positive!

We are working on making the data logging and graphing more efficient for large networks as well as some other small changes that will help make RTR and NetEqualizer in general even better and more useful!


We’ll be thoroughly testing our enhancements the rest of November and December and all of those will be incorporated into our official 7.5 Software Release on January 1st.

This Release will be free to customers with valid NetEqualizer Software and Support who are running 7.0+. If you are not current with NSS, contact us today!

NetEqualizer Caching Enhancements

As we have discussed in previous issues of NetEqualizer News, we’ve been working hard with the folks at Squid to create a more robust custom caching solution for NetEqualizer.

Our enhancements include:

1) An updated caching solution that includes fixes and the latest features from Squid. This is beyond what open source has, and has been greatly improved with help from our Squid development consultant.

2) We are in the process of debating whether or not to include Netflix in future implementations of our caching. In relation to the NetEqualizer, the cost for doing this could be a bit high. However, there is good news. Providers are starting to offer Netflix traffic at a greatly reduced rate to their clients. We’ve already built in features that will help these clients take advantage of this offering. You can read more about caching in the cloud and Netflix traffic in the Best Of The Blog section of this newsletter.

For more information on the NetEqualizer Caching Option, read our white paper!

Best Of The Blog

Caching in the Cloud is Here

By Art Reisman – CTO – APconnections

I just got a note from a customer, a University, that their ISP is offering them 200 megabit Internet at a fixed price. The kicker is, they can also have access to a 1 gigabit feed specifically for YouTube at no extra cost. The only explanation for this is that their upstream ISP has an extensive in-network YouTube cache. I am just kicking myself for not seeing this coming!

I was well-aware that many of the larger ISPs cached NetFlix and YouTube on a large scale, but this is the first I have heard of a bandwidth provider offering a special reduced rate for YouTube to a customer downstream. I am just mad at myself for not predicting this type of offer and hearing about it from a third party.

As for the NetEqualizer, we have already made adjustments in our licensing for this differential traffic to come through at no extra charge beyond your regular license level, in this case 200 megabits. So if, for example, you have a 350 megabit license, but have access to a 1 Gbps YouTube feed, you will pay for a 350 megabit license, not 1 Gbps. We will not charge you for the overage while accessing YouTube…

Photo Of The Month
Petronas Towers – Kuala Lumpur, Malaysia
As we mentioned in the Newsletter opener, a staff member of ours recently journeyed to Malaysia to conduct training sessions for NetEqualizer in two locations – Kuala Lumpur and Kuantan. The experience was a memorable one – Malaysia is a beautiful country with fantastic food, culture, and people. The 1,483 foot Petronas Towers are a testament to their success.

NetEqualizer News: October 2013

October 2013


Enjoy another issue of NetEqualizer News! This month, we preview our new RTR features (now available in Beta), reveal the location of our next Technical Seminar, discuss enhancements to our caching option, and remind you to get your web applications secured. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

Fall is officially here in Boulder, Colorado. In fact, we had our first hard frost (the overnight low was 29 degrees Fahrenheit) on October 4th, pretty much right on schedule, as our fifty year average is October 6th. As we told you in our last newsletter, we have been planning for a late October harvest for our next release. We are right on track to release Software Update 7.5 in late October and have a Beta version of the new features available NOW. If you would like to get a sneak peek at the new features, learn more below about how to get involved in our 7.4 RTR Beta Test.

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly – I would love to hear from you!

2013 Fall Technical Seminar

We are happy to announce the date and time of our 2013 Fall Technical Seminar! Please join our CTO, Art Reisman, at our host site, Western Michigan University, on Tuesday, November 12th, 2013 for a half-day seminar in Kalamazoo, Michigan.

To learn more or register for this FREE technical seminar, sign up here.

Last month we asked for folks to let us know if they would be interested in hosting our next Technical Seminar. We had several people step forward (thank you all!), and from that group, have decided to hold our 2013 Fall Technical Seminar in Michigan.

We think Michigan will be a great place to visit in the fall, and are excited to see the NetEqualizer in action at Western Michigan, a longtime customer who has been using NetEqualizers since early 2008.

If you have any questions regarding the Technical Seminar, contact us at:

We hope to see you there!

NetEqualizer Caching Investment

We have recently partnered with some of the Squid core development team to harden and make our caching the best it can be!

Recent testing with these enhancements is showing even better hit ratios for YouTube and other media, resulting in a better caching system for our customers.

The NetEqualizer Caching Option (NCO) is available as an add-on to NetEqualizer systems at additional cost. Caching helps supplement the power of Equalizing by storing high-bandwidth streams locally for internal users.

For more information on NCO, click here.

If you are interested in adding caching to your system, contact us at:

Planning for 2014: Do You Need to Secure Your Web Applications?

As we near the end of 2013, many of you may be putting together your 2014 plans. If web application security is on your “must have” list for 2014, you might want to take a look at our sister product, the NetGladiator.

We used NetEqualizer’s guiding principles when we developed the NetGladiator: keep it affordable (starting at $3,500 USD), make sure it is easy to set up and maintain, and implement security rules that provide value and make sense without the overkill of most products.

If you would like to learn more, visit our website, take a look at our white paper, or contact us at:

Not sure if you should be thinking about web application security? Take our hacking challenge to see if your web apps are at risk!

RTR Release and Beta Testing!

We are very excited to announce the release of our new Real-Time Reporting (RTR) tool features!

Here are all the cool new reports/features that you will see in Software Update 7.4 (as well as our Beta version):

The first major enhancement you will see is the ability to look at graphs of all traffic going through the NetEqualizer.

This graph will show you your equalizing ratio and when traffic peaked above that threshold as well as minimum and maximum outputs in the given time frame. This will really help you see how often and when traffic is being Equalized from an historical perspective.


The other new features revolve around being able to run reports on each IP in your Active Connections table.

Instead of a static table, you will now see links associated with each IP address.

Click the desired IP address to bring up the reporting interface.


From here, you can do a number of tasks:

1) Look at historical graphs of traffic to and from the given IP address.


2) Look up the country associated with the IP address.
3) Do an NS Lookup of the IP address to see what name server it is associated with.
4) Show all rules for an IP – this interface shows you what rules currently affect the given IP (hard limits, pools, connection limits, etc.).


We are currently in Beta on new RTR Features (7.4 Release with RTR Beta), and would like several more customer participants. If you are interested, please email us so we can see if you are a good fit for the Beta version. We plan to release the new RTR functionality to all customers as Software Update 7.5 in late October.

If you are interested in participating, you need to be current on NSS, and either be on the 7.4 release currently or be willing to upgrade to it. Once on 7.4, we will give you a hot fix to install the new RTR capabilities.

For more information on Software Update 7.4 and our Beta release, click here.

Best Of The Blog

Using OpenDNS on Your Wireless Network to Prevent DMCA Infringements

By Sam Beskur – CTO – Global Gossip

Editor’s Note: APconnections and Global Gossip have partnered to offer a joint hotel service solution, HMSIO. Read our HMSIO service offering datasheet to learn more.

Traffic Filtering with OpenDNS

AUP (Acceptable Use Policy) violations, which include DMCA infringements on illegal downloads (P2P, Usenet or otherwise), have been hugely troublesome in many locations where we provide public access WiFi. Nearly all major carriers here in the US now have some form of notification system to alert customers when violations occur, and the ones that don’t send notifications are silently tracking this behavior…

Photo Of The Month

“It’s fun to stay at the Y.M.C.A.”
At APconnections, we like to maintain a good work-life balance – and that includes having fun at the office. While our CTO, Art Reisman, was off running at the gym, we played this little Halloween “trick” on him.

The World’s Biggest Caching Server

Caching solutions are used in all shapes and sizes to speed up Internet data retrieval. From your desktop keeping a local copy of the last web page viewed, to your cable company keeping an entire library of Netflix movies, there is a broad diversity in the scope and size of caching solutions.

So, what is the biggest caching server out there? Moreover, if I found the world’s largest caching server, would it store just a tiny microscopic subset of the total data available from the public Internet? Is it possible that somebody has actually cached everything on the Internet? A caching server the size of the Internet seems absurd, but I decided to investigate anyway, and so with an open mind, I set out to find the biggest caching server in the world. Below I have detailed my research and findings.

As always, I started with Google, but not in the traditional sense. If you think about Google, they seem to have every public page on the Internet indexed. That is a huge amount of data, and I suspect they are the world’s biggest caching server. Asserting that Google is the world’s largest caching server seemed logical, but somewhat hollow and unsubstantiated, so my next step was to quantify my assertion.

To figure out how much data is actually stored by Google,  in a weird twist of logic, I figured the best way to estimate the size of the stored data would be to determine what data is not stored in Google.

I would need to find a good way to stumble into some truly random web pages without using Google to find them, and then specifically test to see if Google knew about those pages by  asking Google to search for unique, deep rooted, text strings within those sites.

Rather than ramble too much, I’ll just walk through one of my experiments below.

To find a random web site, I started with one of those random web site stumblers. As advertised, it took me to a random web site titled, “Finest Polynesian Tiki Objects”. From there, I looked for unique text strings on the Tiki site. The idea here is to find a sentence of text from this site that is not likely to be found anywhere but on this site – in essence, something deep enough that it is not a deliberately indexed title already submitted to Google. I poked around on the Tiki site and found some seemingly innocuous text on their merchant site: “Presenting Genuine Witco Art – every piece will come with a scanned”. I put that exact string in my Google search box and presto, there it was.


Wow, it looks like Google has this somewhat random page archived and indexed, because it came up in my search.

A sample set of two data points is not large enough to extrapolate from and draw conclusions, so I repeated my experiment a few more times. Here are more samples of what I found…

Try number two.

Random Web Site

Search String In Google

“For booking or general whatnot, contact Bob. Heck, just write to say hello if you feel like it.”


It worked again – it found the exact page from a search on a string buried deep in the page.

And then I did it again.


And again Google found the page.

The conclusion is that Google has cached close to 100 percent of the publicly accessible text on the Internet. In fairness to Google’s competitors, they also found the same Web pages using the same search terms.

So how much data is cached in terms of a raw number?


There are plenty of public statistics on the number of Web sites/pages connected to the Internet, and there is also data detailing the average size of a Web page. What I have not determined is how much of the video and images are cached by Google. I do know they are working on image search engines, but for now, to be conservative, I’ll base my estimates on text only.

So roughly there are 15 billion Web pages, and the average amount of text per page is 25 thousand bytes. (Note: most of the Web is video and images; text is actually a small percentage.)

So to get a final number, I multiply 15 billion (15,000,000,000) by 25 thousand (25,000) and I get…

375,000,000,000,000 bytes cached…
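The multiplication checks out, and it is worth putting in more familiar units:

```python
pages = 15_000_000_000      # ~15 billion public web pages (estimate)
avg_text_bytes = 25_000     # ~25 KB of text per page (estimate)
total = pages * avg_text_bytes

print(f"{total:,} bytes")                         # 375,000,000,000,000 bytes
print(f"= {total / 10**12:g} terabytes of text")  # 375 terabytes
```

375 terabytes of raw text is enormous for its day, yet tiny next to the video and images it excludes, which is exactly why a generic cache cannot hope to hold "the Internet".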



Notice that the name of the site or the band does not appear in my search string – there is nothing to tip off the Google search engine as to what I am looking for – and presto!

Why Caching Alone Will Not Solve Your Congestion Issue

Editor’s Note:
The intent of this article is to help set appropriate expectations for using a caching server on an uncontrolled Internet link. There are some great speed gains to be had with a caching server; however, caching alone will not remedy a heavily congested Internet connection.


Are you going down the path of using a caching server (such as Squid) to decrease peak usage load on a congested Internet link? 

You might be surprised to learn that Internet link congestion cannot be mitigated with a caching server alone. Contention can only be eliminated by:

1) Increasing bandwidth

2) Some form of bandwidth control

3) Or a combination of 1) and 2)
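As a sketch of what option 2 looks like under the hood, here is a minimal token-bucket rate limiter of the kind underlying most bandwidth control tools. This is a generic illustration only, not NetEqualizer's actual algorithm (which works by fairness-based equalizing rather than fixed caps); the numbers are arbitrary.

```python
class TokenBucket:
    """Generic token-bucket limiter: traffic may burst briefly,
    but the long-run rate cannot exceed rate_bps."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits   # start with a full bucket
        self.last = 0.0

    def allow(self, now: float, packet_bits: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False   # over the limit: drop or queue the packet

bucket = TokenBucket(rate_bps=100.0, burst_bits=100.0)
print(bucket.allow(0.0, 100.0))  # True  - burst allowance
print(bucket.allow(0.0, 1.0))    # False - bucket is empty
print(bucket.allow(1.0, 100.0))  # True  - one second refills 100 bits
```

Either approach works by shaping demand to fit the pipe, which is something a cache, by itself, cannot do.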

A common assumption about caching is that somehow you will be able to cache a large portion of common web content – such that a significant amount of your user traffic will not traverse your backbone to your provider. Unfortunately, caching a large portion of web content to attain a significant hit ratio is not practical, and here is why:

Let’s say your Internet trunk delivers 100 megabits and is heavily saturated prior to implementing caching or a bandwidth control solution. What happens when you add a caching server to the mix?

From our experience, a good hit rate to cache will likely not exceed 10 percent. Yes, we have heard claims of 50 percent, but have not seen this in practice. We assume this is an urban myth or just a special case.

Why is the hit rate at best only 10 percent?

Because the Internet is huge relative to a cache, and you can only cache a tiny fraction of total Internet content. Even Google, with billions invested in data storage, does not come close. You can attempt to keep trending popular content in the cache, but the majority of access requests to the Internet will tend to be somewhat random and impossible to anticipate. Yes, a good number of hits might go to the Yahoo home page and its popular articles, but many more users are going to do unique things. For example, common destinations like email and Facebook are all very different for each user, and cannot be maintained in the cache. User hobbies are also all different, and thus users traverse different web pages and watch different videos. The point is you can’t anticipate this data and keep it in a local cache any more reliably than guessing the weather long term. You can get a small statistical advantage, and that accounts for the 10 percent that you get right.
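This argument can be made concrete with a small simulation. Assume, purely for illustration (these numbers are not measured), that 10 percent of requests go to a small trending set while the other 90 percent are scattered across a huge catalog, and that the cache uses LRU eviction:

```python
import random
from collections import OrderedDict

random.seed(7)
POPULAR, CATALOG, CACHE_SIZE, REQUESTS = 50, 1_000_000, 1_000, 50_000

def next_request() -> int:
    # Illustrative traffic mix: 10% trending content, 90% long tail.
    if random.random() < 0.10:
        return random.randrange(POPULAR)
    return POPULAR + random.randrange(CATALOG)

cache = OrderedDict()   # LRU cache: oldest entries at the front
hits = 0
for _ in range(REQUESTS):
    item = next_request()
    if item in cache:
        hits += 1
        cache.move_to_end(item)        # mark as recently used
    else:
        cache[item] = True
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)  # evict least recently used

print(f"hit rate: {hits / REQUESTS:.1%}")  # close to 10% under this mix
```

The cache reliably captures the trending head, but the long tail churns through it faster than it can pay off, which is where the roughly 10 percent ceiling comes from.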

Note: Without a statistical advantage, your hit rate would effectively be 0.

Even with caching at a 10 percent hit rate, your link traffic will not decline.

With caching in place, any gain in efficiency will be countered by a corresponding increase in total usage. Why is this?

If you assume a 10 percent hit rate to cache, you will end up getting a 10 percent increase in Internet usage and thus, if your pipe to the Internet was near congestion when you put the caching solution in, it will still be congested. Yes, the hits to cache will be fast and amazing, but the 90 percent of the hits that do not come from the cache will equal 100 percent of your Internet link. The resulting effect will be that 90 percent of your Internet accesses will be sluggish due to the congested link.
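The arithmetic here is worth spelling out: on a saturated link, pent-up demand expands to fill whatever capacity the cache frees, so the link stays pinned at 100 percent even though users receive more total traffic.

```python
capacity = 100.0   # Mbps, saturated before caching
hit_rate = 0.10

# Demand grows until the link is full again, so total traffic
# delivered to users becomes capacity / (1 - hit_rate):
delivered = capacity / (1 - hit_rate)   # ~111 Mbps reaches users
from_cache = delivered * hit_rate       # ~11 Mbps served locally
from_link = delivered - from_cache      # 100 Mbps: still saturated

print(f"delivered to users: {delivered:.1f} Mbps")
print(f"link utilization:   {from_link:.0f} / {capacity:.0f} Mbps")
```

Users collectively get about 11 percent more throughput, but every request that misses the cache still crosses a congested link.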

Another way to understand is by practical example.

Let’s start with a very congested 100 megabit Internet link. Web hits are slow, YouTube takes forever, email responses are slow, and Skype calls break up. To solve these issues, you put in a caching server.

Now 10 percent of your hits come from cache, but since you did nothing to mitigate overall bandwidth usage, your users will simply eat up the extra 10 percent from cache and then some. It is like giving a drug addict a free hit of their preferred drug. If you serve up a fast YouTube, it will just encourage more YouTube usage.

Even with a good caching solution in place, if somebody tries to access Grandma’s Facebook page, it will have to come over the congested link, and it may time out and not load right away. Or, if somebody makes a Skype call it will still be slow. In other words, the 90 percent of the hits not in cache are still slow even though some video and some pages play fast, so the question is:

If 10 percent of your traffic is really fast, and 90 percent is doggedly slow, did your caching solution help?

The answer is yes, of course it helped: 10 percent of users are getting nice, uninterrupted YouTube. It just may not seem that way when the complaints keep rolling in. :)


Editor’s Update, August 20, 2013

This article, written back in 2011, still says it all. We continue to confirm, by talking to our ISP customers, that at best a generic caching engine will get a 10 percent hit rate for people watching movies. However, this hit rate has little effect on solving congestion issues on the Internet link itself.

YouTube Dominates Video Viewership in U.S.

Editor’s Note: Updated July 27th, 2011 with material from

YouTube studies are continuing to confirm what I’m sure we all are seeing – that Americans are creating, sharing and viewing video online more than ever. This is according to a Pew Research Center Internet & American Life Project study released Tuesday.

According to Pew, fully 71% of online Americans use video-sharing sites such as YouTube and Vimeo, up from 66% a year earlier. The use of video-sharing sites on any given day also jumped five percentage points, from 23% of online Americans in May 2010 to 28% in May 2011.  This figure (28%) is slightly lower than the 33% Video Metrix reported in June, but is still significant.

To download or read the full study, click on this link:


YouTube viewership in May 2011 was approximately 33 percent of video viewed on the Internet in the U.S., according to data from the comScore Video Metrix released on June 17, 2011.

Google sites, driven primarily by video viewing at, ranked as the top online video content property in May with 147.2 million unique viewers, which was 83 percent of the total unique viewers tracked.  Google Sites had the highest number of viewing sessions with more than 2.1 billion, and highest time spent per viewer at 311 minutes, crossing the five-hour mark for the first time.

To read more on the data released by comScore, click here.  comScore, Inc. (NASDAQ: SCOR) is a global leader in measuring the digital world and preferred source of digital business analytics. For more information, please visit

This trend further confirms why our NetEqualizer Caching Option (NCO) is geared to caching YouTube videos. While NCO will cache any file sized from 2MB-40MB traversing port 80, the main target content is YouTube.  To read more about the NetEqualizer Caching Option to see if it’s a fit for your organization, read our YouTube Caching FAQ or contact Sales at

Nine Tips and Technologies for Network WAN Optimization

By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Although there is no way to actually make your true WAN speed faster, here are some tips for corporate IT professionals to make better use of the bandwidth you already have, thus providing the illusion of a faster pipe.

1) Caching — How does it work and is it a good idea?

Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing a WAN/Internet link unnecessarily.

Caching servers keep a time stamp of their last update to data. If the page time stamp has not changed since the last time a user has accessed the page, the caching server will present a local stored copy of the Web page, saving the time it would take to load the page from across the Internet.
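The timestamp check described above boils down to a simple comparison. Here is a minimal sketch of the decision logic only; the dictionary-based `origin` below is a stand-in for a real upstream server, not any particular proxy’s API:

```python
cache = {}  # url -> (last_modified, body)

def serve(url, origin):
    """Serve a URL, consulting the cache first.

    `origin` is a dict of url -> (last_modified, body), standing in for the
    real upstream server in this sketch.
    """
    last_modified, body = origin[url]
    cached = cache.get(url)
    if cached is not None and cached[0] == last_modified:
        return cached[1], "HIT"    # timestamp unchanged: serve the local copy
    cache[url] = (last_modified, body)
    return body, "MISS"            # new or changed: pull it across the link
```

Real proxies do the same comparison using the HTTP Last-Modified header and conditional If-Modified-Since requests, so a fresh copy only crosses the link when the page has actually changed.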

Caching on your WAN link in some instances can reduce traffic by 50 percent or more. For example, if your employees are making a run on the latest PDF explaining their benefits, without caching each access would traverse the WAN link to a central server duplicating the data across the link many times over. With caching, they will receive a local copy from the caching server.

What is the downside of caching?

There are two main issues that can arise with caching:

a) Keeping the cache current – If you access a cached page that is not current, you are at risk of getting old and incorrect information. Some things you may never want cached, for example, the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in cache will not be synchronized with changes. I personally have been misled by old data from my cache on several occasions.

b) Volume – There are some 300 million websites on the Internet, and each site can contain several megabytes of public information. The amount of data is staggering, and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.

We recommend Squid as a proxy solution.
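For reference, a minimal Squid configuration might look something like the sketch below. The directive names are standard Squid, but the paths, sizes, and address range are placeholder assumptions you would tune for your own site:

```conf
# Minimal squid.conf sketch -- values here are illustrative, not recommendations
http_port 3128                                # port clients point their proxy at
cache_mem 256 MB                              # in-memory hot-object cache
cache_dir ufs /var/spool/squid 10000 16 256   # ~10 GB on-disk cache
maximum_object_size 40 MB                     # skip caching very large files

acl localnet src 192.168.0.0/16               # your internal network (example)
http_access allow localnet
http_access deny all
```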

2) Protocol Spoofing

Historically, client-server applications were developed for internal LANs. Many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe using WAN links to tie different locations together.

To get a better visual on what goes on in a chatty application, perhaps an analogy will help. It’s like sending family members your summer vacation pictures and, for some insane reason, putting each picture in a separate envelope and mailing them individually on the same mail run. Obviously, this would be extremely inefficient, just as chatty applications can be.

What protocol spoofing accomplishes is to “fake out” the client or server side of the transaction and then send a more compact version of the transaction over the Internet (i.e., put all the pictures in one envelope and send it on your behalf, thus saving you postage).
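In spirit, the savings come from trading many WAN round trips for one. The toy model below makes that explicit; the 80 ms WAN and 1 ms LAN round-trip times are assumed numbers for illustration, not measurements:

```python
# Compare N chatty messages, each paying its own WAN round trip, against a
# spoofing proxy that ACKs each message locally and ships one combined payload.
WAN_RTT_MS = 80   # assumed round-trip time across the WAN link
LAN_RTT_MS = 1    # local acknowledgement from the proxy is nearly free

def chatty_transfer(num_messages):
    # every message waits for its own WAN round trip
    return num_messages * WAN_RTT_MS

def spoofed_transfer(num_messages):
    # the proxy ACKs each message locally, then one WAN trip carries them all
    return num_messages * LAN_RTT_MS + WAN_RTT_MS

print(chatty_transfer(50))    # 4000 ms without spoofing
print(spoofed_transfer(50))   # 130 ms with spoofing
```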

For more information, visit the Protocol Spoofing page at

3) Compression

At first glance, the term compression seems intuitively obvious. Most people have at one time or another extracted a compressed Windows ZIP file. If you examine the file sizes pre- and post-extraction, it reveals there is more data on the hard drive after the extraction. Well, WAN compression products use some of the same principles, only they compress the data on the WAN link and decompress it automatically once delivered, thus saving space on the link, making the network more efficient. Even though you likely understand compression on a Windows file conceptually, it would be wise to understand what is really going on under the hood during compression before making an investment to reduce network costs. Here are two questions to consider.

a) How Does it Work? — A good and easy way to visualize data compression is to compare it to the use of shorthand when taking dictation. By using a single symbol for common words, a scribe can take written dictation much faster than if he were to spell out each word. The basic principle behind compression techniques is to use shortcuts to represent common data.

Commercial compression algorithms, although similar in principle, can vary widely in practice. Each company offering a solution typically has its own trade secrets that they closely guard for a competitive advantage. However, there are a few general rules common to all strategies. One technique is to encode a repeated character within a data file. For a simple example, let’s suppose we were compressing this very document and as a format separator we had a row with a solid dash.

The data for this solid dash line is composed of the ASCII character “-” repeated approximately 160 times. When transporting the document across a WAN link without compression, this line of the document would require 160 bytes of data, but with clever compression, we can encode it using a special notation: “-” × 160.

The compression device at the front end would read the 160 character line and realize,”Duh, this is stupid. Why send the same character 160 times in a row?” So, it would incorporate a special code to depict the data more efficiently.
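That trick is classic run-length encoding. A minimal sketch:

```python
def rle_encode(data):
    """Collapse runs of repeated characters into (char, count) pairs."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((ch, 1))               # start a new run
    return runs

def rle_decode(runs):
    """Expand (char, count) pairs back into the original string."""
    return "".join(ch * count for ch, count in runs)

separator = "-" * 160
encoded = rle_encode(separator)   # one ('-', 160) pair instead of 160 bytes
assert rle_decode(encoded) == separator
```

Commercial WAN compressors layer far more sophisticated dictionary schemes (LZ variants and the like) on top of simple ideas like this, which is why their exact behavior varies so much by traffic type.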

Perhaps that was obvious, but it is important to know a little bit about compression techniques to understand the limits of their effectiveness. There are many types of data that cannot be efficiently compressed.

For example, many image and voice recordings are already optimized and there is very little improvement in data size that can be accomplished with compression techniques. The companies that sell compression based solutions should be able to provide you with profiles on what to expect based on the type of data sent on your WAN link.

b) What are the downsides? — Compression always requires equipment at both ends of the link and results can be sporadic depending on the traffic type.

If you’re looking for compression vendors, we recommend FatPipe or Juniper Networks.

4) Requesting Text Only from Browsers on Remote Links

Editor’s note: Although this may seem a bit archaic and backwoods, it can be effective in a pinch to keep a remote office up and running.

If you are stuck with a dial-up or slower WAN connection, have your users set their browsers to text-only mode. While this will speed up general browsing and e-mail, it will do nothing for more bandwidth-intensive activities like video conferencing. The reason text-only mode can be effective is that most Web pages are loaded with graphics, which take up the bulk of the load time. If you’re desperate, switching to text-only will eliminate the graphics and save you quite a bit of time.

5) Application Shaping on Your WAN Link

Editor’s Note: Application shaping is appropriate for corporate IT administrators and is generally not a practical solution for a home user. Makers of application shapers include Packeteer and Allot, whose products are typically out of the price range of many smaller networks and home users.

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping,” with aliases of “traffic shaping,” “bandwidth control,” and perhaps a few others thrown in for good measure. For the IT manager that is held accountable for everything that can and will go wrong on a network, or the CIO that needs to manage network usage policies, this is a dream come true. If you can divvy up portions of your WAN/Internet link to various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth.

At the center of application shaping is the ability to identify traffic by type: for example, distinguishing Citrix traffic from streaming audio, Kazaa peer-to-peer, or something else. However, this approach is not without its drawbacks.

Here are a few common questions potential users of application shaping generally ask.

a) Can you control applications with just a firewall or do you need a special product? — Many applications are expected to use Internet ports when communicating across the Web. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the “FTP” application commonly used for downloading files uses the well known “port 21.”

The fallacy with this scheme, as many operators soon find out, is that there are many applications that do not consistently use a fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don’t want IT personnel to block them at all, so they deliberately design applications to not conform to any formal port assignment scheme. For this reason, any product that aims to block or alter application flows by port should be avoided if your primary mission is to control applications by type.

b) So, if standard firewalls are inadequate at blocking applications by port, what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet.
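For the curious, the fixed 20-byte IPv4 header can be pulled apart with a few lines of Python. This is a sketch of the packet layout only; real shapers inspect live traffic, not hand-built buffers:

```python
import socket
import struct

def parse_ipv4_header(packet):
    """Split a raw IPv4 packet into its address 'envelope' and payload."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    ihl = (ver_ihl & 0x0F) * 4            # header length in bytes
    return {
        "src": socket.inet_ntoa(src),     # where it came from
        "dst": socket.inet_ntoa(dst),     # where it is being sent
        "protocol": proto,                # 6 = TCP, 17 = UDP
        "payload": packet[ihl:],          # the freight inside the railroad car
    }
```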

In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, consider a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and hope that when the train arrived in Los Angeles the workers on the other end would have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what, the contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit, I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets, and through various pattern matching techniques, determines what type of application a particular flow is. Once a flow is determined, the application shaping tool can enforce the operator’s policies on that flow. Some examples of policy are:

  • Limit Citrix traffic to 100 kbps
  • Reserve 500 kbps for Shoretel voice traffic
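The identification step underneath policies like these boils down to pattern matching on payloads. A toy sketch follows; the byte signatures and the policy table are simplified illustrations of the idea, nothing like a production signature database:

```python
# Toy payload classifier: real products maintain thousands of signatures and
# far richer matching logic; these byte patterns are simplified illustrations.
SIGNATURES = [
    (b"BitTorrent protocol", "bittorrent"),   # BitTorrent handshake string
    (b"SSH-", "ssh"),                         # SSH version banner
    (b"HTTP/1.", "http"),                     # HTTP request/response line
]

def classify(payload):
    for signature, app in SIGNATURES:
        if signature in payload:
            return app
    return "unknown"   # traffic the operator must handle with a blanket policy

# Hypothetical policy table keyed by classified application
POLICIES = {"bittorrent": "limit to 200 kbps", "http": "no limit"}

def policy_for(payload):
    return POLICIES.get(classify(payload), "default")
```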

The list of rules you can apply to traffic types and flows is unlimited. However, there are downsides to application shaping of which you should be aware. Here are a few:

  • The number of applications on the Internet is a moving target. The best application shaping tools do a very good job of identifying several thousand of them, and yet there will always be some traffic that is unknown (estimated at 10 percent by experts from the leading manufacturers). The unknown traffic is lumped into the unknown classification, and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a Web cast and is not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to stay up to date is large and there are cracks.
  • Even if the application spectrum could be completely classified, the spectrum of applications constantly changes. You must keep licenses current to ensure you have the latest in detection capabilities. And even then it can be quite a task to constantly analyze and change the mix of policies on your network. As bandwidth costs lessen, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

6) Test Your WAN-Link Speed

A common issue with slow WAN link service is that your provider is not delivering what they advertised.

For more information, see The Real Meaning of Comcast Generosity.

7) Make Sure There Is No Interference on Your Wireless Point-to-Point WAN Link

If the signal between locations served by a point-to-point link is weak, the wireless equipment will automatically downgrade its service to a slower speed. We have seen this many times where a customer believes they have a 40-megabit backhaul link but is only realizing five megabits.

8) Deploy a Fairness Device to Smooth Out Those Rough Patches During Contentious Busy Hours

Yes, this is the NetEqualizer News Blog, but with all bias aside, these things work great. If you are in an office sharing an Internet feed with various users, the NetEqualizer will keep aggressive bandwidth users from crowding others out. No, it cannot create additional bandwidth on your pipe, but it will eliminate the gridlock caused by your colleague in the next cubicle downloading a Microsoft service pack.

Yes, there are other devices on the market (like your fancy router), but the NetEqualizer was specifically designed for that mission.

9) Bonus Tip: Kill All of Those Security Devices and See What Happens

The recent outbreak of the H1N1 virus reminded me how sometimes the symptoms and carnage from a vaccine are worse than the disease it claims to cure. Well, the same holds true for the security protection hardware on your network. From proxies to firewalls, underpowered equipment can be the biggest choke point on your network.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email.

Click here for a full price list.

Links to other bandwidth control products on the market:

Packet Shaper by Blue Coat



Exinda, Packet Shaper, and Riverbed tend to focus on the enterprise WAN optimization market.


Cymphonix comes from a background of detailed reporting.

Emerging Technologies

Very solid product for bandwidth shaping.


Exinda from Australia has really made a good run in the US market, offering a good alternative to the incumbents.


For those of you who are wed to Windows, NetLimiter is your answer.
