Does Your School Have Enough Bandwidth for On-line Testing?

K-12 schools are rapidly moving toward “one-to-one” programs, where every student has a computer, usually a laptop. Couple this with standardized, cloud-based testing services, and you have the potential for Internet gridlock during testing periods. Some of the common questions we hear are:

How will all of these students using the cloud affect our Internet resources?

Will there be enough bandwidth for all of those students using on-line testing?

What type of QoS should we deploy, or should we buy more bandwidth?

The good news is that most cloud testing services are designed with a fairly modest bandwidth footprint.

For example, a student connection to a cloud testing application averages around 150 kbps (kilobits per second).

In a perfect world, a 40 megabit link could handle roughly 266 students (40,000 kbps ÷ 150 kbps) simultaneously doing on-line testing, as long as there was no other major traffic.

On the other hand, a video stream may average 1,500 kbps or more.

A raw download, such as an iOS update, may take as much as 15,000 kbps, or 100 times the bandwidth of a student taking an on-line test.
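
These figures can be turned into a quick back-of-the-envelope capacity check. The sketch below uses the per-stream averages quoted above; real loads will vary:

```python
# Per-stream averages cited in this post (assumed figures, not guarantees).
TEST_STREAM_KBPS = 150      # cloud testing session
VIDEO_STREAM_KBPS = 1500    # typical video stream
UPDATE_STREAM_KBPS = 15000  # raw download such as an iOS update

def max_concurrent_testers(link_mbps, background_kbps=0):
    """How many 150 kbps test sessions fit on a link after
    subtracting other traffic."""
    available_kbps = link_mbps * 1000 - background_kbps
    return max(0, available_kbps // TEST_STREAM_KBPS)

# A 40 Mbps link with no other traffic:
print(max_concurrent_testers(40))                          # 266 sessions
# The same link while three iOS updates are running:
print(max_concurrent_testers(40, 3 * UPDATE_STREAM_KBPS))  # 0 (link saturated)
```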

A common instinct when choosing a bandwidth controller to support on-line testing is to look for a tool that specifically identifies the on-line testing service and the non-essential applications, allowing the school’s IT staff to give the testing traffic higher priority (QoS). This strategy seems logical, but there are several drawbacks:

  • It requires a fairly sophisticated form of bandwidth control and can be labor intensive and expensive.
  • Much of the public Internet traffic may be encrypted or tunneled, and hard to identify.
  • There is a further complication in giving Internet traffic traditional priority: a typical router cannot prioritize incoming traffic, and most of the test traffic is incoming (from the outside in). We detailed this phenomenon in our post about QoS and the Internet.

The key is not to make the problem more complicated than it needs to be. If you look at the footprint of the streams coming into the testing facility, you can assume, from our observation, that streams of roughly 150 kbps are of higher priority than the larger streams, and simply throttle the larger streams. Doing so will ensure there is enough bandwidth for the testing service connections to the students. The easiest way to do this is with a heuristic-based bandwidth controller, a class of bandwidth shapers that dynamically give priority to smaller streams by slowing down larger ones.
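
The approach can be sketched roughly as follows. This is a toy model, not NetEqualizer’s actual algorithm; the threshold, capacity, and halving penalty are all illustrative:

```python
# Toy heuristic shaper: flows at or below a "small stream" threshold pass
# untouched; larger flows are slowed only when the link is under load.
SMALL_STREAM_KBPS = 200     # generous margin above the 150 kbps test footprint
LINK_CAPACITY_KBPS = 40000  # a 40 Mbps link, as in the example above

def shape(flows):
    """flows: dict of flow_id -> measured kbps.
    Returns dict of flow_id -> allowed kbps."""
    total = sum(flows.values())
    allowed = {}
    for fid, rate in flows.items():
        if rate <= SMALL_STREAM_KBPS or total <= LINK_CAPACITY_KBPS:
            allowed[fid] = rate        # small flows always keep full rate
        else:
            allowed[fid] = rate // 2   # large flows slowed under congestion
    return allowed
```

For example, with the link congested by a 45 Mbps download, a 150 kbps test session keeps its full rate while the download is cut back; with light total load, nothing is touched.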

The other option is to purchase more bandwidth, or in some cases a combination of more bandwidth and a heuristic-based bandwidth controller, to be safe.

Please contact us for a more in-depth discussion of options.

For more information on cloud usage in K-12 schools, check out these posts:

Schools View Cloud Infrastructure as a Viable Option

K-12 Education is Moving to the Cloud

For more information on Bandwidth Usage by Cloud systems, check out this article:

Know Your Bandwidth Needs: Is Your Network Capacity Big Enough for Cloud Computing?

Miracle Product Fixes Slow Internet on Trains, Planes, and the Edge of the Grid

My apologies for the cheesy lead-in. I’m just having some lighthearted fun after my return from a seminar in the UK, where I saw all the newsstands with their sensational headlines.

A few years ago I got a call from an agency that maintained the Internet service for the national train service of a European country (Finland).
The scheme they used to provide Internet access on their trains was to put a 4G wireless connection on every train, and then relay the data to a standard Wi-Fi connection for customers on board. The country has good 4G coverage throughout, hence this was the most practical way to get Internet to a moving vehicle.

Using this method they were able to pipe “mobile” Wi-Fi into trains running around the country. But when the trains got crowded, the service became useless: at peak times all the business travelers on the train were funneling through what was essentially a 3 or 4 megabit connection.

Fortunately, we were able to work with them on a scheme to alleviate the congestion. The really cool part of the solution was that we were able to put a central NetEqualizer at their main data center; there was no need to put a device on each train. Many solutions to this type of problem, whether developed internally by satellite providers or by airlines offering Wi-Fi, require a local controller at the user end, so their cost and logistics are much higher than with the centralized NetEqualizer.

We have talked about using a centralized NetEqualizer for MPLS networks, but sometimes it is hard to visualize using a central bandwidth controller for other hub-and-spoke connections such as the train problem. If you would like more information on the details, we would be more than happy to provide them.

Complimentary NetEqualizer Bandwidth Management Seminar in the UK

Press Release issued via BusinessWire.

April 08, 2015 01:05 AM Mountain Daylight Time

LAFAYETTE, Colo.–(BUSINESS WIRE)–APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is excited to announce its upcoming complimentary NetEqualizer Technical Seminar on April 23rd, 2015, in Oxfordshire, United Kingdom, hosted by Flex Information Technology Ltd.

This is not a marketing presentation; it is run by and created for technical staff.

Join us to meet APconnections’ CTO Art Reisman, a visionary in the bandwidth management industry (check out Art’s blog). The Seminar will feature in-depth, example-driven discussions of network optimization and provide participants with a first-hand look at NetEqualizer technology.

Seminar highlights include:

  • Learn how behavior-based shaping provides superior QoS for Internet traffic
  • Optimize business-critical VoIP, email, web browsing, SaaS & web applications
  • Control excessive bandwidth use by non-priority applications
  • Gain control over P2P traffic
  • Get visibility into your network with real-time reporting
  • See the NetEqualizer in action! We will demo a live system.

We welcome both customers and those just beginning to think about bandwidth shaping. The Seminar will take place at 14:30, Thursday, April 23rd, at Flex Information Technology Ltd in Grove Technology Park, Wantage, Oxfordshire OX12 9FF.

Online registration, including location and driving directions, is available here. There is no cost to attend, but registration is requested. Questions? Contact Paul Horseman, or call +44(0)333.101.7313.

About Flex Information Technology Ltd
Flex Information Technology is a partnership founded in 1993 to provide maintenance and support services to a wide range of customers with large mission-critical systems, particularly in the newspaper and insurance sectors. In 1998 the company began focusing on support for small to medium businesses. Today we provide “Smart IT Solutions combined with Flexible and Quality Services for Businesses” to a growing satisfied customer base. We have accounts with leading IT suppliers and hardware and software distributors in the UK.

About APconnections
APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado, USA. Our flexible and scalable network traffic management solutions can be found at thousands of customer sites in public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, Internet providers, libraries, and government agencies on six continents.


APconnections, Inc.
Sandy McGregor, 303-997-1300 x104
Flex Information Technology Ltd
Paul Horseman, +44(0)333 101 7313

So You Think You Have Enough Bandwidth?

There are really only two tiers of bandwidth: video for all, and video not for all. It is a fairly black-and-white problem. If you secure enough bandwidth that 25 to 30 percent of your users can simultaneously watch video feeds, and still have some headroom on your circuit, congratulations – you have reached bandwidth nirvana.

Why is video the lynchpin in this discussion?

Aside from the occasional iOS/Windows update, most consumers really don’t use that much bandwidth on a regular basis. Skype, chat, email, and gaming, all used together, do not consume as much bandwidth as video. Hence, the marker species for congestion is video.

Below, I present some of the metrics to see if you can mothball your bandwidth shaper.

1) How to determine future bandwidth demand.
Believe it or not, you can outrun your bandwidth demand if your latest bandwidth upgrade is large enough to handle the average video load per customer. If so, it is possible that no further upgrades will be needed, at least in the foreseeable future.

In the “video for all” scenario, the rule of thumb is to assume 25 percent of your subscribers are watching video at any one time. If you still have 20 percent of your bandwidth left over, you have reached the video-for-all threshold.

To put some numbers to this:
Assume 2,000 subscribers and a 1 gigabit link. The average video feed requires about 2 megabits (note that some HD video is higher than this). That means supporting video for 25 percent of your subscribers would use the entire 1 gigabit, with nothing left over for anybody else; hence you will run out of bandwidth.

Now if you have 1.5 gigabits for 2,000 subscribers, you have likely reached the video-for-all threshold, and most likely you will be able to support them without any advanced intelligent bandwidth control. A simple 10 megabit rate cap per subscriber is likely all you would need.
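
The worked example above can be captured in a few lines. This is a sketch; the 25 percent viewer share, 2 megabit feed, and 20 percent headroom figures are simply the rules of thumb from this post:

```python
# "Video for all" check: can the assumed share of subscribers stream
# video simultaneously and still leave headroom on the circuit?
def video_for_all(link_mbps, subscribers, video_mbps=2,
                  viewer_share=0.25, headroom=0.20):
    peak_video_mbps = subscribers * viewer_share * video_mbps
    return peak_video_mbps <= link_mbps * (1 - headroom)

print(video_for_all(1000, 2000))   # 1 Gbps, 2,000 subscribers -> False
print(video_for_all(1500, 2000))   # 1.5 Gbps -> True
```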

2) Honeymoon periods are short-lived.
The reprieve in congestion after a bandwidth upgrade is usually short-lived because the operator either does not have a good intelligent bandwidth control solution, or removes their existing solution in the mistaken belief that they have reached the “video for all” level. In reality, they are still in the video-not-for-all regime, lulled into a false sense of security for a brief honeymoon period: right after the upgrade things are okay, because it takes a while for a user base to fill the void of a new bandwidth upgrade.

Bottom line: unless you have the numbers to support 25 to 30 percent of your user base running video, you will need some kind of bandwidth control.

Application Shaping and Encryption on a Collision Course

Art Reisman, CTO APconnections

I have had a few conversations lately in which I mentioned that, due to increased encryption, application shaping is really no longer viable. This statement without context evokes some quizzical stares, and thus inspired me to expound.

I believe that due to increased use of encryption, Application Shaping is really no longer viable…

Yes, there are still ways to censor traffic and web sites, but shaping it, as in allocating a fixed amount of bandwidth for a particular type of traffic, is becoming a thing of the past. And here is why.

First a quick primer in how application shaping works.

When an IP packet with data comes into the application shaper, the shaper opens the packet and looks inside. In the good old days the shaper would see the data inside the packet the same way it appeared in context on a web page. For example, when you loaded up the post that you are reading now, the actual text was transported from the WordPress host server across the Internet to you, broken up into a series of packets. The only difference between the text on the page and the text crossing the Internet is that the text in the packets is chopped up into segments (about 1,500 bytes per packet is typical).

Classifying traffic in a packet shaper requires intercepting packets in transport, and looking inside them for particular patterns that are associated with applications (such as YouTube, Netflix, Bittorrent, etc.).  This is what is called the application pattern. The packet shaping appliance looks at the text inside the packets and attempts to identify unique sequences of characters, using a pattern matcher. Packet shaping companies, at least the good ones, spend millions of dollars a year keeping up with various patterns associated with ever-changing applications.
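
In simplified form, signature matching looks something like this. A minimal sketch: apart from the well-known BitTorrent handshake string, the patterns below are made up for illustration, and real shapers maintain large, constantly updated signature libraries:

```python
# Toy DPI classifier: scan a packet payload for known byte patterns.
SIGNATURES = {
    b"BitTorrent protocol": "bittorrent",  # the real BitTorrent handshake string
    b"GET /videoplayback":  "video",       # hypothetical pattern, for illustration
}

def classify(payload: bytes) -> str:
    """Return the application label for the first matching pattern."""
    for pattern, app in SIGNATURES.items():
        if pattern in payload:
            return app
    return "unknown"   # encrypted payloads almost always land here
```

The last line is the whole point of this post: once the payload is encrypted, every packet falls through to "unknown".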

Perhaps you have used HTTPS or SSH; these are standard security features built into a growing number of websites and services. When you access a web page from a URL starting with HTTPS, the website is using encryption, and the text gets scrambled differently each time it is sent. Since the scrambling is unique for every user accessing the site, there is no one set pattern, and so a shaper using application shaping cannot classify the traffic. Hence the old methods used by packet shapers are no longer viable.

Does this also mean that you cannot block a website with a Web Filter when HTTPS is used?

I deliberately posed this question to highlight the difference between filtering a site and using application shaping to classify traffic. A site cannot typically hide the originating URL, as the encryption will not begin until there is an initial handshake. A web filter blocks a site based on the URL, thus blocking technology is still viable to prevent access to a website. Once the initial URL is known, data transfer is often set up on another transport port, and there is no URL involved in the transfer. Thus the packet shaper has no idea of where the datastream came from, nor is there any pattern that can be discerned due to the encryption stream.
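
The contrast can be sketched in a few lines: the filter’s decision needs only the hostname visible during connection setup, not the payload. The blocklist entry below is a made-up example:

```python
# Toy web filter: block by requested hostname, seen before encryption begins.
BLOCKLIST = {"example-blocked-site.com"}   # illustrative entry

def allow_request(hostname: str) -> bool:
    """True if the connection may proceed, False if it should be dropped."""
    host = hostname.lower().removeprefix("www.")  # match both forms
    return host not in BLOCKLIST
```

Note the filter never needs to read the encrypted data stream that follows the handshake, which is exactly why it keeps working where the application shaper fails.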

So the short answer is that you can block a website using a web filter, even when https is used.  However, as we have seen, the same does not apply to shaping the traffic with an application shaper.

The Technology Differences Between a Web Filter and a Traffic Shaper

First, a couple of definitions, so we are all on the same page.
A Web Filter is basically a type of specialized firewall with a configurable list of URLs.  Using a Web Filter, a Network Administrator can completely block specific web sites, or block complete categories of sites, such as pornography.

A Traffic Shaper is typically deployed to change the priority of certain kinds of traffic.  It is used where blocking traffic completely is not required, or is not an acceptable practice.  For example, the mission of a typical Traffic Shaper might be to allow users to get into their Facebook accounts, and to limit their bandwidth so as to not overshadow other more important activities.  With a shaper the idea is to limit (shape) the total amount of data traffic for a given category.

From a technology standpoint, building a Web Filter is a much easier proposition than creating a Traffic Shaper.  This is not to demean the value or effort that goes into creating a good Web Filter.  When I say “easier”, I mean this from a core technology point of view.  Building a good Web Filter product is not so much a technology challenge, but more of a data management issue. A Web Filter worth its salt must be aware of potentially millions of various websites that are ever-changing. To manage these sites, a Web Filter product must be constantly getting updated. The product company supporting the Web Filter must search the Web, constantly indexing new web sites and their contents, and then passing this information into the Web Filter product. The work is ongoing, but not necessarily daunting in terms of technology prowess.  The actual blocking of a Web site is simply a matter of comparing a requested URL against the list of forbidden web sites and blocking the request (dropping the packets).
A Traffic Shaper, on the other hand, has a more daunting task than the Web Filter. This is due to the fact that unlike the Web Filter, a Traffic Shaper kicks in after the base URL has been loaded. I’ll walk through a generic scenario to illustrate this point. When a user logs into their Facebook account, the first URL they hit is a well-known Facebook home page. Their initial request coming from their computer to the Facebook home page is easy to spot by the Web Filter, and if you block it at that first step, that is the end of the Facebook session. Now, if you say to your Traffic Shaper “I want you to limit Facebook Traffic to 1 megabit”, the task gets a bit trickier. This is because once you are logged into a Facebook page, subsequent requests are not that obvious. Suppose a user downloads an image or plays a shared video from their Facebook screen. There is likely no context for the Traffic Shaper to know the URL of the video is actually coming from Facebook. Yes, to the user it is coming from their Facebook page, but when they click the link to play the video, the Traffic Shaper only sees the video link – it is not a Facebook URL any longer. On top of that, oftentimes the Facebook page and its contents are encrypted for privacy.
For these reasons a traditional Traffic Shaper inspects the packets to see what is inside.  The traditional Traffic Shaper uses Deep Packet Inspection (DPI) to look into the data packet to see if it looks like Facebook data. This is not an exact science, and with the widespread use of encryption, the ability to identify traffic with accuracy is becoming all but impossible.
The good news is that there are other heuristic ways to shape traffic that are gaining traction in the industry.  The bad news is that many end customers continue to struggle with diminishing accuracy of traditional Traffic Shapers.
For more in-depth information on this subject, feel free to e-mail me.
By Art Reisman, CTO APconnections

Changing Times: Five Points to Consider When Trying to Shape Internet Traffic

By Art Reisman, CTO, APconnections

1) Traditional Layer 7 traffic-shaping methods are NOT able to identify encrypted traffic. In fact, short of an NSA back door built into some encryption schemes, traditional Layer 7 traffic shapers are slowly becoming obsolete as the percentage of encrypted traffic expands.
2) As of 2014, it was estimated that up to 6 percent of the traffic on the Internet was encrypted, and this was expected to double in the next year or so.
3) It is possible to identify the source and destination of traffic even on encrypted streams. The sending and receiving IPs of encrypted traffic are never encrypted; hence large content providers, such as Facebook, YouTube, and Netflix, may be identified by their IP addresses. But there are some major caveats:

– It is common for the actual content from major content providers to be served from regional servers under different domain names (often registered to third parties). Simply trying to identify traffic content by its originating domain is too simplistic.

– I have been able to trace proxied traffic back to its originating domain with accuracy by first doing some experiments. I start by initiating a download from a known source, such as YouTube or Netflix, and then figure out the actual IP address of the proxy serving the download. From this, I know that this particular IP is most likely the source of any subsequent YouTube traffic. The shortfall of relying on this technique is that IP addresses change regionally, and there are many of them; you cannot assume what is true today will be true tomorrow for any proxy domain serving up content. Think of the domains used for content like a leased food cart that changes menus each week.

4) Some traffic can be identified by behavior, even when it is encrypted. For example, the footprint of a single computer with a large connection count can usually be narrowed down to one of two things: either BitTorrent, or some kind of virus on the local computer. BitTorrent clients tend to open many small connections and hold them open for long periods of time. But again there are caveats. Legitimate BitTorrent providers, such as universities distributing public material, will use just a few connections to accomplish the data transfer, whereas consumer-grade BitTorrent clients, often used for illegal file sharing, may use hundreds of connections to move a file.

5) I have been alerted to solutions that require organizations to retrofit all endpoints with pre-encryption utilities, thus allowing the traffic shaper to receive data before it is encrypted. I am not privy to the mechanics of how this is implemented, but I would assume that outside of very tightly controlled networks, such a method would be a big imposition on users.
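
The connection-count heuristic from point 4 can be sketched as follows. The threshold is illustrative, not a tuned value:

```python
# Flag hosts whose open connection count far exceeds normal browsing,
# a behavioral footprint typical of consumer BitTorrent clients.
SUSPECT_CONNECTIONS = 100   # illustrative threshold

def suspect_hosts(conn_counts):
    """conn_counts: dict of host_ip -> number of open connections.
    Returns the set of hosts worth a closer look."""
    return {ip for ip, n in conn_counts.items()
            if n >= SUSPECT_CONNECTIONS}

# e.g. a consumer BitTorrent client next to ordinary browsing:
counts = {"10.0.0.5": 340, "10.0.0.9": 12}
print(suspect_hosts(counts))   # {'10.0.0.5'}
```

As the caveats above note, a hit here is a signal, not proof: a university seeding public material over a handful of connections would never trip this check, while a virus-infected machine might.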
