Covid-19 and Increased Internet Usage


Our sympathies go out to everyone who has been impacted by Covid-19, whether you had it personally or it affected your family and friends. I personally lost a sister to Covid-19 complications back in May; hence I take this virus very seriously.

Now that we see a light at the end of the Covid-19 tunnel, with vaccines anticipated next month, the question I ask myself is: how has Covid-19 changed the IT landscape for us and our customers?

The biggest change that we have seen is Increased Internet Usage.

We have seen a 500 percent increase in NetEqualizer license upgrades over the past 6 months, which means that our customers are ramping up their circuits to ensure a work-from-home experience without interruption or outages. What we can't tell for sure is whether these upgrades were made out of an abundance of caution, to get ahead of the curve, or in response to an actual significant increase in demand.

Without a doubt, home Internet usage has increased, as consumers work from home on Zoom calls, watch more movies, and find ways to entertain themselves in a world where they are staying at home most of the time. Did this shift actually put more traffic on the average business office network where our bandwidth controllers normally reside? The knee-jerk reaction would be yes, of course, but I would argue not so fast. Let me lay out my logic here…

For one, with a group of people working remotely using the plethora of cloud-hosted collaboration applications such as Zoom or Blackboard, there is very little, if any, extra bandwidth burden back at the home office or campus. The additional cloud-based traffic from remote users is pushed onto their residential ISPs. On the other hand, organizations that did not transition services to the cloud will have their hands full handling the traffic from home users coming into the office over VPN.

Higher Education usage is a slightly different animal. Let's explore the three different cases as I see them.

#1) Everybody is Remote

In this instance it is highly unlikely there would be any increase in bandwidth usage at the campus itself. All of the Zoom or Microsoft Teams traffic would be shifted to the ISPs at the residences of students and teachers.

#2) Teachers are On-Site and Students are Remote

For this we can do an approximation.

For each teacher hosting a live session, you can estimate 2 to 8 megabits of sustained bandwidth load. Take a high school with 40 teachers on active Zoom calls: at the high end, that works out to roughly 300 megabits of sustained load dedicated to Zoom. With just a skeleton crew of teachers and no students in the building, the Internet capacity should hold, since the students who normally eat up huge chunks of bandwidth are no longer on the network.
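
For readers who want to plug in their own numbers, here is a minimal sketch of that arithmetic in Python (the 2-to-8-megabit per-session range is the estimate from above; actual per-session load varies with camera count and resolution):

```python
# Back-of-the-envelope estimate of sustained video-conferencing load,
# using the 2-8 megabit-per-session range cited above.

def conference_load_mbps(active_sessions, low=2.0, high=8.0):
    """Return the (low, high) sustained load range in Mbps."""
    return active_sessions * low, active_sessions * high

low, high = conference_load_mbps(40)
print(f"40 sessions: {low:.0f} to {high:.0f} Mbps sustained")  # 80 to 320 Mbps
```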

#3) Mixed Remote and In-Person Students

The one scenario that would stress existing infrastructure is the case where students are on campus while, at the same time, classes are being broadcast remotely for the students who are unable to come to class in person. In this instance, you have close to the normal campus load plus all the Zoom or Microsoft Teams sessions emanating from the classrooms. To top it off, these Zoom or Microsoft Teams sessions are highly sensitive to latency, and thus the institution cannot risk even a small amount of congestion, as that would cause an interruption to all classes.

Prior to Covid-19, Internet congestion might interrupt a Skype conference call with the sales team in Europe, which is no laughing matter but a survivable disruption. Post Covid-19, an interruption in Internet communication could potentially interrupt the entire organization, which is not tolerable.

In summary, it was probably wise for most institutions to beef up their IT infrastructure to handle more bandwidth, even knowing in hindsight that in some cases it may not have been needed at the campus or the office. Given the absolutely essential role that Internet communication has played in keeping businesses and Higher Ed connected, it was not worth the risk of being caught with too little.

Stay tuned for a future article detailing the impact of Covid-19 on ISPs…

Cloud Computing Creates Demand For Bandwidth Shaping


The rise of cloud computing has been a mixed bag for the bottom line of traditional network hardware manufacturers.  Yes, there is business to be had by supplying the burgeoning cloud service providers with new hardware; however, as companies move their applications into the cloud, the elaborate WAN networks of yesteryear are slowly being phased out. The result is a decrease in sales of routers and switches, a dagger in the heart of the very growth engine that gave rise to the likes of Cisco, Juniper, and Adtran.

From a business perspective, we are pleasantly surprised to see an uptick in demand for bandwidth shapers in the latter half of 2017. We expect this to continue into 2018 and beyond.

Why are bandwidth shapers seeing an uptick in interest?

Prior to the rise of cloud computing, companies required large internal LAN network pipes, with relatively small connections to the Internet. As services move to the Cloud, the data that formerly traversed the local LAN is now being funneled out of the building through the pipe leading to the Internet. For the most part, companies realize this extra burden on their Internet connection and take action by buying more bandwidth. Purchasing bandwidth makes sense in markets where bandwidth is cheap, but is not always possible.

Companies are realizing they cannot afford to have gridlock into their Cloud. Network administrators understand that at any time an unanticipated spike in bandwidth demand could overwhelm their cloud connection. The ramifications of clogged cloud connections could be catastrophic to their business, especially as more business is performed online. Hence, we are getting preemptive inquiries about using a smart bandwidth shaper to ensure that critical cloud services get priority across their Internet connection.

We are also getting inquiries from businesses that have fallen behind and are unable to upgrade their Internet pipe fast enough to keep up with Cloud demand.   This cyclical pattern of upgrading/running out of bandwidth can be tempered by using a bandwidth shaper.  As your network peaks, your bandwidth shaper can ensure that available resources are shared optimally, until you upgrade and have more bandwidth available.

Although moving to the Cloud seems to introduce a new paradigm, from the perspective of network optimization the challenges are the same. Over the years we have always recommended a two-prong approach to optimization: 1) adequate bandwidth, and 2) bandwidth shaping. The reason for our recommendation continues to be the same. With bandwidth shaping, you are ensuring that you are best positioned to handle peak traffic on your network. And now, more than ever, as business goes “online” and into the Cloud, and both your employees and your customers are on your network, bandwidth shaping is a prudent insurance policy for providing a great experience on your network.

How to Survive High Contention Ratios and Prevent Network Congestion



Is there a way to raise contention ratios without creating network congestion, thus allowing your network to service more users?

Yes there is.

First a little background on the terminology.

Congestion occurs when a shared network attempts to deliver more bandwidth to its users than is available. We typically think of an oversold/contended network with respect to ISPs and residential customers; but this condition also occurs within businesses, schools and any organization where more users are vying for bandwidth than is available.

The term contention ratio is used in the industry as a way of determining just how oversold your network is. A contention ratio is simply the number of users sharing an Internet trunk divided by the size of the trunk. We normally think of Internet trunks in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio.

A decade ago, a 10-to-1 contention ratio was common. Today, bandwidth is much less expensive and average contention ratios have come down. Unfortunately, as bandwidth costs have dropped, pressure on trunks has risen, as today's applications require increasing amounts of bandwidth. The most common congestion symptom is slow network response times.
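
As a quick illustration, here is that calculation in Python (a toy helper; real capacity planning would weight each user by their subscribed rate):

```python
# Contention ratio as defined above: the number of users sharing a
# trunk divided by the trunk size in megabits.

def contention_ratio(users, trunk_mbps):
    return users / trunk_mbps

print(contention_ratio(10, 1))    # 10.0 -> a 10-to-1 ratio
print(contention_ratio(500, 100)) # 5.0  -> a 5-to-1 ratio
```
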
Now back to our original question…

Is there a way to raise contention ratios without creating congestion, thus allowing your network to service more users?

This is where a smart bandwidth controller can help. Back in the “old” days before encryption was king, most solutions involved classifying types of traffic and restricting less important traffic based on customer preferences. Classifying by type went away with encryption, which prevents traffic classifiers from seeing the specifics of what is traversing a network. A modern bandwidth controller instead uses dynamic rules to restrict traffic based on aberrant behavior. Although this might seem less intuitive than specifically restricting traffic by type, it turns out to be just as reliable, not to mention simpler and more cost-effective to implement.

We have seen results where a customer can increase their user base by as much as 50 percent and still have decent response times for interactive cloud applications.

To learn more, contact us; our engineering team is more than happy to go over your specific situation to see if we can help you.

Three Myths About QoS and Your Internet Speed


Myth #1:  A QoS device will somehow make your traffic go faster across the Internet.

The Internet does not care about your local QoS device.  In fact, QoS means nothing to the Internet.  The only way your traffic can get special treatment across the Internet would be for you to buy a private dedicated link – which is really not practical for general Internet usage, as it would only be a point-to-point link.

Myth #2:  QoS will enhance the speed of your internal network.

The speed of your local internal links is fixed; they always run at their maximum rate. The only way applying QoS can make something “appear” to go faster is by restricting some traffic in favor of other traffic. I constantly get asked by our customers if we can make important traffic get through faster, and my follow-up questions are always the same.

  1. Do you have a congestion problem now?
    If not, then there is no need for any form of QoS, because your data is already moving as fast as possible.
  2. If you do have congestion, what traffic do you want me to degrade so that other traffic can run without congestion?

Myth #3:  There is nothing you can do to give priority to incoming traffic on your Internet link.

Wrong! Okay, so this sounds like it may be a contradiction of Myth #1, but there is a difference in how you ask the question. Yes, it is true that the Internet does not care about your QoS desires and will never give preferential treatment to your traffic. But the sending service DOES care about whether the data it is transmitting is being received at the appropriate speed for your link, and you can take advantage of this.

All senders of data into your network are constantly monitoring the speed at which that traffic is getting to you. Now, recall that the very essence of QoS is favoring one type of traffic over another. Let's say, for example, that you have a very congested Internet link with many incoming downloads. Let's say one download is an iOS update, and the other is your favorite streaming Netflix movie. By delaying the iOS update packets at the edge of your network, the sender will sense this delay and back off on the download. The result is that there is more bandwidth left over for your favorite Netflix stream, and hence you have attained a higher quality of service for your Netflix over the iOS download. How this delay is implemented is another story.
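
To see why the sender backs off, recall that a TCP flow's throughput is capped at roughly its window size divided by the round-trip time. The numbers below are purely illustrative (a fixed 64 KB window is assumed), but they show how a little added queuing delay shrinks a download's throughput ceiling:

```python
# Why delaying packets causes a TCP sender to back off: a flow's
# throughput is bounded by window_size / round-trip time, so added
# queuing delay directly lowers the rate the sender can sustain.

def tcp_throughput_mbps(window_bytes, rtt_seconds):
    return (window_bytes * 8) / rtt_seconds / 1_000_000

WINDOW = 64 * 1024  # assume a typical 64 KB receive window

for rtt_ms in (20, 60, 200):  # base RTT, then two levels of added delay
    ceiling = tcp_throughput_mbps(WINDOW, rtt_ms / 1000)
    print(f"RTT {rtt_ms:3d} ms -> {ceiling:5.1f} Mbps ceiling")
```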

If you are interested in learning more, please feel free to contact us.

Economics of the Internet Cloud Part 1



By Art Reisman

CTO, APconnections

Why is it that you need to load up all of your applications and carry them around with you on your personal computing device? From iBird Pro to your favorite weather application, the standard operating model assumes you purchase these things and then affix them to your medium of preference.

Essentially you are tethered to your personal device.

Yes, there are business reasons why a company like Apple would prefer this model. They own the hardware and control the applications, and thus it is in their interest to keep you walled off and loyal to your investment in Apple products.

But there is another, more insidious economic restriction that forces this model upon us, and that is a lag in the speed and availability of wireless bandwidth. If you had a wireless connection to the cloud that was low-cost and offered a minimum of 300 megabits of access without restriction, you could instantly fire up any application in existence without ever pre-downloading it. Your personal computing device would not store anything. This is the world of the future that I referenced in my previous article, Will Cloud Computing Obsolete Your Personal Device?

The X factor in my prediction is when we will have 300-megabit wireless bandwidth speeds across the globe without restrictions. The assumption is that bandwidth speeds and prices will follow a curve similar to the improvements in computing speeds, a Moore's Law for bandwidth if you will.

It will happen, but the question is how fast: 10 years, 20 years, 50 years? And when it does, vendors and consumers will quickly learn it is much more convenient to keep everything in the cloud. No more apps tied to your device. People will own some very cheap cloud space for all their “stuff”, and the device on which it runs will become less and less important.

Bandwidth speed increases in wireless are running against some pretty severe headwinds, which I will cover in my next article. Stay tuned.

Will Cloud Computing Obsolete Your Personal Device?



By Art Reisman

CTO, APconnections

Twenty-two years ago, all the buzz amongst the engineers in the AT&T Bell Labs offices was a technology called “thin client”. The term “cloud” had not yet been coined, but the seeds had been sown. We went to our project management as we always did when we had a good idea, and as usual, being the dinosaurs that they were, they could not even grasp the concept (their brains were three sizes too small), and so the idea was tabled.

And then came the Googles and the Apples of the world, the disrupters. As Bell Labs reached old age and wallowed in its death throes, I watched from afar as cloud computing took shape.

Today cloud computing is changing the face of the computer and networking world. From my early-90s excitement, it took over 10 agonizing years for the first cotyledons to appear above the soil. And even today, 20 years later, cloud computing is in its adolescence; the plants are essentially teenagers.

Historians probably won't even take note of those 10 lost years. They will be footnoted as if that transition time were instantaneous. For those of us who waited in anticipation during that incubation period, the time was real; it lasted over a quarter of our professional working lives.

Today, cloud computing is having a ripple effect on other technologies that were once assumed sacred. For example, customer premise networks and all the associated hardware are getting flushed down the toilet. Businesses are simplifying their on-premise networks and will continue to do so. This is not good news for Cisco, the desktop PC manufacturers, the chip makers, and on down the line.

What to expect 20 years from now? Okay, here goes: I predict that the “personal” computing devices that we know and love might fall into decline in the next 25 years. Say goodbye to “your” iPad or “your” iPhone.

That's not to say you won't have a device at your disposal for personal use, but it will only be tied to you for the period during which you are using it. You walk into the store, and along with the shopping carts there is a stack of computing devices; you pick one up, touch your thumb to it, and instantly it has all your data.

Imagine if personal computing devices were so ubiquitous in society that you did not have to own one. How freeing would that be? You would not have to worry about forgetting it or taking it through security. Wherever you happened to be, in a hotel or a library, you could just grab one of the many complimentary devices stacked at the door, touch your thumb to the screen, and you are ready to go: e-mail, pictures, games, all your personal settings at hand.

Yes, you would pay for the content and the services, through the nose most likely, but the hardware would be an irrelevant commodity.

Still skeptical? I'll cover the economics of how this transition will happen in my next post. Stay tuned.

Five Requirements for QoS and Your Cloud Computing


I received a call today from one of the largest Tier 1 providers in the world. The salesperson on the other end was lamenting his inability to sell cloud services to his customers. His service offerings were hot, but the customers' Internet connections were not. Until his customers resolved their congestion problems, they were in a holding pattern for new cloud services.

Before I finish my story, I promised a list of what a Next Generation traffic controller can do, so without further ado, here it is.

  1. Next Generation Bandwidth controllers must be able to mitigate traffic flows originating from the Internet such that important Cloud Applications get priority.
  2. Next Generation Bandwidth controllers must NOT rely on Layer 7 DPI technology to identify traffic. (too much encryption and tunneling today for this to be viable)
  3. Next Generation Bandwidth controllers must hit a price range of $5k to $10k USD  for medium to large businesses.
  4. Next Generation Traffic controllers must not require babysitting and adjustments from the IT staff to remain effective.
  5. A Next Generation traffic controller should adopt a Heuristics-based decision model (like the one used in the NetEqualizer).

As for those businesses mentioned by the sales rep, many of them had run into bottlenecks when they moved to the cloud. The bottlenecks were due to their iOS updates and recreational “crap” killing the cloud application traffic on their shared Internet trunk.

Their original assumption was they could use the QoS on their routers to mitigate traffic. After all, that worked great when all they had between them and their remote business logic was a nailed up MPLS network. Because it was a private corporate link, they had QoS devices on both ends of the link and no problems with recreational congestion.

Moving to the Cloud was a wake-up call! Think about it: when you go to the cloud, you only control one end of the link. This means that your router-based QoS is no longer effective, and incoming traffic will crush you if you do not do something different.

The happy ending is that we were able to help our friend at BT by mitigating his customers' bottlenecks. Contact us if you are interested in more details.

Six Ways to Save With Cloud Computing


I was just doing some research on the cost savings of Cloud computing, and clearly it is shaking up the IT industry.  The five points in this Webroot article, “Five Financial Benefits of Moving to the Cloud”, really hit the nail on the head.   The major points are listed below.

#1. Fully utilized hardware

#2. Lower power costs

#3. Lower people costs

#4. Zero capital costs

#5. Resilience without redundancy

Not listed in the article details was a 6th way that you save money in the cloud.  The following is from conversations I have had with a few of our customers that have moved to the Cloud.

#6.  Lower network costs

Since your business services are in the cloud, you can ditch all of those expensive MPLS links that you use to privately tie your offices to your back-end systems, and replace them with lower-cost commercial Internet links. You do not really need more bandwidth, just better bandwidth performance.  The commodity Internet links are likely good enough, but… when you move to the Cloud you will need a smart bandwidth shaper.

Your link to the Internet becomes even more critical when you go to the Cloud. But that does not mean bigger and more expensive pipes. Cloud applications are very lean, and you do not need a big pipe to support them. You just need to make sure recreational traffic does not cut into your business application traffic. Here is my shameless plug: the NetEqualizer is perfectly designed to separate out the business traffic from the recreational. Licensing is simple, and surprisingly affordable.

The NetEqualizer is Cloud-Ready.  If you are moving your business applications to the Cloud, contact us to see if we can help ease congestion for your traffic going both to and from the Cloud.

How Much Bandwidth do you Need for Cloud Services?


The good news is most cloud applications have a very small Internet footprint. The bad news is, if left unchecked, all that recreational video will suck the life out of your Internet connection before you know it.

The screen shot below is from a live snapshot depicting bandwidth utilization on a business network.

[Screenshot: live bandwidth utilization on a business network]

That top number, circled in red, is a YouTube video, and it is consuming about 3 megabits of bandwidth.  Directly underneath that are a couple of cloud service applications from Amazon, and they are consuming 1/10 of what the YouTube video demolishes.

Over the past few years I have analyzed quite a few customer systems, and I consistently see cloud-based business applications consuming  a small fraction of what video and software updates require.

For most businesses,  if they never allowed a video or software update to cross their network, they could easily handle all the cloud-based business applications without worry of running out of room on their trunks. Remember, video and updates use ten times what cloud applications consume. The savings in bandwidth utilization would be so great that  they could cut their contracted bandwidth allocation to a fraction of what they currently have.

Coming back to earth, I don’t think this plan is practical. We live in a video and software update driven world.

If you can't outright block video and updates, the next best thing is to give them a lower priority when there is contention on the line. The natural solution most IT administrators gravitate to is trying to identify traffic by type. Although intuitively appealing, typecasting traffic on the fly has some major drawbacks. The biggest is that most traffic now comes across encrypted, and you really can't expect to identify traffic once it is encrypted.

The good news is that you can reliably guess that your smaller-footprint traffic is Cloud or interactive (important), and that those large 3-megabit-plus streams should get a lower priority (not as important). For more on the subject of how to set your cloud priority, we recommend reading: QoS and Your Cloud Applications

Capacity Planning for Cloud Applications


The main factors to consider when capacity planning your Internet Link for cloud applications are:

1) How much bandwidth do your cloud applications actually need?

Typical cloud applications require about 1/2 of a megabit or less. There are exceptions to this rule, but for the most part a good cloud application design does not involve large transfers of data. QuickBooks, Salesforce, Gmail, and just about any cloud-based database will be under the 1/2 megabit guideline. The chart below really brings to light the difference between your typical interactive Cloud Application and the types of applications that will really eat up your data link.

[Chart: Bandwidth Usage for Cloud-Based Applications compared to Big Hitters]

2) What types of traffic will be sharing your link with the cloud?

The big hitters are typically YouTube and Netflix.  They can consume up to 4 megabits or higher per connection.  Also, system updates for Windows and iOS, as well as internal backups to cloud storage, can consume 20 megabits or more.  Another big hitter can be typical Web Portal sites, such as CNN, Yahoo, and Fox News. A few years ago these sites had a small footprint as they consisted of static images and text.  Today, many of these sites automatically fire up video feeds, which greatly increase their footprint.

3) What is the cost of your Internet Bandwidth, and do you have enough?

Obviously, if there were no limit to the size of your Internet pipe or the required infrastructure to handle it, there would be no concerns or need for capacity planning. To be safe, a good rule of thumb as of 2016 is that you need about 100 megabits per 20 users. With less than that, you will need to be willing to scale back some of those larger bandwidth-consuming applications, which brings us to point 4.
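
As a rough planning aid, here is that rule of thumb in Python (the 5-megabits-per-user figure simply restates 100 megabits per 20 users; treat it as a 2016-era starting point, not a guarantee):

```python
# Rule-of-thumb link sizing from above: about 100 megabits per
# 20 users, i.e. roughly 5 Mbps per user.

MBPS_PER_USER = 100 / 20  # 5.0

def recommended_link_mbps(users):
    return users * MBPS_PER_USER

for n in (20, 50, 200):
    print(f"{n:3d} users -> ~{recommended_link_mbps(n):.0f} Mbps")
```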

4) Are you willing to give a lower priority to recreational traffic in order to ensure your critical cloud applications do not suffer?

Hopefully you work in an organization where compromise can be explained, and the easiest compromise to make is to limit non-essential video and recreational traffic.  And those iOS updates? Typically a good bandwidth control solution will detect them and slow them down, so essentially they run in the background with a smaller footprint over a longer period of time.

Bandwidth Control in the Cloud


The good news about cloud-based applications is that, in order to be successful, they must be fairly lightweight in terms of their bandwidth footprint. Most cloud-based designers create applications with a fairly small data footprint. A poorly designed cloud application that required large amounts of data transfer would not get good reviews and would likely fizzle out.

The bad news is that cloud applications must share your Internet link with recreational traffic, and recreational traffic is often bandwidth-intensive, with no intention of playing nice when sharing a link.

For businesses, a legitimate concern is having their critical cloud-based applications starved for bandwidth. When this happens, they can perform poorly or lock up, creating a serious drop in productivity.


If you suspect bandwidth contention is impacting the performance of a critical cloud application, the best place to start your investigation is with a bandwidth controller/monitor that can show you the basic footprint of how much bandwidth each application is using.

Below is a quick screen shot from our NetEqualizer that I often use when troubleshooting a customer link. It gives me a nice snapshot of utilization. I can sort the heaviest users by their bandwidth footprint, and then click on a convenient DNS lookup tab to see who they are.

[Screenshot: NetEqualizer utilization view, sorted by bandwidth footprint]

In my next post I will detail some typical bandwidth planning metrics for going to the cloud. Stay tuned.

Does Your School Have Enough Bandwidth for On-line Testing?


K-12 schools are all rapidly moving toward “one-to-one” programs, where every student has a computer, usually a laptop. Couple this with standardized, cloud-based testing services, and you have the potential for Internet gridlock during testing periods. Some of the common questions we hear are:

How will all of these students using the cloud affect our Internet resources?

Will there be enough bandwidth for all of those students using on-line testing?

What type of QoS should we deploy, or should we buy more bandwidth?

The good news is that most cloud testing services are designed with a fairly modest bandwidth footprint.

For example, a student connection to a cloud testing application will average around 150 kbps (kilobits per second).

In a perfect world, a 40-megabit link could handle about 266 students (40,000 kbps ÷ 150 kbps) simultaneously doing on-line testing, as long as there was no other major traffic.

On the other hand, a video stream may average 1,500 kbps or more.

A raw download, such as an iOS update, may take as much as 15,000 kbps; that is 100 times more bandwidth than the student taking an on-line test.
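
To make the comparison concrete, here is a small sketch using the per-stream figures quoted above (illustrative averages; real traffic is burstier):

```python
# How many concurrent streams of each type fit on a 40-megabit link,
# using the per-stream averages quoted above.

LINK_KBPS = 40_000  # a 40-megabit link

STREAM_KBPS = {
    "on-line test": 150,
    "video stream": 1_500,
    "iOS update":   15_000,
}

for name, kbps in STREAM_KBPS.items():
    print(f"{name:13s}: ~{LINK_KBPS // kbps} concurrent streams")
```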

A common belief when choosing a bandwidth controller to support on-line testing is that you should find a tool which will specifically identify the on-line testing service and the non-essential applications, thus allowing the IT staff at the school to make adjustments giving the testing a higher priority (QoS). Yes, this strategy seems logical, but there are several drawbacks:

  • It requires a fairly sophisticated form of bandwidth control and can be fairly labor-intensive and expensive.
  • Much of the public Internet traffic may be encrypted or tunneled, and hard to identify.
  • Another complication in trying to give Internet traffic traditional priority is that a typical router cannot give priority to incoming traffic, and most of the test traffic is incoming (from the outside in). We detailed this phenomenon in our post about QoS and the Internet.

The key is not to make the problem more complicated than it needs to be. If you just look at the footprint of the streams coming into the testing facility, you can assume, from our observation, that streams of around 150 kbps are of a higher priority than the larger streams, and simply throttle the larger streams. Doing so will ensure there is enough bandwidth for the testing service connections to the students. The easiest way to do this is with a heuristic-based bandwidth controller, a class of bandwidth shapers that dynamically give priority to smaller streams by slowing down larger streams, as sketched below.
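
As a rough illustration of that heuristic, here is a toy model (not any vendor's actual implementation) that flags streams above a size threshold for throttling once the link nears capacity:

```python
# Toy heuristic shaper: when the link is busy, throttle the big
# streams and leave the small (likely testing) streams alone.

LINK_KBPS = 40_000
BUSY_THRESHOLD = 0.85    # start shaping at 85% utilization
SMALL_STREAM_KBPS = 300  # streams under this are presumed interactive

def streams_to_throttle(streams_kbps):
    """streams_kbps maps a stream id to its measured rate in kbps."""
    load = sum(streams_kbps.values())
    if load < BUSY_THRESHOLD * LINK_KBPS:
        return []  # no congestion, no shaping needed
    return [sid for sid, rate in streams_kbps.items()
            if rate > SMALL_STREAM_KBPS]

streams = {"test-1": 150, "test-2": 140, "netflix": 3_000, "ios-update": 34_000}
print(streams_to_throttle(streams))  # ['netflix', 'ios-update']
```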

The other option is to purchase more bandwidth, or in some cases a combination of more bandwidth and a heuristic-based bandwidth controller, to be safe.

Please contact us for a more in-depth discussion of options.

For more information on cloud usage in K-12 schools, check out these posts:

Schools View Cloud Infrastructure as a Viable Option

K-12 Education is Moving to the Cloud

For more information on Bandwidth Usage by Cloud systems, check out this article:

Know Your Bandwidth Needs: Is Your Network Capacity Big Enough for Cloud Computing?

QoS and Your Cloud Applications, the Must Know Facts


When you make the switch to the cloud, you will likely discover that the standard QoS techniques, from the days when services were hosted within your enterprise, will not work on traffic coming in from the public Internet.  Below we detail why, and offer some unique alternatives to traditional router-based QoS. Read on to learn about new QoS techniques designed specifically for the Cloud.

Any QoS designed for the Cloud must address incoming traffic not originating on your Network

Most Internet congestion is caused by incoming traffic, from downloads of data not originating at your facility. Unlike the pre-cloud days, your local router cannot give priority to this data because it has no control over the sending server's stream. Yes, you can still control the priority of outgoing data, but if recreational traffic coming into your network arrives at the same priority as, let's say, a cloud-based VoIP call, then when your download link is full, all traffic will suffer.

Likely no help from your service provider

Even if you asked your cloud hosting service to mark their traffic as priority, your public Internet provider likely will not treat ToS bits with any form of priority. Hence, all data coming in from the Internet will hit your router with equal priority. During peak traffic times, important cloud traffic will not be able to punch through the morass.

Is there any way to give priority to incoming cloud traffic?

Is QoS over the Internet for Cloud traffic possible? The answer is yes. We have spent the better part of seven years practicing this art form, and while it is not rocket science, it does require a philosophical shift in thinking to get your arms around it.

How to give priority to Cloud Traffic

We call it “equalizing,” or behavior-based shaping, and it involves monitoring incoming and outgoing streams on your Internet link. Priority or QoS is nothing more than favoring one stream’s packets over another stream’s. You can accomplish priority QoS on incoming streams by queuing (slowing down) one stream over another without relying on ToS bits.

How do we determine which “streams” to slow down?

It turns out in the real world there are three types of applications that matter:

1) Cloud-based business applications: typically things like databases, accounting, Salesforce, educational applications, and VoIP services.

2) Recreational traffic, such as Netflix and YouTube.

3) Downloads and updates.

The kicker we discovered, and it almost always holds true, is that Cloud-based applications use a fraction of the bandwidth of recreational video traffic and downloads. If you can simply spot these non-essential data hogs by size and slow them down a bit, there will be plenty of room for your Cloud applications during peak periods.

How do we ensure that cloud traffic has priority if we can’t rely on QoS bits?

To be honest, we stumbled upon this technique about 12 years ago. We keep track of all the streams coming into your network with what can best be described as a sniffing device. When we see a large stream of data, we know from experience that it can't be cloud traffic, as the stream is too large. Cloud applications by design are rarely large streams, because if they were, the cloud application would likely be sluggish and not commercially viable. With our sniffing device, the NetEqualizer, we are able to slow down the non-cloud connections by adding in a tiny bit of latency, while at the same time allowing the cloud application streams to pass through. The interesting result is that the sending servers (the same ones that ignore ToS bits) will sense that their traffic is being delayed in transport and will back off their sending speeds on their own.
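
In pseudocode terms, the idea looks something like the sketch below. This is a highly simplified model of behavior-based shaping, not the NetEqualizer's actual code: sample each stream's rate, and when the link is saturated, add a small delay to the largest stream so its TCP sender backs off:

```python
# Simplified sketch of behavior-based shaping ("equalizing"):
# when the link is saturated, penalize the largest stream with a
# little added latency; its sender sees a longer round trip and
# slows down, freeing room for the small cloud streams.

LINK_MBPS = 100
SATURATION = 0.90    # begin shaping at 90% of link capacity
DELAY_STEP_MS = 20   # latency added to a penalized stream's packets

def shape(streams_mbps, delays_ms):
    """streams_mbps: stream id -> current rate; delays_ms: delay table."""
    total = sum(streams_mbps.values())
    if total < SATURATION * LINK_MBPS:
        delays_ms.clear()  # link is healthy: remove all penalties
        return
    # Penalize the single largest stream; repeat on the next sample
    # if the link is still saturated.
    biggest = max(streams_mbps, key=streams_mbps.get)
    delays_ms[biggest] = delays_ms.get(biggest, 0) + DELAY_STEP_MS

delays = {}
shape({"voip": 0.1, "salesforce": 0.3, "ios-update": 95.0}, delays)
print(delays)  # {'ios-update': 20} -> the big download gets delayed first
```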

For more information or a demo, feel free to contact us at http://www.netequalizer.com.

For further reading on this topic, check out this article: “Traffic Management, Vital in the Cloud”
