Capacity Planning for Cloud Applications


The main factors to consider when capacity planning your Internet Link for cloud applications are:

1) How much bandwidth do your cloud applications actually need?

Typical cloud applications require about half a megabit or less. There are exceptions to this rule, but for the most part a good cloud application design does not involve large transfers of data. QuickBooks, Salesforce, Gmail, and just about any cloud-based database will fall under the half-megabit guideline. The chart below really brings to light the difference between your typical, interactive cloud application and the types of applications that will really eat up your data link.

[Chart: Bandwidth usage for cloud-based applications compared to big hitters]

2) What types of traffic will be sharing your link with the cloud?

The big hitters are typically YouTube and Netflix. They can consume 4 megabits or more per connection. System updates for Windows and iOS, as well as internal backups to cloud storage, can consume 20 megabits or more. Another big hitter can be the typical web portal site, such as CNN, Yahoo, or Fox News. A few years ago these sites had a small footprint, as they consisted of static images and text. Today, many of them automatically fire up video feeds, which greatly increases their footprint.

3) What is the cost of your Internet Bandwidth, and do you have enough?

Obviously, if there were no limit to the size of your Internet pipe or the infrastructure required to handle it, there would be no concerns and no need for capacity planning. To be safe, a good rule of thumb as of 2016 is that you need about 100 megabits per 20 users. With less than that, you will need to be willing to scale back some of those larger bandwidth-consuming applications, which brings us to point 4.
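To make the rule of thumb concrete, here is a minimal sketch of the sizing math in Python. The function name and example user counts are ours; the 100-megabits-per-20-users figure is this article's 2016 guideline, not a universal constant.

```python
def required_bandwidth_mbps(users, mbps_per_20_users=100):
    """Rule-of-thumb link sizing: ~100 megabits per 20 users (circa 2016)."""
    return users * mbps_per_20_users / 20

for users in (20, 50, 200):
    print(f"{users:>3} users -> ~{required_bandwidth_mbps(users):.0f} megabits")
```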

4) Are you willing to give a lower priority to recreational traffic in order to ensure your critical cloud applications do not suffer?

Hopefully you work in an organization where compromise can be explained, and the easiest compromise to make is to limit non-essential video and recreational traffic. And those iOS updates? A good bandwidth control solution will typically detect them and slow them down, so they essentially run in the background with a smaller footprint over a longer period of time.

Behind the Scenes: How Many Users Can an Access Point Handle?


Assume you are teaching a class with thirty students, and every one of them needs help with their homework. What would you do? You’d probably schedule a time slot for each student to come in and talk to you one on one (assuming they all had different problems and there was no overlap in your tutoring).

Fast forward to your wireless access point. You have perhaps heard all the rhetoric about 2.4 gigahertz, or 5 gigahertz?

Unfortunately, the word frequency is tossed around in tech buzzword circles the same way car companies and their marketing arms talk about engine sizes. I have no idea what a 2.5-liter engine is; it might sound cool, and it might be better than a 2-liter engine, but in reality I don’t know how to compare the two numbers. So to answer our original question, we first need a little background on frequencies to get beyond the marketing speak.

A good example of a frequency, and one that is easy to visualize, is ripples on a pond. When you drop a rock in the water, ripples propagate out in all directions. Now imagine you stood thigh-deep in the water across the pond, and the ripples hit your leg once each second. The frequency of the ripples in the water would be 1 hertz, or one peak per second. With access points, there are similar ripples that we call radio waves. Although you can’t see them like the ripples on the water, they are essentially the same thing: little peaks and valleys of electromagnetic waves going up and down and hitting the antenna of the wireless device in your computer or iPhone. So when a marketing person tells you their AP is 2.4 gigahertz, that means those little ripples coming out of it are hitting your head, and everything else around them, 2.4 billion times each second. That is quite a few ripples per second.

Now, in order to transmit a bit of data, the AP actually stops and starts transmitting ripples. One moment it is sending out 2.4 billion ripples per second; the next moment it is not. This is where it gets a bit weird, at least for me. The 2.4 billion ripples a second have no meaning as far as data transmission by themselves; what the AP does is set up a schedule of time slots, let’s say 10 million time slots a second, during each of which it is either transmitting ripples or has the ripple generator turned off. Everybody in communication with the AP is aware of the schedule and all 10 million time slots. Think of these time slots as dates on your calendar: if you have a sunny day, call that a one, while if you have a cloudy day, call that a zero. Sunny days are a binary 1 and cloudy days a binary 0. After we string together 8 days, we have a sequence of 1’s and 0’s and a full byte. Now, 8 days is a long time to transmit a byte, which is why the AP does not use 24 hours for a time slot. But it could, if we were some laid-back hippie society where time did not matter.
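To make the calendar analogy concrete, here is a toy Python snippet that strings eight on/off time slots (sunny = 1, cloudy = 0) into a single byte. The slot pattern is made up for illustration:

```python
# Eight on/off time slots, read in order, form one byte.
slots = ["sunny", "cloudy", "sunny", "sunny", "cloudy", "cloudy", "sunny", "cloudy"]

bits = "".join("1" if day == "sunny" else "0" for day in slots)
byte_value = int(bits, 2)  # interpret the 8 slots as one binary number

print(f"bit pattern: {bits}")        # 10110010
print(f"byte value:  {byte_value}")  # 178
```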

So let’s go back over what we have learned and plug in some realistic parameters.
Let’s start with a frequency of 2.4 gigahertz. The fastest an AP can realistically turn this ripple generator off and on is about 1/4 the frequency, or about 600 million time slots/bits per second. This assumes a perfect world where all the bits get out without any interference from other things generating ripples (like your microwave). So in reality the effective rate is more on the order of 100 million bits a second.
Now let’s say there are 20 users in the room, sharing the available bits equally. They would all be able to run 5 megabits each. But again, there is overhead in switching between these users (sometimes they talk at the same time and have to constantly back off and re-synch). Realistically, with 20 users all competing for talk time, 1 to 2 megabits per user is more likely.
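Here is a quick sketch of that arithmetic, using the rough figures above. The contention-overhead factor is our assumption, chosen only to land in the 1-to-2-megabit range the text cites:

```python
frequency_hz = 2.4e9                 # 2.4 GHz carrier
ideal_bps = frequency_hz / 4         # ~1/4 the frequency -> 600 million bits/sec
effective_bps = 100e6                # after real-world interference, ~100 Mbps

users = 20
fair_share_bps = effective_bps / users   # 5 megabits each with zero overhead
overhead_factor = 0.3                    # assumed loss to collisions and re-synch
realistic_bps = fair_share_bps * overhead_factor

print(f"ideal:      {ideal_bps/1e6:.0f} Mbps")
print(f"fair share: {fair_share_bps/1e6:.1f} Mbps per user")
print(f"realistic:  {realistic_bps/1e6:.1f} Mbps per user")  # ~1.5 Mbps
```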

Other factors that can affect the number of users.
As you can imagine, AP manufacturers do all sorts of things to get better numbers. The latest APs have multiple antennas and run on two frequencies (two ripple generators) for more bits.

There are also often interference problems with multiple APs in the area, all making ripples. The ripples from one AP do not stop at a fixed boundary, and this complexity will cause data rates to slow down while the APs sort themselves out.

For related readings on Users and Access Points:

How Many Users Can a Wireless Access Point Handle?

How to Build Your Own Linux Access Points

How to use Access Points to set up an In-Home Music System

Five Things to Consider When Building a Commercial Wireless Network


By Art Reisman, CTO, APconnections,  www.netequalizer.com

with help from Sam Beskur, CTO Global Gossip North America, http://hsia.globalgossip.com/

Over the past several years we have provided our Bandwidth Controllers as a key component in many wireless networks. Along the way we have seen many successes, and some not-so-successful deployments. What follows are some key learnings from our experiences with wireless deployments.

1) Commercial Grade Access Points versus Consumer Grade

Commercial grade access points use intelligent collision avoidance in densely packed areas. Basically, this means they make sure that a user within range of multiple access points is only being serviced by one AP at a time. Without this intelligence, you get signal interference and confusion. An analogy would be if you asked a sales rep for help in a store and two sales reps started talking back to you at the same time; it would be confusing as to which one to listen to. Commercial grade access points follow a courtesy protocol, so you do not get two responses, or possibly even three, in a densely packed network.

Consumer grade access points are meant to service a single household. If there are two in close proximity to each other, they do not communicate. The end result is interference during busy times, as they will both respond at the same time to the same user without any awareness of each other. Because of this, users will have trouble staying connected. Sometimes the performance problems show up long after the installation. When pricing out a solution for a building or hotel, be sure to ask the contractor if they are bidding in commercial grade (intelligent) access points.

2) Antenna Quality

There are a limited number of frequencies (channels) open to public WiFi. If you can make sure the transmission is broadcast in a limited direction, this allows for more simultaneous conversations, and thus better quality. Higher quality access points can actually figure out the direction of the users connected to them, such that, when they broadcast, they cancel out the signal going out in directions not intended for the end-user. In tight spaces with multiple access points, signal-canceling antennas will greatly improve service for all users.

3) Installation Sophistication and Site Surveys

When installing a wireless network, there are many things a good installer must account for. For example, the attenuation between access points. In a perfect world you want your access points far enough apart that they are not getting blasted by their neighbor’s signal. It is okay to hear your neighbor in the background a little bit; you must have some overlap, otherwise you would have gaps in coverage, but you do not want APs with high-energy signals competing close together. If you were installing your network in a giant farm field with no objects between access points, you could just set them up in a grid with the prescribed distance between nodes. In the real world you have walls, trees, windows, and all sorts of objects in and around buildings. A good installer will actually go out and measure the signal loss from these objects in order to place the correct number of access points. This is not a trivial task, but without an extensive site survey the resulting network will have quality problems.
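There is real math behind those measurements. A site survey measures actual attenuation, but the textbook free-space path loss formula gives a feel for the numbers an installer works with. This snippet is our illustration, not part of any survey tool; indoor walls, glass, and people add loss on top of the free-space figure:

```python
import math

def free_space_path_loss_db(distance_m, freq_mhz):
    """Free-space path loss in dB for a given distance (meters) and
    frequency (MHz). Obstacles add further loss on top of this."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

for d in (5, 15, 30):
    print(f"{d:>2} m at 2400 MHz: {free_space_path_loss_db(d, 2400):5.1f} dB")
```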

4) Know What is Possible

Despite all the advances in wireless networks, they still have density limitations. I am not quite sure how to quantify this statement other than to say that wireless does not do well in an extremely crowded space (stadium, concert venue, etc.) with many devices all trying to get access at the same time. It is a big jump from designing coverage for a hotel with 1,000 guests spread out over the hotel grounds to a packed stadium of people sitting shoulder to shoulder. The other compounding issue with density is that it is almost impossible to simulate before building out the network and going live. I did find a reference to a company that claims to have done a successful build-out in Gillette Stadium, home of the New England Patriots. It might be worth looking into further for other large venues.

5) Old Devices

Old 802.11b devices on your network will actually cause your access points to back off to slower speeds. Most exclusively-b devices were discontinued in the mid-2000s, but they are still around. The best practice here is to simply block these devices; they are rare and not worth bringing down the speed of your overall network.

We hope these five practical tips help you to build out a solid commercial wireless network. If you have questions, feel free to contact APconnections or Global Gossip to discuss.

Related Article: Wireless Site Survey with Free Tools

Wireless is Nice, but Wired Networks are Here to Stay


By Art Reisman, CTO, www.netequalizer.com


The trend to go all-wireless in high-density housing seemed a slam dunk just a few years ago. The driving forces behind the exclusive deployment of wireless over wired access were twofold.

  • Wireless cost savings. It is much less expensive to blanket a building with a mesh network than to pay a contractor to run RJ45 cable throughout the building.
  • People expect wireless. Nobody plugs a computer into the wall anymore – or do they?

Something happened on the way to wireless Shangri-La. The physical limitations of wireless, combined with the appetite for ever-increasing video, have caused some high-density housing operators to rethink their positions.

In a recent discussion with several IT administrators representing large residential housing units, the topic turned to whether or not the wave of the future would continue to include wired Internet connections. I was surprised to learn that the consensus was that wired connections were not going away anytime soon.

To quote one attendee…

“Our parent company tried cutting costs by going all wireless in one of our new builds. The wireless access in buildings just can’t come close to achieving the speeds we can get in the wired buildings. When push comes to shove, our tenants still need to plug into the RJ45 connector in the wall socket. We have plenty of bandwidth at the core, but the wireless just can’t compete with the expectations we have attained with our wired connections.”

I found this statement on a ResNet mailing list from Brown University.

“Greetings,

     I just wanted to weigh-in on this idea. I know that a lot of folks seem to be of the impression that ‘wireless is all we need’, but I regularly have to connect physically to get reasonable latency and throughput. From a bandwidth perspective, switching to wireless-only is basically the same as replacing switches with half-duplex hubs.
     Sure, wireless is convenient, and it’s great for casual email/browsing/remote access users (including, unfortunately, the managers who tend to make these decisions). Those of us who need to move chunks of data around or who rely on low-latency responsiveness find themselves marginalized in wireless-only settings. For instance: RDP, SSH, and X11 over even moderately busy wireless connections are often barely usable, and waiting an hour for a 600MB Debian ISO seems very… 1997.”

Despite the tremendous economic pressure to build ever-faster wireless networks, the physics of transmitting signals through the air will ultimately limit the speed of wireless connections to far below what can be attained by wired connections. I always knew this, but was not sure how long it would take reality to catch up with the hype.

Why is wireless inferior to wired connections when it comes to throughput?

In the real world of wireless, the factors that limit speed include:

  1. The maximum amount of data that can be transmitted on a wireless channel is less than on a wired one. A rule of thumb for transmitting digital data over the airwaves is that you can only send bits of data at 1/2 the frequency. For example, 800 megahertz (a common wireless carrier frequency) is 800 million cycles per second, and 1/2 of that is 400 million cycles per second. This translates to a theoretical maximum data rate of 400 megabits. Realistically though, with imperfect signals (noise) and other environmental factors, 1/10 of the original frequency is more likely the upper limit. This gives us a maximum carrying capacity per channel of 80 megabits on our 800 megahertz channel. For contrast, the upper limit of a single fiber cable is around 10 gigabits, and higher speeds are attained by laying cables in parallel, bonding multiple wires together in one cable, and, on major backbones, transmitting multiple frequencies of light down the same fiber, achieving speeds of 100 gigabits on a single fiber! In fairness, wireless signals can also use multiple frequencies for multiple carrier signals, but the difference is that you cannot have them in close proximity to each other.
  2. The number of users sharing the channel is another limiting factor. Unlike a single wired connection, wireless users in densely populated areas must share a frequency; you cannot pick out a user in the crowd and dedicate the channel to a single person. This means, unlike the dedicated wire going straight from your Internet provider to your home or office, you must wait your turn to talk on the frequency when there are other users in your vicinity. So if we take our 80 megabits of effective channel bandwidth on our 800 megahertz frequency and add in 20 users, we are now down to 4 megabits per user.
  3. The efficiency of the channel. When multiple people are sharing a channel, the efficiency of how they use the channel drops. Think of traffic at a 4-way stop. There is quite a bit of wasted time while drivers try to figure out whose turn it is to go, not to mention they take a while to clear the intersection. The same goes for wireless sharing techniques: there is always overhead in context switching between users. Thus we can take our 20-user scenario down to an effective data rate of 2 megabits.
  4. Noise. There is noise and then there is NOISE. Although we accounted for average noise in our original assumptions, in reality there will always be segments of the network that experience higher noise levels than average. When NOISE spikes, there is further degradation of the network, and sometimes a user cannot communicate at all with an AP. NOISE is a maddening and unquantifiable variable. Our assumptions above were based on the degradation from average noise levels; it is not unheard of for an AP to drop its effective transmit rate by 4 or 5 times to account for noise, at which point the effective data rate for all users on that segment from our original example drops to 500 kbps, just barely enough bandwidth to watch a bad video. The sketch after this list runs these numbers end to end.
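Every figure below is the rough estimate from the list above, not a measurement:

```python
carrier_mhz = 800
theoretical_mbps = carrier_mhz / 2      # rule of thumb: bits at 1/2 the frequency
realistic_mbps = carrier_mhz / 10       # average noise -> ~1/10 the frequency

users = 20
per_user_mbps = realistic_mbps / users  # sharing the channel -> 4 megabits each
after_overhead = per_user_mbps / 2      # context-switching overhead -> 2 megabits
noisy_segment = after_overhead / 4      # a high-NOISE segment (4x backoff)

print(f"theoretical:   {theoretical_mbps:.0f} Mbps")
print(f"realistic:     {realistic_mbps:.0f} Mbps")
print(f"per user:      {per_user_mbps:.1f} Mbps")
print(f"with overhead: {after_overhead:.1f} Mbps")
print(f"noisy segment: {noisy_segment*1000:.0f} kbps")
```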

Long live wired connections!

Are You Unknowingly Sharing Bandwidth with Your Neighbors?


Editor’s Note: The following is a revised and updated version of our original article from April 2007.

In a recent article titled “The White Lies ISPs Tell about Broadband Speeds,” we discussed some of the methods ISPs use when overselling their bandwidth in order to put their best face forward for their customers. To recap a bit, oversold bandwidth is a condition that occurs when an ISP promises more bandwidth to its users than it can actually deliver; hence, during peak hours you may actually be competing with your neighbor for bandwidth. Since “overselling” is a relative term, with some ISPs pushing the limit to greater extremes than others, we thought it a good idea to do a quick follow-up and define some parameters for measuring the oversold condition.

For this purpose we use the term contention ratio. A contention ratio is simply the size of an Internet trunk divided by the number of users. We normally think of Internet trunks in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio. If sharing the bandwidth on the trunk equally and simultaneously, each user could sustain a constant feed of 100 kbps, which is exactly 1/10 of the overall bandwidth.
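In code form, the definition is a one-liner. Here is a minimal sketch, with numbers mirroring the example above:

```python
def contention_ratio(users, trunk_mbps):
    """Users per megabit of trunk, e.g. 10 users on 1 Mbps -> 10-to-1."""
    return users / trunk_mbps

def sustained_per_user_kbps(trunk_mbps, users):
    """Per-user rate if everyone pulls bandwidth equally and simultaneously."""
    return trunk_mbps * 1000 / users

print(f"{contention_ratio(10, 1):.0f}-to-1 contention")       # 10-to-1
print(f"{sustained_per_user_kbps(1, 10):.0f} kbps per user")  # 100 kbps
```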

So what is an acceptable contention ratio?

From a business standpoint, it is whatever a customer will put up with and pay for without canceling their service. This definition may seem ethically suspect, but whether in the bygone days of telephone service or the contemporary Internet bandwidth business, there are long-standing precedents for overselling. What do you think causes a circuit busy signal? Or a dropped cell phone call? It’s best to leave the moral debate to a university assignment or a Sunday sermon.

So, without pulling any punches, what exactly will a customer tolerate before pulling the plug?
Here are some basic unofficial observations:
  • Rural customers in the US and Canada: contention ratios of 10 to 1 are common (in 2007 this was 20 to 1)
  • International customers in remote areas of the world: contention ratios of 20 to 1 are common (in 2007 this was 80 to 1)
  • Internet providers in urban areas: contention ratios of 5 to 1 are to be expected (in 2007 this was 10 to 1) *

* Larger cable operators have extremely fast last-mile connections; most of their speed claims are based on the speed of the last mile and not their Internet exchange point thresholds. The numbers cited here relate to their connection to the broader Internet, not the last mile from their office (NOC) to your home. Admittedly, the lines of what constitutes the Internet can be blurred, as many cable operators cache popular content locally (Netflix movies, for example). The movie is delivered from a server at their local office direct to your home, so technically we would not consider this part of your contention ratio to the Internet.

The numbers above are a good, rough starting point, but things are not as simple as they look. There is a statistical twist as bandwidth amounts get higher.

From the customer’s perspective of speed, contention ratios can actually increase as the overall Internet trunk size gets larger. For example, if 50 people can share one megabit without mutiny, it should follow that 100 people can share two megabits without mutiny, as the ratio has not changed. It is still 50 to 1.

However, from observations of hundreds of ISPs, we can comfortably conclude that perhaps 110 people can share two megabits with the same tolerance as 50 people sharing one megabit. What this means is that the larger the ISP, the more bandwidth it has at a fixed cost per megabit, and thus the larger the contention ratios it can get away with.

Is this really true? And if so, what are its implications for your business?

This is simply an empirical observation, backed up by talking to literally thousands of ISPs over the course of four years and noticing how their oversubscription ratios increase with the size of their trunk while customer perception of speed remains about the same.

A conservative estimate is that, starting with the baseline ratio listed above, you can safely add 10 percent more subscribers above and beyond the original contention ratio for each megabit of trunk they share.
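One reading of that rule, sketched below, matches the 50-users-on-one-megabit versus 110-on-two example above; the exact interpretation of “per megabit” (we apply the 10 percent for each megabit beyond the first) is our assumption:

```python
def tolerable_subscribers(trunk_mbps, base_ratio=50):
    """Baseline subscribers (base_ratio per megabit) plus an assumed 10%
    headroom for each megabit beyond the first, matching 50 -> 110."""
    base = base_ratio * trunk_mbps
    return base * (1 + 0.10 * (trunk_mbps - 1))

for mbps in (1, 2, 5, 10):
    print(f"{mbps:>2} Mbps trunk: ~{tolerable_subscribers(mbps):.0f} subscribers")
```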

Related Articles

How to speed up access on your iPhone

How to determine the true speed of video over your Internet Connection

Case Study: A Simple Solution to Relieve Congestion on Your MPLS Network


Summary: In the last few months, we have set up several NetEqualizer systems on hub-and-spoke MPLS networks. Our solution is very cost-effective because it differs from many TOS/compression-based WAN optimization products that require multiple pieces of hardware. Normally, for WAN optimization, a device is placed at the hub and a partner device is placed at each remote location. With the NetEqualizer technology, we have been able to simply and elegantly solve contention issues with a single device at the central hub.

The problem:

A customer has a hub-and-spoke MPLS network where remote sites get their public Internet and corporate data by coming in on a spoke to a central site. Although the network at the host site has plenty of bandwidth, the spokes have a fixed allocation over the MPLS and are experiencing contention issues (e.g. slow response times to corporate sales data).

The solution:

By placing a NetEqualizer at a central location, so that all the remote spokes come in through the NetEqualizer, we are able to sense when a remote spoke has reached its contention level. We then perform prioritization on all the competing applications and user streams coming in over the congested link.

Why it works:

QoS and priority are really quite simple: it is almost always the case that some large, selfish application is dominating a shared link. The NetEqualizer is able to spot these selfish applications and scale them back using a technique called Equalizing. QoS and priority are just a matter of taking away bandwidth from somebody else. See our related article: QoS is a Matter of Sacrifice.

Okay, but how does it really work?

How does NetEqualizer solve the congested MPLS link issue?

The NetEqualizer solution, which is completely compatible with MPLS, works by taking advantage of the natural inclination of applications to back off when artificially restrained. We’ll get back to this key point in a moment.

NetEqualizer will adjust selfish application streams by adding latency, forcing them to back off and allow potentially starved data applications to establish communications – thus eliminating any disruption.

Once you have determined the peak capacity of an MPLS spoke (if you don’t know it for sure, it can be determined empirically through busy-hour observation), you then tell the centralized NetEqualizer the throughput of the spoke via its defined subnet range or VLAN identification tag. This tells the NetEqualizer to kick into gear when that upper limit on the spoke is reached.
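As a hypothetical illustration (this is not actual NetEqualizer configuration syntax, and the spoke names, subnets, and limits are invented), the per-spoke setup amounts to a small table of identifiers and measured limits:

```python
# Hypothetical per-spoke table: each spoke is identified by a subnet or
# VLAN tag, with the peak capacity you determined for it. The hub device
# starts equalizing a spoke when its measured traffic nears that limit.
SPOKES = {
    "branch-east":  {"subnet": "10.1.0.0/24", "limit_mbps": 10},
    "branch-west":  {"subnet": "10.2.0.0/24", "limit_mbps": 10},
    "branch-north": {"vlan": 120,             "limit_mbps": 20},
}
```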

Once configured, the NetEqualizer constantly (every second) measures the total aggregate bandwidth traversing every spoke on your network. If it senses the upper limit is being reached, the NetEqualizer will isolate the dominating flows and encourage them to back off.

Each connection between a user on your network and the Internet constitutes a traffic flow. Flows vary widely from short dynamic bursts, which occur, for example, when searching a small Web site, to large persistent flows, as when performing peer-to-peer file sharing or downloading a large file.

By keeping track of every flow going through each MPLS spoke, the NetEqualizer can make a determination of which ones are getting an unequal share of bandwidth and thus crowding out flows from weaker applications.

NetEqualizer determines detrimental flows from normal ones by taking the following questions into consideration:

  1. How persistent is the flow?
  2. How many active flows are there?
  3. How long has the flow been active?
  4. How much total congestion is currently on the link?
  5. How much bandwidth is the flow using relative to the link size?

Once the answers to these questions are known, NetEqualizer will adjust offending flows by adding latency, forcing them to back off and allow potentially starved applications to establish communications – thus eliminating any disruption. Selfish applications with more aggressive bandwidth needs will be throttled back during peak contention. This is done automatically by the NetEqualizer, without requiring any additional programming by administrators.
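To illustrate the idea, here is a toy sketch of an equalizing pass under assumed thresholds. This is our illustration of the concept, not NetEqualizer’s actual code; the 85 percent trigger is an assumption, and the top-5-percent cut echoes the usage pattern discussed below:

```python
CONGESTION_TRIGGER = 0.85   # assumed: start equalizing at 85% of the spoke limit

def flows_to_throttle(spoke_limit_bps, flows):
    """Toy equalizing pass, run about once a second per spoke.
    Each flow is a dict with its current rate ('bps') and how long it
    has been active ('age_s'). Returns the flows to slow down with
    added latency; everything else is left untouched."""
    total = sum(f["bps"] for f in flows)
    if total < CONGESTION_TRIGGER * spoke_limit_bps:
        return []                       # no congestion: leave everyone alone
    # Big, persistent flows relative to the link are the "selfish" ones.
    scored = sorted(flows, key=lambda f: f["bps"] * f["age_s"], reverse=True)
    return scored[: max(1, len(scored) // 20)]   # throttle roughly the top 5%
```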

The key to making this happen over an MPLS link relies on the fact that if you slow down a selfish application, it will back off. This can be done via the NetEqualizer without any changes to the topology of your MPLS network, since the throttling is done independently of the network.

Questions and Answers

How do you know congestion is caused by a heavy stream?

We have years of experience optimizing networks with this technology. It is safe to say that on any congested network, roughly five percent of users are responsible for 80 percent of Internet traffic. This seems to be a law of Internet usage.2

Can certain applications be given priority?

NetEqualizer can give priority by IP address or for video streams, and in its default mode it naturally gives priority to VoIP, addressing a common need for commercial operators.

———————————————————————————————————————————————–

2. Randy Barrett, “Putting the Squeeze on Internet Hogs: How Operators Deal with Their Greediest Users.” Multichannel News, 7 Mar. 2007. Retrieved 1 Aug. 2007, http://www.multichannel.com/article/CA6439454.html

Update: Bandwidth Consumption and the IT Professionals that are Tasked to Preserve It


“What is the Great Bandwidth Arms Race? Simply put, it is the sole reason my colleague gets up and goes to work each day. It is perhaps the single most important aspect of his job—the one issue that is always on his mind, from the moment he pulls into the campus parking lot in the morning to the moment he pulls into his driveway at home at night. In an odd way, the Great Bandwidth Arms Race is the exact opposite of the “Prime Directive” from Star Trek: rather than a mandate of noninterference, it is one of complete and intentional interference. In short, my colleague’s job is to effectively manage bandwidth consumption at our university. He is a technological gladiator, and the Great Bandwidth Arms Race is his arena, his coliseum in which he regularly battles conspicuous bandwidth consumption.”

The excerpt above is from an article written by Paul Cesarini, a professor at Bowling Green State University, back in 2007. It would be interesting to get some comments and updates from Paul at some point, but for now, I’ll provide an update from the vendor perspective.

Since 2007, we have seen a big drop in P2P traffic that formerly dominated most networks. A report from bandwidth control vendor Sandvine tends to agree with our observations.

From the Sandvine report:
“The growth of Netflix, the decline of P2P traffic, and the end of the PC era are three notable aspects of a new report by network equipment company Sandvine. Netflix accounted for 27.6% of downstream U.S. Internet traffic in the third quarter, according to Sandvine’s ‘Global Internet Phenomena Report’ for Fall 2011. YouTube accounted for 10 percent of downstream traffic, and BitTorrent, the file-sharing protocol, accounted for 9 percent.”

We also agree with Sandvine’s current finding that video is driving bandwidth consumption; however, for the network professionals entrenched in the battle over bandwidth consumption, there is another factor at play which may indicate some hope on the horizon.

There has been a precipitous drop in raw bandwidth costs over the past 10 years. Commercial bandwidth rates have fallen from $100 or more per megabit to as little as $10 per megabit. So the question now is: will the availability of lower-cost bandwidth catch up to the demand curve? In other words, will the tools and human effort put into managing bandwidth become moot? And if so, what is the time frame?

I am going to go halfway out on a limb and claim we are seeing bandwidth catch up with demand, and hence the battle for the IT professional is going to subside over the coming years.

The reason for my statement is that once we get to a price point where most consumers can truly send and receive interactive video (note this is not the same as ISPs using caching tricks), we will see some of the pressure spent on micro-managing bandwidth consumption with human labor ease up. Yes, there will be consumers who want HD video all the time, but with a few rules in your bandwidth control device you will be able to allow certain levels of bandwidth consumption through, including low-resolution video for Skype and YouTube, without crashing your network. Once we are at this point, the pressure for making trade-offs on specific kinds of consumption will ease off a bit. What this implies is that the work of balancing bandwidth needs will be relegated to dumb devices, perhaps making this one aspect of the IT professional’s job obsolete.
