Cloud Computing – Do You Have Enough Bandwidth? And a Few Other Things to Consider

The following is a list of things to consider when using a cloud-computing model.

Bandwidth: Is your link fast enough to support cloud computing?

We get asked this question all the time: What is the best-practice standard for bandwidth allocation?

Well, the answer depends on what you are computing.

– First, there is the application itself.  Is your application dynamically loading up modules every time you click on a new screen? If the application is designed correctly, it will be lightweight and come up quickly in your browser. Flash video screens certainly spruce up the experience, but I hate waiting for them. Make sure when you go to a cloud model that your application is adapted for limited bandwidth.

– Second, what type of transactions are you running? Are you running videos and large graphics or just data? Are you doing photo processing from Kodak? If so, you are not typical, and moving images up and down your link will be your constraining factor.

– Third, are you sharing general Internet access with your cloud link? In other words, is that guy on his lunch break watching a replay of royal wedding bloopers on YouTube interfering with your access?

The good news is (assuming you will be running a transactional cloud computing environment – e.g. accounting, sales database, basic email, attendance, medical records – without video clips or large data files), you most likely will not need additional Internet bandwidth. Obviously, we assume your business has reasonable Internet response times prior to transitioning to a cloud application.

Factoid: Typically, for a business in an urban area, we would expect about 10 megabits of bandwidth for every 100 employees. If you fall below this 10/100 ratio, you can still take advantage of cloud computing, but you may need some form of QoS device to prevent recreational or non-essential Internet access from interfering with your cloud applications. See our article on contention ratio for more information.
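For the spreadsheet-averse, the 10/100 rule of thumb boils down to simple arithmetic. Here is a quick sketch (the function names are mine, and the ratio is only a guideline, not a guarantee):

```python
def recommended_bandwidth_mbps(employees, mbps_per_100=10):
    """Rule-of-thumb estimate: roughly 10 Mbps per 100 employees."""
    return employees * mbps_per_100 / 100

def meets_ratio(link_mbps, employees):
    """True if the link meets or exceeds the 10/100 guideline."""
    return link_mbps >= recommended_bandwidth_mbps(employees)

# A 250-person office would want roughly 25 Mbps under this guideline.
print(recommended_bandwidth_mbps(250))  # 25.0
print(meets_ratio(20, 250))             # False: consider a QoS device
```

If `meets_ratio` comes back false, that is exactly the situation where the QoS device mentioned above earns its keep.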

Security: Can you trust your data in the cloud?

Chances are your cloud partner will have much better resources to deal with security than your enterprise, as security should be a primary function of their business. They also enjoy an economy of scale: whereas most companies view security as a cost and are always juggling those costs against profits, cloud-computing providers view security as an asset and invest more heavily.

We addressed security in detail in our article "How Secure Is the Cloud?", but here are some of the main points to consider:

1) Transit security: moving data to and from your cloud provider. How will you ensure this path is secure?
2) Storage security: how is your data handled at the provider? Is it safe from outside hackers once it gets there?
3) The inside job: this is often overlooked but can be a huge security risk. Who has access to your data within the provider's network?

Evaluating security when choosing your provider.

You would assume the cloud company, whether it be Apple or Google (Gmail, Google Calendar), follows best practices to ensure security. My fear is that ultimately some major cloud provider will fail miserably, just as banks and brokerage firms have. Over time, one or more of them will become complacent. Here is my checklist of what I would want in a trusted cloud-computing partner:

1) Do they have redundancy in their facilities and their access?
2) Do they screen their employees for criminal records and drug usage?
3) Are they willing to let you, or a truly independent auditor, into their facility?
4) How often do they back up data, and how do they test recovery?

Big Brother is watching.

This is not so much a traditional security threat, but if you are using a free service you are likely going to agree, somewhere in their fine print, to expose some of your information for marketing purposes. Ever wonder how those targeted ads appear that are relevant to the content of the mail you are reading?

Link reliability.

What happens if your link, or your provider's link, goes down? How dependent are you on the service? Make sure your business or application can handle unexpected downtime.

Editor's note: unless otherwise stated, these tips assume you are using a third-party provider for resources and applications, and are not a large enterprise with a centralized service on your own network. For example, using QuickBooks over the Internet would be considered a cloud application (and one that I use extensively in our business); however, centralizing Microsoft Excel on a corporate server with thin terminal clients would not be cloud computing.

How to Speed Up Your Internet Connection with a Bandwidth Controller


It occurred to me today that in all the years I have been posting about common ways to speed up your Internet, I have never written a plain and simple consumer explanation of how a bandwidth controller can speed up your Internet. After all, it seems intuitive that a bandwidth controller is something an ISP would use to slow down your Internet; but there can be a beneficial side to a bandwidth controller, even at the home-consumer level.

Many slow-Internet problems are due to contention on your link to the Internet. Even if you are the only user on your connection, a simple update to your virus software running in the background can dominate your Internet link. A large download will often cause everything else you try (email, browsing) to slow to a crawl.

What causes slowness on a shared link?

Everything you do on the Internet creates a connection from inside your network to the outside, and all these connections compete for the limited amount of bandwidth your ISP provides.

Your router (cable modem) connection to the Internet provides first-come, first-served service to all the applications trying to access the Internet. To make matters worse, the heavier users (the ones with the larger persistent downloads) tend to get more than their fair share of router cycles. Large downloads are like the schoolyard bully: they tend to butt in line and not play fair.

So how can a bandwidth controller make my Internet faster?

A smart bandwidth controller will analyze all your Internet connections on the fly. It will then selectively take away some bandwidth from the bullies. Once the bullies are removed, other applications will get much needed cycles out to the Internet, thus speeding them up.
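To make the idea concrete, here is a toy sketch of the "remove the bully" logic. This is not NetEqualizer's actual algorithm, just an illustration of the principle: while the link is oversubscribed, trim the heaviest flow until everything fits.

```python
def reallocate(connections, link_capacity_kbps):
    """Toy fairness pass: while the link is oversubscribed, trim the
    single largest flow (the bully) by 20% and repeat.
    connections: dict of flow name -> current rate in kbps."""
    rates = dict(connections)
    while sum(rates.values()) > link_capacity_kbps:
        bully = max(rates, key=rates.get)
        rates[bully] *= 0.8  # take bandwidth away from the heaviest flow
    return rates

# A 3000 kbps link: one large download crowds out VoIP and browsing.
flows = {"download": 2800, "voip": 90, "web": 400}
shaped = reallocate(flows, 3000)
print(shaped["voip"])  # small flows untouched: 90
```

Notice that the small, interactive flows are never touched; only the download gives up bandwidth, which is why the link "feels" faster afterward.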

What application benefits most when a bandwidth controller is deployed on a network?

The most noticeable beneficiary will be your VoIP service. VoIP calls typically don’t use that much bandwidth, but they are incredibly sensitive to a congested link. Even small quarter-second gaps in a VoIP call can make a conversation unintelligible.
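For a sense of scale, a single G.711 VoIP call works out to roughly 80 kbps each way once you add packet headers. A quick back-of-the-envelope calculation (the defaults here assume G.711 with standard IP/UDP/RTP overhead, before any link-layer framing):

```python
def voip_call_kbps(codec_kbps=64, payload_bytes=160, overhead_bytes=40):
    """Approximate on-the-wire rate of one VoIP stream.
    Defaults model G.711: 64 kbps of voice in 160-byte packets (50 pps),
    plus roughly 40 bytes of IP/UDP/RTP headers per packet."""
    packets_per_sec = codec_kbps * 1000 / 8 / payload_bytes  # 50 for G.711
    overhead_kbps = packets_per_sec * overhead_bytes * 8 / 1000
    return codec_kbps + overhead_kbps

# One G.711 call costs roughly 80 kbps each way.
print(round(voip_call_kbps()))  # 80
```

The point is that the rate is tiny compared to a download; the calls suffer not from lack of bandwidth but from waiting in line behind bulk traffic.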

Can a bandwidth controller make my YouTube videos play without interruption?

In some cases yes, but generally no. A YouTube video will require anywhere from 500 kbps to 1,000 kbps of your link and is often itself the bully on the link; however, in some instances there are bigger bullies crushing YouTube performance, and a bandwidth controller can help.

Can a home user or small business with a slow connection take advantage of a bandwidth controller?

Yes, but the choice is a time-cost-benefit decision. For about $1,600 there are some products out there that come with support that can solve this issue for you, but that price is hard to justify for the home user – even a business user sometimes.

Note: I am trying to keep this article objective and hence am not recommending anything in particular.

On a home-user network it might be easier just to police it yourself, shutting off background applications, and unplugging the kids’ computers when you really need to get something done. A bandwidth controller must sit between your modem/router and all the users on your network.

Related article: Ten Things to Consider When Choosing a Bandwidth Shaper.

Where have all the Wireless ISPs gone?

Rachel Carson wrote Silent Spring in 1962. She noticed a lack of robins in her yard and eventually traced the link back to DDT spraying. Robins are again abundant; given a fighting chance, they seem to prosper quite well.

Much like the robins of 1962, over the past three years I have noticed a die-off in business from wireless ISPs. Four years ago, I spent at least an hour or two a day talking to various WISPs around the USA. The mood was always upbeat, and many were adding subscribers at a rapid rate. Today the rural WISPs of the US are still out there, but expansion and investment have come to a standstill.

Is the private investment drought by small rural WISPs due to the recession?

Certainly some of the slowdown is due to weakness in the housing market; but as one operator told me a couple of years ago, his customers will keep their Internet connection up long after they have disconnected their television and phone. Some consumers will pay their Internet bill right up to the last day of a pending foreclosure.

Much of the slowdown is due to the rural broadband stimulus.

The Rural Broadband Initiative seems to be a solution looking for a problem. From our perspective, the main thing this initiative accomplished was subsidizing a few providers at the expense of freezing billions in private equity, capital that, up until the initiative, had been effectively expanding the rural market through entrepreneurs.

Why did the private investment stop?

It was quite simple, really: when the playing field was level, most small operators felt they had an upper hand against the larger providers in rural areas. For example:

– They worked smarter, with less overhead, using backhaul technologies.

– There was an abundance of wireless equipment makers (based on public 802.11 frequencies) ready to help.

– They had confidence that the larger operators were not interested in these low-margin niche markets.

With the broadband initiative, several things happened:

– Nobody knew where the money was going to be spent or how broad the reach would be; this uncertainty froze all private expansion.

– Many of these smaller providers applied for money, and only a few (if any) were awarded contracts. Think of it this way: suppose there were four restaurants in town, all serving slightly different clienteles, and then a giant came along and gave one restaurant a 10-million-dollar subsidy; the other three would go out of business.

Related article: By the FCC's own report, it seems the Rural Broadband Initiative has not changed access to higher speeds.

Perhaps someday the poison of select government subsidies will come to an end, and the rural WISP will prosper again.

Update, Nov 2011: It appears that the Rural Broadband Initiative not only froze the small home-grown ISP market, but also proved again that large government subsidies can be a poison pill. Related article.

By Art Reisman, CTO, APconnections

Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, wireless ISPs, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably.

The Pros And Cons of Metered Internet Bandwidth And Quotas

Editor's Note: Looks like metered bandwidth is back in the news. We first addressed this subject back in June 2008. Below you'll find our original commentary, followed by a few articles on the topic.

Here is our original commentary on the subject:

The recent announcement that Time Warner Cable Internet plans to experiment with a quota-based bandwidth system has sparked lively debates throughout cyberspace. Although the metering will only be done in a limited market for now, it stands as an indication of the direction ISPs may be heading in the future. Bell Canada is also taking a metered-bandwidth approach; in Canada, much of Bell's last mile is handled by resellers, and they are not happy about it.

Over the past several years, we have seen firsthand the pros and cons of bandwidth metering. Ultimately, invoking a quota-based system does achieve the desired effect of getting customers to back off on their usage — especially the aggressive Internet users who take up a large amount of the bandwidth on a network.

However, this outcome doesn’t always develop smoothly as downsides exist for both the ISP and the consumer. From the Internet provider perspective, a quota-based system can put an ISP at a competitive disadvantage when marketing against the competition. Consumers will obviously choose unlimited bandwidth if given a choice at the same price. As the Time Warner article states, most providers already monitor your bandwidth utilization and will secretly kick you offline when some magic level of bandwidth usage has been reached.

To date, it has not been a good idea to flaunt this policy and many ISPs do their best to keep it under the radar. In addition, enforcing and demonstrating a quota-based system to customers will add overhead costs and also create more customer calls and complaints. It will require more sophistication in billing and the ability for customers to view their accounts in real time. Some consumers will demand this, and rightly so.

Therefore, a quota-based system is not simply a quick fix in response to increased bandwidth usage. Considering these negative repercussions, you may wonder what motivates ISPs to put such a system in place. As you may have guessed, it ultimately comes down to the bottom line.

ISPs often incur charges or cost overruns on the total number of bytes transferred. Many are resellers of bandwidth themselves and may be charged by the byte; by metering and imposing quotas, they are simply passing this cost along to their customers. In this case, on face value, quotas allow a provider to adopt a model where they don't have to worry about cost overruns based on total usage. They essentially hand this problem to their subscribers.

A second common motivation is that ISPs are simply trying to keep their own peak utilization down and avoid purchasing extra bandwidth to meet the sporadic increases in demand. This is much like power companies that don’t want to incur the expense of new power plants to just meet the demands during peak usage times.

Quotas in this case do have the desired effect of lowering peak usage, but there are other ways to solve the problem without passing the burden of byte counting on to the consumer. For example, behavior-based shaping and fairness reallocation have proven to solve this issue without the downsides of quotas.

A final motivation for the provider is that a quota system will take some of the heat off of their backs from the FCC. According to other articles we have seen, ISPs have discreetly, if not secretly, been toying with bandwidth, redirecting it based on type and such. So, now, just coming clean and charging for what consumers use may be a step in the right direction – at least where policy disclosure is concerned.

For the consumer, this increased candor from ISPs is the only real advantage of a quota-based system. Rather than being misled and having providers play all sorts of bandwidth tricks, quotas at least put customers in the know. However, the complexity and hassle of monitoring one's own bandwidth usage on a monthly basis, much like cell-phone minutes, is something most consumers likely don't want to deal with.

Personally, I’m on the fence in regard to this issue. Just like believing in Santa Claus, I liked the illusion of unlimited bandwidth, but now, as quota-based systems emerge, I may be faced with reality. It will be interesting to see how the Time Warner experiment pans out.

Related Resource: Blog dedicated to stamping out usage-based billing in Canada.

Additional Recent Articles

Time Bomb Ticking on Netflix Streaming Strategy (Wall Street Journal)

How much casual driving would the average American do if gasoline cost $6 a gallon? A similar question may confront Web companies pushing bandwidth-guzzling services one day.

Several Web companies, including Google and Netflix, are promoting services like music and video streaming that encourage consumers to gobble up bandwidth. Indeed, Netflix’s new pricing plans, eliminating the combined DVD-streaming offering, may push more people into streaming. These efforts come as broadband providers are discussing, or actually implementing, pricing plans that eventually could make those services pricey to use.

Most obviously this is an issue for the mobile Web, still a small portion of consumer Internet traffic in North America. Verizon Communications' majority-owned wireless service last week introduced tiered data pricing, about a year after AT&T made a similar move. But potentially much more disruptive is consumption-based pricing for “fixed broadband,” landlines that provide Internet access for consumers in their homes, either via a cable or a home Wi-Fi network. Long offered on an effectively unlimited basis, American consumers aren’t used to thinking about the bytes they consume online at home.

To keep reading, click here.

The Party’s Over: The End of the Bandwidth Buffet (

As the consumption of video on broadband accelerates, moving to consumption billing is the only option.

Arguments over consumption billing and network neutrality flared up again this summer. The associative connector of the two issues is their technical underpinning: Consumption billing is based on the ability to measure, meter and/or monitor bits as they flow by. The problem is that those abilities are what worry some advocates of one version of network neutrality.

The summer season began with AT&T stirring things up with an announcement that it was moving toward adopting consumption billing for wireless broadband.

To keep reading, click here.

Internet Providers Want to Meter Usage: Customers Who Like To Stream Movies, TV Shows May Get Hit With Extra Fees (MSNBC)

If Internet service providers’ current experiments succeed, subscribers may end up paying for high-speed Internet based on how much material they download. Trials with such metered access, rather than the traditional monthly flat fee for unlimited connection time, offer enough bandwidth that they won’t affect many consumers — yet…

To keep reading, click here.

Related article:  Metered broadband is coming

Editor’s final note: We are also seeing renewed interest in quota-based systems. We completely revamped our NetEqualizer quota interface this spring to meet rising demand.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Behind the Scenes: Bugs and Networking Equipment

If you relied only on conspiracy theories to explain the origin of software bugs, you would likely have little trust in the vendors and manufacturers providing your technology. In general, the more skeptical theories chalk software bugs up to a few nefarious, and easily preventable, causes:

  1. Corporate greed and the failure to effectively allocate resources
  2. Poor engineering
  3. Companies deliberately withholding fixes in an effort to sell upgrades and future support

Although I’ve certainly seen evidence of these policies many times over my 25-year career, the following case studies are more typical for understanding how a bug actually gets into a software release. It’s not necessarily the conspiracy it might initially seem.

My most memorable system failure took place back in the early 1990s. I was the system engineer responsible for the underlying UNIX operating system and Redundant Disk Drives (RAID) on the Audix Voice Messaging system. This was before the days of widespread e-mail use. I worked for AT&T Bell Labs at the time, and AT&T had a reputation of both high price and high reliability. Our customers, almost all Fortune 500 companies, used their voice mail extensively to catalog and archive voice messages. Customers such as John Hancock paid a premium for redundancy on their voice message storage. If there were any field-related problems, the buck stopped in my engineering lab.

For testing purposes, I had several racks of Audix™ systems and simulators combined with various stacks of disk drives in RAID configurations. We ran these systems for hours, constantly recording voice messages. To stress the RAID storage, we would periodically pull the power on a running disk drive. We would also smash them with a hammer while running. Despite the deliberate destruction of running disk drives, in every test scenario the RAID system worked flawlessly. We never lost a voice mail message in our laboratory.

However, about six months after a major release, I got a call from our support team. John Hancock had suffered a system failure and lost every last one of their corporate voice mails. (AT&T had advised backing data up to tape, but John Hancock had decided not to use that facility because of their RAID investment. Remember, this was in the 1990s and does not reflect John Hancock's current policies.)

The root cause analysis took several weeks of work with the RAID vendor, myself and some of the key UNIX developers sequestered in a lab in Santa Clara, California. After numerous brainstorm sessions, we were able to re-create the problem. It seemed the John Hancock disk drive had suffered what’s called a parity error.

A parity error can develop if a problem occurs when reading and writing data to the drive. When the problem emerges, the drives try to recover, but in the meantime the redundant drives read and write the bad data. As the attempts at auto recovery within the disk drive go on (sometimes for several minutes), all of the redundant drives have their copies of the data damaged beyond repair. In the case of John Hancock, when the system finally locked up, the voice message indices were useless. Unfortunately, very little could have been done on the vendor or manufacturing end to prevent this.

More recently, when APconnections released a new version of our NetEqualizer, despite extensive testing over a period of months including a new simulation lab, we had to release a patch to clean up some lingering problems with VLAN tags. It turned out the problem was with a bug in the Linux kernel, a kernel that normally gets better with time.

So what happened? Why did we not find this VLAN tag bug before the release? Well, first off, the VLAN tagging facility in the kernel had been stable for years. We also had a reliable regression test for new releases that made sure it was not broken. However, our regression test only simulated the actual tag passing through the kernel. This made it much easier to test, and considering our bandwidth shaper software only affected the packets after the tag was in place, there was no logical reason to retest a stable feature of the Linux kernel. Retesting stable kernel features would not have been economically viable under these circumstances.

This logic is common during pre-market testing. Rather than test everything, vendors use a regression test for stable components of their system and only rigorously test new features. A regression test is a subset of scenarios and is the only practical way to make sure features unrelated to those being changed do not break when a new release comes out. Think of it this way: Does your mechanic do a crash test when replacing the car battery to see if the airbags still deploy? This analogy may seem silly, but as a product developer, you must be pragmatic about what you test. There are almost infinite variations on a mature product and to retest all of them is not possible.
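For readers who haven't seen one, a regression test in its simplest form is just a handful of known-good cases run against a stable code path. The sketch below is purely illustrative (the function names are hypothetical, and our real test drove a kernel, not a Python function), but it shows how a simulated check can pass while the real environment still hides a bug:

```python
def passthrough(frame: bytes) -> bytes:
    """Hypothetical shaper hook: packets are shaped, but the frame
    bytes (including any VLAN tag) must come out unmodified."""
    # ... shaping bookkeeping would happen here ...
    return frame

def test_vlan_tag_survives_shaping():
    """Regression check: a tagged frame passes through byte-for-byte.
    This simulates the tag (as our lab test did) rather than driving
    a real kernel, which is why a kernel-level bug could still slip by."""
    tagged = bytes.fromhex("ffffffffffff00112233445581000064")
    assert passthrough(tagged) == tagged

test_vlan_tag_survives_shaping()
print("regression suite passed")
```

The test is cheap and repeatable, which is its whole value; the trade-off, as the John Hancock and VLAN stories show, is that it only exercises what you thought to simulate.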

Therefore, in reality, most developers want nothing more than to release a flawless product. Yet, despite a developer’s best intentions, not every stone can be turned during pre-market testing. This, however, shouldn’t deter a developer from striving for perfection — both before a release as well as when the occasional bugs appear in the field.

VLAN tags made simple

By Art Reisman, CTO,

Art Reisman CTO

Why am I writing a post on VLAN tags?

VLAN tags and bandwidth control are often intimately related, but before I can post on the relationship, I thought it prudent to comment on VLAN tags themselves. I definitely think they are way overused, and I hope to comment on that in a future post as well.

I generally don’t like VLAN tags. The original idea behind them was to solve the problem of Ethernet broadcasts saturating a network segment. Wikipedia explains it like this…

After successful experiments with voice over Ethernet from 1981 to 1984, Dr. W. David Sincoskie joined Bellcore and turned to the problem of scaling up Ethernet networks. At 10 Mbit/s, Ethernet was faster than most alternatives of the time; however, Ethernet was a broadcast network and there was not a good way of connecting multiple Ethernets together. This limited the total bandwidth of an Ethernet network to 10 Mbit/s and the maximum distance between any two nodes to a few hundred feet.

What does that mean and why do you care?

First, let’s address how an Ethernet broadcast works; then we can discuss Dr. Sincoskie’s solution and make some sense of it.

When a bunch of computers share a single Ethernet segment of a network separated by switches, everybody can hear each other talking.

Think of two people in a room yelling back and forth to communicate. That might work if one person pauses after each yell to give the other a chance to yell back. With three people in the room, they can still yell, pause, and listen for others yelling, and that might still work. But if you had 1,000 people in the room all trying to talk to people on the other side, the pause-and-wait technique does not work very well. And that is exactly the problem with Ethernet: as it grows, everybody is trying to talk on the same wire at once. VLAN tags work by essentially creating a bunch of smaller virtual rooms, where only the noise and yelling from the people in that virtual room can be heard.

Now, when you set up a VLAN (a virtual room), you have to put up the dividers. On a network, this is done by having the switches (the things the computers plug into) be aware of which virtual room each computer is in. The VLAN tag specifies the identifier for the virtual room, and once set up, you have a bunch of virtual rooms and everybody can talk.
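For the curious, the "virtual room identifier" is concretely a 12-bit VLAN ID carried in an 802.1Q tag inserted after the source MAC address in the Ethernet header. Here is a sketch of pulling it out of a raw frame:

```python
import struct

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged.
    After the 6-byte destination and source MACs, a tagged frame carries
    the TPID 0x8100 followed by a 16-bit TCI whose low 12 bits are the VID."""
    (tpid,) = struct.unpack_from("!H", frame, 12)
    if tpid != 0x8100:
        return None
    (tci,) = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF

# Frame on VLAN 100: dst MAC, src MAC, TPID 0x8100, TCI 0x0064, EtherType
frame = bytes.fromhex("ffffffffffff" "001122334455" "8100" "0064" "0800")
print(vlan_id(frame))  # 100
```

A switch makes its "which room is this?" decision by reading exactly those two bytes of the tag.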

This raises the question:

Does everybody attached to the Internet live in a virtual room?

No. Virtual rooms (VLANs) exist so that a single organization, like a company, can put a box around its network segments and protect them with a common set of access rules (a firewall or router). The Internet works fine without VLAN tags.

So a VLAN tag is only appropriate when a group of users sits behind a common router?

Yes, that is correct. Ethernet broadcasts (the yelling in our analogy) do not cross router boundaries on the Internet.

Routers use public IP addresses to figure out where to send things. A router does not broadcast (yell); it is much more discreet: it only sends data on to another router if it knows the data is supposed to go there.

So why do we have two mechanisms: one for local computers sending Ethernet broadcasts, and another for routers using point-to-point routing?

This post was supposed to be about VLAN tags… but I’ll take it one step further to explain the difference.

Perhaps you have heard about the layers of networking: layer 2 is Ethernet, and layer 3 is IP. The monologue below is technically correct, but it does not make much sense unless you already have a good understanding of networking in the first place, so I’ll finish by breaking it down into something a little more relevant, with some in-line comments.

Basically, a layer 2 switch operates using the MAC addresses in its caching table to quickly pass information from port to port. A layer 3 switch uses IP addresses to do the same.

What this means is that an Ethernet switch looks at MAC addresses, which are used for local addressing to a computer on your network. Think back to the people shouting in the room to communicate: the MAC address would be a nickname that only their closest friends use when they shout at each other. At the head end of your network is a router; this is where you connect to the Internet, and other Internet users send data to you via your IP address, which is essentially the well-known public address of your router. The IP address could be thought of as the address of the building where everybody inside is shouting at each other. The router’s job is to get information, sent by IP address and destined for somebody inside the room, to the door. If you are a Comcast home user, you likely have a modem where your cable plugs in; the modem is the gateway to your house and is addressed by IP address from the outside world.

Essentially, a layer 2 switch is a multiport transparent bridge. A layer 2 switch will learn about the MAC addresses connected to each port and pass frames marked for those ports.

The above paragraph is referring to how an Ethernet switch sends data around: everybody in the room registers their nickname with the switch, so the switch can shout in the direction of the right person when new data comes in.
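That nickname registry is literally just a table mapping MAC addresses to ports, learned as frames arrive. A toy sketch of a learning switch:

```python
class ToySwitch:
    """Toy layer 2 learning switch: maps MAC addresses to ports."""
    def __init__(self):
        self.table = {}  # MAC -> port ("the nickname registry")

    def receive(self, port, src_mac, dst_mac):
        """Learn the sender's port; return the port to forward to,
        or 'flood' if the destination MAC hasn't been heard yet."""
        self.table[src_mac] = port  # learn where src lives
        return self.table.get(dst_mac, "flood")

sw = ToySwitch()
print(sw.receive(1, "aa:aa", "bb:bb"))  # "flood": bb:bb unknown yet
print(sw.receive(2, "bb:bb", "aa:aa"))  # 1: switch learned aa:aa on port 1
```

Flooding an unknown destination out every port is the switch's version of shouting to the whole room until it learns who sits where.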

It also knows that if a frame arriving on a port is looking for a MAC address on that same port, it should drop that frame. Whereas a single-CPU bridge runs in serial, today’s hardware-based switches run in parallel, translating to extremely fast switching.

I left this paragraph in even though it is unrelated to the question I asked, so you can ignore it. It is a commentary on how modern switches can read and send from multiple interfaces at the same time.

Layer 3 switching is, as one can imagine, a hybrid of a router and a switch. There are different types of layer 3 switching: route caching and topology-based. In route caching, the switch requires both a Route Processor (RP) and a Switch Engine (SE). The RP must examine the first packet to determine the destination. At that point, the Switch Engine makes a shortcut entry in the caching table for the rest of the packets to follow.

More random material unrelated to the question “What is the difference between layer 3 and layer 2?”

Due to advancements in processing power and drastic reductions in the cost of memory, today’s higher-end layer 3 switches implement topology-based switching, which builds a lookup table and populates it with the entire network’s topology. The database is held in hardware and referenced there to maintain high throughput. It uses the longest address match as the layer 3 destination.

This is talking about how a router translates between the local nickname addresses of people yelling in the room and the public address of data leaving the building.

Now, when and why would one use an L2 switch vs. an L3 switch vs. a router? Simply put, a router will generally sit at the gateway between a private and a public network. A router can perform NAT, whereas an L3 switch cannot (imagine a switch that had topology entries for the ENTIRE Internet!).
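As a footnote, the "longest address match" mentioned above is easy to sketch with Python's standard ipaddress module: among all routes whose prefix contains the destination, the most specific one wins.

```python
import ipaddress

def longest_prefix_match(routes, dst):
    """Pick the most specific route for dst: among all prefixes that
    contain the destination, choose the one with the longest mask.
    routes: dict of CIDR prefix string -> next-hop name."""
    addr = ipaddress.ip_address(dst)
    candidates = [ipaddress.ip_network(p) for p in routes]
    candidates = [n for n in candidates if addr in n]
    best = max(candidates, key=lambda n: n.prefixlen)
    return routes[str(best)]

# A tiny forwarding table: the /24 beats the /8 for 10.1.2.3.
table = {"0.0.0.0/0": "uplink", "10.0.0.0/8": "core", "10.1.2.0/24": "lan"}
print(longest_prefix_match(table, "10.1.2.3"))  # lan
```

Real topology-based switches do this lookup in hardware, but the decision rule is the same.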

Pros and Cons of Using Your Router as a Bandwidth Controller

So, you already have a router in your network, and rather than take on the expense of another piece of equipment, you want to double-up on functionality by implementing your bandwidth control within your router. While this is sound logic and may be your best decision, as always, there are some other factors to consider.

Here are a few things to think about:

1. Routers are optimized to move packets from one network to another with utmost efficiency. To perform this function, there is often minimal inspection of the data, meaning the router does one table look-up and sends the data on its way. However, as soon as you start doing some form of bandwidth control, your router must perform a higher level of analysis on the data, and that additional analysis can overwhelm a router's CPU without warning. Implementing non-routing features, such as protocol sniffing, can create conditions that are much more complex than the original routing mission. For simple rate limiting there should be no problem, but if you get into more complex bandwidth control, you can overwhelm the processing power your router was designed with.

2. The more complex the system, the more likely it is to lock up. For example, that old analog desktop phone probably never once crashed. It was a simple device and hence extremely reliable. On the other hand, when you load up an IP phone on your Windows PC, you will reduce reliability even though the function is the same as the old phone's. The problem is that your Windows PC is an unreliable platform: it runs out of memory, and buggy applications lock it up.

This is not news to a Windows PC owner, but the complexity of a mission will have the same effect on your once-reliable router. When you start loading up your router with additional missions, it becomes increasingly likely to grow unstable and lock up. Worse yet, you might cause a subtle network problem (intermittent slowness, etc.) that is less likely to be identified and fixed. When you combine a bandwidth controller, router, and firewall in one box, it can become nearly impossible to isolate problems.

3. Routing with TOS bits? Setting priority on your router generally only works when you control both ends of the link, which isn't always an option. However, products such as the NetEqualizer can supply priority for VoIP in both directions on your Internet link.

4. A stand-alone bandwidth controller can be moved around your network, or easily removed, without affecting routing. This is possible because a bandwidth controller is generally not a routable device but rather a transparent bridge. Rearranging your network setup may not be an option, or simply becomes much more difficult, when your router is handling other functions, including bandwidth control.
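As an aside on point 3: marking the TOS/DSCP byte at an endpoint is simple, as the sketch below shows (Python on Linux, using the standard socket API). The catch described above is that routers beyond your control are free to ignore or rewrite the marking.

```python
# Marking a UDP socket's TOS/DSCP byte on Linux. Intermediate routers
# you do not control may ignore or rewrite this value, which is the
# limitation noted in point 3 above.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
EF = 0xB8  # DSCP 46 ("Expedited Forwarding") shifted into the TOS byte
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)

# The kernel echoes the configured value back.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```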

These four points don’t necessarily mean using a router for bandwidth control isn’t the right option for you. However, as is the case when setting up any network, the right choice ultimately depends on your individual needs. Taking these points into consideration should make your final decision on routing and bandwidth control a little easier.

Five Tips to Manage Network Congestion

As the demand for Internet access continues to grow around the world, the complexity of planning, setting up, and administering your network grows. Here are five (5) tips that we have compiled, based on discussions with network administrators in the field.

#1) Be Smart About Buying Bandwidth
The local T1 provider does not always give you the lowest-priced bandwidth. There are many Tier 1 providers out there that may have fiber within line-of-sight of your business. For example, Level 3 has fiber rings already hot in many metro areas and will be happy to sell you bandwidth. Numerous companies can set up wireless infrastructure to give you a low-cost, high-speed link to such a point of presence.

#2) Manage Expectations
You know the old saying “under promise and over deliver”.  This holds true for network offerings.  When building out your network infrastructure, don’t let your network users just run wide open. As you add bandwidth, you need to think about and implement appropriate rate limits/caps for your network users.  Do not wait; the problem with waiting is that your original users will become accustomed to higher speeds and will not be happy with sharing as network use grows – unless you enforce some reasonable restrictions up front.  We also recommend that you write up an expectations document for your end users “what to expect from the network” and post it on your website for them to reference.

#3) Understand Your Risk Factors
Many network administrators believe that if they set maximum rate caps/limits for their network users, then the network is safe from locking up due to congestion. This is not the case. You also need to monitor your contention ratio closely. If your network contention ratio becomes unreasonable, your users will experience congestion: lock-ups and freezes. Don't make this mistake.

This may sound obvious, but let me spell it out. We often run into networks with 500 users sharing a 20-megabit link. The network administrator puts in place two rate caps, depending on the priority of the user: 1 meg up and down for user group A, and 5 megs up and down for user group B. The caps ensure that no user exceeds their allotted amount, which is somehow supposed to insulate the network from contention and congestion. This is all well and good, but if you do the math, 500 users on a 20-meg link will overwhelm the network at some point, and nobody will then be able to get anywhere close to their "promised amount."
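To make the arithmetic concrete, here is a back-of-envelope check in Python. The 400/100 split between the two groups is an assumed illustration (the text only specifies 500 users total); the point is that the sum of the caps dwarfs the link.

```python
# Rate caps alone do not prevent congestion when the sum of all caps
# far exceeds the link. Group split below is an illustrative assumption.
def oversubscription(link_mbps, groups):
    """groups: list of (user_count, cap_mbps). Returns sum-of-caps / link."""
    total_caps = sum(n * cap for n, cap in groups)
    return total_caps / link_mbps

# 400 users capped at 1 Mbps, 100 users capped at 5 Mbps, 20 Mbps link
ratio = oversubscription(20, [(400, 1), (100, 5)])
print(f"Sum of caps is {ratio:.0f}x the link capacity")  # 45x
```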

If you have a high contention ratio on your network, you will need something more than rate limits to prevent lockups and congestion. At some point, you will need to go with a layer-7 application shaper (such as Blue Coat Packeteer or Allot NetEnforcer), or go with behavior-based shaping (NetEqualizer). Your only other option is to keep adding bandwidth.

#4) Decide Where You Want to Spend Your Time
When you are building out your network, think about which skill sets you have in-house and which you will need to outsource. If you select network applications and appliances that minimize the time needed for set-up, maintenance, and day-to-day operations, you will reduce your ongoing costs. This is true whether you insource or outsource, as there is an "opportunity cost" to the time spent with each network toolset.

#5) Use What You Have Wisely
Optimize your existing bandwidth. Bandwidth shaping appliances can help you optimize your use of the network, and they work in different ways to achieve this. Layer-7 shapers allocate portions of your network to pre-defined application types, splitting your pipe into virtual pipes based on how you want to allocate your network traffic. Behavior-based shaping, on the other hand, does not require predefined allocations, but shapes traffic based on the nature of the traffic itself (latency-sensitive, short/bursty traffic is prioritized above bandwidth-hogging traffic). For known traffic patterns on a WAN, Layer-7 shaping can work very well. For unknown patterns, like Internet traffic, behavior-based shaping is superior, in our opinion.

On Internet links, a NetEqualizer bandwidth shaper will allow you to increase your customer base by 10 to 30 percent without having to purchase additional bandwidth. This lets you put more people onto your existing infrastructure without an expensive build-out.

To determine whether the return on investment (ROI) makes sense in your environment, use our ROI tool to calculate the payback period of adding bandwidth control to your network. You can then compare this one-time cost with your expected recurring monthly costs for additional bandwidth. Note that in many cases you will eventually need to do both: bandwidth shaping can delay or defer purchasing additional bandwidth, but as your network user base grows, you will eventually need to consider buying more.
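As a sketch of that payback comparison (the dollar figures below are made-up placeholders, not actual NetEqualizer or bandwidth pricing):

```python
# Payback period: one-time shaper cost divided by the recurring monthly
# bandwidth spend it lets you avoid. Figures are illustrative only.
def payback_months(shaper_cost, monthly_bandwidth_saved):
    return shaper_cost / monthly_bandwidth_saved

# e.g. a $10,000 appliance that defers $1,500/month of extra bandwidth
print(f"{payback_months(10_000, 1_500):.1f} months to break even")
```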

In Summary…
Obviously, these five tips are not rocket science, and some of them you may be using already.  We offer them here as a quick guide & reminder to help in your network planning.  While the sea change that we are all seeing in internet usage (more on that later…) makes network administration more challenging every day, adequate planning can help to prepare your network for the future.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here to request a full price list.

Network Capacity Planning: Is Your Network Positioned for Growth?

Authored by:  Sandy McGregor, Director of Sales & Marketing for APConnections, Inc.
Sandy has a Masters in Management Information Systems and over 17 years experience in the Applications Development Life Cycle.  In the past, she has been a Project Manager for large-scale data center projects, as well as a Director heading up architecture, development and operations teams.  In Sandy’s current role at APConnections, she is responsible for tracking industry trends.

As you may have guessed, mobile users are gobbling up network bandwidth in 2010! Based on research conducted in the first half of 2010, Allot Communications has released The Allot MobileTrends Report, H1 2010, showing dramatic growth in mobile data bandwidth usage: up 68% across Q1 and Q2.

I am sure that you are seeing the impacts of all this usage on your networks. The good news is that all this usage is good for your business as a network provider, if you are positioned to meet the needs of all this growth! Whether you sell network usage to customers (as an ISP or WISP) or "sell" it internally (colleges and corporations), growth means that the infrastructure you provide becomes more and more critical to your business.

Here are some areas that we found of particular interest in the article, and their implications on your network, from our perspective…

1) Video Streaming grew by 92% to 35% of mobile use

It should be no surprise that video streaming applications take up a 35% share of mobile bandwidth, or that they grew by 92%. At this growth rate, which we believe will continue and even accelerate, your network capacity will need to grow as well. Luckily, bandwidth prices continue to come down in all geographies.

No matter how much you partition your network using a bandwidth shaping strategy, the fact is that video streaming takes up a lot of bandwidth.  Add to that the fact that more and more users are using video, and you have a full pipe before you know it!  While you can look at ways to cache video, we believe that you have no choice but to add bandwidth to your network.

2) Users are downloading like crazy!

When your customers are not watching videos, they are downloading, either via P2P or HTTP, which combined represented 31 percent of mobile bandwidth, with an aggregate growth rate of 80 percent.  Although additional network capacity can help somewhat here, large downloads or multiple P2P users can still quickly clog your network.

You need to first determine if you want to allow P2P traffic on your network.  If you decide to support P2P usage, you may want to think how you will identify which users are doing P2P and if you will charge a premium for this service. Also, be aware that encrypted P2P traffic is on the rise, which makes it difficult to figure out what traffic is truly P2P.

Large file downloads need to be supported.  Your goal here should be to figure out how to enable downloading for your customers without slowing down other users and bringing the rest of your network to a halt.

In our opinion, P2P and downloading is an area where you should look at bandwidth shaping solutions.  These technologies use various methods to prioritize and control traffic, such as application shaping (Allot, BlueCoat, Cymphonix) or behavior-based shaping (NetEqualizer).

These tools, or various routers (such as Mikrotik), should also enable you to set rate limits on your user base, so that no one user can take up too much of your network capacity.  Ideally, rate limits should be flexible, so that you can set a fixed amount by user, group of users (subnet, VLAN), or share a fixed amount across user groups.

3) VoIP and IM are really popular too

The second fastest growing traffic types were VoIP and Instant Messaging (IM). Note that if your customers are not yet using VoIP, they will be soon. The cost model for VoIP is just so compelling for many users, and having one set of wires in an office configuration is attractive as well (who likes the tangle of wires dangling from their desk anyway?).

We believe that your network needs to be able to handle VoIP without call break-up or delay.  For a latency-sensitive application like VoIP, bandwidth shaping (aka traffic control, aka bandwidth management) is key.  Regardless of your network capacity, if your VoIP traffic is not given priority, call break up will occur.  We believe that this is another area where bandwidth shaping solutions can help you.

IM, on the other hand, can handle a little latency (depending on how fast your customers type and send messages). To a point, customers will tolerate a delay in IM, but probably 1-2 seconds at most. After that, they will blame your network, and if delays persist, will look to move to another network provider.

In summary, to position your network for growth:

1) Buy More Bandwidth – It is a never-ending cycle, but at least the cost of bandwidth is coming down!

2) Implement Rate Limits – Stop any one user from taking up your whole network.

3) Add Bandwidth Shaping – Maximize what you already have.  Think efficiency here.  To determine the payback period on an investment in the NetEqualizer, try our new ROI tool.  You can put together similar calculations for other vendors.

Note:  The Allot MobileTrends Report data was collected from Jan. 1 to June 30 from leading mobile operators worldwide with a combined user base of 190 million subscribers.

Top Five Causes For Disruption Of Internet Service

Editor's Note: We took a poll of our customer base, consisting of thousands of NetEqualizer users. What follows are the top five most common causes for disruption of Internet connectivity.

1) Congestion: Congestion is the most common cause for short Internet outages.  In general, a congestion outage is characterized by 10 seconds of uptime followed by approximately 30 seconds of chaos. During the chaotic episode, the circuit gridlocks to the point where you can’t load a Web page. Just when you think the problem has cleared, it comes back.

The cyclical nature of a congestion outage is due to the way browsers and humans retry on failed connections. During busy times usage surges and then backs off, but the relief is temporary. Congestion-related outages are especially acute at public libraries, hotels, residence halls and educational institutions. Congestion is also very common on wireless networks. (Have you ever tried to send a text message from a crowded stadium? It’s usually impossible.)

Fortunately for network administrators, this is one cause of disruption that can be managed and prevented (as you'll see below, others aren't as easy to control). So what's the solution? The best option for preventing congestion is to use some form of bandwidth control. The next best option is to increase the size of your bandwidth link. However, without some form of bandwidth control, bandwidth increases are often absorbed quickly and congestion returns. For more information on speeding up Internet services using a bandwidth controller, check out this article.

2) Failed Link to Provider: If you have a business-critical Internet link, it’s a good idea to source service from multiple providers. Between construction work, thunderstorms, wind, and power problems, anything can happen to your link at almost any time. These types of outages are much more likely than internal equipment failures.

3) Service Provider Internet Speed Fluctuates: Not all DS3 lines are the same. We have seen many occasions where customers are just not getting their contracted rate 24/7 as promised.

4) Equipment Failure: Power surges are the most common cause of fried routers and switches, so make sure everything has surge and UPS protection. After power surges, the next most common failure is lock-up from feature-overloaded equipment. With this in mind, keep the configurations on your routers and firewalls as simple as possible, or be ready to upgrade to equipment with newer, faster processing power.

Related Article: Buying Guide for Surge and UPS Protection Devices

5) Operator Error: Duplicating IP addresses, plugging wires into the wrong jack, and setting bad firewall rules are the leading operator errors reported.

If you commonly encounter issues that aren’t discussed here, feel free to fill us in in the comments section. While these were the most common causes of disruptions for our customers, plenty of other problems can exist.

The Inside Scoop on Where the Market for Bandwidth Control Is Going

Editor’s Note: The modern traffic shaper appeared in the market in the late 1990s. Since then market dynamics have changed significantly. Below we discuss these changes with industry pioneer and APconnections CTO Art Reisman.

Editor: Tell us how you got started in the bandwidth control business?

Back in 2002, after starting up a small ISP, my partners and I were looking for a tool that we could plug in to take care of resource contention without spending too much time on it. At the time, we had a T1 to share among about 100 residential users, and it was costing us $1,200 per month, so we had to do something.

Editor: So what did you come up with?

I consulted with my friends at Cisco about what they had. Quite a few of my peers from Bell Labs had migrated to Cisco on the coattails of Kevin Kennedy, who was also from Bell Labs. After consulting with them and confirming there was nothing exactly turnkey at Cisco, we built the Linux Bandwidth Arbitrator (LBA) for ourselves.

How was the Linux Bandwidth Arbitrator distributed and what was the industry response?

We put out an early version for download on a site called Freshmeat. Most of the popular stuff on that site is home-user utilities and tools for Linux. Given that the LBA was not really a consumer tool, it rose like a rocket on that site. We were getting thousands of downloads a month, and about 10 percent of those downloads turned into installs someplace.

What did you learn from the LBA project?

We eventually bundled layer 7 shaping into the LBA; at the time, that was the most-requested feature. We loosely partnered with the Layer 7 project and a group at the Computer Science Department at the University of Colorado to perfect our layer 7 patterns and filter. Some of the other engineers and I soon realized that layer 7 filtering, although cool and cutting edge, was a losing game with respect to time spent and costs. It was not impossible, but in reality it was akin to trying to conquer all software viruses and only getting half of them: the viruses that remain will multiply and take over because they are the ones running loose. At the same time we were doing layer 7, the core idea of Equalizing, the way we did fairness allocation on the LBA, was getting rave reviews.

What did you do next?

We bundled the LBA onto an install CD and put a fledgling GUI interface on it. Many of the commercial users were happy to pay for the convenience, and from there we started catering to the commercial market; now here we are with the modern version of the NetEqualizer.

How do you perceive the layer 7 market going forward?

Customers will always want layer 7 filtering. It is the first thing they think of, from the CIO on down, and it appeals almost instinctively to people. The ability to classify traffic by application and then prioritize it by type is quite appealing; it is as natural as ordering from a restaurant menu.

We are not the only ones declaring a decline in deep packet inspection. We found this opinion on another popular blog regarding bandwidth control:

The upshot is that while Deep Packet Inspection presentations include nifty graphs and seemingly exciting possibilities, it is only effective in streamlining small, very predictable networks. The basic concept is fundamentally flawed. The problem with large networks is not that bandwidth needs to be shifted from "bad" protocols to "good" protocols. The problem is volume. Volume must be managed in a way that maintains the strategic goals of the network administration. Nearly always this can be achieved with a macro approach of allocating a fair share to each entity that uses the network. Any attempt to micro-manage large networks ordinarily makes them worse, or at least simply results in shifting bottlenecks from one place to another.

So why did you get away from layer 7 support in the NetEqualizer back in 2007?

When you are trying to control an open Internet connection, it does not work very well. The costs to implement were going up and up, and the final straw was when encrypted p2p hit the cloud. Encrypted p2p cannot be specifically classified; it essentially tunnels through $50,000 investments in layer 7 shapers, rendering them impotent. Just because you can easily sell a technology does not make it right.

We are here for the long haul to educate customers. Most of our NetEqualizers stay in service as originally intended for years without licensing upgrades. Most expensive layer 7 shapers are mothballed after about 12 months, or are just scaled back to do simple reporting. Most products are driven by channel sales, and the channel does not like to work hard to educate customers about alternative technology. They (the channel) are interested in margins, just as a bank likes to collect fees to increase profit. We, on the other hand, sell on long-term value, not just on what we can turn quickly because customers like what they see at first glance.

Are you seeing a drop off in layer 7 bandwidth shapers in the marketplace?

In the early stages of the Internet, up until the early 2000s, application signatures were not that complex, and they were fairly easy to classify. Plus, the cost of bandwidth was in some cases 10 times more expensive than 2010 prices. These two factors made the layer 7 solution a cost-effective idea. But over time, as bandwidth costs dropped and speeds got faster, the hardware and processing power required in layer 7 shapers actually rose in cost. So now, in 2010, with much cheaper bandwidth, the layer 7 shaper market is less effective and more expensive. IT people still like the idea, but slowly, over time, price and performance are winning out. I don't think the idea of a layer 7 shaper will ever go away, because there are always new IT people coming into the market who go through the same learning curve. There are also many WAN-type installations that combine layer 7 with compression for an effective boost in throughput. But even the business ROI for those installations is losing some luster as bandwidth costs drop.

So, how is the NetEqualizer doing in this tight market where bandwidth costs are dropping? Are customers just opting to toss their NetEqualizer in favor of adding more bandwidth?

There are some customers that do not need shaping at all, but there are many that are moving from $50,000 solutions to our $10,000 solution as they add more bandwidth. At the lower price points, bandwidth shapers still make sense with respect to ROI. Even with lower bandwidth costs, users will almost always clog the network with new, more aggressive applications. You still need a way to gracefully stop them from consuming everything, and the NetEqualizer, at our price point, is a much more attractive solution.

What to expect from Internet Bursting

APconnections will be releasing a bursting feature (in version 4.7) on the NetEqualizer bandwidth controller this week. What follows is an explanation of the feature, along with some facts about Internet bursting that consumers will also find useful.

First, an explanation of how the NetEqualizer bursting feature works.

– The NetEqualizer currently comes with a feature that lets you set a rate limit by IP address.

– Prior to the bursting feature, the top speed allowed for each user was fixed at a set rate limit.

– Now, with bursting, a user can be allowed a burst of bandwidth for 10 seconds at two, three, four, or any other multiple of their base rate limit.

So if, for example, a user has a base rate limit of 2 megabits per second and a burst factor of 4, their connection will be allowed to burst all the way up to 8 megabits for 10 seconds, at which time it reverts back to the original 2 megabits per second. This type of burst is most noticeable when loading large, graphics-heavy Web pages: they essentially fly up in the browser at warp speed.
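The burst arithmetic above is trivial to model. The sketch below (illustrative only, not NetEqualizer code) returns the rate limit in effect at a given point in a burst:

```python
# Burst arithmetic: base rate x burst factor during the burst window,
# then back to the base rate cap. Illustrative model only.
def allowed_rate_mbps(base_mbps, burst_factor, seconds_into_burst, burst_len=10):
    """Rate limit in effect at a given point in a burst cycle."""
    if seconds_into_burst < burst_len:
        return base_mbps * burst_factor   # bursting
    return base_mbps                      # reverted to base cap

print(allowed_rate_mbps(2, 4, 3))   # 8 Mbps during the 10-second burst
print(allowed_rate_mbps(2, 4, 12))  # 2 Mbps after the burst ends
```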

To keep bursting a "special" feature, it obviously can't be on all the time. For this reason, the NetEqualizer, by default, forces a user to wait 80 seconds before they can burst again.

Will bursting show up in speed tests?

With the default settings of 10-second bursts and an 80-second wait before the next burst, it is unlikely a user will be able to see their full burst speed accurately with a speed test site.

How do you set the bursting feature for an IP address?

From the GUI


Add Rules->set hard limit

The last field in the command specifies the burst factor.  Set this field to the multiple of the default speed you wish to burst up to.

Note: Once bursting has been set-up, bursting on an IP address will start when that IP exceeds its rate limit (across all connections for that IP).  The burst applies to all connections across the IP address.

How do you turn the burst feature off for an IP address?

You must remove the Hard Limit on the IP address and then recreate the Hard Limit by IP without bursting defined.

From the Web GUI Main Menu, Click on ->Remove/Deactivate Rules

Select the appropriate Hard Limit from the drop-down box. Click on ->Remove Rule

To re-add the rule without bursting, from the Web GUI Main Menu, Click on ->Add Rules->Hard Limit by IP and leave the last field set to 1.

Can you change the global bursting defaults for the duration of a burst and the time between bursts?

Yes, from the GUI screen you can select

misc->run command

In the space provided you would run the following command

/usr/sbin/brctl setburstparams my 40  30

The first parameter is the time, in seconds, an IP must wait between bursts: once an IP has completed a burst cycle, it is forced to wait this long before it can burst again.

The second parameter is the time, in seconds, an IP is allowed to burst before being relegated back to its default rate cap.

The global burst parameters are not persistent, meaning you will need to put the command in the start-up file if you want the settings to stick between reboots.
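For illustration, here is a simplified Python model of the two global burst parameters. This is one plausible interpretation of the timing (the wait measured from the end of the burst window), not NetEqualizer source code; it uses the defaults described above (80-second wait, 10-second burst).

```python
# Simplified model of burst timing per IP: a burst window of burst_secs,
# then a mandatory wait of wait_secs before the next burst may start.
# Interpretation is an assumption, not vendor logic.
class BurstState:
    def __init__(self, wait_secs=80, burst_secs=10):
        self.wait_secs = wait_secs
        self.burst_secs = burst_secs
        self.burst_started = None   # time the current/last burst began

    def may_burst(self, now):
        if self.burst_started is None:
            return True
        if now - self.burst_started < self.burst_secs:
            return True   # still inside the burst window
        # after the window: wait out the full cycle before bursting again
        return now - self.burst_started >= self.burst_secs + self.wait_secs

    def start_burst(self, now):
        if self.may_burst(now):
            self.burst_started = now
            return True
        return False

s = BurstState()
print(s.start_burst(0))   # True: first burst begins at t=0
print(s.may_burst(5))     # True: still inside the 10-second window
print(s.may_burst(30))    # False: must wait 80 s after the window ends
print(s.may_burst(95))    # True: 10 + 80 seconds have elapsed
```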


If speed tests are not a good way to measure a burst, then what do you recommend?

The easiest way would be to extend the burst time to minutes (instead of the default 10 seconds) and then run the speed test.

With the default set at 10 seconds, the best way to see a burst in action is to take a continuous snapshot of an IP's consumption during an extended download.

Beware of the confusion that bursting might cause.

The Promise of Streaming Video: An Unfunded Mandate

By Art Reisman, CTO, APconnections

Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably. What follows is an objective educational journey on how consumers and ISPs can live in harmony with the explosion of YouTube video.

The following is written primarily for the benefit of mid-to-small sized Internet service providers (ISPs), though home consumers may also find the details interesting. Please follow along as I break down the costs required to keep up with growing video demand.

In the past few weeks, two factors have come up in conversations with our customers, which have encouraged me to investigate this subject further and outline the challenges here:

1) Many of our ISP customers are struggling to offer video at competitive levels during the day, and yet are being squeezed by high bandwidth costs. Many look to the NetEqualizer to alleviate video congestion problems. As you know, there are always trade-offs in handling any congestion issue, which I will discuss at the end of this article. But back to the subject at hand: what I am seeing from customers is an underlying fear that they (IT administrators) are behind the curve. As I have an opinion on this, I decided to lay out what is "normal" in terms of contention ratios for video, as well as what is "practical" for video in today's world.

2) My Internet service provider, a major player that heavily advertises how fast its speed to the home is, periodically slows down standard YouTube videos. I should be fair with my accusation: with the Internet, you can actually never be quite certain who is at fault. But whether I am being throttled or not, the point is that there is an ever-growing number of video content providers who are pushing ahead with plans that do not take into account, nor care about, a last-mile provider's ability to handle the increased load. A good analogy would be a travel agency booking tourists onto a cruise ship without keeping a tally of tickets sold, nor caring, for that matter. When all those tourists show up to board the ship, some form of chaos will ensue (and some will not be able to get on the ship at all).

Some ISPs are also adding to this issue by building out infrastructure without regard to content demand and hoping for the best. They are in a tight spot, caught in a challenging balancing act between customers, profit, and their ability to actually deliver video at peak times.

The Business Cost Model of an ISP trying to accommodate video demands

Almost all ISPs rely on the fact that not all customers will pull their full allotment of bandwidth all the time. Hence, they can map out an appropriate subscriber ratio for their network and advertise bandwidth rates sufficient to handle video. There are four main factors governing how fast an actual consumer circuit will be:

1) The physical speed of the medium to the customer’s front door (this is often the speed cited by the ISP)
2) The combined load of all customers sharing their local circuit and  the local circuit’s capacity (subscriber ratio factors in here)
3) How much bandwidth the ISP contracts out to the Internet (from the ISP’s provider)

4) The speed at which the source of the content can be served (YouTube's servers). We'll assume this is not a source of contention for the examples below, but it certainly should remain a suspect in any finger-pointing over a slow circuit.

The actual limit to the amount of bandwidth a customer gets at one time, which dictates whether they can run live streaming video, usually depends on how oversold their ISP is (based on the "subscriber ratio" mentioned in points 1 and 2 above). If your ISP can predict the peak loads on their entire circuit correctly, and purchase enough bulk bandwidth to meet that demand (point 3 above), then customers should be able to run live streaming video without interruption.

The problem arises when providers put together a static set of assumptions that break down as consumer appetite for video grows faster than expected.  The numbers below typify the trade-offs a mid-sized provider is playing with in order to make a profit, while still providing enough bandwidth to meet customer expectations.

1) In major metropolitan areas, as of 2010, bandwidth can be purchased in bulk for about $3,000 per 50 megabits. Some localities cost less, some more.

2) ISPs must cover an amortized fixed cost per customer: billing, sales staff, support staff, customer premise equipment, interest on investment, and licensing, which comes out to about $35 per month per customer.

3) We assume market competition fixes price at about $45 per month per customer for a residential Internet customer.

4) This leaves $10 per month for profit margin and bandwidth fees.  We assume an even split: $5 a month per customer for profit, and $5 per month per customer to cover bandwidth fees.

With 50 megabits at $3,000 and each customer contributing $5 per month, you must share the 50-megabit pipe amongst 600 customers to be viable as a business.  This is the governing factor on how much bandwidth is available to all customers for all uses, including video.
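The arithmetic behind that 600-customer figure can be laid out directly, using the 2010-era numbers from points 1–4 above:

```python
# The mid-sized ISP cost model from the article (2010-era figures).
bulk_cost_per_pipe = 3000.0     # $/month for 50 megabits purchased in bulk
fixed_cost = 35.0               # $/month/customer: billing, support, gear
price = 45.0                    # $/month/customer, set by market competition

margin = price - fixed_cost     # $10 left per customer
profit_share = 5.0              # assume an even split of the margin
bandwidth_budget = margin - profit_share  # $5/customer toward bandwidth

# Customers needed to pay for one 50 megabit pipe:
customers_per_pipe = bulk_cost_per_pipe / bandwidth_budget
print(customers_per_pipe)  # -> 600.0
```

Any change in the inputs shifts the subscriber ratio directly: halving the bulk bandwidth price, for instance, halves the number of customers who must share the pipe.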

So how many simultaneous YouTube Videos can be supported given the scenario above?

Live streaming YouTube video needs on average about 750 kbps, or about 3/4 of a megabit, in order to run without breaking up.

On a 50 megabit shared link provided by an ISP, in theory you could support about 66 simultaneous YouTube sessions, assuming nothing else is running on the network.  In the real world there will always be background traffic other than YouTube.

In reality, you will always have a minimum fixed load of Internet usage from 600 customers of approximately 10 to 20 megabits.  That 10-to-20-megabit load supports everything else: web surfing, downloads, Skype calls, etc.  So realistically you can support about 40 YouTube sessions at one time.  This implies that if 10 percent of your customers (60 customers) start to watch YouTube at the same time, you will need more bandwidth; either that, or you are going to get some complaints.  ISPs that desperately want to support video must count on no more than about 40 simultaneous videos running at one time, a little less than 10 percent of their customers.

Based on the scenario above, if 40 customers simultaneously run YouTube, the link will be exhausted and all 600 customers will be wishing they had their dial-up back.  At last check, YouTube traffic accounted for 10 percent of all Internet traffic.  Left completely unregulated, a typical rural ISP could already find itself on the brink of saturation from normal YouTube usage.  Tier-1 providers in major metro areas usually have more bandwidth, but with that come higher expectations of service, and hence some saturation is inevitable.
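Putting the stream math from the last few paragraphs in one place (the 20 megabit baseline is the high end of the assumed background load):

```python
# How many 750 kbps streams fit on the shared 50 megabit pipe?
link_mbps = 50.0
stream_mbps = 0.75      # ~750 kbps per live YouTube stream
baseline_mbps = 20.0    # assumed background load: surfing, downloads, Skype

theoretical = int(link_mbps / stream_mbps)                  # empty network
realistic = int((link_mbps - baseline_mbps) / stream_mbps)  # with baseline
print(theoretical, realistic)  # -> 66 40
```

At 600 customers per pipe, those 40 realistic streams are under 7 percent of subscribers, which is why even modest simultaneous viewing can saturate the link.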

This is why we believe that video is currently an “unfunded mandate.”  Based on a reasonable business cost model, as we have put forth above, an ISP cannot afford to size its network to have even 10% of its customers running real-time streaming video at the same time.  Obviously, as bandwidth costs decrease, this will help the economic model somewhat.

However, if you still want to tune for video on your network, consider the options below…

NetEqualizer and Trade-offs to allow video

If you are not a current NetEqualizer user, please feel free to call our engineering team for more background.  Here is my short answer on “how to allow video on your network” for current NetEqualizer users:

1) You can determine the IP address ranges for popular sites and give them priority by setting up a “priority host”.
This is not recommended for customers with 50 megabits or less, as it may push you into a gridlock situation.

2) You can raise your HOGMIN to 50,000 bytes per second.
This will generally let in the lower-resolution video sites.  However, they may still incur penalties should they start buffering at a rate higher than 50,000 bytes per second.  Again, we would not recommend this change for customers with pipes of 50 megabits or less.
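The HOGMIN idea can be illustrated generically: a flow only becomes a candidate for throttling once its rate crosses a floor, so raising the floor lets low-bitrate video through untouched. This is a simplified illustration of the concept, not NetEqualizer’s actual implementation, and the congestion threshold shown is an assumption:

```python
# Illustrative sketch of a HOGMIN-style check (not NetEqualizer's code).
HOGMIN = 50_000  # bytes/sec; raised here to let low-res video through

def is_penalty_candidate(flow_bytes_per_sec, link_utilization):
    # Only flows above the HOGMIN floor are eligible for a penalty,
    # and only when the pipe is actually congested (assumed 85% threshold).
    return flow_bytes_per_sec > HOGMIN and link_utilization > 0.85

print(is_penalty_candidate(40_000, 0.95))  # -> False (under the floor)
print(is_penalty_candidate(90_000, 0.95))  # -> True  (a hog on a busy link)
```

The trade-off is visible in the sketch: every flow exempted by a higher floor is bandwidth that equalization can no longer reclaim for interactive users.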

With either of the above changes you run the risk of crowding out web surfing and other interactive uses, as we have described above. You can only balance so much video before you run out of room.  Please remember that the default settings on the NetEqualizer are designed to slow video before the entire network comes to a halt.

For more information, you can refer to another of Art’s articles on the subject of Video and the Internet:  How much YouTube can the Internet Handle?

Other blog posts about ISPs blocking YouTube

NetEqualizer Bandwidth Shaping Solution: Hotels & Resorts

In working with some of the world’s leading hotels and resorts, we’ve repeatedly heard the same issues and challenges facing network administrators. Here are just a few:

Download Hotels White Paper

  • We need to do more with less bandwidth.
  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need to meet the expectations of our tech-savvy customers and prevent Internet congestion during times of peak usage.
  • We need a solution that can meet the demands of a constantly changing clientele. We need to offer tiered internet access for our hotel guests, and provide managed access for conference attendees.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many Hotels and Resorts around the world.

Download article (PDF) Hotels & Resorts White Paper

Read full article …

NetEqualizer Bandwidth Shaping Solution: Business Centers

In working with numerous Business Center network administrators, we have heard the same issues and challenges repeatedly. Here are just a few:

Download Business Centers White Paper

  • We need to do more with less bandwidth.
  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need to support selling fixed bandwidth to our customers, by office and/or user.
  • We need to be able to report on subscriber usage.
  • We need to increase user satisfaction and reduce network troubleshooting calls.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many Business Centers around the world.

Download article (PDF) Business Centers White Paper

Read full article …
