How Much YouTube Can the Internet Handle?

By Art Reisman, CTO, APconnections

As the Internet continues to grow and real-world speeds climb, video sites like YouTube are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), YouTube videos don't face the veil of copyright scrutiny cast over p2p, which caused most users to back off.

In our experience, there are trade-offs associated with the advances in technology that have come with YouTube. From measurements done in our NetEqualizer laboratories, a typical normal-quality YouTube video needs about 240 kbps sustained over its 10-minute run time. The newer high-definition videos run at a rate at least twice that.

Many of the rural ISPs that we at NetEqualizer support with our bandwidth shaping and control equipment have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio at which these small businesses can turn a profit. Given this contention ratio, if 40 customers simultaneously run YouTube, the link will be exhausted and all 300 customers will be wishing they had their dial-up back. At last check, YouTube traffic accounted for 10 percent of all Internet traffic. If left completely unregulated, a typical rural ISP could already find itself on the brink of saturation from normal YouTube usage. Tier-1 providers in major metro areas usually have more bandwidth, but with that comes higher expectations of service, and hence some saturation is inevitable.
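
The arithmetic behind that claim is easy to check. Here is a quick back-of-the-envelope sketch in Python using the figures quoted above (the constants are just the numbers from this post, not measured values):

```python
# Back-of-the-envelope check: how many concurrent standard-quality
# YouTube streams will saturate a rural ISP's shared uplink?

LINK_CAPACITY_KBPS = 10_000   # the 10-megabit link cited above
STREAM_RATE_KBPS = 240        # sustained rate of a normal-quality video
SUBSCRIBERS = 300             # typical contention ratio cited above

# Integer division: the link carries this many full-rate streams at once.
max_streams = LINK_CAPACITY_KBPS // STREAM_RATE_KBPS
print(f"Concurrent streams before saturation: {max_streams}")          # 41
print(f"That is only {max_streams / SUBSCRIBERS:.0%} of subscribers")  # 14%
```

So it takes only about one subscriber in seven watching video at the same time to max out the link, which is why 40 simultaneous viewers is the breaking point.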

If you believe there is a conspiracy, or that ISPs should not profit while taking risk in a market economy, you are entitled to your opinion, but we are dealing with reality. There will always be tension between users and their providers, much as there is between government funding and highway congestion.

The fact is, all ISPs have a fixed amount of bandwidth they can deliver, and when data flows exceed their current capacity, they are forced to implement some form of constraint. Without constraints, many networks would lock up completely. This is no different from a city restricting water usage when reservoirs are low. Water restrictions are well understood by the populace, and yet somehow bandwidth allocations and restrictions are perceived as evil. I believe this misconception stems simply from the fact that bandwidth is so dynamic; if there were a giant reservoir of bandwidth pooled up in the mountains where you could watch the resource slowly become depleted, the problem would be easier to visualize.

The best compromise on offer, and the only one that is not intrusive, is bandwidth rationing at peak hours when needed. Without rationing, a network will fall into gridlock, in which case not only do YouTube videos come to a halt, but so do e-mail, chat, VoIP and other less intensive applications.
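
To make the idea of rationing concrete, here is a minimal sketch of max-min fair sharing, one classic way to ration a congested link (this is an illustrative toy, not how the NetEqualizer itself is implemented): light flows get everything they ask for, and whatever is left over is split among the heavy flows.

```python
def max_min_share(capacity_kbps, demands_kbps):
    """Ration a link max-min fairly.

    demands_kbps maps a flow name to its requested rate. No flow receives
    more than it asks for; leftover capacity from light flows is recycled
    to the heavier ones.
    """
    shares = {flow: 0.0 for flow in demands_kbps}
    unmet = dict(demands_kbps)
    capacity = float(capacity_kbps)
    while unmet:
        equal = capacity / len(unmet)
        # Flows whose whole demand fits inside an equal slice are satisfied.
        satisfied = {f: d for f, d in unmet.items() if d <= equal}
        if not satisfied:
            # Everyone wants more than an equal slice: split evenly and stop.
            for flow in unmet:
                shares[flow] = equal
            break
        for flow, demand in satisfied.items():
            shares[flow] = demand
            capacity -= demand
            del unmet[flow]
    return shares

# A congested 1-megabit link: VoIP and e-mail get what they need,
# the bulk video flow is rationed down to the remainder.
shares = max_min_share(1000, {"voip": 100, "email": 50, "video": 2000})
print(shares)
```

Note how the light applications are untouched; only the flow causing the congestion feels the squeeze, which is exactly the "non-intrusive" property argued for above.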

There is some good news: there are alternative ways to watch YouTube videos.

We noticed during our testing that YouTube attempts to play back video as a real-time feed, like watching live TV. When you go directly to YouTube to watch a video, the site and your PC start the video immediately, and the quality becomes dependent on having that 240 kbps. If your provider's speed dips below this level, your video will begin to stall, which is very annoying. However, if you are willing to wait a few seconds, there are tools out there that will play back YouTube videos for you in non-real time.

Buffering Tools

These tools accomplish this by pre-buffering the video before it starts playing. We have not reviewed any of them, so do your own research; we suggest you google "YouTube buffering tools" to see what is out there. Not only do these tools smooth out YouTube playback during peak times or on slower connections, but they also help spread the load on the network.
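
The required head start is easy to work out. If the link delivers data slower than the video's bitrate, the buffer must cover the cumulative deficit over the whole run time. A small sketch (my own illustrative formula, not taken from any particular buffering tool):

```python
def prebuffer_seconds(video_bitrate_kbps, link_rate_kbps, duration_s):
    """Seconds to wait before pressing play so playback never stalls.

    If the link is at least as fast as the video's bitrate, no wait is
    needed. Otherwise the wait must cover the per-second shortfall
    accumulated over the full duration of the video.
    """
    if link_rate_kbps >= video_bitrate_kbps:
        return 0.0
    deficit_kbps = video_bitrate_kbps - link_rate_kbps
    return duration_s * deficit_kbps / link_rate_kbps

# A 10-minute, 240 kbps video on a connection that can only sustain 200 kbps:
wait = prebuffer_seconds(240, 200, 600)
print(f"Pre-buffer for {wait:.0f} seconds")  # 120
```

In other words, a two-minute head start turns an unwatchable, constantly stalling stream into a smooth one, which is the whole appeal of these tools.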

Bio: Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, libraries, mining camps and any organization where groups of users must share their Internet resources equitably.

NetEqualizer a Great ROI Purchase for Reducing T1, E1, DS3 Costs

If you are looking to cut costs in the recent economic downturn, now would be a good time to re-visit the issue of bandwidth optimization. How can it be cost-justified?

First, ask yourself if you're maxing out your Internet connection. If the answer is yes, then you should look at optimization tools before purchasing more bandwidth. However, many such tools are quite expensive, making the outlay difficult to justify. NetEqualizer, by contrast, offers a very competitive fixed-price solution with no recurring costs.

There are two basic cost-savings factors with the NetEqualizer:

1) Greatly reduced IT labor — For most businesses, the largest single line-item cost is human labor, and one of the hardest labor costs to quantify is IT. No matter how hard you try to automate things, your IT staff somehow seems to make itself essential to every issue.

On the issue of complaints that "the network is slow," if you were to sit back and conservatively tally the tech time spent fiddling with routers or your expensive layer-7 packet shaper, you'd probably notice that quite a bit of it goes to adjusting and tweaking equipment on a weekly or daily basis, only to repeat the fire drill the next time the network grinds to a halt.

Why is this?

Nine times out of ten, the core problem is too much congestion, and to compound matters, the acute source of the congestion changes. It is the transient nature of the cause that tends to drive up your labor costs. Yes, you can find and head off problems with your router or deep packet inspection device, but you have to revisit the issue each time the congestion source changes. Great for keeping techs busy, but bad for costs.

The big advantage the NetEqualizer has over layer-7 shapers, or over using a reporting tool and manually chasing issues on your router, is that the NetEqualizer proactively finds and eliminates network congestion before it blows up in your face and becomes an IT fire drill. Over and over again we hear from customers that they have deployed the NetEqualizer with our default setup, plugged it in, and left it alone.

So, if you’re looking to save money in this downturn, have your IT support do something that helps generate revenue, like forward-facing customer support, and let the NetEqualizer put out the fires before they spread.

2) Stretching your existing bandwidth to accommodate more users — Essentially, this allows you to stave off signing a new bandwidth contract indefinitely.

NetEqualizer can stretch the life of your current Internet trunk. Internet congestion is similar to the problem power companies face: they must have enough capacity on their grid to meet peak demand even though they may rarely need it. The same holds true for your Internet contract. You must purchase a contract with ample bandwidth to meet your peak loads. But, as you may realize, many of your peaks are transient, and they are also driven by quite a bit of non-business traffic. The NetEqualizer is effective because it can spread your non-essential traffic out over time, smoothing out your peaks.
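
A toy hourly model shows why deferring non-essential traffic shrinks the peak you have to pay for. This is my own simplified illustration of the principle, not a model of the NetEqualizer's actual algorithm:

```python
def smooth_peaks(capacity_mbps, hourly_demand):
    """Simulate deferring bulk traffic that does not fit under the cap.

    hourly_demand is a list of (essential_mbps, bulk_mbps) tuples, one per
    hour. Essential traffic always goes through first; bulk traffic that
    does not fit is carried forward as a backlog to quieter hours.
    Returns (highest load actually placed on the link, leftover backlog).
    """
    backlog = 0.0
    peak = 0.0
    for essential, bulk in hourly_demand:
        spare = max(capacity_mbps - essential, 0.0)   # room left for bulk
        sent_bulk = min(bulk + backlog, spare)
        backlog = bulk + backlog - sent_bulk          # defer the rest
        peak = max(peak, essential + sent_bulk)
    return peak, backlog

# A busy morning followed by quiet hours on a 10-megabit trunk.
# Raw demand peaks at 8 + 5 = 13 Mbps, above the purchased capacity.
peak, leftover = smooth_peaks(10, [(8, 5), (3, 0), (2, 0)])
print(f"Smoothed peak: {peak} Mbps, undelivered backlog: {leftover}")
```

The deferred bulk traffic drains during the quiet hours, so everything still gets delivered, yet the trunk never needs to be sized for the raw 13-megabit spike.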

For more information on the NetEqualizer, including a live demo and price list, visit

Does TCP need an overhaul?

Just stumbled upon an article by Dr. Lawrence G. Roberts, CEO, Anagran Inc.

He discusses the idea of solving Internet congestion by fixing the TCP protocol. Here is an excerpt:

There has been widespread discussion lately about the unfairness of the primary protocol we rely on with the Internet – Transmission Control Protocol (TCP) – along with many proposals on how to fix it. Since there are clearly many problems with both slow and unfair service, my question is: Should TCP be overhauled to fix today’s congestion control problem, or does the network itself need fixing?

First, the problems include:

  • Multi-flow unfairness – More flows, such as P2P, can consume too much capacity
  • Distance unfairness – Long-distance users get slower service
  • Loss unfairness – Random packet loss slows flows unevenly; Web access is slowed

He then goes on to discuss various specific congestion problems and proposes ways to solve them by mucking with the TCP protocol itself. It is a very good article!

I just wanted to point out that with the NetEqualizer we have already brought fairness back to many congested networks without retrofitting TCP. I just wish we were a little better at getting the word out!

Here is the link to the full article

Eli Riles
