Bursting Is for the Birds (Burstable Internet Speed)




By Art Reisman, CTO, http://www.netequalizer.com


We posted this article back in May 2008. It was written from the perspective of an ISP; however, many consumers are finding our site and may discover, after reading this article, that their burstable Internet service is not all it’s cracked up to be. If you are a home Internet user, and a bit of a geek, you might find this article on burstable Internet speeds thought-provoking.

The Demand Side

From many of our NetEqualizer users, we often hear, “I want to offer my customers a fixed-rate one-megabit link, but at night, or when the bandwidth is there, I want to let them have more”. In most cases, the reasons for doing this type of feature are noble and honest. The operator requesting it is simply trying to allow his or her customers access to a resource that has already been paid for. Call it a gesture of good faith. But, in the end, it can lead to further complications.

The problem with this offering is that it can be like slipping up while training your dog. You have to be consistent if you don’t want problems. For example, you can’t let the dog lick scraps off the table on Sunday and then tell him he can’t do it on Monday. Well, the same is true for your customers (we’re not insinuating they are dogs, of course). If you provide them with higher speeds when your network isn’t busy, they may be calling you when your contention ratios are at their peak during times of heavier usage. To avoid this, it is best never to let them go above their contracted amount – even when the bandwidth is available.

The Supply Side

Now that we’ve covered the possible confusion bursting may cause for your end customers, we should take a look at how bursting affects an ISP when variable-rate bandwidth is offered by an upstream provider.

Back in 2001, when the NetEqualizer was just a lone neuron in the far corner of my developing brain, a partner and I were running a fledgling local neighborhood WISP. To get started, we pulled in a half T1 from a local bandwidth provider.

The pricing is where things got complicated. While we had a half T1, if we went over that rate more than five percent of the time, the provider was going to charge us large, unpredictable amounts of cash. Sort of like using too many minutes on your cell phone.
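That “over the cap more than five percent of the time” arrangement resembles what is now commonly sold as 95th-percentile (burstable) billing: the provider samples your usage every few minutes, discards the top five percent of samples, and bills overage on what remains. The sketch below is a plausible reconstruction of that scheme, not our old provider’s actual formula; the committed rate, base fee, and overage price are all invented for illustration.

```python
# Hypothetical sketch of 95th-percentile (burstable) billing.
# All rates and prices here are illustrative, not any real contract.

def percentile_95(samples_mbps):
    """Return the 95th-percentile of usage samples (Mbps).

    The top 5% of samples are discarded, so short bursts are free;
    sustained bursting is what gets billed."""
    ordered = sorted(samples_mbps)
    index = int(len(ordered) * 0.95) - 1
    return ordered[max(index, 0)]

def monthly_bill(samples_mbps, committed_mbps=0.75, base_fee=400.0,
                 overage_per_mbps=300.0):
    """Base fee, plus overage on any 95th-percentile usage above the
    committed rate (half a T1 is ~0.77 Mbps; rounded to 0.75 here)."""
    p95 = percentile_95(samples_mbps)
    overage = max(0.0, p95 - committed_mbps)
    return base_fee + overage * overage_per_mbps

# A month of mostly in-cap traffic with a short sustained spike:
usage = [0.5] * 95 + [1.5] * 5       # only 5% of samples burst
print(monthly_bill(usage))           # → 400.0: spike falls in the free top 5%
```

The unnerving part, as described above, is that one heavy user pushing the 95th-percentile sample above the commit could turn that fixed bill into an open-ended one.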

According to our provider, this bursting feature was touted as a great benefit to us, as the extra bandwidth would be there when we needed it. On the other hand, there was also an inner fear of dipping into the extra bandwidth, as we knew things could quickly get out of our control. For example, what if some psycho customer drove my usage over the half T1 for a month and bankrupted me before we even detected it? This was just one of the nightmare scenarios that went through my head.

Just to give you a better idea of what the experience was like, think of it this way. Have you ever made an international call from a hotel because it was your only choice and then gotten nailed with a $20 fee for a two minute conversation? This experience was kind of like that. You don’t really know what to expect, but you’re pretty sure it’s not going to be good.

I’m a business owner whose gut instinct is to live within my means. This includes determining how much bandwidth my business needs by sizing it correctly and avoiding hidden costs.

Yet, for many business owners this process is made more complicated by the policies of their bandwidth providers, bursting being a major factor. Well, it’s time to fight back. If you have a provider that offers you bursting, ask them the following questions:

  • Can I have, in writing, an explanation of exactly how this bursting feature works?
  • Is a burst one second, 10 seconds, or 10 hours at a time?
  • Is it available all of the time, or just when my upstream provider(s) circuits are not busy?
  • If it is available for 10 hours, can I just negotiate a flat rate for this extra bandwidth?
  • Can you just turn it off for me?

For many customers that we’ve spoken with, bursting is creating more of a fear of overcharge than any tangible benefits. On the other hand, the bursting feature is often helping their upstream provider.

For an upstream provider who is subdividing a large Internet pipe into smaller pipes for resale, it is difficult to enforce a fixed bandwidth limit. So, rather than purchase expensive equipment to divvy up their bandwidth evenly amongst their customers, providers may instead offer bursting as a “feature”. And, while they are at it, they’ll charge you for something that you likely don’t really need.

So, think twice about who’s really benefiting from bursting and know that a few questions can go a long way in evening out the deal with your provider. Chances are bursting may be doing your company more harm than good.

In short, while bursting may seem harmless on the surface for both the ISP and the customer, over time the potential problems can significantly outweigh the benefits. Put simply, the best way to avoid this is to maintain consistency at all times and leave bursting for the birds.

More Resistance to Deep Packet Inspection


Editor’s note:

We come across stories from irate user groups every day. It seems the more the public knows about deep packet inspection practices, the less viable those practices become. In Canada, it looks like the resistance is picking up some heavy hitters.

Google, Amazon, others want CRTC to ban internet interference

Last Updated: Tuesday, February 24, 2009 | 4:53 PM ET

A coalition of more than 70 technology companies, including internet search leader Google, online retailer Amazon and voice over internet provider Skype, is calling on the CRTC to ban internet service providers from “traffic shaping,” or using technology that favours some applications over others.

In a submission filed Monday to the Canada Radio-television and Telecommunications Commission (CRTC) in advance of a July probe into the issue of internet traffic management, the Open Internet Coalition said traffic shaping network management “discourages investment in broadband networks, diminishes consumer choice, interferes with users’ freedom of expression, and inhibits innovation.”

Full Article

Four Reasons Why Peer-to-Peer File Sharing Is Declining in 2009


By Art Reisman

CTO of APconnections, makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer


I recently returned from a regional NetEqualizer tech seminar with attendees from Western Michigan University, Eastern Michigan University and a few regional ISPs. While having a live look at Eastern Michigan’s p2p footprint, I remarked that it was way down from what we had been seeing in 2007 and 2008. The consensus from everybody in the room was that p2p usage is waning. Obviously this is not a broad data set to draw a conclusion from, but we have seen the same trend at many of our customer installs (3 or 4 a week), so I don’t think it is a fluke. It is kind of ironic, with all the controversy around Net Neutrality and Bittorrent blocking, that the problem seems to be taking care of itself.

So, what are the reasons behind the decline? In our opinion, there are several reasons:

1) Legal iTunes and other MP3 downloads are the norm now. They are reasonably priced and well marketed. These downloads still take up bandwidth on the network, but do not clog access points with connections like torrents do.

2) Most music aficionados are well stocked with the classics (bootleg or not) by now and are only grabbing new tracks legally as they come out. The days of downloading an entire collection of music at once seem to be over. Fans have their foundation of digital music and are simply adding to it rather than building it up from nothing as they were several years ago.

3) The RIAA enforcement got its message out there. This, coupled with reason #1 above, pushed users to go legal.

4) Legal, free and unlimited. YouTube videos are more fun than slow music downloads and they’re free and legal. Plus, with the popularity of YouTube, more and more television networks have caught on and are putting their programs online.

Despite the decrease in p2p file sharing, ISPs are still experiencing more pressure on their networks than ever from Internet congestion. YouTube and Netflix are more than capable of filling the void left by waning Bittorrents. So, don’t expect the controversy over traffic shaping and the use of bandwidth controllers to go away just yet.

ROI calculator for Bandwidth Controllers


Is your commercial Internet link getting full? Are you evaluating whether to increase the size of your existing Internet pipe and trying to do a cost trade-off against investing in an optimization solution? If you answered yes to either of these questions, you’ll find the rest of this post useful.

To get started, we assume you are somewhat familiar with the NetEqualizer’s automated fairness and behavior-based shaping.

To learn more about NetEqualizer behavior-based shaping, we suggest our NetEqualizer FAQ.

Below are the criteria we used for our cost analysis.

1) It was based on feedback from numerous customers (across different verticals) over the previous six years.

2) In keeping with our policies, we used average, not best-case, savings scenarios.

3) Our scenario is applicable to any private business or public operator that administers a shared Internet link with 50 or more users.

4) For our example, we will assume a 10-megabit trunk at a cost of $1,500 per month.

ROI savings #1: Extending the number of users you can support.

NetEqualizer equalizing and fairness typically extend the number of users that can share a trunk by making better use of the available bandwidth at any given time. Bandwidth can be stretched by 10 to 30 percent:

Savings: $150 to $450 per month.

ROI savings #2: Reducing support calls caused by peak-period brownouts.

We conservatively assume one brownout per month caused by general network overload. With a transient brownout, you will likely spend debug time trying to find the root cause: a bad DNS server could be the problem, your upstream provider may have an issue, or it may be simple congestion. Assuming you dispatch staff to troubleshoot a congestion problem once a month, at an overhead of 1 to 3 hours, savings would be $300 per month in staff hours.

ROI savings #3: No recurring costs with your NetEqualizer.

Since the NetEqualizer uses behavior-based shaping, your license is essentially good for the life of the unit. Layer 7 protocol shapers, by contrast, must be updated at least once a year. Savings: $100 to $500 per month.

The total

The cost of a NetEqualizer unit for a 10-megabit circuit runs around $3,000, while the low estimate for savings is around $500 per month.

In our scenario, the ROI is, very conservatively, 6 months.
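The arithmetic behind these three line items collapses into a few lines of code. The figures below are this article’s illustrative numbers (not quotes for any real deployment), and the payback is rounded up to stay conservative:

```python
import math

# Back-of-the-envelope ROI for a bandwidth controller, using the
# article's illustrative low-end figures.

UNIT_COST = 3000.0          # NetEqualizer for a 10-megabit circuit
TRUNK_COST = 1500.0         # monthly cost of the 10-megabit trunk

low_end_savings = {
    "stretched bandwidth (10% of trunk cost)": TRUNK_COST * 0.10,  # ROI #1
    "fewer brownout troubleshooting hours":    300.0,              # ROI #2
    "no recurring license fees":               100.0,              # ROI #3
}

monthly = sum(low_end_savings.values())          # roughly $500/month, low end
payback_months = math.ceil(UNIT_COST / monthly)  # round up: stay conservative
print(f"${monthly:.0f}/month saved; payback in about {payback_months} months")
```

Plugging in the high-end estimates ($450 + $300 + $500) instead shrinks the payback to a few months.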

Note: Commercial Internet links supported by NetEqualizer include T1, E1, DS3, OC3, T3, fiber, 1 gigabit, and more.

Related Articles

Is Barack Obama going to turn the tide toward Net Neutrality?


Network World of Canada discusses some interesting scenarios about possible policy changes under the new administration.

In the article, the author (Howard Solomon) specifically cites Obama’s leaning…

Meanwhile, the new President favours net neutrality, the principle that Internet service providers (ISPs) shouldn’t interfere with content traveling online, which could hurt Sandvine, a builder of deep packet inspection appliances for ISPs. At least one Senator is expected to introduce limiting legislation this month.

Will this help NetEqualizer sales and our support for behavior-based Net Neutral policy shaping?

According to Eli Riles, vice president of sales at APconnections, “I don’t think it will change things much, we are already seeing steady growth, and I don’t expect a rush to purchase our equipment due to a government policy change. We sell mostly to Tier2 and Tier3 providers who have already generally stopped purchasing Layer 7 solutions mostly due to the higher cost and less so due to moral high ground or government mandate.”

Related article

Stay tuned…

Can your ISP support Video for all?


By Art Reisman, CTO, http://www.netequalizer.com


As the Internet continues to grow, with higher home-user speeds available from Tier 1 providers, video sites such as YouTube, Netflix, and others are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), these videos don’t face the veil of copyright scrutiny cast upon p2p, which caused most p2p users to back off. They are here to stay, and any ISP currently offering high-speed Internet will need to accommodate the rising demand.

How should a Tier 2 or Tier 3 provider size their overall trunk to ensure smooth video at all times for all users?

From measurements done in our NetEqualizer laboratories, a normal-quality video stream requires around 350 kbps of sustained bandwidth over its lifespan to ensure there are no breaks or interruptions. Newer high-definition videos may run at even higher speeds.


A typical rural WISP will have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio at which a small business can turn a profit. Given this contention ratio, if 30 customers simultaneously watch YouTube, the link will be exhausted and all 300 customers will experience protracted periods of poor service.

Even though it is theoretically possible to support 30 video streams on a 10-megabit link, that would only be the case if the remaining 270 subscribers were idle. In reality, the trunk will become saturated at perhaps 10 to 15 active video streams, as obviously the remaining subscribers are not idle. Given this realistic scenario, is it reasonable for an ISP with 10 megabits and 300 subscribers to tout that they support video?

As of late 2007, about 10 percent of Internet traffic was attributed to video. It is safe to assume that number is higher now (Jan 2009). Using the 2007 number, 10 percent of 300 subscribers would yield on average 30 video streams, but that is not a fair number, because the 10 percent of people using video applies only to subscribers who are actively online, not to all 300. To be fair, we’ll assume 150 of 300 subscribers are online during peak times. The calculation now yields an estimated 15 users watching video at one time, which is right at our upper limit of smooth service for a 10-megabit link; any more and something has to give.
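The back-of-the-envelope math above can be captured in two small functions. The 350 kbps stream rate, 50 percent peak-online share, and 10 percent video share are this article’s estimates, not universal constants:

```python
# Capacity arithmetic for video over a shared trunk, using the
# article's estimates (350 kbps per stream, 50% of subscribers online
# at peak, 10% of those watching video).

def max_video_streams(link_mbps, stream_kbps=350):
    """How many sustained streams fit on the link if nothing else runs."""
    return int(link_mbps * 1000 // stream_kbps)

def expected_video_streams(subscribers, online_share=0.5, video_share=0.10):
    """Expected simultaneous viewers: subscribers online at peak,
    times the share of online users watching video."""
    return int(subscribers * online_share * video_share)

print(max_video_streams(10))        # → 28 streams on an otherwise idle link
print(expected_video_streams(300))  # → 15 expected viewers for 300 subscribers
```

The expected 15 streams sit right at the realistic ceiling for a 10-megabit link once non-video traffic is accounted for, which is exactly the squeeze described above.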

The moral of this story so far is that you should be cautious before promoting unlimited video support at contention ratios of 30 subscribers to 1 megabit. The good news is that most rural providers are not competing in metro areas, so customers will have to make do with what they have. In areas with more intense competition for customers, where video support might make a difference, our recommendation is a ratio closer to 20 subscribers to 1 megabit, and even then you may still see peak outages.

One trick you can use to support video with limited Internet resources

We have previously been on record as not being supporters of caching to increase Internet speed; well, it is time to backtrack on that. We are now seeing results showing that caching can be a big boost in speeding up popular YouTube videos. Caching and video tend to work well together, as consumers tend to flock to a small subset of the popular videos. The downside is that your local caching server will only be able to archive a subset of the content on the master YouTube servers, but this should be enough to give the appearance of pretty good video.

In the end, there is no substitute for a big fat pipe with enough room to run video; we’ll just have to wait and see if the market can support this expense.

Network Access Control Module Screenshots

Network Access Control lease plan now available from APconnections


APconnections to Offer Managed Network Access Control with no upfront costs.

LAFAYETTE, Colo., January 6, 2009 — APconnections, a leading supplier of plug-and-play bandwidth shaping products and the creator of the NetEqualizer, today announced it would begin offering a managed network access control service with no upfront costs.

The services will be targeted toward networks that typically see a
high degree of turnover among users, such as airports, hotels, and
Internet cafes. For qualifying customers, APconnections will remotely
manage access to Internet connections, leaving clients free from the
worry of regulating and distributing short-term Internet service.

The suggested initial management package will offer users the option
of utilizing a complimentary 128 kbps connection or upgrading to a
high-speed 1-megabit connection for a fee. Upon accessing the network,
users will be directed to a billing page, which will offer the two
levels of service. The content of this page will largely be determined
by the client, including the option to display advertisements from
local vendors, providing the opportunity to further increase revenues.

In addition to clients no longer having to worry about regulating
Internet access, APconnections will also be responsible for all
billing and technical support. On a monthly basis, clients will be
provided with a statement showing income and network usage.

The only cost to clients will be a pre-determined percentage of the
income from customers’ high-speed upgrades. While this service can be
provided for customers with an existing ISP, Internet service can also
be established or expanded through APconnections directly for an
additional fee.

To qualify, clients must average a set number of monthly users. A
one-month trial of the service will be offered at no charge, at the
conclusion of which a service contract must be signed.

For more information, please contact APconnections at 1-888-287-2492
or via e-mail at admin@APconnections.net.

APconnections is a privately held company founded in 2003 and is based
in Lafayette, Colorado.

Art Reisman
www.apconnections.net
www.netequalizer.com
303-997-1300 extension 103
720-560-3568 cell

How the Music Industry Caused the Current Bittorrent Explosion


By: Art Reisman


Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Originally published April 4, 2008

Update Dec 18, 2008: The RIAA announced a new tactic over the weekend. The ironic twist is that, by our accounts, the old tactic of vigorous enforcement was working. We were seeing (on the hundreds of networks we support) far fewer bittorrents running compared to two years ago. I’d estimate the drop to be about 80 percent. I am not sure if our observations were indicative of the industry trend, but by our accounts, pirated material must have been on the decline. We’ll be putting together a more detailed article shortly.

Flash back to the year 2000: Napster hits the scene and becomes the site of choice for anybody trying to download online music.

It is important to understand that the original Napster had a centralized infrastructure. All file transfers happened via the coordination of a central server. Had the music industry embraced this model, they would likely have had a smooth transition from their brick-and-mortar channel to soft distribution. If only they had been a bit more farsighted about the consequences of their actions.

Instead of embracing Napster, the music industry, along with the RIAA (the industry henchman for copyright enforcement), worked to shut Napster down, much the same way they had successfully gone after commercial establishments that play unlicensed music.

There were some smaller-label artists that did embrace Napster, obviously looking for untapped market share, but for the most part the industry reacted like an obsolete dinosaur fighting progress out of fear of losing revenue.

I was personally experimenting with downloading music at this time. If Bill Clinton and Obama can admit to illegal drug use, I should be able to confess to one or two illegal downloads without retribution (note: I have since licensed all my music in my library). It wasn’t the free music that attracted me to Napster in 2000, but rather the convenience of getting the tracks I wanted when I wanted them.

Well, the RIAA succeeded in getting an injunction against Napster and shutting them down in February 2001.

This would turn out to be a costly mistake.

It was no coincidence that shortly after the fall of Napster a whole herd of new file-sharing techniques showed up. BearShare, Kazaa, Gnutella, Limewire, and Bittorrent all became popular seemingly overnight, and once again copyrighted material was being spread all over the world. Only this time it was not coming from a centralized server, but from millions of servers. Now, instead of having one source where music distribution could be tracked, the music industry had a wasp nest of swarming downloads.

Although today there are many paying customers of legal downloads, black-market peer-to-peer file sharing still runs rampant, and this time it is not possible to squash the distribution model. Bittorrents are themselves not the cause of illegal file sharing, any more than automobiles cause drunk driving. The industry cannot possibly shut down a freely distributed file-sharing model without shutting down the Internet itself, and obviously the distribution channel is not guilty of piracy; the people that use it are. Instead, the RIAA has adopted a policy of making examples by tracking down and pursuing individual copyright distributors, a daunting and possibly futile task.

For example, it is extremely difficult to get a subpoena to far off corners of the world where governments are concerned with more important matters.

I’ll comment on how the RIAA enforces illegal distribution and the downside of their model in my next posting.

The True Cost of Bandwidth Monitoring


By Art Reisman


For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at monitoring your Internet link has two cost components: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. Ironically, we assert that total costs rise with the sophistication of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking about, looking for optimal performance.

But, is it fair to assume higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. In reality, however, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the advent of cheaper computing in the late 1980s. The function of the computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problem of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just looking at the usage (a simple report).
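As a rough illustration of that simple report, the snippet below (with made-up usage numbers) flags anyone sitting far above the mean; no per-protocol detail is needed to see who the outliers are:

```python
# A sketch of the "simple report" approach: per-user weekly totals and a
# flag for anyone well above the mean. The usage data is invented.

from statistics import mean, stdev

usage_mb = {  # weekly transfer per user, in megabytes (illustrative)
    "u01": 310, "u02": 290, "u03": 350, "u04": 275, "u05": 330,
    "u06": 295, "u07": 305, "u08": 340, "u09": 285, "u10": 4200,
}

avg = mean(usage_mb.values())
sd = stdev(usage_mb.values())

# Anyone more than two standard deviations above the mean stands out
# without any deep packet inspection or per-protocol reporting:
heavy = [user for user, mb in usage_mb.items() if mb > avg + 2 * sd]
print(heavy)   # the one user far off the bell curve
```

In a distribution like this, the abuser is obvious at a glance, which is the whole argument for the cheaper tool.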

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test with actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

Planetmy
Linux Tips
How to set up a monitor for free

NetEqualizer Network Access Control Module Helps Generate Revenue


Background: The NetEqualizer network access control module (NAC), which was released this past September, allows users to re-direct “unknown” or “unauthorized” traffic to a web server hosted on the NetEqualizer.  Once redirected, you can have the NetEqualizer perform a variety of actions, including:

1) Authenticating a user via login
2) Allowing the unknown user to create a paid account (using a credit card, for example)
3) Allowing the user to pass through to the Internet without logging in

Did you know that the NetEqualizer network access control module offers several options to generate revenue? One of the dilemmas many of our customers have mentioned is that in order to be competitive they don’t want to charge for their Internet service (hotels, etc.). Well, the cool thing about the NAC module is that you can offer multiple logins with different rate limits. For example, one could be your standard free service and another could be a paid service with higher bandwidth rates.

An additional revenue generating feature of the NAC module is the ability to run advertisements on the login screens. For example, if you’re a hotel operator, even if you’re not charging for Internet service, you could have your guests login on a screen with the logo and name of a local merchant, or anybody that is interested in cross marketing with your hotel.

The NAC module also has customizable splash screens on its default login page that you can edit, thus welcoming your users with whatever content you choose.

For more information about the NetEqualizer network access control module, visit our Web page at www.netequalizer.com or contact us at 1-888-287-2492 or via email at sales@netequalizer.com.

Will the New UDP-based Bittorrent Thwart Traffic Shaping?


A customer asked us today how the newer Bittorrent methods using UDP will affect our ability to keep traffic in check. Here is our first take on this subject (See the related article “Bittorrent declares war on VoIP, gamers”).

The change from TCP to UDP transfer will have some effect on our methods to throttle bandwidth; however, at the IP level there is no difference between the two, and we have never based our shaping techniques on whether packets were UDP or TCP. The ISP mentioned in the article above likely uses TCP window-size manipulation to slow downloads. You can’t do that with UDP, and I think that is what the author was alluding to.

The only difference for the NetEqualizer will be that UDP streams are harder to knock down, so it may require a tuning change if it is really an issue. By this, I mean we may have to hit them harder with more latency than our standard defaults when throttling packets.
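For readers curious what “hitting a flow with more latency” might look like mechanically, here is a generic delay-queue sketch. This illustrates the general technique only, not NetEqualizer’s actual implementation, and the delay values are arbitrary:

```python
# Generic illustration of latency-based throttling: packets are held in
# a delay queue before release, and UDP flows get a larger added delay
# since they ignore TCP's congestion-backoff signals. Delay values and
# class design are invented for this sketch.

import heapq

class LatencyThrottle:
    def __init__(self, tcp_delay_ms=50, udp_delay_ms=200):
        self.delays = {"tcp": tcp_delay_ms, "udp": udp_delay_ms}
        self.queue = []  # heap of (release_time_ms, packet)

    def enqueue(self, packet, proto, now_ms):
        """Hold the packet for the per-protocol added delay."""
        heapq.heappush(self.queue, (now_ms + self.delays[proto], packet))

    def release(self, now_ms):
        """Return every packet whose hold time has expired."""
        out = []
        while self.queue and self.queue[0][0] <= now_ms:
            out.append(heapq.heappop(self.queue)[1])
        return out

throttle = LatencyThrottle()
throttle.enqueue("tcp-pkt", "tcp", now_ms=0)
throttle.enqueue("udp-pkt", "udp", now_ms=0)
print(throttle.release(now_ms=100))  # TCP packet released; UDP still held
```

Raising the UDP delay is the tuning knob mentioned above: the harder a stream ignores latency, the more of it you add.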

On a side note, we are seeing some interesting trends with regard to Bittorrent.

When looking at our customer networks, we are just not seeing the same levels of Bittorrent that we have seen in the past (circa 2006).

We believe the drop is due to a couple of factors:

1)  The RIAA’s enforcement — The high school and university crowd has been sufficiently spanked with copyright prosecutions. Most people now think twice about downloading copyrighted material.

2) Legal alternatives — The popularity of online music purchase sites has replaced some of the illegal transfers. (These also take up bandwidth, but they are not distributed via Bittorrent.)

The recent trends do not mean that Bittorrent is going away, but rather that viable alternatives are emerging. However, while legal distribution of content is here to stay and will likely grow over time, we do not expect an explosion that will completely replace Bittorrent.

Five Questions You Should Ask about Internet Speed and Bursting



By Art Reisman, CTO, APconnections

Editor’s Note: With consumers up in arms about net neutrality, they should also be asking their ISPs for some truth in advertising when it comes to their Internet speed and the specifics concerning how and when bursting occurs.

With all the talk of net neutrality and deep packet inspection, we thought it was time to revisit the illusion created by providers offering “burstable” Internet speeds.

What is a burstable Internet speed? Well, it’s a common trick used by providers that lets you temporarily enjoy their highest speed, but then, after a certain time period or once a bandwidth quota is reached, you automatically get knocked down to a slower speed.

Generally, your provider leaves the specifics of when this bursting takes place out of their standard literature. Instead, they will likely cite a best-case number when marketing their service. When bursting is mentioned at all, it is likely in the fine print.
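While providers rarely document the mechanism, one common way a “full speed until your credit runs out” policy could be implemented is a token bucket. The sketch below is a guess at the general technique with invented numbers, not a description of any specific ISP’s equipment:

```python
# Illustrative token-bucket model of a burstable rate plan. The plan
# numbers and the mechanism itself are assumptions for this sketch,
# not any specific provider's implementation.

class TokenBucket:
    def __init__(self, sustained_mbps, burst_megabits):
        self.rate = sustained_mbps      # refill rate = the sustained floor
        self.capacity = burst_megabits  # total burst credit you can bank
        self.tokens = burst_megabits    # start with a full bucket

    def tick(self, demand_mbits, seconds=1.0):
        """Try to send `demand_mbits` this interval; return what got through."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)
        sent = min(demand_mbits, self.tokens)
        self.tokens -= sent
        return sent

# A 1 Mbps sustained plan with 5 megabits of burst credit: a user
# pulling 3 Mbps gets full speed briefly, then drops to the floor.
bucket = TokenBucket(sustained_mbps=1.0, burst_megabits=5.0)
throughput = [bucket.tick(3.0) for _ in range(5)]
print(throughput)   # fast at first, then knocked down to the sustained rate
```

Notice how the answers to the questions below map onto the two hidden parameters here: the bucket size decides how long you can burst, and the refill rate is the speed you actually live with.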

But, this doesn’t mean that there aren’t ways to hold your ISP accountable. Below are some questions that you should ask your Internet service provider to find out exactly what you are paying for.

  1. Is the speed advertised in their marketing literature available all the time, or is that a best-case speed (or burst) that you may or may not achieve on a regular basis?
  2. Do you get charged, penalized, or black-listed for using this higher speed?
  3. How long can you burst for? For example, is a burst one second, 10 seconds, or 10 hours at a time?
  4. Can you get, in writing, an explanation of exactly how this bursting feature works?
  5. Can you trade in the bursting feature for a guaranteed sustained top speed that is always on and not considered bursting?

While we can’t promise that these questions will always elicit an upfront, honest and informed response, they’re a step in the right direction. For a more in-depth article on the subject and the business behind “bursting,” you should also check out Bursting Is for the Birds.

Canadians Mull over Privacy and Deep Packet Inspection


Editor’s note: It seems the Canadians are also finally being forced to face the issue of deep packet inspection. I guess the cat is out of the bag in Canada? One troubling note in the article below is the author’s insinuation that the only way to control Internet bandwidth is through DPI.

Privacy Commissioner of Canada - blog.privcom.gc.ca

CRTC begins dialogue on traffic shaping

Posted on November 21st, 2008 by Daphne Guerrero

Yesterday, the CRTC rendered its decision on ISPs’ traffic shaping practices. It announced that it was denying the Canadian Association of Internet Providers’ (CAIP) request that Bell Canada, which provides wholesale ADSL services to smaller ISPs across the country, cease the traffic-shaping practices it has adopted for its wholesale customers.

“Based on the evidence before us, we found that the measures employed by Bell Canada to manage its network were not discriminatory. Bell Canada applied the same traffic-shaping practices to wholesale customers as it did to its own retail customers,” said Konrad von Finckenstein, Q.C., Chairman of the CRTC.

Moreover, the CRTC recognized that traffic-shaping “raises a number of questions” for both end-users and ISPs and has decided to hold a public hearing next July to consider them.

Read the full article

How Much YouTube Can the Internet Handle?


By Art Reisman, CTO, http://www.netequalizer.com 


As the Internet continues to grow and real-world speeds increase, video sites like YouTube are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), YouTube videos do not face the veil of copyright scrutiny cast upon p2p, which caused most users to back off.

In our experience, there are trade-offs associated with the advancements in technology that have come with YouTube. From measurements done in our NetEqualizer laboratories, a typical normal-quality YouTube video needs about 240 kbps sustained over its 10-minute run time. The newer high-definition videos run at a rate at least twice that.

Many of the rural ISPs that we at NetEqualizer support with our bandwidth shaping and control equipment have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio point where these small businesses can turn a profit. Given this contention ratio, if roughly 40 customers simultaneously run YouTube, the link will be exhausted and all 300 customers will be wishing they had their dial-up back. At last check, YouTube traffic accounted for 10 percent of all Internet traffic. If left completely unregulated, a typical rural ISP could already find itself on the brink of saturation from normal YouTube usage. With tier-1 providers in major metro areas there is usually more bandwidth, but with that comes higher expectations of service, and hence some saturation is inevitable.
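The arithmetic behind that claim is easy to check. Here is a back-of-the-envelope sketch using the figures quoted above:

```python
# Checking the figures above: a 10-megabit link shared at a contention
# ratio of 300 users, with each normal-quality YouTube stream needing
# roughly 240 kbps sustained.

LINK_KBPS = 10_000     # 10-megabit link
SUBSCRIBERS = 300      # contention ratio cited above
STREAM_KBPS = 240      # per-stream sustained rate we measured

viewers_to_saturate = LINK_KBPS // STREAM_KBPS
share_of_user_base = viewers_to_saturate / SUBSCRIBERS

# About 41 simultaneous viewers, under 14 percent of the subscriber
# base, is enough to exhaust the entire link.
print(viewers_to_saturate, round(share_of_user_base * 100))
```

In other words, fewer than one in seven subscribers watching video at once saturates the link for all 300.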

If you believe there is a conspiracy, or that ISPs are not supposed to profit as they take risks and operate in a market economy, you are entitled to your opinion, but we are dealing with reality. There will always be tension between users and their providers, much the same as there is between government funds and highway congestion.

The fact is that all ISPs have a fixed amount of bandwidth they can deliver, and when data flows exceed their current capacity, they are forced to implement some form of passive constraint. Without such constraints, many networks would lock up completely. This is no different than a city restricting water usage when reservoirs are low. Water restrictions are well understood by the populace, and yet somehow bandwidth allocations and restrictions are perceived as evil. I believe this misconception is simply due to the fact that bandwidth is so dynamic; if there were a giant reservoir of bandwidth pooled up in the mountains where you could watch the resource slowly become depleted, the problem could be more easily visualized.

The best compromise offered, and the only compromise that is not intrusive, is bandwidth rationing at peak hours when needed. Without rationing, a network will fall into gridlock, in which case not only do the YouTube videos come to a halt, but so do e-mail, chat, VoIP, and other less intensive applications.
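One simple form of peak-hour rationing can be sketched as an equal-share cap. This is our own illustration, not NetEqualizer's actual algorithm, and the congestion threshold is an assumed figure:

```python
from typing import Optional

# Hedged sketch of peak-hour rationing: when the link nears saturation,
# cap each active user at an equal share so that light traffic such as
# e-mail, chat, and VoIP keeps flowing. Not any vendor's real algorithm.

CONGESTION_THRESHOLD = 0.85  # assumed trigger point; real figures vary

def per_user_cap_kbps(link_kbps: float, active_users: int,
                      utilization: float) -> Optional[float]:
    """Return an equal-share per-user cap during congestion, else None."""
    if utilization < CONGESTION_THRESHOLD or active_users == 0:
        return None  # headroom remains: no rationing needed
    return link_kbps / active_users

# 40 active users on a saturated 10-megabit link get 250 kbps each,
# just enough for one normal-quality YouTube stream apiece.
print(per_user_cap_kbps(10_000, 40, 0.95))
```

The point of the sketch is that rationing only engages during congestion; off-peak, users see their full speed.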

There is some good news: there are alternative ways to watch YouTube videos.

We noticed during our testing that YouTube attempts to play back video as a real-time feed, like watching live TV. When you go directly to YouTube to watch a video, the site and your PC immediately start the video, and the quality becomes dependent on having that 240 kbps. If your provider’s speed dips below this level, your video will begin to stall, which is very annoying. However, if you are willing to wait a few seconds, there are tools out there that will play back YouTube videos for you in non-real-time.

Buffering Tools

These tools accomplish this by pre-buffering before the video starts playing. We have not reviewed any of them, so do your research; we suggest you google “YouTube buffering tools” to see what is out there. Not only do these tools smooth out YouTube playback during peak times or on slower connections, but they also help balance the load on the network during peak periods.
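How much head start does pre-buffering actually need? A rough estimate (our own arithmetic, not taken from any specific tool) falls out of requiring the download to stay ahead of playback, i.e. (wait + duration) × link rate ≥ duration × video rate:

```python
# Estimate the pre-buffer wait needed so playback never stalls, given a
# connection slower than the video's sustained bitrate. Derived from:
# (wait + duration) * link_kbps >= duration * video_kbps

def prebuffer_seconds(duration_s: float, video_kbps: float,
                      link_kbps: float) -> float:
    """Seconds to pre-buffer before playback; zero if the link keeps up."""
    if link_kbps >= video_kbps:
        return 0.0  # the connection sustains real-time playback
    return duration_s * (video_kbps - link_kbps) / link_kbps

# A 10-minute, 240 kbps video over a connection that dips to 200 kbps:
# wait about two minutes up front, then watch straight through.
print(prebuffer_seconds(600, 240, 200))
```

The slower the connection relative to the video's bitrate, the longer the up-front wait, which is exactly the trade these tools make against mid-stream stalls.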

Bio: Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably. The article above is intended as an objective educational journey on how consumers and ISPs can live in harmony with the explosion of YouTube video.