Hotel Property Managers Should Consider Generic Bandwidth Control Solutions


Editor's Note: The following Hotelsmag.com article caught my attention this morning. The hotel industry is now seriously starting to understand that it needs some form of bandwidth control. However, many hotel bandwidth control solutions are custom-marketed to that niche, which perhaps puts their economy of scale at a competitive disadvantage. The NetEqualizer bandwidth controller, like our competitors' products, crosses many market verticals, offering hotels an effective solution without the niche-market costs. For example, in addition to the numerous other industries in which the NetEqualizer is being used, some of our hotel customers include: The Holiday Inn Capitol Hill, a prominent Washington, D.C. hotel; The Portola Plaza Hotel and Conference Center in Monterey, California; and the Hotel St. Regis in New York City.

For more information about the NetEqualizer, or to check out our live demo, visit www.netequalizer.com.

Heavy Users Tax Hotel Systems: Hoteliers and IT Staff Must Adapt to a New Reality of Extreme Bandwidth Demands

By Stephanie Overby, Special to Hotels — Hotels, 3/1/2009

The tweens taking up the seventh floor are instant-messaging while listening to Internet radio and downloading a pirated version of “Twilight” to watch later. The 200-person meeting in the ballroom has a full interactive multimedia presentation going for the next hour. And you do not want to know what the businessman in room 1208 is streaming on BitTorrent, but it is probably not a productivity booster.

To keep reading, click here.

Net Neutrality Defined: Barack Obama is on the bandwagon


By Art Reisman, CTO, http://www.netequalizer.com

There continues to be a flurry of Net Neutrality articles published, and according to one, Barack Obama is a big supporter of Net Neutrality. Of course, that was a fleeting campaign soundbite that the media picked up without much context.

I was relieved to see that a political entity has finally put a definition on Net Neutrality.

From the government of Norway we get:

“The new rules lay out three guidelines. First, Internet users must be given complete and accurate information about the service they are buying, including capacity and quality. Second, users are allowed to send and receive content of their choice, use services and applications of their choice, and connect any hardware and software that doesn’t harm the network. Finally, the connection cannot be discriminated against based on application, service, content, sender, or receiver.”

Full Article: Norway gets net neutrality—voluntary, but broadly supported

I could not agree more. Note that this definition does not rule out some form of fair bandwidth shaping, and that is an important distinction, because the Internet would be reduced to gridlock without some traffic control.

The funniest piece of irony in this whole debate is that the larger service providers are warning of Armageddon without some form of fairness rules (and I happen to agree), while at the same time their marketing arms are creating an image of infinite, unfettered access for $29 a month. (I omitted a reference link because they change daily.)

Bursting Is for the Birds (Burstable Internet Speed)



Internet Bursting

By Art Reisman, CTO, http://www.netequalizer.com

We posted this article back in May 2008. It was written from the perspective of an ISP; however, many consumers are finding our site and may conclude after reading this article that their burstable Internet service is not all it's cracked up to be. If you are a home Internet user, and a bit of a geek, you might find this article on burstable Internet speeds thought provoking.

The Demand Side

From many of our NetEqualizer users, we often hear, "I want to offer my customers a fixed-rate one-megabit link, but at night, or when the bandwidth is there, I want to let them have more." In most cases, the reasons for requesting this type of feature are noble and honest. The operator is simply trying to allow his or her customers access to a resource that has already been paid for. Call it a gesture of good faith. But, in the end, it can lead to further complications.

The problem with this offering is that it can be like slipping up while training your dog. You have to be consistent if you don't want problems. For example, you can't let the dog lick scraps off the table on Sunday and then tell him he can't do it on Monday. Well, the same is true for your customers (we're not insinuating they are dogs, of course). If you provide them with higher speeds when your network isn't busy, they may be calling you when your contention ratios are at their peak during times of greater usage. To avoid this, it is best never to let them go above their contracted amount – even when the bandwidth is available.

The Supply Side

Now that we’ve covered the possible confusion bursting may cause for your end-customer, we should take a look at how bursting affects an ISP from the perspective of variable rate bandwidth being offered by your upstream provider.

Back in 2001, when the NetEqualizer was just a lone neuron in the far corner of my developing brain, a partner and I were running a fledgling local neighborhood WISP. To get started, we pulled in a half T1 from a local bandwidth provider.

The pricing is where things got complicated. While we had a half T1, if we went over that rate more than five percent of the time, the provider was going to charge us large, random amounts of cash – sort of like using too many minutes on your cell phone.

According to our provider, this bursting feature was a great benefit to us, as the extra bandwidth would be there when we needed it. On the other hand, there was also an inner fear of dipping into the extra bandwidth, as we knew things could quickly get out of our control. For example, what if some psycho customer drove my usage over the half T1 for a month and bankrupted me before we even detected it? This was just one of the nightmare scenarios that went through my head.

Just to give you a better idea of what the experience was like, think of it this way. Have you ever made an international call from a hotel because it was your only choice and then gotten nailed with a $20 fee for a two minute conversation? This experience was kind of like that. You don’t really know what to expect, but you’re pretty sure it’s not going to be good.

I’m a business owner whose gut instinct is to live within my means. This includes determining how much bandwidth my business needs by sizing it correctly and avoiding hidden costs.

Yet, for many business owners this process is made more complicated by the policies of their bandwidth providers, bursting being a major factor. Well, it’s time to fight back. If you have a provider that offers you bursting, ask them the following questions:

  • Can I have in writing how this bursting feature works exactly?
  • Is a burst one second, 10 seconds, or 10 hours at a time?
  • Is it available all of the time, or just when my upstream provider(s) circuits are not busy?
  • If it is available for 10 hours, can I just negotiate a flat rate for this extra bandwidth?
  • Can you just turn it off for me?

For many customers that we’ve spoken with, bursting is creating more of a fear of overcharge than any tangible benefits. On the other hand, the bursting feature is often helping their upstream provider.

For an upstream provider who is subdividing a large Internet pipe into smaller pipes for resale, it is difficult to enforce a fixed bandwidth limit. So, rather than purchase expensive equipment to divvy up their bandwidth evenly amongst their customers, providers may instead offer bursting as a “feature”. And, while they are at it, they’ll charge you for something that you likely don’t really need.
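To make that concrete, here is a minimal token-bucket sketch in Python (our own illustration, not code from any provider's gear). The refill rate is the contracted speed, and the bucket depth is exactly the burst allowance that sits under the provider's control; "just turn bursting off" amounts to shrinking the bucket.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter. 'rate_bps' is the contracted
    sustained speed; 'burst_bits' (the bucket depth) is the burst allowance."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True     # forward the packet
        return False        # customer is at the limit: queue or drop

# A strict half-T1 (~768 kbps) with almost no burst headroom:
strict = TokenBucket(rate_bps=768_000, burst_bits=12_000)
# The same line sold "with bursting" is just a deeper bucket:
bursty = TokenBucket(rate_bps=768_000, burst_bits=4_000_000)
```

The point of the sketch is that bursting is not extra capacity; it is a policy knob on the same limiter, which is why asking for its behavior in writing is a reasonable request.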

So, think twice about who's really benefiting from bursting and know that a few questions can go a long way in evening out the deal with your provider. Chances are bursting may be doing your company more harm than good.

In short, while bursting may seem harmless on the surface for both the ISP and the customer, over time the potential problems can significantly outweigh the benefits. Put simply, the best way to avoid this is to maintain consistency at all times and leave bursting for the birds.

Is Running an ISP/WISP a Recession-Proof Business?


February 24th, 2009

Lafayette Colorado

APconnections, makers of the popular NetEqualizer line of bandwidth control and traffic shaping hardware appliances, today announced the results of its annual ISP state-of-the-business survey. Below is the summary.

We have been asking our ISP/WISP customers over the past several months how their business is faring in the recession, and the answer is resoundingly upbeat!

Out of the 25 ISPs (Tier 2 providers) surveyed, only two had seen a decline in subscribers, 18 were holding their own, and 5 were seeing strong growth. Here are some other tidbits.

1) Many households will cancel their cable TV before giving up their broadband.

2) Cancellations for one provider mainly occurred with foreclosures. Again, this supports the notion of people holding onto their broadband right up to the end of their finances.

3) Laid-off workers are signing up for broadband, as they see it as needed for job searches and for ways to start small home businesses.

4) We have seen an increase in inquiries for our services across the US and Canada.

5) We have not heard of anybody foregoing food as of yet, but I would not put it past some of the gamers.

Four Reasons Why Peer-to-Peer File Sharing Is Declining in 2009


By Art Reisman

CTO of APconnections, makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer

I recently returned from a regional NetEqualizer tech seminar with attendees from Western Michigan University, Eastern Michigan University and a few regional ISPs. While having a live look at Eastern Michigan's p2p footprint, I remarked that it was way down from what we had been seeing in 2007 and 2008. The consensus from everybody in the room was that p2p usage is waning. Obviously this is not a wide data set to draw a conclusion from, but we have seen the same trend at many of our customer installs (3 or 4 a week), so I don't think it is a fluke. It is kind of ironic, with all the controversy around Net Neutrality and Bittorrent blocking, that the problem seems to be taking care of itself.

So, what are the reasons behind the decline? In our opinion, there are several reasons:

1) Legal iTunes and other MP3 downloads are the norm now. They are reasonably priced and well marketed. These downloads still take up bandwidth on the network, but they do not clog access points with connections the way torrents do.

2) Most music aficionados are well stocked with the classics (bootleg or not) by now and are only grabbing new tracks legally as they come out. The days of downloading an entire collection of music at once seem to be over. Fans have their foundation of digital music and are simply adding to it rather than building it up from nothing as they were several years ago.

3) The RIAA enforcement got its message out there. This, coupled with reason #1 above, pushed users to go legal.

4) Legal, free and unlimited. YouTube videos are more fun than slow music downloads and they’re free and legal. Plus, with the popularity of YouTube, more and more television networks have caught on and are putting their programs online.

Despite the decrease in p2p file sharing, ISPs are still experiencing more pressure on their networks than ever from Internet congestion. YouTube and Netflix are more than capable of filling the void left by waning torrents. So, don't expect the controversy over traffic shaping and the use of bandwidth controllers to go away just yet.

Cox Shaping Policies Similar to NetEqualizer


Editor's Note: Cox today announced a bandwidth management policy similar to NetEqualizer's, but with a twist. It seems they are only delaying p2p during times of congestion (similar to NetEqualizer), but in order to determine that traffic is specifically p2p, they are possibly employing some form of Deep Packet Inspection (unlike NetEqualizer, which is traffic-type agnostic). If anybody has inside knowledge, we would appreciate comments here and will make corrections to our assertion if needed.

As this all plays out, it will be interesting to see how they differentiate p2p from video and whether they are actually doing Deep Packet Inspection. Also, if DPI is part of the Cox strategy, how will this sit with the FCC, which clearly strong-armed Comcast into abandoning DPI?

Cox Will Shape Its Broadband Traffic; Delay P2P & FTP Transfers

Om Malik | Gigaom.com | Tuesday, January 27, 2009

Cox Communications, the third largest cable company and broadband service provider, is joining Comcast in traffic shaping and delaying traffic it thinks is not time sensitive. They call it congestion management, making it seem like an innocuous practice, though in reality it is anything but innocuous. Chalk this up as yet-another-incumbent-behaving-badly!

To be fair, in the past Cox had made it pretty clear that it was going to play god with traffic flowing through its pipes. Next month, it will start testing a new method of managing traffic on its network in Kansas and Arkansas. Cox, outlining the congestion management policy on its website, notes:

“…automatically ensures that all time-sensitive Internet traffic — such as web pages, voice calls, streaming videos and gaming — moves without delay. Less time-sensitive traffic, such as file uploads, peer-to-peer and Usenet newsgroups, may be delayed momentarily — but only when the local network is congested.”

Full article

ISP-Planet's nice article on NetEqualizer


NetEqualizer Sees New Opportunity

An aggressive move into a new channel comes along with cost cutting elsewhere in the business.

by Alex Goldman
ISP-Planet Managing Editor
[January 27, 2009]

When some ISP executives think "bandwidth shaper," they think of a device with a five-digit price tag. If so, they're not thinking of Lafayette, Colo.-based APconnections' NetEqualizer product, which we last wrote about in 2007 (see Network Contention Specialist).

The NetEqualizer starts at under $2,000, and pricing is published online.

Full article

Can your ISP support Video for all?


By Art Reisman, CTO, http://www.netequalizer.com

As the Internet continues to grow, with higher home-user speeds available from Tier 1 providers, video sites such as YouTube, Netflix, and others are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), these videos don't face the veil of copyright scrutiny cast upon p2p, which caused most p2p users to back off. They are here to stay, and any ISP currently offering high-speed Internet will need to accommodate the subsequent rising demand.

How should a Tier 2 or Tier 3 provider size their overall trunk to ensure smooth video at all times for all users?

From measurements done in our NetEqualizer laboratories, a normal-quality video stream requires around 350kbps of bandwidth sustained over its life span to ensure there are no breaks or interruptions. Newer high-definition videos may run at even higher speeds.


A typical rural WISP will have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio point where a small business can turn a profit. Given this contention ratio, if 30 customers simultaneously watch YouTube, the link will be exhausted and all 300 customers will experience protracted periods of poor service.

Even though it is theoretically possible to support 30 simultaneous video streams on a 10-megabit link, it would only be possible if the remaining 270 subscribers were idle. In reality, the trunk will become saturated with perhaps 10 to 15 active video streams, as obviously the remaining subscribers are not idle. Given this realistic scenario, is it reasonable for an ISP with 10 megabits and 300 subscribers to tout that they support video?

As of late 2007, about 10 percent of Internet traffic was attributed to video. It is safe to assume that number is higher now (January 2009). Using the 2007 number, 10 percent of 300 subscribers would yield on average 30 video streams, but that is not a fair estimate, because the 10 percent of people using video would only apply to the subscribers who are actively online, not all 300. To be fair, we'll assume 150 of the 300 subscribers are online during peak times. The calculation now yields an estimated 15 users doing video at one time, which is right at our upper limit of smooth service for a 10-megabit link. Any more and something has to give.
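For readers who want to check the arithmetic, the estimate above works out as follows (a worked version of the paragraph's numbers, using the 350kbps figure from our lab measurements):

```python
# Back-of-the-envelope video capacity estimate for a shared 10-megabit trunk.
link_bps        = 10_000_000   # trunk capacity
stream_bps      = 350_000      # one normal-quality video stream, sustained
subscribers     = 300
online_at_peak  = subscribers // 2   # assume half are online at peak: 150
video_share     = 0.10               # ~10% of traffic was video (late 2007)

streams   = int(online_at_peak * video_share)   # 15 simultaneous streams
video_bps = streams * stream_bps                # 5,250,000 bps

print(f"{streams} streams use {video_bps / 1e6:.2f} of {link_bps / 1e6:.0f} Mbps,")
print(f"leaving {(link_bps - video_bps) / 1e6:.2f} Mbps for the other traffic")
# With ordinary web and download traffic from the remaining active users on
# top of this, the trunk is already at its practical limit.
```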

The moral of this story so far is that you should be cautious before promoting unlimited video support with contention ratios of 30 subscribers to 1 megabit. The good news is that most rural providers are not competing in metro areas, hence customers will have to make do with what they have. In areas with more intense competition for customers, where video support might make a difference, our recommendation is that you will need a ratio closer to 20 subscribers to 1 megabit, and you still may have peak outages.

One trick you can use to support video with limited Internet resources

We have previously been on record as not being a supporter of caching to increase Internet speed; well, it is time to backtrack on that. We are now seeing results showing that caching can be a big boost in speeding up popular YouTube videos. Caching and video tend to work well together, as consumers tend to flock to a small subset of the popular videos. The downside is that your local caching server will only be able to archive a subset of the content on the master YouTube servers, but this should be enough to give the appearance of pretty good video.
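The reason a partial cache works is that viewing is heavily concentrated on a few popular titles. The hypothetical simulation below (our assumption of a Zipf-like popularity curve, not measured YouTube data) shows a cache holding just 1 percent of titles serving well over half of all requests:

```python
import random

catalog_size = 100_000   # titles on the master servers
cache_size   = 1_000     # titles the local cache can hold (1 percent)

# Zipf-like popularity: title k is requested proportionally to 1/k.
weights  = [1.0 / k for k in range(1, catalog_size + 1)]
requests = random.choices(range(catalog_size), weights=weights, k=50_000)

# Assume the cache holds the cache_size most popular titles.
hits = sum(1 for title in requests if title < cache_size)
print(f"cache holds {cache_size / catalog_size:.0%} of titles "
      f"but serves {hits / len(requests):.0%} of requests locally")
```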

In the end, there is no substitute for having a big fat pipe with enough room to run video. We'll just have to wait and see if the market can support this expense.

Bonded DSL Technical Pros and Cons Discussion


Editor's Note: We often get asked if our NetEqualizer product line can do load balancing. The answer is: maybe, if we wanted to integrate one of the public-domain load-balancing devices that are freely available. It seems that doing it correctly, without issues, is extremely expensive. In the following excerpt, we have reprinted some thoughts and experience from a user who has a wide breadth of knowledge in this area. He gives detailed examples of the trade-offs involved in bonding multiple WAN connections.

When bonding is done by your provider, it is essentially seamless and requires no extra effort (or risk) on the customer's part. It is normally done using bonded T1 links, but can also come in the form of bonded DSL. The technology discussed below is applicable to users who are bonding two or more lines together without the knowledge (or help) of their upstream provider.

As for Linux freeware load-balancing devices: they are NOT any sort of true bonding at all. If you have 3 x 1.5 Mbit lines, then you do NOT have a 4.5 Mbit line with these products. If you really want a 4.5 Mbit bonded line, then I'm not aware of any way to do it without having BGP or some method of coordinating with someone upstream on the other side of the link. However, what these multi-WAN routers will do is try to spread sessions equally over the three lines, so that if your users are collectively doing 3 Mbit of downloads, that should be about 1 Mbit on each line. For the most part, it does a pretty good job.

It does this by fairly dumb round-robin NATing. So, it's much like a regular NAT router – everyone behind it is a private 192.168 number (which is the first downside) – and it'll NAT the private addresses to one of the 3 public IPs on the WAN ports. The side effect of that is broken sessions, where some websites (particularly SSL) will complain that your IP address has changed while you're inside the shopping cart or whatever.

To counteract that problem, they have 'session persistence,' which tries to track each 'session pair' and keep the same WAN IP in effect for that pair. That means that the first time one of the private IP:ports accesses some particular public IP:port, the router will remember that and use that same WAN port for that same public/private pair. The result is that 'most' of the time we don't have these broken sessions, but the downside is that the fairness of the load balancing is offset.

For example, if you had 2 lines connected:

  • User1 comes to speakeasy and does a speedtest – the router says ‘speakeasy is out WAN1 for evermore’.
  • User2 comes and looks up google, and the router says ‘google is out WAN2 for evermore’
  • User3 goes to Download.com and the router decides ‘Download.com is on WAN1’.
  • User4 goes to smalltextsite.com (WAN2)
  • User5 goes to YouTube (WAN1)

And so on. With session persistence turned on, User300 will get SpeakEasy, Download.com and YouTube across WAN1 because that’s what it originally learned to be persistent about.
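A toy model of that learning behavior (our own sketch, not any vendor's firmware) makes the trade-off easy to see: destinations are handed out round-robin, but once learned, the pairing never changes.

```python
from itertools import cycle

class MultiWanRouter:
    """Toy multi-WAN balancer with session persistence: the first WAN
    chosen for a destination is remembered and reused, which keeps
    sessions intact but can unbalance the lines over time."""

    def __init__(self, wan_ports):
        self.round_robin = cycle(wan_ports)   # WAN1, WAN2, WAN1, ...
        self.learned = {}                     # destination -> WAN port

    def pick_wan(self, destination):
        if destination not in self.learned:
            self.learned[destination] = next(self.round_robin)
        return self.learned[destination]

router = MultiWanRouter(["WAN1", "WAN2"])
print(router.pick_wan("speakeasy.net"))    # WAN1, for evermore
print(router.pick_wan("google.com"))       # WAN2
print(router.pick_wan("download.com"))     # WAN1
print(router.pick_wan("speakeasy.net"))    # WAN1 again, even for User300
```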

So, the tradeoff is: if you don't use session persistence, then you'll have angry customers because things break. If you do use persistence, then there may be an imbalance.

Also, there are still some broken sites, even with persistence on. For example, some online stores have the customer shopping at www.StoreSite.com, and when they check out, it transfers their cart contents to www.PaymentProcessor.com, which may flag an IP security violation. Any time the router sees different IPs on the public side, it figures it can use a new WAN port and doesn't know it's the same user and application. There are also a few games where kids load a 'launcher' program and select a server to connect to, but when they actually click 'connect', the server complains because the WAN address has changed.

In all honesty, it works quite well and there are few problems. We can also make our own exception list, so in my shopping cart example, we can manually add 'storesite.com' and 'paymentprocessor.com' to the same WAN address, and that'll ensure it always uses the same WAN for those sites. That requires users to complain first before you'd even know there's a problem, and requires some tricks to figure out what's going on, but the exception list can ultimately handle these problems if you make enough exceptions.

Comcast fairness techniques comparison with NetEqualizer


Comcast is now rolling out the details of its new policy on traffic-shaping fairness as it moves away from its former Deep Packet Inspection.

For the complete Comcast article click here

Below we compare techniques with the NetEqualizer

Note: Feel free to comment if you feel we need to make any corrections in our comparison; our goal is to be as accurate as possible.

1) Both techniques slow users down if they exceed a bandwidth limit over a time period.

2) The Comcast bandwidth limit kicks in after 15 minutes and is based only on a customer's usage over that time period; it is not based on the congestion in the overall network.

3) NetEqualizer bandwidth limits are based on the last 8 seconds of customer usage, but only kick in when the overall network is full (i.e., the aggregate bandwidth utilization of all users on the line has reached a critical level).

4) Comcast punishes offenders by cutting them back 50 percent for a minimum of 15 minutes.

5) NetEqualizer punishes offenders for just a few seconds and then lets them back to full strength. It will hit the offending connection with a decrease ranging from 50 to 80 percent.

6) Comcast puts a restriction on all traffic to the user during the 15-minute penalty period.

7) NetEqualizer only punishes offending connections. For example, if you were running an FTP download and streaming audio, only the FTP download would be affected by the restriction (see the sketch below).
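To illustrate points 3, 5, and 7, here is a highly simplified sketch of the decision logic (our illustration of the idea only, not actual NetEqualizer code; the threshold and penalty values are made up for the example):

```python
TRUNK_BPS      = 10_000_000   # total pipe
CRITICAL_RATIO = 0.85         # "network is full" above this utilization
PENALTY        = 0.5          # cut the offender back 50 percent (up to 80)

def equalize(connections):
    """connections: {conn_id: bits/sec averaged over the last 8 seconds}.
    Returns (offender, new_allowed_rate), or None when there is no
    congestion; only the offending connection is touched, and only briefly."""
    total = sum(connections.values())
    if total < CRITICAL_RATIO * TRUNK_BPS:
        return None                            # no congestion, nobody throttled
    offender = max(connections, key=connections.get)
    return offender, connections[offender] * (1 - PENALTY)

flows = {"ftp_download": 6_000_000, "streaming_audio": 128_000,
         "web_browsing": 2_500_000}
print(equalize(flows))    # ('ftp_download', 3000000.0): the audio is untouched
```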

In our opinion both methods are effective and fair.

FYI, NetEqualizer also has a quota system, which is used by a very small percentage of our customers. It is very similar to the Comcast 15-minute system, except that the time interval is measured in days.

Details on the NetEqualizer quota-based system can be found in the user guide, page 11.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

How the Music Industry Caused the Current Bittorrent Explosion


By: Art Reisman

Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Originally published April 4, 2008

Update, Dec 18, 2008: The RIAA announced a new tactic over the weekend. The ironic twist is that, by our accounts, the old tactic of vigorous enforcement was working. We were seeing (on the hundreds of networks we support) far fewer bittorrents running compared to two years ago. I'd estimate the drop to be about 80 percent. I am not sure if our observations were indicative of the industry trend, but by our accounts, pirated material must have been on the decline. We'll be putting together a more detailed article shortly.

Flash back to the year 2000: Napster hits the scene and becomes the site of choice for anybody trying to download online music.

It is important to understand that the original Napster had a centralized infrastructure. All file transfers happened via the coordination of a central server. Had the music industry embraced this model, it would likely have had a smooth transition from its brick-and-mortar channel to digital distribution. If only it had been a bit more farsighted about the consequences of its actions.

Instead of embracing Napster, the music industry, along with the RIAA (the industry henchman for copyright enforcement), worked to shut Napster down, much the same way they had successfully gone after commercial establishments that play unlicensed music.

There were some smaller-label artists that did embrace Napster, obviously looking for untapped market share, but for the most part the industry reacted like an obsolete dinosaur fighting progress out of fear of losing revenue.

I was personally experimenting with downloading music at this time. If Bill Clinton and Obama can admit to illegal drug use, I should be able to confess to one or two illegal downloads without retribution (note: I have since licensed all the music in my library). It wasn't the free music that attracted me to Napster in 2000, but rather the convenience of getting the tracks I wanted when I wanted them.

Well, the RIAA succeeded in getting an injunction against Napster and shutting them down in February 2001.

This would turn out to be a costly mistake.

It was no coincidence that shortly after the fall of Napster, a whole herd of new file-sharing techniques showed up. BearShare, Kazaa, Gnutella, Limewire, and Bittorrent all became popular seemingly overnight, and once again copyrighted material was being spread all over the world. Only this time it was not coming from a centralized server, but from millions of servers. Now, instead of having one source where music distribution could be tracked, the music industry had a wasp nest of swarming downloads.

Although today there are many paying customers of legal downloads, black-market peer-to-peer file sharing still runs rampant, and this time it is not possible to squash the distribution model. Bittorrent is itself not the cause of illegal file sharing, any more than automobiles cause drunk driving. The industry cannot possibly shut down a freely distributed file-sharing model without shutting down the Internet itself, and obviously the distribution channel is not guilty of piracy; the people that use it are. Instead, the RIAA has adopted a policy of making examples by tracking down individual copyright violators, a daunting and possibly futile task.

For example, it is extremely difficult to get a subpoena to far off corners of the world where governments are concerned with more important matters.

I'll comment on how the RIAA pursues illegal distribution, and the downside of their model, in my next posting.

Will the New UDP-based Bittorrent Thwart Traffic Shaping?


A customer asked us today how the newer Bittorrent methods using UDP will affect our ability to keep traffic in check. Here is our first take on this subject (See the related article “Bittorrent declares war on VoIP, gamers”).

The change from TCP to UDP transfer will have some effect on our methods to throttle bandwidth. However, at the IP level there is no difference between the two, and we have never based our shaping techniques on whether packets were UDP or TCP. The ISP mentioned in the article above likely uses TCP window-size manipulation to slow downloads. You can't do that with UDP, and I think that is what the author was alluding to.

The only difference for the NetEqualizer will be that UDP streams are harder to knock down, so it may require a tuning change if it is really an issue. By this, I mean we may have to hit them harder with more latency than our standard defaults when throttling packets.
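As a rough sketch of what "hitting them harder with latency" means, consider a per-flow delay queue (our illustration only; the millisecond values are invented for the example, not our shipping defaults). TCP senders back off on their own when delayed, while UDP senders generally don't, so the UDP penalty is set higher:

```python
import time
from collections import deque

DELAY_MS = {"tcp": 40, "udp": 120}   # invented example values, not defaults

class DelayQueue:
    """Protocol-agnostic throttling: hold a flow's packets for a fixed
    delay before forwarding. Works identically for TCP and UDP at the
    IP level; only the amount of added latency differs."""

    def __init__(self, protocol):
        self.delay = DELAY_MS[protocol] / 1000.0
        self.pending = deque()                  # (release_time, packet)

    def throttle(self, packet):
        self.pending.append((time.monotonic() + self.delay, packet))

    def release_ready(self):
        now = time.monotonic()
        while self.pending and self.pending[0][0] <= now:
            yield self.pending.popleft()[1]     # forward the delayed packet
```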

On a side note, we are seeing some interesting trends with regard to Bittorrent.

When looking at our customer networks, we are just not seeing the same levels of Bittorrent that we saw in the past (circa 2006).

We believe the drop is due to a couple of factors:

1)  The RIAA’s enforcement — The high school and university crowd has been sufficiently spanked with copyright prosecutions. Most people now think twice about downloading copyrighted material.

2) Legal alternatives — The popularity of online music-purchase sites has replaced some of the illegal transfers. (These also take up bandwidth, but they are not distributed by bittorrent.)

The recent trends do not mean that bittorrent is going away, but rather that viable alternatives are emerging.  However, while legal distribution of content is here to stay and will likely grow over time, we do not expect an explosion that will completely replace bittorrent.

How Much YouTube Can the Internet Handle?


By Art Reisman, CTO, http://www.netequalizer.com 

As the Internet continues to grow and true speeds become higher, video sites like YouTube are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), YouTube videos don't face the veil of copyright scrutiny cast upon p2p, which caused most users to back off.

In our experience, there are trade-offs associated with the advancements in technology that have come with YouTube. From measurements done in our NetEqualizer laboratories, the typical normal-quality YouTube video needs about 240kbps sustained over the 10-minute run time of the video. The newer high-definition videos run at a rate at least twice that.

Many of the rural ISPs that we at NetEqualizer support with our bandwidth shaping and control equipment have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio point where these small businesses can turn a profit. Given this contention ratio, if 40 customers simultaneously run YouTube, the link will be exhausted and all 300 customers will be wishing they had their dial-up back. At last check, YouTube traffic accounted for 10 percent of all Internet traffic. If left completely unregulated, a typical rural ISP could find itself on the brink of saturation from normal YouTube usage already. With Tier 1 providers in major metro areas there is usually more bandwidth, but with that come higher expectations of service, and hence some saturation is inevitable.

If you believe there is a conspiracy, or that ISPs are not supposed to profit as they take risks and operate in a market economy, you are entitled to your opinion, but we are dealing with reality. And there will always be tension between users and their providers, much the same as there is with government funds and highway congestion.

The fact is, all ISPs have a fixed amount of bandwidth they can deliver, and when data flows exceed their current capacity, they are forced to implement some form of passive constraint. Without such constraints, many networks would lock up completely. This is no different than a city restricting water usage when reservoirs are low. Water restrictions are well understood by the populace, and yet somehow bandwidth allocations and restrictions are perceived as evil. I believe this misconception is simply due to the fact that bandwidth is so dynamic; if there were a giant reservoir of bandwidth pooled up in the mountains where you could watch this resource slowly become depleted, the problem could be more easily visualized.

The best compromise offered, and the only compromise that is not intrusive, is bandwidth rationing at peak hours when needed. Without rationing, a network will fall into gridlock, in which case not only do the YouTube videos come to a halt, but so do e-mail, chat, VoIP, and other less intensive applications.

There is some good news: there are alternative ways to watch YouTube videos.

We noticed during our testing that YouTube attempts to play back video as a real-time feed, like watching live TV. When you go directly to YouTube to watch a video, the site and your PC immediately start the video, and the quality becomes dependent on having that 240kbps. If your provider's speed dips below this level, your video will begin to stall, which is very annoying. However, if you are willing to wait a few seconds, there are tools out there that will play back YouTube videos for you in non-real time.

Buffering Tool 

They accomplish this by pre-buffering before the video starts playing. We have not reviewed any of these tools, so do your research. We suggest you google "YouTube buffering tools" to see what is out there. Not only do these tools smooth out YouTube playback during peak times or on slower connections, but they also help balance the load on the network during peak times.
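The idea behind these tools is simple enough to sketch. The toy model below (our own illustration, not a review of any particular tool) waits until it has several seconds of video in hand before starting playback, so a temporary dip below 240kbps drains the buffer instead of freezing the picture:

```python
STREAM_KBPS = 240    # playback consumes this much each second
PREBUFFER_S = 15     # head start to accumulate before playback begins

def stall_count(throughput_kbps):
    """throughput_kbps: download rate observed in each one-second tick."""
    buffered, playing, stalls = 0.0, False, 0
    for rate in throughput_kbps:
        buffered += rate                          # this second's download
        if not playing and buffered >= STREAM_KBPS * PREBUFFER_S:
            playing = True                        # enough head start: roll video
        if playing:
            if buffered >= STREAM_KBPS:
                buffered -= STREAM_KBPS           # playback drains the buffer
            else:
                stalls += 1                       # buffer ran dry: picture freezes
    return stalls

# A connection that dips to 150kbps for 30 seconds mid-video:
print("stalls:", stall_count([300] * 60 + [150] * 30 + [300] * 60))   # 0
```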

Bio: Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably. The article above is intended as an objective educational journey on how consumers and ISPs can live in harmony with the explosion of YouTube video.

NetEqualizer CTO not a fan of Software Patents


NetEqualizer CTO Art Reisman has written several opinion pieces over the years regarding the use of software patents. You might be surprised to find out that he is not a big fan of them and refuses to file any patent claims to protect the NetEqualizer technology (whose roots are in open source).

Below are links to several analysis articles written by Art for ExtremeTech magazine over the last couple of years.

  • Analysis: Vuze’s Allegations Are Anecdotal, But Troubling

    According to APConnections CTO Art Reisman, the accusations of network traffic impairment leveled at AT&T and Comcast by Vuze are serious, troubling, and worthy of further investigation, but also mostly anecdotal at present.

  • Analysis: Confessions of a Patent Holder

    APConnections CTO Art Reisman weighs in with an insider’s look at what the patent process is really like. What was the jury in the recent Vonage-Verizon case thinking?

  • How Your Wi-Fi Router May Have ‘Hidden Nodes’

    If you’ve ever tried to connect to your office’s wireless network only to find that the Internet service has slowed to a crawl, you may be running up against a phenomenon known as the “hidden node.”

  • Analysis: The White Lies ISPs Tell About Broadband Speeds

    Insider Art Reisman, CTO of bandwidth shaper firm APConnections, reveals how even the common speed tests used to evaluate your broadband connection may be spoofed by ISPs. Think you’re getting your full rated speed? Think again.

  • Analysis: Reverse-Engineering Skype Is Doubtful

    A recent rumor hitting the blogosphere has the world buzzing with the possibility that a Chinese company backed with large sums of money has cracked the Skype encryption codes and is poised to offer a competing product that can send and receive Skype calls. Art Reisman says he’s dubious.

  • Analysis: ISPs Are Going To Eat Vonage’s Lunch

    Art Reisman of APConnections thinks that market forces will take care of Vonage far sooner, and more effectively, than any efforts to block its services.

Deep Packet Inspection a Poison Pill for NebuAd?


Editor's Note:

NebuAd had a great idea: show ads to users based on content and share the revenue with ISPs that sign up for their service. What is wrong with this idea? I guess customers don't like people looking at their private data using DPI, hence the lawsuit detailed in the article below. The funny thing is we are still hearing from customers that want DPI as part of their solution, including many universities, ISPs, and the like. I think the message is clear: don't use Deep Packet Inspection unless you fully disclose this practice to your customers/employees, or risk getting your head nailed to a table.

———————————————————————–

From ZDNet, Nov 11, 2008

NebuAd, the controversial company that was trying to sell deep-packet inspection technology as a means of delivering more relevant ads, has already had most of the life sucked out of it. Now, a class-action lawsuit filed in U.S. District Court in San Francisco today could put the final nail in the coffin.

Full article

http://blogs.zdnet.com/BTL/?p=10774