You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks


By Art Reisman
CTO – http://www.netequalizer.com

About 10 years ago, we had an idea for QoS across an Internet link. It was simple and elegant, and it worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (diffserv) techniques over any public or semi-public Internet link, but using our techniques we have proven the impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that TOS bits are only effective when you control all sides of a conversation on a link, and this is not possible on most Internet links (think cloud computing and wireless mesh networks). For standard TOS services to work, you must control all the equipment between the two end points. All it takes is one router in the path of a VoIP conversation to ignore a TOS bit, and the priority it was meant to convey is lost. Thus TOS bits for priority are really only practical inside a corporate LAN/WAN topology.

Look at the root cause of poor-quality service and you will find alternative solutions.

Most people don’t realize that the problem with congested VoIP, on any link, is that their VoIP packets are getting crowded out by larger downloads and things like recreational video (the same is true for any interactive cloud access during congestion). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of a local IP talking to a remote Internet IP. When we see a large stream dominating the link, we step back and ask: is the link congested? Is that download crowding out other time-sensitive transactions such as VoIP? If the answer is yes to both questions, then we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple and does not seem plausible, but it works. It works very well, and it works with just one device in the link, irrespective of any other complex network engineering. It works with minimal setup. It works over MPLS links. I could go on and on; the only reason you may not have heard of it is that it goes against the grain of what most vendors are selling, and that is large orders for expensive high-end routers using TOS bits.
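
To make the two-question check concrete, here is a minimal Python sketch of the decision loop described above. The class, helper names, and thresholds are illustrative assumptions, not the actual NetEqualizer implementation.

```python
# Minimal sketch of the two-question check: is the link congested, and
# which streams are dominating it? All names and numbers are examples.

from dataclasses import dataclass

LINK_CAPACITY_BPS = 10_000_000   # example: a 10-megabit Internet link
CONGESTION_RATIO = 0.85          # treat the link as congested above 85% utilization
LARGE_STREAM_BPS = 100_000       # streams above ~100 kbps are throttle candidates

@dataclass
class Stream:
    local_ip: str
    remote_ip: str
    rate_bps: float              # current measured rate of this stream

def link_utilization(streams: list[Stream]) -> float:
    """Fraction of the link consumed by all active streams."""
    return sum(s.rate_bps for s in streams) / LINK_CAPACITY_BPS

def throttle(stream: Stream) -> None:
    """Placeholder: add latency / queue packets for this stream."""
    print(f"throttling {stream.local_ip} <-> {stream.remote_ip}")

def equalize(streams: list[Stream]) -> None:
    # Question 1: is the link congested at all?
    if link_utilization(streams) < CONGESTION_RATIO:
        return                   # plenty of headroom, leave everyone alone
    # Question 2: which streams are dominating the link?
    for s in streams:
        if s.rate_bps > LARGE_STREAM_BPS:
            throttle(s)          # proactively take bandwidth back from the big streams

active = [
    Stream("10.0.0.5", "93.184.216.34", 6_000_000),   # large download
    Stream("10.0.0.9", "198.51.100.7", 80_000),       # VoIP call, left alone
    Stream("10.0.0.12", "203.0.113.3", 3_500_000),    # video stream
]
equalize(active)
```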

Related article QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where sending or receiving IP addresses are obfuscated.

Coming this winter, we plan to improve upon our QoS techniques so we can drill down inside mesh and cloud networks a bit better.

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect has been that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is because we currently base our decision on a pair of IPs talking to each other, but we do not consider the IP port numbers, and sometimes, especially in a cloud or mesh network, services are trunked across a tunnel using the same IP. As these services get tunneled across a trunk, the data streams are bundled together using one common pair of IPs, and the streams are then broken out based on IP ports so they can be routed to their final destinations. For example, in some cloud computing environments there is no way to differentiate a video stream coming from the cloud within the tunnel from a smaller data access session; they can sometimes both be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give priority to it) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel, sharing the IPs of other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are often sorted out on different ports at the ends of the tunnel. With our new release coming this winter, we will start to look at streams as IP and port number, thus allowing for much greater resolution for QoS inside the cloud and inside your mesh network. Stay tuned!
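
A rough sketch of what that change amounts to: keying streams by (IP, port) tuples instead of IP pairs alone, so services tunneled over one shared IP remain distinguishable. The packet dictionaries and field names below are illustrative, not the product's internals.

```python
# Sketch: stream accounting keyed by (IP, port) instead of IP pair.

from collections import defaultdict

def old_stream_key(pkt: dict) -> tuple:
    # Old behavior: everything inside a tunnel that shares one IP pair
    # collapses into a single stream.
    return (pkt["src_ip"], pkt["dst_ip"])

def new_stream_key(pkt: dict) -> tuple:
    # New behavior: port numbers separate the bundled services, e.g. a
    # video session vs. a small data session inside the same tunnel.
    return (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])

bytes_per_stream = defaultdict(int)

def account(pkt: dict) -> None:
    bytes_per_stream[new_stream_key(pkt)] += pkt["length"]

# Two tunneled services sharing the same IP pair now show up as two streams:
account({"src_ip": "10.1.1.1", "src_port": 8801, "dst_ip": "10.2.2.2", "dst_port": 52001, "length": 1400})
account({"src_ip": "10.1.1.1", "src_port": 443,  "dst_ip": "10.2.2.2", "dst_port": 52002, "length": 1400})
print(len(bytes_per_stream))   # 2 (it would be 1 under old_stream_key)
```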

Equalizing is the Silver Bullet for Quality of Service


Silver Bullet (n.) – A simple and seemingly magical solution to a complex problem.

The number of solutions that have been developed to improve Quality of Service (QoS) for data traveling across a network (video, VoIP, etc.) is endless. Often, these tools appear to be simple but fall short in implementation:

Compression: Compressing files in transit helps reduce congestion by decreasing the amount of bandwidth a transfer requires. This appears to be a viable solution, but in practice, most of the large streams that tend to clog networks (high-resolution media files, etc.) are already compressed. Thus, most networks won’t see much improvement in QoS when this method is used (the short sketch after this list illustrates why).

Layer 7 Inspection: Providing QoS to specific applications also sounds like a reasonable approach to the problem. However, applications increasingly use encryption for transferring data, making it much harder to determine the purpose of a network packet. It also requires constant tweaking and updates to ensure the proper applications are given priority.

Type of Service: Each network packet has a field in its IP header that denotes its “type of service.” This flag was intended to help give QoS to packets based on their importance and purpose. This method, however, requires lots of custom router configuration and is not very reliable, since there is little control over who sets the flag, when, and why.
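
Here is the quick experiment referred to above under Compression, showing why re-compressing already-compressed data buys almost nothing (zlib stands in for whatever compression an accelerator might apply):

```python
# Data that is already compressed (a stand-in for a media file) barely
# shrinks when compressed again, so compression-based acceleration
# gains little on the large streams that actually congest a link.

import zlib

text_like = b"the quick brown fox jumps over the lazy dog " * 20_000
already_compressed = zlib.compress(text_like)   # stands in for an already-compressed media file

print(len(zlib.compress(text_like)) / len(text_like))                     # a tiny fraction: big savings
print(len(zlib.compress(already_compressed)) / len(already_compressed))  # ~1.0: essentially no savings
```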

These solutions are analogous to the diet pill and weight loss products that inundate our lives on a daily basis. They are offering complex solutions to a simple problem:

Overweight? Buy this machine, watch these DVDs, take this pill.

When the real solution is:

Overweight? Eat better.

Simple solutions are what good engineering is all about, and that simplicity drives the entire philosophy behind Equalizing – the bandwidth control method implemented in our NetEqualizer. The truth is, you can accomplish 99% of your QoS needs on a fixed link SIMPLY by cranking down on the large streams of traffic. While the above approaches try to do this in various ways, nothing is easier and more hands-off than looking at the behavior of a connection relative to the available bandwidth, and subsequently throttling it as needed. No deep packet inspection, compression, or packet analysis required. No need to concern yourself with new Internet usage trends or the latest media file types. Just fair bandwidth, regardless of trunk size, for all of your users, at all times of day. When bandwidth is controlled, connection quality is allowed to be as good as possible for everyone!

Speeding Up Your Internet Connection Using a TOS Bit


A TOS bit (Type Of Service bit) is a special bit within an IP packet that directs routers to give preferential treatment to selected packets. This sounds great: just set a bit and move to the front of the line for faster service. As always, there are limitations.

How does one set a TOS bit?

It seems that only very specialized enterprise applications, like VoIP PBXs, actually set and make use of TOS bits. Setting the actual bit is not all that difficult if you have an application that deals with the network layer, but most commercial applications simply hand their data to the host operating system, which in turn puts the data into IP packets without a TOS bit set. After searching around for a while, I just don’t see much literature on setting a TOS bit at the application level. For example, there are several forums where people mention setting the TOS bit in Skype, but nothing definitive on how to do it.
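
For what it’s worth, if you do control the application code, the standard socket API on Linux exposes the field. A minimal sketch follows; the DSCP value and address are illustrative, and whether anything downstream honors the marking is a separate question, as discussed next.

```python
# Setting the TOS/DSCP field from application code on Linux via the
# standard socket API. Values and addresses are examples only.

import socket

DSCP_EF = 0x2E              # "Expedited Forwarding", commonly used for voice
TOS_VALUE = DSCP_EF << 2    # DSCP occupies the upper six bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Every datagram sent on this socket now carries the marking.
sock.sendto(b"example voice payload", ("192.0.2.10", 5060))
```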

However, not to be discouraged, and being the hacker that I am, I could, with some work, make a little module to force every packet leaving my computer or wireless device to have the TOS bit set. So why not package this up and sell it to the public as an Internet accelerator?

Well, before I spend any time on it, I must consider the following:

Who enforces the priority for TOS packets?

This is a function of routers at the edge of your network, and all routers along the path to wherever the IP packet is going. Generally, this limits the effectiveness of using a TOS bit to networks that you control end-to-end. In other words, a consumer using a public Internet connection cannot rely on their provider to give any precedence to TOS bits; hence this feature is relegated to enterprise networks within a business or institution.

Incoming traffic generally cannot be controlled.

The subject of when you can and cannot control a TOS bit does get a bit more involved (pun intended). We have gone over it in more detail in a separate article.

Most of what you do is downloading.

So even assuming your Internet provider did give special treatment to incoming data such as video, downloads, and VoIP (which it likely does not), the problem with my accelerator idea is that it could only set the TOS bit on data leaving your computer. Incoming TOS bits would have to be set by the sending server.

The moral of the story is that TOS bits traversing the public Internet don’t have much of a chance of making a difference in your connection speed.

In conclusion, we are going to continue to study TOS bits to see where they might be beneficial and complement our behavior-based shaping (aka “equalizing”) technology.

NetEqualizer Provides Unique Low-Cost Way to Send Priority Traffic over the Internet


Quality of service, or QoS as it’s commonly known, is one of those overused buzz words in the networking industry. In general, it refers to the overall quality of online activities such as video or VoIP calls, which, for example, might be judged by call clarity. For providers of Internet services, promises of high QoS are a selling point to consumers. And, of course, there are plenty of third-party products that claim to make quality of service that much better.

A year ago on our blog, we broke down the costs and benefits of certain QoS methods in our article QoS Is a Matter of Sacrifice. Since then, and in part to address some of the drawbacks and shortcomings we discussed, we’ve developed a new NetEqualizer release offering a novel way to provide QoS over your Internet link using a type of service (ToS) bit. In the article that follows, we’ll show that the NetEqualizer is the only optimization device that can provide QoS in both directions of a voice or video call over an Internet link.

This is worth repeating: The NetEqualizer is the only device that can provide QoS in both directions for a voice or video call on an open Internet link. Traditional router-based solutions can only provide QoS in both directions of a call when both ends of a link are controlled within the enterprise. As a result, QoS is often reduced and limited. With the NetEqualizer, this limitation can now be largely overcome.

First, let’s step back and discuss why typical routers using ToS bits cannot ensure QoS for an incoming stream over the Internet. Consider a typical scenario with a VoIP call that relies on ToS bits to ensure quality within the enterprise. In this instance, both sending and receiving routers will make sure there is enough bandwidth on the WAN link for the voice data to get across without interruption. But when there is a VoIP conversation going on between a phone within your enterprise and a user out on the cloud, the router can only ensure priority for the data going out.

When communicating enterprise-to-cloud, the router at the edge of your network can see all of the traffic leaving your network and has the ability to queue up (slow down) less important traffic and put the ToS-tagged traffic ahead of everybody else leaving your network. The problem arises on the other side of the conversation. The incoming VoIP traffic is hitting your network and may also have a ToS bit set, but your router cannot control the rate at which other random data traffic arrives.

The general rule with using ToS bits to ensure priority is that you must control both the sending and receiving sides of every stream.

With data traffic originating from an uncontrolled source, such as with a Microsoft update, the Microsoft server is going to send data as fast as it can. The ToS mechanisms on your edge router have no way to control the data coming in from the Microsoft server, and thus the incoming data will crowd out the incoming voice call.

Under these circumstances, you’re likely to get customer complaints about the quality of VoIP calls. For example, a customer on a conference call may begin to notice that although others can hear him or her fine, those on the other end of the line break up every so often.

So it would seem that by the time incoming traffic hits your edge router it’s too late to honor priority. Or is it?

When we tell customers we’ve solved this problem with a single device on the link, and that we can provide priority for VoIP and video, we get looks as if we just proved the Earth isn’t flat for the first time.

But here’s how we do it.

First, you must think of QoS as the science of taking away bandwidth from the low-priority user rather than giving special treatment to a high-priority user. We’ve shown that if you create a slow virtual circuit for a non-priority connection, it will slow down naturally and thus return bandwidth to the circuit.

By only slowing down the larger low-priority connections, you can essentially guarantee more bandwidth for everybody else. The trick to providing priority to an incoming stream (voice call or video) is to restrict the flows from the other non-essential streams on the link. It turns out that if you create a slow virtual circuit for these lower-priority streams, the sender will naturally back off. You don’t need to be in control of the router on the sending side.

For example, let’s say Microsoft is sending an update to your enterprise and it’s wiping out all available bandwidth on your inbound link. Your VPN users cannot get in, cannot connect via VoIP, etc. When sitting at your edge, the NetEqualizer will detect the ToS bits on your VPN and VoIP call. It will then see the lack of ToS bits on the Microsoft update. In doing so, it will automatically start queuing the incoming Microsoft data. Ninety-nine out of one hundred times this technique will cause the sending Microsoft server to sense the slower circuit and back off, and your VPN/VoIP call will receive ample bandwidth to continue without interruption.
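
A simplified sketch of the idea is shown below, with hypothetical packet structures and thresholds; it is not the NetEqualizer code, just an illustration of queuing untagged bulk traffic so the remote sender’s congestion control backs off on its own.

```python
# Sketch: delay untagged bulk packets on a congested link so the sending
# server sees a slower circuit and throttles itself. Names are examples.

import time
from collections import deque

QUEUE_DELAY_S = 0.05          # ~50 ms of added latency per queued packet
delayed = deque()             # (release_time, packet) pairs for bulk traffic

def forward(pkt: dict) -> None:
    print(f"forwarding {pkt['length']} bytes (tos=0x{pkt['tos']:02x})")

def classify(pkt: dict) -> str:
    # ToS-tagged voice/VPN traffic passes straight through; on a
    # congested link everything untagged is treated as bulk.
    return "priority" if pkt["tos"] != 0 else "bulk"

def handle(pkt: dict, link_congested: bool) -> None:
    if not link_congested or classify(pkt) == "priority":
        forward(pkt)
    else:
        # Hold the packet briefly; the sending server sees a slower
        # circuit and its congestion control slows the whole flow.
        delayed.append((time.monotonic() + QUEUE_DELAY_S, pkt))

def drain() -> None:
    now = time.monotonic()
    while delayed and delayed[0][0] <= now:
        forward(delayed.popleft()[1])

handle({"tos": 0xB8, "length": 200}, link_congested=True)    # VoIP packet: forwarded now
handle({"tos": 0x00, "length": 1500}, link_congested=True)   # bulk download: queued
time.sleep(QUEUE_DELAY_S)
drain()                                                       # bulk packet released late
```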

For some reason the typical router is not designed to work this way. As a result, it’s at a loss as to how to provide QoS on an incoming link. This is something we’ve been doing for years based on behavior, and in our upcoming release, we’ve improved on our technology to honor ToS bits. Prior to this release, our customers were required to identify priority users by IP address. Going forward, the standard ToS bits (which remain in the IP packet even through the cloud) will be honored, and thus we have a solid, viable solution for providing QoS on an incoming Internet link.

Related article: QoS over the Internet – is it possible?

Related Example: Below is an excerpt from a user who could have benefited from a NetEqualizer. In this comment, taken from an Astaro forum, the user is lamenting the fact that despite setting QoS bits he can’t get his network to give priority to his VoIP traffic:

“Obviously, I can’t get this problem resolved by using QoS functionality of Astaro. Phone system still shows lost packets when there is a significant concurring traffic. Astaro does not shrink the bandwidth of irrelevant traffic to the favor of VoIP definitions, I don’t know where the problem is and obviously nobody can clear this up.

Astaro Support Engineer said “Get a dedicated digital line,” so I ordered one it will be installed shortly.

The only way to survive until the new line is installed was to throttle all local subnets, except for IPOfficeInternal, to ensure the latter will have enough bandwidth at any given time, but this is not a very smart way of doing this.”

QoS is a Matter of Sacrifice


Usually in the first few minutes of talking to a potential customer, one of their requests will be something like “I want to give QoS (Quality of Service) to Video”, or “I want to give Quality of Service to our Blackboard application.”

The point often overlooked by resellers pushing QoS solutions is that providing QoS for one type of traffic always involves taking bandwidth away from something else.

Network hacks understand this, but for those who are not down in the trenches, we sometimes must gently walk through a scenario.

Take the following typical exchange:

Customer: I want to give our customers access to Netflix and have that take priority over P2P.

NetEq Rep: How do you know that you have a p2p problem?

Customer: We caught a guy with Kazaa on his Laptop last year so we know they are out there.

NetEq Rep (after plugging in a test system and doing some analysis): It looks like you have some scattered p2p users, but they are only about 2 percent of your traffic load. Thirty percent of your peak traffic is video. If we give priority to all your video, we will have to sacrifice something: web browsing, chat, e-mail, Skype, and Internet radio. I know this seems like quite a bit, but there is nothing else out there to steal from. You see, in order to give priority to video we must take bandwidth away from something else, and although you have p2p, stopping it will not free up enough bandwidth to make a dent in your video appetite.

Customer (now frustrated by reality): Well, I guess I will just have to tell our clients they can’t watch video all the time. I can’t make web browsing slower to support video; that will just create new problems.

If you have an oversubscribed network, meaning too many people vying for limited Internet resources, when you implement any form of QoS, you will still end up with an oversubscribed network. QoS must rob Peter to pay Paul.

So when is QoS worthwhile?

QoS is a great idea if you understand who you are stealing from.

Here are some facts on using QoS to improve your Internet Connection:

Fact #1

If your QoS mechanism involves modifying packets with special instructions (ToS bits) on how they should be treated, it will only work on links where you control both ends of the circuit and everything in between.

Fact #2

Most Internet congestion is caused by incoming traffic. For data originating at your facility, you can certainly have your local router give priority to it on its way out, but you can’t set QoS bits on traffic coming into your network (which we assume comes from a third party). Regulating outgoing traffic with ToS bits will not have any effect on incoming traffic.

Fact #3

Your public Internet provider will not treat ToS bits with any form of priority (the exception would be a contracted MPLS-type network). Yes, they could, but if they did, everybody would game the system to get an advantage and the bits would not mean much anyway.

Fact #4

The next two facts address our initial question — Is QoS over the Internet possible? The answer is, yes. QoS on an Internet link is possible. We have spent the better part of seven years practicing this art form and it is not rocket science, but it does require a philosophical shift in thinking to get your arms around.

We call it “equalizing,” or behavior-based shaping, and it involves monitoring incoming and outgoing streams on your Internet link. Priority or QoS is nothing more than favoring one stream’s packets over another stream’s packets. You can accomplish priority QoS on incoming streams by queuing (slowing down) one stream over another without relying on ToS bits.

Fact #5

Surprisingly, behavior-based methods such as those used by our NetEqualizer do provide a level of QoS for VoIP on the public Internet. Although you can’t tell the Internet to send your VoIP packets faster, most people don’t realize that the problem with congested VoIP is that their VoIP packets are getting crowded out by large downloads. Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a QoS scheme.

Please remember our initial point “providing QoS for one type of traffic always involves taking bandwidth away from something else,” and take these facts into consideration as you work on QoS for your network.

Using NetEqualizer to Ensure Clean, Clear QoS for VoIP Calls


A Little Bit of History

Many VoIP installations are designed with an initial architecture that assumes inter-office phone calls will reside within the confines of the company LAN. Internal LANs are almost always 100 megabit and consist of multiple paths between end points. The basic corporate LAN design usually provides more than enough bandwidth to route all inter-office VoIP calls without congestion.

As enterprises become more dispersed geographically, care must be taken when extending VoIP calls beyond the main office. Once a VoIP call leaves the confines of your local network and traverses the public Internet link, it will have to compete for space with any data traffic that might also be destined for the Internet. Without careful planning, your enterprise will most likely start dropping VoIP calls during busy traffic times.

The most common way of dealing with priority for VoIP is to set what is called the TOS bit. The TOS bit acts like a little flag inside each Internet packet of the VoIP stream. An Internet router can rearrange the packets destined for the Internet and give priority to the outgoing VoIP packets by looking at the TOS bit. The downside of this method is that it does not help with VoIP calls originating from the outside coming into your network. For example, somebody receiving a VoIP call in the main office from a VPN user working at home may experience some distortion on the incoming VoIP call. This is usually caused when somebody else in the office is doing a large download during the VoIP call. Routers typically cannot set priority on incoming data, hence the inbound download can dominate all the bandwidth, rendering the VoIP call inaudible.

How NetEqualizer Solves VoIP Congestion Issues

The NetEqualizer solves the problem of VoIP traffic competing with regular data traffic by using a simple method. A NetEqualizer provides priority for both incoming and outgoing VoIP traffic. It does not use TOS bits. It is VoIP and network agnostic. Sounds like the old Saturday Night Live commercial where Chevy Chase hawks a floor cleaner that is also an ice cream topping.

Here is how it works…

It turns out that VoIP streams require no more than 100 kbps per call, and usually quite a bit less. Large downloads, on the other hand, will grab the entire Internet trunk if they can get it. The NetEqualizer has been designed to favor streams of less than 100 kbps over larger data streams. When a large download is competing with a VoIP call for precious resources, the NetEqualizer will create some artificial latency on the download stream, causing it to back off and slow down. No need to rely on TOS bits in this scenario; problem solved.

Conceptually, that is all there is to it. Obviously, the NetEqualizer engineering team has refined and tuned this technique over the years. In general, the NetEqualizer Default Rules need very little setup, and a unit can be inline in a matter of minutes.

The scenarios where NetEqualizer is appropriate for ensuring that your VoIP system runs smoothly are:

  1. You are running an Enterprise VoIP service with remote offices that connect to your main PBX over VPN links
  2. You are an ISP and your customers use a VoIP service over limited bandwidth connectivity

Recommended Reading

Other vendor white papers on the subject: Riverbed

Other suggested reading:  http://www.bandwidth.com/wiki/article/QoS_(Quality_of_Service)

New Asymmetric Shaping Option Augments NetEqualizer-Lite


We currently have a new release in beta testing that allows for equalizing on an asymmetric link. As is the case with all of our equalizing products, this release will allow users to more efficiently utilize their bandwidth, thus optimizing network performance. This will be especially ideal for users of our recently released NetEqualizer-Lite.

Many wireless access points have a limit on the total amount of bandwidth they can transmit in both directions, because only one direction can be talking at a time. Unlike wired networks, where a 10-megabit link typically means you can have 10 megabits up and 10 megabits down simultaneously, on a wireless network you can only have 10 megabits total at any one time. So, if you had 7 megabits coming in, you could only have 3 megabits going out. These limits are a hard saturation point.
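
As a rough illustration, here is how the shared budget differs from the wired case; the 10-megabit figure, threshold, and helper names are examples only, not product settings.

```python
# Sketch of a shared airtime budget on an asymmetric wireless link.

LINK_TOTAL_BPS = 10_000_000

def remaining_outbound(inbound_bps: float) -> float:
    """On a shared-airtime link, whatever one direction uses is gone for the other."""
    return max(LINK_TOTAL_BPS - inbound_bps, 0)

def is_congested(up_bps: float, down_bps: float, threshold: float = 0.85) -> bool:
    # Congestion is judged on the combined load, unlike a wired link where
    # each direction would be checked against its own full 10 Mbps.
    return (up_bps + down_bps) > threshold * LINK_TOTAL_BPS

print(remaining_outbound(7_000_000))                         # 3_000_000, as in the example above
print(is_congested(up_bps=2_000_000, down_bps=7_000_000))    # True
```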

In the past, it was necessary to create separate settings for the upstream and downstream directions. With the new NetEqualizer release, you can simply tell the NetEqualizer that you have an asymmetric 10-megabit link, and congestion control will automatically kick in for both streams, alleviating bottlenecks more efficiently and keeping your network running smoothly.

For more information on APconnections’ equalizing technology, click here.

NetEqualizer-Lite Is Now Available!


Last month, we introduced our newest release, a Power-over-Ethernet NetEqualizer. Since then, with your help, we’ve titled the new release the NetEqualizer-Lite and are already getting positive feedback from users. Here’s a little background about what led us to release the NetEqualizer-Lite. Over the years, several customers had expressed interest in placing a NetEqualizer as close as possible to their towers in order to relieve congestion. In many cases, however, this would require both a weatherproof and low-power NetEqualizer unit – two features that were not available up to this point. In the midst of a growing demand for this type of technology, we spent the last few months working to meet this need and thus developed the NetEqualizer-Lite.

Here’s what you can expect from the NetEqualizer-Lite:

  • Power over Ethernet
  • Up to 10 megabits of shaping
  • Up to 200 users
  • Comes complete with all standard NetEqualizer features

And, early feedback on the new release has been positive. Here’s what one user recently posted on DSLReports.com:

We’ve ordered 4 of these and deployed 2 so far. They work exactly like the 1U rackmount NE2000 that we have in our NOC, only the form factor is much smaller (about 6x6x1) and they use POE or a DC power supply. I amp clamped one of the units, and it draws about 7 watts….The Netequalizer has resulted in dramatically improved service to our customers. Most of the time, our customers are seeing their full bandwidth. The only time they don’t see it now is when they’re downloading big files. And, when they don’t see full performance, its only for the brief period that the AP is approaching saturation. The available bandwidth is re-evaulated every 2 seconds, so the throttling periods are often brief. Bottom line to this is that we can deliver significantly more data through the same AP. The customers hitting web pages, checking e-mail, etc. virtually always see full bandwidth, and the hogs don’t impact these customers. Even the hogs see better performance (although that wasn’t one of my priorities). (DSLReports.com)

Pricing for the new model will be $1,200 for existing NetEqualizer users and $1,550 for non-customers purchasing their first unit. However, the price for subsequent units will be $1,200 for users and nonusers alike.

For more information about the new release, contact us at admin@apconnections.net or 1-800-918-2763.

APconnections Releases NetEqualizer for Small Business and WISP Market


LAFAYETTE, Colo., April 13 /PRNewswire/ -- APconnections (http://www.netequalizer.com), a leading supplier of plug-and-play bandwidth shaping products, today announced the release of its newest NetEqualizer model, developed specifically with WISPs and small business users in mind.

This newest NetEqualizer release easily handles up to 10 megabits of traffic and up to 100 users, allowing room for expansion for growing demand. Furthermore, in addition to offering all standard NetEqualizer features, this smaller model supports Power over Ethernet, providing administrators greater flexibility in placing the unit within their network.

The model was developed to meet a growing demand both for an affordable traffic-shaping device to help small businesses run VoIP concurrently with data traffic over their Internet link, and for a shaping unit with PoE for the WISP market.

In a large wireless network, congestion often occurs at tower locations. However, with a low-cost PoE version of the NetEqualizer, wireless providers can now afford to have advanced bandwidth control at or near their access distribution points.

“About half of wireless network slowness comes from p2p (Bit Torrent) and video users overloading the access points,” said Joe D’Esopo, vice president of business development at APconnections. “We have had great success with our NE2000 series, but the price point of $2,500 was a bit too high to duplicate all over the network.”

For a small- or medium-sized office with a hosted VoIP PBX solution, the NetEqualizer is one of the few products on the market that can provide QoS for VoIP over an Internet link. And now, with volume pricing approaching $1,000, the NetEqualizer will help revolutionize the way offices use their Internet connection.

Pricing for the new model will be $1,200 for existing NetEqualizer users and $1,499 for non-customers purchasing their first unit. However, the price for subsequent units will be $1,200 for users and nonusers alike.

The NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology gives priority to latency-sensitive applications, such as VoIP and email. It does it all dynamically and automatically, improving on other available bandwidth shaping technology. It controls network flow for the best WAN optimization.

APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado.

Full Article

Hotel Property Managers Should Consider Generic Bandwidth Control Solutions


Editor’s Note: The following Hotelsmag.com article caught my attention this morning. The hotel industry is now seriously starting to understand that it needs some form of bandwidth control. However, many hotel solutions for bandwidth control are custom marketed, which perhaps puts them at an economy-of-scale disadvantage. The NetEqualizer bandwidth controller, as well as our competitors, cross many market verticals, offering hotels an effective solution without the niche-market costs. For example, in addition to the numerous other industries in which the NetEqualizer is being used, some of our hotel customers include: The Holiday Inn Capital Hill, a prominent Washington DC hotel; The Portola Plaza Hotel and Conference Center in Monterey, California; and the Hotel St. Regis in New York City.

For more information about the NetEqualizer, or to check out our live demo, visit www.netequalizer.com.

Heavy Users Tax Hotel Systems: Hoteliers and IT Staff Must Adapt to a New Reality of Extreme Bandwidth Demands

By Stephanie Overby, Special to Hotels — Hotels, 3/1/2009

The tweens taking up the seventh floor are instant-messaging while listening to Internet radio and downloading a pirated version of “Twilight” to watch later. The 200-person meeting in the ballroom has a full interactive multimedia presentation going for the next hour. And you do not want to know what the businessman in room 1208 is streaming on BitTorrent, but it is probably not a productivity booster.

To keep reading, click here.

NetEqualizer Bandwidth Control Tech Seminar Video Highlights


Tech Seminar, Eastern Michigan University, January 27, 2009

This 10-minute clip was professionally produced on January 27, 2009. It gives a nice, quick overview of how the NetEqualizer does bandwidth control while providing priority for VoIP and video.

The video specifically covers:

1) Basic traffic shaping technology and NetEqualizer’s behavior-based methods

2) Internet congestion and gridlock avoidance on a network

3) How peer-to-peer file sharing operates

4) How to counter the effects of peer-to-peer file sharing

5) Providing QoS and priority for voice and video on a network

6) A short comparison by a user (a university admin) who prefers NetEqualizer to layer-7 deep packet inspection techniques

Tips on Evaluating Routers, Bandwidth Shapers, Wireless Access Points and Other Networking Equipment


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all Deep Packet Inspection technology from their NetEqualizer product over two years ago.

As many IT managers already know, it is very hard to find unbiased information regarding networking equipment. Publications and analysts always seem to have some bias or motivation, as you never know who pays their fees. Even your peers who swear by a new technology have a vested interest in the commercial success of their chosen technology. And most IT managers are not going to second-guess and critique a technology decision where big money was spent, as long as it provides some value, even if it’s not exactly what they’d hoped for.

Obviously you should continue to use analysts and peers as sources of advice and information, but there are also other ways to find unbiased data prior to making a technology decision.

Here are some ideas that have worked over the years both for myself as a buyer and for our customers:

1) When evaluating technology, request to talk to the engineering or test team at the company you are buying from. This may not be possible, but is worth a try. Companies (sales teams) hate it when you talk directly to their engineers. Why? Because they are more likely to tell the truth about every little problem.

2) If you can’t find an engineer that currently works at the company, then find one that formerly worked there. This is easier than you might think. Techies with loads of experience and insight spend time in tech forums, and a simple post asking for inside knowledge may yield some good sources.

3) This may sound silly, but try Googling (productname)sucks.com. You’ll be surprised by what you might find. Many of the companies that are too large for you to reach their engineering staff will have ad-hoc consumer complaint sites. However, keep in mind that all companies and products will have unhappy customers, so don’t discount a large company in favor of a smaller one just because you find complaints about the market leader. The smaller company just may not yet have the critical mass to draw organized negative attention. And, no matter how good a product is, there will likely always be an unhappy customer.

4) Nothing beats a live trial of a product. But don’t limit your decision to the vendors slobbering to give you free trials. Giving away free trials is a marketing strategy to move a product and ultimately adds to the final cost in one way or another. Smaller vendors with great products may not be offering free trials, so you may miss out on some valuable technology if you only look for the complimentary test runs. Plus, all vendors should have a return policy if they are confident in their product, so even without a free trial, it shouldn’t be all or nothing.

While there is no guarantee that these tips will always lead to the perfect product, they have certainly bettered our hit-to-miss ratio over the past several years. If you’re asking the right people and looking in the right places, a little research can go a long way.

Related Articles

Choosing an IM security Product

A call for revolutions against beta culture

Can your ISP support Video for all?


By Art Reisman, CTO, http://www.netequalizer.com


As the Internet continues to grow, with higher home-user speeds available from Tier 1 providers, video sites such as YouTube, Netflix, and others are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), these videos don’t face the veil of copyright scrutiny cast upon p2p, which caused most p2p users to back off. They are here to stay, and any ISP currently offering high-speed Internet will need to accommodate the subsequent rising demand.

How should a Tier 2 or Tier 3 provider size their overall trunk to ensure smooth video at all times for all users?

From measurements done in our NetEqualizer laboratories, a normal-quality video stream requires around 350 kbps of bandwidth sustained over its lifespan to ensure there are no breaks or interruptions. Newer high-definition videos may run at even higher speeds.


A typical rural wireless WISP will have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio point where a small business can turn a profit. Given this contention ratio, if 30 customers simultaneously watch YouTube, the link will be exhausted and all 300 customers will experience protracted periods of poor service.

Even though it is theoretically possible to support 30 simultaneous video streams on a 10-megabit link, it would only be possible if the remaining 270 subscribers were idle. In reality the trunk will become saturated with perhaps 10 to 15 active video streams, as obviously the remaining 270 users are not idle. Given this realistic scenario, is it reasonable for an ISP with 10 megabits and 300 subscribers to tout that they support video?

As of late 2007, about 10 percent of Internet traffic was attributed to video. It is safe to assume that number is higher now (January 2009). Using the 2007 number, 10 percent of 300 subscribers would yield on average 30 video streams, but that is not a fair number, because the 10 percent of people using video would only apply to the subscribers who are actively online, not all 300. To be fair, we’ll assume 150 of 300 subscribers are online during peak times. The calculation now yields an estimated 15 users doing video at one time, which is right at our upper limit of smooth service for a 10-megabit link; any more and something has to give.
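
Worked as a small script with the article’s assumptions (350 kbps per stream, 300 subscribers, half online at peak, roughly 10 percent of those watching video); the function name is illustrative.

```python
# Capacity estimate from the paragraph above.

VIDEO_STREAM_BPS = 350_000   # sustained rate of a normal-quality stream
LINK_BPS = 10_000_000        # the WISP's trunk

def concurrent_video_streams(subscribers: int, online_fraction: float, video_fraction: float) -> int:
    return round(subscribers * online_fraction * video_fraction)

streams = concurrent_video_streams(subscribers=300, online_fraction=0.5, video_fraction=0.10)
video_load = streams * VIDEO_STREAM_BPS

print(streams)                # 15 simultaneous video streams at peak
print(video_load)             # 5_250_000 bps
print(video_load / LINK_BPS)  # ~0.53 of the trunk before counting any other traffic
```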

The moral of this story so far is that you should be cautious before promoting unlimited video support with contention ratios of 30 subscribers per megabit. The good news is that most rural providers are not competing in metro areas, hence customers will have to make do with what they have. In areas with more intense competition for customers, where video support might make a difference, our recommendation is a ratio closer to 20 subscribers per megabit, and you still may have peak outages.

One trick you can use to support video with limited Internet resources.

We have previously been on record as not being a supporter of caching to increase Internet speed; well, it is time to backtrack on that. We are now seeing results showing that caching can be a big boost in speeding up popular YouTube videos. Caching and video tend to work well together, as consumers tend to flock to a small subset of the popular videos. The downside is that your local caching server will only be able to archive a subset of the content on the master YouTube servers, but this should be enough to give the appearance of pretty good video.

In the end there is no substitute for having a big fat pipe with enough room to run video, we’ll just have to wait and see if the market can support this expense.

QoS on the Internet — Can Class of Service Be Guaranteed?


Most quality of service (QoS) schemes today are implemented to give priority to voice or video data running in common over a data circuit. The trick used to ensure that certain types of data receive priority over others makes use of a type of service (TOS) bit. Simply put, this is just a special flag inside of an Internet packet that can be a 1 or a 0, with a 1 implying priority while a 0 implies normal treatment.

In order for the TOS bit scheme to work correctly, all routers along a path need to be aware of it. In a self-contained corporate network, an organization usually controls all routers along the data path and makes sure that this recognition occurs. For example, a multinational organization with a VoIP system most likely purchases dedicated links through a global provider like AT&T. In this scenario, the company can configure all of its routers to give priority to QoS-tagged traffic, and this will prevent something like a print server file from degrading an interoffice VoIP call.

However, this can be a very expensive process and may not be available to smaller businesses and organizations that do not have their own dedicated links. In any place where many customers share an Internet link that is not the nailed-up point-to-point circuit you’d find within a corporate network, there is contention for resources. In these cases, guaranteeing class of service is more difficult. So, this begs the question, “How can you set a QoS bit and prioritize traffic on such a link?”

In general, the answer is that you can’t.

The reason is quite simple. Your provider to the Internet cloud — Time Warner, Comcast, Qwest, etc. — most likely does not look at or support TOS bits. You can set them if you want, but they will probably be ignored. There are exceptions to this rule, however, but your voice traffic traveling over the Internet cloud will in all likelihood get the same treatment as all other traffic.

The good news is that most providers have plenty of bandwidth on their backbones, so your third-party voice service such as Skype will be fine. I personally use a PBX in the sky called Aptela from my home office. It works fine until my son starts watching YouTube videos, and then all of a sudden my calls get choppy.

The bottleneck for this type of outage is not your provider’s backbone, but rather the limited link coming into your office or your home. The easiest way to ensure that your Skype call does not crash is to self-regulate the use of other bandwidth-intensive Internet services.

Considering all of this, NetEqualizer customers often ask, “How does the NetEqualizer/AirEqualizer do priority QoS?”

It is a unique technology, but the answer is also very simple. First, you need to clear your head about the way QoS is typically done in the Cisco™ model, using bit tagging and such.

In its default mode, the NetEqualizer/AirEqualizer treats all of your standard traffic as one big pool. When your network is busy, it constantly readjusts bandwidth allocation for users automatically. It does this by temporarily limiting the amount of bandwidth a large download (such as that often found with p2p file sharing) might be using in order to ensure greater response times for e-mail, chat, Web browsing, VoIP, and other everyday online activities.

So, essentially, the NetEqualizer/AirEqualizer is already providing one level of QoS in the default setup. However, users have the option of giving certain applications priority over others.

For example, when you tell the NetEqualizer/AirEqualizer to give specific priority to your video server, it automatically squeezes all the other users into a smaller pool and leaves the video server traffic alone. In essence, this reserves bandwidth for the video server at a higher priority than all of the generic users. When the video stream is not active, the generic data users are allowed to utilize more bandwidth, including that which had been reserved for video. Once the settings are in place, all of this is done automatically and in real time. The same could be done with VoIP and other priority applications.
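
Conceptually, the pool arithmetic looks something like the following sketch; the numbers and function name are illustrative, not actual NetEqualizer settings.

```python
# Sketch of the pool idea: while the prioritized video server is busy,
# generic users share a smaller pool; when it is idle, the reserved
# bandwidth returns to the general pool.

LINK_BPS = 10_000_000           # example Internet link
VIDEO_RESERVED_BPS = 3_000_000  # bandwidth set aside for the prioritized video server

def generic_pool_bps(video_active: bool) -> int:
    return LINK_BPS - VIDEO_RESERVED_BPS if video_active else LINK_BPS

print(generic_pool_bps(video_active=True))   # 7_000_000
print(generic_pool_bps(video_active=False))  # 10_000_000
```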

In most cases, the only users that even realize this process is taking place are those who are running the non-prioritized applications that have typically slowed your network. For everyone else, it’s business as usual. So, as mentioned, QoS over the NetEqualizer/AirEqualizer is ultimately a very simple process, but also very effective. And, it’s all done without controversial bit tagging and deep packet inspection!
