You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks


By Art Reisman
CTO – http://www.netequalizer.com

About 10 years ago, we had an idea for providing QoS across an Internet link. It was simple and elegant, and it worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (DiffServ) techniques over any public or semi-public Internet link, but with our techniques we have shown that the supposedly impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that setting TOS bits is only effective when you control every device in the path between the two endpoints of a conversation, and that is not possible on most Internet links (think cloud computing and wireless mesh networks). All it takes is one router in the path of a VoIP conversation to ignore the TOS bit, and the priority marking becomes meaningless. Thus, TOS bits for priority are really only practical inside a corporate LAN/WAN topology.

Look at the root cause of poor quality services and you will find alternative solutions.

Most people don’t realize that the problem with congested VoIP, on any link, is that their VoIP packets are being crowded out by larger downloads and things like recreational video (the same is true for any congested interactive cloud access). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper can favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of a local IP talking to a remote Internet IP. When we see a large stream dominating the link, we step back and ask two questions: is the link congested, and is that download crowding out time-sensitive transactions such as VoIP? If the answer to both is yes, we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple, and it may not seem plausible, but it works. It works very well, it works with just one device in the link irrespective of any other complex network engineering, it works with minimal setup, and it works over MPLS links. I could go on and on; the only reason you have not heard of it is perhaps that it goes against the grain of what most vendors are selling – large orders for expensive high-end routers using TOS bits.
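To give a feel for how simple the decision really is, here is a rough Python sketch of the loop described above. The thresholds, stream records, and penalty callback are illustrative assumptions, not the actual NetEqualizer code.

```python
# Hypothetical sketch of behavior-based shaping ("equalizing").
# Assumption: we can sample per-stream byte counts each second and apply
# a rate cap (the "penalty") to any individual stream.

LINK_CAPACITY_BPS = 10_000_000   # assumed 10 Mbps link
CONGESTION_RATIO  = 0.85         # link counts as congested above 85% utilization
HOG_RATIO         = 0.25         # a stream using >25% of the link is dominating
PENALTY_FACTOR    = 0.5          # cut a dominating stream's allowance in half

def equalize(streams, apply_cap):
    """streams: dict {(local_ip, remote_ip): bytes_per_second}
       apply_cap: callback that enforces a rate cap on one stream."""
    total_bps = sum(streams.values()) * 8
    if total_bps < LINK_CAPACITY_BPS * CONGESTION_RATIO:
        return  # link is not congested; leave everything alone
    for key, bytes_per_sec in streams.items():
        stream_bps = bytes_per_sec * 8
        if stream_bps > LINK_CAPACITY_BPS * HOG_RATIO:
            # Proactively take bandwidth away from the dominating stream so
            # VoIP and other small, time-sensitive streams get through.
            apply_cap(key, int(stream_bps * PENALTY_FACTOR))

# One large download and one VoIP call sharing the link (made-up addresses):
snapshot = {("10.0.0.5", "203.0.113.9"): 1_200_000,   # big download, ~9.6 Mbps
            ("10.0.0.7", "198.51.100.2"): 10_000}     # VoIP call, ~80 Kbps
equalize(snapshot, lambda key, cap: print(f"cap {key} at {cap} bps"))
# Only the download gets capped; the VoIP stream is never touched.
```

The point of the sketch is simply that the shaper never needs to know what the traffic is, only how it behaves when the link is under pressure.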

Related article: QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where the sending or receiving IP addresses are obfuscated.

Coming this winter, we plan to improve upon our QoS techniques so we can drill down inside mesh and cloud networks a bit better.

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect has been that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is because we currently base our decisions on a pair of IPs talking to each other; we do not consider the IP port numbers. In a cloud or mesh network, however, services are often trunked across a tunnel using the same IPs. As these services get tunneled across a trunk, the data streams are bundled together over one common pair of IPs and then broken out by IP port so they can be routed to their final destinations. For example, in some cloud computing environments there is no way to differentiate the video stream coming from the cloud inside the tunnel from a smaller data access session; they can both be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give it priority) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel, sharing IPs with other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are usually sorted out onto different ports at the ends of the tunnel. With our new release coming this winter, we will start to treat a stream as an IP plus a port number, allowing much greater resolution for QoS inside the cloud and inside your mesh network (a rough sketch follows below). Stay tuned!
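Here is a hypothetical before-and-after sketch, in Python, of how a stream table might be keyed once ports are taken into account; the field names, addresses, and byte counts are purely illustrative.

```python
from collections import namedtuple

# Old view: a "stream" is just a pair of IPs, so every service tunneled
# across the same NAT'd or trunked address pair collapses into one entry.
OldStreamKey = namedtuple("OldStreamKey", ["local_ip", "remote_ip"])

# New view: include the port numbers, so a video session and a small data
# session sharing the same tunnel IPs become separate streams with their
# own bandwidth accounting.
NewStreamKey = namedtuple("NewStreamKey",
                          ["local_ip", "local_port", "remote_ip", "remote_port"])

old_table = {OldStreamKey("10.1.1.1", "192.0.2.10"): 1_200_000}  # everything lumped together

new_table = {
    NewStreamKey("10.1.1.1", 40001, "192.0.2.10", 443):  1_150_000,  # video inside the tunnel
    NewStreamKey("10.1.1.1", 40002, "192.0.2.10", 5060):     12_000,  # VoIP session, same IPs
}
```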

NetEqualizer News: October 2011


NetEqualizer News

October 2011

Greetings!

Enjoy another issue of NetEqualizer News! This month, we present a video demonstration detailing how active connections behave on a live network. The video utilizes a real-time reporting tool that you can leverage with your own NetEqualizer data! We also preview some new features coming this fall (IPv6 Visibility and ToS Priority), announce our FlyAway Contest winner, and discuss P2P blocking! As always, feel free to pass this along to others who might be interested in NetEqualizer News.


In This Issue:

:: Demo: How Active Connections Behave in Real Time

:: And The FlyAway Contest Winner Is…

:: Update on New Features Coming This Fall

:: Best Of The Blog

Demo: How Active Connections Behave in Real Time

We often get asked about active connections and how they are handled by the NetEqualizer. The answer to this question is fundamental to how equalizing and behavior-based bandwidth shaping works.

In early August, we posted an article on our blog that discussed how you could generate real-time reports using Excel and your NetEqualizer data. The video linked to below references that project, and uses it to demonstrate how active connections behave in real-time on a live network.

There are some interesting observations you can take away from this video, even if you don’t implement the reporting tool on your own device. You will come away from it with a better understanding of how users are connected through your network, and what types of connections are occurring every second.

Click the image below to view the video. Note: the Excel-based real-time reporting described in that article has been replaced by Dynamic Real-Time Reporting in software update 7.1.

Some key points from the video are:

  • For every user, there are many connections occurring that most people are probably not aware of. The OS might be checking for updates, antivirus software could be checking for new signatures, an email program might be reloading its inbox, and so on.
  • Most connections are short-lived and very small: 90% of connections utilize only 10 to 1,000 bytes/second.
  • Flows change dynamically. Even for a single user, 2 to 20 connections (or more) can exist at any moment in time.
  • Contention can occur quickly. Because of the variability in connections (especially with a broad user base), network contention can occur quickly. If large downloads are part of the active connections, this contention happens even faster.
  • The NetEqualizer instantly responds to this problem by taking a Robin Hood approach to the hogging connections. It shaves off bandwidth from the large connections and gives that much-needed resource to the thousands of other connections that require it.

View the blog article referenced in the video above here:
Dynamic Reporting With The NetEqualizer.
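If you would like a quick taste of this kind of reporting without setting up the full Excel project, the rough Python sketch below buckets a snapshot of active connections by size. The CSV format (local IP, remote IP, bytes per second) and file name are assumptions for illustration only, not the actual NetEqualizer export.

```python
import csv
from collections import Counter

def summarize(path):
    """Bucket active connections by size to see how many are tiny vs. large."""
    buckets = Counter()
    with open(path, newline="") as f:
        for local_ip, remote_ip, bytes_per_sec in csv.reader(f):
            rate = int(bytes_per_sec)
            if rate < 1_000:
                buckets["small (<1 KB/s)"] += 1
            elif rate < 100_000:
                buckets["medium (1-100 KB/s)"] += 1
            else:
                buckets["large (>100 KB/s)"] += 1
    for bucket, count in buckets.most_common():
        print(f"{bucket}: {count} connections")

# summarize("connections_snapshot.csv")  # hypothetical export of active connections
```

On a busy link, a summary like this should echo the pattern in the video: a handful of large flows and a long tail of tiny ones.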

And The FlyAway Contest Winner Is…

Every few months, we have a drawing to give away two roundtrip domestic airline tickets from Frontier Airlines to one lucky person who’s recently tried out our online NetEqualizer demo.
The time has come to announce this round’s winner.
And the winner is…Mohammed O. Ibrahim of Zanzibar Connections.  Congratulations, Mohammed!
Please contact us within 30 days (by November 10th, 2011) at: email
admin -or- 303-997-1300 to claim your prize.

Update on New Features Coming This Fall!

We are very excited about the new features coming in our Fall 2011 Software Update!

IPv6 Visibility

As we await the need to handle significant amounts of IPv6 traffic, NetEqualizer is already implementing solutions to meet the shift head-on. The Fall 2011 Software Update will include features that provide enhanced visibility into IPv6 traffic.

This feature will help our customers that are experimenting with IPv6/IPv4 dual stacks, as they start to see IPv6 Internet traffic on their networks.

The enhanced IPv6 capabilities that we are implementing in the NetEqualizer this Fall include:

  • Providing you with visibility into current IPv6 connections so that you can determine if you need to start shaping IPv6 traffic.
  • Logging the IPv6 traffic so that you can obtain a historical snapshot to help in your IPv6 planning efforts (see the sketch after this list).
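As a rough illustration of what that logging might look like, the conceptual Python sketch below filters IPv6 endpoints out of a list of connections and appends timestamped entries to a history file. It is not the update's actual reporting code; the connection format and addresses are made up.

```python
import ipaddress
from datetime import datetime, timezone

def log_ipv6_connections(connections, logfile="ipv6_history.log"):
    """connections: iterable of (local_addr, remote_addr, bytes_per_second)."""
    with open(logfile, "a") as log:
        for local, remote, rate in connections:
            try:
                if ipaddress.ip_address(remote).version != 6:
                    continue  # only IPv6 traffic goes into the historical snapshot
            except ValueError:
                continue      # skip anything that is not a valid IP address
            stamp = datetime.now(timezone.utc).isoformat()
            log.write(f"{stamp} {local} -> {remote} {rate} B/s\n")

# Example: a dual-stack client talking to an IPv6 server, plus an IPv4 flow
# that gets ignored (documentation addresses only).
log_ipv6_connections([("2001:db8::10", "2001:db8:1::1", 4200),
                      ("10.0.0.5", "93.184.216.34", 800)])
```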

ToS Priority

We are now seeing an influx of customers looking to provide priority bandwidth to VoIP connections on their links without all the hassle of complex router rules. NetEqualizer’s new Type of Service (ToS) Priority feature is the solution. Included in the Fall 2011 Software Update, the ToS Priority feature will automatically prioritize connections that are using services like VoIP, as well as a host of other types of important connections. This will provide improved quality of service (QoS) on your network.
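For readers curious about what a ToS marking actually is on the wire, here is a small Python sketch that pulls the ToS/DSCP byte out of an IPv4 header and tests for the Expedited Forwarding value that VoIP equipment commonly sets. It illustrates the header field only; it is not NetEqualizer's internal classifier.

```python
import struct

EF_DSCP = 46  # Expedited Forwarding, the DSCP value VoIP gear commonly uses

def is_voip_marked(ipv4_header: bytes) -> bool:
    """Return True if the packet's DSCP field is set to Expedited Forwarding."""
    # Byte 0: version/IHL. Byte 1: ToS (DSCP in the top 6 bits, ECN in the low 2).
    version_ihl, tos = struct.unpack("!BB", ipv4_header[:2])
    dscp = tos >> 2
    return dscp == EF_DSCP

# A made-up header prefix: version 4, IHL 5, ToS byte 0xB8 (DSCP 46 shifted left by 2)
print(is_voip_marked(bytes([0x45, 0xB8])))   # True
```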

Larger SSD Drives

We will now be shipping larger SSD drives to customers waiting to try our NetEqualizer Caching Option (NCO).

As always, the Fall 2011 Software Update will be available at no charge to customers with valid NetEqualizer Software Subscriptions (NSS).

For more information on the NetEqualizer or the upcoming release, visit our blog or contact us at: email sales -or- toll-free U.S. (800-918-2763), worldwide (303) 997-1300 x103.

Best of the Blog

How Effective is P2P Blocking?
by Art Reisman – CTO – NetEqualizer

This past week, a discussion about peer-to-peer (P2P) blocking tools came up in a user group that I follow. In the course of the discussion, different IT administrators chimed in, citing their favorite tools for blocking P2P traffic.

At some point in the discussion, somebody posed the question, “How do you know your peer-to-peer tool is being effective?” For the next several hours the room went eerily silent.

The reason why this question was so intriguing to me is that for years I collaborated with various developers on creating an open-source P2P blocking tool using layer 7 technology (the Application Layer of the OSI Model). During this time period, we released several iterations of our technology as freeware. Our testing and trials showed some successes, but we also learned how fragile the technology was and we were reluctant to push it out commercially.

To keep reading, click here.

Photo Of The Month

NetEqualizer CF Card

New Design!

As of August 10th, 2011, our Compact Flash Cards are being shipped with a new label design and card case!


NetEqualizer Provides Unique Low-Cost Way to Send Priority Traffic over the Internet


Quality of service, or QoS as it’s commonly known, is one of those overused buzzwords in the networking industry. In general, it refers to the overall quality of online activities such as video or VoIP calls, which, for example, might be judged by call clarity. For providers of Internet services, promises of high QoS are a selling point to consumers. And, of course, there are plenty of third-party products that claim to make quality of service that much better.

A year ago on our blog, we broke down the costs and benefits of certain QoS methods in our article QoS Is a Matter of Sacrifice. Since then, and in part to address some of the drawbacks and shortcomings we discussed, we’ve developed a new NetEqualizer release offering a unique and novel way to provide QoS over your Internet link using the type of service (ToS) bit. In the article that follows, we’ll show how the NetEqualizer is the only optimization device that can provide QoS in both directions of a voice or video call over an Internet link.

This is worth repeating: The NetEqualizer is the only device that can provide QoS in both directions for a voice or video call on an open Internet link. Traditional router-based solutions can only provide QoS in both directions of a call when both ends of a link are controlled within the enterprise. As a result, QoS is often reduced and limited. With the NetEqualizer, this limitation can now be largely overcome.

First, let’s step back and discuss why typical routers using ToS bits cannot ensure QoS for an incoming stream over the Internet. Consider a typical scenario with a VoIP call that relies on ToS bits to ensure quality within the enterprise. In this instance, both the sending and receiving routers will make sure there is enough bandwidth on the WAN link for the voice data to get across without interruption. But when there is a VoIP conversation going on between a phone within your enterprise and a user out on the cloud, the router can only ensure priority for the data going out.

When communicating enterprise-to-cloud, the router at the edge of your network can see all of the traffic leaving your network and has the ability to queue up (slow down) less important traffic and put the ToS-tagged traffic ahead of everybody else leaving your network. The problem arises on the other side of the conversation. The incoming VoIP traffic is hitting your network and may also have a ToS bit set, but your router cannot control the rate at which other random data traffic arrives.

The general rule with using ToS bits to ensure priority is that you must control both the sending and receiving sides of every stream.

With data traffic originating from an uncontrolled source, such as with a Microsoft update, the Microsoft server is going to send data as fast as it can. The ToS mechanisms on your edge router have no way to control the data coming in from the Microsoft server, and thus the incoming data will crowd out the incoming voice call.

Under these circumstances, you’re likely to get customer complaints about the quality of VoIP calls. For example, a customer on a conference call may begin to notice that although others can hear him or her fine, those on the other end of the line break up every so often.

So it would seem that by the time incoming traffic hits your edge router it’s too late to honor priority. Or is it?

When we tell customers we’ve solved this problem with a single device on the link, and that we can provide priority for VoIP and video, we get looks as if we just proved the Earth isn’t flat for the first time.

But here’s how we do it.

First, you must think of QoS as the science of taking away bandwidth from the low-priority user rather than giving special treatment to a high-priority user. We’ve shown that if you create a slow virtual circuit for a non-priority connection, it will slow down naturally and thus return bandwidth to the circuit.

By only slowing down the larger low-priority connections, you can essentially guarantee more bandwidth for everybody else. The trick to providing priority to an incoming stream (voice call or video) is to restrict the flows of the other, non-essential streams on the link. It turns out that if you create a slow virtual circuit for these lower-priority streams, the sender will naturally back off. You don’t need to be in control of the router on the sending side.

For example, let’s say Microsoft is sending an update to your enterprise and it’s wiping out all available bandwidth on your inbound link. Your VPN users cannot get in, cannot connect via VoIP, etc. When sitting at your edge, the NetEqualizer will detect the ToS bits on your VPN and VoIP call. It will then see the lack of ToS bits on the Microsoft update. In doing so, it will automatically start queuing the incoming Microsoft data. Ninety-nine out of one hundred times this technique will cause the sending Microsoft server to sense the slower circuit and back off, and your VPN/VoIP call will receive ample bandwidth to continue without interruption.
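To make the "slow virtual circuit" idea concrete, here is a hypothetical token-bucket sketch in Python. Packets from an unmarked incoming stream are briefly held whenever the bucket runs dry; a well-behaved TCP sender interprets the slower path as congestion and throttles itself. The rates and interfaces are illustrative assumptions, not our actual implementation.

```python
import time

class SlowVirtualCircuit:
    """Delay an unmarked incoming stream down to a target rate; TCP's own
    congestion control then makes the far-end sender back off."""

    def __init__(self, target_bps):
        self.target_bps = target_bps
        self.tokens = 0.0               # bytes we are currently allowed to forward
        self.last = time.monotonic()

    def forward(self, packet_len):
        now = time.monotonic()
        self.tokens += (now - self.last) * self.target_bps / 8.0
        self.tokens = min(self.tokens, self.target_bps / 8.0)   # cap burst at ~1 second
        self.last = now
        if packet_len > self.tokens:
            # Not enough budget: hold the packet until the deficit is paid off.
            time.sleep((packet_len - self.tokens) * 8.0 / self.target_bps)
            self.last = time.monotonic()
            self.tokens = 0.0
        else:
            self.tokens -= packet_len

# Pin the (hypothetical) update stream to roughly 1 Mbps while ToS-marked
# VoIP and VPN streams are left completely untouched.
update_circuit = SlowVirtualCircuit(target_bps=1_000_000)
```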

For some reason, the typical router is not designed to work this way. As a result, it is at a loss as to how to provide QoS on an incoming link. This is something we’ve been doing for years based on behavior, and in our upcoming release we’ve improved our technology to honor ToS bits. Prior to this release, our customers were required to identify priority users by IP address. Going forward, the standard ToS bits (which remain in the IP packet even through the cloud) will be honored, giving us a very solid, viable solution for providing QoS on an incoming Internet link.

Related article: QoS over the Internet – is it possible?

Related example: Below is an excerpt from a user who could have benefited from a NetEqualizer. In the comment below, taken from an Astaro forum, the user laments that despite setting QoS bits he cannot get his network to give priority to his VoIP traffic:

“Obviously, I can’t get this problem resolved by using QoS functionality of Astaro. Phone system still shows lost packets when there is a significant concurring traffic. Astaro does not shrink the bandwidth of irrelevant traffic to the favor of VoIP definitions, I don’t know where the problem is and obviously nobody can clear this up.

Astaro Support Engineer said “Get a dedicated digital line,” so I ordered one it will be installed shortly.

The only way to survive until the new line is installed was to throttle all local subnets, except for IPOfficeInternal, to ensure the latter will have enough bandwidth at any given time, but this is not a very smart way of doing this.”

QoS on the Internet — Can Class of Service Be Guaranteed?


Most quality of service (QoS) schemes today are implemented to give priority to voice or video data sharing a common data circuit with ordinary traffic. The trick used to ensure that certain types of data receive priority over others makes use of a type of service (TOS) bit. Simply put, this is just a special flag inside an Internet packet that can be a 1 or a 0, with a 1 implying priority and a 0 implying normal treatment.
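For reference, setting the flag is trivial from an application's point of view. The sketch below, assuming a Linux host where Python's socket module exposes the IP_TOS option, marks outgoing UDP datagrams with the Expedited Forwarding value commonly used for voice (the addresses are made up):

```python
import socket

# DSCP 46 (Expedited Forwarding) shifted into the upper 6 bits of the ToS byte;
# this is the marking VoIP stacks typically use to request priority.
TOS_EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Every datagram sent from this socket now carries the priority marking;
# whether routers along the path honor it is another question entirely.
sock.sendto(b"rtp payload", ("198.51.100.20", 5004))
```

Setting the bit is the easy part; as the rest of this article explains, getting every router in the path to respect it is where the scheme falls down.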

In order for the TOS bit scheme to work correctly, all routers along a path need to be aware of it. In a self-contained corporate network, an organization usually controls all routers along the data path and makes sure this recognition occurs. For example, a multinational organization with a VoIP system most likely purchases dedicated links through a global provider like AT&T. In this scenario, the company can configure all of its routers to give priority to QoS-tagged traffic, and this will prevent something like a print server file from degrading an interoffice VoIP call.

However, this can be a very expensive proposition and may not be an option for smaller businesses and organizations that do not have their own dedicated links. Anywhere many customers share an Internet link that is not the nailed-up point-to-point circuit you’d find within a corporate network, there is contention for resources. In these cases, guaranteeing class of service is more difficult. So, this begs the question, “How can you set a QoS bit and prioritize traffic on such a link?”

In general, the answer is that you can’t.

The reason is quite simple. Your provider to the Internet cloud (Time Warner, Comcast, Qwest, etc.) most likely does not look at or support TOS bits. You can set them if you want, but they will probably be ignored. There are exceptions to this rule, but your voice traffic traveling over the Internet cloud will in all likelihood get the same treatment as all other traffic.

The good news is that most providers have plenty of bandwidth on their backbones, so a third-party voice service such as Skype will generally be fine. I personally use a PBX in the sky called Aptela from my home office. It works fine until my son starts watching YouTube videos, and then all of a sudden my calls get choppy.

The bottleneck in this type of outage is not your provider’s backbone, but rather the limited link coming into your office or your home. The easiest way to ensure that your Skype call does not crash is to self-regulate the use of other bandwidth-intensive Internet services.

Considering all of this, NetEqualizer customers often ask, “How does the NetEqualizer/AirEqualizer do priority QoS?”

It is a unique technology, but the answer is also very simple. First, you need to clear your head of the way QoS is typically done in the Cisco™ model, with bit tagging and such.

In its default mode, the NetEqualizer/AirEqualizer treats all of your standard traffic as one big pool. When your network is busy, it constantly readjusts bandwidth allocation for users automatically. It does this by temporarily limiting the amount of bandwidth a large download (such as is often found with P2P file sharing) might be using in order to ensure greater response times for e-mail, chat, Web browsing, VoIP, and other everyday online activities.

So, essentially, the NetEqualizer/AirEqualizer is already providing one level of QoS in the default setup. However, users have the option of giving certain applications priority over others.

For example, when you tell the NetEqualizer/AirEqualizer to give specific priority to your video server, it automatically squeezes all the other users into a smaller pool and leaves the video server’s traffic alone. In essence, this reserves bandwidth for the video server at a higher priority than all of the generic users. When the video stream is not active, the generic data users are allowed to utilize more bandwidth, including the portion that had been reserved for video. Once the settings are in place, all of this is done automatically and in real time. The same could be done with VoIP and other priority applications.
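Conceptually, the priority setting behaves like the hypothetical sketch below: while the priority host is active, the ceiling applied to the generic pool shrinks, and when it goes quiet the full link is handed back. The link size, reserve, and function name are illustrative assumptions, not actual configuration values.

```python
LINK_BPS      = 20_000_000   # assumed 20 Mbps link
VIDEO_RESERVE =  6_000_000   # bandwidth set aside for the priority video server

def generic_pool_limit(video_server_active: bool) -> int:
    """Ceiling applied to all non-priority traffic, recomputed in real time."""
    if video_server_active:
        # Squeeze everyone else into a smaller pool; the video server's
        # traffic is left alone.
        return LINK_BPS - VIDEO_RESERVE
    # No priority stream running: generic users may use the whole link.
    return LINK_BPS

print(generic_pool_limit(True))    # 14000000
print(generic_pool_limit(False))   # 20000000
```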

In most cases, the only users that even realize this process is taking place are those who are running the non-prioritized applications that have typically slowed your network. For everyone else, it’s business as usual. So, as mentioned, QoS over the NetEqualizer/AirEqualizer is ultimately a very simple process, but also very effective. And, it’s all done without controversial bit tagging and deep packet inspection!
