What Is Burstable Bandwidth? Five Points to Consider



Internet Bursting

Internet providers continually use clever marketing analogies to tout their burstable high-speed Internet connections. One of my favorites is the comparison to an automobile with overdrive that, at the touch of a button, can burn up the road. At first, the analogies seem valid, but there are usually some basic pitfalls and unresolved issues. Below are five points designed to make you ponder just what you're getting with your burstable Internet connection; they may ultimately call some of these analogies, and burstable Internet speeds altogether, into question.

  1. The car acceleration analogy just doesn’t work.

    First, you don’t share your car’s engine with other users when you’re driving.  Whatever the engine has to offer is yours for the taking when you press down on the throttle.  As you know, you do share your Internet connection with many other users.  Second, with your Internet connection, unless there is a magic button next to your router, you don’t have the ability to increase your speed on command.  Instead, Internet bursting is a mysterious feature that only your provider can dole out when they deem appropriate.  You have no control over the timing.

  2. Since you don’t have the ability to decide when you can be granted the extra power, how does your provider decide when to turn up your burst speed?

    Most providers do not share details on how they implement bursting policies, but here is an educated guess, based on years of experience helping providers enforce various policies regarding Internet line speeds. I suspect your provider watches your bandwidth consumption and lets you pop up to your full burst speed, typically 10 megabits, for a few seconds at a time. If you continue to use the full 10 megabits for more than a few seconds, they will likely rein you back down to your normal committed rate (typically 1 megabit); a rough sketch of such a policy appears after this list. Please note this is just an example from my experience and may not reflect your provider's actual policy.

  3. Above, I mentioned a few seconds for a burst, but just how long does a typical burst last?

    If you were watching a bandwidth-intensive HD video for an hour or more, for example, could you sustain adequate line speed to finish the video? A burst of a few seconds will suffice to make a Web page load in 1/8 of a second instead of perhaps the normal 3/4 of a second. While that is impressive to a degree, an hour-long video requires sustained throughput that may well exceed your baseline speed. So, if you're watching a movie or doing any other sustained bandwidth-intensive activity, it is unlikely you will be able to benefit from any sort of bursting technology.

  4. Why doesn’t my provider let me have the burst speed all of the time?

    The obvious answer is that if they did, it would not be a burst, so it must be limited in duration somehow. A better answer is that your provider has peaks and valleys in their available bandwidth during the day, and the higher speed of a burst cannot be delivered consistently. Therefore, it's better to leave bursting as a nebulous marketing term rather than a clearly defined entity. One other note: if you only get bursting during your provider's Internet "valleys", it may not help you at all, as those times of day may be nowhere near your own busy hour. So although bursting will not hurt you, it may not help much either.

  5. When are the likely provider peak times when my burst is compromised?

    Slower service and the inability to burst are most likely to occur during the early evening, when everybody else on the Internet is watching movies. Again, if this is your busy hour, bursting is unavailable just when you could really use it.
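
To make the guesswork in point 2 concrete, here is a minimal sketch, in Python, of how such a burst policy might be enforced. The names, the 10-megabit burst, the 1-megabit committed rate, and the few-second cutoff are all illustrative assumptions, not any provider's actual implementation:

    # Hypothetical burst policy: a 1 Mbps committed rate with a 10 Mbps
    # burst that is revoked after a few seconds of sustained use.
    COMMITTED_RATE_BPS = 1_000_000    # 1 megabit committed rate
    BURST_RATE_BPS = 10_000_000       # 10 megabit burst ceiling
    MAX_BURST_SECONDS = 3             # revoke the burst after sustained use

    def allowed_rate(sustained_burst_seconds):
        """Return the rate cap to enforce for the next second."""
        if sustained_burst_seconds < MAX_BURST_SECONDS:
            return BURST_RATE_BPS     # short spikes run at full burst speed
        return COMMITTED_RATE_BPS     # sustained use is reined back down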

These five points should give you a good idea of the multiple questions and issues that need to be considered when weighing the viability and value of burstable Internet speeds.  Of course, a final decision on bursting will ultimately depend on your specific circumstances.  For further related reading on the subject, we suggest you visit our articles How Much YouTube Can the Internet Handle and Field Guide to Contention Ratios.

NetEqualizer Bandwidth Shaping Solution: K-12 Schools


Download K-12 Schools White Paper

In working with network administrators at public and private K-12 schools over the years, we’ve repeatedly heard the same issues and challenges facing them. Here are just a few:

  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need a solution that will prioritize classroom videos and other online educational tools (e.g. blackboard.com).
  • We need to improve the overall Web-user experience for students.
  • We need a solution that doesn’t require “per-user” licensing.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many public and private K-12 schools around the world.

Download article (PDF) K-12 Schools White Paper

Read full article …

NetEqualizer Bandwidth Shaping Solution: Colleges, Universities, Boarding Schools, and University Housing


In working with information technology leaders at universities, colleges, boarding schools, and university housing over the years, we’ve repeatedly heard the same issues and challenges facing network administrators.  Here are just a few:

Download College & University White Paper

  • We need to provide 24/7 access to the web in the dormitories.
  • We need to support multiple campuses (and WAN connections between campuses).
  • We have thousands of students, and hundreds of administrators and professors, all sharing the same pipe.
  • We need to give priority to classroom videos used for educational purposes.
  • Our students want to play games and watch videos (e.g. YouTube).
  • We get calls if instant messaging & email are not responding instantaneously.
  • We need to manage P2P traffic.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many private and public colleges, universities, boarding schools, and in university housing facilities around the world.

Download article (PDF) College & University White Paper

Read full article …

How Does Your ISP Actually Enforce Your Internet Speed?


By Art Reisman, CTO, www.netequalizer.com


Have you ever wondered how your ISP manages to control the speed of your connection? If so, you might find the following article enlightening. Below, we'll discuss the various techniques used to enforce and break out bandwidth rate limits, and the associated side effects of using those techniques.

Dropping Packets (Cisco term “traffic policing”)

One of the simplest methods for a bandwidth controller to enforce a rate cap is by dropping packets. When using the packet-dropping method, the bandwidth controlling device counts the total number of bytes that cross a link during a second. If the target rate is exceeded during any single second, the bandwidth controller drops packets for the remainder of that second. For example, if the bandwidth limit is 1 megabit, and the bandwidth controller counts 1 million bits gone by in 1/2 a second, it will then drop packets for the remainder of the second. The counter then resets for the next second. From the evidence we have observed, many ISPs enforce their rate caps with the packet-dropping method, as it is the least expensive method and is supported on most basic routers.
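
For the technically inclined, here is a minimal sketch of that byte-counting, packet-dropping method. It is illustrative only; real routers implement policing per interface in the kernel or in hardware, and the function names here are invented for the example:

    # A toy policer: count bytes within each one-second window and drop
    # everything once the target rate is exceeded.
    import time

    RATE_LIMIT_BYTES = 1_000_000 // 8   # 1 megabit per second, in bytes

    window_start = time.monotonic()
    bytes_this_second = 0

    def police(packet):
        """Return True to forward the packet, False to drop it."""
        global window_start, bytes_this_second
        now = time.monotonic()
        if now - window_start >= 1.0:    # new one-second window: reset counter
            window_start = now
            bytes_this_second = 0
        if bytes_this_second + len(packet) > RATE_LIMIT_BYTES:
            return False                 # cap hit mid-second: drop the rest
        bytes_this_second += len(packet)
        return True

Note that once the cap is hit mid-second, every subsequent packet in that window is discarded, which is exactly the en-masse dropping discussed next.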

So, what is wrong with dropping packets to enforce a bandwidth cap?

Well, when a link hits a rate cap and packets are dropped en masse, it can wreak havoc on a network. For example, the standard reaction of a Web browser, when it perceives that traffic is getting lost, is to re-transmit the lost data. For a better understanding of dropping packets, let's use the analogy of a McDonald's fast food restaurant.

Suppose the manager of the restaurant was told his bonus was based on making sure there was never a line at the cash register. So, whenever somebody showed up to order food while all the registers were occupied, the manager would open a trap door, conveniently ejecting the customer back out into the parking lot. The customer, being extremely hungry, would come running back in the door (unless of course they die of starvation or get hit by a car), only to be ejected again. To make matters worse, let's suppose a busload of school kids arrives. As the kids file into the McDonald's, the ones remaining on the bus have no idea their classmates inside are getting ejected, so they keep streaming in. Hopefully, you get the idea.

Well, when bandwidth shapers deploy packet-dropping technology to enforce a rate cap, you can get the same result seen with the trap door analogy at the McDonald's. Web browsers and other user-based applications will beat their heads against the wall when they don't get responses from their counterparts on the other end of the line. When packets are being dropped en masse, the network tends to spiral out of control until all the applications essentially give up. Perhaps you have seen this behavior while staying at a hotel with an underpowered Internet link: your connectivity alternates between working and hanging up completely for a minute or so during busy hours. This can obviously be very maddening.

The solution to shaping bandwidth on a network without causing gridlock is to implement queuing.

Queuing Packets (Cisco term “traffic shaping”)

Queuing is the art of putting something in a line and making it wait before continuing on. Obviously, this is what fast food restaurants actually do. They plan to have enough staff on hand to handle the average traffic throughout the day, and then queue up their customers when they arrive faster than orders can be filled. The assumption with this model is that at some point during the day the McDonald's will catch up with the number of arriving customers and the lines will shrink away.

Another benefit of queuing is that wait times can be estimated by customers as they drive by and see the long line extending out into the parking lot; they can then save their energy and not attempt to go inside.

But, what happens in the world of the Internet?

With queuing methods implemented, a bandwidth controller looks at the data rate of incoming packets and, if it is deemed too fast, delays the packets in a queue. The packets eventually get to their destination, albeit somewhat later than expected. Packets on a queue can pile up very quickly, and without some help, the link would saturate. The computer memory that stores the packets in the queue would also saturate and, much like the scenario mentioned above, packets would eventually get dropped if they continued to come in faster than they were sent out.
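
Continuing the sketch from the policing example, a shaper replaces the immediate drop with a bounded queue that drains at the target rate. Again, the structure and names are illustrative assumptions, not any vendor's implementation:

    # A toy shaper: packets that arrive too fast wait in a bounded queue.
    # When the queue itself fills, drops still occur, as the text notes.
    from collections import deque

    RATE_LIMIT_BYTES = 1_000_000 // 8   # drain rate: 1 megabit per second
    MAX_QUEUE_BYTES = 64_000            # memory bound on the queue

    queue = deque()
    queued_bytes = 0

    def enqueue(packet):
        """Queue a packet for later sending; drop only if the queue is full."""
        global queued_bytes
        if queued_bytes + len(packet) > MAX_QUEUE_BYTES:
            return False                # queue saturated: fall back to dropping
        queue.append(packet)
        queued_bytes += len(packet)
        return True

    def drain_one_second():
        """Called once per second: release one second's worth of bytes."""
        global queued_bytes
        sent, budget = [], RATE_LIMIT_BYTES
        while queue and len(queue[0]) <= budget:
            pkt = queue.popleft()
            budget -= len(pkt)
            queued_bytes -= len(pkt)
            sent.append(pkt)
        return sent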

TCP to the Rescue (keeping queuing under control)

Most Internet applications use a protocol called TCP (Transmission Control Protocol) to handle their data transfers. TCP has built-in intelligence to figure out the speed of the link it is sending data on, and it makes adjustments accordingly. When the NetEqualizer bandwidth controller queues a packet or two, the TCP stacks on the customer end-point computers sense the slower packets and back off the speed of the transfer. With just a little bit of queuing, the sender slows down a bit and dropped packets can be kept to a minimum.

Queuing Inside the NetEqualizer

The NetEqualizer bandwidth shaper uses a combination of queuing and dropping packets to get speed under control. Queuing is the first option, but when a sender does not eventually back off, its packets will get dropped. For the most part, this combination of queuing and dropping works well.

So far we have been assuming a simple case of a single sender and a single queue, but what happens if you have a gigabit link with 10,000 users and you want to break off 100 megabits to be shared by 3,000 users? How would a bandwidth shaper accomplish this? This is another area where a well-designed bandwidth controller like the NetEqualizer separates itself from the crowd.

In order to provide smooth shaping for a large group of users sharing a link, the NetEqualizer does several things in combination.

  1. It keeps track of all streams, and based on their individual speeds, it will use different queue delays on each stream.
  2. Streams that back off will get minimal queuing.
  3. Streams that do not back off may eventually have some of their packets dropped.

The net effect of the NetEqualizer queuing intelligence is that all users will experience steady response times and smooth service.
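
As a rough illustration of how such a combination might be expressed, consider the following sketch. It is not NetEqualizer source code; the thresholds and names are invented for the example:

    # Toy per-stream policy mirroring the three rules above.
    def treatment(stream_rate_bps, fair_share_bps, seconds_over_share):
        """Pick an action for one stream; thresholds are illustrative."""
        if stream_rate_bps <= fair_share_bps:
            return "forward"   # stream backed off: minimal queuing (rule 2)
        if seconds_over_share < 10:
            return "delay"     # add a queue delay sized to the stream (rule 1)
        return "drop"          # stream never backed off: shed packets (rule 3)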

Notes About UDP and Rate Limits

Some applications, such as video, do not use TCP to send data. Instead, they use a "send-and-forget" protocol called UDP, which has no built-in back-off mechanism. Without some higher intelligence, UDP packets will continue to be sent at a fixed rate, even if they are arriving too quickly for the receiver. The good news is that most UDP applications also have some way of measuring whether their packets are getting to their destination; it's just that with UDP, the synchronization mechanism is not standardized.

Finally, there are those applications that just don't care whether their packets reach their destination. Speed tests and viruses send UDP packets as fast as they can, regardless of whether the network can handle them. The only way to enforce a rate cap on such ill-mannered applications is to drop their packets.

Hopefully this primer has given you a good introduction to the mechanisms used to enforce Internet speeds, namely dropping packets and queuing. And maybe you will think about this the next time you visit a fast food restaurant during its busy time…

Using NetEqualizer Lite to Prevent the 802.11 Hidden Terminal Problem


Introduction

Of the numerous growing pains that can accompany the expansion of a wireless network, the hidden terminal problem is one of the most difficult to solve. Despite your best efforts, the communication breakdown between nodes can wreak havoc on a network, often leading to subpar performance and unhappy users.

What is a hidden terminal and why is it a problem for wireless networks?

An 802.11 wireless network in a normal, simple configuration consists of a central access point (AP) and one or more remote users – which are the individuals utilizing the computers and devices that constitute a node. Wireless transmission technology is such that if more than one remote user transmits data back to the AP at the same time, it is difficult for the AP to distinguish between the two talkers.

When the forefathers of 802.11 first designed the protocols for how a wireless network should prevent this problem, they assumed that all users and nodes would be in close proximity to the access point and could actually hear each other’s transmissions.

For example, say node A and node B are wireless laptops in an office building with one access point. Node A starts sending data to the access point at the same moment as node B. By design, node A is smart enough to listen at the exact moment it is sending data in order to ensure that it has the airwaves free and clear. If it hears some other talker at the same time, it may back off, or, in other cases, node B may be the one to back off. The exact mechanism used to determine the back off order is similar to right of way rules at a four-way stop. These rules of etiquette are followed to prevent a crash and allow each node to send its data unimpeded.

Thus, 802.11 is designed with a set of courtesies such that if one node hears another node talking, it backs off, going silent so as to reduce the chaos of multiple transmissions at the same time. This should be true for every node in the network.

This technology worked fine until directional antennas were invented and attached to remote nodes, which allowed users to be farther away from an access point and still send and receive transmissions. This technology is widely available and fairly inexpensive, so it was adopted by many wireless service providers to extend Internet service across a community.

The impact of these directional antennas, and the longer distances they allow users to be from access points, is that individual nodes are often unable to hear each other. Since their antennas are directed back at a central location, as the individual nodes get farther away from the central AP, they also get farther apart from each other, making it more difficult for them to communicate. Think of a group of people talking while standing around in an ever-expanding circle: as the circle expands away from the center, people get farther apart, making it harder for them to hear one another.

Since it’s not practical to have each node point a directional antenna at all of the other nodes, the result is that the nodes don’t hear one another and subsequently don’t back off to let others in. When nodes compete to reach the access point at the same time, those with the strongest signals, generally the ones closest to the AP, typically win out, leaving the weaker-signaled nodes helpless and unable to communicate with the access point.


When a network with hidden nodes reaches capacity, it is usually due to circumstances such as this, where nodes with stronger signals steal the airwaves and crowd out nodes with weaker signals. If the nodes with the stronger signals continue to talk constantly, the weaker nodes can be locked out indefinitely, leaving certain users without access to the network.

The severity of the hidden node problem varies with the time of day, as well as with who is talking at any moment. As a result, the problem does not stay in one place for long, so it is not easily remedied by a quick mechanical fix. But, fortunately, there is a solution.

How does a NetEqualizer solve the hidden node issue?

The NetEqualizer solution, which is completely compatible with 802.11, works by taking advantage of the natural inclination of Internet connections to back off when artificially restrained. We’ll get back to this key point in a moment.

Understanding the true throughput upper limit of your access point is key to the NetEqualizer’s efficiency, since the advertised throughput of an AP and its actual ceiling often differ, with most APs never reaching their full advertised potential.

Once you have determined the peak capacity of the access point (done empirically through busy hour observation), you then place a NetEqualizer (normally the lower-end NetEqualizer POE device) between the access point and its connection to the Internet. You then set the NetEqualizer to the effective throughput of the AP. This tells the NetEqualizer to kick into gear when that upper limit is reached.

Once configured, the NetEqualizer constantly (every second) measures the total aggregate bandwidth traversing the AP. If it senses the upper limit is being reached, the NetEqualizer will isolate the dominating flows and encourage them to back off.
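
A rough sketch of that once-per-second check follows. The 85-percent trigger, the two-times-fair-share test, and the function names are illustrative assumptions rather than NetEqualizer internals:

    # Toy congestion check for an AP with an empirically measured ceiling.
    AP_EFFECTIVE_THROUGHPUT_BPS = 20_000_000  # found at busy hour observation
    TRIGGER_RATIO = 0.85                      # start equalizing near the ceiling

    def check_link(flows):
        """flows maps flow_id -> bits/sec observed in the last second.
        Returns the dominating flows that should be encouraged to back off."""
        total = sum(flows.values())
        if total < TRIGGER_RATIO * AP_EFFECTIVE_THROUGHPUT_BPS:
            return []                         # link not congested: do nothing
        fair_share = total / max(len(flows), 1)
        return [fid for fid, rate in flows.items() if rate > 2 * fair_share]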

Each connection between a user on your network and the Internet constitutes a traffic flow. Flows vary widely from short dynamic bursts, which occur, for example, when searching a small Web site, to large persistent flows, as when performing peer-to-peer file sharing or downloading a large file.

By keeping track of every flow going through the AP, the NetEqualizer can make a determination of which ones are getting an unequal share of bandwidth and thus crowding out flows from weaker nodes.

NetEqualizer determines detrimental flows from normal ones by taking the following questions into consideration:

1) How persistent is the flow?
2) How many active flows are there?
3) How long has the flow been active?
4) How much total congestion is currently on the trunk?
5) How much bandwidth is the flow using relative to the link size?

Once the answers to these questions are known, NetEqualizer will adjust offending flows by adding latency, forcing them to back off and allow potentially hidden nodes to establish communications – thus eliminating any disruption. Nodes with stronger signals that are closer to the access point will no longer have the advantage over users based farther away. This is done automatically by the NetEqualizer, without requiring any additional programming by administrators.

The key to making this work over 802.11 is the fact that if you slow down a stream to the Internet, the application causing it will back off and slow down as well. The NetEqualizer can do this without any changes to the 802.11 protocol, since the throttling is done independently of the radio; the throttling of heavy streams happens between the AP and the connection to the Internet.

Questions and Answers

How do you know congestion is caused by a heavy stream?

We have years of experience optimizing networks with this technology. It is safe to say that on any congested network roughly 5 percent of users are responsible for 80 percent of Internet traffic. This seems to be a law of Internet usage.

Can certain applications be given priority?

NetEqualizer can give priority by IP address, for video streams, and in its default mode it naturally gives priority to Voice over IP (VoIP), thus addressing a common need for commercial operators.

How many users can the NetEqualizer POE support?

The NetEqualizer Lite (the POE model) can support approximately 100 users.

What happens to voice traffic over a wireless transmission? Will it be improved or impaired?

We have mostly seen improvements to voice quality using our techniques. Voice calls are usually fairly low runners when it comes to the amount of bandwidth consumed. Congestion is usually caused by higher running activities, and thus we are able to tune the NetEqualizer to favor voice.

How can I find out more about the NetEqualizer?

Additional information about the NetEqualizer can be found at our Web site.

How can I purchase a NetEqualizer for trial?

Customers in the U.S. can contact APconnections directly at 1-800-918-2763 or via e-mail at admin@APconnections.net. International customers outside of Europe can contact APconnections at +1 303-997-1300, extension 103 or at the e-mail listed above.

About APconnections

APconnections is a privately held company founded in July 2003 and based in Lafayette, CO. We develop traffic shaping appliances that are cost-effective and easy to install and manage. Our NetEqualizer product family optimizes critical network bandwidth resources for any organization that purchases bandwidth in bulk and then redistributes or resells that bandwidth to disparate users with competing needs.

Our goal is to provide fully featured traffic shaping products that are simple to install and easy to use and manage. We released our first commercial offering in July 2003, and since then over 1000 unique customers around the world have put our products into service. Our flexible and scalable solutions can be found at ISPs, WISPs, major universities, Fortune 500 companies, SOHOs and small businesses on six continents.

Competing demands for network resources and congestion are problems shared by network administrators and operators across the globe. Low-priority applications such as a large file download should never be allowed to congest and slow down your VoIP, CRM, ERP, or other high-priority business applications. Until the development of APconnections’ NetEqualizer product family, network administrators and operators who wanted to cost-effectively manage network congestion and quality of service were forced to cobble together custom solutions. This process turned a simple task into a labor-intensive exercise in custom software development. Now, with the NetEqualizer product family from APconnections, network staff can purchase and quickly install cost-effective turnkey traffic shaping solutions.

The University of Limerick has published an independent study validating Equalizing as a solution to the hidden node problem.


1 Nodes are defined as any computer or device that is within a network. In this white paper, the term “user” will refer to the individual or group utilizing these computers or devices and could effectively be interchanged with the term “node”. In addition, the term “talker” will at times be used to refer to nodes that are sending data.

How to Implement Network Access Control and Authentication


There are a number of basic ways an automated network access control (NAC) system can identify unauthorized users and keep them from accessing your network. However, there are pros and cons to using these different NAC methods.  This article will discuss both the basic network access control principles and the different trade-offs each brings to the table, as well as explore some additional NAC considerations. Geared toward the Internet service provider, hotel operator, library, or other public portal operator who provides Internet service and wishes to control access, this discussion will give you some insight into what method might be best for your network.

The NAC Strategies

MAC Address

MAC addresses are unique to every computer on a network, and thus many NAC systems use them to identify individual customers and grant or deny access.

While this can be effective, there are limitations to using MAC addresses for network access control. For example, if a customer switches to a new computer, the system will not recognize them, as their MAC address will have changed. As a result, for mobile customer bases, MAC address authentication by itself is not viable.

Furthermore, on larger networks with centralized authentication, MAC addresses do not propagate beyond one network hop, so MAC address authentication can only be done on smaller networks (no hops across routers). A work-around for this limit would be to use a distributed set of authentication points local to each network segment. This would involve multiple NAC devices, which raises complexity with regard to synchronization: your entire authentication database would need to be replicated on each NAC.

Finally, a common question when it comes to MAC addresses is whether or not they can be spoofed. In short, yes, they can, but it does require some sophistication, and it is unlikely that a user with the ability to do so would go through all the trouble just to avoid paying an access charge. That is not to say it won’t happen, but rather that the risk of losing revenue is not worth the cost of combating the determined isolated user.

I mention this because some vendors will sell you features to combat spoofing, and most likely they are not worth the incremental cost. If your authentication is set up by MAC address, the spoofer would also have to know the MAC address of a paying user in order to get in. Since there is no real pattern to MAC addresses, guessing another customer’s MAC address would be nearly impossible without inside knowledge.

IP Address

IP addresses allow a bit more flexibility than MAC addresses because IP addresses can span network segments, crossing routers back to a central location. Again, while this strategy can be effective, IP address authentication has the same issue as MAC addressing: it does not allow a customer to switch computers, thus requiring that the customer use the same computer each time they log in. In theory, a customer could change their IP address when switching computers, but explaining how to do so would be far too much of an administrative headache on a consumer-based network.

In addition, IP addresses are easy to spoof and relatively easy to guess should a user be trying to steal another user’s identity. But, should two users log on with the same IP address at the same time, the ruse can quickly be tracked down. So, while plausible, it is a risky thing to do.

User ID Combined with MAC Address or IP Address

This methodology solves the portability issue found when using MAC addresses and IP addresses by themselves. With this strategy, the user authenticates their session with a user ID and password and the NAC module records their IP or MAC address for the duration of the session.

For a mobile consumer base, this is really the only practical way to enforce network access control. However, there is a caveat: the NAC controller must expire a user session after a period of inactivity. You can’t expect users to always log out of their network connection, so the session server (NAC) must take an educated guess as to when they are done. The ramification is that users whose sessions expire must log back in again. This usually isn’t a major problem, but it can be a hassle for users.

The good news is the inactivity timer can be extended to hours or even days, and should a customer log in on a different computer while a previous session is still current, the NAC can sense this and terminate the old session automatically.
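
Here is a minimal sketch of the user ID plus IP session tracking just described, with an inactivity timer. The field names and the eight-hour timeout are illustrative assumptions:

    # Toy NAC session table keyed by user ID, recording the session IP.
    import time

    INACTIVITY_TIMEOUT = 8 * 3600      # expire idle sessions after 8 hours

    sessions = {}                      # user_id -> {"ip": ..., "last_seen": ...}

    def login(user_id, ip):
        # A login from a new computer simply replaces any old session,
        # terminating it automatically as described above.
        sessions[user_id] = {"ip": ip, "last_seen": time.time()}

    def is_authorized(user_id, ip):
        s = sessions.get(user_id)
        if s is None or s["ip"] != ip:
            return False
        if time.time() - s["last_seen"] > INACTIVITY_TIMEOUT:
            del sessions[user_id]      # educated guess that the user is done
            return False
        s["last_seen"] = time.time()   # any activity refreshes the timer
        return True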

The authentication method currently used with the NetEqualizer is based on IP address and user ID/password, since it was designed for ISPs serving a transient customer base.

Other Important Considerations

NAC and Billing Systems

Many NAC solutions also integrate billing services. Overlooking the potential complexity of a billing system can lead to ballooning costs that cut into efficiency and profits for both customer and vendor. Our philosophy is that a flat rate and simple billing are best.

To name a few examples, different customers may want time-of-day billing; billing by day, hour, month, or year; automated refunds; billing by connection speed; billing by type of property (geographic location); or special tax codes. Billing can obviously go from a simple idea to a complicated one in a hurry. While there’s nothing wrong with these requests, history has shown that once you get beyond a simple flat rate, the cost of maintaining a system that meets these varied demands can increase exponentially.

Another thing to look out for with billing is integration with a credit card processor. Back-end integration for credit card processing takes some time and energy to validate. For example, the most common credit card authorization system in the US, Authorize.net, does not work unless you also have a US bank account. You may be tempted to shop for your credit card processor based on fees, but if you plan on doing automated integration with a NAC system, make sure the processor provides automated tools to integrate with your computer system and that your consulting firm accounts for this integration work.

Redirection Requirements

You cannot purchase and install a NAC system without some network analysis. Most NAC systems will redirect unauthorized users to a Web page that allows them to sign up for the service. Although this seems relatively straightforward, some basic network features need to be in place for this redirection to work correctly. The details go beyond the scope of this article, but you should expect to have a competent network administrator or consultant on hand to set this up correctly. To be safe, plan for eight to 40 hours of consulting time for troubleshooting and set-up above and beyond the cost of the equipment.

Network Access for Organizational Control

Thus far we have focused on the basic ways to restrict access to the Internet for a public provider. However, in a private or institutional environment where security and access to information are paramount, the NAC mission can change substantially. For example, the Wikipedia article on network access control outlines a much broader mission than what a simple service provider would require. The article reads:

“Network Access Control aims to do exactly what the name implies—control access to a network with policies, including pre-admission endpoint security policy checks and post-admission controls over where users and devices can go on a network and what they can do.”

This paragraph was obviously written by a contributor who views NAC as a broad control technique reaching deep into a private network. Interestingly, there is an ongoing dispute on Wikipedia over whether this definition goes beyond the simpler idea of just granting access.

The rift on Wikipedia can be summarized as an argument over whether a NAC should be a simple gatekeeper for access to a network, with users having free rein to wander once in, or whether the NAC has responsibilities to protect various resources within the network once access is attained. Both camps have a point; the customer and type of business determine what type of NAC is required.

In closing, the overarching message that emerges from this discussion is simply that implementing network access control requires evaluating not only the network setup, but also how the network will be used. Strategies that work perfectly in certain circumstances can leave network administrators and users frustrated in others. With the right amount of foresight, however, network access control technologies can facilitate the success of your network and the satisfaction of its users rather than serve as an ongoing frustrating limitation.

The Real Killer Apps and What You Can Do to Stop Them from Bringing Down Your Internet Links


When planning a new network, or when diagnosing a problem on an existing one, a common question that’s raised concerns the impact that certain applications may have on overall performance. In some cases, solving the problem can be as simple as identifying and putting an end to (or just cutting back) the use of certain bandwidth-intensive applications. So, the question, then, is what applications may actually be the source of the problem?

The following article works to identify and break down the applications that will most certainly kill your network, and also provides suggestions as to what you can do about them. While every application certainly isn’t covered, our experience working with network administrators around the world has helped us identify the most common problems.

The Common Culprits

YouTube Video (standard video) — On average, a sustained 10-minute YouTube video will consume about 500kbs over its duration. Most video players try to store the video locally (buffering ahead) as fast as your network can take it. On a shared network, this has the effect of bringing everything else to its knees. This may not be a problem if you are the only person using the Internet link, but in today’s businesses and households, that is rarely the case.

For more specifics about YouTube consumption, see these other YouTube articles.

Microsoft Service-Pack Downloads — Updates such as Microsoft service packs are delivered via file transfer protocol (FTP). Generally, this protocol will use as much bandwidth as it can find. The end result is that your VoIP phone may lock up, your videos will become erratic, and Web surfing will come to a crawl.

Keeping Your Network Running Smoothly While Handling Killer Apps

There is no magic pill that can give you unlimited bandwidth, but each of the following solutions may help. However, they often require trade-offs.

  1. The obvious solution is to communicate with other members of your household or business when using bandwidth intensive applications. This is not always practical, but, if other users agree to change their behavior, it’s usually a surefire solution.
  2. Deploy a fairness device to smooth out those rough patches during contentious busy hours — Yes, this is the NetEqualizer News blog, but with all bias aside, these types of technologies often work great. If you are in an office sharing an Internet feed with various users, the NetEqualizer will keep aggressive bandwidth users from crowding others out. No, it cannot create additional bandwidth on your pipe, but it will eliminate the gridlock caused by your colleague in the next cubicle downloading a Microsoft service pack. Yes, there are other devices on the market that can enforce fairness, but the NetEqualizer was specifically designed for this mission. And, with a starting price of around $1,400, it is a product small businesses can invest in to avoid longer-term costs (see option 3).
  3. Buy more bandwidth — In most cases, this is the most expensive of the different solutions in the long term and should usually be a last resort. This is especially true if the problems are largely caused by recreational Internet use on a business network. However, if the bandwidth-intensive activities are a necessary part of your operation, and they can’t afford to be regulated by a fairness device, upgrading your bandwidth may be the only long-term solution. But, before signing the contract, be sure to explore options one and two first.

As mentioned, not every network-killing application is discussed here, but this should head you in the right direction in identifying the problem and finding a solution. For a more detailed discussion of this issue, visit the links below.

  • For a  more detailed discussion on how much bandwidth specific applications consume, click here.
  • For a set of detailed tips/tricks on making your Internet run faster, click here.
  • For an in-depth look at more complex methods used to mitigate network congestion on a WAN or Internet link, click here.

Top Tips To Quantify The Cost Of WAN Optimization


Editor’s Note: As we mentioned in a recent article, there’s often some confusion when it comes to how WAN optimization fits into the overall network optimization industry — especially when compared to Internet optimization. Although similar, the two techniques require different approaches to optimization. What follows are some simple questions to ask your vendor before you purchase a WAN optimization appliance. For the record, the NetEqualizer is primarily used for Internet optimization.

When presenting a WAN optimization ROI argument, your vendor rep will clearly make a compelling case for savings.  The ROI case will be made by amortizing the cost of equipment against your contracted rate from your provider. You can and should trust these basic raw numbers. However, there is more to evaluating a WAN optimization (packet shaping) appliance than comparing equipment cost against bandwidth savings. Here are a few things to keep in mind:

  1. The amortization schedule should also make reasonable assumptions about future costs for T1, DS3, and OC3 links. Contracted rates have been dropping in many metro areas, and it is reasonable to assume that bandwidth costs will be perhaps 50 percent less two to three years out.
  2. If you do increase bandwidth, the licensing costs for the traffic shaping equipment can increase substantially. You may also find yourself in a situation where you need to do a forklift upgrade as you outrun your current hardware.
  3. Recurring licensing costs are often mandatory to keep your equipment current. Without upgrading your license, your deep packet inspection (layer 7 shaping filters) will become obsolete.
  4. Ongoing labor costs to tune and re-tune your WAN optimization appliance can often run thousands of dollars per week.
  5. The good news is that optimization companies will normally let you try an appliance before you buy. Make sure you take the time to manage the equipment with your own internal techs or IT consultant to get an idea of how it will fit into your network. The honeymoon with new equipment (supported by a well-trained pre-sales team) can be short-lived; after the free pre-sale support has expired, you will be on your own.

There are certainly times when WAN optimization makes sense, yet in many cases, what appears to be a no-brainer decision at first will be called into question as costs mount down the line. Hopefully these five contributing factors paint a clearer picture of what to expect.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Hitchhiker’s Guide To Network And WAN Optimization Technology


Manufacturers make all sorts of claims about speeding up your network with special technologies. In the following pages, we’ll take a look at the different types of technologies and explain them in such a way that you, the consumer, can make an informed decision about what is right for you.

Table of Contents

  • Compression – Relies on data patterns that can be represented more efficiently. Best suited for point-to-point leased lines.
  • Caching – Relies on human behavior, accessing the same data over and over. Best suited for point-to-point leased lines, but also viable for Internet connections and VPN tunnels.
  • Protocol Spoofing – Best suited for point-to-point WAN links.
  • Application Shaping – Controls data usage based on spotting specific patterns in the data. Best suited for both point-to-point leased lines and Internet connections. Very expensive to maintain in initial cost, ongoing costs, and labor.
  • Equalizing – Makes assumptions on what needs immediate priority based on data usage. An excellent choice for Internet connections and clogged VPN tunnels.
  • Connection Limits – Prevents access gridlock in routers and access points. Best suited for Internet access where p2p usage is clogging your network.
  • Simple Rate Limits – Prevents one user from getting more than a fixed amount of data. Best suited as a stopgap first effort for remedying a congested Internet connection on a limited budget.

Compression

At first glance, the term compression seems intuitively obvious. Most people have at one time or another extracted a compressed Windows Zip file. Examining file sizes pre- and post-extraction reveals there is more data on the hard drive after the extraction. WAN compression products use some of the same principles, only they compress the data on the WAN link and decompress it automatically once delivered, thus saving space on the link and making the network more efficient. Even though you likely understand compression of a Windows file conceptually, it would be wise to understand what is really going on under the hood before making an investment to reduce network costs. Some questions to consider: How does compression really work? Are there situations where it may not work at all?

How it Works

A good, easy-to-visualize analogy to data compression is the use of shorthand when taking dictation. By using a single symbol for common words, a scribe can take written dictation much faster than if he were to spell out each entire word. The basic principle behind compression techniques is thus to use shortcuts to represent common data. Commercial compression algorithms, although similar in principle, vary widely in practice. Each company offering a solution typically has its own closely guarded trade secrets that provide a competitive advantage.

There are a few general rules common to all strategies. One technique is to encode a repeated character within a data file. For a simple example, let’s suppose we were compressing this very document, and as a format separator we had a row consisting of a solid dash line.

The data for this solid dash line is comprised of approximately 160 repetitions of the ASCII character "-". When transporting the document across a WAN link without compression, this line would require 160 bytes of data, but with clever compression we can encode it using the special notation "-" x 160.

The compression device at the front end would read the 160-character line and realize: “Duh, this is stupid. Why send the same character 160 times in a row?” So it would incorporate a special code to depict the data more efficiently.
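
The idea is easy to demonstrate with a toy run-length encoder in Python; real WAN compression algorithms are far more sophisticated, but the principle of replacing repetition with a shortcut is the same:

    # Toy run-length encoder: a run of identical characters collapses
    # to a (character, count) pair.
    def rle_encode(data):
        runs, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            runs.append((data[i], j - i))  # e.g. ("-", 160) instead of 160 bytes
            i = j
        return runs

    print(rle_encode("-" * 160))           # prints [('-', 160)]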

Perhaps that was obvious, but it is important to know a little bit about compression techniques to understand the limits of their effectiveness. There are many types of data that cannot be efficiently compressed.

For example, many image and voice recordings are already optimized, and very little improvement in data size can be accomplished with further compression. The companies that sell compression-based solutions should be able to provide you with profiles of what to expect based on the type of data sent on your WAN link.

Caching

Suppose you are the administrator for a network with a group of 1,000 users who wake up promptly at 7:00 am each morning and immediately go to MSNBC.com to retrieve the latest news from Wall Street. This synchronized behavior would create 1,000 simultaneous requests for the same remote page on the Internet.

Or, in the corporate world, suppose the CEO of a multinational 10,000-employee business, right before the holidays, put out an all-points 20-page PDF file on the corporate site describing the new bonus plan. As you can imagine, all the remote WAN links might get bogged down for hours while each and every employee tried to download this file.

Well, it does not take a rocket scientist to figure out that if the MSNBC home page could somehow be stored locally on an internal server, it would alleviate quite a bit of pressure on your WAN link.

And in the case of the CEO memo, if a single copy of the PDF file was placed locally at each remote office it would alleviate the rush of data.

Caching does just that.

Offered by various vendors, caching can be very effective in many situations, and vendors can legitimately claim tremendous WAN speed improvements in some cases. Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing the WAN link unnecessarily.

You may know that most desktop browsers already do their own form of caching. Many web servers keep a time stamp of the last update to their data, and browsers such as the popular Internet Explorer will use a cached copy of a remote page after checking the time stamp.
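
That time-stamp check is easy to sketch with Python's standard library: re-fetch a page only when the server reports it has changed since the cached copy was taken. Error handling is trimmed, and the cache structure is an illustrative assumption:

    # Toy conditional fetch using If-Modified-Since / Last-Modified headers.
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError

    cache = {}   # url -> (last_modified_header, body)

    def fetch(url):
        headers = {}
        if url in cache and cache[url][0]:
            headers["If-Modified-Since"] = cache[url][0]  # ask: changed since?
        try:
            with urlopen(Request(url, headers=headers)) as resp:
                body = resp.read()
                cache[url] = (resp.headers.get("Last-Modified", ""), body)
                return body
        except HTTPError as err:
            if err.code == 304:           # not modified: serve the cached copy
                return cache[url][1]
            raise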

So what is the downside of caching?

There are two main issues that can arise with caching:

  1. Keeping the cache current. If you access a cached page that is not current, you are at risk of getting old and incorrect information. Some things you may never want cached, for example the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in the cache will not be synchronized with changes.
  2. Volume. There are some 60 million web sites on the Internet alone. Each site contains upwards of several megabytes of public information. The amount of data is staggering, and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood that they will hit an uncached page.

Protocol Spoofing

Historically, there are client-server applications that were developed for an internal LAN. Many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe using WAN links to tie different locations together.

To get a better sense of what goes on in a chatty application, an analogy may help. Suppose you were sending a letter to family members with your summer vacation pictures, and, for some insane reason, you decided to put each picture in a separate envelope and mail them individually on the same mail run. Obviously, this would be extremely inefficient.

What protocol spoofing accomplishes is to fake out the client or server side of the transaction and then send a more compact version of the transaction over the Internet, i.e. put all the pictures in one envelope and send it on your behalf thus saving you postage…

You might ask why not improve the inefficiencies in these chatty applications rather than write software to deal with the problem?

Good question, but that would be the subject of a totally different white paper, one on how IT organizations must evolve with legacy technology, and it is beyond the scope of this one.

Application Shaping

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping”, with aliases of “traffic shaping”, “bandwidth control”, and perhaps a few others thrown in for good measure. For the IT manager who is held accountable for everything that can and will go wrong on a network, or the CIO who needs to manage network usage policies, this is a dream come true. If you can divvy up portions of your WAN link among various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth.

At the center of application shaping is the ability to identify traffic by type. Is this Citrix traffic, streaming audio, Kazaa peer-to-peer, or something else?

The Fallacy of Internet Ports and Application Shaping

Many applications are expected to use well-known Internet ports when communicating across the Internet. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the FTP application commonly used for downloading files uses the well-known port 21. The fallacy with this scheme, as many operators soon find out, is that many applications do not consistently use a fixed port for communication. Many application writers have no desire to be easily classified; in fact, they don’t want IT personnel to block them at all, so they deliberately design applications not to conform to any formal port assignment scheme. For this reason, any product that purports to block or alter application flows by port should be avoided if your primary mission is to control applications by type.

So, if standard firewalls are inadequate at blocking applications by port, what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet. In the case of different applications on the Internet we would expect to see different kinds of payloads. For example, let’s take the example of a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then when the train arrived in Los Angeles hopefully the workers on the other end would have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what? The contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets, and through various pattern matching techniques determines what type of application a particular flow is.

Once a flow is identified, the application shaping tool can enforce the operator’s policies on that flow. Here are some examples:

  • Limit AIM messenger traffic to 100kbs
  • Reserve 500kbs for Shoretell voice traffic

The list of rules you can apply to traffic types and flow is unlimited.
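
To give a feel for what the pattern matching looks like, here is a greatly simplified classifier. The signature table is a toy stand-in for the thousands of patterns a commercial product maintains (the BitTorrent entry is the real handshake prefix; the rest are equally simple):

    # Toy payload classifier: inspect the first bytes of a flow and guess
    # the application.
    SIGNATURES = {
        b"GET ":             "http",        # plain web request
        b"\x13BitTorrent p": "bittorrent",  # BitTorrent handshake prefix
        b"SSH-":             "ssh",         # SSH version banner
    }

    def classify(payload):
        """Guess the application from the first bytes of a flow's payload."""
        for magic, app in SIGNATURES.items():
            if payload.startswith(magic):
                return app
        return "unknown"                    # the lumped class discussed below

    # A policy table then maps each class to a rule, e.g. cap "bittorrent"
    # flows at 100kbs while reserving bandwidth for voice.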

The Downside to Application Shaping

Application shaping does work and is a very well-thought-out, logical way to set up a network. After all, complete control over all types of traffic should allow an operator to run a clean ship, right? But as with any euphoric ideal, there are drawbacks to the reality that you should be aware of.

  1. The number of applications on the Internet is a moving target. The best application shaping tools do a very good job of identifying several thousand of them, yet there will always be some traffic that is unknown (estimated at ten percent by experts from the leading manufacturers). The unknown traffic is lumped into the unknown classification, and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a web cast and it was not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to keep current is large and there are cracks.
  2. Even if the application spectrum could be completely classified, it constantly changes. You must keep licenses current to ensure you have the latest detection capabilities. And even then, it can be quite a task to constantly analyze and change the mix of policies on your network. As bandwidth costs drop, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

Equalizing

Take a minute to think about what is really going on in your network to make you want to control it in the first place.

We can only think of a few legitimate reasons to do anything at all to your WAN: “The network is slow”, or “My VoIP call got dropped”.

If such words were never uttered, then life would be grand.

So you really only have to solve these two issues to be successful. Who cares about the actual speed of the WAN link, the number and types of applications running on your network, or what ports they are using, if you never hear these two complaints?

Equalizing goes to the heart of congestion using the basic principle of time. The reason a network is slow or a voice call breaks up is that the network is stupid: it grants immediate access to anybody who wants to use it, no matter what their need is. That works great much of the day, when networks have plenty of bandwidth to handle all traffic demands, but it is the peak usage demands that play havoc.

Combine the above statement with some simple human behavior factors. People notice slowness when real-time activities break down: accessing a web page, sending an e-mail, a chat session, a voice call. All of these activities will generate instant complaints if response times degrade from the norm.

The other fact of human network behavior is that bandwidth-intensive applications, such as peer-to-peer transfers, large e-mail attachments, and database backups, are attributable to a very small number of active users at any one time. This makes them all the more insidious, as they can consume well over ninety percent of a network’s resources at any time. Also, most of these bandwidth-intensive activities can be spread out over time without the user noticing.

That database backup, for example: does it really need to be completed in three minutes starting at 5:30 on a Friday, or can it be stretched over six minutes and complete at 5:36? That would give your network perhaps fifty percent more bandwidth at no additional cost, and nobody would notice. It is unlikely the user backing up their local disk drive is waiting for it to complete with a stopwatch in hand.

It is these unchanging human-factor interactions that allow equalizing to work today, tomorrow, and well into the future without any need for upgrading. Equalizing looks at the behavior of applications and usage patterns. By adhering to some simple rules of behavior, real-time applications can be distinguished from heavy non-real-time activities and granted priority on the fly, without any specific policies set by the IT manager.

How Equalizing Technology Balances Traffic

Each connection on your network constitutes a traffic flow. Flows vary widely from short dynamic bursts, for example, when searching a small website, to large persistent flows, as when performing peer-to-peer file sharing.

Equalizing is determined from the answers to these questions:

  1. How persistent is the flow?
  2. How many active flows are there?
  3. How long has the flow been active?
  4. How much total congestion is currently on the trunk?
  5. How much bandwidth is the flow using relative to the link size?

Once these answers are known, equalizing adjusts each flow by adding latency to low-priority tasks so that high-priority tasks receive sufficient bandwidth. Nothing more needs to be administered to make it happen; once set up, it need not be revisited. A minimal sketch of the idea follows.
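
To make the idea concrete, here is a minimal sketch of one equalizing pass in Python. The flow fields, thresholds, and penalty formula are illustrative assumptions on our part, not the product's actual algorithm.

    # Hypothetical sketch of an equalizing pass; all names and thresholds
    # below are illustrative, not the real implementation.
    from dataclasses import dataclass

    @dataclass
    class Flow:
        bytes_per_sec: float   # current throughput of this flow
        seconds_active: float  # how long the flow has persisted

    def equalize(flows, link_bps, congestion_at=0.85):
        """Return an added-latency penalty (ms) per flow when the trunk is congested."""
        load_bps = sum(f.bytes_per_sec for f in flows) * 8
        if load_bps < congestion_at * link_bps:
            return [0 for _ in flows]   # no congestion: leave every flow alone
        penalties = []
        for f in flows:
            share = (f.bytes_per_sec * 8) / link_bps
            # Large, persistent flows absorb latency; short bursts pass untouched.
            if f.seconds_active > 10 and share > 0.05:
                penalties.append(min(200, int(share * 1000)))
            else:
                penalties.append(0)
        return penalties

    flows = [Flow(900_000, 120),   # long-running download
             Flow(8_000, 2)]       # quick web page fetch
    print(equalize(flows, link_bps=8_000_000))   # -> [200, 0]

Note that the large, persistent flow absorbs the latency penalty while the short burst passes untouched, which is the whole point.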

Exempting Priority Traffic

People often point out that although equalizing technology sounds promising, such a generic approach to traffic shaping may be prone to mistakes. What if a user has a high-priority, bandwidth-intensive video stream that must get through? Wouldn't it be the target of a misapplied rule to slow it down?

The answer is yes, but what we have found is that high bandwidth priority streams are usually few in number and known by the administrator; they rarely if ever pop up spontaneously, so it is quite easy to exempt such flows since they are the rare exception. This is much easier than trying to classify every flow on your network at all times.

Connection Limits

Often overlooked as a source of network congestion is the number of connections a user generates. A connection can be defined as a single user communicating with a single Internet site. Take accessing the Yahoo home page, for example. Your browser goes out to Yahoo and starts following various links on the page to retrieve all the data. This data is typically not all at the same Internet address, so your browser may access several different public Internet locations to load the page, perhaps as many as ten connections over a short period of time. Routers and access points on your local network must keep track of these connections to ensure the data gets routed back to the correct browser. Ten connections to the Yahoo home page over a few seconds is not excessive, but there are very poorly behaved applications (most notably Gnutella, BearShare, and BitTorrent) that are notorious for opening hundreds or even thousands of connections in a short period of time. This type of activity is just as detrimental to your network as other bandwidth-eating applications and can bring it to a grinding halt. The solution is to make sure any traffic management solution you deploy incorporates some form of connection limiting, as sketched below.
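
Here is a minimal sketch of connection limiting in Python. The limit and window values are illustrative assumptions, not any vendor's defaults.

    # Per-host connection limiting: refuse new connections once a host
    # has too many recent ones. CONN_LIMIT and CONN_WINDOW are illustrative.
    import time
    from collections import defaultdict

    CONN_LIMIT = 50       # max tracked connections per local host
    CONN_WINDOW = 60.0    # seconds before a tracked connection ages out

    conn_log = defaultdict(list)   # local IP -> timestamps of recent connections

    def allow_new_connection(local_ip):
        now = time.time()
        # Forget connections older than the window, then test against the cap.
        conn_log[local_ip] = [t for t in conn_log[local_ip]
                              if now - t < CONN_WINDOW]
        if len(conn_log[local_ip]) >= CONN_LIMIT:
            return False   # p2p-style connection storm: refuse
        conn_log[local_ip].append(now)
        return True

    print(allow_new_connection("192.168.1.10"))   # True for a well-behaved host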

Simple Rate Limits

The most common and widely used form of bandwidth control is the simple rate limit. This involves putting a fixed rate cap on a single IP address, as is often the case with the rate plans ISPs promise their user communities. “2 meg up and 1 meg down” is a common battle cry, but what happens in reality with such rate plans?

Although setting simple rate limits is far superior to running a network wide open, we often call this approach “set, forget, and pray.”

Take, for example, six users sharing a T1, each capped at 256 kbps up and 256 kbps down. If all six use their full 256 kbps at once, they consume roughly the entire 1.5 megabits a T1 can handle. You are unlikely to hit gridlock with just six users, but at thirty users gridlock becomes likely, and with forty or fifty it will happen quite often. It is not uncommon for schools, wireless ISPs, and executive suites to have sixty to as many as 200 users sharing a single T1 with simple fixed rate limits as the only control mechanism.

Yes, simple fixed rate limiting does resolve the trivial case where one or two users, left unchecked, consume all available bandwidth; however, on any oversold network there is no guarantee that busy-hour conditions will not result in gridlock. The quick arithmetic below shows why.
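
The arithmetic is easy to check. A quick sketch in Python, using the nominal 1.544 Mbps capacity of a T1:

    # Worst-case demand vs. T1 capacity for the rate-limit example above.
    T1_BPS = 1_544_000        # nominal T1 capacity
    USER_CAP_BPS = 256_000    # per-user rate limit

    for users in (6, 30, 60, 200):
        demand = users * USER_CAP_BPS   # everyone bursting to their cap at once
        print(f"{users:>3} users: {demand / 1e6:.1f} Mbps demand, "
              f"{demand / T1_BPS:.1f}x the T1")

At thirty users the worst-case demand is already five times the pipe; the caps alone cannot prevent busy-hour gridlock.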

Conclusion

The common thread among all WAN optimization techniques is that they must make intelligent assumptions about data patterns or human behavior to be effective. After all, in the end, the speed of the link is just that: a fixed speed that cannot be exceeded. All of these techniques have their merits and drawbacks; the trick is finding the solution best suited to your network's needs. Hopefully, the background contained in this document will help you, the consumer, make an informed decision.

Optimizing Your WAN Is Not The Same As Optimizing Your Internet Link — Here’s Why…


WAN optimization is a catch-all phrase for making a network more efficient. However, few products distinguish between optimizing a WAN link and optimizing an Internet link. Yet, the methods used for the latter do not necessarily overlap with WAN optimization. In this article, we’ll break down the differences and similarities between the two practices and explain why WAN optimization tends to be the more common, yet not necessarily most effective, of the two techniques when it comes to overall network optimization.

Some Basic Definitions

A WAN link is always a point-to-point link where an institution/business controls both ends of the link. However, a WAN link does not provide Internet access.

On the other hand, an Internet link is one where one end terminates in a business/home/institution and the other end terminates in the Internet cloud, thus providing the former with Internet access.

A VPN link is a special case of a WAN link in which the link traverses the public Internet to reach another location within an organization. By the definition above, this is not an Internet link.

Whether dealing with a small business, a home user, or public entities such as libraries, schools etc., there are far more Internet links out there than WAN links. Each of these entities will most certainly have a dedicated Internet link while many will not have a WAN link.

Some Common Questions

If Internet links far outnumber WAN links, why are there so many commercial products dedicated to optimizing WAN links and so few specifically dedicated to Internet optimization?

There are a few reasons for this:

  1. WAN optimization is fairly easy to measure and quantify, so a WAN optimization vendor can easily demonstrate their value by showing before and after results.
  2. Many WAN-based applications — Citrix, SQL queries, etc. — are inherently inefficient and in need of optimization.
  3. The market is flooded with vendors and analysts (such as Gartner) that all tend to promote and sustain the WAN optimization market.
  4. WAN optimization tools also double as reporting and monitoring tools, which administrators gravitate toward.
  5. A large number of commercial Internet connections are located at small or medium-sized businesses, and the ROI on an optimization device for their Internet link is either not that compelling or not well understood.

Why is a WAN optimizing tool not the best tool to optimize an Internet link? Don’t the methodologies overlap?

Most of the methods used by a WAN optimizing appliance make use of two principles:

  1. The organization owns both ends of the link and will use two optimizing devices, one at each end. For example, compression techniques require that you own both ends of the link (see the sketch after this list). As mentioned earlier, you cannot control both ends of an Internet link.
  2. The types of traffic running over a WAN link are consistent and well defined. Organizations tend to do the same thing over and over again on their internal link. On an Internet link, by contrast, the traffic varies from minute to minute and cannot be easily quantified.
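
To illustrate the first point, here is a tiny Python sketch of why compression needs an appliance at each end; the payload is fabricated for the example.

    # Compression only pays off if a peer device at the far end can undo it.
    import zlib

    payload = b"SELECT * FROM orders WHERE region='WEST';" * 100   # repetitive WAN traffic

    on_the_wire = zlib.compress(payload)      # sending-side appliance encodes
    restored = zlib.decompress(on_the_wire)   # receiving-side appliance decodes

    assert restored == payload
    print(f"{len(payload)} bytes shrank to {len(on_the_wire)} bytes on the wire")

With no second device at the far end of an Internet link, there is nothing to decode the traffic, so the technique simply does not apply.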

So, how does one optimize unbounded traffic coming into an Internet link?

You need an appliance, such as a NetEqualizer, that dynamically manages all flows. But don't take it from us: you can also check in on what existing NetEqualizer users are saying.

How does a company quantify the cost of using a device to optimize their Internet link?

Admittedly, the results may be a bit subjective. The good news is that optimization companies will normally allow you to try an appliance before you buy. On the other hand, most Internet providers will require you to purchase a fixed length contract.

The fact of the matter is that an Internet link can be rendered useless by  a small number of users during peak times. If you blindly upgrade your contract to accommodate this problem, it is akin to buying gourmet lunches for some employees while feeding everybody else microwave popcorn. In the end, the majority will be unhappy.

While the appropriate network optimization technique will vary from situation to situation, Internet optimization appliances tend to work well under most circumstances and are worth implementing. Or, at the very least, they're worth exploring before signing on to a long-term bandwidth increase with your ISP.

See: Related Discussion on Internet Congestion and predictability.

The True Price of Bandwidth Monitoring


By Art Reisman, CTO, APconnections (www.netequalizer.com)

For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at monitoring your Internet has two parts: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. In an ironic twist, we assert that total costs rise with the sophistication of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking about, looking for optimal performance.

But is it fair to assume that more advanced monitoring and information bring higher labor costs?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, a common oversight in estimating labor costs is the belief that once the work of adjusting the network is done, the adjustments can remain statically in place. In reality, network traffic changes constantly, and the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s. The function of the computer operator did not vanish completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and cost spent managing it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problem of the network locking up will go away, leaving only what we would call “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will sit near the mean, with perhaps one or two percent well above it. You don't need a fancy tool to see what they are doing; abuse becomes obvious just by looking at the usage numbers, as in the sketch below.
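
A sketch of such a report in Python; the usage figures are fabricated for illustration.

    # Flag users whose transfer volume sits far above the mean.
    usage_mb = {"alice": 900, "bob": 1100, "carol": 850,
                "dave": 12000, "erin": 950}   # made-up weekly totals

    mean = sum(usage_mb.values()) / len(usage_mb)

    for user, mb in sorted(usage_mb.items(), key=lambda kv: -kv[1]):
        note = "  <-- well above the mean" if mb > 3 * mean else ""
        print(f"{user:8s} {mb:6d} MB{note}")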

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we'll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don't forget to take our poll.

  • List of monitoring tools compiled by Stanford
  • Planetmy Linux Tips: How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

NetEqualizer White Paper: Comparison with Traditional Layer-7 (Deep Packet Inspection) Products


Updated with new reference material May 4th 2009

How NetEqualizer compares to Packeteer, Allot, Cymphonics, Exinda

We often get asked how NetEqualizer compares to Packeteer, Allot, Cymphonics, Exinda and a plethora of other well-known companies that do layer 7 application shaping (packet shaping). After several years of these questions, and of discussing the different aspects with former and current application shaping IT administrators, we've developed a response that should clarify the differences between NetEqualizer's behavior-based approach and the rest of the pack.

We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject, with content to support the bullet chart, was in order. If you want to see just the bullet chart, you can skip to the end now, but if you're looking to have the question answered as objectively as possible, please take a few minutes to read on.

In the following sections, we will cover specifically when and where application shaping (deep packet inspection) is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish. We will also discuss how the NetEqualizer and its behavior-based shaping fits into the landscape of application shaping, and how in some cases the NetEqualizer is a much better alternative.

First off, let’s discuss the accuracy of application shaping. To do this, we need to review the basic mechanics of how it works.

Application shaping is defined as the ability to identify traffic on your network by type and then set customized policies to control the flow rates for each particular type. For example, Citrix, AIM, Youtube, and BearShare are all applications that can be uniquely identified.

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from computer A to computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload is the address where it is being sent. On the inside is the data/payload that is being transmitted. These two elements, the address and the payload, comprise the complete IP packet. In the case of different applications on the Internet, we would expect to see different kinds of payloads.

At the heart of all current application shaping products is special software that examines the content of Internet packets as they pass through the packet shaper. Through various pattern matching techniques, the packet shaper determines in real time what type of application a particular flow is. It then proceeds to take action to possibly restrict or allow the data based on a rule set designed by the system administrator.

For example, the popular peer-to-peer application Kazaa actually has the ASCII characters “Kazaa” appear in the payload, and hence a packet shaper can use this keyword to identify a Kazaa application. Seems simple enough, but suppose that somebody was downloading a Word document discussing the virtues of peer-to-peer and the title had the character string “Kazaa” in it. Well, it is very likely that this download would be identified as Kazaa and hence misclassified. After all, downloading a Word document from a Web server is not the same thing as the file sharing application Kazaa.
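
A deliberately naive matcher makes the failure mode concrete. Real DPI engines use far richer heuristics than this Python sketch, but the underlying problem is the same: a byte pattern alone cannot tell a protocol from a document that merely mentions it.

    # Naive payload signature matching and its false positive.
    SIGNATURES = {b"Kazaa": "kazaa-p2p",
                  b"BitTorrent protocol": "bittorrent"}

    def classify(payload: bytes) -> str:
        for pattern, app in SIGNATURES.items():
            if pattern in payload:
                return app
        return "unknown"

    print(classify(b"\x13BitTorrent protocol ..."))           # true positive
    print(classify(b"a Word doc on the virtues of Kazaa"))    # false positive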

The other issue that constantly brings the accuracy of application shaping under fire is that some application writers find it in their best interest not to be classified. In a mini arms race that plays out every day across the world, some application developers are constantly changing their signatures, and some have gone as far as to encrypt their data entirely.

Yes, it is possible for the makers of application shapers to counter each move, and that is exactly what the top companies do, but it can take a heroic effort to keep pace. The constant engineering and upgrading required carries an escalating cost. In the case of encrypted applications, the CPU power required for decryption is impractically high, and other methods are needed to identify encrypted p2p.

This is not to say that application shaping never works or provides no value. So let's break down where it has potential and where it may bring false promises, starting with the realities of what happens when you deploy and depend on this technology.

Accuracy and False Positives

In early 2003, a top engineer and executive joined APconnections directly from a company that offered application shaping as one of its many value-added technologies. He had firsthand knowledge from working with hundreds of customers who were big supporters of application shaping:

The application shaper his company offered could identify 90 percent of the spectrum of applications, which means they left 10 percent unclassified. So, right off the bat, 10 percent of the traffic is unknown to the traffic shaper. Is this traffic important? Is it garbage you can ignore? There is no way to know without any intelligence about it, so you are forced to let it pass without restriction. Or you could put one general rule over all of this traffic, perhaps limiting it to 1 megabit per second, for example. Essentially, if your intention was 100-percent understanding and control of your network traffic, you must compromise that standard right out of the gate.

In fairness, this 90-percent identification actually is an amazing number with regard to accuracy when you understand how daunting application shaping is. Regardless, there is still room for improvement.

So, that covers the admitted problem of unclassifiable traffic, but how accurate can a packet shaper be with the traffic it does claim to classify? Does it make mistakes? There really isn't any reliable data on how often an application shaper will misidentify an application. To our knowledge, no independent consumer reporting company has ever built a lab capable of generating several thousand different application types mixed with random traffic and then measured how often that traffic was misclassified. Yes, trivial tests are done one application at a time, but misclassification becomes more likely with real-world, complex, and diverse application mixes.

From our own testing of application-identification technology freely available on the Internet, we discovered that false positives can occur up to 25 percent of the time; a random FTP file download can be classified as something more specific. Obviously, commercial packet shapers do not rely on free open-source technology and may well improve on it. So, if we had to estimate from our experience, perhaps 5 percent of Internet traffic is likely to be misclassified. That brings overall accuracy down to about 85 percent, combining the traffic shapers don't claim to classify with an estimated error rate for the traffic they do classify, as the arithmetic below shows.
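
Spelled out, the estimate simply combines the two error sources:

    # The 85 percent figure, using the estimates from the text above.
    unclassified = 0.10     # traffic the shaper admits it cannot identify
    misclassified = 0.05    # our estimated error on the rest
    print(f"effective accuracy: {1 - unclassified - misclassified:.0%}")   # 85%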

Constantly Evolving Traffic

Our source (mentioned above) says that 70 percent of the customers who purchased application shaping equipment were using it primarily as a reporting tool after one year. In other words, they had stopped keeping up with shaping policies altogether and were just looking at the reports to understand their networks (doing nothing proactive to change the traffic).

This is an interesting fact. From what we have seen, many people are simply unable, or unwilling, to put in the time necessary to continuously update and change their application rules to keep up with evolving traffic. The reason for the constant rule changes is that with traditional application shaping you are dealing with a cunning and wise foe. For example, if you notice a large contingent of users on BitTorrent and put in a rule to quash that traffic, within perhaps days those users will have moved on to something new: perhaps a different application or encrypted p2p. If you do not go back, reanalyze, and reprogram your rule set, your packet shaper slowly becomes ineffective.

And finally, lest we forget, application shaping is considered by some to be a violation of Net Neutrality.

When is application shaping the right solution?

There is a large set of businesses that use application shaping quite successfully along with other technologies. This area is WAN optimization. Thus far, we have discussed the issues with using an application shaper on the wide open Internet where the types and variations of traffic are unbounded. However, in a corporate environment with a finite set and type of traffic between offices, an application shaper can be set up and used with fantastic results.

There is also the political side to application shaping. It is human nature to want to see and control what takes place in your environment. Finding the best tool available to actually show what is on your network, along with the ability to contain it, plays well with just about any CIO or IT director on the planet. An industry-leading packet shaper brings visibility to your network and a pie chart showing 300 different kinds of traffic. Whether the tool stays practical or accurate over time is not often brought into the buying decision; the decision to buy can usually be “intuitively” justified. By intuitively, we mean that it is easier to get approval for a tool that a busy executive looking for a quick-fix solution can understand conceptually.

As the cost of bandwidth continues to fall, the question becomes how much a CIO should spend to analyze a network, especially when you consider that as the Internet expands, the complexity of shaping applications grows. While bandwidth prices drop, the cost of implementing such a product stays flat or increases. In cases such as this, it often does not make sense to purchase a $15,000 bandwidth shaper to stave off a bandwidth upgrade that costs an additional $200 a month; at that price, the shaper takes more than six years to pay for itself.

What about the reporting aspects of an application shaper? Even if it can only accurately report 90 percent of the actual traffic, isn’t this useful data in itself?

Yes and no. Obviously, analyzing 90 percent of the data on your network might be useful, but if you really look at what is going on, it is hard to feel in control of, or to understand, something so dynamic and changing. By the time you get a handle on what is happening, the system has likely changed, and unless you can take action in real time, the network usage trends (on a wide open Internet trunk) will vary from day to day.1 It turns out that the most useful information you can gather about your network is an overall usage pattern for each individual. The goof-off employee/user will stick out like a sore thumb in a simple usage report, since the amount of data transferred can be ten times the average for everybody else. The behavior is the indicator here; the specific data types and applications will change from day to day and week to week.

How does the NetEqualizer differ and what are its advantages and weaknesses?

First, we’ll summarize equalizing and behavior-based shaping. Overall, it is a simple concept. Equalizing is the art form of looking at the usage patterns on the network, and then when things get congested, robbing from the rich to give to the poor. Rather than writing hundreds of rules to specify allocations to specific traffic as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

This behavior-based approach usually mirrors what you would end up doing if you could see and identify all of the traffic on your network, but doesn’t require the labor and cost of classifying everything. Applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority while large downloads and p2p receive lower priority. This behavior-based shaping does not need to be updated constantly as applications change.

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned with accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception that we run into is video, and the NetEqualizer has a low level routine that easily allows you to give overriding priority to a specific server on your network, hence solving the problem.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.

This overview, along with the summary table below, should give you a good idea of where the NetEqualizer stands in relation to packet shaping.

Summary Table

Application based shaping

  • good for static links where traffic patterns are constant
  • good for intuitive presentations; makes sense and is easy to explain to non-technical people
  • detailed reporting by application type
  • not the best fit for wide-open Internet trunks
  • costly to maintain in terms of licensing
  • high initial cost
  • constant labor to tune with a changing application spectrum
  • expect approximately 15 percent of traffic to go unclassified or misclassified
  • only a static snapshot of a changing spectrum; may not be useful
  • false positives may show data incorrectly, with no easy way to confirm accuracy
  • violates Net Neutrality

Equalizing

  • not the best for dedicated WAN trunks
  • the most cost-effective for shared Internet trunks
  • little or no recurring cost or labor
  • low entry cost
  • concept takes some getting used to
  • basic reporting by behavior, used to stop abuse
  • handles encrypted p2p without modifications or upgrades
  • supports Net Neutrality

1 The exception is a corporate WAN link with relatively static usage patterns.

Note: Since we first published this article, deep packet inspection, also known as layer 7 shaping, has taken some serious industry hits with respect to US-based ISPs.

Related articles:

Why is NetEqualizer the low price leader in bandwidth control

When is deep packet inspection a good thing?

NetEqualizer offers deep packet inspection compromise.

Internet users attempt to thwart Deep Packet Inspection using encryption.

Why the controversy over deep packet inspection?

World Wide Web founder denounces deep packet inspection

Speeding up Your T1, DS3, or Cable Internet Connection with an Optimizing Appliance


By Art Reisman, CTO, APconnections (www.netequalizer.com)

Whether you are a home user or a large multinational corporation, you likely want to get the most out of your Internet connection. In previous articles, we have  briefly covered using Equalizing (Fairness)  as a tool to speed up your connection without purchasing additional bandwidth. In the following sections, we’ll break down  exactly how this is accomplished in layman’s terms.

First, what is an optimizing appliance?

An optimizing appliance is a piece of networking equipment that has one Ethernet input and one Ethernet output. It is normally located between the router that terminates your Internet connection and the users on your network. From this location, all Internet traffic must pass through the device. When activated, the optimizing appliance can rearrange traffic loads for optimal service, thus preventing the need for costly new bandwidth upgrades.

Next, we’ll summarize equalizing and behavior-based shaping.

Overall, equalizing is a simple concept. It is the art form of looking at the usage patterns on the network and, when things get congested, robbing from the rich to give to the poor. In other words, heavy users are limited in the amount of bandwidth to which they have access in order to ensure that ALL users on the network can utilize it effectively. Rather than writing hundreds of rules to specify allocations to specific traffic, as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

How is Fairness implemented?

If you have multiple users sharing your Internet trunk and somebody mentions “fairness,” it probably conjures up the image of each user waiting in line for their turn. And while a device that enforces fairness in this way would certainly be better than doing nothing, Equalizing goes a few steps further than this.

We don’t just divide the bandwidth equally like a “brain dead” controller. Equalizing is a system of dynamic priorities that rewards smaller users at the expense of heavy users. It is very dynamic, and there is no preset limit on any user. In fact, the NetEqualizer does not keep track of users at all; instead, we monitor user streams. So a user may have one stream (an FTP download) slowed down while another stream (e-mail) goes untouched.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.

What is the result?

The end result is that applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority, while large downloads and p2p receive lower priority. Situations where we cut back large streams are generally short in duration. As an added advantage, this behavior-based shaping does not need to be updated constantly as applications change.

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned with accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception we run into is video, and the NetEqualizer has a low-level routine that allows you to give overriding priority to a specific server on your network, solving the problem. It also has a special feature whereby you can exempt any IP address and give it priority whenever a large stream, such as video, must get through.

Through the implementation of Equalizing technology, network administrators are able to get the most out of their network. Users of the NetEqualizer are often surprised to find that their network problems were not a result of a lack of bandwidth, but rather a lack of bandwidth control.

See who else is using this technology.


5 Tips to speed up your business T1/DS3 to the Internet


By Art Reisman, CTO, APconnections (www.netequalizer.com)

In tight times expanding your corporate Internet pipe is a hard pill to swallow, especially when your instincts tell you the core business should be able to live within the current allotment.

Here are some tips and hard facts that you may want to consider to help stretch your business Internet pipe.

1) Layer 7 application shaping.

The marketplace is crawling with solutions that let you set bandwidth policies based on application type. Application shaping allows an administrator to restrict lower-priority activities while giving mission-critical apps favorable consideration. This methodology is very seductive, but in our experience it can send your IT department into a nanny state, constantly trying to figure out what to allow and what to restrict. Also, the cost of an Internet link expansion is dropping, while many application shaping solutions start around $10,000 and go up from there.

The upside is that Layer 7 application shaping does work well on internal WAN links that do not carry Internet traffic. An administrator can quite easily get a handle on the fixed traffic running privately within the network.

2) Using your router to restrict specific IPs and ports

If your core business utilization can be isolated to a single server or group of servers, a few simple rules allocating a large chunk of the pipe to those resources (by IP address) may be a good fit.

In an environment where business priorities change and are not isolated to a fixed server or two, this solution can backfire. But if your resource allocation requirements are stable, using your router to favor one particular subnet over another can be useful in stretching your bandwidth; a sketch of the underlying mechanism follows.
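
The mechanism most routers use for this kind of cap is a token bucket. A minimal Python sketch follows; the rates are illustrative, and actual router syntax varies by vendor.

    # Token-bucket policing: non-critical traffic is held to a fixed rate,
    # leaving the rest of the pipe free for the business-critical servers.
    import time

    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0        # refill rate in bytes per second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False   # over the cap: drop or queue the packet

    general = TokenBucket(rate_bps=512_000, burst_bytes=64_000)   # illustrative cap
    print(general.allow(1500))   # a full-size Ethernet frame passes -> True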

One thing to be careful about: it often takes a skilled technician to set up specialty rules on your router, and you can easily rack up consulting fees if your setup is not static.

3) Behavior based shaping

Editor's note: We are the makers of the NetEqualizer, which specializes in this technology; however, our intent in this article is to be objective.

Behavior-based shaping works well and affordably in most situations. Most business-related applications will get priority, as they tend to move small amounts of data, such as web pages. Occasionally there are exceptions, such as video, that need to override the basic behavior-based shaping, and video can easily be excluded from the generic policies, as in the sketch below. Implementing a few exclusions is far less cumbersome than trying to classify all traffic all the time, as with application shaping.
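
A sketch of what such an exclusion amounts to; the address below is hypothetical.

    # Flows touching known priority hosts bypass behavior-based shaping entirely.
    PRIORITY_HOSTS = {"10.0.5.20"}   # hypothetical internal video server

    def shaping_penalty_ms(src_ip, dst_ip, default_ms):
        if src_ip in PRIORITY_HOSTS or dst_ip in PRIORITY_HOSTS:
            return 0   # exempt: never throttled, regardless of flow size
        return default_ms

    print(shaping_penalty_ms("10.0.5.20", "192.168.1.44", 120))   # -> 0
    print(shaping_penalty_ms("192.168.1.9", "8.8.8.8", 120))      # -> 120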

4) Add more bandwidth and bypass your local loop carrier

T1s and T3s from your local telco may not be the only options for bandwidth in your area. Many of our customers get creative by purchasing bandwidth directly from a tier-one provider (such as Level 3) and then using a microwave link to backhaul the bandwidth to their location. The telcos make a killing with what they call a loop charge, billed before they put any bandwidth on your line. With microwave backhaul technology you can bypass this charge for significant savings.

5) Clean up the laptops and computers on your network. Many bots and viruses run in the background on your Windows machines and can generate a cacophony of background traffic. A business-wide license for good virus protection may be worth the investment. Stay away from the freeware versions of virus protection; they tend to miss quite a bit.