NetEqualizer Bandwidth Shaping Solution: Telecom, Satellite Systems, Cable, and Wired and Wireless ISPs


In working with Internet providers around the world, we’ve repeatedly heard the same issues and challenges facing network administrators. Here are just a few:

Download ISP White Paper

  • We need to support selling fixed bandwidth to our customers.
  • We need to be able to report on subscriber usage.
  • We need the ability to increase our subscriber-to-bandwidth ratio, without cutting subscribers back, before having to buy more bandwidth.
  • We need to meet the varying needs of all of our users.
  • We need to manage P2P traffic.
  • We need to give VoIP traffic priority.
  • We need to make exemptions for customers routing all of their traffic through VPN tunnels.
  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need a solution that will grow with our network.
  • We need a solution that will meet CALEA requirements.

In this article, we will talk about how the NetEqualizer has been used to solve these issues for Internet providers worldwide.

Download article (PDF) ISP White Paper


NetEqualizer Bandwidth Shaping Solution: Libraries


In working with libraries across the country, we have heard the same issues and challenges repeatedly from network administrators.  Here are just a few:

Download Library White Paper

  • We need to meet the varying needs of all of our patrons while keeping the network truly open to the public.
  • We need to ensure access to our online resources for remote users (online catalogs, databases, etc.).
  • We need to do more with less bandwidth.
  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need a solution that will grow with our network.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many libraries around the world.

Download article (PDF) Library White Paper


NetEqualizer Bandwidth Shaping Solution: K-12 Schools


Download K-12 Schools White Paper

In working with network administrators at public and private K-12 schools over the years, we’ve repeatedly heard the same issues and challenges facing them. Here are just a few:

  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need a solution that will prioritize classroom videos and other online educational tools (e.g. blackboard.com).
  • We need to improve the overall Web-user experience for students.
  • We need a solution that doesn’t require “per-user” licensing.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many public and private K-12 schools around the world.

Download article (PDF) K-12 Schools White Paper


Comcast Suit: Was Blocking P2P Worth the Final Cost?


By Art Reisman
CTO of APconnections
Makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer


Comcast recently settled a class action suit in the state of Pennsylvania regarding its practice of selectively blocking P2P traffic. So far, the first case has settled for $16 million, with more cases on the docket yet to come. To recap: Comcast and other large ISPs invested in technology to thwart P2P, denied involvement when first accused, got spanked by the FCC, and now Comcast is looking to settle various class action suits.

When Comcast’s practices were established, P2P usage was skyrocketing with no end in sight, and blocking some of it was necessary to preserve reasonable speeds for all users. Given that there was no specific law or ruling on the books, mucking with P2P to alleviate gridlock seemed like a rational business decision. The decision made even more sense considering that DSL providers were stealing disgruntled customers. With this said, Comcast wasn’t alone in the practice — all of the larger providers were throttling P2P to some extent to ensure good response times for all of their customers.

Yet, with the lawsuits mounting, it appears at face value that things backfired a bit for Comcast. Or did they?

We can work out some very rough estimates of the final cost trade-off. Here goes:

I am going to guess that before this plays out completely, settlements will run close to $50 million or more. To put that in perspective, Comcast showed a 2008 profit of close to $3 billion, so $50 million is hardly a dent to its stockholders. But to play this out, we must ask what the ramifications would have been of not blocking P2P back when all of this began and P2P was a more serious bandwidth threat. (Today, while P2P has declined, YouTube and online video are the primary bandwidth hogs.)

We’ll start with the customer. The cost of acquiring a new customer is usually calculated at around six months of service, or approximately $300. So, to keep things simple, we’ll assume the net cost of losing a customer is roughly $300. In addition, there are also the support costs related to congested networks, which can easily run $300 per customer incident.

The other, more subtle cost of P2P is that the methods used to deter P2P traffic were designed to keep traffic on the Comcast network. You see, ISPs pay for exchanging data when they hand off to other networks, and by limiting the amount of data exchanged, they can save money. I did some cursory research on the costs involved with exchanging data and did not come up with anything concrete, so I’ll assume a heavy P2P customer costs $5 per month in exchange fees.

So, let’s put the numbers together to get an idea of how much potential financial damage P2P was causing back in 2007. (Again, these are estimates and not fact; comments and corrections are welcome.)

  • Comcast had approximately 15 million broadband customers in 2008.
  • If 1 in 100 were heavy P2P users, at $5 each the exchange costs would run $750,000 per month.
  • Net customers lost to a competitor might be 1 in 500 per month. At $300 each, that would run $9 million a month.
  • Support calls due to preventable congestion might hit another 1 in 500 customers, or $9 million a month.

So, very conservatively for 2007 and 2008, incremental costs related to unmitigated P2P could have easily run a total of roughly $450 million right off the bottom line.
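For readers who want to check the arithmetic, the back-of-the-envelope calculation above reduces to a few lines of Python. Every input below is one of the article's rough assumptions, not measured data:

```python
# Back-of-the-envelope check of the P2P cost estimates above.
# All inputs are the article's rough assumptions, not measured data.

subscribers = 15_000_000           # Comcast broadband customers, circa 2008
heavy_p2p_share = 1 / 100          # 1 in 100 subscribers are heavy P2P users
exchange_cost_per_user = 5         # assumed $/month in exchange fees per heavy user
churn_share = 1 / 500              # net customers lost to competitors per month
support_share = 1 / 500            # support incidents per month
cost_per_lost_customer = 300       # roughly 6 months of service
cost_per_support_incident = 300

exchange = subscribers * heavy_p2p_share * exchange_cost_per_user
churn = subscribers * churn_share * cost_per_lost_customer
support = subscribers * support_share * cost_per_support_incident
monthly = exchange + churn + support

print(f"exchange ${exchange:,.0f}  churn ${churn:,.0f}  support ${support:,.0f} per month")
print(f"total ${monthly:,.0f}/month, or ${monthly * 24:,.0f} over 2007-2008")
```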

Therefore, while these calculations are approximations, in retrospect it was likely financially well worth the risk for Comcast to mitigate the effects of unchecked P2P. Of course, the public relations costs are much harder to quantify.

NetEqualizer Bandwidth Shaping Solution: Colleges, Universities, Boarding Schools, and University Housing


In working with information technology leaders at universities, colleges, boarding schools, and university housing over the years, we’ve repeatedly heard the same issues and challenges facing network administrators.  Here are just a few:

Download College & University White Paper

  • We need to provide 24/7 access to the web in the dormitories.
  • We need to support multiple campuses (and WAN connections between campuses).
  • We have thousands of students, and hundreds of administrators and professors, all sharing the same pipe.
  • We need to give priority to classroom videos used for educational purposes.
  • Our students want to play games and watch videos (e.g. YouTube).
  • We get calls if instant messaging & email are not responding instantaneously.
  • We need to manage P2P traffic.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many private and public colleges, universities, boarding schools, and in university housing facilities around the world.

Download article (PDF) College & University White Paper


NetEqualizer Provides a Net Neutrality Solution for Bandwidth Control


By Eli Riles NetEqualizer VP of Sales

This morning I read an article on how some start-up companies are being hurt while awaiting the FCC’s decision on Net Neutrality.

Late in the day, a customer called and exclaimed, “Wow, now with the FCC coming down hard on technologies that jeopardize net neutrality, your business must be booming since you offer an excellent, viable alternative.” And yet, in the face of this controversy, several of our competitors continue to sell deep packet inspection devices to customers.

Public operators and businesses that continue to purchase such technology are likely uninformed about the growing firestorm of opposition to Deep Packet Inspection techniques. The allure of being able to identify and control Internet traffic by type is a very natural solution, which customers often demand. Suppliers who sell DPI devices are just doing what their customers have asked. As with all technologies, once the train leaves the station it is hard to turn around. What is different in the case of DPI is that suppliers and ISPs had their way with an unsuspecting public starting in the late 90s. Nobody really gave much thought to how DPI might become the villain in the controversy over Net Neutrality. It was just assumed that nobody would notice their Internet traffic being watched and redirected by routing devices. With behemoths such as Google having a vested interest in keeping traffic flowing without interference on the Internet, commercial deep packet inspection solutions are slowly falling out of favor in the ISP sector. The bigger question for the players betting the house on DPI is: will it fall out of favor in other business verticals?

The NetEqualizer decision to do away with DPI two years ago is looking quite brilliant now, although at the time it was clearly a risk bucking market trends. Today, even in the face of a worldwide recession, our profit and unit sales are up for the first three quarters of 2009.

As we have claimed in previous articles, there is a time and place for deep packet inspection; however, any provider using DPI to manipulate data is looking for a potential dogfight with the FCC.

NetEqualizer has been providing alternative bandwidth control options for ISPs, businesses, and schools of all sizes for seven years without violating any of the Net Neutrality sacred cows. If you have not heard about us, maybe now is a good time to pick up the phone. We have been on record touting our solution as fair and equitable for quite some time now.

Why Is NetEqualizer the Low-Price Leader in Bandwidth Control?


Recently we have received feedback from customers stating that they almost did not consider the NetEqualizer because the price was so much less than solutions from the likes of Packeteer (Blue Coat), Allot (NetEnforcer), and Exinda.

Sometimes a low price will raise a red flag on a purchase decision, especially when the price is an order of magnitude less than the competition’s.

Given this feedback, we thought it would be a good idea to go over some of the major cost-structure differences between APconnections, maker of the NetEqualizer, and some of the competition.

1) NetEqualizers are sold mostly direct, by word of mouth. We do not have a traditional indirect sales channel.

– The downside for us as a company is that this does limit our reach a bit. Many IT departments do not have the resources to seek out new products on their own, and are limited to what is presented to them.

– The good news for all involved is that selling direct takes quite a bit of cost out of delivering the product. Indirect sales channels need to be incentivized to sell, and oftentimes they will steer the customer toward the highest-commission product in their arsenal. Our direct channel eliminates this overhead.

– The other good thing about not using a sales channel is that when you talk to one of our direct (non-commissioned) sales reps, you can be sure they are experts on the NetEqualizer. With a sales channel, a rep often sells many different kinds of products and can get rusty on some of the specifics.

2) We have bundled our manufacturing with a company that also produces a popular firewall, and we maintain a backup source to manufacture our products at all times, thus ensuring a steady flow of product without the liability of running a manufacturing facility.

3) We have never borrowed money to run APconnections.

– This keeps us very stable and able to withstand market fluctuations.

– There are no greedy investors calling the shots, looking for a return and demanding higher prices.

4) The NetEqualizer is simple and elegant.

– Many products keep adding features to grow their market share; we have a solution that works well and does not require constant ongoing engineering.

The Real Killer Apps and What You Can Do to Stop Them from Bringing Down Your Internet Links


When planning a new network, or when diagnosing a problem on an existing one, a common question that’s raised concerns the impact that certain applications may have on overall performance. In some cases, solving the problem can be as simple as identifying and putting an end to (or just cutting back) the use of certain bandwidth-intensive applications. So, the question, then, is what applications may actually be the source of the problem?

The following article identifies and breaks down the applications that will most certainly kill your network, and also provides suggestions as to what you can do about them. While every application certainly isn’t covered, our experience working with network administrators around the world has helped us identify the most common problems.

The Common Culprits

YouTube Video (standard video) — On average, a sustained 10-minute YouTube video will consume about 500kbps over its duration. Most video players try to buffer ahead, storing the video locally as fast as your network can take it. On a shared network, this has the effect of bringing everything else to its knees. This may not be a problem if you are the only person using the Internet link, but in today’s businesses and households, that is rarely the case.

For more specifics about YouTube consumption, see these other YouTube articles.

Microsoft Service-Pack Downloads — Updates such as Microsoft service packs are large bulk file transfers, and generally a bulk transfer will use as much bandwidth as it can find. The end result is that your VoIP phone may lock up, your videos will become erratic, and Web surfing will slow to a crawl.

Keeping Your Network Running Smoothly While Handling Killer Apps

There is no magic pill that can give you unlimited bandwidth, but each of the following solutions may help. However, they often require trade-offs.

  1. The obvious solution is to communicate with other members of your household or business when using bandwidth-intensive applications. This is not always practical, but, if other users agree to change their behavior, it’s usually a surefire solution.
  2. Deploy a fairness device to smooth out the rough patches during congested busy hours — Yes, this is the NetEqualizer News blog, but with all bias aside, these types of technologies often work great. If you are in an office sharing an Internet feed with various users, the NetEqualizer will keep aggressive bandwidth users from crowding others out. No, it cannot create additional bandwidth on your pipe, but it will eliminate the gridlock caused by your colleague in the next cubicle downloading a Microsoft service pack. Yes, there are other devices on the market that can enforce fairness, but the NetEqualizer was specifically designed for this mission. And, with a starting price of around $1400, it is a product small businesses can invest in to avoid longer-term costs (see option 3).
  3. Buy more bandwidth — In most cases, this is the most expensive of the solutions in the long term and should usually be a last resort. This is especially true if the problems are largely caused by recreational Internet use on a business network. However, if the bandwidth-intensive activities are a necessary part of your operation, and they can’t afford to be regulated by a fairness device, upgrading your bandwidth may be the only long-term solution. But, before signing the contract, be sure to explore options one and two first.

As mentioned, not every network-killing application is discussed here, but this should head you in the right direction in identifying the problem and finding a solution. For a more detailed discussion of this issue, visit the links below.

  • For a  more detailed discussion on how much bandwidth specific applications consume, click here.
  • For a set of detailed tips/tricks on making your Internet run faster, click here.
  • For an in-depth look at more complex methods used to mitigate network congestion on a WAN or Internet link, click here.

APconnections Study Shows Administrators Prioritize Results over Bandwidth Reporting


Today we released the results of our month-long study into the needs of bandwidth monitoring technology users, which sought to determine the priority users place on detailed reporting relative to overall network optimization. Based on the results of a NetEqualizerNews.com poll, 80 percent of study participants voted that a smoothly running network was more important than the information provided by detailed reporting.

Ultimately, the study confirms what we’ve believed for years. While some reporting is essential, complicated reporting tools tend to be overkill. When users simply want their networks to run smoothly and efficiently, detailed reporting isn’t always necessary and certainly isn’t the most cost-effective solution.

Detailed bandwidth monitoring technology is not only more expensive from the start, but an administrator is also likely to spend more time making adjustments and looking for optimal performance. The result is a continuous cycle of unnecessarily spent manpower and money.

We go into further detail on the subject in our recent blog post entitled “The True Price of Bandwidth Monitoring.” The full article can be found at https://netequalizernews.com/2009/07/16/the-true-price-of-bandwidth-monitoring/.

$1000 Discount Offered Through NetEqualizer Cash For Conversion Program


After witnessing the overwhelming popularity of the government’s Cash for Clunkers new car program, we’ve decided to offer a similar deal to potential NetEqualizer customers. Therefore, this week, we’re announcing the launch of our Cash for Conversion program. The program offers owners of select brands (see below) of network optimization technology a $1000 credit toward the list-price purchase of NetEqualizer NE2000-10 or higher models (click here for a full price list). All owners have to do is send us their old (working or not) or out-of-license bandwidth control technology. Products from the following manufacturers will be accepted:

  • Exinda
  • Packeteer/Blue Coat
  • Allot
  • Cymphonix
  • Procera

In addition to receiving the $1000 credit toward a NetEqualizer, program participants will also have the peace of mind of knowing that their old technology will be handled responsibly through refurbishment or electronics recycling programs.

Only the listed manufacturers’ products will qualify. Offer good through the Labor Day weekend (September 7, 2009). For more information, contact us at 303-997-1300 or admin@apconnections.net.

Hitchhiker’s Guide To Network And WAN Optimization Technology


Manufacturers make all sorts of claims about speeding up your network with special technologies. In the following pages, we’ll take a look at the different types of technologies and explain them in such a way that you, the consumer, can make an informed decision about what is right for you.

Table of Contents

  • Compression – Relies on data patterns that can be represented more efficiently. Best suited for point-to-point leased lines.
  • Caching – Relies on human behavior of accessing the same data over and over. Best suited for point-to-point leased lines, but also viable for Internet connections and VPN tunnels.
  • Protocol Spoofing – Best suited for point-to-point WAN links.
  • Application Shaping – Controls data usage based on spotting specific patterns in the data. Best suited for both point-to-point leased lines and Internet connections. Very expensive to maintain, in initial cost, ongoing licensing, and labor.
  • Equalizing – Makes assumptions about what needs immediate priority based on data usage. Excellent choice for Internet connections and clogged VPN tunnels.
  • Connection Limits – Prevents access gridlock in routers and access points. Best suited for Internet access where P2P usage is clogging your network.
  • Simple Rate Limits – Prevents one user from getting more than a fixed amount of data. Best suited as a stopgap first effort for remedying a congested Internet connection on a limited budget.

Compression

At first glance, the term compression seems intuitively obvious. Most people have at one time or another extracted a compressed Windows Zip file, and examining the file sizes pre- and post-extraction reveals there is more data on the hard drive after the extraction. WAN compression products use some of the same principles, only they compress the data on the WAN link and decompress it automatically once delivered, thus saving capacity on the link and making the network more efficient. Even though you likely understand compression on a Windows file conceptually, it would be wise to understand what is really going on under the hood before making an investment to reduce network costs. Some questions to consider: How does compression really work? Are there situations where it may not work at all?

How it Works

A good, easy-to-visualize analogy to data compression is the use of shorthand when taking dictation. By using a single symbol for common words, a scribe can take written dictation much faster than if he were to spell out each entire word. The basic principle behind compression techniques is thus to use shortcuts to represent common data. Commercial compression algorithms, although similar in principle, vary widely in practice. Each company offering a solution typically has its own trade secrets that it closely guards for a competitive advantage.

There are a few general rules common to all strategies. One technique is to encode a repeated character within a data file. For a simple example, let’s suppose we were compressing this very document, and as a format separator we had a row of solid dashes.

The data for this solid dash line consists of the ASCII character “-” repeated approximately 160 times. When transporting the document across a WAN link without compression, this line would require 160 bytes of data, but with clever compression we can encode it using a special notation such as “-” x 160.

The compression device at the front end would read the 160-character line and realize: “Duh, this is stupid. Why send the same character 160 times in a row?” So it would incorporate a special code to depict the data more efficiently.
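To make the principle concrete, here is a minimal run-length encoding sketch in Python. Commercial WAN compression engines are far more sophisticated (and proprietary), but the core idea of replacing a repeated pattern with a shorter notation is the same:

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated characters into (character, count) pairs."""
    runs: list[tuple[str, int]] = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((ch, 1))               # start a new run
    return runs

separator = "-" * 160          # the solid-dash row from the example above
print(rle_encode(separator))   # [('-', 160)] -- one pair instead of 160 bytes
```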

Perhaps that was obvious, but it is important to know a little bit about compression techniques to understand the limits of their effectiveness. There are many types of data that cannot be efficiently compressed.

For example, many image and voice recordings are already compressed, and there is very little improvement in data size that can be accomplished with further compression. The companies that sell compression-based solutions should be able to provide you with profiles of what to expect based on the type of data sent on your WAN link.

Caching

Suppose you are the administrator for a network, and you have a group of 1,000 users who wake up promptly at 7:00 a.m. each morning and immediately go to MSNBC.com to retrieve the latest news from Wall Street. This synchronized behavior would create 1,000 simultaneous requests for the same remote page on the Internet.

Or, in the corporate world, suppose the CEO of a multinational, 10,000-employee business, right before the holidays, put out an all-points, 20-page PDF file on the corporate site describing the new bonus plan. As you can imagine, all the remote WAN links might get bogged down for hours while each and every employee tried to download this file.

Well, it does not take a rocket scientist to figure out that if somehow the MSNBC home page could be stored locally on an internal server, that would alleviate quite a bit of pressure on your WAN link.

And in the case of the CEO memo, if a single copy of the PDF file was placed locally at each remote office it would alleviate the rush of data.

Caching does just that.

Offered by various vendors, caching can be very effective in many situations, and vendors can legitimately claim tremendous WAN speed improvements in some cases. Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing the WAN link unnecessarily.

You may know that most desktop browsers do their own form of caching already. Many web servers keep a timestamp of their last update to data, and browsers such as the popular Internet Explorer will use a cached copy of a remote page after checking the timestamp.
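For the curious, that timestamp check is visible at the HTTP level as a conditional GET. The sketch below (Python standard library; the URL is a placeholder) asks the server to send the page only if it has changed since our cached copy was fetched:

```python
import urllib.error
import urllib.request

url = "http://example.com/"   # placeholder URL

# First fetch: the server returns the page plus a Last-Modified timestamp.
resp = urllib.request.urlopen(url)
cached_body = resp.read()
last_modified = resp.headers.get("Last-Modified")   # some servers omit this

# Later fetch: present the timestamp; HTTP 304 means "your cached copy is current".
if last_modified:
    req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
    try:
        cached_body = urllib.request.urlopen(req).read()  # changed; refresh the cache
    except urllib.error.HTTPError as err:
        if err.code != 304:
            raise        # a real error, not a cache hit
        # 304 Not Modified: only headers crossed the WAN link.
```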

So what is the downside of caching?

There are two main issues that can arise with caching:

  1. Keeping the cache current. If you access a cached page that is not current, you are at risk of getting old and incorrect information. Some things you may never want cached, for example the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in cache will not be synchronized with changes.
  2. Volume. There are some 60 million websites out on the Internet alone, and each site contains upwards of several megabytes of public information. The amount of data is staggering, and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.

Protocol Spoofing

Historically, there are client-server applications that were developed for an internal LAN. Many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe, using WAN links to tie different locations together.

To get a better picture of what goes on in a chatty application, an analogy may help. Suppose you were sending a letter to family members with your summer vacation pictures and, for some insane reason, you decided to put each picture in a separate envelope and mail them individually on the same mail run. Obviously, this would be extremely inefficient.

What protocol spoofing accomplishes is to fake out the client or server side of the transaction and then send a more compact version of the transaction over the Internet — i.e., put all the pictures in one envelope and send it on your behalf, thus saving you postage…
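A toy sketch of the envelope idea, with an assumed 50 ms WAN round trip (both the latency figure and the function names are invented for illustration): the chatty client pays the latency once per message, while the spoofing proxy acknowledges messages locally and pays it once for the whole batch.

```python
WAN_ROUND_TRIP = 0.050   # seconds; an assumed WAN latency for illustration

def chatty_seconds(messages: list[str]) -> float:
    """Each message crosses the WAN on its own -- one envelope per picture."""
    return len(messages) * WAN_ROUND_TRIP

def spoofed_seconds(messages: list[str]) -> float:
    """A local proxy acks each message immediately, then ships one combined batch."""
    return WAN_ROUND_TRIP

msgs = [f"message {i}" for i in range(30)]
print(f"chatty:  {chatty_seconds(msgs):.2f}s of wire time")    # 1.50s
print(f"spoofed: {spoofed_seconds(msgs):.2f}s of wire time")   # 0.05s
```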

You might ask why not improve the inefficiencies in these chatty applications rather than write software to deal with the problem?

Good question, but that would be the subject of a totally different white paper on how IT organizations must evolve with legacy technology — one that is beyond the scope of this document.

Application Shaping

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping,” which goes by the aliases “traffic shaping,” “bandwidth control,” and perhaps a few others thrown in for good measure. For the IT manager who is held accountable for everything that can and will go wrong on a network, or the CIO who needs to manage network usage policies, this is a dream come true. If you can divvy up portions of your WAN link among various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth.

At the center of application shaping is the ability to identify traffic by type. Is this Citrix traffic, streaming audio, Kazaa peer-to-peer, or something else?

The Fallacy of Internet Ports and Application Shaping

Many applications are expected to use well-known Internet ports when communicating across the Internet. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the FTP application commonly used for downloading files uses the well-known port 21. The fallacy with this scheme, as many operators soon find out, is that many applications do not consistently use a fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don’t want IT personnel to block them at all, so they deliberately design applications not to conform to any formal port assignment scheme. For this reason, any product that purports to block or alter application flows by port should be avoided if your primary mission is to control applications by type.

So, if standard firewalls are inadequate at blocking applications by port what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet. In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, consider a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then, when the train arrived in Los Angeles, hopefully the workers on the other end would have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what? The contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets, and through various pattern matching techniques determines what type of application a particular flow is.

Once a flow is identified, the application shaping tool can enforce the operator’s policies on that flow. Here are some examples:

  • Limit AIM messenger traffic to 100kbps
  • Reserve 500kbps for ShoreTel voice traffic

The list of rules you can apply to traffic types and flow is unlimited.
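To give a feel for the mechanics, here is a drastically simplified classify-then-police sketch in Python. Real application shaping engines match thousands of binary signatures at wire speed; the signatures and rate numbers below are invented for illustration only:

```python
# Invented example signatures -- real products maintain thousands of them.
SIGNATURES = {
    b"BitTorrent protocol": "p2p",
    b"SIP/2.0": "voip",
    b"RTSP/": "streaming",
}

POLICIES_KBPS = {"p2p": 100, "voip": 500}   # cap P2P, reserve bandwidth for VoIP

def classify(payload: bytes) -> str:
    """Guess the application type by scanning the packet payload for known patterns."""
    for pattern, app in SIGNATURES.items():
        if pattern in payload:
            return app
    return "unknown"   # the class an operator must cover with a blanket rule

def policy_kbps(payload: bytes) -> int | None:
    """Look up the shaping policy, if any, for this flow."""
    return POLICIES_KBPS.get(classify(payload))

print(classify(b"\x13BitTorrent protocol ..."))   # -> "p2p"
print(policy_kbps(b"INVITE sip:bob@example.com SIP/2.0"))  # -> 500
```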

The Downside to Application Shaping

Application shaping does work and is a very well-thought-out, logical way to set up a network. After all, complete control over all types of traffic should allow an operator to run a clean ship, right? But as with any euphoric ideal, there are drawbacks in reality that you should be aware of.

  1. The number of applications on the Internet is a moving target. The best application shaping tools do a very good job of identifying several thousand of them, and yet there will always be some traffic that is unknown (estimated at ten percent by experts from the leading manufacturers). The unknown traffic is lumped into the unknown classification, and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a webcast and it is not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to keep current is large, and there are cracks.
  2. Even if the application spectrum could be completely classified, it constantly changes. You must keep licenses current to ensure you have the latest in detection capabilities. And even then it can be quite a task to constantly analyze and change the mix of policies on your network. As bandwidth costs fall, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

Equalizing

Take a minute to think about what is really going on in your network to make you want to control it in the first place.

We can only think of a few legitimate reasons to do anything at all to your WAN: “The network is slow”, or “My VoIP call got dropped”.

If such words were never uttered, then life would be grand.

So you really only have to solve these issues to be successful. Who cares about the actual speed of the WAN link or the number and types of applications running on your network or what port they are using, if you never hear these two complaints?

Equalizing goes to the heart of congestion using the basic principle of time. The reason a network is slow or a voice call breaks up is that the network is stupid: it grants immediate access to anybody who wants to use it, no matter what their need is. That works great much of the day, when networks have plenty of bandwidth to handle all traffic demands, but it is the peak usage demands that play havoc.

Combine the above statement with some simple human-behavior factors. People notice slowness when real-time activities break down: accessing a web page, sending an e-mail, a chat session, a voice call. All of these activities will generate instant complaints if response times degrade from the “norm.”

The other fact of human network behavior is that there are bandwidth-intensive applications: peer-to-peer, large e-mail attachments, database backups. These bandwidth-intensive activities are attributable to a very small number of active users at any one time, which makes them all the more insidious, as they can consume well over ninety percent of a network’s resources at any time. Also, most of these bandwidth-intensive applications can be spread out over time without the user noticing.

Take that database backup, for example: does it really need to be completed in three minutes at 5:30 on a Friday, or can it be done over six minutes and complete at 5:36? That would give your network perhaps fifty percent more bandwidth at no additional cost, and nobody would notice. It is unlikely the user backing up their local disk drive is waiting for it to complete with stopwatch in hand.

It is these unchanging human-factor interactions that allow equalizing to work today, tomorrow, and well into the future without the need for upgrading. It looks at the behavior of the applications and their usage patterns. By adhering to some simple rules of behavior, real-time applications can be distinguished from heavy non-real-time activities and thus granted priority on the fly, without any specific policies set by the IT manager.

How Equalizing Technology Balances Traffic

Each connection on your network constitutes a traffic flow. Flows vary widely from short dynamic bursts, for example, when searching a small website, to large persistent flows, as when performing peer-to-peer file sharing.

Equalizing is determined from the answers to these questions:

  1. How persistent is the flow?
  2. How many active flows are there?
  3. How long has the flow been active?
  4. How much total congestion is currently on the trunk?
  5. How much bandwidth is the flow using relative to the link size?

Once these answers are known, equalizing makes adjustments to flows by adding latency to low-priority tasks so that high-priority tasks receive sufficient bandwidth. Nothing more needs to be said and nothing more needs to be administered to make it happen; once set up, it need not be revisited.
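As a rough sketch of what such a decision could look like in code (our own illustration with invented thresholds, not NetEqualizer’s actual algorithm), a flow only accumulates added latency when the trunk is congested and the flow is both long-lived and consuming a large share of the link:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    age_seconds: float   # how long the flow has been active
    rate_kbps: float     # current consumption
    link_kbps: float     # size of the trunk the flow rides on

def added_latency_ms(flow: Flow, link_utilization: float) -> float:
    """Latency to inject, in ms. Thresholds are invented for illustration."""
    if link_utilization < 0.85:
        return 0.0                        # no congestion: touch nothing
    if flow.age_seconds < 5:
        return 0.0                        # short dynamic bursts stay interactive
    share = flow.rate_kbps / flow.link_kbps
    if share < 0.05:
        return 0.0                        # small flows (chat, e-mail) pass untouched
    return min(200.0, share * 1000.0)     # persistent hogs absorb the wait

hog = Flow(age_seconds=60, rate_kbps=800, link_kbps=1544)
print(added_latency_ms(hog, link_utilization=0.95))   # 200.0 ms for this hog
```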

Exempting Priority Traffic

Many people point out that although equalizing technology sounds promising, it may be prone to mistakes with such a generic approach to traffic shaping. What if a user has a high-priority, bandwidth-intensive video stream that must get through? Wouldn’t it be the target of a misapplied rule to slow it down?

The answer is yes, but what we have found is that high-bandwidth priority streams are usually few in number and known by the administrator; they rarely, if ever, pop up spontaneously, so it is quite easy to exempt such flows since they are the rare exception. This is much easier than trying to classify every flow on your network at all times.

Connection Limits

Often overlooked as a source of network congestion is the number of connections a user generates. A connection can be defined as a single user communicating with a single Internet site. Take accessing the Yahoo home page, for example. When you access the Yahoo home page, your browser goes out to Yahoo and starts following various links on the page to retrieve all the data. This data is typically not all at the same Internet address, so your browser may access several different public Internet locations to load the Yahoo home page — perhaps as many as ten connections over a short period of time. Routers and access points on your local network must keep track of these “connections” to ensure that the data gets routed back to the correct browser. Although ten connections to the Yahoo home page over a few seconds is not excessive, there are some very poorly behaved applications (most notably Gnutella, BearShare, and BitTorrent) that are notorious for opening up hundreds or even thousands of connections in a short period of time. This type of activity is just as detrimental to your network as other bandwidth-eating applications and can bring your network to a grinding halt. The solution is to make sure any traffic management solution you deploy incorporates some form of connection-limiting feature.
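A minimal sketch of the idea, assuming the device can observe each connection as it opens and closes; the cap of 50 connections per user is an arbitrary example figure:

```python
from collections import defaultdict

MAX_CONNECTIONS = 50                        # arbitrary example cap per user
active: dict[str, set] = defaultdict(set)   # source IP -> open (dest IP, port) pairs

def allow_connection(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Admit the new connection only if this user is under the cap."""
    if len(active[src_ip]) >= MAX_CONNECTIONS:
        return False                        # likely a P2P client gone wild
    active[src_ip].add((dst_ip, dst_port))
    return True

def close_connection(src_ip: str, dst_ip: str, dst_port: int) -> None:
    """Forget the connection so the user's count stays accurate."""
    active[src_ip].discard((dst_ip, dst_port))
```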

Simple Rate Limits

The most common and widely used form of bandwidth control is the simple rate limit. This involves putting a fixed rate cap on a single IP address, as is often the case with rate plans promised by ISPs to their user community. “2 meg up and 1 meg down” is a common battle cry, but what happens in reality with such rate plans?

Although setting simple rate limits is far superior to running a network wide open, we often call this approach “set, forget, and pray”!

Take, for example, six users sharing a T1, where each user gets a rate of 256kbps up and 256kbps down. Six users each consuming their full share of 256 kilobits per second is about the maximum a T1 can handle. Although it is unlikely that you will hit gridlock with just six users, when the number of users reaches thirty, gridlock becomes likely, and with forty or fifty users, it becomes a certainty that it will happen quite often. It is not uncommon for schools, wireless ISPs, and executive suites to have sixty to as many as 200 users sharing a single T1, with simple fixed user rate limits as the only control mechanism.
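For reference, a fixed per-user cap of this sort is commonly implemented with something like a token bucket; here is a minimal sketch using the 256kbps figure from the example above:

```python
import time

class TokenBucket:
    """Fixed rate cap: tokens accrue at rate_bps; sending a packet spends them."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.stamp = time.monotonic()

    def try_send(self, packet_bits: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False   # over the cap: drop or queue the packet

user_cap = TokenBucket(rate_bps=256_000, burst_bits=256_000)  # the 256kbps plan above
print(user_cap.try_send(12_000))   # True until the bucket runs dry
```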

Yes, simple fixed user rate limiting does resolve the trivial case where one or two users, left unchecked, can consume all available bandwidth; however, unless your network is undersold, there is never any guarantee that busy-hour conditions will not result in gridlock.

Conclusion

The common thread in all WAN optimization techniques is that they must make intelligent assumptions about data patterns or human behavior to be effective. After all, in the end, the speed of the link is just that: a fixed speed that cannot be exceeded. All of these techniques have their merits and drawbacks; the trick is finding the solution best suited to your network’s needs. Hopefully the background information contained in this document will help you, the consumer, make an informed decision.

The True Price of Bandwidth Monitoring


By Art Reisman


For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at monitoring your Internet link has two parts: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. Ironically, we assert that total costs rise with the complexity of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking, looking for optimal performance.

But is it fair to assume that higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work of adjusting the network comes to fruition, the associated adjustments can remain statically in place. In reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s. The function of the computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, and so on, an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving only what we would call “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users who will be well above it. You don’t need a fancy tool to see what they are doing; abuse becomes obvious from a glance at a simple usage report.
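A sketch of how little code such a report takes, assuming per-user byte counts are already being collected somewhere (the names and totals below are made up): flag anyone far above the mean and eyeball the rest.

```python
from statistics import mean

# Assumed input: weekly bytes transferred per user, collected elsewhere.
usage = {"alice": 2.1e9, "bob": 1.8e9, "carol": 2.4e9, "dave": 48.0e9}

avg = mean(usage.values())

for user, total in sorted(usage.items(), key=lambda kv: -kv[1]):
    flag = "  <-- well above the mean" if total > 2 * avg else ""   # crude outlier rule
    print(f"{user:8s} {total / 1e9:6.1f} GB{flag}")
```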

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

  • List of monitoring tools compiled by Stanford
  • Planetmy
  • Linux Tips
  • How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

APconnections Announces NetEqualizer Lifetime Buyer Protection Policy


This week, we announced the launch of the NetEqualizer Lifetime Buyer Protection Policy. In the event of an irreparable failure of a NetEqualizer unit at any time, or when it is time to retire a unit, customers will have the option to purchase a replacement unit and apply a 50 percent credit of their original unit purchase price toward the new unit. (For current pricing, register for our price list.) This includes units that are more than three years old (the expected useful life for hardware) and in service at the time of failure.

For example, if you purchased a unit in 2003 for $4000 and were looking to replace it or upgrade with a newer model, APconnections would kick in a $2000 credit toward the replacement purchase.

The Policy will be in addition to the existing optional yearly NetEqualizer Hardware Warranty (NHW), which offers customers cost-free repairs or replacement of any malfunctioning unit while NHW is in effect (read details on NHW).

Our decision to implement the policy was a matter of customer peace-of-mind rather than necessity. While the failure rate of any NetEqualizer unit is ultimately very low, we want customers to know that we stand behind our products – even if it’s several years down the line.

To qualify:

  • users must be the original owner of the NetEqualizer unit,
  • the customer must have maintained a support contract that has been current within the last 18 months (lapses of support longer than 18 months will void our replacement policy), and
  • the unit must have been in use on your network at the time of failure.

Shipping is not included in the discounted price. Purchasers of the one-year NetEqualizer hardware warranty (NHW) will still qualify for full replacement at no charge while under hardware warranty.  Contact us for more details by emailing sales@apconnections.net, or calling 303.997.1300 x103 (International), or 1.888.287.2492 (US Toll Free).

Note: This Policy does not apply to the NetEqualizer Lite.

Deep Packet Inspection Abuse In Iran Raises Questions About DPI Worldwide


Over the past few years, we at APconnections have made our feelings about Deep Packet Inspection clear, completely abandoning the practice in our NetEqualizer technology more than two years ago. While there may be times that DPI is necessary and appropriate, its use in many cases can threaten user privacy and the open nature of the Internet. And, in extreme cases, DPI can even be used to threaten freedom of speech and expression. As we mentioned in a previous article, this is currently taking place in Iran.

Although these extreme invasions of privacy are most likely not occurring in the United States, their existence in Iran is bringing increasing attention to the slippery slope that is Deep Packet Inspection. A July 10 Huffington Post article reads:

“Before DPI becomes more widely deployed around the world and at home, the U.S. government ought to establish legitimate criteria for authorizing the use of such control and surveillance technologies. The harm to privacy and the power to control the Internet are so disturbing that the threshold for using DPI must be very high. The use of DPI for commercial purposes would need to meet this high bar. But it is not clear that there is any commercial purpose that outweighs the potential harm to consumers and democracy.”

This potential harm to the privacy and rights of consumers was a major factor behind our decision to discontinue the use of DPI in any of our technology and invest in alternative means for network optimization. We hope that the ongoing controversy will be reason for others to do the same.

What NetEqualizer Users Are Saying (Updated June 2009)


Editor’s Note: As NetEqualizer’s popularity has grown, more and more users have been sharing their experiences on message boards and listservs across the Internet. Just to give you an idea of what they’re saying, here are a few of the reviews and discussion excerpts that have been posted online over the past several months…

Wade LeBeau — The Daily Journal Network Operations Manager

NetEqualizer is one of the most cost-effective management units on the market, and we found the unit easy to install — right out of the box. We made three setting changes to match our network using the web (browser) interface, connected the unit, and right away traffic shaping started — about 10 minutes total setup time. The unit has two Ethernet ports: one port toward your user network, the other toward your broadband connection/server if applicable. A couple of simple clicks and you can see reporting live as it happens. In testing, we ran our unit for 30 days and saw our broadband reports stabilize and our users receiving the same slices of broadband access. With the NetEqualizer, there is no burden of extensive policies to manage…. The NetEqualizer is a nice tool to add to any network of any size. Businesses can see how important the Internet is and how hungry users can be for information.

__________________________________________________________________________________________________

DSL Reports, April 2009

The NetEqualizer has resulted in dramatically improved service to our customers. Most of the time, our customers are seeing their full bandwidth. The only time they don’t see it now is when they’re downloading big files. And, when they don’t see full performance, it’s only for the brief period that the AP is approaching saturation. The available bandwidth is re-evaluated every 2 seconds, so the throttling periods are often brief.

Bottom line to this is that we can deliver significantly more data through the same AP. The customers hitting web pages, checking e-mail, etc. virtually always see full bandwidth, and the hogs don’t impact these customers. Even the hogs see better performance (although that wasn’t one of my priorities).

Click here to read more.