NetEqualizer Demo Sale Offers 30-Percent Discount On NE2000-10


Over the next several weeks, we’ll be launching our 2009 NetEqualizer demo sale, offering customers significantly reduced prices on lightly used NE2000-10 models. The sale was a resounding success last year and available units are limited, so interested parties should be quick to claim their NetEqualizers as the 2009 demos become available.

The list price for new NetEqualizer NE2000-10 models is $2,465, but the demos will be sold for only $1,750, a 30-percent discount. Each unit will be fully updated with the most recent NetEqualizer software and covered by the optional NetEqualizer Hardware Warranty (NHW). As with all units purchased directly from APconnections, the NetEqualizers will also come with our unmatched customer service and support.

The NE2000-10 is capable of shaping up to 10 Mbps and supporting 10,000 users. And, because of its versatility, the unit is perfect for universities, libraries, office parks, and businesses of all sizes. For more information on the entire NetEqualizer line, visit our Web site at http://www.netequalizer.com or view our full price list at http://www.netequalizer.com/neteqpricelist.php.

For purchasing information or other questions, please contact us at sales@apconnections.net or 303-997-1300.

Net Neutrality Bill Won’t End Conflicts Between Users and Providers


This week, Representatives Edward Markey, a Massachusetts Democrat, and Anna Eshoo, a California Democrat, introduced the Internet Freedom Preservation Act, aimed at protecting the rights of Internet users and ultimately net neutrality. Yet, before net neutrality advocates unequivocally praise the bill, it should be noted that it protects the rights of Internet service providers as well. For example, as long as ISPs are candid with their customers in regard to their network optimization practices, the bill does allow for “reasonable network management,” stating:

“Nothing in this section shall be construed to prohibit an Internet access provider from engaging in reasonable network management consistent with the policies and duties of nondiscrimination and openness set forth in this Act. For purposes of subsections (b)(1) and (b)(5), a network management practice is a reasonable practice only if it furthers a critically important interest, is narrowly tailored to further that interest, and is the means of furthering that interest that is the least restrictive, least discriminatory, and least constricting of consumer choice available. In determining whether a network management practice is reasonable, the Commission shall consider, among other factors, the particular network architecture or technology limitations of the provider.”

While this stipulation is extremely important in the protection it provides Internet service providers, it is likely to come into conflict with some Internet users’ ideas of net neutrality. For example, the bill also states that it is ISPs’ “duty to not block, interfere with, discriminate against, impair or degrade the ability of any person to use an Internet access service to access, use, send, post, receive or offer any lawful content, application or service through the Internet.” However, even users of the NetEqualizer, one of the more hands-off approaches to network management, have no choice but to target the behavior of certain heavy customers. One person’s penchant for downloading music, legally or not, can significantly impact the quality of service for everyone else. And increasing bandwidth just to meet the needs of a few users isn’t reasonable either.

This would seem to be a perfect case of the reasonable network management allowed under the proposed bill. Yet many net neutrality advocates tend to quickly dismiss any management as an infringement upon users’ rights. The protection of users’ rights will likely get the attention in discussions of these types of bills, but there should be just as much emphasis on the provider’s right to reasonably manage its network, and on what this may mean for the idea of unadulterated net neutrality.

The fact that this bill includes the right to reasonably manage one’s network indicates that some form of management is typically necessary for a network to run at its full potential. The key is finding some middle ground.

Related article (September 22, 2009): FCC rules in favor of net neutrality; the commentary on this blog post is great and worth the read.

APconnections Study Shows Administrators Prioritize Results over Bandwidth Reporting


Today we released the results of our month-long study into the needs of bandwidth monitoring technology users, which sought to determine the priority users place on detailed reporting compared to the overall impact on network optimization. Based on the results of a NetEqualizerNews.com poll, 80-percent of study participants said that a smoothly running network was more important than the information provided by detailed reporting.

Ultimately, the study confirms what we’ve believed for years. While some reporting is essential, complicated reporting tools tend to be overkill. When users simply want their networks to run smoothly and efficiently, detailed reporting isn’t always necessary and certainly isn’t the most cost-effective solution.

Detailed bandwidth monitoring technology is not only more expensive from the start, but an administrator is also likely to spend more time making adjustments and looking for optimal performance. The result is a continuous cycle of unnecessarily spent manpower and money.

We go into further detail on the subject in our recent blog post entitled “The True Price of Bandwidth Monitoring.” The full article can be found at https://netequalizernews.com/2009/07/16/the-true-price-of-bandwidth-monitoring/.

Results From Comcast’s New Bandwidth Shaping Approach Support Long-Time NetEqualizer Strategy


This week, a DSL Reports article explored the favorable customer response to the most recent changes in Comcast’s bandwidth shaping strategy. The article states:

“Last month we explored how Comcast and Sandvine’s network management technology continues to evolve. Unlike Comcast’s last system, which throttled upstream traffic for all users regardless of consumption, this new system identifies customers and throttles back consumption only if they’re on a congested node — and they’re a particular reason why. Even then, we haven’t seen complaints from users in our Comcast forum, which is a very good sign.”

Several months ago, we documented the similarities and differences between Comcast’s network management techniques and those of NetEqualizer. If you go back and read our older article, it sounds like these latest changes address many of the issues we raised and inch Comcast’s approach even closer to that of NetEqualizer. The key here is that, like NetEqualizer, they now only hit the users that are specifically breaking the camel’s back, and, as the author points out, there are no complaints.

Although nobody from Comcast has ever conferred with us on our technology, we believe this new, more specific shaping is very close to what we have been doing for years, and with similar results: no complaints.

To read the full DSL Reports article, click here.

$1000 Discount Offered Through NetEqualizer Cash For Conversion Program


After witnessing the overwhelming popularity of the government’s Cash for Clunkers new car program, we’ve decided to offer a similar deal to potential NetEqualizer customers. Therefore, this week, we’re announcing the launch of our Cash for Conversion program. The program offers owners of select brands (see below) of network optimization technology a $1000 credit toward the list-price purchase of NetEqualizer NE2000-10 or higher models (click here for a full price list). All owners have to do is send us their old (working or not) or out-of-license bandwidth control technology. Products from the following manufacturers will be accepted:

  • Exinda
  • Packeteer/Blue Coat
  • Allot
  • Cymphonics
  • Procera

In addition to receiving the $1000 credit toward a NetEqualizer, program participants will also have the peace of mind of knowing that their old technology will be handled responsibly through refurbishment or electronics recycling programs.

Only the listed manufacturers’ products will qualify. Offer good through the Labor Day weekend (September 7, 2009). For more information, contact us at 303-997-1300 or admin@apconnections.net.

Top Tips To Quantify The Cost Of WAN Optimization


Editor’s Note: As we mentioned in a recent article, there’s often some confusion when it comes to how WAN optimization fits into the overall network optimization industry — especially when compared to Internet optimization. Although similar, the two techniques require different approaches to optimization. What follows are some simple questions to ask your vendor before you purchase a WAN optimization appliance. For the record, the NetEqualizer is primarily used for Internet optimization.

When presenting a WAN optimization ROI argument, your vendor rep will clearly make a compelling case for savings.  The ROI case will be made by amortizing the cost of equipment against your contracted rate from your provider. You can and should trust these basic raw numbers. However, there is more to evaluating a WAN optimization (packet shaping) appliance than comparing equipment cost against bandwidth savings. Here are a few things to keep in mind:

  1. The amortization schedule should also make reasonable assumptions about future costs for T1, DS3, and OC3 links. Contracted rates have been dropping in many metro areas, and it is reasonable to assume that bandwidth costs will be perhaps 50-percent lower two to three years out.
  2. If you do increase bandwidth, the licensing costs for the traffic shaping equipment can increase substantially. You may also find yourself in a situation where you need to do a forklift upgrade as you outrun your current hardware.
  3. Recurring licensing costs are often mandatory to keep your equipment current. Without upgrading your license, your deep packet inspection (layer 7 shaping filters) will become obsolete.
  4. Ongoing labor costs to tune and re-tune your WAN optimization appliance can often run thousands of dollars per week.
  5. The good news is that optimization companies will normally allow you to try an appliance before you buy. Make sure you take the time to manage the equipment with your own internal techs or IT consultant to get an idea of how it will fit into your network. The honeymoon with new equipment (supported by a well-trained pre-sales team) can be short-lived. After the free pre-sale support has expired, you will be on your own.

There are certainly times when WAN optimization makes sense, yet in many cases, what appears to be a no-brainer decision at first will be called into question as costs mount down the line. Hopefully these five contributing factors will paint a clearer picture of what to expect.
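The amortization caveat in point 1 can be made concrete with a quick back-of-the-envelope calculation. The figures below are purely illustrative, not quotes from any vendor:

```python
def wan_opt_savings(appliance_cost, annual_license, monthly_link_cost,
                    bandwidth_saved_frac, link_price_decline_per_year,
                    years=3):
    """Naive ROI: bandwidth dollars saved minus appliance and license costs.

    Assumes the contracted link rate falls each year (see point 1 above).
    """
    saved = 0.0
    link = monthly_link_cost
    for _ in range(years):
        saved += 12 * link * bandwidth_saved_frac   # dollars of bandwidth avoided
        link *= 1 - link_price_decline_per_year     # next year's cheaper link rate
    return saved - appliance_cost - annual_license * years

# $10,000 appliance, $2,000/yr license, $1,000/mo link, 50% bandwidth saved,
# link prices falling 20% per year: the three-year "savings" turn negative.
print(wan_opt_savings(10000, 2000, 1000, 0.5, 0.2))  # -1360.0
```

Notice that the same deal looks profitable if you assume a flat link price, which is exactly why the amortization assumptions deserve scrutiny.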

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Hitchhiker’s Guide To Network And WAN Optimization Technology


Manufacturers make all sorts of claims about speeding up your network with special technologies. In the following pages, we’ll take a look at the different types of technologies and explain them in such a way that you, the consumer, can make an informed decision about what is right for you.

Table of Contents

  • Compression – Relies on data patterns that can be represented more efficiently. Best suited for point-to-point leased lines.
  • Caching – Relies on human behavior: accessing the same data over and over. Best suited for point-to-point leased lines, but also viable for Internet connections and VPN tunnels.
  • Protocol Spoofing – Best suited for point-to-point WAN links.
  • Application Shaping – Controls data usage based on spotting specific patterns in the data. Best suited for both point-to-point leased lines and Internet connections. Very expensive to maintain, in both initial cost, ongoing costs, and labor.
  • Equalizing – Makes assumptions on what needs immediate priority based on data usage. Excellent choice for Internet connections and clogged VPN tunnels.
  • Connection Limits – Prevents access gridlock in routers and access points. Best suited for Internet access where p2p usage is clogging your network.
  • Simple Rate Limits – Prevents one user from getting more than a fixed amount of data. Best suited as a stopgap first effort for remedying a congested Internet connection on a limited budget.

Compression

At first glance, the term compression seems intuitively obvious. Most people have at one time or another extracted a compressed Windows Zip file. Examining the file sizes pre- and post-extraction reveals there is more data on the hard drive after the extraction. WAN compression products use some of the same principles, only they compress the data on the WAN link and decompress it automatically once delivered, thus saving space on the link and making the network more efficient. Even though you likely understand compression of a Windows file conceptually, it would be wise to understand what is really going on under the hood before making an investment to reduce network costs. Some questions to consider: How does compression really work? Are there situations where it may not work at all?

How it Works

A good, easy-to-visualize analogy to data compression is the use of shorthand when taking dictation. By using a single symbol for common words, a scribe can take written dictation much faster than if he were to spell out each entire word. The basic principle behind compression techniques, then, is to use shortcuts to represent common data. Commercial compression algorithms, although similar in principle, vary widely in practice. Each company offering a solution typically has its own trade secrets that it closely guards for a competitive advantage.

There are a few general rules common to all strategies. One technique is to encode a repeated character within a data file. For a simple example, let’s suppose we were compressing this very document, and as a format separator we had a row of solid dashes.

The data for this solid dash line consists of the ASCII character “-” repeated approximately 160 times. When transporting the document across a WAN link without compression, this line would require 160 bytes of data, but with clever compression we can encode it using a special notation such as “-” × 160.

The compression device at the front end would read the 160-character line and realize: “Duh, this is stupid. Why send the same character 160 times in a row?” So it would incorporate a special code to depict the data more efficiently.
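That “special code” is classic run-length encoding. Here is a minimal Python sketch of the idea; real WAN compressors use far more sophisticated dictionary schemes, but the principle is the same:

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated characters into (char, count) pairs."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((ch, 1))               # start a new run
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Expand (char, count) pairs back into the original string."""
    return "".join(ch * count for ch, count in runs)

separator = "-" * 160
encoded = rle_encode(separator)    # one ('-', 160) pair instead of 160 bytes
assert rle_decode(encoded) == separator
```

Note what happens on data with no repeats: `rle_encode("abc")` produces three pairs, which is bigger than the input. This is exactly why already-optimized data gains little from compression, as discussed below.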

Perhaps that was obvious, but it is important to know a little bit about compression techniques to understand the limits of their effectiveness. There are many types of data that cannot be efficiently compressed.

For example, many image and voice recordings are already optimized, and there is very little improvement in data size that can be accomplished with compression techniques. The companies that sell compression-based solutions should be able to provide you with profiles of what to expect based on the type of data sent on your WAN link.

Caching

Suppose you are the administrator for a network, and you have a group of 1,000 users who wake up promptly at 7:00 am each morning and immediately go to MSNBC.com to retrieve the latest news from Wall Street. This synchronized behavior would create 1,000 simultaneous requests for the same remote page on the Internet.

Or, in the corporate world, suppose the CEO of a multinational 10,000-employee business, right before the holidays, put out an all-points, 20-page PDF file on the corporate site describing the new bonus plan. As you can imagine, all the remote WAN links might get bogged down for hours while every employee tried to download the file.

Well, it does not take a rocket scientist to figure out that if the MSNBC home page could somehow be stored locally on an internal server, quite a bit of pressure would be taken off your WAN link.

And in the case of the CEO memo, if a single copy of the PDF file was placed locally at each remote office it would alleviate the rush of data.

Caching does just that.

Offered by various vendors, caching can be very effective in many situations, and vendors can legitimately claim tremendous WAN speed improvements. Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing the WAN link unnecessarily.

You may know that most desktop browsers do their own form of caching already. Many web servers keep a time stamp of their last update to data, and browsers such as the popular Internet Explorer will use a cached copy of a remote page after checking the time stamp.

So what is the downside of caching?

There are two main issues that can arise with caching:

  1. Keeping the cache current. If you access a cached page that is not current, you are at risk of getting old and incorrect information. Some things you may never want cached at all, for example the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in the cache will not be synchronized with changes.
  2. Volume. There are some 60 million web sites out on the Internet alone. Each site contains upwards of several megabytes of public information. The amount of data is staggering, and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.
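The time-stamp freshness check described above can be sketched as a tiny cache. The URL and the 60-second freshness window here are illustrative only:

```python
import time

class Cache:
    """Tiny timestamp-checked cache: serve a stored copy only while fresh."""
    def __init__(self, max_age_seconds: float):
        self.max_age = max_age_seconds
        self.store = {}            # url -> (fetched_at, payload)

    def get(self, url, fetch):
        entry = self.store.get(url)
        if entry and time.time() - entry[0] < self.max_age:
            return entry[1]                       # cache hit: no WAN traffic
        payload = fetch(url)                      # cache miss: traverse the link
        self.store[url] = (time.time(), payload)
        return payload

fetch_count = 0
def fetch(url):
    """Stand-in for an actual request across the WAN link."""
    global fetch_count
    fetch_count += 1
    return f"contents of {url}"

cache = Cache(max_age_seconds=60)
for _ in range(1000):                 # 1,000 users request the same page
    cache.get("http://msnbc.com", fetch)
assert fetch_count == 1               # only one request crossed the WAN link
```

The same sketch also exposes the downside in point 1: until `max_age` expires, every user sees the stored copy even if the origin page has changed.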

Protocol Spoofing

Historically, many client-server applications were developed for an internal LAN. Many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe, using WAN links to tie different locations together.

To get a better picture of what goes on in a chatty application, an analogy may help. Suppose you were sending a letter to family members with your summer vacation pictures and, for some insane reason, you decided to put each picture in a separate envelope and mail them individually on the same mail run. Obviously, this would be extremely inefficient.

What protocol spoofing accomplishes is to fake out the client or server side of the transaction and then send a more compact version of the transaction over the Internet; i.e., it puts all the pictures in one envelope and sends it on your behalf, thus saving you postage.
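The envelope analogy translates directly into batching. In this toy sketch (the message counts and batch size are invented for illustration), each call to `send` stands in for one WAN round trip:

```python
def chatty_send(messages, send):
    """Chatty LAN-style app: one WAN round trip per message."""
    for m in messages:
        send([m])

def spoofed_send(messages, send, batch_size=10):
    """Spoofing proxy: acknowledge locally, batch messages into fewer trips."""
    for i in range(0, len(messages), batch_size):
        send(messages[i:i + batch_size])

trips = []
send = lambda batch: trips.append(len(batch))   # record each WAN round trip

msgs = [f"msg{i}" for i in range(40)]
chatty_send(msgs, send)
chatty_trips = len(trips)
trips.clear()
spoofed_send(msgs, send)
assert chatty_trips == 40 and len(trips) == 4   # 40 round trips become 4
```

The spoofing device earns its keep by answering the chatty protocol locally, so neither endpoint has to be rewritten.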

You might ask why not improve the inefficiencies in these chatty applications rather than write software to deal with the problem?

Good question, but that would be the subject of a totally different white paper on how IT organizations must evolve with legacy technology, and it’s beyond the scope of this one.

Application Shaping

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping,” with aliases of “traffic shaping,” “bandwidth control,” and perhaps a few others thrown in for good measure. For the IT manager who is held accountable for everything that can and will go wrong on a network, or the CIO who needs to manage network usage policies, this is a dream come true. If you can divvy up portions of your WAN link among various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth.

At the center of application shaping is the ability to identify traffic by type. Is this Citrix traffic, streaming audio, Kazaa peer-to-peer, or something else?

The Fallacy of Internet Ports and Application Shaping

Many applications are expected to use well-defined Internet ports when communicating across the Internet. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the FTP application commonly used for downloading files uses the well-known port 21. The fallacy with this scheme, as many operators soon find out, is that many applications do not consistently use a fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don’t want IT personnel to block them at all, so they deliberately design applications not to conform to any formal port assignment scheme. For this reason, any product that purports to block or alter application flows by port should be avoided if your primary mission is to control applications by type.
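The fallacy is easy to demonstrate. The port table below is a tiny excerpt of the standard well-known assignments; the point is what happens to an application that deliberately picks a random high port:

```python
WELL_KNOWN = {21: "ftp", 25: "smtp", 80: "http"}   # excerpt of standard ports

def classify_by_port(dst_port: int) -> str:
    """Naive firewall-style classification: look the port up, or give up."""
    return WELL_KNOWN.get(dst_port, "unknown")

assert classify_by_port(21) == "ftp"
# A p2p client deliberately using a random high port slips straight through:
assert classify_by_port(48213) == "unknown"
```

No matter how large the port table grows, traffic that refuses to use a fixed port stays invisible to this approach.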

So, if standard firewalls are inadequate at blocking applications by port what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet. In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, consider a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then, when the train arrived in Los Angeles, hope the workers on the other end have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what? The contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets and, through various pattern-matching techniques, determines what type of application a particular flow is.

Once a flow is identified, the application shaping tool can enforce the operator’s policies on that flow. Here are some examples:

  • Limit AIM messenger traffic to 100 Kbps
  • Reserve 500 Kbps for ShoreTel voice traffic

The list of rules you can apply to traffic types and flows is unlimited.
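A minimal sketch of how a classify-then-police pipeline hangs together. The byte signatures and rates here are invented for illustration; real products ship thousands of professionally maintained patterns:

```python
POLICIES_KBPS = {          # per-flow caps/reservations by classified type
    "aim": 100,            # limit instant-messenger traffic to 100 Kbps
    "voip": 500,           # reserve 500 Kbps for voice traffic
    "unknown": 50,         # the blanket decision for unclassifiable flows
}

SIGNATURES = {             # toy payload patterns standing in for real DPI
    b"AIM:": "aim",
    b"RTP": "voip",
}

def classify(payload: bytes) -> str:
    """Match the payload against known signatures; fall back to 'unknown'."""
    for pattern, app in SIGNATURES.items():
        if pattern in payload:
            return app
    return "unknown"

def allowed_rate_kbps(payload: bytes) -> int:
    """Look up the policy for whatever the classifier decided the flow is."""
    return POLICIES_KBPS[classify(payload)]

assert allowed_rate_kbps(b"AIM:hello") == 100
assert allowed_rate_kbps(b"..RTP..") == 500
assert allowed_rate_kbps(b"randomdata") == 50   # the 'unknown' blanket rule
```

The last line is the weak spot discussed below: any flow the signature table misses lands in the catch-all class, whether it matters or not.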

The Downside to Application Shaping

Application shaping does work and is a very well thought out, logical way to set up a network. After all, complete control over all types of traffic should allow an operator to run a clean ship, right? But as with any euphoric ideal, there are real-world drawbacks you should be aware of.

  1. The number of applications on the Internet is a moving target. The best application shaping tools do a very good job of identifying several thousand of them, yet there will always be some traffic that is unknown (estimated at ten percent by experts from the leading manufacturers). That traffic is lumped into an unknown classification, and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a webcast and is not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to keep current is large, and there are cracks.
  2. Even if the application spectrum could be completely classified at any one moment, it constantly changes. You must keep licenses current to ensure you have the latest detection capabilities. And even then it can be quite a task to constantly analyze and change the mix of policies on your network. As bandwidth costs lessen, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

Equalizing

Take a minute to think about what is really going on in your network to make you want to control it in the first place.

We can only think of a few legitimate reasons to do anything at all to your WAN: “The network is slow”, or “My VoIP call got dropped”.

If such words were never uttered, then life would be grand.

So you really only have to solve these issues to be successful. Who cares about the actual speed of the WAN link, the number and types of applications running on your network, or what ports they are using, if you never hear these two complaints?

Equalizing goes to the heart of congestion using the basic principle of time. The reason a network is slow or a voice call breaks up is that the network is stupid. It grants immediate access to anybody who wants to use it, no matter what their need is. That works great much of the day, when networks have plenty of bandwidth to handle all traffic demands, but it is the peak usage demands that play havoc.

Combine that statement with some simple human behavior factors. People notice slowness when real-time activities break down: accessing a web page, sending an e-mail, a chat session, a voice call. All of these activities will generate instant complaints if response times degrade from the “norm.”

The other fact of human network behavior is that there are bandwidth-intensive applications: peer-to-peer, large e-mail attachments, database backups. These bandwidth-intensive activities are attributable to a very small number of active users at any one time, which makes them all the more insidious, as they can consume well over ninety percent of a network’s resources. Also, most of these bandwidth-intensive tasks can be spread out over time without the user noticing.

Take that database backup, for example: does it really need to complete in three minutes starting at 5:30 on a Friday, or can it run over six minutes and finish at 5:36? That would give your network perhaps fifty percent more bandwidth at no additional cost, and nobody would notice. It is unlikely the user backing up a local disk drive is waiting for it to complete, stopwatch in hand.

It is these unchanging human-factor interactions that allow equalizing to work today, tomorrow, and well into the future without need for upgrading. It looks at the behavior of applications and usage patterns. By adhering to some simple rules of behavior, real-time applications can be distinguished from heavy non-real-time activities and granted priority on the fly, without any specific policies set by the IT manager.

How Equalizing Technology Balances Traffic

Each connection on your network constitutes a traffic flow. Flows vary widely from short dynamic bursts, for example, when searching a small website, to large persistent flows, as when performing peer-to-peer file sharing.

Equalizing is determined from the answers to these questions:

  1. How persistent is the flow?
  2. How many active flows are there?
  3. How long has the flow been active?
  4. How much total congestion is currently on the trunk?
  5. How much bandwidth is the flow using relative to the link size?

Once these answers are known, equalizing makes adjustments to flows by adding latency to low-priority tasks so that high-priority tasks receive sufficient bandwidth. Nothing more needs to be said, and nothing more needs to be administered to make it happen; once set up, it need not be revisited.
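The five questions above can be combined into a simple scoring pass. The thresholds below are illustrative placeholders, not NetEqualizer's actual parameters:

```python
def equalize(flows, link_kbps, congestion_threshold=0.85):
    """Pick the flows to penalize (add latency to) when the trunk is congested.

    Each flow is a dict with 'name', 'rate_kbps' (current usage), and
    'age_s' (how long it has been active). Thresholds are illustrative.
    """
    total = sum(f["rate_kbps"] for f in flows)
    if total / link_kbps < congestion_threshold:
        return []                     # no congestion: leave everyone alone
    penalized = []
    for f in flows:
        share = f["rate_kbps"] / link_kbps
        # persistent AND large relative to the link => likely a non-real-time hog
        if share > 0.10 and f["age_s"] > 20:
            penalized.append(f["name"])
    return penalized

flows = [
    {"name": "voip-call", "rate_kbps": 80,   "age_s": 300},  # small: untouched
    {"name": "web-page",  "rate_kbps": 200,  "age_s": 2},    # short burst: untouched
    {"name": "p2p-bulk",  "rate_kbps": 1200, "age_s": 600},  # persistent hog
]
assert equalize(flows, link_kbps=1544) == ["p2p-bulk"]
```

Note that no rule names an application: the VoIP call and the web page escape the penalty purely because of their behavior, which is the whole point of equalizing.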

Exempting Priority Traffic

Many people point out that although equalizing technology sounds promising, it may be prone to mistakes with such a generic approach to traffic shaping. What if a user has a high-priority, bandwidth-intensive video stream that must get through? Wouldn’t it be the target of a misapplied rule to slow it down?

The answer is yes, but what we have found is that high-bandwidth priority streams are usually few in number and known to the administrator; they rarely, if ever, pop up spontaneously, so it is quite easy to exempt such flows as the rare exceptions. This is much easier than trying to classify every flow on your network at all times.

Connection Limits

Often overlooked as a source of network congestion is the number of connections a user generates. A connection can be defined as a single user communicating with a single Internet site. Take accessing the Yahoo home page, for example. When you access the Yahoo home page, your browser follows various links on the page to retrieve all the data. This data is typically not all at the same Internet address, so your browser may access several different public Internet locations to load the page, perhaps as many as ten connections over a short period of time. Routers and access points on your local network must keep track of these connections to ensure that the data gets routed back to the correct browser. Although ten connections to the Yahoo home page over a few seconds is not excessive, there are very poorly behaved applications (most notably Gnutella, BearShare, and BitTorrent) that are notorious for opening up hundreds or even thousands of connections in a short period of time. This type of activity is just as detrimental to your network as other bandwidth-eating applications and can bring your network to a grinding halt. The solution is to make sure any traffic management solution you deploy incorporates some form of connection limiting features.
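A per-user connection cap can be sketched in a few lines. The limit of 50 and the addresses are arbitrary examples:

```python
from collections import defaultdict

class ConnectionLimiter:
    """Cap the number of simultaneous connections any one user may open."""
    def __init__(self, max_per_user: int):
        self.max_per_user = max_per_user
        self.open = defaultdict(set)   # user_ip -> set of (dst_host, dst_port)

    def admit(self, user_ip, dst):
        conns = self.open[user_ip]
        if dst in conns:
            return True                # already tracked: let it through
        if len(conns) >= self.max_per_user:
            return False               # over the cap: refuse the new connection
        conns.add(dst)
        return True

limiter = ConnectionLimiter(max_per_user=50)
# A normal page load (~10 connections) sails through...
assert all(limiter.admit("10.0.0.5", ("host", p)) for p in range(10))
# ...but a p2p client trying to open hundreds gets cut off at the cap.
results = [limiter.admit("10.0.0.9", ("peer", p)) for p in range(500)]
assert results.count(True) == 50
```

A production device would also expire entries as connections close; this sketch only shows the admission decision.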

Simple Rate Limits

The most common and widely used form of bandwidth control is the simple rate limit. This involves putting a fixed rate cap on a single IP address, as is often the case with rate plans promised by ISPs to their user communities. “2 meg up and 1 meg down” is a common battle cry, but what happens in reality with such rate plans?

Although setting simple rate limits is far superior to running a network wide open, we often call this approach “set, forget, and pray.”

Take, for example, six users sharing a T1, where each user gets a rate of 256kbs up and 256kbs down. These six users, each using their full share of 256 kilobits per second, amount to the maximum a T1 can handle. Although it is unlikely you will hit gridlock with just six users, when the number of users reaches thirty, gridlock becomes likely, and with forty or fifty users it will happen quite often. It is not uncommon for schools, wireless ISPs, and executive suites to have sixty to as many as 200 users sharing a single T1 with simple fixed user rate limits as the only control mechanism.
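The oversubscription arithmetic is easy to check. Using the nominal T1 rate of 1.544 Mbps:

```python
T1_KBPS = 1544                    # nominal T1 capacity, ~1.544 Mbps

def worst_case_load(users: int, cap_kbps: int) -> float:
    """Peak demand, as a multiple of a T1, if every user maxes out their cap."""
    return users * cap_kbps / T1_KBPS

print(round(worst_case_load(6, 256), 2))    # 0.99: six users just fill the pipe
print(round(worst_case_load(30, 256), 2))   # 4.97: thirty users could demand ~5 T1s
```

With sixty or 200 users the multiple climbs to roughly 10x and 33x the link, which is why fixed caps alone cannot prevent busy-hour gridlock.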

Yes, simple fixed user rate limiting does resolve the trivial case where one or two users, left unchecked, can use all available bandwidth; however, on an oversold network there is never any guarantee that busy-hour conditions will not result in gridlock.

Conclusion

The common thread among all WAN optimization techniques is that they must make intelligent assumptions about data patterns or human behavior to be effective. After all, in the end, the speed of the link is just that: a fixed speed that cannot be exceeded. All of these techniques have their merits and drawbacks; the trick is finding the solution best suited to your network’s needs. Hopefully the background information in this document will help you, the consumer, make an informed decision.

Optimizing Your WAN Is Not The Same As Optimizing Your Internet Link — Here’s Why…


WAN optimization is a catch-all phrase for making a network more efficient. However, few products distinguish between optimizing a WAN link and optimizing an Internet link, and the methods used for the latter do not necessarily overlap with WAN optimization. In this article, we’ll break down the differences and similarities between the two practices and explain why WAN optimization tends to be the more common, though not necessarily the more effective, of the two techniques when it comes to overall network optimization.

Some Basic Definitions

A WAN link is always a point-to-point link where an institution/business controls both ends of the link. However, a WAN link does not provide Internet access.

On the other hand, an Internet link is one where one end terminates in a business/home/institution and the other end terminates in the Internet cloud, thus providing the former with Internet access.

A VPN link is a special case of a WAN link where the link traverses across the public Internet to get to another location within an organization.  This is not an Internet link by our definition mentioned above.

Whether dealing with a small business, a home user, or public entities such as libraries, schools etc., there are far more Internet links out there than WAN links. Each of these entities will most certainly have a dedicated Internet link while many will not have a WAN link.

Some Common Questions

If Internet links far outnumber WAN links, why are there so many commercial products dedicated to optimizing WAN links and so few specifically dedicated to Internet optimization?

There are a few reasons for this:

  1. WAN optimization is fairly easy to measure and quantify, so a WAN optimization vendor can easily demonstrate their value by showing before and after results.
  2. Many WAN-based applications — Citrix, SQL queries, etc. — are inherently inefficient and in need of optimization.
  3. The market is flooded with vendors and analysts (such as Gartner) who tend to promote and sustain the WAN optimization market.
  4. WAN optimization tools also double as reporting and monitoring tools, which administrators gravitate toward.
  5. A large number of commercial Internet connections are located at small or medium-sized businesses, where the ROI on an optimization device for the Internet link is either not that compelling or not well understood.

Why is a WAN optimizing tool not the best tool to optimize an Internet link? Don’t the methodologies overlap?

Most of the methods used by a WAN optimizing appliance make use of two principles:

  1. The organization owns both ends of the link and will use two optimizing devices — one at each end. For example, compression techniques require that you own both ends of the link. As mentioned earlier, you cannot control both ends of an Internet link.
  2. The types of traffic running over a WAN Link are consistent and well defined. Organizations tend to do the same thing over and over again on their internal link. Yet, on an Internet link, the traffic varies from minute to minute and cannot be easily quantified.

So, how does one optimize unbounded traffic coming into an Internet link?

You need an appliance, such as a NetEqualizer, that dynamically manages overall flows. But don’t take it from us; you can also check in on what existing NetEqualizer users are saying.

How does a company quantify the cost of using a device to optimize their Internet link?

Admittedly, the results may be a bit subjective. The good news is that optimization companies will normally allow you to try an appliance before you buy. On the other hand, most Internet providers will require you to purchase a fixed length contract.

The fact of the matter is that an Internet link can be rendered useless by  a small number of users during peak times. If you blindly upgrade your contract to accommodate this problem, it is akin to buying gourmet lunches for some employees while feeding everybody else microwave popcorn. In the end, the majority will be unhappy.

While the appropriate network optimization technique will vary from situation to situation, Internet optimization appliances tend to work well under most circumstances and are worth implementing. Or, at the very least, they’re worth exploring before signing on to a long-term bandwidth increase with your ISP.

See: Related Discussion on Internet Congestion and predictability.

The True Price of Bandwidth Monitoring


By Art Reisman, CTO, www.netequalizer.com

For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at monitoring your Internet link has two cost components: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. We assert that both costs rise with the complexity of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it provides, and the more time an administrator is likely to spend adjusting and mucking about, looking for optimal performance.
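
This two-part cost model can be made concrete. The numbers below are purely hypothetical, chosen to illustrate how labor, not the tool's price tag, tends to dominate the first-year total:

```python
def monitoring_cost(tool_price, admin_hours_per_week, hourly_rate, weeks=52):
    """First-year cost of a monitoring approach: the tool's price plus
    the labor spent acting on its reports."""
    return tool_price + admin_hours_per_week * hourly_rate * weeks

# Hypothetical figures for illustration only.
detailed = monitoring_cost(tool_price=8000, admin_hours_per_week=4, hourly_rate=50)
simple = monitoring_cost(tool_price=1000, admin_hours_per_week=0.5, hourly_rate=50)

print(detailed, simple)  # -> 18400 2300.0
```

Under these assumptions, the detailed tool's $10,400 of yearly tuning labor dwarfs its $8,000 price tag, which is exactly the compounding effect described above.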

But is it fair to assume higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work of adjusting the network is done, the adjustments can remain statically in place. In reality, network traffic changes constantly, and the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies that followed a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s. The function of the computer operator did not disappear completely; it was automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to categorize every type of traffic on the network by type, time of day, and so on, an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problem of a network locking up goes away, leaving only what we would call “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, while perhaps one or two percent will be well above it. You don’t need a fancy tool to see what they are doing; abuse becomes obvious from the usage numbers alone.
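
Spotting that heavy tail takes only basic statistics. The sketch below (our own illustration, with made-up usage figures) flags users more than two standard deviations above the mean weekly usage:

```python
import statistics

def heavy_users(usage_mb, threshold_sd=2.0):
    """Flag users whose usage is more than `threshold_sd` standard
    deviations above the mean -- the 'well above the mean' tail."""
    values = list(usage_mb.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [user for user, mb in usage_mb.items()
            if mb > mean + threshold_sd * sd]

# 98 typical users clustered near the mean, plus two obvious outliers.
usage = {f"user{i}": 500 + (i % 7) * 20 for i in range(98)}
usage["user98"] = 9000
usage["user99"] = 12000

print(heavy_users(usage))  # -> ['user98', 'user99']
```

No per-application detail is needed: a once-a-week glance at a report like this is enough to identify the one or two percent of users worth a closer look.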

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to these trade-offs with actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

List of monitoring tools compiled by Stanford

Planetmy
Linux Tips
How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

APconnections Announces NetEqualizer Lifetime Buyer Protection Policy


This week, we announced the launch of the NetEqualizer Lifetime Buyer Protection Policy. In the event of an unrepairable failure of a NetEqualizer unit at any time, or when it is time to retire a unit, customers will have the option to purchase a replacement unit and apply a credit of 50 percent of their original unit’s purchase price toward the new unit. (For current pricing, register for our price list.) This includes units that are more than three years old (the expected useful life for hardware) and in service at the time of failure.

For example, if you purchased a unit in 2003 for $4000 and were looking to replace it or upgrade with a newer model, APconnections would kick in a $2000 credit toward the replacement purchase.

The Policy will be in addition to the existing optional yearly NetEqualizer Hardware Warranty (NHW), which offers customers cost-free repairs or replacement of any malfunctioning unit while NHW is in effect (read details on NHW).

Our decision to implement the policy was a matter of customer peace-of-mind rather than necessity. While the failure rate of any NetEqualizer unit is ultimately very low, we want customers to know that we stand behind our products – even if it’s several years down the line.

To qualify,

  • users must be the original owner of the NetEqualizer unit,
  • the customer must have maintained a support contract that has been current within the last 18 months (lapses of support longer than 18 months will void our replacement policy), and
  • the unit must have been in use on your network at the time of failure.

Shipping is not included in the discounted price. Purchasers of the one-year NetEqualizer hardware warranty (NHW) will still qualify for full replacement at no charge while under hardware warranty.  Contact us for more details by emailing sales@apconnections.net, or calling 303.997.1300 x103 (International), or 1.888.287.2492 (US Toll Free).

Note: This Policy does not apply to the NetEqualizer Lite.

Deep Packet Inspection Abuse In Iran Raises Questions About DPI Worldwide


Over the past few years, we at APconnections have made our feelings about Deep Packet Inspection clear, completely abandoning the practice in our NetEqualizer technology more than two years ago. While there may be times that DPI is necessary and appropriate, its use in many cases can threaten user privacy and the open nature of the Internet. And, in extreme cases, DPI can even be used to threaten freedom of speech and expression. As we mentioned in a previous article, this is currently taking place in Iran.

Although these extreme invasions of privacy are most likely not occurring in the United States, their existence in Iran is bringing increasing attention to the slippery slope that is Deep Packet Inspection. A July 10 Huffington Post article reads:

“Before DPI becomes more widely deployed around the world and at home, the U.S. government ought to establish legitimate criteria for authorizing the use of such control and surveillance technologies. The harm to privacy and the power to control the Internet are so disturbing that the threshold for using DPI must be very high. The use of DPI for commercial purposes would need to meet this high bar. But it is not clear that there is any commercial purpose that outweighs the potential harm to consumers and democracy.”

This potential harm to the privacy and rights of consumers was a major factor behind our decision to discontinue the use of DPI in any of our technology and invest in alternative means for network optimization. We hope that the ongoing controversy will be reason for others to do the same.

Google Questions Popular Bandwidth Shaping Myth


At this week’s Canadian Radio-Television and Telecommunications Commission Internet traffic hearing, Google’s Canada Policy Counsel, Jacob Glick, raised a point that we’ve been arguing for the last few years. Glick said:

“We urge you to reject as false the choice between debilitating network congestion and application-based discrimination….This is a false dichotomy. The evidence is, and experience in Canada and in the U.S. already shows, that carriers can manage their networks, reduce congestion and protect the open Internet, all at the same time.”

While we agree with Glick to a certain extent, we differ on the alternative proposed by hearing participants: simply increase bandwidth. This is not to say that increasing bandwidth isn’t the appropriate solution in certain circumstances. But countering a false dichotomy with an equally narrow third alternative does little to expand the industry’s options, especially when increasing bandwidth isn’t a viable solution for some ISPs.

The downsides of application-based shaping are one of the main reasons behind NetEqualizer’s reliance on behavior-based shaping. Therefore, while Glick is right that the above-mentioned dichotomy doesn’t explore all of the available options, it’s important to realize that the goals being promoted at the hearing are not solely achieved through increased bandwidth.

For more on how the NetEqualizer fits into the ongoing debate, see our past article, NetEqualizer Offers Net Neutrality, User Privacy Compromise.

What NetEqualizer Users Are Saying (Updated June 2009)


Editor’s Note: As NetEqualizer’s popularity has grown, more and more users have been sharing their experiences on message boards and listservs across the Internet. Just to give you an idea of what they’re saying, here are a few of the reviews and discussion excerpts that have been posted online over the past several months…

Wade LeBeau — The Daily Journal Network Operations Manager

NetEqualizer is one of the most cost-effective management units on the market, and we found the unit easy to install—right out of the box. We made three setting changes to match our network using the web (browser) interface, connected the unit, and right away traffic shaping started, about 10 minutes total setup time. The unit has two Ethernet ports…one port toward your user network, the other port toward your broadband connection/server if applicable. A couple of simple clicks and you can see reporting live as it happens. In testing, we ran our unit for 30 days and saw our broadband reports stabilize and our users receiving the same slices of broadband access. With the NetEqualizer, there is no burden of extensive policies to manage….The NetEqualizer is a nice tool to add to any network of any size. Businesses can see how important the Internet is and how hungry users can be for information.

__________________________________________________________________________________________________

DSL Reports, April 2009

The Netequalizer has resulted in dramatically improved service to our customers. Most of the time, our customers are seeing their full bandwidth. The only time they don’t see it now is when they’re downloading big files. And, when they don’t see full performance, it’s only for the brief period that the AP is approaching saturation. The available bandwidth is re-evaluated every 2 seconds, so the throttling periods are often brief.

Bottom line to this is that we can deliver significantly more data through the same AP. The customers hitting web pages, checking e-mail, etc. virtually always see full bandwidth, and the hogs don’t impact these customers. Even the hogs see better performance (although that wasn’t one of my priorities).

Click here to read more.

APconnections Marks a New Milestone: Six Years Operating a Virtual Company


Duty Calls

Since 2003, APconnections has utilized innovative technology to stay in near constant contact with our customers, offering unmatched service and accessibility. Through the use of the latest mobile communications advances, we’ve worked to surpass the efficiency and effectiveness of the traditional workplace, benefiting both our customers and staff members alike.

Since we’ve always been dedicated to bringing the latest technological benefits to our customers, be it through our own products or in how we might better serve those who use them, building our company around the concept of a virtual workplace made perfect sense. In the end, it provides for greater accessibility and increased freedom at the same time. We can essentially work from anywhere.

In addition to the customer service benefits, the virtual nature of APconnections has been central to maintaining the competitive prices of our products. Removing the middlemen that have typically been necessary for holding organizations together significantly reduces operating costs, which equates to savings that are passed on to our customers. Furthermore, this also assures that our existing and future customers work only with staff possessing unparalleled expertise in APconnections’ technology. This is both a matter of trust and efficiency. We find that it’s very comforting and appealing for customers to know that if and when they call or e-mail, they’ll be working directly with someone who understands our technology better than anyone else.

NetEqualizer Software Update Improves VLAN Shaping, NTOP


Editor’s Note: The following blog entry explains the newest NetEqualizer features available with our most recent software update. While minor bug fixes are often included in these updates, they will not always be detailed.

We recently released our newest NetEqualizer software update, further improving on our existing technology. The following fixes have been implemented from the previous 2.43k version to the latest 3.32a.

  1. Upgraded internal disk memory caching. This feature remedied an issue with NTOP that was causing disk corruptions on the CF drive.
  2. Subnet masking was modified such that masked traffic will not count against your license level. Prior to this change, a customer with a 10-meg license who ran 100 meg local transfers across their NetEqualizer would experience a license violation. You can now mask that traffic (make it invisible to the NetEqualizer and hence not violate your license).
  3. A bug fix was put in for customers who run asymmetric pools. Bandwidth pools with different upload and download speeds were not working correctly prior to this fix.
  4. VLAN shaping fix: resolved an issue that occurred on cold restarts.
  5. Support for multi-core CPUs.
  6. More efficient connection-limit processing.

This software update is available without charge for NetEqualizer customers with a current NetEqualizer Software Subscription (NSS). For more information on this update, or the NSS, contact us at admin@apconnections.net.