Complimentary NetEqualizer Bandwidth Management Seminar in the UK


Press Release issued via BusinessWire.

April 08, 2015 01:05 AM Mountain Daylight Time

LAFAYETTE, Colo.–(BUSINESS WIRE)–APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is excited to announce its upcoming complimentary NetEqualizer Technical Seminar on April 23rd, 2015, in Oxfordshire, United Kingdom, hosted by Flex Information Technology Ltd.

Join us to meet APconnections’ CTO Art Reisman, a visionary in the bandwidth management industry (check out Art’s blog). This is not a marketing presentation; it is run by and created for technical staff. The Seminar will feature in-depth, example-driven discussions of network optimization and provide participants with a first-hand look at NetEqualizer technology.

Seminar highlights include:

  • Learn how behavior-based shaping provides superior QoS for Internet traffic
  • Optimize business-critical VoIP, email, web browsing, SaaS & web applications
  • Control excessive bandwidth use by non-priority applications
  • Gain control over P2P traffic
  • Get visibility into your network with real-time reporting
  • See the NetEqualizer in action! We will demo a live system.

We welcome both customers and those just beginning to think about bandwidth shaping. The Seminar will take place at 14:30 (2:30 pm) on Thursday, April 23rd, at Flex Information Technology Ltd in Grove Technology Park, Wantage, Oxfordshire OX12 9FF.

Online registration, including location and driving directions, is available here. There is no cost to attend, but registration is requested. Questions? Contact Paul Horseman at paul@flex.co.uk or call +44(0)333.101.7313.

About Flex Information Technology Ltd
Flex Information Technology is a partnership founded in 1993 to provide maintenance and support services to a wide range of customers with large mission-critical systems, particularly in the Newspaper and Insurance sectors. In 1998 the company began focusing on support for small to medium businesses. Today we provide “Smart IT Solutions combined with Flexible and Quality Services for Businesses” to a growing satisfied customer base. We have accounts with leading IT suppliers and hardware and software distributors in the UK.

About APconnections
APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado, USA. Our flexible and scalable network traffic management solutions can be found at thousands of customer sites in public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, Internet providers, libraries, and government agencies on six continents.

Contacts

APconnections, Inc.
Sandy McGregor, 303-997-1300 x104
sandym@apconnections.net
or
Flex Information Technology Ltd
Paul Horseman, +44(0)333 101 7313
paul@flex.co.uk

Will Bandwidth Shaping Ever Be Obsolete?


By Art Reisman

CTO – www.netequalizer.com

I find public forums where universities openly share information about their bandwidth shaping policies an excellent source of information. Unlike commercial providers, these user groups have found technical collaboration is in their best interest, and they often openly discuss current trends in bandwidth control.

A recent university IT user group discussion thread kicked off with the following comment:

“We are in the process of trying to decide whether or not to upgrade or all together remove our packet shaper from our residence hall network.  My network engineers are confident we can accomplish rate limiting/shaping through use of our core equipment, but I am not convinced removing the appliance will turn out well.”

Notice that he is not talking about removing rate limits completely, just backing off from an expensive extra piece of packet shaping equipment and using the simpler rate limits available on his router.  The point of referencing this discussion is not so much to weigh the different approaches to rate limiting, but to emphasize that, at this point in time, running wide open without some sort of restriction is not even being considered.

Despite an 80 to 90 percent reduction in bulk bandwidth prices in the past few years, bandwidth is not quite yet cheap enough for an ISP to run wide-open. Will it ever be possible for an ISP to run wide-open without deliberately restricting their users?

The answer is not likely.

First of all, there seems to be no limit to the ways consumer devices and content providers will conspire to gobble bandwidth. The common assumption is that no matter what an ISP does to deliver higher speeds, consumer appetite will outstrip it.

Yes, an ISP can temporarily leap ahead of demand.

We do have a precedent from several years ago. In 2006, the University of Brighton in the UK was able to unplug our bandwidth shaper without issue. When I followed up with their IT director, he mentioned that their students’ total consumption was capped by the far-end services of the Internet, and thus they did not hit their heads on the ceiling of the local pipes. Running without restriction, 10,000 students were not able to eat up their 1-gigabit pipe! I must caveat this experiment by saying that the UK university system had invested heavily in subsidized bandwidth and was far ahead of the average ISP curve for the times, and video content services on the Internet were just not that widely used by students back then. Such an experiment today would bring a pipe under a similar contention ratio to its knees in a few seconds. I suspect today one would need on the order of 15 to 25 gigabits to run wide open without contention-related problems.

It also seems that we are coming to the end of the line for bandwidth in the wireless world much more quickly than wired bandwidth.

It is unlikely consumers are going to carry cables around with their iPads and iPhones to plug into wall jacks any time soon. With the diminishing returns on investment in higher speeds for the wireless networks of the world, bandwidth control is the only way to keep some kind of order.

Lastly, I do not expect bulk bandwidth prices to continue to fall at their present rate.

The last few years of falling prices are the result of a perfect storm of factors not likely to be repeated.

For these reasons, it is not likely that bandwidth control will be obsolete for at least another decade. I am sure we will be revisiting this issue in the next few years for an update.

APconnections Celebrates New NetEqualizer Lite with Introductory Pricing


Editor’s Note:  This is a copy of a press release that went out on May 15th, 2012.  Enjoy!

Lafayette, Colorado – May 15, 2012 – APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is celebrating the expansion of its NetEqualizer Lite product line by offering special pricing for a limited time.

NetEqualizer’s VP of Sales and Business Development, Joe D’Esopo, is excited to announce: “To make it easy for you to try the new NetEqualizer Lite, for a limited time we are offering the NetEqualizer Lite-10 at introductory pricing of just $999 for the unit, our Lite-20 at $1,100, and our Lite-50 at $1,400.  These are incredible deals for the value you will receive; we believe they are unmatched in our industry today.”

We have upgraded our base technology for the NetEqualizer Lite, our entry-level bandwidth-shaping appliance.  Our new Lite retains the small form factor that sets it apart and makes it ideal for implementation in the field, but now has an enhanced CPU and memory. This enables us to include robust graphical reporting, like that in our other product lines, and also to support additional bandwidth license levels.

The Lite is geared towards smaller networks with fewer than 350 users, is available in three license levels, and is field-upgradable across them: our Lite-10 runs on networks up to 10Mbps and up to 150 users ($999), our Lite-20 (20Mbps and 200 users for $1,100), and our Lite-50 (50Mbps and 350 users for $1,400).  See our NetEqualizer Price List for complete details.  One-year renewable NetEqualizer Software & Support (NSS) and NetEqualizer Hardware Warranty (NHW) contracts are offered.

Like all of our bandwidth shapers, the NetEqualizer Lite is a plug-n-play, low maintenance solution that is quick and easy to set-up, typically taking one hour or less.  QoS is implemented via behavior-based bandwidth shaping, “equalizing”, giving priority to latency-sensitive applications, such as VoIP, web browsing, chat and e-mail over large file downloads and video that can clog your Internet pipe.

About APconnections:  APconnections is based in Lafayette, Colorado, USA.  We released our first commercial offering in July 2003, and since then thousands of customers all over the world have put our products into service.  Today, our flexible and scalable solutions can be found in over 4,000 installations in many types of public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, and Internet providers on six (6) continents.  To learn more, contact us at sales@apconnections.net.

Contact: Sandy McGregor
Director, Marketing
APconnections, Inc.
303.997.1300
sandy@apconnections.net

Our Take on Network Instruments 5th Annual Network Global Study


Editor’s Note: Network Instruments released their “Fifth Annual State of the Network Global Study” on March 13th, 2012. You can read their full study here. Their results were based on responses by 163 network engineers, IT directors, and CIOs in North America, Asia, Europe, Africa, Australia, and South America. Responses were collected from October 22, 2011 to January 3, 2012.

What follows is our take (or my $0.02) on the key findings around Bandwidth Management and Bandwidth Monitoring from the study.

Finding #1: Over the next two years, more than one-third of respondents expect bandwidth consumption to increase by more than 50%.

Part of me says “well, duh!” but that is only because we hear that from many of our customers. So I guess if you were an Executive, far removed from the day-to-day, this would be an important thing to have pointed out to you. Basically, this is your wake-up call (if you are not already awake) to listen to your Network Admins who keep asking you to allocate funds to the network. Now is the time to make your case for more bandwidth to your CEO/President/head guru. Gather the budget and resources to build out your network in anticipation of this growth – so that you are not caught off guard. Because if you don’t, someone else will do it for you.

Finding #2: 41% stated network and application delay issues took more than an hour to resolve.

You can and should certainly put monitoring on your network to be able to see and react to delays. However, another way to look at this, admittedly biased from my bandwidth shaping background, is to get rid of the delays!

If you are still running an unshaped network, you are missing out on maximizing your existing resource. Think about how smoothly traffic flows on roads because there are smoothing algorithms (traffic lights) and rules (speed limits) that dictate how traffic moves, hence “traffic shaping.” Now, imagine driving on roads without any shaping in place. What would you do when you got to a 4-way intersection? Whether you just hit the accelerator to speed through, or decide to stop and check out the other traffic, probably depends on your risk tolerance and aggression profile. And the result would be that you make it through OK (live) or get into an ugly crash (and possibly die).

Similarly, your network traffic, when unshaped, can live (getting through without delays) or die (getting stuck waiting in a queue) trying to get to its destination. Whether you look at deep packet inspection, rate limiting, equalizing, or a home-grown solution, you should definitely look into bandwidth shaping. Find a solution that makes sense to you, will solve your network delay issues, and gives you a good return-on-investment (ROI). That way, your Network Admins can spend less time trying to find out the source of the delay.

Finding #3: Video must be dealt with.

  • 24% believe video traffic will consume more than half of all bandwidth in 12 months.
  • 47% say implementing and measuring QoS for video is difficult.
  • 49% have trouble allocating and monitoring bandwidth for video.

Again, no surprise if you have been anywhere near a network in the last 2 years. YouTube use has exploded and become the norm on both consumer and business networks. Add that to the use of video conferencing in the workplace to replace travel, and Netflix or Hulu to watch movies and TV, and you can see that video demand (and consumption) has risen sharply.

Unfortunately, there is no quick, easy fix to make sure that video runs smoothly on your network. However, a combination of solutions can help you to make video run better.

1) Get more bandwidth.

This is just a basic fact-of-life. If you are running a network of < 10Mbps, you are going to have trouble with video, unless you only have one (1) user on your network. You need to look at your contention ratio and size your network appropriately.

2) Cache static video content.

Caching is a good start, especially for static content such as YouTube videos. One caveat to this: do not expect caching to solve network congestion problems (read more about that here), as users will quickly consume any bandwidth that caching has freed up. Caching will help when a video has gone viral and everyone is accessing it repeatedly on your network.

3) Use bandwidth shaping to prioritize business-critical video streams (servers).

If you have a designated video-streaming server, you can define rules in your bandwidth shaper to prioritize this server. The risk of this strategy is that you could end up giving all your bandwidth to video; you can reduce the risk by rate capping the bandwidth portioned out to video.

As I said, this is just my take on the findings. What do you see? Do you have a different take? Let us know!

What Does it Cost You Per Mbps for Bandwidth Shaping?


Sometimes by using a cost metric you can distill a relatively complicated thing down to a simple number for comparison. For example, we can compare housing costs by Dollars Per Square Foot or the fuel efficiency of cars by using the Miles Per Gallon (MPG) metric.  There are a number of factors that go into buying a house, or a car, and a compelling cost metric like those above may be one factor.   Nevertheless, if you decide to buy something that is more expensive to operate than a less expensive alternative, you are probably aware of the cost differences and justify those with some good reasons.

Clearly this makes sense for bandwidth shaping now more than ever. The cost of bandwidth continues to decline, and as it does, the cost of shaping that bandwidth should decline as well.  After all, it wouldn’t be logical to spend a lot of money to manage a resource that’s declining in value.

With that in mind, I thought it might be interesting to look at bandwidth shaping on a cost-per-Mbps basis. Alternatively, I could look at bandwidth shaping on a cost-per-user basis, but that metric fails to capture the declining cost of each Mbps of bandwidth. So, cost per Mbps it is.

As we’ve pointed out before in previous articles, there are two kinds of costs that are typically associated with bandwidth shapers:

1) Upfront costs (these are for the equipment and setup)

2) Ongoing costs (these are for annual renewals, upgrades, license updates, labor for maintenance, etc…)

Upfront, or equipment costs, are usually pretty easy to get.  You just call the vendor and ask for the price of their product (maybe not so easy in some cases).  In the case of the NetEqualizer, you don’t even have to do that – we publish our prices here.

With the NetEqualizer, setup time is normally less than an hour and is thus negligible, so we’ll just divide the unit price by the throughput level, and here’s the result:
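
As a quick worked example, here is that division applied to the NetEqualizer Lite list prices quoted in the press release earlier on this page (the prices are real; treating setup labor as free is the assumption stated above):

    # Upfront cost per Mbps = unit price / rated throughput.
    # Prices are the NetEqualizer Lite list prices quoted earlier on this page;
    # setup labor is treated as negligible, per the text above.
    lite_models = {
        "Lite-10": {"price_usd": 999,  "throughput_mbps": 10},
        "Lite-20": {"price_usd": 1100, "throughput_mbps": 20},
        "Lite-50": {"price_usd": 1400, "throughput_mbps": 50},
    }

    for model, spec in lite_models.items():
        cost_per_mbps = spec["price_usd"] / spec["throughput_mbps"]
        print(f"{model}: ${cost_per_mbps:.2f} per Mbps upfront")

    # Lite-10: $99.90 per Mbps upfront
    # Lite-20: $55.00 per Mbps upfront
    # Lite-50: $28.00 per Mbps upfront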

I think this is what you would expect to see.

For ongoing costs, you would need to add up all the mandatory per-year costs and divide by throughput; the metric would be an ongoing “yearly” per-Mbps cost.

Again, if we take the NetEqualizer as an example, the ongoing costs are almost zero.  This is because it’s a turn-key appliance and it requires no time from the customer for bandwidth analysis, nor does it require any policy setup/maintenance to effectively run (it doesn’t use policies). In fact, it’s a true zero-maintenance product, and that yields zero labor costs. Besides no labor, there are no updates or licenses required (an optional service contract is available if you want ongoing access to technical support or software upgrades).

Frankly, it’s not worth the effort of graphing this one. The ongoing cost of a NetEqualizer Support Agreement ranges from $29 down to $0.20 per Mbps per year. Yet this isn’t the case for many other products, and this number should be evaluated carefully. In fact, in some cases the ongoing costs of other products exceed the upfront cost of a new NetEqualizer!

Again, the lowest cost per Mbps of bandwidth shaping may not be the best solution for you – but, if it’s not, you should have some good reasons.

If you shape bandwidth now, what is your cost per Mbps of bandwidth shaping? We’d be interested to know.

If your ongoing costs are higher than the upfront costs of a new NetEqualizer and you’re open to a discussion, you should drop us a note at sales@apconnections.net.

Developing Technology to Detect a Network Hacker


Editor’s note:  Updated on Feb 1st, 2012.  Our new product, NetGladiator, has been released.  You can learn more about it on the NetGladiator website at www.netgladiator.net or by calling us at 303.997.1300 x123.

In a few weeks we will be releasing a product to automatically detect and prevent a web application hacker from breaking into a private enterprise. What follows are the details of how this product was born.  If you are currently seeking or researching intrusion detection & prevention technology, you will find the following quite useful.

Like many technology innovations, our solution resulted from the timely intersection of two technologies.

Technology 1: About one year ago we started working with a consultant in our local tech community to do some programming work on a minor feature in our NetEqualizer product line. Fiddler on the Root is the name of their company, and they specialize in ethical hacking. Ethical hacking is the process of deliberately hacking into a high-profile client company with the intention of exposing its weaknesses. The key expertise they provided was a detailed knowledge of how to hack into a network or website.

Technology 2: Our NetEqualizer technology is well known for providing state-of-the-art bandwidth control. While working with Fiddler on the Root, we realized our toolset could be reconfigured to spot, and thwart, unwanted entry into a network. A key piece of the puzzle would be our long-forgotten Deep Packet Inspection technology. DPI is the frowned-upon practice of looking inside data packets traversing the Internet.

An ironic twist to this new product journey was that, due to the privacy controversy, as well as finding a better way to shape bandwidth, we removed all of our DPI methodology from our core bandwidth shaping product four years ago.  Just like with any weapon, there are appropriate uses for DPI. Over a lunch conversation one day, we realized that using DPI to prevent a hacker intrusion was a legitimate use of DPI technology. Preventing an attack is much different from a public ISP scanning and censoring customer data.

So how did we merge these technologies to create a unique heuristics-based IPS system?

Before I answer that question, perhaps you are wondering whether revealing our techniques might provide a potential hacker or competitor with inside secrets. More on this later…

The key to using DPI to prevent an intrusion (hack) revolves around three key facts:

1) A hacker MUST try to enter your enterprise by exploiting weaknesses in your normal entry points.

2) One of the normal entry points is a web page, and everybody has them. After all, if you had no publicly available data there would be no reason to be attached to the Internet.

3) By using DPI technology to monitor incoming requests and looking for abnormalities, we can now reliably spot unwanted intrusion attempts.

When we met with Fiddler on the Root, we realized that a normal entry by a customer and a probing entry by a hacker are radically different. A hacker attempts things that no normal visitor would ever stumble into. In our new solution we have directed our DPI technology to watch for abnormal entry attempts. This involved months of observing a group of professional hackers and then developing a set of profiles which clearly distinguish them from a friendly user.
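
To make the idea concrete, here is a minimal sketch of signature-style request screening in Python. The patterns below are generic, well-known probe signatures (SQL injection, path traversal, shell injection) chosen for illustration only; they are not NetGladiator’s actual profiles, which we deliberately do not publish.

    import re

    # Illustrative signatures of "abnormal entry" attempts; a real profile
    # set, built from observing live hackers, would be far more extensive.
    PROBE_PATTERNS = {
        "sql_injection":   re.compile(r"union\s+select|or\s+1\s*=\s*1", re.IGNORECASE),
        "path_traversal":  re.compile(r"\.\./\.\./"),
        "shell_injection": re.compile(r";\s*(cat|wget|curl)\s"),
    }

    def inspect_request(raw_request):
        """Return the names of any probe signatures found in a request."""
        return [name for name, pattern in PROBE_PATTERNS.items()
                if pattern.search(raw_request)]

    # A normal request matches nothing; a probing request is flagged.
    print(inspect_request("GET /index.html HTTP/1.1"))                   # []
    print(inspect_request("GET /item?id=1 UNION SELECT pw FROM users"))  # ['sql_injection']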

What other innovations are involved in a heuristics-based Intrusion Prevention System (IPS)?

Spotting the hacker pattern with DPI was only part of a complete system. We also had to make sure we did not get any false positives – the case where normal activity might accidentally be flagged as an intruder, which obviously would be unacceptable. In our test lab we have a series of computers that act like users searching the Internet; the only difference is that we can ramp these robot users up to hyper-speed so that they access millions of pages over a short period of time. We then measure the “false positive” rate from our simulation and ensure that our false positive rate on intrusion detection is below 0.001 percent.

Our solution, NetGladiator, is different from other IPS appliances.  We are not an “all-in-one solution” that can be rendered useless by alerting you thousands of times a day, blocking legitimate requests, and breaking web functionality.  We do one thing very well – we catch & stop hackers during their information discovery process – keeping your web applications secure.  NetGladiator is custom-configured for your environment, alerting you on meaningful attempts without false positive alerts.

We also had to dig into our expertise in real-time optimization. Although that sounds like marketing propaganda to impress somebody, we can break that statement down to mean something.

When doing DPI, you must look at and analyze every data stream and packet coming into your enterprise; skipping something might lead to a security breach. Looking at data and analyzing it requires quite a bit more CPU power than just moving it along a network. Many intrusion detection systems are afterthoughts to standard routers and switches. These devices were originally not designed to do computing-intensive heuristics on data, and doing so may slow your network down to a crawl – a common complaint with lower-end, affordable security products. We did not want to force our customers to make that trade-off. Our technology uses a series of processors embedded in our equipment, all working in unison to analyze each packet of Internet data without causing any latency. Although we did not invent the idea of using parallel processing for analysis of data, we are the only product in our price range able to do this.

How did we validate and test our IPS solution?

1) We have been putting our systems in front of beta test sites and asking white-hat hackers to try to hack into them.

2) We have been running our technology in front of some of our own massive web crawlers. Our crawlers do not attempt anything abnormal but can push through millions of sites and web pages. This is how we test for false positives blocking a web crawler that is NOT attempting anything abnormal.

Back to the question, does divulging our methodology render it easier to breach?

The holes that hackers exploit are relatively consistent – in other words, there really is only a finite number of exploits that hackers use. They can either choose to exploit these holes or not, and if they attempt to exploit a hole they will be spotted by our DPI. Hence, announcing that we are protecting these holes is more likely to discourage a hacker, who will then look for another target.

What Does Net Privacy Have to Do with Bandwidth Shaping?


I definitely understand the need for privacy. Obviously, if I were doing something nefarious, I wouldn’t want it known, but that’s not my reason. Day in and day out, measures are taken to maintain my privacy in more ways than I probably even realize. You’re likely the same way.

For example, to avoid unwanted telephone and mail solicitations, you don’t advertise your phone numbers or give out your address. When you buy something with your credit card, you usually don’t think twice about your card number being blocked out on the receipt. If you go to the pharmacist, you take it for granted that the next person in line has to be a certain distance behind so they can’t hear what prescription you’re picking up. The list goes on and on. For me personally, I’m sure there are dozens, if not hundreds, of good examples why I appreciate privacy in my life. This is true in my daily routines as well as in my experiences online.

The topic of Internet privacy has been raging for years. However, the Internet still remains a hotbed for criminal activity and misuse of personal information. Email addresses are valued commodities sold to spammers. Search companies have dedicated countless bytes of storage to recording every search term and the IP address it came from. Websites place tracking cookies on your system so they can learn more about your Web travels, habits, likes, dislikes, etc.  Forensically, you can tell a lot about a person from their online activities. To be honest, it’s a little creepy.

Maybe you think this is much ado about nothing. Why should you care? However, you may recall that less than four years ago, AOL accidentally released around 20 million search keywords from over 650,000 users. Now, those 650,000 users and their searches will exist forever in cyberspace.  Could it happen again? Of course it could; why wouldn’t it, when all it takes is a loaded laptop walking out the door?

Internet privacy is an important topic, and as a result, technology is becoming more and more available to help people protect information they want to keep confidential. And that’s a good thing. But what does this have to do with bandwidth management? In short, a lot (no pun intended)!

Many bandwidth management products (from companies like Blue Coat, Allot, and Exinda, for example) intentionally work at the application level. These products are commonly referred to as Layer-7 or Deep Packet Inspection tools. Their mission is to allocate bandwidth specifically by what you’re doing on the Internet. They want to determine how much bandwidth you’re allowed for YouTube, Netflix, Internet games, Facebook, eBay, Amazon, etc. They need to know what you’re doing so they can do their job.

In terms of this article, whether you’re philosophically adamant about net privacy (like one of the inventors of the Internet), or couldn’t care less, is really not important. The question is, what happens to an application-managed approach when people take additional steps to protect their own privacy?

For legitimate reasons, more and more people will be hiding their IPs, encrypting, tunneling, or otherwise disguising their activities and taking privacy into their own hands. As privacy technology becomes more affordable and simple, it will become more prevalent. This is a mega-trend, and it will create problems for those management tools that use this kind of information to enforce policies.

However, alternatives to these application-level products do exist, such as “fairness-based” bandwidth management. Fairness-based bandwidth management, like the NetEqualizer, is a 100% neutral solution, and it ultimately provides a more privacy-friendly approach for Internet users and a more effective solution for administrators when personal privacy protection technology is in place. Fairness is the idea of managing bandwidth by how much you can use, not by what you’re doing. When you manage bandwidth by fairness instead of activity, not only are you supporting a neutral, private Internet, but you’re also able to address the critical tasks of bandwidth allocation, control, and quality of service.
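
To make “fairness” concrete, here is a toy sketch of max-min fair allocation, a classic way to split a link by how much each user consumes rather than by what they are doing. It illustrates the principle only; it is not NetEqualizer’s actual algorithm, and the link size and demands are invented numbers.

    def max_min_fair(capacity, demands):
        """Toy max-min fair split: nobody gets more than they ask for,
        and no application-level inspection is needed at all."""
        alloc = [0.0] * len(demands)
        remaining = sorted(range(len(demands)), key=lambda i: demands[i])
        left = capacity
        while remaining:
            fair_share = left / len(remaining)
            i = remaining[0]
            if demands[i] <= fair_share:
                alloc[i] = demands[i]          # light user fully satisfied
                left -= demands[i]
                remaining.pop(0)
            else:
                for j in remaining:            # everyone left shares equally
                    alloc[j] = fair_share
                break
        return alloc

    # A 10 Mbps link: three light users and one hog. The hog is capped by
    # fairness, with no need to know what it is running.
    print(max_min_fair(10.0, [1.0, 1.0, 2.0, 50.0]))   # [1.0, 1.0, 2.0, 6.0]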

How to Determine a Comprehensive ROI for Bandwidth Shaping Products


In the past, we’ve published several articles on our blog to help customers better understand the NetEqualizer’s potential return on investment (ROI). Obviously, we do this because we think we offer a compelling ROI proposition for most bandwidth-shaping decisions. Why? Primarily because we provide the benefits of bandwidth shaping at a very low cost — both initially and even more so over time. (Click here for the NetEqualizer ROI calculator.)

But, we also want to provide potential customers with the questions that need to be considered before a product is purchased, regardless of whether or not the answers lead to the NetEqualizer. With that said, this article will break down these questions, addressing many issues that may not be obvious at first glance, but are nonetheless integral when determining what bandwidth shaping product is best for you.

First, let’s discuss basic ROI. As a simple example, if an investment cost $100, and if in one year that investment returned $120, the ROI is 20 percent.  Simple enough. But what if your investment horizon is five years or longer? It gets a little more complicated, but suffice it to say you would perform a similar calculation for each year while adjusting these returns for time and cost.
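
To make the “adjusting for time” step concrete, here is a minimal discounted-cash-flow sketch. The five-year horizon, the $30-per-year return, and the 10% discount rate are hypothetical numbers chosen purely for illustration:

    def npv(rate, cashflows):
        """Net present value; cashflows[0] is the upfront (negative) cost."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # Hypothetical five-year horizon: $100 upfront, $30 back each year,
    # future returns discounted at 10% per year.
    flows = [-100] + [30] * 5
    print(f"NPV: ${npv(0.10, flows):.2f}")   # NPV: $13.72

A positive NPV means the investment beats the discount rate; comparing NPVs puts two shaping products (or a shaper versus a bandwidth upgrade) on the same footing.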

The important point is that this technique is a well-known calculation for evaluating whether one thing is a better investment than another — be it bandwidth shaping products or real estate. Naturally, the best financial decision is the one with the greatest return for the smallest cost.

The hard part is determining what questions to ask in order to accurately determine the ROI. A missed cost or benefit here or there could dramatically alter the outcome, potentially leading to significant unforeseen losses.

For the remainder of this article, I’ll discuss many of the potential costs and returns associated with bandwidth shaping products, with some being more obscure than others. In the end, it should better prepare you to address the most important questions and issues and ultimately lead to a more accurate ROI assessment.

Let’s start by looking at the largest components of bandwidth shaping product “costs” and whether they are one-time or ongoing. We’ll then consider the returns.

COSTS

  • The initial cost of the tool
    • This is a one-time cost.
  • The cost of vendor support and license updates
    • These are ongoing costs and include monthly and annual licenses for support, training, software updates, library updates, etc…  The difference from vendor to vendor can be significant — especially over the long run.
  • The cost of upgrades within the time horizon of the investment
    • These upgrades can come in several different forms. For example, what does it cost to go from a 50Mbps tool to 100Mbps? Can your tool be upgraded, or do you have to buy a whole new tool? This can be a one-time cost or it can occur several times. It really depends on the growth of your network, but it’s usually inevitable for networks of any size.
  • The internal (human) cost to support the tool
    • For example, how many man-hours do you have to spend to maintain the tool, to optimize it, and to adapt it to your changing network? This could be a considerable “hidden” cost, and it’s generally recurring. It also usually increases over time, as the cost of salaries/benefits tends to go up. Because of that, this is a very important component that should be quantified for a good ROI analysis. Tools that require little or no ongoing maintenance will have a large advantage.
  • Overall impact on the network
    • Does the product add latency or other inefficiencies? Does it create any processing overhead and how much? If the answer is yes, costs such as these will constantly impact your network quality and add up over time.

RETURNS

  • Savings from being able to delay or eliminate buying more bandwidth
    • This could either be a one-time or ongoing return. Even delaying a bandwidth upgrade for six months or a year can be highly valuable.
  • Savings from not losing existing revenue sources
    • How many customers did you not lose because they did not get frustrated with their network/Internet service? This return is ongoing.
  • Ability to generate new revenue
    • How many new customers did you add because of a better-maintained network?  Were you able to generate revenue by adding new higher-value services like a tiered rate structure? This will usually be an ongoing return.
  • Savings from the ability to eliminate or reduce the financial impact of unprofitable customers
    • This is an ongoing savings. Can you convert an unprofitable customer to a profitable one by reducing their negative impact on the network? If not, and they walk, do you care?
  • Avoidance of having to buy additional equipment
    • Were you able to avoid having to “divide and conquer” by buying new access points, splitting VLANs, etc.? This can be a one-time or ongoing return.
  • Savings in the cost of responding to technical support calls
    • How much time was saved by not having to receive an irate customer call, research it and respond back? If this is something you typically deal with on a regular basis, the savings will add up every day, week or month this is avoided.

Overall, these issues are the basic financial components and questions that need to be quantified to make a good ROI analysis. For each business, and each tool, this type of analysis may yield a different answer, but it is important to note that over time there are many more items associated with ongoing costs/savings than those occurring only once. Thus, you must take great care to understand the impact of these for each tool, especially those issues that lead to costs that increase over time.

The 10-Gigabit Barrier for Bandwidth Controllers and Intel-Based Routers


By Art Reisman

Editor’s note: This article was adapted from our answer to a NetEqualizer pre-sale question asked by an ISP that was concerned with its upgrade path. We realized the answer was useful in a broader sense and decided to post it here.

Any router, bandwidth controller, or firewall that is based on Intel architecture and buses will never be able to go faster than about 7 gigabits sustained. (This includes our NE4000 bandwidth controller. While the NE4000 can actually reach speeds close to 10 gigabits, we rate our equipment for five gigabits because we don’t like quoting best-case numbers to our customers.) The limiting factor in Intel architecture is that to expand beyond 10-gigabit speeds you cannot be running with a central clock, and with a central clock controlling the show, it is practically impossible to move data around much faster than 10 gigabits.

The alternative is to use a specialized asynchronous design, which is what faster switches and hardware do. They have no clock or centralized multiprocessor/bus. However, the price point for such hardware quickly jumps to 5-10 times the Intel architecture because it must be custom designed. It is also quite limited in function once released.

Obviously, vendors can stack a bunch of 10-gig fiber bandwidth controllers behind a switch and call it something faster, but this is no different from dividing up your network paths and using multiple bandwidth controllers yourself.  So, be careful when assessing the claims of other manufacturers in this space.

Considering these limitations, many cable operators here in the US have embraced the 10-gigabit barrier. At some point you must divide and conquer using multiple 10-gig fiber links and multiple NE4000 type boxes, which we believe is really the only viable plan — that is if you want any sort of sophistication in your bandwidth controller.

While there are some that will keep requesting giant centralized boxes, and paying a premium for them (it’s in their blood to think single box, central location), when you think about the Internet, it only works because it is made of many independent paths. There is no centralized location by design. However, as you approach 10-gigabit speeds in your organization, it might be time to stop thinking “single box.”

I went through this same learning curve as a system architect at AT&T Bell Labs back in the 1990s.  The sales team was constantly worried about how many telephone ports we could support in one box because that is what operators were asking for.  It shot the price per port through the roof with some of our designs. So, in our present case, we (NetEqualizer) decided not to get into that game because we believe that price per megabit of shaping will likely win out in the end.

Art Reisman is currently CTO and co-founder of APconnections, creator of the NetEqualizer. He  has worked at several start-up companies over the years and has invented and brought several technology products to market, both on his own and with the backing of larger corporations. This includes tools for the automotive industry.

The Facts and Myths of Network Latency


There are many good references that explain how some applications such as VoIP are sensitive to network latency, but there is also some confusion as to what latency actually is as well as perhaps some misinformation about the causes. In the article below, we’ll separate the facts from the myths and also provide some practical analogies to help paint a clear picture of latency and what may be behind it.

Fact or Myth?

Network latency is caused by too many switches and routers in your network.

This is mostly a myth.

Yes, an underpowered router can introduce latency, but most local network switches add minimal latency — a few milliseconds at most. Anything under about 10 milliseconds is, for practical purposes, not humanly detectable. A router or switch (even a low-end one) may add about 1 millisecond of latency. To get to 10 milliseconds you would need eight or more hops, and even then you wouldn’t be near anything noticeable.

The faster your link (Internet) speed, the less latency you have.

This is a myth.

The speed of your network is measured by how fast IP packets arrive. Latency is the measure of how long they took to get there. So, it’s basically speed vs. time. An example of latency is when NASA sends commands to a Mars orbiter. The information travels at the speed of light, but it takes several minutes or longer for commands sent from earth to get to the orbiter. This is an example of data moving at high speed with extreme latency.
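
The distinction is easy to quantify: minimum latency is just distance divided by the speed of light, no matter how many bits per second the link carries. The distances below are approximate:

    SPEED_OF_LIGHT_KM_S = 299_792

    def one_way_latency_s(distance_km):
        """Minimum one-way propagation delay at the speed of light."""
        return distance_km / SPEED_OF_LIGHT_KM_S

    print(f"Earth to Moon: {one_way_latency_s(384_400):.2f} s")            # ~1.28 s
    print(f"Earth to Mars (close): {one_way_latency_s(54_600_000):.0f} s") # ~182 s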

VoIP is very sensitive to network latency.

This is a fact.

Can you imagine talking in real time to somebody on the moon? Your voice would take over a second to get there (radio signals need about 1.3 seconds to reach the moon). For VoIP networks, it is generally accepted that anything over about 150 milliseconds of latency can be a problem. When latency gets higher than 150 milliseconds, issues will emerge — especially for fast talkers and rapid conversations.

Xbox games are sensitive to latency.

This is another fact.

For example, in many collaborative combat games, participants are required to battle players from other locations. Low latency on your network is everything when it comes to beating the opponent to the draw. If you and your opponent shoot your weapons at the exact same time, but your shot takes 200 milliseconds to register at the host server and your opponent’s shot gets there in 100 milliseconds, you die.

Does a bandwidth shaping device such as NetEqualizer increase latency on a network?

This is true, but only for the “bad” traffic that’s slowing the rest of your network down anyway.

Ever hear of the firefighting technique where you light a back fire to slow the fire down? This is similar to the NetEqualizer approach. NetEqualizer deliberately adds latency to certain bandwidth-intensive applications, such as large downloads and P2P traffic, so that chat, email, VoIP, and gaming get the bandwidth they need. The “back fire” (latency) is used to choke off the unwanted, or non-time-sensitive, applications. (For more information on how the NetEqualizer works, click here.)
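
Here is a minimal sketch of that back-fire idea: when the link is congested, artificial delay is applied only to the heaviest flows, leaving interactive traffic untouched. The utilization and rate thresholds are invented for illustration and are not NetEqualizer’s actual parameters.

    def penalty_ms(link_utilization, flow_rate_kbps, hog_threshold_kbps=1000.0):
        """Artificial latency (ms) to add to one flow. Kicks in only when
        the link is congested, and only for flows well above the hog
        threshold; VoIP, chat, and gaming flows stay untouched."""
        if link_utilization < 0.85 or flow_rate_kbps <= hog_threshold_kbps:
            return 0.0
        # Penalty grows with how far above the threshold the flow is.
        return min(200.0, 10.0 * flow_rate_kbps / hog_threshold_kbps)

    print(penalty_ms(0.95, 64))     # 0.0  -> a VoIP call is left alone
    print(penalty_ms(0.95, 8000))   # 80.0 -> a large download is held back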

Video is sensitive to latency.

This is a myth.

Video is sensitive to the speed of the connection but not the latency. Let’s go back to our man-on-the-moon example, where data takes over a second to travel from the earth to the moon. Latency creates a problem with two-way voice communication because, in normal conversation, a delay of more than a second in hearing what was said makes it difficult to carry on a conversation. What generally happens with voice and long latency is that both parties start talking at the same time, and a moment later you experience two people talking over each other. You see this happening a lot on television with interviews done via satellite. However, most video is one-way. For example, when watching a Netflix movie, you’re not communicating video back to Netflix. In fact, almost all video transmissions are on delay and nobody notices, since it is usually a one-way transmission.

Analyzing the cost of Layer 7 Packet Shaping


November, 2010

By Eli Riles

For most IT administrators, Layer-7 packet shaping involves two actions.

Action 1: Inspecting and analyzing data to determine what types of traffic are on your network.

Action 2: Taking action by adjusting application flows on your network.

Without Layer-7 visibility and actions, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

Layer-7 monitoring and shaping is intuitively appealing, but it is a good idea to take a step back and examine the full life-cycle costs of your methodology.

In an ironic correlation, we assert that total costs increase with the complexity of the monitoring tool.

1) Obviously, the more detailed the reporting tool (Layer-7), the more expensive its initial price tag.

2) The kicker comes with part two. The more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking, looking for optimal performance.

But is it fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only outcome is to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work needed to adjust the network has been done, the associated adjustments can remain statically in place. However, in reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies with a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention.  For example, computer operators disappeared off the face of the earth with the advent of cheaper computing in the late 1980s.  The function of a computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise with many of our customers is that they are stepping down from expensive complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just looking at the usage (a simple report).
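
In code terms, that “simple report” boils down to one aggregation and one threshold. The per-user totals below are invented sample data:

    from statistics import mean, stdev

    # Hypothetical weekly usage report: gigabytes transferred per user.
    usage_gb = {"user01": 2.1, "user02": 3.4, "user03": 2.8, "user04": 1.9,
                "user05": 3.0, "user06": 2.5, "user07": 41.7, "user08": 2.2}

    mu, sigma = mean(usage_gb.values()), stdev(usage_gb.values())
    # Anyone far above the mean stands out with no per-application detail.
    hogs = {user: gb for user, gb in usage_gb.items() if gb > mu + 2 * sigma}
    print(hogs)   # {'user07': 41.7}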

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test with actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

  • List of monitoring tools compiled by Stanford
  • Top five free monitoring tools
  • Planetmy Linux Tips: How to set up a monitor for free

Five Tips to Manage Network Congestion


As the demand for Internet access continues to grow around the world, the complexity of planning, setting up, and administering your network grows. Here are five (5) tips that we have compiled, based on discussions with network administrators in the field.

#1) Be Smart About Buying Bandwidth
The local T1 provider does not always give you the lowest-priced bandwidth.  There are many Tier 1 providers out there that may have fiber within line-of-sight of your business. For example, Level 3 has fiber rings already hot in many metro areas and will be happy to sell you bandwidth. Numerous companies can set up wireless network infrastructure to get you a low-cost, high-speed link to your point of presence.

#2) Manage Expectations
You know the old saying “under-promise and over-deliver”.  This holds true for network offerings.  When building out your network infrastructure, don’t let your network users just run wide open. As you add bandwidth, you need to think about and implement appropriate rate limits/caps for your network users.  Do not wait; the problem with waiting is that your original users will become accustomed to higher speeds and will not be happy with sharing as network use grows – unless you enforce some reasonable restrictions up front.  We also recommend that you write up an expectations document for your end users (“what to expect from the network”) and post it on your website for them to reference.

#3) Understand Your Risk Factors
Many network administrators believe that if they set maximum rate caps/limits for their network users, then the network is safe from locking up due to congestion. However, this is not the case.  You also need to monitor your contention ratio closely.  If your network contention ratio becomes unreasonable, your users will experience congestion, a.k.a. “lock-ups” and “freezes”. Don’t make this mistake.

This may sound obvious, but let me spell it out. We often run into networks with 500 network users sharing a 20-meg link. The network administrator puts in place two rate caps, depending on the priority of the user — 1 meg up and down for user group A and 5 megs up and down for user group B.  Next, they put rate caps on each group to ensure that they don’t exceed their allotted amount. Somehow, this is supposed to insulate the network from contention/congestion. This is all well and good, but if you do the math, 500 network users on a 20-meg link will overwhelm the network at some point, and nobody will then be able to get anywhere close to their “promised amount.”
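
Spelling out that math (the 400/100 split between the two groups is our assumption for illustration; the paragraph above gives only the totals):

    # 500 users sharing a 20 Mbps link, with the per-user caps described above.
    link_mbps = 20
    users_a, cap_a_mbps = 400, 1    # group A: assumed 400 users at 1 meg
    users_b, cap_b_mbps = 100, 5    # group B: assumed 100 users at 5 megs

    promised_mbps = users_a * cap_a_mbps + users_b * cap_b_mbps
    print(f"Promised: {promised_mbps} Mbps on a {link_mbps} Mbps link")
    print(f"Contention ratio: {promised_mbps / link_mbps:.0f}:1")   # 45:1

Even though every user individually stays under their cap, the link is oversubscribed 45 to 1, so simultaneous demand will overwhelm it.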

If you have a high contention ratio on your network, you will need something more than rate limits to prevent lockups and congestion. At some point, you will need to go with a layer-7 application shaper (such as Blue Coat Packeteer or Allot NetEnforcer), or go with behavior-based shaping (NetEqualizer). Your only other option is to keep adding bandwidth.

#4) Decide Where You Want to Spend Your Time
When you are building out your network, think about what skill sets you have in-house and those that you will need to outsource.  If you can select network applications and appliances that minimize the time needed for set-up, maintenance, and day-to-day operations, you will reduce your ongoing costs. This is true whether you insource or outsource, as there is an “opportunity cost” for time spent with each network toolset.

#5) Use What You Have Wisely
Optimize your existing bandwidth.   Bandwidth shaping appliances can help you to optimize your use of the network.   Bandwidth shapers work in different ways to achieve this.  Layer-7 shapers will allocate portions of your network to pre-defined application types, splitting your pipe into virtual pipes based on how you want to allocate your network traffic.  Behavior-based shaping, on the other hand, will not require predefined allocations, but will shape traffic based on the nature of the traffic itself (latency-sensitive, short/bursty traffic is prioritized higher than hoglike traffic).   For known traffic patterns on a WAN, Layer-7 shaping can work very well.  For unknown patterns like Internet traffic, behavior-based shaping is superior, in our opinion.

On Internet links, a NetEqualizer bandwidth shaper will allow you to increase your customer base by 10 to 30 percent without having to purchase additional bandwidth. This allows you to increase the number of people you can put onto your infrastructure without an expensive build-out.

In order to determine whether the return-on-investment (ROI) makes sense in your environment, use our ROI tool to calculate your payback period on adding bandwidth control to your network.  You can then compare this one-time cost with your expected recurring monthly costs for additional bandwidth.  Also note that in many cases you will need to do both at some point.  Bandwidth shaping can delay or defer purchasing additional bandwidth, but with growth in your network user base, you will eventually need to consider purchasing more bandwidth.

In Summary…
Obviously, these five tips are not rocket science, and some of them you may be using already.  We offer them here as a quick guide & reminder to help in your network planning.  While the sea change that we are all seeing in internet usage (more on that later…) makes network administration more challenging every day, adequate planning can help to prepare your network for the future.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here to request a full price list.

Network Capacity Planning: Is Your Network Positioned for Growth?


Authored by:  Sandy McGregor, Director of Sales & Marketing for APconnections, Inc.
Sandy has a Masters in Management Information Systems and over 17 years of experience in the applications development life cycle.  In the past, she has been a Project Manager for large-scale data center projects, as well as a Director heading up architecture, development, and operations teams.  In Sandy’s current role at APconnections, she is responsible for tracking industry trends.

As you may have guessed, mobile users are gobbling up network bandwidth in 2010!  Based on research conducted in the first half of 2010, Allot Communications has released The Allot MobileTrends Report, H1 2010, showing dramatic growth in mobile data bandwidth usage in 2010 – up 68% across Q1 and Q2.

I am sure that you are seeing the impacts of all this usage on your networks.  The good news is that all this usage is good for your business as a network provider, if you are positioned to meet the needs of all this growth!  Whether you sell network usage to customers (as an ISP or WISP) or “sell” it internally (colleges and corporations), growth means that the infrastructure you provide becomes more and more critical to your business.

Here are some areas of the report that we found of particular interest, and their implications for your network, from our perspective…

1) Video Streaming grew by 92% to 35% of mobile use

It should be no surprise that video streaming applications take up a 35% share of mobile bandwidth, and grew by 92%.  At this growth rate, which we believe will continue and even accelerate in the future, your network capacity will need to grow as well.  Luckily, bandwidth prices are continuing to come down in all geographies.

No matter how much you partition your network using a bandwidth shaping strategy, the fact is that video streaming takes up a lot of bandwidth.  Add to that the fact that more and more users are using video, and you have a full pipe before you know it!  While you can look at ways to cache video, we believe that you have no choice but to add bandwidth to your network.

2) Users are downloading like crazy!

When your customers are not watching videos, they are downloading, either via P2P or HTTP, which combined represented 31 percent of mobile bandwidth, with an aggregate growth rate of 80 percent.  Although additional network capacity can help somewhat here, large downloads or multiple P2P users can still quickly clog your network.

You need to first determine if you want to allow P2P traffic on your network.  If you decide to support P2P usage, you may want to think about how you will identify which users are doing P2P and whether you will charge a premium for this service. Also, be aware that encrypted P2P traffic is on the rise, which makes it difficult to figure out what traffic is truly P2P.

Large file downloads need to be supported.  Your goal here should be to figure out how to enable downloading for your customers without slowing down other users and bringing the rest of your network to a halt.

In our opinion, P2P and downloading is an area where you should look at bandwidth shaping solutions.  These technologies use various methods to prioritize and control traffic, such as application shaping (Allot, BlueCoat, Cymphonix) or behavior-based shaping (NetEqualizer).

These tools, or various routers (such as Mikrotik), should also enable you to set rate limits on your user base, so that no one user can take up too much of your network capacity.  Ideally, rate limits should be flexible, so that you can set a fixed amount by user, group of users (subnet, VLAN), or share a fixed amount across user groups.
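
Rate caps of this kind are classically implemented with a token bucket. The sketch below is a generic illustration of the mechanism, not any particular router’s or shaper’s implementation; the 1 Mbps cap and burst size are invented numbers.

    import time

    class TokenBucket:
        """Generic token-bucket rate limiter: a steady rate plus a small burst."""
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate, self.capacity = rate_bytes_per_s, burst_bytes
            self.tokens, self.last = burst_bytes, time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            # Refill tokens for the time elapsed, up to the burst capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False          # over the cap: queue or drop the packet

    # One bucket per user (or per subnet/VLAN) enforces a 1 Mbps cap.
    user_cap = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=25_000)
    print(user_cap.allow(1500))   # True for a typical Ethernet-sized packet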

3) VoIP and IM are really popular too

The second fastest growing traffic types were VoIP and Instant Messaging (IM).  Note that if your customers are not yet using VoIP, they will be soon.  The cost model for VoIP just makes it so compelling for many users, and having one set of wires in an office configuration is attractive as well (who likes the tangle of wires dangling from their desk anyway?).

We believe that your network needs to be able to handle VoIP without call break-up or delay.  For a latency-sensitive application like VoIP, bandwidth shaping (aka traffic control, aka bandwidth management) is key.  Regardless of your network capacity, if your VoIP traffic is not given priority, call break up will occur.  We believe that this is another area where bandwidth shaping solutions can help you.

IM, on the other hand, can handle a little latency (depending on how fast your customers type and send messages).  To a point, customers will tolerate a delay in IM – but probably 1-2 seconds max.  After that, they will blame your network, and if delays persist, will look to move to another network provider.

In summary, to position your network for growth:

1) Buy More Bandwidth – It is a never-ending cycle, but at least the cost of bandwidth is coming down!

2) Implement Rate Limits – Stop any one user from taking up your whole network.

3) Add Bandwidth Shaping – Maximize what you already have.  Think efficiency here.  To determine the payback period on an investment in the NetEqualizer, try our new ROI tool.  You can put together similar calculations for other vendors.

Note:  The Allot MobileTrends Report data was collected from Jan. 1 to June 30 from leading mobile operators worldwide with a combined user base of 190 million subscribers.

APconnections Announces New API for Customizing Bandwidth User Quotas


APconnections is proud to announce the release of its NetEqualizer User-Quota API (NUQ API) programmer’s toolkit. This new toolkit will allow NetEqualizer users to generate custom configurations to better handle bandwidth quotas* as well as keep customers informed of their individual bandwidth usage.

The NetEqualizer User-Quota API (NUQ API) programmer’s toolkit features include:

  1. Tracking user data by IP and MAC address (MAC address tracking will be out in the second release)
  2. Specifying quotas and bandwidth limits by IP or a subnet block
  3. Monitoring real-time bandwidth utilization at any time
  4. Setting up a notification alarm when a user exceeds a bandwidth limit
  5. Utilizing an API programming interface

In addition to providing the option to create separate bandwidth quotas for individual customers and reduce a customer’s Internet pipe when they have reached their set limit, the toolkit allows customers themselves to be notified when a limit is reached, and even to access an interface for monitoring current monthly usage, so they are not surprised when they reach their limit.
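
As a hypothetical sketch of the accounting loop such a toolkit enables (the names below are invented for illustration and are not the actual NUQ API calls):

    # Hypothetical quota logic; these names are illustrative only and
    # are NOT the actual NUQ API.
    QUOTA_GB = 50.0

    usage_gb = {}       # per-IP running totals
    notified = set()    # IPs already alerted this billing period

    def record_transfer(ip, gb):
        usage_gb[ip] = usage_gb.get(ip, 0.0) + gb
        if usage_gb[ip] >= QUOTA_GB and ip not in notified:
            notified.add(ip)
            print(f"ALERT: {ip} exceeded {QUOTA_GB} GB; throttle and notify")

    record_transfer("10.0.0.42", 49.5)
    record_transfer("10.0.0.42", 1.0)   # crosses the quota; alarm fires once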

Overall, the NUQ API will provide a quick and easy tool to customize your business and business process.

If you do not currently have the resources to use the NUQ API and customize it to fit your business, please contact us and we can arrange for one of our consulting partners to put together an estimate for you.  Or, if you just have a few questions, we’d be happy to put together a reasonable support contract (Support for the API programs is not included in our standard software support (NSS)).

*Bandwidth quotas are used by ISPs as a means to meter total bandwidth downloaded over a period of time. Although not always disclosed, most ISPs reserve the right to limit service for users that continually download data. Some providers use the threat of quotas as a deterrent to keep overall traffic on an Internet link down.

See how bandwidth hogs are being treated in Asia

Equalizing Compared to Application Shaping (Traditional Layer-7 “Deep Packet Inspection” Products)


Editor’s Note: (Updated with new material March 2012)  Since we first wrote this article, many customers have implemented the NetEqualizer not only to shape their Internet traffic, but also to shape their company WAN.  Additionally, concerns about DPI and loss of privacy have bubbled up. (Updated with new material September 2010)  Since we first published this article, “deep packet inspection”, also known as Application Shaping, has taken some serious industry hits with respect to US-based ISPs.   

==============================================================================================
Author’s Note: We often get asked how NetEqualizer compares to Packeteer (Bluecoat), NetEnforcer (Allot), Network Composer (Cymphonix), Exinda, and a plethora of other well-known companies that do Application Shaping (aka “packet shaping”, “deep packet inspection”, or “Layer-7” shaping).   After several years of these questions, and of discussing the different aspects with IT administrators who have used or currently use application shaping, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.
We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject, with content to support the bullet chart, was in order.  If you want to skip the details, see our Summary Table at the end of this article.

However, if you’re looking to really understand the differences, and to have the question answered as objectively as possible, please take a few minutes to read on…
==============================================================================================

How NetEqualizer compares to Bluecoat, Allot, Cymphonix, & Exinda

In the following sections, we will cover specifically when and where Application Shaping is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish.  We will also discuss how Equalizing, NetEqualizer’s behavior-based shaping, fits into the landscape of application shaping, and how in many cases Equalizing is a much better alternative.

Download the full article (PDF): Equalizing Compared To Application Shaping White Paper

