APconnections 10 Year Anniversary Celebration – All Summer Long!


We are celebrating 10 years in business this summer, thanks to you, our loyal customers!  Our first NetEqualizer sale was a CD version, way back on July 13th, 2003.  As part of APconnections’ 10 Year Celebration, we will be donating $25 to one of four charities of the buyer’s choice for each NetEqualizer or NetGladiator sold from now until August 31, 2013.

We selected charities that are all rated B+ or above by CharityWatch.  The charities all operate on a global basis (like us!) and focus on one of the following: International Relief & Development, Homelessness & Housing, or Hunger. While we may not have picked your favorite charity, we hope that you agree that these are all worthy causes!

When you place a purchase order between now and August 31st, 2013, you will be asked to pick the charity of your choice for each unit purchased.

The charities, along with descriptions of their mission/vision from their websites, are as follows.  You can visit their websites by clicking on the displayed links:

1) United States Fund for UNICEF   http://www.unicefusa.org
The United Nations Children’s Fund (UNICEF) works in more than 190 countries and territories to save and improve children’s lives, providing health care and immunizations, clean water and sanitation, nutrition, education, emergency relief and more. The U.S. Fund for UNICEF supports UNICEF’s work through fundraising, advocacy and education in the United States. Together, we are working toward the day when ZERO children die from preventable causes and every child has a safe and healthy childhood.

2) Habitat for Humanity    http://www.habitat.org
Habitat for Humanity believes that every man, woman and child should have a decent, safe and affordable place to live. We build and repair houses all over the world using volunteer labor and donations. Our partner families purchase these houses through no-profit, no-interest mortgage loans or innovative financing methods.

3) Doctors Without Borders   http://www.doctorswithoutborders.org
Doctors Without Borders/Médecins Sans Frontières (MSF) works in nearly 70 countries providing medical aid to those most in need regardless of their race, religion, or political affiliation.

4) Global Hunger Project    http://www.thp.org
The Hunger Project (THP) is a global, non-profit, strategic organization committed to the sustainable end of world hunger. In Africa, South Asia and Latin America, THP seeks to end hunger and poverty by empowering people to lead lives of self-reliance, meet their own basic needs and build better futures for their children.

Thank you for all your support over our first 10 years; we truly appreciate your business!

We look forward to working with all of you for many more years. 

APconnections Celebrates New NetEqualizer Lite with Introductory Pricing


Editor’s Note:  This is a copy of a press release that went out on May 15th, 2012.  Enjoy!

Lafayette, Colorado – May 15, 2012 – APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is celebrating the expansion of its NetEqualizer Lite product line by offering special pricing for a limited time.

NetEqualizer’s VP of Sales and Business Development, Joe D’Esopo, is excited to announce: “To make it easy for you to try the new NetEqualizer Lite, for a limited time we are offering the NetEqualizer Lite-10 at introductory pricing of just $999 for the unit, our Lite-20 at $1,100, and our Lite-50 at $1,400.  These are incredible deals for the value you will receive, which we believe is unmatched in our industry today.”

We have upgraded the base technology for the NetEqualizer Lite, our entry-level bandwidth-shaping appliance.  Our new Lite retains the small form factor that sets it apart and makes it ideal for implementation in the field, but now has an enhanced CPU and more memory. This enables us to include robust graphical reporting, as in our other product lines, and also to support additional bandwidth license levels.

The Lite is geared towards smaller networks with fewer than 350 users, is available in three license levels, and is field-upgradable across them: our Lite-10 runs on networks up to 10Mbps and up to 150 users ($999), our Lite-20 on up to 20Mbps and 200 users ($1,100), and our Lite-50 on up to 50Mbps and 350 users ($1,400).  See our NetEqualizer Price List for complete details.  One-year renewable NetEqualizer Software & Support (NSS) and NetEqualizer Hardware Warranty (NHW) contracts are offered.

Like all of our bandwidth shapers, the NetEqualizer Lite is a plug-and-play, low-maintenance solution that is quick and easy to set up, typically taking one hour or less.  QoS is implemented via behavior-based bandwidth shaping, or “equalizing”, which gives priority to latency-sensitive applications, such as VoIP, web browsing, chat and e-mail, over the large file downloads and video that can clog your Internet pipe.

About APconnections:  APconnections is based in Lafayette, Colorado, USA.  We released our first commercial offering in July 2003, and since then thousands of customers all over the world have put our products into service.  Today, our flexible and scalable solutions can be found in over 4,000 installations in many types of public and private organizations of all sizes across the globe, including Fortune 500 companies, major universities, K-12 schools, and Internet providers on six (6) continents.  To learn more, contact us at sales@apconnections.net.

Contact: Sandy McGregor
Director, Marketing
APconnections, Inc.
303.997.1300
sandy@apconnections.net

What Does it Cost You Per Mbps for Bandwidth Shaping?


Sometimes a cost metric can distill a relatively complicated thing down to a simple number for comparison. For example, we can compare housing costs by Dollars Per Square Foot, or the fuel efficiency of cars by Miles Per Gallon (MPG).  There are a number of factors that go into buying a house or a car, and a compelling cost metric like those above may be one of them.  Nevertheless, if you decide to buy something that is more expensive to operate than a less expensive alternative, you are probably aware of the cost difference and can justify it with some good reasons.

A metric like this makes sense for bandwidth shaping now more than ever, because the cost of bandwidth continues to decline, and as the cost of bandwidth declines, the cost of shaping that bandwidth should decline as well.  After all, it wouldn’t be logical to spend a lot of money to manage a resource that’s declining in value.

With that in mind, I thought it might be interesting to look at bandwidth shaping on a cost-per-Mbps basis. Alternatively, I could look at bandwidth shaping on a cost-per-user basis, but that metric fails to capture the declining cost of a Mbps of bandwidth. So, cost per Mbps it is.

As we’ve pointed out in previous articles, there are two kinds of costs typically associated with bandwidth shapers:

1) Upfront costs (these are for the equipment and setup)

2) Ongoing costs (these are for annual renewals, upgrades, license updates, labor for maintenance, etc…)

Upfront, or equipment, costs are usually pretty easy to get.  You just call the vendor and ask for the price of their product (maybe not so easy in some cases).  In the case of the NetEqualizer, you don’t even have to do that – we publish our prices here.

With the NetEqualizer, setup time is normally less than an hour and is thus negligible, so we’ll just divide the unit price by the throughput level, and here’s the result:
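
To give a feel for the arithmetic, here is a minimal sketch using the NetEqualizer Lite introductory prices quoted in the press release above (the larger models follow the same calculation from our published price list):

```python
# Upfront cost per Mbps = unit price / licensed throughput.
# Prices below are the NetEqualizer Lite introductory prices quoted above;
# other models follow the same arithmetic from the published price list.
lite_models = {
    "Lite-10": {"price_usd": 999,  "throughput_mbps": 10},
    "Lite-20": {"price_usd": 1100, "throughput_mbps": 20},
    "Lite-50": {"price_usd": 1400, "throughput_mbps": 50},
}

for model, spec in lite_models.items():
    cost_per_mbps = spec["price_usd"] / spec["throughput_mbps"]
    print(f"{model}: ${cost_per_mbps:.2f} per Mbps upfront")

# Lite-10: $99.90 per Mbps upfront
# Lite-20: $55.00 per Mbps upfront
# Lite-50: $28.00 per Mbps upfront
```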

I think this is what you would expect to see: the cost per Mbps drops as the licensed throughput of the unit goes up.

For ongoing costs, you would need to add up all the mandatory per-year costs and divide by throughput, giving an ongoing “yearly” per-Mbps cost.

Again, if we take the NetEqualizer as an example, the ongoing costs are almost zero.  This is because it’s a turn-key appliance that requires no time from the customer for bandwidth analysis, nor any policy setup or maintenance to run effectively (it doesn’t use policies). In fact, it’s a true zero-maintenance product, which yields zero labor costs. Besides no labor, there are no updates or licenses required (an optional service contract is available if you want ongoing access to technical support or software upgrades).

Frankly, it’s not worth the effort of graphing this one. The ongoing cost of a NetEqualizer Support Agreement ranges from $29 down to $0.20 per Mbps per year. This isn’t the case for many other products, though, and this number should be evaluated carefully. In fact, the ongoing costs of some products exceed the upfront cost of a new NetEqualizer!

Again, the lowest cost per Mbps of bandwidth shaping may not be the best solution for you – but, if it’s not, you should have some good reasons.

If you shape bandwidth now, what is your cost per Mbps of bandwidth shaping? We’d be interested to know.

If your ongoing costs are higher than the upfront costs of a new NetEqualizer and you’re open to a discussion, you should drop us a note at sales@apconnections.net.

The Story of NetEqualizer


By Art Reisman

CTO www.netequalizer.com

In the spring of 2002, I was a systems engineer at Bell Labs in charge of architecting Conversant – an innovative speech-processing product. Revenue kept falling quarter by quarter, and meanwhile upper management seemed to only be capable of providing material for Dilbert cartoons, or perhaps helping to fine-tune the script for The Office. It was so depressing that I could not even read Dilbert anymore – those cartoons are not as amusing when you are living them every day.

Starting in the year 2000, and continuing every couple of months, there was a layoff somewhere in the company (which was Avaya at the time). Our specific business unit would get hit every six months or so. It was like living in a hospice facility. You did not want to get to know anybody too well because you would be tagged with the guilt of still having a job should they get canned next week. The product I worked on existed only as a cash cow to be milked for profit, while upper management looked to purchase a replacement. I can’t say I blamed them; our engineering expertise was so eroded by then that it would have been a futile effort to try and continue to grow and develop the product.

Mercifully, I was laid off in June of 2003.

Prior to my pink slip, I had been fiddling with an idea that a friend of mine, Paul Harris, had come up with. His idea was to run a local wireless ISP. This initially doomed idea spawned from an article in the local newspaper about a guy up in Aspen, CO who was beaming wireless Internet around town using a Pringles can – I am not making this up. Our validation consisted of Paul rigging up a Pringles can antenna, attaching it to his laptop’s wireless card (we had external cards for wireless access at the time), and then driving a block from his house and logging in to his home Internet. Amazing!

The next day, while waiting around for the layoff notices, we hatched a plan to see if we could set up a tiny ISP from my neighborhood in northern Lafayette, CO. I lived in a fairly dense development of single-family homes, and despite many of my neighbors working in the tech industry, all we could get in our area was dial-up Internet. Demand was high for something faster.

So, I arranged to get a 1/2 T1 line to my house at the rate of about $1,500 per month, with the idea that I could resell the service to my neighbors. Our take rate for service appeared to be everybody I talked to. And so, Paul climbed onto the roof and set up some kind of pole attached to the top of the chimney, with a wire running down into the attic where we had a $30 Linksys AP. The top of my roof gave us a line-of-sight to 30 or 40 other rooftops in the area. We started selling service right away.

In the meantime, I started running some numbers in my head about how well this 1/2 T1 line would hold up. It seemed like every potential customer I talked to planned on downloading the Library of Congress, and I was afraid of potential gridlock. I had seen gridlock many times on the network at the office – usually when we were beating the crap out of it with all the geeky things we experimented on at Bell Labs.

We finally hooked up a couple of houses in late March, and by late April the trees in the area leafed out and blocked our signal. Subsequently, the neighbors got annoyed and stopped paying. Most 802.11 frequencies do not travel well through trees. I was also having real doubts about our ability to make back the cost of the T1 service, especially with the threat of gridlock looming once more people came online – not to mention the line-of-sight being blocked by the trees.

Being laid off was a blessing in disguise. Leaving Bell Labs was not a step I would have taken on my own. Not only did I have three kids, a mortgage, and the net worth of a lawnmower, but my marketable technical skills had also lapsed significantly over the previous four years. Our company had done almost zero cutting-edge R&D in that time. How was I going to explain that void of meaningful, progressive work on my resume? It was a scary realization.

Rather than complain about it, I decided to learn some new skills, and the best way to do that is to give yourself a project. I decided to spend some time trying to figure out a way to handle the potential saturation on our T1 line. I conjured up my initial solution from my computer science background. In any traditional operating systems course, there is always a lesson discussing how a computer divvies up its resources. Back in the old days, when computers were very expensive, companies with computer work would lease time on a shared computer to run a “job”. Computing centers at the time were either separate companies or charge-back centers in larger companies that could afford a mainframe. A job was the term used for your computer program. The actual computer code was punched out on cards. The computer operator would take your stack of cards from behind a cage in a special room and run them through the machine. Many operators were arrogant jerks who belittled you when your job kicked out with an error, or if it ran too long and other jobs were waiting. Eventually computer jobs evolved so they could be submitted remotely from a terminal, and the position of the operator faded away. Even without the operator, computers were still very expensive, and there were always more jobs to run than leased computer time to run them. This sounds a lot like a congested Internet pipe, right?

The solution for computers with limited resources was a specialized program called an operating system.  Operating systems decided what jobs could run, and how much time they would get, before getting furloughed. During busy times, the operating system would temporarily kick larger jobs out and make them wait before letting them back in. The more time they used before completion, the lower their priority, and the longer they would wait for their turn.

My idea – and the key to controlling congestion on an Internet pipe – was to adapt the proven OS scheduling methodology used to prevent gridlock on a computer to another limited resource – bandwidth on an Internet link. But, I wasn’t quite sure how to accomplish this yet.
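
Looking back, the heart of that idea fits in a few lines of modern Python. This is a toy illustration only, not the actual LBA or NetEqualizer code, and every flow name and number in it is invented:

```python
# Toy sketch of "equalizing": when the link is congested, penalize the
# largest flows first so that small, latency-sensitive flows keep moving.
# This illustrates the concept; it is not the real implementation.

def flows_to_penalize(flows_kbps, link_capacity_kbps):
    """flows_kbps maps a flow id to its current rate in kbps.
    Returns the flows to throttle until the link recovers."""
    total = sum(flows_kbps.values())
    if total <= link_capacity_kbps:
        return []                      # no congestion: leave everyone alone
    penalized = []
    # Walk the flows from heaviest to lightest, throttling until the
    # projected load fits back under the link capacity.
    for flow, rate in sorted(flows_kbps.items(), key=lambda kv: -kv[1]):
        penalized.append(flow)
        total -= rate / 2              # assume a penalty roughly halves the flow
        if total <= link_capacity_kbps:
            break
    return penalized

flows = {"voip_call": 80, "web_page": 300, "video_stream": 4000, "big_download": 9000}
print(flows_to_penalize(flows, link_capacity_kbps=10000))
# ['big_download'] -- the bulk download waits; VoIP and web traffic do not
```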

Kevin Kennedy was a very respected technical manager during my early days at Bell Labs in Columbus, Ohio. Kevin left shortly after I came on board, and eventually rose up to be John Chambers’ number two at Cisco. Kevin helped start a division at Cisco which allowed a group of engineers to migrate over and work with him – many of whom were friends of mine from Bell Labs. I got on the phone and consulted a few of them on how Cisco dealt with congestion on their network. I wondered if they had anything smart and automated, and the answer I got was “yes, sort of.” There was some newfangled way to program their IOS operating system, but nothing was fully automated. That was all I needed to hear. It seemed I had found a new niche, and I set out to make a little box that you plugged into a WAN or Internet port that would automatically relieve congestion and not require any internal knowledge of routers and complex customizations.

In order to make an automated fairness engine, I would need to be able to tap into the traffic on an Internet link. So I started looking at the Linux kernel source code and spent several weeks reading about what was out there. Reading source code is like building a roadmap in your head. Slowly, over time, neurons start to figure it out – much the same way a London taxi driver learns their way around thousands of little streets, some of them dead ends. I eventually stumbled onto the Linux bridge code, which allows anybody with a simple laptop and two Ethernet cards to build an Ethernet bridge. Although an Ethernet bridge was not really related in function to my product idea, it took care of all the upfront work I would need to do to break into an Internet connection, examine data streams, and reset their priorities on the fly as necessary – all with complete transparency to the network.

As usual, the mechanics of turning the concept in my head into working code were a bit painful and arduous. I am not the most adept when it comes to code syntax and wandering my way around kernel code. A good working knowledge of build tools, compilers, and legacy Linux source code is required to make anything work in the Linux kernel. The problem was that I couldn’t stand those details. I hated them and would have gladly paid somebody else to implement my idea, but I had absolutely no money. Building and coding in the Linux kernel is like reading a book you hate where the chapters and plot are totally scrambled. But, having done it many times before, I slogged through, and out the other side appeared the Linux Bandwidth Arbitrator (LBA) – an open-source set of utilities and programs that would turn a Linux bridge into a device that automatically applied fairness rules.

Once I had the tool working in my small home test lab, I started talking about it on a couple of Linux forums. I needed a real network to test it on because I had no experience running a network. My engineering background up until then had been working with firmware on proprietary telecommunication products. I had no idea how my idea would perform in the wild.

Eventually, as a result of one of my Linux forum posts, a call came in from a network administrator and Linux enthusiast named Eric, who ran a network for a school district in the Pacific Northwest. I thought I had hit the big time. He was a real person with a real network and a real problem. I helped him load up a box with our tool set in his home office for testing. Eventually, we got it up and running on his district network with mixed results. The experiment, although inconclusive, worked some serious kinks out of my assumptions.

I went back to the Linux forums with my newfound knowledge. I learned of a site called “freshmeat.net” where one could post free software for commercial use. The response was way more than I expected, perhaps a thousand hits or so in the first week. However, the product was not easy to build from scratch and most hits were just curious seekers of free tools. Very few users had built a Linux kernel, let alone had the skill set to build a Linux Bandwidth Arbitrator from my instructions. But, it only took one qualified candidate to further validate the concept.

This person turned out to be an IT administrator from a state college in Georgia. He loaded our system up after a few questions, and the next thing I knew I got an e-mail that went something like this:

“Since we installed the LBA, all of our congestion has ceased, and the utilization on our main Internet trunk is 20% less. The students are very happy!”

I have heard this type of testimonial many times since, but I was in total disbelief with this first one. It was on a significant network with significant results! Did it really work, or was this guy just yanking my chain? No. It was real, and it really did work!

I was broke and ecstatic at the same time. The Universe sends you these little messages that you are on the right track just when you need them. To me, this e-mail was akin to 50,000 people in a stadium cheering for you. Cue the Rocky music.

Our following on freshmeat.net grew and grew. We broke into the Top 100 projects, which to tech geeks is like making it to Hollywood Week on American Idol, and then broke into the Top 50 or so in their rankings. This was really quite amazing because most of the software on freshmeat.net consisted of consumer-based utilities, which have a much broader audience. The only business-to-business utility products (like the LBA) with higher rankings were very well-known projects such as MySQL and DansGuardian.

Shortly after going live on freshmeat.net, I started collaborating with Steve Wagor (now my partner at APconnections) on add-ons to the LBA utility. He had been working as a DBA, webmaster, and jack-of-all-trades for a company that built websites for realtors in the southwestern United States. We were getting about one request a week to help install the LBA in a customer network. Steve got the idea to make a self-booting CD that could run on any standard PC with a couple of LAN cards. In August of 2004, we started selling them. Our only channel at the time was freshmeat.net, which allowed us to offer a purchasable CD as long as we offered the freeware version too.* We sold fifteen CDs that first month. The only bad news was that we were working for about $3.00 per hour. There were too many variables on the customer-loaded systems for us to be as efficient as we needed to be.  Also, many of the customers loading the free CD were as broke as we were and not able to pay for our expertise.

* As an interesting side note, we also had a free trial version that ran for about two hours and could be converted to the commercial version with a key. The idea was to let people try it, prove it worked, and then send them the permanent key when they paid. Genius, we thought. However, we soon realized there were thousands of small Internet cafes around the world that would run the thing for two hours and then reboot. They were getting congestion control and free consulting from us. In countries where the power goes out once a day anyway, no one is bothered by a sixty-second Internet outage.

As word got out that the NetEqualizer worked well, we were able to formalize the commercial version and started bundling everything into our own manufacturing and shipping package from the United States. This eliminated all the free consulting work on the demo systems, and also ensured a uniform configuration that we could support.

Today, in growing circles, NetEqualizer has become an eponym for fairness-based bandwidth shaping.

Some humble facts:

NetEqualizer is a multi-million dollar company.

NetEqualizers have over ten million users going through them on six continents.

We serve many unique locales in addition to the world’s largest population centers. Some of the more interesting places are:

  • Malta
  • The Seychelles Islands
  • The Northern Slopes of Alaska
  • Iceland
  • Barbados
  • Guantanamo Bay
  • The Yukon Territory
  • The Afghan-American Embassy
  • The United States Olympic Training Center
  • Multiple NBA arenas
  • Yellowstone National Park

Stay tuned for Part II, “From Startup to Multi-National, Multi-Million Dollar Enterprise.”

Meanwhile, check out these related articles:

NetEqualizer Brand Becoming an Eponym for Fairness and Net-Neutrality Techniques

“Building a Software Company from Scratch” – Adapted from an entrepreneur.org article.

What Is Burstable Bandwidth? Five Points to Consider


Internet providers continually use clever marketing analogies to tout their burstable high-speed Internet connections. One of my favorites is the comparison to an automobile with an overdrive that, at the touch of a button, can burn up the road. At first, the analogies seem valid, but there are usually some basic pitfalls and unresolved issues.  Below are five points designed to make you ponder just what you’re getting with your burstable Internet connection; they may ultimately call some of these analogies, and burstable Internet speeds altogether, into question.

  1. The car acceleration analogy just doesn’t work.

    First, you don’t share your car’s engine with other users when you’re driving.  Whatever the engine has to offer is yours for the taking when you press down on the throttle.  As you know, you do share your Internet connection with many other users.  Second, with your Internet connection, unless there is a magic button next to your router, you don’t have the ability to increase your speed on command.  Instead, Internet bursting is a mysterious feature that only your provider can dole out when they deem appropriate.  You have no control over the timing.

  2. Since you don’t have the ability to decide when you can be granted the extra power, how does your provider decide when to turn up your burst speed?

    Most providers do not share details on how they implement bursting policies, but here is an educated guess, based on years of experience helping providers enforce various policies regarding Internet line speeds.  I suspect your provider watches your bandwidth consumption and lets you pop up to your full burst speed, typically 10 megabits, for a few seconds at a time.  If you continue to use the full 10 megabits for more than a few seconds, they will likely rein you back down to your normal committed rate (typically 1 megabit). Please note this is just an example from my experience and may not reflect your provider’s actual policy; a toy sketch of this guess appears after this list.

  3. Above, I mentioned a few seconds for a burst, but just how long does a typical burst last?

    If you were watching a bandwidth-intensive HD video for an hour or more, for example, could you sustain adequate line speed to finish the video? A burst of a few seconds will suffice to make a Web page load in 1/8 of a second instead of perhaps the normal 3/4 of a second. While that is impressive to a degree, an hour-long video needs sustained throughput that far outlasts a burst of a few seconds, so you quickly fall back to your baseline speed. So, if you’re watching a movie or doing any other sustained bandwidth-intensive activity, it is unlikely you will benefit from any sort of bursting technology.

  4. Why doesn’t my provider let me have the burst speed all of the time?

    The obvious answer is that if they did, it would not be a burst, so it must somehow be limited in duration.  A better answer is that your provider has peaks and valleys in their available bandwidth during the day, and the higher speed of a burst cannot be delivered consistently.  Therefore, it’s better to leave bursting as a nebulous marketing term rather than a clearly defined entity.  One other note: if you only get bursting during your provider’s Internet “valleys”, it may not help you at all, as that time of day may be nowhere near your own busy hour, and so although it will not hurt you, it will not help much either.

  5. When are the provider peak times during which my burst is likely to be compromised?

    Slower service and the inability to burst most likely occur when everybody else on the Internet is watching movies, during the early evening.  Again, if this is your busy hour, just when you could really use bursting, it is not available to you.
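
To make the guess in point 2 concrete, here is a toy sketch of how such a bursting policy might behave. The rates and the time window are assumptions chosen for illustration, not any provider’s documented policy:

```python
# Illustrative guess at a simple bursting policy: short spikes get the
# advertised burst rate, sustained load is reined back to the committed rate.
# All of the numbers here are assumptions made for the example.

COMMITTED_RATE_MBPS = 1.0    # the rate you are actually sold
BURST_RATE_MBPS = 10.0       # the advertised "up to" burst speed
BURST_WINDOW_SECONDS = 5     # how long sustained high usage is tolerated

def allowed_rate_mbps(seconds_at_burst: float) -> float:
    """Rate the provider grants, given how long the subscriber has already
    been running at (or near) the burst speed."""
    if seconds_at_burst <= BURST_WINDOW_SECONDS:
        return BURST_RATE_MBPS       # quick web page load: full burst
    return COMMITTED_RATE_MBPS       # hour-long HD video: back to committed rate

print(allowed_rate_mbps(2))      # 10.0
print(allowed_rate_mbps(3600))   # 1.0
```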

These five points should give you a good idea of the multiple questions and issues that need to be considered when weighing the viability and value of burstable Internet speeds.  Of course, a final decision on bursting will ultimately depend on your specific circumstances.  For further related reading on the subject, we suggest you visit our articles How Much YouTube Can the Internet Handle and Field Guide to Contention Ratios.

$1000 Discount Offered Through NetEqualizer Cash For Conversion Program


After witnessing the overwhelming popularity of the government’s Cash for Clunkers new car program, we’ve decided to offer a similar deal to potential NetEqualizer customers. Therefore, this week we’re announcing the launch of our Cash for Conversion program. The program offers owners of select brands (see below) of network optimization technology a $1000 credit toward the list-price purchase of NetEqualizer NE2000-10 or higher models (click here for a full price list). All you have to do is send us your old (working or not) or out-of-license bandwidth control technology. Products from the following manufacturers will be accepted:

  • Exinda
  • Packeteer/Blue Coat
  • Allot
  • Cymphonix
  • Procera

In addition to receiving the $1000 credit toward a NetEqualizer, program participants will also have the peace of mind of knowing that their old technology will be handled responsibly through refurbishment or electronics recycling programs.

Only the listed manufacturers’ products will qualify. Offer good through the Labor Day weekend (September 7, 2009). For more information, contact us at 303-997-1300 or admin@apconnections.net.

The True Price of Bandwidth Monitoring


By Art Reisman

CTO www.netequalizer.com

For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at the cost of monitoring your Internet has two parts: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. Ironically, we assert that both costs rise with the complexity of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking about, looking for optimal performance.

But is it fair to assume higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only outcome is to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work needed to adjust the network has been done, the associated adjustments can remain statically in place. In reality, however, network traffic changes constantly, and the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention.  For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s.  The function of the computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to break down all the traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users who will be well above it. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just by looking at the usage in a simple report.
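
As a minimal sketch of that kind of simple report (the usage numbers below are invented for illustration, not data from any real network):

```python
# Flag the handful of users whose weekly usage sits far above everyone
# else's, with no per-application traffic classification at all.
from statistics import mean, stdev

weekly_usage_gb = {
    "user01": 4.2, "user02": 3.8, "user03": 5.1, "user04": 4.7,
    "user05": 3.9, "user06": 4.4, "user07": 61.0, "user08": 4.0,
}

avg = mean(weekly_usage_gb.values())
spread = stdev(weekly_usage_gb.values())

heavy_users = {u: gb for u, gb in weekly_usage_gb.items() if gb > avg + 2 * spread}
print(heavy_users)   # {'user07': 61.0} -- the outlier is obvious at a glance
```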

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

  • List of monitoring tools compiled by Stanford
  • Planetmy Linux Tips: How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency-sensitive applications, such as VoIP and email. Click here for a full price list.
