Instant Bandwidth Snapshot Feature: Is this an Industry First?


One of the things that we have noticed with reporting tools lately, including ntop (the reporting tool we integrate), is that there is no easy way to show instant bandwidth for a user. Most reporting tools smooth out usage over some time period; a 5-minute average is the norm.

For example, this popular NetFlow analyzer touts a 10-minute average; the FAQ right on their main page states:

“Real-time Bandwidth Reports for each WAN link

As soon as Netflow data is received, graphs are generated showing details on incoming and outgoing traffic on the link for the last 10 minutes.”

Nowhere can we find a reasonable bandwidth monitoring tool that will show you instant, as-of-this-second bandwidth utilization. We are sure somebody will e-mail us to dispute this claim, and if so, we will gladly publish their link and give them credit on our blog.

When is an Instant Bandwidth Reporting Tool useful?

1) The five-minute average reporting tool is of little use when a customer calls and tells you they are not getting their expected bandwidth on a speed test or video. In these cases it is best to see the instant report while they are consuming the bandwidth, not averaged into a 10-minute aggregate.

2) Suppose a customer has a fixed rate cap and calls to report that their VoIP is not working well. The easiest and quickest way to check their consumption during a VoIP call is to see it now. You don’t need a fancy protocol analyzer to tell them that their YouTube video specifically is sucking up their full 1-megabit allocation. You just need to know that their line is clear and that they are consuming the full megabit at this instant, thus exonerating you (the ISP or support person) from getting dragged into the dregs of culpability.

Here are some links to other reporting tools.

http://www.javvin.com/packet.html

Ip guard

Spiceworks

Here is a snapshot of our screen that allows you to take an Instant Bandwidth Snapshot, showing the last second of utilization for an individual IP, Pool, or VLAN on your network.

NetEqualizer Field Guide to Network Capacity Planning


I recently reviewed an article that covered bandwidth allocations for various Internet applications. Although the information was accurate, it was very high-level and did not cover the many variances that affect bandwidth consumption. Below, I’ll break many of these variances down, discussing not only how much bandwidth different applications consume, but also the ranges of that consumption, including ping times and gaming, as well as how our own network optimization technology measures bandwidth consumption.

E-mail

Some bandwidth planning guides make simple assumptions and provide a single number for E-mail capacity planning, oftentimes overstating the average consumption. However, a single number usually doesn’t provide an accurate assessment. Let’s consider a couple of different types of E-mail.

E-mail — Text

Most E-mail text messages are at most a paragraph or two of text. On the scale of bandwidth consumption, this is negligible.

However, it is important to note that when we talk about the bandwidth consumption of different kinds of applications, there is an element of time to consider — how long will this application be running? So, for example, you might send two kilobytes of E-mail over a link and it may roll out at the rate of one megabit. A 300-word, text-only E-mail can and will consume one megabit of bandwidth. The catch is that it generally lasts just a fraction of a second at this rate. So, how would you capacity plan for heavy sustained E-mail usage on your network?

When computing bandwidth rates for classification with a commercial bandwidth controller such as a NetEqualizer, the industry practice is to average the bandwidth consumption over several seconds, and then calculate the rate in units of kilobytes per second (Kbs).

For example, when a two-kilobyte file (a very small E-mail) is sent over a link in a fraction of a second, you could say that this E-mail consumed two megabits of bandwidth. For the capacity planner, this would be a little misleading since the duration of the transaction was so short. If you average this transaction over a couple of seconds, the transfer rate works out to just one Kbs, which for practical purposes is equivalent to zero.
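To make the averaging arithmetic concrete, here is a back-of-the-envelope sketch in Python (the 8-millisecond burst duration is an assumption chosen purely for illustration):

# Instantaneous vs. averaged rate for a two-kilobyte E-mail.
email_bytes = 2000            # a two-kilobyte text E-mail
burst_seconds = 0.008         # assume the link drains it in 8 milliseconds

instantaneous_mbps = email_bytes * 8 / burst_seconds / 1e6
averaged_kBps = email_bytes / 2.0 / 1000  # averaged over a 2-second window

print(instantaneous_mbps)     # 2.0 -> "this E-mail consumed two megabits"
print(averaged_kBps)          # 1.0 -> about one kilobyte per second

The point is not the exact numbers, but how dramatically the choice of averaging window changes the reported rate.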

E-mail with Picture Attachments

A normal text E-mail of a few thousand bytes can quickly become 10 megabits of data with a few picture attachments. Although it may not look all that big on your screen, this type of E-mail can suck up some serious bandwidth when being transmitted. In fact, left unmolested, this type of transfer will take as much bandwidth as is available in transit. On a T1 circuit, a 10-megabit E-mail attachment may bring the line to a standstill for as long as six seconds or more. If you were talking on a Skype call while somebody at the same time shot a picture E-mail to a friend, your Skype call would most likely break up for five seconds or so. It is for this reason that many network operators on shared networks deploy some form of bandwidth control or QoS, as most would agree an E-mail attachment should not take priority over a live phone call.

E-mail with PDF Attachment

As a rule, PDF files are not as large as picture attachments when it comes to E-mail traffic. An average PDF file runs in the range of 200 thousand bytes, whereas today’s higher-resolution digital cameras create pictures of a few million bytes, or roughly 10 times larger. On a T1 circuit, the average bandwidth of the PDF file over a few seconds will be around 100kbs, which leaves plenty of room for other activities. The exception would be the 20-page manual, which would crash your entire T1 for a few seconds, just as the large picture attachments referred to above would do.

Gaming/World of Warcraft

There are quite a few blogs that talk about how well World of Warcraft runs on DSL, cable, etc., but most are missing the point about this game and games in general and their actual bandwidth requirements. Most gamers know that ping times are important, but what exactly is the correlation between network speed and ping time?

The problem with just measuring speed is that most speed tests send a stream of packets from a server of some kind to your home computer, perhaps a 20-megabit test file. The test starts (and a timer is started) and the file is sent. When the last byte arrives, the timer is stopped. The amount of data sent over the elapsed seconds yields the speed of the link. So far so good, but a fast speed in this type of test does not mean you have a fast ping time. Here is why.

Most people know that if you are talking to an astronaut on the moon there is a delay of several seconds with each transmission. So, even though the speed of the link is the speed of light for practical purposes, the data arrives several seconds later. Well, the same is true for the Internet. The data may be arriving at a rate of 10 megabits, but the time it takes in transit could be as high as 1 second. Hence, your ping time (your mouse click to fire your gun) does not show up at the controlling server until a full second has elapsed. In a quick draw gun battle, this could be fatal.

So, what affects ping times?

The most common cause is a saturated network. This is when the transmission rates of all data on your Internet link exceed the link’s rated capacity. Some links, like a T1, just start dropping packets when full, as there is no orderly line for waiting packets. In many cases, data that arrives at your router when the link is full simply gets tossed. This would be like killing off the excess people waiting at a ticket window. Not very pleasant.

If your router is smart, it will try to buffer the excess packets, and they will arrive late. Also, if the only thing running on your network is World of Warcraft, you can actually get by with 120kbs in many cases, since the amount of data actually sent over the network is not that large. Again, the ping time is more important, and an unencumbered 120kbs link should have ping times faster than a human reflex.

There may also be some inherent delay in your Internet link beyond your control. For example, all satellite links, no matter how fast the data speed, have a minimum delay of around 300 milliseconds. Most urban operators do not need to use satellite links, but all links have some delay. Network delay will vary depending on the equipment your provider has in their network, how and where they connect up to other providers, and the number of hops your data will take. To test your current ping time, you can run a ping command from a standard Windows machine.
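For example, from a Windows command prompt (the host below is just an illustration; any reachable server will do):

ping www.google.com

Each reply line includes a time= value in milliseconds; that round-trip figure is your ping time to that host.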

Citrix

Applications vary widely in the amount of bandwidth consumed. Most mission-critical applications using Citrix are fairly lightweight.

YouTube Video — Standard Video

A sustained YouTube video will consume about 500kbs on average over the video’s 10-minute duration. Most video players try to store the video locally as fast as they can take it. This is important to know because if you are sizing a T1 to be shared with voice phones, theoretically, if a user was watching a YouTube video, you would have about 1 megabit left over for the voice traffic. Right? Well, in reality, your video player will most likely take the full T1, or close to it, if it can, while buffering YouTube.

YouTube — HD Video

On average, YouTube HD consumes close to 1 megabit.

See these other YouTube articles for more specifics about YouTube consumption.

Netflix – Movies On Demand

Netflix is moving aggressively to a model where customers download movies over the Internet, versus having a DVD sent to them in the mail. A recent study showed that 20% of peak bandwidth usage in the U.S. is due to Netflix downloads. An average two-hour movie takes about 1.8 gigabits; if you want high-definition movies, then it’s about 3 gigabits for two hours. Other estimates are as high as 3-5 gigabits per movie.

On a T1 circuit, the average bandwidth of a high-definition Netflix movie (conservatively 3 gigabits over 2 hours) works out to around 400kbs, which consumes more than 25% of the total circuit.
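As a rough sanity check on those figures, here is the arithmetic in Python:

# Average rate of a conservatively sized high-definition Netflix movie.
movie_bits = 3e9                 # 3 gigabits for a two-hour HD movie
duration_seconds = 2 * 3600
t1_kbps = 1544                   # rated capacity of a T1 circuit

average_kbps = movie_bits / duration_seconds / 1000
print(round(average_kbps))                   # ~417 kbs
print(round(average_kbps / t1_kbps * 100))   # ~27 -> more than 25% of a T1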

Skype/VoIP Calls

The amount of bandwidth you need to plan for in a VoIP network is a hot topic. The bottom line is that VoIP calls range from 8kbs to 64kbs. Normally, the higher the quality of the transmission, the higher the bit rate. For example, at 64kbs you can transmit with the quality that one might experience on an older-style AM radio. At 8kbs, you can understand a voice if the speaker enunciates their words clearly. However, it is not likely you could understand somebody speaking quickly or slurring their words slightly.

Real-Time Music, Streaming Audio and Internet Radio

Streaming audio ranges from about 64kbs to 128kbs for higher fidelity.

File Transfer Protocol (FTP)/Microsoft Servicepack Downloads

Updates such as Microsoft service packs use file transfer protocol. Generally, this protocol will use as much bandwidth as it can find. There are several limiting factors for the actual speed an FTP transfer will attain, though; as the sketch after this list illustrates, the achieved rate is bounded by the slowest of them.

  1. The speed of your link — If the factors below (2 and 3) do not come into effect, an FTP transfer will take your entire link and crowd out VoIP calls and video.
  2. The speed of the sender’s server — There is no guarantee that the sending server is able to deliver data at the speed of your high-speed link. Back in the days of dial-up 28.8kbs modems, this was never a factor. But, with some home Internet links approaching 10 megabits, don’t be surprised if the sending server cannot keep up. During peak times, the sending server may be processing many requests at once, and hence, even though it’s coming from a commercial site, it could actually be slower than your home network.
  3. The speed of the local receiving machine — Yes, even the computer you are receiving the file on has an upper limit. If you are on a high-speed university network, the line speed of the network can easily exceed your computer’s ability to take in data.
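A minimal sketch of that bound, with hypothetical numbers in megabits per second:

# The effective FTP transfer rate is the minimum of the three bottlenecks.
link_speed = 10.0        # your Internet link
server_speed = 3.0       # what a busy sending server can deliver
receiver_speed = 50.0    # what your computer can absorb

effective_rate = min(link_speed, server_speed, receiver_speed)
print(effective_rate)    # 3.0 -> the overloaded server is the bottleneck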

While every network will ultimately be different, this field guide should provide you with an idea of the bandwidth demands your network will experience. After all, it’s much better to plan ahead than to risk a bandwidth overload that causes your entire network to come to a halt.

Related article: a must-read for anybody upgrading their Internet pipe is our article on Contention Ratios.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Other products that classify bandwidth

Simple Is Better with Bandwidth Monitoring and Traffic Shaping Equipment


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. However, the question a typical CIO will want answered before approving any purchase is, “What is the return on investment for your equipment purchase?” Putting a hard and fast number on bandwidth optimization equipment may seem straightforward. If you can quantify the cost of your bandwidth and project an approximate reduction in usage or increase in throughput, you can crunch the numbers. But is that all you should consider when determining how much you should spend on a bandwidth optimization device?

The traditional way of looking at the cost of monitoring your Internet has two dimensions: first, the fixed cost of the monitoring tool used to identify traffic, and second, the labor associated with devising and implementing the remedy. In an ironic inverse correlation, we assert that your ROI will degrade with the complexity of the monitoring tool.

Obviously, the more detailed the reporting/shaping tool, the more expensive its initial price tag. Yet, the real kicker comes with part two. The more detailed data output generally leads to an increase in the time an administrator is likely to spend making adjustments and looking for optimal performance.

But, is it really fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it wouldn’t make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. But, typically, the more information an admin has about a network, the more inclined he or she might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that when the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. However, in reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network adjusting can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s. The function of a computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users who will be well above the mean. You don’t need a fancy tool to see what they are doing. Abuse becomes obvious just by looking at the usage (a simple report).

However, there is also the personal control factor, which often does not follow clear lines of ROI.

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into, for example, a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

List of monitoring tools compiled by Stanford

ROI tool: determine how much a bandwidth control device can save.

Great article on choosing a bandwidth controller

Planetmy
Linux Tips
How to set up a monitor for free

Good enough is better: a lesson from the Digital Camera Revolution

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

The Promise of Streaming Video: An Unfunded Mandate


By Art Reisman, CTO, www.netequalizer.com

Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, Universities, Libraries, Mining Camps, and any organization where groups of users must share their Internet resources equitably. What follows is an objective educational journey on how consumers and ISPs can live in harmony with the explosion of YouTube video.

The following is written primarily for the benefit of mid-to-small sized Internet service providers (ISPs). However, home consumers may also find the details interesting. Please follow along as I break down the business cost model of keeping up with growing video demand.

In the past few weeks, two factors have come up in conversations with our customers, which have encouraged me to investigate this subject further and outline the challenges here:

1) Many of our ISP customers are struggling to offer video at competitive levels during the day, and yet are being squeezed by high bandwidth costs. Many look to the NetEqualizer to alleviate video congestion problems. As you know, there are always trade-offs to be made in handling any congestion issue, which I will discuss at the end of this article. But back to the subject at hand. What I am seeing from customers is an underlying fear that they (IT administrators) are behind the curve. As I have an opinion on this, I decided to lay out what is “normal” in terms of contention ratios for video, as well as what is “practical” for video in today’s world.

2) My Internet service provider, a major player that heavily advertises how fast their speed is to the home, periodically slows down standard YouTube videos. I should be fair with my accusation; with the Internet you can never be quite certain who is at fault. Whether I am being throttled or not, the point is that there is an ever-growing number of video content providers who are pushing ahead with plans that do not take into account, nor care about, a last-mile provider’s ability to handle the increased load. A good analogy would be a travel agency that books tourists onto a cruise ship without keeping a tally of tickets sold, nor caring, for that matter. When all those tourists show up to board the ship, some form of chaos will ensue (and some will not be able to get on the ship at all).

Some ISPs are also adding to this issue by building out infrastructure without regard to content demand and hoping for the best. They are in a tight spot, caught up in a challenging balancing act between customers, profit, and their ability to actually deliver video at peak times.

The Business Cost Model of an ISP trying to accommodate video demands

Almost all ISPs rely on the fact that not all customers will pull their full allotment of bandwidth all the time. Hence, they can map out an appropriate subscriber ratio for their network, and also advertise bandwidth rates that are sufficient to handle video. There are four main governing factors on how fast an actual consumer circuit will be:

1) The physical speed of the medium to the customer’s front door (this is often the speed cited by the ISP)
2) The combined load of all customers sharing their local circuit and the local circuit’s capacity (subscriber ratio factors in here)
3) How much bandwidth the ISP contracts out to the Internet (from the ISP’s provider)

4) The speed at which the source of the content can be served (YouTube’s servers). We’ll assume this is not a source of contention for our examples below, but it certainly should remain a suspect in any finger-pointing over a slow circuit.

The actual limit to the amount of bandwidth a customer gets at one time, which dictates whether they can run live streaming video, usually depends on how oversold their ISP is (based on the “subscriber ratio” mentioned in points 1 and 2 above). If your ISP can predict the peak loads of their entire circuit correctly, and purchase enough bulk bandwidth to meet that demand (point 3 above), then customers should be able to run live streaming video without interruption.

The problem arises when providers put together a static set of assumptions that break down as consumer appetite for video grows faster than expected. The numbers below typify the trade-offs a mid-sized provider plays with in order to make a profit, while still providing enough bandwidth to meet customer expectations.

1) In major metropolitan areas, as of 2010, bandwidth can be purchased in bulk for about $3000 per 50 megabits. Some localities are less, some more.

2) ISPs must cover an amortized fixed cost per customer: billing, sales staff, support staff, customer premise equipment, interest on investment, and licensing, which comes out to about $35 per month per customer.

3) We assume market competition fixes price at about $45 per month per customer for a residential Internet customer.

4) This leaves $10 per month for profit margin and bandwidth fees.  We assume an even split: $5 a month per customer for profit, and $5 per month per customer to cover bandwidth fees.

With 50 megabits at $3000 and each customer contributing $5 per month, this dictates that you must share the 50-megabit pipe amongst 600 customers to be viable as a business. This is the governing factor on how much bandwidth is available to all customers for all uses, including video.
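The arithmetic behind that 600-customer figure, restated as a small Python sketch (all figures are the per-month estimates from the list above):

# The ISP business model arithmetic from points 1-4 above.
bulk_cost = 3000.0             # dollars per month for 50 megabits
link_mbps = 50
revenue_per_customer = 45.0
fixed_cost_per_customer = 35.0
profit_per_customer = 5.0

bandwidth_budget = revenue_per_customer - fixed_cost_per_customer - profit_per_customer
customers_needed = bulk_cost / bandwidth_budget
print(customers_needed)                      # 600 customers share the pipe
print(link_mbps / customers_needed * 1000)   # ~83 kbs of link per customer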

So how many simultaneous YouTube Videos can be supported given the scenario above?

Live streaming YouTube video needs on average about 750kbs, or about 3/4 of a megabit, in order to run without breaking up.

On a 50 megabit shared link provided by an ISP, in theory you could support about 70 simultaneous YouTube sessions, assuming nothing else is running on the network.  In the real world there would always be background traffic other than YouTube.

In reality, you are always going to have a minimum fixed load of Internet usage from 600 customers of approximately 10-to-20 megabits. The 10-to-20 megabit load supports everything else, like web surfing, downloads, Skype calls, etc. So realistically you can support about 40 YouTube sessions at one time. What this implies is that if 10 percent of your customers (60 customers) start to watch YouTube at the same time, you will need more bandwidth, or else you are going to get some complaints. ISPs that desperately want to support video must count on no more than about 40 simultaneous videos running at one time, or a little less than 10 percent of their customers.

Based on the scenario above, if 40 customers simultaneously run YouTube, the link will be exhausted and all 600 customers will be wishing they had their dial-up back. At last check, YouTube traffic accounted for 10 percent of all Internet traffic. If left completely unregulated, a typical rural ISP could find itself on the brink of saturation from normal YouTube usage already. Tier-1 providers in major metro areas usually have more bandwidth, but with that comes higher expectations of service, and hence some saturation is inevitable.
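The video math above, worked out explicitly (the 20-megabit base load is the high end of the 10-to-20 megabit range assumed earlier):

# Simultaneous YouTube streams a 50-megabit pipe can carry.
link_mbps = 50.0
video_mbps = 0.75           # ~750 kbs per live YouTube stream
background_mbps = 20.0      # assumed base load from everything else

theoretical = link_mbps / video_mbps
realistic = (link_mbps - background_mbps) / video_mbps
print(int(theoretical))     # 66 streams with nothing else running
print(int(realistic))       # 40 streams once the base load is subtracted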

This is why we believe that Video is currently an “unfunded mandate”.  Based on a reasonable business cost model, as we have put forth above, an ISP cannot afford to size their network to have even 10% of their customers running real-time streaming video at the same time.  Obviously, as bandwidth costs decrease, this will help the economic model somewhat.

However, if you still want to tune for video on your network, consider the options below…

NetEqualizer and Trade-offs to allow video

If you are not a current NetEqualizer user, please feel free to call our engineering team for more background.  Here is my short answer on “how to allow video on your network” for current NetEqualizer users:

1) You can determine the IP address ranges for popular sites and give them priority by setting up a “priority host”.
This is not recommended for customers with 50 megabits or less, as generally this may push you over into a gridlock situation.

2) You can raise your HOGMIN to 50,000 bytes per second.
This will generally let in the lower-resolution video sites. However, they may still incur penalties should they start buffering at a rate higher than 50,000 bytes per second. Again, we would not recommend this change for customers with pipes of 50 megabits or less.

With either of the above changes you run the risk of crowding out web surfing and other interactive uses, as we have described above. You can only balance so much video before you run out of room. Please remember that the default settings on the NetEqualizer are designed to slow video before the entire network comes to a halt.

For more information, you can refer to another of Art’s articles on the subject of Video and the Internet:  How much YouTube can the Internet Handle?

Other blog posts about ISPs blocking YouTube

Net Neutrality Enforcement and Debate: Will It Ever Be Settled?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all Deep Packet Inspection technology from their NetEqualizer product over 2 years ago.

As the debate over net neutrality continues, we often forget what an ISP actually is and why they exist. ISPs in this country are for-profit private companies made up of stockholders and investors who took on risk (without government backing) to build networks in hopes of making a profit. To make a profit, they must balance users’ expectations for performance against the costs of implementing a network.

The reason bandwidth control is used in the first place is the standard capacity problem: nobody can afford the infrastructure investment to build a network that meets peak demands at all times. Would you build a house with 10 bedrooms if you were only expecting one or two kids sometime in the future? ISPs build networks to handle an average load, and when peak loads come along, they must do some mitigation. You can argue until you are green that they should have built their networks with more foresight, but the fact is demand for bandwidth will always outstrip supply.

So, where did the net neutrality debate get its start?
Unfortunately, in many Internet providers’ first attempts to remedy the overload issue on their networks, the layer-7 techniques they used opened a Pandora’s box of controversy that may never be settled.

When the subject of net neutrality started heating up around 2007 and 2008, the complaints from consumers revolved around ISP practices of looking inside customers’ transmitted data and blocking or redirecting traffic based on content. There were all sorts of rationalizations for this practice, and I’ll be the first to admit that it was not done with intended malice. However, the methodology was abhorrent.

I likened this practice to the phone company listening in on your phone calls and deciding which calls to drop to keep their lines clear. Or, if you want to take it a step farther, the postal service making a decision to toss your junk mail based on their own private criteria. Legally, I see no difference between looking inside mail and looking inside Internet traffic. It all seems to cross a line. When referring to net neutrality, the bloggers of this era were originally concerned with this sort of spying and playing God with what type of data can be transmitted.

To remedy this situation, Comcast and others adopted methods that relegated Internet usage based on patterns of usage and not content. At the time, we were happy to applaud them and claim that the problem of spying on data had been averted. I pretty much turned my attention away from the debate at that time, but I recently started looking back at the debate and, wow, what a difference a couple of years make.

So, where are we headed?
I am not sure what his sources are, but Rush Limbaugh claims that net neutrality is going to become a new fairness doctrine. To summarize, the FCC or some government body would start to use its authority to ensure equal access to content from search engine companies. For example, making sure that minority points of view on subjects got top billing in search results. This is a bit scary, although perhaps a bit alarmist, but it would not surprise me since, once in government control, anything is possible. Yes, I realize conservative talk radio hosts like to elicit emotional reactions, but usually there is some truth to back up their claims.

Other intelligent points of view:

The CRTC (the Canadian FCC) seems to have a head on its shoulders. It has stated that ISPs must disclose their practices, but it is not attempting to regulate how, in some form of overreaching doctrine. Although I am not in favor of government institutions, if they must exist, then the CRTC stance seems like a sane and appropriate request with regard to regulating ISPs.

Freedom to Tinker

What Is Deep Packet Inspection and Why All the Controversy?

APconnections Announces New API for Customizing Bandwidth User Quotas


APconnections is proud to announce the release of its NetEqualizer User-Quota API (NUQ API) programmer’s toolkit. This new toolkit will allow NetEqualizer users to generate custom configurations to better handle bandwidth quotas* as well as keep customers informed of their individual bandwidth usage.

The NetEqualizer User-Quota API (NUQ API) programmer’s toolkit features include:

  1. Tracking user data by IP and MAC address (MAC address tracking will be out in the second release)
  2. Specifying quotas and bandwidth limits by IP or a subnet block
  3. Monitoring real-time bandwidth utilization at any time
  4. Setting up a notification alarm when a user exceeds a bandwidth limit
  5. Utilizing an API programming interface

In addition to providing the option to create separate bandwidth quotas for individual customers and reduce a customer’s Internet pipe when they have reached their individual set limit, the toolkit lets customers themselves be notified when a limit is reached, and even gives them access to an interface to monitor current monthly usage so they are not surprised when they reach their limit.

Overall, the NUQ API will provide a quick and easy tool to customize your business processes.

If you do not currently have the resources to use the NUQ API and customize it to fit your business, please contact us and we can arrange for one of our consulting partners to put together an estimate for you. Or, if you just have a few questions, we’d be happy to put together a reasonable support contract. (Support for the API programs is not included in our standard software support (NSS).)

*Bandwidth quotas are used by ISPs as a means to meter total bandwidth downloaded over a period of time. Although not always disclosed, most ISPs reserve the right to limit service for users that continually download data. Some providers use the threat of quotas as a deterrent to keep overall traffic on an Internet link down.

See how bandwidth hogs are being treated in Asia

NetEqualizer Programmers Toolkit for Developing Quota-Based Usage Rules (NUQ API)


Author’s Notes:

December 2012 update: As of Software Update 6.0, we have incorporated the Professional Quota API into our new 6.0 GUI, which is documented in our full User Guide. The “Professional Quota API User Guide” is now deprecated.

Due to the popularity of user quotas, we built a GUI to implement the quota commands. We recommend using the 6.0 GUI to configure user quotas; it incorporates all the commands listed below and does NOT require basic programming skills to use.


July 2012 update: As of Software Update 5.8, we now offer the Professional Quota API, which provides a GUI front-end to the NUQ-API.  Enclosed is a link to the Professional Quota API User Guide (PDF), which walks you through how to use the new GUI toolset.

Professional Quota API Guide

If you prefer to use the native commands (NUQ API) instead of the new GUI, OR if you are using a Software Update prior to 5.8 (< 5.8), please follow the instructions below. If you are current on NSS, we recommend upgrading to 5.8 to use the new Professional Quota API GUI. If you are not current on NSS, you can call 303.997.1300 ext. 5 or email admin@apconnections.net to get current.


The following article serves as the programmer’s toolkit for the new NetEqualizer User-Quota API (NUQ API). Other industry terms for this process include bandwidth allotment and usage-based service. The NUQ API toolkit is available with NetEqualizer release 4.5 and above and a current software subscription license (NSS).

Note: NetEqualizer is a commercial-grade, Linux-based, in-line bandwidth shaper. If you are looking for something Windows-based, try these.

Background

Prior to this release, we provided a GUI-based user limit tool, but it was discontinued with release 4.0. The GUI tool did not have the flexibility for application development and was inadequate for customizations. The NetEqualizer User-Quota API (NUQ API) programmer’s toolkit is our replacement for the GUI tool. The motivation for developing the toolkit was to allow ISPs, satellite providers, and other Internet management companies to customize their business processes around user limits. The NUQ API is a quick and easy way to string together a program of actions in unique ways to meet your needs. However, it does require basic programming/Linux skills.

Terms of Use

APconnections, the maker of the NetEqualizer, is an OEM manufacturer of a bandwidth shaping appliance.  The toolkit below provides short examples of how to use the NUQ API to get you started developing a system to enforce quota bandwidth limits for your customers. You are free to copy/paste and use our sample programs in the programmer’s toolkit to your liking.  However, questions and support are not covered in the normal setup of the NetEqualizer product (NSS) and must be negotiated separately.  Please call 303.997.1300 x103 or email sales@apconnections.net to set up a support contract for the NUQ API programmer’s toolkit.

Once you have upgraded to version 4.5 and have purchased a current NSS, please contact APconnections for installation instructions. Once installed, you can find the tools in the directory /art/quota.

Step 1: Start the Quota Server

In order to use the NUQ API programmer’s toolkit, you must have the main quota server running.  To start the quota server from the Linux command line, you can type:

# /art/quota/quota &

Once the quota main process is running, you can make requests using the command line API.

The following API commands are available:

quota_create

Usage:

quota_create 102.20.20.2/24

Will cause the NetEqualizer to start tracking data for a block (subnet) of IP addresses in the range 102.20.20.0 through 102.20.20.255.

_________________________________________________________________________________________________________

quota_remove

Usage:

/art/quota/quota_remove 102.20.20.2/24

Will remove a block of IP addresses from the quota system.

Note: You must use the exact same IP address and mask to remove a block as was used to create the block.

_________________________________________________________________________________________________________

quota_set_alarm

Usage:

/art/quota/quota_set_alarm 102.20.20.2/17 <down limit>  <up limit>

Will set an alarm when an IP address reaches a defined limit.

Alarm notifications will be reported in the log /tmp/quotalog. See the sample programs below for usage.

Note: All IPs in the subnet range will get flagged when/if they reach the defined limit. The limits are in bytes transferred.

_________________________________________________________________________________________________________

quota_remove_alarm

Usage:

/art/quota/quota_remove_alarm 102.20.20.2/17

Will remove all alarms in effect on the specified subnet.

Note: The subnet specification must match exactly the format used when the alarm was created — same exact IP address and same exact mask.

_________________________________________________________________________________________________________

quota_reset_ip

Usage:

/art/quota/quota_reset_ip 102.20.20.2/17

Will reset the usage counters for the specified subnet range.

_________________________________________________________________________________________________________

quota_status_ip

Usage:

/art/quota/quota_status_ip 102.20.20.2/24

Will show the current usage byte count for the specified IPs in the range on the console. The usage counters must be initiated with the quota_create command.

It will also write usage statistics to the default log /tmp/quotalog.

_________________________________________________________________________________________________________

quota_rules

Usage:

/art/quota/quota_rules

Will display all current rules in effect.

_________________________________________________________________________________________________________

ADD_CONFIG

Usage:

/art/ADD_CONFIG HARD <ip> <down> <up> <subnet mask> <burst factor>

Used to set rate limits on IPs, which would be the normal response should a user exceed their quota.

Parameter definitions:

HARD                     Constant that specifies the type of operation.  In this case HARD indicates “hard limit”.

<ip>                        The IP address in format x.x.x.x

<down>                 The specified max download (inbound) transfer speed for this IP in BYTES per second; this is not kbs.

<up>                       The specified max upload (outbound) transfer speed for this IP in BYTES per second.

<subnet mask>   Specifies the subnet mask for the IP address.  For example, 24 would be the same as x.x.x.x/24 notation. However, for this command the mask is specified as a separate parameter.

<burst factor> The last field in the command specifies the burst factor. Set this field to 1 (no bursting) or to a multiple greater than 1 (bursting). BURST FACTOR is multiplied by the <down> and <up> HARD LIMITs to arrive at the BURST LIMIT (the default speed you wish to burst up to). For example: a 2Mbps <down> HARD LIMIT x 4 BURST FACTOR = 8Mbps <down> BURST LIMIT.
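As a concrete illustration, the following hypothetical invocation (the IP and rate values are made up for the example) would cap the single host 10.20.20.2 at 250,000 bytes per second down and 125,000 bytes per second up, with a burst factor of 2:

/art/ADD_CONFIG HARD 10.20.20.2 250000 125000 32 2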

_________________________________________________________________________________________________________

REMOVE_CONFIG

Usage:

/art/REMOVE_CONFIG HARD x.x.x.x

Where x.x.x.x is the base IP used in the ADD_CONFIG HARD command; no other parameters are necessary when removing the rule.

_________________________________________________________________________________________________________

To view the log:

Usage:

tail -f /tmp/quotalog

Various status messages will get reported along with ALARMs and usage statistics.

_________________________________________________________________________________________________________

Examples and Sample sessions (assumes Linux shell and Perl knowledge)

From the command line of a running NetEqualizer, first start the quota server:

root@neteq:/art/quota# /art/quota/quota &
[1] 29653
#

Then I issue a command to start tracking byte counts on the local subnet. For this example, I have some background network traffic running across the NetEqualizer.

root@neteq:/art/quota# ./quota_create 192.168.1.143/24
Created 192.168.1.143/24
root@neteq:/art/quota#

I have now told the quota server to start tracking bytes on the subnet 192.168.1.*

To see the current transferred byte count for an IP, you can use the quota_status_ip command:

root@neteq:/art/quota# ./quota_status_ip 192.168.1.143/24
Begin status for 192.168.1.143/24
status for 192.168.1.255
start time = Fri Apr 2 21:23:13 UTC 2010
current date time = Fri Apr 2 21:55:28 UTC 2010
Total bytes down = 65033
Total bytes up = 0
status for 192.168.1.119
start time = Fri Apr 2 21:54:50 UTC 2010
current date time = Fri Apr 2 21:55:28 UTC 2010
Total bytes down = 3234
Total bytes up = 4695
End of status for 192.168.1.143/24
root@neteq:/art/quota#

Yes, the output is a bit cryptic, but everything is there: for example, the start time and the current time since data collection started for each IP reporting in.

Now let’s say we wanted to do something useful when a byte count or quota was exceeded by a user.

First, we would set up an alarm.
root@neteq:/art/quota# ./quota_set_alarm 192.168.1.143/24 10000 10000
alarm block created for 192.168.1.143/24

We have now told the quota server to notify us when any IP in the range 192.168.1.* exceeds 10000 bytes up or 10000 bytes down.

Note: If an alarm is raised, the next alarm will occur at the next multiple of the original byte count. In the example above, we will get alarms at 10,000, 20,000, 30,000 and so forth for all IPs in the range. Obviously, in a commercial operation, you would want your quotas set much higher, in the gigabyte range.

Now that we have alarms set, how do we know when they happen and how can we take action?

Just for fun, we wrote a little Perl script to take action when an alarm occurs. So, first here’s the Perl script code and then an example of how to use it.

root@neteq:/art# cat test
#!/usr/bin/perl
# Read the quota log line by line from standard input.
while (1)
{
    $line = readline(*STDIN);
    print $line;                 # echo each log line as it arrives
    chomp($line);
    @words = split(" ", $line);  # break the line into whitespace-separated words
    if ($words[0] eq "ALARM")    # act only on ALARM notifications
    {
        print "send an email to somebody important here \n";
    }
}

First, save the perl script off to a file. In our example, we save it to a file /art/test

Next, we will monitor /tmp/quotalog for new alarms as they occur, and when we find one we will print the message “send an email to somebody important here”. To actually send an email, you would need to set up an email server and call a command-line SMTP client with your message; we did not go that far here.

Here is how we use the test script to monitor the quotalog (where ALARM messages get reported):

root@neteq:/art# tail -f /tmp/quotalog | ./test

Log Reset
ALARM 192.168.1.119 has exceeded up byte count of 160000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded down byte count of 190000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded up byte count of 170000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded down byte count of 200000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded up byte count of 180000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded down byte count of 210000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded up byte count of 190000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded down byte count of 220000
send an email to somebody important here

Now, what if we just want to see what rules are in effect? Here is a sequence where we create a couple of rules and show how you can check their status. Note the subtle difference between the commands quota_rules and quota_status_ip: quota_status_ip shows IPs that are part of a rule and are actively counting bytes, and a rule does not become active (show up in status) until bytes are actually transferred.

root@neteq:/art/quota# ./quota_create 192.168.13.143/24
Created 192.168.13.143/24
root@neteq:/art/quota# ./quota_rules
Active Quotas —————
192.168.13.143/24
Active Alarms —————-
root@neteq:/art/quota# ./quota_set_alarm 192.168.11.143/24 20000 20000
alarm block created for 192.168.11.143/24
root@neteq:/art/quota# ./quota_rules
Active Quotas —————
192.168.13.143/24
Active Alarms —————-
192.168.11.0/24
root@neteq:/art/quota#

That concludes the NetEqualizer User-Quota API (NUQ API) programmer’s toolkit for now. We will be adding more examples and features in the near future. Please feel free to e-mail us at support@apconnections.net with feature requests and bug reports for this tool.

Note: You must have a current NSS to receive the toolkit software. It is not enabled with the default system.

Related Opinion Article on the effectiveness of Quotas

Bandwidth Quota Prophecy plays out at Comcast.


A couple of years ago we pointed out how implementing a metered usage policy could create additional overhead.  Here is an excerpt:

To date, it has not been a good idea to flaunt a quota policy and many ISPs do their best to keep it under the radar. In addition, enforcing and demonstrating a quota-based system to customers will add overhead costs and also create more customer calls and complaints. It will require more sophistication in billing and the ability for customers to view their accounts in real time. Some consumers will demand this, and rightly so.

Today, two years after Comcast started a fair-use policy based on quotas, they announced a new tool that allows customers to see their usage and gives them a warning before being cut off. I suspect the new tool is designed to alleviate the issues we mentioned in the paragraph above.

NetEqualizer customers can usually accomplish bandwidth reductions fairly without the complexity of quota systems, but in a pinch we also have a quota system on our equipment.

How does your ISP actually enforce your Internet speed?


By Art Reisman, CTO, www.netequalizer.com


Have you ever wondered how your ISP manages to control the speed of your connection? If so, you might find the following article enlightening. Below, we’ll discuss the various techniques used to enforce bandwidth rate limits and the associated side effects of using those techniques.

Dropping Packets (Cisco term “traffic policing”)

One of the simplest methods for a bandwidth controller to enforce a rate cap is by dropping packets. When using the packet-dropping method, the bandwidth controlling device will count the total number of bytes that cross a link during a second. If the target rate is exceeded during any single second, the bandwidth controller will drop packets for the remainder of that second. For example, if the bandwidth limit is 1 megabit, and the bandwidth controller counts 1 million bits gone by in half a second, it will drop packets for the remainder of the second. The counter will then reset for the next second. From the evidence we have observed, many ISPs enforce rate caps using the drop-packet method, as it is the least expensive method supported on most basic routers.
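For illustration only, here is a minimal sketch in Python of the per-second policing logic described above (this is not NetEqualizer or Cisco code; the packet sizes and cap are made up):

# A per-second policer: count bits inside each one-second window and
# drop whatever arrives after the cap for that window is reached.
def police(packets, limit_bits=1_000_000):
    """packets: iterable of (arrival_time_in_seconds, size_in_bits)."""
    window, used = -1, 0
    for t, bits in packets:
        if int(t) != window:          # a new one-second window has started
            window, used = int(t), 0  # reset the counter
        if used + bits > limit_bits:
            continue                  # over the cap: drop for the rest of the second
        used += bits
        yield t, bits                 # under the cap: forward the packet

# 1.5 megabits arriving in the first half second against a 1-megabit cap:
sent = list(police([(i * 0.005, 15_000) for i in range(100)]))
print(len(sent))                      # 66 packets forwarded, the rest dropped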

So, what is wrong with dropping packets to enforce a bandwidth cap?

Well, when a link hits a rate cap and packets are dropped en masse, it can wreak havoc on a network. For example, the standard reaction of a Web browser when it perceives that web traffic is getting lost is to re-transmit the lost data. For a better understanding of dropping packets, let’s use the analogy of a McDonald’s fast food restaurant.

Suppose the manager of the restaurant was told his bonus was based on making sure there was never a line at the cash register. So, whenever somebody showed up to order food when all registers were occupied, the manager would open a trap door, conveniently ejecting the customer back out into the parking lot. The customer, being extremely hungry, will come running back in the door (unless of course they die of starvation or get hit by a car) only to be ejected again. To make matters worse, let’s suppose a busload of school kids arrives. As the kids file into the McDonald’s, the remaining ones on the bus have no idea their classmates inside are getting ejected, so they keep streaming into the McDonald’s. Hopefully, you get the idea.

Well, when bandwidth shapers deploy packet-dropping technology to enforce a rate cap, you can get the same result seen in the trap door analogy at the McDonald’s. Web browsers and other user-based applications will beat their heads into the wall when they don’t get responses from their counterparts on the other end of the line. When packets are being dropped en masse, the network tends to spiral out of control until all the applications essentially give up. Perhaps you have seen this behavior while staying at a hotel with an underpowered Internet link: your connectivity alternates between working and hanging up completely for a minute or so during busy hours. This can obviously be very maddening.

The solution to shaping bandwidth on a network without causing gridlock is to implement queuing.

Queuing Packets (Cisco term “traffic shaping”)

Queuing is the art of putting something in a line and making it wait before continuing on. Obviously, this is what fast food restaurants actually do. They plan for enough staff to handle the average traffic throughout the day, and then queue up their customers when they arrive faster than orders can be filled. The assumption with this model is that at some point during the day the McDonald’s will catch up with the number of arriving customers and the lines will shrink away.

Another benefit of queuing is that wait times can be estimated by customers as they drive by and see the long line extending out into the parking lot; thus, they will save their energy and not attempt to go inside.

But, what happens in the world of the Internet?

With queuing methods implemented, a bandwidth controller looks at the data rate of the incoming packets, and if it is deemed too fast, it will delay the packets in a queue. The packets will eventually get to their destination, albeit somewhat later than expected. Packets on a queue can pile up very quickly, and without some help, the link would saturate. The computer memory that stores the queued packets would also saturate and, much like the scenario mentioned above, packets would eventually get dropped if they continued to come in faster than they were sent out.
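To contrast with the policer shown earlier, here is an equally simplified Python sketch of shaping: each packet is delayed just long enough that the outgoing rate never exceeds the cap (an illustration of the idea, not the NetEqualizer algorithm):

# A shaper: delay each packet until the link has drained the previous ones.
def shape(packets, rate_bits=1_000_000):
    """Yield (departure_time, queuing_delay) for each (arrival, bits) packet."""
    link_free_at = 0.0
    for t, bits in packets:
        depart = max(t, link_free_at)      # wait in the queue if the link is busy
        link_free_at = depart + bits / rate_bits
        yield depart, depart - t

for depart, delay in shape([(i * 0.005, 15_000) for i in range(5)]):
    print(round(depart, 3), round(delay, 3))  # delays grow as the queue builds

Nothing is dropped; the packets simply leave later, and it is this added delay that a well-behaved TCP sender notices, as described next.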

TCP to the Rescue (keeping queuing under control)

Most Internet applications use a protocol called TCP (Transmission Control Protocol) to handle their data transfers. TCP has built-in intelligence to figure out the speed of the link it is sending data on, and then make adjustments. When the NetEqualizer bandwidth controller queues a packet or two, the TCP stacks on the customer end-point computers will sense the slower packets and back off the speed of the transfer. With just a little bit of queuing, the sender slows down a bit and dropping packets can be kept to a minimum.

Queuing Inside the NetEqualizer

The NetEqualizer bandwidth shaper uses a combination of queuing and dropping packets to get speed under control. Queuing is the first option, but when a sender does not back off eventually, their packets will get dropped. For the most part, this combination of queuing and dropping works well.

So far we have been describing a simple case of a single sender and a single queue, but what happens if you have a gigabit link with 10,000 users and you want to break off 100 megabits to be shared by 3,000 users? How would a bandwidth shaper accomplish this? This is another area where a well-designed bandwidth controller like the NetEqualizer separates itself from the crowd.

In order to provide smooth shaping for a large group of users sharing a link, the NetEqualizer does several things in combination.

  1. It keeps track of all streams, and based on their individual speeds, the NetEqualizer will use different queue delays on each stream.
  2. Streams that back off will get minimal queuing.
  3. Streams that do not back off may eventually have some of their packets dropped.

The net effect of the NetEqualizer queuing intelligence is that all users will experience steady response times and smooth service.

Notes About UDP and Rate Limits

Some applications, such as video, do not use TCP to send data. Instead, they use a “send-and-forget” mechanism called UDP, which has no built-in back-off mechanism. Without some higher intelligence, UDP packets will continue to be sent at a fixed rate, even if the packets are coming too quickly for the receiver. The good news is that most UDP applications also have some way of measuring whether their packets are getting to their destination; it’s just that with UDP, the mechanism of synchronization is not standardized.

Finally, there are those applications that just don’t care if the packets get to their destination. Speed tests and viruses send UDP packets as fast as they can, regardless of whether the network can handle them or not. The only way to enforce a rate cap with such ill-mannered applications is to drop the packets.

Hopefully this primer has given you a good introduction to the mechanisms used to enforce Internet speeds, namely dropping packets and queuing. And maybe you will think about this the next time you visit a fast food restaurant during their busy time…

NetEqualizer provides Net Neutrality solution for bandwidth control.


By Eli Riles, NetEqualizer VP of Sales

This morning I read an article on how some start-up companies are being hurt awaiting the FCC’s decision on net neutrality.

Late in the day, a customer called and exclaimed, “Wow, now with the FCC coming down hard on technologies that jeopardize net neutrality, your business must be booming, since you offer an excellent, viable alternative.” And yet, in the face of this controversy, several of our competitors continue to sell deep packet inspection devices to customers.

Public operators and businesses that continue to purchase such technology are likely uninformed about the growing firestorm of opposition against deep packet inspection techniques. The allure of being able to identify and control Internet traffic by type is a very natural solution, which customers often demand. Suppliers who sell DPI devices are just doing what their customers have asked. As with all technologies, once the train leaves the station it is hard to turn around. What is different in the case of DPI is that suppliers and ISPs had their way with an ignorant public starting in the late 90’s. Nobody really gave much thought as to how DPI might be the villain in the controversy over net neutrality. It was just assumed that nobody would notice their Internet traffic being watched and redirected by routing devices. With behemoths such as Google having a vested interest in keeping traffic flowing without interference on the Internet, commercial deep packet inspection solutions are slowly falling out of favor in the ISP sector. The bigger question for the players betting the house on DPI is: will it fall out of favor in other business verticals?

Our decision to do away with DPI two years ago is looking quite brilliant now, although at the time it was clearly a risk to buck market trends. Today, even in the face of a worldwide recession, our profits and unit sales are up for the first three quarters of 2009.

As we have claimed in previous articles, there is a time and a place for deep packet inspection; however, any provider using DPI to manipulate data is looking for a potential dogfight with the FCC.

NetEqualizer has been providing alternative bandwidth control options for ISPs, businesses, and schools of all sizes for seven years without violating any of the Net Neutrality sacred cows. If you have not heard about us, maybe now is a good time to pick up the phone. We have been on record touting our solution as fair and equitable for quite some time now.

When is it time to add more bandwidth to your network?


We recently received an e-mail from a customer regarding this question. Here is the basic dialogue, with our answer below.

It occurred to me today… pre-NetEqualizer, I’d know that it was time to upgrade our network bandwidth by watching the network traffic graphs. If there were periods of the day when the connection was maxed out, it was a good sign that more bandwidth was needed.

Now that our traffic is running through the NetEqualizer, with the threshold limit and then the slowing of user connections beyond that point, we’ll not see the graph max out any more, will we? And if we ever did see that, we’d be way past the point of needing more bandwidth, because it would mean that our link was so saturated that the NetEqualizer couldn’t slow down enough traffic fast enough to avoid that situation.

Answer: We actually do have systems that run very close to pegged (maxed out) for hours at a time without complaint. Generally, we would suggest waiting until users perceive normal-sized web pages and short e-mails as slow. The NetEqualizer does a very good job of allowing your network to run close to capacity without adverse side effects, so in essence it would be premature to add more bandwidth based solely on hitting peak usage.

Note: If you ask the sales rep for your local bandwidth provider whether you should purchase more bandwidth, they will almost always recommend adding more to solve almost any issue on your network. Your provider, whether it be Qwest, Comcast, Time Warner, or a host of other local providers, most likely has a business model where they grow profit by selling bandwidth; hence, their sales staff really is not incented to offer alternatives. Occasionally, when it is physically impossible to bring more bandwidth to your business, they will relent and offer a referral to a bandwidth optimization company.

How to Implement Network Access Control and Authentication


There are a number of basic ways an automated network access control (NAC) system can identify unauthorized users and keep them from accessing your network. However, there are pros and cons to using these different NAC methods.  This article will discuss both the basic network access control principles and the different trade-offs each brings to the table, as well as explore some additional NAC considerations. Geared toward the Internet service provider, hotel operator, library, or other public portal operator who provides Internet service and wishes to control access, this discussion will give you some insight into what method might be best for your network.

The NAC Strategies

MAC Address

MAC addresses are unique to every computer connected to the network, and thus many NAC systems use them to identify an individual customer and grant or deny access.

While they can be effective, there are limitations to using MAC addresses for network access control. For example, if a customer switches to a new computer, the system will not recognize them, as their MAC address will have changed. As a result, for mobile customer bases, MAC address authentication by itself is not viable.

Furthermore, on larger networks with centralized authentication, MAC addresses do not propagate beyond one network hop; hence, MAC address authentication can only be done on smaller networks (no hops across routers). A work-around for this limit would be to use a distributed set of authentication points local to each segment. This would involve multiple NAC devices, which raises complexity with regard to synchronization: your entire authentication database would need to be replicated on each NAC.

Finally, a common question when it comes to MAC addresses is whether or not they can be spoofed. In short, yes, they can, but it requires some sophistication, and it is unlikely that a user with the ability to spoof a MAC address would go through all the trouble just to avoid paying an access charge. That is not to say it won’t happen, but rather that the risk of losing revenue is not worth the cost of combating the determined, isolated user.

I mention this because some vendors will sell you features to combat spoofing, and most likely they are not worth the incremental cost. If your authentication is set up by MAC address, the spoofer would also have to know the MAC address of a paying user in order to get in. Since there is no real pattern to MAC addresses, guessing another customer’s MAC address would be nearly impossible without inside knowledge.
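
As a rough illustration of the mechanism (not any vendor's actual code), here is a minimal Python sketch that authorizes a client by looking up its MAC address in the local Linux neighbor (ARP) table. The allow-list is a hypothetical stand-in for a real customer database, and this only works on the local segment, for the reasons given above:

    import re
    import subprocess

    # Hypothetical allow-list of paying customers' MAC addresses.
    PAYING_MACS = {"00:11:22:33:44:55"}

    def mac_for_ip(ip: str) -> str | None:
        """Find a client's MAC in the local neighbor table. Works only on the
        same segment: MAC addresses do not survive a hop across a router."""
        out = subprocess.run(["ip", "neigh", "show", ip],
                             capture_output=True, text=True).stdout
        match = re.search(r"lladdr\s+([0-9a-f:]{17})", out)
        return match.group(1) if match else None

    def is_authorized(ip: str) -> bool:
        mac = mac_for_ip(ip)
        return mac is not None and mac.lower() in PAYING_MACS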

IP Address

IP addresses allow a bit more flexibility than MAC addresses because they remain visible across network segments separated by routers, so authentication can be done at a central location. Again, while this strategy can be effective, IP address authentication has the same issue as MAC addressing: it does not allow a customer to switch computers, thus requiring that the customer use the same computer each time they log in. In theory, a customer could change the IP address should they switch computers, but this would be far too much of an administrative headache to explain when operating a consumer-based network.

In addition, IP addresses are easy to spoof and relatively easy to guess should a user be trying to steal another user’s identity. But, should two users log on with the same IP address at the same time, the ruse can quickly be tracked down. So, while plausible, it is a risky thing to do.

User ID Combined with MAC Address or IP Address

This methodology solves the portability issue found when using MAC addresses and IP addresses by themselves. With this strategy, the user authenticates their session with a user ID and password and the NAC module records their IP or MAC address for the duration of the session.

For a mobile consumer base, this is really the only practical way to enforce network access control. However, there is a caveat with this method: the NAC controller must expire a user session when there is a lack of activity. You can’t expect users to always log out from their network connection, so the session server (NAC) must take an educated guess as to when they are done. The ramification is that users must then log back in again. This usually isn’t a major problem, but it can be a hassle for users.

The good news is the inactivity timer can be extended to hours or even days, and should a customer log in on a different computer while a previous session is still current, the NAC can sense this and terminate the old session automatically.
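
A minimal in-memory sketch of this session bookkeeping follows. The names and the four-hour timeout are hypothetical choices for this article; a production NAC would persist sessions and tie them into billing:

    import time

    INACTIVITY_TIMEOUT = 4 * 60 * 60    # expire a session after 4 idle hours

    class SessionTable:
        """Authenticated sessions keyed by user ID. Each session records the
        IP it was opened from and the time of its last observed activity."""

        def __init__(self):
            self.sessions = {}           # user_id -> (ip, last_seen)

        def login(self, user_id: str, ip: str) -> None:
            # A login from a new computer replaces any still-current session.
            self.sessions[user_id] = (ip, time.time())

        def touch(self, user_id: str, ip: str) -> None:
            # Call on each request seen from an authenticated user.
            self.sessions[user_id] = (ip, time.time())

        def is_active(self, user_id: str, ip: str) -> bool:
            entry = self.sessions.get(user_id)
            if entry is None:
                return False
            session_ip, last_seen = entry
            if time.time() - last_seen > INACTIVITY_TIMEOUT:
                del self.sessions[user_id]  # idle too long: force a fresh login
                return False
            return session_ip == ip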

The authentication method currently used with the NetEqualizer is based on IP address and user ID/password, since it was designed for ISPs serving a transient customer base.

Other Important Considerations

NAC and Billing Systems

Many NAC solutions also integrate billing services. Overlooking the potential complexity and ballooning costs of a billing system can cut into efficiency and profits for both customer and vendor. Our philosophy is that a flat rate and simple billing are best.

To name a few examples, different customers may want time-of-day billing; billing by day, hour, month, or year; automated refunds; billing by connection speed; billing by type of property (geographic location); or tax codes. It can obviously go from a simple idea to a complicated one in a hurry. While there’s nothing wrong with these requests, history has shown that once you get beyond a simple flat rate, the costs of maintaining a system that meets these varied demands can increase exponentially.

Another thing to look out for with billing is integration with a credit card processor. Back-end integration for credit card processing takes some time and energy to validate. For example, the most common credit card authentication system in the US, Authorize.net, does not work unless you also have a US bank account. You may be tempted to shop for your credit card billing processor based on fees, but if you plan on doing automated integration with a NAC system, it is best to make sure the credit card authorization company provides automated tools to integrate with your computer system, and that your consulting firm accounts for this integration work.

Redirection Requirements

You cannot purchase and install a NAC system without some network analysis. Most NAC systems will redirect unauthorized users to a Web page that allows them to sign up for the service. Although this seems relatively straightforward, there are some basic network features that need to be in place in order for this redirection to work correctly. The details involved go beyond the scope of this article, but you should expect to have a competent network administrator or consultant on hand in order to set this up correctly. To be safe, plan for eight to 40 hours of consulting time for troubleshooting and set-up above and beyond the cost of the equipment.
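
The redirect itself is the easy part. Assuming a firewall rule is already steering unauthorized clients' port-80 traffic to a local portal (the piece that requires the network analysis mentioned above), a toy version looks like this; the sign-up URL and port are hypothetical:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    SIGNUP_URL = "http://portal.example.com/signup"  # hypothetical sign-up page

    class RedirectHandler(BaseHTTPRequestHandler):
        """Answer any request from an unauthorized client with a 302 redirect
        to the sign-up page."""
        def do_GET(self):
            self.send_response(302)
            self.send_header("Location", SIGNUP_URL)
            self.end_headers()

    if __name__ == "__main__":
        # Unauthorized clients' HTTP traffic must be NAT'ed to this port first.
        HTTPServer(("", 8080), RedirectHandler).serve_forever()

HTTPS traffic, DNS handling, and walled-garden exceptions are where the consulting hours quoted above tend to go.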

Network Access for Organizational Control

Thus far, we have focused on the basic ways to restrict access to the Internet for a public provider. However, in a private or institutional environment where security and access to information are paramount, the NAC mission can change substantially. For example, the Wikipedia article on network access control outlines a much broader mission than what a simple service provider would require. The article reads:

“Network Access Control aims to do exactly what the name implies—control access to a network with policies, including pre-admission endpoint security policy checks and post-admission controls over where users and devices can go on a network and what they can do.”

This paragraph was obviously written by a contributor who views NAC as a broad control technique reaching deep into a private network. Interestingly, there is an ongoing dispute on Wikipedia over whether this definition goes beyond the simpler idea of just granting access.

The rift on Wikipedia can be summarized as an argument over whether a NAC should be a simple gatekeeper for access to a network, with users having free rein to wander once in, or whether the NAC has responsibilities to protect various resources within the network once access is attained. Both camps are obviously correct, but it depends on the customer and type of business as to what type of NAC is required.

Therefore, in closing, the overarching message that emerges from this discussion is simply that implementing network access control requires an evaluation not only of the network setup, but also of how the network will be used. Strategies that work perfectly in some circumstances can leave network administrators and users frustrated in others. However, with the right amount of foresight, network access control technologies can be implemented to facilitate the success of your network and the satisfaction of its users, rather than serving as an ongoing source of frustration.

The Real Killer Apps and What You Can Do to Stop Them from Bringing Down Your Internet Links


When planning a new network, or when diagnosing a problem on an existing one, a common question that’s raised concerns the impact that certain applications may have on overall performance. In some cases, solving the problem can be as simple as identifying and putting an end to (or just cutting back) the use of certain bandwidth-intensive applications. So, the question, then, is what applications may actually be the source of the problem?

The following article identifies and breaks down the applications that will most certainly kill your network, and provides suggestions as to what you can do about them. While not every application is covered, our experience working with network administrators around the world has helped us identify the most common problems.

The Common Culprits

YouTube Video (standard video) — On average, a sustained 10-minute YouTube video will consume about 500kbps over its duration. Most video players try to store the video locally (buffer ahead) as fast as your network can take it. On a shared network, this has the effect of bringing everything else to its knees. This may not be a problem if you are the only person using the Internet link, but in today’s businesses and households, that is rarely the case.
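
To put that number in perspective with some quick arithmetic: 500 kilobits per second sustained for 10 minutes is 500,000 bits/s × 600 s = 300 megabits, or roughly 37 megabytes for a single video, and a player that buffers ahead will try to pull those megabytes as fast as your link allows.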

For more specifics about YouTube consumption, see these other YouTube articles.

Microsoft Service-Pack Downloads — Updates such as Microsoft service packs are large bulk file transfers, and the transfer protocols used will generally take as much bandwidth as they can find. The end result is that your VoIP phone may lock up, your videos will become erratic, and Web surfing will slow to a crawl.

Keeping Your Network Running Smoothly While Handling Killer Apps

There is no magic pill that can give you unlimited bandwidth, but each of the following solutions may help. However, they often require trade-offs.

  1. The obvious solution is to communicate with the other members of your household or business when using bandwidth-intensive applications. This is not always practical, but, if other users agree to change their behavior, it’s usually a surefire solution.
  2. Deploy a fairness device to smooth out those rough patches during contentious busy hours — Yes, this is the NetEqualizer News blog, but with all bias aside, these types of technologies often work great. If you are in an office sharing an Internet feed with various users, the NetEqualizer will keep aggressive bandwidth users from crowding others out. No, it cannot create additional bandwidth on your pipe, but it will eliminate the gridlock caused by your colleague in the next cubicle downloading a Microsoft service pack. Yes, there are other devices on the market that can enforce fairness, but the NetEqualizer was specifically designed for this mission. And, with a starting price of around $1400, it is a product small businesses can invest in to avoid longer-term costs (see option 3).
  3. Buy more bandwidth — In most cases, this is the most expensive of the different solutions in the long term and should usually be a last resort. This is especially true if the problems are largely caused by recreational Internet use on a business network. However, if the bandwidth-intensive activities are a necessary part of your operation and cannot be regulated by a fairness device, upgrading your bandwidth may be the only long-term solution. But, before signing the contract, be sure to explore options one and two first.

As mentioned, not every network-killing application is discussed here, but this should head you in the right direction in identifying the problem and finding a solution. For a more detailed discussion of this issue, visit the links below.

  • For a more detailed discussion on how much bandwidth specific applications consume, click here.
  • For a set of detailed tips/tricks on making your Internet run faster, click here.
  • For an in-depth look at more complex methods used to mitigate network congestion on a WAN or Internet link, click here.

APconnections Study Shows Administrators Prioritize Results over Bandwidth Reporting


Today we released the results of our month-long study into the needs of bandwidth monitoring technology users, which sought to determine the priority users place on detailed reporting compared to overall network optimization. Based on the results of a NetEqualizerNews.com poll, 80 percent of study participants voted that a smoothly running network was more important than the information provided by detailed reporting.

Ultimately, the study confirms what we’ve believed for years. While some reporting is essential, complicated reporting tools tend to be overkill. When users simply want their networks to run smoothly and efficiently, detailed reporting isn’t always necessary and certainly isn’t the most cost-effective solution.

Detailed bandwidth monitoring technology is not only more expensive from the start, but an administrator is also likely to spend more time making adjustments and looking for optimal performance. The result is a continuous cycle of unnecessarily spent manpower and money.

We go into further detail on the subject in our recent blog post entitled “The True Price of Bandwidth Monitoring.” The full article can be found at https://netequalizernews.com/2009/07/16/the-true-price-of-bandwidth-monitoring/.

The True Price of Bandwidth Monitoring


By Art Reisman, CTO, www.netequalizer.com

For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at the cost of monitoring your Internet has two parts: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. Ironically, we assert that total costs rise with the complexity of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking about, looking for optimal performance.

But is it fair to assume higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. In reality, however, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies that followed a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the advent of cheaper computing in the late 1980s. The function of the computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, with perhaps one or two percent of users well above it. You don’t need a fancy tool to see what those users are doing; abuse becomes obvious just by looking at the usage in a simple report.
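
As a quick illustration (with made-up numbers and a hypothetical data format), flagging that heavy tail from a plain per-user byte count takes only a few lines:

    import statistics

    def flag_heavy_users(usage: dict[str, int], multiple: float = 10.0) -> list[str]:
        """Given per-user byte counts for some period, return the users
        consuming many times the median: the heavy tail of the bell curve."""
        median = statistics.median(usage.values())
        return [user for user, used in usage.items() if used > multiple * median]

    # Example: three typical users and one obvious outlier.
    usage = {"10.0.0.11": 2_100_000, "10.0.0.12": 1_900_000,
             "10.0.0.13": 2_400_000, "10.0.0.14": 48_000_000}
    print(flag_heavy_users(usage))   # prints ['10.0.0.14']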

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test with actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

List of monitoring tools compiled by Stanford

Planetmy
Linux Tips
How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.