Network User Authentication Using Heuristics

Most authentication systems are black and white: once you are in, you are in. It was brought to our attention recently that authentication should be an ongoing process, not a one-time gate with continuous, unchecked free rein once inside.

The reasons are well founded.

1) Students at universities and employees at businesses have all kinds of devices that can be stolen or borrowed while left open.

My high-school kids can attest to this many times over. Often the result is just an innocuous string of embarrassing texts sent from their phones claiming absurd things. For example, “I won’t be at the party, I was digging for a booger and got a nosebleed,” blasted out to their friends after they left their phone unlocked.

2) People will also deliberately give out their credentials to friends and family.

This leaves a hole in standard authentication strategies.

Next year we plan to add an interesting twist to our Intrusion Detection Device (NetGladiator). The idea was actually not mine; it was suggested recently by a customer at our user group meeting in Western Michigan.

Here is the plan.

The idea for our intrusion detection device would be to build a knowledge base of a user’s habits over time and then match those established patterns against a tiered alert system whenever there is any kind of abrupt change.

It should be noted that we would not be monitoring content, so we would be far less invasive than Google’s Gmail with its targeted advertisements; we would primarily follow the trail, or path, of usage rather than reading content.

The heuristics would consist of a three-pronged model.

Prong one would look at general access trends across all users globally. If an aggregate group of users on the network were downloading an iOS update, then this behavior would be classified as normal for individual users.

Prong two would look at the pattern of usage for the authenticated user. For example, most people tune their devices to start at a particular page. They also likely use a specific e-mail client and have their favorite social networking sites. String together enough of these and you would develop a unique footprint for that user. The user could still deviate from their established usage, as long as elements of their normal habits remained in their access patterns.

Prong three would be the alarm level. In general, a user would receive a risk rating when they deviated into suspect behaviors outside their established baseline. Yes, this is profiling, similar to the psychological profiling on employment tests, which is quite accurate at predicting future behavior.

A simple example of a risk factor would be a user who all of a sudden starts executing login scripts en masse, outside their normal pattern. Something this egregious would be flagged as high risk, and the administrator could specify an automatic disconnection for a user at that risk level. Lower-risk behavior would be logged for after-the-fact forensics in case any internal servers became compromised.
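To make the tiered model concrete, here is a minimal sketch of how such scoring could work. The event names, weights, and thresholds are invented for illustration; they are not NetGladiator's actual implementation.

```python
# Hypothetical sketch of a tiered risk model. Event categories, weights,
# and thresholds are illustrative assumptions, not real NetGladiator logic.

BASELINE = {"webmail", "social", "news"}   # the user's established habits

RISK_WEIGHTS = {
    "login_script_burst": 80,   # egregious: mass login-script execution
    "odd_hours_access": 20,
    "new_site_category": 10,    # mild deviation from the baseline
}

def risk_score(events, baseline=BASELINE):
    """Sum weights for observed events that fall outside the baseline."""
    score = 0
    for category, count in events.items():
        if category in baseline:
            continue                       # normal behavior adds no risk
        score += RISK_WEIGHTS.get(category, 5) * count
    return score

def action(score):
    """Map a risk score to the tiered response described above."""
    if score >= 80:
        return "disconnect"   # admin-configured automatic disconnection
    if score >= 30:
        return "log"          # kept for after-the-fact forensics
    return "none"
```

The point of the tiers is that only egregious deviations trigger an automatic response, while smaller ones simply accumulate in the forensic log.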

NetEqualizer Directory Integration FAQ

Editor’s Note: This month, we announced the availability of the NetEqualizer Directory Integration (NDI) feature. Over the past few weeks, interest and inquiries have been high, so we’ve created the following Q&A to address many of the common questions we’ve received.

What is NDI anyway?
NetEqualizer Directory Integration (NDI) is an API for NetEqualizer that allows you to pull in username information from a directory and display it in your active connections table. This way, instead of only seeing IP to IP connection information, you can see usernames associated with those IPs so that you can make better decisions about how to manage your bandwidth. We will gradually be expanding NDI functionality to allow for shaping by username.

How much does NDI cost?
NDI requires setup consultation and is an additional add-on feature for the NetEqualizer. Currently, version 7.0 is required to run NDI. Take a look at our price list for more information.

How does NDI work?
NDI is an API on the NetEqualizer that sends your directory server a URL containing an IP address. A process on your directory server then looks up the username for that IP and returns it to the NetEqualizer, which stores the information.

What am I responsible for implementing with NDI?
You are responsible for implementing the process which resides on the directory server. This process returns a username when given an IP by the NDI API call. We have examples of how to do this for some directory server setups, but directory server setups are too specific for us to create a generic process that will work for all customers.
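For illustration only, here is a minimal sketch of what such a customer-side process might look like: a tiny HTTP service that answers an IP lookup with a username. The query parameter name, the lookup table, and the plain-text response format are all assumptions, not the actual NDI contract; your directory server setup will differ.

```python
# Hypothetical customer-side NDI lookup process. In a real deployment,
# lookup_username() would query your directory (e.g. AD logon records);
# the "ip" parameter name and plain-text reply are assumptions.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

IP_TO_USER = {"10.0.0.42": "jsmith", "10.0.0.77": "mdoe"}  # stand-in data

def lookup_username(ip):
    """Return the username mapped to an IP, or 'unknown' if unmapped."""
    return IP_TO_USER.get(ip, "unknown")

class NDIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?ip=... from the URL the NetEqualizer sends us
        qs = parse_qs(urlparse(self.path).query)
        ip = qs.get("ip", [""])[0]
        body = lookup_username(ip).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

# To run the service on your directory server, you would start it with:
# HTTPServer(("0.0.0.0", 8080), NDIHandler).serve_forever()
```

The essential contract is simple: given an IP, return a username quickly so the NetEqualizer can display it in the active connections table.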

When would knowing the username be helpful?
Knowing the username instead of simply IP-to-IP information can be helpful for administrators in many ways. Here are just a few:
– Easily see which users are taking up a lot of bandwidth. This is doable with a manual lookup, but that can get tedious.
– Eventually, NDI will be enhanced to shape by username. Again, this helps take away a step that an administrator would have to perform manually.
– Often, users are not assigned static IP addresses. With NDI’s dynamic updating, you don’t have to worry about the IP anymore. The username information will automatically adjust.

What are the upcoming enhancements to NDI?
We are planning to make NDI more robust in the months ahead. Our first feature will be Quotas by Username. This feature is currently in Beta. Once this feature is implemented, you will be able to assign usage quotas by username as opposed to IP or subnet. Additional possible changes to NDI include shaping by username and limiting by username. Stay tuned to NetEqualizer News for announcements.

If you have additional questions about NDI, feel free to contact us!

NetEqualizer News: March 2013

March 2013


Enjoy another issue of NetEqualizer News! This month, we discuss AD integration into NetEqualizer, the results of our recent Educause conference, and new NetEqualizer features coming in 2013. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

Spring is almost upon us, and yet in Colorado it is just starting to feel like winter. We get our snowiest weeks typically in late February and early March. So far this year, that is proving to be true. However, with spring coming soon we look forward to beginning again!

We love it when we hear back from you, so if you have a story you would like to share about how we have helped you, let us know. Email me directly; I would love to hear from you!

2013 Software Update Features

We are already in the process of implementing exciting new features for our first Software Update of 2013!

Here is a quick preview of what you can expect this year:

64 Bit Processing: This change will allow current customers to operate on existing hardware and attain 20 to 50 percent improvements in performance.

Sortable Active Connections Table: You will now be able to sort the Active Connections table by any of the columns you choose.

Caching Service Updates: We will be updating and expanding our available caching services.

Port-Based Equalizing: Check out our blog article on bandwidth management on the public side of a NAT router.

Active Directory Integration: Now, administrators will be able to see AD user names in their Active Connections table.

Other Minor Enhancements

Check back with NetEqualizer News for updates on each of these new features and their scheduled releases!
Remember, new software updates (including all the features described above – except AD Integration) are available for free to customers with valid NetEqualizer Software & Support (NSS).

If you are not current with NSS, contact us today!


toll-free U.S. (888-287-2492),

worldwide (303) 997-1300 x. 103

Educause Conference Poster Session Update

Sandy had a great time at the West/Southwest Regional Educause Conference in Austin, February 12-14, 2013!

Here she is in front of her Poster Session materials:

[Photo: Sandy at her Poster Session]

Sandy talked about the “Future of Bandwidth Shaping” with the attendees. One professor, who also runs an ISP for colleges, even came from South Africa!

Most of you know that NetEqualizer is the future of bandwidth shaping, but if you need to convince anyone else (your boss, the powers that be, etc.), we recommend printing out our updated 1-2 page Executive White Paper and sharing that. If you are in Higher Education, you can also check out our newly revised College & University Guide.

Stay tuned to NetEqualizer News for updates on upcoming conferences!

AD Integration Beta and Support Update

We are close to releasing our new Active Directory integration feature and are nearing the end of our beta tests!

Thanks to all of those organizations that have helped us out thus far.

As an additional note, because Active Directory is a complicated environment which varies from customer to customer, the AD Integration feature will be an additional charge beyond NSS. This fee will include support in getting you up and running with the new feature.

Best Of The Blog

How Much Bandwidth Do You Really Need?

By Art Reisman – CTO – APconnections

When it comes to how much money to spend on the Internet, there seems to be this underlying feeling of guilt with everybody I talk to. From ISPs to libraries to multinational corporations, they all have a feeling of bandwidth inadequacy. It is very similar to the guilt I used to feel back in college when I would skip my studies for some social activity (drinking). Only now it applies to bandwidth contention ratios. Everybody wants to know how they compare with the industry average in their sector. Are they spending on bandwidth appropriately? If not, are they hurting their institution? Will they become second-rate?

To ease the pain, I was hoping to put together a nice chart of industry-standard recommendations, validating that your bandwidth consumption is normal, but I just can’t bring myself to do it quite yet. There is an elephant in the room that we must contend with. So before I make up a nice chart of recommendations, a more relevant question is… how bad do you want your video service to be?

Your choices are:

  1. bad
  2. crappy
  3. downright awful

Although my answer may seem a bit sarcastic, there is a truth behind these choices. I sense that much of the guilt of our customers trying to provision bandwidth is based on the belief that somebody out there has enough bandwidth to reach some form of video Shangri-La; like playground children bragging about their father’s professions, claims of video ecstasy are somewhat exaggerated…

Photo Of The Month


Snow Geese in Kansas

If you look closely at the lake you can see a gaggle of white Snow Geese in the water. Though they breed in colder climates, they’ll come to warmer states for the winter. This photo was taken in Kansas on a recent trip by one of our staff members.

P2P Protocol Blocking Now Offered with NetGladiator Intrusion Prevention

A few months ago we introduced our NetGladiator Intrusion Prevention System (IPS) device. To date, it has thwarted tens of thousands of robotic cyber attacks and counting. Success breeds success, and our users wanted more.

When our savvy customers realized the power, speed, and low price point of our underlying layer 7 engine, we started getting requests for additional features, such as: “Can you also block Peer-to-Peer and other protocols that cannot be stopped by our standard Web filters and firewalls?” It was natural to extend our IPS device to address this space; hence, today we are announcing the next-generation NetGladiator. We now offer a module that will allow you to block and monitor the world’s top 10 P2P protocols (which account for 99 percent of all P2P traffic). We also back our technology with a unique promise: we will implement a custom protocol-blocking rule with the purchase of any system at no extra charge. For example, if you have a specific protocol you need to monitor and just can’t uncover it with your WebSense or firewall filter, we will deliver a custom NetGladiator system that can track and/or block your unique protocol, in addition to our standard P2P blocking options.

Below is a sample live Excel report integrated with the NetGladiator in monitor mode. In the screen snapshot below, you will notice that we have uncovered a batch of uTorrent and FrostWire P2P traffic.

Please feel free to call 303-997-1300 or email our NetGladiator sales engineering team with any additional questions.

Related Articles

NetGladiator: A Layer 7 Shaper in Sheep’s Clothing

APconnections Backs Up Security Device Support with an Unusual Offer: “We’ll Hack Your Network”

What gets people excited about purchasing an intrusion detection system? Not much. Certainly, fear can be used to sell security devices. But most mid-sized companies are spread thin with their IT staff; they are focused on running their business operations. Spending money to prevent something that has never happened to them would be seen as somewhat foolish. There are a large number of potential threats to a business, security being just one of them.

One expert pointed out recently:

“Sophisticated fraudsters are becoming the norm with data breaches, carder forums, and do-it-yourself (DIY) crime kits being marketed via the Internet.” – Excerpt from the fraudwar blogspot.

Data theft happens so often that it can be considered a survivable event; it is the new normal. Your customers are not going to run for the hills, as they have been conditioned to roll with this threat. But there is still a steep cost to such an event. So our staff put our heads together and asked: there must be an easy, quantifiable, minimum-investment way to objectively evaluate data risk without a giant cluster of data security devices in place spewing gobs of meaningless drivel.

One of our internal white-knight hackers pointed out that, in his storied past, he had been able to break into almost any business at will (good thing he is a white knight and does not steal or damage anything). While talking to some of our channel resellers, we also learned that most companies, although aware of outside intrusion, are reluctant to throw money and resources at a potential problem they can’t easily quantify.

Thus arose the idea for our new offer. For a small refundable retainer fee, we will attempt to break into a customer’s data systems from the outside. If we can’t get in, we’ll return the retainer fee. Obviously, if we do get in, we can then propose a solution with indisputable evidence of the vulnerability; and if we don’t get in, the customer has some level of assurance that their existing infrastructure thwarted a determined break-in.

Dynamic Reporting With The NetEqualizer

Update: February 2014

The spreadsheet reporting features described below as an Excel integration have been built into the NetEqualizer GUI as of 2013. We have also added protocol reporting for common applications. We generally do not break links to old articles, hence we did not take this article down.



Have you ever wanted an inexpensive real-time bandwidth reporting tool?

The following Excel integration totally opens up the power of the NetEqualizer bandwidth data. Even I love watching my NetEqualizer data on my spreadsheet. Last night, I had it up and watched as the bandwidth spiked all of a sudden, so I looked around to see why; it turns out my son had started watching Netflix on his Nintendo DS! Too funny, but very persuasive in terms of enhancing your ability to monitor.

This blog shows just one example, but suffice it to say that the reporting options are endless. You could easily write a VBA routine in Excel to bring this data down every second. You could automatically log the day’s top 10 highest streams, or top 10 highest connections. You could graph the last 60 seconds (or another timeframe) of per-second peak usage. You could update this graph, watching it scroll by in real time. What you could do is endless, with relatively little effort (because Excel does all the computationally hard work with pre-programmed routines for reporting and display).

Here’s a picture of what’s happening on my NetEqualizer right now as I write this:


Pretty slick, eh? Now that I’ve put this spreadsheet together, I don’t have to do anything to have it report current data every minute or sooner. Let me explain how you can do it too.

Did you know that there’s a little known feature in Microsoft Excel called an Excel Web Query?  This facility allows you to specify an http: address on the web and use the data off the resulting web page for automatic insertion into Excel.  Further, you can tell Excel that you want your spreadsheet to be automatically updated regularly – as frequently as every minute or whenever you hit the “Refresh All” key. If you combine this capability with the ability to run a NetEqualizer report from your browser using the embedded command, you can automatically download just about any NetEqualizer data into a spreadsheet for reporting, graphing and analysis.

Fig-1 above shows some interesting information, all of it gathered from my NetEqualizer, along with some information that has been programmed into my spreadsheet. Here’s what’s going on: Cells B4 & B5 contain information pulled from my NetEqualizer: the total bandwidth, Up & Down respectively, going through the unit right now. It compares this with cells C4 & C5, which are the TrunkUp & TrunkDown settings (also pulled from the NetEqualizer’s configuration file and downloaded automatically), and calculates cells D4 & D5, showing the % of trunk used. Cells B8:K show all the data from the NetEqualizer’s Active Connections Report. The column titled “8 Second Rolling Average Bandwidth” shows Wavg, and this data is also automatically plotted in a pie chart showing the bandwidth composition of my individual flows. Also, I put a conditional rule on my bandwidth flows: because I’m above 85% of my TrunkDown speed, all flows greater than HOGMIN are highlighted in red. All of this is updated every minute, or sooner if I hit the refresh key.
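If you would rather script the same report logic outside of Excel, here is a minimal sketch of the trunk-utilization and HOGMIN highlighting rule in Python. The URL, trunk settings, and flow data are made up for the example; substitute your own unit's report commands from the full doc.

```python
# Sketch of the spreadsheet's logic: compute % of trunk used and flag
# "hog" flows. Numbers and the commented-out URL are illustrative only.

# import urllib.request
# raw = urllib.request.urlopen("http://YourNetEqualizersIPAddress/...").read()

TRUNK_DOWN_KBPS = 10_000     # TrunkDown setting from the config file
HOGMIN_KBPS = 500            # flows above this are "hogs" when congested

def percent_used(current_kbps, trunk_kbps=TRUNK_DOWN_KBPS):
    """Equivalent of cells D4/D5: % of the trunk currently in use."""
    return 100.0 * current_kbps / trunk_kbps

def flag_hogs(flows, current_kbps):
    """Mirror the conditional rule: highlight flows above HOGMIN only
    when the trunk is more than 85% utilized."""
    if percent_used(current_kbps) <= 85:
        return []
    return [ip for ip, wavg in flows if wavg > HOGMIN_KBPS]

# Sample Active Connections data: (IP, 8-second rolling average in kbps)
flows = [("10.0.0.5", 1200), ("10.0.0.9", 80), ("10.0.0.12", 600)]
print(flag_hogs(flows, current_kbps=9000))   # trunk at 90%: hogs listed
```

Polling this in a loop every 60 seconds would give you the same refresh behavior as the Excel Web Query.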

I’ll take you through, step by step, how I created the page above so you can unlock the power of Excel on your critical bandwidth data.

The steps I outline are for Excel 2007; this can be done in earlier versions of Excel, but the steps will be slightly different. All I ask is that if you create a spreadsheet like this and do something you really like, you let us know about it.

I’m going to assume that you know how to construct a basic spreadsheet. This document would be far too long if I took you through each little step to create the report above. Instead, I’ll show you the important part – how to get the data from the NetEqualizer into the spreadsheet and have it automatically and regularly refresh itself.

On this page there are two links: one at B4:B5, and another at B8:K (K has no ending row because it depends on how many connections it pulls; thus K could range from K8 to K99999999, you get the idea).

Let’s start by linking my total up and down bandwidth to cells B4:B5 from the NetEqualizer.  To do this, follow these steps:

Select cell B4 with your cursor.

Select the “Data” tab and click “From Web”.

Click “No” and erase the address in the address bar:

Put the following in the Address Bar instead – make sure to put the IP Address of your NetEqualizer instead of “YourNetEqualizersIPAddress” – and hit return:

—Please contact us ( if you are a current NetEqualizer user and want the full doc—

You may get asked for your User ID and Password – just use your normal NetEqualizer User ID and Password.

Now you should see this:

Click on the 2nd arrow in the form, which turns into a check mark after it’s been clicked (as shown in the picture above). This highlights the returned data, which is the “Peak” bandwidth (Up & Down) on the NetEqualizer. Click the Import button. In a few seconds this will populate the spreadsheet with this data in cells B4 & B5.

Now, let’s tell the connection that we want the data updated every 1 minute. Right Click on B4 (or B5), and you will see this:

Click on Data Range Properties.

Change “Refresh every” to 1 minute. Also, set the other check marks as shown. Hit “OK”.

Done! Total Bandwidth flow data from the NetEqualizer bridge will now automatically update into the spreadsheet every 60 seconds.

For the Active Connections portion of this report, follow the same instructions starting by selecting cell B8. Only for this report, use the following web address (remember to use your NetEqualizer’s IP):

—Please contact us ( if you are a current NetEqualizer user and want the full doc—

(Note: we’ve had some reports that this command doesn’t cut and paste well, probably because of the line wrap; you may need to type it in.)

Also, please copy and paste this exactly (unless you’re a Linux expert – and if you are, send me a better command!) since there are many special formatting characters used to make this import work in a well-behaved manner. Trust me on this: plenty of trial and error went into getting this to come in reliably.

Also, remember to set the connection properties to update every 1 minute.

At this point you may be noticing one of the cool things about this procedure: I can run my own “custom” reports via a web http address that also issues Linux commands like “cat” & “awk”. Being able to do this allows me to take just about any data off the NetEqualizer for automatic import into Excel.

So that’s how it’s done. Here’s a list of a few other handy web connection reports:

For your NetEqualizer’s configuration file use:

—Please contact us ( if you are a current NetEqualizer user and want the full doc—

For your NetEqualizer’s log file use:

—Please contact us ( if you are a current NetEqualizer user and want the full doc—

(Note: we’ve had some reports that this command doesn’t cut and paste well, probably because of the line wrap; you may need to type it in.)

Once you get all the data you need into your Excel, you can operate on the data using any Excel commands including macros, or Excel Visual Basic.

Lastly, do you want to see what’s happening right now, and you don’t want to wait up to 60 seconds? Hit the “Refresh All” button on the “Data” tab – that will refresh everything as of this second:

Good luck, and let us know how it goes…

Caveat – this feature is unsupported by APConnections.

We Want Your Feedback!

In this month’s newsletter, we gave an overview of a few potential new NetEqualizer features. While we have several options under consideration, we want to know what features might serve you best. So, take a look at the options below, visit our survey, and let us know what you think!

  1. The option to send an SNMP trap to your SNMP monitor during a network event.
  2. The option to receive email notification during certain specified network events. This could include when:
    1. Bandwidth utilization is high – This would happen when your bandwidth utilization is extremely high and might indicate the need for an upgrade in bandwidth
    2. Errors occur on an interface card – This would be used to detect if there was a problem with one of your Ethernet or fiber connections
    3. A new P2P user is detected on your network – This would make even better and more efficient use of our new P2P Locator Technology
    4. YouTube has been viewed from cache – An email would be dispatched every time a YouTube video is served up from our NetEqualizer Caching Option
  3. A form of Active Directory integration to specify a rate limit on a user by name rather than IP address. For example, you could say John Smith is limited to one-megabit downloads. As of now, you would need to know John Smith’s IP address. With Active Directory integration, you could specify him by name.
  4. A standard pre-written quota utility (source code) with each system. Right now, the NetEqualizer just comes with an API (see the NetEqualizer User Quota API). However, this new utility would be something you could plug IPs into from the GUI and have a monthly quota enforced right away. Initially, it would be a very simple tool, but it could be expanded. In other words, this would be a good working program using our API to get you a head start on expanding and writing a full-bodied quota tool.
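As a thought experiment on the quota utility in item 4, the core check might look something like this. The quota size, data structures, and function names are all hypothetical; the real utility would be driven by usage numbers from the NetEqualizer User Quota API.

```python
# Hypothetical core of a monthly-quota utility. The usage figures would
# come from the NetEqualizer User Quota API in a real implementation;
# the 50 GB quota and dict layout here are invented for the example.

MONTHLY_QUOTA_BYTES = 50 * 1024**3   # e.g. 50 GB per user per month

def over_quota(usage_by_ip, quota=MONTHLY_QUOTA_BYTES):
    """Return the IPs whose month-to-date usage exceeds the quota,
    sorted so the report is stable run to run."""
    return sorted(ip for ip, used in usage_by_ip.items() if used > quota)
```

A GUI front end would simply feed the configured IPs into a loop around this check and apply whatever enforcement the administrator chose.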

Click here or on the survey to respond.

New Features Survey

Click here or on the survey to respond.

NetEqualizer P2P Locator Technology

Editor’s Note: The NetEqualizer has always been able to thwart P2P behavior on a network. However, our new utility can now pinpoint an individual P2P user or gamer without any controversial layer-7 packet inspection. This is an extremely important step from a privacy point of view, as we can actually spot P2P users without looking at any private data.

A couple of months ago, I was doing a basic health check on a customer’s heavily used residential network. In the process, I instructed the NetEqualizer to take a few live snapshots. I then used the network data to do some filtering with custom software scripts. Within just a few minutes, I was able to inform the administrator that eight users on his network were doing some heavy P2P, and one in particular looked to be hosting a gaming session. This was news to the customer, as his previous tools didn’t provide that kind of detail.

A few days later, I decided to formally write up my notes and techniques for monitoring a live system to share on the blog. But, as I got started, another lightbulb went on…in the end, many customers just want to know the basics — who is using P2P, hosting game servers, etc. They don’t always have the time to follow a manual diagnostic recipe.

So, with this in mind, instead of writing up the manual notes, I spent the next few weeks automating and testing an intelligent utility to provide this information. The utility is now available with NetEqualizer 5.0.

The utility provides: 

  • A list of users that are suspected of using P2P
  • A list of users that are likely hosting gaming servers
  • A confidence rating for each user (from high to low)
  • The option of tracking users by IP and MAC address

The key to determining a user’s behavior is the analysis of the fluctuations in their connection counts and total number of connections. We take snapshots over a few seconds, and like a good detective, we’ve learned how to differentiate P2P use from gaming, Web browsing and even video. We can do this without using any deep packet inspection. It’s all based on human-factor heuristics and years of practice.
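As a purely illustrative sketch of that idea (the actual P2P Locator logic is proprietary, and the thresholds below are invented), connection-count snapshots might be classified along these lines:

```python
# Toy classifier over connection-count snapshots for one host.
# Thresholds and labels are made up; the real P2P Locator heuristics
# are the product of years of tuning and are not published.

def classify(snapshots):
    """snapshots: per-interval connection counts, e.g. [40, 55, 38].
    Returns a (label, confidence) guess from count level and churn."""
    avg = sum(snapshots) / len(snapshots)
    churn = max(snapshots) - min(snapshots)   # fluctuation across snapshots
    if avg > 30 and churn > 10:
        return ("p2p", "high")            # many, rapidly fluctuating peers
    if avg > 30:
        return ("gaming-host", "medium")  # many, but stable, connections
    return ("normal", "low")              # browsing/video levels
```

The intuition is the one in the text: P2P swarms churn through peers, game servers hold many steady connections, and ordinary browsing stays low on both measures.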

Enclosed is a screen shot of the new P2P Locator, available under our Reports & Graphing menu.

Our new P2P Locator technology

Contact us to learn more about the NetEqualizer P2P Locator Technology or NetEqualizer 5.0. For more information about ongoing changes and challenges with BitTorrent and P2P, see Ars Technica’s “BitTorrent Has New Plan to Shape Up P2P Behavior.”

Setting Up a Squid Proxy Caching Co-Resident with a Bandwidth Controller

Editor’s Note: It was a long road to get here (building the NetEqualizer Caching Option (NCO), a new feature offered on the NE3000 & NE4000), and for those following in our footsteps, or just curious about the intricacies of YouTube caching, we have laid open the details.

This evening, I’m burning the midnight oil. I’m monitoring Internet link statistics at a state university with several thousand students hammering away on their residential network. Our bandwidth controller, along with our new NetEqualizer Caching Option (NCO), which integrates Squid for caching, has been running continuously for several days and all is stable. From the stats I can see, about 1,000 YouTube videos have been played out of the local cache over the past several hours. Without the caching feature installed, most of the YouTube videos would have played anyway, but there would be interruptions as the Internet link coughed and choked with congestion. Now, with NCO running smoothly, the most popular videos will run without interruptions.

Getting the NetEqualizer Caching Option to this stable state was a long and winding road. Here’s how we got there.

First, some background information on the initial problem.

To use a Squid proxy server, your network administrator must put hooks in your router so that all Web requests go to the Squid proxy server before heading out to the Internet. Sometimes the Squid proxy server will have a local copy of the requested page, but most of the time it won’t. When a local copy is not present, it sends your request on to the Internet to get the page (for example the Yahoo! home page) on your behalf. The Squid server will then store a local copy of the page in its cache (storage area) while simultaneously sending the results back to you, the original requesting user. If you make a subsequent request to the same page, Squid will quickly check to see if the content has been updated since it was first stored, and if it hasn’t, it will send you the local copy. If it detects that the local copy is no longer valid (the content has changed), then it will go back out to the Internet and get a new copy.
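The lookup-validate-fetch cycle described above can be sketched in a few lines. This is a deliberately simplified model (real Squid freshness rules and revalidation are far more involved), with hypothetical helper functions standing in for the origin server:

```python
# Simplified proxy-cache flow: serve from cache when the content is
# unchanged, otherwise fetch from the origin and store a fresh copy.
# origin_fetch/origin_etag are stand-ins for real HTTP requests.

cache = {}   # url -> (etag, body)

def fetch_via_proxy(url, origin_fetch, origin_etag):
    """origin_fetch(url) -> (etag, body); origin_etag(url) -> etag."""
    if url in cache:
        etag, body = cache[url]
        if origin_etag(url) == etag:   # content unchanged: serve local copy
            return body, "HIT"
    etag, body = origin_fetch(url)     # miss or stale: go to the Internet
    cache[url] = (etag, body)          # store while returning to the user
    return body, "MISS"
```

The second request for an unchanged page is the win: only a cheap validation goes upstream, while the payload comes off local disk at LAN speed.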

Now, if you add a bandwidth controller to the mix, things get interesting quickly. In the case of the NetEqualizer, it decides when to invoke fairness based on the congestion level of the Internet trunk. However, with the bandwidth controller unit (BCU) on the private side of the Squid server, the actual Internet traffic cannot be distinguished from local cache traffic. The setup looks like this:

Internet->router->Squid->bandwidth controller->users

The BCU in this example won’t know what is coming from cache and what is coming from the Internet. Why? Because the data coming from the Squid cache comes over the same path as the new Internet data. The BCU will erroneously think all the traffic is coming from the Internet and will shape cached traffic as well as Internet traffic, thus defeating the higher speeds provided by the cache.

In this situation, the obvious solution would be to switch the position of the BCU to a setup like this:

Internet->router->bandwidth controller->Squid->users

This configuration would be fine except that now all the port 80 HTTP traffic (cached or not) will appear to be coming from the Squid proxy server, and your BCU will not be able to do things like put rate limits on individual users.

Fortunately, with our NetEqualizer 5.0 release, we’ve created an integration between the NetEqualizer and a co-resident Squid (our NetEqualizer Caching Option) such that everything works correctly. (The NetEqualizer still sees and acts on all traffic as if it were between the user and the Internet. This required some creative routing and actual bug fixes to the bridging and routing in the Linux kernel. We also had to develop a communication module between the NetEqualizer and the Squid server so the NetEqualizer gets advance notice when data is originating in cache and not the Internet.)

Which do you need, Bandwidth Control or Caching?

At this point, you may be wondering: if Squid caching is so great, why not just dump the BCU and be done with the complexity of trying to run both? Well, while the Squid server alone will do a fine job of accelerating access times for large files such as video when they can be fetched from cache, a common misconception is that a caching server greatly relieves your Internet pipe. This has not been the case in our real-world installations.

The fallacy of caching as a panacea for all things congested is that it assumes demand and overall usage are static, which is unrealistic. The cache is of finite size, and users will generally start watching more YouTube videos when they see improvements in speed and quality (prior to Squid caching, they might have given up because of slowness), including videos that are not in cache. So, the Squid server will have to fetch new content all the time, using additional bandwidth and quickly negating any improvements. Therefore, if you had a congested Internet pipe before caching, you will likely still have one afterward, leading to slow access for e-mail, Web chat, and other non-cacheable content. The solution is to include a bandwidth controller in conjunction with your caching server. This is what NetEqualizer 5.0 now offers.

In no particular order, here is a list of other useful information — some generic to YouTube caching and some just basic notes from our engineering effort. This documents the various stumbling blocks we had to overcome.

1. There was the issue of just getting a standard Squid server to cache YouTube files.

The URL tags on these files change with each access, like a counter, and a normal Squid server is fooled into believing the files themselves have changed. By default, when a file changes, a caching server goes out and gets the new copy. In the case of YouTube files, the content is almost always static, but because the file names keep changing, an unmodified Squid server will re-retrieve each YouTube file from the source rather than serve it from cache. (Read more on caching YouTube with Squid…)
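For reference, this is the class of workaround that Squid 2.7’s storeurl feature makes possible: requests whose URLs vary can be rewritten to a single canonical cache key by an external helper. The ACL name, domain list, and helper path below are illustrative assumptions for the sketch, not our production configuration.

```
# Illustrative squid.conf (Squid 2.7) lines: send YouTube-style URLs
# through an external helper that strips the per-access tags so the
# same video always maps to one cache key.
acl youtube dstdomain .youtube.com
storeurl_access allow youtube
storeurl_access deny all
storeurl_rewrite_program /etc/squid/storeurl_rewrite.pl
storeurl_rewrite_children 5
```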

2. We had to move to a newer Linux kernel to get a recent version of Squid (2.7), which supports the hooks for YouTube caching.

A side effect was that the new kernel destabilized some of the timing mechanisms we use to implement bandwidth control. These subtle bugs were not easily reproduced with our standard load generation tools, so we had to create a new simulation lab capable of simulating thousands of users accessing the Internet and YouTube at the same time. Once we built this lab, we were able to re-create the timing issues in the kernel and have them patched.

3. It was necessary to set up a firewall re-direct (also on the NetEqualizer) for port 80 traffic back to the Squid server.

This configuration, along with an extra bridge, was required to get everything working. The routing within the NetEqualizer was customized so that we could still see the correct IP addresses of Internet sources and users when shaping. (As mentioned above, if you do not take care of this, all traffic will appear as if it is coming from the proxy server.)
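As a generic sketch of what such a re-direct looks like (the interface name and Squid’s listening port here are assumptions, not our exact production rules):

```shell
# Send user-side port 80 traffic to the co-resident Squid proxy.
# eth0 and port 3128 (Squid's default) are assumed values.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128
```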

4. The firewall has a table called ConnTrack (not to be confused with NetEqualizer connection tracking, though similar).

If you are not careful, the connection tracking table on the firewall tends to fill up and crash the firewall, denying new requests for re-direction. Simply making the connection table enormous can also cause your system to lock up, so you must measure and size this table based on experimentation. This was another reason for us to build our simulation lab.
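On a stock Linux firewall, the measurement step amounts to watching the conntrack counters under load before touching the ceiling. The paths and example value below are assumptions that vary by kernel version:

```shell
# Current entries vs. the configured maximum:
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
# Only after measuring peak usage under load, raise the ceiling deliberately:
sysctl -w net.netfilter.nf_conntrack_max=131072
```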

5. There was also the issue of the Squid server using all available Linux file descriptors.

Linux comes with a default limit for security reasons, and when the Squid server hits this limit (it does all kinds of file reading and writing, keeping descriptors open), it locks up.
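The usual fix is to raise the descriptor limit in the shell (or init script) that launches Squid; the value below is an illustrative assumption:

```shell
ulimit -n            # show the current per-process limit, often 1024
ulimit -n 8192       # raise it before starting Squid
```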

Tuning changes we made to support caching with Squid

a. We limited the size of a cached object to between 2 megabytes (2 MB) and 40 megabytes (40 MB):

  • minimum_object_size 2000000 bytes
  • maximum_object_size 40000000 bytes

Allowing smaller cached objects would rapidly fill up your cache, and there is little benefit to caching small pages.

b. We turned off the Squid “keep reading” flag:

  • quick_abort_min 0 KB
  • quick_abort_max 0 KB

When set, this flag tells Squid to keep reading a file even if the user leaves the page; for example, if a user watching a video aborts in their browser, the Squid cache continues to read the file. I suppose this could now be turned back on, but during testing it was quite obnoxious to see data transfers taking place to the Squid cache when you thought nothing was going on.

c. We also explicitly told Squid which DNS servers to use in its configuration file. There was some evidence that without this the Squid server may bog down, but we never confirmed it. However, no harm is done by setting these parameters:

  • dns_nameservers   x.x.x.x

d. You have to be very careful to set the cache size so it does not exceed your actual disk capacity. Squid is not smart enough to check your real capacity, so it will fill up your file system if you let it, which in turn causes a crash. When testing with small RAM disks (less than four gigabytes of cache), we found that the Squid logs will also fill up your disk space and cause a lockup. The logs are refreshed once a day on a busy system; with a large number of pages being accessed, the log can easily approach one gigabyte, and then, to add insult to injury, the log backup program makes a backup. On a normal-sized caching system there should be ample space for logs.

e. Squid has a short-term buffer not related to caching; it is just a buffer where Squid stores data from the Internet before sending it on to the client. Remember, all port 80 (HTTP) requests go through Squid, cached or not, so throttling the transfer between Squid and the user does not mean the transfer coming from the Internet slows right away. With the BCU in line, we want the sender on the Internet to back off as soon as we decide to throttle the transfer, but with a large Squid buffer sitting between the NetEqualizer and the sending host, the sender would not respond to our deliberate throttling right away (Link to Squid caching parameter).
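One relevant knob here is Squid’s read_ahead_gap directive, which bounds how far ahead of the client Squid will read from the origin server. Keeping it small means that throttling the client-side transfer back-pressures the Internet-side transfer sooner. The value shown is simply Squid’s usual default, given as a sketch rather than a recommendation:

```
# squid.conf: limit how much data Squid buffers ahead of the client
read_ahead_gap 16 KB
```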

f. How do you determine the effectiveness of your YouTube caching?

I use the Squid client cache statistics page. Down at the bottom there is an entry that lists hits versus requests.


  • ICP : 0 Queries, 0 Hits (0%)
  • HTTP: 21990877 Requests, 3812 Hits (0%)

At first glance, it may appear that the hit rate is not all that effective, but let’s look at these stats another way. A simple HTTP page generates about 10 HTTP requests for perhaps 80 KB of data total; a more complex page may generate 500 KB. For example, when you go to the CNN home page there are quite a few small links, and each one increments the HTTP counter. A YouTube hit, on the other hand, generates one hit for about 20 megabytes of data. Since our cache only holds objects from 2 MB to 40 MB, with an estimated average of 20 MB, a little math based on bytes gives roughly 400 gigabytes of regular HTTP data and 76 gigabytes of data served from cache. By this rough estimate, about 20 percent of all HTTP data came from cache, which is quite significant.
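The back-of-envelope arithmetic can be reproduced from the raw counters. The 20 MB average object size and the 400 GB regular-HTTP figure are the same rough assumptions used in the text:

```shell
hits=3812            # HTTP hits served from cache (from the stats page)
avg_cached_mb=20     # assumed average size of a cached object, in MB
total_http_gb=400    # rough estimate of regular HTTP volume, in GB

cached_gb=$(( hits * avg_cached_mb / 1000 ))      # ~76 GB from cache
pct=$(( cached_gb * 100 / total_http_gb ))        # ~19 percent
echo "cache served ~${cached_gb} GB, roughly ${pct}% of regular HTTP volume"
```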

NetEqualizer Testing and Integration of Squid Caching Server

Editor’s Note: Due to the many variables involved with tuning and supporting Squid Caching Integration, this feature will require an additional upfront support charge. It will also require at minimum an NE3000 platform. Contact for specific details.

In our upcoming 5.0 release, the main enhancement will be the ability to implement YouTube caching from a NetEqualizer. Since a Squid caching server can potentially be implemented separately by your IT department, the question comes up: what is the difference between using the embedded NetEqualizer integration and running a stand-alone caching server on the network?

Here are a few of the key reasons why the NetEqualizer caching integration provides the most efficient and effective setup:

1. Communication – For proper performance, it’s important that the NetEqualizer know when a file is coming from cache and when it’s coming from the Internet. It would be counterproductive to have data from cache shaped in any way. To accomplish this, we wrote a new utility, aptly named “cache helper,” to advise the NetEqualizer of current connections originating from cache. This allows the NetEqualizer to permit cached traffic to pass without being shaped.

2. Creative Routing – It’s also important that the NetEqualizer be able to see the public IP addresses of traffic originating on the Internet. However, using a stand-alone caching server prevents this. For example, if you plug a caching server into your network in front of a NetEqualizer (between the NetEqualizer and your users), all port 80 traffic would appear to come from the proxy server’s IP address. Cached or not, it would appear this way in a default setup. The NetEqualizer shaping rules would not be of much use in this mode as they would think all of the Internet traffic was originating from a single server. Without going into details, we have developed a set of special routing rules to overcome this limitation in our implementation.

3. Advanced Testing and Validation – Squid proxy servers by themselves are very finicky. Time and time again, we hear about implementations where a customer installed a proxy server only to have it cause more problems than it solved, ultimately slowing down the network. To ensure a simple yet tight implementation, we ran a series of scenarios under different conditions. This required us to develop a whole new methodology for testing network loads through the NetEqualizer. Our current class of load generators is very good at creating a heavy load and controlling it precisely, but in order to validate a caching system, we needed a different approach: a load simulator that could reproduce the variations of live Internet traffic. For example, to ensure a stable caching system, you must take the following into consideration:

  • A caching proxy must perform quite a large number of DNS look-ups
  • It must also check tags for changes in content for cached Web pages
  • It must facilitate the delivery of cached data and know when to update the cache
  • The squid process requires a significant chunk of CPU and memory resources
  • For YouTube integration, the Squid caching server must also strip some URL tags on YouTube files on the fly

To answer this challenge, and provide the most effective caching feature, we’ve spent the past few months developing a custom load generator. Our simulation lab has a full one-gigabit connection to the Internet. It also has a set of servers that can simulate thousands of simultaneous users surfing the Internet at the same time. We can also queue up a set of YouTube users vying for live video from the cache and Internet. Lastly, we put a traditional point-to-point FTP and UDP load across the NetEqualizer using our traditional load generator.

Once our custom load generator was in place, we were able to run various scenarios that our technology might encounter in a live network setting.  Our testing exposed some common, and not so common, issues with YouTube caching and we were able to correct them. This kind of analysis is not possible on a live commercial network, as experimenting and tuning requires deliberate outages. We also now have the ability to re-create a customer problem and develop actual Squid source code patches should the need arise.

New APconnections Corporate Speed Test Tool Released for NetEqualizer

For many Internet users, one of the first troubleshooting steps when online access seems to slow is to run a simple speed test. And, under the right circumstances, speed tests can be an effective way to pinpoint the problem.

However, slowing Internet speeds aren’t just an issue for the casual user. Over our years of troubleshooting thousands of corporate and other commercial links, a recurring issue has been customers not getting their full advertised bandwidth from their upstream provider. Some customers learn something is amiss by examining bandwidth reports on their routers; other problems we stumble upon while troubleshooting network congestion issues.

But, what if you have a shared, busy corporate Internet connection such as this — with hundreds or thousands of users on the link at one time? Should a traditional speed test be the first place to turn? In this situation, the answer is “no.” Running a speed test under these conditions is neither meaningful nor useful.

Let me explain.

The problem starts with the overall design of the speed test itself. Speed tests usually transfer short-duration files. For example, a 10-megabit file sent over a hundred-megabit link might complete in 0.1 seconds, reporting the link speed to the operator as 100 megabits. Statistically, however, this is just a snapshot of one very small moment in time and is of little value when the demands on a network are constantly changing. Furthermore, with this type of test, the link must be free of active users, which is nearly impossible when you have an entire office, for example, accessing the network at once.
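The arithmetic behind that example is easy to check: on an otherwise idle link, transfer time is simply file size divided by link rate.

```shell
file_megabits=10     # size of the test file, in megabits
link_mbps=100        # link capacity, in megabits per second

# transfer time = size / rate; awk handles the fractional result
seconds=$(awk -v f="$file_megabits" -v r="$link_mbps" \
              'BEGIN { printf "%.1f", f / r }')
echo "test completes in ${seconds} seconds"
```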

On these larger shared links, the true speed can only be measured during peak times with users accessing a wide variety of applications persistently over a significant period. But, there is no easily controlled Web speed test site that can measure this type of performance on your link.

Yes, a sophisticated IT administrator can run reports and see trends and make assumptions. And many do. Yet, for some businesses, this isn’t practical.

For this reason, we’ve introduced the NetEqualizer Speed Test Utility.

How Does the NetEqualizer Speed Test Utility Work?

The NetEqualizer Speed Test Utility is an intelligent tool embedded in your NetEqualizer that can be activated from your GUI. On high-traffic networks, there is always a busy hour background load on the link – a baseline if you will. When you set up the speed test tool, you simply tell the NetEqualizer some basics about your network, including:

  • Link Speed
  • Number of Users
  • Busy Hours

After turning the tool on, it will keep track of your network’s bandwidth usage. If your usage drops below expected levels, it will present a mild warning on the GUI screen that your bandwidth may be compromised and give an explanation of the deviation. The operator can also be notified by e-mail.

This setup allows bandwidth to be monitored without having to depend on unreliable speed tests or run time-consuming reports, allowing problems to be identified and addressed more quickly.

For more information about the NetEqualizer Speed Test Utility, contact APconnections at

What to expect from Internet Bursting

APconnections will be releasing a bursting feature (version 4.7) on its NetEqualizer bandwidth controller this week. What follows is an explanation of the feature, along with some facts about Internet bursting that consumers will also find useful.

First, an explanation of how the NetEqualizer bursting feature works.

– The NetEqualizer currently comes with a feature that lets you set a rate limit by IP address.

– Prior to the bursting feature, the top speed allowed for each user was fixed at a set rate limit.

– Now, with bursting, a user can be allowed a burst of bandwidth for 10 seconds at two, three, four, or any other multiple of their base rate limit.

So, for example, if a user has a base rate limit of 2 megabits per second and a burst factor of 4, their connection will be allowed to burst all the way up to 8 megabits for 10 seconds, at which time it reverts back to the original 2 megabits per second. This type of burst is most noticeable when loading large, graphics-heavy Web pages; they essentially fly up in the browser at warp speed.
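The burst ceiling is simply the base rate limit multiplied by the burst factor, as a quick check of the example confirms:

```shell
base_mbps=2      # user's base rate limit, in megabits per second
burst_factor=4   # configured burst multiple

burst_mbps=$(( base_mbps * burst_factor ))
echo "bursts up to ${burst_mbps} Mbps for 10 seconds, then back to ${base_mbps} Mbps"
```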

In order to make bursting a “special” feature, it obviously can’t be on all the time. For this reason the NetEqualizer, by default, forces a user to wait 80 seconds before they can burst again.

Will bursting show up in speed tests?

With the default settings of 10-second bursts and an 80-second timeout before the next burst, it is unlikely a user will be able to see their full burst speed accurately on a speed test site.

How do you set a bursting feature for an IP address ?

From the GUI


Add Rules->set hard limit

The last field in the command specifies the burst factor.  Set this field to the multiple of the default speed you wish to burst up to.

Note: Once bursting has been set-up, bursting on an IP address will start when that IP exceeds its rate limit (across all connections for that IP).  The burst applies to all connections across the IP address.

How do you turn the burst feature off for an IP address?

You must remove the Hard Limit on the IP address and then recreate the Hard Limit by IP without bursting defined.

From the Web GUI Main Menu, Click on ->Remove/Deactivate Rules

Select the appropriate Hard Limit from the drop-down box. Click on ->Remove Rule

To re-add the rule without bursting, from the Web GUI Main Menu, Click on ->Add Rules->Hard Limit by IP and leave the last field set to 1.

Can you change the global bursting defaults for duration of burst and time between bursts ?

Yes, from the GUI screen you can select

misc->run command

In the space provided, run the following command:

/usr/sbin/brctl setburstparams my 40  30

The first parameter is the time, in seconds, an IP must wait after completing a burst cycle before it is allowed to burst again.

The second parameter is the time, in seconds, an IP is allowed to burst before being relegated back to its default rate cap.

The global burst parameters are not persistent, meaning you will need to put the command in the startup file if you want the settings to stick between reboots.
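For example, appending the command to a boot script would re-apply the settings after each reboot. The startup file location below is an assumption; use whichever startup mechanism your system provides:

```shell
# Re-apply custom burst timings at boot (path is system-dependent):
echo "/usr/sbin/brctl setburstparams my 40 30" >> /etc/rc.local
```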


If speed tests are not a good way to measure a burst, then what do you recommend?

The easiest way would be to extend the burst time to minutes (instead of the default 10 seconds) and then run the speed test.

With the default set at 10 seconds, the best way to see a burst in action is to take a continuous snapshot of an IP’s consumption during an extended download.

Beware of the confusion that bursting might cause.

Instant Bandwidth Snapshot Feature: Is this an Industry First?

One of the things we have noticed with reporting tools lately, including ntop (the reporting tool we integrate), is that there is no easy way to show instant bandwidth for a user. Most reporting tools smooth out usage over some time period; a 5-minute average is the norm.

For example, this popular Netflow Analyzer touts a 10-minute average; right from the FAQ on their main page, it states:

Real-time Bandwidth Reports for each WAN link

“As soon as Netflow data is received, graphs are generated showing details on incoming and outgoing traffic on the link for the last 10 minutes.”

Nowhere can we find a reasonable bandwidth monitoring tool that will show you instant, as-of-this-second bandwidth utilization. We are sure somebody will e-mail us to dispute this claim, and if so, we will gladly publish their link and give them credit on our blog.

When is an Instant Bandwidth Reporting Tool useful?

1) A five-minute average reporting tool is of little use when a customer calls and tells you they are not getting their expected bandwidth on a speed test or video. In these cases it is best to see an instant report while they are consuming the bandwidth, not one averaged into a 10-minute aggregate.

2) Suppose a customer with a fixed rate cap calls and reports that their VoIP is not working well. The easiest and quickest way to check their consumption during a VoIP call is to see it now. You don’t need a fancy protocol analyzer to tell them that their YouTube video is sucking up their full 1-megabit allocation. You just need to know that their line is clear and that they are consuming the full megabit at this instant, thus exonerating you (the ISP or support person) from getting drawn into the dregs of culpability.

Here are some links to other reporting tools.

IP Guard


Here is a snapshot of our screen that allows you to take an Instant Bandwidth Snapshot, showing the last second of utilization for an individual IP, Pool, or VLAN on your network.

APconnections Announces New API for Customizing Bandwidth User Quotas

APconnections is proud to announce the release of its NetEqualizer User-Quota API (NUQ API) programmer’s toolkit. This new toolkit will allow NetEqualizer users to generate custom configurations to better handle bandwidth quotas* as well as keep customers informed of their individual bandwidth usage.

The NetEqualizer User-Quota API (NUQ API) programmer’s toolkit features include:

  1. Tracking user data by IP and MAC address (MAC address tracking will be out in the second release)
  2. Specifying quotas and bandwidth limits by IP or a subnet block
  3. Monitoring real-time bandwidth utilization at any time
  4. Setting up a notification alarm when a user exceeds a bandwidth limit
  5. Utilizing an API programming interface

In addition to providing the option to create separate bandwidth quotas for individual customers and reduce a customer’s Internet pipe when they have reached their set limit, the toolkit lets customers themselves be notified when a limit is reached, and even gives them access to an interface for monitoring current monthly usage so they are not surprised when they reach their limit.

Overall, the NUQ API provides a quick and easy way to customize your business processes.

If you do not currently have the resources to use the NUQ API and customize it to fit your business, please contact us and we can arrange for one of our consulting partners to put together an estimate for you.  Or, if you just have a few questions, we’d be happy to put together a reasonable support contract (Support for the API programs is not included in our standard software support (NSS)).

*Bandwidth quotas are used by ISPs as a means to meter total bandwidth downloaded over a period of time. Although not always disclosed, most ISPs reserve the right to limit service for users that continually download data. Some providers use the threat of quotas as a deterrent to keep overall traffic on an Internet link down.

See how bandwidth hogs are being treated in Asia

NetEqualizer Programmers Toolkit for Developing Quota-Based Usage Rules (NUQ API)

Author’s Notes:

December 2012 update: As of Software Update 6.0, we have incorporated the Professional Quota API into our new 6.0 GUI, which is documented in our full User Guide. The “Professional Quota API User Guide” is now deprecated.

Due to the popularity of user quotas, we built a GUI to implement the quota commands. We recommend using the 6.0 GUI to configure user quotas; it incorporates all the commands listed below and does NOT require basic programming skills to use.

July 2012 update: As of Software Update 5.8, we now offer the Professional Quota API, which provides a GUI front-end to the NUQ-API.  Enclosed is a link to the Professional Quota API User Guide (PDF), which walks you through how to use the new GUI toolset.

Professional Quota API Guide

If you prefer to use the native commands (NUQ API) instead of the new GUI, or if you are using a Software Update prior to 5.8 (< 5.8), please follow the instructions below. If you are current on NSS, we recommend upgrading to 5.8 to use the new Professional Quota API GUI. If you are not current on NSS, you can call 303.997.1300 ext. 5 or email  to get current.



The following article serves as the programmer’s toolkit for the new NetEqualizer User-Quota API (NUQ API). Other industry terms for this process include bandwidth allotment and usage-based service. The NUQ API toolkit is available with NetEqualizer release 4.5 and above and a current software subscription license (NSS).

Note: NetEqualizer is a commercial-grade, Linux-based, in-line bandwidth shaper. If you are looking for something Windows-based, try these.


Prior to this release, we provided a GUI-based user limit tool, but it was discontinued with release 4.0; it did not have the flexibility for application development and was inadequate for customizations. The NetEqualizer User-Quota API (NUQ API) programmer’s toolkit is our replacement for that GUI tool. The motivation for developing the toolkit was to allow ISPs, satellite providers, and other Internet management companies to customize their business processes around user limits. The NUQ API is a quick and easy way to string together a program of actions in unique ways to meet your needs. However, it does require basic programming/Linux skills.

Terms of Use

APconnections, the maker of the NetEqualizer, is an OEM manufacturer of a bandwidth shaping appliance.  The toolkit below provides short examples of how to use the NUQ API to get you started developing a system to enforce quota bandwidth limits for your customers. You are free to copy/paste and use our sample programs in the programmer’s toolkit to your liking.  However, questions and support are not covered in the normal setup of the NetEqualizer product (NSS) and must be negotiated separately.  Please call 303.997.1300 x103 or email to set up a support contract for the NUQ API programmer’s toolkit.

Once you have upgraded to version 4.5 and have purchased a current NSS, please contact APconnections for installation instructions. Once installed, you can find the tools in the directory /art/quota.

Step 1: Start the Quota Server

In order to use the NUQ API programmer’s toolkit, you must have the main quota server running.  To start the quota server from the Linux command line, you can type:

# /art/quota/quota &

Once the quota main process is running, you can make requests using the command line API.

The following API commands are available:




Will cause the NetEqualizer to start tracking data for a block (subnet) of IP addresses in the range  through





Will remove a block of IP addresses from the quota system.

Note: You must use the exact same IP address and mask to remove a block as was used to create the block.




/art/quota/quota_set_alarm <down limit>  <up limit>

Will set an alarm when an IP address reaches a defined limit.

Alarm notifications will be reported in the log /tmp/quotalog.  See the sample programs below for usage.

Note: All IPs in the subnet range will be flagged when/if they reach the defined limit. The limits are in bytes transferred.





Will remove all alarms in effect on the specified subnet.

Note: The subnet specification must match exactly the format used when the alarm was created — same exact IP address and same exact mask.





Will reset the usage counters for the specified subnet range





Will show the current usage byte count for the specified IPs in the range on the console. The usage counters must first be initiated with the quota_create command.

Will also put usage statistics to the default log /tmp/quotalog



Will display all current rules in effect






/art/ADD_CONFIG HARD <ip> <down> <up> <subnet mask> <burst factor>

Used to set rate limits on IPs, which would be the normal response should a user exceed their quota.

Parameter definitions:

HARD                     Constant that specifies the type of operation.  In this case HARD indicates “hard limit”.

<ip>                        The IP address in format x.x.x.x

<down>                 Is the specified maximum download (inbound) transfer speed for this IP in BYTES per second (not Kbps).

<up>                       Is the specified maximum upload (outbound) transfer speed in BYTES per second.

<subnet mask>   Specifies the subnet mask for the IP address.  For example, 24 would be the same as x.x.x.x/24 notation. However, for this command the mask is specified as a separate parameter.

<burst factor> The last field in the command specifies the burst factor.  Set this field to 1 (no bursting) or to a multiple greater than 1 (bursting).  BURST FACTOR is multiplied by the <down> and <up> HARD LIMITs to arrive at the BURST LIMIT (the speed you are allowed to burst up to).  For example, a 2 Mbps <down> HARD LIMIT x 4 BURST FACTOR = 8 Mbps <down> BURST LIMIT.





Where x.x.x.x is the base IP used in the ADD_CONFIG HARD command. No other parameters are necessary when removing the rule.


To view the Log:



Various status messages are reported, along with ALARMs and usage statistics.


Examples and Sample sessions (assumes Linux shell and Perl knowledge)

From the command line of a running NetEqualizer, first start the quota server:

root@neteq:/art/quota# /art/quota/quota &
[1] 29653

Then issue a command to start tracking byte counts on the local subnet. For this example, I have some background network traffic running across the NetEqualizer.

root@neteq:/art/quota# ./quota_create

I have now told the quota server to start tracking bytes on the subnet 192.168.1.*

To see the current transferred byte count on an IP, you can use the status_ip command:

root@neteq:/art/quota# ./quota_status_ip
Begin status for
status for
start time = Fri Apr 2 21:23:13 UTC 2010
current date time = Fri Apr 2 21:55:28 UTC 2010
Total bytes down = 65033
Total bytes up = 0
status for
start time = Fri Apr 2 21:54:50 UTC 2010
current date time = Fri Apr 2 21:55:28 UTC 2010
Total bytes down = 3234
Total bytes up = 4695
End of status for

Yes, the output is a bit cryptic, but everything is there: for example, the start time and the current time since data collection began for each IP reporting in.

Now let’s say we wanted to do something useful when a byte count or quota was exceeded by a user.

First, we would set up an alarm.
root@neteq:/art/quota# ./quota_set_alarm 10000 10000
alarm block created for

We have now told the quota server to notify us when any IP in the range 192.168.1.* exceeds 10000 bytes up or 10000 bytes down.

Note: If an alarm is raised, the next alarm will occur at twice the original byte count. In the example above, we will get alarms at 10,000, 20,000, 30,000 bytes and so forth for all IPs in the range. Obviously, in a commercial operation, you would want your quotas set much higher, in the gigabyte range.

Now that we have alarms set, how do we know when they happen, and how can we take action?

Just for fun, we wrote a little Perl script to take action when an alarm occurs. First, here is the Perl script code, and then an example of how to use it.

root@neteq:/art# cat test
#!/usr/bin/perl
# Read quotalog lines from STDIN and react to ALARM messages.
while (1)
{
    $line = readline(*STDIN);
    last unless defined $line;
    print $line;
    chomp($line);
    @foo = split(" ", $line);
    if ($foo[0] eq "ALARM") {
        print "send an email to somebody important here\n";
    }
}

First, save the Perl script to a file. In our example, we save it to /art/test.

Next, we will monitor /tmp/quotalog for new alarms as they occur, and when we find one we will print the message “send an email to somebody important here”. To actually send an email, you would need to set up a mail server and call a command-line SMTP client with your message; we did not go that far here.

Here is how we use the test script to monitor the quotalog (where ALARM messages get reported):

root@neteq:/art# tail -f /tmp/quotalog | ./test

Log Reset
ALARM has exceeded up byte count of 160000
send an email to somebody important here
ALARM has exceeded down byte count of 190000
send an email to somebody important here
ALARM has exceeded up byte count of 170000
send an email to somebody important here
ALARM has exceeded down byte count of 200000
send an email to somebody important here
ALARM has exceeded up byte count of 180000
send an email to somebody important here
ALARM has exceeded down byte count of 210000
send an email to somebody important here
ALARM has exceeded up byte count of 190000
send an email to somebody important here
ALARM has exceeded down byte count of 220000
send an email to somebody important here
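If you did want the alarms to trigger real notifications, a minimal shell alternative to the Perl script might look like the sketch below. It assumes a working command-line mail client and relay, which we did not set up here, and the recipient address is a placeholder:

```shell
# Watch the quotalog and mail each ALARM line (mail setup assumed).
tail -f /tmp/quotalog | while read -r line; do
    case "$line" in
        ALARM*) echo "$line" | mail -s "NetEqualizer quota alarm" admin@example.com ;;
    esac
done
```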

Now, what if we just want to see what rules are in effect? Here is a sequence where we create a couple of rules and show how you can status them. Note the subtle difference between the commands quota_rules and status_ip: status_ip shows IPs that are part of a rule and are actively counting bytes, since a rule does not become active (show up in status) until bytes are actually transferred.

root@neteq:/art/quota# ./quota_create
root@neteq:/art/quota# ./quota_rules
Active Quotas —————
Active Alarms —————-
root@neteq:/art/quota# ./quota_set_alarm 20000 20000
alarm block created for
root@neteq:/art/quota# ./quota_rules
Active Quotas —————
Active Alarms —————-

That concludes the NetEqualizer User-Quota API (NUQ API) programmer’s toolkit for now. We will be adding more examples and features in the near future. Please feel free to e-mail us at  with feature requests and bug reports on this tool.

Note: You must have a current NSS to receive the toolkit software. It is not enabled with the default system.

Related Opinion Article on the effectiveness of Quotas
