The Illusion of Separation: My Malaysia Trip Report

By Zack Sanders

VP of Security – APconnections

Traveling is an illuminating experience. Whether you are going halfway across the country or halfway around the world, the adventures you have and the lessons you learn are priceless, and they help shape your outlook on life, humanity, and the planet we live on. Even with the ubiquity of the Internet, we are still so often constrained by our limited and biased information sources that we develop a world view that is inaccurate and disconnected. This disconnection is the root of many of our problems – be they political, environmental, or social. There is control in fear, and the powerful maintain their seats by reinforcing this separation to the masses. The realization that we are all together on this planet, and that we all largely want the same things, can only come from going out and seeing the world for yourself with as open a mind as possible.

One of the great things about NetEqualizer, and working for APconnections, is that, while we are a relatively small organization, we are truly international in our business. From the United States to the United Kingdom, and Argentina to Finland, NetEqualizers are helping nearly every vertical around the world optimize the bandwidth they have available. Because of this global reach, we sometimes get to travel to unique customer sites to conduct training or help install units. We recently acquired a new customer in Malaysia – a large university system called International Islamic University Malaysia, or IIUM. In addition to NetEqualizers for all of their campuses, two days of training were allotted in their order – one day each at two of their main locations (Kuala Lumpur and Kuantan). I jumped at the chance to travel to Asia (my first time to the continent) and promptly scheduled some dates with our primary contact at the University.

I spent the weeks prior to my departure in Spain – a nicely-timed, but unrelated, warmup trip to shake off the rust that had accrued since my last international travel experience five years ago. The part of the Malaysia trip I was dreading most was the hours I would log in seat 46E of the Boeing 777 that would take me to Kuala Lumpur on Singapore Airlines. Having the Spain trip occur first helped ease me into the longer flights.

F.C. Barcelona hosting Real Madrid at the Camp Nou.

My Malaysia itinerary looked like this:

Denver -> San Francisco (2.5 hours), Layover (overnight)

San Francisco -> Seoul (12 hours), Layover (1 hour)

Seoul -> Singapore (7 hours), Layover (6 hours)

Singapore -> Kuala Lumpur (1 hour)

I was only back in the United States from Spain for one week. It was a fast, but much needed, seven days of rest. The break went by quickly and I was back in the air again, this time heading west.

After 22 hours on the plane and 7 hours in various airports, I was ready to crash at my hotel in the City Centre when I touched down in KL. I don’t sleep too well on planes so I was pretty exhausted. The trouble was that it was 8am local time when I arrived and check-in wouldn’t be until 2:00pm. Fortunately, the fine folks at Mandarin Oriental accommodated me with a room and I slept the day away.

KL City Centre.

I padded my trip with the intention of having a few days before the training to get adjusted, but it didn’t take me as long as I thought, and I was able to do some sightseeing in and outside the city before the training.

My first stop was Batu Caves – a Hindu shrine located near the last stop of the KTM Komuter line in the Gombak District – which I later learned was near the location of my first training seminar. The shrine is set atop 272 stairs in a 400-million-year-old limestone cave. After the trek up, you are greeted by lightly dripping water and a horde of ambitious monkeys, in addition to the shrines within the cave walls.

Batu Caves entrance.

Batu Caves.

Petronas Towers.

This was the farthest I ventured from the city for sightseeing. The rest of the time I spent near the City Centre – combing through the markets of Chinatown and Little India, taking a tour of the Petronas Towers, and checking out the street food on Jalan Alor. Kuala Lumpur is a very Western city. The influence is everywhere despite the traditional Islamic culture. TGI Fridays, Chili’s, and Starbucks were the hotspots – at least in this touristy part of town. On my last night I found a unique spot at the top of the Traders Hotel called Skybar. It is a prime location because it looks directly at the Petronas Towers – which, at night especially, are gorgeous. The designers of the bar did a great job implementing sweeping windows and sunken sofas to enjoy the view. I stayed for a couple of hours and had a Singapore Sling – a drink I had heard of but never gotten to try.

Singapore Sling at the Skybar.

The city and sites were great; however, the primary purpose of the trip was not leisure – it was to share my knowledge of NetEqualizer with those who would be working with it at the University. To be honest, I wasn’t sure what to expect. This was definitely different from most locations I had been to in the past. A lot of thoughts went through my head about how I’d be received, whether the training would be valuable, and so on. It’s not that I was worried about anything in particular; I just didn’t know. My first stop was the main location in KL. It’s a beautifully manicured campus where the buildings all have aqua blue roofs. My cab driver did a great job helping me find the Information Technology Department building, and I quickly met up with my contact and got set up in the Learning Lab.

This session had nine participants – ranging from IT head honchos to network engineers. The specific experience with the NetEqualizer also ranged from well-versed to none at all. I catered the training such that it would be useful to all participants – we went over the basics but also spent time on more advanced topics and configurations. All in all, the training lasted six hours or so, including an hour break for lunch that I took with some of the attendees. It was great talking with each of them – regardless of whether the subject was bandwidth congestion or the series finale episode of Breaking Bad. They were great hosts and I look forward to keeping in touch with them.

Training at IIUM.

I was pretty tired from the day by the time I arrived back at the hotel. I ate and got to bed early because I had to leave at 6:00am for my morning flight across the peninsula to Kuantan – a short, 35-minute jaunt eastward – to do it all over again at that campus. Kuantan is much smaller than KL, but it is still a large city. I didn’t get to see much of it, however, because I took a cab directly from the airport to the campus and got started. There were only four participants this time – but the training went just as well. I had similar experiences talking with this group of guys, and they, too, were great hosts. I returned to the airport in the evening and took a flight back to KL. The flight is so short that it’s comical. It goes like this:

Taxi to the runway -> “Flight attendants prepare for takeoff” -> “You may now use your electronic devices” -> 5 minutes goes by -> “Flight attendants prepare for landing – please turn off your electronic devices” -> Land -> Taxi to terminal

The airport in Kuantan at sunset.

I had one more day to check out Kuala Lumpur and then it was back to the airport for another 22 hours of flying. At this point though, I felt like a flying professional. The time didn’t bother me and the frequent meals, Sons of Anarchy episodes, and extra leg room helped break it up nicely. I took a few days in San Francisco to recover and visit friends before ultimately heading back to Boulder.

It was a whirlwind of a month. I flew almost 33,000 miles in 33 days and touched down in eight countries on three continents. Looking back, it was a great experience – both personally and professionally. I think the time I spent in these places, and the things I did, will pay invaluable dividends going forward.

If your organization is interested in NetEqualizer training – regardless of whether you are a new or existing customer – let us know by sending us an email!

View of KL Tower from the top of the Petronas Towers.

On the Trail of Network Latency Over a Satellite Link

By Art Reisman – CTO –

Art Reisman CTO

This morning, just for fun, I decided to isolate the latency on a route from my home office, to a computer located at a remote hunting lodge. The hunting lodge is serviced by a Wild Blue satellite link.

What causes latency?

The factors that influence network latency are:

1) Wire transport speed.

Not to be confused with the amount of data a wire can carry in a second; I am referring to the raw speed at which data travels on a wire, that is, the traversal time from end to end once the data is on the wire. For the most part, we can assume data travels at the speed of light: 186,000 miles per second.

2) Distance.

How far is the data traveling? Even though data travels at the speed of light, a hop across the United States will cost you about 4 milliseconds, and a hop up to a geostationary satellite (round trip of about 44,000 miles) adds a minimum of 300 milliseconds. I have worked through an example of how you can trace latency across a satellite link below.
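The satellite figure can be sanity-checked with a quick command-line calculation, using the same numbers as above (a 44,000-mile round trip at 186,000 miles per second):

```shell
# Pure propagation delay for the satellite round trip, in milliseconds:
# distance / speed-of-light, converted to ms.
awk 'BEGIN { printf "%.0f ms\n", 44000 / 186000 * 1000 }'
```

That works out to roughly 237 ms of raw propagation delay; switching and processing on each end of the link push the observed minimum up toward the 300 milliseconds quoted above.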

3) Number of hops.

How many switching points are there between source and destination? Each hop requires the data to move from one wire to another, and this requires a small amount of waiting to get on the next wire. Each hop can add an additional 2 or 3 milliseconds.

4) Overhead processing on a hop.

This can also add up. At the end points, people often inspect the data on their firewall, usually for security reasons. Depending on the number of features enabled and the processing power of the firewall, this can add a wide range of latency: 1 or 2 milliseconds is normal, but that can blow up to 50 milliseconds, or in some cases even more, when you turn on too many features.

How much latency is too much?

It really depends on what you are doing. If it is a one-way conversation, like watching a Netflix movie, you are probably not going to care if the data arrives half a second after it was sent. But if you are talking interactively on a Skype call, you will find yourself talking over the other person quite often – especially at the beginning of a call.

Tracing Latency across a satellite link.

Note: I am doing this all from the command line on my Mac.

Step one: I have the IP address of a computer that I know is only accessible by satellite. So first I run a command called traceroute to find all the hops along the route.

localhost:~ root# traceroute

When I run this command I get a list of every hop along the route. I also get millisecond times for each hop from traceroute, but I am not sure I trust them, so I am not showing them.

From my Mac command line I do:

traceroute to (
1 ( This is my local router or gateway the first hop
2 ( – This is the Comcast router, the first router upstream from my house, most likely at the local Comcast NOC.
3 ( – We then go through a bunch of Comcast links
4 (
5 (
6 (
7 (
8 ( – and then we leave the Comcast network of routers here
9 ( – and finally to some other back bone router
10 (
11  * * *
13 ( This IP is on the other side of a Satellite link)

Now here is the cool part: I am going to ping the last IP address before the route goes up to the satellite, and then the hop after that, to see what the latency over the satellite hop is.

Note: the physical satellite does not have an IP; a router here on Earth transmits data up and over the satellite link.

localhost:~ root# ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=56 time=42.476 ms
64 bytes from icmp_seq=1 ttl=56 time=55.878 ms
64 bytes from icmp_seq=2 ttl=56 time=42.382 ms

About 50 milliseconds.

And the last hop to the remote computer.

localhost:~ root# ping
PING ( 56 data bytes
Request timeout for icmp_seq 0
64 bytes from icmp_seq=0 ttl=109 time=1551.310 ms
64 bytes from icmp_seq=1 ttl=109 time=1574.177 ms
64 bytes from icmp_seq=2 ttl=109 time=1494.628 ms

Wow, that hop up over the satellite link added about 1,500 milliseconds to my ping time!

That is a little more latency than I would have expected, but in fairness to Wild Blue, they do a good job at a reasonable price. The funny thing is that streaming audio works fine over the satellite link because it is not latency sensitive. A Skype call, however, might be a bit more painful: 300 milliseconds is about the tolerance level where users start to notice latency on a phone call, 500 is manageable, and above 1,000 you start to need a little planning and pausing before and after you speak.
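If you want to script this comparison rather than eyeball the ping output, the average RTT can be pulled out of ping’s summary line with awk. The summary line below is canned from the numbers above; in practice you would pipe `ping -c 3 <host>` into the same filter (the exact summary format varies slightly between ping implementations):

```shell
# Extract the "avg" field from ping's round-trip summary line
# ("round-trip min/avg/max/stddev = ..." on macOS/BSD ping).
echo "round-trip min/avg/max/stddev = 42.382/46.912/55.878/6.340 ms" |
  awk -F' = ' '/round-trip/ { split($2, t, "/"); print "avg RTT:", t[2], "ms" }'
```

Running the same filter against both hops and subtracting the two averages gives you the latency added by the satellite segment alone.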

References: A non-technical guide to fixing TCP/IP problems

NetEqualizer YouTube Caching FAQ

Editor’s Note: This week, we announced the availability of the NetEqualizer YouTube caching feature we first introduced in October. Over the past month, interest and inquiries have been high, so we’ve created the following Q&A to address many of the common questions we’ve received.

This may seem like a silly question, but why is caching advantageous?

The bottleneck most networks deal with is that they have a limited pipe leading out to the larger public Internet cloud. When a user visits a website or accesses content online, data must be transferred to and from the user through this limited pipe, which is usually meant for only average loads (increasing its size can be quite expensive). During busy times, when multiple users are accessing material from the Internet at once, the pipe can become clogged and service slowed. However, if an ISP can keep a cached copy of certain bandwidth-intensive content, such as a popular video, on a server in their local office, this bottleneck can be avoided. The pipe remains open and unclogged and customers are assured their video will always play faster and more smoothly than if they had to go out and re-fetch a copy from the YouTube server on the Internet.

What is the ROI benefit of caching YouTube? How much bandwidth can a provider conserve?

At the time of this writing, we are still in the early stages of our data collection on this subject. What we do know is that YouTube can account for up to 15 percent of Internet traffic. We expect to be able to cache at least the 300 most popular YouTube videos with this initial release, and perhaps more when we release the mass-storage version of our caching server in the future. Considering this, realistic estimates put the savings in bandwidth overhead somewhere between 5 and 15 percent. But these are only the immediate benefits in terms of bandwidth savings. The long-term customer-satisfaction benefit is that many more YouTube videos will play without interruption on a crowded network (busy hour) than before. Therefore, ROI shouldn’t be measured in bandwidth savings alone.

Why is it just the YouTube caching feature? Why not cache everything?

There are a couple of good reasons not to cache everything.

First, there are quite a few Web pages that are dynamically generated or change quite often, and a caching mechanism relies on content being relatively static. This allows it to grab content from the Internet and store it locally for future use without the content changing. As mentioned, when users/clients visit the specific Web pages that have been stored, they are directed to the locally saved content rather than over the Internet and to the original website. Therefore, caching obviously wouldn’t be possible for pages that are constantly changing. Caching dynamic content can cause all kinds of issues — especially with merchant and secure sites where each page is custom-generated for the client.

Second, a caching server can realistically only store a subset of data that it accesses. Yes, data storage is getting less expensive every year, but a local store is finite in size and will eventually fill up. So, when making a decision on what to cache and what not to cache, YouTube, being both popular and bandwidth intensive, was the logical choice.

Will the NetEqualizer ever cache content beyond YouTube? Such as other videos?

At this time, the NetEqualizer is caching files that traverse port 80 and correspond to video files from 30 seconds to 10 minutes. It is possible that some other port 80 file will fall into this category, but the bulk of it will be YouTube.

Is there anything else about YouTube that makes it a good candidate to cache?

Yes, YouTube content meets the level of stability discussed above that’s needed for effective caching. Once posted, most YouTube videos are not edited or changed. Hence, the copy in the local cache will stay current and be good indefinitely.

When I download large distributions, the download utility often gives me a choice of mirrored sites around the world. Is this the same as caching?

By definition this is also caching, but the difference is that there is a manual step to choosing one of these distribution sites. Some of the large-content open source distributions have been delivered this way for many years. The caching feature on the NetEqualizer is what is called “transparent,” meaning users do not have to do anything to get a cached copy.

If users are getting a file from cache without their knowledge, could this be construed as a violation of net neutrality?

We addressed the tenets of net neutrality in another article, and to our knowledge caching has not been controversial in any way.

What about copyright violations? Is it legal to store someone’s content on an intermediate server?

This is a very complex question and anything is possible, but with respect to intent and the NetEqualizer caching mechanism, the Internet provider is only caching what is already freely available. There is no masking or redirection of the actual YouTube administrative wrappings that a user sees (this would be where advertising and promotions appear). Hence, there is no loss of potential revenue for YouTube. In fact, it would be considered more of a benefit for them, as it helps more people use their service where connections might otherwise be too slow.

Final Editor’s Note: While we’re confident this Q&A will answer many of the questions that arise about the NetEqualizer YouTube caching feature, please don’t hesitate to contact us with further inquiries. We can be reached at 1-888-287-2492.

NetEqualizer Tuning Guide for Small Networks with a Small Number of Infrequent Users

If you are working with a network that has a small number of infrequent users on a small pipe (10Mbps or less), here are some tuning recommendations that will help you optimize your network use.  These recommendations came out of a discussion with one of our customers.  Their environment is a 40-person company on a 10Mbps pipe (a normal number of users for a small network) that converts at night to a network with only one user.

The following recommendations will help alleviate the situation where, on a small network with a small number of infrequent users, a user gets knocked down to less than 1Mbps by a PENALTY even though there is more than enough bandwidth to sustain their download at a higher rate.

Summary of Recommendations (listed in priority order):

1) (best option) Put a hard limit somewhere below RATIO (typically 85%) on each IP address on the network.  So, for a 10Mbps network with RATIO = 85%, your hard limits should be below 8.5Mbps for each IP address.

2) Put a “day configuration” and a “night configuration” in place.  The process to do this is described in the Changing Configurations by Time of Day section of our Advanced Tips & Tricks guide.

3) Change the PENALTY unit sensitivity, to make the penalty less restrictive.

4) Raise the value of HOGMIN from 12,000 bytes/second anywhere up to 128,000 bytes/second.

The philosophy behind each is described in detail in the following sections.

1) Adding Hard Limits on each IP address

We recommend putting Hard Limits on each IP address.  Hard Limits will keep any one user from consuming the entire network bandwidth.  If you prefer not to have Hard Limits on all IP addresses, you can set the Hard Limit only for the infrequent users.

For example, on a 10Mbps network, you can put a Hard Limit of 4-5Mbps on every user, which will prevent any one user from tripping equalizing, but will allow all of them to sustain a 5 Mbps download on your lightly loaded network.
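As a sketch of what that looks like in practice: the ADD_CONFIG HARD command (documented in the NUQ API section later in this document) takes its rates in bytes per second, so a 4Mbps cap works out to 500,000 bytes/sec. The IP address below is hypothetical, and the command is only echoed here rather than run on a live unit:

```shell
# Convert 4 Mbps to the BYTES/sec that ADD_CONFIG HARD expects,
# then print the command that would apply it to one host (/32, no bursting).
mbps=4
bytes_per_sec=$(( mbps * 1000000 / 8 ))   # 4,000,000 bits/sec -> 500,000 bytes/sec
echo "/art/ADD_CONFIG HARD 192.168.1.50 $bytes_per_sec $bytes_per_sec 32 1"
```

Repeating this for each user IP (or scripting it over a list of addresses) gives every user the same hard ceiling.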

If a user starts a large download, it will consume network bandwidth up until the network reaches a point of congestion (at 85% with RATIO set to 85).   Once that point is reached, equalizing will kick in and start penalizing the traffic.  In cases where the network has a normal number of users on it, this works very well to provide fairness across the available bandwidth.

When the one network user spikes the entire network above 85% congested, a penalty kicks in. The result of the penalty is that the file download gets throttled back to 500Kbps or less – almost instantly. Once the penalty is removed, the file download will again consume all the network bandwidth until another penalty is applied. This cycle repeats itself every few seconds until the download completes.

On a system with more than one user, and typically one that is very busy with hundreds or thousands of users, the pipe is usually near capacity, so the penalties being applied are not as dramatic, and they ensure that all other users do not experience “lockup”.

2) Change your Configuration by Time of Day

You can also change your NetEqualizer to use two separate configuration files, so that you can apply different rules at various times of day – for example, rules for “off-hours” (typically nighttime) versus another set for “on-hours” (typically daytime).    This would be beneficial if you want to open up the amount of bandwidth available per user at night.  For example, you could set your off-hours hard limits to 8 Mbps, and lower your on-hours hard limits to 4Mbps.

Note that it is still important to keep your hard limits below RATIO, so that you do not trigger equalizing based on one data flow.
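One way to switch between the two files on a schedule is ordinary cron. The sketch below is illustrative only: the config file paths and the reload step are placeholders, since the actual procedure is the one described in the Advanced Tips & Tricks guide.

```shell
# Hypothetical crontab entries: day config at 8:00, night config at 20:00.
# Replace the paths and the reload command with the ones from the guide.
0 8  * * * cp /art/config.day   /art/config && /art/reload_config
0 20 * * * cp /art/config.night /art/config && /art/reload_config
```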

3) Change the PENALTY unit sensitivity, to make the penalty less restrictive

Networks much larger than 45 megabits may require a PENALTY unit resolution smaller than 100ths of a second. In the NetEqualizer Web GUI, the smallest penalty that can be applied to an IP packet is 1/100 of a second. If you are finding that a default PENALTY of 1 is putting too much latency on your connections, then you can adjust the PENALTY unit to 1/1000 of a second with the following command:

From the Web GUI Main Menu, Click on ->Miscellaneous->Run a Command

Type in: /bridge/bridge-utils/brctl/brctl rembrain my 99999

Note: For this change to persist you will need to put it in the /art/autostart file.

4)  Raise the value of HOGMIN, anywhere up to 128,000 bytes/sec

HOGMIN is used to determine what traffic should be penalized on a congested network.  One way to keep traffic from being penalized, then, is to raise the value of HOGMIN (the default is 12,000 bytes per second).  For a lightly-loaded network you could consider HOGMIN = 50,000 bytes/sec, and might even go as high as 128,000 bytes/sec.

Taken as a whole, this is how our four recommendations would work in the example we have described…

Hmm… I have a 10 megabit pipe and I have 40 users during the day and 1 user at night.  No user should be able to take the whole pipe all day, but I want my 1 user to get more bandwidth at night.

  • I’ll create two configuration files: one with 4Mbps hard limits on all my users during the day (4 megabits is relatively fast service for the average user, and nobody would complain) and another with 8Mbps hard limits for my night user(s).
  • In addition, I would like the penalty to be less harsh at night, so I’ll change the PENALTY=1/1000 of a second in my night configuration file.
  • I also would like HOGMIN to be raised at night.  I will set it to 50,000 in my night configuration file.

During the day, when every once in a while 2 or 3 users download at once, it will no longer kill the entire pipe.  And during the night, my user can download larger files without being restricted.  So, with 4Mbps/8Mbps hard limits plus equalizing, I get the best of both worlds: pretty fast downloads when the pipe is empty, and protection against gridlock at peak times.  Now there is nothing anybody can do to crash the system at random times.

I hope you find these tuning suggestions helpful for your situation.  If you would like additional help, please contact our Support Team at 303.997.1300 x102 to discuss tuning for your specific configuration.

Top Five Causes For Disruption Of Internet Service

Editor’s Note: We took a poll of our customer base, consisting of thousands of NetEqualizer users. What follows are the five most common causes of disruption of Internet connectivity.

1) Congestion: Congestion is the most common cause for short Internet outages.  In general, a congestion outage is characterized by 10 seconds of uptime followed by approximately 30 seconds of chaos. During the chaotic episode, the circuit gridlocks to the point where you can’t load a Web page. Just when you think the problem has cleared, it comes back.

The cyclical nature of a congestion outage is due to the way browsers and humans retry on failed connections. During busy times usage surges and then backs off, but the relief is temporary. Congestion-related outages are especially acute at public libraries, hotels, residence halls and educational institutions. Congestion is also very common on wireless networks. (Have you ever tried to send a text message from a crowded stadium? It’s usually impossible.)

Fortunately for network administrators, this is one cause of disruption that can be managed and prevented (as you’ll see below, others aren’t that easy to control). So what’s the solution? The best option for preventing congestion is to use some form of bandwidth control. The next best option is to increase the size of your bandwidth link. However, without some form of bandwidth control, bandwidth increases are often absorbed quickly and congestion returns. For more information on speeding up Internet service using a bandwidth controller, check out this article.

2) Failed Link to Provider: If you have a business-critical Internet link, it’s a good idea to source service from multiple providers. Between construction work, thunderstorms, wind, and power problems, anything can happen to your link at almost any time. These types of outages are much more likely than internal equipment failures.

3) Service Provider Internet Speed Fluctuates: Not all DS3 lines are the same. We have seen many occasions where customers are just not getting their contracted rate 24/7 as promised.

4) Equipment Failure: Power surges are the most common cause of fried routers and switches. Therefore, make sure everything has surge and UPS protection. After power surges, the next most common failure is lockup from feature-overloaded equipment. Considering this, keep the configurations on your routers and firewalls as simple as possible, or be ready to upgrade to equipment with newer, faster processors.

Related Article: Buying Guide for Surge and UPS Protection Devices

5) Operator Error: Duplicating IP addresses, plugging wires into the wrong jack, and setting bad firewall rules are the leading operator errors reported.

If you commonly encounter issues that aren’t discussed here, feel free to fill us in in the comments section. While these were the most common causes of disruptions for our customers, plenty of other problems can exist.

NetEqualizer Programmers Toolkit for Developing Quota-Based Usage Rules (NUQ API)

Author’s Notes:

December 2012 update: As of Software Update 6.0, we have incorporated the Professional Quota API into our new 6.0 GUI, which is documented in our full User Guide. The “Professional Quota API User Guide” is now deprecated.

Due to the popularity of user quotas, we built a GUI to implement the quota commands.  We recommend using the 6.0 GUI to configure user quotas, as it incorporates all the commands listed below and does NOT require basic programming skills to use.

July 2012 update: As of Software Update 5.8, we now offer the Professional Quota API, which provides a GUI front-end to the NUQ-API.  Enclosed is a link to the Professional Quota API User Guide (PDF), which walks you through how to use the new GUI toolset.

Professional Quota API Guide

If you prefer to use the native commands (NUQ API) instead of the new GUI, OR if you are using a Software Update prior to 5.8 (< 5.8), please follow the instructions below.  If you are current on NSS, we recommend upgrading to 5.8 to use the new Professional Quota API GUI.  If you are not current on NSS, you can call 303.997.1300 ext. 5 to get current.



The following article serves as the programmer’s toolkit for the new NetEqualizer User-Quota API (NUQ API). Other industry terms for this process include bandwidth allotment and usage-based service.  The NUQ API toolkit is available with NetEqualizer release 4.5 and above and a current software subscription license (NSS).

Note: NetEqualizer is a commercial-grade, Linux-based, in-line bandwidth shaper.  If you are looking for something Windows-based, try these.


Prior to this release, we provided a GUI-based user limit tool, but it was discontinued with release 4.0.  The GUI tool did not have the flexibility for application development, and was inadequate for customizations. The NetEqualizer User-Quota API (NUQ API) programmer’s toolkit is our replacement for the GUI tool. The motivation for developing the toolkit was to allow ISPs, satellite providers, and other Internet management companies to customize their business processes around user limits. The NUQ API is a quick and easy way to string together a program of actions in unique ways to meet your needs.  However, it does require basic programming/Linux skills.

Terms of Use

APconnections, the maker of the NetEqualizer, is an OEM manufacturer of a bandwidth shaping appliance.  The toolkit below provides short examples of how to use the NUQ API to get you started developing a system to enforce quota bandwidth limits for your customers. You are free to copy/paste and use our sample programs in the programmer’s toolkit to your liking.  However, questions and support are not covered in the normal setup of the NetEqualizer product (NSS) and must be negotiated separately.  Please call 303.997.1300 x103 to set up a support contract for the NUQ API programmer’s toolkit.

Once you have upgraded to version 4.5 and have purchased a current NSS, please contact APconnections for installation instructions. Once installed, you can find the tools in the directory /art/quota.

Step 1: Start the Quota Server

In order to use the NUQ API programmer’s toolkit, you must have the main quota server running.  To start the quota server from the Linux command line, you can type:

# /art/quota/quota &

Once the quota main process is running, you can make requests using the command line API.

The following API commands are available:




Will cause the NetEqualizer to start tracking data for a block (subnet) of IP addresses in the range  through





Will remove a block of IP addresses from the quota system.

Note: You must use the exact same IP address and mask to remove a block as was used to create the block.




/art/quota/quota_set_alarm <down limit>  <up limit>

Will set an alarm that fires when an IP address reaches a defined limit. The limits are in bytes transferred, and every IP in the subnet range is flagged when/if it reaches the defined limit. Alarm notifications are reported in the log /tmp/quotalog. See the sample programs below for usage.





Will remove all alarms in effect on the specified subnet.

Note: The subnet specification must match exactly the format used when the alarm was created — same exact IP address and same exact mask.





Will reset the usage counters for the specified subnet range.





Will print the current usage byte count for the specified IPs in the range to the console. The usage counters must first be initiated with the quota_create command.

Will also write usage statistics to the default log /tmp/quotalog.



Will display all current rules in effect.






/art/ADD_CONFIG HARD <ip> <down> <up> <subnet mask> <burst factor>

Used to set rate limits on IPs; this would be the normal response should a user exceed their quota.

Parameter definitions:

HARD                     Constant that specifies the type of operation. In this case, HARD indicates "hard limit".

<ip>                        The IP address, in the format x.x.x.x.

<down>                 The maximum download (inbound) transfer speed for this IP, in BYTES per second (not Kbps).

<up>                       The maximum upload (outbound) transfer speed, in BYTES per second.

<subnet mask>   The subnet mask for the IP address. For example, 24 would be the same as x.x.x.x/24 notation; for this command, however, the mask is specified as a separate parameter.

<burst factor> The last field in the command specifies the burst factor. Set this field to 1 (no bursting) or to a multiple greater than 1 (bursting). The BURST FACTOR is multiplied by the <down> and <up> HARD LIMITs to arrive at the BURST LIMIT (the speed you wish to burst up to). For example, a 2 Mbps <down> HARD LIMIT x a BURST FACTOR of 4 = an 8 Mbps <down> BURST LIMIT.
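To make the burst arithmetic concrete, here is a small Python sketch (the helper name is ours and purely illustrative; remember that ADD_CONFIG HARD takes its limits in bytes per second, not Mbps, so the Mbps value is converted first):

```python
def burst_limit_bytes(hard_limit_mbps, burst_factor):
    """Convert a HARD LIMIT given in Mbps to BYTES per second, then apply the burst factor."""
    hard_limit_bytes = int(hard_limit_mbps * 1_000_000 / 8)  # 1 Mbps = 125,000 bytes/sec
    return int(hard_limit_bytes * burst_factor)

# 2 Mbps HARD LIMIT x 4 BURST FACTOR = 8 Mbps BURST LIMIT = 1,000,000 bytes/sec
print(burst_limit_bytes(2, 4))
```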





Where x.x.x.x is the base IP used in the ADD_CONFIG HARD command. No other parameters are necessary when removing the rule.


To view the Log:



Various status messages are reported there, along with ALARMs and usage statistics.


Examples and Sample sessions (assumes Linux shell and Perl knowledge)

From the command line of a running NetEqualizer, first start the quota server:

root@neteq:/art/quota# /art/quota/quota &
[1] 29653

Then I issue a command to start tracking byte counts on the local subnet. For this example, I have some background network traffic running across the NetEqualizer.

root@neteq:/art/quota# ./quota_create

I have now told the quota server to start tracking bytes on the subnet 192.168.1.*

To see the current transferred byte count for an IP, you can use the status_ip command:

root@neteq:/art/quota# ./quota_status_ip
Begin status for
status for
start time = Fri Apr 2 21:23:13 UTC 2010
current date time = Fri Apr 2 21:55:28 UTC 2010
Total bytes down = 65033
Total bytes up = 0
status for
start time = Fri Apr 2 21:54:50 UTC 2010
current date time = Fri Apr 2 21:55:28 UTC 2010
Total bytes down = 3234
Total bytes up = 4695
End of status for

Yes, the output is a bit cryptic, but everything is there: for example, the start time and the current time of data collection for each IP reporting in.

Now let’s say we wanted to do something useful when a byte count or quota was exceeded by a user.

First, we would set up an alarm.
root@neteq:/art/quota# ./quota_set_alarm 10000 10000
alarm block created for

We have now told the quota server to notify us when any IP in the range 192.168.1.* exceeds 10000 bytes up or 10000 bytes down.

Note: If an alarm is raised, the next alarm will occur at twice the original byte count. In the example above, we will get alarms at 10,000, 20,000, 30,000 and so forth for all IPs in the range. Obviously, in a commercial operation, you would want your quotas set much higher in the gigabyte range.
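The resulting progression of alarm points can be sketched as follows (the helper is ours, purely illustrative, and not part of the toolkit):

```python
def alarm_points(base_bytes, how_many):
    """First alarm at base_bytes, then one more every additional base_bytes transferred."""
    return [base_bytes * n for n in range(1, how_many + 1)]

print(alarm_points(10_000, 4))  # [10000, 20000, 30000, 40000]
```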

Now that we have alarms set, how do we know when they happen, and how can we take action?

Just for fun, we wrote a little Perl script to take action when an alarm occurs. First, here's the Perl script code, and then an example of how to use it.

root@neteq:/art# cat test
#!/usr/bin/perl
# Read the quotalog from stdin and react to ALARM lines.
while (1) {
    $line = readline(*STDIN);
    print $line;
    chomp($line);
    @foo = split(" ", $line);
    if (defined $foo[0] && $foo[0] eq "ALARM") {
        print "send an email to somebody important here \n";
    }
}

First, save the perl script off to a file. In our example, we save it to a file /art/test

Next, we will monitor /tmp/quotalog for new alarms as they occur, and when we find one we will print the message "send an email to somebody important here". To actually send an email, you would need to set up an email server and call a command-line SMTP client with your message; we did not go that far here.
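For readers who would rather script this in Python, here is a hedged equivalent sketch. The ALARM line format is taken from the quotalog excerpts shown in this article; the function names are ours, and, like the Perl example, the sketch stops short of actually sending mail:

```python
import re

# Matches lines such as: "ALARM ... has exceeded up byte count of 160000"
ALARM_RE = re.compile(r"ALARM\b.*exceeded (up|down) byte count of (\d+)")

def parse_alarm(line):
    """Return (direction, byte_count) for an ALARM line from /tmp/quotalog, else None."""
    m = ALARM_RE.search(line)
    return (m.group(1), int(m.group(2))) if m else None

def monitor(stream):
    """Watch a log stream (e.g. `tail -f /tmp/quotalog` piped to stdin) for alarms."""
    for line in stream:
        hit = parse_alarm(line)
        if hit:
            direction, count = hit
            print("send an email to somebody important here (%s count %d)"
                  % (direction, count))
            # A real notification would go through smtplib or a command-line
            # mailer here; we stop short of sending, as the Perl example does.
```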

Here is how we use the test script to monitor the quotalog (where ALARM messages get reported):

root@neteq:/art# tail -f /tmp/quotalog | ./test

Log Reset
ALARM has exceeded up byte count of 160000
send an email to somebody important here
ALARM has exceeded down byte count of 190000
send an email to somebody important here
ALARM has exceeded up byte count of 170000
send an email to somebody important here
ALARM has exceeded down byte count of 200000
send an email to somebody important here
ALARM has exceeded up byte count of 180000
send an email to somebody important here
ALARM has exceeded down byte count of 210000
send an email to somebody important here
ALARM has exceeded up byte count of 190000
send an email to somebody important here
ALARM has exceeded down byte count of 220000
send an email to somebody important here

Now, what if we just want to see what rules are in effect? Here is a sequence where we create a couple of rules and show how you can status them. Note the subtle difference between the quota_rules and status_ip commands: status_ip shows IPs that are part of a rule and are actively counting bytes, since a rule does not become active (show up in status) until bytes have actually been transferred.

root@neteq:/art/quota# ./quota_create
root@neteq:/art/quota# ./quota_rules
Active Quotas —————
Active Alarms —————-
root@neteq:/art/quota# ./quota_set_alarm 20000 20000
alarm block created for
root@neteq:/art/quota# ./quota_rules
Active Quotas —————
Active Alarms —————-

That concludes the NetEqualizer User-Quota API (NUQ API) programmer's toolkit for now. We will be adding more examples and features in the near future. Please feel free to e-mail us at with feature requests and bug reports on this tool.

Note: You must have a current NSS to receive the toolkit software. It is not enabled with the default system.

Related Opinion Article on the effectiveness of Quotas

NetEqualizer: Advanced Tuning

Bits to Bytes Conversion Cheat Sheet

For those of you that want a simple way to convert from megabits/sec to bytes/sec, here is an easy way to do it.  Open one of the documents linked below, follow the instructions, and enter your pipe size.  It will do all the conversions for you, so that you have bytes/sec, which is what you enter in the NetEqualizer GUI “trunk_up” and “trunk_down” fields.
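If you would rather skip the spreadsheet, the conversion is simple arithmetic: 1 megabit/sec is 1,000,000 bits/sec, and 8 bits make a byte, so multiply Mbps by 125,000. A minimal sketch (the function name is ours):

```python
def mbps_to_bytes_per_sec(mbps):
    """Convert megabits/sec to bytes/sec for the trunk_up / trunk_down fields."""
    return int(mbps * 1_000_000 / 8)

print(mbps_to_bytes_per_sec(10))  # a 10 Mbps pipe is 1,250,000 bytes/sec
```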

bitstobytes conversion cheat sheet (MS Word): contains an embedded spreadsheet, saved as a ".doc" file so that it can be opened with older versions of MS Word.
bitstobytes conversion cheat sheet (OpenDocument): contains an embedded spreadsheet, saved as an ".odt" file.

URL-Based Shaping With Your NetEqualizer: A How To Guide

What is URL-based Shaping?

URL shaping is the ability to specify the URL, normally a popular site such as YouTube or NetFlix, and set up a fixed-rate limit for traffic to that specific URL.

Is URL shaping just a matter of using a reverse lookup on a URL to get the IP address and plugging it into a bandwidth controller?

In the simplest case, yes; but for sites such as YouTube, the base URL will have many associated IP addresses used for downloading the actual videos. Shaping exclusively on the base URL would not be effective.

Is URL shaping the same thing as application shaping?

No. Although similar in some ways, there are significant differences:

  1. URL shaping is essentially the same as shaping by a known IP address. The trick with URL shaping is to discover IP addresses associated with a well-known URL.
  2. Application shaping uses Deep Packet Inspection (DPI). URL shaping does not. It does not inspect or open customer data.

How to set up URL-based shaping on your NetEqualizer

The following specifications are necessary:

  1. NetEqualizer version 4.0 or later
  2. A separate Linux-based client, set up so that it accesses the Internet through the NetEqualizer
  3. The Perl source code for client URL shaping (listed below) loaded onto a client
  4. You will also need to set up your client so that it has permission to run RSH (remote shell) commands on your NetEqualizer without requiring a password. If you do not do this, your Perl discovery routine will hang. Notes for setting up the RSH permissions are outlined below.

How it works…

Save the Perl source code into a .pl file; we suggest

Make sure to make this file executable:

chmod 777

Run the Perl script from the command line with the following syntax, where will be replaced with the specific URL you wish to shape:

./ pool# downlimit uplimit x.x.x.x y.y.y.y

  • Pool# is an unused bandwidth pool on your NetEqualizer unit
  • Downlimit is the incoming rate for the URL, in bytes per second
  • Uplimit is the outgoing rate to the Internet for the URL, in bytes per second
  • x.x.x.x is the IP address of your NetEqualizer
  • y.y.y.y is the IP address of the client

The script will attempt an http request using It will then continue to do recursive Web accesses on subsequent links, starting from the main domain URL. It will stop when there are no more links to follow or when 150 pages have been accessed. Any foreign IPs found during the access session will be put into the given bandwidth pool as Class B masks, and will be shaped immediately and permanently until you remove the pool.
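The "Class B mask" step amounts to reducing each discovered foreign IP to its /16 network before it is added to the pool. Here is an illustrative Python sketch of that reduction only (the helper name and example IP are ours; the actual discovery and shaping are done by the Perl script and the NetEqualizer itself):

```python
def to_class_b(ip):
    """Reduce a dotted-quad IPv4 address to its /16 ('Class B') network."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return "%s.%s.0.0/16" % (octets[0], octets[1])

print(to_class_b("208.65.153.238"))  # -> 208.65.0.0/16
```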


In our beta testing, the script did well in finding YouTube subnets used for videos.  We did not confirm whether the main NetFlix home page URL shares IP subnets with their download sites.

Notes for setting up RSH

Begin Notes

These notes assume you are either logged in on the Client as root, or you use sudo -i and are acting as root. is used in the example as the Server (NetEq) IP.

On your Client machine, do:

  • ssh-keygen -t rsa -b 4096
  • ssh-copy-id -i ~/.ssh/ root@
  • nano -w /etc/ssh/ssh_config

Make sure that these are as follows:

  • RhostsRSAAuthentication yes
  • RSAAuthentication yes
  • EnableSSHKeysign yes
  • HostbasedAuthentication yes

The next command is all one line (it goes to the ssh_known_hosts file):

  • scp /etc/ssh/ root@

The next command is all one line (it goes to the ssh_known_hosts2 file):

  • scp /etc/ssh/ root@

Now, find out your HOSTNAME on the Client:

  • echo $HOSTNAME

On the Server machine, do:

  • nano -w /etc/hosts.equiv
  • add the line: harry-lin root
  • (the $HOSTNAME of the Client in this example was harry-lin)
  • nano -w /etc/ssh/sshd_config

Check the following:

  • PermitRootLogin yes
  • StrictModes yes
  • RSAAuthentication yes
  • PubkeyAuthentication yes
  • AuthorizedKeysFile %h/.ssh/authorized_keys
  • IgnoreRhosts no
  • RhostsRSAAuthentication no
  • HostbasedAuthentication yes

Now do:

  • chown root:root /root


  • /etc/init.d/ssh reload

Now you can try something like this from your Client:

  • ssh root@

If it doesn’t work, then do the following, which gives you details if possible:

  • ssh -v root@

Final Notes: While support for this utility is NOT currently included with your NetEqualizer, we will assist any customers with a current Network Software Subscription for up to one hour. For additional support, consulting fees may apply.

Tech Tips, a script to block URLs with your NetEqualizer

# The following script can be used with your NetEqualizer to block a set of URLs of your choosing

# Save the script below into a file in the /art directory; we named ours

# Then create a file with the URLs you wish to block, one per line, in the same directory as this Perl script

# You'll need NetEqualizer version 4.0 or higher


#!/usr/bin/perl -w

$| = 1;

if (scalar(@ARGV) < 1) {
    print "Usage: $0 <file name with urls to block>\n";
    exit 1;
}

open(SPECIAL, "< $ARGV[0]") || die "opening url file in block stuff problem";

while ($line = <SPECIAL>) {
    chomp($line);
    print " blocking $line \n";

    $search_phrase = $line;

    if (-e "/usr/bin/nslookup") {
        print " calling nslookup for $search_phrase \n";
        $data = `/usr/bin/nslookup $search_phrase`;
        open(LOGF, ">> /tmp/arblog") || die "opening log file";
        # uses the same log file as the NetEq process; not sure if this is a good idea?
        print "$data data \n";
        @foo = split(/[\s#]+/, $data);
        $counter = 0;
        $found_ip = 0;
        while ($counter < @foo) {
            if (exists $foo[$counter] && $foo[$counter] =~ /(\d+)(\.\d+){3}/) {
                print " $foo[$counter] is an IP \n";
                # ADD_CONFIG CONNECTION x.x.x.x/y val port direction optional_comment
                system("/art/ADD_CONFIG CONNECTION $foo[$counter]/32 1 0 1 $line ");
                print LOGF "putting block on site $search_phrase IP $foo[$counter] \n";
                $found_ip = 1;
            }
            $counter = $counter + 1;
        }
        if (!$found_ip) {
            print LOGF "problem with version of nslookup: could not find valid IP for $search_phrase \n";
        }
        close(LOGF);
    } else {
        print "need the nslookup utility to run this command (part of the dnsutils package on Debian)\n";
        exit 1;
    }
}
close(SPECIAL);

When is it time to add more bandwidth to your network?

We recently received an e-mail regarding this question from a customer. Here is the basic dialogue, with our answer below.

It occurred to me today…..pre netequalizer, I’d know that it was time to upgrade our network bandwidth by watching the network traffic graphs.  If there were periods of the day that the connection was maxed out it was a good sign that more bandwidth was needed.

Now that our traffic is running through netequalizer, with the threshold limit and then slowing of user connections beyond that point, we’ll not see the graph max out any more will we?  And if we did ever see that, we’d be way past the point of needing more bandwidth, because it would mean that our link was so saturated that netequalizer couldn’t slow down enough traffic fast enough to avoid that situation.

Answer: We actually do have systems that run very close to pegged (maxed out) for hours at a time without complaint. Generally, we would suggest waiting until users perceive normal-sized web pages and short e-mails as slow. The NetEqualizer does a very good job of allowing your network to run close to capacity without adverse side effects, so in essence it would be premature to add more bandwidth just because you are hitting peak usage.

Note: If you ask the sales rep for your local bandwidth provider whether you should purchase more bandwidth, they will almost always recommend adding more to solve almost any issue on your network. Your provider, whether it be Qwest, Comcast, Time Warner, or a host of other local providers, most likely has a business model where it grows profit by selling bandwidth; hence, its sales staff is not really incentivized to offer alternatives. Occasionally, when it is physically impossible to bring more bandwidth to your business, they will relent and offer a referral to a bandwidth optimization company.

APconnections Announces NetEqualizer Lifetime Buyer Protection Policy

This week, we announced the launch of the NetEqualizer Lifetime Buyer Protection Policy. In the event of an un-repairable failure of a NetEqualizer unit at any time, or in the event that it is time to retire a unit, customers will have the option to purchase a replacement unit and apply a 50-percent credit of their original unit purchase price toward the new unit. For current pricing, register for our price list. This includes units that are more than three years old (the expected useful life for hardware) and in service at the time of failure.

For example, if you purchased a unit in 2003 for $4000 and were looking to replace it or upgrade with a newer model, APconnections would kick in a $2000 credit toward the replacement purchase.

The Policy will be in addition to the existing optional yearly NetEqualizer Hardware Warranty (NHW), which offers customers cost-free repairs or replacement of any malfunctioning unit while NHW is in effect (read details on NHW).

Our decision to implement the policy was a matter of customer peace-of-mind rather than necessity. While the failure rate of any NetEqualizer unit is ultimately very low, we want customers to know that we stand behind our products – even if it’s several years down the line.

To qualify,

  • users must be the original owner of the NetEqualizer unit,
  • the customer must have maintained a support contract that has been current within the last 18 months (lapses of support longer than 18 months will void our replacement policy), and
  • the unit must have been in use on your network at the time of failure.

Shipping is not included in the discounted price. Purchasers of the one-year NetEqualizer hardware warranty (NHW) will still qualify for full replacement at no charge while under hardware warranty.  Contact us for more details by emailing, or calling 303.997.1300 x103 (International), or 1.888.287.2492 (US Toll Free).

Note: This Policy does not apply to the NetEqualizer Lite.

NetEqualizer Support Archives


Using a Load Generator/Emulator to Test Your Network

By Art Reisman, CTO, APconnections

One of the most challenging aspects of technology development has always been the process of bridging the gap between theory and application.  What may seem to work on paper, and even in limited trials, was never guaranteed when dealing with real-world scenarios and often unforeseen problems.

Several members of our engineering team just returned from a week of testing with Candela Technologies' network load emulator, and once again, we've not been disappointed. At the touch of a button, we were able to create unbelievably realistic worst-case load scenarios. Candela's LANforge equipment not only stressed our network elements, but did so with variation, creating an environment that successfully simulated the challenges our technology will face on a regular basis in the field.

Judging by the numerous trials we've run, it's become clear that simply driving a fixed load across a network is not enough to ensure reliability. Instead, you need a simulation with a multitude of elements (different packet sizes; UDP, TCP, and broadcast traffic; etc.) and traffic streams, including those that refuse to back down, such as during a bad denial-of-service attack or virus outbreak. Fortunately, this is exactly the quality of service that Candela Tech offers.

In addition to giving you peace of mind, this type of simulation can also save you and your company time and money.  When implementing a network upgrade, the normal method of operation goes a little like this:

  1. Work late at night and over the weekends
  2. Implement the change
  3. Put staff on standby for the next business day
  4. Have a fallback strategy to revert to a previously proven configuration should things go south

While these steps eventually may do the trick, they’re not without their costs — both financial and otherwise. Aside from the overtime you’ll end up paying your admin, perhaps more importantly, you also run the risk of negatively impacting the service of clients and customers during the hit-and-miss setup process.

Yet, the costs that come with this type of strategy can easily be reduced with a sophisticated load generation device. Network choke points can be stressed and limits determined before unwittingly making  guinea pigs out of your network users.  And, the staff from Candela Tech is more than knowledgeable and eager to help, which has allowed us to be up and running right out of the box on more than one occasion.

Ultimately, using Candela Technologies has been a lot like looking into a crystal ball. After the LANforge simulations, we’re able to identify and address any issues before they affect our customers. What was once a process of bringing our technology to the real world has now become a process of Candela bringing the real world to us.

Note: There are other competitive network load generators on the market, Fluke being the market leader.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

NetEqualizer Direct Sales and Restocking Fee

A customer recently stated that the restocking fee we charge gave the appearance of a lack of confidence in our product. I can appreciate that perception, especially with all the failed products many IT professionals have been burned with over the years.

However, here is the official reasoning behind why we charge a restocking fee.  As taken from my response to this customer:

The restocking fee has its roots based on a couple of factors

1) The restocking fee is designed to make sure we don't get inundated with requests for free units from customers that are "just looking". The other vendors you mention charge much higher prices, sometimes four times as much, and they typically use a channel that already purchases stock for demo purposes. All of this cost gets passed along to the customers that end up buying the product (basically covering the cost of dry wells). We sell mostly direct, and with no local presence it is difficult to know a customer's buying patterns. You'd be surprised how many customers will trial something without any intention to purchase. Many times it is not the immediate customer's fault, as the CIO might change the IT manager's budget, etc.

2) We are not 100-percent certain that our unit will solve your issue. I'd say we are closer to 80-percent certain based on what you described, but we will easily provide you with $200 of support helping you figure out what your issue is. You will have the chance to talk directly to our engineers, who troubleshoot thousands of networks a year with similar problems. We do not want or pretend to be a consulting company, but we don't want to consult without recouping some of our cost either, especially with our low margins, which we are already passing along.
