The Illusion of Separation: My Malaysia Trip Report


By Zack Sanders

VP of Security – APconnections

Traveling is an illuminating experience. Whether you are going halfway across the country or halfway around the world, the adventures you have and the lessons you learn are priceless and help shape your outlook on life, humanity, and the planet we live on. Even with the ubiquity of the Internet, we are still so often constrained by our limited and biased information sources that we develop a world view that is inaccurate and disconnected. This disconnection is the root of many of our problems – be they political, environmental, or social. There is control in fear, and the powerful keep their seats by reinforcing this sense of separation among the masses. The realization that we are all together on this planet, and that we all largely want the same things, can only come from going out and seeing the world for yourself with as open a mind as possible.

One of the great things about NetEqualizer, and working for APconnections, is that, while we are a relatively small organization, we are truly international in our business. From the United States to the United Kingdom, and Argentina to Finland, NetEqualizers are helping nearly every vertical around the world optimize the bandwidth they have available. Because of this global reach, we sometimes get to travel to unique customer sites to conduct training or help install units. We recently acquired a new customer in Malaysia – a large university system called International Islamic University Malaysia, or IIUM. In addition to NetEqualizers for all of their campuses, two days of training were included in their order – one day at each of two of their main locations (Kuala Lumpur and Kuantan). I jumped at the chance to travel to Asia (my first time to the continent) and promptly scheduled some dates with our primary contact at the University.

I spent the weeks prior to my departure in Spain – a nicely timed, but unrelated, warmup trip to shake off the rust that had accrued since my last international travel experience five years ago. The part of the Malaysia trip I was dreading the most was the hours I would log in seat 46E of the Boeing 777 that would take me to Kuala Lumpur with Singapore Airlines. Having the Spain trip occur first helped ease me into the longer flights.

F.C. Barcelona hosting Real Madrid at the Camp Nou.

My Malaysia itinerary looked like this:

Denver -> San Francisco (2.5 hours), Layover (overnight)

San Francisco -> Seoul (12 hours), Layover (1 hour)

Seoul -> Singapore (7 hours), Layover (6 hours)

Singapore -> Kuala Lumpur (1 hour)

I was only back in the United States from Spain for one week. It was a fast, but much needed, seven days of rest. The break went by quickly and I was back in the air again, this time heading west.

After 22 hours on the plane and 7 hours in various airports, I was ready to crash at my hotel in the City Centre when I touched down in KL. I don’t sleep too well on planes so I was pretty exhausted. The trouble was that it was 8am local time when I arrived and check-in wouldn’t be until 2:00pm. Fortunately, the fine folks at Mandarin Oriental accommodated me with a room and I slept the day away.

KL City Centre.

I padded my trip with the intention of having a few days before the training to get adjusted, but it didn’t take me as long as I thought, and I was able to do some sightseeing in and outside the city before the training.

My first stop was Batu Caves – a Hindu shrine located near the last stop of the KTM Komuter line in the Gombak District – which I later learned was near the location of my first training seminar. The shrine is set atop 272 stairs in a 400-million-year-old limestone cave. After the trek up, you are greeted by lightly dripping water and a horde of ambitious monkeys, in addition to the shrines within the cave walls.

Batu Caves entrance.

Batu Caves.

Petronas Towers.

This was the farthest I ventured from the city for sightseeing. The rest of the time I spent near the City Centre – combing through the markets of Chinatown and Little India, taking a tour of the Petronas Towers, and checking out the street food on Jalan Alor. Kuala Lumpur is a very Western city. The influence is everywhere despite the traditional Islamic culture. TGI Fridays, Chili’s, and Starbucks were the hotspots – at least in this touristy part of town. On my last night I found a unique spot at the top of the Traders Hotel called Skybar. It is a prime location because it looks directly at the Petronas Towers – which, at night especially, are gorgeous. The designers of the bar did a great job with sweeping windows and sunken sofas for enjoying the view. I stayed for a couple of hours and had a Singapore Sling – a drink I’d heard of but had never tried.

Singapore Sling at the Skybar.

The city and sites were great; however, the primary purpose of the trip was not leisure – it was to share my knowledge of the NetEqualizer with those who would be working with it at the University. To be honest, I wasn’t sure what to expect. This was definitely different from most locations I have visited in the past. A lot of thoughts went through my head about how I’d be received, whether the training would be valuable, and so on. It’s not that I was worried about anything in particular; I just didn’t know. My first stop was the main location in KL. It’s a beautifully manicured campus where the buildings all have aqua blue roofs. My cab driver did a great job helping me find the Information Technology Department building, and I quickly met up with my contact and got set up in the Learning Lab.

This session had nine participants – ranging from IT head honchos to network engineers. Their experience with the NetEqualizer also ranged from well-versed to none at all. I tailored the training so that it would be useful to all participants – we went over the basics but also spent time on more advanced topics and configurations. All in all, the training lasted six hours or so, including an hour break for lunch that I took with some of the attendees. It was great talking with each of them – regardless of whether the subject was bandwidth congestion or the series finale of Breaking Bad. They were great hosts and I look forward to keeping in touch with them.

Training at IIUM.

I was pretty tired from the day by the time I arrived back at the hotel. I ate and got to bed early because I had to leave at 6:00am for my morning flight across the peninsula to Kuantan – a short, 35-minute jaunt eastward – to do it all over again at that campus. Kuantan is much smaller than KL, but it is still a large city. I didn’t get to see much of it, however, because I took a cab directly from the airport to the campus and got started. There were only four participants this time, but the training went just as well. I had similar experiences talking with this group, and they, too, were great hosts. I returned to the airport in the evening and took a flight back to KL. The flight is so short that it’s comical. It goes like this:

Taxi to the runway -> “Flight attendants prepare for takeoff” -> “You may now use your electronic devices” -> 5 minutes goes by -> “Flight attendants prepare for landing – please turn off your electronic devices” -> Land -> Taxi to terminal

The airport in Kuantan at sunset.

I had one more day to check out Kuala Lumpur and then it was back to the airport for another 22 hours of flying. At this point though, I felt like a flying professional. The time didn’t bother me and the frequent meals, Sons of Anarchy episodes, and extra leg room helped break it up nicely. I took a few days in San Francisco to recover and visit friends before ultimately heading back to Boulder.

It was a whirlwind of a month. I flew almost 33,000 miles in 33 days and touched down in eight countries on three continents. Looking back, it was a great experience – both personally and professionally. I think the time I spent in these places, and the things I did, will pay invaluable dividends going forward.

If your organization is interested in NetEqualizer training – regardless of whether you are a new or existing customer – let us know by sending an email to sales@apconnections.net!

View of KL Tower from the top of the Petronas Towers.

On the Trail of Network Latency Over a Satellite Link


By Art Reisman – CTO – www.netequalizer.com


This morning, just for fun, I decided to isolate the latency on a route from my home office to a computer located at a remote hunting lodge. The hunting lodge is serviced by a Wild Blue satellite link.

What causes latency?

The factors that influence network latency are:

1) Wire transport speed.

This is not to be confused with the amount of data a wire can carry in a second; I am referring to the raw speed at which data travels on the wire – the traversal time from end to end once the data is on it. For the most part, we can assume data travels at the speed of light: 186,000 miles per second.

2) Distance.

How far is the data traveling? Even though data travels at the speed of light, a hop across the United States will cost you about 4 milliseconds, and a hop up to a stationary satellite (round trip about 44,000 miles) adds a minimum of 300 milliseconds; a quick back-of-the-envelope check of that number appears just after this list. I have also worked through an example of how you can trace latency across a satellite link below.

3) Number of hops.

How many switching points are there between source and destination? Each hop requires the data to move from one wire to another, and that switch involves a small amount of waiting to get onto the next wire. Each hop can add 2 or 3 milliseconds.

4) Overhead processing on a hop.

This can also add up. Sometimes at the end points, people like to look at the data, usually for security reasons, on their firewall. Depending on the number of features and the processing power of the firewall, this can add a wide range of latency. Normal is 1 or 2 milliseconds, but that can blow up to 50 milliseconds, or in some cases even more, when you turn on too many features on your firewall.
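
To put the distance factor in perspective, here is a minimal back-of-the-envelope sketch in Python, using only the round numbers quoted above (44,000 miles round trip and 186,000 miles per second):

# Rough propagation-delay check for the satellite round trip described above.
SPEED_OF_LIGHT_MPS = 186_000  # miles per second, approximate

def propagation_delay_ms(miles):
    # Pure travel time, in milliseconds, for a signal covering the given distance.
    return miles / SPEED_OF_LIGHT_MPS * 1000

print(f"Satellite round trip: {propagation_delay_ms(44_000):.0f} ms")
# Prints roughly 237 ms -- wire time alone accounts for most of the
# 300-millisecond minimum quoted above; switching and processing add the rest.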

How much latency is too much?

It really depends on what you are doing. If it is a one-way conversation, like watching a Netflix movie, you are probably not going to care if the data arrives a half second after it was sent. But if you are talking interactively on a Skype call, you will find yourself talking over the other person quite often – especially at the beginning of a call.

Tracing Latency across a satellite link.

Note: I am doing this all from the command line on my Mac.

Step one: I have the IP address of a computer that I know is only accessible by satellite. So first I run the traceroute command to find all the hops along the route.

localhost:~ root# traceroute 75.104.xxx.xxx

When I run this command, I get a list of every hop along the route. I also get some millisecond times for each hop from traceroute, but I am not sure I trust them, so I am not showing them.

From my Mac command line I do:

traceroute to 75.104.xxx.xxx (75.104.xxx.xxx)
1  192.168.1.1 (192.168.1.1) – This is my local router or gateway, the first hop
2  95.145.80.1 (95.145.80.1) – This is the Comcast router, the first router upstream from my house, most likely at the local Comcast NOC.
3  te-8-1-ur01.boulder.co.denver.comcast.net (68.85.107.85) – We then go through a bunch of Comcast links
4  te-7-4-ur02.boulder.co.denver.comcast.net (68.86.103.122)
5  te-0-10-0-10-ar02.aurora.co.denver.comcast.net (68.86.179.97)
6  he-3-10-0-0-cr01.denver.co.ibone.comcast.net (68.86.92.25)
7  xe-5-0-2-0-pe01.910fifteenth.co.ibone.comcast.net (68.86.82.202)
8  173.167.58.162 (173.167.58.162) – and here we leave the Comcast network of routers
9  if-1-1-2-0.tcore1.pdi-paloalto.as6453.net (66.198.127.85) – and finally reach some other backbone router
10  66.198.127.94 (66.198.127.94)
11  * * *
13 75.104.xxx.xxx (This IP is on the other side of the satellite link)

Now here is the cool part: I am going to ping the last IP address before the route goes up to the satellite, and then the hop after that, to see what the latency over the satellite hop is.

Note that the physical satellite does not have an IP address; there is a router here on Earth that transmits data up and over the satellite link.

localhost:~ root# ping 66.198.127.94
PING 66.198.127.94 (66.198.127.94): 56 data bytes
64 bytes from 66.198.127.94: icmp_seq=0 ttl=56 time=42.476 ms
64 bytes from 66.198.127.94: icmp_seq=1 ttl=56 time=55.878 ms
64 bytes from 66.198.127.94: icmp_seq=2 ttl=56 time=42.382 ms

About 50 milliseconds.

And the last hop to the remote computer.

localhost:~ root# ping  75.104.xxx.xxx
PING 75.104.180.156 (75.104.xxx.xxx): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 75.104.180.xxx: icmp_seq=0 ttl=109 time=1551.310 ms
64 bytes from 75.104.180.xxx: icmp_seq=1 ttl=109 time=1574.177 ms
64 bytes from 75.104.180.xxx: icmp_seq=2 ttl=109 time=1494.628 ms

Wow that hop up over the satellite link added about 1500 milliseconds to my ping time!
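
If you want to repeat this comparison without eyeballing the ping output, here is a minimal Python sketch that pings the hop on each side of the satellite link and reports the difference. The two addresses are placeholders for whatever your own traceroute shows, and it assumes a Unix-style ping that accepts the -c flag:

import re
import subprocess

def avg_ping_ms(host, count=3):
    # Run ping and average the reported round-trip times, in milliseconds.
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    times = [float(t) for t in re.findall(r"time=([\d.]+) ms", out)]
    if not times:
        raise RuntimeError("no replies from " + host)
    return sum(times) / len(times)

ground_hop = "66.198.127.94"   # last hop before the satellite uplink (from traceroute)
remote_host = "x.x.x.x"        # placeholder: the host on the far side of the link

ground = avg_ping_ms(ground_hop)
remote = avg_ping_ms(remote_host)
print(f"Ground hop: {ground:.0f} ms   Remote host: {remote:.0f} ms")
print(f"The satellite link adds roughly {remote - ground:.0f} ms")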

That is a little more latency than I would have expected, but in fairness to Wild Blue, they do a good job at a reasonable price. The funny thing is that streaming audio works fine over the satellite link because it is not latency-sensitive. A Skype call, however, might be a bit more painful: 300 milliseconds is about the tolerance level where users start to notice latency on a phone call, 500 is manageable, and up over 1,000 it starts to require a little planning and pausing before and after you speak.

Reference: A non-technical guide to fixing TCP/IP problems

NetEqualizer YouTube Caching FAQ


Editor’s Note: This week, we announced the availability of the NetEqualizer YouTube caching feature we first introduced in October. Over the past month, interest and inquiries have been high, so we’ve created the following Q&A to address many of the common questions we’ve received.

This may seem like a silly question, but why is caching advantageous?

The bottleneck most networks deal with is that they have a limited pipe leading out to the larger public Internet cloud. When a user visits a website or accesses content online, data must be transferred to and from the user through this limited pipe, which is usually meant for only average loads (increasing its size can be quite expensive). During busy times, when multiple users are accessing material from the Internet at once, the pipe can become clogged and service slowed. However, if an ISP can keep a cached copy of certain bandwidth-intensive content, such as a popular video, on a server in their local office, this bottleneck can be avoided. The pipe remains open and unclogged and customers are assured their video will always play faster and more smoothly than if they had to go out and re-fetch a copy from the YouTube server on the Internet.

What is the ROI benefit of caching YouTube? How much bandwidth can a provider conserve?

At the time of this writing, we are still in the early stages of our data collection on this subject. What we do know is that YouTube can account for up to 15 percent of Internet traffic. We expect to be able to cache at least the 300 most popular YouTube videos with this initial release, and perhaps more when we release the mass-storage version of our caching server in the future. Considering this, realistic estimates put the savings in terms of bandwidth overhead somewhere between 5 and 15 percent. But that is only the immediate benefit in terms of bandwidth savings. The long-term customer-satisfaction benefit is that many more YouTube videos will play without interruption on a crowded network (busy hour) than before. Therefore, ROI shouldn’t be measured in bandwidth savings alone.
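
As a rough illustration of where those numbers land (a sketch only; the 15 percent share is the figure quoted above, while the link size and cache hit rate are assumptions):

# Back-of-the-envelope bandwidth savings from YouTube caching.
link_mbps = 100.0        # assumed size of the Internet pipe
youtube_share = 0.15     # YouTube's share of traffic (figure quoted above)
cache_hit_rate = 0.50    # assumed fraction of YouTube requests served from cache

saved_mbps = link_mbps * youtube_share * cache_hit_rate
print(f"Busy-hour savings: about {saved_mbps:.1f} Mbps "
      f"({100 * saved_mbps / link_mbps:.1f}% of the link)")
# With these assumptions: about 7.5 Mbps, i.e. within the 5-15 percent range above.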

Why is it just the YouTube caching feature? Why not cache everything?

There are a couple of good reasons not to cache everything.

First, there are quite a few Web pages that are dynamically generated or change quite often, and a caching mechanism relies on content being relatively static. This allows it to grab content from the Internet and store it locally for future use without the content changing. As mentioned, when users/clients visit the specific Web pages that have been stored, they are directed to the locally saved content rather than over the Internet and to the original website. Therefore, caching obviously wouldn’t be possible for pages that are constantly changing. Caching dynamic content can cause all kinds of issues — especially with merchant and secure sites where each page is custom-generated for the client.

Second, a caching server can realistically only store a subset of data that it accesses. Yes, data storage is getting less expensive every year, but a local store is finite in size and will eventually fill up. So, when making a decision on what to cache and what not to cache, YouTube, being both popular and bandwidth intensive, was the logical choice.

Will the NetEqualizer ever cache content beyond YouTube, such as other videos?

At this time, the NetEqualizer is caching files that traverse port 80 and correspond to video files from 30 seconds to 10 minutes. It is possible that some other port 80 file will fall into this category, but the bulk of it will be YouTube.

Is there anything else about YouTube that makes it a good candidate to cache?

Yes, YouTube content meets the level of stability discussed above that’s needed for effective caching. Once posted, most YouTube videos are not edited or changed. Hence, the copy in the local cache will stay current and be good indefinitely.

When I download large distributions, the download utility often gives me a choice of mirrored sites around the world. Is this the same as caching?

By definition this is also caching, but the difference is that there is a manual step in choosing one of these distribution sites. Some of the large-content open source distributions have been delivered this way for many years. The caching feature on the NetEqualizer is what is called “transparent,” meaning users do not have to do anything to get a cached copy.

If users are getting a file from cache without their knowledge, could this be construed as a violation of net neutrality?

We addressed the tenets of net neutrality in another article and to our knowledge caching has not been controversial in any way.

What about copyright violations? Is it legal to store someone’s content on an intermediate server?

This is a very complex question and anything is possible, but with respect to intent and the NetEqualizer caching mechanism, the Internet provider is only caching what is already freely available. There is no masking or redirection of the actual YouTube administrative wrappings that a user sees (this is where advertising and promotions appear). Hence, there is no loss of potential revenue for YouTube. In fact, it could be considered more of a benefit for them, as it helps more people use their service where connections might otherwise be too slow.

Final Editor’s Note: While we’re confident this Q&A will answer many of the questions that arise about the NetEqualizer YouTube caching feature, please don’t hesitate to contact us with further inquiries. We can be reached at 1-888-287-2492 or sales@apconnections.net.

NetEqualizer Tuning Guide for Small Networks with a Small Number of Infrequent Users


If you are working with a small network (10Mbps or less) that has a small number of infrequent users, here are some tuning recommendations that will help you optimize your network use.  These recommendations came out of a discussion with one of our customers.  Their environment is a 40-person company on a 10Mbps pipe (a normal number of users for a small network) that converts over at night to a network with only one user.

The following recommendations will help to alleviate the situation where a user on a small network with a small number of infrequent users gets knocked down to less than 1Mbps by a PENALTY even though there is more than enough bandwidth to sustain their download at a higher rate.

Summary of Recommendations (listed in priority order):

1) (best option) Put a hard limit somewhere below RATIO (typically 85%) on each IP address on the network.  So, for a 10Mbps network with RATIO = 85%, your hard limits should be below 8.5Mbps for each IP address.

2) Put a “day configuration” and a “night configuration” in place.  The process to do this is described in the Changing Configurations by Time of Day section of our Advanced Tips & Tricks guide.

3) Change the PENALTY unit sensitivity, to make the penalty less restrictive.

4) Raise the value of HOGMIN from 12,000 bytes/second anywhere up to 128,000 bytes/second.

The philosophy behind each is described in detail in the following sections.

1) Adding Hard Limits on each IP address

We recommend putting Hard Limits on each IP address.  Hard Limits will keep any one user from consuming the entire network bandwidth.  If you prefer not to have Hard Limits on all IP addresses, you can set the Hard Limit only for the infrequent users.

For example, on a 10Mbps network, you can put a Hard Limit of 4-5Mbps on every user, which will prevent any one user from tripping equalizing, but will allow each of them to sustain a 5Mbps download on your lightly loaded network (see the unit-conversion sketch at the end of this section).

If a user starts a large download, it will consume network bandwidth up until the network reaches a point of congestion (at 85% with RATIO set to 85).   Once that point is reached, equalizing will kick in and start penalizing the traffic.  In cases where the network has a normal number of users on it, this works very well to provide fairness across the available bandwidth.

When the one network user spikes the entire network to above 85% congested, a PENALTY kicks in.  The result of the penalty is that the file download gets throttled back to 500kbps or maybe less – almost instantly.  Once the penalty is removed, the file download will again consume all the network bandwidth until another penalty is applied. This cycle repeats itself every few seconds until the download completes.

On a system with more than one user – and typically one that is very busy, with hundreds or thousands of users – the pipe is almost always near capacity, so the penalties being applied are not as dramatic, and they ensure that all other users do not experience “lockup”.
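
A note on units for the hard limits discussed above: if you enter them through the ADD_CONFIG HARD command documented in the quota toolkit section below, the rates are specified in bytes per second, not kilobits. Here is a minimal conversion sketch in Python for the example rates used in this guide:

# Convert megabits per second to the bytes-per-second values used by HARD limits.
def mbps_to_bytes_per_sec(mbps):
    return int(mbps * 1_000_000 / 8)

for mbps in (4, 5, 8):
    print(f"{mbps} Mbps  ->  {mbps_to_bytes_per_sec(mbps):,} bytes/second")
# 4 Mbps -> 500,000 bytes/second; 8 Mbps -> 1,000,000 bytes/second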

2) Change your Configuration by Time of Day

You can also change your NetEqualizer to use two separate configuration files, so that you can apply different rules at various times of day – for example, rules for “off-hours” (typically nighttime) versus another set for “on-hours” (typically daytime).    This would be beneficial if you want to open up the amount of bandwidth available per user at night.  For example, you could set your off-hours hard limits to 8 Mbps, and lower your on-hours hard limits to 4Mbps.

Note that it is still important to keep your hard limits below RATIO, so that you do not trigger equalizing based on one data flow.

3) Change the PENALTY unit sensitivity, to make the penalty less restrictive

Networks much larger than 45 megabits may require a PENALTY UNIT resolution smaller than 100ths of seconds. In the NetEqualizer Web GUI, the smallest penalty that can be applied to an IP packet is 1/100 of a second. If you are finding that a default PENALTY of 1 is putting too much latency on your connections, then you can adjust the PENALTY unit to 1/1000 of a second with the following command:

From the Web GUI Main Menu, Click on ->Miscellaneous->Run a Command

Type in: /bridge/bridge-utils/brctl/brctl rembrain my 99999

Note: For this change to persist you will need to put it in the /art/autostart file.

4)  Raise the value of HOGMIN, anywhere up to 128,000 bytes/sec

HOGMIN is used to determine what traffic should be penalized on a congested network.  One way to keep traffic from being penalized, then, is to raise the value of HOGMIN (the default is 12,000 bytes per second).  For a lightly loaded network you could consider HOGMIN = 50,000 bytes/sec, and you may even go as high as 128,000 bytes/sec.

Taken as a whole, this is how our four recommendations would work in the example we have described…

Hmm… I have a 10 megabit pipe and I have 40 users during the day and 1 user at night.  No user should be able to take the whole pipe all day, but I want my 1 user to get more bandwidth at night.

  • I’ll create two configuration files: one with 4Mbps hard limits on all my users during the day (4 megabits is relatively fast service for the average user and nobody would complain) and another with 8Mbps hard limits for my night user(s).
  • In addition, I would like the penalty to be less harsh at night, so I’ll change the PENALTY=1/1000 of a second in my night configuration file.
  • I also would like HOGMIN to be raised at night.  I will set it to 50,000 in my night configuration file.

During the day, when every once in a while we get 2 or 3 users downloading at once, it will no longer kill the entire pipe.  And during the night, my user can download larger files without being restricted.  So, with 4Mbps/8Mbps restrictions plus equalizing, I get the best of both worlds – pretty fast downloads when the pipe is empty, and protection from gridlock at peak times.  Now there is nothing anybody can do to crash the system at random times.

I hope you find this tuning suggestion helpful for your situation.  If you would like additional help, please contact our Support Team at support@apconnections.net or 303.997.1300 x102 to discuss tuning for your specific configuration.

Top Five Causes For Disruption Of Internet Service


Editor’s Note: We took a poll of our customer base, consisting of thousands of NetEqualizer users. What follows are the top five most common causes for disruption of Internet connectivity.

1) Congestion: Congestion is the most common cause for short Internet outages.  In general, a congestion outage is characterized by 10 seconds of uptime followed by approximately 30 seconds of chaos. During the chaotic episode, the circuit gridlocks to the point where you can’t load a Web page. Just when you think the problem has cleared, it comes back.

The cyclical nature of a congestion outage is due to the way browsers and humans retry on failed connections. During busy times usage surges and then backs off, but the relief is temporary. Congestion-related outages are especially acute at public libraries, hotels, residence halls and educational institutions. Congestion is also very common on wireless networks. (Have you ever tried to send a text message from a crowded stadium? It’s usually impossible.)

Fortunately for network administrators, this is one cause of disruption that can be managed and prevented (as you’ll see below, others aren’t as easy to control). So what’s the solution? The best option for preventing congestion is to use some form of bandwidth control. The next best option is to increase the size of your bandwidth link. However, without some form of bandwidth control, bandwidth increases are often absorbed quickly and congestion returns. For more information on speeding up Internet service using a bandwidth controller, check out this article.

2) Failed Link to Provider: If you have a business-critical Internet link, it’s a good idea to source service from multiple providers. Between construction work, thunderstorms, wind, and power problems, anything can happen to your link at almost any time. These types of outages are much more likely than internal equipment failures.

3) Service Provider Internet Speed Fluctuates: Not all DS3 lines are the same. We have seen many occasions where customers are just not getting their contracted rate 24/7 as promised.

4) Equipment Failure: Power surges are the most common cause of fried routers and switches, so make sure everything has surge and UPS protection. After power surges, the next most common failure is lockup from feature-overloaded equipment. Considering this, keep the configurations on your routers and firewalls as simple as possible, or be ready to upgrade to equipment with newer, faster processors.

Related Article: Buying Guide for Surge and UPS Protection Devices

5) Operator Error: Duplicating IP addresses, plugging wires into the wrong jack, and setting bad firewall rules are the leading operator errors reported.

If you commonly encounter issues that aren’t discussed here, feel free to fill us in in the comments section. While these were the most common causes of disruptions for our customers, plenty of other problems can exist.

NetEqualizer Programmers Toolkit for Developing Quota-Based Usage Rules (NUQ API)


Author’s Notes:

December 2012 update: As of Software Update 6.0, we have incorporated the Professional Quota API into our new 6.0 GUI, which is documented in our full User Guide. The “Professional Quota API User Guide” is now deprecated.

Due to the popularity of user quotas, we built a GUI to implement the quota commands.  We recommend using the 6.0 GUI to configure user quotas; it incorporates all of the commands listed below and does NOT require basic programming skills to use.


July 2012 update: As of Software Update 5.8, we now offer the Professional Quota API, which provides a GUI front-end to the NUQ-API.  Enclosed is a link to the Professional Quota API User Guide (PDF), which walks you through how to use the new GUI toolset.

Professional Quota API Guide

If you prefer to use the native commands (NUQ API) instead of the new GUI, OR if you are using a Software Update  prior to 5.8 (< 5.8), please follow the instructions below.  If you are current on NSS, we recommend upgrading to 5.8 to use the new Professional Quota API GUI.  If you are not current on NSS, you can call 303.997.1300 ext.5 or email admin@apconnections.net  to get current.

 

 


The following article serves as the programmer’s toolkit for the new NetEqualizer User-Quota API (NUQ API). Other industry terms for this process include bandwidth allotment, and usage-based service.  The NUQ API toolkit is available with NetEqualizer release 4.5 and above and a current software subscription license (NSS).

Note: The NetEqualizer is a commercial-grade, Linux-based, in-line bandwidth shaper.  If you are looking for something Windows-based, try these.

Background

Prior to this release, we provided a GUI-based user limit tool, but it was discontinued with release 4.0.  The GUI tool did not have the flexibility needed for application development and was inadequate for customization. The NetEqualizer User-Quota API (NUQ API) programmer’s toolkit is our replacement for the GUI tool. The motivation for developing the toolkit was to allow ISPs, satellite providers, and other Internet management companies to customize their business processes around user limits. The NUQ API is a quick and easy way to string together a program of actions in unique ways to meet your needs.  However, it does require basic programming/Linux skills.

Terms of Use

APconnections, the maker of the NetEqualizer, is an OEM manufacturer of a bandwidth shaping appliance.  The toolkit below provides short examples of how to use the NUQ API to get you started developing a system to enforce quota bandwidth limits for your customers. You are free to copy/paste and use our sample programs in the programmer’s toolkit to your liking.  However, questions and support are not covered in the normal setup of the NetEqualizer product (NSS) and must be negotiated separately.  Please call 303.997.1300 x103 or email sales@apconnections.net to set up a support contract for the NUQ API programmer’s toolkit.

Once you have upgraded to version 4.5 and have purchased a current NSS, please contact APconnections for installation instructions. Once installed, you can find the tools in the directory /art/quota.

Step 1: Start the Quota Server

In order to use the NUQ API programmer’s toolkit, you must have the main quota server running.  To start the quota server from the Linux command line, you can type:

# /art/quota/quota &

Once the quota main process is running, you can make requests using the command line API.

The following API commands are available:

quota_create

Usage:

quota_create 102.20.20.2/24

Will cause the NetEqualizer to start tracking data for a block (subnet) of IP addresses in the range 102.20.20.0 through 102.20.20.255.

_________________________________________________________________________________________________________

quota_remove

Usage:

/art/quota/quota_remove 102.20.20.2/24

Will remove a block of IP addresses from the quota system.

Note: You must use the exact same IP address and mask to remove a block as was used to create the block.

_________________________________________________________________________________________________________

quota_set_alarm

Usage:

/art/quota/quota_set_alarm 102.20.20.2/17 <down limit>  <up limit>

Will set an alarm when an IP address reaches a defined limit.

Alarm notifications will be reported in the log /tmp/quotalog.  See the sample programs below for usage.

Note: Every IP in the subnet range will get flagged when/if it reaches the defined limit. The limits are specified in bytes transferred.

_________________________________________________________________________________________________________

quota_remove_alarm

Usage:

/art/quota/quota_remove_alarm 102.20.20.2/17

Will remove all alarms in effect on the specified subnet.

Note: The subnet specification must match exactly the format used when the alarm was created — same exact IP address and same exact mask.

_________________________________________________________________________________________________________

quota_reset_ip

Usage:

/art/quota/quota_reset_ip 102.20.20.2/17

Will reset the usage counters for the specified subnet range

_________________________________________________________________________________________________________

quota_status_ip

Usage:

/art/quota/quota_status_ip 102.20.20.2/24

Will show the current usage byte count for the specified IPs in the range on the console. The usage counters must first be initiated with the quota_create command.

Will also write usage statistics to the default log /tmp/quotalog.

_________________________________________________________________________________________________________

quota_rules

Usage:

/art/quota/quota_rules

Will display all of the rules currently in effect.

_________________________________________________________________________________________________________

ADD_CONFIG

Usage:

/art/ADD_CONFIG HARD <ip> <down> <up> <subnet mask> <burst factor>

Used to set rate limits on IPs, which would be the normal response should a user exceed their quota.

Parameter definitions:

HARD                     Constant that specifies the type of operation.  In this case HARD indicates “hard limit”.

<ip>                        The IP address in format x.x.x.x

<down>                 The specified maximum download (inbound) transfer speed for this IP, in BYTES per second (not kbps).

<up>                       The specified maximum upload (outbound) transfer speed, in BYTES per second.

<subnet mask>   Specifies the subnet mask for the IP address.  For example, 24 would be the same as x.x.x.x/24 notation. However, for this command the mask is specified as a separate parameter.

<burst factor> The last field in the command specifies the burst factor.  Set this field to 1 (no bursting) or to a multiple greater than 1 (bursting).  BURST FACTOR is multiplied by the <down> and <up> HARD LIMITs to arrive at the BURST LIMIT (the speed you wish to burst up to).  For example: 2Mbps <down> HARD LIMIT x 4 BURST FACTOR = 8Mbps <down> BURST LIMIT.
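
For example, a rule capping a single (hypothetical) IP address at roughly 4Mbps in each direction (500,000 bytes per second), with a /32 mask so it matches just that one address and no bursting, might look like this:

/art/ADD_CONFIG HARD 10.1.1.25 500000 500000 32 1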

_________________________________________________________________________________________________________

REMOVE_CONFIG

Usage:

/art/REMOVE_CONFIG HARD x.x.x.x

Where x.x.x.x is the base IP used in the ADD_CONFIG HARD command; no other parameters are necessary to remove the rule.

_________________________________________________________________________________________________________

To view the log:

Usage:

tail -f /tmp/quotalog

Various status messages will be reported in this log, along with ALARMs and usage statistics.

_________________________________________________________________________________________________________

Examples and Sample sessions (assumes Linux shell and Perl knowledge)

From the command line of a running NetEqualizer, first start the quota server:

root@neteq:/art/quota# /art/quota/quota &
[1] 29653
#

Then I issue a command to start tracking byte counts on the local subnet. For this example, I have some background network traffic running across the NetEqualizer.

root@neteq:/art/quota# ./quota_create 192.168.1.143/24
Created 192.168.1.143/24
root@neteq:/art/quota#

I have now told the quota server to start tracking bytes on the subnet 192.168.1.*

To see the current transferred byte count for an IP, you can use the quota_status_ip command:

root@neteq:/art/quota# ./quota_status_ip 192.168.1.143/24
Begin status for 192.168.1.143/24
status for 192.168.1.255
start time = Fri Apr 2 21:23:13 UTC 2010
current date time = Fri Apr 2 21:55:28 UTC 2010
Total bytes down = 65033
Total bytes up = 0
status for 192.168.1.119
start time = Fri Apr 2 21:54:50 UTC 2010
current date time = Fri Apr 2 21:55:28 UTC 2010
Total bytes down = 3234
Total bytes up = 4695
End of status for 192.168.1.143/24
root@neteq:/art/quota#

Yes, the output is a bit cryptic, but everything is there – for example, the start time and the current time since data collection started for each IP reporting in.

Now let’s say we wanted to do something useful when a byte count or quota was exceeded by a user.

First, we would set up an alarm.
root@neteq:/art/quota# ./quota_set_alarm 192.168.1.143/24 10000 10000
alarm block created for 192.168.1.143/24

We have now told the quota server to notify us when any IP in the range 192.168.1.* exceeds 10000 bytes up or 10000 bytes down.

Note: If an alarm is raised, the next alarm will occur at the next multiple of the original byte count. In the example above, we will get alarms at 10,000, 20,000, 30,000 and so forth for all IPs in the range. Obviously, in a commercial operation, you would want your quotas set much higher, in the gigabyte range.

Now that we have alarms set, how do we know when they happen, and how can we take action?

Just for fun, we wrote a little Perl script to take action when an alarm occurs. So, first here’s the Perl script code and then an example of how to use it.

root@neteq:/art# cat test
#!/usr/bin/perl
# Watch standard input for quota ALARM lines and react to them.
while (1)
{
    $line = readline(*STDIN);
    print $line;
    chomp($line);
    @foo = split(" ", $line);
    if ($foo[0] eq "ALARM")
    {
        print "send an email to somebody important here \n";
    }
}

First, save the Perl script to a file. In our example, we save it to /art/test.

Next, we will monitor /tmp/quotalog for new alarms as they occur, and when we find one we will print the message “send an email to somebody important here”. To actually send an email, you would need to set up a mail server and call a command-line SMTP client with your message; we did not go that far here.

Here is how we use the test script to monitor the quotalog (where ALARM messages get reported):

root@neteq:/art# tail -f /tmp/quotalog | ./test

Log Reset
ALARM 192.168.1.119 has exceeded up byte count of 160000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded down byte count of 190000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded up byte count of 170000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded down byte count of 200000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded up byte count of 180000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded down byte count of 210000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded up byte count of 190000
send an email to somebody important here
ALARM 192.168.1.119 has exceeded down byte count of 220000
send an email to somebody important here
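
If you want the handler to actually send mail rather than just print a reminder, here is a minimal sketch of the same stdin loop in Python. It assumes a local SMTP relay is reachable, and the addresses are placeholders; it would be fed from tail -f /tmp/quotalog exactly like the Perl example above.

#!/usr/bin/env python3
# Read quota ALARM lines on stdin and mail each one to an administrator.
import smtplib
import sys
from email.message import EmailMessage

for line in sys.stdin:
    if not line.startswith("ALARM"):
        continue
    msg = EmailMessage()
    msg["Subject"] = "NetEqualizer quota alarm"
    msg["From"] = "neteq@example.com"    # placeholder sender address
    msg["To"] = "admin@example.com"      # placeholder recipient address
    msg.set_content(line)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)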

Now, what if we just want to see what rules are in effect?  Here is a sequence where we create a couple of rules and show how you can check their status. Note the subtle difference between the quota_rules and quota_status_ip commands: quota_status_ip shows IPs that are part of a rule and are actively counting bytes, since a rule does not become active (show up in the status) until bytes have actually been transferred.

root@neteq:/art/quota# ./quota_create 192.168.13.143/24
Created 192.168.13.143/24
root@neteq:/art/quota# ./quota_rules
Active Quotas —————
192.168.13.143/24
Active Alarms —————-
root@neteq:/art/quota# ./quota_set_alarm 192.168.11.143/24 20000 20000
alarm block created for 192.168.11.143/24
root@neteq:/art/quota# ./quota_rules
Active Quotas —————
192.168.13.143/24
Active Alarms —————-
192.168.11.0/24
root@neteq:/art/quota#

That concludes the NetEqualizer User-Quota API (NUQ API) programmer’s toolkit for now. We will be adding more examples and features in the near future. Please feel free to e-mail us at support@apconnections.net with feature requests and bug reports for this tool.

Note: You must have a current NSS to receive the toolkit software. It is not enabled with the default system.

Related Opinion Article on the effectiveness of Quotas

NetEqualizer: Advanced Tuning
