How Does NetEqualizer Compare to Mikrotik?


Mikrotik is a supercharged Swiss Army knife of a solution; no feature is off limits on their product: routing, bandwidth control, layer 7 filters, PPPoE, firewall, they have it all. If I were going off to start a WISP with a limited budget and could bring only one tool with me, it would be a Mikrotik solution. The NetEqualizer, on the other hand, grew up around the value equation of optimizing bandwidth on a network and doing it in a smart, turnkey fashion. It was developed by a wireless operator who realized that high-quality, easy-to-use bandwidth control was needed to ensure a profitable business.

Yes, there is some overlap between the two, and over time the NetEqualizer has gone beyond its core with auxiliary features; for example, the NetEqualizer includes a firewall and a network access control module. But the primary reason an operator purchases a NetEqualizer still goes back to our core mission: to keep their margins in this competitive business, they need to optimize their Internet trunk without paying an army of technicians to maintain a piece of equipment.


The following was part of a conversation with a customer who was interested in comparing Mikrotik queues to NetEqualizer equalizing. So take off your Mikrotik hat for a minute and read on about a different philosophy on how to control bandwidth.

Equalizing is a bit different than Mikrotik, so we can't make exact feature comparisons. NetEqualizer lets users run until the network (or pool) is crowded and then slaps the heavy users for a very short duration, faster than you or I could do it if we tried. Do you have the arcade game "whack-a-mole" in Australia, where you hit the moles on the head with a hammer as they pop up out of the holes?

The vision of our product was to allow operators to plug it in, give priority to short real-time traffic when the network is busy, and leave it alone when shaping is not needed.

It does this based on connections, not users (as per your question).

Suppose that out of your 1,000 users, 90 percent were web surfing, 5 percent were watching YouTube, 20 percent were running chat sessions alongside YouTube and web surfing, and another 20 percent were on Skype calls while web surfing.

Based on the different demand levels of all these users, it is nearly impossible to divide the bandwidth evenly.

But if the trunk was saturated in the example above, the NetEqualizer would chop down the YouTube streams (since they are the biggest), leaving all the other streams alone. So instead of having your network crash completely, a few YouTube videos would break up for a few seconds, and then, when conditions abated, they would be allowed to run again. I cannot tell you the exact allocations per user because we don't try to hit fixed allocations; we just put delay on the worst offenders until overall bandwidth usage drops back to 90 percent. It is never the same, and we quickly take the delay away when things improve.
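
To make that description concrete, here is a minimal Python sketch of that re-evaluation loop. It is an illustration of the equalizing idea, not NetEqualizer's actual code; the capacity, ratio, flow count, and callback names are assumptions chosen for the example.

```python
# A minimal sketch of the "equalizing" idea described above, not the
# actual NetEqualizer implementation: every couple of seconds, look at
# overall utilization; if the trunk is over ~90% of capacity, add a
# small delay to the largest connections, otherwise remove any delays.
# All names and numbers here are illustrative assumptions.

TRUNK_CAPACITY_KBPS = 10_000      # assumed trunk size
CONGESTION_RATIO = 0.90           # shape only above 90% utilization
PENALIZED_FLOWS = 5               # how many of the biggest flows to delay

def equalize_once(flows, add_delay, remove_delay):
    """flows: dict mapping connection id -> current rate in kbps.
    add_delay/remove_delay: callbacks that apply or clear a small
    per-connection latency penalty."""
    usage = sum(flows.values())
    if usage > CONGESTION_RATIO * TRUNK_CAPACITY_KBPS:
        # Penalize the biggest individual connections, not whole users.
        for flow_id in sorted(flows, key=flows.get, reverse=True)[:PENALIZED_FLOWS]:
            add_delay(flow_id)
    else:
        # Congestion has abated: release everything right away.
        for flow_id in flows:
            remove_delay(flow_id)
```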

The value to you is that you get the best possible usage of your network bandwidth without micromanaging everything. There are no queues to manage. We have been using this model with ISPs for six years.

If you do want to put additional rules onto users, you can do that with individual rate limits or VLAN limits.

Lastly, if you have a very high priority client that must run video, you can give them an exemption if needed.

To control P2P, you can use our connection limits, as most P2P clients overload APs with massive numbers of connections. We have a fairly simple, smart way to spot this type of user and keep them from crashing your network.
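
As a rough illustration of that approach, the sketch below counts concurrent connections per host and flags anyone over a threshold. The threshold and data shapes are hypothetical, not NetEqualizer's actual connection-limit settings.

```python
# Hypothetical per-host connection-count check, illustrating the idea of
# spotting p2p clients by the sheer number of connections they open.
from collections import Counter

CONNECTION_LIMIT = 50   # illustrative threshold; real deployments tune this

def hosts_over_limit(connections):
    """connections: iterable of (src_ip, dst_ip, dst_port) tuples."""
    counts = Counter(src for src, _dst, _port in connections)
    return [ip for ip, n in counts.items() if n > CONNECTION_LIMIT]
```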

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer's unique "behavior shaping" technology dynamically and automatically gives priority to latency-sensitive applications, such as VoIP and email.

NetEqualizer Bandwidth Controller POE Unit a Hit with Customers


Editor's Note: This post was just pulled off of DSL Reports.

NetEqualizer POE units list at $1,499 and serve as a great QoS device for the SOHO/small business user.

We’ve ordered 4 of these and deployed 2 so far. They work exactly like the 1U rackmount NE2000 that we have in our NOC, only the form factor is much smaller (about 6x6x1) and they use POE or a DC power supply. I amp clamped one of the units, and it draws about 7 watts.

We have a number of remote APs where we don’t have the physical space and/or power sources (i.e., solar powered) to accommodate the full size Netequalizer. Also, because of our network topology, it makes sense to have these units close to the AP and not at our border. These units are the perfect solution for these locations.

Our service area is mostly in a forest, so we have a number of Trango 900 MHz APs. These units can cut through the trees well, but they only have about 2.5 Mbps available on them (they're rated at 3 Mbps, but we've tested their actual throughput at 2.5 Mbps). We have our customers set for 768k, so it doesn't take too many YouTube and Netflix streams to kill the performance on these APs. We were using Mikrotiks to throttle the customers (using bursting to give them about 10 minutes @768k, then throttling them to around 300k). While this helped to keep the bandwidth hogs from individually killing the performance, it sometimes made matters worse.

For example, if a customer started downloading some 2 GB file at 10:00pm, it would take them until 1:00pm the next day to finish. As such, they would have disrupted services in the morning and early afternoon. If we had given this customer their full 768k, they would have finished this download before 4:00am and would never have been a disruption.
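
The arithmetic behind that comparison checks out; here is a quick back-of-the-envelope calculation (assuming 1 GB = 1,000 MB and ignoring protocol overhead):

```python
# Back-of-the-envelope check of the 2 GB download times quoted above,
# assuming 1 GB = 1,000 MB and rates in kilobits per second.

FILE_GB = 2
file_kilobits = FILE_GB * 1000 * 1000 * 8          # 16,000,000 kb

# Burst-then-throttle case: 10 minutes at 768 kbps, then 300 kbps.
burst_kilobits = 768 * 10 * 60
throttled_hours = (10 * 60 + (file_kilobits - burst_kilobits) / 300) / 3600

# Unthrottled case: the full 768 kbps the whole way.
full_rate_hours = (file_kilobits / 768) / 3600

print(f"burst then 300k: {throttled_hours:.1f} hours")   # ~14.6 h, done around 12:30 pm
print(f"steady 768k:     {full_rate_hours:.1f} hours")   # ~5.8 h, done before 4:00 am
```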

With the Mikrotik solution, there were also too many times when less than 768k was available for the next customer, because a number of customers locked at 300k were tying up much of the bandwidth. So the customer casually hitting a web page was seeing poor performance (as were the hogs). In general, I wasn't happy with the service we were delivering.

The Netequalizer has resulted in dramatically improved service to our customers. Most of the time, our customers are seeing their full bandwidth. The only time they don't see it now is when they're downloading big files. And when they don't see full performance, it's only for the brief period that the AP is approaching saturation. The available bandwidth is re-evaluated every 2 seconds, so the throttling periods are often brief.

Bottom line to this is that we can deliver significantly more data through the same AP. The customers hitting web pages, checking e-mail, etc. virtually always see full bandwidth, and the hogs don’t impact these customers. Even the hogs see better performance (although that wasn’t one of my priorities).

I didn’t tell any customers that I was deploying the Netequalizers. Without solicitation, I’ve had a number of them comment that the service seems faster lately. It sure is fun to hear unsolicited compliments…

The only tweak of significance I made to the default setup was to change the MOVING_AVG from 8 to 29 (it can be set higher, but you can't do it in the web interface). This makes it so that the Netequalizer considers someone to be a hog when their average data rate over the last 29 seconds is greater than HOGMIN (which we've left at 12,000 bytes per second, or 96 kbps). Given that our customers are set for 768k, this means they can burst at full rate for a little under 4 seconds before they are considered a hog (approximately 350 kilobytes of data). The default setting of 8 would allow approximately 1 second at full bandwidth (a little under 100 KB). By making this change, almost all web pages are never subject to throttling. It also means that most bandwidth test servers will not see any throttling. The change puts us more at risk of peaking out the AP (since fewer customers may be subject to throttling), but we've seen that the throttling usually kicks in long before we see that problem.
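
The burst math in that paragraph can be reproduced directly. The sketch below uses the values quoted in the post (HOGMIN of 12,000 bytes per second and a 768 kbps customer rate); it is just arithmetic, not NetEqualizer configuration syntax.

```python
# Reproduces the poster's burst arithmetic: how much a 768 kbps customer
# can transfer before their moving average crosses HOGMIN.

HOGMIN_BYTES_PER_SEC = 12_000                        # 96 kbps, as in the post
CUSTOMER_KBPS = 768
customer_bytes_per_sec = CUSTOMER_KBPS * 1000 / 8    # 96,000 bytes/second

for moving_avg_sec in (8, 29):
    budget_bytes = HOGMIN_BYTES_PER_SEC * moving_avg_sec
    burst_sec = budget_bytes / customer_bytes_per_sec
    print(f"MOVING_AVG={moving_avg_sec:2d}: "
          f"~{budget_bytes / 1000:.0f} KB (~{burst_sec:.1f} s at full rate) "
          "before the flow counts as a hog")
# MOVING_AVG= 8: ~96 KB  (~1.0 s at full rate)
# MOVING_AVG=29: ~348 KB (~3.6 s at full rate)
```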

The only feature I'd like to see in these units is a "half duplex" mode. The Netequalizers have separate upload and download pools. This works fine for most ISPs using typical full-duplex circuits. However, most hardware that WISPs use is half duplex, so our Trangos have 2.5 Mbps available TOTAL for upload and download combined. In order to have the Netequalizer throttle well, I configured it so that the Trangos had 1.9 Mbps down and 0.6 Mbps up. I would prefer to have a single 2.5 Mbps pool that throttles only when download + upload approaches 2.5 Mbps. If we had this feature, we could move even more data through the Trangos.
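
For what it's worth, the requested behavior could be expressed as a single shared-pool check, sketched below. This is purely an illustration of the feature request, not an existing NetEqualizer option; the names and thresholds are assumptions.

```python
# Sketch of the requested "half duplex" mode: one shared 2.5 Mbps pool
# that triggers shaping when upload + download together approach the
# AP's capacity, instead of separate 1.9 / 0.6 Mbps pools.
# Names and thresholds are illustrative, not NetEqualizer settings.

AP_CAPACITY_KBPS = 2500
CONGESTION_RATIO = 0.90

def pool_congested(download_kbps: float, upload_kbps: float) -> bool:
    """Return True when the combined half-duplex load nears capacity."""
    return (download_kbps + upload_kbps) >= CONGESTION_RATIO * AP_CAPACITY_KBPS
```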

