NetEqualizer News: March 2016

We hope you enjoy this month’s NetEqualizer Newsletter. Highlights include features from Release 8.4, our 2016 Leasing Program, and a presentation highlighting the NetEqualizer at the 2016 ASCUE Conference.

March 2016
Release 8.4 is almost here!
Greetings! Enjoy another issue of NetEqualizer News.

I write this today in the midst of a spring blizzard in Colorado. So far it appears that I have at least 15 inches of snow and drifts up to three feet outside my house, while it continues to blow more snow in at 35 miles an hour. Just another typical March day in Colorado! I was hoping to talk about spring in this newsletter, but now it seems far away.

This month we are talking about our upcoming release, slated for May, which features a lot of cool Usability Enhancements. Read below to learn more. We also continue our discussion of how the NetEqualizer is Cloud-Ready, as all things Cloud continue to be top-of-mind for all of us.

We are excited to announce that we will be represented at the ASCUE Conference in June. Join Young Harris College at their talk featuring the NetEqualizer.

And finally, we share more news about our 2016 Leasing Program, and how we are keeping bandwidth shaping affordable.

And remember we are now on Twitter! You can now follow us @NetEqualizer.

We love it when we hear back from you – so if you have a story you would like to share about how we have helped you, let us know. Email me directly – I would love to hear from you!

– Art Reisman (CTO)

In this issue:

:: NetEqualizer Release 8.4 – Enhanced Usability – Is Almost Ready!
:: Keeping Bandwidth Shaping Affordable
:: Join a Presentation on NetEqualizer at ASCUE in June 2016
:: Six Ways to Save with Cloud Computing

NetEqualizer Release 8.4 – Enhanced Usability – Is Almost Ready!
A Complete GUI Redesign!

We recently had the chance to kick the tires on our new 8.4 Release interface, and it really has some significant wow-factor features. In hindsight, perhaps we should have called this NetEqualizer 9.0 and not just lowly 8.4. We have been talking about this release as a GUI Redesign & Pool Enhancements, but I really think 8.4 is a release full of Usability Enhancements that will make it easier to manage and configure your NetEqualizer.
The biggest changes center on the regular NetEqualizer GUI. We have transitioned everything to share the same look and feel as RTR. Here are some of the pages and features we are most excited about!

1) Edit traffic limits on the fly without having to add/remove them one at a time! The screenshot below shows the Pool/VLAN shared limit interface. You can see the Pools, their names, and their associated members.

2) We added a cool new dashboard that serves as the homepage for NetEqualizer management (license key information blocked out in grey).

3) The new GUI also has an easy way to set the time and pick a timezone – no more logging in to the NetEqualizer terminal!

4) You can now choose your units for the entire interface! This includes units for the configuration and RTR.

Check back next month for an update on more exciting changes planned for 8.4!

Our time frame for General Acceptance of this release is May of 2016.

As with all software releases, the 8.4 Release will be free to all customers with valid NetEqualizer Software and Support (NSS).
Keeping Bandwidth Shaping Affordable
NetEqualizer Leasing Program

At APconnections, we are proud of our reputation for offering affordable bandwidth shaping solutions. In the summer of 2013, we decided that we could help our customers that need to better align costs with recurring revenue by offering a Leasing Program.

We are happy to announce that we have enhanced our lease offerings in 2016. Our “Standard” lease now comes with a 1Gbps license, and leases for $500 per month. Adding 1Gbps fiber at any of our lease levels just bumps up the price by $100 per month. And for those needing maximum performance, we now also give you access to an Enterprise-class NE4000 with our 5Gbps license and 10Gbps fiber.

If leasing is of interest to you, and you would like to learn more, you can view our Leasing Program agreement here.

Please note that the NetEqualizer Leasing Program is generally available to customers in the United States and Canada. If you are outside of these countries, contact us to see if leasing is available in your area.


Join a Presentation on NetEqualizer at ASCUE in June 2016
Association Supporting Computer Users in Education

We are excited to announce that one of our long-time customers, Hollis Townsend, Director of Technology Support and Operations at Young Harris College, will be sharing his experience with the NetEqualizer in a talk at ASCUE, June 12-16, 2016 in Myrtle Beach, South Carolina.

Young Harris has been using NetEqualizer to solve their network congestion issues since July 2007. They have upgraded their NetEQ as their network has grown over the years, and currently run an NE3000 with a 1Gbps license.

We are also happy to announce that APconnections, home of the NetEqualizer, will be a Silver Sponsor at the ASCUE Conference. We will be giving away a great door prize – a Fitbit fitness watch!

If you use technology in higher education, you may want to consider attending ASCUE this June. And if you have ever wanted to talk to a colleague about their experience with the NetEqualizer, please join Hollis’ presentation. His presentation is tentatively titled “Shaping Bandwidth – Learning to Love Netflix on Campus”.

ASCUE is the Association Supporting Computer Users in Education and they have been around since 1968. Members hail from all over North America. ASCUE’s mission is to provide opportunities for resource-sharing, networking, and collaboration within an environment that fosters creativity and innovation in the use of technology within higher education.

Click here to learn more about ASCUE or register for the June conference.


Six Ways to Save with Cloud Computing
NetEqualizer Looks to the Clouds

We are continuing our focus on the cloud for NetEqualizer. The NetEqualizer is now cloud ready – as we’ve written about in previous newsletters. There are a lot of benefits to using the cloud in general. Here are just a few:

1) Fully utilized hardware
2) Lower power costs
3) Lower people costs
4) Zero capital costs
5) Resilience without redundancy
6) Lower network costs

The last one, lower network costs, is interesting. Since your business services are in the cloud, you can ditch all of those expensive MPLS links that you use to privately tie your offices to your back-end systems, and replace them with lower-cost commercial Internet links. You do not really need more bandwidth, just better bandwidth performance. The commodity Internet links are likely good enough, but when you move to the Cloud, you will need a smart bandwidth shaper.

Your link to the Internet becomes even more critical when you go to the Cloud. But that does not mean bigger and more expensive pipes. Cloud applications are very lean, and you do not need a big pipe to support them. You just need to make sure recreational traffic does not cut into your business application traffic.

The NetEqualizer fits perfectly as the bandwidth shaping product in the above infrastructure. Let us know if you have any questions about the cloud-ready NetEqualizer!


Best Of Blog
How to Build Your Own Speed Test Tool

By Art Reisman – CTO – APconnections

Editor’s Note: We often get asked to “prove” the NetEqualizer is making a difference in end user experience. The tool description and method outlined in our blog post can be used to objectively demonstrate the NetEqualizer’s value. Let us know if you need any help setting it up.

Most speed test sites measure the download speed of a large file from a server to your computer. There are two potential problems with using this metric.

1) ISPs can design their networks so these tests show best case results.
2) Humans are much more sensitive to the load time of interactive sites.

A better test of your perceived speed is how long it takes to load up a new web page…
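As a rough illustration of the idea (not the tool described in the blog post itself), page load time can be sampled with a few lines of Python; the URL and repetition count here are placeholders:

```python
# Minimal sketch: measure perceived speed by page load time rather than
# bulk download rate. Illustrative only, not the NetEqualizer tool.
import time
import urllib.request

def page_load_time(url, timeout=10):
    """Seconds to fetch the full page body, like a browser pulling the HTML."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # pull the entire response before stopping the clock
    return time.monotonic() - start

def average(samples):
    """Mean of several timings; repeat the fetch to smooth out jitter."""
    return sum(samples) / len(samples)

# Example usage (requires network access):
#   times = [page_load_time("https://example.com/") for _ in range(3)]
#   print(f"avg load time: {average(times):.2f}s")
```

Sampling several times and averaging matters: a single fetch can be skewed by DNS caching or a momentary burst of cross traffic.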

Photo Of The Month
Have you ever wondered what happens to balloons when they are released into the sky? The remnants of this balloon landed right in front of a staff member on a clear day while hiking Black Star Canyon in Orange County, CA. Balloons like this are actually an environmental disaster, as they often end up in oceans and are eaten by sea life and wildlife.

NetEqualizer News: February 2016

We hope you enjoy this month’s NetEqualizer Newsletter. Highlights include discussions on Cloud Computing, the new VM release, and updates on Software Release 8.4.

February 2016
NetEqualizer-VM is Ready, QoS for your Cloud!
Greetings! Enjoy another issue of NetEqualizer News.

February is off to a snowy start in Colorado this year, with a major snowstorm on February 1st dumping 16+ inches of snow in Boulder! While we were snowed in, I had time to reflect and think about where bandwidth shaping is headed, and how we are well-positioned for the industry transition to Cloud Computing. In this month’s newsletter you can read how the NetEqualizer is “Cloud Ready”.

We are now ready with our first VM release (NetEqualizer-VM); you can read all about it below. And finally, we share more news about our 8.4 Release – Enhanced Pools & Other GUI Features.

And remember we are now on Twitter! You can now follow us @NetEqualizer.

We love it when we hear back from you – so if you have a story you would like to share about how we have helped you, let us know. Email me directly – I would love to hear from you!

– Art Reisman (CTO)

NetEqualizer-VM is Ready!
NetEqualizer-VM Release Ready for Networks <= 100 Mbps
We are excited to announce that our VM release is now ready! If you are already running virtual machines in your data center, this may be a good fit for you.

The first release is certified for VM systems for up to 100 megabits of throughput.

Base pricing will run at $3,500 USD per year. However, for a limited time, we are running a special pre-order price of $2,500 USD per year.

Please note: The first year is due prior to delivery of the software. We offer a 30 day trial with a $500 USD non-refundable support charge.

Your VM server will need to meet a minimum specification to run the NetEqualizer shaping solution. We have detailed specifications for any VM system – contact us for details!

Release 8.4 Update
Enhanced Pools + GUI Redesign

In previous months’ newsletters we talked about changes coming to the regular NetEqualizer GUI. Over the next couple of months, we’ll highlight those changes here.

One of the changes we are very excited about is the ability to manage Pools on the fly, and also the ability to name them! See the screenshot below:


One of the best parts of this screen is that you can manage all Pools and all Pool Members at once. For example, see Pool 1 expanded to show the two Pool Members. You can also change the limits for the Pool, add new Pools, and delete Pools that you no longer need.

We are also enhancing the new user interface with four primary menu options:


This will help guide first-time users through the process of using NetEqualizer, and will also help separate the functionality into the main usage categories.

Check back next month for an update on more exciting changes planned for 8.4!

Our time frame for General Acceptance of this release is April/May of 2016.

As with all software releases, the 8.4 Release will be free to all customers with valid NetEqualizer Software and Support (NSS).


Next Generation Bandwidth Control
NetEqualizer is Cloud Ready

We received a call today from one of the largest Tier 1 providers in the world. The salesperson on the other end was lamenting his inability to sell cloud services to his customers. His service offerings were hot, but the customers’ Internet connections were not. Until his customers resolved their congestion problems, they were in a holding pattern for new cloud services.

As a brief aside, here is a list of what a Next Generation Bandwidth Controller can do:
1. Next Generation Bandwidth Controllers must be able to mitigate traffic flows originating from the Internet such that important Cloud Applications get priority.
2. Next Generation Bandwidth Controllers must NOT rely on Layer 7 DPI technology to identify traffic (too much encryption and tunneling today for this to be viable).
3. Next Generation Bandwidth Controllers must hit a price range of $5k to $10k USD for medium to large businesses.
4. Next Generation Bandwidth Controllers must not require babysitting and adjustments from the IT staff to remain effective.
5. Next Generation Bandwidth Controllers should adopt a Heuristics-based decision model (like the one used in the NetEqualizer).

As for those businesses mentioned by the sales representative, when they moved to the cloud, many of them ran into bottlenecks. The bottlenecks were due to iOS updates and recreational “crap” killing the cloud application traffic on their shared Internet trunk.

Their original assumption was they could use the QoS on their routers to mitigate traffic. After all, that worked great when all they had between them and their remote business logic was a nailed-up MPLS network. Because it was a private corporate link, they had QoS devices on both ends of the link and no problems with recreational congestion.

Moving to the Cloud was a wake-up call! Think about it: when you go to the cloud, you only control one end of the link. This means that your router-based QoS is no longer effective, and incoming traffic will crush you if you do not do something different.

The happy ending is that we were able to help our friend at BT telecom, by mitigating his customers’ bottlenecks. Contact us if you are interested in more details.


Best Of Blog

Capacity Planning for Cloud Applications
By Art Reisman – CTO – APconnections

The main factors to consider when capacity planning your Internet Link for cloud applications are:

1) How much bandwidth do your cloud applications actually need?

Typical cloud applications require about 1/2 of a megabit or less. There are exceptions to this rule, but for the most part a good cloud application design does not involve large transfers of data. QuickBooks, Salesforce, Gmail, and just about any cloud-based database will be under the 1/2 megabit guideline. The chart below really brings to light the difference between your typical, interactive Cloud Application and the types of applications that will really eat up your data link.
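As a rough sanity check, the 1/2 megabit guideline translates into simple arithmetic. The 25% headroom factor below is an illustrative assumption, not part of the guideline:

```python
# Back-of-the-envelope sizing under the ~1/2 megabit per interactive
# cloud session guideline. Headroom factor is an assumption for burstiness.
APP_MBPS = 0.5  # typical interactive cloud application (per the guideline)

def link_needed(concurrent_sessions, headroom=1.25):
    """Mbps of Internet link needed for interactive cloud traffic."""
    return concurrent_sessions * APP_MBPS * headroom

print(link_needed(40))  # 25.0 Mbps for 40 concurrent sessions
```

The point of the arithmetic is that even dozens of simultaneous interactive sessions fit comfortably in a modest link; it is the large transfers and video that blow the budget.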

Photo Of The Month
This closeup of a local grasshopper was taken by a staff member while in Kansas, a state in the central United States. We hope this picture doesn’t bug you.

You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks

By Art Reisman

About 10 years ago, we had this idea for QoS across an Internet link. It was simple and elegant, and worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (diffserv) techniques over any public or semi public Internet link, but using our techniques we have proven the impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that setting TOS bits is only effective when you control all sides of a conversation on a link, and this is not possible on most Internet links (think cloud computing and wireless mesh networks). For standard TOS services to work, you must control all the equipment between the two end points. All it takes is one router in the path of a VoIP conversation to ignore a TOS bit, and its purpose becomes obsolete. Thus TOS bits for priority are really only practical inside a corporate LAN/WAN topology.

Look at the root cause of poor quality services and you will find alternative solutions.

Most people don’t realize that the problem with congested VoIP, on any link, is that their VoIP packets are getting crowded out by larger downloads and things like recreational video (this is also true for any interactive cloud access congestion). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of an IP (local) talking to another IP (remote Internet). When we see a large stream dominating the link, we step back and ask: is the link congested? Is that download crowding out other time-sensitive transactions such as VoIP? If the answer is yes to both questions, then we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple, and does not seem plausible, but it works. It works very well, with just one device in the link, irrespective of any other complex network engineering. It works with minimal setup. It works over MPLS links. I could go on and on; perhaps the only reason you have not heard of it is that it goes against the grain of what most vendors are selling – large orders for expensive high-end routers using TOS bits.
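The decision loop described above can be sketched in a few lines of Python. This is an illustrative toy, not NetEqualizer code; the trunk capacity, the 85% congestion threshold, and the 25% "hog" share are made-up numbers:

```python
# Toy sketch of behavior-based ("equalizing") shaping: only intervene when
# the link is congested, and only against streams dominating the trunk.
TRUNK_CAPACITY_BPS = 10_000_000   # assumed link size (illustrative)
CONGESTION_RATIO = 0.85           # link counts as congested above 85% use
HOG_SHARE = 0.25                  # a stream is a "hog" above 25% of trunk

def throttle_decisions(streams):
    """streams: dict mapping (local_ip, remote_ip) -> current bits/sec.
    Returns the set of stream keys that should be penalized."""
    total = sum(streams.values())
    if total < CONGESTION_RATIO * TRUNK_CAPACITY_BPS:
        return set()              # link not congested: leave everyone alone
    return {key for key, bps in streams.items()
            if bps > HOG_SHARE * TRUNK_CAPACITY_BPS}

streams = {
    ("10.0.0.5", "203.0.113.7"): 6_000_000,   # large download
    ("10.0.0.9", "198.51.100.2"): 80_000,     # VoIP-sized stream
    ("10.0.0.12", "192.0.2.44"): 3_000_000,   # video stream
}
print(throttle_decisions(streams))  # the download and the video, never the VoIP
```

Note that the small VoIP-sized stream is never touched: it gets priority implicitly, simply because the penalty only ever lands on the large flows crowding it out.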

Related article QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where sending or receiving IP addresses are obfuscated.

Coming this winter we plan to improve upon our QoS techniques so we can drill down inside of Mesh and Cloud networks a bit better.

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect has been that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is because we currently base our decisions on a pair of IPs talking to each other, without considering IP port numbers. In a cloud or mesh network, services are often trunked across a tunnel using the same IP: the data streams are bundled together over one common pair of IPs, then broken out by IP port so they can be routed to their final destinations. For example, in some cloud computing environments there is no way to differentiate a video stream within the tunnel from a smaller data access session – both can be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give it priority) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel, sharing IPs with other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are often sorted out on different ports at the ends of the tunnel. With our new release coming this winter, we will start to look at streams as IP and port number, allowing much greater resolution for QoS inside the Cloud and inside your mesh network. Stay tuned!
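The difference between the two stream definitions can be shown with a small sketch. The packet tuples and port numbers below are hypothetical, not our actual implementation:

```python
# Illustrative only: keying streams by (IP, port) instead of by IP pair,
# so services tunneled over one shared IP pair separate into distinct streams.
from collections import defaultdict

def tally(packets, with_ports=False):
    """Sum bytes per stream. Each packet: (src_ip, src_port, dst_ip, dst_port, nbytes)."""
    totals = defaultdict(int)
    for src, sport, dst, dport, nbytes in packets:
        key = (src, sport, dst, dport) if with_ports else (src, dst)
        totals[key] += nbytes
    return dict(totals)

# Two services tunneled across the same IP pair: video on port 9000, data on 9001.
packets = [
    ("10.0.0.2", 5001, "198.51.100.9", 9000, 1500),
    ("10.0.0.2", 5001, "198.51.100.9", 9000, 1500),
    ("10.0.0.2", 5002, "198.51.100.9", 9001, 200),
]
print(len(tally(packets)))                   # 1 stream: services indistinguishable
print(len(tally(packets, with_ports=True)))  # 2 streams: video vs. data
```

With the IP-pair key, the video and the small data session collapse into one stream and cannot be shaped independently; adding the port to the key separates them.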

Cloud Computing – Do You Have Enough Bandwidth? And a Few Other Things to Consider

The following is a list of things to consider when using a cloud-computing model.

Bandwidth: Is your link fast enough to support cloud computing?

We get asked this question all the time: What is the best-practice standard for bandwidth allocation?

Well, the answer depends on what you are computing.

– First, there is the application itself.  Is your application dynamically loading up modules every time you click on a new screen? If the application is designed correctly, it will be lightweight and come up quickly in your browser. Flash video screens certainly spruce up the experience, but I hate waiting for them. Make sure when you go to a cloud model that your application is adapted for limited bandwidth.

– Second, what type of transactions are you running? Are you running videos and large graphics or just data? Are you doing photo processing from Kodak? If so, you are not typical, and moving images up and down your link will be your constraining factor.

– Third, are you sharing general Internet access with your cloud link? In other words, is that guy on his lunch break watching a replay of royal wedding bloopers on YouTube interfering with your access?

The good news is (assuming you will be running a transactional cloud computing environment – e.g. accounting, sales database, basic email, attendance, medical records – without video clips or large data files), you most likely will not need additional Internet bandwidth. Obviously, we assume your business has reasonable Internet response times prior to transitioning to a cloud application.

Factoid: Typically, for a business in an urban area, we would expect about 10 megabits of bandwidth for every 100 employees. If you fall below this ratio, 10/100, you can still take advantage of cloud computing, but you may need some form of QoS device to prevent recreational or non-essential Internet access from interfering with your cloud applications. See our article on contention ratio for more information.
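The 10/100 rule of thumb is easy to turn into a quick check. A minimal sketch, with function names of our own invention:

```python
# Back-of-the-envelope check of the "10 Mbps per 100 employees" rule of thumb.
def recommended_mbps(employees, mbps_per_100=10):
    """Suggested Internet link size for an urban business, per the 10/100 ratio."""
    return employees * mbps_per_100 / 100

def meets_guideline(link_mbps, employees):
    """True if the existing link is at or above the 10/100 guideline."""
    return link_mbps >= recommended_mbps(employees)

print(recommended_mbps(250))     # 25.0 Mbps suggested for 250 employees
print(meets_guideline(20, 250))  # False: below 10/100, consider a QoS device
```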

Security: Can you trust your data in the cloud?

For the most part, chances are your cloud partner will have much better resources to deal with security than your enterprise, as this should be a primary function of their business. They should have an economy of scale – whereas most companies view security as a cost and are always juggling those costs against profits, cloud-computing providers will view security as an asset and invest more heavily.

We addressed security in detail in our article how secure is the cloud, but here are some of the main points to consider:

1) Transit security: moving data to and from your cloud provider. How are you going to make sure this is secure?
2) Storage: handling of your data at your cloud provider, is it secure once it gets there from an outside hacker?
3) Inside job: this is often overlooked, but can be a huge security risk. Who has access to your data within the provider network?

Evaluating security when choosing your provider.

You would assume the cloud company, whether it be Apple or Google (Gmail, Google Calendar), uses some best practices to ensure security. My fear is that ultimately some major cloud provider will fail miserably, just like banks and brokerage firms. Over time, one or more of them will become complacent. Here is my checklist of what I would want in my trusted cloud computing partner:

1) Do they have redundancy in their facilities and their access?
2) Do they screen their employees for criminal records and drug usage?
3) Are they willing to let you, or a truly independent auditor, into their facility?
4) How often do they back-up data and how do they test recovery?

Big Brother is watching.

This is not so much a traditional security threat, but if you are using a free service you are likely going to agree, somewhere in their fine print, to expose some of your information for marketing purposes. Ever wonder how those targeted ads appear that are relevant to the content of the mail you are reading?

Link reliability.

What happens if your link goes down or your provider link goes down, how dependent are you? Make sure your business or application can handle unexpected downtime.

Editor’s note: unless otherwise stated, these tips assume you are using a third-party provider for resources and applications, and are not a large enterprise running a centralized service of your own. For example, using QuickBooks over the Internet would be considered a cloud application (and one that I use extensively in our business); however, centralizing Microsoft Excel on a corporate server with thin terminal clients would not be cloud computing.

How Safe is The Cloud?

By Zack Sanders, NetEqualizer Guest Columnist

There is no question that cloud-computing infrastructures are the future for businesses of every size. The advantages they offer are plentiful:

  • Scalability – IT personnel used to have to scramble for hardware when business decisions dictated the need for more servers or storage. With cloud computing, an organization can quickly add and subtract capacity at will. New server instances are available within minutes of provisioning them.
  • Cost – For a lot of companies (especially new ones), the prospect of purchasing multiple $5,000 servers (and paying to have someone maintain them) is not very attractive. Cloud servers are very cheap – and you only pay for what you use. If you don’t require a lot of storage space, you can pay around 1 cent per hour per instance. That’s roughly $8/month. If you can’t incur that cost, you should probably reevaluate your business model.
  • Availability – In-house data centers experience routine outages. When you outsource your data center to the cloud, everything server related is in the hands of industry experts. This greatly increases quality of service and availability. That’s not to say outages don’t occur – they do – just not nearly as often or as unpredictably.

While it’s easy to see the benefits of cloud computing, it does have its potential pitfalls. The major questions that always accompany cloud computing discussions are:

  • “How does the security landscape change in the cloud?” – and
  • “What do I need to do to protect my data?”

Businesses and users are concerned about sending their sensitive data to a server that is not totally under their control – and they are correct to be wary. However, when taking proper precautions, cloud infrastructures can be just as safe – if not safer – than physical, in-house data centers. Here’s why:

  • They’re the best at what they do – Cloud computing vendors invest tons of money securing their physical servers that are hosting your virtual servers. They’ll be compliant with all major physical security guidelines, have up-to-date firewalls and patches, and have proper disaster recovery policies and redundant environments in place. From this standpoint, they’ll rank above almost any private company’s in-house data center.
  • They protect your data internally – Cloud providers have systems in place to prevent data leaks or access by third parties. Proper separation of duties should ensure that root users at the cloud provider couldn’t even penetrate your data.
  • They manage authentication and authorization effectively – Because logging and unique identification are central components to many compliance standards, cloud providers have strong identity management and logging solutions in place.

The above factors provide a lot of peace of mind, but with security it’s always important to layer approaches and be diligent. By layering, I mean that the most secure infrastructures have layers of security components so that, if one were to fail, the next would thwart the attack. This diligence is just as important for securing your external cloud infrastructure. No environment is ever immune to compromise. A key security aspect of the cloud is that your server is outside of your internal network, and thus your data must travel public connections to and from your external virtual machine. Companies with sensitive data are very worried about this. However, when taking the following security measures, your data can be just as safe in the cloud:

  • Secure the transmission of data – Set up SSL connections for sensitive data, especially logins and database connections.
  • Use keys for remote login – Utilize public/private keys, two-factor authentication, or other strong authentication technologies. Do not allow remote root login to your servers. Brute force bots hound remote root logins incessantly in cloud provider address spaces.
  • Encrypt sensitive data sent to the cloud – SSL will take care of the data’s integrity during transmission, but it should also be stored encrypted on the cloud server.
  • Review logs diligently – use log analysis software ALONG WITH manual review. Automated technology combined with a manual review policy is a good example of layering.
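As a small illustration of the "automated plus manual" layering idea, here is a hypothetical log-scanning helper that flags the brute-force remote-login traffic mentioned above. The sshd log format and the threshold are assumptions:

```python
# Hypothetical log-review helper: flag IPs with repeated failed logins,
# the kind of brute-force bot traffic common in cloud address spaces.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def suspicious_ips(log_lines, threshold=3):
    """Return IPs with at least `threshold` failed login attempts."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(2)] += 1  # group(2) is the source IP
    return {ip for ip, n in hits.items() if n >= threshold}

log = [
    "sshd[101]: Failed password for root from 203.0.113.5 port 40112 ssh2",
    "sshd[102]: Failed password for root from 203.0.113.5 port 40113 ssh2",
    "sshd[103]: Failed password for root from 203.0.113.5 port 40114 ssh2",
    "sshd[104]: Failed password for invalid user admin from 198.51.100.7 port 2201 ssh2",
    "sshd[105]: Accepted publickey for deploy from 192.0.2.10 port 5022 ssh2",
]
print(suspicious_ips(log))  # {'203.0.113.5'}
```

A script like this is the automated layer; the manual review policy catches whatever pattern the regex was never written to find.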

So, when taking proper precautions (precautions that you should already be taking for your in-house data center), the cloud is a great way to manage your infrastructure needs. Just be sure to select a provider that is reputable and make sure to read the SLA. If the hosting price is too good to be true, it probably is. You can’t take chances with your sensitive data.

About the author:

Zack Sanders is a Web Application Security Specialist with Fiddler on the Root (FOTR). FOTR provides web application security expertise to any business with an online presence. They specialize in ethical hacking and penetration testing as a service to expose potential vulnerabilities in web applications. The primary difference between the services FOTR offers and those of other firms is that they treat your website like an ACTUAL attacker would. They use a combination of hacking tools and savvy know-how to try and exploit your environment. Most security companies just run automated scans and deliver the results. FOTR is for executives that care about REAL security.
