NetEqualizer: Coming in 2013
We are always looking to improve our NetEqualizer product line so that our customers get maximum value from their purchase. Part of this process is brainstorming changes and additional features that help meet our customers’ evolving needs.
Here are a couple of ideas for changes to NetEqualizer that will arrive in 2013. Stay tuned to NetEqualizer News and our blog for updates on these features!
1) NetEqualizer in Mesh Networks and Cloud Computing
As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, our stream-based behavior shaping will need to evolve.
This is because we base our decision of whether or not to shape on a pair of IP addresses talking to each other, without considering port numbers. In cloud or mesh networks, however, services are sometimes trunked across a tunnel using the same IP address, and the individual streams are only broken out by port number as they cross the trunk.
So, for example, say you have a video server as part of a cloud computing environment. Without any NAT, on a wide-open network, we would be able to give that video server priority simply by knowing its IP address. In a meshed network, however, the IP connection might look the same as other streams, and we’d have no way to differentiate it by address alone. The services within a tunnel may share an IP address, but the port number still tells them apart.
Thus, in 2013 we will no longer shape just on IP to IP, but will evolve to offer shaping on IP(Port) to IP(Port). The result will be quality of service improvements even in heavily NAT’d environments.
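To make the difference concrete, here is a minimal sketch in Python (our illustration only, not NetEqualizer source code; the packet fields and values are hypothetical) showing how a flow table keyed on IP pairs merges tunneled streams that an IP(Port) key keeps separate:

```python
# Hypothetical sketch: the effect of adding ports to the flow key.
from collections import defaultdict

def key_ip_only(pkt):
    # IP-to-IP style: every stream between two hosts collapses into one flow.
    return (pkt["src_ip"], pkt["dst_ip"])

def key_ip_port(pkt):
    # IP(Port) to IP(Port): each tunneled service becomes its own flow.
    return (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])

def flows(packets, key_fn):
    bytes_per_flow = defaultdict(int)
    for pkt in packets:
        bytes_per_flow[key_fn(pkt)] += pkt["length"]
    return bytes_per_flow

# Two services trunked across one tunnel: same IP pair, different ports.
packets = [
    {"src_ip": "10.0.0.1", "src_port": 554,
     "dst_ip": "10.0.0.2", "dst_port": 40001, "length": 1400},  # video
    {"src_ip": "10.0.0.1", "src_port": 8080,
     "dst_ip": "10.0.0.2", "dst_port": 40002, "length": 600},   # web
]

print(len(flows(packets, key_ip_only)))  # 1 flow: video is indistinguishable
print(len(flows(packets, key_ip_port)))  # 2 flows: video can get priority
```

With the IP-only key, both streams collapse into one flow, so the video server cannot be singled out for priority; with ports in the key, each tunneled service is visible as its own stream.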
2) 10 Gbps Line Speeds without Degradation
Some of our advantages over the years have been our price point, the techniques we use on standard hardware, and the line speeds we can maintain.
Right now, our NE3000 and above products all have true multi-core processors, and we want to take advantage of that to enhance our packet analysis. While our analysis is very quick and efficient today (sustained speeds of 1 Gbps up and down), in very high-speed networks, multi-core processing will amp up our throughput even more. In order to get to 10 Gbps on our Intel-based architecture, we must do some parallel analysis on IP packets in the Linux kernel.
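As a rough, hypothetical sketch of the idea (userspace Python for readability; the production implementation is parallel code in the Linux kernel), packets can be hashed on their flow key onto per-core workers, so each stream’s state stays on one core and needs no locking:

```python
# Hypothetical userspace sketch of multi-core packet analysis. Hashing on
# the flow key sends every packet of a stream to the same worker, so
# per-flow shaping state never needs a lock and per-stream order is kept.
import queue
import threading

NUM_WORKERS = 4  # one worker per core
work_queues = [queue.Queue() for _ in range(NUM_WORKERS)]

def analyze(pkt):
    """Placeholder for the per-flow behavior-shaping analysis."""
    pass

def worker(q):
    while True:
        pkt = q.get()
        if pkt is None:  # shutdown sentinel
            break
        analyze(pkt)

def dispatch(pkt):
    # Same flow -> same hash -> same worker/core.
    key = (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])
    work_queues[hash(key) % NUM_WORKERS].put(pkt)

threads = [threading.Thread(target=worker, args=(q,)) for q in work_queues]
for t in threads:
    t.start()

dispatch({"src_ip": "10.0.0.1", "src_port": 554,
          "dst_ip": "10.0.0.2", "dst_port": 40001, "length": 1400})

for q in work_queues:
    q.put(None)  # stop the workers
for t in threads:
    t.join()
```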
The good news is that we’ve already developed this technology in our NetGladiator product (check out this blog article here).
Coming in 2013, we’ll port this technology to NetEqualizer. The result will be low-cost bandwidth shapers that can handle extremely high line speeds without degradation. This is important because in a world where bandwidth keeps getting cheaper, the only reason to invest in an optimizer is if it makes good business sense.
We have prided ourselves on smart, efficient, optimization techniques for years – and we will continue to do that for our customers!
Secure Your Web Applications for the Holidays!
We want YOU to be proactive about security. If your business has external-facing web applications, don’t wait for an attack to happen – protect yourself now! It only takes a few hours of our in-house security experts’ time to determine whether your site might have issues. So, for the Holidays, we are offering a $500 upfront security assessment for customers with web applications that need testing!
If it is determined that our NetGladiator product can help shore up your issues, that $500 will be applied toward your first year of NetGladiator Software & Support (GSS). We also offer further consulting based on that assessment on an as-needed basis.
To learn more about NetGladiator, check out our video here.
Or, contact us at:
ips@apconnections.net
-or-
303-997-1300 x123
Don’t Forget to Upgrade to 6.0! (With a Brief Tutorial on User Quotas)
If you have not already upgraded your NetEqualizer to Software Update 6.0, now is the perfect time!
We have discussed the new upgrade in depth in previous newsletters and blog posts, so this month we thought we’d show you how to take advantage of one of the new features – User Quotas.
User quotas are great if you need to track bandwidth usage over time per IP address or subnet. You can also have alerts sent to notify you when a quota has been surpassed.
To begin, navigate to the Manage User Quotas menu on the left. Then start the Quota System using the third option from the top, Start/Stop Quota System.
Now that the Quota System is turned on, we’ll add a new quota. Click on Configure User Quotas and take a look at the first window:

Here are the settings associated with setting up a new quota rule (a worked example follows the list):
Host IP: Enter the host IP address or subnet that you want to give a quota rule to.
Quota Amount: Enter the total number of bytes this quota allows.
Duration: Enter the number of minutes the quota is tracked before it resets (e.g., 1440 for one day, 10080 for one week).
Hard Limit Restriction: Enter the number of bytes/second to allow the user once the quota is surpassed.
Contact: Enter an email address for the person to notify when the quota is passed.
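For example (illustrative values only): to give the host at 192.168.1.50 a 10 GB weekly quota that throttles the user to roughly 1 Mbps once it is exceeded, you would enter Host IP 192.168.1.50, Quota Amount 10000000000 (bytes), Duration 10080 (the minutes in one week), and Hard Limit Restriction 125000 (bytes/second, about 1 Mbps).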
After you populate the form, click Add Rule. Congratulations! You’ve just set up your first quota rule!
From here, you can view reports on your quota users and more.
Remember, the new GUI and all the new features of Software Update 6.0 are available for free to customers with valid NetEqualizer Software & Support (NSS).
If you don’t have the new GUI or are not current with NSS, contact us today!
sales@apconnections.net
-or-
toll-free U.S.: 888-287-2492
worldwide: (303) 997-1300 x103
Best Of The Blog
Internet User’s Bill of Rights
By Art Reisman – CTO – APconnections
This is the second article in our series. Our first was a Bill of Rights dictating the etiquette of software updates. We continue with a proposed Bill of Rights for consumers with respect to their Internet service.
1) Providers must divulge the contention ratio of their service.
At the core of all Internet service is a balancing act between the number of people that are sharing a resource and how much of that resource is available.
For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks – perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town.
The speed you, the customer, can attain is limited to how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers and take advantage of the natural usage behavior, which assumes that not all users are active at one time.
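To put numbers on that example: if 100 subscribers, each sold 1 megabit, share a 10 megabit pipe, the contention ratio is (100 × 1 Mbps) / 10 Mbps = 10:1, and service feels fast as long as only a fraction of those subscribers are pulling their full rate at any one moment.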
The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe, while minimizing service complaints due to a slow network. In some cases, I have seen as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds when compared to dial up…

Four Reasons Why Companies Remain Vulnerable to Cyber Attacks
May 30, 2012 – netequalizer
Over the past year, since the release of our IPS product, we have spent many hours talking to resellers and businesses regarding Internet security. Below are our observations about security investment, and more importantly, non-investment.
1) By far the number one reason why companies are vulnerable is procrastination.
Seeing is believing, and many companies have never been hacked or compromised.
Some clarification here: most attacks do not end in something being destroyed or leave an obvious trail of lifted data. That does not mean they do not happen; it’s just that in many cases there is no immediate ramification, and hence it’s business as usual.
Companies are run by people, and most people are reactive and somewhat single-threaded; they can only address a few problems at a time. Without a compelling, obvious problem, security gets pushed down the list. The exception to the procrastination rule would be verticals such as financial institutions, where security audits are mandatory (more on audits in a bit). Most companies, although aware of the risk factors, are reluctant to spend on a problem that has never happened. In their defense, a company that reacted to all the security FUD might find itself hamstrung and out of business. Sometimes, to be profitable, you have to live with a little risk.
2) Existing security tools are ignored.
Many security suites are just too broad to be relevant. Information overload can lead to a false sense of coverage.
The best analogy I can give is the tornado warning system used by the National Weather Service. Their warnings, although well-intended, have become so unspecific that after a while people ignore them. The same holds true with security tools. In order to impress and outdo one another, security tools have become bloated with quantity, not quality. This overload of data can lead to an overwhelming glut of frivolous information. It would be like a stock analyst predicting every possible outcome and expecting you to invest on that advice. Without a specific, targeted piece of information, your security solution can be a distraction.
3) Security audits are mandated formalities.
In some instances, a security audit is treated as a bureaucratic mandate: when audits are required as a standard, the process of the audit becomes the objective. The soldiers carrying out the process view the completed checklist as the desired result, and thus may not actually counter existing threats. It’s not that the audit has no value, but the audit itself becomes a minimum objective, and most likely it is a broad, cookie-cutter exercise that mostly serves to protect the company or individuals from blame.
4) It may just not be worth the investment.
The cost of getting hacked may be less than the ongoing fees and time required to maintain a security solution. On a mini scale, I followed this advice on my home laptop running Windows: it was easier to reload my system every six months when I got a virus than to mess with all the virus protection being thrown at me, slowing my system down. The same holds true on a corporate scale. Nobody would ever come out and admit this publicly, or make a breach deliberately easy, but it might be more cost-effective to recover from a security breach than to proactively invest in preventing one. What if your customer records get stolen? So what? Consumers hear about the largest banks and government security agencies getting hacked every day. If you are a mid-sized business, it might be more cost-effective to invest in some damage control after the fact than to jeopardize cash flow today.
So what is the future for security products? Well, they are not going to go away. They just need to be smarter, more cost-effective, and turn-key, and then perhaps companies will find the benefit-to-risk ratio more acceptable.
Related article: Security Data Overload