Complimentary NetEqualizer Bandwidth Management Seminar in the UK


Press Release issued via BusinessWire.

April 08, 2015 01:05 AM Mountain Daylight Time

LAFAYETTE, Colo.–(BUSINESS WIRE)–APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is excited to announce its upcoming complimentary NetEqualizer Technical Seminar on April 23rd, 2015, in Oxfordshire, United Kingdom, hosted by Flex Information Technology Ltd.


Join us to meet APconnections’ CTO Art Reisman, a visionary in the bandwidth management industry (check out Art’s blog). This is not a marketing presentation; it is run by and created for technical staff. The Seminar will feature in-depth, example-driven discussions of network optimization and provide participants with a first-hand look at NetEqualizer technology.

Seminar highlights include:

  • Learn how behavior-based shaping provides superior QoS for Internet traffic
  • Optimize business-critical VoIP, email, web browsing, SaaS & web applications
  • Control excessive bandwidth use by non-priority applications
  • Gain control over P2P traffic
  • Get visibility into your network with real-time reporting
  • See the NetEqualizer in action! We will demo a live system.

We welcome both customers and those just beginning to think about bandwidth shaping. The Seminar will take place at 14:30 on Thursday, April 23rd, at Flex Information Technology Ltd in Grove Technology Park, Wantage, Oxfordshire OX12 9FF.

Online registration, including location and driving directions, is available here. There is no cost to attend, but registration is requested. Questions? Contact Paul Horseman at paul@flex.co.uk or call +44(0)333.101.7313.

About Flex Information Technology Ltd
Flex Information Technology is a partnership founded in 1993 to provide maintenance and support services to a wide range of customers with large mission-critical systems, particularly in the newspaper and insurance sectors. In 1998 the company began focusing on support for small to medium businesses. Today we provide “Smart IT Solutions combined with Flexible and Quality Services for Businesses” to a growing base of satisfied customers. We have accounts with leading IT suppliers and hardware and software distributors in the UK.

About APconnections
APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado, USA. Our flexible and scalable network traffic management solutions can be found at thousands of customer sites in public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, Internet providers, libraries, and government agencies on six continents.

Contacts

APconnections, Inc.
Sandy McGregor, 303-997-1300 x104
sandym@apconnections.net
or
Flex Information Technology Ltd
Paul Horseman, +44(0)333 101 7313
paul@flex.co.uk

Will Bandwidth Shaping Ever Be Obsolete?


By Art Reisman

CTO – www.netequalizer.com

I find public forums where universities openly share information about their bandwidth shaping policies to be an excellent source of information. Unlike commercial providers, these user groups have found that technical collaboration is in their best interest, and they often openly discuss current trends in bandwidth control.

A recent university IT user group discussion thread kicked off with the following comment:

“We are in the process of trying to decide whether or not to upgrade or all together remove our packet shaper from our residence hall network.  My network engineers are confident we can accomplish rate limiting/shaping through use of our core equipment, but I am not convinced removing the appliance will turn out well.”

Notice that he is not talking about removing rate limits completely, just backing off from an expensive extra piece of packet shaping equipment and using the simpler rate limits available on his router. The point of referencing this discussion is not so much to weigh the different approaches to rate limiting, but to emphasize that, at this point in time, running wide-open without some sort of restriction is not even being considered.

Despite an 80 to 90 percent reduction in bulk bandwidth prices in the past few years, bandwidth is not quite yet cheap enough for an ISP to run wide-open. Will it ever be possible for an ISP to run wide-open without deliberately restricting their users?

The answer is not likely.

First of all, there seems to be no limit to the ways consumer devices and content providers will conspire to gobble bandwidth. The common assumption is that no matter what an ISP does to deliver higher speeds, consumer appetite will outstrip it.

Yes, an ISP can temporarily leap ahead of demand.

We do have a precedent from several years ago. In 2006, the University of Brighton in the UK was able to unplug our bandwidth shaper without issue. When I followed up with their IT director, he mentioned that their students’ total consumption was capped by the far-end services of the Internet, and thus they did not hit their heads on the ceiling of the local pipes. Running without restriction, 10,000 students were not able to eat up their 1 gigabit pipe! I must caveat this experiment by saying that their university system had invested heavily in subsidized bandwidth and was far ahead of the average ISP curve for the times. Video content services on the Internet were just not that widely used by students then. Such an experiment today would bring a pipe with a similar contention ratio to its knees in a few seconds. I suspect today one would need on the order of 15 to 25 gigabits to run wide open without contention-related problems.

It also seems that we are coming to the end of the line for bandwidth in the wireless world much more quickly than in the wired world.

It is unlikely that consumers are going to carry cables around with their iPads and iPhones to plug into wall jacks any time soon. With the diminishing returns on investment for higher speeds on the wireless networks of the world, bandwidth control is the only way to maintain some kind of order.

Lastly I do not expect bulk bandwidth prices to continue to fall at their present rate.

The last few years of falling prices are the result of a perfect storm of factors not likely to be repeated.

For these reasons, it is not likely that bandwidth control will be obsolete for at least another decade. I am sure we will be revisiting this issue in the next few years for an update.

APconnections Celebrates New NetEqualizer Lite with Introductory Pricing


Editor’s Note:  This is a copy of a press release that went out on May 15th, 2012.  Enjoy!

Lafayette, Colorado – May 15, 2012 – APconnections, an innovation-driven technology company that delivers best-in-class network traffic management solutions, is celebrating the expansion of its NetEqualizer Lite product line by offering special pricing for a limited time.

NetEqualizer’s VP of Sales and Business Development, Joe D’Esopo, is excited to announce: “To make it easy for you to try the new NetEqualizer Lite, for a limited time we are offering the NetEqualizer Lite-10 at introductory pricing of just $999 for the unit, our Lite-20 at $1,100, and our Lite-50 at $1,400. These are incredible deals for the value you will receive, which we believe is unmatched in our industry today.”

We have upgraded the base technology for the NetEqualizer Lite, our entry-level bandwidth-shaping appliance. The new Lite retains the small form factor that sets it apart and makes it ideal for field deployments, but now has an enhanced CPU and more memory. This enables us to include the same robust graphical reporting found in our other product lines, and also to support additional bandwidth license levels.

The Lite is geared towards smaller networks with less than 350 users, is available in three license levels, and is field-upgradable across them: our Lite-10 runs on networks up to 10Mbps and up to 150 users ($999), our Lite-20 (20Mbps and 200 users for $1,100), and Lite-50 (50Mbps and 350 users for $1,400).  See our NetEqualizer Price List for complete details.  One year renewable NetEqualizer Software & Support (NSS) and NetEqualizer Hardware Warranties (NHW) are offered.

Like all of our bandwidth shapers, the NetEqualizer Lite is a plug-and-play, low-maintenance solution that is quick and easy to set up, typically taking one hour or less. QoS is implemented via behavior-based bandwidth shaping (“equalizing”), giving priority to latency-sensitive applications, such as VoIP, web browsing, chat and e-mail, over the large file downloads and video that can clog your Internet pipe.

About APconnections:  APconnections is based in Lafayette, Colorado, USA.  We released our first commercial offering in July 2003, and since then thousands of customers all over the world have put our products into service.  Today, our flexible and scalable solutions can be found in over 4,000 installations in many types of public and private organizations of all sizes across the globe, including: Fortune 500 companies, major universities, K-12 schools, and Internet providers on six (6) continents.  To learn more, contact us at sales@apconnections.net.

Contact: Sandy McGregor
Director, Marketing
APconnections, Inc.
303.997.1300
sandy@apconnections.net

Our Take on Network Instruments 5th Annual Network Global Study


Editor’s Note: Network Instruments released their “Fifth Annual State of the Network Global Study” on March 13th, 2012. You can read their full study here. Their results were based on responses from 163 network engineers, IT directors, and CIOs in North America, Asia, Europe, Africa, Australia, and South America. Responses were collected from October 22, 2011 to January 3, 2012.

What follows is our take (or my two cents) on the key findings around bandwidth management and bandwidth monitoring from the study.

Finding #1: Over the next two years, more than one-third of respondents expect bandwidth consumption to increase by more than 50%.

Part of me says “well, duh!” but that is only because we hear it from many of our customers. So I guess if you were an executive, far removed from the day-to-day, this would be an important thing to have pointed out to you. Basically, this is your wake-up call (if you are not already awake) to listen to your Network Admins who keep asking you to allocate funds to the network. Now is the time to make your case for more bandwidth to your CEO/President/head guru. Pull together the budget and resources to build out your network in anticipation of this growth, so that you are not caught off guard. Because if you don’t, someone else will do it for you.

Finding #2: 41% stated network and application delay issues took more than an hour to resolve.

You can and should certainly put monitoring on your network to be able to see and react to delays. However, another way to look at this, admittedly biased by my bandwidth shaping background, is to get rid of the delays!

If you are still running an unshaped network, you are missing out on maximizing your existing resource. Think about how smoothly traffic flows on roads, because there are smoothing algorithms (traffic lights) and rules (speed limits) that dictate how traffic moves, hence “traffic shaping.” Now, imagine driving on roads without any shaping in place. What would you do when you got to a 4-way intersection? Whether you just hit the accelerator to speed through, or decided to stop and check out the other traffic probably depends on your risk-tolerance and aggression profile. And the result would be that you make it through OK (live) or get into an ugly crash (and possibly die).

Similarly, your network traffic, when unshaped, can live (getting through without delays) or die (getting stuck waiting in a queue) trying to get to its destination. Whether you look at deep packet inspection, rate limiting, equalizing, or a home-grown solution, you should definitely look into bandwidth shaping. Find a solution that makes sense to you, will solve your network delay issues, and gives you a good return-on-investment (ROI). That way, your Network Admins can spend less time trying to find out the source of the delay.
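To make the rate-limiting option concrete, here is a minimal token-bucket sketch in Python; the class name, rates, and burst size are illustrative assumptions for the example, not taken from any product mentioned in this post.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows bursts up to `capacity_bytes`
    and refills at `rate_bps` bits per second."""

    def __init__(self, rate_bps, capacity_bytes):
        self.rate = rate_bps / 8.0           # refill rate in bytes per second
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill for the time elapsed since the last packet, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                      # forward the packet
        return False                         # queue or drop it

# Example: cap one flow at 10 Mbps with a 64 KB burst allowance.
limiter = TokenBucket(rate_bps=10_000_000, capacity_bytes=64_000)
print(limiter.allow(1500))                   # True for a typical Ethernet frame
```

A production shaper does the same accounting per flow or per subscriber at line rate, which is exactly the work a dedicated appliance or a router's built-in rate limiter takes off your hands.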

Finding #3: Video must be dealt with.

24% believe video traffic will consume more than half of all bandwidth in 12 months.
47% say implementing and measuring QoS for video is difficult.
49% have trouble allocating and monitoring bandwidth for video.

Again, no surprise if you have been anywhere near a network in the last 2 years. YouTube use has exploded and become the norm on both consumer and business networks. Add that to the use of video conferencing in the workplace to replace travel, and Netflix or Hulu to watch movies and TV, and you can see that video demand (and consumption) has risen sharply.

Unfortunately, there is no quick, easy fix to make sure that video runs smoothly on your network. However, a combination of solutions can help you to make video run better.

1) Get more bandwidth.

This is just a basic fact of life. If you are running a network of less than 10 Mbps, you are going to have trouble with video, unless you only have one (1) user on your network. You need to look at your contention ratio and size your network appropriately.
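As a rough back-of-the-envelope sketch (the numbers below are illustrative assumptions, not sizing recommendations), the contention-ratio math looks like this:

```python
def required_pipe_mbps(users, peak_demand_mbps_per_user, contention_ratio):
    """Estimate the Internet pipe needed when `users` subscribers, each
    bursting to `peak_demand_mbps_per_user`, share capacity at the given
    contention ratio (e.g. 20 means 20 users share each unit of peak demand)."""
    return users * peak_demand_mbps_per_user / contention_ratio

# Illustrative only: 200 users bursting to 5 Mbps at a 20:1 contention ratio.
print(required_pipe_mbps(200, 5, 20))   # -> 50.0 Mbps
```

If the answer comes out well above the pipe you actually have, video trouble is almost guaranteed without more bandwidth or some form of shaping.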

2) Cache static video content.

Caching is a good start, especially for static content such as YouTube videos. One caveat to this: do not expect caching to solve network congestion problems (read more about that here), as users will quickly consume any bandwidth that caching has freed up. Caching helps most when a video has gone viral and everyone on your network is accessing it repeatedly.

3) Use bandwidth shaping to prioritize business-critical video streams (servers).

If you have a designated video-streaming server, you can define rules in your bandwidth shaper to prioritize this server. The risk of this strategy is that you could end up giving all your bandwidth to video; you can reduce the risk by rate capping the bandwidth portioned out to video.

As I said, this is just my take on the findings. What do you see? Do you have a different take? Let us know!

What Does it Cost You Per Mbps for Bandwidth Shaping?


Sometimes by using a cost metric you can distill a relatively complicated thing down to a simple number for comparison. For example, we can compare housing costs by Dollars Per Square Foot or the fuel efficiency of cars by using the Miles Per Gallon (MPG) metric.  There are a number of factors that go into buying a house, or a car, and a compelling cost metric like those above may be one factor.   Nevertheless, if you decide to buy something that is more expensive to operate than a less expensive alternative, you are probably aware of the cost differences and justify those with some good reasons.

This kind of metric makes sense for bandwidth shaping now more than ever: as the cost of bandwidth continues to decline, the cost of shaping that bandwidth should decline as well. After all, it wouldn’t be logical to spend a lot of money to manage a resource that’s declining in value.

With that in mind, I thought it might be interesting to look at bandwidth shaping on a cost-per-Mbps basis. Alternatively, I could look at bandwidth shaping on a cost-per-user basis, but that metric fails to capture the declining cost of a Mbps of bandwidth. So, cost per Mbps it is.

As we’ve pointed out before in previous articles, there are two kinds of costs that are typically associated with bandwidth shapers:

1) Upfront costs (these are for the equipment and setup)

2) Ongoing costs (these are for annual renewals, upgrades, license updates, labor for maintenance, etc…)

Upfront, or equipment costs, are usually pretty easy to get.  You just call the vendor and ask for the price of their product (maybe not so easy in some cases).  In the case of the NetEqualizer, you don’t even have to do that – we publish our prices here.

With the NetEqualizer, setup time is normally less than an hour and is thus negligible, so we’ll just divide the unit price by the throughput level to get the upfront cost per Mbps. I think the result is what you would expect to see.

For ongoing costs, you would add up all the mandatory per-year costs and divide by throughput; the metric would be an ongoing “yearly” cost per Mbps.
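To make the arithmetic explicit, here is a tiny sketch; the prices and throughput are hypothetical placeholders, not figures from any price list:

```python
def cost_per_mbps(upfront_cost, annual_cost, throughput_mbps):
    """Return (upfront cost per Mbps, ongoing yearly cost per Mbps)."""
    return upfront_cost / throughput_mbps, annual_cost / throughput_mbps

# Hypothetical example: a $5,000 shaper licensed for 100 Mbps with a
# $500-per-year support agreement.
upfront, yearly = cost_per_mbps(5000, 500, 100)
print(f"${upfront:.2f} per Mbps upfront, ${yearly:.2f} per Mbps per year")
```

Run the same two numbers for any shaper you are evaluating and the comparison becomes straightforward.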

Again, if we take the NetEqualizer as an example, the ongoing costs are almost zero. This is because it’s a turn-key appliance that requires no time from the customer for bandwidth analysis, nor any policy setup or maintenance to run effectively (it doesn’t use policies). In fact, it’s a true zero-maintenance product, and that yields zero labor costs. Besides no labor, there are no updates or licenses required (an optional service contract is available if you want ongoing access to technical support or software upgrades).

Frankly, it’s not worth the effort of graphing this one. The ongoing cost of a NetEqualizer Support Agreement ranges from $29 down to $0.20 per Mbps per year. Yet this isn’t the case for many other products, and this number should be evaluated carefully. In fact, in some cases the ongoing costs of other products exceed the upfront cost of a new NetEqualizer!

Again, it may not be the case that the lowest cost per Mbps of bandwidth shaping is the best solution for you – but, if it’s not, you should have some good reasons.

If you shape bandwidth now, what is your cost per Mbps of bandwidth shaping? We’d be interested to know.

If your ongoing costs are higher than the upfront costs of a new NetEqualizer and you’re open to a discussion, you should drop us a note at sales@apconnections.net.

Developing Technology to Detect a Network Hacker


Editor’s note: Updated on Feb 1st, 2012. Our new product, NetGladiator, has been released. You can learn more about it on the NetGladiator website at www.netgladiator.net or by calling us at 303.997.1300 x123.

In a few weeks we will be releasing a product to automatically detect and prevent a web application hacker from breaking into a private enterprise. What follows are the details of how this product was born.  If you are currently seeking or researching intrusion detection & prevention technology, you will find the following quite useful.

Like many technology innovations, our solution resulted from the timely intersection of two technologies.

Technology 1: About one year ago we started working with a consultant in our local tech community to do some programming work on a minor feature in our NetEqualizer product line. Fiddler on the Root is the name of their company, and they specialize in ethical hacking. Ethical hacking is the practice of deliberately hacking into a high-profile client company with the intention of exposing its weaknesses. The key expertise they provided was a detailed knowledge of how to hack into a network or website.

Technology 2: Our NetEqualizer technology is well known for providing state-of-the-art bandwidth control. While working with Fiddler on the Root, we realized our toolset could be reconfigured to spot, and thwart, unwanted entry into a network. A key piece of the puzzle would be our long-forgotten Deep Packet Inspection technology. DPI is the frowned-upon practice of looking inside data packets traversing the Internet.

An ironic twist to this new product journey was that, due to the privacy controversy, as well as finding a better way to shape bandwidth, we removed all of our DPI methodology from our core bandwidth shaping product four years ago.  Just like with any weapon, there are appropriate uses for DPI. Over a lunch conversation one day, we realized that using DPI to prevent a hacker intrusion was a legitimate use of DPI technology. Preventing an attack is much different from a public ISP scanning and censoring customer data.

So how did we merge these technologies to create a unique heuristics-based IPS system?

Before I answer that question, perhaps you are wondering whether revealing our techniques might provide a potential hacker or competitor with inside secrets. More on this later…

The key to using DPI to prevent an intrusion (hack) revolves around 3 key facts:

1) A hacker MUST try to enter your enterprise by exploiting weaknesses in your normal entry points.

2) One of the normal entry points is a web page, and everybody has them. After all, if you had no publicly available data there would be no reason to be attached to the Internet.

3) By using DPI technology to monitor incoming requests and looking for abnormalities, we can now reliably spot unwanted intrusion attempts.

When we met with Fiddler on the Root, we realized that a normal entry by a customer and a probing entry by a hacker are radically different. A hacker attempts things that no normal visitor could even possibly stumble into. In our new solution we have directed our DPI technology to watch for abnormal entry intrusion attempts. This involved months of observing a group of professional hackers and then developing a set of profiles which clearly distinguish them from a friendly user.
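As an illustration of the general idea only (this is not NetGladiator's actual rule set), a heuristics-based check might scan incoming web requests for probe patterns that no ordinary visitor would produce:

```python
import re

# Illustrative probe signatures; a real profile set is far larger and is
# tuned to the application being protected.
PROBE_PATTERNS = [
    re.compile(r"\.\./"),                # directory traversal attempts
    re.compile(r"(?i)union\s+select"),   # SQL injection probing
    re.compile(r"(?i)<script"),          # reflected XSS probing
    re.compile(r"/etc/passwd"),          # classic file disclosure probe
]

def looks_like_probe(request_line: str) -> bool:
    """Flag a request that matches any known probe signature."""
    return any(p.search(request_line) for p in PROBE_PATTERNS)

print(looks_like_probe("GET /products?id=7 HTTP/1.1"))                        # False
print(looks_like_probe("GET /products?id=7 UNION SELECT password HTTP/1.1"))  # True
```

The actual profiles came from months of observing professional hackers, as described above; the sketch only shows where DPI fits into the detection pipeline.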

What other innovations are involved in a heuristics-based Intrusion Prevention System (IPS)?

Spotting the hacker pattern with DPI was only part of a complete system. We also had to make sure we did not get any false positives – the case where normal activity is accidentally flagged as an intruder, which obviously would be unacceptable. In our test lab we have a series of computers that act like users searching the Internet; the only difference is that we can ramp these robot users up to hyper-speed so that they access millions of pages over a short period of time. We then measure the “false positive” rate from our simulation and ensure that our false positive rate on intrusion detection is below 0.001 percent.

Our solution, NetGladiator, is different from other IPS appliances. We are not an “all-in-one” solution, which can be rendered useless by alerting you thousands of times a day, blocking legitimate requests, and breaking web functionality. We do one thing very well – we catch and stop hackers during their information discovery process – keeping your web applications secure. NetGladiator is custom-configured for your environment, alerting you on meaningful attempts without false positive alerts.

We also had to dig into our expertise in real-time optimization. Although that sounds like marketing propaganda meant to impress somebody, we can break the statement down to mean something concrete.

When doing DPI, you must look at and analyze every data stream and packet coming into your enterprise; skipping something might lead to a security breach. Looking at data and analyzing it requires quite a bit more CPU power than just moving it along a network. Many intrusion detection systems are afterthoughts to standard routers and switches. These devices were not originally designed to do computing-intensive heuristics on data, and doing so may slow your network down to a crawl – a common complaint with lower-end, affordable security products. We did not want to force our customers to make that trade-off. Our technology uses a series of processors embedded in our equipment, all working in unison to analyze each packet of Internet data without causing any latency. Although we did not invent the idea of using parallel processing for analysis of data, we are the only product in our price range able to do this.
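As a sketch of that design choice (not our actual implementation), per-packet analysis can be fanned out across multiple processors so inspection keeps up with line rate instead of serializing on one CPU:

```python
from multiprocessing import Pool

def analyze(packet: bytes) -> bool:
    """Placeholder heuristic; a real analyzer inspects protocol fields and
    payload patterns in much more depth."""
    return b"../" in packet or b"UNION SELECT" in packet.upper()

def scan_in_parallel(packets, workers=4):
    """Spread per-packet analysis across a pool of worker processes."""
    with Pool(processes=workers) as pool:
        return pool.map(analyze, packets)

if __name__ == "__main__":
    sample = [b"GET /index.html HTTP/1.1", b"GET /../../etc/passwd HTTP/1.1"]
    print(scan_in_parallel(sample))   # [False, True]
```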

How did we validate and test our IPS solution?

1) We have been putting our systems in front of beta test sites and asking white-hat hackers to try to break into them.

2) We have been running our technology in front of some of our own massive web crawlers. Our crawlers do not attempt anything abnormal but can push through millions of sites and web pages. This is how we test for false positives: making sure we do not block a web crawler that is NOT attempting anything abnormal.

Back to the question, does divulging our methodology render it easier to breach?

The holes that hackers exploit are relatively consistent – in other words, there really is only a finite number of exploits that hackers use. They can either choose to exploit these holes or not, and if they attempt to exploit them, they will be spotted by our DPI. Hence, announcing that we are protecting these holes is more likely to discourage a hacker, who will then look for another target.

What Does Net Privacy Have to Do with Bandwidth Shaping?


I definitely understand the need for privacy. Obviously, if I were doing something nefarious, I wouldn’t want it known, but that’s not my reason. Day in and day out, measures are taken to maintain my privacy in more ways than I probably even realize. You’re likely the same way.

For example, to avoid unwanted telephone and mail solicitations, you don’t advertise your phone numbers or give out your address. When you buy something with your credit card, you usually don’t think twice about your card number being blocked out on the receipt. If you go to the pharmacist, you take it for granted that the next person in line has to be a certain distance behind so they can’t hear what prescription you’re picking up. The list goes on and on. For me personally, I’m sure there are dozens, if not hundreds, of good examples why I appreciate privacy in my life. This is true in my daily routines as well as in my experiences online.

The topic of Internet privacy has been raging for years. However, the Internet still remains a hotbed for criminal activity and misuse of personal information. Email addresses are valued commodities sold to spammers. Search companies have dedicated countless bytes of storage to every search term and the IP address it came from. Websites place tracking cookies on your system so they can learn more about your Web travels, habits, likes, dislikes, etc. Forensically, you can tell a lot about a person from their online activities. To be honest, it’s a little creepy.

Maybe you think this is much ado about nothing. Why should you care? However, you may recall that less than four years ago, AOL accidentally released around 20 million search keywords from over 650,000 users. Now those 650,000 users and their searches will exist forever in cyberspace. Could it happen again? Of course; why wouldn’t it, when all it takes is a laptop packed with data walking out the door?

Internet privacy is an important topic, and as a result, technology is becoming more and more available to help people protect information they want to keep confidential. And that’s a good thing. But what does this have to do with bandwidth management? In short, a lot (no pun intended)!

Many bandwidth management products (from companies like Blue Coat, Allot, and Exinda, for example) intentionally work at the application level. These products are commonly referred to as Layer 7 or Deep Packet Inspection tools. Their mission is to allocate bandwidth specifically by what you’re doing on the Internet. They want to determine how much bandwidth you’re allowed for YouTube, Netflix, Internet games, Facebook, eBay, Amazon, etc. They need to know what you’re doing so they can do their job.

In terms of this article, whether you’re philosophically adamant about net privacy (like one of the inventors of the Internet) or couldn’t care less is really not important. The question is, what happens to an application-managed approach when people take additional steps to protect their own privacy?

For legitimate reasons, more and more people will be hiding their IPs, encrypting, tunneling, or otherwise disguising their activities and taking privacy into their own hands. As privacy technology becomes more affordable and simple, it will become more prevalent. This is a mega-trend, and it will create problems for those management tools that use this kind of information to enforce policies.

However, alternatives to these application-level products do exist, such as “fairness-based” bandwidth management. Fairness-based bandwidth management, like the NetEqualizer, is a 100% neutral solution and ultimately provides a more privacy-friendly approach for Internet users, as well as a more effective solution for administrators when personal privacy protection technology is in place. Fairness is the idea of managing bandwidth by how much you can use, not by what you’re doing. When you manage bandwidth by fairness instead of activity, not only are you supporting a neutral, private Internet, but you’re also able to address the critical tasks of bandwidth allocation, control, and quality of service.
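To illustrate the fairness idea in a few lines (the thresholds and names here are illustrative assumptions, not the NetEqualizer's actual algorithm), the decision looks only at how much each flow is consuming, never at what it contains:

```python
def flows_to_penalize(flow_rates_kbps, link_capacity_kbps,
                      trigger_utilization=0.85, hog_fraction=0.10):
    """Fairness-based shaping sketch: once the link passes the trigger
    utilization, return the flows using more than `hog_fraction` of the
    pipe. Only byte counts are examined; payloads are never inspected."""
    total = sum(flow_rates_kbps.values())
    if total < trigger_utilization * link_capacity_kbps:
        return []                 # plenty of headroom, leave everything alone
    threshold = hog_fraction * link_capacity_kbps
    return [flow for flow, rate in flow_rates_kbps.items() if rate > threshold]

# Illustrative only: a 10 Mbps link dominated by one large download.
flows = {"10.0.0.5 download": 8500, "10.0.0.7 voip": 80, "10.0.0.9 web": 300}
print(flows_to_penalize(flows, link_capacity_kbps=10_000))  # ['10.0.0.5 download']
```

Because the rule keys on consumption rather than content, it keeps working even when traffic is encrypted or tunneled, which is the point of the argument above.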
