Net Neutrality Enforcement and Debate: Will It Ever Be Settled?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all Deep Packet Inspection technology from their NetEqualizer product over 2 years ago.

As the debate over net neutrality continues, we often forget what an ISP actually is and why they exist.
ISPs in this country are for-profit private companies made up of stockholders and investors who took on risk (without government backing) to build networks with the hopes of making a profit. To make a profit they must balance users expectations for performance against costs of implementing a network.

The reason bandwidth control is used in the first place is the standard switching capacity problem. Nobody can afford the infrastructure investment to build a network that meets peak demand at all times. Would you build a house with 10 bedrooms if you were only expecting one or two kids sometime in the future? ISPs build networks to handle an average load, and when peak loads come along, they must do some mitigation. You can argue until you are green that they should have built their networks with more foresight, but the fact is demand for bandwidth will always outstrip supply.

So, where did the net neutrality debate get its start?
Unfortunately, in many Internet providers’ first attempt to remedy the overload issue on their networks, the layer-7 techniques they used opened a Pandora’s box of controversy that may never be settled.

When the subject of net neutrality started heating up around 2007 and 2008, the complaints from consumers revolved around ISP practices of looking inside customers’ data transmissions and blocking or redirecting traffic based on content. There were all sorts of rationalizations for this practice, and I’ll be the first to admit that it was not done with malicious intent. However, the methodology was abhorrent.

I likened this practice to the phone company listening in on your phone calls and deciding which calls to drop to keep their lines clear. Or, if you want to take it a step farther, the postal service deciding to toss your junk mail based on their own private criteria. Legally, I see no difference between looking inside mail and looking inside Internet traffic. It all seems to cross a line. When referring to net neutrality, the bloggers of this era were originally concerned with this sort of spying and playing God with what type of data could be transmitted.

To remedy this situation, Comcast and others adopted methods that regulated Internet usage based on patterns of usage rather than content. At the time, we were happy to applaud them and claim that the problem of spying on data had been averted. I pretty much turned my attention away from the debate at that point, but I recently started looking back at it and, wow, what a difference a couple of years make.

So, where are we headed?
I am not sure what his sources are, but Rush Limbaugh claims that net neutrality is going to become a new Fairness Doctrine. To summarize: the FCC or some other government body would start using its authority to ensure equal access to content from search engine companies; for example, making sure that minority points of view on subjects got top billing in search results. This is a bit scary, although perhaps a bit alarmist, but it would not surprise me since, once in government control, anything is possible. Yes, I realize conservative talk radio hosts like to elicit emotional reactions, but usually there is some truth to back up their claims.

Other intelligent points of view:

The CRTC (the Canadian equivalent of the FCC) seems to have a head on its shoulders: it has stated that ISPs must disclose their practices, but it is not attempting to dictate how they manage traffic through some overreaching doctrine. Although I am not in favor of government institutions, if they must exist, then the CRTC stance seems like a sane and appropriate approach to regulating ISPs.

Freedom to Tinker

What Is Deep Packet Inspection and Why All the Controversy?

Using NetEqualizer to Ensure Clean, Clear QoS for VOIP Calls


A Little Bit of History

Many VoIP installations are designed with an initial architecture that assumes inter-office phone calls will reside within the confines of the company LAN. Internal LANs are almost always 100 megabit and consist of multiple paths between end points. The basic corporate LAN design usually provides more than enough bandwidth to route all inter-office VoIP calls without congestion.

As enterprises become more dispersed geographically, care must be taken when extending VoIP calls beyond the main office. Once a VoIP call leaves the confines of your local network and traverses the public Internet link, it will have to compete for space with any data traffic that is also destined for the Internet. Without careful planning, your enterprise will most likely start dropping VoIP calls during busy traffic times.

The most common way of prioritizing VoIP is to set what is called the TOS bit. The TOS bit acts like a little flag inside each Internet packet of the VoIP stream. An Internet router can rearrange the packets destined for the Internet, giving priority to the outgoing VoIP packets by looking at the TOS bit. The downside of this method is that it does not help with VoIP calls originating from the outside and coming into your network. For example, somebody receiving a VoIP call in the main office from a VPN user working at home may experience some distortion on the incoming call. This is usually caused by somebody else in the office doing a large download during the VoIP call. Routers typically cannot set priority on incoming data, hence the inbound download can dominate all the bandwidth, rendering the VoIP call inaudible.
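For readers curious what setting the TOS bit looks like in practice, here is a minimal Python sketch. The socket option shown is standard; the DSCP value is simply the one conventionally used for voice traffic, not anything specific to a particular VoIP product:

```python
import socket

# Mark an outgoing UDP socket so its packets carry the DSCP
# "Expedited Forwarding" class (DSCP 46), a value commonly used for
# VoIP media. DSCP occupies the upper 6 bits of the old TOS byte.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2  # 184, i.e. 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Routers along the path *may* use this marking to prioritize the stream;
# as the article notes, nothing forces them to honor it.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Note that this only marks outbound packets; it does nothing for the inbound direction, which is exactly the limitation described above.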

How NetEqualizer Solves VoIP Congestion Issues

The NetEqualizer solves the problem of VoIP traffic competing with regular data traffic by using a simple method. A NetEqualizer provides priority for both incoming and outgoing VoIP traffic. It does not use TOS bits. It is VoIP- and network-agnostic. Sounds like the old Saturday Night Live commercial where Chevy Chase hawks a floor cleaner that is also an ice cream topping.

Here is how it works…

It turns out that VoIP streams require no more than 100 Kbps per call, and usually quite a bit less. Large downloads, on the other hand, will grab the entire Internet trunk if they can get it. The NetEqualizer has been designed to favor streams of less than 100 Kbps over larger data streams. When a large download is competing with a VoIP call for precious resources, the NetEqualizer will create some artificial latency on the download stream, causing it to back off and slow down. No need to rely on TOS bits in this scenario; problem solved.
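The decision logic described above can be sketched in a few lines. This is an illustration of the general equalizing idea, not NetEqualizer's actual code; the threshold, congestion level, and penalty values are invented for the example:

```python
VOIP_THRESHOLD_BPS = 100_000   # ~100 Kbps: streams above this are "large"
CONGESTION_LEVEL = 0.85        # trunk utilization at which equalizing kicks in
PENALTY_SECONDS = 0.02         # artificial latency applied to large streams

def delay_for(stream_bps: float, trunk_utilization: float) -> float:
    """Return the artificial latency to apply to a stream's packets."""
    if trunk_utilization < CONGESTION_LEVEL:
        return 0.0                        # trunk not congested: touch nothing
    if stream_bps <= VOIP_THRESHOLD_BPS:
        return 0.0                        # small streams (VoIP) are spared
    return PENALTY_SECONDS                # large downloads back off via TCP

# A 90 Kbps VoIP call on a congested trunk is untouched...
print(delay_for(90_000, 0.95))     # 0.0
# ...while a 5 Mbps download on the same trunk gets penalized.
print(delay_for(5_000_000, 0.95))  # 0.02
```

Because TCP downloads respond to added latency by slowing down, penalizing only the large streams frees up room for the small, latency-sensitive ones.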

Conceptually, that is all there is to it. Obviously, the NetEqualizer engineering team has refined and tuned this technique over the years. In general, the NetEqualizer Default Rules need very little set-up, and a unit can be inline in a matter of minutes.

The scenarios where NetEqualizer is appropriate for ensuring that your VoIP system runs smoothly are:

  1. You are running an Enterprise VoIP service with remote offices that connect to your main PBX over VPN links
  2. You are an ISP and your customers use a VoIP service over limited bandwidth connectivity

Recommended Reading

Other vendor white papers on the subject: Riverbed

Other suggested reading:  http://www.bandwidth.com/wiki/article/QoS_(Quality_of_Service)

Building a Software Company from Scratch


By Art Reisman, CEO, CTO, and co-founder of APconnections, Inc.

Adapted from an article first published in Entrepreneurship.org and updated with new material in April 2010.

At APconnections, our flagship product, NetEqualizer, is a traffic management and WAN optimization tool. Rather than using compression and caching techniques, NetEqualizer analyzes connections and then doles out bandwidth to them based on preset rules. We look at every connection on the network and compare it to the overall trunk size to determine how to eliminate congestion on the links. NetEqualizer also prevents peer-to-peer traffic from slowing down higher-priority application traffic without shutting down those connections.

When we started the company, we had lots of time, very little cash, some software development skills, and a technology idea.  This article covers a couple of bootstrapping pearls of wisdom that we learned to implement by doing.

Don’t be Afraid to Use Open Source

Using open source technology to develop and commercialize new application software can be an invaluable bootstrapping tool for startup entrepreneurs. It allowed us to validate new technology with a willing set of early adopters who, in turn, provided us with references and debugging. We used this huge pool of early adopters, who love to try open source applications, to legitimize our application. Further, this large set of commercial “installs” helped us wring out many of the bugs, with users who had no grounds to demand perfection.

In addition, we jump-started our products without incurring large development expense. We used open source by starting with technology already in place and extending it, rather than building (or licensing) every piece from scratch.  Using open source code makes at least a portion of our technology publicly available. We use bundling, documentation, and proprietary extensions to make it difficult for larger players to steal our thunder. Proprietary extensions account for over half of development work, but can be protected by copyright.  Afraid of copycats?  In many cases, nothing could be better than to have a large player copy you.  Big players value time-to-market.  If one player clones your work, another may acquire your company to catch up in the market.

The transition from open source users to paying customers is a big jump, requiring traditional sales and marketing. Don’t expect your loyal base of open source beta users to start paying for your product.  However, use testimonials from this critical mass of users to market to paying customers, who are reluctant to be early adopters (see below).

Channels? Use Direct Selling and the Web

Our innovation is a bit of a stretch from existing products and, like most innovations, requires some education of the user. Much of the early advice we received related to picking a sales channel: just sign up reps, resellers, and distributors and revenues will grow. We found the exact opposite to be true. Priming channels is expensive. And, after we pointed the sales channel at customers, closing the sale and supporting the customer fell back on us anyway. Direct selling is not the path to rapid growth, but as a bootstrapping tool it has rewarded us with loyal customers, better margins, and many fewer returns.

We use the Internet to generate hot leads, but we don’t worry about our Google ranking.  The key for us is to get every satisfied customer to post something about our product.  It probably hasn’t improved our Google ratings, but customer comments have surely improved our credibility in the marketplace.

Honest postings to blogs and user groups have significant influence on potential customers.  We explain to each customer how important their posting is to our company.  We often provide them with a link to a user group or appropriate blog.  And, as you know, these blogs stay around forever.  Then, when we encounter new potential customers, we suggest that they Google our “brand name” and blog, which always generates a slew of testimonials. (Check out our Web site to see some of the ways we use testimonials.)

Conclusion

Using open source code and direct sales are surely out-of-step with popular ideas for growing technology companies, especially those funded by equity investors.  But, they worked very well for us as we grew our company with limited resources to positive cash flow and beyond.

Here are some notes on what type of product to create. Obviously, you’ll want to do something you are passionate about; otherwise there is no sense in even getting started. If you are passionate about more than one thing, remember this: trying to sell a product on value to IT people or engineering types is much harder than selling to other entrepreneurs or sales people. Technical people are generally skeptical about new claims of something working well. Also, unless somebody asks, they often don’t tell many other people about the product they bought and the value they are receiving from it.

Looking for a peer group to get some advice from? Find a local software group that you can join. If you are in the Denver area, I would recommend trying http://www.denversoftware.org/

NetEqualizer Chosen as a Role Model Bandwidth Controller for HEOA


Just ran across a posting where Educause recommended the NetEqualizer solution as a role model for bandwidth control in meeting HEOA requirements.

Pomona College and Reed College were cited as two schools currently deploying NetEqualizer equipment.

A related article from the Ars Technica website also discusses approaches schools are using to meet HEOA rules.

About Educause:

EDUCAUSE is a nonprofit association whose mission is to advance higher education by promoting the intelligent use of information technology. EDUCAUSE helps those who lead, manage, and use information resources to shape strategic decisions at every level. A comprehensive range of resources and activities is available to all interested employees at EDUCAUSE member organizations, with special opportunities open to designated member representatives.

About HEOA:

The Higher Education Opportunity Act (Public Law 110-315) (HEOA) was enacted on August 14, 2008, and reauthorizes the Higher Education Act of 1965, as amended (HEA). This page provides information on the Department’s implementation of the HEOA.

Some parts of the law will be implemented through new or revised regulations. The negotiated rulemaking process will be used for some regulations, as explained below. Other areas will be regulated either through the usual notice and comment process or, where regulations will merely reflect the changes to the HEA and not expand upon those changes, as technical changes.

Ten Things to Consider When Choosing a Bandwidth Shaper


This article is intended as an objective guide for anyone trying to narrow down their options in the bandwidth controller market. Organizations today have a plethora of product options to choose from. To further complicate your choices, not only are there specialized bandwidth controllers, but most firewall and router products today also contain some form of bandwidth shaping and QoS features.

What follows is a list of questions that will help you quickly organize your priorities with regard to choosing a bandwidth shaper.

1) What is the Cost of Increasing your Bandwidth?

Although this question may be a bit obvious, it must be asked. We assume that anybody in the market for a bandwidth controller also has the option of increasing their bandwidth. The costs of purchasing  and operating a bandwidth controller should ultimately be compared with the cost of increasing bandwidth on your network.

2) How much Savings should you expect from your Bandwidth Controller?

A good bandwidth controller can, in many situations, increase your carrying capacity by up to 50 percent. However, beware: some technologies designed to optimize your network can create labor overhead in maintenance hours. Labor costs with some solutions can far exceed the cost of adding bandwidth.
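A quick back-of-the-envelope comparison makes this trade-off concrete. All of the numbers below (lease price, device price, labor hours) are made up for illustration; plug in your own before drawing conclusions:

```python
# Compare two ways to relieve a congested 100 Mbps link over 3 years.
link_mbps = 100
bandwidth_cost_per_mbps_month = 5.0   # assumed lease price per Mbps
effective_gain = 0.5                  # shaper reclaims up to 50% capacity

# Option A: lease 50% more bandwidth for 36 months
extra_mbps = link_mbps * effective_gain
bandwidth_option = extra_mbps * bandwidth_cost_per_mbps_month * 36

# Option B: buy a shaper once, plus ongoing admin labor
shaper_price = 3500.0
labor_hours_per_month = 2
labor_rate = 60.0
shaper_option = shaper_price + labor_hours_per_month * labor_rate * 36

print(f"More bandwidth: ${bandwidth_option:,.0f}")  # $9,000
print(f"Shaper + labor: ${shaper_option:,.0f}")     # $7,820
```

Notice how sensitive the result is to the labor term: a solution that eats five hours a month instead of two flips the answer in favor of simply buying more bandwidth, which is exactly the warning above.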

3) Can you out-run your Organization’s Appetite for Increased Bandwidth  with a One-Time Bandwidth Upgrade?

The answer is yes; it is possible to buy enough bandwidth that your users cannot possibly exhaust the supply. The bad news is that this solution is usually cost-prohibitive. Many organizations that come to us have previously doubled their bandwidth, sometimes more than once, only to be back to overwhelming congestion within a few months of their upgrade. The appetite for bandwidth is insatiable, and in our opinion, at some point a bandwidth control device becomes your only rational option. Outrunning your user base is usually only possible where Internet infrastructure is subsidized by a government entity, hiding the true costs. For example, a small university with 1,000 students will likely not be able to consume a true 5-Gigabit pipe, but purchasing a pipe of that size would be out of reach for most US-based universities.

4) How Valuable is Your Time? Are you a Candidate for a Freeware-type Solution?

What we have seen in the marketplace is that small shops with high technical expertise, or small ISPs on a budget, can often make use of a freeware, do-it-yourself bandwidth control solution. If you are cash-strapped, this may be a viable option for you. However, please go into it with your eyes open. The general pitfalls and risks are as follows:

a) Staff can easily run up 80 or more hours trying to save a few thousand dollars by fiddling with an unsupported solution. And this is only for the initial installation and set-up. Over the useful life of the solution, this overhead can continue at a high level, due to the unsupported nature of these technologies.

b) Investors do not like to invest in businesses built on homegrown technology unless it confers a very large competitive advantage, for many reasons: finding personnel to sustain the solution, upgrading and adding features, and the overall risk of keeping it in working order. You can easily shoot yourself in the foot with prospective buyers by becoming too dependent on homegrown freeware solutions to save costs. Relying on something homegrown generally means an employee or two holds the keys to the operational knowledge, which can make potential buyers uncomfortable (you would be too!).

5) Are you Looking to Enforce Bandwidth Limits as part of a Rate Plan that you Resell to Clients?

For example, let’s say that you have a good-sized backbone of bandwidth at a reasonable cost per megabit, and you just want to enforce class-of-service speeds to sell your bandwidth in incremental revenue chunks.

If this is truly your only requirement, and you do not need optimization to support high contention ratios, then you should be careful not to overspend on your solution. A basic NetEqualizer or Allot system may be all that you need. You can also most likely leverage the bandwidth control features bundled into your router or firewall. The thing to be careful of when using your router/firewall is that these devices can become overwhelmed due to lack of horsepower.
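The classic mechanism behind rate-plan enforcement is a token bucket, which most routers and shapers implement in some form. Here is a generic sketch of the idea (not any particular vendor's implementation); the plan speed and burst size are example values:

```python
import time

class TokenBucket:
    """Enforce a subscriber's class-of-service speed with a token bucket."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8          # refill rate in bytes/second
        self.capacity = burst_bytes       # maximum saved-up allowance
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes   # forward the packet
            return True
        return False                      # queue or drop: subscriber at cap

# A 2 Mbps plan with a 64 KB burst allowance:
plan = TokenBucket(rate_bps=2_000_000, burst_bytes=64_000)
print(plan.allow(1500))     # True: a normal packet fits the burst allowance
print(plan.allow(200_000))  # False: exceeds the available tokens
```

Long-term throughput converges to the configured rate, while the burst size controls how spiky the traffic may be; those are the two knobs a rate plan exposes.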

6) Are you just Trying to Optimize the Bandwidth that you have, based on Well-Known Priorities?

Some context:

If you have a very static network load, with a finite, well-defined set of applications running through your enterprise, there are application shaping (Layer-7 shaping) products out there, such as the Blue Coat PacketShaper, which uses deep packet inspection and can be set up once to allocate different amounts of bandwidth per application. If the PacketShaper is a bit too pricey, the Cymphonix product can also detect most common applications.

If, on the other hand, you are trying to optimize your bandwidth across a variable, wide-open plethora of applications, you may find yourself with extremely high maintenance costs using a Layer-7 application shaper. A generic behavior-based product such as the NetEqualizer will do the trick.

Update 2015

Note: We are seeing quite a bit of encryption on common applications. We strongly recommend avoiding Layer-7 type devices for public Internet traffic, as their accuracy is diminishing due to the fact that encrypted traffic is unclassifiable. A heuristics-based, behavior-based approach is advised.

7) Make Sure that What Looks Elegant on the Cover does not have Hidden Costs, by Doing a Little Research on the Internet

Yes, this is an obvious one too, but don’t forget your due diligence!

Before purchasing any traffic shaping solution, you should try a simple Internet search with well-placed keywords to uncover objective opinions. Testimonials supplied by the vendor are a good source of information, but they only tell half the story. Current customers are always biased toward their decision, sometimes to the point of ignoring a better solution.

If you are not familiar with this technology, or do not have the in-house expertise to work with a traffic shaper, you may want to consider buying additional bandwidth as your solution. To assess whether this is viable for you, think about the following: How much bandwidth do you need? What is the appropriate amount for your ISP or organization? We have dedicated a complete article to this question.

8) Are you a Windows Shop?  Do you expect a Microsoft-based solution due to your internal expertise?

With all respect to Microsoft and the strides they have made toward reliability in their server solutions, we believe that you should avoid a Windows-based product for any network routing or bandwidth control mission.

To be effective, a bandwidth control device must be placed such that all traffic is forced to pass through it. For this reason, all the manufacturers that we are aware of develop their network devices using a derivative of Linux. Linux is open source, which means that an OEM can strip the operating system down to its simplest components. The simpler the operating system in your network device, the less that can go wrong. With Windows, however, the core OS source code is not available to third-party developers, so an OEM may not always be able to track down serious bugs. This is not to say that bugs do not occur in Linux; they do, but the OEM can often get a patch out quickly.

For the IT person trained on Windows, note that a well-designed networking device presents its interface via a standard web page. Hence, a technician likely needs no specific Linux background.

9) Are you a CIO (or C-level Executive) Looking to Automate and Reduce Costs?

Bandwidth controllers can become a means to do cool things with a network. Network administrators can get caught up reading fancy reports, making daily changes, and interpreting results, which can become extremely labor-intensive. There is a price/benefit crossover point where a device creates more work (labor cost) than bandwidth saved. We have addressed this paradox in detail in a previous article.

10) Do you have any Legal or Political Requirement to Maintain Logs or Show Detailed Reports to a Third Party (e.g., management, an oversight committee, etc.)?

For example…

A government requirement to provide data wire taps dictated by CALEA?

Or a monthly report on employee Internet behavior?

Related article: How to choose the right bandwidth management solution

Links to other bandwidth control products on the market:

PacketShaper by Blue Coat

NetEqualizer (my favorite)

Exinda

Riverbed

Exinda, PacketShaper, and Riverbed tend to focus on the enterprise WAN optimization market.

Cymphonix

Cymphonix comes from a background of detailed reporting.

Emerging Technologies

A very solid product for bandwidth shaping.

Exinda

Exinda, from Australia, has really made a good run in the US market, offering a good alternative to the incumbents.

Netlimiter

For those of you who are wed to Windows, NetLimiter is your answer.

Antamedia Bandwidth

Behind the Scenes on the latest Comcast Ruling on Net Neutrality


Yesterday the FCC ruled in favor of Comcast regarding its right to manipulate consumer traffic. As usual, the news coverage was a bit oversimplified and generic. Below we present a breakdown of the players involved, and our educated opinion as to their motivations.

1) The Large Service Providers for Internet Service: Comcast, Time Warner, Qwest

From the perspective of the Large Service Providers, these companies all want to get a return on their investment, charging the most money the market will tolerate. They will also try to increase market share by consolidating provider choices in local markets. Since they are directly visible to the public, they will also try to keep the public’s interest at heart, for without popular support, they will get regulated into oblivion. Case in point: the original Comcast problems stemmed from angry consumers learning that their p2p downloads were being redirected and/or blocked.

Any and all government regulation will be opposed at every turn, as it is generally not good for private business. In the face of a strong headwind, though, don’t be surprised if Large Service Providers try to reach a compromise quickly to alleviate any uncertainty. Uncertainty can be more costly than regulation.

To be fair, Large Service Providers are staffed top to bottom with honest, hard-working people, but their decision-making as an entity will ultimately be based on profit. To be the most profitable, they will want to prevent third-party Traditional Content Providers from flooding their networks with videos. That was the original reason why Comcast thwarted BitTorrent traffic. All of the Large Service Providers are currently content providers, or are plotting to be, and hence they have two motives to restrict unwanted traffic. Motive one is to keep capacity demands in line with their capabilities for all generic traffic. Motive two is to thwart other content providers, thus making their own content more attractive. For example, whose movie service are you going to subscribe to? A generic cloud provider such as Netflix, whose movies run choppy, or your local provider, with better quality by design?

2) The Traditional Content Providers:  Google, YouTube, Netflix etc.

They have a vested interest in expanding their reach by providing expanded video content. Google, with nowhere to go for new revenue in the search engine and advertising business, will be attempting an end-run around the Large Service Providers to take market share. The only thing standing in their way is the shortcomings of the delivery mechanism. They have even gone so far as to build out an extensive, heavily subsidized fiber test network of their own. Much of the hubbub about Net Neutrality is based on a market play to force the Large Service Providers to shoulder the Traditional Content Providers’ delivery costs. An analogy from the bird world would be the brown-headed cowbird, where the mother lays her eggs in another bird’s nest and lets her chicks be raised by an unknowing other species. Without their own delivery mechanism direct to the consumer, the Traditional Content Providers must keep pounding at the FCC for rulings in their favor. Part of the strategy is to rile consumers against the Large Service Providers with the Net Neutrality cry.

3) The FCC

The FCC is a government organization trying to take its existing powers, which were granted for the airwaves, and extend them to the Internet. As with any regulatory body, things start out well-intentioned (protection of consumers, etc.), but then it quickly becomes self-absorbed with its mission. The original reason for the FCC was that the public airwaves for television and radio have limited frequencies for broadcasts. You can’t make a bigger pipe than what the frequencies will allow, and hence it made sense to have a regulatory body oversee this vital resource. In the early stages of commercial radio, there was a real issue of competing entities broadcasting over each other in an arms race for the most powerful signal. Along those lines, the regulatory entity (FCC) has forever expanded its mission. For example, the government deciding what words can be uttered in primetime is an extension of this power.

Now with the Internet, the FCC’s goal will be to regulate whatever it can, slowly creating rules for the “good of the people.” Will these rules be for the better? Most likely the net effect is no; left alone, the Internet was fine, but agencies will be agencies.

4) The Administration and current Congress

The current Administration has touted its support of Net Neutrality, but has perhaps been so overburdened with the battle over health care and other pressing matters that no regulation has been passed. In the aftermath of the FCC getting slapped down in court to limit its current powers, I would not be surprised to see a round of legislation on this issue to regulate Large Service Providers in the near future. The Administration will paint it as consumer protection against big greedy companies that need to be reined in, as we have seen with banks, insurance companies, etc. I hope that we do not end up with an Internet Czar, but some regulation is inevitable, if nothing else as a revenue stream to tap into.

5) The Public

The Public will be the dupes in all of this: ignorant voting blocs lobbied by various scare tactics. The demographics of this debate, however, will be much different from those of the health care lobby. People concerned for and against Internet regulation tend to be in income brackets with higher education and employment rates than the typical entitlement lobbies that support regulation. It is certainly not going to be the AARP or a union lobbyist leading the charge to regulate the Internet; hence legislation may be a bit delayed.

6) Al Gore

Not sure if he has a dog in this fight; we just threw him in here for fun.

7) NetEqualizer

Honestly, bandwidth control will always be needed, as long as there is more demand for bandwidth than there is bandwidth available.  We will not be lobbying for or against Net Neutrality.

8) The Courts

This is an area where I am a bit weak in understanding how a court will follow legal precedent. However, it seems to me that almost any court can rule from the bench, finding the precedent it wants and ignoring others if it so chooses. Ultimately, Congress can pass new laws to regulate just about anything with impunity. There is no constitutional protection regarding Internet access. Most likely the FCC will be the agency carrying out enforcement once the laws are in place.

APconnections Announces New API for Customizing Bandwidth User Quotas


APconnections is proud to announce the release of its NetEqualizer User-Quota API (NUQ API) programmer’s toolkit. This new toolkit will allow NetEqualizer users to generate custom configurations to better handle bandwidth quotas* as well as keep customers informed of their individual bandwidth usage.

The NetEqualizer User-Quota API (NUQ API) programmer’s toolkit features include:

  1. Tracking user data by IP and MAC address (MAC address tracking will be out in the second release)
  2. Specifying quotas and bandwidth limits by IP or a subnet block
  3. Monitoring real-time bandwidth utilization at any time
  4. Setting up a notification alarm when a user exceeds a bandwidth limit
  5. Accessing all of the above through a programming interface (API)

In addition to providing the option to create separate bandwidth quotas for individual customers, and to reduce a customer’s Internet pipe when they have reached their individual limit, the toolkit lets customers be notified when a limit is reached, and even gives them access to an interface for monitoring current monthly usage, so they are not surprised when they reach their limit.
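The bookkeeping a quota system performs can be sketched in a few lines. The class and method names below are invented for illustration; they are not the real NUQ API:

```python
from dataclasses import dataclass, field

@dataclass
class QuotaTracker:
    """Track per-IP usage against a fixed monthly byte allowance."""
    quota_bytes: int
    usage: dict = field(default_factory=dict)

    def record(self, ip: str, nbytes: int) -> None:
        self.usage[ip] = self.usage.get(ip, 0) + nbytes

    def over_quota(self, ip: str) -> bool:
        return self.usage.get(ip, 0) >= self.quota_bytes

    def remaining(self, ip: str) -> int:
        return max(0, self.quota_bytes - self.usage.get(ip, 0))

tracker = QuotaTracker(quota_bytes=50 * 10**9)   # 50 GB per month
tracker.record("10.0.0.5", 49 * 10**9)
print(tracker.over_quota("10.0.0.5"))  # False: still under the allowance
tracker.record("10.0.0.5", 2 * 10**9)
print(tracker.over_quota("10.0.0.5"))  # True: time to throttle and notify
print(tracker.remaining("10.0.0.5"))   # 0
```

In a real deployment this state would be fed by the shaper's traffic counters, reset on each billing cycle, and the `over_quota` transition would trigger both the pipe reduction and the customer notification described above.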

Overall, the NUQ API provides a quick and easy tool to customize your business and business processes.

If you do not currently have the resources to use the NUQ API and customize it to fit your business, please contact us and we can arrange for one of our consulting partners to put together an estimate for you.  Or, if you just have a few questions, we’d be happy to put together a reasonable support contract (Support for the API programs is not included in our standard software support (NSS)).

*Bandwidth quotas are used by ISPs as a means to meter total bandwidth downloaded over a period of time. Although not always disclosed, most ISPs reserve the right to limit service for users that continually download data. Some providers use the threat of quotas as a deterrent to keep overall traffic on an Internet link down.

See how bandwidth hogs are being treated in Asia

Equalizing Compared to Application Shaping (Traditional Layer-7 “Deep Packet Inspection” Products)


Editor’s Note: (Updated with new material March 2012) Since we first wrote this article, many customers have implemented the NetEqualizer not only to shape their Internet traffic, but also to shape their company WAN. Additionally, concerns about DPI and loss of privacy have bubbled up. (Updated with new material September 2010) Since we first published this article, “deep packet inspection”, also known as Application Shaping, has taken some serious industry hits with respect to US-based ISPs.

==============================================================================================
Author’s Note: We often get asked how NetEqualizer compares to Packeteer (Bluecoat), NetEnforcer (Allot), Network Composer (Cymphonix), Exinda, and a plethora of other well-known companies that do Application Shaping (aka “packet shaping”, “deep packet inspection”, or “Layer-7” shaping). After several years of these questions, and after discussing different aspects of application shaping with former and current IT administrators, we’ve developed a response that should clarify the differences between NetEqualizer’s behavior-based approach and the rest of the pack.
We thought of putting our response into a short, bullet-by-bullet table format, but then decided that since this decision often involves tens of thousands of dollars, 15 minutes of education on the subject, with content to support the bullet chart, was in order. If you want to skip the details, see our Summary Table at the end of this article.

However, if you’re looking to really understand the differences, and to have the question answered as objectively as possible, please take a few minutes to read on…
==============================================================================================

How NetEqualizer compares to Bluecoat, Allot, Cymphonix, & Exinda

In the following sections, we will cover specifically when and where Application Shaping is used, how it can be used to your advantage, and also when it may not be a good option for what you are trying to accomplish.  We will also discuss how Equalizing, NetEqualizer’s behavior-based shaping, fits into the landscape of application shaping, and how in many cases Equalizing is a much better alternative.

Download the full article (PDF): Equalizing Compared To Application Shaping White Paper


Will the Rural Broadband Initiative Create New Jobs?


By Art Reisman, CTO, www.netequalizer.com

Art Reisman CTO www.netequalizer.com
Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, wireless ISPs, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably.

I’m sure that most people living in rural areas are excited about the prospects of lower cost broadband. But, what will be the ultimate result of this plan? Will it be a transforming technology on the scale of previous campaigns implemented for electricity and interstate highways?  Will the money borrowed see a return on investment through higher productivity and increased national wealth?

The answer is most likely “no.” Here’s why…

  1. The premise of a return on investment by bringing bandwidth to rural areas assumes there is some kind of dormant, untapped economic engine that will spring to life once sprinkled with additional bandwidth. This isn’t necessarily the case.
  2. There is also an implied myth that somehow rural America does not have access to broadband. This is simply not true.

Here are some questions and issues to consider:

Are rural communities really starved for bandwidth?

Most rural small businesses already have access to decent broadband speeds and are not stuck on dial-up. To be fair, rural broadband currently is not quite fast enough to watch unlimited YouTube, but it is certainly fast enough to allow for VoIP, e-mail, sending documents, and basic communication without the plodding pace of dial-up.

We support approximately 500 rural operators around the US and the world. The enabling technology for getting bandwidth to rural areas is well established, using readily available line-of-sight backhaul equipment.

For example, let’s say you want to start a broadband business 80 miles southwest of Wichita, Kansas. How do you tap into the major Internet backbone? In the worst case, the nearest point of presence (POP) for a major backbone Internet provider is in Wichita. For a few thousand dollars, you can run a microwave link from Wichita out to your town using common backhaul technology, and then distribute broadband access to your local community using point-to-multipoint technology. The technology to move broadband into rural areas is not futuristic; it is a viable and profitable industry that has evolved to meet market demands.

How much bandwidth is enough for rural business needs?

We support hundreds of businesses and their bandwidth needs. From our observations, unless a business is specifically a content-distribution or hosting company, it purchases minimal pipes, much less per capita than a consumer household.

Why? They don’t want to subsidize their employees’ YouTube and online entertainment habits. Therefore, they typically don’t need more than a 1.5-megabit connection for an office of 20 or so employees.

As mentioned, bandwidth in rural American towns is not quite up to the same standards as in major metro areas, but the service is adequate to ensure that businesses are not at a disadvantage. Most high-speed capacity beyond business needs is used primarily for entertainment: watching videos, playing Xbox, and so on. It’s not that these activities are bad; it’s just that they are consumer activities and not related to business productivity. Hence, I would argue that a government subsidy to bring high speed into rural areas will have little additional economic impact.

The precedent of building highways to rural areas cannot be compared to broadband.

Highways did open the country to new forms of commerce, but there was a clear geographic hurdle to overcome that no commercial entity would take on. There were farm producers in rural America, vital to our GDP, that had to get product to market efficiently.

The interstate system was necessary to open the country to commerce, and I would agree that moving goods from coast to coast via highway certainly benefits everybody. Grain and corn from the Midwest must be brought to market through a system of feeder roads connecting into the Interstate and rail systems. And almost any route for transporting goods must include a segment of highway.

But the Internet transports data, and there is no geographic restriction on where data gets created and consumed. So there is no underlying economic need to make use of rural America with respect to data. Even if there were a small business building widgets in rural America, I challenge any government official to cite one instance of a business not being able to function for lack of Internet connectivity. I am able to handle my e-mail on a $49-per-month WildBlue Internet connection 20 miles from the nearest town in the middle of Kansas, and my customers cannot tell the difference; neither can I.

With broadband there is only data to transport, and unlike the geographic necessity of farm products, there is no compelling reason why it needs to be produced in rural areas. Nor is there evidence of a problem moving it from one end of the country to the other; the major links between cities are already well established.

Since Europeans are far better connected than the US, we are falling behind.

This comparison is definitely effective in convincing Americans that something drastic needs to be done about the country’s broadband deficiencies, but it needs to be kept in perspective.

While it is true the average teenager in Europe can download and play oodles more games with much more efficiency than a poor American farmhand in rural Texas, is that really setting the country back?

Further, the population densities in Western Europe make the economics of high-speed links to everybody much more feasible than stringing lines through rural towns 40 miles apart in America’s heartland. I don’t think the Russians are trying to send gigabit lines to every village in Siberia, which would be a more realistic analogy than comparing U.S. broadband coverage to Western Europe in general.

Therefore, while the prospect of expanded broadband Internet access to rural America is appealing for many reasons, both the positive outcomes of its implementation as well as the consequences of the current broadband shortcomings must be kept in perspective. The majority of rural America is not completely bandwidth deprived. Although there are shortcomings, they are not to the extent that commerce is suffering, nor to the extent that changes will lead to a significant increase in jobs or productivity. This is not to say that rural bandwidth projects should not be undertaken, but rather that overly ambitious expectations should not be the driving force behind them.

It looks like Robert Mitchell, in this 2007 PC World article, disagrees with me.

NetEqualizer: Advanced Tuning

NetEqualizer Bandwidth Shaping Solution: Business Centers


In working with numerous Business Center network administrators, we have heard the same issues and challenges repeatedly. Here are just a few:


  • We need to do more with less bandwidth.
  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need to support selling fixed bandwidth to our customers, by office and/or user.
  • We need to be able to report on subscriber usage.
  • We need to increase user satisfaction and reduce network troubleshooting calls.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many Business Centers around the world.

Download article (PDF) Business Centers White Paper


NetEqualizer Superhero Animation Helps to Redefine the World of WAN Optimization


Lafayette, CO, February 2, 2010 — APconnections, a leading supplier of plug-and-play bandwidth shaping products and the creator of the NetEqualizer, today announced their new animation available for online viewing.

 Eli Riles, a consulting partner at APconnections, summed it up this way:

“Over the years, we’ve had several clients ask us for an easy way to explain how the NetEqualizer works. Well, our newest NetEqualizer video may be our best response yet. With the help of People Productions of Boulder, we’ve captured the NetEqualizer’s Network Optimization effectiveness in two new videos — one straight to the point and the other a little more detailed.

“So, if you’re looking for an easy way to explain exactly what you’re doing to make your network run smoothly, or are just in need of an overview of how the NetEqualizer works, take a look.”

To view the video: NetEqualizer Superhero Video

NetEqualizer Bandwidth Shaping Solution: K-12 Schools


Download K-12 Schools White Paper

In working with network administrators at public and private K-12 schools over the years, we’ve repeatedly heard the same issues and challenges facing them. Here are just a few:

  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need a solution that will prioritize classroom videos and other online educational tools (e.g. blackboard.com).
  • We need to improve the overall Web-user experience for students.
  • We need a solution that doesn’t require “per-user” licensing.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many public and private K-12 schools around the world.

Download article (PDF) K-12 Schools White Paper


URL-Based Shaping With Your NetEqualizer: A How To Guide


What is URL-based Shaping?

URL shaping is the ability to specify the URL, normally a popular site such as YouTube or NetFlix, and set up a fixed-rate limit for traffic to that specific URL.

Is URL shaping just a matter of using a reverse lookup on a URL to get the IP address and plugging it into a bandwidth controller?

In the simplest case, yes, but for sites such as YouTube, the URL of http://www.youtube.com will have many associated IP addresses used for downloading actual videos. Shaping exclusively on the base URL would not be effective.
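To see why, try resolving a hostname yourself. A short sketch using only the Python standard library shows how few addresses a plain DNS lookup returns, compared with the many content servers that actually deliver video:

```python
import socket

def resolve_ips(hostname):
    """Return the set of IPv4 addresses DNS reports for a hostname."""
    infos = socket.getaddrinfo(hostname, 80, socket.AF_INET,
                               socket.SOCK_STREAM)
    return {sockaddr[0] for _, _, _, _, sockaddr in infos}

# Results vary by resolver and location, but resolve_ips("www.youtube.com")
# typically returns only a handful of front-end addresses, not the
# media servers the videos actually stream from.
```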

Is URL shaping the same thing as application shaping?

No. Although similar in some ways, there are significant differences:

  1. URL shaping is essentially the same as shaping by a known IP address. The trick with URL shaping is to discover IP addresses associated with a well-known URL.
  2. Application shaping uses Deep Packet Inspection (DPI). URL shaping does not. It does not inspect or open customer data.

How to set up URL-based shaping on your NetEqualizer

The following specifications are necessary:

  1. NetEqualizer version 4.0 or later
  2. A separate Linux-based client that accesses the Internet through the NetEqualizer
  3. The Perl source code for client URL shaping (listed below) loaded onto a client
  4. You will also need to set up your client so that it has permission to run RSH (remote shell) commands on your NetEqualizer without being prompted for a password. If you do not, your Perl discovery routine will hang. Notes for setting up RSH permissions are outlined below.

How it works…

Save the Perl source code into a .pl file; we suggest urlfinder.pl.

Make the file executable:

chmod 777 urlfinder.pl

Run the script from the command line with the following syntax, replacing domain.com with the specific URL you wish to shape:

./urlfinder.pl http://www.domain.com pool# downlimit uplimit x.x.x.x y.y.y.y

  • Pool# is an unused bandwidth pool on your NetEqualizer unit
  • Downlimit is the rate in bytes per second incoming for the URL
  • Uplimit is the rate in bytes per second outgoing to the Internet for the URL
  • x.x.x.x is the IP address of your NetEqualizer
  • y.y.y.y is the IP address of the client

The script will attempt an HTTP request to http://www.domain.com, then recursively follow links starting from the main domain URL. It stops when there are no more links to follow or when 150 pages have been accessed. Any foreign IPs found during the session will be put into the given bandwidth pool as Class B (/16) masks, and will remain shaped until you remove the pool.
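The crawl-and-collect logic described above can be sketched as follows. This is not the shipped Perl script, just a simplified Python illustration of the same idea: follow links from the starting URL, stop at 150 pages, and record each discovered host’s address as a Class B (/16) style mask. The fetch and resolve callables are injectable so the sketch can be exercised without network access.

```python
import re
import socket
from urllib.parse import urljoin, urlparse

PAGE_LIMIT = 150  # matches the script's stated stopping point

def collect_class_b_masks(start_url, fetch, resolve=socket.gethostbyname):
    """Breadth-first crawl from start_url, resolving each linked host and
    truncating its IP to a Class B (/16) style mask, e.g. 208.65.0.0.

    fetch(url) -> HTML string; resolve(host) -> dotted-quad IP.
    """
    seen_pages, queue, masks = set(), [start_url], set()
    while queue and len(seen_pages) < PAGE_LIMIT:
        url = queue.pop(0)
        if url in seen_pages:
            continue
        seen_pages.add(url)
        try:
            html = fetch(url)
        except OSError:
            continue  # unreachable page: skip it
        for href in re.findall(r'href=["\'](.*?)["\']', html):
            link = urljoin(url, href)
            host = urlparse(link).hostname
            if host:
                queue.append(link)
                try:
                    a, b, *_ = resolve(host).split(".")
                    masks.add(f"{a}.{b}.0.0")
                except OSError:
                    pass  # DNS failure: ignore this host
    return masks
```

The real script would then feed each mask into the chosen bandwidth pool on the NetEqualizer.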

Notes:

In our beta testing, the script did well in finding YouTube subnets used for videos.  We did not confirm whether the main NetFlix home page URL shares IP subnets with their download sites.

Notes for setting up RSH

Begin Notes

These notes assume you are either logged in on the Client as root, or that you use sudo -i and are acting as root.

192.168.1.143 is used in the example as the Server (NetEq) IP.

On your Client machine, do:

  • ssh-keygen -t rsa -b 4096
  • ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.143
  • nano -w /etc/ssh/ssh_config

Make sure that these are as follows:

  • RhostsRSAAuthentication yes
  • RSAAuthentication yes
  • EnableSSHKeysign yes
  • HostbasedAuthentication yes

The next command, entered as one line, copies the Client’s host key to the Server’s ssh_known_hosts file:

  • scp /etc/ssh/ssh_host_rsa_key.pub root@192.168.1.143:/etc/ssh/ssh_known_hosts

The next command, entered as one line, copies the Client’s host key to the Server’s ssh_known_hosts2 file:

  • scp /etc/ssh/ssh_host_rsa_key.pub root@192.168.1.143:/etc/ssh/ssh_known_hosts2

Now, find out your HOSTNAME on the Client:

  • echo $HOSTNAME

On the Server machine, do:

  • nano -w /etc/hosts.equiv
  • Add a line with the Client’s hostname and the user, for example: harry-lin root (in this example, the Client’s $HOSTNAME was harry-lin)
  • nano -w /etc/ssh/sshd_config

Check the following:

  • PermitRootLogin yes
  • StrictModes yes
  • RSAAuthentication yes
  • PubkeyAuthentication yes
  • AuthorizedKeysFile %h/.ssh/authorized_keys
  • IgnoreRhosts no
  • RhostsRSAAuthentication no
  • HostbasedAuthentication yes

Now do:

  • chown root:root /root

Then:

  • /etc/init.d/ssh reload

Now you can try something like this from your Client:

  • ssh root@192.168.1.143

If it doesn’t work, then do the following, which gives you details if possible:

  • ssh -v root@192.168.1.143

Final Notes: While support for this utility is NOT currently included with your NetEqualizer, we will assist any customers with a current Network Software Subscription for up to one hour. For additional support, consulting fees may apply.

Comcast Suit: Was Blocking P2P Worth the Final Cost?


By Art Reisman
CTO of APconnections
Makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer

Art Reisman CTO www.netequalizer.com

Comcast recently settled a class action suit in the state of Pennsylvania regarding its practice of selectively blocking P2P. So far, the first case was settled for $16 million, with more cases on the docket yet to come. To recap: Comcast and other large ISPs invested in technology to thwart P2P, denied involvement when first accused, got spanked by the FCC, and now Comcast is looking to settle various class action suits.

When Comcast’s practices were established, P2P usage was skyrocketing with no end in sight, and blocking some of it was necessary to preserve reasonable speeds for all users. Given that there was no specific law or ruling on the books, it seemed like mucking with P2P to alleviate gridlock was a rational business decision. The decision made even more sense considering that DSL providers were poaching disgruntled customers. With that said, Comcast wasn’t alone in the practice; all of the larger providers were throttling P2P to some extent to ensure good response times for all of their customers.

Yet, with the lawsuits mounting, it appears on face value that things backfired a bit for Comcast. Or did they?

We can work out some very rough estimates of the final cost trade-off. Here goes:

I am going to guess that before this plays out completely, settlements will run close to $50 million or more. To put that in perspective, Comcast posted a 2008 profit of close to $3 billion, so $50 million is hardly a dent to their stockholders. But to play this out, we must ask what the ramifications would have been of not blocking P2P back when all of this began and P2P was a more serious bandwidth threat. (Today, while P2P has declined, YouTube and online video are the primary bandwidth hogs.)

We’ll start with the customer. The cost of acquiring a new customer is usually calculated at around six months of service, or approximately $300. So, to keep things simple, we’ll assume the net cost of losing a customer is roughly $300. In addition, there are also support costs related to congested networks, which can easily run $300 per customer incident.

The other, more subtle cost of P2P is that the methods used to deter P2P traffic were designed to keep traffic on the Comcast network. You see, ISPs pay for exchanging data when they hand off to other networks, so by limiting the amount of data exchanged, they can save money. I did some cursory research on the costs involved and did not come up with anything concrete, so I’ll assume a heavy P2P customer costs about $5 per month in exchange fees.

So, let’s put the numbers together to get an idea of how much potential financial damage P2P was causing back in 2007. (Again, these are estimates, not fact; comments and corrections are welcome.)

  • Comcast had approximately 15 million broadband customers in 2008.
  • If 1 in 100 were heavy P2P users, at $5 each that would be $750,000 per month in exchange costs.
  • Net lost customers to a competitor might be 1 in 500 a month; at $300 each, that would run $9 million a month.
  • Support calls due to preventable congestion might run another 1 out of 500 customers, or $9 million a month.

So, very conservatively for 2007 and 2008, incremental costs related to unmitigated P2P could easily have run a total of roughly $450 million right off the bottom line.
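The estimate is easy to check, or to rerun with your own assumptions. Every input below is one of the back-of-envelope guesses stated above, not a figure from Comcast:

```python
# Back-of-envelope recomputation of the estimates above.
# Every input is a stated guess, not a figure from Comcast.
subscribers = 15_000_000

heavy_p2p_share = 1 / 100        # 1 in 100 heavy P2P users
exchange_cost_per_user = 5       # $/month in data-exchange fees

churn_share = 1 / 500            # net customers lost per month
cost_per_lost_customer = 300     # ~6 months of service

support_share = 1 / 500          # congestion-driven support incidents
cost_per_incident = 300

monthly = (subscribers * heavy_p2p_share * exchange_cost_per_user
           + subscribers * churn_share * cost_per_lost_customer
           + subscribers * support_share * cost_per_incident)

months = 24                      # 2007 and 2008
total = monthly * months
print(f"${monthly:,.0f}/month, ${total:,.0f} over {months} months")
# -> $18,750,000/month, $450,000,000 over 24 months
```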

Therefore, while these calculations are approximations, in retrospect it was likely financially well worth the risk for Comcast to mitigate the effects of unchecked P2P. Of course, the public relations costs are much harder to quantify.