White Paper: A Simple Guide to Network Capacity Planning


After many years of consulting and supporting the networking world with WAN optimization devices, we have sensed a lingering fear among Network Administrators who wonder if their capacity is within the normal range.

So the question remains:

How much bandwidth can you survive with before you impact morale or productivity?

The formal term we use to describe the number of users sharing a network link to the Internet is contention ratio. This term is defined as the size of an Internet trunk divided by the number of users. We normally think of Internet trunks in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio. If sharing the bandwidth on the trunk equally and simultaneously, each user could sustain a constant feed of 100 kbps, which is exactly 1/10 of the overall bandwidth.
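
If you like to see the arithmetic spelled out, here is a minimal sketch of the definition in Python (the function names are ours, purely for illustration):

```python
def contention_ratio(subscribers, trunk_mbps):
    """Subscribers per megabit of trunk: 10 users on a
    1-megabit trunk is a 10-to-1 ratio."""
    return subscribers / trunk_mbps

def per_user_floor_kbps(trunk_mbps, subscribers):
    """Sustained rate each user gets if everyone pulls data
    at once and the trunk is split equally."""
    return trunk_mbps * 1000 / subscribers

print(contention_ratio(10, 1))      # 10.0 -> a 10-to-1 ratio
print(per_user_floor_kbps(1, 10))   # 100.0 kbps per user
```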

From a business standpoint, an acceptable contention ratio is whatever a customer will put up with and pay for without canceling their service. This definition may seem ethically suspect, but whether in the bygone days of telecommunications phone service or in the contemporary Internet bandwidth business, there are long-standing precedents for overselling. What do you think causes a circuit busy signal? Or a dropped cell phone call?

So, without pulling any punches, what exactly will a customer tolerate before pulling the plug?

Here are some basic observations about consumers and acceptable contention ratios:

  • Rural customers in the US and Canada: Contention ratios of 50 to 1 are common
  • International customers in remote areas of the world: Contention ratios of 80 to 1 are common
  • Internet providers in urban areas: Contention ratios of 15 to 1 are to be expected
  • Generic business customers: Contention ratios of 50 to 1 are common, and sometimes higher

Update, January 2015: Quite a bit has happened since these original numbers were published. Internet prices have plummeted. Here are my updated observations:

  • Rural customers in the US and Canada: Contention ratios of 10 to 1 are common
  • International customers in remote areas of the world: Contention ratios of 20 to 1 are common
  • Internet providers in urban areas: Contention ratios of 2 to 1 are to be expected
  • Generic business customers: Contention ratios of 5 to 1 are common, and sometimes higher

As a rule, businesses can generally get away with slightly higher contention ratios. Most business use does not create the same load as recreational use, such as YouTube and file sharing. Obviously, many businesses will suffer the effects of recreational use and perhaps haphazardly turn a blind eye to enforcement. The above ratio of 50 to 1 is a general guideline for what a business should be able to work with, assuming they are willing to police their network usage and enforce policy.

The numbers above are a good, rough starting point, but things are not as simple as they look. There is a statistical twist as bandwidth amounts get higher.

Contention ratios can actually increase as the overall Internet trunk size gets larger. For example, if 50 people can share one megabit without mutiny, it should follow that 100 people can share two megabits without mutiny, as the ratio has not changed. It is still 50 to 1.

However, from observations of hundreds of ISPs, we can easily conclude that perhaps 110 people can share two megabits with the same tolerance as 50 people sharing one megabit. What this means is that the larger the ISP, the more bandwidth at a fixed cost per megabit, and thus the larger the contention ratios you can get away with.

Is this really true? And if so, what are its implications for your business?

This is simply an empirical observation, backed up by talking to literally thousands of ISPs over the course of four years and noticing how their oversubscription ratios increase with the size of their trunk.

A conservative estimate is that, starting with the baseline ratio listed above, you can safely add 10 percent more subscribers above and beyond the original contention ratio for each megabit of trunk they share.

Thus, to provide an illustration, 50 people sharing one megabit can safely be increased to 110 people sharing two megabits, and at four megabits you can easily handle 280 customers. With this understanding, getting more from your bandwidth becomes that much easier.
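
One way to turn that rule of thumb into arithmetic is shown below. Note that the compounding interpretation is our own reading of the rule; the article states it informally, and its 280 figure is a touch more generous than the formula produces.

```python
def safe_subscribers(trunk_mbps, base_ratio=50, bump=0.10):
    """Start from the baseline contention ratio, then allow
    roughly 10% more subscribers for each megabit of trunk
    beyond the first (compounded per megabit here)."""
    return base_ratio * trunk_mbps * (1 + bump) ** (trunk_mbps - 1)

for mbps in (1, 2, 4):
    print(mbps, round(safe_subscribers(mbps)))
# 1 -> 50, 2 -> 110, 4 -> 266; the article rounds the last
# figure up to "easily handle 280", so treat all of these as
# rough estimates rather than exact capacity planning.
```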

I also ran across this thread in a discussion group for ResNet administrators around the country.

From Resnet Listserv

Brandon Enright at University of California San Diego breaks it down as follows:
Right now we’re at .2 Mbps per student. We could go as low as .1 right
now without much of any impact. Things would start to get really ugly
for us at .05 Mbps / student.

So at 10k students I think our lower-bound is 500 Mbps.

I can’t disclose what we’re paying for bandwidth but even if we fully
saturated 2Gbps for the 95th percentile calculation it would come out to
be less than $5 per student per month. Those seem like reasonable
enough costs to let the students run wild.
Brandon

Editor’s note: I am not sure why a public institution can’t disclose exactly what they are paying for bandwidth (Brandon does give a good hint), as this would be useful to the world for comparison; however, many universities get lower-than-commercial rates through state infrastructure not available to private operators.
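
Brandon’s figures are easy to sanity-check. Here is a quick back-of-the-envelope script using his numbers (the arithmetic and variable names are ours):

```python
students = 10_000
comfortable_mbps = 0.2   # current provisioning per student
ugly_mbps = 0.05         # where things "get really ugly"

print(students * comfortable_mbps)   # 2000.0 -> 2 Gbps today
print(students * ugly_mbps)          # 500.0 -> the stated floor

# The cost hint: fully saturating 2 Gbps at the 95th percentile
# for under $5/student/month implies a bandwidth bill below
# $50,000/month, i.e. under about $25 per megabit per month.
max_bill = 5 * students
print(max_bill, max_bill / 2000)     # 50000 25.0
```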

Related article: ISP contention ratios.

By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Simple Is Better with Bandwidth Monitoring and Traffic Shaping Equipment


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. However, the question a typical CIO will want answered before approving any purchase is, “What is the return on investment for your equipment purchase?” Putting a hard and fast number on bandwidth optimization equipment may seem straightforward. If you can quantify the cost of your bandwidth and project an approximate reduction in usage or increase in throughput, you can crunch the numbers. But is that all you should consider when determining how much you should spend on a bandwidth optimization device?

The traditional way of looking at monitoring your Internet has two dimensions: first, the fixed cost of the monitoring tool used to identify traffic, and second, the labor associated with devising and implementing the remedy. In an ironic inverse correlation, we assert that your ROI will degrade with the complexity of the monitoring tool.

Obviously, the more detailed the reporting/shaping tool, the more expensive its initial price tag. Yet, the real kicker comes with part two. The more detailed data output generally leads to an increase in the time an administrator is likely to spend making adjustments and looking for optimal performance.

But, is it really fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it wouldn’t make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. But, typically, the more information an admin has about a network, the more inclined he or she might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that when the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. However, in reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network adjusting can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s. The function of the computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

Many of our customers have found an effective compromise: stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing. Abuse becomes obvious just looking at the usage (a simple report).
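
A “simple report” of this kind needs very little machinery. The sketch below captures the entire idea; the data source and field formats are placeholders, not the output of any particular product:

```python
from collections import Counter

def top_talkers(flow_records, top_n=5):
    """Sum bytes per user and return the heaviest users.
    flow_records: iterable of (user_id, bytes) pairs in
    whatever form your collector exposes (placeholder)."""
    usage = Counter()
    for user, nbytes in flow_records:
        usage[user] += nbytes
    return usage.most_common(top_n)

# With a bell-shaped usage curve, the one or two percent of
# users far above the mean stand out immediately:
sample = [("10.0.0.5", 9_000_000_000),
          ("10.0.0.7", 200_000_000),
          ("10.0.0.9", 150_000_000)]
print(top_talkers(sample))
```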

However, there is also the personal control factor, which often does not follow clear lines of ROI.

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into, for example, a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

List of monitoring tools compiled by Stanford

ROI tool: determine how much a bandwidth control device can save.

Great article on choosing a bandwidth controller

Planetmy Linux Tips: How to set up a monitor for free

Good enough is better: a lesson from the Digital Camera Revolution

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

The Promise of Streaming Video: An Unfunded Mandate


By Art Reisman, CTO, www.netequalizer.com

Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably. What follows is an objective educational journey on how consumers and ISPs can live in harmony with the explosion of YouTube video.

The following is written primarily for the benefit of small-to-mid-sized Internet service providers (ISPs). However, home consumers may also find the details interesting. Please follow along as I break down the business model of the costs required to keep up with growing video demand.

In the past few weeks, two factors have come up in conversations with our customers, which have encouraged me to investigate this subject further and outline the challenges here:

1) Many of our ISP customers are struggling to offer video at competitive levels during the day, and yet are being squeezed by high bandwidth costs. Many look to the NetEqualizer to alleviate video congestion problems. As you know, there are always trade-offs to be made in handling any congestion issue, which I will discuss at the end of this article. But back to the subject at hand. What I am seeing from customers is an underlying fear that they (IT administrators) are behind the curve. As I have an opinion on this, I decided I need to lay out what is “normal” in terms of contention ratios for video, as well as what is “practical” for video in today’s world.

2) My Internet service provider, a major player that heavily advertises how fast their speed is to the home, periodically slows down standard YouTube videos. I should be fair with my accusation; with the Internet you can actually never be quite certain who is at fault. Whether I am being throttled or not, the point is that there is an ever-growing number of video content providers who are pushing ahead with plans that do not take into account, nor care about, a last-mile provider’s ability to handle the increased load. A good analogy would be a travel agency that is booking tourists onto a cruise ship without keeping a tally of tickets sold, nor caring, for that matter. When all those tourists show up to board the ship, some form of chaos will ensue (and some will not be able to get on the ship at all).

Some ISPs are also adding to this issue by building out infrastructure without regard to content demand and hoping for the best. They are in a tight spot, caught up in a challenging balancing act between customers, profit, and their ability to actually deliver video at peak times.

The Business Cost Model of an ISP trying to accommodate video demands

Almost all ISPs rely on the fact that not all customers will pull their full allotment of bandwidth all the time. Hence, they can map out an appropriate subscriber ratio for their network, and also advertise bandwidth rates sufficient to handle video. There are four main governing factors on how fast an actual consumer circuit will be:

1) The physical speed of the medium to the customer’s front door (this is often the speed cited by the ISP)
2) The combined load of all customers sharing their local circuit and the local circuit’s capacity (the subscriber ratio factors in here)
3) How much bandwidth the ISP contracts out to the Internet (from the ISP’s provider)

4) The speed at which the source of the content can be served (YouTube’s servers). We’ll assume this is not a source of contention in our examples below, but it certainly should remain a suspect in any finger-pointing over a slow circuit.

The actual limit to the amount of bandwidth a customer gets at one time, which dictates whether they can run a live streaming video, usually depends on how oversold their ISP is (based on the “subscriber ratio” mentioned in point 2 above). If your ISP can predict the peak loads of their entire circuit correctly, and purchase enough bulk bandwidth to meet that demand (point 3 above), then customers should be able to run live streaming video without interruption.

The problem arises when providers put together a static set of assumptions that break down as consumer appetite for video grows faster than expected.  The numbers below typify the trade-offs a mid-sized provider is playing with in order to make a profit, while still providing enough bandwidth to meet customer expectations.

1) In major metropolitan areas, as of 2010, bandwidth can be purchased in bulk for about $3,000 per 50 megabits. Some localities are less, some more.

2) ISPs must cover an amortized fixed cost per customer: billing, sales staff, support staff, customer premise equipment, interest on investment, and licensing, which comes out to about $35 per month per customer.

3) We assume market competition fixes the price at about $45 per month for a residential Internet customer.

4) This leaves $10 per month for profit margin and bandwidth fees.  We assume an even split: $5 a month per customer for profit, and $5 per month per customer to cover bandwidth fees.

With 50 megabits at $3,000 and each customer contributing $5 per month, this dictates that you must share the 50-megabit pipe amongst 600 customers to be viable as a business. This is the governing factor on how much bandwidth is available to all customers for all uses, including video.
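
Stringing the four assumptions together makes the break-even math explicit (same numbers as above):

```python
bulk_cost = 3000        # $/month for 50 megabits, 2010 metro pricing
fixed_cost = 35         # $/month per customer (billing, staff, gear)
market_price = 45       # $/month, residential
profit = 5              # $/month per customer

bandwidth_budget = market_price - fixed_cost - profit   # $5/customer
customers_needed = bulk_cost / bandwidth_budget
print(bandwidth_budget, customers_needed)               # 5 600.0
# 600 customers must share the 50-megabit pipe for the
# business to break even under these assumptions.
```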

So how many simultaneous YouTube Videos can be supported given the scenario above?

Live streaming YouTube video needs on average about 750 kbps, or about 3/4 of a megabit, in order to run without breaking up.

On a 50-megabit shared link provided by an ISP, in theory you could support about 70 simultaneous YouTube sessions, assuming nothing else is running on the network. In the real world, there will always be background traffic other than YouTube.

In reality, you are always going to have a minimum fixed load of Internet usage from 600 customers of approximately 10 to 20 megabits. That 10-to-20-megabit load supports everything else: web surfing, downloads, Skype calls, etc. So realistically you can support about 40 YouTube sessions at one time. What this implies is that if 10 percent of your customers (60 customers) start to watch YouTube at the same time, you will need more bandwidth, or else you are going to get some complaints. ISPs that desperately want to support video must count on no more than about 40 simultaneous videos running at one time, or a little less than 10 percent of their customers.
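
Here is the session math, using the figures above (we take the upper end of the background load, which is what yields the 40-session figure):

```python
trunk_mbps = 50
video_mbps = 0.75        # ~750 kbps per YouTube stream
background_mbps = 20     # upper end of the 10-20 megabit baseline

theoretical = trunk_mbps / video_mbps
realistic = (trunk_mbps - background_mbps) / video_mbps
print(round(theoretical), round(realistic))   # 67 40
# Roughly 70 sessions on an idle link, but only about 40 once
# the everything-else load is accounted for: under 7% of the
# 600 customers streaming at once saturates the pipe.
```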

Based on the scenario above, if 40 customers simultaneously run YouTube, the link will be exhausted and all 600 customers will be wishing they had their dial-up back. At last check, YouTube traffic accounted for 10 percent of all Internet traffic. If left completely unregulated, a typical rural ISP could already find itself on the brink of saturation from normal YouTube usage. Tier-1 providers in major metro areas usually have more bandwidth, but with that comes higher expectations of service, and hence some saturation is inevitable.

This is why we believe that Video is currently an “unfunded mandate”.  Based on a reasonable business cost model, as we have put forth above, an ISP cannot afford to size their network to have even 10% of their customers running real-time streaming video at the same time.  Obviously, as bandwidth costs decrease, this will help the economic model somewhat.

However, if you still want to tune for video on your network, consider the options below…

NetEqualizer and Trade-offs to allow video

If you are not a current NetEqualizer user, please feel free to call our engineering team for more background.  Here is my short answer on “how to allow video on your network” for current NetEqualizer users:

1) You can determine the IP address ranges for popular sites and give them priority via setting up a “priority host”.
This is not recommended for customers with 50 megs or less, as generally this may push you over into a gridlock situation.

2) You can raise your HOGMIN to 50,000 bytes per second.
This will generally let in the lower-resolution video sites. However, they may still incur penalties should they start buffering at a rate higher than 50,000 bytes per second. Again, we would not recommend this change for customers with pipes of 50 megabits or less.

With either of the above changes you run the risk of crowding out web surfing and other interactive uses, as we have described above. You can only balance so much video before you run out of room. Please remember that the default settings on the NetEq are designed to slow video before the entire network comes to a halt.

For more information, you can refer to another of Art’s articles on the subject of Video and the Internet:  How much YouTube can the Internet Handle?

Other blog posts about ISPs blocking YouTube

Do We Really Need IPv6 And When?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all Deep Packet Inspection technology from their NetEqualizer product over two years ago.

First off, let me admit my track record is not that stellar when it comes to predicting the timing of imminent technology changes.

In 1943, Thomas Watson, the chairman of IBM, forecast a world market for “maybe only five computers.” Years before IBM launched the personal computer in 1981, Xerox had already successfully designed and used PCs internally… but decided to concentrate on the production of photocopiers. Even Ken Olsen, founder of Digital Equipment Corporation, said in 1977, “There is no reason anyone would want a computer in their home” (read about other predictions that missed the mark).

As a young computer scientist, circa 1984, I would often get questions from friends on whether they needed a personal computer. I was on the same bandwagon as Ken Olsen, telling anybody who asked — my dentist, my in-laws, random strangers in the park — that it was absurd to think the average person would ever need a PC.

I did learn from my mistake and now simply understand that I really just suck at predicting consumer trends.

However, while the adoption of the personal computer was a private, consumer-driven phenomenon, IPv6 is not a consumer issue. And my track record as an innovator of technology for business is much better. My years of guiding engineering decisions at Bell Labs, and now running my own technology company, provide a good base for understanding the headwinds facing IPv6.

Since the transition to IPv6 is not a consumer adoption issue, it has many more parallels to the Y2K scare than to the iPod. But even then, there are major differences.

Y2K had a time-bomb deadline. You could choose to ignore it, but most IT managers could not afford to be wrong, so they were played by their vendors with expensive upgrades.

My prediction is that we will not transition to IPv6 this century, and if we attempt such a change, there will be utter chaos and mayhem to the point that we will have to revert back to IPv4.

Here’s my argument:

  1. There is no formal central control for certification of Internet equipment. Yes, manufacturers are self-proclaiming readiness, but even if they all do a relatively good and professional job of testing — even with 99 percent accuracy — on switchover day, the day everybody starts using the IPv6 address space, the cumulative errors from traffic getting lost, delayed, or bounced by the one percent of equipment with problems will bring the Internet to its knees. I don’t think the world will sit around for weeks or even months without the Internet while millions of pieces of routing equipment from thousands of manufacturers are retrofitted with upgrades.
  2. There’s no precedent. The only close precedent for changing the Internet address space was when AT&T added extra digits to the dialing plan. At the time, they controlled everything from end to end. They also had only one mission, and that was to complete a circuit from A to B. Internet routers, other than those in the main backbone, perform all kinds of auxiliary functions today, such as firewalls, Web filtering, and optimization, further distancing themselves from any previous precedent.
  3. We have a viable workaround. Although a bit cumbersome, organizations and ISPs have been making do with a limited public address space using Network Address Translation (NAT) for more than 10 years already. NAT can expand one Internet address into thousands (see the quick arithmetic after this list). Yes, public IP addresses for every man, woman, and child on Earth and every other planet in the Milky Way are possible with IPv6, but for the foreseeable future, NAT combined with the 4 billion addresses available in IPv4 should do the trick, especially given the insurmountable difficulty of a switchover.
  4. Phased switchover nonsense? The proponents of moving to IPv6 are touting a phased switchover. I am not sure what this accomplishes. If one set of users starts using the larger address range, for example, the Indian government, they will still need to keep their original address range in order to communicate with the rest of the world. To realize the benefits of IPv6, the world as a whole will need 100 percent participation. A phased switchover by a segment of users will only benefit vendors selling equipment.
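
To put point 3 in numbers, here is some illustrative address arithmetic (real NAT tables have practical limits well below the theoretical port ceiling):

```python
ipv4_total = 2 ** 32     # ~4.3 billion addresses
ipv6_total = 2 ** 128    # astronomically larger

# One public IPv4 address behind NAT can multiplex thousands of
# concurrent flows, one per port/protocol slot; ~64k TCP ports
# is the theoretical ceiling per public IP and remote endpoint.
flows_per_public_ip = 2 ** 16

print(f"{ipv4_total:,}")             # 4,294,967,296
print(flows_per_public_ip)           # 65536
print(ipv6_total // ipv4_total)      # 2**96, ~7.9e28x more space
```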

Despite these predictions, the NetEqualizer is ready for IPv6. We have already done some preliminary validation of the IPv6 implementation in our NetEqualizer. In fact, we have even run on networks with IPv6 traffic without issues. While we have some work to do to make our product fully functional, we’ve already tested enough to have confidence that if and when the IPv6 switchover happens, we will not cause any issues.

Net Neutrality Enforcement and Debate: Will It Ever Be Settled?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all Deep Packet Inspection technology from their NetEqualizer product over 2 years ago.

As the debate over net neutrality continues, we often forget what ISPs actually are and why they exist.
ISPs in this country are for-profit private companies made up of stockholders and investors who took on risk (without government backing) to build networks with the hope of making a profit. To make a profit, they must balance users’ expectations for performance against the costs of implementing a network.

The reason bandwidth control is used in the first place is the standard capacity problem: nobody can afford the infrastructure investment to build a network that meets peak demand at all times. Would you build a house with 10 bedrooms if you were only expecting one or two kids sometime in the future? ISPs build networks to handle an average load, and when peak loads come along, they must do some mitigation. You can argue until you are green that they should have built their networks with more foresight, but the fact is demand for bandwidth will always outstrip supply.

So, where did the net neutrality debate get its start?
Unfortunately, in many Internet providers’ first attempts to remedy the overload issue on their networks, the layer-7 techniques they used opened a Pandora’s box of controversy that may never be settled.

When the subject of net neutrality started heating up around 2007 and 2008, the complaints from consumers revolved around ISP practices of looking inside customers’ transmitted data and blocking or redirecting traffic based on content. There were all sorts of rationalizations for this practice, and I’ll be the first to admit that it was not done with intended malice. However, the methodology was abhorrent.

I likened this practice to the phone company listening in on your phone calls and deciding which calls to drop to keep their lines clear. Or, if you want to take it a step farther, the postal service deciding to toss your junk mail based on their own private criteria. Legally, I see no difference between looking inside mail and looking inside Internet traffic. It all seems to cross a line. When referring to net neutrality, the bloggers of this era were originally concerned with this sort of spying and playing God with what type of data can be transmitted.

To remedy this situation, Comcast and others adopted methods that regulated Internet usage based on patterns of usage, not content. At the time, we were happy to applaud them and claim that the problem of spying on data had been averted. I pretty much turned my attention away from the debate at that time, but I recently started looking back at it and, wow, what a difference a couple of years make.

So, where are we headed?
I am not sure what his sources are, but Rush Limbaugh claims that net neutrality is going to become a new fairness doctrine. To summarize, the FCC or some government body would start to use its authority to ensure equal access to content from search engine companies. For example, making sure that minority points of view got top billing in search results. This is a bit scary, although perhaps a bit alarmist, but it would not surprise me since, once in government control, anything is possible. Yes, I realize conservative talk radio hosts like to elicit emotional reactions, but usually there is some truth behind their claims.

Other intelligent points of view:

The CRTC (the Canadian FCC) seems to have its head on its shoulders: it has stated that ISPs must disclose their practices, but it is not attempting to regulate how they operate in some form of overreaching doctrine. Although I am not in favor of government institutions, if they must exist, then the CRTC stance seems like a sane and appropriate request with regard to regulating ISPs.

Freedom to Tinker

What Is Deep Packet Inspection and Why All the Controversy?

NetEqualizer chosen as a role model bandwidth controller for HEOA


Just ran across this posting where Educause recommended the NetEqualizer solution as a role model for bandwidth control in meeting HEOA requirements.

Pomona College and Reed College were cited as two schools currently deploying NetEqualizer equipment.

A related article from the Ars Technica website also discusses approaches schools are using to meet HEOA rules.

About Educause:

EDUCAUSE is a nonprofit association whose mission is to advance higher education by promoting the intelligent use of information technology. EDUCAUSE helps those who lead, manage, and use information resources to shape strategic decisions at every level. A comprehensive range of resources and activities is available to all interested employees at EDUCAUSE member organizations, with special opportunities open to designated member representatives.

About HEOA:

The Higher Education Opportunity Act (Public Law 110-315) (HEOA) was enacted on August 14, 2008, and reauthorizes the Higher Education Act of 1965, as amended (HEA). This page provides information on the Department’s implementation of the HEOA.

Some parts of the law will be implemented through new or revised regulations. The negotiated rulemaking process will be used for some regulations, as explained below. Other areas will be regulated either through the usual notice and comment process or, where regulations will merely reflect the changes to the HEA and not expand upon those changes, as technical changes.

Behind the Scenes on the latest Comcast Ruling on Net Neutrality


Yesterday the FCC ruled in favor of Comcast regarding their right to manipulate consumer traffic. As usual, the news coverage was a bit oversimplified and generic. Below we present a breakdown of the players involved, and our educated opinion as to their motivations.

1) The Large Service Providers for Internet Service: Comcast, Time Warner, Qwest

From the perspective of Large Service Providers, these companies all want to get a return on their investment, charging the most money the market will tolerate. They will also try to increase market share by consolidating provider choices in local markets. Since they are directly visible to the public, they will also try to keep the public’s interest at heart, for without popular support, they will get regulated into oblivion. Case in point: the original Comcast problems stemmed from angry consumers learning their p2p downloads were being redirected and/or blocked.

Any and all government regulation will be opposed at every turn, as it is generally not good for private business. In the face of a strong headwind, don’t be surprised if Large Service Providers try to reach a compromise quickly to alleviate any uncertainty. Uncertainty can be more costly than regulation.

To be fair, Large Service Providers are staffed top to bottom with honest, hard-working people, but their decision-making as an entity will ultimately be based on profit. To be the most profitable, they will want to prevent third-party Traditional Content Providers from flooding their networks with video. That was the original reason why Comcast thwarted BitTorrent traffic. All of the Large Service Providers currently are, or are plotting to be, content providers, and hence they have two motives to restrict unwanted traffic. Motive one is to keep their capacities in line with their capabilities for all generic traffic. Motive two is to thwart other content providers, thus making their own content more attractive. For example, whose movie service are you going to subscribe to? A generic cloud provider such as Netflix, whose movies run choppy, or your local provider, with better quality by design?

2) The Traditional Content Providers: Google, YouTube, Netflix, etc.

They have a vested interest in expanding their reach by providing expanded video content. Google, with nowhere to go for new revenue in the search engine and advertising business, will be attempting an end-run around Large Service Providers to take market share. The only thing standing in their way is the shortcomings of the delivery mechanism. They have even gone so far as to build out an extensive, heavily subsidized fiber test network of their own. Much of the hubbub about Net Neutrality is based on a market play to force Large Service Providers to shoulder the Traditional Content Providers’ delivery costs. An analogy from the bird world would be the brown-headed cowbird, where the mother lays her eggs in another bird’s nest and lets her chicks be raised by an unknowing other species. Without their own delivery mechanism direct to the consumer, the Traditional Content Providers must keep pounding at the FCC for rulings in their favor. Part of the strategy is to rile consumers against the Large Service Providers with the Net Neutrality cry.

3) The FCC

The FCC is a government organization trying to take its existing powers, which were granted for airwaves, and extend them to the Internet. As with any regulatory body, things start out well-intentioned (protection of consumers, etc.), but quickly they become self-absorbed with their mission. The original reason for the FCC was that the public airwaves for television and radio have limited frequencies for broadcast. You can’t make a bigger pipe than what the frequencies allow, and hence it made sense to have a regulatory body oversee this vital resource. In the early stages of commercial radio, there was a real issue of competing entities broadcasting over each other in an arms race for the most powerful signal. Along those lines, the regulatory entity (FCC) has forever expanded its mission. For example, the government deciding what words can be uttered on primetime television is an extension of this power.

Now, with the Internet, the FCC’s goal will be to regulate whatever it can, slowly creating rules for the “good of the people.” Will these rules be for the better? Most likely the net effect is no; left alone, the Internet was fine, but agencies will be agencies.

4) The Administration and current Congress

The current Administration has touted its support of Net Neutrality, but has perhaps been so overburdened with the battle over health care and other pressing matters that no regulation has been passed. In the aftermath of the FCC getting slapped down in court to limit its current powers, I would not be surprised to see a round of legislation on this issue to regulate Large Service Providers in the near future. The Administration will paint it as consumer protection against big greedy companies that need to be reined in, as we have seen with banks, insurance companies, etc. I hope that we do not end up with an Internet Czar, but some regulation is inevitable, if nothing else as a revenue stream to tap into.

5) The Public

The public will be the dupes in all of this: ignorant voting blocs lobbied by various scare tactics. The demographics of swaying this opinion will be much different from the health care lobby. People concerned for and against Internet regulation will be in income brackets with higher education and employment rates than the typical entitlement lobbies that support regulation. It is certainly not going to be the AARP or a union lobbyist leading the charge to regulate the Internet; hence legislation may be a bit delayed.

6) Al Gore

Not sure if he has a dog in this fight; we just threw him in here for fun.

7) NetEqualizer

Honestly, bandwidth control will always be needed, as long as there is more demand for bandwidth than there is bandwidth available.  We will not be lobbying for or against Net Neutrality.

8) The Courts

This is an area where I am a bit weak in understanding how a court will follow legal precedent. However, it seems to me that almost any court can rule from the bench by finding the precedent it wants and ignoring others if it so chooses. Ultimately, Congress can pass new laws to regulate just about anything with impunity. There is no constitutional protection regarding Internet access. Most likely the FCC will be the agency carrying out enforcement once the laws are in place.

NetEqualizer Bandwidth Shaping Solution: Hotels & Resorts


In working with some of the world’s leading hotels and resorts, we’ve repeatedly heard the same issues and challenges facing network administrators. Here are just a few:

Download Hotels White Paper

  • We need to do more with less bandwidth.
  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need to meet the expectations of our tech-savvy customers and prevent Internet congestion during times of peak usage.
  • We need a solution that can meet the demands of a constantly changing clientele.
  • We need to offer tiered Internet access for our hotel guests, and provide managed access for conference attendees.

In this article, we’ll talk about how the NetEqualizer has been used to solve these issues for many Hotels and Resorts around the world.

Download article (PDF) Hotels & Resorts White Paper

Read full article …

NetEqualizer Bandwidth Shaping Solution: Telecom, Satellite Systems, Cable, and Wired and Wireless ISPs


In working with Internet providers around the world, we’ve repeatedly heard the same issues and challenges facing network administrators. Here are just a few:

Download ISP White Paper

  • We need to support selling fixed bandwidth to our customers.
  • We need to be able to report on subscriber usage.
  • We need the ability to increase our subscriber ratio, without cutting back subscribers, before having to buy more bandwidth.
  • We need to meet the varying needs of all of our users.
  • We need to manage P2P traffic.
  • We need to give VoIP traffic priority.
  • We need to make exemptions for customers routing all of their traffic through VPN tunnels.
  • We need a solution that’s low cost, low maintenance, and easy to set up.
  • We need a solution that will grow with our network.
  • We need a solution that will meet CALEA requirements.

In this article, we will talk about how the NetEqualizer has been used to solve these issues for Internet providers worldwide.

Download article (PDF) ISP White Paper

Read full article …

What Is Burstable Bandwidth? Five Points to Consider



Internet Bursting

Internet providers continually use clever marketing analogies to tout their burstable high-speed Internet connections. One of my favorites is the comparison to an automobile with an overdrive that, at the touch of a button, can burn up the road. At first, the analogies seem valid, but there are usually some basic pitfalls and unresolved issues. Below are five points designed to make you ponder just what you’re getting with your burstable Internet connection; they may ultimately call some of these analogies, and burstable Internet speeds altogether, into question.

  1. The car acceleration analogy just doesn’t work.

    First, you don’t share your car’s engine with other users when you’re driving.  Whatever the engine has to offer is yours for the taking when you press down on the throttle.  As you know, you do share your Internet connection with many other users.  Second, with your Internet connection, unless there is a magic button next to your router, you don’t have the ability to increase your speed on command.  Instead, Internet bursting is a mysterious feature that only your provider can dole out when they deem appropriate.  You have no control over the timing.

  2. Since you don’t have the ability to decide when you can be granted the extra power, how does your provider decide when to turn up your burst speed?

    Most providers do not share details on how they implement bursting policies, but here is an educated guess, based on years of experience helping providers enforce various policies regarding Internet line speeds. I suspect your provider watches your bandwidth consumption and lets you pop up to your full burst speed, typically 10 megabits, for a few seconds at a time. If you continue to use the full 10 megabits for more than a few seconds, they will likely rein you back down to your normal committed rate (typically 1 megabit). Please note this is just an example from my experience and may not reflect your provider’s actual policy; a sketch of what such a policy might look like appears after this list.

  3. Above, I mentioned a few seconds for a burst, but just how long does a typical burst last?

    If you were watching a bandwidth-intensive HD video for an hour or more, for example, could you sustain adequate line speed to finish the video? A burst of a few seconds will suffice to make a Web page load in 1/8 of a second instead of perhaps the normal 3/4 of a second. While this might be impressive to a degree, an hour-long video needs sustained throughput, not a momentary spike above your baseline speed. So, if you’re watching a movie or doing any other sustained bandwidth-intensive activity, it is unlikely you will benefit from any sort of bursting technology.

  4. Why doesn’t my provider let me have the burst speed all of the time?

    The obvious answer is that if they did, it would not be a burst, so it must be limited in duration somehow. A better answer is that your provider has peaks and valleys in their available bandwidth during the day, and the higher speed of a burst cannot be delivered consistently. Therefore, it’s better to leave bursting as a nebulous marketing term rather than a clearly defined entity. One other note: if you only get bursting during your provider’s Internet “valleys,” it may not help you at all, as that time of day may be nowhere near your busy hour; so although it will not hurt you, it will not help much either.

  5. When are the likely provider peak times when my burst is compromised?

    Slower service and the inability to burst are most likely occurring during times when everybody else on the Internet is watching movies — during the early evening.  Again, if this is your busy hour, just when you could really use bursting, it is not available to you.
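
As promised under point 2, here is a sketch of what such a hidden bursting policy might look like when stated as code rather than marketing copy. This is entirely our guess at the mechanism; the names and thresholds are illustrative, not any ISP’s actual policy:

```python
def allowed_rate_mbps(recent_usage, committed=1.0, burst=10.0,
                      burst_window=5):
    """Grant full burst speed until the subscriber has been
    pinned near the burst rate for burst_window consecutive
    seconds, then rein them back to the committed rate."""
    window = recent_usage[-burst_window:]
    if len(window) == burst_window and min(window) >= burst * 0.9:
        return committed    # sustained load: clamp to committed rate
    return burst            # short spike: let it burst

print(allowed_rate_mbps([9.8, 9.9, 10.0, 9.7, 10.0]))  # 1.0
print(allowed_rate_mbps([0.2, 0.1, 9.5]))              # 10.0
```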

These five points should give you a good idea of the multiple questions and issues that need to be considered when weighing the viability and value of burstable Internet speeds.  Of course, a final decision on bursting will ultimately depend on your specific circumstances.  For further related reading on the subject, we suggest you visit our articles How Much YouTube Can the Internet Handle and Field Guide to Contention Ratios.

How Does Your ISP Actually Enforce Your Internet Speed?


By Art Reisman, CTO, www.netequalizer.com


Have you ever wondered how your ISP manages to control the speed of your connection? If so, you might find the following article enlightening. Below, we’ll discuss the various techniques used to enforce bandwidth rate limits and the side effects of those techniques.

Dropping Packets (Cisco term “traffic policing”)

One of the simplest methods for a bandwidth controller to enforce a rate cap is by dropping packets. When using the packet-dropping method, the bandwidth controlling device counts the total number of bytes that cross a link during a second. If the target rate is exceeded during any single second, the bandwidth controller drops packets for the remainder of that second. For example, if the bandwidth limit is 1 megabit, and the bandwidth controller counts 1 million bits gone by in half a second, it will then drop packets for the remainder of the second. The counter then resets for the next second. From most evidence we have observed, the rate caps enforced by many ISPs use the packet-dropping method, as it is the least expensive method and is supported on most basic routers.
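
A policer of this sort fits in a few lines. The sketch below is our illustration of the scheme just described, not Cisco’s or any ISP’s actual implementation:

```python
class Policer:
    """Per-second byte counter that drops packets once the
    cap is hit, as described above."""
    def __init__(self, limit_bps):
        self.limit_bytes = limit_bps // 8   # per-second byte budget
        self.window = None
        self.count = 0

    def admit(self, packet_len, now):
        second = int(now)
        if second != self.window:           # new second: reset counter
            self.window, self.count = second, 0
        if self.count + packet_len > self.limit_bytes:
            return False                    # drop for the rest of this second
        self.count += packet_len
        return True

p = Policer(limit_bps=1_000_000)            # a 1-megabit cap
print(p.admit(1500, 10.2), p.admit(124_000, 10.5))  # True False
```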

So, what is wrong with dropping packets to enforce a bandwidth cap?

Well, when a link hits a rate cap and packets are dropped en masse, it can wreak havoc on a network. For example, the standard reaction of a Web browser when it perceives that web traffic is getting lost is to re-transmit the lost data. For a better understanding of dropping packets, let’s use the analogy of a McDonald’s fast food restaurant.

Suppose the manager of the restaurant was told his bonus was based on making sure there was never a line at the cash register. So, whenever somebody showed up to order food while all the registers were occupied, the manager would open a trap door, conveniently ejecting the customer back out into the parking lot. The customer, being extremely hungry, will come running back in the door (unless of course they die of starvation or get hit by a car), only to be ejected again. To make matters worse, let’s suppose a busload of school kids arrives. As the kids file into the McDonald’s, the ones remaining on the bus have no idea their classmates inside are getting ejected, so they keep streaming into the McDonald’s. Hopefully, you get the idea.

Well, when bandwidth shapers deploy packet-dropping technology to enforce a rate cap, you can get the same result seen with the trapdoor analogy at the McDonald’s. Web browsers and other user-based applications will beat their heads against the wall when they don’t get responses from their counterparts on the other end of the line. When packets are being dropped en masse, the network tends to spiral out of control until all the applications essentially give up. Perhaps you have seen this behavior while staying at a hotel with an underpowered Internet link: your connectivity alternates between working and hanging up completely for a minute or so during busy hours. This can obviously be very maddening.

The solution to shaping bandwidth on a network without causing gridlock is to implement queuing.

Queuing Packets (Cisco term “traffic shaping”)

Queuing is the art of putting something in a line and making it wait before continuing on. Obviously, this is what fast food restaurants actually do. They plan for enough staff on hand to handle the average traffic throughout the day, and then queue up their customers when they arrive at a faster rate than orders can be filled. The assumption with this model is that at some point during the day the McDonald’s will catch up with the number of arriving customers and the lines will shrink away.

Another benefit of queuing is that wait times can be estimated by customers as they drive by and see the long line extending out into the parking lot; thus, they will save their energy and not attempt to go inside.

But, what happens in the world of the Internet?

With queuing methods implemented, a bandwidth controller looks at the data rate of the incoming packets, and if it is deemed too fast, it will delay the packets in a queue. The packets will eventually get to their destination, albeit somewhat later than expected. Packets in a queue can pile up very quickly, and without some help, the link would saturate. The computer memory used to store the packets in the queue would also saturate and, much like the scenario mentioned above, the packets would eventually get dropped if they continued to come in faster than they were sent out.
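
To contrast with the policer above, here is a shaper that holds packets back rather than discarding them outright (again, a simplified sketch, not production code):

```python
from collections import deque

class Shaper:
    """Queue packets and release at most one second's byte
    budget per tick; drop only when the queue overflows."""
    def __init__(self, limit_bps, max_queue=100):
        self.budget = limit_bps // 8    # bytes released per tick
        self.queue = deque()
        self.max_queue = max_queue

    def enqueue(self, packet):
        if len(self.queue) >= self.max_queue:
            return False                # queue full: forced to drop
        self.queue.append(packet)
        return True

    def tick(self):
        """Called once per second: drain up to the byte budget."""
        sent, spent = [], 0
        while self.queue and spent + len(self.queue[0]) <= self.budget:
            pkt = self.queue.popleft()
            sent.append(pkt)
            spent += len(pkt)
        return sent

s = Shaper(limit_bps=1_000_000)         # shape to ~1 megabit
s.enqueue(b"x" * 1500)
print(len(s.tick()))                    # 1: released within budget
```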

TCP to the Rescue (keeping queuing under control)

Most Internet applications use a protocol called TCP (Transmission Control Protocol) to handle their data transfers. TCP has built-in intelligence to figure out the speed of the link on which it is sending data, and it can then make adjustments. When the NetEqualizer bandwidth controller queues a packet or two, the TCP stacks on the customer endpoint computers sense the slower packets and back off the speed of the transfer. With just a little bit of queuing, the sender slows down a bit and dropped packets can be kept to a minimum.

Queuing Inside the NetEqualizer

The NetEqualizer bandwidth shaper uses a combination of queuing and packet dropping to get speed under control. Queuing is the first option, but when a sender does not eventually back off, its packets will get dropped. For the most part, this combination of queuing and dropping works well.

So far we have been describing a simple case of a single sender and a single queue, but what happens if you have a gigabit link with 10,000 users and you want to break off 100 megabits to be shared by 3,000 users? How would a bandwidth shaper accomplish this? This is another area where a well-designed bandwidth controller like the NetEqualizer separates itself from the crowd.

In order to provide smooth shaping for a large group of users sharing a link, the NetEqualizer does several things in combination.

  1. It keeps track of all streams and, based on their individual speeds, applies different queue delays to each stream.
  2. Streams that back off will get minimal queuing.
  3. Streams that do not back off may eventually have some of their packets dropped.

The net effect of the NetEqualizer queuing intelligence is that all users will experience steady response times and smooth service.
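
To make the idea concrete, here is a toy version of per-stream treatment. It is our illustration of the principle only, not NetEqualizer’s actual algorithm or tunables:

```python
def queue_delay_ms(stream_rate_bps, fair_share_bps, base_delay_ms=2):
    """Streams at or below their fair share see no added queuing,
    while streams running hot are delayed in proportion to how
    far over their share they are."""
    if stream_rate_bps <= fair_share_bps:
        return 0                          # well-behaved: pass through
    overshoot = stream_rate_bps / fair_share_bps
    return base_delay_ms * overshoot      # heavier users wait longer

print(queue_delay_ms(500_000, 1_000_000))    # 0
print(queue_delay_ms(4_000_000, 1_000_000))  # 8.0 ms of added delay
```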

Notes About UDP and Rate Limits

Some applications, such as video, do not use TCP to send data. Instead, they use a “send-and-forget” protocol called UDP, which has no built-in back-off mechanism. Without some higher intelligence, UDP packets will continue to be sent at a fixed rate, even if they are coming too quickly for the receiver. The good news is that most UDP applications also have some way of measuring whether their packets are getting to their destination; it’s just that with UDP, the mechanism of synchronization is not standardized.

Finally, there are those applications that just don’t care whether the packets reach their destination. Speed tests and viruses send UDP packets as fast as they can, regardless of whether the network can handle them. The only way to enforce a rate cap with such ill-mannered applications is to drop the packets.

Hopefully this primer has given you a good introduction to the mechanisms used to enforce Internet speeds, namely dropping packets and queuing. And maybe you will think about this the next time you visit a fast food restaurant during their busy time…

NetEqualizer provides Net Neutrality solution for bandwidth control.


By Eli Riles, NetEqualizer VP of Sales

This morning I read an article about how some start-up companies are being hurt while awaiting the FCC’s decision on Net Neutrality.

Late in the day, a customer called and exclaimed, “Wow, now with the FCC coming down hard on technologies that jeopardize net neutrality, your business must be booming, since you offer an excellent, viable alternative.” And yet, in the face of this controversy, several of our competitors continue to sell deep packet inspection devices to customers.

Public operators and businesses that continue to purchase such technology are likely uninformed about the growing firestorm of opposition to Deep Packet Inspection techniques. The ability to identify and control Internet traffic by type is a very natural solution, which customers often demand. Suppliers who sell DPI devices are just doing what their customers have asked. As with all technologies, once the train leaves the station it is hard to turn around. What is different in the case of DPI is that suppliers and ISPs had their way with an ignorant public starting in the late ’90s. Nobody really gave much thought to how DPI might become the villain in the controversy over Net Neutrality. It was just assumed that nobody would notice their Internet traffic being watched and redirected by routing devices. With behemoths such as Google having a vested interest in keeping traffic flowing without interference on the Internet, commercial deep packet inspection solutions are slowly falling out of favor in the ISP sector. The bigger question for the players betting the house on DPI is: will it fall out of favor in other business verticals?

The NetEqualizer decision to do away with DPI two years ago is looking quite brilliant now, although at the time it was clearly a risk bucking market trends. Today, even in the face of a worldwide recession, our profit and unit sales are up for the first three quarters of 2009.

As we have claimed in previous articles, there is a time and place for deep packet inspection; however, any provider using DPI to manipulate data is looking for a potential dogfight with the FCC.

NetEqualizer has been providing alternative bandwidth control options for ISPs, businesses, and schools of all sizes for 7 years without violating any of the Net Neutrality sacred cows. If you have not heard about us, maybe now is a good time to pick up the phone. We have been on record touting our solution as fair and equitable for quite some time now.

Burstable Internet Connections — Are They of Any Value?


A burstable Internet connection conjures up the image of a supercharged Internet reserve, available at your discretion in a moment of need, like pushing the gas pedal to the floor to pass an RV on a steep grade. Americans find comfort in knowing that they have that extra horsepower at their disposal. The promise of power is ingrained in our psyche and easily tapped into when marketing an Internet service. However, if you stop for a minute and think about what a bandwidth burst actually is, it might not be a feature worth paying for.

Here are some key questions to consider:

  • Is a burst one second, 10 seconds, or 10 hours at a time? This might seem like a stupid question, but it is at the heart of the issue. What good is a 1-second burst if you are watching a 20-minute movie?
  • If it is 10 seconds, then how long do I need to wait before it becomes available again?
  • Is it available all of the time, or just when my upstream provider(s) circuits are not busy?
  • And overall, is the burst really worth paying for? Suppose the electric company told you that you had a burstable electric connection, or that your water pressure fluctuated upward for a few seconds randomly throughout the day. Is that a feature worth paying for? Just because it’s offered doesn’t necessarily mean it’s needed or even that advantageous.

While the answers to each of these questions will ultimately depend on the circumstances, they all serve to point out a potential fallacy in the case for burstable Internet speeds: bursting, as it is marketed, can be a meaningless claim without a precise definition. Perhaps there are providers out there that lay out exact definitions for a burstable connection and abide by those terms. Even then, we could argue that the value of the burst is limited.

What we have seen in practice is that most burstable Internet connections are unpredictable and simply confuse and annoy customers. Unlike the turbo charger in your car, you have no control over when you can burst and when you can’t. What sounded good in the marketing literature may have little practical value without a clear contract of availability.

Therefore, to ensure that burstable Internet speeds really will work to your advantage, it’s important to ask the questions mentioned above. Otherwise, it very well may just serve as a marketing ploy or extra cost with no real payoff in application.

Update: October 1, 2009

Today a user group published a bill of rights intended to nail ISPs down on exactly what they are providing in their service contracts, including their claims of bandwidth speed.

I noticed that in the article, the bill of rights requires full disclosure of the speed of the provider’s link to the consumer’s modem. I am not sure this is enough to guarantee a fixed minimum speed to the consumer. A provider could still quite easily oversell the capacity at their switching point, the point where they hook up to a backbone of other providers. You cannot completely regulate speed across the Internet, since by design providers hand off and exchange traffic with one another, and your provider cannot control the speed of your connection once it leaves their network.
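A quick back-of-the-envelope calculation shows the loophole. The figures below are hypothetical, not any actual provider's numbers:

```python
# Hypothetical figures for one switching point: the advertised per-modem
# speed can be fully honest while the backbone uplink is still oversold.

customers = 500          # subscribers behind one switching point
sold_mbps = 10           # speed disclosed to each consumer's modem
uplink_mbps = 1000       # provider's link to the backbone

ratio = (customers * sold_mbps) / uplink_mbps
print(f"Oversubscription at the switching point: {ratio:.0f} to 1")
# If every subscriber pulled at once, each would see
# uplink_mbps / customers = 2 Mbps, not the disclosed 10 Mbps.
```

Every claim in the disclosure can be true, and yet the contention ratio at the backbone hand-off still determines what the consumer actually experiences.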

Posted by Eli Riles, VP of Sales, www.netequalizer.com.

Why Is NetEqualizer the Low-Price Leader in Bandwidth Control?


Recently we have received feedback from customers stating they almost did not consider the NetEqualizer because its price was so much less than solutions from the likes of Packeteer (Blue Coat), Allot NetEnforcer, and Exinda.

Sometimes a low price will raise a red flag in a purchase decision, especially when that price is an order of magnitude less than the competition’s.

Given this feedback, we thought it would be a good idea to go over some of the major cost-structure differences between APconnections, maker of the NetEqualizer, and some of the competition.

1) NetEqualizers are sold mostly direct, by word of mouth. We do not have a traditional indirect sales channel.

– The downside for us as a company is that this limits our reach a bit. Many IT departments do not have the resources to seek out new products on their own and are limited to what is presented to them.

– The good news for all involved is that selling direct takes quite a bit of cost out of delivering the product. Indirect sales channels need to be incentivized to sell, and oftentimes they will steer the customer toward the highest-commission product in their arsenal. Our direct channel eliminates this overhead.

– The other good thing about not using a sales channel is that when you talk to one of our direct (non-commissioned) sales reps, you can be sure they are experts on the NetEqualizer. With a sales channel, a rep often sells many different kinds of products and can get rusty on the specifics.

2) We have bundled our manufacturing with a company that also produces a popular firewall. We also have a backup source to manufacture our products at all times, thus ensuring a steady flow of product without the liability of owning a manufacturing facility.

3) We have never borrowed money to run APconnections.

– This keeps us very stable and able to withstand market fluctuations.

– There are no greedy investors calling the shots, looking for a return and demanding higher prices.

4) The NetEqualizer is simple and elegant

– Many products keep adding features to grow their market share; we have a solution that works well and does not require constant current engineering.

How to Implement Network Access Control and Authentication


There are a number of basic ways an automated network access control (NAC) system can identify unauthorized users and keep them from accessing your network. However, there are pros and cons to using these different NAC methods.  This article will discuss both the basic network access control principles and the different trade-offs each brings to the table, as well as explore some additional NAC considerations. Geared toward the Internet service provider, hotel operator, library, or other public portal operator who provides Internet service and wishes to control access, this discussion will give you some insight into what method might be best for your network.

The NAC Strategies

MAC Address

Every computer connected to a network has a unique MAC address, and many NAC systems therefore use MAC addresses to identify individual customers and grant or deny access.
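As a minimal sketch of the mechanism, assuming a simple allow-list design (this is our own illustration, not any vendor's code; the table, addresses, and helper names are invented):

```python
# Hypothetical allow-list NAC check keyed by MAC address.

PAID_MACS = {
    "00:1a:2b:3c:4d:5e",
    "00:1a:2b:3c:4d:5f",
}

def normalize(mac: str) -> str:
    # Accept "AA-BB-..." or "aa:bb:..." spellings before lookup.
    return mac.strip().lower().replace("-", ":")

def admit(mac: str) -> bool:
    # A customer's new laptop has a new MAC, so this lookup fails until
    # the table is updated -- the portability problem discussed below.
    return normalize(mac) in PAID_MACS

print(admit("00-1A-2B-3C-4D-5E"))   # True: known subscriber
print(admit("00:1a:2b:3c:4d:60"))   # False: unknown device is denied
```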

While it can be effective, there are limitations to using MAC addresses for network access control. For example, if a customer switches to a new computer, the system will not recognize them, as their MAC address will have changed. As a result, for mobile customer bases, MAC address authentication by itself is not viable.

Furthermore, on larger networks with centralized authentication, MAC addresses do not propagate beyond one network hop, hence MAC address authentication can only be done on smaller networks (no hops across routers). A workaround for this limit would be a distributed set of authentication points local to each segment. This would require multiple NAC devices, which raises complexity with regard to synchronization: your entire authentication database would need to be replicated on each NAC.

Finally, a common question when it comes to MAC addresses is whether or not they can be spoofed. In short, yes, they can, but it requires some sophistication, and it is unlikely that a user with the ability to do so would go through all the trouble just to avoid paying an access charge. That is not to say it won’t happen, but rather that the lost revenue is not worth the cost of combating the determined, isolated user.

I mention this because some vendors will sell you features to combat spoofing, and most likely they are not worth the incremental cost. If your authentication is set up by MAC address, a spoofer would also have to know the MAC address of a paying user in order to get in. Since there is no real pattern to MAC addresses, guessing another customer’s MAC address would be nearly impossible without inside knowledge.
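Some rough arithmetic backs this up. A MAC address is 48 bits, so even ignoring vendor OUI prefixes (which narrow the space somewhat), a blind guess almost never lands on a paying customer. The subscriber count below is hypothetical:

```python
# Odds that a randomly guessed 48-bit MAC matches a paying subscriber.

total_macs = 2 ** 48         # about 2.8e14 possible addresses
paying_users = 5_000         # hypothetical subscriber count

p_hit = paying_users / total_macs
print(f"Chance a random guess matches a paying user: {p_hit:.1e}")
# -> roughly 1.8e-11, effectively zero without inside knowledge
```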

IP Address

IP addresses allow a bit more flexibility than MAC addresses because they are visible across router hops and can therefore be authenticated from a central location. Again, while this strategy can be effective, IP address authentication has the same issue as MAC addressing: it does not allow a customer to switch computers, requiring the customer to use the same computer each time they log in. In theory, a customer could carry their IP address over to a new computer, but this would be far too much of an administrative headache to explain when operating a consumer-based network.

In addition, IP addresses are easy to spoof and relatively easy to guess should a user be trying to steal another user’s identity. But, should two users log on with the same IP address at the same time, the ruse can quickly be tracked down. So, while plausible, it is a risky thing to do.

User ID Combined with MAC Address or IP Address

This methodology solves the portability issue found when using MAC addresses and IP addresses by themselves. With this strategy, the user authenticates their session with a user ID and password and the NAC module records their IP or MAC address for the duration of the session.

For a mobile consumer base, this is really the only practical way to enforce network access control. However, there is a caveat with this method: the NAC controller must expire a user session when there is a lack of activity. You can’t expect users to always log out of their network connection, so the session server (NAC) must take an educated guess as to when they are done. The ramification is that, once a session expires, the user must log in again. This usually isn’t a major problem, but it can be a hassle for users.

The good news is that the inactivity timer can be extended to hours or even days, and should a customer log in on a different computer while a previous session is still current, the NAC can sense this and terminate the old session automatically.
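Here is a minimal sketch of the session bookkeeping just described, assuming a simple in-memory table keyed by user ID. This is an illustration of the general approach, not the NetEqualizer's actual internals; the eight-hour idle window and all names are hypothetical:

```python
import time

# Session table sketch: user ID plus IP for the life of the session,
# reaped after an idle window.

IDLE_LIMIT_S = 8 * 3600                  # hypothetical: 8 idle hours

sessions: dict[str, dict] = {}           # user_id -> {"ip", "last_seen"}

def login(user_id: str, ip: str) -> None:
    # A login from a new machine simply overwrites (terminates) any
    # session the user still has open elsewhere.
    sessions[user_id] = {"ip": ip, "last_seen": time.time()}

def touch(user_id: str) -> None:
    # Called whenever traffic is seen for an authenticated user.
    if user_id in sessions:
        sessions[user_id]["last_seen"] = time.time()

def reap_idle() -> None:
    # The "educated guess": no traffic for IDLE_LIMIT_S means done.
    now = time.time()
    for user_id in list(sessions):
        if now - sessions[user_id]["last_seen"] > IDLE_LIMIT_S:
            del sessions[user_id]        # user must log in again
```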

The authentication method currently used with the NetEqualizer is based on IP address and user ID/password, since it was designed for ISPs serving a transient customer base.

Other Important Considerations

NAC and Billing Systems

Many NAC solutions also integrate billing services. Overlooking the potential complexity of a billing system can lead to ballooning costs that cut into efficiency and profits for both customer and vendor. Our philosophy is that a flat rate and simple billing are best.

To name a few examples, different customers may want time-of-day billing; billing by day, hour, month, or year; automated refunds; billing by connection speed; billing by type of property (geographic location); or tax codes. It can obviously go from a simple idea to a complicated one in a hurry. While there’s nothing wrong with these requests, history has shown that once you get beyond a simple flat rate, the cost of maintaining a system that meets these varied demands can increase exponentially.

Another thing to look out for with billing is integration with a credit card processor. Back-end integration for credit card processing takes some time and energy to validate. For example, the most common credit card authentication system in the US, Authorize.net, does not work unless you also have a US bank account. You may be tempted to shop for a credit card processor based on fees alone, but if you plan on automated integration with a NAC system, make sure the processor provides automated integration tools and that your consulting firm accounts for this integration work.

Redirection Requirements

You cannot purchase and install a NAC system without some network analysis. Most NAC systems will redirect unauthorized users to a Web page that allows them to sign up for the service. Although this seems relatively straightforward, some basic network features need to be in place for this redirection to work correctly. The details go beyond the scope of this article, but you should expect to have a competent network administrator or consultant on hand to set this up correctly. To be safe, plan for eight to 40 hours of consulting time for troubleshooting and setup above and beyond the cost of the equipment.
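As a toy illustration of the redirect step alone, here is a minimal HTTP responder that bounces unauthorized clients to a signup page. In a real deployment the interception happens at the router or firewall, and the portal URL and addresses below are made up:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

SIGNUP_URL = "http://portal.example.com/signup"   # hypothetical portal
AUTHORIZED_IPS = {"10.0.0.42"}                    # hypothetical paid client

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.client_address[0] in AUTHORIZED_IPS:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Welcome back.")
        else:
            # Unknown clients are bounced to the signup page.
            self.send_response(302)
            self.send_header("Location", SIGNUP_URL)
            self.end_headers()

if __name__ == "__main__":
    # In production, the router/firewall steers web traffic from
    # unauthorized clients to a listener like this one.
    HTTPServer(("", 8080), Redirector).serve_forever()
```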

Network Access for Organizational Control

Thus far we have focused on the basic ways a public provider can restrict access to the Internet. However, in a private or institutional environment where security and access to information are paramount, the NAC mission can change substantially. For example, the Wikipedia article on network access control outlines a much broader mission than what a simple service provider would require. The article reads:

“Network Access Control aims to do exactly what the name implies—control access to a network with policies, including pre-admission endpoint security policy checks and post-admission controls over where users and devices can go on a network and what they can do.”

This paragraph was obviously written by a contributor who views NAC as a broad control technique reaching deep into a private network. Interestingly, there is an ongoing dispute on Wikipedia over whether this definition goes beyond the simpler idea of just granting access.

The rift on Wikipedia can be summarized as an argument over whether a NAC should be a simple gatekeeper for access to a network, with users having free rein to wander once inside, or whether the NAC is responsible for protecting various resources within the network once access is attained. Both camps are correct in their own context; it depends on the customer and the type of business as to what type of NAC is required.

Therefore, in closing, the overarching message that emerges from this discussion is that implementing network access control requires an evaluation not only of the network setup, but also of how the network will be used. Strategies that work perfectly in some circumstances can leave network administrators and users frustrated in others. With the right amount of foresight, however, network access control technologies can be implemented to facilitate the success of your network and the satisfaction of its users, rather than serving as an ongoing source of frustration.