Just How Fast Is Your 4G Network?


By Art Reisman, CTO, www.netequalizer.com


The subject of Internet speed and how to make it go faster is always a hot topic. That raises the question: if everybody wants their Internet to go faster, what are some of the limitations? I mean, why can’t we just achieve infinite speeds when we want them and where we want them?

Below, I’ll take on some of the fundamental gating factors of Internet speeds, primarily exploring the difference between wired and wireless connections. As we have “progressed” from a reliance on wired connections to a near-universal expectation of wireless Internet options, we’ve also put some limitations on what speeds can be reliably achieved. I’ll discuss why the wired Internet to your home will likely always be faster than the latest fourth generation (4G) wireless being touted today.

To get a basic understanding of the limitations with wireless Internet, we must first talk about frequencies. (Don’t freak out if you’re not tech savvy. We usually do a pretty good job at explaining these things using analogies that anybody can understand.) The reason why frequencies are important to this discussion is that they’re the limiting factor to speed in a wireless network.

The FCC allows cell phone companies and other wireless Internet providers to use a specific range of frequencies (channels) to transmit data. For the sake of argument, let’s just say there are 256 frequencies available to the local wireless provider in your area. So in the simplest case of the old analog world, that means a local cell tower could support 256 phone conversations at one time.

However, with the development of better digital technology in the 1980s, wireless providers have been able to juggle more than one call on each frequency. This is done by using a time sharing system where bits are transmitted over the frequency in a round-robin type fashion such that several users are sharing the channel at one time.

The wireless providers have overcome the problem of having multiple users sharing a channel by dividing it up in time slices. Essentially this means when you are talking on your cell phone or bringing up a Web page on your browser, your device pauses to let other users on the channel. Only in the best case would you have the full speed of the channel to yourself (perhaps at 3 a.m. on a deserted stretch of interstate). For example, I just looked over some of the mumbo jumbo and promises of one-gigabit speeds for 4G devices, but only in a perfect world would you be able to achieve that speed.

In the real world of wireless, we need to know two things to determine the actual data rates to the end user.

  1. The maximum amount of data that can be transmitted on a channel
  2. The number of users sharing the channel

The answer to part one is straightforward: A typical wireless provider has channel licenses for frequencies in the 800 megahertz range.

A rule of thumb for transmitting digital data over the airwaves is that you can only send bits of data at 1/2 the frequency. For example, 800 megahertz is 800 million cycles per second, and 1/2 of that is 400 million cycles per second. This translates to a theoretical maximum data rate of 400 megabits per second. Realistically, with noise and other environmental factors, 1/10 of the original frequency is more likely. This gives us a maximum carrying capacity per channel of 80 megabits per second and a ballpark estimate for our answer to part one above.

However, the actual answer to variable two, the number of users sharing a channel, is a closely guarded secret among service providers. Conservatively, let’s just say you’re sharing a channel with 20 other users on a typical cell tower in a metro area. Starting from 80 megabits, this would put your individual maximum data rate at about four megabits per second during a period of heavy usage.
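To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Every input is one of the rough assumptions above, not a measured value.

    # Rough per-user 4G throughput estimate, using the article's
    # rule-of-thumb numbers. All inputs are assumptions.
    channel_freq_hz = 800e6                # licensed channel near 800 MHz
    theoretical_bps = channel_freq_hz / 2  # rule of thumb: bits at 1/2 the frequency
    realistic_bps = channel_freq_hz / 10   # noise and environment: ~1/10

    users_sharing = 20                     # conservative guess; the real number is secret
    per_user_bps = realistic_bps / users_sharing

    print(f"theoretical channel rate: {theoretical_bps / 1e6:.0f} Mbps")  # 400
    print(f"realistic channel rate:   {realistic_bps / 1e6:.0f} Mbps")    # 80
    print(f"per-user rate, busy hour: {per_user_bps / 1e6:.0f} Mbps")     # 4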

So, getting back to the focus of the article, we’ve roughly worked out a realistic cap for your super-cool new 4G wireless device at four megabits per second. By today’s standards, this is a pretty fast connection, but remember this is a conservative, benefit-of-the-doubt best case. Wireless providers are now talking about usage quotas and charging severely for overages, which suggests they must be teetering on gridlock with their data networks already. There is limited frequency real estate and high demand for content data services, and that demand is likely to only grow as more and more users adopt mobile wireless technologies.

So where should you look for the fastest and most reliable connection? Well, there’s a good chance it’s right at home. A standard fiber connection, like the one you likely have with your home network, can go much higher than four megabits. However, as with the channel sharing found with wireless, you must also share the main line coming into your central office with other users. But assuming your cable operator runs a point-to-point fiber line from their office to your home, gigabit speeds would certainly be possible, and thus wired connections to your home will always be faster than the frequency-limited devices of wireless.

Related Article: Commentary on Verizon quotas

Interesting side note: in this article by Deloitte, they do not mention limitations of frequency spectrum as a limiting factor to growth.

10 Things You Should Know about IPv6


I just read the WordPress article about World IPv6 Day, and many of the comments in response expressed that they only had a very basic understanding of what an IPv6 Internet address actually is. To better explain this issue, we have provided a 10-point FAQ that should help clarify in simple terms and analogies the ramifications of transitioning to IPv6.

To start, here’s an overview of some of the basics:

Why are we going to IPv6?

Every device connected to the Internet requires an IP address. The current system, put in place back in 1977, is called IPv4 and was designed for 4 billion addresses. At the time, the Internet was an experiment and there was no central planning for anything like the commercial Internet we are experiencing today. The official reason we need IPv6 is that we have run out of IPv4 addresses (more on this later).
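The address-space numbers are easy to check with Python’s standard ipaddress module; the 32-bit and 128-bit address sizes are part of the IPv4 and IPv6 standards, and the example addresses below come from the reserved documentation ranges.

    # IPv4 offers 2^32 addresses; IPv6 offers 2^128.
    import ipaddress

    print(2 ** 32)     # 4294967296 -- the "4 billion" IPv4 addresses
    print(2 ** 128)    # ~3.4e38 IPv6 addresses

    # One of each, side by side:
    print(ipaddress.ip_address("203.0.113.7"))   # 32-bit IPv4
    print(ipaddress.ip_address("2001:db8::7"))   # 128-bit IPv6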

Where does my IP address come from?

A consumer gets their IP address from their ISP (such as Comcast). When your provider installed your Internet service, they most likely put a little box in your house called a router. When powered up, this router sends a request to your provider asking for an IP address. Your provider has large blocks of IP addresses that were most likely allocated to them by IANA.

If there are 4 billion IPv4 addresses, isn’t that enough for the world right now?

It should be, considering the world population is about 6 billion. We can assume for now that private access to the Internet is a luxury of the economic middle class and above. Generally you need one Internet address per household and only one per business, so it would seem that perhaps 2 billion addresses would be plenty to meet the current need.

So, if this is the case, why can’t we live with 4 billion IP addresses for now?

First of all, industrialized societies are putting (or planning to put) Internet addresses in all kinds of devices (mobile phones, refrigerators, etc.). So allocating one IP address per household or business is no longer valid. The demand has surpassed this considerably as many individuals require multiple IP addresses.

Second, IP addresses were originally distributed by IANA like cheap wine. Blocks of IP addresses were handed out in chunks to organizations in much larger quantities than needed. In fairness, at the time it was believed that every computer in a company would need its own IP address. However, since the advent of NAT/PAT back in the 1980s, most companies and many ISPs can easily stretch a single IP to 255 users (sharing it). That brings the actual number of users that IPv4 could potentially support to well over a trillion!
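A toy sketch of the NAT/PAT bookkeeping shows how one public address gets shared. This illustrates the idea only, with made-up addresses; it is not a real NAT implementation.

    # Conceptual NAT/PAT: many private hosts ride on one public IP by
    # being assigned unique ports on it.
    public_ip = "198.51.100.1"
    next_port = 40000
    translations = {}    # (private_ip, private_port) -> public port

    def translate_outbound(private_ip, private_port):
        """Map an internal connection onto a unique port of the shared public IP."""
        global next_port
        key = (private_ip, private_port)
        if key not in translations:
            translations[key] = next_port
            next_port += 1
        return (public_ip, translations[key])

    # Two different machines behind the same public address:
    print(translate_outbound("192.168.1.10", 51000))   # ('198.51.100.1', 40000)
    print(translate_outbound("192.168.1.11", 51000))   # ('198.51.100.1', 40001)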

Yet, while this is true, the surplus addresses originally distributed to individual organizations haven’t been reallocated for use elsewhere. Most of the media scare surrounding IPv6 is based on the fact that IANA has given out all the centrally controlled IP addresses, and the IP addresses already given out are not easily reclaimed. So, despite there being plenty of supply overall, it’s not distributed as efficiently as it could be.

Can’t we just reclaim and reuse the surplus of IPv4 addresses?

Since we only very recently ran out, there is no big motivation in place for the owners to give or sell their unused IPs back. There is no established mechanism or commodity market for them yet.

Also, once allocated by IANA, IP addresses are not necessarily accounted for by anyone. Yes, there is an official owner, but they are not under any obligation to make efficient use of their allocation. Think of it like a farmer with a large set of historical water rights. Suppose the farmer retires and retains his water rights because there is nobody to whom he can sell them back. The difference here is that water rights are very valuable. Perhaps you see where I am going with this for IPv4? Demand and need are not necessarily the same thing.

How does an IPv4-enabled user talk to an IPv6 user?

In short, they don’t. At least not directly. For now it’s done with smoke and mirrors. The dirty secret with this transition strategy is that the customer must actually have both IPv6 and IPv4 addresses at the same time. They cannot completely switch to an IPv6 address without retaining their old IPv4 address. So it is in reality a duplicate isolated Internet where you are in one or the other.

Communication is possible, though, using a dual stack. The dual-stack method is what allows an IPv6 customer to talk to both IPv4 users and IPv6 users. With a dual stack, the Internet provider will match up users over IPv6 if both ends are IPv6 enabled. However, IPv4 users CANNOT talk to IPv6 users, so the customer must maintain an IPv4 address; otherwise they would cut themselves off from 99.99+ percent of Internet users. The dual-stack method simply maintains two separate Internet interfaces, and without keeping the IPv4 address alongside, a customer would isolate themselves from huge swaths of the world until everybody had IPv6. To date, in limited tests, less than 0.0026 percent of Internet traffic has been IPv6, and even that was during a short test experiment; the rest is IPv4.
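For the technically inclined, here is a minimal sketch of what "two separate Internet interfaces" looks like at the socket level, assuming a Linux-style host; the port number is arbitrary.

    # Dual stack in miniature: one listener per protocol family.
    import socket

    v4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    v4.bind(("0.0.0.0", 8080))    # reachable only by IPv4 peers
    v4.listen(5)

    v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # V6ONLY=1 keeps this socket purely IPv6; the socket above covers IPv4.
    v6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
    v6.bind(("::", 8080))         # reachable only by IPv6 peers
    v6.listen(5)

    # The two sockets never interoperate: an IPv4-only client cannot reach
    # the v6 socket, and vice versa -- two parallel Internets on one box.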

Why is it so hard to transition to IPv6? Why can’t we just switch tomorrow?

To recap previous points:

1) IPv4 users, all 4 billion of them, currently cannot talk to new IPv6 users.

2) IPv6 users cannot talk to IPv4 users unless they keep their old IPv4 address and a dual stack.

3) IPv4 still works quite well, and there are IPv4 addresses available. However, although the reclamation of IPv4 addresses currently lacks organization, it may become more economically feasible as problems with the transition to IPv6 crop up. Only time will tell.

What would happen if we did not switch? Could we live with IPv4?

Yes, the Internet would continue to operate. However, as the pressure for new and easy to distribute IP addresses for mobile devices heats up, I think we would see IP addresses being sold like real estate.

Note:  A bigger economic gating factor to the adoption of the expanding Internet is the limitation of wireless frequency space. You can’t create any more frequencies for wireless in areas that are already saturated. IP addresses are just now coming under some pressure, and as with any fixed commodity, we will see their value rise as the holders of large blocks of IP addresses sell them off and redistribute the existing 4 billion. I suspect the set we have can last another 100 years under this type of system.

Is it possible that a segment of the Internet will split off and exclusively use IPv6?

Yes, this is a possible scenario, and there is precedent for it. Vendors, given a chance, can eliminate competition simply by having a critical mass of users willing to adopt their services. Here is the scenario: (Keep in mind that some of the following contains opinions and conjecture on IPv6, the future, and the motivation of players involved in pushing IPv6.)

With a complete worldwide conversion to IPv6 not likely in the near future, a small number of larger ISPs and content providers could turn on IPv6 and start serving IPv6-enabled customers with unique and original content not accessible to customers limited to IPv4. For example, Facebook starts a new service only available on their IPv6 network supported by AT&T. This would be similar to what was initially done with the iPad and iPhone.

It used to be that all applications on the Internet ran from a standard Web browser and were device independent. However, there is a growing subset of applications that only run on Apple devices. Just a few years ago it was a foregone conclusion that vendors would make Web applications capable of running on any browser and any hardware device. I am not so sure this is the case anymore.

When will we lose our dependency on IPv4?

Good question. For now, most of the push for IPv6 seems to be coming from vendors using the standard fear tactic. However, as is always the case, with the development of new products and technologies, all of this could change very quickly.

YouTube Caching Results: detailed analysis from live systems


Since the release of YouTube caching support on our NetEqualizer bandwidth controller, we have been able to review several live systems in the field. Below we will go over the basic hit rate of YouTube videos and explain in detail how this affects the user experience. The analysis below is based on an actual snapshot from a mid-sized state university, using a 64-gigabyte cache, with approximately 2,000 students in residence.

The Squid proxy server provides a wide range of statistics. You can easily spend hours examining them and become exhausted with MSOS, an acronym for “meaningless stat overload syndrome.” To save you some time, we are going to look at just one stat from one report. From the Squid Statistics tab on the NetEqualizer, we selected the Cache Client List option. This report shows individual cache stats for all clients on your network. At the very bottom is a summary report totaling all Squid stats and hits for all clients.

TOTALS

  • ICP : 0 Queries, 0 Hits (0%)
  • HTTP: 21990877 Requests, 3812 Hits (0%)

At first glance, it appears as if the ratio of actual cache hits (3,812) to HTTP requests (21,990,877) is extremely low. As with many statistics, the obvious conclusion can be misleading. First off, the NetEqualizer cache is deliberately tuned NOT to cache HTTP requests smaller than 2 megabytes. This is done for a couple of reasons:

1) Generally, there is no advantage to caching small Web pages, as they normally load up quickly on systems with NetEqualizer fairness in place. They already have priority.

2) With a few exceptions for popular web sites, small web hits are widely varied and fill up the cache, taking away space that we would like to use for our target content: YouTube videos.

Breaking down the amount of data in a typical web site versus a YouTube hit.

It is true that web sites today can often exceed a megabyte. However, rarely does a 2-megabyte web site load up as a single hit. It is composed of many sub-links, each of which generates a web hit in the summary statistics. A simple HTTP page typically triggers about 10 HTTP requests for perhaps 100K bytes of data total. A more complex page may generate 500K. For example, when you go to the CNN home page there are quite a few small links, and each link increments the HTTP counter. On the other hand, a YouTube hit generates one hit for about 20 megabytes of data. When we start to look at actual data cached instead of total Web hits, the ratio of cached to not cached is quite different.

Our cache setup is also designed to only cache Web objects from 2 megabytes to 40 megabytes, with an estimated average of 20 megabytes. When we look at actual data cached (instead of hits), this gives us about 400 gigabytes of regular HTTP data, of which about 76 gigabytes came from the cache. Conservatively, about 10 percent of all HTTP data came from cache by this rough estimate. This number is much more significant than the raw HTTP statistics reveal.
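Here is the same estimate as a few lines of Python; the 20 MB average object size and the 400 GB total are the rough figures above, not measured values.

    # Hit rate by request count vs. by data volume.
    http_requests = 21990877
    cache_hits = 3812

    avg_cached_object_mb = 20                 # cache holds 2-40 MB objects
    cached_gb = cache_hits * avg_cached_object_mb / 1000.0   # ~76 GB
    total_http_gb = 400                       # rough estimate, all HTTP data

    print(f"hit rate by requests: {cache_hits / http_requests:.4%}")  # ~0.02%
    # ~19 percent by volume; the article rounds this down conservatively.
    print(f"hit rate by volume:   {cached_gb / total_http_gb:.0%}")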

Even more telling is the effect these hits have on the user experience.

YouTube streaming data, although not the majority of data on this customer’s system, is very time-sensitive while at the same time being very bandwidth intensive. The subtle boost made possible by caching 10 percent of the data on this system has a discernible effect on the user experience. Think about it: if 10 percent of your experience on the Web is video, and you were resigned to it timing out and bogging down, you will notice the difference when those YouTube videos play through to completion, even if only half of them come from cache.

For a more detailed technical overview of NetEqualizer YouTube caching (NCO) click here.

Setting Up a Squid Proxy Caching Co-Resident with a Bandwidth Controller


Editor’s Note: It was a long road to get here (building the NetEqualizer Caching Option (NCO), a new feature offered on the NE3000 & NE4000), and for those following in our footsteps, or just curious about the intricacies of YouTube caching, we have laid open the details.

This evening, I’m burning the midnight oil. I’m monitoring Internet link statistics at a state university with several thousand students hammering away on their residential network. Our bandwidth controller, along with our new NetEqualizer Caching Option (NCO), which integrates Squid for caching, has been running continuously for several days and all is stable. From the stats I can see, about 1,000 YouTube videos have been played out of the local cache over the past several hours. Without the caching feature installed, most of the YouTube videos would have played anyway, but there would be interruptions as the Internet link coughed and choked with congestion. Now, with NCO running smoothly, the most popular videos will run without interruptions.

Getting the NetEqualizer Caching Option to this stable state was a long and winding road. Here’s how we got there.

First, some background information on the initial problem.

To use a Squid proxy server, your network administrator must put hooks in your router so that all Web requests go to the Squid proxy server before heading out to the Internet. Sometimes the Squid proxy server will have a local copy of the requested page, but most of the time it won’t. When a local copy is not present, it sends your request on to the Internet to get the page (for example, the Yahoo! home page) on your behalf. The Squid server will then save a local copy of the page in its cache (storage area) while simultaneously sending the results back to you, the original requesting user. If you make a subsequent request for the same page, Squid will quickly check to see if the content has been updated since it was stored away the first time, and if it can, it will send you the local copy. If it detects that the local copy is no longer valid (the content has changed), then it will go back out to the Internet and get a new copy.
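In pseudocode terms, the decision loop just described looks roughly like the sketch below. The helper functions and the always-fresh shortcut are simplifications for illustration, not Squid’s actual internals.

    # Toy proxy-cache decision loop.
    import urllib.request

    cache = {}    # url -> (validator, body)

    def fetch_from_origin(url):
        with urllib.request.urlopen(url) as resp:
            return resp.headers.get("ETag", ""), resp.read()

    def still_fresh(url, validator):
        # A real cache issues a conditional GET (If-None-Match /
        # If-Modified-Since); we pretend the copy is always fresh for brevity.
        return True

    def handle_request(url):
        if url in cache:
            validator, body = cache[url]
            if still_fresh(url, validator):
                return body                      # served locally: fast
        # Miss or stale: fetch on the user's behalf, store a copy, pass it on.
        validator, body = fetch_from_origin(url)
        cache[url] = (validator, body)
        return body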

Now, if you add a bandwidth controller to the mix, things get interesting quickly. In the case of the NetEqualizer, it decides when to invoke fairness based on the congestion level of the Internet trunk. However, with the bandwidth controller unit (BCU) on the private side of the Squid server, the actual Internet traffic cannot be distinguished from local cache traffic. The setup looks like this:

Internet->router->Squid->bandwidth controller->users

The BCU in this example won’t know what is coming from cache and what is coming from the Internet. Why? Because the data coming from the Squid cache comes over the same path as the new Internet data. The BCU will erroneously think all the traffic is coming from the Internet and will shape cached traffic as well as Internet traffic, thus defeating the higher speeds provided by the cache.

In this situation, the obvious solution would be to switch the position of the BCU to a setup like this:

Internet->router->bandwidth controller->Squid->users

This configuration would be fine except that now all the port 80 HTTP traffic (cached or not) will appear like it is coming from the Squid proxy server and your BCU will not be able to do things like put rate limits on individual users.

Fortunately, with our NetEqualizer 5.0 release, we’ve created an integration with NetEqualizer and co-resident Squid (our NetEqualizer Caching Option) such that everything works correctly. (The NetEqualizer still sees and acts on all traffic as if it were between the user and the Internet. This required some creative routing and actual bug fixes to the bridging and routing in the Linux kernel. We also had to develop a communication module between the NetEqualizer and the Squid server so the NetEqualizer gets advance notice when data is originating in cache and not the Internet.)

Which do you need, Bandwidth Control or Caching?

At this point, you may be wondering: if Squid caching is so great, why not just dump the BCU and be done with the complexity of trying to run both? Well, while the Squid server alone will do a fine job of accelerating the access times of large files, such as video, when they can be fetched from cache, a common misconception is that a caching server provides big relief for your Internet pipe. This has not been the case in our real-world installations.

The fallacy of caching as a panacea for all things congested is that it assumes demand and overall usage are static, which is unrealistic. The cache is of finite size, and users will generally start watching more YouTube videos when they see improvements in speed and quality (prior to Squid caching, they might have given up because of slowness), including videos that are not in cache. So, the Squid server will have to fetch new content all the time, using additional bandwidth and quickly negating any improvements. Therefore, if you had a congested Internet pipe before caching, you will likely still have one afterward, leading to slow access for e-mail, Web chat, and other non-cacheable content. The solution is to run a bandwidth controller in conjunction with your caching server. This is what NetEqualizer 5.0 now offers.

In no particular order, here is a list of other useful information — some generic to YouTube caching and some just basic notes from our engineering effort. This documents the various stumbling blocks we had to overcome.

1. There was the issue of just getting a standard Squid server to cache YouTube files.

It seemed that the URL tags on these files change with each access, like a counter, and a normal Squid server is fooled into believing the files have changed. By default, when a file changes, a caching server goes out and gets the new copy. In the case of YouTube files, the content is almost always static. However, the caching server thinks they are different when it sees the changing file names. Without modifications, the default Squid caching server will re-retrieve the YouTube file from the source and not the cache because the file names change. (Read more on caching YouTube with Squid…).
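As a rough illustration, the fix in Squid 2.7 is a storeurl_rewrite_program helper: Squid feeds it one request per line on stdin, and the helper prints the canonical URL under which the object should be stored, so every variant of a video maps onto one cache entry. The URL pattern, regex, and the internal host name below are assumptions for the sketch, since YouTube’s URL format has changed many times.

    #!/usr/bin/env python3
    # Sketch of a Squid 2.7 store-URL rewriter for YouTube-style URLs.
    import re
    import sys

    VIDEO_ID = re.compile(r"[?&]id=([^&]+)")

    for line in sys.stdin:
        fields = line.split()
        url = fields[0] if fields else ""
        m = VIDEO_ID.search(url)
        if m and "videoplayback" in url:
            # Collapse all variants of this video onto one canonical key.
            print("http://video-cache.internal/id=" + m.group(1))
        else:
            print(url)    # everything else is stored under its own URL
        sys.stdout.flush()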

2. We had to move to a newer Linux kernel to get a recent version of Squid (2.7), which supports the hooks for YouTube caching.

A side effect was that the new kernel destabilized some of the timing mechanisms we use to implement bandwidth control. These subtle bugs were not easily reproduced with our standard load generation tools, so we had to create a new simulation lab capable of simulating thousands of users accessing the Internet and YouTube at the same time. Once we built this lab, we were able to re-create the timing issues in the kernel and have them patched.

3. It was necessary to set up a firewall re-direct (also on the NetEqualizer) for port 80 traffic back to the Squid server.

This configuration, and the implementation of an extra bridge, were required to get everything working. The details of the routing within the NetEqualizer were customized so that we would be able to see the correct IP addresses of Internet sources and users when shaping. (As mentioned above, if you do not take care of this, all IPs (traffic) will appear as if they are coming from the proxy server.)

4. The firewall has a table called ConnTrack (not to be confused with NetEqualizer connection tracking, though similar).

The connection tracking table on the firewall tends to fill up and crash the firewall, denying new requests for redirection if you are not careful. If you just go out and make the connection table randomly enormous, that can also cause your system to lock up. So, you must measure and size this table based on experimentation. This was another reason for us to build our simulation lab.
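A few lines of monitoring can warn you before the table fills. The /proc paths below are the usual Linux locations for the conntrack counters, though they vary by kernel version.

    # Warn when the kernel's connection-tracking table is close to full.
    def read_int(path):
        with open(path) as f:
            return int(f.read())

    count = read_int("/proc/sys/net/netfilter/nf_conntrack_count")
    limit = read_int("/proc/sys/net/netfilter/nf_conntrack_max")

    usage = count / limit
    print(f"conntrack entries: {count}/{limit} ({usage:.0%} full)")
    if usage > 0.8:
        print("WARNING: table nearly full; new redirects will start failing")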

5. There was also the issue of the Squid server using all available Linux file descriptors.

Linux comes with a default limit for security reasons, and when the Squid server hits this limit (it does all kinds of file reading and writing, keeping many descriptors open), it locks up.
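On a Unix system you can check that ceiling with Python’s standard resource module:

    # Inspect the per-process file-descriptor limit Squid runs under.
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"descriptor limit: soft={soft}, hard={hard}")
    # A busy Squid holds thousands of descriptors open at once; with a default
    # soft limit such as 1024 it will eventually hit the ceiling and lock up.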

Tuning changes that we made to support Caching with Squid

a. To limit the file size of a cached object to between 2 megabytes (2MB) and 40 megabytes (40MB):

  • minimum_object_size 2000000 bytes
  • maximum_object_size 40000000 bytes

If you allow smaller cached objects, they will rapidly fill up your cache, and there is little benefit to caching small pages.

b. We turned off the Squid “keep reading” flag:

  • quick_abort_min 0 KB
  • quick_abort_max 0 KB

When set, this flag tells Squid to continue reading a file even if the user leaves the page. For example, if a user watching a video aborts in their browser, the Squid cache continues to read the file. I suppose this could now be turned back on, but during testing it was quite obnoxious to see data transfers taking place to the Squid cache when you thought nothing was going on.

c. We also explicitly told Squid which DNS servers to use in its configuration file. There was some evidence that without this the Squid server may bog down, but we never confirmed it. However, no harm is done by setting these parameters.

  • dns_nameservers   x.x.x.x

d. You have to be very careful to set the cache size so that it does not exceed your actual capacity. Squid is not smart enough to check your real capacity, so it will fill up your file system if you let it, which in turn causes a crash. When testing with small RAM disks of less than four gigabytes of cache, we found that the Squid logs will also fill up your disk space and cause a lockup. The logs are rotated once a day on a busy system. With a large number of pages being accessed, the log can easily approach one (1) gigabyte of data, and then, to add insult to injury, the log backup program makes a backup. On a normal-sized caching system there should be ample space for logs.
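A simple pre-flight check along these lines guards against the overflow; the cache path and safety margin below are assumptions for illustration.

    # Make sure the planned cache (plus log headroom) fits the file system.
    import shutil

    cache_path = "/var/spool/squid"      # hypothetical cache_dir location
    planned_cache_gb = 64
    log_margin_gb = 5                    # room for logs plus their backups

    total, used, free = shutil.disk_usage(cache_path)
    if planned_cache_gb + log_margin_gb > free / 1024**3:
        print("cache too large for this file system: shrink cache_dir")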

e. Squid has a short-term buffer not related to caching. It is just a buffer where Squid stores data from the Internet before sending it to the client. Remember, all port 80 (HTTP) requests go through Squid, cached or not, and if you attempt to control the speed of a transfer between Squid and the user, it does not mean that the Squid server slows the rate of the transfer coming from the Internet right away. With the BCU in line, we want the sender on the Internet to back off right away if we decide to throttle the transfer, but with the Squid buffer between the NetEqualizer and the sending host on the Internet, the sender would not respond to our deliberate throttling right away when the buffer was too large (Link to Squid caching parameter).

f. How do you determine the effectiveness of your YouTube caching?

I use the Squid client cache statistics page. Down at the bottom there is an entry that lists hits versus requests.

TOTALS

  • ICP : 0 Queries, 0 Hits (0%)
  • HTTP: 21990877 Requests, 3812 Hits (0%)

At first glance, it may appear that the hit rate is not all that effective, but let’s look at these stats another way. A simple HTTP page generates about 10 HTTP requests for perhaps 80K bytes of data total. A more complex page may generate 500K. For example, when you go to the CNN home page there are quite a few small links, and each link increments the HTTP counter. On the other hand, a YouTube hit generates one hit for about 20 megabytes of data. So, if we do a little math based on bytes cached, the summary of HTTP hits and requests above does not tell the whole story. Since our cache only caches Web objects from 2 megabytes to 40 megabytes, with an estimated average of 20 megabytes, this gives us about 400 gigabytes of regular HTTP data and 76 gigabytes of data that came from the cache. About 20 percent of all HTTP data came from cache by this rough estimate, which is quite significant.

What Does Net Privacy Have to Do with Bandwidth Shaping?


I definitely understand the need for privacy. Obviously, if I was doing something nefarious, I wouldn’t want it known, but that’s not my reason. Day in and day out, measures are taken to maintain my privacy in more ways than I probably even realize. You’re likely the same way.

For example, to avoid unwanted telephone and mail solicitations, you don’t advertise your phone numbers or give out your address. When you buy something with your credit card, you usually don’t think twice about your card number being blocked out on the receipt. If you go to the pharmacist, you take it for granted that the next person in line has to be a certain distance behind so they can’t hear what prescription you’re picking up. The list goes on and on. For me personally, I’m sure there are dozens, if not hundreds, of good examples why I appreciate privacy in my life. This is true in my daily routines as well as in my experiences online.

The topic of Internet privacy has been raging for years. However, the Internet still remains a hotbed for criminal activity and misuse of personal information. Email addresses are valued commodities sold to spammers. Search companies have dedicated countless bytes of storage to every search term and IP address submitted. Websites place tracking cookies on your system so they can learn more about your Web travels, habits, likes, dislikes, etc. Forensically, you can tell a lot about a person from their online activities. To be honest, it’s a little creepy.

Maybe you think this is much ado about nothing. Why should you care? However, you may recall that less than four years ago, AOL accidentally released around 20 million search keywords from over 650,000 users. Now, those 650,000 users and their searches will exist forever in cyberspace. Could it happen again? Of course. Why wouldn’t it, when all it takes is a loaded laptop walking out the door?

Internet privacy is an important topic, and as a result, technology is becoming more and more available to help people protect information they want to keep confidential. And that’s a good thing. But what does this have to do with bandwidth management? In short, a lot (no pun intended)!

Many bandwidth management products (from companies like Blue Coat, Allot, and Exinda, for example) intentionally work at the application level. These products are commonly referred to as Layer 7 or Deep Packet Inspection tools. Their mission is to allocate bandwidth specifically by what you’re doing on the Internet. They want to determine how much bandwidth you’re allowed for YouTube, Netflix, Internet games, Facebook, eBay, Amazon, etc. They need to know what you’re doing so they can do their job.

In terms of this article, whether you’re philosophically adamant about net privacy (like one of the inventors of the Internet) or couldn’t care less is really not important. The question is: what happens to an application-managed approach when people take additional steps to protect their own privacy?

For legitimate reasons, more and more people will be hiding their IPs, encrypting, tunneling, or otherwise disguising their activities and taking privacy into their own hands. As privacy technology becomes more affordable and simple, it will become more prevalent. This is a mega-trend, and it will create problems for those management tools that use this kind of information to enforce policies.

However, alternatives to these application-level products do exist, such as “fairness-based” bandwidth management. Fairness-based bandwidth management, like the NetEqualizer, is a 100% neutral solution, and it ultimately provides a more privacy-friendly approach for Internet users and a more effective solution for administrators when personal privacy protection technology is in place. Fairness is the idea of managing bandwidth by how much you use, not by what you’re doing. When you manage bandwidth by fairness instead of activity, not only are you supporting a neutral, private Internet, but you’re also able to address the critical tasks of bandwidth allocation, control, and quality of service.
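To make the contrast concrete, here is a conceptual sketch of fairness-based shaping. It illustrates the idea only; it is not NetEqualizer’s actual algorithm, and the thresholds and addresses are made up.

    # Fairness: look only at how much each host uses, never at packet contents.
    def hosts_to_throttle(flows, link_capacity_bps, utilization):
        """flows maps host IP -> current bits/sec. Returns hosts to slow down."""
        if utilization < 0.85:           # no congestion: leave everyone alone
            return []
        fair_share = link_capacity_bps / max(len(flows), 1)
        # Only the heaviest users are slowed, and only while congested.
        return [ip for ip, bps in flows.items() if bps > fair_share]

    flows = {"10.0.0.5": 40_000_000, "10.0.0.9": 300_000, "10.0.0.12": 900_000}
    print(hosts_to_throttle(flows, 45_000_000, utilization=0.93))  # ['10.0.0.5']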

$10,000 Prize for Predicting the World Switchover Date from IPv4


Although somewhat overshadowed by the major news stories developing around the world in recent weeks, those of us in the tech industry have seen no shortage of attention paid to the impending changes surrounding IPv4. Just today, I read a few articles about how the world has run out of IPv4 addresses. I also recently received a survey about our specific plans for IPv6.

Even with all of this media attention, however, there are many questions that still remain (one of which we’ve decided to use for a new contest). While we can’t answer all of them, we’d at least like to chime in about a few.

Will a switch to IPv6 really reduce the need for IPv4?

Despite its availability, no one will choose to completely convert to IPv6 until the rest of the world knows how to send and receive it. To do so would be communication suicide. Only when there is a near full conversion to IPv6 could you reliably use it to exclusively communicate. This creates a paradox of sorts: In order to remain accessible to all, you must retain your old IPv4 address.

This is easier said than done for some.

While there are certainly products and services to forward your mail when you establish an IPv6 address, what about a new company established from scratch with no pre-existing Web presence? When the owners call their ISP to obtain an address for their new website, instead of the simple exchange that may have taken place in the past, the conversation will go a little like this:

ISP: “We ran out of IPv4 addresses last week, but don’t worry, we are going to hook you up with a brand-spanking-new IPv6 address and you should be good to go.”

Business Owner: “So, how do the people that don’t speak IPv6 contact me?”

ISP: “Don’t worry. We’ll handle the conversions for you, like the post office forwards your mail when you move.”

Business Owner: “Yes, but I did not have an existing address. I am a new company.”

Therefore, new companies must not only establish an IPv6 address, but they must also somehow scrounge up an old IPv4 address to prevent being cut off from the percentage of the world that has not switched over.

The point is that even with IPv6, there will be no immediate relief on the IPv4 address space (Fortunately, viable alternatives do exist).

So, when will IPv4 be obsolete?

We have no idea exactly when, but based on the discussion above, we don’t think it will happen any time soon.

What does it mean to be completely switched over to IPv6?

This question will only be answered over time, and even then, it will be open to various interpretations. However, to better track the implementation of IPv6, and to facilitate our understanding of it, we’ve decided to establish a contest.

 

The Contest

Note: The following is a contest overview. Official contest rules and registration details will be revealed in our April newsletter (click here to register for the upcoming newsletter).

Contest Rules and Requirements

We, APconnections, makers of the NetEqualizer, will award one $10,000 USD prize as per the following criteria:

  • First, you must register for the contest and provide all required information. The registration link will be included in the April NetEqualizerNews newsletter and posted on the NetEqualizer News Blog after our newsletter goes out next month.
  • Winners will be awarded based on predicting the date of the actual adoption of IPv6 worldwide (see below).
  • If no entry matches the actual date, then the prize will be awarded to the closest prediction after the date of switchover.
  • One entry per person. Duplicate registrations will disqualify an entrant.
  • Entrants must be 18 years of age or older on the date of entry.
  • If more than one contestant chooses the winning date, the $10,000 USD prize will be divided equally among winners.

APconnections will monitor and announce when the world has switched over to IPv6 based on the following criteria:

  • The winning date shall be determined by the first time/date we can actively verify that any set of 50 companies with revenue of over $5 million USD per year has changed its public-facing Internet addresses to a full 128-bit address.
  • None of the 50 qualifying companies can be using any form of an older IPv4 address for any public communications with the Internet (i.e., e-mail servers, publicly accessible Web pages administered or licensed to the company).
  • None of the 50 qualifying companies shall be using any special conversion equipment to translate between IPv4 and IPv6 addresses.
  • Internal IPv6 intranet conversions do not qualify.
  • All public addresses at qualifying companies must use an address with more than 32 bits (greater than 255.255.255.255).
  • To be valid for the contest award, the IPv6 worldwide adoption date must be validated and published by the APconnections engineering staff and not by any other third party. Please feel free to help us by sending the names of any companies using IPv6 for verification.

Again, the official contest rules, registration information, and deadlines will be released in our upcoming April newsletter. So, be sure to sign up.

Notes on the Complexity of Internet Billing Systems


When using a product or service in business, it’s almost instinctive to think of ways to make it better. This is especially true when it’s a customer-centered application. For some, this thought process is just a habit. However, for others, it leads to innovation and new product development.

I recently experienced this type of stream of consciousness when working with network access control products and billing systems. Rather than just disregarding my conclusions, I decided to take a few notes on what could be changed for the better. These are just a few of the thoughts that came to mind.

The ideal product would:

  1. Cost next to nothing
  2. Auto-sense unique customer requirements
  3. Suggest differentiators such as custom Web screens where customers could view their bill
  4. Roll out the physical deployment bug free in any network topology

Up to this point, the closest products I’ve seen to fulfilling these tasks are from the turn-key vendors that supply systems en masse to hot-spot operators. The other alternative is to rely on custom-built systems. However, there are advantages and drawbacks to both options.

Turn-key Solutions

Let’s start with systems from the turn-key vendors. In short, these aren’t for everyone and only tend to be viable under certain circumstances, which include:

  1. A large greenfield ISP installation — In this situation, the cost of development of the application should be small relative to the size of the customer base. Also, the business model needs some flexibility to work with the features of the billing and access design.
  2. If you have plenty of time to troubleshoot your network — This translates into you having plenty of money allocated to troubleshooting and also realizing there will be several integrations and iterations in order to work out the kinks. This means you must have a realistic expectation for ongoing support (more on this later). Projects go sour when vendor and customer assume the first iteration is all that’s needed. This is never true when doing even the most innocuous custom development.
  3. If you are willing to take the vendors’ suggestions on equipment and the business process — Generally, the vendor you’re using provides some basic options for your billing and authentication. This may require you to adjust your business process to meet some existing models.

The upside to these turn-key solutions is that if you’re able to operate within these constraints, you can likely get something going quickly and at a great price. But, unfortunately, if you waver from the turn-key vendor’s system, your support and cost cycle will likely increase dramatically.

The Hidden Costs of Customization

If you don’t fit into the categories discussed above, you may start looking into custom-built systems to better suit your specific needs. While going the custom-built route will obviously add to your initial price, it’s also important to realize that the long-term costs may increase as well.

Many custom network access control projects start as a nice prototype, but then profit margins tend to drop and changes need to be made. The largest hidden cost from prototype to finished product is in handling error cases and boundary conditions. In addition to adding to the development costs, ongoing support will be required to cover these cases. In our experience, here are a few of the common issues that tend to develop:

  1. Auditing and synchronization with customer databases — This is where your enforcement program (the feature that allows people onto your network) syncs up with your database. But suppose you lose power and then come back up. How do you re-validate all of your customers? Do you force them to log in again?
  2. Capacity planning — In many cases, the test system did not account for the size of a growing system. At what point will you be forced to divide and transition to multiple authentication systems?
  3. General “feature creep” — This occurs when changing customer expectations pressure the vendor to overrun a fixed-price bid. This in turn leads to shoddy work and more problems as the vendor tries to cut corners in order to hold onto some profit margin.

Conclusion

Based on this discussion, it’s clear that the perfect, one-time-fix NAC billing system may still only be in the minds of users. Therefore, it’s not a matter of trying to find the flawless solution but rather of taking your own needs into account while understanding the limitations of existing options. If you have a clear idea of what you need, as well as a reasonable expectation of what certain solutions can provide (and at what cost), the process of finding and implementing an NAC billing system will not only be more effective but also more painless.

NetEqualizer Testing and Integration of Squid Caching Server


Editor’s Note: Due to the many variables involved with tuning and supporting Squid Caching Integration, this feature will require an additional upfront support charge. It will also require at minimum a NE3000 platform. Contact sales@netequalizer.com for specific details.

In our upcoming 5.0 release, the main enhancement will be the ability to implement YouTube caching from a NetEqualizer. Since a Squid caching server can potentially be implemented separately by your IT department, the question does come up as to the difference between using the embedded NetEqualizer integration and running the caching server stand-alone on a network.

Here are a few of the key reasons why using the NetEqualizer caching integration provides for the most efficient and effective set up:

1. Communication – For proper performance, it’s important that the NetEqualizer know when a file is coming from cache and when it’s coming from the Internet. It would be counterproductive to have data from cache shaped in any way. To accomplish this, we wrote a new utility, aptly named “cache helper,” to advise the NetEqualizer of current connections originating from cache. This allows the NetEqualizer to permit cached traffic to pass without being shaped.

2. Creative Routing – It’s also important that the NetEqualizer be able to see the public IP addresses of traffic originating on the Internet. However, using a stand-alone caching server prevents this. For example, if you plug a caching server into your network in front of a NetEqualizer (between the NetEqualizer and your users), all port 80 traffic would appear to come from the proxy server’s IP address. Cached or not, it would appear this way in a default setup. The NetEqualizer shaping rules would not be of much use in this mode as they would think all of the Internet traffic was originating from a single server. Without going into details, we have developed a set of special routing rules to overcome this limitation in our implementation.

3. Advanced Testing and Validation – Squid proxy servers by themselves are very finicky. Time and time again, we hear about implementations where a customer installed a proxy server only to have it cause more problems than it solved, ultimately slowing down the network. To ensure a simple yet tight implementation, we ran a series of scenarios under different conditions. This required us to develop a whole new methodology for testing network loads through the NetEqualizer. Our current class of load generators is very good at creating a heavy load and controlling it precisely, but in order to validate a caching system, we needed a different approach. We needed a load simulator that could reproduce the variations of live Internet traffic. For example, to ensure a stable caching system, you must take the following into consideration:

  • A caching proxy must perform quite a large number of DNS look-ups
  • It must also check tags for changes in content for cached Web pages
  • It must facilitate the delivery of cached data and know when to update the cache
  • The squid process requires a significant chunk of CPU and memory resources
  • For YouTube integration, the Squid caching server must also strip some URL tags on YouTube files on the fly

To answer this challenge, and provide the most effective caching feature, we’ve spent the past few months developing a custom load generator. Our simulation lab has a full one-gigabit connection to the Internet. It also has a set of servers that can simulate thousands of simultaneous users surfing the Internet at the same time. We can also queue up a set of YouTube users vying for live video from the cache and Internet. Lastly, we put a traditional point-to-point FTP and UDP load across the NetEqualizer using our traditional load generator.

Once our custom load generator was in place, we were able to run various scenarios that our technology might encounter in a live network setting.  Our testing exposed some common, and not so common, issues with YouTube caching and we were able to correct them. This kind of analysis is not possible on a live commercial network, as experimenting and tuning requires deliberate outages. We also now have the ability to re-create a customer problem and develop actual Squid source code patches should the need arise.

The Dark Side of Net Neutrality


Net neutrality, however idyllic in principle, comes with a price. The following article was written to shed some light on the big money behind the propaganda of net neutrality. It may change your views, but at the very least it will peel back one more layer of the onion that is the issue of net neutrality.

First, an analogy to set the stage:

I live in a neighborhood that equally shares a local community water system among 60 residential members. Nobody is metered. Through a mostly verbal agreement, all users try to keep our usage to a minimum. This requires us to be very water conscious, especially in the summer months when the main storage tanks need time to recharge overnight.

Several years ago, one property changed hands, and the new owner started raising organic vegetables using a drip irrigation system. The neighborhood precedent had always been that using water for a small lawn and garden area was an accepted practice; however, the new neighbor expanded his garden to three acres and now sells his produce at the local farmers market. Even with drip irrigation, his water consumption is likely well beyond that of the rest of the neighborhood combined.

You can see where I am going with this. Based on this scenario, it’s obvious that an objective observer would conclude that this neighbor should pay an additional premium — especially when you consider he is exploiting the community water for a commercial gain.

The Internet, much like our neighborhood example, was originally a group of cooperating parties (educational and government institutions) that connected their networks in an effort to easily share information. There was never any intention of charging for access amongst members. As the Internet spread away from government institutions, last-mile carriers such as cable and phone companies invested heavily in infrastructure. Their  business plans assumed that all parties would continue to use the Internet with lightweight content such as Web pages, e-mails, and the occasional larger document or picture.

In the latter part of 2007, a few companies, with substantial data content models, decided to take advantage of the low delivery fees for movies and music by serving them up over the Internet. Prior to their new-found Internet delivery model, content providers had to cover the distribution costs for the physical delivery of records, video cassettes and eventually discs.

As of 2010, Internet delivery costs associated with the distribution of media had plummeted to near zero. It seems that consumers have pre-paid their delivery cost when they paid their monthly Internet bill. Everybody should be happy, right?

The problem is, as per our analogy with the community water system, we have a few commercial operators jamming the pipes with content, and jammed pipes have a cost. Upgrading a full Internet pipe at any level requires a major investment, and providers to date are already leveraged and borrowed with their existing infrastructure. Thus, the Internet companies that carry the data need to pass this cost on to somebody else.

As a result of these conflicting interests, we now have a pissing match between carriers and content providers in which the latter are playing the “neutrality card” and the former are lobbying lawmakers to grant them special favors in order to govern ways to limit access.

Therefore, whether it be water, the Internet or grazing on public lands, absolute neutrality can be problematic — especially when money is involved. While the concept of neutrality certainly has the overwhelming support of consumer sentiment, be aware that there are, and  always will be, entities exploiting the system.

Related Articles

For more on NetFlix, see Level 3-Netflix Expose their Hidden Agenda.

Network Redundancy must start with your provider


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

The chances of being killed by a shark are 1 in 264 million. The chance of being mauled by a bear on your weekend outing in the woods is even smaller. Fear is a strange emotion rooted deep within our brains. Despite a rational understanding of risk, people are programmed to lose sleep and exhaust their adrenaline supply worrying about events that will never happen.

It is this same lack of rational risk evaluation that makes it possible for vendors to sell unneeded equipment to otherwise budget-conscious businesses. The current in-vogue, unwarranted fears used to move network equipment are IPv6 preparedness and equipment redundancy.

Equipment vendors tend to push customers toward internal redundant hardware solutions, not because they have your best interest in mind; if they did, they would first encourage you to get a redundant link to your ISP.

Twenty years of practical hands-on experience tells us that your Internet router’s chance of catastrophic failure is about 1 percent over a three-year period. On the other hand, your Internet provider has a 95-percent chance of having a full-day outage during that same three-year period.

If you are truly worried about a connectivity failure into your business, you MUST source two separate paths to the Internet to have any significant reduction in risk. Requiring fail-over on individual pieces of equipment, without first securing complete redundancy in your network from your provider, is like putting a band-aid on your finger while bleeding from your jugular vein.

Some other useful tips on making your network more reliable include:

Do not turn on unneeded bells and whistles on your router and firewall equipment.

Many router and device failures are not absolute. Equipment will get cranky, slow, or belligerent based on human error or system bugs. Although system bugs are rare when these devices are used in the default set-up, it seems turning on bells and whistles is often an irresistible enticement for a tech. The more features you turn on, the less standard your configuration becomes, and all too often the mission of the device is pushed well beyond its original intent. Routers doing billing systems, for example.

These “soft” failure situations are common, and the fail-over mechanism likely will not kick in, even though the device is sick and not passing traffic as intended. I have witnessed this type of failure first-hand at major customer installations. The failure itself is bad enough, but the real embarrassment comes from having to tell your customer that the fail-over investment they purchased is useless in a real-life situation. Fail-over systems are designed with the idea that the equipment they route around will die and go belly up like a pheasant shot point-blank with a 12-gauge shotgun. In reality, for every “hard” failure, there are 100 system-related lock ups where equipment sputters and chokes but does not completely die.

Start with a high-quality Internet line.

T1 lines, although somewhat expensive, are based on telephone technology that has long been hardened and paid for. While they do cost a bit more than other solutions, they are well-engineered to your doorstep.

Make sure all your devices have good UPS sources and surge protectors.

Consider this when purchasing redundant equipment: what is the cost of manually moving a wire to bypass a failed piece of equipment?

Look at this option before purchasing redundancy options on single points of failure. We often see customers asking for redundant fail-over embedded in their equipment. This tends to be a strategy of purchasing hardware such as routers, firewalls, bandwidth shapers, and access points that provide a “fail open” path (meaning traffic will still pass through the device) should they catastrophically fail. At face value, this seems like a good idea to cover your bases. Most of these devices embed a fail-over switch internally in their hardware. The cost of this technology can add about $3,000 to the price of the unit.

If equipment is vital to your operation, you’ll need a spare unit on hand in case of failure. If the equipment is optional or used occasionally, then take it out of your network.

Again, these are just some basic tips, and your final Internet redundancy plan will ultimately depend on your specific circumstances. But, these tips and questions should put you on your way to a decision based on facts rather than one based on unnecessary fears and concerns.

What Is Deep Packet Inspection and Why the Controversy?


By Art Reisman

Art Reisman CTO www.netequalizer.com

Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.

Article Updated March 2012

As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.

Exactly what is deep packet inspection?

All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.

The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.

When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).

Products sold that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data include packet shaping and layer-7 traffic shaping.
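
To make the address/payload distinction concrete, here is a minimal Python sketch of what a DPI device does differently from an ordinary router. It unpacks the header of a raw IPv4 packet (standard header layout) and then reads into the payload; the sample packet bytes are invented for illustration.

```python
def inspect_packet(raw: bytes) -> None:
    """Split a raw IPv4 packet into its 'envelope' (header) and its payload."""
    ihl = (raw[0] & 0x0F) * 4                   # header length in bytes
    src = ".".join(str(b) for b in raw[12:16])  # source address, on the outside
    dst = ".".join(str(b) for b in raw[16:20])  # destination address
    payload = raw[ihl:]                         # the freight inside the railroad car

    print(f"routing only needs: {src} -> {dst}")
    print(f"deep inspection also reads: {payload[:40]!r}")

# A made-up packet: 20-byte header (src 10.0.0.1, dst 10.0.0.2) plus payload.
sample = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
                10, 0, 0, 1, 10, 0, 0, 2]) + b"Subject: my private e-mail ..."
inspect_packet(sample)
```

An ordinary router stops at the first print statement; the second line, reading into the payload, is the “deep” part of deep packet inspection.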

How is deep packet inspection related to net neutrality?

Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.

Why do some Internet providers use deep packet inspection devices?

There are several reasons:

1) Targeted advertising — If a provider knows what you are reading, they can display related advertising on the pages they control, such as your login screen or e-mail account.

2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem less desirable, such as BitTorrent and other forms of peer-to-peer. BitTorrent traffic can overwhelm a network with volume. By detecting and redirecting the BitTorrent traffic, or slowing it down, a provider can alleviate congestion (a rough sketch of the “slowing it down” half appears after this list).

3) Blocking offensive material — Many companies or institutions that perform content filtering look inside packets to find, and possibly block, offensive material or websites.

4) Government spying — In the case of Iran (and to some extent China), DPI is used to keep tabs on the local population.
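
As promised above, here is a rough sketch of the “slowing it down” option from reason 2, using a token-bucket rate limiter. This is a generic technique, not a description of any particular vendor's product; the classifier is reduced to a stub that matches the standard BitTorrent handshake prefix, and the rate figures are invented.

```python
import time

class TokenBucket:
    """Allow a flow roughly `rate` bytes/second, with bursts up to `burst` bytes."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False                       # over budget: queue or drop instead

def looks_like_bittorrent(payload: bytes) -> bool:
    # Stub classifier: the BitTorrent handshake begins with this prefix.
    # Real DPI classifiers match many more signatures than this.
    return payload.startswith(b"\x13BitTorrent protocol")

p2p_bucket = TokenBucket(rate=64_000, burst=128_000)   # ~64 KB/s, invented figure

def forward(payload: bytes) -> bool:
    """Return True if the packet should be sent on immediately."""
    if looks_like_bittorrent(payload):
        return p2p_bucket.allow(len(payload))
    return True                                        # everything else passes
```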

When is it appropriate to use deep packet inspection?

1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.

2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.

3) Intrusion detection and prevention — It is one thing to be acting as an ISP and to eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. In a private home, for example, it is within your rights to look through your peephole and not let shady characters in. Likewise, in a private business it is a good idea to use deep packet inspection to block unwanted intruders from your network. Blocking the bad guys before they break in and damage your network is perfectly acceptable.

4) Spam filtering — Most consumers are very happy to have their ISP or e-mail provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don't read it, but computer scanners do), the motives are understood. What consumers may not realize is that their e-mail provider is also reading everything they do in order to serve targeted advertising.

Does content filtering use deep packet inspection?

For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as routers need to look them up anyway. We have only encountered content filters at private institutions that are within their rights to use them.
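
A quick hypothetical sketch of the difference: a URL-level filter decides from the hostname alone, with no need to read packet payloads. The blocklist entries below are made up.

```python
from urllib.parse import urlparse

BLOCKED_HOSTS = {"example-gambling.test", "example-malware.test"}  # hypothetical

def allow_request(url: str) -> bool:
    """URL-level filtering: decide from the hostname, never from the payload."""
    host = urlparse(url).hostname or ""
    blocked = any(host == b or host.endswith("." + b) for b in BLOCKED_HOSTS)
    return not blocked

print(allow_request("http://example-gambling.test/page"))  # False -- filtered
print(allow_request("http://example.org/page"))            # True
```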

What about spam filtering? Does that use deep packet inspection?

Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering, most people have opted in at one point or another, so it is generally done with permission.

What is all the fuss about?

It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.

For example, this is an excerpt from a recent E-Commerce Times article:

Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.

Paul Stephens, director of policy and advocacy for the Privacy Rights Clearinghouse, as quoted in the E-Commerce Times on November 14, 2008. Read the full article here.

Recently, Comcast had their hand slapped for re-directing Bittorrent traffic:

Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.

— Digital Daily, March 10, 2008. Read the full article here.

Later in 2008, the FCC came down hard on Comcast.

In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.

By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.

Wired.com, August 1, 2008. Read the full article here.

To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.

University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.

Wired.com, May 22, 2008. Read the full article here.

However, it looks like things are going the other way in the U.K., as Britain’s Virgin Media has announced they are dumping net neutrality in favor of targeting BitTorrent.

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The Register, December 16, 2008. Read the full article here.

Canadian ISPs confess en masse to deep packet inspection in January 2009.

With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.

Tech Spot, January 21, 2009. Read the full article here.

In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.

In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.

Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.

The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.

7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.

Kyle Brady, July 27, 2009. Read the full article here.

[February 2011 Update] The Egyptian government uses DPI to filter elements of its Internet traffic, and this act in itself has become a news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering report on Narus, the company that makes the DPI technology and sold it to the Egyptians.

While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency-sensitive applications, such as VoIP and email. Click here for a full price list.

NetEqualizer Brand Becoming an Eponym for Fairness and Net Neutrality Techniques


An eponym is a general term that describes something named after a person, place, or brand. A proprietary eponym, therefore, is a brand name, product, or service mark that has fallen into general use.

Examples of common brand eponyms include Xerox, Google, and Band-Aid. All of these brands have become synonymous with the general class of product, regardless of the actual brand in use.

Over the past 7 years, we have spent much of our time explaining the NetEqualizer methods to network administrators around the country. Now there is mounting evidence that the NetEqualizer brand is taking on a broader societal connotation. NetEqualizer is in the early stages of becoming an eponym for the class of bandwidth shapers that balance network loads and ensure fairness and neutrality. As evidence, we cite the following excerpts taken from various blogs and publications around the world.

From Dennis OReilly <Dennis.OReilly@ubc.ca> posted on ResNet Forums

These days the only way to classify encrypted streams is through behavioral analysis.  ….  Thus, approaches like the NetEqualizer or script-based ‘penalty box’ approaches are better.

From a WISP tutorial by Butch Evans

About 2 months ago, I began experimenting with an approach to QOS that mimics much of the functionality of the NetEqualizer (http://www.netequalizer.com) product line.

TMC net

Comcast Announces Traffic Shaping Techniques like APconnections’ NetEqualizer…

From Technewsworld

It actually sounds a lot what NetEqualizer (www.netequalizer.com) does and most people are OK with it…..

From Network World

NetEqualizer looks at every connection on the network and compare it to the overall trunk size to determine how to eliminate congestion on the links

Star Os Forum

If you’d really like to have your own netequalizer-like system then my advice…..

Voip-News

Has anyone else tried Netequalizer or something like it to help with VoIP QoS? It’s worked well so far for us and seems to be an effective alternative for networks with several users…..

How to Determine a Comprehensive ROI for Bandwidth Shaping Products


In the past, we’ve published several articles on our blog to help customers better understand the NetEqualizer’s potential return on investment (ROI). Obviously, we do this because we think we offer a compelling ROI proposition for most bandwidth-shaping decisions. Why? Primarily because we provide the benefits of bandwidth shaping at a very low cost — both initially and even more so over time. (Click here for the NetEqualizer ROI calculator.)

But, we also want to provide potential customers with the questions that need to be considered before a product is purchased, regardless of whether or not the answers lead to the NetEqualizer. With that said, this article will break down these questions, addressing many issues that may not be obvious at first glance, but are nonetheless integral when determining what bandwidth shaping product is best for you.

First, let’s discuss basic ROI. As a simple example, if an investment cost $100, and in one year that investment returned $120, the ROI is 20 percent. Simple enough. But what if your investment horizon is five years or longer? It gets a little more complicated, but suffice it to say you would perform a similar calculation for each year while adjusting those returns for time and cost.
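
For the multi-year case, the standard approach is to discount each year’s return back to present value before comparing it to the up-front cost. Here is a minimal Python sketch; the 8 percent discount rate and the cash flows are invented for illustration.

```python
def multi_year_roi(cost: float, yearly_returns: list[float], discount_rate: float) -> float:
    """Discount each year's return to present value, then compute ROI against cost."""
    present_value = sum(
        r / (1 + discount_rate) ** year
        for year, r in enumerate(yearly_returns, start=1)
    )
    return (present_value - cost) / cost

# The simple one-year case from above: $100 in, $120 back.
print(f"{multi_year_roi(100, [120], 0.0):.0%}")      # 20%

# Five years of $30 returns, discounted at a hypothetical 8 percent per year.
print(f"{multi_year_roi(100, [30] * 5, 0.08):.0%}")  # about 20%
```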

The important point is that this is a well-known calculation for evaluating whether one thing is a better investment than another — be it a bandwidth shaping product or real estate. Naturally, the best financial decision is the one that delivers the greatest return for the smallest cost.

The hard part is determining what questions to ask in order to accurately determine the ROI. A missed cost or benefit here or there could dramatically alter the outcome, potentially leading to significant unforeseen losses.

For the remainder of this article, I’ll discuss many of the potential costs and returns associated with bandwidth shaping products, with some being more obscure than others. In the end, it should better prepare you to address the most important questions and issues and ultimately lead to a more accurate ROI assessment.

Let’s start by looking at the largest components of bandwidth shaping product “costs” and whether they are one-time or ongoing. We’ll then consider the returns.

COSTS

  • The initial cost of the tool
    • This is a one-time cost.
  • The cost of vendor support and license updates
    • These are ongoing costs and include monthly and annual licenses for support, training, software updates, library updates, etc. The difference from vendor to vendor can be significant, especially over the long run.
  • The cost of upgrades within the time horizon of the investment
    • These upgrades can come in several different forms. For example, what does it cost to go from a 50Mbps tool to a 100Mbps tool? Can your tool be upgraded, or do you have to buy a whole new one? This can be a one-time cost or it can occur several times. It really depends on the growth of your network, but an upgrade is usually inevitable for networks of any size.
  • The internal (human) cost to support the tool
    • For example, how many man-hours do you have to spend to maintain the tool, to optimize it, and to adapt it to your changing network? This can be a considerable “hidden” cost, and it’s generally recurring. It also usually increases over time, as the cost of salaries and benefits tends to go up. Because of that, this is a very important component that should be quantified in a good ROI analysis. Tools that require little or no ongoing maintenance have a large advantage.
  • Overall impact on the network
    • Does the product add latency or other inefficiencies? Does it create processing overhead, and how much? If so, costs like these will constantly degrade your network quality and add up over time.

RETURNS

  • Savings from being able to delay or eliminate buying more bandwidth
    • This could either be a one-time or ongoing return. Even delaying a bandwidth upgrade for six months or a year can be highly valuable.
  • Savings from not losing existing revenue sources
    • How many customers did you not lose because they did not get frustrated with their network/Internet service? This return is ongoing.
  • Ability to generate new revenue
    • How many new customers did you add because of a better-maintained network?  Were you able to generate revenue by adding new higher-value services like a tiered rate structure? This will usually be an ongoing return.
  • Savings from the ability to eliminate or reduce the financial impact of unprofitable customers
    • This is an ongoing savings. Can you convert an unprofitable customer to a profitable one by reducing their negative impact on the network? If not, and they walk, do you care?
  • Avoidance of having to buy additional equipment
    • Were you able to avoid having to “divide and conquer” by buying new access points, splitting VLANs, etc.? This can be a one-time or ongoing return.
  • Savings in the cost of responding to technical support calls
    • How much time was saved by not having to take an irate customer call, research it, and respond? If this is something you deal with on a regular basis, the savings add up every day, week, or month it is avoided.

Overall, these are the basic financial components and questions that need to be quantified to make a good ROI analysis. For each business and each tool, this type of analysis may yield a different answer, but it is important to note that over time there are many more items associated with ongoing costs and savings than with those occurring only once. Thus, you must take great care to understand the impact of these for each tool, especially the issues that lead to costs that increase over time.
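
To pull those components together, here is a hypothetical back-of-the-envelope model that totals the one-time and recurring items over a five-year horizon. Every figure is invented; a real analysis would substitute your own numbers and discount them as shown earlier.

```python
def horizon_total(one_time: float, annual: float, years: int) -> float:
    """A one-time amount plus a recurring annual amount, summed over the horizon."""
    return one_time + annual * years

YEARS = 5

# Hypothetical tool: $8,000 purchase, $1,500/yr support licenses, one $4,000
# capacity upgrade, and two staff-hours per month at $60/hr to keep it tuned.
costs = (
    horizon_total(8_000, 1_500, YEARS)
    + 4_000
    + horizon_total(0, 2 * 12 * 60, YEARS)
)

# Hypothetical returns: one deferred bandwidth upgrade plus retained revenue.
returns = (
    horizon_total(6_000, 0, YEARS)        # one-time: delayed bandwidth purchase
    + horizon_total(0, 5_000, YEARS)      # ongoing: customers not lost
)

print(f"five-year ROI: {(returns - costs) / costs:.0%}")   # 16% with these figures
```

Note how the recurring lines dominate both sides of the ledger by year five, which is exactly why the ongoing items deserve the most scrutiny.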

NetEqualizer YouTube Caching FAQ


Editor’s Note: This week, we announced the availability of the NetEqualizer YouTube caching feature we first introduced in October. Over the past month, interest and inquiries have been high, so we’ve created the following Q&A to address many of the common questions we’ve received.

This may seem like a silly question, but why is caching advantageous?

The bottleneck most networks deal with is that they have a limited pipe leading out to the larger public Internet cloud. When a user visits a website or accesses content online, data must be transferred to and from the user through this limited pipe, which is usually meant for only average loads (increasing its size can be quite expensive). During busy times, when multiple users are accessing material from the Internet at once, the pipe can become clogged and service slowed. However, if an ISP can keep a cached copy of certain bandwidth-intensive content, such as a popular video, on a server in their local office, this bottleneck can be avoided. The pipe remains open and unclogged and customers are assured their video will always play faster and more smoothly than if they had to go out and re-fetch a copy from the YouTube server on the Internet.
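
In code, the whole idea reduces to “check the local store before using the expensive pipe.” Here is a minimal, hypothetical sketch of that lookup; `fetch_from_internet` stands in for the slow trip through the upstream link.

```python
local_cache: dict[str, bytes] = {}       # video URL -> copy stored at the local office

def fetch_from_internet(url: str) -> bytes:
    # Stand-in for the expensive trip through the limited upstream pipe.
    return b"<video bytes>"

def get(url: str) -> bytes:
    if url in local_cache:               # cache hit: the pipe stays unclogged
        return local_cache[url]
    data = fetch_from_internet(url)      # cache miss: fetch once...
    local_cache[url] = data              # ...and keep a copy for the next viewer
    return data
```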

What is the ROI benefit of caching YouTube? How much bandwidth can a provider conserve?

At the time of this writing, we are still in the early stages of our data collection on this subject. What we do know is that YouTube can account for up to 15 percent of Internet traffic. We expect to be able to cache at least the 300 most popular YouTube videos with this initial release, and perhaps more when we release the mass-storage version of our caching server in the future. Considering this, realistic estimates put the savings in bandwidth overhead somewhere between 5 and 15 percent. But these are only the immediate benefits in terms of bandwidth savings. The long-term customer-satisfaction benefit is that many more YouTube videos will play without interruption on a crowded network (during the busy hour) than before. Therefore, ROI shouldn’t be measured in bandwidth savings alone.

Why is it just the YouTube caching feature? Why not cache everything?

There are a couple of good reasons not to cache everything.

First, quite a few Web pages are dynamically generated or change quite often, and a caching mechanism relies on content being relatively static. This allows it to grab content from the Internet and store it locally for future use without the content changing. As mentioned, when users/clients visit the specific Web pages that have been stored, they are directed to the locally saved content rather than across the Internet to the original website. Therefore, caching obviously wouldn’t be possible for pages that are constantly changing. Caching dynamic content can cause all kinds of issues — especially with merchant and secure sites where each page is custom-generated for the client.

Second, a caching server can realistically only store a subset of the data it accesses. Yes, data storage is getting less expensive every year, but a local store is finite in size and will eventually fill up. So, when it came time to decide what to cache and what not to cache, YouTube, being both popular and bandwidth-intensive, was the logical choice.
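
Because the local store is finite, a real cache also needs an eviction policy for when it fills up. A common choice, sketched below, is least-recently-used; this is a generic illustration, not necessarily how the NetEqualizer decides.

```python
from collections import OrderedDict

class LRUCache:
    """Keep at most `capacity` items, evicting the least-recently-used first."""
    def __init__(self, capacity: int = 300):        # e.g. the 300 most popular videos
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, url: str) -> bytes | None:
        if url not in self.store:
            return None                             # miss: caller fetches upstream
        self.store.move_to_end(url)                 # hit: mark as recently used
        return self.store[url]

    def put(self, url: str, data: bytes) -> None:
        self.store[url] = data
        self.store.move_to_end(url)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)          # drop the coldest entry
```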

Will the NetEqualizer ever cache content beyond YouTube, such as other videos?

At this time, the NetEqualizer caches files that traverse port 80 and correspond to video files from 30 seconds to 10 minutes in length. It is possible that some other port-80 files will fall into this category, but the bulk will be YouTube.
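
As a rough illustration of that selection rule, here is a hypothetical candidate test. Video duration is not directly visible on the wire, so this sketch substitutes a file-size window; the byte thresholds are invented and are not the product’s actual values.

```python
MIN_BYTES = 2 * 1024 ** 2      # stands in for roughly a 30-second clip
MAX_BYTES = 100 * 1024 ** 2    # stands in for roughly a 10-minute clip

def cache_candidate(dst_port: int, content_type: str, length: int) -> bool:
    """Mirror the rule above: plain HTTP video of moderate length."""
    return (
        dst_port == 80
        and content_type.startswith("video/")
        and MIN_BYTES <= length <= MAX_BYTES
    )

print(cache_candidate(80, "video/mp4", 30 * 1024 ** 2))   # True: cacheable
print(cache_candidate(443, "video/mp4", 30 * 1024 ** 2))  # False: not plain HTTP
```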

Is there anything else about YouTube that makes it a good candidate to cache?

Yes, YouTube content meets the level of stability discussed above that’s needed for effective caching. Once posted, most YouTube videos are not edited or changed. Hence, the copy in the local cache will stay current and be good indefinitely.

When I download large distributions, the download utility often gives me a choice of mirrored sites around the world. Is this the same as caching?

By definition this is also caching, but the difference is that there is a manual step to choosing one of these distribution sites. Some of the large-content open source distributions have been delivered this way for many years. The caching feature on the NetEqualizer is what is called “transparent,” meaning users do not have to do anything to get a cached copy.

If users are getting a file from cache without their knowledge, could this be construed as a violation of net neutrality?

We addressed the tenets of net neutrality in another article and to our knowledge caching has not been controversial in any way.

What about copyright violations? Is it legal to store someone’s content on an intermediate server?

This is a very complex question and anything is possible, but with respect to intent and the NetEqualizer caching mechanism, the Internet provider is only caching what is already freely available. There is no masking or redirection of the actual YouTube administrative wrappings that a user sees (this is where advertising and promotions appear). Hence, there is no loss of potential revenue for YouTube. In fact, caching would be considered more of a benefit for them, as it helps more people use their service where connections might otherwise be too slow.

Final Editor’s Note: While we’re confident this Q&A will answer many of the questions that arise about the NetEqualizer YouTube caching feature, please don’t hesitate to contact us with further inquiries. We can be reached at 1-888-287-2492 or sales@apconnections.net.

NetEqualizer YouTube Caching a Win for Net Neutrality


Over the past few years, much of the controversy over net neutrality has ultimately stemmed from the longstanding rift between carriers and content providers. Commercial content providers such as NetFlix have entire business models that rely on relatively unrestricted bandwidth access for their customers, which has led to an enormous increase in the amount of bandwidth being used. In response to these extreme bandwidth loads and associated costs, ISPs have tried all types of schemes to limit and restrict total usage, from layer-7 shaping and deep packet inspection to preferential treatment based on content or fees.

While in many cases effective, most of these efforts have been mired in controversy with respect to net neutrality. However, caching is the one exception.

Up to this point, caching has proven to be the magic bullet that can benefit both ISPs and consumers (faster access to videos, etc.) while respecting net neutrality. To illustrate this, we’ll run caching through the gauntlet of questions that have been raised about these other solutions in regard to a violation of net neutrality. In the end, it comes up clean.

1. Does caching involve deep introspection of user traffic without their knowledge (like layer-7 shaping and DPI)?

No.

2. Does caching perform any form of preferential treatment based on content?

No.

3. Does caching perform any form of preferential treatment based on fees?

No.

Yet, despite avoiding these pitfalls, caching has still proven to be extremely effective, allowing Internet providers to manage increasing customer demands without infringing upon customers’ rights or quality of service. It was these factors that led APconnections to develop our most recent NetEqualizer feature, YouTube caching.

For more on this feature, or caching in general, check out our new NetEqualizer YouTube Caching FAQ post.