How to Speed Up Data Access on Your iPhone


By Art Reisman

Art Reisman CTO www.netequalizer.com

Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Ever wonder if there is anything you can do to make your iPhone data access a little bit faster?

When You Are on Your Provider’s 4G Network and Data Access Is Slow

The most likely reason for slow data access is congestion on the provider line. 3G and 4G networks all have a limited-size pipe from the nearest tower back to the Internet. It really does not matter what your theoretical data speed is: when there are more people using the tower than the backhaul pipe can handle, you can temporarily lose service, even when your phone is showing three or four bars.

The other point of contention arises when the number of users connected to a tower exceeds the tower’s carrying capacity in terms of frequency. If this occurs, you will likely lose not only data connectivity but also the ability to make and receive phone calls.

Unfortunately, you only have a couple of options in this situation.

– If you are in a stadium with a large crowd, your best bet is to text during the action. Pick a time when you know the majority of people are not trying to send data. If you wait for a timeout or the end of the game, you’ll find the network slows to a crawl at those moments, so try to finish your access before the last out of the game or the end of the quarter.

– Get away from the area of congestion. I have experienced a complete lockout of up to 30 minutes when trying to text, as a sold-out stadium emptied out. In this situation my only chance was to walk about a half mile from the venue to get a text out. Once away from the main stadium, my iPhone connected to a tower with a different backhaul, away from the congested stadium towers.

When Connected to a Local Wireless Network and Access Is Slow

Get close to the nearest access point.

Oftentimes, on a wireless network, the person with the strongest signal wins. Unlike the cellular data network, the 802.11 protocols used by public wireless access points have no way to time-slice data access. Basically, this means the device that talks the loudest will get all the bandwidth. In order to talk the loudest, you need to be closest to the access point.

On a relatively uncrowded network you might have noticed that you get fairly good speed even on a moderate or weak signal. However, when there are a large number of users competing for the attention of a local access point, the loudest have the ability to dominate all the bandwidth, leaving nothing for the weaker iPhones. The phenomenon of the loudest talker getting all the bandwidth is called the hidden node problem. For a good explanation of the hidden node issue you can reference our white paper on the problem.

Shameless plug: If you happen to be a provider or know somebody that works for a provider please tell them to call us and we’d be glad to explain the simplicity of equalizing and how it can restore sanity to a congested network.

How to Block FrostWire, uTorrent and Other P2P Protocols


By Art Reisman, CTO, http://www.netequalizer.com


Disclaimer: It is considered controversial and by some definitions illegal for a US-based ISP to use deep packet inspection on the public Internet.

At APconnections, we subscribe to the philosophy that there is more to be gained by explaining your technology secrets than by obfuscating them with marketing babble. Read on to learn how I hunt down aggressive P2P traffic.

In order to create a successful tool for blocking a P2P application, you must first figure out how to identify P2P traffic. I do this by looking at the output data dump from a P2P session.

To see what is inside the data packets, I use a custom sniffer that we developed. Then, to create a traffic load, I use a basic Windows computer loaded up with the latest uTorrent client.

Editor’s Note: The last time I used a P2P engine on a Windows computer, I ended up reloading my Windows OS once a week. Downloading random P2P files is sure to bring in the latest viruses, and unimaginable filth will populate your computer.

The custom sniffer is built into our NetGladiator device, and it does several things:

1) It detects and dumps the data inside packets as they cross the wire to a file that I can look at later.

2) It maps non-printable ASCII characters to printable ASCII characters. In this way, when I dump the contents of an IP packet to a file, I don’t get all kinds of special characters embedded in the file. Since P2P payloads are encoded chunks of music and video files, you can’t view the data without this filter. If you try, you’ll get all kinds of garbled scrolling on the screen when you look at the raw data with a text editor.
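The mapping idea is simple to sketch. Here is a minimal Python version of it (the real sniffer is built into the NetGladiator; this is just an illustration of the filtering step):

```python
def printable_dump(raw: bytes) -> str:
    """Map every byte outside the printable ASCII range to 'x' so a
    packet payload can be read in an ordinary text editor."""
    return "".join(chr(b) if 32 <= b < 127 else "x" for b in raw)

# A made-up payload: binary junk wrapped around a readable keyword.
print(printable_dump(b"\x01\x02ping\xff\xfe"))  # xxpingxx
```

With a filter like this in place, the repeating text patterns inside otherwise binary packets become visible.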

So what does the raw data output dump of a P2P client look like?

Here is a snippet of some of the uTorrent raw data I was looking at just this morning. The sniffer has converted the non-printable characters to “x”.
You can clearly see some repeating data patterns forming below. That is the key to identifying anything with layer 7. Sometimes it is obvious, while sometimes you really have to work to find a pattern.

Packet 1 exx_0ixx`12fb*!s[`|#l0fwxkf)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:ka 31:v4:utk21:y1:qe
Packet 2 exx_0jxx`1kmb*!su,fsl0’_xk<)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:xv4^1:v4:utk21:y1:qe
Packet 3 exx_0kxx`1exb*!sz{)8l0|!xkvid1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:09hd1:v4:utk21:y1:qe
Packet 4 exx_0lxx`19-b*!sq%^:l0tpxk-ld1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:=x{j1:v4:utk21:y1:qe

The next step is to develop a layer 7 regular expression to identify the patterns in the data. In the output you’ll notice the string “exx” appears in every packet, and that is what you look for. A repeating pattern is a good place to start.

The regular expression I decided to use looks something like:

exx.0.xx.*qe

This translates to: match any string starting with “exx”, followed by any character (that is what “.” means), followed by “0”, followed by any character, followed by “xx”, followed by any sequence of characters ending with “qe”.
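You can try the pattern against the dumps above in a few lines of Python; the sample strings below are trimmed versions of the packet dumps, and `re.search` stands in for the layer 7 engine:

```python
import re

# The signature developed above: "exx", any char, "0", any char,
# "xx", then anything, ending in "qe".
pattern = re.compile(r"exx.0.xx.*qe")

# Trimmed versions of the packet dumps shown earlier.
packets = [
    "exx_0ixx`12fb*!s...e1:q4:ping1:t4:ka 31:v4:utk21:y1:qe",
    "exx_0jxx`1kmb*!s...e1:q4:ping1:t4:xv4^1:v4:utk21:y1:qe",
]
for p in packets:
    print(bool(pattern.search(p)))  # True for both
```

Both samples match, while ordinary text such as an HTTP request line does not.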

Note: When I tested this regular expression, it turned out to catch only a fraction of the uTorrent traffic, but it is a start. What you don’t want to do is make your regular expression so simple that you get false positives. A layer 7 product that creates a high degree of false positives is pretty useless.

The next thing I do with my new regular expression is a test for accuracy of target detection and false positives.

Accuracy of detection is tested by clearing your test network of everything except the P2P target you are trying to catch, then running your layer 7 device with your new regular expression and seeing how well it does.

Below is an example from my NetGladiator in a new sniffer mode. In this mode I have the layer 7 detection on, and I can analyze the detection accuracy. In the output below, the sniffer puts a tag on every connection that matches my uTorrent regular expression. In this case, my tag is indicated by the word “dad” at the end of the row. Notice how every connection is tagged. This means I am getting a 100 percent hit rate for uTorrent. Obviously I doctored the output for this post :)

Index SRCP DSTP Wavg Avg IP1 IP2 Ptcl Port Pool TOS
0 0 0 17 53 255.255.255.255 95.85.150.34 — 2 99 dad
1 0 0 16 48 255.255.255.255 95.82.250.60 — 2 99 dad
2 0 0 16 48 255.255.255.255 95.147.1.179 — 2 99 dad
3 0 0 18 52 255.255.255.255 95.252.60.94 — 2 99 dad
4 0 0 12 24 255.255.255.255 201.250.236.194 — 2 99 dad
5 0 0 18 52 255.255.255.255 2.3.200.165 — 2 99 dad
6 0 0 10 0 255.255.255.255 99.251.180.164 — 2 99 dad
7 0 0 88 732 255.255.255.255 95.146.136.13 — 2 99 dad
8 0 0 12 0 255.255.255.255 189.202.6.133 — 2 99 dad
9 0 0 12 24 255.255.255.255 79.180.76.172 — 2 99 dad
10 0 0 16 48 255.255.255.255 95.96.179.38 — 2 99 dad
11 0 0 11 16 255.255.255.255 189.111.5.238 — 2 99 dad
12 0 0 17 52 255.255.255.255 201.160.220.251 — 2 99 dad
13 0 0 27 54 255.255.255.255 95.73.104.105 — 2 99 dad
14 0 0 10 0 255.255.255.255 95.83.176.3 — 2 99 dad
15 0 0 14 28 255.255.255.255 123.193.132.219 — 2 99 dad
16 0 0 14 32 255.255.255.255 188.191.192.157 — 2 99 dad
17 0 0 10 0 255.255.255.255 95.83.132.169 — 2 99 dad
18 0 0 24 33 255.255.255.255 99.244.128.223 — 2 99 dad
19 0 0 17 53 255.255.255.255 97.90.124.181 — 2 99 dad

A bit more on reading this sniffer output…

Notice columns 4 and 5, which indicate data transfer rates in bytes per second. These columns contain numbers that are less than 100 bytes per second – very small data transfers. This is mostly because as soon as a connection is identified as uTorrent, the NetGladiator drops all future packets on the connection and it never really gets going. One thing I did notice is that the modern uTorrent protocol hops around very quickly from connection to connection. It attempts not to show its cards. Why do I mention this? Because in layer 7 shaping of P2P, speed of detection is everything. If you wait a few milliseconds too long to analyze and detect a torrent, it is already too late, because the torrent has transferred enough data to keep it going. It’s just a conjecture, but I suspect this is one of the main reasons why uTorrent is so popular. By hopping from source to source, it is very hard for an ISP to block without the latest equipment. I recently wrote a companion article regarding the speed of the technology behind a good layer 7 device.

The last part of testing a regular expression involves looking for false positives. For this we use a commercial grade simulator. Our simulator uses a series of pre-programmed web crawlers that visit tens of thousands of web pages an hour at our test facility. We then take our layer 7 device with our new regular expression and make sure that none of the web crawlers accidentally get blocked while reading thousands of web pages. If this test passes we are good to go with our new regular expression.
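A scaled-down version of that false-positive check is easy to sketch: run ordinary web-traffic samples through the same expression and count accidental matches. The strings here are stand-ins I made up; the real test uses live crawlers against thousands of pages:

```python
import re

# The uTorrent signature developed earlier in the article.
pattern = re.compile(r"exx.0.xx.*qe")

# Stand-ins for content a web crawler would fetch.
web_samples = [
    "GET /index.html HTTP/1.1\r\nHost: example.com",
    "<html><head><title>Quarterly report</title></head></html>",
    "HTTP/1.1 200 OK\r\nContent-Type: text/html",
]
false_positives = [s for s in web_samples if pattern.search(s)]
print(len(false_positives))  # 0
```

Zero false positives on a small sample proves nothing by itself, which is why the real test runs against tens of thousands of pages.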

Editor’s Note: Our primary bandwidth shaping product manages P2P without using deep packet inspection.
The following layer 7 techniques can be run on our NetGladiator Intrusion Prevention System. We also advise that public ISPs check their country regulations before deploying a deep packet inspection device on a public network.

Ever Wonder Why Your Video (YouTube) Over the Internet is Slow Sometimes?


By: Art Reisman


Art Reisman is the CTO of APconnections. He is Chief Architect on the NetGladiator and NetEqualizer product lines.

I live in a nice suburban neighborhood with both DSL and Cable service options for my Internet. My speed tests always show better than 10 megabits of download speed, and yet sometimes, a basic YouTube or iTunes download just drags on forever. Calling my provider to complain about broken promises of Internet speed is futile. Their call center people in India have the patience of saints; they will wear me down with politeness despite my rudeness and screaming. Although I do want to believe in some kind of Internet Santa Claus, I know firsthand that streaming unfettered video for all is just not going to happen. Below I’ll break down some of the limitations for video over the Internet, and explain some of the seemingly strange anomalies for various video performance problems.

The factors dictating the quality of video over the Internet are:

1) How many customers are sharing the link between your provider and the rest of the Internet

Believe it or not, your provider pays a fee to connect up to the Internet. Perhaps not in the exact same way a consumer does, but the more traffic they exchange with the rest of the Internet, the more it costs them. There are times when their connection to the Internet is saturated, at which point all of their customers will experience slower service of some kind.

2) The server(s) where the video is located

It is possible that the content hosting site has overloaded servers and their disk drives are just not fast enough to maintain decent quality. This is usually what your operator will claim, regardless of whether it is their fault or not. :)

3) The link from the server to the Internet location of your provider

Somewhere between the content video server and your provider there could be a bottleneck.

4) The “last mile”  link between you and your provider (is it dedicated or shared?)

For most cable and DSL customers, you have a direct wire back to your provider. For wireless broadband, it is a completely different story. You are likely sharing the airwaves to your nearest tower with many customers.

So why is my video slow sometimes for YouTube but not for NetFlix?

The reason why I can watch some NetFlix movies, and a good number of popular YouTube videos, without any issues on my home system is that my provider uses a trick called caching to host some content locally. By hosting the video content locally, the provider can ensure that items 2 and 3 (above) are not an issue. Many urban cable operators also have a dedicated wire from their office to your residence, which eliminates issues with item 4 (above).

Basically, caching is nothing new for a cable operator. Even before the Internet, cable operators had movies on demand that you could purchase. With movies on demand, cable operators maintained a server with local copies of popular movies in their main office, and when you called them they would actually throw a switch of some kind and send the movie down the coaxial cable from their office to your house. Caching today is a bit more sophisticated than that but follows the same principles. When you watch a NetFlix movie, or YouTube video that is hosted on your provider’s local server (cache),  the cable company can send the video directly down the wire to your house. In most setups, you don’t share your local last mile wire, and hence the movie plays without contention.

Caching is great, and through predictive management (guessing what is going to be used the most), your provider often has the content you want in a local copy and so it downloads quickly.  However, should you truly surf around to get random or obscure YouTube videos, your chances of a slower video will increase dramatically, as it is not likely to be stored in your provider’s cache.
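In code, the provider-side cache boils down to a lookup table in front of the long-haul fetch. This hypothetical Python sketch (the names and structure are mine, not any operator's) shows why the second viewer of a popular video gets it fast:

```python
cache = {}  # video_id -> content kept at the provider's local office

def fetch(video_id, origin):
    """Serve from the local cache when we can; otherwise take the slow
    trip to the origin server and keep a copy for the next viewer."""
    if video_id in cache:
        return cache[video_id], "local cache (fast)"
    content = origin[video_id]      # the long haul across the Internet
    cache[video_id] = content
    return content, "origin server (slow)"

# A stand-in for the remote content servers.
origin = {"trending-clip": b"...", "obscure-clip": b"..."}
print(fetch("trending-clip", origin)[1])  # origin server (slow)
print(fetch("trending-clip", origin)[1])  # local cache (fast)
```

Predictive management is just the operator pre-loading that table with what it guesses you will watch, which is why obscure videos are the ones that stall.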

Try This: The next time you watch a (not popular) YouTube video that is giving you problems, kill it, and try a popular trending video. More often than not, the popular trending video will run without interruption. If you repeat this experiment a few times and get the same results, you can be certain that your provider is caching some video to speed up your experience.

In case you need more proof that this is “top of mind” for Internet Providers, check out the January 1st 2012, CED Magazine article on the Top Broadband 50 for 2011 (read the whole article here).  #25 (enclosed below) is tied to improving video over the Internet.

#25: Feeding the video frenzy with CDNs

So everyone wants their video anywhere, anytime and on any device. One way of making sure that video is poised for rapid deployment is through content delivery networks. The prime example of a cable CDN is the Comcast Content Distribution Network (CCDN), which allows Comcast to use its national backbone to tie centralized storage libraries to regional and local cache servers.

Of course, not every cable operator can afford the grand-scale CDN build-out that Comcast is undertaking, but smaller MSOs can enjoy some of the same benefits through partnerships. – MR

How to Build Your Own Linux-Based Access Point in 5 Minutes


The motivations to build your own access point using Linux are many, and I have listed a few compelling reasons below:

1) You can use the Linux-rich set of firewall rules to customize access to any segment of your wireless network.
2) You can use SNMP utilities to report on traffic going through your AP.
3) You can configure your AP to send e-mail alerts if there are problems with your AP.
4) You can custom coordinate communications with other access points – for example, build your own Mesh network.
5) You can build specialized user authentication services and run them from the Linux server.

Note: We had experimented with building access points with a Linux-based server several years ago, but found that the Linux support for Wireless Radio cards was severely lacking. Most of the compatibility issues have been solved in the newer Linux kernels.

Building your own Linux access point in about 5 minutes:

Yes, 5 minutes or less is what it just took me to configure an access point while following this document, to verify it was written correctly. This was after creating the CF card from a ready-made image containing Voyage Linux. I also used the “edit the CF directly” method mentioned below, so I could just cut and paste the lines that belong in the four necessary files.

Building your own Linux access point using the Alix 3D2 and the Atheros-based Wistron CM9 MiniPCI card may not be the cheapest way to build an access point if you have to buy all the parts, but here is how you can do it. These instructions may also be used with any number of other hardware combinations, such as leftover computers from your Pacman gaming days that happen to have an Atheros-chipset wireless radio attached, as long as Voyage sees the radio under the same device name.

This access point has a transparent bridge and uses your existing DHCP server to give out IPs to the wireless devices that connect to it. This means you just plug the Ethernet cable into your existing network and connect wirelessly without fuss or muss, just as if you had plugged into your switch. This is the only setup described in this article, but you can of course set up your own DHCP server on the unit if you know how to do so.

Parts list:
ALIX3D2 (ALIX.3D2) with 1 LAN and 2 miniPCI, LX800, 256MB
18w (15v/1.2A) AC-DC Power Adapter with Power Cord
Wistron CM9 MiniPCI Card
N-Type female Straight Pigtail
ANT-N-5 – Outdoor Omni Antenna, 5.5dBi, N-Type male, Straight type (rubber ducky type)
Kingston 4 GB CompactFlash Memory Card CF/4GB

Total for the above from one provider was under $200.

Optional parts:
Power Over Ethernet Injector – about $4, and only necessary if you want to run the unit out to some area that does not have power right there, such as an attic.
Case for Alix3D2 – price and link not available as this is a bench test model.

Assembly:
Plug the CF card (once imaged with the Voyage software, and optionally already configured as mentioned below) into the board. It only goes one way and there is only one place to put it.
Plug the pigtail, with the antenna attached, into the CM9 antenna connection closest to the center of the radio. It’s easier to do this with the radio out.
Plug the CM9 wireless radio into the card slot on the other side of the Alix board, the side with the LAN port on it.
Plug a standard LAN cable into your switch connected to your network.
Plug the power adapter into the Alix board and then plug it into the wall (when you do this, it boots up, so ready the CF first).

Configuration tools needed:
Null modem serial cable
A Windows, Linux, or Mac machine with terminal software installed, so you can access the serial port of your new access point for setup: Windows XP with HyperTerminal, Linux with Minicom, or Mac with ZTerm.
Optionally, instead of using a null modem and terminal software, you can set up the new access point by editing the CF card directly prior to installing it. Editing it directly can be a lot easier than figuring out how to use the serial port and terminal software.

Software used was Voyage Linux. Searching for Voyage Linux will lead you to their home page at http://linux.voyage.hk/
Version used was 0.7.5 (there are probably newer versions by now)
You can create your own CF by following the instructions on the Voyage Linux website, or you can search for ready-made CF images. If you search for “voyage075_2GB_ALIX” you can currently find a ready-to-go image that will fit on a 2 GB or larger CF card. Since the suggested CF card in the parts list is 4 GB, we are good.

Now, assuming you have created a CF card with Voyage Linux 0.7.5 on it and can log into the console with your terminal software, or have access to the CF directly from a computer that can read the Linux disk, then do the following steps:

(If logged into a booted-up Alix board with the CF installed on it using the serial port, then run remountrw first so you can create and edit files.)

Set it up as an access point by first creating a file in /root called apup. In that file, put the following lines:
#!/bin/sh
# Bring up the wired interface with no IP of its own; the bridge will own the IP.
/sbin/ifconfig eth0 0.0.0.0 up
# Create the bridge and add the wired interface to it.
/usr/sbin/brctl addbr br0
/usr/sbin/brctl addif br0 eth0
# Start hostapd in the background with our config, then add the radio to the bridge.
/usr/sbin/hostapd -B /etc/hostapd/hostapd.wlan0.conf
/usr/sbin/brctl addif br0 wlan0
# Give the bridge an IP so you can reach the AP itself via SSH, and set the gateway.
/sbin/ifconfig br0 192.168.0.100 netmask 255.255.255.0 up
/sbin/route add default gw 192.168.0.1
# Enable forwarding and masquerade traffic leaving the radio.
echo 1 > /proc/sys/net/ipv4/ip_forward
/sbin/iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

Change that 192.168.0.100 and netmask to whatever you want the IP for the access point to be so that you can get to it via SSH. Change the 192.168.0.1 to your default route or gateway.

Now use chmod to make /root/apup executable with something like chmod a+x /root/apup

Now create or edit /etc/hostapd/hostapd.wlan0.conf so that it contains the following:
interface=wlan0
driver=nl80211
logger_syslog=-1
logger_syslog_level=2
logger_stdout=-1
logger_stdout_level=2
debug=4
#dump_file=/tmp/hostapd.dump
#ctrl_interface=/var/run/hostapd
#ctrl_interface_group=0
channel=1
macaddr_acl=0
auth_algs=3
eapol_key_index_workaround=0
eap_server=0
wpa=3
ssid=alix
wpa_passphrase=voyage
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
eapol_version=1

Edit the file /etc/network/interfaces and change the area that brings up eth0 to:
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
address 192.168.0.100
netmask 255.255.255.0
gateway 192.168.0.1

This is so that if, for some reason, the bridge br0 does not come up, you may still be able to access eth0 via the same IP you put in apup.

Now, edit /etc/rc.local and put one line towards the bottom to run /root/apup so it looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
/root/apup
exit 0

That’s it for the software setup. If you want the SSID to say something besides alix, edit the ssid line in /etc/hostapd/hostapd.wlan0.conf; if you want a different WPA password, edit the wpa_passphrase line there as well. The channel the radio will use is also set in that file.

If you logged into the unit using the serial port and the CF is still in read/write mode, run remountro to put it back in read-only mode, and reboot.

From a laptop, you should see your new access point show up as alix, secured with the WPA password voyage.

Just How Fast Is Your 4G Network?


By Art Reisman, CTO, www.netequalizer.com


The subject of Internet speed and how to make it go faster is always a hot topic. So that begs the question, if everybody wants their Internet to go faster, what are some of the limitations? I mean, why can’t we just achieve infinite speeds when we want them and where we want them?

Below, I’ll take on some of the fundamental gating factors of Internet speeds, primarily exploring the difference between wired and wireless connections. As we have “progressed” from a reliance on wired connections to a near-universal expectation of wireless Internet options, we’ve also put some limitations on what speeds can be reliably achieved. I’ll discuss why the wired Internet to your home will likely always be faster than the latest fourth generation (4G) wireless being touted today.

To get a basic understanding of the limitations with wireless Internet, we must first talk about frequencies. (Don’t freak out if you’re not tech savvy. We usually do a pretty good job at explaining these things using analogies that anybody can understand.) The reason why frequencies are important to this discussion is that they’re the limiting factor to speed in a wireless network.

The FCC allows cell phone companies and other wireless Internet providers to use a specific range of frequencies (channels) to transmit data. For the sake of argument, let’s just say there are 256 frequencies available to the local wireless provider in your area. So in the simplest case of the old analog world, that means a local cell tower could support 256 phone conversations at one time.

However, with the development of better digital technology in the 1980s, wireless providers have been able to juggle more than one call on each frequency. This is done by using a time sharing system where bits are transmitted over the frequency in a round-robin type fashion such that several users are sharing the channel at one time.

The wireless providers have overcome the problem of having multiple users sharing a channel by dividing it up in time slices. Essentially this means when you are talking on your cell phone or bringing up a Web page on your browser, your device pauses to let other users on the channel. Only in the best case would you have the full speed of the channel to yourself (perhaps at 3 a.m. on a deserted stretch of interstate). For example, I just looked over some of the mumbo jumbo and promises of one-gigabit speeds for 4G devices, but only in a perfect world would you be able to achieve that speed.
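The time-slicing described above can be sketched as a toy round-robin scheduler in Python (my own illustration, not how a real cellular MAC layer is implemented):

```python
from collections import deque

def round_robin(users, slices):
    """Hand the channel to each active user in turn, one time slice
    at a time; per-user throughput is roughly the channel rate
    divided by the number of active users."""
    queue = deque(users)
    schedule = []
    for _ in range(slices):
        user = queue.popleft()   # this user transmits for one slice
        schedule.append(user)
        queue.append(user)       # then goes to the back of the line
    return schedule

print(round_robin(["A", "B", "C"], 6))  # ['A', 'B', 'C', 'A', 'B', 'C']
```

With three active users, each one gets only a third of the slices, which is exactly why your share of a channel shrinks as the tower gets busier.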

In the real world of wireless, we need to know two things to determine the actual data rates to the end user.

  1. The maximum amount of data that can be transmitted on a channel
  2. The number of users sharing the channel

The answer to part one is straightforward: A typical wireless provider has channel licenses for frequencies in the 800 megahertz range.

A rule of thumb for transmitting digital data over the airwaves is that you can only send bits of data at 1/2 the frequency. For example, 800 megahertz is 800 million cycles per second and 1/2 of that is 400 million cycles per second. This translates to a theoretical maximum data rate of 400 megabits. Realistically, with noise and other environmental factors, 1/10 of the original frequency is more likely. This gives us a maximum carrying capacity per channel of 80 megabits and a ballpark estimate for our answer to part one above.

However, the actual answer to variable two, the number of users sharing a channel, is a closely guarded secret among service providers. Conservatively, let’s just say you’re sharing a channel with 20 other users on a typical cell tower in a metro area. With 80 megabits to start from, this would put your individual maximum data rate at about four megabits during a period of heavy usage.
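The back-of-the-envelope math above is worth writing out; every figure here is an assumption from the article, not a measured value:

```python
carrier_hz = 800e6                   # 800 MHz channel license
theoretical_bps = carrier_hz / 2     # rule of thumb: bits at half the frequency
realistic_bps = carrier_hz / 10      # noise and environment: ~1/10 of the frequency
users_per_channel = 20               # conservative guess at tower sharing

per_user_bps = realistic_bps / users_per_channel
print(theoretical_bps / 1e6)   # 400.0 megabits
print(realistic_bps / 1e6)     # 80.0 megabits
print(per_user_bps / 1e6)      # 4.0 megabits per user under heavy usage
```

Change the guess of 20 users per channel and the per-user figure moves accordingly; the point is that sharing, not the advertised channel speed, sets your real data rate.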

So getting back to the focus of the article, we’ve roughly worked out a realistic cap on your super-cool new 4G wireless device at four megabits. By today’s standards, this is a pretty fast connection. But remember, this is a conservative, benefit-of-the-doubt best case. Wireless providers are now talking about usage quotas and charging severely for overages, which suggests they must be teetering on gridlock with their data networks now. There is limited frequency real estate and high demand for content data services. This is likely to only grow as more and more users adopt mobile wireless technologies.

So where should you look for the fastest and most reliable connection? Well, there’s a good chance it’s right at home. A standard fiber connection, like the one you likely have with your home network, can go much higher than four megabits. However, as with the channel sharing found with wireless, you must also share the main line coming into your central office with other users. But assuming your cable operator runs a point-to-point fiber line from their office to your home, gigabit speeds would certainly be possible, and thus wired connections to your home will always be faster than the frequency limited devices of wireless.

Related Article: Commentary on Verizon quotas

Interesting side note: in this article by Deloitte, they do not mention limitations of frequency spectrum as a limiting factor to growth.

10 Things You Should Know about IPv6


I just read the WordPress article about World IPv6 Day, and many of the comments in response expressed that they only had a very basic understanding of what an IPv6 Internet address actually is. To better explain this issue, we have provided a 10-point FAQ that should help clarify in simple terms and analogies the ramifications of transitioning to IPv6.

To start, here’s an overview of some of the basics:

Why are we going to IPv6?

Every device connected to the Internet requires an IP address. The current system, put in place back in 1977, is called IPv4 and was designed for 4 billion addresses. At the time, the Internet was an experiment and there was no central planning for anything like the commercial Internet we are experiencing today. The official reason we need IPv6 is that we have run out of IPv4 addresses (more on this later).

Where does my IP address come from?

A consumer with an account through their provider gets their IP address from their ISP (such as Comcast). When your provider installed your Internet service, they most likely put a little box in your house called a router. When powered up, this router sends a signal to your provider asking for an IP address. Your provider has large blocks of IP addresses, most likely allocated to them by IANA.

If there are 4 billion IPv4 addresses, isn’t that enough for the world right now?

It should be, considering the world population is about 6 billion. We can assume for now that private access to the Internet is a luxury of the economic middle class and above. Generally you need one Internet address per household and only one per business, so it would seem that perhaps 2 billion addresses would be plenty to meet the current need.

So, if this is the case, why can’t we live with 4 billion IP addresses for now?

First of all, industrialized societies are putting (or planning to put) Internet addresses in all kinds of devices (mobile phones, refrigerators, etc.). So allocating one IP address per household or business is no longer valid. The demand has surpassed this considerably as many individuals require multiple IP addresses.

Second, the IP addresses were originally distributed by IANA like cheap wine. Blocks of IP addresses were handed out in chunks to organizations in much larger quantities than needed. In fairness, at the time it was believed that every computer in a company would need its own IP address. However, since the advent of NAT/PAT back in the 1980s, most companies and many ISPs can easily stretch a single IP to 255 users (sharing it). That brings the actual number of users that IPv4 could potentially support to well over a trillion!
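The trillion-user figure is just arithmetic on the two numbers above:

```python
ipv4_addresses = 4_000_000_000   # the round IPv4 figure used in this article
users_per_address = 255          # users one NAT'd address can stretch to

potential_users = ipv4_addresses * users_per_address
print(f"{potential_users:,}")    # 1,020,000,000,000 -- over a trillion
```

Even granting that no address pool is used with perfect efficiency, the headroom is enormous compared with the roughly 2 billion addresses the world arguably needs.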

Yet, while this is true, the multiple addresses originally distributed to individual organizations haven’t been reallocated for use elsewhere. Most of the attempted media scare surrounding IPv6 is based on the fact that IANA has given out all the centrally controlled IP addresses, and the IP addresses already given out are not easily reclaimed. So, despite there being plenty of supply overall, it’s not distributed as efficiently as it could be.

Can’t we just reclaim and reuse the surplus of IPv4 addresses?

Since we just very recently ran out, there is no big motivation in place for the owners to give/sell the unused IPs back. There is currently no mechanism or established commodity market for them (yet).

Also, once allocated by IANA, IP addresses are not necessarily accounted for by anyone. Yes, there is an official owner, but they are not under any obligation to make efficient use of their allocation. Think of it like a retired farmer with a large set of historical water rights. Suppose the farmer retires and retains his water rights because there is nobody to which he can sell them back. The difference here is that water rights are very valuable. Perhaps you see where I am going with this for IPv4? Demand and need are not necessarily the same thing.

How does an IPv4-enabled user talk to an IPv6 user?

In short, they don’t. At least not directly. For now it’s done with smoke and mirrors. The dirty secret of this transition strategy is that the customer must actually have both IPv6 and IPv4 addresses at the same time. They cannot completely switch to an IPv6 address without retaining their old IPv4 address. So in reality it is a duplicate, isolated Internet where you are in one or the other.

Communication is possible, though, using a dual stack. The dual-stack method is what allows an IPv6 customer to talk to both IPv4 and IPv6 users. With a dual stack, the Internet provider will match up IPv6 users to talk over IPv6 when both ends are IPv6 enabled. However, IPv4 users CANNOT talk to IPv6 users, so the customer must maintain an IPv4 address; otherwise they would cut themselves off from 99.99+ percent of Internet users. The dual-stack method is simply maintaining two separate Internet interfaces. Without keeping the IPv4 address at the same time, a customer would isolate themselves from huge swaths of the world until everybody had IPv6. To date, limited tests have measured less than 0.0026 percent of Internet traffic as IPv6; the rest is IPv4.
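The dual-stack selection logic can be sketched as a toy model. This is not provider code; the candidate lists below are invented, and a real client would obtain them from `socket.getaddrinfo()`:

```python
import socket

# Sketch of dual-stack behavior: a host with both an IPv6 and an IPv4
# interface prefers IPv6 when the peer supports it, and falls back to
# IPv4 otherwise. The candidate lists are made up for illustration.

def pick_address(candidates):
    """Prefer an IPv6 candidate, else fall back to IPv4."""
    for family, addr in candidates:
        if family == socket.AF_INET6:
            return family, addr
    for family, addr in candidates:
        if family == socket.AF_INET:
            return family, addr
    raise OSError("no usable address")

# Peer reachable over both stacks: the IPv6 address is chosen.
dual = [(socket.AF_INET, "203.0.113.10"), (socket.AF_INET6, "2001:db8::10")]
print(pick_address(dual))

# IPv4-only peer: without the IPv4 leg we would be cut off entirely.
v4_only = [(socket.AF_INET, "203.0.113.10")]
print(pick_address(v4_only))
```

Note that dropping the IPv4 fallback would make the second call fail, which is exactly why customers must keep their IPv4 address for now.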

Why is it so hard to transition to IPv6? Why can’t we just switch tomorrow?

To recap previous points:

1) IPv4 users, all 4 billion of them, currently cannot talk to new IPv6 users.

2) IPv6 users cannot talk to IPv4 users unless they keep their old IPv4 address and a dual stack.

3) IPv4 still works quite well, and there are IPv4 addresses available. However, although the reclamation of IPv4 addresses currently lacks organization, it may become more economically feasible as problems with the transition to IPv6 crop up. Only time will tell.

What would happen if we did not switch? Could we live with IPv4?

Yes, the Internet would continue to operate. However, as the pressure for new and easy to distribute IP addresses for mobile devices heats up, I think we would see IP addresses being sold like real estate.

Note:  A bigger economic gating factor to the adoption of the expanding Internet is the limitation of wireless frequency space. You can’t create any more frequencies for wireless in areas that are already saturated. IP addresses are just now coming under some pressure, and as with any fixed commodity, we will see their value rise as the holders of large blocks of IP addresses sell them off and redistribute the existing 4 billion. I suspect the set we have can last another 100 years under this type of system.

Is it possible that a segment of the Internet will split off and exclusively use IPv6?

Yes, this is a possible scenario, and there is precedent for it. Vendors, given a chance, can eliminate competition simply by having a critical mass of users willing to adopt their services. Here is the scenario: (Keep in mind that some of the following contains opinions and conjecture on IPv6, the future, and the motivation of players involved in pushing IPv6.)

With a complete worldwide conversion to IPv6 not likely in the near future, a small number of larger ISPs and content providers turn on IPv6 and start serving IPv6-enabled customers with unique and original content not accessible to customers limited to IPv4. For example, Facebook starts a new service only available on its IPv6 network supported by AT&T. This would be similar to what was initially done with the iPad and iPhone.

It used to be that all applications on the Internet ran from a standard Web browser and were device independent. However, there is a growing subset of applications that only run on Apple devices. Just a few years ago it was a foregone conclusion that vendors would make Web applications capable of running on any browser and any hardware device. I am not so sure this is the case anymore.

When will we lose our dependency on IPv4?

Good question. For now, most of the push for IPv6 seems to be coming from vendors using the standard fear tactic. However, as is always the case, with the development of new products and technologies, all of this could change very quickly.

VLAN tags made simple


By Art Reisman, CTO, www.netequalizer.com


Why am I writing a post on VLAN tags?

VLAN tags and bandwidth control are often intimately related, but before I can post on that relationship I thought it prudent to comment on VLAN tags themselves. I also think they are way overused, and I hope to comment on that in a future post.

I generally don’t like VLAN tags. The original idea behind them was to solve the problem of Ethernet broadcasts saturating a network segment. Wikipedia explains it like this…

After successful experiments with voice over Ethernet from 1981 to 1984, Dr. W. David Sincoskie joined Bellcore and turned to the problem of scaling up Ethernet networks. At 10 Mbit/s, Ethernet was faster than most alternatives of the time; however, Ethernet was a broadcast network and there was not a good way of connecting multiple Ethernets together. This limited the total bandwidth of an Ethernet network to 10 Mbit/s and the maximum distance between any two nodes to a few hundred feet.

What does that mean and why do you care?

First, let’s address how an Ethernet broadcast works, and then we can discuss Dr. Sincoskie’s solution and make some sense of it.

When a bunch of computers share a single Ethernet segment of a network connected by switches, everybody can hear each other talking.

Think of two people in a room yelling back and forth to communicate. That might work if one person pauses after each yell to give the other a chance to yell back. With three people in the room, they can still yell, pause, and listen for the others, and that might still work. But if you had 1,000 people in the room trying to talk to people on the other side, the pause-and-listen technique does not work very well. And that is exactly the problem with Ethernet: as it grows, everybody is trying to talk on the same wire at once. VLAN tags work by essentially creating a bunch of smaller virtual rooms, where only the noise and yelling from the people in that virtual room can be heard at one time.

Now, when you set up a VLAN (virtual room), you have to put up the dividers. On a network this is done by having the switches (the things the computers plug into) be aware of which virtual room each computer is in. The VLAN tag carries the identifier of the virtual room, so once everything is set up you have a bunch of virtual rooms and everybody can talk.
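For the curious, the "room identifier" is carried in a 4-byte 802.1Q tag inserted into the Ethernet header. Here is a minimal sketch of pulling the VLAN ID out of a raw frame (the sample frame is fabricated for illustration):

```python
import struct

# Minimal sketch of reading an 802.1Q VLAN tag out of an Ethernet
# frame. After the 6-byte destination and source MACs, a tagged frame
# carries the TPID 0x8100 followed by a 16-bit field whose low 12 bits
# are the VLAN ID (the "virtual room" number).

TPID_8021Q = 0x8100

def vlan_id(frame: bytes):
    """Return the VLAN ID of a tagged frame, or None if untagged."""
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    if tpid != TPID_8021Q:
        return None
    return tci & 0x0FFF          # low 12 bits of the TCI

# A made-up frame: 12 bytes of MAC addresses, then a tag for VLAN 42.
frame = bytes(12) + struct.pack("!HH", TPID_8021Q, 42)
print(vlan_id(frame))            # 42
```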

This sort of raises the question:

Does everybody attached to the Internet live in a virtual room?

No. Virtual rooms (VLANs) were created so that a single organization, like a company, can put a box around its network segments and protect them with a common set of access rules (a firewall/router). The Internet works fine without VLAN tags.

So a VLAN tag is only appropriate when a group of users sits behind a common router?

Yes, that is correct. Ethernet broadcasts (the yelling in our analogy) do not cross router boundaries on the Internet.

Routers use public IP addresses to figure out where to send things. A router does not broadcast (yell); it is much more selective. It only sends data on to another router if it knows that the data is supposed to go there.

So why do we have two mechanisms, one for local computers sending Ethernet broadcasts and another for routers using point-to-point routing?

This post was supposed to be about VLAN tags… but I’ll take it one step further to explain the difference.

Perhaps you have heard about the layers of networking: Layer 2 is Ethernet and Layer 3 is IP.

Answers.com gave me the explanation below, which is technically correct but does not really make much sense unless you already have a good understanding of networking in the first place. So I’ll finish by breaking it down into something a little more relevant with some in-line comments.

Basically a layer 2 switch operates utilizing MAC addresses in its caching table to quickly pass information from port to port. A layer 3 switch utilizes IP addresses to do the same.

What this means is that an Ethernet switch looks at MAC addresses, which are used for local addressing to a computer on your network. Think back to the people shouting in the room to communicate: the MAC address would be a nickname that only their closest friends use when they shout at each other. At the head end of your network is a router; this is where you connect to the Internet, and other Internet users send data to you via your IP address, which is essentially the well-known public address at your router. The IP address can be thought of as the address of the building where everybody inside is shouting at each other. The router’s job is to get information, sent by IP address and destined for somebody inside the room, to the door. If you are a Comcast home user, you likely have a modem where your cable plugs in; the modem is the gateway to your house and is addressed by IP address from the outside world.


Essentially, a layer 2 switch is a multiport transparent bridge. A layer 2 switch will learn the MAC addresses connected to each port and pass frames marked for those ports.

The above paragraph is referring to how an Ethernet switch sends data around: everybody in the room registers their nickname with the switch, so it can shout in the direction of the right person when new data comes in.
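That nickname-registration behavior can be sketched as a toy MAC-learning switch. This is illustrative only, with made-up MAC strings, not real switch firmware:

```python
# Toy sketch of the MAC-learning behavior described above: the switch
# records which port each source MAC ("nickname") arrived on, then
# forwards later frames for that MAC out of that one port, instead of
# shouting out of every port.

class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.table = {}                      # MAC -> port

    def handle(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port        # learn the sender's port
        if dst_mac in self.table:
            return [self.table[dst_mac]]     # known: forward to one port
        # Unknown destination: flood everywhere except the ingress port.
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(4)
print(sw.handle(0, "aa:aa", "bb:bb"))  # bb:bb unknown -> flood [1, 2, 3]
print(sw.handle(1, "bb:bb", "aa:aa"))  # aa:aa learned -> [0]
print(sw.handle(0, "aa:aa", "bb:bb"))  # bb:bb learned -> [1]
```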

It also knows that if a frame arrives on a port destined for a MAC address on that same port, it should drop that frame. Whereas a single-CPU bridge runs in serial, today’s hardware-based switches run in parallel, translating to extremely fast switching.


I left this paragraph in, but it is unrelated to the question I asked Answers.com, so feel free to ignore it. It is a commentary about how modern switches can be reading and sending from multiple interfaces at the same time.

Layer 3 switching is, as one can imagine, a hybrid of a router and a switch. There are different types of layer 3 switching: route caching and topology-based. In route caching, the switch requires both a Route Processor (RP) and a Switch Engine (SE). The RP must listen to the first packet to determine the destination. At that point the Switch Engine makes a shortcut entry in the caching table for the rest of the packets to follow.

More random material unrelated to the question, “What is the difference between layer 3 and layer 2?”

Due to advancements in processing power and drastic reductions in the cost of memory, today’s higher-end layer 3 switches implement topology-based switching, which builds a lookup table and populates it with the entire network’s topology. The database is held in hardware and is referenced there to maintain high throughput. It utilizes the longest address match as the layer 3 destination.

This is talking about how a router translates between the local nicknames of the people yelling in the room and the public address of data leaving the building.

Now, when and why would one use an L2 switch vs. an L3 switch vs. a router? Simply put, a router will generally sit at the gateway between a private and a public network. A router can perform NAT, whereas an L3 switch cannot (imagine a switch that had topology entries for the ENTIRE Internet!).
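The NAT function mentioned at the end of that quote can be sketched as a small translation table. Addresses and ports here are invented for illustration:

```python
# Rough sketch of the NAT idea: the router rewrites a private (inside)
# address/port to its one public address and a fresh port, remembering
# the mapping so replies can be translated back. Addresses and ports
# are invented for illustration.

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.out = {}                    # (priv_ip, priv_port) -> pub_port
        self.back = {}                   # pub_port -> (priv_ip, priv_port)

    def outbound(self, priv_ip, priv_port):
        key = (priv_ip, priv_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out[key]

    def inbound(self, pub_port):
        return self.back[pub_port]

nat = Nat("203.0.113.5")
print(nat.outbound("192.168.1.10", 5555))   # ('203.0.113.5', 40000)
print(nat.outbound("192.168.1.11", 5555))   # ('203.0.113.5', 40001)
print(nat.inbound(40000))                   # ('192.168.1.10', 5555)
```

This is also the mechanism from the IPv4 discussion earlier: many private hosts hiding behind one public address.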

What Is Deep Packet Inspection and Why the Controversy?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.

Article Updated March 2012

As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.

Exactly what is deep packet inspection?

All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.

The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.

When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).

Products sold that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used for techniques that examine Internet data include packet shaping and layer-7 traffic shaping.
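As a rough illustration of the difference between looking at the address and looking at the payload, here is a toy inspector. The header layout is simplified and the signature list is invented; real DPI engines are far more sophisticated:

```python
# Toy illustration of the deep-packet-inspection idea: looking past
# the address "on the outside" into the payload "on the inside".
# The 20-byte header mimics a minimal IPv4 header; the signature
# list is invented for illustration.

SIGNATURES = [b"BitTorrent protocol"]   # a pattern a DPI box might match

def inspect(packet: bytes):
    header, payload = packet[:20], packet[20:]     # outside vs. inside
    dst = ".".join(str(b) for b in header[16:20])  # shallow: read the address
    flagged = any(sig in payload for sig in SIGNATURES)  # deep: read the content
    return dst, flagged

# A fabricated packet: 16 filler header bytes, destination 10.0.0.7, payload.
pkt = bytes(16) + bytes([10, 0, 0, 7]) + b"\x13BitTorrent protocol handshake"
print(inspect(pkt))    # ('10.0.0.7', True)
```

An ordinary router stops at the first step; a DPI device performs the second.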

How is deep packet inspection related to net neutrality?

Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.

Why do some Internet providers use deep packet inspection devices?

There are several reasons:

1) Targeted advertising — If a provider knows what you are reading, they can display targeted advertising on the pages they control, such as your login screen or e-mail account.

2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem less desirable, such as BitTorrent and other forms of peer-to-peer. BitTorrent traffic can overwhelm a network with volume. By detecting and redirecting BitTorrent traffic, or slowing it down, a provider can alleviate congestion.

3) Block offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.

4) Government spying — In the case of Iran (and to some extent China), DPI is used to keep tabs on the local population.

When is it appropriate to use deep packet inspection?

1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.

2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.

3) Intrusion detection and prevention — It is one thing to be acting as an ISP and eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. For example, in a private home it is within your rights to look through your peephole and not let shady characters into your home. In a private business it is a good idea to use deep packet inspection to block unwanted intruders from your network. Blocking bad guys before they break into and damage your network is perfectly acceptable.

4) Spam filtering — Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most customers may not realize that Google is reading their mail (humans don’t read it, but computer scanners do), its motives are understood. What consumers may not realize is that their email provider may also be reading everything they do in order to set up targeted advertising.

Does content filtering use deep packet inspection?

For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as network equipment needs to see them anyway. We have only encountered content filters at private institutions that are within their rights.

What about spam filtering? Does that use deep packet inspection?

Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering most people have opted in at one point or another, so it is generally done with permission.

What is all the fuss about?

It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.

For example, this is an excerpt from a recent PC world article:

Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.

Paul Stephens, director of policy and advocacy for the Privacy Rights Clearinghouse, as quoted in the E-Commerce Times on November 14, 2008. Read the full article here.

Recently, Comcast had their hand slapped for re-directing Bittorrent traffic:

Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.

— Digital Daily, March 10, 2008. Read the full article here.

Later in 2008, the FCC came down hard on Comcast.

In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.

By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.

Wired.com, August 1, 2008. Read the full article here.

To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.

University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.

Wired.com, May 22, 2008. Read the full article here.

However, it looks like things are going the other way in the U.K. as Britain’s Virgin Media has announced they are dumping net neutrality in favor of targeting bittorrent.

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The Register, December 16, 2008. Read the full article here.

Canadian ISPs confess en masse to deep packet inspection in January 2009.

With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.

Tech Spot, January 21, 2009. Read the full article here.

In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.

In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.

Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.

The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.

7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.

Kyle Brady, July 27, 2009. Read the full article here.

[February 2011 Update] The Egyptian government uses DPI to filter elements of its Internet traffic, and this act in itself has become a news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering report on Narus, the company that makes the DPI technology and sold it to the Egyptians.

While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

How to Determine a Comprehensive ROI for Bandwidth Shaping Products


In the past, we’ve published several articles on our blog to help customers better understand the NetEqualizer’s potential return on investment (ROI). Obviously, we do this because we think we offer a compelling ROI proposition for most bandwidth-shaping decisions. Why? Primarily because we provide the benefits of bandwidth shaping at a very low cost — both initially and even more so over time. (Click here for the NetEqualizer ROI calculator.)

But, we also want to provide potential customers with the questions that need to be considered before a product is purchased, regardless of whether or not the answers lead to the NetEqualizer. With that said, this article will break down these questions, addressing many issues that may not be obvious at first glance, but are nonetheless integral when determining what bandwidth shaping product is best for you.

First, let’s discuss basic ROI. As a simple example, if an investment cost $100, and if in one year that investment returned $120, the ROI is 20 percent.  Simple enough. But what if your investment horizon is five years or longer? It gets a little more complicated, but suffice it to say you would perform a similar calculation for each year while adjusting these returns for time and cost.
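That calculation can be sketched in a few lines, extending the one-year example to a multi-year horizon by discounting each year's return back to today's dollars. The discount rate and cash flows below are invented for illustration:

```python
# Sketch of the ROI arithmetic above over a multi-year horizon:
# discount each year's return, sum them, and express the result
# against the initial cost. Numbers are invented for illustration.

def roi(initial_cost, yearly_returns, discount_rate=0.05):
    npv = sum(r / (1 + discount_rate) ** (year + 1)
              for year, r in enumerate(yearly_returns))
    return (npv - initial_cost) / initial_cost

# The one-year example from the text: $100 in, $120 back -> 20%.
print(round(roi(100, [120], discount_rate=0.0), 2))   # 0.2

# Five years of $30 returns on a $100 tool, at a 5% discount rate.
print(round(roi(100, [30] * 5), 2))                   # ~0.3
```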

The important point is that this technique is a well-known calculation for evaluating whether one thing is a better investment than another — be it bandwidth shaping products or real estate. Naturally, the best financial decision is the one that delivers the greatest return for the smallest cost.

The hard part is determining what questions to ask in order to accurately determine the ROI. A missed cost or benefit here or there could dramatically alter the outcome, potentially leading to significant unforeseen losses.

For the remainder of this article, I’ll discuss many of the potential costs and returns associated with bandwidth shaping products, with some being more obscure than others. In the end, it should better prepare you to address the most important questions and issues and ultimately lead to a more accurate ROI assessment.

Let’s start by looking at the largest components of bandwidth shaping product “costs” and whether they are one-time or ongoing. We’ll then consider the returns.

COSTS

  • The initial cost of the tool
    • This is a one-time cost.
  • The cost of vendor support and license updates
    • These are ongoing costs and include monthly and annual licenses for support, training, software updates, library updates, etc…  The difference from vendor to vendor can be significant — especially over the long run.
  • The cost of upgrades within the time horizon of the investment
    • These upgrades can come in several different forms. For example, what does it cost to go from a 50 Mbps tool to a 100 Mbps one? Can your tool be upgraded, or do you have to buy a whole new tool? This can be a one-time cost or it can occur several times. It really depends on the growth of your network, but it’s usually inevitable for networks of any size.
  • The internal (human) cost to support the tool
    • For example, how many man hours do you have to spend to maintain the tool, to optimize it and to adapt it to your changing network? This could be a considerable “hidden” cost and it’s generally recurring. It also usually increases in time as the cost of salaries/benefits tend to go up. Because of that, this is a very important component that should be quantified for a good ROI analysis. Tools that require little or no ongoing maintenance will have a large advantage.
  • Overall impact on the network
    • Does the product add latency or other inefficiencies? Does it create any processing overhead and how much? If the answer is yes, costs such as these will constantly impact your network quality and add up over time.

RETURNS

  • Savings from being able to delay or eliminate buying more bandwidth
    • This could either be a one-time or ongoing return. Even delaying a bandwidth upgrade for six months or a year can be highly valuable.
  • Savings from not losing existing revenue sources
    • How many customers did you not lose because they did not get frustrated with their network/Internet service? This return is ongoing.
  • Ability to generate new revenue
    • How many new customers did you add because of a better-maintained network?  Were you able to generate revenue by adding new higher-value services like a tiered rate structure? This will usually be an ongoing return.
  • Savings from the ability to eliminate or reduce the financial impact of unprofitable customers
    • This is an ongoing savings. Can you convert an unprofitable customer to a profitable one by reducing their negative impact on the network? If not, and they walk, do you care?
  • Avoidance of having to buy additional equipment
    • Were you able to avoid having to “divide and conquer” by buying new access points, splitting VLANs, etc..? This can be a one-time or ongoing return.
  • Savings in the cost of responding to technical support calls
    • How much time was saved by not having to receive an irate customer call, research it and respond back? If this is something you typically deal with on a regular basis, the savings will add up every day, week or month this is avoided.

Overall, these issues are the basic financial components and questions that need to be quantified to make a good ROI analysis. For each business, and each tool, this type of analysis may yield a different answer, but it is important to note that over time there are many more items associated with ongoing costs/savings than those occurring only once. Thus, you must take great care to understand the impact of these for each tool, especially those issues that lead to costs that increase over time.

The 10-Gigabit Barrier for Bandwidth Controllers and Intel-Based Routers


By Art Reisman

Editor’s note: This article was adapted from our answer to a NetEqualizer pre-sale question asked by an ISP that was concerned with its upgrade path. We realized the answer was useful in a broader sense and decided to post it here.

Any router, bandwidth controller, or firewall that is based on Intel architecture and buses will never be able to go faster than about 7 gigabits sustained. (This includes our NE4000 bandwidth controller. While the NE4000 can actually reach speeds close to 10 gigabits, we rate our equipment for 5 gigabits because we don’t like quoting best-case numbers to our customers.) The limiting factor in Intel architecture is that to expand beyond 10-gigabit speeds you cannot be running with a central clock, and with a central clock controlling the show, it is practically impossible to move data around much faster than 10 gigabits.

The alternative is to use a specialized asynchronous design, which is what faster switches and hardware do. They have no clock or centralized multiprocessor/bus. However, the price point for such hardware quickly jumps to 5-10 times the Intel architecture because it must be custom designed. It is also quite limited in function once released.

Obviously, vendors can stack a bunch of 10-gig fiber bandwidth controllers behind a switch and call it something faster, but this is no different from dividing up your network paths and using multiple bandwidth controllers yourself.  So, be careful when assessing the claims of other manufacturers in this space.

Considering these limitations, many cable operators here in the US have embraced the 10-gigabit barrier. At some point you must divide and conquer using multiple 10-gig fiber links and multiple NE4000 type boxes, which we believe is really the only viable plan — that is if you want any sort of sophistication in your bandwidth controller.

While there are some that will keep requesting giant centralized boxes, and paying a premium for them (it’s in their blood to think single box, central location), when you think about the Internet, it only works because it is made of many independent paths. There is no centralized location by design. However, as you approach 10-gigabit speeds in your organization, it might be time to stop thinking “single box.”

I went through this same learning curve as a system architect at AT&T Bell Labs back in the 1990s.  The sales team was constantly worried about how many telephone ports we could support in one box because that is what operators were asking for.  It shot the price per port through the roof with some of our designs. So, in our present case, we (NetEqualizer) decided not to get into that game because we believe that price per megabit of shaping will likely win out in the end.

Art Reisman is currently CTO and co-founder of APconnections, creator of the NetEqualizer. He  has worked at several start-up companies over the years and has invented and brought several technology products to market, both on his own and with the backing of larger corporations. This includes tools for the automotive industry.

The Facts and Myths of Network Latency


There are many good references that explain how some applications such as VoIP are sensitive to network latency, but there is also some confusion as to what latency actually is as well as perhaps some misinformation about the causes. In the article below, we’ll separate the facts from the myths and also provide some practical analogies to help paint a clear picture of latency and what may be behind it.

Fact or Myth?

Network latency is caused by too many switches and routers in your network.

This is mostly a myth.

Yes, an underpowered router can introduce latency, but most local network switches add minimal latency, a few milliseconds at most. Anything under about 10 milliseconds is, for practical purposes, not humanly detectable. A router or switch (even a low-end one) may add about 1 millisecond of latency, so it would take ten or more hops just to approach the 10-millisecond threshold, and even then the delay would barely be noticeable.
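The arithmetic can be sketched in a few lines. This is a back-of-envelope estimate, not a measurement; the per-hop figure is the rough 1-millisecond estimate from the paragraph above:

```python
# Back-of-envelope check: cumulative latency added by store-and-forward
# devices on a local path. PER_HOP_MS is the article's rough estimate.
PER_HOP_MS = 1.0          # typical worst case for a low-end switch/router
DETECTABLE_MS = 10.0      # threshold below which humans won't notice

def path_latency_ms(hops: int, per_hop_ms: float = PER_HOP_MS) -> float:
    """Cumulative latency added by `hops` switches/routers."""
    return hops * per_hop_ms

for hops in (2, 4, 8, 12):
    total = path_latency_ms(hops)
    verdict = "noticeable" if total > DETECTABLE_MS else "not noticeable"
    print(f"{hops:2d} hops -> {total:4.1f} ms ({verdict})")
```

Even a dozen hops, far more than a typical local network, only just crosses the threshold of perception.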

The faster your link (Internet) speed, the less latency you have.

This is a myth.

The speed of your network is measured by how fast IP packets arrive. Latency is the measure of how long they took to get there. So, it’s basically speed vs. time. An example of latency is when NASA sends commands to a Mars orbiter. The information travels at the speed of light, but it takes several minutes or longer for commands sent from earth to get to the orbiter. This is an example of data moving at high speed with extreme latency.
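The Mars example is easy to quantify. A short sketch, using approximate orbital distances (Earth–Mars separation varies widely over time):

```python
# Speed vs. latency: data moving at the speed of light can still take
# minutes to arrive. Distances below are approximate.
C_KM_S = 299_792.458          # speed of light in km/s

def one_way_delay_s(distance_km: float) -> float:
    """Propagation delay in seconds over a given distance."""
    return distance_km / C_KM_S

# Earth-Mars distance ranges roughly from 55 million to 400 million km.
for label, km in [("Mars (closest)", 55e6), ("Mars (farthest)", 400e6)]:
    print(f"{label}: {one_way_delay_s(km) / 60:.1f} minutes one way")
```

The link is as fast as physics allows, yet one-way latency runs from about 3 minutes to over 22 minutes: high speed, extreme latency.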

VoIP is very sensitive to network latency.

This is a fact.

Can you imagine talking in real time to somebody on the moon? Your voice would take over a second to get there (about 1.3 seconds at the speed of light). For VoIP networks, it is generally accepted that anything over about 150 milliseconds of latency becomes a problem, especially for fast talkers and rapid back-and-forth conversation.

Xbox games are sensitive to latency.

This is another fact.

For example, in many collaborative combat games, participants battle players from other locations. Low latency on your network is everything when it comes to beating your opponent to the draw. If you and your opponent shoot your weapons at exactly the same time, but your shot takes 200 milliseconds to register at the host server while your opponent's gets there in 100 milliseconds, you die.

Does a bandwidth shaping device such as NetEqualizer increase latency on a network?

This is true, but only for the “bad” traffic that’s slowing the rest of your network down anyway.

Ever hear of the firefighting technique where you light a back fire to slow the fire down? This is similar to the NetEqualizer approach. NetEqualizer deliberately adds latency to certain bandwidth intensive applications, such as large downloads and p2p traffic, so that chat, email, VoIP, and gaming get the bandwidth they need. The “back fire” (latency) is used to choke off the unwanted, or non-time sensitive, applications. (For more information on how the NetEqualizer works, click here.)
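The back-fire idea can be sketched as a toy in a few lines of Python. To be clear, this is an illustration of the general technique, not NetEqualizer's actual implementation, and the thresholds are made up:

```python
import time
from collections import defaultdict

# Toy sketch of behavior-based shaping: streams that have recently moved a
# lot of bytes get an artificial delay (the "back fire"), so small,
# latency-sensitive flows (VoIP, chat, gaming) pass through untouched.
# HOG_BYTES and PENALTY_S are illustrative values only.
HOG_BYTES = 1_000_000     # bytes seen before a stream is penalized
PENALTY_S = 0.05          # delay added to each packet of a hog stream

bytes_seen = defaultdict(int)

def forward(stream_id: str, packet: bytes) -> float:
    """Forward a packet; return the delay (seconds) applied to it."""
    bytes_seen[stream_id] += len(packet)
    if bytes_seen[stream_id] > HOG_BYTES:
        time.sleep(PENALTY_S)         # deliberately added latency
        return PENALTY_S
    return 0.0                        # time-sensitive traffic is untouched
```

A chat packet sails through with zero added delay, while the second large chunk of a big download gets queued behind the penalty.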

Video is sensitive to latency.

This is a myth.

Video is sensitive to the speed of the connection but not to latency. Let's go back to our man-on-the-moon example, where voice takes about 1.3 seconds to travel from the earth to the moon. Latency creates a problem with two-way voice because a delay of over a second in hearing what was said makes it difficult to carry on a conversation. What generally happens with voice and long latency is that both parties start talking at the same time, and a moment later two people are talking over each other. You see this happen a lot on television with interviews done via satellite. Most video, however, is one way. When watching a Netflix movie, you are not sending video back to Netflix. In fact, almost all video transmissions run on a delay, and nobody notices, since it is usually a one-way transmission.

Five Tips to Manage Network Congestion


As the demand for Internet access continues to grow around the world, so does the complexity of planning, setting up, and administering your network. Here are five tips that we have compiled, based on discussions with network administrators in the field.

#1) Be Smart About Buying Bandwidth
The local T1 provider does not always offer the lowest-priced bandwidth. There are many Tier 1 providers out there that may have fiber within line-of-sight of your business. For example, Level 3 has fiber rings already hot in many metro areas and will be happy to sell you bandwidth. Numerous companies can then set up the wireless infrastructure to give you a low-cost, high-speed link to that point of presence.

#2) Manage Expectations
You know the old saying "under promise and over deliver." It holds true for network offerings. When building out your network infrastructure, don't let your network users just run wide open. As you add bandwidth, think about and implement appropriate rate limits/caps for your network users. Do not wait; the problem with waiting is that your original users will become accustomed to higher speeds and will not be happy with sharing as network use grows, unless you enforce some reasonable restrictions up front. We also recommend that you write up an expectations document for your end users ("what to expect from the network") and post it on your website for them to reference.

#3) Understand Your Risk Factors
Many network administrators believe that if they set maximum rate caps/limits for their network users, the network is safe from locking up due to congestion. This is not the case. You also need to monitor your contention ratio closely. If your contention ratio becomes unreasonable, your users will experience congestion, a.k.a. "lockups" and "freezes." Don't make this mistake.

This may sound obvious, but let me spell it out. We often run into networks with 500 users sharing a 20-megabit link. The network administrator puts in place two rate caps, depending on the priority of the user: 1 megabit up and down for user group A and 5 megabits up and down for user group B. The caps ensure that no user exceeds their allotted amount, which is somehow supposed to insulate the network from contention and congestion. This is all well and good, but if you do the math, 500 users on a 20-megabit link will overwhelm the link at some point, and nobody will then be able to get anywhere close to their "promised amount."
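Doing that math explicitly makes the point. The split between groups below is an assumed illustration (the article does not specify counts per group):

```python
# Contention math for the example: 500 users capped at 1 or 5 Mbps sharing
# a 20 Mbps link. The 400/100 split between groups is illustrative.
link_mbps = 20
groups = {"group_A": (400, 1), "group_B": (100, 5)}   # (user count, cap in Mbps)

promised = sum(count * cap for count, cap in groups.values())
contention_ratio = promised / link_mbps

print(f"Sum of promised bandwidth: {promised} Mbps")
print(f"Contention ratio: {contention_ratio:.0f}:1")
# Even if only 1 user in 20 is active at once, demand (900/20 = 45 Mbps)
# still exceeds the 20 Mbps link. Rate caps alone cannot prevent this.
```

A 45:1 contention ratio means the caps are promises the link cannot keep once even a modest fraction of users are active simultaneously.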

If you have a high contention ratio on your network, you will need something more than rate limits to prevent lockups and congestion. At some point, you will need to go with a layer-7 application shaper (such as Blue Coat Packeteer or Allot NetEnforcer), or go with behavior-based shaping (NetEqualizer). Your only other option is to keep adding bandwidth.

#4) Decide Where You Want to Spend Your Time
When you are building out your network, think about what skill sets you have in-house and which you will need to outsource. If you can select network applications and appliances that minimize the time needed for set-up, maintenance, and day-to-day operations, you will reduce your ongoing costs. This is true whether you insource or outsource, as there is an "opportunity cost" to the time spent with each network toolset.

#5) Use What You Have Wisely
Optimize your existing bandwidth.   Bandwidth shaping appliances can help you to optimize your use of the network.   Bandwidth shapers work in different ways to achieve this.  Layer-7 shapers will allocate portions of your network to pre-defined application types, splitting your pipe into virtual pipes based on how you want to allocate your network traffic.  Behavior-based shaping, on the other hand, will not require predefined allocations, but will shape traffic based on the nature of the traffic itself (latency-sensitive, short/bursty traffic is prioritized higher than hoglike traffic).   For known traffic patterns on a WAN, Layer-7 shaping can work very well.  For unknown patterns like Internet traffic, behavior-based shaping is superior, in our opinion.

On Internet links, a NetEqualizer bandwidth shaper will allow you to increase your customer base by 10 to 30 percent without having to purchase additional bandwidth. This allows you to increase the number of people you can put on your infrastructure without an expensive build-out.

In order to determine whether the return on investment (ROI) makes sense in your environment, use our ROI tool to calculate your payback period on adding bandwidth control to your network. You can then compare this one-time cost with your expected recurring monthly costs for additional bandwidth. Also note that in many cases you will need to do both at some point. Bandwidth shaping can delay or defer purchasing additional bandwidth, but as your network user base grows, you will eventually need to consider purchasing more bandwidth.
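The payback comparison itself is simple arithmetic. A minimal sketch, with made-up figures; substitute your own quotes:

```python
# Payback period: one-time shaper purchase vs. the recurring bandwidth
# upgrade it lets you defer. Both dollar figures below are hypothetical.
def payback_months(one_time_cost: float, monthly_savings: float) -> float:
    """Months until deferred bandwidth spend covers the appliance cost."""
    return one_time_cost / monthly_savings

appliance_cost = 5000.0     # hypothetical shaper price
deferred_upgrade = 400.0    # hypothetical monthly cost of extra bandwidth

print(f"Payback in {payback_months(appliance_cost, deferred_upgrade):.1f} months")
```

If the payback period comes in well under the time you expect the appliance to defer an upgrade, the purchase pays for itself.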

In Summary…
Obviously, these five tips are not rocket science, and some of them you may be using already.  We offer them here as a quick guide & reminder to help in your network planning.  While the sea change that we are all seeing in internet usage (more on that later…) makes network administration more challenging every day, adequate planning can help to prepare your network for the future.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here to request a full price list.

Network Capacity Planning: Is Your Network Positioned for Growth?


Authored by:  Sandy McGregor, Director of Sales & Marketing for APConnections, Inc.
Sandy has a Masters in Management Information Systems and over 17 years experience in the Applications Development Life Cycle.  In the past, she has been a Project Manager for large-scale data center projects, as well as a Director heading up architecture, development and operations teams.  In Sandy’s current role at APConnections, she is responsible for tracking industry trends.

As you may have guessed, mobile users are gobbling up network bandwidth in 2010! Based on research conducted in the first half of 2010, Allot Communications has released The Allot MobileTrends Report, H1 2010, showing dramatic growth in mobile data bandwidth usage: up 68% across Q1 and Q2.

I am sure that you are seeing the impact of all this usage on your networks. The good news is that this growth is good for your business as a network provider, if you are positioned to meet the demand. Whether you sell network usage to customers (as an ISP or WISP) or "sell" it internally (colleges and corporations), growth means that the infrastructure you provide becomes more and more critical to your business.

Here are some areas of the report that we found of particular interest, and their implications for your network, from our perspective…

1) Video Streaming grew by 92% to 35% of mobile use

It should be no surprise that video streaming applications take up a 35% share of mobile bandwidth and grew by 92%. At this growth rate, which we believe will continue and even accelerate, your network capacity will need to grow as well. Luckily, bandwidth prices continue to come down in all geographies.

No matter how much you partition your network using a bandwidth shaping strategy, the fact is that video streaming takes up a lot of bandwidth.  Add to that the fact that more and more users are using video, and you have a full pipe before you know it!  While you can look at ways to cache video, we believe that you have no choice but to add bandwidth to your network.

2) Users are downloading like crazy!

When your customers are not watching videos, they are downloading, either via P2P or HTTP, which combined represented 31 percent of mobile bandwidth, with an aggregate growth rate of 80 percent.  Although additional network capacity can help somewhat here, large downloads or multiple P2P users can still quickly clog your network.

You need to first determine if you want to allow P2P traffic on your network.  If you decide to support P2P usage, you may want to think how you will identify which users are doing P2P and if you will charge a premium for this service. Also, be aware that encrypted P2P traffic is on the rise, which makes it difficult to figure out what traffic is truly P2P.

Large file downloads need to be supported.  Your goal here should be to figure out how to enable downloading for your customers without slowing down other users and bringing the rest of your network to a halt.

In our opinion, P2P and downloading is an area where you should look at bandwidth shaping solutions.  These technologies use various methods to prioritize and control traffic, such as application shaping (Allot, BlueCoat, Cymphonix) or behavior-based shaping (NetEqualizer).

These tools, or various routers (such as Mikrotik), should also enable you to set rate limits on your user base, so that no one user can take up too much of your network capacity.  Ideally, rate limits should be flexible, so that you can set a fixed amount by user, group of users (subnet, VLAN), or share a fixed amount across user groups.
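The mechanism underneath a per-user rate cap is typically a token bucket: each user's bucket refills at their capped rate, and a packet is only forwarded if enough tokens remain. A minimal sketch (rates and user names are illustrative, and real shapers queue rather than drop):

```python
import time

# Minimal token-bucket rate limiter, the mechanism behind per-user caps.
class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s      # sustained cap
        self.capacity = burst_bytes       # allowed burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        """True if `nbytes` may be forwarded now under this user's cap."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False                      # over the cap: drop or queue

# One bucket per user; a 1 Mbps cap is 125_000 bytes/s.
user_caps = {"alice": TokenBucket(125_000, 20_000)}
```

The same structure extends to groups: give a subnet or VLAN one shared bucket to cap the group as a whole.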

3) VoIP and IM are really popular too

The second fastest growing traffic types were VoIP and Instant Messaging (IM). Note that if your customers are not yet using VoIP, they will be soon. The cost model for VoIP is just so compelling for many users, and having one set of wires in an office configuration is attractive as well (who likes the tangle of wires dangling from their desk anyway?).

We believe that your network needs to be able to handle VoIP without call break-up or delay.  For a latency-sensitive application like VoIP, bandwidth shaping (aka traffic control, aka bandwidth management) is key.  Regardless of your network capacity, if your VoIP traffic is not given priority, call break up will occur.  We believe that this is another area where bandwidth shaping solutions can help you.

IM, on the other hand, can handle a little latency (depending on how fast your customers type and send messages). To a point, customers will tolerate a delay in IM, but probably 1-2 seconds max. After that, they will blame your network, and if delays persist, they will look to move to another network provider.

In summary, to position your network for growth:

1) Buy More Bandwidth – It is a never-ending cycle, but at least the cost of bandwidth is coming down!

2) Implement Rate Limits – Stop any one user from taking up your whole network.

3) Add Bandwidth Shaping – Maximize what you already have.  Think efficiency here.  To determine the payback period on an investment in the NetEqualizer, try our new ROI tool.  You can put together similar calculations for other vendors.

Note:  The Allot MobileTrends Report data was collected from Jan. 1 to June 30 from leading mobile operators worldwide with a combined user base of 190 million subscribers.

Bandwidth Control Return on Investment (ROI) Calculator


Are you looking to justify the cost of purchasing a bandwidth control device for your Internet or WAN link? Our ROI calculator is industry-neutral; click here to see custom results based on your network.

Aside from our customers’ comments about the overall improvement in their network performance, one of the most common remarks we hear from NetEqualizer users concerns the technology’s positive return on investment (ROI).

However, it’s also one of the most common questions we get from potential customers – How will the NetEqualizer benefit my bottom line?

To better answer this question, we recently interviewed NetEqualizer customers from across several verticals to get their best estimates of the cost savings and value associated with their NetEqualizer. We compiled their answers into a knowledge base that we now use to estimate reasonable ROI calculations.

Our calculations are based on real data and were done conservatively so as not to create false promises. There are plenty of congested Internet links suffering out there every day, and hence there is more than enough value in the NetEqualizer. So, we did not need to exaggerate.

ROI calculations were based on the following:

  1. Savings in Bandwidth Costs – Stay at your current bandwidth level or delay future upgrades.
  2. Reduced Labor and Support Costs – Avoid Internet congestion issues that lead to support calls during peak usage times.
  3. Retention of Customers – Stop losing customers, clients, and guests because of unreliable or unresponsive Internet service (applies to ISPs and operators such as hotels and executive suites).
  4. Addition of New Customers – Put more users on your link than before while keeping them all happy.

To see what the NetEqualizer can do for you, visit http://www.netequalizer.com


QoS Over The Internet – Is it possible? Five Must-Know Facts


I had an inquiry from a potential customer yesterday asking if we could monitor their QoS. I was a bit puzzled as to what to tell them. At first, the question struck me as if they had asked whether we could monitor the electrons on their power grid. In other words, it was a legitimate question in a sense, but of what use would it be to monitor QoS? I then asked him why he had implemented QoS in the first place. How did he know he needed it?

After inquiring a bit deeper, I also found out this customer was using extensive VPNs to remote offices over DSL internet circuits. His WAN traffic from the remote offices was sharing links with regular Internet data traffic, and all of it was traversing the public Internet. Then it hit me – he did not realize his QoS mechanisms were useless outside of his internal network.

Where there is one customer with confusion there are usually others. Hence, I've put together a quick fact sheet on QoS over an Internet link. Below, you'll find five quick facts that should help clarify QoS and answer the primary question: is it possible over the Internet?

Fact #1

If your QoS mechanism involves marking packets with special instructions (ToS bits) on how they should be treated, it will only work on links where you control both ends of the circuit and everything in between.
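For reference, this kind of marking is done with the `IP_TOS` socket option (shown here on Linux). The DSCP value 46 (Expedited Forwarding) is the conventional marking for voice traffic; the destination address below is a placeholder:

```python
import socket

# Mark outgoing packets with a DSCP value via the IP_TOS socket option
# (Linux). DSCP EF (Expedited Forwarding, 46) is the conventional voice
# marking. As noted above, this only helps on paths where every device
# honors the marking.
DSCP_EF = 46
tos = DSCP_EF << 2            # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# sock.sendto(b"voice payload", ("198.51.100.7", 5060))  # placeholder address
sock.close()
```

Setting the bits is easy; the catch, per the facts that follow, is that the public Internet is free to ignore them.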

Fact #2

Most Internet congestion is caused by incoming traffic. For data originating at your facility, you can certainly have your local router give it priority on the way out, but you cannot set QoS bits on traffic coming into your network from a third party. Regulating outgoing traffic with ToS bits will have no effect on incoming traffic.

Fact #3

Your public Internet provider will not treat ToS bits with any form of priority (the exception would be a contracted MPLS-type network). Yes, they could, but if they did, everybody would game the system to get an advantage, and the bits would not mean much anyway.

Fact #4

The next two facts address our initial question — Is QoS over the Internet possible? The answer is, yes, QoS on an Internet link is possible. We have spent the better part of seven years practicing this art form and it is not rocket science, but it does require a philosophical shift in thinking to get your arms around it.

We call it “equalizing,” or behavior-based shaping, and it involves monitoring incoming and outgoing streams on your Internet link.  Priority or QoS is nothing more than favoring one stream’s packets over another stream’s. You can accomplish priority QoS on incoming streams by queuing (slowing down) one stream over another without relying on ToS bits.

Fact #5

Surprisingly, behavior-based methods such as those used by our NetEqualizer do provide a level of QoS for VoIP on the public Internet. Although you can't tell the Internet to send your VoIP packets faster, most people don't realize that the problem with congested VoIP is that their VoIP packets are getting crowded out by large downloads. Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a QoS scheme.

For more information, check out Using NetEqualizer To Ensure Clean Clear VOIP.
