Network Address Translation FAQ


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Editor's Note: The official term for one public IP address mapped to multiple private IP addresses is PAT (Port Address Translation). However, most IP people use the terms interchangeably.


I was doing some internal research on network address translation (NAT) this past week, and as I looked for reliable sources, I became a bit frustrated with the information available. Yes, the information is out there and the Wikipedia article has some nice charts with all the details. But, if you're looking for the rational reasons behind NAT, you might want to shoot yourself in the head by the time you read through all of the information and find what you're looking for.

To preserve your sanity, as well as answer some key questions quickly, I’ve put together the following Q&A detailing some key points when it comes to NAT. We’ll start with the basics and go from there.

What is NAT?

In order to allow multiple users to share a single IP address, modern routers utilize NAT to find unused port numbers and map them to a set of local private IP addresses. So, for example, let's say your Internet provider gives you a single IP address for your household. It could be something like 98.245.90.60, which is a public IP address owned by Comcast.

All of the computers in your house must share the single IP address that Comcast provides. So, your local router — the Linksys wireless router you bought for $79 — will use NAT to tag traffic with port numbers and then create some additional IP addresses right where your house connects to the Internet.

Let's say you contacted the Microsoft website to download the latest service pack. When Microsoft sends you the download, it's going to send it to 98.245.90.60:5001. "5001" is the port number your router established for this transfer and 98.245.90.60 is the Comcast-owned Internet address for your entire house. Using NAT, your router will then interpret the port number and change the IP address to a unique internal address (like 192.168.1.103:8700, for example) before it gets to your computer.
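
To make the idea concrete, here is a toy model (written in Python, with invented addresses and ports) of the translation table a NAT router maintains. A real router does this in firmware and also tracks protocol and connection state, but the bookkeeping looks roughly like this:

```python
# Toy model of a NAT translation table. Addresses and ports are
# invented for illustration; real routers also track protocol and state.

PUBLIC_IP = "98.245.90.60"  # the single address your provider assigned

# public port -> (private IP, private port)
nat_table = {
    5001: ("192.168.1.103", 8700),
    5002: ("192.168.1.104", 8700),
}

def translate_inbound(dst_ip, dst_port):
    """Rewrite an incoming packet's destination to the private host."""
    if dst_ip == PUBLIC_IP and dst_port in nat_table:
        return nat_table[dst_port]
    return None  # no mapping exists, so the packet is dropped

print(translate_inbound("98.245.90.60", 5001))  # ('192.168.1.103', 8700)
```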

Why do we need NAT?

NAT is useful because home users often have more than one computer in their household and yet only have a single IP address from their provider. Since every computer that talks on the Internet requires an IP address, it would not be possible to have more than one computer in your house online at the same time without NAT.

How does NAT map a single IP address to multiple computers without things like Web browsing getting mixed up?

First, here's some background on the difference between a base IP address and a port number. Internet addresses have two parts: an IP address, such as 98.245.90.60, and a port number. The IP address is used to route data across the Internet and the port is used by the receiving device — your computer — to determine what service to provide. For example, port 80 is the default port for Web browsing.

Before the invention of NAT, Internet routers mostly ignored the port part of the address, as they did not need it to move IP packets across the Internet. When describing the function of a port number, I like to use the analogy of a large dormitory with individual room numbers for the people living there. The postal service ignores the room numbers, as their service ends at the address of the dormitory. They do not sort the mail by room number. For Internet routers, port numbers are like room numbers. They deliver the packet to the end user's computer and the port number is then interpreted.

The range of possible port numbers runs into the tens of thousands, which is far more than a user's computer will ever need for interpreting services. Think of a dorm with 1,000 residents: they would only need 1,000 mailbox numbers, but still have 1,000,000 reserved.

What happens if there are no free ports to do the translation?

On small home networks this is not likely to happen, but you can get conflicts if, for example, you try to use NAT on a network with tens of thousands of users. The total number of unique ports available is 65,535, and most users will require more than one port at a time.

Does NAT slow down my Internet connection?

Not enough for you to notice.

Why does my provider only allocate one IP address for my residence?

Even though there are about 4,000,000,000 (four billion) possible Internet addresses, the actual addresses are given out in large blocks, and once given out, they are hard to get back. So, and this is purely an example, let's say a large company was given a class B set of addresses (which used to be common in the early days). They would have roughly 65,000 addresses in their control. Hence, even with 4,000,000,000 possible addresses, they are in short supply, and your provider cannot afford to give you more than one at a time.

Can I have more than one IP address?

Yes, but you would likely need a business class Internet service, which is generally quite a bit more expensive than residential-type service.

When will the world run out of IP addresses?

Some say we already have, and there is a big push to go to a new standard called IPv6. However, we don't think that will ever happen.


Seven Points to Consider When Planning Internet Redundancy


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

The chances of being killed by a shark are 1 in 264 million. Despite those low odds, most people worry about sharks when they enter the ocean, and yet the same people do not think twice about getting into a car without a passenger-side airbag.

And so it is with networking redundancy solutions. Many equipment purchase decisions are driven by an irrational fear (created by vendors) rather than by actual business-risk mitigation.

The solution to this problem is simple. It's a matter of being informed and making decisions based on facts rather than fear or emotion. While every situation is different, here are a few basic tips and questions to consider when it comes to planning Internet redundancy.

1) Where is your largest risk of losing Internet connectivity?

Vendors tend to push customers toward internal hardware solutions to reduce risk. For example, most customers want a circuit design within their servers that will allow traffic to pass should the equipment fail. Yet polling data from our customers shows that your Internet router's chance of catastrophic failure is about 1 percent over a three-year period. On the other hand, your Internet provider has an almost 100-percent chance of having a full-day outage during that same three-year period.

Perhaps the cost of sourcing two independent providers is prohibitive, and there is no choice but to live with this risk. All well and good, but if you are truly worried about a connectivity failure into your business, you cannot meaningfully mitigate this risk by sourcing hot failover equipment at your site.  You MUST source two separate paths to the Internet to have any significant reduction in risk.  Requiring failover on individual pieces of equipment, without complete redundancy in your network from your provider down, with all due respect, is a mitigation of political and not actual risk.

2) Do not turn on unneeded bells and whistles on your router and firewall equipment.

Many router and device failures are not absolute. Equipment will get cranky, slow, or belligerent based on human error or system bugs. Although system bugs are rare when these devices are used in the default set-up, it seems turning on bells and whistles is often an irresistible enticement for a tech. The more features you turn on, the less standard your configuration becomes, and all too often the mission of the device is pushed well beyond its original intent. Routers running billing systems, for example.

These “soft” failure situations are common, and the fail-over mechanism likely will not kick in, even though the device is sick and not passing traffic as intended.  I have witnessed this type of failure first-hand at major customer installations.  The failure itself is bad enough, but the real embarrassment comes from having to tell your customer that the fail-over investment they purchased is useless in a real-life situation. Fail-over systems are designed with the idea that the equipment they route around will die and go belly up like a pheasant shot point-blank with a 12-gauge shotgun.  In reality, for every “hard” failure, there are 100 system-related lock ups where equipment sputters and chokes but does not completely die.

3) Start with a high-quality Internet line.

T1 lines, although somewhat expensive, are based on telephone technology that has long been hardened and paid for. While they do cost a bit more than other solutions, they are well-engineered to your doorstep.

4) If possible, source two Internet providers and use BGP to combine them.

Since Internet providers are usually the weakest link in your connection, critical operations should consider this option first before looking to optimize other aspects of your internal circuit.

5) Make sure all your devices have good UPS sources and surge protectors.

6) What is the cost of manually moving a wire to bypass a failed piece of equipment?

Look at this option before purchasing redundancy options on a single point of failure. We often see customers asking for redundant fail-over embedded in their equipment. This tends to be a strategy of purchasing hardware such as routers, firewalls, bandwidth shapers, and access points that provide a "fail open" (meaning traffic will still pass through the device) should they catastrophically fail. At face value, this seems like a good idea to cover your bases. Most of these devices embed a failover switch internally in their hardware. The cost of this technology can add about $3,000 to the price of the unit.

7) If equipment is vital to your operation, you’ll need a spare unit on hand in case of failure. If the equipment is optional or used occasionally, then take it out of your network.

Again, these are just some basic tips, and your final Internet redundancy plan will ultimately depend on your specific circumstances.  But, these tips and questions should put you on your way to a decision based on facts rather than one based on unnecessary fears and concerns.

Nine Tips and Technologies for Network WAN Optimization


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Although there is no way to actually make your true WAN speed faster, here are some tips for  corporate IT professionals that can make better use of the bandwidth you already have, thus providing the illusion of a faster pipe.

1) Caching — How  does it work and is it a good idea?

Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing a WAN/Internet link unnecessarily.

Caching servers keep a time stamp of their last update to data. If the page time stamp has not changed since the last time a user has accessed the page, the caching server will present a local stored copy of the Web page, saving the time it would take to load the page from across the Internet.
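
That freshness check is essentially what HTTP calls a conditional GET. Here is a minimal sketch of the mechanism using only the Python standard library (the URL and stored time stamp are placeholders); a real caching server such as Squid does this, plus expiry heuristics, automatically:

```python
# Minimal sketch of a cache freshness check via an HTTP conditional GET.
# The URL and stored timestamp are placeholders for illustration.
import urllib.request
import urllib.error

url = "http://example.com/benefits.pdf"
stored_timestamp = "Mon, 01 Jan 2024 00:00:00 GMT"  # when we last cached it

req = urllib.request.Request(url)
req.add_header("If-Modified-Since", stored_timestamp)

try:
    resp = urllib.request.urlopen(req)
    body = resp.read()            # page changed: fetch and re-cache it
    print("updated copy fetched:", len(body), "bytes")
except urllib.error.HTTPError as e:
    if e.code == 304:             # Not Modified: serve the local copy
        print("cache is current; serving local copy")
    else:
        raise
```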

Caching on your WAN link in some instances can reduce traffic by 50 percent or more. For example, if your employees are making a run on the latest PDF explaining their benefits, then without caching each access would traverse the WAN link to a central server, duplicating the data across the link many times over. With caching, they will receive a local copy from the caching server.

What is the downside of caching?

There are two main issues that can arise with caching:

a) Keeping the cache current – If you access a cached page that is not current, you are at risk of getting old and incorrect information. Some things you may never want cached: the results of a transactional database query, for example. It's not that these problems are insurmountable, but there is always the risk the data in the cache will not be synchronized with changes. I personally have been misled by stale data from my cache on several occasions.

b) Volume – There are some 300 million websites on the Internet. Each site contains upwards of several megabytes of public information. The amount of data is staggering and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.

We recommend Squid as a proxy solution.

2) Protocol Spoofing

Historically, client-server applications were developed for an internal LAN. Many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe using WAN links to tie different locations together.

To get a better visual on what goes on in a chatty application perhaps an analogy will help. It’s like  sending family members your summer vacation pictures, and, for some insane reason, putting each picture in a separate envelope and mailing them individually on the same mail run. Obviously, this would be extremely inefficient, just as chatty applications can be.

What protocol spoofing accomplishes is to “fake out” the client or server side of the transaction and then send a more compact version of the transaction over the Internet (i.e., put all the pictures in one envelope and send it on your behalf, thus saving you postage).

For more information, visit the Protocol Spoofing page at WANOptimization.org.

3) Compression

At first glance, the term compression seems intuitively obvious. Most people have at one time or another extracted a compressed Windows ZIP file. If you examine the file sizes pre- and post-extraction, it reveals there is more data on the hard drive after the extraction. Well, WAN compression products use some of the same principles, only they compress the data on the WAN link and decompress it automatically once delivered, thus saving space on the link, making the network more efficient. Even though you likely understand compression on a Windows file conceptually, it would be wise to understand what is really going on under the hood during compression before making an investment to reduce network costs. Here are two questions to consider.

a) How Does it Work? — A good and easy way to visualize data compression is comparing it to the use of shorthand when taking dictation. By using a single symbol for common words, a scribe can take written dictation much faster than if he were to spell out each word. The basic principle behind compression techniques is to use shortcuts to represent common data.

Commercial compression algorithms, although similar in principle, can vary widely in practice. Each company offering a solution typically has its own trade secrets that they closely guard for a competitive advantage. However, there are a few general rules common to all strategies. One technique is to encode a repeated character within a data file. For a simple example, let’s suppose we were compressing this very document and as a format separator we had a row with a solid dash.

The data for this solid dash line is comprised of approximately 160 repetitions of the ASCII character "-". When transporting the document across a WAN link without compression, this line of the document would require 160 bytes of data, but with clever compression, we can encode it using a special notation, "-" x 160.

The compression device at the front end would read the 160-character line and realize, "Duh, this is stupid. Why send the same character 160 times in a row?" So, it would incorporate a special code to depict the data more efficiently.
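
For the curious, here is a bare-bones sketch of that run-length idea in Python. It is purely illustrative; commercial products use far more elaborate dictionary-based schemes:

```python
# Minimal run-length encoder/decoder illustrating the idea above.
# Real WAN compression uses far more elaborate dictionary schemes.
from itertools import groupby

def rle_encode(data: str):
    return [(ch, len(list(run))) for ch, run in groupby(data)]

def rle_decode(pairs):
    return "".join(ch * count for ch, count in pairs)

line = "-" * 160                 # the solid separator line
encoded = rle_encode(line)       # [('-', 160)]: 160 bytes become one pair
assert rle_decode(encoded) == line
print(encoded)
```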

Perhaps that was obvious, but it is important to know a little bit about compression techniques to understand the limits of their effectiveness. There are many types of data that cannot be efficiently compressed.

For example, many image and voice recordings are already optimized and there is very little improvement in data size that can be accomplished with compression techniques. The companies that sell compression based solutions should be able to provide you with profiles on what to expect based on the type of data sent on your WAN link.

b) What are the downsides? — Compression always requires equipment at both ends of the link and results can be sporadic depending on the traffic type.

If you're looking for compression vendors, we recommend FatPipe and Juniper Networks.

4) Requesting Text Only from Browsers on Remote Links

Editor's note: Although this may seem a bit archaic and backwoods, it can be effective in a pinch to keep a remote office up and running.

If you are stuck with a dial-up or slower WAN connection, have your users set their browsers to text-only mode. However, while this will speed up general browsing and e-mail, it will do nothing to speed up more bandwidth intensive activities like video conferencing. The reason why text only can be effective is that  most Web pages are loaded with graphics which take up the bulk of the load time. If you’re desperate, switching to text-only will eliminate the graphics and save you quite a bit of time.

5) Application Shaping on Your WAN Link

Editor's Note: Application shaping is appropriate for corporate IT administrators and is generally not a practical solution for a home user. Makers of application shapers include Packeteer and Allot, and their products are typically out of the price range for many smaller networks and home users.

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping,” with aliases of “traffic shaping,” “bandwidth control,” and perhaps a few others thrown in for good measure. For the IT manager that is held accountable for everything that can and will go wrong on a network, or the CIO that needs to manage network usage policies, this is a dream come true. If you can divvy up portions of your WAN/Internet link to various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth.

At the center of application shaping is the ability to identify traffic by type: for example, distinguishing between Citrix traffic, streaming audio, Kazaa peer-to-peer, and so on. However, this approach is not without its drawbacks.

Here are a few common questions potential users of application shaping generally ask.

a) Can you control applications with just a firewall or do you need a special product? — Many applications are expected to use Internet ports when communicating across the Web. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the "FTP" application commonly used for downloading files uses the well-known "port 21."

The fallacy with this scheme, as many operators soon find out, is that there are many applications that do not consistently use a fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don’t want IT personnel to block them at all, so they deliberately design applications to not conform to any formal port assignment scheme. For this reason, any product that aims to block or alter application flows by port should be avoided if your primary mission is to control applications by type.

b) So, if standard firewalls are inadequate at blocking applications by port, what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet.

In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, let’s take the example of a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then when the train arrived in Los Angeles hopefully the workers on the other end would have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what, the contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit, I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets, and through various pattern matching techniques, determines what type of application a particular flow is. Once a flow is identified, the application shaping tool can enforce the operator's policies on that flow (a sketch of the idea follows the list below). Some examples of policy are:

  • Limit Citrix traffic to 100kbs
  • Reserve 500kbs for Shoretel voice traffic
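
Here is a toy sketch of how such a classifier-plus-policy table might look. The payload signatures and rates below are invented for illustration; commercial shapers ship with thousands of professionally maintained patterns:

```python
# Toy flow classifier and policy table. Signatures and rates are
# invented for illustration; real shapers ship thousands of patterns.

SIGNATURES = {
    b"ICA": "citrix",     # hypothetical payload marker for Citrix
    b"RTP": "voice",      # hypothetical payload marker for voice
}

POLICIES_KBS = {
    "citrix": 100,    # limit Citrix traffic to 100kbs
    "voice": 500,     # reserve 500kbs for voice traffic
    "unknown": 50,    # blanket policy for unclassified flows
}

def classify(payload: bytes) -> str:
    for marker, app in SIGNATURES.items():
        if marker in payload:
            return app
    return "unknown"

def policy_for(payload: bytes) -> int:
    return POLICIES_KBS[classify(payload)]

print(policy_for(b"...ICA..."))   # -> 100
print(policy_for(b"mystery"))     # -> 50, the unknown-class policy
```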

The list of rules you can apply to traffic types and flows is unlimited. However, there are downsides to application shaping of which you should be aware. Here are a few:

  • The number of applications on the Internet is a moving target. The best application shaping tools do a very good job of identifying several thousand of them, and yet there will always be some traffic that is unknown (estimated at 10 percent by experts from the leading manufacturers). The unknown traffic is lumped into the unknown classification and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a Web cast and is not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to stay up to date is large and there are cracks.
  • Even if the application spectrum could be completely classified, the spectrum of applications constantly changes. You must keep licenses current to ensure you have the latest in detection capabilities. And even then it can be quite a task to constantly analyze and change the mix of policies on your network. As bandwidth costs lessen, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

6) Test Your WAN-Link Speed

A common issue with slow WAN-link service is that your provider is not delivering what they advertised.

For more information, see The Real Meaning of Comcast Generosity.

7) Make Sure There Is No Interference on Your Wireless Point-to-Point WAN Link

If the signal between locations served by a point-to-point link is weak, the wireless equipment will automatically downgrade its service to a slower speed. We have seen this many times where a customer believes they have a 40-megabit backhaul link but is only realizing five megabits.

8) Deploy a Fairness Device to Smooth Out Those Rough Patches During Contentious Busy Hours

Yes, this is the NetEqualizer News Blog, but with all bias aside, these things work great. If you are in an office sharing an Internet feed with various users, the NetEqualizer will keep aggressive bandwidth users from crowding others out. No, it cannot create additional bandwidth on your pipe, but it will eliminate the gridlock caused  by your colleague  in the next cubicle  downloading a Microsoft service pack.

Yes, there are other devices on the market (like your fancy router), but the NetEqualizer was specifically designed for that mission.

9) Bonus Tip: Kill All of Those Security Devices and See What Happens

The recent outbreak of the H1N1 virus reminded me of how sometimes the symptoms and carnage from a vaccine are worse than the disease it claims to cure. Well, the same holds true for the security protection hardware on your network. From proxies to firewalls, underpowered equipment can be the biggest choke point on your network.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email.

Click here for a full price list.

Links to other bandwidth control products on the market.

Packet Shaper by Blue Coat

Exinda

Riverbed

Exinda, Packet Shaper, and Riverbed tend to focus on the enterprise WAN optimization market.

Cymphonix

Cymphonix comes  from a background of detailed reporting.

Emerging Technologies

Very solid  product for bandwidth shaping.

Exinda

Exinda from Australia has really made a good run in the US market, offering a good alternative to the incumbents.

Netlimiter

For those of you who are wed to Windows, NetLimiter is your answer.

NetEqualizer Field Guide to Network Capacity Planning


I recently reviewed an article that covered bandwidth allocations for various Internet applications. Although the information was accurate, it was very high level and did not cover the many variances that affect bandwidth consumption. Below, I’ll break many of these variances down, discussing not only how much bandwidth different applications consume, but the ranges of bandwidth consumption, including ping times and gaming, as well as how our own network optimization technology measures bandwidth consumption.

E-mail

Some bandwidth planning guides make simple assumptions and provide a single number for E-mail capacity planning, oftentimes overstating the average consumption. However, this usually doesn’t provide an accurate assessment. Let’s consider a couple of different types of E-mail.

E-mail — Text

Most E-mail text messages are at most a paragraph or two of text. On the scale of bandwidth consumption, this is negligible.

However, it is important to note that when we talk about the bandwidth consumption of different kinds of applications, there is an element of time to consider — How long will this application be running for? So, for example, you might send two kilobytes of E-mail over a link and it may roll out at the rate of one megabit. A 300-word, text-only E-mail can and will consume one megabit of bandwidth. The catch is that it generally lasts just a fraction of second at this rate. So, how would you capacity plan for heavy sustained E-mail usage on your network?

When computing bandwidth rates for classification with a commercial bandwidth controller such as a NetEqualizer, the industry practice is to average the bandwidth consumption for several seconds, and then calculate the rate in units of kilobytes per second (Kbs).

For example, when a two kilobyte file (a very small E-mail, for example) is sent over a link for a fraction of a second, you could say that this E-mail consumed two megabits of bandwidth. For the capacity planner, this would be a little misleading since the duration of the transaction was so short. If you take this transaction average over a couple of seconds, the transfer rate would be just one kbs, which for practical purposes, is equivalent to zero.
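
For those who like to see the arithmetic, here is the averaging idea as a small sketch (the window length and sample numbers are arbitrary):

```python
# Sketch of averaging a short transfer over a window, per the example above.
def avg_rate_kilobytes_per_sec(bytes_sent: int, window_seconds: float) -> float:
    return (bytes_sent / 1024) / window_seconds

# A 2-kilobyte e-mail measured over a 2-second window:
print(avg_rate_kilobytes_per_sec(2 * 1024, 2.0))  # -> 1.0, effectively zero
```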

E-mail with Picture Attachments

A normal text E-mail of a few thousand bytes can quickly become 10 megabits of data with a few picture attachments. Although it may not look all that big on your screen, this type of E-mail can suck up some serious bandwidth when being transmitted. In fact, left unmolested, this type of transfer will take as much bandwidth as is available in transit. On a T1 circuit, a 10-megabit E-mail attachment may bring the line to a standstill for as long as six seconds or more. If you were talking on a Skype call while somebody at the same time shoots a picture E-mail to a friend, your Skype call is most likely going to break up for five seconds or so. It is for this reason that many network operators on shared networks deploy some form of bandwidth control or QoS, as most would agree an E-mail attachment should not take priority over a live phone call.

E-mail with PDF Attachment

As a rule, PDF files are not as large as picture attachments when it comes to E-mail traffic. An average PDF file runs in the range of 200 thousand bytes, whereas today's higher-resolution digital cameras create pictures of a few million bytes, or roughly 10 times larger. On a T1 circuit, the average bandwidth of the PDF file over a few seconds will be around 100kbs, which leaves plenty of room for other activities. The exception would be a 20-page manual, which would crash your entire T1 for a few seconds, just as the large picture attachments referred to above would do.

Gaming/World of Warcraft

There are quite a few blogs that talk about how well World of Warcraft runs on DSL, cable, etc., but most are missing the point about this game and games in general and their actual bandwidth requirements. Most gamers know that ping times are important, but what exactly is the correlation between network speed and ping time?

The problem with just measuring speed is that most speed tests start a stream of packets from a server of some kind to your home computer, perhaps a 20-megabit test file. The test starts (and a timer is started) and the file is sent. When the last byte arrives, a timer is stopped. The amount of data sent over the elapsed seconds yields the speed of the link. So far so good, but a fast speed in this type of test does not mean you have a fast ping time. Here is why.

Most people know that if you are talking to an astronaut on the moon there is a delay of several seconds with each transmission. So, even though the speed of the link is the speed of light for practical purposes, the data arrives several seconds later. Well, the same is true for the Internet. The data may be arriving at a rate of 10 megabits, but the time it takes in transit could be as high as 1 second. Hence, your ping time (your mouse click to fire your gun) does not show up at the controlling server until a full second has elapsed. In a quick draw gun battle, this could be fatal.

So, what affects ping times?

The most common cause would be a saturated network. This is when the transmission rate of all data on your Internet link exceeds the link's rated capacity. Some links, like a T1, just start dropping packets when full, as there is no orderly line for waiting packets. In many cases, data that arrive to go out of your router when the link is filled just get tossed. This would be like killing off excess people waiting at a ticket window or something. Not very pleasant.

If your router is smart, it will try to buffer the excess packets and they will arrive late. Also, if the only thing running on your network is World of Warcraft, you can actually get by with 120kbs in many cases, since the amount of data actually sent over the network is not that large. Again, the ping time is more important, and an unencumbered 120kbs link should have ping times faster than a human reflex.

There may also be some inherent delay in your Internet link beyond your control. For example, all satellite links, no matter how fast the data speed, have a minimum delay of around 300 milliseconds. Most urban operators do not need to use satellite links, but they all have some delay. Network delay will vary depending on the equipment your provider has in their network, and also how and where they connect up to other providers, as well as the number of hops your data will take. To test your current ping time, you can run a ping command from a standard Windows machine.
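
If you prefer to script the measurement, here is a rough sketch that times a TCP connect from Python instead of a true ICMP ping (the host is a placeholder, and a true ping requires raw-socket privileges):

```python
# Rough round-trip timer using a TCP connect (not a true ICMP ping).
# The host and port are placeholders for illustration.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 80) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established: the handshake took one round trip
    return (time.perf_counter() - start) * 1000

print(f"{tcp_rtt_ms('example.com'):.1f} ms")
```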

Citrix

Applications vary widely in the amount of bandwidth consumed. Most mission critical applications using Citrix are fairly lightweight.

YouTube Video — Standard Video

A sustained YouTube video will consume about 500kbs on average over the video's 10-minute duration. Most video players try to store the video up locally as fast as they can take it. This is important to know because if you are sizing a T1 to be shared by voice phones, theoretically, if a user was watching a YouTube video, you would have 1 megabit left over for the voice traffic. Right? Well, in reality, your video player will most likely take the full T1, or close to it, while buffering YouTube.

YouTube — HD Video

On average, YouTube HD consumes close to 1 megabit.

See these other YouTube articles for more specifics about YouTube consumption.

Netflix – Movies On Demand

Netflix is moving aggressively to a model where customers download movies over the Internet, versus having a DVD sent to them in the mail. A recent study showed that 20% of peak-hour bandwidth usage in the U.S. is due to Netflix downloads. An average two-hour movie takes about 1.8 gigabits; if you want high-definition movies, then it's about 3 gigabits for two hours. Other estimates run as high as 3-5 gigabits per movie.

On a T1 circuit, the average bandwidth of a high-definition Netflix movie (conservatively 3 gigabits/2 hours) over one second will be around 400kbs, which consumes more than 25% of the total circuit.
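
Spelled out, the arithmetic behind that estimate looks like this:

```python
# The arithmetic behind the estimate above.
movie_bits = 3e9               # conservatively 3 gigabits per 2-hour HD movie
seconds = 2 * 60 * 60          # two hours
rate_kbs = movie_bits / seconds / 1000
print(round(rate_kbs))         # ~417 kbs, over a quarter of a 1,544-kbs T1
```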

Skype/VoIP Calls

The amount of bandwidth you need to plan for a VoIP network is a hot topic. The bottom line is that VoIP calls range from 8kbs to 64kbs. Normally, the higher the quality of the transmission, the higher the bit rate. For example, at 64kbs you can transmit with the quality that one might experience on an older-style AM radio. At 8kbs, you can understand a voice if the speaker is clear and enunciates their words. However, it is not likely you could understand somebody speaking quickly or slurring their words slightly.

Real-Time Music, Streaming Audio and Internet Radio

Streaming audio ranges from about 64kbs to 128kbs for higher fidelity.

File Transfer Protocol (FTP)/Microsoft Servicepack Downloads

Updates such as Microsoft service packs use file transfer protocol. Generally, this protocol will use as much bandwidth as it can find. There are several limiting factors for the actual speed an FTP will attain, though.

  1. The speed of your link — If the factors below (2 and 3) do not come into effect, an FTP transfer will take your entire link and crowd out VoIP calls and video.
  2. The speed of the sender's server — There is no guarantee that the sending server is able to deliver data at the speed of your high-speed link. Back in the days of dial-up 28.8kbs modems, this was never a factor. But, with some home Internet links approaching 10 megabits, don't be surprised if the sending server cannot keep up. During peak times, the sending server may be processing many requests at one time, and hence, even though it's coming from a commercial site, it could actually be slower than your home network.
  3. The speed of the local receiving machine — Yes, even the computer you are receiving the file on has an upper limit. If you are on a high-speed university network, the line speed of the network can easily exceed your computer's ability to take in data.

While every network will ultimately be different, this field guide should provide you with an idea of the bandwidth demands your network will experience. After all, it's much better to plan ahead rather than risking a bandwidth overload that causes your entire network to come to a halt.

Related article: a must-read for anybody upgrading their Internet pipe is our article on Contention Ratios.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Other products that classify bandwidth

Using NetEqualizer to Ensure Clean, Clear QoS for VOIP Calls


A Little Bit of History

Many VoIP installations  are designed with an initial architecture that assumes inter-office  phone calls will reside within the confines of the company LAN. Internal LANs  are almost always 100 megabit and consist of multiple paths between end points. The basic corporate LAN design usually provides more than enough bandwidth to route all inter-office VoIP calls without congestion.

As enterprises become more dispersed geographically, care must be taken when extending VoIP calls beyond the main office. Once a VoIP call leaves the confines of your local network and traverses the public Internet link, it will have to compete for space with any data traffic that might also be destined for the Internet. Without careful planning, your enterprise will most likely start dropping VoIP calls during busy traffic times.

The most common way of dealing with priority for VoIP is to set what is called the TOS bit. The TOS bit acts like a little flag inside each Internet packet of the VoIP stream. An Internet router can rearrange the packets destined for the Internet, and give priority to the outgoing VoIP packets by looking at the TOS bit. The downside of this method is that it does not help with VoIP calls originating from the outside coming into your network. For example, somebody receiving a VoIP call in the main office from a VPN user working at home may experience some distortion on the incoming VoIP call. This is usually caused when somebody else in the office is doing a large download during the VoIP call. Routers typically cannot set priority on incoming data, hence the inbound data download can dominate all the bandwidth, rendering the VoIP call inaudible.
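
For reference, an application can ask for this marking itself on most Unix-like systems. The sketch below sets the conventional "Expedited Forwarding" value used for voice; whether any router along the path honors it is entirely up to the network:

```python
# Sketch: marking a socket's packets with a ToS/DSCP value. 0xB8 is the
# conventional "Expedited Forwarding" marking for voice. The destination
# address is a placeholder, and socket.IP_TOS may be missing on Windows.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)
sock.sendto(b"voice payload", ("192.0.2.10", 5060))  # placeholder address
```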

How NetEqualizer Solves VoIP Congestion Issues

The NetEqualizer solves the problem of VoIP traffic competing with regular data traffic by using a simple method. A NetEqualizer provides priority for both incoming and outgoing VoIP traffic. It does not use TOS bits. It is VoIP- and network-agnostic. Sounds like the old Saturday Night Live commercial where Chevy Chase hawks a floor cleaner that is also an ice cream topping.

Here is how it works…

It turns out that VoIP streams require no more than 100kbs per call, usually quite a bit less. Large downloads, on the other hand, will grab the entire Internet trunk if they can get it. The NetEqualizer has been designed to favor streams of less than 100kbs over larger data streams. When a large download is competing with a VoIP call for precious resources, the NetEqualizer will create some artificial latency on the download stream, causing it to back off and slow down. No need to rely on TOS bits in this scenario; problem solved.
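
A greatly simplified sketch of that decision rule is shown below. The threshold and delay values are invented for illustration; this shows the general idea only, not NetEqualizer's actual code:

```python
# Greatly simplified sketch of the decision rule described above.
# Thresholds are illustrative; this is not NetEqualizer's actual code.
VOICE_CEILING_KBS = 100

def added_delay_ms(stream_rate_kbs: float, link_congested: bool) -> int:
    if not link_congested or stream_rate_kbs <= VOICE_CEILING_KBS:
        return 0      # small (VoIP-sized) streams pass untouched
    return 20         # large downloads get artificial latency; TCP backs off

print(added_delay_ms(80, True))    # VoIP call during congestion -> 0 ms
print(added_delay_ms(900, True))   # big download during congestion -> 20 ms
```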

Conceptually, that is all there is to it.  Obviously, the NetEqualizer engineering team has refined and tuned  this technique over the years.  In general, the NetEqualizer Default Rules need very little set-up, and a unit can be inline in a matter of minutes.

The scenarios where NetEqualizer is appropriate for ensuring that your VoIP system runs smoothly are:

  1. You are running an Enterprise VoIP service with remote offices that connect to your main PBX over VPN links
  2. You are an ISP and your customers use a VoIP service over limited bandwidth connectivity

Recommended Reading

Other vendor White Papers on the subject: Riverbed

Other suggested reading:  http://www.bandwidth.com/wiki/article/QoS_(Quality_of_Service)

*Preview Version* of NetEqualizer Online Guided Product Demonstration Video

Ten Things to Consider When Choosing a Bandwidth Shaper


This article is intended as an objective guide for anyone trying to narrow down their options in the bandwidth controller market. Organizations today have a plethora of product options to choose from. To further complicate your choices, not only are there specialized bandwidth controllers, you'll also find that most firewall and router products today contain some form of bandwidth shaping and QoS features.

What follows is an all-encompassing list of questions that will help you to quickly organize your priorities with regard to choosing a bandwidth shaper.

1) What is the Cost of Increasing your Bandwidth?

Although this question may be a bit obvious, it must be asked. We assume that anybody in the market for a bandwidth controller also has the option of increasing their bandwidth. The costs of purchasing  and operating a bandwidth controller should ultimately be compared with the cost of increasing bandwidth on your network.

2) How much Savings should you expect from your Bandwidth Controller?

A good bandwidth controller in many situations can increase your carrying capacity by up to 50 percent.  However, beware, some technologies designed to optimize your network can create labor overhead in maintenance hours. Labor costs with some solutions can far exceed the cost of adding bandwidth.

3) Can you out-run your Organization’s Appetite for Increased Bandwidth  with a One-Time Bandwidth Upgrade?

The answer is yes, it is possible to buy enough bandwidth such that all your users cannot possibly exhaust the supply.  The bad news is that this solution is usually cost-prohibitive.  Many organizations that come to us have previously doubled their bandwidth, sometimes more than once, only to be back to overwhelming congestion within  a few months after their upgrade.  The appetite for bandwidth is insatiable, and in our opinion, at some point a bandwidth control device becomes your only rational option. Outrunning your user base usually is only possible where  Internet infrastructure is subsidized by a government entity, hiding the true costs.  For example, a small University with 1000 students will likely not be able to consume a true 5 Gigabit pipe, but purchasing a pipe of that size would be out of reach for most US-based Universities.

4) How Valuable is Your Time? Are you a Candidate for a Freeware-type Solution?

What we have seen in the marketplace is that small shops with high technical expertise, or small ISPs on a budget, can often make use of a freeware do-it-yourself bandwidth control solution. If you are cash-strapped, this may be a viable solution for you. However, please go into this with your eyes open. The general pitfalls and risks are as follows:

a) Staff can easily run up 80 or more hours trying to save a few thousand dollars fiddling with an unsupported solution. And this is only for the initial installation and set-up. Over the useful life of the solution, this overhead can continue at a high level, due to the unsupported nature of these technologies.

b) Investors do not like to invest in businesses built on homegrown technology unless it gives them a very large competitive advantage, for many reasons: finding personnel to sustain the solution, upgrading and adding features, and the overall risk of keeping it in working order. You can easily shoot yourself in the foot with prospective buyers by becoming too dependent on homegrown, freeware solutions in order to save costs. When you rely on something homegrown, it generally means an employee or two holds the keys to the operational knowledge, hence potential buyers can become uncomfortable (you would be too!).

5) Are you Looking to Enforce Bandwidth Limits as part of a Rate Plan that you Resell to Clients?

For example, let's say that you have a good-sized backbone of bandwidth at a reasonable cost per megabit, and you just want to enforce class-of-service speeds to sell your bandwidth in incremental revenue chunks.

If this is truly your only requirement, and not optimization to support high contention ratios, then you should be careful not to overspend on your solution. A basic NetEqualizer or Allot system may be all that you need. You can also most likely leverage the bandwidth control features bundled into your router or firewall. The thing to be careful of if using your router/firewall is that these devices can become overwhelmed due to lack of horsepower.

6) Are you just Trying to Optimize the Bandwidth that you have, based on Well-Known Priorities?

Some context:

If you have a very static network load, with a finite, well-defined set of applications running through your enterprise, there are application shaping (Layer-7 shaping) products out there, such as the Blue Coat PacketShaper, which uses deep packet inspection and can be set up once to allocate different amounts of bandwidth based on application. If the PacketShaper is a bit too pricey, the Cymphonix product can also detect most common applications.

If  you are trying to optimize your bandwidth on a variable, wide-open plethora of applications, then you may find yourself with extremely high maintenance costs by using a Layer-7 application shaper. A generic behavior-based product such as the NetEqualizer will do the trick.

Update 2015

Note: We are seeing quite a bit of encryption on common applications. We strongly recommend avoiding Layer-7 type devices for public Internet traffic, as their accuracy is diminishing due to the fact that encrypted traffic cannot be classified; a heuristics-based, behavior-based approach is advised.

7) Make sure  what looks elegant on the cover does not have hidden costs by doing a little research on the Internet.

Yes, this is an obvious one too, but lest you forget your due diligence!

Before purchasing any traffic shaping solution, you should try a simple Internet search with well-placed keywords to uncover objective opinions. Testimonials supplied by the vendor are a good source of information, but they only tell half the story. Current customers are always biased toward their own decision, sometimes to the point of ignoring a better solution.

If you are not familiar with this technology, nor have the in-house expertise to work with a traffic shaper, you may want to consider buying additional bandwidth as your solution. In order to assess if this is a viable solution for you, we recommend you think about the following: How much bandwidth do you need? What is the appropriate amount for your ISP or organization? We actually dedicated a complete article to this question.

8) Are you a Windows Shop?  Do you expect a Microsoft-based solution due to your internal expertise?

With all respect to Microsoft and the strides they have made toward reliability in their server solutions, we believe that you should avoid a Windows-based product for any network routing or bandwidth control mission.

To be effective, a bandwidth control device must be placed such that all traffic is forced to pass through the device. For this reason, all manufacturers that we are aware of develop their network devices using a derivative of Linux. Linux is open source, which means that an OEM can strip down the operating system to its simplest components. The simpler the operating system in your network device, the less that can go wrong. With Windows, however, the core OS source code is not available to third-party developers, hence an OEM may not always be able to track down serious bugs. This is not to say that bugs do not occur in Linux; they do. However, the OEM can often get a patch out quickly.

For the IT person trained on Windows, a well-designed networking device presents its interface via a standard web page. Hence, a technician likely needs no specific Linux background.

9) Are you a CIO (or C level Executive) Looking to Automate and Reduce Costs ?

Bandwidth controllers can become a means to do cool things with a network.  Network Administrators can get caught up reading fancy reports, making daily changes, and interpreting results, which can become  extremely labor-intensive.  There is a price/benefit crossover point where a device can create more work (labor cost)  than bandwidth saved.  We have addressed this paradox in detail in a previous article.

10) Do you have any Legal or Political Requirement to Maintain Logs or Show Detailed Reports to a Third Party (i.e., management, oversight committee, etc.)?

For example…

A government requirement to provide data wiretaps dictated by CALEA?

Or a monthly report on employee Internet behavior?

Related article: how to choose the right bandwidth management solution.

Links to other bandwidth control products on the market.

Packet Shaper by Blue Coat

NetEqualizer (my favorite)

Exinda

Riverbed

Exinda, Packet Shaper, and Riverbed tend to focus on the enterprise WAN optimization market.

Cymphonix

Cymphonix comes  from a background of detailed reporting.

Emerging Technologies

Very solid  product for bandwidth shaping.

Exinda

Exinda from Australia has really made a good run in the US market, offering a good alternative to the incumbents.

Netlimiter

For those of you who are wed to Windows, NetLimiter is your answer.

Antamediabandwidth

Check Out Our New NetEqualizer Video…

What Is Burstable Bandwidth? Five Points to Consider



Internet Bursting

Internet Providers continually use clever marketing analogies to tout their burstable high-speed Internet connections. One of my favorites is the comparison to an automobile with overdrive that at the touch of button can burn up the road. At first, the analogies seem valid, but there are usually some basic pitfalls and unresolved issues.  Below are five points that are designed to make you ponder just what you’re getting with your burstable Internet connection, and may ultimately call some of these analogies, and burstable Internet speeds altogether, into question.

  1. The car acceleration analogy just doesn’t work.

    First, you don’t share your car’s engine with other users when you’re driving.  Whatever the engine has to offer is yours for the taking when you press down on the throttle.  As you know, you do share your Internet connection with many other users.  Second, with your Internet connection, unless there is a magic button next to your router, you don’t have the ability to increase your speed on command.  Instead, Internet bursting is a mysterious feature that only your provider can dole out when they deem appropriate.  You have no control over the timing.

  2. Since you don’t have the ability to decide when you can be granted the extra power, how does your provider decide when to turn up your burst speed?

    Most providers do not share details on how they implement bursting policies, but here is an educated guess, based on years of experience helping providers enforce various policies regarding Internet line speeds. I suspect your provider watches your bandwidth consumption and lets you pop up to your full burst speed, typically 10 megabits, for a few seconds at a time. If you continue to use the full 10 megabits for more than a few seconds, they will likely rein you back down to your normal committed rate (typically 1 megabit). Please note this is just an example from my experience and may not reflect your provider's actual policy.

  3. Above, I mentioned a few seconds for a burst, but just how long does a typical burst last?

    If you were watching a bandwidth-intensive HD video for an hour or more, for example, could you sustain adequate line speed to finish the video? A burst of a few seconds will suffice to make a Web page load in 1/8 of a second instead of perhaps the normal 3/4 of a second. While this might be impressive to a degree, an hour-long video needs sustained speed for far longer than any burst will last. So, if you're watching a movie or doing any other sustained bandwidth-intensive activity, it is unlikely you will be able to benefit from any sort of bursting technology.

  4. Why doesn’t my provider let me have the burst speed all of the time?

    The obvious answer is that if they did, it would not be a burst, so it must be limited in duration somehow. A better answer is that your provider has peaks and valleys in their available bandwidth during the day, and the higher speed of a burst cannot be delivered consistently. Therefore, it's better to leave bursting as a nebulous marketing term rather than a clearly defined entity. One other note is that if you only get bursting during your provider's Internet "valleys", it may not help you at all, as the time of day may be nowhere near your busy-hour time, and so although it will not hurt you, it will not help much either.

  5. When are the likely provider peak times where my burst is compromised?

    Slower service and the inability to burst are most likely occurring during times when everybody else on the Internet is watching movies — during the early evening.  Again, if this is your busy hour, just when you could really use bursting, it is not available to you.

These five points should give you a good idea of the multiple questions and issues that need to be considered when weighing the viability and value of burstable Internet speeds.  Of course, a final decision on bursting will ultimately depend on your specific circumstances.  For further related reading on the subject, we suggest you visit our articles How Much YouTube Can the Internet Handle and Field Guide to Contention Ratios.

How Does Your ISP Actually Enforce Your Internet Speed?


By Art Reisman, CTO, www.netequalizer.com



Have you ever wondered how your ISP manages to control the speed of your connection? If so, you might find the following article enlightening. Below, we'll discuss the various techniques used to enforce bandwidth rate limits and the side effects of each technique.

Dropping Packets (Cisco term “traffic policing”)

One of the simplest methods for a bandwidth controller to enforce a rate cap is by dropping packets. When using the packet-dropping method, the bandwidth-controlling device will count the total number of bytes that cross a link during a second. If the target rate is exceeded during any single second, the bandwidth controller will drop packets for the remainder of that second. For example, if the bandwidth limit is 1 megabit, and the bandwidth controller counts 1 million bits gone by in 1/2 a second, it will then drop packets for the remainder of the second. The counter will then reset for the next second. From most evidence we have observed, rate caps enforced by many ISPs use the drop-packet method, as it is the least expensive method supported on most basic routers.
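
In rough pseudocode form, the policing loop described above looks something like this (a sketch, not any particular vendor's implementation):

```python
# Sketch of per-second traffic policing as described above.
# Not any particular vendor's implementation.
LIMIT_BITS = 1_000_000   # a 1-megabit rate cap

bits_this_second = 0

def police(packet_bits: int) -> bool:
    """Return True if the packet may pass, False if it is dropped."""
    global bits_this_second
    if bits_this_second + packet_bits > LIMIT_BITS:
        return False              # over the cap: drop for the rest of the second
    bits_this_second += packet_bits
    return True

def on_second_tick():
    global bits_this_second
    bits_this_second = 0          # the counter resets every second
```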

So, what is wrong with dropping packets to enforce a bandwidth cap?

Well, when a link hits a rate cap and packets are dropped en masse, it can wreak havoc on a network. For example, the standard reaction of a Web browser when it perceives web traffic is getting lost is to re-transmit the lost data. For a better understanding of dropping packets, let’s use the analogy of a McDonald’s fast food restaurant.

Suppose the manager of the restaurant was told his bonus was based on making sure there was never a line at the cash register. So, whenever somebody showed up to order food when all registers were occupied, the manager would open a trap door, conveniently ejecting the customer back out into the parking lot. The customer, being extremely hungry, will come running back in the door (unless of course they die of starvation or get hit by a car) only to be ejected again. To make matters worse, let's suppose a bus load of school kids arrive. As the kids file in to the McDonald's, the remaining ones on the bus have no idea their classmates inside are getting ejected, so they keep streaming into the McDonald's. Hopefully, you get the idea.

Well, when bandwidth shapers deploy packet-dropping technology to enforce a rate cap, you can get the same result seen with the trapdoor analogy in the McDonald's. Web browsers and other user-based applications will beat their heads into the wall when they don't get responses from their counterparts on the other end of the line. When packets are being dropped en masse, the network tends to spiral out of control until all the applications essentially give up. Perhaps you have seen this behavior while staying at a hotel with an underpowered Internet link. Your connectivity will alternate between working and then hanging up completely for a minute or so during busy hours. This can obviously be very maddening.

The solution to shaping bandwidth on a network without causing gridlock is to implement queuing.

Queuing Packets (Cisco term “traffic shaping”)

Queuing is the art of putting something in a line and making it wait before continuing on. Obviously, this is what fast food restaurants actually do. They plan enough staff to handle the average traffic throughout the day, and then queue up customers when they arrive faster than orders can be filled. The assumption with this model is that at some point during the day the McDonald’s will catch up with the number of arriving customers and the lines will shrink away.

Another benefit of queuing is that customers driving by can estimate the wait from the long line extending out into the parking lot, and thus save their energy and not attempt to go inside.

But, what happens in the world of the Internet?

With queuing in place, a bandwidth controller looks at the data rate of the incoming packets, and if they are deemed too fast, it delays them in a queue. The packets eventually get to their destination, albeit somewhat later than expected. Queued packets can pile up very quickly, though, and without some help the link would saturate. The computer memory storing the queue would also fill up and, much like the scenario described above, packets would eventually be dropped if they kept arriving faster than they could be sent out.
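
A minimal sketch of such a shaping queue, with assumed names and limits: packets are held and released at the target rate, and dropping happens only when the bounded queue fills.

    from collections import deque

    class Shaper:
        """Delay packets that exceed the target rate instead of dropping them."""

        def __init__(self, rate_bps: int, max_queue: int = 100):
            self.rate_bps = rate_bps         # target rate, bits per second
            self.queue = deque()             # bounded queue: memory is finite
            self.max_queue = max_queue
            self.credit_bits = 0.0           # bits we are allowed to release right now
            self.last_time = 0.0

        def enqueue(self, packet: bytes) -> bool:
            """Hold a packet for later release; drop only when the queue is full."""
            if len(self.queue) >= self.max_queue:
                return False                 # queue saturated: fall back to dropping
            self.queue.append(packet)
            return True

        def release(self, now: float) -> list:
            """Release as many queued packets as the accumulated credit allows."""
            elapsed = now - self.last_time
            self.last_time = now
            # Accrue credit at the target rate, capped at one second's worth.
            self.credit_bits = min(self.credit_bits + elapsed * self.rate_bps,
                                   float(self.rate_bps))
            sent = []
            while self.queue and len(self.queue[0]) * 8 <= self.credit_bits:
                packet = self.queue.popleft()
                self.credit_bits -= len(packet) * 8
                sent.append(packet)
            return sent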

TCP to the Rescue (keeping queuing under control)

Most Internet applications use a protocol called TCP (Transmission Control Protocol) to handle their data transfers. TCP has built-in intelligence to figure out the speed of the link it is sending data on, and to make adjustments. When a bandwidth controller such as the NetEqualizer queues a packet or two, the TCP stacks on the end-point computers sense the slower packets and back off the speed of the transfer. With just a little bit of queuing, the sender slows down a bit, and dropped packets can be kept to a minimum.
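
TCP’s back-off can be caricatured as additive increase, multiplicative decrease (AIMD). The toy loop below is a gross simplification of real TCP, but it shows why a single drop is enough to tame a fast sender:

    # A toy model of TCP back-off: additive increase while transfers succeed,
    # multiplicative decrease when a drop is detected. Real TCP is far more
    # involved; the point is how quickly one drop slows a fast sender.

    link_capacity = 1000                  # packets per round trip the link can carry
    send_rate = 100                       # sender's current rate, packets per round trip

    for rtt in range(20):
        if send_rate > link_capacity:     # queue overflowed: the sender sees a drop
            send_rate = max(1, send_rate // 2)   # back off sharply
        else:
            send_rate += 50               # probe for more bandwidth while all is well
        print(f"round trip {rtt:2d}: sending {send_rate} packets")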

Queuing Inside the NetEqualizer

The NetEqualizer bandwidth shaper uses a combination of queuing and dropping packets to get speed under control. Queuing is the first option, but when a sender does not eventually back off, its packets will get dropped. For the most part, this combination works well.

So far we have been assuming the simple case of a single sender and a single queue, but what happens if you have a gigabit link with 10,000 users and you want to break off 100 megabits to be shared by 3,000 of them? How would a bandwidth shaper accomplish this? This is another area where a well-designed bandwidth controller like the NetEqualizer separates itself from the crowd.

In order to provide smooth shaping for a large group of users sharing a link, the NetEqualizer does several things in combination.

  1. It keeps track of all streams and, based on their individual speeds, applies a different queue delay to each stream.
  2. Streams that back off get minimal queuing.
  3. Streams that do not back off may eventually have some of their packets dropped.

The net effect of this queuing intelligence is that all users experience steady response times and smooth service.
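
The sketch below is purely illustrative (it is not the NetEqualizer’s actual algorithm), but it captures the three steps above: per-stream rates drive per-stream delays, and only streams that refuse to back off face drops.

    def queue_delay_ms(stream_bps: int, fair_share_bps: int) -> float:
        """Steps 1 and 2: streams under their fair share pass untouched;
        heavier streams wait longer in the queue."""
        if stream_bps <= fair_share_bps:
            return 0.0                        # well-behaved stream: minimal queuing
        overage = stream_bps / fair_share_bps
        return min(overage * 10.0, 200.0)     # growing delay, capped at 200 ms (assumed)

    def should_drop(stream_bps: int, fair_share_bps: int, queue_len: int) -> bool:
        """Step 3: drop only when a stream ignores the added delay
        and its queue keeps growing anyway."""
        return stream_bps > 4 * fair_share_bps and queue_len > 50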

Notes About UDP and Rate Limits

Some applications, such as video, do not use TCP to send data. Instead, they use a “send-and-forget” protocol called UDP, which has no built-in back-off mechanism. Without some higher intelligence, UDP packets will continue to be sent at a fixed rate even if they are arriving too quickly for the receiver. The good news is that most UDP applications also have some way of measuring whether their packets are reaching their destination; it’s just that with UDP, the mechanism of synchronization is not standardized.

Finally, there are those applications that just don’t care whether their packets reach their destination. Speed tests and viruses send UDP packets as fast as they can, regardless of whether the network can handle them. The only way to enforce a rate cap with such ill-mannered applications is to drop their packets.

Hopefully this primer has given you a good introduction to the mechanisms used to enforce Internet speeds, namely dropping packets and queuing. And maybe you will think about this the next time you visit a fast food restaurant during its busy time…

How to Implement Network Access Control and Authentication


There are a number of basic ways an automated network access control (NAC) system can identify unauthorized users and keep them off your network, but each method comes with pros and cons. This article will discuss the basic network access control strategies and the trade-offs each brings to the table, as well as explore some additional NAC considerations. Geared toward the Internet service provider, hotel operator, library, or other public portal operator who wishes to control access, this discussion will give you some insight into which method might be best for your network.

The NAC Strategies

MAC Address

Every computer connected to a network has a unique MAC address, so many NAC systems use the MAC address to identify an individual customer and grant or deny access.
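
Assuming the NAC device can see client MAC addresses directly (that is, it sits on the same layer-2 segment; see the hop limitation below), admission by MAC can be as simple as a set lookup. A minimal sketch with made-up addresses:

    AUTHORIZED_MACS = {
        "aa:bb:cc:dd:ee:01",    # paying customer (made-up address)
        "aa:bb:cc:dd:ee:02",
    }

    def admit(client_mac: str) -> bool:
        """Grant access only to known MAC addresses."""
        return client_mac.lower() in AUTHORIZED_MACS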

While this can be effective, there are limitations to using MAC addresses for network access control. For example, if a customer switches to a new computer, the system will not recognize them, as their MAC address will have changed. As a result, for mobile customer bases, MAC address authentication by itself is not viable.

Furthermore, on larger networks with centralized authentication, MAC addresses do not propagate beyond one network hop, so MAC address authentication can only be done on smaller networks (no hops across routers). A work-around for this limit would be a distributed set of authentication points local to each segment. This would involve multiple NAC devices, which raises complexity with regard to synchronization: your entire authentication database would need to be replicated on each NAC.

Finally, a common question when it comes to MAC addresses is whether or not they can be spoofed. In short, yes, they can, but it requires some sophistication, and it is unlikely that a user with that ability would go through all the trouble just to avoid paying an access charge. That is not to say it won’t happen, but rather that the revenue at risk is not worth the cost of combating the determined isolated user.

I mention this because some vendors will sell you features to combat spoofing, and most likely they are not worth the incremental cost. If your authentication is set up by MAC address, a spoofer would also have to obtain the MAC address of a paying user in order to get in. Since there is no real pattern to MAC addresses, guessing another customer’s MAC address would be nearly impossible without inside knowledge.

IP Address

IP addresses allow a bit more flexibility than MAC addresses because they survive hops across routers, so authentication can be handled at a central location. Again, while this strategy can be effective, IP address authentication has the same issue as MAC addressing: it does not allow a customer to switch computers, thus requiring the customer to use the same computer each time they log in. In theory, a customer could carry their IP address to a new computer, but explaining how to do that would be far too much of an administrative headache on a consumer-based network.

In addition, IP addresses are easy to spoof and relatively easy to guess should a user be trying to steal another user’s identity. But should two users log on with the same IP address at the same time, the ruse can quickly be tracked down. So, while plausible, it is a risky thing to do.

User ID Combined with MAC Address or IP Address

This methodology solves the portability issue found when using MAC addresses and IP addresses by themselves. With this strategy, the user authenticates their session with a user ID and password and the NAC module records their IP or MAC address for the duration of the session.

For a mobile consumer base, this is really the only practical way to enforce network access control. However, there is a caveat: the NAC controller must expire a user session after a period of inactivity. You can’t expect users to always log out of their network connection, so the session server (NAC) must take an educated guess as to when they are done. The ramification is that users whose sessions expire must log back in again. This usually isn’t a major problem, but it can be a hassle.

The good news is that the inactivity timer can be extended to hours or even days, and should a customer log in on a different computer while a previous session is still active, the NAC can sense this and terminate the old session automatically.
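
A minimal sketch of this strategy, with made-up field names and an assumed eight-hour timeout, might track sessions like this:

    import time

    INACTIVITY_LIMIT = 8 * 3600             # assumed: expire after 8 idle hours

    sessions = {}                           # user_id -> {"ip": ..., "last_seen": ...}

    def login(user_id: str, ip: str) -> None:
        """Start a session; any previous session for this user is replaced."""
        sessions[user_id] = {"ip": ip, "last_seen": time.time()}

    def is_authorized(user_id: str, ip: str) -> bool:
        session = sessions.get(user_id)
        if session is None or session["ip"] != ip:
            return False
        if time.time() - session["last_seen"] > INACTIVITY_LIMIT:
            del sessions[user_id]           # idle too long: force a fresh login
            return False
        session["last_seen"] = time.time()  # any traffic refreshes the timer
        return True

Note how a login from a new computer simply replaces the old session, matching the automatic termination described above.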

The authentication method currently used with the NetEqualizer is based on IP address and user ID/password, since it was designed for ISPs serving a transient customer base.

Other Important Considerations

NAC and Billing Systems

Many NAC solutions also integrate billing services. Overlooking the potential complexity and ballooning costs of a billing system can cut into efficiency and profits for both customer and vendor. Our philosophy is that a flat rate and simple billing are best.

To name a few examples, different customers may want time-of-day billing; billing by day, hour, month, or year; automated refunds; billing by connection speed; billing by type of property (geographic location); or special tax codes. A simple idea can obviously become a complicated one in a hurry. While there’s nothing wrong with these requests, history has shown that once you get beyond a simple flat rate, the cost of building and maintaining a system that meets these varied demands can increase exponentially.

Another thing to look out for with billing is integration with a credit card processor. Back-end integration for credit card processing takes some time and energy to validate. For example, the most common credit card authentication system in the US, Authorize.net, does not work unless you also have a US bank account. You may be tempted to shop for a credit card processor based on fees alone, but if you plan on doing automated integration with a NAC system, make sure the processor provides automated tools to integrate with your system and that your consulting firm accounts for this integration work.

Redirection Requirements

You cannot purchase and install a NAC system without some network analysis. Most NAC systems will redirect unauthorized users to a Web page that allows them to sign up for the service. Although this seems relatively straightforward, some basic network features need to be in place for the redirection to work correctly. The details go beyond the scope of this article, but you should expect to have a competent network administrator or consultant on hand to set this up. To be safe, plan for eight to 40 hours of consulting time for troubleshooting and set-up above and beyond the cost of the equipment.
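
For a flavor of the redirect step in isolation, here is a bare-bones sketch in Python; the sign-up URL is hypothetical, and a real deployment also needs the DNS and firewall plumbing alluded to above, which is where the consulting hours go.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    SIGNUP_URL = "http://portal.example.com/signup"    # hypothetical sign-up page

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(302)                    # temporary redirect
            self.send_header("Location", SIGNUP_URL)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()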

Network Access for Organizational Control

Thus far we have focused on the basic ways a public provider can restrict access to the Internet. However, in a private or institutional environment where security and access to information are paramount, the NAC mission can change substantially. For example, the Wikipedia article on network access control outlines a much broader mission than a simple service provider would require. The article reads:

“Network Access Control aims to do exactly what the name implies—control access to a network with policies, including pre-admission endpoint security policy checks and post-admission controls over where users and devices can go on a network and what they can do.”

This paragraph was obviously written by a contributor who views NAC as a broad control technique reaching deep into a private network. Interestingly, there is an ongoing dispute on Wikipedia over whether this definition goes beyond the simpler idea of just granting access.

The rift on Wikipedia can be summarized as an argument over whether a NAC should be a simple gatekeeper for access to a network, with users having free rein to wander once inside, or whether the NAC is responsible for protecting various resources within the network once access is attained. Both camps have a point; it depends on the customer and type of business as to which type of NAC is required.

Therefore, in closing, the overarching message is simply that implementing network access control requires an evaluation not only of the network setup, but also of how the network will be used. Strategies that work perfectly in some circumstances can leave network administrators and users frustrated in others. With the right amount of foresight, however, network access control can be implemented to facilitate the success of your network and the satisfaction of its users, rather than serving as an ongoing frustration.

The Real Killer Apps and What You Can Do to Stop Them from Bringing Down Your Internet Links


When planning a new network, or when diagnosing a problem on an existing one, a common question that’s raised concerns the impact that certain applications may have on overall performance. In some cases, solving the problem can be as simple as identifying and putting an end to (or just cutting back) the use of certain bandwidth-intensive applications. So, the question, then, is what applications may actually be the source of the problem?

The following article identifies and breaks down the applications most likely to kill your network, and provides suggestions as to what you can do about them. While not every application is covered, our experience working with network administrators around the world has helped us identify the most common culprits.

The Common Culprits

YouTube Video (standard video) — On average, a 10-minute YouTube video will consume a sustained 500 kbps over its duration. Most video players also try to buffer ahead, storing the video locally as fast as your network can deliver it. On a shared network, this can bring everything else to its knees. That may not be a problem if you are the only person using the Internet link, but in today’s businesses and households, that is rarely the case.
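Some rough arithmetic puts that figure in perspective: 500 kbps sustained for 10 minutes is 500,000 bits per second × 600 seconds, or 300 million bits, roughly 37 megabytes per viewer. Just a few simultaneous viewers can therefore claim a large share of a modest office link.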

For more specifics about YouTube consumption, see our other YouTube articles.

Microsoft Service-Pack Downloads — Updates such as Microsoft service packs use file transfer protocol (FTP). Generally, this protocol will use as much bandwidth as it can find. The end result is that your VoIP phone may lock up, your videos will become erratic, and Web surfing will slow to a crawl.

Keeping Your Network Running Smoothly While Handling Killer Apps

There is no magic pill that can give you unlimited bandwidth, but each of the following solutions may help. However, they often require trade-offs.

  1. The obvious solution is to communicate with other members of your household or business about their use of bandwidth-intensive applications. This is not always practical, but if other users agree to change their behavior, it’s usually a surefire solution.
  2. Deploy a fairness device to smooth out the rough patches during contentious busy hours — Yes, this is the NetEqualizer News blog, but with all bias aside, these types of technologies often work great. If you are in an office sharing an Internet feed among various users, the NetEqualizer will keep aggressive bandwidth users from crowding others out. No, it cannot create additional bandwidth on your pipe, but it will eliminate the gridlock caused by your colleague in the next cubicle downloading a Microsoft service pack. Yes, there are other devices on the market that can enforce fairness, but the NetEqualizer was specifically designed for this mission. And, with a starting price of around $1,400, it is a product small businesses can invest in to avoid longer-term costs (see option 3).
  3. Buy more bandwidth — In most cases, this is the most expensive solution in the long term and should usually be a last resort. This is especially true if the problems are largely caused by recreational Internet use on a business network. However, if the bandwidth-intensive activities are a necessary part of your operation and can’t afford to be regulated by a fairness device, upgrading your bandwidth may be the only long-term solution. But before signing the contract, be sure to explore options one and two first.

As mentioned, not every network-killing application is discussed here, but this should head you in the right direction in identifying the problem and finding a solution. For a more detailed discussion of this issue, visit the links below.

  • For a more detailed discussion on how much bandwidth specific applications consume, click here.
  • For a set of detailed tips/tricks on making your Internet run faster, click here.
  • For an in-depth look at more complex methods used to mitigate network congestion on a WAN or Internet link, click here.

Top Tips To Quantify The Cost Of WAN Optimization


Editor’s Note: As we mentioned in a recent article, there’s often some confusion when it comes to how WAN optimization fits into the overall network optimization industry — especially when compared to Internet optimization. Although similar, the two techniques require different approaches to optimization. What follows are some simple questions to ask your vendor before you purchase a WAN optimization appliance. For the record, the NetEqualizer is primarily used for Internet optimization.

When presenting a WAN optimization ROI argument, your vendor rep will make a compelling case for savings. The ROI case is made by amortizing the cost of the equipment against your contracted rate from your provider. You can and should trust these basic raw numbers. However, there is more to evaluating a WAN optimization (packet shaping) appliance than comparing equipment cost against bandwidth savings. Here are a few things to keep in mind:

  1. The amortization schedule should also make reasonable assumptions about future costs for T1, DS3, and OC3 links. Contracted rates have been dropping in many metro areas, and it is reasonable to assume that bandwidth costs will be perhaps 50 percent lower two to three years out (see the rough cost sketch after this list).
  2. If you do increase bandwidth, the licensing costs for the traffic shaping equipment can increase substantially. You may also find yourself in a situation where you need to do a forklift upgrade as you outrun your current hardware.
  3. Recurring licensing costs are often mandatory to keep your equipment current. Without upgrading your license, your deep packet inspection (layer 7 shaping filters) will become obsolete.
  4. Ongoing labor costs to tune and re-tune your WAN optimization appliance can run to thousands of dollars per week.
  5. The good news is that optimization companies will normally let you try an appliance before you buy. Make sure you take the time to manage the equipment with your own internal techs or IT consultant to get an idea of how it will fit into your network. The honeymoon with new equipment (supported by a well-trained pre-sales team) can be short-lived; after the free pre-sale support has expired, you will be on your own.
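
To make point 1 concrete, here is a rough cost sketch in Python. Every figure below is an assumption to be replaced with your own contract numbers; the point is simply that falling bandwidth prices shrink the savings the appliance is amortized against.

    appliance_cost = 20_000         # one-time appliance price (assumed)
    annual_license = 4_000          # recurring license fees (assumed; see point 3)
    extra_bandwidth = 12_000        # annual cost of simply buying more bandwidth today (assumed)
    annual_price_drop = 0.25        # assume bandwidth gets ~25% cheaper each year

    appliance_total = float(appliance_cost)
    bandwidth_total = 0.0
    for year in range(1, 4):
        appliance_total += annual_license
        bandwidth_total += extra_bandwidth * (1 - annual_price_drop) ** (year - 1)
        print(f"year {year}: appliance ${appliance_total:,.0f} vs. bandwidth ${bandwidth_total:,.0f}")

With these particular made-up numbers, simply buying bandwidth stays cheaper over three years; with your numbers, it may not. The exercise, not the outcome, is the point.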

There are certainly times when WAN optimization makes sense, yet in many cases what appears to be a no-brainer decision at first will be called into question as costs mount down the line. Hopefully these five contributing factors paint a clearer picture of what to expect.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Optimizing Your WAN Is Not The Same As Optimizing Your Internet Link — Here’s Why…


WAN optimization is a catch-all phrase for making a network more efficient, yet few products distinguish between optimizing a WAN link and optimizing an Internet link, and the methods used for the latter do not necessarily overlap with WAN optimization. In this article, we’ll break down the differences and similarities between the two practices and explain why WAN optimization tends to be the more common, yet not necessarily the more effective, of the two techniques.

Some Basic Definitions

A WAN link is always a point-to-point link where an institution/business controls both ends of the link. However, a WAN link does not provide Internet access.

On the other hand, an Internet link is one where one end terminates in a business/home/institution and the other end terminates in the Internet cloud, thus providing the former with Internet access.

A VPN link is a special case of a WAN link where the link traverses the public Internet to reach another location within an organization. This is not an Internet link by the definition above.

Whether we’re talking about small businesses, home users, or public entities such as libraries and schools, there are far more Internet links out there than WAN links. Each of these entities will almost certainly have a dedicated Internet link, while many will not have a WAN link at all.

Some Common Questions

If Internet links far outnumber WAN links, why are there so many commercial products dedicated to optimizing WAN links and so few specifically dedicated to Internet optimization?

There are a few reasons for this:

  1. WAN optimization is fairly easy to measure and quantify, so a WAN optimization vendor can easily demonstrate value by showing before-and-after results.
  2. Many WAN-based applications — Citrix, SQL queries, etc. — are inherently inefficient and in need of optimization.
  3. The market is flooded with vendors and analysts (such as Gartner) who all tend to promote and sustain the WAN optimization market.
  4. WAN optimization tools also double as reporting and monitoring tools, which administrators gravitate toward.
  5. A large number of commercial Internet connections are located at small or medium-sized businesses, where the ROI on an optimization device for the Internet link is either not compelling or not well understood.

Why is a WAN optimizing tool not the best tool to optimize an Internet link? Don’t the methodologies overlap?

Most of the methods used by a WAN optimizing appliance make use of two principles:

  1. The organization owns both ends of the link and will use two optimizing devices — one at each end. For example, compression techniques require that you own both ends of the link (see the sketch after this list). As mentioned earlier, you cannot control both ends of an Internet link.
  2. The types of traffic running over a WAN link are consistent and well defined. Organizations tend to do the same thing over and over again on their internal links. On an Internet link, by contrast, the traffic varies from minute to minute and cannot be easily quantified.
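
As a small illustration of point 1 above, the sketch below compresses traffic at one end and restores it at the other. The payload is made up; the takeaway is that the trick only works when a matching appliance sits at the far end, which an Internet link does not have.

    import zlib

    # Repetitive traffic of the kind WAN applications generate (illustrative payload).
    payload = b"SELECT * FROM orders WHERE region = 'WEST';" * 50

    compressed = zlib.compress(payload)       # appliance at the sending office
    print(f"{len(payload)} bytes -> {len(compressed)} bytes on the wire")

    restored = zlib.decompress(compressed)    # matching appliance at the far office
    assert restored == payload                # no peer appliance, no decompression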

So, how does one optimize unbounded traffic coming into an Internet link?

You need an appliance, such as a NetEqualizer, that dynamically manages all of the flows on the link. But don’t take it from us: you can also check in on what existing NetEqualizer users are saying.

How does a company quantify the cost of using a device to optimize their Internet link?

Admittedly, the results may be a bit subjective. The good news is that optimization companies will normally allow you to try an appliance before you buy. Most Internet providers, on the other hand, will require you to purchase a fixed-length contract.

The fact of the matter is that an Internet link can be rendered useless by a small number of users during peak times. Blindly upgrading your contract to accommodate this problem is akin to buying gourmet lunches for some employees while feeding everybody else microwave popcorn: in the end, the majority will be unhappy.

While the appropriate network optimization technique will vary from situation to situation, Internet optimization appliances tend to work well under most circumstances and are worth implementing. Or, at the very least, they’re worth exploring before signing on to a long-term bandwidth increase with your ISP.

See: Related Discussion on Internet Congestion and predictability.

The True Price of Bandwidth Monitoring


By Art Reisman


For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional cost of monitoring your Internet has two parts: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. We assert that both costs rise with the complexity of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it provides, and the more time an administrator is likely to spend adjusting and mucking about, looking for optimal performance.

But is it fair to assume higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool with no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only outcome was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he will be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work needed to adjust the network is done, the associated adjustments can remain statically in place. In reality, network traffic changes constantly, and the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that using a bandwidth tool is a net productivity loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of human intervention. For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s. The function of the computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, any time the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, and so on, an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problem of a network locking up goes away, leaving what we would call only “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and perhaps one or two percent of users will be well above it. You don’t need a fancy tool to see what they are doing; the abuse becomes obvious from the usage numbers alone (a simple report).
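
As a minimal sketch of such a simple report, with made-up usage data, flagging the heavy users takes only a few lines:

    from statistics import mean, stdev

    usage_mb = {                     # weekly usage per user, in megabytes (made-up data)
        "user01": 900, "user02": 1100, "user03": 850,
        "user04": 1000, "user05": 950, "user06": 23000,
    }

    avg = mean(usage_mb.values())
    sd = stdev(usage_mb.values())

    for user, mb in sorted(usage_mb.items(), key=lambda kv: -kv[1]):
        flag = "  <-- well above the mean" if mb > avg + 2 * sd else ""
        print(f"{user}: {mb:>6} MB{flag}")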

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model of network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth were free, it would still be human nature to want to know specifically what bandwidth is being used for, with detailed information on the type of traffic. There is nothing wrong with this desire, but we wonder how strong it would be if the savings from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

  • List of monitoring tools compiled by Stanford
  • Planetmy
  • Linux Tips
  • How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.