How Much Bandwidth Do You Really Need?


By Art Reisman – CTO – www.netequalizer.com


When it comes to how much money to spend on the Internet, there seems to be this underlying feeling of guilt with everybody I talk to. From ISPs to libraries to multinational corporations, they all have a feeling of bandwidth inadequacy. It is very similar to the guilt I used to feel back in college when I would skip my studies for some social activity (drinking). Only now it applies to bandwidth contention ratios. Everybody wants to know how they compare with the industry average in their sector. Are they spending appropriately on bandwidth, and if not, are they hurting their institution? Will they become second-rate?

To ease the pain, I was hoping to put together a nice chart of industry-standard recommendations, validating that your bandwidth consumption is normal, but I just can’t bring myself to do it quite yet. There is an elephant in the room that we must contend with first. So before I make up a nice chart of recommendations, a more relevant question is… how bad do you want your video service to be?

Your choices are:

  1. bad
  2. crappy
  3. downright awful

Although my answer may seem a bit sarcastic, there is a truth behind these choices. I sense that much of the guilt our customers feel when provisioning bandwidth is based on the belief that somebody out there has enough bandwidth to reach some form of video Shangri-La; like playground children bragging about their fathers’ professions, claims of video ecstasy are somewhat exaggerated.

With the advent of video, it is unlikely any amount of bandwidth will ever outrun the demand; yes, there are some tricks with caching and cable on-demand services, but that is a whole different article. The common trap with bandwidth upgrades is the false sense of accomplishment experienced before actual video use picks up. If you go from a network where nobody is running video (because it just doesn’t work at all) and then increase your bandwidth by a factor of 10, you will get a temporary reprieve where video seems reliable, but this will tempt your users to adopt it as part of their daily routine. In reality you are most likely not even close to meeting the potential end-game demand, and three months later you are likely facing another bandwidth upgrade with unhappy users.

To understand the video black hole, it helps to compare the potential demand curve pre- and post-video.

A quality VoIP call, which used to be the measuring stick for decent Internet service, runs at about 54 kbps. A quality HD video stream can easily consume about 40 times that amount.

Yes, there are vendors that claim video can be delivered at 250 kbps or less, but they are assuming tiny, stop-action screens.

Couple this tremendous increase in video stream size with a higher percentage of users that will ultimately want video, and you would need an upgrade of perhaps 60 times your pre-video bandwidth levels to meet the final demand. Some of our customers, with big budgets or government-subsidized backbones, are getting close, but most go on a honeymoon with an upgrade of 10 times their bandwidth, only to end up asking the question: how much bandwidth do I really need?
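As a rough back-of-the-envelope check on that 60x figure, here is the arithmetic spelled out in a few lines of Python. The user count and the streaming percentages are illustrative assumptions, not survey data:

```python
# Rough back-of-the-envelope estimate of post-video bandwidth demand.
# All figures are illustrative assumptions chosen to mirror the article's numbers.

VOIP_KBPS = 54                      # a quality VoIP call, the old measuring stick
HD_VIDEO_KBPS = 40 * VOIP_KBPS      # an HD stream at roughly 40 times a VoIP call

users = 300                         # users sharing the link (assumed)
pre_video_streaming = 0.20          # fraction active at peak before video took off (assumed)
post_video_streaming = 0.30         # fraction expected to stream once video "works" (assumed)

pre_demand_mbps = users * pre_video_streaming * VOIP_KBPS / 1000
post_demand_mbps = users * post_video_streaming * HD_VIDEO_KBPS / 1000

print(f"pre-video peak demand : {pre_demand_mbps:6.1f} Mbps")
print(f"post-video peak demand: {post_demand_mbps:6.1f} Mbps")
print(f"upgrade factor        : {post_demand_mbps / pre_demand_mbps:.0f}x")   # ~60x
```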

So what is an acceptable contention ratio?

  • Typically in an urban area right now we are seeing anywhere from 200 to 400 users sharing 100 megabits.
  • In a rural area, double that ratio – 400 to 800 sharing 100 megabits.
  • In the smaller cities of Europe ratios drop to 100 people or less sharing 100 megabits.
  • And in remote areas served by satellite we see 40 to 50 sharing 2 megabits or less.

Is Equalizing Technology the Same as Bandwidth Fairness?


Editor’s Note:

The following was posted in a popular forum in response to the assumption that the NetEqualizer is a simple fairness engine. We can certainly understand how our technology might be lumped into the same bucket as simple fairness techniques; however, equalizing provides a much more sophisticated solution, as the poster describes in detail below.

You have stated your reservations, but I am still going to have to recommend the NetEqualizer. Carving up the bandwidth equally will mean that the user perception of the Internet connection will be poor even when you have bandwidth to spare. It makes more sense to have a device that can maximize the user’s perception of a connection. Here are some example scenarios.

NetEQ when utilization is low, and it is not doing anything:
User perception of Skype like services: Good
User perception of Netflix like services: Good
User perception of large file downloads: Good
User perception of “ajaxie” webpages that constantly update some doodad on the page: Good
User perception of games: Good

Equally allocated bandwidth when utilization is low:
User perception of Skype like services: OK as long as the user is not doing anything else.
User perception of Netflix like services: OK as long as the user is not doing anything else.
User perception of large file downloads: Slow all of the time regardless of where the user is downloading the file from.
User perception of “ajaxie” webpages that constantly update some doodad on the page: OK
User perception of games: OK as long as the user is not doing anything else. That is until the game needs to download custom content from a server, then the user has to wait to enter the next round because of the hard rate limit.

NetEQ when utilization is high and penalizing the top flows:
User perception of Skype like services: Good
User perception of Netflix like services: Good – The caching bar at the bottom should be slightly delayed, but the video shouldn’t skip. The user is unlikely to notice.
User perception of large file downloads: Good – The file is delayed a bit, but will still download relatively quickly compared to a hard bandwidth cap. The user is unlikely to notice.
User perception of “ajaxie” webpages that constantly update some doodad on the page: Good
User perception of games: Good – downloading content between rounds might be a tiny bit slower, but fast compared to a hard rate limit.

Equally allocated bandwidth when utilization is high:
User perception of Skype like services: OK as long as the user is not doing anything else.
User perception of Netflix like services: OK as long as the user is not doing anything else.
User perception of large file downloads: Slow all of the time regardless of where the user is downloading the file from.
User perception of “ajaxie” webpages that constantly update some doodad on the page: OK as long as the user is not doing anything else.
User perception of games: OK as long as the user is not doing anything else. That is until the game needs to download custom content from a server, then the user has to wait to enter the next round because of the hard rate limit.

As far as the P2P thing is concerned: while I too realized that theoretically P2P would be favored, in practice it wasn’t really noticeable. If you wish, you can use connection limits to deal with this.

One last thing to note:  On Obama’s inauguration day, the NetEQ at our University was able to tame the ridiculous number of live streams of the event without me intervening to change settings.  The only problems reported turned out to be bandwidth problems on the other end.
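To make the poster’s comparison concrete, here is a minimal sketch of the equalizing idea: under low utilization nothing is touched, and under high utilization only the heaviest flows are delayed, rather than every user being hard-capped. The thresholds, flow names, and penalty formula below are illustrative assumptions, not NetEqualizer internals:

```python
# Minimal sketch of "equalizing" versus a hard per-user cap. The ratio,
# penalty formula, and flow names are illustrative only; this is not the
# actual NetEqualizer implementation.

def equalize(flows, link_capacity_kbps, ratio=0.85, penalty_share=0.15):
    """flows: dict of flow_id -> current rate in kbps.
    Returns flow_id -> added queue delay penalty (arbitrary units)."""
    utilization = sum(flows.values()) / link_capacity_kbps
    penalties = {flow: 0.0 for flow in flows}
    if utilization < ratio:
        return penalties                      # low utilization: touch nothing

    # High utilization: penalize only the heaviest flows, largest first.
    heavy_count = max(1, int(len(flows) * penalty_share))
    for flow in sorted(flows, key=flows.get, reverse=True)[:heavy_count]:
        penalties[flow] = flows[flow] / link_capacity_kbps   # bigger flow, bigger delay
    return penalties

flows = {"skype_call": 80, "netflix": 2500, "iso_download": 9000, "web": 150}
print(equalize(flows, link_capacity_kbps=10_000))
```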

What Is Deep Packet Inspection and Why the Controversy?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.

Article Updated March 2012

As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.

Exactly what is deep packet inspection?

All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.

The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.

When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).

Products sold that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data are packet shapers, Layer-7 traffic shaping, etc.
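For the technically curious, the split between the “address on the outside” and the “payload on the inside” can be seen in a few lines of code. Ordinary routing only needs the header fields; deep packet inspection is the act of reading past them into the payload. The packet bytes below are a made-up example:

```python
import socket
import struct

def split_ipv4_packet(raw: bytes):
    """Separate an IPv4 packet into its header (the 'address label' a router
    reads) and its payload (the 'freight' a DPI device would also read)."""
    header_len = (raw[0] & 0x0F) * 4            # IHL field, in 32-bit words
    return {
        "src": socket.inet_ntoa(raw[12:16]),    # source address
        "dst": socket.inet_ntoa(raw[16:20]),    # destination address
        "protocol": raw[9],                     # e.g. 6 = TCP
        "payload": raw[header_len:],            # everything past the header
    }

# A made-up 20-byte IPv4 header followed by a toy payload.
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20 + 11, 0, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"),
                     socket.inet_aton("93.184.216.34"))
packet = header + b"hello world"

info = split_ipv4_packet(packet)
print(info["src"], "->", info["dst"])   # ordinary routing needs only this much
print(info["payload"])                  # deep packet inspection reads this part too
```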

How is deep packet inspection related to net neutrality?

Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.

Why do some Internet providers use deep packet inspection devices?

There are several reasons:

1) Targeted advertising — If a provider knows what you are reading, they can display targeted advertising on the pages they control, such as your login screen or e-mail account.

2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem as less desirable such as Bittorrent and other forms of peer-to-peer. Bittorrent traffic can overwhelm a network with volume. By detecting and redirecting the Bittorrent traffic, or slowing it down, a provider can alleviate congestion.

3) Block offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.

4) Government spying — In the case of Iran (and to some extent China), DPI is used to keep tabs on the local population.

When is it appropriate to use deep packet inspection?

1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.

2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.

3) Intrusion detection and prevention — It is one thing to act as an ISP and eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. For example, in a private home it is within your rights to look through your peephole and not let shady characters in. In a private business, it is a good idea to use deep packet inspection to block unwanted intruders from your network. Blocking bad guys before they break in and damage your network is perfectly acceptable.

4) Spam filtering — Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don’t read it, but computer scanners do), the motives are understood. What consumers may not realize is that their email provider is also reading everything they do in order to serve targeted advertising.

Does content filtering use deep packet inspection?

For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as routers need to look them up anyway. We have only encountered content filters at private institutions, which are within their rights to use them.

What about spam filtering? Does that use deep packet inspection?

Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering most people have opted in at one point or another, hence it is generally done with permission.

What is all the fuss about?

It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.

For example, this is an excerpt from a recent PC World article:

Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.

Paul Stephens, director of policy and advocacy for the Privacy Rights Clearinghouse, as quoted in the E-Commerce Times on November 14, 2008. Read the full article here.

Recently, Comcast had their hand slapped for redirecting BitTorrent traffic:

Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.

— Digital Daily, March 10, 2008. Read the full article here.

Later in 2008, the FCC came down hard on Comcast.

In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.

By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.

Wired.com, August 1, 2008. Read the full article here.

To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.

University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.

Wired.com, May 22, 2008. Read the full article here.

However, it looks like things are going the other way in the U.K., as Britain’s Virgin Media has announced they are dumping net neutrality in favor of targeting BitTorrent.

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The Register, December 16, 2008. Read the full article here.

Canadian ISPs confess en masse to deep packet inspection in January 2009.

With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.

Tech Spot, January 21, 2009. Read the full article here.

In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.

In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.

Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.

The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.

7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.

Kyle Brady, July 27, 2009. Read the full article here.

[February 2011 Update] The Egyptian government uses DPI to filter elements of its Internet traffic, and this act in itself has become a news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering report on Narus, the company that makes the DPI technology and sold it to the Egyptians.

While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Ten Things to Consider When Choosing a Bandwidth Shaper


This article is intended as an objective guide for anyone trying to narrow down their options in the bandwidth controller market. Organizations today have a plethora of product options to choose from. To further complicate your choices, not only are there specialized bandwidth controllers, but most firewall and router products today also contain some form of bandwidth shaping and QoS features.

What follows is an all-encompassing list of questions that will help you quickly organize your priorities with regard to choosing a bandwidth shaper.

1) What is the Cost of Increasing your Bandwidth?

Although this question may be a bit obvious, it must be asked. We assume that anybody in the market for a bandwidth controller also has the option of increasing their bandwidth. The costs of purchasing and operating a bandwidth controller should ultimately be compared with the cost of increasing bandwidth on your network.

2) How much Savings should you expect from your Bandwidth Controller?

A good bandwidth controller in many situations can increase your carrying capacity by up to 50 percent. However, beware: some technologies designed to optimize your network can create labor overhead in maintenance hours. Labor costs with some solutions can far exceed the cost of adding bandwidth.

3) Can you Out-run your Organization’s Appetite for Increased Bandwidth with a One-Time Bandwidth Upgrade?

The answer is yes, it is possible to buy enough bandwidth such that all your users cannot possibly exhaust the supply. The bad news is that this solution is usually cost-prohibitive. Many organizations that come to us have previously doubled their bandwidth, sometimes more than once, only to be back to overwhelming congestion within a few months of their upgrade. The appetite for bandwidth is insatiable, and in our opinion, at some point a bandwidth control device becomes your only rational option. Outrunning your user base is usually only possible where Internet infrastructure is subsidized by a government entity, hiding the true costs. For example, a small university with 1,000 students will likely not be able to consume a true 5-gigabit pipe, but purchasing a pipe of that size would be out of reach for most US-based universities.

4) How Valuable is Your Time? Are you a Candidate for a Freeware-type Solution?

What we have seen in the marketplace is that small shops with high technical expertise, or small ISPs on a budget, can often make use of a freeware, do-it-yourself bandwidth control solution. If you are cash-strapped, this may be a viable solution for you. However, please go into this with your eyes open. The general pitfalls and risks are as follows:

a) Staff can easily run up 80 or more hours trying to save a few thousand dollars by fiddling with an unsupported solution. And this is only for the initial installation and set-up. Over the useful life of the solution, this overhead can continue at a high level due to the unsupported nature of these technologies.

b) Unless it gives them a very large competitive advantage, investors do not like to invest in businesses built on homegrown technology, for many reasons: finding personnel to sustain the solution, upgrading and adding features, and the overall risk of keeping it in working order. You can easily shoot yourself in the foot with prospective buyers by becoming too dependent on homegrown, freeware solutions in order to save costs. When you rely on something homegrown, it generally means an employee or two holds the keys to the operational knowledge, and potential buyers can become uncomfortable (you would be too!).

5) Are you Looking to Enforce Bandwidth Limits as part of a Rate Plan that you Resell to Clients?

For example, let’s say that you have a good-sized backbone of bandwidth at a reasonable cost per megabit, and you just want to enforce class-of-service speeds to sell your bandwidth in incremental revenue chunks.

If this is truly your only requirement, and not optimization to support high contention ratios, then you should be careful not to overspend on your solution. A basic NetEqualizer or Allot system may be all that you need. You can also most likely leverage the bandwidth control features bundled into your router or firewall. The thing to be careful of when using your router/firewall is that these devices can become overwhelmed due to a lack of horsepower.
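As a rough illustration of what “enforcing class-of-service speeds” boils down to, here is a generic token-bucket sketch. It is not a NetEqualizer or router feature, and the tier names, rates, and burst sizes are made up:

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter, one per subscriber tier.
    The tier names, rates, and burst sizes below are made up for illustration."""
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True            # under the plan rate: forward the packet
        return False               # over the plan rate: drop or queue it

# Example rate plans: a 5 Mbps tier and a 20 Mbps tier.
plans = {
    "bronze": TokenBucket(5_000_000 / 8, burst_bytes=64_000),
    "gold":   TokenBucket(20_000_000 / 8, burst_bytes=256_000),
}
print(plans["bronze"].allow(1500))     # True while the subscriber is under 5 Mbps
```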

6) Are you just Trying to Optimize the Bandwidth that you have, based on Well-Known Priorities?

Some context:

If you have a very static network load, with a finite, well-defined set of applications running through your enterprise, there are application shaping (Layer-7 shaping) products out there, such as the Blue Coat PacketShaper, which uses deep packet inspection and can be set up once to allocate different amounts of bandwidth based on application. If the PacketShaper is a bit too pricey, the Cymphonix product can also detect most common applications.

If you are trying to optimize your bandwidth across a variable, wide-open plethora of applications, then you may find yourself with extremely high maintenance costs using a Layer-7 application shaper. A generic behavior-based product such as the NetEqualizer will do the trick.

Update 2015

Note: We are seeing quite a bit of encryption on common applications. We strongly recommend avoiding Layer-7 type devices for public Internet traffic, as their accuracy is diminishing because encrypted traffic cannot be reliably classified. A heuristics-based, behavior-based approach is advised.

7) Make Sure What Looks Elegant on the Cover Does Not Have Hidden Costs by Doing a Little Research on the Internet.

Yes, this is an obvious one too, but don’t forget your due diligence!

Before purchasing any traffic shaping solution, you should try a simple Internet search with well-placed keywords to uncover objective opinions. Testimonials supplied by the vendor are a good source of information, but they only tell half the story. Current customers are always biased toward their decision, sometimes to the point of ignoring a better solution.

If you are not familiar with this technology, or do not have the in-house expertise to work with a traffic shaper, you may want to consider buying additional bandwidth as your solution. In order to assess whether this is viable for you, we recommend you think about the following: How much bandwidth do you need? What is the appropriate amount for your ISP or organization? We actually dedicated a complete article to this question.

8) Are you a Windows Shop?  Do you expect a Microsoft-based solution due to your internal expertise?

With all respect to Microsoft and the strides they have made toward reliability in their server solutions, we believe that you should avoid a Windows-based product for any network routing or bandwidth control mission.

To be effective, a bandwidth control device must be placed such that all traffic is forced to pass through the device. For this reason, all manufacturers that we are aware of develop their network devices using a derivative of Linux. Linux is open source, which means that an OEM can strip down the operating system to its simplest components. The simpler the operating system in your network device, the less that can go wrong. With Windows, however, the core OS source code is not available to third-party developers, so an OEM may not always be able to track down serious bugs. This is not to say that bugs do not occur in Linux; they do, but the OEM can often get a patch out quickly.

For the IT person trained on Windows, a well-designed networking device presents its interface via a standard web page. Hence, a technician likely needs no specific Linux background.

9) Are you a CIO (or C-level Executive) Looking to Automate and Reduce Costs?

Bandwidth controllers can become a means to do cool things with a network. Network administrators can get caught up reading fancy reports, making daily changes, and interpreting results, which can become extremely labor-intensive. There is a price/benefit crossover point where a device can create more work (labor cost) than bandwidth saved. We have addressed this paradox in detail in a previous article.

10) Do you have any Legal or Political Requirement to Maintain Logs or Show Detailed Reports to a Third Party (i.e., management, oversight committee, etc.)?

For example…

A government requirement to provide data wiretaps dictated by CALEA?

Or a monthly report on employee Internet behavior?

Related article: How to choose the right bandwidth management solution.

Links to other bandwidth control products on the market.

Packet Shaper by Blue Coat

NetEqualizer ( my favorite)

Exinda

Riverbed

Exinda, Packet Shaper, and Riverbed tend to focus on the enterprise WAN optimization market.

Cymphonix

Cymphonix comes from a background of detailed reporting.

Emerging Technologies

A very solid product for bandwidth shaping.

Exinda

Exinda from Australia has really made a good run in the US market, offering a good alternative to the incumbents.

Netlimiter

For those of you who are wed to Windows, NetLimiter is your answer.

Antamediabandwidth

How Does Your ISP Actually Enforce Your Internet Speed?


By Art Reisman, CTO, www.netequalizer.com



Have you ever wondered how your ISP manages to control the speed of your connection? If so, you might find the following article enlightening. Below, we’ll discuss the various techniques used to enforce and break out bandwidth rate limits, and the side effects of using those techniques.

Dropping Packets (Cisco term “traffic policing”)

One of the simplest methods for a bandwidth controller to enforce a rate cap is by dropping packets. When using the packet-dropping method, the bandwidth-controlling device counts the total number of bytes that cross a link during a second. If the target rate is exceeded during any single second, the bandwidth controller will drop packets for the remainder of that second. For example, if the bandwidth limit is 1 megabit, and the bandwidth controller counts 1 million bits gone by in half a second, it will drop packets for the remainder of the second. The counter then resets for the next second. From most evidence we have observed, rate caps enforced by many ISPs use the drop-packet method, as it is the least expensive method supported on most basic routers.
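To make the mechanics concrete, here is a minimal sketch of that byte-counting policer, assuming a one-second window and a configurable bit limit. The class name and structure are illustrative, not any vendor’s actual code:

```python
import time

class Policer:
    """Simplified 'drop packets' rate cap: count bytes in each one-second
    window and drop anything past the limit until the window resets."""
    def __init__(self, limit_bits_per_sec):
        self.limit_bytes = limit_bits_per_sec / 8
        self.window_start = time.monotonic()
        self.bytes_seen = 0

    def accept(self, packet_len):
        now = time.monotonic()
        if now - self.window_start >= 1.0:     # new second: reset the counter
            self.window_start = now
            self.bytes_seen = 0
        self.bytes_seen += packet_len
        return self.bytes_seen <= self.limit_bytes   # False means drop the packet

policer = Policer(limit_bits_per_sec=1_000_000)      # a 1-megabit cap
print(policer.accept(1500))    # True until roughly 125 KB have passed this second
```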

So, what is wrong with dropping packets to enforce a bandwidth cap?

Well, when a link hits a rate cap and packets are dropped en masse, it can wreak havoc on a network. For example, the standard reaction of a Web browser when it perceives web traffic is getting lost is to re-transmit the lost data. For a better understanding of dropping packets, let’s use the analogy of a McDonald’s fast food restaurant.

Suppose the manager of the restaurant was told his bonus was based on making sure there was never a line at the cash register. So, whenever somebody showed up to order food when all registers were occupied, the manager would open a trap door, conveniently ejecting the customer back out into the parking lot. The customer, being extremely hungry, will come running back in the door (unless of course they die of starvation or get hit by a car) only to be ejected again. To make matters worse, let’s suppose a busload of school kids arrives. As the kids file into the McDonald’s, the ones remaining on the bus have no idea their classmates inside are getting ejected, so they keep streaming into the McDonald’s. Hopefully, you get the idea.

Well, when bandwidth shapers deploy packet-dropping technology to enforce a rate cap, you can get the same result seen with the trapdoor analogy at the McDonald’s. Web browsers and other user-based applications will beat their heads against the wall when they don’t get responses from their counterparts on the other end of the line. When packets are being dropped en masse, the network tends to spiral out of control until all the applications essentially give up. Perhaps you have seen this behavior while staying at a hotel with an underpowered Internet link. Your connectivity will alternate between working and then hanging up completely for a minute or so during busy hours. This can obviously be very maddening.

The solution to shaping bandwidth on a network without causing gridlock is to implement queuing.

Queuing Packets (Cisco term “traffic shaping”)

Queuing is the art of putting something in a line and making it wait before continuing on. Obviously, this is what fast food restaurants actually do. They plan for enough staff on hand to handle the average traffic throughout the day, and then queue up their customers when they arrive at a faster rate than orders can be filled. The assumption with this model is that at some point during the day the McDonald’s will get caught up with the number of arriving customers and the lines will shrink away.

Another benefit of queuing is that wait times can perhaps be estimated by customers as they drive by and see the long line extending out into the parking lot, and thus, they will save their energy and not attempt to go inside.

But, what happens in the world of the Internet?

With queuing methods implemented, a bandwidth controller looks at the data rate of the incoming packets, and if it is deemed too fast, it will delay the packets in a queue. The packets will eventually get to their destination, albeit somewhat later than expected. Packets in a queue can pile up very quickly, and without some help, the link would saturate. The computer memory used to store the packets in the queue would also saturate and, much like the scenario mentioned above, packets would eventually get dropped if they continued to come in faster than they were sent out.
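As a rough sketch of the queuing alternative (again illustrative only, not any vendor’s implementation), packets over the limit are held in a bounded queue and released at the target rate; only when the queue itself fills up do packets get dropped:

```python
import collections

class Shaper:
    """Simplified 'queue packets' shaper: release bytes at the target rate,
    delay the rest, and drop only when the queue itself fills up."""
    def __init__(self, rate_bytes_per_sec, max_queue_bytes):
        self.rate = rate_bytes_per_sec
        self.max_queue = max_queue_bytes
        self.queue = collections.deque()
        self.queued_bytes = 0

    def enqueue(self, packet):
        if self.queued_bytes + len(packet) > self.max_queue:
            return False                     # queue (memory) is full: must drop
        self.queue.append(packet)
        self.queued_bytes += len(packet)
        return True                          # packet delayed, not lost

    def dequeue_for_interval(self, seconds):
        """Send whatever the target rate allows during this time slice."""
        budget = self.rate * seconds
        sent = []
        while self.queue and len(self.queue[0]) <= budget:
            pkt = self.queue.popleft()
            budget -= len(pkt)
            self.queued_bytes -= len(pkt)
            sent.append(pkt)
        return sent

shaper = Shaper(rate_bytes_per_sec=125_000, max_queue_bytes=64_000)   # ~1 Mbps
shaper.enqueue(b"x" * 1500)
print(len(shaper.dequeue_for_interval(0.1)))   # packets released in this interval
```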

TCP to the Rescue (keeping queuing under control)

Most Internet applications use a protocol called TCP (Transmission Control Protocol) to handle their data transfers. TCP has built-in intelligence to figure out the speed of the link on which it is sending data, and can then make adjustments. When the NetEqualizer bandwidth controller queues a packet or two, the TCP controllers on the customer end-point computers will sense the slower packets and back off the speed of the transfer. With just a little bit of queuing, the sender slows down a bit and dropping packets can be kept to a minimum.

Queuing Inside the NetEqualizer

The NetEqualizer bandwidth shaper uses a combination of queuing and dropping packets to get speed under control. Queuing is the first option, but when a sender does not back off eventually, their packets will get dropped. For the most part, this combination of queuing and dropping works well.

So far we have been describing the simple case of a single sender and a single queue, but what happens if you have a gigabit link with 10,000 users and you want to break off 100 megabits to be shared by 3,000 users? How would a bandwidth shaper accomplish this? This is another area where a well-designed bandwidth controller like the NetEqualizer separates itself from the crowd.

In order to provide smooth shaping for a large group of users sharing a link, the NetEqualizer does several things in combination.

  1. It keeps track of all streams and, based on their individual speeds, applies a different queue delay to each stream.
  2. Streams that back off will get minimal queuing.
  3. Streams that do not back off may eventually have some of their packets dropped.

The net effect of the NetEqualizer queuing intelligence is that all users will experience steady response times and smooth service.
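Conceptually, those three behaviors can be sketched as a per-stream table of measured rates, where heavier streams earn longer queue delays and a stream that keeps growing despite the delay eventually has packets dropped. The fair-share figure, delay formula, and cutoff below are illustrative assumptions, not actual NetEqualizer settings:

```python
# Conceptual sketch of per-stream queue delays with a drop of last resort.
# The fair-share figure, delay formula, and cutoff are illustrative only.

def schedule_streams(stream_rates_kbps, fair_share_kbps, max_delay_ms=200):
    """stream_rates_kbps: dict of stream_id -> measured rate.
    Returns stream_id -> (queue_delay_ms, drop) decisions."""
    decisions = {}
    for stream, rate in stream_rates_kbps.items():
        if rate <= fair_share_kbps:
            decisions[stream] = (0, False)                 # small or backing off: no queuing
            continue
        overage = rate / fair_share_kbps
        delay = min(max_delay_ms, int(10 * overage))       # heavier stream, longer delay
        drop = delay >= max_delay_ms                       # ignored the delays: start dropping
        decisions[stream] = (delay, drop)
    return decisions

print(schedule_streams({"voip": 64, "netflix": 3000, "torrent": 25_000},
                       fair_share_kbps=1000))
```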

Notes About UDP and Rate Limits

Some applications such as video do not use TCP to send data. Instead, they use a “send-and-forget” mechanism called UDP, which has no built-in back-off mechanism. Without some higher intelligence, UDP packets will continue to be sent at a fixed rate, even if the packets are coming too quickly for the receiver.  The good news is that even most UDP applications also have some way of measuring if their packets are getting to their destination. It’s just that with UDP, the mechanism of synchronization is not standardized.

Finally, there are those applications that just don’t care whether the packets get to their destination. Speed tests and viruses send UDP packets as fast as they can, regardless of whether the network can handle them or not. The only way to enforce a rate cap with such ill-mannered applications is to drop their packets.

Hopefully this primer has given you a good introduction to the mechanisms used to enforce Internet speeds, namely dropping packets and queuing. And maybe you will think about this the next time you visit a fast food restaurant during their busy time…

NetEqualizer reaches 5 Gigabit milestone, strengthens market lead in bandwidth controller price performance.



LAFAYETTE, Colo., Sep. 15 – APconnections, a leading supplier of bandwidth shaping products, today announced the addition of a 5-gigabit model to their NetEqualizer brand of traffic shapers. The initial release will also be able to shape 40,000 simultaneous Internet users.

“Prior to this release, our largest model was rated for one gigabit,” said Eli Riles, APconnections vice president of sales. “Many of our current customers liked our technology, but just needed a higher-end machine. The price performance of our new traffic shaping appliance is unmatched in the industry.”

In its initial release, the five-gigabit model will start at $11,000 USD. For more information, contact APconnections at 1-800-918-2763 or via email at sales@netequalizer.com.

The NetEqualizer is a plug-and-play bandwidth control and WAN optimization appliance. NetEqualizer technology is deployed at over 3,000 businesses and institutions around the world. It is used to speed up shared Internet connections for ISPs, libraries, universities, schools and Fortune 500 companies.

APconnections is a privately held company founded in 2003 and based in Lafayette, Colorado.

Contact: APconnections, 1-800-918-2763 http://www.apconnections.net/

http://www.netequalizer.com/

Special thanks to Candela Technologies (www.candelatech.com) and their network emulation laboratories for making this release possible.

$1000 Discount Offered Through NetEqualizer Cash For Conversion Program


After witnessing the overwhelming popularity of the government’s Cash for Clunkers new car program, we’ve decided to offer a similar deal to potential NetEqualizer customers. Therefore, this week we’re announcing the launch of our Cash for Conversion program. The program offers owners of select brands (see below) of network optimization technology a $1000 credit toward the list-price purchase of NetEqualizer NE2000-10 or higher models (click here for a full price list). All you have to do is send us your old (working or not) or out-of-license bandwidth control technology. Products from the following manufacturers will be accepted:

  • Exinda
  • Packeteer/Blue Coat
  • Allot
  • Cymphonix
  • Procera

In addition to receiving the $1000 credit toward a NetEqualizer, program participants will also have the peace of mind of knowing that their old technology will be handled responsibly through refurbishment or electronics recycling programs.

Only the listed manufacturers’ products will qualify. Offer good through the Labor Day weekend (September 7, 2009). For more information, contact us at 303-997-1300 or admin@apconnections.net.

The True Price of Bandwidth Monitoring


By Art Reisman


For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into a network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at the cost of monitoring your Internet has two parts: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. Ironically, we assert that total costs rise with the sophistication of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking about, looking for optimal performance.

But is it fair to assume higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. In reality, however, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies with a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s. The function of a computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise many of our customers have reached is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving only what we would call “chronic” problems, which may need to be addressed eventually but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; the abuse becomes obvious just by looking at the usage numbers in a simple report.
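For those who like to script it, here is one way to act on that kind of simple report: total each user’s weekly usage and flag the handful sitting far above the mean. The sample numbers and the two-standard-deviation threshold are invented for illustration:

```python
import statistics

# Per-user totals for the week, in gigabytes (invented sample data).
usage_gb = {"user01": 3.1, "user02": 4.8, "user03": 2.2, "user04": 5.0,
            "user05": 3.7, "user06": 61.4, "user07": 4.1, "user08": 2.9}

mean = statistics.mean(usage_gb.values())
stdev = statistics.stdev(usage_gb.values())

# Anyone sitting more than two standard deviations above the mean stands out
# immediately, with no per-application detail required.
heavy_users = {user: gb for user, gb in usage_gb.items() if gb > mean + 2 * stdev}
print(f"mean {mean:.1f} GB, flagged: {heavy_users}")
```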

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

List of monitoring tools compiled by Stanford

Planetmy
Linux Tips
How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

APconnections Announces NetEqualizer Lifetime Buyer Protection Policy


This week, we announced the launch of the NetEqualizer Lifetime Buyer Protection Policy. In the event of an irreparable failure of a NetEqualizer unit at any time, or when it is simply time to retire a unit, customers will have the option to purchase a replacement unit and apply a 50-percent credit of their original unit purchase price toward the new unit. For current pricing, register for our price list. This includes units that are more than three years old (the expected useful life for hardware) and in service at the time of failure.

For example, if you purchased a unit in 2003 for $4000 and were looking to replace it or upgrade with a newer model, APconnections would kick in a $2000 credit toward the replacement purchase.

The Policy will be in addition to the existing optional yearly NetEqualizer Hardware Warranty (NHW), which offers customers cost-free repairs or replacement of any malfunctioning unit while NHW is in effect (read details on NHW).

Our decision to implement the policy was a matter of customer peace-of-mind rather than necessity. While the failure rate of any NetEqualizer unit is ultimately very low, we want customers to know that we stand behind our products – even if it’s several years down the line.

To qualify,

  • users must be the original owner of the NetEqualizer unit,
  • the customer must have maintained a support contract that has been current within the last 18 months (lapses of support longer than 18 months will void the replacement policy), and
  • the unit must have been in use on your network at the time of failure.

Shipping is not included in the discounted price. Purchasers of the one-year NetEqualizer hardware warranty (NHW) will still qualify for full replacement at no charge while under hardware warranty.  Contact us for more details by emailing sales@apconnections.net, or calling 303.997.1300 x103 (International), or 1.888.287.2492 (US Toll Free).

Note: This Policy does not apply to the NetEqualizer Lite.

Speeding up Your T1, DS3, or Cable Internet Connection with an Optimizing Appliance


By Art Reisman, CTO, APconnections (www.netequalizer.com)

Whether you are a home user or a large multinational corporation, you likely want to get the most out of your Internet connection. In previous articles, we have briefly covered using Equalizing (Fairness) as a tool to speed up your connection without purchasing additional bandwidth. In the following sections, we’ll break down exactly how this is accomplished in layman’s terms.

First, what is an optimizing appliance?

An optimizing appliance is a piece of networking equipment that has one Ethernet input and one Ethernet output. It is normally located between the router that terminates your Internet connection and the users on your network. From this location, all Internet traffic must pass through the device. When activated, the optimizing appliance can rearrange traffic loads for optimal service, thus preventing the need for costly new bandwidth upgrades.

Next, we’ll summarize equalizing and behavior-based shaping.

Overall, equalizing is a simple concept. It is the art form of looking at the usage patterns on the network, and when things get congested, robbing from the rich to give to the poor. In other words, heavy users are limited in the amount of bandwidth to which they have access in order to ensure that ALL users on the network can utilize the network effectively. Rather than writing hundreds of rules to specify allocations for specific traffic, as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

How is Fairness implemented?

If you have multiple users sharing your Internet trunk and somebody mentions “fairness,” it probably conjures up the image of each user waiting in line for their turn. And while a device that enforces fairness in this way would certainly be better than doing nothing, Equalizing goes a few steps further than this.

We don’t just divide the bandwidth equally like a “brain dead” controller. Equalizing is a system of dynamic priorities that rewards smaller users at the expense of heavy users. It is very, very dynamic, and there is no pre-set limit on any user. In fact, the NetEqualizer does not keep track of users at all. Instead, we monitor user streams. So, a user may be getting one stream (an FTP download) slowed down while at the same time having another stream (e-mail) untouched.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.

What is the result?

The end result is that applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority, while large downloads and p2p receive lower priority. Also, situations where we cut back large streams are generally of short duration. As an added advantage, this behavior-based shaping does not need to be updated constantly as applications change.

Trusting a heuristic solution such as the NetEqualizer is not always an easy step. Oftentimes, customers are concerned about accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception that we run into is video, and the NetEqualizer has a low-level routine that easily allows you to give overriding priority to a specific server on your network, hence solving the problem. The NetEqualizer also has a special feature whereby you can exempt and give priority to any IP address specifically, in the event that a large stream such as video must be given priority.

Through the implementation of Equalizing technology, network administrators are able to get the most out of their network. Users of the NetEqualizer are often surprised to find that their network problems were not a result of a lack of bandwidth, but rather a lack of bandwidth control.

See who else is using this technology.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

When is Deep Packet Inspection a Good Thing?


Commentary

Update September 2011

It seems some shareholders of a company that over-promised on Layer-7 technology are not happy.

By Eli Riles

As many of our customers are aware, we publicly stated back in October 2008 that we had officially switched all of our bandwidth control solutions over to behavior-based shaping. Consequently, we also completely disavowed deep packet inspection, in a move that Ars Technica described as “vendor throws deep packet inspection under the bus.”

In the last few weeks, there has been a barrage of attacks on Deep Packet Inspection, and then a volley of PR supporting it from those implementing the practice.

I had been sitting on an action item to write something in defense of DPI, and then this morning I came across a pro-DPI blog post in the New York Times. The following excerpt is in reference to using DPI to give priority to certain types of traffic such as gaming:

“Some customers will value what they see as low priority as high priority,” he said. I asked Mr. Scott what he thought about the approach of Plusnet, which lets consumers pay more if they want higher priority given to their game traffic and downloads. Surprisingly, he had no complaints.

“If you said to me, the consumer, ‘You can choose what applications to prioritize and which to deprioritize, and, oh, by the way, prices will change as a result of how you do this,’ I don’t have a problem with that,” he said.

The key to this excerpt is the phrase, “IF YOU ASK THE CONSUMER WHAT THEY WANT.” This implies permission. If you use DPI as an opt-in, above-board technology, then obviously there is nothing wrong with it. The threat to privacy is only an issue if you use DPI without consumer knowledge. It should not be up to the provider to decide the appropriate use of DPI, regardless of good intent.

The quickest way to deflate the objections of the DPI opposition is to allow consumers to choose. If you subscribe to a provider that allows you to have higher priority for certain applications, and it is in their literature, then by proxy you have granted permission to monitor your traffic. I can still see the Net Neutrality purists being unhappy with any differential service, but realistically I think there is a middle ground.

I read an article the other day where a defender of DPI practices (sorry no reference) pointed out how spam filtering is widely accepted and must use DPI techniques to be effective. The part the defender again failed to highlight was that most spam filtering is done as an opt-in with permission. For example, the last time I checked my Gmail account, it gave the option to turn the spam filter off.

In sum, we are fully in support of DPI technology when the customer is made aware of its use and has a choice to opt out. However, any use of DPI done unknowingly and behind the scenes is bound to create controversy and may even be illegal. The exception would be a court order for a legal wiretap. Therefore, the Deep Packet Inspection debate isn’t necessarily a black and white case of two mutually exclusive extremes of right and wrong. If done candidly, DPI can be beneficial to both the Internet user and provider.

See also what is deep packet inspection.

Eli Riles, a consultant for APconnections (Netequalizer), is a retired insurance agent from New York. He is a self-taught expert in network infrastructure. He spends half the year traveling and visiting remote corners of the earth. The other half of the year you’ll find him in his computer labs testing and tinkering with the latest network technology.

For questions or comments, please contact him at eliriles@yahoo.com.

Tucson Unified School District Could Use a Bandwidth Controller


The excerpt below from the Arizona Daily Star sums up the network gridlock situation at the Tucson Unified School District. The reason for posting this on our blog is the hope that other administrators will find us before they go out and commit to the recurring costs of additional expensive bandwidth.

At Fruchthendler Elementary School, one first-grade teacher was supposed to give an online assessment, only to find it took 10 minutes to load each question. She finally gave up and printed out the tests.

“We are a 21st-century school running on 20th-century bandwidth,” Little said. “I feel like I’m back to what I had in high school, which is pretty much nothing.”

Read the full article from the Arizona Daily Star.

Although we have no other details about the situation in Tucson and their gridlocked Internet service, we are confident that an affordably priced, 21st-century bandwidth control solution could certainly make a difference.

NetEqualizer is being used in school districts across the country and has been largely effective in preventing many of the problems experienced in Tucson. Click here for feedback and reviews from just a few of the school districts that have deployed NetEqualizer.

Seventeen Unique Ideas to Speed up Your Internet


By Eli Riles
Eli Riles is a retired insurance agent from New York. He is a self-taught expert in network infrastructure. He spends half the year traveling and visiting remote corners of the earth. The other half of the year you’ll find him in his computer labs testing and tinkering with the latest network technology. For questions or comments, please contact him at admin@netequalizer.com.

Updated 11/30/2015 – We are now up to seventeen (17) tips!
————————————————————————————————————————————————

Although there is no way to actually make your true Internet speed faster, here are some tips for home and corporate users that can make better use of the bandwidth you have, thus providing the illusion of a faster pipe.

1) Use a VPN tunnel to get to blocked content.

One of the little-known secrets your provider does not want you to know is that they will slow video or software updates if the content is not hosted on their network. Here is an article with details on how you can get around this restriction.

2) Time of day does make a difference

During peak Internet usage times, 5 PM to midnight local time, your upstream provider is also most likely congested. If you have a bandwidth-intensive task to do, such as downloading an update for your iPad, you can likely get a much faster download by doing it earlier in the day. I have even noticed that the more obscure YouTube videos have problems running at peak traffic times. My upstream provider does a good job with Netflix and popular videos during peak hours (these can be found in their cache), but if I request something that is not likely stored in a local copy on their servers, the video will lag during peak times (see our article on caching).

3) Turn off JavaScript

There are some trade-offs with doing this, but it does make a big difference in how fast pages load. Here is an article where we cover all the relevant details.

Note: Prior to 2010, setting your browser to text-only mode was a viable option, but today most sites are so full of graphics that they are virtually unreadable in text-only mode.

  • If you are stuck with a dial-up or slower broadband connection, your browser likely has an option to load text only. If you are a power user who is gaming or watching YouTube, text-only mode will obviously have no effect on those activities, but it will speed up general browsing and e-mail. Most web pages are loaded with graphics, which take up the bulk of the load time, so switching to text only eliminates the graphics and saves you quite a bit of time.

4) Install a bandwidth controller to make sure no single connection dominates your bandwidth

Everything you do on the Internet creates a connection from inside your network to the Internet, and all of these connections compete for the limited amount of bandwidth your ISP provides.

Your router (cable modem) connection to the Internet provides first-come, first-served service to all the applications trying to access the Internet. To make matters worse, the heavier users, the ones with the larger persistent downloads, tend to get more than their fair share of router cycles. Large downloads are like the schoolyard bully: they tend to butt in line and not play fair.
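
To make the fairness idea concrete, here is a toy sketch in Python (our own illustration, not NetEqualizer's actual implementation) of what a behavior-based controller does when a link gets congested: find the connections using far more than an equal share of the pipe and throttle only those, leaving small interactive flows such as VoIP alone.

```python
# Toy fair-share selection logic (illustrative only).
LINK_CAPACITY_KBPS = 10_000          # hypothetical 10-megabit pipe
CONGESTION_THRESHOLD = 0.85          # start acting when the link is ~85% full

def flows_to_throttle(rates_kbps: dict[str, float]) -> list[str]:
    total = sum(rates_kbps.values())
    if total < CONGESTION_THRESHOLD * LINK_CAPACITY_KBPS:
        return []                    # plenty of headroom: leave everyone alone
    fair_share = LINK_CAPACITY_KBPS / max(len(rates_kbps), 1)
    return [flow for flow, rate in rates_kbps.items() if rate > fair_share]

if __name__ == "__main__":
    measured = {"voip_call": 60, "web_browsing": 300, "big_download": 9_200}
    print(flows_to_throttle(measured))   # -> ['big_download']
```

A real appliance applies rate limits to the selected flows in the data path; the point of the sketch is only the selection logic.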

Read the full article.

5) Turn off the other computers in the house

Many times, even during the day when the kids are off to school, I’ll be using my Skype phone and the connection will break up.  I have no idea what exactly the kids’ computers are doing, but if I log them off the Internet, things get better with the Skype call every time. In a sense, it’s a competition for limited bandwidth resources, so, decreasing the competition will usually boost your computer’s performance.

6) Kill background tasks on your computer

You should also try to turn off any BitTorrent or other background tasks on your computer if you are having trouble watching a video or making a VoIP call. Use your task manager to see what applications are running and kill the ones you don't want. Although this is a bit drastic, you may just find that it makes a difference. You'd be surprised what's running on your computer without you even knowing it (or wanting it).
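
If you would rather see exactly which programs are talking to the Internet before you start killing things, a short script can list them. The sketch below assumes the third-party psutil package is installed; on some systems it needs administrator privileges.

```python
# List the names of processes that currently hold established Internet
# connections (requires the psutil package: pip install psutil).
import psutil

def processes_with_connections() -> set[str]:
    names = set()
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.pid:
            try:
                names.add(psutil.Process(conn.pid).name())
            except psutil.NoSuchProcess:
                pass   # the process exited while we were looking
    return names

if __name__ == "__main__":
    for name in sorted(processes_with_connections()):
        print(name)
```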

For you gamers out there, this also means turning off the audio component on your games if you do not need it for collaboration.

7) Test your Internet speed

One of the most common issues with slow internet service is that your provider is not giving you the speed/bandwidth that they have advertised.  Here is a link to our article on testing your Internet speed, which is a good place to start.
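
If you want to sanity-check a commercial speed test with your own numbers, a rough measurement is easy to script. The sketch below (Python; the test-file URL is a placeholder you would replace with a large file you trust) simply times one download. Running it at different times of day also makes the peak-hour effect from tip 2 visible.

```python
# Rough do-it-yourself throughput check (the URL below is a placeholder).
import time
import urllib.request

TEST_URL = "https://example.com/10MB.bin"   # hypothetical large test file

def measure_download_mbps(url: str = TEST_URL) -> float:
    start = time.time()
    with urllib.request.urlopen(url) as response:
        data = response.read()               # pull the whole file
    elapsed = time.time() - start
    return len(data) * 8 / elapsed / 1_000_000   # megabits per second

if __name__ == "__main__":
    print(f"Measured throughput: {measure_download_mbps():.1f} Mbps")
```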

Note: Comcast has adopted a 15-minute penalty box in some markets. Your initial speed tests will likely show no degradation, but if you persist in watching high-definition video for more than 15 minutes, you may get put into the penalty box. This practice helps preserve a limited resource in some crowded markets. We note it here because we have heard reports of people happily watching YouTube videos only to have service degrade.

Related Article: The real meaning of Comcast generosity.

8) Make sure you are not accidentally connected to a weak access point signal

There are several ways an access point can slow down your connection. If the signal between you and the access point is weak, the access point will automatically downgrade its service to a slower speed. This happens to me all the time: my access point goes on the blink (needs to be rebooted) and my computer connects to the neighbor's access point with a weaker signal. The speed of my connection on the weaker-signaled AP is quite variable. So, if you are on wireless in a densely populated area, check which access point you are actually connected to.

9) Caching — How does it work and is it a good idea?

Offered by various vendors and built into Internet Explorer, caching can be very effective in many situations. Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing a WAN/Internet link unnecessarily.

Many web servers keep a time stamp of their last update to data, and browsers such as the popular Internet Explorer will check the time stamp on the host server. If the page time stamp has not changed since the last time you accessed the page, the browser will present a locally stored copy of the Web page (from the last time you accessed it), saving the time it would take to reload the page from across the Internet.
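
Under the hood, that timestamp check is just a conditional HTTP request. The sketch below (Python, with a placeholder URL) shows the idea: the client sends an If-Modified-Since header, and a 304 Not Modified reply tells it to reuse its locally cached copy instead of pulling the page across the Internet again.

```python
# Minimal sketch of a browser-style conditional request (placeholder URL).
import urllib.error
import urllib.request

URL = "https://example.com/index.html"   # hypothetical page

def fetch_if_modified(url: str, last_fetch_http_date: str) -> str:
    request = urllib.request.Request(
        url, headers={"If-Modified-Since": last_fetch_http_date}
    )
    try:
        with urllib.request.urlopen(request) as response:
            return response.read().decode("utf-8", "replace")   # page changed: re-download
    except urllib.error.HTTPError as err:
        if err.code == 304:          # "304 Not Modified": serve the cached copy
            return "<use locally cached copy>"
        raise

if __name__ == "__main__":
    print(fetch_if_modified(URL, "Mon, 01 Jan 2024 00:00:00 GMT"))
```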

So what is the downside of caching?

There are two main issues that can arise with caching:

a) Keeping the cache current. If you access a cached page that is not current, then you are at risk of getting old and incorrect information. Some things you may never want to be cached, for example the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in cache will not be synchronized with changes. I personally have been misled by old data from my cache on several occasions.

b) Volume. There are some 100 million Web sites out on the Internet. Each site contains upwards of several megabytes of public information. The amount of data is staggering and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.

Recommended: Related article on how ISPs use caching to speed up Netflix and YouTube videos.

For information on turning off caching, click here.

10) Kill your virus protection software

The recent outbreak of the H1N1 virus reminded me of how sometimes the symptoms and carnage from a vaccine are worse than the disease it purports to cure. Well, the same holds true for your virus protection software. Yes, viruses are real and can take down your computer, but so can a disk crash, which is also inevitable. You must back up your critical data regularly. However, virus software seems to dominate more resources on my desktop than anything else. I no longer use any and could not be happier. Just be sure to keep a reliable backup, as you will need to rebuild your computer now and then, which I find a better alternative than running a slow computer all of the time.

11) Set a TOS bit to provide priority

A TOS bit is a special bit within an IP packet header that directs routers to give preferential treatment to selected packets. This sounds great: just set a bit and move to the front of the line for faster service. As always, there are limitations.

– How does one set a TOS bit?
It seems that only very specialized enterprise applications, like a VoIP PBX, actually set and make use of TOS bits. Setting the actual bit is not all that difficult if you have an application that works at the network layer, but most commercial applications just hand their data to the host computer's network stack, which in turn puts it into IP packets without a TOS bit set. After searching around for a while, I just don't see much literature on setting a TOS bit at the application level. For example, there are a couple of forums where people mention setting the TOS bit in Skype, but nothing definitive on how to do it.
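
For readers who write their own applications, marking packets is a one-line socket option. The sketch below (Python, Linux-style sockets, placeholder address and port) asks the operating system to tag outgoing UDP packets with the DSCP/TOS value commonly used for voice. Whether any router along the path honors the marking is, as discussed in the next point, a separate question.

```python
# Mark outgoing UDP packets with a high-priority TOS/DSCP value.
# 0xB8 corresponds to DSCP Expedited Forwarding, commonly used for VoIP.
import socket

def open_voip_style_socket(host: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)   # set the TOS byte
    sock.connect((host, port))
    return sock

if __name__ == "__main__":
    s = open_voip_style_socket("192.0.2.10", 5060)   # placeholder address/port
    s.send(b"test packet")
    s.close()
```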

– Who enforces the priority for TOS packets?
This is a function of routers at the edge of your network, and all routers along the path to wherever the IP packet is going. Generally, this limits the effectiveness of using a TOS bit to networks that you control end-to-end. In other words, a consumer using a public Internet connection cannot rely on their provider to give any precedence to TOS bits, hence this feature is relegated to enterprise networks within a business or institution.

– Incoming traffic generally cannot be controlled.
The subject of when you can and cannot control a TOS bit does get a bit more involved. We have gone over this in more detail in a separate article.

12) Avoid Quota Penalties

Some providers are implementing quotas, where they slow you down if you use too much data over a period of time. If you know that you have a large set of downloads to do, for example syncing your device with iTunes Cloud, go to a library and use their free service. Or, if you are truly without morals, log on to your neighbor's wireless network and do your sync.

13) Consider Application Shaping

Note: Application shaping is an appropriate topic for corporate IT administrators and is generally not a practical solution for a home user.  Makers of application shapers include Blue Coat (Packeteer) and Allot (NetEnforcer), products that are typically out of the price range for many smaller networks and home users.

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping”, with aliases of “deep packet inspection”, “layer 7 shaping”, and perhaps a few others thrown in for good measure. For the IT manager that is held accountable for everything that can and will go wrong on a network, or the CIO that needs to manage network usage policies, this at first glance may seem like a dream come true.  If you can divvy up portions of your WAN/Internet link to various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth, right?  Well, you be the judge…

At the center of application shaping is the ability to identify traffic by type, for example, distinguishing Citrix traffic from streaming audio, Kazaa peer-to-peer, or something else. However, this approach is not without its drawbacks.

Drawback #1: Applications can purposely use non-standard ports
Many applications are expected to use Internet ports when communicating across the Web. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the “FTP” application commonly used for downloading files uses the well-known “port 21” as its standard. The fallacy with this scheme, as many operators soon find out, is that there are many applications that do not consistently use a standard fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don’t want IT personnel to block them at all, so they deliberately design applications not to conform to any formal port assignment scheme. For this reason, any product that aims to block or alter application flows by port should be avoided if your primary mission is to control applications by type.
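
A few lines of Python make the weakness obvious (a toy illustration, not a real firewall): a port lookup is only as honest as the application choosing the port.

```python
# Toy port-based classifier and why it falls short.
WELL_KNOWN_PORTS = {21: "ftp", 25: "smtp", 80: "http", 443: "https"}

def classify_by_port(destination_port: int) -> str:
    return WELL_KNOWN_PORTS.get(destination_port, "unknown")

if __name__ == "__main__":
    print(classify_by_port(21))    # -> ftp
    # A file-sharing client deliberately using port 443 looks like web traffic:
    print(classify_by_port(443))   # -> https (misleading)
```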

So, if standard firewalls are inadequate at blocking applications by port, what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet.

In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, consider a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then, when the train arrived in Los Angeles, the workers on the other end would hopefully have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what, the contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit, I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets (aka “deep packet inspection”), and through various pattern matching techniques, determines what type of application a particular flow is. Once a flow is determined, then the application shaping tool can enforce the operator’s policies on that flow. Some examples of policy are:

Limit AIM messenger traffic to 100kbs
Reserve 500kbs for Shoretell voice traffic

The list of rules you can apply to traffic types and flow is unlimited.
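
To make the pattern-matching idea concrete, here is a toy classifier in Python. It is only an illustration of the technique, not any vendor's engine: real products match thousands of signatures and, as discussed below, still end up with an unknown bucket.

```python
# Toy payload-signature classifier (illustrative only).
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",   # BitTorrent handshake prefix
    b"GET ": "http",
    b"POST ": "http",
    b"\x16\x03": "tls",                         # TLS handshake record header
}

def classify_payload(payload: bytes) -> str:
    for signature, label in SIGNATURES.items():
        if payload.startswith(signature):
            return label
    return "unknown"   # the slice that real products also fail to classify

if __name__ == "__main__":
    print(classify_payload(b"\x13BitTorrent protocol..."))    # -> bittorrent
    print(classify_payload(b"opaque encrypted bytes"))        # -> unknown
```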

Drawback #2: The number of applications on the Internet is a moving target.
The best application shaping tools do a very good job of identifying several thousand applications, and yet there will always be some traffic that is unknown (estimated at 10 percent by experts from the leading manufacturers). The unknown traffic is lumped into the unknown classification, and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a webcast and it is not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to stay up-to-date is large and there are cracks.

Drawback #3: The spectrum of application types is not static
Even if the application spectrum could be completely classified, the spectrum of applications constantly changes. You must keep licenses current to ensure you have the latest in detection capabilities. And even then it can be quite a task to constantly analyze and change the mix of policies on your network. As bandwidth costs lessen, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

Drawback #4: Net neutrality is compromised by application shaping.
Techniques used in application shaping have become controversial on public networks, with privacy issues often conflicting with attempts to ensure network quality.

Based on these drawbacks, we believe that application shaping is not the dream come true that it may seem at first glance.  Once CIOs and IT Managers are educated on the drawbacks, they tend to agree.

14) Bypass that local consumer reseller

This option might be a little bit out of the price range of the average consumer, and it may not be practical logistically –  but if you like to do things out-of-the-box, you don’t have to buy Internet service from your local cable operator or phone company, especially if you are in a metro area.  Many customers we know have actually gone directly to a Tier 1 point of presence (backbone provider) and put in a radio backhaul direct to the source.  There are numerous companies that can set you up with a 40-to-60 megabit link with no gimmicks.

15) Speeding up your iPhone

Ever been in a highly populated area with three or four bars, and yet your iPhone access slows to a crawl?

The most likely reason for this problem is congestion on the provider line. 3G and 4G networks all have a limited-size pipe from the nearest tower back to the Internet. It really does not matter what your theoretical data speed is: when there are more people using the tower than the backhaul pipe can handle, you can temporarily lose service, even when your phone is showing three or four bars.

Unfortunately, you only have a couple of options in this situation. If you are in a stadium with a large crowd, your best bet is to text during the action. Timeouts and the end of the game are exactly when the network slows to a crawl, so try to finish your access before the last out or the end of the quarter. Pick a time when you know the majority of people are not trying to send data.

Get away from the area of congestion. I have experienced complete lockouts of up to 30 minutes when trying to text as a sold-out stadium emptied out. In that situation, my only option was to walk about a half mile from the venue to get a text out. Once away from the main stadium, my iPhone connected to a tower with a different backhaul, away from the congested stadium towers.

Shameless plug: If you happen to be a provider, or know somebody who works for a provider, please tell them to call us and we'd be glad to explain the simplicity of equalizing and how it can restore sanity to a congested wireless backhaul.

16) Turn off HTTPS and other encryption

Although this may sound a bit controversial, there are some providers that, for the sake of survival, assume that encrypted traffic is bad traffic. For example, p2p is considered bad traffic, and providers have been able to use special equipment to throw it into a lower-priority pool so that it gets sent out at a slower speed. Many applications are starting to encrypt their traffic (p2p, Facebook, etc.). The provider may assume that all of this is “bad” traffic because they don’t know what it is, and hence give it a lower priority.

17) Protocol Spoofing

Note:  This method is applied to Legacy Database servers doing operations over a WAN.  Skip this tip if you are a home user.

Historically, there are client-server applications that were developed for an internal LAN. Many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe using WAN links to tie different locations together.

To get a better visual on what goes on in a chatty application, perhaps an analogy will help. It's like sending family members your summer vacation pictures but, for some insane reason, putting each picture in a separate envelope and mailing them individually on the same mail run. Obviously, this is extremely inefficient, and chatty applications can be just as inefficient.

What protocol spoofing accomplishes is to fake out the client or server side of the transaction and then send a more compact version of the transaction over the Internet, i.e., put all the pictures in one envelope and send it on your behalf, thus saving you postage.
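
As a rough illustration of the envelope analogy, the sketch below (Python, with entirely hypothetical function names) shows a local proxy collapsing many small "fetch one record" requests into a single combined request before it crosses the WAN. Real protocol-spoofing appliances do this transparently at the protocol layer rather than in application code, but the batching principle is the same.

```python
# Toy batching proxy: many chatty requests become one WAN round trip.
from typing import Callable

def batching_proxy(wan_send: Callable[[dict], list[dict]]) -> Callable[[list[int]], list[dict]]:
    def fetch_many(record_ids: list[int]) -> list[dict]:
        combined = {"op": "fetch_many", "ids": record_ids}   # one envelope
        return wan_send(combined)                            # one round trip
    return fetch_many

if __name__ == "__main__":
    # Stand-in for the server on the far side of the WAN link.
    def fake_wan(request: dict) -> list[dict]:
        return [{"id": rid, "data": f"row {rid}"} for rid in request["ids"]]

    fetch = batching_proxy(fake_wan)
    # The chatty version would have made three round trips; this makes one.
    print(fetch([1, 2, 3]))
```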

You might ask, why not just improve the inefficiencies in these chatty applications rather than write software to deal with the problem? Good question, but that would be the subject of a totally different article on how IT organizations must evolve with legacy technology, which is beyond the scope of the present article.

In Conclusion

Again, while there is no way to increase your true Internet speed without upgrading your service, these tips can improve performance, and help you to get better results from the bandwidth that you already have.  You’re paying for it, so you might as well make sure it’s being used as effectively as possible. : )

Related Article on testing true video speed over the Internet

A great article from the tech guy regarding tips on dealing with your ISP

Other Articles on Speeding up Your Internet

Five tips and tricks to speed up your Internet

How to speed up your Internet Connection Without any Software

Tips on how to speed up your Internet

About APconnections

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here to request our full pricelist.

Hotel Property Managers Should Consider Generic Bandwidth Control Solutions


Editor's Note: The following Hotelsmag.com article caught my attention this morning. The hotel industry is now seriously starting to understand that it needs some form of bandwidth control. However, many hotel solutions for bandwidth control are custom-marketed, which perhaps puts them at an economy-of-scale disadvantage. The NetEqualizer bandwidth controller, like our competitors' products, crosses many market verticals, offering hotels an effective solution without the niche-market costs. For example, in addition to the numerous other industries in which the NetEqualizer is being used, some of our hotel customers include: The Holiday Inn Capitol Hill, a prominent Washington DC hotel; The Portola Plaza Hotel and Conference Center in Monterey, California; and the Hotel St. Regis in New York City.

For more information about the NetEqualizer, or to check out our live demo, visit www.netequalizer.com.

Heavy Users Tax Hotel Systems: Hoteliers and IT Staff Must Adapt to a New Reality of Extreme Bandwidth Demands

By Stephanie Overby, Special to Hotels — Hotels, 3/1/2009

The tweens taking up the seventh floor are instant-messaging while listening to Internet radio and downloading a pirated version of “Twilight” to watch later. The 200-person meeting in the ballroom has a full interactive multimedia presentation going for the next hour. And you do not want to know what the businessman in room 1208 is streaming on BitTorrent, but it is probably not a productivity booster.

To keep reading, click here.

ROI calculator for Bandwidth Controllers


Is your commercial Internet link getting full? Are you evaluating whether to increase the size of your existing Internet pipe, and trying to weigh that cost against investing in an optimization solution? If you answered yes to either of these questions, then you'll find the rest of this post useful.

To get started, we assume you are somewhat familiar with the NetEqualizer’s automated fairness and behavior based shaping.

To learn more about NetEqualizer behavior-based shaping, we suggest our NetEqualizer FAQ.

Below are the criteria we used for our cost analysis.

1) It was based on feedback from numerous customers (different verticals) over the previous six years.

2) In keeping with our policies, we used average rather than best-case savings scenarios.

3) Our scenario is applicable to any private business or public operator that administers a shared Internet link with 50 or more users.

4) For our example, we will assume a 10-megabit trunk at a cost of $1,500 per month.

ROI savings #1: Extending the number of users you can support.

NetEqualizer equalizing and fairness typically extend the number of users that can share a trunk by making better use of the available bandwidth at any given time. Effective bandwidth can be stretched by 10 to 30 percent:

Savings: $150 to $450 per month.

ROI savings #2: Reducing support calls caused by peak-period brownouts.

We conservatively assume one brownout a month caused by general network overload. With a transient brownout, you will likely spend debug time trying to find the root cause: a bad DNS server could be the problem, your upstream provider may have an issue, or the brownout may be caused by simple congestion. Assuming you dispatch staff to troubleshoot a congestion problem once a month, at an overhead of 1 to 3 hours, the savings would be about $300 per month in staff hours.

ROI savings #3: No recurring costs with your NetEqualizer.

Since the NetEqualizer uses behavior-based shaping, your license is essentially good for the life of the unit. Layer 7-based protocol shapers must be updated at least once a year. Savings: $100 to $500 per month.

The total

The cost of a NetEqualizer unit for a 10-megabit circuit runs around $3,000, while the low estimate for savings is around $500 per month.

In our scenario, the payback period is, very conservatively, about six months.
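
For readers who want to plug in their own numbers, the payback arithmetic is simple enough to script. The sketch below (Python) uses the low-end figures from the three savings items above; treat every constant as an assumption to replace with your own costs.

```python
# Worked version of the payback arithmetic (illustrative only).
UNIT_COST = 3000                          # one-time cost for a 10-megabit NetEqualizer
MONTHLY_SAVINGS_LOW = 150 + 300 + 100     # bandwidth stretch + staff time + no license renewals

def payback_months(unit_cost: float, monthly_savings: float) -> float:
    return unit_cost / monthly_savings

if __name__ == "__main__":
    months = payback_months(UNIT_COST, MONTHLY_SAVINGS_LOW)
    print(f"Low-end savings: ${MONTHLY_SAVINGS_LOW}/month -> payback in about {months:.1f} months")
    # Using the itemized low estimates this comes to roughly 5.5 months,
    # consistent with the conservative six-month figure above.
```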

Note: Commercial Internet links supported by NetEqualizer include T1, E1, DS3, OC3, T3, fiber, 1-gigabit, and more.

