How to Speed Up Your Internet Connection with a Bandwidth Controller



It occurred to me today that in all the years I have been posting about common ways to speed up your Internet, I have never written a plain and simple consumer explanation of how a bandwidth controller can speed up your Internet. After all, it seems intuitive that a bandwidth controller is something an ISP would use to slow down your Internet; but there can be a beneficial side to a bandwidth controller, even at the home-consumer level.

Many slow Internet service problems are due to contention on your link to the Internet. Even if you are the only user on your connection, a simple update to your antivirus software running in the background can dominate your Internet link. A large download will often cause everything else you try (email, browsing) to slow to a crawl.

What causes slowness on a shared link?

Everything you do on the Internet creates a connection from inside your network to the outside world, and all these connections compete for the limited amount of bandwidth your ISP provides.

Your router’s (cable modem’s) connection to the Internet provides first-come, first-served service to all the applications trying to access the Internet. To make matters worse, the heavier users (the ones with the larger persistent downloads) tend to get more than their fair share of router cycles. Large downloads are like the schoolyard bully – they tend to butt in line and not play fair.

So how can a bandwidth controller make my Internet faster?

A smart bandwidth controller will analyze all your Internet connections on the fly. It will then selectively take away some bandwidth from the bullies. Once the bullies are removed, other applications will get much needed cycles out to the Internet, thus speeding them up.
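To make that idea concrete, here is a minimal Python sketch of fairness-based shaping logic. It is purely illustrative – not NetEqualizer’s actual algorithm – and the capacity and threshold figures are assumptions: when the link is congested, flag the connections consuming far more than their fair share.

```python
# Illustrative sketch only - not NetEqualizer's actual algorithm.
# Idea: when the link is congested, find the connections ("bullies")
# consuming far more than their fair share, so they can be throttled.

LINK_CAPACITY_BPS = 10_000_000   # assume a 10-megabit link
CONGESTION_THRESHOLD = 0.85      # start shaping at 85% utilization

def find_bullies(conn_rates_bps):
    """conn_rates_bps maps a connection id to its current bits/sec."""
    total = sum(conn_rates_bps.values())
    if total < LINK_CAPACITY_BPS * CONGESTION_THRESHOLD:
        return []  # link is not congested; leave everyone alone
    fair_share = LINK_CAPACITY_BPS / max(len(conn_rates_bps), 1)
    # Connections well above their fair share are candidates for throttling.
    return [cid for cid, rate in conn_rates_bps.items()
            if rate > 2 * fair_share]

rates = {"voip_call": 80_000, "email": 50_000, "big_download": 9_000_000}
print(find_bullies(rates))  # ['big_download']
```

Notice that the VoIP call and email are left alone; only the persistent download gets flagged, which is exactly the behavior described above.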

What application benefits most when a bandwidth controller is deployed on a network?

The most noticeable beneficiary will be your VoIP service. VoIP calls typically don’t use that much bandwidth, but they are incredibly sensitive to a congested link. Even small quarter-second gaps in a VoIP call can make a conversation unintelligible.

Can a bandwidth controller make my YouTube videos play without interruption?

In some cases yes, but generally no. A YouTube video will require anywhere from 500 kbps to 1,000 kbps of your link, and is often the bully on the link; however, in some instances there are bigger bullies crushing YouTube performance, and a bandwidth controller can help in those cases.

Can a home user or small business with a slow connection take advantage of a bandwidth controller?

Yes, but the choice is a time-cost-benefit decision. For about $1,600 there are some products out there that come with support that can solve this issue for you, but that price is hard to justify for the home user – even a business user sometimes.

Note: I am trying to keep this article objective and hence am not recommending anything in particular.

On a home-user network it might be easier just to police it yourself, shutting off background applications, and unplugging the kids’ computers when you really need to get something done. A bandwidth controller must sit between your modem/router and all the users on your network.

Related Article: Ten Things to Consider When Choosing a Bandwidth Shaper.

You May Be the Victim of Internet Congestion


Have you ever had a mysterious medical malady? The kind where maybe you have strange spots on your tongue, pain in your left temple, or hallucinations of hermit crabs at inappropriate times – symptoms seemingly unknown to mankind?

But then, all of a sudden, you miraculously find an exact on-line medical diagnosis?

Well, we can’t help you with medical issues, but we can provide a similar oasis for diagnosing the cause of your slow network – and even better, give you something proactive to do about it.

Spotting classic congested network symptoms:

You are working from your hotel room late one night, and you notice it takes a long time to get connected. You manage to fire off a couple emails, and then log in to your banking website to pay some bills. You get the log-in prompt, hit return, and it just cranks for 30 seconds, until… “Page not found.” Well maybe the bank site is experiencing problems?

You decide to get caught up on Christmas shopping. Initially the Macy’s site is a bit slow to come up, but nothing too out of the ordinary for a public connection. Your Internet connection seems stable, and you are able to browse through a few screens and pick out that shaving cream set you have been craving – shopping for yourself is more fun anyway. You proceed to checkout, enter your payment information, hit submit, and once again the screen locks up. The payment verification page times out. You have already entered your credit card, and with no confirmation screen, you have no idea if your order was processed.

We call this scenario “the cyclical rolling brownout,” and it is almost always a problem with your local Internet link having too many users at peak times. When the pressure on the link from all active users builds to capacity, it tends to spiral into a complete block of all access for 20 to 30 seconds; then service returns to normal for a short period of time – perhaps another 30 seconds to 1 minute. Like a bad case of malaria, the respites are only temporary, making the symptoms all the more insidious.

What causes cyclical loss of Internet service?

When a shared link in a setting such as a hotel, residential neighborhood, or library reaches capacity, there is a crescendo of compound gridlock. For example, when a web page times out the first time, your browser starts sending retries. Multiply this by all the users sharing the link, and nobody can complete their request. Think of it like an intersection where every car tries to proceed at the same time: they crash in the middle, nobody gets through, and additional cars keep coming and continue to pile on. Eventually the police come with wreckers and clear everything out of the way. On the Internet, eventually the browsers and users back off and quit trying – for a few minutes at least. The gridlock is likely to build and repeat until late at night, when the users finally give up.

What can be done about gridlock on an Internet link?

The easiest way to prevent congestion is to purchase more bandwidth. However, sometimes even with more bandwidth, the congestion might overtake the link. Eventually most providers also put in some form of bandwidth control – like a NetEqualizer. The ideal solution is this layered approach – purchasing the right amount of bandwidth AND having arbitration in place. This creates a scenario where instead of having a busy four-way intersection with narrow streets and no stop signs, you now have an intersection with wider streets and traffic lights. The latter is more reliable and has improved quality of travel for everyone.

For some more ideas on controlling this issue, you can reference our previous article, Five Tips to Manage Internet Congestion.

Speeding Up Your Internet Connection Using a TOS Bit


A TOS bit (Type Of Service bit) is a special bit within an IP packet that directs routers to give preferential treatment to selected packets. This sounds great, just set a bit and move to the front of the line for faster service. As always there are limitations.

How does one set a TOS bit?

It seems that only very specialized enterprise applications, like VoIP PBXs, actually set and make use of TOS bits. Setting the actual bit is not all that difficult if you have an application that works at the network layer, but most commercial applications simply hand their data to the operating system, which in turn puts the data into IP packets without a TOS bit set. After searching around for a while, I just don’t see much literature on setting a TOS bit at the application level. For example, there are several forums where people mention setting the TOS bit in Skype, but nothing definitive on how to do it.
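For the curious, marking the TOS/DSCP byte from an application is actually possible on most operating systems via a standard socket option; whether any router along the path honors the marking is the real problem. A minimal Python sketch follows. The DSCP value shown is the conventional “Expedited Forwarding” marking used for voice; note that Windows generally ignores application-set IP_TOS without extra policy configuration.

```python
import socket

# DSCP "Expedited Forwarding" (46), shifted left 2 bits into the TOS byte.
# This marking is conventionally used for voice traffic.
EF_TOS = 0xB8

# Create a UDP socket and ask the OS to mark its outgoing packets.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Read the option back to confirm it took effect on this socket
# (typically prints 0xb8 on Linux).
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
sock.close()
```

Every datagram sent from that socket would then carry the EF marking – which, as discussed below, only matters on networks you control end-to-end.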

However, not to be discouraged, and being the hacker that I am, I could, with some work, make a little module to force every packet leaving my computer or wireless device to have the TOS bit set. So why not package this up and sell it to the public as an Internet accelerator?

Well before I spend any time on it, I must consider the following:

Who enforces the priority for TOS packets?

This is a function of routers at the edge of your network, and all routers along the path to wherever the IP packet is going. Generally, this limits the effectiveness of using a TOS bit to networks that you control end-to-end. In other words, a consumer using a public Internet connection cannot rely on their provider to give any precedence to TOS bits; hence this feature is relegated to enterprise networks within a business or institution.

Incoming traffic generally cannot be controlled.

The subject of when you can and cannot control a TOS bit does get a bit more involved (pun intended). We have gone over it in more detail in a separate article.

Most of what you do is downloading.

So even assuming that your Internet provider did give special treatment to incoming data such as video, downloads, and VoIP (which it likely does not), the problem with my accelerator idea is that it could only set the TOS bit on data leaving your computer. Incoming TOS bits would have to be set by the sending server.

The moral of the story is that TOS bits that traverse the public Internet don’t have much of a chance of making a difference in your connection speed.

In conclusion, we are going to continue to study TOS bits to see where they might be beneficial and complement our behavior-based shaping (aka “equalizing”) technology.

Five More Tips on Testing Your Internet Speed


By Art Reisman

Art Reisman is currently CTO and co-founder of NetEqualizer

Imagine if every time you went to a gas station the meters were adjusted to exaggerate the amount of fuel pumped, or the gas contained inert additives. Most consumers count on state and federal regulators to monitor their local gas station and ensure that a gallon is a gallon and the fuel is not a mixture of water and rubbing alcohol. But in the United States, there are no rules governing truth in bandwidth claims – at least none that we are aware of.

Given there is no standard in regulating Internet speed, it’s up to the consumer to take the extra steps to make sure you’re getting what you pay for. In the past, we’ve offered some tips both on speeding up your Internet connection as well as questions you should ask your provider. Here are some additional tips on how to fairly test your Internet speed.

1. Use a speed test site that mimics the way you actually access the Internet.

Why?

Using a popular speed test tool is too predictable, and your Internet provider knows this. In other words, they can optimize their service to show great results when you use a standard speed test site. To get a better measure of your speed, your test must be unpredictable. Think of a movie star going to the Oscars: with time to plan, they are always going to look their best, but the candid pictures captured by the tabloids never show quite as well.

To get a candid picture of your provider’s true throughput, we suggest using a tool such as the speed test utility from M-Lab.

2. Try a very large download to see if your speed is sustained.

We suggest downloading a full Knoppix CD. Most download utilities will give you a status bar on the speed of your download. Watch the download speed over the course of the download and see if the speed backs off after a while.

Why?

Some providers will start slowing your speed after a certain amount of data is passed in a short period, so the larger the file in the test the better. The common speed test sites likely do not use large enough downloads to trigger a slower download speed enforced by your provider.
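If you want to watch for this throttling yourself, a short script that reports average throughput as a large download progresses will make any slowdown obvious. Here is a hedged Python sketch; the in-memory stream at the bottom is only a stand-in for demonstration, and in practice you would pass it the response object from a real large-file download.

```python
import io
import time

def watch_download(stream, chunk_size=64 * 1024):
    """Read a file-like object to completion and return a list of
    (seconds_elapsed, average_megabits_per_sec) snapshots, one per chunk,
    so you can see whether throughput sags as the download runs."""
    snapshots = []
    start = time.monotonic()
    total_bytes = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        total_bytes += len(chunk)
        elapsed = max(time.monotonic() - start, 1e-9)
        snapshots.append((elapsed, total_bytes * 8 / 1_000_000 / elapsed))
    return snapshots

# Demo with an in-memory stream; for a real test you would pass the
# response object from urllib.request.urlopen(<large file URL>) instead.
snaps = watch_download(io.BytesIO(b"x" * 1_000_000))
print(f"final average: {snaps[-1][1]:,.0f} Mbps over {snaps[-1][0]:.4f}s")
```

If the per-snapshot averages start high and then decay steadily over a multi-hundred-megabyte download, that is consistent with the kind of rate policy described above.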

3. If you must use a standard speed test site, make sure to repeat your tests with at least three different speed test sites.

Different speed test sites use different methods for passing data and results will vary.

4. Run your tests during busy hours — typically between 5 and 9 p.m. — and try running them at different times.

Oftentimes ISPs have trouble providing their top advertised speeds during busy hours.

5. Make sure to shut off other activities that use the Internet when you test. 

This includes other computers in your house, not just the computer you are testing from.

Why?

All the computers in your house share the same Internet pipe to your provider. If somebody is watching a Netflix movie while you run your test, the movie stream will skew your results.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Just How Fast Is Your 4G Network?


By Art Reisman, CTO, www.netequalizer.com


The subject of Internet speed and how to make it go faster is always a hot topic. That raises the question: if everybody wants their Internet to go faster, what are some of the limitations? Why can’t we just achieve infinite speeds whenever and wherever we want them?

Below, I’ll take on some of the fundamental gating factors of Internet speeds, primarily exploring the difference between wired and wireless connections. As we have “progressed” from a reliance on wired connections to a near-universal expectation of wireless Internet options, we’ve also put some limitations on what speeds can be reliably achieved. I’ll discuss why the wired Internet to your home will likely always be faster than the latest fourth generation (4G) wireless being touted today.

To get a basic understanding of the limitations with wireless Internet, we must first talk about frequencies. (Don’t freak out if you’re not tech savvy. We usually do a pretty good job at explaining these things using analogies that anybody can understand.) The reason why frequencies are important to this discussion is that they’re the limiting factor to speed in a wireless network.

The FCC allows cell phone companies and other wireless Internet providers to use a specific range of frequencies (channels) to transmit data. For the sake of argument, let’s just say there are 256 frequencies available to the local wireless provider in your area. So in the simplest case of the old analog world, that means a local cell tower could support 256 phone conversations at one time.

However, with the development of better digital technology in the 1980s, wireless providers have been able to juggle more than one call on each frequency. This is done by using a time sharing system where bits are transmitted over the frequency in a round-robin type fashion such that several users are sharing the channel at one time.

The wireless providers have overcome the problem of having multiple users sharing a channel by dividing it up in time slices. Essentially this means when you are talking on your cell phone or bringing up a Web page on your browser, your device pauses to let other users on the channel. Only in the best case would you have the full speed of the channel to yourself (perhaps at 3 a.m. on a deserted stretch of interstate). For example, I just looked over some of the mumbo jumbo and promises of one-gigabit speeds for 4G devices, but only in a perfect world would you be able to achieve that speed.

In the real world of wireless, we need to know two things to determine the actual data rates to the end user.

  1. The maximum amount of data that can be transmitted on a channel
  2. The number of users sharing the channel

The answer to part one is straightforward: A typical wireless provider has channel licenses for frequencies in the 800 megahertz range.

A rule of thumb for transmitting digital data over the airwaves is that you can only send bits at 1/2 the frequency. For example, 800 megahertz is 800 million cycles per second, and 1/2 of that is 400 million cycles per second. This translates to a theoretical maximum data rate of 400 megabits. Realistically, with noise and other environmental factors, 1/10 of the original frequency is more likely. This gives us a maximum carrying capacity per channel of 80 megabits – a ballpark estimate for our answer to part one above.

However, the actual answer to variable two, the number of users sharing a channel, is a closely guarded secret among service providers. Conservatively, let’s just say you’re sharing a channel with 20 other users on a typical cell tower in a metro area. With 80 megabits to start from, this would put your individual maximum data rate at about four megabits during a period of heavy usage.
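The arithmetic above is easy to reproduce. A few lines of Python following the same rule-of-thumb assumptions (the 20-users-per-channel figure is, as noted, an educated guess):

```python
# Reproducing the rule-of-thumb arithmetic from the article.
channel_hz = 800e6                 # an 800 MHz channel license
theoretical_bps = channel_hz / 2   # rule of thumb: bits at 1/2 the frequency
realistic_bps = channel_hz / 10    # after noise and environmental factors
users_sharing = 20                 # assumed users per tower channel at peak

per_user_bps = realistic_bps / users_sharing
print(theoretical_bps / 1e6, "Mbps theoretical")    # 400.0
print(realistic_bps / 1e6, "Mbps realistic")        # 80.0
print(per_user_bps / 1e6, "Mbps per user at peak")  # 4.0
```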

So getting back to the focus of the article, we’ve roughly worked out a realistic cap on your super-cool new 4G wireless device at four megabits. By today’s standards, this is a pretty fast connection, but remember this is a conservative, benefit-of-the-doubt best case. Wireless providers are now talking about usage quotas and charging severely for overages, which suggests they must be teetering on gridlock with their data networks now. There is limited frequency real estate and high demand for content data services, and that demand is likely to only grow as more and more users adopt mobile wireless technologies.

So where should you look for the fastest and most reliable connection? Well, there’s a good chance it’s right at home. A standard fiber connection, like the one you likely have with your home network, can go much higher than four megabits. However, as with the channel sharing found with wireless, you must also share the main line coming into your central office with other users. But assuming your cable operator runs a point-to-point fiber line from their office to your home, gigabit speeds would certainly be possible, and thus wired connections to your home will always be faster than the frequency limited devices of wireless.

Related Article: Commentary on Verizon quotas

Interesting side note: in this article by Deloitte, they do not mention limitations of frequency spectrum as a limiting factor to growth.

The Facts and Myths of Network Latency


There are many good references that explain how some applications such as VoIP are sensitive to network latency, but there is also some confusion as to what latency actually is as well as perhaps some misinformation about the causes. In the article below, we’ll separate the facts from the myths and also provide some practical analogies to help paint a clear picture of latency and what may be behind it.

Fact or Myth?

Network latency is caused by too many switches and routers in your network.

This is mostly a myth.

Yes, an underpowered router can introduce latency, but most local network switches add minimal latency – a few milliseconds at most. Anything under about 10 milliseconds is, for practical purposes, not humanly detectable. A router or switch (even a low-end one) typically adds about 1 millisecond of latency, so you would need ten or more hops just to reach 10 milliseconds – and even then the delay would be barely noticeable.

The faster your link (Internet) speed, the less latency you have.

This is a myth.

The speed of your network is measured by how fast IP packets arrive. Latency is the measure of how long they took to get there. So, it’s basically speed vs. time. An example of latency is when NASA sends commands to a Mars orbiter: the information travels at the speed of light, but it still takes several minutes or longer for commands sent from Earth to reach the orbiter. This is an example of data moving at high speed with extreme latency.
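A quick back-of-the-envelope calculation makes the distinction concrete: a link can be “fast” (high bit rate) and still have terrible latency. The figures below are illustrative toy numbers, not NASA’s actual link budget.

```python
# Toy numbers only - not NASA's real link budget. A 1 Mbps link with
# 12 minutes of one-way latency: the data rate is respectable, but you
# wait a very long time before the first byte ever shows up.
def arrival_times(file_bits, link_bps, one_way_latency_sec):
    first_byte = one_way_latency_sec                        # pure latency
    last_byte = one_way_latency_sec + file_bits / link_bps  # + transfer time
    return first_byte, last_byte

first, last = arrival_times(10e6, 1e6, 720.0)
print(first, last)  # 720.0 730.0 -> 12 minutes of waiting, 10 s of transfer
```

Raising the link speed shrinks only the 10-second transfer portion; the 12-minute wait is latency, and no amount of bandwidth removes it.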

VoIP is very sensitive to network latency.

This is a fact.

Can you imagine talking in real time to somebody on the moon? Your voice would take over a second to get there, and the reply just as long to come back. For VoIP networks, it is generally accepted that anything over about 150 milliseconds of latency can be a problem. When latency gets higher than 150 milliseconds, issues will emerge – especially for fast talkers and rapid conversations.

Xbox games are sensitive to latency.

This is another fact.

For example, in many collaborative combat games, participants are required to battle players from other locations. Low latency on your network is everything when it comes to beating your opponent to the draw. If you and your opponent shoot your weapons at the exact same time, but your shot takes 200 milliseconds to register at the host server and your opponent’s shot gets there in 100 milliseconds, you die.

Does a bandwidth shaping device such as the NetEqualizer increase latency on a network?

This is true, but only for the “bad” traffic that’s slowing the rest of your network down anyway.

Ever hear of the firefighting technique where you light a back fire to slow the fire down? This is similar to the NetEqualizer approach. NetEqualizer deliberately adds latency to certain bandwidth intensive applications, such as large downloads and p2p traffic, so that chat, email, VoIP, and gaming get the bandwidth they need. The “back fire” (latency) is used to choke off the unwanted, or non-time sensitive, applications. (For more information on how the NetEqualizer works, click here.)

Video is sensitive to latency.

This is a myth.

Video is sensitive to the speed of the connection, but not to latency. Let’s go back to our man-on-the-moon example, where voice takes over a second to travel from the earth to the moon. Latency creates a problem with two-way voice communication because a delay in hearing what was said makes it difficult to carry a conversation. What generally happens with voice and long latency is that both parties start talking at the same time, and a moment later each hears the other talking over them. You see this happening a lot on television with interviews done via satellite. However, most video is one-way. For example, when watching a Netflix movie, you’re not sending video back to Netflix. In fact, almost all video transmissions are on a delay, and nobody notices, since it is usually a one-way transmission.

New APconnections Corporate Speed Test Tool Released for NetEqualizer


For many Internet users, one of the first troubleshooting steps when online access seems to slow is to run a simple speed test. And, under the right circumstances, speed tests can be an effective way to pinpoint the problem.

However, slowing Internet speeds aren’t just an issue for the casual user. Over our years of troubleshooting thousands of corporate and other commercial links, a recurring issue has been customers not getting their full advertised bandwidth from their upstream provider. Some customers realize something is amiss from examining bandwidth reports on their routers; other cases we stumble upon while troubleshooting network congestion issues.

But, what if you have a shared, busy corporate Internet connection such as this — with hundreds or thousands of users on the link at one time? Should a traditional speed test be the first place to turn? In this situation, the answer is “no.” Running a speed test under these conditions is neither meaningful nor useful.

Let me explain.

The problem starts with the overall design of the speed test itself. Speed tests usually transfer short-duration files. For example, a 10-megabit file sent over a hundred-megabit link might complete in 0.1 seconds, reporting a link speed of 100 megabits to the operator. Statistically, however, this is just a snapshot of one very small moment in time and is of little value when the demands on a network are constantly changing. Furthermore, for this type of test to be accurate, the link must be free of other active users, which is nearly impossible when you have an entire office, for example, accessing the network at once.

On these larger shared links, the true speed can only be measured during peak times with users accessing a wide variety of applications persistently over a significant period. But, there is no easily controlled Web speed test site that can measure this type of performance on your link.

Yes, a sophisticated IT administrator can run reports and see trends and make assumptions. And many do. Yet, for some businesses, this isn’t practical.

For this reason, we’ve introduced the NetEqualizer Speed Test Utility.

How Does the NetEqualizer Speed Test Utility Work?

The NetEqualizer Speed Test Utility is an intelligent tool embedded in your NetEqualizer that can be activated from your GUI. On high-traffic networks, there is always a busy hour background load on the link – a baseline if you will. When you set up the speed test tool, you simply tell the NetEqualizer some basics about your network, including:

  • Link Speed
  • Number of Users
  • Busy Hours

After turning the tool on, it will keep track of your network’s bandwidth usage. If your usage drops below expected levels, it will present a mild warning on the GUI screen that your bandwidth may be compromised and give an explanation of the deviation. The operator can also be notified by e-mail.

This setup allows bandwidth to be monitored without depending on unreliable speed tests or running time-consuming reports, allowing the problem to be more quickly identified and addressed.

For more information about the NetEqualizer Speed Test Utility, contact APconnections at sales@apconnections.net.

What to expect from Internet Bursting


APconnections will be releasing a bursting feature (version 4.7) on its NetEqualizer bandwidth controller this week. What follows is an explanation of the feature, along with some facts and information about Internet bursting that consumers will also find useful.

First, an explanation of how the NetEqualizer bursting feature works.

– The NetEqualizer currently comes with a feature that lets you set a rate limit by IP address.

– Prior to the bursting feature, the top speed allowed for each user was fixed at a set rate limit.

– Now, with bursting, a user can be allowed a burst of bandwidth for 10 seconds at two, three, four, or any other multiple of their base rate limit.

So if, for example, a user has a base rate limit of 2 megabits per second and a burst factor of 4, their connection will be allowed to burst all the way up to 8 megabits for 10 seconds, at which time it will revert back to the original 2 megabits per second. This type of burst will be most noticeable when loading large, graphics-heavy Web pages, which will essentially fly up in the browser at warp speed.

In order to make bursting a “special” feature, it obviously can’t be on all the time. For this reason, the NetEqualizer, by default, will force a user to wait 80 seconds before they can burst again.
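Putting the numbers together, the policy just described can be modeled in a few lines of Python. This is an assumed simplification for illustration, not the NetEqualizer’s actual implementation:

```python
# A simplified model of the burst policy described above (an assumed
# illustration, not the NetEqualizer's actual implementation): full burst
# speed for burst_secs, then the base cap until the cooldown has passed.
def allowed_rate_mbps(secs_since_burst_start, base_mbps=2, burst_factor=4,
                      burst_secs=10, cooldown_secs=80):
    if secs_since_burst_start < burst_secs:
        return base_mbps * burst_factor   # bursting: 8 Mbps in this example
    if secs_since_burst_start < burst_secs + cooldown_secs:
        return base_mbps                  # back to the 2 Mbps rate limit
    return base_mbps * burst_factor       # eligible to burst again

print([allowed_rate_mbps(t) for t in (0, 5, 10, 45, 95)])  # [8, 8, 2, 2, 8]
```

The sampled values trace the cycle: 8 Mbps during the 10-second burst, 2 Mbps through the 80-second cooldown, then eligibility to burst again.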

Will bursting show up in speed tests?

With the default settings of 10-second bursts and an 80-second timeout before the next burst, it is unlikely a user will be able to see their full burst speed accurately with a speed test site.

How do you set the bursting feature for an IP address?

From the GUI

Select

Add Rules->set hard limit

The last field in the command specifies the burst factor. Set this field to the multiple of the default speed you wish to burst up to.

Note: Once bursting has been set up, bursting on an IP address will start when that IP exceeds its rate limit (across all connections for that IP). The burst applies to all connections across the IP address.

How do you turn the burst feature off for an IP address?

You must remove the Hard Limit on the IP address and then recreate the Hard Limit by IP without bursting defined.

From the Web GUI Main Menu, Click on ->Remove/Deactivate Rules

Select the appropriate Hard Limit from the drop-down box. Click on ->Remove Rule

To re-add the rule without bursting, from the Web GUI Main Menu, Click on ->Add Rules->Hard Limit by IP and leave the last field set to 1.

Can you change the global bursting defaults for duration of burst and time between bursts ?

Yes, from the GUI screen you can select

misc->run command

In the space provided you would run the following command

/usr/sbin/brctl setburstparams my 40  30

The first parameter is the time, in seconds, an IP must wait before it can burst again; once an IP has completed a burst cycle, it will be forced to wait this long before bursting again.

The second parameter is the time, in seconds, an IP will be allowed to burst before being relegated back to its default rate cap.

The global burst parameters are not persistent, meaning you will need to put the command in the start-up file if you want them to stick between reboots.


If speed tests are not a good way to measure a burst, then what do you recommend?

The easiest way would be to extend the burst time to minutes (instead of the default 10 seconds) and then run the speed test.

With the default set at 10 seconds, the best way to see a burst in action is to take a continuous snapshot of an IP’s consumption during an extended download.

Beware of the confusion that bursting might cause.

What Is Burstable Bandwidth? Five Points to Consider



Internet Bursting

Internet providers continually use clever marketing analogies to tout their burstable high-speed Internet connections. One of my favorites is the comparison to an automobile with an overdrive that, at the touch of a button, can burn up the road. At first the analogies seem valid, but there are usually some basic pitfalls and unresolved issues. Below are five points designed to make you ponder just what you’re getting with your burstable Internet connection, and which may ultimately call some of these analogies – and burstable Internet speeds altogether – into question.

  1. The car acceleration analogy just doesn’t work.

    First, you don’t share your car’s engine with other users when you’re driving. Whatever the engine has to offer is yours for the taking when you press down on the throttle. As you know, you do share your Internet connection with many other users. Second, with your Internet connection, unless there is a magic button next to your router, you don’t have the ability to increase your speed on command. Instead, Internet bursting is a mysterious feature that only your provider can dole out when they deem appropriate. You have no control over the timing.

  2. Since you don’t have the ability to decide when you can be granted the extra power, how does your provider decide when to turn up your burst speed?

    Most providers do not share details on how they implement bursting policies, but here is an educated guess, based on years of experience helping providers enforce various policies regarding Internet line speeds. I suspect your provider watches your bandwidth consumption and lets you pop up to your full burst speed, typically 10 megabits, for a few seconds at a time. If you continue to use the full 10 megabits for more than a few seconds, they will likely rein you back down to your normal committed rate (typically 1 megabit). Please note this is just an example from my experience and may not reflect your provider’s actual policy.

  3. Above, I mentioned a few seconds for a burst, but just how long does a typical burst last?

    If you were watching a bandwidth-intensive HD video for an hour or more, for example, could you sustain adequate line speed to finish the video? A burst of a few seconds will suffice to make a Web page load in 1/8 of a second instead of perhaps the normal 3/4 of a second. While this might be impressive to a degree, a few seconds of burst is negligible over the length of an hour-long video. So, if you’re watching a movie or doing any other sustained bandwidth-intensive activity, it is unlikely you will be able to benefit from any sort of bursting technology.

  4. Why doesn’t my provider let me have the burst speed all of the time?

    The obvious answer is that if they did, it would not be a burst, so it must be limited in duration somehow. A better answer is that your provider has peaks and valleys in their available bandwidth during the day, and the higher speed of a burst cannot be delivered consistently. Therefore, it's better to leave bursting as a nebulous marketing term rather than a clearly defined entity. One other note: if you only get bursting during your provider's Internet "valleys", it may not help you at all, as those times of day may be nowhere near your own busy hours. Although it will not hurt you, it will not help much either.

  5. What are the likely provider peak times when my burst is compromised?

    Slower service and the inability to burst are most likely to occur when everybody else on the Internet is watching movies, during the early evening. Again, if this is your busy hour, just when you could really use bursting, it is not available to you.

These five points should give you a good idea of the multiple questions and issues that need to be considered when weighing the viability and value of burstable Internet speeds.  Of course, a final decision on bursting will ultimately depend on your specific circumstances.  For further related reading on the subject, we suggest you visit our articles How Much YouTube Can the Internet Handle and Field Guide to Contention Ratios.

How does your ISP actually enforce your Internet Speed


By Art Reisman, CTO, www.netequalizer.com

Have you ever wondered how your ISP manages to control the speed of your connection? If so, you might find the following article enlightening. Below, we'll discuss the various techniques used to enforce bandwidth rate limits, the trade-offs between them, and the side effects of each.

Dropping Packets (Cisco term “traffic policing”)

One of the simplest methods for a bandwidth controller to enforce a rate cap is by dropping packets. When using the packet-dropping method, the bandwidth-controlling device will count the total number of bytes that cross a link during a second. If the target rate is exceeded during any single second, the bandwidth controller will drop packets for the remainder of that second. For example, if the bandwidth limit is 1 megabit, and the bandwidth controller counts 1 million bits gone by in 1/2 a second, it will then drop packets for the remainder of the second. The counter will then reset for the next second. From most evidence we have observed, many ISPs enforce their rate caps with the drop-packet method, as it is the least expensive method supported on most basic routers.
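The per-second byte-counting method described above can be sketched in a few lines. The names and numbers are illustrative, not any vendor's actual code:

```python
# Minimal sketch of drop-packet rate capping ("traffic policing"): count bytes
# per one-second window and drop everything once the cap is exceeded.

class Policer:
    def __init__(self, cap_bits_per_sec=1_000_000):
        self.cap_bytes = cap_bits_per_sec // 8
        self.window_start = 0.0
        self.bytes_this_second = 0

    def admit(self, packet_len, now):
        """Return True to forward the packet, False to drop it."""
        if now - self.window_start >= 1.0:   # new second: reset the counter
            self.window_start = now
            self.bytes_this_second = 0
        if self.bytes_this_second + packet_len > self.cap_bytes:
            return False                     # cap hit: drop for rest of second
        self.bytes_this_second += packet_len
        return True
```

With a 1-megabit cap (125,000 bytes per second), a flood of 1500-byte packets is forwarded until the cap is hit partway through the second, and everything after that is dropped until the next second begins, exactly the behavior described above.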

So, what is wrong with dropping packets to enforce a bandwidth cap?

Well, when a link hits a rate cap and packets are dropped en masse, it can wreak havoc on a network. For example, the standard reaction of a Web browser when it perceives web traffic is getting lost is to re-transmit the lost data. For a better understanding of dropping packets, let’s use the analogy of a McDonald’s fast food restaurant.

Suppose the manager of the restaurant was told his bonus was based on making sure there was never a line at the cash register. So, whenever somebody showed up to order food when all registers were occupied, the manager would open a trap door, conveniently ejecting the customer back out into the parking lot. The customer, being extremely hungry, would come running back in the door (unless of course they die of starvation or get hit by a car) only to be ejected again. To make matters worse, let's suppose a busload of school kids arrives. As the kids file into the McDonald's, the ones remaining on the bus have no idea their classmates inside are getting ejected, so they keep streaming into the McDonald's. Hopefully, you get the idea.

Well, when bandwidth shapers deploy packet-dropping technology to enforce a rate cap, you can get the same result seen in the McDonald's trapdoor analogy. Web browsers and other user-based applications will beat their heads against the wall when they don't get responses from their counterparts on the other end of the line. When packets are being dropped en masse, the network tends to spiral out of control until all the applications essentially give up. Perhaps you have seen this behavior while staying at a hotel with an underpowered Internet link: your connectivity will alternate between working and hanging up completely for a minute or so during busy hours. This can obviously be very maddening.

The solution to shaping bandwidth on a network without causing gridlock is to implement queuing.

Queuing Packets (Cisco term “traffic shaping”)

Queuing is the art of putting something in a line and making it wait before continuing on. Obviously, this is what fast food restaurants do in reality. They plan to have enough staff on hand to handle the average traffic throughout the day, and then queue up their customers when they arrive faster than orders can be filled. The assumption with this model is that at some point during the day the McDonald's will catch up with the number of arriving customers and the lines will shrink away.

Another benefit of queuing is that customers can estimate the wait time as they drive by and see the long line extending out into the parking lot, and thus save their energy by not attempting to go inside.

But, what happens in the world of the Internet?

With queuing methods implemented, a bandwidth controller looks at the data rate of the incoming packets, and if it is deemed too fast, it will delay the packets in a queue. The packets will eventually get to their destination, albeit somewhat later than expected. Packets in the queue can pile up very quickly, and without some help, the link would saturate. The computer memory used to store the queued packets would also saturate and, much like the scenario mentioned above, packets would eventually get dropped if they continued to come in faster than they were sent out.
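By contrast, the queuing method can be sketched as a bounded FIFO that releases packets at the target rate and only drops when the queue itself fills, as noted above. All names and sizes here are illustrative:

```python
# Sketch of queuing-based shaping ("traffic shaping"): delay packets in a
# bounded FIFO and release them at the target rate; drop only when the
# queue (i.e. memory) is exhausted.
from collections import deque

class Shaper:
    def __init__(self, rate_bytes_per_sec=125_000, max_queue=50):
        self.rate = rate_bytes_per_sec
        self.queue = deque()
        self.max_queue = max_queue
        self.next_send_time = 0.0

    def enqueue(self, packet_len):
        """Queue a packet; return False if the queue is full and it is dropped."""
        if len(self.queue) >= self.max_queue:
            return False                  # memory exhausted: drop after all
        self.queue.append(packet_len)
        return True

    def dequeue(self, now):
        """Release the next packet if the target rate allows it, else None."""
        if not self.queue or now < self.next_send_time:
            return None
        packet_len = self.queue.popleft()
        # space out the following packet so the long-run rate stays at self.rate
        self.next_send_time = max(now, self.next_send_time) + packet_len / self.rate
        return packet_len
```

Note that dropping still happens eventually when the sender outpaces the queue; queuing merely softens the blow enough for well-behaved senders to slow down, which is where TCP comes in below.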

TCP to the Rescue (keeping queuing under control)

Most Internet applications use a protocol called TCP (Transmission Control Protocol) to handle their data transfers. TCP has built-in intelligence to figure out the speed of the link on which it is sending data, and can then make adjustments. When the NetEqualizer bandwidth controller queues a packet or two, the TCP controllers on the customer end-point computers will sense the slower packets and back off the speed of the transfer. With just a little bit of queuing, the sender slows down a bit and dropped packets can be kept to a minimum.

Queuing Inside the NetEqualizer

The NetEqualizer bandwidth shaper uses a combination of queuing and dropping packets to get speed under control. Queuing is the first option, but when a sender does not back off eventually, their packets will get dropped. For the most part, this combination of queuing and dropping works well.

So far we have been assuming a simple case of a single sender and a single queue, but what happens if you have a gigabit link with 10,000 users and you want to break off 100 megabits to be shared by 3,000 users? How would a bandwidth shaper accomplish this? This is another area where a well-designed bandwidth controller like the NetEqualizer separates itself from the crowd.

In order to provide smooth shaping for a large group of users sharing a link, the NetEqualizer does several things in combination.

  1. It keeps track of all streams, and based on their individual speeds, the NetEqualizer will use different queue delays on each stream.
  2. Streams that back off will get minimal queuing.
  3. Streams that do not back off may eventually have some of their packets dropped.

The net effect of the NetEqualizer queuing intelligence is that all users will experience steady response times and smooth service.
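As a rough sketch of the three steps above: under congestion, a heavier stream earns queue delay, and a stream that refuses to back off eventually loses packets. The thresholds and names below are invented for illustration and are not the actual NetEqualizer algorithm:

```python
# Illustrative per-stream decision: light streams pass, heavy streams get
# queue delay, and streams that never back off get packets dropped.
# Thresholds are made-up example values, not NetEqualizer internals.

def stream_action(rate_bps, link_congested,
                  heavy_bps=500_000, drop_bps=5_000_000):
    """Decide how to treat one stream during (or outside of) congestion."""
    if not link_congested:
        return "forward"            # no congestion: leave everyone alone
    if rate_bps >= drop_bps:
        return "drop"               # refuses to back off: shed packets
    if rate_bps >= heavy_bps:
        return "delay"              # heavy stream: add queue delay
    return "forward"                # light stream: minimal queuing
```

The key point the sketch captures is that decisions are made per stream and only when the link is congested, which is why light interactive traffic keeps its steady response times.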

Notes About UDP and Rate Limits

Some applications, such as video, do not use TCP to send data. Instead, they use a "send-and-forget" mechanism called UDP, which has no built-in back-off mechanism. Without some higher intelligence, UDP packets will continue to be sent at a fixed rate, even if they are arriving too quickly for the receiver. The good news is that most UDP applications also have some way of measuring whether their packets are getting to their destination; it's just that with UDP, the mechanism of synchronization is not standardized.

Finally, there are those applications that just don't care if the packets get to their destination. Speed tests and viruses send UDP packets as fast as they can, regardless of whether the network can handle them. The only way to enforce a rate cap with such ill-mannered applications is to drop their packets.

Hopefully this primer has given you a good introduction to the mechanisms used to enforce Internet speeds, namely dropping packets and queuing. And maybe you will think about this the next time you visit a fast food restaurant during their busy time…

Burstable Internet Connections — Are They of Any Value?


A burstable Internet connection conjures up the image of a super-charged Internet reserve, available at your discretion in a moment of need, like pushing the gas pedal to the floor to pass an RV on a steep grade. Americans find comfort in knowing they have that extra horsepower at their disposal. The promise of power is ingrained in our psyche, and is easily tapped into when marketing an Internet service. However, if you stop for a minute and think about what a bandwidth burst actually is, it might not be a feature worth paying for.

Here are some key questions to consider:

  • Is a burst one second, 10 seconds, or 10 hours at a time? This might seem like a stupid question, but it is at the heart of the issue. What good is a 1-second burst if you are watching a 20-minute movie?
  • If it is 10 seconds, then how long do I need to wait before it becomes available again?
  • Is it available all of the time, or just when my upstream provider(s) circuits are not busy?
  • And overall, is the burst really worth paying for? Suppose the electric company told you that you had a burstable electric connection or that your water pressure fluctuated up for a few seconds randomly throughout the day? Is that a feature worth paying for? Just because it’s offered doesn’t necessarily mean it’s needed or even that advantageous.

While the answers to each of these questions will ultimately depend on the circumstances, they all point to the same fallacy in the case for burstable Internet speeds: as marketed, bursting can be a meaningless promise without a precise definition. Perhaps there are providers out there that lay out exact definitions for a burstable connection and abide by those terms. Even then, we could argue that the value of the burst is limited.

What we have seen in practice is that most burstable Internet connections are unpredictable and simply confuse and annoy customers. Unlike the turbo charger in your car, you have no control over when you can burst and when you can’t. What sounded good in the marketing literature may have little practical value without a clear contract of availability.

Therefore, to ensure that burstable Internet speeds really will work to your advantage, it’s important to ask the questions mentioned above. Otherwise, it very well may just serve as a marketing ploy or extra cost with no real payoff in application.

Update: October 1, 2009

Today a user group published a bill of rights in order to nail ISPs down on exactly what they are providing in their service contracts, including their claims of bandwidth speed.

I noticed that the article's bill of rights requires full disclosure about the speed of the provider's link to the consumer's modem. I am not sure this is enough to guarantee a minimum speed to the consumer. You see, a provider could then quite easily oversell the capacity at their switching point, the place where they hook up to a backbone of other providers. You cannot completely regulate speed across the Internet, since by design providers hand off or exchange traffic with other providers. Your provider cannot control the speed of your connection once it is off their network.

Posted by Eli Riles, VP of sales www.netequalizer.com.

Speeding up Your T1, DS3, or Cable Internet Connection with an Optimizing Appliance


By Art Reisman, CTO, APconnections (www.netequalizer.com)

Whether you are a home user or a large multinational corporation, you likely want to get the most out of your Internet connection. In previous articles, we have briefly covered using Equalizing (Fairness) as a tool to speed up your connection without purchasing additional bandwidth. In the following sections, we'll break down exactly how this is accomplished in layman's terms.

First, what is an optimizing appliance?

An optimizing appliance is a piece of networking equipment that has one Ethernet input and one Ethernet output. It is normally located between the router that terminates your Internet connection and the users on your network. From this location, all Internet traffic must pass through the device. When activated, the optimizing appliance can rearrange traffic loads for optimal service, thus preventing the need for costly new bandwidth upgrades.

Next, we’ll summarize equalizing and behavior-based shaping.

Overall, equalizing is a simple concept. It is the art of looking at the usage patterns on the network, and when things get congested, robbing from the rich to give to the poor. In other words, heavy users are limited in the amount of bandwidth to which they have access in order to ensure that ALL users on the network can utilize the network effectively. Rather than writing hundreds of rules to specify allocations for specific traffic, as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

How is Fairness implemented?

If you have multiple users sharing your Internet trunk and somebody mentions “fairness,” it probably conjures up the image of each user waiting in line for their turn. And while a device that enforces fairness in this way would certainly be better than doing nothing, Equalizing goes a few steps further than this.

We don't just divide the bandwidth equally like a "brain dead" controller. Equalizing is a system of dynamic priorities that reward smaller users at the expense of heavy users. It is very dynamic, and there is no pre-set limit on any user. In fact, the NetEqualizer does not keep track of users at all. Instead, we monitor user streams. So, a user may have one stream (an FTP download) slowed down while at the same time having another stream (e-mail) untouched.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.
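As an illustration of connection-count abuse, here is a minimal sketch of flagging hosts that hold an outsized number of simultaneous connections. The threshold and function name are assumptions for this example, not NetEqualizer internals:

```python
# Toy detector for the p2p pattern described above: one host opening
# hundreds of connections to different destinations on the Internet.
from collections import Counter

def find_connection_hogs(connections, max_conns_per_host=100):
    """connections: iterable of (source_ip, dest_ip) pairs.
    Returns the source IPs holding more connections than the threshold."""
    per_host = Counter(src for src, _ in connections)
    return {ip for ip, n in per_host.items() if n > max_conns_per_host}
```

A host running a normal mix of browsing and e-mail stays well under any such threshold, while a p2p client fanning out to hundreds of peers stands out immediately.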

What is the result?

The end result is that applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority, while large downloads and p2p receive lower priority. Also, when we do cut back large streams, it is generally for a short duration. As an added advantage, this behavior-based shaping does not need to be updated constantly as applications change.

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned with accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. It is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the main exception that we run into is video, and the NetEqualizer has a low-level routine that easily allows you to give overriding priority to a specific server on your network, hence solving the problem. The NetEqualizer also has a special feature whereby you can exempt and give priority to any specific IP address in the event that a large stream such as video must be given priority.

Through the implementation of Equalizing technology, network administrators are able to get the most out of their network. Users of the NetEqualizer are often surprised to find that their network problems were not a result of a lack of bandwidth, but rather a lack of bandwidth control.

See who else is using this technology.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

5 Tips to speed up your business T1/DS3 to the Internet


By Art Reisman


In tight times expanding your corporate Internet pipe is a hard pill to swallow, especially when your instincts tell you the core business should be able to live within the current allotment.

Here are some tips and hard facts you may want to consider to help stretch your business Internet pipe.

1) Layer 7 application shaping.

The marketplace is crawling with solutions that allow you to set bandwidth policies based on the type of application. Application shaping allows an administrator to restrict lower-priority activities while giving mission-critical apps favorable consideration. This methodology is very seductive, but from our experience it can send your IT department into a nanny state, constantly trying to figure out what to allow and what to restrict. Also, the cost of an Internet link expansion is dropping, while many of the application shaping solutions start around $10,000 and go up from there.

The upside is that Layer 7 application shaping does work well when it comes to internal WAN links that do not carry Internet traffic. An administrator can get a handle on the fixed traffic running privately within their network quite easily.

2) Using your router to restrict specific IP and ports

If your core business utilization can be isolated to a single server or group of servers, a few simple rules to allocate a large chunk of the pipe to these resources (by IP address) may be a good fit.

In an environment where business priorities change and are not isolated to a fixed server or two, this solution can backfire, but if your resource allocation requirements are stable doing something on your router to restrict one particular subnet over another can be useful in stretching your bandwidth.

One thing to be careful of is that it often takes a skilled technician to set up specialty rules on your router. You can easily rack up $$ with your IT consultants if your setup is not static.

3) Behavior based shaping

Editors note: We are the makers of the NetEqualizer which specializes in this technology; however our intent in this article is to be objective.

Behavior based shaping works well and affordably in most situations. Most business related applications will get priority as they tend to use small amounts of data or web pages.  Occasionally there are exceptions that need to override the basic behavior based shaping such as video.  Video can easily  be excluded from the generic policies.  Implementing a few exclusions is far less cumbersome than trying to classify all traffic all the time such as with application shaping.

4) Add more bandwidth and bypass your local loop carrier

T1s and T3s from your local telco may not be the only options for bandwidth in your area. Many of our customers get creative by purchasing bandwidth directly from a tier-one provider (such as Level 3) and then using a microwave backhaul to bring the bandwidth to their location. The telcos make a killing with what they call a loop charge, assessed before they put any bandwidth on your line. With microwave backhaul technology you can bypass this charge for significant savings.

5) Clean up the laptops and computers on your network

Many robots and viruses run in the background on your Windows machines and can generate a cacophony of background traffic. A business-wide license for good virus protection may be worth the investment. Stay away from the freeware versions of virus protection; they tend to miss quite a bit.

New Speed Test Tools from M-Lab Expose ISP Bandwidth Throttling Practices


In a recent article, we wrote about "The White Lies ISPs Tell About Their Bandwidth Speeds". We even hinted at how your ISP might be inclined to give preferential treatment to well-known speed test sites. Well, now there is a speed test site from M-Lab that goes beyond simple speed tests. M-Lab gives the consumer sophisticated results and exposes any tricks your ISP might be up to.

Features provided include:

  • Network Diagnostic Tool – Test your connection speed and receive sophisticated diagnosis of problems limiting speed.
  • Glasnost – Test whether BitTorrent is being blocked or throttled.
  • Network Path and Application Diagnosis – Diagnose common problems that impact last-mile broadband networks.
  • DiffProbe (coming soon) – Determine whether an ISP is giving some traffic a lower priority than other traffic.
  • NANO (coming soon) – Determine whether an ISP is degrading the performance of a certain subset of users, applications, or destinations.

Click here to learn more about M-Lab.

Related article on how to determine your true video speed over the Internet.

Seventeen Unique Ideas to Speed up Your Internet


By Eli Riles
Eli Riles is a retired insurance agent from New York. He is a self-taught expert in network infrastructure. He spends half the year traveling and visiting remote corners of the earth. The other half of the year you’ll find him in his computer labs testing and tinkering with the latest network technology.  For questions or comments please contact him at
admin@netequalizer.com

Updated 11/30/2015 – We are now up to seventeen (17) tips!
————————————————————————————————————————————————

Although there is no way to actually make your true Internet speed faster, here are some tips for home and corporate users that can make better use of the bandwidth you have, thus providing the illusion of a faster pipe.

1) Use A VPN tunnel to get to blocked content.

One of the little-known secrets your provider does not want you to know is that they will slow video or software updates if the content is not hosted on their network. Here is an article with details on how you can get around this restriction.

2) Time of day does make a difference

During peak Internet usage times, 5 PM to midnight local time, your upstream provider is most likely congested as well. If you have a bandwidth-intensive task to do, such as downloading an update for your iPad, you can likely get a much faster download by doing it earlier in the day. I have even noticed that the more obscure YouTube videos have problems running at peak traffic times. My upstream provider does a good job with Netflix and popular videos during peak hours (these can be found in their cache), but if I request something that is not likely stored in a local copy on their servers, the video will lag during peak times. (See our article on caching.)

3) Turn off JavaScript

There are some trade-offs with doing this, but it does make a big difference in how fast pages will load. Here is an article where we cover all the relevant details.

Note: Prior to 2010, setting your browser to text-only mode was a viable option, but today most sites are full of graphics and virtually unreadable in text-only mode.

  • If you are stuck with a dial-up or slower broadband connection, your browser likely has an option to load text only. If you are a power user who is gaming or watching YouTube, text-only mode will obviously have no effect on those activities, but it will speed up general browsing and e-mail. Most web pages are loaded with graphics which take up the bulk of the load time, so switching to text only will eliminate the graphics and save you quite a bit of time.

4) Install a bandwidth controller to make sure no single connection dominates your bandwidth

Everything you do on the Internet creates a connection from inside your network to the Internet, and all of these connections compete for the limited amount of bandwidth your ISP provides.

Your router (cable modem) connection to the Internet provides first come/first serve service to all the applications trying to access the Internet. To make matters worse, the heavier users, the ones with the larger persistent downloads, tend to get more than their fair share of router cycles.  Large downloads are like the school yard bully, they tend to butt in line, and not play fair.

Read the full article.

5) Turn off the other computers in the house

Many times, even during the day when the kids are off to school, I’ll be using my Skype phone and the connection will break up.  I have no idea what exactly the kids’ computers are doing, but if I log them off the Internet, things get better with the Skype call every time. In a sense, it’s a competition for limited bandwidth resources, so, decreasing the competition will usually boost your computer’s performance.

6) Kill background tasks on your computer

You should also try to turn off any BitTorrent or background tasks on your computer if you are having trouble while trying to watch a video or make a VoIP call.  Use your task bar to see what applications are running and kill the ones you don’t want.  Although this is a bit drastic, you may just find that it makes a difference. You’d be surprised what’s running on your computer without you even knowing it (or wanting it).

For you gamers out there, this also means turning off the audio component on your games if you do not need it for collaboration.

7) Test your Internet speed

One of the most common issues with slow internet service is that your provider is not giving you the speed/bandwidth that they have advertised.  Here is a link to our article on testing your Internet speed, which is a good place to start.

Note: Comcast has adopted a 15-minute penalty box in some markets. Your initial speed tests will likely show no degradation, but if you persist in watching high-definition video for more than 15 minutes, you may get put into their penalty box. This practice helps preserve a limited resource in some crowded markets. We note it here because we have heard reports of people happily watching YouTube videos only to have their service degrade.

Related Article: The real meaning of Comcast generosity.

8) Make sure you are not accidentally connected to a weak access point signal

There are several ways an access point can slow down your connection. If the signal between you and the access point is weak, the access point will automatically downgrade its service to a slower speed. This happens to me all the time: my access point goes on the blink (needs to be rebooted) and my computer connects to the neighbor's access point with a weaker signal. The speed of my connection on the weaker-signaled AP is quite variable. So, if you are on wireless in a densely populated area, check which signal you are connected to.

9) Caching — How  does it work and is it a good idea?

Offered by various vendors and built into Internet Explorer, caching can be very effective in many situations. Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing a WAN/Internet link unnecessarily.

Many web servers keep a time stamp of their last update to data, and browsers such as the popular Internet Explorer will check the time stamp on the host server. If the page time stamp has not changed since the last time you accessed the page, IE will present a locally stored copy of the Web page (from the last time you accessed it), saving the time it would take to load the page from across the Internet.
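The time-stamp check described above can be sketched with a conditional HTTP GET: the client presents an If-Modified-Since header, and the server answers 304 Not Modified when the cached copy is still current. The URL and function name below are placeholders:

```python
# Sketch of a browser-style conditional fetch using the standard library.
# If the server reports the page unchanged (304), the caller should serve
# its locally cached copy instead of re-downloading.
import urllib.request
from urllib.error import HTTPError

def fetch_if_modified(url, last_fetch_timestamp):
    """Return fresh page bytes, or None if the cached copy is still valid."""
    req = urllib.request.Request(
        url, headers={"If-Modified-Since": last_fetch_timestamp})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read()          # page changed: use the new copy
    except HTTPError as err:
        if err.code == 304:
            return None                 # unchanged: serve from local cache
        raise
```

The timestamp string follows the usual HTTP date format, e.g. "Mon, 01 Jan 2024 00:00:00 GMT".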

So what is the downside of caching?

There are two main issues that can arise with caching:

a) Keeping the cache current. If you access a cached page that is not current, then you are at risk of getting old and incorrect information. Some things you may never want to be cached, for example the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in cache will not be synchronized with changes. I personally have been misled by old data from my cache on several occasions.

b) Volume. There are some 100 million Web sites out on the Internet. Each site contains upwards of several megabytes of public information. The amount of data is staggering and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.

Recommended: Related article on how ISPs use caching to speed up NetFlix and Youtube Videos.

For information on turning off caching, click here.
10) Kill your virus protection software

The recent outbreak of the H1N1 virus reminded me of how sometimes the symptoms and carnage from a vaccine are worse than the disease it purports to cure. Well, the same holds true for your virus protection software. Yes, viruses are real and can take down your computer, but so can a disk crash, which is also inevitable. You must back up your critical data regularly. However, virus software seems to dominate more resources on my desktop than anything else. I no longer use any and could not be happier. But be sure to keep a reliable back-up, as you will need to rebuild your computer now and then, which I find a better alternative than running a slow computer all of the time.

11) Set a TOS bit to provide priority

A TOS bit is a special bit within an IP packet that directs routers to give preferential treatment to selected packets. This sounds great: just set a bit and move to the front of the line for faster service. As always, there are limitations.

– How does one set a TOS bit?
It seems that only very specialized enterprise applications, like a VoIP PBX, actually set and make use of TOS bits. Setting the actual bit is not all that difficult if you have an application that deals with the network layer, but most commercial applications just send their data on to the host computer's clearinghouse for data, which in turn puts it into IP packets without a TOS bit set. After searching around for a while, I just don't see any literature on setting a TOS bit at the application level. For example, there are a couple of forums where people mention setting the TOS bit in Skype, but nothing definitive on how to do it.
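For what it's worth, on most Unix-like systems an application that works at the socket layer can set the TOS byte itself via the standard IP_TOS socket option; whether any router honors it is another matter. This sketch uses the common DSCP value 46 ("Expedited Forwarding", often used for VoIP) as an example:

```python
# Setting the TOS byte on a UDP socket via the standard IP_TOS option.
# 0xB8 is DSCP 46 ("Expedited Forwarding") shifted left two bits into the
# TOS byte. Availability of socket.IP_TOS is platform-dependent (Unix-like).
import socket

def make_tos_socket(tos=0xB8):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return s
```

This only marks the packets; as the next point explains, every router along the path must be configured to honor the marking for it to matter.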

– Who enforces the priority for TOS packets?
This is a function of routers at the edge of your network, and all routers along the path to wherever the IP packet is going. Generally, this limits the effectiveness of using a TOS bit to networks that you control end-to-end. In other words, a consumer using a public Internet connection cannot rely on their provider to give any precedence to TOS bits, hence this feature is relegated to enterprise networks within a business or institution.

–  Incoming traffic generally cannot be controlled.
The subject of when you can and cannot control a TOS bit does get a bit more involved.  We have gone over this in more detail in a separate  article.

12) Avoid Quota Penalties

Some providers are implementing quotas, where they slow you down if you use too much data over a period of time. If you know that you have a large set of downloads to do, for example syncing your device with iTunes Cloud, go to a library and use their free service. Or, if you are truly without morals, log on to your neighbor's wireless network and do your sync there.

13) Consider Application Shaping?

Note: Application shaping is an appropriate topic for corporate IT administrators and is generally not a practical solution for a home user.  Makers of application shapers include Blue Coat (Packeteer) and Allot (NetEnforcer), products that are typically out of the price range for many smaller networks and home users.

One of the most popular and intuitive forms of optimizing bandwidth is a method called "application shaping", which also goes by the aliases "deep packet inspection", "layer 7 shaping", and perhaps a few others thrown in for good measure. For the IT manager who is held accountable for everything that can and will go wrong on a network, or the CIO who needs to manage network usage policies, this may at first glance seem like a dream come true. If you can divvy up portions of your WAN/Internet link among various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth, right? Well, you be the judge…

At the center of application shaping is the ability to identify traffic by type: for example, distinguishing Citrix traffic from streaming audio, Kazaa peer-to-peer file sharing, or something else. However, this approach is not without its drawbacks.

Drawback #1: Applications can purposely use non-standard ports
Many applications are expected to use well-known Internet ports when communicating across the Web. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the FTP application commonly used for downloading files uses the well-known port 21 as its standard. The fallacy with this scheme, as many operators soon find out, is that many applications do not consistently use a standard fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don't want IT personnel to block them at all, so they deliberately design applications not to conform to any formal port-assignment scheme. For this reason, any product that aims to block or alter application flows by port alone should be avoided if your primary mission is to control applications by type.
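To see why port-based classification breaks down, consider a toy classifier (the port table here is a small illustrative sample, not any real product's rule set):

```python
# Classic well-known ports -- the assumption a port-based firewall relies on.
WELL_KNOWN_PORTS = {21: "ftp", 25: "smtp", 80: "http", 443: "https"}

def classify_by_port(dest_port: int) -> str:
    """Guess the application from the destination port alone."""
    return WELL_KNOWN_PORTS.get(dest_port, "unknown")

# Works for well-behaved traffic:
print(classify_by_port(21))   # "ftp"

# But a peer-to-peer client deliberately running over port 80
# is waved through as ordinary web traffic:
print(classify_by_port(80))   # "http" -- actually misclassified p2p
```

The classifier has no way of knowing what is really inside the connection, which is exactly the gap the next section addresses.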

So, if standard firewalls are inadequate at blocking applications by port, what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet.

In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, imagine a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then, when the train arrived in Los Angeles, the workers on the other end would hopefully have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what, the contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit, I could actually see snippets of the document in each packet and could quite easily read the words as they went by.
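The disassemble-and-reassemble step is easy to sketch in code. A minimal illustration (1460 bytes is a typical maximum TCP payload on an Ethernet link, though the real figure varies by network):

```python
PAYLOAD_SIZE = 1460  # typical maximum TCP payload on Ethernet

def packetize(data: bytes) -> list[bytes]:
    """Split a message into payload-sized chunks, one per packet."""
    return [data[i:i + PAYLOAD_SIZE] for i in range(0, len(data), PAYLOAD_SIZE)]

def reassemble(payloads: list[bytes]) -> bytes:
    """The receiving end glues the payloads back together in order."""
    return b"".join(payloads)

document = b"word document contents..." * 500  # a ~12 KB "skyscraper"
packets = packetize(document)
assert reassemble(packets) == document
print(len(packets))  # number of "freight cars" needed
```

Each chunk rides inside its own IP packet, and anyone inspecting those packets in transit sees the document's contents go by piece by piece.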

At the heart of all current application shaping products is special software that examines the content of Internet packets (aka “deep packet inspection”), and through various pattern matching techniques, determines what type of application a particular flow is. Once a flow is determined, then the application shaping tool can enforce the operator’s policies on that flow. Some examples of policy are:

Limit AIM messenger traffic to 100 kbps
Reserve 500 kbps for ShoreTel voice traffic

The list of rules you can apply to traffic types and flow is unlimited.
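Here is deep packet inspection in miniature. The two signatures below are real, widely documented protocol fingerprints, but this is a hedged sketch; a production shaper applies thousands of far more sophisticated rules across whole flows, not single payloads:

```python
import re

# Byte-pattern signatures: protocol name -> regex matched against the payload.
SIGNATURES = {
    # BitTorrent handshake: a length byte of 19 followed by the protocol string.
    "bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    # A plain HTTP request line.
    "http": re.compile(rb"^(GET|POST|HEAD|PUT|DELETE) \S+ HTTP/1\.[01]"),
}

def classify_payload(payload: bytes) -> str:
    """Classify by matching payload bytes -- not ports."""
    for app, pattern in SIGNATURES.items():
        if pattern.match(payload):
            return app
    return "unknown"

print(classify_payload(b"GET /index.html HTTP/1.1\r\nHost: example.com"))  # http
print(classify_payload(b"\x13BitTorrent protocol" + b"\x00" * 8))          # bittorrent
```

Once a flow is classified this way, a rate limiter can enforce the operator's policy on it, regardless of which port the application chose.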

Drawback #2: The number of applications on the Internet is a moving target.
The best application shaping tools do a very good job of identifying several thousand of them, yet there will always be some traffic that is unknown (estimated at 10 percent by experts at the leading manufacturers). That traffic gets lumped into an "unknown" classification, and the operator must make a blanket decision on how to shape the whole class. Is it important? Is it not? Suppose the important traffic is streaming audio for a webcast and it goes unclassified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost of staying up-to-date is large, and there are cracks.

Drawback #3: The spectrum of application types is not static
Even if the application spectrum could be completely classified today, it constantly changes. You must keep licenses current to ensure you have the latest detection capabilities, and even then it can be quite a task to constantly analyze and adjust the mix of policies on your network. As bandwidth costs fall, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

Drawback #4: Net neutrality is compromised by application shaping.
Techniques used in application shaping have become controversial on public networks, with privacy issues often conflicting with attempts to ensure network quality.

Based on these drawbacks, we believe that application shaping is not the dream come true that it may seem at first glance.  Once CIOs and IT Managers are educated on the drawbacks, they tend to agree.

14) Bypass that local consumer reseller

This option might be a little bit out of the price range of the average consumer, and it may not be practical logistically –  but if you like to do things out-of-the-box, you don’t have to buy Internet service from your local cable operator or phone company, especially if you are in a metro area.  Many customers we know have actually gone directly to a Tier 1 point of presence (backbone provider) and put in a radio backhaul direct to the source.  There are numerous companies that can set you up with a 40-to-60 megabit link with no gimmicks.

15) Speeding up your iPhone

Ever been in a highly populated area with three or four bars, and yet your iPhone access still slows to a crawl?

The most likely reason for this problem is congestion on the provider's line. 3G and 4G networks all have a limited-size pipe from the nearest tower back to the Internet. It really does not matter what your theoretical data speed is; when there are more people using the tower than the backhaul pipe can handle, you can temporarily lose service, even when your phone is showing three or four bars.

Unfortunately, you only have a couple of options in this situation. If you are in a stadium with a large crowd, your best bet is to text during the action. Timeouts, the end of a quarter, and the final out are exactly when everyone else reaches for their phone, so those are the times when the network slows to a crawl. Pick a moment when you know the majority of people are not trying to send data.

Your other option is to get away from the area of congestion. I have experienced a complete lockout of up to 30 minutes when trying to text as a sold-out stadium emptied out. In that situation, my only recourse was to walk a half mile or so from the venue to get a text out. Once away from the main stadium, my iPhone connected to a tower with a different backhaul, away from the congested stadium towers.

Shameless plug: If you happen to be a provider, or know somebody who works for a provider, please tell them to call us, and we'd be glad to explain the simplicity of equalizing and how it can restore sanity to a congested wireless backhaul.

16) Turn off HTTPS and other Encryption

Although this may sound a bit controversial, some providers, for the sake of survival, assume that encrypted traffic is bad traffic. For example, p2p is considered bad traffic, and providers use special equipment to throw it into a lower-priority pool so that it gets sent out at a slower speed. Many applications are starting to encrypt their traffic: p2p, Facebook, and so on. Because the provider can no longer see what this traffic is, it may assume it is all "bad" and give it a lower priority. If you turn off encryption where you can live without it, your traffic may be classified correctly and stay out of the penalty pool.

17) Protocol Spoofing

Note:  This method is applied to Legacy Database servers doing operations over a WAN.  Skip this tip if you are a home user.

Historically, there are client-server applications that were developed for an internal LAN. Many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe using WAN links to tie different locations together.

To get a better visual on what goes on in a chatty application, perhaps an analogy will help. It's like sending family members your summer vacation pictures and, for some insane reason, putting each picture in a separate envelope and mailing them individually on the same mail run. Obviously, this would be extremely inefficient, and so are chatty applications.

What protocol spoofing accomplishes is to fake out the client or server side of the transaction and then send a more compact version of the transaction over the Internet, i.e. put all the pictures in one envelope and send it on your behalf, thus saving you postage.
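In code, the core idea is simply batching. A minimal sketch, where the made-up `transport` function stands in for a WAN round trip (everything here is a hypothetical illustration, not any particular spoofing product):

```python
def send_chatty(queries, transport):
    """One WAN round trip per query -- one envelope per photo."""
    return [transport(q) for q in queries]

def send_spoofed(queries, transport):
    """The spoofing proxy collects the queries locally, then ships one
    combined request across the WAN -- all the photos in a single envelope."""
    combined = "\n".join(queries)
    reply = transport(combined)          # a single round trip
    return reply.split("\n")

# A fake transport that counts round trips and echoes each line back.
class CountingTransport:
    def __init__(self):
        self.round_trips = 0
    def __call__(self, payload):
        self.round_trips += 1
        return payload

queries = [f"SELECT row {i}" for i in range(50)]
t1, t2 = CountingTransport(), CountingTransport()
assert send_chatty(queries, t1) == send_spoofed(queries, t2)
print(t1.round_trips, t2.round_trips)  # 50 round trips vs. 1
```

The replies are identical either way; the only thing that changes is how many times the transaction crosses the high-latency link, which is where the speedup comes from.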

You might ask: why not just fix the inefficiencies in these chatty applications rather than write software to work around the problem? Good question, but that would be the subject of a totally different article on how IT organizations must evolve with legacy technology, which is beyond the scope of the present article.

In Conclusion

Again, while there is no way to increase your true Internet speed without upgrading your service, these tips can improve performance, and help you to get better results from the bandwidth that you already have.  You’re paying for it, so you might as well make sure it’s being used as effectively as possible. : )

Related Article on testing true video speed over the Internet

A great article from the tech guy regarding tips on dealing with your ISP

Other Articles on Speeding up Your Internet

Five tips and tricks to speed up your Internet

How to speed up your Internet Connection Without any Software

Tips on how to speed up your Internet

About APconnections

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here to request our full pricelist.
