Five More Tips on Testing Your Internet Speed


By Art Reisman

Art Reisman is currently CTO and co-founder of NetEqualizer

Imagine if every time you went to a gas station the meters were adjusted to exaggerate the amount of fuel pumped, or the gas contained inert additives. Most consumers count on the fact that state and federal regulators monitor their local gas station to ensure that a gallon is a gallon and the fuel is not a mixture of water and rubbing alcohol. But in the United States, there are no rules governing truth in bandwidth claims. At least none that we are aware of.

Given that there is no standard for regulating Internet speed, it’s up to you, the consumer, to take the extra steps to make sure you’re getting what you pay for. In the past, we’ve offered some tips both on speeding up your Internet connection as well as questions you should ask your provider. Here are some additional tips on how to fairly test your Internet speed.

1. Use a speed test site that mimics the way you actually access the Internet.

Why?

Using a popular speed test tool is too predictable, and your Internet provider knows this. In other words, they can optimize their service to show great results when you use a standard speed test site. To get a better measure of your speed, your test must be unpredictable. Think of a movie star going to the Oscars. With time to plan, they are always going to look their best. But the candid pictures captured by the tabloids never look quite as good.

To get a candid picture of your provider's true throughput, we suggest using a tool such as the speed test utility from M-Lab.

2. Try a very large download to see if your speed is sustained.

We suggest downloading a full Knoppix CD. Most download utilities will give you a status bar on the speed of your download. Watch the download speed over the course of the download and see if the speed backs off after a while.

Why?

Some providers will start slowing your speed after a certain amount of data is passed in a short period, so the larger the file used in the test, the better. The common speed test sites likely do not use large enough downloads to trigger a slower download speed enforced by your provider.
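If you prefer to script this kind of test, here is a minimal sketch in Python using only the standard library. The URL is a placeholder you would replace with a real mirror of a large file (such as a Knoppix image); the chunk size and reporting interval are just reasonable assumptions.

```python
import time
import urllib.request

# Placeholder URL -- substitute a large file from a mirror you trust.
URL = "https://example.com/large-file.iso"

CHUNK = 64 * 1024          # read 64 KB at a time
REPORT_INTERVAL = 5.0      # print a throughput sample every 5 seconds

def measure(url):
    start = last_report = time.time()
    total = interval_bytes = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            data = resp.read(CHUNK)
            if not data:
                break
            total += len(data)
            interval_bytes += len(data)
            now = time.time()
            if now - last_report >= REPORT_INTERVAL:
                mbps = interval_bytes * 8 / (now - last_report) / 1e6
                print(f"{now - start:6.0f}s  {mbps:6.2f} Mbps")
                last_report, interval_bytes = now, 0
    elapsed = time.time() - start
    print(f"Average: {total * 8 / elapsed / 1e6:.2f} Mbps over {elapsed:.0f}s")

if __name__ == "__main__":
    measure(URL)
```

If the per-interval numbers start high and then settle noticeably lower for the rest of the transfer, that is a hint your provider backs off sustained downloads.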

3. If you must use a standard speed test site, make sure to repeat your tests with at least three different speed test sites.

Different speed test sites use different methods for passing data and results will vary.

4. Run your tests during busy hours — typically between 5 and 9 p.m. — and try running them at different times.

Oftentimes, ISPs have trouble providing their top advertised speeds during busy hours.

5. Make sure to shut off other activities that use the Internet when you test. 

This includes other computers in your house, not just the computer you are testing from.

Why?

All the computers in your house share the same Internet pipe to your provider. If somebody is watching a Netflix movie while you run your test, the movie stream will skew your results.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Just How Fast Is Your 4G Network?


By Art Reisman, CTO, www.netequalizer.com

The subject of Internet speed and how to make it go faster is always a hot topic. That raises the question: if everybody wants their Internet to go faster, what are some of the limitations? I mean, why can’t we just achieve infinite speeds when we want them and where we want them?

Below, I’ll take on some of the fundamental gating factors of Internet speeds, primarily exploring the difference between wired and wireless connections. As we have “progressed” from a reliance on wired connections to a near-universal expectation of wireless Internet options, we’ve also put some limitations on what speeds can be reliably achieved. I’ll discuss why the wired Internet to your home will likely always be faster than the latest fourth generation (4G) wireless being touted today.

To get a basic understanding of the limitations with wireless Internet, we must first talk about frequencies. (Don’t freak out if you’re not tech savvy. We usually do a pretty good job at explaining these things using analogies that anybody can understand.) The reason why frequencies are important to this discussion is that they’re the limiting factor to speed in a wireless network.

The FCC allows cell phone companies and other wireless Internet providers to use a specific range of frequencies (channels) to transmit data. For the sake of argument, let’s just say there are 256 frequencies available to the local wireless provider in your area. So in the simplest case of the old analog world, that means a local cell tower could support 256 phone conversations at one time.

However, with the development of better digital technology in the 1980s, wireless providers have been able to juggle more than one call on each frequency. This is done by using a time sharing system where bits are transmitted over the frequency in a round-robin type fashion such that several users are sharing the channel at one time.

The wireless providers have overcome the problem of having multiple users sharing a channel by dividing it up into time slices. Essentially this means that when you are talking on your cell phone or bringing up a Web page on your browser, your device pauses to let other users on the channel. Only in the best case would you have the full speed of the channel to yourself (perhaps at 3 a.m. on a deserted stretch of interstate). For example, I just looked over some of the mumbo jumbo and promises of one-gigabit speeds for 4G devices, but only in a perfect world would you be able to achieve that speed.

In the real world of wireless, we need to know two things to determine the actual data rates to the end user.

  1. The maximum amount of data that can be transmitted on a channel
  2. The number of users sharing the channel

The answer to part one is straightforward: A typical wireless provider has channel licenses for frequencies in the 800 megahertz range.

A rule of thumb for transmitting digital data over the airwaves is that you can only send bits of data at 1/2 the frequency. For example, 800 megahertz is 800 million cycles per second, and 1/2 of that is 400 million cycles per second. This translates to a theoretical maximum data rate of 400 megabits per second. Realistically, with noise and other environmental factors, 1/10 of the original frequency is more likely. This gives us a maximum carrying capacity per channel of 80 megabits per second and a ballpark estimate for our answer to part one above.

However, the actual answer to variable two, the number of users sharing a channel, is a closely guarded secret among service providers. Conservatively, let’s just say you’re sharing a channel with 20 other users on a typical cell tower in a metro area. With 80 megabits to start from, this would put your individual maximum data rate at about four megabits during a period of heavy usage.
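To make the arithmetic explicit, here is the same back-of-the-envelope calculation in a few lines of Python. The 1/2 and 1/10 factors and the 20-user share are the rough assumptions used above, not measured values.

```python
# Ballpark per-user 4G throughput using the rough assumptions from the text.
carrier_hz = 800e6                    # licensed carrier frequency (800 MHz)

theoretical_bps = carrier_hz / 2      # rule of thumb: bits at ~1/2 the frequency
realistic_bps = carrier_hz / 10       # noise and environment cut that to ~1/10
users_sharing = 20                    # assumed users sharing the channel at peak

per_user_bps = realistic_bps / users_sharing

print(f"Theoretical channel rate: {theoretical_bps / 1e6:.0f} Mbps")   # 400 Mbps
print(f"Realistic channel rate:   {realistic_bps / 1e6:.0f} Mbps")     # 80 Mbps
print(f"Per-user rate at peak:    {per_user_bps / 1e6:.0f} Mbps")      # 4 Mbps
```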

So getting back to the focus of the article, we’ve roughly worked out a realistic cap on your super-cool new 4G wireless device at four megabits. By today’s standards, this is a pretty fast connection, but remember this is a conservative, benefit-of-the-doubt best case. Wireless providers are now talking about quota usage and charging severely for overages, which suggests they must be teetering on gridlock with their data networks already. There is limited frequency real estate and high demand for content data services, and this demand is only likely to grow as more and more users adopt mobile wireless technologies.

So where should you look for the fastest and most reliable connection? Well, there’s a good chance it’s right at home. A standard fiber connection, like the one you likely have with your home network, can go much higher than four megabits. However, as with the channel sharing found with wireless, you must also share the main line coming into your central office with other users. But assuming your cable operator runs a point-to-point fiber line from their office to your home, gigabit speeds would certainly be possible, and thus wired connections to your home will always be faster than the frequency limited devices of wireless.

Related Article: Commentary on Verizon quotas

Interesting side note: in this article by Deloitte, they do not mention frequency spectrum as a limiting factor to growth.

How Does Your ISP Actually Enforce Your Internet Speed?


By Art Reisman, CTO, www.netequalizer.com

Have you ever wondered how your ISP manages to control the speed of your connection? If so, you might find the following article enlightening. Below, we’ll discuss the various techniques used to enforce bandwidth rate limits, the trade-offs between them, and the side effects of using each.

Dropping Packets (Cisco term “traffic policing”)

One of the simplest methods for a bandwidth controller to enforce a rate cap is by dropping packets. When using the packet-dropping method, the bandwidth controlling device counts the total number of bytes that cross a link during a second. If the target rate is exceeded during any single second, the bandwidth controller drops packets for the remainder of that second. For example, if the bandwidth limit is 1 megabit per second, and the bandwidth controller counts 1 million bits in the first half of a second, it will drop packets for the remainder of that second. The counter then resets for the next second. From the evidence we have observed, the rate caps enforced by many ISPs use the packet-dropping method, as it is the least expensive method and is supported on most basic routers.
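Here is a minimal sketch of that idea in Python. It is a hypothetical illustration of the counting logic, not any vendor's actual implementation: count the bytes seen in the current one-second window, drop everything once the cap is exceeded, and reset the counter when the next second begins.

```python
import time

class SimplePolicer:
    """Drop packets once the per-second byte budget is spent (traffic policing)."""

    def __init__(self, rate_bps):
        self.budget = rate_bps // 8          # bytes allowed per one-second window
        self.window_start = time.monotonic()
        self.bytes_seen = 0

    def allow(self, packet_len):
        now = time.monotonic()
        if now - self.window_start >= 1.0:   # new second: reset the counter
            self.window_start = now
            self.bytes_seen = 0
        self.bytes_seen += packet_len
        return self.bytes_seen <= self.budget    # False means "drop this packet"

# Example: a 1 Mbps cap allows roughly 125,000 bytes per one-second window.
policer = SimplePolicer(rate_bps=1_000_000)
forwarded = [policer.allow(1500) for _ in range(200)]
print(f"{forwarded.count(True)} of {len(forwarded)} packets forwarded this second")
```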

So, what is wrong with dropping packets to enforce a bandwidth cap?

Well, when a link hits a rate cap and packets are dropped en masse, it can wreak havoc on a network. For example, the standard reaction of a Web browser when it perceives that traffic is being lost is to re-transmit the lost data. For a better understanding of dropping packets, let’s use the analogy of a McDonald’s fast food restaurant.

Suppose the manager of the restaurant was told his bonus was based on making sure there was never a line at the cash register. So, whenever somebody showed up to order food when all registers were occupied, the manager would open a trap door, conveniently ejecting the customer back out into the parking lot. The customer, being extremely hungry, will come running back in the door (unless of course they die of starvation or get hit by a car) only to be ejected again. To make matters worse, let’s suppose a busload of school kids arrives. As the kids file into the McDonald’s, the ones remaining on the bus have no idea their classmates inside are getting ejected, so they keep streaming into the McDonald’s. Hopefully, you get the idea.

Well, when bandwidth shapers deploy packet-dropping technology to enforce a rate cap, you can get the same result seen in the McDonald’s trapdoor analogy. Web browsers and other user-based applications will beat their heads into the wall when they don’t get responses from their counterparts on the other end of the line. When packets are being dropped en masse, the network tends to spiral out of control until all the applications essentially give up. Perhaps you have seen this behavior while staying at a hotel with an underpowered Internet link: your connectivity alternates between working and hanging up completely for a minute or so during busy hours. This can obviously be very maddening.

The solution to shaping bandwidth on a network without causing gridlock is to implement queuing.

Queuing Packets (Cisco term “traffic shaping”)

Queuing is the art of putting something in a line and making it wait before continuing on. Obviously, this is what fast food restaurants actually do. They plan to have enough staff on hand to handle the average traffic throughout the day, and then queue up their customers when they arrive at a faster rate than orders can be filled. The assumption with this model is that at some point during the day the McDonald’s will get caught up with the number of arriving customers and the lines will shrink away.

Another benefit of queuing is that customers driving by can estimate the wait time when they see the long line extending out into the parking lot, and thus save their energy and not attempt to go inside.

But, what happens in the world of the Internet?

With queuing methods implemented, a bandwidth controller looks at the data rate of the incoming packets, and if it is deemed too fast, it will delay the packets in a queue. The packets will eventually get to their destination, albeit somewhat later than expected. Packets in the queue can pile up very quickly, and without some help, the link would saturate. The computer memory used to store the packets in the queue would also fill up and, much like the scenario mentioned above, the packets would eventually get dropped if they continued to come in at a faster rate than they were sent out.
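For contrast with the policer above, here is a minimal shaping sketch in Python, again a hypothetical illustration rather than a vendor implementation: instead of dropping packets that arrive too fast, each packet is held just long enough that the outgoing stream never exceeds the target rate, and drops only happen when the queue itself overflows.

```python
import time
from collections import deque

class SimpleShaper:
    """Delay packets so the output never exceeds rate_bps (traffic shaping)."""

    def __init__(self, rate_bps, max_queue=1000):
        self.rate_bps = rate_bps
        self.max_queue = max_queue        # finite memory: beyond this we must drop
        self.queue = deque()
        self.next_send = time.monotonic()

    def enqueue(self, packet):
        if len(self.queue) >= self.max_queue:
            return False                  # queue full: fall back to dropping
        self.queue.append(packet)
        return True

    def send_loop(self, transmit):
        while self.queue:
            packet = self.queue.popleft()
            now = time.monotonic()
            if now < self.next_send:
                time.sleep(self.next_send - now)    # this pause is the queuing delay
            send_time = max(now, self.next_send)
            transmit(packet)
            # Space the next departure so the long-run rate stays at rate_bps.
            self.next_send = send_time + len(packet) * 8 / self.rate_bps

# Example: shape 20 full-size packets to 1 Mbps and "transmit" by printing.
shaper = SimpleShaper(rate_bps=1_000_000)
for _ in range(20):
    shaper.enqueue(b"x" * 1500)
shaper.send_loop(lambda pkt: print(f"sent {len(pkt)} bytes"))
```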

TCP to the Rescue (keeping queuing under control)

Most Internet applications use a protocol called TCP (Transmission Control Protocol) to handle their data transfers. TCP has built-in intelligence to figure out the speed of the link it is sending data over and to make adjustments. When the NetEqualizer bandwidth controller queues a packet or two, the TCP stacks on the customer end-point computers will sense the slower packets and back off the speed of the transfer. With just a little bit of queuing, the sender slows down a bit and packet drops can be kept to a minimum.
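The reason a little added queuing delay slows a sender is that a TCP sender's throughput is roughly bounded by its window size divided by the round-trip time. A quick illustration of that relationship (the window size and delay values below are made-up examples, not measurements):

```python
# Rough TCP throughput bound: throughput ~= window_size / round_trip_time.
window_bytes = 64 * 1024        # example: a 64 KB receive window

for rtt_ms in (20, 40, 80):     # added queuing delay inflates the round-trip time
    throughput_mbps = window_bytes * 8 / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:3d} ms -> roughly {throughput_mbps:5.1f} Mbps")
```

Doubling the round-trip time cuts the achievable rate roughly in half, which is exactly the gentle back-pressure a queuing shaper relies on.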

Queuing Inside the NetEqualizer

The NetEqualizer bandwidth shaper uses a combination of queuing and dropping packets to get speed under control. Queuing is the first option, but when a sender does not eventually back off, its packets will get dropped. For the most part, this combination of queuing and dropping works well.

So far we have been describing the simple case of a single sender and a single queue, but what happens if you have a gigabit link with 10,000 users and you want to break off 100 megabits to be shared by 3,000 of them? How would a bandwidth shaper accomplish this? This is another area where a well-designed bandwidth controller like the NetEqualizer separates itself from the crowd.

In order to provide smooth shaping for a large group of users sharing a link, the NetEqualizer does several things in combination.

  1. It keeps track of all streams, and based on their individual speeds, the NetEqualizer will use different queue delays on each stream.
  2. Streams that back off will get minimal queuing.
  3. Streams that do not back off may eventually have some of their packets dropped.

The net effect of the NetEqualizer queuing intelligence is that all users will experience steady response times and smooth service.
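To make this concrete, here is a simplified per-stream sketch in Python. It illustrates the general technique described above and is not the NetEqualizer's actual code: each stream's measured rate is tracked, streams that stay above their fair share accumulate a growing queue delay, and a stream that never backs off eventually sees drops.

```python
from collections import defaultdict

class StreamShaper:
    """Illustrative per-stream shaping: more delay for streams that stay aggressive."""

    def __init__(self, fair_rate_bps, max_delay_ms=200):
        self.fair_rate_bps = fair_rate_bps
        self.max_delay_ms = max_delay_ms
        self.delay_ms = defaultdict(float)    # current added delay per stream

    def update(self, stream_id, measured_rate_bps):
        """Called periodically per stream; returns (queue_delay_ms, drop_packets)."""
        if measured_rate_bps > self.fair_rate_bps:
            # Aggressive stream: increase its queue delay a little each interval.
            self.delay_ms[stream_id] = min(self.delay_ms[stream_id] + 10,
                                           self.max_delay_ms)
        else:
            # Stream has backed off: relax its delay toward zero.
            self.delay_ms[stream_id] = max(self.delay_ms[stream_id] - 10, 0)

        # A stream still over the rate at maximum delay gets some packets dropped.
        drop = (self.delay_ms[stream_id] >= self.max_delay_ms
                and measured_rate_bps > self.fair_rate_bps)
        return self.delay_ms[stream_id], drop

# Example: two streams sharing a link where 1 Mbps per stream is considered fair.
shaper = StreamShaper(fair_rate_bps=1_000_000)
print(shaper.update("bulk-download", measured_rate_bps=5_000_000))
print(shaper.update("email",         measured_rate_bps=50_000))
```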

Notes About UDP and Rate Limits

Some applications such as video do not use TCP to send data. Instead, they use a “send-and-forget” mechanism called UDP, which has no built-in back-off mechanism. Without some higher intelligence, UDP packets will continue to be sent at a fixed rate, even if the packets are coming too quickly for the receiver.  The good news is that even most UDP applications also have some way of measuring if their packets are getting to their destination. It’s just that with UDP, the mechanism of synchronization is not standardized.

Finally, there are those applications that just don’t care whether their packets reach their destination. Speed tests and viruses send UDP packets as fast as they can, regardless of whether the network can handle them or not. The only way to enforce a rate cap with such ill-mannered applications is to drop their packets.

Hopefully this primer has given you a good introduction to the mechanisms used to enforce Internet speeds, namely dropping packets and queuing. And maybe you will think about this the next time you visit a fast food restaurant during their busy time…

Burstable Internet Connections — Are They of Any Value?


A burstable Internet connection conjures up the image of a super-charged Internet reserve, available at your discretion in a moment of need, like pushing the gas pedal to the floor to pass an RV on a steep grade. Americans find comfort in knowing they have that extra horsepower at their disposal. The promise of power is ingrained in our psyche and is easily tapped into when marketing an Internet service. However, if you stop for a minute and think about what a bandwidth burst actually is, it might not be a feature worth paying for.

Here are some key questions to consider:

  • Is a burst one second, 10 seconds, or 10 hours at a time? This might seem like a stupid question, but it is at the heart of the issue. What good is a 1-second burst if you are watching a 20-minute movie?
  • If it is 10 seconds, then how long do I need to wait before it becomes available again?
  • Is it available all of the time, or just when my upstream provider(s) circuits are not busy?
  • And overall, is the burst really worth paying for? Suppose the electric company told you that you had a burstable electric connection, or that your water pressure fluctuated upward for a few seconds at random throughout the day. Would that be a feature worth paying for? Just because it’s offered doesn’t necessarily mean it’s needed or even that advantageous.

While the answers to each of these questions will ultimately depend on the circumstances, they all serve to point out a potential fallacy in the case for burstable Internet speeds: without a precise definition, the promise of a burst is a meaningless claim. Perhaps there are providers out there that lay out exact definitions for a burstable connection and abide by those terms. Even then, we could argue that the value of the burst is limited.

What we have seen in practice is that most burstable Internet connections are unpredictable and simply confuse and annoy customers. Unlike the turbo charger in your car, you have no control over when you can burst and when you can’t. What sounded good in the marketing literature may have little practical value without a clear contract of availability.

Therefore, to ensure that burstable Internet speeds really will work to your advantage, it’s important to ask the questions mentioned above. Otherwise, it very well may just serve as a marketing ploy or extra cost with no real payoff in application.

Update: October 1, 2009

Today a user group published a bill of rights in order to nail ISPs down on exactly what they are providing in their service contracts, including their claims of bandwidth speed.

I noticed that in the article, the bill of rights requires full disclosure about the speed of the provider's link to the consumer's modem. I am not sure this is enough to guarantee a fixed minimum speed to the consumer. You see, a provider could quite easily oversell the capacity at their switching point, the point where they hook up to a backbone of other providers. You cannot completely regulate speed across the Internet, since by design providers hand off or exchange traffic with other providers. Your provider cannot control the speed of your connection once it is off their network.

Posted by Eli Riles, VP of Sales, www.netequalizer.com.

The Real Killer Apps and What You Can Do to Stop Them from Bringing Down Your Internet Links


When planning a new network, or when diagnosing a problem on an existing one, a common question that’s raised concerns the impact that certain applications may have on overall performance. In some cases, solving the problem can be as simple as identifying and putting an end to (or just cutting back) the use of certain bandwidth-intensive applications. So, the question, then, is what applications may actually be the source of the problem?

The following article identifies and breaks down the applications that will most certainly kill your network, and also provides suggestions as to what you can do about them. While not every application is covered, our experience working with network administrators around the world has helped us identify the most common problems.

The Common Culprits

YouTube Video (standard video) — On average, a sustained 10-minute YouTube video will consume about 500 kbps over its duration. Most video players try to buffer ahead, storing the video locally as fast as your network can take it. On a shared network, this has the effect of bringing everything else on your network to its knees. This may not be a problem if you are the only person using the Internet link, but in today’s businesses and households, that is rarely the case.

For more specifics about YouTube consumption, see these other YouTube articles.

Microsoft Service-Pack Downloads — Updates such as Microsoft service packs use file transfer protocol (FTP). Generally, this protocol will use as much bandwidth as it can find. The end result is that your VoIP phone may lock up, your videos will become erratic, and Web surfing will slow to a crawl.

Keeping Your Network Running Smoothly While Handling Killer Apps

There is no magic pill that can give you unlimited bandwidth, but each of the following solutions may help. However, they often require trade-offs.

  1. The obvious solution is to communicate with other members of your household or business when using bandwidth-intensive applications. This is not always practical, but if other users agree to change their behavior, it’s usually a surefire solution.
  2. Deploy a fairness device to smooth out those rough patches during contentious busy hours — Yes, this is the NetEqualizer News blog, but with all bias aside, these types of technologies often work great. If you are in an office sharing an Internet feed with various users, the NetEqualizer will keep aggressive bandwidth users from crowding others out. No, it cannot create additional bandwidth on your pipe, but it will eliminate the gridlock caused by your colleague in the next cubicle downloading a Microsoft service pack. Yes, there are other devices on the market that can enforce fairness, but the NetEqualizer was specifically designed for this mission. And, with a starting price of around $1,400, it is a product small businesses can invest in to avoid longer-term costs (see option 3).
  3. Buy more bandwidth — In most cases, this is the most expensive of the different solutions in the long term and should usually be a last resort. This is especially true if the problems are largely caused by recreational Internet use on a business network. However, if the bandwidth-intensive activities are a necessary part of your operation, and they can’t afford to be regulated by a fairness device, upgrading your bandwidth may be the only long-term solution. But, before signing the contract, be sure to explore options one and two first.

As mentioned, not every network-killing application is discussed here, but this should head you in the right direction in identifying the problem and finding a solution. For a more detailed discussion of this issue, visit the links below.

  • For a  more detailed discussion on how much bandwidth specific applications consume, click here.
  • For a set of detailed tips/tricks on making your Internet run faster, click here.
  • For an in-depth look at more complex methods used to mitigate network congestion on a WAN or Internet link, click here.

Speeding up Your T1, DS3, or Cable Internet Connection with an Optimizing Appliance


By Art Reisman, CTO, APconnections (www.netequalizer.com)

Whether you are a home user or a large multinational corporation, you likely want to get the most out of your Internet connection. In previous articles, we have briefly covered using Equalizing (Fairness) as a tool to speed up your connection without purchasing additional bandwidth. In the following sections, we’ll break down exactly how this is accomplished in layman’s terms.

First, what is an optimizing appliance?

An optimizing appliance is a piece of networking equipment that has one Ethernet input and one Ethernet output. It is normally located between the router that terminates your Internet connection and the users on your network. From this location, all Internet traffic must pass through the device. When activated, the optimizing appliance can rearrange traffic loads for optimal service, thus preventing the need for costly new bandwidth upgrades.

Next, we’ll summarize equalizing and behavior-based shaping.

Overall, equalizing is a simple concept. It is the art of looking at the usage patterns on the network and, when things get congested, robbing from the rich to give to the poor. In other words, heavy users are limited in the amount of bandwidth to which they have access in order to ensure that ALL users on the network can utilize the network effectively. Rather than writing hundreds of rules to specify allocations for specific traffic, as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

How is Fairness implemented?

If you have multiple users sharing your Internet trunk and somebody mentions “fairness,” it probably conjures up the image of each user waiting in line for their turn. And while a device that enforces fairness in this way would certainly be better than doing nothing, Equalizing goes a few steps further than this.

We don’t just divide the bandwidth equally like a “brain dead” controller. Equalizing is a system of dynamic priorities that rewards smaller users at the expense of heavy users. It is very, very dynamic, and there is no pre-set limit on any user. In fact, the NetEqualizer does not keep track of users at all. Instead, we monitor user streams. So, a user may have one stream (an FTP download) slowed down while at the same time having another stream (e-mail) untouched.

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.
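As an illustration of the connection side of behavior-based shaping, a generic sketch rather than the NetEqualizer's actual algorithms, a shaper can count how many active connections each internal host has open and flag hosts whose counts look like p2p behavior:

```python
from collections import Counter

# Hypothetical list of (source_ip, dest_ip, dest_port) tuples for active flows;
# in practice this would come from the device's connection tracking table.
active_flows = [
    ("10.0.0.5", "93.184.216.34", 443),
    ("10.0.0.5", "151.101.1.69", 443),
    ("10.0.0.9", "198.51.100.7", 6881),
]

CONNECTION_LIMIT = 50   # assumed threshold; a real deployment would tune this

connections_per_host = Counter(src for src, _, _ in active_flows)

for host, count in connections_per_host.items():
    if count > CONNECTION_LIMIT:
        print(f"{host} has {count} connections open -- possible p2p abuse")
```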

What is the result?

The end result is that applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority, while large downloads and p2p receive lower priority. Also, situations where we cut back large streams are generally short in duration. As an added advantage, this behavior-based shaping does not need to be updated constantly as applications change.

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned about accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception that we run into is video, and the NetEqualizer has a low-level routine that allows you to give overriding priority to a specific server on your network, hence solving the problem. The NetEqualizer also has a feature whereby you can exempt and give priority to any IP address in the event that a large stream such as video must be given priority.

Through the implementation of Equalizing technology, network administrators are able to get the most out of their network. Users of the NetEqualizer are often surprised to find that their network problems were not a result of a lack of bandwidth, but rather a lack of bandwidth control.

See who else is using this technology.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

APconnections Releases NetEqualizer for Small Business and WISP Market


LAFAYETTE, Colo., April 13 /PRNewswire/ -- APconnections (http://www.netequalizer.com),
a leading supplier of plug-and-play bandwidth shaping products,
today announced the release of its newest NetEqualizer model,
developed specifically with WISPs and small business users in mind.

This newest NetEqualizer release easily handles up to 10 megabits of traffic and up to 100 users, allowing room for expansion as demand grows. Furthermore, in addition to offering all standard NetEqualizer features, this smaller model is powered via Power over Ethernet (PoE), providing administrators greater flexibility in placing the unit within their network.

The model was developed to meet a growing demand both for an affordable traffic shaping device to help small businesses run VoIP concurrent with data traffic over their Internet link as well as a need for a shaping unit with PoE for the WISP market.

In a large wireless network, congestion often occurs at tower locations. However, with a low-cost PoE version of the NetEqualizer, wireless providers can now afford to have advanced bandwidth control at or near their access distribution points.

“About half of wireless network slowness comes from p2p (Bit Torrent) and video users overloading the access points,” said Joe D’Esopo, vice president of business development at APconnections. “We have had great success with our NE2000 series, but the price point of $2,500 was a bit too high to duplicate all over the network.”

For a small- or medium-sized office with a hosted VoIP PBX solution, the NetEqualizer is one of the few products on the market that can provide QoS for VoIP over an Internet link. And now, with volume pricing approaching $1,000, the NetEqualizer will help revolutionize the way offices use their Internet connection.

Pricing for the new model will be $1,200 for existing NetEqualizer users and $1,499 for non-customers purchasing their first unit. However, the price for subsequent units will be $1,200 for users and nonusers alike.

The NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology gives priority to latency sensitive applications, such as VoIP and email. It does it all dynamically and automatically, improving on other available bandwidth shaping technology. It controls network flow for the best WAN optimization.

APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado.

Full Article