Wireless Network Supercharger 10 Times Faster?


By Art Reisman

CTO – http://www.netequalizer.com

I just reviewed this impressive article:

  • David Talbot reports in MIT's Technology Review that "Academic researchers have improved wireless bandwidth by an order of magnitude… by using algebra to banish the network-clogging task of resending dropped packets."

Unfortunately, I do not have enough details to explain the specific breakthrough claims in the article. However, drawing on some existing background and analogies, I can explain why there is room for improvement.

What follows is a general explanation of why there is room for a better method of data correction, and for the elimination of retries, on a wireless network.

First off, we need to cover the effects of missing wireless packets and why they happen.

In a wireless network, the sender transmits a series of ones and zeros using a carrier frequency. Think of it like listening to your radio, except that instead of hearing a person talking, all you hear is a series of beeps and silence. In the case of a wireless network transmission, though, the beeps come so fast that you could not possibly hear the difference between beep and silence. The good news is that a wireless receiver not only hears the beeps and silence, it interprets them as binary ones and zeros and assembles them into a packet.

The problem with this form of transmission is that wireless frequencies are subject to many uncontrolled variables that affect reliability. It would not be all that bad if carriers were not constantly pushing the envelope. Advertised speeds are based on a best-case signal, where the provider needs to cram as many bits into the frequency window in the shortest time possible. There is no margin for error. With thousands of bits typically in a packet, it only takes a few of them being misinterpreted for the whole packet to be lost and require re-transmission.

The normal way to tell whether a packet is good or bad is a technique called a checksum. Basically, the receiver counts the incoming bits and totals them up as they arrive. Everything in this dance is based on timing. The receiver listens to each time slot; if it hears a beep it increments a counter, and if it hears silence it does not. At the end of a prescribed time, it totals the bits received and compares that total to a separate sum that is also transmitted. I am oversimplifying this process a bit, but think of it like two guys sending box cars full of chickens back and forth on a blind railroad with no engineers, sort of rolling them downhill to each other.
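
The bit-counting check described above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration; real wireless links use CRC-style frame checks rather than a plain sum of bits:

```python
# Minimal sketch of the bit-counting checksum idea (simplified; real
# wireless frames use CRCs, not a plain sum of the 1-bits).

def count_ones(bits):
    """Receiver's running total: count the 1-bits heard, one per time slot."""
    return sum(bits)

def packet_ok(bits, transmitted_sum):
    """Compare the receiver's count against the sum sent alongside the packet."""
    return count_ones(bits) == transmitted_sum

# The sender transmits the bits plus their count.
payload = [1, 0, 1, 1, 0, 1]
sent_sum = count_ones(payload)          # 4

# A clean reception passes the check...
assert packet_ok([1, 0, 1, 1, 0, 1], sent_sum)

# ...but a single misheard bit fails it, forcing a re-transmission.
assert not packet_ok([1, 0, 0, 1, 0, 1], sent_sum)
```

Note that this simple sum cannot catch two compensating errors (a 1 heard as 0 and a 0 heard as 1 in the same packet), which is one reason real links use stronger checks.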

Guy 1 sends three box cars full of chickens to Guy 2, and then a fourth box car with a note saying, "Please tell me if you got three box cars full of chickens, and also confirm there were 100 chickens in each car," and then he waits for confirmation back from Guy 2.

Guy 2 gets two box cars full of chickens and the note, reads the note, and realizes he only got two of the three, and that a couple of chickens were missing from one of the box cars. So he sends a note back to Guy 1 that says, "I did not get three box cars of chickens, just two, and some of the chickens were missing; they must have escaped."

The note arrives for Guy 1, and he re-sends a new box car to make up for the missing chickens, along with a new note telling Guy 2 that he re-sent a box car with make-up chickens.

I know this analogy of two guys sending chickens blindly in box cars with confirmation notes sounds somewhat silly and definitely inefficient, but it serves to explain just how inefficient wireless communications can get with re-sends, especially when bits are lost in transmission. Sending bits through the airwaves can quickly become a quagmire if conditions are not perfect and bits start getting lost.

The MIT team has evidently found a better way to confirm and ensure the transmission of data. As I have pointed out in countless articles about how congestion control speeds up networks, there is great room for improvement if you can eliminate the inefficiency of retries on a wireless network. I don't doubt that claims of 10-fold increases in actual data transmitted and received can be achieved.
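
To get a feel for how algebra can replace re-sends, here is a toy illustration. This is my own simplified sketch, not the MIT team's actual method: their work uses random linear network coding over finite fields, of which XOR parity below is the most basic special case. The idea is that the sender transmits one extra "coded" packet, and the receiver can reconstruct any single lost packet itself instead of asking for a re-transmission:

```python
# Toy illustration of coded transmission (hypothetical sketch; the MIT work
# uses random linear network coding, of which XOR parity is the simplest case).

def xor_bytes(a, b):
    """Combine two equal-length packets bit by bit."""
    return bytes(x ^ y for x, y in zip(a, b))

# The sender has three data packets and also sends one coded parity packet.
p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_bytes(xor_bytes(p1, p2), p3)

# Suppose p2 is lost in transit. Instead of the note-and-resend dance from
# the chicken analogy, the receiver recombines what it has with the parity.
recovered = xor_bytes(xor_bytes(p1, p3), parity)
assert recovered == p2
```

One extra packet up front replaces a full round trip of "please re-send," which is where the efficiency gain comes from when losses are common.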

Hotel Property Managers Should Consider Generic Bandwidth Control Solutions


Editor's Note: The following Hotelsmag.com article caught my attention this morning. The hotel industry is now seriously starting to understand that it needs some form of bandwidth control. However, many hotel bandwidth control solutions are custom marketed, which perhaps puts their economy of scale at a competitive disadvantage. The NetEqualizer bandwidth controller, as well as our competitors, crosses many market verticals, offering hotels an effective solution without the niche-market costs. For example, in addition to the numerous other industries in which the NetEqualizer is being used, some of our hotel customers include: The Holiday Inn Capital Hill, a prominent Washington DC hotel; The Portola Plaza Hotel and Conference Center in Monterey, California; and the Hotel St. Regis in New York City.

For more information about the NetEqualizer, or to check out our live demo, visit www.netequalizer.com.

Heavy Users Tax Hotel Systems: Hoteliers and IT Staff Must Adapt to a New Reality of Extreme Bandwidth Demands

By Stephanie Overby, Special to Hotels — Hotels, 3/1/2009

The tweens taking up the seventh floor are instant-messaging while listening to Internet radio and downloading a pirated version of “Twilight” to watch later. The 200-person meeting in the ballroom has a full interactive multimedia presentation going for the next hour. And you do not want to know what the businessman in room 1208 is streaming on BitTorrent, but it is probably not a productivity booster.

To keep reading, click here.

How Much YouTube Can the Internet Handle?


By Art Reisman, CTO, http://www.netequalizer.com 


As the Internet continues to grow and real-world speeds increase, video sites like YouTube are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), YouTube videos do not face the copyright scrutiny that caused most p2p users to back off.

In our experience, there are trade-offs associated with the advancements in technology that have come with YouTube. From measurements done in our NetEqualizer laboratories, a typical normal-quality YouTube video needs about 240 kbps sustained over its 10-minute run time. The newer high-definition videos run at a rate at least twice that.

Many of the rural ISPs that we at NetEqualizer support with our bandwidth shaping and control equipment have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio at which these small businesses can turn a profit. Given this contention ratio, if 40 customers simultaneously run YouTube, the link will be exhausted and all 300 customers will be wishing they had their dial-up back. At last check, YouTube traffic accounted for 10 percent of all Internet traffic. If left completely unregulated, a typical rural ISP could already find itself on the brink of saturation from normal YouTube usage. Tier-1 providers in major metro areas usually have more bandwidth, but with that comes higher expectations of service, so some saturation is inevitable.
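
The saturation figure is easy to check with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check of the saturation point on a shared rural link.

link_mbps = 10.0      # shared 10-megabit link
stream_kbps = 240.0   # sustained rate of one normal-quality YouTube video

# How many simultaneous streams fill the pipe?
max_streams = link_mbps * 1000 / stream_kbps
print(round(max_streams, 1))   # about 41.7, so roughly 40 viewers saturate it
```

So with 300 subscribers on the link, only about one in seven needs to be watching a video at once before everyone's traffic grinds to a halt.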

If you believe there is a conspiracy, or that ISPs are not supposed to profit as they take risk and operate in a market economy, you are entitled to your opinion, but we are dealing with reality. And there will always be tension between users and their providers, much the same as there is with government funds and highway congestion. 

The fact is that all ISPs have a fixed amount of bandwidth they can deliver, and when data flows exceed their current capacity, they are forced to implement some form of passive constraint. Without such constraints, many networks would lock up completely. This is no different than a city restricting water usage when reservoirs are low. Water restrictions are well understood by the populace, and yet somehow bandwidth allocations and restrictions are perceived as evil. I believe this misconception is simply due to the fact that bandwidth is so dynamic; if there were a giant reservoir of bandwidth pooled up in the mountains where you could watch the resource slowly become depleted, the problem could be more easily visualized.

The best compromise offered, and the only compromise that is not intrusive, is bandwidth rationing at peak hours when needed. Without rationing, a network will fall into gridlock, in which case not only do YouTube videos come to a halt, but so do e-mail, chat, VoIP, and other less intensive applications.
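
One common way to implement this kind of rationing is a per-user rate cap, for example a token bucket. The sketch below is a generic illustration of the technique, not NetEqualizer's actual behavior-shaping algorithm:

```python
import time

class TokenBucket:
    """Minimal per-user rate limiter: allow bursts up to `capacity` bytes,
    refilling at `rate` bytes per second. A generic sketch of rationing,
    not NetEqualizer's actual mechanism."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes):
        # Refill tokens for the time elapsed since the last check.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False   # traffic is held back until tokens refill

# Cap a user at 30 KB/s with a 15 KB burst allowance.
bucket = TokenBucket(rate=30_000, capacity=15_000)
assert bucket.allow(10_000)       # within the burst allowance
assert not bucket.allow(10_000)   # bucket nearly empty; this burst must wait
```

The appeal of rationing like this is that light users (e-mail, chat, VoIP) never notice the cap, while sustained heavy streams are smoothed out during peak hours.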

There is some good news: there are alternative ways to watch YouTube videos.

We noticed during our testing that YouTube attempts to play back video as a real-time feed, like watching live TV. When you go directly to YouTube to watch a video, the site and your PC immediately start the video, and the quality becomes dependent on having that 240 kbps. If your provider's speed dips below this level, your video will begin to stall, which is very annoying. However, if you are willing to wait a few seconds, there are tools out there that will play back YouTube videos for you in non-real-time.

Buffering Tools

They accomplish this by pre-buffering before the video starts playing. We have not reviewed any of these tools, so do your research. We suggest you google "YouTube buffering tools" to see what is out there. Not only do these tools smooth out YouTube playback during peak times or on slower connections, but they also help balance the load on the network during peak times.
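
The pre-buffering idea itself is simple: let the download run ahead of playback, and only start playing once a cushion of a few seconds is already on disk. A hypothetical sketch, with the threshold and bitrate chosen for illustration:

```python
# Sketch of the pre-buffering idea (hypothetical; real tools differ).
# Playback starts only once `threshold` seconds of video are downloaded.

def seconds_buffered(bytes_downloaded, bitrate_kbps):
    """Convert downloaded bytes into seconds of playable video."""
    return bytes_downloaded * 8 / (bitrate_kbps * 1000)

def ready_to_play(bytes_downloaded, bitrate_kbps=240, threshold=10):
    return seconds_buffered(bytes_downloaded, bitrate_kbps) >= threshold

# At 240 kbps, ten seconds of video is 300,000 bytes.
assert not ready_to_play(200_000)   # keep buffering
assert ready_to_play(300_000)       # safe to start playback
```

The cushion absorbs the moments when the connection dips below the video's bitrate, which is exactly when a direct real-time feed would stall.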

Bio: Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably.

Tips for testing your internet speed


Five tips for testing your network speed

By Eli Riles

Eli Riles is a retired insurance agent from New York. He is a self-taught expert in network infrastructure. He spends half the year traveling and visiting remote corners of the earth. The other half of the year you'll find him in his computer labs, testing and tinkering with the latest network technology. For questions or comments, please contact him at eliriles@yahoo.com.

In the United States, there are no rules governing truth in bandwidth claims, at least none that we are aware of. Just imagine if, every time you went to a gas station, the meters were adjusted to exaggerate the amount of fuel pumped, or the gas contained inert additives. Most consumers count on the fact that state and federal regulators monitor your local gas station to ensure that a gallon is a gallon and the fuel is not a mixture of water and rubbing alcohol.

Unfortunately, in the Internet service provider world, there is no regulation at this time. So it is up to you, the consumer, to ensure you are getting what you are paying for.

Network operators deploy an array of strategies to make their service seem faster than others. The most common technique is to simply oversell the amount of bandwidth they can actually handle and hope that not all users are active at one time.

It is up to the consumer, who often has a choice of service providers (satellite, cable, phone company, wireless operator, etc.), to ensure they are getting what they are paying for.

We at Network Optimization News want to help you level the playing field, so here are some tips to use when testing your network speed.

1) Use a speed test site that transfers at least 10 megabits of data with each test.

Some providers will start slowing your speed after a certain amount of data is passed in a short period; the larger the file in the test, the better.


2) Repeat your tests with at least three different speed test sites.

Different speed test sites use different methods for passing data and results will vary.


3) Try not to use speed test sites recommended by your provider.

Or at least augment their recommended sites with other sites.

Enough said.

4) Run your tests during busy hours, typically between 5 and 9 p.m., and also try running them at different times for comparison.

Providers often have trouble delivering their top advertised speeds during busy hours.


5) Make sure you test your speed in both directions.

The test you use should upload as well as download information.
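
For the curious, what a speed test actually measures is just bytes moved over wall-clock time. A minimal sketch; the URL is a placeholder, so substitute a large test file from a speed test site of your choosing:

```python
import time
import urllib.request

def throughput_mbps(nbytes, seconds):
    """Convert a transfer into megabits per second."""
    return nbytes * 8 / 1_000_000 / seconds

def measure_download(url):
    """Time a full download and report the achieved rate."""
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    return throughput_mbps(len(data), time.monotonic() - start)

# Per tip 1, use a file of at least 10 megabits (1.25 MB). Placeholder URL:
# print(measure_download("http://example.com/testfile-10MB.bin"))
```

Running this against several different hosts, at several times of day, is exactly what tips 2 and 4 above amount to.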

To find the latest speed test sites on the network, we suggest you use a Google search with the terms:

“test my network speed”

Dig down deep in the list of results for more obscure sites.

Lastly, remember the grass is not always greener.  If you find your speeds are not always up to their advertised rates don’t be alarmed – the industry is not regulated in the US and speeds can vary for a variety of reasons. Your provider is likely doing the best job it can while trying to stay profitable.

Good Luck!

Eli Riles

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.
