By Art Reisman
CTO – http://www.netequalizer.com
I just reviewed this impressive article:
- David Talbot reports in MIT's Technology Review that “Academic researchers have improved wireless bandwidth by an order of magnitude… by using algebra to banish the network-clogging task of resending dropped packets.”
Unfortunately, I do not have enough detail to explain the specific breakthrough claims in the article. However, drawing on some existing background and a few analogies, I can explain why there is so much room for improvement.
What follows is a general explanation of why there is room for a better method of error correction and for the elimination of retries on a wireless network.
First off, we need to cover the effects of missing wireless packets and why they happen.
In a wireless network, the sender transmits a series of ones and zeros using a carrier frequency. Think of it like listening to your radio, except that instead of hearing a person talking, all you hear is a series of beeps and silence. In the case of a wireless network transmission, though, the beeps come so fast that you could not possibly hear the difference between a beep and the silence. The good news is that a wireless receiver not only hears the beeps and silence, it interprets them as binary ones and zeros and assembles them into a packet.
The problem with this form of transmission is that wireless frequencies are subject to many uncontrolled variables that can affect reliability. It would not be all that bad if carriers were not constantly pushing the envelope. Advertised speeds are based on a best-case signal, where the provider needs to cram as many bits into the frequency window in the shortest time possible. There is no margin for error. With thousands of bits in a typical packet, it takes only a few of them being misinterpreted for the whole packet to be lost and re-transmitted.
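To put a rough number on how little margin there is, here is a quick back-of-the-envelope sketch in Python. The bit error rate and packet size are made-up values for illustration, not figures from the article:

```python
# Back-of-the-envelope: how a tiny bit error rate turns into a large packet error rate.
# Both numbers below are illustrative assumptions, not measurements.

bit_error_rate = 1e-4   # assume 1 bit in 10,000 is misread
packet_bits = 12_000    # roughly a 1,500-byte packet

# A packet survives only if every single bit is heard correctly.
packet_survives = (1 - bit_error_rate) ** packet_bits
packet_lost = 1 - packet_survives

print(f"Chance any given packet is corrupted: {packet_lost:.0%}")  # about 70%
```

With those assumptions, roughly 70 percent of packets would arrive damaged and need to be re-sent, which gives you a feel for how quickly the overhead piles up.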
The normal way to tell if a packet is good or bad is a technique called a checksum. Basically, the receiver counts the incoming bits and totals them up as they arrive. Everything in this dance is based on timing. The receiver listens to each time slot; if it hears a beep it increments a counter, and if it hears silence it does not. At the end of a prescribed time, it totals the bits received and compares the total to a separate sum that is also transmitted. I am oversimplifying this process a bit, but think of it like two guys sending box cars full of chickens back and forth on a blind railroad with no engineers, sort of rolling them downhill to each other.
Guy 1 sends three box cars full of chickens to Guy 2, and then a fourth box car with a note saying, “Please tell me if you got three box cars full of chickens, and also confirm there were 100 chickens in each car,” and then he waits for confirmation back from Guy 2.
Guy 2 gets two of the box cars of chickens and the note, reads the note, and realizes he only got two of the three and that a couple of chickens were missing from one of the box cars. So he sends a note back to Guy 1 that says, “I did not get three box cars of chickens, just two, and some of the chickens were missing; they must have escaped.”
The note arrives, and Guy 1 sends a new box car to make up for the missing chickens, along with a new note telling Guy 2 that a replacement box car of make-up chickens is on the way.
I know this analogy of two guys blindly sending chickens in box cars with confirmation notes sounds silly and is obviously inefficient, but it illustrates just how wasteful wireless communication becomes once re-sends start piling up. Sending bits through the airwaves can quickly become a quagmire if conditions are not perfect and bits start getting lost.
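For readers who prefer code to chickens, here is a minimal sketch of that same send, check, confirm, and re-send cycle in Python. I have substituted a standard CRC for the simple bit count described above, and the noise model is a toy of my own; real wireless hardware does all of this in firmware:

```python
import random
import zlib

def to_bytes(bits):
    """Pack a list of 0/1 bits into bytes so a checksum can be run over them."""
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

def noisy_channel(bits, flip_probability=0.0001):
    """Randomly flip a few bits, mimicking interference on the carrier frequency."""
    return [b ^ 1 if random.random() < flip_probability else b for b in bits]

def send_until_confirmed(bits):
    """Re-send the whole packet until the receiver's checksum matches the sender's."""
    expected = zlib.crc32(to_bytes(bits))   # the 'note' that travels with the data
    attempts = 0
    while True:
        attempts += 1
        received = noisy_channel(bits)
        if zlib.crc32(to_bytes(received)) == expected:
            return attempts                 # receiver confirms; no more re-sends needed

packet = [random.randint(0, 1) for _ in range(12_000)]
print(f"Packet finally delivered after {send_until_confirmed(packet)} attempt(s)")
```

Every failed attempt in that loop is a box car of chickens sent for nothing, and every confirmation is a round trip the link has to wait for.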
The MIT team has evidently found a better way to confirm and ensure the transmission of data. As I have pointed out in countless articles about how congestion control speeds up networks, there is great room for improvement if you can eliminate the inefficiency of retries on a wireless network. I don’t doubt that claims of tenfold increases in actual data transmitted and received can be achieved.
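I cannot speak to the researchers’ actual algorithm, but the flavor of coding packets with algebra can be shown with the simplest possible example: send one extra packet that is the XOR of a group, and the receiver can rebuild any single lost packet from whatever did arrive, with no round trip and no re-send. This is my own toy illustration of the general idea, not the MIT method:

```python
from functools import reduce

def xor_packets(packets):
    """Combine equal-length packets byte by byte; an algebraic 'sum' of the group."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*packets))

# Sender: three data packets plus one coded "repair" packet.
data = [b"packet-one..", b"packet-two..", b"packet-three"]
repair = xor_packets(data)

# Channel: suppose the second packet never makes it.
received = [data[0], None, data[2]]

# Receiver: XOR the repair packet with everything that did arrive
# to reconstruct the missing one -- no confirmation note, no re-send.
survivors = [p for p in received if p is not None]
recovered = xor_packets(survivors + [repair])

assert recovered == data[1]
print("Recovered without retransmission:", recovered)
```

Real coding schemes use far more sophisticated combinations so that multiple losses can be repaired at once, but the payoff is the same: the round trips and re-sends that eat up wireless capacity largely disappear.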
Hotel Property Managers Should Consider Generic Bandwidth Control Solutions
March 1, 2009 — netequalizer
Editor's Note: The following Hotelsmag.com article caught my attention this morning. The hotel industry is now seriously starting to understand that it needs some form of bandwidth control. However, many hotel bandwidth-control solutions are marketed as custom, niche products, which perhaps puts their economies of scale at a competitive disadvantage. The NetEqualizer bandwidth controller, like our competitors' products, crosses many market verticals, offering hotels an effective solution without the niche-market costs. For example, in addition to the numerous other industries in which the NetEqualizer is being used, some of our hotel customers include: The Holiday Inn Capitol Hill, a prominent Washington, DC hotel; The Portola Plaza Hotel and Conference Center in Monterey, California; and the Hotel St. Regis in New York City.
For more information about the NetEqualizer, or to check out our live demo, visit www.netequalizer.com.
Heavy Users Tax Hotel Systems: Hoteliers and IT Staff Must Adapt to a New Reality of Extreme Bandwidth Demands
By Stephanie Overby, Special to Hotels — Hotels, 3/1/2009
The tweens taking up the seventh floor are instant-messaging while listening to Internet radio and downloading a pirated version of “Twilight” to watch later. The 200-person meeting in the ballroom has a full interactive multimedia presentation going for the next hour. And you do not want to know what the businessman in room 1208 is streaming on BitTorrent, but it is probably not a productivity booster.
To keep reading, click here.