You heard it here first: our prediction on how video will evolve to conserve bandwidth


Editor's Note:

I suspect somebody out there has already thought of this, but in my quick Internet search I could not find any references to this specific idea, so I am claiming unofficial journalistic first rights to it.

The best example I can think of to illustrate efficiency in video is the old style of cartoon, parodied by South Park. If you ever watch South Park, the production is deliberately cheesy: very few moving parts against fixed backgrounds. In South Park's case, the intention is obviously not to save production costs; the cheap animation is part of the comedy. That was not always the case. This sort of stop-motion-style cartoon evolved in the early days, before computer animation took over the work of human artists drawing frame by frame. The fewer moving parts in a scene, the less work for the animator. They could re-use existing drawings of a figure and just change the orientation of the mouth, through perhaps three positions, to animate talking.

Modern video compression tries to take advantage of some of the inherent static data from one image to the next, so that each new frame is transmitted with less information. At best, this is a hit-or-miss proposition. There are likely many frivolous moving parts in a background that, on the small screen of a handheld device, are simply not necessary.
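To make the inter-frame idea concrete, here is a minimal Python sketch of delta encoding, with made-up pixel values: only the pixels that change between frames are stored, so a mostly static scene costs very little to send. Real codecs are far more sophisticated (motion estimation, transforms, quantization), but the principle is the same.

```python
# Minimal sketch of inter-frame (delta) encoding: store only the pixels
# that changed since the previous frame. Frame data here is hypothetical.

def encode_delta(prev_frame, curr_frame):
    """Return a list of (index, new_value) for pixels that changed."""
    return [(i, curr) for i, (prev, curr) in enumerate(zip(prev_frame, curr_frame))
            if prev != curr]

def apply_delta(prev_frame, delta):
    """Rebuild the current frame from the previous frame plus the delta."""
    frame = list(prev_frame)
    for i, value in delta:
        frame[i] = value
    return frame

# A mostly static scene: 10 pixels, only two of them change.
frame1 = [0, 0, 0, 5, 5, 5, 0, 0, 0, 0]
frame2 = [0, 0, 0, 5, 7, 5, 0, 0, 9, 0]

delta = encode_delta(frame1, frame2)          # [(4, 7), (8, 9)]
assert apply_delta(frame1, delta) == frame2   # 2 values sent instead of 10
```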

My prediction is that we will soon see collaboration between video producers and Internet transport providers that allows the average small-device video production to have a much smaller footprint in transit.

Some of the basics of this technique would involve:

1) Deliberately blurring the background, or sending it separately from the action. Think of a wide shot of a breakaway lay-up in a basketball game. All you really need to see are the player and the basket; the brain is going to ignore background details such as the crowd, which might as well be static character animations, especially at the scale of an iPhone screen rather than a 56-inch HD flat screen.

2) Many of the videos in circulation on the Internet are newscasts of a talking head reading the latest headlines. If you wanted to be extreme, you could make the production such that the head is tiny and animate it like a South Park character. This would have a much smaller footprint while technically still being video, and it would be much more likely to play through without pausing.

3) The content sender can actually send a different production of the same video to low-bandwidth clients (a rough sketch of this idea follows below).
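As a rough illustration of point 3, here is a hedged Python sketch of how a server might pick among pre-produced renditions based on a client's reported bandwidth. The rendition names, bitrates, and thresholds are hypothetical, not any particular provider's scheme.

```python
# Hedged sketch: choose a pre-produced rendition of the same video based on
# the client's available bandwidth. Rendition names, bitrates, and thresholds
# are made up for illustration only.

RENDITIONS = [
    # (label, required_kbps) ordered from richest to leanest production
    ("full_hd_full_background", 5000),
    ("small_screen_blurred_background", 1200),
    ("talking_head_animated", 300),
]

def pick_rendition(client_kbps):
    """Return the richest rendition the client's bandwidth can sustain."""
    for label, required_kbps in RENDITIONS:
        if client_kbps >= required_kbps:
            return label
    # Fall back to the leanest production rather than refusing to play.
    return RENDITIONS[-1][0]

print(pick_rendition(6000))   # full_hd_full_background
print(pick_rendition(800))    # talking_head_animated
```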

Note that the reason the production side of the house must get involved with the compression and delivery side of video is that compression engines can only make assumptions about what is important and what is not when removing information (pixels) from a video.

With a smart production engine geared toward the Internet, there are big savings here. Video is busting out all over the Internet, and conserving from the production side only makes sense if you want your content deployed and viewed everywhere.

The security industry already does something similar, taking advantage of fixed cameras pointed at fixed backgrounds.
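Security systems with fixed cameras typically rely on background subtraction: compare each frame to a stored reference background and keep (or transmit) only the regions that differ. Here is a minimal Python sketch of that idea, with made-up pixel values and an arbitrary threshold.

```python
# Minimal background-subtraction sketch for a fixed camera: pixels that differ
# from the stored reference background by more than a threshold count as motion.
# Pixel values and the threshold are made up for illustration.

THRESHOLD = 10

def motion_mask(background, frame, threshold=THRESHOLD):
    """Return True for each pixel that differs meaningfully from the background."""
    return [abs(b - f) > threshold for b, f in zip(background, frame)]

background = [100, 100, 100, 100, 100]   # fixed, empty scene
frame      = [100, 100, 180, 175, 100]   # something moved through the middle

mask = motion_mask(background, frame)    # [False, False, True, True, False]
print(mask, sum(mask))                   # only 2 of 5 pixels need attention
```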

Related: How Much YouTube Can the Internet Handle?

Related: Out-of-the-Box Ideas on How to Speed Up Your Internet

Blog dedicated to video compression: Euclid Discoveries.


Wireless Network Supercharger: 10 Times Faster?


By Art Reisman

CTO – http://www.netequalizer.com

I just reviewed this impressive article:

  • David Talbot reports in MIT's Technology Review that "Academic researchers have improved wireless bandwidth by an order of magnitude… by using algebra to banish the network-clogging task of resending dropped packets."

Unfortunately, I do not have enough details to explain the breakthrough claims in the article specifically. However, using some existing background and analogies, I have detailed why there is room for improvement.

What follows below is a general explanation of why there is room for a better method of error correction and for the elimination of retries on a wireless network.

First off, we need to cover the effects of missing wireless packets and why they happen.

In a wireless network, the sender transmits a series of ones and zeros using a carrier frequency. Think of it like listening to your radio, except that instead of hearing a person talking, all you hear is a series of beeps and silence. In the case of a wireless network transmission, though, the beeps would be coming so fast you could not possibly hear the difference between a beep and silence. The good news is that a wireless receiver not only hears the beeps and silence, it interprets them into binary ones and zeros and assembles them into a packet.

The problem with this form of transmission is that wireless frequencies have many uncontrolled variables that can affect reliability. It would not be all that bad if carriers were not constantly pushing the envelope. Advertised speeds are based on a best-case signal, where the provider needs to cram as many bits into the frequency window in the shortest amount of time possible. There is no margin for error. With thousands of bits typically in a packet, all it takes is a few of them being misinterpreted for the whole packet to be lost and need re-transmission.

The normal way to tell whether a packet is good or bad is a technique called a checksum. Basically, this means the receiver counts the incoming bits and totals them up as they arrive. Everything in this dance is based on timing: the receiver listens to each time slot, and if it hears a beep it increments a counter; if it hears silence, it does not. At the end of a prescribed time, it totals the bits received and compares that total to a separate sum that is also transmitted. I am oversimplifying this process a bit, but think of it like two guys sending box cars full of chickens back and forth on a blind railroad with no engineers, sort of rolling them downhill to each other.
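Before that analogy plays out, here is a toy version of the sum-and-compare check in Python. Real wireless links use CRCs rather than a simple bit count, so treat this strictly as an illustration of the idea described above.

```python
# Toy checksum in the spirit of the description above: total up the received
# bits and compare against a sum the sender transmitted alongside the data.
# Real links use CRCs, not a simple count; this is only illustrative.

def bit_sum(bits):
    """Count the 1 bits in the payload."""
    return sum(bits)

def packet_is_good(received_bits, transmitted_sum):
    return bit_sum(received_bits) == transmitted_sum

payload = [1, 0, 1, 1, 0, 1, 0, 1]        # what the sender put on the air
sent_sum = bit_sum(payload)               # 5, transmitted with the packet

garbled = [1, 0, 1, 0, 0, 1, 0, 1]        # one bit flipped in transit
print(packet_is_good(payload, sent_sum))  # True  -> keep the packet
print(packet_is_good(garbled, sent_sum))  # False -> request a re-send
```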

Guy 1 sends three box cars full of chickens to Guy 2, and then a fourth box car with a note saying, "Please tell me if you got three box cars full of chickens, and also confirm there were 100 chickens in each car," and then he waits for confirmation back from Guy 2.

Guy 2 gets two box cars full of chickens and the note. He reads the note and realizes he only got two of the three, and that a couple of chickens were missing from one of the box cars, so he sends a note back to Guy 1 that says, "I did not get three box cars of chickens, just two, and some of the chickens were missing; they must have escaped."

The note arrives back at Guy 1, and he re-sends a new box car to make up for the missing chickens, along with a new note telling Guy 2 that he has re-sent a box car with the make-up chickens.

I know this analogy of two guys blindly sending chickens in box cars with confirmation notes sounds somewhat silly, and definitely inefficient, but it serves to explain just how inefficient wireless communication can get with re-sends, especially when some of the bits are lost in transmission. Sending bits through the airwaves can quickly become a quagmire if conditions are not perfect and bits start getting lost.
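In protocol terms, the chickens-and-notes exchange is essentially a stop-and-wait scheme: send, wait for confirmation, and re-send anything that was not acknowledged. The following Python sketch simulates that loop over a lossy link; the loss probability and packet names are made up purely to show how quickly retries pile up.

```python
import random

# Minimal stop-and-wait retransmission sketch matching the box-car analogy:
# send a packet, wait for acknowledgment, and re-send until it is confirmed.
# The lossy link below is simulated; the loss probability is made up.

def lossy_send(packet, loss_probability=0.3):
    """Simulate one wireless hop: True if the packet (and its ack) survived."""
    return random.random() > loss_probability

def send_reliably(packets):
    attempts = 0
    for packet in packets:
        while True:                 # keep re-sending until acknowledged
            attempts += 1
            if lossy_send(packet):
                break
    return attempts

packets = [f"boxcar-{i}" for i in range(100)]
print(f"{len(packets)} packets took {send_reliably(packets)} transmissions")
```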

The MIT team has evidently found a better way to confirm and ensure the transmission of data. As I have pointed out in countless articles about how congestion control speeds up networks, there is great room for improvement if you can eliminate the inefficiency of retries on a wireless network. I don't doubt that claims of a 10-fold increase in actual data transmitted and received can be achieved.
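I can't speak to the MIT team's actual algorithm, but the general family of ideas can be illustrated with a simple XOR parity sketch: send an algebraic combination of packets alongside the originals, so the receiver can reconstruct a lost packet itself instead of asking for a re-send. The packet contents below are made up, and this is only a toy stand-in for whatever the researchers actually do.

```python
# Hedged sketch of the general "coded packets" idea (not the MIT algorithm):
# alongside packets A and B, send their XOR. If any one of the three is lost,
# the receiver reconstructs it algebraically instead of requesting a re-send.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

packet_a = b"CHICKENS"                     # made-up payloads of equal length
packet_b = b"BOXCARS!"
parity   = xor_bytes(packet_a, packet_b)   # the extra "coded" packet

# Suppose packet_b is lost in transit; the receiver still has A and the parity.
recovered_b = xor_bytes(packet_a, parity)
assert recovered_b == packet_b             # no retry needed
```

Whatever the real method looks like, the principle is the same: spend a little extra math and bandwidth up front to avoid round-trip retries later.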
