By Art Reisman
I just got off the phone with one of our customers, a large ISP. He chewed me out because we were throttling his video and his customers were complaining. I told him that if we did not throttle his video during peak times, his whole pipe would come to a screeching halt. It seems everybody is looking for a magic bullet to squeeze blood from a turnip.
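The kind of peak-time throttling mentioned above is commonly implemented with a token-bucket rate limiter. Here is a minimal, hypothetical sketch (the class name and rates are illustrative, not our product's actual implementation):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow 'rate' bytes/sec on
    average, with bursts of up to 'capacity' bytes."""
    def __init__(self, rate, capacity):
        self.rate = rate            # refill rate, bytes per second
        self.capacity = capacity    # maximum burst size in bytes
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # caller should delay or drop the packet

# During peak hours an ISP might cap a video flow at, say, 500 KB/s:
bucket = TokenBucket(rate=500_000, capacity=1_500_000)
```

A flow that stays under the refill rate is never delayed; a flow that bursts past the bucket's capacity gets smoothed back down, which is roughly what a customer experiences as "throttled" video.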
Can the Internet be retrofitted for video?
Yes, there are a few tricks an ISP can use to make video more acceptable, but the bottom line is that the Internet was never intended to deliver video.
One basic trick used to eke out some video is to cache local copies of video content and deliver them to you when you click a URL for a movie. This technique follows the same path as the original on-demand video of the 1980s, the kind of service where you called your cable company and purchased a movie to start at 3:00 pm. Believe it or not, there was often a video player with a cassette at the other end of the cable going into your home, and your provider would simply start the player with the movie at the prescribed time. Today the selection of available video has expanded and the delivery mechanism has gotten more sophisticated, but for the most part, popular video is delivered via a direct wire from the operator into your home. It is usually NOT coming across the public Internet; it only appears that way (if it came across the Internet it would be slow and sporadic). Content that comes from the open Internet must come through an exchange point, and if your ISP has to rely on its exchange point to retrieve video content, things can get congested rather quickly.
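The caching idea can be sketched in a few lines. This is a toy LRU cache, not any real ISP's system; the names and the fetch callback are made up for illustration:

```python
from collections import OrderedDict

class VideoCache:
    """Tiny LRU cache: serve popular videos from a local copy instead of
    fetching them across the congested exchange point every time."""
    def __init__(self, max_items, fetch_from_origin):
        self.max_items = max_items
        self.fetch = fetch_from_origin   # slow path across the exchange
        self.store = OrderedDict()       # url -> content, oldest first

    def get(self, url):
        if url in self.store:
            self.store.move_to_end(url)  # mark as recently used
            return self.store[url]       # fast local hit
        content = self.fetch(url)        # miss: cross the exchange point
        self.store[url] = content
        if len(self.store) > self.max_items:
            self.store.popitem(last=False)  # evict least recently used
        return content
```

The catch, as noted later in this piece, is that the cache only helps with content popular enough to be requested repeatedly; everything else still crosses the exchange point.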
What is an Internet Exchange point and why does it matter?
Perhaps an explanation of exchange points might help. Think of a giant railroad yard, where trains from all over the country converge and then return to where they came from. In the yard they exchange their goods with the other train operators. For example, a train from Montana brings in coal destined for power plants in the east, and trains from the east bring mining supplies and food for the people of Montana. Per a gentleman's agreement, the railroad companies transfer some goods to other operators and take some goods in return. Although fictional, this would be a fair trade agreement, and it works as long as everybody exchanges about the same amount of stuff. But suppose one day a train from the south shows up with ten times its usual load to exchange, and suppose its goods are perishable, like raw milk products. Not only does it have more than its fair share to exchange, it also has a time dependency on the exchange: the milk must get to other markets quickly or it loses all value. You can imagine that some of the railroads in the exchange co-operative would be overloaded and problems would arise.
I wish I could take every media person who writes about the Internet into a room and not let them leave until they understand the concept of an Internet exchange point. The Internet is founded on a best-effort exchange agreement. Everything is built on this model, and it cannot easily be changed.
So how does this relate back to the problems of video?
There really is no problem with the Internet; it works as intended and is a magnificent model of best-effort exchange. The problem arises when content providers pump video into the pipes without any consideration of what might happen at the exchange points.
A bit of quick history on exchange point evolution.
Over the years, the original government network operators started exchanging with private operators such as AT&T, Verizon, and Level 3. These private operators have greatly improved the capacity of their links and exchange points, but the basic problem still exists: the sender and receiver never have any guarantee that their real-time streaming video will get to the other end in a timely manner.
As for caching, it is a band-aid. It works some of the time for the most popular videos that get watched over and over again, but it does not solve the problem at the exchange points, and consumers and providers are always pumping more content into the pipes.
So can the problem of streaming content be solved?
The short answer is yes, but it would not be the Internet. I suspect one might call it the Internet for marketing purposes, but out of necessity it would be some new network with a different political structure and entirely different rules. It would cost much more to ensure dedicated data paths for video, and operators would have to pass the cost of transport and path setup directly on to the content providers to make it work. Best-effort fair exchange would be out of the picture.
For example, over the years I have seen numerous plans by wizards who draw up block diagrams on how to make the Internet a signaling, switching network instead of a best-effort network. Each time I see one of these plans, I just sort of shrug. It has been done before, and done very well: these plans never seem to consider the data networks originally built by AT&T, a fully functional switched network for sending data to anybody with guaranteed bandwidth. We'll see where we end up.

Networking Equipment and Virtual Machines Do Not Mix
October 4, 2012 — By Joe DEsopo
Editors Note:
We often get asked why we don't offer our NetEqualizer as a virtual machine. Although the excerpt below is geared toward the NetEqualizer, you could just as easily substitute the word "router" or "firewall" in place of NetEqualizer and the information would apply to just about any networking product on the market. For example, even a simple Linksys router has a version of Linux under the hood, and to my knowledge they don't offer that product as a VM. In the following excerpt, lifted from a real response to one of our larger customers (a hotel operator), we detail the reasons.
————————————————————————–
Dear Customer,
We’ve very consciously decided not to release a virtualized copy of the software. The driver for our decision is throughput performance and accuracy.
As you can imagine, the NetEqualizer is optimized to do very fast packet/flow accounting and rule enforcement while minimizing unwanted negative effects (latencies, etc.) in networks. As you know, the NetEqualizer needs to operate in the sub-second time domain over what could be up to tens of thousands of flows per second.
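To make "packet/flow accounting" concrete, here is a deliberately simplified sketch of the idea: count bytes per flow inside a short time window so heavy flows can be identified and shaped. This is illustrative only, not the NetEqualizer's actual (kernel-level) code:

```python
import time
from collections import defaultdict

class FlowTable:
    """Toy per-flow byte accounting: track bytes seen per flow in the
    current interval so the heaviest flows can be found within
    sub-second windows."""
    def __init__(self, interval_s=0.5):
        self.interval_s = interval_s
        self.window_start = time.monotonic()
        self.bytes_by_flow = defaultdict(int)  # (src, dst, proto) -> bytes

    def account(self, flow_key, nbytes):
        now = time.monotonic()
        if now - self.window_start >= self.interval_s:
            self.bytes_by_flow.clear()   # start a fresh window
            self.window_start = now
        self.bytes_by_flow[flow_key] += nbytes

    def heaviest(self, n=5):
        # Flows consuming the most bandwidth in the current window.
        return sorted(self.bytes_by_flow.items(),
                      key=lambda kv: -kv[1])[:n]
```

The real system must update a structure like this for every packet at wire speed, which is exactly why small per-operation latencies matter so much.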
As part of our value proposition, we’ve been successful, where others have not, at achieving tremendous throughput levels on low cost commodity platforms (Intel based Supermicro motherboards), which helps us provide a tremendous pricing advantage (typically we are 1/3 – 1/5 the price of alternative solutions). Furthermore, from an engineering point of view, we have learned from experience that slight variations in Linux, System Clocks, NIC Drivers, etc… can lead to many unwanted effects and we often have to re-optimize our system when these things are upgraded. In some special areas, in order to enable super-fast speeds, we’ve had to write our own Kernel-level code to bypass unacceptable speed penalties that we would otherwise have to live with on generic Linux systems. To some degree, this is our “secret sauce.” Nevertheless, I hope you can see that the capabilities of the NetEqualizer can only be realized by a carefully engineered synergy between our Software, Linux and the Hardware.
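One way to see the timing sensitivity described above is to measure timer jitter directly: ask the OS to sleep for a fixed interval and record how far the wake-up overshoots. Under a virtualized clock on a contended hypervisor, the overshoot is typically larger and spikier than on bare metal. A small measurement sketch (the function name and thresholds are my own, not a vendor tool):

```python
import time

def measure_timer_jitter(interval_s=0.001, samples=1000):
    """Request a fixed sleep repeatedly and record how far the actual
    wake-up overshoots the request. Large or erratic overshoot makes
    sub-second traffic shaping unreliable."""
    overshoots = []
    for _ in range(samples):
        start = time.monotonic()
        time.sleep(interval_s)
        overshoots.append(time.monotonic() - start - interval_s)
    overshoots.sort()
    return {
        "median_us": overshoots[samples // 2] * 1e6,
        "p99_us": overshoots[int(samples * 0.99)] * 1e6,
        "worst_us": overshoots[-1] * 1e6,
    }
```

Running this on bare metal and then inside a VM on the same hardware is a quick way to observe the kind of variation that forces re-optimization.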
With that as a background, we have taken the position that a virtualized version of the NetEqualizer would not be in anyone's best interest. The fact is, we need to know and understand the specific timing tolerances at any given moment and in any given system environment. This is especially true if a bug is encountered in the field and we need to reproduce it in our labs in order to isolate and fix the problem. (Note: many bugs we find are not of our own making; often something in Linux that used to work fine changes in a newer release without our knowledge, and we have to discover the change and re-optimize around it.)
I hope I’ve done a good job of explaining the technical complexities surrounding a “virtualized” NetEqualizer. I know it sounds like a great idea, but really we think it cannot be done to an acceptable level of performance and support.