By Art Reisman, CTO, www.netequalizer.com


Have you ever wondered how your ISP manages to control the speed of your connection? If so, you might find the following article enlightening. Below, we’ll discuss the various techniques used to enforce and partition bandwidth rate limits, along with the side effects of each.
One of the simplest methods for a bandwidth controller to enforce a rate cap is by dropping packets. When using the packet-dropping method, the bandwidth-controlling device counts the total number of bytes that cross a link during each second. If the target rate is exceeded during any single second, the bandwidth controller drops packets for the remainder of that second. For example, if the bandwidth limit is 1 megabit per second and the bandwidth controller counts 1 million bits in the first half of a second, it will drop packets for the remainder of that second. The counter then resets for the next second. From most evidence we have observed, the rate caps enforced by many ISPs use the drop-packet method, as it is the least expensive method and is supported on most basic routers.
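The per-second counter described above can be sketched in a few lines of Python. This is a toy illustration of the general scheme, not any vendor's actual implementation; the class name and one-second window are my own choices.

```python
import time

class DropRateLimiter:
    """Toy rate cap: count bytes per one-second window, drop once over limit.
    A sketch of the counter scheme described in the article, nothing more."""

    def __init__(self, limit_bits_per_sec):
        self.limit_bytes = limit_bits_per_sec // 8
        self.window_start = time.monotonic()
        self.bytes_this_second = 0

    def allow(self, packet_len):
        now = time.monotonic()
        if now - self.window_start >= 1.0:
            # A new one-second window has begun: reset the counter.
            self.window_start = now
            self.bytes_this_second = 0
        if self.bytes_this_second + packet_len > self.limit_bytes:
            return False   # cap exceeded: drop for the rest of this second
        self.bytes_this_second += packet_len
        return True
```

Note that the counter is all-or-nothing within the window: once the budget is spent, every packet is dropped until the clock rolls over, which is exactly what produces the bursty stalls discussed next.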
So, what is wrong with dropping packets to enforce a bandwidth cap?
Well, when a link hits a rate cap and packets are dropped en masse, it can wreak havoc on a network. For example, the standard reaction of a Web browser when it perceives that traffic is being lost is to retransmit the lost data. For a better understanding of dropping packets, let’s use the analogy of a McDonald’s fast food restaurant.
Suppose the manager of the restaurant was told his bonus was based on making sure there was never a line at the cash register. So, whenever somebody showed up to order food when all registers were occupied, the manager would open a trap door, conveniently ejecting the customer back out into the parking lot. The customer, being extremely hungry, will come running back in the door (unless of course they die of starvation or get hit by a car), only to be ejected again. To make matters worse, let’s suppose a busload of school kids arrives. As the kids file into the McDonald’s, the ones remaining on the bus have no idea their classmates inside are getting ejected, so they keep streaming into the McDonald’s. Hopefully, you get the idea.
Well, when bandwidth shapers deploy packet-dropping technology to enforce a rate cap, you can get the same result seen with the trapdoor analogy in the McDonald’s. Web browsers and other user-facing applications will beat their heads against the wall when they don’t get responses from their counterparts on the other end of the line. When packets are being dropped en masse, the network tends to spiral out of control until all the applications essentially give up. Perhaps you have seen this behavior while staying at a hotel with an underpowered Internet link: your connectivity alternates between working and hanging completely for a minute or so during busy hours. This can obviously be very maddening.
The solution to shaping bandwidth on a network without causing gridlock is to implement queuing.
Queuing is the art of putting something in a line and making it wait before continuing on. Obviously, this is what fast food restaurants do in reality. They keep enough staff on hand to handle the average traffic throughout the day, and then queue up their customers when they arrive at a faster rate than orders can be filled. The assumption with this model is that at some point during the day the McDonald’s will catch up with the number of arriving customers and the lines will shrink away.
Another benefit of queuing is that customers can estimate the wait time as they drive by and see the long line extending out into the parking lot, and thus save their energy and not attempt to go inside.
But, what happens in the world of the Internet?
With queuing methods implemented, a bandwidth controller looks at the data rate of the incoming packets, and if that rate is deemed too fast, it delays the packets in a queue. The packets eventually get to their destination, albeit somewhat later than expected. Packets in a queue can pile up very quickly, and without some help the queue would saturate. The computer memory used to store queued packets would also fill up and, much like the scenario mentioned above, packets would eventually be dropped if they continued to arrive at a faster rate than they were sent out.
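The bounded buffer with a drop fallback can be sketched as follows. This is an illustrative toy, with an invented class name and a simple packet count as the memory limit rather than bytes.

```python
from collections import deque

class DelayQueue:
    """Toy shaper queue: hold packets when arrivals outpace the drain rate,
    and drop only when the bounded buffer is full. Illustrative sketch only."""

    def __init__(self, max_queued_packets):
        self.buffer = deque()
        self.max_queued = max_queued_packets
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.max_queued:
            self.dropped += 1   # memory exhausted: fall back to dropping
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # Called at the shaped output rate; packets leave later than they
        # arrived, which is the delay the sender will eventually notice.
        return self.buffer.popleft() if self.buffer else None
```

The key design point is that dropping is now the exception rather than the rule: most overshoot is absorbed as delay, which gives well-behaved senders a chance to slow down.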
TCP to the Rescue (keeping queuing under control)
Most Internet applications use a protocol called TCP (Transmission Control Protocol) to handle their data transfers. TCP has built-in intelligence to figure out the speed of the link it is sending data on and to make adjustments. When the NetEqualizer bandwidth controller queues a packet or two, the TCP stacks on the customer end-point computers sense the delayed packets and back off the speed of the transfer. With just a little bit of queuing, the sender slows down a bit and packet dropping can be kept to a minimum.
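The back-off behavior can be modeled with a toy simulation: the sender ramps its rate up steadily until it senses queuing delay (here approximated as the rate exceeding link capacity), then cuts its rate in half. This is a didactic sketch of TCP-style additive-increase/multiplicative-decrease, not a faithful TCP implementation; the function name and constants are invented.

```python
def simulate_sender(link_capacity, rounds=20):
    """Toy AIMD-style model: grow the send rate by one unit per round until
    queuing delay is sensed, then halve it. Didactic sketch only."""
    rate = 1.0
    history = []
    for _ in range(rounds):
        if rate > link_capacity:
            rate = rate / 2     # delay sensed: multiplicative decrease
        else:
            rate += 1.0         # all clear: additive increase
        history.append(rate)
    return history
```

Plotting the history gives the familiar sawtooth: the rate oscillates just around link capacity instead of slamming into a hard drop wall.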
The NetEqualizer bandwidth shaper uses a combination of queuing and dropping packets to get speed under control. Queuing is the first option, but when a sender does not back off eventually, their packets will get dropped. For the most part, this combination of queuing and dropping works well.
So far we have been assuming a simple case of a single sender and a single queue, but what happens if you have a gigabit link with 10,000 users and you want to break off 100 megabits to be shared by 3,000 of them? How would a bandwidth shaper accomplish this? This is another area where a well-designed bandwidth controller like the NetEqualizer separates itself from the crowd.
In order to provide smooth shaping for a large group of users sharing a link, the NetEqualizer does several things in combination.
- It keeps track of all streams and, based on their individual speeds, applies a different queue delay to each stream.
- Streams that back off receive minimal queuing.
- Streams that do not back off may eventually have some of their packets dropped.
The net effect of the NetEqualizer queuing intelligence is that all users will experience steady response times and smooth service.
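A per-stream delay policy along these lines can be sketched as a simple function. The thresholds, scaling factor, and delay cap below are invented for illustration; they are not the NetEqualizer's actual parameters.

```python
def queue_delay_ms(stream_rate_bps, fair_share_bps):
    """Toy per-stream policy: streams at or below their fair share see no
    added delay; faster streams are delayed in proportion to their excess.
    All constants here are hypothetical, chosen only to illustrate the idea."""
    if stream_rate_bps <= fair_share_bps:
        return 0.0   # well-behaved stream: minimal queuing
    excess_ratio = stream_rate_bps / fair_share_bps - 1.0
    # Delay grows with the overshoot, capped so latency stays bounded;
    # a stream that keeps pushing past the cap would start losing packets.
    return min(200.0, 50.0 * excess_ratio)
```

The point of shaping per stream rather than per link is that one aggressive download absorbs the delay, while everyone else's interactive traffic passes through untouched.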
Notes About UDP and Rate Limits
Some applications, such as video, do not use TCP to send data. Instead, they use a “send-and-forget” protocol called UDP (User Datagram Protocol), which has no built-in back-off mechanism. Without some higher intelligence, UDP packets will continue to be sent at a fixed rate, even if they are arriving too quickly for the receiver. The good news is that most UDP applications also have some way of measuring whether their packets are reaching their destination; it’s just that with UDP the synchronization mechanism is not standardized.
Finally, there are those applications that just don’t care whether their packets reach their destination. Speed tests and viruses send UDP packets as fast as they can, regardless of whether the network can handle them. The only way to enforce a rate cap on such ill-mannered applications is to drop their packets.
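For a non-responsive UDP flood, the shaper has no choice but to police by dropping. A minimal sketch of one second's worth of policing might look like this (illustrative only; real shapers track per-stream state and run continuously):

```python
def police_udp(packet_sizes, limit_bytes_per_sec):
    """Toy policer for a non-responsive UDP flood: within a one-second
    window, forward packets until the byte budget is spent, then drop.
    The sender will not slow down, so dropping is the only lever left."""
    forwarded, dropped = [], 0
    budget = limit_bytes_per_sec
    for size in packet_sizes:   # one second's worth of arrivals
        if size <= budget:
            budget -= size
            forwarded.append(size)
        else:
            dropped += 1
    return forwarded, dropped
```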
Hopefully this primer has given you a good introduction to the mechanisms used to enforce Internet speed limits, namely dropping packets and queuing. And maybe you will think about this the next time you visit a fast food restaurant during their busy time…
Comcast Suit: Was Blocking P2P Worth the Final Cost?
December 29, 2009 — By Art Reisman
CTO of APconnections
Makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer
Comcast recently settled a class action suit in the state of Pennsylvania regarding its practice of selectively blocking P2P traffic. So far, the first case has been settled for $16 million, with more cases on the docket yet to come. To recap: Comcast and other large ISPs invested in technology to thwart P2P, denied involvement when first accused, got spanked by the FCC, and now Comcast is looking to settle various class action suits.
When Comcast’s practices were put in place, P2P usage was skyrocketing with no end in sight, and blocking some of it was necessary in order to preserve reasonable speeds for all users. Given that there was no specific law or ruling on the books, mucking with P2P to alleviate gridlock seemed like a rational business decision. The decision made even more sense considering that DSL providers were poaching disgruntled customers. With this said, Comcast wasn’t alone in the practice: all of the larger providers were throttling P2P to some extent to ensure good response times for their customers.
Yet, with the lawsuits mounting, it appears on face value that things backfired a bit for Comcast. Or did they?
We can work out some very rough estimates of the final cost trade-off. Here goes:
I am going to guess that before this plays out completely, settlements will run close to $50 million or more. To put that in perspective, Comcast posted a 2008 profit of close to $3 billion, so $50 million is hardly a dent to its stockholders. But in order to play this out, we must ask what the ramifications would have been of not blocking P2P back when all of this began and P2P was a more serious bandwidth threat. (Today, while P2P has declined, YouTube and online video are the primary bandwidth hogs.)
We’ll start with the customer. The cost of acquiring a new customer is usually calculated at around six months of service, or approximately $300. So, to make things simple, we’ll assume the net cost of losing a customer is roughly $300. In addition, there are also the support costs related to congested networks, which can easily run $300 per customer incident.
The other, more subtle cost of P2P is that the methods used to deter P2P traffic were designed to keep traffic on the Comcast network. You see, ISPs pay for exchanging data when they hand off to other networks, so by limiting the amount of data exchanged, they can save money. I did some cursory research on the costs involved with exchanging data and did not come up with anything concrete, so I’ll assume a P2P customer can cost an ISP an extra $5 per month.
So, let’s put the numbers together to get an idea of how much potential financial damage P2P was causing back in 2007 (again, I must qualify that these are estimates and not fact; comments and corrections are welcome).
So, very conservatively for 2007 and 2008, incremental costs related to unmitigated P2P could have easily run a total of $600 million right off the bottom line.
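To see how the per-unit figures above could add up to a number of that magnitude, here is one back-of-the-envelope combination. The per-unit costs ($300 per lost customer, $300 per support incident, $5 per month in peering) come from the article; the customer and incident counts are invented placeholders that happen to land at $600 million, not Comcast data.

```python
def p2p_cost_estimate(churned_customers, support_incidents,
                      p2p_customers, months):
    """Back-of-the-envelope model using the article's per-unit figures.
    The counts passed in are hypothetical placeholders, not real data."""
    churn_cost = churned_customers * 300     # $300 per lost customer
    support_cost = support_incidents * 300   # $300 per support incident
    peering_cost = p2p_customers * 5 * months  # $5/month per P2P customer
    return churn_cost + support_cost + peering_cost

# One hypothetical mix of inputs over two years (2007-2008):
total = p2p_cost_estimate(500_000, 500_000, 2_500_000, 24)
```

Very different input mixes can reach the same ballpark total, which is really the point: even modest per-customer costs multiply quickly at Comcast's scale.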
Therefore, while these calculations are approximations, in retrospect it was likely financially well worth the risk for Comcast to mitigate the effects of unchecked P2P. Of course, the public relations costs are much harder to quantify.