Changing Times: Five Points to Consider When Trying to Shape Internet Traffic

By Art Reisman, CTO, APconnections

1) Traditional Layer 7 traffic-shaping methods are NOT able to identify encrypted traffic. In fact, short of an NSA back door built into some encryption schemes, traditional Layer 7 traffic shapers are slowly becoming obsolete as the percentage of encrypted traffic expands.
2) As of 2014, it was estimated that up to 6 percent of the traffic on the Internet was encrypted, a share expected to double in the next year or so.
3) It is possible to identify the source and destination of traffic even on encrypted streams. The sending and receiving IPs of encrypted traffic are never encrypted, hence large content providers, such as Facebook, YouTube, and Netflix, may be identified by their IP addresses, but there are some major caveats.

– It is common for the actual content from major content providers to be served from regional servers under different domain names (often registered to third parties). Simply trying to identify traffic by its originating domain is too simplistic.

– I have been able to trace proxied traffic back to its originating domain with reasonable accuracy by first running some experiments. I start by initiating a download from a known source, such as YouTube or Netflix, and then determine the actual IP address of the proxy the download is coming from. From this I know that this particular IP is most likely the source of any subsequent YouTube traffic. The shortfall of relying on this technique is that IP addresses change regionally, and there are many of them; you cannot assume that what is true today will be true tomorrow for any proxy domain serving up content. Think of the domains used for content like a leased food cart that changes menus each week.

4) Some traffic can be identified by behavior, even when it is encrypted. For example, the footprint of a single computer with a large connection count can usually be narrowed down to one of two things: either BitTorrent, or some kind of virus on a local computer. BitTorrent clients tend to open many small connections and hold them open for long periods of time. But again there are caveats. Legitimate BitTorrent providers, such as universities distributing public material, will use just a few connections to accomplish the data transfer, whereas consumer-grade BitTorrent clients, often used for illegal file sharing, may use hundreds of connections to move a file.

5) I have been alerted to solutions that require organizations to retrofit all endpoints with pre-encryption utilities, thus allowing the traffic shaper to receive data before it is encrypted. I am not privy to the mechanics of how this is implemented, but I would assume that outside of very tightly controlled networks, such a method would be a big imposition on users.
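As a rough illustration of the connection-count heuristic in point 4, here is a minimal sketch in Python. The threshold and the flow-record shape are assumptions chosen for illustration, not production detection logic:

```python
from collections import Counter

# Assumed cutoff: consumer-grade BitTorrent clients often hold hundreds
# of connections open, while typical hosts rarely exceed a few dozen.
CONN_THRESHOLD = 100

def flag_heavy_connectors(open_flows, threshold=CONN_THRESHOLD):
    """open_flows: iterable of (src_ip, dst_ip) pairs, one entry per
    open connection seen by the shaper. Returns the local hosts whose
    open-connection count exceeds the threshold -- on most networks
    that narrows to BitTorrent or a virus-infected machine."""
    counts = Counter(src for src, _ in open_flows)
    return {ip for ip, n in counts.items() if n > threshold}
```

A real implementation would also weigh connection lifetime and bytes per connection, so that a legitimate seeder using a handful of connections is never flagged.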

Surviving iOS updates

The birds outside my office window are restless. I can see the strain in the Comcast cable wires as they droop, heavy with the burden of additional bits, weighing them down like a freak ice storm. It is time, once again, for Apple to update every device in the universe with their latest iOS release.

Assuming you are responsible for a network with a limited Internet pipe, and you are staring down 100 or more users about to hit the accept button for their update, what can you do to prevent your network from being gridlocked?

The most obvious option to gravitate to is caching. I found a nice article (thanks, Luke) on the Squid settings used for a previous iOS update in 2013. Having worked with Squid quite a bit helping our customers, I was not surprised by the amount of tuning required to get this to work, and I suspect there will be additional changes needed to make it work in 2014.

If you have a Squid caching solution already up and running, it is worth a try, but I am on the fence about recommending a Squid install from scratch. Why? Because we are seeing diminishing returns from Squid caching each year due to the amount of dynamic content. Translation: very few things on the Internet come from the same place with the same filename anymore, and many content providers are marking much of their content as non-cacheable.

If you have a NetEqualizer in place, you can easily blunt the effects of the data crunch with a standard default set-up. The NetEqualizer will automatically spread the updates out over time, especially during peak hours when there is contention. This will allow other applications on your network to function normally during the day, and I doubt anybody doing the update will notice the difference.

Finally, if you are desperate, you might be able to block access to the iOS update servers on your firewall. This might seem a bit harsh, but then again Apple did not consult with you, and besides, isn't that what the free Internet at Starbucks is for?

Here is a snippet pulled from a forum on how to block it.

iOS devices check for new versions by polling the server. This is done via HTTP, port 80. Specifically, the URL is:

If you block or redirect that URL, you will inhibit the check for software updates. If you are really ambitious, you could redirect the query to a cached copy of the XML, but I haven't tried that. Please remove the block soon; you wouldn't want to prevent those security updates, would you?

Is Your Bandwidth Controller Obsolete Technology?

Although not yet free, bandwidth contracts have been dropping in cost faster than a bad stock during a recession. With cheaper bandwidth, the question often arises as to whether an enterprise can do without its trusty bandwidth controller.

Below, we have compiled a list of factors that will determine whether bandwidth controllers stick around for a while, or go the route of the analog modem, a relic of when people received their Internet from AOL and dial-up.

  • In many areas of the world, bandwidth prices are still very high. For example, most of Africa and parts of the Middle East do not have the infrastructure in place to deliver high-speed, low-cost circuits. Bandwidth controllers are essential equipment in these regions.
  • Even in countries where bandwidth infrastructure is subsidized and urban access is relatively cheap, people like to work and play in remote places. Bandwidth consumers have come to expect connectivity even while choosing to live in a remote village, often far from the main fiber lines that crisscross the urban landscape. Much like serving fresh seafood in a mining camp, providing bandwidth to remote locations carries a high price, and bandwidth controllers are more essential than ever in the remote areas of developed countries. For example, we are seeing a pickup in NetEqualizer interest from luxury resort hotels on tropical islands and from national parks, where high-speed Internet is now a necessity but is not cheap.
  • Government spending on Internet infrastructure has fallen out of favor, at least in the US. After the recent waste and fraud scandals, don't expect another windfall like the broadband initiative any time soon. Government subsidies were a one-time factor in the drop in bandwidth prices during the 2007 to 2010 time frame.
  • As the market matures and providers look to show a profit, they will be tempted to raise prices again, especially as demand grows. The recession of 2007 drove down some commercial demand at a time when there were significant increases in infrastructure capacity; we may be at the tail end of that deflationary bubble.
  • There was also a one-time infrastructure enhancement that gained momentum around 2007 and compounded the deflationary pressure on bandwidth: WDM technology allowed existing fiber to carry up to 16 times its originally planned capacity. We don't expect any new infrastructure innovation of that magnitude any time soon. Moore's Law has finally cracked in the computer industry, and so will the honeymoon increases in the carrying capacity of fiber.
  • Lastly, the wireless frequencies are crowded beyond capacity, bandwidth is still hard to find there, and operators are running out of tricks.
  • We must concede that we have seen cases where customers get bandwidth at such a low cost that they forgo investing in bandwidth controllers, but we expect that trend to flatten out as bandwidth prices hold steady or start to creep back up in the coming decade.

Stay tuned.

Is Layer 7 Shaping Officially Dead?

Sometimes life throws you a curve ball and you must change directions.

We have some nice color-coded pie charts that show customers the percentages of their bandwidth used by application. This feature is popular and really catches their eye.

In an effort to improve our latest Layer 7 reporting feature, we have been collecting data from some of our beta users.

Layer 7 Pie Chart

The accuracy of the Layer 7 data has always been, and continues to be, an issue. Normally this is resolved by revising the Layer 7 protocol patterns, which we use internally to identify the signatures of various applications. We had anticipated and planned to address accuracy in a second release. However, when we started to look at the root cause of the missed classifications, we began to see more and more cases of encrypted data. Encrypted data cannot be identified by content inspection.

We then checked with one of our ISP customers in South Africa, who handles over a million residential users. It seems that some of their investment in Layer 7 classification is also being thwarted by increased encryption. And this is more than the traditional p2p traffic; encryption has spread to common social services such as Facebook.

Admittedly, some of this early data is anecdotal, but two independent observers reporting increased encryption is hard to ignore.

Evidently the increased encryption now being used by common applications is a backlash against all the security issues bogging down the Internet. There are workarounds for enterprises that must use Layer 7 classification to prioritize traffic; however, the workarounds require that all devices using the network be retrofitted with special software to identify the traffic on the device (iPad, iPhone). Such a workaround is impractical for an ISP.

The net side effect is that, if this trend continues, traditional Layer 7 packet shapers will become museum pieces, right beside old Atari games and giant three-pound cell phones.

Stuck on a Desert Island: Do You Take Your Caching Server or Your NetEqualizer?

Caching is a great idea and works well, but I’ll take my NetEqualizer with me if forced to choose between the two on my remote island with a satellite link.

Yes, there are a few circumstances where a caching server might have a nice impact. Our most successful deployments are in educational environments where the same video is watched repeatedly as an assignment; but in most wide-open installations, expectations of performance far outweigh reality. Let's have a look at what works, and also drill down on expectations that are based on marginal assumptions.

From my personal archive of experience, here are some of the expectations attributed to caching that are perhaps a bit too optimistic.

“Most of my users go to their Yahoo or Facebook home page every day when they log in, and that is the bulk of all they do.”

– I doubt this customer's user base is that conformist :), and they'll find out once they install their caching solution. But even if it were true, only some of the content on Facebook and Yahoo is static. A good portion of these pages is dynamic by default, with ever-changing content. They are marked as dynamic in their URLs, which means the bulk of the page must be reloaded each time. For example, in order for caching to have an impact, the users in this scenario would have to stick to their home pages and not look at friends' photos or other pages.

“We expect to see a 30 percent hit rate when we deploy our cache.”

You won't see a 30 percent hit rate unless somebody designs a robot army specifically to test your cache, hitting the same pages over and over again. Perhaps on iOS update day you might see the bulk of your hits going to the same large file and enjoy a significant performance boost for a day. But overall, you will be doing well if you get a 3 or 4 percent hit rate.

“I expect the cache hits to take pressure off my Internet link.”

Assuming you want your average user to experience a fast-loading Internet, this is where you really want your NetEqualizer (or similar intelligent bandwidth controller) over your caching engine. The smart bandwidth controller can rearrange traffic on the fly, ensuring interactive hits get the best response. A caching engine does not have that intelligence.

Let's suppose you have a 100-megabit link to the Internet, and you install a cache engine that effectively gets a 6 percent hit rate. That would be an exceptional hit rate.

So what is the end-user experience with a 6 percent hit rate, compared to pre-cache?

– First off, it is not the hit rate that matters when looking at total bandwidth. Many of those hits will likely be smallish image files from the Yahoo home page or other common sites, which account for less than 1 percent of your actual traffic. Most of your traffic is likely dominated by large file downloads, and only a portion of those may be coming from cache.

– A 6 percent hit rate means a 94 percent miss rate, and if your Internet was slow from congestion before the caching server, it will still be slow 94 percent of the time.

– Putting in a caching server would be like upgrading your bandwidth from 100 megabits to 104 megabits to relieve congestion. The cache hits may add to the total throughput in your reports, but the 100-megabit bottleneck is still there, and to the end user there is little or no difference in perception at this point. A portion of your Internet access is still marginal or unusable during peak times, and other than the occasional web page or video loading nice and snappy, users are getting duds most of the time.

– Even the largest caching server is insignificant in how much data it can store.

– The Internet is vast and your cache is not. Think of a tiny ant standing on top of Mount Everest. YouTube puts up 100 hours of new content every minute of every day. A small commercial caching server can store about 1/1000 of what YouTube uploads in a day, not to mention yesterday, the day before, and last year. It's just not going to be in your cache.
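The hit-rate-versus-bytes distinction above lends itself to a quick back-of-envelope calculation. The object sizes below are illustrative assumptions, not measurements:

```python
def byte_hit_share(hit_rate, avg_hit_bytes, avg_miss_bytes):
    """Fraction of total *bytes* served from cache, given a request
    hit rate and average object sizes for hits and misses."""
    hit_bytes = hit_rate * avg_hit_bytes
    miss_bytes = (1 - hit_rate) * avg_miss_bytes
    return hit_bytes / (hit_bytes + miss_bytes)

def effective_link_mbps(link_mbps, byte_share):
    """Cache hits add capacity only in proportion to bytes served."""
    return link_mbps * (1 + byte_share)

# A 6% request hit rate dominated by small objects (say 100 KB hits
# against 1.5 MB average misses) moves well under 1% of the bytes:
share = byte_hit_share(0.06, 100_000, 1_500_000)   # roughly 0.004
```

Even crediting the cache with a generous 4 percent of the bytes only turns a 100-megabit link into the 104-megabit link described above; the bottleneck remains.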

So why is a NetEqualizer bandwidth controller so much superior to a caching server in changing user perception of speed? Because the NetEqualizer is designed to keep Internet access from crashing, and this is accomplished by reducing the footprint of large file transfers and video downloads during peak times. Yes, these videos and downloads may be slow or sporadic, but they weren't going to work anyway, so why let them crush the interactive traffic? In the end, neither caching nor equalizing is perfect, but in real-world trials the equalizer changes the user experience from slow to fast for all interactive transactions, while caching is hit or miss (pun intended).
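A minimal sketch of the equalizing idea, assuming a 100-megabit link and invented thresholds; this illustrates the principle of acting only under contention, not NetEqualizer's actual algorithm:

```python
LINK_BPS = 100_000_000     # assumed 100-megabit link
CONGESTED_AT = 0.85        # start equalizing above 85% utilization
HOG_SHARE = 0.10           # a flow using >10% of the link is "large"

def flows_to_delay(flows, link_bps=LINK_BPS):
    """flows: {flow_id: current_bps}. Under congestion, return the
    large flows to throttle so interactive traffic stays responsive;
    when the link is not contended, leave every flow alone."""
    total = sum(flows.values())
    if total < CONGESTED_AT * link_bps:
        return set()                  # no contention: do no harm
    cutoff = HOG_SHARE * link_bps
    return {fid for fid, bps in flows.items() if bps > cutoff}
```

The point is that small interactive flows (SSH, web pages, VoIP) are never touched; only the flows big enough to crush them are delayed, and only while the link is actually full.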

Clone(skb): The Inside Story on Packet Sniffing Efficiently on a Linux Platform

Even if you are not a complete geek, you might find this interesting.

The two common tools in standard Linux used in many commercial packet-sniffing firewalls are iptables and the Layer7 Packet Classifier. These low-level rule sets are often used in commercial firewalls to identify protocols (YouTube, Netflix, etc.) and then to take action by blocking them or reducing their footprint; however, in their current form they can bog down your firewall at higher throughput levels. The basic problems as you run at high line speeds are:

  • The path through the Linux kernel is bottlenecked around an interface port. This means that for every packet that must be analyzed for a specific protocol, the interface port where packets arrive is put on hold while the analysis is completed. Think of a line of cars going through a border patrol checkpoint: picture the back-up as each car is completely searched at the gate while other cars wait in line. This is essentially what happens in the standard Linux-based packet classifier; all packets are searched while other packets wait in line. Eventually this causes latency.
  • The publicly available protocol patterns are not owned and supported by any entity, and they are somewhat unreliable. I know, because I wrote and tested many of them over 10 years ago, and they are still published and re-used. In fairness, protocol accuracy will always be the Achilles' heel of Layer 7 detection. There is, however, some good news in this area, which I will cover shortly.

Technology Changes in the Kernel to Alleviate the Bottleneck

A couple of years ago we had an idea to create a low-cost, turn-key intrusion detection device. To build something that could stand up to today's commercial line speeds, we would require a better Layer 7 detection engine than the standard iptables solution. We ended up building a very nice intrusion detection device called the NetGladiator. One of the stumbling blocks we overcame in building this device was maintaining a commercial-grade line speed of up to 1 gigabit while still being able to inspect packets. How did we do it?

Okay, so I am a geek, but while poking around in the Linux kernel I noticed an interesting call, skb_clone(). What skb_clone() does is allow you to make a very fast copy of an IP packet and its data as it comes through the kernel. I also noticed that the newer Linux kernels have a mechanism for multi-threading. If you go back to my analogy of cars lined up at the border, you can think of multi-threading and cloning each car such that:

1) A car comes to the border station.

2) Clone (copy) it, and wave the original through without delay.

3) Send the clone off for analysis at a processing lab right next to the border.

4) If the analysis from the lab comes back showing contraband in the clone, send a helicopter after the original car and arrest the occupants.

5) Throw the clone away.
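The five steps above can be sketched as a toy user-space pipeline in Python. The real work happens inside the kernel in C via skb_clone(); the packet format and the 'contraband' pattern here are invented for illustration:

```python
import queue
import threading

analysis_q = queue.Queue()   # the "lab" intake line for clones
flagged = []                 # sources caught with contraband

def forward(packet):
    """Fast path: clone the packet, queue the clone for inspection,
    and wave the original through without delay."""
    analysis_q.put(dict(packet))   # cheap copy, like skb_clone()
    return packet

def analyst():
    """Slow path: inspect clones off the fast path, flag offenders
    (the helicopter step), then throw each clone away."""
    while True:
        clone = analysis_q.get()
        if clone is None:          # shutdown signal
            break
        if b"contraband" in clone["payload"]:
            flagged.append(clone["src"])
        analysis_q.task_done()

worker = threading.Thread(target=analyst, daemon=True)
worker.start()

forward({"src": "10.0.0.7", "payload": b"hello contraband world"})
forward({"src": "10.0.0.8", "payload": b"all clear"})
analysis_q.join()      # wait for both clones to be inspected
analysis_q.put(None)   # shut the analyst down
worker.join()
```

The key property is that forward() never blocks on the analysis; the interface port keeps moving while the worker thread does the expensive pattern matching on copies.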

We have taken the cloning and multi-threading elements of the Linux kernel and produced a low-cost, accurate packet classifier that can run at 10 times the line speed of the standard tools. It will be released in mid-February.


Virtual Machines and Network Equipment Don’t Mix

By Art Reisman, CTO

Perhaps I am a bit old-fashioned, but I tend to cringe when we are asked if we can run the NetEqualizer on a virtual machine.

Here’s why.

The NetEqualizer performs a delicate balancing act between bandwidth shaping and price/performance. During this dance, it is of the utmost importance that the NetEqualizer “do no harm”. That adage means making sure that all packets pass through the NetEqualizer such that:

1) The network does not see the NetEqualizer

2) The packets do not experience any latency

3) You do not change or molest the packet in any way

4) You do not crash

Yes, it would certainly be possible to run a NetEqualizer on a virtual machine, and I suspect that 90 percent of the time there would be no issues. However, if there were a problem, a crash, or latency, it would be virtually impossible (pun intended) to support the product, as there would be no way to quantify the issue.

When we build and test the NetEqualizer and deliver it on a hardware platform, all performance and stability metrics are based on the assumption that the NetEqualizer is the sole occupant of the platform. This means we have quantifiable resources for CPU, memory, and LAN ports. These guarantees break down when you run a network device on a virtual machine.

A network device such as the NetEqualizer is carefully tested and quantified with the assumption that it has exclusive access to all the hardware resources on the platform. If it were loaded on a shared hardware platform (VM), you could no longer guarantee any performance metrics.

The Rebirth of Wired Bandwidth

By Art Reisman, CTO

As usual, marketing expectations for Internet speed have outrun reality; only this time, reality is having a hard time catching up.

I am starting to get spotty yet reliable reports from technicians at some of the larger wireless carriers that the guys in the trenches charged with supporting wireless technology are about ready to throw in the towel.

No, I am not predicting the demise of wireless bandwidth and devices, but I am claiming we are at their critical saturation point. In the near future we will likely see only small incremental improvements in wireless data speeds.

The common myth with technology, especially in its first few decades, is that improvements are endless. Yes, the theory holds for technologies that are relatively new and moving fast, but the physical world eventually puts on the brakes.

For example, air travel saw huge jumps in comfort and speed over the 20-year span from the 1930s to the 1950s, culminating in jet travel across oceans. Trans-ocean jet travel became a reality about 50 years ago, and since that time there have been no improvements in speed. The Concorde was just not practical; as a result, we have seen no net improvement in jet travel speed in 50 years.

Well, the same goes for wireless technology in 2013. The airwaves are saturated; the frequencies can only carry so much bandwidth. Perhaps there will be one last gasp of innovation, similar to WDM on wired networks, but the future of high-speed computing will require point-to-point wires. For this reason, I am still holding onto my prediction that we will see plug-ins for your devices start to pop up again as an alternative and convenience to wireless in the near future.

Related posts:

The truth about the wireless bandwidth crisis: this article assumes the problem is paying for the technology.

ISP speed claim dilemma.

You heard it here first: our prediction on how video will evolve to conserve bandwidth.

Editor's Note:

I suspect somebody out there has already thought of this, but in my quick Internet search I could not find any references to this specific idea, so I am taking unofficial journalistic first claim to it.

The best example I can think of to exemplify efficiency in video is the old-style cartoon, such as South Park. If you ever watch South Park, the animation is deliberately cheesy: very few moving parts against fixed backgrounds. In South Park's case, the intention was obviously not to save production costs; the cheap animation is part of the comedy. That was not always the case. This sort of low-budget animation evolved in the early days, before computer animation took over the work of human artists drawing frame by frame. The fewer moving parts in a scene, the less work for the animator: they could re-use existing drawings of a figure and just change the orientation of the mouth, in perhaps three positions, to animate talking.

Modern video compression tries to take advantage of the inherent static data from frame to frame, such that each new frame is transmitted with less information. At best, this is a hit-or-miss proposition. There are likely many frivolous moving parts in a background that, on the small screen of a handheld device, are simply not necessary.

My prediction is that we will soon see collaboration between video producers and Internet transport providers that allows the average small-device video production to have a much smaller footprint in transit.

Some of the basics of this technique would involve:

1) Deliberately blurring the background, or sending it separately from the action. Think of a wide shot of a breakaway lay-up in a basketball game. All you really need to see is the player and the basket; the brain is going to ignore background details such as the crowd, so they might as well be static character animations, especially at the scale of your iPhone's screen, which is not the same experience as a 56-inch HD flat screen.

2) Many of the videos in circulation on the Internet are newscasts of a talking head reading the latest headlines. If you wanted to be extreme, you could make the production such that the head is tiny and animate it like a South Park character. This would have a much smaller footprint while technically still being video, and it would be much more likely to play through without pausing.

3) The content sender can actually send a different production of the same video for low-bandwidth clients.

Note: the reason the production side of the house must get involved with the compression and delivery side is that compression engines can only make assumptions about what is important and what is not when removing information (pixels) from a video.

With a smart production engine geared toward the Internet, there are big savings to be had. Video is busting out all over the Internet, and conserving on the production side only makes sense if you want your content deployed and viewed everywhere.

The security industry already does something similar, taking advantage of fixed cameras on fixed backgrounds.

Related: How much YouTube can the Internet handle?

Related: Out-of-the-box ideas on how to speed up your Internet.

Blog dedicated to video compression, Euclid Discoveries.



Five Tips to Control Encrypted Traffic on Your Network

Editor's Note:

Our intent with these tips is to exemplify some of the impracticalities involved in “brute force” shaping of encrypted traffic, and to offer some alternatives.

1) Insert Pre-Encryption software at each end node on your network.

This technique requires a custom app that would need to be installed on iPhones, iPads, and the laptops of end users. The app is designed to relay all data to a centralized shaping device in unencrypted form.

  • This assumes that a centralized IT department has the authority to require special software on all devices using the network. It would not be feasible in environments where end users freely use their own equipment.


2) Use a sniffing traffic shaper that can decrypt the traffic on the fly.

  • The older 40-bit encryption keys could be cracked by a computer in about a week; the newer 128-bit keys would require the computer to run longer than the age of the universe.

3) Just drop encrypted traffic, forcing users to turn off SSL in their browsers. Note: a traffic shaper can spot encrypted traffic; it just can't tell you specifically what it is by content.

  • It seems rather draconian to block secure private transmissions; however, the need to encrypt traffic over the Internet is vastly overblown. It is actually extremely unlikely for personal information or a credit card number to be stolen in transit, but that is another subject.
  • It is really not practical where you have autonomous or public users; it will cause confusion at best, a revolt at worst.

4) Perhaps re-think what you are trying to accomplish. There are more heuristic approaches to managing traffic that are immune to encryption. Please feel free to contact us for more details on a heuristic approach to shaping encrypted traffic.

5) Charge a premium for encrypted traffic. This would be more practical than blocking it, and would perhaps offset some of the costs associated with the overuse of encrypted p2p traffic.
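As a sanity check on the brute-force numbers in tip 2, here is the arithmetic, assuming a guess rate back-derived from "a 40-bit key in about a week". The absolute rate is an assumption; the point is the 2^88 ratio:

```python
SECONDS_PER_WEEK = 7 * 24 * 3600
GUESSES_PER_SEC = 2**40 / SECONDS_PER_WEEK   # rate implied by the 40-bit claim

def brute_force_seconds(key_bits, rate=GUESSES_PER_SEC):
    """Worst-case time to try every key of the given length."""
    return 2**key_bits / rate

slowdown = brute_force_seconds(128) / brute_force_seconds(40)   # 2**88
# At this rate a 128-bit search takes ~1.9e32 seconds (~6e24 years),
# vastly longer than the ~4.3e17-second age of the universe.
```

Each extra key bit doubles the search, so 88 extra bits multiply the week by 2^88; no plausible hardware speedup closes that gap.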

Caching Success: Urban Myth or Reality?

Editor's Note:

Caching is a bit overrated as a means of eliminating congestion and speeding up Internet access. Yes, there are some nice caching tricks that create fleeting illusions of speed, but in the end, caching alone will fail to mitigate problems due to congestion. The following article, adapted from our November 2011 posting, details why.

You might be surprised to learn that Internet link congestion cannot be mitigated with a caching server alone. Contention can only be eliminated by:

1) Increasing bandwidth

2) Some form of intelligent bandwidth control

3) Or a combination of 1) and 2)

A common assumption about caching is that you will somehow be able to cache a large portion of common web content, such that a significant amount of your user traffic will not traverse your backbone. Unfortunately, our real-world experience has shown that after the implementation of a caching solution, the overall congestion on your Internet link shows no improvement.

For example, let's take the case of an Internet trunk that delivers 100 megabits and is heavily saturated prior to implementing a caching solution. What happens when you add a caching server to the mix?

From our experience, a good hit rate to cache will likely not exceed 5 percent. Yes, we have heard claims of 50 percent, but we have not seen this in practice, and suspect it is either best-case vendor hype or a very specialized solution targeted at Netflix (not general caching). We have been selling a caching solution and discussing other caching solutions with customers for almost 3 years, and like any urban myth, claims of high-percentage cache hits are impossible to track down.

Why is the hit rate at best only 5 percent?

The Internet is huge relative to a cache, and you can only cache a tiny fraction of total Internet content. Even Google, with billions invested in data storage, does not come close. You can attempt to keep trending popular content in the cache, but the majority of access requests to the Internet tend to be somewhat random and impossible to anticipate. Yes, a good number of hits locally resolve a Yahoo home page, but many more users are going to do unique things. For example, common hits like email and Facebook are all very different for each user and are not a shared resource maintained in the cache. User hobbies are also all different, and thus users traverse different web pages and watch different videos. The point is you can't anticipate this data and keep it in a local cache any more reliably than guessing the weather long-term. You can get a small statistical advantage, and that accounts for the 5 percent you get right.


Even with caching at a 5 percent hit rate, your backbone link usage will not decline.

With caching in place, any gain in efficiency will be countered by a corresponding increase in total usage. Why is this?

If you assume an optimistic 10 percent hit rate to cache, you will get a boost and obviously handle 10 percent more traffic than you did prior to caching; however, your main pipe will not carry any less.

This is worth repeating: if you cache 10 percent of your data, that does not mean your Internet pipe usage will go from 100 percent to 90 percent; it is not a zero-sum game. The net effect is that your main pipe will remain 100 percent full, and you will get 10 percent on top of that from your cache. Thus your net usage appears to be 110 percent. The problem is you still have a congested pipe, and the associated slow web pages and files that are not stored in cache will suffer; you have not solved your congestion problem!
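The arithmetic above can be captured in a toy model. It assumes, as this example does, that pent-up demand exceeds the link, so any capacity freed by cache hits is immediately consumed:

```python
def link_after_caching(demand_mbps, link_mbps, hit_rate):
    """Returns (pipe_usage_mbps, total_delivered_mbps). Cache hits
    are served locally, but on a saturated link the remaining demand
    still fills the pipe to capacity."""
    from_cache = demand_mbps * hit_rate
    wants_pipe = demand_mbps - from_cache
    pipe = min(wants_pipe, link_mbps)
    return pipe, pipe + from_cache

# 140 Mbps of demand on a 100 Mbps link with a 10% cache hit rate:
# the pipe stays pinned at 100 while total delivery rises to ~114 Mbps
# (pipe plus cache), yet the congestion itself is untouched.
```

The demand and hit-rate figures are invented for illustration; the structural point is that the `min()` against link capacity never relaxes as long as demand exceeds the link.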

Perhaps I am beating a dead horse with examples, but just one more.

Let’s start with a very congested 100 megabit Internet link. Web hits are slow, YouTube takes forever, email responses are slow, and Skype calls break up. To solve these issues, you put in a caching server.

Now 10 percent of your hits come from cache, but since you did nothing to mitigate overall bandwidth usage, your users will simply eat up the extra 10 percent from cache and then some. It is like giving a drug addict a free hit of their preferred drug. If you serve up a fast YouTube, it will just encourage more YouTube usage.

Even with a good caching solution in place, if somebody tries to access Grandma’s Facebook page, it will have to come over the congested link, and it may time out and not load right away. Or, if somebody makes a Skype call it will still be slow. In other words, the 90 percent of the hits not in cache are still slow even though some video and some pages play fast, so the question is:

If 10 percent of your traffic is really fast, and 90 percent is doggedly slow, did your caching solution help?

The answer is yes, of course it helped: 10 percent of users are getting nice, uninterrupted YouTube. It just may not seem that way when the complaints keep rolling in. :)

Imagine Unlimited Bandwidth

By Art Reisman – CTO –


I was feeling a bit idealistic today about the future of bandwidth, so I jotted these words down. I hope it brightens your day.

Imagine there's no congestion
It's easy if you try
No hidden fees surprise us
Above us high speed guy
Imagine all providers, giving bandwidth away

Imagine there's no quotas
It isn't hard to use
No killer apps that die for
A lack of bandwidth too
Imagine all the gamers living layer 7 free

You may say, I’m a streamer
But I’m just gonna download one
I hope some day you’ll join us
And your speed concerns will be done

The Wireless Density Problem

Recently, we have been involved in several projects where an IT consulting company has attempted to bring public wireless service into a high density arena. So far, the jury is out on how well these service offerings have fared.

The motivation for such a project is driven by several factors.

1) Most standard cellular 4G data coverage is generally not adequate to handle 20,000 people with iPhones in a packed arena. I am sure the larger carriers are also feverishly working on a solution, but I have no inside information as to their approach or chances of success.

Note: I'd be interested to learn about any arenas with great coverage.

2) Venue operators have customers that expect to be able to use their wireless devices during the course of a game to check stats, send pictures, etc.

3) Public frequency, wireless controllers, and access points are getting smarter rather quickly. Even though I have not seen clear success in these extremely high densities, free wireless solutions are gaining momentum.

We are actually doing a trial at a major sports venue in the coming weeks. From the perspective of the NetEqualizer, we are invited along to keep the primary 1 gigabit Internet pipe feeding the entire arena from going down. To date we have not been asked to referee the mayhem of access point regional gridlock and congestion in an arena setting, mostly because of our price point and cost to deploy at each radio.

Why do these high density roll outs fail to meet expectation?

It seems that 20+ thousand people in a small arena transmitting and receiving data over public frequencies really sucks for access points. The best way to picture this chaos is to imagine listening to a million crickets on a warm summer night and trying to pick out the cadence of a single insect. Yes, you might be able to single out a cricket if it landed on your nose, but in a large arena not everybody can be next to an access point. The echoes from all the transmissions coming in to the radios at these high densities are unprecedented. Even with an initial success, we see problems build as uptake rises. If you build it, they will come! Typically only a small percentage of attendees log in to the wireless offering on the initial trial. The early success is tempered as usage doubles, and doubles again, eventually overwhelming the radios and their controllers.

My surprising conclusion

My prediction is that in the near future, we will start to see little plug-in stations in high density venues. These stations will be compatible with next generation wireless devices, thus serving up data to your seat. You may scoff, but I am already hearing rumbles from many of our cutting edge high density housing Internet providers on this issue. Due to wireless technology limitations, they plan to keep their wired portals in their buildings, even in areas where they have spent heavily on wireless coverage.

Related Articles:

Addressing issues of wireless data coverage.

How to speed up access on your iPhone

How Much Bandwidth Do You Really Need?

By Art Reisman – CTO –


When it comes to how much money to spend on the Internet, there seems to be this underlying feeling of guilt with everybody I talk to. From ISPs to libraries to multinational corporations, they all have a feeling of bandwidth inadequacy. It is very similar to the guilt I used to feel back in college when I would skip my studies for some social activity (drinking). Only now it applies to bandwidth contention ratios. Everybody wants to know how they compare with the industry average in their sector. Are they spending on bandwidth appropriately, and if not, are they hurting their institution? Will they become second-rate?

To ease the pain, I was hoping to put together a nice chart of industry standard recommendations, validating that your bandwidth consumption is normal, and I just can't bring myself to do it quite yet. There is this elephant in the room that we must contend with. So before I make up a nice chart of recommendations, a more relevant question is... how bad do you want your video service to be?

Your choices are:

  1. bad
  2. crappy
  3. downright awful

Although my answer may seem a bit sarcastic, there is a truth behind these choices. I sense that much of the guilt of our customers trying to provision bandwidth is based on the belief that somebody out there has enough bandwidth to reach some form of video Shangri-La; like playground children bragging about their father’s professions, claims of video ecstasy are somewhat exaggerated.

With the advent of video, it is unlikely any amount of bandwidth will ever outrun the demand; yes, there are some tricks with caching and cable on demand services, but that is a whole different article. The common trap with bandwidth upgrades is that there is a false sense of accomplishment experienced before actual video use picks up. If you go from a network where nobody is running video (because it just doesn’t work at all), and then you increase your bandwidth by a factor of 10, you will get a temporary reprieve where video seems reliable, but this will tempt your users to adopt it as part of their daily routine. In reality you are most likely not even close to meeting the potential end-game demand, and 3 months later you are likely facing another bandwidth upgrade with unhappy users.

To understand the video black hole, it helps to compare the potential demand curve pre and post video.

A quality VOIP call, which used to be the measuring stick for decent Internet service, runs about 54 kbps. A quality HD video stream can easily consume about 40 times that amount.

Yes, there are vendors that claim video can be delivered at 250kbs or less, but they are assuming tiny little stop action screens.

Couple this tremendous increase in video stream size with a higher percentage of users that will ultimately want video, and you would need an upgrade of perhaps 60 times your pre-video bandwidth levels to meet the final demand. Some of our customers, with big budgets or government-subsidized backbones, are getting close, but most go on a honeymoon with an upgrade of 10 times their bandwidth, only to end up asking the question: how much bandwidth do I really need?
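The rough arithmetic behind a figure like "60 times" can be sketched as follows. The stream sizes come from the article; the adoption rates are illustrative assumptions of mine, not the article's data:

```python
# Rough arithmetic behind an "upgrade of perhaps 60 times" figure.
# Stream sizes are from the article; adoption rates are assumptions.

voip_stream_kbps = 54                        # a quality VOIP call
video_stream_kbps = 40 * voip_stream_kbps    # ~40x a VOIP call (~2.2 Mbps HD)

users = 100
pre_video_active = 0.50    # assume half the users active at peak, pre-video
post_video_active = 0.75   # assume even more users ultimately stream video

pre_video_demand_kbps = users * pre_video_active * voip_stream_kbps
post_video_demand_kbps = users * post_video_active * video_stream_kbps

print(post_video_demand_kbps / pre_video_demand_kbps)   # 60.0 -> a 60x jump
```

In other words, a 40x bigger stream multiplied by a modest rise in concurrent adoption is all it takes to dwarf a 10x upgrade.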

So what is an acceptable contention ratio?

  • Typically in an urban area right now we are seeing anywhere from 200 to 400 users sharing 100 megabits.
  • In a rural area, double that ratio – 400 to 800 sharing 100 megabits.
  • In the smaller cities of Europe ratios drop to 100 people or less sharing 100 megabits.
  • And in remote areas served by satellite we see 40 to 50 sharing 2 megabits or less.
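To get a feel for what those ratios mean per user, here is a quick worst-case calculation, assuming (unrealistically) that every user is active at once; the midpoints chosen are illustrative:

```python
# Worst-case per-user share implied by the contention ratios above,
# assuming every user is active at once (illustrative midpoints).

scenarios = {
    "urban":     (300, 100_000),   # ~200-400 users sharing 100 megabits
    "rural":     (600, 100_000),   # ~400-800 users sharing 100 megabits
    "satellite": (45, 2_000),      # 40-50 users sharing 2 megabits or less
}

for name, (users, link_kbps) in scenarios.items():
    # e.g. urban works out to roughly 333 kbps per user at full contention
    print(f"{name}: {link_kbps / users:.1f} kbps per user")
```

Of course, not everyone is active at once; that statistical oversubscription is exactly why contention ratios work at all.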

A Brief History of Peer to Peer File Sharing and the Attempts to Block It

By Art Reisman

The following history is based on my notes and observations as both a user of peer to peer and as a network engineer tasked with cleaning it up.

Round One, Napster, Centralized Server, Circa 2002

Napster was a centralized service; unlike the peer to peer behemoths of today, there was never any question of where the copyrighted material was being stored and pirated from. Even though Napster did not condone pirated music and movies on its site, the courts decided that by allowing copyrighted material to exist on its servers, it was in violation of copyright law. Napster's days of free love were soon over.

From a historical perspective, the importance of the decision to force the shutdown of Napster was that it gave rise to a whole new breed of p2p applications. We detailed this phenomenon in our 2008 article.

Round Two, Mega-Upload  Shutdown, Centralized Server, 2012

We again saw a doubling down on p2p client sites (they expanded) when Mega-Upload, a centralized sharing site, was shut down back in January 2012.

“On the legal side, the recent widely publicized MegaUpload takedown refocused attention on less centralized forms of file sharing (i.e. P2P). Similarly, improvements in P2P technology coupled with a growth in file sharing file size from content like Blue-Ray video also lead many users to revisit P2P.”

Read the full article from

The shut down of Mega-Upload had a personal effect on me as I had used it to distribute a 30 minute account from a 92-year-old WWII vet where he recalled, in oral detail, his experience of surviving a German prison camp.

Blocking by Signature, a.k.a. Layer 7 Shaping, a.k.a. Deep Packet Inspection. Late 1990's till present

Initially the shining star in the fight against illegal content on your network, this technology can be expensive and fail miserably in the face of newer encrypted p2p applications. It can also get quite expensive to keep up with the ever-changing application signatures, and yet it is still often the first line of defense attempted by ISPs.

We covered this topic in detail in our recent article, Layer 7 Shaping Dying With SSL.

Blocking by Website

Blocking the source sites where users download their p2p clients is still possible. We see this method applied mostly at private secondary schools, where content blocking is an accepted practice. This method does not work for computers and devices that already have p2p clients. Once loaded, p2p files can come from anywhere, and there is no centralized site to block.

Blocking Uninitiated Requests. Circa Mid-2000

The idea behind this method is to prevent your network from serving up any content whatsoever! Sounds a bit harsh, but the average Internet consumer rarely, if ever, hosts anything intended for public consumption. Yes, at one time, during the early stages of the Internet, my geek friends would set up home pages similar to what everybody exposes on Facebook today. Now, with the advent of hosting sites, there is just no reason for a user to host content locally, and thus no need to allow access from the outside. Most firewalls have a setting to disallow uninitiated requests into your network (obviously with an exemption for your publicly facing servers).

We actually have an advanced version of this feature in our NetGladiator security device. We watch each IP address on your internal network and take note of outgoing requests; nobody comes in unless they were invited. For example, if we see a user on the network make a request to a Yahoo server, we expect a response to come back from a Yahoo server; however, if we see a Yahoo server contact a user on your network without a pending request, we block that incoming request. In the world of p2p, this should prevent an outside client from requesting and receiving a copyrighted file hosted on your network. After all, no p2p client is going to randomly send out invites to outside servers... or would they?
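The invite-tracking idea can be sketched in a few lines. This is a hypothetical simplified model, not NetGladiator's implementation: real firewalls track full connection 4-tuples and state rather than bare IP addresses, and the addresses below are documentation-range stand-ins:

```python
# Simplified sketch of the "no uninitiated inbound requests" idea:
# remember which outside hosts inside users contacted, and drop inbound
# traffic from anyone who was never invited. (Hypothetical model; real
# firewalls track full connection state, not bare IP addresses.)

class UninitiatedRequestFilter:
    def __init__(self):
        self.invited = set()   # outside IPs that an inside user contacted

    def outbound(self, inside_ip, outside_ip):
        # An inside user reached out, so the outside host is now invited.
        self.invited.add(outside_ip)

    def inbound_allowed(self, outside_ip):
        # Only previously contacted hosts may send traffic in.
        return outside_ip in self.invited

fw = UninitiatedRequestFilter()
fw.outbound("10.0.0.5", "198.51.100.10")    # user requests a page from a server
print(fw.inbound_allowed("198.51.100.10"))  # True  - reply to our own request
print(fw.inbound_allowed("203.0.113.77"))   # False - uninvited outside probe
```

The design choice is the same one stateful firewalls make: inbound traffic is judged by whether an inside host asked for it first.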

I spent a few hours researching this subject, and here is what I found (this may need further citations). It turns out that p2p distribution may be a bit more sophisticated, with ways to get around the block-uninitiated-request firewall technique.

P2P networks such as Pirate Bay use a directory service of super nodes to keep track of what content peers have and where to find them. When you load up your p2p client for the first time, it just needs to find one super node to get connected, from there it can start searching for available files.

Note: You would think that if these super nodes were aiding and abetting illegal content, the RIAA could just shut them down like they did Napster. There are two issues with this assumption:

1) The super nodes do not necessarily host content, hence they are not violating any copyright laws. They simply coordinate the network in the same way DNS servers keep track of domain names and where to find servers.
2) The super nodes are not hosted by Pirate Bay; they are basically commandeered from the network of users, who unwittingly agree to perform this directory service when clicking the license agreement that nobody ever reads.

In my research, I have talked to network administrators who claim that despite blocking uninitiated outside requests on their firewalls, they still get RIAA notices. How can this be?

There are only two ways this can happen.

1) The RIAA is taking the liberty of simply accusing a network of hosting illegal content based on the directory listings of a super node. In other words, if they find a directory on a super node pointing to copyrighted files on your network, that might be information enough to accuse you.

2) More likely, and much more complex, is that the super nodes are brokering the transaction as a condition of being connected. Basically this means that when a p2p client within your network contacts a super node for information, the super node directs the client to send data to a third-party client on another network. Thus the sending of information from inside your network looks to the firewall as if it was initiated from within. You may have to think about this, but it makes sense.

Behavior based thwarting of p2p. Circa 2004 – NetEqualizer

Behavior-based shaping relies on spotting the unique footprint of a client sending and receiving p2p traffic. From our experience, these clients just do not know how to lay low and stay under the radar. It's like a criminal smuggling drugs while doing 100 MPH on the highway; they just can't help themselves. Part of the p2p methodology is to find as many sources of files as possible, and then download from all sources simultaneously. Combine this behavior with the fact that most p2p consumers are trying to build up a library of content, and thus initiating many file requests, and you get a behavior footprint that can easily be spotted. By spotting this behavior and making life miserable for these users, you can achieve self-compliance on your network.
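As a toy illustration of that footprint, one could flag internal hosts by concurrent connection count. The threshold and the host data below are assumptions for illustration only, not NetEqualizer's actual heuristics:

```python
# Toy behavior-based detection: flag internal hosts whose concurrent
# connection counts look like p2p swarming. The threshold and sample
# data are illustrative assumptions, not NetEqualizer's actual logic.

P2P_CONNECTION_THRESHOLD = 100   # hundreds of peers is the telltale footprint

def flag_suspected_p2p(connections_per_host):
    """connections_per_host maps an internal IP to its concurrent connection count."""
    return [ip for ip, count in connections_per_host.items()
            if count >= P2P_CONNECTION_THRESHOLD]

hosts = {
    "10.0.0.21": 4,     # normal web browsing
    "10.0.0.35": 240,   # consumer BitTorrent client swarming a download
    "10.0.0.44": 12,    # streaming video
}
print(flag_suspected_p2p(hosts))   # ['10.0.0.35']
```

As the article notes earlier, a high connection count alone can also indicate a virus, and legitimate low-connection BitTorrent use would slip under such a threshold, which is why real heuristics weigh more than one signal.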

Read a smarter way to block p2p traffic.

Blocking the RIAA probing servers

If you know where the RIAA is probing from, you can deny all traffic to their probes, thus preventing the probing of files on your network and the ensuing nasty letters to desist.
