The Wireless Density Problem


Recently, we have been involved in several projects where an IT consulting company has attempted to bring public wireless service into a high-density arena. So far, the jury is still out on how well these service offerings have fared.

The motivation for such projects comes from several factors.

1) Standard cellular 4G data coverage is generally not adequate to handle 20,000 people with iPhones in a packed arena. I am sure the larger carriers are feverishly working on a solution, but I have no inside information as to their approach or their chances of success.

Note: I’d be interested to learn about any arenas with great coverage.

2) Venue operators have customers who expect to be able to use their wireless devices during the course of a game to check stats, send pictures, etc.

3) Wireless controllers and access points operating on public frequencies are getting smarter rather quickly. Even though I have not seen clear success at these extremely high densities, free wireless solutions are gaining momentum.

We are actually doing a trial at a major sports venue in the coming weeks. From the perspective of the NetEqualizer, we are invited along to keep the primary 1 Gbps Internet pipe feeding the entire arena from going down. To date we have not been asked to referee the mayhem of access point gridlock and congestion in an arena setting, mostly because of our price point and the cost to deploy at each radio.

Why do these high-density rollouts fail to meet expectations?

It seems that 20,000-plus people in a small arena transmitting and receiving data over public frequencies really sucks for access points. The best way to picture this chaos is to imagine listening to a million crickets on a warm summer night and trying to pick out the cadence of a single insect. Yes, you might be able to single out a cricket if it landed on your nose, but in a large arena not everybody can be next to an access point. The echoes from all the transmissions coming into the radios at these densities are unprecedented. Even with an initial success, we see problems build as uptake rises. If you build it, they will come! Typically, only a small percentage of attendees log in to the wireless offering during the initial trial. The early success is tempered as usage doubles, and doubles again, eventually overwhelming the radios and their controllers.

My surprising conclusion

My prediction is that in the near future we will start to see little plug-in stations in high-density venues. These stations will be compatible with next-generation wireless devices, thus serving up data right to your seat. You may scoff, but I am already hearing rumbles on this issue from many of our cutting-edge high-density housing Internet providers. Due to wireless technology limitations, they plan to keep their wired portals in their buildings, even in areas where they have spent heavily on wireless coverage.

Related Articles:

Siradel.com radio coverage

Addressing issues of wireless data coverage.

How to speed up access on your iPhone

How Much Bandwidth Do You Really Need?


By Art Reisman – CTO – www.netequalizer.com


When it comes to how much money to spend on the Internet, there seems to be an underlying feeling of guilt in everybody I talk to. From ISPs to libraries to multinational corporations, they all have a feeling of bandwidth inadequacy. It is very similar to the guilt I used to feel back in college when I would skip my studies for some social activity (drinking). Only now it applies to bandwidth contention ratios. Everybody wants to know how they compare with the industry average in their sector. Are they spending on bandwidth appropriately, and if not, are they hurting their institution? Will they become second-rate?

To ease the pain, I was hoping to put together a nice chart of industry-standard recommendations, validating that your bandwidth consumption is normal, but I just can’t bring myself to do it quite yet. There is an elephant in the room that we must contend with first. So before I make up a nice chart of recommendations, a more relevant question is: how bad do you want your video service to be?

Your choices are:

  1. bad
  2. crappy
  3. downright awful

Although my answer may seem a bit sarcastic, there is a truth behind these choices. I sense that much of the guilt our customers feel when provisioning bandwidth is based on the belief that somebody out there has enough bandwidth to reach some form of video Shangri-La; like playground children bragging about their fathers’ professions, claims of video ecstasy are somewhat exaggerated.

With the advent of video, it is unlikely any amount of bandwidth will ever outrun the demand; yes, there are some tricks with caching and cable on-demand services, but that is a whole different article. The common trap with bandwidth upgrades is the false sense of accomplishment experienced before actual video use picks up. If you start with a network where nobody is running video (because it just doesn’t work at all) and then increase your bandwidth by a factor of 10, you will get a temporary reprieve where video seems reliable, but this will tempt your users to adopt it as part of their daily routine. In reality, you are most likely not even close to meeting the potential end-game demand, and 3 months later you are likely facing another bandwidth upgrade with unhappy users.

To understand the video black hole, it helps to compare the potential demand curve pre- and post-video.

A quality VoIP call, which used to be the measuring stick for decent Internet service, runs at about 54 kbps. A quality HD video stream can easily consume about 40 times that amount.

Yes, there are vendors that claim video can be delivered at 250 kbps or less, but they are assuming tiny, stop-action screens.

Couple this tremendous increase in stream size with a higher percentage of users who will ultimately want video, and you would need an upgrade of perhaps 60 times your pre-video bandwidth levels to meet the final demand. Some of our customers, with big budgets or government-subsidized backbones, are getting close, but most go on a honeymoon with an upgrade of 10 times their bandwidth, only to end up asking the question: how much bandwidth do I really need?
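To make that 60x figure concrete, here is a rough back-of-envelope sketch in Python. The stream sizes come from the paragraphs above; the active-user percentages are my own illustrative assumptions, not measurements.

    # Back-of-envelope estimate of post-video demand (numbers are assumptions).
    VOIP_KBPS = 54            # the pre-video "quality service" yardstick
    HD_VIDEO_KBPS = 54 * 40   # roughly 2.2 Mbps per HD stream

    users = 1000
    pre_video_active = 0.10   # assume 10% of users active at once, pre-video
    post_video_active = 0.15  # assume video tempts more concurrent use

    pre_demand = users * pre_video_active * VOIP_KBPS        # ~5,400 kbps
    post_demand = users * post_video_active * HD_VIDEO_KBPS  # ~324,000 kbps
    print(f"upgrade factor: {post_demand / pre_demand:.0f}x")  # 60x

The 60x falls out of a 40x larger stream multiplied by a modest rise in concurrent use; tweak the percentages and the conclusion barely changes.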

So what is an acceptable contention ratio?

  • Typically in an urban area right now we are seeing anywhere from 200 to 400 users sharing 100 megabits.
  • In a rural area, double that ratio – 400 to 800 sharing 100 megabits.
  • In the smaller cities of Europe, ratios drop to 100 people or fewer sharing 100 megabits.
  • And in remote areas served by satellite we see 40 to 50 sharing 2 megabits or less.
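To get a feel for what those ratios mean per user, here is a quick sketch; the user counts are midpoints of the ranges above, and the worst case assumes everyone is active at once:

    # Rough per-user bandwidth at full contention (illustrative midpoints).
    scenarios = {
        "urban":     (300, 100_000),  # ~300 users on 100 megabits
        "rural":     (600, 100_000),  # ~600 users on 100 megabits
        "satellite": (45,  2_000),    # ~45 users on 2 megabits
    }
    for name, (users, kbps) in scenarios.items():
        print(f"{name}: {kbps / users:.0f} kbps per user if all are active")

In practice, of course, not everyone is active at once, which is the entire bet a provider makes when setting a contention ratio.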

A Brief History of Peer to Peer File Sharing and the Attempts to Block It


By Art Reisman

The following history is based on my notes and observations as both a user of peer-to-peer and as a network engineer tasked with cleaning it up.

Round One, Napster, Centralized Server, Circa 2002

Napster was a centralized service; unlike the peer-to-peer behemoths of today, there was never any question of where the copyrighted material was being stored and pirated from. Even though Napster did not condone pirated music and movies on its site, the courts decided that, by allowing copyrighted material to exist on its servers, it was in violation of copyright law. Napster’s days of free love were soon over.

From a historical perspective, the importance of the decision to force Napster’s shutdown was that it gave rise to a whole new breed of p2p applications. We detailed this phenomenon in our 2008 article.

Round Two, Mega-Upload Shutdown, Centralized Server, 2012

We again saw a doubling down on p2p client sites (they expanded) when Mega-Upload, a centralized sharing site, was shut down back in January 2012.

“On the legal side, the recent widely publicized MegaUpload takedown refocused attention on less centralized forms of file sharing (i.e. P2P). Similarly, improvements in P2P technology coupled with a growth in file sharing file size from content like Blue-Ray video also lead many users to revisit P2P.”

Read the full article from deepfield.net

The shutdown of Mega-Upload had a personal effect on me, as I had used it to distribute a 30-minute account from a 92-year-old WWII vet in which he recalled, in oral detail, his experience of surviving a German prison camp.

Blocking by Signature, Alias Layer 7 Shaping, Alias Deep Packet Inspection. Late 1990s Till Present

Initially the shining star in the fight against illegal content on your network, this technology can be expensive and can fail miserably in the face of newer encrypted p2p applications. It can also get quite expensive to keep up with the ever-changing application signatures, and yet it is still often the first line of defense attempted by ISPs.

We covered this topic in detail in our recent article, Layer 7 Shaping Dying With SSL.

Blocking by Website

Blocking the source sites where users download their p2p clients is still possible. We see this method applied mostly at private secondary schools, where content blocking is an accepted practice. It does not work for computers and devices that already have p2p clients; once a client is loaded, p2p files can come from anywhere, and there is no centralized site to block.

Blocking Uninitiated Requests. Circa Mid-2000s

The idea behind this method is to prevent your network from serving up any content whatsoever! It sounds a bit harsh, but the average Internet consumer rarely, if ever, hosts anything intended for public consumption. Yes, at one time, during the early stages of the Internet, my geek friends would set up home pages similar to what everybody exposes on Facebook today. Now, with the advent of hosting sites, there is just no reason for a user to host content locally, and thus no need to allow access from the outside. Most firewalls have a setting to disallow uninitiated requests into your network (obviously with an exemption for your publicly facing servers).

We actually have an advanced version of this feature in our NetGladiator security device. We watch each IP address on your internal network and take note of outgoing requests; nobody comes in unless they were invited. For example, if we see a user on the network make a request to a Yahoo server, we expect a response to come back from a Yahoo server; however, if we see a Yahoo server contact a user on your network without a pending request, we block that incoming request. In the world of p2p, this should prevent an outside client from requesting and receiving a copyrighted file hosted on your network. After all, no p2p client is going to randomly send out invites to outside servers... or would it?
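For the curious, here is a minimal sketch of the general technique in Python. This is an illustration of the idea, not NetGladiator’s actual implementation; the TTL and the addresses are made up.

    import time

    PENDING_TTL = 60.0   # seconds an outbound request stays "open" (assumed)
    pending = {}         # (inside_ip, outside_ip) -> time of last outbound

    def note_outbound(inside_ip, outside_ip):
        # Record that someone inside initiated contact with this outside host.
        pending[(inside_ip, outside_ip)] = time.time()

    def allow_inbound(outside_ip, inside_ip):
        ts = pending.get((inside_ip, outside_ip))
        if ts is None or time.time() - ts > PENDING_TTL:
            return False   # uninvited: block it
        return True        # a response we were expecting

    note_outbound("10.0.0.5", "98.138.219.231")         # user contacts Yahoo
    print(allow_inbound("98.138.219.231", "10.0.0.5"))  # True: invited
    print(allow_inbound("203.0.113.9", "10.0.0.5"))     # False: uninitiated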

I spent a few hours researching this subject, and here is what I found (this may need further citations). It turns out that p2p distribution is a bit more sophisticated than that, and has ways to get around the block-uninitiated-requests firewall technique.

P2P networks such as Pirate Bay use a directory service of super nodes to keep track of what content peers have and where to find it. When you load up your p2p client for the first time, it just needs to find one super node to get connected; from there it can start searching for available files.

Note: You would think that if these super nodes were aiding and abetting the distribution of illegal content, the RIAA could just shut them down like they did Napster. There are two issues with this assumption:

1) The super nodes do not necessarily host content, hence they are not violating any copyright laws. They simply coordinate the network in the same way a DNS service keeps track of domain names and where to find servers.
2) The super nodes are not hosted by Pirate Bay; they are basically commandeered from its network of users, who unwittingly agree to perform this directory service when clicking the license agreement that nobody ever reads.

In my research, I have talked to network administrators who claim that, despite blocking uninitiated outside requests on their firewalls, they still get RIAA notices. How can this be?

There are only two ways this can happen.

1) The RIAA is taking the liberty of simply accusing a network of hosting illegal content based on the directory listings of a super node. In other words, if they find a directory entry on a super node pointing to copyrighted files on your network, that might be information enough to accuse you.

2) More likely, and much more complex, is that the super nodes are brokering the transaction as a condition of being connected. Basically, this means that when a p2p client within your network contacts a super node for information, the super node directs the client to send data to a third-party client on another network. The transfer of information from inside your network thus looks to the firewall as if it was initiated from within. You may have to think about this, but it makes sense.

Behavior-based thwarting of p2p. Circa 2004 – NetEqualizer

Behavior-based shaping relies on spotting the unique footprint of a client sending and receiving p2p traffic. From our experience, these clients just do not know how to lay low and stay under the radar. It’s like a criminal smuggling drugs while doing 100 MPH on the highway; they just can’t help themselves. Part of the p2p methodology is to find as many sources of a file as possible and then download from all sources simultaneously. Combine this behavior with the fact that most p2p consumers are trying to build up a library of content, and thus initiating many file requests, and you get a behavior footprint that can easily be spotted. By spotting this behavior and making life miserable for these users, you can achieve self-compliance on your network.
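As a rough illustration of what spotting that footprint might look like, here is a simplified sketch. It is not NetEqualizer’s actual algorithm, and the thresholds are invented for the example:

    from collections import defaultdict

    PEER_THRESHOLD = 50      # distinct remote hosts in a window (assumed)
    REQUEST_THRESHOLD = 200  # connection attempts in a window (assumed)

    peers = defaultdict(set)     # host -> remote IPs contacted
    requests = defaultdict(int)  # host -> connection attempts

    def observe(host, remote_ip):
        peers[host].add(remote_ip)
        requests[host] += 1

    def looks_like_p2p(host):
        # Many simultaneous sources plus many file requests = the footprint.
        return (len(peers[host]) > PEER_THRESHOLD
                and requests[host] > REQUEST_THRESHOLD)

A normal web user talks to a handful of servers; a p2p client building a library talks to hundreds, which is exactly the asymmetry this approach exploits.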

Read a smarter way to block p2p traffic.

Blocking the RIAA probing servers

If you know where the RIAA is probing from, you can deny all traffic to and from their probes, thus preventing them from probing files on your network and sparing yourself the ensuing nasty letters to desist.

Alternatives to Bandwidth Addiction


By Art Reisman

CTO – http://www.netequalizer.com


Bandwidth providers are organized to sell bandwidth. In the face of bandwidth congestion, their fallback position is always to sell more bandwidth, never to slow consumption. Would a crack dealer send their clients to a treatment program?

For example, I have had hundreds of encounters with people at bandwidth resellers; all of our exchanges have been courteous and upbeat, and yet a vendor relationship rarely develops. Whether they are executives, account managers, or front-line technicians, the only time they call us is as a last resort to save an account, and for several good reasons:

1) It is much easier, conceptually, to sell a bandwidth upgrade rather than a piece of equipment.

2) Bandwidth contracts bring recurring revenue.

3) Providers can lock in a bandwidth contract, investors like contracts that guarantee revenue.

4) There is very little overhead to maintain a leased bandwidth line once up and running.

5) And as I alluded to before, would a crack dealer send a client to rehab?

6) Commercial bandwidth infrastructure costs have come down in the last several years.

7) Bandwidth upgrades are very often the most viable and easiest path to relieve a congested Internet connection.

Bandwidth optimization companies exist because at some point customers realize they cannot outrun their consumption. Believe it or not, the limiting factor to Internet access speed is not always the pure cost of raw bandwidth; enterprise infrastructure can also be the bottleneck. Switches, routers, cabling, access points, and back-hauls all carry a price tag to upgrade, and sometimes it is easier to scale back on frivolous consumption.

The ROI of optimization is something your provider may not want you to know.

The next time you consider a bandwidth upgrade at the behest of your provider, you might want to look into some simple ways to optimize your consumption. You may not be able to fully arrest your increased demand with an optimizer, but realistically you can slow your growth rate from a typical unchecked 20 percent a year to a more manageable 5 percent a year. With an optimization solution in place, your doubling time for bandwidth demand can easily stretch from about 3.5 years to around 15 years, which translates to huge cost savings.
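Those doubling times fall out of the standard compound-growth formula, doubling time = ln(2)/ln(1 + r). A quick check, purely to verify the arithmetic above:

    import math

    for rate in (0.20, 0.05):
        years = math.log(2) / math.log(1 + rate)
        print(f"{rate:.0%} annual growth doubles demand in {years:.1f} years")
    # 20% -> about 3.8 years; 5% -> about 14.2 years,
    # close to the 3.5 and 15 quoted above.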

Note: Companies such as Level 3 offer optimization solutions, but with all due respect, I doubt those business units are exciting stockholders with revenue. My guess is they are a break-even proposition; however, I’d be glad to eat crow if I am wrong, as I am purely speculating. Sometimes companies are able to sell adjunct services at a nice profit.

Related NY Times op-ed on bandwidth addiction

NetEqualizer News: December 2012


December 2012

Greetings!

Enjoy another issue of NetEqualizer News! This month, we preview feature additions to NetEqualizer coming in 2013, offer a special deal on web application security testing for the Holidays, and remind NetEqualizer customers to upgrade to Software Update 6.0. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

This month’s picture is from Parents’ Night for my daughter’s volleyball team. In December, as I get ready for the Holidays, I often think about what is important to me – like family, friends, my health, and how I help to run this business. While pondering these thoughts, I came up with some quotes that have meaning to me, which I am sharing here. I hope you enjoy them, or that they at least get you thinking about what is important to you!

“Technology is not what has already been done.”
“Following too closely ruins the journey.”
“Innovation is not a democratic endeavor.”
“Time is not linear, it just appears that way most of the time.”

What are your favorite quotes? We love it when we hear back from you – so if you have a quote, or a story of how we have helped you that you would like to share, let us know. Email me directly at art@apconnections.net. I would love to hear from you!

NetEqualizer: Coming in 2013

We are always looking to improve our NetEqualizer product line so that our customers get maximum value from their purchase. Part of this process is brainstorming changes and additional features to meet that need.

Here are a couple of ideas for changes to NetEqualizer that will arrive in 2013. Stay tuned to NetEqualizer News and our blog for updates on these features!

1) NetEqualizer in Mesh Networks and Cloud Computing

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, our stream-based behavior shaping will need to evolve.

This is due to the fact that we base our decision of whether or not to shape on a pair of IP addresses talking to each other without considering port numbers. Sometimes, in cloud or mesh networks, services are trunked across a tunnel using the same IP address. As they cross the trunk, the streams are broken out appropriately based on port number.

So, for example, say you have a video server as part of a cloud computing environment. Without any NAT, on a wide-open network, we would be able to give that video server priority simply by knowing its IP address. However, in a meshed network, the IP connection might be the same as other streams, and we’d have no way to differentiate it. It turns out, though, that services within a tunnel may share IP addresses, but the differentiating factor will be the port number.

Thus, in 2013 we will no longer shape just on IP to IP, but will evolve to offer shaping on IP(Port) to IP(Port). The result will be quality of service improvements even in heavily NAT’d environments.
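In data-structure terms, the change amounts to widening the key that identifies a stream. A hypothetical illustration (the field names and addresses are mine, not from the product):

    from collections import namedtuple

    FlowKey = namedtuple("FlowKey", "src_ip src_port dst_ip dst_port")

    # Old scheme: one bucket per IP pair, so tunneled streams blur together.
    old_key = ("203.0.113.7", "198.51.100.2")

    # New scheme: two streams inside the same NAT'd tunnel stay distinct.
    video  = FlowKey("203.0.113.7", 1935, "198.51.100.2", 40001)
    backup = FlowKey("203.0.113.7", 22,   "198.51.100.2", 40002)
    assert video != backup   # distinguishable, so video can get priority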

2) 10 Gbps Line Speeds without Degradation

Some of our advantages over the years have been our price point, the techniques we use on standard hardware, and the line speeds we can maintain.

Right now, our NE3000 and above products all have true multi-core processors, and we want to take advantage of that to enhance our packet analysis. While our analysis is very quick and efficient today (sustained speeds of 1 Gbps up and down), in very high-speed networks, multi-core processing will amp up our throughput even more. In order to get to 10 Gbps on our Intel-based architecture, we must do some parallel analysis on IP packets in the Linux kernel.
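Conceptually, the parallel analysis can be pictured as hashing each packet’s flow to a fixed worker, so per-flow state stays on one core while the cores work independently. A toy sketch, in Python for readability only; the real work would happen in the Linux kernel, as noted above:

    from multiprocessing import Queue

    NUM_WORKERS = 4
    queues = [Queue() for _ in range(NUM_WORKERS)]

    def dispatch(src_ip, src_port, dst_ip, dst_port, packet):
        # The same flow always hashes to the same worker, keeping its
        # state coherent while the workers analyze packets in parallel.
        worker = hash((src_ip, src_port, dst_ip, dst_port)) % NUM_WORKERS
        queues[worker].put(packet)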

The good news is that we’ve already developed this technology in our NetGladiator product (check out this blog article here).

Coming in 2013, we’ll port this technology to NetEqualizer. The result will be low-cost bandwidth shapers that can handle extremely high line speeds without degradation. This is important because in a world where bandwidth keeps getting cheaper, the only reason to invest in an optimizer is if it makes good business sense.

We have prided ourselves on smart, efficient, optimization techniques for years – and we will continue to do that for our customers!


Secure Your Web Applications for the Holidays!

We want YOU to be proactive about security. If your business has external-facing web applications, don’t wait for an attack to happen – protect yourself now! It only takes a few hours of our in-house security experts’ time to determine if your site might have issues, so, for the Holidays, we are offering a $500 upfront security assessment for customers with web applications that need testing!

If it is determined that our NetGladiator product can help shore up your issues, that $500 will be applied toward your first year of NetGladiator Software & Support (GSS). We also offer further consulting based on that assessment on an as-needed basis.

To learn more about NetGladiator, check out our video here.

Or, contact us at:

ips@apconnections.net

-or-

303-997-1300 x123


Don’t Forget to Upgrade to 6.0! With a Brief Tutorial on User Quotas

If you have not already upgraded your NetEqualizer to Software Update 6.0, now is the perfect time!

We have discussed the new upgrade in depth in previous newsletters and blog posts, so this month we thought we’d show you how to take advantage of one of the new features – User Quotas.

User quotas are great if you need to track bandwidth usage over time per IP address or subnet. You can also send alerts to notify you if a quota has been surpassed.

To begin, you’ll want to navigate to the Manage User Quotas menu on the left. You’ll then want to start the Quota System using the third interface from the top, Start/Stop Quota System.

Now that the Quota System is turned on, we’ll add a new quota. Click on Configure User Quotas and take a look at the first window:

[Screenshot: the Configure User Quotas window]

Here are the settings associated with setting up a new quota rule:

Host IP: Enter the host IP or subnet that you want to give a quota rule to.

Quota Amount: Enter the total number of bytes for this quota to allow.

Duration: Enter the number of minutes you want the quota to be tracked before it is reset (1 day, 1 week, etc.).

Hard Limit Restriction: Enter the number of bytes/sec to allow the user once the quota is surpassed.

Contact: Enter a contact email for the person to notify when the quota is passed.
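For example, a hypothetical rule that gives a dorm subnet 10 GB per week, throttled to 64,000 bytes/sec once the quota is hit, would look like this (all values are made up for illustration):

    Host IP: 192.168.10.0/24
    Quota Amount: 10737418240      (10 GB, in bytes)
    Duration: 10080                (one week, in minutes)
    Hard Limit Restriction: 64000  (bytes/sec once over quota)
    Contact: netadmin@example.edu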

After you populate the form, click Add Rule. Congratulations! You’ve just set up your first quota rule!

From here, you can view reports on your quota users and more.

Remember, the new GUI and all the new features of Software Update 6.0 are available for free to customers with valid NetEqualizer Software & Support (NSS).

If you don’t have the new GUI or are not current with NSS, contact us today!

sales@apconnections.net

-or-

toll-free U.S. (888-287-2492)

worldwide (303) 997-1300 x103


Best Of The Blog

Internet User’s Bill of Rights

By Art Reisman – CTO – APconnections

This is the second article in our series. Our first was a Bill of Rights dictating the etiquette of software updates. We continue with a proposed Bill of Rights for consumers with respect to their Internet service.

1) Providers must divulge the contention ratio of their service. 

At the core of all Internet service is a balancing act between the number of people that are sharing a resource and how much of that resource is available.

For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks – perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town.

The speed you, the customer, can attain is limited to how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers and take advantage of the natural usage behavior, which assumes that not all users are active at one time.

The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe while minimizing service complaints due to a slow network. In some cases, I have seen as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds when compared to dial-up…
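Worked out with the numbers from this excerpt (the subscriber count is an assumed round figure, purely illustrative):

    pipe_kbps = 10_000       # the 10 megabit local pipe
    promised_kbps = 1_000    # the 1 megabit service tier
    subscribers = 100        # assumed subscriber count on the pipe

    oversubscription = subscribers * promised_kbps / pipe_kbps   # 10:1 oversold
    worst_case = pipe_kbps / subscribers    # if everyone is active at once
    print(f"{oversubscription:.0f}:1 oversold, "
          f"{worst_case:.0f} kbps each in the worst case")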

Photo Of The Month


Kansas Clouds

The wide-open ranch lands in middle America provide a nice retreat from the bustle of city life. When he can find time, one of our staff members visits his property in Kansas with his family. The Internet connection out there is shaky, but it is a welcome change from routine.

Equalizing is the Silver Bullet for Quality of Service


Silver Bullet (n.) – A simple and seemingly magical solution to a complex problem.

The number of solutions that have been developed to improve Quality of Service (QoS) for data traveling across a network (video, VoIP, etc.) is endless. Often, these tools appear to be simple, but they fall short in implementation:

Compression: Compressing files in transit helps reduce congestion by decreasing the amount of bandwidth a transfer requires. This appears to be a viable solution, but in practice, most of the large streams that tend to clog networks (high resolution media files, etc.) are already compressed. Thus, most networks won’t see much improvement in QoS when this method is used.

Layer 7 Inspection: Providing QoS to specific applications also sounds like a reasonable approach to the problem. However, applications are increasingly utilizing encryption for transferring data, and thus determining the purpose of a network packet is a much harder problem. It also requires constant tweaking and updates to ensure the proper applications are given priority.

Type of Service: Each network packet has a field in its header that denotes its “type of service.” This flag was intended to help give QoS to packets based on their importance and purpose. This method, however, requires lots of custom router configuration and is not very reliable, since there is little control over who sets the flag, when, and why.

These solutions are analogous to the diet pill and weight loss products that inundate our lives on a daily basis. They are offering complex solutions to a simple problem:

Overweight? Buy this machine, watch these DVDs, take this pill.

When the real solution is:

Overweight? Eat better.

Simple solutions are what good engineering is all about, and that drives the entire philosophy behind Equalizing – the bandwidth control method implemented in our NetEqualizer. The truth is, you can accomplish 99% of your QoS needs on a fixed link SIMPLY by cranking down on the large streams of traffic. While the above approaches try to do this in various ways, nothing is easier and more hands-off than looking at the behavior of a connection relative to the available bandwidth, and throttling it as needed. No deep packet inspection, compression, or packet analysis required. No need to concern yourself with new Internet usage trends or the latest media file types. Just fair bandwidth, regardless of trunk size, for all of your users, at all times of day. When bandwidth is controlled, connection quality is as good as possible for everyone!
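To make the idea concrete, here is a minimal sketch of equalizing in Python. It illustrates the principle only, not NetEqualizer’s actual code; the trunk size and congestion threshold are invented:

    TRUNK_KBPS = 100_000   # assumed trunk size
    CONGESTED = 0.85       # act only above 85% utilization (assumed)

    def streams_to_throttle(flows):
        """flows: flow_id -> current kbps. Returns the flows to crank down."""
        load = sum(flows.values())
        if load < CONGESTED * TRUNK_KBPS:
            return []                        # no congestion: touch nothing
        fair_share = TRUNK_KBPS / max(len(flows), 1)
        # Penalize only the flows taking more than their fair share.
        return [fid for fid, kbps in flows.items() if kbps > fair_share]

    print(streams_to_throttle({"video": 96_000, "web": 2_000, "voip": 64}))
    # ['video'] -- the large stream gets throttled; small streams are untouched

Note what is absent: no signatures, no payload inspection, no per-application rules. Congestion plus relative size is the entire decision.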

Consumer Bill of Rights for Software Updates


This morning I attached my iPhone to my Mac so I could import some of my latest Thanksgiving pictures. I have done this particular sync perhaps 100 times in the past, but today I was in a hurry and wanted to get everything onto my Mac so I could shoot out an e-mail with the new pictures. Yes, I know it is possible to send email from an iPhone directly, but that tiny little box of a screen is like working with my eyes closed and my hands behind my back.

Upon initiating the sync, my Mac informed me that something needed an update to complete the operation. I am not sure why, but it was adamant there was no other way. I clicked the update button, and 20 minutes later the update was still running, so I gave up. Have you ever wanted to scream, “I DON’T WANT THE UPDATE! I AM COMPLETELY HAPPY WITH THE WAY THINGS ARE!”? Shortly after this incident, I remembered how Congress had passed a bill of rights for airline passengers. I suspect that as our electronic equipment becomes essential to everyday life, somebody is going to come along with a bill of rights for technology users, so I thought it would be a good time to get a head start.

Bill of Rights for updates to smart devices:

1) Tell the user how long an update is going to take before they click the button. If you don’t know how long it will take, then make it a two-step process where step one calculates how long it will take and step two is the update.

2) Give the user an easy option to see what is in the update before they click.

3) Never force a user to take an update unless there is some radical change in technology that requires it.

4) Give the user the option to cancel the update in progress at any time without any consequences.

5) Don’t let your engineering team make lame excuses as to why you can’t follow the Bill of Rights above. I would be glad to come in as a consultant and help make your update process follow the Bill of Rights, and yes, I can write the code if needed.

* Yes, I am guilty of not always having the best update process for our product line. However, we are getting much better. :)

6) Don’t make a user close applications during the update. If you can’t figure out how to update your software with my applications open, then see 5) above.

7) These rules apply to smart TVs and cable boxes – I missed the first 5 minutes of a big game last year while my Vizio TV updated itself.

Coming soon: the Bill of Rights for truth in bandwidth speed, and why the Internet was never intended to run video.

The original Computer Users Bill of Rights.

Related Internet Users Bill of Rights
