Wireless is Nice, but Wired Networks are Here to Stay

By Art Reisman, CTO, www.netequalizer.com


The trend to go all wireless in high-density housing seemed a slam dunk just a few years ago. The driving forces behind the exclusive deployment of wireless over wired access were twofold.

  • Wireless cost savings. It is much less expensive to blanket a building with a mesh network than to pay a contractor to run RJ45 cable throughout the building.
  • People expect wireless. Nobody plugs a computer into the wall anymore – or do they?

Something happened on the way to wireless Shangri-La. The physical limitations of wireless, combined with the appetite for ever increasing video, have caused some high density housing operators to rethink their positions.

In a recent discussion with several IT administrators representing large residential housing units, the topic turned to whether or not the wave of the future would continue to include wired Internet connections. I was surprised to learn that the consensus was that wired connections were not going away anytime soon.

To quote one attendee…

“Our parent company tried cutting costs by going all wireless in one of our new builds. The wireless access in buildings just can’t come close to achieving the speeds we can get in the wired buildings. When push comes to shove, our tenants still need to plug into the RJ45 connector in the wall socket. We have plenty of bandwidth at the core, but the wireless just can’t compete with the expectations we have attained with our wired connections.”

I found this statement on a ResNet mailing list from Brown University.


     “I just wanted to weigh in on this idea. I know that a lot of folks seem to be of the impression that ‘wireless is all we need’, but I regularly have to connect physically to get reasonable latency and throughput. From a bandwidth perspective, switching to wireless-only is basically the same as replacing switches with half-duplex hubs.
     Sure, wireless is convenient, and it’s great for casual email/browsing/remote access users (including, unfortunately, the managers who tend to make these decisions). Those of us who need to move chunks of data around or who rely on low-latency responsiveness find ourselves marginalized in wireless-only settings. For instance: RDP, SSH, and X11 over even moderately busy wireless connections are often barely usable, and waiting an hour for a 600MB Debian ISO seems very… 1997.”

Despite the tremendous economic pressure to build ever faster wireless networks, the physics of transmitting signals through the air will ultimately limit the speed of wireless connections far below what can be attained by wired connections. I always knew this, but was not sure how long it would take reality to catch up with hype.

Why is wireless inferior to wired connections when it comes to throughput?

In the real world of wireless, the factors that limit speed include:

  1. The maximum amount of data that can be transmitted on a wireless channel is less than on a wired one. A rule of thumb for transmitting digital data over the airwaves is that you can only send bits of data at 1/2 the frequency. For example, 800 megahertz (a common wireless carrier frequency) has 800 million cycles per second, and 1/2 of that is 400 million cycles per second. This translates to a theoretical maximum data rate of 400 megabits. Realistically though, with imperfect signals (noise) and other environmental factors, 1/10 of the original frequency is more likely the upper limit. This gives us a maximum carrying capacity per channel of 80 megabits on our 800 megahertz channel. For contrast, the upper limit of a single fiber cable is around 10 gigabits; higher speeds are attained by laying cables in parallel, bonding multiple wires together in one cable, and, on major backbones, transmitting multiple frequencies of light down the same fiber, achieving speeds of 100 gigabits on a single fiber! In fairness, wireless signals can also use multiple frequencies for multiple carrier signals, but the difference is you cannot have them in close proximity to each other.
  2. The number of users sharing the channel is another limiting factor. Unlike a single wired connection, wireless users in densely populated areas must share a frequency; you cannot pick out a user in the crowd and dedicate the channel to a single person. This means, unlike the dedicated wire going straight from your Internet provider to your home or office, you must wait your turn to talk on the frequency when there are other users in your vicinity. So if we take our 80 megabits of effective channel bandwidth on our 800 megahertz frequency and add in 20 users, we are now down to 4 megabits per user.
  3. The efficiency of the channel. When multiple people are sharing a channel, the efficiency of how they use it drops. Think of traffic at a 4-way stop: there is quite a bit of wasted time while drivers try to figure out whose turn it is to go, not to mention they take a while to clear the intersection. The same goes for wireless sharing techniques; there is always overhead in context switching between users. Thus we can take our 20-user scenario down to an effective data rate of 2 megabits.
  4. Noise. There is noise and then there is NOISE. Although we accounted for average noise in our original assumptions, in reality there will always be segments of the network that experience higher noise levels than average. When NOISE spikes, there is further degradation of the network, and sometimes a user cannot communicate at all with an AP. NOISE is a maddening and unquantifiable variable. Our assumptions above were based on the degradation from “average noise levels”; it is not unheard of for an AP to drop its effective transmit rate by 4 or 5 times to account for noise, and thus the effective data rate for all users on that segment from our original example drops down to 500 kbps, just barely enough bandwidth to watch a bad video.
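The four degradation steps above compound multiplicatively. Here is a minimal back-of-envelope sketch of that arithmetic; the factors (1/10 of the carrier frequency, 20 users, 50% sharing efficiency, a 4x noise penalty) are the illustrative numbers from this article, not measured values.

```python
def effective_rate_mbps(carrier_mhz, users, sharing_efficiency=0.5,
                        noise_divisor=1.0):
    """Rough per-user throughput estimate on a shared wireless channel."""
    # Step 1: realistic channel capacity ~ 1/10 of the carrier frequency.
    channel_mbps = carrier_mhz / 10.0
    # Step 2: the channel is shared equally among active users.
    per_user = channel_mbps / users
    # Step 3: contention overhead (the "4-way stop" effect).
    per_user *= sharing_efficiency
    # Step 4: optional extra noise penalty on bad segments.
    return per_user / noise_divisor

print(effective_rate_mbps(800, 20))                   # 2.0 Mbps
print(effective_rate_mbps(800, 20, noise_divisor=4))  # 0.5 Mbps
```

The second call reproduces the worst-case 500 kbps figure from item 4.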

Long live wired connections!

Deja Vu, IVR, and the Online Shopper’s Bill of Rights

By Art Reisman

Here is my Bill of Rights for how the online shopping experience should work in a perfect world.

1) Ship to multiple addresses. This means specifically the ability to ship any item in an order to any address.

2) On the confirmation page, always let the user edit their order right there: delete items, change quantities, ship-to addresses, shipping options, etc. All buttons should be available for each item.

3) Never force the user to hit the back button for any mistake, assume they need to edit everything from every page, as if in a fully connected matrix. Let them navigate to anywhere from anywhere.

4) Don’t show items out of stock or on back order UNLESS the customer requests to see that garbage.

5) You had better know what is out of stock. :)

6) The submit button should disappear immediately when it is hit; it is either hit or not hit, and there should be no way for a customer to order something twice by accident or to be left wondering whether they have ordered twice. The system should also display the appropriate status messages while an order is being processed.

7) If there is a problem on any page in the ordering process, a detailed message on what the problem was should appear at the top of the page, along with highlighting of the problem field. Leaving a customer to wonder what they did wrong is just bad.

8) Show whether gift wrap is available when selecting an item, not at the end of the ordering process.

9) If the item or order is not under your inventory control then don’t sell it or pretend to sell it without a disclaimer.

10) Remember all the fields when navigating between options. For example, a user should never have to fill out an address twice unless it is a new address.

Why is it so hard to solve these problems ?

Long before the days of the Internet, I was a system architect charged with designing an Integrated Voice Response product called Conversant (Conversant was one of the predecessors to Avaya IP Office). Although not nearly as widespread as the Internet of today, most large companies provided automated services over the phone throughout the 1990s. Perhaps you are familiar with a typical IVR – press 1 for sales, press 2 for support, etc. In an effort to reduce labor costs, companies also used the phone touch-tone interface for more complex operations such as tracking a package or placing an order for a stock. It turns out that most of the quality factors associated with designing an IVR application of yesterday are now reflected in many of the issues facing the online shopping experience of today.

Most small companies really don’t have the resources to use anything more than a templated application. Sometimes the pre-built application is flawed, but more often than not, the application needs integration with the merchant’s back-end and business processes. The pre-built applications come with programming stubs for error conditions which must be handled. For small businesses, even the simplest customizations to an online application will run a minimum of $10k in programmer costs, and hiring a reputable company that specializes in customer integration is more like $50k.

Related: Internet User’s Bill of Rights

NetEqualizer News: December 2012

December 2012


Enjoy another issue of NetEqualizer News! This month, we preview feature additions to NetEqualizer coming in 2013, offer a special deal on web application security testing for the Holidays, and remind NetEqualizer customers to upgrade to Software Update 6.0. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

This month’s picture is from Parents’ Night for my daughter’s volleyball team. In December, as I get ready for the Holidays, I often think about what is important to me – like family, friends, my health, and how I help to run this business. While pondering these thoughts, I came up with some quotes that have meaning to me, which I am sharing here. I hope you enjoy them, or that they at least get you thinking about what is important to you!

“Technology is not what has already been done.”
“Following too closely ruins the journey.”
“Innovation is not a democratic endeavor.”
“Time is not linear, it just appears that way most of the time.”

What are your favorite quotes? We love it when we hear back from you – so if you have a quote or a story you would like to share with us of how we have helped you, let us know. Email me directly at art@apconnections.net. I would love to hear from you!

NetEqualizer: Coming in 2013

We are always looking to improve our NetEqualizer product line such that our customers are getting maximum value from their purchase. Part of this process is brainstorming changes and additional features to adapt and help meet that need.

Here are a couple of ideas for changes to NetEqualizer that will arrive in 2013. Stay tuned to NetEqualizer News and our blog for updates on these features!

1) NetEqualizer in Mesh Networks and Cloud Computing

As the use of NAT distributed across mesh networks becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, our stream-based behavior shaping will need to evolve.

This is due to the fact that we base our decision of whether or not to shape on a pair of IP addresses talking to each other without considering port numbers. Sometimes, in cloud or mesh networks, services are trunked across a tunnel using the same IP address. As they cross the trunk, the streams are broken out appropriately based on port number.

So, for example, say you have a video server as part of a cloud computing environment. Without any NAT, on a wide-open network, we would be able to give that video server priority simply by knowing its IP address. However, in a meshed network, the IP connection might be the same as other streams, and we’d have no way to differentiate it. It turns out, though, that services within a tunnel may share IP addresses, but the differentiating factor will be the port number.

Thus, in 2013 we will no longer shape just on IP to IP, but will evolve to offer shaping on IP(Port) to IP(Port). The result will be quality of service improvements even in heavily NAT’d environments.
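The change from IP-to-IP shaping to IP(Port)-to-IP(Port) shaping can be pictured as a widening of the stream key. This is a hypothetical sketch of the idea, not NetEqualizer's actual code; the names and addresses are invented for illustration.

```python
def stream_key_old(src_ip, dst_ip, src_port, dst_port):
    # Current behavior: port numbers ignored, so all services trunked
    # over one tunnel collapse into a single stream.
    return (src_ip, dst_ip)

def stream_key_2013(src_ip, dst_ip, src_port, dst_port):
    # Planned behavior: IP(Port) to IP(Port).
    return ((src_ip, src_port), (dst_ip, dst_port))

# Two services tunneled across the same pair of IP addresses:
video = ("10.0.0.5", "203.0.113.9", 40000, 554)  # video stream
ssh   = ("10.0.0.5", "203.0.113.9", 40001, 22)   # interactive session

print(stream_key_old(*video) == stream_key_old(*ssh))    # True: indistinguishable
print(stream_key_2013(*video) == stream_key_2013(*ssh))  # False: now separable
```

With the old key, the video server and the interactive session are one stream and cannot be shaped separately; with the new key, the port numbers break them apart.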

2) 10 Gbps Line Speeds without Degradation

Some of our advantages over the years have been our price point, the techniques we use on standard hardware, and the line speeds we can maintain.

Right now, our NE3000 and above products all have true multi-core processors, and we want to take advantage of that to enhance our packet analysis. While our analysis is very quick and efficient today (sustained speeds of 1 Gbps up and down), in very high-speed networks, multi-core processing will amp up our throughput even more. In order to get to 10 Gbps on our Intel-based architecture, we must do some parallel analysis on IP packets in the Linux kernel.

The good news is that we’ve already developed this technology in our NetGladiator product (check out this blog article here).

Coming in 2013, we’ll port this technology to NetEqualizer. The result will be low-cost bandwidth shapers that can handle extremely high line speeds without degradation. This is important because in a world where bandwidth keeps getting cheaper, the only reason to invest in an optimizer is if it makes good business sense.

We have prided ourselves on smart, efficient, optimization techniques for years – and we will continue to do that for our customers!

Secure Your Web Applications for the Holidays!

We want YOU to be proactive about security. If your business has external-facing web applications, don’t wait for an attack to happen – protect yourself now! It only takes a few hours of our in-house security experts’ time to determine if your site might have issues, so, for the Holidays, we are offering a $500 upfront security assessment for customers with web applications that need testing!

If it is determined that our NetGladiator product can help shore up your issues, that $500 will be applied toward your first year of NetGladiator Software & Support (GSS). We also offer further consulting based on that assessment on an as-needed basis.

To learn more about NetGladiator, check out our video here.

Or, contact us at: 303-997-1300 x123

Don’t Forget to Upgrade to 6.0! (With a brief tutorial on User Quotas)

If you have not already upgraded your NetEqualizer to Software Update 6.0, now is the perfect time!

We have discussed the new upgrade in depth in previous newsletters and blog posts, so this month we thought we’d show you how to take advantage of one of the new features – User Quotas.

User quotas are great if you need to track bandwidth usage over time per IP address or subnet. You can also send alerts to notify you if a quota has been surpassed.

To begin, navigate to the Manage User Quotas menu on the left. Then start the Quota System using the third option from the top, Start/Stop Quota System.

Now that the Quota System is turned on, we’ll add a new quota. Click on Configure User Quotas and take a look at the first window:


Here are the settings associated with setting up a new quota rule:

Host IP: Enter in the Host IP or Subnet that you want to give a quota rule to.

Quota Amount: Enter in the number of total bytes for this quota to allow.

Duration: Enter in the number of minutes you want the quota to be tracked for before it is reset (1 day, 1 week, etc.).

Hard Limit Restriction: Enter in the number of bytes/sec to allow the user once the quota is surpassed.  

Contact: Enter in a contact email for the person to notify when the quota is passed.

After you populate the form, click Add Rule. Congratulations! You’ve just set up your first quota rule!

From here, you can view reports on your quota users and more.
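Conceptually, a quota rule combines the form fields above: a host, a byte allowance, a tracking window, a post-quota rate limit, and a contact. The following toy model shows how those fields fit together; the class and field names are illustrative, not NetEqualizer's implementation.

```python
import time

class QuotaRule:
    """Toy model of a per-host quota rule (illustrative only)."""

    def __init__(self, host_ip, quota_bytes, duration_min,
                 hard_limit_bps, contact):
        self.host_ip = host_ip
        self.quota_bytes = quota_bytes        # Quota Amount
        self.duration_sec = duration_min * 60  # Duration before reset
        self.hard_limit_bps = hard_limit_bps  # Hard Limit Restriction
        self.contact = contact                # who to notify
        self.used = 0
        self.window_start = time.time()

    def record(self, nbytes, now=None):
        now = now if now is not None else time.time()
        if now - self.window_start >= self.duration_sec:
            # Duration elapsed: reset the tracked usage.
            self.used = 0
            self.window_start = now
        self.used += nbytes

    def over_quota(self):
        return self.used > self.quota_bytes

# 1 GB per day, throttled to 64 kB/s once exceeded.
rule = QuotaRule("192.168.1.50", 10**9, 60 * 24, 64_000, "admin@example.net")
rule.record(2 * 10**9)
print(rule.over_quota())  # True -> apply hard_limit_bps and email contact
```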

Remember, the new GUI and all the new features of Software Update 6.0 are available for free to customers with valid NetEqualizer Software & Support (NSS).

If you don’t have the new GUI or are not current with NSS, contact us today!



toll-free U.S. (888-287-2492),

worldwide (303) 997-1300 x. 103

Best Of The Blog

Internet User’s Bill of Rights

By Art Reisman – CTO – APconnections

This is the second article in our series. Our first was a Bill of Rights dictating the etiquette of software updates. We continue with a proposed Bill of Rights for consumers with respect to their Internet service.

1) Providers must divulge the contention ratio of their service. 

At the core of all Internet service is a balancing act between the number of people that are sharing a resource and how much of that resource is available.

For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks – perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town.

The speed you, the customer, can attain is limited to how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers and take advantage of the natural usage behavior, which assumes that not all users are active at one time.

The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe, while minimizing service complaints due to a slow network. In some cases, I have seen as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds when compared to dial up…
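The contention arithmetic in the excerpt above can be sketched in a few lines. The numbers (a 10 megabit local pipe, 1 megabit plans, varying active fractions) are the article's examples; the function itself is just illustrative back-of-envelope math.

```python
def per_user_share_mbps(pipe_mbps, subscribers, active_fraction=1.0):
    """Average share of the pipe if active_fraction of subscribers
    transmit at the same moment."""
    active = max(1, round(subscribers * active_fraction))
    return pipe_mbps / active

# 10 subscribers all active at once: each gets the promised 1 Mbps.
print(per_user_share_mbps(10, 10))          # 1.0
# Oversubscribed 100:1, but only 10% active at any instant:
print(per_user_share_mbps(10, 1000, 0.1))   # 0.1 Mbps each
```

The provider's bet is that the active fraction stays low; the contention ratio tells you how badly things degrade when it does not.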

Photo Of The Month


Kansas Clouds

The wide-open ranch lands in middle America provide a nice retreat from the bustle of city life. When he can find time, one of our staff members visits his property in Kansas with his family. The Internet connection out there is shaky, but it is a welcome change from routine.

You Must Think Outside the Box to Bring QoS to the Cloud and Wireless Mesh Networks

By Art Reisman
CTO – http://www.netequalizer.com

About 10 years ago, we had this idea for QoS across an Internet link. It was simple and elegant, and worked like a charm. Ten years later, as services spread out over the Internet cloud, our original techniques are more important than ever. You cannot provide QoS using TOS (diffserv) techniques over any public or semi-public Internet link, but using our techniques we have proven the impossible is possible.

Why TOS bits don’t work over the Internet.

The main reason is that setting TOS bits is only effective when you control all sides of a conversation on a link, and this is not possible on most Internet links (think cloud computing and wireless mesh networks). For standard TOS services to work, you must control all the equipment between the two end points. All it takes is one router in the path of a VoIP conversation to ignore a TOS bit, and its purpose is defeated. Thus TOS bits for priority are really only practical inside a corporate LAN/WAN topology.

Look at the root cause of poor quality services and you will find alternative solutions.

Most people don’t realize the problem with congested VoIP, on any link, is due to the fact that their VoIP packets are getting crowded out by larger downloads and things like recreational video (this is also true for any interactive cloud access congestion). Often, the offending downloads are initiated by their own employees or users. A good behavior-based shaper will be able to favor VoIP streams over less essential data streams without any reliance on the sending party adhering to a TOS scheme.

How do we accomplish priority for VoIP?

We do this by monitoring all the streams on a link with one piece of equipment inserted anywhere in the congested link. In our current terminology, a stream consists of an IP (local) talking to another IP (remote Internet). When we see a large stream dominating the link, we step back and ask: is the link congested? Is that download crowding out other time-sensitive transactions such as VoIP? If the answer is yes to both questions, then we proactively take away some bandwidth from the offending stream. I know this sounds ridiculously simple, and does not seem plausible, but it works. It works very well, and it works with just one device in the link, irrespective of any other complex network engineering. It works with minimal setup. It works over MPLS links. I could go on and on; perhaps the only reason you have not heard of it is that it goes against the grain of what most vendors are selling – large orders for expensive high-end routers using TOS bits.
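The two-question decision loop described above (is the link congested? is one stream dominating?) can be sketched as follows. The thresholds and structure here are hypothetical illustrations of behavior-based shaping, not the actual NetEqualizer algorithm.

```python
def streams_to_penalize(streams, link_capacity_bps):
    """streams: dict mapping (local_ip, remote_ip) -> current bps.
    Returns the stream keys that should temporarily lose bandwidth."""
    total = sum(streams.values())
    # Question 1: is the link congested? (assumed 85% trigger point)
    if total < 0.85 * link_capacity_bps:
        return []  # not congested: leave every stream alone
    # Question 2: which streams dominate? Penalize any stream using
    # more than twice its fair share (assumed threshold).
    fair = link_capacity_bps / max(1, len(streams))
    return [key for key, bps in streams.items() if bps > 2 * fair]

streams = {
    ("10.0.0.2", "198.51.100.7"): 60_000_000,  # large download
    ("10.0.0.3", "198.51.100.8"): 90_000,      # VoIP call
    ("10.0.0.4", "198.51.100.9"): 5_000_000,   # web browsing
}
print(streams_to_penalize(streams, 70_000_000))
# [('10.0.0.2', '198.51.100.7')] -> the download is trimmed; VoIP is untouched
```

Note that the VoIP stream never needs to be identified as VoIP: it is protected simply because it is small, which is the heart of the behavior-based approach.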

Related article: QoS over the Internet – is it possible?

Fast forward to our next release: how to provide QoS deep inside a cloud or mesh network where sending or receiving IP addresses are obfuscated.

Coming this winter we plan to improve upon our QoS techniques so we can drill down inside of Mesh and Cloud networks a bit better.

As the use of NAT, distributed across mesh networks, becomes more widespread, and the bundling of services across cloud computing becomes more prevalent, one side effect has been that our stream-based behavior shaping (QoS) is not as effective as it is when all IP addresses are visible (not masked behind a NAT/PAT device).

This is due to the fact that currently we base our decision on a pair of IPs talking to each other, but we do not consider the IP port numbers. Sometimes, especially in a cloud or mesh network, services are trunked across a tunnel using the same IP. As these services get tunneled across a trunk, the data streams are bundled together using one common pair of IPs, and then the streams are broken out based on IP ports so they can be routed to their final destination. For example, in some cloud computing environments there is no way to differentiate the video stream within the tunnel coming from the cloud from a smaller data access session; they can sometimes both be talking across the same set of IPs to the cloud. In a normal open network we could slow the video (or in some cases give priority to it) by knowing the IP of the video server and the IP of the receiving user, but when the video server is buried within the tunnel, sharing the IPs of other services, our current equalizing (QoS) techniques become less effective.

Services within a tunnel, cloud, or mesh may be bundled using the same IPs, but they are often sorted out on different ports at the ends of the tunnel. With our new release coming this winter, we will start to look at streams as IP and port number, thus allowing much greater resolution for QoS inside the cloud and inside your mesh network. Stay tuned!

Will Bandwidth Shaping Ever Be Obsolete?

By Art Reisman

CTO – www.netequalizer.com

I find public forums where universities openly share information about their bandwidth shaping policies an excellent source of information. Unlike commercial providers, these user groups have found technical collaboration is in their best interest, and they often openly discuss current trends in bandwidth control.

A recent university IT user group discussion thread kicked off with the following comment:

“We are in the process of trying to decide whether or not to upgrade or all together remove our packet shaper from our residence hall network.  My network engineers are confident we can accomplish rate limiting/shaping through use of our core equipment, but I am not convinced removing the appliance will turn out well.”

Notice that he is not talking about removing rate limits completely, just backing off from an expensive extra piece of packet shaping equipment and using the simpler rate limits available on his router. The point of my reference to this discussion is not so much to weigh the different approaches to rate limiting, but to emphasize that, at this point in time, running wide-open without some sort of restriction is not even being considered.

Despite an 80 to 90 percent reduction in bulk bandwidth prices in the past few years, bandwidth is not quite yet cheap enough for an ISP to run wide-open. Will it ever be possible for an ISP to run wide-open without deliberately restricting their users?

The answer is not likely.

First of all, there seems to be no limit to the ways consumer devices and content providers will conspire to gobble bandwidth. The common assumption is that no matter what an ISP does to deliver higher speeds, consumer appetite will outstrip it.

Yes, an ISP can temporarily leap ahead of demand.

We do have a precedent from several years ago. In 2006, the University of Brighton in the UK was able to unplug our bandwidth shaper without issue. When I followed up with their IT director, he mentioned that their students’ total consumption was capped by the far-end services of the Internet, and thus they did not hit their heads on the ceiling of the local pipes. Running without restriction, 10,000 students were not able to eat up their 1 gigabit pipe! I must caveat this experiment by saying that in the UK their university system had invested heavily in subsidized bandwidth and was far ahead of the average ISP curve for the times. Content services on the Internet for video were just not that widely used by students at the time. Such an experiment today would bring a pipe under a similar contention ratio to its knees in a few seconds. I suspect today one would need on the order of 15 to 25 gigabits to run wide open without contention-related problems.

It also seems that we are coming to the end of the line for bandwidth in the wireless world much more quickly than wired bandwidth.

It is unlikely consumers are going to carry cables around with their iPads and iPhones to plug into wall jacks any time soon. With the diminishing returns on investment in higher speeds on the wireless networks of the world, bandwidth control is the only way to keep some kind of order.

Lastly I do not expect bulk bandwidth prices to continue to fall at their present rate.

The last few years of falling prices are the result of a perfect storm of factors not likely to be repeated.

For these reasons, it is not likely that bandwidth control will be obsolete for at least another decade. I am sure we will be revisiting this issue in the next few years for an update.
