Are Hotels Jamming 3G Access?


By Art Reisman

About 10 years ago, hotel operators were able to squeeze a nice chunk of change out of guests by charging high toll rates for phone service. However, most of that revenue went by the wayside in the early 2000s when every man, woman, and child on earth started carrying a cell phone. While this loss of revenue was in some cases offset by fees for Internet usage, thanks to 3G access cards most business travelers don’t even bother with hotel Internet service anymore — especially if they have to pay for it.

Yet, these access cards, and even your cell phone, aren’t always reliable in certain hotel settings, such as in interior conference rooms. But, are these simply examples of the random “dead spots” we encounter within the wireless world, or is there more to it? From off-the-record conversations with IT managers, we have learned that many of these rooms are designed with materials that deliberately block 3G signals — or at best make no attempt to allow the signals in. This is especially troubling in hotels that are still hanging on to the pay-for-Internet revenue stream, which will exist as long as customers (or their companies) will support it.

However, reliable complimentary Internet access is quickly becoming a common selling point for many hotels and is already a difference maker for some chains. We expect it will soon become a selling point even for the large conference centers that currently implement pay-for-access plans.

While meeting the needs and expectations of every hotel guest can be challenging, the ability to provide reliable and affordable Internet service should be a relatively painless way for hotels and conference centers to keep customers happy. Reliable Internet service can be a differentiating factor and an incentive, or deterrent, for future business.

The challenge is finding a balance between the customer-satisfaction benefits of providing such a service and your bottom line. When it comes to Internet service, many hotels and conference centers are achieving this balance with the help of the NetEqualizer system. In the end, the NetEqualizer is allowing hotels and conference centers to provide better and more affordable service while keeping their own costs down. While the number of 3G and 4G users will certainly continue to grow, the option of good old wireless broadband is hard to overlook. And if it’s available to guests at a minimal fee or no extra charge, hotels and conference centers will no longer have to worry about keeping competing means of Internet access out.

Note: I could not find any specific references to hotels’ shrinking phone toll rate revenue, but as anecdotal evidence, most of the articles complaining about high phone toll charges were at least 7 years old, meaning not much new has been written on the subject in the last few years.

Update 2015

It seems that my suspicions have been officially confirmed. You can read the entire article here: Marriott fined for jamming Wi-Fi.

The Facts and Myths of Network Latency


There are many good references that explain how some applications such as VoIP are sensitive to network latency, but there is also some confusion as to what latency actually is as well as perhaps some misinformation about the causes. In the article below, we’ll separate the facts from the myths and also provide some practical analogies to help paint a clear picture of latency and what may be behind it.

Fact or Myth?

Network latency is caused by too many switches and routers in your network.

This is mostly a myth.

Yes, an underpowered router can introduce latency, but most local network switches add minimal delay, a few milliseconds at most. Anything under about 10 milliseconds is, for practical purposes, not humanly detectable. A router or switch (even a low-end one) typically adds about 1 millisecond of latency, so it would take ten or more hops just to reach the 10-millisecond threshold, and even then the delay would barely be noticeable.
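As rough arithmetic, the per-hop reasoning above can be sketched in a few lines. The ~1 ms per device figure is an illustrative assumption, not a measurement of any particular hardware:

```python
import math

PER_HOP_MS = 1.0      # assumed latency added by one switch or router
NOTICEABLE_MS = 10.0  # rough threshold of human perception

def hops_needed(per_hop_ms=PER_HOP_MS, threshold_ms=NOTICEABLE_MS):
    """Number of hops before cumulative latency reaches the threshold."""
    return math.ceil(threshold_ms / per_hop_ms)

print(hops_needed())  # 10 hops at ~1 ms each to reach ~10 ms
```

Even doubling the per-hop cost only halves the hop count, which is why switch count alone rarely explains noticeable latency.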

The faster your link (Internet) speed, the less latency you have.

This is a myth.

The speed of your network is measured by how fast IP packets arrive. Latency is a measure of how long they take to get there. So, it’s basically speed vs. time. An example of latency is when NASA sends commands to a Mars orbiter. The information travels at the speed of light, yet commands sent from Earth take several minutes or longer to reach the orbiter. This is an example of data moving at high speed with extreme latency.
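The Mars example is just distance divided by the speed of light. A quick sketch (the Earth–Mars distance used here is an approximate close-approach figure; the real distance varies enormously with orbital positions):

```python
# One-way signal latency = distance / speed of light.
SPEED_OF_LIGHT_KM_S = 299_792  # km per second, in vacuum

def one_way_latency_seconds(distance_km):
    """Time for a radio signal to cover distance_km at light speed."""
    return distance_km / SPEED_OF_LIGHT_KM_S

# Earth-Mars at a fairly close approach (~78 million km):
mars_s = one_way_latency_seconds(78_000_000)
print(f"Mars: {mars_s / 60:.1f} minutes one way")  # roughly 4.3 minutes
```

No amount of extra bandwidth changes this number, which is the whole point: speed and latency are independent quantities.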

VoIP is very sensitive to network latency.

This is a fact.

Can you imagine talking in real time to somebody on the moon? Your voice would take over a second to get there, and a reply would take just as long to come back. For VoIP networks, it is generally accepted that anything over about 150 milliseconds of latency can be a problem. When latency climbs above 150 milliseconds, issues emerge, especially for fast talkers and rapid conversations.

Xbox games are sensitive to latency.

This is another fact.

For example, in many collaborative combat games, participants battle players from other locations. Low latency on your network is everything when it comes to beating an opponent to the draw. If you and your opponent fire your weapons at exactly the same time, but your shot takes 200 milliseconds to register at the host server and your opponent’s shot gets there in 100 milliseconds, you die.

Does a bandwidth shaping device such as NetEqualizer increase latency on a network?

This is true, but only for the “bad” traffic that’s slowing the rest of your network down anyway.

Ever hear of the firefighting technique where you light a back fire to slow the fire down? This is similar to the NetEqualizer approach. NetEqualizer deliberately adds latency to certain bandwidth intensive applications, such as large downloads and p2p traffic, so that chat, email, VoIP, and gaming get the bandwidth they need. The “back fire” (latency) is used to choke off the unwanted, or non-time sensitive, applications. (For more information on how the NetEqualizer works, click here.)
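As a toy illustration of the idea (this is not the actual NetEqualizer implementation, just a sketch of penalizing heavy flows only when the link is congested; all thresholds and names are made-up numbers):

```python
# Toy "equalizing" sketch: when the link is congested, assign a delay
# penalty to the heaviest flows so small, time-sensitive flows
# (VoIP, chat, gaming) are left untouched.

LINK_CAPACITY_KBPS = 10_000
CONGESTION_RATIO = 0.85      # start penalizing above 85% utilization
HOG_THRESHOLD_KBPS = 1_000   # flows above this rate are candidates

def penalties(flows_kbps):
    """Return {flow_id: delay_ms} for flows to throttle.

    flows_kbps maps a flow id to its current rate in kbit/s.
    """
    total = sum(flows_kbps.values())
    if total < LINK_CAPACITY_KBPS * CONGESTION_RATIO:
        return {}  # link not congested: touch nothing
    return {
        fid: 20 * (rate / HOG_THRESHOLD_KBPS)  # bigger hogs wait longer
        for fid, rate in flows_kbps.items()
        if rate > HOG_THRESHOLD_KBPS
    }

flows = {"voip": 80, "download": 6_000, "p2p": 3_000, "chat": 10}
print(penalties(flows))  # only "download" and "p2p" get delay penalties
```

The key design point is that nothing is penalized at all when the link has headroom; the added latency only appears when congestion would otherwise hurt everyone.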

Video is sensitive to latency.

This is a myth.

Video is sensitive to the speed of the connection but not to the latency. Let’s go back to our man on the moon example, where data takes more than a second to travel each way between the Earth and the moon. Latency creates a problem with two-way voice communication because, in normal conversation, even a couple of seconds’ delay in hearing what was said makes it difficult to keep the exchange going. What generally happens with voice and long latency is that both parties start talking at the same time, and moments later the two end up talking over each other. You see this happening a lot on television with interviews done via satellite. However, most video is one-way. For example, when watching a Netflix movie, you’re not communicating video back to Netflix. In fact, almost all video transmissions are delayed, and nobody notices since it is usually a one-way transmission.

NetEqualizer News: November 2010


NetEqualizer

November 2010

NetEqualizer News

Upcoming NetEqualizer Feature To Supercharge YouTube

Greetings! 

Enjoy another issue of the NetEqualizer Newsletter. This month, we introduce an upcoming NetEqualizer feature that will change the way YouTube is viewed on your network. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

In this issue:

  • Supercharge YouTube With Our Upcoming NetEqualizer Feature

  • Thanks To You: Celebrate The Holidays With A New NetEqualizer

  • Best Of The Blog

  • Congratulations, David!

Supercharge YouTube With Our Upcoming NetEqualizer Feature
General caching is usually not something we promote because of the problems it can cause with secure pages and rapidly changing content. But, we also understand it’s inevitable that most ISPs will need to selectively cache content to stay competitive. This is especially true for certain high-traffic and bandwidth-intensive websites.

Considering this, we’re now working in our test labs to integrate a custom YouTube caching feature for the NetEqualizer. This feature will store the top 300-500 trending YouTube videos, which make up a significant portion of YouTube traffic, for faster and more efficient access.
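To illustrate the general idea of a popularity-based cache (a hypothetical sketch, not the feature under development; the interface and eviction rule here are invented purely for illustration):

```python
# Minimal sketch of a popularity-based video cache: keep only the N
# most-requested items locally, evicting anything that falls out of
# the top N.
from collections import Counter

class TopNCache:
    def __init__(self, capacity=300):
        self.capacity = capacity
        self.hits = Counter()  # request counts per video id
        self.store = {}        # video id -> cached content

    def request(self, video_id, fetch):
        """Return the video, caching it if it is popular enough."""
        self.hits[video_id] += 1
        if video_id in self.store:
            return self.store[video_id]  # cache hit: serve locally
        content = fetch(video_id)        # cache miss: fetch from origin
        top = {vid for vid, _ in self.hits.most_common(self.capacity)}
        if video_id in top:
            self.store[video_id] = content
            # evict anything that fell out of the top N
            for vid in list(self.store):
                if vid not in top:
                    del self.store[vid]
        return content
```

Because a few hundred trending videos account for a disproportionate share of requests, even a small cache like this can serve a large fraction of traffic locally.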

This approach is already being taken by several major ISPs, but should prove beneficial for networks of all sizes.

For more information about how caching video can improve your network performance, click here.

If video caching (YouTube or otherwise) with the NetEqualizer is something that would be of interest to you, or if you have any questions, please let us know at admin@apconnections.net or 1-800-918-2763.

Thanks To You: Celebrate The Holidays With A New NetEqualizer
As we celebrate Thanksgiving and move into the holiday season, we at APconnections want to express our thanks to all of our customers. To start, we’re pleased to introduce a new and expanded version of our NetEqualizer lifetime trade-in policy. Customers with NetEqualizers purchased four or more years ago now qualify for a credit of 50 percent of the original unit’s purchase price (not including NSS, NHW, etc.) toward a new NetEqualizer.
This offer is in addition to our original lifetime trade-in policy, which guarantees that in the event of an irreparable failure of a NetEqualizer unit, customers have the option to purchase a replacement unit at a 50-percent discount off the list price.

While this policy is unique in its own right, we’re also challenging tech-industry tradition by offering it on units purchased from authorized NetEqualizer resellers. To learn more, or to get your trade-in started, contact us at sales@apconnections.net or 1-800-918-2763.

For our official trade-in policy, visit our website.

Best Of The Blog
Product Ideas Worth Bringing To Market  

Editor’s Note: This month’s Best Of The Blog is a little out of the box, but it’s fun to think of product ideas. Feel free to add to our list (or to let us know if the products already exist) in the comments section on the blog and we’ll put your ideas in. Obviously, save the best ideas for yourself!

The following post will serve as a running list of various ideas as I think of them. I promise at least two or three a week. Since I run a technology company, I really don’t have time to see any of these ideas through to fruition.

The reason I’m sharing them is simply that I hate to let an idea go to waste. Notice that I did not say good idea. An idea cannot be judged until you make an attempt to develop it further, which I have not done in most cases.

Note: I cannot ensure exclusive rights or ownership for the development of any of these ideas.

1) A Real Unbiased Cell Phone Coverage Map

We all know those spots on the interstate and parts of towns where our cell phone coverage is worthless. If you could publish an easy-to-use, widely accepted and maintained guide to these areas, it would become a very popular site.

Research Findings: From my brief search on the subject, a consumer trade rag called CNET has done some work in this area, but I could only find their demos and press releases. I kept getting a map of the Seattle area with no obvious way to broaden the search.

2) Commodity Land Trading Site

Congratulations, David!
Congratulations to David Wallace, our long-time marketing and public relations consultant. David is only a few short months away from receiving his doctorate in Communication from the University of Colorado at Boulder.

Over the past four years, David has been a driving force behind the growth of APconnections. He’s a pioneer in guerrilla Internet marketing and research and has advanced the field in many ways that continue to astound us. We wish him the best as he transitions into a faculty position in his field. Good luck, David!

Contact Us
email: admin@apconnections.net 

phone: 303-997-1300

web: http://www.netequalizer.com

 

APconnections Partners

AiBridges

Candela Technologies

DoubleRadius

Dynamic Broadband

ExNet

Extensive Networks

FISPA25

Grupo Imaginación

Cibernética

PacificNet Telefonía Pública y Privada S.A.

Tranzeo Wireless Technologies

Vox Solutions

ZCorum



Analyzing the cost of Layer 7 Packet Shaping


November, 2010

By Eli Riles

For most IT administrators, Layer 7 packet shaping involves two actions.

Action 1: Inspecting and analyzing data to determine what types of traffic are on your network.

Action 2: Taking action by adjusting application flows on your network.

Without Layer 7 visibility and actions, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

Layer 7 monitoring and shaping is intuitively appealing, but it is a good idea to take a step back and examine the full life-cycle costs of your methodology.

Ironically, we assert that total costs rise with the complexity of the monitoring tool.

1) Obviously, the more detailed the reporting tool (Layer 7), the more expensive its initial price tag.

2) The kicker comes with part two. The more expensive the tool, the more detail it provides, and the more time an administrator is likely to spend adjusting and mucking around, looking for optimal performance.

But, is it fair to assume higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief  that when  the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. However, in reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies on a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators all but disappeared with the arrival of cheaper computing in the late 1980s. The function of a computer operator did not vanish completely; it just got automated and rolled into the computer itself. The point is, any time the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise with many of our customers is that they are stepping down from expensive complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just looking at the usage (a simple report).
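A sketch of that kind of simple report (the usage numbers are synthetic, and the two-standard-deviation cutoff is just one reasonable choice for "well above the mean"):

```python
# Simple per-user usage report: flag users far above the mean.
import statistics

usage_mb = {
    "user01": 210, "user02": 190, "user03": 225, "user04": 205,
    "user05": 198, "user06": 215, "user07": 6_400,  # the outlier
}

mean = statistics.mean(usage_mb.values())
stdev = statistics.stdev(usage_mb.values())

# Flag anyone more than two standard deviations above the mean.
heavy = {u: mb for u, mb in usage_mb.items() if mb > mean + 2 * stdev}
print(heavy)  # the one or two users well above the bell curve
```

No per-application detail is needed here; the outlier is obvious from the totals alone, which is exactly the point of the simpler approach.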

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers behind these breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And don’t forget to take our poll.

List of monitoring tools compiled by Stanford

Top five free monitoring tools

Planetmy
Linux Tips
How to set up a monitor for free