An Easy Way to Get Rid of Wireless Dead Spots and Get Whole Home Music


By Steve Wagor, Co-Founder APconnections

Wireless dead spots are a common problem in homes and offices that extend beyond the range of a single wireless access point. For example, in my home office, my little Linksys access point works great on my main floor, but down in my basement the signal just does not reach very well. The problem with a simple access point is that if you need to expand your coverage you must mesh in a new one, and off the shelf these devices do not know how to talk to each other.

For those of you who have tried to expand your home network into a mesh with multiple access points, there are how-tos out there for rigging them up.

Many of these use homemade wireless access points, or the commercial style made for long range. With these solutions you will most likely need a rubber-ducky antenna and either some old computers or at least small single-board computers with attached wireless cards. You will also need to know a bit of networking, and most of these setups require what some people would consider complex commands to link everything up into a mesh.

Well, if you don't need miles and miles of coverage, it's a lot easier than that using off-the-shelf Apple products. These are small devices with no external antennas.

First you need to install an Apple AirPort Extreme access point:
http://www.apple.com/airport-extreme
- at the time of this writing it is $199, and it has been at that price for at least a couple of years now.

Now for every dead spot you just need an Apple AirPort Express:
http://www.apple.com/airport-express/
- at the time of this writing it is $99, and it has also been at that price for at least a couple of years now.

So once the AirPort Extreme is installed, every dead spot you have can be solved for $99. Apple also has very good install instructions for the product line, so you don't need to be a network professional to configure it. Most of it is simple point-and-click, all done via a GUI, without ever having to go to a command line.

For fairly effortless whole-home music, you can use the analog/optical audio jack on the back of the AirPort Express and plug it into your stereo or externally powered speakers. Now connect your iPhone or Mac to the same wireless network provided by your AirPort Extreme, and you can use AirPlay to toggle on any or all of the stereos your network has access to. So if you let your guests onto your wireless network and they have an iPhone with AirPlay, they can let you listen to anything they are playing by sending it to your stereo, for example while you are working out together in your home gym.

Stuck on a Desert Island, Do You Take Your Caching Server or Your NetEqualizer?


Caching is a great idea and works well, but I’ll take my NetEqualizer with me if forced to choose between the two on my remote island with a satellite link.

Yes, there are a few circumstances where a caching server can have a nice impact. Our most successful deployments are in educational environments where the same video is watched repeatedly as an assignment; but for most wide-open installations, expectations of performance far outweigh reality. Let's have a look at what works, and also drill down on expectations that are based on marginal assumptions.

From my personal archive of experience, here are some of the expectations attributed to caching that are perhaps a bit too optimistic.

“Most of my users go to their Yahoo or Facebook home page every day when they log in, and that is the bulk of what they do.”

- I doubt this customer's user base is that conformist :), and they'll find out once they install their caching solution. But even if it were true, only some of the content on Facebook and Yahoo is static. A good portion of these pages is dynamic by default and ever-changing, and it is marked as dynamic in the URLs, which means the bulk of the page must be reloaded each time. For caching to have an impact, the users in this scenario would have to stick to their home pages and not look at friends' photos or other pages.

“We expect to see a 30 percent hit rate when we deploy our cache.”

You won't see a 30 percent hit rate unless somebody designs a robot army specifically to test your cache, hitting the same pages over and over again. Perhaps on iOS update day you might see the bulk of your hits going to the same large file and get a significant performance boost for a day. But overall you will be doing well if you get a 3 or 4 percent hit rate.

“I expect the cache hits to take pressure off my Internet link.”

Assuming you want your average user to experience a fast-loading Internet, this is where you really want your NetEqualizer (or similar intelligent bandwidth controller) over your caching engine. A smart bandwidth controller can rearrange traffic on the fly, ensuring interactive hits get the best response. A caching engine does not have that intelligence.

Let's suppose you have a 100 megabit link to the Internet, and you install a cache engine that effectively gets a 6 percent hit rate. That would be an exceptional hit rate.

So what is the end-user experience with a 6 percent hit rate compared to pre-cache?

- First off, it is not the hit rate that matters when looking at total bandwidth. Many of those hits will likely be smallish image files from the Yahoo home page or other common sites, which account for less than 1 percent of your actual traffic. Most of your traffic is likely dominated by large file downloads, and only a portion of those may be coming from cache.

- A 6 percent hit rate means a 94 percent miss rate, and if your Internet was slow from congestion before the caching server, it will still be slow 94 percent of the time.

- Putting in a caching server would be like upgrading your bandwidth from 100 megabits to 104 megabits to relieve congestion. The cache hits may add to the total throughput in your reports, but the 100 megabit bottleneck is still there, and to the end user there is little or no difference in perception at this point. A portion of your Internet access is still marginal or unusable during peak times, and other than the occasional web page or video loading nice and snappy, users are getting duds most of the time.
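
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are illustrative assumptions, not measurements; the point is that the request hit rate is not the same as the byte hit rate, and only the byte hit rate relieves the link.

link_mbps = 100          # congested Internet link
request_hit_rate = 0.06  # 6 percent of requests served from cache
byte_hit_rate = 0.04     # cached objects skew small, so fewer bytes are saved

# Bytes that no longer cross the link effectively add to capacity.
effective_mbps = link_mbps * (1 + byte_hit_rate)

print(f"Effective capacity: ~{effective_mbps:.0f} Mbps")      # ~104 Mbps
print(f"Traffic still fighting over the 100 Mbps bottleneck: "
      f"{(1 - byte_hit_rate) * 100:.0f}%")                    # 96%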

Even the largest caching server is insignificant in how much data it can store.

- The Internet is vast and your cache is not. Think of a tiny ant standing on top of Mount Everest. YouTube takes in 100 hours of new content every minute of every day. A small commercial caching server can store about 1/1000 of what YouTube uploads in a day, not to mention yesterday, the day before, and last year. It's just not going to be in your cache.

So why is a NetEqualizer bandwidth controller so much better than a caching server at changing user perception of speed? Because the NetEqualizer is designed to keep Internet access from crashing, and this is accomplished by reducing the footprint of large file transfers and video downloads during peak times. Yes, these videos and downloads may be slow or sporadic, but they weren't going to work well anyway, so why let them crush the interactive traffic? In the end, neither caching nor equalizing is perfect, but in real-world trials the equalizer changes the user experience from slow to fast for all interactive transactions, while caching is hit or miss (pun intended).

Federal Judge Orders Internet Name be Changed to CDSFBB (Content Delivery Service for Big Business)


By Art Reisman – CTO – APconnections

Okay, so I fabricated that headline; it's not true, but I hope it goes viral and sends a message that our public Internet is being threatened by business interests and activist judges.

I'll concede our government does serve us well in some cases; it has produced some things that could not be done without its oversight, for example:

1) The highway system

2) The FAA does a pretty good job keeping us safe

3) The Internet. At least up until a recent derelict court ruling that will allow ISPs to give preferential treatment to content providers for a payment, bribe, or whatever you want to call it.

The ramifications of this ruling may bring an end to the Internet as we know it. Perhaps the ball was put in motion when the Internet was privatized back in 1994. In any case, if this ruling stands up, you can forget about the Internet as the great equalizer: a place where a small business can have a big web site, where a new idea on a small budget can blossom into a Fortune 500 company, where the little guy can compete on equal footing without an entry fee to get noticed. No, the tide won't turn right away, but at some point, through a series of rationalizations, content companies and ISPs with deep pockets will kill anything that moves.

This ruling establishes a legal precedent. Legal precedents with suspect DNA are like cancers: they mutate into ugly variations and replicate rapidly, and there is no drug that can stop them. Obviously, the forces at work here are not the court systems themselves, but businesses with motives. The poor carriers just can't seem to find any solution to their congestion other than charging for access? Combine this with oblivious consumers who just want content on their devices, and you have a dangerous mixture. Ironically, these consumers already subsidize ISPs with a huge chunk of their disposable income. The hoodwink is on. Just as the public airwaves are controlled by a few large media conglomerates, so will go the Internet.

The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.

Squid Caching Can be Finicky


Editor's Note: For the past few weeks we have been tuning and testing our caching engine, working closely with some of the developers who contribute to the Squid open-source project.

Following are some of my observations and discoveries regarding Squid caching from our testing process.

Our primary mission was to make sure YouTube files cache correctly (which we have done). One of the tricky aspects of caching a YouTube file is that many of these files are considered dynamic content. Basically, this means the content contains a portion that may change with each access; sometimes the URL itself is just a pointer to a server where the content is generated fresh with each new access.

An extreme example of dynamic content would be your favorite stock quote site. During the business day much of the information on these pages changes constantly, and thus is obsolete within seconds. A poorly designed caching engine would do much more harm than good if it served up out-of-date stock quotes.

Caching engines by default try not to cache dynamic content, and for good reason. There are two different methods a caching server uses to decide whether or not to cache a page.

1) The web designer can specifically set directives, in the page's response headers or in the format of the URL itself, to tell caching engines whether a page is safe to cache.

In a recent test I set up a crawler to walk through the Excite web site and all of its URLs. I use this crawler to create load in our test lab, as well as to fill up our caching engine with repeatable content. I set my Squid configuration file to cache all content smaller than 4k. Normally this would generate a great deal of cache hits, but for some reason none of the Excite content would cache. Upon further analysis, our Squid consultant found the problem:

“I have completed the initial analysis. The problem is the excite.com server(s). All of the ‘200 OK’ excite.com responses that I have seen among the first 100+ requests contain Cache-Control headers that prohibit their caching by shared caches. There appears to be only two kinds of Cache-Control values favored by excite:

Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0

and

Cache-Control: private,public

Both are deadly for a shared Squid cache like yours. Squid has options to overwrite most of these restrictions, but you should not do that for all traffic as it will likely break some sites.”
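
For anyone who wants to check a site's cacheability the same way, here is a minimal sketch in Python (standard library only; the URLs are just examples) that fetches a page and reports the Cache-Control header a shared cache such as Squid would see:

# Report the caching directives a shared cache (like Squid) would see.
# The URLs below are only examples; substitute whatever site you are testing.
import urllib.request

def cache_control_for(url):
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.headers.get("Cache-Control", "<no Cache-Control header>")

for url in ["http://www.excite.com/", "http://www.example.com/"]:
    print(url, "->", cache_control_for(url))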

2) The second method is a bit more passive than deliberate directives. Caching engines look at the actual URL of a page to gain clues about its permanence. A “?” in the URL implies dynamic content and is generally a red flag to the caching server. And herein lies the issue with caching YouTube files: almost all of them have a “?” embedded within their URL.

Fortunately, YouTube videos are normally permanent and unchanging once they are uploaded. I am still getting a handle on these pages, but it seems the dynamic part is used for the insertion of different advertisements on the front end of the video. Our Squid caching server uses a normalizing technique to keep the root of the URL consistent, and thus serve up the correct base YouTube video every time. Over the past two years we have had to replace our normalization technique twice in order to consistently cache YouTube files.
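
As a rough illustration of what URL normalization means here (this is not our production code, and the parameter names are assumptions based on typical YouTube-style URLs), the idea is to drop the volatile query parameters and build a stable cache key from the part that identifies the video:

# Illustrative URL normalization: keep only the parameter that identifies the
# video and drop volatile ones (ads, session tokens, byte ranges, and so on).
# The parameter names here are assumptions for the sake of the example.
from urllib.parse import urlparse, parse_qs

STABLE_PARAMS = {"v", "id", "docid"}   # assumed content identifiers

def cache_key(url):
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    stable = {k: v[0] for k, v in sorted(params.items()) if k in STABLE_PARAMS}
    key = parsed.netloc + parsed.path
    if stable:
        key += "?" + "&".join(f"{k}={v}" for k, v in stable.items())
    return key

# Two requests for the same video with different ad/session parameters
# normalize to the same cache key.
a = "http://youtube.example/watch?v=abc123&ad_id=999&session=xyz"
b = "http://youtube.example/watch?v=abc123&ad_id=123&session=qrs"
print(cache_key(a) == cache_key(b))   # True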

Network User Authentication Using Heuristics


Most authentication systems are black and white: once you are in, you are in. It was brought to our attention recently that authentication should be an ongoing process, not a one-time gate with continuous, unchecked free rein once inside.

The reasons are well founded.

1) Students at universities and employees at businesses have all kinds of devices, which can get stolen or borrowed while left open.

My high school kids can attest to this many times over. Often the result is just an innocuous string of embarrassing texts emanating from their phones claiming absurd things, for example, “I won't be at the party, I was digging for a booger and got a nose bleed,” blasted out to their friends after they left their phones unlocked.

2) People will also deliberately give out their authentication credentials to friends and family.

This leaves a hole in standard authentication strategies.

Next year we plan to add an interesting twist to our intrusion detection device (NetGladiator). The idea was actually not mine; it was suggested recently by a customer at our user group meeting in Western Michigan.

Here is the plan.

The idea is for our intrusion detection device to build a knowledge base of a user's habits over time, and then run those established patterns through a tiered alert system when there is any kind of abrupt change.

It should be noted that we would not be monitoring content, and thus we would be far less invasive than Google's Gmail with its targeted advertisements; we would primarily just follow the trail or path of usage, not read content.

The heuristics would consist of a three-pronged model.

Prong one would look at general trending access across all users globally. If an aggregate group of users on the network were downloading an iOS update, then this behavior would be classified as normal for individual users.

Prong two would look at the pattern of usage for the authenticated user. For example, most people tune their devices to start at a particular page. They also likely use a specific e-mail client, and have their favorite social networking sites. String together enough of these and you would develop a unique footprint for that user. The user could still deviate from their established pattern, as long as there were elements of their normal usage in their access patterns.

Prong three would be the alarm level. In general, a user would receive a risk rating when they deviated into suspect behaviors outside their established baseline. Yes, this is profiling, similar to the psychological profiling on employment tests, which is quite accurate at predicting future behavior.

A simple example of a risk factor would be a user who all of a sudden starts executing login scripts en masse, outside of their normal pattern. Something this egregious would be flagged as high risk, and the administrator could specify an automatic disconnection for the user at a high risk level. Lower-risk behavior would be logged for after-the-fact forensics if any internal servers became compromised.
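
To make the three-pronged model concrete, here is a minimal sketch of how such a tiered risk score might be computed. This is not NetGladiator code; the event names, weights, and thresholds are all illustrative assumptions.

# Illustrative three-pronged risk scoring: a global baseline, a per-user
# baseline, and a tiered alarm level. Weights and thresholds are made up.
from collections import Counter

GLOBAL_BASELINE = Counter({"ios_update": 500, "webmail": 3000})   # prong one

class UserProfile:                                                # prong two
    def __init__(self):
        self.history = Counter()

    def learn(self, event):
        self.history[event] += 1

    def risk_of(self, event):
        if GLOBAL_BASELINE[event] > 100:     # common across the whole network
            return 0.0
        if self.history[event] > 10:         # normal for this particular user
            return 0.1
        if event == "mass_login_attempts":   # egregious, outside any baseline
            return 1.0
        return 0.4                           # unfamiliar but not damning

def alarm_level(score):                                           # prong three
    return "disconnect" if score >= 0.9 else "log" if score >= 0.3 else "ignore"

profile = UserProfile()
for e in ["webmail"] * 20 + ["favorite_social_site"] * 15:
    profile.learn(e)

for event in ["webmail", "unknown_site", "mass_login_attempts"]:
    score = profile.risk_of(event)
    print(f"{event}: risk={score:.1f} -> {alarm_level(score)}")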

Latest Notes on the Peer to Peer Front and DMCA Notices


Just getting back from our tech talk seminar today at Western Michigan University. The topic of DMCA requests came up in our discussions, and here are some of my notes on the subject.

Background: The DMCA (Digital Millennium Copyright Act) is the law under which copyright holders and their enforcement agents, working for the motion picture and recording industries, track down users hosting illegal content and send out notices.

They sometimes seem to shoot first and ask questions later when sending out their notices; more specific detail on that follows below.

Unconfirmed rumor has it that one very large university in the state of Michigan just tosses the requests in the garbage and does nothing with them, and I have heard of other organizations taking the same tack. They basically claim that this is the DMCA enforcers' problem, not the responsibility of the ISP.

I am also aware of a sovereign Caribbean country that ignores them. I am not advocating this as a solution, just noting it as an observation.

There was also a discussion on how the DMCA discovers copyright violators from the outside.

As standard practice, most network administrators use their firewall to block unsolicited requests into the network from the outside. With this type of firewall setting, an outsider cannot just randomly probe a network to find out what copyrighted material is being hosted. You must first be invited in by an outgoing request.

An analogy: if you show up at my door uninvited and knock, my doorman is not going to let you in, because there is no reason for you to be at my door. But if I order a pizza and you show up wearing a pizza delivery shirt, my doorman is going to let you in. In the world of p2p, the invite into the network is a bit more subtle, and most users are not aware they have sent it, but it turns out any user with a p2p client is constantly sending out requests to p2p super nodes to obtain information on what content is out there. Doing so opens the door on the firewall to let the p2p super node into the network. A DMCA-operated super node just looks like another web site to the firewall, so it is let in. Once in, the enforcers read the directories of p2p clients.

In one instance, the enforcers were not really inspecting files for copyrighted material, but were only checking titles. A music student who recorded his own original music, but named his files after established artists and songs based on the style of each song, was erroneously flagged with DMCA notifications because of his naming convention. The school's security staff examined his computer and determined the content was not copyrighted at all. What we can surmise from this account is that the probing covered network directories without actually looking at the content of the files to see if they were truly copies of original works.

Back to the theory of how the probing is done. The consensus was that it is very likely the enforcers are actually running super nodes themselves, so they get access to client directories. A super node is a server node that p2p clients contact to get advice on where to find music and movie content (pirated, most likely). The speculation among the user group, and these are very experienced front-line IT administrators who have seen just about every kind of p2p scheme, is that since the DMCA super node is contacted by the student network first, the door is opened for the super node to come back and probe for content. In other words, the super node looks like the pizza delivery guy to whom you placed your order.

It was also discussed, and this theory is still quite open, that sophisticated p2p networks try to cut out the spy super nodes. This gets more convoluted than peeling off character masks in a Mission Impossible movie. The p2p network operators need super nodes to distribute content, but these nodes cannot be permanently hosted; they must live in the shadows, and are perhaps parasites themselves on client computers.

So the questions that remain for future study on this subject are: how do the super nodes get picked, and how does the p2p network disable a spy super node?

Caching in the Cloud is Here


By Art Reisman, CTO APconnections (www.netequalizer.com)

I just got a note from a customer, a university, that their ISP is offering them 200 megabit Internet at a fixed price. The kicker is that they can also have access to a 1 gigabit feed specifically for YouTube at no extra cost. The only explanation for this is that their upstream ISP has an extensive in-network YouTube cache. I am just kicking myself for not seeing this coming!

I was well aware that many of the larger ISPs cache Netflix and YouTube on a large scale, but this is the first time I have heard of a bandwidth provider offering a special reduced rate for YouTube to a downstream customer. I am just mad at myself for not predicting this type of offer before hearing about it from a third party.

As for the NetEqualizer, we have already adjusted our licensing so that this differential traffic comes through at no extra charge beyond your regular license level, in this case 200 megabits. So if, for example, you have a 350 megabit license but have access to a 1 Gbps YouTube feed, you will pay for a 350 megabit license, not 1 Gbps. We will not charge you for the overage while accessing YouTube.

Using OpenDNS on Your Wireless Network to Prevent DMCA infringements


Editor's Note: The following was written by guest columnist Sam Beskur, CTO of Global Gossip. APconnections and Global Gossip have partnered to offer a joint hotel service solution, HMSIO. Read our HMSIO service offering datasheet to learn more.

Traffic Filtering with OpenDNS

 


Abstract

AUP (Acceptable Use Policy) violations, which include DMCA infringements on illegal downloads (P2P, Usenet or otherwise), have been hugely troublesome in many locations where we provide public access WiFi. Nearly all major carriers here in the US now have some form of notification system to alert customers when violations occur, and the ones that don't send notifications are silently tracking this behavior.

As a managed service provider, it is incredibly frustrating to receive these violation notifications, as they never contain the information one needs to stop the abuse, only the WAN IP of the offending location. The end user who committed the infraction is usually behind a NATed private address (192.168.x.x or 172.16.x.x), and for reasons still unknown to me the notices never provide information on the site hosting the illegal material, botnet, adware, etc.

When a customer on whose behalf you are providing managed services receives one of these notifications, it can jeopardize your account.

Expensive layer 7 DPI appliances will do the job of filtering P2P traffic, but oftentimes customers are reluctant to invest in these devices for a number of reasons: yet another appliance to power, configure, maintain, and support; another point of failure; another config to back up; no more rack space; etc., etc., ad nauseam.

Summary

Below we outline a cloud-based approach, built on OpenDNS and the NetEqualizer, which has very nearly eliminated all AUP violations across the networks we manage.

Anyone can use the public OpenDNS servers at the following addresses:

208.67.222.222

208.67.220.220

If, however, you wish to use the advanced filtering capabilities, you will need to create a paid account and register the static WAN IP of the location you are trying to filter. Prices vary.

  1. Adjust your content filter/traffic shaper (NetEqualizer) to limit/block the number of P2P connections.

  2. Configure your router / gateway device / DHCP server to use 208.67.222.220 and 208.67.222.222 as the primary and secondary DNS servers.

  3. Once you have an OpenDNS account, add your location for filtering and configure DNS blocking of P2P and malware sites.

  4. To prevent the more technically savvy end users from specifying their own DNS server (8.8.8.8, 4.2.2.2, 4.2.2.1, etc.), it is a VERY good idea to configure your gateway to block all traffic on port 53 to all endpoints except the OpenDNS servers. DNS primarily uses UDP port 53, so configuring this within iptables (maybe even another feature for NetEqualizer) or within Cisco IOS is fairly trivial. If your router doesn't allow this, hack it or get another one. A quick way to verify the block from a client machine is sketched below.
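
Here is a minimal sketch in Python (standard library only) of that verification step: it hand-builds a DNS query for an example hostname and checks which resolvers answer from behind the gateway. If the port 53 block is working, only the OpenDNS addresses should respond.

# Verify the port 53 block: only the OpenDNS resolvers should answer.
import random
import socket
import struct

def dns_answers(server, name="example.com", timeout=3):
    """Send a minimal DNS A-record query and return True if any reply arrives."""
    tid = random.randint(0, 0xFFFF)
    header = struct.pack(">HHHHHH", tid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(header + question, (server, 53))
        sock.recvfrom(512)
        return True
    except OSError:          # timeout or ICMP unreachable: likely blocked
        return False
    finally:
        sock.close()

for server in ["208.67.222.222", "208.67.220.220", "8.8.8.8"]:
    print(server, "answered" if dns_answers(server) else "no answer (blocked?)")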

     

Depending on your setup, there are a number of other techniques that can be added to this approach to further augment your ability to track NATed end-user traffic, but as I mentioned, these steps alone have very nearly eliminated our AUP violation notifications.

Is a Balloon Based Internet Service a Threat to Traditional Cable and DSL?


Update:

Looks like this might be the real deal: a mystery barge in San Francisco Bay owned by Google.

I recently read an article regarding Google's foray into balloon-based Internet services.

This intriguing idea sparked a discussion on the subject with some of the engineers at a major satellite Internet provider. They, as well as I, were somewhat skeptical about the feasibility of the balloon idea. Could we be wrong? Obviously, there are some unconventional obstacles to bouncing Internet signals off balloons, but what if those obstacles could be economically overcome?

First, let's look at the practicalities of using balloons to beam Internet signals from ground-based stations to consumers.

Advantages over satellite service

Latency

Satellite Internet, the kind used by WildBlue, usually comes with a minimum of a one-second delay, sometimes more. The bulk of this signal delay is due to the distance required for a geostationary satellite, about 22,000 miles up.

A balloon would be located much closer to the earth, in the atmosphere at around 2 to 12 miles up. The delay at this distance is just a few milliseconds at most.
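
The difference is easy to sanity-check with a speed-of-light calculation. This is a rough sketch; it ignores processing, queuing, and the extra hop to the ground station.

# Rough one-way propagation delay: distance divided by the speed of light.
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282

def one_way_delay_ms(miles):
    return miles / SPEED_OF_LIGHT_MILES_PER_SEC * 1000

print(f"Geostationary satellite (22,000 mi): {one_way_delay_ms(22_000):.1f} ms each way")
print(f"Balloon at 12 miles up:              {one_way_delay_ms(12):.3f} ms each way")
# A round trip through the satellite (up and down, twice) is roughly 470 ms
# before any processing, which is why satellite links feel so sluggish.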

Cost

Getting a basic geostationary satellite into space runs a minimum of 50 million dollars, and perhaps a bit less for a low-orbiting, non-stationary satellite.

Balloons are relatively inexpensive compared to a satellite. Although I don't have exact numbers on a balloon, the launch cost is practically zero: a balloon carries its payload without any additional energy or infrastructure, so the only real costs are the balloon, the payload, and the ground-based stations. For comparison purposes let's go with $50,000 per balloon.

Power

Both options can use solar power. Orienting a balloon's solar collectors might require 360-degree coverage; however, as we will see, a balloon can be tethered and periodically raised and lowered, in which case power can be ground-based and rechargeable.

Logistics

This is the elephant in the room. The position of a satellite over time is extremely predictable. Even satellites that are not stationary can be relied on to be where they are supposed to be at any given time. This makes coverage planning deterministic. Balloons, on the other hand, unless tethered, will wander with very little predictability.

Coverage Range

A balloon at 10,000 feet can cover a radius on the ground of about 70 miles. A stationary satellite can cover an entire continent. So you would need a series of balloons to cover an area reliably.

Untethered

I have to throw out the idea of untethered high-altitude balloons. They would wander all over the world and crash back to earth in random places. Even if it were cost-effective to saturate the upper atmosphere with them and pick them out when in range for communications, I just don't think NASA would be too excited to have thousands of these large balloons in unpredictable drift patterns.

Tethered

As crazy as it sounds, there is a precedent for tethering a communication balloon to a 10,000-foot cable. Evidently the US did something like this to broadcast TV signals into Cuba. I suppose for an isolated area where you can hang out offshore, well out of the way of any air traffic, this is possible.

High Density Area Competition

So far I have been running under the assumption that a balloon-based Internet service would be an alternative to satellite coverage, which finds its niche almost exclusively in rural areas of the world. When I think of the monopoly and cost advantage existing carriers have in urban areas, a wireless service beaming high speeds from overhead might have some staying power there too. Certainly there could be some overlap with rural users, making the economics of deployment more cost-effective; the more subscribers the better. But I do not see urban coverage as a driving business factor.

Would the consumer need a directional antenna?

I have been assuming all along that these balloons would supply direct service to the consumer. I suspect that some sort of directional antenna pointing at your local offshore balloon would need to be attached to the side of your house. This is another reason why the balloons would need to hold a stationary position.

My conclusion is that somebody like Google could conceivably create a balloon zone off any coastline with a series of balloons tethered to barges of some kind. The main problem, assuming cost was not an issue, would be the political ramifications of a plane hitting one of the tethers. With Internet demand on the rise, 4G's limited range, and the high cost of laying wires to the rural home, I would not be surprised to see a test network someplace in the near future.

Tethered balloon (image courtesy of an Ars Technica article)

Five Things to Consider When Building a Commercial Wireless Network


By Art Reisman, CTO, APconnections,  www.netequalizer.com

with help from Sam Beskur, CTO Global Gossip North America, http://hsia.globalgossip.com/

Over the past several years we have provided our bandwidth controllers as a key component in many wireless networks. Along the way we have seen many successes, and some not-so-successful deployments. What follows are some key lessons from our experiences with wireless deployments.

1) Commercial Grade Access Points versus Consumer Grade

Commercial-grade access points use intelligent collision avoidance in densely packed areas. Basically, this means they make sure that a user within range of multiple access points is only being serviced by one AP at a time. Without this intelligence, you get signal interference and confusion. An analogy would be asking a sales rep for help in a store and having two sales reps answer you at the same time; it would be confusing to know which one to listen to. Commercial-grade access points follow a courtesy protocol, so you do not get two responses, or possibly even three, in a densely packed network.

Consumer-grade access points are meant to service a single household. If there are two in close proximity to each other, they do not communicate. The end result is interference during busy times, as they will both respond at the same time to the same user without any awareness of each other. Because of this, users will have trouble staying connected. Sometimes the performance problems show up long after the installation. When pricing out a solution for a building or hotel, be sure to ask the contractor if they are bidding commercial-grade (intelligent) access points.

2) Antenna Quality

There are a limited number of frequencies (channels) open to public WiFi. If you can make sure the transmission is broadcast in a limited direction, this allows for more simultaneous conversations, and thus better quality. Higher-quality access points can actually figure out the direction of the users connected to them, such that when they broadcast, they cancel out the signal going out in directions not intended for the end user. In tight spaces with multiple access points, signal-canceling antennas will greatly improve service for all users.

3) Installation Sophistication and Site Surveys

When installing a wireless network, there are many things a good installer must account for, for example, the attenuation between access points. In a perfect world you want your access points far enough apart that they are not getting blasted by their neighbor's signal. It is okay to hear your neighbor in the background a little bit; you must have some overlap, otherwise you would have gaps in coverage, but you do not want access points competing with high-energy signals close together. If you were installing your network in a giant farm field with no objects between access points, you could just set them up in a grid with the prescribed distance between nodes. In the real world you have walls, trees, windows, and all sorts of objects in and around buildings. A good installer will actually go out and measure the signal loss from these objects in order to place the correct number of access points. This is not a trivial task, but without an extensive site survey the resulting network will have quality problems.
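
For a feel of the numbers an installer works with, here is a rough link-budget sketch in Python. The free-space path loss formula is the standard one; the per-wall attenuation figures are illustrative assumptions that a real site survey would replace with measurements.

import math

def free_space_path_loss_db(distance_m, freq_mhz=2437):
    """Standard free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative per-obstruction losses; a site survey replaces these with measurements.
WALL_LOSS_DB = {"drywall": 3, "brick": 8, "concrete": 12}

def estimated_loss_db(distance_m, walls):
    return free_space_path_loss_db(distance_m) + sum(WALL_LOSS_DB[w] for w in walls)

# The same 20 m spacing gives very different results depending on what is in the way.
print(f"20 m, open air:            {estimated_loss_db(20, []):.0f} dB")
print(f"20 m, 2 drywall + 1 brick: {estimated_loss_db(20, ['drywall', 'drywall', 'brick']):.0f} dB")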

4) Know What is Possible

Despite all the advances in wireless networks, they still have density limitations. I am not quite sure how to quantify this statement other than to say that wireless does not do well in an extremely crowded space (stadium, concert venue, etc.) with many devices all trying to get access at the same time. It is a big jump from designing coverage for a hotel with 1,000 guests spread out over the hotel grounds, to a packed stadium of people sitting shoulder to shoulder. The other compounding issue with density is that it is almost impossible to simulate before building out the network and going live. I did find a reference to a company that claims to have done a successful build-out in Gillette Stadium, home of the New England Patriots. It might be worth looking into this further for other large venues.

5) Old Devices

Old 802.11b devices on your network will actually cause your access points to back off to slower speeds. Most b-only devices were discontinued in the mid-2000s, but they are still around. The best practice here is to just block these devices, as they are rare and not worth bringing down the speed of your overall network for.

We hope these five (5) practical tips help you to build out a solid commercial wireless network. If you have questions, feel free to contact APconnections or Global Gossip to discuss.

Related Article:  Wireless Site Survey With Free tools

A Novel Idea on How to Cache Data Completely Transparently


By Art Reisman

Recently I got a call from a customer claiming our Squid proxy was not retrieving videos from cache when expected.

This prompted me to set up a test in our lab where I watched four videos over and over. With each iteration, I noticed that the proxy would sometimes go out and fetch a new copy of a video, even though the video was already in the local cache, thus confirming the customer's observation.

Why does this happen?

I have not delved into the specific Squid code yet, but I think it has to do with the dynamic redirection performed by YouTube in the cloud, and the way the Squid proxy interprets the URL. If you look closely at YouTube URLs, there is a CGI component in the name, the word “watch” followed by a question mark “?”. The URLs are not static. Even though I may be watching the same YouTube video on successive tries, the cloud is getting the actual video from a different place each time, and so the Squid proxy thinks it is new.

Since serving stale copies of data is a big no-no, my Squid proxy, when in doubt, errs on the side of caution and fetches a new copy.

The other hassle with using a caching proxy server is the complexity of setting up port redirection (special routing rules). By definition the proxy must fake out the client making the request for the video. Getting this redirection to work requires some intimate network knowledge and good troubleshooting techniques.

My solution for the above issues is to just toss the traditional Squid proxy altogether and invent something easier to use.

Note: I have run the following idea by the naysayers (all of my friends who think I am nuts), and yes, there are still some holes in it. I'll present their points after I present my case.

My caching idea

To get my thought process started, I tossed all that traditional tomfoolery with redirection and URL-based caching out the window.

My caching idea is to cache streams of data without regard to URL or filename. Basically, this would require a device to save off streams of characters as they happen. I am already very familiar with implementing this technology; we do it with our CALEA probe. We have already built technology that can capture raw streams of data, store them, and then index them, so this does not need to be solved.

Figuring out whether a subsequent stream matches a stored stream would be a bit more difficult, but not impossible.

The benefits of this stream-based caching scheme as I see them:

1) No routing or redirection needed; the device could be plugged into any network link by any weekend warrior.

2) No URL confusion. Even if a stream (video) were kicked off from a different URL, the proxy device would recognize the character stream coming across the wire as the same as a stored stream in the cache, and then switch over to the cached stream when appropriate, thus saving the time and energy of fetching the rest of the data from across the Internet.

The pure beauty of this solution is that just about any consumer could plug it in without any networking or routing knowledge.

How this could be built

Some rough details on how this would be implemented…

The proxy would cache the most recent 10,000 streams.

1) A stream would be defined as occurring when continuous data was transferred in one direction from an IP and port to another IP and port.

2) The stream would terminate and be stored when the port changed.

3) The server would compare the beginning of each new stream to the streams already in cache, perhaps the first several thousand characters. If there was a match, it would fake out the sender and receiver, step into the middle, and continue sending the data from the cache.
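
Here is a minimal sketch of the matching step (not an implementation of the actual product; the prefix length, cache size, and eviction policy are arbitrary choices for illustration): index each stored stream by a hash of its first few thousand bytes, and check every new stream's prefix against that index.

# Illustrative stream-matching cache: key each stored stream by a hash of its
# first PREFIX_BYTES. Prefix length and cache size are arbitrary choices.
import hashlib
from collections import OrderedDict

PREFIX_BYTES = 4096
MAX_STREAMS = 10_000

class StreamCache:
    def __init__(self):
        self._store = OrderedDict()          # fingerprint -> full stream bytes

    @staticmethod
    def _fingerprint(data):
        return hashlib.sha256(data[:PREFIX_BYTES]).hexdigest()

    def add(self, stream_bytes):
        self._store[self._fingerprint(stream_bytes)] = stream_bytes
        if len(self._store) > MAX_STREAMS:   # evict the oldest stored stream
            self._store.popitem(last=False)

    def lookup(self, prefix_bytes):
        """Return the cached stream whose prefix matches, or None."""
        if len(prefix_bytes) < PREFIX_BYTES:
            return None                      # not enough data yet to decide
        return self._store.get(self._fingerprint(prefix_bytes))

cache = StreamCache()
video = b"\x00" * PREFIX_BYTES + b"rest of the video payload"
cache.add(video)
print(cache.lookup(video[:PREFIX_BYTES]) is video)   # True: serve from cache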

What could go wrong

Now for the major flaws in this technology that must be overcome.

1) Since there is no title on the stream from the sender, there would always be the chance that a match was a coincidence. For example, an advertisement prepended to multiple YouTube videos might fool the caching server: the initial sequence of bytes would match the advertisement and not the video that follows.

2) Since we would be interrupting a client-server transaction mid-stream, the server would have to be cut off in the middle of the stream when the proxy took over. That might get ugly as the server tries to keep sending. Faking an ACK back to the sending server would also not be viable, as the sending server would continue to send data, which is exactly what we are trying to prevent with the cache.

The next step (after I fix our traditional URL-matching problem for the customer) is to build an experimental version of stream-based caching.

Stay tuned to see if I can get this idea to work!

The World’s Biggest Caching Server


Caching solutions come in all shapes and sizes to speed up Internet data retrieval. From your desktop keeping a local copy of the last web page viewed, to your cable company keeping an entire library of Netflix movies, there is broad diversity in the scope and size of caching solutions.

So, what is the biggest caching server out there? Moreover, if I found the world's largest caching server, would it store just a microscopic subset of the total data available from the public Internet? Is it possible that somebody has actually cached everything on the Internet? A caching server the size of the Internet seems absurd, but I decided to investigate anyway, and so with an open mind I set out to find the biggest caching server in the world. Below I have detailed my research and findings.

As always, I started with Google, but not in the traditional sense. If you think about Google, they seem to have every public page on the Internet indexed. That is a huge amount of data, and I suspect they are the world's biggest caching server. Asserting that Google is the world's largest caching server seems logical, but somewhat hollow and unsubstantiated, so my next step was to quantify my assertion.

To figure out how much data is actually stored by Google, in a weird twist of logic, I figured the best way to estimate the size of the stored data would be to determine what data is not stored by Google.

I would need to find a good way to stumble onto some truly random web pages without using Google to find them, and then specifically test whether Google knew about those pages by asking it to search for unique, deeply rooted text strings within those sites.

Rather than ramble too much, I’ll just walk through one of my experiments below.

To find a random web site, I started with one of those random web site stumblers. As advertised, it took me to a random web site titled “Finest Polynesian Tiki Objects”. From there, I looked for unique text strings on the Tiki site. The idea here is to find a sentence of text that is not likely to be found anywhere but on this site, in essence something deep enough that it is not a deliberately indexed title already submitted to Google. I poked around on the Tiki site and found some seemingly innocuous text on their merchant page: “Presenting Genuine Witco Art – every piece will come with a scanned”. I put that exact string in my Google search box, and presto, there it was.

[Screenshot: Google search result matching the quoted Tiki-site phrase]

Wow, it looks like Google has this somewhat random page archived and indexed, because it came up in my search.

A sample set this small is not large enough to extrapolate from and draw conclusions, so I repeated my experiment a few more times; here are more samples of what I found.

Try number two.

Random Web Site

http://www.genarowlandsband.com/contact.php

Search String In Google

“For booking or general whatnot, contact Bob. Heck, just write to say hello if you feel like it.”

[Screenshot: Google search result matching the quoted contact-page phrase]

It worked again: Google found the exact page from a search on a string buried deep within the page.

And then I did it again.

[Screenshot: Google search result for a third randomly chosen page]

And again Google found the page.

The conclusion is that Google has indexed, and cached, close to 100 percent of the publicly accessible text on the Internet. In fairness to Google's competitors, they also found the same web pages using the same search terms.

So how much data is cached in terms of a raw number?

 

There are plenty of public statistics for the number of web sites and pages connected to the Internet, and there is also data detailing the average size of a web page. What I have not determined is how much of the video and image content is cached by Google. I do know they are working on image search engines, but for now, to be conservative, I'll base my estimate on text only.

So, roughly, there are 15 billion web pages, and the average amount of text per page is 25 thousand bytes. (Note that most of the Web is video and images; text is actually a small percentage.)

So to get a final number I multiply 15 billion (15,000,000,000) by 25 thousand (25,000) and I get…

375,000,000,000,000 bytes cached, or roughly 375 terabytes…
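
The back-of-the-envelope arithmetic, for anyone who wants to tweak the inputs (both figures are the rough public numbers quoted above, not measurements):

# Rough estimate of the text Google would have to cache.
pages = 15_000_000_000        # approximate count of public web pages
avg_text_bytes = 25_000       # approximate average text per page

total_bytes = pages * avg_text_bytes
print(f"{total_bytes:,} bytes (~{total_bytes / 1e12:.0f} TB of text)")
# 375,000,000,000,000 bytes (~375 TB of text)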

 

 

Notice that the name of the site or the band does not appear in my search string; there is nothing to tip off the Google search engine about what I am looking for, and presto, it still finds the page!

Internet Regulation, What Is the World Coming To?


A friend of mine just forwarded an article titled “How Net Neutrality Rules Could Undermine the Open Internet”

Basically, Net Neutrality advocates are now worried that bringing the FCC in to help enforce neutrality will set a legal precedent allowing wide-reaching control over other aspects of the Internet, for example, some form of content control extending into grey areas.

Let’s look at the history of the FCC for precedents.

The FCC came into existence to manage and enforce the wireless spectrum, essentially so you did not get 1,000 radio and TV stations blasting signals over each other in every city. That is a very necessary and valid government service; without it, there would be utter anarchy on the airwaves. Imagine roads without traffic signals, or airports without control towers.

At some point, the FCC's control over frequencies expanded into content and accessibility mandates. How did this come about? Simply put, it is the normal progression of government asserting control over a resource. It is what it is, neither good nor bad, just a reflection of a society that looks to government to make things “right”. And like an escaped non-native species in the Hawaiian Islands, it tends to take as much real estate as the ecosystem will allow.

What I do know as a certainty: the FCC, once in the door regulating anything on the Internet, will continue to grow its reach in order to make things “right” and “fair” in our browsing experience.

At best we can hope the inevitable progression of FCC control gets thwarted at every turn, allowing us a few more good years of the good old Internet as we know it. I'll take the current Internet, flaws and all, for a few more years while I can.

How Many Users Can Your High Density Wireless Network Support? Find Out Before you Deploy.


By Art Reisman, CTO, http://www.netequalizer.com

Recently I wrote an article on how tough it has become to deploy wireless technology in high-density areas. It is difficult to predict final densities until the network is fully deployed, and this often leads to missed performance expectations.

In a strange coincidence, while checking in with my friends over at Candela Technologies last Friday, I was not surprised to learn that their latest offering, the Wiser-50 Mobile Wireless Network Emulator, is taking the industry by storm.

So how does their wireless emulator work, and why would you need one?

The Wiser-50 allows you to take your chosen access points, load them up with realistic signals from a densely packed area of users, and play out different load scenarios without actually building out the network. The ability to do this type of emulation lets you adjust your design on paper without the costly trial and error of field trials. You will be able to see how your access points behave under load before you deploy them. You can then make some reasonable assumptions about how densely to place your access points, and, more importantly, get an idea of the upper bounds of your final network.

With IT deployments scaling up into new territories of density, an investment in a wireless emulation tool will pay for itself many times over, especially when bidding on a project. The ability to justify how you have sized a quality solution, versus an ad hoc random solution, will allow your customer to make informed decisions on the trade-offs in their wireless investment.

The technical capabilities of the Wiser-50 are listed below. If you are not familiar with all the terms involved in wireless testing, I would suggest a call to the Candela Technologies network engineers; they have years of experience helping all levels of customers and are extremely patient and easy to work with.

Scenario Definition Tool/Visualization

  • Complete Scenario Definition to add nodes, create mobility vectors and traffic profiles for run-time executable emulation.
  • Runtime GUI visualization with mobility and different link and traffic conditions.
  • Automatic Traffic generation & execution through the GUI.
  • Drag-and-drop capability for re-positioning of nodes.
  • Scenario consistency checks (against node capabilities and physical limitations such as speed of vehicle).
  • Mock-up run of the defined scenario (i.e., run that does not involve the emulator core to look at the scenario)
  • Manipulation of groups of nodes (positioning, movement as a group)
  • Capture and replay log files via GUI.
  • Support for 5/6 pre-defined scenarios.

RF Module

  • Support for TIREM, exponent-based, shadowing, fading, rain models (not included in base package.)
  • Support for adaptive modulation/coding for BER targets for ground-ground links.
  • Support for ground-to-ground & satellite waveforms
  • Support for MA TDMA (variants for ground-ground, ground-air & satellite links).
  • Support for minimal CSMA/CA functionality.
  • Support to add effects of selective ARQ & re-transmissions for the TDMA MAC.


Related Articles

The Wireless Density Problem

Wireless Network Capacity Never Ending Quest Cisco Blog

Does your ISP restrict you from the public Internet?


By Art Reisman

The term “walled garden” refers to the practice of a service provider locking you into its local content. A classic example was the early years of AOL. Originally, when using their dial-up service, AOL provided all the content you could want. Access to the actual Internet was granted by AOL only after other dial-up Internet providers started to compete with their closed offering. Today, using much more subtle techniques, Internet providers still try to keep you on their networks. The reason is simple: it costs them money to transfer you across a boundary to another network, and thus it is in their economic interest to keep you within their network.

So how do Internet service providers keep you on their network?

1) Sometimes with monetary incentives. For example, with large commercial accounts they just tell you it is going to cost more. My experience with this practice is firsthand: I have heard testimonials from many of our customers running ISPs, mostly outside the US, who are sold a chunk of bulk bandwidth with conditions. The terms are often something on the order of:

  • You have a 1 gigabit connection.
  • If you access data outside the country, you can only use 300 megabits.
  • If you go over 300 megabits outside the country, there will be hefty additional fees.

Obviously there is going to be a trickle-down effect, where the regional ISP will try to discourage usage outside of the local country under such terms.

2) Then there are more passive techniques, such as blatantly looking at your private traffic and just not letting it off their network. This technique was used in the US, implemented by large service providers back in the mid-2000s. Basically, they targeted peer-to-peer requests and made sure you did not leave their network. Essentially you would only find content from other users within your provider's network, even though it would appear as though you were searching the entire Internet. Special equipment was used to intercept your requests and only allow you to probe other users within your provider's network, thus saving them money by avoiding Internet exchange fees.

3) Another way your provider will try to keep you on their network is to offer locally mirrored content. Basically, they keep a copy of common files at a central location. In most cases this causes the user no harm, as they still get the same content. But it can cause problems if not done correctly; the provider risks sending out old data or obsolete news stories that have since been updated.

4) Lastly, some governments just outright block content, but this is mostly for political reasons.

Editor's Note: There are also political reasons to control where you go on the Internet, as practiced in China and Iran.

Related Article: AOL folds original content operations

Related Article: Why Caching alone won’t speed up your Internet
