An Easy Way to Get Rid of Wireless Dead Spots and Get Whole Home Music


By Steve Wagor, Co-Founder APconnections

Wireless dead spots are a common problem in homes and offices that extend beyond the range of a single wireless access point. For example, in my home office, my little Linksys access point works great on the main floor, but down in the basement the signal just does not reach very well. The problem with a simple access point is that if you need to expand your coverage you must mesh in another one, and off the shelf they do not know how to talk to each other.

For those of you who have tried to expand your home network into a mesh with multiple access points, there are how-to guides out there for rigging them up.

Many use wireless access points that are homemade, or the commercial kind made for long range. With these solutions you will most likely need a rubber ducky antenna and either some old computers or at least small single-board computers with attached wireless cards. You will also need to know a bit of networking, and most of these setups are linked into a mesh via what some people would consider complex commands.

Well, it's a lot easier than that if you don't need miles and miles of coverage: just use off-the-shelf Apple products. These are small devices with no external antennas.

First you need to install an Apple AirPort Extreme base station:
http://www.apple.com/airport-extreme
– at the time of this writing it is $199 and has been at that price for at least a couple of years now.

Now for every dead spot you just need an Apple AirPort Express:
http://www.apple.com/airport-express/
– at the time of this writing it is $99, and it too has been at that price for at least a couple of years.

So once the AirPort Extreme is installed, you can solve each dead spot for $99. And Apple has very good install instructions for the product line, so you don't need to be a network professional to configure it. Most of it is simple point and click, all done via a GUI, without ever having to go to a command line.

For whole-home music, you can fairly effortlessly use the analog/optical audio jack on the back of the AirPort Express and plug into your stereo or externally powered speakers. Now connect your iPhone or Mac to the same wireless network provided by your AirPort Extreme, and you can use AirPlay to toggle on any or all of the stereos your network has access to. So if you let your guests onto your wireless network and they have an iPhone with AirPlay, they could let you listen to anything they are playing by sending it to your stereo, for example while you are working out together in your home gym.

Stuck on a Desert Island, Do You Take Your Caching Server or Your NetEqualizer?


Caching is a great idea and works well, but I’ll take my NetEqualizer with me if forced to choose between the two on my remote island with a satellite link.

Yes, there are a few circumstances where a caching server might have a nice impact. Our most successful deployments are in educational environments where the same video is watched repeatedly as an assignment; but for most wide-open installations, expectations of performance far outweigh reality. Let's have a look at what works, and also drill down on expectations that are based on marginal assumptions.

From my personal archive of experience, here are some of the expectations attributed to caching that are perhaps a bit too optimistic.

"Most of my users go to their Yahoo or Facebook home page every day when they log in, and that is the bulk of all they do."

- I doubt this customer's user base is that conformist :), and they'll find out once they install their caching solution. But even if it were true, only some of the content on Facebook and Yahoo is static. A good portion of these pages is dynamic by default, with ever-changing content. They are marked as dynamic in their URLs, which means the bulk of the page must be reloaded each time. For caching to have an impact, the users in this scenario would have to stick to their home pages and not look at friends' photos or other pages.

"We expect to see a 30 percent hit rate when we deploy our cache."

You won't see a 30 percent hit rate unless somebody designs a specific robot army to test your cache, hitting the same pages over and over again. Perhaps on iOS update day you might see a bulk of your hits going to the same large file and get a significant performance boost for a day. But overall you will be doing well if you get a 3 or 4 percent hit rate.

"I expect the cache hits to take pressure off my Internet link."

Assuming you want your average user to experience a fast-loading Internet, this is where you really want your NetEqualizer (or a similar intelligent bandwidth controller) over your caching engine. The smart bandwidth controller can rearrange traffic on the fly, ensuring interactive hits get the best response. A caching engine does not have that intelligence.

Let's suppose you have a 100 megabit link to the Internet, and you install a cache engine that effectively gets a 6 percent hit rate. That would be an exceptional hit rate.

So what is the end-user experience with a 6 percent hit rate compared to pre-cache?

- First off, it is not the hit rate that matters when looking at total bandwidth. Many of those hits will likely be smallish image files from the Yahoo home page or other common sites, which account for less than 1 percent of your actual traffic. Most of your traffic is likely dominated by large file downloads, and only a portion of those may be coming from cache.

- A 6 percent hit rate means a 94 percent miss rate, and if your Internet was slow from congestion before the caching server, it will still be slow 94 percent of the time.

- Putting in a caching server would be like upgrading your bandwidth from 100 megabits to 104 megabits to relieve congestion. The cache hits may add to the total throughput in your reports, but the 100 megabit bottleneck is still there, and to the end user there is little or no perceptible difference. A portion of your Internet access is still marginal or unusable during peak times, and other than the occasional web page or video loading nice and snappy, users are getting duds most of the time.
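
To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The 4 percent byte hit rate is an assumption chosen to match the 100-to-104 megabit illustration above, not a measurement:

    # Back-of-the-envelope: what a cache hit rate means for a congested link.
    # The byte hit rate below is an illustrative assumption, not a measurement.

    link_mbps = 100.0       # Internet link capacity
    byte_hit_rate = 0.04    # assume cache hits account for ~4% of bytes served

    # Bytes served from cache never cross the Internet link, so the site
    # effectively has a slightly bigger pipe for the same 100 megabit circuit.
    effective_mbps = link_mbps / (1.0 - byte_hit_rate)
    print("Effective capacity: ~%.0f Mbps" % effective_mbps)   # ~104 Mbps

    # The link itself is still 100 megabits, and 96% of the traffic still has
    # to squeeze through it, so during peak congestion users see little change.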

Even the largest caching server is insignificant in how much data it can store.

- The Internet is vast and your cache is not. Think of a tiny ant standing on top of Mount Everest. YouTube puts up 100 hours of new content every minute of every day. A small commercial caching server can store about 1/1000 of what YouTube uploads in a day, not to mention yesterday, the day before, and last year. Most of it is just not going to be in your cache.
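
To put that ant-on-Everest comparison in rough numbers, here is a quick sketch; the per-hour video size and the cache capacity are assumptions for illustration only:

    # Rough scale comparison between YouTube's upload rate and a small cache.
    # Both the per-hour video size and the cache size are assumed values.

    hours_uploaded_per_day = 100 * 60 * 24   # 100 hours of video per minute
    gb_per_hour_of_video = 1.0               # assumed average encoded size
    uploads_gb_per_day = hours_uploaded_per_day * gb_per_hour_of_video  # ~144,000 GB

    cache_storage_gb = 150.0                 # assumed small commercial cache
    fraction = cache_storage_gb / uploads_gb_per_day
    print("Cache holds about 1/%d of one day's uploads" % round(1 / fraction))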

So why is a NetEqualizer bandwidth controller so much better than a caching server at changing the user's perception of speed? Because the NetEqualizer is designed to keep Internet access from crashing, and it accomplishes this by reducing the footprint of large file transfers and video downloads during peak times. Yes, those videos and downloads may be slow or sporadic, but they weren't going to work anyway, so why let them crush the interactive traffic? In the end, neither caching nor equalizing is perfect, but in real-world trials the equalizer changes the user experience from slow to fast for all interactive transactions, while caching is hit or miss (pun intended).
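
For readers curious what "equalizing" means in practice, here is a minimal sketch of the general idea: when the link nears saturation, the largest sustained flows are slowed so small interactive transactions stay snappy. This is not the NetEqualizer implementation; the function name, thresholds, and flow names are invented for illustration.

    # Minimal sketch of the general "equalizing" idea, NOT the NetEqualizer code.
    # Thresholds and names are invented purely to illustrate the concept.

    def pick_flows_to_throttle(flows, link_capacity_kbps, trigger_ratio=0.85):
        """flows: dict of flow_id -> current rate in kbps.
        When the link is near saturation, return the largest flows, which an
        equalizer would slow down so interactive traffic keeps moving."""
        total = sum(flows.values())
        if total < trigger_ratio * link_capacity_kbps:
            return []                    # no congestion, leave everything alone
        victims = []
        for flow_id, rate in sorted(flows.items(), key=lambda kv: -kv[1]):
            victims.append(flow_id)
            total -= rate * 0.5          # assume each victim is cut to half speed
            if total < trigger_ratio * link_capacity_kbps:
                break
        return victims

    # Example: a 10 Mbps link with two big downloads and two small web sessions.
    flows = {"video_stream": 6000, "iso_download": 3000, "web_1": 300, "web_2": 200}
    print(pick_flows_to_throttle(flows, link_capacity_kbps=10000))
    # -> ['video_stream'] : the big transfer is slowed, interactive traffic untouched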

Federal Judge Orders Internet Name be Changed to CDSFBB (Content Delivery Service for Big Business)


By Art Reisman – CTO – APconnections

Okay, so I fabricated that headline; it's not true, but I hope it goes viral and sends a message that our public Internet is being threatened by business interests and activist judges.

I'll concede our government does serve us well in some cases; it has produced some things that could not be done without its oversight, for example:

1) The highway system

2) The FAA does a pretty good job keeping us safe

3) The Internet. At least up until the recent derelict court ruling that will allow ISPs to give preferential treatment to content providers for a payment (bribe, or whatever you want to call it).

The ramifications of this ruling may bring an end to the Internet as we know it. Perhaps the ball was put in motion when the Internet was privatized back in 1994. In any case, if this ruling stands, you can forget about the Internet as the great equalizer: a place where a small business can have a big web site, where a new idea on a small budget can blossom into a Fortune 500 company, where the little guy can compete on equal footing without an entry fee to get noticed. No, the tide won't turn right away, but at some point, through a series of rationalizations, content companies and ISPs with deep pockets will kill anything that moves.

This ruling establishes a legal precedent. Legal precedents with suspect DNA are like cancers: they mutate into ugly variations and replicate rapidly, and there is no drug that can stop them. Obviously, the forces at work here are not the court systems themselves, but businesses with motives. The poor carriers just can't seem to find any other solution to their congestion than charging for access? Combine this with oblivious consumers who just want content on their devices, and you have a dangerous mixture. Ironically, these consumers already subsidize ISPs with a huge chunk of their disposable income. The hoodwink is on. Just as the public airwaves are controlled by a few large media conglomerates, so will go the Internet.

The only hope in this case is for the FCC to step in and take back the Internet; give it back to the peasants. However, I suspect their initial statements are just grandstanding politics. This is, after all, the same FCC that auctions off the airwaves to the highest bidder.

Squid Caching Can be Finicky


Editor's Note: The past few weeks we have been tuning and testing our caching engine, working closely with some of the developers who contribute to the Squid open source project.

Following are some of my observations and discoveries about Squid caching from our testing process.

Our primary mission was to make sure YouTube files cache correctly (which we have done). One of the tricky aspects of caching a YouTube file is that many of these files are considered dynamic content. Basically, this means a portion of their content may change with each access; sometimes the URL itself is just a pointer to a server where the content is generated fresh with each new access.

An extreme example of dynamic content would be your favorite stock quote site. During the business day much of the information on these pages changes constantly, and thus it is obsolete within seconds. A poorly designed caching engine would do much more harm than good if it served up out-of-date stock quotes.

Caching engines by default try not to cache dynamic content, and for good reason. There are two different methods a caching server uses to decide whether or not to cache a page:

1) The web designer can explicitly set flags, such as Cache-Control headers, to tell caching engines whether a page is safe to cache or not.

In a recent test I set up a crawler to walk through the Excite web site and all its URLs. I use this crawler to create load in our test lab as well as to fill up our caching engine with repeatable content. I set my Squid configuration file to cache all content smaller than 4 KB. Normally this would generate a great many cache hits, but for some reason none of the Excite content would cache. Upon further analysis our Squid consultant found the problem:

"I have completed the initial analysis. The problem is the excite.com server(s). All of the '200 OK' excite.com responses that I have seen among the first 100+ requests contain Cache-Control headers that prohibit their caching by shared caches. There appears to be only two kinds of Cache-Control values favored by excite:

    Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0

and

    Cache-Control: private,public

Both are deadly for a shared Squid cache like yours. Squid has options to overwrite most of these restrictions, but you should not do that for all traffic as it will likely break some sites."
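
If you want to check this yourself for any site, the quickest way is to look at the Cache-Control header its server returns. Here is a minimal sketch using only the Python standard library; the URL is just an example, and some sites will redirect or block automated requests:

    # Report the Cache-Control header a server sends, to see whether its
    # responses are friendly to a shared cache such as Squid.
    import urllib.request

    def cache_control_of(url):
        with urllib.request.urlopen(url) as resp:
            return resp.headers.get("Cache-Control", "(no Cache-Control header)")

    # Directives like "no-store", "no-cache", "must-revalidate" or "private"
    # tell a shared cache not to store or reuse the object.
    print(cache_control_of("http://www.excite.com/"))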

2) The second method is a bit more passive than deliberate directives. Caching engines look at the actual URL of a page to gain clues about its permanence. A "?" in the URL implies dynamic content and is generally a red flag to the caching server. And herein lies the issue with caching YouTube files: almost all of them have a "?" embedded within their URL.

Fortunately, YouTube videos are normally permanent and unchanging once they are uploaded. I am still getting a handle on these pages, but it seems the dynamic part is used for the insertion of different advertisements on the front end of the video. Our Squid caching server uses a normalizing technique to keep the root of the URL consistent and thus serve up the correct base YouTube video every time. Over the past two years we have had to replace our normalization technique twice in order to consistently cache YouTube files.
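
As a rough illustration of what such normalization looks like (this is not our production code, and real YouTube URL formats change over time; the parameter names kept here are only examples of the pattern):

    # Sketch of URL normalization for caching dynamic-looking video URLs.
    # Real YouTube/googlevideo URLs change format over time; the parameters
    # kept here ("id", "itag", "range") are examples, not the current scheme.
    from urllib.parse import urlparse, parse_qs

    def normalize(url):
        """Map variants of the same video URL (different ad or session tokens)
        onto one stable cache key, so the cache recognizes repeat requests."""
        parts = urlparse(url)
        params = parse_qs(parts.query)
        keep = ("id", "itag", "range")   # parameters that identify the content
        stable = "&".join("%s=%s" % (k, params[k][0]) for k in keep if k in params)
        return "%s%s?%s" % (parts.netloc, parts.path, stable)

    a = "http://video.example.com/videoplayback?id=abc123&itag=18&ad=42&sig=XYZ"
    b = "http://video.example.com/videoplayback?id=abc123&itag=18&ad=77&sig=QRS"
    print(normalize(a) == normalize(b))   # True: both map to the same cache key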

Network User Authentication Using Heuristics


Most authentication systems are black and white: once you are in, you are in. It was brought to our attention recently that authentication should be an ongoing process, not a one-time gate with continuous, unchecked free rein once inside.

The reasons are well founded.

1) Students at universities and employees at businesses have all kinds of devices which can get stolen or borrowed while left open.

My high school kids can attest to this many times over. Often the result is just an innocuous string of embarrassing texts emanating from their phones claiming absurd things, for example "I won't be at the party, I was digging for a booger and got a nose bleed," blasted out to their friends after they left their phone unlocked.

2) People will also deliberately give out their credentials to friends and family.

This leaves a hole in standard authentication strategies.

Next year we plan to add an interesting twist to our intrusion detection device (NetGladiator). The idea was actually not mine, but was suggested by a customer recently at our user group meeting in Western Michigan.

Here is the plan.

The idea would be for our intrusion detection device to build a knowledge base of a user's habits over time and then match those established patterns against a tiered alert system whenever there is any kind of abrupt change.

It should be noted that we would not be monitoring content, and thus we would be far less invasive than Google Gmail with its targeted advertisements; we would primarily just be following the trail or path of usage, not reading content.

The heuristics would consist of a three-pronged model.

Prong one would look at general trending access across all users globally. If an aggregate group of users on the network were downloading an iOS update, then this behavior would be classified as normal for individual users.

Prong two would look at the pattern of usage for the authenticated user. For example, most people tune their devices to start at a particular page. They also likely use a specific e-mail client, and have their favorite social networking sites. String together enough of these and you develop a unique footprint for that user. Yes, the user could deviate from their established pattern, as long as there were still elements of their normal usage in their access patterns.

Prong three would be the alarm level. In general, a user would receive a risk rating when they deviated into suspect behaviors outside their established baseline. Yes, this is profiling, similar to psychological profiling on employment tests, which are very accurate at predicting future behavior.

A simple example of a risk factor would be a user that all of a sudden starts executing login scripts en masse, outside of their normal pattern. Something this egregious would be flagged as high risk, and the administrator could specify an automatic disconnection for a user at a high risk level. Lower-risk behavior would be logged for after-the-fact forensics in case any internal servers became compromised.
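
To make the model a little more tangible, here is a minimal sketch of how the three prongs could combine into a risk rating. The scoring weights, thresholds, and event fields are invented for illustration and are not part of any shipping NetGladiator feature:

    # Sketch of the three-pronged risk model; weights and thresholds are invented.

    def risk_score(event, user_baseline, global_trends):
        """event:          e.g. {"site": "payroll-server", "action": "login_script"}
        user_baseline:  set of sites this user normally visits (prong two)
        global_trends:  behaviors currently common to all users (prong one)"""
        score = 0
        if event["site"] not in user_baseline:     # outside the user's footprint
            score += 1
        if event["site"] in global_trends:         # normal for everyone right now
            score -= 1
        if event.get("action") == "login_script":  # egregious behavior (prong three)
            score += 3
        return max(score, 0)

    def react(score):
        if score >= 3:
            return "disconnect"            # high risk: automatic disconnection
        if score >= 1:
            return "log for forensics"     # lower risk: keep a record
        return "allow"

    event = {"site": "payroll-server", "action": "login_script"}
    print(react(risk_score(event, user_baseline={"mail", "news"}, global_trends=set())))
    # -> "disconnect"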

Latest Notes on the Peer-to-Peer Front and DMCA Notices


I am just getting back from our tech talk seminar today at Western Michigan University. The topic of DMCA requests came up in our discussions, and here are some of my notes on the subject.

Background: The DMCA (Digital Millennium Copyright Act) is the law under which enforcement agents working for the motion picture copyright conglomerates track down users hosting illegal content and send out infringement notices. For brevity, I'll refer to those enforcement agents simply as "the DMCA" below.

They seem to sometimes shoot first and ask questions later when sending out their notices; more specific detail follows below.

Unconfirmed rumor has it that one very large university in the state of Michigan just tosses the requests in the garbage and does nothing with them, and I have heard of other organizations taking this tack. They basically claim that this is the DMCA enforcers' problem and not the responsibility of the ISP.

I am also aware of a sovereign Caribbean country that ignores the notices entirely. I am not advocating this as a solution, just an observation.

There was also a discussion on how the DMCA discovers copyright violators from the outside.

As standard practice, most network administrators use their firewall to block unsolicited requests coming into the network from the outside. With this type of firewall setting, an outsider cannot just randomly probe a network to find out what copyrighted material is being hosted; you must get invited in first by an outgoing request.

An analogy would be that if you show up at my door uninvited and knock, my doorman is not going to let you in, because there is no reason for you to be at my door. But if I order a pizza and you show up wearing a pizza delivery shirt, my doorman is going to let you in. In the world of p2p, the invite into the network is a bit more subtle, and most users are not aware they have sent it, but it turns out any user with a p2p client is constantly sending out requests to p2p super nodes to obtain information on what content is out there. Doing so opens the door on the firewall to let the p2p super node into the network. A DMCA-run p2p super node just looks like another web site to the firewall, so it gets let in. Once in, it reads the directories of the p2p clients.
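
The firewall behavior described here is ordinary stateful connection tracking. Here is a minimal sketch of the idea, not any particular firewall's implementation:

    # Sketch of stateful firewall behavior: unsolicited inbound connections are
    # dropped, but traffic belonging to a session we initiated is allowed back in.
    # Mirrors the "pizza delivery" analogy above; not a real firewall.

    class StatefulFirewall:
        def __init__(self):
            self.sessions = set()   # (remote_host, remote_port) pairs we contacted

        def outbound(self, host, port):
            # A p2p client contacting a super node creates state like any request.
            self.sessions.add((host, port))

        def allow_inbound(self, host, port):
            # Only traffic that matches a session we opened is let in.
            return (host, port) in self.sessions

    fw = StatefulFirewall()
    print(fw.allow_inbound("supernode.example.net", 6881))  # False: uninvited knock
    fw.outbound("supernode.example.net", 6881)              # client "orders the pizza"
    print(fw.allow_inbound("supernode.example.net", 6881))  # True: the reply gets in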

In one instance, the DMCA enforcers were not really inspecting files for copyrighted material, but were only checking titles. A music student who recorded his own original music, but named his files after established artists and songs based on the style of each song, was erroneously flagged with DMCA notifications based on his naming convention. The school's security staff examined his computer and determined the content was not copyrighted at all. What we can surmise from this account is that the DMCA was probing the network directories and not actually looking at the content of the files to see if they were truly in violation of copyright.

Back to the question of how the DMCA probes. The consensus was that it is very likely the DMCA is actually running super nodes so they can get access to client directories. A super node is a server node that p2p clients contact to get advice on where to find music and movie content (most likely pirated). The speculation among the user group, and these are very experienced front-line IT administrators who have seen just about every kind of p2p scheme, is that since the DMCA super node is contacted by their student network first, the door is opened for the super node to come back and probe for content. In other words, the super node looks like the pizza delivery guy with whom you place your orders.

It was also discussed, though this theory is still quite open, that sophisticated p2p networks try to cut out the DMCA spy super nodes. This gets more convoluted than peeling off character masks in a Mission Impossible movie. The p2p network operators need super nodes to distribute content, but these nodes cannot be permanently hosted; they must live in the shadows, perhaps as parasites on client computers.

So questions that remain for future study on this subject are: how do the super nodes get picked, and how does a p2p network disable a spy DMCA super node?

Caching in the Cloud is Here


By Art Reisman, CTO APconnections (www.netequalizer.com)

I just got a note from a customer, a university, that their ISP is offering them 200 megabit Internet service at a fixed price. The kicker is, they can also have access to a 1 gigabit feed specifically for YouTube at no extra cost. The only explanation for this is that their upstream ISP has an extensive in-network YouTube cache. I am just kicking myself for not seeing this coming!

I was well aware that many of the larger ISPs cache Netflix and YouTube on a large scale, but this is the first I have heard of a bandwidth provider offering a special reduced rate for YouTube to a downstream customer. I am just mad at myself for not predicting this type of offer, and for hearing about it from a third party.

As for the NetEqualizer, we have already adjusted our licensing so that this differential traffic comes through at no extra charge beyond your regular license level, in this case 200 megabits. So if, for example, you have a 350 megabit license but have access to a 1 Gbps YouTube feed, you will pay for a 350 megabit license, not 1 Gbps. We will not charge you for the overage while accessing YouTube.
