How to get Access to Blocked Internet Sites and Blocked Video Services


Have you ever taken a flight where video access is blocked?

Perhaps you are in a European country where a well-known provider blocks Skype to force you to use its phone service?

All you need to get around these suspect practices is a standard VPN, and it is easier than you think. I am on a flight right now and am going to try watching a movie. I am using IPVanish, but there are many VPN services you can choose from, most for just a few dollars a month.

Just today, I was trying to restore my iPad to factory defaults. I supposedly have 20-megabit business service from Comcast. While running the restore, I noticed that my download speed was topping out at about 200 kbps, and yet speed tests were showing no problems with my connection. So I rebooted my computer, started up my VPN, and ran the restore again; the difference confirmed that on the direct connection I was not getting my full bandwidth. What can I infer from this? Well, I can only assume that Comcast has some sort of bandwidth control that identifies my Apple device download and slows it down. I was able to repeat this test.

By the way, I did get to watch a movie on my flight – success!  And that was a much needed break from work.

Note: There is one more trick required to un-block some streaming sites with some VPN services. You may need to hide your DNS activity as well, since some blocking services will block the DNS request before you ever reach the site.

For example, the VPN tunnel will hide what you are doing from anybody watching the traffic, but the initial name lookup for the site may not be hidden, because by default you are likely using your provider's DNS service. So, after you fire up your VPN, you should also point your DNS at a third-party service rather than your provider's. That way your DNS requests travel through the encrypted tunnel as well.
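If you want to verify where your lookups are going once the VPN is up, here is a minimal Python sketch. It assumes the third-party dnspython library (version 2.x for the resolve() call) is installed, and the 9.9.9.9 resolver is just an example of a third-party DNS service; swap in whichever one you prefer:

import dns.resolver

# Ignore the system/provider resolver and ask a third-party DNS server directly.
# With the VPN up, these queries travel inside the encrypted tunnel.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["9.9.9.9"]          # any third-party resolver you trust

answer = resolver.resolve("example.com", "A")
print([record.to_text() for record in answer])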

Behind The Scenes, How Many Users Can an Access Point Handle?


Assume you are teaching a class of thirty students, and every one of them needs help with their homework. What would you do? You'd probably schedule a time slot for each student to come in and talk to you one-on-one (assuming they all had different problems and there was no overlap in your tutoring).

Fast forward to your wireless access point. You have perhaps heard all the rhetoric about 3.5 gigahertz, or 5.3 gigahertz?

Unfortunately, the word frequency is tossed around in tech buzzword circles the same way car companies and their marketing arms talk about engine sizes. I have no idea what a 2.5-liter engine is; it might sound cool, and it might be better than a 2-liter engine, but in reality I don't know how to compare the two numbers. So, to answer our original question, we first need a little background on frequencies to get beyond the marketing speak.

A good example of a frequency, and one that is easy to visualize, is ripples on a pond. When you drop a rock in the water, ripples propagate out in all directions. Now imagine you stood in the water, thigh deep, across the pond, and the ripples hit your leg once each second. The frequency of the ripples in the water would be 1 hertz, or one peak per second. With access points there are similar ripples, which we call radio waves. Although you can't see them, they are essentially the same thing as the ripples on the water: little peaks and valleys of electromagnetic waves going up and down and hitting the antenna of the wireless device in your computer or iPhone. So when a marketing person tells you their AP is 2.4 gigahertz, that means those little ripples coming out of it are hitting your head, and everything else around them, 2.4 billion times each second. That is quite a few ripples per second.

Now, in order to transmit a bit of data, the AP actually stops and starts transmitting ripples. One moment it is sending out 2.4 billion ripples per second; the next moment it is not. This is where it gets a bit weird, at least for me. The 2.4 billion ripples a second have no meaning as far as data transmission by themselves; what the AP does is set up a schedule of time slots, let's say 10 million time slots a second, during which it is either transmitting ripples or has the ripple generator turned off. Everybody in communication with the AP is aware of the schedule and all 10 million time slots. Think of these time slots as dates on your calendar: if you have a sunny day, call that a 1, and if you have a cloudy day, call that a 0. After we string together 8 days we have a sequence of 1's and 0's, a full byte. Now, 8 days is a long time to transmit a byte, which is why the AP does not use 24 hours for a time slot, but it could, if we were some laid-back hippie society where time did not matter.
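To make the calendar analogy concrete, here is a tiny Python illustration (with made-up values) of stringing eight time-slot readings together into one byte, with sunny = 1 and cloudy = 0 as above:

# Eight consecutive time-slot readings: sunny = 1, cloudy = 0.
slots = "10110010"
value = int(slots, 2)            # interpret the string of bits as an integer
print(value, bytes([value]))     # 178 b'\xb2' -- one full byte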

So let’s go back over what we have learned and plug in some realistic parameters.
Let's start with a frequency of 2.4 gigahertz. The fastest an AP can realistically turn this ripple generator off and on is about 1/4 of the frequency, or about 600 million time slots/bits per second. That assumes a perfect world in which all the bits get out without any interference from other things generating ripples (like your microwave). So in reality the effective rate is more on the order of 100 million bits a second.
Now let's say there are 20 users in the room, sharing the available bits equally. They would each be able to run 5 megabits per second. But again, there is overhead in switching between these users (sometimes they talk at the same time and have to constantly back off and re-synch). Realistically, with 20 users all competing for talk time, 1 to 2 megabits per user is more likely.
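Here is that back-of-the-envelope math in Python. The 100-megabit effective rate and the 30 percent contention efficiency are assumptions taken from the discussion above, not measurements:

# Rough per-user throughput estimate for a shared access point.
effective_ap_rate_mbps = 100        # usable rate after interference/protocol overhead
users = 20
ideal_share = effective_ap_rate_mbps / users          # 5 Mbps each with perfect sharing
contention_efficiency = 0.3                           # assumed loss to back-off/re-sync
realistic_share = ideal_share * contention_efficiency # roughly 1.5 Mbps per user
print(f"ideal: {ideal_share:.1f} Mbps, realistic: {realistic_share:.1f} Mbps per user")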

Other factors that can affect the number of users.
As you can imagine, AP manufacturers do all sorts of things to get better numbers. The latest APs have multiple antennas and run on two frequencies (two ripple generators) for more bits.

There are also often interference problems when multiple APs in the area are all making ripples. The ripples from one AP do not stop at a fixed boundary, and this complexity will cause data rates to slow down while the APs sort themselves out.

For related readings on Users and Access Points:

How Many Users Can a Wireless Access Point Handle?

How to Build Your Own Linux Access Points

How to use Access Points to set up an In-Home Music System

So You Think You Have Enough Bandwidth?


There are really only two tiers of bandwidth: video for all, and not video for all. It is a fairly black-and-white problem. If you secure enough bandwidth that 25 to 30 percent of your users can simultaneously watch video feeds, and you still have some headroom on your circuit, congratulations – you have reached bandwidth nirvana.

Why is video the lynchpin in this discussion?

Aside from the occasional iOS/Windows update, most consumers really don’t use that much bandwidth on a regular basis. Skype, chat, email, and gaming, all used together, do not consume as much bandwidth as video. Hence, the marker species for congestion is video.

Below, I present some of the metrics to see if you can mothball your bandwidth shaper.

1) How to determine your future bandwidth demand.
Believe it or not, you can outrun your bandwidth demand. If your latest bandwidth upgrade is large enough to handle the average video load per customer, it is possible that no further upgrades will be needed, at least for the foreseeable future.

In the “video for all” scenario, the rule of thumb is to assume 25 percent of your subscribers are watching video at any one time. If you can carry that load and still have 20 percent of your bandwidth left over, you have reached the video-for-all threshold.

To put some numbers to this:
Assume 2,000 subscribers and a 1-gigabit link. The average video feed requires about 2 megabits (note that some HD video is higher than this). This means that supporting video for 25 percent of your subscribers would consume the entire 1 gigabit, with nothing left over for anybody else; hence you will run out of bandwidth.

Now, if you have 1.5 gigabits for 2,000 subscribers, you have likely reached the video-for-all threshold, and most likely you will be able to support them without any advanced intelligent bandwidth control. A simple 10-megabit rate cap per subscriber is likely all you would need.
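For those who like to see the arithmetic, here is the same rule of thumb as a short Python calculation; the subscriber count, 2-megabit stream rate, and 25/20 percent figures are the assumptions used above, not measurements:

# Video-for-all threshold, using the rules of thumb from the text.
subscribers = 2000
video_rate_mbps = 2            # average stream; some HD video is higher
concurrent_fraction = 0.25     # assume 25% of subscribers watching at once
headroom_fraction = 0.20       # keep 20% of the link free

video_load_mbps = subscribers * concurrent_fraction * video_rate_mbps    # 1000 Mbps
required_link_mbps = video_load_mbps / (1 - headroom_fraction)           # 1250 Mbps
print(f"video load: {video_load_mbps} Mbps, link needed: {required_link_mbps:.0f} Mbps")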

2) Honeymoon periods are short-lived.
The reprieve in congestion after a bandwidth upgrade is usually short-lived because the operator either does not have a good intelligent bandwidth control solution, or removes the existing solution, mistakenly thinking they have reached the “video for all” level. In reality, they are still in the “video not for all” tier. They are lulled into a false sense of security for a brief honeymoon period: right after the upgrade things are okay, but it takes a while for a user base to fill the void left by a new bandwidth upgrade.

Bottom line: Unless you have the numbers to support 25 to 30 percent of your user base running video, you will need some kind of bandwidth control.

Firewall Recipe for DDoS Attack Prevention and Mitigation


Although you cannot “technically” stop a DDoS attack, there are ways to detect and automatically mitigate the debilitating effects on your public-facing servers. Below, we shed some light on how to accomplish this without spending hundreds of thousands of dollars on a full-service security solution that may be overkill for this situation.

Most of the damage done by a targeted DDoS attack is the result of the overhead incurred on your servers from a large volume of fake inquiries into your network. Often with these attacks it is not the volume of raw bandwidth that is the issue, but the slow response time caused by the overhead on your servers. For a detailed discussion of how a DDoS attack is initiated, please visit http://computer.howstuffworks.com/zombie-computer3.htm

We assume in our recipe below that you have some sort of firewall device on your edge that can actually count hits into your network from an outside IP, and also that you can program this device to take blocking action automatically.

Note: We provide this type of service with our NetGladiator line. As of our 8.2 software update, we also provide this in our NetEqualizer line of products.

Step 1
Calculate your baseline incoming activity. This should be a running average of unique hits per minute, or perhaps per second. The important thing is that you have an idea of what is normal. Remember, we are only concerned with un-initiated hits into your network, meaning outside clients that contact you without being contacted first.

Step 2
Once you have your baseline rate of incoming queries, set a flag to take action (Step 3 below) should this hit rate exceed your baseline by more than 1.5 standard deviations. In other words, act when your hit rate jumps by a statistically large amount compared to your baseline for no apparent reason (i.e., you did not just mail out a newsletter).

Step 3
You are at Step 3 because you have noticed a much larger than average rate of un-initiated requests into your web site. Now you need to look at the hit count by external IP. We assume the average human will generate at most a hit every 10 seconds or so, and on average will not generate more than 5 or 6 hits over a period of a few minutes. A hijacked client attacking your site as part of a DDoS attack is likely to hit you at a much higher rate. Identify these incoming IPs and go to Step 4.

Step 4
Block these IPs on your firewall for a period of 24 hours. You don't want to block them permanently, because they are likely just hijacked clients, and if they are coming from behind a NAT'd community (like a university), you would be blocking a large number of users who had nothing to do with the attack.
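To make the recipe concrete, here is a minimal Python sketch of the logic in Steps 1 through 4. The window size, the 1.5-sigma trigger, the per-IP limit, and the print statement standing in for a real firewall command are all illustrative assumptions you would tune to your own baseline:

import time
import statistics
from collections import defaultdict

SIGMA_THRESHOLD = 1.5        # Step 2: flag when hits/min exceed baseline + 1.5 std devs
PER_IP_LIMIT = 6             # Step 3: assumed max hits a human generates in the window
BLOCK_SECONDS = 24 * 3600    # Step 4: temporary 24-hour block, not permanent

minute_history = []                       # hits-per-minute samples (the running baseline)
current_minute_by_ip = defaultdict(int)   # un-initiated hits per external IP this minute
blocked_until = {}                        # ip -> time the block expires

def record_hit(src_ip):
    """Call once for each un-initiated inbound hit seen at the firewall."""
    current_minute_by_ip[src_ip] += 1

def end_of_minute():
    """Run once a minute: update the baseline and block abusive IPs."""
    total = sum(current_minute_by_ip.values())
    if len(minute_history) >= 10:                        # wait for some history (Step 1)
        mean = statistics.mean(minute_history)
        stdev = statistics.pstdev(minute_history) or 1.0
        if total > mean + SIGMA_THRESHOLD * stdev:       # Step 2 trigger
            for ip, hits in current_minute_by_ip.items():
                if hits > PER_IP_LIMIT and ip not in blocked_until:      # Step 3
                    blocked_until[ip] = time.time() + BLOCK_SECONDS      # Step 4
                    print(f"block {ip} for 24 hours")    # replace with your firewall's command
    minute_history.append(total)
    current_minute_by_ip.clear()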

If you follow these steps, you should have a nice proactive watchdog on your firewall to mitigate the effects of any DDoS attack.

For further consulting on DDoS or other security related issues feel free to contact us at admin@apconnections.net.

Related Articles:

Defend your Web Server against DDoS Attacks – techrecipes.com

How DDoS Attacks Work, and Why They’re Hard to Stop

How to Launch a 65 gbps DDoS Attack – and How to Stop It

Net Neutrality must be preserved


As much as I hate to admit it, it seems a few of our Republican congressional leaders are “all in” on allowing large content providers to have privileged priority access on the Internet. Their goal for the 2015 Congress is to thwart the President and his mandate to the FCC on net neutrality. Can you imagine going to visit Yosemite National Park and being told that the corporations that sponsor the park have taken all the campsites? Or a special lane on the Interstate dedicated exclusively to Walmart trucks? Like our highway system and our national parks, the Internet is a resource shared by all Americans.

I think one of the criteria for being a politician is certification that you flunked any class in college that involved critical or objective thinking. Take, for example, this statement from Rep. Marsha Blackburn:

“Federal control of the internet will restrict our online freedom and leave Americans facing the same horrors that they have experienced with HealthCare.gov,”

She might as well compare the Internet to the Macy's parade; it would make about as much sense. The Internet is a common shared utility, similar to electricity and roads, and besides that, it was the government that invented and funded most of the original Internet. The healthcare system is complex and flawed because it is a socialistic redistribution of wealth, not even remotely similar to the Internet. The Internet needs very simple regulation to prevent abuse, which is about the only thing government is designed to do effectively. And then there is the “stifle innovation” argument…

Rep. Bob Goodlatte, chair of the House Judiciary Committee, said he may seek legislation that would aim to undermine the “FCC’s net neutrality authority by shifting it to antitrust enforcers,” Politico wrote.

He called any such net neutrality rules a drag on innovation and competition.

Let me translate for him, because he either does not understand, or does not want to understand, the motivations of the lobbyists when they talk about stifling innovation. My words: “Regulation, in the form of FCC-imposed net neutrality, will stifle the ability of the larger access providers and content providers to create a walled-off garden, thus stifling their pending monopoly on the Internet.” There are many things I wish the government would keep its hands out of, but the Internet is not one of them. I must side with the FCC and the President on this one.

Update Jan 31st

Another win for net neutrality: the Canadian government has outlawed the practice of zero rating, which is simply a back door for a provider to favor certain content over rivals'.

More lies and deceit from your ISP


Note: We believe bandwidth shaping is a necessary and very valuable tool for both ISPs and the public. We also support open, honest discussion about the need for this technology and encourage our customers to be open and honest with their customers. We do not like deception in the industry at any level and will continue to expose and write about it when we see it.

Back in 2007, I wrote an article for PC Magazine about all the shenanigans that ISPs use to throttle bandwidth. The article set a record for online comments for the day, and the editor was happy. At that time, I recall feeling like a lone wolf trying to point out these practices. Finally, some redemption came this morning: the FTC is flexing its muscles and taking on AT&T for false claims with respect to unlimited data.

Federal officials on Tuesday sued AT&T, the nation’s second-largest cellular carrier, for allegedly deceiving millions of customers by selling them supposedly “unlimited” data plans that the company later “throttled” by slowing Internet speeds when customers surfed the Web too much.

It seems that you can have an unlimited data plan with AT&T, but if you try to use it all the time, they slow down your speed to the point where the amount of data you get approaches zero. You get unlimited data, as long as you don’t use it – huh?  Does that make sense?

Recently, I have been doing some experiments with Comcast and my live Dropcam home video feed. It seems that if I try to watch this video feed over my business-class Comcast connection (it comes down from the Dropcam cloud), the video will time out within about a minute or so. However, other people watching my feed do not have this problem. So I am starting to suspect that Comcast is using some form of application shaper to cut off my feed (or slow it down to the point where it does not work). My evidence is only anecdotal. I am supposed to have unlimited 4 megabits up and 16 megabits down with my new business-class service, but I am starting to think there may be some serious caveats hidden in this promise.

Where can you find the fastest Internet Speeds?


The fastest Internet speeds on earth can be found on any police detective show, CSI and the like. Pick a modern TV show, or movie for that matter, with a technology scene, and you'll find that the investigators can log into the Internet from any place on earth, and the connection is perfect. They can bring up images and data files instantly while on the move, in a coffee shop, in a hotel; it does not matter. They can be in some remote village in India or back at the office, and they get a perfectly fast connection every time. Even the bad guys have unlimited bandwidth from anywhere in the world on these shows.

So if you ever need fast Internet, find a friend who works in government or law enforcement, and ask for shared access.

On the other hand, I just spent a weekend in a small hotel where nothing worked. Their wireless was worthless – pings went unanswered for 30 seconds at a time – and my backup Verizon 4G was also sporadic, in and out. So I just gave up and read a magazine. When this happens, I wish I could just go to the Verizon backhaul at their tower and plug in a NetEqualizer; that would immediately stop their data crush.

End of thought for the day.

Notes from a cyber criminal


After a couple of recent high-profile data thefts, I put the question to myself: how does a cyber thief convert a large number of stolen credit cards into a financial windfall?

I did some research and then momentarily put on the shoes of a cyber thief. Here are my notes and thoughts:

I am the greatest hacker in the world and I just got a hold of twenty million Home Depot debit card and account numbers. What is my next move? Well, I guess I could just start shopping at Home Depot every day, maxing out all my stolen cards on a bunch of lawn mowers, garden hoses, and other items. How many times could I do this before I got caught? Probably not that many; I am sure the buying patterns would be flagged even before the consumers realized their cards were stolen, especially if I was nowhere near the home area code of my victim(s). And then I'd have to fence all those items to turn them into cash. But let's assume I acted quickly and went on a Home Depot shopping spree with my twenty million cards. Since I am a big-time crook looking for a haul I can retire on, I'd want to buy and fence at least a few hundred thousand dollars worth of stuff out of the gate. Now that is going to be quite a few Craigslist advertisements, and one logistical nightmare to move those goods, and I am also leaving a trail back to me, because at some point I have to exchange the goods with the buyer and they are going to want to pay by check. Let me re-think this…

Okay, so I am getting smarter. Forget the conventional method; what if I find some Russian portal where I can just sell the Home Depot cards and have the funds paid in Bitcoin to some third-party account that is untraceable? But how many people actually have Bitcoin accounts, how many are interested in buying stolen credit cards on the black market, and how do I ensure that the numbers have not been deactivated? Suppose I sell to some Mafia type and the cards are not valid anymore? Will they track me down and kill me? Forget the Bitcoin; I'll have to use PayPal, again leaving a trail of some kind. So now how do I market my credit card fencing site? I have 20 million cards to move and no customers. A television advertisement, an underworld blog post? I need customers to buy these cards and I need them fast; once I start selling them, Home Depot will only take a few days to shut the cards down. Maybe I can just have an agent hawk them in Thailand for $3 each; that way I stay anonymous. Yeah, that's what I'll do. Whew, I'll be happy if I can net a few thousand dollars.

Conclusion: Although the theft of data makes a great headline and is certainly not to be taken lightly, converting the bounty into a financial windfall, while possible, is most likely a far more difficult task than the data theft itself. Stealing the data is one thing, but profiting from it on anything but the smallest scale is very difficult, if not impossible.

The real problem for the hacked commercial institution is not covering the loss of revenue from the theft, but the loss of company value from the loss of public trust, which can mount into the billions.

Although my main business is bandwidth control, I do spend a good deal of thought cycles on security, as on occasion the two go hand in hand. For example, some of the utilities we use on our NetEqualizer are used to thwart DoS attacks. We also have our NetGladiator product, which is simply the best and smartest tool out there for preventing an attack through your website.

Surviving iOS updates


The birds outside my office window are restless. I can see the strain in the Comcast cable wires as they droop, heavy with the burden of additional bits, weighing them down like a freak ice storm. It is time, once again, for Apple to update every device in the universe with its latest iOS release.

Assuming you are responsible for a network with a limited Internet pipe, and you are staring down 100 or more users about to hit the accept button for their update, what can you do to prevent your network from being gridlocked?

The most obvious option to gravitate to is caching. I found this nice article (thanks Luke) on the Squid settings used for a previous iOS update in 2013. Having worked with Squid quite a bit helping our customers, I was not surprised at the amount of tuning required to get this to work, and I suspect there will be additional changes to make it work in 2014.

If you have a Squid caching solution already up and running it is worth a try, but I am on the fence about recommending a Squid install from scratch. Why? Because we are seeing diminishing returns from Squid caching each year due to the amount of dynamic content. Translation: very few things on the Internet come from the same place with the same filename anymore, and many content providers mark much of their content as non-cacheable.

If you have a NetEqualizer in place, you can easily blunt the effects of the data crunch with a standard default set-up. The NetEqualizer will automatically push the updates further out in time, especially during peak hours when there is contention. This will allow other applications on your network to function normally during the day. I doubt anybody doing the update will notice the difference.

Finally, if you are desperate, you might be able to block access to the iOS update servers on your firewall. This might seem a bit harsh, but then again Apple did not consult with you, and besides, isn't that what the free Internet at Starbucks is for?

Here is a snippet pulled from a forum on how to block it.

iOS devices check for new versions by polling the server mesu.apple.com. This is done via HTTP, port 80. Specifically, the URL is:

http://mesu.apple.com/assets/com_apple_MobileAsset_SoftwareUpdate/com_apple_MobileAsset_SoftwareUpdate.xml

If you block or redirect mesu.apple.com, you will inhibit the check for software updates. If you are really ambitious, you could redirect the query to a cached copy of the XML, but I haven’t tried that. Please remove the block soon; you wouldn’t want to prevent those security updates, would you?
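If you do put such a block in place, a quick sanity check is to poll the catalog URL from inside your network and confirm it no longer answers. A minimal Python sketch, assuming outbound port 80 is otherwise open:

import urllib.request

# The software-update catalog URL from the forum snippet above.
URL = ("http://mesu.apple.com/assets/com_apple_MobileAsset_SoftwareUpdate/"
       "com_apple_MobileAsset_SoftwareUpdate.xml")

try:
    with urllib.request.urlopen(URL, timeout=5) as response:
        print("still reachable, HTTP status", response.status)
except Exception as exc:
    print("blocked or unreachable:", exc)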

Your Critical Business Needs Two Sources for Internet


Time Warner's nationwide outage got my wheels turning again about how we perceive risk when it comes to network outages.

For example:

We have close to 10,000 NetEqualizer systems in the field, of which we see about 10 failures a year. If you further break those failures down to root cause, about 80 percent are due to some external event:

  • lightning
  • flood
  • heat
  • blunt trauma

Given that breakdown, the chances of a NetEqualizer failure for a well-maintained system in a properly vented environment are far less than 1 percent a year. I would also assume that for a simple router or firewall the failure rate is about the same.

Now compare those odds with the chances that your Internet provider will suffer some extended outage during the business day at some point over the course of a full year.

I would say the odds of this happening approach 100 percent.
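A quick back-of-the-envelope calculation shows why. The 0.1 percent hardware figure comes from the failure numbers above; the 1 percent chance of a notable provider outage on any given business day is purely an assumption for illustration:

# Compare annual hardware failure risk to the odds of at least one provider outage.
hardware_annual_failure = 0.001     # ~0.1% per year (10 failures across ~10,000 units)
provider_daily_outage = 0.01        # assumed chance of a notable outage on a business day
business_days = 250

p_provider_outage_year = 1 - (1 - provider_daily_outage) ** business_days
print(f"hardware: {hardware_annual_failure:.1%}, provider: {p_provider_outage_year:.0%}")
# Even at a 1% daily chance, the odds of at least one outage in a year top 90%.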

And yet the perception often is that you need a hardware fail-over strategy, and that certainly is a good idea for those with critical Internet needs. But if you are truly trying to mitigate risk in order of precedence, you should address potential outages from your provider before investing in redundant hardware.

Here again, our top 5 reasons for an Internet Outage.

Below is a list of recent publicly reported outages for various reasons. I am not intentionally picking on the larger service providers here; I do not believe they are any more or less vulnerable than smaller regional providers, they just tend to make news headlines with their outages.

Comcast Outage for North Denver Fiber cut

Comcast hit with massive Internet outage

Forum discussion about wide spread Internet outage Des Moines Iowa

Spokane Washington 10,000 customers without Internet service

Widespread Internet outage in London, Virgin Media

An Easy Way to Get Rid of Wireless Dead Spots and Get Whole Home Music


By Steve Wagor, Co-Founder APconnections

Wireless dead spots are a common problem in homes and offices that extend beyond the range of a single wireless access point. For example, in my home office my little Linksys access point works great on my main floor, but down in my basement the signal just does not reach very well. The problem with a simple access point is that if you need to expand your coverage you must mesh in a new one, and off the shelf they do not know how to talk to each other.

For those of you who have tried to expand your home network into a mesh with multiple access points, there are how-tos out there for rigging them up.

Many use homemade wireless access points, or the commercial style made for long range. With these solutions you will most likely need a rubber-ducky antenna and either some old computers or at least small-board computers with attached wireless cards. You will also need to know a bit of networking, and you will set most of these things up via what some people would consider complex commands to link them all into the mesh.

Well, it's a lot easier than that if you don't need miles and miles of coverage: use off-the-shelf Apple products. These are small devices with no external antennas.

First you need to install an Apple AirPort Extreme base station:
http://www.apple.com/airport-extreme
– at the time of this writing it is $199, and it has been at that price for at least a couple of years now.

Now for every dead spot you just need an Apple AirPort Express:
http://www.apple.com/airport-express/
– at the time of this writing it is $99, and it has also been at that price for at least a couple of years.

So once the AirPort Extreme is installed, every dead spot you have can be solved for $99. Apple has very good install instructions for the product line, so you don't need to be a network professional to configure it. Most of it is simple point-and-click, all done via a GUI without ever having to go to a command line.

For fairly effortless whole-home music, you can use the analog/optical audio jack on the back of the AirPort Express to plug into your stereo or externally powered speakers. Now connect your iPhone or Mac to the same wireless network provided by your AirPort Extreme, and you can use AirPlay to toggle on any or all of the stereos your network has access to. So if you let your guests onto your wireless network and they have an iPhone with AirPlay, they can let you listen to anything they are playing by using AirPlay to send it to your stereo, for example while you are working out together in your home gym.

The Internet, Free to the Highest Bidder.


It looks like the FCC has caved:

“The Federal Communications Commission said on Wednesday that it would propose new rules that allow companies like Disney, Google or Netflix to pay Internet service providers.”

WSJ article April 2014

Compare today's statements to those made back in January and February, when the FCC was posturing for net neutrality like a fluffed-up tom turkey.

“I am committed to maintaining our networks as engines for economic growth, test beds for innovative services and products, and channels for all forms of speech protected by the First Amendment”

– Tom Wheeler, FCC Chairman, Jan 2014

“The FCC could use that broad authority to punish Internet providers that engage in flagrant net-neutrality violations, Wheeler suggested. The agency can bring actions with the goal of promoting broadband deployment, protecting consumers, or ensuring competition, for example.”

– Tom Wheeler, Jan 2014

As I alluded to back then, I did not give their white-knight rhetoric much credence.

“The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.”

– Art Reisman, Feb 2014

It seems to me the FCC is now a puppet agency. How can you start by talking about regulating abuses that threaten free access to the Internet, and then, without blinking an eye, offer up a statement that rich guys can now pay for privileged access to the Internet?

I don't know whether to cry or be cynical at this point. Perhaps I should just go down to my nearest public library and pay somebody to stock the shelves with promotional NetEqualizer material?

“The court said that because the Internet is not considered a utility under federal law, it was not subject to that sort of regulation.”

Quotes referenced from the New York Times article “FCC, in Shift, Backs Fast Lanes for Web Traffic”

Stuck on a Desert Island, Do You Take Your Caching Server or Your NetEqualizer?


Caching is a great idea and works well, but I’ll take my NetEqualizer with me if forced to choose between the two on my remote island with a satellite link.

Yes, there are a few circumstances where a caching server might have a nice impact. Our most successful deployments are in educational environments where the same video is watched repeatedly as an assignment; but for most wide-open installations, expectations of performance far outweigh reality. Let's have a look at what works, and also drill down on expectations that are based on marginal assumptions.

From my personal archive of experience, here are some of the expectations attributed to caching that are perhaps a bit too optimistic.

“Most of my users go to their Yahoo or Facebook home page every day when they log in, and that is the bulk of all they do.”

– I doubt this customer's user base is that conformist :), and they'll find out once they install their caching solution. But even if it were true, only some of the content on Facebook and Yahoo is static. A good portion of these pages is dynamic by default, ever-changing with new content. They are marked as dynamic in their URLs, which means the bulk of the page must be reloaded each time. For example, in order for caching to have an impact, the users in this scenario would have to stick to their home pages and not look at friends' photos or other pages.

“We expect to see a 30 percent hit rate when we deploy our cache.”

You won't see a 30 percent hit rate unless somebody designs a robot army specifically to test your cache, hitting the same pages over and over again. Perhaps on iOS update day you might see the bulk of your hits going to the same large file and get a significant performance boost for a day. But overall you will be doing well if you get a 3 or 4 percent hit rate.

“I expect the cache hits to take pressure off my Internet link.”

Assuming you want your average user to experience a fast-loading Internet, this is where you really want your NetEqualizer (or similar intelligent bandwidth controller) over your caching engine. The smart bandwidth controller can re-arrange traffic on the fly, ensuring interactive hits get the best response. A caching engine does not have that intelligence.

Let's suppose you have a 100-megabit link to the Internet, and you install a cache engine that gets a 6 percent hit rate. That would be an exceptional hit rate.

So what is the end-user experience with a 6 percent hit rate, compared to pre-cache?

– First off, it is not the hit rate that matters when looking at total bandwidth. Many of those hits will likely be smallish image files from the Yahoo home page or other common sites, which account for less than 1 percent of your actual traffic. Most of your traffic is likely dominated by large file downloads, and only a portion of those may be coming from cache.

– A 6 percent hit rate means a 94 percent miss rate, and if your Internet was slow from congestion before the caching server, it will still be slow 94 percent of the time.

– Putting in a caching server would be like upgrading your bandwidth from 100 megabits to 104 megabits to relieve congestion. The cache hits may add to the total throughput in your reports, but the 100-megabit bottleneck is still there, and to the end user there is little or no difference in perception at this point. A portion of your Internet access is still marginal or unusable during peak times, and other than the occasional web page or video loading nice and snappy, users are getting duds most of the time.
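A little arithmetic makes the point. The 4 percent byte share below is an assumption reflecting that cache hits tend to be small objects:

# Why a 6 percent request hit rate barely moves the needle on a congested link.
link_mbps = 100
request_hit_rate = 0.06     # fraction of requests answered from cache
byte_hit_rate = 0.04        # assumed fraction of total bytes those hits represent

offloaded_mbps = link_mbps * byte_hit_rate
print(f"relief from cache: ~{offloaded_mbps:.0f} Mbps "
      f"(like a {link_mbps} -> {link_mbps + offloaded_mbps:.0f} Mbps upgrade)")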

Even the largest caching server is insignificant in how much data it can store.

– The Internet is vast and your cache is not. Think of a tiny ant standing on top of Mount Everest. YouTube puts up 100 hours of new content every minute of every day. A small commercial caching server can store about 1/1000 of what YouTube uploads in a day, not to mention yesterday and the day before and last year. It's just not going to be in your cache.

So why is a NetEqualizer bandwidth controller so much better than a caching server at changing user perception of speed? Because the NetEqualizer is designed to keep Internet access from crashing, and this is accomplished by reducing the footprint of large file transfers and video downloads during peak times. Yes, these videos and downloads may be slow or sporadic, but they weren't going to work anyway, so why let them crush the interactive traffic? In the end neither caching nor equalizing is perfect, but in real-world trials the equalizer changes the user experience from slow to fast for all interactive transactions, while caching is hit or miss (pun intended).

Federal Judge Orders Internet Name be Changed to CDSFBB (Content Delivery Service for Big Business)


By Art Reisman – CTO – APconnections

Okay, so I fabricated that headline; it's not true, but I hope it goes viral and sends a message that our public Internet is being threatened by business interests and activist judges.

I'll concede our government does serve us well in some cases; it has produced some things that could not be done without its oversight, for example:

1) The highway system

2) The FAA does a pretty good job keeping us safe

3) The Internet. At least up until a derelict court ruling that will allow ISPs to give preferential treatment to content providers for a payment (bribe, or whatever you want to call it).

The ramifications of this ruling may bring an end to the Internet as we know it. Perhaps the ball was put in motion when the Internet was privatized back in 1994. In any case, if this ruling stands up, you can forget about the Internet as the great equalizer: a place where a small business can have a big web site, where a new idea on a small budget can blossom into a Fortune 500 company, where the little guy can compete on equal footing without an entry fee to get noticed. No, the tide won't turn right away, but at some point, through a series of rationalizations, content companies and ISPs with deep pockets will kill anything that moves.

This ruling establishes a legal precedent, and legal precedents with suspect DNA are like cancers: they mutate into ugly variations and replicate rapidly. There is no drug that can stop them. Obviously, the forces at work here are not the court systems themselves, but businesses with motives. The poor carriers just can't seem to find any solution to their congestion other than charging for access. Combine this with oblivious consumers who just want content on their devices, and you have a dangerous mixture. Ironically, these consumers already subsidize ISPs with a huge chunk of their disposable income. The hoodwink is on. Just as the public airwaves are controlled by a few large media conglomerates, so will go the Internet.

The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.

Squid Caching Can be Finicky


Editor's Note: For the past few weeks we have been working on tuning and testing our caching engine, working closely with some of the developers who contribute to the Squid open-source program.

Following are some of my observations and discoveries regarding Squid caching from our testing process.

Our primary mission was to make sure YouTube files cache correctly (which we have done). One of the tricky aspects of caching a YouTube file is that many of these files are considered dynamic content. Basically, this means the content contains a portion that may change with each access; sometimes the URL itself is just a pointer to a server where the content is generated fresh with each new access.

An extreme example of dynamic content would be your favorite stock quote site. During the business day much of the information on these pages is changing constantly, thus it is obsolete within seconds. A poorly designed caching engine would do much more harm than good if it served up out-of-date stock quotes.

Caching engines by default try not to cache dynamic content, and for good reason. There are two different methods a caching server uses to decide whether or not to cache a page:

1) The web designer can specifically set flags, in the HTTP headers or in the format of the actual URL, to tell caching engines whether a page is safe to cache or not.

In a recent test I set up a crawler to walk through the Excite web site and all its URLs. I use this crawler to create load in our test lab as well as to fill up our caching engine with repeatable content. I set my Squid configuration file to cache all content smaller than 4k. Normally this would generate a great many cache hits, but for some reason none of the Excite content would cache. Upon further analysis our Squid consultant found the problem:

“I have completed the initial analysis. The problem is the excite.com server(s). All of the “200 OK” excite.com responses that I have seen among the first 100+ requests contain Cache-Control headers that prohibit their caching by shared caches. There appears to be only two kinds of Cache-Control values favored by excite:

Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0

and

Cache-Control: private,public

Both are deadly for a shared Squid cache like yours. Squid has options to overwrite most of these restrictions, but you should not do that for all traffic as it will likely break some sites.”

2) The second method is a bit more passive than deliberate directives. Caching engines look at the actual URL of a page to gain clues about its permanence. A “?” in the URL implies dynamic content and is generally a red flag to the caching server. And herein lies the issue with caching YouTube files: almost all of them have a “?” embedded within their URL.

Fortunately, YouTube videos are normally permanent and unchanging once they are uploaded. I am still getting a handle on these pages, but it seems the dynamic part is used to insert different advertisements on the front end of the video. Our Squid caching server uses a normalizing technique to keep the root of the URL consistent and thus serve up the correct base YouTube file every time. Over the past two years we have had to replace our normalization technique twice in order to consistently cache YouTube files.
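To give a feel for what URL normalization means, here is an illustrative Python sketch (not the actual technique our caching engine uses; the parameter names in the KEEP set are placeholders, since the real YouTube parameters change over time):

from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Keep only the query parameters assumed to identify the video itself;
# drop per-viewer items such as ad and session tokens so repeated requests
# collapse to one stable cache key.
KEEP = {"id", "itag", "range"}

def cache_key(url):
    parts = urlparse(url)
    kept = {k: v for k, v in parse_qs(parts.query).items() if k in KEEP}
    stable_query = urlencode(sorted(kept.items()), doseq=True)
    return urlunparse((parts.scheme, parts.netloc, parts.path, "", stable_query, ""))

print(cache_key("http://example.com/videoplayback?id=abc123&itag=22&ad=42&token=xyz"))
# -> http://example.com/videoplayback?id=abc123&itag=22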
