So You Think You Have Enough Bandwidth?


There are really only two tiers of bandwidth: video for all, and video not for all. It is a fairly black and white problem. If you secure enough bandwidth that 25 to 30 percent of your users can simultaneously watch video feeds, and still have some headroom on your circuit, congratulations: you have reached bandwidth nirvana.

Why is video the lynchpin in this discussion?

Aside from the occasional iOS/Windows update, most consumers really don’t use that much bandwidth on a regular basis. Skype, chat, email, and gaming, all used together, do not consume as much bandwidth as video. Hence, the marker species for congestion is video.

Below, I present some metrics to help you decide whether you can mothball your bandwidth shaper.

1) How to determine your future bandwidth demand.
Believe it or not, you can outrun your bandwidth demand if your latest bandwidth upgrade is large enough to handle the average video load per customer. If so, it is possible that no further upgrades will be needed, at least for the foreseeable future.

In the “video for all” scenario, the rule of thumb is to assume 25 percent of your subscribers are watching video at any one time. If, under that load, you still have 20 percent of your bandwidth left over, you have reached the video for all threshold.

To put some numbers to this, assume 2,000 subscribers and a 1 gigabit link. The average video feed requires about 2 megabits (note that some HD video is higher than this). At 25 percent of subscribers watching video, that is 500 simultaneous streams at 2 megabits each, which consumes the entire 1 gigabit link and leaves nothing for anybody else; hence you will run out of bandwidth.

Now, if you have 1.5 gigabits for those same 2,000 subscribers, you have likely reached the video for all threshold, and you will most likely be able to support them without any advanced intelligent bandwidth control. A simple 10 megabit rate cap per subscriber is likely all you would need.
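
To make the arithmetic concrete, here is a minimal sketch (in Python) of the rule of thumb above. The 25 percent concurrency, 2 megabit stream size, and 20 percent headroom figures are the assumptions from this article, not measured values:

    def video_for_all(link_mbps, subscribers,
                      concurrency=0.25,   # assume 25% of subscribers watch video at once
                      stream_mbps=2.0,    # assume ~2 Mbps per video feed
                      headroom=0.20):     # want 20% of the link left over
        """Return True if the link clears the 'video for all' threshold."""
        video_load = subscribers * concurrency * stream_mbps
        return video_load <= link_mbps * (1.0 - headroom)

    # The two examples from the article:
    print(video_for_all(1000, 2000))   # 1 Gbps   -> False, video eats the whole link
    print(video_for_all(1500, 2000))   # 1.5 Gbps -> True (1000 Mbps load vs 1200 available)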

2) Honeymoon periods are short-lived.
The reprieve in congestion after a bandwidth upgrade is usually short-lived because the operator either does not have a good intelligent bandwidth control solution, or removes their existing solution, mistakenly thinking they have reached the “video for all” level. In reality, they are still in video-not-for-all territory. They are lulled into a false sense of security for a brief honeymoon period: right after the upgrade things are okay, because it takes a while for a user base to fill the void created by the new capacity.

Bottom line: Unless you have the numbers to support 25 to 30 percent of your user base running video, you will need some kind of bandwidth control.

Firewall Recipe for DDoS Attack Prevention and Mitigation


Although you cannot “technically” stop a DDoS attack, there are ways to detect and automatically mitigate its debilitating effects on your public-facing servers. Below, we shed some light on how to accomplish this without spending hundreds of thousands of dollars on a full-service security solution that may be overkill for the situation.

Most of the damage done by a targeted DDoS attack is the result of the overhead incurred on your servers from the large volume of fake inquiries into your network. Often with these attacks, it is not the volume of raw bandwidth that is the issue, but the slow response time due to the overhead on your servers. For a detailed discussion of how a DDoS attack is initiated, please visit http://computer.howstuffworks.com/zombie-computer3.htm

We assume in our recipe below that you have some sort of firewall device on your edge that can actually count hits into your network from an outside IP, and also that you can program this device to take blocking action automatically.

Note: We provide this type of service with our NetGladiator line. As of our 8.2 software update, we also provide this in our NetEqualizer line of products.

Step 1
Calculate your baseline incoming activity. This should be a running average of unique hits per minute, or perhaps per second. The important thing is that you have an idea of what is normal. Remember, we are only concerned with uninitiated hits into your network, meaning outside clients that contact you without being contacted first.

Step 2
Once you have your base hit rate of incoming queries, set a flag to take action (Step 3 below) should this hit rate exceed 1.5 standard deviations above your baseline. In other words, act when your hit rate jumps by a statistically large amount compared to your baseline for no apparent reason (i.e., you did not just mail out a newsletter).
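
For illustration, here is a minimal Python sketch of Steps 1 and 2, assuming you can export a per-minute count of uninitiated hits from your firewall. The rolling window size is an arbitrary choice; the 1.5 standard deviation trigger comes straight from this recipe:

    from collections import deque
    from statistics import mean, stdev

    class HitRateMonitor:
        """Track uninitiated hits per minute and flag statistically unusual spikes."""

        def __init__(self, window_minutes=60, threshold_sigma=1.5):
            self.samples = deque(maxlen=window_minutes)   # rolling baseline window
            self.threshold_sigma = threshold_sigma

        def record(self, hits_this_minute):
            """Feed in one minute of hit counts; return True if it breaches the baseline."""
            alarm = False
            if len(self.samples) >= 10:                   # need some history first
                baseline = mean(self.samples)
                spread = stdev(self.samples) or 1.0       # avoid a zero threshold
                alarm = hits_this_minute > baseline + self.threshold_sigma * spread
            self.samples.append(hits_this_minute)
            return alarm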

Step 3
You are at Step 3 because you have noticed a much larger than average rate of uninitiated requests into your web site. Now you need to break the hit count down by external IP. We assume that the average human will generate at most a hit every 10 seconds or so, often less, and on average will likely not generate more than 5 or 6 hits over a period of a few minutes, whereas a hijacked client attacking your site as part of a DDoS attack is likely to hit you at a much higher rate. Identify these incoming IPs and go to Step 4.

Step 4
Block these IPs on your firewall for a period of 24 hours. You don’t want to block them permanently, because they are likely just hijacked clients, and if they are coming from behind a NAT’d community (like a university) you would be blocking a large number of users who had nothing to do with the attack.
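
Continuing the sketch above, Steps 3 and 4 might be wired together roughly as follows. The per-IP cutoff and the 24-hour block window are the figures from this recipe, and block_ip/unblock_ip are hypothetical stand-ins for whatever calls your firewall actually exposes:

    import time
    from collections import Counter

    SUSPECT_HITS_PER_5_MIN = 6        # a real person rarely exceeds 5-6 hits in a few minutes
    BLOCK_SECONDS = 24 * 3600         # Step 4: block for 24 hours, not permanently

    blocked = {}                      # ip -> time (epoch seconds) when the block expires

    def find_suspects(recent_ips):
        """recent_ips: list of source IPs seen in the last few minutes of uninitiated hits."""
        counts = Counter(recent_ips)
        return [ip for ip, hits in counts.items() if hits > SUSPECT_HITS_PER_5_MIN]

    def block_suspects(suspects, block_ip):
        """block_ip is a stand-in for your firewall's blocking call."""
        for ip in suspects:
            block_ip(ip)
            blocked[ip] = time.time() + BLOCK_SECONDS

    def expire_old_blocks(unblock_ip):
        """Run periodically so hijacked clients are not punished forever."""
        now = time.time()
        for ip, expires in list(blocked.items()):
            if now >= expires:
                unblock_ip(ip)
                del blocked[ip]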

If you follow these steps, you should have a nice proactive watchdog on your firewall to mitigate the effects of any DDoS attack.

For further consulting on DDoS or other security related issues feel free to contact us at admin@apconnections.net.

Related Articles:

Defend your Web Server against DDoS Attacks – techrecipes.com

How DDoS Attacks Work, and Why They’re Hard to Stop

How to Launch a 65 gbps DDoS Attack – and How to Stop It

Net Neutrality must be preserved


As much as I hate to admit it, it seems a few of our Republican congressional leaders are “all in” on allowing large content providers to have privileged priority access on the Internet. Their goal for the 2015 Congress is to thwart the President and his mandate to the FCC on net neutrality. Can you imagine visiting Yosemite National Park and being told that the corporations that sponsor the park have taken all the campsites? Or a special lane on the Interstate dedicated exclusively to Walmart trucks? Like our highway system and our national parks, the Internet is a resource shared by all Americans.

I think one of the criteria for being a politician is a certification that you flunked any class in college that involved critical or objective thinking. Take, for example, this statement from Rep. Marsha Blackburn:

“Federal control of the internet will restrict our online freedom and leave Americans facing the same horrors that they have experienced with HealthCare.gov,”

She might as well compare the Internet to the Macy’s parade; it would make about as much sense. The Internet is a common shared utility similar to electricity and roads, and besides that, it was the government that invented and funded most of the original Internet. The healthcare system is complex and flawed because it is a socialistic redistribution of wealth, not even remotely similar to the Internet. The Internet needs very simple regulation to prevent abuse, which is about the only thing the government is designed to do effectively. And then there is the stifle-innovation argument…

Rep. Bob Goodlatte, chair of the House Judiciary Committee, said he may seek legislation that would aim to undermine the “FCC’s net neutrality authority by shifting it to antitrust enforcers,” Politico wrote.

He called any such net neutrality rules a drag on innovation and competition.

Let me translate for him, because he does not understand, or does not want to understand, the motivations of the lobbyists when they talk about stifling innovation. My words: “Regulation, in the form of FCC-imposed net neutrality, will stifle the ability of the larger access providers and content providers to create a walled-off garden, thus stifling their pending monopoly on the Internet.” There are many things I wish the government would keep its hands out of, but the Internet is not one of them. I must side with the FCC and the President on this one.

Update Jan 31st

Another win for net neutrality: the Canadian government has outlawed the practice of zero rating, which is simply a back door for a provider to favor its own content over rivals’ by giving it away free.

More lies and deceit from your ISP


Note: We believe bandwidth shaping is a necessary and very valuable tool for both ISPs and the public. We also support open, honest discussion about the need for this technology and encourage our customers to be open and honest with their customers. We do not like deception in the industry at any level, and we will continue to expose and write about it when we see it.

Back in 2007, I wrote an article for PC Magazine about all the shenanigans that ISPs use to throttle bandwidth. The article set a record for online comments for the day, and the editor was happy. At the time, I recall feeling like a lone wolf trying to point out these practices. Finally, some redemption came this morning: the FTC is flexing its muscles and taking on AT&T for false claims with respect to unlimited data.

Federal officials on Tuesday sued AT&T, the nation’s second-largest cellular carrier, for allegedly deceiving millions of customers by selling them supposedly “unlimited” data plans that the company later “throttled” by slowing Internet speeds when customers surfed the Web too much.

It seems that you can have an unlimited data plan with AT&T, but if you try to use it all the time, they slow down your speed to the point where the amount of data you get approaches zero. You get unlimited data, as long as you don’t use it – huh?  Does that make sense?

Recently, I have been doing some experiments with Comcast and my live Dropcam home video feed. It seems that if I try to watch this video feed on my business class Comcast connection (it comes down from the Dropcam cloud), the video will time out within about a minute or so. However, other people watching my feed do not have this problem. So, I am starting to suspect that Comcast is using some form of application shaper to cut off my feed (or slow it down to the point where it does not work). My evidence is only anecdotal. I am supposed to have unlimited 4 megabits up and 16 megabits down with my new business class service, but I am starting to think there may be some serious caveats hidden in this promise.

Where can you find the fastest Internet Speeds?


The fastest Internet speeds on earth can be found on any police detective show, CSI, etc. Pick a modern TV show, or movie for that matter, with a technology scene, and you’ll find that the investigators can log into the Internet from any place on earth, and the connection is perfect. They can bring up images and data files instantly, while on the move, in a coffee shop, in a hotel, it does not matter. They can be in some remote village in India or back at the office, with a super, perfectly fast connection every time. Even the bad guys have unlimited bandwidth from anywhere in the world on these shows.

So if you ever need fast Internet, find a friend who works in government or law enforcement, and ask for shared access.

On the other hand, I just spent a weekend in a small hotel where nothing worked. Their wireless was worthless: pings went unanswered for 30 seconds at a time, and my backup Verizon 4G was also sporadic, in and out. So I just gave up and read a magazine. When this happens, I wish I could just go to the Verizon backhaul at their tower and plug in a NetEqualizer; this would immediately stop their data crush.

End of thought for the day.

Notes from a cyber criminal


After a couple of recent high-profile data thefts, I put the question to myself: how does a cyber thief convert a large number of stolen credit cards into a financial windfall?

I did some research, and then momentarily put on the shoes of a cyber thief. Here are my notes and thoughts:

I am the greatest hacker in the world and I just got a hold of twenty million Home Depot debit cards and account numbers. What is my next move? Well, I guess I could just start shopping at Home Depot every day, maxing out all my stolen cards with a bunch of lawn mowers, garden hoses, and other items. How many times could I do this before I got caught? Probably not that many; I am sure the buying patterns would be flagged even before the consumers realized their cards were stolen, especially if I was nowhere near the home area code of my victim(s). And then I’d have to fence all those items to turn them into cash. But let’s assume I acted quickly and went on a Home Depot shopping spree with my twenty million cards. Since I am a big-time crook I am looking for a haul I can retire on, so I’d want to buy and fence at least a few hundred thousand dollars worth of stuff out of the gate. That is going to be quite a few Craigslist advertisements, and one logistical nightmare to move those goods, and I am also leaving a trail back to me, because at some point I have to exchange the goods with the buyer and they are going to want to pay by check. Let me re-think this…

Okay, so I am getting smarter. Forget the conventional method: what if I find some Russian portal where I can just sell the Home Depot cards and have the funds paid in Bitcoin to some third-party account that is untraceable? But how many people actually have Bitcoin accounts, how many are interested in buying stolen credit cards on the black market, and then how do I ensure the numbers have not been deactivated? Suppose I sell to some Mafia type and the cards are not valid anymore? Will they track me down and kill me? Forget the Bitcoin, I’ll have to use PayPal, again leaving a trail of some kind. So now how do I market my credit card fencing site? I have 20 million cards to move and no customers. A television advertisement, an underworld blog post? I need customers to buy these cards and I need them fast; once I start selling them, Home Depot will only take a few days to shut the cards down. Maybe I can just have an agent hawk them in Thailand for $3 each; that way I stay anonymous. Yeah, that’s what I’ll do. Whew, I’ll be happy if I can net a few thousand dollars.

Conclusion: Although the theft of data makes a great headline and is certainly not to be taken lightly, converting the bounty into a financial windfall, while possible, is most likely a far more difficult task than the theft itself. Stealing the data is one thing, but profiting from it on anything but the smallest scale is very difficult, if not impossible.

The real problem for the hacked commercial institution is not covering the loss of revenue from the theft, but the loss of company value from the loss of public trust, which can mount into the billions.

Although my main business is bandwidth control, I do spend a good deal of thought cycles on security, as on occasion the two go hand in hand. For example, some of the utilities we use on our NetEqualizer are used to thwart DoS attacks. We also have our NetGladiator product, which is simply the best and smartest tool out there for preventing an attack through your website.

Surviving iOS updates


The birds outside my office window are restless. I can see the strain in the Comcast cable wires as they droop, heavy with the burden of additional bits, weighing them down like a freak ice storm. It is time, once again, for Apple to update every device in the Universe with their latest iOS update.

Assuming you are responsible for a network with a limited Internet pipe, and you are staring down 100 or more users about to hit the accept button for their update, what can you do to prevent your network from being gridlocked?

The most obvious option to gravitate to is caching. I found this nice article (thanks, Luke) on the Squid settings used for a previous iOS update in 2013. Having worked with Squid quite a bit helping our customers, I was not surprised at the amount of tuning required to get this to work, and I suspect additional changes will be needed to make it work in 2014.

If you have a Squid caching solution already up and running, it is worth a try, but I am on the fence about recommending a Squid install from scratch. Why? Because we are seeing diminishing returns from Squid caching each year due to the amount of dynamic content. Translation: very few things on the Internet come from the same place with the same filename anymore, and many content providers are marking much of their content as non-cacheable.

If you have a NetEqualizer in place, you can easily blunt the effects of the data crunch with a standard default setup. The NetEqualizer will automatically push the updates out further in time, especially during peak hours when there is contention. This will allow other applications on your network to function normally during the day. I doubt anybody doing the update will notice the difference.

Finally, if you are desperate, you might be able to block access to anything related to the iOS update on your firewall. This might seem a bit harsh, but then again Apple did not consult with you, and besides, isn’t that what the free Internet at Starbucks is for?

Here is a snippet pulled from a forum on how to block it.

iOS devices check for new versions by polling the server mesu.apple.com. This is done via HTTP, port 80. Specifically, the URL is:

http://mesu.apple.com/assets/com_apple_MobileAsset_SoftwareUpdate/com_apple_MobileAsset_SoftwareUpdate.xml

If you block or redirect mesu.apple.com, you will inhibit the check for software updates. If you are really ambitious, you could redirect the query to a cached copy of the XML, but I haven’t tried that. Please remove the block soon; you wouldn’t want to prevent those security updates, would you?
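
If you do put a block in place, one quick way to confirm it took effect from inside your network is to poll the update URL yourself. Here is a minimal sketch using only the Python standard library; the timeout value is an arbitrary choice:

    import urllib.request
    import urllib.error

    UPDATE_URL = ("http://mesu.apple.com/assets/com_apple_MobileAsset_SoftwareUpdate/"
                  "com_apple_MobileAsset_SoftwareUpdate.xml")

    def update_catalog_reachable(timeout=5):
        """Return True if the iOS update catalog can still be fetched from here."""
        try:
            with urllib.request.urlopen(UPDATE_URL, timeout=timeout):
                return True
        except (urllib.error.URLError, OSError):
            return False    # timeout, DNS failure, or connection refused

    print("mesu.apple.com reachable:", update_catalog_reachable())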

Your Critical Business Needs Two Sources for Internet


Time Warner’s nationwide outage got my wheels turning again about how we perceive risk when it comes to network outages.

For example:

We have close to 10,000 NetEqualizer systems in the field, of which we see about 10 failures a year. If you further break down those failures by root cause, about 80 percent are due to some external event:

  • lightning
  • flood
  • heat
  • blunt trauma

Given that breakdown, the chances of a NetEqualizer failure for a well-maintained system in a properly vented environment are far less than 1 percent a year. I would also assume that for a simple router or firewall the failure rate is about the same.

Now compare those odds with the chances that your Internet provider will suffer some extended outage during the business day over the course of a full year.

I would say the odds of this happening approach 100 percent.

And yet, the perception often is that you need a hardware fail-over strategy, and that certainly is a good idea for those with critical Internet needs. But if you are truly trying to mitigate risk in order of precedence, you should address potential outages from your provider before investing in redundant hardware.

Here, again, are our top 5 reasons for an Internet outage.

Below is a list of recent publicly reported outages for various reasons. I am not intentionally picking on the larger service providers here; I do not believe they are any more or less vulnerable than smaller regional providers, they just tend to make news headlines with their outages.

Comcast Outage for North Denver Fiber cut

Comcast hit with massive Internet outage

Forum discussion about wide spread Internet outage Des Moines Iowa

Spokane Washington 10,000 customers without Internet service

Wide spread Internet outage London , Virgin Media

An Easy Way to Get Rid of Wireless Dead Spots and Get Whole Home Music


By Steve Wagor, Co-Founder APconnections

Wireless dead spots are a common problem in homes and offices that extend beyond the range of a single wireless access point. For example, in my home office, my little Linksys access point works great on my main floor, but down in my basement the signal just does not reach very well. The problem with a simple access point is that if you need to expand your coverage area you must mesh in a new one, and off the shelf they do not know how to talk to each other.

For those of you who have tried to expand your home network into a mesh with multiple access points, there are how-tos out there for rigging them up.

Many use wireless access points that are homemade, or the commercial style made for long range. With these solutions you will most likely need a rubber-ducky antenna and either some old computers or at least small single-board computers with attached wireless cards. You will also need to know a bit of networking and set most of this up via what some people would consider complex commands to link it all into the mesh.

Well, it’s a lot easier than that if you don’t need miles and miles of coverage: just use off-the-shelf Apple products. These are small devices with no external antennas.

First you need to install an Apple AirPort Extreme access point:
http://www.apple.com/airport-extreme
– at the time of this writing it is $199 and has been at that price for at least a couple of years now.

Now for every dead spot you just need an Apple AirPort Express:
http://www.apple.com/airport-express/
– at the time of this writing it is $99 and has also been at that price for at least a couple of years now.

So for every dead spot you have, you can solve the problem for $99 once the AirPort Extreme is installed. Apple also has very good install instructions for the product line, so you don’t need to be a network professional to configure it. Most of it is simple point-and-click, all done via a GUI, without ever having to go to a command line.

For whole-home music, you can fairly effortlessly use the analog/optical audio jack on the back of the AirPort Express and plug it into your stereo or externally powered speakers. Now connect your iPhone or Mac to the same wireless network provided by your AirPort Extreme, and you can use AirPlay to toggle on any or all of the stereos your network has access to. So if you let your guests access your wireless network and they have an iPhone with AirPlay, they could let you listen to anything they are playing by using AirPlay to play it on your stereo, for example while you are working out together in your home gym.

The Internet, Free to the Highest Bidder.


It looks like the FCC has caved:

“The Federal Communications Commission said on Wednesday that it would propose new rules that allow companies like Disney, Google or Netflix to pay Internet service providers.”

WSJ article April 2014

Compare today’s statements to those made back in January and February, when the FCC was posturing like a fluffed-up tom turkey for net neutrality.

“I am committed to maintaining our networks as engines for economic growth, test beds for innovative services and products, and channels for all forms of speech protected by the First Amendment”

– Tom Wheeler FCC chairman Jan 2014

“The FCC could use that broad authority to punish Internet providers that engage in flagrant net-neutrality violations, Wheeler suggested. The agency can bring actions with the goal of promoting broadband deployment, protecting consumers, or ensuring competition, for example.”

-Tom Wheeler Jan 2014

As I alluded to back then, I did not give their white knight rhetoric much credence.

“The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.”

– Art Reisman  Feb 2014

It seems to me the FCC is now a puppet agency of regulation. How can you start by talking about regulating abuses that threaten free access to the Internet, and then, without blinking an eye, offer up a statement that rich guys can now pay for privileged access to the Internet?

I don’t know whether to cry or be cynical at this point. Perhaps I should just go down to my nearest public library and pay somebody to stock the shelves with promotional NetEqualizer material?

“The court said that because the Internet is not considered a utility under federal law, it was not subject to that sort of regulation.”

Quotes referenced from the New York Times article “FCC, in Shift, Backs Fast Lanes for Web Traffic.”

Stuck on a Desert Island, Do You Take Your Caching Server or Your NetEqualizer?


Caching is a great idea and works well, but I’ll take my NetEqualizer with me if forced to choose between the two on my remote island with a satellite link.

Yes, there are a few circumstances where a caching server might have a nice impact. Our most successful deployments are in educational environments where the same video is watched repeatedly as an assignment; but for most wide-open installations, expectations of performance far outweigh reality. Let’s have a look at what works, and also drill down on expectations that are based on marginal assumptions.

From my personal archive of experience, here are some of the expectations attributed to caching that are perhaps a bit too optimistic.

“Most of my users go to their Yahoo or Facebook home page every day when they log in, and that is the bulk of all they do.”

– I doubt this customer’s user base is that conformist :), and they’ll find out once they install their caching solution. But even if true, only some of the content on Facebook and Yahoo is static. A good portion of these pages is dynamic by default, ever-changing with content. They are marked as dynamic in their URLs, which means the bulk of the page must be reloaded each time. For example, in order for caching to have an impact, the users in this scenario would have to stick to their home pages and not look at friends’ photos or other pages.

“We expect to see a 30 percent hit rate when we deploy our cache.”

You won’t see a 30 percent hit rate unless somebody designs a specific robot army to test your cache, hitting the same pages over and over again. Perhaps on iOS update day you might see the bulk of your hits going to the same large file and get a significant performance boost for a day. But overall you will be doing well if you get a 3 or 4 percent hit rate.

“I expect the cache hits to take pressure off my Internet link.”

Assuming you want your average user to experience a fast-loading Internet, this is where you really want your NetEqualizer (or similar intelligent bandwidth controller) over your caching engine. The smart bandwidth controller can re-arrange traffic on the fly, ensuring interactive hits get the best response. A caching engine does not have that intelligence.

Let’s suppose you have a 100 megabit link to the Internet, and you install a cache engine that gets a 6 percent hit rate. That would be an exceptional hit rate.

So what is the end-user experience with a 6 percent hit rate compared to pre-cache?

– First off, it is not the hit rate that matters when looking at total bandwidth. Many of those hits will likely be smallish image files from the Yahoo home page or other common sites, which account for less than 1 percent of your actual traffic. Most of your traffic is likely dominated by large file downloads, and only a portion of those may be coming from cache.

– A 6 percent hit rate means a 94 percent miss rate, and if your Internet was slow from congestion before the caching server, it will still be slow 94 percent of the time.

– Putting in a caching server would be like upgrading your bandwidth from 100 megabits to 104 megabits to relieve congestion. The cache hits may add to the total throughput in your reports, but the 100 megabit bottleneck is still there, and to the end user there is little or no difference in perception at this point. A portion of your Internet access is still marginal or unusable during peak times, and other than the occasional web page or video loading nice and snappy, users are getting duds most of the time.
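
To make the back-of-the-envelope math explicit, here is a tiny sketch. The byte hit ratio of a few percent is the assumption used in this post, not a measurement:

    def effective_link_relief(link_mbps, peak_demand_mbps, byte_hit_ratio):
        """Estimate how much peak demand a cache actually removes from the link.

        byte_hit_ratio: fraction of *bytes* served from cache (usually much lower
        than the request hit rate, since most cached objects are small).
        """
        offloaded = peak_demand_mbps * byte_hit_ratio
        residual_demand = peak_demand_mbps - offloaded
        still_congested = residual_demand > link_mbps
        return offloaded, residual_demand, still_congested

    # 100 Mbps link, 120 Mbps of peak demand, ~4% of bytes served from cache:
    print(effective_link_relief(100, 120, 0.04))
    # -> (4.8, 115.2, True)  the bottleneck is still there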

Even the largest caching server is insignificant in how much data it can store.

– The Internet is vast and your cache is not. Think of a tiny ant standing on top of Mount Everest. YouTube puts up 100 hours of new content every minute of every day. A small commercial caching server can store about 1/1000 of what YouTube uploads in a day, not to mention yesterday, the day before, and last year. It’s just not going to be in your cache.

So why is a NetEqualizer bandwidth controller so much superior to a caching server when it comes to changing user perception of speed? Because the NetEqualizer is designed to keep Internet access from crashing, and this is accomplished by reducing the footprint of large file transfers and video downloads during peak times. Yes, these videos and downloads may be slow or sporadic, but they weren’t going to work anyway, so why let them crush the interactive traffic? In the end, neither caching nor equalizing is perfect, but in real-world trials the equalizer changes the user experience from slow to fast for all interactive transactions, while caching is hit or miss (pun intended).

Federal Judge Orders Internet Name be Changed to CDSFBB (Content Delivery Service for Big Business)


By Art Reisman – CTO – APconnections

Okay, so I fabricated that headline; it’s not true. But I hope it goes viral and sends a message that our public Internet is being threatened by business interests and activist judges.

I’ll concede our government does serve us well in some cases; it has produced some things that could not be done without its oversight, for example:

1) The highway system

2) The FAA does a pretty good job keeping us safe

3) The Internet. At least up until some derelict court ruling that will allow ISPs to give preferential treatment to content providers for a payment (bribe), whatever you want to call it.

The ramifications of this ruling may bring an end to the Internet as we know it. Perhaps the ball was put in motion when the Internet was privatized back in 1994. In any case, if this ruling stands up, you can forget about the Internet as the great equalizer. A place where a small business can have a big web site. The Internet where a new idea on a small budget can blossom into a Fortune 500 company. A place where the little guy can compete on equal footing without an entry fee to get noticed. No, the tide won’t turn right away, but at some point, through a series of rationalizations, content companies and ISPs with deep pockets will kill anything that moves.

This ruling establishes a legal precedent. Legal precedents with suspect DNA are like cancers: they mutate into ugly variations and replicate rapidly. There is no drug that can stop them. Obviously, the forces at work here are not the court systems themselves, but businesses with motives. The poor carriers just can’t seem to find any solution to their congestion other than charging for access? Combine this with oblivious consumers who just want content on their devices, and you have a dangerous mixture. Ironically, these consumers already subsidize ISPs with a huge chunk of their disposable income. The hoodwink is on. Just as the public airwaves are controlled by a few large media conglomerates, so will go the Internet.

The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.

Squid Caching Can be Finicky


Editor’s Note: For the past few weeks we have been working on tuning and testing our caching engine, working closely with some of the developers who contribute to the Squid open source project.

Following are some of my observations and discoveries regarding Squid caching from our testing process.

Our primary mission was to make sure YouTube files cache correctly (which we have done). One of the tricky aspects of caching a YouTube file is that many of these files are considered dynamic content. Basically, this means a portion of the content may change with each access; sometimes the URL itself is just a pointer to a server where the content is generated fresh with each new access.

An extreme example of dynamic content would be your favorite stock quote site. During the business day, much of the information on these pages changes constantly and is obsolete within seconds. A poorly designed caching engine would do much more harm than good if it served up out-of-date stock quotes.

Caching engines by default try not to cache dynamic content, and for good reason. There are two different methods a caching server uses to decide whether or not to cache a page:

1) The web designer can explicitly set flags, such as Cache-Control directives, to tell caching engines whether a page is safe to cache or not.

In a recent test, I set up a crawler to walk through the Excite web site and all its URLs. I use this crawler to create load in our test lab, as well as to fill up our caching engine with repeatable content. I set my Squid configuration file to cache all content smaller than 4k. Normally this would generate a great deal of web hits, but for some reason none of the Excite content would cache. Upon further analysis, our Squid consultant found the problem:

“I have completed the initial analysis. The problem is the excite.com server(s). All of the “200 OK” excite.com responses that I have seen among the first 100+ requests contain Cache-Control headers that prohibit their caching by shared caches. There appears to be only two kinds of Cache-Control values favored by excite:

Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0

and

Cache-Control: private,public

Both are deadly for a shared Squid cache like yours. Squid has options to overwrite most of these restrictions, but you should not do that for all traffic as it will likely break some sites.”

2) The second method is a bit more passive than deliberate directives. Caching engines look at the actual URL of a page to gain clues about its permanence. A “?” in the URL implies dynamic content and is generally a red flag to the caching server. And herein lies the issue with caching YouTube files: almost all of them have a “?” embedded within their URL.
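
For illustration, here is a minimal sketch of the two checks just described. Real Squid behavior is far more nuanced, but the basic logic looks something like this:

    def looks_cacheable(url, cache_control=""):
        """Rough illustration of the two signals discussed above.

        1) Explicit directives: Cache-Control values like no-store, no-cache,
           or private tell a shared cache to keep its hands off.
        2) Passive clues: a '?' in the URL suggests dynamic content.
        """
        directives = {d.strip().lower() for d in cache_control.split(",") if d.strip()}
        if directives & {"no-store", "no-cache", "private"}:
            return False               # explicit header directives win
        if "?" in url:
            return False               # dynamic-looking URL is a red flag
        return True

    print(looks_cacheable("http://example.com/logo.png", "public, max-age=86400"))   # True
    print(looks_cacheable("http://example.com/page", "no-store, no-cache"))          # False
    print(looks_cacheable("http://example.com/videoplayback?id=abc123"))             # False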

Fortunately, YouTube videos are normally permanent and unchanging once they are uploaded. I am still getting a handle on these pages, but it seems the dynamic part is used for the insertion of different advertisements on the front end of the video. Our Squid caching server uses a normalizing technique to keep the root of the URL consistent and thus serve up the correct base YouTube file every time. Over the past two years we have had to replace our normalization technique twice in order to consistently cache YouTube files.
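
For readers curious what “normalizing” means here, the toy sketch below collapses the volatile query parameters down to a stable cache key. It only illustrates the idea; the parameter names are hypothetical, and this is not the actual rewrite logic our Squid deployment uses:

    from urllib.parse import urlparse, parse_qs

    def normalize_video_url(url, key_params=("id",)):
        """Collapse a dynamic-looking video URL down to a stable cache key.

        key_params: the query parameters assumed to identify the underlying
        video (hypothetical here); everything else (session tokens, ad
        insertion markers, expiry timestamps) is thrown away.
        """
        parsed = urlparse(url)
        query = parse_qs(parsed.query)
        kept = "&".join(f"{k}={query[k][0]}" for k in key_params if k in query)
        return f"{parsed.netloc}{parsed.path}?{kept}"

    # Two requests for the same video with different session/ad parameters
    # map to the same cache key:
    a = normalize_video_url("http://video.example.com/videoplayback?id=abc123&session=111&ad=7")
    b = normalize_video_url("http://video.example.com/videoplayback?id=abc123&session=222&ad=9")
    print(a == b)   # True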

Network User Authentication Using Heuristics


Most authentication systems are black and white: once you are in, you are in. It was brought to our attention recently that authentication should be an ongoing process, not a one-time gate with continuous, unchecked free rein once you are inside.

The reasons are well founded.

1) Students at universities and employees at businesses have all kinds of devices, which can get stolen or borrowed while open.

My high school kids can attest to this many times over. Often the result is just an innocuous string of embarrassing texts emanating from their phones claiming absurd things, for example “I won’t be at the party, I was digging for a booger and got a nose bleed,” blasted out to their friends after they left their phone unlocked.

2) People will also deliberately give out their authentication credentials to friends and family.

This leaves a hole in standard authentication strategies.

Next year we plan to add an interesting twist to our intrusion detection device (NetGladiator). The idea was actually not mine; it was suggested recently by a customer at our user group meeting in Western Michigan.

Here is the plan.

The idea is for our intrusion detection device to build a knowledge base of a user’s habits over time and then feed any kind of abrupt change from those established patterns into a tiered alert system.

It should be noted that we would not be monitoring content, and thus we would be far less invasive than Google’s Gmail with its targeted advertisements; we would primarily just follow the trail or path of usage, not read content.

The heuristics would consist of a three-pronged model.

Prong one would look at general trending access across all users globally. If an aggregate group of users on the network were downloading an iOS update, then this behavior would be classified as normal for individual users.

Prong two would look at the pattern of usage for the authenticated user. For example, most people tune their devices to start at a particular page. They also likely use a specific e-mail client, and have their favorite social networking sites. String together enough of these and you develop a unique footprint for that user. Yes, the user could deviate from their established pattern, as long as there were still elements of their normal usage in their access patterns.

Prong three would be the alarming level. In general, a user would receive a risk rating when they deviated into suspect behaviors outside their established baseline. Yes, this is profiling, similar to the psychological profiling on employment tests, which are very accurate at predicting future behavior.

A simple example of a risk factor would be a user who all of a sudden starts executing login scripts en masse, outside of their normal pattern. Something this egregious would be flagged as high risk, and the administrator could specify an automatic disconnection for the user at a high risk level. Lower-risk behavior would be logged for after-the-fact forensics in case any internal servers became compromised.
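
To make the three-pronged model concrete, here is a minimal sketch of how such a risk score might be assembled. The weights, thresholds, and feature names are all hypothetical; this illustrates the idea, not the planned NetGladiator implementation:

    HIGH_RISK = 0.8   # hypothetical cutoff for automatic disconnection
    LOG_ONLY = 0.4    # below this, just log for later forensics

    def risk_score(event, user_profile, global_trends):
        """Score one access event against the three prongs described above."""
        # Prong one: behavior common across the whole network is considered normal.
        if event["destination"] in global_trends["popular_now"]:
            return 0.0
        score = 0.0
        # Prong two: deviation from the user's own established footprint.
        if event["destination"] not in user_profile["usual_destinations"]:
            score += 0.3
        if event["client"] != user_profile["usual_client"]:
            score += 0.2
        # Prong three: inherently suspicious actions raise the alarm level.
        if event.get("mass_logins"):
            score += 0.6
        return min(score, 1.0)

    def handle(event, user_profile, global_trends, disconnect):
        score = risk_score(event, user_profile, global_trends)
        if score >= HIGH_RISK:
            disconnect(event["user"])           # hypothetical enforcement hook
        elif score >= LOG_ONLY:
            print("logging for forensics:", event["user"], score)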

Latest Notes on the Peer to Peer Front and DMCA Notices


Just getting back from our tech talk seminar today at Western Michigan University. The topic of DMCA requests came up in our discussions, and here are some of my notes on the subject.

Background: The DMCA (more precisely, the enforcement agents working under it for the motion picture copyright holders) tracks down users hosting illegal content and sends out infringement notices.

They seem to sometimes shoot first and ask questions later when sending out their notices; more specific detail follows below.

Unconfirmed rumor has it that one very large university in the state of Michigan just tosses the requests in the garbage and does nothing with them, and I have heard of other organizations taking this tack. They basically claim that the DMCA’s problem is not the responsibility of the ISP.

I am also aware of a sovereign Caribbean country that ignores them. I am not advocating this as a solution, just an observation.

There was also a discussion of how the DMCA enforcers discover copyright violators from the outside.

As standard practice, most network administrators use their firewall to block uninitiated requests into the network from the outside. With this type of firewall setting, an outsider cannot just randomly probe a network to find out what copyrighted material is being hosted; it must be invited in first by an outgoing request.

An analogy: if you show up at my door uninvited and knock, my doorman is not going to let you in, because there is no reason for you to be at my door. But if I order a pizza and you show up wearing a pizza delivery shirt, my doorman is going to let you in. In the world of p2p, the invite into the network is a bit more subtle, and most users are not aware they have sent out the invite, but it turns out any user with a p2p client is constantly sending out requests to p2p super nodes to obtain information on what content is out there. Doing so opens the door on the firewall to let the p2p super node into the network. The DMCA-run p2p super nodes just look like another web site to the firewall, so it lets them in. Once in, they read the directories of p2p clients.

In one instance, the DMCA enforcers were not really inspecting files for copyrighted material, but were only checking titles. A music student who recorded his own original music, but named his files after established artists and songs based on the style of each song, was flagged erroneously with DMCA notifications based on his naming convention. The school’s security staff examined his computer and determined the content was not copyrighted at all. What we can surmise from this account is that the DMCA was probing the network directories and not actually looking at the content of the files to see if they were truly in violation of copying original works.

Back to the how-does-the-DMCA-probe theory. The consensus was that it is very likely the DMCA is actually running super nodes, so they get access to client directories. The super node is a server node that p2p clients contact to get advice on where to get music and movie content (pirated, most likely). The speculation among the user group, and these are very experienced front-line IT administrators who have seen just about every kind of p2p scheme, is that since the DMCA super node is contacted by their student network first, the door opens for the super node to come back and probe for content. In other words, the super node looks like the pizza delivery guy where you place your orders.

It was also further discussed, and this theory is still quite open, that sophisticated p2p networks try to cut out the DMCA spy super nodes. This gets more convoluted than peeling off character masks in a Mission Impossible movie. The p2p network operators need super nodes to distribute content, but these nodes cannot be permanently hosted; they must live in the shadows and are perhaps themselves parasites on client computers.

So the questions that remain for future study on this subject are: how do the super nodes get picked, and how does a p2p network disable a spy DMCA super node?
