Application Shaping and Encryption on a Collision Course


Art Reisman, CTO APconnections

I have had a few conversations lately where I have mentioned that, due to increased encryption, application shaping is really no longer viable. Without context, this statement evokes some quizzical stares, and thus inspired me to expound.

I believe that due to increased use of encryption, Application Shaping is really no longer viable…

Yes, there are still ways to censor traffic and block web sites, but shaping traffic – as in allocating a fixed amount of bandwidth for a particular type of traffic – is becoming a thing of the past. And here is why.

First, a quick primer on how application shaping works.

When an IP packet with data comes into the application shaper, the shaper opens the packet and looks inside. In the good old days, the shaper would see the data inside the packet the same way it appeared in context on a web page. For example, when you loaded the post that you are reading now, the actual text was transported from the WordPress host server across the Internet to you, broken up into a series of packets. The only difference between the text on the page and the text crossing the Internet is that the text in the packets is chopped up into segments (about 1,500 bytes per packet is typical).

Classifying traffic in a packet shaper requires intercepting packets in transit and looking inside them for particular patterns associated with applications (such as YouTube, Netflix, BitTorrent, etc.). These signatures are what are called application patterns. The packet shaping appliance looks at the text inside the packets and attempts to identify unique sequences of characters, using a pattern matcher. Packet shaping companies, at least the good ones, spend millions of dollars a year keeping up with the patterns of ever-changing applications.
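To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of signature matching a shaper performs. The signature strings are illustrative placeholders I made up (though the "BitTorrent protocol" string really does appear in cleartext BitTorrent handshakes), not a real vendor's pattern library.

    # Minimal sketch of Layer 7 signature matching. The SIGNATURES table is
    # illustrative; real shapers maintain thousands of ever-changing patterns.
    SIGNATURES = {
        b"BitTorrent protocol": "bittorrent",   # appears in BitTorrent handshakes
        b"GET /videoplayback": "video-stream",  # hypothetical example pattern
    }

    def classify_packet(payload: bytes) -> str:
        """Return an application label if a known pattern appears in the payload."""
        for pattern, app in SIGNATURES.items():
            if pattern in payload:
                return app
        return "unclassified"

    # A cleartext packet is trivially classified...
    print(classify_packet(b"\x13BitTorrent protocol plus handshake data"))
    # ...while random-looking (e.g., encrypted) bytes match nothing.
    print(classify_packet(b"\x8f\x02\xa9\x1c\xd4\x7b"))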

Perhaps you have used HTTPS or SSH. These are standard security protocols built into a growing number of websites. When you access a web page from a URL starting with HTTPS, the website is using encryption, and the text gets scrambled in a different way each time it is sent out. Since the scrambling is unique for every user accessing the site, there is no one set pattern, and so an application shaper cannot classify the traffic. Hence the old methods used by packet shapers are no longer viable.
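You can see this scrambling for yourself. The short script below stands in for the TLS record layer, encrypting the same plaintext twice with AES-GCM (via the third-party cryptography package); because each encryption uses a fresh nonce, the two ciphertexts share no common byte sequence for a pattern matcher to key on. This is a sketch of the principle, not of any particular shaper's test.

    # Demonstrates why encryption defeats pattern matching: identical plaintext
    # encrypts to completely different bytes each time a fresh nonce is used.
    # Requires the third-party package: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)
    plaintext = b"GET /videoplayback HTTP/1.1"  # a recognizable cleartext pattern

    c1 = aead.encrypt(os.urandom(12), plaintext, None)
    c2 = aead.encrypt(os.urandom(12), plaintext, None)

    print(c1.hex())
    print(c2.hex())
    print(c1 == c2)  # False: no stable signature survives encryption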

Does this also mean that you cannot block a website with a Web Filter when HTTPS is used?

I deliberately posed this question to highlight the difference between filtering a site and using application shaping to classify traffic. A site cannot typically hide the originating URL, as the encryption will not begin until there is an initial handshake. A web filter blocks a site based on the URL, thus blocking technology is still viable for preventing access to a website. Once the initial URL is known, the data transfer is often set up on another transport port, and there is no URL involved in the transfer. Thus the packet shaper has no idea where the data stream came from, nor is there any pattern that can be discerned in the encrypted stream.

So the short answer is that you can block a website using a web filter, even when HTTPS is used. However, as we have seen, the same does not apply to shaping the traffic with an application shaper.
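For the curious, the reason the hostname is visible is that the TLS handshake's ClientHello message carries the requested server name in cleartext, in the SNI extension, before any encryption starts. Below is a simplified sketch of how a filter could pull that name out of the first packet of an HTTPS connection; a production filter would need real error handling and support for protocol variations.

    # Sketch: extract the cleartext server name (SNI) from a raw TLS ClientHello.
    # Simplified for illustration; real filters must handle fragmentation,
    # malformed records, and newer TLS variants.
    def extract_sni(client_hello: bytes):
        if len(client_hello) < 43 or client_hello[0] != 0x16:
            return None                      # not a TLS handshake record
        pos = 9 + 2 + 32                     # record+handshake headers, version, random
        pos += 1 + client_hello[pos]         # skip session id
        pos += 2 + int.from_bytes(client_hello[pos:pos + 2], "big")  # cipher suites
        pos += 1 + client_hello[pos]         # skip compression methods
        pos += 2                             # skip total extensions length
        while pos + 4 <= len(client_hello):
            ext_type = int.from_bytes(client_hello[pos:pos + 2], "big")
            ext_len = int.from_bytes(client_hello[pos + 2:pos + 4], "big")
            if ext_type == 0:                # extension type 0 = server_name (SNI)
                name_len = int.from_bytes(client_hello[pos + 7:pos + 9], "big")
                return client_hello[pos + 9:pos + 9 + name_len].decode()
            pos += 4 + ext_len
        return None

Feed a function like this the first client-to-server packet of an HTTPS connection and it returns a name such as www.youtube.com, which a filter can block on; everything after the handshake, however, is opaque to a pattern matcher.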

Net Neutrality must be preserved


As much as I hate to admit it, it seems a few of our Republican congressional leaders are “all in” on allowing large content providers to have privileged priority access on the Internet. Their goal for the 2015 Congress is to thwart the President and his mandate to the FCC on net neutrality. Can you imagine going to visit Yosemite National Park and being told that the corporations that sponsor the park have taken all the campsites? Or a special lane on the Interstate dedicated exclusively to Walmart trucks? Like our highway system and our national parks, the Internet is a resource shared by all Americans.

I think one of the criteria for being a politician is a certification that you flunked any class in college that involved critical or objective thinking. Take, for example, this statement from Rep. Marsha Blackburn:

“Federal control of the internet will restrict our online freedom and leave Americans facing the same horrors that they have experienced with HealthCare.gov,”

She might as well compare the Internet to the Macy’s parade; it would make about as much sense. The Internet is a common shared utility similar to electricity and roads, and besides that, it was the government that invented and funded most of the original Internet. The healthcare system is complex and flawed because it is a socialistic redistribution of wealth, not even remotely similar to the Internet. The Internet needs very simple regulation to prevent abuse, which is about the only thing the government is designed to do effectively. And then there is the stifle-innovation argument…

Rep. Bob Goodlatte, chair of the House Judiciary Committee, said he may seek legislation that would aim to undermine the “FCC’s net neutrality authority by shifting it to antitrust enforcers,” Politico wrote, calling any such net neutrality rules a drag on innovation and competition.

Let me translate for him, because he does not understand, or does not want to understand, the motivations of the lobbyists when they talk about stifling innovation. My words: “Regulation, in the form of FCC-imposed net neutrality, will stifle the ability of the larger access providers and content providers to create a walled-off garden, thus stifling their pending monopoly on the Internet.” There are many things I wish the government would keep its hands out of, but the Internet is not one of them. I must side with the FCC and the President on this one.

Update Jan 31st

Another win for net neutrality: the Canadian government has outlawed the practice of zero rating, which is simply a back door for a provider to favor certain content over rivals’ by making it free to access.

Do hotels ever block your personal Wi-Fi?


Apparently at least one hotel does. We had written an article hinting that this might be the case back in 2010. Hotel operators at the time were hurting from the loss of phone call charges as customers turned to their cell phones, and were looking for creative ways to charge for Internet service.

Hence I was not surprised to see this article today.

FCC: Marriott blocked guests’ personal Wi-Fi, charged for Net access

Federal Communications Commission fines Marriott $600,000 after deciding it illegally interfered with conventiongoers’ hot spots in Nashville. Marriott says it did nothing wrong.

In its judgment, the FCC said “Marriott employees had used containment features of a Wi-Fi monitoring system at the Gaylord Opryland to prevent individuals from connecting to the Internet via their own personal Wi-Fi networks, while at the same time charging consumers, small businesses and exhibitors as much as $1,000 per device to access Marriott’s Wi-Fi network.”


More lies and deceit from your ISP


Note: We believe bandwidth shaping is a necessary and very valuable tool for both ISPs and the public. We also support open, honest discussion about the need for this technology and encourage our customers to be open and honest with their customers. We do not like deception in the industry at any level and will continue to expose and write about it when we see it.

Back in 2007, I wrote an article for PC Magazine about all the shenanigans that ISPs use to throttle bandwidth. The article set a record for online comments for the day, and the editor was happy. At that time, I recall feeling like a lone wolf trying to point out these practices. Finally, some redemption came this morning. The FTC is flexing its muscles; it is now taking on AT&T for false claims with respect to unlimited data.

Federal officials on Tuesday sued AT&T, the nation’s second-largest cellular carrier, for allegedly deceiving millions of customers by selling them supposedly “unlimited” data plans that the company later “throttled” by slowing Internet speeds when customers surfed the Web too much.

It seems that you can have an unlimited data plan with AT&T, but if you try to use it all the time, they slow down your speed to the point where the amount of data you get approaches zero. You get unlimited data, as long as you don’t use it – huh? Does that make sense?

Recently, I have been doing some experiments with Comcast and my live Dropcam home video feed. It seems that if I try to watch this video feed on my business-class Comcast connection (it comes down from the Dropcam cloud), the video will time out within about a minute or so. However, other people watching my feed do not have this problem. So, I am starting to suspect that Comcast is using some form of application shaper to cut off my feed (or slow it down to the point where it does not work). My evidence is only anecdotal. I am supposed to have unlimited 4 megabits up and 16 megabits down with my new business-class service, but I am starting to think there may be some serious caveats hidden in this promise.
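For anyone who wants to move beyond anecdote, a rough test is easy to script: pull a large file or stream and log throughput over time, then compare runs across ISPs or times of day. The sketch below uses Python’s requests library; the URL is a placeholder, not a real endpoint.

    # Rough throttling test: log rolling download throughput over time.
    # A consistent drop after N seconds on one ISP (but not another) hints
    # at shaping. Requires: pip install requests
    import time
    import requests

    TEST_URL = "https://example.com/large-test-file"  # placeholder endpoint

    def measure_throughput(url, interval=5.0):
        """Print throughput in kbit/s over each `interval`-second window."""
        start = window_start = time.time()
        window_bytes = 0
        with requests.get(url, stream=True, timeout=30) as resp:
            for chunk in resp.iter_content(chunk_size=65536):
                window_bytes += len(chunk)
                now = time.time()
                if now - window_start >= interval:
                    kbps = window_bytes * 8 / (now - window_start) / 1000
                    print(f"t={now - start:6.1f}s  {kbps:8.1f} kbit/s")
                    window_start, window_bytes = now, 0

    measure_throughput(TEST_URL)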

Where can you find the fastest Internet Speeds?


The fastest Internet speeds on earth can be found on any police detective show, CSI, etc. Pick a modern TV show, or movie for that matter, with a technology scene, and you’ll find that the investigators can log into the Internet from any place on earth, and the connection is perfect. They can bring up images and data files instantly, while on the move, in a coffee shop, in a hotel – it does not matter. They can be in some remote village in India or back at the office: a superbly fast connection every time. Even the bad guys have unlimited bandwidth from anywhere in the world on these shows.

So if you ever need fast Internet, find a friend who works in government or law enforcement, and ask for shared access.

On the other hand, I just spent a weekend in a small hotel where nothing worked. Their wireless was worthless – pings went unanswered for 30 seconds at a time – and my backup Verizon 4G was also sporadic, in and out. So I just gave up and read a magazine. When this happens, I wish I could just go to the Verizon backhaul at their tower and plug a NetEqualizer in; this would immediately stop their data crush.

End of thought for the day.

15 Years to Cross the Technology Chasm?


Final Jeopardy Answer

Fat Pipe/Thin Client, E-mail, VoIP, Equalizing

And the Question is…

What are some recent technologies that took a minimum of 15 years to cross the chasm from initial viability to widespread commercial acceptance?

Being old allows me to recall, with some historical perspective, the timeframe it takes for a technology to make it from production prototype into the mainstream. It is usually much longer than I have patience for. Today, when I see an emerging technology that is obviously superior to what the world is using, I always expect the adoption to take a few weeks. In reality, 50 years is close to the historical norm, and 15 years is light-speed for a product to go from concept to societal norm.

For example, refrigeration and commercial air travel took 50+ years to cross the chasm. And I am not talking about from the crude idea stage to reality, but rather from working prototype to widespread acceptance. It was about fifty years from the first stable airplane to the regular commercial air travel of the late 1950s. I should be happy that many of the world’s technologies are maturing in 15 years, right?

From my historical observations, and a bit of lazy-man research on Wikipedia (http://www.wikipedia.org/), here are some recently completed 15-year chasm crossings.

  • Before cloud computing, we had Fat Pipe/Thin Clients.

    This was all the rage of a keynote speech by an Apple exec back in 1999 at a wireless conference in San Jose. I remember the speech well, as the exec spent the first 15 minutes making fun of Microsoft and their crotchety, cumbersome desktop market. Now, 15 years later, we can officially say that cloud computing has overtaken the bloated desktop computer, and small thin devices are the norm to connect with.

  • E-mail has always been around? 

    Well, it did not take off until the late ’90s, more than 15 years after it was in wide use in the educational system. Yes, some early adopters had AOL dial-up accounts with e-mail, but even as late as 1995, voice mail was the dominant player for receiving non-real-time messages. I remember this because I worked for a company that was in the voice messaging business (their logo looks like the Star Wars Death Star), and we were basically ignoring the e-mail market, rolling out a major voice mail product release with huge expectations as late as 1995. Yes, we were pushing other forms of communications – Lotus Notes was a big player then also – but e-mail hit that acceptance curve somewhere in the late ’90s to early 2000s.

  • VoIP PBX

    Also at that same company, in the early ’90s, we thought VoIP was the greatest thing since sliced bread, and we were making quality PBXs that supported VoIP. In this case there was plenty of natural resistance to acceptance:

  1. The economic cash cow of embedded PBXs pushed the life-span of legacy systems out a few years.
  2. There was also plain fear of using a new technology for something as important as an enterprise phone system. I would estimate that VoIP PBXs started to outnumber the legacy installed base around 2005 or perhaps later.
  • NetEqualizer

    Equalizing technology for reining in bandwidth abuse has always been superior to Layer 7 shaping, which, incidentally, rose up in just 5 years, from 1995 to 2000. Equalizing has taken 15 years and is still on a linear acceptance curve. There are several reasons for this:

1) The Equalizing concept required crossing a chasm from the traditional thinking of intuitive, hands-on control to a heuristic approach, which is not always obvious to the non-technical decision maker.

2) The table below depicts how transit bandwidth prices have dropped exponentially in the past 15 years. This has squeezed the more expensive devices out of the market, and slowed the need a bit at the NetEqualizer price point.

 
Year | Internet Transit Price per Mbps (min commit) | % Decline
1998 | $1,200                                       | –
1999 | $800                                         | 33%
2000 | $675                                         | 16%
2001 | $400                                         | 40%
2002 | $200                                         | 50%
2003 | $120                                         | 40%
2004 | $90                                          | 25%
2005 | $75                                          | 17%
2006 | $50                                          | 33%
2007 | $25                                          | 50%
2008 | $12                                          | 52%
2009 | $9.00                                        | 25%
2010 | $5.00                                        | 44%
2011 | $3.25                                        | 35%
2012 | $2.34                                        | 28%
2013 | $1.57                                        | 33%
2014 | $0.94                                        | 40%
2015 | $0.63                                        | 33%

Source: DrPeering.net
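As a quick sanity check on the table, a couple of lines of Python show that the fall from $1,200 to $0.63 over 17 years works out to roughly a 36% compound annual decline, consistent with the 25–50% year-over-year drops listed above.

    # Compound annual decline implied by the table's endpoints.
    first, last, years = 1200.0, 0.63, 2015 - 1998

    cagr = (last / first) ** (1 / years) - 1
    print(f"Average annual change: {cagr:.1%}")  # about -35.9% per year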

3) NetEqualizer has stayed with a direct sales channel for the most part. The land-grab mentality of investing in a worldwide sales channel and going fast looks impressive, but with dropping bandwidth prices in some markets, it is not a sustainable model due to the channel costs.

So what will come to maturity 15 years from now?

In my opinion the following technologies will have crossed the chasm in 2029:

1) Automobiles with standard braking sensors to avoid collisions will be the norm in 15 years.

2) Drones everywhere, for anything traveling quickly that is not a human. But I think widespread commercial use will be 20+ years out.

3) House automation. You won’t be flipping switches to turn anything on or off in 15 years in a new house.

What are your predictions for 15 years out?

Is Your Bandwidth Controller Obsolete Technology?


Although not free yet, bandwidth contracts have been dropping in cost faster than a bad stock during a recession. With cheaper bandwidth costs, the question often arises whether an enterprise can do without its trusty bandwidth controller.

Below, we have compiled a list of factors that will determine whether bandwidth controllers stick around for a while, or go the route of the analog modem, a relic of when people received their Internet from AOL and dial-up.

  • In many areas of the world, bandwidth prices are still very high. Most of Africa, for example, and also parts of the Middle East, do not have the infrastructure in place to deliver high-speed, low-cost circuits. Bandwidth controllers are essential equipment in these regions.
  • Even in countries where bandwidth infrastructure is subsidized, and urban access is relatively cheap, people like to work and play in remote places. Bandwidth consumers have come to expect bandwidth while choosing to live in a remote village, and many of these lifestyle choices find people far away from the main fiber lines that crisscross the urban landscape. Much like serving fresh seafood in a mining camp, providing bandwidth to remote locations carries a high price, and bandwidth controllers are more essential than ever in the remote areas of developed countries. For example, we are seeing a pick-up in NetEqualizer interest from luxury resort hotels on tropical islands and national parks, where high-speed Internet is now a necessity but is not cheap.
  • Government spending on Internet infrastructure has fallen out of favor, at least in the US. After the recent waste and fraud scandals, don’t expect another windfall like the broadband initiative any time soon. Government subsidies were a one-time factor in the drop in bandwidth prices during the 2007 to 2010 time frame.
  • As the market matures and providers look to show profit, they will be tempted to raise prices again, especially as demand grows. The recession of 2007 drove down some commercial demand at a time when there were significant increases in infrastructure capacity; we may be at the tail end of that deflationary bubble.
  • There was also a one-time infrastructure enhancement that gained momentum around 2007 and compounded the deflationary pressure on bandwidth: WDM technology allowed existing fiber to carry up to 16 times its originally planned capacity. We don’t expect any new infrastructure innovations of that magnitude any time soon. Moore’s law has finally cracked (proved false) in the computer industry, and so will the honeymoon increases in the carrying capacity of fiber.
  • Lastly, the wireless frequencies are crowded beyond capacity, bandwidth is still hard to find there, and operators are running out of tricks.
  • We must concede that we have seen cases where customers are getting bandwidth at such a low cost that they forgo investing in bandwidth controllers, but we expect that trend to flatten out as bandwidth prices hold steady or start to creep back up a bit in the coming decade.

Stay tuned.

The Internet, Free to the Highest Bidder.


It looks like the FCC has caved:

“The Federal Communications Commission said on Wednesday that it would propose new rules that allow companies like Disney, Google or Netflix to pay Internet service providers.”

WSJ article April 2014

Compare today’s statements to those made back in January and February, when the FCC was posturing for net neutrality like a fluffed-up tom turkey.

“I am committed to maintaining our networks as engines for economic growth, test beds for innovative services and products, and channels for all forms of speech protected by the First Amendment”

– Tom Wheeler FCC chairman Jan 2014

“The FCC could use that broad authority to punish Internet providers that engage in flagrant net-neutrality violations, Wheeler suggested. The agency can bring actions with the goal of promoting broadband deployment, protecting consumers, or ensuring competition, for example.”

-Tom Wheeler Jan 2014

As I alluded to back then, I did not give their white knight rhetoric much credence.

“The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.”

– Art Reisman  Feb 2014

It seems to me the FCC is now a puppet agency of regulation. How can you start by talking about regulating abuses threatening free access to the Internet, and then, without blinking an eye, offer up a statement that rich guys can now pay for privileged access to the Internet?

I don’t know whether to cry or be cynical at this point. Perhaps I should just go down to my nearest public library and pay somebody to stock their shelves with promotional NetEqualizer material?

“The court said that because the Internet is not considered a utility under federal law, it was not subject to that sort of regulation.”

Quotes referenced from the New York Times article “F.C.C., in a Shift, Backs Fast Lanes for Web Traffic”.

An Entrepreneur’s Guide to the Headwinds of Change


By Art Reisman

If you have ever done something innovative, you’ll find that most technology advances require some sort of change in behavior on the part of the target customer (consumer or business). The larger the organization, the less likely it is to embrace that change; many times organizations are downright hostile toward change.

I attended an entrepreneur group last month where a company is going to market with a smart window that changes reflectivity with outside temperature. The demand and value for this product clearly distinguish it as a cost-savings winner, and yet, because of entrenched ideas about smart windows, it has been a 12-year battle of sacrifice and pain for founder Wil McCarthy to get his product to market.

Like eating an unknown berry when you have other food sources, the human resistance reflex is perhaps deep-seated and evolutionary.  On the other hand, people will react to perceived threats almost immediately, a topic I wrote about previously.

Here are some of the governing factors that limit the ability of a new product to change or enhance a market space.

1) Cash. Most businesses have already spent their spare cash before they have accumulated it. They don’t keep a kitty around for a new idea. Typically they have a long list of existing ideas politically competing for a limited amount of investment:

  • The sports bar could use new big-screen TVs
  • The factory needs new automated robots
  • The office building could save money with new thermal windows

The list of productivity enhancements for any business is endless, so when an entrepreneur comes along with a newfangled idea to help a business, the newcomer must compete with cash needs within the organization. The new product/idea must be either superior to the existing list, or you must politically work your way to the top of the list. Barring some rare financial good times, like the stock bubble of 2000, all new products must face the reality of “cash flow politics”.

2) Credibility

Perhaps this line item should have come before cash, but realistically none of these are optional.

I am normally not a big supporter of using your friends and relatives as market validators (they will never give you honest feedback if your idea has a flaw), but you’ll need some friendly reviews of some kind to vouch for your product as a reference. So get a few people to try your product or service and write a review. Don’t be bashful about aggressively pursuing your references to say something. There are no rules here other than that the references must be verifiable – as long as you know your product works and are willing to stand behind it. Don’t worry about how prestigious your reviewers are – just get someone to agree, get a quote or two, and then set up a professional-looking web page with the contact information for your references.

Note: When you set up a web page, you’ll likely use a template. Make sure to fill out every aspect of the template; there is nothing worse than going to a business web site and finding templates that have not been filled out with thoughtful content.

3) Market Research, will they buy?

This does tie back to the Cash question.

Most young, first-time entrepreneurs assume that since they like their idea, and all their friends like the idea, then people will buy their product or service.

The best thing you can do to get real, honest feedback is to sell your product early based on a description. Call it a pre-order, or a promise to purchase. Figure out some way to find out whether a real customer is willing to pay any amount for your product before you spend thousands of hours building it.

The level of commitment could be something as simple as a registration form for a discount on your website. If somebody takes the time to fill out a form on your web site, they are likely to buy later… I don’t have the cycles to explain all the ways to do this type of validation, but I can tell you that treating the narrow audience of your friends and family as conclusive market research is very misleading. Sure, they may buy it, but you need to engage a stranger and get them to commit to something. The registration for a discount is just one simple way to confirm a higher level of commitment before you invest too heavily in the idea.

4) It is best if your product returns something ($).

People and businesses like things that make money for them. This scheme is exploited to the hilt in vertical marketing (e.g., Amway Corporation and such…). I am no fan of vertical marketing, but my point is that if you offer your customers a way to make money, you’ll get their attention. A better example would be the ATM business: banks pay convenience stores to place ATMs in their stores. Another example is pay-per-view TV in hotels. LodgeNet grew into a multi-billion dollar company by offering small hotels a share of the revenue for their pay-per-view. What do you think works better when approaching a hotel operator? 1) selling pool cleaning supplies, or 2) a pay-per-view movie system that adds revenue to their bottom line.

Although these types of revenue generating ideas may require a change in your entrepreneurial thought process, they will greatly increase your chances of financial success.

Why Does Fear Sell over Value for IT?


When Willie Sutton was asked, “Why do you rob banks?” he replied, “Because that is where the money is.”

Why do companies sell fear? Ask Willie Sutton. :)

From Y2K and ozone holes to IPv4 address space, sales channels love a good crisis to drive a sale. The funny thing is, from my experience, the process of adjusting a product line to accommodate customer fear is evolutionary, akin to natural selection, and not a pre-planned conspiracy. Demand seems to be created from some external, uncontrolled upwelling, and not from a hard sell within the vendor ranks.

For example, back in 2008, we had a little mini-boom selling our product to meet the demand created by the CALEA laws – we ended up supporting this feature because our customers demanded it. The demand (fear) for CALEA came out of left field; not once did we push this solution, and yet everybody wanted it. I had a similar experience back in 2000 with Y2K. The product managers for the systems I worked on at AT&T were beating down my door to come up with any type of Y2K inoculation update I could muster. I can assure you that, with all due respect, the product managers at AT&T were not savvy enough to generate this demand; it came from the customers and the media.

Why fear trumps value.

They say the stock market is driven by fear and greed. CNN actually has a fear-and-greed meter. With IT technology sales, fear and cost are the driving factors.

Normally, businesses are cost-conscious with their IT decisions; after all, IT purchases do not normally generate revenue, and the idea is to spend as little as possible. It makes sense that a business would carefully analyze these expenditures.

Fear, real or imagined, on the other hand, can force a CIO into immediate decision making, and less scrutiny on cost.

What fears are currently driving the market?

Security

Security is always in the conversation.  The market in this area is fairly mature and about 90 percent of what is purchased is for CYA or regulatory reasons.

Falling Behind

This is more like keeping up with the Joneses: if your competitor has large flat-screen monitors in their control center, you want them too. Their actual effect on the bottom line may not be clear, but you buy anyway so as not to appear obsolete.

Education

I seriously doubt that investment in classroom technology, for things like “one-to-one” iPads per student as a teaching aid, is really making anybody smarter, but the perception is that you must have it.

What’s on the horizon?

I’ll cover this next week.

Stuck on a Desert Island, Do You Take Your Caching Server or Your NetEqualizer?


Caching is a great idea and works well, but I’ll take my NetEqualizer with me if forced to choose between the two on my remote island with a satellite link.

Yes, there are a few circumstances where a caching server can have a nice impact. Our most successful deployments are in educational environments where the same video is watched repeatedly as an assignment; but for most wide-open installations, expectations of performance far outweigh reality. Let’s have a look at what works, and also drill down on expectations that are based on marginal assumptions.

From my personal archive of experience, here are some of the expectations attributed to caching that are perhaps a bit too optimistic.

“Most of my users go to their Yahoo or Facebook home page every day when they log in, and that is the bulk of all they do.”

– I doubt this customer’s user base is that conformist :), and they’ll find out once they install their caching solution. But even if it were true, only some of the content on Facebook and Yahoo is static. A good portion of these pages is by default dynamic and ever-changing, and they are marked as dynamic in their URLs, which means the bulk of the page must be reloaded each time. In order for caching to have an impact, the users in this scenario would have to stick to their home pages and not look at friends’ photos or other pages.

“We expect to see a 30 percent hit rate when we deploy our cache.”

You won’t see a 30 percent hit rate unless somebody designs a specific robot army to test your cache, hitting the same pages over and over again. Perhaps, on iOS update day, you might see a bulk of your hits going to the same large file and have a significant performance boost for a day. But overall, you will be doing well if you get a 3 or 4 percent hit rate.

“I expect the cache hits to take pressure off my Internet link.”

Assuming you want your average user to experience a fast-loading Internet, this is where you really want your NetEqualizer (or similar intelligent bandwidth controller) over your caching engine. The smart bandwidth controller can re-arrange traffic on the fly, ensuring interactive hits get the best response. A caching engine does not have that intelligence.

Let’s suppose you have a 100-megabit link to the Internet, and you install a cache engine that effectively gets a 6 percent hit rate. That would be an exceptional hit rate.

So what is the end-user experience with a 6 percent hit rate compared to pre-cache?

– First off, it is not the hit rate that matters when looking at total bandwidth. Many of those hits will likely be smallish image files from the Yahoo home page or other common sites, which account for less than 1 percent of your actual traffic. Most of your traffic is likely dominated by large file downloads, and only a portion of those may be coming from cache.

– A 6 percent hit rate means a 94 percent miss rate, and if your Internet was slow from congestion before the caching server, it will still be slow 94 percent of the time.

– Putting in a caching server would be like upgrading your bandwidth from 100 megabits to 104 megabits to relieve congestion. The cache hits may add to the total throughput in your reports, but the 100-megabit bottleneck is still there, and to the end user there is little or no difference in perception at this point. A portion of your Internet access is still marginal or unusable during peak times, and other than the occasional web page or video loading nice and snappy, users are getting duds most of the time.
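The arithmetic behind that 100-to-104-megabit comparison is simple; here it is as a few lines of Python, assuming (for illustration) that 4 percent of delivered bytes come from the cache:

    # Back-of-the-envelope effective capacity with a local cache.
    link_mbps = 100.0
    byte_hit_rate = 0.04  # assumed fraction of delivered *bytes* served locally

    # The link still carries all the misses; cache hits ride free on the LAN side.
    effective_mbps = link_mbps / (1 - byte_hit_rate)
    print(f"Effective capacity: {effective_mbps:.1f} Mbps")  # ~104.2 Mbps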

Even the largest caching server is insignificant in how much data it can store.

– The Internet is vast, and your cache is not. Think of a tiny ant standing on top of Mount Everest. YouTube puts up 100 hours of new content every minute of every day. A small commercial caching server can store about 1/1000 of what YouTube uploads in a day, not to mention yesterday, and the day before, and last year. It’s just not going to be in your cache.
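The ant-on-Everest comparison holds up under rough arithmetic. Assuming an average of about 1 GB per viewing hour (a modest estimate I am supplying for illustration), the numbers look like this:

    # Rough scale of YouTube's daily uploads versus a small cache.
    hours_per_minute = 100            # uploads per the text
    gb_per_hour = 1.0                 # assumed average bitrate, ~1 GB/hour

    daily_upload_tb = hours_per_minute * 60 * 24 * gb_per_hour / 1000
    cache_tb = 0.144                  # a small ~144 GB cache disk, for comparison

    print(f"YouTube uploads per day: {daily_upload_tb:,.0f} TB")        # ~144 TB
    print(f"Cache holds about 1/{daily_upload_tb / cache_tb:,.0f} of one day")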

So why is a NetEqualizer bandwidth controller so much superior to a caching server in changing the user perception of speed? Because the NetEqualizer is designed to keep Internet access from crashing, and this is accomplished by reducing the footprint of large file transfers and video downloads during peak times. Yes, those videos and downloads may be slow or sporadic, but they weren’t going to work anyway, so why let them crush the interactive traffic? In the end, neither caching nor equalizing is perfect, but in real-world trials the equalizer changes the user experience from slow to fast for all interactive transactions, while caching is hit or miss (pun intended).

Federal Judge Orders Internet Name be Changed to CDSFBB (Content Delivery Service for Big Business)


By Art Reisman – CTO – APconnections

Okay, so I fabricated that headline; it’s not true, but I hope it goes viral and sends a message that our public Internet is being threatened by business interests and activist judges.

I’ll concede our government does serve us well in some cases; it has produced some things that could not be done without its oversight, for example:

1) The highway system

2) The FAA does a pretty good job keeping us safe

3) The Internet. At least up until some derelict court ruling that will allow ISPs to give preferential treatment to content providers for a payment (bribe), whatever you want to call it.

The ramifications of this ruling may bring an end to the Internet as we know it. Perhaps the ball was put in motion when the Internet was privatized back in 1994. In any case, if this ruling stands up, you can forget about the Internet as the great equalizer: a place where a small business can have a big web site, where a new idea on a small budget can blossom into a Fortune 500 company, a place where the little guy can compete on equal footing without an entry fee to get noticed. No, the tide won’t turn right away, but at some point, through a series of rationalizations, content companies and ISPs with deep pockets will kill anything that moves.

This ruling establishes a legal precedent. Legal precedents with suspect DNA are like cancers: they mutate into ugly variations and replicate rapidly, and there is no drug that can stop them. Obviously, the forces at work here are not the court systems themselves, but businesses with motives. The poor carriers just can’t seem to find any other solution to their congestion than charging for access? Combine this with oblivious consumers who just want content on their devices, and you have a dangerous mixture. Ironically, these consumers already subsidize ISPs with a huge chunk of their disposable income. The hoodwink is on. Just as the public airwaves are controlled by a few large media conglomerates, so will go the Internet.

The only hope in this case is for the FCC to step in and take back the Internet. Give it back to the peasants. However, I suspect their initial statements are just grandstanding politics.  This is, after all, the same FCC that auctions off the airwaves to the highest bidder.

Guest Article From a WISP Owner in the Trenches


Editor’s Note: A great read if you are thinking of starting a WISP and need a little inspiration. Re-posted with permission from Rory Conaway, Triad Wireless. Rory is president and CEO of Triad Wireless, an engineering and design firm in Phoenix. Triad Wireless specializes in unique RF data and network designs for municipalities, public safety and educational campuses. E-mail comments to rory@triadwireless.net.

Tales from the Towers – Chapter 50: CRY ‘HAVOC!’, AND LET SLIP THE DOGS OF WAR

Interesting fellow, that Shakespeare: not only did he write plays, he also acted in them. And although Tales from the Towers doesn’t hold a candle (pre-electric times, you can groan now) to Mr. William’s contributions to culture, I have a double life too. If you haven’t guessed it yet, writing articles really isn’t my full-time job (my wife is giving me the look that says I should find another hobby); I actually run a WISP, do installs, and handle tech support calls. After 10 years though, and many mistakes and successes, I’ve decided to rethink my network from the ground up as if I were starting tomorrow, and share that. The idea is to help lay out a simplified road map that will bring forth thousands of new WISPs into the market that can start breaking down the digital divide without taxpayer money while creating new businesses. Since a thousand bee stings can take out the biggest animal, the more companies that jump into the industry, the better the chances of competing against the incumbents. It’s time to open the floodgates of small business entrepreneurs and begin the war for last-mile bandwidth delivery everywhere. And although few outside Star Trek fans will recognize one of Shakespeare’s most famous sayings, they will recognize this modern variation: “Who let the dogs out”! Hopefully it’s the WISP industry.

Why would anyone want to start a WISP, you ask? Although many of us in the industry would say because we don’t have a life, the reality is that it can be a profitable small business model. How about this: a typical WISP gross profit margin is about 90% (this varies depending on where you live). Yes, you have read that correctly. In the U.S., bandwidth costs average about $5-$20 per Mbps to a tower or some other demarcation point. In some areas it’s as little as 40 cents, and in others as much as $300, but in the 90% of the country where I believe WISPs have the greatest opportunities, bandwidth is inexpensive. Even if it’s $20 per Mbps, that’s still a profit margin of 80%. Wal-Mart would go apoplectic if they got half that, and they squeeze suppliers like ripe lemons. And my razor has more margin between the blade and my face than Amazon has on their products. For any small business operator, to find a product that he can buy for $5-$20 and resell for $100, legally I might add, is like printing money if you have the technical and marketing skills.

With the FCC and the federal government in the pocket of the incumbent cellular operators, taxpayer-subsidized DSL providers, and all the FTTH zealots whose business plans read more like a lobbyist’s guide to squeezing taxpayers than a real business plan based on profit, it might seem like being a WISP would be a huge challenge. But Ubiquiti, Cambium, and a few other companies now have second-generation 802.11n product lines that are inexpensive, broad, and simple enough for even beginners to install and manage. Throw in Mimosa with new 802.11ac product lines in the near future (Ubiquiti is already shipping UniFi with 802.11ac), and wireless providers will be able to deliver speeds that will make DSL operators cry. With those resources and lower costs, a wireless provider can provide bandwidth at wireline speeds and undercut the pricing, or provide faster bandwidth at the same price. Either way it’s a win-win situation and a golden opportunity to jump on the bandwagon of an industry that is only going to grow. I’m not going to get into the triple-play option even though right now it’s the best model to fund FTTH. I personally believe it’s a dying model, as Voice-over-IP and Video-on-Demand will force everyone to a pure IP play in the future.

If you don’t think a WISP business model is a good idea, let’s analyze what the government thinks it costs CenturyLink (or what CenturyLink tells them it costs – boy, do I want to send that invoice. Yea, yea, it costs me $775, that’s the ticket) to deploy a single customer with DSL at a speed of 3Mbps down. The Connect America Fund was paying $775 per customer for deployment at these pathetic speeds, plus subsidizing the monthly bills. A WISP can do it for about $250 on-site and another $100 for the backhaul infrastructure per customer, and probably make a profit on the install (hey FTTH guys, it really can be done without subsidies). And even better, a WISP can charge less. Unfortunately, I wouldn’t expect anyone from the FCC to do the research necessary to save the taxpayers from this CAF boondoggle. They are very, very, very proud of it, but hey, ignorance is bliss (here is where you should get sick to your stomach). Private enterprise really can succeed without small-business-killing government intervention.

Before jumping into any business though, we need to analyze the competitive environment – DSL and cable – since they provide most of the population’s bandwidth. What’s interesting here is that while DSL is on the decline due to the limitations and age of copper wire, it’s not really being replaced by better DSL. In some CenturyLink areas, for example, they are pulling fiber closer to the homes to get their DSL speeds up to 40Mbps. However, unless another wireless technology comes along, that’s probably their swan song until they upgrade to FTTH (don’t hold your breath waiting for it though).

DSL providers have 2 basic service areas: cruddy service in low-density areas where they are the only provider, and reasonably decent service in areas where they probably compete against cable providers. There are opportunities in both, although the cruddy areas are where I would start first. Those are typically pocket or peripheral areas, but if you can get about 20 customers or more, it’s a profit center. It’s also a place to build from and test the local zoning code, in case those are issues.

In areas where they are delivering far more bandwidth, they are also charging more. And since they also try to bundle with either their own service or satellite providers, they have to add taxes (another reason to avoid triple-play, since it also adds more office infrastructure and accounting requirements). In Arizona, for example, a bundled CenturyLink phone/internet package delivering 1.5 to 40Mbps runs about $30-$65 plus taxes (almost $10 worth if it’s bundled). They also have a package with DirecTV, and then the costs start climbing well above $100. And all those packages come with contracts of at least 1-2 years.

Cable providers aren’t much different though. Not only are they all about bundling; they have constant price increases and fees, along with higher prices to start with. Although cable providers can provide some great speeds, up to 150Mbps, it’s still more expensive to deliver than wireless. Triple-play providers like cable are also under a huge amount of financial pressure from content providers. When they have to pass that cost along to customers, the customers don’t differentiate the services; they just know their bills have gone berserk and start looking elsewhere. I’ve had customers call me with cable bills that hit $200, and we just tell them about Ooma (don’t even mention MagicJack unless your idea of a good time is slamming your head in your refrigerator door), Roku, and local TV. Amazing how much people will adjust their viewing and phone habits to save $100 per month.

Cable providers are getting hammered by the FTTH zealots, who simply don’t understand that almost NOBODY really needs 100Mbps to their house today, and NOBODY in the investment community is willing to fund it unless they also happen to own a Senator. Just to make the subsidized-fiber FTTH supporters have a conniption, the cable providers should publish the percentage of their users that have 10Mbps, 20Mbps, etc. Then publish their average use and peak numbers. Selling 50Mbps circuits and above is one of the biggest scams in our industry today. It’s all about the latency, baby!

I am not aware of any taxpayer-subsidized FTTH business plan on this planet that is profitable as a stand-alone business. I’m still waiting to see one, but please feel free to send your financials if you think you have one. I’ll stand by and hold my breath. LinkedIn is a great place to see examples of this. If you take the WISP position, or even suggest to the “Experts” that FTTH is not financially viable today when the government gets involved, you learn that you should be committed because you dare to point that out. Apparently stating facts is redefined as zealotry when you ask for the financial results of these projects. The best excuse I have heard for getting me off an FTTH discussion, when I kept insisting on actual facts, was when I was banned from the group – not because of my views, but because my picture wasn’t professional enough (apparently it wasn’t my good side). What I really want to do is follow the money to see how much these consultants and companies are making from the taxpayers while fully knowing the plan will fail. In this case, it’s all about the money, baby!

The end result of all this: if you start a WISP, don’t worry about the FTTH providers unless you think some clueless bureaucrat in California or the CAF/FCC gets the idea it’s a great place to waste more taxpayer money. Even if they come into your area, they will be selling something that is more expensive than what a WISP can provide and that few people will pay for. The FTTH boys think everyone should pay at least $50 for 10Mbps, or more if you want faster. The good part is that they will provide middle-mile backhaul for you to undercut them, and will probably get bought out by Google for $1 when it loses so much money that even the politicians get tired of funding it.

Privately funded FTTH systems that have triple-play products are actually a bigger threat to wireless systems, and natural migration paths for triple-play WISPs, although they are generally in more rural areas or urban areas. Many of the companies currently doing fiber started out as WISPs, meaning they are generally more efficient and usually already profitable. They are playing for the long haul and have the resources and experience to do it the right way with little or no taxpayer subsidies. The bad thing for them is that as they get closer to higher-density population centers, unless they are Google and the local government bends over to help them, government regulations make it difficult to expand into cities or suburbs. It always amazes me that the local bureaucrats would rather ignore local business for years, or just make life miserable for them to justify their jobs, rather than reach out and see how they can actually help them be successful. Then, when things aren’t going so rosy for the municipality, they fall all over themselves looking for a savior like Google, who doesn’t give a flying donut about them. Here’s a clue, zoning department: cold call every WISP and ISP anywhere near you and see what you can do for them in terms of making the regulations easier to work with, instead of just writing new ones. Then you won’t have to sell your soul to a Google because you screwed up for years and are now trying to fix the mess you created.

Now that we know the general competitive landscape, the next question is where to start your business. Although our country is wonderfully diverse in terms of density, intelligent guys like Brian Webster have analyzed some states down to how many driveway basketball nets there are per square mile. Other resources like www.wispa.org, the FCC, and www.goubiquiti.com have coverage maps of WISP service areas, among many other services that we will cover later. Without getting overly complicated, I define the areas as rural, suburban, and city. Most rural areas already have at least 1 WISP covering them, and some rural areas have multiple WISPs. My personal preference, and where these articles will be focused (okay, I detour when it comes to government intervention in private industry), is from the suburban fringes through to the city fringes. These are the most opportune areas for WISPs, with the biggest investment bang for the buck. It’s also the easiest way to get inexpensive bandwidth. Next article, we will focus on the RF environment, planning, and budgeting, since those are going to be very closely tied together (and I’ll probably make some other political comment there also). Time to go; the Big Dog is scratching at the back door to get out, and he’s got some business to take care of, as do we all.

Top 10 Out-of-the-Box Technology Predictions for 2014


Back in 2011, I posted some technology predictions for 2012. Below is my revised and updated list for 2014.

1) Look for Google, or somebody, to launch an Internet service using a balloon off the California coast.

Well, it turns out those barges out in San Francisco Bay are for something far less ambitious than a balloon-based Internet service, but I still think this is on the horizon, so I am sticking with it.
2) Larger, slower transport planes to bring down the cost of comfortable international and long-range travel.

I did some analysis on the costs of airline operators, and the largest percentage of the cost of air travel is fuel. You can greatly reduce fuel consumption per mile by flying larger, lighter aircraft at slower speeds. Think of these future airships like cruise ships. They will have more comforts than the typical packed cross-continental flight of today. My guess is, given the choice, passengers will trade off speed for a little price break and more leg room.

3) I am still calling for somebody to make a smart contextual search engine with a brain that weeds through the muck of bad, useless commercial content to give you a decent result. It seems every year, intentionally or not, Google is muddling their search results into the commercial. It is like the travel magazine that claims the editorial and advertising units are not related; somehow the flow of money overrides good intentions. Google is very vulnerable to a mass exodus should somebody pull off a better search engine. Perhaps this search engine would allow the user to filter results from less commercial to more commercial sites?

4) Drones? Sc#$$ drones. Amazon is not ever going to deliver any consumer package with a drone service. This PR stunt was sucked up by the media. Yes, there will be many uses for unmanned aircraft, but not residential delivery.

5) Somebody is going to genetically engineer an ant colony to do something useful. Something simple, like filling in potholes in streets with little pebbles. The ants in Colorado already pile up mounds of pebbles around their colonies; we’ve just got to get them to put them in the right place.

6) Protein shakes made out of finely powdered exoskeletons of insects. Not possible? Think of all the by-products that go into something like a hot dog, and nobody flinches. If you could harvest a small percentage of the trillions of grasshoppers in the world, dry them, and grind them up, you would have an organic protein source without any environmental impact or those dreaded GMOs.

7) Look for more drugs that stop cancer at the cell level by turning off genetic markers.

This is my brother’s ongoing research at the University of Florida.

8) A diet pill that promotes weight loss without any serious side effects

I have no basis for this statement other than that somebody must be getting close to figuring out the exact brain signals that trigger the urge to eat, and a way to counteract them effectively without using amphetamines or stimulants.
9) Virtual reality beach-front property.

They already have virtual reality windows. I am thinking the next step is incorporating a complete home with a virtual breeze, sounds, sights and smells. Just look at what people pay for beach-front property anywhere in the world. Besides, who really wants to live in Los Angeles or Florida with all the traffic? Suppose for a mere 50k you could upgrade your double-wide retirement home in Arkansas to virtual beach front?

The Illusion of Separation: My Malaysia Trip Report


By Zack Sanders

VP of Security – APconnections

Traveling is an illuminating experience. Whether you are going halfway across the country or halfway around the world, the adventures that you have and the lessons that you learn are priceless and help shape your outlook on life, humanity, and the planet we live on. Even with the ubiquitousness of the Internet, we are still so often constrained by our limited and biased information sources that we develop a world view that is inaccurate and disconnected. This disconnection is the root of many of our problems – be they political, environmental, or social. There is control in fear and the powerful maintain their seats by reinforcing this separation to the masses. Having the realization that we are all together on this planet and that we all largely want the same things is something that can only be discovered by going out and seeing the world for yourself with as open of a mind as possible.

One of the great things about NetEqualizer, and working for APconnections, is that, while we are a relatively small organization, we are truly international in our business. From the United States to the United Kingdom, and Argentina to Finland, NetEqualizers are helping nearly every vertical around the world optimize the bandwidth they have available. Because of this global reach, we sometimes get to travel to unique customer sites to conduct training or help install units. We recently acquired a new customer in Malaysia – a large university system called International Islamic University Malaysia, or IIUM. In addition to NetEqualizers for all of their campuses, two days of training was allotted in their order – one day each at two of their main locations (Kuala Lumpur and Kuantan). I jumped at the chance to travel to Asia (my first time to the continent) and promptly scheduled some dates with our primary contact at the University.

I spent the weeks prior to my departure in Spain – a nicely-timed, but unrelated, warmup trip to shake the rust off that had accrued since my last international travel experience five years ago. The part about the Malaysia trip that I was dreading the most was the hours I would log sitting in seat 46E of the Boeing 777 metal I was to take to Kuala Lumpur with Singapore Airlines. Having the Spain trip occur before this helped ease me in to the longer flights.

F.C. Barcelona hosting Real Madrid at the Camp Nou.

My Malaysia itinerary looked like this:

Denver -> San Francisco (2.5 hours), Layover (overnight)

San Francisco -> Seoul (12 hours), Layover (1 hour)

Seoul -> Singapore (7 hours), Layover (6 hours)

Singapore -> Kuala Lumpur (1 hour)

I was only back in the United States from Spain for one week. It was a fast, but much needed, seven days of rest. The break went by quickly and I was back in the air again, this time heading west.

After 22 hours on the plane and 7 hours in various airports, I was ready to crash at my hotel in the City Centre when I touched down in KL. I don’t sleep too well on planes so I was pretty exhausted. The trouble was that it was 8am local time when I arrived and check-in wouldn’t be until 2:00pm. Fortunately, the fine folks at Mandarin Oriental accommodated me with a room and I slept the day away.

KL City Centre.

I padded my trip with the intention of having a few days before the training to get adjusted, but adjusting didn’t take as long as I thought, and I was able to do some sightseeing in and outside the city.

My first stop was Batu Caves – a Hindu shrine located near the last stop of the LRT’s KTM-KOMUTER line in the Gombak District – which I later learned was near the location of my first training seminar. The shrine is set atop 272 stairs in a 400-million-year-old limestone cave. After the trek up, you are greeted by lightly dripping water and a horde of ambitious monkeys, in addition to the shrines within the cave walls.

Batu Caves entrance.

Batu Caves.

Petronas Towers.

This was the furthest I ventured from the city for sightseeing. The rest of the time I spent near the City Centre – combing through the markets of Chinatown and Little India, taking a tour of the Petronas Towers, and checking out the street food on Jalan Alor. Kuala Lumpur is a very Western city. The influence is everywhere, despite the traditional Islamic culture. TGI Fridays, Chili’s, and Starbucks were the hotspots – at least in this touristy part of town. On my last night, I found a unique spot at the top of the Traders Hotel called Skybar. It is a prime location because it looks directly at the Petronas Towers – which, at night especially, are gorgeous. The designers of the bar did a great job implementing sweeping windows and sunken sofas to enjoy the view. I stayed there for a couple of hours and had a Singapore Sling – a drink I had heard of but had never gotten to try.

Singapore Sling at the Skybar.

The city and sites were great; however, the primary purpose of the trip was not leisure – it was to share my knowledge of NetEqualizer with those who would be working with it at the University. To be honest, I wasn’t sure what to expect. This was definitely different from most locations I have been to in the past. A lot of thoughts went through my head about how I’d be received, whether the training would be valuable or not, etc. It’s not that I was worried about anything in particular; I just didn’t know. My first stop was the main location in KL. It’s a beautifully manicured campus where the buildings all have aqua-blue roofs. My cab driver did a great job helping me find the Information Technology Department building, and I quickly met up with my contact and got set up in the Learning Lab.

This session had nine participants, ranging from IT head honchos to network engineers. Their specific experience with the NetEqualizer also ranged from well-versed to none at all. I catered the training such that it would be useful to all participants – we went over the basics but also spent time on more advanced topics and configurations. All in all, the training lasted six hours or so, including an hour break for lunch that I took with some of the attendees. It was great talking with each of them – regardless of whether the subject was bandwidth congestion or the series finale of Breaking Bad. They were great hosts and I look forward to keeping in touch with them.

Training at IIUM.

I was pretty tired from the day by the time I arrived back at the hotel. I ate and got to bed early because I had to leave at 6:00am for my morning flight across the peninsula to Kuantan – a short, 35 minute jaunt eastward – to do it all over again at that campus. Kuantan is much smaller than KL, but it is still a large city. I didn’t get to see much of it, however, because I took a cab directly from the airport to the campus and got started. There were only four participants this time – but the training went just as well. I had similar experiences talking with this group of guys, and they, too, were great hosts. I returned back to the airport in the evening and took a flight back to KL. The flight is so short that it’s comical. It goes like this:

Taxi to the runway -> “Flight attendants prepare for takeoff” -> “You may now use your electronic devices” -> 5 minutes goes by -> “Flight attendants prepare for landing – please turn off your electronic devices” -> Land -> Taxi to terminal

The airport in Kuantan at sunset.

I had one more day to check out Kuala Lumpur and then it was back to the airport for another 22 hours of flying. At this point though, I felt like a flying professional. The time didn’t bother me and the frequent meals, Sons of Anarchy episodes, and extra leg room helped break it up nicely. I took a few days in San Francisco to recover and visit friends before ultimately heading back to Boulder.

It was a whirlwind of a month. I flew almost 33,000 miles in 33 days and touched down in eight countries on three continents. Looking back, it was a great experience – both personally and professionally. I think the time I spent in these places, and the things I did, will pay invaluable dividends going forward.

If your organization is interested in NetEqualizer training – regardless of whether you are a new or existing customer – let us know by sending an email to sales@apconnections.net!

View of KL Tower from the top of the Petronas Towers.
