Analyzing the cost of Layer 7 Packet Shaping


November, 2010

By Eli Riles

For most IT administrators, layer 7 packet shaping involves two actions.

Action 1: Inspecting and analyzing data to determine what types of traffic are on your network.

Action 2: Taking action by adjusting application flows on your network.

Without layer 7 visibility and the ability to act on it, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

Layer 7 monitoring and shaping is intuitively appealing, but it is a good idea to take a step back and examine the full life-cycle costs of your methodology.

In an ironic inverse correlation, we assert that costs increase with the complexity of the monitoring tool.

1) Obviously, the more detailed the reporting tool (layer 7), the more expensive its initial price tag.

2) The kicker comes with part two. The more expensive the tool, the more detail it will provide, and the more time an administrator is likely to spend adjusting and mucking, looking for optimal performance.

But is it fair to assume higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief  that when  the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. However, in reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies with a path similar to bandwidth monitoring have become commodities and shunned the overhead of most human intervention.  For example, computer operators disappeared off the face of the earth with the invention of cheaper computing in the late 1980s.  The function of a computer operator did not disappear completely, it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise with many of our customers is that they are stepping down from expensive complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just looking at the usage (a simple report).
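To make this concrete, here is a minimal Python sketch of the kind of report we mean. It is an illustration only, not a NetEqualizer feature; the per-user totals and the two-standard-deviation cutoff are invented for the example.

```python
# Flag the handful of users whose weekly usage sits far above the mean.
# The numbers below are made-up per-user totals in gigabytes for one week.
from statistics import mean, stdev

weekly_usage_gb = {
    "10.0.0.11": 2.1, "10.0.0.12": 1.8, "10.0.0.13": 2.5, "10.0.0.14": 1.9,
    "10.0.0.15": 2.2, "10.0.0.16": 2.0, "10.0.0.17": 38.7, "10.0.0.18": 1.7,
}

avg = mean(weekly_usage_gb.values())
sd = stdev(weekly_usage_gb.values())

# Anyone more than two standard deviations above the mean is worth a look.
heavy_users = {ip: gb for ip, gb in weekly_usage_gb.items() if gb > avg + 2 * sd}
print(f"mean {avg:.1f} GB, heavy users: {heavy_users}")
```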

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And, don’t forget to take our poll.

List of monitoring tools compiled by Stanford

Top five free monitoring tools

Planetmy
Linux Tips
How to set up a monitor for free

PPPoE may be outdated


By Art Reisman

Art Reisman is currently CTO and co-founder of NetEqualizer. He has worked at several start-up companies over the years, and has invented and brought several technology products to market, including tools for the automotive industry, both on his own and with the backing of larger corporations.

We often get asked if we support PPPoE (Point-to-Point Protocol over Ethernet) through our bandwidth controller at this time.  We have decided not to support PPPoE.  What follows is our reasoning behind this decision.

First, some background on PPP.  Point-to-Point Protocol (PPP) is the protocol that was developed to allow the Internet to traverse the phone system.  It converts digital IP traffic into sound over a modem (an analog phone circuit), and it is essential for dial-up Internet service.  In other words, a phone line to a customer’s house cannot transmit IP packets directly, only audio, so PPP acts as a protocol converter that takes a series of sounds and transmits them over the line.  Much like a fax, if you pick up the line and listen, you will hear squealing.

1) We were not interested in building a PPPoE billing system and database.

I assume that, since every dial-up system also required billing and an authentication database, the PPP server (the device with the modem pool that talks over the phone lines) also came to integrate other aspects of the service, such as RADIUS and billing, into a turnkey system for providers.

2) There is no reason to continue legacy PPP in the new environment.

As providers transitioned from dial-up to broadband wireless, they retrofitted their new wireless networks with PPPoE modems at the customer site in order to accommodate their legacy PPP server systems.  This was so the central PPP server would only need to transmit serialized data over the lines as it had with phone lines.  It also served as a way to preserve the legacy dial-up connection mechanism that authorized users.

We believe that providers should transition from PPP to newer technologies, as PPP is becoming obsolete.

3) Operators are putting off the inevitable.

Now, with the investment in these PPP servers integrated with billing systems, we are where we are today. Even though there is no need to transmit data serially over the Ethernet, providers use PPPoE to preserve other aspects of their existing infrastructure, which grew up when dial-up was king.  This is similar to mainframe vendors trying to preserve their old screen-scrape technology when the Internet first came out, rather than move to the inevitable web GUI interface (where they eventually all had to go anyway).

4) Newer technologies are more efficient.

As far as I can tell, new wireless providers that do not do any traditional dial up are just creating overhead by trying to preserve PPP, as it is not needed in their circuit.  Generic IP and more modern forms of customer authentication such as MAC address or a login are more efficient.
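As a purely illustrative sketch of the kind of lightweight authorization we have in mind (the MAC list and helper function are hypothetical, not a description of any particular vendor’s system):

```python
# Toy example: authorize subscribers by MAC address instead of a PPPoE session.
# In practice the allowlist would come from the provider's billing database.
AUTHORIZED_MACS = {
    "00:1a:2b:3c:4d:5e",  # customer 1001, paid through this month
    "00:1a:2b:3c:4d:5f",  # customer 1002
}

def is_authorized(mac):
    """Return True if this customer's equipment may pass traffic."""
    return mac.lower() in AUTHORIZED_MACS

print(is_authorized("00:1A:2B:3C:4D:5E"))  # True
print(is_authorized("00:de:ad:be:ef:00"))  # False
```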

Of course, you may disagree with our reasoning.  Please feel free to let us know your thoughts on PPPoE.

Nine tips to consider when starting a product company


By Art Reisman

I often get asked to help friends, and friends of friends, with fleshing out their start-up ideas.  Usually they are looking for a cheerleader to build confidence.  Confidence and support are an essential part of building a company; however, I will not be addressing those aspects here. I am not a good predictor of what might take off, and a marginal motivator at best, but I do know, from many failures as well as successes, the things you will need to give yourself the best chance of success.  What follows are just the facts, as I know them.

1) You don’t have much of a chance unless you jump in full time.

If you are not willing to jump into your venture full time, you are stacking the odds against yourself. Going halfway is like running a marathon without training and expecting to win. So be honest with yourself: are you doing this as a hobby, or do you expect a business to pop out?  I know the ideal situation is to start as a hobby and go full time once the business grows a bit; you can also win the lottery, but it’s not likely.  Even with a unique idea and no obvious competition, you are still competing for mind share.  Treating your business as a hobby is akin to studying for a final when you don’t know what is on the test.  To ensure a good grade you’ll need to know more than everybody else taking the test, which means you need to study hard.

2) If your idea requires a change in culture or behavior, you are less likely to succeed.

There are literally trillions of ideas and things you can do that might be successful given a little energy. Too often I see entrepreneurs stuck on something that requires a change of consumer behavior beyond their control. This is not to say their ideas are bad or that a change in human behavior is not in order. The problem is you will have limited time and resources to promote and market your idea.  The best inventions probe high-demand, low-resistance niches, meaning they fit into a segment where there will be little adoption resistance.

I worked with a company that invented a shoe that would allow you to track your children.  One of the behavioral show-stoppers was that you had to put the shoe in a charger every night.  Who puts their shoes in a charger? It’s not that it could not be sold with this limitation, but the fact that it required a change in behavior made it a much less attractive idea.

Although one might assume that text messaging on phones just happened, from its roots in the Japanese market of the early 1990s it took 10 years to become commonplace in the US. The feature was an add-on to a product already in a channel and generating revenue, hence it did not require a house bet from existing service providers to bring to market. You most likely will not have this kind of channel to leverage for your product. In other words, it takes a special set of circumstances to influence human behavior and be successful.

3) Your idea involves consulting or support services

If your goal is to get immediate income and become your own boss, then consulting and services are relatively easy to get going in.  Yes, you will need to work hard to win over customers and retain them, but realistically, if you are good at what you do, income will follow. The downside of consulting and support is that it is very hard to clone your value and expand beyond your original partners. For this reason, the tips in this article are geared toward bringing a product to market.

4) Sell it to strangers

Hopefully you don’t have too many enemies, but the point of this statement is to validate your product’s need. Selling a book to your family and friends through courtesy buys is good for some feedback and worthwhile, but you will never know how your product will fare until you are converting random strangers.  If you can sell to somebody that hates you personally, then you’ll know the product has staying power.

5) Test Market with small samples

The late Billy Mays had it down to a science: take almost anything, produce a commercial, and sell it to a small market with a late-night TV advertisement. Obviously this validation is only good for home consumer products, but the idea is to test market small.

6) Sell the idea without the goods.

You need to be careful with this one.  The general rule here is: do not, under any circumstance, take any money unless you have your product in stock. Either that, or fully disclose to potential customers that they are pre-ordering a product that does not physically exist. If you break these ground rules you will fail. I learned this trick from a friend of mine who wanted to sell satellite dishes when they first came out. They did not even have a franchise license, but they took out a small advertisement in the local paper for satellite dishes and the response was overwhelming. They simply told inquiries they were out of stock (a true statement), then proceeded to get a franchise license and follow up with their inquiries.

7) How do you eat an Elephant?

One bite at a time. I define success as selling something, anything, and making one dollar; once you have made a dollar you can concentrate on your second dollar. It is great if you can go faster, but unless you are already a really big company, there will be plenty of time and space to grow your product into. You don’t need sales offices all over the world; that is just a distraction.

8) Ask successful people to help and advise.  Most entrepreneurs and business people love to help others get started, and if you have a good idea they can help you open doors to opportunities, but you must ask, and you must be sincere. Everybody loves the underdog and is willing to help. Remember, your brother-in-law who is a sales rep for Toshiba is not who I am talking about.  You need to get advice from people who have started companies from scratch. There is nothing wrong with the brother-in-law at Toshiba, but if you are doing a product, spend your time getting advice from others who have brought products to market.

9) Stop worrying about the competition.  Just do what you do best.  You will often need to differentiate yourself from the competition.  I politely keep the subject on what I know: my product and how it fits the customer’s needs.  Never bad-mouth a competitor, even if you believe them to be scum; an astute customer will figure that out for themselves. Let somebody else bad-mouth them.

10) I am waiting to be in a better financial situation before I start a company

Time on this earth is way more valuable than any dollar you can make. Letting years go by is not a rational option if you intend to do a product. Your financial needs are likely an illusion created by others’ expectations.  If you have to live in a trailer without heat to make ends meet while developing your product, you can do it. In fact, the sacrifices you make will be far healthier for your children than that new Nintendo game. It just amazes me how many people will borrow $100k and give it to a school for a child’s education while at the same time being afraid to invest time and savings in their own dream.

About the Author:

Art Reisman is currently CTO and co-founder of NetEqualizer. He has worked at several start-up companies over the years, and has invented and brought several technology products to market, including tools for the automotive industry, both on his own and with the backing of larger corporations.

Related Articles

Practical and inspirational tips on bootstrapping

Building a software company from scratch

Your ISP May Not Be Who You Think It Is


By Art Reisman, APconnections CTO (www.netequalizer.com)

Have you ever logged into your wireless laptop at a library or hotel lobby or airport?

Have you ever visited and used WiFi in a small-town coffee shop?

Do you take classes at a local university?

What got us thinking on this subject was the flurry of articles on net neutrality — a hot-button issue in the media these days. With each story, the reporters usually rush to get quotes and statements from all the usual suspects — Verizon, Google, Comcast, Time Warner, etc. It’s as if these providers ARE the Internet. However, in this article, we’ll show there is a significant loose conglomerate of smaller providers that, taken together, create a much larger entity than any of these traditional players.

These smaller organizations buy bulk bandwidth from tier-1 providers such as Level 3 and then redistribute it to their customers. In other words, they are your ISP. To give you a rough idea on just how large this segment is, we have worked up some numbers with conservative estimates.

There are roughly 121,000 libraries in the US. Some are very large with thousands of patrons per day and some are very small with perhaps just a handful of daily visitors. We estimate that half provide some form of wireless Internet service, and of those, they would average 300 unique users per month. That gives us approximately 18 million patrons using the Internet in libraries per year.

There are approximately 15 million students attending higher education institutions, with K-through-12 schools making up another 72 million students. If all the university students, and perhaps half of the K-through-12 students, use the Internet at their schools, that gives us roughly another 50 million users.

In 2004, half the hotels in the U.S. had broadband service.  It would be safe to assume that this number is over 90 percent in 2010. There are approximately 130,000 hotels listed in the US. With an average occupancy per night of 30 guests per hotel (very conservative), we can easily conclude that 100 million people use the Internet from U.S. hotels over the course of a year.

Lastly there are 10,000 small regional ISPs and cable companies serving smaller and rural customers. These companies average about 1,000 customers, covering another 10 million people.

Yes, some of these users are being double counted as many obviously have multiple sources to the Internet, but the point is, with conservative estimates, we were able to easily estimate 100 million users through these alternate channels, making this segment much larger than any single provider.
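For readers who want to check the arithmetic, here is the back-of-the-envelope math in a few lines of Python. All inputs are the rough estimates quoted above; the hotel figure counts guest-nights, which we conservatively round down to about 100 million unique users.

```python
# Rough estimates of U.S. Internet users reached through smaller providers.
library_patrons = 121_000 * 0.5 * 300          # half of libraries, ~300 unique users each
students = 15_000_000 + 72_000_000 * 0.5       # all university students, half of K-12
hotel_guest_nights = 130_000 * 0.9 * 30 * 365  # 90% of hotels, ~30 guests per night
small_isp_users = 10_000 * 1_000               # regional ISPs and cable companies

print(f"library patrons:     {library_patrons:>15,.0f}")
print(f"students:            {students:>15,.0f}")
print(f"hotel guest-nights:  {hotel_guest_nights:>15,.0f}")
print(f"small-ISP customers: {small_isp_users:>15,.0f}")
```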

Therefore, when discussing the issue of net neutrality, or any regulation or privacy debate concerning the Internet, one should look beyond just the big-name providers. There’s a good chance you’ll find your own online experience regularly extends beyond these high-profile ISPs.

NetEqualizer bandwidth controllers are used in hotels, libraries, schools, WiFi hotspots and businesses around the world and have aided in the Internet experience of over 100 million users since 2003.

Google Verizon Net Neutrality Policy, is it sincere?


With all the rumors circulating about the larger wireless providers trying to wall off competition or generate extra revenue through preferential treatment of traffic, they had to do something, hence  Google and Verizon crafted a joint statement on Net Neutrality. Making a statement in denial of a rumor on such a scale is somewhat akin to admitting the rumor was true. It reminds me of a politician claiming he has no plans to raise taxes.

Yes, I believe that most people who work for Google and Verizon, executives included, believe in an open, neutral Internet.  And yet, from experience, when push comes to shove, and profits are flat or dropping, the idea of leveraging your assets will be on the table.  And what better way to leverage your assets than to restrict your captive audience’s access to the competition. Walling off a captive audience to selected content will always be enticing to any service provider looking for low-hanging fruit.  Morals can easily be compromised or rationalized in the face of losing your house, and it only takes one overzealous leader to start a provider down the slope.

The checks and balances so far, in this case, are the consumers who have voiced outright disgust with anybody who dares toy with the idea of preferential treatment of Internet traffic for economic benefit.

For now this concept will have to wait, but it will be revisited, and hopefully consumers will again rise up in disgust.  It would be naive to think that today’s statement by Verizon and Google will be binding beyond the political moment.

Does Lower Cost Bandwidth Foretell a Decline in Expensive Packet Shapers?


This excerpt is from a recent interview with Art Reisman and has some good insight into the future of bandwidth control appliances.

Are you seeing a drop off in layer 7 bandwidth shapers in the marketplace?

In the early stages of the Internet, up until the early 2000s, the application signatures were not that complex and they were fairly easy to classify. Plus the cost of bandwidth was in some cases 10 times more expensive than 2010 prices. These two factors made the layer 7 solution a cost-effective idea. But over time, as bandwidth costs dropped, speeds got faster and the hardware and processing power in the layer 7 shapers actually rose. So, now in 2010 with much cheaper bandwidth, the layer 7 shaper market is less effective and more expensive. IT people still like the idea, but slowly over time price and performance is winning out. I don’t think the idea of a layer 7 shaper will ever go away because there are always new IT people coming into the market and they go through the same learning curve. There are also many WAN type installations that combine layer 7 with compression for an effective boost in throughput. But, even the business ROI for those installations is losing some luster as bandwidth costs drop.

So, how is the NetEqualizer doing in this tight market where bandwidth costs are dropping? Are customers just opting to toss their NetEqualizer in favor of adding more bandwidth?

There are some that do not need shaping at all, but then there are many customers that are moving from $50,000 solutions to our $10,000 solution as they add more bandwidth. At the lower price points, bandwidth shapers still make sense with respect to ROI. Even with lower bandwidth costs, users will almost always clog the network with new, more aggressive applications. You still need a way to gracefully stop them from consuming everything, and the NetEqualizer at our price point is a much more attractive solution.

Related article on Packeteer’s recent decline in revenue

Related article: Layer 7 becoming obsolete from SSL

The Inside Scoop on Where the Market for Bandwidth Control Is Going


Editor’s Note: The modern traffic shaper appeared in the market in the late 1990s. Since then market dynamics have changed significantly. Below we discuss these changes with industry pioneer and APconnections CTO Art Reisman.

Editor: Tell us how you got started in the bandwidth control business?

Back in 2002, after starting up a small ISP, my partners and I were looking for a tool that we could plug in and have it take care of the resource contention without spending too much time on it. At the time, we had a T1 to share among about 100 residential users and it was costing us $1200 per month, so we had to do something.

Editor: So what did you come up with?

I consulted with my friends at Cisco on what they had. Quite a few of my peers from Bell Labs had migrated to Cisco on the coat tails of Kevin Kennedy, who was also from Bell Labs. After consulting with them and confirming there was nothing exactly turnkey at Cisco, we built the Linux Bandwidth Arbitrator (LBA) for ourselves.

How was the Linux Bandwidth Arbitrator distributed and what was the industry response?

We put out an early version for download on a site called Freshmeat. Most of the popular stuff on that site are home-user based utilities and tools for Linux. Given that the LBA was not really a consumer tool, it rose like a rocket on that site. We were getting thousands of downloads a month, and about 10 percent of those were installing it someplace.

What did you learn from the LBA project?

We eventually bundled layer 7 shaping into the LBA. At the time that was the biggest request for a feature. We loosely partnered with the Layer 7 project and a group at the Computer Science Department at the University of Colorado to perfect our layer 7 patterns and filter. Some of the other engineers and I soon realized that layer 7 filtering, although cool and cutting edge, was a losing game with respect to time spent and costs. It was not impossible, but in reality it was akin to trying to conquer all software viruses and only getting half of them. The viruses that remain will multiply and take over because they are the ones running loose. At the same time we were doing layer 7, the core idea of equalizing, the way we did fairness allocation on the LBA, was getting rave reviews.
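(Editor’s aside: to give a rough feel for the equalizing idea, here is a simplified Python sketch of fairness-based shaping in general. It is not the actual LBA or NetEqualizer code, and the link size, congestion threshold, and flow rates are all invented.)

```python
# Simplified "equalizing": when the link is congested, find the largest flows
# and penalize only those, leaving small interactive flows untouched.
LINK_CAPACITY_KBPS = 10_000   # assumed 10 Mbps pipe
CONGESTION_RATIO = 0.85       # start acting above 85% utilization

flows = {                     # made-up flow rates in kbps
    ("10.0.0.7", "p2p"): 9_500,
    ("10.0.0.9", "video"): 2_400,
    ("10.0.0.3", "web"): 300,
    ("10.0.0.5", "voip"): 90,
}

total = sum(flows.values())
if total > LINK_CAPACITY_KBPS * CONGESTION_RATIO:
    # Slow the biggest consumers first; everyone else is left alone.
    for flow, rate in sorted(flows.items(), key=lambda kv: kv[1], reverse=True)[:2]:
        print(f"adding latency penalty to {flow} running at {rate} kbps")
else:
    print("link not congested, no action taken")
```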

What did you do next ?

We bundled the LBA onto a CD for install and put a fledgling GUI interface on it. Many of the commercial users were happy to pay for the convenience, and from there we started catering to the commercial market, and now here we are with the modern version of the NetEqualizer.

How do you perceive the layer 7 market going forward?

Customers will always want layer 7 filtering. It is the first thing they think of, from the CIO on down. It appeals almost instinctively to people. The ability to classify traffic by application and then prioritize it by type is quite appealing. It is as natural as ordering from a restaurant menu.

We are not the only ones declaring a decline in deep packet inspection. We found this opinion on another popular blog regarding bandwidth control:

The bottom line is that while Deep Packet Inspection presentations include nifty graphs and seemingly exciting possibilities, it is only effective in streamlining small, very predictable networks. The basic concept is fundamentally flawed. The problem with large networks is not that bandwidth needs to be shifted from “bad” protocols to “good” protocols. The problem is volume. Volume must be managed in a way that maintains the strategic goals of the network administration. Nearly always this can be achieved with a macro approach of allocating a fair share to each entity that uses the network. Any attempt to micro-manage large networks ordinarily makes them worse, or at least simply results in shifting bottlenecks from one place to another.

So why did you get away from layer 7 support in the NetEqualizer back in 2007?

When trying to contain an open Internet connection it does not work very well. The costs to implement were going up and up. The final straw was when encrypted p2p hit the cloud. Encrypted p2p cannot be specifically classified. It essentially tunnels through $50,000 investments in layer 7 shapers, rendering them impotent. Just because you can easily sell a technology does not make it right.

We are here for the long haul to educate customers. Most of our NetEqualizers stay in service as originally intended for years without licensing upgrades. Most expensive layer 7 shapers are mothballed after about 12 months or are just scaled back to do simple reporting. Most products are driven by channel sales, and the channel does not like to work very hard to educate customers about alternative technology. They (the channel) are interested in margins, just as a bank likes to collect fees to increase profit. We, on the other hand, sell for the long haul on value and not just on what we can turn quickly to customers because they like what they see at first glance.

Are you seeing a drop off in layer 7 bandwidth shapers in the marketplace?

In the early stages of the Internet up until the early 2000s, the application signatures were not that complex and they were fairly easy to classify. Plus the cost of bandwidth was in some cases 10 times more expensive than 2010 prices. These two factors made the layer 7 solution a cost-effective idea. But over time, as bandwidth costs dropped, speeds got faster and the hardware and processing power in the layer 7 shapers actually rose. So, now in 2010 with much cheaper bandwidth, the layer 7 shaper market is less effective and more expensive. IT people still like the idea, but slowly over time price and performance is winning out. I don’t think the idea of a layer 7 shaper will ever go away because there are always new IT people coming into the market and they go through the same learning curve. There are also many WAN type installations that combine layer 7 with compression for an effective boost in throughput. But, even the business ROI for those installations is losing some luster as bandwidth costs drop.

So, how is the NetEqualizer doing in this tight market where bandwidth costs are dropping? Are customers just opting to toss their NetEqualizer in favor of adding more bandwidth?

There are some that do not need shaping at all, but then there are many customers that are moving from $50,000 solutions to our $10,000 solution as they add more bandwidth. At the lower price points, bandwidth shapers still make sense with respect to ROI.  Even with lower bandwidth costs, users will almost always clog the network with new more aggressive applications. You still need a way to gracefully stop them from consuming everything, and the NetEqualizer at our price point is a much more attractive solution.

Simple Is Better with Bandwidth Monitoring and Traffic Shaping Equipment


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. However, the question a typical CIO will ask before approving any purchase is, “What is the return on investment for your equipment purchase?” Putting a hard-and-fast number on bandwidth optimization equipment may seem straightforward.  If you can quantify the cost of your bandwidth and project an approximate reduction in usage or increase in throughput, you can crunch the numbers. But is that all you should consider when determining how much you should spend on a bandwidth optimization device?

The traditional way of looking at monitoring your Internet has two dimensions.  First, the fixed cost of the monitoring tool used to identify traffic, and second, the labor associated with devising and implementing the remedy.  In an ironic inverse correlation, we assert that your ROI will degrade with the complexity of the monitoring tool.

Obviously, the more detailed the reporting/shaping tool, the more expensive its initial price tag. Yet, the real kicker comes with part two. The more detailed data output generally leads to an increase in the time an administrator is likely to spend making adjustments and looking for optimal performance.

But, is it really fair to assume higher labor costs with more advanced monitoring and information?

Well, obviously it wouldn’t make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. But, typically, the more information an admin has about a network, the more inclined he or she might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that when the work needed to adjust the network comes to fruition, the associated adjustments can remain statically in place. However, in reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network adjusting can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies with a path similar to bandwidth monitoring have become commodities and shunned the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the invention of cheaper computing in the late 1980s. The function of a computer operator did not disappear completely, it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise with many of our customers is that they are stepping down from expensive, complex reporting tools to a simpler approach.  Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user. Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing. Abuse becomes obvious just looking at the usage (a simple report).

However, there is also the personal control factor, which often does not follow clear lines of ROI.

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into, for example, a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And, don’t forget to take our poll.

List of monitoring tools compiled by Stanford

ROI tool: determine how much a bandwidth control device can save.

Great article on choosing a bandwidth controller

Planetmy
Linux Tips
How to set up a monitor for free

Good enough is better: a lesson from the Digital Camera Revolution

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

The Promise of Streaming Video: An Unfunded Mandate


By Art Reisman, CTO, www.netequalizer.com

Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, Universities, Libraries, Mining Camps, and any organization where groups of users must share their Internet resources equitably. What follows is an objective educational journey on how consumers and ISPs can live in harmony with the explosion of YouTube video.

The following is written primarily for the benefit of mid-to-small-sized Internet service providers (ISPs).  However, home consumers may also find the details interesting.  Please follow along as I break down the business cost model of what it takes to keep up with growing video demand.

In the past few weeks, two factors have come up in conversations with our customers, which have encouraged me to investigate this subject further and outline the challenges here:

1) Many of our ISP customers are struggling to offer video at competitive levels during the day, and yet are being squeezed by high bandwidth costs.  Many look to the NetEqualizer to alleviate video congestion problems.  As you know, there are always trade-offs to be made in handling any congestion issue, which I will discuss at the end of this article.  But back to the subject at hand.  What I am seeing from customers is an underlying fear that they (IT administrators) are behind the curve.  As I have an opinion on this, I decided I needed to lay out what is “normal” in terms of contention ratios for video, as well as what is “practical” for video in today’s world.

2) My Internet service provider, a major player that heavily advertises how fast its speed to the home is, periodically slows down standard YouTube videos.  I should be fair with my accusation: with the Internet, you can never be quite certain who is at fault.  Whether I am being throttled or not, the point is that there are an ever-growing number of video content providers who are pushing ahead with plans that do not take into account, nor care about, a last-mile provider’s ability to handle the increased load.  A good analogy would be a travel agency that is booking tourists onto a cruise ship without keeping a tally of tickets sold, nor caring, for that matter.  When all those tourists show up to board the ship, some form of chaos will ensue (and some will not be able to get on the ship at all).

Some ISPs are also adding to this issue, by building out infrastructure without regard to content demand, and hoping for the best.  They are in a tight spot, getting caught up in a challenging balancing act between customers, profit, and their ability to actually deliver video at peak times.

The Business Cost Model of an ISP trying to accommodate video demands

Almost all ISPs rely on the fact that not all customers will pull their full allotment of bandwidth all the time.  Hence, they can map out an appropriate subscriber ratio for their network, and also advertise bandwidth rates that are sufficient to handle video.  There are four main governing factors on how fast an actual consumer circuit will be:

1) The physical speed of the medium to the customer’s front door (this is often the speed cited by the ISP)
2) The combined load of all customers sharing their local circuit and  the local circuit’s capacity (subscriber ratio factors in here)
3) How much bandwidth the ISP contracts out to the Internet (from the ISP’s provider)

4) The speed at which the source of the content can be served (YouTube’s servers). We’ll assume this is not a source of contention for our examples below, but it should certainly remain a suspect in any finger-pointing over a slow circuit.

The actual limit to the amount of bandwidth a customer gets at one time, which dictates whether they can run live streaming video, usually depends on how oversold their ISP is (based on the “subscriber ratio” mentioned in points 1 and 2 above). If your ISP can predict the peak loads of their entire circuit correctly, and purchase enough bulk bandwidth to meet that demand (point 3 above), then customers should be able to run live streaming video without interruption.
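One way to summarize points 1 through 4 above: the rate a subscriber actually sees is bounded by the tightest of those limits. A minimal sketch, with invented numbers:

```python
# The rate a customer actually experiences is capped by the slowest link in the chain.
advertised_last_mile_mbps = 10.0   # point 1: physical speed to the front door
local_circuit_share_mbps = 4.0     # point 2: fair share of the shared local circuit
upstream_share_mbps = 2.5          # point 3: share of the ISP's upstream contract
content_server_mbps = 6.0          # point 4: what the content source can deliver

effective_mbps = min(advertised_last_mile_mbps, local_circuit_share_mbps,
                     upstream_share_mbps, content_server_mbps)
print(f"effective throughput ~ {effective_mbps} Mbps")   # 2.5 Mbps in this example
```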

The problem arises when providers put together a static set of assumptions that break down as consumer appetite for video grows faster than expected.  The numbers below typify the trade-offs a mid-sized provider is playing with in order to make a profit, while still providing enough bandwidth to meet customer expectations.

1) In major metropolitan areas, as of 2010, bandwidth can be purchased in bulk for about $3000 per 50 megabits. Some localities are less, some more.

2) ISPs must cover an amortized fixed cost per customer: billing, sales staff, support staff, customer premises equipment, interest on investment, and licensing, which comes out to about $35 per month per customer.

3) We assume market competition fixes price at about $45 per month per customer for a residential Internet customer.

4) This leaves $10 per month for profit margin and bandwidth fees.  We assume an even split: $5 a month per customer for profit, and $5 per month per customer to cover bandwidth fees.

With 50 megabits at $3000 and each customer contributing $5 per month, this dictates that you must share the 50 Megabit pipe amongst 600 customers to be viable as a business.  This is the governing factor on how much bandwidth is available to all customers for all uses, including video.

So how many simultaneous YouTube Videos can be supported given the scenario above?

Live streaming YouTube video needs on average about 750 kbps, or about 3/4 of a megabit, in order to run without breaking up.

On a 50 megabit shared link provided by an ISP, in theory you could support about 70 simultaneous YouTube sessions, assuming nothing else is running on the network.  In the real world there would always be background traffic other than YouTube.

In reality, you are always going to have a minimum fixed load of Internet usage from 600 customers of approximately 10-to-20 megabits.  The 10-to-20 megabit load is just to support everything else, like web surfing, downloads, Skype calls, etc.  So realistically you can support about 40 YouTube sessions at one time.  What this implies is that if 10 percent of your customers (60 customers) start to watch YouTube at the same time, you will need more bandwidth; either that, or you are going to get some complaints.  For those ISPs that desperately want to support video, they must count on no more than about 40 simultaneous videos running at one time, or a little less than 10 percent of their customers.

Based on the scenario above, if 40 customers simultaneously run YouTube, the link will be exhausted and all 600 customers will be wishing they had their dial-up back.  At last check, YouTube traffic accounted for 10 percent of all Internet Traffic.  If left completely unregulated, a typical rural ISP could find itself on the brink of saturation from normal YouTube usage already.  With tier-1 providers in major metro areas, there is usually more bandwidth, but with that comes higher expectations of service and hence some saturation is inevitable.
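Here is the same arithmetic spelled out in a short Python sketch. The dollar figures are the 2010 estimates from the article; the 20-megabit background load is the upper end of the 10-to-20 megabit range mentioned above.

```python
# Back-of-the-envelope model: how many simultaneous YouTube streams can a
# 600-subscriber, 50-megabit pipe really carry?
bulk_cost_per_month = 3000.0          # $ for 50 Mbps of wholesale bandwidth
pipe_mbps = 50.0
bandwidth_budget_per_customer = 5.0   # $ per customer per month earmarked for bandwidth

customers = bulk_cost_per_month / bandwidth_budget_per_customer
print(f"customers needed to cover the pipe: {customers:.0f}")        # 600

stream_mbps = 0.75                    # one YouTube stream, about 750 kbps
background_mbps = 20.0                # everything else: web surfing, downloads, Skype

max_streams = (pipe_mbps - background_mbps) / stream_mbps
print(f"simultaneous streams supported:     {max_streams:.0f}")      # about 40
print(f"as a share of subscribers:          {max_streams / customers:.1%}")
```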

This is why we believe that Video is currently an “unfunded mandate”.  Based on a reasonable business cost model, as we have put forth above, an ISP cannot afford to size their network to have even 10% of their customers running real-time streaming video at the same time.  Obviously, as bandwidth costs decrease, this will help the economic model somewhat.

However, if you still want to tune for video on your network, consider the options below…

NetEqualizer and Trade-offs to allow video

If you are not a current NetEqualizer user, please feel free to call our engineering team for more background.  Here is my short answer on “how to allow video on your network” for current NetEqualizer users:

1) You can determine the IP address ranges for popular sites and give them priority via setting up a “priority host”.
This is not recommended for customers with 50 megs or less, as generally this may push you over into a gridlock situation.

2) You can raise your HOGMIN to 50,000 bytes per second.
This will generally let in the lower-resolution video sites.  However, they may still incur penalties should they start buffering at a higher rate than 50,000 bytes per second.  Again, we would not recommend this change for customers with pipes of 50 megabits or less.

With either of the above changes you run the risk of crowding out web surfing and other interactive uses, as we have described above. You can only balance so much video before you run out of room.  Please remember that the default settings on the NetEqualizer are designed to slow video before the entire network comes to a halt.
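For readers wondering what the HOGMIN change does conceptually, here is a simplified Python illustration. It is not the NetEqualizer’s code, and the flow rates are invented; it only shows that flows below the threshold are never candidates for a penalty.

```python
# Conceptual illustration: flows below HOGMIN are exempt from penalties, so raising
# HOGMIN to 50,000 bytes/sec lets lower-resolution video streams through untouched.
HOGMIN_BYTES_PER_SEC = 50_000

flows_bytes_per_sec = {        # made-up per-flow rates
    "web browsing": 12_000,
    "low-res video": 45_000,
    "hd video": 260_000,
}

for name, rate in flows_bytes_per_sec.items():
    if rate < HOGMIN_BYTES_PER_SEC:
        status = "exempt from penalties"
    else:
        status = "penalty candidate when the link is congested"
    print(f"{name:>14}: {rate:>7} B/s -> {status}")
```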

For more information, you can refer to another of Art’s articles on the subject of Video and the Internet:  How much YouTube can the Internet Handle?

Other blog posts about ISPs blocking YouTube

Do We Really Need IPv6 And When?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all Deep Packet Inspection technology from their NetEqualizer product over two years ago.

First off, let me admit my track record is not that stellar when it comes to predicting the timing of imminent technology changes.

In 1943, Thomas Watson, the chairman of IBM, forecast a world market for “maybe only five computers.” Years before IBM launched the personal computer in 1981, Xerox had already successfully designed and used PCs internally… but decided to concentrate on the production of photocopiers. Even Ken Olsen, founder of Digital Equipment Corporation, said in 1977, “There is no reason anyone would want a computer in their home” (read about other predictions that missed the mark).

As a young computer scientist around 1984, I would often get questions from friends on whether they needed a personal computer. I was on the same bandwagon as Ken Olsen, telling anybody that asked — my dentist, my in-laws, random strangers in the park — that it was absurd to think the average person would ever need a PC.

I did learn from my mistake and now simply understand that I really just suck at predicting consumer trends.

However, while the adoption of the personal computer was a private, consumer-driven phenomenon, IPv6 is not a consumer issue. And my track record as an innovator of technology for business is much better. My years of guiding engineering decisions at Bell Labs, and now running my own technology company, provide a good base for understanding the headwinds facing IPv6.

Since the transition to IPv6 is not a consumer adoption issue, it has  many more parallels to the Y2K scare than the iPod. But, even then there are major differences.

Y2K had a time bomb of a deadline. You could choose to ignore it, but most IT managers could not afford to be wrong, so they were played by their vendors with expensive upgrades.

My prediction is that we will not transition to IPv6 this century, and if we attempt such a change, there will be utter chaos and mayhem, to the point that we will have to revert to IPv4.

Here’s my argument:

  1. There is no formal central control for certification of Internet equipment. Yes, manufacturers are self-proclaiming readiness, but even if they all do a relatively good and professional job of testing — even with 99 percent accuracy — on switchover day, the day everybody starts using the IPv6 address space, the cumulative errors from traffic getting lost, delayed, or bounced by the one percent of equipment with problems will bring the Internet to its knees.  I don’t think the world will sit around for a few weeks or even months without the Internet while millions of pieces of routing equipment from thousands of manufacturers are retrofitted with upgrades.
  2. There’s no precedent. The only close precedent for changing the Internet address space would be the last time AT&T added extra digits to the dialing plan.  At the time they controlled everything from end to end.  They also had only one mission, and that was to complete a circuit from A to B. Internet routers, other than in the main backbone, perform all kinds of auxiliary functions today, such as firewalls, Web filtering, and optimization, further distancing themselves from any previous precedent.
  3. We have a viable workaround. Although a bit cumbersome, organizations and ISPs have been making do with a limited public address space using Network Address Translation (NAT) for more than 10 years already. NAT can expand one Internet address into thousands (a toy illustration follows this list).  Yes, public IP addresses for every man, woman, and child on earth and every other planet in the Milky Way are possible with IPv6, but for the foreseeable future, NAT combined with the 4 billion addresses available in IPv4 should do the trick, especially given the insurmountable difficulty of a switchover.
  4. Phased switchover nonsense?  The pundits pushing the move to IPv6 are touting a phased switchover.  I am not sure what this accomplishes. If one set of users starts using the larger address range, for example the Indian government, they will still need to keep their original address range in order to communicate with the rest of the world. To realize the benefits of IPv6, the world as a whole will need 100 percent participation. A phased switchover by a segment of users will only benefit vendors selling equipment.
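Here is the toy illustration of NAT promised in point 3. The addresses, ports, and helper function are invented purely to show the idea of many private hosts sharing one public address:

```python
# Toy NAT table: many private hosts share one public IPv4 address by using a
# distinct public-side port for each outbound connection.
PUBLIC_IP = "203.0.113.10"
nat_table = {}        # public_port -> (private_ip, private_port)
next_port = 40000

def translate_outbound(private_ip, private_port):
    """Map an inside host/port to the shared public address and a unique port."""
    global next_port
    nat_table[next_port] = (private_ip, private_port)
    mapping = (PUBLIC_IP, next_port)
    next_port += 1
    return mapping

print(translate_outbound("192.168.1.20", 51515))   # ('203.0.113.10', 40000)
print(translate_outbound("192.168.1.57", 51515))   # ('203.0.113.10', 40001)
```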

Despite these predictions, the NetEqualizer is ready for IPv6. We have already done some preliminary validation of the IPv6 implementation in our NetEqualizer. In fact, we have even run on networks with IPv6 traffic without issues. While we have some work to do to make our product fully functional, we’ve already tested enough to have confidence that if and when the IPv6 switchover happens, we will not cause any issues.

Fourteen Tips To Make Your ISP/WISP More Profitable


As the demand for Internet access continues to grow around the world, opportunities for service providers are emerging in markets far and wide. Yet, simply offering Internet service, even in untapped areas, does not guarantee long-term success. Just as quickly as your customer-base grows, the challenges facing ISPs and WISPs begin to emerge.

From competition to unhappy customers, the business venture that once seemed certain to succeed can quickly test the will of even the most battle-hardened and tech savvy business owners. However, there are ways to make the road to profitability a little smoother.

1. Make Sure You Have an Easy Customer Base to Grow into — Perhaps 500 households before you start building out. Yes, you can do it for less, but 500 is sort of a magic number where you can pay yourself and perhaps some hired help so you can be profitable and take a day off. WISPs and ISPs with 100 customers are great, but, at that size, they will remain a hobby that you may not be able to unload a couple of years down the road. Before you build out do some demographic research.

2. Set Boundaries from the Start — When starting up a new service, don’t let your customers run wide open. You may be OK without putting rate caps on users when you have only 10 customers sharing a 10 meg link, but when you get to 100 customers sharing a 10 meg link, you’ll need to put rate caps on them all. The problem with waiting is that your original users will become accustomed to higher speeds and will not be happy with sharing as your business expands – unless you enforce some reasonable restrictions up front.

3. Keep Your Network from Locking Up — Many ISPs believe that if they set maximum rate caps for their users that their network is safe from locking up due to congestion. However, if you are oversold on your contention ratios, you will lock up and simple rate limits are not enough. Don’t make this mistake.

This may sound obvious, but let me spell it out. We often run into operators with 500 customers on a 20-meg link. They then offer two rate plans — 1 meg up and down for consumers and 5 megs up and down for businesses. Next, they put rate caps on each type of customer to ensure they don’t exceed their allotted amount. Somehow, this is supposed to exonerate the operator from being oversold. This is all well and good, but if you do the math, 500 customers on a 20 meg link will overwhelm your link at some point and nobody will be able to get anywhere close to their “promised amount.”
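To see how quickly the promise breaks down, here is the arithmetic from that example in a few lines of Python. The split between consumer and business accounts is invented, but the totals match the 500-customer, 20-meg scenario above.

```python
# Oversubscription check: 500 customers sharing a 20 Mbps link.
link_mbps = 20
consumer_commit_mbps = 450 * 1   # 450 consumers capped at 1 Mbps each
business_commit_mbps = 50 * 5    # 50 businesses capped at 5 Mbps each

committed_mbps = consumer_commit_mbps + business_commit_mbps
print(f"sum of promised rates:  {committed_mbps} Mbps")                 # 700 Mbps
print(f"oversubscription ratio: {committed_mbps / link_mbps:.0f}:1")    # 35:1
```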

If you are oversold, you will need something more than rate limits to prevent lockups. At some point, you will need to go with a layer-7 shaper such as Packeteer or Allot NetEnforcer. Or, you can use a NetEqualizer. Your only other option is to keep adding bandwidth.

4. Be the Reliable Alternative — If you are in a dense metro area, and have the resources, you can offer Internet connections to hotel and business customers with pay-as-you-go services. Many hotels and businesses have unreliable connections, or none at all.  Obviously you’ll need real estate across the street, but once secured, you can point a directional antenna into the building and give your signal a recognizable name so your users will connect. Then, offer them the connection for a daily fee. For many users, paying a small daily fee for reliable service will be worth it – especially if the hotel or business offers subpar Internet service, none at all, or a connection for an exorbitant price.

5. Good Tech Support Is a Must — Don’t put all your faith into the local guru who set up your network. There are many good technical people out there and there are many more that will make a mess of your business. This can create some really tough decisions. I like to use this analogy:

I’m not a concert pianist – not even close – so I can’t tell the guy that hacks away playing Beatles tunes in the piano bar at my local pub from a Juilliard-trained pianist. Since I can’t play a lick, they all amaze me. Well, the same holds true for non-technical business owners hiring network techs or developers. They all seem amazingly smart when in fact they may run you into the ground. The only way to tell is to find somebody with a really good track record of making things work for people. So, ask around.

The good ones have no vested interest in making a custom dynasty of your business (another thing to watch out for). It’s like the doctor who needs the patient to stay sick. You don’t want that. Poor or misguided tech support may be the single largest cause for failed ISPs or issues with selling your business.

6. Make Payment As Easy As Possible — When a customer is delinquent on paying their bill, make sure you have a way to direct them to a payment site. Don’t just shut off their service and wait for them to call. For small operators, you don’t need to automate the payment cycle, just send them to a static page telling them how to pay their bill. For larger operators (3,000-plus users), the expense of automated bill payment may be worth the extra cost, but with a smaller set of customers, a static redirection to a page with instructions and a phone number will suffice. Your router or bandwidth controller likely already has this capability.

7. Look for a Competitive Credit Card Processor — Your bank will likely provide a service for you, but they are generally a middle man in this transaction. There are credit card processing agencies that sell their services direct and may be more cost-effective. These are no-brainer dollars that add up each month in savings.

8. Don’t Overspend – Remember that on the open market your business is likely only to be valued at three-quarters of your revenue, so don’t delude yourself and overspend on equipment and borrowing thinking that a white knight will come along. If your revenue is $500,000 per year, you will be in good shape if you get $400,000 for your business. And this may just cover your debt. Yes, there are exceptions and you might get a bit more, but don’t expect two-times your revenue. It’s just not going to happen in the current market, so plan your expenses accordingly.

9. Cross Market — What do your customers see when they log in or sign up for service? Do you send them regular e-mails about your service? If you answered yes to either of these questions, you have ready-made billboards. Don’t be shy about it. Once you have a captive audience, there are all kinds of cross-marketing ideas you can use for extra revenue. Done tastefully, your customers won’t mind. This could be a special with the local car dealer running coupons, or something like a pizza place. There is unlimited potential here, and if you’re not taking advantage of it, you’re missing out on easy revenue.

10. Optimize Your Bandwidth — A NetEqualizer bandwidth controller will allow you to increase your customer base by 10 to 30 percent without having to purchase additional resources. This allows you to increase the number of people you can put on your infrastructure without an expensive build-out. Yet, a purchase like this can be a difficult decision. It’s best to think in the long term.  A NetEqualizer is a one-time cost that will pay for itself in about four months. On the other hand, purchasing additional bandwidth keeps adding up month after month.
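One rough way to frame that decision is a simple payback calculation; every number below is hypothetical, not a quoted price.

```python
# Hypothetical payback comparison: one-time shaper purchase vs. recurring bandwidth cost.
shaper_cost = 5000.0                   # assumed one-time appliance price
deferred_bandwidth_per_month = 1250.0  # assumed monthly cost of the extra capacity it avoids

payback_months = shaper_cost / deferred_bandwidth_per_month
print(f"payback period: {payback_months:.0f} months")   # 4 months with these assumptions
```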

11. Look for Creative Ways to Purchase Bandwidth — The local T1 provider is not always the lowest price.  There are many Tier 1 providers out there that may have fiber within line of sight of your rural business. For example, Level 3 has fiber rings already hot in many metro areas and will be happy to sell you bandwidth. To get a low-cost, high-speed link to your point of presence, numerous companies can set you up with wireless backhaul equipment, which is a one-time fixed cost for transport.

12. Bundle Data Service with Phone Service — Look into your options for reselling phone service with your data packages.

13. Offer a Discount for Customers Who Auto-pay with Electronic Transfer or Credit Card on File — This is usually a win-win for both customer and ISP: the provider doesn’t have to chase late payments, and the customer doesn’t have to remember to pay the bill each month.

14. Offer Troubleshooting Services for Home PCs — You are a trusted technical contact for your end customers, and you probably know as much or more about PC viruses than the people giving out advice and charging for it at the local electronics superstore. In a rural area, good home tech support is hard to find. You are probably already troubleshooting some home PC problems anyway, so why not make it a formal part of your service and charge for it? It can be a great source of additional revenue.

Obviously, these 14 tips won’t apply to every ISP/WISP, but it’s almost a given that at least some of these issues will emerge over time. While there’s no guarantee that any business will succeed, these tips should help steer Internet service providers in the right direction.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Net Neutrality Enforcement and Debate: Will It Ever Be Settled?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all Deep Packet Inspection technology from their NetEqualizer product over 2 years ago.

As the debate over net neutrality continues, we often forget what ISPs actually are and why they exist.
ISPs in this country are for-profit private companies made up of stockholders and investors who took on risk (without government backing) to build networks in the hope of making a profit. To make that profit, they must balance users’ expectations of performance against the cost of implementing a network.

The reason bandwidth control is used in the first place is the classic capacity problem: nobody can afford the infrastructure investment required to meet peak demand at all times. Would you build a house with 10 bedrooms if you were only expecting one or two kids sometime in the future? ISPs build networks to handle an average load, and when peak loads come along, they must do some mitigation. You can argue until you are green in the face that they should have built their networks with more foresight, but the fact is that demand for bandwidth will always outstrip supply.

So, where did the net neutrality debate get its start?
Unfortunately, in many Internet providers’ first attempt to remedy the overload issue on their networks, the layer-7 techniques they used opened a Pandora’s box of controversy that may never be settled.

When the subject of net neutrality started heating up around 2007 and 2008, the complaints from consumers revolved around ISP practices of looking inside customers’ data transmissions and blocking or redirecting traffic based on content. There were all sorts of rationalizations for this practice, and I’ll be the first to admit that it was not done with intended malice. However, the methodology was abhorrent.

I likened this practice to the phone company listening in on your phone calls and deciding which calls to drop to keep their lines clear. Or, if you want to take it a step further, the postal service deciding to toss your junk mail based on its own private criteria. Legally, I see no difference between looking inside mail and looking inside Internet traffic. It all seems to cross a line. When referring to net neutrality, the bloggers of this era were originally concerned with this sort of spying and playing God with what type of data could be transmitted.

To remedy this situation, Comcast and others adopted methods that regulated Internet usage based on patterns of usage rather than content. At the time, we were happy to applaud them and claim that the problem of spying on data had been averted. I pretty much turned my attention away from the debate at that time, but I recently started looking back at it and, wow, what a difference a couple of years make.

So, where are we headed?
I am not sure what his sources are, but Rush Limbaugh claims that net neutrality is going to become a new fairness doctrine. To summarize, the FCC or some government body would start to use its authority to ensure equal access to content from search engine companies, for example, making sure that minority points of view got top billing in search results. This is a bit scary, and perhaps a bit alarmist, but it would not surprise me since, once something is in government control, anything is possible. Yes, I realize conservative talk radio hosts like to elicit emotional reactions, but usually there is some truth to back up their claims.

Other intelligent points of view:

The CRTC (Canada’s equivalent of the FCC) seems to have its head on its shoulders: it has stated that ISPs must disclose their practices, but it is not attempting to dictate how networks are run through some overreaching doctrine. Although I am not in favor of government institutions, if they must exist then the CRTC stance seems like a sane and appropriate approach to regulating ISPs.

Freedom to Tinker

What Is Deep Packet Inspection and Why All the Controversy?

Broadband in Rural America


By Art Reisman, CTO, www.netequalizer.com

Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, wireless ISPs, libraries, mining camps, and any organization where groups of users must share their Internet resources equitably.

“The report of my death was an exaggeration.” As with Twain’s humorous response to reports of his death, America and our free-spending Congress must also be made aware that the accusation that rural America is starved for broadband is mostly false. There are literally thousands of independent broadband providers serving rural communities across the country. There are websites devoted to locating them and trade organizations devoted to representing them. The implied myth that rural America somehow does not have access to broadband is simply not true.

How is it possible that rural America already has access to broadband?

Most rural small businesses already have access to decent broadband speeds and are not stuck on dial-up. To be fair, rural broadband currently is not quite fast enough to watch unlimited YouTube, but it is certainly fast enough to allow for VoIP, e-mail, sending documents, and basic communication without the plodding pace of dial-up.

We support approximately 500 rural operators in the US and around the world. The enabling technology for getting bandwidth to rural areas is well established, using readily available line-of-sight backhaul equipment.

For example, let’s say you want to start a broadband business 80 miles southwest of Wichita, Kansas. How do you tap into the major Internet backbone? The worst-case scenario is that the nearest point of presence for a major backbone Internet provider is in Wichita. For a few thousand dollars, you can run a microwave link from Wichita out to your town using common backhaul technology. You could then distribute broadband access to your local community using point-to-multipoint technology. The technology to move broadband into rural areas is not futuristic; it is a viable and profitable industry that has evolved to meet market demands.

How much bandwidth is enough for rural business needs?

We support hundreds of businesses and their bandwidth needs. We have found that, unless a business is specifically a content-distribution or hosting company, it purchases a minimal pipe, with much less bandwidth per capita than a consumer household.

Why? They don’t want to subsidize their employees’ YouTube and online entertainment habits. As a result, they typically don’t need more than a T1 (1.5 Mbps) for an office of 20 or so employees.
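For a rough sense of what that works out to per seat, here is a simple worked example, assuming the worst case in which every employee is active at once (in practice office traffic is bursty, so the real experience is far better):

# Rough per-seat arithmetic for a T1 line shared by a small office.
t1_kbps = 1544        # nominal T1 line rate in kbps
employees = 20
print(round(t1_kbps / employees))   # ~77 kbps per employee in the worst case
# Office traffic is bursty, so the effective experience is far better than
# this worst-case figure suggests -- plenty for e-mail, VoIP, and documents.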

As mentioned, bandwidth in rural American towns is not quite up to the same standards as in major metro areas, but the service is adequate to ensure that businesses are not at a disadvantage. Most high-speed capacity beyond business needs is used primarily for entertainment: watching videos, playing Xbox, etc. It’s not that these activities are bad; it’s just that they are consumer activities and not related to business productivity. Hence, I would argue that a government subsidy to bring high-speed service into rural areas will have little additional economic impact.

The precedent of building highways to rural areas cannot be compared to broadband.

Highways did open the country to new forms of commerce, but there was a clear geographic hurdle to overcome that no commercial entity would take on.  There were farm producers in rural America, vital to our GDP, that had to get product to market efficiently.

The interstate system was necessary to open the country to commerce, and I would agree that moving goods from coast to coast via highway certainly benefits everybody. Grain and corn from the Midwest must be brought to market through a system of feeder roads connecting into the Interstate and rail systems. And transporting goods from almost anyplace must include a segment of highway.

But the Internet transports data, and there is no geographic restriction on where data gets created and consumed. So, there is no underlying economic need to make use of rural America with respect to data. Even for a small business building widgets in rural America, I challenge any government official to cite one instance of a business not being able to function for lack of Internet connectivity. I am able to handle my e-mail on a $49-per-month WildBlue Internet connection, 20 miles from the nearest town, in the middle of Kansas, and my customers cannot tell the difference — and neither can I.

With broadband there is only data to transport, and unlike farm products, with their geographic necessity, there is no compelling reason why data needs to be produced in rural areas. Nor is there evidence of a problem moving it from one end of the country to the other; the major links between cities are already well established.

Since Europeans are far better connected than Americans, we are falling behind.

This comparison is definitely effective in convincing Americans that something drastic needs to be done about the country’s broadband deficiencies, but it needs to be kept in perspective.

While it is true that the average teenager in Europe can download and play oodles more games with much more efficiency than a poor American farmhand in rural Texas, is that really setting the country back?

Second, the population densities in Western Europe make the economics of high-speed links to everybody much more feasible than stringing lines through rural towns 40 miles apart in America’s heartland.  I don’t think the Russians are trying to send gigabit lines to every village in Siberia, which would be a more realistic analogy than comparing U.S. broadband coverage to Western Europe in general.

Therefore, while the prospect of expanded broadband Internet access to rural America is appealing for many reasons, both the positive outcomes of its implementation as well as the consequences of the current broadband shortcomings must be kept in perspective.  The majority of rural America is not completely bandwidth deprived.  Although there are shortcomings, they are not to the extent that commerce is suffering, nor to the extent that changes will lead to a significant increase in jobs or productivity.  This is not to say that rural bandwidth projects should not be undertaken, but rather that overly ambitious expectations should not be the driving force behind them.

Building a Software Company from Scratch


By Art Reisman, CEO, CTO, and co-founder of APconnections, Inc.

Adapted from an article first published in Entrepreneurship.org and updated with new material in April 2010.

At APconnections, our flagship product, NetEqualizer, is a traffic management and WAN optimization tool. Rather than using compression and caching techniques, NetEqualizer analyzes connections and then doles out bandwidth to them based on preset rules. We look at every connection on the network and compare it to the overall trunk size to determine how to eliminate congestion on the links. NetEqualizer also prevents peer-to-peer traffic from slowing down higher-priority application traffic without shutting down those connections.
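The actual NetEqualizer implementation is proprietary, so the following is only an illustrative sketch of the general idea described above: measure each connection, compare aggregate load against the trunk size, and penalize the heaviest flows only when the link is congested. The thresholds, names, and data structures here are assumptions for illustration, not the product’s real parameters.

from dataclasses import dataclass

# Illustrative sketch only -- not NetEqualizer's actual code or tuning values.
TRUNK_CAPACITY_KBPS = 10_000   # assumed trunk size
CONGESTION_RATIO = 0.85        # start shaping once utilization crosses this
HOG_SHARE = 0.10               # a flow using >10% of the trunk is a candidate

@dataclass
class Connection:
    src: str
    dst: str
    rate_kbps: float           # current measured throughput of this connection

def flows_to_penalize(connections: list[Connection]) -> list[Connection]:
    """Return the connections that should be slowed while the trunk is congested."""
    total = sum(c.rate_kbps for c in connections)
    if total < CONGESTION_RATIO * TRUNK_CAPACITY_KBPS:
        return []              # trunk not congested: leave every connection alone
    # Under congestion, single out the largest consumers so that small,
    # latency-sensitive flows (VoIP, web, e-mail) keep moving.
    return [c for c in connections if c.rate_kbps > HOG_SHARE * TRUNK_CAPACITY_KBPS]

if __name__ == "__main__":
    sample = [Connection("10.0.0.5", "peer-a", 3200.0),
              Connection("10.0.0.9", "peer-b", 5400.0),
              Connection("10.0.0.7", "voip-gw", 80.0)]
    print([c.src for c in flows_to_penalize(sample)])

In this toy run the trunk is over its congestion threshold, so only the two heavy flows are flagged while the small VoIP flow is left untouched, which is the gist of the behavior-based approach.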

When we started the company, we had lots of time, very little cash, some software development skills, and a technology idea.  This article covers a couple of bootstrapping pearls of wisdom that we learned to implement by doing.

Don’t be Afraid to Use Open Source

Using open source technology to develop and commercialize new application software can be an invaluable bootstrapping tool for startup entrepreneurs. It allowed us to validate new technology with a willing set of early adopters who, in turn, provided us with references and debugging. We used this huge number of early adopters, who love to try open source applications, to legitimize our application. Further, this large set of commercial “installs” helped us wring out many of the bugs, with the help of users who had no grounds to demand perfection.

In addition, we jump-started our products without incurring large development expense. We used open source by starting with technology already in place and extending it, rather than building (or licensing) every piece from scratch.  Using open source code makes at least a portion of our technology publicly available. We use bundling, documentation, and proprietary extensions to make it difficult for larger players to steal our thunder. Proprietary extensions account for over half of development work, but can be protected by copyright.  Afraid of copycats?  In many cases, nothing could be better than to have a large player copy you.  Big players value time-to-market.  If one player clones your work, another may acquire your company to catch up in the market.

The transition from open source users to paying customers is a big jump, requiring traditional sales and marketing. Don’t expect your loyal base of open source beta users to start paying for your product.  However, use testimonials from this critical mass of users to market to paying customers, who are reluctant to be early adopters (see below).

Channels? Use Direct Selling and the Web

Our innovation is a bit of a stretch from existing products and, like most innovations, requires some education of the user. Much of the early advice we received related to picking a sales channel: just sign up reps, resellers, and distributors, and revenues will grow. We found the exact opposite to be true. Priming channels is expensive, and after we pointed the sales channel at customers, closing the sale and supporting the customer fell back on us anyway. Direct selling is not the path to rapid growth, but as a bootstrapping tool it has rewarded us with loyal customers, better margins, and far fewer returns.

We use the Internet to generate hot leads, but we don’t worry about our Google ranking. The key for us is to get every satisfied customer to post something about our product. It probably hasn’t improved our ranking, but customer comments have surely improved our credibility in the marketplace.

Honest postings to blogs and user groups have significant influence on potential customers.  We explain to each customer how important their posting is to our company.  We often provide them with a link to a user group or appropriate blog.  And, as you know, these blogs stay around forever.  Then, when we encounter new potential customers, we suggest that they Google our “brand name” and blog, which always generates a slew of testimonials. (Check out our Web site to see some of the ways we use testimonials.)

Conclusion

Using open source code and selling direct are surely out of step with popular ideas for growing technology companies, especially those funded by equity investors. But they worked very well for us as we grew our company with limited resources to positive cash flow and beyond.

Here are some notes on what type of product to create. Obviously, you’ll want to do something you are passionate about; otherwise there is no sense in even getting started. If you are passionate about more than one thing, remember this: trying to sell a product on value to IT people or engineering types is much harder than selling to other entrepreneurs or salespeople. Technical people are generally skeptical of new claims that something works well. Also, unless somebody asks, they often don’t tell many other people about the product they bought and the value they are receiving from it.

Looking for a peer group to get some advice from?  Find a local software group that you can join.  If you are in the Denver area,  I would recommend trying  http://www.denversoftware.org/

Behind the Scenes on the latest Comcast Ruling on Net Neutrality


Yesterday a federal appeals court ruled in favor of Comcast, against the FCC, regarding Comcast’s right to manipulate consumer traffic. As usual, the news coverage was a bit oversimplified and generic. Below we present a breakdown of the players involved, and our educated opinion as to their motivations.

1) The Large Service Providers: Comcast, Time Warner, Qwest

These companies all want to get a return on their investment, charging the most money the market will tolerate. They will also try to increase market share by consolidating provider choices in local markets. Since they are directly visible to the public, they will also try to keep the public’s interest at heart, for without popular support they will get regulated into oblivion. Case in point: the original Comcast problems stemmed from angry consumers who learned their p2p downloads were being redirected and/or blocked.

Any and all government regulation will be opposed at every turn, as it is generally not good for private business. But in the face of a strong headwind, don’t be surprised if Large Service Providers try to reach a compromise quickly to alleviate any uncertainty. Uncertainty can be more costly than regulation.

To be fair, Large Service Providers are staffed top to bottom with honest, hard-working people, but their decision-making as an entity will ultimately be based on profit. To be the most profitable, they will want to prevent third-party Traditional Content Providers from flooding their networks with video. That was the original reason Comcast thwarted BitTorrent traffic. All of the Large Service Providers are currently content providers, or are plotting to be, and hence they have two motives to restrict unwanted traffic. Motive one is to keep demand in line with their capacity for all generic traffic. Motive two is to thwart other content providers, thus making their own content more attractive. For example, whose movie service are you going to subscribe to: a generic cloud provider such as Netflix, whose movies run choppy, or your local provider, with better quality by design?

2) The Traditional Content Providers:  Google, YouTube, Netflix etc.

They have a vested interest in expanding their reach by providing more video content. Google, with nowhere to go for new revenue in the search engine and advertising business, will be attempting an end-run around the Large Service Providers to take market share. The only thing standing in their way is the shortcomings of the delivery mechanism. They have even gone so far as to build out an extensive, heavily subsidized fiber test network of their own. Much of the hubbub about Net Neutrality is based on a market play to force the Large Service Providers to shoulder the Traditional Content Providers’ delivery costs. An analogy from the bird world would be the brown-headed cowbird, which lays her eggs in another bird’s nest and lets her chicks be raised by an unwitting member of another species. Without their own direct-to-the-consumer delivery mechanism, the Traditional Content Providers must keep pounding on the FCC for rulings in their favor. Part of the strategy is to rile consumers against the Large Service Providers with the Net Neutrality cry.

3) The FCC

The FCC is a government organization trying to take its existing powers, which were granted for the airwaves, and extend them to the Internet. As with any regulatory body, things start out well-intentioned (protection of consumers, etc.), but the body quickly becomes self-absorbed with its mission. The original reason for the FCC was that the public airwaves for television and radio have a limited number of frequencies for broadcasts. You can’t make a bigger pipe than the frequencies will allow, and hence it made sense to have a regulatory body oversee this vital resource. In the early stages of commercial radio, there was a real issue of competing entities broadcasting over each other in an arms race for the most powerful signal. From those beginnings, the regulatory entity (FCC) has forever expanded its mission. For example, the government deciding what words can be uttered on primetime television is an extension of this power.

Now, with the Internet, the FCC’s goal will be to regulate whatever it can, slowly creating rules for the “good of the people.” Will these rules make things better? Most likely not; left alone, the Internet was fine, but agencies will be agencies.

4) The Administration and current Congress

The current Administration has touted its support of Net Neutrality, but has perhaps been so overburdened with the battle over health care and other pressing matters that no regulation has been passed. In the aftermath of the FCC getting slapped down in court and having its current powers limited, I would not be surprised to see a round of legislation on this issue to regulate Large Service Providers in the near future. The Administration’s effort will be painted as consumer protection against big, greedy companies that need to be reined in, as we have seen with banks, insurance companies, etc. I hope that we do not end up with an Internet Czar, but some regulation is inevitable, if for nothing else than a revenue stream to tap into.

5) The Public

The public will be the dupes in all of this: ignorant voting blocs lobbied with various scare tactics. The demographics of this debate, however, are much different from those of the health care lobby. People concerned for or against Internet regulation tend to be in income brackets with higher education and employment rates than the typical entitlement lobbies that support regulation. It is certainly not going to be the AARP or a union lobbyist leading the charge to regulate the Internet; hence, legislation may be a bit delayed.

6) Al Gore

Not sure if he has a dog in this fight; we just threw him in here for fun.

7) NetEqualizer

Honestly, bandwidth control will always be needed, as long as there is more demand for bandwidth than there is bandwidth available.  We will not be lobbying for or against Net Neutrality.

8) The Courts

This is an area where I am a bit weak in my understanding of how a court will follow legal precedent. However, it seems to me that almost any court can rule from the bench by finding the precedent it wants and ignoring others if it so chooses. Ultimately, Congress can pass new laws to regulate just about anything with impunity. There is no constitutional protection regarding Internet access. Most likely the FCC will be the agency carrying out enforcement once the laws are in place.