Building a Technology Company from Scratch


Editor's note: We wrote this article about a year ago, before the blog was established. Although this article chronicles the model used to bootstrap the NetEqualizer from open source, the basic formula applies to any aspiring open source developer.

When we started APconnections (APconnections makes the popular bandwidth shaping tool NetEqualizer), we had lots of time, very little cash, some software development skills, and a technology idea. This article covers a couple of bootstrapping pearls that we learned by doing.

Don’t be Afraid to Use Open Source

Using open source technology to develop and commercialize new application software can be an invaluable bootstrapping tool for startup entrepreneurs. It has allowed us to validate new technology with a willing set of early adopters who, in turn, provided us with references and debugging.

We used this huge number of early adopters, who love to try open source applications, to legitimize our application. Further, this large set of commercial “installs” helped us wring out many of the bugs, since these users had no grounds to demand perfection.

In addition, we jump-started our products without incurring large development expense. We used open source by starting with technology already in place and extending it, rather than building (or licensing) every piece from scratch.

Using open source code makes at least a portion of our technology publicly available. We use bundling, documentation, and proprietary extensions to make it difficult for larger players to steal our thunder. These extensions account for over half of our development work but can be protected by copyright.

Afraid of copycats? In many cases, nothing could be better than to have a large player copy you. Big players value time to market. If one player clones your work, another may acquire your company to catch up in the market.

The transition from open source users to paying customers is a big jump, requiring traditional sales and marketing. Don’t expect your loyal base of open source beta users to start paying for your product. We use testimonials from this critical mass of users to market to paying customers who are reluctant to be early adopters (see below).

Channels? Use Direct Selling and the Web

Our innovation is a bit of a stretch from existing products and, like most innovations, requires some education of the user. Much of the early advice we received related to picking a sales channel: just sign up reps, resellers, and distributors, and revenues will grow.

We found the exact opposite to be true. Priming channels is expensive. And, after we pointed the sales channel at customers, closing the sale and supporting the customer fell back on us anyway. Direct selling is not the path to rapid growth. But as a bootstrapping tool, direct selling has rewarded us with loyal customers, better margins, and many fewer returns.

We use the Internet to generate hot leads, but we don’t worry about our Google ranking. The key for us is to get every satisfied customer to post something about our product. It probably hasn’t improved our Google ranking, but customer comments have surely improved our credibility.

Honest postings to blogs and user groups have significant influence on potential customers. We explain to each customer how important their posting is to our company. We often provide them with a link to a user group or appropriate blog. And, as you know, these blogs stay around forever. Then, when we encounter new potential customers, we suggest that they Google our “brand name” and blog, which always generates a slew of believable testimonials. (Check out our Web site to see some of the ways we use testimonials.)

Using open source code and direct sales are surely out-of-step with popular ideas for growing technology companies, especially those funded by equity investors. But they worked very well for us as we grew our company with limited resources to positive cash flow and beyond.

NetEqualizer Evaluation Policy


Our official policy for customers requesting evaluation units is to require payment upfront.  However, we do honor a no-questions-asked  30-day return policy.

As you can imagine, we get a constant stream of requests for evaluation units. Obviously we’d love to provide everybody who asks with a demo unit. After all, the other brand-name packet shapers will throw them at you, especially if you are coming from an account they want to win over.

So, you may be wondering why we don’t do the same…

Some background:

APconnections sells quite a few units under $3,000. To put this in perspective, last year the CEO of a larger competitor selling similar equipment admitted that $4,000 is their break-even point.

So, how do we offer units starting at $2,000 and still turn a profit?

A big part of our model is to make sure that we do not drill dry wells. “Dry well” is industry speak for pursuing business that will never materialize. Yes, we love chatting with people, but in order to pay our engineers and stay in business, we must limit money spent supporting customers that are just “looking.” The easiest way to do this is to enforce our evaluation policy.

Serious customers that are ready to buy something but need to see it work in their network usually have no problem with purchasing up front. Some, but not all, customers that are not agreeable to purchasing up front may have cash flow problems of their own. In an economy where banks do not know how to qualify loans, we don’t want to try to calculate this risk.

Our conservative policy translates to much lower prices, and to date nobody is arguing with that.

NetEqualizer the Safe Bet for Optimizing Internet Links During an Economic Downturn


We just announced a record profit for the quarter ending September 2008. I have included a copy of that announcement below.

Although we do not believe (or want to see) our success come at the expense of other players in the market, there is a strong contrast if you compare our performance to the higher-cost publicly-traded players in this market (see charts below).

I suspect these high-end shapers with expensive sales channels may have trouble in this slowing market as they come under price pressure. IT departments continue to cut costs, and the main selling point of optimization products, the ROI from deferring bandwidth purchases, will lose some luster as Internet costs slowly fall. At some point, a high-end piece of equipment will lose out to simply adding more bandwidth.

NetEqualizer, on the other hand, is priced so much lower than these other products that our window of value will extend out at least another 10 years — perhaps more.

Although we are a private company, we would be happy to share financials under NDA with any customer that has concerns going forward. We have plenty of operating cash on hand and will likely expand as we pull out of this downturn and customers continue to look to reduce costs.

Stock charts for major players in the Internet/WAN optimization market

http://finance.yahoo.com/q/bc?s=RVBD&t=1y

http://finance.yahoo.com/q/bc?s=ALLT&t=2y&l=on&z=m&q=l&c=

http://finance.yahoo.com/q/bc?s=BCSI&t=1y&l=on&z=m&q=l&c=

Now, here’s our latest press release reporting profits…

———————————————————-

APconnections Announces 50-percent Increase in Profits During Current Quarter

LAFAYETTE, Colo., Sept. 22, 2008 — APconnections, a leading supplier of plug-and-play bandwidth shaping products, today announced that sales revenues have increased by 50 percent during the current quarter.

Company officials report that APconnections is finding that a growing number of ISPs, businesses, libraries, and universities are looking to the NetEqualizer to solve their Internet bandwidth congestion issues, oftentimes switching from more expensive traffic shaping solutions.

As companies deal with the ongoing economic struggles that have hit the nation, the NetEqualizer’s rare combination of effectiveness and affordability has been a major factor fueling this growth.

Other factors driving the upturn are:

  1. Comcast has adopted a similar fairness-based strategy to solve Internet congestion issues, thus validating APconnections’ long-held belief that deep packet inspection is on its way out. (See APconnections’ previous announcements on net neutrality: http://www.netequalizer.com)
  2. Direct sales and support for 90 percent of its customers, reducing the overall cost of sales.
  3. Simple turnkey set-up allowing new customer installations to require only one hour of support.

The NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology gives priority to latency sensitive applications, such as VoIP and email. It does it all dynamically and automatically, improving on other bandwidth shaping technology out there. It controls network flow for the best WAN optimization.

APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado.

NetEqualizer Offers Net Neutrality, User Privacy Compromise


Although the debates surrounding net neutrality and user privacy are nothing new, the recent involvement of the Federal Communications Commission is forcing ISPs and network administrators to rethink their strategies for network optimization. The potential benefits of layer-7 bandwidth shaping and deep packet inspection are coming into conflict with the rights of Internet users to surf the net unimpeded while maintaining their privacy.

Despite the obvious potential relationship between net neutrality, deep packet inspection and bandwidth shaping, the issues are not inherently intertwined and must be judged separately. This has been the outlook at APconnections since the development of the network optimization appliance NetEqualizer five years ago.

On the surface, net neutrality seems to be a reasonable and ultimately beneficial goal for the Internet. In a perfect world, all consumers would be able to use the Internet to the extent they saw fit, absent of any bandwidth regulation. However, that perfect world does not exist.

In many cases, net neutrality can become a threat to equal access. Whether this is true for larger ISPs is debatable; however, it cannot be denied when considering the circumstances surrounding smaller Internet providers. For example, administrators at rural ISPs, libraries, universities, and businesses often have no choice but to implement bandwidth shaping in order to ensure both reliable service and their own survival. When budgets allow only a certain amount of bandwidth to be purchased, once that supply is depleted, oftentimes due to the heavy usage of a small number of users, options are limited. Shaping is no longer a choice, but a necessity.

However, this does not mean that a free pass should be given for Internet providers to accomplish network optimization through any means available even at the expense of customer privacy. This is especially true considering that it’s possible to achieve network optimization without compromising privacy or equal access to the Internet. The NetEqualizer is a proven example.

Rather than relying on techniques such as deep packet inspection, NetEqualizer regulates bandwidth usage by connection limits and, through its fairness algorithm, ensures that all users are given equal access when the network is congested (Click here for a more detailed explanation of the NetEqualizer technology).
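As an illustration only (this is not APconnections' actual code), a fairness pass of this kind can be sketched in a few lines. The trigger ratio, penalty formula, and flow records below are all assumptions made for the example:

```python
# Sketch of a congestion-triggered "fairness" pass: when line
# utilization exceeds a trigger ratio, flows using more than their
# fair share are penalized (e.g., with added latency) so that small
# flows keep moving. Illustrative only; all parameters are assumed.

from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst: str
    rate_kbps: float     # current throughput of this connection
    penalty_ms: int = 0  # artificial latency applied to the flow

TRIGGER = 0.85  # start equalizing at 85% line utilization (assumed)

def equalize(flows, line_capacity_kbps):
    """Penalize flows exceeding their fair share, but only when congested."""
    total = sum(f.rate_kbps for f in flows)
    if total < TRIGGER * line_capacity_kbps:
        for f in flows:
            f.penalty_ms = 0  # no congestion: nobody is touched
        return
    fair_share = line_capacity_kbps / max(len(flows), 1)
    for f in flows:
        if f.rate_kbps > fair_share:
            # Bigger offenders get proportionally bigger penalties.
            f.penalty_ms = int(10 * f.rate_kbps / fair_share)
        else:
            f.penalty_ms = 0

flows = [Flow("10.0.0.2", "x", 4000), Flow("10.0.0.3", "y", 300)]
equalize(flows, 5000)  # a congested 5 Mbps link
```

The key property, as described above, is that the decision is based entirely on how much a flow is consuming, never on what the flow contains.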

Therefore, a heavy bandwidth user that might be slowing Internet access for other customers can be kept in check without having to actually examine or completely block the data that is being sent. The end result is that the large majority of users will be able to access the Internet unhindered, while the privacy of all users is protected.

In the midst of the ongoing debates over net neutrality and privacy, the NetEqualizer approach is gaining popularity. This is apparent in both an increase in sales as well as on message boards and forums across the Internet. A recent Broadband Reports post reads:

“I don’t think anyone’s going to argue with you if you’re simply prioritizing real time traffic over non-real time. Just so long as you’re agnostic as to who’s sending the traffic, not making deals behind people’s backs, etc. then I’d have no problem with my ISP letting me surf the web or e-mail or stream at full speed, even if it meant that, when another person was doing the same, I could only get 100 KBs on a torrent instead of 150.

“I’d much rather have a NetEq’d open connection than a NATed nonmanaged one, that’s for sure.”

It is this agnostic approach that differentiates NetEqualizer from other network optimization appliances. While network administrators are able to prioritize applications such as VoIP in order to prevent latency, other activity, such as BitTorrent, is still able to take place – just at a slower speed when the network is congested. This is all done without deep packet inspection.

“NetEqualizer never opens up any customer data and thus cannot be accused of spying. Connections are treated as a metered resource,” said Art Reisman, CEO of APconnections. “The ISPs that use NetEqualizer simply put a policy in their service contracts stating how many connections they support, end of story. BitTorrent is still allowed to run, albeit not as wide with unlimited connections.”

Although not a proponent of bandwidth shaping, TorrentFreak.com editor-in-chief and founder Ernesto differentiates NetEqualizer from other bandwidth shaping appliances.

“I am not a fan of bandwidth control, the correct solution is for providers to build out more capacity by reinvesting their profits, however I’ll concede a solution such as a NetEqualizer is much more palatable than redirecting or specially blocking bittorrent and also seems to be more acceptable to consumers than bandwidth caps or metered plans.

“There is a risk though, who decides what the ‘peaks times’ are, how much bandwidth / connections would that be? Let me reiterate, I would rather see that ISPs invest in network capacity than network managing hardware.

“The Internet is growing rapidly, and if networks ‘crash’ already, they are clearly doing something wrong.”

The ultimate capacity of individual networks will vary on a case-by-case basis, with some having little choice but to employ bandwidth shaping and others doing so for reasons other than necessity. It has never been the intention of APconnections to pass judgment on how or why users implement shaping technology. The NetEqualizer is simply providing a bandwidth optimization alternative to deep packet inspection that gives administrators the opportunity to manage their networks with respect to both net neutrality and customer privacy.

NetEqualizer Gains Traction against Competition in Australia


In a recent discussion on how and where to deploy a NetEqualizer, Stephan Wickham, Product Marketing Manager for KeyTrust (keytrust.com.au), offered the following astute observation:

“My view is to try NetEqualizer and see how it works – I would then only apply a more expensive solution in instances that require special features or functionality not available with NetEqualizer. I believe this approach is the most practical. I also don’t believe that identifying and reporting on 100s of application types as performed by other products on the market serves much purpose. It would be like trying to manage freeway traffic flow by the identifying vehicle types and then reserving lanes per type. NetEqualizer works more like identifying a gang riding Harleys disrupting traffic and turns them into nice people riding Vespa scooters going with the flow.”

Failover and NetEqualizer: The Whys and Why Nots


Do you want failover on your NetEqualizer, or have you wondered why it’s not available? Let me share a story with you that shaped our philosophy on failover.

A long time ago, back in 1993 or so, I was the Unix and operating system point person for the popular AT&T (i.e. Lucent and Avaya) voice messaging product called Audix. It was my job to make sure that the Unix operating system was bug free and to troubleshoot any issues.

At the time, Audix sales accounted for about $300 million in business and included many Fortune 500 companies around the world. One of the features which I investigated, tested, and certified was our RAID technology. The data on our systems consisted of the archives of all those saved messages that were so important, even more so before e-mail became the standard.

I had a lab setup with all sorts of disk arrays and would routinely yank one from the rack while an Audix system was running. The RAID software we’d integrated worked flawlessly in every test. We were one of the largest companies in the world and we spared no expense to ensure quality in our equipment, and we also charged a premium for everything we sold. If the RAID line item feature was included with an Audix system, it could run as high as $100,000.

Flash forward to the future. We get a call that a customer has lost all their data. A RAID system had failed. It was a well-known insurance company in the Northeast. Needless to say, they were not pleased that their $100,000 insurance policy against disk failure did not pan out.

I had certified this mechanism and stood behind it. So, I called together the RAID manufacturer and several Unix kernel experts to do a postmortem. After several days locked in a room, what we found was that the real-world failure did not follow our lab testing, where we had pulled live disk drives from running systems. In fact, the drive failed in such a way as to slowly corrupt the customer data on all disk drives, rendering it useless.

I did some follow-up research on failover strategies over the years and discovered that many people implement them for political reasons, to cover their asses. I do not mean to demean people covering their asses; it is an important part of business. But the real cost of testing and validating failover makes it impractical for most manufacturers.

Many customers ask, “If a NetEqualizer fails, will the LAN cards still pass data?” The answer is, we could certainly engineer our product this way, but there are no guarantees with fail-safe systems.

Here are the pros and cons of such a technology:

1) Just like my disk drive failure experience, a system can fail in many different ways, and the failover mechanism is likely not foolproof. So, I don’t want to recreate history for something we cannot (nor can anybody) reliably test in the real world.

2) NetEqualizer’s failure rate is about two percent over two years, mostly attributable to harsh operating conditions. That means you have a 1-in-50 chance of having a failure over a two-year period. Put simply, the odds are against this happening.

3) If a NetEqualizer fails, recovery is usually a simple matter of moving a cable. So, if you, or anyone with access to the NetEqualizer, are within an hour of your facility, that means you have a 1-in-50 chance of your network being down for one hour every two years because of a NetEqualizer.

4) Customers that really need a fully redundant failover for their operation duplicate their entire infrastructure and purchase two NetEqualizers. These customers are typically brokerage houses where large revenue could be lost. Since they already have a fully tested strategy at the macro level, a failover card on the NetEqualizer is not needed.

5) Customers just starting to dabble in redundancy often rely on Cisco’s Spanning Tree Protocol. Cisco has many years and billions of dollars invested in its switching technology, and it is rock solid.

6) Putting LAN failover cards in our product would likely raise our base price by about $1000. That would be a significant price increase for most customers, and one that would most likely not be worth paying for.

7) Most equipment failures are software or system related. We take pride in the fact that our boxes run forever and don’t lock up or need rebooting. A failover LAN card does not typically protect against system-type failures.
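The arithmetic behind points 2 and 3 above is easy to sanity-check. A minimal sketch, using the figures quoted in those points (a ~2% failure rate over two years and roughly one hour to swap a cable):

```python
# Sanity-checking the failure odds quoted above (figures assumed from
# points 2 and 3: ~2% failure rate over two years, ~1 hour to repair).

failure_rate_2yr = 0.02              # fraction of units failing in two years
odds_against = 1 / failure_rate_2yr  # the "1 in 50" chance of a failure

repair_hours = 1.0                   # on-site fix: move a cable
# Expected downtime attributable to a NetEqualizer per two-year period:
expected_downtime_hours = failure_rate_2yr * repair_hours
```

In expectation, that works out to about 1.2 minutes of downtime per two years, which is the core of the argument against paying $1,000 extra for a failover card.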

So, yes, we could sell our system as failsafe with a failover LAN card, but we would rather educate than exploit fears and misunderstandings. Hopefully we’ve accomplished that here.

Does TCP need an overhaul?


Just stumbled upon an article by Dr. Lawrence G. Roberts, CEO of Anagran Inc. He discusses the idea of solving Internet congestion by fixing the TCP protocol. Here is an excerpt:


There has been widespread discussion lately about the unfairness of the primary protocol we rely on with the Internet – Transmission Control Protocol (TCP) – along with many proposals on how to fix it. Since there are clearly many problems with both slow and unfair service, my question is: Should TCP be overhauled to fix today’s congestion control problem, or does the network itself need fixing?

First, the problems include:

  • Multi-flow unfairness – More flows, such as P2P, can consume too much capacity
  • Distance unfairness – Long-distance users get slower service
  • Loss unfairness – Random packet loss slows flows unevenly; Web access is slowed

He then goes on to discuss various specific congestion problems and proposes some ways to solve them by mucking with the TCP protocol itself. It is a very good article!
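To make the multi-flow unfairness he lists concrete: standard TCP congestion control converges toward an equal share per flow, not per user, so a user who opens many flows captures a proportionally larger slice of the link. A toy back-of-the-envelope calculation (the user names and flow counts here are invented for illustration):

```python
# Illustration of TCP's per-flow (not per-user) fairness.
# Assumes the idealized case where every flow converges to an
# equal share of the bottleneck link.

def per_user_share(flows_per_user, capacity_mbps):
    """Split capacity equally per flow, then sum it back per user."""
    total_flows = sum(flows_per_user.values())
    per_flow = capacity_mbps / total_flows
    return {user: n * per_flow for user, n in flows_per_user.items()}

# A web user with 2 flows vs. a P2P user with 40 flows on a 10 Mbps link:
shares = per_user_share({"web": 2, "p2p": 40}, capacity_mbps=10.0)
```

Under these assumptions the P2P user ends up with 20 times the bandwidth of the web user, purely by opening more connections, which is exactly the unfairness Roberts describes.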

I just wanted to point out that the NetEqualizer has already brought fairness back to many congested networks without retrofitting TCP. I just wish we were a little better at getting the word out!

Here is the link to the full article:

http://www.internetevolution.com/author.asp?section_id=499&doc_id=150113&

Eli Riles

Comcast Should Adopt Behavior-Based Shaping to Stay out of Trouble


Well it finally happened…

As reported by the NY Times:

SAN FRANCISCO — Comcast, the country’s largest residential Internet provider, said on Thursday that it would take a more equitable approach toward managing the ever-expanding flow of Web traffic on its network.

The cable company, based in Philadelphia, has been under relentless pressure from the Federal Communications Commission and public interest groups after media reports last year that it was blocking some Internet traffic of customers who used online software based on the popular peer-to-peer BitTorrent protocol.

As many of our ISP customers already know, we have been proselytizing that using layer-7 packet shaping is a slippery slope for any provider and it was only a matter of time before a large provider such as Comcast would be forced to change their ways.

Note: Layer-7 shaping involves looking at the data itself to determine what it is, a technique commonly used to identify BitTorrent traffic.


The NetEqualizer methodology for application shaping has been agnostic with respect to type of data for quite some time. We have shown through our thousands of customers that you can effectively control and give priority to Internet traffic based on behavior. We did not feel comfortable with our layer-7 application shaping techniques and hence we ceased to support that methodology almost two years ago. We now manage traffic as a resource much the same way a municipality would/should ration water if there was a shortage.

Customers understand this. For example, if you simply tell somebody they must share a resource such as water, the Internet, or butter (as in WWII), and that they may periodically get a reduced amount, they will likely agree that sharing the resource is better than one person getting all of the resource while others suffer. Well, that is exactly what a NetEqualizer does with Internet resources, albeit in real time. Internet bandwidth is very spiky. It comes and goes in milliseconds and there is no time for a quorum.
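For illustration only, here is one way "connections as a metered resource" could look in code. The class, quota, and names below are hypothetical sketches of the idea, not the actual NetEqualizer implementation:

```python
# Sketch of treating connections as a metered resource: each host
# gets a connection quota, and new connections beyond the quota are
# refused (or delayed) during congestion. Illustrative only.

from collections import defaultdict

class ConnectionMeter:
    def __init__(self, max_conns_per_host=30):
        self.max_conns = max_conns_per_host
        self.active = defaultdict(set)  # host IP -> set of connection ids

    def admit(self, host, conn_id):
        """Allow a new connection only if the host is under its quota."""
        if len(self.active[host]) >= self.max_conns:
            return False  # over quota: the new connection is not admitted
        self.active[host].add(conn_id)
        return True

    def close(self, host, conn_id):
        self.active[host].discard(conn_id)

meter = ConnectionMeter(max_conns_per_host=2)
ok1 = meter.admit("10.1.1.5", "c1")
ok2 = meter.admit("10.1.1.5", "c2")
ok3 = meter.admit("10.1.1.5", "c3")  # third connection exceeds the quota
```

Note that nothing here ever looks inside a packet; the meter only counts connections, which is why this style of shaping sidesteps the deep packet inspection debate entirely.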

We’ll keep an eye on this for you. If you are interested in learning more about how our technology differs from application-based shaping, the following link is very useful:

http://www.netequalizer.com/Compare_NetEqualizer.php

What Can We Do To Improve NetEqualizer?


We are always looking for feedback on how to improve NetEqualizer products. What features do you want to see in 2008? Some ideas we have in the works are:

  • CALEA probe for VoIP
  • Shaping by domain name (input a URL instead of an IP)
  • Quieter fan (already shipping this!)

This is your chance to tell us what you’d like to see in the NetEqualizer for 2008!

Please send all ideas to admin@apconnections.net or give us a call at 303-997-1300, extension 102.

2008 Pricing Update


Wouldn’t it be nice if Santa would bring us a promise of no manufacturing or logistic cost increases for the new year?

Santa is magical but not quite that magical!

Our Finance guys are crunching the numbers now and will have our new 2008 Product Pricing List available the first week of January. You still have plenty of time, however, to get your order in before those new prices go into effect.

Don’t Delay – Check out our current NetEqualizer Price List and get your order in today!

Name-Based Shaping Is Now Available!


APconnections is pleased to announce the availability of name-based shaping. Now you can set class of service for your users by domain (user) name. And regardless of where or how they log into your network, the NetEqualizer will enforce their subscribed service-level agreements (i.e., 3 meg, 1 meg, etc.).

How does this service work?

It is designed to work with your DHCP server, the device on your network that hands out IP addresses to clients when they log in or become active. Since clients can receive a new and different IP address each time they log in, it is normally difficult, and perhaps impossible, to assign a unique SLA to each customer. But with NetEqualizer name-based shaping, you assign the SLA to the customer's domain name (computer name), and the SLA sticks with them wherever and whenever they log in.
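As an illustration of the idea (not the actual NetEqualizer/DHCP integration), a hostname-to-SLA lookup could be sketched like this; the plan table, hostnames, and lease representation are all invented for the example:

```python
# Sketch of name-based shaping: the SLA is keyed by hostname, and the
# current DHCP lease table is used to translate that into per-IP limits.
# Hostnames, plans, and the lease format here are hypothetical.

SLA_KBPS = {"alice-laptop": 3000, "bob-pc": 1000}  # per-subscriber plans

def current_limits(leases):
    """Map each leased IP to the SLA of the hostname holding it.

    `leases` is a list of (hostname, ip) pairs, e.g. parsed from a
    DHCP server's lease table.
    """
    limits = {}
    for hostname, ip in leases:
        if hostname in SLA_KBPS:
            limits[ip] = SLA_KBPS[hostname]
    return limits

# Alice reconnects and gets a different IP; her 3 Mbps cap follows her.
limits = current_limits([("alice-laptop", "192.168.1.77"),
                         ("bob-pc", "192.168.1.23")])
```

The point of the design is that the shaping rule never hard-codes an IP address, so it survives lease churn on a busy network.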

I use MAC addresses for shaping, why would I use name-based shaping?

MAC address shaping works well with small networks and is sufficient if you plan to remain under about 300 customers. But once you grow beyond the number of MAC addresses a network segment can handle, MAC address shaping breaks down and becomes complex to manage. If you are currently using MAC shaping and plan to grow your customer base, it’s a good time to think about making the architecture change to domain-based shaping or some other alternative.