Networking Equipment and Virtual Machines Do Not Mix


By Joe D’Esopo

Editor’s Note:
We often get asked why we don’t offer our NetEqualizer as a virtual machine. Although the excerpt below is geared toward the NetEqualizer, you could just as easily substitute the word “router” or “firewall” in place of NetEqualizer, and the information would apply to just about any networking product on the market. For example, even a simple Linksys router has a version of Linux under the hood, and to my knowledge they don’t offer that product as a VM. In the following excerpt, lifted from a real response to one of our larger customers (a hotel operator), we detail our reasons.

————————————————————————–

Dear Customer,

We’ve very consciously decided not to release a virtualized copy of the software. The driver for our decision is throughput performance and accuracy.

As you can imagine, the NetEqualizer is optimized to do very fast packet/flow accounting and rule enforcement while minimizing unwanted negative effects (latencies, etc.) in networks. As you know, the NetEqualizer needs to operate in the sub-second time domain over what could be tens of thousands of flows per second.

As part of our value proposition, we’ve been successful, where others have not, at achieving tremendous throughput levels on low-cost commodity platforms (Intel-based Supermicro motherboards), which helps us provide a tremendous pricing advantage (typically we are 1/3 to 1/5 the price of alternative solutions). Furthermore, from an engineering point of view, we have learned from experience that slight variations in Linux, system clocks, NIC drivers, etc. can lead to many unwanted effects, and we often have to re-optimize our system when these things are upgraded. In some special areas, in order to enable super-fast speeds, we’ve had to write our own kernel-level code to bypass unacceptable speed penalties that we would otherwise have to live with on generic Linux systems. To some degree, this is our “secret sauce.” Nevertheless, I hope you can see that the capabilities of the NetEqualizer can only be realized by a carefully engineered synergy between our software, Linux, and the hardware.

With that as background, we have taken the position that a virtualized version of the NetEqualizer would not be in anyone’s best interest. The fact is, we need to know and understand the specific timing tolerances at any given moment and in any given system environment. This is especially true if a bug is encountered in the field and we need to reproduce it in our labs in order to isolate and fix the problem. (Note: many bugs we find are not of our own making. They are often caused by changes in Linux, where something that used to work fine behaves differently in a newer release without our knowledge, which we then have to discover and re-optimize around.)

I hope I’ve done a good job of explaining the technical complexities surrounding a “virtualized” NetEqualizer. I know it sounds like a great idea, but we believe it cannot be done to an acceptable level of performance and support.

The Internet was Never Intended for On-demand TV and Movies


By Art Reisman

www.netequalizer.com

I just got off the phone with one of our customers, who happens to be a large ISP. He chewed me out because we were throttling his video and his customers were complaining. I told him that if we did not throttle his video during peak times, his whole pipe would come to a screeching halt. It seems everybody is looking for a magic bullet to squeeze blood from a turnip.

Can the Internet be retrofitted for video?

Yes, there are a few tricks an ISP can do to make video more acceptable, but the bottom line is, the Internet was never intended to deliver video.

One basic trick being used to eke out some video is to cache local copies of video content and then deliver a copy to you when you click a URL for a movie. This technique follows the same path as the original on-demand video of the 1980s, the kind of service where you called your cable company and purchased a movie to start at 3:00 pm. Believe it or not, there was often a video player with a cassette at the other end of the cable going into your home, and your provider would just turn the video player on with the movie at the prescribed time. Today, the selection of available video has expanded and the delivery mechanism has gotten a bit more sophisticated, but for the most part, popular video is delivered via a direct wire from the operator into your home. It is usually NOT coming across the public Internet; it only appears that way (if it came across the Internet it would be slow and sporadic). Content that comes from the open Internet must come through an exchange point, and if your ISP has to rely on its exchange point to retrieve video content, things can get congested rather quickly.

What is an Internet Exchange point and why does it matter?

Perhaps an explanation of exchange points might help. Think of a giant railroad yard where trains from all over the country converge and then return from where they came. In the yard they exchange their goods with the other train operators. For example, a train from Montana brings in coal destined for power plants in the east, and the trains from the east bring mining supplies and food for the people of Montana. Per a gentleman’s agreement, the railroad companies transfer some goods to other operators and take some goods in return. Though fictional, this is a fair trade agreement, and it works as long as everybody exchanges about the same amount of stuff. But suppose one day a train from the south shows up with ten times the load it wishes to exchange, and suppose its goods are perishable, like raw milk products. Not only does it have more than its fair share to exchange, it also has a time dependency on the exchange: it must get its milk to other markets quickly or the milk loses all value. You can imagine that some of the railroads in the exchange cooperative would be overloaded and problems would arise.

I wish I could take every media person who writes about the Internet into a room and not let them leave until they understand the concept of an Internet exchange point. The Internet is founded on a best-effort exchange agreement. Everything is built off this model, and it cannot easily be changed.

So how does this relate back to the problems of video?

There really is no problem with the Internet itself; it works as intended and is a magnificent model of best-effort exchange. The problem arises when content providers pump video content into the pipes without any consideration of what might happen at the exchange points.

A bit of quick history on exchange point evolution.

Over the years, the original government network operators started exchanging with private operators such as AT&T, Verizon, and Level 3. These private operators have made great efforts to improve the capacity of their links and exchange points, but the basic problem still exists: the sender and receiver never have any guarantee that their real-time streaming video will get to the other end in a timely manner.

As for caching, it is a band-aid. It works some of the time for the most popular videos that get watched over and over again, but it does not solve the problem at the exchange points, and consumers and providers are always pumping more content into the pipes.

So can the problem of streaming content be solved?

The short answer is yes, but it would not be the Internet. I suspect one might call it the Internet for marketing purposes, but out of necessity it would be some new network with a different political structure and entirely different rules. It would have a much higher cost to ensure data paths for video, and operators would have to pass the cost of transport and path setup directly on to the content providers to make it work. Best-effort fair exchange would be out of the picture.

For example, over the years I have seen numerous plans by wizards who draw up block diagrams on how to make the Internet a signaling, switching network instead of a best-effort network. Each time I see one of these plans, I just sort of shrug. It has been done before, and done very well: these planners never consider the data networks originally built by AT&T, which formed a fully functional switched network for sending data to anybody with guaranteed bandwidth. We’ll see where we end up.

Video Over 3G/4G Will Always Lag Behind the Quality of Wired Home Service


Written by Art Reisman

CTO – http://www.apconnections.net

Editor’s note:

Marketing and hype for services ultimately meet the reality of what is possible. Below, I explain the basic reasons behind what is possible in terms of video on your wired home network and then compare that to the limitations of 3G and 4G service.

In the wired network world, many consumers are connected to their provider via a hub-and-spoke topology.

The hub, “H”, is at your cable operator’s regional office and the spokes are dedicated wires to each home. When supplying video such as Netflix, your cable operator caches popular videos at their HUB, so when you select a movie, it plays unencumbered on a wire direct from the central office to your home. In this topology you are not competing for bandwidth on the last mile. The bottom line is you can watch a good deal of video without interruption.

Yes, it is possible to watch video on your wireless device, but unlike the wired network to your home, claims of high speeds from 4G providers have limitations. Due to the way wireless frequencies operate, the more users on the nearest tower, the more likely your video feed will break up.

With a wireless provider there is also a hub, but unlike the HUB of the wired network, many users share a single wire (frequency) back to this HUB. Your wireless provider uses time-division multiplexing to give each user a slice of the bandwidth on the wire. There are no dedicated wires to each phone; the wire back to the high-bandwidth HUB is virtual and only exists for a short moment in time. As you add more and more devices to the wire, each time slice becomes shorter and shorter, and at some point your time slice will become so small that it will be impossible to watch a video, no matter how fast the advertised speed to your wireless phone.
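The arithmetic behind this is simple enough to sketch. The Python snippet below is our own illustration (the channel rate and user counts are made-up numbers, not carrier figures): under time-division multiplexing, each user's sustained rate is just the shared channel rate divided by the number of devices taking turns on it.

```python
# Illustrative sketch: per-user throughput on a time-division-multiplexed
# wireless channel. All users on a tower share one frequency; each gets
# an equal slice of time, so each user's sustained rate is the channel
# rate divided by the number of users.

def per_user_throughput_kbps(channel_kbps: float, users: int) -> float:
    """Sustained rate per user when `users` devices share the channel."""
    return channel_kbps / users

# An advertised 20 Mbps "4G" channel looks fast with a few users...
print(per_user_throughput_kbps(20_000, 4))    # 5000.0 kbps each
# ...but degrades badly as more devices join the same tower.
print(per_user_throughput_kbps(20_000, 200))  # 100.0 kbps -- too slow for video
```

The wired hub-and-spoke case avoids this because the divisor stays at one: the last mile is not shared.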

Note: There is variability in the quality of video in the wired model as well, but it is related to where the content is located, not to the last-mile contention described above.

Editors Choice: The Best of Speeding up Your Internet


Edited by Art Reisman

CTO – www.netequalizer.com

Over the years we have written a variety of articles related to Internet Access Speed and all of the factors that can affect your service. Below, I have consolidated some of my favorites along with a quick convenient synopsis.

How to determine the true speed of video over your Internet connection: If you have ever wondered why you can sometimes watch a full-length movie without an issue while at other times you can’t get the shortest of YouTube videos to play without interruption, this article will shed some light on what is going on behind the scenes.

FCC is the latest dupe when it comes to Internet speeds: After the Wall Street Journal published an article on Internet provider speed claims, I decided to peel back the onion a bit. This article exposes anomalies between my speed tests and what I experienced when accessing real data.

How to speed up your Internet connection with a bandwidth controller: This is more of a technical article for Internet Service Providers. It details techniques used to eliminate congestion on their links and thus increase the perception of higher speeds to their end users.

You may be the victim of Internet congestion: An article aimed at consumer and business users to explain some of the variance in your network speeds when congestion rears its ugly head.

Just how fast is your 4G network?: When I wrote this article, I was a bit frustrated with all the amazing claims of speed coming with wireless 4G devices. There are some fundamental gating factors that will forever ensure that your wired connection remains an order of magnitude faster than any wireless data device.

How does your ISP enforce your Internet speed?: Goes into some of the techniques used on upstream routers to control the speed of Internet and data connections.

Burstable Internet connections, are they of any value?: Sheds light on the ambiguity of the term “burstable.”

Speeding up your Internet connection with an optimizing appliance: Breaks down the tradeoffs of various techniques.

Why caching alone will speed up your Internet: One of my favorite articles. Caching, although a good idea, often creates great but unattainable expectations. Find out why.

QoS is a matter of sacrifice: Explains how quality of service is a “zero sum” game, and why somebody must lose when favoring one type of traffic.

Using QoS to speed up traffic: More on the pros and cons of using a QoS device.

Nine tips and tricks to speed up your Internet connection: A great collection of tips, this article seems to be timeless and continually grows in popularity.

Network bottlenecks when your router drops packets: A simple, yet technical, explanation of how hitting your line speed limit on your router causes a domino effect.

Why is the Internet access in my hotel so slow: Okay, I admit it, this was an attempt to draw some attention to our NetEqualizer, which solves this problem about 99 percent of the time for the hotel industry. You can bring a horse to water, but you cannot make it drink.

Speed test tools from M-labs: The most reliable speed test tool there is; it uses techniques that cannot easily be fooled by special treatment from your provider.

Are hotels jamming 3G access?: They may not be jamming 3G, but they are certainly in no hurry to make it better.

Five more tips in testing your Internet speed: More tips to test Internet speed.

The Evolution of P2P and Behavior-Based Blocking


By Art Reisman

CTO – APconnections

www.netequalizer.com

I’ll get to behavior-based blocking soon, but before I do, I encourage anybody dealing with P2P on their network to read about the evolution of P2P outlined below. Most of the methods historically used to thwart P2P are short-lived pesticides, and resistance is common. Behavior-based control is a natural, wholesome predator of P2P that has proved to be cost effective over the past 10 years.

The evolution of P2P

P2P as it exists today is a classic example of Darwinian evolution.

In the beginning there was Napster. Napster was a centralized depository for files of all types. It also happened to be a convenient place to distribute unauthorized, copyrighted material. And so the music industry, unable to work out a licensing and distribution agreement with Napster, basically closed it down. So now you had all these consumers used to getting free music, and like habituated wild animals, they were in no mood to pay $15.99 per CD at their local retailer.

P2P technology was already in existence when Napster was closed down; however, until that time it was intended to be a distribution system for legitimate content coming out of academia. By decentralizing content to many distribution points, the cost of distribution was much lower than hosting content on a private server. Decentralized content, good for legitimate distribution of academic material, quickly became a nightmare for the music industry. Instead of having one cockroach of illegal content to deal with, they now had millions of little P2P cockroaches all over the world to contend with.

The music industry had a multi-billion dollar leak in its revenue stream and went after enforcing copyright policy by harassing ISPs and threatening consumers with jail time. For the ISP, the legal liability of having copyrighted material on the network was a hassle, but the bigger problem was the congestion. When content was distributed by a single-point supplier, there were natural cost barriers to prevent bandwidth utilization from rising unchecked. For example, when you buy a music file from Amazon or iTunes, both ends of the transaction require some form of payment: the supplier pays for a large bandwidth pipe, and the consumer pays money for the file. With P2P, the distributors and the clients are all consumers with essentially unlimited data usage on their home accounts, and the content is free. As P2P file sharing rose, ISPs had no easy way of changing their pricing model to deal with the orgy of file sharing. Although invisible to the public, it was a cyber party that rivaled the 10-cent beer night fiasco of the 1970s.

Resistant P2P pesticides

In order to thwart P2P usage, ISPs and businesses started spending hundreds of millions of dollars on technology that tracked specific P2P applications and blocked those streams. This technology is referred to as layer 7 blocking. Layer 7 blocking involves looking at the specific content traversing the Internet and identifying P2P applications by their specific footprint. Intuitively, this solution was a no-brainer* – spot P2P and block it. Most installations of layer 7 blocking showed some initial promise; however, as was the case with the previous cockroach infestation, P2P again evolved to meet the challenge and then some.

How does newer evolved P2P thwart layer 7 shaping?

1) There are now encrypted P2P clients whose footprint is hidden, and thus all the investment in a layer 7 shaper can go up in smoke once encrypted P2P infects your network. It simply can’t be spotted.

2) P2P clients open and close connections much faster than the first generation of the early 2000s. To keep up with the flurry of connections over a short time, the layer 7 engine must have many times the processing power of a traditional router, and must do the analysis quickly. The cost of layer 7 shaping is rising much faster than the cost of adding bandwidth to a circuit.

Also: There are legal problems with eavesdropping on customer data without authorization.

How does behavior-based P2P blocking keep up?

1) It uses a progressive rate limit on suspected P2P users.

P2P has the footprint of creating many simultaneous connections to move data across the Internet. When behavior-based shaping is in effect, it detects these high-connection-count users and slowly implements a progressive rate limit on all their data. This does not completely cut them off per se, but it slows the suspected P2P user down, and does so progressively as they open more P2P connections. This may seem a bit nonspecific in its targeting, but when done correctly it rarely affects non-P2P users, and even if it does, opening a large number of simultaneous downloads is rude and abhorrent behavior, and if it is not a P2P application it is most likely a virus.

2) It limits the user to a fixed number of simultaneous connections.

Also: It does not violate any privacy policies.
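To make point 1 concrete, here is a minimal Python sketch of a progressive rate limit keyed to connection counts. The threshold, penalty curve, and floor below are illustrative assumptions of this sketch, not the NetEqualizer's actual tuning (which, as noted, we keep to ourselves):

```python
# Sketch of behavior-based shaping: users with many simultaneous
# connections -- the classic P2P footprint -- get a progressively
# tighter rate limit. All constants here are illustrative.

CONNECTION_THRESHOLD = 40   # connections allowed before any penalty
PENALTY_PER_CONN = 0.02     # fraction of bandwidth removed per excess connection
MIN_FRACTION = 0.10         # never throttle below 10% of the base rate

def shaped_rate_kbps(base_rate_kbps: float, open_connections: int) -> float:
    """Rate limit for a user given their open connection count.
    Below the threshold traffic is untouched; above it, the limit
    tightens progressively rather than cutting the user off."""
    excess = max(0, open_connections - CONNECTION_THRESHOLD)
    fraction = max(MIN_FRACTION, 1.0 - excess * PENALTY_PER_CONN)
    return base_rate_kbps * fraction

print(shaped_rate_kbps(5000, 10))   # normal browsing: full rate
print(shaped_rate_kbps(5000, 60))   # suspected P2P: throttled to 60% of base
print(shaped_rate_kbps(5000, 200))  # heavy P2P: held at the 10% floor
```

Note how the penalty is continuous rather than a hard block: a user who closes their P2P client drifts back to full speed on their own, with no blacklist to maintain.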

That covers the basics of P2P behavior-based shaping. In practice, we have developed our techniques with a bit of intelligence and do not wish to give away all of our fine tuning secrets, but suffice it to say, I have been implementing behavior-based shaping for 10 years and have empirically seen its effectiveness over time. The cost remains low with respect to licensing (very stable solution), and the results remain consistent.

* Although in some cases there was very little information about how well the solution was working, companies and ISPs shelled out license fees year after year.

Are You Unknowingly Sharing Bandwidth with Your Neighbors?


Editor’s Note: The following is a revised and updated version of our original article from April 2007.

In a recent article titled “The White Lies ISPs Tell about Broadband Speeds,” we discussed some of the methods ISPs use when overselling their bandwidth in order to put on their best face for their customers. To recap a bit, oversold bandwidth is a condition that occurs when an ISP promises more bandwidth to its users than it can actually deliver; hence, during peak hours you may actually be competing with your neighbor for bandwidth. Since “overselling” is a relative term, with some ISPs pushing the limit to greater extremes than others, we thought it a good idea to do a quick follow-up and define some parameters for measuring the oversold condition.

For this purpose we use the term contention ratio: the number of users sharing an Internet trunk relative to the size of that trunk. We normally think of Internet trunks in units of megabits. For example, 10 users sharing a one-megabit trunk would have a 10-to-1 contention ratio. If they shared the bandwidth on the trunk equally and simultaneously, each user could sustain a constant feed of 100 Kbps, which is exactly 1/10 of the overall bandwidth.
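In code form, the arithmetic above is just two divisions. The snippet below is a simple illustration of ours, using the 10-users-on-one-megabit example from the text:

```python
# The contention-ratio arithmetic described above, as a sketch.

def contention_ratio(users: int, trunk_mbps: float) -> float:
    """Users per megabit of trunk (10 users on 1 Mbit -> 10-to-1)."""
    return users / trunk_mbps

def fair_share_kbps(users: int, trunk_mbps: float) -> float:
    """Sustained rate per user if everyone transmits simultaneously
    and shares the trunk equally."""
    return trunk_mbps * 1000 / users

print(contention_ratio(10, 1))  # 10.0 -- a 10-to-1 contention ratio
print(fair_share_kbps(10, 1))   # 100.0 Kbps each, 1/10 of the trunk
```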

So what is an acceptable contention ratio?

From a business standpoint, it is whatever a customer will put up with and pay for without canceling their service. This definition may seem ethically suspect, but whether in the bygone days of telecommunications phone service or contemporary Internet bandwidth business, there are long-standing precedents for overselling. What do you think a circuit busy signal is caused by? Or a dropped cell phone call? It’s best to leave the moral debate to a university assignment or a Sunday sermon.

So, without pulling any punches, what exactly will a customer tolerate before pulling the plug?
Here are some basic unofficial observations:
  • Rural customers in the US and Canada: Contention ratios of 10 to 1 are common (in 2007 this was 20 to 1)
  • International customers in remote areas of the world: Contention ratios of 20 to 1 are common (in 2007 this was 80 to 1)
  • Internet providers in urban areas: Contention ratios of 5 to 1 are to be expected (in 2007 this was 10 to 1) *

* Larger cable operators have extremely fast last-mile connections; most of their speed claims are based on the speed of the last mile and not on their Internet exchange point thresholds. The numbers cited here are related to their connection to the broader Internet, not the last mile from their office (NOC) to your home. Admittedly, the lines of what constitutes the Internet can be blurred, as many cable operators cache popular content locally (Netflix movies, for example). In that case the movie is delivered from a server at their local office direct to your home, hence technically we would not consider it part of your contention ratio to the Internet.

The numbers above are a good, rough starting point, but things are not as simple as they look. There is a statistical twist as bandwidth amounts get higher.

From the customer’s perspective of speed, contention ratios can actually increase as the overall Internet trunk size gets larger. For example, if 50 people can share one megabit without mutiny, it should follow that 100 people can share two megabits without mutiny, as the ratio has not changed. It is still 50 to 1.

However, from observations of hundreds of ISPs, we can easily conclude that perhaps 110 people can share two megabits with the same tolerance as 50 people sharing one megabit. What this means is that the larger the ISP, the more bandwidth it can buy at a fixed cost per megabit, and thus the larger the contention ratio it can get away with.

Is this really true? And if so, what are its implications for your business?

This is simply an empirical observation, backed up by talking to literally thousands of ISPs over the course of four years and noticing how their oversubscription ratios increase with the size of their trunk while customer perception of speed remains about the same.

A conservative estimate is that, starting with the baseline ratio listed above, you can safely add 10 percent more subscribers above and beyond the original contention ratio for each megabit of trunk they share.
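As a sketch, that rule of thumb could be applied as below. How the 10 percent accumulates per megabit is our reading of the estimate; here it is applied as simple, non-compounding growth, so treat the exact numbers as illustrative:

```python
# Sketch of the rule of thumb above: start from a baseline contention
# ratio and add 10 percent more subscribers for each megabit of shared
# trunk. Non-compounding growth is an assumption of this sketch.

def max_subscribers(baseline_ratio: float, trunk_mbps: int) -> float:
    """Estimated subscriber capacity: baseline ratio times trunk size,
    grown by 10% for every megabit of trunk."""
    baseline = baseline_ratio * trunk_mbps
    return baseline * (1 + 0.10 * trunk_mbps)

# With the rural baseline of 10-to-1:
print(max_subscribers(10, 1))  # about 11 subscribers on a 1 Mbit trunk
print(max_subscribers(10, 5))  # 75.0 subscribers on a 5 Mbit trunk, not just 50
```

The point of the estimate survives the exact formula: subscriber capacity grows faster than linearly in trunk size, which is why larger ISPs can run hotter ratios without complaints.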

Related Articles

How to speed up access on your iPhone

How to determine the true speed of video over your Internet Connection

NetEqualizer News: September 2012


September 2012

Greetings!

Enjoy another issue of NetEqualizer News! This month, we preview our new GUI for NetEqualizer, discuss a recent NetEqualizer case study we conducted with one of our customers, and announce our next technical seminar at Washington University – St. Louis. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

Fall is officially in the air in Boulder, Colorado! Cool nights are now the norm and my backyard garden is full of ripe zucchinis and tomatoes. As promised in last month’s newsletter, our NEW NetEqualizer GUI is almost ready for harvest! We will be conducting our Beta Test in September with a limited number of participants. We expect our GA release to be available in October. If you are interested in being part of our Beta Test, please email me!

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly at art@apconnections.net. I would love to hear from you!

New NetEqualizer GUI
The new, highly-anticipated NetEqualizer GUI is here, and we are ready for Beta testers!

The new GUI is part of the 6.0 Software Update, and includes the same functionality that you already know, with enhancements in the following areas:

New Dashboard Feature

Our new Dashboard provides an intuitive visual display of the status on critical data and settings within NetEqualizer. The Dashboard contains on/off statuses for Equalizing, ntop, Packet Capture, Quotas, and Caching, so that you can quickly tell if your key functions are running. It also contains statistics about traffic running through your NetEqualizer.

Menus Aligned by Key Functions

We have redesigned our menus to better support your workflow. For example, if you are setting up and configuring your unit, all key functions related to this are now in the “Setup and Configuration” Menu section. Other key menus are Management and Reporting, Troubleshooting and Support, and Maintenance and Reference.

Consistent Look and Feel

We’ve enhanced our look and feel by modernizing the interface and improving error messages, buttons, and colors.

Professional Quota API 

The Professional Quota API functionality introduced in 5.8 has been incorporated into our 6.0 GUI. The Professional Quota API helps you to quickly and easily utilize our NetEqualizer User-Quota API toolset commands via a GUI interface.

Please email us if you would like to be part of our Beta Test! Tests will run throughout September, with our GA release sometime in October.

To view a live NetEqualizer demo with the new GUI installed, click here to register (for security reasons, we can’t post the password to the live demo machine here).

As always, the 6.0 Software Update will be available at no charge to customers with valid NetEqualizer Software Subscriptions (NSS).

For more information on the NetEqualizer or the upcoming release, visit our blog or contact us toll-free in the U.S. at 888-287-2492.

Library Case Study
Washington County Corporate Library Services (WCCLS) recently agreed to participate in a NetEqualizer case study.

The case study expands on our already-existing Testimonials section of our website by discussing the challenge that was faced by the customer, what other solutions were considered, and what the benefits and results were with NetEqualizer.

Take a look at the WCCLS Case Study here!

If your organization would like to participate in a similar case study, we’d love to talk to you! Email sandy@apconnections.net if you are interested.


Midwest Technical Seminar at WUSTL
Our CTO, Art Reisman, is coming to Washington University – St. Louis (WUSTL) for a NetEqualizer Technical Seminar!

The half-day seminar will be hosted by WUSTL on Monday, October 29th and will include lunch after the event concludes. If you are in the area, we’d like to see you there!

To learn more, and to register, click here.


Best Of The Blog

Just How Fast Is Your 4G Network?

By Art Reisman – CTO – APconnections

The subject of Internet speed and how to make it go faster is always a hot topic. So that begs the question, if everybody wants their Internet to go faster, what are some of the limitations? I mean, why can’t we just achieve infinite speeds when we want them and where we want them?

Below, I’ll take on some of the fundamental gating factors of Internet speeds, primarily exploring the difference between wired and wireless connections. As we have “progressed” from a reliance on wired connections to a near-universal expectation of wireless Internet options, we’ve also put some limitations on what speeds can be reliably achieved. I’ll discuss why the wired Internet to your home will likely always be faster than the latest fourth generation (4G) wireless being touted today…

Photo Of The Month

Haystack Rock

Haystack Rock, located in Cannon Beach, Oregon, is a 72 meter-high sea stack. A stack is a geologic landform consisting of steep rock along the coast that has been isolated by erosion. There are lots of accessible and interesting tide pools surrounding this rock that are constantly being studied by a full-time team of biologists.

Network Bottlenecks – When Your Router Drops Packets, Things Can Get Ugly


By Art Reisman

CTO – APconnections

As a general rule, when a network router sees more packets than it can send or receive on a link, it will drop the extra packets. Intuitively, when your router is dropping packets, one would assume that the perceived slowdown, per user, would be just a gradual, even shift slower.

What happens in reality is far worse…

1) Distant users see their response times spiral downward.

Martin Roth, a colleague of ours who founded one of the top performance analysis companies in the world, provided this explanation:

“Any device which is dropping packets “favors” streams with the shortest round trip time, because (according to the TCP protocol) the time after which a lost packet is recovered is depending on the round trip time. So when a company in Copenhagen/Denmark has a line to Australia and a line to Germany on the same internet router, and this router is discarding packets because of bandwidth limits/policing, the stream to Australia is getting much bigger “holes” per lost packet (up to 3 seconds) than the stream to Germany or another office in Copenhagen. This effect then increases when the TCP window size to Australia is reduced (because of the retransmissions), so there are fewer bytes per round trip and more holes between two round trips.”

In the screen shot above (courtesy of avenida.dk), the bandwidth limit is 10 Mbit (= 1 MByte/s net traffic), so everything on top of that will get discarded. The problem is not the discards (that is standard TCP behaviour), but the connections that are forcefully closed because of the discards. After the peak in closed connections, there is a “dip” in bandwidth utilization, because too many connections were cut.

2) Once you hit a congestion point, where your router is forced to drop packets, overall congestion actually gets worse before it gets better.

When applications don’t get a response due to a dropped packet, instead of backing off and waiting, they tend to start sending retries, and this is why you may have noticed prolonged periods (30 seconds or more) of no service on a congested network. We call this the rolling brown out. Think of this situation as a sort of doubling down on bandwidth at the moment of congestion. Instead of easing into a full network and lightly bumping your head, all the devices demanding bandwidth ramp up their requests at precisely the moment your network is congested, resulting in an explosion of packet dropping until everybody finally gives up.
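The rolling brown out can be illustrated with a toy model (entirely our own construction, with made-up numbers): each interval, any traffic that was dropped is retried on top of the steady demand, so the offered load climbs at exactly the moment the link is full.

```python
# Toy model of the "rolling brown out": dropped traffic is retried on
# top of steady demand, so offered load snowballs while the link is
# saturated. Demand, capacity, and retry behavior are illustrative.

def simulate(demand_mbps: float, capacity_mbps: float, rounds: int = 5):
    """Return the offered load in each interval. Traffic that could not
    fit in the previous interval is retried in the next one."""
    backlog = 0.0
    loads = []
    for _ in range(rounds):
        offered = demand_mbps + backlog              # steady demand plus retries
        loads.append(offered)
        backlog = max(0.0, offered - capacity_mbps)  # dropped traffic comes back
    return loads

print(simulate(8, 10))   # [8.0, 8.0, 8.0, 8.0, 8.0] -- under capacity, stable
print(simulate(12, 10))  # [12.0, 14.0, 16.0, 18.0, 20.0] -- demand snowballs
```

Real TCP stacks do eventually back off, but applications layered on top (browsers, streaming players) often restart their requests, which is what keeps the spiral going in practice.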

How do you remedy outages caused by congestion?

We have written extensively about solutions to prevent bottlenecks. Here is a quick summary with links:

1) The most obvious: increase the size of your link.

2) Enforce rate limits per user.

3) Use something more sophisticated, like a NetEqualizer, a device designed specifically to counter the effects of congestion.
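For option 2, per-user rate limits are classically implemented with a token bucket. The sketch below is generic (it is not NetEqualizer's implementation, and the rate and burst numbers are arbitrary):

```python
class TokenBucket:
    """Allow `rate` bytes/second with bursts of up to `burst` bytes."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the limit: drop or queue this packet

# Example: cap each user at 1 Mbit/s (125,000 bytes/s) with a 30 KB burst
bucket = TokenBucket(rate=125_000, burst=30_000)
```

The weakness the reseller ran into below follows directly from this design: the bucket caps each user at a fixed rate whether or not the link is actually busy, which is why rate limits alone leave bandwidth on the table.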

From Martin Roth of Avenida.dk

“With NetEqualizer we may get the same number of discards, but we get fewer connections closed, because we “kick” the few connections with the high bandwidth, so we do not get the “dip” in bandwidth utilization.

The graphs (above) were recorded using 1 second intervals, so here you can see the bandwidth is reached. In a standard SolarWinds graph with 10 minute averages the bandwidth utilization would be under 20% and the customer would not know they are hitting the limit.”

———————————————————————-

The excerpt below is from a reseller who had been struggling with congestion issues at a hotel; he tried basic rate limits on his router first. Rate limits will buy you some time, but on an oversold network you can still hit the congestion point, and for that you need a smarter device.

“…NetEq delivered a 500% gain in available bandwidth by eliminating rate caps, possible through a mix of connection limits and Equalization.  Both are necessary.  The hotel went from 750 Kbit max per accesspoint (entire hotel lobby fights over 750Kbit; divided between who knows how many users) to 7Mbit or more available bandwidth for single users with heavy needs.

The ability to fully load the pipe, then reach out and instantly take back up to a third of it for an immediate need like a speedtest was also really eye-opening.  The pipe is already maxed out, but there is always a third of it that can be immediately cleared in time to perform something new and high-priority like a speed test.”
 
“Rate Caps: nobody ever gets a fast Internet connection.
Equalized: the pipe stays as full as possible, yet anybody with a business-class need gets served a major portion of the pipe on demand.”
– Ben Whitaker – jetsetnetworks.com

Are those rate limits on your router good enough?

Nine Tips for Organic Technology Start Ups


By Art Reisman

Art is CTO and Co-Founder of APconnections – makers of the NetEqualizer. NetEqualizer is used by thousands of ISPs worldwide to arbitrate bandwidth. He is also the principal engineer and inventor of the Kent Moore EVA, a product used to troubleshoot millions of vehicle vibration issues since 1992.

1) Find somebody who has built a business on their own, and better yet, somebody who has done it more than once from scratch.

For example, a Harvard MBA who went to work for Goldman Sachs right out of school has no idea what you are up against. They may be brilliant, but without experience specifically in growing a start-up, their education and experience are not as good as those of somebody who has done it on their own.

2) Be leery of late-1990s dot-com moguls.

Many good people got lucky during those years. It was a rare time that will likely never happen again. Yes, there are some true stars from that era, but most were just people who were in the right place at the right time. Their experiences generally don’t translate to a marketplace where money is tight and you must bite and scratch for every inch of success.

3) Be careful not to give too much credence to the advice of current and former executives at large companies.

They are great if you are looking for connections and introductions within those companies, but rarely do they understand bootstrapping a start-up. These executives most likely operated in a company with large resources and rampant bureaucracy, which required a completely different set of skills than a start-up does.

4) Amazingly, I have found real estate brokers to be a great source of marketing ideas.

Not the agents, but the founders who built real estate companies up from scratch. I can assure you they have some creative ideas that will translate to your tech business.

5) Product companies must avoid the consulting trap.

If you produce a software product (or any product, for that matter), you will always be inundated with specialty, one-off requests from customers. These requests are well intentioned, but you can’t let a single customer drive your feature set and direction. The obvious exception to this rule is when you are getting similar requests from multiple customers. If you start building special features for single customers, you will ultimately barely break even, and may go broke trying to please them. At some point (now), you have to say: this is our product, this is our price, and these are the features; if a customer needs specialty features, you will need to politely decline. If your competition takes the account on promises of customization, you can be sure they are spreading their resources thin.

6) Validate your product: see if you can sell to strangers.

Early on, you need to sell what you have to somebody who is not a friend. Friends are great for testing a product, making you feel good, or talking up your company, but for real, honest feedback on whether your product will be a commercial success you need to find somebody who buys your product. I don’t really care if it is a $10 sale or a $10,000 sale; it is important to establish that somebody is willing to purchase your product. From there, you can work on pricing models. Perfection is great, but don’t stay in development for years making things better and perfecting your support channel, or whatever. The reality is that you have to sell something to build momentum, and delay to market is your enemy. If you cannot find customers willing to commit their hard-earned money to your product at some early stage, you do not have a product.

You should be able to take early deposits on the concept if nothing else.

7) Don’t spend precious cash on patents and lawyers to defend nonexistent value.

As an organic or unfunded start-up, the last thing you need to worry about is somebody stealing your idea, and yet this is the first piece of advice you will get from everybody you know. The fact is, there are millions of patents out there for failed products, protecting nothing. I suppose it could happen that somebody steals your idea and profits before you get off the ground, but it is much more likely you will waste six months’ mortgage on a patent you’ll never get a chance to defend. Even if you have a patent, you won’t be able to defend yourself against a deep-pocketed rival. The good news is that if you have a good, growing idea, investors will take care of protecting it when they buy you.

8) Become an expert in your field. Maybe you are already? Sounds obvious, but make sure you know every detail of your technology and how it can help your customers.

9) Test the market like Billy Mays (may he rest in peace).

Before he passed away, Billy and his partner had a show that took you through the test-market phase of the products they introduced. The plan was simple: build a cheesy commercial to demo the cheesy product, then run your advertisements in a small metro market on late-night TV. Although your audience may be insomniacs watching re-runs of old movies, you need to find a way to test-market your idea and get honest feedback (people calling to buy your product is a good indicator). You might even want to run some teasers to your market before you launch, but do so with limited resources. If you get a representative sample, you can then decide to ramp up from there with some confidence.

10) Need versus buy. The only measure of success is somebody buying your product. Just because people “need” your product is not an indicator of whether they are willing to pay for it. People “need” lots of things and actually buy only a small percentage of them. I need a bigger house, a nice car, a vacation to Hawaii. I also need a sprinkler system and a faster computer, but I bought none of these things this past year.

From 2008 to 2012, the hot-selling items have been very basic services, such as telephone systems, heat, and advertising. Very few businesses are buying anything beyond the essentials in any quantity. This could change if the economy goes back into a growth phase, but the point here is to build something that is a necessity with clear value, and you must test that value by selling product. An open wallet is the only way to validate need versus buy; marketing surveys of intentions will not tell the truth. Don’t get me wrong, there is always opportunity out there, but you constantly need to validate your threshold of value by selling something.


Related Business Advice Articles.

Tips to make your WISP more profitable

Terry Gold’s blog has a good bit of advice sprinkled throughout

How I got my start the story of NetEqualizer

Building a software company from Scratch

Except for Equalizing, All Other WAN Optimizers Fall Short When You Need Them Most!


Wouldn’t it be nice to get help when you’re in trouble, such that you never suffer the effects?

Like the Secret Service, whose job it is to protect the President: they stay out of the way, discreet and unnoticeable, until danger approaches; then they spring into action like a well-choreographed army geared to destroy the threat. If they do their jobs right, trouble is averted and things continue on without any calamity.

When it comes to bandwidth contention, saturation, and blackout threats, is there any technology whose only mission is to spring into action to avoid this situation? Yes, but as far as I know, only one: Equalizing!

But first, let’s briefly review the many technologies that are marketed and sold under the banner “WAN Optimization.” What do they really do? Most of the time, they try to prioritize one thing over another, thus accelerating a subset of your applications. In some other cases, they take an X Mbps pipe and try to make it feel like an (X+Y) Mbps pipe. But what do they do if the majority of applications causing the peak are “high priority,” or what if demand is such that you peak beyond X+Y Mbps?

In a previous blog article, we wrote about various WAN Optimization technologies and their pros and cons. It’s a great read if you would like more general details about the variety of specific WAN Optimization technologies.

Yet which of these WAN Optimization technologies helps during peak bandwidth periods? Do any of them help you manage through those peak periods, kicking in to provide relief and then kicking out once they’re no longer needed? Isn’t that the time when you need help most, and isn’t that the type of help you need?

I get asked to describe the differences between the NetEqualizer and other products/technologies all the time. These are good and fair questions, and I like answering them. However, it does demonstrate that product and technology differences are not widely understood by the marketplace. Not all “WAN Optimization” technologies operate the same way, or solve the same problems.

With each of the non-Equalizing technologies, you can still suffer from peak load problems because, by definition, the other technologies do nothing to dynamically change the bandwidth demand that’s causing the peak load. When using non-Equalizer-based technology, bandwidth demand is still allowed to increase until the network can’t deal with it anymore. The only thing you’ve achieved is pushing out the point at which the peak may happen; you’ve done nothing to remove the fundamentals of the peak bandwidth demand. After saturation occurs, you end up in the same place with the same problems and complaints. That’s not relief, and that’s not a solution! It’s more or less the same as buying a little more bandwidth: it may provide some additional breathing room at first, but it doesn’t really solve the problem, especially long-term.

While most “WAN Optimization” approaches extend or “stretch” bandwidth, ONLY Equalizing dynamically optimizes QoS during peak periods and can “stretch” the effective bandwidth almost indefinitely. To achieve this, a few applications are slowed down, but if properly tuned these will represent less than 10% of all applications, and frankly, they’re the ones causing the peak problem anyway, so it’s fair and appropriate.

The fundamental principle of Equalizing is to optimize the WAN by behavior, and only during impacted conditions. An Equalizer will provide all applications the bandwidth they need until it senses that a peak condition is occurring, at which time it “kicks in” by slowing down the largest flows while still allowing the smaller flows to pass through the network as a priority and without any delay. When correctly set, large flows should constitute 5-10% of all flows, and by slowing this small percentage of flows to the advantage of all others, the Equalizer leverages bandwidth to the benefit of the other 90-95%.  Since decisions on what to slow down are made by flow size, and not by port, protocol, or content, it is effective on 100% of all flows, including those that are encrypted or otherwise “cloaked.” Most importantly, during peak periods, it will dynamically react by fairly slowing down only the “hogging” applications.
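The selection rule just described can be sketched in a few lines. This is our own simplification for illustration only, not the actual NetEqualizer algorithm; the trigger point, the 10% figure, and the sample flows are assumptions:

```python
def pick_flows_to_slow(flows, link_capacity, trigger=0.85, large_frac=0.10):
    """flows: {flow_id: bytes_per_second}. If the link is past the trigger
    utilization, return the largest ~10% of flows (at least one) to be
    delayed; otherwise leave every flow alone."""
    total = sum(flows.values())
    if total < trigger * link_capacity:
        return []  # no congestion: touch nothing
    ranked = sorted(flows, key=flows.get, reverse=True)
    n = max(1, int(len(ranked) * large_frac))
    return ranked[:n]

flows = {"video": 600_000, "backup": 450_000, "web1": 20_000,
         "web2": 15_000, "voip": 12_000, "dns": 1_000}
slowed = pick_flows_to_slow(flows, link_capacity=1_000_000)
```

Note that the decision keys only on flow size, never on port or content, which is why the approach works even on encrypted traffic: here only the single biggest flow gets delayed, and the small interactive flows sail through untouched.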

When the peak period ends, the delays are removed, and all traffic resumes in its natural order. This type of WAN optimization is specifically geared toward peak period management, and is highly effective. Furthermore, since it’s driven by a few key parameters, it is a “set-and-forget” product that requires zero ongoing maintenance and doesn’t require any foreknowledge of the types of users, devices, or applications that will be confronting the network. That’s a perfect fit for networks with unpredictable uses like those administered by ISPs, schools, business centers, hotels, libraries, convention centers, any public WiFi network, etc… Wherever you have an unpredictable application load to manage, Equalizing is far superior to any other method.

It is true peak contention management like no other – and the only technology that helps at the time when you need it most!

Please comment – we’d love to hear your opinions.

How to Determine the True Speed of Video over Your Internet Connection


Art Reisman CTO www.netequalizer.com

Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

More and more, Internet Service Providers are using caching techniques on a large scale to store local copies of Netflix Movies and YouTube videos. There is absolutely nothing wrong with this technology. In fact, without it, your video service would likely be very sporadic. When a video originates on your local provider’s caching server, it only has to make a single hop to get to your doorstep. Many cable operators now have dedicated wires from their office to your neighborhood, and hence very little variability in the speed of your service on this last mile.

So how fast can you receive video over the Internet (video that is not stored on your local provider’s caching servers)? I suppose this question would be moot if all video known to mankind were available from your ISP. In reality, providers store only a tiny fraction of what is available on their caching servers. The reason caching can be so effective is that most consumers only watch a tiny fraction of what is available, and they tend to watch what is popular. To determine how fast you can receive video over the Internet, you must bypass your provider’s cache.

To ensure that you are running a video from beyond your provider’s cache, google something really obscure, like “Chinese-language YouTube on preparing flowers.” Don’t use this search term if you are in a Chinese neighborhood, but you get the picture, right? Search for something obscure that is likely never watched near you. Pick a video 10 minutes or longer, and then watch it. The video may get broken up, or, more subtly, you may notice the buffer bar falls behind or barely keeps up with the playing of the video. In any case, if you see a big difference between watching an obscure video and a popular one, this will be one of the best ways to analyze your true Internet speed.

Note: Do not watch the same video twice in a row when doing this test. The second time you watch an obscure video from China, it will likely run from your provider’s cache, thus skewing the experiment.
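If you want to quantify “barely keeps up,” the arithmetic is simple: playback stalls whenever the bytes downloaded so far fall behind what the player has consumed. The sketch below is our own illustration; the bitrates and the two-second startup buffer are assumptions, not properties of any particular player:

```python
def playback_stalls(bytes_per_interval, video_bitrate_bps, startup_sec=2):
    """bytes_per_interval: bytes received in each one-second interval.
    After a short startup buffer, playback consumes bitrate/8 bytes per
    second; return True if the buffer ever runs dry."""
    need = video_bitrate_bps / 8
    buffered = 0.0
    for second, got in enumerate(bytes_per_interval):
        buffered += got
        if second >= startup_sec:
            buffered -= need
            if buffered < 0:
                return True  # the buffer bar fell behind: video freezes
    return False

# A 4 Mbit/s video on a path that only sustains ~3 Mbit/s past the cache
stalls = playback_stalls([375_000] * 30, video_bitrate_bps=4_000_000)
```

In this example the path delivers 3 Mbit/s against a 4 Mbit/s video, so the small head start from the startup buffer drains within a few seconds and the video stalls, which is exactly the symptom to watch for with an obscure, uncached video.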

Google High Speed Internet Service is a Smart Play


Someday it will happen: a search engine that really understands the context of what you are looking for.  Maybe it will come from a young group of grad students with a school research project?  This would be an ironic twist for Google, since this is exactly how they came to power; all the more reason to understand the dangers of complacency.

I must admit that I have noticed a difference since Google’s upgrade in May.  Things have gotten better; however, it is an incremental improvement in the battle to get rid of all the bogus linked-up pages looking for higher rankings and muddling real content.  Their hold as the top search engine will always be a tenuous position.

My advice to Google

Now is the time to leverage your market position, and the best thing I can think of would be to build a rock-solid fiber-to-the-home network in a major metropolitan area.  A real meat-and-potatoes service that cannot be undermined by a rogue start-up. Combine your ISP with your ability to host content (worry about the anti-trust stuff later). With the largest and most efficient mass storage facilities in the world, and fiber-to-the-home, you can easily cache massive amounts of video content for instant delivery, thus easily creaming the competition’s delivery cost. You now have a product with a much higher entry barrier. Give it away at cost and fund it with your advertising network. Oh, it looks like Google is one step ahead of me hmm???

NetEqualizer News: August 2012


August 2012

Greetings!

Enjoy another issue of NetEqualizer News! This month, we preview our new NetEqualizer GUI, introduce P2P Blocking on the NetGladiator, and ask for your help compiling NetEqualizer user experiences. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

A message from Art…
Art Reisman, CTO – APconnections

With August comes the beginning of the fall harvest. Farmer’s markets are just beginning to fill up with summer squash, corn, and tomatoes in our area! Seeing nature’s bounty gets me thinking about how to enrich our products and offer our own bountiful harvest.

After nine years, we felt it was time to refresh the NetEqualizer GUI. I’m excited to announce that we are redesigning our interface to improve look & feel and make it easier to use! On the NetGladiator side, we are leveraging our DPI technology to add P2P Blocking to our security capabilities. Both projects will be ready for the fall harvest! Stay tuned to NetEqualizer News for updates on availability and release details.

We love it when we hear back from you – so if you have a story you would like to share with us of how we have helped you, let us know. Email me directly at art@apconnections.net. I would love to hear from you!

Coming this Fall: New NetEqualizer GUI
After 9 years we are finally revamping the GUI for the NetEqualizer!

The new GUI will provide the same functionality that the current GUI has, but it will be presented in a much more organized, intuitive, and modern way.

We will also be developing additional functionality that allows users to more easily and effectively administer their NetEqualizers.

One of the most exciting improvements is a new dashboard feature. The dashboard will be the default home page and will provide a heads up display of the most critical data and settings within NetEqualizer.

Beta testing for the new NetEqualizer GUI will begin sometime in September with a full release coming this fall. And, as always, the new GUI will be available at no charge to customers with valid NSS. Stay tuned to NetEqualizer News or our blog for announcements regarding the new GUI!


Share Your NetEqualizer Experiences!
We love it when we hear from our customers – especially messages of appreciation for the products we work so hard on.

As part of our Library Survey a few months ago, we received a message from Sara Holloway, of Handley Regional Library, asking if she could write an article about NetEqualizer for our blog. We thought this was a great idea, so Sara wrote this post. Thanks Sara!

Starting this fall, we want to open up our blog to our customers more often. Writing a post on our blog is beneficial to us, our readership, and you!

It is a great way to gain exposure for your business and to contribute to a widely-read blog.

If you are interested in being a guest contributor, email our Director of Marketing, Sandy McGregor, at sandy@apconnections.net!


Block P2P with NetGladiator
NetGladiator is already proving to be an effective hacking and botnet deterrent, but the usefulness of NetGladiator does not stop with web application security. Because of the customizable nature of the configuration, and the fact that NetGladiator is built on powerful DPI technology, the sky is the limit in what you can do with NetGladiator.

We wrote about some of the potential uses last month, and we are excited to announce an implementation of one of those ideas – P2P Blocking – available as an additional module to existing NetGladiators.

This implementation differs from our P2P feature on NetEqualizer. NetEqualizer focuses on managing the effects of P2P on a network through equalizing. With NetGladiator, we serve a security-driven need. P2P is one of the most common ways that malware gets through firewalls and enters internal machines. Thus, with NetGladiator, we actually block the protocols completely – greatly improving security.

We’ve already implemented the top 10 P2P protocols, but if your organization is facing a particular protocol outside of the top 10, NetGladiator can be configured to block it.

Take a look at this report from a NetGladiator equipped with P2P Blocking (click here for accompanying blog post). You’ll notice that NetGladiator can effectively detect P2P traffic signatures and display which protocol has been discovered, all without hampering other traffic or user experience.
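Signature-based P2P detection of this sort boils down to matching well-known byte patterns in packet payloads. The toy classifier below is our own illustration, not the NetGladiator engine; the two signatures shown (the BitTorrent handshake prefix and the Gnutella connect string) are publicly documented:

```python
# Two widely documented P2P handshake signatures (an illustrative subset,
# not NetGladiator's actual signature set)
P2P_SIGNATURES = {
    "BitTorrent": b"\x13BitTorrent protocol",  # standard handshake prefix
    "Gnutella": b"GNUTELLA CONNECT",           # Gnutella connect string
}

def classify_payload(payload):
    """Return the protocol name if a known P2P signature matches, else None."""
    for name, sig in P2P_SIGNATURES.items():
        if payload.startswith(sig):
            return name
    return None

pkt = b"\x13BitTorrent protocol" + b"\x00" * 8  # start of a real handshake
```

A blocking device applies a check like this inline and simply drops or resets the matching connection, while everything that doesn't match passes through unexamined, which is why other traffic is unaffected.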

For more information on this new feature or NetGladiator in general, visit our website or check out our blog. You can also send questions to ips@apconnections.net!


Best Of The Blog

How to Build Your Own Linux-Based Access Point in 5 Minutes

By Steve Wagor – COO – APconnections

A popular post from the archives!
The motivations to build your own access point using Linux are many, and I have listed a few compelling reasons below:

1) You can use the Linux-rich set of firewall rules to customize access to any segment of your wireless network.
2) You can use SNMP utilities to report on traffic going through your AP.
3) You can configure your AP to send e-mail alerts if there are problems with your AP.
4) You can custom coordinate communications with other access points – for example, build your own Mesh network…

Photo Of The Month

Bulls in a Kansas Farm Field

These bulls may be angry, but at APconnections we are happy and excited for the near future – you could even say we are “bullish.” Our exciting new NetEqualizer GUI and NetGladiator feature enhancements are all great reasons to celebrate the upcoming fall season, and we are very optimistic in the value these improvements will provide to our customers!

P2P Protocol Blocking Now Offered with NetGladiator Intrusion Prevention


A few months ago we introduced our NetGladiator Intrusion Prevention (IPS) Device. To date, it has thwarted tens of thousands of robotic cyber attacks and counting. Success breeds success and our users wanted more.

When our savvy customers realized the power, speed, and low price point of our underlying layer 7 engine, we started getting requests for additional features, such as: “Can you also block Peer-to-Peer and other protocols that cannot be stopped by our standard Web Filters and Firewalls?”  It was natural to extend our IPS device to address this space; hence, today we are announcing the next-generation NetGladiator. We now offer a module that will allow you to block and monitor the world’s top 10 P2P protocols (which account for 99 percent of all P2P traffic). We also back our technology with our unique promise to implement a custom protocol blocking rule with the purchase of any system at no extra charge. For example, if you have a specific protocol you need to monitor and just can’t uncover it with your WebSense or Firewall filter, we will custom deliver a NetGladiator system that can track and/or block your unique protocol, in addition to our standard P2P blocking options.

Below is a sample Excel live report integrated with the NetGladiator in monitor mode. On the screen snapshot below, you will notice that we have uncovered a batch of uTorrent and FrostWire P2P traffic.

Please feel free to call 303-997-1300 or email our NetGladiator sales engineering team with any additional questions at ips@apconnections.net.

Related Articles

NetGladiator: A layer 7 shaper in sheep’s clothing

How to Speed Up Data Access on Your iPhone


By Art Reisman

Art Reisman CTO www.netequalizer.com

Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

Ever wonder if there was anything you can do to make your iPhone access a little bit faster?

When on your provider’s 4G network and data access is slow.

The most likely reason for slow data access is congestion on the provider line. 3G and 4G networks all have a limited-size pipe from the nearest tower back to the Internet. It really does not matter what your theoretical data speed is; when there are more people using the tower than the back-haul pipe can handle, you can temporarily lose service, even when your phone is showing three or four bars.

The other point of contention can be when the number of users connected to a tower exceeds the tower’s carrying capacity in terms of frequency.  If this occurs, you will likely lose not only data connectivity but also the ability to make and receive phone calls.

Unfortunately, you only have a couple of options in this situation.

– If you are in a stadium with a large crowd, your best bet is to text during the action. Pick a time when you know the majority of people are not trying to send data. If you wait for a timeout or the end of the game, you’ll find these correspond to the times when the network slows to a crawl, so try to finish your access before the last out of the game or the end of the quarter.

– Get away from the area of congestion. I have experienced complete lockout of up to 30 minutes when trying to text as a sold-out stadium emptied. In this situation my only option was to walk a half mile or so from the venue to get a text out. Once away from the main stadium, my iPhone connected to a tower with a different back haul, away from the congested stadium towers.

When connected to a local wireless network and access is slow.

Get close to the nearest access point.

Oftentimes, on a wireless network, the person with the strongest signal wins. Unlike the cellular data network, the 802.11 protocols used by public wireless access points have no way to time-slice data access. Basically, this means the device that talks the loudest will get all the bandwidth. In order to talk the loudest, you need to be closest to the access point.

On a relatively uncrowded network, you might have noticed that you get fairly good speed even on a moderate or weak signal.  However, when a large number of users compete for the attention of a local access point, the loudest can dominate all the bandwidth, leaving nothing for the weaker iPhones. The phenomenon of the loudest talker getting all the bandwidth is called the hidden node problem. For a good explanation of the hidden node issue, you can reference our white paper on the problem.

Shameless plug: If you happen to be a provider or know somebody that works for a provider please tell them to call us and we’d be glad to explain the simplicity of equalizing and how it can restore sanity to a congested network.