NetEqualizer-Lite Is Now Available!


Last month, we introduced our newest release, a Power-over-Ethernet NetEqualizer. Since then, with your help, we’ve named the new release the NetEqualizer-Lite and are already getting positive feedback from users. Here’s a little background on what led us to release the NetEqualizer-Lite. Over the years, several customers had expressed interest in placing a NetEqualizer as close as possible to their towers in order to relieve congestion. In many cases, however, this would require both a weatherproof and low-power NetEqualizer unit, two features that were not available up to this point. With demand for this type of technology growing, we spent the last few months working to meet this need and developed the NetEqualizer-Lite.

Here’s what you can expect from the NetEqualizer-Lite:

  • Power over Ethernet
  • Up to 10 megabits of shaping
  • Up to 200 users
  • Comes complete with all standard NetEqualizer features

And, early feedback on the new release has been positive. Here’s what one user recently posted on DSLReports.com:

We’ve ordered 4 of these and deployed 2 so far. They work exactly like the 1U rackmount NE2000 that we have in our NOC, only the form factor is much smaller (about 6x6x1) and they use POE or a DC power supply. I amp-clamped one of the units, and it draws about 7 watts….The NetEqualizer has resulted in dramatically improved service to our customers. Most of the time, our customers are seeing their full bandwidth. The only time they don’t see it now is when they’re downloading big files. And, when they don’t see full performance, it’s only for the brief period that the AP is approaching saturation. The available bandwidth is re-evaluated every 2 seconds, so the throttling periods are often brief. Bottom line to this is that we can deliver significantly more data through the same AP. The customers hitting web pages, checking e-mail, etc. virtually always see full bandwidth, and the hogs don’t impact these customers. Even the hogs see better performance (although that wasn’t one of my priorities). (DSLReports.com)

Pricing for the new model will be $1,200 for existing NetEqualizer users and $1,550 for non-customers purchasing their first unit. However, the price for subsequent units will be $1,200 for customers and non-customers alike.

For more information about the new release, contact us at admin@apconnections.net or 1-800-918-2763.

Finally, a Bandwidth Control Appliance for Under $1,500


Lafayette, Colorado, April 9, 2009

APconnections today announced a small-business bandwidth control device that lists at $1,499 for single-unit orders.

This new offering handles up to 10 megabits and 100 users, with room to spare for some expansion. It comes complete with all the standard features of the NetEqualizer, but in a smaller, low-power format with Power over Ethernet.

Demand for this new offering came from two sources:

1) There was huge demand for an affordable traffic shaping device to help small businesses run their VoIP concurrently with their data traffic over their Internet link.

2) There was also a need for a low-end unit, with PoE, for the WISP market.

In a large wireless network, congestion often occurs at tower locations. With a low-cost PoE version of the NetEqualizer, wireless providers can now afford to have advanced bandwidth control at or near their access distribution points.

According to Joe DeSopo from NetEqualizer, “About half of wireless network slowness comes from p2p (BitTorrent) and video users overloading the access points. We have had great success with our NE2000 series, but the price point of $2,500 was a bit too high to duplicate all over the network.”

For a small or medium-sized office with a hosted VoIP PBX solution, the NetEqualizer works like a genie in a bottle. It is one of the few products on the market that can provide QoS for VoIP over an Internet link. And now, with volume pricing approaching $1,000, it will help revolutionize the way offices use their Internet connection.

The NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology gives priority to latency-sensitive applications, such as VoIP and email. It does it all dynamically and automatically, improving on other available bandwidth shaping technology. It controls network flow for the best WAN optimization.
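For readers curious how shaping of this general kind can be expressed, here is a minimal conceptual sketch in Python. It is an assumption-laden illustration, not NetEqualizer’s actual algorithm or code: the 2-second re-check echoes the user report above, while the 85% threshold and the “penalize the largest flows” policy are invented for the example.

```python
# Hypothetical sketch of fairness-based "behavior shaping" on a congested
# link. NOT NetEqualizer's code: the threshold and penalty policy are
# invented for illustration. The idea: re-check the link every couple of
# seconds and, only when it is near saturation, briefly slow the heaviest
# flows so small, latency-sensitive traffic (VoIP, email) keeps moving.

LINK_CAPACITY_KBPS = 10_000   # assume a 10 Mbps trunk
CONGESTION_RATIO = 0.85       # assumed "near saturation" threshold

def flows_to_penalize(flows):
    """flows: dict of flow id -> current rate in kbps; returns ids to slow."""
    total = sum(flows.values())
    if total < CONGESTION_RATIO * LINK_CAPACITY_KBPS:
        return []                                  # headroom: touch nothing
    hogs = sorted(flows, key=flows.get, reverse=True)
    return hogs[: max(1, len(hogs) // 10)]         # briefly penalize top ~10%

if __name__ == "__main__":
    sample = {"voip-call": 80, "email": 30, "big-download": 6_500, "video": 2_800}
    print("penalize:", flows_to_penalize(sample))  # -> ['big-download']
```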

APconnections is a privately held company founded in 2003 and is based in Lafayette, Colorado.

Related Articles

Is Your ISP Throttling Your Bandwidth?


Editor’s Note: With all the recent media coverage about ISPs giving preferential treatment to VoIP, and the controversy over Net Neutrality, we thought it might be interesting to revisit this original article Art published in PC Magazine back in 2007.

Update, August 2010: The FCC is not being fooled anymore.

Analysis: The White Lies ISPs Tell About Broadband Speeds

By Art Reisman, CTO, APconnections (www.netequalizer.com)

In a recent PC Magazine article, writer Jeremy Kaplan did a fantastic job of exposing the true Internet access speeds of the large consumer providers.

He did this by creating a speed test that measured the throughput of continuous access to popular Web sites like Google, Expedia, and many others. Until this report was published, the common metric for comparing ISPs was through the use of the numerous Internet speed test sites available online.

The problem with this validation method was that it could not simulate real speeds encountered when doing typical Web surfing and downloading operations. Plus, ISPs can tamper with the results of speed tests — more on this later.

When I saw the results of PC Magazine’s testing, I was a bit relieved to see that the actual speeds of large providers were somewhere between 150 Kbit/s and 200 Kbit/s. This is a far cry from the two-, three- or even four-megabit download speeds frequently hyped in ISP marketing literature.

These slower results were more in line with what I have experienced from my home connection, even though online Internet speed tests always show results close, if not right on, the advertised three megabits per second. There are many factors that dictate your actual Internet speed, and there are also quite a few tricks that can be used to create the illusion of a faster connection.

Before I continue, I should confess that I make my living by helping ISPs stretch their bandwidth among their users. In doing this, I always encourage all parties to be honest with their customers, and in most cases providers are. If you read the fine print in your service contract, you will see disclaimers stating that “actual Internet speeds may vary”, or something to that effect. Such disclaimers are not an attempt to deceive, but rather a simple reflection of reality.

Guaranteeing a fixed-rate speed to any location on the Internet is not possible, nor was the Internet ever meant to be such a conduit. It has always been a best-effort mechanism. I must also confess that I generally only work with smaller ISPs. The larger companies have their own internal network staff, and hence I have no specific knowledge of how they deal with oversold conditions, if they deliberately oversell, and, if so, by how much. Common business sense leads me to believe they must oversell to some extent in order to be profitable. But, again, this isn’t something I can prove.

Editor’s update, September 2009: Since this article was written, many larger providers have come clean.

A Matter of Expectations

How would you feel if you pumped a gallon of gas only to find out that the service station’s meter was off by 10 percent in its favor? Obviously you would want the owners exposed immediately and demand a refund, and possibly even lodge a criminal complaint against the station. So, why does the consumer tolerate such shenanigans with their ISP?

Put simply, it’s a matter of expectations.

ISPs know that new and existing customers are largely comparing their Internet-speed experiences to dial-up connections, which often barely sustain 28 Kbit/s. So, even at 150 Kbits/s, customers are getting a seven-fold increase in speed, which is like the difference between flying in a jet and driving your car. With the baseline established by dial-up being so slow, most ISPs really don’t need to deliver a true sustained three megabits to be successful.

As a consumer, reliable information is the key to making good decisions in the marketplace. Below are some important questions you may want to ask your provider about their connection speeds. It is unlikely the sales rep will know the answers, or even have access to them, but perhaps over time, with some insistence, details will be made available.

Five Questions to Ask Your ISP

1.) What is the contention ratio in my neighborhood?

At the core of all Internet service is a balancing act between the number of people who are sharing a resource and how much of that resource is available.

For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks — perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town.

The speed you, the customer, can attain is limited to how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers and take advantage of the natural usage behavior, which assumes that not all users are active at one time.

The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe, while minimizing service complaints due to a slow network. In some cases, I have seen as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds when compared to dial-up.
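As a back-of-the-envelope illustration (the numbers below are made up, not any particular provider’s), the worst-case speed each active subscriber sees follows directly from the pipe size and how many subscribers are active at once:

```python
# Rough per-subscriber speed under a shared (contended) pipe.
# Illustrative numbers only; real usage patterns vary widely.

def per_user_kbps(pipe_mbps, subscribers, active_fraction):
    """Speed each active user sees if active users split the pipe evenly."""
    active = max(1, round(subscribers * active_fraction))
    return (pipe_mbps * 1000) / active

# A 10-megabit local pipe shared by 1,000 subscribers, 10% active at once:
print(round(per_user_kbps(10, 1000, 0.10)), "kbps per active user")  # -> 100
# Still several times faster than 28 kbps dial-up, which is why even
# extreme contention ratios can feel acceptable to former dial-up users.
```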

2.) Does your ISP’s exchange point with other providers get saturated?

Even if your neighborhood link remains clear, your provider’s connection can become saturated at its exchange point. The Internet is made up of different provider networks and backbones. If you send an e-mail to a friend who receives service from a company other than your provider, then your ISP must send that data on to another network at an exchange point. The speed of an exchange point is not infinite, but is dictated by the type of switching equipment. If the exchange point traffic exceeds the capacity of the switch or receiving carrier, then traffic will slow.

3.) Does your provider give preferential treatment to speed test sites?

As we alluded to earlier, it is possible for an ISP to give preferential treatment to individual speed test sites. Providers have all sorts of tools at their disposal to allow and disallow certain kinds of traffic. It seems rather odd to me that in the previously cited PC Magazine test, which used highly recognized Web sites, the speed results were consistently well under advertised connection speeds. One explanation for this is that providers give full speed only when going to common speed test Web sites.

4.) Are file-sharing queries confined to your provider network?

Another common tactic to save resources at the exchange points of a provider is to re-route file-sharing requests to stay within their network. For example, if you were using a common file-sharing application such as BitTorrent, and you were looking for some non-copyrighted material, it would be in your best interest to contact resources all over the world to ensure the fastest download.

However, if your provider can keep you on their network, they can avoid clogging their exchange points. Since companies keep tabs on how much traffic they exchange in a balance sheet, making up for surpluses with cash, it is in their interest to keep traffic confined to their network, if possible.

5.) Does your provider perform any usage-based throttling?

The ability to increase bandwidth for a short period of time, and then slow you down if you persist at downloading, is another trick ISPs can use. Sometimes they call this burst speed, which can mean speeds temporarily increased to as much as five megabits, and they make this sort of behavior look like a consumer benefit. Perhaps Internet usage will seem a bit faster, but it is really a marketing tool that allows ISPs to advertise higher connection speeds, even though these speeds can be sporadic and short-lived.

For example, you may only be able to attain five megabits at 12:00 a.m. on Tuesdays, or some other random unknown times. Your provider is likely just letting users have access to higher speeds at times of low usage. On the other hand, during busier times of day, it is rare that these higher speeds will be available.
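“Burst speed” of this kind is often built from something like a token bucket: you can draw down a stored allowance quickly, but once it is empty you fall back to the slower refill rate. The sketch below is a generic illustration of that mechanism, not any particular ISP’s implementation:

```python
import time

# Generic token-bucket sketch of "burst then throttle" behavior.
# Parameters are illustrative and not taken from any real provider.

class TokenBucket:
    def __init__(self, burst_bits, refill_bps):
        self.capacity = burst_bits     # how much you may burst at full speed
        self.tokens = burst_bits       # start with a full allowance
        self.refill_bps = refill_bps   # sustained rate once the burst is spent
        self.last = time.monotonic()

    def allow(self, bits):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_bps)
        self.last = now
        if bits <= self.tokens:
            self.tokens -= bits
            return True                # delivered at "burst" speed
        return False                   # allowance spent: traffic is slowed

# e.g., a 5-megabit burst allowance that refills at 1.5 Mbps afterward
bucket = TokenBucket(burst_bits=5_000_000, refill_bps=1_500_000)
```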

In writing this article, my intention was not to create a conspiracy theory about unscrupulous providers. Any market with two or more choices ensures that the consumer will benefit. Before you ask for a Congressional investigation, keep in mind that ISPs’ marketing tactics are no different from those of other industries, meaning they will generally cite best-case scenarios when promoting their products. Federal regulation would only thwart the very spirit of the Internet, which, as said before, has always been a best-effort infrastructure.

But, with the information above, it is your job as a consumer to comparison shop and seek answers. Your choices are what drive the market, and asking questions such as these is what will point ISPs in the right direction.

Since we first published this article, Google and others have been trying to educate consumers on Net Neutrality. There is now a consortium called M-Lab which has put together a sophisticated speed test site designed to give specific details on what your ISP is doing to your connection. See the article below for more information.

Related article: Ten things your Internet provider does not want you to know.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

The pros and cons of Disk (Web) Caching


Eli Riles, an independent consultant and former VP of sales for NetEqualizer, has extensively investigated the subject of caching with many ISPs from around the globe. What follows are some useful observations on disk/web caching.

Effective use of Disk Caching

Suppose you are the administrator for a network, and you have a group of 1,000 users who wake up promptly at 7:00 a.m. each morning and immediately go to MSNBC.com to retrieve the latest news from Wall Street. This synchronized behavior would create 1,000 simultaneous requests for the same remote page on the Internet.

Or, in the corporate world, suppose the CEO of a multinational 10,000-employee business, right before the holidays, put out an all-points, 20-page PDF file on the corporate site describing the new bonus plan. As you can imagine, all the remote WAN links might get bogged down for hours while each and every employee tried to download this file.

Well, it does not take a rocket scientist to figure out that if the MSNBC home page could somehow be stored locally on an internal server, that would alleviate quite a bit of pressure on your WAN or Internet link.

And in the case of the CEO memo, if a single copy of the PDF file were placed locally at each remote office, it would alleviate the rush of data.

Local Disk Caching does just that.

Offered by various vendors, caching can be very effective in many situations, and vendors can legitimately claim tremendous WAN speed improvements in some cases. Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing the WAN link unnecessarily.
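In miniature, the idea looks something like the toy sketch below: the first request for a page crosses the WAN link, and repeats are served from a local store. This is only an illustration; a real caching server adds expiry, revalidation and disk storage.

```python
from functools import lru_cache
import urllib.request

# Toy illustration of the caching idea: the first request for a page
# crosses the WAN link; repeat requests are served from local memory.
# A real caching server adds expiry, revalidation and disk storage.

@lru_cache(maxsize=256)
def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

page = fetch("http://www.msnbc.com/")        # goes out over the Internet
page_again = fetch("http://www.msnbc.com/")  # answered from the local cache
```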

You may know that most desktop browsers already do their own form of caching. Many web servers keep a timestamp of the last update to their data, and browsers such as the popular Internet Explorer will use a cached copy of a remote page after checking that timestamp.
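That timestamp check is HTTP’s conditional-request mechanism: the browser presents the date of its cached copy, and the server answers 304 Not Modified if nothing has changed, so the page body never crosses the link again. Here is a minimal sketch using Python’s requests library against a hypothetical URL:

```python
import requests

# Minimal sketch of a browser-style conditional request. Assumes the
# third-party `requests` package and a server that sends Last-Modified;
# the URL is a hypothetical placeholder.

url = "http://www.example.com/bonus-plan.pdf"
first = requests.get(url)
stamp = first.headers.get("Last-Modified")

if stamp:
    second = requests.get(url, headers={"If-Modified-Since": stamp})
    if second.status_code == 304:
        print("Not modified: reuse the cached copy, nothing re-downloaded")
    else:
        print("Changed: the server sent a fresh copy")
```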

So what is the downside of caching?

There are two main issues that can arise with caching:

1) Keeping the cache current. If you access a cached page that is not current, you are at risk of getting old and incorrect information. Some things you may never want cached, for example the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in the cache will not be synchronized with changes.

2) Volume. There are some 100 million websites out on the Internet. Each site contains upwards of several megabytes of public information. The amount of data is staggering, and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood that they will hit an uncached page. If you have a diverse set of users, it is unlikely the cache will have much effect on a given day.

Formal definition of Caching

Hotel Property Managers Should Consider Generic Bandwidth Control Solutions


Editor’s Note: The following Hotelsmag.com article caught my attention this morning. The hotel industry is now seriously starting to understand that it needs some form of bandwidth control. However, many hotel solutions for bandwidth control are custom marketed, which perhaps puts their economy of scale at a competitive disadvantage. The NetEqualizer bandwidth controller, like those of our competitors, crosses many market verticals, offering hotels an effective solution without the niche-market costs. For example, in addition to the numerous other industries in which the NetEqualizer is being used, some of our hotel customers include: the Holiday Inn Capitol Hill, a prominent Washington DC hotel; the Portola Plaza Hotel and Conference Center in Monterey, California; and the Hotel St. Regis in New York City.

For more information about the NetEqualizer, or to check out our live demo, visit www.netequalizer.com.

Heavy Users Tax Hotel Systems: Hoteliers and IT Staff Must Adapt to a New Reality of Extreme Bandwidth Demands

By Stephanie Overby, Special to Hotels — Hotels, 3/1/2009

The tweens taking up the seventh floor are instant-messaging while listening to Internet radio and downloading a pirated version of “Twilight” to watch later. The 200-person meeting in the ballroom has a full interactive multimedia presentation going for the next hour. And you do not want to know what the businessman in room 1208 is streaming on BitTorrent, but it is probably not a productivity booster.

To keep reading, click here.

NetEqualizer Bandwidth Control Tech Seminar Video Highlights


Tech Seminar, Eastern Michigan University, January 27, 2009

This 10-minute clip was professionally produced on January 27, 2009. It gives a nice, quick overview of how the NetEqualizer does bandwidth control while providing priority for VoIP and video.

The video specifically covers:

1) Basic traffic shaping technology and NetEqualizer’s behavior-based methods

2) Internet congestion and gridlock avoidance on a network

3) How peer-to-peer file sharing operates

4) How to counter the effects of peer-to-peer file sharing

5) Providing QoS and priority for voice and video on a network

6) A short comparison by a user (a university admin) who prefers NetEqualizer to layer-7 deep packet inspection techniques

Four Reasons Why Peer-to-Peer File Sharing Is Declining in 2009


By Art Reisman

CTO of APconnections, makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer


I recently returned from a regional NetEqualizer tech seminar with attendees from Western Michigan University, Eastern Michigan University and a few regional ISPs. While having a live look at Eastern Michigan’s p2p footprint, I remarked that it was way down from what we had been seeing in 2007 and 2008. The consensus from everybody in the room was that p2p usage is waning. Obviously this is not a wide sample to draw a conclusion from, but we have seen the same trend at many of our customer installs (3 or 4 a week), so I don’t think it is a fluke. It is kind of ironic, with all the controversy around Net Neutrality and BitTorrent blocking, that the problem seems to be taking care of itself.

So, what are the reasons behind the decline? In our opinion, there are several reasons:

1) Legal iTunes and other MP3 downloads are the norm now. They are reasonably priced and well marketed. These downloads still take up bandwidth on the network, but they do not clog access points with connections the way torrents do.

2) Most music aficionados are well stocked with the classics (bootleg or not) by now and are only grabbing new tracks legally as they come out. The days of downloading an entire collection of music at once seem to be over. Fans have their foundation of digital music and are simply adding to it rather than building it up from nothing as they were several years ago.

3) The RIAA enforcement got its message out there. This, coupled with reason #1 above, pushed users to go legal.

4) Legal, free and unlimited. YouTube videos are more fun than slow music downloads and they’re free and legal. Plus, with the popularity of YouTube, more and more television networks have caught on and are putting their programs online.

Despite the decrease in p2p file sharing, ISPs are still experiencing more pressure on their networks than ever from Internet congestion. YouTube and Netflix are more than capable of filling the void left by waning BitTorrent traffic. So, don’t expect the controversy over traffic shaping and the use of bandwidth controllers to go away just yet.

ROI calculator for Bandwidth Controllers


Is your commercial Internet link getting full? Are you evaluating whether to increase the size of your existing Internet pipe, and trying to weigh that cost against investing in an optimization solution? If you answered yes to either of these questions, you’ll find the rest of this post useful.

To get started, we assume you are somewhat familiar with the NetEqualizer’s automated fairness and behavior based shaping.

To learn more about NetEqualizer behavior-based shaping, we suggest our NetEqualizer FAQ.

Below are the criteria we used for our cost analysis.

1) It was based on feedback from numerous customers (in different verticals) over the previous six years.

2) In keeping with our policies, we used average rather than best-case savings scenarios.

3) Our scenario is applicable to any private business or public operator that administers a shared Internet link with 50 or more users.

4) For our example, we will assume a 10-megabit trunk at a cost of $1,500 per month.

ROI savings #1 Extending the number of users you can support.

NetEqualizer equalizing and fairness typically extend the number of users that can share a trunk by making better use of the available bandwidth in a given time period. Bandwidth can effectively be stretched by 10 to 30 percent:

Savings: $150 to $450 per month.

ROI savings #2 Reducing support calls caused by peak period brownouts.

We conservatively assume a brownout once a month caused by general network overload. With a transient brownout, you will likely spend debugging time trying to find the root cause. For example, a bad DNS server could be the problem, your upstream provider may have an issue, or the brownout may be caused by simple congestion. Assuming you dispatch staff to troubleshoot a congestion problem once a month, at an overhead of 1 to 3 hours, savings would be about $300 per month in staff hours.

ROI savings #3 No recurring costs with your NetEqualizer.

Since the NetEqualizer uses behavior-based shaping, your license is essentially good for the life of the unit. Layer-7 protocol shapers, by contrast, must be updated at least once a year. Savings: $100 to $500 per month.

The total

The cost of a NetEqualizer unit for a 10-megabit circuit runs around $3,000, and the low estimate for savings is around $500 per month.

In our scenario, the unit very conservatively pays for itself in 6 months.
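As a quick sanity check, here is that payback calculation in a few lines of Python. The figures are the ones quoted in this post; the savings number is our own conservative low-end estimate:

```python
# Payback-period check using the figures quoted in this post.
# The savings number is our own conservative low-end estimate.

unit_cost = 3000         # approximate NetEqualizer cost for a 10 Mbps circuit
monthly_savings = 500    # low end of the combined savings above

print(f"Payback period: {unit_cost / monthly_savings:.0f} months")  # -> 6
```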

Note: Commercial Internet links supported by NetEqualizer include T1, E1, DS3, OC3, T3, fiber, 1 gigabit and more.

Related Articles

How Much YouTube Can the Internet Handle?


By Art Reisman, CTO, APconnections (www.netequalizer.com)


As the Internet continues to grow and true speeds become higher, video sites like YouTube are taking advantage of these fatter pipes. However, unlike the peer-to-peer traffic of several years ago (which seems to be abating), YouTube videos don’t face the veil of copyright scrutiny cast upon p2p, which caused most users to back off.

In our experience, there are trade-offs associated with the advancements in technology that have come with YouTube. From measurements done in our NetEqualizer laboratories, a typical normal-quality YouTube video needs about 240 kbps sustained over the 10-minute run time of the video. The newer high-definition videos run at a rate at least twice that.

Many of the rural ISPs that we at NetEqualizer support with our bandwidth shaping and control equipment have contention ratios of about 300 users per 10-megabit link. This seems to be the ratio point where these small businesses can turn a profit. Given this contention ratio, if 40 customers simultaneously run YouTube, the link will be exhausted and all 300 customers will be wishing they had their dial-up back. At last check, YouTube traffic accounted for 10 percent of all Internet traffic. If left completely unregulated, a typical rural ISP could already find itself on the brink of saturation from normal YouTube usage. With tier-1 providers in major metro areas there is usually more bandwidth, but with that comes higher expectations of service, and hence some saturation is inevitable.
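For readers who want to check the arithmetic, the short sketch below reproduces it in Python using the figures quoted in this article:

```python
# How many simultaneous YouTube viewers fill a shared link?
# Figures are the ones quoted in this article.

link_kbps = 10_000    # 10-megabit link
stream_kbps = 240     # typical standard-quality YouTube stream
subscribers = 300     # rural ISP contention ratio cited above

max_streams = link_kbps // stream_kbps
print(max_streams, "simultaneous streams saturate the link")            # ~41
print(f"about {100 * max_streams / subscribers:.0f}% of subscribers")   # ~14%
```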

If you believe there is a conspiracy, or that ISPs are not supposed to profit as they take risk and operate in a market economy, you are entitled to your opinion, but we are dealing with reality. And there will always be tension between users and their providers, much the same as there is with government funds and highway congestion. 

The fact is, all ISPs have a fixed amount of bandwidth they can deliver, and when data flows exceed their current capacity, they are forced to implement some form of passive constraint. Without constraints, many networks would lock up completely. This is no different than a city restricting water usage when reservoirs are low. Water restrictions are well understood by the populace, and yet somehow bandwidth allocations and restrictions are perceived as evil. I believe this misconception is simply due to the fact that bandwidth is so dynamic; if there were a giant reservoir of bandwidth pooled up in the mountains where you could see the resource slowly become depleted, the problem could be more easily visualized.

The best compromise offered, and the only compromise that is not intrusive, is bandwidth rationing at peak hours when needed. Without rationing, a network will fall into gridlock, in which case not only do the YouTube videos come to a halt, but so do e-mail, chat, VoIP and other less intensive applications.

There is some good news: there are alternative ways to watch YouTube videos.

We noticed during our testing that YouTube attempts to play back video as a real-time feed, like watching live TV. When you go directly to YouTube to watch a video, the site and your PC immediately start the video, and the quality becomes dependent on having that 240 kbps. If your provider’s speed dips below this level, your video will begin to stall, which is very annoying. However, if you are willing to wait a few seconds, there are tools out there that will play back YouTube videos for you in non-real time.

Buffering Tools

They accomplish this by pre-buffering before the video starts playing. We have not reviewed any of these tools, so do your research; we suggest you Google “YouTube buffering tools” to see what is out there. Not only do these tools smooth out YouTube playback during peak times or on slower connections, but they also help balance the load on the network during peak times.
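Conceptually, such tools do something like the hypothetical sketch below: fetch the whole file to local storage first, then play it from disk so momentary dips in connection speed no longer cause stalls. The URL is a placeholder; real tools handle locating the actual video stream.

```python
import shutil
import urllib.request

# Generic pre-buffering sketch: download the whole media file before
# playback instead of streaming it in real time. The URL below is a
# placeholder; real tools handle locating the actual video stream.

def prebuffer(video_url, local_path):
    with urllib.request.urlopen(video_url) as src, open(local_path, "wb") as dst:
        shutil.copyfileobj(src, dst)   # slow or uneven download speed is fine
    return local_path                  # hand this file to any local player

# prebuffer("http://example.com/some-video.flv", "buffered-video.flv")
```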

Bio: Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to ISPs, universities, libraries, mining camps and any organization where groups of users must share their Internet resources equitably. This article is intended as an objective educational look at how consumers and ISPs can live in harmony with the explosion of YouTube video.