NetEqualizer News: March 2011


NetEqualizer News March 2011
Enjoy another issue of NetEqualizer News. This month, we release details about our upcoming NetEqualizer Tech Seminar and introduce our innovative new load-generating technology. As always, feel free to pass this along to others who might be interested in NetEqualizer News.
In This Issue:
:: NetEqualizer Is Coming To Southern California
:: Best Of The Blog
:: YouTube Caching Testing Leads To Load Generator Breakthrough
Join Our Mailing List!
Our Website | Contact Us | Live NetEqualizer Demo | NetEqualizer Price List

NetEqualizer Is Coming To Southern California
Will you be in Southern California this March? If so, be sure not to miss the next complimentary NetEqualizer Technical Seminar at Biola University.

Join Biola University Director of IT Operations Scott Himes and APconnections CTO Art Reisman on March 22 to discuss network optimization and get a first-hand look at the NetEqualizer technology. Whether you’re an existing customer or just starting to think about bandwidth shaping, come learn more about the NetEqualizer’s capabilities, get your questions answered, and share your experiences with other users.

We would love to see you in sunny Southern California! Please reserve a space by registering today. Click here for details and to register.

Best Of The Blog

The Dark Side Of Net Neutrality

Net neutrality, however idyllic in principle, comes with a price. The following article was written to shed some light on the big money behind the propaganda of net neutrality. It may change your views, but at the very least it will peel back one more layer of the onion that is the issue of net neutrality.

First, an analogy to set the stage:

I live in a neighborhood that equally shares a local community water system among 60 residential members. Nobody is metered. Through a mostly verbal agreement, all users try to keep our usage to a minimum. This requires us to be very water conscious, especially in the summer months when the main storage tanks need time to recharge overnight.

Several years ago, one property changed hands, and the new owner started raising organic vegetables using a drip irrigation system. The neighborhood precedent had always been that using water for a small lawn and garden area was an accepted practice, however, the new neighbor expanded his garden to three acres and now sells his produce at the local farmers market. Even with drip irrigation, his water consumption is likely well beyond the rest of the neighborhood combined.

You can see where I am going with this. Based on this scenario, it’s obvious that an objective observer would conclude that this neighbor should pay an additional premium – especially when you consider he is exploiting the community water for a commercial gain.

The Internet, much like our neighborhood example, was originally a group of cooperating parties (educational and government institutions) that connected their networks in an effort to easily share information. As the Internet spread away from government institutions, last-mile carriers such as cable and phone companies invested heavily in infrastructure.

To keep reading, click here.

YouTube Caching Testing Leads To Load Generator Breakthrough

We continue to have great success with the NetEqualizer YouTube caching beta units we released in January and are very close to a full release. However, to ensure the best possible results in our upcoming general release, we’re still fine-tuning some of the technology’s details.

While we obviously would like to release the YouTube caching feature as soon as we can, we’ve had some significant breakthroughs in the testing process itself. In our mission to provide as efficient and effective a product as possible, we’ve developed an entirely new load simulator that’s opening new doors when it comes to pre-market testing.

Our existing class of load generators is very good at creating a heavy load and controlling it precisely, but in order to validate a caching system, we needed a different approach. We needed a load simulator that could reproduce the variations of live Internet traffic. For example, to ensure a stable caching system, you must take the following into consideration:

  • A caching proxy must perform quite a large number of DNS look-ups
  • It must also check tags for changes in content for cached Web pages
  • It must facilitate the delivery of cached data and know when to update the cache
  • The Squid process requires a significant chunk of CPU and memory resources
  • For YouTube integration, the Squid caching server must also strip some URL tags on YouTube files on the fly

To answer this challenge, and to provide the most effective caching feature, we’ve spent the past few months developing a custom load generator. Our simulation lab has a full one-gigabit connection to the Internet, along with a set of servers that can simulate thousands of users surfing the Internet simultaneously. We can also queue up a set of YouTube users vying for live video from the cache and the Internet. Lastly, we put a traditional point-to-point FTP and UDP load across the NetEqualizer using our traditional load generator.

Once our custom load generator was in place, we were able to run various scenarios that our technology might encounter in a live network setting. Our testing exposed some common, and not so common, issues with YouTube caching and we were able to correct them. This kind of analysis is not possible on a live commercial network, as experimenting and tuning requires deliberate outages. We also now have the ability to re-create a customer problem and develop actual Squid source code patches should the need arise.
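To give a feel for why this kind of simulation matters, here is a small, hypothetical sketch (not our actual test harness) of one question a caching load simulator must answer: how often does a realistic population of viewers actually hit the cache? The popularity curve and cache policy below are illustrative assumptions.

```python
import random
from itertools import accumulate

def simulate_cache(num_requests, num_videos, cache_size, seed=42):
    """Simulate user requests against an LRU-style video cache.

    Video popularity follows a Zipf-like curve, which is what makes
    caching a small slice of a large library worthwhile.
    """
    rng = random.Random(seed)
    # Zipf-like weights: video i is requested with weight 1/(i+1)
    cum_weights = list(accumulate(1.0 / (i + 1) for i in range(num_videos)))
    cache = []  # most recently used at the end
    hits = 0
    for _ in range(num_requests):
        video = rng.choices(range(num_videos), cum_weights=cum_weights)[0]
        if video in cache:
            hits += 1
            cache.remove(video)  # refresh recency on a hit
        elif len(cache) >= cache_size:
            cache.pop(0)         # evict the least recently used video
        cache.append(video)
    return hits / num_requests

# Caching 5% of a 10,000-video library still serves a large share
# of requests from cache under a skewed popularity curve.
ratio = simulate_cache(num_requests=20000, num_videos=10000, cache_size=500)
print(f"cache hit ratio: {ratio:.2f}")
```

Running scenarios like this against the real appliance is where issues such as DNS load and cache-refresh behavior show up.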

For more information on the upcoming NetEqualizer YouTube caching feature, or on our load generator, contact us by emailing admin, calling 1-800-918-2763, or see some of our past blog articles:

Photo Of The Month

Leadville
Leadville, Colorado

Each month, we’ll post one of our favorite photos we’ve taken recently. Of course, feel free to submit photos of your own. They may just end up in our next newsletter!

NetEqualizer Testing and Integration of Squid Caching Server


Editor’s Note: Due to the many variables involved with tuning and supporting Squid Caching Integration, this feature will require an additional upfront support charge. It will also require at minimum a NE3000 platform. Contact sales@netequalizer.com for specific details.

In our upcoming 5.0 release, the main enhancement will be the ability to implement YouTube caching from a NetEqualizer. Since a Squid caching server can potentially be implemented separately by your IT department, the question often comes up: what is the difference between using the embedded NetEqualizer integration and running the caching server stand-alone on a network?

Here are a few of the key reasons why the NetEqualizer caching integration provides the most efficient and effective setup:

1. Communication – For proper performance, it’s important that the NetEqualizer know when a file is coming from cache and when it’s coming from the Internet. It would be counterproductive to have data from cache shaped in any way. To accomplish this, we wrote a new utility, aptly named “cache helper,” to advise the NetEqualizer of current connections originating from cache. This allows the NetEqualizer to permit cached traffic to pass without being shaped.
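The division of labor can be pictured with a short sketch. The addresses and names here are hypothetical, and the real cache helper is internal to the NetEqualizer; this only illustrates the idea of exempting cache-originated connections from shaping.

```python
# Hypothetical sketch of the "cache helper" idea: connections whose
# source is the local cache are exempted from shaping, while
# everything else remains subject to the normal equalizing rules.

CACHE_IP = "192.168.1.10"   # assumed address of the local Squid cache

def partition_connections(connections, cache_ip=CACHE_IP):
    """Split (src_ip, dst_ip, bytes_per_sec) tuples into cached
    (unshaped) and Internet (shaped) traffic."""
    cached, shaped = [], []
    for conn in connections:
        src_ip, _dst_ip, _rate = conn
        (cached if src_ip == cache_ip else shaped).append(conn)
    return cached, shaped

conns = [
    ("192.168.1.10", "10.0.0.5", 4_000_000),  # video served from cache
    ("74.125.0.1",   "10.0.0.5", 1_500_000),  # video fetched from the Internet
    ("74.125.0.2",   "10.0.0.9",   200_000),  # ordinary web traffic
]
cached, shaped = partition_connections(conns)
```

The key design point is that the shaper never penalizes traffic that costs the Internet link nothing.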

2. Creative Routing – It’s also important that the NetEqualizer be able to see the public IP addresses of traffic originating on the Internet. However, using a stand-alone caching server prevents this. For example, if you plug a caching server into your network in front of a NetEqualizer (between the NetEqualizer and your users), all port 80 traffic would appear to come from the proxy server’s IP address. Cached or not, it would appear this way in a default setup. The NetEqualizer shaping rules would not be of much use in this mode as they would think all of the Internet traffic was originating from a single server. Without going into details, we have developed a set of special routing rules to overcome this limitation in our implementation.
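A toy example makes the problem concrete: once flows are aggregated behind the proxy’s address, a per-host view of the traffic collapses to a single entry. The addresses below are invented for illustration.

```python
from collections import defaultdict

def bytes_per_source(flows):
    """Aggregate observed rates by source IP, as a per-host shaper would."""
    totals = defaultdict(int)
    for src, rate in flows:
        totals[src] += rate
    return dict(totals)

# Without a proxy, the shaper sees each remote server separately...
direct = [("74.125.0.1", 900), ("98.137.0.2", 400), ("151.101.0.3", 700)]

# ...but behind a stand-alone proxy, every flow carries the proxy's IP.
PROXY_IP = "192.168.1.10"   # hypothetical proxy address
proxied = [(PROXY_IP, rate) for _src, rate in direct]

print(bytes_per_source(direct))   # three distinct sources
print(bytes_per_source(proxied))  # one source carrying all the traffic
```

With every flow attributed to one address, per-host fairness rules have nothing meaningful to act on, which is the limitation the special routing rules work around.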

3. Advanced Testing and Validation – Squid proxy servers by themselves are very finicky. Time and time again, we hear about implementations where a customer installed a proxy server only to have it cause more problems than it solved, ultimately slowing down the network. To ensure a simple yet tight implementation, we ran a series of scenarios under different conditions. This required us to develop a whole new methodology for testing network loads through the NetEqualizer. Our current class of load generators is very good at creating a heavy load and controlling it precisely, but in order to validate a caching system, we needed a different approach. We needed a load simulator that could reproduce the variations of live Internet traffic. For example, to ensure a stable caching system, you must take the following into consideration:

  • A caching proxy must perform quite a large number of DNS look-ups
  • It must also check tags for changes in content for cached Web pages
  • It must facilitate the delivery of cached data and know when to update the cache
  • The Squid process requires a significant chunk of CPU and memory resources
  • For YouTube integration, the Squid caching server must also strip some URL tags on YouTube files on the fly

To answer this challenge, and to provide the most effective caching feature, we’ve spent the past few months developing a custom load generator. Our simulation lab has a full one-gigabit connection to the Internet, along with a set of servers that can simulate thousands of users surfing the Internet simultaneously. We can also queue up a set of YouTube users vying for live video from the cache and the Internet. Lastly, we put a traditional point-to-point FTP and UDP load across the NetEqualizer using our traditional load generator.

Once our custom load generator was in place, we were able to run various scenarios that our technology might encounter in a live network setting. Our testing exposed some common, and not so common, issues with YouTube caching and we were able to correct them. This kind of analysis is not possible on a live commercial network, as experimenting and tuning requires deliberate outages. We also now have the ability to re-create a customer problem and develop actual Squid source code patches should the need arise.

The Dark Side of Net Neutrality


Net neutrality, however idyllic in principle, comes with a price. The following article was written to shed some light on the big money behind the propaganda of net neutrality. It may change your views, but at the very least it will peel back one more layer of the onion that is the issue of net neutrality.

First, an analogy to set the stage:

I live in a neighborhood that equally shares a local community water system among 60 residential members. Nobody is metered. Through a mostly verbal agreement, all users try to keep our usage to a minimum. This requires us to be very water conscious, especially in the summer months when the main storage tanks need time to recharge overnight.

Several years ago, one property changed hands, and the new owner started raising organic vegetables using a drip irrigation system. The neighborhood precedent had always been that using water for a small lawn and garden area was an accepted practice, however, the new neighbor expanded his garden to three acres and now sells his produce at the local farmers market. Even with drip irrigation, his water consumption is likely well beyond the rest of the neighborhood combined.

You can see where I am going with this. Based on this scenario, it’s obvious that an objective observer would conclude that this neighbor should pay an additional premium — especially when you consider he is exploiting the community water for a commercial gain.

The Internet, much like our neighborhood example, was originally a group of cooperating parties (educational and government institutions) that connected their networks in an effort to easily share information. There was never any intention of charging for access amongst members. As the Internet spread away from government institutions, last-mile carriers such as cable and phone companies invested heavily in infrastructure. Their business plans assumed that all parties would continue to use the Internet with lightweight content such as Web pages, e-mails, and the occasional larger document or picture.

In the latter part of 2007, a few companies, with substantial data content models, decided to take advantage of the low delivery fees for movies and music by serving them up over the Internet. Prior to their new-found Internet delivery model, content providers had to cover the distribution costs for the physical delivery of records, video cassettes and eventually discs.

As of 2010, Internet delivery costs associated with the distribution of media had plummeted to near zero. It seems that consumers have pre-paid their delivery cost when they paid their monthly Internet bill. Everybody should be happy, right?

The problem is, as per our analogy with the community water system, we have a few commercial operators jamming the pipes with content, and jammed pipes have a cost. Upgrading a full Internet pipe at any level requires a major investment, and providers are already heavily leveraged on their existing infrastructure. Thus, the Internet companies that carry the data need to pass this cost on to somebody else.

As a result of these conflicting interests, we now have a pissing match between carriers and content providers in which the latter are playing the “neutrality card” and the former are lobbying lawmakers to grant them special favors in order to govern ways to limit access.

Therefore, whether it be water, the Internet or grazing on public lands, absolute neutrality can be problematic — especially when money is involved. While the concept of neutrality certainly has the overwhelming support of consumer sentiment, be aware that there are, and always will be, entities exploiting the system.

Related Articles

For more on Netflix, see Level 3-Netflix Expose their Hidden Agenda.

Network Redundancy Must Start With Your Provider


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

The chances of being killed by a shark are 1 in 264 million. The chances of being mauled by a bear on your weekend outing in the woods are even lower. Fear is a strange emotion rooted deep within our brains. Despite a rational understanding of risks, people are programmed to lose sleep and exhaust their adrenaline supply worrying about events that will never happen.

It is this same lack of rational risk evaluation that makes it possible for vendors to sell unneeded equipment to otherwise budget-conscious businesses. The current in-vogue, unwarranted fears used to move network equipment are IPv6 preparedness and equipment redundancy.

Equipment vendors tend to push customers toward internal redundant hardware solutions, but not because they have your best interest in mind. If they did, they would first encourage you to get a redundant link to your ISP.

Twenty years of practical hands-on experience tells us that your Internet router’s chance of catastrophic failure is about 1 percent over a three-year period. On the other hand, your Internet provider has a 95-percent chance of having a full-day outage during that same three-year period.

If you are truly worried about a connectivity failure into your business, you MUST source two separate paths to the Internet to have any significant reduction in risk. Requiring fail-over on individual pieces of equipment, without first securing complete redundancy in your network from your provider, is like putting a band-aid on your finger while bleeding from your jugular vein.
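A little back-of-envelope arithmetic, using the article’s own figures and assuming the two providers fail independently, shows how much a second path buys you:

```python
DAYS = 3 * 365          # the three-year window from the article
P_ANY_OUTAGE = 0.95     # chance of at least one full-day ISP outage

# Daily outage probability implied by that figure:
#   1 - (1 - p)**DAYS = P_ANY_OUTAGE
p_daily = 1 - (1 - P_ANY_OUTAGE) ** (1 / DAYS)

# With two independent providers, service is lost only when both
# happen to be down on the same day.
p_both_daily = p_daily ** 2
p_any_dual_outage = 1 - (1 - p_both_daily) ** DAYS

print(f"daily outage chance per provider: {p_daily:.4f}")
print(f"chance of a simultaneous outage in 3 years: {p_any_dual_outage:.4f}")
```

Under these assumptions, a roughly 95-percent chance of losing connectivity drops to well under 1 percent — a far bigger win than hardening any single box against its 1-percent failure rate.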

Some other useful tips on making your network more reliable include:

Do not turn on unneeded bells and whistles on your router and firewall equipment.

Many router and device failures are not absolute. Equipment will get cranky, slow, or belligerent based on human error or system bugs. Although system bugs are rare when these devices are used in the default set-up, it seems turning on bells and whistles is often an irresistible enticement for a tech. The more features you turn on, the less standard your configuration becomes, and all too often the mission of the device is pushed well beyond its original intent – routers running billing systems, for example.

These “soft” failure situations are common, and the fail-over mechanism likely will not kick in, even though the device is sick and not passing traffic as intended. I have witnessed this type of failure first-hand at major customer installations. The failure itself is bad enough, but the real embarrassment comes from having to tell your customer that the fail-over investment they purchased is useless in a real-life situation. Fail-over systems are designed with the idea that the equipment they route around will die and go belly up like a pheasant shot point-blank with a 12-gauge shotgun. In reality, for every “hard” failure, there are 100 system-related lock ups where equipment sputters and chokes but does not completely die.

Start with a high-quality Internet line.

T1 lines, although somewhat expensive, are based on telephone technology that has long been hardened and paid for. While they do cost a bit more than other solutions, they are well-engineered to your doorstep.

Make sure all your devices have good UPS sources and surge protectors.

Consider this when purchasing redundant equipment: what is the cost of manually moving a wire to bypass a failed piece of equipment?

Look at this option before purchasing redundancy options on a single point of failure. We often see customers asking for redundant fail-over embedded in their equipment. This tends to be a strategy of purchasing hardware such as routers, firewalls, bandwidth shapers, and access points that provide a “fail open” mode (meaning traffic will still pass through the device) should they catastrophically fail. At face value, this seems like a good idea to cover your bases. Most of these devices embed a failover switch internally in their hardware. The cost of this technology can add about $3,000 to the price of the unit.

If equipment is vital to your operation, you’ll need a spare unit on hand in case of failure. If the equipment is optional or used occasionally, then take it out of your network.

Again, these are just some basic tips, and your final Internet redundancy plan will ultimately depend on your specific circumstances. But, these tips and questions should put you on your way to a decision based on facts rather than one based on unnecessary fears and concerns.

What Is Deep Packet Inspection and Why the Controversy?


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.

Article Updated March 2012

As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.

Exactly what is deep packet inspection?

All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted.

The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet.

When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be).

Products that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data are packet shaping, layer-7 traffic shaping, etc.
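The split between address and payload is easy to see in code. This sketch hand-builds a minimal IPv4 packet and separates the part a router needs (the header) from the part only a DPI device reads (the payload); the addresses are made up for illustration.

```python
import struct

def split_ip_packet(packet: bytes):
    """Separate an IPv4 packet into address information and payload.

    A router only needs the header; a deep packet inspection device
    reads the payload as well.
    """
    version_ihl = packet[0]
    header_len = (version_ihl & 0x0F) * 4   # IHL field is in 32-bit words
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    payload = packet[header_len:]
    return src, dst, payload

# A minimal hand-built IPv4 packet: 20-byte header plus a text payload.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20 + 11, 0, 0, 64, 6, 0,   # version/IHL ... TTL, proto=TCP, checksum
    bytes([10, 0, 0, 5]),                # source address
    bytes([74, 125, 0, 1]),              # destination address
)
src, dst, payload = split_ip_packet(header + b"hello world")
print(src, dst, payload)   # routing needs src/dst; DPI reads the payload
```

Everything a plain router does happens on the first 20 bytes; DPI is defined by looking past them.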

How is deep packet inspection related to net neutrality?

Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.

Why do some Internet providers use deep packet inspection devices?

There are several reasons:

1) Targeted advertising — If a provider knows what you are reading, they can display targeted advertising on the pages they control, such as your login screen or e-mail account.

2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem less desirable, such as BitTorrent and other forms of peer-to-peer. BitTorrent traffic can overwhelm a network with volume. By detecting and redirecting the BitTorrent traffic, or slowing it down, a provider can alleviate congestion.

3) Block offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.

4) Government spying — In the case of Iran (and to some extent China), DPI is used to keep tabs on the local population.
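As an illustration of reason 2 above, BitTorrent detection is often done by matching a well-known signature: the BitTorrent handshake begins with the byte 0x13 followed by the literal string “BitTorrent protocol”. A minimal classifier sketch (real DPI engines use many such signatures plus behavioral heuristics):

```python
# The BitTorrent handshake starts with the byte 0x13 followed by the
# string "BitTorrent protocol" -- a classic DPI signature.
BT_SIGNATURE = b"\x13BitTorrent protocol"

def looks_like_bittorrent(payload: bytes) -> bool:
    """Return True if a TCP payload matches the BitTorrent handshake."""
    return payload.startswith(BT_SIGNATURE)

print(looks_like_bittorrent(BT_SIGNATURE + b"\x00" * 8))   # True
print(looks_like_bittorrent(b"GET /index.html HTTP/1.1"))  # False
```

Note that making this decision at all requires reading the payload, which is exactly why signature-based shaping counts as deep packet inspection.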

When is it appropriate to use deep packet inspection?

1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.

2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.

3) Intrusion detection and prevention — It is one thing to be acting as an ISP and to eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. In a private home, it is within your rights to look through your peephole and not let shady characters inside. Likewise, in a private business it is a good idea to use deep packet inspection to block unwanted intruders from your network. Blocking bad guys before they break into and damage your network is perfectly acceptable.

4) Spam filtering — Most consumers are very happy to have their ISP or e-mail provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don’t read it, but computer scanners do), the motives are understood. What consumers may not realize is that their e-mail provider is also reading everything they do in order to serve targeted advertising.

Does content filtering use deep packet inspection?

For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as the network needs to look them up anyway. We have only encountered content filters at private institutions that are within their rights.
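A sketch of the difference: URL-level filtering needs only the hostname, never the packet payload. The blocklist entries below are invented for illustration.

```python
from urllib.parse import urlsplit

# Hypothetical blocklist of hostnames (not real sites).
BLOCKLIST = {"example-gambling.test", "example-malware.test"}

def is_blocked(url: str) -> bool:
    """URL-level filtering: only the hostname is examined,
    never the packet payload."""
    host = urlsplit(url).hostname or ""
    return host in BLOCKLIST

print(is_blocked("http://example-gambling.test/slots"))  # True
print(is_blocked("http://example.com/news"))             # False
```

Because the decision rests entirely on the destination name, this kind of filter sits well short of reading a user’s traffic.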

What about spam filtering, does that use Deep Packet Inspection?

Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering most people have opted in at one point or another, so it is generally done with permission.

What is all the fuss about?

It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also the many issues that have been raised with the practice.

For example, this is an excerpt from a recent E-Commerce Times article:

Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.

Paul Stephens, director of policy and advocacy for the Privacy Rights Clearinghouse, as quoted in the E-Commerce Times on November 14, 2008. Read the full article here.

Recently, Comcast had their hand slapped for re-directing Bittorrent traffic:

Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees.

— Digital Daily, March 10, 2008. Read the full article here.

Later in 2008, the FCC came down hard on Comcast.

In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic.

By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections.

Wired.com, August 1, 2008. Read the full article here.

To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.

University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy.

Wired.com, May 22, 2008. Read the full article here.

However, it looks like things are going the other way in the U.K., as Britain’s Virgin Media has announced they are dumping net neutrality in favor of targeting BitTorrent.

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The Register, December 16, 2008. Read the full article here.

Canadian ISPs confess en masse to deep packet inspection in January 2009.

With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion.

Tech Spot, January 21, 2009. Read the full article here.

In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.

In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have.

Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.

The controversy continues in the U.S. as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.

7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand.

Kyle Brady, July 27, 2009. Read the full article here.

[February 2011 Update] The Egyptian government uses DPI to filter elements of their Internet traffic, and this act in itself has become a news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering piece on Narus, the company that makes the DPI technology and sold it to the Egyptians.

While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Pros and Cons of Using Your Router as a Bandwidth Controller


So, you already have a router in your network, and rather than take on the expense of another piece of equipment, you want to double up on functionality by implementing your bandwidth control within your router. While this is sound logic and may be your best decision, as always, there are some other factors to consider.

Here are a few things to think about:

1. Routers are optimized to move packets from one network to another with utmost efficiency. To perform this function, there is often minimal introspection of the data, meaning the router does one table look-up and sends the data on its way. However, as soon as you start doing some form of bandwidth control, your router must perform a higher level of analysis on the data. Additional analysis can overwhelm a router’s CPU without warning. Implementing non-routing features, such as protocol sniffing, can create conditions that are much more complex than the original router mission. For simple rate limiting there should be no problem, but if you get into more complex bandwidth control, you can overwhelm the processing power that your router was designed for.

2. The more complex the system, the more likely it is to lock up. For example, that old analog desktop phone set probably never once crashed. It was a simple device and hence extremely reliable. On the other hand, when you load up an IP phone on your Windows PC, you reduce reliability even though the function is the same as the old phone system. The problem is that your Windows PC is an unreliable platform. It runs out of memory, and buggy applications lock it up.

This is not news to a Windows PC owner, but the complexity of a mission will have the same effect on your once-reliable router. So, when you start loading up your router with additional missions, it is increasingly more likely that it will become unstable and lock up. Worse yet, you might cause a subtle network problem (intermittent slowness, etc.) that is less likely to be identified and fixed. When you combine a bandwidth controller/router/firewall together, it can become nearly impossible to isolate problems.

3. Routing with TOS bits? Setting priority on your router generally only works when you control both ends of the link, which is often not the case. Products such as the NetEqualizer, by contrast, can supply priority for VoIP in both directions on your Internet link.

4. A stand-alone bandwidth controller can be moved around your network, or removed entirely, without affecting routing. This is possible because a bandwidth controller is generally not a routable device but a transparent bridge. When your router also handles other functions, including bandwidth control, rearranging your network may not be an option, or at least becomes much more difficult.

These four points don’t necessarily mean using a router for bandwidth control isn’t the right option for you. However, as is the case when setting up any network, the right choice ultimately depends on your individual needs. Taking these points into consideration should make your final decision on routing and bandwidth control a little easier.

NetEqualizer Brand Becoming an Eponym for Fairness and Net Neutrality Techniques


An eponym is a term that describes from what or whom something derived its name. A proprietary eponym, then, is a brand name, product, or service mark that has fallen into general use.

Examples of common brand eponyms include Xerox, Google, and Band-Aid. Each of these brands has become synonymous with its entire class of product, regardless of the actual brand in use.

Over the past seven years, we have spent much of our time explaining the NetEqualizer methods to network administrators around the country. There is now mounting evidence that the NetEqualizer brand is taking on a broader societal connotation: NetEqualizer is in the early stages of becoming an eponym for the class of bandwidth shapers that balance network loads and ensure fairness and neutrality. As evidence, we cite the following excerpts taken from various blogs and publications around the world.

From Dennis OReilly <Dennis.OReilly@ubc.ca> posted on ResNet Forums

These days the only way to classify encrypted streams is through behavioral analysis.  ….  Thus, approaches like the NetEqualizer or script-based ‘penalty box’ approaches are better.

From a WISP tutorial by Butch Evans

About 2 months ago, I began experimenting with an approach to QOS that mimics much of the functionality of the NetEqualizer (http://www.netequalizer.com) product line.

From TMCnet

Comcast Announces Traffic Shaping Techniques like APconnections’ NetEqualizer…

From TechNewsWorld

It actually sounds a lot what NetEqualizer (www.netequalizer.com) does and most people are OK with it…..

From Network World

NetEqualizer looks at every connection on the network and compare it to the overall trunk size to determine how to eliminate congestion on the links

From the StarOS Forum

If you’d really like to have your own netequalizer-like system then my advice…..

From VoIP-News

Has anyone else tried Netequalizer or something like it to help with VoIP QoS? It’s worked well so far for us and seems to be an effective alternative for networks with several users…..

NetEqualizer News: January 2011


NetEqualizer
January 2011 NetEqualizer News

YouTube Caching Feature Beta Trial; 2011 NetEqualizer Updates Released
Greetings!

Enjoy another issue of the NetEqualizer Newsletter. This month, we release details of our NetEqualizer YouTube caching feature beta trial and introduce the 2011 NetEqualizer product updates. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

In this issue:

  • NetEqualizer YouTube Caching Beta Trial Now Available!
  • NetEqualizer YouTube Caching General Release Coming Soon
  • 2011 NetEqualizer Product And Pricing Updates
  • Best Of The Blog
NetEqualizer YouTube Caching Beta Trial Now Available!
YouTube

Over the past few months, we’ve kept you updated on our upcoming NetEqualizer YouTube caching feature. We’re now excited to announce the availability of a beta testing trial. While we have sanity- and load-tested our YouTube caching systems, and there is very little risk of lockup or failure, we’re offering the beta trial to obtain more real-world data from live networks. This will help us set expectations for our general release.

The beta trial is available to all existing NetEqualizer customers and will be offered with complimentary set-up support. (A $500 fee will apply for the general release.) However, to qualify, interested trial participants must meet the following requirements:

  • Must have a current NetEqualizer Software Subscription (NSS)
  • Must have the ability to provide remote access to the NetEqualizer
  • Trial requires a memory upgrade to eight gigabytes
  • Must be using a two-Ethernet port system (not three)
  • NetEqualizer must have Internet access (We assume you will handle the security and access requirements through your firewall.)
  • NetEqualizer requires a flash upgrade (no charge for beta customers)

There are only a limited number of spaces in the trial available, so be sure to let us know if you’re interested at 1-800-918-2763 or admin@apconnections.net. For more information on YouTube caching, visit our new FAQ article on the NetEqualizer News Blog.

NetEqualizer YouTube Caching General Release Coming Soon

For those of you who plan on waiting for the general release of the NetEqualizer YouTube caching feature, here are some additional details to give you a better idea of what to expect. For best results, we’re going to strongly suggest you have a NE3000-class or above NetEqualizer system. However, we will honor caching server systems on NE2000-class NetEqualizers purchased in the last three months.

A current NetEqualizer Software Subscription will also be needed to install the feature along with a one-time $500 set-up charge to cover the installation and check-up of your caching system.

YouTube caching will require a minimum of eight gigabytes of memory. Systems with only four gigabytes can still be used by adding a highly reliable disk drive device, which is also available to boost performance on the eight-gigabyte units.

As is, eight gigabytes of memory will support between 150 and 250 YouTube videos, depending on their average length. We estimate that 10 to 20 percent of your YouTube traffic will come from the cache, which will make a significant impact on average video speed and quality, as well as on overall bandwidth usage. The disk drive version is estimated to cache approximately 4,000 of the most popular YouTube videos, likely increasing the hit rate to nearly 50 percent.

In addition to improved video quality and access for the customer, the YouTube caching feature will offer administrators and ISPs an even more effective way to optimize their networks while respecting user privacy and net neutrality. For more information on the benefits of caching for net neutrality, click here.

To learn more about the upcoming NetEqualizer YouTube caching feature general release, contact us at 1-800-918-2763 or sales@apconnections.net.

2011 NetEqualizer Product And Pricing Updates
NetEqualizer

As we begin a new year, we’re making some minor changes to the NetEqualizer product line. To start, we’re introducing the new-look NetEqualizer console (see unit to left). Of course, the quality of our product hasn’t changed, but we thought it was time for a little updating. Also, starting this month, we’ll be offering both the NE3000-100 and NE3000-150 NetEqualizer systems. This will allow customers to better customize the NetEqualizer to their individual needs. Furthermore, we’ll be changing the NE2000-45 NetEqualizer model to the NE2000-50.

Finally, in addition to these changes, we’re also releasing our 2011 NetEqualizer price list. The full list can be viewed here without registration for a limited time. Current quotes will not be affected by the pricing updates and will be honored for 90 days from the date the quote was originally given.

Best Of The Blog
How to Determine a Comprehensive ROI for Bandwidth Shaping Products

In the past, we’ve published several articles on our blog to help customers better understand the NetEqualizer’s potential return on investment (ROI). Obviously, we do this because we think we offer a compelling ROI proposition for most bandwidth-shaping decisions. Why? Primarily because we provide the benefits of bandwidth shaping at a very low cost – both initially and even more so over time. (Click here for the NetEqualizer ROI calculator.)

But, we also want to provide potential customers with the questions that need to be considered before a product is purchased, regardless of whether or not the answers lead to the NetEqualizer. With that said, this article will break down these questions, addressing many issues that may not be obvious at first glance, but are nonetheless integral when determining what bandwidth shaping product is best for you.

First, let’s discuss basic ROI. As a simple example, if an investment costs $100 and in one year returns $120, the ROI is 20 percent. Simple enough. But what if your investment horizon is five years or longer? It gets a little more complicated, but suffice it to say you would perform a similar calculation for each year while adjusting the returns for time and cost.

The important point is that this is a well-known calculation for evaluating whether one thing is a better investment than another – be it bandwidth shaping products or real estate. Naturally, the best financial decision is the one with the greatest return for the smallest cost.

The hard part is determining what questions to ask in order to accurately determine the ROI. A missed cost or benefit here or there could dramatically alter the outcome, potentially leading to significant unforeseen losses.

For the remainder of this article, I’ll discuss many of the potential costs and returns associated with bandwidth shaping products, with some being more obscure than others. In the end, it should better prepare you to address the most important questions and issues and ultimately lead to a more accurate ROI assessment.

Let’s start by looking at the largest components of bandwidth shaping product “costs” and whether they are one-time or ongoing. We’ll then consider the returns.

Contact Information

email: admin
phone: 303-997-1300

 

Join our mailing list!

How to Determine a Comprehensive ROI for Bandwidth Shaping Products


In the past, we’ve published several articles on our blog to help customers better understand the NetEqualizer’s potential return on investment (ROI). Obviously, we do this because we think we offer a compelling ROI proposition for most bandwidth-shaping decisions. Why? Primarily because we provide the benefits of bandwidth shaping at a very low cost — both initially and even more so over time. (Click here for the NetEqualizer ROI calculator.)

But, we also want to provide potential customers with the questions that need to be considered before a product is purchased, regardless of whether or not the answers lead to the NetEqualizer. With that said, this article will break down these questions, addressing many issues that may not be obvious at first glance, but are nonetheless integral when determining what bandwidth shaping product is best for you.

First, let’s discuss basic ROI. As a simple example, if an investment costs $100 and in one year returns $120, the ROI is 20 percent. Simple enough. But what if your investment horizon is five years or longer? It gets a little more complicated, but suffice it to say you would perform a similar calculation for each year while adjusting the returns for time and cost.
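The arithmetic above can be written down directly. The following sketch (illustrative Python; the discount rate and multi-year figures are hypothetical examples, not recommendations) computes the simple one-year ROI and its multi-year, time-adjusted counterpart:

```python
def simple_roi(cost, value_returned):
    """ROI as a fraction: (return - cost) / cost."""
    return (value_returned - cost) / cost

def discounted_roi(cost, yearly_returns, discount_rate):
    """Multi-year ROI: discount each year's return back to today
    (a net present value) before comparing against the initial cost."""
    npv = sum(r / (1 + discount_rate) ** year
              for year, r in enumerate(yearly_returns, start=1))
    return (npv - cost) / cost

print(simple_roi(100, 120))  # 0.2, i.e. the 20 percent from the example
# Five years of $30 returns on a $100 investment, discounted at 5% per year:
print(discounted_roi(100, [30, 30, 30, 30, 30], 0.05))
```

The same comparison works for any pair of alternatives: compute each candidate's discounted ROI over the same horizon and pick the larger.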

The important point is that this is a well-known calculation for evaluating whether one thing is a better investment than another — be it bandwidth shaping products or real estate. Naturally, the best financial decision is the one with the greatest return for the smallest cost.

The hard part is determining what questions to ask in order to accurately determine the ROI. A missed cost or benefit here or there could dramatically alter the outcome, potentially leading to significant unforeseen losses.

For the remainder of this article, I’ll discuss many of the potential costs and returns associated with bandwidth shaping products, with some being more obscure than others. In the end, it should better prepare you to address the most important questions and issues and ultimately lead to a more accurate ROI assessment.

Let’s start by looking at the largest components of bandwidth shaping product “costs” and whether they are one-time or ongoing. We’ll then consider the returns.

COSTS

  • The initial cost of the tool
    • This is a one-time cost.
  • The cost of vendor support and license updates
    • These are ongoing costs and include monthly and annual licenses for support, training, software updates, library updates, etc…  The difference from vendor to vendor can be significant — especially over the long run.
  • The cost of upgrades within the time horizon of the investment
    • These upgrades can come in several forms. For example, what does it cost to go from a 50Mbps tool to 100Mbps? Can your tool be upgraded, or do you have to buy a whole new one? This can be a one-time cost or it can occur several times. It really depends on the growth of your network, but it’s usually inevitable for networks of any size.
  • The internal (human) cost to support the tool
    • For example, how many man-hours must you spend to maintain the tool, optimize it, and adapt it to your changing network? This can be a considerable “hidden” cost, and it’s generally recurring. It also usually increases over time, as the cost of salaries and benefits tends to go up. Because of that, this is a very important component to quantify in a good ROI analysis. Tools that require little or no ongoing maintenance have a large advantage.
  • Overall impact on the network
    • Does the product add latency or other inefficiencies? Does it create any processing overhead and how much? If the answer is yes, costs such as these will constantly impact your network quality and add up over time.

RETURNS

  • Savings from being able to delay or eliminate buying more bandwidth
    • This could either be a one-time or ongoing return. Even delaying a bandwidth upgrade for six months or a year can be highly valuable.
  • Savings from not losing existing revenue sources
    • How many customers did you not lose because they did not get frustrated with their network/Internet service? This return is ongoing.
  • Ability to generate new revenue
    • How many new customers did you add because of a better-maintained network?  Were you able to generate revenue by adding new higher-value services like a tiered rate structure? This will usually be an ongoing return.
  • Savings from the ability to eliminate or reduce the financial impact of unprofitable customers
    • This is an ongoing savings. Can you convert an unprofitable customer to a profitable one by reducing their negative impact on the network? If not, and they walk, do you care?
  • Avoidance of having to buy additional equipment
    • Were you able to avoid having to “divide and conquer” by buying new access points, splitting VLANs, etc..? This can be a one-time or ongoing return.
  • Savings in the cost of responding to technical support calls
    • How much time was saved by not having to receive an irate customer call, research it and respond back? If this is something you typically deal with on a regular basis, the savings will add up every day, week or month this is avoided.

Overall, these issues are the basic financial components and questions that need to be quantified to make a good ROI analysis. For each business, and each tool, this type of analysis may yield a different answer, but it is important to note that over time there are many more items associated with ongoing costs/savings than those occurring only once. Thus, you must take great care to understand the impact of these for each tool, especially those issues that lead to costs that increase over time.
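The one-time versus ongoing distinction is easy to quantify. As an illustrative sketch (the dollar figures and growth rate below are hypothetical, not vendor pricing), here is how a recurring cost that grows with salaries can overtake a higher up-front price over a five-year horizon:

```python
def total_cost(initial, annual_recurring, annual_growth, years):
    """Total cost of ownership: a one-time price plus recurring costs
    (support, labor) that grow a fixed percentage per year."""
    cost = initial
    yearly = annual_recurring
    for _ in range(years):
        cost += yearly
        yearly *= 1 + annual_growth
    return cost

# Tool A: cheap up front, heavy hands-on maintenance (hypothetical figures).
# Tool B: pricier up front, near-zero ongoing maintenance.
tool_a = total_cost(initial=3_000, annual_recurring=4_000,
                    annual_growth=0.04, years=5)
tool_b = total_cost(initial=10_000, annual_recurring=500,
                    annual_growth=0.04, years=5)
print(round(tool_a), round(tool_b))  # the "cheap" tool costs more over 5 years
```

Running the same comparison with your own figures, for each candidate tool, is exactly the exercise this article recommends.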

NetEqualizer YouTube Caching FAQ


Editor’s Note: This week, we announced the availability of the NetEqualizer YouTube caching feature we first introduced in October. Over the past month, interest and inquiries have been high, so we’ve created the following Q&A to address many of the common questions we’ve received.

This may seem like a silly question, but why is caching advantageous?

The bottleneck most networks deal with is that they have a limited pipe leading out to the larger public Internet cloud. When a user visits a website or accesses content online, data must be transferred to and from the user through this limited pipe, which is usually meant for only average loads (increasing its size can be quite expensive). During busy times, when multiple users are accessing material from the Internet at once, the pipe can become clogged and service slowed. However, if an ISP can keep a cached copy of certain bandwidth-intensive content, such as a popular video, on a server in their local office, this bottleneck can be avoided. The pipe remains open and unclogged and customers are assured their video will always play faster and more smoothly than if they had to go out and re-fetch a copy from the YouTube server on the Internet.
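To make the mechanism concrete, here is a minimal illustrative sketch (plain Python, not the NetEqualizer implementation) of a transparent cache keyed by URL. Note that a hit never touches the limited upstream pipe:

```python
class VideoCache:
    """Illustrative transparent cache: serve a local copy on a hit,
    fetch through the limited upstream pipe only on a miss."""

    def __init__(self, fetch_upstream):
        self.store = {}                  # url -> content held locally
        self.fetch_upstream = fetch_upstream
        self.hits = self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1               # served locally; pipe untouched
            return self.store[url]
        self.misses += 1
        content = self.fetch_upstream(url)  # the expensive trip to the Internet
        self.store[url] = content
        return content

cache = VideoCache(fetch_upstream=lambda url: f"<video bytes for {url}>")
cache.get("youtube.com/watch?v=abc")   # miss: fetched over the pipe
cache.get("youtube.com/watch?v=abc")   # hit: served from the local office
print(cache.hits, cache.misses)        # 1 1
```

The "transparent" part is that users call `get` (in reality, just request the video) exactly as before; the cache decides on their behalf whether the pipe is needed.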

What is the ROI benefit of caching YouTube? How much bandwidth can a provider conserve?

At the time of this writing, we are still in the early stages of our data collection on this subject. What we do know is that YouTube can account for up to 15 percent of Internet traffic. We expect to be able to cache at least the 300 most popular YouTube videos with this initial release, and perhaps more when we release the mass-storage version of our caching server in the future. Considering this, realistic estimates put the savings in bandwidth overhead somewhere between 5 and 15 percent. But these are only the immediate benefits in terms of bandwidth savings. The long-term customer-satisfaction benefit is that many more YouTube videos will play without interruption on a crowded network (during the busy hour) than before. ROI, therefore, shouldn’t be measured in bandwidth savings alone.

Why is it just the YouTube caching feature? Why not cache everything?

There are a couple of good reasons not to cache everything.

First, there are quite a few Web pages that are dynamically generated or change quite often, and a caching mechanism relies on content being relatively static. This allows it to grab content from the Internet and store it locally for future use without the content changing. As mentioned, when users/clients visit the specific Web pages that have been stored, they are directed to the locally saved content rather than over the Internet and to the original website. Therefore, caching obviously wouldn’t be possible for pages that are constantly changing. Caching dynamic content can cause all kinds of issues — especially with merchant and secure sites where each page is custom-generated for the client.

Second, a caching server can realistically only store a subset of data that it accesses. Yes, data storage is getting less expensive every year, but a local store is finite in size and will eventually fill up. So, when making a decision on what to cache and what not to cache, YouTube, being both popular and bandwidth intensive, was the logical choice.

Will the NetEqualizer ever cache content beyond YouTube? Such as other videos?

At this time, the NetEqualizer is caching files that traverse port 80 and correspond to video files from 30 seconds to 10 minutes. It is possible that some other port 80 file will fall into this category, but the bulk of it will be YouTube.

Is there anything else about YouTube that makes it a good candidate to cache?

Yes, YouTube content meets the level of stability discussed above that’s needed for effective caching. Once posted, most YouTube videos are not edited or changed. Hence, the copy in the local cache will stay current and be good indefinitely.

When I download large distributions, the download utility often gives me a choice of mirrored sites around the world. Is this the same as caching?

By definition this is also caching, but the difference is that there is a manual step to choosing one of these distribution sites. Some of the large-content open source distributions have been delivered this way for many years. The caching feature on the NetEqualizer is what is called “transparent,” meaning users do not have to do anything to get a cached copy.

If users are getting a file from cache without their knowledge, could this be construed as a violation of net neutrality?

We addressed the tenets of net neutrality in another article and to our knowledge caching has not been controversial in any way.

What about copyright violations? Is it legal to store someone’s content on an intermediate server?

This is a very complex question and anything is possible, but with respect to intent and the NetEqualizer caching mechanism, the Internet provider is only caching what is already freely available. There is no masking or redirection of the actual YouTube administrative wrappings that a user sees (this is where advertising and promotions appear), so there is no loss of potential revenue for YouTube. In fact, caching is arguably a benefit for YouTube, as it helps more people use the service where connections might otherwise be too slow.

Final Editor’s Note: While we’re confident this Q&A will answer many of the questions that arise about the NetEqualizer YouTube caching feature, please don’t hesitate to contact us with further inquiries. We can be reached at 1-888-287-2492 or sales@apconnections.net.

NetEqualizer YouTube Caching a Win for Net Neutrality


Over the past few years, much of the controversy over net neutrality has ultimately stemmed from the longstanding rift between carriers and content providers. Commercial content providers such as Netflix have entire business models that rely on relatively unrestricted bandwidth access for their customers, which has led to an enormous increase in the amount of bandwidth being used. In response to these extreme bandwidth loads and associated costs, ISPs have tried all types of schemes to limit and restrict total usage, including deep packet inspection and layer-7 shaping, preferential treatment of certain content, and usage-based charging.

While in many cases effective, most of these efforts have been mired in controversy with respect to net neutrality. However, caching is the one exception.

Up to this point, caching has proven to be the magic bullet that can benefit both ISPs and consumers (faster access to videos, etc.) while respecting net neutrality. To illustrate this, we’ll run caching through the gauntlet of questions that have been raised about these other solutions in regard to a violation of net neutrality. In the end, it comes up clean.

1. Does caching involve deep introspection of user traffic without their knowledge (like layer-7 shaping and DPI)?

No.

2. Does Caching perform any form of preferential treatment based on content?

No.

3. Does caching perform any form of preferential treatment based on fees?

No.

Yet, despite avoiding these pitfalls, caching has still proven to be extremely effective, allowing Internet providers to manage increasing customer demands without infringing upon customers’ rights or quality of service. It was these factors that led APconnections to develop our most recent NetEqualizer feature, YouTube caching.

For more on this feature, or caching in general, check out our new NetEqualizer YouTube Caching FAQ post.

A Tiered Internet – Penny Wise or Pound Foolish


With the debate over net neutrality raging in the background, Internet suppliers are preparing their strategies to bridge the divide between bandwidth consumption and costs. This topic is coming to a head now largely because of the astonishing growth rate of streaming video from the likes of YouTube, Netflix, and others.

The issue recently took a new turn and emerged front and center during a webinar in which Allot Communications and Openet presented their new product features, including an approach that integrates policy control with charging for wireless access to certain websites.

On the surface, this may seem like a potential solution to the bandwidth problem. Basic economic theory will tell you that if you increase the cost of a product or service, the demand will eventually decrease. In this case, charging for bandwidth will not only increase revenues, but the demand will ultimately drop until a point of equilibrium is reached. Problem solved, right? Wrong!

While the short-term benefits are obviously appealing for some, this is a slippery slope that will lead to further inequality in Internet access. (You can easily find many articles and blogs on net neutrality, including those referencing Vinton Cerf and Tim Berners-Lee, two of the founding fathers of the Internet, who clearly support a free and equal Internet.) Despite these arguments, we believe that Deep Packet Inspection (DPI) equipment makers such as Allot will continue to promote and support a charge system, since it is in their best business interest to do so. After all, a pay-for-access approach requires DPI as the basis for determining what content to charge for.

However, there are better and more cost-effective ways to control bandwidth consumption while protecting net neutrality. For example, fairness-based bandwidth control intrinsically provides equality and fairness to all users without targeting specific content or websites. With this approach, when the network is busy, small bandwidth consumers are guaranteed access to the Internet while large bandwidth users are throttled back, but not charged or blocked completely. Everyone lives within their means and gets an equal share. If large bandwidth consumers want more bandwidth, they can purchase a higher level of service from their provider. But let’s be clear: this is very different from charging for access to a particular website!

Although this content-neutral approach has repeatedly proven successful for NetEqualizer users, we’re now taking an additional step toward mitigating bandwidth congestion while respecting network neutrality: caching video, the largest growth segment of bandwidth consumption. So, keep an eye out for the YouTube caching feature in our new NetEqualizer release early next year.

NetEqualizer Tuning Guide for Small Networks with a Small Number of Infrequent Users


If you are working on a small network (10Mbps or less) with a small number of infrequent users, here are some tuning recommendations to help you optimize your network. These recommendations came out of a discussion with one of our customers, whose environment is a 40-person company on a 10Mbps pipe during the day (a normal load for a small network) that converts at night to a network with only one user.

The following recommendations address the situation where a user on such a network gets knocked down to less than 1Mbps by a PENALTY even though there is more than enough bandwidth to sustain the download at a higher rate.

Summary of Recommendations (listed in priority order):

1) (best option) Put a hard limit somewhere below RATIO (typically 85%) on each IP address on the network.  So, for a 10Mbps network with RATIO = 85%, your hard limits should be below 8.5Mbps for each IP address.

2) Put a “day configuration” and a “night configuration” in place.  The process to do this is described in the Changing Configurations by Time of Day section of our Advanced Tips & Tricks guide.

3) Change the PENALTY unit sensitivity to make the penalty less restrictive.

4) Raise the value of HOGMIN from 12,000 bytes/second to anywhere up to 128,000 bytes/second.

The philosophy behind each is described in detail in the following sections.

1) Adding Hard Limits on each IP address

We recommend putting Hard Limits on each IP address.  Hard Limits will keep any one user from consuming the entire network bandwidth.  If you prefer not to have Hard Limits on all IP addresses, you can set the Hard Limit only for the infrequent users.

For example, on a 10Mbps network, you can put a Hard Limit of 4-5Mbps on every user, which will prevent any one user from tripping equalizing, but will allow all of them to sustain a 5 Mbps download on your lightly loaded network.

If a user starts a large download, it will consume network bandwidth up until the network reaches a point of congestion (at 85% with RATIO set to 85).   Once that point is reached, equalizing will kick in and start penalizing the traffic.  In cases where the network has a normal number of users on it, this works very well to provide fairness across the available bandwidth.

When the one network user spikes the entire network above 85% congestion, a PENALTY kicks in, and the file download gets throttled back, almost instantly, to 500kbps or less. Once the penalty is removed, the download again consumes all the network bandwidth until another penalty is applied. This cycle repeats every few seconds until the download completes.

On a busy system with hundreds or thousands of users, the pipe is usually near capacity, so the penalties being applied are less dramatic, and they ensure that the other users do not experience “lockup”.
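Putting the parameters together, the behavior described above can be sketched as a simple rule. This is an illustrative model only (plain Python, with all rates in bytes per second), not the actual NetEqualizer implementation:

```python
RATIO = 0.85            # equalizing triggers above 85% of trunk capacity
HOGMIN = 12_000         # bytes/sec; flows below this are never penalized
TRUNK = 10_000_000 // 8 # 10 Mbps trunk expressed in bytes/sec

def flows_to_penalize(flows, hard_limit=None):
    """flows maps IP -> current transfer rate in bytes/sec. Returns the
    IPs that would receive a PENALTY under the rules in this guide."""
    # Hard limits cap each flow first, so one user alone cannot trip equalizing.
    if hard_limit is not None:
        flows = {ip: min(rate, hard_limit) for ip, rate in flows.items()}
    if sum(flows.values()) <= RATIO * TRUNK:
        return []       # not congested: no penalties at all
    # Congested: only flows above HOGMIN are candidates for a PENALTY.
    return [ip for ip, rate in flows.items() if rate > HOGMIN]

# A single unrestricted download spikes the 10 Mbps pipe past 85%:
print(flows_to_penalize({"10.0.0.5": 1_200_000}))
# A hard limit of 500,000 bytes/sec keeps the same flow below the trigger:
print(flows_to_penalize({"10.0.0.5": 1_200_000}, hard_limit=500_000))
```

This also shows why recommendation 1 is the best option: a hard limit below RATIO × trunk size means one user can never push the total past the congestion threshold on an otherwise empty network.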

2) Change your Configuration by Time of Day

You can also set up your NetEqualizer to use two separate configuration files, so that you can apply different rules at different times of day; for example, one set of rules for “off-hours” (typically nighttime) and another for “on-hours” (typically daytime). This is beneficial if you want to open up the amount of bandwidth available per user at night. For example, you could set your off-hours hard limits to 8Mbps and your on-hours hard limits to 4Mbps.

Note that it is still important to keep your hard limits below RATIO, so that you do not trigger equalizing based on one data flow.

3) Change the PENALTY unit sensitivity to make the penalty less restrictive

Networks much larger than 45 megabits may require a PENALTY unit resolution finer than hundredths of a second. In the NetEqualizer Web GUI, the smallest penalty that can be applied to an IP packet is 1/100 of a second. If you find that the default PENALTY of 1 puts too much latency on your connections, you can adjust the PENALTY unit to 1/1000 of a second with the following command:

From the Web GUI Main Menu, Click on ->Miscellaneous->Run a Command

Type in: /bridge/bridge-utils/brctl/brctl rembrain my 99999

Note: For this change to persist you will need to put it in the /art/autostart file.

4)  Raise the value of HOGMIN, anywhere up to 128,000 bytes/sec

HOGMIN is used to determine which traffic should be penalized on a congested network. One way to keep traffic from being penalized, then, is to raise the value of HOGMIN (the default is 12,000 bytes per second). For a lightly loaded network, consider HOGMIN = 50,000 bytes/sec; you may even go as high as 128,000 bytes/sec.

Taken as a whole, this is how our four recommendations would work in the example we have described…

Hmm… I have a 10 megabit pipe, 40 users during the day, and 1 user at night. No user should be able to take the whole pipe during the day, but I want my one night user to get more bandwidth.

  • I’ll create two configuration files: one with 4Mbps hard limits on all my users during the day (4 megabits is relatively fast service for the average user, and nobody would complain) and another with 8Mbps hard limits for my night user(s).
  • In addition, I would like the penalty to be less harsh at night, so I’ll set PENALTY = 1/1000 of a second in my night configuration file.
  • I also would like HOGMIN to be raised at night.  I will set it to 50,000 in my night configuration file.

During the day, when every once in a while we get 2 or 3 users downloading at once, it will no longer kill the entire pipe.  And during the night, my user can download larger files without being restricted.  So, with 4 Mbps/8 Mbps hard limits plus equalizing, I get the best of both worlds: fast downloads when the pipe is empty, and protection against gridlock at peak times.  Now there is nothing anybody can do to crash the system at random times.

I hope you find this tuning suggestion helpful for your situation.  If you would like additional help, please contact our Support Team at support@apconnections.net or 303.997.1300 x102 to discuss tuning for your specific configuration.

NetEqualizer News: December 2010


December 2010 NetEqualizer News

YouTube Caching Feature Launch; Flyaway Contest Winner Announced
Greetings!

Enjoy another issue of the NetEqualizer Newsletter. This month, we introduce updated details about the new NetEqualizer YouTube caching feature and announce our most recent Flyaway Contest winner. As always, feel free to pass this along to others who might be interested in NetEqualizer News.

In this issue:

 

  • NetEqualizer YouTube Caching Release Launching January 2011
  • NetEqualizer Is Coming To Southern California!
  • Best Of The Blog
  • And The Flyaway Contest Winner Is…

 

NetEqualizer YouTube Caching Release Launching January 2011

Since announcing the upcoming release of our NetEqualizer YouTube caching feature last month, we’ve received an overwhelming number of inquiries as to when the feature will be available. As of now, we’re still working through multiple scenarios under load in our labs to ensure a seamless release, but we’re expecting a full launch some time next month.

The upcoming feature will store the top 300-500 trending YouTube videos to ensure faster and more efficient access. For best results, we’re going to suggest 4 to 16 gigabytes of RAM. Almost all of this memory will be used for caching, and the bulk of that will be these YouTube videos. This should be more than enough to capture the most popular YouTube content. We are considering an option to put a large disk drive in the NetEqualizer in a future release, but we still like the stability of the smaller cache hosted in RAM for both speed and reliability.

Overall, the feature will be a major step toward making sure the most popular YouTube content is accessible and reliable even during your network’s busiest hours. (See our blog article, Enhance Your Internet Service With YouTube Caching, for more information on the benefits of video caching).

All future sales in 2010 will be eligible for the new release, so there’s no need to hold off on purchasing a NetEqualizer unit. If you need a NetEqualizer today but are waiting for the release, just let us know when you purchase and we will grandfather you in with three months of the NetEqualizer Software Subscription (NSS) to cover support for the YouTube release at no extra cost. As always, the release will be available at no charge to all current NetEqualizer users with valid NetEqualizer Software Subscriptions.

For more information, contact us at sales@apconnections.net or 1-800-918-2763.

NetEqualizer Is Coming To Southern California!

Plans are now in the works for our next complimentary NetEqualizer training seminar to be held in Southern California in early 2011. We’ll keep you posted as the details develop and the final date and location are determined, but in the meantime, here’s a preview of what to expect.

The upcoming seminar will cover:

  • The various tradeoffs regarding how to stem P2P and bandwidth abuse
  • Recommendations for curbing RIAA requests
  • Demo of the NetEqualizer network access control module
  • Lots of customer Q&A and information sharing on how clients are using the NetEqualizer, including some hands-on probing of a live system

If that isn’t enough, we’ll be giving away great door prizes to attendees. So, be sure not to miss this seminar! For more information, or to let us know that you’re interested in attending, contact us at admin@apconnections.net.

Best Of The Blog
The 10-Gigabit Barrier for Bandwidth Controllers and Intel-Based Routers
By Art Reisman

Editor’s Note: This article was adapted from our answer to a NetEqualizer pre-sale question asked by an ISP that was concerned with its upgrade path. We realized the answer was useful in a broader sense and decided to post it here.

Any router, bandwidth controller, or firewall that is based on Intel architecture and buses will never be able to go faster than about 7 gigabits sustained. (This includes our NE4000 bandwidth controller. While the NE4000 can actually reach speeds close to 10 gigabits, we rate our equipment for five gigabits because we don’t like quoting best-case numbers to our customers.) The limiting factor in Intel architecture is the central clock: to expand beyond 10-gigabit speeds you cannot be running with a central clock, and with a central clock controlling the show, it is practically impossible to move data around much faster than 10 gigabits.

The alternative is to use a specialized asynchronous design, which is what faster switches and hardware do.

To Keep Reading, Click Here.

And The Flyaway Contest Winner Is…

Every few months, we have a drawing to give away two roundtrip domestic airline tickets from Frontier Airlines to one lucky person who’s recently tried out our online NetEqualizer demo. The time has come to announce this round’s winner.

And the winner is…Craig Diotte of Algoma University.

Congratulations, Craig! Please contact us within 30 days at admin@apconnections.net or 303-997-1300 to claim your prize.

The 10-Gigabit Barrier for Bandwidth Controllers and Intel-Based Routers


By Art Reisman

Editor’s note: This article was adapted from our answer to a NetEqualizer pre-sale question asked by an ISP that was concerned with its upgrade path. We realized the answer was useful in a broader sense and decided to post it here.

Any router, bandwidth controller, or firewall that is based on Intel architecture and buses will never be able to go faster than about 7 gigabits sustained. (This includes our NE4000 bandwidth controller. While the NE4000 can actually reach speeds close to 10 gigabits, we rate our equipment for five gigabits because we don’t like quoting best-case numbers to our customers.) The limiting factor in Intel architecture is the central clock: to expand beyond 10-gigabit speeds you cannot be running with a central clock, and with a central clock controlling the show, it is practically impossible to move data around much faster than 10 gigabits.

The alternative is to use a specialized asynchronous design, which is what faster switches and hardware do. They have no clock or centralized multiprocessor/bus. However, the price point for such hardware quickly jumps to 5-10 times the Intel architecture because it must be custom designed. It is also quite limited in function once released.

Obviously, vendors can stack a bunch of 10-gig fiber bandwidth controllers behind a switch and call it something faster, but this is no different from dividing up your network paths and using multiple bandwidth controllers yourself.  So, be careful when assessing the claims of other manufacturers in this space.

Considering these limitations, many cable operators here in the US have accepted the 10-gigabit barrier. At some point you must divide and conquer using multiple 10-gig fiber links and multiple NE4000-type boxes, which we believe is really the only viable plan, at least if you want any sort of sophistication in your bandwidth controller.

While some will keep requesting giant centralized boxes, and paying a premium for them (it’s in their blood to think single box, central location), the Internet itself only works because it is made of many independent paths; there is no centralized location by design. So, as you approach 10-gigabit speeds in your organization, it might be time to stop thinking “single box.”

I went through this same learning curve as a system architect at AT&T Bell Labs back in the 1990s.  The sales team was constantly worried about how many telephone ports we could support in one box, because that is what operators were asking for, and it shot the price per port through the roof with some of our designs. So, in our present case, we (NetEqualizer) decided not to get into that game, because we believe that price per megabit of shaping will likely win out in the end.

Art Reisman is currently CTO and co-founder of APconnections, creator of the NetEqualizer. He has worked at several start-up companies over the years and has invented and brought several technology products to market, both on his own and with the backing of larger corporations. This includes tools for the automotive industry.