Editor’s Note: Due to the many variables involved with tuning and supporting Squid Caching Integration, this feature will require an additional upfront support charge. It will also require at minimum a NE3000 platform. Contact sales@netequalizer.com for specific details.
In our upcoming 5.0 release, the main enhancement will be the ability to implement YouTube caching from a NetEqualizer. Since a Squid caching server can potentially be implemented separately by your IT department, the question often comes up: what is the difference between using the embedded NetEqualizer integration and running the caching server stand-alone on a network?
Here are a few of the key reasons why using the NetEqualizer caching integration provides for the most efficient and effective set up:
1. Communication – For proper performance, it’s important that the NetEqualizer know when a file is coming from cache and when it’s coming from the Internet. It would be counterproductive to have data from cache shaped in any way. To accomplish this, we wrote a new utility, aptly named “cache helper,” to advise the NetEqualizer of current connections originating from cache. This allows the NetEqualizer to permit cached traffic to pass without being shaped. (A rough sketch of this idea appears at the end of this section.)
2. Creative Routing – It’s also important that the NetEqualizer be able to see the public IP addresses of traffic originating on the Internet. However, using a stand-alone caching server prevents this. For example, if you plug a caching server into your network in front of a NetEqualizer (between the NetEqualizer and your users), all port 80 traffic would appear to come from the proxy server’s IP address. Cached or not, it would appear this way in a default setup. The NetEqualizer shaping rules would not be of much use in this mode as they would think all of the Internet traffic was originating from a single server. Without going into details, we have developed a set of special routing rules to overcome this limitation in our implementation.
3. Advanced Testing and Validation – Squid proxy servers by themselves are very finicky. Time and time again, we hear about implementations where a customer installed a proxy server only to have it cause more problems than it solved, ultimately slowing down the network. To ensure a simple yet tight implementation, we ran a series of scenarios under different conditions. This required us to develop a whole new methodology for testing network loads through the NetEqualizer. Our current class of load generators is very good at creating a heavy load and controlling it precisely, but in order to validate a caching system, we needed a different approach: a load simulator that could reproduce the variations of live Internet traffic, since a stable caching system must hold up under many different traffic conditions.
To answer this challenge, and provide the most effective caching feature, we’ve spent the past few months developing a custom load generator. Our simulation lab has a full one-gigabit connection to the Internet. It also has a set of servers that can simulate thousands of simultaneous users surfing the Internet at the same time. We can also queue up a set of YouTube users vying for live video from the cache and Internet. Lastly, we put a traditional point-to-point FTP and UDP load across the NetEqualizer using our traditional load generator.
Once our custom load generator was in place, we were able to run various scenarios that our technology might encounter in a live network setting. Our testing exposed some common, and not so common, issues with YouTube caching and we were able to correct them. This kind of analysis is not possible on a live commercial network, as experimenting and tuning requires deliberate outages. We also now have the ability to re-create a customer problem and develop actual Squid source code patches should the need arise.
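Coming back to the “communication” point in item 1, here is a minimal, hypothetical sketch of the kind of helper described there; it is not the actual cache-helper utility. It assumes the cache is a Squid instance on a Linux box whose LAN-facing address is known, that Squid listens on its default port, and that the shaper polls a plain-text exemption file. The IP address, file path, and use of the ss tool are all assumptions for illustration.

# Hypothetical sketch only -- not the actual "cache helper" utility.
# Idea: list established TCP connections served by the local Squid cache and
# write them to a file that a shaper could poll as a "do not shape" list.
import subprocess

CACHE_IP = "192.168.1.10"          # assumption: LAN-facing address of the cache
CACHE_PORT = 3128                  # Squid's default HTTP port
EXEMPT_FILE = "/tmp/cache_exempt"  # hypothetical file the shaper would poll

def cache_connections():
    """Return (cache_endpoint, client_endpoint) pairs for active cache sessions."""
    out = subprocess.run(["ss", "-tn", "state", "established"],
                         capture_output=True, text=True).stdout
    pairs = []
    for line in out.splitlines()[1:]:          # skip the header line
        fields = line.split()
        if len(fields) >= 4 and fields[-2] == f"{CACHE_IP}:{CACHE_PORT}":
            pairs.append((fields[-2], fields[-1]))
    return pairs

if __name__ == "__main__":
    with open(EXEMPT_FILE, "w") as f:
        for cache_end, client_end in cache_connections():
            f.write(f"{cache_end} {client_end}\n")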
By Art Reisman
Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.
The chances of being killed by a shark are 1 in 264 million. The chance of being mauled by a bear on your weekend outing in the woods is even lower. Fear is a strange emotion rooted deep within our brains. Despite a rational understanding of risk, people are programmed to lose sleep and exhaust their adrenaline supply worrying about events that will never happen.
It is this same lack of rational risk evaluation that makes it possible for vendors to sell unneeded equipment to otherwise budget-conscious businesses. The in-vogue, unwarranted fears currently used to move network equipment are IPv6 preparedness and equipment redundancy.
Equipment vendors tend to push customers toward internally redundant hardware solutions not because they have your best interest in mind; if they did, they would first encourage you to get a redundant link to your ISP.
Twenty years of practical hands-on experience tells us that your Internet router’s chance of catastrophic failure is about 1 percent over a three-year period. On the other hand, your Internet provider has a 95-percent chance of having a full-day outage during that same three-year period.
If you are truly worried about a connectivity failure into your business, you MUST source two separate paths to the Internet to have any significant reduction in risk. Requiring fail-over on individual pieces of equipment, without first securing complete redundancy in your network from your provider, is like putting a band-aid on your finger while bleeding from your jugular vein.
Some other useful tips on making your network more reliable include:
Do not turn on unneeded bells and whistles on your router and firewall equipment.
Many router and device failures are not absolute. Equipment will get cranky, slow, or belligerent based on human error or system bugs. Although system bugs are rare when these devices are used in the default set-up, it seems turning on bells and whistles is often an irresistible enticement for a tech. The more features you turn on, the less standard your configuration becomes, and all too often the mission of the device is pushed well beyond its original intent. Routers doing billing systems, for example.
These “soft” failure situations are common, and the fail-over mechanism likely will not kick in, even though the device is sick and not passing traffic as intended. I have witnessed this type of failure first-hand at major customer installations. The failure itself is bad enough, but the real embarrassment comes from having to tell your customer that the fail-over investment they purchased is useless in a real-life situation. Fail-over systems are designed with the idea that the equipment they route around will die and go belly up like a pheasant shot point-blank with a 12-gauge shotgun. In reality, for every “hard” failure, there are 100 system-related lock ups where equipment sputters and chokes but does not completely die.
Start with a high-quality Internet line.
T1 lines, although somewhat expensive, are based on telephone technology that has long been hardened and paid for. While they do cost a bit more than other solutions, they are well-engineered to your doorstep.
Make sure all your devices have good UPS sources and surge protectors.
Consider this when purchasing redundant equipment: what is the cost of manually moving a wire to bypass a failed piece of equipment?
Look at this option before purchasing redundancy options for a single point of failure. We often see customers asking for redundant fail-over embedded in their equipment. This tends to be a strategy of purchasing hardware such as routers, firewalls, bandwidth shapers, and access points that provide a “fail open” (meaning traffic will still pass through the device) should they catastrophically fail. At face value, this seems like a good idea to cover your bases. Most of these devices embed a failover switch internally in their hardware. This technology can add about $3,000 to the price of the unit.
If equipment is vital to your operation, you’ll need a spare unit on hand in case of failure. If the equipment is optional or used occasionally, then take it out of your network.
Again, these are just some basic tips, and your final Internet redundancy plan will ultimately depend on your specific circumstances. But, these tips and questions should put you on your way to a decision based on facts rather than one based on unnecessary fears and concerns.
So, you already have a router in your network, and rather than take on the expense of another piece of equipment, you want to double-up on functionality by implementing your bandwidth control within your router. While this is sound logic and may be your best decision, as always, there are some other factors to consider.
Here are a few things to think about:
1. Routers are optimized to move packets from one network to another with utmost efficiency. To perform this function, there is often minimal introspection of the data, meaning the router does one table look-up and sends the data on its way. However, as soon as you start doing some form of bandwidth control, your router must perform a higher level of analysis on the data. That additional analysis can overwhelm a router’s CPU without warning. Implementing non-routing features, such as protocol sniffing, can create conditions that are much more complex than the original router mission. For simple rate limiting there should be no problem, but if you get into more complex bandwidth control, you can overwhelm the processing power your router was designed for.
2. The more complex the system, the more likely it is to lock up. For example, that old analog desktop phone set probably never once crashed. It was a simple device and hence extremely reliable. On the other hand, when you load up an IP phone on your Windows PC, you will reduce reliability even though the function is the same as the old phone system. The problem is that your Windows PC is an unreliable platform. It runs out of memory and buggy applications lock it up.
This is not news to a Windows PC owner, but the complexity of a mission will have the same effect on your once-reliable router. So, when you start loading up your router with additional missions, it is increasingly more likely that it will become unstable and lock up. Worse yet, you might cause a subtle network problem (intermittent slowness, etc.) that is less likely to be identified and fixed. When you combine a bandwidth controller/router/firewall together, it can become nearly impossible to isolate problems.
3. Routing with TOS bits? Setting priority on your router generally only works when you control both ends of the link, which isn’t always an option. However, products such as the NetEqualizer can supply priority for VoIP in both directions on your Internet link. (A short sketch of how an application marks its traffic, and why both ends of the path matter, appears after this list.)
4. A stand-alone bandwidth controller can be moved around your network or easily removed without affecting routing. This is possible because a bandwidth controller is generally not a routable device but rather a transparent bridge. Rearranging your network setup may not be an option, or simply becomes much more difficult, when using your router for other functions, including bandwidth control.
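As a small illustration of the TOS-bit point in item 3 (this is generic socket code, not a NetEqualizer feature): an application can stamp its own packets with a DSCP/TOS priority value, but that stamp only buys real priority if the routers on both ends of the link, and everything in between, are configured to honor it. The peer address and port below are placeholders.

# Generic illustration of marking traffic with a TOS/DSCP value.
# The mark by itself does nothing unless routers along the path act on it.
import socket

EF = 0xB8  # DSCP class "Expedited Forwarding" (46), shifted into the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)   # stamp outgoing packets
sock.sendto(b"voip payload", ("203.0.113.10", 5060))    # placeholder peer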
These four points don’t necessarily mean using a router for bandwidth control isn’t the right option for you. However, as is the case when setting up any network, the right choice ultimately depends on your individual needs. Taking these points into consideration should make your final decision on routing and bandwidth control a little easier.
An eponym is a general term used to describe something that takes its name from a person or brand. A proprietary eponym, then, is a brand name, product, or service mark that has fallen into general use.
Examples of common brand eponyms include Xerox, Google, and Band-Aid. All of these brands have become synonymous with the general use of their class of product, regardless of the actual brand.
Over the past 7 years, we have spent much of our time explaining the NetEqualizer methods to network administrators around the country, and now there is mounting evidence that the NetEqualizer brand is taking on a broader societal connotation. NetEqualizer is in the early stages of becoming an eponym for the class of bandwidth shapers that balance network loads and ensure fairness and neutrality. As evidence, we cite the following excerpts taken from various blogs and publications around the world.
From Dennis OReilly <Dennis.OReilly@ubc.ca> posted on ResNet Forums
These days the only way to classify encrypted streams is through behavioral analysis. …. Thus, approaches like the NetEqualizer or script-based ‘penalty box’ approaches are better.
About 2 months ago, I began experimenting with an approach to QOS that mimics much of the functionality of the NetEqualizer (http://www.netequalizer.com) product line.
Comcast Announces Traffic Shaping Techniques like APconnections’ NetEqualizer…
From Technewsworld
It actually sounds a lot what NetEqualizer (www.netequalizer.com) does and most people are OK with it…..
From Network World
NetEqualizer looks at every connection on the network and compare it to the overall trunk size to determine how to eliminate congestion on the links
If you’d really like to have your own netequalizer-like system then my advice…..
Has anyone else tried Netequalizer or something like it to help with VoIP QoS? It’s worked well so far for us and seems to be an effective alternative for networks with several users…..
In the past, we’ve published several articles on our blog to help customers better understand the NetEqualizer’s potential return on investment (ROI). Obviously, we do this because we think we offer a compelling ROI proposition for most bandwidth-shaping decisions. Why? Primarily because we provide the benefits of bandwidth shaping at a very low cost — both initially and even more so over time. (Click here for the NetEqualizer ROI calculator.)
But, we also want to provide potential customers with the questions that need to be considered before a product is purchased, regardless of whether or not the answers lead to the NetEqualizer. With that said, this article will break down these questions, addressing many issues that may not be obvious at first glance, but are nonetheless integral when determining what bandwidth shaping product is best for you.
First, let’s discuss basic ROI. As a simple example, if an investment cost $100, and if in one year that investment returned $120, the ROI is 20 percent. Simple enough. But what if your investment horizon is five years or longer? It gets a little more complicated, but suffice it to say you would perform a similar calculation for each year while adjusting these returns for time and cost.
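For readers who want to see the multi-year version spelled out, here is a small sketch of that calculation: each year’s return is discounted back to present value before being compared to the up-front cost. The dollar figures and discount rate are made-up placeholders, not NetEqualizer pricing.

# Sketch of a multi-year ROI calculation with time-discounted returns.
def discounted_roi(initial_cost, yearly_returns, discount_rate):
    """ROI as a fraction, with each year's return discounted to present value."""
    present_value = sum(
        ret / (1.0 + discount_rate) ** year
        for year, ret in enumerate(yearly_returns, start=1)
    )
    return (present_value - initial_cost) / initial_cost

# Placeholder example: a $5,000 purchase that saves $2,000 per year for 5 years,
# discounted at 5% per year.
print(f"{discounted_roi(5000, [2000] * 5, 0.05):.0%}")   # roughly 73%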
The important point is that this technique is a well-known calculation for evaluating whether one thing is a better investment than another — be it bandwidth-shaping products or real estate. Naturally, the best financial decision will be the one with the greatest return for the smallest cost.
The hard part is determining what questions to ask in order to accurately determine the ROI. A missed cost or benefit here or there could dramatically alter the outcome, potentially leading to significant unforeseen losses.
For the remainder of this article, I’ll discuss many of the potential costs and returns associated with bandwidth shaping products, with some being more obscure than others. In the end, it should better prepare you to address the most important questions and issues and ultimately lead to a more accurate ROI assessment.
Let’s start by looking at the largest components of bandwidth shaping product “costs” and whether they are one-time or ongoing. We’ll then consider the returns.
COSTS
RETURNS
Overall, these issues are the basic financial components and questions that need to be quantified to make a good ROI analysis. For each business, and each tool, this type of analysis may yield a different answer, but it is important to note that over time there are many more items associated with ongoing costs/savings than those occurring only once. Thus, you must take great care to understand the impact of these for each tool, especially those issues that lead to costs that increase over time.
Editor’s Note: This week, we announced the availability of the NetEqualizer YouTube caching feature we first introduced in October. Over the past month, interest and inquiries have been high, so we’ve created the following Q&A to address many of the common questions we’ve received.
This may seem like a silly question, but why is caching advantageous?
The bottleneck most networks deal with is that they have a limited pipe leading out to the larger public Internet cloud. When a user visits a website or accesses content online, data must be transferred to and from the user through this limited pipe, which is usually meant for only average loads (increasing its size can be quite expensive). During busy times, when multiple users are accessing material from the Internet at once, the pipe can become clogged and service slowed. However, if an ISP can keep a cached copy of certain bandwidth-intensive content, such as a popular video, on a server in their local office, this bottleneck can be avoided. The pipe remains open and unclogged and customers are assured their video will always play faster and more smoothly than if they had to go out and re-fetch a copy from the YouTube server on the Internet.
What is the ROI benefit of caching YouTube? How much bandwidth can a provider conserve?
At the time of this writing, we are still in the early stages of our data collection on this subject. What we do know is that YouTube can account for up to 15 percent of Internet traffic. We expect to be able to cache at least the 300 most popular YouTube videos with this initial release, and perhaps more when we release the mass-storage version of our caching server in the future. Considering this, realistic estimates put the savings in terms of bandwidth overhead somewhere between 5 and 15 percent. But these are only the immediate benefits in terms of bandwidth savings. The long-term customer-satisfaction benefit is that many more YouTube videos will play without interruption on a crowded network (busy hour) than before. Therefore, ROI shouldn’t be measured in bandwidth savings alone.
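As a rough back-of-the-envelope check on that range (the cache hit rate below is a hypothetical figure for illustration, not a measured one):

# Back-of-the-envelope estimate of bandwidth saved by caching YouTube.
youtube_share = 0.15    # upper bound cited above: YouTube's share of traffic
cache_hit_rate = 0.40   # hypothetical: fraction of YouTube requests served locally

savings = youtube_share * cache_hit_rate
print(f"Estimated reduction in Internet-link load: {savings:.0%}")   # about 6%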
Why is it just the YouTube caching feature? Why not cache everything?
There are a couple of good reasons not to cache everything.
First, there are quite a few Web pages that are dynamically generated or change quite often, and a caching mechanism relies on content being relatively static. This allows it to grab content from the Internet and store it locally for future use without the content changing. As mentioned, when users/clients visit the specific Web pages that have been stored, they are directed to the locally saved content rather than out over the Internet to the original website. Therefore, caching obviously wouldn’t be possible for pages that are constantly changing. Caching dynamic content can cause all kinds of issues — especially with merchant and secure sites where each page is custom-generated for the client.
Second, a caching server can realistically only store a subset of data that it accesses. Yes, data storage is getting less expensive every year, but a local store is finite in size and will eventually fill up. So, when making a decision on what to cache and what not to cache, YouTube, being both popular and bandwidth intensive, was the logical choice.
Will the NetEqualizer ever cache content beyond YouTube? Such as other videos?
At this time, the NetEqualizer is caching files that traverse port 80 and correspond to video files from 30 seconds to 10 minutes. It is possible that some other port 80 file will fall into this category, but the bulk of it will be YouTube.
Is there anything else about YouTube that makes it a good candidate to cache?
Yes, YouTube content meets the level of stability discussed above that’s needed for effective caching. Once posted, most YouTube videos are not edited or changed. Hence, the copy in the local cache will stay current and be good indefinitely.
When I download large distributions, the download utility often gives me a choice of mirrored sites around the world. Is this the same as caching?
By definition this is also caching, but the difference is that there is a manual step to choosing one of these distribution sites. Some of the large-content open source distributions have been delivered this way for many years. The caching feature on the NetEqualizer is what is called “transparent,” meaning users do not have to do anything to get a cached copy.
If users are getting a file from cache without their knowledge, could this be construed as a violation of net neutrality?
We addressed the tenets of net neutrality in another article and to our knowledge caching has not been controversial in any way.
What about copyright violations? Is it legal to store someone’s content on an intermediate server?
This is a very complex question and anything is possible, but with respect to intent and the NetEqualizer caching mechanism, the Internet provider is only caching what is already freely available. There is no masking or redirection of the actual YouTube administrative wrappings that a user sees (this is where advertising and promotions appear). Hence, there is no loss of potential revenue for YouTube. In fact, it could be considered a benefit for them, as it helps more people use their service where connections might otherwise be too slow.
Final Editor’s Note: While we’re confident this Q&A will answer many of the questions that arise about the NetEqualizer YouTube caching feature, please don’t hesitate to contact us with further inquiries. We can be reached at 1-888-287-2492 or sales@apconnections.net.
Over the past few years, much of the controversy over net neutrality has ultimately stemmed from the longstanding rift between carriers and content providers. Commercial content providers such as NetFlix have entire business models that rely on relatively unrestricted bandwidth access for their customers, which has led to an enormous increase in the amount of bandwidth that is being used. In response to these extreme bandwidth loads and associated costs, ISPs have tried all types of schemes to limit and restrict total usage. Some of the solutions that have been tried include:
While in many cases effective, most of these efforts have been mired in controversy with respect to net neutrality. However, caching is the one exception.
Up to this point, caching has proven to be the magic bullet that can benefit both ISPs and consumers (faster access to videos, etc.) while respecting net neutrality. To illustrate this, we’ll run caching through the gauntlet of questions that have been raised about these other solutions in regard to a violation of net neutrality. In the end, it comes up clean.
1. Does caching involve deep introspection of user traffic without their knowledge (like layer-7 shaping and DPI)?
No.
2. Does Caching perform any form of preferential treatment based on content?
No.
3. Does caching perform any form of preferential treatment based on fees?
No.
Yet, despite avoiding these pitfalls, caching has still proven to be extremely effective, allowing Internet providers to manage increasing customer demands without infringing upon customers’ rights or quality of service. It was these factors that led APconnections to develop our most recent NetEqualizer feature, YouTube caching.
For more on this feature, or caching in general, check out our new NetEqualizer YouTube Caching FAQ post.
With the debate over net neutrality raging in the background, Internet suppliers are preparing their strategies to bridge the divide between bandwidth consumption and costs. This topic is coming to a head now largely because of the astonishing growth-rate of streaming video from the likes of YouTube, NetFlix, and others.
The issue recently took a new turn and emerged front and center during a webinar in which Allot Communications and Openet presented their new product features, including an approach that integrates policy control and charging for wireless access to certain websites.
On the surface, this may seem like a potential solution to the bandwidth problem. Basic economic theory will tell you that if you increase the cost of a product or service, the demand will eventually decrease. In this case, charging for bandwidth will not only increase revenues, but the demand will ultimately drop until a point of equilibrium is reached. Problem solved, right? Wrong!
While the short-term benefits are obviously appealing for some, this is a slippery slope that will lead to further inequality in Internet access (You can easily find many articles and blogs regarding Net Neutrality including those referencing Vinton Cerf and Tim Berners-Lee — two of the founding fathers of the Internet — clearly supporting a free and equal Internet). Despite these arguments, we believe that Deep Packet Inspection (DPI) equipment makers such as Allot will continue to promote and support a charge system since it is in their best business interests to do so. After all, a pay-for-access approach requires DPI as the basis for determining what content to charge.
However, there are better and more cost-effective ways to control bandwidth consumption while protecting the interests of net neutrality. For example, fairness-based bandwidth control intrinsically provides equality and fairness to all users without targeting specific content or websites. With this approach, when the network is busy, small bandwidth consumers are guaranteed access to the Internet while large bandwidth users are throttled back but not charged or blocked completely. Everyone lives within their means and gets an equal share. If large bandwidth consumers want access to more bandwidth, they can purchase a higher level of service from their provider. But let’s be clear, this is very different from charging for access to a particular website!
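To make the contrast with content-based charging concrete, here is a conceptual sketch of fairness-based control (a deliberate simplification for illustration, not NetEqualizer’s actual algorithm): nothing about the traffic’s content is inspected; when the link crosses a congestion threshold, only the heaviest flow is throttled back.

# Conceptual sketch of fairness-based bandwidth control (illustrative only).
def apply_fairness(flows_kbps, link_kbps, ratio=0.85, penalty=0.5):
    """Return adjusted per-flow rates: throttle the heaviest flow when congested."""
    if sum(flows_kbps.values()) < ratio * link_kbps:
        return dict(flows_kbps)                    # not congested: leave everyone alone
    adjusted = dict(flows_kbps)
    heaviest = max(adjusted, key=adjusted.get)     # largest bandwidth consumer
    adjusted[heaviest] *= penalty                  # throttled back, not blocked or billed
    return adjusted

# Example: a 10,000 kbps link where one download dominates.
flows = {"web user": 300, "voip call": 100, "big download": 9200}
print(apply_fairness(flows, 10_000))
# -> the 9,200 kbps download is cut back; the small users are untouched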
Although this content-neutral approach has repeatedly proved successful for NetEqualizer users, we’re now taking an additional step at mitigating bandwidth congestion while respecting network neutrality through video caching (the largest growth segment of bandwidth consumption). So, keep an eye out for the YouTube caching feature to be available in our new NetEqualizer release early next year.
If you are working with a small network (10Mbps or less) that has a small number of infrequent users, here are some tuning recommendations that will help you to optimize your network use. These recommendations came out of a discussion with one of our customers. Their environment is a 40-person company on a 10Mbps pipe (a normal number of users for a small network) that converts at night to a network with only one user.
The following recommendations will help to alleviate the situation where a user on such a network gets knocked down to less than 1Mbps by a PENALTY while there is more than enough bandwidth to sustain their download at a higher rate.
1) (best option) Put a hard limit somewhere below RATIO (typically 85%) on each IP address on the network. So, for a 10Mbps network with RATIO = 85%, your hard limits should be below 8.5Mbps for each IP address.
2) Put a “day configuration” and a “night configuration” in place. The process to do this is described in the Changing Configurations by Time of Day section of our Advanced Tips & Tricks guide.
3) Change the PENALTY unit sensitivity, to make the penalty less restrictive.
4) Raise the value of HOGMIN from the default of 12,000 bytes/second to anywhere up to 128,000 bytes/second.
The philosophy behind each is described in detail in the following sections.
We recommend putting Hard Limits on each IP address. Hard Limits will keep any one user from consuming the entire network bandwidth. If you prefer not to have Hard Limits on all IP addresses, you can set the Hard Limit only for the infrequent users.
For example, on a 10Mbps network, you can put a Hard Limit of 4-5Mbps on every user, which will prevent any one user from tripping equalizing, but will allow all of them to sustain a 5 Mbps download on your lightly loaded network.
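Here is the simple arithmetic behind that recommendation, using the example numbers above. RATIO and the per-IP hard limit are configuration values you choose for your own network; nothing below is a new default.

# Worked example: keep the per-IP hard limit safely below the equalizing trigger.
pipe_mbps = 10
ratio = 0.85                       # equalizing kicks in at 85% utilization
trigger_mbps = pipe_mbps * ratio   # 8.5 Mbps aggregate congestion threshold

hard_limit_mbps = 4.5              # per-IP cap chosen well below the trigger
assert hard_limit_mbps < trigger_mbps
print(f"A single user tops out at {hard_limit_mbps} Mbps, "
      f"so one flow alone can never push the pipe past {trigger_mbps} Mbps.")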
If a user starts a large download, it will consume network bandwidth up until the network reaches a point of congestion (at 85% with RATIO set to 85). Once that point is reached, equalizing will kick in and start penalizing the traffic. In cases where the network has a normal number of users on it, this works very well to provide fairness across the available bandwidth.
When the one network user spikes the entire network to above 85% congested, a Penalty kicks in. The result of the penalty is that the file download gets throttled back to 500Kbps or maybe less, almost instantly. Once the penalty is removed, the file download will again consume all the network bandwidth until another penalty is applied. This cycle repeats itself every few seconds until the download completes.
On a system with more than one user, and typically one that is very busy with hundreds or thousands of users, the pipe is usually near capacity, so the penalties being applied are not as dramatic, and they ensure that all other users do not experience “lockup”.
You can also change your NetEqualizer to use two separate configuration files, so that you can apply different rules at various times of day – for example, rules for “off-hours” (typically nighttime) versus another set for “on-hours” (typically daytime). This would be beneficial if you want to open up the amount of bandwidth available per user at night. For example, you could set your off-hours hard limits to 8 Mbps, and lower your on-hours hard limits to 4Mbps.
Note that it is still important to keep your hard limits below RATIO, so that you do not trigger equalizing based on one data flow.
Networks much larger than 45 megabits may require a PENALTY UNIT resolution finer than 1/100 of a second. In the NetEqualizer Web GUI, the smallest penalty that can be applied to an IP packet is 1/100 of a second. If you are finding that a default PENALTY of 1 is putting too much latency on your connections, then you can adjust the PENALTY unit to 1/1000 of a second with the following command:
From the Web GUI Main Menu, Click on ->Miscellaneous->Run a Command
Type in: /bridge/bridge-utils/brctl/brctl rembrain my 99999
Note: For this change to persist you will need to put it in the /art/autostart file.
HOGMIN is used to determine what traffic should be penalized on a congested network. One way to keep traffic from being penalized, then, is to raise the value of HOGMIN (the default is 12,000 bytes per second). For a lightly loaded network you could consider HOGMIN = 50,000 bytes/second, and may even go as high as 128,000 bytes/second.
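A tiny illustration of what raising HOGMIN does; the flow names and rates below are invented for the example.

# Flows below HOGMIN are never candidates for a penalty; raising HOGMIN on a
# lightly loaded network exempts more of the ordinary traffic.
HOGMIN_BPS = 50_000   # example value suggested above (bytes per second)

flows_bps = {"email sync": 8_000, "voip call": 12_500, "video download": 400_000}
hog_candidates = {name: rate for name, rate in flows_bps.items() if rate >= HOGMIN_BPS}
print(hog_candidates)   # only the 400,000 bytes/sec download can be penalized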
Taken as a whole, this is how our four recommendations would work in the example we have described…
Hmm… I have a 10 megabit pipe and I have 40 users during the day and 1 user at night. No user should be able to take the whole pipe all day, but I want my 1 user to get more bandwidth at night.
During the day, when every once in a while we get 2 or 3 users downloading at once, it will no longer kill the entire pipe. And during the night, my one user can download larger files without being restricted. So, with 4Mbps/8Mbps hard limits plus equalizing, I get the best of both worlds: fast downloads when the pipe is lightly used, and protection from gridlock at peak times. Now there is nothing anybody can do to crash the system at random times.
I hope you find this tuning suggestion helpful for your situation. If you would like additional help, please contact our Support Team at support@apconnections.net or 303.997.1300 x102 to discuss tuning for your specific configuration.
By Art Reisman
Editor’s note: This article was adapted from our answer to a NetEqualizer pre-sale question asked by an ISP that was concerned with its upgrade path. We realized the answer was useful in a broader sense and decided to post it here.
Any router, bandwidth controller, or firewall that is based on Intel architecture and buses will never be able to go faster than about 7 gigabits sustained. (This includes our NE4000 bandwidth controller. While the NE4000 can actually reach speeds close to 10 gigabits, we rate our equipment for five gigabits because we don’t like quoting best-case numbers to our customers.) The limiting factor in Intel architecture is that to expand beyond 10-gigabit speeds you cannot be running with a central clock, and with a central clock controlling the show, it is practically impossible to move data around much faster than 10 gigabits.
The alternative is to use a specialized asynchronous design, which is what faster switches and hardware do. They have no clock or centralized multiprocessor/bus. However, the price point for such hardware quickly jumps to 5-10 times the Intel architecture because it must be custom designed. It is also quite limited in function once released.
Obviously, vendors can stack a bunch of 10-gig fiber bandwidth controllers behind a switch and call it something faster, but this is no different from dividing up your network paths and using multiple bandwidth controllers yourself. So, be careful when assessing the claims of other manufacturers in this space.
Considering these limitations, many cable operators here in the US have embraced the 10-gigabit barrier. At some point you must divide and conquer using multiple 10-gig fiber links and multiple NE4000 type boxes, which we believe is really the only viable plan — that is if you want any sort of sophistication in your bandwidth controller.
There are some who will keep requesting giant centralized boxes and paying a premium for them (it’s in their blood to think single box, central location). But when you think about the Internet, it only works because it is made up of many independent paths. There is no centralized location by design. So, as you approach 10-gigabit speeds in your organization, it might be time to stop thinking “single box.”
I went through this same learning curve as a system architect at AT&T Bell Labs back in the 1990s. The sales team was constantly worried about how many telephone ports we could support in one box because that is what operators were asking for. It shot the price per port through the roof with some of our designs. So, in our present case, we (NetEqualizer) decided not to get into that game because we believe that price per megabit of shaping will likely win out in the end.
Art Reisman is currently CTO and co-founder of APconnections, creator of the NetEqualizer. He has worked at several start-up companies over the years and has invented and brought several technology products to market, both on his own and with the backing of larger corporations. This includes tools for the automotive industry.
The Dark Side of Net Neutrality
February 15, 2011
Net neutrality, however idyllic in principle, comes with a price. The following article was written to shed some light on the big money behind the propaganda of net neutrality. It may change your views, but at the very least it will peel back one more layer of the onion that is the issue of net neutrality.
First, an analogy to set the stage:
I live in a neighborhood that equally shares a local community water system among 60 residential members. Nobody is metered. Through a mostly verbal agreement, all users try to keep our usage to a minimum. This requires us to be very water conscious, especially in the summer months when the main storage tanks need time to recharge overnight.
Several years ago, one property changed hands, and the new owner started raising organic vegetables using a drip irrigation system. The neighborhood precedent had always been that using water for a small lawn and garden area was an accepted practice; however, the new neighbor expanded his garden to three acres and now sells his produce at the local farmers market. Even with drip irrigation, his water consumption is likely well beyond that of the rest of the neighborhood combined.
You can see where I am going with this. Based on this scenario, it’s obvious that an objective observer would conclude that this neighbor should pay an additional premium — especially when you consider he is exploiting the community water for a commercial gain.
The Internet, much like our neighborhood example, was originally a group of cooperating parties (educational and government institutions) that connected their networks in an effort to easily share information. There was never any intention of charging for access amongst members. As the Internet spread away from government institutions, last-mile carriers such as cable and phone companies invested heavily in infrastructure. Their business plans assumed that all parties would continue to use the Internet with lightweight content such as Web pages, e-mails, and the occasional larger document or picture.
In the latter part of 2007, a few companies, with substantial data content models, decided to take advantage of the low delivery fees for movies and music by serving them up over the Internet. Prior to their new-found Internet delivery model, content providers had to cover the distribution costs for the physical delivery of records, video cassettes and eventually discs.
As of 2010, Internet delivery costs associated with the distribution of media had plummeted to near zero. It seems that consumers have pre-paid their delivery cost when they paid their monthly Internet bill. Everybody should be happy, right?
The problem is, as per our analogy with the community water system, we have a few commercial operators jamming the pipes with content, and jammed pipes have a cost. Upgrading a full Internet pipe at any level requires a major investment, and providers to date are already leveraged and borrowed with their existing infrastructure. Thus, the Internet companies that carry the data need to pass this cost on to somebody else.
As a result of these conflicting interests, we now have a pissing match between carriers and content providers in which the latter are playing the “neutrality card” and the former are lobbying lawmakers to grant them special favors in order to govern ways to limit access.
Therefore, whether it be water, the Internet or grazing on public lands, absolute neutrality can be problematic — especially when money is involved. While the concept of neutrality certainly has the overwhelming support of consumer sentiment, be aware that there are, and always will be, entities exploiting the system.
Related Articles
For more on NetFlix, see Level 3-Netflix Expose their Hidden Agenda.