Summary: In the last few months, we have set up several NetEqualizer systems on hub-and-spoke MPLS networks. Our solution is very cost-effective because it differs from many TOS/compression-based WAN optimization products that require multiple pieces of hardware: normally, for WAN optimization, a device is placed at the hub and a partner device is placed at each remote location. With NetEqualizer technology, we have been able to solve contention issues simply and elegantly with a single device at the central hub.
The problem:
A customer has a hub-and-spoke MPLS network where remote sites get their public Internet and corporate data by coming in over a spoke to a central site. Although the network at the central site has plenty of bandwidth, each spoke has a fixed allocation over the MPLS circuit and is experiencing contention issues (e.g., slow response times when accessing corporate sales data).
The solution:
By placing a NetEqualizer at the central site, so that all the remote spokes come in through it, we are able to sense when a spoke has reached its contention level. We then prioritize among the competing applications and user streams coming in over the congested link.
Why it works:
QoS and priority are really quite simple: congestion is almost always a case of some large, selfish application dominating a shared link. The NetEqualizer is able to spot these selfish applications and scale them back using a technique called Equalizing. Giving one application priority is just a matter of taking bandwidth away from another. See our related article: QOS is a matter of sacrifice.
Okay, but how does it really work?
How does NetEqualizer solve the congested MPLS link issue?
The NetEqualizer solution, which is completely compatible with MPLS, works by taking advantage of the natural inclination of applications to back off when artificially restrained. We’ll get back to this key point in a moment.
Once you have determined the peak capacity of an MPLS spoke (if you don't know it for sure, it can be determined empirically through busy-hour observation), you then tell the centralized NetEqualizer the spoke's throughput, identified by its subnet range or VLAN tag. This tells the NetEqualizer to kick into gear when that spoke's upper limit is reached.
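As a rough illustration of this step (not NetEqualizer's actual configuration syntax), the per-spoke setup amounts to associating each spoke's subnet or VLAN with the known capacity of its MPLS circuit. The names, addresses, and numbers below are hypothetical:

```python
import ipaddress

# Hypothetical per-spoke table: each remote site is identified by its subnet
# (or VLAN tag) and the committed rate of its MPLS circuit in kbps.
SPOKES = {
    "branch-denver":  {"subnet": "10.10.1.0/24", "vlan": 101, "limit_kbps": 3000},
    "branch-chicago": {"subnet": "10.10.2.0/24", "vlan": 102, "limit_kbps": 1500},
    "branch-austin":  {"subnet": "10.10.3.0/24", "vlan": 103, "limit_kbps": 1500},
}

def spoke_for_ip(ip: str):
    """Map a remote-site address to its spoke by matching the subnet."""
    addr = ipaddress.ip_address(ip)
    for name, cfg in SPOKES.items():
        if addr in ipaddress.ip_network(cfg["subnet"]):
            return name
    return None

print(spoke_for_ip("10.10.2.17"))  # -> "branch-chicago"
```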
Once configured, the NetEqualizer continuously (once per second) measures the total aggregate bandwidth traversing every spoke on your network. If it senses that a spoke's upper limit is being reached, NetEqualizer isolates the dominating flows and encourages them to back off.
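To make the per-second check concrete, here is a minimal sketch of that monitoring loop. It assumes a hypothetical read_bytes_for_spoke() counter and an 85% trigger point; it is conceptual, not how NetEqualizer is implemented internally.

```python
import time

TRIGGER_RATIO = 0.85  # hypothetical: start equalizing as a spoke nears its limit

def poll_spokes(read_bytes_for_spoke, spokes):
    """Every second, compute each spoke's throughput and flag congested ones.

    read_bytes_for_spoke(name) is assumed to return the cumulative byte count
    seen on that spoke's subnet/VLAN since startup.
    """
    last = {name: read_bytes_for_spoke(name) for name in spokes}
    while True:
        time.sleep(1)
        for name, cfg in spokes.items():
            total = read_bytes_for_spoke(name)
            kbps = (total - last[name]) * 8 / 1000   # bytes/sec -> kbit/sec
            last[name] = total
            if kbps >= cfg["limit_kbps"] * TRIGGER_RATIO:
                print(f"{name}: {kbps:.0f} kbps -- near its limit, equalize dominant flows")
```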
Each connection between a user on your network and the Internet constitutes a traffic flow. Flows vary widely from short dynamic bursts, which occur, for example, when searching a small Web site, to large persistent flows, as when performing peer-to-peer file sharing or downloading a large file.
By keeping track of every flow going through each MPLS spoke, the NetEqualizer can determine which ones are getting an unequal share of bandwidth and thus crowding out flows from weaker applications.
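To make "flow" concrete, one common way to key and accumulate flows per spoke is the classic 5-tuple. The data structure below is a generic sketch, not NetEqualizer's internal representation.

```python
import time
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Flow:
    """One user-to-Internet connection, accumulated over its lifetime."""
    first_seen: float = field(default_factory=time.time)
    bytes_total: int = 0
    bytes_last_second: int = 0  # reset each second by the polling loop

# flows[spoke][(src_ip, dst_ip, src_port, dst_port, proto)] -> Flow
flows = defaultdict(lambda: defaultdict(Flow))

def account_packet(spoke, five_tuple, length):
    """Credit a packet to its flow so each flow's share of the link is known."""
    f = flows[spoke][five_tuple]
    f.bytes_total += length
    f.bytes_last_second += length
```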
NetEqualizer distinguishes detrimental flows from normal ones by considering the following questions:
- How persistent is the flow?
- How many active flows are there?
- How long has the flow been active?
- How much total congestion is currently on the link?
- How much bandwidth is the flow using relative to the link size?
Once the answers to these questions are known, NetEqualizer will adjust offending flows by adding latency, forcing them to back off and allow potentially starved applications to establish communications, thus eliminating any disruption. Selfish applications with more aggressive bandwidth needs are throttled back during peak contention. This is done automatically by the NetEqualizer, without requiring any additional programming by administrators.
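Putting the questions above together, a congested spoke's flows might be ranked roughly as follows. The thresholds and the idea of briefly queueing packets to add delay are illustrative assumptions, not NetEqualizer's actual algorithm or default parameters.

```python
import time

def offending_flows(spoke_flows, link_kbps, now=None):
    """Apply the questions above: flag long-lived flows taking a large share
    of a busy link (all thresholds here are hypothetical)."""
    now = now or time.time()
    active = sum(1 for f in spoke_flows.values() if f.bytes_last_second > 0)
    offenders = []
    for key, f in spoke_flows.items():
        kbps = f.bytes_last_second * 8 / 1000
        share = kbps / link_kbps           # how big relative to the link?
        age = now - f.first_seen           # how long has it been active?
        if active > 5 and age > 10 and share > 0.25:
            offenders.append(key)
    return offenders

def penalize(flow_key, delay_ms=40):
    """Conceptually: hold this flow's packets a few tens of milliseconds before
    forwarding them, so the sender's congestion control backs off."""
    print(f"adding ~{delay_ms} ms of latency to {flow_key}")
```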
The key to making this work over an MPLS link is the fact that if you slow down a selfish application, it will back off. The NetEqualizer can do this without any changes to the topology of your MPLS network, since the throttling is done independently of the network.
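A quick back-of-the-envelope calculation shows why adding latency makes a sender back off: a TCP connection can move at most roughly one window of data per round trip, so stretching the round-trip time shrinks its throughput. The window size and delays below are illustrative.

```python
def tcp_ceiling_kbps(window_bytes, rtt_ms):
    """Rough TCP throughput ceiling: about one window per round trip."""
    return window_bytes * 8 / rtt_ms  # bits per millisecond == kbit/s

print(tcp_ceiling_kbps(65536, 30))   # ~17,476 kbps with a 30 ms round trip
print(tcp_ceiling_kbps(65536, 130))  # ~4,033 kbps after ~100 ms of added latency
```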
Questions and Answers
How do you know congestion is caused by a heavy stream?
We have years of experience optimizing networks with this technology. It is safe to say that on any congested network, roughly 5 percent of users are responsible for 80 percent of Internet traffic. This seems to be a law of Internet usage.2
Can certain applications be given priority?
NetEqualizer can give priority by IP address (for example, to video streams), and in its default mode it naturally gives priority to VoIP, thus addressing a common need for commercial operators.
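As a rough sketch of what this looks like in practice (the host list and thresholds are hypothetical, not NetEqualizer configuration): designated hosts are exempted from the latency penalty, and thin, steady streams such as VoIP never use enough bandwidth to be flagged in the first place.

```python
PRIORITY_HOSTS = {"10.10.1.50", "10.10.2.60"}  # e.g. video conferencing servers

def is_exempt(flow_key, kbps):
    """Never penalize designated priority hosts; small steady streams such as
    a VoIP call also pass untouched because they stay far below any threshold."""
    src_ip, dst_ip, *_ = flow_key
    if src_ip in PRIORITY_HOSTS or dst_ip in PRIORITY_HOSTS:
        return True
    return kbps < 128  # a typical VoIP stream uses well under 128 kbps
```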
----------
2 Randy Barrett, "Putting the Squeeze on Internet Hogs: How Operators Deal with Their Greediest Users," Multichannel News, 7 Mar. 2007. Retrieved 1 Aug. 2007 from http://www.multichannel.com/article/CA6439454.html