Net Neutrality Bill Won’t End Conflicts Between Users and Providers


This week, Representatives Edward Markey, a Massachusetts Democrat, and Anna Eshoo, a California Democrat, introduced the Internet Freedom Preservation Act, aimed at protecting the rights of Internet users and, ultimately, net neutrality. Yet, before net neutrality advocates unequivocally praise the bill, it should be noted that it protects the rights of Internet service providers as well. For example, as long as ISPs are candid with their customers in regard to their network optimization practices, the bill does allow for “reasonable network management,” stating:

“Nothing in this section shall be construed to prohibit an Internet access provider from engaging in reasonable network management consistent with the policies and duties of nondiscrimination and openness set forth in this Act. For purposes of subsections (b)(1) and (b)(5), a network management practice is a reasonable practice only if it furthers a critically important interest, is narrowly tailored to further that interest, and is the means of furthering that interest that is the least restrictive, least discriminatory, and least constricting of consumer choice available. In determining whether a network management practice is reasonable, the Commission shall consider, among other factors, the particular network architecture or technology limitations of the provider.”

While this stipulation is extremely important in the protection it provides Internet service providers, it is likely to come into conflict with some Internet users’ ideas of net neutrality. For example, the bill also states that it is ISPs’ “duty to not block, interfere with, discriminate against, impair or degrade the ability of any person to use an Internet access service to access, use, send, post, receive or offer any lawful content, application or service through the Internet.” However, even users of the NetEqualizer, one of the more hands-off approaches to network management, have no choice but to target the behavior of certain heavy customers. One person’s penchant for downloading music — legally or not — can significantly impact the quality of service for everyone else. And, increasing bandwidth just to meet the needs of a few users isn’t reasonable either.

This would seem to be a perfect case of reasonable network management, which would be allowed under the proposed bill. Yet many net neutrality advocates tend to quickly dismiss any management as an infringement upon users’ rights. The protection of users’ rights will likely get most of the attention in discussions about these types of bills, but there should be just as much emphasis on the right of the provider to reasonably manage its network, and on what this may mean for the idea of unadulterated net neutrality.

The fact that this bill includes the right to reasonably manage one’s network indicates that some form of management is typically necessary for a network to run at its full potential. The key is finding some middle ground.

Related article, September 22, 2009: FCC rules in favor of Net Neutrality. The commentary on this post is great and worth the read.

Top Tips To Quantify The Cost Of WAN Optimization


Editor’s Note: As we mentioned in a recent article, there’s often some confusion when it comes to how WAN optimization fits into the overall network optimization industry — especially when compared to Internet optimization. Although similar, the two techniques require different approaches to optimization. What follows are some simple questions to ask your vendor before you purchase a WAN optimization appliance. For the record, the NetEqualizer is primarily used for Internet optimization.

When presenting a WAN optimization ROI argument, your vendor rep will clearly make a compelling case for savings.  The ROI case will be made by amortizing the cost of equipment against your contracted rate from your provider. You can and should trust these basic raw numbers. However, there is more to evaluating a WAN optimization (packet shaping) appliance than comparing equipment cost against bandwidth savings. Here are a few things to keep in mind:

  1. The amortization schedule should also make reasonable assumptions about future costs for T1, DS3, and OC3 links. Contracted rates have been dropping in many metro areas, and it is reasonable to assume that bandwidth costs will be perhaps 50 percent lower two to three years out (see the sketch after this list for a simple way to run these numbers).
  2. If you do increase bandwidth, the licensing costs for the traffic shaping equipment can increase substantially. You may also find yourself in a situation where you need to do a forklift upgrade as you outrun your current hardware.
  3. Recurring licensing costs are often mandatory to keep your equipment current. Without upgrading your license, your deep packet inspection (layer 7 shaping filters) will become obsolete.
  4. Ongoing labor costs to tune and re-tune your WAN optimization appliance can often cost thousands of dollars per week.
  5. The good news is that optimization companies will normally allow you to try an appliance before you buy. Make sure you take the time to manage the equipment with your own internal techs or IT consultant to get an idea of how it will fit into your network.  The honeymoon with new equipment (supported by a well trained pre-sales team) can be short lived. After the free pre-sale support has expired, you will be on your own.
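
To make the amortization point in item 1 concrete, here is a minimal back-of-the-envelope sketch in Python. All of the dollar figures, the license and labor estimates, and the 20 percent annual price decline are invented placeholders; substitute your own vendor quote and carrier contract before drawing any conclusions.

```python
# Hypothetical numbers for illustration only -- plug in your own quotes.
appliance_cost = 15000.0        # one-time purchase price of the WAN optimizer
annual_license = 3000.0         # recurring license to keep layer-7 filters current
annual_tuning_labor = 5000.0    # estimated admin time spent tuning and re-tuning
bandwidth_saved_mbps = 10.0     # bandwidth the vendor claims you will not need to buy
cost_per_mbps_year1 = 1200.0    # your current contracted rate, per Mbps per year
annual_price_decline = 0.20     # assume bandwidth gets ~20% cheaper each year

total_optimizer_cost = appliance_cost
total_bandwidth_cost = 0.0
cost_per_mbps = cost_per_mbps_year1

for year in range(1, 4):  # look three years out
    total_optimizer_cost += annual_license + annual_tuning_labor
    total_bandwidth_cost += bandwidth_saved_mbps * cost_per_mbps
    print(f"Year {year}: optimizer total ${total_optimizer_cost:,.0f} "
          f"vs. bandwidth total ${total_bandwidth_cost:,.0f}")
    cost_per_mbps *= (1 - annual_price_decline)  # bandwidth keeps getting cheaper
```

The point of the exercise is simply that a savings case built on today’s bandwidth prices can look very different once falling prices and recurring license and labor costs are folded in.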

There are certainly times when WAN optimization makes sense, yet in many cases, what appears to be a no-brainer decision at first will begin to be called into question as costs mount down the line. Hopefully these five contributing factors will paint a clearer picture of what to expect.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

The True Price of Bandwidth Monitoring


By Art Reisman


For most IT administrators, bandwidth monitoring of some sort is an essential part of keeping track of, as well as justifying, network expenses. Without visibility into network load, an administrator’s job would degrade into a quagmire of random guesswork. Or would it?

The traditional way of looking at monitoring your Internet connection has two parts: the fixed cost of the monitoring tool used to identify traffic, and the labor associated with devising a remedy. Ironically, we assert that labor costs rise right along with the complexity of the monitoring tool. Obviously, the more detailed the reporting tool, the more expensive its initial price tag. The kicker comes with part two: the more expensive the tool, the more detail it provides, and the more time an administrator is likely to spend adjusting and mucking around, looking for optimal performance.

But is it fair to assume that higher labor costs come with more advanced monitoring and information?

Well, obviously it would not make sense to pay more for an advanced tool if there was no intention of doing anything with the detailed information it provides. Why have the reporting tool in the first place if the only output was to stare at reports and do nothing? Typically, the more information an admin has about a network, the more inclined he might be to spend time making adjustments.

On a similar note, an oversight often made with labor costs is the belief that once the work of adjusting the network is done, the associated adjustments can remain statically in place. In reality, network traffic changes constantly, and thus the tuning so meticulously performed on Monday may be obsolete by Friday.

Does this mean that the overall productivity of using a bandwidth tool is a loss? Not at all. Bandwidth monitoring and network mucking can certainly result in a cost-effective solution. But, where is the tipping point? When does a monitoring solution create more costs than it saves?

A review of recent history reveals that technologies with a path similar to bandwidth monitoring have become commodities and shed the overhead of most human intervention. For example, computer operators disappeared off the face of the earth with the arrival of cheaper computing in the late 1980s. The function of a computer operator did not disappear completely; it just got automated and rolled into the computer itself. The point is, anytime the cost of a resource is falling, the attention and costs used to manage it should be revisited.

An effective compromise we see with many of our customers is stepping down from expensive, complex reporting tools to a simpler approach. Instead of trying to determine every type of traffic on a network by type, time of day, etc., an admin can spot trouble by simply checking overall usage numbers once a week or so. With a basic bandwidth control solution in place (such as a NetEqualizer), the acute problems of a network locking up will go away, leaving what we would call only “chronic” problems, which may need to be addressed eventually, but do not require immediate action.

For example, with a simple reporting tool you can plot network usage by user.  Such a report, although limited in detail, will often reveal a very distinct bell curve of usage behavior. Most users will be near the mean, and then there are perhaps one or two percent of users that will be well above the mean. You don’t need a fancy tool to see what they are doing; abuse becomes obvious just looking at the usage (a simple report).
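
For readers who want to try this themselves, here is a minimal sketch of that kind of simple report. The usage figures and user names are invented; the point is only that a plain sorted listing makes the outliers jump out with no deep inspection at all.

```python
import statistics

# Hypothetical weekly totals in gigabytes, keyed by user -- substitute your own data.
usage_gb = {
    "alice": 2.1, "bob": 1.8, "carol": 2.5, "dave": 1.9,
    "erin": 2.2, "frank": 2.0, "heavy_user_1": 38.0, "heavy_user_2": 25.0,
}

typical = statistics.median(usage_gb.values())

# A plain sorted listing makes the one or two percent of heavy users obvious.
for user, gb in sorted(usage_gb.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <-- well above typical usage" if gb > 3 * typical else ""
    print(f"{user:14s} {gb:6.1f} GB{flag}")
```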

However, there is also the personal control factor, which often does not follow clear lines of ROI (return on investment).

What we have experienced when proposing a more hands-off model to network management is that a customer’s comfort depends on their bias for needing to know, which is an unquantifiable personal preference. Even in a world where bandwidth is free, it is still human nature to want to know specifically what bandwidth is being used for, with detailed information regarding the type of traffic. There is nothing wrong with this desire, but we wonder how strong it might be if the savings obtained from using simpler monitoring tools were converted into a trip to Hawaii.

In our next article, we’ll put some real-world numbers to the test for actual breakdowns, so stay tuned. In the meantime, here are some other articles on bandwidth monitoring that we recommend. And, don’t forget to take our poll.

List of monitoring tools compiled by Stanford

Planetmy
Linux Tips
How to set up a monitor for free

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

Google Questions Popular Bandwidth Shaping Myth


At this week’s Canadian Radio-Television and Telecommunications Commission Internet traffic hearing, Google’s Canada Policy Counsel, Jacob Glick, raised a point that we’ve been arguing for the last few years. Glick said:

“We urge you to reject as false the choice between debilitating network congestion and application-based discrimination….This is a false dichotomy. The evidence is, and experience in Canada and in the U.S. already shows, that carriers can manage their networks, reduce congestion and protect the open Internet, all at the same time.”

While we agree with Glick to a certain extent, we differ on the alternative proposed by hearing participants — simply increase bandwidth. This is not to say that increasing bandwidth isn’t the appropriate solution in certain circumstances, but questioning the validity of a dichotomy by offering an equally narrow third alternative doesn’t exactly expand the industry’s options, especially since increasing bandwidth isn’t always a viable solution for some ISPs.

The downsides of application-based shaping are one of the main reasons behind NetEqualizer’s reliance on behavior-based shaping. Therefore, while Glick is right that the above-mentioned dichotomy doesn’t explore all of the available options, it’s important to realize that the goals being promoted at the hearing are not solely achieved through increased bandwidth.

For more on how the NetEqualizer fits into the ongoing debate, see our past article, NetEqualizer Offers Net Neutrality, User Privacy Compromise.

Obama’s Revival of Net Neutrality Revisits An Issue Hardly Forgotten


Last Friday, President Obama reinvigorated (for many people, at least) the debate over net neutrality during a speech from the White House on cybersecurity. The president made it clear that users’ privacy and net neutrality would not be threatened under the guise of cybersecurity measures. President Obama stated:

“Let me also be clear about what we will not do. Our pursuit of cyber-security will not — I repeat, will not include — monitoring private sector networks or Internet traffic. We will preserve and protect the personal privacy and civil liberties that we cherish as Americans. Indeed, I remain firmly committed to net neutrality so we can keep the Internet as it should be — open and free.”

While this is certainly an important issue on the security front, for many ISPs and network administrators, it didn’t take the president’s comments to put user privacy or net neutrality back in the spotlight. In many cases, ISPs and network administrators constantly must walk the fine line between net neutrality, user privacy, and ultimately the well-being of their own networks, something that can be compromised on a number of fronts (security, bandwidth, economics, etc.).

Therefore, despite the president’s ongoing commitment to net neutrality, the issue will continue to be debated and remain at the forefront of the minds of ISPs, administrators, and many users. Over the past few years, we at NetEqualizer have been working to provide a compromise for these interested parties, ensuring network quality and neutrality while protecting the privacy of users. It will be interesting to see how this debate plays out, and what it will mean for policy, as the philosophy of network neutrality continues to be challenged — both by individuals and network demands.

Further Reading

Top Six Fear-Driven Network Equipment Purchases


Fear is one of our most primal survival instincts. As such, salespeople around the world have made a business out of selling their products on fear, making them out to be necessities for survival. Below, we highlight some of the current and historical fear-based triggers used to push oftentimes unneeded items in the networking industry.

1) CALEA compliance — A little over a year ago, we were besieged by frantic inquiries from many of our ISP customers about the need to do something for the new CALEA laws.  Basically, these are laws that require data carriers to provide access to law enforcement agencies upon receipt of a judge’s order.

We spent the next few months researching what the intent of the CALEA laws was, and what that meant to our customers. Yes, CALEA is a real law with teeth, but it was intended to help law enforcement agencies track criminals using data networks, not force ISPs into bankruptcy.

There are some low-cost options available to operators wanting to conform, so before you break the bank, do some research. But also be aware that somewhere along the line CALEA became the next Y2K-style fear-driven windfall for unscrupulous networking sales reps. Familiarize yourself with what you need and then find a product that works for you. While we were more than happy to help users of our products comply, we felt that an informed customer was more important than one who was simply panicked and afraid. More info on the NetEqualizer approach to CALEA compliance.

2) Secure credit card transmission over the Internet — In short, credit information is most exposed once it reaches a corporate database. A hacker or an employee with bad intentions is many times more likely to lift credit card information from a fixed database than to intercept it in transit over the Internet. Therefore, the paranoia that abounds over submitting a credit card to a Web site for fear of transmission piracy is way out of proportion to the actual risk.

Consumers will gladly hand their credit card to a random stranger behind the cash register at a brick-and-mortar establishment, but for some reason, submitting a credit card to a Web site creates an unacceptable risk for many. This fear has given rise to a cottage industry around secure Internet transmission. The bottom line is that stealing a credit card in transit over the Internet would take extreme patience and inside help from a carrier. To top it off, the credit card issuers have mastered the art of shutting off your card at the first sign of any anomaly (at great inconvenience to their customers in many cases, but worth it in a true emergency). However, despite the relative lack of risk, there is a significant amount of money and technology spent on securing merchant sites.

Related article: “Do we really need SSL?”

3) Y2K — This is an old one, and yes, there were some critical systems out there that might have suffered. My firsthand experience from that time was just a wake-up call. My employer had me doing Y2K upgrades to our product line, and the scare pushed our sales to their biggest year ever. However, within three years revenue had dropped 65 percent. Perhaps we should have been doing real product improvements?

4) Virus protection for your laptop — Yes, viruses are real and they attack all the time, but I simply save off my critical files daily and reload my Windows box when I get a virus. I prefer this method over being a slave to a Norton pop-up box. You can also convert to a Mac or Linux desktop, which seem to carry some form of natural immunity. New York Times writer Paul Boutin agrees in this recent article.

5) Lack of technology for our schools — Yes, there is some level of computer literacy required in the workforce today; however, with the billions (trillions?) spent by schools, you’d think there might be some increase in standardized test scores. I’d much rather see the money spent on increasing teacher salaries and reducing class sizes, even if it meant learning to calculate on an abacus. Training the mind to think and reason critically is a skill for life that transcends technology and requires encouragement and challenge from teachers.

6) Uninterruptible Power Supply (UPS) — I almost gagged when I read the blurb below from a UPS sales VP in a trade rag. Originally, I was only considering including UPS power supplies on my list, as I had no evidence that they were being misrepresented. And, yes, in many situations a good UPS will save your computer and computer center from crashing, so please understand they are important pieces of equipment for a data center. But the context below confirmed my suspicion. The lead touts ways to speed up network performance, essentially implying that if your network is slow, you need UPS servers to correct it!

Are their desktops locking up every time someone runs the microwave oven? “If VARs aren’t selling UPSs [uninterruptible power supplies] with each new server or desktop, they are doing their customers an injustice, and they may be leaving money on the table,” says [name and company omitted].

This quote and the full article are written to imply that your desktop computer and network may run “slow” because of a lack of power. The fact is, your computer will crash hard if power drops below a fixed tolerance. It is not an electric motor that winds down slowly; it is either on or off. A UPS prevents crashes due to lack of power, but it will not make your network faster or more efficient.

The point of this article isn’t to completely discount the six issues discussed above, but rather to provide some context. In many cases, fear is based on a lack of knowledge and understanding. Therefore, the problems mentioned here may not necessarily be best solved with one tech product or another, but instead could be remedied by a little bit of research. As a consumer, doing your homework goes a long way.

Speeding up Your T1, DS3, or Cable Internet Connection with an Optimizing Appliance


By Art Reisman, CTO, APconnections (www.netequalizer.com)

Whether you are a home user or a large multinational corporation, you likely want to get the most out of your Internet connection. In previous articles, we have  briefly covered using Equalizing (Fairness)  as a tool to speed up your connection without purchasing additional bandwidth. In the following sections, we’ll break down  exactly how this is accomplished in layman’s terms.

First, what is an optimizing appliance?

An optimizing appliance is a piece of networking equipment that has one Ethernet input and one Ethernet output. It is normally located between the router that terminates your Internet connection and the users on your network. From this location, all Internet traffic must pass through the device. When activated, the optimizing appliance can rearrange traffic loads for optimal service, thus preventing the need for costly new bandwidth upgrades.

Next, we’ll summarize equalizing and behavior-based shaping.

Overall, equalizing is a simple concept. It is the art form of looking at the usage patterns on the network and, when things get congested, robbing from the rich to give to the poor. In other words, heavy users are limited in the amount of bandwidth to which they have access in order to ensure that ALL users on the network can utilize the network effectively. Rather than writing hundreds of rules to specify allocations to specific traffic as in traditional application shaping, you can simply assume that large downloads are bad, short quick traffic is good, and be done with it.

How is Fairness implemented?

If you have multiple users sharing your Internet trunk and somebody mentions “fairness,” it probably conjures up the image of each user waiting in line for their turn. And while a device that enforces fairness in this way would certainly be better than doing nothing, Equalizing goes a few steps further than this.

We don’t just divide the bandwidth equally like a “brain dead” controller. Equalizing is a system of dynamic priorities that reward smaller users at the expense of heavy users. It is very, very dynamic, and there is no pre-set limit on any user. In fact, the NetEqualizer does not keep track of users at all. Instead, we monitor user streams. So, a user may be getting one stream (an FTP download) slowed down while at the same time having another stream untouched (e-mail).

Another key element in behavior-based shaping is connections. Equalizing takes care of instances of congestion caused by single-source bandwidth hogs. However, the other main cause of Internet gridlock (as well as bringing down routers and access points) is p2p and its propensity to open hundreds or perhaps thousands of connections to different sources on the Internet. Over the years, the NetEqualizer engineers have developed very specific algorithms to spot connection abuse and avert its side effects.
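
To make both ideas concrete, here is a highly simplified sketch of the two checks described above, written in Python purely for illustration. It is not the NetEqualizer’s actual algorithm or code; the stream table, link capacity, and thresholds are invented placeholders.

```python
from collections import Counter

# Highly simplified illustration of "equalizing" -- not the actual NetEqualizer code.
# Each stream is (source_ip, destination_ip, current_rate_kbps); values are invented.
streams = [
    ("10.0.0.5",  "198.51.100.7", 4500),  # long FTP download
    ("10.0.0.9",  "203.0.113.2",    40),  # e-mail check
    ("10.0.0.12", "192.0.2.10",     90),  # web page load
    ("10.0.0.7",  "198.51.100.20",  60),  # p2p peer 1
    ("10.0.0.7",  "198.51.100.21",  55),  # p2p peer 2
    ("10.0.0.7",  "198.51.100.22",  70),  # p2p peer 3
]

link_capacity_kbps = 5000
congestion_ratio = 0.85           # start shaping when the pipe is about 85% full
connection_limit_per_host = 2     # crude stand-in for connection-abuse detection

total = sum(rate for _, _, rate in streams)

if total > congestion_ratio * link_capacity_kbps:
    # Penalize only the largest streams; short, interactive traffic is left alone.
    for src, dst, rate in sorted(streams, key=lambda s: s[2], reverse=True)[:1]:
        print(f"throttling {src} -> {dst} ({rate} kbps) until congestion clears")

# Separately, flag hosts holding an unusual number of simultaneous connections.
for host, count in Counter(src for src, _, _ in streams).items():
    if count > connection_limit_per_host:
        print(f"{host} has {count} open connections -- possible p2p connection abuse")
```

Notice that no rule in the sketch names an application; the shaping decision comes entirely from how a stream or host behaves when the link is full.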

What is the result?

The end result is that applications such as Web surfing, IM, short downloads, and voice all naturally receive higher priority, while large downloads and p2p receive lower priority. Also, situations where we cut back large streams are generally short in duration. As an added advantage, this behavior-based shaping does not need to be updated constantly as applications change.

Trusting a heuristic solution such as NetEqualizer is not always an easy step. Oftentimes, customers are concerned with accidentally throttling important traffic that might not fit the NetEqualizer model, such as video. Although there are exceptions, it is rare for the network operator not to know about these potential issues in advance, and there are generally relatively few to consider. In fact, the only exception that we run into is video, and the NetEqualizer has a low level routine that easily allows you to give overriding priority to a specific server on your network, hence solving the problem. The NetEqualizer also has a special feature whereby you can exempt and give priority to any IP address specifically in the event that a large stream such as video must be given priority.

Through the implementation of Equalizing technology, network administrators are able to get the most out of their network. Users of the NetEqualizer are often surprised to find that their network problems were not a result of a lack of bandwidth, but rather a lack of bandwidth control.

See who else is using this technology.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

The Pros and Cons of Bonded DSL and Load Balancing Multiple WAN links


Editor’s Note: We often get asked if our NetEqualizer bandwidth shapers can do load balancing. The answer is yes, maybe, if we wanted to integrate with one of the freely available public domain load balancing devices. It seems, though, that doing it correctly without issues is extremely expensive.

In the following excerpt, we have reprinted some thoughts and experience from a user who has a wide breadth of knowledge in this area.  He gives detailed examples of the trade-offs involved in bonding multiple WAN connections.

When bonding is done by your provider, it is essentially seamless and requires no extra effort (or risks to the customer). It is normally done using bonded T1 links, but also can come in the form of a bonded DSL. The technology discussed below is applicable to users who are bonding two or more lines together without the knowledge (or help) of their upstream provider.

As for Linux freeware load balancing devices, they are NOT any sort of true bonding at all. If you have 3 x 1.5 Mbit lines, then you do NOT have a 4.5 Mbit line with these products. If you really want a 4.5 Mbit bonded line, then I’m not aware of any way to do it without having BGP or some method of coordinating with someone upstream on the other side of the link. However, what a multi-WAN router will do is try to spread sessions out equally over the three lines, so that if your users are collectively doing 3 Mbit of downloads, that should be about 1 Mbit on each line. For the most part, it does a pretty good job.

It does this by using fairly dumb round-robin NATing.  So, it’s much like a regular NAT router – everyone behind it is a private 192.168 number (which is the 1st downside) – and it will NAT the privates to one of the 3 Public IP’s on the WAN ports. The side effect of that is broken sessions, where some websites (particularly SSL) will complain that your IP address has changed, for example, while you’re inside the shopping cart or whatever.

To counteract that problem, they have ‘session persistence’ which tries to track each ‘Session Pair’ and keep the same WAN IP in effect for that ‘Session Pair’. That means that the 1st time one of the private IP:port accesses some particular public ip:port, the router will remember that and use that same WAN port for that same public/private pair. The result of this is that ‘most’ of the time, we don’t have these broken sessions, but the downside of this is that the fairness of the load balancing is offset.

For example, if you had 2 lines connected:

  • User1 comes to speakeasy and does a speedtest – the router says ‘speakeasy is out WAN1 forevermore’.
  • User2 comes and looks up google, and the router says ‘google is out WAN2 forevermore’
  • User3 goes to Download.com and the router decides ‘Download.com is on WAN1’.
  • User4 goes to smalltextsite.com (WAN2)
  • User5 goes to YouTube (WAN1)

And so on. With session persistence turned on, User300 will get SpeakEasy, Download.com and YouTube across WAN1 because that’s what it originally learned to be persistent about.

So, the tradeoff is if you don’t use the session persistence, then you’ll have angry customers because things break. If you do use persistence, then there may be an unbalancing.

Also, there are still some broken sites, even with persistence on. For example, some online stores have the customer shopping at www.StoreSite.com, and when they check out, it transfers their cart contents to www.PaymentProcessor.com, which may flag an IP security violation. Any time the router sees a different IP on the public side, it figures it can use a new WAN port and doesn’t know it’s the same user and application. There are also a few games where kids load a ‘launcher’ program and select a server to connect to, but when they actually click ‘connect,’ the server complains because the WAN address has changed.

In all honesty, it works quite well and there are few problems. We also can make our own exception list, so in my shopping cart example, we can manually add ‘storesite.com’ and ‘paymentprocessor.com’ to the same WAN address, and that will ensure that it always uses the same WAN for those sites. This requires that users complain first before you would even know that there is a problem, and it also requires some tricks to figure out what’s going on. However, the exception list can ultimately handle these problems if you make enough exceptions.
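
For readers who like to see the mechanics, here is a toy sketch of round-robin WAN selection with session persistence as described above. It is not taken from any particular product; the WAN names, addresses, and site names are placeholders.

```python
import itertools

# Toy model of round-robin NAT with 'session persistence'. All values are made up.
wan_links = ["wan1", "wan2"]
next_wan = itertools.cycle(wan_links)
persistence = {}  # (private_ip, public_ip) -> WAN link chosen the first time

def pick_wan(private_ip, public_ip):
    """Return the WAN link for this private/public pair, reusing any earlier choice."""
    key = (private_ip, public_ip)
    if key not in persistence:
        persistence[key] = next(next_wan)   # plain round robin on first sight
    return persistence[key]

# User1 hits speakeasy, User2 hits google, then User1 comes back to speakeasy.
print(pick_wan("192.168.1.10", "speakeasy.example"))  # wan1
print(pick_wan("192.168.1.11", "google.example"))     # wan2
print(pick_wan("192.168.1.10", "speakeasy.example"))  # wan1 again -- persistent
```

Note that the persistence table is keyed on the private/public pair, which is exactly why a checkout that jumps from www.StoreSite.com to www.PaymentProcessor.com can still land on a different WAN address unless an exception is added.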

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency-sensitive applications, such as VoIP and email. Click here to request a full price list.

Additional articles

How to inexpensively increase internet bandwidth by bonding cable and dsl.

From BusinessPhoneNews.com, a great guide to assessing bandwidth needs: the Bandwidth Management Buyers Guide.

5 Tips to speed up your business T1/DS3 to the Internet


By Art Reisman


In tight times expanding your corporate Internet pipe is a hard pill to swallow, especially when your instincts tell you the core business should be able to live within the current allotment.

Here are some tips and hard facts that you may want to consider to help stretch your business Internet pipe.

1) Layer 7 application shaping.

The marketplace is crawling with solutions that allow you to set bandwidth policies based on the type of application. Application shaping allows an administrator to restrict lower-priority activities while giving mission-critical apps favorable consideration. This methodology is very seductive, but from our experience it can send your IT department into a nanny state, constantly trying to figure out what to allow and what to restrict. Also, the cost of an Internet link expansion is dropping, while many of the application shaping solutions start around $10,000 and go up from there.

The upside is that Layer 7 application shaping does work well when it comes to internal WAN links that do not carry Internet traffic. An administrator can get a handle on the fixed traffic running privately within their network quite easily.

2) Using your router to restrict specific IP and ports

If your core business utilization can be isolated to a single server or group of servers, a few simple rules to allocate a large chunk of the pipe to these resources (by IP address) may be a good fit.

In an environment where business priorities change and are not isolated to a fixed server or two, this solution can backfire. But if your resource allocation requirements are stable, doing something on your router to favor one particular subnet over another can be useful in stretching your bandwidth.

One thing to be careful about is that it often takes a skilled technician to set up specialty rules on your router. You can easily rack up large consulting bills if your setup is not static.

3) Behavior based shaping

Editor’s note: We are the makers of the NetEqualizer, which specializes in this technology; however, our intent in this article is to be objective.

Behavior-based shaping works well and affordably in most situations. Most business-related applications will get priority, as they tend to use small amounts of data or web pages. Occasionally there are exceptions, such as video, that need to override the basic behavior-based shaping. Video can easily be excluded from the generic policies. Implementing a few exclusions is far less cumbersome than trying to classify all traffic all the time, as with application shaping.

4) Add more bandwidth and bypass your local loop carrier

T1s and T3s from your local telco may not be the only options for bandwidth in your area. Many of our customers get creative by purchasing bandwidth directly from a Tier 1 provider (such as Level 3) and then using a microwave backhaul to bring the bandwidth to their location. The telcos make a killing with what they call a loop charge (before they put any bandwidth on your line). With microwave backhaul technology you can bypass this charge for significant savings.

5) Clean up the laptops and computers on your network. Many robots and viruses run in the background on your Windows machines and can generate a cacophony of background traffic. A business-wide license for good virus protection may be worth the investment. Stay away from the freeware versions of virus protection; they tend to miss quite a bit.

What is the FCC’s position on Net Neutrality?


More snippets on the Net Neutrality debate.

In an article from Wired today, there are some interesting comments about the Fed’s position on Net Neutrality:

the FCC’s loose and little-enforced four principles (.pdf) should be the rules attached to the so-called Broadband Technology Opportunities Program. Those guidelines date to 2005 and state that consumers are entitled to surf where they like, have a choice of ISPs, and use whatever devices and applications they like.

Then the article goes on to detail a few other requirements for good measure:

For enforcement and research needs, the carriers have to be forced to turn over detailed information about their networks, such as where they interconnect, what traffic shaping techniques are used and how often they fail, according to telecom watcher Kevin Werbach and internet researcher kc claffy.

Personally, I was kind of miffed to learn the FCC has an official guideline, and I am even more miffed that it is seldom enforced.
Next up, we will address the debate on whether using deep packet inspection in a spam filter is the same as opening private mail.

New Speed Test Tools from M-Lab Expose ISP Bandwidth Throttling Practices


In a recent article, we wrote about “The White Lies ISPs Tell About Their Bandwidth Speeds.” We even hinted at how they (your ISP) might be inclined to give preferential treatment to normal speed test sites. Well, now there is a speed test site from M-Lab that goes beyond simple speed tests. M-Lab gives the consumer sophisticated results and exposes any tricks your ISP might be up to.

Features provided include:

  • Network Diagnostic Tool – Test your connection speed and receive sophisticated diagnosis of problems limiting speed.
  • Glasnost – Test whether BitTorrent is being blocked or throttled.
  • Network Path and Application Diagnosis – Diagnose common problems that impact last-mile broadband networks.
  • DiffProbe (coming soon) – Determine whether an ISP is giving some traffic a lower priority than other traffic.
  • NANO (coming soon) – Determine whether an ISP is degrading the performance of a certain subset of users, applications, or destinations.

Click here to learn more about M-Lab.

Related article on how to determine your true video speed over the Internet.

Seventeen Unique Ideas to Speed up Your Internet


By Eli Riles
Eli Riles is a retired insurance agent from New York. He is a self-taught expert in network infrastructure. He spends half the year traveling and visiting remote corners of the earth. The other half of the year you’ll find him in his computer labs testing and tinkering with the latest network technology.  For questions or comments please contact him at
admin@netequalizer.com

Updated 11/30/2015 – We are now up to seventeen (17) tips!
————————————————————————————————————————————————

Although there is no way to actually make your true Internet speed faster, here are some tips for home and corporate users that can make better use of the bandwidth you have, thus providing the illusion of a faster pipe.

1) Use a VPN tunnel to get to blocked content.

One of the little-known secrets your provider does not want you to know is that they will slow video or software updates if the content is not hosted on their network. Here is an article with details on how you can get around this restriction.


2) Time of day does make a difference

During peak Internet usage times, 5 PM to midnight local time, your upstream provider is also most likely congested. If you have a bandwidth-intensive task to do, such as downloading an update for your iPad, you can likely get a much faster download by doing it earlier in the day. I have even noticed that more obscure YouTube videos have problems running at peak traffic times. My upstream provider does a good job with Netflix and popular videos during peak hours (these can be found in their cache), but if I request something that is not likely stored in a local copy on their servers, the video will lag during peak times. (See our article on caching.)

3) Turn off JavaScript

There are some trade-offs with doing this, but it does make a big difference in how fast pages will load. Here is an article where we cover all the relevant details.

Note: Prior to 2010, setting your browser to text-only mode was a viable option, but today most sites are full of graphics and virtually unreadable in text-only mode.

  • If you are stuck with a dial-up or slower broadband connection, your browser likely has an option to load text only. If you are a power user who is gaming or watching YouTube, text-only mode will obviously have no effect on those activities, but it will speed up general browsing and e-mail. Most web pages are loaded with graphics which take up the bulk of the load time, so switching to text only will eliminate the graphics and save you quite a bit of time.

4) Install a bandwidth controller to make sure no single connection dominates your bandwidth

Everything you do on the Internet creates a connection from inside your network to the Internet, and all of these connections compete for the limited amount of bandwidth your ISP provides.

Your router (cable modem) connection to the Internet provides first-come, first-served service to all the applications trying to access the Internet. To make matters worse, the heavier users, the ones with the larger persistent downloads, tend to get more than their fair share of router cycles. Large downloads are like the schoolyard bully: they tend to butt in line and not play fair.

Read the full article.

5) Turn off the other computers in the house

Many times, even during the day when the kids are off to school, I’ll be using my Skype phone and the connection will break up.  I have no idea what exactly the kids’ computers are doing, but if I log them off the Internet, things get better with the Skype call every time. In a sense, it’s a competition for limited bandwidth resources, so, decreasing the competition will usually boost your computer’s performance.

6) Kill background tasks on your computer

You should also try to turn off any BitTorrent or background tasks on your computer if you are having trouble while trying to watch a video or make a VoIP call.  Use your task bar to see what applications are running and kill the ones you don’t want.  Although this is a bit drastic, you may just find that it makes a difference. You’d be surprised what’s running on your computer without you even knowing it (or wanting it).

For you gamers out there, this also means turning off the audio component on your games if you do not need it for collaboration.

7) Test your Internet speed

One of the most common issues with slow internet service is that your provider is not giving you the speed/bandwidth that they have advertised.  Here is a link to our article on testing your Internet speed, which is a good place to start.

Note: Comcast has adopted a 15-minute penalty box in some markets. Your initial speed tests will likely show no degradation, but if you persist at watching high-definition video for more than 15 minutes, you may get put into their penalty box. This practice helps preserve a limited resource in some crowded markets. We note it here because we have heard reports of people happily watching YouTube videos only to have service degrade.

Related Article: The real meaning of Comcast generosity.

8) Make sure you are not accidentally connected to a weak access point signal

There are several ways an access point can slow down your connection a bit. If the signal between you and the access point is weak, the access point will automatically downgrade its service to a slower speed. This happens to me all the time. My access point goes on the blink (needs to be rebooted) and my computer connects to the neighbor’s access point with a weaker signal. The speed of my connection on the weaker-signaled AP is quite variable. So, if you are on wireless in a densely populated area, check which access point you are connected to.

9) Caching — How  does it work and is it a good idea?

Offered by various vendors and built into Internet Explorer, caching can be very effective in many situations. Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing a WAN/Internet link unnecessarily.

Many web servers keep a time stamp of their last update to data, and browsers such as the popular Internet Explorer will check the time stamp on the host server. If the page time stamp has not changed since the last time you accessed the page, IE will grab it and present a local stored copy of the Web page (from the last time you accessed the page), saving the time it would take to load the page from across the Internet.
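
For the curious, this time-stamp check is exposed to any HTTP client as a conditional request. Here is a minimal sketch using Python’s standard library; the URL and the stored timestamp are placeholders, and real browsers layer much more logic on top of this.

```python
import urllib.request
import urllib.error
from email.utils import formatdate

# Hypothetical example of the time-stamp check a browser performs before re-downloading.
url = "http://example.com/index.html"         # placeholder URL
last_fetch_timestamp = 1700000000              # Unix time of our cached copy (made up)

req = urllib.request.Request(url)
req.add_header("If-Modified-Since", formatdate(last_fetch_timestamp, usegmt=True))

try:
    with urllib.request.urlopen(req) as resp:
        body = resp.read()                     # 200 OK: page changed, refresh the cache
        print("Page changed, re-downloaded", len(body), "bytes")
except urllib.error.HTTPError as err:
    if err.code == 304:
        print("Not modified, serve the local cached copy")  # nothing crosses the link
    else:
        raise
```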

So what is the downside of caching?

There are two main issues that can arise with caching:

a) Keeping the cache current. If you access a cached page that is not current, then you are at risk of getting old and incorrect information. Some things you may never want to be cached, for example the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in cache will not be synchronized with changes. I personally have been misled by old data from my cache on several occasions.

b) Volume. There are some 100 million Web sites out on the Internet. Each site contains upwards of several megabytes of public information. The amount of data is staggering and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.

Recommended: Related article on how ISPs use caching to speed up Netflix and YouTube videos.

For information on turning off caching, click here.


10) Kill your virus protection software

The recent outbreak of the H1N1 virus reminded me of how sometimes the symptoms and carnage from a vaccine are worse than the disease it purports to cure. Well, the same holds true for your virus protection software. Yes, viruses are real and can take down your computer, but so can a disk crash, which is also inevitable. You must back up your critical data regularly. However, that virus software seems to dominate more resources on my desktop than anything else. I no longer use anything and could not be happier. But be sure to keep a reliable backup (you will need to rebuild your computer now and then, which I find a better alternative than running a slow computer all of the time).

11) Set a TOS bit to provide priority

A TOS bit  is a special bit within an IP packet that directs routers to give preferential treatment to selected packets.  This sounds great, just set a bit and move to the front of the line for faster service.  As always, there are limitations.

– How does one set a TOS bit?
It seems that only very specialized enterprise applications, like a VoIP PBX, actually set and make use of TOS bits. Setting the actual bit is not all that difficult if you have an application that deals with the network layer, but most commercial applications just send their data on to the host computer’s clearinghouse for data, which in turn puts it into IP packets without a TOS bit set. After searching around for a while, I just don’t see much literature on setting a TOS bit at the application level. For example, there are a couple of forums where people mention setting the TOS bit in Skype, but nothing definitive on how to do it.
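
For developers who do control their own application, the socket API exposes this byte. Below is a minimal sketch assuming a Linux host and Python’s standard socket module; the marking value, address, and port are examples only, and, as noted next, setting the bit says nothing about whether any router will honor it.

```python
import socket

# Minimal sketch: an application that owns its own socket can request a TOS/DSCP
# marking before sending. 0xB8 is the common "Expedited Forwarding" value used
# for voice; treat it, the address, and the port as illustrative placeholders.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)
sock.sendto(b"hello", ("192.0.2.50", 5060))  # placeholder destination
```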

– Who enforces the priority for TOS packets?
This is a function of routers at the edge of your network, and all routers along the path to wherever the IP packet is going. Generally, this limits the effectiveness of using a TOS bit to networks that you control end-to-end. In other words, a consumer using a public Internet connection cannot rely on their provider to give any precedence to TOS bits, hence this feature is relegated to enterprise networks within a business or institution.

–  Incoming traffic generally cannot be controlled.
The subject of when you can and cannot control a TOS bit does get a bit more involved.  We have gone over this in more detail in a separate  article.

12) Avoid Quota Penalties

Some providers are implementing quotas, where they slow you down if you use too much data over a period of time. If you know that you have a large set of downloads to do, for example syncing your device with iTunes Cloud, go to a library and use their free service. Or, if you are truly without morals, log on to your neighbor’s wireless network and do your sync.

13) Consider Application Shaping?

Note: Application shaping is an appropriate topic for corporate IT administrators and is generally not a practical solution for a home user.  Makers of application shapers include Blue Coat (Packeteer) and Allot (NetEnforcer), products that are typically out of the price range for many smaller networks and home users.

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping”, with aliases of “deep packet inspection”, “layer 7 shaping”, and perhaps a few others thrown in for good measure. For the IT manager that is held accountable for everything that can and will go wrong on a network, or the CIO that needs to manage network usage policies, this at first glance may seem like a dream come true.  If you can divvy up portions of your WAN/Internet link to various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth, right?  Well, you be the judge…

At the center of application shaping is the ability to identify traffic by type, for example distinguishing between Citrix traffic, streaming audio, Kazaa peer-to-peer, or something else. However, this approach is not without its drawbacks.

Drawback #1: Applications can purposely use non-standard ports
Many applications are expected to use Internet ports when communicating across the Web. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the “FTP” application commonly used for downloading files uses as standard the well-known “port 21”. The fallacy with this scheme, as many operators soon find out, is that there are many applications that do not consistently use a standard fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don’t want IT personnel to block them at all, so they deliberately design applications to not conform to any formal port assignment scheme. For this reason, any product that aims to block or alter application flows by port should be avoided if your primary mission is to control applications by type.

So, if standard firewalls are inadequate at blocking applications by port, what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a railroad car. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet.

In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, let’s take the example of a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then when the train arrived in Los Angeles, hopefully the workers on the other end would have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what, the contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit, I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets (aka “deep packet inspection”), and through various pattern matching techniques, determines what type of application a particular flow is. Once a flow is determined, then the application shaping tool can enforce the operator’s policies on that flow. Some examples of policy are:

Limit AIM messenger traffic to 100kbs
Reserve 500kbs for Shoretell voice traffic

The list of rules you can apply to traffic types and flow is unlimited.
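
As a toy illustration of the pattern-matching idea (and nothing more), the sketch below classifies a packet payload against a tiny signature table and looks up a rate policy. Real products ship thousands of professionally maintained signatures; the signatures, application names, and rates here are invented for the example.

```python
# Toy illustration of payload pattern matching -- not any vendor's actual engine.
signatures = {
    "bittorrent": [b"BitTorrent protocol", b"announce"],
    "http":       [b"GET ", b"HTTP/1."],
    "sip_voice":  [b"INVITE sip:", b"SIP/2.0"],
}

# Policy: kbps ceiling per classified application ("unknown" gets a blanket rule).
policy_kbps = {"bittorrent": 100, "http": 2000, "sip_voice": 500, "unknown": 250}

def classify(payload: bytes) -> str:
    """Return the first application whose signature appears in the payload."""
    for app, patterns in signatures.items():
        if any(p in payload for p in patterns):
            return app
    return "unknown"

packet = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
app = classify(packet)
print(f"flow classified as {app}, limited to {policy_kbps[app]} kbps")
```

The fragility discussed in the next two drawbacks follows directly from this design: every new or changed application means a new signature, and anything that matches nothing falls into the “unknown” bucket.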

Drawback #2: The number of applications on the Internet is a moving target.
The best application shaping tools do a very good job of identifying several thousand of them, and yet there will always be some traffic that is unknown (estimated at 10 percent by experts from the leading manufacturers). The unknown traffic is lumped into the unknown classification, and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a webcast and is not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to stay up to date is large, and there are cracks.

Drawback #3: The spectrum of application types is not static
Even if the application spectrum could be completely classified, the spectrum of applications constantly changes. You must keep licenses current to ensure you have the latest in detection capabilities. And even then it can be quite a task to constantly analyze and change the mix of policies on your network. As bandwidth costs lessen, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

Drawback #4: Net neutrality is compromised by application shaping.
Techniques used in application shaping have become controversial on public networks, with privacy issues often conflicting with attempts to ensure network quality.

Based on these drawbacks, we believe that application shaping is not the dream come true that it may seem at first glance.  Once CIOs and IT Managers are educated on the drawbacks, they tend to agree.

14) Bypass that local consumer reseller

This option might be a little bit out of the price range of the average consumer, and it may not be practical logistically, but if you like to do things out of the box, you don’t have to buy Internet service from your local cable operator or phone company, especially if you are in a metro area. Many customers we know have actually gone directly to a Tier 1 point of presence (backbone provider) and put in a radio backhaul direct to the source. There are numerous companies that can set you up with a 40-to-60 megabit link with no gimmicks.

15) Speeding up your iPhone

Ever been in a highly populated area with 3 or 4 bars and still your iPhone access slows to a crawl?

The most likely reason for this problem is congestion on the provider line. 3G and 4G networks all have a limited-size pipe from the nearest tower back to the Internet. It really does not matter what your theoretical data speed is; when there are more people using the tower than the backhaul pipe can handle, you can temporarily lose service, even when your phone is showing three or four bars.

Unfortunately, you only have a couple of options in this situation. If you are in a stadium with a large crowd, your best bet is to text during the action. If you wait for a timeout or the end of the game, you’ll find this corresponds to the times when the network slows to a crawl, so try to finish your access before the last out of the game or the end of the quarter. Pick a time when you know the majority of people are not trying to send data.

Get away from the area of congestion. I have experienced a complete lockout of up to 30 minutes when trying to text as a sold-out stadium emptied out. In this situation, my only chance was to walk a half mile or so from the venue to get a text out. Once away from the main stadium, my iPhone connected to a tower with a different backhaul, away from the congested stadium towers.

Shameless plug: If you happen to be a provider or know somebody that works for a provider  please tell them to call us and we’d be glad to explain the simplicity of equalizing and how it can restore sanity to a congested wireless backhaul.

16) Turn off HTTPS and other Encryption

Although this may sound a bit controversial, there are some providers that, for the sake of survival, assume that encrypted traffic is bad traffic. For example, p2p is considered bad traffic, and providers use special equipment to throw it into a lower-priority pool so that it gets sent out at a slower speed. Many applications are now starting to encrypt their traffic, from p2p to Facebook. The provider may assume that all of this is “bad” traffic because they don’t know what it is, and hence give it a lower priority.

17) Protocol Spoofing

Note: This method applies to legacy database servers doing operations over a WAN. Skip this tip if you are a home user.

Historically, there are client-server applications that were developed for an internal LAN. Many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe using WAN links to tie different locations together.

To get a better visual on what goes on in a chatty application, perhaps an analogy will help. It’s like sending family members your summer vacation pictures and, for some insane reason, putting each picture in a separate envelope and mailing them individually on the same mail run. Obviously, this would be extremely inefficient, as chatty applications can be.

What protocol spoofing accomplishes is to fake out the client or server-side of the transaction and then send a more compact version of the transaction over the Internet, i.e. put all the pictures in one envelope and send it on your behalf, thus saving you postage.
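
As a rough sketch of the “one envelope” idea, the Python fragment below shows a hypothetical WAN-side proxy collecting many small requests and shipping them as a single message. The request format, function names, and batching rule are all invented for illustration; real spoofing products work at the protocol level and are far more involved.

```python
import json

# Toy sketch of the "put all the pictures in one envelope" idea: a proxy on the
# LAN side collects many small, chatty requests and ships them across the WAN
# link as one message. Everything here is hypothetical.
pending = []

def queue_request(req: dict):
    """Called on the LAN side for each chatty client request."""
    pending.append(req)

def flush_to_wan() -> bytes:
    """Bundle everything queued so far into a single WAN transaction."""
    global pending
    envelope = json.dumps(pending).encode()
    pending = []
    return envelope   # in a real spoofing proxy this crosses the WAN link once

for i in range(25):                      # 25 tiny requests from a chatty legacy app
    queue_request({"op": "read_row", "row": i})

payload = flush_to_wan()
print(f"25 round trips collapsed into one {len(payload)}-byte WAN message")
```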

You might ask, why not just fix the inefficiencies in these chatty applications rather than write software to deal with the problem? Good question, but that would be the subject of a totally different article on how IT organizations must evolve with legacy technology, which is beyond the scope of the present article.

In Conclusion

Again, while there is no way to increase your true Internet speed without upgrading your service, these tips can improve performance, and help you to get better results from the bandwidth that you already have.  You’re paying for it, so you might as well make sure it’s being used as effectively as possible. : )

Related Article on testing true video speed over the Internet

A great article from the tech guy regarding tips on dealing with your ISP

Other Articles on Speeding up Your Internet

Five tips and tricks to speed up your Internet

How to speed up your Internet Connection Without any Software

Tips on how to speed up your Internet

About APconnections

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here to request our full pricelist.

The pros and cons of Disk (Web) Caching


Eli Riles, an independent consultant and former VP of sales for NetEqualizer, has extensively investigated the subject of caching with many ISPs from around the globe. What follows are some useful observations on disk/web caching.

Effective use of Disk Caching

Suppose you are the administrator for a network, and you have a group of 1,000 users who wake up promptly at 7:00 am each morning and immediately go to MSNBC.com to retrieve the latest news from Wall Street. This synchronized behavior would create 1,000 simultaneous requests for the same remote page on the Internet.

Or, in the corporate world, suppose the CEO of a multinational 10,000-employee business, right before the holidays, put out an all-points, 20-page PDF file on the corporate site describing the new bonus plan. As you can imagine, all the remote WAN links might get bogged down for hours while each and every employee tried to download this file.

Well, it does not take a rocket scientist to figure out that if the MSNBC home page could somehow be stored locally on an internal server, that would alleviate quite a bit of pressure on your WAN or Internet link.

And in the case of the CEO memo, if a single copy of the PDF file was placed locally at each remote office it would alleviate the rush of data.

Local Disk Caching does just that.

Offered by various vendors, caching can be very effective in many situations, and vendors can legitimately claim tremendous WAN speed improvement in some cases. Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing the WAN link unnecessarily.

You may know that most desktop browsers do their own form of caching already. Many web servers keep a time stamp of their last update to data, and browsers such as the popular Internet Explorer will use a cached copy of a remote page after checking the time stamp.

So what is the downside of caching?

There are two main issues that can arise with caching:

1) Keeping the cache current. If you access a cached page that is not current, then you are at risk of getting old and incorrect information. Some things you may never want to be cached, for example the results of a transactional database query. It’s not that these problems are insurmountable, but there is always the risk that the data in cache will not be synchronized with changes.

2) Volume. There are some 100 million Web sites on the Internet. Each site contains upwards of several megabytes of public information. The amount of data is staggering, and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page. If you have a diverse set of users, it is unlikely the cache will have much effect on a given day.

Formal definition of Caching

Net Neutrality Defined: Barack Obama is on the bandwagon


By Art Reisman, CTO, http://www.netequalizer.com


There continues to be a flurry of Net Neutrality articles published, and according to one, Barack Obama is a big supporter of Net Neutrality. Of course, that was a fleeting campaign soundbite that the media picked up without much context.

I was relieved to see that a political entity finally put a definition on Net Neutrality.

From the government of Norway we get:

“The new rules lay out three guidelines. First, Internet users must be given complete and accurate information about the service they are buying, including capacity and quality. Second, users are allowed to send and receive content of their choice, use services and applications of their choice, and connect any hardware and software that doesn’t harm the network. Finally, the connection cannot be discriminated against based on application, service, content, sender, or receiver.”

Full Article: Norway gets net neutrality—voluntary, but broadly supported

I could not agree more. Note that this definition does not rule out some form of fair bandwidth shaping, and that is an important distinction because the Internet will be reduced to gridlock without some traffic control.

The funniest piece of irony in this whole debate is that the larger service providers are warning of Armageddon without some form of fairness rules (and I happen to agree), while at the same time their marketing arms are creating an image of infinite, unfettered access for $29 a month. (I omitted a reference link because they change daily.)