Integrating NetEqualizer with Active Directory


By Art Reisman

CTO www.netequalizer.com

I have to admit that when I see this question posed to one of our sales engineers, I realize our mission of distributing a turnkey bandwidth controller will always require a context switch for potential new customers.

It’s not that we can’t tie into Active Directory; we have. The point is that our solution already solves the customer’s underlying issue, bandwidth congestion, more efficiently than divvying up bandwidth per user based on a profile in Active Directory.

Equalizing is the art of allocating bandwidth to the real-time needs of users at the appropriate time, especially during peak usage hours when bandwidth resources are stretched to their limit. The concept does take some getting used to, but a few minutes spent getting comfortable with our methodology will often pay off many times over compared to the man-hours spent tweaking and fine-tuning a fixed allocation scheme.
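
For readers who like to see the idea in code, here is a minimal sketch of what equalizing amounts to: when the trunk is congested, the heaviest flows are throttled first so everyone else stays responsive. This illustrates the concept only; it is not NetEqualizer's actual algorithm, and the flow table, thresholds, and penalty factor are all hypothetical.

```python
# Concept sketch: throttle the largest flows only when the link is congested.
TRUNK_CAPACITY_KBPS = 10_000      # hypothetical 10 Mbps trunk
CONGESTION_RATIO = 0.85           # start acting at 85% utilization
PENALTY_FACTOR = 0.5              # halve the allowance of the heaviest flows

def equalize(flows):
    """flows: dict of connection id -> current rate in kbps."""
    total = sum(flows.values())
    if total < TRUNK_CAPACITY_KBPS * CONGESTION_RATIO:
        return {}                 # not congested: leave everyone alone
    limits = {}
    for conn in sorted(flows, key=flows.get, reverse=True):
        limits[conn] = flows[conn] * PENALTY_FACTOR
        total -= flows[conn] * (1 - PENALTY_FACTOR)
        if total < TRUNK_CAPACITY_KBPS * CONGESTION_RATIO:
            break                 # stop once the trunk is back under control
    return limits

# One large download dominates a congested link; it alone gets throttled.
print(equalize({"voip-call": 80, "web-browse": 300, "big-download": 9_000}))
```

The point to notice is that no decision depends on what the traffic is, only on how much of the trunk each connection is consuming at that moment.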

Does our strategy potentially alienate the Microsoft shop that depends on Active Directory for setting customized bandwidth restrictions per user?

Yes, perhaps in some cases it does. However, as mentioned earlier, our mission has always been to solve the business problem of congestion on a network, and equalizing has proven time and again to be the most cost-effective approach in terms of immediate results and low recurring support costs.

Why not support Active Directory integration to get in the door with a new customer?

Occasionally, in special cases, we will open up our interface and integrate with Active Directory or RADIUS, but what we have found is that a myriad of boundary cases come up and must be handled. Synchronizing after a power-down or maintenance cycle is one example. Whenever two devices on a network must talk to each other and share common data, the support and maintenance of the system can grow exponentially. The simple initial requirement of setting a rate limit per user is often met without issue; it is the inevitable follow-on complexity and support that violates the nature and structure of our turnkey bandwidth controller. What is the point of adding complexity to a solution when the solution creates more work than the original problem?

See related article on the True Cost of Bandwidth Monitoring.

Where have all the Wireless ISPs gone?


Rachel Carson wrote Silent Spring in 1962. She noticed a lack of robins in her yard and eventually made the link back to DDT spraying. Robins are again abundant; given a fighting chance, they seem to prosper quite well.

Much like the robins of 1962, over the past three years I have noticed a die-off in business from wireless ISPs. Four years ago, I spent at least an hour or two a day talking to various WISPs around the USA. The mood was always upbeat, and many were adding subscribers at a rapid rate. Today the rural WISPs of the US are still out there, but expansion and investment have come to a standstill.

Is the private investment drought by small rural WISPs due to the recession?

Certainly some of the slowdown is due to weakness in the housing market, but as one operator told me a couple of years ago, his customers will keep their Internet connection up long after they have disconnected their television and phone. Some consumers will pay their Internet bill right up to the last day of a pending foreclosure.

Much of the slowdown is due to the rural broadband stimulus.

The Rural Broadband Initiative seems to be a solution looking for a problem. From our perspective, the main thing this initiative accomplished was subsidizing a few providers at the expense of freezing billions in private equity, private equity that, up until the initiative, was effectively expanding the rural market through entrepreneurs.

Why did the private investment stop?

It was quite simple, really. When the playing field was level, most small operators felt they had an upper hand against the larger providers in rural areas. For example:

– They worked smarter, with less overhead, using backhaul technologies

– There was an abundance of wireless equipment makers (based on public 802.11 frequencies) ready to help

– They had confidence that the larger operators were not interested in these low-margin niche markets

With the broadband initiative, several things happened:

– Nobody knew where the money was going to be spent or how broad the reach would be; this uncertainty froze all private expansion

– Many of these smaller providers applied for money, and only a few (if any) were awarded contracts. Think of it this way: suppose there were four restaurants in town, all serving slightly different clientele, and then a giant came along and gave one restaurant a 10-million-dollar subsidy; the other three would go out of business

Related article: By the FCC’s own report, it seems the rural broadband initiative has not changed access to higher speeds.

Perhaps someday the poison of selective government subsidies will come to an end, and the rural WISP will prosper again.

Update, Nov 2011: It appears that not only did the rural broadband initiative freeze up the small home-grown ISP market, it also proved again that large government subsidies are a poison pill. Related article

By Art Reisman, CTO, www.netequalizer.com

Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to
ISPs, Universities, Wireless ISPs, Libraries, Mining Camps, and any organization where groups of users must share their Internet resources equitably.

The Pros And Cons of Metered Internet Bandwidth And Quotas


Editor’s Note: Looks like metered bandwidth is back in the news. We first addressed this subject back in June 2008. Below you’ll find our original commentary, followed by a few articles on the topic.

Here is our original commentary on the subject:

The recent announcement that Time Warner Cable Internet plans to experiment with a quota-based bandwidth system has sparked lively debates throughout cyberspace. Although the metering will only be done in a limited market for now, it stands as an indication of the direction ISPs may be heading in the future. Bell Canada is also taking a metered bandwidth approach; in Canada, much of Bell’s last mile is handled by resellers, and they are not happy with this approach.

Over the past several years, we have seen firsthand the pros and cons of bandwidth metering. Ultimately, invoking a quota-based system does achieve the desired effect of getting customers to back off on their usage — especially the aggressive Internet users who take up a large amount of the bandwidth on a network.

However, this outcome doesn’t always develop smoothly as downsides exist for both the ISP and the consumer. From the Internet provider perspective, a quota-based system can put an ISP at a competitive disadvantage when marketing against the competition. Consumers will obviously choose unlimited bandwidth if given a choice at the same price. As the Time Warner article states, most providers already monitor your bandwidth utilization and will secretly kick you offline when some magic level of bandwidth usage has been reached.

To date, it has not been a good idea to flaunt this policy and many ISPs do their best to keep it under the radar. In addition, enforcing and demonstrating a quota-based system to customers will add overhead costs and also create more customer calls and complaints. It will require more sophistication in billing and the ability for customers to view their accounts in real time. Some consumers will demand this, and rightly so.

Therefore, a quota-based system is not simply a quick fix in response to increased bandwidth usage. Considering these negative repercussions, you may wonder what motivates ISPs to put such a system in place. As you may have guessed, it ultimately comes down to the bottom line.

ISPs often get charged, or incur cost overruns, on the total number of bytes transferred. They are frequently resellers of bandwidth themselves and may be charged by the byte; by metering and imposing quotas, they are just passing this cost along to their customers. In this case, at face value, quotas allow a provider to adopt a model where they don’t have to worry about cost overruns based on their total usage. They essentially hand this problem to their subscribers.

A second common motivation is that ISPs are simply trying to keep their own peak utilization down and avoid purchasing extra bandwidth to meet the sporadic increases in demand. This is much like power companies that don’t want to incur the expense of new power plants to just meet the demands during peak usage times.

Quotas in this case do have the desired effect of lowering peak usage, but there are other ways to solve the problem without passing the burden of byte counting on to the consumer. For example, behavior-based shaping and fairness reallocation have proven to solve this issue without the downsides of quotas.

A final motivation for the provider is that a quota system will take some of the heat off of their backs from the FCC. According to other articles we have seen, ISPs have discreetly, if not secretly, been toying with bandwidth, redirecting it based on type and such. So, now, just coming clean and charging for what consumers use may be a step in the right direction – at least where policy disclosure is concerned.

For the consumer, this increased candor from ISPs is the only real advantage of a quota-based system. Rather than being misled and having providers play all sorts of bandwidth tricks, quotas at least put customers in the know. Still, the complexity and hassle of monitoring one’s own bandwidth usage on a monthly basis, similar to cell phone minutes, is something most consumers likely don’t want to deal with.

Personally, I’m on the fence in regard to this issue. Just like believing in Santa Claus, I liked the illusion of unlimited bandwidth, but now, as quota-based systems emerge, I may be faced with reality. It will be interesting to see how the Time Warner experiment pans out.

Related Resource: Blog dedicated to stamping out usage-based billing in Canada.

Additional Recent Articles

Time Bomb Ticking on Netflix Streaming Strategy (Wall Street Journal)

How much casual driving would the average American do if gasoline cost $6 a gallon? A similar question may confront Web companies pushing bandwidth-guzzling services one day.

Several Web companies, including Amazon.com, Google and Netflix, are promoting services like music and video streaming that encourage consumers to gobble up bandwidth. Indeed, Netflix’s new pricing plans, eliminating the combined DVD-streaming offering, may push more people into streaming. These efforts come as broadband providers are discussing, or actually implementing, pricing plans that eventually could make those services pricey to use.

Most obviously this is an issue for the mobile Web, still a small portion of consumer Internet traffic in North America. Verizon Communications‘ majority-owned wireless service last week introduced tiered data pricing, about a year after AT&T made a similar move. But potentially much more disruptive is consumption-based pricing for “fixed broadband,” landlines that provide Internet access for consumers in their homes, either via a cable or a home Wi-Fi network. Long offered on an effectively unlimited basis, American consumers aren’t used to thinking about the bytes they consume online at home.

To keep reading, click here.

The Party’s Over: The End of the Bandwidth Buffet (CedMagazine.com)

As the consumption of video on broadband accelerates, moving to consumption billing is the only option.

Arguments over consumption billing and network neutrality flared up again this summer. The associative connector of the two issues is their technical underpinning: Consumption billing is based on the ability to measure, meter and/or monitor bits as they flow by. The problem is that those abilities are what worry some advocates of one version of network neutrality.

The summer season began with AT&T stirring things up with an announcement that it was moving toward adopting consumption billing for wireless broadband.

To keep reading, click here.

Internet Providers Want to Meter Usage: Customers Who Like To Stream Movies, TV Shows May Get Hit With Extra Fees (MSNBC)

If Internet service providers’ current experiments succeed, subscribers may end up paying for high-speed Internet based on how much material they download. Trials with such metered access, rather than the traditional monthly flat fee for unlimited connection time, offer enough bandwidth that they won’t affect many consumers — yet…

To keep reading, click here.

Related article:  Metered broadband is coming

http://www.businessphonenews.com/2012/10/metered-broadband-is-coming-how-much-broadband-per-month-does-your-business-use.html

Editor’s final note: We are also seeing renewed interest in quota-based systems. We completely revamped our NetEqualizer quota interface this spring to meet rising demand.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency sensitive applications, such as VoIP and email. Click here for a full price list.

NetEqualizer News Special Feature: Technology and Other Predictions for 2012 and Beyond


As we pass the midpoint of 2011, it’s time to start making a few predictions about the year to come. So keep an eye out for these developments over the next 18 months. If we’re right, be sure to give us credit. If we’re wrong, just act like this post never happened. Here are our thoughts…

Prediction #1: Apple or a new player will make a splash in the search engine market. Current search engine technology, although thorough and expansive, tends to be lacking in smarts. How many times have you searched for a page or link that you know for sure is out there, and despite your best efforts at crafting your keywords, Google or Yahoo can’t find what you are looking for? Sometimes, unless you know the exact context of a sentence, in correct word order, you just can’t find it. And that leaves room for improvement.

This is not a knock on Google, Yahoo! or Bing, per se, but rather just an observation that there is room for another generation of search engine and somebody is going to do it sooner rather than later. However, we expect the next-generation search engine will sacrifice speed for intelligence. By this we mean that it is likely the newer generation may crank for 20 seconds to find what you are looking for, but the slower speeds will be more than compensated for by the better, more relevant results. New search engine technology will take the market by storm because of more useful content.

The reason we suspect Apple might solve this puzzle is that Steve Jobs has a habit of leapfrogging technology and bringing it to market. Google has grown by acquisition and not so much by innovation. If not Apple, it might also come out of left field from some graduate research lab. Regardless, we think it will happen.

Prediction #2: There will be a tumble in the social networking and search engine stock bubble. The expectations for advertisement revenue will not pan out. Placement ads are just too easy to ignore on the Internet. These sites do not have the captive audience of the Super Bowl, and advertisers are starting to figure that out.

There will be price pressure on the content sites and search engine sites to lower costs to attract advertisers as they actually start to measure and go public with their returns on advertising investment. There will be quite a bit of pressure to hide this fact in the media, as there is now, but at some point the ROI on content advertising will bear this out.

We are not predicting a collapse in this market, just some major adjustments to valuations. This is based on our six years of experience placing online ads. Prices have gone up, and the results were never there to justify the cost.

Related Article: Facebook Valuation Too High

Related Article: Demand Builds for TV Ad Time

Prediction #3: Fuel prices will plummet as the Chinese and Indian economies cool down.

Although oil production and exploration are flat in the US, every other country around the world is picking up exploration and exploiting new reserves. The market will be flooded with oil by mid or late 2012, sending the price of gasoline back down to $2 or below.

Prediction #4: There will be a new resurgence in urban mesh networks.

Why? These things really do enhance economic activity. The initial round of municipal mesh networks was a learning experiment with some limited success and way too much inexperience in sourcing providers.

The real reason for cities to invest in these networks will be the growing monthly fees that traditional providers charge for 4G devices to cover the cost of their larger networks. Users will gravitate toward areas where they can switch over to free wireless. A well-covered downtown or small city with free wireless service will be a welcome island for business users and consumers alike. Think of it like stepping inside a circle where you can make free unlimited long-distance calls, while the rest of the provider networks gouge you when you step outside.

We’ll see how these predictions pan out. As always, feel free to share your thoughts on our predictions, or some predictions of your own, in the comments section below.

In a related article, the WSJ reports that Wi-Fi is the largest provider of connectivity for mobile devices such as the iPhone.

Commentary: Verizon Moves to Usage-Based Billing Plans in July 2011


Verizon’s Plans

According to a report published in ChannelPartnersOnline on June 20th, 2011, Verizon is officially moving to a usage-based billing model for new smartphone subscribers as of July.

ChannelPartners reports that Verizon Wireless plans to move to tiered pricing next month on its data plans for new smartphone customers.  On smartphones, including Apple’s iPhone, Verizon Wireless offers an unlimited email and data plan for $29.99 per month. Tiered pricing is very common internationally, but U.S. mobile operators have been slow to move away from all-you-can-eat data plans.

To read the full article, click here.

Commentary: Our Take on This

We were not asked to comment, but if we were, we would agree that usage-based billing more accurately applies charges for services to those using the services. In fact, since April 2010, Internet Providers (ISPs, WISPs, etc.) that want to charge their customers by usage can implement NetEqualizer’s Quota API to track usage over a specified time period.

In addition, if an Internet provider wants to enforce usage levels, the NetEqualizer also supports the use of “rate limits” through its Hard Limits feature. Internet providers can set inbound and outbound Hard Limits by individual IP, for a whole Class B or Class C subnet, or for any legal subnet mask (/1 through /32).
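
To make those two ideas concrete, here is a hypothetical sketch of what quota and hard-limit rules look like in principle. This is not the NetEqualizer Quota API or Hard Limits syntax; the rule tables, units, and function below are invented for illustration only.

```python
# Hypothetical per-subscriber quota and hard-limit tables.
import ipaddress

quota_rules = {                 # subscriber IP or subnet -> allowed GB per month
    "203.0.113.17": 250,
    "198.51.100.0/24": 500,
}
hard_limits = {                 # subscriber IP or subnet -> (down_kbps, up_kbps)
    "203.0.113.17": (5_000, 1_000),
    "198.51.100.0/24": (2_000, 512),
}

def over_quota(ip, used_gb):
    """True if this subscriber has exceeded its monthly quota."""
    addr = ipaddress.ip_address(ip)
    for rule, allowed_gb in quota_rules.items():
        if addr in ipaddress.ip_network(rule, strict=False):
            return used_gb > allowed_gb
    return False                # no matching rule: treat as unlimited

print(over_quota("198.51.100.42", used_gb=612))   # True: past the 500 GB quota
```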

We believe that usage-based billing, when broadly adopted, will level the playing field throughout the Internet service space, enabling smaller Internet providers to compete more effectively with larger carriers. Many Internet providers have to charge for usage levels, in order to keep their contention ratios manageable and to remain profitable. In the past, this has been disadvantageous in markets where larger providers have come in and charged flat fees to consumers. With the advent of usage-based billing in the cellular space, consumers will be more apt to expect to pay for usage for all their Internet services.

We will keep watching the developments in this area, and reporting our thoughts here. If you are a small Internet provider, what is your take on usage-based billing? Let us know in the comments section below.

Behind the Scenes: Bugs and Networking Equipment


If you relied only on conspiracy theories to explain the origin of software bugs, you would likely be left with little trust in the vendors and manufacturers providing your technology. In general, the more skeptical theories chalk software bugs up to a few nefarious, and easily preventable, causes:

  1. Corporate greed and the failure to effectively allocate resources
  2. Poor engineering
  3. Companies deliberately withholding fixes in an effort to sell upgrades and future support

Although I’ve certainly seen evidence of these policies many times over my 25-year career, the following case studies are more typical for understanding how a bug actually gets into a software release. It’s not necessarily the conspiracy it might initially seem.

My most memorable system failure took place back in the early 1990s. I was the system engineer responsible for the underlying UNIX operating system and the redundant disk arrays (RAID) on the Audix Voice Messaging system. This was before the days of widespread e-mail use. I worked for AT&T Bell Labs at the time, and AT&T had a reputation for both high prices and high reliability. Our customers, almost all Fortune 500 companies, used their voice mail extensively to catalog and archive voice messages. Customers such as John Hancock paid a premium for redundancy on their voice message storage. If there were any field-related problems, the buck stopped in my engineering lab.

For testing purposes, I had several racks of Audix (trade mark) systems and simulators combined with various stacks of disk drives in RAID configurations. We ran these systems for hours, constantly recording voice messages. To stress the RAID storage, we would periodically pull the power on a running disk drive. We would also smash them with a hammer while running. Despite the deliberate destruction of running disk drives, in every test scenario the RAID system worked flawlessly. We never lost a voice mail message in our laboratory.

However, about six months after a major release, I got a call from our support team. John Hancock had a system failure and lost every last one of their corporate voice mails. (AT&T had advised backing data up to tape, but John Hancock had decided not to utilize that facility because of their RAID investment. Remember, this was in the 1990s and does not reflect John Hancock’s current policies.)

The root cause analysis took several weeks of work, with the RAID vendor, myself, and some of the key UNIX developers sequestered in a lab in Santa Clara, California. After numerous brainstorming sessions, we were able to re-create the problem. It seemed the John Hancock disk drive had suffered what’s called a parity error.

A parity error can develop if a problem occurs when reading and writing data to the drive. When the problem emerges, the drives try to recover, but in the meantime the redundant drives read and write the bad data. As the attempts at auto recovery within the disk drive go on (sometimes for several minutes), all of the redundant drives have their copies of the data damaged beyond repair. In the case of John Hancock, when the system finally locked up, the voice message indices were useless. Unfortunately, very little could have been done on the vendor or manufacturing end to prevent this.
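
For readers unfamiliar with how parity behaves in this situation, here is a toy Python illustration. It is my own simplification, not the actual Audix or RAID firmware logic: losing a drive outright is recoverable, but corrupt data that gets written and re-protected before anyone notices is preserved just as faithfully as good data.

```python
# Toy XOR parity across equal-length blocks (the essence of RAID parity).
from functools import reduce

def parity(*blocks):
    """Return the byte-wise XOR of the given equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

drive_a = b"voicemail-index-A"
drive_b = b"voicemail-index-B"
drive_p = parity(drive_a, drive_b)                # parity drive

# Case 1: drive A dies outright. Its contents are rebuilt from B plus parity.
assert parity(drive_b, drive_p) == drive_a

# Case 2: a write error silently corrupts A, and the array dutifully
# recomputes parity from the bad data before anyone notices. The "redundant"
# copy now protects the corruption, not the original data.
corrupt_a = b"XXXXXXXXX-index-A"
drive_p = parity(corrupt_a, drive_b)
assert parity(drive_b, drive_p) == corrupt_a      # rebuild returns garbage
```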

More recently, when APconnections released a new version of our NetEqualizer, despite extensive testing over a period of months including a new simulation lab, we had to release a patch to clean up some lingering problems with VLAN tags. It turned out the problem was with a bug in the Linux kernel, a kernel that normally gets better with time.

So what happened? Why did we not find this VLAN tag bug before the release? Well, first off, the VLAN tagging facility in the kernel had been stable for years. (The Linux kernel had been released as stable by Kernel.org.) We also had a reliable regression test for new releases that made sure it was not broken. However, our regression test only simulated the actual tag passing through the kernel. This made it much easier to test, and considering our bandwidth shaper software only affected the packets after the tag was in place, there was no logical reason to test a stable feature of the Linux kernel. To retest stable kernel features would not have been economically viable considering these circumstances.
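
As a rough illustration of that testing gap, consider a sketch like the following. The packet structure and shaper function are stand-ins I invented for this example, not our real regression suite: the test injects a simulated VLAN tag, so the shaper logic is exercised while the kernel path that actually creates the tag never runs.

```python
# Hypothetical regression test that simulates the VLAN tag instead of
# exercising the kernel's own tagging code.

def shape_packet(packet):
    """Stand-in for the shaper: works on the payload, leaves the tag alone."""
    return {**packet, "shaped": True}

def test_shaper_preserves_vlan_tag():
    simulated = {"vlan_tag": 100, "payload": b"data"}   # tag supplied by the test
    out = shape_packet(simulated)
    assert out["vlan_tag"] == 100                       # passes every time

test_shaper_preserves_vlan_tag()
# A bug in the code that *creates* the tag is never seen here, because the
# real tagging path is bypassed entirely.
```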

This logic is common during pre-market testing. Rather than test everything, vendors use a regression test for stable components of their system and only rigorously test new features. A regression test is a subset of scenarios and is the only practical way to make sure features unrelated to those being changed do not break when a new release comes out. Think of it this way: Does your mechanic do a crash test when replacing the car battery to see if the airbags still deploy? This analogy may seem silly, but as a product developer, you must be pragmatic about what you test. There are almost infinite variations on a mature product and to retest all of them is not possible.

Therefore, in reality, most developers want nothing more than to release a flawless product. Yet, despite a developer’s best intentions, not every stone can be turned during pre-market testing. This, however, shouldn’t deter a developer from striving for perfection — both before a release as well as when the occasional bugs appear in the field.

Confessions of a Hacker


By Zack Sanders, NetEqualizer Guest Columnist

It’s almost three in the morning. Brian and I have been at it for almost sixteen hours. We’ve been trying to do one seemingly simple task for a while now: execute a command that lists files in a directory. Normally this would be trivial, but the circumstances are a bit different. We have just gotten into EZTrader’s blog and are trying to print a list of files in an unpublished blog post. Accomplishing this would prove that we could run any command we wanted to on the Web server, but it’s not working.

There must be something wrong with the syntax – there always is, right? We have to write the command into an ASP user control file, upload it via the attachment feature in the blog engine, and then reference it in a blog post. It’s ugly, but we are so close to piecing it all together.

I think it’s time for another cup of coffee.

EZTrader is a fictitious online stock trading company. Their front end is relatively basic, but their backend is complex. It allows users to manage their entire portfolio and has access to personal information and other types of sensitive data.

EZTrader came to us with an already strong security profile, but wanted to really put their site through the wringer by having us conduct an actual attack. They run automated scans regularly, have clean, secure code for their backend infrastructure with great SEO, and validate every request both on the client side and the server side. It really was impressive.

In the initial meeting with EZTrader, we were given a login and password for a generic user account so that we could test the authenticated portion of the site. We focused a lot of time and energy there because it is where the highest level of security is needed.

After days of trying to exploit this section of the website with no results, frustration was growing in each of us. Surely there must be some vulnerability to find, some place where they failed to properly secure the data.

Nope.

So what do you do when the front door is locked? Try a window.

We started looking around for possible attack vectors outside of the authenticated area. That’s when we came across the blog. Nobody writes a custom blog engine anymore. They use WordPress or some other open-source blog software. It’s almost always the right choice. These platforms have large communities of developers and testers that look for security holes and patch existing ones right away.

If you stay diligent on keeping your software up to date, you can’t go wrong with choosing an open-source blog platform. Problems arise when keeping this software current falls too low on the priority list. The primary reason this is so dangerous is that all of the bugs and security holes from your dated version are published for the world to see. That was precisely the case with EZTrader. They had an old version of OpenBlogger running on their website. We had finally found a chink in the armor.

We ran a few brute-force password crackers against the blog login form but they weren’t succeeding – access denied. Hmm, maybe it’s simpler.

Let’s do a quick Google search: “OpenBlogger default username and password.”

I’m feeling lucky.

The result: “Administrator/password.” This never seems to work, but it’s worth a shot…“Welcome back Administrator!” Wow. Now we are getting somewhere!

Many of the published vulnerabilities for open-source blog platforms reside in authenticated portions of the blog engine. Logging in with the default credentials was a major step, and now all we have to do is look for security weaknesses associated with that version. Back to Google.

“OpenBlogger 3.5.1 vulnerabilities.” Interesting.

What we find is that you can write code in the blog post itself and have it access any file on the system – even if it is outside of the Web root. This was billed as a “feature” of OpenBlogger. Haha, okay, thanks!

We already knew that the file upload feature of the blog puts files outside the Web root (we had tried accessing an uploaded file directly through the Web browser earlier, but that wasn’t possible due to this segregation). The key was to upload our custom code and reference it through code in the blog post. Once we figured out the path to the uploaded file, we just had to call that path in the blog post and our code would run. Our uploaded file had a simple job. If executed, it would run the “dir” command on the C:\ drive and print out the contents of the directory in a blog post. If we got this to work, the server was ours.

Maybe it’s the coffee, but suddenly I don’t feel so tired. I think we finally have the syntax right. Time to see if this dog will hunt.

Boom! There it is. The entire contents of the C:\ drive. If we can run the “dir” command, what else can we run? Let’s try to FTP one file off of their Web server to our Web server.

Okay, that worked. Let’s now try the entire C:\ drive.

That worked, too.

We now have the source code and supporting files for the entire Web server. This is where a molehill becomes a mountain. First, let’s upload a file that will give me persistent shell access to the drive so we can remove our shady looking blog post and poke around at will. Let’s also upload a file that will send me a text message when an administrator logs into the Web server. At that time, we’ll steal the authentication token and try it on other hosts connected to the network. Maybe it will work on the database server. While we are waiting for the administrator to log in, we’ll review all of our newly acquired source code for security holes that might have eluded us before.

The possibilities from here are endless. We could completely ruin EZTrader’s reputation by destroying their front page, their backend code, or their blog. We could upload more backdoors for access and sell them on the black market. We could sell their source code to E-Trade. We could compromise their other servers that are attached to that subnet.

We could run them out of business.

But luckily, our hats are white. When the CEO sees our report, she is astounded but relieved that we found these issues before the bad guys exploited them.

There are a few lessons that come out of an assessment like this:

– It is important to be diligent with security EVERYWHERE. EZTrader’s otherwise strong infrastructure was undone by one tiny oversight.

– Security should exist in layers, and monitoring is crucial. Even if we were able to access the blog, some other process should have thwarted our advances. McAfee or Tripwire should have prevented us from uploading executables or FTPing files off of the server.

In short, security for an online business is paramount. Unlike a breach in the physical world, customers have little tolerance for digital break-ins. Reputation is everything.

In the end, EZTrader’s proactive decisions may have saved their company. It is much easier to prevent an attack than to deal with one after the fact. The cleanup can be messy and expensive. It is increasingly important for all executives and IT personnel to have this mindset, and putting public facing sites to tests like this can be the difference between prosperity and peril.

About the Author(s)

Zack Sanders and Brian Sax are Web Application Security Specialists with Fiddler on the Root (FOTR). FOTR provides web application security expertise to any business with an online presence. They specialize in ethical hacking and penetration testing as a service to expose potential vulnerabilities in web applications. The primary difference between the services FOTR offers and those of other firms is that they treat your website like an ACTUAL attacker would. They use a combination of hacking tools and savvy know-how to try and exploit your environment. Most security companies  just run automated scans and deliver the results. FOTR is for executives that care about REAL security.

What Does Net Privacy Have to Do with Bandwidth Shaping?


I definitely understand the need for privacy. Obviously, if I was doing something nefarious, I wouldn’t want it known, but that’s not my reason. Day in and day out, measures are taken to maintain my privacy in more ways than I probably even realize. You’re likely the same way.

For example, to avoid unwanted telephone and mail solicitations, you don’t advertise your phone numbers or give out your address. When you buy something with your credit card, you usually don’t think twice about your card number being blocked out on the receipt. If you go to the pharmacist, you take it for granted that the next person in line has to be a certain distance behind so they can’t hear what prescription you’re picking up. The list goes on and on. For me personally, I’m sure there are dozens, if not hundreds, of good examples why I appreciate privacy in my life. This is true in my daily routines as well as in my experiences online.

The topic of Internet privacy has been raging for years. However, the Internet still remains a hotbed for criminal activity and misuse of personal information. Email addresses are valued commodities sold to spammers. Search companies have dedicated countless bytes of storage to logging every search term and the IP address it came from. Websites place tracking cookies on your system so they can learn more about your Web travels, habits, likes, dislikes, etc. Forensically, you can tell a lot about a person from their online activities. To be honest, it’s a little creepy.

Maybe you think this is much ado about nothing. Why should you care? However, you may recall that less than four years ago, AOL accidentally released around 20 million search keywords from over 650,000 users. Now, those 650,000 users and their searches will exist forever in cyberspace. Could it happen again? Of course. Why wouldn’t it, when all it takes is one laptop walking out the door?

Internet privacy is an important topic, and as a result, technology is becoming more and more available to help people protect information they want to keep confidential. And that’s a good thing. But what does this have to do with bandwidth management? In short, a lot (no pun intended)!

Many bandwidth management products (from companies like Blue Coat, Allot, and Exinda, for example) intentionally work at the application level. These products are commonly referred to as Layer 7 or Deep Packet Inspection (DPI) tools. Their mission is to allocate bandwidth specifically by what you’re doing on the Internet. They want to determine how much bandwidth you’re allowed for YouTube, Netflix, Internet games, Facebook, eBay, Amazon, etc. They need to know what you’re doing so they can do their job.
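
As a rough idea of what that means in practice, here is a deliberately naive sketch of application-level classification. Real DPI engines are far more sophisticated; the signatures and per-application policies below are invented purely for illustration.

```python
# Naive Layer 7-style classification: match payload bytes against known
# patterns, then apply a per-application bandwidth policy.
SIGNATURES = {
    "video":   [b"youtube.com", b"netflix.com"],
    "voip":    [b"SIP/2.0"],
    "webmail": [b"mail."],
}
POLICY_KBPS = {"video": 1_000, "voip": 256, "webmail": 512, "unknown": 2_000}

def classify(payload: bytes) -> str:
    for app, patterns in SIGNATURES.items():
        if any(p in payload for p in patterns):
            return app
    return "unknown"     # encrypted or tunneled traffic usually lands here

pkt = b"GET /watch?v=abc HTTP/1.1\r\nHost: youtube.com\r\n"
print(classify(pkt), POLICY_KBPS[classify(pkt)])   # video 1000
print(classify(b"\x16\x03\x03..."))                # unknown (TLS, nothing to match)
```

Once the payload is encrypted or tunneled, there is nothing left for those patterns to match, which is exactly the trend discussed below.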

In terms of this article, whether you’re philosophically adamant about net privacy (like one of the inventors of the Internet), or couldn’t care less, is really not important. The question is, what happens to an application-managed approach when people take additional steps to protect their own privacy?

For legitimate reasons, more and more people will be hiding their IPs, encrypting, tunneling, or otherwise disguising their activities and taking privacy into their own hands. As privacy technology becomes more affordable and simple, it will become more prevalent. This is a mega-trend, and it will create problems for those management tools that use this kind of information to enforce policies.

However, alternatives to these application-level products do exist, such as “fairness-based” bandwidth management. Fairness-based bandwidth management, like the NetEqualizer, is a 100% neutral solution and ultimately provides a more privacy-friendly approach for Internet users, as well as a more effective solution for administrators when personal privacy protection technology is in place. Fairness is the idea of managing bandwidth by how much you can use, not by what you’re doing. When you manage bandwidth by fairness instead of activity, not only are you supporting a neutral, private Internet, but you’re also able to address the critical task of bandwidth allocation, control and quality of service.

Notes on the Complexity of Internet Billing Systems


When using a product or service in business, it’s almost instinctive to think of ways to make it better. This is especially true when it’s a customer-centered application. For some, this thought process is just a habit. However, for others, it leads to innovation and new product development.

I recently experienced this type of stream of consciousness when working with network access control products and billing systems. Rather than just disregarding my conclusions, I decided to take a few notes on what could be changed for the better. These are just a few of the thoughts that came to mind.

The ideal product would:

  1. Cost next to nothing
  2. Auto-sense unique customer requirements
  3. Suggest differentiators such as custom Web screens where customers could view their bill
  4. Roll out the physical deployment bug free in any network topology

Up to this point, the closest products I’ve seen to fulfilling these tasks are from the turn-key vendors that supply systems en masse to hot-spot operators. The other alternative is to rely on custom-built systems. However, there are advantages and drawbacks to both options.

Turn-key Solutions

Let’s start with systems from the turn-key vendors. In short, these aren’t for everyone and only tend to be viable under certain circumstances, which include:

  1. A large greenfield ISP installation — In this situation, the cost of development of the application should be small relative to the size of the customer base. Also, the business model needs some flexibility to work with the features of the billing and access design.
  2. If you have plenty of time to troubleshoot your network — This translates into having plenty of money allocated to troubleshooting and also realizing there will be several integrations and iterations in order to work out the kinks. This means you must have a realistic expectation for ongoing support (more on this later). Projects go sour when vendor and customer assume the first iteration is all that’s needed. This is never true when doing even the most innocuous custom development.
  3. If you are willing to take the vendors’ suggestions on equipment and the business process — Generally, the vendor you’re using provides some basic options for your billing and authentication. This may require you to adjust your business process to meet some existing models.

The upside to these turn-key solutions is that if you’re able to operate within these constraints, you can likely get something going at a great price and fairly quickly. But, unfortunately, if you waiver from the turn-key vendor system, your support and cost cycle will likely increase dramatically.

The Hidden Costs of Customization

If you don’t fit into the categories discussed above, you may start looking into custom-built systems to better suit your specific needs. While going the custom-built route will obviously add to your initial price, it’s also important to realize that the long-term costs may increase as well.

Many custom network access control projects start as a nice prototype, but then profit margins tend to drop and changes need to be made. The largest hidden cost from prototype to finished product is in handling error cases and boundary conditions. In addition to adding to the development costs, ongoing support will be required to cover these cases. In our experience, here are a few of the common issues that tend to develop:

  1. Auditing and synchronization with customer databases — This is where your enforcement program (the feature that allows people onto your network) syncs up with your database. But suppose you lose power and then come back up. How do you re-validate all of your customers? Do you force them to log in again? (See the sketch after this list.)
  2. Capacity planning — In many cases, the test system did not account for the size of a growing system. At what point will you be forced to divide and transition to multiple authentication systems?
  3. General “feature creep” — This occurs when changing customer expectations pressure the vendor to overrun a fixed-price bid. This in turn leads to shoddy work and more problems as the vendor tries to cut corners in order to hold onto some profit margin.
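
To illustrate the synchronization problem in item 1, here is a hedged sketch of one way an enforcement device might rebuild its state from the billing database after a power cycle, rather than forcing every subscriber to log in again. Every function, field, and value here is a hypothetical placeholder, not any particular vendor's API.

```python
# Rebuild the in-memory enforcement table from the billing database on restart.

def fetch_active_accounts(billing_db):
    """Placeholder: IPs the billing system currently considers paid up."""
    return {row["ip"] for row in billing_db if row["status"] == "active"}

def resync_enforcement(billing_db, enforcement_table):
    active = fetch_active_accounts(billing_db)
    # Re-admit paying customers without a forced re-login...
    for ip in active - set(enforcement_table):
        enforcement_table[ip] = "allowed"
    # ...and drop anyone whose account lapsed while the box was down.
    for ip in set(enforcement_table) - active:
        del enforcement_table[ip]
    return enforcement_table

billing_db = [{"ip": "10.0.0.5", "status": "active"},
              {"ip": "10.0.0.9", "status": "suspended"}]
print(resync_enforcement(billing_db, {"10.0.0.9": "allowed"}))
# {'10.0.0.5': 'allowed'}
```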

Conclusion

Based on this discussion, it’s clear that the perfect, one-time-fix NAC billing system may still only be in the minds of users. Therefore, it’s not a matter of trying to find the flawless solution but rather of taking your own needs into account while understanding the limitations of existing options. If you have a clear idea of what you need, as well as a reasonable expectation of what certain solutions can provide (and at what cost), the process of finding and implementing an NAC billing system will not only be more effective but also more painless.

The Dark Side of Net Neutrality


Net neutrality, however idyllic in principle, comes with a price. The following article was written to shed some light on the big money behind the propaganda of net neutrality. It may change your views, but at the very least it will peel back one more layer of the onion that is the issue of net neutrality.

First, an analogy to set the stage:

I live in a neighborhood that equally shares a local community water system among 60 residential members. Nobody is metered. Through a mostly verbal agreement, all users try to keep our usage to a minimum. This requires us to be very water conscious, especially in the summer months when the main storage tanks need time to recharge overnight.

Several years ago, one property changed hands, and the new owner started raising organic vegetables using a drip irrigation system. The neighborhood precedent had always been that using water for a small lawn and garden area was an accepted practice; however, the new neighbor expanded his garden to three acres and now sells his produce at the local farmers market. Even with drip irrigation, his water consumption is likely well beyond the rest of the neighborhood combined.

You can see where I am going with this. Based on this scenario, it’s obvious that an objective observer would conclude that this neighbor should pay an additional premium — especially when you consider he is exploiting the community water for a commercial gain.

The Internet, much like our neighborhood example, was originally a group of cooperating parties (educational and government institutions) that connected their networks in an effort to easily share information. There was never any intention of charging for access amongst members. As the Internet spread away from government institutions, last-mile carriers such as cable and phone companies invested heavily in infrastructure. Their  business plans assumed that all parties would continue to use the Internet with lightweight content such as Web pages, e-mails, and the occasional larger document or picture.

In the latter part of 2007, a few companies, with substantial data content models, decided to take advantage of the low delivery fees for movies and music by serving them up over the Internet. Prior to their new-found Internet delivery model, content providers had to cover the distribution costs for the physical delivery of records, video cassettes and eventually discs.

As of 2010, Internet delivery costs associated with the distribution of media had plummeted to near zero. It seems that consumers have pre-paid their delivery cost when they paid their monthly Internet bill. Everybody should be happy, right?

The problem is, as per our analogy with the community water system, we have a few commercial operators jamming the pipes with content, and jammed pipes have a cost. Upgrading a full Internet pipe at any level requires a major investment, and providers to date are already leveraged and borrowed with their existing infrastructure. Thus, the Internet companies that carry the data need to pass this cost on to somebody else.

As a result of these conflicting interests, we now have a pissing match between carriers and content providers in which the latter are playing the “neutrality card” and the former are lobbying lawmakers to grant them special favors in order to govern ways to limit access.

Therefore, whether it be water, the Internet or grazing on public lands, absolute neutrality can be problematic — especially when money is involved. While the concept of neutrality certainly has the overwhelming support of consumer sentiment, be aware that there are, and  always will be, entities exploiting the system.

Related Articles

For more on NetFlix, see Level 3-Netflix Expose their Hidden Agenda.

Network Redundancy Must Start with Your Provider


By Art Reisman


Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper.

The chances of being killed by a shark are 1 in 264 million. The chance of being mauled by a bear on your weekend outing in the woods is even lower. Fear is a strange emotion rooted deep within our brains. Despite a rational understanding of risk, people are programmed to lose sleep and exhaust their adrenaline supply worrying about events that will never happen.

It is this same lack of rational risk evaluation that makes it possible for vendors to sell unneeded equipment to otherwise budget-conscious businesses. The current, in-vogue, unwarranted fears used to move network equipment are IPv6 preparedness and equipment redundancy.

Equipment vendors tend to push customers toward internally redundant hardware solutions, but not because they have your best interests in mind; if they did, they would first encourage you to get a redundant link to your ISP.

Twenty years of practical hands-on experience tells us that your Internet router’s chance of catastrophic failure is about 1 percent over a three-year period. On the other hand, your Internet provider has a 95-percent chance of having a full-day outage during that same three-year period.

If you are truly worried about a connectivity failure into your business, you MUST source two separate paths to the Internet to have any significant reduction in risk. Requiring fail-over on individual pieces of equipment, without first securing complete redundancy in your network from your provider, is like putting a band-aid on your finger while bleeding from your jugular vein.
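
Some back-of-the-envelope arithmetic, using the failure rates quoted above plus two simplifying assumptions of my own (each provider link averages roughly one full-day outage per three-year window, and outages on the two links are independent), shows why the provider link dominates the risk:

```python
# Rough expected-downtime comparison; the assumptions are illustrative only.
days = 3 * 365
p_link_down_on_given_day = 1 / days        # ~one outage day per 3-year window

# Single provider: roughly one full day of downtime expected over three years.
expected_down_single = days * p_link_down_on_given_day
print(f"Expected outage days, one provider:  {expected_down_single:.2f}")   # 1.00

# Two independent providers: you are only down when both fail on the same day.
expected_down_dual = days * p_link_down_on_given_day ** 2
print(f"Expected outage days, two providers: {expected_down_dual:.4f}")     # ~0.0009
```

Compare either number with the roughly 1 percent three-year failure rate of the router itself, and it is clear where the redundancy money should go first.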

Some other useful tips on making your network more reliable include:

Do not turn on unneeded bells and whistles on your router and firewall equipment.

Many router and device failures are not absolute. Equipment will get cranky, slow, or belligerent based on human error or system bugs. Although system bugs are rare when these devices are used in the default set-up, it seems turning on bells and whistles is often an irresistible enticement for a tech. The more features you turn on, the less standard your configuration becomes, and all too often the mission of the device is pushed well beyond its original intent. Routers doing billing systems, for example.

These “soft” failure situations are common, and the fail-over mechanism likely will not kick in, even though the device is sick and not passing traffic as intended. I have witnessed this type of failure first-hand at major customer installations. The failure itself is bad enough, but the real embarrassment comes from having to tell your customer that the fail-over investment they purchased is useless in a real-life situation. Fail-over systems are designed with the idea that the equipment they route around will die and go belly up like a pheasant shot point-blank with a 12-gauge shotgun. In reality, for every “hard” failure, there are 100 system-related lock ups where equipment sputters and chokes but does not completely die.

Start with a high-quality Internet line.

T1 lines, although somewhat expensive, are based on telephone technology that has long been hardened and paid for. While they do cost a bit more than other solutions, they are well-engineered to your doorstep.

Make sure all your devices have good UPS sources and surge protectors.

Consider this when purchasing redundant equipment: what is the cost of manually moving a wire to bypass a failed piece of equipment?

Look at this option before purchasing redundancy options on a single point of failure. We often see customers asking for redundant fail-over embedded in their equipment. This tends to be a strategy of purchasing hardware such as routers, firewalls, bandwidth shapers, and access points that “fail open” (meaning traffic will still pass through the device) should they catastrophically fail. At face value, this seems like a good idea to cover your bases. Most of these devices embed a failover switch internally in their hardware. The cost of this technology can add about $3,000 to the price of the unit.

If equipment is vital to your operation, you’ll need a spare unit on hand in case of failure. If the equipment is optional or used occasionally, then take it out of your network.

Again, these are just some basic tips, and your final Internet redundancy plan will ultimately depend on your specific circumstances. But, these tips and questions should put you on your way to a decision based on facts rather than one based on unnecessary fears and concerns.

Pros and Cons of Using Your Router as a Bandwidth Controller


So, you already have a router in your network, and rather than take on the expense of another piece of equipment, you want to double-up on functionality by implementing your bandwidth control within your router. While this is sound logic and may be your best decision, as always, there are some other factors to consider.

Here are a few things to think about:

1. Routers are optimized to move packets from one network to another with utmost efficiency. To do this, there is often minimal introspection of the data, meaning the router does one table look-up and sends the data on its way. However, as soon as you start doing some form of bandwidth control, your router must perform a higher level of analysis on the data. Additional analysis can overwhelm a router’s CPU without warning. Implementing non-routing features, such as protocol sniffing, can create conditions that are much more complex than the original router mission. For simple rate limiting (see the token-bucket sketch after this list) there should be no problem, but if you get into more complex bandwidth control, you can overwhelm the processing power your router was designed for.

2. The more complex the system, the more likely it is to lock up. For example, that old analog desktop phone set probably never once crashed. It was a simple device and hence extremely reliable. On the other hand, when you load up an IP phone on your Windows PC,  you will reduce reliability even though the function is the same as the old phone system. The problem is that your Windows PC is an unreliable platform. It runs out of memory and buggy applications lock it up.

This is not news to a Windows PC owner, but the complexity of a mission will have the same effect on your once-reliable router. So, when you start loading up your router with additional missions, it is increasingly more likely that it will become unstable and lock up. Worse yet, you might cause a subtle network problem (intermittent slowness, etc.) that is less likely to be identified and fixed. When you combine a bandwidth controller/router/firewall together, it can become nearly impossible to isolate problems.

3. Routing with TOS bits? Setting priority on your router generally only works when you control both ends of the link. This isn’t always an option with some technology. However, products such as the NetEqualizer can supply priority for VoIP in both directions on your Internet link.

4. A stand-alone bandwidth controller can be  moved around your network or easily removed without affecting routing. This is possible because a bandwidth controller is generally not a routable device but rather a transparent bridge. Rearranging your network setup may not be an option, or simply becomes much more difficult, when using your router for other functions, including bandwidth control.
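
Regarding point 1, "simple rate limiting" means something like the token bucket sketched below: a small, constant amount of work per packet and no payload inspection. This is an illustrative sketch only; the class, rate, and burst size are my own numbers, not any particular router's implementation.

```python
# A minimal token-bucket rate limiter: the kind of "simple rate limiting" a
# router can absorb, since each packet costs only a few arithmetic operations.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec        # long-term allowed rate
        self.capacity = burst_bytes           # how much burst we tolerate
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True                       # forward the packet
        return False                          # drop (or queue) it

bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=15_000)  # ~1 Mbps
print(bucket.allow(1500))                     # True while tokens remain
```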

These four points don’t necessarily mean using a router for bandwidth control isn’t the right option for you. However, as is the case when setting up any network, the right choice ultimately depends on your individual needs. Taking these points into consideration should make your final decision on routing and bandwidth control a little easier.

NetEqualizer Brand Becoming an Eponym for Fairness and Net Neutrality Techniques


An eponym is a general term used to describe from what or whom something derived its name. Therefore, a proprietary eponym could be considered a brand name, product or service mark which has fallen into general use.

Examples of common brand eponyms include Xerox, Google, and Band-Aid. All of these brands have become synonymous with the general use of their class of product, regardless of the actual brand.

Over the past seven years, we have spent much of our time explaining the NetEqualizer methods to network administrators around the country, and now there is mounting evidence that the NetEqualizer brand is taking on a broader societal connotation. NetEqualizer is in the early stages of becoming an eponym for the class of bandwidth shapers that balance network loads and ensure fairness and neutrality. As evidence, we cite the following excerpts taken from various blogs and publications around the world.

From Dennis OReilly <Dennis.OReilly@ubc.ca> posted on ResNet Forums

These days the only way to classify encrypted streams is through behavioral analysis.  ….  Thus, approaches like the NetEqualizer or script-based ‘penalty box’ approaches are better.

From a WISP tutorial by Butch Evans

About 2 months ago, I began experimenting with an approach to QOS that mimics much of the functionality of the NetEqualizer (http://www.netequalizer.com) product line.

From TMCnet

Comcast Announces Traffic Shaping Techniques like APconnections’ NetEqualizer…

From Technewsworld

It actually sounds a lot what NetEqualizer (www.netequalizer.com) does and most people are OK with it…..

From Network World

NetEqualizer looks at every connection on the network and compare it to the overall trunk size to determine how to eliminate congestion on the links

From the StarOS forum

If you’d really like to have your own netequalizer-like system then my advice…..

From VoIP-News

Has anyone else tried Netequalizer or something like it to help with VoIP QoS? It’s worked well so far for us and seems to be an effective alternative for networks with several users…..

A Tiered Internet – Penny Wise or Pound Foolish


With the debate over net neutrality raging in the background, Internet suppliers are preparing their strategies to bridge the divide between bandwidth consumption and costs. This topic is coming to a head now largely because of the astonishing growth rate of streaming video from the likes of YouTube, Netflix, and others.

The issue recently took a new turn and emerged front and center during a webinar in which Allot Communications and Openet presented their new product features, including their approach of integrating policy control and charging for wireless access to certain websites.

On the surface, this may seem like a potential solution to the bandwidth problem. Basic economic theory will tell you that if you increase the cost of a product or service, the demand will eventually decrease. In this case, charging for bandwidth will not only increase revenues, but the demand will ultimately drop until a point of equilibrium is reached. Problem solved, right? Wrong!

While the short-term benefits are obviously appealing for some, this is a slippery slope that will lead to further inequality in Internet access (You can easily find many articles and blogs regarding Net Neutrality including those referencing Vinton Cerf and Tim Berners-Lee — two of the founding fathers of the Internet — clearly supporting a free and equal Internet). Despite these arguments, we believe that Deep Packet Inspection (DPI) equipment makers such as Allot will continue to promote and support a charge system since it is in their best business interests to do so. After all, a pay-for-access approach requires DPI as the basis for determining what content to charge.

However, there are better and more cost-effective ways to control bandwidth consumption while protecting the interests of net neutrality. For example, fairness-based bandwidth control intrinsically provides equality and fairness to all users without targeting specific content or websites. With this approach, when the network is busy small bandwidth consumers are guaranteed access to the Internet while large bandwidth users are throttled back but not charged or blocked completely. Everyone lives within their means and gets an equal share. If large bandwidth consumers want access to more bandwidth, they can purchase a higher level of service from their provider. But let’s be clear, this is very different from charging for access to a particular website!

Although this content-neutral approach has repeatedly proved successful for NetEqualizer users, we’re now taking an additional step at mitigating bandwidth congestion while respecting network neutrality through video caching (the largest growth segment of bandwidth consumption). So, keep an eye out for the YouTube caching feature to be available in our new NetEqualizer release early next year.

Are Hotels Jamming 3G Access?


By Art Reisman

About 10 years ago, hotel operators were able to squeeze a nice chunk of change out of guests by charging high toll rates for phone service. However, most of that revenue went by the wayside in the early 2000s when every man, woman, and child on earth started carrying a cell phone. While this loss of revenue was in some cases offset by fees for Internet usage, thanks to 3G access cards most business travelers don’t even bother with hotel Internet service anymore — especially if they have to pay for it.

Yet, these access cards, and even your cell phone, aren’t always reliable in certain hotel settings, such as in interior conference rooms. But, are these simply examples of the random “dead spots” we encounter within the wireless world, or is there more to it? From off-the-record conversations with IT managers, we have learned that many of these rooms are designed with materials that deliberately block 3G signals — or at best make no attempt to allow the signals in. This is especially troubling in hotels that are still hanging on to the pay-for-Internet revenue stream, which will exist as long as customers (or their companies) will support it.

However, reliable complimentary Internet access is quickly becoming an increasingly common selling point for many hotels and is already a difference maker for some chains. We expect this will soon become a selling point even for the large conference centers that are currently implementing the pay-for-access plan.

While meeting the needs and expectations of every hotel guest can be challenging, the ability to provide reliable and affordable Internet service should be a relatively painless way for hotels and conference centers to keep customers happy. Reliable Internet service can be a differentiating factor and an incentive, or deterrent, for future business.

The challenge is finding a balance between the customer-satisfaction benefits of providing such a service and your bottom line. When it comes to Internet service, many hotels and conference centers are achieving this balance with the help of the NetEqualizer system. In the end, the NetEqualizer is allowing hotels and conference centers to provide better and more affordable service while keeping their own costs down. While the number of 3G and 4G users will certainly continue to grow, the option of good old wireless broadband is hard to overlook. And if it’s available to guests at a minimal fee or no extra charge, hotels and conference centers will no longer have to worry about keeping competing means of Internet access out.

Note: I could not find any specific references to hotels’ shrinking phone toll rate revenue, but as anecdotal evidence, most of the articles complaining about high phone toll charges were at least 7 years old, meaning not much new has been written on the subject in the last few years.

Update 2015

It seems that my suspicions have been confirmed officially. You can read the entire article here: Marriott fined for jamming Wi-Fi