Will the Rural Broadband Initiative Create New Jobs?


By Art Reisman, CTO, www.netequalizer.com

Art Reisman is a partner and co-founder of APconnections, a company that provides bandwidth control solutions (NetEqualizer) to
ISPs, Universities, Wireless ISPs, Libraries, Mining Camps, and any organization where groups of users must share their Internet resources equitably.

I’m sure that most people living in rural areas are excited about the prospects of lower cost broadband. But, what will be the ultimate result of this plan? Will it be a transforming technology on the scale of previous campaigns implemented for electricity and interstate highways?  Will the money borrowed see a return on investment through higher productivity and increased national wealth?

The answer is most likely “no.” Here’s why…

  1. The premise of a return on investment by bringing bandwidth to rural areas assumes there is some kind of dormant, untapped economic engine that will spring to life once sprinkled with additional bandwidth. This isn’t necessarily the case.
  2. There is also an implied myth that somehow rural America does not have access to broadband. This is simply not true.

Here are some questions and issues to consider:

Are rural communities really starved for bandwidth?

Most rural small businesses already have access to decent broadband speeds and are not stuck on dial-up. To be fair, rural broadband currently is not quite fast enough to watch unlimited YouTube, but it is certainly fast enough to allow for VoIP, e-mail, sending documents, and basic communication without the plodding pace of dial-up.

We support approximately 500 rural operators around the US and the world. The enabling technology for getting bandwidth to rural areas is well established, using readily available line-of-sight backhaul equipment.

For example, let’s say you want to start a broadband business 80 miles southwest of Wichita, Kansas. How do you tap into the major Internet backbone? The worst-case scenario is that the nearest point of presence (POP) for a major backbone Internet provider is in Wichita. For a few thousand dollars, you can run a microwave link from Wichita out to your town using common backhaul technology, and then distribute broadband access to your local community using point-to-multipoint equipment. The technology to move broadband into rural areas is not futuristic; it is a viable and profitable industry that has evolved to meet market demand.

How much bandwidth is enough for rural business needs?

We support hundreds of businesses and their bandwidth needs. From our observations, unless a business is specifically a content distribution or hosting company, it purchases a minimal pipe, much less bandwidth per capita than a consumer household.

Why? They don’t want to subsidize their employees’ YouTube and online entertainment habits. Therefore, they typically don’t need more than 1.5 Mbps for an office of 20 or so employees.
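As a rough back-of-the-envelope check, even a 1.5 Mbps pipe goes a long way when only a fraction of an office is actively using the connection at any instant. The concurrency figure in this little sketch is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope estimate of per-user share on a small office pipe.
# The 30% concurrency figure is an illustrative assumption, not a measurement.

PIPE_KBPS = 1500          # 1.5 Mbps office connection
EMPLOYEES = 20
CONCURRENCY = 0.30        # assume ~30% of staff are actively transferring at once

active_users = max(1, round(EMPLOYEES * CONCURRENCY))
share_kbps = PIPE_KBPS / active_users

print(f"~{active_users} concurrently active users, ~{share_kbps:.0f} kbps each")
# ~6 concurrently active users, ~250 kbps each -- plenty for e-mail, VoIP,
# and document transfer, though not for everyone streaming video at once.
```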

As mentioned, bandwidth in rural American towns is not quite up to the same standards as major metro areas, but the service is adequate to ensure that businesses are not at a disadvantage. Most high-speed connections beyond business needs are used primarily for entertainment: watching videos, playing Xbox, and so on. It’s not that these activities are bad, it’s just that they are consumer activities and not related to business productivity. Hence, considering this, I would argue that a government subsidy to bring high-speed service into rural areas will have little additional economic impact.

The precedent of building highways to rural areas cannot be compared to broadband.

Highways did open the country to new forms of commerce, but there was a clear geographic hurdle to overcome that no commercial entity would take on. There were farm producers in rural America, vital to our GDP, who had to get product to market efficiently.

The interstate system was necessary to open the country to commerce, and I would agree that moving goods from coast to coast via highway certainly benefits everybody. Grain and corn from the Midwest must be brought to market through a system of feeder roads connecting into the Interstate and rail systems. And moving goods from almost anywhere requires at least a segment of highway.

But the Internet transports data, and there is no geographic restriction on where data gets created and consumed. So, there is no underlying economic need to make use of rural America with respect to data. Even if there were a small business building widgets in rural America, I challenge any government official to cite one instance of a business not being able to function for lack of Internet connectivity. I am able to handle my e-mail on a $49-per-month WildBlue Internet connection 20 miles from the nearest town in the middle of Kansas, and my customers cannot tell the difference — and neither can I.

With broadband there is only data to transport, and unlike the geographic necessity of farm products, there is no compelling reason why data needs to be produced in rural areas. Nor is there evidence of any problem moving it from one end of the country to the other; the major links between cities are already well established.

Since Europeans are far better connected than the US, we are falling behind.

This comparison is definitely effective in convincing Americans that something drastic needs to be done about the country’s broadband deficiencies, but it needs to be kept in perspective.

While it is true the average teenager in Europe can download and play oodles more games with much more efficiency than a poor American farmhand in rural Texas, is that really setting the country back?

Second, the population densities in Western Europe make the economics of high-speed links to everybody much more feasible than stringing lines through rural towns 40 miles apart in America’s heartland. I don’t think the Russians are trying to send gigabit lines to every village in Siberia, which would be a more realistic analogy than comparing U.S. broadband coverage to Western Europe in general.

Therefore, while the prospect of expanded broadband Internet access to rural America is appealing for many reasons, both the positive outcomes of its implementation as well as the consequences of the current broadband shortcomings must be kept in perspective. The majority of rural America is not completely bandwidth deprived. Although there are shortcomings, they are not to the extent that commerce is suffering, nor to the extent that changes will lead to a significant increase in jobs or productivity. This is not to say that rural bandwidth projects should not be undertaken, but rather that overly ambitious expectations should not be the driving force behind them.

Looks like Robert Mitchell, in this 2007 PC World article, disagrees with me.

Comcast Suit: Was Blocking P2P Worth the Final Cost?


By Art Reisman
CTO of APconnections
Makers of the plug-and-play bandwidth control and traffic shaping appliance NetEqualizer


Comcast recently settled a class action suit in the state of Pennsylvania regarding its practice of selectively blocking P2P traffic. So far, the first case was settled for $16 million, with more cases on the docket yet to come. To recap: Comcast and other large ISPs invested in technology to thwart P2P, denied involvement when first accused, got spanked by the FCC, and now Comcast is looking to settle various class action suits.

When Comcast’s practices were established, P2P usage was skyrocketing with no end in sight, and blocking some of it was necessary to preserve reasonable speeds for all users. Given that there was no specific law or ruling on the books, mucking with P2P to alleviate gridlock seemed like a rational business decision. It made even more sense considering that DSL providers were stealing disgruntled customers. With that said, Comcast wasn’t alone in the practice — all of the larger providers were throttling P2P to some extent to ensure good response times for all of their customers.

Yet, with the lawsuits mounting, it appears on face value that things backfired a bit for Comcast. Or did they?

We can work out some very rough estimates of the final cost trade-off. Here goes:

I am going to guess that before this plays out completely, settlements will run close to $50 million or more. To put that in perspective, Comcast shows a 2008 profit of close to $3 billion, so $50 million is hardly a dent to their stockholders. But, in order to play this out, we must ask what the ramifications would have been of not blocking P2P back when all of this began and P2P was a more serious bandwidth threat. (Today, while P2P has declined, YouTube and online video are now the primary bandwidth hogs.)

We’ll start with the customer. The cost of acquiring a new customer is usually calculated at around six months of service, or approximately $300. So, to make things simple, we’ll assume the net cost of losing a customer is roughly $300. In addition, there are also the support costs related to congested networks, which can easily run $300 per customer incident.

The other, more subtle cost of P2P is exchange fees. The methods used to deter P2P traffic were designed to keep traffic on the Comcast network. ISPs pay for exchanging data when they hand off to other networks, and by limiting the amount of data exchanged, they save money. I did some cursory research on the costs involved with exchanging data and did not come up with anything concrete, so I’ll assume a heavy P2P customer costs an extra $5 per month.

So, let’s put the numbers together to get an idea of how much potential financial damage P2P was causing back in 2007 (again, these are based on estimates and not fact; comments and corrections are welcome).

  • Comcast had approximately 15 million broadband customers in 2008.
  • If 1 in 100 were heavy P2P users, that is 150,000 users, or roughly $750,000 per month in exchange costs.
  • Net lost customers to a competitor might be 1 in 500 a month. That would run $9 million a month.
  • Support calls due to preventable congestion might run another 1 out of 500 customers or $9 million a month.

So, even with conservative assumptions, incremental costs related to unmitigated P2P over 2007 and 2008 could easily have run well over $400 million right off the bottom line.
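A quick back-of-the-envelope script, using only the assumptions stated above (all of which are rough guesses, not actual Comcast figures), shows how the monthly and two-year totals come together:

```python
# Rough estimate of Comcast's potential cost of NOT mitigating P2P, 2007-2008.
# All inputs are the guesses stated in the article, not actual Comcast figures.

subscribers = 15_000_000

heavy_p2p_share   = 1 / 100    # 1 in 100 customers are heavy P2P users
exchange_cost_mo  = 5          # extra $/month in exchange fees per heavy user

churn_share       = 1 / 500    # customers lost to competitors each month
cost_per_churn    = 300        # ~6 months of service

support_share     = 1 / 500    # congestion-related support incidents each month
cost_per_incident = 300

exchange = subscribers * heavy_p2p_share * exchange_cost_mo     # ~$750,000
churn    = subscribers * churn_share * cost_per_churn           # ~$9,000,000
support  = subscribers * support_share * cost_per_incident      # ~$9,000,000

monthly = exchange + churn + support
print(f"~${monthly/1e6:.1f} million per month, ~${monthly*24/1e6:.0f} million over 24 months")
# ~$18.8 million per month, ~$450 million over 24 months
```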

Therefore, while these calculations are approximations, in retrospect it was likely financially well worth the risk for Comcast to mitigate the effects of unchecked P2P. Of course, the public relations costs are much harder to quantify.

Bandwidth Quota Prophecy plays out at Comcast.


A couple of years ago we pointed out how implementing a metered usage policy could create additional overhead.  Here is an excerpt:

To date, it has not been a good idea to flaunt a quota policy and many ISPs do their best to keep it under the radar. In addition, enforcing and demonstrating a quota-based system to customers will add overhead costs and also create more customer calls and complaints. It will require more sophistication in billing and the ability for customers to view their accounts in real time. Some consumers will demand this, and rightly so.

Today, two years after Comcast started a fair-use policy based on quotas, they announced a new tool that allows customers to see their usage and gives them a warning before being cut off. I suspect the new tool is designed to alleviate the issues we mention in the paragraph above.

NetEqualizer customers can usually accomplish bandwidth reductions fairly without the complexity of quota systems, but in a pinch we also offer a quota system on our equipment.
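For readers wondering what the overhead of a quota system looks like in practice, here is a minimal sketch of the kind of usage meter with a warning threshold that Comcast’s new tool appears to provide. The cap and warning level are invented for illustration, not any ISP’s actual policy:

```python
# Minimal sketch of a monthly usage quota with a warning threshold.
# The 250 GB cap and 80% warning level are illustrative, not an actual policy.

GB = 10**9
MONTHLY_CAP = 250 * GB
WARN_AT = 0.80            # warn the customer at 80% of the cap

def quota_status(bytes_used: int) -> str:
    """Return a customer-facing status string for the current billing month."""
    fraction = bytes_used / MONTHLY_CAP
    if fraction >= 1.0:
        return "Cap exceeded: service may be suspended or surcharged."
    if fraction >= WARN_AT:
        return f"Warning: {fraction:.0%} of your monthly allowance used."
    return f"OK: {fraction:.0%} of your monthly allowance used."

print(quota_status(120 * GB))   # OK: 48% of your monthly allowance used.
print(quota_status(210 * GB))   # Warning: 84% of your monthly allowance used.
```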

Need for Equalizing on Verizon Data Network?


By Art Reisman

CTO http://www.netequalizer.com

I read a blog post today describing how the 3G wireless providers will not have the capacity to meet growing demand. Data usage, with the boom of personal devices, has finally ramped up and caught them underpowered.

My observations:

It just so happens that I rely on a Verizon broadband card when I am on the road. I love their service; it is by far the best of the carriers I have tried.

I spent a couple of days in Gainesville, Florida this week, where my Verizon connection seemed consistently closer to dial-up than to typical broadband. My measurement technique is pragmatic and less than scientific: if I wait 4 to 5 seconds for a small text e-mail to send, it is a sure sign I am not on their 3G network. You can move in and out of 3G service depending on where you are. I then went down to Sanibel Island and my speeds picked back up to broadband levels again.
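For the curious, the same pragmatic check can be automated: time a small transfer and treat anything over a few seconds as a sign you have fallen off the high-speed network. This is a rough sketch, not a calibrated speed test, and the URL and threshold are just placeholders:

```python
# Crude connectivity check: time a small HTTP fetch and flag sluggish links.
# A multi-second round trip for a tiny payload suggests you are not on a
# high-speed (e.g., 3G) connection. Rough heuristic only, not a real speed test.

import time
import urllib.request

URL = "http://example.com/"      # any small, reliably reachable page
SLOW_THRESHOLD_SEC = 4.0         # mirrors the "4 to 5 seconds for an e-mail" rule of thumb

start = time.time()
with urllib.request.urlopen(URL, timeout=30) as response:
    response.read()
elapsed = time.time() - start

if elapsed > SLOW_THRESHOLD_SEC:
    print(f"{elapsed:.1f}s for a small fetch -- probably off the high-speed network")
else:
    print(f"{elapsed:.1f}s -- connection looks healthy")
```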

The Sanibel speeds put an exclamation point on how degraded my service was up in Gainesville. Obviously this is anecdotal, as there could be other factors at play, but here are the two obvious explanations for the faster response times on Sanibel Island compared to Gainesville:

1) Gainesville is not covered by 3G (high-speed broadband).

2) Sanibel Island lacks the college students and younger crowd that drive up data usage with their video downloads and streaming audio.

I am guessing the answer is number 2.

Although Verizon, in my opinion, clearly has the best network, there is some room for improvement here in Florida.

Based on my limited observations this week, I suspect that a few strategically placed NetEqualizers would help speed up response times for services such as e-mail and web browsing in these congested areas. Obviously this would be at the expense of people watching videos on their portable devices; however, it is unlikely those services are running all that quickly on a congested network to start with.

What does it take to build a firewall?


Editor’s Note:

This paragraph, written by Michael W. Lucas, was a lead-in to a nice testimonial for a pfSense firewall. For anybody in IT consulting, this first part is classic Dilbert.

Found on pfsense.org

My friends and co-workers know that I build firewalls. At least once a month someone says “My company needs a firewall with X and Y, and the price quotes I’ve gotten are tens of thousands of dollars. Can you help us out?”

Anyone who builds firewalls knows this question could be more realistically phrased as “Could you please come over one evening and slap together some equipment for me, then let me randomly interrupt you for the next three to five years to have you install new features, debug problems, set up features I didn’t know enough to request, attend meetings to resolve problems that can’t possibly be firewall issues but someone thinks might be the firewall, and identify solutions for my innumerable unknown requirements? Oh, and be sure to test every possible use case before deploying anything.”

 

University of British Columbia IT department chimes in on Layer 7 shaping and its fallacy


Editor’s note: The following excerpt was pulled from the ResNet User Group mailing list, Oct. 17, 2009.

Most subscribers to this user group are IT directors or administrators for large residence networks at various universities. Many manage upwards of tens of thousands of Internet users. If you are an ISP, I would suggest you subscribe to this list and monitor it for ideas. Please note that vendor solicitation is frowned upon on the ResNet list.

As for the post below: the first part is Dennis’s recommendation for a good bandwidth shaper; he uses a carrier-grade Cisco product.

The second part is a commentary on the fallacy of Layer 7 shaping. No, we do not know Dennis, nor does he use our products; he just happens to agree with our philosophy after trying many other products.

From: Dennis OReilly <Dennis.OReilly@ubc.ca>
To: Resnet Forum <RESNET-L@listserv.nd.edu>
Date: Sat, Oct 17, 2009 at 12:35 AM
Subject: Re: Packet Shaping Appliance

At 9:22 AM -0400 10/16/09, Brandon Burleigh wrote:

We are researching packet shaping appliance options as our current model is
end-of-life.  It is also at its maximum for bandwidth and we need to increase
our bandwidth with our Internet service provider.  We are interested in
knowing what hardware others are using on their Internet service for packet
shaping.  Thank you.

At the University of British Columbia we own and still use four PS10000’s.   A year ago we purchased a Cisco SCE 2020 which has 4 x 1G interfaces.  The SCE 2020 is approx the same price point as the PS10000.  There is also an SCE 8000 model which has 4 x 10G interfaces, also at a decent price point.

Oregon State brought the SCE product line to our attention at Resnet Symposium 2007.  A number of other Canadian universities recently purchased this product.

The SCE is based on P-Cube technology which Cisco acquired in 2004.

In a nutshell comparing the SCE to the PS10000:
– PS10000 reporting is much superior
– PS10000 and SCE are approx equal at ability to accurately classify P2P
– SCE is essentially a wire speed device
– SCE is a scalable, carrier-grade platform
– Installation of SCE is more complicated than PS10000
– SCE has some capability to identify and mitigate DoS and DDos attacks
– SCE handles asymmetric routing
– SCE has fine grained capabilities to control bandwidth

It is becoming more and more difficult over time for any packet shaping device like a Packetshaper, or a Procera, or an SCE to accurately classify P2P traffic. These days the only way to classify encrypted streams is through behaviorial analysis.  In the long run this is a losing proposition.  Thus, approaches like the NetEqualizer or script-based ‘penalty box’ approaches are better.   However, boxes like the SCE which have excellent capabilities to control bandwidth on a per user basis are also viable.  Otherwise the carriers wouldn’t be using these products.
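Editor’s note: for readers unfamiliar with the script-based “penalty box” approach Dennis mentions, the core idea is to flag hosts by behavior, for example an unusually high number of simultaneous connections (typical of P2P clients), rather than by inspecting payloads. Here is a minimal, purely illustrative sketch; the threshold and sample data are made up:

```python
# Minimal sketch of a behavior-based "penalty box": flag hosts with an unusually
# high number of simultaneous connections (typical of P2P clients) instead of
# inspecting packet payloads. The threshold is illustrative only.

from collections import Counter
from typing import Iterable, Set, Tuple

CONNECTION_LIMIT = 50   # flows per host before we consider it a heavy P2P user

def penalty_candidates(flows: Iterable[Tuple[str, str]]) -> Set[str]:
    """flows is an iterable of (source_ip, dest_ip) pairs for active connections.
    Returns the set of source IPs exceeding the connection limit."""
    counts = Counter(src for src, _ in flows)
    return {ip for ip, n in counts.items() if n > CONNECTION_LIMIT}

# Example: a host with hundreds of flows stands out immediately.
sample = [("10.0.0.5", f"198.51.100.{i % 250}") for i in range(300)] + \
         [("10.0.0.9", "203.0.113.7")] * 3
print(penalty_candidates(sample))   # {'10.0.0.5'}
```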

Network World Blog missing the boat on Packeteer’s decline in revenue


The one bad thing about being a publicly traded company is that you cannot hide your declining sales. In the following Network World blog post and related comments, the authors make some good points as to where and why they would choose Cisco WAN optimization over Blue Coat and vice versa. They also comment on all sorts of reasons why Blue Coat’s revenue in this area is declining, although they neglect one obvious reason.

Prices of bandwidth have fallen quite rapidly over the last 10 years. In some larger metro areas, Internet access runs for as little as $300 per month for 10 megabits. The same link 10 years ago would have run close to $5,000 per month or more. Despite falling bandwidth prices, WAN optimization solutions from the likes of Blue Coat, Cisco, and Riverbed remain relatively expensive. Many potential WAN optimization customers will simply upgrade their bandwidth rather than invest in new optimization equipment. You would think that vendors would lower their prices to compete, and they are to some degree; however, the complexity of their core solutions imposes a minimum price floor. The factors that create the price floor are the methodology of the internal technology and sales channel costs, and unfortunately these fixed costs cannot keep pace with falling bandwidth prices.
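To make the trade-off concrete, here is a quick comparison of the two options. Only the $300-per-10-Mbps figure comes from the paragraph above; the appliance price and the capacity it effectively frees up are hypothetical numbers for illustration:

```python
# Compare "just buy more bandwidth" against "buy a WAN optimizer".
# The $300/month-per-10-Mbps figure comes from the article; the appliance price
# and effective-capacity gain are hypothetical, for illustration only.

bandwidth_cost_per_10mbps = 300          # $/month, larger metro pricing
appliance_cost            = 15_000       # hypothetical WAN optimizer, per site
effective_gain_mbps       = 10           # hypothetical capacity the optimizer "frees up"

extra_bandwidth_monthly = bandwidth_cost_per_10mbps * (effective_gain_mbps / 10)
breakeven_months = appliance_cost / extra_bandwidth_monthly

print(f"Break-even: {breakeven_months:.0f} months")   # 50 months at these prices
# At $5,000/month for the same 10 Mbps a decade ago, the same box paid for
# itself in about 3 months -- which is the point: cheap bandwidth erodes the ROI.
```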

Our prediction is that WAN optimization devices will slowly become a commodity, with reduced complexity through automation. One measure of the current complexity is all the acronyms being tossed around to describe WAN optimization. Sales pitches filled with acronyms suggest that these devices are just too complicated for the market to continue to use. They will become turnkey simple and lower cost, or die. No player is bigger than the market force of cheaper bandwidth.

Related articles:

ROI calculation for packet shaping equipment

Does lower cost bandwidth foretell a decline in bandwidth shaper sales?

http://www.networkworld.com/community/comment/reply/46590

How Does NetEqualizer Compare to Mikrotik?


Mikrotik is a supercharged Swiss Army knife solution; no feature is off limits on their product: routing, bandwidth control, Layer 7 filters, PPPoE, firewall, they have it all. If I were going off to start a WISP with a limited budget and could bring only one tool with me, it would be a Mikrotik solution. On the other hand, the NetEqualizer grew up around the value proposition of optimizing bandwidth on a network and doing it in a smart, turnkey fashion. It was developed by a wireless operator who realized that high-quality, easy-to-use bandwidth control was needed to ensure a profitable business.

Yes, there is some overlap between the two, and over time the NetEqualizer has added auxiliary features of its own; for example, NetEqualizer has a firewall and a network access control module. But the primary reason an operator purchases a NetEqualizer still goes back to our core mission: to keep their margins in this competitive business, operators need to optimize their Internet trunk without paying an army of technicians to maintain a piece of equipment.


The following was part of a conversation with a customer who was interested in comparing Mikrotik queues to NetEqualizer equalizing. So take off your Mikrotik hat for a minute and read on about a different philosophy on how to control bandwidth.

Equalizing is a bit different than Mikrotik, so we can’t make exact feature comparisons. NetEqualizer lets users run until the network (or pool) is crowded and then slaps the heavy users for a very short duration, faster than you or I could do it (if you tried). Do you have the arcade game “whack-a-mole” in Australia? Where you hit the moles on the head with a hammer when they pop up out of the holes?

The vision of our product was to allow operators to plug it in, give priority to short real-time traffic when the network is busy, and to leave traffic alone when shaping is not needed.

It does this based on connections, not users (as per your question).

Suppose out of your 1,000 users, 90 percent were web surfing, 5 percent were watching YouTube, 20 percent were doing chat sessions while watching YouTube and web surfing, and another 20 percent were on Skype calls while web surfing.

Based on the different demand levels of all these users, it is nearly impossible to divide the bandwidth evenly.

But if the trunk were saturated in the example above, the NetEqualizer would chop down the YouTube streams (since they are the biggest), leaving all the other streams alone. So instead of having your network crash completely, a few YouTube videos would break up for a few seconds, and then when conditions abated they would be allowed to run. I cannot tell you the exact allocations per user because we don’t try to hit fixed allocations; we just put delay on the nasties until overall bandwidth usage drops back to 90 percent. It is never the same. And then we quickly take the delay away when things are better.

The value to you is that you get the best possible usage of your network bandwidth without micromanaging everything. There are no queues to manage. We have been using this model with ISPs for 6 years.
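A very rough sketch of the idea described above, in simplified Python. The thresholds, penalty, and data structures are illustrations only, not the actual NetEqualizer implementation:

```python
# Simplified illustration of "equalizing": when the trunk is saturated, add a
# short-lived delay to the largest flows until utilization falls back below a
# target, then remove the delay. Numbers and structures are illustrative only,
# not the actual NetEqualizer implementation.

TRUNK_CAPACITY_KBPS = 10_000
TARGET_UTILIZATION  = 0.90        # start shaping above 90% of the trunk
PENALTY_MS          = 40          # brief added latency for the heaviest flows

def shape(flows: dict) -> dict:
    """flows maps flow_id -> current rate in kbps.
    Returns flow_id -> artificial delay in ms to apply this cycle."""
    total = sum(flows.values())
    if total <= TARGET_UTILIZATION * TRUNK_CAPACITY_KBPS:
        return {fid: 0 for fid in flows}          # network not busy: leave everyone alone

    # Penalize the biggest flows first (e.g., video streams); small flows untouched.
    delays = {fid: 0 for fid in flows}
    for fid in sorted(flows, key=flows.get, reverse=True):
        delays[fid] = PENALTY_MS
        total -= flows[fid] * 0.5                 # assume the delay roughly halves that flow
        if total <= TARGET_UTILIZATION * TRUNK_CAPACITY_KBPS:
            break
    return delays

# Example cycle: one big video stream pushes a 10 Mbps trunk over the threshold.
active = {"video1": 5000, "video2": 4000, "web": 800, "voip": 100, "chat": 50}
print(shape(active))   # {'video1': 40, 'video2': 0, 'web': 0, 'voip': 0, 'chat': 0}
```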

If you do want to put additional rules onto users, you can do that with individual rate limits, or VLAN limits.

Lastly, if you have a very high-priority client that must run video, you can give them an exemption if needed.

To control P2P, you can use our connection limits, as most P2P clients overload APs with massive numbers of connections. We have a fairly smart, simple way to spot this type of user and keep them from crashing your network.

Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable. When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency-sensitive applications, such as VoIP and email. Click here for a full price list.

NetEqualizer provides a Net Neutrality solution for bandwidth control.


By Eli Riles, NetEqualizer VP of Sales

This morning I read an article on how some start-up companies are being hurt while awaiting the FCC’s decision on Net Neutrality.

Late in the day, a customer called and exclaimed, “Wow, now with the FCC coming down hard on technologies that jeopardize net neutrality, your business must be booming, since you offer an excellent, viable alternative.” And yet, in the face of this controversy, several of our competitors continue to sell deep packet inspection devices to customers.

Public operators and businesses that continue to purchase such technology are likely uninformed about the growing firestorm of opposition against Deep Packet Inspection techniques. The allure of being able to identify and control Internet traffic by type is a very natural solution, which customers often demand. Suppliers who sell DPI devices are just doing what their customers have asked. As with all technologies, once the train leaves the station it is hard to turn around. What is different in the case of DPI is that suppliers and ISPs had their way with an ignorant public starting in the late ’90s. Nobody really gave much thought as to how DPI might become the villain in the controversy over Net Neutrality. It was just assumed that nobody would notice their Internet traffic being watched and redirected by routing devices. With behemoths such as Google having a vested interest in keeping traffic flowing without interference on the Internet, commercial deep packet inspection solutions are slowly falling out of favor in the ISP sector. The bigger question for the players betting the house on DPI is: will it fall out of favor in other business verticals?

The NetEqualizer decision to do away with DPI two years ago is looking quite brilliant now, although at the time it was clearly a risk to buck market trends. Today, even in the face of a worldwide recession, our profits and unit sales are up for the first three quarters of 2009.

As we have claimed in previous articles, there is a time and place for deep packet inspection; however, any provider using DPI to manipulate data is looking for a potential dogfight with the FCC.

NetEqualizer has been providing alternative bandwidth control options for ISPs, businesses, and schools of all sizes for 7 years without violating any of the Net Neutrality sacred cows. If you have not heard about us, maybe now is a good time to pick up the phone. We have been on record touting our solution as fair and equitable for quite some time now.

Burstable Internet Connections — Are They of Any Value?


A burstable Internet connection conjures up the image of a supercharged Internet reserve, available at your discretion during a moment of need, like pushing the gas pedal to the floor to pass an RV on a steep grade. Americans find comfort knowing that they have that extra horsepower at their disposal. The promise of power is ingrained in our psyche and is easily tapped into when marketing an Internet service. However, if you stop for a minute and think about what a bandwidth burst actually is, it might not be a feature worth paying for.

Here are some key questions to consider:

  • Is a burst one second, 10 seconds, or 10 hours at a time? This might seem like a stupid question, but it is at the heart of the issue. What good is a 1-second burst if you are watching a 20-minute movie?
  • If it is 10 seconds, then how long do I need to wait before it becomes available again?
  • Is it available all of the time, or just when my upstream provider(s) circuits are not busy?
  • And overall, is the burst really worth paying for? Suppose the electric company told you that you had a burstable electric connection or that your water pressure fluctuated up for a few seconds randomly throughout the day? Is that a feature worth paying for? Just because it’s offered doesn’t necessarily mean it’s needed or even that advantageous.

While the answers to each of these questions will ultimately depend on the circumstances, they all serve to point out a potential fallacy in the case for burstable Internet speeds: The problem with bursting and the way it is marketed is that it can be a meaningless statement without a precise definition. Perhaps there are providers out there that lay out exact definitions for a burstable connection, and abide by those terms. Even then we could argue that the value of the burst is limited.
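For readers who want a concrete picture, one common way a provider-side “burst” is implemented is a token bucket: the link refills credit at the committed rate, and a finite bucket of credit lets you exceed that rate only until the bucket empties. The rates and bucket size below are made-up numbers for illustration, not any provider’s plan, but they show why “how long does the burst last?” is the question that matters:

```python
# Token-bucket illustration of a "burstable" connection: you can exceed the
# committed rate only while the bucket has tokens, which answers the question
# "how long does the burst last?" The numbers are illustrative, not any ISP's plan.

COMMITTED_MBPS = 5       # rate the bucket refills at
BURST_MBPS     = 20      # peak rate while tokens remain
BUCKET_MBITS   = 150     # bucket size: megabits of "credit" available for bursting

def burst_duration_seconds(demand_mbps: float) -> float:
    """How long a constant demand above the committed rate can be sustained."""
    if demand_mbps <= COMMITTED_MBPS:
        return float("inf")                       # never drains the bucket
    drain_rate = min(demand_mbps, BURST_MBPS) - COMMITTED_MBPS
    return BUCKET_MBITS / drain_rate

print(f"{burst_duration_seconds(20):.0f} seconds of full-speed burst")   # 10 seconds
print(f"{burst_duration_seconds(8):.0f} seconds at 8 Mbps")              # 50 seconds
```

In other words, under these made-up numbers the advertised 20 Mbps “burst” lasts all of 10 seconds: fine for loading a web page, useless for a 20-minute movie.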

What we have seen in practice is that most burstable Internet connections are unpredictable and simply confuse and annoy customers. Unlike the turbo charger in your car, you have no control over when you can burst and when you can’t. What sounded good in the marketing literature may have little practical value without a clear contract of availability.

Therefore, to ensure that burstable Internet speeds really will work to your advantage, it’s important to ask the questions mentioned above. Otherwise, it very well may just serve as a marketing ploy or extra cost with no real payoff in application.

Update: October 1, 2009

Today a user group published a bill of rights in order to nail ISPs down on exactly what they are providing in their service contracts, particularly their claims of bandwidth speed.

I noticed that in the article, the bill of rights requires full disclosure about the speed of the provider’s link to the consumer’s modem. I am not sure this is enough to guarantee a fixed minimum speed to the consumer. You see, a provider could quite easily oversell the capacity at their switching point, the point where they hook up to a backbone of other providers. You cannot completely regulate speed across the Internet, since by design providers hand off or exchange traffic with other providers. Your provider cannot control the speed of your connection once it is off their network.
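To see why disclosing the last-mile speed alone isn’t enough, consider a simple oversubscription calculation at the provider’s switching point. All of the figures here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical illustration of oversubscription at a provider's switching point:
# the sum of advertised last-mile speeds can far exceed the uplink to the backbone.

customers            = 2_000
advertised_mbps      = 10      # speed disclosed on each customer's last-mile link
backbone_uplink_mbps = 1_000   # capacity where the provider hands off to other networks

sold_capacity = customers * advertised_mbps
ratio = sold_capacity / backbone_uplink_mbps

print(f"Sold {sold_capacity} Mbps against a {backbone_uplink_mbps} Mbps uplink "
      f"({ratio:.0f}:1 oversubscription)")
# Sold 20000 Mbps against a 1000 Mbps uplink (20:1 oversubscription)
```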

Posted by Eli Riles, VP of sales www.netequalizer.com.

Why is NetEqualizer the Low-Price Leader in Bandwidth Control?


Recently we have gotten feedback from customers stating that they almost did not consider the NetEqualizer because the price was so much less than solutions from the likes of Packeteer (Blue Coat), Allot NetEnforcer, and Exinda.

Sometimes low price will raise a red flag on a purchase decision, especially when the price is an order of magnitude less than the competition.

Given this feedback, we thought it would be a good idea to go over some of the major cost-structure differences between APconnections, maker of the NetEqualizer, and some of the competition.

1) NetEqualizers are sold mostly direct, by word of mouth. We do not have a traditional indirect sales channel.

– The downside for us as a company is that this does limit our reach a bit. Many IT departments do not have the resources to seek out new products on their own, and are limited to only what is presented to them.

– The good news for all involved is that selling direct takes quite a bit of cost out of delivering the product. Indirect sales channels need to be incentivized to sell; oftentimes they will steer the customer toward the highest-commission product in their arsenal. Our direct channel eliminates this overhead.

– The other good thing about not using a sales channel is that when you talk to one of our direct (non-commissioned) sales reps, you can be sure they are experts on the NetEqualizer. With a sales channel, a rep often sells many different kinds of products and can get rusty on some of the specifics.

2) We have bundled our manufacturing with a company that also produces a popular firewall. We also have a backup source to manufacture our products at all times, thus ensuring a steady flow of product without the liability of owning a manufacturing facility.

3) We have never borrowed money to run APconnections.

– This keeps us very stable and able to withstand market fluctuations.

– There are no greedy investors calling the shots, looking for a return and demanding higher prices.

4) The NetEqualizer is simple and elegant

– Many products keep adding features to grow their market share; we have a solution that works well and does not require constant ongoing engineering.

Net Neutrality Bill Won’t End Conflicts Between Users and Providers


This week, Representatives Edward Markey, a Massachusetts Democrat, and Anna Eshoo, a California Democrat, introduced the Internet Freedom Preservation Act, aimed at protecting the rights of Internet users and, ultimately, net neutrality. Yet, before net neutrality advocates unequivocally praise the bill, it should be noted that it protects the rights of Internet service providers as well. For example, as long as ISPs are candid with their customers in regard to their network optimization practices, the bill does allow for “reasonable network management,” stating:

“Nothing in this section shall be construed to prohibit an Internet access provider from engaging in reasonable network management consistent with the policies and duties of nondiscrimination and openness set forth in this Act. For purposes of subsections (b)(1) and (b)(5), a network management practice is a reasonable practice only if it furthers a critically important interest, is narrowly tailored to further that interest, and is the means of furthering that interest that is the least restrictive, least discriminatory, and least constricting of consumer choice available. In determining whether a network management practice is reasonable, the Commission shall consider, among other factors, the particular network architecture or technology limitations of the provider.”

While this stipulation is extremely important in the protection it provides Internet service providers, it is likely to come into conflict with some Internet users’ ideas of net neutrality. For example, the bill also states that it is ISPs’ “duty to not block, interfere with, discriminate against, impair or degrade the ability of any person to use an Internet access service to access, use, send, post, receive or offer any lawful content, application or service through the Internet.” However, even users of the NetEqualizer, one of the more hands-off approaches to network management, have no choice but to target the behavior of certain heavy customers. One person’s penchant for downloading music — legally or not — can significantly impact the quality of service for everyone else. And increasing bandwidth just to meet the needs of a few users isn’t reasonable either.

It would seem that this would be a perfect case of reasonable network management which would be allowed under the proposed bill. Yet many net neutrality advocates tend to quickly dismiss any management as an infringement upon the user’s rights. The protection of the users’ rights will likely get the attention in discussions about these types of bills, but there should also be just as much emphasis on the rights of the provider to reasonably manage their network and what this may mean for the idea of unadulterated net neutrality.

The fact that this bill includes the right to reasonably manage one’s network indicates that some form of management is typically necessary for a network to run at its full potential. The key is finding some middle ground.

Related article, September 22, 2009:

FCC rules in favor of Net Neutrality (the commentary on this blog is great and worth the read).

Results From Comcast’s New Bandwidth Shaping Approach Support Long-Time NetEqualizer Strategy


This week, a DSL Reports article explored the favorable customer response to the most recent changes in Comcast’s bandwidth shaping strategy. The article states:

“Last month we explored how Comcast and Sandvine’s network management technology continues to evolve. Unlike Comcast’s last system, which throttled upstream traffic for all users regardless of consumption, this new system identifies customers and throttles back consumption only if they’re on a congested node — and they’re a particular reason why. Even then, we haven’t seen complaints from users in our Comcast forum, which is a very good sign.”

Several months ago, we documented the similarities and differences between Comcast’s network management techniques and those of NetEqualizer. If you go back and read our older article, it sounds like these latest changes address many of the issues we raised and inch Comcast’s approach even closer to that of NetEqualizer. The key here is that, like NetEqualizer, they now only hit the users that are specifically breaking the camel’s back, and as the author points out, there are no complaints.

Although nobody from Comcast has ever conferred with us on our technology, we believe this new more specific shaping is very close to what we have been doing for years, and with similar results — no complaints.

To read the full DSL Reports article, click here.

Deep Packet Inspection Abuse In Iran Raises Questions About DPI Worldwide


Over the past few years, we at APconnections have made our feelings about Deep Packet Inspection clear, completely abandoning the practice in our NetEqualizer technology more than two years ago. While there may be times that DPI is necessary and appropriate, its use in many cases can threaten user privacy and the open nature of the Internet. And, in extreme cases, DPI can even be used to threaten freedom of speech and expression. As we mentioned in a previous article, this is currently taking place in Iran.

Although these extreme invasions of privacy are most likely not occurring in the United States, their existence in Iran is bringing increasing attention to the slippery slope that is Deep Packet Inspection. A July 10 Huffington Post article reads:

“Before DPI becomes more widely deployed around the world and at home, the U.S. government ought to establish legitimate criteria for authorizing the use of such control and surveillance technologies. The harm to privacy and the power to control the Internet are so disturbing that the threshold for using DPI must be very high. The use of DPI for commercial purposes would need to meet this high bar. But it is not clear that there is any commercial purpose that outweighs the potential harm to consumers and democracy.”

This potential harm to the privacy and rights of consumers was a major factor behind our decision to discontinue the use of DPI in any of our technology and invest in alternative means for network optimization. We hope that the ongoing controversy will be reason for others to do the same.

Google Questions Popular Bandwidth Shaping Myth


At this week’s Canadian Radio-Television and Telecommunications Commission Internet traffic hearing, Google’s Canada Policy Counsel, Jacob Glick, raised a point that we’ve been arguing for the last few years. Glick said:

“We urge you to reject as false the choice between debilitating network congestion and application-based discrimination….This is a false dichotomy. The evidence is, and experience in Canada and in the U.S. already shows, that carriers can manage their networks, reduce congestion and protect the open Internet, all at the same time.”

While we agree with Glick to a certain extent, we differ with the alternative proposed by hearing participants — simply increase bandwidth. This is not to say that increasing bandwidth isn’t the appropriate solution in certain circumstances, but questioning the validity of a dichotomy with an equally narrow third alternative doesn’t exactly expand the industry’s options, especially when increasing bandwidth isn’t always a viable solution for some ISPs.

The downsides of application-based shaping are one of the main reasons behind NetEqualizer’s reliance on behavior-based shaping. Therefore, while Glick is right that the above-mentioned dichotomy doesn’t explore all of the available options, it’s important to realize that the goals being promoted at the hearing are not solely achieved through increased bandwidth.

For more on how the NetEqualizer fits into the ongoing debate, see our past article, NetEqualizer Offers Net Neutrality, User Privacy Compromise.