Crickets for IPv6

Several years ago, I wrote an article arguing that there was still plenty of IPv4 address space, and that while the IPv6 hype had some merit, most of it was another push to scare organizations into buying a bunch of equipment they might not need.

It turns out that I was mostly correct.

How do I know this? We are regularly inside customer networks doing upgrades and support. Yes, we do see a smattering of IPv6 traffic in their logs, but it generally does not originate from their users, and at most it is a fraction of a percent. Basically, this means that their old IPv4 equipment probably would still suffice without upgrades had they gone that route.

Back in 2012 the sky was falling: everything needed to be converted over to IPv6 to save the Internet from locking up due to lack of address space. There may be corners of the Internet where that was true, but the dire predictions did not pan out in the enterprise. Why?

A lack of control over private address space with IPv6.

For example, one of the supposed benefits of IPv6 addressing schemes is that addresses are assigned to a device at the factory; with so many addresses available, they are practically infinite. The problem for an IT professional managing a network is that you can’t readily change that IPv6 address (as far as I know), and that is where the breakdown begins.

In private organizations, the IT department wants to manage bandwidth and security permissions. Although managing security and permissions is possible with IPv6, you lose the orderliness of an IPv4 address space.

For example, there is no easy shorthand notation with IPv6 to do something like:

“Block the address range from accessing a database server.”

With IPv4, the admin typically assigns IP addresses to different groups of people within the enterprise, and can then go back and make a general rule for all of those users with one stroke of the pen (keyboard).

With IPv6, the admin has no control over the IP addresses and would need to look them up, or come up with some other validation scheme, to set such permissions.
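To illustrate the orderliness being lost, here is a small sketch using Python’s standard ipaddress module. The subnet and the idea of an “engineering” group are hypothetical; the point is that one CIDR rule covers a whole IPv4 group at once:

```python
import ipaddress

# Hypothetical example: an admin hands the engineering department
# addresses out of one subnet, so a single CIDR rule covers them all.
engineering = ipaddress.ip_network("10.20.30.0/24")

def blocked_from_db(addr: str) -> bool:
    """One-line policy: deny the whole engineering range access to the DB."""
    return ipaddress.ip_address(addr) in engineering

print(blocked_from_db("10.20.30.7"))   # inside the range  -> True
print(blocked_from_db("10.20.40.7"))   # outside the range -> False
```

With factory-assigned IPv6 addresses there is no such tidy range to match against; the admin would have to enumerate individual addresses instead.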

I suppose the issues stated above could have been overcome by a more modern set of tools, but that did not happen either. Again, I wonder why?

I love answering my own questions. I believe the reason is that the embedded NAT/PAT addressing schemes in use prior to the IPv6 push were well established and working just fine. Although I am not tasked with administering a large network, I did sleep at a Holiday Inn (once), and enterprise admins do not want public IPs on the private side of their firewall for security purposes. Public IP addresses as an end in themselves are likely more of a security headache than the IPv4 NAT/PAT address schemes.

The devil’s advocate in me says that the flat, worldwide address space of an IPv6 scheme is elegant and simple on its face, not to mention practically infinite in terms of addresses: IPv6 promises billions of IP addresses for every living person on earth. It just was not compelling enough to supplant the embedded IPv4 solutions with their NAT/PAT addressing schemes.

Opinion: Location Based Content Services Must be Defeated

I am normally a law-abiding citizen when it comes to contracted services. For example, many years ago I purchased a house where the previous owner had hijacked their cable service.  I voluntarily turned myself in to get legal, and I would do it again if the same situation ever arose. On the other hand, when it comes to providers blocking or denying content based on your location, I feel violated and angry.  I may sound like a geezer, but in the spirit of the Internet, blocking content based on your location just seems wrong.  I don’t know if I am in the minority or mainstream with my opinion, and frankly I don’t care.  I will continue to do everything I can to defy location-based restrictions, and if I get arrested at some point, I may fight this all the way to the Supreme Court.

What follows is my list of location-enforcement transgressions. Let’s start with baseball. Every year I pay my $120 to subscribe to the league’s streaming service, and every year it blocks my local team per an agreement with a local TV provider who owns the rights to the broadcast. If you want to watch baseball in my home market, you must buy a $120-a-month cable service, and you have no other options. I’d be glad to pay for the content directly, like a pay-per-view event, but that is not an option either.

Five years ago the content blocking was pretty easy to circumvent; all I had to do was use a VPN connected to another city and everything worked fine. Last year MLB subscribed to a service that supplied them a list of every commercial VPN provider and the IP ranges those providers owned. So if you used a VPN service you could not watch, even games that would not normally be blacked out in your market; it was just indiscriminate VPN blocking.

My next counterpunch was to set up my own proxy server and put it behind a friend’s router in a different geographic location. Essentially, when I log in they see me coming from Seattle, Washington, and from a residential IP address that is not on their list of commercial VPN providers. This works pretty well, if you have a friend willing to host a proxy for you.

Legalized Internet gambling is another front in location-based denial. Internet sports betting is legal in some states and not others, and the gambling sites have taken location-based blocking to another level. It’s not just enforced based on your originating IP or VPN usage; they sniff your computer’s location services to prove where you are. If you turn off your location services, they deny you service.

I am now working on a way to circumvent this intrusion, even though I don’t gamble and have no real intention of using my solution at this time.

I ask myself what motivates me to spend time and energy circumventing these draconian rules when I don’t even want their services. All I can come up with is that, from a philosophical standpoint, I want Internet content and services to be free from geographical restrictions. I am fine with content providers charging for services; just don’t tell me where I have to be located to use them.


How to Prioritize Internet Traffic For Video

My daughter, a high school teacher, texted me the other day and said she is having trouble with her home video breaking up. This is not a good situation for her and many other remote learning operations around the world.  There are ways to mitigate this issue, but it must be done upstream by her ISP, and so I could not help her directly. I tried calling Comcast to let them know we have a solution, but they did not return my call. Perhaps one of their engineers will read the blog article that follows.

There is a very simple way to make sure video works well all the time, and with the new generation of video controllers this technique is better than ever.  Okay, sorry for sounding like I am promoting a miracle cure, but in essence I am (and have been doing it with success for 17 years now).

The basic technique involves making sure your circuit does not reach 100 percent capacity. Video is like the proverbial canary in a coal mine: it will be the first to suffer, and will abruptly stop working when it runs out of bandwidth.

How should you keep your circuit from reaching 100 percent capacity and disrupting video? There are two important scenarios to consider:

Scenario #1 assumes you have large non-video consumers of bandwidth filling your circuit. This is the problem we normally deal with, and the solution is to use a bandwidth controller to limit streams larger than 4 megabits during peak usage. Doing this frees up bandwidth for video, as almost all video (Netflix, etc.) uses 4 megabits or less. Remote learning applications use even less, as they generally don’t need high-def movie quality to be useful.

The issue with this method, and one that has come to a head recently, is a huge influx of video now that other recreational activities have been put on hold. Even for ISPs that were ramping up their delivery capacity along the normal usage curve, the recent spike in video demand was as unexpected as the COVID-19 quarantines and lockdowns that caused it.

Scenario #2 is the situation where the majority of your traffic is video, and you may not be able to recover enough bandwidth by limiting larger streams. What do you do now?

Several years ago most video streams were an all-or-nothing proposition: either they received the bandwidth they needed or they just stopped. As the industry has matured, so have the video delivery engines. They are much smarter now, and you can force them to back off gracefully. Today’s engines sense the available bandwidth and drop to a lower resolution as needed.

From the perspective of an ISP, you can trick video into backing off before you have a crisis on your hands. The trick is to progressively limit 4-megabit streams down to 1 or 2 megabits.

We can do this quite easily with our bandwidth controller, but for those of you with a simple rate-limiting controller and no dynamic intelligence built in, you might be able to do it manually if you can limit individual connections. For example, you might have a customer with a 50-megabit circuit. You would not want to limit their entire circuit down to 2 megabits, but you could limit any stream pulling over 4 megabits down to 2 megabits. Video will still function, and the customer keeps the full 50-megabit circuit for other services. By limiting just streams, and not the entire circuit, you trick the smart video services into backing off on their resolution.
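The per-stream rule above can be sketched in a few lines of Python. This is illustrative logic only, not NetEqualizer code; the threshold names and numbers are assumptions drawn from the discussion:

```python
# Cap any single stream over 4 Mbps down to 2 Mbps during congestion,
# while leaving the customer's overall circuit and small flows untouched.

STREAM_TRIGGER_MBPS = 4.0   # streams above this are considered video-sized
STREAM_CAP_MBPS = 2.0       # adaptive video players can back off to fit this

def shaped_rate(stream_mbps: float, circuit_congested: bool) -> float:
    """Return the rate a single stream should be limited to."""
    if circuit_congested and stream_mbps > STREAM_TRIGGER_MBPS:
        return STREAM_CAP_MBPS   # smart players drop to a lower resolution
    return stream_mbps           # small streams and idle periods are untouched

# During congestion only the 6 Mbps video stream is squeezed; a small
# VoIP call keeps its natural rate, and off-peak nothing is touched.
print(shaped_rate(6.0, circuit_congested=True))    # -> 2.0
print(shaped_rate(0.1, circuit_congested=True))    # -> 0.1
print(shaped_rate(6.0, circuit_congested=False))   # -> 6.0
```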

A proactive approach will prevent gridlock on your entire circuit before it happens, whereas doing nothing will cause what we call a rolling brownout: everything is fine, then all of a sudden bandwidth across the enterprise maxes out and you basically blow a circuit breaker. There is no bandwidth left for video or any other application, and all users experience failing applications for 30 seconds or longer. In our opinion, this is a totally preventable situation if you have implemented manual (or intelligent) bandwidth shaping.

If you are experiencing Scenario #2, and would like to discuss how you can implement bandwidth shaping, contact one of our engineers at 303.997.1300 x103 or email us to discuss further.

The Must Have Tool for the E-Sports Enthusiast

E-Sports in schools is becoming mainstream, and you can make a living at it as well. Having the right amount of bandwidth is essential, whether that bandwidth is in-house on a LAN or over the Internet. Playing, or even practicing, suffers when a game doesn’t get what it needs. (For those of you on the other side of the coin, who need to make sure other things get done without gaming interfering, keep reading too.)

Believe it or not, playing games online was one of the reasons I got interested in bandwidth management!

Every FPS (first person shooter) player wants a gaming experience where the only reason they lose is because they met someone better. You don’t want to lose because your screen froze while waiting for the next packet to arrive.

I dove into learning as much about Internet networking as I could, so I could get the best setup possible for my network. I ran my own servers so I could control some of that. I never played on the same network as the server back then, because that wouldn’t be fair to others. Running my own servers, I could also see what else was going on with the network traffic.

I knew how much bandwidth my servers needed per player, so I knew that 8 or 16 players would take a certain amount. I knew how much total bandwidth the network had. What I didn’t know was how much all the other machines on the network were using, and how.

With NetEqualizer you can easily see how much bandwidth every IP is using: every connection an IP has, and how much each one is using. That’s the important part. You can tell if your mail server is getting hit hard, or if the web server is uploading or downloading huge objects to some offsite IP. If needed, you can put connection limits on things with NetEqualizer.

You can also give your gaming server’s IP priority over being equalized by the NetEqualizer. Even with priority, you can still put a hard limit on the total amount it can use.

In a setting where you want to play games during certain hours, you can have rules that go on and come off at different times. For instance, in a high school that hosts E-Sports, it can be set up so that the administrative IPs have priority from 8am to 2pm; after that, you can take it off and give E-Sports a bit more priority so you don’t end up with LAG!

NetEqualizer works both ways: it can give administration priority when you want it to be the most important traffic on the network, or it can give programs like E-Sports more priority so your gaming does not suffer when it matters.

NetEqualizer strives to be a set-and-forget type of bandwidth manager, but it has a lot for those who need to micromanage it as well. You can set hard limits on IPs, set connection limits on IPs, and create Pools that have a certain amount of bandwidth; stick IPs into those Pools as members, and all the IPs in a Pool can use up to the Pool’s specified hard limit.

The default task of the NetEqualizer, though, is to equalize. Placed on a network with no configuration beyond how much inbound and outbound bandwidth you have, it monitors all connections from all IPs it sees. When a RATIO of incoming or outgoing bandwidth is reached, it looks for all connections over a value we call HOGMIN and slows those large connections down so the rest of the connections on the network don’t suffer.

A simple example: you are on a standard VoIP call, which only uses a few hundred kilobits of bandwidth, and someone on the network decides to start downloading a high-def movie file from the web. Without NetEqualizer, it’s anyone’s guess what will happen to the VoIP call. With NetEqualizer, it is predetermined. The first thing it does is check whether there is any reason to look for connections to equalize. If you are nowhere near your bandwidth ceiling, it does nothing and keeps monitoring; both your VoIP call and the download go along as if NetEqualizer weren’t there. If NetEqualizer sees that you are near the total bandwidth ceiling you told it you have, it looks for all connections over HOGMIN. Every connection that doesn’t specifically have a priority rule will be slowed down by a few milliseconds, and this continues for as long as the bandwidth is near saturation.

When a connection is equalized, we don’t just do it and leave it that way. We do it in stages, so things like fragile FTP servers don’t just drop the connection. We put on a small delay, check again in a second or so, and if the connection is still active and still needs equalizing, we put on a bit more; then we run the same routine one more time if things still need equalizing. Then we take the delay completely off and start all over in another second or two.

The NetEqualizer equalizes a connection from one IP to another IP. So if your web server is uploading a huge file to some IP, that connection may be equalized. The hundreds or thousands of other connections to your web server would not be equalized unless they were also over HOGMIN and there was a need to equalize. The same applies to any IP, whether it belongs to your mail servers, game servers, or testing servers. As mentioned above, you can set priority for things like video servers that you push out to the world, knowing those streams will be over HOGMIN but are important enough to mandate no equalizing on them.
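For readers who like to see the logic spelled out, here is a toy sketch of the equalizing decision described above. The names RATIO and HOGMIN come from the text; the numbers, function, and delay values are illustrative assumptions, not the product’s actual implementation:

```python
# Toy model of the equalizing decision: near the bandwidth ceiling,
# connections over HOGMIN get a small, staged delay; everything else
# (and everything during quiet periods) is left alone.

RATIO = 0.85          # start equalizing when usage reaches 85% of the ceiling
HOGMIN_MBPS = 4.0     # only connections above this are candidates
MAX_STAGES = 3        # delay is added gradually, then removed and re-evaluated

def added_delay_ms(conn_mbps, total_mbps, ceiling_mbps, stage):
    """Delay (ms) to add to one connection at the current stage."""
    if total_mbps < RATIO * ceiling_mbps:
        return 0.0                       # plenty of headroom: do nothing
    if conn_mbps <= HOGMIN_MBPS:
        return 0.0                       # small flows (VoIP, text) untouched
    # Each re-check adds a little more delay, up to MAX_STAGES, so fragile
    # connections (e.g. FTP) back off gracefully instead of being dropped.
    return 2.0 * min(stage, MAX_STAGES)

# Near saturation, a 10 Mbps download is nudged while a 0.2 Mbps call is not,
# and nothing at all happens when the link is only half full.
print(added_delay_ms(10.0, 90.0, 100.0, stage=1))  # -> 2.0
print(added_delay_ms(0.2, 90.0, 100.0, stage=1))   # -> 0.0
print(added_delay_ms(10.0, 50.0, 100.0, stage=1))  # -> 0.0
```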


Creative Marketing Pushing the Limits

I just spent the evening advocating for my 90-year-old mother, getting her through the bureaucratic gauntlet of a large teaching hospital. The highlight of my evening was when I had to move my car from in front of the ER entrance, and upon my return the security guard refused to let me back into the ER. I had essentially been evicted from the hospital. I’ll spare you the rest of the night’s carnage, as it is not really relevant to a technical product blog, but it did jar loose a repressed memory from when we were in early startup mode years ago, trying to gain some market traction.

Flash back to early 2005: NetEqualizer was no more than a bundled open source CD, selling for less than a decent television goes for these days. Our customers were mostly early adopters on shoestring budgets. Encouragement came in the form of customer feedback; we were getting amazing reviews from smaller ISPs, who raved about how good our bandwidth shaping technology was. My problem was that their enthusiasm was not translating into larger corporate customers. To survive, we had to leverage our success into a higher-end market where, despite our technical success, we were still an unknown commodity.

With time on my hands, and my expertise on the Telco industry still current, I started writing small articles for trade magazines. These vignettes were great for building a resume, but not so great at getting the NetEqualizer in front of customers. Each week I would chat with the editor(s) at Ziff Davis and propose article ideas. Slowly I was becoming a respected, yet starving, feature writer. By necessity, entrepreneurs have to think outside the box, and I was no exception when I hatched the idea for my next article. The conversation with my editor went something like this:

Me:  “Hey Bill, I have an idea for a new article.”

Bill: “Let’s hear it. ”

Me: “Well, there is a big trade show next month in Orlando… How about I head down there and write a product review feature for your magazine? I’ll walk the floor, do impromptu interviews with various vendors, and put together a review feature with a little insider flair. What do you think?”

Bill: “Go for it! Keep me posted. We can’t pay your expenses though.”

Me: “That’s fine. In return for not getting paid, I hope to use my access as your feature writer to also start some conversations about our bandwidth shaper, to get some feedback on our direction.”

Bill : “Sounds good, just keep it discreet.”

And so I was off to Orlando.

On trade show day I wandered the floor with my little badge identifying me as a representative of the publishing company Ziff Davis.  I walked booth to booth introducing myself and asking about what new products were being featured.

The strategy was working. Various marketing executives were eager to tell me about their new offerings. Once we had a little rapport going, and I had gathered the information I needed for my product review, I would work into the conversation that I was not only a part-time feature writer, but also a tech entrepreneur. Much to my surprise, most people were curious to learn about my endeavor and our startup technology. That was, until I entered the Nortel booth.

When I brought up my alter ego as an entrepreneur to the Nortel marketing rep, he blew a gasket and had me escorted from his booth by some henchmen. It was one of those demoralizing, embarrassing moments as an entrepreneur that you just have to push past.

Obviously, we kept going, and there were many more dead ends to come. I learned, just as in the hospital, that whether you’re an advocate for your product or for your ailing mother, you must push ahead and continue to work outside the box. And yes, I eventually did get back into the ER, and yes, it was embarrassing.

As a reference, here are links to some of the trade magazine articles I wrote back in the mid-2000s:




NetEqualizer Speeds up Websites with Embedded Video

Maybe I am old school, but when I go to a news site, I typically don’t want to watch videos of the news. I want to skim the article text and move on; I find reading my news to be a much more efficient way of filtering the content I am interested in. The problem I have run into recently is that the text portion of news site portals loads much more slowly than a few years ago. The text is starved for bandwidth while waiting for a video to load. Considering text takes up very little bandwidth, it should load very quickly, if not for that darn video!


I can easily tune my NetEqualizer to throttle video and leave text alone, so I can get to reading the text without having to wait on the video to load. It may seem counter-intuitive, but slowing a website’s video down does make the page load faster.

Here is a behind-the-scenes explanation of how the NetEqualizer enhances the speed of some popular news sites when the stories are loaded with embedded video.

  • Your browser typically attempts to load multiple elements of a webpage at once, so I can’t really blame the browser for the text delays. The video, text, and other images all load simultaneously from the browser’s perspective.
  • Video by its nature tries to buffer ahead when bandwidth is available.
  • With my business-grade 20-megabit Internet, the video buffering will dominate the entire 20 megabits. The text loading, even though small in terms of data, tends to wait in the wings while a video download dominates the link.
  • Why exactly the text does not get equal cycles to load, I am not 100 percent sure, but people who design routers have told me that a persistent video connection, once started, is favored by the router over other packets.
  • The NetEqualizer, by design, punishes large streams by slowing them down when your link is at capacity. This gives the text a nice chunk of bandwidth to work with, and it loads much more quickly than when competing with the video stream.
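A rough back-of-envelope model shows why throttling the video speeds up the text. The numbers below are assumptions for illustration, not measurements:

```python
# Assumed scenario: a 20 Mbps link, a half-megabyte article, and an
# aggressive video player buffering ahead at 18 Mbps.

LINK_MBPS = 20.0
TEXT_MB = 0.5            # article text + HTML, roughly
VIDEO_UNTHROTTLED = 18.0 # what the buffering player takes if left alone
VIDEO_THROTTLED = 2.0    # what it takes after being limited

def text_load_seconds(video_mbps: float) -> float:
    """Seconds to load the text given how much the video stream is taking."""
    leftover = max(LINK_MBPS - video_mbps, 0.1)   # Mbps left for the text
    return (TEXT_MB * 8) / leftover               # megabytes -> megabits

print(round(text_load_seconds(VIDEO_UNTHROTTLED), 1))  # -> 2.0 seconds
print(round(text_load_seconds(VIDEO_THROTTLED), 1))    # -> 0.2 seconds
```

With the video squeezed to 2 Mbps, the text finishes roughly ten times faster, while the video player simply drops to a lower resolution.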

For more details on how this works, we have a YouTube video that explains it all.





NetEqualizer 15 Year Anniversary, Celebrating Famous NetEqualizer Users

First off, before I get into trouble, I want to assure all of our customers that in no way do we actually know, or have data on, who has had their personal traffic pass through a NetEqualizer over the past 15 years. What we can surmise, with some degree of probability and speculation based on the locations where we are installed, is who has likely seen their traffic pass through our device. What follows is a list of those likely candidates.

Michael Phelps: For many years we were the primary source of bandwidth control at the Olympic Training Center in Colorado Springs, where many members of the Olympic swim team practiced prior to the Olympics. Basically, any Olympic athlete who connected to the wireless network in the training center from 2006 through 200? had their traffic pass through a NetEqualizer.


Donald Trump: NetEqualizer products have been used in several Los Angeles/Hollywood production studios where popular television shows are taped; after taping, the raw cuts are sent from the studios for editing and distribution. Yes, it is very likely The Apprentice was taped in a studio where the NetEqualizer was the primary bandwidth control solution.


The Pope: Not sure if the Pope uses the Internet when he visits the US embassy at the Vatican, but yes, we do have a NetEqualizer installed in the Vatican.


Jerry Jones: We have a NetEqualizer handling the traffic in the AT&T Stadium business and conference center. I suspect Jerry has wandered into that section of the building on occasion.

Mark Cuban: I have exchanged e-mails with Mark on a few ideas unrelated to NetEqualizer. In our office all of our traffic passes through our local NetEqualizer, hence I know with certainty that our e-mail exchange went through a NetEqualizer!


Barack Obama: Prior to becoming president, Mr. Obama visited the Green Zone along with other members of Congress. At the time we had several systems in the Green Zone (basically little American cities for military personnel stationed there) keeping the WiFi up and running. For non-secure communications he would have been using the local WiFi, and thus passing through a NetEqualizer.

These are just a few instances where I could logically place these celebrities in locations where active NetEqualizers were shaping traffic. Of course, we have had many thousands of units installed over the years, and the possibilities are endless. Tens of millions of users have passed through our controllers, at resort hotels, sports venues, universities, conference centers, Fortune 500 businesses, and many, many rural and small-town ISPs. Hence the actual list of famous people who have stumbled through a NetEqualizer is likely much longer. Stay tuned for more to come.


By Art Reisman, CTO / Co-Founder, NetEqualizer




Smart Bandwidth Shaping

The NetEqualizer bandwidth shaper has always had the ability to shape a group of users (a subnet) to a fixed bandwidth limit. In layman’s terms, this means you can take a segment of a network and say something like: “you guys are only going to get 50 megabits, and try as you might to use more, you are capped and won’t be able to go over 50 megabits.”

What has often been requested, and not supported until now, is the ability to selectively enforce the group/subnet bandwidth limit. In layman’s terms again: “I want to set a 50-megabit limit on those guys, but only have it enforced when my network is near peak utilization. The rest of the time, I want those guys to have all available bandwidth.”

Why is this important?

The best way to answer this question is with an example.

A typical customer for our legacy enforcement feature would be a company where different business units are allocated fixed amounts of bandwidth. From experience and feedback from our customers, we know that most of the time the company as a whole has more than enough bandwidth in reserve to accommodate all the business units. The fixed allocations are really only needed during peak times, to make sure no single business unit crowds out the others in a free-for-all bandwidth grab. If the critical peak usage only happens once a week, or once a day for a few hours, the old fixed allocation scheme forces business units to live with a limited amount of bandwidth at times when unused bandwidth is just going to waste. With our new scheme, the NetEqualizer applies the fixed allocation only during those moments when bandwidth is at a premium. There is no need for an IT person to make time-of-day adjustments to maximize utilization; it is done for them automatically.

With our new “Pool Bursting” feature, coming out in July, customers’ wishes have been made a reality. Enforcement of our pool/subnet bandwidth limits can now be specified as absolute (always enforced) or applied only at times of peak congestion.
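In pseudocode terms, the difference between absolute and burstable enforcement might look like the sketch below. The names and thresholds are my own illustrative assumptions, not the product’s configuration:

```python
# A pool (business unit) gets a fixed allocation, but the cap is only
# enforced when the whole link is near peak utilization; otherwise the
# pool may burst into whatever bandwidth is sitting idle.

POOL_LIMIT_MBPS = 50.0    # the business unit's allocation
PEAK_THRESHOLD = 0.9      # enforce only when the link is this full

def pool_allowed_mbps(total_mbps, link_mbps, absolute=False):
    """Bandwidth the pool may use right now."""
    congested = total_mbps >= PEAK_THRESHOLD * link_mbps
    if absolute or congested:
        return POOL_LIMIT_MBPS      # cap enforced at peak (or always)
    return link_mbps                # off-peak: burst into idle bandwidth

print(pool_allowed_mbps(95.0, 100.0))                 # peak  -> 50.0
print(pool_allowed_mbps(40.0, 100.0))                 # quiet -> 100.0
print(pool_allowed_mbps(40.0, 100.0, absolute=True))  # legacy mode -> 50.0
```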

One word of caution: as with any dynamic, need-based enforcement, there may be some customer backlash. For example, a customer who comes to expect high bandwidth during low-utilization times may not be happy when enforcement kicks in and they are suddenly hit with a bandwidth cap.

Wireless ISPs Making a Comeback

Back in 2007, every small town in North America had at least one, if not two, wireless ISPs. We know, because many were our customers. The NetEqualizer was an essential piece of their profitability; our optimization techniques allowed ISPs to extend their bandwidth service to more customers, hence increasing their margins. And then came the Great Recession. Even as consumers were squeezed, many of these smaller wireless ISPs initially fared well, as their customers would never cancel their Internet service. One operator told me, “Our customers will pay their Internet bill before their heating bill. You can wear a coat to get warm, but you cannot live without the Internet.”

Then came the death blow of the Broadband Initiative. Not a bad idea in principle, but like many government spending programs before it, the money did not trickle down to the smaller businesses, nor was the initial spend self-sustaining. Instead, big chunks of the new-found money went to entrenched large providers who had been ignoring investment in rural areas, or into new ventures run by friends of friends, people with little expertise in the ISP arena, whose businesses eventually fizzled. The net effect was that the smaller ISPs who had laid the groundwork in these rural areas, and had been expanding, were stopped in their tracks, unable to compete against subsidized competition.

Today the wireless ISPs that weathered the storm are seeing a resurgence, bolstered by better technology, the failure of many Broadband Initiative projects, and consumers being squeezed by the high prices and poor service of the entrenched monopolies.

Every week we are hearing from old wireless ISP customers ready to upgrade their equipment; some have not been in contact with us since 2011. Stay tuned; this is an evolving story.




Some Musings on Virtual Machines

By Art Reisman

The other day, I sold a smart refrigerator to a customer. When they found out it had a computer in it and could be controlled remotely from the Internet, they asked me if they could run it on their virtual machine to save some space in the kitchen. I told them sure, we support that; they just need to get hold of an add-on compressor and a 40-cubic-foot container module for their VM, and we would ship a plug-in application. There would be no need to ship any hardware: a virtual refrigerator!

I purposely used that over-the-top analogy to highlight the chill down my spine I feel when I hear about vendors bundling their core network equipment into a VM.

Virtual machines make a lot of sense for somebody running a data center with 10 different servers and consolidating them into one box. My underlying discomfort stems from extending that mission onto equipment involved in the real-time transport of your data: switches, routers, firewalls, and bandwidth shapers. Why do I feel this way? Am I just an old, stubborn engineer clinging to the old ways while the world passes me by?

Not really. We have set up virtual machines with our bandwidth shaper with success in our labs; it is actually pretty cool. My discomfort arises from the fact that bandwidth shapers are finely tuned, real-time devices, with software that must run at the core level of the computer’s operating system. A bandwidth shaper must have absolute control of perhaps four or more Ethernet/fiber ports, and under no circumstances can it be left competing for CPU resources should a server become overloaded. The consequences of any resource contention are, at best, a slow Internet, and at worst, a complete lockup. Yes, I understand that in theory a modern VM can divvy up resources, but how do we ensure it is done correctly? When we ship a standalone device running only our application, we know exactly what it is capable of, and since we have thousands of identical configurations in the field, we know that the technology configuration that leaves our factory dock is rock-solid stable.

This is not to say we will never offer a virtual machine. We did have one customer whose setup was so remote that the benefits of a virtual bandwidth shaper on their standard configuration far outweighed the risks I mentioned above. But for the most part, saving a few dollars on rack space and an extra piece of hardware is not worth jeopardizing the stability of a critical piece of in-line equipment.



Technology Predictions for 2018+

By Art Reisman


Below are my predictions for technology in 2018 and beyond. As you will see some of them are fairly pragmatic, while others may stretch the imagination a little bit.

  1. Forget about drones delivering packages to your door; there are too many obstacles in densely populated areas. For example, I don’t want unmanned drones dangling 30-pound flower pots above my head in my neighborhood. One gust of wind and bam, the flower pot comes hurtling out of the sky.  I don’t want it even if it is technically possible!  What is feasible, and likely, are slow, plodding autonomous robots that can carry a payload and navigate to your doorstep.  Not as sexy as zippy little drones, but this technology is fairly mature on factory floors already, and those robots don’t ask for much in return.
  2. As for networking advancements, we may see a “Cloud” backlash where companies bring some of their technology back in-house to gain full control of their systems and data.  I am not predicting the Cloud won’t continue to be a big player, it will, but it may have a hiccup or two along the way.  My reasoning is simple, and it goes back to the days of the telephone, when AT&T started offering a PBX in the sky.  The exact name for this service slips my mind.  It sounded great and had its advantages, but many companies opted to purchase their own customer-premises PBX equipment, as they did not want a third party operating such a critical piece of infrastructure.  The same might be said for private companies thinking about the Cloud.  They could make an argument that they need to secure their own data and also ensure uptime access to it.
  3. More broadband wireless ISPs coming to your neighborhood as an alternative option for home Internet.  I have had my ear to the street for quite some time, and the ability to beam high-speed Internet to your house has come a long way in the last 10 years.  Also the distrust, bitterness, dare I say hatred, for the traditional large incumbents is always a factor. One friend of mine is making inroads in a major city right in the heart of downtown simply by word of mouth.  His speeds are competitive, his costs are lower, and his service cannot be matched by the entrenched incumbent.
  4. Lower automobile insurance rates.  The newer fleet of smart cars that automatically brake for or completely avoid obstacles is going to reduce serious accidents by 50 percent or more in the near future.  Insurance payouts will drop, and eventually the savings will be passed on to consumers.  Longer term, as everyone on the road has an autonomous car, insurance will be analogous to a manufacturer’s warranty, and will be paid by the auto manufacturer.
  5. The Internet of Things (IoT) will continue to explode, particularly in the smart home arena.  Home security has advanced by leaps and bounds in recent years, enabling a consumer to lock/unlock, view and manage their home remotely.  Now we are seeing IoT embedded in more appliances, which will be able to be controlled remotely as well – so that you can run the dishwasher, washer, dryer, or oven from anywhere.
  6. Individual biosensory data, like that collected by Garmin and Fitbit monitors, will be used by more companies and in more ways.  In 2018 my health insurance company is offering discounts for members who prove they use their gym memberships.  It is only a small leap to imagine a health insurance company asking for my biosensory data to select my insurance group and to set my rates.  As more people use fitness trackers and share their data (currently only with friends), it will become the norm to share this type of data, probably anonymously at first.  I can see a future where health care providers and employers use this data to make decisions.

I will update this list as new ideas continue to pop into my head all the time.  Stay tuned!

How to best use your 100 megabit Internet Pipe

In a previous article we made the following statement:

“ISPs are now promising 100 megabit per second consumer  service, and are betting on the fact that most consumers will only use a fraction of that at any given time.  In other words, they have oversold their capacity without backlash.  In the unlikely event that all their customers tried to pull their max bandwidth at one time, there would be extreme gridlock, but the probability of this happening is almost zero. “

So I ask the question, what would it take to make full use of your 100 megabit pipe?

A typical streamed movie consumes about 4 megabits per second, so you would need to watch 25 Netflix movies at once, all day every day, to fully utilize your pipe.  Obviously that is not very practical; you would need multiple Netflix accounts and 25 devices to watch them on.

Big files:  A 100 Gigabyte file, that’s a good size download for a consumer, right?   Well, at 100 megabits per second (12.5 megabytes per second) that would take a bit over two hours to download, and then you’d have to find another one.

For convenience maybe you could find a 1,000 Gigabyte (1 Terabyte) file?  That would take about 22 hours, so even one of those still leaves you a sliver of spare pipe.  By my calculations, in order to make use of your 100 megabit pipe completely for 24 hours, you would need to download roughly 1.08 Terabytes, a full Terabyte-class file, every single day!
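For anyone who wants to check the arithmetic, here is a small sketch of the transfer-time math (decimal units, protocol overhead ignored):

```python
def download_time_hours(file_gigabytes, link_megabits_per_s=100):
    """Time to move a file over a link: 1 gigabyte = 8,000 megabits."""
    megabits = file_gigabytes * 8_000
    seconds = megabits / link_megabits_per_s
    return seconds / 3600

# A 100 Mbps link moves 12.5 megabytes per second.
print(round(download_time_hours(100), 2))    # 100 GB file -> 2.22 hours
print(round(download_time_hours(1000), 2))   # 1 TB file  -> 22.22 hours

# Total capacity of the pipe over a full day:
gb_per_day = 100 / 8_000 * 86_400            # -> 1080 GB, about 1.08 TB
print(round(gb_per_day))
```

In other words, a saturated 100 megabit pipe moves about 45 GB per hour, or roughly 1.08 TB per day.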

Where could you find such a file?

I did some poking around, and there are a couple of sites that host gigantic files for no particular reason, but the only practical file with a real reason to download it was this one:


“Some time ago I was interested in creating custom maps of the Earth, and I realized that the data files needed for this are pretty large; and the more zoom you want, the larger are the data files.

OpenStreetMap has a huge file of the Earth which is 82GB compressed and around 1TB uncompressed according to the OSM wiki, and it will become larger. You can find it updated here.”

So this very large file that maps the entire earth is 82 Gig in compressed form for download, which would keep your pipe busy for under two hours, still only a fraction of the roughly 1 Terabyte you would need to pull down in a day to fill up your pipe.

What is the moral of the story?

Internet providers can safely offer 100 megabit pipes, knowing full well that even their heaviest users are unlikely to average more than 5 megabits per second sustained over a long time period.  You would have to be deliberately downloading ridiculously sized files all day, every day, to use your full pipe.

Gmail Gone AWOL

By Art Reisman



I have a confession to make. Even though we have a corporate e-mail server at my company, I have been using Gmail for my primary business e-mail going back to 2002.  I love the ability to search old records and conversations from the past.  With Google’s technology, searching Gmail was second to none. Sometimes I searched just for nostalgia purposes, like the final e-mail conversation I had with my Mom when my dad was taken off dialysis in hospice, and sometimes for business reasons.  Unfortunately, my world has recently been shattered.  All my e-mail prior to 2008 is completely gone, and I have searched far and wide for a policy from Gmail that might explain why.  I pay a monthly fee for Google storage and was well below my limit; I have tried their support forums, and so far, just silence. If you are a long-time e-mail user, I suggest you try to search your archives.  Ten years seems to be the cutoff where things got dumped or lost.


And no, I have not been corresponding with any Russian operatives!

NetEqualizer Reporting Only License now Available for Purchase

For about half the cost of the full-featured NetEqualizer, you can now purchase a NetEqualizer with a Reporting Only License.  Our Reporting Only option enables you to view your network usage data in real-time (as of this second), as well as to view historical usage to see your network usage trends.


Live Screen Shot Showing Overall Bandwidth In Real Time

Reporting can help you to troubleshoot your network, from identifying DDoS and virus activity to assessing possible unwanted P2P traffic.

You might consider a Reporting Only NetEqualizer for a site where you would like better visibility into your network, and also think you may need to shape at some point.  It could also help you to assess a network segment from a traffic flow perspective.

And the great thing is, we always protect your investment in our technology.  If at a later time you do decide you want to use our state-of-the-art shaping technology, you have not lost your initial investment in the NetEqualizer.  You can always upgrade and only pay the price difference.

What features come in Release 1 (R.v1) of the Reporting Only NetEqualizer?

  • Reporting by IP, real-time and historical usage
  • Reporting by Subnet or VLAN, real-time and historical usage
  • Reporting by Domain Name (Yahoo, Facebook, etc.), real-time and historical usage
  • Real-time spreadsheet-style snapshot of all existing connections

Troubleshooting Tools

  • Top Uploaders & Downloaders
  • Abusive behavior due to Viruses
  • DDoS detection
  • P2P detection
  • Alerts and Alarms for Quota Overages
  • Peak Bandwidth Alerting

More features to come in our next release, please put in your request now!

Reporting Only prices include first year support.  Prices listed below are good through 3/31/2018.  After March 2018, contact us for current pricing.

NE3000-R 500 Mbps price   $3,000
NE3000-R 1 Gbps price     $4,000
NE4000-R 5 Gbps price     $6,000

Note that Reporting Only NetEqualizers can be license-upgraded in the field to enable full shaping capabilities.

The New Bandwidth Paradigm

For years the prevailing belief was that consumer demand would always outstrip bandwidth supply.  Our recent conversations with several landline operators suggest that, in the near term, that paradigm may no longer hold.

How could this be?

The answer is fairly simple.  Since streaming HD video became all the rage some 10+ years ago, there has not been any real pressure from any new bandwidth-intensive applications.   All the while, ISPs have been increasing their capacity.  The net result is that many wired providers have finally outstripped demand.

Yes, many video content options have popped up for both real-time streaming and recorded entertainment.  However, when we drill down on consumption, we find that almost all video caps out at 4 megabits per second.  Combine that 4 megabit per second self-imposed video limit with the observation that consumers average about 1 movie streaming for every 3 connected households, and we can see what true consumption is nowadays: at or below 4 megabits per second per house.  Thus, even though ISPs now advertise 50 or 100 megabit per second last-mile connections to the home, consumers rarely have reason to use that much bandwidth for a sustained period of time.  There is just no application beyond video that they use on a regular basis.
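The consumption figures above lend themselves to a back-of-the-envelope sketch.  The 1-in-3 streaming ratio and 4 megabit stream are the numbers quoted above; the 1,000-subscriber ISP is a hypothetical example:

```python
def aggregate_demand_mbps(households, streams_per_household=1/3, stream_mbps=4):
    """Expected concurrent demand, assuming roughly 1 in 3 connected
    households is streaming a single ~4 Mbps movie at any given moment."""
    return households * streams_per_household * stream_mbps

# Hypothetical ISP: 1,000 subscribers, each sold a 100 Mbps connection.
sold_mbps = 1000 * 100                    # 100,000 Mbps of promised speed
demand = aggregate_demand_mbps(1000)      # ~1,333 Mbps actually in use
oversubscription = sold_mbps / demand     # ~75:1, and nobody notices
print(round(demand), round(oversubscription))
```

An oversubscription ratio on that order is why a provider can advertise 100 megabit last-mile speeds while provisioning only a small fraction of that in aggregate backhaul.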

What about the plethora of other applications?

I just did a little experiment on the Internet connection leaving my home office.  My average consumption, including two low-resolution security cameras, a WebEx session, a Skype call, several open web pages, and some smart devices, came to a grand total of 0.7 megabits per second.  The only time I even come close to saturating my 20 megabit per second connection is when I download a computer update of some kind, and obviously this is a relatively rare event, once a month at most.

What about the future?

ISPs are now promising 50 or 100 megabit per second connections, and are betting on the fact that most consumers will only use a fraction of that at any given time.  In other words, they have oversold their capacity without backlash.  In the unlikely event that all their customers tried to pull their max bandwidth at one time, there would be extreme gridlock, but the probability of this happening is almost zero.  At this time we don’t see any new application beyond video that will seriously demand a tenfold increase in bandwidth, which is what happened when video came of age on the Internet.  Yes, there will be increases in demand, but we expect that curve to be a few percent a year.
