NetEqualizer Reporting Only License Now Available for Purchase


For about half the cost of the full-featured NetEqualizer, you can now purchase a NetEqualizer with a Reporting Only License.  Our Reporting Only option enables you to view your network usage data in real time (as of this second), as well as historical usage, so you can see your network usage trends.


Live Screen Shot Showing Overall Bandwidth In Real Time

Reporting can help you troubleshoot your network, from identifying DDoS and virus activity to detecting possible unwanted P2P traffic.

You might consider a Reporting Only NetEqualizer for a site where you would like better visibility into your network and think you may need to shape traffic at some point.  It could also help you assess a network segment from a traffic-flow perspective.

And the great thing is, we always protect your investment in our technology.  If at a later time you do decide you want to use our state-of-the-art shaping technology, you have not lost your initial investment in the NetEqualizer.  You can always upgrade and only pay the price difference.

What features come in Release 1 (R.v1) of the Reporting Only NetEqualizer?

  • Reporting by IP, real-time and historical usage
  • Reporting by Subnet or VLAN, real-time and historical usage
  • Reporting by Domain Name (Yahoo, Facebook, etc.), real-time and historical usage
  • Real-time spreadsheet-style snapshot of all existing connections

Troubleshooting Tools

  • Top Uploaders & Downloaders
  • Abusive behavior due to viruses
  • DDoS detection
  • P2P detection
  • Alerts and Alarms for Quota Overages
  • Peak Bandwidth Alerting

More features are coming in our next release.  Please put in your request now!

Reporting Only prices include first-year support.  Prices listed below are good through 3/31/2018.  After March 2018, contact us for current pricing.

NE3000-R, 500 Mbps: $3,000
NE3000-R, 1 Gbps: $4,000
NE4000-R, 5 Gbps: $6,000

Note that Reporting Only NetEqualizers can be license-upgraded in the field to enable full shaping capabilities.

The New Bandwidth Paradigm


For years the prevailing belief was that consumer demand would always outstrip bandwidth supply.  Our recent conversations with several landline operators suggest that, at least in the near term, that paradigm no longer holds.

How could this be?

The answer is fairly simple.  Since streaming HD video became all the rage some 10+ years ago, there has not been any real pressure from new bandwidth-intensive applications.  All the while, ISPs have been increasing their capacity.  The net result is that many wired providers have finally outstripped demand.

Yes, many video content options have popped up for both real-time streaming and recorded entertainment.  However, when we drill down on consumption, we find that almost all video caps out at 4 megabits per second.  Combine that self-imposed 4 megabit per second video limit with the observation that consumers average 1 movie playing for every 3 connected households, and we can see what true consumption is nowadays: at or below 4 megabits per second per house.  Thus, even though ISPs now advertise 50 or 100 megabit per second last-mile connections to the home, consumers rarely have reason to use that much bandwidth for a sustained period of time.  There is just no application beyond video that they use on a regular basis.
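To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python.  The 4 Mbps stream cap and the 1-movie-per-3-households figure come from the paragraph above; the 1,000-subscriber segment is a hypothetical example:

```python
# Back-of-the-envelope: average sustained video demand per household.
# Figures from the article: video streams cap out at ~4 Mbps, and
# roughly 1 movie is playing for every 3 connected households.
STREAM_RATE_MBPS = 4.0
MOVIES_PER_HOUSEHOLD = 1.0 / 3.0

avg_demand_mbps = STREAM_RATE_MBPS * MOVIES_PER_HOUSEHOLD
print(f"Average sustained demand per household: {avg_demand_mbps:.2f} Mbps")

# Hypothetical example: aggregate demand for a 1,000-subscriber segment.
subscribers = 1000
print(f"Aggregate demand for {subscribers} households: "
      f"{avg_demand_mbps * subscribers / 1000:.1f} Gbps")
```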

What about the plethora of other applications?

I just ran a little experiment on the Internet connection leaving my home office.  My average consumption, including two low-resolution security cameras, a WebEx session, a Skype call, several open web pages, and some smart devices, came to a grand total of 0.7 megabits per second.  The only time I even come close to saturating my 20 megabit per second connection is when I download a computer update of some kind, and that is a relatively rare event, once a month at most.

What about the future?

ISPs are now promising 50 or 100 megabit per second connections, betting that most consumers will only use a fraction of that at any given time.  In other words, they have oversold their capacity without backlash.  In the unlikely event that all their customers tried to pull their maximum bandwidth at one time, there would be extreme gridlock, but the probability of this happening is almost zero.  At this time we don’t see any new application beyond video that will demand a tenfold increase in bandwidth, which is what happened when video came of age on the Internet.  Yes, there will be increases in demand, but we expect that curve to be a few percent a year.
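To see why “everyone pulling max bandwidth at once” is a near-zero-probability event, here is a rough sketch using a simple binomial model.  All the numbers below (subscriber count, plan size, trunk size, activity probability) are hypothetical assumptions, not operator data:

```python
# Sketch: why simultaneous full-rate use by all subscribers never happens.
# Hypothetical numbers: 500 subscribers, each sold a 100 Mbps plan on a
# 10 Gbps trunk, each independently pulling full rate 2% of the time.
from math import comb

N, p = 500, 0.02                       # subscribers, P(user maxed out now)
plan_mbps, trunk_mbps = 100, 10_000
k_saturate = trunk_mbps // plan_mbps   # 100 simultaneous full-rate users

# P(at least k_saturate of N users are at full rate at the same moment)
p_saturated = sum(comb(N, k) * p**k * (1 - p)**(N - k)
                  for k in range(k_saturate, N + 1))
print(f"P(trunk saturated) = {p_saturated:.2e}")  # vanishingly small
```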

The Benefits of Slow Internet


By Art Reisman

CTO http://www.netequalizer.com

 

A few weekends a year I spend time at our rural retreat in the middle of the high plains of Kansas.  My Internet options are very limited.  We have Wild Blue as a satellite provider.  Their service is on average worse than dial-up when it is working, and there are many reasons for it to randomly go out, including heavy rain, woodpeckers destroying the plastic cap at the center of the dish, and random congestion that can occur at any time of day.  There was also the time I accidentally used up my data quota after leaving Internet radio on for a week.  In response, they shut off my service without any notification.

As a backup to the Wild Blue, I have a 40-foot repeater antenna on the roof that picks up a 3G signal from the local wireless provider.  If I sit right under the repeater, in a closet, I can get a data signal on my phone for those emergencies when I must respond to an e-mail, so technically I am not completely off the grid.

 

When the Internet goes down, I will fight for hours resetting routers and checking cables, just like my 1-year-old grandson screaming for hours when overtired.  I will not give up my Internet access without a fight.

 

But then it happens.  At some point I give up.  The Internet is unusable or completely gone.  With great relief, I look over at my nightstand, where a stack of unread nature books has sat for months at a time.  Much like the Island of Misfit Toys, these books just need to be read.  My favorite nature writer, Richard Coniff, lulls me into a wonderful world without politics, without doomsday weather events over which I have no control, and without angry customer e-mails. :)  For several hours I can enjoy nature and the glorious rhythm of life without the Internet.

No Patents for This Bandwidth Shaper


By Art Reisman

CTO http://www.netequalizer.com

I often get asked if our NetEqualizer technology is patented.  The answer is no.  The NetEqualizer secret sauce is buried deep within our code, and is protected by copyright law.

As for patents, I have a disdain for software patents, a position I explained in a 2007 article I wrote for Extreme Tech Magazine.  Here is an excerpt:

The problem with this patent, like many others in a misguided flood of new filings, is that it describes an obvious process to solve a naturally occurring problem.

For the full article, click here: “Analysis: Confessions of a Patent Holder”

6 Tips for Installing a Wireless Network


I have been involved with supporting thousands of wireless networks over the past 14 years, from large professional sports arenas to small home networks, and I have seen successes and failures alike.  What follows are the lessons I have learned from living with both the pain and the success of these networks.

 

  1. Do not cut corners on coverage. The biggest and most egregious mistake our customers have made over the years is shopping on price over coverage.  The fewer access points installed, the lower the net cost of the install, and you may not notice this mistake during initial trials.  Once your network is at full capacity, however, coverage issues can be a nightmare for both customer and vendor.
  2. Use the best available technology.  There are many different flavors of technology when installing a wireless network.  Note that the best technology may not be the most expensive, and the newest technology may not be the most reliable. As for specific recommendations on technology, I will include information in the comments section as it becomes available.
  3. Don’t let the advertised SPEED in access point specifications overly influence your decision.  Many factors ultimately affect the end-user connection speed, and in many cases the top advertised speed of an access point is unattainable. As an analogy, would you pay an extra $50,000 for a car that could go 200 MPH when the speed limit is 75?  I have seen buildings with a 100 megabit link to the Internet purchase twenty 1-gigabit access points.  Even allowing for future expansion, this is overkill (see the sketch after this list).
  4. When choosing an IT company to help with the install, a midsize or small company in your area is likely a better bet than a large IT company.  I have personal experience with a company that went from great to work with to a nightmare over a period of years.  As they got bigger and hired more employees, their talent pool became more diluted, their prices got higher, and their work quality became a sore point with their customers.
  5. For large, complex installations, think about paying for a simulation.  A company like Candelatech specializes in simulating various loads on wireless networks, and this is well worth the up-front investment prior to build-out.
  6. Congestion control. Disclaimer: yes, we make a bandwidth controller, and yes, we are biased toward this technology.  On many networks the best design and the best wireless equipment are rendered irrelevant if there is not enough bandwidth to feed the animals.  A wide-open, heavily used network will come to a halt without some form of intelligent bandwidth control.
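As a quick illustration of the mismatch in tip 3, here is a small sketch using the example figures from that tip (a 100 megabit uplink behind twenty access points advertised at 1 gigabit each):

```python
# Capacity mismatch from tip 3: twenty 1 Gbps access points behind a
# 100 Mbps Internet link (figures from the article's example).
uplink_mbps = 100
ap_count, ap_advertised_mbps = 20, 1000

aggregate_ap_mbps = ap_count * ap_advertised_mbps
print(f"Aggregate advertised AP capacity: {aggregate_ap_mbps} Mbps")
print(f"Oversizing factor vs. uplink: {aggregate_ap_mbps / uplink_mbps:.0f}x")
# -> 200x: the uplink, not the access points, is the bottleneck, so
#    paying a premium for faster APs buys nothing on this network.
```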

India IT: A Limited Supply


Before founding my current company, I was on the technical staff of a large telecom provider.  In the early 1990s, about half of our tech team were hired on H-1 visas from India, all very sharp and good engineers.  As the tech economy heated up, the quality of our engineers from India dropped off significantly, to the point where many were actually let go after trial periods, at a time when we desperately needed technical help.

The unlimited supply of offshore engineering talent evidently had its limits.  To illustrate, I share the following experience.

Around the year 2000, at the height of the tech boom, my manager, also from India, sent me on a recruiting trip to look for grad students at a US job fair hosted for UCLA students.

In my pre-trip briefing we went over a list of ten technology universities in India.  As he handed me the list he said, “Don’t worry about a candidate’s technical ability.  If they come from any one of these ten universities, they are already vetted for competency.  Just make sure they have a good attitude and can think out-of-the-box.”

He also said that if they did not attend one of the ten schools on the list, I should not even consider them, as there is a big drop-off in talent at the second-tier schools in India.

In further conversations I learned that India’s top tech schools are on par with the best US undergraduate engineering schools.  In India there is extreme competition and vetting to get into these schools.  The dirty little secret was that there were only a limited number of graduates from these universities.  Initially, US companies were seeing only the cream of the Indian education system.  As tech demand grew, the second-tier engineers were well enough trained to “talk the talk” in an interview, but in the real world they often did not have that extra gear to do demanding engineering work, and so projects suffered.

In the following years, many US-based engineers in the trenches saw some of this incompetence and were able to convince their management to halt offshoring R&D projects when the warning signs were evident.  These companies seemed to be in the minority.  Since many large companies treated their IT staff, and to some extent their R&D staff, like commodities, they continued to offshore based on lower costs and the false stereotype that these Indian companies could perform on par with their in-house R&D teams.  The old adage “you get what you pay for” held true here once again.

This is not to say there were no very successful cost savings made possible by Indian engineers, but the companies that benefited were the ones that got in early and had strong local Indian management, like my boss, who knew the limits of Indian engineering resources.

How to Survive High Contention Ratios and Prevent Network Congestion



Is there a way to raise contention ratios without creating network congestion, thus allowing your network to service more users?

Yes there is.

First a little background on the terminology.

Congestion occurs when a shared network attempts to deliver more bandwidth to its users than is available. We typically think of an oversold/contended network with respect to ISPs and residential customers; but this condition also occurs within businesses, schools and any organization where more users are vying for bandwidth than is available.

The term “contention ratio” is used in the industry as a way of determining just how oversold your network is.  A contention ratio is simply the total bandwidth sold to users divided by the size of the Internet trunk.  We normally think of Internet trunks in units of megabits.  For example, 10 users, each sold one megabit per second while sharing a one-megabit trunk, would be at a 10-to-1 contention ratio.
A decade ago, a 10-to-1 contention ratio was common.  Today, bandwidth is much less expensive and average contention ratios have come down.  Unfortunately, as bandwidth costs have dropped, pressure on trunks has risen, as today’s applications require increasing amounts of bandwidth.  The most common congestion symptom is slow network response times.
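As a quick sketch of the arithmetic, the ratio can be computed directly.  The first call uses the example above; the second call’s plan and trunk sizes are hypothetical:

```python
# Contention ratio: total bandwidth sold to users divided by trunk size.
def contention_ratio(users: int, plan_mbps: float, trunk_mbps: float) -> float:
    return (users * plan_mbps) / trunk_mbps

# The example above: 10 users, each sold 1 Mbps, on a 1 Mbps trunk.
print(contention_ratio(users=10, plan_mbps=1, trunk_mbps=1))        # 10.0 -> 10-to-1

# A hypothetical modern case: 200 users on 100 Mbps plans over 2 Gbps.
print(contention_ratio(users=200, plan_mbps=100, trunk_mbps=2000))  # also 10.0
```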
Now back to our original question…
Is there a way to raise contention ratios without creating congestion, thus allowing your network to service more users?
This is where a smart bandwidth controller can help.  Back in the “old” days before encryption was king, most solutions involved classifying types of traffic and restricting less important traffic based on customer preferences.  Classifying by type went away with encryption, which prevents traffic classifiers from seeing the specifics of what is traversing a network.  A modern bandwidth controller instead uses dynamic rules to restrict traffic based on aberrant behavior.  Although this might seem less intuitive than specifically restricting traffic by type, it turns out to be just as reliable, not to mention simpler and more cost-effective to implement.
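NetEqualizer’s actual rules are proprietary, but the behavior-based idea can be sketched generically: when the trunk nears saturation, temporarily penalize the heaviest sustained flows instead of trying to classify traffic by type.  Everything below (names, thresholds, flow rates) is an illustrative assumption, not our product’s algorithm:

```python
# Generic sketch of behavior-based shaping: under congestion, flag the
# flows whose sustained rate is far above the mean, regardless of type.
from dataclasses import dataclass

@dataclass
class Flow:
    flow_id: str
    rate_mbps: float      # current measured rate for this flow

TRUNK_MBPS = 1000
HOT_UTILIZATION = 0.85    # start equalizing above 85% trunk load
FAIR_SHARE_FACTOR = 1.5   # flows above 1.5x the mean are "aberrant"

def flows_to_throttle(flows: list[Flow]) -> list[Flow]:
    load = sum(f.rate_mbps for f in flows)
    if not flows or load < HOT_UTILIZATION * TRUNK_MBPS:
        return []         # no congestion: leave everyone alone
    mean_rate = load / len(flows)
    return [f for f in flows if f.rate_mbps > FAIR_SHARE_FACTOR * mean_rate]

flows = [Flow("video-a", 400), Flow("web-b", 5), Flow("backup-c", 480),
         Flow("voip-d", 0.3)]
print([f.flow_id for f in flows_to_throttle(flows)])  # ['video-a', 'backup-c']
```

Note that the small interactive flows (web browsing, VoIP) are never touched; only the heavy sustained flows give back bandwidth, and only while the trunk is actually congested.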
We have seen results where a customer can increase their user base by as much as 50 percent and still have decent response times for interactive cloud applications.
To learn more, contact us; our engineering team is more than happy to go over your specific situation to see if we can help you.