Five Things to Know About Wireless Networks



By Art Reisman

CTO, APconnections

Over the last year or so, when the work day is done, I often find myself talking shop with several peers of mine who run wireless networking companies. These are the guys in the trenches. They spend their days installing wireless infrastructure in apartment buildings, hotels, and professional sports arenas, to name just a few. Below I share a few tidbits intended to provide a high-level picture for anybody thinking about building their own wireless network.

There are no experts.

Why? The companies that make wireless equipment are sending out patches almost hourly. Because they have no idea what works in the real world, every new release is an experiment. Anybody who works in this industry is chasing the technology; it is not stable enough for a person to become an expert. Anybody who claims to be an expert is living an illusion at best; perhaps "wireless historian" would be a better term for this fast-moving technology. What you know today will likely be obsolete in six months.

The higher (faster) the frequency, the higher the cost of the network.

Why? As the industry moves to standards that transmit data at higher rates, it must use higher frequencies to achieve the faster speeds. It just so happens that these higher frequencies tend to be less effective at penetrating buildings, walls, and windows. The increase in cost comes from the need to place more and more access points in a building to achieve coverage.
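To put a rough number on why higher frequencies need more access points, here is a minimal sketch using the standard free-space path loss formula. The 20-meter distance and the 2.4 GHz versus 5 GHz comparison are illustrative assumptions, and real buildings with walls only widen the gap.

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55 (d in meters, f in Hz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

for freq_ghz in (2.4, 5.0):
    loss = free_space_path_loss_db(20, freq_ghz * 1e9)
    print(f"{freq_ghz} GHz at 20 m: ~{loss:.1f} dB of loss")
# 5 GHz loses roughly 6 dB more than 2.4 GHz before a single wall is involved,
# which is why higher-frequency networks need more access points for the same coverage.
```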

Putting more access points in your building does not always mean better service.

Why? Computers have a bad habit of connecting to one access point and then not letting go, even when the signal gets weak. For example, when you connect to a wireless network with your laptop in the lobby of a hotel and then move across the room, you can end up in a bad spot with respect to your original access point. In theory, the right thing to do would be to release your current connection and connect to a different access point. The problem is that most of the installed base of wireless networks does not have any intelligence built in to route you to the best access point; hence even a building with plenty of coverage can have maddening service.
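As a sketch of the missing intelligence, the decision a client (or a controller-assisted network) ought to make is roughly the following: switch access points only when a candidate signal is stronger than the current one by a clear margin. The RSSI values and the 8 dB margin below are illustrative assumptions, not measurements from any real deployment.

```python
def should_roam(current_rssi_dbm: float, candidate_rssi_dbm: float,
                margin_db: float = 8.0) -> bool:
    """Roam only if the candidate AP is stronger by a clear margin,
    so the client does not bounce between two similar signals."""
    return candidate_rssi_dbm >= current_rssi_dbm + margin_db

# Laptop carried across the hotel lobby: the original AP has faded to -78 dBm
# while a nearby AP is heard at -55 dBm, so a smart client would let go and reconnect.
print(should_roam(current_rssi_dbm=-78, candidate_rssi_dbm=-55))  # True
print(should_roam(current_rssi_dbm=-60, candidate_rssi_dbm=-57))  # False: not worth the disruption
```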

Electromagnetic Radiation Cannot Be Seen

So what? The issue here is that there are all kinds of scenarios where the wireless signals bouncing around the environment can destroy service. Think of a highway full of invisible cars traveling in any direction they want. When a wireless network is installed, the contractor in charge does what is called a site survey. This involves special equipment that measures the electromagnetic waves in an area and helps them plan how many wireless access points to install and where; but once installed, anything can happen. Private personal hotspots, devices with electric motors, or a change in metal furniture configuration can all destabilize an area, and service can degrade for reasons that nobody can detect.

The More People Connected, the Slower Their Speed

Why? Wireless access points use a technique called TDM (Time Division Multiplexing). Basically, available bandwidth is carved up into little time slots. When there is only one user connected to an access point, that user gets all the bandwidth; when there are two users connected, they each get half the time slots. So an access point that advertises 100-megabit speeds can deliver at best 10 megabits to each user when 10 people are connected to it.
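The arithmetic behind that claim is simple; here is a minimal sketch that ignores protocol overhead and the fact that slow or distant clients eat more air time per bit, both of which make the real numbers worse.

```python
def per_user_throughput_mbps(advertised_mbps: float, users: int) -> float:
    """Best case per-user rate: each connected user gets an equal share of the time slots."""
    return advertised_mbps / max(users, 1)

print(per_user_throughput_mbps(100, 1))   # 100.0 - a lone user gets every time slot
print(per_user_throughput_mbps(100, 10))  # 10.0  - ten users split the same air time
```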

Related Articles

Wireless is nice but wired networks are here to stay

Seven Tips To Improve Performance of Your Wireless LAN

Top 5 Reasons Confirming Employers Don’t Like Their IT Guy


  • The IT room is the dregs. Whenever I travel to visit my IT customers, it is always a challenge to find their office. Even if I find the right building on the business or college campus, finding their actual location within the building is anything but certain. Usually it ends up being in some unmarked room behind a loading dock, accessible only by a secret passage designed to relieve the building of cafeteria waste near the trash bins. Many times, their office is one and the same as the old server room, with the raised floor, screaming fans, and air cooled to a Scottish winter.
  • Nobody knows you are in the building. Oftentimes I enter the building on the upper floors, the floors with windows and young, well-dressed professionals trying to move up the ladder. Asking these people if they know where the IT room is usually brings on blank stares of confusion and embarrassment. To them, the IT guy is that person they only see when their computer fails with a virus. Where he emanates from, nobody knows; perhaps a trap door opens in the floor. I am not making this up: the usual way I am instructed to meet the IT guy is that they send me an e-mail telling me to meet at some well-known landmark out front, like a fountain or a statue, with a rendezvous time.
  • You are expected to be an expert in wireless technology. Let's face it, the companies that make wireless controllers are sending out patches almost hourly. Why? Because they have no idea what works in the real world, and so you are part of the experiment. The real fact is that nobody is an expert in real-world wireless technology. As the IT guy, you can never admit to any holes in your wireless knowledge. If you are not willing to lie, there are plenty of people with no experience willing to make that claim with a straight face. You just can't be honest about this, because your boss has already told his boss you are an expert. Here is the last paragraph of a recent article on Verizon's trial of the latest 5G wireless:

Of course, 5G wireless has never been truly tested at scale in true market scenarios. There’s talk of gigabit capable speeds, but how would a single tower supporting fixed wireless 5G at scale compare to fiber and HFC based networks connected all the way to homes and businesses? No one really knows – yet.

Setting up a new wireless network with the latest technology is like taking a physics test in wave propagation before you have taken the class and expecting to pass.

  • You will never get rewarded if things work without issues. I like to compare a good IT tech to a good umpire or a referee in a soccer game: at best, if they do a perfect job, nobody notices them. If I ran a big company, I would hand out bonuses to my IT staff for the days I did not need them, but I do not have an MBA (see the next point).
  • Any time a company hires a brilliant MBA from some business school, the first thing they do is explore outsourcing the IT staff. Why? Because nobody teaches them anything about IT in business school. They live in a fantasy world where some unknown third party with a slick brochure and an unrealistically lowball estimate is going to care more about IT needs than the four poor schlubs in the basement who have been loyal for years. You and the in-house staff have always been on call, missing many weekends over the years, just to ensure the IT infrastructure stays up, and yet the Harvard guy will shoot himself in the foot with outsourcing every time.

Proving The Identity of The DNC Hacker Not Likely



By Art Reisman

CTO, APconnections

Inspired by the recent accusations regarding the alleged Russian hacking of the DNC e-mail servers, I ask the question: is it really possible for our intelligence agencies to say with confidence exactly who hacked those servers? I honestly don't think so. To back up my opinion, I have decided to take our faithful blog readers through the mind and actions of a professional hacker intent on breaking into a corporate e-mail server without leaving a trace. From there you can draw your own conclusions.

My hacking scenario below is based on actual techniques that our own ethical hackers use to test security at corporations. These companies contract with us to deliberately break into their IT systems, and yes, sometimes we do break in.

First we will follow our hacker through the process of a typical deliberate, illegal break-in, and then we will analyze the daunting task a forensic expert must deal with after the fact.

 

Here we go….

Phase I

  • First I need a platform for the first phase of my attack. I want to find a computer with no formal ties to my identity. Just as the public telephone booths of the '70s and '80s were used for calling in bomb threats, the computers in your public library can easily conceal my identity.
  • To further cover my trail, I bring my own flash memory with me to the library; it contains a software program commonly referred to as a "BOT." This allows me to move data and programs onto the library computer without doing something like logging into my personal e-mail, which would leave a record of me being there. In this case my BOT specializes in crawling the Internet looking for consumer-grade desktop computers to break into.
  • My BOT searches the Internet at random looking for computers that are unprotected. It will hit several thousand computers an hour for as long as I let it run.
  • I don't want to go too long with my BOT running from the library, because all the outbound activity it generates may be detected as a virus by an upstream ISP. The good news in my favor is that BOTs, both friendly and malicious, are very common. At any time of the day there are millions of them running all over the world.

Note: running a BOT in itself is not a crime; it is just bad etiquette and annoying. It is extremely unlikely that anybody would actually be able to see that I am trying to hack into computers (yes, this is a crime) with my BOT, because that would take very specialized equipment, and since I chose my library at random, the chances of drawing attention at this stage are minuscule. Typically a law enforcement agency must obtain a warrant to set up its detection equipment. All the upstream provider would sense is an unusually high rate of traffic coming out of the library.

  • Once my BOT has found some unprotected home computers and I have their login credentials, I am ready for Phase II. I save off their IP addresses and credentials, delete the BOT from the library computer, and leave, never to return.

You might be wondering: how does a BOT get access to home computers? Many are still out there running very old versions of Windows or Linux and have generic passwords like "password." The BOT attempts to log in through a well-known service such as SSH (remote login) and guesses the password. The BOT may run into 1,000 dead ends or more before cracking a single computer. Just like a mindless robot should, it works tirelessly without complaint.

Phase II

  • I again go to the library and set up shop. Only this time, instead of a BOT, I come armed with a phishing scam e-mail on my flash drive. From a computer in the library, I remotely log in to one of the home computers whose credentials I obtained in Phase I.
  • I set up a program that will send e-mails from the home computer to people who work at the DNC with my Trojan horse content.

If I am smart, I do a little research on the backgrounds of the people I am sending to, so as to make the e-mails as authentic as possible. Most consumers have seen the obvious scams where you get some ridiculous, out-of-context e-mail with a link to open some file you never asked for. That works for mass e-mailing to the public, hoping to catch a few old ladies or the computer illiterate, but I would assume that people who work at the DNC would just think it is spam and delete it. Hence, they get something a little more personalized.

How do I find the targeted employee e-mail addresses at the DNC? That is a bit easier; many times they are published on a Web site, or I simply guess at employee e-mail addresses, such as hclinton@dnc.com.

  • If any of the targeted e-mails I have sent to a DNC employee is opened, the recipient will, unbeknownst to them, be installing a keystroke logger that captures everything they type. In this way, when they log in to the DNC e-mail server, I also get a login and access to all their e-mails.

How do I ensure my victim does not suspect they have been hacked? Stealth, stealth, stealth. All of my hacking tools, such as my keystroke logger, have very small, inconspicuous footprints. I am not trying to crash or destroy anything at the DNC. The person or persons whose systems I gain entry through most likely will never know. Also, I will only be using the tools for a very short period of time, and I will delete them on my way out.

  • Getting e-mail access. Once the keystroke logger is in place, I have it report back to another one of my hacked personal computers. In this way, the information I am collecting will sit on a home computer with no ties back to me. When I go to collect this information, I again go to a library with my flash drive and download the keystroke data; eventually I load all the e-mails I can get directly onto my flash drive while in the library. I then take them to the Kremlin (or whoever I work for) and hand over the flash drives containing tens of thousands of e-mails for offline analysis.

 

Debunking the Russian Hacking Theory

The FBI purports to have found a "Russian signature file" on the DNC server.

  • It's not as if the hacking community has dialects associated with its hacking tools. Although, if I were a Chinese hacker, I might make sure I left a path pointing back at Russia; why not? If you recall, I deleted my hacking tools on the way out, and yes, I know how to scrub them so there is no latent footprint left on the disk drive.
  • As you can infer from my hacking example, I can hack pretty much anonymously from anywhere in the US, or the world for that matter, using a series of intermediaries and without ever residing at a permanent location.
  • Even if the FBI follows logs of where historical access into the DNC has come from, the trail is going to lead to some grandma's computer at some random location. Remember, all my contacts directly into the DNC were from my hijacked grandma computers. Perhaps that is enough to draw a conclusion, so the FBI can blame some poor Russian grandma. As the real hacker, all the better for me; let grandma take the diversion, and somebody else gets the blame.
  • Now let's suppose the FBI is really on the ball and somehow figures out that grandma's computer was just a shill hijacked by me. So they get a warrant, raid grandma's computer, and find a trail. This path is going to lead them back to the library where I sat perhaps three months ago.
  • We can go another step further: suppose the library had video surveillance and it caught me coming and going; then, just perhaps, they could make an ID match.

By now you get the idea: assuming the hacker was a foreign-sponsored professional and was not caught in the act, the trail is going to be impossible to draw any definite conclusions from.

To see another detailed account of what it takes to hack into a server, please visit our 2011 article, "Confessions of a Hacker."

Economics of the Internet Cloud Part 1



By Art Reisman

CTO, APconnections

Why is it that you need to load up all of your applications and carry them around with you on your personal computing device? From iBird Pro to your favorite weather application, the standard operating model assumes you purchase these things and then affix them to your medium of preference.

Essentially you are tethered to your personal device.

Yes, there are business reasons why a company like Apple would prefer this model. They own the hardware and they control the applications, and thus it is in their interest to keep you walled off and loyal to your investment in Apple products.

But there is another, more insidious economic restriction that forces this model upon us: a lag in the speed and availability of wireless bandwidth. If you had a wireless connection to the cloud that was low-cost and offered a minimum of 300 megabits of access without restriction, you could instantly fire up any application in existence without ever pre-downloading it. Your personal computing device would not store anything. This is the world of the future that I referenced in my previous article, Will Cloud Computing Obsolete Your Personal Device?

The X factor in my prediction is when we will have 300-megabit wireless bandwidth speeds across the globe without restrictions. The assumption is that bandwidth speeds and prices will follow a curve similar to improvements in computing speeds: a Moore's law for bandwidth, if you will.

It will happen, but the question is how fast: 10 years, 20 years, 50 years? And when it does, vendors and consumers will quickly learn it is much more convenient to keep everything in the cloud. No more apps tied to your device. People will own some very cheap cloud space for all their "stuff," and the device on which it runs will become less and less important.
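To give a feel for the timing question, here is a back-of-the-envelope sketch. The 25 Mbps starting point, the 300 Mbps target, and the doubling periods are all assumptions for illustration, not forecasts.

```python
import math

start_mbps, target_mbps = 25.0, 300.0
for doubling_years in (2, 3, 5):
    # Years needed if typical wireless speeds double every `doubling_years` years,
    # a "Moore's law for bandwidth" assumption.
    years = doubling_years * math.log2(target_mbps / start_mbps)
    print(f"Doubling every {doubling_years} years -> ~{years:.0f} years to reach {target_mbps:.0f} Mbps")
```

Even with a generous two-year doubling period, the transition is still the better part of a decade away.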

Bandwidth speed increases in wireless are running against some pretty severe headwinds, which I will cover in my next article. Stay tuned.

Will Cloud Computing Obsolete Your Personal Device?



By Art Reisman

CTO, APconnections

Twenty-two years ago, all the buzz amongst the engineers in the AT&T Bell Labs offices was a technology called "thin client." The term "cloud" had not yet been coined, but the seeds had been sown. We went to our project management as we always did when we had a good idea, and as usual, being the dinosaurs that they were, they could not even grasp the concept; their brains were three sizes too small, and so the idea was tabled.

And then came the Googles and the Apples of the world, the disrupters. As Bell Labs reached old age and wallowed in its death throes, I watched from afar as cloud computing took shape.

Today cloud computing is changing the face of the computer and networking world. From my early-'90s excitement, it took over 10 agonizing years for the first cotyledons to appear above the soil. And even today, 20 years later, cloud computing is in its adolescence; the plants are essentially teenagers.

Historians probably won't even take note of those 10 lost years. They will be footnoted as if that transition time were instantaneous. For those of us who waited in anticipation during that incubation period, the time was real; it lasted over a quarter of our professional working lives.

Today, cloud computing is having a ripple effect on other technologies that were once assumed sacred. For example, customer-premises networks and all the associated hardware are getting flushed down the toilet. Businesses are simplifying their on-premises networks and will continue to do so. This is not good news for Cisco, or for the desktop PC manufacturers, chip makers, and on down the line.

What to expect 20 years from now? Okay, here goes: I predict that the "personal" computing devices that we know and love might fall into decline in the next 25 years. Say goodbye to "your" iPad or "your" iPhone.

That's not to say you won't have a device at your disposal for personal use, but it will only be tied to you for the time period during which you are using it. You walk into the store; along with the shopping carts there is a stack of computing devices. You pick one up, touch your thumb to it, and instantly it has all your data.

Imagine if personal computing devices were so ubiquitous in society that you did not have to own one. How freeing would that be? You would not have to worry about forgetting it or taking it through security. Wherever you happened to be, in a hotel or a library, you could just grab one of the many complimentary devices stacked at the door, touch your thumb to the screen, and you are ready to go: e-mail, pictures, games, all your personal settings in place.

Yes, you would pay for the content and the services, through the nose most likely, but the hardware would be an irrelevant commodity.

Still skeptical? I'll cover the economics of how this transition will happen in my next post. Stay tuned.

NetEqualizer News: November 2016


We hope you enjoy this month's NetEqualizer Newsletter. Highlights include an 8.5 Release feature preview, customer testimonials, and more!

 

  November 2016

 

8.5 Release Planning is Underway!
Greetings! Enjoy another issue of NetEqualizer News.

As we start into the holiday season here in the U.S., I am thankful for many things. First, I want to THANK YOU, our customers, for making this all worthwhile.


In my conversations with customers & prospects, I hear over & over how much our behavior-based shaping (aka equalizing) saves you time, money, and headaches. Thank you for validating all our efforts here at APconnections!

I am also thankful that the Presidential Election is over in the U.S., as I am tired of seeing political TV advertisements, which seem to be on every 10 minutes.

We continue to work with you to solve some of your most pressing network problems – so if you have one that you would like to discuss with us, please call or email me anytime at 303.997.1300 x103 or art@apconnections.net.

And remember we are now on Twitter. You can follow us @NetEqualizer.

– Art Reisman (CTO)

In this Issue:

:: 8.5 Release Features Preview

:: We Want Your Suggestions for the 8.5 Release!

:: Is Anyone Out There Still Suffering From DDoS Attacks?

:: Featured Customer Testimonials

:: Best of Blog: Using NetEqualizer to Ensure Clean, Clear QoS for VOIP Calls

8.5 Release Features Preview

We are starting to plan our 8.5 Release!

We have started putting together initial plans for our late spring software update – 8.5 Release. We have some exciting features in mind! Here is a preview of several features that will be included:

Cloud Reporting

Have you ever wanted to access reporting data for longer than 4 weeks? The reason for the current NetEqualizer limit is that we can only store so much data on the device itself.

Our new Cloud Reporting offering will allow you to store historical NetEqualizer data for an extended period of time. You’ll be able to seamlessly pull this data from the Cloud and display the results on your NetEqualizer, or use it for other reporting and archiving purposes.

Read-only Login Account (customer feature request)

The NetEqualizer has always used basic HTTP authentication for its one account, but that is about to change! The next release will have a more standard login page with two roles – the current administrator role as well as a NEW read-only account role. The read-only account will let non-technical staff log in and view reports, as well as a few other features.

NetEqualizer Logout (customer feature request)

We will support web application sessions with both login & logout. Today we offer login, but in 8.5 users will also be able to securely log out of their session once they are finished using the GUI.

We are very excited about enhancing our recent 8.4 Release user interface with these changes. Stay tuned to the newsletter for updates on 8.5 features, release dates, and more!

We Want Your Suggestions for the 8.5 Release!

 We want your help! Last call for suggestions for our 8.5 Release.

Now is your last chance for 8.5 Release feature requests!

Many of our best features come from customer requests. For example, for all of you that wanted to have a read-only account for NetEqualizer administration, you’ll be happy to know that we have included it in our upcoming 8.5 Release. Our NetEqualizer Logout is also based on a customer suggestion.

For those suggested features that don’t make the cut, it is not because we did not like them (we like all the suggestions), but we have to filter on features that apply to a large set of our customers. We also keep track of all feature requests, so if yours does not make it into 8.5, it may be scheduled in a future release.

We only know what features you are interested in if you speak up! We have no way of knowing if a feature is popular or not unless we hear from you. So please, think deep and tell us what features would make the NetEqualizer tool more valuable to you!

Here are some questions you can ask yourself or your IT team to come up with ideas:

  1. What feature could I use to help us troubleshoot network problems, perhaps something you need to see in our reports?
  2. What feature would further help optimize our bandwidth resource, perhaps your wireless network has unique challenges?
  3. What security concerns do you have? Anything in the DDoS arena?
  4. What feature could be added to make setup and maintenance more efficient?


Is Anyone Out There Still Suffering from DDoS Attacks?

What have your experiences been?

Perhaps the Russians have given up on hacking? We are not sure, but we certainly have seen a big drop off in DDoS help requests to our support team – so much so that we have put our DDoS firewall enhancement plans on hold.

We were working on a feature request to block foreign IPs by connection count as one of our DDoS triggers. It would work something like this:

A NetEqualizer customer sets a whitelist of public IPs to let through (never blocked). Any other public IP hitting the network with more than X active connections would trigger an alert, or possibly a block, based on your preference.
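For the curious, here is a rough sketch of how such a trigger could work. The function name, the 200-connection threshold, and the sample addresses (drawn from the reserved documentation ranges) are all hypothetical; the real feature would be configurable on the NetEqualizer itself.

```python
from collections import Counter

def find_ddos_offenders(active_connections, whitelist, max_conns=200):
    """active_connections: iterable of source-IP strings, one entry per active connection.
    Returns non-whitelisted IPs whose connection count exceeds the threshold."""
    counts = Counter(ip for ip in active_connections if ip not in whitelist)
    return {ip: n for ip, n in counts.items() if n > max_conns}

# One trusted partner IP plus a flood from an unknown source.
active = ["203.0.113.9"] * 500 + ["198.51.100.7"] * 350 + ["192.0.2.1"] * 3
print(find_ddos_offenders(active, whitelist={"203.0.113.9"}))
# {'198.51.100.7': 350} -> candidate for an alert, or a block, depending on your preference
```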

We need to know if such a feature, or another DDoS approach would be better, based on your experience.

Let us know what you have been seeing as far as DDoS attacks on your network!


Featured Testimonials

What our customers are saying…

We take great pride in ensuring our customers are happy with their NetEqualizer! You can find all of our customer testimonials on our website under the “Customers” menu.

Here are just a few testimonials that we’ve received in 2016:

Reed College

“We’ve had NetEqualizers on campus at Reed for several years and continue to be very happy with the product. We have a very small staff and don’t have time to “tune” a device like a Packetshaper. Instead the NetEqualizer is protocol agnostic in the way it shapes traffic for most users but also allows us to quickly prioritize some traffic if necessary.

Over the years the NetEqualizer has saved us countless hours of staff time. We did lose some visibility into what is happening on our border network but our IDS/IPS replaced that functionality. NetEqualizer is an excellent product."

– Gary Schlickeiser, Director of Technology Infrastructure Services

Thanks Gary for your kind words!

Edmonton Regional Airport Authority

“We presently use two NE3000 units for Internet traffic control and monitoring in a redundant setup. At present we have a maximum of 600 Mbps Internet throughput, with over 300 IP addresses in use in some 120+ address Pools.

The NetEqualizer is a very useful tool for us for monitoring and setting speeds for our many users. Most of the feeds come straight off our Campus network, which is spread over a seven kilometer distance from one end of the airdrome to the other. We also feed a number of circuits to customers using ADSL equipment in the older areas where fiber is not yet available. Everything runs though the “live” NE3000!

Controllability and monitoring is key for our customers, as they pay for the speed they are asking for. With the RTR Dashboard, we continually monitor overall usage peaks to make sure we provide enough bandwidth but, more importantly, to our individual customers. Many customers are not sure of how much bandwidth they need, so using the Neteq we can simply change their speed and watch the individual IP and/or Pool usage to monitor. This becomes especially useful now as many customers, including ourselves, use IP telephony to remote sites; so we need to maintain critical bandwidth availability for this purpose. That way when they or we have conference calls for example, no one is getting choppy conversations. All easily monitored and adjusted with the Dashboard and Traffic Management features.

We also have used the Neteq firewall feature to stop certain attack threats and customer infected pcs or servers from spewing email or other reported outbound attacks, not a fun thing but it happens.

Overall a very critical tool for our success in providing internet to users and it has worked very well for the past 8 or more years!"

– Willy Damgaard, Network and Telecom Analyst

Thanks Willy! We are happy to help.

Cooperative Light & Power

“Our company is an electric utility and we have a subsidiary WISP with about 1,000 unlicensed fixed wireless customers. We purchased our first NetEqualizer about a year ago to replace our fair access policy server from another company. The server we replaced allowed burst then sustained bandwidth so we weren’t sure if “equalizing” would work, but it works extremely well as advertised.

The NetEqualizer is stable and actually requires very little maintenance after initial configuration. In our case, we wanted to limit the upper end of what a customer could use (max burst). We were able to set that parameter in our wireless CPE’s. Then we set the equalizing pools for the size of our APs. The NetEqualizer can do a burst then sustained then burst at equal intervals, but to our surprise we actually didn’t need to use it.

We also purchased the DDoS Firewall and that is working nicely as well for quick identification of attacks. Perhaps the most important thing to note is the support is excellent. From sales to engineering the team is very responsive and knowledgeable. We were so impressed that we actually purchased a second NetEqualizer to handle the rest of our network. This company is A+."

– Kevin Olson, Communication Manager

Thanks Kevin!

It is wonderful to hear such glowing feedback from one of our newer customers! If you would like to share your feedback on the NetEqualizer, to be highlighted in a future NetEqualizer News, click here to send us an email.

unnamed-5

Best Of Blog

Using NetEqualizer to Ensure Clean, Clear QoS for VoIP Calls

By Art Reisman
 
Last week I talked to several ISPs (note: these were blind calls, not from our customers) that were having issues with end customers calling and complaining that their web browsing and VoIP calls were suffering. The funny thing is that the congestion was not the fault of the ISP, but of the local connection being saturated with video. For example, if the ISP delivers a 10-meg circuit and the customer starts two Netflix sessions, they clog their own circuit.
Those conversations reminded me of an article I wrote back in 2010 that explains how the NetEqualizer can alleviate this type of congestion for VoIP. Here it is…

Photo of the Month
Hiking Near Caribou Ranch
It’s been unseasonably warm in Colorado this fall. We’ve been taking advantage of this by hiking in the mountains amidst the changing leaf colors. 
APconnections, home of the NetEqualizer | (303) 997-1300 | Email | Website 

Crossing a Chasm, Transitioning From Packet Shaping to the Next Generation Bandwidth Shaping Technology



By Art Reisman

CTO, APconnections

Even though I would self-identify as an early adopter of new technology, when I look at my real-life behavior, I tend to resist change and hang on to technology that I am comfortable with. Suffice it to say, I usually need an event or a gentle push to get over my resistance.

Given that technology change is uncomfortable, what follows is a gentle push, or perhaps a mild shove, for anybody who is looking to pull the trigger on moving away from packet shaping toward a more sustainable, cost-effective alternative.

First off, let's look at why packet-shaping (Layer 7 deep packet inspection) technologies are popular.

“A good layer 7 based tool creates the perception of complete control over your network. You can see what applications are running, how much bandwidth they are using, and make  adjustments to flows to meet your business objectives.”

Although the above statement appears idyllic, the reality is that packet shaping, even in its prime, was at best only 60 percent accurate. The remaining 40 percent of traffic could never be classified, and thus had to be shaped based on guesswork or faith.

Today, the accuracy of packet classification continues to slip. Security concerns are forcing most content providers to adopt encryption. Encrypted traffic cannot be classified.

In an effort to stay relevant, companies have moved away from deep packet inspection to classifying traffic by source and destination (source IPs are never encrypted and thus always visible).

If your packet-shaping device knows the address range of a content provider, it can safely assume a traffic type by examining the source IP address. For example, YouTube traffic emanates from a source address owned by Google. The drawback of this method is that savvy users can easily hide their sources by using any one of the publicly available VPN utilities out there. The personal VPN market is exploding as individual users move to VPN tunneling services for all their home browsing.
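Conceptually, source-address classification amounts to a prefix lookup, as in the minimal sketch below. The prefixes are placeholders from the documentation ranges, not any provider's real addresses, and a VPN tunnel defeats the whole scheme by hiding the true source.

```python
import ipaddress

# Placeholder prefixes for illustration only - not real provider ranges.
KNOWN_RANGES = {
    "video-provider": ipaddress.ip_network("198.51.100.0/24"),
    "cdn":            ipaddress.ip_network("203.0.113.0/24"),
}

def classify_by_source(src_ip: str) -> str:
    addr = ipaddress.ip_address(src_ip)
    for label, net in KNOWN_RANGES.items():
        if addr in net:
            return label
    return "unknown"  # VPN-tunneled or otherwise unrecognized traffic lands here

print(classify_by_source("198.51.100.42"))  # video-provider
print(classify_by_source("10.1.2.3"))       # unknown
```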

The combination of VPN tunnels and encrypted content is slowly transforming the best application classifiers into paperweights.

So what are the alternatives? Is there something better?

Yes, if you can let go of the concept of controlling specific traffic by type, you can find viable alternatives. As per the title, you must "cross the chasm" and surrender to a new way of bandwidth shaping, where decisions are based on usage heuristics, not absolute identification.

What is a heuristic-based shaper?

Our heuristic-based bandwidth shapers borrow from the world of computer science and a CPU scheduling technique called shortest job first (SJF). In today's world, a "job" is synonymous with an application. You have likely, unknowingly, experienced the benefits of a shortest-job-first-style scheduler when using a Unix-based laptop such as a Mac, or a Linux machine running Ubuntu. Unlike the older Windows operating systems, where one application could lock up your computer, such lock-ups are rare on Linux. Linux uses a scheduler that allows preemption to let other applications in during peak times, so they are not starved for service. Simply put, a computer with many applications using SJF will pick the application it thinks is going to use the least amount of time and run it first, or preempt a hog to let another application in.

In the world of bandwidth shaping we do not have the issue of contended CPU resources, but we do have an overload of Internet applications vying for bandwidth on a shared link. The NetEqualizer uses SJF-type techniques to preempt users who are dominating a link with large downloads and other hogs. Although the NetEqualizer does not specifically classify these hogging applications by type, it does not matter: the hogs, such as large downloads and high-resolution video, are given lower priority by their large footprint alone. Thus the business-critical interactive applications with smaller bandwidth consumption get serviced first.
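Here is a toy sketch of the equalizing idea, not the NetEqualizer's actual implementation: when the shared link is saturated, flag the flows with the largest footprint for a penalty, largest first, without ever identifying what they are. The capacity, threshold fraction, and flow names are illustrative assumptions.

```python
def pick_flows_to_penalize(flows_kbps, link_capacity_kbps, hog_fraction=0.25):
    """flows_kbps: dict of flow-id -> current rate in kbps.
    When the link is congested, return the flows using more than hog_fraction
    of capacity, largest first; small interactive flows are left alone."""
    if sum(flows_kbps.values()) < link_capacity_kbps:
        return []  # no congestion, no intervention
    threshold = hog_fraction * link_capacity_kbps
    hogs = [fid for fid, rate in flows_kbps.items() if rate > threshold]
    return sorted(hogs, key=lambda fid: flows_kbps[fid], reverse=True)

flows = {"video-download": 6000, "backup": 3500, "voip-call": 80, "web-browsing": 300}
print(pick_flows_to_penalize(flows, link_capacity_kbps=9000))
# ['video-download', 'backup'] - the hogs; the VoIP call and web browsing are untouched
```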

Summary

The issue we often see with switching to this heuristic shaping technology is that it goes against the absolute, control-oriented solution promised by packet shaping. But sticking with deep packet inspection and expecting to get control over your network is becoming impossible, hence something must change.

The new heuristic model of bandwidth shaping delivers priority for interactive cloud applications, and the implementation is simple and clean.

 

 
