For-Profit Wired Home Internet: Is It Coming to an End?


Low-resolution ghost mode is where your video quality drops down to save bandwidth. The resulting effect transforms once-proud basketball players into a slurry of mush as their video molecules are systematically destroyed.

 

Last night, I was trying to watch a basketball game on Hulu through my business-class Comcast line, which promises 20 megabits down and 4 megabits up. Not only was my Hulu feed breaking up periodically, but my Drop Cam was going up and down constantly, sending me e-mails that it was offline. I checked my bandwidth through my NetEqualizer and found I was not even pulling 6 megabits, less than one third of my contracted rate. When the Hulu feed was not locking up completely, it was dropping down into low-resolution ghost mode. I have documented my Comcast findings before through various experiments; clearly Comcast has upstream congestion issues or is shaping selected video traffic. Either way, I am at their mercy when trying to watch video on the Internet.

 

What options does one have for alternative Internet service in the Denver metro area, or for that matter, other metro areas around the country?

 

Option #1 Get Closer to the Source

Beam Internet directly via microwave link from a well-connected building. A friend of mine runs an ISP that does essentially this. He buys bandwidth in bulk, and from a point-of-presence rooftop downtown, he can beam Internet via a point-to-point circuit directly to your residence or building. I called him out of desperation, but I am not in line of sight for any of his services.

 

Option #2 CenturyLink

They constantly run commercials touting that they are better than Comcast. I call them perhaps once a year or so, only to find out that my neighborhood is not wired for their high-speed service.

 

Option #3 Use My Unlimited T-Mobile as a Hot Spot

Believe it or not, I actually did this for a while, and the video service was a bit better than Comcast. The problem with this solution is that T-Mobile will drop your speeds down once you have consumed 24 gigabytes in a month, and it will become useless for anything other than e-mail. (24 gigabytes is approximately 4 full-length movies.)
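A quick sanity check on that movie estimate, assuming roughly 6 GB per full-length HD movie (an illustrative average; actual sizes vary widely with resolution and codec):

```python
# Rough sanity check of the 24 GB cap; the 6 GB-per-movie figure is
# an assumed average for an HD stream, not a T-Mobile number.
CAP_GB = 24
GB_PER_MOVIE = 6

movies_before_throttle = CAP_GB / GB_PER_MOVIE
print(movies_before_throttle)  # 4.0
```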

Option #4 Move

The city just to the north of me, Longmont, put in its own fiber ring to the curb. Early reports are that it works great and the residents love it. Since it is essentially a public utility, there are no shaping games destroying your Hulu. If you contract for 20 megabits, you get 20 megabits. And now the city of Boulder is considering doing the same.

With two nearby cities essentially kicking out their entrenched providers within a few miles of my home, I can see other municipalities quickly following suit. Having good-quality, affordable municipal Internet service is not just a luxury for a city; it is essential for economic development. As I can attest, it will be a factor in where I choose to live the next time I move. I will not put myself at the mercy of Comcast again.

 

By Art Reisman

 

 

Five Things to Know About Wireless Networks


By Art Reisman
CTO, APconnections


Over the last year or so, when the work day is done, I often find myself talking shop with several peers of mine who run wireless networking companies. These are the guys in the trenches. They spend their days installing wireless infrastructure in apartment buildings, hotels, and professional sports arenas, to name just a few. Below I share a few tidbits intended to provide a high-level picture for anybody thinking about building their own wireless network.

There are no experts.

Why? The companies that make wireless equipment are sending out patches almost hourly. Because they have no idea what works in the real world, every new release is an experiment. Anybody who works in this industry is chasing the technology; it is not stable enough for a person to become an expert. Anybody who claims to be an expert is living an illusion at best; perhaps "wireless historian" would be a better term for this fast-moving technology. What you know today will likely be obsolete in six months.

The higher (faster) the frequency, the higher the cost of the network.

Why? As the industry moves to standards that transmit data at higher data rates, it must use higher frequencies to achieve the faster speeds. It just so happens that these higher frequencies tend to be less effective at penetrating buildings, walls, and windows. The increase in cost comes with the need to place more and more access points in a building to achieve coverage.
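As a rough illustration of why shorter range drives up cost, here is a back-of-the-envelope Python sketch. The circular-coverage model and the radii are illustrative assumptions, not site-survey data; real planning must account for walls and interference:

```python
import math

def access_points_needed(floor_area_m2, coverage_radius_m):
    """Rough planning estimate: number of APs needed to blanket a
    floor, assuming each AP covers an ideal circle of the given
    radius. A real site survey would adjust for obstructions."""
    per_ap_area = math.pi * coverage_radius_m ** 2
    return math.ceil(floor_area_m2 / per_ap_area)

# Halving the effective radius (e.g., a higher frequency that
# penetrates walls poorly) roughly quadruples the AP count:
print(access_points_needed(2000, 20))  # 2
print(access_points_needed(2000, 10))  # 7
```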

Putting more access points in your building does not always mean better service.

Why? Computers have a bad habit of connecting to one access point and then not letting go, even when the signal gets weak. For example, when you connect to a wireless network with your laptop in the lobby of a hotel and then move across the room, you can end up in a bad spot with respect to your original access point connection. In theory, the right thing to do would be to release your current connection and connect to a different access point. The problem is that most of the installed base of wireless networks does not have any intelligence built in to route you to the best access point; hence even a building with plenty of coverage can have maddening service.

Electromagnetic Radiation Cannot Be Seen

So what? The issue here is that there are all kinds of scenarios where wireless signals bouncing around the environment can destroy service. Think of a highway full of invisible cars traveling in any direction they want. When a wireless network is installed, the contractor in charge does what is called a site survey. This involves special equipment that can measure the electromagnetic waves in an area, and it helps them plan how many wireless access points to install and where; but once installed, anything can happen. Private personal hotspots, devices with electric motors, and a change in metal furniture configuration are all things that can destabilize an area, and thus service can degrade for reasons that nobody can detect.

The More People Connected, the Slower Their Speed

Why? Wireless access points use a technique called TDM (Time Division Multiplexing). Basically, available bandwidth is carved up into little time slots. When there is only one user connected to an access point, that user gets all the bandwidth; when there are two users connected, they each get half the time slots. So that access point that advertised 100-megabit speeds can only deliver at best 10 megabits when 10 people are connected to it.
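The arithmetic above can be sketched in a few lines of Python. The equal-share split is an idealization; real access points also lose airtime to retransmissions and management overhead, so actual per-user throughput is lower still:

```python
def per_user_throughput(advertised_mbps, users):
    """With TDM, airtime is divided into slots shared by all
    connected users, so each user's best case is an equal share
    of the advertised rate."""
    if users < 1:
        raise ValueError("need at least one connected user")
    return advertised_mbps / users

print(per_user_throughput(100, 1))   # 100.0 -- one user gets it all
print(per_user_throughput(100, 10))  # 10.0  -- ten users split the slots
```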

Related Articles

Wireless is nice but wired networks are here to stay

Seven Tips To Improve Performance of your Wireless Lan

Top 5 Reasons Confirming Employers Do Not Like Their IT Guy



  • The IT room is the dregs.
    Whenever I travel to visit my IT customers, it is always a challenge to find their office. Even if I find the right building on the business or college campus, finding their actual location within the building is anything but certain. Usually it ends up being in some unmarked room behind a loading dock, accessible only by a secret passage designed to relieve the building of cafeteria waste near the trash bins. Many times, their offices are one and the same as the old server room, with the raised floor, screaming fans, and air cooled to a Scottish winter.
  • Nobody knows you are in the building. Oftentimes I enter the building on the upper floors, the floors with windows and young, well-dressed professionals trying to move up the ladder. Asking these people if they know where the IT room is usually brings on blank stares of confusion and embarrassment. To them, the IT guy is that person they only see when their computer fails with a virus. Where he emanates from, nobody knows; perhaps a trap door opens in the floor. I am not making this up: the usual way I am instructed to meet the IT guy is that they send me an e-mail telling me to meet at some well-known landmark out front, like a fountain or statue, with a rendezvous time.
  • You are expected to be an expert in wireless technology. Let's face it: the companies that make wireless controllers are sending out patches almost hourly. Why? Because they have no idea what works in the real world, and so you are part of the experiment. The real fact is that nobody is an expert in real-world wireless technology. As the IT guy, you can never admit to any holes in your wireless knowledge. If you are not willing to lie, there are plenty of people with no experience willing to make that claim with a straight face. You just can't be honest about this, because your boss has already told his boss you are an expert. Here is the last paragraph of a recent article on Verizon's trial with the latest 5G wireless:

Of course, 5G wireless has never been truly tested at scale in true market scenarios. There’s talk of gigabit capable speeds, but how would a single tower supporting fixed wireless 5G at scale compare to fiber and HFC based networks connected all the way to homes and businesses? No one really knows – yet.

Setting up a new wireless network with the latest technology is like taking a physics test in wave propagation before you have taken the class and expecting to pass.

  • You will never get rewarded if things work without issues.  I like to compare a good IT tech to a good umpire or a ref in a soccer game.  At best, if they do a perfect job, nobody notices them.   If I ran a big company, I would hand out bonuses to my IT staff for the days I did not need them, but I do not have an MBA. (see next paragraph)
  • Any time a company hires a brilliant MBA from some business school, the first thing they do is explore outsourcing the IT staff. Why? Because nobody teaches them anything about IT in business school. They live in a fantasy world where some unknown third party with a slick brochure and an unrealistic low-ball estimate is going to care more about IT needs than the four poor schlubs in the basement who have been loyal for years. You and the in-house staff have always been on call, missing many weekends over the years, just to ensure the IT infrastructure stays up, and yet the Harvard guy will shoot himself in the foot with outsourcing every time.

Proving the Identity of the DNC Hacker Is Not Likely



By Art Reisman

CTO, APconnections

Inspired by the recent accusations regarding the alleged Russian hacking of the DNC e-mail servers, I ask the question: is it really possible for our intelligence agencies to say with confidence exactly who hacked those servers? I honestly don't think so. To back up my opinion, I have decided to take our faithful blog readers through the mind and actions of a professional hacker intent on breaking into a corporate e-mail server without leaving a trace. From there you can draw your own conclusions.

My hacking scenario below is based on actual techniques that our own ethical hackers use to test security at corporations. These companies contract with us to deliberately break into their IT systems, and yes, sometimes we do break in.

First we will follow our hacker through the process of a typical deliberate illegal break-in, and then we will analyze the daunting task a forensic expert must deal with after the fact.

 

Here we go….

Phase I

  • First, I need a platform for the first phase of my attack. I want to find a computer with no formal ties to my identity. Just as the public telephone booths of the '70s and '80s were used for calling in bomb threats, the computers in your public libraries can easily conceal my identity.
  • To further cover my trail, I bring my own flash drive with me to the library; it contains a software program commonly referred to as a "BOT". This allows me to move programs onto the library computer without doing something like logging into my personal e-mail, which would leave a record of me being there. In this case my BOT specializes in crawling the Internet looking for consumer-grade desktop computers to break into.
  • My BOT searches the Internet at random looking for computers that are unprotected. It will hit several thousand computers an hour for as long as I let it run.
  • I don't want to go too long with my BOT running from the library, because all the outbound activity it generates may be detected as a virus by an upstream ISP. The good news in my favor is that BOTs, both friendly and malicious, are very common. At any time of the day there are millions of them running all over the world.

Note: running a BOT in itself is not a crime; it is just bad etiquette and annoying. It is extremely unlikely that anybody would actually be able to see that I am trying to hack into computers (yes, this is a crime) with my BOT, because that would take very specialized equipment, and since I chose my library at random, the chances of drawing attention at this stage are minuscule. Typically a law enforcement agency must obtain a warrant to set up its detection equipment. All the upstream provider would sense is an unusually high rate of traffic coming out of the library.

  • Once my BOT has found some unprotected home computers and I have their login credentials, I am ready for Phase 2. I save off their IP addresses and credentials, delete the BOT from the computer in the library, and leave, never to return.

You might be wondering: how does a BOT get access to home computers? Many are still out there running very old versions of Windows or Linux and have generic passwords like "password". The BOT attempts to log in through a well-known service such as SSH (remote login) and guesses the password. The BOT may run into 1,000 dead ends or more before cracking a single computer. Just like a mindless robot should, it works tirelessly without complaint.

Phase II

  • I again go to the library and set up shop. Only this time, instead of a BOT, I come armed with a phishing scam e-mail on my flash drive. From a computer in the library I remotely log in to one of the home computers whose credentials I obtained in Phase 1 and set up shop.
  • I set up a program that will send e-mails from the home computer to people who work at the DNC, with my trojan horse content attached.

If I am smart, I do a little research on the backgrounds of the people I am sending to, so as to make the e-mails as authentic as possible. Most consumers have seen the obvious scams where you get some ridiculous out-of-context e-mail with a link to open some file you never asked for. That works for mass e-mailing to the public, hoping to find a few old ladies or the computer illiterate, but I would assume that people who work at the DNC would just think it is a spam e-mail and delete it. Hence, they get something a little more personalized.

How do I find the targeted employee e-mails at the DNC? That is a bit easier; many times they are published on a Web site, or I simply guess at employee e-mail addresses, such as hclinton@dnc.com.

  • If any of the targeted e-mails I have sent to a DNC employee are opened, the employee will, unbeknownst to them, be installing a keystroke logger that captures everything they type. In this way, when they log in to the DNC e-mail server, I also get a login and access to all their e-mails.

How do I ensure my victim does not suspect they have been hacked? Stealth, stealth, stealth. All of my hacking tools, such as my keystroke logger, have very small, inconspicuous footprints. I am not trying to crash or destroy anything at the DNC. The person or persons whose systems I gain entry through most likely will never know. Also, I will only be using them for a very short period of time, and I will delete them on my way out.

  • Getting e-mail access. Once the keystroke logger is in place, I have it report back to another one of my hacked personal computers. In this way the information I am collecting will sit on a home computer with no ties back to me. When I go to collect this information, I again go to a library with my flash drive and download the keystroke information; eventually I load all the e-mails I can get directly onto my flash drive while in the library. I then take them to the Kremlin (or whoever I work for) and hand over the flash drives containing tens of thousands of e-mails for offline analysis.

 

Debunking the Russian Hacking Theory

The FBI purports to have found a "Russian signature file" on the DNC server. So what?

  • It's not like the hacking community has dialects associated with its hacking tools. Although, if I were a Chinese hacker, I might make sure I left a path pointing back at Russia; why not? If you recall, I deleted my hacking tools on the way out, and yes, I know how to scrub them so there is no latent footprint on the disk drive.
  • As you can infer from my hacking example, I can hack pretty much autonomously from anywhere in the US, or the world for that matter, using a series of intermediaries and without ever residing at a permanent location.
  • Even if the FBI follows logs of where historical access into the DNC has come from, the trail is going to lead to some grandma's computer at some random location. Remember, all my contacts directly into the DNC were from my hijacked grandma computers. Perhaps that is enough to draw a conclusion, so the FBI can blame some poor Russian grandma. As the real hacker, all the better for me; let grandma take the blame as a diversion.
  • Now let's suppose the FBI is really on the ball and somehow figures out that grandma's computer was just a shill hijacked by me. So they get a warrant, raid grandma's computer, and find a trail. This path is going to lead them back to the library where I sat perhaps three months ago.
  • We can go another step farther: suppose the library had video surveillance and caught me coming and going. Then, just perhaps, they could make an ID match.

By now you get the idea: assuming the hacker was a foreign-sponsored professional and was not caught in the act, the trail is going to be impossible to draw any definite conclusions from.

To see another detailed account of what it takes to hack into a server, please visit our 2011 article "Confessions of a Hacker".

Economics of the Internet Cloud Part 1



By Art Reisman

CTO, APconnections

Why is it that you need to load up all of your applications and carry them around with you on your personal computing device? From iBird Pro to your favorite weather application, the standard operating model assumes you purchase these things and then affix them to your medium of preference.

Essentially you are tethered to your personal device.

Yes, there are business reasons why a company like Apple would prefer this model. They own the hardware and they control the applications, and thus it is in their interest to keep you walled off and loyal to your investment in Apple products.

But there is another, more insidious economic restriction that forces this model upon us, and that is a lag in the speed and availability of wireless bandwidth. If you had a wireless connection to the cloud that was low-cost and offered a minimum of 300 megabits of access without restriction, you could instantly fire up any application in existence without ever pre-downloading it. Your personal computing device would not store anything. This is the world of the future that I referenced in my previous article, Will Cloud Computing Obsolete Your Personal Device?

The X factor in my prediction is when we will have 300-megabit wireless bandwidth speeds across the globe without restrictions. The assumption is that bandwidth speeds and prices will follow a curve similar to improvements in computing speeds, a Moore's law for bandwidth if you will.

It will happen, but the question is how fast: 10 years, 20 years, 50 years? And when it does, vendors and consumers will quickly learn it is much more convenient to keep everything in the cloud. No more apps tied to your device. People will own some very cheap cloud space for all their "stuff", and the device on which it runs will become less and less important.

Bandwidth speed increases in wireless are running against some pretty severe headwinds, which I will cover in my next article. Stay tuned.

Will Cloud Computing Obsolete Your Personal Device?



By Art Reisman

CTO, APconnections

Twenty-two years ago, all the buzz amongst the engineers in the AT&T Bell Labs offices was a technology called "thin client". The term "cloud" had not yet been coined, but the seeds had been sown. We went to our project management, as we always did when we had a good idea, and as usual, being the dinosaurs that they were, they could not even grasp the concept; their brains were three sizes too small, and so the idea was tabled.

And then came the Googles and the Apples of the world, the disrupters. As Bell Labs reached old age and wallowed in its death throes, I watched from afar as cloud computing took shape.

Today, cloud computing is changing the face of the computer and networking world. From my early-'90s excitement, it took over 10 agonizing years for the first cotyledons to appear above the soil. And even today, 20 years later, cloud computing is in its adolescence; the plants are essentially teenagers.

Historians probably won't even take note of those 10 lost years. The transition will be footnoted as if it were instantaneous. For those of us who waited in anticipation during that incubation period, the time was real; it lasted over a quarter of our professional working lives.

Today, cloud computing is having a ripple effect on other technologies that were once assumed sacred. For example, customer-premise networks and all the associated hardware are getting flushed down the toilet. Businesses are simplifying their on-premise networks and will continue to do so. This is not good news for Cisco, or for the desktop PC manufacturers, chip makers, and on down the line.

What to expect 20 years from now? Okay, here goes: I predict that the "personal" computing devices that we know and love might fall into decline in the next 25 years. Say goodbye to "your" iPad or "your" iPhone.

That's not to say you won't have a device at your disposal for personal use, but it will only be tied to you for the time period in which you are using it. You walk into the store; along with the shopping carts there is a stack of computing devices. You pick one up, touch your thumb to it, and instantly it has all your data.

Imagine if personal computing devices were so ubiquitous in society that you did not have to own one. How freeing would that be? You would not have to worry about forgetting one or taking it through security. Wherever you happened to be, in a hotel or a library, you could just grab one of the many complimentary devices stacked at the door, touch your thumb to the screen, and you are ready to go: e-mail, pictures, games, all your personal settings in place.

Yes, you would pay for the content and the services, through the nose most likely, but the hardware would be an irrelevant commodity.

Still skeptical? I'll cover the economics of how this transition will happen in my next post. Stay tuned.

Crossing a Chasm, Transitioning From Packet Shaping to the Next Generation Bandwidth Shaping Technology



By Art Reisman

CTO, APconnections

Even though I would self-identify as an early adopter of new technology, when I look at my real-life behavior, I tend to resist change and hang on to technology that I am comfortable with. Suffice it to say, I usually need an event or a gentle push to get over my resistance.

Given that technology change is uncomfortable, what follows is a gentle push, or perhaps a mild shove, for anybody who is looking to pull the trigger on moving away from packet shaping to a more sustainable, cost-effective alternative.

First off, let's look at why packet shaping (layer 7 deep packet inspection) technologies are popular.

“A good layer 7 based tool creates the perception of complete control over your network. You can see what applications are running, how much bandwidth they are using, and make  adjustments to flows to meet your business objectives.”

Although the above statement appears idyllic, the reality is that packet shaping, even in its prime, was at best only 60 percent accurate. The remaining 40 percent of traffic could never be classified and thus had to be shaped based on guesswork or faith.

Today, the accuracy of packet classification continues to slip. Security concerns are forcing most content providers to adopt encryption. Encrypted traffic cannot be classified.

In an effort to stay relevant, companies have moved away from deep packet inspection to classifying traffic by source and destination (source IPs are never encrypted and thus always visible).

If your packet shaping device knows the address range of a content provider, it can safely assume a traffic type by examining the source IP address. For example, YouTube traffic emanates from a source address owned by Google. The drawback with this method is that savvy users can easily hide their sources by using any one of the publicly available VPN utilities out there. The personal VPN world is exploding as individual users move to VPN tunneling services for all their home browsing.
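A minimal sketch of source-address classification using Python's standard `ipaddress` module. The provider range below is a hypothetical stand-in; real provider ranges change over time and would have to be pulled from routing registries:

```python
import ipaddress

# Hypothetical published ranges; not an authoritative list.
PROVIDER_RANGES = {
    "google": [ipaddress.ip_network("172.217.0.0/16")],
}

def classify_by_source(ip):
    """Guess the content provider from a packet's source address.
    A VPN tunnel defeats this: every flow then appears to come
    from the VPN endpoint instead of the real provider."""
    addr = ipaddress.ip_address(ip)
    for provider, networks in PROVIDER_RANGES.items():
        if any(addr in net for net in networks):
            return provider
    return "unknown"

print(classify_by_source("172.217.4.46"))  # google
print(classify_by_source("10.0.0.1"))      # unknown
```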

The combination of VPN tunnels and encrypted content is slowly transforming the best application classifiers into paperweights.

So what are the alternatives? Is there something better?

Yes. If you can let go of the concept of controlling specific traffic by type, you can find viable alternatives. As per the title, you must "cross the chasm" and surrender to a new way of bandwidth shaping, where decisions are based on usage heuristics, not absolute identification.

What is a heuristic-based shaper?

Our heuristic-based bandwidth shapers borrow from the world of computer science a CPU scheduling technique called shortest job first (SJF). In today's world, a "job" is synonymous with an application. You have likely, unknowingly, experienced the benefits of this style of scheduling when you use a Mac or a Linux laptop running, say, Ubuntu. Unlike the older Windows operating systems, where one application could lock up your computer, such lockups are rare on Linux. Linux uses a scheduler that allows preemption to let other applications in during peak times, so they are not starved for service. Simply put, a computer with many applications using SJF will pick the application it thinks is going to use the least amount of time and run it first, or preempt a hog to let another application in.
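For the curious, here is a toy shortest-job-first scheduler in Python. The job names and time estimates are made up, but it shows why running short jobs first keeps the average waiting time low: the long "backup" job waits, while the quick interactive jobs finish almost immediately.

```python
def shortest_job_first(jobs):
    """jobs: list of (name, estimated_time) pairs.
    Runs the shortest job first; returns the completion order
    and the average time each job spent waiting to start."""
    order = sorted(jobs, key=lambda job: job[1])
    waits, elapsed = [], 0
    for _name, duration in order:
        waits.append(elapsed)   # time this job waited before starting
        elapsed += duration
    return [name for name, _ in order], sum(waits) / len(waits)

order, avg_wait = shortest_job_first([("backup", 40), ("email", 1), ("chat", 2)])
print(order)     # ['email', 'chat', 'backup']
print(avg_wait)  # (0 + 1 + 3) / 3 ≈ 1.33
```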

In the world of bandwidth shaping we do not have the issue of contended CPU resources, but we do have an overload of Internet applications vying for bandwidth on a shared link. The NetEqualizer uses SJF-type techniques to preempt users who are dominating a bandwidth link with large downloads and other hogs. Although the NetEqualizer does not specifically classify these hogging applications by type, it does not matter: the hogging applications, such as large downloads and high-resolution video, are given lower priority by their large footprint alone. Thus the business-critical interactive applications with smaller bandwidth consumption get serviced first.
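A heavily simplified sketch of the hog-preemption idea in Python. This is not the NetEqualizer's actual algorithm; the trigger ratio, hog threshold, and penalty factor are illustrative assumptions. The point is that no application classification is needed: footprint alone identifies the hogs, and nothing happens at all unless the link is congested.

```python
def apply_penalties(flows, link_capacity_mbps,
                    trigger_ratio=0.85, hog_share=0.25, penalty=0.5):
    """flows: dict of flow name -> current rate in Mbps.
    When total usage crosses trigger_ratio of capacity, any flow
    consuming more than hog_share of the link is throttled by the
    penalty factor; small interactive flows are left untouched.
    Returns the new rates for the penalized flows only."""
    total = sum(flows.values())
    if total < trigger_ratio * link_capacity_mbps:
        return {}  # link not congested: take no action
    return {name: rate * penalty
            for name, rate in flows.items()
            if rate > hog_share * link_capacity_mbps}

# A 60 Mbps video download saturates a 64 Mbps link; it gets
# halved while SSH and VoIP traffic are untouched.
print(apply_penalties({"video": 60, "ssh": 0.1, "voip": 0.3}, 64))
# {'video': 30.0}
```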

Summary

The issue we often see with switching to this heuristic shaping technology is that it goes against the absolute-control mindset fostered by packet shaping. But the alternative of sticking with deep packet inspection and expecting to retain control over your network is becoming impossible; hence something must change.

The new heuristic model of bandwidth shaping delivers priority for interactive cloud applications, and the implementation is simple and clean.

 

 
