Covid-19 and Increased Internet Usage


Our sympathies go out to everyone who has been impacted by Covid-19, whether you had it personally or it affected your family and friends. I personally lost a sister to Covid-19 complications back in May; hence I take this virus very seriously.

The question I ask myself now, as we see a light at the end of the Covid-19 tunnel with vaccines anticipated next month, is: how has Covid-19 changed the IT landscape for us and our customers?

The biggest change that we have seen is Increased Internet Usage.

We have seen a 500 percent increase in NetEqualizer license upgrades over the past six months, which means that our customers are ramping up their circuits to ensure a work-from-home experience without interruption or outages. What we can't tell for sure is whether these upgrades were made out of an abundance of caution, to get ahead of the curve, or because there was actually a significant increase in demand.

Without a doubt, home Internet usage has increased, as consumers work from home on Zoom calls, watch more movies, and find ways to entertain themselves in a world where they are staying at home most of the time. Did this shift actually put more traffic on the average business office network, where our bandwidth controllers normally reside? The knee-jerk reaction would be yes, of course, but I would argue not so fast. Let me lay out my logic here…

For one, with a group of people working remotely on the plethora of cloud-hosted collaboration applications such as Zoom or Blackboard, there is very little, if any, extra bandwidth burden back at the home office or campus. The additional cloud-based traffic from remote users is pushed onto their residential ISPs. On the other hand, organizations that did not transition services to the cloud will have their hands full handling the traffic from home users coming in over VPN to the office.

Higher Education usage is a slightly different animal. Let's explore the three different cases as I see them for Higher Education.

#1) Everybody is Remote

In this instance it is highly unlikely there would be any increase in bandwidth usage at the campus itself. All of the Zoom or Microsoft Teams traffic would be shifted to the ISPs at the residences of students and teachers.

#2) Teachers are On-Site and Students are Remote

For this we can do an approximation.

For each teacher sharing a room session, you can estimate 2 to 8 megabits of consistent bandwidth load. For a high school with 40 teachers on active Zoom calls, you could estimate a sustained 300 megabits dedicated to Zoom. With just a skeleton crew of teachers and no students in the building, the Internet capacity should hold, since the students, who tend to eat up huge chunks of bandwidth, are no longer there.
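The estimate above is easy to reproduce. A quick sketch, where the 2-8 megabit per-session range is the assumption stated above and the 40-teacher figure is illustrative:

```python
def zoom_load_mbps(sessions, low=2, high=8):
    """Estimate the sustained bandwidth range (in Mbps) for a number of
    concurrent video sessions, given a per-session load of low..high Mbps."""
    return sessions * low, sessions * high

# 40 teachers on active Zoom calls, per the high-school example above.
lo, hi = zoom_load_mbps(40)
print(f"Estimated sustained load: {lo}-{hi} Mbps")  # 80-320 Mbps
```

Planning near the top of that range (roughly 300 megabits) leaves headroom for the worst case.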

#3) Mixed Remote and In-Person Students

The one scenario that would stress existing infrastructure is the case where students are on campus while classes are simultaneously being broadcast remotely for the students who are unable to come to class in person. In this instance, you have close to the normal campus load plus all the Zoom or Microsoft Teams sessions emanating from the classrooms. To top it off, these Zoom or Microsoft Teams sessions are highly sensitive to latency, so the institution cannot risk even a small amount of congestion, as that would cause an interruption to all classes.

Prior to Covid-19, Internet congestion might interrupt a Skype conference call with the sales team in Europe, which is no laughing matter but a survivable disruption. Post Covid-19, an interruption in Internet communication could potentially interrupt the entire organization, which is not tolerable.

In summary, it was probably wise for most institutions to beef up their IT infrastructure to handle more bandwidth, even knowing in hindsight that in some cases it may not have been needed at the campus or the office. Given the absolutely essential role Internet communication has played in keeping businesses and Higher Ed connected, it was not worth the risk of being caught with too little.

Stay tuned for a future article detailing the impact of Covid-19 on ISPs…

DDoS: The Real Extortion. It’s Not What You Think…


I am not normally a big fan of conspiracy theories, but when I start to connect the dots on the evolution of DDoS, I can really only come to one conclusion that makes sense and holds together. You may be surprised at what I have found.

But first, my observations about DDoS.

We have all heard the stories about businesses getting hacked, bank accounts compromised, or credit cards stolen. These breaches happen quietly and discreetly, often only discovered long after the fact. I can clearly understand the motivation of a perpetrator behind this type of break-in: they are looking to steal information and sell it on the dark web.

On the other hand, a DDoS attack does not pose any security threat to a business's data or bank accounts. It is used as a jamming tool to effectively cut off communication by paralyzing the network. I have read vanilla articles detailing how extortion was the motivation. They generally assume the motive is money and that DDoS attacks are monetized through extortion: you get attacked, your web site is down, and some dark figure contacts you via a back channel and offers to stop the attack for a ransom. Perhaps some DDoS attacks are motivated by this kind of extortion, but let's dig a little deeper to see if there is a more plausible explanation.

Through my dealings with hundreds of IT people managing networks, I have found that almost all have experienced some sort of DDoS attack in the past five or six years.

To my knowledge, none of my contacts were ever approached by somebody attempting to extort money. When you think about it, taking a payment via extortion is a very risky endeavor for a criminal. The FBI could easily set up a sting at any time to track the payment. You would have to be very, very clever to extort a payment and not get caught.

Another explanation is that many of these were revenge attacks by disgruntled employees or foreign agents. Maybe a few, but based on my sample, and projecting it out, these DDoS attacks are widespread and not limited to key political targets. Businesses of all sizes have been affected, reaching into the millions. I can't imagine that there are that many disgruntled customers or employees who all decided to settle their grievances with anonymous attacks in such a short time span. And what foreign agent would spend energy bringing down the Internet at a regional real estate office in Moline, Illinois? Yet it was happening, and it was happening everywhere.

The real aha moment came one day when I was having a beer with an IT reseller who sold high-end networking equipment. He reminisced about his 15-year run selling networking equipment at nice margins: switches, routers, access points.

But revenue was getting squeezed and had started to dry up by 2010. Instead of making $100K sales with $30K commissions, many customers dumped their channel connection and started buying their equipment as a commodity online at much lower margins. There was very little incentive to work the sales channels with these diminishing returns. So what was a channel salesperson going to do to replace that lost income? The answer was a new market: selling $200K integrated security systems and clearing $30K commission per sale.

I also learned, after talking to several security consultants, that it was rare to get a new customer willing to proactively purchase services unless they were required to by law. For example, the banking and financial industry had established some standards. But for large and medium private companies, it is hard to extract $200K for a security system as a proactive purchase to protect against an event that has never happened.

I think you might be able to see where I am going with this, but it gets better!

I also noticed that, after the purchase of these rather pricey security systems, attacks would cease. That is strange, because an on-site DDoS prevention tool generally has no chance of stopping a dedicated attack. A DDoS attack is carried out by thousands of hijacked home computers all hitting a business network from the outside. I have simulated one on my own network by having 100 virtual computers hit our website over and over, as fast as they can go, and it cripples my web server.
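The simulation I describe can be sketched as a simple concurrent load generator. This is a hypothetical harness, meant only to be pointed at a server you own; the `fetch` callable is whatever request you want to repeat (for example, an HTTP GET against your own test site), and the worker counts are illustrative:

```python
import threading

def hammer(fetch, workers=100, requests_per_worker=50):
    """Spawn concurrent workers that each call fetch() repeatedly,
    mimicking many clients hitting one server at the same time.
    fetch is any zero-argument callable, e.g. an HTTP GET against
    a test server you control."""
    lock = threading.Lock()
    counts = {"done": 0}

    def worker():
        for _ in range(requests_per_worker):
            fetch()  # issue one request
            with lock:
                counts["done"] += 1

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counts["done"]
```

Even this tame, single-machine version will bring a small web server to its knees; a real attack multiplies it across thousands of hijacked hosts.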

The only way to stop a DDoS attack is at the source. In a real attack, the victim must hunt each source machine down to its local ISP and have the ISP block the attacker at the source. Now imagine an attack coming from 1,000 different sources located all over the world. Your home computer, if compromised by a hacker, could be taking part in an attack and you would never know it. Professional hackers have thousands of hijacked computers under their control (this is also how spammers work). The hacker turns your computer into a slave at their beck and call, and the hijacker is untraceable. When they initiate an attack, they tell your computer to bombard a website of their choosing, along with the thousands of other computers in their control, and BAM! the website goes down.

So why do the attacks cease once a customer has purchased a security system? If the attacks continued after the purchase of the tool, the customer would not be very happy with their purchase. My hypothesis: somebody is calling off the dogs once they get their money.

Let me know if you agree or disagree with my analysis and hypothesis.  What do you think is happening?

Stick a Fork in Third Party Caching (Squid Proxy)


I was just going through our blog archives and noticed that many of the caching articles we promoted circa 2011 are still getting hits.  Many of the hits are coming from less developed countries where bandwidth is relatively expensive when compared to the western world.  I hope that businesses and ISPs hoping for a miracle using caching will find this article, as it applies to all third-party caching engines, not just the one we used to offer as an add-on to the NetEqualizer.

So why do I make such a bold statement about third-party caching becoming obsolete?

#1) There have been some recent changes in the way Google provides YouTube content, which makes caching it almost impossible.  All of their YouTube videos are generated dynamically and broken up into segments, to allow differential custom advertising.  (I yearn for the days without the ads!)

#2) Almost all pages and files on the Internet are marked "do not cache" in their HTTP headers. Some of them would cache effectively, but you must assume the designer plans on making dynamic, on-the-fly changes to their content. Caching an obsolete page and delivering it to an end user could actually result in serious issues, and perhaps even a lawsuit, if you cause some form of economic harm by ignoring the "do not cache" directive.

#3) Streaming content as well as most HTML content is now encrypted, and since we are not the NSA, we do not have a back door to decrypt and deliver from our caching engines.
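The "do not cache" directives mentioned in point #2 live in the HTTP Cache-Control response header, and a well-behaved caching engine has to honor them. A simplified version of the check a proxy might apply (real proxies such as Squid implement the full RFC 7234 rules, so treat this as a sketch):

```python
def is_cacheable(cache_control: str) -> bool:
    """Return True only if none of the common 'do not cache' directives
    appear in a Cache-Control header value (simplified from RFC 7234)."""
    directives = {part.strip().split("=")[0].lower()
                  for part in cache_control.split(",") if part.strip()}
    return directives.isdisjoint({"no-store", "no-cache", "private"})

print(is_cacheable("max-age=3600, public"))  # True  -- a shared cache may store this
print(is_cacheable("no-store"))              # False -- must never be cached
print(is_cacheable("private, max-age=0"))    # False -- off-limits to shared caches
```

Run this against the headers of popular sites and you will find the vast majority fail the check, which is exactly the problem for a generic caching engine.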

As you may have noticed, I have been careful to point out that caching is obsolete on third-party caching engines, not all caching engines. So what gives?

Some of the larger content providers, such as Netflix, will work with larger ISPs to provide large caching servers for their proprietary and encrypted content. This is a win-win for both Netflix and the Last Mile ISP.  There are some restrictions on who Netflix will support with this technology.  The point is that it is Netflix providing the caching engine, for their content only, with their proprietary software, and a third-party engine cannot offer this service.  There may be other content providers providing a similar technology.  However, for now, you can stick a fork in any generic third-party caching server.

Tracking Traffic by DNS


The video rental industry of the early '80s comprised thousands of independent stores. Corner video rental shops were as numerous as today's Starbucks. In the late 1990s, consolidation took over. Blockbuster, with its bright blue canopy lighting up the night sky, swallowed them up like doggy treats. All the small retail outlets were gone. Blockbuster had changed everything; its economy of scale and chain-store familiarity had overrun the small operators.

In a similar fashion to the fledgling video rental industry, circa-1990s Internet content was scattered across the spectrum of the web, ripe for consolidation. I can still remember all of the geeks at my office creating and hosting their own personal websites. They used primitive tools and their own public IPs to weave these sites together. Movies and music were bootlegged and shared across a network of underground file-sharing sites.

Although we do not have one Internet "Blockbuster" today, there has been major consolidation. Instead of all traffic coming from hundreds of thousands of personal or small niche content providers, most of it comes from the big content providers: Google, Amazon, Netflix, Facebook, and Pinterest are all familiar names today.

So far I have reminisced about a nice bit of history, and I suspect you might be wondering how all of this prelude relates to tracking traffic by DNS.

Three years ago we added a DNS (domain name system) lookup to our GUI, more as a novelty than anything else. Tracking traffic by content was always a high priority for our customers, but most techniques had relied on a technology called "deep packet inspection" to identify traffic. This technology was costly, and ineffective on its best day, but it was the only way to chase down nefarious content such as P2P.

Over the last couple of years I noticed the world had changed again. With content consolidated among a small number of large providers, you could now count on some consistency in the domain from which it originated. I would often click on our DNS feature and notice a common name for my data. For example, my YouTube videos resolved to one or two DNS names, and I found the same to be true with my Facebook video. We realized that this consolidation might make DNS tracking useful for our customers, and so we have now put DNS tracking into our current NetEqualizer 8.5 release.
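To give a feel for why consolidation makes this work, here is a toy classifier that buckets resolved hostnames by provider using a handful of domain suffixes. The suffix table is purely illustrative, not NetEqualizer's actual mapping:

```python
# Map DNS domain suffixes to content providers (illustrative table only).
PROVIDERS = {
    "googlevideo.com": "YouTube",
    "fbcdn.net": "Facebook",
    "nflxvideo.net": "Netflix",
    "amazonaws.com": "Amazon/AWS",
}

def provider_for(hostname: str) -> str:
    """Classify a resolved hostname by matching known domain suffixes."""
    host = hostname.rstrip(".").lower()
    for suffix, name in PROVIDERS.items():
        if host == suffix or host.endswith("." + suffix):
            return name
    return "other"

print(provider_for("r3---sn-abc.googlevideo.com"))  # YouTube
print(provider_for("my-own-server.example.org"))    # other
```

A small table like this covers a surprisingly large share of a typical network's traffic, which is precisely the consolidation point made above.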

Another benefit of tracking by domain is that most encrypted data will still report a valid domain. This should help to identify traffic patterns on a network.

It will be interesting to get feedback on this feature as it hits the real world. Stay tuned!

How I Survived a Ransomware Attack


By Art Reisman

About six months ago, I was trying to access a web site when I got the infamous message: "Your Flash Player is out-of-date." I was provided with a link to a site to update my Adobe Flash Player. At the time, I thought nothing of updating my Flash Player, as this had happened perhaps 100 times already. That raises the question of why my perfectly fine and happy Adobe Flash Player constantly needs to be updated. Another story for another day.

In my haste, I clicked the link, promptly received the "Adobe Flash update" for my Mac, and installed it. For all intents and purposes, that was the end of my Mac. This thing took it over, destroying it. It would insidiously let me get started with my daily work and then, within a few minutes, I would receive a barrage of almost constant pop-up messages telling me I had a virus and to call some number for help. Classic ransomware. At the time I did not think Macs were vulnerable to this type of thing, as the only viruses I had contracted before were on my Windows machines, which I had tossed in the scrap pile several years earlier for that very reason.

My solution to this dilemma was simply to reload my Mac from scratch. I was up and running again in about one hour. A hassle, yes; the end of the world, no.

Now you might be wondering: what about all the data, programs, and files I store on my Mac? And to that I answer, what data files? Everything I do is in the Cloud; nothing is stored on my Mac, as I believe there is no reason to store anything locally.

Gmail, Quickbooks, WordPress, photos, documents, and everything else that I use are all stored in the Cloud!

For backup purposes, I periodically e-mail a list of all my important Cloud links to myself. Since they are stored in Gmail, they are always accessible from any computer. Data recovery amounts to nothing more than finding my most recent backup e-mail and clicking on my Cloud links as needed.

Five Things to Know About Wireless Networks


By Art Reisman
CTO, APconnections


Over the last year or so, when the work day is done, I often find myself talking shop with several peers of mine who run wireless networking companies. These are the guys in the trenches. They spend their days installing wireless infrastructure in apartment buildings, hotels, and professional sports arenas, to name just a few. Below I share a few tidbits intended to provide a high-level picture for anybody thinking about building their own wireless network.

There are no experts.

Why? Competition between wireless manufacturers is intense. Yes, the competition is great for innovation, and certainly wireless technology has come a long way in the last 10 years; however, these fast-paced improvements come at a cost. New learning curves for IT partners and numerous patches, combined with differing approaches, make it hard for any one person to become an expert. Anybody who works in this industry usually settles in with one manufacturer, perhaps two; it is moving too fast.

The higher (faster) the frequency, the higher the cost of the network.

Why? As the industry moves to standards that transmit data at higher rates, it must use higher frequencies to achieve the faster speeds. It just so happens that these higher frequencies tend to be less effective at penetrating buildings, walls, and windows. The increase in cost comes from the need to place more and more access points in a building to achieve coverage.

Putting more access points in your building does not always mean better service.

Why? Computers have a bad habit of connecting to one access point and then not letting go, even when the signal gets weak. For example, when you connect to a wireless network with your laptop in the lobby of a hotel and then move across the room, you can end up in a bad spot with respect to your original access point connection. In theory, the right thing to do would be to release your current connection and connect to a different access point. The problem is that most of the installed base of wireless networks does not have any intelligence built in to route you to the best access point; hence even a building with plenty of coverage can have maddening service.

Electromagnetic Radiation Cannot Be Seen

So what? The issue here is that there are all kinds of scenarios where the wireless signals bouncing around the environment can destroy service. Think of a highway full of invisible cars traveling in any direction they want. When a wireless network is installed, the contractor in charge does what is called a site survey. This involves special equipment that measures the electromagnetic waves in an area and helps them plan how many wireless access points to install, and where; but once installed, anything can happen. Private personal hotspots, devices with electric motors, or a change in metal furniture configuration can all destabilize an area, and thus service can degrade for reasons that nobody can detect.

The more people connected, the slower their speed.

Why? Wireless access points use a technique called TDM (Time Division Multiplexing): available bandwidth is carved up into little time slots. When there is only one user connected to an access point, that user gets all the bandwidth; when there are two users connected, they each get half the time slots. So an access point that advertises 100-megabit speeds can only deliver at best 10 megabits when 10 people are connected to it.
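The airtime-sharing arithmetic can be written down directly. This idealized sketch assumes equal time slots and ignores protocol overhead and retransmissions, so real-world numbers will be lower:

```python
def per_user_mbps(ap_capacity_mbps: float, users: int) -> float:
    """Best-case per-user throughput when an access point divides its
    airtime equally among connected users (idealized TDM model)."""
    if users < 1:
        raise ValueError("need at least one connected user")
    return ap_capacity_mbps / users

print(per_user_mbps(100, 1))   # 100.0 -- a lone user gets everything
print(per_user_mbps(100, 10))  # 10.0  -- ten users each get a tenth
```

In practice, slow or distant clients hog airtime by transmitting at lower rates, so the real per-user number is usually worse than this even split suggests.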

Related Article

Wireless is nice but wired networks are here to stay

Seven Tips To Improve Performance of your Wireless Lan

Proving The Identity of The DNC Hacker Not Likely



By Art Reisman

CTO, APconnections

Inspired by the recent accusations regarding the alleged Russian hacking of the DNC e-mail servers, I ask the question: is it really possible for our intelligence agencies to say with confidence exactly who hacked those servers? I honestly don't think so. To back up my opinion, I have decided to take our faithful blog readers through the mind and actions of a professional hacker intent on breaking into a corporate e-mail server without leaving a trace. From there you can draw your own conclusions.

My hacking scenario below is based on actual techniques that our own ethical hackers use to test security at corporations. These companies contract with us to deliberately break into their IT systems, and yes, sometimes we do break in.

First we will follow our hacker through the process of a typical deliberate illegal break-in, and then we will analyze the daunting task a forensic expert must deal with after the fact.

 

Here we go….

Phase I

  • First I need a platform for the first phase of my attack. I want to find a computer with no formal ties to my identity. Just as the public telephone booths of the '70s and '80s were used for calling in bomb threats, the computers in your public libraries can easily conceal my identity.
  • To further cover my trail, I bring my own flash memory with me to the library; it contains a software program commonly referred to as a "BOT". This allows me to move programs onto the library computer without doing something like logging into my personal e-mail, which would leave a record of me being there. In this case my BOT specializes in crawling the Internet looking for consumer-grade desktop computers to break into.
  • My BOT searches the Internet at random, looking for computers which are unprotected. It will hit several thousand computers an hour for as long as I let it run.
  • I don't want to let my BOT run too long from the library, because all the outbound activity it generates may be detected as a virus by an upstream ISP. The good news in my favor is that BOTs, both friendly and malicious, are very common. At any time of the day there are millions of them running all over the world.

Note, running a BOT in itself is not a crime; it is just bad etiquette and annoying. It is extremely unlikely that anybody would actually be able to see that I am trying to hack into computers (yes, this is a crime) with my BOT, because that would take very specialized equipment, and since I chose my library at random, the chances of drawing attention at this stage are minuscule. Typically a law enforcement agency must obtain a warrant to set up their detection equipment. All the upstream provider would sense is an unusually high rate of traffic coming out of the library.

  • Once my BOT has found some unprotected home computers and I have their login credentials, I am ready for Phase 2. I save off their IP addresses and credentials, delete the BOT from the computer in the library, and leave, never to return.

You might be wondering how a BOT gets access to home computers. Many are still out there running very old versions of Windows or Linux and have generic passwords like "password". The BOT attempts to log in through a well-known service such as SSH (remote login) and guesses the password. The BOT may run into 1,000 dead ends or more before cracking a single computer. Just like a mindless robot should, it works tirelessly without complaint.

Phase II

  • I again go to the library and set up shop. Only this time, instead of a BOT, I come armed with a phishing scam e-mail on my flash drive. From a computer in the library I remotely log into one of the home computers whose credentials I attained in Phase 1.
  • I set up a program that will send e-mails with my trojan horse content from the home computer to people who work at the DNC.

If I am smart, I do a little research on the backgrounds of the people I am sending to, so as to make the e-mails as authentic as possible. Most consumers have seen the obvious scams where you get some ridiculous out-of-context e-mail with a link to open some file you never asked for. That works for mass e-mailing to the public, hoping to find a few old ladies or the computer illiterate, but I would assume that people who work at the DNC would just think it is a spam e-mail and delete it. Hence, they get something a little more personalized.

How do I find the targeted employee e-mail addresses at the DNC? That is a bit easier; many times they are published on a web site, or I simply guess at employee e-mail addresses, such as hclinton@dnc.com.

  • If any of the targeted e-mails I have sent to a DNC employee are opened, the employee will, unbeknownst to them, be installing a keystroke logger that captures everything they type. In this way, when they log into the DNC e-mail server, I also get a login and access to all their e-mails.

How do I ensure my victim does not suspect they have been hacked? Stealth, stealth, stealth. All of my hacking tools, such as my keystroke logger, have very small, inconspicuous footprints. I am not trying to crash or destroy anything at the DNC. The person or persons whose systems I gain entry through most likely will never know. Also, I will only be using the tools for a very short period of time, and I will delete them on my way out.

  • Getting e-mail access. Once the keystroke logger is in place, I have it report back to another one of my hacked personal computers. In this way, the information I am collecting will sit on a home computer with no ties back to me. When I go to collect this information, I again go to a library with my flash card and download the keystroke information; eventually I load all the e-mails I can get directly onto my flash drive while in the library. I then take them to the Kremlin (or whoever I work for) and hand over the flash drives containing tens of thousands of e-mails for offline analysis.

 

Debunking the Russian Hacking Theory

The FBI purports to have found a "Russian signature file" on the DNC server. What does that actually prove?

  •  It’s not like the hacking community has dialects associated with their hacking tools.  Although  If I was a Chinese hacker I might make sure I left a path pointing back at Russia  , why  not ? . If you recall I deleted my hacking tools on the way out, and yes I know how to scrub them so there is no latent foot print on the disk drive
  • As you can infer from my hacking example , I can hack pretty much autonomously from anywhere in the US or the world for that matter, using a series of intermediaries and without ever residing at permanent location.
  • Even if the FBI follows logs of where historical access into the DNC  has come from, the trail is going to lead to some Grandma’s computer at some random location. Remember all my contacts directly into the DNC were from my Hijacked Grandma computers. Perhaps that is enough to draw a conclusion so the FBI can  blame some poor Russian Grandma.  As the  real hacker all the better for me, let Grandma take the diversion, somebody else is going to get the blame.
  • Now let’s suppose the FBI is really on the ball and somehow figures that Grandma’s computer was just a shill hijacked by me. So they get a warrant and raid Grandma’s computer and they find a trail .  This  path is going to lead them back to the Library where I sat perhaps 3 months ago.
  • We can go another step farther, suppose the library had video surveillance and they caught me coming and going , then just perhaps they could make an ID match

By now you get the idea: assuming the hacker was a foreign-sponsored professional and was not caught in the act, the trail is going to be impossible to draw any definite conclusions from.

To see another detailed account of what it takes to hack into a server, please visit our 2011 article "Confessions of a Hacker".

Why is Your Internet Connection So Slow?


By Art Reisman

CTO – APconnections

Have you ever been on a shared wireless network, in a hotel or business, and noticed how your connection can go from reasonable to completely unusable in a matter of seconds, and then cycle back to usable?

The reason for this is that once a network hits its bandwidth allocation, the provider's router usually just starts dropping the excess packets. Intuitively, when your router is dropping packets, one would assume that the perceived slowdown, per user, would be just a gradual shift slower.

What happens in reality is far worse…

1) Distant users get spiraling slower responses.

Martin Roth, a colleague of ours who founded one of the top performance analysis companies in the world, provided this explanation:

“Any device which is dropping packets “favors” streams with the shortest round trip time, because (according to the TCP protocol) the time after which a lost packet is recovered is depending on the round trip time. So when a company in Copenhagen/Denmark has a line to Australia and a line to Germany on the same internet router, and this router is discarding packets because of bandwidth limits/policing, the stream to Australia is getting much bigger “holes” per lost packet (up to 3 seconds) than the stream to Germany or another office in Copenhagen. This effect then increases when the TCP window size to Australia is reduced (because of the retransmissions), so there are fewer bytes per round trip and more holes between two round trips.”

In the screen shot above (courtesy of avenida.dk), the Bandwidth limit is 10 Mbit (= 1 Mbyte/s net traffic), so everything on top of that will get discarded. The problem is not the discards, this is standard TCP behaviour, but the connections that are forcefully closed because of the discards. After the peak in closed connections, there is a “dip” in bandwidth utilization, because we cut too many connections.

2) Once you hit a congestion point, where your router is forced to drop packets, overall congestion actually gets worse before it gets better.

When applications don't get a response due to a dropped packet, instead of backing off and waiting, they tend to start sending retries, and this is why you may have noticed prolonged periods (30 seconds or more) of no service on a congested network. We call this the rolling brownout. Think of this situation as a sort of doubling down on bandwidth at the moment of congestion. Instead of easing into a full network and lightly bumping its head, every device demanding bandwidth ramps up its requests at precisely the moment when your network is congested, resulting in an explosion of packet dropping until everybody finally gives up.
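The quote above about round-trip times can be made concrete: TCP's retransmission timeout starts near the measured RTT and roughly doubles on each successive failure, so a long-RTT stream spends far longer stalled per burst of drops. A simplified sketch (real stacks follow RFC 6298, with minimum and maximum RTO clamps and RTT-variance terms this toy ignores):

```python
def stall_seconds(rtt: float, drops_in_a_row: int, max_rto: float = 60.0) -> float:
    """Approximate total time a TCP sender stalls when the same segment is
    lost drops_in_a_row times in a row: the retransmission timeout starts
    near the RTT and doubles per failure (simplified from RFC 6298)."""
    rto, total = rtt, 0.0
    for _ in range(drops_in_a_row):
        total += rto            # wait out the current timeout
        rto = min(rto * 2, max_rto)  # exponential backoff, capped
    return total

print(round(stall_seconds(0.020, 3), 2))  # short RTT (e.g. Copenhagen-Germany): 0.14
print(round(stall_seconds(0.300, 3), 2))  # long RTT (e.g. Copenhagen-Australia): 2.1
```

The same three drops cost the long-haul stream roughly fifteen times as much dead air, which is why distant users feel congestion first and hardest.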

How do you remedy outages caused by Congestion?

We have written extensively about solutions to prevent bottlenecks. Here is a quick summary of possible solutions:

1) The most obvious being to increase the size of your link.

2) Enforce rate limits per user. The problem with this solution is that you can waste a good bit of bandwidth if the network is lightly loaded.

3) Use something more sophisticated like a NetEqualizer, a device designed specifically to counter the effects of congestion.
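The per-user rate limits in option 2 are classically implemented as a token bucket. Here is a minimal sketch, assuming a fixed fill rate and burst allowance per user; the 1 Mbps and 100-kilobit numbers are illustrative:

```python
class TokenBucket:
    """Per-user rate limiter: tokens refill at rate_bps, each packet spends
    tokens equal to its size in bits, and packets are dropped (or queued)
    when the bucket runs dry."""
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # steady-state fill rate, bits/second
        self.capacity = burst_bits  # maximum burst the bucket can hold
        self.tokens = burst_bits    # start full
        self.last = 0.0             # timestamp of the last check

    def allow(self, packet_bits: float, now: float) -> bool:
        # Refill according to elapsed time, clamped to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

bucket = TokenBucket(rate_bps=1_000_000, burst_bits=100_000)  # 1 Mbps, 100 kb burst
print(bucket.allow(100_000, now=0.0))   # True  -- burst allowance covers it
print(bucket.allow(100_000, now=0.01))  # False -- only ~10 kb refilled in 10 ms
print(bucket.allow(100_000, now=0.1))   # True  -- bucket refilled by 100 ms
```

This also shows the weakness noted in option 2: a user's bucket caps them at the configured rate even when the rest of the pipe is sitting idle.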

From Martin Roth of Avenida.dk

“With NetEqualizer we may get the same number of discards, but we get fewer connections closed, because we “kick” the few connections with the high bandwidth, so we do not get the “dip” in bandwidth utilization.

The graphs (above) were recorded using 1 second intervals, so here you can see the bandwidth is reached. In a standard SolarWinds graph with 10 minute averages the bandwidth utilization would be under 20% and the customer would not know they are hitting the limit.”
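Martin’s point about averaging is easy to reproduce. In this sketch (numbers invented), a 10 Mbps link is completely pinned for one minute of a 10-minute window, yet the 10-minute average reports utilization under 20 percent:

```python
# One bandwidth sample per second over a 10-minute window (600 samples):
# the 10 Mbps link is saturated for 60 s, then nearly idle at 1 Mbps.
samples = [10.0] * 60 + [1.0] * 540

average = sum(samples) / len(samples)
peak = max(samples)
print(f"peak {peak} Mbps, 10-minute average {average:.1f} Mbps "
      f"({average / 10:.0%} utilization)")   # the saturation disappears
```

A full minute of hard congestion, with all the dropped packets and closed connections that come with it, is invisible in the averaged graph.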

———————————————————————-

The excerpt below is from a reseller who had been struggling with congestion issues at a hotel; he tried basic rate limits on his router first. Rate limits will buy you some time, but on an oversold network you can still hit the congestion point, and for that you need a smarter device.

“…NetEq delivered a 500% gain in available bandwidth by eliminating rate caps, possible through a mix of connection limits and Equalization.  Both are necessary.  The hotel went from 750 Kbit max per accesspoint (entire hotel lobby fights over 750Kbit; divided between who knows how many users) to 7Mbit or more available bandwidth for single users with heavy needs.

The ability to fully load the pipe, then reach out and instantly take back up to a third of it for an immediate need like a speedtest was also really eye-opening.  The pipe is already maxed out, but there is always a third of it that can be immediately cleared in time to perform something new and high-priority like a speed test.”
 
Rate Caps: nobody ever gets a fast Internet connection.
Equalized: the pipe stays as full as possible, yet anybody with a business-class need gets served a major portion of the pipe on demand. “
– Ben Whitaker – jetsetnetworks.com

Are those rate limits on your router good enough?

How to Speed Up Windows/Apple Updates


I discovered a problem with my download speed while trying to recover my unresponsive iPad.  Apple’s solution required me to attach my iPad to my Mac, and then to download a new iOS image from the Internet, through the Mac and onto the iPad.

Speed should have been no problem with my business-class, 20-megabit Internet connection from a well-known provider, right?

So I thought.

When I started the iOS download, the little progress timer immediately registered 23 hours to go. Wow, that is a long time to wait, and I needed my iPad for a trip the next morning.  I tried a couple of speed tests in parallel, and everything looked normal.  The question remained: where was the bottleneck on this iOS download?  Was it on Apple’s end, or a problem with my provider?

Over the years I have learned that iOS and Windows updates are the bane of many Internet providers, who are constantly looking for ways to keep them from gumming up their exchange points. Providers try to identify update traffic, either by the source IP or, if that does not work, by examining the download data itself to make a determination. In either case, once they have tagged it as an update, they may choose to slow it down to keep their exchange points clear during peak traffic hours.
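A crude version of the source-IP tagging described above can be sketched with Python’s `ipaddress` module. The prefixes below are made up for illustration (they are reserved documentation ranges); real update CDN ranges change constantly and are not published as a tidy list.

```python
import ipaddress

# Hypothetical CDN blocks an ISP might associate with OS updates.
# Both prefixes are invented for this sketch.
UPDATE_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in "Apple CDN" range
    ipaddress.ip_network("198.51.100.0/24"),  # stand-in "Windows Update" range
]

def looks_like_update(remote_ip: str) -> bool:
    """Tag a flow as update traffic if the remote end is in a known block."""
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in UPDATE_PREFIXES)

print(looks_like_update("203.0.113.7"))   # tagged: a candidate for throttling
print(looks_like_update("8.8.8.8"))       # not tagged: left alone
```

This also shows why a VPN defeats the scheme: once the traffic is tunneled, the remote IP the provider sees belongs to the VPN endpoint, not the update CDN.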

To thwart their shaping and get my speed back up near the promised 20 megabits, I simply had to hide my intentions. This can be accomplished using any number of consumer-grade VPN applications.

I turned on IPVanish, which automatically encrypts the data and hides the original source of my iOS update. Once up and running with my VPN, my iOS update loaded in 23 minutes, a 60-fold speed increase over my previous attempt.

If you would like to read more, here are a couple of other posts about ISPs throttling data:

There is something rotten in the state of online streaming.

How to get access to blocked Internet Sites.

Good luck!

Bandwidth Shaping Shake-Up: Is Your Packet Shaper Obsolete?


If you went to sleep in 2005 and woke up 10 years later you would likely be surprised by some dramatic changes in technology.

  • Smart cars that drive themselves are almost a reality
  • The desktop PC is no longer a consumer product
  • Wind farms now line the highways of rural America
  • Layer 7 shaping technology is clinging to life, crashing the financials of several companies that bet the house on it

What happened to Layer 7 and packet shaping?

In the early 2000s, all the rage in traffic classification was the ability to put different types of bandwidth traffic into labeled buckets and assign a priority to each. Akin to rating your choices on a tapas menu, network administrators enjoyed an extensive list of traffic types: YouTube, Citrix, news feeds; the list was limited only by the price and quality of the bandwidth shaper. The more expensive the traffic shaper, the more choices you had.

Starting in 2005 and continuing to this day, several forces have worked against the Layer 7 paradigm.

  • The price of bulk bandwidth went into free fall, much faster than the relatively fixed cost of a bandwidth shaper. The business case for buying a shaper to conserve bandwidth became much tighter, and some companies that were riding high saw their stock prices collapse.
  • Internet traffic became invisible and impossible to identify with the advent of encryption. A Layer 7 traffic classifier cannot see inside HTTPS or a VPN tunnel, and so it essentially becomes a big, expensive albatross with little value as the share of encrypted traffic grows.
  • The FCC’s Net Neutrality ruling further dampened a portion of the Layer 7 market. For years ISPs had used Layer 7 technology to give preferential treatment to different types of traffic.
  • Cloud-based services use less complex architectures. Companies can consolidate on one simplified central bandwidth shaper, whereas before they might have had several across their various WAN links and network segments.

So where does this leave the bandwidth shaping market?

There is still some demand for Layer 7-style shapers, particularly in countries like China, where they attempt to control everything. In Europe and the US, however, the trend is toward more basic controls that do not violate the FCC rule, cost less, and use some form of intelligent fairness rules, such as:

  • Quotas, e.g. your cell phone data plan.
  • Fairness-based heuristics (Equalizing), which are gaining momentum: a lower price point, and congestion prevention that does not violate the FCC ruling.
  • Basic rate limits, e.g. your wired ISP’s 20-megabit plan, often implemented on a basic router rather than a specialized shaping device.
  • No shaping at all, where pipes are so large there is no need to ration bandwidth.
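The basic rate limits in the list above are commonly implemented as a token bucket. Here is a minimal sketch (rates invented for illustration); it also shows the weakness of a standing cap, which keeps throttling even when the rest of the link is idle.

```python
class TokenBucket:
    """Minimal token bucket: refill `rate` tokens/second, hold at most `burst`."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def allow(self, now, cost=1.0):
        # refill for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

tb = TokenBucket(rate=1.0, burst=2)            # 1 unit/second, burst of 2
print(tb.allow(now=0.0), tb.allow(now=0.0),    # burst spent: True True
      tb.allow(now=0.0),                       # bucket empty: False
      tb.allow(now=2.0))                       # refilled after 2 s: True
```

The third call is refused no matter how empty the rest of the pipe is, which is exactly why the fairness-based approaches above exist.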

Will Shaping be around in 10 years?

Yes, consumers and businesses will always find ways to use all their bandwidth and more.

Will price points for bandwidth continue to drop?

I am going to go against the grain here and say bandwidth prices will flatten out in the near future. Prices slid over the last decade for several reasons that are no longer in play.

The biggest driver of price drops was the wide adoption of wave division multiplexing (WDM) on carrier lines from 2005 to the present. There was already a good bit of fiber in the ground, and the WDM innovation caused a huge jump in capacity at very little additional cost to providers.

The other factor was a major worldwide recession, during which business demand was slack.

Lastly, there are no new large carriers coming online. Competition and price wars will ease as suppliers try to increase profits.


NetEqualizer is Net Neutral, Packet Shaping is Not


The NetEqualizer has long been considered a net neutral appliance. Given the new net neutrality FCC regulations, upheld yesterday, I thought it would be a good time to reiterate how the NetEqualizer shaping techniques are compliant with the FCC ruling.

Here is the basic FCC rule that applies to bandwidth shaping and preferential treatment:

The FCC created a separate rule that prohibits broadband providers from slowing down specific applications or services, a practice known as throttling. More to the point, the FCC said providers can’t single out Internet traffic based on who sends it, where it’s going, what the content happens to be or whether that content competes with the provider’s business.

I’ll break this down as it relates to the NetEqualizer.

1. The rule “prohibits broadband providers from slowing down specific applications or services”.

The NetEqualizer makes shaping decisions solely based on instantaneous usage, and only when a link is congested. It does not single out a particular application or service for throttling. The NetEqualizer does not classify traffic; instead, it looks at how the traffic behaves in order to make a shaping decision. The key to remember here is that the NetEqualizer only shapes when a link is congested; without it in place, the link would drop packets, which would cause a serious outage.

2.  The FCC said “providers can’t single out Internet traffic based on who sends it, where it’s going”.

The NetEqualizer is completely agnostic as to who is sending the traffic and where it is going. In fact, any rate limiting that we provide is independent of the traffic on the network, and is used solely to partition a shared resource amongst a set of internal users, whether they be buildings, groups, or access points.

I hope we have finally seen an end to application-based shaping (Packet Shaping) on the Internet.  I see this ruling being upheld as the dawning of a new era.

Will Fixed Wireless Ever Stand Up to Cable Internet?



By Art Reisman
CTO http://www.netequalizer.com


Last night I had a dream, a dream where I was free from relying on my cable operator for my Internet service. After all, the latest wireless technology can beam an Internet signal into your house at speeds approaching 600 megabits, right?

My sources tell me some wireless operators are planning to compete head to head with entrenched cable operators. This new tactic is a bold experiment, considering that most legacy WISP operators normally offer service on the outskirts of town, areas where traditional cable and DSL service is spotty or non-existent. Going for the throat of the entrenched cable operators in the urban corridor, beaming Internet into homes at competitive prices and speeds, is a daring undertaking. Is it possible? Let’s look at some of the obstacles and some of the advantages.

In the wireless model, a provider lights up a fixed tower with Internet service and beams a signal from the tower into each home it services.

  • Unlike cable, where there is a fixed physical wire to each home, the wireless operator relies on a line-of-sight signal from tower to home. The tower can have as many as four transmitters, each capable of 600 megabits. The kicker is that to turn a profit you must share the 600 megabits from each transmitter among as many users as possible, so each user gets only a fraction of the bandwidth. For example, to make the business case work you might need 100 users (homes) on one transmitter, which breaks down to 6 megabits per customer.
  • Each tower needs a physical connection back to a Tier 1 provider such as Level 3, a cost duplicated at every tower. A cable operator has a more concentrated NOC and requires far fewer connections to its Tier 1 provider.
  • Radio interference is a problem, so a tower may not perform consistently at 600 megabits; when there is interference, speeds are backed down.
  • Cable operators can put 100 megabits or more down each wire, direct to the customer’s home, so in a speed war on the last-mile connection wireless is still not competitive.
  • Towers in this speed range must have line of sight to each home, so they must be high enough to clear all trees and buildings. This creates logistical problems when placing one tower for every 200 homes.
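The oversubscription arithmetic in the first bullet generalizes to a one-line function. The interference factor is my own addition, to model the degraded-radio case from the third bullet:

```python
def per_user_mbps(transmitter_mbps, homes, interference_factor=1.0):
    """Average bandwidth per home on one tower transmitter."""
    return transmitter_mbps * interference_factor / homes

print(per_user_mbps(600, 100))        # the article's case: 6.0 Mbps per home
print(per_user_mbps(600, 100, 0.5))   # same tower backed down by interference: 3.0
```

Note how quickly the headline 600 megabits shrinks once it is shared and the radio link degrades.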

On the flip side, I would gladly welcome a solid 6-megabit feed from a local wireless provider.

Speed is not everything, as long as the service is adequate for the basics: Facebook, e-mail, etc. Here are the areas where a wireless operator can excel and win over customers:

  • Good, clean, honest service
  • No back-door price hikes
  • Local support, not an impersonal offshore call center
  • Local ownership, which customers tend to appreciate


Why Is IT Security FUD So Prevalent?



By Art Reisman

CTO, APconnections
www.netequalizer.com

I just read an article by Rafal Los titled “Abandon FUD, Scare Tactics and Marketing Hype.”

In summary, he calls out the vendor sales presentations whose slides cite statistics on why you should be scared. Here is the excerpt:

I want you to take out the last slide deck you either made, received, or reviewed on the topic of security.  Now open it up and tell me if it fits the following mold:

  • [Slides 1~4] – some slides telling you how horrible the state of information security is, how hackers are hacking everything, and probably at least 1-2 “clippings” of articles in recent media.
  • [Slides 4~7] – some slides telling you how you need to “act now,” “get compliant,” “protect your IP,” “protect your customer data,” or other catch phrases which fall into the category of “well, duh.”
  • [Slides 7~50+] – slides telling you how if you buy this product/service you will be protected from the threat du’jour and rainbows will appear as unicorns sing your praises.

Here’s the thing… did you find the slide deck you’re looking at more or less fits the above pattern? Experience tells me the odds of you nodding in agreement right now is fairly high.

And then he blasts vendors in general with his disgust:

Ask yourself, if you write slide decks like this one I just described – who does that actually serve?  Are you expecting an executive, security leader, or practitioner to read your slides and suddenly have a “Eureka!” moment in which they realize hackers are out to get them and they should quickly act? 

I can certainly understand his frustration.  His rant reminded me of people complaining about crappy airline service and then continuing to fly that airline because it was cheapest.

Obviously, FUD is around because there are still a good number of companies that make FUD-driven purchases, just as there are a good number of people who fly airlines with crappy service. Although it is not likely you can effect a 180-degree industry turn, you can certainly make a start by taking a stand.

If you get the chance, try this the next time a vendor offers you a salivating, FUD-driven slide presentation.

Simply don’t talk to the sales team. Sales teams are a thin veneer over a product’s warts. Request a meeting with the engineering or test team of the company. This may not be possible if you are a small IT shop purchasing from Cisco, but remember: you are the customer, you pay their salaries, and this is a reasonable request.

I did this a couple of times when I was the lead architect for an AT&T product line. Yes, I had some clout due to the size of AT&T and the money involved in the decision. Vendors were always trying to comp me hard with free tickets to sporting events, and yet my only request was this: “I want to visit your facility and talk directly to the engineering test team.” After days of squirming and alternative venues offered, they granted my request. When the day finally came, it was not the impromptu sit-down with the engineering team I had hoped for. It felt more like visiting North Korea. Two VPs escorted me into their test facility, probably the first time they had ever set foot in there, and as I tried to ask questions directly of their test team, the VPs almost peed their pants. After a while the VPs settled down, once they realized I was not looking to ruin them; I just wanted the truth about how their product performed.

FUD is much easier to sell than the product.


Seven Must-Know Network Troubleshooting Tips



By Art Reisman

CTO, APconnections
www.netequalizer.com

To get started you’ll need to get ahold of two key software tools: 1) a Ping tool and 2) a network scan tool, both of which I describe in more detail below. And for advanced analysis (experts only), I will then show you how to use a bandwidth shaper/sniffer if needed.

Ping Tool

Ping is a great tool for measuring your network responsiveness (in milliseconds) by timing the response from a typical website. If you do not already know how to use ping on your device, there are hundreds of references; simply Google “how to use ping” on your favorite device or computer to learn how.

For example, I found these instructions for my Mac; there are similar instructions for Windows, iPhone, Linux, Android, etc.

  1. Open Network Utility (located inside Applications > Utilities).
  2. Click Ping.
  3. Fill out the “Enter the network address that you want to ping” field. You can enter an IP address or a hostname. For example, enter www.bbc.co.uk to test the ping to that website.
  4. Click Ping.

Network Scan Tool

There are a variety of network scan tools/apps available for just about any consumer device or computer. The decent ones cost a few dollars, but I have never regretted purchasing one. I use mine often for very common home and business network issues, as I will detail in the tips below. Be sure to use the term “network scan tool” when searching, so you do not get confusing results about unrelated document scanning tools.

Once you have your scan tool installed, test it out by selecting Network Scan. Here is the output from my Mac scan tool; I will be referencing this output later in the article.

Network Scan Output


Tip #1: Using Ping to see if you are really connected to your Network

I like to open a window on my laptop and keep ping going all day. It looks like this:

yahoo.com Ping Output


Amazingly, seemingly on cue, I lost connectivity to my Internet while running the tool for the screen capture above, and no, it was not planned or contrived. I kicked off my ping by contacting www.yahoo.com (type in “ping www.yahoo.com”), a public website. You can see that my round-trip time was around 40 milliseconds before it went dead. Ping results under 100 milliseconds are normal.


Tip #2: How to Deal with Slow Ping Times

In the case above, my Internet connection simply went dead; it came back a minute or so later, and the outage was most likely not related to anything on my local network.

If you start to see missed pings, or ping times above 100 milliseconds, it is most likely due to congestion on your network. To improve your response times, try turning off other devices/applications and see if that helps. Even your TV’s video stream can suck down a good chunk of bandwidth.

Note: Always ping two public websites before jumping to any conclusions. It is not likely, but occasionally even a big site like Yahoo will have sporadic response times.

Note: If you have a satellite link, slow and missed pings are normal, just a fact of life.
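If you would rather script these checks than watch a terminal all day, one approach is to time a TCP connection setup, which behaves like a poor man’s ping and works even where ICMP is blocked. The local listener below merely stands in for a public site so the sketch is self-contained:

```python
import socket
import time

def rtt_ms(host, port, trials=3):
    """Estimate round-trip time (ms) by timing a TCP connection setup."""
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established = one handshake round trip done
        times.append((time.perf_counter() - start) * 1000)
    return min(times)

# Throwaway local listener standing in for a public website.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(8)
port = server.getsockname()[1]

latency = rtt_ms("127.0.0.1", port)
print(f"{latency:.1f} ms -> {'normal' if latency < 100 else 'possible congestion'}")
```

Point `rtt_ms` at a real host and port 443 instead of the local stand-in, and the 100-millisecond rule of thumb from Tip #1 applies directly.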


Tip #3: If you can’t ping a public site, try pinging your local Wireless Router

To ping your local router, all you need is its IP address. On almost all networks you can guess it quite easily by looking up the IP address of your computer and replacing the last number with a 1.

For example, on my computer I click the little Apple icon, then System Preferences, and then Network, and I get this screen. You can see in the Status area that my IP address is 192.168.1.131.

Finding my IP address output


The trick to finding your router’s IP address is to replace the last number of your computer’s IP address with a 1. So in my case, I start with 192.168.1.131 and swap the 131 for a 1. I then ping 192.168.1.1 by typing in “ping 192.168.1.1”. A ping to my router looks like this:

Router Ping Output


In the case above I was able to ping my local router and get a response. So what does this tell me? If I can ping my local wireless router but I can’t ping Yahoo or any other public site, the problem is most likely with my Internet provider. To rule out problems with your wireless router or cables, I recommend rebooting your wireless router and checking the cables coming into it as a next step.
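The last-octet trick is easy to capture in a small helper. Keep in mind this is only a heuristic for the common /24 home network; `netstat -nr` on a Mac or `ip route` on Linux will report the real default gateway when the guess is wrong.

```python
import ipaddress

def guess_router_ip(my_ip: str) -> str:
    """Heuristic: assume a /24 network with the router at the .1 address."""
    network = ipaddress.ip_interface(my_ip + "/24").network
    return str(network.network_address + 1)

print(guess_router_ip("192.168.1.131"))   # -> 192.168.1.1, as in the example
print(guess_router_ip("10.0.0.42"))       # -> 10.0.0.1
```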

In one case of failure, I actually saw a tree limb on the cable running from the utility pole to the house. When I called my Internet provider, I was able to relay this information, which saved a good bit of time in resolving the issue.


Tip #4: Look for Duplicate IP Addresses

Last week, when I powered up my laptop, I got an error message saying that some other device had my IP address, and I found I was unable to attach to the wireless router. What a strange message! Fortunately, with my scan tool I could see all the other devices on my network. Although I do not know exactly how I got into this situation, I was quickly able to find the device with the duplicate IP address and power-cycle it, which resolved the problem.


Tip #5: Look for Rogue Devices

If you never give out the security code to your wireless router, you should not have any unwanted visitors on your network. To be certain, I again turn to the scan tool. From my scan output in the image above (titled “Network Scan Output” near the top of this post), you can see that there are about 15 devices attached to my network. I can account for all of them, so for now I have no intruders.
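Accounting for a dozen-plus devices by eye gets tedious, so one option is to keep a list of the hardware (MAC) addresses you recognize and diff it against each scan. The addresses below are invented for illustration:

```python
# MAC addresses you have identified and trust (all invented here).
known_devices = {
    "a4:5e:60:01:02:03": "my laptop",
    "b8:27:eb:aa:bb:cc": "thermostat",
}

def unaccounted_for(scanned_macs):
    """Return MACs seen in a scan that are not on the known list."""
    return sorted(mac for mac in scanned_macs if mac not in known_devices)

scan = ["a4:5e:60:01:02:03", "de:ad:be:ef:00:01"]
print(unaccounted_for(scan))   # any survivor is worth investigating
```

An empty result means everyone on the network is accounted for; anything left over is a candidate rogue device.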


Tip #6: Maybe it is just Mischief

There was a time when I left my wireless router wide open; I live in a fairly rural neighborhood and was just being complacent. I was surprised to see that one of my neighbors was on my access point, but which one?

I did some profiling. The neighbor to my west is a judge with his own network, so probably not him. Across the street, a retired librarian, so probably not her. That left the neighbor to my southwest, kitty-corner: a house with all kinds of extended family coming and going, and no network router of their own, at least none that I could detect. I had my suspect, and I could also assume they never suspected I was aware of them.

The proper thing to do would have been to block them and lock my wireless router. But since I wanted to have a little fun, I plugged in my bandwidth controller and set their bandwidth down to a fraction of a megabit. This made their connection painfully, dreadfully slow: almost unusable, but with a ray of hope. After a week, he went away, and then I completely blocked him (just in case he decided to come back!).


Tip #7: Advanced Analysis with a Bandwidth Shaper/Sniffer

If the ping tool and the scan tool don’t shed any light on an issue, the next step is to use a more advanced packet sniffer. Usually this requires a separate piece of equipment inserted into your network between your router and your network users. I use my NetEqualizer because I have several of them lying around the house.

Oftentimes the problem with your network is some rogue application consuming all of the resources. This can take the form of consuming total bandwidth, or of overwhelming your wireless router with packets (there are many viruses designed to do just that).

The image below is a live snapshot depicting bandwidth utilization on a business network.

That top number, circled in red, is a YouTube video consuming about 3 megabits of bandwidth. Directly underneath it are a couple of cloud service applications from Amazon, consuming one tenth of what the YouTube video does. On some lower-cost Internet links, one YouTube stream can make the service unusable for other applications.

With my sniffer I can also see the total packets consumed by each device, which matters on many networks when somebody opens an email with a virus. Without a sniffer it is very hard to track down the culprit.
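If your sniffer can export per-packet records, finding the culprit is a simple tally. The records below are simulated, with one device flooding the network the way a virus-infected machine might:

```python
from collections import Counter

# Simulated sniffer export: (source_ip, packet_size_bytes) per packet.
packets = ([("192.168.1.50", 1500)] * 400    # the flooding device
           + [("192.168.1.7", 200)] * 50
           + [("192.168.1.9", 600)] * 20)

packets_by_device = Counter(src for src, _ in packets)
top_talker, count = packets_by_device.most_common(1)[0]
print(top_talker, count)   # the device to go inspect first
```

The same tally over bytes instead of packet counts would surface a bandwidth hog like the YouTube stream above.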

I hope these tips help you to troubleshoot your network.  Please let us know if you have any questions or tips that you would like to contribute.

Network Redundancy Anxiety Needs a Redirect


When vandals sliced a fiber-optic cable in the Arizona desert last month, they did more than time-warp thousands of people back to an era before computers, credit cards or even phones. They exposed a glaring vulnerability in the U.S. Internet infrastructure: no backup systems in many places.

A few years ago I wrote an article about the top five causes of disruption of Internet service. At the time, our number two cause was:

2) Failed Link to Provider

And our number one cause was:

1) Congestion

A few things have changed since 2010. First off, congestion is on the decline; although still a concern, it is less of a problem now that bandwidth prices have fallen and most businesses have larger circuits.

In our opinion, based on our experience, failed links from your provider are now the number one threat, as pointed out in this Huffington Post article. (The first paragraph of this post is an excerpt from that article.) Not only are provider outages common, they can also take days to remedy in some cases.

As a network equipment OEM, the biggest failure concern we hear from our customers involves the components in their network: routers, firewalls, switches, bandwidth shapers. Customers want redundancy built into these devices. That’s not to say these devices are flawless, but in general, once they are up and running in your utility closet, they rarely fail spontaneously.

On the other hand…

The link into your building, and everything upstream, relies on several to perhaps thousands of miles of buried cable, usually run along road rights-of-way. These cables can be cut by any idiot with a backhoe, or taken out by a lightning strike on a nearby power pole.

My business-class Internet is up most of the time, but it does go out for a few hours at least twice a year. I have alternatives, so it is a minor hassle to switch over.

Moral of the story: the next time you ask about the reliability of an equipment component in your network, I suggest you also ask the same question of your upstream provider.
