Tracking Traffic by DNS


The video rental industry of the early '80s was made up of thousands of independent stores. Corner video rental shops were as numerous as today's Starbucks. In the late 1990s, consolidation took over. Blockbuster, with its bright blue canopy lighting up the night sky, swallowed them up like doggy treats. All the small retail outlets were gone. Blockbuster had changed everything: its economy of scale and chain-store familiarity had overrun the small operators.

Like the fledgling video rental industry, Internet content circa the 1990s was scattered across the spectrum of the web, ripe for consolidation. I can still remember all of the geeks at my office creating and hosting their own personal websites. They used primitive tools and their own public IPs to weave these sites together. Movies and music were bootlegged and shared across a network of underground file-sharing sites.

Although we do not have one Internet "Blockbuster" today, there has been major consolidation. Instead of all traffic coming from hundreds of thousands of personal or small niche content providers, most of it comes from the big content providers. Google, Amazon, Netflix, Facebook, and Pinterest are all familiar names today.

So far I have reminisced about a nice bit of history, and I suspect you might be wondering how all of this prelude relates to tracking traffic by DNS.

Three years ago we added a DNS (domain name system) lookup to our GUI, more as a novelty than anything else. Tracking traffic by content was always a high priority for our customers, but most techniques had relied on a technology called "deep packet inspection" to identify traffic. This technology was costly, and ineffective even on its best day, but it was the only way to chase down nefarious content such as P2P.

Over the last couple of years I noticed the world had changed again. With content consolidated among a small number of large providers, you could now count on some consistency in the domain from which it originated. I would often click on our DNS feature and notice a common name for my data. For example, my YouTube videos resolved to one or two DNS names, and I found the same to be true with my Facebook videos. We realized that this consolidation might make DNS tracking useful for our customers, and so we have now put DNS tracking into our current NetEqualizer 8.5 release.

Another benefit of tracking by domain is that most encrypted traffic still reports a valid domain. This should help to identify traffic patterns on a network even when the payload cannot be inspected.
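The mechanism is simple enough to sketch. The snippet below reverse-resolves a flow's remote IP address and buckets it by domain suffix. The suffix list is purely illustrative (real content providers publish many more domains than these), and this is not NetEqualizer's implementation, just the general idea.

```python
import socket

# Hypothetical suffix list; a real classification table would be far larger.
CONTENT_DOMAINS = ("googlevideo.com", "fbcdn.net", "nflxvideo.net")

def domain_for_ip(ip):
    """Reverse-resolve an IP address to its DNS name, if it has one."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
        return hostname
    except (socket.herror, socket.gaierror):
        return None  # no PTR record published for this address

def classify(hostname, suffixes=CONTENT_DOMAINS):
    """Bucket a flow by the domain suffix of its remote endpoint's name."""
    if hostname is None:
        return "unknown"
    for suffix in suffixes:
        if hostname == suffix or hostname.endswith("." + suffix):
            return suffix
    return "other"
```

So a flow whose remote end resolves to something like `r3---sn-abc.googlevideo.com` gets counted as YouTube traffic, with no packet inspection required.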

It will be interesting to get feedback on this feature as it hits the real world. Stay tuned!

How I Survived a Ransomware Attack


By Art Reisman

About six months ago, I was trying to access a web site when I got the infamous message: "Your Flash Player is out-of-date". I was provided with a link to a site to update my Adobe Flash Player. At the time, I thought nothing of updating my Flash Player, as this had happened perhaps 100 times already. That raises the question of why my perfectly fine and happy Adobe Flash Player constantly needs to be updated. Another story for another day.

In my haste, I clicked the link, promptly received the "Adobe Flash update" for my Mac, and installed it. For all intents and purposes, that was the end of my Mac. The thing just took it over, destroying it. It would insidiously let me get started with my daily work, and then within a few minutes I would receive a barrage of almost constant pop-up messages telling me I had a virus and to call some number for help. Classic ransomware. At the time I did not think Macs were vulnerable to this type of thing, as the only viruses I had contracted prior were on my Windows machines, which I tossed in the scrap pile several years ago for that very reason.

My solution to this dilemma was simply to reload my Mac from scratch. I was up and running again in about an hour. A hassle, yes; the end of the world, no.

Now you might be wondering: what about all the programs and data files I store on my Mac? To that I answer, what data files? Everything I do is in the Cloud; nothing is stored on my Mac, as I believe there is no reason to store anything locally.

Gmail, Quickbooks, WordPress, photos, documents, and everything else that I use are all stored in the Cloud!

For backup purposes, I periodically e-mail a list of all my important Cloud links to myself.  Since they are stored in Gmail, they are always accessible and I can access them from any computer.  Data recovery amounts to nothing more than finding my most recent backup list e-mail and clicking on my Cloud links as needed.

Five Things to Know About Wireless Networks


By Art Reisman
CTO, APconnections


Over the last year or so, when the work day is done, I often find myself talking shop with several peers of mine who run wireless networking companies. These are the guys in the trenches. They spend their days installing wireless infrastructure in apartment buildings, hotels, and professional sports arenas, to name just a few. Below I share a few tidbits intended to provide a high-level picture for anybody thinking about building their own wireless network.

There are no experts.

Why? Competition between wireless manufacturers is intense. Yes, the competition is great for innovation, and certainly wireless technology has come a long way in the last 10 years; however, these fast-paced improvements come with a cost. New learning curves for IT partners, numerous patches, and differing approaches make it hard for any one person to become an expert. Anybody who works in this industry usually settles in with one manufacturer, perhaps two; the field is simply moving too fast.

The higher (faster) the frequency, the higher the cost of the network.

Why? As the industry moves to standards that transmit data at higher rates, it must use higher frequencies to achieve the faster speeds. It just so happens that these higher frequencies tend to be less effective at penetrating buildings, walls, and windows. The increase in cost comes from the need to place more and more access points in a building to achieve coverage.

Putting more access points in your building does not always mean better service.

Why? Computers have a bad habit of connecting to one access point and then not letting go, even when the signal gets weak. For example, when you connect to a wireless network with your laptop in the lobby of a hotel and then move across the room, you can end up in a bad spot with respect to your original access point. In theory, the right thing to do would be to release the current connection and connect to a different access point. The problem is that most of the installed base of wireless networks has no intelligence built in to route you to the best access point; hence even a building with plenty of coverage can have maddening service.

Electromagnetic Radiation Cannot Be Seen

So what? The issue here is that there are all kinds of scenarios where wireless signals bouncing around the environment can destroy service. Think of a highway full of invisible cars traveling in any direction they want. When a wireless network is installed, the contractor in charge does what is called a site survey. This involves special equipment that measures the electromagnetic waves in an area and helps the contractor plan how many wireless access points to install, and where; but once the network is in, anything can happen. Private personal hotspots, devices with electric motors, or a change in metal furniture configuration can all destabilize an area, and thus service can degrade for reasons that nobody can detect.

The More People Connected, the Slower Their Speed

Why? Wireless access points use a technique called TDM (Time Division Multiplexing). Basically, available bandwidth is carved up into little time slots. When there is only one user connected to an access point, that user gets all the bandwidth; when there are two users connected, they each get half the time slots. So an access point that advertises 100-megabit speeds can only deliver at best 10 megabits when 10 people are connected to it.
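The arithmetic is worth writing down, because it surprises people. This sketch mirrors the 100-megabit example above; real per-user rates are lower still once protocol overhead, retransmissions, and weak signals are factored in.

```python
def per_user_throughput(link_mbps, active_users):
    """Best-case per-user rate on a TDM access point: airtime slots are
    shared equally, so each of N active users sees at most total/N.
    Real-world rates are lower (overhead, retransmissions, weak signals)."""
    if active_users < 1:
        raise ValueError("need at least one active user")
    return link_mbps / active_users

# The 100-megabit access point from the example above:
print(per_user_throughput(100, 1))   # one user gets the whole pipe
print(per_user_throughput(100, 10))  # ten users: 10 megabits each, at best
```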

Related Articles

Wireless is nice but wired networks are here to stay

Seven Tips To Improve Performance of your Wireless Lan

Proving the Identity of the DNC Hacker: Not Likely



By Art Reisman

CTO, APconnections

Inspired by the recent accusations regarding the alleged Russian hacking of the DNC e-mail servers, I ask the question: is it really possible for our intelligence agencies to say with confidence exactly who hacked those servers? I honestly don't think so. To back up my opinion, I have decided to take our faithful blog readers through the mind and actions of a professional hacker intent on breaking into a corporate e-mail server without leaving a trace. From there you can draw your own conclusions.

My hacking scenario below is based on actual techniques that our own ethical hackers use to test security at corporations. These companies contract with us to deliberately break into their IT systems, and yes, sometimes we do break in.

First we will follow our hacker through the process of a typical deliberate, illegal break-in, and then we will analyze the daunting task a forensic expert must deal with after the fact.

 

Here we go….

Phase I

  • First I need a platform for the first phase of my attack. I want to find a computer with no formal ties to my identity. Just as the public telephone booths of the '70s and '80s were used for calling in bomb threats, the computers in your public libraries can easily conceal my identity.
  • To further cover my trail, I bring my own flash drive with me to the library; it contains a software program commonly referred to as a "bot". This allows me to move programs onto the library computer without doing something like logging into my personal e-mail, which would leave a record of me being there. In this case my bot specializes in crawling the Internet looking for consumer-grade desktop computers to break into.
  • My bot searches the Internet at random, looking for computers that are unprotected. It will hit several thousand computers an hour for as long as I let it run.
  • I don't want to go too long with my bot running from the library, because all the outbound activity it generates may be detected as a virus by an upstream ISP. The good news in my favor is that bots, both friendly and malicious, are very common. At any time of day there are millions of them running all over the world.

Note: running a bot in itself is not a crime; it is just bad etiquette and annoying. It is extremely unlikely that anybody would actually be able to see that I am trying to hack into computers (yes, this is a crime) with my bot, because that would take very specialized equipment, and since I chose my library at random, the chances of drawing attention at this stage are minuscule. Typically a law enforcement agency must obtain a warrant to set up its detection equipment. All the upstream provider would sense is an unusually high rate of traffic coming out of the library.

  • Once my bot has found some unprotected home computers and I have their login credentials, I am ready for Phase II. I save off their IP addresses and credentials, delete the bot from the computer in the library, and leave, never to return.

You might be wondering: how does a bot get access to home computers? Many are still out there running very old versions of Windows or Linux and have generic passwords like "password". The bot attempts to log in through a well-known service such as SSH (remote login) and guesses the password. The bot may run into 1,000 dead ends or more before cracking a single computer. Just as a mindless robot should, it works tirelessly without complaint.

Phase II

  • I again go to the library and set up shop. Only this time, instead of a bot, I come armed with a phishing scam e-mail on my flash drive. From a computer in the library I remotely log into one of the home computers whose credentials I obtained in Phase I.
  • I set up a program that will send e-mails from the home computer to people who work at the DNC, with my trojan horse content.

If I am smart, I do a little research on the backgrounds of the people I am sending to, so as to make the e-mails as authentic as possible. Most consumers have seen the obvious scams where you get some ridiculous, out-of-context e-mail with a link to open some file you never asked for. That works for mass e-mailing to the public, hoping to snare a few old ladies or the computer illiterate, but I would assume that people who work at the DNC would just think it is spam and delete it. Hence, they get something a little more personalized.

How do I find the targeted employee e-mail addresses at the DNC? That is a bit easier; many times they are published on a web site, or I simply guess at employee e-mail addresses, such as hclinton@dnc.com.

  • If any of the targeted e-mails I have sent to a DNC employee are opened, the recipient will, unbeknownst to them, be installing a keystroke logger that captures everything they type. In this way, when they log into the DNC e-mail server, I also get a login and access to all their e-mails.

How do I ensure my victim does not suspect they have been hacked? Stealth, stealth, stealth. All of my hacking tools, such as my keystroke logger, have very small, inconspicuous footprints. I am not trying to crash or destroy anything at the DNC. The person or persons whose systems I gain entry through will most likely never know. Also, I will only be using them for a very short period of time, and I will delete them on my way out.

  • Getting e-mail access. Once the keystroke logger is in place, I have it report back to another one of my hacked personal computers. In this way the information I am collecting will sit on a home computer with no ties back to me. When I go to collect this information, I again go to a library with my flash drive and download the keystroke information; eventually I load all the e-mails I can get directly onto my flash drive while in the library. I then go to the Kremlin (or whoever I work for) and hand over the flash drives containing tens of thousands of e-mails for offline analysis.

 

Debunking the Russian Hacking Theory

The FBI purports to have found a "Russian signature file" on the DNC server.

  • It's not as if the hacking community has dialects associated with its hacking tools. Although, if I were a Chinese hacker, I might make sure I left a path pointing back at Russia. Why not? If you recall, I deleted my hacking tools on the way out, and yes, I know how to scrub them so there is no latent footprint on the disk drive.
  • As you can infer from my hacking example, I can hack pretty much autonomously from anywhere in the US, or the world for that matter, using a series of intermediaries and without ever residing at a permanent location.
  • Even if the FBI follows logs of where historical access into the DNC has come from, the trail is going to lead to some grandma's computer at some random location. Remember, all my contacts directly into the DNC were from my hijacked grandma computers. Perhaps that is enough to draw a conclusion, so the FBI can blame some poor Russian grandma. As the real hacker, all the better for me; let grandma take the diversion, and somebody else gets the blame.
  • Now let's suppose the FBI is really on the ball and somehow figures out that grandma's computer was just a shill hijacked by me. So they get a warrant, raid grandma's computer, and find a trail. This path is going to lead them back to the library where I sat perhaps three months ago.
  • We can go another step farther: suppose the library had video surveillance and caught me coming and going; then, just perhaps, they could make an ID match.

By now you get the idea: assuming the hacker was a foreign-sponsored professional and was not caught in the act, the trail is going to be impossible to draw any definite conclusions from.

To see another detailed account of what it takes to hack into a server, please visit our 2011 article "Confessions of a Hacker".

Why is Your Internet Connection So Slow?


By Art Reisman

CTO – APconnections

Have you ever been on a shared wireless network, in a hotel or business, and noticed how your connection can go from reasonable to completely unusable in a matter of seconds, and then cycle back to usable?

The reason for this is that once a network hits its bandwidth allocation, the provider's router usually just starts dropping the excess packets. Intuitively, when your router is dropping packets, one would assume that the perceived slowdown, per user, would be just a gradual shift slower.

What happens in reality is far worse…

1) Distant users see their response times spiral.

Martin Roth, a colleague of ours who founded one of the top performance analysis companies in the world, provided this explanation:

“Any device which is dropping packets “favors” streams with the shortest round trip time, because (according to the TCP protocol) the time after which a lost packet is recovered depends on the round trip time. So when a company in Copenhagen, Denmark has a line to Australia and a line to Germany on the same Internet router, and this router is discarding packets because of bandwidth limits/policing, the stream to Australia is getting much bigger “holes” per lost packet (up to 3 seconds) than the stream to Germany or another office in Copenhagen. This effect then increases when the TCP window size to Australia is reduced (because of the retransmissions), so there are fewer bytes per round trip and more holes between two round trips.”

In the screen shot above (courtesy of avenida.dk), the bandwidth limit is 10 Mbit (= 1 Mbyte/s net traffic), so everything on top of that gets discarded. The problem is not the discards, which are standard TCP behavior, but the connections that are forcefully closed because of the discards. After the peak in closed connections, there is a “dip” in bandwidth utilization, because we cut too many connections.
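Martin Roth's point can be made numerically. The estimator below follows the RFC 6298 update rules for TCP's retransmission timeout (RTO); the 20 ms and 300 ms round-trip times are illustrative stand-ins for the Germany and Australia links, not measured values.

```python
class RtoEstimator:
    """Smoothed RTT and retransmission timeout per the RFC 6298 update
    rules (alpha = 1/8, beta = 1/4).  RFC 6298 also mandates a one-second
    floor on the RTO; it is omitted here so the distance effect is easy
    to see on its own."""

    def __init__(self):
        self.srtt = None    # smoothed round-trip time (seconds)
        self.rttvar = None  # round-trip time variation (seconds)

    def sample(self, rtt):
        if self.srtt is None:  # first measurement seeds both estimates
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt

    def rto(self):
        return self.srtt + 4 * self.rttvar

# A nearby office (~20 ms RTT) versus a link to Australia (~300 ms RTT):
near, far = RtoEstimator(), RtoEstimator()
near.sample(0.02)
far.sample(0.30)
# far.rto() comes out 15x larger than near.rto(), so each lost packet on
# the long-haul stream leaves a far bigger "hole" before recovery begins.
```

This is why, at a congested router, the distant streams fall apart first: their recovery timers are proportionally longer for every drop.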

2) Once you hit a congestion point, where your router is forced to drop packets, overall congestion actually gets worse before it gets better.

When applications don't get a response due to a dropped packet, instead of backing off and waiting, they tend to start sending retries, and this is why you may have noticed prolonged periods (30 seconds or more) of no service on a congested network. We call this the rolling brownout. Think of this situation as a sort of doubling down on bandwidth at the moment of congestion. Instead of easing into a full network and lightly bumping your head, all the devices demanding bandwidth ramp up their requests at precisely the moment your network is congested, resulting in an explosion of packet drops until everybody finally gives up.

How do you remedy outages caused by Congestion?

We have written extensively about solutions to prevent bottlenecks. Here is a quick summary of possible solutions:

1) The most obvious: increase the size of your link.

2) Enforce rate limits per user. The problem with this solution is that you can waste a good bit of bandwidth if the network is lightly loaded.

3) Use something more sophisticated like a NetEqualizer, a device designed specifically to counter the effects of congestion.

From Martin Roth of Avenida.dk

“With NetEqualizer we may get the same number of discards, but we get fewer connections closed, because we “kick” the few connections with the high bandwidth, so we do not get the “dip” in bandwidth utilization.

The graphs (above) were recorded using 1 second intervals, so here you can see the bandwidth is reached. In a standard SolarWinds graph with 10 minute averages the bandwidth utilization would be under 20% and the customer would not know they are hitting the limit.”
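The "kick the few connections with the high bandwidth" idea above can be sketched in a few lines. To be clear, this is a loose illustration of the fairness principle, not NetEqualizer's actual algorithm, and the trigger ratio and flow figures are made up for the example.

```python
def flows_to_throttle(flows, link_capacity, trigger=0.85):
    """When utilization crosses the trigger ratio, return the flows using
    more than an equal share of the link; only these get a temporary
    penalty, and everything else is left untouched."""
    total = sum(flows.values())
    if total < trigger * link_capacity:
        return []  # no congestion: touch nothing
    fair_share = link_capacity / max(len(flows), 1)
    return [name for name, rate in flows.items() if rate > fair_share]

# Four flows (Mbps) on a 10 Mbps link currently running at 9.6 Mbps:
flows = {"backup": 6.0, "web": 0.5, "voip": 0.1, "video": 3.0}
print(flows_to_throttle(flows, link_capacity=10.0))
```

Here only the backup and video flows exceed their 2.5 Mbps equal share, so they alone are slowed; the small web and VoIP flows sail through untouched, which is why the utilization "dip" from mass connection resets does not occur.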

———————————————————————-

The excerpt below is a message from a reseller who had been struggling with congestion issues at a hotel; he tried basic rate limits on his router first. Rate limits will buy you some time, but on an oversold network you can still hit the congestion point, and for that you need a smarter device.

“…NetEq delivered a 500% gain in available bandwidth by eliminating rate caps, possible through a mix of connection limits and Equalization.  Both are necessary.  The hotel went from 750 Kbit max per accesspoint (entire hotel lobby fights over 750Kbit; divided between who knows how many users) to 7Mbit or more available bandwidth for single users with heavy needs.

The ability to fully load the pipe, then reach out and instantly take back up to a third of it for an immediate need like a speedtest was also really eye-opening.  The pipe is already maxed out, but there is always a third of it that can be immediately cleared in time to perform something new and high-priority like a speed test.”
 
Rate Caps: nobody ever gets a fast Internet connection.
Equalized: the pipe stays as full as possible, yet anybody with a business-class need gets served a major portion of the pipe on demand. “
– Ben Whitaker – jetsetnetworks.com

Are those rate limits on your router good enough?

How to Speed Up Windows/Apple Updates


I discovered a problem with my download speed while trying to recover my unresponsive iPad. Apple's solution required me to attach my iPad to my Mac, and then to download a new iOS image from the Internet, through the Mac and onto the iPad.

Speed should have been no problem with my business-class, 20-megabit Internet connection from a well-known provider, right?

So I thought.

When I started the iOS download, the little progress timer immediately registered 23 hours to go. Wow, that is a long time to wait, and I needed my iPad for a trip the next morning. I tried a couple of speed tests in parallel, and everything looked normal. The question remained: where was the bottleneck on this iOS download? Was it on Apple's end, or a problem with my provider?

Over the years I have learned that iOS and Windows updates are the bane of many Internet providers, who are constantly looking for ways to prevent them from gumming up their exchange points. They will try to identify update traffic, either by using the source IP or, if that does not work, by actually examining the download data to make a determination. In either case, once they have tagged it as an update, they may choose to slow it down to keep their exchange points clear during peak traffic hours.

To thwart their shaping and get my speed back up near the promised 20 megabits, I simply had to hide my intentions. This can be accomplished using any number of consumer-grade VPN applications.

I turned on my IPVanish, which automatically encrypts the data and conceals the original source of my iOS update. Once up and running with the VPN, my iOS update loaded in 23 minutes, a 60-fold speed increase over my previous attempt.

If you would like to read more, here are a couple of other posts about ISPs throttling data:

There is something rotten in the state of online streaming.

How to get access to blocked Internet Sites.

Good luck!

Bandwidth Shaping Shake-Up: Is Your Packet Shaper Obsolete?


If you went to sleep in 2005 and woke up 10 years later, you would likely be surprised by some dramatic changes in technology.

  • Smart cars that drive themselves are almost a reality.
  • The desktop PC is no longer a consumer product.
  • Wind farms now line the highways of rural America.
  • Layer 7 shaping technology is clinging to life, crashing the financials of several companies that bet the house on it.

What happened to Layer 7 and packet shaping?

In the early 2000s, all the rage in traffic classification was the ability to put different types of traffic into labeled buckets and assign a priority to each. Akin to rating your food choices on a tapas menu, network administrators enjoyed an extensive list of traffic types: YouTube, Citrix, news feeds; the list was limited only by the price and quality of the bandwidth shaper. The more expensive the traffic shaper, the more choices you had.

Starting in 2005 and continuing to this day, several forces have worked against the Layer 7 paradigm.

  • The price of bulk bandwidth went into free fall, dropping much faster than the relatively fixed cost of a bandwidth shaper. The business case for buying a shaper to conserve bandwidth became much tighter, and some companies that had been riding high saw their stock prices collapse.
  • Internet traffic became invisible and impossible to identify with the advent of encryption. A Layer 7 classifier cannot see inside HTTPS or a VPN tunnel, so it essentially becomes a big, expensive albatross with little value as the share of encrypted traffic increases.
  • The FCC's Net Neutrality ruling further put a damper on a portion of the Layer 7 market. For years ISPs had been using Layer 7 technology to give preferential treatment to different types of traffic.
  • Cloud-based services use less complex architectures. Companies can consolidate on one simplified central bandwidth shaper, whereas before they might have had several across their various WAN links and network segments.

So where does this leave the bandwidth shaping market?

There is still some demand for Layer 7-type shapers, particularly in countries like China, where they attempt to control everything. However, in Europe and in the US, the trend is toward more basic controls that do not violate the FCC rule, cost less, and use some form of intelligent fairness rules, such as:

  • Quotas, as in your cell phone data plan.
  • Fairness-based heuristics (Equalizing), which are gaining momentum: a lower price point, and they prevent congestion without violating the FCC ruling.
  • Basic rate limits, as in your wired ISP's 20-megabit plan, often implemented on a basic router rather than a specialized shaping device.
  • No shaping at all, where pipes are so large there is no need to ration bandwidth.
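The basic rate limits in the list above are usually implemented as a token bucket, which is simple enough to sketch in full. The rate and burst figures below are arbitrary illustrations, and production limiters (e.g. in router firmware) handle clocking and queuing with more care than this.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens (think bytes)
    accrue per second, up to a ceiling of `burst`; a packet may pass
    only if enough tokens are available to pay for its size."""

    def __init__(self, rate, burst):
        self.rate = rate      # refill rate, tokens per second
        self.burst = burst    # bucket capacity, tokens
        self.tokens = burst   # start full
        self.last = 0.0       # timestamp of the previous check

    def allow(self, size, now):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size  # spend tokens and forward the packet
            return True
        return False             # over the limit: drop or queue it

# 1000 bytes/s sustained, 1500-byte burst:
tb = TokenBucket(rate=1000.0, burst=1500.0)
```

The bucket explains both behaviors of a rate-limited link: short bursts up to `burst` sail through at full line speed, but sustained traffic is pinned to `rate`, which is exactly why a hard cap wastes headroom on a lightly loaded network.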

Will Shaping be around in 10 years?

Yes, consumers and businesses will always find ways to use all their bandwidth and more.

Will price points for bandwidth continue to drop?

I am going to go against the grain here and say bandwidth prices will flatten out in the near future. Prices slid over the last decade for several reasons that are no longer in play.

The biggest driver of the price drops was the wide adoption of wavelength-division multiplexing (WDM) on carrier lines from 2005 to the present. There was already a good bit of fiber in the ground, but the WDM innovation caused a huge jump in capacity at very little additional cost to providers.

The other factor was a major worldwide recession, during which business demand was slack.

Lastly, there are no new large carriers coming online. Competition and price wars will ease up as suppliers try to increase profits.

 

 
