The Technology Differences Between a Web Filter and a Traffic Shaper


First, a couple of definitions, so we are all on the same page.
A Web Filter is a type of specialized firewall with a configurable list of URLs.  Using a Web Filter, a Network Administrator can completely block specific websites, or block entire categories of sites, such as pornography.

A Traffic Shaper is typically deployed to change the priority of certain kinds of traffic.  It is used where blocking traffic completely is not required, or is not an acceptable practice.  For example, the mission of a typical Traffic Shaper might be to allow users into their Facebook accounts while limiting their bandwidth so as not to overshadow other, more important activities.  With a shaper, the idea is to limit (shape) the total amount of data traffic for a given category.

From a technology standpoint, building a Web Filter is a much easier proposition than creating a Traffic Shaper.  This is not to demean the value or effort that goes into creating a good Web Filter; when I say “easier”, I mean it from a core technology point of view.  Building a good Web Filter product is not so much a technology challenge as a data management issue. A Web Filter worth its salt must be aware of potentially millions of websites that are ever-changing. To manage these sites, a Web Filter product must be constantly updated. The company supporting the Web Filter must crawl the Web, constantly indexing new websites and their contents, and then pass this information into the Web Filter product. The work is ongoing, but not necessarily daunting in terms of technology prowess.  The actual blocking of a website is simply a matter of comparing a requested URL against the list of forbidden sites and blocking the request (dropping the packets).
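That lookup can be sketched in a few lines. Here is a minimal illustration in Python; the domain names, categories, and category map are hypothetical stand-ins for the vendor-maintained database described above, not any real product's data:

```python
# Minimal sketch of a web-filter lookup: a requested URL is matched
# against a category map of known domains, and requests in blocked
# categories are refused. All names here are made-up examples.
from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"pornography", "gambling"}
DOMAIN_CATEGORY = {            # kept current by the vendor's ongoing crawl
    "badsite.example": "pornography",
    "casino.example": "gambling",
    "news.example": "news",
}

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Walk up the domain labels so sub.badsite.example matches too.
    parts = host.split(".")
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if DOMAIN_CATEGORY.get(candidate) in BLOCKED_CATEGORIES:
            return True
    return False

print(is_blocked("http://video.badsite.example/clip"))  # True
print(is_blocked("http://news.example/story"))          # False
```

The hard part, as noted, is not this comparison but keeping `DOMAIN_CATEGORY` accurate across millions of ever-changing sites.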
A Traffic Shaper, on the other hand, has a more daunting task. This is because, unlike the Web Filter, a Traffic Shaper kicks in after the base URL has been loaded.  I’ll walk through a generic scenario to illustrate this point.  When a user logs into their Facebook account, the first URL they hit is a well-known Facebook home page.  The initial request from their computer to the Facebook home page is easy for the Web Filter to spot, and if you block it at that first step, that is the end of the Facebook session.  Now, if you say to your Traffic Shaper “I want you to limit Facebook traffic to 1 megabit”, the task gets a bit trickier.  Once you are logged into a Facebook page, subsequent requests are not that obvious. Suppose a user downloads an image or plays a shared video from their Facebook screen. There is likely no context for the Traffic Shaper to know that the video’s URL is actually coming from Facebook.  Yes, to the user it is coming from their Facebook page, but when they click the link to play the video, the Traffic Shaper only sees the video link – it is not a Facebook URL any longer. On top of that, oftentimes the Facebook page and its contents are encrypted for privacy.
For these reasons, a traditional Traffic Shaper inspects the packets to see what is inside.  It uses Deep Packet Inspection (DPI) to look into the data packet and decide whether it looks like Facebook data. This is not an exact science, and with the widespread use of encryption, identifying traffic with accuracy is becoming all but impossible.
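In outline, DPI-based classification looks something like the following toy sketch. This is not NetEqualizer code; the byte patterns are made-up stand-ins for real protocol signatures, chosen only to show why encrypted payloads fall through to "unknown":

```python
import re

# Hypothetical payload signatures mapped to traffic classes.
# Real DPI engines ship with thousands of such patterns.
SIGNATURES = {
    "facebook": re.compile(rb"facebook\.com|fbcdn\.net"),
    "bittorrent": re.compile(rb"\x13BitTorrent protocol"),
}

def classify(payload: bytes) -> str:
    for app, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return app
    return "unknown"   # encrypted traffic usually lands here

# Plaintext HTTP is easy to match...
print(classify(b"GET / HTTP/1.1\r\nHost: www.facebook.com\r\n"))  # facebook
# ...but a TLS record is opaque bytes, so the same session goes unclassified.
print(classify(b"\x17\x03\x03\x00\x45 opaque encrypted bytes"))   # unknown
```

Once the payload is encrypted, every pattern fails, which is exactly the accuracy problem described above.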
The good news is that there are other heuristic ways to shape traffic that are gaining traction in the industry.  The bad news is that many end customers continue to struggle with diminishing accuracy of traditional Traffic Shapers.
For more in depth information on this subject, feel free to e-mail me at art@apconnections.net.
By Art Reisman, CTO APconnections

Firewall Recipe for DDoS Attack Prevention and Mitigation


Although you cannot “technically” stop a DDoS attack, there are ways to detect and automatically mitigate the debilitating effects on your public facing servers. Below, we shed some light on how to accomplish this without spending hundreds of thousands of dollars on a full service security solution that may be overkill for this situation.

Most of the damage done by a targeted DDoS attack is the result of the overhead incurred on your servers from the large volume of fake inquiries into your network. Often with these attacks, it is not the volume of raw bandwidth that is the issue, but the slow response time due to the overhead on your servers. For a detailed discussion of how a DDoS attack is initiated, please visit http://computer.howstuffworks.com/zombie-computer3.htm

We assume in our recipe below, that you have some sort of firewall device on your edge that can actually count hits into your network from an outside IP, and also that you can program this device to take blocking action automatically.

Note: We provide this type of service with our NetGladiator line. As of our 8.2 software update, we also provide this in our NetEqualizer line of products.

Step 1
Calculate your baseline incoming activity. This should be a running average of unique hits per minute, or perhaps per second. The important thing is that you have an idea of what is normal. Remember, we are only concerned with un-initiated hits into your network, meaning outside clients that contact you without being contacted first.

Step 2
Once you have your base hit rate of incoming queries, set a flag to take action (Step 3 below) should this hit rate exceed 1.5 standard deviations above your baseline.  In other words, act when your hit rate jumps by a statistically large amount compared to your baseline for no apparent reason (i.e., you did not mail out a newsletter).

Step 3
You are at Step 3 because you have noticed a much larger than average hit rate of un-initiated requests into your web site. Now you need to look at the hit count by external IP. We assume that the average human will generate at most a hit every 10 seconds or so, and on average will likely not generate more than 5 or 6 hits over a period of a few minutes.  A hijacked client attacking your site as part of a DDoS attack is likely to hit you at a much higher rate.  Identify these incoming IPs and go to Step 4.

Step 4
Block these IPs on your firewall for a period of 24 hours. You don’t want to block them permanently, because it is likely they are just hijacked clients; also, if they are coming from behind a NAT’d community (like a university), you would be blocking a large number of users who had nothing to do with the attack.
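Put together, the four steps above can be sketched as a small watchdog loop. Everything here is illustrative: the window size, the per-IP limit, and the `block_ip` stub are assumptions you would replace with hooks into your own firewall device:

```python
# Sketch of the Step 1-4 watchdog: track a baseline of un-initiated
# hits per minute, flag intervals 1.5 std devs above it, and block
# the offending IPs for 24 hours. Thresholds are illustrative.
import statistics, time
from collections import Counter

BLOCK_SECONDS = 24 * 3600          # Step 4: 24-hour block, not permanent
PER_IP_LIMIT = 6                   # Step 3: ~5-6 hits per few minutes is human

baseline = []                      # Step 1: running hits-per-minute samples
blocked = {}                       # ip -> unblock time

def block_ip(ip):                  # stub: replace with a real firewall rule
    blocked[ip] = time.time() + BLOCK_SECONDS

def check_interval(hits_by_ip: Counter):
    """hits_by_ip: un-initiated hits per source IP over the last minute."""
    total = sum(hits_by_ip.values())
    if len(baseline) >= 10:        # need enough samples for a stable baseline
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        # Step 2: flag rates more than 1.5 std devs above the baseline.
        if total > mean + 1.5 * sd:
            # Step 3: single out IPs hitting far faster than a human would.
            for ip, count in hits_by_ip.items():
                if count > PER_IP_LIMIT:
                    block_ip(ip)
            return                 # don't fold attack intervals into the baseline
    baseline.append(total)

# Quick demo: ten quiet minutes, then an attack burst from one IP.
for t in [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]:
    check_interval(Counter({"203.0.113.5": t}))
check_interval(Counter({"198.51.100.7": 500}))
print(sorted(blocked))             # ['198.51.100.7']
```

A real deployment would also need to expire entries in `blocked` and feed it actual firewall counters, but the control flow mirrors the recipe.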

If you follow these steps, you should have a nice proactive watchdog on your firewall to mitigate the effects of any DDoS attack.

For further consulting on DDoS or other security related issues feel free to contact us at admin@apconnections.net.

Related Articles:

Defend your Web Server against DDoS Attacks – techrecipes.com

How DDoS Attacks Work, and Why They’re Hard to Stop

How to Launch a 65 gbps DDoS Attack – and How to Stop It

Do hotels ever block your personal WiFi?


Apparently at least one hotel does. We had written an article hinting that this might be the case back in 2010.  Hotel operators at the time were hurting from the loss of phone call charges as customers turned to their cell phones, and were looking for creative ways to charge for Internet service.

Hence I was not surprised to see this article today.

FCC: Marriott blocked guests’ personal Wi-Fi, charged for Net access

Federal Communications Commission fines Marriott $600,000 after deciding it illegally interfered with conventiongoers’ hot spots in Nashville. Marriott says it did nothing wrong.

In its judgment, the FCC said “Marriott employees had used containment features of a Wi-Fi monitoring system at the Gaylord Opryland to prevent individuals from connecting to the Internet via their own personal Wi-Fi networks, while at the same time charging consumers, small businesses and exhibitors as much as $1,000 per device to access Marriott’s Wi-Fi network.”

read more

How to keep your IP address static with DHCP


One of the features we support with the NetEqualizer product is a Quota tool, which keeps a running count of total bytes used per IP on a network. A typical IT administrator wants to keep track of data on a per-user basis over time, hence some form of Quota tool is essential.  However, a potential drawback of our methodology is that we track usage by IP.  Most networks use a technology called DHCP that dynamically hands out a new IP address each time you power your computer or wireless device down and back up. Most network administrators can track a specific user to an IP in the moment, but they have no idea who had the IP address last week or last month.  Note: there are authentication tools, such as RADIUS or Nomadix, that can be used to track users by name, but this adds a complex layer of additional overhead to a simple network.

Yesterday, while working with a customer, the subject of our Quota tool came up, along with its drawback of not being able to track a user by IP over time, and the customer turned that into a teaching moment for me.

You see, a DHCP server will always try to give the same IP address back to the same device if the previous IP address is available.  So the key is keeping that IP address available, and there is a simple trick to make sure that this happens.

When you set up a DHCP server, it will ask you for the range of IP addresses you want to use. All you need to do is ensure that the defined range is much bigger than the number of devices that will be on your network, and then you can be almost certain that a device will always get the same IP.  This is because the DHCP server only re-uses previously assigned IP addresses once all addresses in the range have been handed out, which would only happen if you defined your range to be small relative to the number of potential devices on your network.  There is no real extra cost to defining your DHCP address range as a Class B instead of the typical default Class C, which expands your range from 254 to over 65,000 usable addresses.  So make sure your ranges are large enough, and feel free to track your users by IP without worry.
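The "sticky lease" behavior described above can be illustrated with a toy allocator. This is a deliberate simplification of what a real DHCP server such as ISC dhcpd does; the class and its details are invented for illustration:

```python
# Toy DHCP pool illustrating why a large range keeps addresses "sticky":
# a device gets its previous lease back as long as nobody else took it,
# and reuse only begins once the pool is exhausted.
class ToyDhcpPool:
    def __init__(self, addresses):
        self.free = list(addresses)       # unassigned addresses, in order
        self.last_lease = {}              # MAC -> last IP handed out
        self.active = {}                  # IP -> MAC currently holding it

    def request(self, mac):
        prev = self.last_lease.get(mac)
        if prev and prev in self.free:    # preferred: return the old lease
            self.free.remove(prev)
        elif self.free:
            prev = self.free.pop(0)       # otherwise hand out a fresh address
        else:
            raise RuntimeError("pool exhausted; reuse is now unavoidable")
        self.last_lease[mac] = prev
        self.active[prev] = mac
        return prev

    def release(self, mac):
        ip = self.last_lease[mac]
        self.active.pop(ip, None)
        self.free.append(ip)              # back in the pool, at the end of the line

# With a pool much larger than the device count, a returning device
# finds its old address still free and gets it back.
pool = ToyDhcpPool([f"192.168.{b}.{h}" for b in range(4) for h in range(1, 11)])
ip1 = pool.request("aa:bb:cc:dd:ee:ff")
pool.release("aa:bb:cc:dd:ee:ff")
ip2 = pool.request("aa:bb:cc:dd:ee:ff")
print(ip1 == ip2)                          # True
```

Because released addresses go to the back of the free list, a returning device only loses its old address after every other address in the range has been consumed, which is exactly why oversizing the range keeps IPs stable.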

If you would like to learn more about our Quota tool, also known as “User Quota”, you can read more about it in our User Guide.

Is Layer 7 Shaping Officially Dead?


Sometimes life throws you a curve ball and you must change directions.

We have some nice color-coded pie charts that show customers the percentage of their bandwidth used by each application. This feature is popular and really catches the eye.

In an effort to improve our latest Layer 7 reporting feature, we have been collecting data from some of our Beta users.

Layer 7 Pie Chart

The accuracy of the Layer 7 data has always been, and continues to be, an issue. Normally this is resolved by revising the Layer 7 protocol patterns, which we use internally to identify the signatures of various applications.  We had anticipated and planned to address accuracy in a second release. However, when we started to look at the root cause of the missed classifications, we began to see more and more cases of encrypted data. Encrypted data cannot be identified.

We then checked with one of our ISP customers in South Africa, which serves over a million residential users. It seems that some of their investment in Layer 7 classification is also being thwarted by increased encryption. And this is more than the traditional p2p traffic; encryption has spread to common social services such as Facebook.

Admittedly, some of this early data is anecdotal, but two independent observers reporting increased encryption is hard to ignore.

Evidently the increased encryption now being used by common applications is a backlash to all the security issues bogging down the Internet.  There are workarounds for enterprises that must use Layer 7 classification to prioritize traffic; however, the workarounds require that all devices using the network be retrofitted with special software to identify the traffic on the device (iPad, iPhone). Such a workaround is impractical for an ISP.

The net side effect is that, if this trend continues, traditional Layer 7 packet shapers will become museum pieces, right beside old Atari games and giant three-pound cell phones.

How Many Users Can Your High Density Wireless Network Support? Find Out Before you Deploy.


By Art Reisman, CTO, http://www.netequalizer.com

Recently I wrote an article on how tough it has become to deploy wireless technology in high density areas.  It is difficult to predict final densities until fully deployed, and often this leads to missed performance expectations.

In a strange coincidence, while checking in with my friends over at Candela Technologies last Friday, I was not surprised to learn that their latest offering, the Wiser-50 Mobile Wireless Network Emulator, is taking the industry by storm.

So how does their wireless emulator work, and why would you need one?

The Wiser-50 allows you to take your chosen access points, load them up with realistic signals from a densely packed area of users, and play out different load scenarios without actually building out the network. The ability to do this type of emulation allows you to make adjustments to your design on paper, without the costly trial and error of field trials.  You will be able to see how your access points behave under load before you deploy them.  You can then make some reasonable assumptions about how densely to place your access points and, more importantly, get an idea of the upper bounds of your final network.

With IT deployments scaling up into new territories of density, an investment in a wireless emulation tool will pay for itself many times over, especially when bidding on a project. The ability to justify how you have sized a quality solution over an ad hoc solution will allow your customer to make informed decisions on the trade-offs in wireless investment.

The technical capabilities of the Wiser-50 are listed below.  If you are not familiar with all the terms involved in wireless testing, I would suggest a call to Candelatech’s network engineers; they have years of experience helping all levels of customers and are extremely patient and easy to work with.

Scenario Definition Tool/Visualization

  • Complete Scenario Definition to add nodes, create mobility vectors and traffic profiles for run-time executable emulation.
  • Runtime GUI visualization with mobility and different link and traffic conditions.
  • Automatic Traffic generation & execution through the GUI.
  • Drag-and-drop capability for re-positioning of nodes.
  • Scenario consistency checks (against node capabilities and physical limitations such as speed of vehicle).
  • Mock-up run of the defined scenario (i.e., run that does not involve the emulator core to look at the scenario)
  • Manipulation of groups of nodes (positioning, movement as a group)
  • Capture and replay log files via GUI.
  • Support for 5/6 pre-defined scenarios.

RF Module

  • Support for TIREM, exponent-based, shadowing, fading, rain models (not included in base package.)
  • Support for adaptive modulation/coding for BER targets for ground-ground links.
  • Support for ground-to-ground & satellite waveforms
  • Support for MA TDMA (variants for ground-ground, ground-air & satellite links).
  • Support for minimal CSMA/CA functionality.
  • Support to add effects of selective ARQ & re-transmissions for the TDMA MAC.


Related Articles

The Wireless Density Problem

Wireless Network Capacity Never Ending Quest Cisco Blog

Internet User’s Bill of Rights


This is the second article in our series. Our first was a Bill of Rights dictating the etiquette of software updates. We continue with a proposed Bill of Rights for consumers with respect to their Internet service.

1) Providers must divulge the contention ratio of their service.

At the core of all Internet service is a balancing act between the number of people that are sharing a resource and how much of that resource is available.

For example, a typical provider starts out with a big pipe of Internet access that is shared via exchange points with other large providers. They then subdivide this access out to their customers in ever smaller chunks — perhaps starting with a gigabit exchange point and then narrowing down to a 10 megabit local pipe that is shared with customers across a subdivision or area of town.

The speed you, the customer, can attain is limited to how many people might be sharing that 10 megabit local pipe at any one time. If you are promised one megabit service, it is likely that your provider would have you share your trunk with more than 10 subscribers and take advantage of the natural usage behavior, which assumes that not all users are active at one time.

The exact contention ratio will vary widely from area to area, but from experience, your provider will want to maximize the number of subscribers who can share the pipe, while minimizing service complaints due to a slow network. In some cases, I have seen as many as 1,000 subscribers sharing 10 megabits. This is a bit extreme, but even with a ratio as high as this, subscribers will average much faster speeds when compared to dial up.
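The arithmetic behind contention ratios is simple enough to sketch. The figures below reuse the numbers from this article (a 10 megabit local pipe, 1,000 subscribers); the 2% activity figure is an assumed example, not a measured value:

```python
# Contention-ratio arithmetic: what each user sees depends on how many
# subscribers are actually active on the shared pipe at the same time.
def per_user_kbps(pipe_mbps, subscribers, active_fraction=1.0):
    """Bandwidth per user if `active_fraction` of subscribers are busy at once."""
    active = max(1, int(subscribers * active_fraction))
    return pipe_mbps * 1000 / active

# 10 Mbps shared by 1,000 subscribers, worst case (everyone active):
print(per_user_kbps(10, 1000))         # 10.0 kbps each
# With an assumed 2% of subscribers active at any instant:
print(per_user_kbps(10, 1000, 0.02))   # 500.0 kbps each
```

This is why providers can oversubscribe so heavily: as long as simultaneous activity stays low, the typical user still sees speeds far above dial-up, even at a 100:1 contention ratio.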

2) Service speeds should be based on the amount of bandwidth available at the provider’s exchange point and NOT the last mile.

Even if your neighborhood (last mile) link remains clear, your provider’s connection can become saturated at its exchange point. The Internet is made up of different provider networks and backbones. If you send an e-mail to a friend who receives service from a company other than your provider, then your ISP must send that data on to another network at an exchange point. The speed of an exchange point is not infinite, but is dictated by the type of switching equipment. If the exchange point traffic exceeds the capacity of the switch or receiving carrier, then traffic will slow.

3) No preferential treatment to speed test sites.

It is possible for an ISP to give preferential treatment to individual speed test sites. Providers have all sorts of tools at their disposal to allow and disallow certain kinds of traffic. There should never be any preferential treatment to a speed test site.

4) No deliberate re-routing of traffic.

Another common tactic to save resources at the exchange points of a provider is to re-route file-sharing requests to stay within their network. For example, if you were using a common file-sharing application such as BitTorrent, and you were looking for some non-copyrighted material, it would be in your best interest to contact resources all over the world to ensure the fastest download.

However, if your provider can keep you on their network, they can avoid clogging their exchange points. Since companies keep tabs on how much traffic they exchange in a balance sheet, making up for surpluses with cash, it is in their interest to keep traffic confined to their network, if possible.

5) Clearly disclose any time of day bandwidth restrictions.

The ability to increase bandwidth for a short period of time and then slow you down if you persist at downloading is another trick ISPs can use. Sometimes they call this burst speed, which can mean speeds being increased up to five megabits, and they make this sort of behavior look like a consumer benefit. Perhaps Internet usage will seem a bit faster, but it is really a marketing tool that allows ISPs to advertise higher connection speeds – even though these speeds can be sporadic and short-lived.

For example, you may only be able to attain five megabits at 12:00 a.m. on Tuesdays, or some other random unknown times. Your provider is likely just letting users have access to higher speeds at times of low usage. On the other hand, during busier times of day, it is rare that these higher speeds will be available.

There is now a consortium called M-Lab which has put together a sophisticated speed test site designed to give specific details on what your ISP is doing to your connection. See the article below for more information.

Related article: Ten things your internet provider does not want you to know.

Related article: Online shoppers bill of rights
