10 Web Application Security Tools You Can’t Do Without

By Zack Sanders – Director of Security – APconnections

Since initiating our hacking challenge last year, we’ve helped multiple organizations shore up security flaws in their web application infrastructure. Proper web application security testing is always a mix of automated and manual testing. If you just run automated tests without the knowledge to interpret the results, the flood of false positives will yield little value. If you don’t know the ins and outs of common vulnerabilities, manual testing alone will get you nowhere. With the right mix, you can create a baseline analysis from the automated tests that helps determine which areas of the application should be explored further manually.

Here are some of the tools I use the most when assessing a new web application along with brief descriptions*:

1) Metasploit – http://www.metasploit.com/ – Metasploit is an entire framework for penetration testing and security analysis. The tools are all open source and the community behind the software is outstanding.

2) DirBuster – http://sourceforge.net/projects/dirbuster/ – DirBuster is a directory brute force tool that allows you to create a tree view of a web application’s file system.

3) Nessus – http://www.tenable.com/products/nessus – Nessus is a great tool for identifying server-level vulnerabilities.

4) John the Ripper – http://www.openwall.com/john/ – JTR is a password cracker tool.

5) Havij – http://www.itsecteam.com/products/havij-v116-advanced-sql-injection/ – Havij is an advanced SQL injection tool that provides a GUI for conducting injection tests.

6) Charles Web Proxy – http://www.charlesproxy.com/ – Charles is an awesome tool that allows you to modify requests and responses in web applications.

7) Tamper Data Firefox Add-On – https://addons.mozilla.org/en-us/firefox/addon/tamper-data/ – Like Charles, this tool also allows you to modify requests.

8) Skipfish – http://code.google.com/p/skipfish/ – Skipfish is a web application security vulnerability scanner that will scan an entire website for issues. It turns up quite a few false positives, but it also finds legitimate issues.

9) Firebug – https://getfirebug.com/ – This is a debugging tool for web developers but it is useful for security professionals in that you can easily see what is happening behind the scenes.

10) Websecurify – http://www.websecurify.com/ – Websecurify is an entire security environment meant for assisting in the manual testing phase.

These are only some of the tools out there for security professionals who are testing web applications. There are many more. But, they aren’t just available to the good guys. Bad guys have access to them too and are using them in attacks all the time. Let us know if we can run a security assessment for your organization using the same tools hackers do. The investment will be well worth it.

Contact us today at: ips@apconnections.net

*Use these tools at your own risk and only on websites you have permission to test.

Getting the Keys to the Kingdom: SQL Injection

By Zack Sanders

Director of Security – www.netgladiator.net

SQL injection is one of the most well-known vulnerabilities in web application security. Because so many web sites today are database driven, an SQL injection vulnerability puts the entire application and its users at risk. The purpose of this article is to explain what SQL injection is, show how easily it can be exploited, and discuss what steps you can take to make sure your site is secure from this devastating attack vector.

What is SQL injection?

SQL injection is performed by including portions of SQL statements in a web form entry field in an attempt to trick the web site into passing a newly formed, malicious SQL command to the database. The vulnerability occurs when user input is incorrectly filtered or not strongly typed, and is unexpectedly executed. SQL commands are thus injected from the web form into the application’s database, where they can change database content or dump sensitive information, like credit card numbers or passwords, to the attacker. An average website can experience hundreds of SQL injection attempts per hour from automated bots scouring the Internet.
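To make the mechanics concrete, here is a minimal, self-contained sketch (in Python with an in-memory SQLite database, not any particular site’s stack) of a login query built by string concatenation and the classic `' OR '1'='1` input that subverts it:

```python
import sqlite3

# Toy database with a single user (illustrative only).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # BAD: user input is pasted straight into the SQL string.
    query = ("SELECT * FROM users WHERE username = '%s' AND password = '%s'"
             % (username, password))
    return db.execute(query).fetchall()

# A wrong password is rejected as expected...
assert login_vulnerable("alice", "wrong") == []

# ...but the classic injection makes the WHERE clause always true,
# "authenticating" the attacker with no valid password at all.
rows = login_vulnerable("nobody", "' OR '1'='1")
assert len(rows) == 1
```

Because AND binds tighter than OR, the injected clause makes the WHERE condition true for every row, so the attacker logs in without ever knowing a password.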

How do attackers discover it?

When searching for SQL injection, an attacker looks for an application that behaves differently based on varying inputs to a form. For example, a vulnerable web form might accept expected content just fine, but if SQL characters are entered, a system-level SQL error is generated, saying something like, “There is an error in your MySQL syntax.” This tells the attacker that the SQL code is being interpreted, even though it is incorrect, and that the application is therefore vulnerable.
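This error-based probing is easy to automate. The sketch below is an illustration, not any specific tool: it checks a page body for telltale database error strings, and the signature list is a small, hypothetical sample:

```python
import re

# Illustrative (not exhaustive) database error signatures that suggest
# injected SQL characters are reaching the database.
ERROR_SIGNATURES = [
    r"error in your (My)?SQL syntax",
    r"unclosed quotation mark",
    r"ORA-\d{5}",
]

def looks_injectable(response_body):
    """Return True if a page body contains a telltale SQL error."""
    return any(re.search(sig, response_body, re.IGNORECASE)
               for sig in ERROR_SIGNATURES)

assert looks_injectable("There is an error in your MySQL syntax")
assert not looks_injectable("<h1>Welcome back</h1>")
```

A scanner would submit inputs such as a lone single quote to each form field and run every response through a check like this.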

How is a site that is vulnerable exploited?

Once an application is deemed vulnerable, an attacker will try using an automated injection tool to glean information about the database. Structural data like the information schema, the version of SQL being run, and table names are all trivial to gather. Once the structure is defined and understood, custom SQL statements can be written to download data from interesting tables like “users”, “customers”, or “payments”. Here is a screenshot from a recent client of mine whose site was vulnerable. These are just a few of the columns from the “users” table.

* Names, email addresses, partial passwords, usernames, and addresses are blocked out.

What happens next?

With this level of access, the sky is the limit. Here are a few things an attacker might do:

1) Take all of the hashed passwords and run them against a rainbow table for matches. This is why long passwords are so important. Even though a hash function is one-way, the hashes of short and common passwords are all known and can easily be reversed by lookup. An attacker might then use the passwords, along with email addresses, to compromise other accounts owned by those users.

2) Change the super administrator flag for a user they know the password for, and log in to gain further access. A common goal is to get to a file upload interface so that a script can be uploaded to the server that would give an attacker remote control.

3) Drop the entire database purely to wreak havoc.
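The rainbow-table lookup in item 1 amounts to a precomputed reverse dictionary. Here is a toy sketch; MD5 is used purely for illustration, and a real table holds billions of entries for whatever hash algorithm the dump used:

```python
import hashlib

# Tiny stand-in for a rainbow table: precomputed hashes of common
# passwords (real tables hold billions of entries).
COMMON = ["123456", "password", "letmein", "qwerty"]
RAINBOW = {hashlib.md5(p.encode()).hexdigest(): p for p in COMMON}

def crack(stolen_hash):
    """Reverse a hash by table lookup instead of brute force."""
    return RAINBOW.get(stolen_hash)

leaked = hashlib.md5(b"letmein").hexdigest()   # e.g. from a dumped users table
assert crack(leaked) == "letmein"              # common password: reversed instantly
assert crack(hashlib.md5(b"correct horse battery staple").hexdigest()) is None
```

Note that the long, uncommon passphrase is not in the table, which is exactly why password length and rarity matter.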

How do you protect your site from SQL injection?

ALL GET and POST requests involving the database need to be filtered and analyzed before being run. This includes actions like:

1) Stripping away or escaping SQL characters. In PHP with a MySQL database, this is what the mysql_real_escape_string() function does.

2) Analyzing for expected input. Should the entry only be a number from 1 to 50? Check that it is a positive, non-zero number and no more than two characters long.

3) Having strong database permissions. Create separate database users with only the permissions needed for their function. For example, don’t use the root MySQL user to connect your web application to your database.

4) Hiring an expert to assess your web application. The cost of performing this type of health check is minuscule compared to the cost of being exploited.

5) Installing an intrusion prevention system like NetGladiator that looks for SQL characters in URLs.
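Items 1 and 2 combine naturally in code. This hedged sketch uses Python and SQLite rather than the PHP/MySQL stack discussed above, and shows input validation plus a parameterized query, which lets the database driver do the escaping for you:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER, name TEXT)")
db.execute("INSERT INTO products VALUES (7, 'widget')")

def get_product(raw_id):
    # Item 2: validate the expected input -- here, a number from 1 to 50.
    if not raw_id.isdigit() or not 1 <= int(raw_id) <= 50:
        raise ValueError("rejected input: %r" % raw_id)
    # Parameterized query: the driver quotes the value, so SQL characters
    # in the input can never change the shape of the statement.
    return db.execute("SELECT name FROM products WHERE id = ?",
                      (int(raw_id),)).fetchone()

assert get_product("7") == ("widget",)
```

Even if the validation step were skipped, the `?` placeholder keeps input data from ever being parsed as SQL.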

The keys to the kingdom

Hopefully you can now see the danger of SQL injection. The level of control and access coupled with the ease of discovery and exploitation make it extremely problematic. The good news is, putting basic protections in place is relatively easy.

Contact us today if you want help securing your web application!

Special Glasses Needed to Spot Network Security Holes

By Art Reisman

CTO – http://www.netequalizer.com

Would you leave for vacation with your garage door wide open or walk off the edge of a cliff looking for a lost dog? Whether it be a bike lock, or that little beep your car makes when you hit the button on your remote, you rely on physical confirmation for safety and security every day.

Because network security holes do not register with any of our human senses, most businesses run blind to vulnerabilities that can be glaringly obvious to a hacker.

Have you ever seen an owl swoop down in the darkness and grab a rabbit? I have, but only once, and that was in the dim glow of a field lit by some nearby stadium lights. Owls take hundreds of rodents every night under the cover of darkness; they have excellent night vision, and most rodents don’t.

To a hacker, a security hole can be just as obvious as that rabbit. You might feel secure under the cover of darkness, but what is invisible to your senses is quite obvious to a hacker. They have ways of illuminating your security holes, and they can choose to exploit them if deemed juicy enough. For some entry points, a hacker might have to look a little harder, like lifting a door mat to reveal a key. Nevertheless, they will find the key, and the problem is you won’t even know the key is under the mat.

Fancy automated tools that report risk are nice, but the only way to expose your actual network security holes is to hire somebody with night-vision goggles who can see the holes. Most audit tools are not good enough by themselves; they sort of bumble around in the dark, looking and feeling for things, and they really do not see them the way a hacker does.

I’d strongly urge any company that is serious about updating its security to employ a white knight hacker before any other investment outlay. For the same reason that automated systems cannot replace humans (even though billions have been spent on them over the years), you should not start your security defense with an automated tool. It must start with a human hell-bent on breaking into your business and then showing you the holes. It never ceases to amaze me what types of holes our white knight hackers find. There is nothing better at spotting security holes than a guy with special glasses.

Is Your Data Really Secure?

By Zack Sanders

Most businesses, if asked, would tell you they do care about the security of their customers. The controversial part of security comes to a head when you ask the question in a different way. Does your business care enough about security to make an investment in protecting customer data? There are a few companies that proactively invest in security for security’s sake, but they are largely in the minority.

The two key driving factors that determine a business’s commitment to security investment are:

1) Government or Industry Standard Compliance – This is what drives businesses like your credit card company, your local bank, and your healthcare provider to care about security. In order to operate, they are forced to care. Standards like HIPAA and PCI require them to go through security audits and checkups. Note: just because they invest in meeting a compliance standard, it may not translate to secure data, as we point out below.

2) A Breach Occurs – Nothing will change an organization’s attitude toward security like a massive, embarrassing security breach. Sadly, it usually takes something like this happening to drive home the point that security is important for everyone.

The fact is, most businesses are running on very thin margins, and other operating costs come before security spending. Human nature is such that we prioritize by what is in front of us now; what we don’t know can’t hurt us. It is easy for a business to assume that its minimum firewall configuration is good enough for now. Unfortunately, it cannot easily see the holes in its firewall. Most firewall security can easily be breached through advertised public interfaces.

How do we know? Because we often do complimentary spot checks on company web servers, and it is a rare case when we have not been able to break in and attain access to all customer records. Even though our sample set is small, our breach rate is so high that we can reliably extrapolate that most companies can easily be broken into.

As we alluded to above, even some of the companies that follow a standard are still vulnerable. Many large corporations just go through the motions to comply with a standard, so they typically seek out “trusted,” large professional services firms to do their audits. Often, these firms will conduct boilerplate assessments in which auditors run down a checklist with the sole goal of certifying the application or organization as compliant.

Hiring a huge firm to do an audit makes it much easier to deflect blame in the case of an incident. The employee responsible for hiring the audit firm can say, “Well, I hired XXX – what more could I have done?” If they had hired a small firm to do the audit, and a breach occurred, their judgement and job might come into question – however unfair that might be.

As a professional web application security analyst who has personally handled the aftermath of many serious security breaches, I advocate that if you take your security seriously, you start with an assessment challenge using a firm that will work to expose your real-world vulnerabilities.

P2P Protocol Blocking Now Offered with NetGladiator Intrusion Prevention

A few months ago we introduced our NetGladiator Intrusion Prevention System (IPS). To date, it has thwarted tens of thousands of robotic cyber attacks and counting. Success breeds success, and our users wanted more.

When our savvy customers realized the power, speed, and low price point of our underlying Layer 7 engine, we started getting requests for additional features, such as: “Can you also block peer-to-peer and other protocols that cannot be stopped by our standard web filters and firewalls?” It was natural to extend our IPS device to address this space; hence, today we are announcing the next-generation NetGladiator. We now offer a module that will allow you to block and monitor the world’s top 10 P2P protocols (which account for 99 percent of all P2P traffic). We also back our technology with our unique promise to implement a custom protocol blocking rule with the purchase of any system at no extra charge. For example, if you have a specific protocol you need to monitor and just can’t uncover it with your WebSense or firewall filter, we will custom deliver a NetGladiator system that can track and/or block your unique protocol, in addition to our standard P2P blocking options.

Below is a sample live Excel report integrated with the NetGladiator in monitor mode. In the screen snapshot below, you will notice that we have uncovered a batch of uTorrent and FrostWire P2P traffic.

Please feel free to call 303-997-1300 or email our NetGladiator sales engineering team with any additional questions at ips@apconnections.net.

Related Articles

NetGladiator A layer 7 shaper in sheep’s clothing

How to Block Frostwire, utorrent and Other P2P Protocols

By Art Reisman, CTO, http://www.netequalizer.com


Disclaimer: It is considered controversial and by some definitions illegal for a US-based ISP to use deep packet inspection on the public Internet.

At APconnections, we subscribe to the philosophy that there is more to be gained by explaining your technology secrets than by obfuscating them with marketing babble. Read on to learn how I hunt down aggressive P2P traffic.

In order to create a successful tool for blocking a P2P application, you must first figure out how to identify P2P traffic. I do this by looking at the output data dump from a P2P session.

To see what is inside the data packets I use a custom sniffer that we developed. Then to create a traffic load, I use a basic Windows computer loaded up with the latest utorrent client.

Editor’s Note: The last time I used a P2P engine on a Windows computer, I ended up reloading my Windows OS once a week. Downloading random P2P files is sure to bring in the latest viruses, and unimaginable filth will populate your computer.

The custom sniffer is built into our NetGladiator device, and it does several things:

1) It detects and dumps the data inside packets as they cross the wire to a file that I can look at later.

2) It maps non-printable ASCII characters to printable ones. This way, when I dump the contents of an IP packet to a file, I don’t get all kinds of special characters embedded in it. Since P2P payloads are encoded music and video files, you can’t view the data without this filter; if you try, you’ll get all kinds of garbled scrolling on the screen when you open the raw dump in a text editor.
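The mapping step in item 2 is simple to sketch. The real sniffer lives inside the NetGladiator; this little Python stand-in just shows the idea of replacing every non-printable byte with an “x”:

```python
def printable_dump(packet):
    """Mirror the sniffer's filter: keep printable ASCII bytes,
    replace everything else with 'x' so dumps are safe to view."""
    return "".join(chr(b) if 32 <= b < 127 else "x" for b in packet)

# Two unprintable bytes around a readable BitTorrent-style token:
assert printable_dump(b"\x01\x02d1:ad2:id20:\xff") == "xxd1:ad2:id20:x"
assert printable_dump(b"ping") == "ping"
```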

So what does the raw data output dump of a P2P client look like?

Here is a snippet of some of the utorrent raw data I was looking at just this morning. The sniffer has converted the non-printable characters to “x”.
You can clearly see some repeating data patterns forming below. That is the key to identifying anything with Layer 7. Sometimes it is obvious; sometimes you really have to work to find a pattern.

Packet 1 exx_0ixx`12fb*!s[`|#l0fwxkf)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:ka 31:v4:utk21:y1:qe
Packet 2 exx_0jxx`1kmb*!su,fsl0’_xk<)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:xv4^1:v4:utk21:y1:qe
Packet 3 exx_0kxx`1exb*!sz{)8l0|!xkvid1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:09hd1:v4:utk21:y1:qe
Packet 4 exx_0lxx`19-b*!sq%^:l0tpxk-ld1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:=x{j1:v4:utk21:y1:qe

The next step is to develop a Layer 7 regular expression to match the patterns in the data. In the output you’ll notice the string “exx” appears in each line, and that is what you look for; a repeating pattern is a good place to start.

The regular expression I decided to use looks something like:


This translates to: match any string starting with “exx”, followed by any character, followed by “0”, followed by “xx”, followed by any sequence of characters ending with “qe”.
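A pattern like this is easy to sanity-check against captured samples. The expression below, `exx.0.xx.*qe`, is my reconstruction from the description and the sample packets (the packets show one arbitrary byte between the “0” and the “xx”), so treat it as an approximation rather than the production rule:

```python
import re

# Reconstructed from the description and sample packets above; the
# packets show one arbitrary byte between the "0" and the "xx".
UTORRENT_RE = re.compile(r"exx.0.xx.*qe$")

samples = [
    "exx_0ixxJUNKd1:ad2:id20:JUNK1:q4:ping1:t4:ka31:v4:utk21:y1:qe",
    "exx_0jxxJUNKd1:ad2:id20:JUNK1:q4:ping1:t4:xv41:v4:utk21:y1:qe",
]
assert all(UTORRENT_RE.match(s) for s in samples)
assert not UTORRENT_RE.match("GET /index.html HTTP/1.1")
```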

Note: When I tested this regular expression, it turned out to catch only a fraction of the utorrent traffic, but it is a start. What you don’t want to do is make your regular expression so simple that you get false positives. A Layer 7 product that creates a high rate of false positives is pretty useless.

The next thing I do with my new regular expression is test it for accuracy of target detection and for false positives.

Accuracy of detection is measured by clearing your test network of everything except the P2P target you are trying to catch, then running your Layer 7 device with the new regular expression and seeing how well it does.

Below is an example from my NetGladiator in a new sniffer mode. In this mode, Layer 7 detection is on, and I can analyze the detection accuracy. In the output below, the sniffer puts a tag on every connection that matches my utorrent regular expression; in this case, the tag is the word “dad” at the end of the row. Notice how every connection is tagged. This means I am getting a 100 percent hit rate for utorrent. Obviously I doctored the output for this post :)

Index SRCP DSTP Wavg Avg IP1 IP2 Ptcl Port Pool TOS
0 0 0 17 53 — 2 99 dad
1 0 0 16 48 — 2 99 dad
2 0 0 16 48 — 2 99 dad
3 0 0 18 52 — 2 99 dad
4 0 0 12 24 — 2 99 dad
5 0 0 18 52 — 2 99 dad
6 0 0 10 0 — 2 99 dad
7 0 0 88 732 — 2 99 dad
8 0 0 12 0 — 2 99 dad
9 0 0 12 24 — 2 99 dad
10 0 0 16 48 — 2 99 dad
11 0 0 11 16 — 2 99 dad
12 0 0 17 52 — 2 99 dad
13 0 0 27 54 — 2 99 dad
14 0 0 10 0 — 2 99 dad
15 0 0 14 28 — 2 99 dad
16 0 0 14 32 — 2 99 dad
17 0 0 10 0 — 2 99 dad
18 0 0 24 33 — 2 99 dad
19 0 0 17 53 — 2 99 dad

A bit more on reading this sniffer output…

Notice columns 4 and 5 (Wavg and Avg), which indicate data transfer rates in bytes per second. These columns contain numbers that are less than 100 bytes per second – very small transfers. This is mostly because as soon as a connection is identified as utorrent, the NetGladiator drops all future packets on that connection, so it never really gets going. One thing I did notice is that the modern utorrent protocol hops around very quickly from connection to connection; it attempts not to show its cards. Why do I mention this? Because in Layer 7 shaping of P2P, speed of detection is everything. If you wait a few milliseconds too long to analyze and detect a torrent, it is already too late, because the torrent has transferred enough data to keep itself going. It’s just a conjecture, but I suspect this is one of the main reasons why utorrent is so popular: by hopping from source to source, it is very hard for an ISP to block without the latest equipment. I recently wrote a companion article regarding the speed of the technology behind a good Layer 7 device.

The last part of testing a regular expression involves looking for false positives. For this we use a commercial grade simulator. Our simulator uses a series of pre-programmed web crawlers that visit tens of thousands of web pages an hour at our test facility. We then take our layer 7 device with our new regular expression and make sure that none of the web crawlers accidentally get blocked while reading thousands of web pages. If this test passes we are good to go with our new regular expression.
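In miniature, the false-positive screen amounts to running the candidate expression over a corpus of known-benign payloads and requiring zero hits. The corpus below is a tiny, hypothetical stand-in for the crawler traffic:

```python
import re

UTORRENT_RE = re.compile(r"exx.0.xx.*qe$")  # reconstructed pattern (an assumption)

# Hypothetical stand-in for the crawler corpus: ordinary web payloads
# that the expression must never flag.
benign = [
    "GET /news/story.html HTTP/1.1",
    "<html><head><title>Example</title></head><body>hello</body></html>",
    '{"user": "alice", "action": "login"}',
]
false_positives = [s for s in benign if UTORRENT_RE.search(s)]
assert false_positives == []   # zero hits on the benign corpus
```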

Editor’s Note: Our primary bandwidth shaping product manages P2P without using deep packet inspection.
The Layer 7 techniques described above can be run on our NetGladiator Intrusion Prevention System. We also advise that public ISPs check their country’s regulations before deploying a deep packet inspection device on a public network.

NetGladiator: A Layer 7 Shaper in Sheep’s Clothing

When explaining our NetGladiator technology the other day, a customer was very intrigued with our Layer 7 engine. He likened it to a caged tiger under the hood, gobbling up and spitting out data packets with the speed and cunning of the world’s most powerful feline.

He was surprised to see this level of capability in equipment offered at our prices, and impressed with the speed attained for the price point of our solution (more on this later in the article).

In order to create a rock-solid IPS (Intrusion Prevention System), capable of handling network speeds of up to 1 gigabit with standard Intel hardware, we had to devise a technology breakthrough in Layer 7 processing. Existing technologies were just too slow to keep up with network speed expectations.

In order to support higher speeds, most vendors use semi-custom chip sets based on ASICs (application-specific integrated circuits). This works well but is very expensive to manufacture.

How do typical Layer 7 engines work?

Our IPS story starts with our old Layer 7 engine, which was sitting idle on our NetEqualizer product. We had shelved it when we moved away from Layer 7 shaping in favor of Equalizing technology, which is a superior solution for traffic shaping. However, when we decided to move ahead with our new IPS this year, we realized we needed a fast analysis engine, one that could look at all data packets in real time. Our existing Layer 7 shaper only analyzed headers, because that was adequate for its previous mission (detecting P2P streams). For our new IPS system, we needed a solution that could do a deep dive into the data packets. The IPS mission requires that you look at all the data – every packet crossing into a customer network.

The first step was to revamp the older engine and configure it to look at every packet. The results were disappointing.  With the load of analyzing every packet, we could not get throughput any higher than about 20 megabits, far short of our goal of 1 gigabit.

What do we do differently with our updated Layer 7 engine?

Necessity is the mother of invention, and so we invented a better Layer 7 engine.

The key was to take advantage of multiple processors for analysis of the data without delaying data packets. The old technology would intercept a data packet on a data link, hold it, analyze it for P2P patterns, and then send it on. With this method, as packets come faster and faster, you end up not having enough CPU time to do the analysis and still forward the packet without adding latency. Many customers find this out the hard way when they upgrade from older, slower T1 lines: typical analysis engines on affordable routers and firewalls often just can’t keep up with line speeds.

What we did was take advantage of a facility in the Linux kernel called skb_clone(). This allows you to make a lightweight copy of a data packet without the overhead of copying its payload. More importantly, it allows us to send the packet on without delay and do the analysis within a millisecond (not quite line speed, but fast enough to stop an intruder).

We then combined the cloning with kernel threading in the Linux kernel. This is different from the threading that large multi-threaded HTTP servers use because it happens at the kernel level, so we do not have to copy the packet up to some higher-level server process for analysis. Copying a packet up for analysis is a huge bottleneck and very time-consuming.

What were our Results?

With kernel threading, cloning, and a high-end Intel SMP processor, we can have 16 CPUs doing packet analysis at the same time, and we have now attained speeds close to our 1 gigabit target.

When we developed our bandwidth shaping technology in 2003/2004, we leveraged technology innovation to create a superior bandwidth control appliance (read our NetEqualizer Story).  With the NetGladiator IPS, we have once again leveraged technology innovation to enable us to provide an intrusion prevention system at a very compelling price (register to get our price list), hence our customer’s remark about great speed for the price.

What other benefits does our low cost, high-speed layer 7 engine allow for? Is it just for IPS?

The sky is the limit here. Any type of pattern you want to look at in real time can now be detected at one-tenth the cost of the ASIC class of shapers. Although we are not fans of unauthorized intrusion into the private data of the public Internet (we support Net Neutrality), there are hundreds of other uses which can be configured with our engine.

Some that we might consider in the future include:

– Spam filtering
– Unwanted protocols in your business
– Content blocking
– Keyword spotting

If you are interested in testing and experimenting in any of these areas with our raw technology, feel free to contact us at ips@netgladiator.net.
