10 Web Application Security Tools You Can’t Do Without


By Zack Sanders – Director of Security – APconnections

Since initiating our hacking challenge last year, we’ve helped multiple organizations fix security flaws in their web application infrastructure. Proper web application security testing is always a mix of automated testing and manual testing. If you just run automated tests and don’t have the knowledge to interpret the results, the number of false positives thrown at you will leave you with little value. If you don’t know the ins and outs of common vulnerabilities, manual testing alone will get you nowhere. With the right mix, you can create a baseline analysis from the automated tests that will help determine which areas of the application should be explored further manually.

Here are some of the tools I use the most when assessing a new web application along with brief descriptions*:

1) Metasploit – http://www.metasploit.com/ – Metasploit is an entire framework for penetration testing and security analysis. The tools are all open source and the community behind the software is outstanding.

2) DirBuster – http://sourceforge.net/projects/dirbuster/ – DirBuster is a directory brute force tool that allows you to create a tree view of a web application’s file system.

3) Nessus – http://www.tenable.com/products/nessus – Nessus is a great tool for identifying server-level vulnerabilities.

4) John the Ripper – http://www.openwall.com/john/ – JTR is a password cracker tool.

5) Havij – http://www.itsecteam.com/products/havij-v116-advanced-sql-injection/ – Havij is an advanced SQL injection tool that provides a GUI for conducting injection tests.

6) Charles Web Proxy – http://www.charlesproxy.com/ – Charles is an awesome tool that allows you to modify requests and responses in web applications.

7) Tamper Data Firefox Add-On – https://addons.mozilla.org/en-us/firefox/addon/tamper-data/ – Like Charles, this tool also allows you to modify requests.

8) Skipfish – http://code.google.com/p/skipfish/ – Skipfish is a web application security vulnerability scanner that will scan an entire website for issues. It produces quite a few false positives, but it also surfaces legitimate issues.

9) Firebug – https://getfirebug.com/ – This is a debugging tool for web developers, but it is also useful for security professionals because it lets you easily see what is happening behind the scenes.

10) Websecurify – http://www.websecurify.com/ – Websecurify is an entire security environment meant for assisting in the manual testing phase.

These are only some of the tools available to security professionals who are testing web applications. There are many more. But they aren’t just available to the good guys. Bad guys have access to them too and are using them in attacks all the time. Let us know if we can run a security assessment for your organization using the same tools hackers do. The investment will be well worth it.

Contact us today at: ips@apconnections.net

*Use these tools at your own risk and only on websites you have permission to test.

Getting the Keys to the Kingdom: SQL Injection


By Zack Sanders

Director of Security – www.netgladiator.net

SQL injection is one of the most well-known vulnerabilities in web application security. Because so many web sites today are database driven, an SQL injection vulnerability puts the entire application and its users at risk. The purpose of this article is to explain what SQL injection is, show how easily it can be exploited, and discuss what steps you can take to make sure your site is secure from this devastating attack vector.

What is SQL injection?

SQL injection is performed by including portions of SQL statements in a web form entry field in an attempt to get the web site to pass a newly formed, malicious SQL command to the database. The vulnerability arises when user input is either incorrectly filtered or not strongly typed, and is unexpectedly executed. SQL commands are thus injected from the web form into the application’s database queries, either to change the database content or to dump database information such as credit card numbers or passwords to the attacker. An average website can experience hundreds of SQL injection attempts per hour from automated bots scouring the Internet.
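To make the mechanics concrete, here is a minimal sketch in Python of how a crafted form value changes the meaning of a query built by string concatenation. The table and column names are hypothetical; only the shape of the problem matters:

    # Hypothetical login query built by naive string concatenation.
    username = "admin"
    password = "' OR '1'='1"  # attacker-supplied form value

    query = ("SELECT * FROM users WHERE username = '%s' AND password = '%s'"
             % (username, password))

    print(query)
    # SELECT * FROM users WHERE username = 'admin' AND password = '' OR '1'='1'
    # The injected OR '1'='1' clause makes the WHERE condition true for every
    # row, so the password check is bypassed without knowing any password.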

How do attackers discover it?

When searching for SQL injection, an attacker is looking for an application that behaves differently based on varying inputs to a form. For example, a vulnerable web form might accept expected content just fine, but if SQL characters are entered, a system-level SQL error is generated saying something like, “There is an error in your MySQL syntax.” This tells the attacker that the injected SQL is being interpreted, even though it is incorrect, and that the application is vulnerable.

How is a site that is vulnerable exploited?

Once an application is deemed vulnerable, an attacker will try using an automated injection tool to glean information about the database. Structural data such as the information schema, the version of SQL being run, and table names are all trivial to gather. Once the structure is defined and understood, custom SQL statements can be written to download data from interesting tables like “users”, “customers”, “payments”, etc. Here is a screenshot from a recent client of mine whose site was vulnerable. These are just a few of the columns from the “users” table.

* Names, email addresses, partial passwords, usernames, and addresses are blocked out.

What happens next?

With this level of access, the sky is the limit. Here are a few things an attacker might do:

1) Take all of the hashed passwords and run them against a rainbow table for matches (a short sketch after this list illustrates why this works). This is why long passwords are so important. Even though hashing is a one-way function, the hashes for short and common passwords are all known and can easily be looked up. An attacker might then use the passwords, along with email addresses, to compromise other accounts owned by those users.

2) Change the super administrator flag for a user they know the password for, and log in to gain further access. A common goal is to get to a file upload interface so that a script can be uploaded to the server that would give an attacker remote control.

3) Drop the entire database purely to wreak havoc.
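To illustrate the point in item 1, here is a small hedged sketch in Python showing why a short, common password offers little protection once its hash is dumped. It uses unsalted MD5 and a toy lookup table purely for demonstration; real precomputed tables hold billions of entries:

    import hashlib

    # A tiny stand-in for a rainbow table / precomputed lookup: hashes of a
    # few common passwords.
    common_passwords = ["password", "123456", "letmein", "qwerty"]
    lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in common_passwords}

    # A hash dumped from a compromised "users" table (hypothetical example).
    stolen_hash = hashlib.md5(b"letmein").hexdigest()

    # Hashing is one-way, but the attacker does not need to reverse it:
    # a simple dictionary lookup recovers the original password.
    print(lookup.get(stolen_hash, "not found"))  # -> letmein

Long, uncommon passwords (and salted hashes) defeat this kind of lookup because the precomputed tables cannot cover them.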

How do you protect your site from SQL injection?

ALL GET and POST requests involving the database need to be filtered and analyzed before being run. This includes actions like:

1) Stripping away SQL characters. In PHP applications backed by MySQL, this is what the mysql_real_escape_string() function does; parameterized queries are stronger still (see the sketch after this list).

2) Analyze for expected input. Should the entry only be a number 1-50? Check to make sure it is a positive number, non-zero, and no more than two characters.

3) Have strong database permissions. Different database users should be created with only the permissions needed for their function. For example, don’t use the root MySQL user to connect your web application to your database.

4) Hire an expert to assess your web application. The cost of performing this type of health check is miniscule compared to the cost of being exploited.

5) Install an intrusion prevention system like NetGladiator that looks for SQL characters in URLs.
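Here is a minimal sketch of points 1 and 2 above, written in Python against the standard library’s sqlite3 module as a stand-in for whatever database layer you actually use (the table name and the 1-50 range are illustrative assumptions). The same idea carries over to prepared statements in PHP’s mysqli or PDO:

    import sqlite3

    def get_user(conn, user_id_raw):
        # Point 2: validate against the expected input. Here we expect a
        # positive integer between 1 and 50.
        if not user_id_raw.isdigit():
            raise ValueError("user id must be numeric")
        user_id = int(user_id_raw)
        if not 1 <= user_id <= 50:
            raise ValueError("user id out of range")

        # Point 1: never concatenate input into SQL. A parameterized query
        # keeps data and SQL code separate, so escaping is handled for you.
        cur = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
        return cur.fetchone()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (7, 'alice')")
    print(get_user(conn, "7"))       # ('alice',)
    # get_user(conn, "7 OR 1=1")     # rejected by validation, never reaches SQL

The parameterized query keeps data and code separate, so even input that slips past validation cannot change the structure of the statement.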

The keys to the kingdom

Hopefully you can now see the danger of SQL injection. The level of control and access, coupled with the ease of discovery and exploitation, makes it extremely problematic. The good news is that putting basic protections in place is relatively easy.

Contact us today if you want help securing your web application!

Special Glasses Needed to Spot Network Security Holes


By Art Reisman

CTO – http://www.netequalizer.com

Would you leave for vacation with your garage door wide open, or walk off the edge of a cliff looking for a lost dog? Whether it is a bike lock or that little beep your car makes when you hit the button on your remote, you rely on physical confirmation for safety and security every day.

Because network security holes do not trigger any of our human senses, most businesses run blind with respect to vulnerabilities that can be glaringly obvious to a hacker.

Have you ever seen an owl swoop down in the darkness and grab a rabbit? I have, but only once, and that was in the dim glow of a field illuminated by some nearby stadium lights. Owls take hundreds of rodents every night under the cover of darkness; they have excellent night vision and most rodents don’t.

To a hacker, a security hole can be just as obvious as that rabbit. You might feel secure under the cover of darkness, but what is invisible to your senses is quite obvious to a hacker. They have ways of illuminating your security holes, and they can choose to exploit them if deemed juicy enough. For some entry points, a hacker might have to look a little bit harder, like lifting a door mat to reveal a key. Nevertheless, they will see the key, and the problem is you won’t even know the key is under the mat.

Fancy automated tools that report risk are nice, but the only way to expose your actual network security holes is to hire somebody with night-vision goggles who can see the holes. Most tools that do audits are not good enough by themselves; they sort of bumble around in the dark looking and feeling for things, and they really do not see them the way a hacker does.

I’d strongly urge any company that is serious about updating their security to employ a white knight hacker before any other investment outlay. For the same reason that automated systems cannot replace humans, even though billions have been spent on them over the years, you should not start your security defense with an automated tool. It must start with a human hell bent on breaking into your business and then showing you the holes. It never ceases to amaze me the types of holes our white knight hackers find. There is nothing better at spotting security holes than a guy with special glasses.

Is Your Data Really Secure?


By Zack Sanders

Most businesses, if asked, would tell you they care about the security of their customers. The controversial part of security comes to a head when you ask the question in a different way: does your business care enough about security to make an investment in protecting customer data? There are a few companies that proactively invest in security for security’s sake, but they are in the minority.

The two key driving factors that determine a business’s commitment to security investment are:

1) Government or Industry Standard Compliance – This is what drives businesses like your credit card company, your local bank, and your healthcare provider to care about security. In order to operate, they are forced to care. Standards like HIPAA and PCI require them to go through security audits and checkups. Note: just because they invest in meeting a compliance standard, it may not translate to secure data, as we will point out below.

2) A Breach Occurs – Nothing will change an organization’s attitude toward security like a massive, embarrassing security breach. Sadly, it usually takes something like this happening to drive home the point that security is important for everyone.

The fact is, most businesses are running on very thin margins, and other operating costs come before security spending. Human nature is such that we prioritize by what is in front of us now. What we don’t know can’t hurt us. It is easy for a business to assume that its minimum firewall configuration is good enough for now. Unfortunately, they cannot easily see the holes in their firewall. Most firewall security can easily be breached through advertised public interfaces.

How do we know? Because we often do complimentary spot checks on company web servers. It is a rare case when we have not been able to break in, gaining access to all customer records. Even though our sample set is small, our breach rate is so high that we can reliably extrapolate that most companies can easily be broken into.

As we alluded to above, even some of the companies that follow a standard are still vulnerable. Many large corporations just go through the motions to comply with a standard, so they typically seek out “trusted,” large professional services firms to do their audits. Often, these firms will conduct boilerplate assessments where auditors run down a checklist with the sole goal of certifying the application or organization as compliant.

Hiring a huge firm to do an audit makes it much easier to deflect blame in the case of an incident. The employee responsible for hiring the audit firm can say, “Well, I hired XXX – what more could I have done?” If they had hired a small firm to do the audit, and a breach occurred, their judgement and job might come into question – however unfair that might be.

As a professional web application security analyst who has personally handled the aftermath of many serious security breaches, I would advocate that if you take your security seriously, you start with an assessment challenge using a firm that will work to expose your real-world vulnerabilities.

P2P Protocol Blocking Now Offered with NetGladiator Intrusion Prevention


A few months ago we introduced our NetGladiator Intrusion Prevention System (IPS) device. To date, it has thwarted tens of thousands of robotic cyber attacks and counting. Success breeds success, and our users wanted more.

When our savvy customers realized the power, speed, and low price point of our underlying layer 7 engine, we started getting requests for additional features, such as: “Can you also block Peer To Peer and other protocols that cannot be stopped by our standard Web Filters and Firewalls?” It was natural to extend our IPS device to address this space; hence, today we are announcing the next-generation NetGladiator. We now offer a module that will allow you to block and monitor the world’s top 10 P2P protocols (which account for 99 percent of all P2P traffic). We also back our technology with our unique promise to implement a custom protocol blocking rule with the purchase of any system at no extra charge. For example, if you have a specific protocol you need to monitor and just can’t uncover it with your WebSense or Firewall filter, we will custom deliver a NetGladiator system that can track and/or block your unique protocol, in addition to our standard P2P blocking options.

Below is a sample Excel live report integrated with the NetGladiator in monitor mode. On the screen snapshot below, you will notice that we have uncovered a batch of uTorrent and FrostWire P2P traffic.

Please feel free to call 303-997-1300 or email our NetGladiator sales engineering team with any additional questions at ips@apconnections.net.

Related Articles

NetGladiator: A Layer 7 Shaper in Sheep’s Clothing

How to Block FrostWire, uTorrent and Other P2P Protocols


By Art Reisman, CTO, http://www.netequalizer.com


Disclaimer: It is considered controversial and by some definitions illegal for a US-based ISP to use deep packet inspection on the public Internet.

At APconnections, we subscribe to the philosophy that there is more to be gained by explaining your technology secrets than by obfuscating them with marketing babble. Read on to learn how I hunt down aggressive P2P traffic.

In order to create a successful tool for blocking a P2P application, you must first figure out how to identify P2P traffic. I do this by looking at the output data dump from a P2P session.

To see what is inside the data packets, I use a custom sniffer that we developed. Then, to create a traffic load, I use a basic Windows computer loaded up with the latest uTorrent client.

Editor’s Note: The last time I used a P2P engine on a Windows computer, I ended up reloading my Windows OS once a week. Downloading random P2P files is sure to bring in the latest viruses, and unimaginable filth will populate your computer.

The custom sniffer is built into our NetGladiator device, and it does several things:

1) It detects and dumps the data inside packets as they cross the wire to a file that I can look at later.

2) It maps non-printable ASCII characters to printable ASCII characters. In this way, when I dump the contents of an IP packet to a file, I don’t get all kinds of special characters embedded in the file. Since P2P payloads are mostly encoded music and video files, you can’t view the data without this filter. If you try, you’ll get all kinds of garbled scrolling on the screen when you look at the raw data with a text editor.
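The mapping step in item 2 is easy to reproduce. Below is a rough Python sketch of the idea (not our actual sniffer code) that replaces any byte outside the printable ASCII range with an “x” before the payload is written to the dump file:

    def printable_dump(payload: bytes) -> str:
        # Replace anything outside printable ASCII (space through '~') with 'x'
        # so the packet payload can be inspected safely in a text editor.
        return "".join(chr(b) if 32 <= b <= 126 else "x" for b in payload)

    # Hypothetical packet payload: a few readable bytes mixed with binary data.
    sample = b"\x01\x02d1:ad2:id20:\xff\xfe1:q4:ping\x00"
    print(printable_dump(sample))  # -> xxd1:ad2:id20:xx1:q4:pingx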

So what does the raw data output dump of a P2P client look like?

Here is a snippet of some of the uTorrent raw data I was looking at just this morning. The sniffer has converted the non-printable characters to “x”.
You can clearly see some repeating data patterns forming below. That is the key to identifying anything with layer 7. Sometimes it is obvious, while sometimes you really have to work to find a pattern.

Packet 1 exx_0ixx`12fb*!s[`|#l0fwxkf)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:ka 31:v4:utk21:y1:qe
Packet 2 exx_0jxx`1kmb*!su,fsl0’_xk<)d1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:xv4^1:v4:utk21:y1:qe
Packet 3 exx_0kxx`1exb*!sz{)8l0|!xkvid1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:09hd1:v4:utk21:y1:qe
Packet 4 exx_0lxx`19-b*!sq%^:l0tpxk-ld1:ad2:id20:c;&h45h”2x#5wg;|l{j{e1:q4:ping1:t4:=x{j1:v4:utk21:y1:qe

The next step is to develop a layer 7 regular expression to identify the patterns in the data. In the output you’ll notice the string “exx” appears in each line, and that is what you look for. A repeating pattern is a good place to start.

The regular expression I decided to use looks something like:

exx.0.xx.*qe

This translates to: match any string starting with “exx”, followed by any single character, followed by “0”, followed by any single character, followed by “xx”, followed by any sequence of characters ending with “qe”.
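A quick way to sanity-check that expression before loading it into a layer 7 device is to run it against the captured payloads. The snippet below is illustrative Python, not the NetGladiator’s matching engine, and the sample strings are abbreviated from the dump above:

    import re

    # The candidate expression from above.
    pattern = re.compile(r"exx.0.xx.*qe")

    # Abbreviated versions of the payload lines shown above (middle bytes
    # elided), plus one non-P2P control string.
    samples = [
        "exx_0ixx`12fb*!s...1:q4:ping1:t4:ka 31:v4:utk21:y1:qe",
        "exx_0jxx`1kmb*!s...1:q4:ping1:t4:xv4^1:v4:utk21:y1:qe",
        "index.html GET request from a normal web browser",
    ]

    for s in samples:
        print(bool(pattern.search(s)), s[:16])
    # True exx_0ixx`12fb*!s
    # True exx_0jxx`1kmb*!s
    # False index.html GET r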

Note: When I tested this regular expression, it turned out to catch only a fraction of the uTorrent traffic, but it is a start. What you don’t want to do is make your regular expression so simple that you get false positives. A layer 7 product that creates a high degree of false positives is pretty useless.

The next thing I do with my new regular expression is test it for accuracy of target detection and for false positives.

Accuracy of detection is tested by clearing your test network of everything except the P2P target you are trying to catch, then running your layer 7 device with your new regular expression and seeing how well it does.

Below is an example from my NetGladiator in a new sniffer mode. In this mode I have the layer 7 detection on, and I can analyze the detection accuracy. In the output below, the sniffer puts a tag on every connection that matches my uTorrent regular expression. In this case, my tag is indicated by the word “dad” at the end of the row. Notice how every connection is tagged. This means I am getting a 100 percent hit rate for uTorrent. Obviously I doctored the output for this post :)

Index SRCP DSTP Wavg Avg IP1 IP2 Ptcl Port Pool TOS
0 0 0 17 53 255.255.255.255 95.85.150.34 — 2 99 dad
1 0 0 16 48 255.255.255.255 95.82.250.60 — 2 99 dad
2 0 0 16 48 255.255.255.255 95.147.1.179 — 2 99 dad
3 0 0 18 52 255.255.255.255 95.252.60.94 — 2 99 dad
4 0 0 12 24 255.255.255.255 201.250.236.194 — 2 99 dad
5 0 0 18 52 255.255.255.255 2.3.200.165 — 2 99 dad
6 0 0 10 0 255.255.255.255 99.251.180.164 — 2 99 dad
7 0 0 88 732 255.255.255.255 95.146.136.13 — 2 99 dad
8 0 0 12 0 255.255.255.255 189.202.6.133 — 2 99 dad
9 0 0 12 24 255.255.255.255 79.180.76.172 — 2 99 dad
10 0 0 16 48 255.255.255.255 95.96.179.38 — 2 99 dad
11 0 0 11 16 255.255.255.255 189.111.5.238 — 2 99 dad
12 0 0 17 52 255.255.255.255 201.160.220.251 — 2 99 dad
13 0 0 27 54 255.255.255.255 95.73.104.105 — 2 99 dad
14 0 0 10 0 255.255.255.255 95.83.176.3 — 2 99 dad
15 0 0 14 28 255.255.255.255 123.193.132.219 — 2 99 dad
16 0 0 14 32 255.255.255.255 188.191.192.157 — 2 99 dad
17 0 0 10 0 255.255.255.255 95.83.132.169 — 2 99 dad
18 0 0 24 33 255.255.255.255 99.244.128.223 — 2 99 dad
19 0 0 17 53 255.255.255.255 97.90.124.181 — 2 99 dad

A bit more on reading this sniffer output…

Notice columns 4 and 5, which indicate data transfer rates in bytes per second. These columns contain numbers that are less than 100 bytes per second – very small data transfers. This is mostly because as soon as a connection is identified as uTorrent, the NetGladiator drops all future packets on the connection and it never really gets going. One thing I did notice is that the modern uTorrent protocol hops around very quickly from connection to connection. It attempts not to show its cards. Why do I mention this? Because in layer 7 shaping of P2P, speed of detection is everything. If you wait a few milliseconds too long to analyze and detect a torrent, it is already too late, because the torrent has transferred enough data to keep it going. It’s just a conjecture, but I suspect this is one of the main reasons why uTorrent is so popular. By hopping from source to source, it is very hard for an ISP to block without the latest equipment. I recently wrote a companion article regarding the speed of the technology behind a good layer 7 device.

The last part of testing a regular expression involves looking for false positives. For this we use a commercial grade simulator. Our simulator uses a series of pre-programmed web crawlers that visit tens of thousands of web pages an hour at our test facility. We then take our layer 7 device with our new regular expression and make sure that none of the web crawlers accidentally get blocked while reading thousands of web pages. If this test passes we are good to go with our new regular expression.

Editor’s Note: Our primary bandwidth shaping product manages P2P without using deep packet inspection. The layer 7 techniques described above can be run on our NetGladiator Intrusion Prevention System. We also advise that public ISPs check their country’s regulations before deploying a deep packet inspection device on a public network.

NetGladiator: A Layer 7 Shaper in Sheep’s Clothing


When we were explaining our NetGladiator technology the other day, a customer was very intrigued with our Layer 7 engine. He likened it to a caged tiger under the hood, gobbling up and spitting out data packets with the speed and cunning of the world’s most powerful feline.

He was surprised to see this level of capability in equipment offered at our prices, and he was impressed with the speed attained for the price point of our solution (more on this later in the article).

In order to create a rock-solid IPS (Intrusion Prevention System), capable of handling network speeds of up to 1 gigabit with standard Intel hardware, we had to devise a technology breakthrough in Layer 7 processing. Existing technologies were just too slow to keep up with network speed expectations.

In order to support higher speeds, most vendors use semi-custom chip sets built on ASICs (application-specific integrated circuits). This works well but is very expensive to manufacture.

How do typical Layer 7 engines work?

Our IPS story starts with our old Layer 7 engine. It was sitting idle on our NetEqualizer product. We had shelved it when we got away from Layer 7 shaping in favor of Equalizing technology, which is a superior solution for traffic shaping. However, when we decided to move ahead with our new IPS this year, we realized we needed a fast analysis engine, one that could look at all data packets in real time. Our existing Layer 7 shaper only analyzed headers because that was adequate for its previous mission (detecting P2P streams). For our new IPS system, we needed a solution that could do a deep dive into the data packets. The IPS mission requires that you look at all the data – every packet crossing into a customer network.

The first step was to revamp the older engine and configure it to look at every packet. The results were disappointing.  With the load of analyzing every packet, we could not get throughput any higher than about 20 megabits, far short of our goal of 1 gigabit.

What do we do differently with our updated Layer 7 engine?

Necessity is the mother of invention, and so we invented a better Layer 7 engine.

The key was to take advantage of multiple processors for analysis of data without delaying data packets. The way the old technology worked was that it would intercept a data packet on a data link, hold it, analyze it for P2P patterns, and then send it on. With this method, as packets come faster and faster, you end up not having enough CPU time to do the analysis and still send the packet on without adding latency. Many customers find this out the hard way when they upgrade their data speeds from older, slower T1 technology. Typical analysis engines on affordable routers and firewalls often just can’t keep up with line speeds.

What we did was take advantage of a utility in the Linux kernel called “clone skb”. This allows you to make a lightweight copy of a packet’s metadata without the overhead of copying the packet data itself. More importantly, it allows us to send the packet on without delay and do the analysis within a millisecond (not quite line speed, but fast enough to stop an intruder).

We then combined the cloning with a newer technology in the Linux kernel called kernel threading. This is different from the technology that large multi-threaded HTTP servers use because it happens at the kernel level, and we do not have to copy the packet up to some higher-level server for analysis. Copying a packet for analysis is a huge bottleneck and very time-consuming.

What were our Results?

With kernel threading, cloning, and a high-end Intel SMP processor, we can make use of 16 CPUs doing packet analysis at the same time, and we have now attained speeds close to our 1 gigabit target.

When we developed our bandwidth shaping technology in 2003/2004, we leveraged technology innovation to create a superior bandwidth control appliance (read our NetEqualizer Story).  With the NetGladiator IPS, we have once again leveraged technology innovation to enable us to provide an intrusion prevention system at a very compelling price (register to get our price list), hence our customer’s remark about great speed for the price.

What other benefits does our low cost, high-speed layer 7 engine allow for? Is it just for IPS?

The sky is the limit here. Any type of pattern you want to look for in real time can now be matched at one tenth (1/10th) the cost of the ASIC class of shapers. Although we are not a fan of unauthorized intrusion into private data on the public Internet (we support Net Neutrality), there are hundreds of other uses which can be configured with our engine.

Some that we might consider in the future include:

– Spam filtering
– Unwanted protocols in your business
– Content blocking
– Keyword spotting

If you are interested in testing and experimenting in any of these areas with our raw technology, feel free to contact us at ips@netgladiator.net.

Four Reasons Why Companies Remain Vulnerable to Cyber Attacks


Over the past year, since the release of our IPS product, we have spent many hours talking to resellers and businesses regarding Internet security. Below are our observations about security investment, and more importantly, non-investment.

1) By far the number one reason why companies are vulnerable is procrastination.

Seeing is believing, and many companies have never been hacked or compromised.

Some clarification here: most attacks do not end with something being destroyed or an obvious trail of data being lifted. This does not mean they do not happen; it’s just that in many cases there is no immediate ramification, and hence it is business as usual.

Companies are run by people, and most people are reactive and somewhat single-threaded; they can only address a few problems at a time. Without a compelling, obvious problem, security gets pushed down the list. The exception to the procrastination rule would be verticals such as financial institutions, where security audits are mandatory (more on audits in a bit). Most companies, although aware of risk factors, are reluctant to spend on a problem that has never happened. In their defense, a company that reacts to all the security FUD might find itself hamstrung and out of business. Sometimes, to be profitable, you have to live with a little risk.

2) Existing security tools are ignored.

Many security suites are just too broad to be relevant. Information overload can lead to a false sense of coverage.

The best analogy I can give is the tornado warning system used by the National Weather Service. Their warning system, although well-intended, has been so unspecific that after a while people ignore the warnings. The same holds true with security tools. In order to impress and outdo one another, security tools have become bloated with quantity, not quality, and this overload of data can become a glut of frivolous information. It would be like a stock analyst predicting every possible outcome and expecting you to invest on that advice. Without specific, targeted information, your security solution can be a distraction.

3) Security audits are mandated formalities.

In some instances, a security audit is treated as a bureaucratic mandate. When security audits are mandated as a standard, the process of the audit can become the objective. The soldiers carrying out the process will view the completed checklist as the desired result and thus may not actually counter existing threats. It’s not that the audit does not have value, but the audit itself becomes a minimum objective. And most likely the audit is a broad cookie-cutter approach which mostly serves to protect the company or individuals from blame.

4) It may just not be worth the investment.

The cost of getting hacked may be less than the ongoing fees and consumption of time required to maintain a security solution. On a mini-scale, I followed this advice on my home laptop running Windows. It was easier to reload my system every six months when I got a virus than to mess with all the virus protection being thrown at me, slowing my system down. The same holds true on a corporate scale. Nobody would ever come out and admit this publicly, or make it deliberately easy, but it might be more cost-effective to recover from a security breach than to proactively invest in preventing it. What if your customer records get stolen? So what? Consumers are hearing about the largest banks and government security agencies getting hacked every day. If you are a mid-sized business, it might be more cost-effective to invest in some damage control after the fact rather than jeopardize cash flow today.

So what is the future for security products? Well, they are not going to go away. They just need to be smarter, more cost-effective, and turnkey, and then perhaps companies will find the benefit-to-risk ratio more acceptable.

Related Article: Security Data Overload

Web Security Breaches and Accountability


By Zack Sanders – Security Expert – APconnections

If this recent story about a breach of medical information in Utah is any indication of how organizations will now handle security breaches, technology managers everywhere should be shaking in their boots. After a breach that exposed personal information of 780,000 people, the Utah state technology director was relieved of his position by the governor, and several others are under investigation.

Details of the actual attack are scarce, but it appears as though a Medicaid server (possibly hosted in the cloud) was vulnerable to a security misconfiguration at the password authentication level. This could mean a few different things – including SQL injection issues, exposed configuration files, or content that was accessible without actually logging in. Regardless of how it really occurred, it certainly could have been prevented with proper proactive assessments.

The larger issue at hand that the article touches on is accountability in data security. Personally, I think you are going to have a hard time finding organizations that will guarantee their solutions are totally secure. It’s just not realistic. You can never be 100% protected against an attack, and because software solutions often rely on other technologies and people, there are many ways in; proving exactly how someone got in, and who is to blame, is difficult, since vulnerabilities are often leveraged against each other. For example, say you have a server that has a third-party web application, a back-end database, and blog software installed. The web application itself is secure, but the blog software is not. It is breached by an attacker, and the database for the web application is stolen. User data in the database was not encrypted, and wide-spread fraud occurs. Who is to blame? The blog maker? The web application developer? The system administrator?

In truth, the answer is everyone – to varying degrees. The system administrator should not have had these two software packages running on the same system. The blog developers should have built a better solution. The web application programmer should have encrypted data at rest. Blame can even shift further up the chain. The IT director should have budgeted more money for security. The board members should have demanded proactive actions be taken.

So, it is likely the firings in the Utah Medicaid breach were mostly political, in that someone had to fall on the sword, but in truth the blame should fall on many individuals and companies.

One thing is clear: if you are a technology director or manager, you don’t want this to happen to you – but there are actions you can take. The most important thing is to BE PROACTIVE about security. How many breaches do you have to read about every day before you take charge of your own environment? If you’ve never been hacked, ask someone who has. It is a very painful process and costs reputation, money, and time. Start taking steps today to better your chances against attack. Some options to consider:

– Have quarterly security assessments conducted.

– If major changes to the application or server are made, have those changes reviewed for security.

– Discuss your security controls with an expert.

– Audit your existing infrastructure and start making changes now. Even though this will take time and resources, it does not compare to the time and resources required if a breach occurs.

APconnections Backs Up Security Device Support with an Unusual Offer: “We’ll Hack Your Network”


What gets people excited about purchasing an intrusion detection system? Not much. Certainly, fear can be used to sell security devices. But most mid-sized companies are spread thin with their IT staff; they are focused on running their business operations. To spend money to prevent something that has never happened to them would be seen as somewhat foolish. There are a large number of potential threats to a business, security being just one of them.

One expert pointed out recently:

“Sophisticated fraudsters are becoming the norm with data breaches, carder forums, and do it yourself (DIY) crime kits being marketed via the Internet.” – Excerpt from the fraudwar blogspot.

Thus, getting data stolen happens so often that it can be considered a survivable event; it is the new normal. Your customers are not going to run for the hills, as they have been conditioned to roll with this threat. But there is still a steep cost for such an event. So our staff put our heads together and asked: surely there must be an easy, quantifiable, minimum-investment way to objectively evaluate data risk without a giant cluster of data security devices in place, spewing gobs of meaningless drivel.

One of our internal white knight hackers pointed out that, in his storied past, he had been able to break into almost any business at will (good thing he is a white knight and does not steal or damage anything). While talking to some of our channel resellers, we have also learned that most companies, although aware of outside intrusion, are reluctant to throw money and resources at a potential problem that they can’t easily quantify.

Thus arose the idea for our new offer. For a small refundable retainer fee, we will attempt to break into a customer’s data systems from the outside. If we can’t get in, then we’ll return the retainer fee. Obviously, if we get in, we can then propose a solution with indisputable evidence of the vulnerability, and if we don’t get in, then the customer can have some level of assurance that their existing infrastructure thwarted a determined break-in.

Case Study: A Successful BotNet-Based Attack


By Zack Sanders – Security Expert – APconnections

In early 2012, I took on a client who was a referral from someone I had worked with when I first got out of school. When the CTO of the company initially called me, they were actually in the process of being attacked at that very moment. I got to work right away, using my background as both a web application hacker and a forensic analyst to try to answer the key questions that we briefly touched on in a blog post just last week. Questions such as:

– What was the nature of the attack?

– What kind of data was it after?

– What processes and files on the machine were malicious and/or which legitimate files were now infected?

– How could we maintain business continuity while at the same time ensuring that the threat was truly gone?

– What sort of security controls should we put in place to make sure an attack doesn’t happen again?

– What should the public and internal responses be?

Background

For the sake of this case study, we’ll call the company HappyFeet Movies – an organization that specializes in online dance tutorials. HappyFeet has three basic websites, all of which help sell and promote their movies. Most of the company’s business occurs in the United States and Europe, with few international transactions elsewhere. All of the websites reside on one physical server that is maintained by a hosting company. They are a small to medium-sized business with about 50 employees locally.

Initial Questions

I always start these investigations with two questions:

1) What evidence do you see of an attack? Defacement? Increased traffic? Interesting log entries?

2) What actions have you taken thus far to stop the attack?

Here was HappyFeet’s response to these questions:

1) We are seeing content changes and defacement on the home page and other pages. We are also seeing strange entries in the Apache logs.

2) We have been working with our hosting company to restore to previous backups. However, after each backup, within hours, we are getting hacked again. This has been going on for the last couple of months. The hosting company has removed some malicious files, but we aren’t sure which ones.

Looking For Clues

The first thing I like to do in cases like this is poke around the web server to see what is really going on under the hood. Hosting companies often have management portals or FTP interfaces where you can interact with the web server, but having root access and a shell is extremely important to me. With this privileged account, I can go and look at all the relevant files for evidence that aligns with the observed behavior. Keep in mind, at this point I have not done anything as far as removing the web server from the production environment or shutting it down. I am looking for valuable information that really can only be discovered while the attack is in progress. The fact that the hosting company has restored to backup and removed files irks me, but there is still plenty of evidence available for me to analyze.

Here were some of my findings during this initial assessment – all of them based around one of the three sites:

1) The web root for one of the three sites has a TON of files in it – many of which have strange names and recent modification dates. Files such as:

db_config-1.php

index_t.php

c99.php

2) Many of the directories (even the secure ones) are world writable, with permissions:

drwxrwxrwx

3) There are SQL dumps/backups in the web root that are zipped, so when they are visited by a web browser the user is prompted for a download – yikes!

4) The site uses a content management system (CMS) that was last updated in 2006 and the database setup interface is still enabled and visible at the web root.

5) Directory listings are enabled, allowing a user to see the contents of the directories – making discovery of the file names above a trivial task.

6) The Apache logs show incessant SQL injection attempts, which, when run, expose usernames and passwords in plain text.

7) The Apache logs also show many entries accessing a strange file called c99.php. It appeared to be some sort of interface that took shell commands as arguments, as is evident in the logs (a small log-triage sketch follows this list):

66.249.72.41 - - "GET /c99.php?act=ps_aux&d=%2Fvar%2Faccount%2F&pid=24143&sig=9 HTTP/1.1" 200 286
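The log triage mentioned above can be roughed out in a few lines. The sketch below is illustrative Python rather than a forensic tool; the log file name and the indicator patterns are assumptions, and a real investigation goes much deeper:

    import re

    # Crude indicators: classic SQL injection fragments plus the suspicious shell.
    suspicious = [
        r"union\s+select", r"information_schema", r"'\s*or\s*'1'='1",
        r"/c99\.php",
    ]
    patterns = [re.compile(p, re.IGNORECASE) for p in suspicious]

    def triage(logfile="access.log"):            # log path is an assumption
        hits = []
        with open(logfile, errors="replace") as f:
            for line in f:
                if any(p.search(line) for p in patterns):
                    hits.append(line.rstrip())
        return hits

    for entry in triage()[:20]:                  # show the first 20 matching entries
        print(entry)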

Nature of the Attack

There were two basic findings that stood out to me most:

1) The c99.php file.

2) The successful SQL injection log entries.

c99.php

I decided to do some research and quickly found out that this is a popular PHP shell file. It was somehow uploaded to the web server and the rest of the mayhem was conducted through this shell script in the browser. But how did it get there?

The oldest log data on the server was from December 19, 2011. At the very top of this log file were commands accessing c99.php, so I couldn’t really be sure how it got on there, but I had a couple of guesses:

1) The most likely scenario, I thought, was that the attacker was able to leverage the file upload feature of the dated CMS – either by accessing it without an account, or by brute forcing an administrative account with a weak password.

2) There was no hardware firewall protecting connections to the server, and there were many legacy FTP and SSH accounts festering that hadn’t been properly removed when they were no longer needed. One of these accounts could have been brute forced – more likely an FTP account with limited access; otherwise a shell script wouldn’t really be necessary to interact with the server.

The log entries associated with c99.php were extremely interesting. There would be 50 or so GET requests, which would run commands like:

cd, ps aux, ls -al

Then there would be a POST request, which would either put a new file in the current directory or modify an existing one.

This went on for tens of thousands of lines. The repetitive, linear nature of the entries seemed to me very much like an automated process of some type.

SQL Injection

The SQL injection lines of the logs were also very exploratory in nature. There was a long period of information gathering and testing against a few different PHP pages to see how they responded to database code. Once the attacker realized that the site was vulnerable, the onslaught began and eventually they were able to discover the information schema and table names of pertinent databases. From there, it was just a matter of running through the tables one at a time pulling rows of data.

What Was The Attack After?

The motives were pretty clear at this point. The attacker was a) attempting to control the server for use in other attacks or to send SPAM, and b) gathering whatever sensitive information they could from databases or configuration files before moving on. Exploited usernames and passwords could later be used in identity theft, for example. Both of the above motives are very standard for botnet-based attacks. It should be noted that the attacker was not specifically after HappyFeet – in fact, they probably knew nothing about them – they just used automated probing to look for red flags, and when the probes returned positive results, assimilated the server into their network.

Let the Cleanup Begin

Now that the scope of the attack was more fully understood, it was time to start cleaning up the server. When I am conducting this phase of the project, I NEVER delete anything, no matter how obviously malicious or how benign. Instead, I quarantine it outside of the web root, where I will later archive and remove it for backup storage.

Find all the shell files

The first thing I did was attempt to locate all of the shell files that might have been uploaded by c99.php. Because my primary theory was that the shell file was uploaded through a file upload feature in the web site, I checked those directories first. Right away I saw a file that didn’t match the naming convention of the other files. First of all, the directory was called “pdfs” and this file had an extension of PHP. It was also called broxn.php, whereas the regular files had longer, camel-case names that made sense to HappyFeet. I visited this file in the web browser and saw a GUI-like shell interface. I checked the logs for usage of this file, but found none. Perhaps this file was just an intermediary to get c99.php to the web root. I used a basic find command to pull a list of all PHP files from the web root forward. Obviously this was a huge list, but it was pretty easy to run through quickly because of the naming differences in the files. I only had to investigate ten or so files manually.

I found three other shell files in addition to broxn.php. I looked for evidence of these in the logs, found none, and quarantined them.

What files were uploaded or which ones changed?

Because of the insane number of GET requests served by c99.php, I thought it was safe to assume that every file on the server was compromised. It wasn’t worth going through the logs manually on this point; the attacker had access to the server long enough that this assumption is the only safe one. The less frequent POST requests were much more manageable. I did a grep through the Apache logs for POST requests made through c99.php and came up with a list of about 200 files (a rough sketch of this step follows below). My thought was that these files were all either new or modified and could potentially be malicious. I began the somewhat painstaking process of manually reviewing these files. Some had been overwritten back to their original state by the hosting company’s backup, but some were still malicious and in place. I noted these files, quarantined them, and retested website functionality.
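The grep step can be approximated with a short script. This is a hedged Python sketch; the log path and format are assumptions, and in the real engagement the resulting list of roughly 200 candidate files still had to be reviewed by hand:

    import re
    from collections import Counter

    # Pull every POST request involving c99.php out of the Apache access log
    # and tally the distinct request lines.
    post_re = re.compile(r'"POST\s+(\S*c99\.php\S*)\s+HTTP')

    targets = Counter()
    with open("access.log", errors="replace") as f:    # assumed filename
        for line in f:
            m = post_re.search(line)
            if m:
                targets[m.group(1)] += 1

    for url, count in targets.most_common():
        print(count, url)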

Handling the SQL injection vulnerabilities

The dated CMS used by this site was riddled with SQL injection vulnerabilities – so much so that my primary recommendation for handling it was building a brand new site. That process, however, takes time, and we needed a temporary solution. I used the log data that I had to figure out which pages the botnet was primarily targeting with SQL attacks. I manually modified the PHP code to do basic sanitizing on all inputs in these pages. This immediately thwarted SQL attacks going forward, but the damage had already been done. The big question here was how to handle the fact that all usernames and passwords were compromised.

Improving Security

Now that I felt the server was sufficiently cleaned, it was time to beef up the security controls to prevent future attacks. Here are some of the primary tasks I did to accomplish this:

1) Added a hardware firewall for SSH and FTP connections.

I worked with the hosting company to put this appliance in front of the web server. Now, only specific IPs could connect to the web server via SSH and FTP.

2) Audited and recreated all accounts.

I changed the passwords of all administrative accounts on the server and in the CMS, and regenerated database passwords.

3) Put IP restrictions on the administrative console of the CMS.

Now, only certain IP addresses could access the administrative portal.

4) Removed all files related to install and database setup for the CMS.

These files were no longer necessary and only presented a security vulnerability.

5) Removed all zip files from the web root forward and disabled directory listings.

These files were readily available for download and exposed all sorts of sensitive information. I also disabled directory listings, which is helpful in preventing successful information gathering.

6) Hashed customer passwords for all three sites.

Now, the passwords for user accounts were not stored in plain text in the database.

7) Added file integrity monitoring to the web server.

Whenever a file changes, I am notified via email. This greatly helps reduce the scope of an attack should it breach all of these controls (a minimal sketch of the idea follows this list).

8) Wrote a custom script that blocks IP addresses that put malicious content in the URL.

This helps prevent information gathering or further vulnerability probing. The actions this script takes operate like a miniature NetGladiator (a rough sketch of such a script also follows this list).

9) Installed anti-virus software on the web server.

10) Removed world-writable permissions from every directory and adjusted ownership accordingly.

No directory should ever be world writable – doing so is usually just a lazy way of avoiding proper ownership. The world-writable aspect of this server allowed the attack to be far broader than it had to be.

11) Developed an incident response plan.

I worked with the hosting company and HappyFeet to develop an internal incident response policy in case something happens in the future.
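For item 7 above, here is a minimal sketch of what file integrity monitoring boils down to: hash everything under the web root, store a baseline, and report anything added, removed, or changed. It is illustrative Python with assumed paths, not the monitoring tool actually deployed, and it omits the email alerting:

    import hashlib, json, os

    WEB_ROOT = "/var/www"                          # assumed location
    BASELINE = "/var/lib/fim_baseline.json"        # assumed location

    def snapshot(root):
        # SHA-256 of every file under the web root.
        hashes = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
        return hashes

    def save_baseline():
        with open(BASELINE, "w") as f:
            json.dump(snapshot(WEB_ROOT), f)

    def check():
        # Report anything added, removed, or modified since the baseline.
        current = snapshot(WEB_ROOT)
        with open(BASELINE) as f:
            baseline = json.load(f)
        added    = sorted(set(current) - set(baseline))
        removed  = sorted(set(baseline) - set(current))
        modified = sorted(p for p in set(current) & set(baseline)
                          if current[p] != baseline[p])
        return added, removed, modified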
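And for item 8, here is a rough sketch of the idea behind the custom blocking script: follow the access log, and when a request URL contains an obviously malicious pattern, block the source IP. The log path, log format, patterns, and the iptables call are all assumptions for illustration; the script actually deployed reflects the specific attacks seen against this site:

    import re, subprocess, time

    # Crude examples of obviously malicious URL patterns.
    BAD_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
        r"union\s+select", r"\.\./\.\.", r"/etc/passwd", r"<script",
    )]
    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

    def block(ip):
        # Drop all further traffic from this source address.
        subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
                       check=False)

    def watch(logfile="/var/log/apache2/access.log"):   # assumed path
        blocked = set()
        with open(logfile, errors="replace") as f:
            f.seek(0, 2)                  # start at end of file, like tail -f
            while True:
                line = f.readline()
                if not line:
                    time.sleep(1)
                    continue
                m = LOG_LINE.match(line)
                if not m:
                    continue
                ip, url = m.groups()
                if ip not in blocked and any(p.search(url) for p in BAD_PATTERNS):
                    block(ip)
                    blocked.add(ip)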

Public Response

Because all usernames and passwords were compromised, I urged HappyFeet to communicate the breach to their customers. They did so, and later received feedback from users who had experienced identity theft. This can be a tough step to take from a business point of view, but transparency is always the best policy.

Ongoing Monitoring

It is not enough to implement the above controls and then set them and forget them. There must be ongoing tweaking and monitoring to ensure a strong security profile. For HappyFeet, I set up a yearly monitoring package that includes:

– Manual and automated log monitoring.

– Server vulnerability scans once a quarter, and web application scans once every six months.

– Manual user history review.

– Manual anti-virus scans and results review.

Web Application Firewalls

I experimented with two types of web application firewalls for HappyFeet. Both took me down the road of broken functionality and over-robustness. One had to be completely uninstalled, and the other is in monitoring mode because protection mode disallowed legitimate requests. It also alerts on probing attempts about 5,000 times per day – most of which are not real attacks – and the alert volume is unmanageable. Its only value is in generating data for improving my custom script that blocks IPs based on basic malicious attempts.

This is a great example of how NetGladiator can provide a lot of value to the right environment. They don’t need an intense, enterprise-level intrusion prevention system – they just need to block the basics and not break functionality in their web sites. The custom script, much like NetGladiator, suits their needs to a T and can also be configured to reflect previous attacks and vulnerabilities I found in their site that are too numerous to patch manually.

Lessons Learned

Here are some key take-aways from the above project:

– Being PROACTIVE is so much better than being REACTIVE when it comes to web security. If you are not sure where you stack up, have an expert take a look.

– Always keep software and web servers up to date. New security vulnerabilities arrive on the scene daily, and it’s extremely likely that old software is vulnerable. Often, security holes are even published for an attacker to research. It’s just a matter of finding out which version you have and testing the security flaw.

– Layered security is king. The security controls mentioned above prove just how powerful layering can be. They are working together in harmony to protect an extremely vulnerable application effectively.

If you have any questions on NetGladiator, web security, or the above case study, feel free to contact us any time! We are here to help, and don’t want you to ever experience an attack similar to the one above.

What to Do If Your Organization Has Been Hacked


By Zack Sanders – Security Expert – APconnections

It’s a scary scenario that every business fears: a successful attack on your web site that results in stolen information or embarrassing defacement.

From huge corporations to mom-and-pop online shops, data security is (or should be) a keystone consideration. As we’ve written about before, no one is immune to attack – not even local businesses with small online footprints. I have personally worked with many clients who you would not think would be targeted by hackers, and they ended up being the victims of reasonably intricate and damaging attacks that cost many thousands of dollars to mitigate.

Because no set of security controls or solutions can make you truly safe from exploitation, it is important to have a plan in place in case you do get hacked. Having a documented plan ready BEFORE an attack occurs allows you to be calm and rational with your response. Below are some basic steps you should consider in an incident response plan and/or follow in case a breach occurs.

1) Stay calm.

An attack, especially one in progress, naturally causes panic. While understandable, these feelings will only cause you to make mistakes in handling the breach. Stay calm and stick to your plan.

2) DO NOT unplug the system.

Unplugging the affected system, deleting malicious files, or restoring to a backup are all panic-driven responses to a security incident. When you take measures such as these, you potentially destroy key evidence in determining what, if anything, was taken, how it was taken, and when. Leave the system in place and call an expert as soon as possible.

3) Call an expert.

There are many companies that specialize in post-breach analysis, and it is important to contact these folks right away. They can help determine how the breach occurred, what was taken, and when. They can also help implement controls and improve security so that the same attack does not happen again. If you’ve been hacked, this is the most important step to take.

4) Keep a record.

For possible eventual legal action and to simply keep track of system changes, always keep a record of what has happened to the infected system – who has touched it, when, etc.

5) Determine the scope of the attack, stop the bleeding, and figure out what was taken.

The expert you phoned in will analyze the affected system and follow the steps above. Once the scope is understood, the system will be taken offline and the security hole that caused the problem will be discovered and closed. After that, the information that was compromised will be reviewed. This step will help determine how to proceed next.

6) Figure out who to tell.

Once you’ve determined what kind of information was compromised, it is very important to communicate that to the right people. If it was internal documents, you probably don’t need to make that public. If it was usernames and passwords, you must let your users know.

7) Have a security assessment performed and improve security controls.

Have your expert analyze the rest of your infrastructure and web applications for security holes that could be a problem in the future. After this occurs, the expert can recommend tools that will vastly improve your security layering.

Of course, many of these tasks can be performed proactively to greatly reduce the likelihood of ever needing this process. Contact an expert now and have them analyze your systems for security vulnerabilities.

Do We Really Need SSL?


By Art Reisman, CTO, www.netequalizer.com, www.netgladiator.net.


I know that perception is reality, and sometimes it is best to accept it, but when it comes to security FUD, I get riled up.

For example, last year I wrote about the unneeded investment surrounding the IPv4 demise, and, as predicted, the IPv6 push turned out to be mostly vendor hype motivated by a desire to increase equipment sales. Today, I am here to dispel the misplaced fear around the concept of having your data stolen in transit over the Internet. I am referring to the wire between your residence and the merchant site at the other end. This does not encompass the security of data once it is stored on a disk drive at its final location, just the transit portion.

To get warmed up, let me throw out some analogies.

Do you fear getting carjacked going 75 mph on the interstate?

Most likely not, but I bet you do lock your doors when stopped.

Do you worry about encrypting your cell phone conversations?

Not unless you are on security detail in the military.

As with my examples, somebody stealing your credit card while it is in transit, although possible, is highly impractical; there are just better ways to steal your data.

It’s not that I am against VPNs and SSL; I do agree there is some risk in transporting data. The problem I have is that this relative risk is so much lower than other glaring security holes that companies ignore, either because they are unaware or because they care more about perception than about protecting data. And yet customers will hand them financial data as long as their website portal provides SSL encryption.

To give you some more perspective on the relative risk, let’s examine the task of stealing customer information in transit over the Internet.

Suppose for a moment that I am a hacker. Perhaps I am in it for thrills or for illegal financial gain; either way, I am going to be pragmatic with my approach and maximize my chances of finding a gold nugget.

So how would I go about stealing a credit card number in transit?

Option 1: Let’s suppose I parked in the alley behind your house and had a device sophisticated enough to eavesdrop on your wireless router and display all the websites you visited. So now what? I just wait there and hope that perhaps in a few days or weeks you’ll make an online purchase, so I can grab your credit card information and run off to make a few purchases of my own. This may sound possible, and it is, but the effort and exposure involved would not be practical.

Option 2: If I landed a job at an ISP, I could hook up a sniffer that eavesdrops on every conversation between the ISP’s customers and the rest of the Internet. I suppose this is a bit more likely than option 1, but there is just no precedent for it – and ISPs often have internal safeguards to monitor and protect against this. I’d still need very specialized equipment and time to work unnoticed to pull it off, and I’d have to limit my thefts to the occasional hit-and-run so as not to attract suspicion. The chances of economic benefit are slim, the chances of getting caught are high, and thus the risk to the customer is very low.

For a criminal intent on stealing data, trolling the Internet with a bot looking for unsecured servers, or working for a financial company where the data resides and stealing thousands of credit cards at once, is far more likely. SSL does nothing to prevent these real threats, and that is why you hear about hacking intrusions in the headlines every day. Many of these break-ins could be prevented, but it takes a layered approach, not just a feel-good SSL layer that we could do without.

Common NetGladiator Questions Explained


Since our last security-related blog post, The Truth About Web Security (And How to Protect Your Data), we’ve received many inquiries related to NetGladiator and best-practice security in general. In the various email and phone conversations thus far, we’ve encountered some recurring questions that many of you might also find useful. The purpose of this post is to provide answers to those questions.

1) Could an attacker circumvent NetGladiator by probing targets slowly so as not to be detected by the time-based anomaly metrics?

The NetGladiator detects multiple types of anomalies. Some are time-frequency based, and some are pattern based.

For instance, a normal user won’t be hitting 500 pages per minute, and a normal user will never be putting SQL in the URL attempting an injection. If a malicious user were slowly running a probing robot, it would likely still be attempting patterns that the NetGladiator would detect, and the NetGladiator would immediately block that IP. There are directory brute-force tools that won’t hit on any patterns, but they will hit on the time-frequency settings. If the attacker were to slow the tool down to a normal user’s click rate, it’s possible they could go undetected, but these brute-force lists rely on trying millions of common page and directory names quickly. It would not be worth it to run through such a list at that pace.

2) Could a hacker change their IP address often enough so that NetGladiator would not think the source of the attack was the same?

The number of IP addresses you’d need to spoof would make this a tiresome effort for the attacker, and in an automated attack by a botnet, the probe is more likely to simply move on to a new target. In a targeted attack, IP spoofing, while possible, would also likely be more of a hassle than it’s worth. But even if it were worth it to the attacker, the NetGladiator alerts admins to intrusion attempts, so you can deal with the threat proactively. You can also block by IP range/country, so if you notice someone spoofing IP addresses from a specific range, you can drop all of those connections for as long as you like.

Also with regard to IP addresses, the NetGladiator only bans them for a set amount of time. This is because bots probe from new IP addresses all the time, and a real user might eventually end up with one of those IPs – you wouldn’t want to block it forever. That said, if an IP is persistently malicious, you can block it permanently.

3) Why is there a maximum number of patterns you can input into NetGladiator?

One of NetGladiator’s key differentiating factors is its deliberately limited pattern set – its “robustlessness,” if you will – combined with its custom configuration. This may sound like a detriment, but it actually leaves you better off. Not only can you detect exactly the threats pertinent to your web application, you also will not break functionality – regardless of poor programming or setup on the back end. Many intrusion prevention systems are so aggressive in their blocking of requests that there are too many false positives to deal with (usually triggered by programming “errors” or infrastructure abnormalities). This often ends with the IPS being disabled – which helps no one. NetGladiator caps the number of patterns for one main reason:

Speed and efficiency.

We don’t want to hamper your web connections by inspecting packets for too many regular expressions. We’d rather quickly check for key patterns that show malicious intent under the assumption that those patterns will be tried eventually by an attacker. This way, data can seamlessly pass through, and your users won’t incur performance problems.

4) What kind of environments benefit from NetGladiator?

NetGladiator was built to protect web applications from botnets and hackers – it won’t have much use for you at the network level or the user level (email, SPAM, anti-virus, etc.). There are other security controls that focus on those areas. Every few years, the Open Web Application Security Project (OWASP) releases its Top 10 – a list of the most common web application security vulnerabilities facing sites today. NetGladiator helps protect against issues of this type, so any web application with even a small amount of interactivity or a backend will benefit from NetGladiator’s features.

We want to hear from you!

Have some questions about NetGladiator or web security in general? Visit our website, leave a comment, or shoot us an email at ips@apconnections.net.

The Truth About Web Security (And How to Protect Your Data)


By Zack Sanders – Security Expert at APconnections.

Security Theater

Internet security is an increasingly popular and fascinating subject that has pervaded our lives through multiple points of entry in recent years. Because of this infiltration, security expertise is no longer a niche discipline teetering on the fringe of computer science – it’s an integral part. Computer security concerns have ceased to be secondary thoughts and have made their way to the front lines of business decisions, political banter, and legislative reform. Hackers are common subjects in movies, books, and TV shows. It seems like every day we are reading about the latest security breach of a gigantic, international conglomerate. Customers who once were naive to how their data was used and stored are now outwardly concerned about their privacy and identity theft.

This explosion in awareness has, of course, yielded openings for the opportunistic. Companies now know there is a real business need for security, and there are thus hundreds of solutions available to improve your security footprint. But most of them are not telling you the truth about how to really secure your infrastructure. They just want to sell you their product – hyping its potential, touting its features, and telling you to install it and – *poof* – you no longer need to worry about security – something those in the industry call “Security Theater.” In many ways, these companies are actually making you less secure because of this sales pitch. Believing that you can plug in an “all-in-one device” and have it provide all of your security controls sounds good, but it’s unrealistic. When you stop being diligent on multiple levels, you start being vulnerable.

Real security is all about two things:

1) Being PROACTIVE.
2) Implementing LAYERED security controls.

Let’s briefly discuss each of these central tenets of best-practice security.

1) Being proactive is key for many reasons. When you are proactive about security, you are anticipating attacks before they start. This allows you to calmly implement security controls, develop policies, and train staff before a breach occurs. You should be proactive about security for the same reasons you are proactive about your health. Eating well, exercising, and periodically seeing a doctor all improve your chances of staying healthy. None of it guarantees you won’t get sick, much in the same way that security controls won’t guarantee you won’t get hacked, but it greatly improves your odds. And, just as with your personal health, if you are not proactive and something does go wrong, it is often too late to reverse the effects – most of the damage has already been done.

2) Implementing a layered approach to security is paramount in reducing the odds of a successful attack. The goal is to take security controls that complement each other on different levels of your infrastructure and piece them together to form a solid line of defense. If one control is breached, another is there to back it up in a different, but equally effective, way. It is actually possible to take products that are relatively ineffective on their own (say 75% effective) and layer them to lower the chances of a successful attack to less than 1%. If you layer just four 75%-effective controls, and assume they fail independently, the chance of an attack getting past all of them is 0.25 × 0.25 × 0.25 × 0.25 ≈ 0.0039, or about 0.39%. That’s pretty impressive!
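As a quick illustration of that arithmetic, here is a minimal sketch in Python. It assumes the layers really do fail independently – in practice, correlated failures will weaken the numbers – and the 75% figures are just the example values from above:

    # Minimal sketch: chance an attack slips past every layer of a layered defense.
    # Assumes each layer blocks an attack with the given effectiveness and that
    # layer failures are independent - an optimistic assumption in practice.

    def breach_probability(layer_effectiveness):
        """Return the probability that an attack gets past all layers."""
        probability = 1.0
        for effectiveness in layer_effectiveness:
            probability *= (1.0 - effectiveness)  # this layer misses the attack
        return probability

    # Four controls, each only 75% effective on its own:
    layers = [0.75, 0.75, 0.75, 0.75]
    print(f"Breach probability: {breach_probability(layers):.2%}")  # -> 0.39%

Plug in your own effectiveness estimates to see how quickly layering pays off.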

Here is an analogy

Think of your sensitive data as crown jewels that are stored in the center of a castle. If your only security control is a moat, it wouldn’t take much ingenuity for a thief to cross over the moat and subsequently steal your jewels. One thing we can do to improve security is strengthen the moat. Let’s add some crocodiles – that will certainly help in thwarting would-be crossers. But, even though we’ve beefed up the security of the moat, it’s still passable. The problem is that we can never 100% secure the moat from thieves no matter what we do. We need to add complementary controls to back up the security of the moat in case the moat fails. So, we’ll place archers at the four corner towers and install a big door with multiple locks and guards at the front gate. We’ll move the jewels to the cellar and place them under lock and key with a designated guard. Knights will be trained to spot thieves, and there will be a checkpoint outside the castle for all incoming and outgoing guests. Now, instead of having to just cross the moat, a thief would also have to get through the heavy door, through the locks, past the guards, past the archers, into the cellar, past another guard, and into the locked room. On exit, he’d have to get through all of these again, including a manual search at the checkpoint. That seems tough to do compared to just crossing the moat.

Your web security infrastructure should work the same way. Multiple policies, devices, and configurations should all work in harmony to protect your sensitive data. When companies are trying to sell you an all-in-one security device, they are essentially trying to sell you a very robust moat. It’s not that their product won’t provide value, but it needs to be implemented as part of an overall security strategy, and it should not be solely relied upon.

How Real Attacks Occur

We have thought a lot lately about exactly how real attacks occur in the wild for organizations with interactive web applications. This is slightly simplistic, but it really seems to boil down to two key origins:

1) A hack results from an AUTOMATED scan or probe.

This is by far the most common type of attack, even though it gets far less attention than the other. Many organizations don’t take this type of attack as seriously as they should. They think that because they are a small, non-influential site with little customer data, they won’t be targeted. And they are probably right – a human attacker won’t be targeting them. But a robot has no discretion. The robot’s goal is to add hosts to its botnet (for DoS attacks, sending SPAM, etc.) and to siphon off any available sensitive data from the server. Botnets constantly scour the Internet, rapidly attempting breaches with known, common patterns. They don’t get too sophisticated.

2) A hack results from a TARGETED attack.

The media has hyped this into the most talked-about type of attack, but it is much less common. Targeted attacks can stem from multiple motivations. Sometimes a targeted attack will follow interesting results from an automated scan (as in #1, above). The other kind of targeted attack is the most dangerous – an attacker, or group of attackers, specifically going after your site for financial or political reasons. Despite what other products might profess, there is no one-stop solution for stopping this type of attack. A layered approach to security, as discussed above, is key.

Approaches to Dealing with Botnets/Malnets and other Automated Attacks

Botnets are large, distributed networks of private computers and servers infected with malicious software without the owners of the systems being aware. The botnet machines can be used to scan targets for vulnerabilities or to send out SPAM and malicious emails. Using systems registered to someone else provides a layer of anonymity to the attacker, who also gains extra processing power and resources. Botnets rely heavily on simple intrusion attempts and speed. They are often brute-forcing directory listings or credentials, and once they’ve exhausted their lists, they move on.

There are a few things you can do to greatly lower the effectiveness of a botnet:

1) Think about whether your website really needs to be open to the entire Internet. Are there countries or subnets you will never receive business from? Why not block those IP ranges right off the bat? It seems harsh at first, but there is a lot of added security value here for the small risk of turning away a legitimate customer.

2) Implement a tool that monitors the number of requests received over a given time frame. A normal user will never request pages at the same rate as a botnet. If the request count from a single IP passes a certain threshold, you can confidently block that IP (see the log-monitoring sketch after this list).

3) Implement a tool that monitors logs for multiple 404 (Page Not Found) responses. Brute-force tools generate plenty of 404s when they are hammering your servers. If you see many 404s over a short period of time from the same IP, chances are good that it is acting maliciously (the same sketch below counts these as well).

4) Look for common patterns in logs that suggest malicious intent. The information discovery process is very important for an attacker (or botnet). It is during this phase that they learn about possible vulnerabilities your sites might have. In order to find these holes, the attacker has to experiment with the site to see how it responds to malicious code. If you can isolate these probing attempts right off the bat, you stand a good chance of cutting off the information-gathering process before it yields potential attack vectors (the sketch below checks for a few such patterns).

5) Implement a file integrity monitoring tool on your web server and have it actively alert on changes to files that are not supposed to change often. If an attacker finds an entry point, one of the first things they will try to do is upload a file to the server. Getting a file onto the server is a huge accomplishment for an attacker: they can upload PHP or ASP files that act as shell interfaces to the server itself, and from there wreak whatever havoc they’d like. With a file integrity monitoring tool, you can know within minutes that a file has been added and deal with the threat before it is widespread (a bare-bones file-integrity sketch follows the log-monitoring example below).
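To make items 2 through 4 concrete, here is a minimal sketch – not a NetGladiator feature, just an illustration in Python – that scans an Apache/Nginx-style access log, tallies requests and 404s per IP, and flags a few classic probing signatures. The log path, thresholds, and patterns are assumptions you would tune for your own environment:

    import re
    from collections import defaultdict

    # Minimal sketch: flag suspicious IPs from a "combined"-format web access log.
    # The thresholds and patterns below are illustrative assumptions, not recommendations.

    REQUEST_LIMIT = 1000   # requests over the scanned window; far beyond any human click rate
    NOT_FOUND_LIMIT = 50   # many 404s suggests directory brute forcing
    PROBE_PATTERNS = [     # a few classic probing signatures
        re.compile(r"union\s+select", re.IGNORECASE),  # SQL injection attempt
        re.compile(r"<script", re.IGNORECASE),         # cross-site scripting attempt
        re.compile(r"\.\./"),                          # path traversal attempt
    ]

    # IP ... [timestamp] "METHOD /path HTTP/1.x" status ...
    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:\S+) (\S+) [^"]*" (\d{3})')

    def scan(log_path):
        requests, not_found, probes = defaultdict(int), defaultdict(int), defaultdict(int)
        with open(log_path) as log:
            for line in log:
                match = LOG_LINE.match(line)
                if not match:
                    continue
                ip, path, status = match.groups()
                requests[ip] += 1
                if status == "404":
                    not_found[ip] += 1
                if any(pattern.search(path) for pattern in PROBE_PATTERNS):
                    probes[ip] += 1
        return {ip for ip in requests
                if requests[ip] > REQUEST_LIMIT
                or not_found[ip] > NOT_FOUND_LIMIT
                or probes[ip] > 0}

    if __name__ == "__main__":
        for ip in sorted(scan("/var/log/apache2/access.log")):  # hypothetical log path
            print(f"Consider blocking {ip}")

In practice you would run something like this on a rolling window (from cron, or against a tail of the live log) and feed the offending IPs into your firewall rules rather than reviewing them by hand.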
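And for item 5, a bare-bones file integrity check might hash every file under the web root and compare the results against a saved baseline. Again, this is only a sketch under assumed paths – dedicated file integrity monitoring tools do this far more robustly:

    import hashlib
    import json
    import os
    import sys

    # Minimal sketch: compare SHA-256 hashes of files under a web root against a saved baseline.
    # WEB_ROOT and BASELINE_FILE are assumed paths; run "baseline" once, then "check" from cron.

    WEB_ROOT = "/var/www/html"
    BASELINE_FILE = "/var/lib/fim/baseline.json"

    def hash_tree(root):
        """Walk the tree and return {path: sha256 hex digest}."""
        hashes = {}
        for directory, _subdirs, files in os.walk(root):
            for name in files:
                path = os.path.join(directory, name)
                with open(path, "rb") as handle:
                    hashes[path] = hashlib.sha256(handle.read()).hexdigest()
        return hashes

    def main(command):
        if command == "baseline":
            os.makedirs(os.path.dirname(BASELINE_FILE), exist_ok=True)
            with open(BASELINE_FILE, "w") as out:
                json.dump(hash_tree(WEB_ROOT), out)
        else:  # "check"
            with open(BASELINE_FILE) as saved:
                baseline = json.load(saved)
            current = hash_tree(WEB_ROOT)
            for path in sorted(current.keys() - baseline.keys()):
                print(f"NEW FILE: {path}")   # e.g., an uploaded web shell
            for path in sorted(baseline.keys() & current.keys()):
                if baseline[path] != current[path]:
                    print(f"MODIFIED: {path}")
            for path in sorted(baseline.keys() - current.keys()):
                print(f"DELETED: {path}")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "check")

Wire the output of the periodic check into an email or syslog alert so someone actually sees a new or modified file within minutes, as described above.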

The NetGladiator

NetGladiator is a next-generation Intrusion Prevention System (IPS) made by APconnections that deals with several of the issues above and was built around how attacks actually occur. It can be an effective layer in your security profile to help block unwanted web-based requests (whether from a botnet or from an attacker targeting you specifically) – you can think of it as a firewall for your web applications. In addition to inspecting web requests, it can detect time-based anomalies and block IP ranges by country and/or subnet.

NetGladiator has two primary goals:

1) Make your web infrastructure INVISIBLE and UNINTERESTING to probing botnets.
2) Provide value as a LAYERED appliance in case of a targeted attack.

NetGladiator also has the following characteristics that set it apart from more expensive, overly robust IPSs:

Customizable Configurations
Unlike other IPSs with insanely robust pattern sets, NetGladiator lets you pick and choose the patterns you’d like it to hit on. Other products inspect for every vulnerability known to man. While this sounds good, it isn’t very practical and often leads to broken functionality, false positives, and over-reliance on the device.

Support From a White Knight (a.k.a. Professional Hacker)
As part of your support agreement when you purchase a NetGladiator, a real white knight will help you set up and configure your machine to meet your needs. This includes identifying and patching any existing holes prior to your installation, assessing what issues you might face from a real attacker, and writing a custom configuration for your box. That’s something no one else provides – especially at this price point. And if you want further security assessments performed, additional support hours can be purchased.

Plug and Play
If you’ve set up a NetEqualizer in the past, you’ll find NetGladiator’s installation process to be even easier. Just put it in front of your web servers, cable the box correctly, and turn it on – traffic will be passing through it instantly. All that’s left is to configure your patterns, and NetGladiator ships with default patterns in case no customization is necessary. NetGladiator also runs on its own system and does not require any installation on your web servers, which makes it platform independent and keeps it from conflicting with your existing software and hardware.

But remember, protecting web applications is just one piece of the puzzle. In order to layer NetGladiator into your overall security strategy, you should complement its use with other controls. Some examples would be:

– Well-defined user and staff policies that deal with insider threats and social engineering

– Full or column-level database encryption

– Anti-virus

– File integrity monitoring

– Hardware firewalls

– A security assessment by an expert

etc…

Questions?

Need help instituting a layered security strategy? We have experience in all these levels of security controls and are happy to help with NetGladiator implementation or other security-related tasks. Just let us know how we can be of service!

Have some questions about NetGladiator or web security in general? Visit our website, leave a comment, or shoot us an email at ips@apconnections.net. In the next blog post, we’ll answer those questions and also discuss common ones we’ve received from customers so far.
