Special Glasses Needed to Spot Network Security Holes


By Art Reisman

CTO – http://www.netequalizer.com

Would you leave for vacation with your garage door wide open, or walk off the edge of a cliff looking for a lost dog? Whether it is a bike lock or that little beep your car makes when you press the button on your remote, you rely on physical confirmation for safety and security every day.

Because network security holes do not register with any of our human senses, most businesses run blind to vulnerabilities that can be glaringly obvious to a hacker.

Have you ever seen an owl swoop down in the darkness and grab a rabbit? I have, but only once, and that was in the dim glow of a field lit by nearby stadium lights. Owls take hundreds of rodents every night under the cover of darkness; they have excellent night vision, and most rodents don’t.

To a hacker, a security hole can be just as obvious as that rabbit. You might feel secure under the cover of darkness, but what is invisible to your senses is quite obvious to a hacker, who has ways of illuminating your security holes and can choose to exploit them if they look juicy enough. For some entry points, a hacker might have to look a little harder, like lifting a door mat to reveal a key. Nevertheless, they will see the key, and the problem is you won’t even know the key is under the mat.

Fancy automated tools that report risk are nice, but the only way to expose your actual network security holes is to hire somebody with night-vision goggles who can see them. Most audit tools are not good enough by themselves; they bumble around in the dark, feeling for things, and they do not see your network the way a hacker does.

I’d strongly urge any company serious about updating its security to employ a white knight hacker before any other investment outlay. For the same reason that automated systems cannot replace humans, even though billions have been spent on them over the years, you should not start your security defense with an automated tool. It must start with a human hell-bent on breaking into your business and then showing you the holes. It never ceases to amaze me what kinds of holes our white knight hackers find. Nothing spots security holes better than a guy with special glasses.

Is Your Data Really Secure?


By Zack Sanders

Most businesses, if asked, would tell you they care about the security of their customers. The controversial part comes to a head when you ask the question a different way: does your business care enough about security to make an investment in protecting customer data? A few companies proactively invest in security for security’s sake, but they are largely in the minority.

The two key driving factors that determine a business’s commitment to security investment are:

1) Government or Industry Standard Compliance – This is what drives businesses like your credit card company, your local bank, and your healthcare provider to care about security. In order to operate, they are forced to care. Standards like HIPAA and PCI require them to go through security audits and checkups. Note: just because they invest in meeting a compliance standard, it does not necessarily translate to secure data, as we will point out below.

2) A Breach Occurs – Nothing will change an organization’s attitude toward security like a massive, embarrassing security breach. Sadly, it usually takes something like this happening to drive home the point that security is important for everyone.

The fact is, most businesses are running on very thin margins, and other operating costs come before security spending. Human nature is such that we prioritize what is in front of us now; what we don’t know can’t hurt us. It is easy for a business to assume that its minimal firewall configuration is good enough for now. Unfortunately, the holes in that firewall are not easy to see, and most firewall security can easily be breached through advertised public interfaces.

How do we know? Because we often do complimentary spot checks on company web servers, and it is a rare case when we have not been able to break in and attain access to all customer records. Even though our sample set is small, our breach rate is so high that we can reliably extrapolate that most companies can easily be broken into.

As we alluded to above, even some of the companies that follow a standard are still vulnerable. Many large corporations just go through the motions to comply with a standard, so they typically seek out “trusted,” large professional services firms to do their audits. Often, these firms conduct boilerplate assessments in which auditors run down a checklist with the sole goal of certifying the application or organization as compliant.

Hiring a huge firm to do an audit makes it much easier to deflect blame in the case of an incident. The employee responsible for hiring the audit firm can say, “Well, I hired XXX – what more could I have done?” If they had hired a small firm to do the audit, and a breach occurred, their judgment and job might come into question – however unfair that might be.

As a professional web application security analyst who has personally handled the aftermath of many serious security breaches, I advocate that if you take your security seriously, you start with an assessment challenge from a firm that will work to expose your real-world vulnerabilities.

P2P Protocol Blocking Now Offered with NetGladiator Intrusion Prevention


A few months ago we introduced our NetGladiator Intrusion Prevention System (IPS) device. To date, it has thwarted tens of thousands of robotic cyber attacks and counting. Success breeds success, and our users wanted more.

When our savvy customers realized the power, speed, and low price point of our underlying layer 7 engine, we started getting requests for additional features, such as: “Can you also block peer-to-peer and other protocols that cannot be stopped by our standard web filters and firewalls?” It was natural to extend our IPS device to address this space; hence, today we are announcing the next-generation NetGladiator. We now offer a module that will allow you to block and monitor the world’s top 10 P2P protocols (which account for 99 percent of all P2P traffic). We also back our technology with a unique promise: we will implement a custom protocol blocking rule with the purchase of any system at no extra charge. For example, if you have a specific protocol you need to monitor and just can’t uncover it with your WebSense or firewall filter, we will deliver a custom NetGladiator system that can track and/or block your unique protocol, in addition to our standard P2P blocking options.
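NetGladiator’s layer 7 engine is proprietary, but to give a feel for what pattern-based protocol blocking means at the packet level, here is a generic sketch using the Linux iptables string-match extension (this is an illustration of the idea, not how NetGladiator is implemented):

# Drop packets carrying the cleartext BitTorrent handshake
iptables -I FORWARD -m string --algo bm --string "BitTorrent protocol" -j DROP

# Drop cleartext Gnutella session requests
iptables -I FORWARD -m string --algo bm --string "GNUTELLA CONNECT" -j DROP

Naive string matching like this is easily evaded by encrypted or obfuscated P2P clients, which is why a production layer 7 engine has to track many signatures and behaviors per protocol.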

Below is a sample live Excel report integrated with the NetGladiator running in monitor mode. In the screen snapshot, you will notice that we have uncovered a batch of uTorrent and FrostWire P2P traffic.

Please feel free to call 303-997-1300 or email our NetGladiator sales engineering team at ips@apconnections.net with any additional questions.


APconnections Backs Up Security Device Support with an Unusual Offer: “We’ll Hack Your Network”


What gets people excited about purchasing an intrusion detection system? Not much. Certainly, fear can be used to sell security devices. But most mid-sized companies are spread thin with their IT staff and are focused on running their business operations. Spending money to prevent something that has never happened to them would be seen as somewhat foolish. There are a large number of potential threats to a business, security being just one of them.

One expert pointed out recently:

“Sophisticated fraudsters are becoming the norm with data breaches, carder forums, and do it yourself (DIY) crime kits being marketed via the Internet.” – Excerpt from the fraudwar blogspot.

Thus, getting data stolen happens so often that it can be considered a survivable event; it is the new normal. Your customers are not going to run for the hills, as they have been conditioned to roll with this threat. But there is still a steep cost to such an event. So our staff put our heads together: there must be an easy, quantifiable, minimum-investment way to objectively evaluate data risk without a giant cluster of data security devices in place, spewing gobs of meaningless drivel.

One of our internal white knight hackers pointed out that, in his storied past, he had been able to break into almost any business at will (a good thing he is a white knight and does not steal or damage anything). While talking with some of our channel resellers, we also learned that most companies, although aware of outside intrusion, are reluctant to throw money and resources at a potential problem they can’t easily quantify.

Thus arose the idea for our new offer: for a small refundable retainer fee, we will attempt to break into a customer’s data systems from the outside. If we can’t get in, we’ll return the retainer fee. Obviously, if we do get in, we can then propose a solution backed by indisputable evidence of the vulnerability; and if we don’t, the customer can have some level of assurance that their existing infrastructure thwarted a determined break-in.

Case Study: A Successful BotNet-Based Attack


By Zack Sanders – Security Expert – APconnections

In early 2012, I took on a client who was a referral from someone I had worked with when I first got out of school. When the CTO of the company initially called me, they were actually in the process of being attacked at that very moment. I got to work right away, using my background as both a web application hacker and a forensic analyst to answer the key questions we briefly touched on in a blog post just last week. Questions such as:

– What was the nature of the attack?

– What kind of data was it after?

– What processes and files on the machine were malicious and/or which legitimate files were now infected?

– How could we maintain business continuity while at the same time ensuring that the threat was truly gone?

– What sort of security controls should we put in place to make sure an attack doesn’t happen again?

– What should the public and internal responses be?

Background

For the sake of this case study, we’ll call the company HappyFeet Movies – an organization that specializes in online dance tutorials. HappyFeet has three basic websites, all of which help sell and promote their movies. Most of the company’s business occurs in the United States and Europe, with few other international transactions. All of the websites reside on one physical server that is maintained by a hosting company. They are a small to medium-sized business with about 50 employees locally.

Initial Questions

I always start these investigations with two questions:

1) What evidence do you see of an attack? Defacement? Increased traffic? Interesting log entries?

2) What actions have you taken thus far to stop the attack?

Here was HappyFeet’s response to these questions:

1) We are seeing content changes and defacement on the home page and other pages. We are also seeing strange entries in the Apache logs.

2) We have been working with our hosting company to restore from previous backups. However, within hours of each restore, we are hacked again. This has been going on for the last couple of months. The hosting company has removed some malicious files, but we aren’t sure which ones.

Looking For Clues

The first thing I like to do in cases like this is poke around the web server to see what is really going on under the hood. Hosting companies often have management portals or FTP interfaces where you can interact with the web server, but having root access and a shell is extremely important to me. With this privileged account, I can look at all the relevant files for evidence that aligns with the observed behavior. Keep in mind, at this point I have not done anything to remove the web server from the production environment or shut it down. I am looking for valuable information that can really only be discovered while the attack is in progress. The fact that the hosting company restored from backup and removed files irks me, but there was still plenty of evidence available for me to analyze.

Here were some of my findings during this initial assessment – all of them centered on one of the three sites (a few shell commands for reproducing these checks appear after the list):

1) The web root for one of the three sites has a TON of files in it – many of which have strange names and recent modification dates. Files such as:

db_config-1.php

index_t.php

c99.php

2) Many of the directories (even the secure ones) are world-writable, with permissions:

drwxrwxrwx

3) There are zipped SQL dumps/backups in the web root, so when one is visited by a web browser the user is prompted for a download – yikes!

4) The site uses a content management system (CMS) that was last updated in 2006, and the database setup interface is still enabled and visible at the web root.

5) Directory listings are enabled, allowing a user to see the contents of directories – making discovery of the file names above a trivial task.

6) The Apache logs show incessant SQL injection attempts, which, when run, expose usernames and passwords in plain text.

7) The Apache logs also show many entries accessing a strange file called c99.php. It appeared to be some sort of interface that took shell commands as arguments, as is evident in the logs:

66.249.72.41 - - "GET /c99.php?act=ps_aux&d=%2Fvar%2Faccount%2F&pid=24143&sig=9 HTTP/1.1" 200 286
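For readers who want to reproduce this kind of triage, the checks above boil down to a handful of shell commands along these lines (the web root and log paths here are hypothetical):

# Directories that anyone on the system can write to
find /var/www -type d -perm -0002 -ls

# Stray database dumps and archives sitting in the web root
find /var/www -name "*.sql" -o -name "*.zip" -o -name "*.gz"

# Recently modified PHP files, a common sign of dropped shell scripts
find /var/www -name "*.php" -mtime -30 -ls

# Requests hitting the suspicious shell file
grep "c99.php" /var/log/apache2/access.log | less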

Nature of the Attack

There were two basic findings that stood out to me most:

1) The c99.php file.

2) The successful SQL injection log entries.

c99.php

I decided to do some research and quickly found out that this is a popular PHP shell file. It was somehow uploaded to the web server and the rest of the mayhem was conducted through this shell script in the browser. But how did it get there?

The oldest log data on the server was from December 19, 2011. At the very top of this log file were commands accessing c99.php, so I couldn’t really be sure how it got there, but I had a couple of guesses:

1) The most likely scenario, I thought, was that the attacker was able to leverage the file upload feature of the dated CMS – either by accessing it without an account, or by brute-forcing an administrative account with a weak password.

2) There was no hardware firewall protecting connections to the server, and many legacy FTP and SSH accounts were festering that hadn’t been properly removed when they were no longer needed. One of these accounts could have been brute-forced – more likely an FTP account with limited access; otherwise, a shell script wouldn’t really be necessary to interact with the server.

The log entries associated with c99.php were extremely interesting. There would be 50 or so GET requests, which would run commands like:

cd, ps aux, ls -al

Then there would be a POST request, which would either put a new file in the current directory or modify an existing one.

This went on for tens of thousands of lines. The repetitive, linear nature of the entries looked to me very much like an automated process of some type.

SQL Injection

The SQL injection lines of the logs were also very exploratory in nature. There was a long period of information gathering and testing against a few different PHP pages to see how they responded to database code. Once the attacker realized the site was vulnerable, the onslaught began, and eventually they were able to discover the information schema and table names of the pertinent databases. From there, it was just a matter of running through the tables one at a time, pulling rows of data.

What Was The Attack After?

The motives were pretty clear at this point. The attacker was a) attempting to control the server for use in other attacks or to send spam, and b) gathering whatever sensitive information they could from databases or configuration files before moving on. Exploited usernames and passwords could later be used in identity theft, for example. Both of these motives are standard for botnet-based attacks. It should be noted that the attacker was not specifically after HappyFeet – in fact, they probably knew nothing about them – they just used automated probing to look for red flags and, when the probes returned positive results, assimilated the server into their network.

Let the Cleanup Begin

Now that the scope of the attack was more fully understood, it was time to start cleaning up the server. When I conduct this phase of a project, I NEVER delete anything, no matter how obviously malicious or how benign. Instead, I quarantine it outside the web root, where I later archive and remove it for backup storage.

Find all the shell files

The first thing I did was attempt to locate all of the shell files that might have been uploaded by c99.php. Because my primary theory was that the shell file was uploaded through a file upload feature on the web site, I checked those directories first. Right away I saw a file that didn’t match the naming convention of the others: the directory was called “pdfs,” yet this file had a .php extension. It was also called broxn.php, whereas the regular files had longer, camel-cased names that made sense to HappyFeet. I visited this file in the web browser and saw a GUI-like shell interface. I checked the logs for usage of this file, but found none. Perhaps it was just an intermediary used to get c99.php to the web root. I then used a basic find command to pull a list of all PHP files from the web root forward. Obviously this was a huge list, but it was pretty easy to run through quickly because of the naming differences in the files. I only had to investigate ten or so files manually.
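The find invocation was roughly of this shape (the web root path is hypothetical):

# List every PHP file from the web root down, newest first, for manual review
find /var/www/happyfeet -type f -name "*.php" -printf "%TY-%Tm-%Td %p\n" | sort -r > php_files.txt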

I found three other shell files in addition to broxn.php. I looked for evidence of these in the logs, found none, and quarantined them.

What files were uploaded or which ones changed?

Because of the insane number of GET requests served by c99.php, I thought it was safe to assume that every file on the server was compromised; it wasn’t worth going through the logs manually on this point. The attacker had access to the server long enough that this assumption is the only safe one. The less frequent POST requests were much more manageable. I did a grep through the Apache logs for POST requests handled by c99.php and came up with a list of about 200 files. My thought was that these files were all either new or modified, and could potentially be malicious. I began the somewhat painstaking process of manually reviewing them. Some had been overwritten back to their original state by the hosting company’s backup, but some were still malicious and in place. I noted these files, quarantined them, and retested website functionality.
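The grep was, roughly, a one-liner like the following (log path hypothetical; the file paths targeted by each POST rode along in the query string):

# Pull the unique request strings for every POST handled by the shell file
grep 'POST /c99.php' /var/log/apache2/access.log | awk '{print $7}' | sort -u > c99_posts.txt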

Handling the SQL injection vulnerabilities

The dated CMS used by this site was riddled with SQL injection vulnerabilities – so much so that my primary recommendation was to build a brand new site. That process, however, takes time, and we needed a temporary solution. I used the log data I had to figure out which pages the botnet was primarily targeting with SQL attacks, and manually modified the PHP code to do basic sanitizing of all inputs on those pages. This immediately thwarted SQL attacks going forward, but the damage had already been done. The big question was how to handle the fact that all usernames and passwords were compromised.
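To find the most-targeted pages, a log ranking along these lines does the job (the injection patterns shown are illustrative, not exhaustive):

# Count SQL injection probes per page, most-attacked pages first
grep -iE 'union(%20|\+| )select|information_schema' /var/log/apache2/access.log | awk '{print $7}' | cut -d'?' -f1 | sort | uniq -c | sort -rn | head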

Improving Security

Now that I felt the server was sufficiently cleaned, it was time to beef up the security controls to prevent future attacks. Here are some of the primary tasks I completed to accomplish this:

1) Added a hardware firewall for SSH and FTP connections.

I worked with the hosting company to put this appliance in front of the web server. Now, only specific IPs could connect to the web server via SSH and FTP.

2) Audited and recreated all accounts.

I changed the passwords of all administrative accounts on the server and in the CMS, and regenerated database passwords.

3) Put IP restrictions on the administrative console of the CMS.

Now, only certain IP addresses could access the administrative portal.

4) Removed all files related to install and database setup for the CMS.

These files were no longer necessary and only presented a security vulnerability.

5) Removed all zip files from the web root forward and disabled directory listings.

These files were readily available for download and exposed all sorts of sensitive information. I also disabled directory listings, which is helpful in preventing successful information gathering.

6) Hashed customer passwords for all three sites.

Now, the passwords for user accounts were not stored in plain text in the database.

7) Added file integrity monitoring to the web server.

Whenever a file changes, I am notified via email. This greatly helps reduce the scope of an attack should one breach all of these controls.

8) Wrote a custom script that blocks IP addresses that put malicious content in the URL.

This helps prevent information gathering and further vulnerability probing. The actions this script takes operate like a miniature NetGladiator (a minimal sketch of the idea appears after this list).

9) Installed anti-virus software on the web server.

10) Removed world-writable permissions from every directory and adjusted ownership accordingly.

No directory should ever be world-writable – doing so is usually just a lazy way of avoiding proper ownership. The world-writable aspect of this server allowed the attack to be far broader than it needed to be.

11) Developed an incident response plan.

I worked with the hosting company and HappyFeet to develop an internal incident response policy in case something happens in the future.
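The script I wrote for HappyFeet is tuned to their attack history, but a minimal sketch of the idea looks something like this (paths and patterns are hypothetical, and a real version needs an allowlist and rule cleanup):

#!/bin/bash
# Watch the access log and block IPs that send obviously malicious URLs
LOG=/var/log/apache2/access.log
PATTERN='union.*select|information_schema|\.\./|/etc/passwd|c99\.php'

tail -F "$LOG" | while read -r line; do
  if echo "$line" | grep -qiE "$PATTERN"; then
    ip=$(echo "$line" | awk '{print $1}')
    # Add a drop rule only if this address is not already blocked
    if ! iptables -C INPUT -s "$ip" -j DROP 2>/dev/null; then
      iptables -A INPUT -s "$ip" -j DROP
      echo "$(date): blocked $ip" >> /var/log/ip_blocker.log
    fi
  fi
done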

Public Response

Because all usernames and passwords were compromised, I urged HappyFeet to communicate the breach to their customers. They did so, and later received feedback from users who had experienced identity theft. This can be a tough step to take from a business point of view, but transparency is always the best policy.

Ongoing Monitoring

It is not enough to implement the above controls and then forget about them; there must be ongoing tweaking and monitoring to ensure a strong security profile. For HappyFeet, I set up a yearly monitoring package that includes the following (sketched as cron entries after the list):

– Manual and automated log monitoring.

– Server vulnerability scans once a quarter, and web application scans once every six months.

– Manual user history review.

– Manual anti-virus scans and results review.
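For a sense of cadence, the package maps onto a schedule roughly like these hypothetical cron entries (the script names are placeholders, not real tools):

# Daily automated log sweep
0 2 * * * /usr/local/bin/log_review.sh
# Quarterly server vulnerability scan
0 3 1 */3 * /usr/local/bin/server_vuln_scan.sh
# Semiannual web application scan
0 4 1 */6 * /usr/local/bin/webapp_scan.sh
# Weekly anti-virus sweep of the web root
0 5 * * 0 clamscan -r /var/www --log=/var/log/clamav/weekly.log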

Web Application Firewalls

I experimented with two types of web application firewalls for HappyFeet, and both took me down the road of broken functionality and over-robustness. One had to be completely uninstalled, and the other is in monitoring mode because protection mode disallowed legitimate requests. It also alerts on probing attempts about 5,000 times per day – most of which are not real attacks – and the alert volume is unmanageable. Its only value is in generating data to improve my custom script that blocks IPs based on basic malicious attempts.

This is a great example of how NetGladiator can provide a lot of value in the right environment. They don’t need an intense, enterprise-level intrusion prevention system – they just need to block the basics without breaking functionality in their web sites. The custom script, much like NetGladiator, suits their needs to a T, and can also be configured to reflect previous attacks and vulnerabilities I found in their site that are too numerous to patch manually.

Lessons Learned

Here are some key takeaways from the above project:

– Being PROACTIVE is so much better than being REACTIVE when it comes to web security. If you are not sure where you stack up, have an expert take a look.

– Always keep software and web servers up to date. New security vulnerabilities arrive on the scene daily, and it’s extremely likely that old software is vulnerable. Often, security holes are even published for an attacker to research; it’s just a matter of finding out which version you run and testing the security flaw.

– Layered security is king. The security controls mentioned above prove just how powerful layering can be. They are working together in harmony to protect an extremely vulnerable application effectively.

If you have any questions on NetGladiator, web security, or the above case study, feel free to contact us any time! We are here to help, and don’t want you to ever experience an attack similar to the one above.
