By Zack Sanders – Security Expert – APconnections
In early 2012, I took on a client who was a referral from someone I had worked with when I first got out of school. When the company's CTO initially called me, they were actually being attacked at that very moment. I got to work right away, using my background as both a web application hacker and a forensic analyst to try to answer the key questions we briefly touched on in a blog post just last week. Questions such as:
– What was the nature of the attack?
– What kind of data was it after?
– What processes and files on the machine were malicious and/or which legitimate files were now infected?
– How could we maintain business continuity while at the same time ensuring that the threat was truly gone?
– What sort of security controls should we put in place to make sure an attack doesn’t happen again?
– What should the public and internal responses be?
Background
For the sake of this case study, we’ll call the company HappyFeet Movies – an organization that specializes in online dance tutorials. HappyFeet has three basic websites, all of which help sell and promote their movies. Most of the company’s business occurs in the United States and Europe, with few other international transactions. All of the websites reside on one physical server that is maintained by a hosting company. They are a small to medium-sized business with about 50 employees locally.
Initial Questions
I always start these investigations with two questions:
1) What evidence do you see of an attack? Defacement? Increased traffic? Interesting log entries?
2) What actions have you taken thus far to stop the attack?
Here was HappyFeet’s response to these questions:
1) We are seeing content changes and defacement on the home page and other pages. We are also seeing strange entries in the Apache logs.
2) We have been working with our hosting company to restore to previous backups. However, after each backup, within hours, we are getting hacked again. This has been going on for the last couple of months. The hosting company has removed some malicious files, but we aren’t sure which ones.
Looking For Clues
The first thing I like to do in cases like this is poke around the web server to see what is really going on under the hood. Hosting companies often have management portals or FTP interfaces where you can interact with the web server, but having root access and a shell is extremely important to me. With this privileged account, I can go and look at all the relevant files for evidence that aligns with the observed behavior. Keep in mind, at this point I have not done anything as far as removing the web server from the production environment or shutting it down. I am looking for valuable information that really can only be discovered while the attack is in progress. The fact that the hosting company has restored to backup and removed files irks me, but there is still plenty of evidence available for me to analyze.
Here were some of my findings during this initial assessment – all of them based around one of the three sites:
1) The web root for one of the three sites has a TON of files in it – many of which have strange names and recent modification dates. Files such as:
db_config-1.php
index_t.php
c99.php
2) Many of the directories (even the secure ones) are world writable, with permissions:
drwxrwxrwx
3) There are SQL dumps/backups in the web root, zipped, so that anyone visiting them in a web browser is prompted to download them – yikes!
4) The site uses a content management system (CMS) that was last updated in 2006 and the database setup interface is still enabled and visible at the web root.
5) Directory listings are enabled, allowing a user to see the contents of the directories – making discovery of the file names above a trivial task.
6) The Apache logs show incessant SQL injection attempts which, when run, expose usernames and passwords in plain text.
7) The Apache logs also show many entries accessing a strange file called c99.php. It appeared to be some sort of interface that took shell commands as arguments, as is evident in the logs:
66.249.72.41 - - "GET /c99.php?act=ps_aux&d=%2Fvar%2Faccount%2F&pid=24143&sig=9 HTTP/1.1" 200 286
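For anyone triaging a similar incident, this kind of log sweep can be sketched roughly as follows. The log path and sample entries below are illustrative stand-ins, not HappyFeet's actual data:

```shell
# Illustrative sample of Apache access-log lines; on a real server you
# would read /var/log/apache2/access.log (or httpd/access_log) instead.
cat > access.log <<'EOF'
66.249.72.41 - - "GET /c99.php?act=ps_aux&d=%2Fvar%2Faccount%2F HTTP/1.1" 200 286
10.0.0.5 - - "GET /index.php HTTP/1.1" 200 1042
66.249.72.41 - - "GET /c99.php?act=ls&d=%2Fvar%2Fwww%2F HTTP/1.1" 200 311
EOF

# Pull every request that touched the shell file
grep 'c99\.php' access.log

# Count them to get a feel for the volume of attacker activity
grep -c 'c99\.php' access.log
```

The same pattern works for any suspicious file name: grep for it, count the hits, and note the source IPs and timestamps.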
Nature of the Attack
There were two basic findings that stood out to me most:
1) The c99.php file.
2) The successful SQL injection log entries.
c99.php
I decided to do some research and quickly found out that this is a popular PHP shell file. It was somehow uploaded to the web server and the rest of the mayhem was conducted through this shell script in the browser. But how did it get there?
The oldest log data on the server was from December 19, 2011. At the very top of this log file were commands accessing c99.php, so I couldn't really be sure how it got there, but I had a couple of guesses:
1) The most likely scenario, I thought, was that the attacker was able to leverage the file upload feature of the dated CMS – either by accessing it without an account, or by brute forcing an administrative account with a weak password.
2) There was no hardware firewall protecting connections to the server, and many legacy FTP and SSH accounts were festering that hadn't been properly removed when they were no longer needed. One of these accounts could have been brute forced – more likely an FTP account with limited access, since an attacker with working SSH credentials would have no real need for a PHP shell script to interact with the server.
The log entries associated with c99.php were extremely interesting. There would be 50 or so GET requests, which would run commands like:
cd, ps aux, ls -al
Then there would be a POST request, which would either put a new file in the current directory or modify an existing one.
This went on for tens of thousands of lines. The mechanical, linear nature of the entries suggested an automated process of some kind.
SQL Injection
The SQL injection lines of the logs were also very exploratory in nature. There was a long period of information gathering and testing against a few different PHP pages to see how they responded to database code. Once the attacker realized that the site was vulnerable, the onslaught began and eventually they were able to discover the information schema and table names of pertinent databases. From there, it was just a matter of running through the tables one at a time pulling rows of data.
What Was The Attack After?
The motives were pretty clear at this point. The attacker was a) attempting to control the server for use in other attacks or to send spam, and b) gathering whatever sensitive information they could from databases or configuration files before moving on. Compromised usernames and passwords could later be used in identity theft, for example. Both motives are very standard for botnet-based attacks. It should be noted that the attacker was not specifically after HappyFeet – in fact, they probably knew nothing about the company – they simply used automated probing to look for red flags and, when the probes returned positive results, assimilated the server into their network.
Let the Cleanup Begin
Now that the scope of the attack was more fully understood, it was time to start cleaning up the server. When I am conducting this phase of the project, I NEVER delete anything, no matter how obviously malicious or how benign. Instead, I quarantine it outside of the web root, where I will later archive and remove it for backup storage.
Find all the shell files
The first thing I did was attempt to locate any other shell files that might have been uploaded along with c99.php. Because my primary theory was that the shell file came in through a file upload feature in the website, I checked those directories first. Right away I saw a file that didn't match the naming convention of the others: the directory was called "pdfs," yet this file had a .php extension. It was also named broxn.php, whereas the regular files had longer, camel-case names that made sense to HappyFeet. I visited the file in a web browser and saw a GUI-like shell interface. I checked the logs for usage of this file but found none – perhaps it was just an intermediary used to get c99.php to the web root. I then used a basic find command to pull a list of all PHP files from the web root forward. Obviously this was a huge list, but the naming differences made it easy to run through quickly; I only had to investigate ten or so files manually.
I found three other shell files in addition to broxn.php. I looked for evidence of these in the logs, found none, and quarantined them.
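The find-based sweep can be sketched roughly like this. The web-root path and file names below are stand-ins, not HappyFeet's real tree:

```shell
# Create a tiny stand-in web root for illustration; on a real server
# this would be the site's document root, e.g. /var/www/html.
mkdir -p webroot/pdfs
echo '<?php /* suspicious */ ?>' > webroot/pdfs/broxn.php
echo '<?php /* legit */ ?>' > webroot/danceStepsIntro.php

# List every PHP file from the web root forward; odd names like
# broxn.php stand out immediately against the site's naming convention.
find webroot -type f -name '*.php'

# If the list is huge, narrowing to recently modified files helps.
find webroot -type f -name '*.php' -mtime -30
```

Sorting the output by modification time (`-printf '%T@ %p\n' | sort -n` with GNU find) is another quick way to surface the newest, most suspicious files first.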
What files were uploaded or which ones changed?
Because of the insane number of GET requests served by c99.php, I thought it was safe to assume that every file on the server was compromised; the attacker had access to the server long enough that this is the only safe assumption, and it wasn't worth going through the logs manually on this point. The less frequent POST requests were much more manageable. I did a grep through the Apache logs for POST requests submitted by c99.php and came up with a list of about 200 files. My thought was that these files were all either new or modified and could potentially be malicious. I began the somewhat painstaking process of manually reviewing them. Some had been overwritten back to their original state by the hosting company's backup, but some were still malicious and in place. I noted these files, quarantined them, and retested website functionality.
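A rough sketch of that grep pass follows. The log lines and the `f=` query parameter are assumptions about how this particular shell names its write target, used here only to illustrate the technique:

```shell
# Illustrative log; a real analysis reads the full Apache access log.
cat > access.log <<'EOF'
66.249.72.41 - - "GET /c99.php?act=ls HTTP/1.1" 200 311
66.249.72.41 - - "POST /c99.php?act=f&f=index.php HTTP/1.1" 200 512
66.249.72.41 - - "POST /c99.php?act=f&f=db_config-1.php HTTP/1.1" 200 498
EOF

# Keep only the POSTs (file writes), pull the target file name out of
# the query string, and de-duplicate into a manual-review list.
grep '"POST /c99\.php' access.log \
  | sed 's/.*[?&]f=\([^ &"]*\).*/\1/' \
  | sort -u > files_to_review.txt

cat files_to_review.txt
```

Each file on the resulting list then gets a manual diff against a known-good backup before being cleared or quarantined.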
Handling the SQL injection vulnerabilities
The dated CMS used by this site was riddled with SQL injection vulnerabilities – so much so that my primary recommendation was building a brand new site. That process, however, takes time, and we needed a temporary solution. I used the log data I had to figure out which pages the botnet was primarily targeting with SQL attacks, and I manually modified the PHP code to do basic sanitizing on all inputs to those pages. This immediately thwarted SQL attacks going forward, but the damage had already been done. The big question was how to handle the fact that all usernames and passwords were compromised.
Improving Security
Now that I felt the server was sufficiently cleaned, it was time to beef up the security controls to prevent future attacks. Here are some of the primary tasks I did to accomplish this:
1) Added a hardware firewall for SSH and FTP connections.
I worked with the hosting company to put this appliance in front of the web server. Now, only specific IPs could connect to the web server via SSH and FTP.
2) Audited and recreated all accounts.
I changed the passwords of all administrative accounts on the server and in the CMS, and regenerated database passwords.
3) Put IP restrictions on the administrative console of the CMS.
Now, only certain IP addresses could access the administrative portal.
4) Removed all files related to install and database setup for the CMS.
These files were no longer necessary and only presented a security risk.
5) Removed all zip files from the web root forward and disabled directory listings.
These files were readily available for download and exposed all sorts of sensitive information. I also disabled directory listings, which is helpful in preventing successful information gathering.
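The zip sweep can be sketched like this, following the quarantine-don't-delete rule from earlier. Paths are illustrative; directory listings themselves are disabled separately in the Apache configuration (e.g. with `Options -Indexes`):

```shell
# Stand-in web root; on a real server this is the document root.
mkdir -p webroot/old quarantine
echo 'fake sql dump' > webroot/customers_backup.zip
echo 'fake sql dump' > webroot/old/db.zip

# Move every zip out of the web root into a quarantine area instead of
# deleting it, so nothing is lost if a file turns out to be needed.
find webroot -type f -name '*.zip' -exec mv {} quarantine/ \;

# Verify nothing downloadable is left behind (should print nothing).
find webroot -type f -name '*.zip'
```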
6) Hashed customer passwords for all three sites.
Now, the passwords for user accounts were not stored in plain text in the database.
7) Added file integrity monitoring to the web server.
Whenever a file changes, I am notified via email. This greatly helps reduce the scope of an attack should it breach all of these controls.
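A minimal checksum-based integrity monitor can be sketched as follows. A real deployment would run the comparison from cron and mail the diff; the paths and files here are stand-ins:

```shell
# Baseline: record a checksum for every file under the web root.
mkdir -p webroot
echo 'original content' > webroot/index.php
find webroot -type f -exec sha256sum {} \; | sort > baseline.sha256

# Later (e.g. from a cron job): recompute and compare. Any changed,
# added, or removed file shows up in the diff and can trigger an alert.
echo 'tampered content' > webroot/index.php
find webroot -type f -exec sha256sum {} \; | sort > current.sha256
diff baseline.sha256 current.sha256 || echo "File change detected"
```

Dedicated tools in this space (Tripwire, AIDE, and the like) add tamper-resistant baselines and richer reporting, but the core idea is exactly this checksum-and-compare loop.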
8) Wrote a custom script that blocks IP addresses that put malicious content in the URL.
This helps prevent information gathering or further vulnerability probing. The actions this script takes operate like a miniature NetGladiator.
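The custom script itself isn't published, but the general idea can be sketched like this. The attack signatures, addresses, and log lines are illustrative; a real deployment would feed the resulting list to iptables or a deny file rather than just printing it:

```shell
# Illustrative log with obvious injection attempts in the URLs.
cat > access.log <<'EOF'
10.0.0.5 - - "GET /index.php HTTP/1.1" 200 1042
203.0.113.9 - - "GET /movie.php?id=1%20UNION%20SELECT%20user,pass HTTP/1.1" 200 87
203.0.113.9 - - "GET /movie.php?id=1;ls%20-al HTTP/1.1" 500 0
EOF

# Extract the source IPs of any request containing classic attack
# strings in the URL, and de-duplicate them into a block list.
grep -iE 'union%20select|%27|\.\./|;ls' access.log \
  | awk '{print $1}' | sort -u > blocklist.txt

cat blocklist.txt    # prints 203.0.113.9
```

The signature list grows over time as new probing patterns show up in the logs, which is exactly the tuning role the WAF's monitoring-mode alerts ended up serving.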
9) Installed anti-virus software on the web server.
10) Removed world-writable permissions from every directory and adjusted ownership accordingly.
No directory should ever be world writable – doing so is usually just a lazy way of avoiding proper ownership. The world-writable directories on this server allowed the attack to be far broader than it needed to be.
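Finding and fixing world-writable directories looks roughly like this (the directory tree is a stand-in; on a real server the companion step is a chown pass so the web server user keeps write access where it genuinely needs it):

```shell
# Stand-in site tree with one world-writable directory.
mkdir -p site/uploads site/includes
chmod 777 site/uploads
chmod 755 site/includes

# Find every world-writable directory from the web root forward...
find site -type d -perm -0002

# ...and strip the world-writable bit from each one.
find site -type d -perm -0002 -exec chmod o-w {} \;

# Re-running the search should now print nothing.
find site -type d -perm -0002
```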
11) Developed an incident response plan.
I worked with the hosting company and HappyFeet to develop an internal incident response policy in case something happens in the future.
Public Response
Because all usernames and passwords were compromised, I urged HappyFeet to communicate the breach to their customers. They did so, and later received feedback from users who had experienced identity theft. This can be a tough step to take from a business point of view, but transparency is always the best policy.
Ongoing Monitoring
It is not enough to implement the above controls and then set them and forget them. There must be ongoing tweaking and monitoring to ensure a strong security profile. For HappyFeet, I set up a yearly monitoring package that includes:
– Manual and automated log monitoring.
– Server vulnerability scans once a quarter, and web application scans once every six months.
– Manual user history review.
– Manual anti-virus scans and results review.
Web Application Firewalls
I experimented with two web application firewalls for HappyFeet; both took me down the road of broken functionality and over-robustness. One had to be completely uninstalled, and the other is in monitoring mode because protection mode disallowed legitimate requests. It is also alerting on probing attempts about 5,000 times per day – most of which are not real attacks – and the alert volume is unmanageable. Its only value is in generating data for improving my custom script, which blocks IPs based on basic malicious attempts.
This is a great example of how NetGladiator can provide a lot of value in the right environment. HappyFeet doesn't need an intense, enterprise-level intrusion prevention system – they just need to block the basics without breaking functionality in their websites. The custom script, much like NetGladiator, suits their needs to a T, and it can also be configured to reflect the previous attacks and vulnerabilities I found in their site that are too numerous to patch manually.
Lessons Learned
Here are some key take-aways from the above project:
– Being PROACTIVE is so much better than being REACTIVE when it comes to web security. If you are not sure where you stack up, have an expert take a look.
– Always keep software and web servers up to date. New security vulnerabilities arrive on the scene daily, and it's extremely likely that old software is vulnerable. Often, security holes are even publicly documented for an attacker to research; it's just a matter of finding out which version you run and testing the flaw.
– Layered security is king. The security controls mentioned above prove just how powerful layering can be. They are working together in harmony to protect an extremely vulnerable application effectively.
If you have any questions on NetGladiator, web security, or the above case study, feel free to contact us any time! We are here to help, and don’t want you to ever experience an attack similar to the one above.