The Hacker Chronicles: After the Hack

Written by Super User on 27 December 2012

Man is it quiet in here. No whirring of fans, no blinking lights, no muted hum of hard drives being accessed; just the slightly oily smell of the air handlers and the barely audible sound of air moving through the vents in the floor. This is a well-designed server room, not unlike many server rooms I have been in. But this one is quiet, and that is never good. Every server, firewall, workstation, wireless access point, switch, router and laptop is turned off and sitting mute, waiting for me to resurrect it.

Even the cell phones are sitting in a nondescript cardboard box with their batteries removed. This is not a Saturday or Sunday. This is a Tuesday afternoon in a financial services office of 92 employees that has been hacked. Their CFO tells me that every hour they cannot access their files, receive email from clients or conduct business is costing them $12,000. I think to myself, “it is a little more than that once you add the cost of my team being here to clean up this mess,” but I just nod. Past experience has shown that it is not good to prod a wounded CFO with a dollar bill. As I stand there soaking up the silence, I get a nod from my Senior Security Engineer that we are ready to power up the first server. Our equipment is in place, and we will be able to capture and analyze every bit that goes to or comes from the server to determine whether this server was the point of infection. We mount our drives, make forensic duplicates of every drive in the server, load our analysis tools, start a virus scan and away we go…
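For readers curious about the duplication step, the core idea is simple: image the drive, hash both the original and the copy, and only ever analyze the copy. Here is a stripped-down Python sketch of that verification; the device and image paths are invented, and in practice the imaging itself is done with write blockers and dedicated forensic tools.

```python
# Minimal sketch: confirm a forensic copy matches the original drive.
# Paths are illustrative only; real imaging uses write-blocking hardware.
import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Stream a file (or raw device) through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("/dev/sdb")            # the suspect drive, read-only
duplicate = sha256_of("/evidence/sdb.img")  # the forensic image we just made

if original == duplicate:
    print("Image verified: hashes match, safe to analyze the copy.")
else:
    print("Hash mismatch: re-image the drive before any analysis.")
```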

When a network is compromised, we are called in to clean up the mess. The sad part about this is that it is preventable, but people just don’t think it is going to happen to them, so they don’t have their networks audited (even when the law requires it) and choose to live in ignorance. Had they known that they would be down for the better part of a week and unable to conduct business, I bet they would have had an audit performed that might have caught the vulnerability the bad guys used to compromise them. But they did not, so we have to come in and perform a forensic analysis of each server, scan it for viruses, check it for botnets, examine every user and process to make sure there are no back doors, and finally verify every computer the server is trying to communicate with to make sure all of the traffic is legitimate. Sounds daunting, right? It is, but the worst part is still to come. The technology part is painstaking analysis, but that is a walk in the park compared to the PR nightmare and the onslaught of lawsuits from customers and the local, state and federal investigators coming their way. But we handle that too, and it will take months, if not years, to get THAT mess cleared up. Yeah, I would much rather work on the servers.
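To make that last check a little more concrete, here is a simplified Python sketch using the psutil library to list a machine’s established connections and flag anything talking to a host that is not on an approved list. The allow list here is hypothetical; on a real engagement it comes from the customer’s network documentation.

```python
# Rough triage step: flag every established connection to a host we have not
# explicitly approved. The allow list is invented for illustration.
import psutil

KNOWN_GOOD = {"10.0.0.5", "10.0.0.6", "192.168.1.10"}  # hypothetical internal hosts

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    remote_ip = conn.raddr.ip
    if remote_ip not in KNOWN_GOOD:
        proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"Review: {proc} (pid {conn.pid}) is talking to {remote_ip}:{conn.raddr.port}")
```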

11 hours onsite

We have now been onsite for 11 hours or so, and the first server gets a clean bill of health. The CFO has been in here four times now asking what is taking so long. The last time he was in I offered to turn the process over to him, but he declined and I have not seen him since. I turn to the very downtrodden-looking IT Director and ask him which server he wants to bring up next. Yes, they have their own IT staff, and many of that staff are very, very concerned about their jobs at the moment. People just do not understand that an IT person has to concentrate on EVERYTHING in the company to keep it running, whereas a security professional concentrates almost exclusively on security, and that knowledge base alone is vast and nearly impossible for a general IT staff to master. The Director doesn’t speak; he just looks at the email server in the next rack over and nods his head at it. We start setting up our equipment on the email server and the entire process starts over. It is going to be a very long day (night… week…).

29 hours onsite

The email server had multiple issues, and after fixing one only to uncover three more, we decided it would cost less to simply rebuild the server from scratch, then mount each user’s mailbox file and scan it before copying it over. This server may well have been the initial infection point, but the log files are so convoluted that it will take time to know for sure. The engineering team is rebuilding the server while I start on the log files. I hate this part. It is like looking for a needle in a haystack.
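Conceptually, the scan-before-copy loop looks something like the Python sketch below. The export directory, destination and file extension are all invented, and ClamAV’s clamscan is standing in for whatever scanner the response team actually uses; the point is simply that nothing gets copied onto the rebuilt server until it comes back clean.

```python
# Simplified "scan before you copy" loop for the rebuilt mail server.
# Assumes mailboxes were exported to individual files and clamscan is installed.
import shutil
import subprocess
from pathlib import Path

EXPORTED = Path("/recovery/mailboxes")   # hypothetical export location
CLEAN = Path("/rebuilt/mailboxes")       # destination on the rebuilt server

for mailbox in EXPORTED.glob("*.pst"):
    # clamscan exits 0 when the file is clean, non-zero when something was found
    result = subprocess.run(["clamscan", "--no-summary", str(mailbox)],
                            capture_output=True, text=True)
    if result.returncode == 0:
        shutil.copy2(mailbox, CLEAN / mailbox.name)
    else:
        print(f"Quarantine and hand-inspect: {mailbox.name}")
```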

33 hours onsite

I found an odd-looking needle in the haystack. Over the weekend a user’s permissions were changed from low-level access (the user is actually a printer/scanner) to administrative access. And it happened at 4:04 a.m. on a Sunday morning. Now we backtrack to see who changed it and find out it was a power user from the investment group. His account was accessed only 40 minutes before the change, and from an IP address that traces back to Latvia. Great. Another hacker from across the pond we will never see brought to justice. Now the question is whether this was the only server breached or whether there were others. Back to combing through log files. Half of the response team is sent home to shower, rest and be back in 12 hours.
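The find itself boils down to one filter: privilege changes made outside business hours. Here is the gist in Python; the CSV layout, column names and business-hours window are placeholders for whatever audit export and policy the customer actually has.

```python
# Flag privilege changes that happen outside business hours.
# The audit_export.csv format is a made-up stand-in for a real audit log export.
import csv
from datetime import datetime

BUSINESS_HOURS = range(7, 19)  # 7:00 through 18:59, Monday to Friday

with open("audit_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        when = datetime.fromisoformat(row["timestamp"])
        is_priv_change = row["event"] == "group_membership_added"
        off_hours = when.weekday() >= 5 or when.hour not in BUSINESS_HOURS
        if is_priv_change and off_hours:
            print(f"{when}: {row['actor']} elevated {row['target']} (review this one)")
```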

40 hours onsite

Email and file servers are back up. I walk outside to take a break and catch 30 minutes of sleep in the car. Why is it so dark out here, and what day is it again? 29 minutes and 53 seconds later I am on my way back into the server room to fire up the last server. As we start our analysis on the last server, one of the Infrastructure Engineers begins examining the switches to make sure they have not been compromised, upgrading their firmware where necessary. Forty-five minutes later the all-clear comes in on the switches, so we reconnect the two clean servers to them, the domains begin to synchronize again, and we can start copying email to the newly built email server. The wireless access points are also examined, and one is found to be completely misconfigured and has to be wiped and rebuilt. Probably not the hacker’s doing, but if you are rebuilding a network in a hurry you still have to do it right.

48 hours onsite

The last server is given a clean bill of health and is coming back up. The firewall had to be reconfigured; it looks like someone accessed it and created tunnels for streaming BitTorrent traffic. Not good. If stolen software, pirated movies or pornography is hosted on these servers and streamed to the Internet, then the company could have law enforcement walk in at any time, shut down their network and seize EVERYTHING. Time to start on the workstations and laptops.
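A back-of-the-envelope version of that firewall audit: dump the port-forwarding rules and flag anything that is not documented, paying extra attention to the classic BitTorrent port range. The JSON layout and the documented-rules set below are both invented for the example.

```python
# Flag undocumented port forwards in a firewall rule export.
# The rule file format and the approved list are illustrative only.
import json

DOCUMENTED = {(443, "10.0.0.5"), (25, "10.0.0.6")}  # hypothetical approved forwards
BITTORRENT_PORTS = set(range(6881, 6890))           # classic BitTorrent port range

with open("firewall_rules.json") as f:              # hypothetical rule export
    rules = json.load(f)

for rule in rules:
    key = (rule["external_port"], rule["internal_host"])
    if key not in DOCUMENTED:
        tag = ("possible torrent tunnel" if rule["external_port"] in BITTORRENT_PORTS
               else "undocumented forward")
        print(f"{tag}: {rule['external_port']} -> {rule['internal_host']}:{rule['internal_port']}")
```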

60 hours onsite

Workstations are all cleared, and multiple infections were found on a couple of the laptops. Oddly enough, it does not look like they were the primary infection site but a secondary one. The primary site is still eluding us on this one. The FBI showed up a few minutes ago asking questions about the loss of customer data and the state of the analysis. The IT Director had to field some tough questions about the security measures that were in place, and his answers did not exactly placate the FBI. The news conference is in 45 minutes with several TV and radio stations that will want ALL the answers even though we are only about 25% of the way through our own analysis, but this is typical. TV and news stations live their lives in 30-second sound bites, and when they are told they have to wait for explanations they immediately get frustrated and usually take that frustration out on the person holding the news conference. I’ve seen some great CIOs and CTOs crumble under the onslaught of reporters when they smell weakness. It is never pretty, and it takes a practiced hand to navigate that minefield. I am just glad that I am off the hook for this one and one of our partner companies is handling the damage control.

74 hours onsite

The point of infection has been found! One of the cell phones that was mated to a laptop was compromised by an app, giving the hacker full access to the phone and any device it attached to. The app’s source was traced to a company in Ukraine that appears to be a shell company, a front if you will, with a flashy web page and free games for download. Now that we have the infection point, we can start bringing everything back online and get the company back to work. We still have 100-150 hours of analysis to perform, but we can do that in our own lab. The CEO and CFO will not be pleased to learn that a) an employee caused all this just because her daughter wanted the latest “Unicorn Races” game and b) there is no way to hold the bad guys accountable. There is no extradition, and even if there were, these crews have paid off the local police and are usually working with organized crime, so they will be untouchable. We have a post-event meeting with all the stakeholders, and the IT Director is noticeably absent. It turns out he resigned this morning under pressure from his superiors. Too bad; he was a good guy who was doing his best. Now there are reports to write, more analysis to perform and meetings to be held with our partner companies that are assisting us, but not before a much-needed nap on a quiet beach…