Tuesday, January 14, 2020

Proxychains tool in a nutshell

In order to do penetration testing anonymously and decrease the possibility of identity detection, hackers need to use an intermediary machine whose IP address will be left on the target system. This can be done by using a proxy. "proxychains" is a tool that forces any TCP connection made by any given application through a proxy (or a chain of proxies) such as Tor, or any other SOCKS4, SOCKS5, or HTTP(S) proxy. Supported auth types: “user/pass” for SOCKS4/5, “basic” for HTTP.

Steps: You set up proxychains on your own system to use a proxy (or a sequence of proxies), then run "proxychains <command> [options]"; any network traffic generated by the <command> will be routed through the configured proxies.

Kali Linux comes with the tool built in. Otherwise, assuming you know how to install the tool, I will proceed with the use cases.

Using a single proxy with proxychains

Why useful?

It is useful when an attacker has compromised a server behind a firewall - he can set up dynamic SSH port forwarding to the compromised system (ingress SSH must be allowed for this to work), set the locally forwarded port as a SOCKS proxy in /etc/proxychains.conf, and use "proxychains <command> [options]" to attack the internal systems reachable from the compromised host.

Configure:
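A minimal sketch of this setup, assuming the compromised host is reachable at 10.0.0.5 (a hypothetical address) and local port 1080 is free:

ssh -f -N -D 1080 user@10.0.0.5

Here -D 1080 opens a local SOCKS listener, -N runs no remote command, and -f backgrounds the session after authentication. Then point proxychains at that listener in the [ProxyList] section of /etc/proxychains.conf:

socks4 127.0.0.1 1080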


Use:
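For example, to port-scan a hypothetical internal host 192.168.1.10 through the compromised server (use full-connect scans, since proxychains can relay only complete TCP connections):

proxychains nmap -sT -Pn 192.168.1.10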

Using multiple proxies

Why useful?

To increase anonymity while accessing a network or the web.

Configure:

Add the proxies in /etc/proxychains.conf.

Multiple proxies will be chained based on which mode is configured in the /etc/proxychains.conf file - dynamic_chain, strict_chain, or random_chain. You should un-comment the relevant configuration line. Along with random_chain, you may want to uncomment the line with chain_len, which determines how many of the proxies in your list will be used to build each random chain.
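A sketch of the relevant parts of /etc/proxychains.conf, using hypothetical proxy addresses:

random_chain
chain_len = 2

[ProxyList]
socks5 10.10.10.1 1080
socks4 10.10.10.2 1080
http 10.10.10.3 3128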

Use:

The user has to execute the command - "proxychains <command> [options]"
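For example, to route a whole browser session through the configured chain:

proxychains firefox www.example.com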

Find yourself some proxies, and get out there with proxychains!

Monday, December 18, 2017

CSV injection mitigations

It's a common feature for servers to allow exporting data in CSV format, which users then open on their own machines using spreadsheet software. The sources of data for the CSV export may be spread across many places in the server where a malicious user can put data. A malicious user can store specially crafted spreadsheet formulas on the server that end up in the exported CSV file. When the victim opens the CSV file (after ignoring multiple warnings from the spreadsheet software), the formulas execute on his machine, running arbitrary commands that the malicious user injected into the server.



I won't describe the payloads and impacts in this post, as there are already some well-written blogs covering those (for example, the George Mauer post linked in the next section). The purpose of this post is to describe proper mitigation.

Example of bad mitigation

One product in my organization fixed a similar issue with two layers of defense. For any CSV cell value that starts with +, -, @, or = (as suggested in http://georgemauer.net/2017/10/07/csv-injection.html and by OWASP), the fix added (1) a preceding TAB character and (2) single quotes around the cell value. But later we found that adding a single double quote (or a pair of them) before the attacker's payload simply evades the filter that checks for the characters +, -, @, =. E.g., if the attacker injects the payload =2+5+cmd|' /C calc'!A0, the filter will catch it and mitigate the risk. But if the attacker injects ""=2+5+cmd|' /C calc'!A0, the filter won't catch it, as it checks only for values starting with +, -, @, =. The end result is the same, because MS Excel simply skips the leading double quotes while rendering CSV values.

Suggested mitigation

So, the mitigation, while creating the CSV export on the server, should be -
  1. Create a whitelist of allowed characters for the stored data fields in the server and block requests that try to store other values. This is a best practice that reduces the chance of any injection attack. While creating the CSV export, explicitly disallow any of the following characters in exported values: +, -, @, =, |, ". If disallowing some characters is not possible in the context of the application, prepend those characters with a backslash (\).
  2. Additionally, prepend a SPACE, TAB, or SINGLE QUOTE to ALL CSV values before exporting them to the file (see the sketch after this list). This mitigation leaves the CSV file human-readable but not executable. DO NOT check for leading +, -, @, =, " and prepend to only those values.
  3. If the CSV files are created SOLELY for the purpose of information exchange among machines, do not tamper with the data, as that may break functionality. Instead you can take the following measures: save the exported file with a .txt extension, create a warning for GUI users while they are exporting a CSV, and update the documentation/SCG to make customers aware of the risk.
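A minimal sketch of point 2 above, assuming a simple export.csv with no embedded commas, quotes, or newlines inside fields (a real exporter should prepend the character while writing each field, before any CSV quoting is applied):

awk -v q="'" 'BEGIN{FS=OFS=","} {for (i=1; i<=NF; i++) $i = q $i; print}' export.csv > safe_export.csv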
Note:
  • In theory, escaping special characters should work, but if the CSV file is saved/exported again using Excel, the escape characters might be removed, so the saved file will carry the vulnerability again.
  • The above mitigation works with the spreadsheet software known at present (MS Excel, Google Sheets etc.). But other applications might use other characters to denote a formula in a CSV cell. Also, how the known spreadsheet software interprets the values might change in the future, weakening the above defense mechanisms.
Safe browsing!

Tuesday, November 28, 2017

Insecure SUDO configurations

System administrators often allow certain commands to be executed by non-root users in a controlled manner via sudo. The non-root user might be a local human user or the user with whose privileges a remote application runs. But this controlled allowance might result in unintended privileges for the non-root user if the configuration is improper.

Attack scenarios:

1. Unsafe options in sudo-able commands


SSH:
A program is running as a non-root Linux user and allows users to execute the ssh command as root, as configured in the sudoers file. The command string coming from the users is properly sanitized by the application to filter out command-injection characters like ;, &, | etc. The command name, the first word in the string, is also matched to be 'ssh'. What if some user enters the command -

sudo ssh -F /etc/shadow 127.0.0.1

As the file is not an SSH configuration file, the command will output each line of the /etc/shadow file in a verbose error message. This allows the attacker to read any file on the system.

ARP:

Similarly, sudo arp -v -f /path/file will allow the user to read any system file.

TELNET:

sudo telnet -n /path/file will allow the user to delete the contents of any system file.

TCPDUMP:

sudo tcpdump -i em0 -G 1 -z /tmp/lala.sh -w dump will execute the script /tmp/lala.sh as root.
ref: https://seclists.org/tcpdump/2010/q3/68

FIND:

sudo find /file/path -exec /bin/bash \; will provide a root bash to the non-root user. Using NOEXEC in sudoers will help prevent this.

SERVICE:

sudo service ../../bin/bash can be used to gain a root bash.

Fix: Use wrapper shells to invoke the desired commands instead of directly exposing the system commands to non-root users, and do the necessary sanitization in the wrapper script. Any option that takes a file path or executable path in a sudo-able command should be sanitized.

Use NOEXEC. This will prevent the sudo-able scripts/commands from executing other programs on their own, as in the examples above for TCPDUMP and FIND.
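A sketch of such a sudoers entry (edit with visudo), with a hypothetical user alice:

alice ALL=(root) NOEXEC: /usr/bin/find, /usr/sbin/tcpdump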

If you are doing application-layer filtering of (sudo-able) command options, don't look for the exact option strings like "-f", "-n", "-F" etc. These checks can be easily bypassed by combining options or using "\" - e.g. "-vf" or "-\f" will bypass them. Use a proper regex, or use a wrapper shell script that parses the options with "getopts" (or similar).
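A minimal wrapper sketch for the ssh example above, assuming a hypothetical policy where only the -p option and a plain host argument are allowed:

#!/bin/bash
# Parse options with getopts; anything other than -p <port> is rejected
while getopts ":p:" opt; do
  case "$opt" in
    p) PORT="$OPTARG" ;;
    *) echo "option not allowed" >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))
HOST="$1"
# The port must be numeric, so it cannot smuggle in another option
if [[ -n "$PORT" && ! "$PORT" =~ ^[0-9]+$ ]]; then
  echo "invalid port" >&2; exit 1
fi
# Allow only a plain hostname/IP - in particular, nothing starting with '-'
if [[ ! "$HOST" =~ ^[A-Za-z0-9._-]+$ || "$HOST" == -* ]]; then
  echo "invalid host" >&2; exit 1
fi
exec /usr/bin/ssh ${PORT:+-p "$PORT"} -- "$HOST"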

2. Non-root users having write permission to a sudo-able script or directory:


If /path/script.sh is sudo-able and non-root users have write permission to that file, then it's a call for trouble. Non-root users can modify the script to put arbitrary commands in it and then execute it as root.

Allowing /home/user in sudoers while non-root users have write access to that directory will allow non-root users to create arbitrary scripts in the /home/user directory and execute them as root.

Fix: sudo-able scripts/directories shouldn't be writable by non-root users. Never use wildcards in sudoers entries; allow only the specific commands that are necessary.
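For illustration, with a hypothetical user and paths:

# Bad: the wildcard lets the user run anything they drop under /home/user as root
user ALL=(root) /home/user/*
# Better: one specific, root-owned, non-writable script
user ALL=(root) /usr/local/sbin/restart_app.sh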

3. Improper input sanitization for a sudo-able script


A sudo-able custom script takes as input (a) a path to another script that it should execute, or (b) system commands/command options etc., and doesn't sanitize the inputs properly. Or it might contain an OS command injection vulnerability. Any such vulnerability in this script might allow an attacker to execute arbitrary scripts/commands as root.

Fix: Sanitize all inputs to a sudo-able script properly, as in the sketch below.
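For example, if the script must accept the path of another script to execute, a whitelist-by-directory sketch (assuming a hypothetical /opt/approved-scripts directory that is root-owned and not writable by others):

# Resolve symlinks and ".." components before checking the path
TARGET="$(realpath -e -- "$1")" || exit 1
case "$TARGET" in
  /opt/approved-scripts/*) exec "$TARGET" ;;
  *) echo "not allowed" >&2; exit 1 ;;
esac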

Monday, July 17, 2017

Improving Fortify Scan Time for Large Projects

If the project code base being scanned is in the GBs, a Fortify scan can take several days, irrespective of how powerful a machine you are using for the scan.

There are a few ways to improve the scan time dramatically without compromising the scan coverage or breaking the code base into smaller chunks:

Step 1:

While calling the sourceanalyzer utility, pass the parameters -
  • -Xmx<size>M or -Xmx<size>G
  • -Xss<size>M or -Xss<size>G
where -Xmx configures the maximum amount of heap memory and -Xss configures the maximum stack memory that the JVM can use while running sourceanalyzer.

You can assign the remaining memory to sourceanalyzer after calculating the memory needed for the OS and other running processes. The machine should be dedicated to scanning, and no programs unnecessary (w.r.t. the Fortify scan) should run on it.

As mentioned in HPE_SCA_Perf_Guide_17.10 - "Heap sizes between 32 GB and 48 GB are not advised due to internal JVM implementations. Heap sizes in this range perform worse than at 32 GB. Heap sizes smaller than 32 GB are optimized by the JVM. If your scan requires more than 32 GB, then you probably need much more than 48 GB such as 64 GB or higher."

Example: 
sourceanalyzer -Xmx4G -Xss1G -b "MyBuild" -cp "path/to/class-file" "path/to/code"
sourceanalyzer -Xmx4G -Xss1G -b "MyBuild" -scan -f "path/to/fpr"

If you set the SCA_VM_OPTS environment variable, the same effect can be achieved without passing the memory parameters to each sourceanalyzer call.
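For example, mirroring the values above (bash):

export SCA_VM_OPTS="-Xmx4G -Xss1G"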

Step 2:

Enable parallel analysis in one of the following ways:
  • Add the -mt option to the analysis phase command-line invocation. For example: sourceanalyzer -b MyBuild -scan -mt
  • Add the property setting com.fortify.sca.MultithreadedAnalysis=true to your fortify-sca.properties file, located in the <sca_install_dir>/core/config directory.
If you need to restrict the number of concurrently executing threads, set the com.fortify.sca.ThreadCount property in the fortify-sca.properties file. By default, Fortify uses all available processor cores.

NOTE: Parallel analysis is effective only during the scan phase, not during the build phase. These options are available ONLY from SCA 17.10 onwards; earlier versions used the -j option along with the scan command to achieve the same.

Reference: HPE_SCA_Guide_17.10 in Fortify documentation.

Thursday, June 22, 2017

BURP Automation and Build Integration

Introduction

Continuous integration of security tools is always desired to save manual effort and time. BURP is meant for manual testing and is a very powerful tool when used manually for security testing, thanks to its plethora of features like Scanner, Intruder, Repeater, Sequencer etc. However, the BURP Scanner component can be integrated with the build to facilitate automated generation of security scan reports.

How to Integrate

The Extender feature of BURP exposes multiple APIs that can be used to invoke different functions of BURP. You can write your own plugin in Java/Python/Ruby to interact with BURP programmatically and initiate your scan from the build system. Some reference implementations for this purpose are GDS Burp API and Jython Burp API. However, there is a third-party BURP app available in the BApp Store that already does the job for you – “Carbonator”.

You can install the plugin to enable calling BURP scan features from the command line. This post describes the setup and use of Carbonator to facilitate calling BURP features from the command line, so that it can be used by any build agent.

Using Carbonator

Setting up the plugin


Carbonator is written in Python, so to enable it you have to download the Jython standalone JAR file from http://www.jython.org/downloads.html and set its path under ‘Python Environment’ in the ‘Options’ tab of ‘Extender’.

Once the environment is set, go to the ‘BApp Store’ tab, select ‘Carbonator’ from the list, and install it.

Make sure that the following option is checked –

Now, close BURP gracefully to ensure the extension is saved within the BURP JAR.

The next time you open BURP, to confirm that it is installed properly, check that it is listed and ticked under the ‘Extensions’ tab of ‘Extender’.

Running BURP from Command-line

Now, to invoke a BURP scan from the command line, enter the following command –
java -jar -Xmx2g path/to/burp.jar scheme fqdn port app-path
For example:
If you want to scan http://127.0.0.1:8080/dvwa/ from Windows, the command should be -
java -jar -Xmx2g "path\to\burp.jar" http 127.0.0.1 8080 /dvwa

When you invoke this command, it will –
  • Start BURP
  • Set the URL to Scope
  • Start the Spider and populate the site-map
  • Start the scanner
  • Once the scan is done, generate the report and place it in the directory where the BURP JAR is located
  • Close BURP

If the GUI is invoked using the previous command, some manual interaction in the GUI is needed while BURP is starting. That cannot be afforded when a build agent is calling BURP. So you have to add another option to suppress the GUI and get it done without any interruption (if everything is configured properly) -
-Djava.awt.headless=true
So the command to be included in the build script is –
java -jar -Xmx2g -Djava.awt.headless=true "path\to\burp.jar" http 127.0.0.1 8080 /dvwa

(Please read ahead, the story is not over yet)

Issues with automated crawling

When you invoke BURP using the command mentioned before, the tool has no way to authenticate to the app. Without authentication, it is not possible to cover the complete application during the scan. On top of that, how will the command-line trigger know the values for form submission? So in order to properly crawl (and subsequently scan) the complete application, a few more pieces need to be added.
We can set the user credentials and form-submission values (and a lot more) using the GUI. So while configuring the build integration, after the Carbonator plugin is set up, we need to do a few more steps –

Configuring the crawler

Under ‘Spider’ -> ‘Options’ -> ‘Application Login’, set the admin credentials (admin privilege is required in order to ensure maximum coverage)

Under ‘Spider’ -> ‘Options’ -> ‘Form Submission’, populate the table properly with all the field names in your application

Additionally, you can set options for the scanner to improve scan performance, e.g. changing the number of threads for scanning and choosing which issues to look for.

Save the configuration

The output will be a JSON file that should be passed as an additional parameter while invoking BURP.


Running BURP with the configuration file

If this file is used while invoking BURP, the created instance will apply these configurations while running. So BURP will be able to authenticate to the app and automatically submit the forms it finds. The command to use would be –
java -jar -Xmx2g -Djava.awt.headless=true "path\to\burp.jar" http 127.0.0.1 8080 /dvwa --config-file=path\ProjOpt.json
Once the command executes successfully, the report will be available in the same folder where the BURP JAR is located, with the name IntegrisSecurity_Carbonator_scheme_fqdn_port.html, e.g. IntegrisSecurity_Carbonator_http_127.0.0.1_80.html

NOTE: Manual interventions might be needed

  • The required project configuration will change fairly often across versions of the same product (with credential changes or the advent of new fields), so periodic review/recreation of the configuration file might be required.
  • The BURP or plugin version might need upgrading periodically, so manual effort is required to upgrade them on the build agent.

Monday, May 29, 2017

Sharing BURP environment across multiple testers

We often face the situation where multiple testers are supposed to test the same application through a single BURP installation, and we are unsure how the session cookies will behave. Which tester's cookie will BURP use for subsequent requests made to the server? After creating the site map, the testers might not initiate the scan immediately. So what happens when all the sessions used for creating the site map expire?

Test creds to use

First of all, BURP is not capable of finding issues related to privilege escalation, so all the testers should use only the admin credentials for testing to achieve maximum application coverage.

Setting the shared environment

Second, to set up the shared BURP environment, apply the following settings –


The IP should be the IP of the machine where BURP is installed.

Dealing with session cookies

Finally, for tackling the session issues - 
  • There is a cookie jar in BURP that internally tracks all the cookies being used for an application, captures the latest cookie in use for that application, and makes active scans use the latest cookie captured by the proxy. The cookie jar is enabled by default, but to double-check, make sure the following boxes are checked –
  • When these configurations are in place, BURP will automatically record the latest cookie for the target domain as captured by the proxy tool, and for subsequent scan requests it will use that latest cookie.
  • So during BURP testing, multiple people can log in to the app (ALL USING ADMIN CREDS) with the proxy set and browse the app to create the site map at their own pace. Later, when you want to trigger the scan, make sure you log in to the application with the proxy set, then trigger the scan immediately before that session expires. Note: if some subdomain of the application needs separate authentication and sets additional cookies, you should browse those locations of the app and complete those authentications as well before triggering the scan.
  • At any time, you can click the “open cookie jar” button shown in the screenshot above to see the cookies being used for your target domain at that instant.



Reference:

https://portswigger.net/burp/help/options_sessions.html
