A comparative analysis of Open Source Web Application vulnerability scanners (Rana Khalil)

Why Johnny still can’t pentest: A comparative analysis of Open Source Web Application vulnerability scanners

When conducting a web application vulnerability assessment, great emphasis is put on running a vulnerability scanner. These are automated Dynamic Application Security Testing (DAST) tools that crawl web applications looking for vulnerabilities. No matter how complete the assessor’s “manual” testing is, the project manager is reluctant to give the stamp of approval without a scan having been run first. This reaction usually stems from the fact that scanners are advertised as the go-to automated solution for finding critical vulnerabilities.

In this talk, we’ll present the findings of our thesis research, in which we evaluated the crawling coverage and vulnerability detection of leading open-source web application vulnerability scanners. The scanners were run in two modes:

  • Point-and-Shoot (PaS): the scanner is given only the root URL of an application and asked to scan the site.

  • Trained: the scanner is first configured and trained to maximize crawling coverage and vulnerability detection.

We’ll present the technologies that scanners found difficult to crawl and the classes of vulnerabilities that went undetected, and outline the differences in crawling coverage and vulnerability detection between PaS and Trained modes. The results of our analysis show the sheer importance of having a human being not only spend a significant amount of time configuring the scanner, but also manually test for vulnerabilities that are impossible for a tool to detect, because the tool has no way of understanding the logic and architecture of an application.

⭐How do these scanners work?⭐ Web application vulnerability scanners have three modules:

⭐Crawler ⭐-

The crawler is responsible for crawling the target web application and identifying all of the pages and resources that are accessible. It typically works by sending a series of HTTP requests, starting with the homepage and then following every link found on each page. Resources the crawler cannot reach, such as administrative pages or pages that require authentication, are typically skipped. Once crawling finishes, the module produces a map of every page and resource that was found; the other modules use this map to decide where to probe for vulnerabilities such as cross-site scripting (XSS) or SQL injection. Here are some additional things to keep in mind about crawler modules (a minimal crawler sketch follows the list):

  • Crawler modules can be slow. This is because they need to send a large number of HTTP requests to the target web application.

  • Crawler modules can be inaccurate. This is because they may not be able to identify all of the pages and resources that are accessible in the target web application.

  • Crawler modules can be fooled. This is because attackers can create fake pages and resources that are designed to trick the crawler module.
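To make the crawling flow concrete, here is a minimal sketch of a crawler module in Python: a breadth-first walk over same-host anchor links using only the standard library. The root URL, page limit, and helper names are illustrative, not from the talk, and a real crawler module would also handle forms, cookies, redirects, and JavaScript.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(root, max_pages=50):
    """Breadth-first crawl: fetch a page, extract links, queue unseen same-host URLs."""
    host = urlparse(root).netloc
    seen, sitemap = {root}, {}
    queue = deque([root])
    while queue and len(sitemap) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable, auth-protected, or non-HTML: skip it
        extractor = LinkExtractor()
        extractor.feed(html)
        sitemap[url] = extractor.links
        for link in extractor.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return sitemap  # the "map" handed to the attacker and analysis modules


if __name__ == "__main__":
    for page, links in crawl("http://localhost:8000/").items():
        print(page, "->", len(links), "links")
```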

⭐Attacker ⭐-

The attacker module simulates an attacker's behavior in order to identify vulnerabilities in the web application. Using the crawler's map, it injects attack payloads into the application with techniques such as fuzzing, SQL injection, and cross-site scripting, and records the application's responses. This module is an important part of a web application vulnerability scanner because it finds issues other methods cannot: fuzzing exposes incorrect input validation, SQL injection probes expose improper handling of user input in database queries, and XSS probes expose output that is not properly sanitized. The attacker module works hand in hand with the other modules: the crawler tells it where to attack, and the analysis and reporting side decides which responses indicate real vulnerabilities and generates the report. Here are some of the techniques that the attacker module can use (a sketch follows the list):

  • Fuzzing: Fuzzing is a technique that involves sending random or unexpected input to an application in order to identify vulnerabilities. This can be used to identify vulnerabilities in input validation, error handling, and other areas of the application.

  • SQL injection: SQL injection is a technique that can be used to inject malicious code into a web application's database. This can be used to steal data, modify data, or even take control of the application.

  • Cross-site scripting: Cross-site scripting (XSS) is a technique that can be used to inject malicious code into a web application's output. This can be used to steal cookies, hijack sessions, or redirect users to malicious websites.

  • Path traversal (also called directory traversal): a technique that can be used to access files that are outside of the web application's directory structure. This can be used to steal sensitive data or to execute arbitrary commands on the server.
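A hedged sketch of what the attacker module does with the crawler's map: take each discovered parameter, substitute canned payloads one field at a time, and hand the responses to the analysis module. The target URL, parameter names, and payload strings are placeholders; real scanners ship hundreds of payloads per class.

```python
import requests  # third-party HTTP client: pip install requests

# One representative payload per vulnerability class (illustrative only).
PAYLOADS = {
    "xss": "<script>alert(1)</script>",
    "sqli": "' OR '1'='1",
    "traversal": "../../../../etc/passwd",
}


def attack(url, params):
    """Fuzz one parameter at a time, keeping the others at their benign values."""
    findings = []
    for field in params:
        for kind, payload in PAYLOADS.items():
            mutated = dict(params, **{field: payload})
            resp = requests.get(url, params=mutated, timeout=5)
            findings.append((field, kind, payload, resp.text))
    return findings


# Hypothetical page and parameters taken from the crawler's map:
findings = attack("http://localhost:8000/search", {"q": "test", "page": "1"})
```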

⭐Analysis ⭐-

The analysis module is responsible for analyzing the output of the scanning engine and identifying potential vulnerabilities. It typically uses a variety of techniques, such as the following (a small detection sketch follows the list):

  • Static analysis: the process of analyzing the source code of a web application to identify potential vulnerabilities, for example by looking for common vulnerable patterns such as hardcoded passwords or string-concatenated SQL queries. (This is more typical of SAST tools; pure DAST scanners have no source access and rely on the dynamic techniques below.)

  • Dynamic analysis: Dynamic analysis is the process of analyzing the behavior of a web application by interacting with it in a real-world environment. This can be done by sending specially crafted requests to the web application and observing the response.

  • Fuzz testing: a technique that involves sending random data to a web application to see if it causes any unexpected behavior. This can be useful for identifying vulnerabilities that are not easily found by other techniques.

The analysis module typically generates a report that lists the potential vulnerabilities that were identified, often including the severity of each vulnerability and how it can be exploited.
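As a rough illustration of the dynamic-analysis step, the sketch below scans attacker responses for two classic signals: a payload reflected unescaped (suggesting XSS) and database error strings (suggesting SQL injection). The signatures and the canned demo data are invented for illustration; real analysis modules use far larger signature sets plus response diffing.

```python
# Error strings that commonly leak from database layers (illustrative subset).
SQL_ERRORS = (
    "you have an error in your sql syntax",
    "sqlstate",
    "unclosed quotation mark",
)


def analyse(findings):
    """findings: (field, payload_kind, payload, response_body) tuples from the attacker."""
    reports = []
    for field, kind, payload, body in findings:
        lowered = body.lower()
        if kind == "xss" and payload.lower() in lowered:
            reports.append((field, "reflected XSS: payload echoed unescaped"))
        elif kind == "sqli" and any(sig in lowered for sig in SQL_ERRORS):
            reports.append((field, "possible SQLi: database error string in response"))
    return reports


# Canned response bodies standing in for real scan traffic:
demo = [
    ("q", "xss", "<script>alert(1)</script>", "Results for <script>alert(1)</script>"),
    ("id", "sqli", "' OR '1'='1", "You have an error in your SQL syntax near ''1'='1'"),
]
for field, verdict in analyse(demo):
    print(f"[{field}] {verdict}")
```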

Here are some of the benefits of using an analysis module in a web application vulnerability scanner:

  • Increased accuracy: Analysis modules can help to increase the accuracy of vulnerability identification by using a variety of techniques to identify vulnerabilities.

  • Reduced false positives: Analysis modules can filter out spurious findings generated by the scanning engine, reducing the number of false positives in the final report.

  • Improved reporting: Analysis modules can help to improve the quality of vulnerability reports by providing detailed information about the vulnerabilities that were identified.

However, there are also some limitations to using an analysis module in a web application vulnerability scanner:

  • Complexity: Analysis modules can be complex to develop and maintain.

  • Cost: Analysis modules can be expensive to purchase and implement.

  • Performance: Analysis modules can impact the performance of the scanning engine.

How are these scanners used? (A scripted PaS run follows the two option lists below.)

Option #1: Point-and-Shoot (PaS)

  • Scanner is given only the root URL.

  • Default configuration unchanged.

  • Minimal human interference: you give the scanner the URL and ask it to scan the site.

Option #2: Trained/Configured

  • Manually visit every page of the application in proxy mode.

  • Change the configuration and train the scanner.
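As promised above, here is what a scripted Point-and-Shoot run can look like using ZAP's Python API (the zapv2 client), assuming ZAP is already running and listening on 127.0.0.1:8080; the target URL and API key are placeholders. Trained mode would additionally configure contexts, authentication, and scan policies before these calls.

```python
import time

from zapv2 import ZAPv2  # ZAP's Python API client

target = "http://localhost:8000/"  # placeholder target
zap = ZAPv2(
    apikey="changeme",  # must match the key configured in ZAP
    proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"},
)

zap.urlopen(target)                  # seed the site tree with the root URL
spider_id = zap.spider.scan(target)  # crawl with default settings
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(target)     # active scan: the attacker module
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], "-", alert["alert"], "@", alert["url"])
```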

Environment Setup: a VM running Kali Linux. Scanners: ZAP, Arachni, Burp, Skipfish, Wapiti, Vega. Test applications: WAVSEP, WIVET, WackoPicko.

Vulnerability Detection

Vulnerabilities in WackoPicko that were not detected by any scanner:

  1. Weak authentication credentials (admin/admin)

  2. Parameter manipulation

  3. Forceful browsing (access to a link that contains a high-quality version of a picture without authentication)

  4. Logic flaws (such as the coupon management functionality)

On average, the scanners found only 40% of the vulnerabilities.

Crawling Challenges

1 - Uploading a File. This is a feature scanners found difficult to crawl in WackoPicko. A crawler has to traverse the entire site to discover vulnerabilities, but if it cannot complete the file-upload step, the pages behind the upload flow are never reached and therefore never scanned. (A sketch of completing the upload by hand, through the scanner's proxy, follows the bullets below.)

  • None of the scanners was able to upload a picture in PaS mode.

  • Burp and ZAP were able to in Trained mode.
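The sketch below shows the kind of "training" a human performs for the upload step: completing the multipart POST through the scanner's proxy so the pages behind the upload land in the scanner's site tree. The endpoint, field names, and file are hypothetical, not WackoPicko's real ones.

```python
import requests

# The scanner listens as an intercepting proxy; routing traffic through it
# adds every request to its site tree / history.
proxies = {"http": "http://127.0.0.1:8080"}

with open("cat.jpg", "rb") as f:  # any small test image
    resp = requests.post(
        "http://localhost:8000/pictures/upload",  # hypothetical upload endpoint
        files={"picture": ("cat.jpg", f, "image/jpeg")},
        data={"title": "test picture", "tag": "test"},
        proxies=proxies,
        timeout=10,
    )

# Pages reachable only after a successful upload are now in the proxy history,
# so the scanner can crawl and attack them.
print(resp.status_code)
```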

2 - Authentication. Scanners such as Arachni, Burp, Skipfish, Vega, and ZAP throw random values into the username and password fields and create accounts, but they have no idea whether the accounts were actually created, because they never use them afterwards (see the sketch after these bullets).

  • All scanners except Wapiti successfully created an account.

  • None of the scanners used the created accounts to authenticate.
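This is the step the scanners skipped, sketched with Python's requests library: register an account and then actually log in with it, routing traffic through the scanner's proxy so authenticated pages become crawlable. All URLs and field names are hypothetical.

```python
import requests

base = "http://localhost:8000"  # hypothetical target
session = requests.Session()
session.proxies = {"http": "http://127.0.0.1:8080"}  # route through the scanner

creds = {"username": "johnny", "password": "hunter2"}

# Step 1: register. The scanners got this far (with random values)...
session.post(f"{base}/users/register", data=creds, timeout=10)

# Step 2: ...but none of them logged in with the account afterwards.
session.post(f"{base}/users/login", data=creds, timeout=10)

# The session cookie now lets authenticated pages be crawled and attacked.
resp = session.get(f"{base}/users/home", timeout=10)
print(resp.status_code)
```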

3 - Multi-step Process. Commenting on a post requires a Preview -> Create -> Post sequence, and a scanner that never completes the preview step never reaches the later pages. In PaS mode the scanners cannot detect the stored XSS behind this flow because they never reach the page in the first place (see the sketch after these bullets).

  • None of the scanners was able to complete the process in PaS mode.

  • Burp and ZAP were able to in Trained mode.
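A sketch of walking the multi-step comment flow by hand through the proxy; the URLs and form fields are hypothetical stand-ins for WackoPicko's real ones, and a real flow may also involve CSRF tokens.

```python
import requests

base = "http://localhost:8000"  # hypothetical target
s = requests.Session()
s.proxies = {"http": "http://127.0.0.1:8080"}  # keep the scanner in the loop

comment = {"picture_id": "1", "comment": "nice picture"}

s.post(f"{base}/comments/preview", data=comment, timeout=10)  # step 1: Preview
s.post(f"{base}/comments/create", data=comment, timeout=10)   # step 2: Create

# Step 3: the posted comment is now visible, so the page where the stored XSS
# lives has finally been reached (and recorded by the proxy).
resp = s.get(f"{base}/pictures/1", timeout=10)
print("comment visible:", "nice picture" in resp.text)
```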

4 - Infinite Websites.

  • All scanners recognized the infinite loop except Arachni.

5 - State Awareness.

  • In PaS mode none of the scanners discovered any of the vulnerabilities that require authentication.

  • Vulnerabilities that require authentication were only discovered in Trained mode.

6 - Client-side Code. Scanners varied widely in their ability to extract links from client-side code (a small demo follows the list):

  • Standard anchor links.

  • Links created dynamically using javascript.

  • Multi-page forms.

  • Links in comments.

  • Links embedded in Flash objects.

  • Links within AJAX requests.
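A small self-contained demo of why client-side code defeats naive crawlers: a regex-based extractor over static HTML sees the plain anchor, and even the link inside an HTML comment, but never the link that only exists after the JavaScript runs. The HTML is invented for illustration.

```python
import re

page = """
<a href="/gallery">static anchor</a>
<script>
  var a = document.createElement("a");
  a.href = "/pic/" + 42;          // link exists only after the JS executes
  document.body.appendChild(a);
</script>
<!-- <a href="/hidden">link in an HTML comment</a> -->
"""

# A static-HTML extractor sees only literal href attributes:
print(re.findall(r'href="([^"]+)"', page))
# -> ['/gallery', '/hidden']
# The JavaScript-built "/pic/42" never appears, so a crawler that does not
# execute scripts will miss it; note the commented-out link IS found, even
# though a real browser would never render it.
```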

What now? Should Johnny even bother using an automated scanner? Yes... Johnny definitely should!

  • Scanners DO NOT replace a skilled pentester, but can aid the pentester

    • A vulnerability scan is NOT EQUIVALENT to a vulnerability assessment.

  • Using a vulnerability scanner requires skill.

    • A fool with a tool is still a fool

  • Configure your scanner! Never run your scanner in PaS mode (a configuration sketch follows this list).

    • Specify the target.

    • Set Login/Logout conditions.

    • Set the scanner in proxy mode and visit every page of the application.

    • Configure scenarios (business flows) and clean up between scenarios.

  • Monitor and review the requests of your scan.

  • After all that work, you're only protected against script kiddies.
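Tying the checklist above together, here is a hedged sketch of that configuration via ZAP's Python API: define a context to scope the target, set up form-based authentication, and spider/scan as a logged-in user. The parameter strings follow zapv2's documented format, but every URL, name, and credential here is a placeholder.

```python
from zapv2 import ZAPv2  # ZAP's Python API client

zap = ZAPv2(apikey="changeme")
target = "http://localhost:8000"  # placeholder target

# 1. Specify the target: a context limits the scan to in-scope URLs.
ctx_name = "wacko"
ctx_id = zap.context.new_context(ctx_name)
zap.context.include_in_context(ctx_name, f"{target}.*")

# 2. Set login conditions: form-based authentication against the login page.
login_cfg = (
    f"loginUrl={target}/users/login.php&"
    "loginRequestData=username%3D%7B%25username%25%7D%26password%3D%7B%25password%25%7D"
)
zap.authentication.set_authentication_method(ctx_id, "formBasedAuthentication", login_cfg)

# 3. Create a user the scanner will actually authenticate as.
user_id = zap.users.new_user(ctx_id, "johnny")
zap.users.set_authentication_credentials(ctx_id, user_id, "username=johnny&password=hunter2")
zap.users.set_user_enabled(ctx_id, user_id, True)

# 4. Crawl and attack with a real session instead of Point-and-Shoot.
zap.spider.scan_as_user(ctx_id, user_id, target)
zap.ascan.scan_as_user(target, ctx_id, user_id)
```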
