
How to use the new Website Scanner

The Website Vulnerability Scanner is a custom tool written by our team which helps you quickly assess the security of a web application. It is a full-blown web application scanner, capable of performing comprehensive security assessments against any type of web application.

On 5 April 2021, we officially released a new scan engine, currently in beta testing. In the following article, we explain all the features and options available to help you produce the best vulnerability reports.

You can select the "Full Scan (new engine - beta)" option below the Target URL field.

We recommend you do not change the default settings. However, if you have any specific requirements, such as a very large application, or you need to exclude several parts of the application from the scan, you can configure these settings as described below.

Furthermore, if you only need to run specific checks, such as SQL Injection or XSS testing, you can configure the scan to run just those.

Initial tests

These tests are recommended for all applications. You can skip any of them, depending on the type of target application. The scan duration will vary with the number and complexity of the tests you select.

The resource discovery part is the most time-consuming, so we recommend running this test at a later stage, or when you have time to leave the scan running.

You can schedule a scan for later using the scheduling feature. For more details, please check out our support article on how to schedule a scan.

Initial Tests:
- Fingerprint Website
- Server Software Vulnerabilities
- Robots.txt
- JavaScript libraries
- SSL/TLS Certificates
- Client access policies
- Resource Discovery

Fingerprint website

Fingerprinting tries to identify the technologies used by the target application. The output will be a list of the detected technologies, tools, and third-party software, together with their versions. This information can serve as a starting point for an attacker by giving them directions to investigate further.
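As a rough illustration, one of the simplest fingerprinting signals is the set of HTTP response headers. The sketch below (the function name and header choices are illustrative, not the scanner's actual implementation) derives technology hints from two common headers; a real fingerprinter also inspects HTML markers, cookies, and JavaScript files.

```python
# Illustrative sketch: derive technology hints from HTTP response headers.
def fingerprint_from_headers(headers: dict) -> dict:
    hints = {}
    if "Server" in headers:
        hints["server"] = headers["Server"]          # e.g. web server + version
    if "X-Powered-By" in headers:
        hints["platform"] = headers["X-Powered-By"]  # e.g. language/framework
    return hints

sample = {"Server": "Apache/2.4.41 (Ubuntu)", "X-Powered-By": "PHP/7.4.3"}
print(fingerprint_from_headers(sample))
# {'server': 'Apache/2.4.41 (Ubuntu)', 'platform': 'PHP/7.4.3'}
```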

Server software vulnerabilities

The Server software vulnerabilities test checks whether the server software is affected by known vulnerabilities. The output will list the CVEs and a description of each vulnerability.


Robots.txt

The Robots.txt test checks for the existence of the robots.txt file and extracts any URLs that are present and in scope for further analysis.
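The URL-extraction step can be sketched as a small parser over the robots.txt body (the function name and simplified parsing rules here are illustrative, not the scanner's actual code):

```python
# Illustrative sketch: pull candidate paths out of a robots.txt body.
# Allow/Disallow entries often point at paths the site owner considers
# sensitive, which makes them useful seeds for further analysis.
def extract_robots_paths(robots_txt: str) -> list[str]:
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        if field.strip().lower() in ("allow", "disallow") and value.strip():
            paths.append(value.strip())
    return paths

sample = "User-agent: *\nDisallow: /admin/\nAllow: /public/\nDisallow: /backup.zip\n"
print(extract_robots_paths(sample))
# ['/admin/', '/public/', '/backup.zip']
```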

JavaScript libraries

The JavaScript Libraries test checks if the application uses any outdated JavaScript libraries, which are affected by known vulnerabilities. The output will be a list of such detected vulnerabilities.

SSL/TLS certificates

The SSL/TLS Certificates test checks if the SSL/TLS Certificate the server presents is trusted by the browser. The most common causes for this error are that the certificate is self-signed, the issuer is untrusted, or it is not valid for the domain of the application.
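The "not valid for the domain" case boils down to matching the hostname against the certificate's Subject Alternative Names. The sketch below is a deliberate simplification (`cert_matches_host` is a hypothetical name): real TLS libraries apply stricter wildcard rules and also verify the trust chain and validity dates.

```python
from fnmatch import fnmatch

def cert_matches_host(san_entries: list[str], hostname: str) -> bool:
    # Simplified: the certificate covers the host if any SAN entry matches.
    # Real validators restrict '*' to a single DNS label and also check
    # the issuer chain and the notBefore/notAfter dates.
    return any(fnmatch(hostname, pattern.lower()) for pattern in san_entries)

print(cert_matches_host(["example.com", "*.example.com"], "app.example.com"))  # True
print(cert_matches_host(["*.example.com"], "other.org"))                       # False
```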

Client access policies

Client Access policies are a set of rules in XML files used by Adobe Flash and Microsoft Silverlight clients in the browser. These files specify which parts of the server should be accessible, and to which external domains. 

We define a vulnerability as allowing any domain to request data from the server, which is identified by a wildcard (a * operator) in certain XML tags in the policy files. 

This is not necessarily a problem if the website is supposed to be public, but might be a vulnerability if the tested website is supposed to have restricted access.
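The wildcard check described above can be sketched as a small XML inspection (shown here for a Flash-style `crossdomain.xml` policy with its `allow-access-from` tag; the function name is illustrative):

```python
import xml.etree.ElementTree as ET

def has_wildcard_access(policy_xml: str) -> bool:
    # Flags a crossdomain.xml policy that lets ANY external domain
    # (domain="*") request data from the server.
    root = ET.fromstring(policy_xml)
    return any(
        el.get("domain") == "*"
        for el in root.iter("allow-access-from")
    )

open_policy = '<cross-domain-policy><allow-access-from domain="*"/></cross-domain-policy>'
strict_policy = '<cross-domain-policy><allow-access-from domain="partner.example.com"/></cross-domain-policy>'
print(has_wildcard_access(open_policy), has_wildcard_access(strict_policy))  # True False
```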

Resource discovery

Resource Discovery searches for common files and directories that could be a liability if exposed. By default, this initial test is unselected because it adds a significant amount of time to the scanning process. You can select it when you are ready to let the scan run without a time constraint.
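Conceptually, resource discovery builds candidate URLs from a wordlist of common sensitive paths and probes each one. The sketch below only shows the URL-building step with a tiny illustrative wordlist (real scanners use lists with thousands of entries, which is why this test dominates the scan duration); a real scanner would then request each candidate and flag any that do not return 404.

```python
from urllib.parse import urljoin

# Tiny illustrative wordlist of commonly exposed files and directories.
COMMON_PATHS = [".git/config", ".env", "backup.zip", "admin/", "phpinfo.php"]

def candidate_urls(base_url: str, paths=COMMON_PATHS) -> list[str]:
    # Join each wordlist entry onto the target base URL.
    return [urljoin(base_url, path) for path in paths]

print(candidate_urls("https://example.com/")[:2])
# ['https://example.com/.git/config', 'https://example.com/.env']
```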

Spider Options

You can configure the following options to determine how deep you want the scanner to crawl the application, or to set paths that you want it to avoid.


Approach

The Approach option tells the scanner which spidering method to use.

  • Classic Spider – Used to crawl classic websites.
  • SPA Spider – Used to crawl single-page application (JavaScript-heavy) websites. We are still working on this feature and it will be released in a later version.


By adjusting the Depth limit, you tell the scanner how many path segments (‘/’) deep it should crawl and scan, i.e. how far into the website’s structure it should go.

- Depth: 10 (default)

A greater crawl depth might find many more injection points than a lower one, but it will also increase the scan duration. We recommend that you keep the default value.
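Depth can be understood as the count of path segments in a URL. A minimal sketch of how a crawler might compute it and enforce the limit (illustrative helper names, not the scanner's internals):

```python
from urllib.parse import urlparse

def path_depth(url: str) -> int:
    # Depth counts the non-empty '/'-separated segments of the URL path.
    return len([seg for seg in urlparse(url).path.split("/") if seg])

def in_scope(url: str, limit: int = 10) -> bool:
    # With the default limit of 10, deeper URLs are not crawled.
    return path_depth(url) <= limit

print(path_depth("https://example.com/blog/2021/04/post"))  # 4
print(in_scope("https://example.com/a/b/c", limit=2))       # False
```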

Exclude URLs

Exclude URLs is a list of URLs the scanner ignores during scanning. By default, the list is empty, meaning no paths are excluded. Enter each URL on a new line, and make sure to enter the full path of each URL.

Exclude URLs:
Tip: You can resize the input box by dragging the bottom-right corner.

Attack Options

Attack Options represent tests the scanner engine is performing on every new Injection Point it detects during the scanning process. An Injection Point is a target URL paired with unique parameters. It is considered validated after the scanner sends a request to it and checks if the response is valid.

For example, a target URL combined with a unique set of parameters forms a distinct Injection Point that is checked with all the selected modules.
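The idea of a unique Injection Point can be sketched as a (URL, parameter set) pair that is deduplicated so each point is attacked only once. This is an illustrative model, not the engine's internal representation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InjectionPoint:
    url: str
    params: frozenset  # parameter names; frozen/hashable so points deduplicate in a set

seen = {
    InjectionPoint("https://example.com/search", frozenset({"q"})),
    InjectionPoint("https://example.com/search", frozenset({"q"})),       # duplicate
    InjectionPoint("https://example.com/search", frozenset({"q", "page"})),
}
print(len(seen))  # 2 — the same URL with a different parameter set is a new point
```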

There are Active and Passive checks. Both types of tests use the validated Injection Points from the request engine.

The difference between them is that active checks send a large number of requests against an Injection Point with specific payloads that should trigger certain behaviours from the target that indicate whether it is vulnerable or not. 

The passive checks use the detected Injection Points directly and do not send additional requests. They analyze the server’s responses for specific configurations and behaviours that indicate the target is vulnerable to different attacks.

While the passive checks generate at most 20 HTTP requests to the server, the active checks are more aggressive and send up to 10,000 HTTP requests. This may trigger alarms from an IDS, but the scan is not destructive.

Active checks

Active checks look for vulnerable parameters that might give access to sensitive information. The engine crawls the target application, then sends various inputs into the parameters of the pages and looks for specific web vulnerabilities such as SQL Injection, Cross-Site Scripting, Local File Inclusion, and OS Command Injection.


Cross-Site Scripting (XSS)

The XSS test tries to detect whether the application is vulnerable to Cross-Site Scripting by injecting XSS payloads and analyzing the responses.
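The core of a reflected-XSS check can be sketched as: inject a marker payload into a parameter, then look for it unescaped in the response body (illustrative function, not the engine's detection logic, which also handles encodings and execution contexts):

```python
def reflects_payload(response_body: str, payload: str = "<script>alert(1)</script>") -> bool:
    # If the payload comes back unescaped in the HTML, the parameter is a
    # strong reflected-XSS candidate; escaped output (&lt;script&gt;...) is safe.
    return payload in response_body

vulnerable = "<p>You searched for: <script>alert(1)</script></p>"
escaped = "<p>You searched for: &lt;script&gt;alert(1)&lt;/script&gt;</p>"
print(reflects_payload(vulnerable), reflects_payload(escaped))  # True False
```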

SQL Injections

The SQL Injections test checks for SQL injection vulnerabilities found in web applications by crawling, injecting SQL payloads in parameters, and analyzing the responses of the web application.
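One common way to analyze the responses is error-based detection: after injecting a payload such as a lone single quote, the checker looks for well-known database error signatures in the body. A minimal sketch (the signature list is illustrative and far from exhaustive):

```python
# A few well-known database error signatures; seeing one after injecting a
# payload like "'" suggests error-based SQL injection.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",  # MySQL
    "unclosed quotation mark after",         # SQL Server
    "sqlite3.operationalerror",              # SQLite
    "pg::syntaxerror",                       # PostgreSQL
]

def looks_sql_injectable(response_body: str) -> bool:
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)

err = "Warning: You have an error in your SQL syntax near ''' at line 1"
print(looks_sql_injectable(err), looks_sql_injectable("<h1>Welcome</h1>"))  # True False
```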

Local file inclusion

Local file inclusion occurs when the web application shows arbitrary files on the current server in response to an input in a parameter.

OS command injection

Command injection is an attack in which the goal is the execution of arbitrary commands on the host operating system via a vulnerable application. Command injection attacks are possible when an application passes unsafe user-supplied data (forms, cookies, HTTP headers, etc.) as Operating System commands.

Passive checks

Passive checks analyze the HTTP responses from the server to find weaknesses in your system.

Security headers

HTTP security headers are a fundamental part of website security. They protect against XSS, code injection, clickjacking, and other attacks. This test checks for the common security headers.
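A minimal version of this check compares the response headers against a list of expected security headers. The list below is a common subset, chosen here for illustration rather than the scanner's exact checklist:

```python
EXPECTED_HEADERS = [
    "strict-transport-security",  # forces HTTPS on future visits
    "content-security-policy",    # mitigates XSS and code injection
    "x-content-type-options",     # blocks MIME-type sniffing
    "x-frame-options",            # mitigates clickjacking
]

def missing_security_headers(response_headers: dict) -> list[str]:
    # Header names are case-insensitive, so normalize before comparing.
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_HEADERS if h not in present]

sample = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_security_headers(sample))
# ['strict-transport-security', 'content-security-policy', 'x-content-type-options']
```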

Cookies security

Checks the use of:

  • the HttpOnly attribute, which prevents access to cookie values via JavaScript;
  • the Secure attribute, which prevents cookies from being sent over plain HTTP;
  • the Domain attribute, which indicates the hosts the cookie should be sent to. Be careful with this one, as setting a hostname also sends the cookie to all of its subdomains: for example, Domain=example.com also sends the cookie to app.example.com, mail.example.com, etc.
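The HttpOnly and Secure checks above amount to parsing the attributes out of a Set-Cookie header. A minimal sketch (illustrative function name; real cookie parsing has more edge cases):

```python
def cookie_issues(set_cookie_header: str) -> list[str]:
    # Collect the attribute names from a Set-Cookie header value.
    attrs = {part.split("=", 1)[0].strip().lower()
             for part in set_cookie_header.split(";")}
    issues = []
    if "httponly" not in attrs:
        issues.append("missing HttpOnly")  # cookie readable from JavaScript
    if "secure" not in attrs:
        issues.append("missing Secure")    # cookie may be sent over plain HTTP
    return issues

print(cookie_issues("session=abc123; Path=/"))                    # both attributes missing
print(cookie_issues("session=abc123; Secure; HttpOnly; Path=/"))  # []
```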

Directory listing

Directory listings might constitute a vulnerability if they give access to the server configuration or other sensitive files.

Secure communication

Secure communication tests if the communication is done over HTTPS instead of HTTP, which is not encrypted.

Weak password submission method

If the communication is done over HTTP, we check whether the credentials are submitted using easily reversible methods such as HTTP Basic or Digest Authentication.

Commented code / Error codes

Tests whether there are suspicious code comments, or whether the application responds to certain inputs with stack traces or overly verbose error messages.

Clear text submission of credentials

Tests if the user credentials are sent over HTTP as opposed to HTTPS.

Verify domain sources

Checks if the website uses content from third-party domains. This isn’t a security vulnerability per se, but it might become one if the external domain is compromised. We recommend that you keep all the necessary resources on your own server and load them from there.

Mixed encryption content

Checks if HTML loads over a secure HTTPS connection but other content, such as images, video content, stylesheets, and scripts, continues to load over an insecure HTTP connection.
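Detecting this can be sketched as scanning the HTML of an HTTPS page for `src`/`href` attributes that load over plain `http://` (a simplified regex-based illustration; a real scanner parses the DOM and covers more attributes):

```python
import re

def find_mixed_content(html: str) -> list[str]:
    # On a page served over HTTPS, any src/href loading over plain http://
    # is mixed content and can be tampered with in transit.
    return re.findall(r'(?:src|href)=["\'](http://[^"\']+)["\']', html)

page = (
    '<img src="http://cdn.example.com/logo.png">'
    '<link rel="stylesheet" href="https://cdn.example.com/site.css">'
)
print(find_mixed_content(page))  # ['http://cdn.example.com/logo.png']
```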

Find login interfaces

Finds login interfaces and returns the HTML code of the forms.


Authentication

If your application requires authentication to access certain parts of the website, we highly recommend enabling authenticated scanning. This way, the scanner covers more application functionality and pages than an unauthenticated scan.

If you wish to learn more about why you should perform an authenticated scan, you can check out this dedicated article from our Learning Center.

Our Website Vulnerability Scanner supports four methods for performing authenticated scans:

  1. Recorded – Recording-based authentication
  2. Automatic – Form-based authentication
  3. Cookies – Cookie-based authentication
  4. Headers – Headers-based authentication

The “Check authentication” button is optional for the first three methods and disabled for the “Headers” method, so you can start scanning directly.

You’ll know that the authentication was successful if you get an additional “Authentication complete” message in the final scan report. Furthermore, the Spider results should contain more crawled URLs than the unauthenticated scan.

Email Notifications

You can configure email notifications to be sent when your scan matches certain conditions (e.g. the scan finished, High risk issues were found, an open port was discovered).

You can find more details in our dedicated support article.
