OWASP Testing Guide
2008 v3.0
2002-2008 OWASP Foundation
This document is licensed under the Creative Commons Attribution-ShareAlike 3.0 license (http://creativecommons.org/licenses/by-sa/3.0/). You must attribute your version to the OWASP Testing Guide or the OWASP Foundation.
Table of Contents
Foreword
Why OWASP?
Tailoring and Prioritizing
The Role of Automated Tools
Call to Action
1. Frontispiece
Welcome to the OWASP Testing Guide 3.0
About The Open Web Application Security Project
2. Introduction
Principles of Testing
Testing Techniques Explained
Security Requirements Test Derivation
3. The OWASP Testing Framework
Overview
Phase 1: Before Development Begins
Phase 2: During Definition and Design
Phase 3: During Development
Phase 4: During Deployment
Phase 5: Maintenance and Operations
4. Web Application Penetration Testing
4.1 Introduction and Objectives
4.2 Information Gathering
4.2.1 Testing: Spiders, Robots, and Crawlers (OWASP-IG-001)
4.2.2 Search Engine Discovery/Reconnaissance (OWASP-IG-002)
4.2.3 Identify Application Entry Points (OWASP-IG-003)
4.2.4 Testing for Web Application Fingerprint (OWASP-IG-004)
4.2.5 Application Discovery (OWASP-IG-005)
4.2.6 Analysis of Error Codes (OWASP-IG-006)
4.3 Configuration Management Testing
4.3.1 SSL/TLS Testing (OWASP-CM-001)
4.3.2 DB Listener Testing (OWASP-CM-002)
4.3.3 Infrastructure Configuration Management Testing (OWASP-CM-003)
4.3.4 Application Configuration Management Testing (OWASP-CM-004)
4.3.5 Testing for File Extensions Handling (OWASP-CM-005)
4.3.6 Old, Backup and Unreferenced Files (OWASP-CM-006)
4.3.7 Infrastructure and Application Admin Interfaces (OWASP-CM-007)
4.3.8 Testing for HTTP Methods and XST (OWASP-CM-008)
4.4 Authentication Testing
4.4.1 Credentials Transport over an Encrypted Channel (OWASP-AT-001)
4.4.2 Testing for User Enumeration (OWASP-AT-002)
4.4.3 Default or Guessable (Dictionary) User Account (OWASP-AT-003)
4.4.4 Testing for Brute Force (OWASP-AT-004)
4.4.5 Testing for Bypassing Authentication Schema (OWASP-AT-005)
4.4.6 Testing for Vulnerable Remember Password and Password Reset (OWASP-AT-006)
4.4.7 Testing for Logout and Browser Cache Management (OWASP-AT-007)
4.4.8 Testing for Captcha (OWASP-AT-008)
4.4.9 Testing for Multiple Factors Authentication (OWASP-AT-009)
4.4.10 Testing for Race Conditions (OWASP-AT-010)
4.5 Session Management Testing
4.5.1 Testing for Session Management Schema (OWASP-SM-001)
4.5.2 Testing for Cookies Attributes (OWASP-SM-002)
4.5.3 Testing for Session Fixation (OWASP-SM-003)
4.5.4 Testing for Exposed Session Variables (OWASP-SM-004)
4.5.5 Testing for CSRF (OWASP-SM-005)
4.6 Authorization Testing
4.6.1 Testing for Path Traversal (OWASP-AZ-001)
4.6.2 Testing for Bypassing Authorization Schema (OWASP-AZ-002)
4.6.3 Testing for Privilege Escalation (OWASP-AZ-003)
4.7 Business Logic Testing (OWASP-BL-001)
4.8 Data Validation Testing
4.8.1 Testing for Reflected Cross Site Scripting (OWASP-DV-001)
4.8.2 Testing for Stored Cross Site Scripting (OWASP-DV-002)
4.8.3 Testing for DOM-based Cross Site Scripting (OWASP-DV-003)
4.8.4 Testing for Cross Site Flashing (OWASP-DV-004)
4.8.5 SQL Injection (OWASP-DV-005)
4.8.5.1 Oracle Testing
4.8.5.2 MySQL Testing
4.8.5.3 SQL Server Testing
4.8.5.4 MS Access Testing
4.8.5.5 Testing PostgreSQL
4.8.6 LDAP Injection (OWASP-DV-006)
4.8.7 ORM Injection (OWASP-DV-007)
4.8.8 XML Injection (OWASP-DV-008)
4.8.9 SSI Injection (OWASP-DV-009)
4.8.10 XPath Injection (OWASP-DV-010)
4.8.11 IMAP/SMTP Injection (OWASP-DV-011)
4.8.12 Code Injection (OWASP-DV-012)
4.8.13 OS Commanding (OWASP-DV-013)
4.8.14 Buffer Overflow Testing (OWASP-DV-014)
4.8.14.1 Heap Overflow
4.8.14.2 Stack Overflow
4.8.14.3 Format String
4.8.15 Incubated Vulnerability Testing (OWASP-DV-015)
4.8.16 Testing for HTTP Splitting/Smuggling (OWASP-DV-016)
4.9 Denial of Service Testing
4.9.1 Testing for SQL Wildcard Attacks (OWASP-DS-001)
4.9.2 Locking Customer Accounts (OWASP-DS-002)
4.9.3 Buffer Overflows (OWASP-DS-003)
4.9.4 User Specified Object Allocation (OWASP-DS-004)
4.9.5 User Input as a Loop Counter (OWASP-DS-005)
4.9.6 Writing User Provided Data to Disk (OWASP-DS-006)
4.9.7 Failure to Release Resources (OWASP-DS-007)
4.9.8 Storing Too Much Data in Session (OWASP-DS-008)
4.10 Web Services Testing
4.10.1 WS Information Gathering (OWASP-WS-001)
4.10.2 Testing WSDL (OWASP-WS-002)
4.10.3 XML Structural Testing (OWASP-WS-003)
4.10.4 XML Content-level Testing (OWASP-WS-004)
4.10.5 HTTP GET Parameters/REST Testing (OWASP-WS-005)
4.10.6 Naughty SOAP Attachments (OWASP-WS-006)
4.10.7 Replay Testing (OWASP-WS-007)
4.11 AJAX Testing
4.11.1 AJAX Vulnerabilities (OWASP-AJ-001)
4.11.2 Testing for AJAX (OWASP-AJ-002)
5. Writing Reports: Value the Real Risk
5.1 How to Value the Real Risk
5.2 How to Write the Report of the Testing
Appendix A: Testing Tools
Appendix B: Suggested Reading
Appendix C: Fuzz Vectors
Appendix D: Encoded Injection
Foreword
The problem of insecure software is perhaps the most important technical challenge of our time. Security is now the key limiting factor on what we are able to create with information technology. At The Open Web Application Security Project (OWASP), we're trying to make the world a place where insecure software is the anomaly, not the norm, and the OWASP Testing Guide is an important piece of the puzzle.
It goes without saying that you can't build a secure application without performing security testing on it. Yet many software development organizations do not include security testing as part of their standard software development process. Still, security testing, by itself, isn't a particularly good measure of how secure an application is, because there are an infinite number of ways that an attacker might be able to make an application break, and it simply isn't possible to test them all. However, security testing has the unique power to absolutely convince naysayers that there is a problem. So security testing has proven itself as a key ingredient in any organization that needs to trust the software it produces or uses.
Taken together, OWASP's guides are a great start towards building and maintaining secure applications. The Development Guide will show your project how to architect and build a secure application, the Code Review Guide will tell you how to verify the security of your application's source code, and this Testing Guide will show you how to verify the security of your running application. I highly recommend using these guides as part of your application security initiatives.
Why OWASP?
Creating a guide like this is a massive undertaking, representing the expertise of hundreds of people around the world. There are many different ways to test for security flaws and this guide captures the consensus of the leading experts on how to perform this testing quickly, accurately, and efficiently.
It's impossible to overstate the importance of having this guide available in a completely free and open way. Security should not be a black art that only a few can practice. Much of the available security guidance is only detailed enough to get people worried about a problem, without providing enough information to find, diagnose, and solve security problems. The project to build this guide keeps this expertise in the hands of the people who need it.
This guide must make its way into the hands of developers and software testers. There are not nearly enough application security experts in the world to make any significant dent in the overall problem. The initial responsibility for application security must fall on the shoulders of the developers. It shouldn't be a surprise that developers aren't producing secure code if they're not testing for it.
Keeping this information up to date is a critical aspect of this guide project. By adopting the wiki approach, the OWASP community can evolve and expand the information in this guide to keep pace with the fast moving application security threat landscape.
Tailoring and Prioritizing
You should adopt this guide in your organization. You may need to tailor the information to match your organization's technologies, processes, and organizational structure. If you have standard security technologies, you should tailor your testing to ensure they are being used properly. There are several different roles that may use this guide.
Developers should use this guide to ensure that they are producing secure code. These tests should be a part of normal code and unit testing procedures.
Software testers should use this guide to expand the set of test cases they apply to applications. Catching these vulnerabilities early saves considerable time and effort later.
Security specialists should use this guide in combination with other techniques as one way to verify that no security holes have been missed in an application.
The most important thing to remember when performing security testing is to continuously reprioritize. There are an infinite number of possible ways that an application could fail, and you always have limited testing time and resources. Be sure you spend it wisely. Try to focus on the security holes that are the most likely to be discovered and exploited by an attacker, and that will lead to the most serious compromises.
This guide is best viewed as a set of techniques that you can use to find different types of security holes. But not all the techniques are equally important. Try to avoid using the guide as a checklist.
The Role of Automated Tools
There are a number of companies selling automated security analysis and testing tools. Remember the limitations of these tools so that you can use them for what they're good at. As Michael Howard put it at the 2006 OWASP AppSec Conference in Seattle (http://www.owasp.org/index.php/OWASP_AppSec_Seattle_2006/Agenda), "Tools do not make software secure! They help scale the process and help enforce policy."
Most importantly, these tools are generic - meaning that they are not designed for your custom code, but for applications in general. That means that while they can find some generic problems, they do not have enough knowledge of your application to allow them to detect most flaws. In my experience, the most serious security issues are the ones that are not generic, but deeply intertwined in your business logic and custom application design.
These tools can also be seductive, since they do find lots of potential issues. While running the tools doesn't take much time, each one of the potential problems takes time to investigate and verify. If the goal is to find and eliminate the most serious flaws as quickly as possible, consider whether your time is best spent with automated tools or with the techniques described in this guide.
Still, these tools are certainly part of a well-balanced application security program. Used wisely, they can support your overall processes to produce more secure code.
Call to Action
If you're building software, I strongly encourage you to get familiar with the security testing guidance in this document. If you find errors, please add a note to the discussion page or make the change yourself. You'll be helping thousands of others who use this guide.
Please consider joining us as an individual or corporate member (http://www.owasp.org/index.php/Membership) so that we can continue to produce materials like this testing guide and all the other great projects at OWASP.
Thank you to all the past and future contributors to this guide, your work will help to make applications worldwide more secure.
-- Jeff Williams, OWASP Chair, December 15, 2006 (http://www.owasp.org/index.php/User:Jeff_Williams)
1. Frontispiece
Welcome to the OWASP Testing Guide 3.0
Open and collaborative knowledge: that's the OWASP way.
Matteo Meucci (http://www.owasp.org/index.php/User:Mmeucci)
OWASP thanks the many authors, reviewers, and editors for their hard work in bringing this guide to where it is today. If you have any comments or suggestions on the Testing Guide, please e-mail the Testing Guide mail list:
http://lists.owasp.org/mailman/listinfo/owasp-testing
Or drop an e-mail to the project leader: Matteo Meucci (matteo.meucci@gmail.com).
Version 3
The OWASP Testing Guide Version 3 improves on version 2 and creates new sections and controls. This new version has added:
Configuration Management and Authorization Testing sections, plus the Encoded Injection appendix;
36 new articles (1 taken from the OWASP BSP);
9 improved articles, for a total of 10 testing categories and 66 controls.
Copyright and License
Copyright (c) 2008 The OWASP Foundation.
This document is released under the Creative Commons Attribution-ShareAlike 2.5 license (http://creativecommons.org/licenses/by-sa/2.5/). Please read and understand the license and copyright conditions.
Revision History
The Testing Guide v3 was released in December 2008. The Testing Guide originated in 2003, with Daniel Cuthbert as one of the original editors. It was handed over to Eoin Keary in 2005 and transformed into a wiki. Matteo Meucci has led the OWASP Testing Guide Project since v2.
December 16, 2008
"OWASP Testing Guide", Version 3.0, released by Matteo Meucci at the OWASP Summit 08
December 25, 2006
"OWASP Testing Guide", Version 2.0
December 2004
"The OWASP Testing Guide", Version 1.0
July 14, 2004
"OWASP Web Application Penetration Checklist", Version 1.1
Editors
Matteo Meucci: OWASP Testing Guide Lead since 2007.
Eoin Keary: OWASP Testing Guide 2005-2007 Lead.
Daniel Cuthbert: OWASP Testing Guide 2003-2005 Lead.
V3 Authors
Anurag Agarwwal
Daniele Bellucci
Arian Coronel
Stefano Di Paola
Giorgio Fedon
Alan Goodman
Christian Heinrich
Kevin Horvath
Gianrico Ingrosso
Roberto Suggi Liverani
Alex Kuza
Pavol Luptak
Ferruh Mavituna
Marco Mella
Matteo Meucci
Marco Morana
Antonio Parata
Cecil Su
Harish Skanda Sureddy
Mark Roxberry
Andrew Van der Stock
V3 Reviewers
Marco Cova
Kevin Fuller
Matteo Meucci
Nam Nguyen
V2 Authors
Vicente Aguilera
Mauro Bregolin
Tom Brennan
Gary Burns
Luca Carettoni
Dan Cornell
Mark Curphey
Daniel Cuthbert
Sebastien Deleersnyder
Stephen DeVries
Stefano Di Paola
David Endler
Giorgio Fedon
Javier Fernández-Sanguino
Glyn Geoghegan
Stan Guzik
Madhura Halasgikar
Eoin Keary
David Litchfield
Andrea Lombardini
Ralph M. Los
Claudio Merloni
Matteo Meucci
Marco Morana
Laura Nunez
Gunter Ollmann
Antonio Parata
Yiannis Pavlosoglou
Carlo Pelliccioni
Harinath Pudipeddi
Alberto Revelli
Mark Roxberry
Tom Ryan
Anush Shetty
Larry Shields
Dafydd Stuttard
Andrew van der Stock
Ariel Waissbein
Jeff Williams
V2 Reviewers
Vicente Aguilera
Marco Belotti
Mauro Bregolin
Marco Cova
Daniel Cuthbert
Paul Davies
Stefano Di Paola
Matteo G.P. Flora
Simona Forti
Darrell Groundy
Eoin Keary
James Kist
Katie McDowell
Marco Mella
Matteo Meucci
Syed Mohamed A
Antonio Parata
Alberto Revelli
Mark Roxberry
Dave Wichers
Trademarks
Java, Java Web Server, and JSP are registered trademarks of Sun Microsystems, Inc.
Merriam-Webster is a trademark of Merriam-Webster, Inc.
Microsoft is a registered trademark of Microsoft Corporation.
Octave is a service mark of Carnegie Mellon University.
VeriSign and Thawte are registered trademarks of VeriSign, Inc.
Visa is a registered trademark of VISA USA.
OWASP is a registered trademark of the OWASP Foundation.
All other products and company names may be trademarks of their respective owners. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark.
About The Open Web Application Security Project
Overview
The Open Web Application Security Project (OWASP) is an open community dedicated to enabling organizations to develop, purchase, and maintain applications that can be trusted. All of the OWASP tools, documents, forums, and chapters are free and open to anyone interested in improving application security. We advocate approaching application security as a people, process, and technology problem, because the most effective approaches to application security include improvements in all of these areas. We can be found at http://www.owasp.org.
OWASP is a new kind of organization. Our freedom from commercial pressures allows us to provide unbiased, practical, cost-effective information about application security. OWASP is not affiliated with any technology company, although we support the informed use of commercial security technology. Similar to many open-source software projects, OWASP produces many types of materials in a collaborative, open way. The OWASP Foundation is a not-for-profit entity that ensures the project's long-term success. For more information, please see the pages listed below:
Contact (http://www.owasp.org/index.php/Contact) for information about communicating with OWASP
Contributions (http://www.owasp.org/index.php/Contributions) for details about how to make contributions
Advertising (http://www.owasp.org/index.php/Advertising) if you're interested in advertising on the OWASP site
How OWASP Works (http://www.owasp.org/index.php/How_OWASP_Works) for more information about projects and governance
OWASP brand usage rules (http://www.owasp.org/index.php/OWASP_brand_usage_rules) for information about using the OWASP brand
Structure
The OWASP Foundation is the not-for-profit (501(c)(3)) entity that provides the infrastructure for the OWASP community. The Foundation provides our servers and bandwidth, facilitates projects and chapters, and manages the worldwide OWASP Application Security Conferences.
Licensing
All OWASP materials are available under an approved open source license. If you opt to become an OWASP member organization, you can also use the commercial license that allows you to use, modify, and distribute all OWASP materials within your organization under a single license.
For more information, please see the OWASP Licenses page (http://www.owasp.org/index.php/OWASP_Licenses).
Participation and Membership
Everyone is welcome to participate in our forums, projects, chapters, and conferences. OWASP is a fantastic place to learn about application security, to network, and even to build your reputation as an expert.
If you find the OWASP materials valuable, please consider supporting our cause by becoming an OWASP member. All monies received by the OWASP Foundation go directly into supporting OWASP projects.
For more information, please see the Membership page (http://www.owasp.org/index.php/Membership).
Projects
OWASP's projects cover many aspects of application security. We build documents, tools, teaching environments, guidelines, checklists, and other materials to help organizations improve their capability to produce secure code.
For details on all the OWASP projects, please see the OWASP Project page (http://www.owasp.org/index.php/Category:OWASP_Project).
OWASP Privacy Policy
Given OWASP's mission to help organizations with application security, you have the right to expect protection of any personal information that we might collect about our members.
In general, we do not require authentication or ask visitors to reveal personal information when visiting our website. We collect Internet addresses, not the e-mail addresses, of visitors solely for use in calculating various website statistics.
We may ask for certain personal information, including name and email address from persons downloading OWASP products. This information is not divulged to any third party and is used only for the purposes of:
Communicating urgent fixes in the OWASP Materials
Seeking advice and feedback about OWASP Materials
Inviting participation in OWASP's consensus process and AppSec conferences
OWASP publishes a list of member organizations and individual members. Listing is purely voluntary and opt-in. Listed members can request not to be listed at any time.
All information about you or your organization that you send us by fax or mail is physically protected. If you have any questions or concerns about our privacy policy, please contact us at owasp@owasp.org.
2. Introduction
The OWASP Testing Project has been in development for many years. With this project, we wanted to help people understand the what, why, when, where, and how of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. The outcome of this project is a complete Testing Framework, from which others can build their own testing programs or qualify other people's processes. The Testing Guide describes in detail both the general Testing Framework and the techniques required to implement the framework in practice.
Writing the Testing Guide has proven to be a difficult task. It has been a challenge to obtain consensus and develop content that allows people to apply the concepts described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle.
However, we are very satisfied with the results we have reached. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework. This framework helps organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP's guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience.

The rest of this guide is organized as follows. This introduction covers the prerequisites of testing web applications: the scope of testing, the principles of successful testing, and the testing techniques. Chapter 3 presents the OWASP Testing Framework and explains its techniques and tasks in relation to the various phases of the software development life cycle. Chapter 4 covers how to test for specific vulnerabilities (e.g., SQL Injection) by code inspection and penetration testing.
Measuring (in)security: the Economics of Insecure Software
A basic tenet of software engineering is that you can't control what you can't measure [1]. Security testing is no different. Unfortunately, measuring security is a notoriously difficult process. We will not cover this topic in detail here, since it would take a guide of its own (for an introduction, see [2]).
One aspect that we want to emphasize, however, is that security measurements are, by necessity, about both the specific, technical issues (e.g., how prevalent a certain vulnerability is) and how these affect the economics of software. We find that most technical people understand at least the basic issues of the vulnerabilities, or have a deeper understanding. Sadly, few are able to translate that technical knowledge into monetary terms and, thereby, quantify the potential cost of vulnerabilities to the application owner's business. We believe that until this happens, CIOs will not be able to develop an accurate return on security investment and, subsequently, assign appropriate budgets for software security.

While estimating the cost of insecure software may appear a daunting task, there has recently been a significant amount of work in this direction. For example, in June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing [3]. Interestingly, they estimate that a better testing infrastructure would save more than a third of these costs, or about $22 billion a year. More recently, the links between economics and security have been studied by academic researchers. See [4] for more information about some of these efforts.
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently develop appropriate business decisions (resources) to manage the risk. Remember: measuring and testing web applications is even more critical than for other software, since web applications are exposed to millions of users through the Internet.
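As a concrete sketch of translating technical findings into monetary terms, the snippet below uses the standard Annualized Loss Expectancy formula (ALE = SLE x ARO) from classical risk analysis. This formula and all the dollar figures are illustrative assumptions, not material from this guide:

```python
# Hypothetical illustration: expressing a vulnerability's impact in money.
# SLE (single loss expectancy) = estimated cost of one incident.
# ARO (annualized rate of occurrence) = expected incidents per year.

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """Return the expected yearly loss from one class of incident."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Invented numbers: a breach via SQL injection costing $250,000 per
# incident, expected to be exploited once every two years on average.
ale = annualized_loss_expectancy(250_000, 0.5)
print(f"ALE: ${ale:,.0f} per year")  # prints "ALE: $125,000 per year"
```

A figure like this gives a CIO a per-vulnerability basis for comparing the cost of fixing an issue against the expected loss of leaving it in place.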
What is Testing?
What do we mean by testing? During the development life cycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as:
To put to test or proof.
To undergo a test.
To be assigned a standing or evaluation based on tests.
For the purposes of this document, testing is a process of comparing the state of a system or application against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document's aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference.
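"Comparing state against a set of criteria" can be made concrete in a few lines. The sketch below is an illustration of that idea only, not a method prescribed by the guide; the sample header value is invented. The state is a Set-Cookie response header, and the criteria are two explicit, well-defined checks (the Secure and HttpOnly attributes, which the guide covers later under OWASP-SM-002):

```python
# Minimal illustration of testing as "state vs. explicit criteria":
# the state is a Set-Cookie header; the criteria are two flags.

def check_cookie_attributes(set_cookie_header: str) -> dict:
    """Return which of the two criteria the header satisfies."""
    attrs = [part.strip().lower() for part in set_cookie_header.split(";")]
    return {
        "secure": "secure" in attrs,      # cookie only sent over TLS?
        "httponly": "httponly" in attrs,  # cookie hidden from scripts?
    }

result = check_cookie_attributes("SESSIONID=abc123; Path=/; Secure")
print(result)  # {'secure': True, 'httponly': False} -> one criterion failed
```

Written this way, the criteria are explicit and repeatable, which is exactly what distinguishes a testing process from the ill-defined "mental criteria" described above.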
Why Testing?
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and maintain their software, or prepare for an audit. This chapter does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the remaining parts of this document.
When to Test?
Most people today don't test software until it has already been created and is in the deployment phase of its life cycle (i.e., code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security in each of its phases. An SDLC is a structure imposed on the development of software artifacts. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model.
Figure 1: Generic SDLC Model (image: http://www.owasp.org/images/8/84/SDLC.jpg)
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process.
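The rising cost curve behind Figure 1 can be sketched numerically. The multipliers below are illustrative assumptions (commonly cited orders of magnitude, not figures from this guide); the point is only that the later a security bug is found in the SDLC, the more it costs to fix:

```python
# Assumed relative cost multipliers for fixing a bug, by the SDLC
# phase in which it is discovered. Illustrative values, not data.
RELATIVE_FIX_COST = {
    "define": 1,
    "design": 5,
    "develop": 10,
    "deploy": 15,
    "maintain": 30,
}

def fix_cost(phase: str, base_cost: float) -> float:
    """Estimated cost of fixing a bug discovered in the given phase."""
    return base_cost * RELATIVE_FIX_COST[phase]

# Same bug, same hypothetical $100 baseline, very different bills:
print(fix_cost("design", 100.0))    # prints 500.0
print(fix_cost("maintain", 100.0))  # prints 3000.0
```

Whatever the exact multipliers, the shape of the curve is the argument for moving security testing into the earlier phases of the SDLC.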
What to Test?
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that "create" software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself.
An effective testing program should have components that test: People, to ensure that there is adequate education and awareness; Process, to ensure that there are adequate policies and standards and that people know how to follow these policies; and Technology, to ensure that the process has been effective in its implementation. Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policies, and processes, an organization can catch issues that would later manifest themselves as defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at Fidelity National Financial (http://www.fnf.com), presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York [5]: "If cars were built like applications [...] safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact, and resistance to theft."
Feedback and Comments
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.
Principles of Testing
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software.
There is No Silver Bullet
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product.
Think Strategically, Not Tactically
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990s. The patch-and-penetrate model involves fixing a reported bug without proper investigation of the root cause. This model is usually associated with the window of vulnerability shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. For more information about the window of vulnerability please refer to [6]. Vulnerability studies [7] have shown that, given the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. The patch-and-penetrate model also rests on several wrong assumptions: in practice, patches can interfere with normal operations and might break existing applications, and not all users will (in the end) be aware of a patch's availability. Consequently, not all of a product's users will apply patches, either because of these issues or because they lack knowledge of the patch's existence.
Figure 2: Window of exposure
To prevent recurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk.
The SDLC is King
The SDLC is a process that is well-known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e., define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program.
Test Early and Test Often
When a bug is detected early within the SDLC, it can be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect and prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.
Understand the Scope of Security
It is important to know how much security a given project will require.
The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g., Confidential, Secret, Top Secret). Discussions should occur with legal counsel to ensure that any specific security need will be met. In the USA, requirements might come from federal regulations, such as the Gramm-Leach-Bliley Act [8], or from state laws, such as the California SB-1386 [9]. For organizations based in EU countries, both country-specific regulation and EU Directives might apply. For example, Directive 96/46/EC4 [10] makes it mandatory to treat personal data in applications with due care, whatever the application.
Develop the Right Mindset
Successfully testing an application for security vulnerabilities requires thinking "outside of the box." Normal use cases will test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true and how they can be subverted. This is one of the reasons why automated tools are poor at testing for vulnerabilities: this creative thinking must be applied on a case-by-case basis, and most web applications are developed in a unique way (even when built on common frameworks).
Understand the Subject
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data-flow diagrams, use cases, and more should be written in formal documents and made available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases.
Finally, it is good to have at least a basic security infrastructure that allows the monitoring and trending of attacks against an organization's applications and network (e.g., IDS systems).
Use the Right Tools
While we have already stated that there is no silver bullet tool, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can automate many routine security tasks. These tools can simplify and speed up the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly.
The Devil is in the Details
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities.
Use Source Code When Available
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement.
Develop Metrics
An important part of a good security program is the ability to determine if things are getting better.
It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. These metrics can show whether more education and training are required, whether there is a particular security mechanism that is not clearly understood by development, and whether the total number of security-related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations might be a good head start.
Document the Test Results
To conclude the testing process, it is important to produce a formal record of what testing actions were taken, by whom, when they were performed, and details of the test findings. It is wise to agree on an acceptable format for the report which is useful to all concerned parties, which may include developers, project management, business owners, IT department, audit, and compliance. The report must be clear to the business owner in identifying where material risks exist, and sufficient to get their backing for subsequent mitigation actions. The report must be clear to the developer in pin-pointing the exact function that is affected by the vulnerability, with associated recommendations for resolution in a language that the developer will understand (no pun intended). Last but not least, report writing should not be overly burdensome on the security testers themselves; security testers are not generally renowned for their creative writing skills, and agreeing on a complex report can lead to instances where test results do not get properly documented.
Testing Techniques Explained
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Chapter 3 will address this information. This section is included to provide context for the framework presented in the next chapter and to highlight the advantages and disadvantages of some of the techniques that should be considered. In particular, we will cover:
Manual Inspections & Reviews
Threat Modeling
Code Review
Penetration Testing
Manual Inspections & Reviews
Overview
Manual inspections are human-driven reviews that typically test the security implications of people, policies, and processes, but can also include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or performing interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. By asking someone how something works and why it was implemented in a specific way, the tester can quickly determine whether any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development life-cycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model: not everything you are told or shown will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.
Advantages:
Requires no supporting technology
Can be applied to a variety of situations
Flexible
Promotes teamwork
Early in the SDLC
Disadvantages:
Can be time consuming
Supporting material not always available
Requires significant human thought and skill to be effective!
Threat Modeling
Overview
Threat modeling has become a popular technique to help system designers think about the security threats that their systems and applications might face. Threat modeling can therefore be seen as risk assessment for applications. It enables the designer to develop mitigation strategies for potential vulnerabilities and helps them focus their inevitably limited resources and attention on the parts of the system that most require it. It is recommended that all applications have a threat model developed and documented. Threat models should be created as early as possible in the SDLC, and should be revisited as the application evolves and development progresses. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 [11] standard for risk assessment. This approach involves:
Decomposing the application - understanding, through a process of manual inspection, how the application works, its assets, functionality, and connectivity.
Defining and classifying the assets - classifying the assets into tangible and intangible assets and ranking them according to business importance.
Exploring potential vulnerabilities - whether technical, operational, or management.
Exploring potential threats - developing a realistic view of potential attack vectors from an attacker's perspective, by using threat scenarios or attack trees.
Creating mitigation strategies - developing mitigating controls for each of the threats deemed to be realistic.
The output from a threat model itself can vary, but is typically a collection of lists and diagrams. The OWASP Code Review Guide outlines an Application Threat Modeling methodology that can be used as a reference for testing applications for potential security flaws in their design. There is no right or wrong way to develop threat models and perform information risk assessments on applications [12].
Advantages:
Practical attacker's view of the system
Flexible
Early in the SDLC
Disadvantages:
Relatively new technique
Good threat models don't automatically mean good software
Source Code Review
Overview
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, "if you want to know what's really going on, go straight to the source." Almost all security experts agree that there is no substitute for actually looking at the code. All the information needed to identify security problems is there in the code, somewhere. Unlike testing third-party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guesswork of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient at finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. Keep in mind, however, that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed [13].
Advantages:
Completeness and effectiveness
Accuracy
Fast (for competent reviewers)
Disadvantages:
Requires highly skilled security developers
Can miss issues in compiled libraries
Cannot detect run-time errors easily
The source code actually deployed might differ from the one being analyzed
For more on code review, check out the OWASP Code Review Project (http://www.owasp.org/index.php/OWASP_Code_Review_Project).
Penetration Testing
Overview
Penetration testing has been a common technique used to test network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the art of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team has access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process but, given the nature of web applications, their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered the primary or only testing technique. Gary McGraw [14] summed up penetration testing well when he said, "If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don't have a very bad problem." However, focused penetration testing (i.e., testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting whether some specific vulnerabilities are actually fixed in the source code deployed on the web site.
Advantages:
Can be fast (and therefore cheap)
Requires a relatively lower skill-set than source code review
Tests the code that is actually being exposed
Disadvantages:
Too late in the SDLC
Front impact testing only!
The Need for a Balanced Approach
With so many techniques and so many approaches to testing the security of web applications, it can be difficult to understand which techniques to use and when to use them. Experience shows that there is no right or wrong answer to the question of exactly which techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all the security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply "too little too late" in the software development life cycle (SDLC).
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing, and that covers testing in all phases of the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of more complete testing.
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle.
In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.
Figure 3: Proportion of Test Effort in SDLC
The following figure shows a typical proportional representation overlaid onto testing techniques.
Figure 4: Proportion of Test Effort According to Test Technique
A Note about Web Application Scanners
Many organizations have started to use automated web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately. NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective.
Example 1: Magic Parameters
Imagine a simple web application that accepts a name-value pair of "magic" and then the value. For simplicity, the GET request may be: http://www.host/application?magic=value To further simplify the example, the values in this case can only be ASCII characters a-z (upper or lowercase) and integers 0-9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will be logged in and presented with an administrative screen with total control of the application. The HTTP request is now: http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d Given that all of the other parameters were simple two- and three-character fields, it is not feasible to start guessing values around 30 characters in length. A web application scanner would need to brute force (or guess) the entire key space of 30 characters drawn from 62 possible values. That is up to 62^30 permutations, or trillions upon trillions of HTTP requests: an electron in a digital haystack! The code for this exemplar Magic Parameter check may look like the following:
public void doPost( HttpServletRequest request, HttpServletResponse response)
{
    String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d";
    boolean admin = magic.equals( request.getParameter("magic"));
    if (admin) doAdmin( request, response);
    else { /* normal processing */ }
}
By looking at the code, the vulnerability practically leaps off the page as a potential problem.
Example 2: Bad Cryptography
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: Hash { username : date }. When a user is passed to site B, he/she will send the key on the query string to site B in an HTTP redirect. Site B independently computes the hash, and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, as we explain the scheme, the inadequacies can be worked out, and it can be seen how anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.
A Note about Static Source Code Review Tools
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot identify issues due to flaws in the design, since it cannot understand the context in which the code is constructed. Source code analysis tools are useful in determining security issues due to coding errors; however, significant manual effort is required to validate the findings.
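The weak single sign-on scheme of Example 2 can be sketched as runnable Java. The class and method names are ours, and the username and date values are purely illustrative; the point the sketch makes is the one a manual review would catch immediately: anyone who knows the recipe can mint a valid token.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class WeakSsoToken {

    // MD5(username + ":" + date), exactly the recipe described in Example 2.
    static String md5Hex(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JRE
        }
    }

    // What site A issues for a legitimately logged-in user.
    static String issueToken(String username, String date) {
        return md5Hex(username + ":" + date);
    }

    // What an attacker who knows the recipe can compute with no credentials.
    static String forgeToken(String victim, String date) {
        return md5Hex(victim + ":" + date);
    }

    public static void main(String[] args) {
        String legit = issueToken("alice", "2008-12-01");
        String forged = forgeToken("alice", "2008-12-01");
        // Site B has no way to tell the two apart.
        System.out.println(legit.equals(forged)); // prints "true"
    }
}
```

A black-box scanner only ever sees the opaque 32-character hash; the flaw is in the recipe, which only a review of the design or the code exposes.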
Security Requirements Test Derivation
If you want to have a successful testing program, you need to know what the objectives of the testing are. These objectives are specified by security requirements. This section discusses in detail how to document requirements for security testing by deriving them from applicable standards and regulations, and from positive and negative application requirements. It also discusses how security requirements effectively drive security testing during the SDLC and how security test data can be used to effectively manage software security risks.
Testing Objectives
One of the objectives of security testing is to validate that security controls function as expected. This is documented via security requirements that describe the functionality of the security control. At a high level, this means proving the confidentiality, integrity, and availability of the data as well as the service. The other objective is to validate that security controls are implemented with few or no vulnerabilities. These are common vulnerabilities, such as the OWASP Top Ten, as well as vulnerabilities previously identified through security assessments during the SDLC, such as threat modeling, source code analysis, and penetration tests.
Security Requirements Documentation
The first step in the documentation of security requirements is to understand the business requirements. A business requirement document can provide the initial, high-level description of the expected functionality of the application. For example, the main purpose of an application may be to provide financial services to customers or to allow goods to be purchased from an on-line catalogue. A security section of the business requirements should highlight the need to protect customer data as well as to comply with applicable security documentation such as regulations, standards, and policies.
A general checklist of the applicable regulations, standards, and policies serves the purpose of a preliminary security compliance analysis for web applications well. For example, compliance regulations can be identified by checking information about the business sector and the country or state where the application needs to operate. Some of these compliance guidelines and regulations might translate into specific technical requirements for security controls. For example, in the case of financial applications, compliance with the FFIEC guidelines for authentication [15] requires that financial institutions implement applications that mitigate weak authentication risks with multi-layered security controls and multi-factor authentication.
Applicable industry standards for security also need to be captured by the general security requirements checklist. For example, in the case of applications that handle customer credit card data, compliance with the PCI DSS [16] standard forbids the storage of PINs and CVV2 data, and requires that the merchant protect magnetic stripe data in storage and transmission with encryption, and on display by masking. Such PCI DSS security requirements could be validated via source code analysis.
Another section of the checklist needs to enforce general requirements for compliance with the organization's information security standards and policies. From the functional requirements perspective, requirements for security controls need to map to a specific section of the information security standards. An example of such a requirement can be: "a password complexity of six alphanumeric characters must be enforced by the authentication controls used by the application." When security requirements map to compliance rules, a security test can validate the exposure of compliance risks. If violations of information security standards and policies are found, these will result in a risk that can be documented and that the business has to deal with (i.e., manage). For this reason, since these security compliance requirements are enforceable, they need to be well documented and validated with security tests.
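As an illustration, the quoted password-complexity rule could be turned into a directly testable check along the lines of the following sketch. The class and method names are hypothetical, and whether "six alphanumeric characters" should also demand a mix of letters and digits is an interpretation the organization's own standard would have to settle.

```java
public class PasswordPolicy {

    // Returns true when the password is at least six characters long and
    // contains only letters and digits. (Stricter readings of the rule,
    // e.g. requiring both a letter and a digit, would tighten this check.)
    static boolean meetsPolicy(String password) {
        if (password == null || password.length() < 6) return false;
        for (char c : password.toCharArray()) {
            if (!Character.isLetterOrDigit(c)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(meetsPolicy("abc123")); // true
        System.out.println(meetsPolicy("ab1"));    // false: too short
        System.out.println(meetsPolicy("abc 12")); // false: space is not alphanumeric
    }
}
```

A security test can then exercise the authentication control with passing and failing inputs and compare the application's behavior against this expectation.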
Security Requirements Validation
From the functionality perspective, the validation of security requirements is the main objective of security testing, while, from the risk management perspective, this is the objective of information security assessments. At a high level, the main goal of information security assessments is the identification of gaps in security controls, such as a lack of basic authentication, authorization, or encryption controls. Examined more deeply, the security assessment objective is risk analysis, such as the identification of potential weaknesses in security controls that ensure the confidentiality, integrity, and availability of the data. For example, when the application deals with personally identifiable information (PII) and sensitive data, the security requirement to be validated is compliance with the company information security policy requiring encryption of such data in transit and in storage. Assuming encryption is used to protect the data, encryption algorithms and key lengths need to comply with the organization's encryption standards. These might require that only certain algorithms and key lengths be used. For example, a security requirement that can be security tested is verifying that only allowed ciphers are used (e.g., SHA-1, RSA, 3DES) with allowed minimum key lengths (e.g., more than 128 bits for symmetric and more than 1024 bits for asymmetric encryption).
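A security test for such an encryption-standards requirement can reduce to a simple policy check like the sketch below, using the algorithms and key lengths quoted above as the allow-list. The class name, the allow-list contents, and the choice of inclusive bounds are our assumptions for illustration, not part of any particular standard.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CryptoPolicyCheck {

    // Allow-list taken from the examples in the text; a real standard
    // would name its own algorithms and bounds.
    static final Set<String> ALLOWED_ALGORITHMS =
            new HashSet<>(Arrays.asList("SHA-1", "RSA", "3DES"));

    static final int MIN_SYMMETRIC_BITS = 128;
    static final int MIN_ASYMMETRIC_BITS = 1024;

    // True when the configured algorithm is on the allow-list and the
    // key length meets the applicable minimum.
    static boolean compliant(String algorithm, int keyBits, boolean asymmetric) {
        if (!ALLOWED_ALGORITHMS.contains(algorithm)) return false;
        int minimum = asymmetric ? MIN_ASYMMETRIC_BITS : MIN_SYMMETRIC_BITS;
        return keyBits >= minimum;
    }

    public static void main(String[] args) {
        System.out.println(compliant("3DES", 168, false)); // true
        System.out.println(compliant("RSA", 512, true));   // false: key too short
        System.out.println(compliant("DES", 168, false));  // false: algorithm not allowed
    }
}
```

In practice such a check would be driven by the application's actual crypto configuration, discovered through code review or configuration inspection.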
From the security assessment perspective, security requirements can be validated at different phases of the SDLC by using different artifacts and testing methodologies. For example, threat modeling focuses on identifying security flaws during design, secure code analysis and reviews focus on identifying security issues in source code during development, and penetration testing focuses on identifying vulnerabilities in the application during testing/validation.
Security issues that are identified early in the SDLC can be documented in a test plan so they can be validated later with security tests. By combining the results of different testing techniques, it is possible to derive better security test cases and increase the level of assurance of the security requirements. For example, distinguishing true vulnerabilities from un-exploitable ones is possible when the results of penetration tests and source code analysis are combined. Considering the security test for a SQL injection vulnerability, for example, a black box test might first involve a scan of the application to fingerprint the vulnerability. The first evidence of a potential SQL injection vulnerability that can be validated is the generation of a SQL exception. Further validation of the SQL vulnerability might involve manually injecting attack vectors to modify the grammar of the SQL query for an information disclosure exploit. This might involve a lot of trial-and-error analysis until the malicious query is executed. Assuming the tester has the source code, she might learn from source code analysis how to construct the SQL attack vector that can exploit the vulnerability (e.g., execute a malicious query returning confidential data to an unauthorized user).
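The grammar-modification step described above can be illustrated with a small sketch (the table and column names are hypothetical): when user input is concatenated into the SQL text, an attack value does not stay inside the string literal, it rewrites the query itself.

```java
public class SqlInjectionDemo {

    // Flawed query construction: user input is concatenated into the SQL text.
    static String buildQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    public static void main(String[] args) {
        // Normal input stays inside the string literal.
        System.out.println(buildQuery("alice"));
        // The classic attack vector closes the literal and widens the
        // WHERE clause so that it matches every row.
        System.out.println(buildQuery("' OR '1'='1"));
        // With JDBC, a PreparedStatement ("... WHERE name = ?") would keep
        // the input as data and close this hole.
    }
}
```

Seeing the concatenation in the source tells the tester exactly which quoting and which clause the attack vector must match, replacing much of the black-box trial and error.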
Threats and Countermeasures Taxonomies

A threat and countermeasure classification that takes into consideration the root causes of vulnerabilities is the critical factor in verifying that security controls are designed, coded, and built so that the impact due to the exposure of such vulnerabilities is mitigated. In the case of web applications, the exposure of security controls to common vulnerabilities, such as the OWASP Top Ten, can be a good starting point for deriving general security requirements. More specifically, the web application security frame [17] provides a classification (i.e., a taxonomy) of vulnerabilities that can be documented in different guidelines and standards and validated with security tests.
The focus of a threat and countermeasure categorization is to define security requirements in terms of the threats and the root causes of the vulnerabilities. A threat can be categorized by using STRIDE [18], for example, as Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The root cause can be categorized as a security flaw in design, a security bug in coding, or an issue due to insecure configuration. For example, the root cause of a weak authentication vulnerability might be the lack of mutual authentication when data crosses a trust boundary between the client and server tiers of the application. A security requirement that captures the threat of repudiation during an architecture design review allows for the documentation of the requirement for the countermeasure (e.g., mutual authentication) that can be validated later on with security tests.
A threat and countermeasure categorization for vulnerabilities can also be used to document security requirements for secure coding, such as secure coding standards. An example of a common coding error in authentication controls is applying a hash function to a password without applying a salt (seed) to the value. From the secure coding perspective, this is a vulnerability that affects the cryptography used for authentication, with a root cause in a coding error. Since the root cause is insecure coding, the security requirement can be documented in secure coding standards and validated through secure code reviews during the development phase of the SDLC.
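A hedged sketch of the countermeasure for this coding error, using Python's standard library: each password is hashed with a random per-user salt, so identical passwords produce different digests and precomputed dictionary attacks are defeated. The iteration count and salt size are illustrative choices.

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password with a per-user random salt using PBKDF2-HMAC-SHA256.

    Returns (salt, digest). Passing the stored salt back in reproduces the
    digest for verification at login time.
    """
    salt = salt or os.urandom(16)  # 16 random bytes per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt1, d1 = hash_password("s3cret!")
salt2, d2 = hash_password("s3cret!")
print(d1 != d2)  # True: different salts yield different digests
```

A security unit test for this requirement would assert both properties: two hashes of the same password differ, and re-hashing with the stored salt reproduces the stored digest.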
Security Testing and Risk Analysis

Security requirements need to take into consideration the severity of the vulnerabilities to support a risk mitigation strategy. Assuming that the organization maintains a repository of vulnerabilities found in applications (i.e., a vulnerability knowledge base), the security issues can be reported by type, issue, mitigation, and root cause, and mapped to the applications where they are found. Such a vulnerability knowledge base can also be used to establish metrics to analyze the effectiveness of the security tests throughout the SDLC.
For example, consider an input validation issue, such as a SQL injection, which was identified via source code analysis and reported with a coding error root cause and an input validation vulnerability type. The exposure of such a vulnerability can be assessed via a penetration test, by probing input fields with several SQL injection attack vectors. This test might confirm that special characters are filtered before reaching the database, mitigating the vulnerability. By combining the results of source code analysis and penetration testing, it is possible to determine the likelihood and exposure of the vulnerability and calculate its risk rating. By reporting vulnerability risk ratings in the findings (e.g., the test report), it is possible to decide on the mitigation strategy. For example, high and medium risk vulnerabilities can be prioritized for remediation, while low risk ones can be fixed in future releases.
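The risk rating derivation can be sketched as a simple likelihood-times-impact calculation. The scales and thresholds below are assumptions for illustration; the OWASP Risk Rating Methodology defines its own factors.

```python
def risk_rating(likelihood: int, impact: int) -> str:
    """Rate a vulnerability, with likelihood and impact on a 1 (low) .. 3 (high) scale."""
    score = likelihood * impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# A SQL injection confirmed exploitable by a penetration test:
print(risk_rating(likelihood=3, impact=3))  # high
# The same coding flaw shown unreachable because input is filtered upstream:
print(risk_rating(likelihood=1, impact=2))  # low
```

This is how combining the two techniques changes the rating: source code analysis fixes the impact side, while the penetration test result adjusts the likelihood.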
By considering the threat scenarios that exploit common vulnerabilities, it is possible to identify potential risks for which the application's security controls need to be security tested. For example, the OWASP Top Ten vulnerabilities can be mapped to attacks such as phishing, privacy violations, identity theft, system compromise, data alteration or data destruction, financial loss, and reputation loss. Such issues should be documented as part of the threat scenarios. By thinking in terms of threats and vulnerabilities, it is possible to devise a battery of tests that simulate such attack scenarios. Ideally, the organization's vulnerability knowledge base can be used to derive security-risk-driven test cases to validate the most likely attack scenarios. For example, if identity theft is considered high risk, negative test scenarios should validate the mitigation of impacts deriving from the exploit of vulnerabilities in authentication, cryptographic controls, input validation, and authorization controls.
Functional and Non-Functional Test Requirements
Functional Security Requirements

From the perspective of functional security requirements, the applicable standards, policies, and regulations drive both the need for a type of security control and the control's functionality. These requirements are also referred to as positive requirements, since they state the expected functionality that can be validated through security tests. Examples of positive requirements are: "the application will lock out the user after six failed logon attempts" or "passwords need to be a minimum of six alphanumeric characters". The validation of positive requirements consists of asserting the expected functionality and, as such, can be performed by re-creating the testing conditions, running the test with predefined inputs, and asserting the expected outcome as a pass/fail condition.
In order to validate security requirements with security tests, security requirements need to be function driven and highlight the expected functionality (the what) and implicitly the implementation (the how). Examples of high-level security design requirements for authentication can be:
Protect user credentials and shared secrets in transit and in storage
Mask any confidential data in display (e.g., passwords, accounts)
Lock the user account after a certain number of failed login attempts
Do not show specific validation errors to the user as a result of failed logon
Only allow passwords that are alphanumeric, include special characters, and are at least six characters long, to limit the attack surface
Allow password change functionality only for authenticated users, by validating the old password, the new password, and the user's answer to the challenge question, to prevent brute forcing of a password via password change.
The password reset form should validate the user's username and the user's registered email before sending the temporary password to the user via email. The temporary password issued should be a one-time password. A link to the password reset web page will be sent to the user. The password reset web page should validate the user's temporary password, the new password, as well as the user's answer to the challenge question.
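A positive requirement such as the lockout rule above can be validated with a deterministic pass/fail test: re-create the condition (six failed logons) and assert the expected outcome (the account is locked). In the sketch below, AccountService is a hypothetical stand-in for the application under test, not a real API.

```python
class AccountService:
    """Hypothetical authentication component implementing account lockout."""
    MAX_FAILURES = 6

    def __init__(self, password: str):
        self._password = password
        self.failures = 0
        self.locked = False

    def login(self, password: str) -> bool:
        if self.locked:
            return False
        if password == self._password:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked = True
        return False

# Test: six failed attempts must lock the account.
svc = AccountService("correct horse")
for _ in range(6):
    svc.login("wrong guess")
print(svc.locked)                  # True: lockout threshold reached
print(svc.login("correct horse"))  # False: even the right password is refused
```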
Risk Driven Security Requirements

Security tests also need to be risk driven, that is, they need to validate the application for unexpected behavior. These are also called negative requirements, since they specify what the application should not do. Examples of "should not do" (negative) requirements are:
The application should not allow for the data to be altered or destroyed
The application should not be compromised or misused for unauthorized financial transactions by a malicious user.
Negative requirements are more difficult to test, because there is no expected behavior to look for. This might require a threat analyst to come up with unforeseeable input conditions, causes, and effects. This is where security testing needs to be driven by risk analysis and threat modeling. The key is to document the threat scenarios and the functionality of the countermeasure as a factor to mitigate a threat. For example, in case of authentication controls, the following security requirements can be documented from the threats and countermeasure perspective:
Encrypt authentication data in storage and transit to mitigate risk of information disclosure and authentication protocol attacks
Encrypt passwords using a non-reversible transformation, such as a digest (e.g., a hash) with a salt (seed), to prevent dictionary attacks
Lock out accounts after reaching a logon failure threshold and enforce password complexity to mitigate risk of brute force password attacks
Display generic error messages upon validation of credentials to mitigate risk of account harvesting/enumeration
Mutually authenticate client and server to prevent repudiation and Man In the Middle (MiTM) attacks
Threat modeling artifacts such as threat trees and attack libraries can be useful to derive the negative test scenarios. A threat tree will assume a root attack (e.g., attacker might be able to read other users messages) and identify different exploits of security controls (e.g., data validation fails because of a SQL injection vulnerability) and necessary countermeasures (e.g., implement data validation and parametrized queries) that could be validated to be effective in mitigating such attacks.
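The parametrized-query countermeasure named in this example can be demonstrated with a minimal sketch using Python's built-in sqlite3 module; the table and the attack string are illustrative. A parametrized query keeps the attack vector as data rather than letting it alter the grammar of the SQL statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (owner TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES ('alice', 'hello'), ('bob', 'secret')")

attacker_input = "alice' OR '1'='1"  # classic injection attempt

# A vulnerable pattern (string concatenation) would return every row;
# the parametrized form treats the input as a literal owner name,
# quotes and all, so the injection attempt returns nothing.
rows = conn.execute(
    "SELECT body FROM messages WHERE owner = ?", (attacker_input,)
).fetchall()
print(rows)  # []
```

A negative test derived from the threat tree would assert exactly this: the injection payload must not return other users' messages.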
Security Requirements Derivation Through Use and Misuse Cases
A prerequisite to describing the application functionality is understanding what the application is supposed to do and how. This can be done by describing use cases. Use cases, in the graphical form commonly used in software engineering, show the interactions of actors and their relations, and help to identify the actors in the application, their relationships, the intended sequence of actions for each scenario, alternative actions, special requirements, and pre- and post-conditions.

Similar to use cases, misuse and abuse cases [19] describe unintended and malicious use scenarios of the application. These misuse cases provide a way to describe scenarios of how an attacker could misuse and abuse the application. By going through the individual steps in a use scenario and thinking about how each can be maliciously exploited, potential flaws or aspects of the application that are not well defined can be discovered. The key is to describe all possible or, at least, the most critical use and misuse scenarios. Misuse scenarios allow the analysis of the application from the attacker's point of view and contribute to identifying potential vulnerabilities and the countermeasures that need to be implemented to mitigate the impact caused by the potential exposure to such vulnerabilities. Given all of the use and abuse cases, it is important to analyze them to determine which are the most critical and need to be documented in security requirements. The identification of the most critical misuse and abuse cases drives the documentation of security requirements and the necessary controls with which security risks should be mitigated.
To derive security requirements from use and misuse cases [20], it is important to define the functional scenarios and the negative scenarios, and put these in graphical form. In the case of deriving security requirements for authentication, for example, the following step-by-step methodology can be followed.
Step 1: Describe the Functional Scenario: User authenticates by supplying username and password. The application grants access to users based upon authentication of user credentials by the application and provides specific errors to the user when validation fails.
Step 2: Describe the Negative Scenario: The attacker breaks the authentication through a brute force/dictionary attack on passwords and account harvesting vulnerabilities in the application. The validation errors give the attacker specific information for guessing which accounts are actually valid, registered accounts (usernames). The attacker then tries to brute force the password for such a valid account. A brute force attack against four-character, all-digit passwords can succeed within a limited number of attempts (i.e., 10^4).
Step 3: Describe Functional and Negative Scenarios With Use and Misuse Cases: The graphical example in the figure below depicts the derivation of security requirements via use and misuse cases. The functional scenario consists of the user actions (entering a username and password) and the application actions (authenticating the user and providing an error message if validation fails). The misuse case consists of the attacker actions, i.e., trying to break authentication by brute forcing the password via a dictionary attack and by guessing the valid usernames from error messages. By graphically representing the threats to the user actions (misuses), it is possible to derive the countermeasures as the application actions that mitigate such threats.
[Figure: Use and misuse case diagram (http://www.owasp.org/images/9/94/UseAndMisuseCase.jpg)]
Step 4: Elicit The Security Requirements. In this case, the following security requirements for authentication are derived:
1) Passwords need to be alphanumeric, with lower and upper case letters, and a minimum length of seven characters
2) Accounts need to lock out after five unsuccessful login attempts
3) Logon error messages need to be generic
These security requirements need to be documented and tested.
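Requirement 1 above lends itself to a direct automated check. This Python sketch is one possible interpretation of the policy (lower case, upper case, and digits, minimum seven characters); the exact character classes required would come from the documented requirement.

```python
import re

def password_meets_policy(password: str) -> bool:
    """Check requirement 1: mixed-case alphanumeric, minimum seven characters."""
    return (
        len(password) >= 7
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
    )

print(password_meets_policy("Passw0rd"))  # True
print(password_meets_policy("short1"))    # False: too short, no upper case
```

The same predicate can serve both as a server-side validation routine and as the assertion inside a security test case.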
Security Tests Integrated in Developers' and Testers' Workflows
Developers' Security Testing Workflow

Security testing during the development phase of the SDLC represents the first opportunity for developers to ensure that the individual software components they have developed are security tested before they are integrated with other components and built into the application. Software components might consist of software artifacts such as functions, methods, and classes, as well as application programming interfaces, libraries, and executables. For security testing, developers can rely on the results of source code analysis to verify statically that the developed source code does not include potential vulnerabilities and is compliant with the secure coding standards. Security unit tests can further verify dynamically (i.e., at run time) that the components function as expected. Before integrating both new and existing code changes into the application build, the results of the static and dynamic analysis should be reviewed and validated. The validation of source code before integration into application builds is usually the responsibility of a senior developer. This senior developer is also the subject matter expert in software security, whose role is to lead the secure code review and decide whether to accept the code for release into the application build or to require further changes and testing. This secure code review workflow can be enforced via formal acceptance, as well as a check in a workflow management tool. For example, assuming the typical defect management workflow used for functional bugs, security bugs that have been fixed by a developer can be reported in a defect or change management system. The build master can look at the test results reported by the developers in the tool and grant approvals for checking the code changes into the application build.
Testers' Security Testing Workflow

After components and code changes are tested by developers and checked into the application build, the most likely next step in the software development process workflow is to perform tests on the application as a whole. This level of testing is usually referred to as integrated testing and system level testing. When security tests are part of these testing activities, they can be used to validate both the security functionality of the application as a whole and the exposure to application level vulnerabilities. These security tests on the application include both white box testing, such as source code analysis, and black box testing, such as penetration testing. Gray box testing is similar to black box testing; in a gray box test, the tester is assumed to have some partial knowledge of the application, such as its session management, which helps in understanding whether the logout and timeout functions are properly secured.
The target for the security tests is the complete system, that is, the artifact that will potentially be attacked, including both the whole source code and the executable. One peculiarity of security testing during this phase is that it is possible for security testers to determine whether vulnerabilities can be exploited and expose the application to real risks. These include common web application vulnerabilities, as well as security issues that have been identified earlier in the SDLC through other activities such as threat modeling, source code analysis, and secure code reviews.
Usually, testing engineers, rather than software developers, perform security tests when the application is in scope for integration system tests. Such testing engineers have security knowledge of web application vulnerabilities and of black box and white box security testing techniques, and own the validation of security requirements in this phase. In order to perform such security tests, it is a pre-requisite that security test cases are documented in the security testing guidelines and procedures.
A testing engineer who validates the security of the application in the integrated system environment might release the application for testing in the operational environment (e.g., user acceptance tests). At this stage of the SDLC (i.e., validation), the application functional testing is usually a responsibility of QA testers, while white-hat hackers/security consultants are usually responsible for security testing. Some organizations rely on their own specialized ethical hacking team in order to conduct such tests when a third party assessment is not required (such as for auditing purposes).
Since these tests are the last resort for fixing vulnerabilities before the application is released to production, it is important that such issues are addressed as recommended by the testing team (e.g., the recommendations can include code, design, or configuration change). At this level, security auditors and information security officers discuss the reported security issues and analyze the potential risks according to information risk management procedures. Such procedures might require the developer team to fix all high risk vulnerabilities before the application could be deployed, unless such risks are acknowledged and accepted.
Developers' Security Tests
Security Testing in the Coding Phase: Unit Tests

From the developers' perspective, the main objective of security tests is to validate that code is being developed in compliance with secure coding standards requirements. Developers' own coding artifacts, such as functions, methods, classes, APIs, and libraries, need to be functionally validated before being integrated into the application build.
The security requirements that developers have to follow should be documented in secure coding standards and validated with static and dynamic analysis. As a testing activity following secure code review, unit tests can validate that code changes required by secure code reviews are properly implemented. Secure code reviews and source code analysis through source code analysis tools help developers identify security issues in source code as it is developed. By using unit tests and dynamic analysis (e.g., debugging), developers can validate the security functionality of components as well as verify that the countermeasures being developed mitigate any security risks previously identified through threat modeling and source code analysis.
A good practice for developers is to build security test cases as a generic security test suite that is part of the existing unit testing framework. A generic security test suite could be derived from previously defined use and misuse cases to security test functions, methods and classes. A generic security test suite might include security test cases to validate both positive and negative requirements for security controls such as:
Authentication & Access Control
Input Validation & Encoding
Encryption
User and Session Management
Error and Exception Handling
Auditing and Logging
Developers empowered with a source code analysis tool integrated into their IDE, secure coding standards, and a security unit testing framework can assess and verify the security of the software components being developed. Security test cases can be run to identify potential security issues that have root causes in source code: besides input and output validation of parameters entering and exiting the components, these issues include authentication and authorization checks done by the component, protection of the data within the component, secure exception and error handling, and secure auditing and logging. Unit test frameworks such as JUnit, NUnit, and CUnit can be adapted to verify security test requirements. In the case of security functional tests, unit level tests can test the functionality of security controls at the software component level, such as functions, methods, or classes. For example, a test case could validate input and output validation (e.g., variable sanitization) and boundary checks for variables by asserting the expected functionality of the component.
The threat scenarios identified with use and misuse cases can be used to document the procedures for testing software components. In the case of authentication components, for example, security unit tests can assert the functionality of setting an account lockout, as well as the fact that user input parameters cannot be abused to bypass the account lockout (e.g., by setting the account lockout counter to a negative number). At the component level, security unit tests can validate positive assertions as well as negative assertions, such as errors and exception handling. Exceptions should be caught without leaving the system in an insecure state, such as a potential denial of service caused by resources not being deallocated (e.g., connection handles not closed within a finally block), or a potential elevation of privileges (e.g., higher privileges acquired before the exception is thrown and not reset to the previous level before exiting the function). Secure error handling can validate against potential information disclosure via informative error messages and stack traces.
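The negative assertion described here, that the lockout counter cannot be abused with a negative number, can be captured as a unit test. LockoutCounter is a hypothetical component and the test is a sketch of the approach, not a real application API.

```python
import unittest

class LockoutCounter:
    """Hypothetical component tracking failed logon attempts."""
    def __init__(self):
        self._failures = 0

    def record_failure(self, count: int = 1):
        # Reject non-positive counts: user input must not lower the counter.
        if count < 1:
            raise ValueError("failure count must be positive")
        self._failures += count

    @property
    def failures(self):
        return self._failures

class LockoutCounterTest(unittest.TestCase):
    def test_negative_count_rejected(self):
        counter = LockoutCounter()
        counter.record_failure(3)
        with self.assertRaises(ValueError):
            counter.record_failure(-3)  # attempted lockout bypass
        self.assertEqual(counter.failures, 3)  # counter unchanged

suite = unittest.defaultTestLoader.loadTestsFromTestCase(LockoutCounterTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```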
Unit level security test cases can be developed by a security engineer who is the subject matter expert in software security, and who is also responsible for validating that the security issues in the source code have been fixed and can be checked into the integrated system build. Typically, the manager of the application builds also makes sure that third-party libraries and executable files are security assessed for potential vulnerabilities before being integrated in the application build.
Threat scenarios for common vulnerabilities that have root causes in insecure coding can also be documented in the developers security testing guide. When a fix is implemented for a coding defect identified with source code analysis, for example, security test cases can verify that the implementation of the code change follows the secure coding requirements documented in the secure coding standards.
Source code analysis and unit tests can validate that the code change mitigates the vulnerability exposed by the previously identified coding defect. The results of automated secure code analysis can also be used as automatic check-in gates for version control: software artifacts cannot be checked into the build with high or medium severity coding issues.
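Such a check-in gate can be sketched as a small script that inspects static analysis findings and blocks the check-in when high or medium severity issues are present. The findings format is an assumption for illustration; real tools emit their own report schemas.

```python
# Severities that block a check-in, per the policy described above.
BLOCKING_SEVERITIES = {"high", "medium"}

def check_in_allowed(findings: list[dict]) -> bool:
    """Return True only if no finding carries a blocking severity."""
    return not any(f["severity"] in BLOCKING_SEVERITIES for f in findings)

report = [
    {"rule": "hardcoded-password", "severity": "high"},
    {"rule": "verbose-logging", "severity": "low"},
]
print(check_in_allowed(report))                              # False: high finding blocks
print(check_in_allowed([{"rule": "x", "severity": "low"}]))  # True: low alone passes
```

In practice a hook in the version control system would run the analysis, feed its report to a gate like this, and reject the commit on a False result.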
Functional Testers' Security Tests
Security Testing During the Integration and Validation Phase: Integrated System Tests and Operation Tests

The main objective of integrated system tests is to validate the defense in depth concept, that is, that the implementation of security controls provides security at different layers. For example, the lack of input validation when calling a component integrated with the application is often a factor that can be tested with integration testing.
The integration system test environment is also the first environment where testers can simulate real attack scenarios as can be potentially executed by a malicious, external or internal user of the application. Security testing at this level can validate whether vulnerabilities are real and can be exploited by attackers. For example, a potential vulnerability found in source code can be rated as high risk because of the exposure to potential malicious users, as well as because of the potential impact (e.g., access to confidential information). Real attack scenarios can be tested with both manual testing techniques and penetration testing tools. Security tests of this type are also referred to as ethical hacking tests. From the security testing perspective, these are risk driven tests and have the objective to test the application in the operational environment. The target is the application build that is representative of the version of the application being deployed into production.
The execution of security tests in the integration and validation phase is critical to identifying vulnerabilities due to the integration of components, as well as to validating the exposure of such vulnerabilities. Since application security testing requires a specialized set of skills, which includes both software and security knowledge and is not typical of security engineers, organizations are often required to security-train their software developers on ethical hacking techniques and security assessment procedures and tools. A realistic scenario is to develop such resources in-house and document them in security testing guides and procedures that take into account the developers' security testing knowledge. A so-called security test case cheat sheet or checklist, for example, can provide simple test cases and attack vectors that can be used by testers to validate exposure to common vulnerabilities such as spoofing, information disclosure, buffer overflows, format strings, SQL injection and XSS injection, XML, SOAP, and canonicalization issues, denial of service, and managed code and ActiveX controls (e.g., .NET). A first battery of these tests can be performed manually with very basic knowledge of software security. The first objective of security tests might be the validation of a set of minimum security requirements. These security test cases might consist of manually forcing the application into error and exceptional states and gathering knowledge from the application's behavior. For example, SQL injection vulnerabilities can be tested manually by injecting attack vectors through user input and by checking if SQL exceptions are thrown back to the user. The evidence of a SQL exception error might be a manifestation of a vulnerability that can be exploited. A more in-depth security test might require the tester's knowledge of specialized testing techniques and tools.
Besides source code analysis and penetration testing, these techniques include, for example, source code and binary fault injection, fault propagation analysis and code coverage, fuzz testing, and reverse engineering. The security testing guide should provide procedures and recommend tools that can be used by security testers to perform such in-depth security assessments.
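Fuzz testing, mentioned above, can be illustrated with a minimal sketch: randomized inputs are thrown at a routine, and any failure mode other than a controlled exception is counted as a finding. parse_amount is a hypothetical component under test, not a real API.

```python
import random
import string

def parse_amount(text: str) -> float:
    """Hypothetical input-parsing routine under test."""
    text = text.strip()
    if not text or len(text) > 20:
        raise ValueError("bad input")
    return float(text)

random.seed(1)  # reproducible fuzzing run
crashes = 0
for _ in range(1000):
    blob = "".join(
        random.choice(string.printable) for _ in range(random.randint(0, 30))
    )
    try:
        parse_amount(blob)
    except (ValueError, OverflowError):
        pass  # controlled, documented failure mode: acceptable
    except Exception:
        crashes += 1  # unexpected failure mode worth investigating
print(crashes)
```

Real fuzzers add coverage feedback and malformed protocol-level inputs, but the pass/fail criterion is the same: the component must fail safely for every input.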
The next level of security testing after integration system tests is to perform security tests in the user acceptance environment. There are unique advantages to performing security tests in the operational environment. The user acceptance test (UAT) environment is the one that is most representative of the release configuration, with the exception of the data (e.g., test data is used in place of real data). A characteristic of security testing in UAT is testing for security configuration issues. In some cases these vulnerabilities might represent high risks. For example, the server that hosts the web application might not be configured with minimum privileges, a valid SSL certificate, and a secure configuration, might have non-essential services enabled, and might have a web root directory not cleaned of test and administration web pages.
Security Test Data Analysis and Reporting
Goals for Security Test Metrics and Measurements

The definition of the goals for security testing metrics and measurements is a pre-requisite for using security test data for risk analysis and management processes. For example, a measurement such as the total number of vulnerabilities found with security tests might quantify the security posture of the application. These measurements also help to identify security objectives for software security testing: for example, reducing the number of vulnerabilities to an acceptable minimum before the application is deployed into production.
Another manageable goal could be to compare the application security posture against a baseline to assess improvements in application security processes. For example, the security metrics baseline might consist of an application that was tested only with penetration tests. The security data obtained from an application that was also security tested during coding should show an improvement (e.g., fewer vulnerabilities) when compared with the baseline.
In traditional software testing, the number of software defects, such as the bugs found in an application, could provide a measure of software quality. Similarly, security testing can provide a measure of software security. From the defect management and reporting perspective, software quality and security testing can use similar categorizations for root causes and defect remediation efforts. From the root cause perspective, a security defect can be due to an error in design (e.g., security flaws) or due to an error in coding (e.g., security bug). From the perspective of the effort required to fix a defect, both security and quality defects can be measured in terms of developer hours to implement the fix, the tools and resources required to fix, and, finally, the cost to implement the fix.
A peculiarity of security test data, compared to quality data, is the categorization in terms of the threat, the exposure of the vulnerability, and the potential impact posed by the vulnerability, which together determine the risk. Testing applications for security consists of managing technical risks to make sure that the application countermeasures meet acceptable levels. For this reason, security testing data needs to support the security risk strategy at critical checkpoints during the SDLC. For example, vulnerabilities found in source code with source code analysis represent an initial measure of risk. Such a measure of risk (e.g., high, medium, low) for the vulnerability can be calculated by determining the exposure and likelihood factors, and further by validating the vulnerability with penetration tests. The risk metrics associated with vulnerabilities found via security tests empower business management to make risk management decisions, such as whether risks can be accepted, mitigated, or transferred at different levels within the organization (e.g., business as well as technical).
When evaluating the security posture of an application, it is important to take into consideration certain factors, such as the size of the application being developed. Application size has been statistically proven to be related to the number of issues found in the application with tests. One measure of application size is the number of lines of code (LOC) of the application. Typically, software quality defects range from about 7 to 10 defects per thousand lines of new and changed code [21]. Since testing can reduce the overall number by about 25% with one test alone, it is logical for larger applications to be tested more extensively and more often than smaller applications.
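Using the figures quoted above as assumptions (7 to 10 defects per KLOC of new and changed code, and roughly 25% removal per test pass), a back-of-the-envelope estimate can be computed:

```python
def expected_defects(kloc, density=8.5, passes=0):
    """Estimate remaining defects for an application of the given size.

    density: assumed defects per KLOC (midpoint of the 7-10 range above).
    passes: number of test passes, each removing ~25% of remaining defects.
    """
    remaining = kloc * density
    for _ in range(passes):
        remaining *= 0.75  # one test pass removes about 25%
    return remaining

print(round(expected_defects(50)))            # 425: 50 KLOC, untested
print(round(expected_defects(50, passes=2)))  # 239: after two test passes
```

The calculation makes the text's point concrete: defect counts scale with code size, so larger applications need proportionally more testing effort.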
When security testing is done in several phases of the SDLC, the test data could prove the capability of the security tests in detecting vulnerabilities as soon as they are introduced, and prove the effectiveness of removing them by implementing countermeasures at different checkpoints of the SDLC. A measurement of this type is also defined as containment metrics and provides a measure of the ability of a security assessment performed at each phase of the development process to maintain security within each phase. These containment metrics are also a critical factor in lowering the cost of fixing the vulnerabilities, since it is less expensive to deal with the vulnerabilities when they are found (in the same phase of the SDLC), rather than fixing them later in another phase.
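A containment metric, as described above, can be quantified as the fraction of defects introduced in a given SDLC phase that were also caught in that phase rather than escaping to a later, more expensive one. A minimal sketch (the counts are illustrative):

```python
def phase_containment(found_in_phase, escaped_to_later_phases):
    """Containment effectiveness for one SDLC phase: the share of defects
    introduced in the phase that were caught there, rather than escaping
    to a later (and more costly) phase. Returns a value between 0 and 1."""
    total = found_in_phase + escaped_to_later_phases
    return found_in_phase / total if total else 0.0

# Illustrative counts: 40 design flaws caught in the design review,
# 10 that surfaced later during coding or testing.
print(round(phase_containment(40, 10), 2))  # 0.8
```

A containment value close to 1.0 for each phase indicates that the assessments at each checkpoint are keeping remediation costs low, as described above.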
Security test metrics can support security risk, cost, and defect management analysis when it is associated with tangible and timed goals such as:
Reducing the overall number of vulnerabilities by 30%
Fixing all security issues by a certain deadline (e.g., before beta release)
Security test data can be absolute, such as the number of vulnerabilities detected during manual code review, as well as comparative, such as the number of vulnerabilities detected in code reviews vs. penetration tests. To answer questions about the quality of the security process, it is important to determine a baseline for what could be considered acceptable and good.
Security test data can also support specific objectives of the security analysis, such as compliance with security regulations and information security standards, management of security processes, the identification of security root causes and process improvements, and security costs vs. benefits analysis.
When security test data is reported it has to provide metrics to support the analysis. The scope of the analysis is the interpretation of test data to find clues about the security of the software being produced as well the effectiveness of the process. Some examples of clues supported by security test data can be:
Are vulnerabilities reduced to an acceptable level for release?
How does the security quality of this product compare with similar software products?
Are all security test requirements being met?
What are the major root causes of security issues?
How numerous are security flaws compared to security bugs?
Which security activity is most effective in finding vulnerabilities?
Which team is more productive in fixing security defects and vulnerabilities?
Which percentage of overall vulnerabilities are high risks?
Which tools are most effective in detecting security vulnerabilities?
Which kinds of security tests are most effective at finding vulnerabilities (e.g., white box vs. black box tests)?
How many security issues are found during secure code reviews?
How many security issues are found during secure design reviews?
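Several of the questions above reduce to simple aggregations over recorded findings. A minimal sketch, in which the record structure and field names are hypothetical:

```python
from collections import Counter

# Hypothetical findings records; the field names are illustrative,
# not a prescribed reporting schema.
findings = [
    {"risk": "High",   "source": "code review", "cause": "security bug"},
    {"risk": "Low",    "source": "pen test",    "cause": "security flaw"},
    {"risk": "High",   "source": "code review", "cause": "security bug"},
    {"risk": "Medium", "source": "pen test",    "cause": "security bug"},
]

# Which percentage of overall vulnerabilities are high risk?
high_pct = 100 * sum(f["risk"] == "High" for f in findings) / len(findings)

# Which security activity found the most issues?
by_source = Counter(f["source"] for f in findings)

print(high_pct)                  # 50.0
print(by_source.most_common(1))  # [('code review', 2)]
```

Keeping findings in a structured form like this makes it straightforward to answer the comparative questions (code review vs. penetration test, flaws vs. bugs) against an agreed baseline.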
In order to make a sound judgment using the testing data, it is important to have a good understanding of the testing process as well as the testing tools. A tool taxonomy should be adopted to decide which security tools to use. Security tools can be qualified as being good at finding common, known vulnerabilities in different artifacts. The issue is that unknown security issues remain untested: a clean tool report does not mean that the software is secure. Some studies [22] have demonstrated that, at best, tools can find only 45% of overall vulnerabilities.
Even the most sophisticated automated tools are no match for an experienced security tester: relying solely on successful test results from automated tools will give security practitioners a false sense of security. Typically, the more experienced the security testers are with the security testing methodology and testing tools, the better the results of the security test and analysis will be. It is important that managers making an investment in security testing tools also consider an investment in hiring skilled human resources and in security test training.
Reporting Requirements
The security posture of an application can be characterized from the perspective of the effect, such as the number of vulnerabilities and their risk ratings, as well as from the perspective of the cause (i.e., origin), such as coding errors, architectural flaws, and configuration issues.
Vulnerabilities can be classified according to different criteria. This can be a statistical categorization, such as the OWASP Top 10 and the WASC Web Application Security Statistics project, or a categorization related to defensive controls, as in the case of the Web Application Security Frame (WASF).
When reporting security test data, the best practice is to include the following information, besides the categorization of each vulnerability by type:
The security threat that the issue is exposed to
The root cause of security issues (e.g., security bugs, security flaw)
The testing technique used to find it
The remediation of the vulnerability (e.g., the countermeasure)
The risk rating of the vulnerability (High, Medium, Low)
By describing what the security threat is, it will be possible to understand if and why the mitigation control is ineffective in mitigating the threat.
Reporting the root cause of the issue can help pinpoint what needs to be fixed: in the case of white box testing, for example, the software security root cause of the vulnerability will be the offending source code.
Once issues are reported, it is also important to provide guidance to the software developer on how to re-test and find the vulnerability. This might involve using a white box testing technique (e.g., security code review with a static code analyzer) to find if the code is vulnerable. If a vulnerability can be found via a black box technique (penetration test), the test report also needs to provide information on how to validate the exposure of the vulnerability to the front end (e.g., client).
The information about how to fix the vulnerability should be detailed enough for a developer to implement a fix. It should include secure coding examples, configuration changes, and adequate references.
Finally the risk rating helps to prioritize the remediation effort. Typically, assigning a risk rating to the vulnerability involves a risk analysis based upon factors such as impact and exposure.
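One common (though not the only) way to combine these factors is a qualitative matrix over likelihood and impact, in the spirit of NIST's risk management guidance [11]. The scoring scale and thresholds below are illustrative assumptions, not a normative standard:

```python
def risk_rating(likelihood, impact):
    """Qualitative risk rating from likelihood and impact scores (1-3 each).
    The product-based matrix and its thresholds are illustrative."""
    score = likelihood * impact
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(risk_rating(3, 3))  # High
print(risk_rating(1, 3))  # Medium
print(risk_rating(1, 2))  # Low
```

Whatever matrix is chosen, applying it consistently across findings is what makes the resulting ratings usable for prioritizing remediation.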
Business Cases
For the security test metrics to be useful, they need to provide value back to the organization's security test data stakeholders, such as project managers, developers, information security offices, auditors, and chief information officers. The value can be expressed in terms of each stakeholder's business case, according to that stakeholder's role and responsibility.
Software developers look at security test data to show that software is coded more securely and efficiently, so that they can make the case for using source code analysis tools, following secure coding standards, and attending software security training.
Project managers look for data that allows them to successfully manage and utilize security testing activities and resources according to the project plan. To project managers, security test data can show that projects are on schedule, on target for delivery dates, and improving during successive tests.
Security test data also helps the business case for security testing if the initiative comes from information security officers (ISOs). For example, it can provide evidence that security testing during the SDLC does not impact the project delivery, but rather reduces the overall workload needed to address vulnerabilities later in production.
To compliance auditors, security test metrics provide a level of software security assurance and confidence that security standard compliance is addressed through the security review processes within the organization.
Finally, Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs), who are responsible for the budget allocated to security resources, look to derive a cost/benefit analysis from security test data in order to make informed decisions about which security activities and tools to invest in. One of the metrics that supports such analysis is the Return On Investment (ROI) in Security [23]. To derive such metrics from security test data, it is important to quantify the differential between the risk due to the exposure of vulnerabilities and the effectiveness of the security tests in mitigating the security risk, and to weigh this gap against the cost of the security testing activity or the testing tools adopted.
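One common formulation of security ROI (sometimes called ROSI) compares the risk exposure mitigated by a security activity against what that activity cost. The formula and the numbers below are illustrative assumptions, not figures taken from [23]:

```python
def rosi(annual_loss_expectancy, mitigation_ratio, cost_of_activity):
    """Return on Security Investment, in one common formulation:
    (risk reduced by the activity - cost of the activity) / cost.
    All inputs are illustrative estimates in the same currency."""
    risk_reduced = annual_loss_expectancy * mitigation_ratio
    return (risk_reduced - cost_of_activity) / cost_of_activity

# E.g., security testing that costs 50,000 and is estimated to mitigate
# 75% of a 200,000 annual loss expectancy yields a ROSI of 2.0 (200%).
print(rosi(200_000, 0.75, 50_000))  # 2.0
```

A positive ROSI supports the business case that the testing activity reduces more risk than it costs; a negative one suggests the spend should be redirected.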
References
[1] T. De Marco, Controlling Software Projects: Management, Measurement and Estimation, Yourdon Press, 1982
[2] S. Payne, A Guide to Security Metrics - http://www.sans.org/reading_room/whitepapers/auditing/55.php
[3] NIST, The Economic Impacts of Inadequate Infrastructure for Software Testing - http://www.nist.gov/public_affairs/releases/n02-10.htm
[4] Ross Anderson, Economics and Security Resource Page - http://www.cl.cam.ac.uk/users/rja14/econsec.html
[5] Denis Verdon, Teaching Developers To Fish - http://www.owasp.org/index.php/OWASP_AppSec_NYC_2004
[6] Bruce Schneier, Cryptogram Issue #9 - http://www.schneier.com/crypto-gram-0009.html
[7] Symantec, Threat Reports - http://www.symantec.com/business/theme.jsp?themeid=threatreport
[8] FTC, The Gramm-Leach-Bliley Act - http://www.ftc.gov/privacy/privacyinitiatives/glbact.html
[9] Senator Peace and Assembly Member Simitian, SB 1386 - http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html
[10] European Union, Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data - http://ec.europa.eu/justice_home/fsj/privacy/docs/95-46-ce/dir1995-46_part1_en.pdf
[11] NIST, Risk Management Guide for Information Technology Systems - http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf
[12] SEI, Carnegie Mellon, Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) - http://www.cert.org/octave/
[13] Ken Thompson, Reflections on Trusting Trust, reprinted from Communications of the ACM - http://cm.bell-labs.com/who/ken/trust.html
[14] Gary McGraw, Beyond the Badness-ometer - http://www.ddj.com/security/189500001
[15] FFIEC, Authentication in an Internet Banking Environment - http://www.ffiec.gov/pdf/authentication_guidance.pdf
[16] PCI Security Standards Council, PCI Data Security Standard - https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml
[17] MSDN, Cheat Sheet: Web Application Security Frame - http://msdn.microsoft.com/en-us/library/ms978518.aspx#tmwacheatsheet_webappsecurityframe
[18] MSDN, Improving Web Application Security, Chapter 2, Threats and Countermeasures - http://msdn.microsoft.com/en-us/library/aa302418.aspx
[19] Gil Regev, Ian Alexander, Alain Wegmann, Use Cases and Misuse Cases Model the Regulatory Roles of Business Processes - http://easyweb.easynet.co.uk/~iany/consultancy/regulatory_processes/regulatory_processes.htm
[20] G. Sindre, A. Opdahl, Capturing Security Requirements Through Misuse Cases - http://folk.uio.no/nik/2001/21-sindre.pdf
[21] Security Across the Software Development Lifecycle Task Force, referred data from Capers Jones, Software Assessments, Benchmarks and Best Practices - http://www.cyberpartnership.org/SDLCFULL.pdf
[22] MITRE, Being Explicit About Weaknesses, Slide 30, Coverage of CWE - http://cwe.mitre.org/documents/being-explicit/BlackHatDC_BeingExplicit_Slides.ppt
[23] Marco Morana, Building Security Into The Software Life Cycle, A Business Case - http://www.blackhat.com/presentations/bh-usa-06/bh-us-06-Morana-R3.0.pdf
3. The OWASP Testing Framework
Overview
This section describes a typical testing framework that can be developed within an organization. It can be seen as a reference framework comprising techniques and tasks that are appropriate at various phases of the software development life cycle (SDLC). Companies and project teams can use this model to develop their own testing framework and to scope testing services from vendors. This framework should not be seen as prescriptive, but as a flexible approach that can be extended and molded to fit an organization's development process and culture.
This section aims to help organizations build a complete strategic testing process, and is not aimed at consultants or contractors who tend to be engaged in more tactical, specific areas of testing.
It is critical to understand why building an end-to-end testing framework is crucial to assessing and improving software security. Howard and LeBlanc note in Writing Secure Code that issuing a security bulletin costs Microsoft at least $100,000, and that it costs their customers collectively far more than that to implement the security patches. They also note that the US government's CyberCrime web site (http://www.cybercrime.gov/cccases.html) details recent criminal cases and the losses to organizations. Typical losses far exceed US$100,000.
With economics like this, it is little wonder that software vendors are moving from solely performing black box security testing, which can only be performed on applications that have already been developed, to concentrating on the early cycles of application development, such as definition, design, and development.
Many security practitioners still see security testing as belonging to the realm of penetration testing. As discussed before, while penetration testing has a role to play, it is generally inefficient at finding bugs and relies excessively on the skill of the tester. It should only be considered as an implementation technique, or as a way to raise awareness of production issues. To improve the security of applications, the security quality of the software must be improved. That means testing security at the definition, design, development, deployment, and maintenance stages, and not relying on the costly strategy of waiting until code is completely built.
As discussed in the introduction of this document, there are many development methodologies such as the Rational Unified Process, eXtreme and Agile development, and traditional waterfall methodologies. The intent of this guide is to suggest neither a particular development methodology nor provide specific guidance that adheres to any particular methodology. Instead, we are presenting a generic development model, and the reader should follow it according to their company process.
This testing framework consists of the following activities that should take place:
Before Development Begins
During Definition and Design
During Development
During Deployment
Maintenance and Operations
Phase 1: Before Development Begins
Before application development has started:
Test to ensure that there is an adequate SDLC where security is inherent
Test to ensure that the appropriate policy and standards are in place for the development team
Develop the metrics and measurement criteria
Phase 1A: Review Policies and Standards
Ensure that there are appropriate policies, standards, and documentation in place. Documentation is extremely important as it gives development teams guidelines and policies that they can follow.
People can only do the right thing if they know what the right thing is.
If the application is to be developed in Java, it is essential that there is a Java secure coding standard. If the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face. By documenting the common and predictable issues, there will be fewer decisions that need to be made during the development process.
Phase 1B: Develop Measurement and Metrics Criteria (Ensure Traceability)
Before development begins, plan the measurement program. Defining the criteria that need to be measured provides visibility into defects in both the process and the product. It is essential to define the metrics before development begins, as there may be a need to modify the process in order to capture the data.
Phase 2: During Definition and Design
Phase 2A: Review Security Requirements
Security requirements define how an application works from a security perspective. It is essential that the security requirements be tested. Testing in this case means testing the assumptions that are made in the requirements, and testing to see if there are gaps in the requirements definitions.
For example, if there is a security requirement that states that users must be registered before they can get access to the whitepapers section of a website, does this mean that the user must be registered with the system, or should the user be authenticated? Ensure that requirements are as unambiguous as possible.
When looking for requirements gaps, consider looking at security mechanisms such as:
User Management (password reset etc.)
Authentication
Authorization
Data Confidentiality
Integrity
Accountability
Session Management
Transport Security
Tiered System Segregation
Privacy
Phase 2B: Review Design and Architecture
Applications should have a documented design and architecture. By documented, we mean models, textual documents, and other similar artifacts. It is essential to test these artifacts to ensure that the design and architecture enforce the appropriate level of security as defined in the requirements.
Identifying security flaws in the design phase is not only one of the most cost-efficient places to identify flaws, but can be one of the most effective places to make changes. For example, if it is identified that the design calls for authorization decisions to be made in multiple places, it may be appropriate to consider a central authorization component. If the application is performing data validation at multiple places, it may be appropriate to develop a central validation framework (fixing input validation in one place, rather than in hundreds of places, is far cheaper).
If weaknesses are discovered, they should be given to the system architect for alternative approaches.
Phase 2C: Create and Review UML Models
Once the design and architecture is complete, build Unified Modeling Language (UML) models that describe how the application works. In some cases, these may already be available. Use these models to confirm with the systems designers an exact understanding of how the application works. If weaknesses are discovered, they should be given to the system architect for alternative approaches.
Phase 2D: Create and Review Threat Models
Armed with design and architecture reviews, and the UML models explaining exactly how the system works, undertake a threat modeling exercise. Develop realistic threat scenarios. Analyze the design and architecture to ensure that these threats have been mitigated, accepted by the business, or assigned to a third party, such as an insurance firm. When identified threats have no mitigation strategies, revisit the design and architecture with the systems architect to modify the design.
Phase 3: During Development
Theoretically, development is the implementation of a design. However, in the real world, many design decisions are made during code development. These are often smaller decisions that were either too detailed to be described in the design, or in other cases, issues where no policy or standard guidance was offered. If the design and architecture were not adequate, the developer will be faced with many decisions. If there were insufficient policies and standards, the developer will be faced with even more decisions.
Phase 3A: Code Walkthroughs
The security team should perform a code walkthrough with the developers, and in some cases, the system architects. A code walkthrough is a high-level walkthrough of the code where the developers can explain the logic and flow of the implemented code. It allows the code review team to obtain a general understanding of the code, and allows the developers to explain why certain things were developed the way they were.
The purpose is not to perform a code review, but to understand at a high level the flow, the layout, and the structure of the code that makes up the application.
Phase 3B: Code Reviews
Armed with a good understanding of how the code is structured and why certain things were coded the way they were, the tester can now examine the actual code for security defects.
Static code reviews validate the code against a set of checklists, including:
Business requirements for availability, confidentiality, and integrity.
OWASP Guide or Top 10 Checklists (depending on the depth of the review) for technical exposures.
Specific issues relating to the language or framework in use, such as the Scarlet paper for PHP or Microsoft Secure Coding checklists for ASP.NET.
Any industry specific requirements, such as Sarbanes-Oxley 404, COPPA, ISO 17799, APRA, HIPAA, Visa Merchant guidelines, or other regulatory regimes.
In terms of return on resources invested (mostly time), static code reviews produce far higher quality returns than any other security review method, and rely least on the skill of the reviewer, within reason. However, they are not a silver bullet, and need to be considered carefully within a full-spectrum testing regime.
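As an illustration of the kind of defect a static review checklist targets, consider string-built SQL versus a parameterized query. The snippet below uses Python's standard sqlite3 module with a hypothetical schema; the vulnerable function is exactly what a reviewer working from the OWASP Top 10 checklist would flag:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_unsafe(name):
    # Flagged in review: user input concatenated into SQL (injection risk).
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Remediation: bind the value as a query parameter instead.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload bypasses the filter in the unsafe version:
payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # [('admin',)] - row leaked despite wrong name
print(lookup_safe(payload))    # []           - payload treated as literal data
```

The review catches this kind of issue by inspection, without needing to execute the application, which is part of why static review yields such a high return on time invested.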
For more details on OWASP checklists, please refer to the OWASP Guide for Secure Web Applications (http://www.owasp.org/index.php/OWASP_Guide_Project) or the latest edition of the OWASP Top 10 (http://www.owasp.org/index.php/OWASP_Top_10).
Phase 4: During Deployment
Phase 4A: Application Penetration Testing
Having tested the requirements, analyzed the design, and performed code review, it might be assumed that all issues have been caught. Hopefully, this is the case, but penetration testing the application after it has been deployed provides a last check to ensure that nothing has been missed.
Phase 4B: Configuration Management Testing
The application penetration test should include checking how the infrastructure was deployed and secured. While the application itself may be secure, a small aspect of the configuration could still be at its default installation settings and vulnerable to exploitation.
Phase 5: Maintenance and Operations
Phase 5A: Conduct Operational Management Reviews
There needs to be a process in place which details how the operational side of both the application and infrastructure is managed.
Phase 5B: Conduct Periodic Health Checks
Monthly or quarterly health checks should be performed on both the application and infrastructure to ensure no new security risks have been introduced and that the level of security is still intact.
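One concrete check that fits such a periodic health check is verifying that the site's TLS certificate is not about to expire. A minimal sketch using only Python's standard library (the host name is a placeholder; the network call is shown but not required to use the date helper):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after):
    """Days left on a certificate, given the 'notAfter' string in the
    format returned by ssl.SSLSocket.getpeercert()."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def check_certificate(host, port=443):
    """Fetch a server's certificate and report days until it expires.
    Requires network access; intended as one item in a health-check run."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])

# Example (needs network access): print(check_certificate("www.example.com"))
```

Scheduling checks like this monthly or quarterly, alongside vulnerability re-scans, helps confirm that the security level established at deployment is still intact.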
Phase 5C: Ensure Change Verification
After every change has been approved and tested in the QA environment and deployed into the production environment, it is vital that, as part of the change management process, the change is checked to ensure that the level of security hasn't been affected by the change.
A Typical SDLC Testing Workflow
The following figure shows a typical SDLC Testing Workflow.
[Figure: Typical SDLC Testing Workflow - http://www.owasp.org/images/4/4e/Typical_SDLC_Testing_Workflow.gif]
4 Web Application Penetration Testing
This chapter describes the OWASP Web Application Penetration Testing methodology and explains how to test for each vulnerability.
4.1 Introduction and objectives
What is Web Application Penetration Testing?
A penetration test is a method of evaluating the security of a computer system or network by simulating an attack. A web application penetration test focuses only on evaluating the security of a web application. The process involves an active analysis of the application for any weaknesses, technical flaws, or vulnerabilities. Any security issues that are found will be presented to the system owner, together with an assessment of their impact and often with a proposal for mitigation or a technical solution.
What is a vulnerability?
A vulnerability is a flaw or weakness in a system's design, implementation, or operation and management that could be exploited to violate the system's security policy. A threat is a potential attack that, by exploiting a vulnerability, may harm the assets owned by an application (resources of value, such as the data in a database or in the file system). A test is an action that tends to show a vulnerability in the application.
What is the OWASP testing methodology?
Penetration testing will never be an exact science by which a complete list of all possible issues that should be tested can be defined. Indeed, penetration testing is only an appropriate technique for testing the security of web applications under certain circumstances. The goal of this guide is to collect all possible testing techniques, explain them, and keep the guide updated. The OWASP Web Application Penetration Testing method is based on the black box approach: the tester knows nothing, or very little, about the application to be tested. The testing model consists of:
Tester: Who performs the testing activities
Tools and methodology: The core of this Testing Guide project
Application: The black box to test
The test is divided into 2 phases:
Passive mode: in the passive mode, the tester tries to understand the application's logic, and plays with the application. Tools can be used for information gathering, for example, an HTTP proxy to observe all the HTTP requests and responses. At the end of this phase, the tester should understand all the access points (gates) of the application (e.g., HTTP headers, parameters, and cookies). The Information Gathering section explains how to perform a passive mode test. For example, the tester could find the following:
https://www.example.com/login/Authentic_Form.html
This may indicate an authentication form in which the application requests a username and a password. The following parameters represent two access points (gates) to the application:
http://www.example.com/Appx.jsp?a=1&b=1
In this case, the application shows two gates (parameters a and b). All the gates found in this phase represent a point of testing. A spreadsheet with the directory tree of the application and all the access points would be useful for the second phase.
Active mode: in this phase, the tester begins to test using the methodology described in the following paragraphs.
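Cataloguing the gates collected during the passive phase can be as simple as parsing each observed URL for its path and parameter names before the active phase begins. A minimal sketch using Python's standard library, applied to the example URL above:

```python
from urllib.parse import urlparse, parse_qs

def extract_gates(url):
    """Return the path and the parameter names (gates) of an observed URL,
    for building the spreadsheet of access points used in the active phase."""
    parsed = urlparse(url)
    return parsed.path, sorted(parse_qs(parsed.query).keys())

path, gates = extract_gates("http://www.example.com/Appx.jsp?a=1&b=1")
print(path)   # /Appx.jsp
print(gates)  # ['a', 'b']
```

Running this over every request captured by the HTTP proxy yields the directory tree and access-point inventory recommended above.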
We have split the set of active tests into nine sub-categories, for a total of 66 controls:
Configuration Management Testing
Business Logic Testing
Authentication Testing
Authorization testing
Session Management Testing
Data Validation Testing
Denial of Service Testing
Web Services Testing
Ajax Testing
The following is the list of controls to test during the assessment:
Category / Ref. Number | Test Name | Vulnerability

Information Gathering
OWASP-IG-001 | Spiders, Robots and Crawlers | N.A.
OWASP-IG-002 | Search Engine Discovery/Reconnaissance | N.A.
OWASP-IG-003 | Identify application entry points | N.A.
OWASP-IG-004 | Testing for Web Application Fingerprint | N.A.
OWASP-IG-005 | Application Discovery | N.A.
OWASP-IG-006 | Analysis of Error Codes | Information Disclosure

Configuration Management Testing
OWASP-CM-001 | SSL/TLS Testing (SSL Version, Algorithms, Key length, Digital Cert. Validity) | SSL Weakness
OWASP-CM-002 | DB Listener Testing | DB Listener weakness
OWASP-CM-003 | Infrastructure Configuration Management Testing | Infrastructure configuration management weakness
OWASP-CM-004 | Application Configuration Management Testing | Application configuration management weakness
OWASP-CM-005 | Testing for File Extensions Handling | File extensions handling
OWASP-CM-006 | Old, backup and unreferenced files | Old, backup and unreferenced files
OWASP-CM-007 | Infrastructure and Application Admin Interfaces | Access to Admin interfaces
OWASP-CM-008 | Testing for HTTP Methods and XST | HTTP Methods enabled, XST permitted, HTTP Verb

Authentication Testing
OWASP-AT-001 | Credentials transport over an encrypted channel | Credentials transport over an encrypted channel
OWASP-AT-002 | Testing for user enumeration | User enumeration
OWASP-AT-003 | Testing for Guessable (Dictionary) User Account | Guessable user account
OWASP-AT-004 | Brute Force Testing | Credentials Brute forcing
OWASP-AT-005 | Testing for bypassing authentication schema | Bypassing authentication schema
OWASP-AT-006 | Testing for vulnerable remember password and pwd reset | Vulnerable remember password, weak pwd reset
OWASP-AT-007 | Testing for Logout and Browser Cache Management | Logout function not properly implemented, browser cache weakness
OWASP-AT-008 | Testing for CAPTCHA | Weak CAPTCHA implementation
OWASP-AT-009 | Testing Multiple Factors Authentication | Weak Multiple Factors Authentication
OWASP-AT-010 | Testing for Race Conditions | Race Conditions vulnerability

Session Management
OWASP-SM-001 | Testing for Session Management Schema | Bypassing Session Management Schema, Weak Session Token
OWASP-SM-002 | Testing for Cookies attributes | Cookies set without HttpOnly or Secure attributes, or with no time validity
OWASP-SM-003 | Testing for Session Fixation | Session Fixation
OWASP-SM-004 | Testing for Exposed Session Variables | Exposed sensitive session variables
OWASP-SM-005 | Testing for CSRF | CSRF

Authorization Testing
OWASP-AZ-001 | Testing for Path Traversal | Path Traversal
OWASP-AZ-002 | Testing for bypassing authorization schema | Bypassing authorization schema
OWASP-AZ-003 | Testing for Privilege Escalation | Privilege Escalation

Business Logic Testing
OWASP-BL-001 | Testing for business logic | Bypassable business logic

Data Validation Testing
OWASP-DV-001 | Testing for Reflected Cross Site Scripting | Reflected XSS
OWASP-DV-002 | Testing for Stored Cross Site Scripting | Stored XSS
OWASP-DV-003 | Testing for DOM based Cross Site Scripting | DOM XSS
OWASP-DV-004 | Testing for Cross Site Flashing | Cross Site Flashing
OWASP-DV-005 | SQL Injection | SQL Injection
OWASP-DV-006 | LDAP Injection | LDAP Injection
OWASP-DV-007 | ORM Injection | ORM Injection
OWASP-DV-008 | XML Injection | XML Injection
OWASP-DV-009 | SSI Injection | SSI Injection
OWASP-DV-010 | XPath Injection | XPath Injection
OWASP-DV-011 | IMAP/SMTP Injection | IMAP/SMTP Injection
OWASP-DV-012 | Code Injection | Code Injection
OWASP-DV-013 | OS Commanding | OS Commanding
OWASP-DV-014 | Buffer overflow | Buffer overflow
OWASP-DV-015 | Incubated vulnerability Testing | Incubated vulnerability
OWASP-DV-016 | Testing for HTTP Splitting/Smuggling | HTTP Splitting, Smuggling

Denial of Service Testing
OWASP-DS-001 | Testing for SQL Wildcard Attacks | SQL Wildcard vulnerability
OWASP-DS-002 | Locking Customer Accounts | Locking Customer Accounts
OWASP-DS-003 | Testing for DoS Buffer Overflows | Buffer Overflows
OWASP-DS-004 | User Specified Object Allocation | User Specified Object Allocation
OWASP-DS-005 | User Input as a Loop Counter | User Input as a Loop Counter
OWASP-DS-006 | Writing User Provided Data to Disk | Writing User Provided Data to Disk
OWASP-DS-007 | Failure to Release Resources | Failure to Release Resources
OWASP-DS-008 | Storing too Much Data in Session | Storing too Much Data in Session

Web Services Testing
OWASP-WS-001 | WS Information Gathering | N.A.
OWASP-WS-002 | Testing WSDL | WSDL Weakness
OWASP-WS-003 | XML Structural Testing | Weak XML Structure
OWASP-WS-004 | XML content-level Testing | XML content-level
OWASP-WS-005 | HTTP GET parameters/REST Testing | WS HTTP GET parameters/REST
OWASP-WS-006 | Naughty SOAP attachments | WS Naughty SOAP attachments
OWASP-WS-007 | Replay Testing | WS Replay Testing

AJAX Testing
OWASP-AJ-001 | AJAX Vulnerabilities | N.A.
OWASP-AJ-002 | AJAX Testing | AJAX weakness
4.2 Information Gathering
The first phase in security assessment is focused on collecting as much information as possible about a target application. Information Gathering is a necessary step of a penetration test. This task can be carried out in many different ways.
By using public tools (search engines), scanners, sending simple HTTP requests, or specially crafted requests, it is possible to force the application to leak information, e.g., disclosing error messages or revealing the versions and technologies used.
Spiders, Robots, and Crawlers (OWASP-IG-001)
This phase of the Information Gathering process consists of browsing and capturing resources related to the application being tested.
Search Engine Discovery/Reconnaissance (OWASP-IG-002)
Search engines, such as Google, can be used to discover issues related to the web application structure or error pages produced by the application that have been publicly exposed.
Identify application entry points (OWASP-IG-003)
Enumerating the application and its attack surface is a key precursor to any attack. This section helps you identify and map out every area within the application that should be investigated once the enumeration and mapping phase has been completed.
Testing Web Application Fingerprint (OWASP-IG-004)
Application fingerprinting is the first step of the Information Gathering process; knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.
Application Discovery (OWASP-IG-005)
Application discovery is an activity oriented to the identification of the web applications hosted on a web server/application server. This analysis is important because often there is no direct link pointing to the main application back end. Discovery analysis can be useful to reveal details such as web applications used for administrative purposes. In addition, it can reveal old versions of files or artifacts, such as undeleted, obsolete scripts crafted during the test/development phase or left over from maintenance.
Analysis of Error Codes (OWASP-IG-006)
During a penetration test, web applications may divulge information that is not intended to be seen by an end user. Information such as error codes can inform the tester about the technologies and products being used by the application. In many cases, error codes can be easily invoked without the need for specialist skills or tools, due to bad exception handling design and coding.
Clearly, focusing only on the web application will not produce an exhaustive test; it cannot be as comprehensive as a broader infrastructure analysis.
4.2.1 Testing: Spiders, robots, and Crawlers (OWASP-IG-001)
Brief Summary
This section describes how to test the robots.txt file.
Description of the Issue
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the Robots Exclusion Protocol of the robots.txt file in the web root directory [1].
As an example, the robots.txt file from http://www.google.com/robots.txt, taken on 24 August 2008, is quoted below:
User-agent: *
Allow: /searchhistory/
Disallow: /news?output=xhtml&
Allow: /news?output=xhtml
Disallow: /search
Disallow: /groups
Disallow: /images
...
The User-Agent directive refers to the specific web spider/robot/crawler. For example, User-Agent: Googlebot refers to the GoogleBot crawler, while User-Agent: * in the example above applies to all web spiders/robots/crawlers [2], as quoted below:
User-agent: *
The Disallow directive specifies which resources are off-limits to spiders/robots/crawlers. In the example above, directories such as the following are disallowed:
...
Disallow: /search
Disallow: /groups
Disallow: /images
...
Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.
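Although robots.txt enforces nothing, its Disallow entries hand the tester a seed list of paths the site owner preferred to keep out of search indexes. A minimal sketch using only the Python standard library (the function name is ours, not part of any robots.txt API):

```python
def extract_disallowed_paths(robots_txt: str) -> list:
    """Collect the paths named in Disallow directives of a robots.txt body.

    These paths are only hints: a crawler may ignore them, but a tester
    can use them as a seed list of resources worth inspecting manually.
    """
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()       # drop comments and whitespace
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:                                # an empty Disallow allows everything
                paths.append(path)
    return paths

sample = """User-agent: *
Allow: /searchhistory/
Disallow: /search
Disallow: /groups
Disallow: /images
"""
print(extract_disallowed_paths(sample))   # ['/search', '/groups', '/images']
```

Each returned path can then be requested directly (with wget, as shown below, or a proxy) to see whether the "hidden" resource is actually reachable.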
Black Box testing and example
wget
The robots.txt file is retrieved from the web root directory of the web server. For example, to retrieve the robots.txt from www.google.com using wget:
$ wget http://www.google.com/robots.txt
--23:59:24-- http://www.google.com/robots.txt
=> 'robots.txt'
Resolving www.google.com... 74.125.19.103, 74.125.19.104, 74.125.19.147, ...
Connecting to www.google.com|74.125.19.103|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
[ <=> ] 3,425 --.--K/s
23:59:26 (13.67MB/s) - 'robots.txt' saved [3425]
Analyze robots.txt using Google Webmaster Tools
Google provides an "Analyze robots.txt" function as part of its Google Webmaster Tools, which can assist with testing [4]. The procedure is as follows:
1. Sign into Google Webmaster Tools with your Google Account.
2. On the Dashboard, click the URL for the site you want.
3. Click Tools, and then click Analyze robots.txt.
Gray Box testing and example
The process is the same as Black Box testing above.
References
Whitepapers
[1] "The Web Robots Pages" - http://www.robotstxt.org/
[2] "How do I block or allow Googlebot?" - http://www.google.com/support/webmasters/bin/answer.py?answer=40364&query=googlebot&topic=&type=
[3] "(ISC)2 Blog: The Attack of the Spiders from the Clouds" - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html
[4] "How do I check that my robots.txt file is working as expected?" - http://www.google.com/support/webmasters/bin/answer.py?answer=35237
4.2.2 Search engine discovery/Reconnaissance (OWASP-IG-002)
Brief Summary
This section describes how to search the Google Index and remove the associated web content from the Google Cache.
Description of the Issue
Once the GoogleBot has completed crawling, it commences indexing the web page, based on tags and associated attributes such as <TITLE>, in order to return the relevant search results [1].
If the robots.txt file is not kept updated during the lifetime of the web site, then it is possible for web content not intended to be included in Google's search results to be returned. Such content must then be removed from the Google Cache.
Black Box Testing
Using the advanced "site:" search operator, it is possible to restrict Search Results to a specific domain [2].
Google provides the advanced "cache:" search operator [2], but this is equivalent to clicking the "Cached" link next to each Google search result. Hence, using the advanced "site:" search operator and then clicking "Cached" is preferred.
The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP messages [3] to assist with retrieving cached pages. An implementation of this is under development by the OWASP "Google Hacking" Project (http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project).
Example
To find the web content of owasp.org indexed by Google Cache the following Google Search Query is issued:
site:owasp.org
[Image: Google "site:" operator search results example]
To display the index.html of owasp.org as cached by Google the following Google Search Query is issued:
cache:owasp.org
[Image: Google cached page example]
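The two queries shown above can be generated for any target domain. A trivial helper (the function name and dictionary keys are ours; the operator syntax is Google's):

```python
def google_dorks(domain: str) -> dict:
    """Build the two search queries discussed above for a target domain.

    'site:' restricts search results to the given domain, and 'cache:'
    requests Google's cached copy of a page.
    """
    return {
        "indexed_content": f"site:{domain}",
        "cached_copy": f"cache:{domain}",
    }

print(google_dorks("owasp.org"))
# {'indexed_content': 'site:owasp.org', 'cached_copy': 'cache:owasp.org'}
```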
Gray Box testing and example
Gray Box testing is the same as Black Box testing above.
References
[1] "Google 101: How Google crawls, indexes, and serves the web" - http://www.google.com/support/webmasters/bin/answer.py?answer=70897
[2] "Advanced Google Search Operators" - http://www.google.com/help/operators.html
[3] "Google SOAP Search API" - http://code.google.com/apis/soapsearch/reference.html#1_2
[4] "Preventing content from appearing in Google search results" - http://www.google.com/support/webmasters/bin/topic.py?topic=8459
4.2.3 Identify application entry points (OWASP-IG-003)
Brief Summary
Enumerating the application and its attack surface is a key precursor before any thorough testing can be undertaken, as it allows the tester to identify likely areas of weakness. This section aims to help identify and map out areas within the application that should be investigated once enumeration and mapping has been completed.
Description of the Issue
Before any testing begins, always get a good understanding of the application and of how the user/browser communicates with it. As you walk through the application, pay special attention to all HTTP requests (GET and POST methods, also known as verbs), as well as to every parameter and form field passed to the application. In addition, pay attention to when GET requests are used and when POST requests are used to pass parameters to the application. It is very common that GET requests are used, but when sensitive information is passed, it is often done within the body of a POST request. Note that to see the parameters sent in a POST request, you will need to use a tool such as an intercepting proxy (for example, OWASP's WebScarab) or a browser plug-in. Within the POST request, also make special note of any hidden form fields being passed to the application, as these usually contain sensitive information, such as state information, item quantities, or item prices, that the developer never intended for you to see or change.
In the author's experience, it has been very useful to use an intercepting proxy and a spreadsheet for this stage of the testing. The proxy will keep track of every request and response between you and the application as you walk through it. Additionally, at this point, testers usually trap every request and response so that they can see exactly every header, parameter, etc. that is being passed to the application and what is being returned. This can be quite tedious at times, especially on large interactive sites (think of a banking application). However, experience will teach you what to look for, and this phase can therefore be significantly shortened.

As you walk through the application, take note of any interesting parameters in the URL, custom headers, or body of the requests/responses, and save them in your spreadsheet. The spreadsheet should include the page you requested (it might be good to also add the request number from the proxy, for future reference), the interesting parameters, the type of request (POST/GET), whether access is authenticated/unauthenticated, whether SSL is used, whether it is part of a multi-step process, and any other relevant notes.

Once you have every area of the application mapped out, go through the application and test each of the areas that you have identified, making notes of what worked and what didn't. The rest of this guide explains how to test each of these areas of interest, but this mapping must be undertaken before any of the actual testing can commence.
Below are some points of interest for all requests and responses. Within the requests section, focus on the GET and POST methods, as these make up the majority of requests. Note that other methods, such as PUT and DELETE, can be used. Often, these rarer requests, if allowed, can expose vulnerabilities. There is a special section in this guide dedicated to testing these HTTP methods.
Requests:
Identify where GETs are used and where POSTs are used.
Identify all parameters used in a POST request (these are in the body of the request).
Within the POST request, pay special attention to any hidden parameters. When a POST is sent, all the form fields (including hidden parameters) are sent in the body of the HTTP message to the application. These typically aren't seen unless you use a proxy or view the HTML source code. In addition, the next page you see, its data, and your access can all differ depending on the value of the hidden parameter(s).
Identify all parameters used in a GET request (i.e., in the URL), in particular the query string (usually after a ? mark).
Identify all the parameters of the query string. These are usually in a name=value pair format, such as foo=bar. Also note that many parameters can appear in one query string, separated by &, ~, :, or any other special character or encoding.
A special note when it comes to identifying multiple parameters in one string or within a POST request is that some or all of the parameters will be needed to execute your attacks. You need to identify all of the parameters (even if encoded or encrypted) and identify which ones are processed by the application. Later sections of the guide explain how to test these parameters; at this point, just make sure you identify each one of them.
Also pay attention to any additional or custom headers not typically seen (such as debug=False).
Responses:
Identify where new cookies are set (Set-Cookie header), modified, or added to.
Identify any redirects (300 HTTP status codes), 400 status codes (in particular 403 Forbidden), and 500 internal server errors during normal responses (i.e., unmodified requests).
Also note where any interesting headers are used. For example, "Server: BIG-IP" indicates that the site is load balanced. Thus, if a site is load balanced and one server is incorrectly configured, you might have to make multiple requests to reach the vulnerable server, depending on the type of load balancing used.
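The spreadsheet described earlier can be kept as structured records and exported to CSV. The sketch below is our own illustration (the field names are not part of the guide's methodology, just shorthand for the items the text suggests recording):

```python
import csv
import io
from dataclasses import dataclass, asdict

@dataclass
class EntryPoint:
    # One row of the entry-point spreadsheet; extend the fields to taste.
    page: str
    method: str            # GET or POST
    parameters: str        # interesting parameter names
    authenticated: bool
    ssl: bool
    notes: str = ""

def to_csv(rows) -> str:
    # Serialize the collected entry points as CSV for the spreadsheet.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(EntryPoint.__dataclass_fields__))
    writer.writeheader()
    for row in rows:
        writer.writerow(asdict(row))
    return buf.getvalue()

rows = [
    EntryPoint("/shoppingApp/buyme.asp", "GET",
               "CUSTOMERID ITEM PRICE IP", True, True,
               "price passed client-side"),
]
print(to_csv(rows))
```

Keeping one row per request makes it easy to cross-reference proxy request numbers later, when each identified area is actually tested.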
Black Box testing and example
Testing for application entry points: the following are two examples of how to check for application entry points.
EXAMPLE 1:
This example shows a GET request that would purchase an item from an online shopping application.
Example 1 of a simplified GET request:
GET https://x.x.x.x/shoppingApp/buyme.asp?CUSTOMERID=100&ITEM=z101a&PRICE=62.50&IP=x.x.x.x
Host: x.x.x.x
Cookie: SESSIONID=Z29vZCBqb2IgcGFkYXdhIG15IHVzZXJuYW1lIGlzIGZvbyBhbmQgcGFzc3dvcmQgaXMgYmFy
Result Expected:
Here you would note all the parameters of the request such as CUSTOMERID, ITEM, PRICE, IP, and the Cookie (which could just be encoded parameters or used for session state).
EXAMPLE 2:
This example shows a POST request that would log you into an application.
Example 2 of a simplified POST request:
POST https://x.x.x.x/KevinNotSoGoodApp/authenticate.asp?service=login
Host: x.x.x.x
Cookie: SESSIONID=dGhpcyBpcyBhIGJhZCBhcHAgdGhhdCBzZXRzIHByZWRpY3RhYmxlIGNvb2tpZXMgYW5kIG1pbmUgaXMgMTIzNA==
CustomCookie=00my00trusted00ip00is00x.x.x.x00
Body of the POST message:
user=admin&pass=pass123&debug=true&fromtrustIP=true
Result Expected:
In this example, you would note all the parameters as before, but notice that the parameters are passed in the body of the message and not in the URL. Additionally, note that there is a custom cookie being used.
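Both example requests carry their parameters in standard application/x-www-form-urlencoded form, so they can be enumerated with Python's standard library. A sketch using the values from the two examples above:

```python
from urllib.parse import urlsplit, parse_qs

# The GET request from Example 1 and the POST body from Example 2.
get_url = ("https://x.x.x.x/shoppingApp/buyme.asp"
           "?CUSTOMERID=100&ITEM=z101a&PRICE=62.50&IP=x.x.x.x")
post_body = "user=admin&pass=pass123&debug=true&fromtrustIP=true"

# parse_qs maps each parameter name to a list of its values.
get_params = parse_qs(urlsplit(get_url).query)
post_params = parse_qs(post_body)

print(sorted(get_params))     # ['CUSTOMERID', 'IP', 'ITEM', 'PRICE']
print(post_params["debug"])   # ['true']
```

Note that parse_qs only splits on the standard & separator; parameters packed with ~, :, or custom encodings (as mentioned earlier) still have to be teased apart by hand.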
Gray Box testing and example
Testing for application entry points via a gray box methodology would consist of everything already identified above, with one addition: any external sources from which the application receives and processes data (such as SNMP traps, syslog messages, SMTP, or SOAP messages from other servers). If there are any such external sources of input into the application, then a meeting with the application developers could identify any functions that accept or expect user input and how they are formatted. For example, the developer could help in understanding how to formulate a correct SOAP request that the application would accept, and where the web service resides (if the web service or any other function hasn't already been identified during the black box testing).
References
Whitepapers
RFC 2616 "Hypertext Transfer Protocol -- HTTP/1.1" - http://tools.ietf.org/html/rfc2616
Tools
Intercepting Proxy:
OWASP: WebScarab - https://www.owasp.org/index.php/OWASP_WebScarab_Project
Dafydd Stuttard: Burp Proxy - http://portswigger.net/proxy/
MileSCAN: Paros Proxy - http://www.parosproxy.org/download.shtml
Browser Plug-in:
"TamperIE" for Internet Explorer - http://www.bayden.com/TamperIE/
Adam Judson: "Tamper Data" for Firefox - https://addons.mozilla.org/en-US/firefox/addon/966
4.2.4 Testing for Web Application Fingerprint (OWASP-IG-004)
Brief Summary
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.
Description of the Issue
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands, and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command; rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.
Black Box testing and example
The simplest and most basic form of identifying a Web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. Consider the following HTTP Request-Response:
$ nc 202.41.76.251 80
HEAD / HTTP/1.0
HTTP/1.1 200 OK
Date: Mon, 16 Jun 2003 02:53:29 GMT
Server: Apache/1.3.3 (Unix) (Red Hat/Linux)
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT
ETag: "1813-49b-361b4df6"
Accept-Ranges: bytes
Content-Length: 1179
Connection: close
Content-Type: text/html
From the Server field, we understand that the server is likely Apache version 1.3.3, running on a Red Hat Linux operating system.
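A banner of this form can be split mechanically. The sketch below parses the common product/version layout and returns None for banners that do not match (e.g., obfuscated ones); since the banner is just a string the server chooses to send, the parsed result is a claim, not proof:

```python
import re

def parse_server_banner(banner: str):
    """Split a Server banner like 'Apache/1.3.3 (Unix) (Red Hat/Linux)'
    into product, version, and a trailing comment.

    Returns None when the banner does not follow the common
    product/version form. Remember that the banner can be forged or
    stripped by the administrator, so treat the result as a hint only.
    """
    m = re.match(r"^(?P<product>[^/\s]+)/(?P<version>[\d.]+)\s*(?P<comment>.*)$",
                 banner)
    return m.groupdict() if m else None

print(parse_server_banner("Apache/1.3.3 (Unix) (Red Hat/Linux)"))
# {'product': 'Apache', 'version': '1.3.3', 'comment': '(Unix) (Red Hat/Linux)'}
```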
Four examples of the HTTP response headers are shown below.
From an Apache 1.3.23 server:
HTTP/1.1 200 OK
Date: Sun, 15 Jun 2003 17:10:49 GMT
Server: Apache/1.3.23
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT
ETag: 32417-c4-3e5d8a83
Accept-Ranges: bytes
Content-Length: 196
Connection: close
Content-Type: text/HTML
From a Microsoft IIS 5.0 server:
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Expires: Yours, 17 Jun 2003 01:41:33 GMT
Date: Mon, 16 Jun 2003 01:41:33 GMT
Content-Type: text/HTML
Accept-Ranges: bytes
Last-Modified: Wed, 28 May 2003 15:32:21 GMT
ETag: b0aac0542e25c31:89d
Content-Length: 7369
From a Netscape Enterprise 4.1 server:
HTTP/1.1 200 OK
Server: Netscape-Enterprise/4.1
Date: Mon, 16 Jun 2003 06:19:04 GMT
Content-type: text/HTML
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT
Content-length: 57
Accept-ranges: bytes
Connection: close
From a SunONE 6.1 server:
HTTP/1.1 200 OK
Server: Sun-ONE-Web-Server/6.1
Date: Tue, 16 Jan 2007 14:53:45 GMT
Content-length: 1186
Content-type: text/html
Date: Tue, 16 Jan 2007 14:50:31 GMT
Last-Modified: Wed, 10 Jan 2007 09:58:26 GMT
Accept-Ranges: bytes
Connection: close
However, this testing methodology is limited in accuracy. There are several techniques that allow a web site to obfuscate or modify the server banner string. For example, we could obtain the following answer:
HTTP/1.1 403 Forbidden
Date: Mon, 16 Jun 2003 02:41:27 GMT
Server: Unknown-Webserver/1.0
Connection: close
Content-Type: text/HTML; charset=iso-8859-1
In this case, the server field of that response is obfuscated: we cannot know what type of web server is running.
Protocol behaviour
More refined techniques take into consideration various characteristics of the several web servers available on the market. Below we list some methodologies that allow us to deduce the type of web server in use.
HTTP header field ordering
The first method consists of observing the ordering of the several headers in the response. Every web server has an inner ordering of the headers. Consider the following answers as an example:
Response from Apache 1.3.23
$ nc apache.example.com 80
HEAD / HTTP/1.0
HTTP/1.1 200 OK
Date: Sun, 15 Jun 2003 17:10:49 GMT
Server: Apache/1.3.23
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT
ETag: 32417-c4-3e5d8a83
Accept-Ranges: bytes
Content-Length: 196
Connection: close
Content-Type: text/HTML
Response from IIS 5.0
$ nc iis.example.com 80
HEAD / HTTP/1.0
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Content-Location: http://iis.example.com/Default.htm
Date: Fri, 01 Jan 1999 20:13:52 GMT
Content-Type: text/HTML
Accept-Ranges: bytes
Last-Modified: Fri, 01 Jan 1999 20:13:52 GMT
ETag: W/e0d362a4c335be1:ae1
Content-Length: 133
Response from Netscape Enterprise 4.1
$ nc netscape.example.com 80
HEAD / HTTP/1.0
HTTP/1.1 200 OK
Server: Netscape-Enterprise/4.1
Date: Mon, 16 Jun 2003 06:01:40 GMT
Content-type: text/HTML
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT
Content-length: 57
Accept-ranges: bytes
Connection: close
Response from a SunONE 6.1
$ nc sunone.example.com 80
HEAD / HTTP/1.0
HTTP/1.1 200 OK
Server: Sun-ONE-Web-Server/6.1
Date: Tue, 16 Jan 2007 15:23:37 GMT
Content-length: 0
Content-type: text/html
Date: Tue, 16 Jan 2007 15:20:26 GMT
Last-Modified: Wed, 10 Jan 2007 09:58:26 GMT
Connection: close
We can notice that the ordering of the Date field and the Server field differs between Apache, Netscape Enterprise, and IIS.
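The Date/Server ordering observation can be mechanized. A minimal sketch, where the two-entry signature table only reflects the four example servers quoted above (real deployments would need a richer database):

```python
# Signature table derived from the responses quoted above: Apache and
# Netscape Enterprise emit Date before Server, while IIS and SunONE
# emit Server before Date.
SIGNATURES = {
    ("Date", "Server"): "Apache / Netscape Enterprise style",
    ("Server", "Date"): "IIS / SunONE style",
}

def date_server_order(raw_head: str) -> tuple:
    # Return the relative order in which the Date and Server header
    # fields appear in a raw HTTP response head.
    seen = []
    for line in raw_head.splitlines():
        name = line.split(":", 1)[0].strip()
        if name in ("Date", "Server") and name not in seen:
            seen.append(name)
    return tuple(seen)

apache_head = """HTTP/1.1 200 OK
Date: Sun, 15 Jun 2003 17:10:49 GMT
Server: Apache/1.3.23
Content-Type: text/html"""

print(SIGNATURES[date_server_order(apache_head)])
# Apache / Netscape Enterprise style
```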
Malformed requests test
Another useful test to execute involves sending malformed requests, or requests for nonexistent pages, to the server. Consider the following HTTP responses.
Response from Apache 1.3.23
$ nc apache.example.com 80
GET / HTTP/3.0
HTTP/1.1 400 Bad Request
Date: Sun, 15 Jun 2003 17:12:37 GMT
Server: Apache/1.3.23
Connection: close
Transfer-Encoding: chunked
Content-Type: text/HTML; charset=iso-8859-1
Response from IIS 5.0
$ nc iis.example.com 80
GET / HTTP/3.0
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Content-Location: http://iis.example.com/Default.htm
Date: Fri, 01 Jan 1999 20:14:02 GMT
Content-Type: text/HTML
Accept-Ranges: bytes
Last-Modified: Fri, 01 Jan 1999 20:14:02 GMT
ETag: W/e0d362a4c335be1:ae1
Content-Length: 133
Response from Netscape Enterprise 4.1
$ nc netscape.example.com 80
GET / HTTP/3.0
HTTP/1.1 505 HTTP Version Not Supported
Server: Netscape-Enterprise/4.1
Date: Mon, 16 Jun 2003 06:04:04 GMT
Content-length: 140
Content-type: text/HTML
Connection: close
Response from a SunONE 6.1
$ nc sunone.example.com 80
GET / HTTP/3.0
HTTP/1.1 400 Bad request
Server: Sun-ONE-Web-Server/6.1
Date: Tue, 16 Jan 2007 15:25:00 GMT
Content-length: 0
Content-type: text/html
Connection: close
We notice that every server answers in a different way. The answer also differs between versions of the same server. Similar observations can be made when we create requests with a non-existent protocol. Consider the following responses:
Response from Apache 1.3.23
$ nc apache.example.com 80
GET / JUNK/1.0
HTTP/1.1 200 OK
Date: Sun, 15 Jun 2003 17:17:47 GMT
Server: Apache/1.3.23
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT
ETag: 32417-c4-3e5d8a83
Accept-Ranges: bytes
Content-Length: 196
Connection: close
Content-Type: text/HTML
Response from IIS 5.0
$ nc iis.example.com 80
GET / JUNK/1.0
HTTP/1.1 400 Bad Request
Server: Microsoft-IIS/5.0
Date: Fri, 01 Jan 1999 20:14:34 GMT
Content-Type: text/HTML
Content-Length: 87
Response from Netscape Enterprise 4.1
$ nc netscape.example.com 80
GET / JUNK/1.0
Bad request
Bad request
Your browser sent a query this server could not understand.
Response from a SunONE 6.1
$ nc sunone.example.com 80
GET / JUNK/1.0
Bad request
Bad request
Your browser sent a query this server could not understand.
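The version-probe behaviours from the malformed requests test can likewise be reduced to a lookup table. The sketch below covers only the "GET / HTTP/3.0" responses of the four example servers; a real fingerprinting tool would combine many such probes:

```python
def classify_by_malformed_response(status_line: str) -> str:
    """Map the status line returned for a 'GET / HTTP/3.0' request onto
    the behaviours quoted above.

    Illustrative only: the table reflects just the four example servers
    and their versions, and other deployments may behave differently.
    """
    behaviours = {
        "http/1.1 400 bad request": "Apache 1.3.x / SunONE 6.1 style",
        "http/1.1 200 ok": "IIS 5.0 style (ignores the bogus version)",
        "http/1.1 505 http version not supported": "Netscape Enterprise 4.1 style",
    }
    # Normalize case, since e.g. SunONE answers '400 Bad request'.
    return behaviours.get(status_line.strip().lower(), "unknown behaviour")

print(classify_by_malformed_response("HTTP/1.1 505 HTTP Version Not Supported"))
# Netscape Enterprise 4.1 style
```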
Automated Testing
The tests to carry out in order to accurately fingerprint a web server can be many. Luckily, there are tools that automate these tests. httprint is one such tool. It has a signature dictionary that allows it to recognize the type and version of the web server in use. An example of running httprint is shown below:
[Image: httprint output example]
Online Testing
An example of an online tool that often delivers a lot of information about a target web server is Netcraft. With this tool, we can retrieve information about the operating system, the web server used, server uptime, the netblock owner, and the history of changes related to the web server and operating system. An example is shown below:
[Image: Netcraft report example]
References
Whitepapers
Saumil Shah: "An Introduction to HTTP fingerprinting" - http://net-square.com/httprint/httprint_paper.html
Tools
httprint - http://net-square.com/httprint/index.shtml
Netcraft - http://www.netcraft.com
4.2.5 Application Discovery (OWASP-IG-005)
Brief Summary
A paramount step in testing for web application vulnerabilities is to find out which particular applications are hosted on a web server. Many applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to compromise data. In addition, many applications are often misconfigured or not updated, due to the perception that they are only used "internally" and therefore no threat exists.
Description of the Issue
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites/applications whose symbolic names resolve to the same IP address (this scenario is not limited to hosting environments, but also applies to ordinary corporate environments).
As a security professional, you are sometimes given a set of IP addresses (or possibly just one) as a target to test. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case, it is expected that such an assignment would test all web applications accessible through this target (and possibly other things). The problem is that the given IP address may host an HTTP service on port 80, but if you access it by specifying the IP address (which is all you know), it reports "No web server configured at this address" or a similar message. Yet that system could "hide" a number of web applications associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of the applications or miss some of them simply because you do not notice them.

Sometimes the target specification is richer: maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e., it could omit some symbolic names, and the client may not even be aware of that (this is more likely to happen in large organizations).
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfiguration), or intentionally (for example, unadvertised administrative interfaces).
To address these issues, it is necessary to perform web application discovery.
Black Box testing and example
Web application discovery
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two. This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., test only the application located at the URL http://www.example.com/), the assessment should strive to be as comprehensive as possible in scope, i.e., it should identify all the applications accessible through the given target. In the following examples, we will examine a few techniques that can be employed to achieve this goal.
Note: Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as 192.168.1.100), which, unless indicated otherwise, represent generic IP addresses and are used only for anonymity purposes.
There are three factors influencing how many applications are related to a given DNS name (or an IP address):
1. Different base URL. The obvious entry point for a web application is www.example.com, i.e., with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, even though this is the most common situation, there is nothing forcing the application to start at "/". For example, the same symbolic name may be associated with three web applications such as: http://www.example.com/url1, http://www.example.com/url2, and http://www.example.com/url3. In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be hidden unless we explicitly know how to reach them, i.e., we know url1, url2, or url3. There is usually no need to publish web applications in this way, unless you don't want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn't mean that these applications are secret, just that their existence and location is not explicitly advertised.
2. Non-standard ports. While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.
3. Virtual hosts. DNS allows us to associate a single IP address with one or more symbolic names. For example, the IP address 192.168.1.100 might be associated with the DNS names www.example.com, helpdesk.example.com, and webmail.example.com (it is not actually necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 Host: header [1].
We would not suspect the existence of other web applications in addition to the obvious www.example.com, unless we know of helpdesk.example.com and webmail.example.com.
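To make the mechanism concrete, the following Python sketch builds the raw HTTP/1.1 requests that would select each virtual host on the same IP address. This is an illustrative sketch, not a discovery tool; the example.com names are the placeholders used above.

```python
# Sketch: the same IP can serve different sites depending on the HTTP/1.1
# Host: header. Building the raw request makes the mechanism explicit.

def build_request(host: str, path: str = "/") -> bytes:
    """Build a minimal HTTP/1.1 GET request for a given virtual host."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

# The same socket to 192.168.1.100:80 could carry any of these requests;
# the server picks the virtual host from the Host: header alone.
for name in ("www.example.com", "helpdesk.example.com", "webmail.example.com"):
    print(build_request(name).decode("ascii").splitlines()[1])
```

Sending each request over a plain TCP connection to the target IP (and comparing the responses) is one manual way to confirm that distinct virtual hosts serve distinct content.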
Approaches to address issue 1 - non-standard URLs
There is no way to fully ascertain the existence of non-standard-named web applications. Being non-standard, there is no fixed criterion governing the naming convention; however, there are a number of techniques that the tester can use to gain some additional insight. First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Second, these applications may be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such hidden applications on www.example.com, we could do a bit of googling using the site operator and examining the result of a query for site:www.example.com. Among the returned URLs there could be one pointing to such a non-obvious application. Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from URLs such as https://www.example.com/webmail, https://webmail.example.com/, or https://mail.example.com/. The same holds for administrative interfaces, which may be published at hidden URLs (for example, a Tomcat administrative interface) and yet not referenced anywhere. So, doing a bit of dictionary-style searching (or intelligent guessing) could yield some results. Vulnerability scanners may help in this respect.
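The dictionary-style guessing described above can be sketched as a small Python helper. The word lists below are illustrative assumptions only; a real engagement would use a much larger dictionary, and each candidate URL would then be probed for a response.

```python
# Sketch: generate dictionary-style candidate URLs for non-published
# applications (webmail front ends, admin consoles, etc.).
# The word lists are illustrative, not exhaustive.

CANDIDATE_PATHS = ["webmail", "mail", "admin", "manager/html", "phpmyadmin"]
CANDIDATE_HOSTS = ["webmail", "mail", "admin"]

def candidate_urls(domain: str, scheme: str = "https") -> list[str]:
    """Combine path guesses on the main host with host-name guesses."""
    urls = [f"{scheme}://www.{domain}/{p}" for p in CANDIDATE_PATHS]
    urls += [f"{scheme}://{h}.{domain}/" for h in CANDIDATE_HOSTS]
    return urls

for url in candidate_urls("example.com"):
    print(url)
```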
Approaches to address issue 2 - non-standard ports
It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space. For example, the following command will look up, with a TCP connect scan, all open ports on IP 192.168.1.100 and will try to determine what services are bound to them (only essential switches are shown; nmap features a broad set of options, whose discussion is out of scope):
nmap -PN -sT -sV -p0-65535 192.168.1.100
It is sufficient to examine the output and look for http or the indication of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:
Interesting ports on 192.168.1.100:
(The 65527 ports scanned but not shown below are in state: closed)
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 3.5p1 (protocol 1.99)
80/tcp open http Apache httpd 2.0.40 ((Red Hat Linux))
443/tcp open ssl OpenSSL
901/tcp open http Samba SWAT administration server
1241/tcp open ssl Nessus security scanner
3690/tcp open unknown
8000/tcp open http-alt?
8080/tcp open http Apache Tomcat/Coyote JSP engine 1.1
From this example, we see that:
There is an Apache http server running on port 80.
It looks like there is an https server on port 443 (but this needs to be confirmed, for example, by visiting https://192.168.1.100 with a browser).
On port 901 there is a Samba SWAT web interface.
The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon.
Port 3690 features an unspecified service (nmap gives back its fingerprint - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents).
Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look:
$ telnet 192.168.1.100 8000
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
GET / HTTP/1.0
HTTP/1.0 200 OK
pragma: no-cache
Content-Type: text/html
Server: MX4J-HTTPD/1.0
expires: now
Cache-Control: no-cache
...
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (however, HEAD requests may not be honored by all servers). Finally, the nmap output shows Apache Tomcat running on port 8080.
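The manual telnet check above can be automated. The following Python sketch, using only the standard library, sends a GET to an arbitrary port and reports the status code and Server header if the port speaks HTTP; the host and port in the usage comment are placeholders from the example above.

```python
# Sketch: programmatic version of the telnet check -- send a GET to a
# port and see whether an HTTP response (and Server: header) comes back.
import http.client

def probe_http(host: str, port: int, timeout: float = 5.0):
    """Return (status, server_header) if the port speaks HTTP, else None."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Server")
    except (OSError, http.client.HTTPException):
        return None
    finally:
        conn.close()

# Example (placeholder target, matching the transcript above):
# probe_http("192.168.1.100", 8000) might return (200, "MX4J-HTTPD/1.0")
```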
The same task may be performed by vulnerability scanners, but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided it is instructed to scan all the ports), and, with respect to nmap, adds a number of tests for known web server vulnerabilities as well as for the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).
Approaches to address issue 3 - virtual hosts
There are a number of techniques which may be used to identify DNS names associated with a given IP address x.y.z.t.
DNS zone transfers
This technique has limited use nowadays, given that zone transfers are largely not honored by DNS servers. However, it may be worth a try. First of all, we must determine the name servers serving x.y.z.t. If a symbolic name is known for x.y.z.t (let it be www.example.com), its name servers can be determined by means of tools such as nslookup, host, or dig, by requesting DNS NS records. If no symbolic names are known for x.y.z.t, but your target definition contains at least one symbolic name, you may try to apply the same process and query the name server of that name (hoping that x.y.z.t will be served by that name server as well). For example, if your target consists of the IP address x.y.z.t and the name mail.example.com, determine the name servers for the domain example.com.
The following example shows how to identify the name servers for www.owasp.org by using the host command:
$ host -t ns www.owasp.org
www.owasp.org is an alias for owasp.org.
owasp.org name server ns1.secure.net.
owasp.org name server ns2.secure.net.
A zone transfer may now be requested to the name servers for domain example.com. If you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious www.example.com and the not-so-obvious helpdesk.example.com and webmail.example.com (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated.
Trying to request a zone transfer for owasp.org from one of its name servers:
$ host -l www.owasp.org ns1.secure.net
Using domain server:
Name: ns1.secure.net
Address: 192.220.124.10#53
Aliases:
Host www.owasp.org not found: 5(REFUSED)
; Transfer failed.
DNS inverse queries
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic name maps, which is not guaranteed.
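A minimal sketch of such an inverse query, using the Python standard library resolver (which issues the PTR lookup on our behalf). As noted above, an answer is not guaranteed; the address used here is from the documentation range and is a placeholder.

```python
# Sketch: a PTR (inverse) lookup via the stdlib resolver. Many addresses
# have no PTR record, so the lookup may simply fail.
import socket

def reverse_lookup(ip: str):
    """Return the PTR name for ip, or None if no mapping exists."""
    try:
        name, _aliases, _addrs = socket.gethostbyaddr(ip)
        return name
    except OSError:  # socket.herror / socket.gaierror derive from OSError
        return None

print(reverse_lookup("203.0.113.7"))  # documentation-range IP: likely None
```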
Web-based DNS searches
This kind of search is akin to a DNS zone transfer, but relies on web-based services that enable name-based searches on DNS. One such service is the Netcraft Search DNS service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as example.com, and then check whether the names you obtained are pertinent to the target you are examining.
Reverse-IP services
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.
Domain tools reverse IP: http://www.domaintools.com/reverse-ip/ (requires free membership)
MSN search: http://search.msn.com syntax: "ip:x.x.x.x" (without the quotes)
Webhosting info: http://whois.webhosting.info/ syntax: http://whois.webhosting.info/x.x.x.x
DNSstuff: http://www.dnsstuff.com/ (multiple services available)
msnpawn: http://net-square.com/msnpawn/index.shtml (multiple queries on domains and IP addresses, requires installation)
tomDNS: http://www.tomdns.net/ (some services are still private at the time of writing)
SEOlogs.com: http://www.seologs.com/ip-domains.html (reverse-IP/domain lookup)
The following example shows the result of a query to one of the above reverse-IP services for 216.48.3.18, the IP address of www.owasp.org. Three additional non-obvious symbolic names mapping to the same address have been revealed.
[Image: reverse-IP lookup results for www.owasp.org]
Googling
Following information gathering with the previous techniques, you can rely on search engines to possibly refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. For instance, considering the previous example regarding www.owasp.org, you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of webgoat.org, webscarab.com, and webscarab.net. Googling techniques are explained in Testing: Spiders, Robots, and Crawlers (OWASP-IG-001).
Gray Box testing and example
Not applicable. The methodology remains the same as listed in Black Box testing no matter how much information you start with.
References
Whitepapers
[1] RFC 2616 Hypertext Transfer Protocol -- HTTP/1.1: http://tools.ietf.org/html/rfc2616
Tools
DNS lookup tools such as nslookup, dig or similar.
Port scanners (such as nmap: http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/).
Search engines (Google, and other major engines).
Specialized DNS-related web-based search service: see text.
nmap - http://www.insecure.org
Nessus Vulnerability Scanner - http://www.nessus.org
4.2.6 Analysis of Error Codes (OWASP-IG-006)
Brief Summary
Often during a penetration test on web applications, we come up against many error codes generated by applications or web servers. It's possible to cause these errors to be displayed by using particular requests, either specially crafted with tools or created manually. These codes are very useful to penetration testers because they reveal a lot of information about databases, bugs, and other technological components directly linked with web applications. Within this section we'll analyze the more common codes (error messages) and their relevance to a vulnerability assessment. The most important aspect of this activity is to focus one's attention on these errors, treating them as a collection of information that will aid the next steps of our analysis. A good collection can facilitate assessment efficiency by decreasing the overall time taken to perform the penetration test.
Description of the Issue
A common error that we can see during our search is the HTTP 404 Not Found. Often this error code provides useful details about the underlying web server and associated components. For example:
Not Found
The requested URL /page.html was not found on this server.
Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g DAV/2 PHP/5.1.2 Server at localhost Port 80
This error message can be generated by requesting a non-existent URL. After the common message that shows a page not found, there is information about the web server version, OS, modules, and other products used. This information can be very important for identifying the OS and the application types and versions in use.
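Banners like the one above can be split mechanically into product/version pairs for the tester's notes. The following Python sketch does this with a regular expression; the expression assumes, as a simplification, that products always appear as name/version tokens.

```python
# Sketch: split an Apache-style server banner (as leaked in the 404 page
# above) into its product/version components for fingerprinting notes.
import re

BANNER = "Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g DAV/2 PHP/5.1.2"

def parse_banner(banner: str) -> dict:
    """Map each advertised product to its version string."""
    return dict(re.findall(r"([\w.\-]+)/([\w.\-]+)", banner))

print(parse_banner(BANNER))
```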
Web server errors aren't the only useful output returned requiring security analysis. Consider the next example error message:
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)
[DBNETLIB][ConnectionOpen(Connect())] - SQL server does not exist or access denied
What happened? We will explain step-by-step below.
In this example, the 80004005 is a generic IIS error code which indicates that it could not establish a connection to its associated database. In many cases, the error message will detail the type of the database. This will often indicate the underlying operating system by association. With this information, the penetration tester can plan an appropriate strategy for the security test.
By manipulating the variables that are passed to the database connect string, we can invoke more detailed errors.
Microsoft OLE DB Provider for ODBC Drivers error '80004005'
[Microsoft][ODBC Access 97 ODBC driver Driver]General error Unable to open registry key 'DriverId'
In this example, we can see a generic error in the same situation which reveals the type and version of the associated database system and a dependence on Windows operating system registry key values.
Now we will look at a practical example with a security test against a web application that loses its link to its database server and does not handle the exception in a controlled manner. This could be caused by a database name resolution issue, processing of unexpected variable values, or other network problems.
Consider the scenario where we have a database administration web portal, which can be used as a front end GUI to issue database queries, create tables, and modify database fields. During the POST of the logon credentials, the following error message is presented to the penetration tester. The message indicates the presence of a MySQL database server:
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host
If we see in the HTML code of the logon page the presence of a hidden field with a database IP, we can try to change this value in the URL to the address of a database server under the penetration tester's control, in an attempt to fool the application into thinking that the logon was successful.
Another example: knowing the database server that services a web application, we can take advantage of this information to carry out a SQL injection attack tailored to that kind of database, or a persistent XSS test.
Error Handling in IIS and ASP .net
ASP.NET is a common framework from Microsoft used for developing web applications, and IIS is one of the most commonly used web servers. Errors occur in all applications; developers try to trap most of them, but it is almost impossible to cover each and every exception.
IIS uses a set of custom error pages, generally found in c:\winnt\help\iishelp\common, to display errors like '404 page not found' to the user. These default pages can be changed, and custom errors can be configured for the IIS server. When IIS receives a request for an aspx page, the request is passed on to the .NET framework.
There are various ways by which errors can be handled in the .NET framework. Errors are handled at three places in ASP.NET:
1. Inside the Web.config customErrors section
2. Inside the global.asax Application_Error sub
3. At the aspx or associated codebehind page, in the Page_Error sub
Handling errors using web.config
mode="On" will turn on custom errors. mode="RemoteOnly" will show custom errors to remote web application users only; a user accessing the server locally will be presented with the complete stack trace, and custom errors will not be shown to him.
All errors, except those explicitly specified, will cause a redirection to the resource specified by defaultRedirect, i.e., myerrorpagedefault.aspx. A status code of 404 will be handled by myerrorpagefor404.aspx.
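A minimal sketch of the web.config customErrors section just described. The myerrorpagedefault.aspx and myerrorpagefor404.aspx names are those mentioned above; the 500 handler line is an illustrative addition.

```xml
<configuration>
  <system.web>
    <!-- RemoteOnly: remote users get custom pages, local users the stack trace -->
    <customErrors mode="RemoteOnly" defaultRedirect="myerrorpagedefault.aspx">
      <error statusCode="404" redirect="myerrorpagefor404.aspx" />
      <!-- illustrative: a dedicated page for server errors -->
      <error statusCode="500" redirect="myerrorpagefor500.aspx" />
    </customErrors>
  </system.web>
</configuration>
```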
Handling errors in Global.asax
When an error occurs, the Application_Error sub is called. A developer can write code for error handling / page redirection in this sub.
Private Sub Application_Error (ByVal sender As Object, ByVal e As System.EventArgs)
Handles MyBase.Error
End Sub
Handling errors in Page_Error sub
This is similar to application error.
Private Sub Page_Error (ByVal sender As Object, ByVal e As System.EventArgs)
Handles MyBase.Error
End Sub
Error hierarchy in ASP .net
The Page_Error sub will be processed first, followed by the global.asax Application_Error sub, and finally the customErrors section in the web.config file.
Information Gathering on web applications with server-side technology is quite difficult, but the information discovered can be useful for the correct execution of an attempted exploit (for example, SQL injection or Cross Site Scripting (XSS) attacks) and can reduce false positives.
How to test for ASP.net and IIS Error Handling
Fire up your browser and type a random page name:
http://www.mywebserver.com/anyrandomname.asp
If the server returns
The page cannot be found
HTTP 404 - File not found
Internet Information Services
it means that IIS custom errors are not configured. Please note the .asp extension.
Also test for .NET custom errors. Type a random page name with an .aspx extension in your browser:
http://www.mywebserver.com/anyrandomname.aspx
If the server returns
Server Error in '/' Application.
--------------------------------------------------------------------------------
The resource cannot be found.
Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
it means that custom errors for .NET are not configured.
Black Box testing and example
Test:
telnet <target host> 80
GET / HTTP/1.1
Result:
HTTP/1.1 404 Not Found
Date: Sat, 04 Nov 2006 15:26:48 GMT
Server: Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g
Content-Length: 310
Connection: close
Content-Type: text/html; charset=iso-8859-1
Test:
1. network problems
2. bad configuration about host database address
Result:
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host
Test:
1. Authentication failed
2. Credentials not inserted
Result:
Firewall version used for authentication:
Error 407
FW-1 at <firewall name>: Unauthorized to access the document.
Authorization is needed for FW-1.
The authentication required by FW-1 is: unknown.
Reason for failure of last attempt: no user
Gray Box testing and example
Test:
Enumeration of the directories with access denied:
http://<host>/<directory>
Result:
Directory Listing Denied
This Virtual Directory does not allow contents to be listed.
Forbidden
You don't have permission to access / on this server.
References
Whitepaper:
[1] RFC 2616 Hypertext Transfer Protocol -- HTTP/1.1: http://www.ietf.org/rfc/rfc2616.txt
4.3 Configuration Management Testing
Often analysis of the infrastructure and topology architecture can reveal a great deal about a web application. Information such as source code, HTTP methods permitted, administrative functionality, authentication methods and infrastructural configurations can be obtained.
4.3.1 SSL/TLS Testing (OWASP-CM-001)
SSL and TLS are two protocols that provide, with the support of cryptography, secure channels for the protection, confidentiality, and authentication of the information being transmitted. Considering the criticality of these security implementations, it is important to verify the usage of a strong cipher algorithm and its proper implementation.
4.3.2 DB Listener Testing (OWASP-CM-002)
During the configuration of a database server, many DB administrators do not adequately consider the security of the DB listener component. If insecurely configured and probed with manual or automated techniques, the listener could reveal sensitive data as well as configuration settings or running database instances. Information revealed will often be useful to a tester, serving as input to more impacting follow-on tests.
4.3.3 Infrastructure Configuration Management Testing (OWASP-CM-003)
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can include hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application. In fact, it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and (almost) unimportant problems may evolve into severe risks for another application on the same server. In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.
4.3.4 Application Configuration Management Testing (OWASP-CM-004)
Web applications hide some information that is usually not considered during the development or configuration of the application itself. This data can be discovered in the source code, in the log files, or in the default error codes of the web servers. A correct approach to this topic is fundamental during a security assessment.
4.3.5 Testing for File Extensions Handling (OWASP-CM-005)
The file extensions present in a web server or a web application make it possible to identify the technologies which compose the target application, e.g., jsp and asp extensions. File extensions can also expose additional systems connected to the application.
4.3.6 Old, Backup and Unreferenced Files (OWASP-CM-006)
Redundant, readable, and downloadable files on a web server, such as old, backup, and renamed files, are a big source of information leakage. It is necessary to verify the presence of these files because they may contain parts of source code, installation paths, as well as passwords for applications and/or databases.
4.3.7 Infrastructure and Application Admin Interfaces (OWASP-CM-007)
Many applications use a common path for administrative interfaces, which can be used to guess or brute force administrative passwords. This test aims to find admin interfaces and understand whether it is possible to exploit them to access admin functionality.
4.3.8 Testing for HTTP Methods and XST (OWASP-CM-008)
In this test we check that the web server is not configured to allow potentially dangerous HTTP commands (methods) and that Cross Site Tracing (XST) is not possible.
4.3.1 SSL/TLS Testing (OWASP-CM-001)
Brief Summary
Due to historic export restrictions on high-grade cryptography, both legacy and new web servers may still support weak cryptographic options.
Even if high-grade ciphers are normally used and installed, some misconfiguration in the server installation could be used to force the use of a weaker cipher to gain access to the supposedly secure communication channel.
Testing SSL / TLS cipher specifications and requirements for site
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of at most 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.
Technically, cipher determination is performed as follows. In the initial phase of an SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org), which can be used to allow non-SSL-enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (e.g., DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (e.g., SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honour. In this way you may control, for example, whether or not conversations with clients will support 40-bit encryption only.
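On the client side, the set of cipher suites offered in the Client Hello is exactly what a tester manipulates to probe a server. The following Python standard-library sketch shows how a cipher string restricts the suites a client context will offer; a scanner does the inverse, deliberately offering only weak suites to see whether the server accepts one. Suite names depend on the local OpenSSL build.

```python
# Sketch: list the cipher suites an SSL/TLS client context will offer.
# "HIGH:!aNULL:!eNULL" excludes weak and anonymous suites; offering only
# weak suites instead would test whether a server is willing to accept them.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!aNULL:!eNULL")
names = [c["name"] for c in ctx.get_ciphers()]
print(len(names), "cipher suites offered, e.g.", names[0])
```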
Black Box Test and example
In order to detect possible support of weak ciphers, the ports associated with SSL/TLS-wrapped services must be identified. These typically include port 443, which is the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS-wrapped services related to the web application. In general, a service discovery is required to identify such ports.
The nmap scanner, via the -sV scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).
Example 1. SSL service recognition via nmap.
[root@test]# nmap -F -sV localhost
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST
Interesting ports on localhost.localdomain (127.0.0.1):
(The 1205 ports scanned but not shown below are in state: closed)
PORT STATE SERVICE VERSION
443/tcp open ssl OpenSSL
901/tcp open http Samba SWAT administration server
8080/tcp open http Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)
8081/tcp open http Apache Tomcat/Coyote JSP engine 1.0
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds
[root@test]#
Example 2. Identifying weak ciphers with Nessus. The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (note the weak export-class ciphers reported).
https (443/tcp)
Description
Here is the SSLv2 server certificate:
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1 (0x1)
Signature Algorithm: md5WithRSAEncryption
Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******
Validity
Not Before: Oct 17 07:12:16 2002 GMT
Not After: Oct 16 07:12:16 2004 GMT
Subject: C=**, ST=******, L=******, O=******, CN=******
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public Key: (1024 bit)
Modulus (1024 bit):
00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:
6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:
ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:
ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:
72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:
09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:
e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:
2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:
3d:0a:75:26:cf:dc:47:74:29
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
Netscape Comment:
OpenSSL Generated Certificate
X509v3 Subject Key Identifier:
10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8
X509v3 Authority Key Identifier:
keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE
DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******
serial:00
Signature Algorithm: md5WithRSAEncryption
7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:
8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:
2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:
b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:
40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:
f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:
0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:
82:fc
Here is the list of available SSLv2 ciphers:
RC4-MD5
EXP-RC4-MD5
RC2-CBC-MD5
EXP-RC2-CBC-MD5
DES-CBC-MD5
DES-CBC3-MD5
RC4-64-MD5
The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and 2 weak "export class" ciphers.
The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack
Solution: disable those ciphers and upgrade your client software if necessary.
See http://support.microsoft.com/default.aspx?scid=kben-us216482
or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite
This SSLv2 server also accepts SSLv3 connections.
This SSLv2 server also accepts TLSv1 connections.
Example 3. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443
CONNECTED(00000003)
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
verify error:num=27:certificate not trusted
verify return:1
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
verify error:num=21:unable to verify the first certificate
verify return:1
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc
X/dVk5WRTw==
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com
---
No client certificate CA names sent
---
Ciphers common between both SSL endpoints:
RC4-MD5 EXP-RC4-MD5 RC2-CBC-MD5
EXP-RC2-CBC-MD5 DES-CBC-MD5 DES-CBC3-MD5
RC4-64-MD5
---
SSL handshake has read 1023 bytes and written 333 bytes
---
New, SSLv2, Cipher is DES-CBC3-MD5
Server public key is 1024 bit
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv2
Cipher : DES-CBC3-MD5
Session-ID: 709F48E4D567C70A2E49886E4C697CDE
Session-ID-ctx:
Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3
Key-Arg : E8CB6FEB9ECF3033
Start Time: 1156977226
Timeout : 300 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
closed
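When no SSL-enabled target is at hand, the same s_client audit can be rehearsed against a throwaway local server. The sketch below starts an openssl s_server with a freshly generated self-signed certificate and then inspects which protocol and cipher the handshake actually negotiates; the port number, /tmp paths, and subject name are arbitrary choices for this sketch:

```shell
# Generate a disposable self-signed certificate (no passphrase).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/test-key.pem \
    -out /tmp/test-cert.pem -days 1 -subj "/CN=localhost" 2>/dev/null
# Start a minimal TLS server in the background; -www serves a status page.
openssl s_server -key /tmp/test-key.pem -cert /tmp/test-cert.pem \
    -accept 8443 -www >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
# Connect and print only the negotiated protocol/cipher lines.
echo | openssl s_client -connect 127.0.0.1:8443 2>/dev/null \
    | grep -E 'Protocol|Cipher'
kill $SERVER_PID
```

Against a real target, replace 127.0.0.1:8443 with the host and port under test; the same grep shows at a glance which protocol version and cipher the server selected.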
White Box Test and example
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.
Example: On Windows Server 2003, the following registry path defines the ciphers available to the server:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\
Testing SSL certificate validity: client and server
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server) is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate match.
Let's examine each check in more detail.
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is a concern depends on several factors. For example, it may be fine for an intranet environment (think of corporate web email being provided via https; here, obviously all users recognize the internal CA as trusted). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one which is recognized by the entire user base (and here we stop with our considerations; we won't delve deeper into the implications of the trust model being used by digital certificates).
b) Certificates have an associated period of validity, therefore they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but which has expired without being renewed.
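The expiry check can be automated with openssl's -dates and -checkend options. The sketch below generates a hypothetical one-day certificate so the behaviour is reproducible; against a real server you would instead feed it the certificate retrieved with s_client:

```shell
# Create a self-signed certificate valid for a single day (paths and
# subject are arbitrary choices for this sketch).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/exp-key.pem \
    -out /tmp/exp-cert.pem -days 1 -subj "/CN=test.example" 2>/dev/null
# Print the notBefore/notAfter validity window.
openssl x509 -in /tmp/exp-cert.pem -noout -dates
# -checkend N exits non-zero if the certificate expires within N seconds;
# here we ask about the next 7 days (604800 seconds).
if openssl x509 -in /tmp/exp-cert.pem -noout -checkend 604800 >/dev/null; then
    echo "certificate still valid in 7 days"
else
    echo "certificate expires within 7 days"
fi
```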
c) What if the name on the certificate and the name of the server do not match? If this happens, it might sound suspicious. For a number of reasons, this is not so rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, IP-based virtual servers must be used. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.
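The name check can likewise be scripted by extracting the subject CN from a certificate and comparing it with the host name the client intended to reach. Both names below are made up for the sketch:

```shell
# Issue a certificate for one (fictitious) name...
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/cn-key.pem \
    -out /tmp/cn-cert.pem -days 1 -subj "/CN=www.example.com" 2>/dev/null
# Extract the CN from the subject field.
CN=$(openssl x509 -in /tmp/cn-cert.pem -noout -subject | sed 's/.*CN *= *//')
# ...and compare it against the site we think we are visiting.
TARGET=www.example.it
if [ "$CN" = "$TARGET" ]; then
    echo "name matches"
else
    echo "MISMATCH: certificate is for $CN, site is $TARGET"
fi
```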
Black Box Testing and examples
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates whose name does not match the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate, including the issuer, period of validity, encryption characteristics, etc.
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is usually the https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the -sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.
Examples
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming. The following screenshots refer to a regional site of a high-profile IT company.
Warning issued by Microsoft Internet Explorer: we are visiting an .it site, but the certificate was issued to a .com site! Internet Explorer warns that the name on the certificate does not match the name of the site.
[Image: http://www.owasp.org/images/7/70/SSL_Certificate_Validity_Testing_IE_Warning.gif]
Warning issued by Mozilla Firefox. The message issued by Firefox is different: Firefox complains because it cannot ascertain the identity of the .com site the certificate refers to, since it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs, so the behavior experienced with different browsers may differ.
[Image: http://www.owasp.org/images/8/87/SSL_Certificate_Validity_Testing_Firefox_Warning.gif]
White Box Testing and examples
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.
References
Whitepapers
[1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt
[2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt
[3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt
[4] www.verisign.net features various material on the topic
Tools
Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They also usually report other information, such as the CA which issued the certificate. Remember, however, that there is no unified notion of a trusted CA; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize that CA.
The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin SSL certificate expiry, plugin id 15901). This plugin will check certificates installed on the server.
Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).
You may also rely on specialized tools such as SSL Digger (http://www.foundstone.com/resources/proddesc/ssldigger.htm); if you prefer the command line, experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (it may already be available on *nix boxes; otherwise see www.openssl.org).
To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a -sV scanning option which tries to identify services, while the Nessus vulnerability scanner has the capability of identifying SSL-based services on arbitrary ports and running vulnerability checks on them, regardless of whether they are configured on standard or non-standard ports.
In case you need to talk to an SSL service but your favourite tool doesn't support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunnelling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.
Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be always evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.
4.3.2 DB Listener Testing (OWASP-CM-002)
Brief Summary
The database listener is a network daemon unique to Oracle databases. It waits for connection requests from remote clients. If compromised, this daemon can affect the availability of the database.
Description of the Issue
The DB listener is the entry point for remote connections to an Oracle database. It listens for connection requests and handles them accordingly. This test is possible only if the tester can access the service -- the test should be done from the intranet (major Oracle installations don't expose this service to the external network). The listener, by default, listens on port 1521 (port 2483 is the new officially registered port for the TNS Listener, and 2484 for the TNS Listener using SSL). It is good practice to change the listener from this port to another arbitrary port number. If the listener is "turned off", remote access to the database is not possible; in that case one's application would also fail, creating a denial of service condition.
Potential areas of attack:
Stop the Listener -- create a DoS attack.
Set a password and prevent others from controlling the Listener - Hijack the DB.
Write trace and log files to any file accessible to the process owner of tnslnsr (usually Oracle) - Possible information leakage.
Obtain detailed information on the Listener, database, and application configuration.
Black Box testing and example
Upon discovering the port on which the listener resides, one can assess the listener by running a tool developed by Integrigy:
[Image: http://www.owasp.org/images/6/6b/Listener_Test.JPG]
The tool above checks the following:
Listener Password. On many Oracle systems, the listener password may not be set. The tool above verifies this. If the password is not set, an attacker could set it and hijack the listener, although the password can be removed by locally editing the Listener.ora file.
Enable Logging. The tool above also tests whether logging has been enabled. If it has not, changes to the listener would go undetected and unrecorded, and brute force attacks on the listener would not be logged.
Admin Restrictions. If Admin restrictions are not enabled, it is possible to use the "SET" commands remotely.
Example. If you find a TCP/1521 open port on a server, you may have an Oracle listener that accepts connections from the outside. If the listener is not protected by an authentication mechanism, or if you can easily find credentials, it is possible to exploit this vulnerability to enumerate the Oracle services. For example, using LSNRCTL(.exe) (contained in every Oracle client installation), you can obtain the following output:
TNSLSNR for 32-bit Windows: Version 9.2.0.4.0 - Production
TNS for 32-bit Windows: Version 9.2.0.4.0 - Production
Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 9.2.0.4.0 - Production
Windows NT Named Pipes NT Protocol Adapter for 32-bit Windows: Version 9.2.0.4.0 - Production
Windows NT TCP/IP NT Protocol Adapter for 32-bit Windows: Version 9.2.0.4.0 - Production
SID(s): SERVICE_NAME = CONFDATA
SID(s): INSTANCE_NAME = CONFDATA
SID(s): SERVICE_NAME = CONFDATAPDB
SID(s): INSTANCE_NAME = CONFDATA
SID(s): SERVICE_NAME = CONFORGANIZ
SID(s): INSTANCE_NAME = CONFORGANIZ
The Oracle listener makes it possible to enumerate default users on the Oracle server:
User name Password
OUTLN OUTLN
DBSNMP DBSNMP
BACKUP BACKUP
MONITOR MONITOR
PDB CHANGE_ON_INSTALL
In this case, we have not found privileged DBA accounts, but the OUTLN and BACKUP accounts hold a fundamental privilege: EXECUTE ANY PROCEDURE. This means that it is possible to execute all procedures, for example the following:
exec dbms_repcat_admin.grant_admin_any_schema('BACKUP');
Executing this command grants the user DBA privileges. The user can now interact directly with the DB and execute, for example:
select * from session_privs;
The output is the following screenshot:
[Image: http://www.owasp.org/images/1/18/ToadListener2.PNG]
The user can now execute many operations, in particular: DELETE ANY TABLE and DROP ANY TABLE.
Listener default ports
During the discovery phase of an Oracle server, one may encounter the following default ports:
1521: Default port for the TNS Listener.
1522-1540: Commonly used ports for the TNS Listener
1575: Default port for the Oracle Names Server
1630: Default port for the Oracle Connection Manager client connections
1830: Default port for the Oracle Connection Manager admin connections
2481: Default port for Oracle JServer/Java VM listener
2482: Default port for Oracle JServer/Java VM listener using SSL
2483: New port for the TNS Listener
2484: New port for the TNS Listener using SSL
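A quick way to sweep these default ports during discovery is a plain TCP connect probe; the sketch below uses bash's built-in /dev/tcp redirection with a 1-second timeout per port (the target host is hypothetical -- substitute the server under test). Running nmap -sV against the same port list will additionally identify the service behind each open port.

```shell
# Probe the default Oracle listener-related ports.
HOST=127.0.0.1   # hypothetical target; replace with the server under test
for PORT in 1521 1575 1630 1830 2481 2482 2483 2484; do
    if timeout 1 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
        echo "$PORT open"
    else
        echo "$PORT closed/filtered"
    fi
done
```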
Gray Box testing and example
Testing for restriction of the privileges of the listener
It is important to give the listener the least privilege possible, so that it cannot read or write files in the database or in the server memory address space.
The file Listener.ora is used to define the database listener properties. One should check that the following line is present in the Listener.ora file:
ADMIN_RESTRICTIONS_LISTENER=ON
Listener password:
Many common exploits are performed because the listener password is not set. By checking the Listener.ora file, one can determine whether the password is set.
The password can be set manually by editing the Listener.ora file, via the PASSWORDS_ directive. The issue with this manual method is that the password is stored in cleartext and can be read by anyone with access to the Listener.ora file. A more secure way is to use the LSNRCTL tool and invoke the change_password command:
LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 24-FEB-2004 11:27:55
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> set current_listener listener
Current Listener is listener
LSNRCTL> change_password
Old password:
New password:
Re-enter new password:
Connecting to
Password changed for listener
The command completed successfully
LSNRCTL> set password
Password:
The command completed successfully
LSNRCTL> save_config
Connecting to
Saved LISTENER configuration parameters.
Listener Parameter File D:\oracle\ora90\network\admin\listener.ora
Old Parameter File D:\oracle\ora90\network\admin\listener.bak
The command completed successfully
LSNRCTL>
References
Whitepapers
Oracle Database Listener Security Guide - http://www.integrigy.com/security-resources/whitepapers/Integrigy_Oracle_Listener_TNS_Security.pdf
Tools
TNS Listener tool (Perl) - http://www.jammed.com/%7Ejwa/hacks/security/tnscmd/tnscmd-doc.html
Toad for Oracle - http://www.quest.com/toad
4.3.3 Infrastructure configuration management testing (OWASP-CM-003)
Brief Summary
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can comprise hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application. It takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and seemingly unimportant problems may evolve into severe risks for another application on the same server. To address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.
Description of the Issue
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or introduce new vulnerabilities that might compromise the application itself.
For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.
In order to test the configuration management infrastructure, the following steps need to be taken:
The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security.
All the elements of the infrastructure need to be reviewed in order to make sure that they don't hold any known vulnerabilities.
A review needs to be made of the administrative tools used to maintain all the different elements.
The authentication systems, if any, need to be reviewed in order to assure that they serve the needs of the application and that they cannot be manipulated by external users to gain access.
A list of defined ports which are required for the application should be maintained and kept under change control.
Black Box Testing and examples
Review of the application architecture
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server, which executes the C, Perl, or shell CGI application, and perhaps also the authentication mechanism. In more complex setups, such as an online banking system, multiple servers might be involved: a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers is used for different purposes and might even be divided into different networks with firewalling devices between them, creating different DMZs, so that access to the web server will not grant a remote user access to the authentication mechanism itself, and so that compromises of the different elements of the architecture can be isolated and will not compromise the whole architecture.
Knowledge of the application architecture can be easy to obtain if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove very difficult to derive when doing a blind penetration test.
In the latter case, a tester will first start with the assumption that there is a simple setup (a single server) and will, through the information retrieved from other tests, derive the different elements, questioning this assumption and extending the architectural picture as needed. The tester will start by asking simple questions such as: Is there a firewalling system protecting the web server? This can be answered based on the results of network scans targeted at the web server: is traffic to the web server being filtered at the network edge (no answer, or ICMP unreachables are received), or is the server directly connected to the Internet (i.e. it returns RST packets for all non-listening ports)? This analysis can be enhanced to determine the type of firewall used, based on network packet tests: is it a stateful firewall, or is it an access list filter on a router? How is it configured? Can it be bypassed?
Detecting a reverse proxy in front of the web server needs to be done by the analysis of the web server banner, which might directly disclose the existence of a reverse proxy (for example, if WebSEAL[1] is returned). It can also be determined by obtaining the answers given by the web server to requests and comparing them to the expected answers. For example, some reverse proxies act as intrusion prevention systems (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request which targets an unavailable page and returns a different error message for some common web attacks like those done by CGI scanners, it might be an indication of a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors then there is probably something in between, blocking them. In some cases, even the protection system gives itself away:
GET / web-console/ServerInfo.jsp%00 HTTP/1.0
HTTP/1.0 200
Pragma: no-cache
Cache-Control: no-cache
Content-Type: text/html
Content-Length: 83
Error
Error
FW-1 at XXXXXX: Access denied.
Example of the security server of Check Point Firewall-1 NG AI protecting a web server
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based, again, on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with subsequent requests.
Another element that can be detected: network load balancers. Typically, these systems will balance a given TCP/IP port to multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of this architecture element needs to be done by examining multiple requests and comparing results in order to determine if the requests are going to the same or different web servers. For example, based on the Date: header if the server clocks are not synchronized. In some cases, the load balancing process might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel's Alteon WebSystems load balancer.
Application web servers are usually easy to detect. The request for several resources is handled by the application server itself (not the web server) and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies, which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or to rewrite URLs automatically to do session tracking.
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers) however, are not as easy to detect from an external point of view in an immediate way, since they will be hidden by the application itself.
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated "on the fly," it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end. For example, an online shopping application that uses numeric identifiers (id) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when a vulnerability surfaces in the application, such as poor exception handling or susceptibility to SQL injection.
Known server vulnerabilities
Vulnerabilities found in the different elements that make up the application architecture, be it the web server or the database backend, can severely compromise the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server, or even to replace files. This vulnerability could compromise the application, since a rogue user may be able to replace the application itself or introduce code that would affect the backend servers, as the rogue code would run just like any other application.
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool. However, testing for some vulnerabilities can have unpredictable results on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test were successful.

Also, some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed in it, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common with operating system vendors that backport security patches to the software they provide in the operating system but do not do a full upgrade to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat, or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the exposed elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements that are not directly exposed, such as the authentication backends, the database backends, or reverse proxies in use.
Finally, not all software vendors disclose vulnerabilities in a public way, and therefore these weaknesses do not become registered within publicly known vulnerability databases[2]. This information is only disclosed to customers or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft's Internet Information Server, or IBM's Lotus Domino) but will be lacking for lesser-known products.
This is why reviewing vulnerabilities is best done when the tester is provided with internal information of the software used, including versions and releases used and patches applied to the software. With this information, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of successful exploitation. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use.
It is also worthwhile to note that vendors will sometimes silently fix vulnerabilities and make the fixes available with new software releases. Different vendors have different release cycles that determine the support they might provide for older releases. A tester with detailed information of the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it. No patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even in the event that they are aware that the vulnerability is present and the system is, indeed, vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.
Administrative tools
Any web server infrastructure requires administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology, or software used, administrative tools will differ. For example, some web servers are managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), are administered through plain text configuration files (in the Apache case[3]), or use operating-system GUI tools (when using Microsoft's IIS server or ASP.NET). In most cases, however, the server configuration is handled with different tools than those used for maintaining the files served by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS), or other mechanisms. Obviously, the operating system of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.).
Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if an attacker gains access to any of them, they can then compromise or damage the application architecture. Thus it is important to:
List all the possible administrative interfaces.
Determine if administrative interfaces are available from an internal network or are also available from the Internet.
If available from the Internet, determine the mechanisms that control access to these interfaces and their associated susceptibilities.
Change the default usernames and passwords.
Some companies choose not to manage all aspects of their web server applications, but may have other parties managing the content delivered by the web application. This external company might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line that will connect the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test if the administrative interfaces can be vulnerable to attacks.
References
Whitepapers:
[1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse Proxy from IBM which is part of the Tivoli framework.
[2] Such as Symantec's Bugtraq, ISS X-Force, or NIST's National Vulnerability Database (NVD)
[3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.
4.3.4 Application configuration management testing (OWASP-CM-004)
Brief Summary
Proper configuration of the single elements that make up an application architecture is important in order to prevent mistakes that might compromise the security of the whole architecture.
Description of the issue
Configuration review and testing is a critical task in creating and maintaining such an architecture, since many different systems will usually be provided with generic configurations that might not be suited to the task they will perform on the specific site where they are installed. While a typical web or application server installation will ship with a lot of functionality enabled (like application examples, documentation, test pages), anything that is not essential should be removed before deployment to avoid post-install exploitation.
Black Box Testing and Examples
Sample/known files and directories
Many web servers and application servers provide, in a default installation, sample applications and files for the benefit of the developer and in order to test that the server is working properly right after installation. However, many of these default applications have later been found to be vulnerable. This was the case, for example, for CVE-1999-0449 (Denial of Service in IIS when the Exair sample site had been installed), CAN-2002-1744 (Directory traversal vulnerability in CodeBrws.asp in Microsoft IIS 5.0), CAN-2002-1630 (Use of sendmail.jsp in Oracle 9iAS), or CAN-2003-1172 (Directory traversal in the view-source sample in Apache's Cocoon).
CGI scanners include a detailed list of known files and directory samples that are provided by different web or application servers, and running one might be a fast way to determine if these files are present. However, the only way to be really sure is to do a full review of the contents of the web server and/or application server and determine whether they are related to the application itself or not.
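The first step of such a check can be sketched as building candidate URLs from a list of well-known sample paths, which would then be probed with an HTTP client. The path list below is a small illustrative subset of what CGI scanners actually ship with:

```python
# Sketch: combine a base URL with well-known sample/default paths so their
# presence can be verified afterwards with an HTTP client. Illustrative list.
SAMPLE_PATHS = [
    "/iissamples/",           # IIS sample applications
    "/scripts/CodeBrws.asp",  # IIS 5.0 sample involved in CAN-2002-1744
    "/manual/",               # Apache bundled documentation
    "/examples/",             # generic app-server examples directory
]

def candidate_urls(base_url, paths=SAMPLE_PATHS):
    """Return one candidate URL per known sample path."""
    return [base_url.rstrip("/") + p for p in paths]

for url in candidate_urls("http://www.example.com"):
    print(url)
```

A real scan would issue a request for each URL and treat anything other than a 404 as worth investigating; the sketch covers only the URL-generation step.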
Comment review
It is very common, and even recommended, for programmers to include detailed comments in their source code in order to allow other programmers to better understand why a given decision was taken in coding a given function. Programmers usually do this when developing large web-based applications too. However, comments included inline in HTML code might reveal internal information to a potential attacker that should not be available to them. Sometimes source code is commented out because a functionality is no longer required, but this commented-out code is unintentionally leaked in the HTML pages returned to users.
Comment review should be done in order to determine if any information is being leaked through comments. This review can only be thoroughly done through an analysis of the web server static and dynamic content and through file searches. It can be useful, however, to browse the site either in an automatic or guided fashion and store all the content retrieved. This retrieved content can then be searched in order to analyse the HTML comments available, if any, in the code.
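The search over stored content can be sketched with a simple pattern match for HTML comments (the sample page below is fabricated for illustration):

```python
import re

# Sketch: extract HTML comments from previously spidered/stored page content
# so they can be reviewed for leaked information.
COMMENT_RE = re.compile(r"<!--(.*?)-->", re.DOTALL)

def extract_comments(html):
    """Return the trimmed text of every HTML comment in the given content."""
    return [c.strip() for c in COMMENT_RE.findall(html)]

# Fabricated example page with the kind of leaks this review looks for:
page = """<html><body>
<!-- TODO: remove test account admin/admin123 before go-live -->
<p>Welcome</p>
<!-- <a href="/old-admin/">old admin panel</a> -->
</body></html>"""

for comment in extract_comments(page):
    print(comment)
```

On a real assessment the same function would be run over every file saved by the mirroring tool, and the output reviewed by hand for credentials, paths, and disabled functionality.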
Gray Box Testing and Examples
Configuration review
The web server or application server configuration takes an important role in protecting the contents of the site and it must be carefully reviewed in order to spot common configuration mistakes. Obviously, the recommended configuration varies depending on the site policy, and the functionality that should be provided by the server software. In most cases, however, configuration guidelines (either provided by the software vendor or external parties) should be followed in order to determine if the server has been properly secured. It is impossible to generically say how a server should be configured, however, some common guidelines should be taken into account:
Only enable server modules (ISAPI extensions in the IIS case) that are needed for the application. This reduces the attack surface, since the server is reduced in size and complexity as software modules are disabled. It also prevents vulnerabilities that might appear in the vendor software from affecting the site if they are present only in modules that have already been disabled.
Handle server errors (40x or 50x) with custom-made pages instead of the default web server pages. Specifically, make sure that any application errors will not be returned to the end user and that no code is leaked through these errors, since it would help an attacker. It is actually very common to forget this point, since developers do need this information in pre-production environments.
Make sure that the server software runs with minimised privileges in the operating system. This prevents an error in the server software from directly compromising the whole system, although an attacker who is able to run code as the web server could still try to elevate privileges.
Make sure the server software logs properly both legitimate access and errors.
Make sure that the server is configured to properly handle overloads and prevent Denial of Service attacks. Ensure that the server has been performance tuned properly.
Logging
Logging is an important asset for the security of an application architecture, since it can be used to detect flaws in applications (for example, users constantly trying to retrieve a file that does not really exist) as well as sustained attacks from rogue users. Logs are typically properly generated by web and other server software, but it is not so common to find applications that properly log their actions; when they do, the main intention of the application logs is usually to produce debugging output that the programmer can use to analyse a particular error.
In both cases (server and application logs) several issues should be tested and analysed based on the log contents:
Do the logs contain sensitive information?
Are the logs stored in a dedicated server?
Can log usage generate a Denial of Service condition?
How are they rotated? Are logs kept for the sufficient time?
How are logs reviewed? Can administrators use these reviews to detect targeted attacks?
How are log backups preserved?
Is the data being logged validated (min/max length, character set, etc.) prior to being logged?
Sensitive information in logs
Some applications might, for example, use GET requests to forward form data, which will then be viewable in the server logs. This means that server logs might contain sensitive information (such as usernames and passwords, or bank account details). This sensitive information can be misused if the logs were obtained by an attacker, for example through administrative interfaces, known web server vulnerabilities, or misconfiguration (like the well-known server-status misconfiguration in Apache-based HTTP servers).
Also, in some jurisdictions, storing some sensitive information in log files, such as personal data, might oblige the enterprise to apply to log files the same data protection laws that they would apply to their back-end databases. Failure to do so, even unknowingly, might carry penalties under the applicable data protection laws.
Log location
Typically, servers will generate local logs of their actions and errors, consuming disk space on the system the server is running on. However, if the server is compromised, its logs can be wiped out by the intruder to clean up all traces of the attack and its methods. If this were to happen, the system administrator would have no knowledge of how the attack occurred or where the attack source was located. In fact, most attacker toolkits include a log zapper that is capable of cleaning up any logs that hold given information (like the IP address of the attacker), and such tools are routinely used in attackers' system-level rootkits.
Consequently, it is wiser to keep logs in a separate location, and not in the web server itself. This also makes it easier to aggregate logs from different sources that refer to the same application (such as those of a web server farm) and it also makes it easier to do log analysis (which can be CPU intensive) without affecting the server itself.
Log storage
Logs can introduce a Denial of Service condition if they are not properly stored. Any attacker with sufficient resources could, unless detected and blocked, produce a sufficient number of requests to fill up the space allocated to log files. However, if the server is not properly configured, the log files will be stored in the same disk partition as the one used for the operating system software or the application itself. This means that, if the disk were to fill up, the operating system or the application might fail because it is unable to write on disk.
Typically, in UNIX systems, logs will be located in /var (although some server installations might reside in /opt or /usr/local) and it is thus important to make sure that the directories that contain logs are in a separate partition. In some cases, and in order to prevent the system logs from being affected, the log directory of the server software itself (such as /var/log/apache in the Apache web server) should be stored in a dedicated partition.
This is not to say that logs should be allowed to grow to fill up the filesystem they reside in. Growth of server logs should be monitored in order to detect this condition since it may be indicative of an attack.
Testing this condition is as easy (and as dangerous in production environments) as firing off a sufficient and sustained number of requests to see if these requests are logged and, if so, whether there is a possibility to fill up the log partition through them. In some environments where QUERY_STRING parameters are logged regardless of whether they are produced through GET or POST requests, big queries can be simulated that will fill up the logs faster, since, typically, a single request will cause only a small amount of data to be logged: date and time, source IP address, URI request, and server result.
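A rough feasibility estimate can be made before any request is fired. The sketch below uses assumed numbers (free space, bytes logged per request) purely for illustration:

```python
# Sketch: estimate how many requests would be needed to fill the free space
# on a log partition. All numbers are assumptions for illustration only.
def requests_to_fill(partition_free_bytes, bytes_logged_per_request):
    """Integer number of requests needed to exhaust the given free space."""
    return partition_free_bytes // bytes_logged_per_request

# A common-format log line (date/time, source IP, URI, result) is often a
# few hundred bytes; a logged QUERY_STRING can push it to several KB.
free_space = 10 * 1024**3  # assume 10 GiB free on the log partition

print(requests_to_fill(free_space, 300))   # short requests
print(requests_to_fill(free_space, 4096))  # requests with a large query string
```

Comparing the two results shows why environments that log QUERY_STRING data are filled an order of magnitude faster, which is the point made above.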
Log rotation
Most servers (but few custom applications) will rotate logs in order to prevent them from filling up the filesystem they reside on. The assumption when rotating logs is that the information in them is only necessary for a limited amount of time.
This feature should be tested in order to ensure that:
Logs are kept for the time defined in the security policy, not more and not less.
Logs are compressed once rotated (this is a convenience, since it means that more logs can be stored for the same available disk space).
Filesystem permissions of rotated log files are the same as (or stricter than) those of the log files themselves. For example, web servers need to write to the logs they use, but they don't actually need to write to rotated logs, which means that the permissions of the files can be changed upon rotation to prevent the web server process from modifying them.
Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker cannot force logs to rotate in order to hide their tracks.
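The permission check in the list above can be sketched as a comparison of mode bits (as returned by os.stat on the active and rotated files); the example modes are assumptions:

```python
import stat

# Sketch: flag rotation schemes that leave rotated logs with broader
# permissions than the active log. Modes are plain ints as returned by
# os.stat(path).st_mode (only the rwx bits are compared).
def rotation_weakens_permissions(active_mode, rotated_mode):
    """True if the rotated log grants any permission the active log doesn't."""
    mask = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO
    return bool((rotated_mode & mask) & ~(active_mode & mask))

# Assumed example: active log is 0640, rotated log was left at 0666, so
# rotation has added group/other write permissions.
print(rotation_weakens_permissions(0o640, 0o666))
```

The inverse condition (rotated logs stricter than the active log, e.g. read-only) is the desirable outcome described in the list above.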
Log review
Review of logs can be used for more than extraction of usage statistics of files in the web server (which is typically what most log-based applications focus on); it can also be used to determine if attacks are taking place against the web server.
In order to analyse web server attacks the error log files of the server need to be analysed. Review should concentrate on:
40x (client error) messages, particularly 404 (not found); a large number of these from the same source might be indicative of a CGI scanner tool being used against the web server
50x (server error) messages. These can be an indication of an attacker abusing parts of the application which fail unexpectedly. For example, the first phases of a SQL injection attack will produce these error messages when the SQL query is not properly constructed and its execution fails on the backend database.
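This review can be sketched as counting 40x and 50x responses per source IP across common/combined-format log lines; the sample lines below are fabricated:

```python
from collections import Counter
import re

# Sketch: tally 40x/50x responses per source IP in common/combined-format
# access log lines. The regex captures the source IP and the status code.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) ')

def error_counts(lines):
    """Map (source_ip, '4xx'|'5xx') -> number of matching log lines."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(2)[0] in "45":
            counts[(m.group(1), m.group(2)[0] + "xx")] += 1
    return counts

# Fabricated sample: one host probing for nonexistent scripts, another
# triggering a server error with a malformed parameter.
sample = [
    '10.0.0.5 - - [12/Oct/2008:10:00:01 +0000] "GET /cgi-bin/test HTTP/1.0" 404 209',
    '10.0.0.5 - - [12/Oct/2008:10:00:02 +0000] "GET /scripts/a.asp HTTP/1.0" 404 209',
    '10.0.0.9 - - [12/Oct/2008:10:01:00 +0000] "GET /login?id=1%27 HTTP/1.0" 500 512',
]
print(error_counts(sample))
```

A spike of 4xx entries from a single IP matches the CGI-scanner pattern described above, while isolated 5xx entries point at the failing application components worth investigating.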
Log statistics or analysis should not be generated, nor stored, in the same server that produces the logs. Otherwise, an attacker might, through a web server vulnerability or improper configuration, gain access to them and retrieve similar information as would be disclosed by log files themselves.
References
Whitepapers
Generic:
CERT Security Improvement Modules: Securing Public Web Servers - HYPERLINK "http://www.cert.org/security-improvement/" \o "http://www.cert.org/security-improvement/" http://www.cert.org/security-improvement/
Apache
Apache Security, by Ivan Ristic, O'Reilly, March 2005.
Apache Security Secrets: Revealed (Again), Mark Cox, November 2003 - HYPERLINK "http://www.awe.com/mark/apcon2003/" \o "http://www.awe.com/mark/apcon2003/" http://www.awe.com/mark/apcon2003/
Apache Security Secrets: Revealed, ApacheCon 2002, Las Vegas, Mark J Cox, October 2002 - HYPERLINK "http://www.awe.com/mark/apcon2002" \o "http://www.awe.com/mark/apcon2002" http://www.awe.com/mark/apcon2002
Apache Security Configuration Document, InterSect Alliance - HYPERLINK "http://www.intersectalliance.com/projects/ApacheConfig/index.html" \o "http://www.intersectalliance.com/projects/ApacheConfig/index.html" http://www.intersectalliance.com/projects/ApacheConfig/index.html
Performance Tuning - HYPERLINK "http://httpd.apache.org/docs/misc/perf-tuning.html" \o "http://httpd.apache.org/docs/misc/perf-tuning.html" http://httpd.apache.org/docs/misc/perf-tuning.html
Lotus Domino
Lotus Security Handbook, William Tworek et al., April 2004, available in the IBM Redbooks collection
Lotus Domino Security, an X-force white-paper, Internet Security Systems, December 2002
Hackproofing Lotus Domino Web Server, David Litchfield, October 2001,
NGSSoftware Insight Security Research, available at www.nextgenss.com
Microsoft IIS
IIS 6.0 Security, by Rohyt Belani, Michael Muckin, - HYPERLINK "http://www.securityfocus.com/print/infocus/1765" \o "http://www.securityfocus.com/print/infocus/1765" http://www.securityfocus.com/print/infocus/1765
Securing Your Web Server (Patterns and Practices), Microsoft Corporation, January 2004
IIS Security and Programming Countermeasures, by Jason Coombs
From Blueprint to Fortress: A Guide to Securing IIS 5.0, by John Davis, Microsoft Corporation, June 2001
Secure Internet Information Services 5 Checklist, by Michael Howard, Microsoft Corporation, June 2000
How To: Use IISLockdown.exe - HYPERLINK "http://msdn.microsoft.com/library/en-us/secmod/html/secmod113.asp" \o "http://msdn.microsoft.com/library/en-us/secmod/html/secmod113.asp" http://msdn.microsoft.com/library/en-us/secmod/html/secmod113.asp
INFO: Using URLScan on IIS - HYPERLINK "http://support.microsoft.com/default.aspx?scid=307608" \o "http://support.microsoft.com/default.aspx?scid=307608" http://support.microsoft.com/default.aspx?scid=307608
Red Hat's (formerly Netscape's) iPlanet
Guide to the Secure Configuration and Administration of iPlanet Web Server, Enterprise Edition 4.1, by James M Hayes
The Network Applications Team of the Systems and Network Attack Center (SNAC), NSA, January 2001
WebSphere
IBM WebSphere V5.0 Security, WebSphere Handbook Series, by Peter Kovari et al., IBM, December 2002.
IBM WebSphere V4.0 Advanced Edition Security, by Peter Kovari et al., IBM, March 2002
4.3.5 Testing for File extensions handling (OWASP-CM-005)
Brief Summary
File extensions are commonly used in web servers to easily determine which technologies, languages, or plugins must be used to fulfill the web request. While this behaviour is consistent with RFCs and Web Standards, using standard file extensions provides the pentester with useful information about the underlying technologies used in a web application and greatly simplifies the task of determining the attack scenario to be used against particular technologies. In addition, misconfiguration of web servers in this area could easily reveal confidential information about access credentials.
Description of the Issue
Determining how web servers handle requests corresponding to files having different extensions may help to understand web server behaviour depending on the kind of files we try to access. For example, it can help understand which file extensions are returned as text/plain versus those which cause execution on the server side. The latter are indicative of technologies / languages / plugins which are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a .pl extension is usually associated with server-side Perl support (though the file extension alone may be deceptive and not fully conclusive; for example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related). See also next section on web server components for more on identifying server side technologies and components.
Black Box testing and example
Submit http[s] requests involving different file extensions and verify how they are handled. These verifications should be on a per web directory basis. Verify directories which allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows reconstructing the tree of web directories served by the application. In case the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy depending on the configuration of the balancing infrastructure. In an infrastructure with redundant components there may be slight variations in the configuration of individual web / application servers; this may happen for example if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slight asymmetric behaviour between themselves, and possibly different vulnerabilities).
Example: We have identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are:
<?php
mysql_connect("127.0.0.1", "root", "")
    or die("Could not connect");
?>
We determine the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. This example (which occurred in a real assessment) shows how dangerous access to certain kinds of files can be. The following file extensions should NEVER be returned by a web server, since they are related to files which may contain sensitive information or for which there is no reason to be served.
.asa
.inc
The following file extensions are related to files which, when accessed, are either displayed or downloaded by the browser. Therefore, files with these extensions must be checked to verify that they are indeed supposed to be served (and are not leftovers), and that they do not contain sensitive information.
.zip, .tar, .gz, .tgz, .rar, ...: (Compressed) archive files
.java: No reason to provide access to Java source files
.txt: Text files
.pdf: PDF documents
.doc, .rtf, .xls, .ppt, ...: Office documents
.bak, .old and other extensions indicative of backup files (for example: ~ for Emacs backup files)
The list given above details only a few examples, since there are too many file extensions to treat comprehensively here. Refer to HYPERLINK "http://filext.com/" \o "http://filext.com/" http://filext.com/ for a more thorough database of extensions. To sum up, in order to identify files having a given extension, a mix of techniques can be employed, including: vulnerability scanners, spidering and mirroring tools, manually inspecting the application (this overcomes limitations in automatic spidering), and querying search engines (see HYPERLINK \l "_4.2.3_Spidering_and" \o "Spidering and googling AoC"Spidering and googling). See also HYPERLINK \l "_4.2.6.2_Old,_backup" \o "Old file testing AoC"Old file testing, which deals with the security issues related to "forgotten" files.
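Once a list of paths has been gathered by spidering or mirroring, triaging it by extension can be sketched as follows (the extension sets are a small subset of the lists above, not a complete policy):

```python
import os

# Sketch: given paths discovered by spidering/mirroring, flag those whose
# extension suggests sensitive or leftover content. Illustrative subsets.
NEVER_SERVE = {".asa", ".inc"}
REVIEW = {".zip", ".tar", ".gz", ".tgz", ".rar", ".java", ".txt", ".pdf",
          ".doc", ".rtf", ".xls", ".ppt", ".bak", ".old"}

def classify(paths):
    """Return (path, verdict) pairs for paths that warrant attention."""
    findings = []
    for p in paths:
        ext = os.path.splitext(p)[1].lower()
        if ext in NEVER_SERVE:
            findings.append((p, "should never be served"))
        elif ext in REVIEW or p.endswith("~"):  # "~" catches editor backups
            findings.append((p, "verify it should be served"))
    return findings

for path, verdict in classify(["/connection.inc", "/docs/manual.pdf", "/index.html~"]):
    print(path, "->", verdict)
```

The "verify" category still requires human judgment: a PDF in a documentation directory may be intentional, while the same file in a scripts directory is suspicious.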
Gray Box testing and example
Performing white box testing against file extensions handling amounts to checking the configurations of web server(s) / application server(s) taking part in the web application architecture, and verifying how they are instructed to serve different file extensions. If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behaviour.
References
Tools
Vulnerability scanners, such as Nessus and Nikto check for the existence of well-known web directories. They may allow as well downloading the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include:
wget - HYPERLINK "http://www.gnu.org/software/wget" \o "http://www.gnu.org/software/wget" http://www.gnu.org/software/wget
curl - HYPERLINK "http://curl.haxx.se" \o "http://curl.haxx.se" http://curl.haxx.se
Google for web mirroring tools.
4.3.6 Old, backup and unreferenced files (OWASP-CM-006)
Brief Summary
While most of the files within a web server are directly handled by the server itself, it isn't uncommon to find unreferenced and/or forgotten files that can be used to obtain important information about either the infrastructure or the credentials. The most common scenarios include the presence of renamed old versions of modified files, inclusion files that are loaded into the language of choice and can be downloaded as source, or even automatic or manual backups in the form of compressed archives. All of these files may grant the pentester access to inner workings, backdoors, administrative interfaces, or even credentials to connect to the administrative interface or the database server.
Description of the issue
An important source of vulnerability lies in files which have nothing to do with the application, but are created as a consequence of editing application files, of creating on-the-fly backup copies, or of leaving old or unreferenced files in the web tree. Performing in-place editing or other administrative actions on production web servers may inadvertently leave backup copies behind (either generated automatically by the editor while editing files, or by the administrator who is zipping a set of files to create a backup).
It is particularly easy to forget such files, and this may pose a serious security threat to the application. That happens because backup copies may be generated with file extensions differing from those of the original files. A .tar, .zip or .gz archive that we generate (and forget...) has obviously a different extension, and the same happens with automatic copies created by many editors (for example, emacs generates a backup copy named file~ when editing file). Making a copy by hand may produce the same effect (think of copying file to file.old).
As a result, these activities generate files which a) are not needed by the application, b) may be handled differently than the original file by the web server. For example, if we make a copy of login.asp named login.asp.old, we are allowing users to download the source code of login.asp; this is because, due to its extension, login.asp.old will be typically served as text/plain, rather than being executed. In other words, accessing login.asp causes the execution of the server-side code of login.asp, while accessing login.asp.old causes the content of login.asp.old (which is, again, server-side code) to be plainly returned to the user and displayed in the browser. This may pose security risks, since sensitive information may be revealed. Generally, exposing server side code is a bad idea; not only are you unnecessarily exposing business logic, but you may be unknowingly revealing application-related information which may help an attacker (pathnames, data structures, etc.); not to mention the fact that there are too many scripts with embedded username/password in clear text (which is a careless and very dangerous practice).
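The naming patterns described above can be turned into a candidate list to probe for; the set of suffixes below is illustrative, and a real wordlist would be much longer:

```python
# Sketch: generate backup-copy names that editors, administrators, and OS
# copy operations commonly produce for a known filename. The resulting names
# would then be probed for with an HTTP client.
def backup_candidates(filename):
    return [
        filename + "~",         # emacs-style editor backup
        filename + ".bak",
        filename + ".old",
        filename + ".orig",
        filename + ".zip",      # ad-hoc archive of the file
        "Copy of " + filename,  # Windows copy operation
    ]

print(backup_candidates("login.asp"))
```

Requesting login.asp.old and receiving text/plain content instead of an executed page would confirm exactly the source-disclosure scenario described above.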
Other causes of unreferenced files are due to design or configuration choices when they allow diverse kind of application-related files such as data files, configuration files, log files, to be stored in filesystem directories that can be accessed by the web server. These files have normally no reason to be in a filesystem space which could be accessed via web, since they should be accessed only at the application level, by the application itself (and not by the casual user browsing around!).
Threats
Old, backup and unreferenced files present various threats to the security of a web application:
Unreferenced files may disclose sensitive information that can facilitate a focused attack against the application; for example include files containing database credentials, configuration files containing references to other hidden content, absolute file paths, etc.
Unreferenced pages may contain powerful functionality that can be used to attack the application; for example an administration page that is not linked from published content but can be accessed by any user who knows where to find it.
Old and backup files may contain vulnerabilities that have been fixed in more recent versions; for example viewdoc.old.jsp may contain a directory traversal vulnerability that has been fixed in viewdoc.jsp but can still be exploited by anyone who finds the old version.
Backup files may disclose the source code for pages designed to execute on the server; for example requesting viewdoc.bak may return the source code for viewdoc.jsp, which can be reviewed for vulnerabilities that may be difficult to find by making blind requests to the executable page. While this threat obviously applies to scripted languages, such as Perl, PHP, ASP, shell scripts, JSP, etc., it is not limited to them, as shown in the example provided in the next bullet.
Backup archives may contain copies of all files within (or even outside) the webroot. This allows an attacker to quickly enumerate the entire application, including unreferenced pages, source code, include files, etc. For example, if you leave behind a file named myservlets.jar.old containing (a backup copy of) your servlet implementation classes, you are exposing a lot of sensitive information which is susceptible to decompilation and reverse engineering.
In some cases copying or editing a file does not modify the file extension, but modifies the filename. This happens, for example, in Windows environments, where file copying operations generate filenames prefixed with "Copy of" or localized versions of this string. Since the file extension is left unchanged, this is not a case where an executable file is returned as plain text by the web server, and therefore not a case of source code disclosure. However, these files too are dangerous, because there is a chance that they include obsolete and incorrect logic that, when invoked, could trigger application errors, which might yield valuable information to an attacker if diagnostic message display is enabled.
Log files may contain sensitive information about the activities of application users, for example sensitive data passed in URL parameters, session IDs, URLs visited (which may disclose additional unreferenced content), etc. Other log files (e.g. ftp logs) may contain sensitive information about the maintenance of the application by system administrators.
Countermeasures
To guarantee an effective protection strategy, testing should be complemented by a security policy which clearly forbids dangerous practices, such as:
Editing files in-place on the web server / application server filesystem. This is a particularly bad habit, since editors are likely to generate backup files without you noticing. It is amazing to see how often this is done, even in large organizations. If you absolutely need to edit files on a production system, ensure that you don't leave behind anything which is not explicitly intended, and consider that you are doing it at your own risk.
Check carefully any other activity performed on filesystems exposed by the web server, such as spot administration activities. For example, if you occasionally need to take a snapshot of a couple of directories (which you shouldn't, on a production system...), you may be tempted to zip/tar them first. Be careful not to leave those archive files behind!
Appropriate configuration management policies should help avoid leaving obsolete and unreferenced files around.
Applications should be designed not to create (or rely on) files stored under the web directory trees served by the web server. Data files, log files, configuration files, etc. should be stored in directories not accessible by the web server, to counter the possibility of information disclosure (not to mention data modification if web directory permissions allow writing...).
Black Box Testing and examples
Testing for unreferenced files uses both automated and manual techniques, and typically involves a combination of the following:
(i) Inference from the naming scheme used for published content
If not already done, enumerate all of the application's pages and functionality. This can be done manually using a browser, or using an application spidering tool. Most applications use a recognisable naming scheme, and organise resources into pages and directories using words that describe their function. From the naming scheme used for published content, it is often possible to infer the name and location of unreferenced pages. For example, if a page viewuser.asp is found, then look also for edituser.asp, adduser.asp and deleteuser.asp. If a directory /app/user is found, then look also for /app/admin and /app/manager.
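This kind of inference can be scripted. A minimal sketch, assuming a discovered page named viewuser.asp and a short verb list (both are illustrative assumptions, not from the target):

```shell
# Hypothetical example: derive candidate page names from a discovered
# page called "viewuser.asp" by swapping its leading verb.
page="viewuser.asp"
stem="${page#view}"                  # strip the "view" prefix -> "user.asp"
candidates=""
for verb in add edit delete list; do
  candidates="$candidates ${verb}${stem}"
done
echo "$candidates"
```

Each candidate would then be requested against the server as part of the guessing attack described below.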
(ii) Other clues in published content
Many web applications leave clues in published content that can lead to the discovery of hidden pages and functionality. These clues often appear in the source code of HTML and JavaScript files. The source code for all published content should be manually reviewed to identify clues about other pages and functionality. For example:
Programmers' comments and commented-out sections of source code may refer to hidden content, for example:
<!-- <A HREF="uploadfile.jsp">Upload a document to the server</A> -->
<!-- Link removed while bugs in uploadfile.jsp are fixed          -->
JavaScript may contain page links that are only rendered within the user's GUI under certain circumstances:
var adminUser=false;
:
if (adminUser) menu.add (new menuItem ("Maintain users", "/admin/useradmin.jsp"));
HTML pages may contain FORMs that have been hidden by disabling the SUBMIT element:
<FORM action="forgotPassword.jsp" method="post">
<INPUT type="hidden" name="userID" value="123">
<!-- <INPUT type="submit" value="Forgot Password"> -->
</FORM>
Another source of clues about unreferenced directories is the /robots.txt file used to provide instructions to web robots:
User-agent: *
Disallow: /Admin
Disallow: /uploads
Disallow: /backup
Disallow: /~jbloggs
Disallow: /include
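The Disallow entries can be extracted mechanically and fed into the guessing stage. A sketch, with the robots.txt content written locally purely for illustration (against a live target you would fetch it from the server instead):

```shell
# Recreate a robots.txt locally for illustration only; a real test would
# retrieve it from the target web server.
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /Admin
Disallow: /backup
EOF
# Extract the disallowed paths as candidates for further requests
paths="$(grep -i '^Disallow:' robots.txt | awk '{print $2}')"
echo "$paths"
rm robots.txt
```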
(iii) Blind guessing
In its simplest form, this involves running a list of common filenames through a request engine in an attempt to guess files and directories that exist on the server. The following netcat wrapper script will read a wordlist from stdin and perform a basic guessing attack:
#!/bin/bash
# Read candidate paths from stdin and print each path together with the
# status line of the server's response.
server=www.targetapp.com
port=80
while read url
do
echo -ne "$url\t"
# Note: HTTP formally requires CRLF line endings; most servers also
# accept the bare LF sent here.
echo -e "GET /$url HTTP/1.0\nHost: $server\n" | netcat $server $port | head -1
done | tee outputfile
Depending upon the server, GET may be replaced with HEAD for faster results. The output file specified can be grepped for interesting response codes. The response code 200 (OK) usually indicates that a valid resource has been found (provided the server does not deliver a custom "not found" page using the 200 code). But also look out for 301 (Moved), 302 (Found), 401 (Unauthorized), 403 (Forbidden) and 500 (Internal error), which may also indicate resources or directories that are worthy of further investigation.
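For example, assuming the wrapper script above wrote path/status-line pairs to outputfile, the interesting responses can be filtered out (the sample file content below is fabricated for illustration):

```shell
# Fabricated sample of the wrapper script's output: path, TAB, status line
printf 'admin\tHTTP/1.1 302 Found\nstyle.css\tHTTP/1.1 200 OK\nnosuch\tHTTP/1.1 404 Not Found\n' > outputfile
# Keep responses worth a closer look (404s are discarded)
hits="$(grep -E 'HTTP/1\.[01] (200|301|302|401|403|500)' outputfile)"
echo "$hits"
rm outputfile
```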
The basic guessing attack should be run against the webroot, and also against all directories that have been identified through other enumeration techniques. More advanced/effective guessing attacks can be performed as follows:
Identify the file extensions in use within known areas of the application (e.g. jsp, aspx, html), and use a basic wordlist appended with each of these extensions (or use a longer list of common extensions if resources permit).
For each file identified through other enumeration techniques, create a custom wordlist derived from that filename. Get a list of common file extensions (including ~, bak, txt, src, dev, old, inc, orig, copy, tmp, etc.) and use each extension before, after, and instead of the extension of the actual filename.
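A sketch of such a generator (viewdoc.jsp is an assumed example filename, and the extension list is abbreviated):

```shell
# Hypothetical example: build backup-name candidates for "viewdoc.jsp"
file="viewdoc.jsp"
base="${file%.*}"    # "viewdoc"
ext="${file##*.}"    # "jsp"
candidates="$file~"  # trailing tilde left behind by some editors
for b in bak old orig txt tmp; do
  # extension appended after, substituted for, and inserted before the real one
  candidates="$candidates $file.$b $base.$b $base.$b.$ext"
done
echo "$candidates"
```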
Note: Windows file copying operations generate filenames prefixed with "Copy of" or localized versions of this string, hence they do not change file extensions. While "Copy of" files typically do not disclose source code when accessed, they might yield valuable information if they cause errors when invoked.
(iv) Information obtained through server vulnerabilities and misconfiguration
The most obvious way in which a misconfigured server may disclose unreferenced pages is through directory listing. Request all enumerated directories to identify any which provide a directory listing. Numerous vulnerabilities have been found in individual web servers which allow an attacker to enumerate unreferenced content, for example:
Apache ?M=D directory listing vulnerability.
Various IIS script source disclosure vulnerabilities.
IIS WebDAV directory listing vulnerabilities.
(v) Use of publicly available information
Pages and functionality in Internet-facing web applications that are not referenced from within the application itself may be referenced from other public domain sources. There are various sources of these references:
Pages that used to be referenced may still appear in the archives of Internet search engines. For example, 1998results.asp may no longer be linked from a company's website, but may remain on the server and in search engine databases. This old script may contain vulnerabilities that could be used to compromise the entire site. The site: Google search operator may be used to run a query only against your domain of choice, as in: site:www.example.com. (Mis)using search engines in this way has led to a broad array of techniques, described in the Google Hacking section of this Guide, which you may find useful to hone your testing skills. Backup files are not likely to be referenced by any other files, and therefore may not have been indexed by Google; but if they lie in browsable directories, the search engine might know about them.
In addition, Google and Yahoo keep cached versions of pages found by their robots. Even if 1998results.asp has been removed from the target server, a version of its output may still be stored by these search engines. The cached version may contain references to, or clues about, additional hidden content that still remains on the server.
Content that is not referenced from within a target application may be linked to by third-party websites. For example, an application which processes online payments on behalf of third-party traders may contain a variety of bespoke functionality which can (normally) only be found by following links within the web sites of its customers.
Gray Box testing and examples
Performing gray box testing against old and backup files requires examining the files contained in the web directories served by the web server(s) of the web application infrastructure. Theoretically, to be thorough, the examination should be done by hand; however, since in most cases copies of files or backup files tend to be created using the same naming conventions, the search can easily be scripted (for example, editors leave behind backup copies by naming them with a recognizable extension or ending, and humans tend to leave behind files with .old or similar predictable extensions). A good strategy is to periodically schedule a background job that checks for files with extensions likely to identify them as copy/backup files, and to perform manual checks as well on a longer time basis.
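A sketch of such a scheduled check (the webroot path, the toy files, and the extension list are all assumptions for illustration):

```shell
# Create a toy web root for illustration; a real job would point at the
# actual directories served by the web server.
mkdir -p webroot
: > webroot/login.asp
: > webroot/login.asp.old
# Report files whose names suggest editor backups or manual copies
found="$(find webroot -type f \( -name '*.old' -o -name '*.bak' -o -name '*~' -o -name 'Copy of *' \))"
echo "$found"
rm -r webroot
```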
References
Tools
Vulnerability assessment tools tend to include checks to spot web directories having standard names (such as admin, test, backup, etc.), and to report any web directory which allows indexing. If you can't get any directory listing, you should try to check for likely backup extensions. Check for example Nessus (http://www.nessus.org), Nikto (http://www.cirt.net/code/nikto.shtml) or its new derivative Wikto (http://www.sensepost.com/research/wikto/), which also supports Google hacking based strategies.
Web spider tools: wget (http://www.gnu.org/software/wget/, http://www.interlog.com/~tcharron/wgetwin.html); Sam Spade (http://www.samspade.org); Spike proxy includes a web site crawler function (http://www.immunitysec.com/spikeproxy.html); Xenu (http://home.snafu.de/tilman/xenulink.html); curl (http://curl.haxx.se). Some of them are also included in standard Linux distributions.
Web development tools usually include facilities to identify broken links and unreferenced files.
4.3.7 Infrastructure and Application Admin Interfaces (OWASP-CM-007)
Brief Summary
Administrator interfaces may be present in the application or on the application server to allow certain users to undertake privileged activities on the site. Tests should be undertaken to reveal if and how this privileged functionality can be accessed by an unauthorized or standard user.
Description of the Issue
An application may require an administrator interface to enable a privileged user to access functionality that may make changes to how the site functions. Such changes may include:
- User account provisioning
- Site design and layout
- Data manipulation
- Configuration changes
In many instances, such interfaces are implemented with little thought of how to separate them from the normal users of the site. Testing is aimed at discovering these administrator interfaces and accessing functionality intended for the privileged users.
Black Box testing and example
The following describes vectors that may be used to test for the presence of administrative interfaces. These techniques may also be used for testing for related issues including privilege escalation and are described elsewhere in this guide in greater detail:
Directory and file Enumeration - An administrative interface may be present but not visibly available to the tester. Attempting to guess the path of the administrative interface may be as simple as requesting /admin or /administrator, etc. A tester may also have to identify the filename of the administration page. Forcibly browsing to the identified page may provide access to the interface.
Comments and links in Source - Many sites use common code that is loaded for all site users. By examining all source sent to the client, links to administrator functionality may be discovered and should be investigated.
Reviewing Server and Application Documentation - If the application server or application is deployed in its default configuration it may be possible to access the administration interface using information described in configuration or help documentation. Default password lists should be consulted if an administrative interface is found and credentials are required.
Alternative Server Port - Administration interfaces may be seen on a different port on the host than the main application. For example, Apache Tomcat's Administration interface can often be seen on port 8080.
Parameter Tampering - A GET or POST parameter or a cookie variable may be required to enable the administrator functionality. Clues to this include the presence of hidden fields such as:
<input type="hidden" name="admin" value="no">
or in a cookie:
Cookie: session_cookie; useradmin=0
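A cookie flag like the one above can be flipped before replaying the request. A minimal sketch (the cookie name, its value, and the target URL in the comment are all assumptions):

```shell
# Hypothetical cookie observed in traffic
cookie="session_cookie; useradmin=0"
# Flip the flag before resending the request with the modified header
tampered="$(printf '%s' "$cookie" | sed 's/useradmin=0/useradmin=1/')"
echo "$tampered"
# The tampered header would then be replayed, e.g. (assumed target):
#   curl -b "$tampered" http://www.example.com/admin
```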
Once an administrative interface has been discovered, a combination of the above techniques may be used to attempt to bypass authentication. If this fails, the tester may wish to attempt a brute force attack. In such an instance the tester should be aware of the potential for administrative account lockout if such functionality is present.
Gray Box testing and example
A more detailed examination of the server and application components should be undertaken to ensure hardening (i.e. administrator pages are not accessible to everyone through the use of IP filtering or other controls), and where applicable, verification that all components do not use default credentials or configurations. Source code should be reviewed to ensure that the authorization and authentication model ensures clear separation of duties between normal users and site administrators. User interface functions shared between normal and administrator users should be reviewed to ensure clear separation between the drawing of such components and information leakage from such shared functionality.
References
Default Password list: http://www.governmentsecurity.org/articles/DefaultLoginsandPasswordsforNetworkedDevices.php
4.3.8 Testing for HTTP Methods and XST (OWASP-CM-008)
Brief Summary
HTTP offers a number of methods that can be used to perform actions on the web server. Many of these methods are designed to aid developers in deploying and testing HTTP applications. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. Additionally, Cross Site Tracing (XST), a form of cross site scripting using the server's HTTP TRACE method, is examined.
Short Description of the Issue
While GET and POST are by far the most common methods used to access information provided by a web server, the Hypertext Transfer Protocol (HTTP) allows several other (and somewhat less known) methods. RFC 2616 (which describes HTTP version 1.1, today's standard) defines the following eight methods:
HEAD
GET
POST
PUT
DELETE
TRACE
OPTIONS
CONNECT
Some of these methods can potentially pose a security risk for a web application, as they allow an attacker to modify the files stored on the web server and, in some scenarios, steal the credentials of legitimate users. More specifically, the methods that should be disabled are the following:
PUT: This method allows a client to upload new files on the web server. An attacker can exploit it by uploading malicious files (e.g.: an asp file that executes commands by invoking cmd.exe), or by simply using the victim server as a file repository
DELETE: This method allows a client to delete a file on the web server. An attacker can exploit it as a very simple and direct way to deface a web site or to mount a DoS attack
CONNECT: This method could allow a client to use the web server as a proxy
TRACE: This method simply echoes back to the client whatever string has been sent to the server, and is used mainly for debugging purposes. This method, originally assumed harmless, can be used to mount an attack known as Cross Site Tracing, which was discovered by Jeremiah Grossman (see links at the bottom of the page)
If an application needs one or more of these methods, such as REST Web Services (which may require PUT or DELETE), it is important to check that their usage is properly limited to trusted users and safe conditions.
Arbitrary HTTP Methods
Arshan Dabirsiaghi (see links) discovered that many web application frameworks allowed well chosen and/or arbitrary HTTP methods to bypass an environment level access control check:
Many frameworks and languages treat "HEAD" as a "GET" request, albeit one without any body in the response. If a security constraint was set on "GET" requests such that only "authenticatedUsers" could access GET requests for a particular servlet or resource, it would be bypassed for the "HEAD" version. This allowed unauthorized blind submission of any privileged GET request
Some frameworks allowed arbitrary HTTP methods such as "JEFF" or "CATS" to be used without limitation. These were treated as if a "GET" method was issued, and again were found not to be subject to method role based access control checks on a number of languages and frameworks, again allowing unauthorized blind submission of privileged GET requests.
In many cases, code which explicitly checked for a "GET" or "POST" method would be safe.
Black Box testing and example
Discover the Supported Methods
To perform this test, we need some way to figure out which HTTP methods are supported by the web server we are examining. The OPTIONS HTTP method provides us with the most direct and effective way to do that. RFC 2616 (http://tools.ietf.org/html/rfc2616) states that "The OPTIONS method represents a request for information about the communication options available on the request/response chain identified by the Request-URI".
The testing method is extremely straightforward and we only need to fire up netcat (or telnet):
icesurfer@nightblade ~ $ nc www.victim.com 80
OPTIONS / HTTP/1.1
Host: www.victim.com
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Tue, 31 Oct 2006 08:00:29 GMT
Connection: close
Allow: GET, HEAD, POST, TRACE, OPTIONS
Content-Length: 0
icesurfer@nightblade ~ $
As we can see in the example, OPTIONS provides a list of the methods that are supported by the web server, and in this case we can see, for instance, that the TRACE method is enabled. The danger posed by this method is illustrated in the following section.
Test XST Potential
Note: in order to understand the logic and the goals of this attack, you need to be familiar with Cross Site Scripting attacks (https://www.owasp.org/index.php/Cross_site_scripting_AoC).
The TRACE method, while apparently harmless, can be successfully leveraged in some scenarios to steal legitimate users' credentials. This attack technique was discovered by Jeremiah Grossman in 2003, in an attempt to bypass the HttpOnly tag (https://www.owasp.org/index.php/HTTPOnly) that Microsoft introduced in Internet Explorer 6 SP1 to protect cookies from being accessed by JavaScript. As a matter of fact, one of the most recurring attack patterns in Cross Site Scripting is to access the document.cookie object and send it to a web server controlled by the attacker, so that he/she can hijack the victim's session. Tagging a cookie as HttpOnly forbids JavaScript from accessing it, protecting it from being sent to a third party. However, the TRACE method can be used to bypass this protection and access the cookie even in this scenario.
As mentioned before, TRACE simply returns any string that is sent to the web server. In order to verify its presence (or to double-check the results of the OPTIONS request shown above), we can proceed as shown in the following example:
icesurfer@nightblade ~ $ nc www.victim.com 80
TRACE / HTTP/1.1
Host: www.victim.com
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Tue, 31 Oct 2006 08:01:48 GMT
Connection: close
Content-Type: message/http
Content-Length: 39
TRACE / HTTP/1.1
Host: www.victim.com
As we can see, the response body is exactly a copy of our original request, meaning that our target allows this method. Now, where is the danger lurking? If we instruct a browser to issue a TRACE request to the web server, and this browser has a cookie for that domain, the cookie will be automatically included in the request headers, and will therefore be echoed back in the resulting response. At that point, the cookie string will be accessible to JavaScript, and it will finally be possible to send it to a third party even when the cookie is tagged as HttpOnly.
There are multiple ways to make a browser issue a TRACE request, such as the XMLHTTP ActiveX control in Internet Explorer and XMLDOM in Mozilla and Netscape. However, for security reasons, the browser is allowed to start a connection only to the domain where the hostile script resides. This is a mitigating factor, as the attacker needs to combine the TRACE method with another vulnerability in order to mount the attack. Basically, an attacker has two ways to successfully launch a Cross Site Tracing attack:
1. Leveraging another server-side vulnerability: the attacker injects the hostile JavaScript snippet that contains the TRACE request into the vulnerable application, as in a normal Cross Site Scripting attack
2. Leveraging a client-side vulnerability: the attacker creates a malicious website that contains the hostile JavaScript snippet and exploits some cross-domain vulnerability of the browser of the victim, in order to make the JavaScript code successfully perform a connection to the site that supports the TRACE method and that originated the cookie that the attacker is trying to steal.
More detailed information, together with code samples, can be found in the original whitepaper written by Jeremiah Grossman.
Black Box Testing of HTTP method tampering
Testing for HTTP method tampering is essentially the same as testing for XST.
Testing for arbitrary HTTP methods
Find a page you'd like to visit that has a security constraint such that it would normally force a 302 redirect to a login page or forces a login directly. The test URL in this example works like this - as do many web applications. However, if you obtain a "200" response that is not a login page, it is possible to bypass authentication and thus authorization.
[rapidoffenseunit:~] vanderaj% nc www.example.com 80
JEFF / HTTP/1.1
Host: www.example.com
HTTP/1.1 200 OK
Date: Mon, 18 Aug 2008 22:38:40 GMT
Server: Apache
Set-Cookie: PHPSESSID=K53QW...
If your framework or firewall or application does not support the "JEFF" method, it should issue an error page (preferably a "405 Method Not Allowed" or "501 Not Implemented" error page). If it services the request, it is vulnerable to this issue.
If you feel that the system is vulnerable to this issue, issue CSRF-like attacks to exploit the issue more fully:
FOOBAR /admin/createUser.php?member=myAdmin
JEFF /admin/changePw.php?member=myAdmin&passwd=foo123&confirm=foo123
CATS /admin/groupEdit.php?group=Admins&member=myAdmin&action=add
With some luck, using the above three commands - modified to suit the application under test and testing requirements - a new user would be created, assigned a password, and made an admin.
Testing for HEAD access control bypass
Find a page you'd like to visit that has a security constraint such that it would normally force a 302 redirect to a login page or forces a login directly. The test URL in this example works like this - as do many web applications. However, if you obtain a "200" response that is not a login page, it is possible to bypass authentication and thus authorization.
[rapidoffenseunit:~] vanderaj% nc www.example.com 80
HEAD /admin HTTP/1.1
Host: www.example.com
HTTP/1.1 200 OK
Date: Mon, 18 Aug 2008 22:44:11 GMT
Server: Apache
Set-Cookie: PHPSESSID=pKi...; path=/; HttpOnly
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: adminOnlyCookie1=...; expires=Tue, 18-Aug-2009 22:44:31 GMT; domain=www.example.com
Set-Cookie: adminOnlyCookie2=...; expires=Mon, 18-Aug-2008 22:54:31 GMT; domain=www.example.com
Set-Cookie: adminOnlyCookie3=...; expires=Sun, 19-Aug-2007 22:44:30 GMT; domain=www.example.com
Content-Language: EN
Connection: close
Content-Type: text/html; charset=ISO-8859-1
If you get a "405 Method not allowed" or "501 Method Unimplemented", the application/framework/language/system/firewall is working correctly. If a "200" response code comes back, and the response contains no body, it's likely that the application has processed the request without authentication or authorization and further testing is warranted.
If you feel that the system is vulnerable to this issue, issue CSRF-like attacks to exploit the issue more fully:
HEAD /admin/createUser.php?member=myAdmin
HEAD /admin/changePw.php?member=myAdmin&passwd=foo123&confirm=foo123
HEAD /admin/groupEdit.php?group=Admins&member=myAdmin&action=add
With some luck, using the above three commands - modified to suit the application under test and testing requirements - a new user would be created, assigned a password, and made an admin, all using blind request submission.
Gray Box testing and example
The testing in a Gray Box scenario follows the same steps as a Black Box scenario.
References
Whitepapers
RFC 2616: "Hypertext Transfer Protocol -- HTTP/1.1" - http://tools.ietf.org/html/rfc2616
RFC 2109 and RFC 2965: "HTTP State Management Mechanism" - http://tools.ietf.org/html/rfc2109, http://tools.ietf.org/html/rfc2965
Jeremiah Grossman: "Cross Site Tracing (XST)" - http://www.cgisecurity.com/whitehat-mirror/WH-WhitePaper_XST_ebook.pdf
Amit Klein: "XS(T) attack variants which can, in some cases, eliminate the need for TRACE" - http://www.securityfocus.com/archive/107/308433
Arshan Dabirsiaghi: "Bypassing VBAAC with HTTP Verb Tampering" - http://www.aspectsecurity.com/documents/Bypassing_VBAAC_with_HTTP_Verb_Tampering.pdf
Tools
NetCat - http://www.vulnwatch.org/netcat
4.4 Authentication Testing
Authentication (Greek: αυθεντικός = real or genuine, from 'authentes' = author) is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the thing are true. Authenticating an object may mean confirming its provenance, whereas authenticating a person often consists of verifying her identity. Authentication depends upon one or more authentication factors. In computer security, authentication is the process of attempting to verify the digital identity of the sender of a communication. A common example of such a process is the logon process. Testing the authentication schema means understanding how the authentication process works and using that information to circumvent the authentication mechanism.
4.4.1 Credentials transport over an encrypted channel (OWASP-AT-001)
Here, the tester will try to understand whether the data that users enter into the web form in order to log into a web site are transmitted using secure protocols that protect them from an attacker.
4.4.2 Testing for user enumeration (OWASP-AT-002)
The scope of this test is to verify whether it is possible to collect a set of valid users by interacting with the authentication mechanism of the application. This test is useful for brute force testing, in which we verify whether, given a valid username, it is possible to find the corresponding password.
4.4.3 Testing for Guessable (Dictionary) User Account (OWASP-AT-003)
Here we test whether there are default user accounts or guessable username/password combinations (dictionary testing).
4.4.4 Brute Force Testing (OWASP-AT-004)
When a dictionary type attack fails, a tester can attempt to use brute force methods to gain authentication. Brute force testing is not easy for testers to accomplish because of the time required and the possible lockout of the tester.
4.4.5 Testing for bypassing authentication schema (OWASP-AT-005)
Other passive testing methods attempt to bypass the authentication schema by recognizing that not all of the application's resources are adequately protected. The tester can access these resources without authentication.
4.4.6 Testing for vulnerable remember password and pwd reset (OWASP-AT-006)
Here we test how the application manages the "password forgotten" process. We also check whether the application allows the user to store the password in the browser (the "remember password" function).
4.4.7 Testing for Logout and Browser Cache Management (OWASP-AT-007)
Here we check that the logout and caching functions are properly implemented.
4.4.8 Testing for CAPTCHA (OWASP-AT-008)
CAPTCHA ("Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge-response test used by many web applications to ensure that the response is not generated by a computer. CAPTCHA implementations are often vulnerable to various kinds of attacks even if the generated CAPTCHA is unbreakable. This section will help you identify these kinds of attacks.
4.4.9 Testing Multiple Factors Authentication (OWASP-AT-009)
Testing Multiple Factors Authentication covers the following scenarios: one-time password (OTP) generator tokens; crypto devices like USB tokens or smart cards equipped with X.509 certificates; random OTPs sent via SMS; and personal information that only the legitimate user is supposed to know [OUTOFWALLET].
4.4.10 Testing for Race Conditions (OWASP-AT-010)
A race condition is a flaw that produces an unexpected result when the timing of actions impacts other actions. An example may be seen in a multithreaded application where actions are being performed on the same data. Race conditions, by their very nature, are difficult to test for.
4.4.1 Credentials transport over an encrypted channel (OWASP-AT-001)
Brief Summary
Testing for credentials transport means verifying that the user's authentication data are transferred via an encrypted channel to avoid being intercepted by malicious users. The analysis focuses simply on trying to understand whether the data travel unencrypted from the web browser to the server, or whether the web application takes the appropriate security measures by using a protocol like HTTPS. The HTTPS protocol is built on TLS/SSL to encrypt the data that is transmitted and to ensure that the user is being sent to the desired site. Clearly, the fact that traffic is encrypted does not necessarily mean that it is completely safe. The security also depends on the encryption algorithm used and the robustness of the keys that the application is using, but this particular topic will not be addressed in this section. For a more detailed discussion on testing the safety of TLS/SSL channels, refer to the chapter Testing for SSL-TLS. Here, the tester will just try to understand whether the data that users put into web forms, for example in order to log into a web site, are transmitted using secure protocols that protect them from an attacker. To do this we will consider various examples.
Description of the Issue
Nowadays, the most common example of this issue is the login page of a web application. The tester should verify that the user's credentials are transmitted via an encrypted channel. In order to log into a web site, the user usually has to fill in a simple form that transmits the inserted data with the POST method. What is less obvious is that this data can be passed using the HTTP protocol, which means in a non-secure way, or using HTTPS, which encrypts the data. To further complicate things, there is the possibility that the site has the login page accessible via HTTP (making us believe that the transmission is insecure), but then actually sends the data via HTTPS. This test is done to be sure that an attacker cannot retrieve sensitive information by simply sniffing the network with a sniffer tool.
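To automate the check described above, a tester can inspect where the login form actually posts its data. The following Python sketch (the names, such as login_form_uses_https, are our own and not part of any tool) parses the forms on a page and flags any that would submit over plain HTTP:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class FormActionFinder(HTMLParser):
    """Collects the action attribute of every <form> on the page."""
    def __init__(self):
        super().__init__()
        self.actions = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.actions.append(dict(attrs).get("action", ""))

def login_form_uses_https(page_url, html):
    """Return True only if every form on the page submits over HTTPS.

    Relative actions are resolved against the page URL, so a form on an
    http:// page that posts to a relative path is flagged as insecure."""
    parser = FormActionFinder()
    parser.feed(html)
    for action in parser.actions:
        if urlparse(urljoin(page_url, action)).scheme != "https":
            return False
    return bool(parser.actions)

# A login page served over HTTP whose form posts to an absolute HTTPS URL:
page = '<form action="https://www.example.com/login.do" method="post"></form>'
print(login_form_uses_https("http://www.example.com/index.jsp", page))  # True
```

Note that this mirrors the cases below: an HTTP page posting to an absolute HTTPS action is reported as secure in transit, while a relative action on an HTTP page is not.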
Black Box testing and example
In the following examples we will use WebScarab to capture packet headers and inspect them. You can use any web proxy that you prefer.
Case study: Sending data with POST method through HTTP
Suppose that the login page presents a form with fields User, Pass, and the Submit button to authenticate and give access to the application. If we look at the header of our request with WebScarab, we get something like this:
POST http://www.example.com/AuthenticationServlet HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.14) Gecko/20080404
Accept: text/xml,application/xml,application/xhtml+xml
Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://www.example.com/index.jsp
Cookie: JSESSIONID=LVrRRQQXgwyWpW7QMnS49vtW1yBdqn98CGlkP4jTvVCGdyPkmn3S!
Content-Type: application/x-www-form-urlencoded
Content-length: 64
delegated_service=218&User=test&Pass=test&Submit=SUBMIT
From this example the tester can understand that the POST sends the data to the page www.example.com/AuthenticationServlet using plain HTTP. So, in this case, data are transmitted without encryption and a malicious user could read our username and password by simply sniffing the network with a tool like Wireshark.
Case study: Sending data with POST method through HTTPS
Suppose that our web application uses the HTTPS protocol to encrypt the data we are sending (or at least the data relating to authentication). In this case, when we try to access the login page and authenticate, the header of our POST request would be similar to the following:
POST https://www.example.com:443/cgi-bin/login.cgi HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.14) Gecko/20080404
Accept: text/xml,application/xml,application/xhtml+xml,text/html
Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: https://www.example.com/cgi-bin/login.cgi
Cookie: language=English;
Content-Type: application/x-www-form-urlencoded
Content-length: 50
Command=Login&User=test&Pass=test
We can see that the request is addressed to www.example.com:443/cgi-bin/login.cgi using the HTTPS protocol. This ensures that our data are sent through an encrypted channel and that they are not readable by other people.
Case study: sending data with POST method via HTTPS on a page reachable via HTTP
Now, suppose that the web page is reachable via HTTP and that only the data sent from the authentication form are transmitted via HTTPS. In that case our data are still transmitted in a secure way through encryption. This situation occurs, for example, on the portal of a big company that offers various information and services that are publicly available, without identification, but which also has a private section accessible from the home page through a login. So when we try to log in, the header of our request will look like the following example:
POST https://www.example.com:443/login.do HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.14) Gecko/20080404
Accept: text/xml,application/xml,application/xhtml+xml,text/html
Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://www.example.com/homepage.do
Cookie: SERVTIMSESSIONID=s2JyLkvDJ9ZhX3yr5BJ3DFLkdphH0QNSJ3VQB6pLhjkW6F
Content-Type: application/x-www-form-urlencoded
Content-length: 45
User=test&Pass=test&portal=ExamplePortal
We can see that our request is addressed to www.example.com:443/login.do using HTTPS. But if we look at the Referer field in the header (the page from which we came), it is www.example.com/homepage.do, which is accessible via plain HTTP. So, in this case, we see no lock inside our browser window telling us that we are using a secure connection, but, in reality, we are sending data via HTTPS. This ensures that no other people can read the data that we are sending.
Case study: Sending data with GET method through HTTPS
In this last example, suppose that the application transfers data using the GET method. This method should never be used in a form that transmits sensitive data such as username and password, because the data are displayed in cleartext in the URL, and this entails a whole set of security issues. So this example is purely demonstrative; in reality, it is strongly suggested to use the POST method instead. This is because when the GET method is used, the URL it requests is easily available from, for example, the server logs, exposing your sensitive data to information leakage.
GET https://www.example.com/success.html?user=test&pass=test HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.14) Gecko/20080404
Accept: text/xml,application/xml,application/xhtml+xml,text/html
Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: https://www.example.com/form.html
If-Modified-Since: Mon, 30 Jun 2008 07:55:11 GMT
If-None-Match: "43a01-5b-4868915f"
You can see that the data is transferred in cleartext in the URL and not in the body of the message as before. But we must consider that TLS/SSL is a level 5 protocol, a lower level than HTTP, so the whole HTTP packet is still encrypted and the URL is unreadable to an attacker on the wire. Still, it is not a good practice to use the GET method in these cases, because the information contained in the URL can be stored in many places, such as proxy and web server logs, compromising the privacy of the user's credentials.
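The leakage described above is easy to demonstrate. The hypothetical access-log line below shows how credentials sent with GET end up on disk, where a simple script can recover them:

```python
from urllib.parse import urlparse, parse_qs

# One line from a hypothetical web server access log: because the form
# used GET, the credentials were written to disk along with the URL.
log_line = ('10.0.0.5 - - [30/Jun/2008:07:55:11 +0000] '
            '"GET /success.html?user=test&pass=test HTTP/1.1" 200 91')

request_path = log_line.split('"')[1].split()[1]  # the requested URL
params = parse_qs(urlparse(request_path).query)
print(params["user"][0], params["pass"][0])  # test test
```

Anyone with read access to the server, proxy, or browser-history logs can run the same extraction, which is why POST (with HTTPS) is preferred for credentials.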
Gray Box testing and example
Talk with the developers of the application and try to understand if they are aware of the differences between the HTTP and HTTPS protocols and why they should use HTTPS for transmitting sensitive information. Then, check with them that HTTPS is used in every sensitive transmission, like those in login pages, to prevent unauthorized users from reading the data.
References
Whitepapers
HTTP/1.1: Security Considerations - http://www.w3.org/Protocols/rfc2616/rfc2616-sec15.html
Tools
WebScarab: https://www.owasp.org/index.php/OWASP_WebScarab_Project
4.4.2 Testing for user enumeration (OWASP-AT-002)
Brief Summary
The scope of this test is to verify whether it is possible to collect a set of valid usernames by interacting with the authentication mechanism of the application. This test is useful for brute force testing, in which we verify whether, given a valid username, it is possible to find the corresponding password. Often, web applications reveal when a username exists on the system, either as a consequence of a misconfiguration or as a design decision. For example, sometimes, when we submit wrong credentials, we receive a message that states that either the username is present on the system or the provided password is wrong. The information obtained can be used by an attacker to gain a list of users on the system. This information can be used to attack the web application, for example, through a brute force or default username/password attack.
Description of the Issue
The tester should interact with the authentication mechanism of the application to understand whether sending particular requests causes the application to answer in different manners. This issue exists because the information released by the web application or web server when we provide a valid username differs from the information released when we use an invalid one.
In some cases, we receive a message that reveals if the provided credentials are wrong because an invalid username or an invalid password was used. Sometimes, we can enumerate the existing users by sending a username and an empty password.
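As a sketch of this oracle, the snippet below maps the kinds of error messages quoted later in this section back to what they leak about the username; the classification phrases are illustrative examples, not taken from any specific product:

```python
def username_oracle(error_message):
    """Classify what a verbose login error leaks about the username.

    The matched phrases are hypothetical examples of the messages
    discussed in this section."""
    msg = error_message.lower()
    if "invalid password" in msg or "password is not correct" in msg:
        return "username exists"
    if "user not recognized" in msg or "invalid account" in msg:
        return "username does not exist"
    return "nothing leaked"

print(username_oracle("Login for User foo: invalid password"))
print(username_oracle("Login failed for User foo: invalid Account"))
print(username_oracle("Credentials submitted are not valid"))
```

Run against a list of candidate usernames, a classifier like this turns differing error messages into a list of valid accounts, which is exactly why the application should return one uniform failure message.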
Black Box testing and example
In black box testing, we know nothing about the specific application, usernames, application logic, error messages on the login page, or password recovery facilities. If the application is vulnerable, we receive a response message that reveals, directly or indirectly, some information useful for enumerating users.

HTTP Response message
Testing for valid user/right password
Record the server answer when you submit a valid userID and valid password.
Result Expected: Using WebScarab, notice the information retrieved from this successful authentication (HTTP 200 Response, length of the response).
Testing for valid user/wrong password
Now, the tester should try to insert a valid userID and a wrong password and record the error message generated by the application.
Result Expected: From the browser we expect a message similar to the following: [Screenshot: "Authentication Failed" error message]
or something like the following: [Screenshot: "No Configuration Found" error message]
or any message that reveals the existence of the user, for instance a message similar to:
Login for User foo: invalid password
Using WebScarab, notice the information retrieved from this unsuccessful authentication attempt (HTTP 200 Response, length of the response).

Testing for a nonexistent username
Now, the tester should try to insert an invalid userID and a wrong password and record the server answer (you should be confident that the username is not valid in the application). Record the error message and the server answer.
Result Expected: If we enter a nonexistent userID, we may receive a message similar to: [Screenshot: "User is not active" error message] or a message like the following one:
Login failed for User foo: invalid Account
Generally, the application should respond with the same error message and length to the different wrong requests. If you notice that the responses are not the same, you should investigate and find out the key difference between the two responses. For example:
Client request: Valid user/wrong password --> Server answer:'The password is not correct'
Client request: Wrong user/wrong password --> Server answer:'User not recognized'
The above responses let the client understand that for the first request we have a valid username. So we can interact with the application by requesting a set of possible userIDs and observing the answer. Looking at the second server response, we understand in the same way that we do not hold a valid username. So we can interact in the same manner and create a list of valid userIDs by looking at the server answers.

Other ways to enumerate users
We can enumerate users in several ways, such as:

Analyzing the error code received on login pages
Some web applications release a specific error code or message that we can analyze.
Analyzing URLs and URL redirections
For example:
http://www.foo.com/err.jsp?User=baduser&Error=0
http://www.foo.com/err.jsp?User=gooduser&Error=2
As we can see above, when we provide a userID and password to the web application, we see a message indicating that an error has occurred in the URL. In the first case we provided a bad userID and a bad password. In the second, a good userID and a bad password, so we can identify a valid userID.
URI Probing
Sometimes a web server responds differently depending on whether it receives a request for an existing directory or not. For instance, in some portals every user is associated with a directory; if we try to access an existing directory, we could receive a web server error. Very common errors that we can receive from the web server are:
403 Forbidden error code
and
404 Not found error code
Example
http://www.foo.com/account1 - we receive from the web server: 403 Forbidden
http://www.foo.com/account2 - we receive from the web server: 404 File Not Found
In the first case the user exists, but we cannot view the web page; in the second case the user "account2" does not exist. By collecting this information we can enumerate the users.
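The 403-versus-404 logic above is easy to script. In this illustrative sketch, probe_results stands in for the status codes actually returned by the web server:

```python
def account_exists(status_code):
    """Interpret the probe result for a per-user directory.

    403: the directory exists but access is forbidden -> user likely exists.
    404: no such directory -> user likely does not exist."""
    if status_code == 403:
        return True
    if status_code == 404:
        return False
    return None  # any other code needs manual analysis

# Status codes as observed for each probed directory (hypothetical data):
probe_results = {"account1": 403, "account2": 404}
valid = [user for user, code in probe_results.items() if account_exists(code)]
print(valid)  # ['account1']
```

In practice the dictionary would be filled by issuing one request per candidate directory and recording the response code.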
Analyzing web page titles
We can receive useful information from the title of the web page, where we can find a specific error code or message that reveals whether the problem lies with the username or the password. For instance, if we cannot authenticate to an application and receive a web page whose title is similar to:
Invalid user
Invalid authentication
Analyzing messages received from recovery facilities
When we use a recovery facility, a vulnerable application could return a message that reveals whether a username exists or not.
For example, messages similar to the following:
Invalid username: "E-mail address is not valid" or "The specified user was not found"
Valid username: "Your recovery password has been successfully sent"
Friendly 404 Error Message
When we request a user directory that does not exist, we do not always receive a 404 error code. Instead, we may receive a 200 OK with an image; in this case we can assume that when we receive that specific image, the user does not exist. This logic can be applied to other web server responses; the trick is a good analysis of the web server and web application messages.

Guessing Users
In some cases the userIDs are created with specific policies of the administrator or company. For example, we can see users with userIDs created in sequential order:
CN000100, CN000101, ...
Sometimes the usernames are created with a REALM alias and then sequential numbers:
R1001 (user 001 for REALM1), R2001 (user 001 for REALM2)
Other possibilities are userIDs associated with credit card numbers, or in general numbers following a pattern. For the above cases we can create simple shell scripts that compose userIDs and submit requests with a tool like wget to automate the web queries and discern valid userIDs. To create a script we can also use Perl and curl.
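As a sketch of such a generator (using Python rather than shell/wget, purely for illustration), the function below composes sequential userIDs ready to be fed, one per line, to a query tool:

```python
def sequential_ids(prefix, start, count, width=6):
    """Compose userIDs such as CN000100, CN000101, ..., ready to be fed
    to a query tool such as wget or curl."""
    return [f"{prefix}{n:0{width}d}" for n in range(start, start + count)]

print(sequential_ids("CN", 100, 3))     # ['CN000100', 'CN000101', 'CN000102']
print(sequential_ids("R", 1001, 2, 4))  # ['R1001', 'R1002']
```

Each generated ID would then be submitted to the application, with the response analyzed using one of the oracles described in this section.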
Again, we can guess a username from the information received from an LDAP query or from Google information gathering, for example, on a specific domain. Google can help find domain users through specific queries or through a simple shell script or tool.
For other information on guessing userIDs, see the next section, 4.4.3 Testing for Guessable (Dictionary) User Account. Attention: by enumerating user accounts, you risk locking out accounts after a predefined number of failed probes (based on application policy). Also, sometimes, our IP address can be banned by dynamic rules on the application firewall.
Gray Box testing and example
Testing for authentication error messages
Verify that the application answers in the same manner for every client request that produces a failed authentication. For this issue the black box testing and gray box testing have the same concept, based on the analysis of messages or error codes received from the web application.
Result Expected: The application should answer in the same manner for every failed attempt of authentication. For example:
Credentials submitted are not valid
References
Marco Mella, Sun Java Access & Identity Manager Users enumeration: http://www.aboutsecurity.net
Username Enumeration Vulnerabilities: http://www.gnucitizen.org/blog/username-enumeration-vulnerabilities
Tools
WebScarab: https://www.owasp.org/index.php/OWASP_WebScarab_Project
CURL: http://curl.haxx.se/
PERL: http://www.perl.org
Sun Java Access & Identity Manager users enumeration tool: http://www.aboutsecurity.net
4.4.3 Default or guessable (dictionary) user account (OWASP-AT-003)
Brief Summary
Today's web applications typically run on popular open source or commercial software that is installed on servers and requires configuration or customization by the server administrator. In addition, most of today's hardware appliances, i.e., network routers and database servers, offer web-based configuration or administrative interfaces.

Often these applications are not properly configured and the default credentials provided for initial authentication and configuration are never updated. In addition, it is typical to find generic accounts, left over from testing or administration, that use common usernames and passwords and are left enabled in the application and its infrastructure.

These default username and password combinations are widely known by penetration testers and malicious attackers, who can use them to gain access to various types of custom, open source, or commercial applications. In addition, weak password policy enforcement seen in many applications allows users to sign up using easy to guess usernames and passwords, and may also not allow password changes to be undertaken.
Description of the Issue
The root cause of this problem can be identified as:
Inexperienced IT personnel, who are unaware of the importance of changing default passwords on installed infrastructure components.
Programmers who leave backdoors to easily access and test their application and later forget to remove them.
Application administrators and users that choose an easy username and password for themselves.
Applications with built-in, non-removable default accounts with a pre-set username and password.
Applications which leak information as to the validity of usernames during either authentication attempts, password resets, or account signup.
An additional problem stems from the use of blank passwords, which are simply the result of a lack of security awareness or a desire to simplify administration.
Black Box testing and example
In black box testing the tester knows nothing about the application, its underlying infrastructure, and any username or password policies. In reality this is often not the case, and some information about the application is known. If so, simply skip the steps that refer to obtaining information you already have.
When testing a known application interface, for example a Cisco router web interface or a Weblogic administrator portal, check that the known usernames and passwords for these devices do not result in successful authentication. Common credentials for many systems can be found using a search engine or by using one of the sites listed in the Further Reading section. When facing applications to which we do not have a list of default and common user accounts, or when common accounts do not work, we can perform manual testing:
Note that the application being tested may have an account lockout, and multiple password guess attempts with a known username may cause the account to be locked. If it is possible to lock the administrator account, it may be troublesome for the system administrator to reset it.
Many applications have verbose error messages that inform the site users as to the validity of entered usernames. This information will be helpful when testing for default or guessable user accounts. Such functionality can be found, for example, on the login page, password reset and forgotten password page, and sign up page. More information on this can be found in the section Testing for user enumeration.
Try the following usernames - "admin", "administrator", "root", "system", "guest", "operator", or "super". These are popular among system administrators and are often used. Additionally you could try "qa", "test", "test1", "testing", and similar names. Attempt any combination of the above in both the username and the password fields. If the application is vulnerable to username enumeration, and you successfully manage to identify any of the above usernames, attempt passwords in a similar manner. In addition try an empty password or one of the following "password", "pass123", "password123", "admin", or "guest" with the above accounts or any other enumerated accounts. Further permutations of the above can also be attempted. If these passwords fail, it may be worth using a common username and password list and attempting multiple requests against the application. This can, of course, be scripted to save time.
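The scripted approach suggested above can be sketched as follows; attempt_login is a placeholder that the tester would implement, e.g., as an HTTP POST to the login page, and the lambda below merely simulates a vulnerable target:

```python
import itertools

# Usernames and passwords suggested in this section; extend with a
# larger wordlist as needed.
COMMON_USERS = ["admin", "administrator", "root", "system", "guest",
                "operator", "super"]
COMMON_PASSWORDS = ["", "password", "pass123", "password123", "admin", "guest"]

def try_defaults(attempt_login):
    """attempt_login(user, pwd) -> bool is supplied by the tester, e.g. a
    wrapper around an HTTP POST to the login page. Stops at the first hit.
    Mind account-lockout policies before running this against a real target."""
    for user, pwd in itertools.product(COMMON_USERS, COMMON_PASSWORDS):
        if attempt_login(user, pwd):
            return user, pwd
    return None

# Stand-in for a real login request, for illustration only:
found = try_defaults(lambda u, p: (u, p) == ("admin", "admin"))
print(found)  # ('admin', 'admin')
```

Tools such as THC Hydra or Burp Intruder (listed under Tools below) implement this loop far more efficiently; the sketch only shows the structure of the test.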
Application administrative users are often named after the application or organization. This means if you are testing an application named "Obscurity", try using obscurity/obscurity or any other similar combination as the username and password.
When performing a test for a customer, attempt using names of contacts you have received as usernames with any common passwords.
Viewing the User Registration page may help determine the expected format and length of the application usernames and passwords. If a user registration page does not exist, determine if the organization uses a standard naming convention for user names such as their email address or the name before the "@" in the email.
Attempt using all the above usernames with blank passwords.
Review the page source and javascript either through a proxy or by viewing the source. Look for any references to users and passwords in the source. For example "If username='admin' then starturl=/admin.asp else /index.asp" (for a successful login vs a failed login). Also, if you have a valid account, then login and view every request and response for a valid login vs an invalid login, such as additional hidden parameters, interesting GET request (login=yes), etc.
Look for account names and passwords written in comments in the source code. Also look in backup directories, etc for source code that may contain comments of interest.
Try to extrapolate from the application how usernames are generated. For example, can a user create their own username or does the system create an account for the user based on some personal information or a predictable sequence? If the application does create its own accounts in a predictable sequence, such as user7811, try fuzzing all possible accounts recursively. If you can identify a different response from the application when using a valid username and a wrong password, then you can try a brute force attack on the valid username (or quickly try any of the identified common passwords above or in the reference section).
If the application creates its own passwords for new users, whether or not the username is created by the application or by the user, then try to determine if the password is predictable. Try to create many new accounts in quick succession to compare and determine if the passwords are predictable. If predictable, then try to correlate these with the usernames, or any enumerated accounts, and use them as a basis for a brute force attack.
Result Expected:Successful authentication to the application or system being tested.
Gray Box testing and example
The following steps rely on an entirely Gray Box approach. If only some of the information is available to you, refer to black box testing to fill the gaps.
Talk to the IT personnel to determine passwords they use for administrative access and how administration of the application is undertaken.
Examine the password policy for the application, checking whether username and passwords are complex, difficult to guess, and not related to the application name, person name, or administrative names ("system").
Examine the user database for default names, application names, and easily guessed names as described in the Black Box testing section. Check for empty password fields.
Examine the code for hard coded usernames and passwords.
Check for configuration files that contain usernames and passwords.
Result Expected:Successful authentication to the application or system being tested.
References
Whitepapers
CIRT: http://www.cirt.net/passwords
Government Security - Default Logins and Passwords for Networked Devices: http://www.governmentsecurity.org/articles/DefaultLoginsandPasswordsforNetworkedDevices.php
Virus.org: http://www.virus.org/default-password/
Tools
Burp Intruder: http://portswigger.net/intruder/
THC Hydra: http://www.thc.org/thc-hydra/
Brutus: http://www.hoobie.net/brutus/
4.4.4 Testing For Brute Force (OWASP-AT-004)
Brief Summary
Brute forcing consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement. In web application testing, the problem we most often face is the need for a valid user account to access the inner part of the application. Therefore, we are going to check different types of authentication schemas and the effectiveness of different brute force attacks.
Description of the Issue
A great majority of web applications provide a way for users to authenticate themselves. By having knowledge of a user's identity it is possible to create protected areas or, more generally, to have the application behave differently upon the logon of different users. There are several methods for a user to authenticate to a system, like certificates, biometric devices, and OTP (One Time Password) tokens, but in web applications we usually find a combination of user ID and password. Therefore it is possible to carry out an attack to retrieve a valid user account and password by trying to enumerate many (e.g., a dictionary attack) or all of the possible candidates.
After a successful brute force attack, a malicious user could have access to:
Confidential information / data;
Private sections of a web application could disclose confidential documents, users' profile data, financial status, bank details, users' relationships, etc.
Administration panels;
These sections are used by webmasters to manage (modify, delete, add) web application content, manage user provisioning, assign different privileges to the users, etc.
Availability of further attack vectors;
Private sections of a web application could hide dangerous vulnerabilities and contain advanced functionalities not available to public users.
Black Box testing and example
To leverage different brute-force attacks, it is important to discover the type of authentication method used by the application, because the techniques and tools to be used may change accordingly.
Discovering Authentication Methods
Unless an entity decides to apply a sophisticated web authentication scheme, the two most commonly seen methods are as follows:
HTTP Authentication;
Basic Access Authentication
Digest Access Authentication
HTML Form-based Authentication;
The following sections provide some good information on identifying the authentication mechanism employed during a black box test.
HTTP authentication
There are two native HTTP access authentication schemes available to an organisation: Basic and Digest.
Basic Access Authentication
Basic Access Authentication assumes the client will identify itself with a login name (e.g. "owasp") and password (e.g. "password"). When the client browser initially accesses a site using this scheme, the web server replies with a 401 response containing a WWW-Authenticate header with the value Basic and the name of the protected realm (e.g. WWW-Authenticate: Basic realm="wwwProtectedSite"). The client browser then prompts the user for the login name and password for that realm, and responds to the web server with an Authorization header containing the value Basic and the base64-encoded concatenation of the login name, a colon, and the password (e.g. Authorization: Basic b3dhc3A6cGFzc3dvcmQ=). Unfortunately, these credentials can be trivially decoded should an attacker sniff the transmission.
Request and Response Test:
1. Client sends standard HTTP request for resource:
GET /members/docs/file.pdf HTTP/1.1
Host: target
2. The web server states that the requested resource is located in a protected directory.
3. Server Sends Response with HTTP 401 Authorization Required:
HTTP/1.1 401 Authorization Required
Date: Sat, 04 Nov 2006 12:52:40 GMT
WWW-Authenticate: Basic realm="User Realm"
Content-Length: 401
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
4. Browser displays challenge pop-up for username and password data entry.
5. Client Resubmits HTTP Request with credentials included:
GET /members/docs/file.pdf HTTP/1.1
Host: target
Authorization: Basic b3dhc3A6cGFzc3dvcmQ=
6. Server compares client information to its credentials list.
7. If the credentials are valid the server sends the requested content. If authorization fails the server resends HTTP status code 401 in the response header. If the user clicks Cancel the browser will likely display an error message.
If an attacker is able to intercept the request from step 5, the string
b3dhc3A6cGFzc3dvcmQ=
could simply be base64 decoded as follows (Base64 Decoded):
owasp:password
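The decoding step can be reproduced in a few lines of Python (a sketch using only the standard library; the token is the one captured in step 5 above):

```python
import base64

# the Authorization header value captured in step 5
token = "b3dhc3A6cGFzc3dvcmQ="

# Basic credentials are just base64("login:password") -- no secrecy at all
login, password = base64.b64decode(token).decode("ascii").split(":", 1)
print(login, password)  # owasp password
```

This illustrates why Basic authentication is only acceptable over an encrypted transport: base64 is an encoding, not encryption.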
Digest Access Authentication
Digest Access Authentication improves upon the security of Basic Access Authentication by using a one-way cryptographic hash algorithm (MD5) to obscure the authentication data and, secondly, by adding a single-use (connection-unique) nonce value set by the web server. This value is used by the client browser in the calculation of the hashed password response. While the password is obscured by the cryptographic hashing, and the nonce value mitigates the threat of a replay attack, the login name is still submitted in cleartext.
Request and Response Test:
1. Here is an example of the initial Response header when handling an HTTP Digest target:
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Digest realm="OwaspSample",
nonce="Ny8yLzIwMDIgMzoyNjoyNCBQTQ",
opaque="0000000000000000",
stale=false,
algorithm=MD5,
qop="auth"
2. The subsequent request, with valid credentials, would look like this:
GET /example/owasp/test.asmx HTTP/1.1
Accept: */*
Authorization: Digest username="owasp",
realm="OwaspSample",
qop="auth",
algorithm="MD5",
uri="/example/owasp/test.asmx",
nonce="Ny8yLzIwMDIgMzoyNjoyNCBQTQ",
nc=00000001,
cnonce="c51b5139556f939768f770dab8e5277a",
opaque="0000000000000000",
response="2275a9ca7b2dadf252afc79923cd3823"
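The response field above is computed as nested MD5 hashes, per RFC 2617, for qop="auth". The sketch below reproduces the calculation; the password "password" is an assumption for illustration, so the resulting digest is not claimed to match the capture above:

```python
import hashlib

def md5hex(data: str) -> str:
    return hashlib.md5(data.encode()).hexdigest()

def digest_response(username, realm, password, method, uri,
                    nonce, nc, cnonce, qop):
    ha1 = md5hex(f"{username}:{realm}:{password}")   # credentials hash
    ha2 = md5hex(f"{method}:{uri}")                  # request-line hash
    # final response: HA1 : nonce : nonce-count : client-nonce : qop : HA2
    return md5hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

resp = digest_response("owasp", "OwaspSample", "password", "GET",
                       "/example/owasp/test.asmx",
                       "Ny8yLzIwMDIgMzoyNjoyNCBQTQ", "00000001",
                       "c51b5139556f939768f770dab8e5277a", "auth")
print(resp)  # a 32-character hex digest
```

Note that because the server's nonce and the client's cnonce both enter the hash, a sniffed response cannot simply be replayed against a fresh challenge, although it can still be attacked offline by brute force.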
HTML Form-based Authentication
While both HTTP access authentication schemes may appear suitable for commercial use over the Internet, particularly when used over an SSL-encrypted session, many organisations have chosen to utilise custom HTML and application-level authentication procedures in order to provide a more sophisticated authentication process.
Source code taken from an HTML login form:
Bruteforce Attacks
Having listed the different types of authentication methods for a web application, we will now explain several types of brute-force attacks.
Dictionary Attack
Dictionary-based attacks consist of automated scripts and tools that try to guess usernames and passwords from a dictionary file. A dictionary file can be tuned and compiled to cover words likely to be used by the owner of the account a malicious user is going to attack. The attacker can gather information (via active/passive reconnaissance, competitive intelligence, dumpster diving, social engineering) to understand the user, or build a list of all unique words available on the website.
Search Attacks
Search attacks try to cover all possible combinations of a given character set within a given password length range. This kind of attack is very slow because the space of possible candidates is quite big. For example, given a known user ID, the total number of passwords to try, up to 8 characters in length, is about 26^8 in a lower-alpha charset (more than 200 billion possible passwords!).
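The size of that search space can be computed directly; the sketch below counts all lowercase-alphabetic candidates of lengths 1 through 8:

```python
# number of candidate passwords over a 26-letter charset, lengths 1..8
charset_size = 26
total = sum(charset_size ** length for length in range(1, 9))
print(total)  # 217180147158 -- over 200 billion candidates
```

At even a million guesses per second, exhausting this space would take more than two days, and most real login interfaces allow nowhere near that rate.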
Rule-based search attacks
To increase coverage of the combination space without slowing down the process too much, it is advisable to create good rules for generating candidates. For example, "John the Ripper" can generate password variations from part of the username, or modify input words through a preconfigured mask (e.g. 1st round "pen" --> 2nd round "p3n" --> 3rd round "p3np3n").
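A minimal sketch of such rule-based candidate generation (the leet-substitution and doubling rules mirror the "pen" -> "p3n" -> "p3np3n" example above; this is an illustration, not John the Ripper's actual rule engine):

```python
def candidates(word):
    # round 1: the word itself
    yield word
    # round 2: simple leet substitutions through a fixed mask
    leet = word.translate(str.maketrans("aeio", "4310"))
    yield leet
    # round 3: doubling the mutated word
    yield leet + leet

print(list(candidates("pen")))  # ['pen', 'p3n', 'p3np3n']
```

Each rule multiplies a small dictionary into a much larger candidate list while staying far cheaper than an exhaustive search.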
Bruteforcing HTTP Basic Authentication
raven@blackbox /hydra $ ./hydra -L users.txt -P words.txt www.site.com http-head /private/
Hydra v5.3 (c) 2006 by van Hauser / THC - use allowed only for legal purposes.
Hydra (http://www.thc.org) starting at 2009-07-04 18:15:17
[DATA] 16 tasks, 1 servers, 1638 login tries (l:2/p:819), ~102 tries per task
[DATA] attacking service http-head on port 80
[STATUS] 792.00 tries/min, 792 tries in 00:01h, 846 todo in 00:02h
[80][www] host: 10.0.0.1 login: owasp password: password
[STATUS] attack finished for www.site.com (waiting for childs to finish)
Hydra (http://www.thc.org) finished at 2009-07-04 18:16:34
raven@blackbox /hydra $
Bruteforcing HTML Form Based Authentication
raven@blackbox /hydra $ ./hydra -L users.txt -P words.txt www.site.com https-post-form
"/index.cgi:login&name=^USER^&password=^PASS^&login=Login:Not allowed" &
Hydra v5.3 (c) 2006 by van Hauser / THC - use allowed only for legal purposes.
Hydra (http://www.thc.org) starting at 2009-07-04 19:16:17
[DATA] 16 tasks, 1 servers, 1638 login tries (l:2/p:819), ~102 tries per task
[DATA] attacking service http-post-form on port 443
[443] host: 10.0.0.1 login: owasp password: password
[STATUS] attack finished for www.site.com (waiting for childs to finish)
Hydra (http://www.thc.org) finished at 2009-07-04 19:18:34
raven@blackbox /hydra $
Gray Box testing and example
Partial knowledge of password and account details
When a tester has some information about the length or structure of the password (or account name), it is possible to perform a brute-force attack with a higher probability of success. In fact, by limiting the number of characters and defining the password length, the total number of candidate passwords significantly decreases.
[Figure: brute force with partial knowledge - http://www.owasp.org/images/b/b8/Bf-partialknowledge.jpg]
Memory Trade Off Attacks
To perform a memory trade-off attack, the tester needs at least one password hash, previously obtained by exploiting flaws in the application (e.g. SQL Injection) or by sniffing HTTP traffic. Nowadays, the most common attacks of this kind are based on rainbow tables, a special type of lookup table used to recover the plaintext password from a ciphertext generated by a one-way hash.
A rainbow table is an optimization of Hellman's memory trade-off attack, in which a reduction function is used to create chains that compress the data output generated by computing all possible candidates.
Tables are specific to the hash function they were created for: e.g., MD5 tables can only crack MD5 hashes.
The RainbowCrack program can generate and use rainbow tables for a variety of character sets and hashing algorithms, including LM hash, MD5, SHA1, etc.
[Figure: online hash cracking - http://www.owasp.org/images/e/e6/Bf-milworm.jpg]
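The chain idea behind the trade-off can be illustrated with a toy sketch: a reduction function maps each digest back into the candidate space, and only the (start, end) pairs of long chains need to be stored. This is a simplified illustration of the Hellman/rainbow construction, not a usable cracker:

```python
import hashlib

def md5hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def reduce_to_candidate(digest: str, step: int, length: int = 4) -> str:
    # map a digest (plus the step index, as rainbow tables do) back
    # into the lowercase-alphabetic candidate space
    n = (int(digest, 16) + step) % (26 ** length)
    chars = []
    for _ in range(length):
        chars.append(chr(ord("a") + n % 26))
        n //= 26
    return "".join(chars)

def chain_end(start: str, chain_len: int) -> str:
    # hash and reduce repeatedly; a real table stores only (start, end)
    candidate = start
    for step in range(chain_len):
        candidate = reduce_to_candidate(md5hex(candidate), step)
    return candidate

end = chain_end("aaaa", 1000)
print("aaaa ->", end)
```

Storing only chain endpoints is what trades memory for computation: a lookup recomputes part of a chain instead of storing every (hash, plaintext) pair.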
References
Whitepapers
Philippe Oechslin: Making a Faster Cryptanalytic Time-Memory Trade-Off - http://lasecwww.epfl.ch/pub/lasec/doc/Oech03.pdf
OPHCRACK (the time-memory-trade-off cracker) - http://lasecwww.epfl.ch/~oechslin/projects/ophcrack/
Rainbowcrack.com - http://www.rainbowcrack.com/
Project RainbowCrack - http://www.antsight.com/zsl/rainbowcrack/
milw0rm - http://www.milw0rm.com/cracker/list.php
Tools
THC Hydra: http://www.thc.org/thc-hydra/
John the Ripper: http://www.openwall.com/john/
Brutus: http://www.hoobie.net/brutus/
4.4.5 Testing for Bypassing authentication schema (OWASP-AT-005)
Brief Summary
While most applications require authentication for gaining access to private information or to execute tasks, not every authentication method is able to provide adequate security.
Negligence, ignorance, or simple underestimation of security threats often result in authentication schemes that can be bypassed by simply skipping the login page and directly calling an internal page that is supposed to be accessed only after authentication has been performed.
In addition, it is often possible to bypass authentication measures by tampering with requests and tricking the application into thinking that we are already authenticated. This can be accomplished by modifying a URL parameter, by manipulating form fields, or by counterfeiting sessions.
Description of the Issue
Problems related to the authentication schema can be found at different stages of the software development life cycle (SDLC), such as the design, development, and deployment phases.
Examples of design errors include a wrong definition of application parts to be protected, the choice of not applying strong encryption protocols for securing authentication data exchange, and many more.
Problems in the development phase are, for example, the incorrect implementation of input validation functionalities, or not following the security best practices for the specific language.
In addition, there are issues during application setup (installation and configuration activities) due to a lack in required technical skills, or due to poor documentation available.
Black Box testing and example
There are several methods to bypass the authentication schema in use by a web application:
Direct page request (forced browsing)
Parameter Modification
Session ID Prediction
SQL Injection
Direct page request
If a web application implements access control only on the login page, the authentication schema could be bypassed. For example, if a user directly requests a different page via forced browsing, that page may not check the credentials of the user before granting access. Attempt to directly access a protected page through the address bar in your browser to test using this method.
[Figure: direct page request - http://www.owasp.org/images/7/7f/Basm-directreq.jpg]
Parameter Modification
Another problem related to authentication design is when the application verifies a successful login based on fixed value parameters. A user could modify these parameters to gain access to the protected areas without providing valid credentials. In the example below, the "authenticated" parameter is changed to a value of "yes", which allows the user to gain access. In this example, the parameter is in the URL, but a proxy could also be used to modify the parameter, especially when the parameters are sent as form elements in a POST.
http://www.site.com/page.asp?authenticated=no
raven@blackbox /home $nc www.site.com 80
GET /page.asp?authenticated=yes HTTP/1.0
HTTP/1.1 200 OK
Date: Sat, 11 Nov 2006 10:22:44 GMT
Server: Apache
Connection: close
Content-Type: text/html; charset=iso-8859-1
You Are Authenticated
[Figure: parameter modification - http://www.owasp.org/images/8/8c/Basm-parammod.jpg]
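The tampering step can be automated; the sketch below rewrites a single query-string parameter (the URL and parameter name mirror the example above) so the modified request can then be replayed through a proxy or script:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def tamper(url: str, param: str, value: str) -> str:
    # rewrite one query-string parameter, leaving the rest of the URL intact
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query[param] = value
    return urlunsplit(parts._replace(query=urlencode(query)))

print(tamper("http://www.site.com/page.asp?authenticated=no",
             "authenticated", "yes"))
# http://www.site.com/page.asp?authenticated=yes
```

The same idea applies to POST bodies and cookies: any fixed-value flag the server trusts can be flipped this way.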
Session ID Prediction
Many web applications manage authentication using session identification values (SESSION ID). Therefore, if Session ID generation is predictable, a malicious user could be able to find a valid session ID and gain unauthorized access to the application, impersonating a previously authenticated user.
In the following figure, values inside cookies increase linearly, so it could be easy for an attacker to guess a valid session ID.
[Figure: linearly increasing session IDs - http://www.owasp.org/images/8/83/Basm-sessid.jpg]
In the following figure, values inside cookies change only partially, so it's possible to restrict a bruteforce attack to the defined fields shown below.
[Figure: partially changing session IDs - http://www.owasp.org/images/f/f4/Basm-sessid2.jpg]
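Given a sample of session IDs collected over several logins, a quick check for the linear pattern described above can be scripted (a sketch; dedicated session-ID analysis features in tools such as WebScarab are far more thorough):

```python
def increments(session_ids):
    # differences between consecutive session IDs, read as hex numbers
    values = [int(s, 16) for s in session_ids]
    return [b - a for a, b in zip(values, values[1:])]

def looks_sequential(session_ids):
    # a single repeated increment means the next ID is trivially guessable
    diffs = increments(session_ids)
    return len(set(diffs)) == 1

print(looks_sequential(["0a81", "0a82", "0a83", "0a84"]))  # True
print(looks_sequential(["0a81", "93f2", "11c0", "7e55"]))  # False
```

A properly generated session ID should show no detectable pattern at all across a large sample, not merely fail this simple test.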
SQL Injection (HTML Form Authentication)
SQL Injection is a widely known attack technique. We are not going to describe it in detail in this section; injection techniques are covered in depth in other sections of this guide.
[Figure: SQL injection in a login form - http://www.owasp.org/images/4/46/Basm-sqlinj.jpg]
The following figure shows that with simple SQL injection, it is possible to bypass the authentication form.
[Figure: authentication bypass via SQL injection - http://www.owasp.org/images/d/d1/Basm-sqlinj2.gif]
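The root cause can be sketched in a few lines: the classic ' OR '1'='1 payload works because the application concatenates user input directly into the query string (the table and column names here are hypothetical, for illustration only):

```python
def build_login_query(username: str, password: str) -> str:
    # VULNERABLE: user input concatenated straight into the SQL statement
    return ("SELECT * FROM users "
            "WHERE username = '%s' AND password = '%s'" % (username, password))

# the classic authentication-bypass payload
query = build_login_query("admin' OR '1'='1", "anything")
print(query)
# SELECT * FROM users WHERE username = 'admin' OR '1'='1' AND password = 'anything'
```

The injected quote terminates the string literal early, and the OR clause makes the WHERE condition match regardless of the supplied credentials; parameterized queries remove this class of flaw entirely.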
Gray Box testing and example
If an attacker has been able to retrieve the application source code by exploiting a previously discovered vulnerability (e.g. directory traversal), or from a web repository (Open Source Applications), it could be possible to perform refined attacks against the implementation of the authentication process.
In the following example (PHPBB 2.0.13 - Authentication Bypass Vulnerability), at line 5 the unserialize() function parses the user-supplied cookie and sets values inside the $row array. At line 10, the user's MD5 password hash stored in the backend database is compared to the one supplied.
1. if ( isset($HTTP_COOKIE_VARS[$cookiename . '_sid']) || isset($HTTP_COOKIE_VARS[$cookiename . '_data']) )
2. {
3. $sessiondata = isset( $HTTP_COOKIE_VARS[$cookiename . '_data'] )?
4.
5. unserialize(stripslashes($HTTP_COOKIE_VARS[$cookiename . '_data'])): array();
6.
7. $sessionmethod = SESSION_METHOD_COOKIE;
8. }
9.
10. if( md5($password) == $row['user_password'] && $row['user_active'] )
11.
12. {
13. $autologin = ( isset($HTTP_POST_VARS['autologin']) )? TRUE: 0;
14. }
In PHP, a loose comparison between a non-empty string value and the boolean TRUE evaluates to TRUE, so supplying the following string (the important part is "b:1") to the unserialize() function makes it possible to bypass the authentication control:
a:2:{s:11:"autologinid";b:1;s:6:"userid";s:1:"2";}
References
Whitepapers
Mark Roxberry: "PHPBB 2.0.13 vulnerability"
David Endler: "Session ID Brute Force Exploitation and Prediction" - http://www.cgisecurity.com/lib/SessionIDs.pdf
Tools
WebScarab: http://www.owasp.org/index.php/Category:OWASP_WebScarab_Project
WebGoat: http://www.owasp.org/index.php/OWASP_WebGoat_Project
4.4.6 Testing for Vulnerable remember password and pwd reset (OWASP-AT-006)
Brief Summary
Most web applications allow users to reset their password if they have forgotten it, usually by sending them a password reset email and/or by asking them to answer one or more "security questions". In this test we check that this function is properly implemented and that it does not introduce any flaw in the authentication scheme. We also check whether the application allows the user to store the password in the browser ("remember password" function).
Description of the Issue
A great majority of web applications provide a way for users to recover (or reset) their password in case they have forgotten it. The exact procedure varies heavily among different applications, also depending on the required level of security, but the approach is always to use an alternate way of verifying the identity of the user. One of the simplest (and most common) approaches is to ask the user for his/her e-mail address, and send the old password (or a new one) to that address. This scheme is based on the assumption that the user's email has not been compromised and that it is secure enough for this goal.

Alternatively (or in addition), the application could ask the user to answer one or more "secret questions", which are usually chosen by the user from a set of possible ones. The security of this scheme lies in the ability to provide a way for someone to identify themselves to the system with answers to questions that are not easily answerable via personal information lookups. As an example, a very insecure question would be "your mother's maiden name", since that is a piece of information that an attacker could find out without much effort. An example of a better question would be "favorite grade-school teacher", since this would be a much more difficult topic to research about a person whose identity may otherwise already be stolen.

Another common feature that applications use to provide users a convenience is to cache the password locally in the browser (on the client machine) and have it 'pre-typed' in all subsequent accesses. While this feature can be perceived as extremely friendly for the average user, at the same time it introduces a flaw, as the user account becomes easily accessible to anyone that uses the same machine account.
Black Box Testing and Examples
Password Reset
The first step is to check whether secret questions are used. Sending the password (or a password reset link) to the user's email address without first asking for a secret question means relying 100% on the security of that email address, which is not suitable if the application needs a high level of security. On the other hand, if secret questions are used, the next step is to assess their strength.

As a first point, how many questions need to be answered before the password can be reset? The majority of applications only require the user to answer one question, but some critical applications require the user to answer two or even more questions correctly.

As a second step, we need to analyze the questions themselves. Often a self-reset system offers the choice of multiple questions; this is a good sign for the would-be attacker, as it presents him/her with options. Ask yourself whether you could obtain answers to any or all of these questions via a simple Google search on the Internet or with a social engineering attack. As a penetration tester, here is a step-by-step walk-through of assessing a password self-reset tool:
Are there multiple questions offered?
If so, try to pick a question which would have a public answer; for example, something Google would find with a simple query
Always pick questions which have a factual answer such as a first school or other facts which can be looked up
Look for questions which have few possible options such as what make was your first car; this question would present the attacker with a short-list of answers to guess at and based on statistics the attacker could rank answers from most to least likely
Determine how many guesses you have (if possible)
Does the password reset allow unlimited attempts?
Is there a lockout period after X incorrect answers? Keep in mind that a lockout system can be a security problem in itself, as it can be exploited by an attacker to launch a Denial of Service against users
Pick the appropriate question based on analysis from above point, and do research to determine the most likely answers
How does the password-reset tool (once a successful answer to a question is found) behave?
Does it allow immediate change of the password?
Does it display the old password?
Does it email the password to some pre-defined email address?
The most insecure scenario here is if the password reset tool shows you the password; this gives the attacker the ability to log into the account, and unless the application provides information about the last login the victim would not know that his/her account has been compromised.
A less insecure scenario is if the password reset tool forces the user to immediately change his/her password. While not as stealthy as the first case, it allows the attacker to gain access and locks the real user out.
The best security is achieved if the password reset is done via an email to the address the user initially registered with, or some other email address; this forces the attacker not only to guess which email account the password reset was sent to (unless the application reveals it) but also to compromise that account in order to take control of the victim's access to the application.
The key to successfully exploiting and bypassing a password self-reset is to find a question or set of questions whose answers can be easily acquired. Always look for questions which give you the greatest statistical chance of guessing the correct answer, if you are completely unsure of any of the answers. In the end, a password self-reset tool is only as strong as its weakest question. As a side note, if the application sends or displays the old password in cleartext, it means that passwords are not stored in hashed form, which is a security issue in itself.
Password Remember
The "remember my password" mechanism can be implemented with one of the following methods:
Allowing the "cache password" feature in web browsers. Although not directly an application mechanism, this can and should be disabled.
Storing the password in a permanent cookie. The password must be hashed/encrypted and not sent in cleartext.
For the first method, check the HTML code of the login page to see whether browser caching of the passwords is disabled. The code for this will usually be along the lines of the autocomplete attribute being set to "off" on the form or on the password input field.
The password autocomplete should always be disabled, especially in sensitive applications, since an attacker, if able to access the browser cache, could easily obtain the password in cleartext (public computers are a very notable example of this attack).

To check the second implementation type, examine the cookie stored by the application. Verify that the credentials are not stored in cleartext, but are hashed. Examine the hashing mechanism: if it appears to be a common, well-known one, check its strength; for homegrown hash functions, attempt several usernames to check whether the hash function is easily guessable. Additionally, verify that the credentials are only sent during the login phase, and not together with every request to the application.
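The first check can be scripted; the sketch below flags login pages whose HTML does not disable password caching via the autocomplete attribute (a heuristic using a regular expression, not a full HTML parser):

```python
import re

def autocomplete_disabled(html: str) -> bool:
    # true if the form or password field carries autocomplete="off"
    return bool(re.search(r'autocomplete\s*=\s*["\']?off', html, re.IGNORECASE))

safe = '<form method="POST" autocomplete="off"><input type="password" name="pw"></form>'
unsafe = '<form method="POST"><input type="password" name="pw"></form>'
print(autocomplete_disabled(safe), autocomplete_disabled(unsafe))  # True False
```

Run against every page that contains a password field, this quickly surfaces forms that permit browser-side password caching.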
Gray Box Testing and Examples
Since this test uses only functional features of the application and HTML code that is always available to the client, gray box testing follows the same guidelines as the previous section. The only exception is for the password encoded in the cookie, where the same gray box analysis described in the Cookie and Session Token Manipulation chapter can be applied.
4.4.7 Testing for Logout and Browser Cache Management (OWASP-AT-007)
Brief Summary
In this phase, we check that the logout function is properly implemented, and that it is not possible to reuse a session after logout. We also check that the application automatically logs out a user when that user has been idle for a certain amount of time, and that no sensitive data remains stored in the browser cache.
Description of the Issue
The end of a web session is usually triggered by one of the following two events:
The user logs out
The user remains idle for a certain amount of time and the application automatically logs him/her out
Both cases must be implemented carefully, in order to avoid introducing weaknesses that could be exploited by an attacker to gain unauthorized access. More specifically, the logout function must ensure that all session tokens (e.g.: cookies) are properly destroyed or made unusable, and that proper controls are enforced at the server side to forbid them to be used again.
Note: the most important thing is for the application to invalidate the session on the server side. Generally this means that the code must invoke the appropriate method, e.g. HttpSession.invalidate() in Java, Session.abandon() in .NET. Clearing the cookies from the browser is a nice touch, but is not strictly necessary, since if the session is properly invalidated on the server, having the cookie in the browser will not help an attacker.
If such actions are not properly carried out, an attacker could replay these session tokens in order to resurrect the session of a legitimate user and virtually impersonate him/her (this attack is usually known as 'cookie replay'). Of course, a mitigating factor is that the attacker needs to be able to access those tokens (which are stored on the victim's PC), but in a variety of cases this might not be too difficult. The most common scenario for this kind of attack is a public computer that is used to access private information (e.g.: webmail, an online bank account): when the user has finished using the application and logs out, if the logout process is not properly enforced, the following user could access the same account, for instance by simply pressing the back button of the browser. Another scenario can result from a Cross Site Scripting vulnerability or a connection that is not 100% protected by SSL: a flawed logout function would make stolen cookies useful for a much longer time, making life for the attacker much easier. The third test of this chapter is aimed at checking that the application forbids the browser to cache sensitive data, which again would pose a danger to a user accessing the application from a public computer.
Black Box testing and examples
Logout function: The first step is to test the presence of the logout function. Check that the application provides a logout button and that this button is present and well visible on all pages that require authentication. A logout button that is not clearly visible, or that is present only on certain pages, poses a security risk, as the user might forget to use it at the end of his/her session.
The second step consists of checking what happens to the session tokens when the logout function is invoked. For instance, when cookies are used, proper behavior is to erase all session cookies by issuing a new Set-Cookie directive that sets their value to a non-valid one (e.g.: NULL or some equivalent value) and, if the cookie is persistent, setting its expiration date in the past, which tells the browser to discard the cookie. So, if the authentication page originally sets a cookie in the following way:
Set-Cookie: SessionID=sjdhqwoy938eh1q; expires=Sun, 29-Oct-2006 12:20:00 GMT; path=/; domain=victim.com
the logout function should trigger a response somewhat resembling the following:
Set-Cookie: SessionID=noauth; expires=Sat, 01-Jan-2000 00:00:00 GMT; path=/; domain=victim.com
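This expected behavior can be checked with a small heuristic: the Set-Cookie header sent at logout should carry an expiry year in the past (a sketch that only inspects the four-digit year, not a full cookie-date parser):

```python
import re
from datetime import datetime, timezone

def cookie_expired_at_logout(set_cookie: str) -> bool:
    # pull the first four-digit year out of the expires attribute
    match = re.search(r'expires=[^;]*?(\d{4})', set_cookie, re.IGNORECASE)
    return bool(match) and int(match.group(1)) < datetime.now(timezone.utc).year

logout_header = ("SessionID=noauth; expires=Sat, 01-Jan-2000 00:00:00 GMT; "
                 "path=/; domain=victim.com")
print(cookie_expired_at_logout(logout_header))  # True
```

A passing check here is necessary but not sufficient: the server must also invalidate the session on its side, as noted above.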
The first (and simplest) test at this point consists of logging out and then hitting the 'back' button of the browser, to check whether we are still authenticated. If we are, it means that the logout function has been implemented insecurely, and that the logout function does not destroy the session IDs. This happens sometimes with applications that use non-persistent cookies and require the user to close the browser in order to effectively erase such cookies from memory. Some of these applications provide a warning to the user, suggesting that the browser be closed, but this solution completely relies on the user's behavior, and results in a lower level of security compared to destroying the cookies. Other applications might try to close the browser using JavaScript, but that again is a solution that relies on the client behavior, which is intrinsically less secure, since the client browser could be configured to limit the execution of scripts (and in this case a configuration that had the goal of increasing security would end up decreasing it). Moreover, the effectiveness of this solution would depend on the browser vendor, version and settings (e.g.: the JavaScript code might successfully close an Internet Explorer instance but fail to close a Firefox one).
If by pressing the 'back' button we can access previous pages but not access new ones then we are simply accessing the browser cache. If these pages contain sensitive data, it means that the application did not forbid the browser to cache it (by not setting the Cache-Control header, a different kind of problem that we will analyze later).
After the back button technique has been tried, it's time for something a little more sophisticated: we can re-set the cookie to the original value and check whether we can still access the application in an authenticated fashion. If we can, it means that there is not a server-side mechanism that keeps track of active and non active cookies, but that the correctness of the information stored in the cookie is enough to grant access. To set a cookie to a determined value we can use WebScarab and, intercepting one response of the application, insert a Set-Cookie header with our desired values:
[Figure: setting a cookie with WebScarab - http://www.owasp.org/images/5/5a/TestingGuide-LogoutTest-fig1.png]
Alternatively, we can install a cookie editor in our browser (e.g.: Add N Edit Cookies in Firefox):
[Figure: editing a cookie in the browser - http://www.owasp.org/images/a/a2/TestingGuide-LogoutTest-fig2.png]
A notable example of a design where there is no control at the server side about cookies that belong to logged-out users is ASP.NET FormsAuthentication class, where the cookie is basically an encrypted and authenticated version of the user details that are decrypted and checked by the server side. While this is very effective in preventing cookie tampering, the fact that the server does not maintain an internal record of the session status means that it is possible to launch a cookie replay attack after the legitimate user has logged out, provided that the cookie has not expired yet (see the references for further detail).
It should be noted that this test only applies to session cookies: a persistent cookie that only stores data about some minor user preferences (e.g., site appearance) and that is not deleted when the user logs out is not to be considered a security risk.
Timeout logout
The same approach that we have seen in the previous section can be applied to measuring the timeout logout. The most appropriate timeout is a balance between security (shorter timeout) and usability (longer timeout), and depends heavily on the criticality of the data handled by the application. A 60-minute timeout for a public forum can be acceptable, but such a long time would be far too much for a home-banking application. In any case, any application that does not enforce a timeout-based logout should be considered insecure, unless such behavior addresses a specific functional requirement. The testing methodology is very similar to the one outlined in the previous section. First, we have to check whether a timeout exists, for instance by logging in and then killing some time reading some other Testing Guide chapter, waiting for the timeout logout to be triggered. As with the logout function, after the timeout has passed all session tokens should be destroyed or rendered unusable. We also need to understand whether the timeout is enforced by the client or by the server (or both). Getting back to our cookie example, if the session cookie is non-persistent (or, more generally, the session token does not store any data about time), we can be sure that the timeout is enforced by the server. If the session token contains some time-related data (e.g., login time, last access time, or expiration date for a persistent cookie), then we know that the client is involved in enforcing the timeout. In this case, we need to modify the token (if it's not cryptographically protected) and see what happens to our session.
For instance, we can set the cookie expiration date far in the future and see whether our session can be prolonged. As a general rule, everything should be checked server-side, and it should not be possible to access the application again by re-setting the session cookies to previous values.
Cached pages
Logging out from an application obviously does not clear the browser cache of any sensitive information that might have been stored. Therefore, another test to perform is checking that the application does not leak any critical data into the browser cache. In order to do that, we can use WebScarab and search through the server responses that belong to our session, checking that for every page that contains sensitive information the server instructed the browser not to cache any data. Such a directive can be issued in the HTTP response headers:
HTTP/1.1:
Cache-Control: no-cache
HTTP/1.0:
Pragma: no-cache
Expires: <a past date or an illegal value, e.g., 0>
Alternatively, the same effect can be obtained directly at the HTML level, by including the following code in each page that contains sensitive data:
HTTP/1.1:
<META HTTP-EQUIV="Cache-Control" CONTENT="no-cache">
HTTP/1.0:
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
<META HTTP-EQUIV="Expires" CONTENT="0">
For instance, if we are testing an e-commerce application, we should look for all pages that contain a credit card number or some other financial information, and check that all those pages enforce the no-cache directive. On the other hand, if we find pages that contain critical information but that fail to instruct the browser not to cache their content, we know that sensitive information will be stored on the disk, and we can double-check that simply by looking for it in the browser cache. The exact location where that information is stored depends on the client operating system and on the browser that has been used, but here are some examples:
Mozilla Firefox:
Unix/Linux: ~/.mozilla/firefox/<profile-id>/Cache/
Windows: C:\Documents and Settings\<user name>\Local Settings\Application Data\Mozilla\Firefox\Profiles\<profile-id>\Cache
Internet Explorer:
C:\Documents and Settings\<user name>\Local Settings\Temporary Internet Files
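The header check described above is easy to automate across the responses captured by the proxy. The helper below is our own sketch (not part of any tool); it takes a dictionary of response headers and flags sensitive pages that are missing the anti-caching directives:

```python
def allows_caching(headers, http_version="1.1"):
    """Return True if a page with sensitive content is missing the
    anti-caching directives discussed above.
    `headers` is a dict of response header names to values."""
    h = {k.lower(): v.lower() for k, v in headers.items()}
    if http_version == "1.1":
        # HTTP/1.1: Cache-Control must forbid caching
        cc = h.get("cache-control", "")
        return "no-cache" not in cc and "no-store" not in cc
    # HTTP/1.0: look for Pragma: no-cache or an expired/illegal Expires
    return "no-cache" not in h.get("pragma", "") and \
           h.get("expires", "") not in ("0", "-1")
```

For every sensitive page found in the WebScarab session, a True result means the content may end up on disk in the cache locations listed above.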
Gray Box testing and example
As a general rule, we need to check that:
The logout function effectively destroys all session tokens, or at least renders them unusable
The server performs proper checks on the session state, preventing an attacker from replaying a previous token
A timeout is enforced and it is properly checked by the server. If the server uses an expiration time that is read from a session token that is sent by the client, the token must be cryptographically protected
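The first three checks above imply a server-side design along these lines. The sketch is a minimal in-memory illustration, not tied to any particular framework; the class and its names are our own, and the timestamp parameters exist only to keep the example deterministic:

```python
import secrets
import time

class SessionStore:
    """Minimal server-side session store: tokens are random, tracked
    server-side, destroyed on logout, and expired after an idle timeout."""

    def __init__(self, idle_timeout=15 * 60):
        self.idle_timeout = idle_timeout
        self._sessions = {}  # token -> (user, last_access)

    def login(self, user, now=None):
        now = time.time() if now is None else now
        token = secrets.token_hex(16)          # unguessable session ID
        self._sessions[token] = (user, now)
        return token

    def is_valid(self, token, now=None):
        now = time.time() if now is None else now
        entry = self._sessions.get(token)
        if entry is None:                      # unknown or logged-out token
            return False
        user, last_access = entry
        if now - last_access > self.idle_timeout:
            del self._sessions[token]          # timeout enforced server-side
            return False
        self._sessions[token] = (user, now)    # refresh last-access time
        return True

    def logout(self, token):
        self._sessions.pop(token, None)        # destroy the token server-side
```

With such a store, replaying a pre-logout token fails regardless of anything stored in the cookie itself.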
For the secure cache test, the methodology is equivalent to the black box case, as in both scenarios we have full access to the server response headers and to the HTML code.
References
Whitepapers
ASP.NET Forms Authentication: "Best Practices for Software Developers" - http://www.foundstone.com/resources/whitepapers/ASPNETFormsAuthentication.pdf
"The FormsAuthentication.SignOut method does not prevent cookie replay attacks in ASP.NET applications" - http://support.microsoft.com/default.aspx?scid=kb;en-us;900111
Tools
Add N Edit Cookies (Firefox extension): https://addons.mozilla.org/firefox/573/
4.4.8 Testing for Captcha (OWASP-AT-008)
Brief Summary
CAPTCHA ("Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge-response test used by many web applications to ensure that the response is not generated by a computer. CAPTCHA implementations are often vulnerable to various kinds of attacks even if the generated CAPTCHA is unbreakable. This section will help you to identify these kinds of attacks.
Description of the Issue
Although CAPTCHA is not an authentication control, its use can be very effective against:
enumeration attacks (login, registration, or password reset forms are often vulnerable to enumeration attacks - without CAPTCHA the attacker can gain valid usernames, phone numbers, or other sensitive information in a short time)
automated sending of many GET/POST requests in a short time where it is undesirable (e.g., SMS/MMS/email flooding); CAPTCHA provides a rate-limiting function
automated creation/use of accounts that are meant to be used only by humans (e.g., creating webmail accounts, stopping spam)
automated posting to blogs, forums and wikis, whether as a result of commercial promotion, or harassment and vandalism
any automated attacks that massively gain or misuse sensitive information from the application
Using CAPTCHAs as a CSRF protection is not recommended (because there are stronger CSRF countermeasures).
These vulnerabilities are quite common in many CAPTCHA implementations:
the generated image CAPTCHA is weak; this can be identified (without any complex computer recognition system) simply by comparison with already-broken CAPTCHAs
generated CAPTCHA questions have a very limited set of possible answers
the value of the decoded CAPTCHA is sent by the client (as a GET parameter or as a hidden field of a POST form). This value is often:
encrypted with a simple algorithm that can be easily broken by observing multiple decoded CAPTCHA values
hashed by a weak hash function (e.g., MD5) that can be broken using a rainbow table
possibility of replay attacks:
the application does not keep track of which CAPTCHA image ID has been sent to the user. Therefore, the attacker can simply obtain an appropriate CAPTCHA image and its ID, solve it, and send the value of the decoded CAPTCHA with its corresponding ID (the ID of a CAPTCHA could be a hash of the decoded CAPTCHA or any unique identifier)
the application does not destroy the session when the correct phrase is entered - by reusing the session ID of a known CAPTCHA it is possible to bypass CAPTCHA protected page
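Of the weaknesses above, the weak-hash case is the quickest to demonstrate: if the form ships the MD5 of the solution in a hidden field and the answer space is small, the hash can be reversed by simple exhaustion, with no image recognition at all. The alphabet and length below are illustrative assumptions about the CAPTCHA's answer space:

```python
import hashlib
import itertools
import string

def crack_captcha_md5(hidden_hash,
                      alphabet=string.ascii_uppercase + string.digits,
                      length=4):
    """Exhaust the (small) answer space until the MD5 digest matches the
    value the application placed in its hidden form field."""
    for candidate in itertools.product(alphabet, repeat=length):
        word = "".join(candidate)
        if hashlib.md5(word.encode()).hexdigest() == hidden_hash:
            return word
    return None
```

A 4-character alphanumeric answer space has only 36^4 (about 1.7 million) candidates, which is far below what MD5 can resist; a rainbow table makes this faster still.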
Black Box testing and example
Use an intercepting fault injection proxy (e.g., WebScarab) to:
identify all parameters that are sent in addition to the decoded CAPTCHA value from the client to the server (these parameters can contain encrypted or hashed values of decoded CAPTCHA and CAPTCHA ID number)
try to send an old decoded CAPTCHA value with an old CAPTCHA ID (if the application accepts them, it is vulnerable to replay attacks)
try to send an old decoded CAPTCHA value with an old session ID (if the application accepts them, it is vulnerable to replay attacks)
Find out if similar CAPTCHAs have already been broken. Broken CAPTCHA images can be found at gimpy (http://www.cs.sfu.ca/~mori/research/gimpy/ez/), PWNtcha (http://libcaca.zoy.org/wiki/PWNtcha), and lafdc (http://www.lafdc.com/captcha/).
Verify if the set of possible answers for a CAPTCHA is limited and can be easily determined.
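The replay checks above can be scripted once the request mechanics are wrapped up. The sketch below is our own: it drives any `submit(captcha_id, value, session_id)` callable (e.g., a thin wrapper around an HTTP client or intercepting proxy) and submits the same solved CAPTCHA twice:

```python
def captcha_replay_vulnerable(submit, captcha_id, solved_value, session_id):
    """Submit the same solved CAPTCHA twice with the same ID and session.
    If the second submission is also accepted, the application does not
    invalidate used CAPTCHAs and is vulnerable to replay attacks."""
    first = submit(captcha_id, solved_value, session_id)
    second = submit(captcha_id, solved_value, session_id)
    return bool(first and second)
```

The same driver works for the old-session-ID variant: keep the session ID fixed at a value for which the CAPTCHA was already solved, and vary only the request being protected.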
Gray Box testing and example
Audit the application source code in order to reveal:
the CAPTCHA implementation and version in use - there are many known vulnerabilities in widely used CAPTCHA implementations (see http://osvdb.org/search?request=captcha)
whether the application sends an encrypted or hashed value from the client (which is a very bad security practice); if so, verify that the encryption or hash algorithm used is sufficiently strong
References
Captcha Decoders
(Open source) PWNtcha captcha decoder: http://libcaca.zoy.org/wiki/PWNtcha
(Open source) The Captcha Breaker: http://churchturing.org/captcha-dist/
(Commercial) Captcha decoder: http://www.lafdc.com/captcha/
(Commercial - free) Online Captcha Decoder (free limited usage, enough for testing): http://www.captchakiller.com/
Articles
Breaking a Visual CAPTCHA: http://www.cs.sfu.ca/~mori/research/gimpy/
Breaking CAPTCHAs Without Using OCR: http://www.puremango.co.uk/cm_breaking_captcha_115.php
Why CAPTCHA is not a security control for user authentication: http://securesoftware.blogspot.com/2007/11/captcha-placebo-security-control-for.html
4.4.9 Testing for Multiple factors Authentication (OWASP-AT-009)
Brief Summary
Evaluating the strength of a Multiple Factors Authentication System (MFAS) is a critical task for the penetration tester. Banks and other financial institutions spend considerable amounts of money on expensive MFAS; therefore, performing accurate tests before adopting a particular solution is strongly recommended. In addition, a further responsibility of penetration testers is to determine whether the currently adopted MFAS is effectively able to defend the organization's assets from the threats that generally drive the adoption of an MFAS.
Description of the Issue
Generally, the aim of a two factor authentication system is to enhance the strength of the authentication process [1]. This goal is achieved by checking an additional factor, "something you have" as well as "something you know", making sure that the user holds a hardware device of some kind in addition to the password. The hardware device provided to the user may be able to communicate directly and independently with the authentication infrastructure using an additional communication channel; this particular feature is known as "separation of channels". Bruce Schneier observed in 2005 that "some years ago the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses" [2]. The common threats that an MFAS in a Web environment should correctly address include:
Credential Theft (phishing, eavesdropping, MITM - e.g., banking from a compromised network)
Weak Credentials (password guessing and password brute-forcing attacks)
Session-based attacks (Session Riding, Session Fixation)
Trojan and Malware attacks (banking from compromised clients)
Password Reuse (using the same password for different purposes or operations, e.g., different transactions)
The optimal solution should be able to address all the possible attacks in the 5 categories above. Since the strength of an authentication solution is generally classified by how many authentication factors are checked when the user interacts with the computing system, the typical IT professional's advice is: "If you are not happy with your current authentication solution, just add another authentication factor and it will be all right" [3]. Unfortunately, as we will see in the next paragraphs, the risk associated with attacks performed by motivated attackers cannot be totally eliminated; in addition, some MFAS solutions are more flexible and secure than others. Considering the 5 Threats (5T) above, we can analyze the strength of a particular MFAS solution by asking whether the solution Addresses, Mitigates, or does Not Remediate each particular Web attack.
Gray Box testing and example
A minimum amount of information about the authentication schema in use is necessary for testing the security of the MFAS solution in place. This is the main reason why the Black Box Testing section has been omitted. In particular, a general knowledge about the whole authentication infrastructure is important because:
MFAS solutions are principally implemented to authenticate disposal operations. Disposal actions are supposed to be performed in the inner parts of the secure website.
Attacks carried out successfully against MFAS are performed with a high degree of control over what is happening. This statement is usually true because attackers can grab detailed information about a particular authentication infrastructure by harvesting any data they can intercept through Malware attacks. Assuming that an attacker must be a customer to know how the authentication of a banking website works is not always correct; the attackers just need to get control of a single customer to study the entire security infrastructure of a particular website (Authors of SilentBanker Trojan [4] are known for continuously collecting information about visited websites while infected users browse the internet. Another example is the attack performed against the Swedish Nordea bank in 2005 [5]).
The following examples are about a security evaluation of different MFAS, based upon the 5T model presented above. The most common authentication solution for Web applications is User ID and password authentication; in this case, an additional password for authorizing wire transfers is often required. MFAS solutions add "something you have" to the authentication process. This component is usually one of the following:
One-time password (OTP) generator token.
Grid Card, Scratch Card, or any information that only the legitimate user is supposed to have in his wallet
Crypto devices like USB tokens or smart cards, equipped with X.509 certificates.
Randomly generated OTPs transmitted through GSM SMS messages [SMSOTP] [6]
The following examples are about the testing and evaluation of different implementations of MFAS similar to the ones above. Penetration testers should consider all possible weaknesses of the current solution in order to propose the correct mitigating factors, in case the infrastructure is already in place. A correct evaluation may also make it possible to choose the right MFAS for the infrastructure during a preliminary solution selection. A mitigating factor is any additional component or countermeasure that might reduce the likelihood of exploitation of a particular vulnerability.
Credit cards are a perfect example: "Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card's existence isn't even verified. The credit card companies spend their security dollar controlling the transaction, not the cardholder" [7]. Transactions can be effectively controlled by behavioral algorithms that automatically build a risk score while the user uses his own credit card; anything flagged as suspicious can be temporarily blocked by the circuit.
Another mitigating factor is informing the customer about what is happening through a separate and secure channel. The credit card industry uses this method to inform users about credit card transactions via SMS messages. If a fraudulent action is taken, the user knows immediately that something has gone wrong with his credit card. Real-time information through separate channels can achieve even higher accuracy by informing the user about transactions before those transactions are completed.
A common "User ID, password and disposal password" scheme usually protects from (3) and partially from (2). It usually does not protect from (1), (4), and (5). From a penetration tester's point of view, to correctly test this kind of authentication system, we should concentrate on what the solution is supposed to protect from.
In other words, the adopters of a User ID, password, and disposal password authentication solution should be protected from (2) and (3). A penetration tester should check whether the current implementation effectively enforces the adoption of strong passwords, and whether it is resilient to session-based attacks (e.g., Cross Site Request Forgery attacks that force the user into submitting unwanted disposal operations).
Vulnerability Chart for UserID + Password + Disposal Password based authentication:
Known Weaknesses: 1, 4, 5
Known Weaknesses (Details): This technology doesn't protect from (1), because the password is static and can be stolen through blended threat attacks [8] (e.g., a MITM attack against an SSLv2 connection). It doesn't protect from (4) and (5), because it's possible to submit multiple transactions with the same disposal password.
Strengths (if well implemented): 2, 3
Strengths (Details): This technology protects from (2) only if password enforcement rules are in place. It protects from (3) because the need for a disposal password does not permit an attacker to abuse the current user session to submit disposal operations [9].
Now let's analyze some different implementations of MFAS. "One Time Password Tokens" protect from (1), (2), and (3) if well implemented; they do not always protect from (5), and almost never protect from (4).
Vulnerability Chart for "One Time Password Tokens" based authentication:
Known Weaknesses: 4, sometimes 5
Known Weaknesses (Details): OTP tokens do not protect from (4), because banking malware is able to modify web traffic in real time according to pre-configured rules; examples of this kind include the malicious codes SilentBanker, Mebroot, and the Anserin Trojan. Banking malware works like a web proxy interacting with HTTPS pages. Since such malware takes total control of the compromised client, any action that the user performs is registered and controlled: the malware may stop a legitimate transaction and redirect the wire transfer to a different location. Password reuse (5) is a vulnerability that may affect OTP tokens. Tokens are valid for a certain amount of time, e.g., 30 seconds; if the authentication server does not discard tokens that have already been used, a single token may authenticate multiple transactions during its 30-second lifetime.
Strengths (if well implemented): 1,2,3
Strengths (Details): OTP tokens mitigate (1) effectively, because the token lifetime is usually very short: within 30 seconds the attacker would have to steal the token, enter the banking website, and perform a transaction. It could be feasible, but it's not usually going to happen in large-scale attacks. They usually protect from (2) because OTP HMAC values are at least 6 digits long; penetration testers should check that the algorithm implemented by the OTP tokens under test is safe enough and not predictable. Finally, they usually protect from (3) because the disposal token is always required; penetration testers should verify that the procedure of requesting the validation token cannot be bypassed.
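The password-reuse weakness in (5) comes down to whether the server discards consumed tokens within their validity window. A minimal sketch of the server-side fix (the class and its names are our own illustration, not any vendor's API):

```python
import time

class OTPValidator:
    """Accept an OTP only once within its validity window, so that a
    sniffed or stolen token cannot authorize a second transaction."""

    def __init__(self, lifetime=30):
        self.lifetime = lifetime          # token validity window, seconds
        self._consumed = {}               # otp -> time it was first used

    def validate(self, otp, expected, now=None):
        now = time.time() if now is None else now
        # drop bookkeeping for tokens whose window has already closed
        self._consumed = {t: ts for t, ts in self._consumed.items()
                          if now - ts < self.lifetime}
        if otp != expected or otp in self._consumed:
            return False                  # wrong value, or replayed token
        self._consumed[otp] = now         # mark the token as used
        return True
```

During testing, submitting the same OTP for two transactions inside its lifetime reveals immediately whether the server implements this check.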
"Grid cards, scratch cards, and any information that only the legitimate user is supposed to have in his wallet" should protect from (1), (2), and (3). Like OTP tokens, they cannot protect from (4). During testing activities, grid cards in particular have been found vulnerable to (5); scratch cards are not vulnerable to password reuse, because any code can be used just one time.
The penetration tester, during the assessment of technologies of this kind, should pay particular attention to password reuse attacks (5) for grid cards. A grid-card-based system commonly requests the same code multiple times. An attacker would just need to know a single valid disposal code (e.g., one of those inside the grid card) and wait until the system requests the code that he knows. Tested grid cards that contain a limited number of combinations are usually prone to this vulnerability (e.g., if a grid card contains 50 combinations, the attacker just needs to keep requesting a disposal operation, filling in the fields, checking the challenge, and so on; this attack is not about brute-forcing the disposal code, it's about brute-forcing the challenge). Other common mistakes include a weak password policy: any disposal password contained inside the grid card should have a length of at least 6 digits. Attacks can be very effective in combination with blended threats or Cross Site Request Forgeries.
"Crypto devices with certificates (USB tokens, smart cards)" offer a good layer of defense from (1) and (2). It's a common mistake to believe that they always protect from (3), (4), and (5). Unfortunately, these technologies offer the best security promises and, at the same time, some of the worst implementations around. USB tokens vary from vendor to vendor: some of them authorize the user when they are plugged in, and do not authorize operations when they are unplugged. This seems to be good behavior, but in practice some of them add further layers of implicit authentication.
Such devices do not protect users from (3) (e.g., Session Riding and Cross Site Scripting code that automates transfers). Custom randomly generated OTPs transmitted through GSM SMS messages [SMSOTP] can protect effectively from (1), (2), (3), and (5), and can also mitigate (4) effectively if well implemented. This solution, compared to the previous ones, is the only one that uses an independent channel to communicate with the banking infrastructure, and it is usually very effective if well implemented: by separating the communication channels, it is possible to inform the user about what is going on. Example of a disposal token sent via SMS:
"This token: 32982747 authorizes a wire transfer of $ 1250.4 to bank account 2345623 Bank of NY".
The previous token authorizes a unique transaction that is reported inside the text of the SMS message. In this way, the user can verify that the intended transfer is effectively going to the right bank account. The approach described in this section is intended to provide a simple methodology to evaluate Multiple Factor Authentication Systems. The examples shown are taken from real-case scenarios and can be used as a starting point for analyzing the efficacy of a custom MFAS.
References
Whitepapers
[1] [Definition] Wikipedia, Definition of Two Factor Authentication - http://en.wikipedia.org/wiki/Two-factor_authentication
[2] [SCHNEIER] Bruce Schneier, blog posts about two factor authentication, 2005 - http://www.schneier.com/blog/archives/2005/03/the_failure_of.html, http://www.schneier.com/blog/archives/2005/04/more_on_twofact.html
[3] [Finetti] Guido Mario Finetti, "Web application security in un-trusted client scenarios" - http://www.scmagazineuk.com/Web-application-security-in-un-trusted-client-scenarios/article/110448
[4] [SilentBanker Trojan] Symantec, "Banking in Silence" - http://www.symantec.com/enterprise/security_response/weblog/2008/01/banking_in_silence.html
[5] [Nordea] Finextra, "Phishing attacks against two factor authentication", 2005 - http://www.finextra.com/fullstory.asp?id=14384
[6] [SMSOTP] Bruce Schneier, "Two-Factor Authentication with Cell Phones", November 2004 - http://www.schneier.com/blog/archives/2004/11/twofactor_authe.html
[7] [Transaction Authentication Mindset] Bruce Schneier, "Fighting Fraudulent Transactions" - http://www.schneier.com/blog/archives/2006/11/fighting_fraudu.html
[8] [Blended Threat] http://en.wikipedia.org/wiki/Blended_threat
[9] [GUNTEROLLMANN] Gunter Ollmann, "Web Based Session Management: Best practices in managing HTTP-based client sessions" - http://www.technicalinfo.net/papers/WebBasedSessionManagement.htm
4.4.10 Testing for Race Conditions (OWASP-AT-010)
Brief Summary
A race condition is a flaw that produces an unexpected result when the timing of actions impacts other actions. An example may be seen in a multithreaded application where actions are being performed on the same data. Race conditions, by their very nature, are difficult to test for.
Description of the Issue
Race conditions may occur when a process is critically or unexpectedly dependent on the sequence or timing of other events. In a web application environment, where multiple requests can be processed at a given time, developers may leave concurrency to be handled by the framework, server, or programming language. The following simplified example illustrates a potential concurrency problem in a transactional web application, and relates to a joint savings account in which both users (threads) are logged into the same account and attempting a transfer.
Account A has 100 credits. Account B has 100 credits. Both User 1 and User 2 want to transfer 10 credits from Account A to Account B. If the transactions were handled correctly, the outcome would be: Account A has 80 credits, Account B has 120 credits. However, due to concurrency issues, the following result can be obtained:
User 1 checks the value of Account A (=100 credits)
User 2 checks the value of Account A (=100 credits)
User 2 takes 10 credits from Account A (=90 credits) and puts them in Account B (=110 credits)
User 1 takes 10 credits from Account A (still believed to contain 100 credits) (=90 credits) and puts them into Account B (=120 credits)
Result: Account A has 90 credits, Account B has 120 credits.
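The lost-update interleaving above can be reproduced deterministically in a few lines, with each step mirroring one line of the walkthrough (no real threads are needed to see why the debit is lost):

```python
accounts = {"A": 100, "B": 100}

# Steps 1-2: both users read Account A before either writes.
user1_view_of_a = accounts["A"]       # User 1 checks A (=100)
user2_view_of_a = accounts["A"]       # User 2 checks A (=100)

# Step 3: User 2 transfers 10 based on its (still current) read.
accounts["A"] = user2_view_of_a - 10  # A = 90
accounts["B"] += 10                   # B = 110

# Step 4: User 1 transfers 10, overwriting A with its stale value.
accounts["A"] = user1_view_of_a - 10  # A = 90 again: one debit is lost
accounts["B"] += 10                   # B = 120

print(accounts)  # {'A': 90, 'B': 120} instead of the correct {'A': 80, 'B': 120}
```

The fix is to make the read-check-write sequence atomic (a database transaction with appropriate locking, or a compare-and-swap on the balance), so that one user's stale read can never overwrite the other's committed update.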
Another example can be seen in OWASP's WebGoat project in the Thread Safety lesson, which shows how a shopping cart can be manipulated to purchase items for less than their advertised price. This, as with the example above, is due to the data changing between its time of check and its time of use.
Black Box testing and example
Testing for race conditions is problematic due to their nature; external influences on testing, including server load, network latency, etc., will all play a part in the presence and detection of the condition. However, testing can be focused on specific transactional areas of the application, where the time-of-check to time-of-use of specific data variables could be adversely affected by concurrency issues. Black Box attempts to force a race condition may include making multiple simultaneous requests while observing the outcome for unexpected behavior. Examples of such areas are illustrated in the paper "On Race Vulnerabilities in Web Applications", cited in the references section. The authors suggest that it may be possible in certain circumstances to:
Create multiple user accounts with the same username.
Bypass account lockouts against brute forcing.
Testers should be aware of the security implications of race conditions and of the factors that make them difficult to test for.
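A common way to maximize the chance of hitting the race window is to release many identical requests at the same instant. The sketch below is our own illustration of that technique: a thread barrier holds the workers until all are ready, then releases them together. The request function is passed in as a parameter (here a stub; in a real test it would perform the HTTP call against the transactional endpoint):

```python
import threading

def fire_simultaneously(request_fn, count=10):
    """Start `count` threads, hold them at a barrier, then release all of
    them at once and collect each call's result for later inspection."""
    barrier = threading.Barrier(count)
    results = []
    lock = threading.Lock()

    def worker():
        barrier.wait()              # every thread blocks here...
        outcome = request_fn()      # ...then all fire together
        with lock:
            results.append(outcome)

    threads = [threading.Thread(target=worker) for _ in range(count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Inspecting the collected results for more successes than the application should allow (e.g., two accepted account creations for the same username, or more lockout-free login attempts than the policy permits) indicates a race condition.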
Gray Box testing and example
Code review may reveal likely areas of concern for concurrency issues. More information on reviewing code for concurrency issues can be found in the OWASP Code Review Guide's "Reviewing Code for Race Conditions" (https://www.owasp.org/index.php/Reviewing_Code_for_Race_Conditions).
References
iSec Partners - Concurrency Attacks in Web Applications: http://isecpartners.com/files/iSEC%20Partners%20-%20Concurrency%20Attacks%20in%20Web%20Applications.pdf
B. Sullivan and B. Hoffman - Premature Ajax-ulation and You: https://www.blackhat.com/presentations/bh-usa-07/Sullivan_and_Hoffman/Whitepaper/bh-usa-07-sullivan_and_hoffman-WP.pdf
Thread Safety Challenge in WebGoat: http://www.owasp.org/index.php/OWASP_WebGoat_Project
R. Paleari, D. Marrone, D. Bruschi, M. Monga - On Race Vulnerabilities in Web Applications: http://security.dico.unimi.it/~roberto/pubs/dimva08-web.pdf
4.5 Session Management Testing
At the core of any web-based application is the way in which it maintains state and thereby controls user interaction with the site. Session Management broadly covers all controls on a user from authentication to leaving the application. HTTP is a stateless protocol, meaning that web servers respond to client requests without linking them to each other. Even simple application logic requires a user's multiple requests to be associated with each other across a "session". This necessitates third party solutions, through either Off-The-Shelf (OTS) middleware and web server solutions, or bespoke developer implementations. Most popular web application environments, such as ASP and PHP, provide developers with built-in session handling routines. Some kind of identification token will typically be issued, which will be referred to as a Session ID or Cookie. There are a number of ways in which a web application may interact with a user. Each is dependent upon the nature of the site and the security and availability requirements of the application. Whilst there are accepted best practices for application development, such as those outlined in the OWASP Guide to Building Secure Web Applications, it is important that application security is considered within the context of the provider's requirements and expectations. In this chapter we describe the following items.
HYPERLINK "https://www.owasp.org/index.php/Testing_for_Session_Management_Schema" \o "Testing for Session Management Schema" 4.5.1 Testing for Session Management Schema (OWASP-SM-001)
This describes how to analyse a Session Management Schema, with the goal of understanding how the Session Management mechanism has been developed and whether it is possible to break it to bypass the user session. It explains how to test the security of session tokens issued to the client's browser: how to reverse engineer a cookie, and how to manipulate cookies to hijack a session.
HYPERLINK "https://www.owasp.org/index.php/Testing_for_cookies_attributes" \o "Testing for cookies attributes" 4.5.2 Testing for Cookies attributes (OWASP-SM-002)
Cookies are often a key attack vector for malicious users (typically, targeting other users) and, as such, the application should always take due diligence to protect cookies. In this section, we will look at how an application can take the necessary precautions when assigning cookies and how to test that these attributes have been correctly configured.
HYPERLINK "https://www.owasp.org/index.php/Testing_for_Session_Fixation" \o "Testing for Session Fixation" 4.5.3 Testing for Session Fixation (OWASP-SM-003)
When an application does not renew the cookie after a successful user authentication, it could be possible to find a session fixation vulnerability and force a user to utilize a cookie known to the attacker.
HYPERLINK "https://www.owasp.org/index.php/Testing_for_Exposed_Session_Variables" \o "Testing for Exposed Session Variables" 4.5.4 Testing for Exposed Session Variables (OWASP-SM-004)
Session Tokens represent confidential information because they tie the user identity to his own session. It is possible to test whether the session token is exposed to this vulnerability and try to create a replay session attack.
HYPERLINK "https://www.owasp.org/index.php/Testing_for_CSRF" \o "Testing for CSRF" 4.5.5 Testing for CSRF (OWASP-SM-005)Cross Site Request Forgery describes a way to force an unknowing user to execute unwanted actions on a web application in which he is currently authenticated. This section describes how to test an application to find this kind of vulnerability.
4.5.1 Testing for Session Management Schema (OWASP-SM-001)
Brief Summary
In order to avoid continuous authentication for each page of a website or service, web applications implement various mechanisms to store and validate credentials for a pre-determined timespan. These mechanisms are known as Session Management and, while they are important for the ease of use and user-friendliness of the application, they can be exploited by a penetration tester to gain access to a user account without the need to provide correct credentials. In this test, we want to check that cookies and other session tokens are created in a secure and unpredictable way. An attacker who is able to predict and forge a weak cookie can easily hijack the sessions of legitimate users.
Related Security Activities
Description of Session Management Vulnerabilities
See the OWASP articles on HYPERLINK "https://www.owasp.org/index.php/Category:Session_Management_Vulnerability" \o "Category:Session Management Vulnerability" Session Management Vulnerabilities.
Description of Session Management Countermeasures
See the OWASP articles on HYPERLINK "https://www.owasp.org/index.php/Category:Session_Management" \o "Category:Session Management" Session Management Countermeasures.
How to Avoid Session Management Vulnerabilities
See the HYPERLINK "https://www.owasp.org/index.php/Category:OWASP_Guide_Project" \o "Category:OWASP Guide Project" OWASP Development Guide article on how to HYPERLINK "https://www.owasp.org/index.php/Session_Management" \o "Session Management" Avoid Session Management Vulnerabilities.
How to Review Code for Session Management Vulnerabilities
See the HYPERLINK "https://www.owasp.org/index.php/Category:OWASP_Code_Review_Project" \o "Category:OWASP Code Review Project" OWASP Code Review Guide article on how to HYPERLINK "https://www.owasp.org/index.php/Codereview-Session-Management" \o "Codereview-Session-Management" Review Code for Session Management Vulnerabilities.
Description of the Issue
Cookies are used to implement session management and are described in detail in HYPERLINK "http://tools.ietf.org/html/rfc2965" \o "http://tools.ietf.org/html/rfc2965" RFC 2965. In a nutshell, when a user accesses an application which needs to keep track of the actions and identity of that user across multiple requests, a cookie (or more than one) is generated by the server and sent to the client. The client will then send the cookie back to the server in all following connections until the cookie expires or is destroyed. The data stored in the cookie can provide to the server a large spectrum of information about who the user is, what actions he has performed so far, what his preferences are, etc. therefore providing a state to a stateless protocol like HTTP.
A typical example is provided by an online shopping cart. Throughout the session of a user, the application must keep track of his identity, his profile, the products that he has chosen to buy, the quantity, the individual prices, the discounts, etc. Cookies are an efficient way to store and pass this information back and forth (other methods are URL parameters and hidden fields).
Due to the importance of the data that they store, cookies are therefore vital in the overall security of the application. Being able to tamper with cookies may result in hijacking the sessions of legitimate users, gaining higher privileges in an active session, and in general influencing the operations of the application in an unauthorized way. In this test we have to check whether the cookies issued to clients can resist a wide range of attacks aimed to interfere with the sessions of legitimate users and with the application itself. The overall goal is to be able to forge a cookie that will be considered valid by the application and that will provide some kind of unauthorized access (session hijacking, privilege escalation, ...). Usually the main steps of the attack pattern are the following:
cookie collection: collection of a sufficient number of cookie samples;
cookie reverse engineering: analysis of the cookie generation algorithm;
cookie manipulation: forging of a valid cookie in order to perform the attack. This last step might require a large number of attempts, depending on how the cookie is created (cookie brute-force attack).
Another pattern of attack consists of overflowing a cookie. Strictly speaking, this attack has a different nature, since here we are not trying to recreate a perfectly valid cookie. Instead, our goal is to overflow a memory area, thereby interfering with the correct behavior of the application and possibly injecting (and remotely executing) malicious code.
Black Box Testing and Examples
All interaction between the client and application should be tested at least against the following criteria:
Are all Set-Cookie directives tagged as Secure?
Do any Cookie operations take place over unencrypted transport?
Can the Cookie be forced over unencrypted transport?
If so, how does the application maintain security?
Are any Cookies persistent?
What Expires= times are used on persistent cookies, and are they reasonable?
Are cookies that are expected to be transient configured as such?
What HTTP/1.1 Cache-Control settings are used to protect Cookies?
What HTTP/1.0 Cache-Control settings are used to protect Cookies?
Cookie collection
The first step required in order to manipulate the cookie is obviously to understand how the application creates and manages cookies. For this task, we have to try to answer the following questions:
How many cookies are used by the application?
Surf the application. Note when cookies are created. Make a list of received cookies, the page that sets them (with the set-cookie directive), the domain for which they are valid, their value, and their characteristics.
Which parts of the application generate and/or modify the cookie?
Surfing the application, find which cookies remain constant and which get modified. What events modify the cookie?
Which parts of the application require this cookie in order to be accessed and utilized?
Find out which parts of the application need a cookie. Access a page, then try again without the cookie, or with a modified value of it. Try to map which cookies are used where.
A spreadsheet mapping each cookie to the corresponding application parts and the related information can be a valuable output of this phase.
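The cookie-to-page mapping described above can also be assembled programmatically from intercepted responses. The following sketch builds such an inventory from observed Set-Cookie headers; the page names and cookie values are hypothetical examples, not part of any real application:

```python
from http.cookies import SimpleCookie

def map_cookies(observations):
    """Build a cookie inventory from (page, set_cookie_header) pairs."""
    inventory = {}
    for page, header in observations:
        cookie = SimpleCookie()
        cookie.load(header)
        for name, morsel in cookie.items():
            entry = inventory.setdefault(
                name, {"set_by": set(), "values": set(), "domain": morsel["domain"]})
            entry["set_by"].add(page)     # which pages issue this cookie
            entry["values"].add(morsel.value)  # observed values, for variance analysis
    return inventory

# Hypothetical observations gathered while surfing the application
observations = [
    ("/login", "SESSIONID=a1b2c3; Domain=.example.com; Path=/"),
    ("/cart",  "SESSIONID=a1b2c3; Domain=.example.com; Path=/"),
    ("/cart",  "CARTID=42; Path=/cart"),
]
inv = map_cookies(observations)
```

The resulting dictionary plays the role of the spreadsheet: one row per cookie, with the issuing pages, domain, and the set of values seen so far.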
Session Analysis
The session tokens (Cookie, SessionID or Hidden Field) themselves should be examined to ensure their quality from a security perspective. They should be tested against criteria such as their randomness, uniqueness, resistance to statistical and cryptographic analysis and information leakage.
Token Structure & Information Leakage
The first stage is to examine the structure and content of a Session ID provided by the application. A common mistake is to include specific data in the Token instead of issuing a generic value and referencing real data at the server side. If the Session ID is clear-text, the structure and pertinent data may be immediately obvious, as in the following:
192.168.100.1:owaspuser:password:15:58
If part or all of the token appears to be encoded or hashed, it should be compared against common encoding and hashing techniques to check for obvious obfuscation. For example, the string 192.168.100.1:owaspuser:password:15:58 is represented below in Hex, Base64, and as an MD5 hash:
Hex 3139322E3136382E3130302E313A6F77617370757365723A70617373776F72643A31353A3538
Base64 MTkyLjE2OC4xMDAuMTpvd2FzcHVzZXI6cGFzc3dvcmQ6MTU6NTg=
MD5 01c2fc4f0a817afd8366689bd29dd40a
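The candidate encodings above are easy to reproduce, so a captured token can be compared against encodings of a guessed plaintext. A minimal Python sketch using only the standard library (note that MD5 is one-way, so the only direct check is hashing the candidate plaintext and comparing against the token):

```python
import base64
import hashlib

# Candidate plaintext deduced from the application's context
token = "192.168.100.1:owaspuser:password:15:58"

# Compute the three representations shown above
hex_form = token.encode().hex().upper()
base64_form = base64.b64encode(token.encode()).decode()
md5_form = hashlib.md5(token.encode()).hexdigest()

# Each of these can now be compared against the captured session token
```

If one of the computed forms matches the observed token, the encoding is identified and every other token in the sample can be decoded (or, for hashes, brute-forced against guessed inputs) the same way.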
Having identified the type of obfuscation, it may be possible to decode back to the original data. In most cases, however, this is unlikely. Even so, it may be useful to enumerate the encoding in place from the format of the message. Furthermore, if both the format and obfuscation technique can be deduced, automated brute-force attacks could be devised. Hybrid tokens may include information such as IP address or User ID together with an encoded portion, as the following:
owaspuser:192.168.100.1: a7656fafe94dae72b1e1487670148412
Having analyzed a single session token, the representative sample should be examined. A simple analysis of the tokens should immediately reveal any obvious patterns. For example, a 32 bit token may include 16 bits of static data and 16 bits of variable data. This may indicate that the first 16 bits represent a fixed attribute of the user e.g. the username or IP address. If the second 16 bit chunk is incrementing at a regular rate, it may indicate a sequential or even time-based element to the token generation. See examples. If static elements to the Tokens are identified, further samples should be gathered, varying one potential input element at a time. For example, login attempts through a different user account or from a different IP address may yield a variance in the previously static portion of the session token. The following areas should be addressed during the single and multiple Session ID structure testing:
What parts of the Session ID are static?
What clear-text confidential information is stored in the Session ID? E.g. usernames/UID, IP addresses
What easily decoded confidential information is stored?
What information can be deduced from the structure of the Session ID?
What portions of the Session ID are static for the same login conditions?
What obvious patterns are present in the Session ID as a whole, or individual portions?
Session ID Predictability and Randomness
Analysis of the variable areas (if any) of the Session ID should be undertaken to establish the existence of any recognizable or predictable patterns. These analyses may be performed manually and with bespoke or OTS statistical or cryptanalytic tools in order to deduce any patterns in the Session ID content. Manual checks should include comparisons of Session IDs issued for the same login conditions e.g., the same username, password, and IP address. Time is an important factor which must also be controlled. High numbers of simultaneous connections should be made in order to gather samples in the same time window and keep that variable constant. Even a quantization of 50ms or less may be too coarse and a sample taken in this way may reveal time-based components that would otherwise be missed. Variable elements should be analyzed over time to determine whether they are incremental in nature. Where they are incremental, patterns relating to absolute or elapsed time should be investigated. Many systems use time as a seed for their pseudo-random elements. Where the patterns are seemingly random, one-way hashes of time or other environmental variations should be considered as a possibility. Typically, the result of a cryptographic hash is a decimal or hexadecimal number so should be identifiable. In analyzing Session ID sequences, patterns or cycles, static elements and client dependencies should all be considered as possible contributing elements to the structure and function of the application.
Are the Session IDs provably random in nature? I.e., can the resulting values be reproduced?
Do the same input conditions produce the same ID on a subsequent run?
Are the Session IDs provably resistant to statistical or cryptanalysis?
What elements of the Session IDs are time-linked?
What portions of the Session IDs are predictable?
Can the next ID be deduced, given full knowledge of the generation algorithm and previous IDs?
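As a first, very rough pass over a collected sample, a chi-square test of character frequencies can flag heavy bias in the token alphabet. This is purely illustrative (the sample tokens below are synthetic); serious analysis should rely on proper statistical and cryptanalytic tools such as ENT, referenced below:

```python
from collections import Counter

def chi_square_uniform(tokens, charset="0123456789abcdef"):
    """Chi-square statistic of character frequencies vs. a uniform draw.
    Large values indicate heavy bias; a genuinely uniform source yields
    values around len(charset) - 1 on average."""
    counts = Counter("".join(tokens))
    n = sum(counts.values())
    expected = n / len(charset)
    return sum((counts.get(c, 0) - expected) ** 2 / expected for c in charset)

# Synthetic extremes for illustration
biased = ["aaaaaaaaaaaaaaaa"] * 10   # a single repeated character
spread = ["0123456789abcdef"] * 10   # every character equally represented
```

A statistic far above the charset size warrants deeper investigation of the generation algorithm; a low value does not prove randomness, since sequential counters can still have flat character frequencies.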
Cookie reverse engineering
Now that we have enumerated the cookies and have a general idea of their use, it is time to have a deeper look at cookies that seem interesting. Which cookies are we interested in? A cookie, in order to provide a secure method of session management, must combine several characteristics, each of which is aimed at protecting the cookie from a different class of attacks. These characteristics are summarized below:
Unpredictability: a cookie must contain some amount of hard-to-guess data. The harder it is to forge a valid cookie, the harder is to break into legitimate user's session. If an attacker can guess the cookie used in an active session of a legitimate user, he/she will be able to fully impersonate that user (session hijacking). In order to make a cookie unpredictable, random values and/or cryptography can be used.
Tamper resistance: a cookie must resist malicious attempts of modification. If we receive a cookie like IsAdmin=No, it is trivial to modify it to get administrative rights, unless the application performs a double check (for instance, appending to the cookie an encrypted hash of its value).
Expiration: a critical cookie must be valid only for an appropriate period of time and must be deleted from disk/memory afterwards, in order to avoid the risk of being replayed. This does not apply to cookies that store non-critical data that needs to be remembered across sessions (e.g., site look-and-feel).
Secure flag: a cookie whose value is critical for the integrity of the session should have this flag enabled in order to allow its transmission only in an encrypted channel to deter eavesdropping.
The approach here is to collect a sufficient number of instances of a cookie and start looking for patterns in their value. The exact meaning of sufficient can vary from a handful of samples, if the cookie generation method is very easy to break, to several thousands, if we need to proceed with some mathematical analysis (e.g., chi-squares, attractors. See later for more information).
It is important to pay particular attention to the workflow of the application, as the state of a session can have a heavy impact on collected cookies: a cookie collected before being authenticated can be very different from a cookie obtained after the authentication.
Another aspect to take into consideration is time: always record the exact time when a cookie has been obtained, when there is the possibility that time plays a role in the value of the cookie (the server could use a timestamp as part of the cookie value). The time recorded could be the local time or the server's timestamp included in the HTTP response (or both).
Analyzing the collected values, try to figure out all variables that could have influenced the cookie value and try to vary them one at the time. Passing to the server modified versions of the same cookie can be very helpful in understanding how the application reads and processes the cookie.
Examples of checks to be performed at this stage include:
What character set is used in the cookie? Does the cookie have a numeric value? An alphanumeric one? A hexadecimal one? What happens if we insert in a cookie characters that do not belong to the expected charset?
Is the cookie composed of different sub-parts carrying different pieces of information? How are the different parts separated? With which delimiters? Some parts of the cookie could have a higher variance, others might be constant, others could assume only a limited set of values. Breaking down the cookie to its base components is the first and fundamental step. An example of an easy-to-spot structured cookie is the following:
ID=5a0acfc7ffeb919:CR=1:TM=1120514521:LM=1120514521:S=j3am5KzC4v01ba3q
In this example we see 5 different fields, carrying different types of data:
ID hexadecimal
CR small integer
TM and LM large integers (curiously, they hold the same value; it is worth seeing what happens when one of them is modified)
S alphanumeric
Even when no delimiters are used, having enough samples can help. As an example, let's look at the following series:
0123456789abcdef
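For a delimited cookie like the ID/CR/TM/LM/S example above, the decomposition step is mechanical. A minimal sketch (the cookie value is the example from the text):

```python
def split_cookie(value):
    """Break a ':'-delimited cookie into its name=value sub-parts."""
    return dict(field.split("=", 1) for field in value.split(":"))

fields = split_cookie(
    "ID=5a0acfc7ffeb919:CR=1:TM=1120514521:LM=1120514521:S=j3am5KzC4v01ba3q")
```

Once the parts are isolated, each one can be tracked independently for variance across the collected samples, which is exactly the analysis described above.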
Brute Force Attacks
Brute force attacks inevitably lead on from questions relating to predictability and randomness. The variance within the Session IDs must be considered together with application session durations and timeouts. If the variation within the Session IDs is relatively small, and Session ID validity is long, the likelihood of a successful brute-force attack is much higher. A long Session ID (or rather one with a great deal of variance) and a shorter validity period would make it far harder to succeed in a brute force attack.
How long would a brute-force attack on all possible Session IDs take?
Is the Session ID space large enough to prevent brute forcing? For example, is the length of the key sufficient when compared to the valid life-span?
Do delays between connection attempts with different Session IDs mitigate the risk of this attack?
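These questions can be framed with a back-of-the-envelope calculation. The sketch below assumes an attacker rate of 10,000 guesses per second (an arbitrary figure chosen for illustration) and compares the expected time to cover half the ID space against the session validity window:

```python
def brute_force_seconds(charset_size, id_length, guesses_per_second):
    """Expected seconds to hit a valid ID, covering half the space on average."""
    keyspace = charset_size ** id_length
    return keyspace / 2 / guesses_per_second

# 8 hexadecimal characters: roughly 2.5 days at 10,000 guesses/s
short_id = brute_force_seconds(16, 8, 10_000)

# 32 hexadecimal characters: astronomically out of reach at any realistic rate
long_id = brute_force_seconds(16, 32, 10_000)
```

If the expected time is smaller than (or comparable to) the session validity period, the ID space is too small for the chosen timeout, regardless of how random the generation algorithm is.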
Cookie manipulation
Once you have squeezed out as much information as possible from the cookie, it is time to start to modify it. The methodologies here heavily depend on the results of the analysis phase, but we can provide some examples:
Example 1: cookie with identity in clear text
In figure 1 we show an example of cookie manipulation in an application that allows subscribers of a mobile telecom operator to send MMS messages via the Internet. Surfing the application using OWASP WebScarab or BurpProxy, we can see that after the authentication process the cookie msidnOneShot contains the sender's telephone number: this cookie is used to identify the user for the service payment process. However, the phone number is stored in clear text and is not protected in any way. Thus, if we modify the cookie from msidnOneShot=3*******59 to msidnOneShot=3*******99, the mobile user who owns the number 3*******99 will pay for the MMS message!
Example of Cookie with identity in clear text
Example 2: guessable cookie
An example of a cookie whose value is easy to guess and that can be used to impersonate other users can be found in OWASP WebGoat, in the Weak Authentication cookie lesson. In this example, you start with the knowledge of two username/password couples (corresponding to the users 'webgoat' and 'aspect'). The goal is to reverse engineer the cookie creation logic and break into the account of user 'alice'. Authenticating to the application using these known couples, you can collect the corresponding authentication cookies. In table 1 you can find the associations that bind each username/password couple to the corresponding cookie, together with the exact login time.
Username   Password   Authentication Cookie   Time
webgoat    webgoat    65432ubphcfx            10/7/2005-10:10
webgoat    webgoat    65432ubphcfx            10/7/2005-10:11
aspect     aspect     65432udfqtb             10/7/2005-10:12
aspect     aspect     65432udfqtb             10/7/2005-10:13
alice      ?????      ???????????
Cookie collections
First of all, we can note that the authentication cookie remains constant for the same user across different logons, showing a first critical vulnerability to replay attacks: if we are able to steal a valid cookie (using for example a XSS vulnerability), we can use it to hijack the session of the corresponding user without knowing his/her credentials. Additionally, we note that the webgoat and aspect cookies have a common part: 65432u. 65432 seems to be a constant integer. What about u? The strings webgoat and aspect both end with the t letter, and u is the letter following it. So let's see the letter following each letter in webgoat:
1st char: w + 1 =x
2nd char: e + 1 = f
3rd char: b + 1 = c
4th char: g + 1= h
5th char: o + 1= p
6th char: a + 1= b
7th char: t + 1 = u
We obtain xfchpbu, which, inverted, gives us exactly ubphcfx. The algorithm fits perfectly also for the user 'aspect', so we only have to apply it to the user 'alice', for which the cookie results to be 65432fdjmb. We repeat the authentication to the application providing the 'webgoat' credentials, substitute the received cookie with the one that we have just calculated for alice, and... bingo! Now the application identifies us as alice instead of webgoat.
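The reverse-engineered scheme can be captured in a few lines (a sketch of the logic deduced above, not WebGoat's actual source code):

```python
def weak_cookie(username):
    """WebGoat 'Weak Authentication Cookie' scheme as deduced:
    shift each character to its successor, reverse the result,
    and prepend the constant '65432'."""
    shifted = "".join(chr(ord(c) + 1) for c in username)
    return "65432" + shifted[::-1]

# Reproduces the collected samples and predicts alice's cookie
cookies = {u: weak_cookie(u) for u in ("webgoat", "aspect", "alice")}
```

Running this reproduces the two observed cookies exactly and yields 65432fdjmb for alice, confirming the hypothesis without further samples.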
Brute force
A brute force attack to find the right authentication cookie can be a very time-consuming technique. Foundstone CookieDigger can help to collect a large number of cookies, giving the average length and the character set of the cookie. The tool then compares the different values of the cookie to check how many characters change on each subsequent login. If the cookie values do not remain the same on subsequent logins, CookieDigger gives the attacker longer periods of time to perform brute force attempts. In the following table we show an example in which we have collected all the cookies from a public site, attempting 10 authentications. For every type of cookie collected, there is an estimate of all the possible attempts needed to brute force the cookie.
CookieName          Has Username or Password   Average Length   Character Set                                                           Randomness Index   Brute Force Attempts
X_ID                False                      820              0-9, a-f                                                                52.43              2.60699329187639E+129
COOKIE_IDENT_SERV   False                      54               +, /-9, A-N, P-X, Z, a-z                                                31.19              12809303223894.6
X_ID_YACAS          False                      820              0-9, a-f                                                                52.52              4.46965862559887E+129
COOKIE_IDENT        False                      54               +, /-9, A-N, P-X, Z, a-z                                                31.19              12809303223894.6
X_UPC               False                      172              0-9, a-f                                                                23.95              2526014396252.81
CAS_UPC             False                      172              0-9, a-f                                                                23.95              2526014396252.81
CAS_SCC             False                      152              0-9, a-f                                                                34.65              7.14901878613151E+15
COOKIE_X            False                      32               +, /, 0, 8, 9, A, C, E, K, M, O, Q, R, W-Y, e-h, l, m, q, s, u, y, z    0                  1
vgnvisitor          False                      26               0-2, 5, 7, A, D, F-I, K-M, O-Q, W-Y, a-h, j-q, t, u, w-y, ~             33.59              18672264717.3479
Sample values collected for X_ID:
5573657249643a3d333335363937393835323b4d736973646e3a3d333335363937393835323b537461746f436f6e73656e736f3a3d303b4d65746f646f417574656e746963..0525147746d6e673d3d
5573657249643a3d333335363937393835323b4d736973646e3a3d333335363937393835323b537461746f436f6e73656e736f3a3d303b4d65746f646f417574656e746963617a696f6e6..354730632f5346673d3d
An example of CookieDigger report
Overflow
Since the cookie value, when received by the server, will be stored in one or more variables, there is always the chance of performing a boundary violation of that variable. Overflowing a cookie can lead to all the outcomes of buffer overflow attacks. A Denial of Service is usually the easiest goal, but the execution of remote code can also be possible. Usually, however, this requires some detailed knowledge about the architecture of the remote system, as any buffer overflow technique is heavily dependent on the underlying operating system and memory management in order to correctly calculate offsets to properly craft and align inserted code.
Example: HYPERLINK "http://seclists.org/lists/fulldisclosure/2005/Jun/0188.html" \o "http://seclists.org/lists/fulldisclosure/2005/Jun/0188.html" http://seclists.org/lists/fulldisclosure/2005/Jun/0188.html
Gray Box testing and example
If you have access to the session management schema implementation, you can check for the following:
Random Session Token
The Session ID or Cookie issued to the client should not be easily predictable (don't use linear algorithms based on predictable variables such as the client IP address). The use of cryptographic algorithms with a key length of 256 bits (such as AES) is encouraged.
Token length
The Session ID should be at least 50 characters in length.
Session Time-out
Session tokens should have a defined time-out (how long depends on the criticality of the data managed by the application)
Cookie configuration:
non-persistent: only RAM memory
secure (sent only over an HTTPS channel): Set-Cookie: cookie=data; path=/; domain=.aaa.it; secure
HYPERLINK "https://www.owasp.org/index.php/HTTPOnly" \o "HTTPOnly" HTTPOnly (not readable by a script): Set-Cookie: cookie=data; path=/; domain=.aaa.it; HYPERLINK "https://www.owasp.org/index.php/HTTPOnly" \o "HTTPOnly" HTTPOnly
More information here: HYPERLINK "https://www.owasp.org/index.php/Testing_for_cookies_attributes" \o "Testing for cookies attributes" Testing_for_cookies_attributes
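A quick way to verify these attributes during testing is to inspect each Set-Cookie header directly. The following sketch uses plain string parsing to check for the secure and HTTPOnly flags; the header value is just an example:

```python
def audit_set_cookie(header):
    """Report which protective attributes a Set-Cookie header carries."""
    parts = [p.strip().lower() for p in header.split(";")]
    # Skip the leading name=value pair; keep attribute names only
    attrs = {p.split("=")[0] for p in parts[1:]}
    return {"secure": "secure" in attrs, "httponly": "httponly" in attrs}

report = audit_set_cookie("cookie=data; path=/; domain=.aaa.it; secure; HttpOnly")
```

Any cookie carrying session-critical data for which the report shows a missing flag should be written up as a finding.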
References
Whitepapers
HYPERLINK "http://tools.ietf.org/html/rfc2965" \o "http://tools.ietf.org/html/rfc2965" RFC 2965 HTTP State Management Mechanism
HYPERLINK "http://tools.ietf.org/html/rfc1750" \o "http://tools.ietf.org/html/rfc1750" RFC 1750 Randomness Recommendations for Security
Strange Attractors and TCP/IP Sequence Number Analysis: HYPERLINK "http://www.bindview.com/Services/Razor/Papers/2001/tcpseq.cfm" \o "http://www.bindview.com/Services/Razor/Papers/2001/tcpseq.cfm" http://www.bindview.com/Services/Razor/Papers/2001/tcpseq.cfm
Correlation Coefficient: HYPERLINK "http://mathworld.wolfram.com/CorrelationCoefficient.html" \o "http://mathworld.wolfram.com/CorrelationCoefficient.html" http://mathworld.wolfram.com/CorrelationCoefficient.html
ENT: HYPERLINK "http://fourmilab.ch/random/" \o "http://fourmilab.ch/random/" http://fourmilab.ch/random/
HYPERLINK "http://seclists.org/lists/fulldisclosure/2005/Jun/0188.html" \o "http://seclists.org/lists/fulldisclosure/2005/Jun/0188.html" http://seclists.org/lists/fulldisclosure/2005/Jun/0188.html
Darrin Barrall: "Automated Cookie Analysis" HYPERLINK "http://www.spidynamics.com/assets/documents/SPIcookies.pdf" \o "http://www.spidynamics.com/assets/documents/SPIcookies.pdf" http://www.spidynamics.com/assets/documents/SPIcookies.pdf
Gunter Ollmann: "Web Based Session Management" - HYPERLINK "http://www.technicalinfo.net" \o "http://www.technicalinfo.net" http://www.technicalinfo.net
Matteo Meucci:"MMS Spoofing" - www.owasp.org/images/7/72/MMS_Spoofing.ppt
Tools
HYPERLINK "https://www.owasp.org/index.php/Category:OWASP_WebScarab_Project" \o "Category:OWASP WebScarab Project" OWASP's WebScarab features a session token analysis mechanism. You can read HYPERLINK "https://www.owasp.org/index.php/How_to_test_session_identifier_strength_with_WebScarab" \o "How to test session identifier strength with WebScarab" How to test session identifier strength with WebScarab.
Foundstone CookieDigger - HYPERLINK "http://www.foundstone.com/resources/proddesc/cookiedigger.htm" \o "http://www.foundstone.com/resources/proddesc/cookiedigger.htm" http://www.foundstone.com/resources/proddesc/cookiedigger.htm
4.5.2 Testing for Cookies attributes (OWASP-SM-002)
Brief Summary
Cookies are often a key attack vector for malicious users (typically targeting other users) and, as such, the application should always take due diligence to protect cookies. In this section, we will look at how an application can take the necessary precautions when assigning cookies and how to test that these attributes have been correctly configured.
Description of the Issue
The importance of secure use of Cookies cannot be overstated, especially within dynamic web applications, which need to maintain state across a stateless protocol such as HTTP. To understand the importance of cookies it is imperative to understand what they are primarily used for. These primary functions usually consist of being used as a session authorization/authentication token or as a temporary data container. Thus, if an attacker were by some means able to acquire a session token (for example, by exploiting a cross site scripting vulnerability or by sniffing an unencrypted session), then he/she could use this cookie to hijack a valid session. Additionally, cookies are set to maintain state across multiple requests. Since HTTP is stateless, the server cannot determine if a request it receives is part of a current session or the start of a new session without some type of identifier. This identifier is very commonly a cookie, although other methods are also possible. As you can imagine, there are many different types of applications that need to keep track of session state across multiple requests. The primary one that comes to mind would be an online store. As a user adds multiple items to a shopping cart, this data needs to be retained in subsequent requests to the application. Cookies are very commonly used for this task. They are set by the application using the Set-Cookie directive in the application's HTTP response, usually in a name=value format (if cookies are enabled and supported, which is the case for all modern web browsers). Once an application has told the browser to use a particular cookie, the browser will send this cookie in each subsequent request. A cookie can contain data such as items from an online shopping cart, the price of these items, the quantity of these items, personal information, user IDs, etc.
Due to the sensitive nature of information in cookies, they are typically encoded or encrypted in an attempt to protect the information they contain. Often, multiple cookies will be set (separated by a semicolon) upon subsequent requests. For example, in the case of an online store, a new cookie could be set as you add multiple items to your shopping cart. Additionally, you will typically have a cookie for authentication (session token as indicated above) once you login, and multiple other cookies used to identify the items you wish to purchase and their auxiliary information (i.e., price and quantity) in the online store type of application.
Now that you have an understanding of how cookies are set, when they are set, what they are used for, why they are used, and their importance, let's take a look at what attributes can be set for a cookie and how to test if they are secure. The following is a list of the attributes that can be set for each cookie and what they mean. The next section will focus on how to test for each attribute.
secure - This attribute tells the browser to only send the cookie if the request is being sent over a secure channel such as HTTPS. This will help protect the cookie from being passed over unencrypted requests.
If the application can be accessed over both HTTP and HTTPS, then there is the potential that the cookie can be sent in clear text.
HttpOnly - This attribute is used to help prevent attacks such as cross-site scripting, since it does not allow the cookie to be accessed via a client side script such as JavaScript. Note that not all browsers support this functionality.
domain - This attribute is used to compare against the domain of the server in which the URL is being requested. If the domain matches or if it is a sub-domain, then the path attribute will be checked next.
Note that only hosts within the specified domain can set a cookie for that domain. Also the domain attribute cannot be a top level domain (such as .gov or .com) to prevent servers from setting arbitrary cookies for another domain. If the domain attribute is not set, then the hostname of the server which generated the cookie is used as the default value of the domain. For example, if a cookie is set by an application at app.mydomain.com with no domain attribute set, then the cookie would be resubmitted for all subsequent requests for app.mydomain.com and its subdomains (such as hacker.app.mydomain.com), but not to otherapp.mydomain.com. If a developer wanted to loosen this restriction, then he could set the domain attribute to mydomain.com. In this case the cookie would be sent to all requests for app.mydomain.com and its subdomains, such as hacker.app.mydomain.com, and even bank.mydomain.com. If there was a vulnerable server on a subdomain (for example, otherapp.mydomain.com) and the domain attribute has been set too loosely (for example, mydomain.com), then the vulnerable server could be used to harvest cookies (such as session tokens).
path - In addition to the domain, the URL path can be specified for which the cookie is valid. If the domain and path match, then the cookie will be sent in the request.
Just as with the domain attribute, if the path attribute is set too loosely, then it could leave the application vulnerable to attacks by other applications on the same server. For example, if the path attribute was set to the web server root "/", then the application cookies will be sent to every application within the same domain.
expires - This attribute is used to set persistent cookies: the cookie does not expire until the set date has passed. A persistent cookie will be used by the current browser session and subsequent sessions until it expires; once the expiration date has passed, the browser will delete the cookie. Alternatively, if this attribute is not set, then the cookie is only valid for the current browser session and will be deleted when the session ends.
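Put together, these attributes appear in a single Set-Cookie header. The following is an illustrative example only; the cookie name and values are hypothetical:

```http
Set-Cookie: SESSIONID=d8eyYq3L0z2fgq10m4v; secure; HttpOnly; domain=app.mydomain.com; path=/myapp/; expires=Fri, 13-Jun-2010 13:45:29 GMT
```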
Black Box testing and example
Testing for cookie attribute vulnerabilities:
By using an intercepting proxy or a traffic-intercepting browser plug-in, trap all responses where a cookie is set by the application (using the Set-Cookie directive) and inspect the cookie for the following:
Secure Attribute - Whenever a cookie contains sensitive information or is a session token, it should always be passed over an encrypted tunnel. For example, after logging into an application, if a session token is set using a cookie, verify that it is tagged with the ";secure" flag. If it is not, then the browser considers it safe to send over an unencrypted channel such as HTTP.
HttpOnly Attribute - This attribute should always be set even though not every browser supports it. This attribute aids in securing the cookie from being accessed by a client side script so check to see if the ";HttpOnly" tag has been set.
Domain Attribute - Verify that the domain has not been set too loosely. As noted above, it should only be set for the server that needs to receive the cookie. For example if the application resides on server app.mysite.com, then it should be set to "; domain=app.mysite.com" and NOT "; domain=.mysite.com" as this would allow other potentially vulnerable servers to receive the cookie.
Path Attribute - Verify that the path attribute, just as the Domain attribute, has not been set too loosely. Even if the Domain attribute has been configured as tightly as possible, if the path is set to the root directory "/" then it can be vulnerable to less secure applications on the same server. For example, if the application resides at /myapp/, then verify that the cookie's path is set to "; path=/myapp/" and NOT "; path=/" or "; path=/myapp". Notice here that the trailing "/" must be used after myapp. If it is not used, the browser will send the cookie to any path that matches "myapp", such as "myapp-exploited".
Expires Attribute - Verify that, if this attribute is set to a time in the future, the cookie does not contain any sensitive information. For example, if a cookie is set to "; expires=Fri, 13-Jun-2010 13:45:29 GMT" and it is currently June 10th 2008, then you want to inspect the cookie. If the cookie is a session token that is stored on the user's hard drive, then an attacker or local user (such as an admin) who has access to this cookie can access the application by resubmitting this token until the expiration date passes.
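The attribute checks above lend themselves to automation. The following is a minimal sketch in Python that flags weak attributes in captured Set-Cookie headers; the function name and the sample header are illustrative, and in practice the header strings would come from an intercepting proxy such as WebScarab or Burp:

```python
# Minimal sketch: flag weak attributes in a captured Set-Cookie header value.
# The sample header below is hypothetical.

def audit_set_cookie(header):
    """Return (cookie name, list of warnings) for a Set-Cookie header value."""
    parts = [p.strip() for p in header.split(";")]
    name_value, attrs = parts[0], [p.lower() for p in parts[1:]]
    warnings = []
    if "secure" not in attrs:
        warnings.append("missing Secure flag (cookie may travel over HTTP)")
    if "httponly" not in attrs:
        warnings.append("missing HttpOnly flag (readable by client-side script)")
    for attr in attrs:
        # A leading dot means the cookie is shared with every sub-domain.
        if attr.startswith("domain=") and attr[len("domain="):].startswith("."):
            warnings.append("domain set too loosely: " + attr)
        if attr == "path=/":
            warnings.append("path set to web server root: " + attr)
    return name_value.split("=")[0], warnings

name, issues = audit_set_cookie(
    "JSESSIONID=0000d8eyYq3L0z2fgq10m4v-rt4:-1; Path=/; secure")
for issue in issues:
    print(name, "->", issue)
```

Running this over every Set-Cookie response observed in the proxy gives a quick first pass; each warning should then be verified manually against the attribute descriptions above.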
References
Whitepapers
RFC 2965 - HTTP State Management Mechanism - http://tools.ietf.org/html/rfc2965
RFC 2616 - Hypertext Transfer Protocol HTTP/1.1 - http://tools.ietf.org/html/rfc2616
Tools
Intercepting Proxy:
OWASP: WebScarab - http://www.owasp.org/index.php/Category:OWASP_WebScarab_Project
Dafydd Stuttard: Burp Proxy - http://portswigger.net/proxy/
MileSCAN: Paros Proxy - http://www.parosproxy.org/download.shtml
Browser Plug-in:
"TamperIE" for Internet Explorer - http://www.bayden.com/TamperIE/
Adam Judson: "Tamper Data" for Firefox - https://addons.mozilla.org/en-US/firefox/addon/966
4.5.3 Testing for Session Fixation (OWASP-SM-003)
Brief Summary
When an application does not renew the cookie after a successful user authentication, it may be possible to find a session fixation vulnerability and force a user to utilize a cookie known to the attacker. In that case, an attacker can steal the user's session (session hijacking).
Description of the Issue
Session fixation vulnerabilities occur when:
A web application authenticates a user without first invalidating the existing session ID, thereby continuing to use the session ID already associated with the user.
An attacker is able to force a known session ID on a user so that, once the user authenticates, the attacker has access to the authenticated session.
In the generic exploit of session fixation vulnerabilities, an attacker creates a new session on a web application and records the associated session identifier. The attacker then causes the victim to authenticate against the server using the same session identifier, giving the attacker access to the user's account through the active session. Furthermore, the issue described above is problematic for sites which issue a session identifier over HTTP and then redirect the user to a HTTPS login form. If the session identifier is not reissued upon authentication, the identifier may be eavesdropped and may be used by an attacker to hijack the session.
Black Box testing and example
Testing for Session Fixation vulnerabilities: The first step is to make a request to the site to be tested (for example, www.example.com). If we request the following:
GET / HTTP/1.1
Host: www.example.com
We will obtain the following response:
HTTP/1.1 200 OK
Date: Wed, 14 Aug 2008 08:45:11 GMT
Server: IBM_HTTP_Server
Set-Cookie: JSESSIONID=0000d8eyYq3L0z2fgq10m4v-rt4:-1; Path=/; secure
Cache-Control: no-cache="set-cookie,set-cookie2"
Expires: Thu, 01 Dec 1994 16:00:00 GMT
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html;charset=Cp1254
Content-Language: en-US
We observe that the application sets a new session identifier, JSESSIONID=0000d8eyYq3L0z2fgq10m4v-rt4:-1, for the client. Next, we successfully authenticate to the application with the following POST over HTTPS:
POST https://www.example.com/authentication.php HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.16) Gecko/20080702 Firefox/2.0.0.16
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: it-it,it;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://www.example.com
Cookie: JSESSIONID=0000d8eyYq3L0z2fgq10m4v-rt4:-1
Content-Type: application/x-www-form-urlencoded
Content-length: 57
Name=Meucci&wpPassword=secret!&wpLoginattempt=Log+in
We observe the following response from the server:
HTTP/1.1 200 OK
Date: Thu, 14 Aug 2008 14:52:58 GMT
Server: Apache/2.2.2 (Fedora)
X-Powered-By: PHP/5.1.6
Content-language: en
Cache-Control: private, must-revalidate, max-age=0
X-Content-Encoding: gzip
Content-length: 4090
Connection: close
Content-Type: text/html; charset=UTF-8
...
HTML data
...
Since no new cookie has been issued upon successful authentication, we know that it is possible to perform session hijacking. Result Expected: We can send a valid session identifier to a user (possibly using a social engineering trick), wait for them to authenticate, and subsequently verify that privileges have been assigned to this cookie.
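The comparison performed above can be scripted: capture the session identifier issued before authentication and the Set-Cookie header (if any) issued after, and flag the application when the token is unchanged. A minimal sketch, assuming the JSESSIONID cookie from the example; the helper names are illustrative:

```python
# Minimal sketch: detect a possible session fixation by comparing the
# session identifier issued before login with the one issued afterwards.
# The header strings are hypothetical captures from an intercepting proxy.

def extract_session_id(set_cookie_value, cookie_name="JSESSIONID"):
    """Pull the value of cookie_name out of a Set-Cookie header value, or None."""
    first = set_cookie_value.split(";", 1)[0].strip()
    name, _, value = first.partition("=")
    return value if name == cookie_name else None

def is_fixation_candidate(pre_login, post_login):
    """True if the application kept the pre-authentication session id."""
    before = extract_session_id(pre_login)
    after = extract_session_id(post_login) if post_login else None
    # No new Set-Cookie after login, or the same id reissued: suspicious.
    return after is None or after == before

pre = "JSESSIONID=0000d8eyYq3L0z2fgq10m4v-rt4:-1; Path=/; secure"
post = None  # the server issued no new cookie after authentication
print(is_fixation_candidate(pre, post))
```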
Gray Box testing and example
Talk with developers to understand whether they have implemented session token renewal after a successful user authentication. Result Expected: The application should always invalidate the existing session ID before authenticating a user and, if the authentication is successful, provide a new session ID.
References
Whitepapers
Session Fixation - https://www.owasp.org/index.php/Session_Fixation
Chris Shiflett: http://shiflett.org/articles/session-fixation
Tools
OWASP WebScarab: https://www.owasp.org/index.php/OWASP_WebScarab_Project
4.5.4 Testing for Exposed Session Variables (OWASP-SM-004)
Brief Summary
The session tokens (cookie, SessionID, hidden field), if exposed, will usually enable an attacker to impersonate a victim and access the application illegitimately. It is therefore important that they are protected from eavesdropping at all times, particularly whilst in transit between the client browser and the application servers.
Short Description of the Issue
The information here relates to how transport security applies to the transfer of sensitive Session ID data rather than data in general, and may be stricter than the caching and transport policies applied to the data served by the site. Using a personal proxy, it is possible to ascertain the following about each request and response:
Protocol used (e.g., HTTP vs. HTTPS)
HTTP Headers
Message Body (e.g., POST or page content)
Each time Session ID data is passed between the client and the server, the protocol, cache and privacy directives and body should be examined. Transport security here refers to Session IDs passed in GET or POST requests, message bodies, or other means over valid HTTP requests.
Black Box testing and example
Testing for Encryption & Reuse of Session Tokens vulnerabilities: Protection from eavesdropping is often provided by SSL encryption, but may incorporate other tunneling or encryption. It should be noted that encryption or cryptographic hashing of the Session ID should be considered separately from transport encryption, as it is the Session ID itself being protected, not the data that may be represented by it. If the Session ID could be presented by an attacker to the application to gain access, then it must be protected in transit to mitigate that risk. It should therefore be ensured that encryption is both the default and enforced for any request or response where the Session ID is passed, regardless of the mechanism used (e.g., a hidden form field). Simple checks, such as replacing https:// with http:// during interaction with the application, should be performed, together with modification of form posts to determine whether adequate segregation between the secure and non-secure sites is implemented. Note: if there is also an element of the site where the user is tracked with Session IDs but security is not present (e.g., noting which public documents a registered user downloads), it is essential that a different Session ID is used. The Session ID should therefore be monitored as the client switches from the secure to the non-secure elements to ensure a different one is used. Result Expected: Every time the authentication is successful, the user should expect to receive:
A different session token
A token sent over an encrypted channel with every HTTP request
Testing for Proxies & Caching vulnerabilities: Proxies must also be considered when reviewing application security. In many cases, clients will access the application through corporate, ISP, or other proxies or protocol-aware gateways (e.g., firewalls). The HTTP protocol provides directives to control the behaviour of downstream proxies, and the correct implementation of these directives should also be assessed. In general, the Session ID should never be sent over unencrypted transport and should never be cached. The application should therefore be examined to ensure that encrypted communications are both the default and enforced for any transfer of Session IDs. Furthermore, whenever the Session ID is passed, directives should be in place to prevent its caching by intermediate and even local caches. The application should also be configured to secure data in caches over both HTTP/1.0 and HTTP/1.1. RFC 2616 (http://tools.ietf.org/html/rfc2616) discusses the appropriate controls with reference to HTTP. HTTP/1.1 provides a number of cache control mechanisms: Cache-Control: no-cache indicates that a proxy must not re-use any data. Whilst Cache-Control: private appears to be a suitable directive, this still allows a non-shared proxy to cache data. In the case of web cafes or other shared systems, this presents a clear risk. Even with single-user workstations, the cached Session ID may be exposed through a compromise of the file system or where network stores are used. HTTP/1.0 caches do not recognise the Cache-Control: no-cache directive.
Result Expected: The Expires: 0 and Cache-Control: max-age=0 directives should be used to further ensure caches do not expose the data. Each request/response passing Session ID data should be examined to ensure appropriate cache directives are in use. Testing for GET & POST vulnerabilities: In general, GET requests should not be used, as the Session ID may be exposed in proxy or firewall logs. They are also far more easily manipulated than other types of transport, although it should be noted that almost any mechanism can be manipulated by the client with the right tools. Furthermore, Cross-site Scripting (XSS) attacks are most easily exploited by sending a specially constructed link to the victim. This is far less likely if data is sent from the client as POSTs. Result Expected: All server-side code receiving data from POST requests should be tested to ensure it does not accept the data if sent as a GET. For example, consider the following POST request generated by a login page.
POST http://owaspapp.com/login.asp HTTP/1.1
Host: owaspapp.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.0.2) Gecko/20030208 Netscape/7.02 Paros/3.0.2b
Accept: */*
Accept-Language: en-us, en
Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
Keep-Alive: 300
Cookie: ASPSESSIONIDABCDEFG=ASKLJDLKJRELKHJG
Cache-Control: max-age=0
Content-Type: application/x-www-form-urlencoded
Content-Length: 51
Login=Username&password=Password&SessionID=12345678
If login.asp is badly implemented, it may be possible to log in using the following URL:
http://owaspapp.com/login.asp?Login=Username&password=Password&SessionID=12345678
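Rewriting a captured POST into its GET equivalent, as done above, can be scripted so each form can be checked quickly. A minimal sketch; the function name is illustrative:

```python
# Minimal sketch: rewrite a captured POST into the equivalent GET URL so
# the tester can check whether the server (incorrectly) accepts it.
# The URL and parameters are the hypothetical login.asp example above.

from urllib.parse import parse_qsl, urlencode

def post_to_get(url, post_body):
    """Build a GET URL carrying the parameters of a POST body."""
    params = parse_qsl(post_body)  # decode the form-encoded body
    return url + "?" + urlencode(params)

print(post_to_get("http://owaspapp.com/login.asp",
                  "Login=Username&password=Password&SessionID=12345678"))
```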
Potentially insecure server-side scripts may be identified by checking each POST in this way. Testing for Transport vulnerabilities: All interaction between the client and the application should be tested at least against the following criteria:
How are Session IDs transferred? E.g., GET, POST, Form Field (including hidden fields)
Are Session IDs always sent over encrypted transport by default?
Is it possible to manipulate the application to send Session IDs unencrypted? E.g., by changing HTTP to HTTPS?
What cache-control directives are applied to requests/responses passing Session IDs?
Are these directives always present? If not, where are the exceptions?
Are GET requests incorporating the Session ID used?
If POST is used, can it be interchanged with GET?
References
Whitepapers
RFCs 2109 & 2965 HTTP State Management Mechanism [D. Kristol, L. Montulli] - www.ietf.org/rfc/rfc2965.txt, www.ietf.org/rfc/rfc2109.txt
RFC 2616 - Hypertext Transfer Protocol -- HTTP/1.1 - www.ietf.org/rfc/rfc2616.txt
4.5.5 Testing for CSRF (OWASP-SM-005)
Brief Summary
CSRF (https://www.owasp.org/index.php/CSRF) is an attack which forces an end user to execute unwanted actions on a web application in which he/she is currently authenticated. With a little help from social engineering (like sending a link via email or chat), an attacker may force the users of a web application to execute actions of the attacker's choosing. A successful CSRF exploit can compromise end user data and operations when it targets a normal user. If the targeted end user is the administrator account, a CSRF attack can compromise the entire web application.
Related Security Activities
Description of CSRF Vulnerabilities
See the OWASP article on CSRF Vulnerabilities - https://www.owasp.org/index.php/CSRF
How to Avoid CSRF Vulnerabilities
See the OWASP Development Guide article on how to Avoid CSRF Vulnerabilities - https://www.owasp.org/index.php?title=Guide_to_CSRF&action=edit
How to Review Code for CSRF Vulnerabilities
See the OWASP Code Review Guide article on how to Review Code for CSRF Vulnerabilities - https://www.owasp.org/index.php/Reviewing_code_for_Cross-Site_Request_Forgery_issues
Description of the Issue
CSRF relies on the following:
1) Web browser behavior regarding the handling of session-related information such as cookies and HTTP authentication information;
2) Knowledge of valid web application URLs on the side of the attacker;
3) Application session management relying only on information which is known by the browser;
4) Existence of HTML tags whose presence causes immediate access to an http[s] resource; for example, the image tag img.
Points 1, 2, and 3 are essential for the vulnerability to be present, while point 4 is accessory and facilitates the actual exploitation, but is not strictly required.
Point 1) Browsers automatically send information which is used to identify a user session. Suppose site is a site hosting a web application, and the user victim has just authenticated himself to site. In response, site sends victim a cookie which identifies requests sent by victim as belonging to victim's authenticated session. Basically, once the browser receives the cookie set by site, it will automatically send it along with any further requests directed to site.
Point 2) If the application does not make use of session-related information in URLs, then it means that the application URLs, their parameters and legitimate values may be identified (either by code analysis or by accessing the application and taking note of forms and URLs embedded in the HTML/JavaScript).
Point 3) By "known by the browser" we mean information such as cookies, or HTTP-based authentication information (such as Basic Authentication; NOT form-based authentication), which is stored by the browser and subsequently resent with each request directed towards an application area requiring that authentication. The vulnerabilities discussed next apply to applications which rely entirely on this kind of information to identify a user session.
For simplicity's sake, let us refer to GET-accessible URLs (though the discussion applies as well to POST requests). If victim has already authenticated himself, submitting another request causes the cookie to be automatically sent with it (see picture, where the user accesses an application on www.example.com).
[Image: Session_riding.GIF - https://www.owasp.org/images/f/f3/Session_riding.GIF]
The GET request could be originated in several different ways:
by the user, who is using the actual web application;
by the user, who types the URL directly in the browser;
by the user, who follows a link (external to the application) pointing to the URL.
These invocations are indistinguishable by the application. In particular, the third may be quite dangerous. There are a number of techniques (and of vulnerabilities) which can disguise the real properties of a link. The link can be embedded in an email message, or appear in a malicious web site to which the user is lured, i.e., the link appears in content hosted elsewhere (another web site, an HTML email message, etc.) and points to a resource of the application. If the user clicks on the link, since he was already authenticated by the web application on site, the browser will issue a GET request to the web application, accompanied by authentication information (the session id cookie). This results in a valid operation performed on the web application, which is probably not what the user expects to happen! Think of a malicious link causing a fund transfer on a web banking application to appreciate the implications.
By using a tag such as img, as specified in point 4 above, it is not even necessary that the user follows a particular link. Suppose the attacker sends the user an email inducing him to visit a URL referring to a page containing the following (oversimplified) HTML:
...
<img src="https://[thirdparty]/action" width="0" height="0">
...
When the browser displays this page, it will try to display the specified zero-width (i.e., invisible) image as well. This results in a request being automatically sent to the web application hosted on site. It is not important that the image URL does not refer to a proper image; its presence will trigger the request specified in the src field anyway. This happens provided that image download is not disabled in the browser, which is the typical configuration, since disabling images would cripple most web applications beyond usability.
The problem here is a consequence of the following facts:
there are HTML tags whose appearance in a page results in automatic HTTP request execution (img being one of those);
the browser has no way to tell that the resource referenced by img is not actually an image and is in fact not legitimate;
image loading happens regardless of the location of the alleged image, i.e., the form and the image itself need not be located on the same host, or even in the same domain. While this is a very handy feature, it makes it difficult to compartmentalize applications.
It is the fact that HTML content unrelated to the web application may refer to components in the application, and the fact that the browser automatically composes a valid request towards the application, that allows this kind of attack. As no standards are defined right now, there is no way to prohibit this behavior unless it is made impossible for the attacker to specify valid application URLs. This means that valid URLs must contain information related to the user session, which is supposedly not known to the attacker and therefore makes the identification of such URLs impossible.
The problem might be even worse, since in integrated mail/browser environments simply displaying an email message containing the image would result in the execution of the request to the web application with the associated browser cookie.
Things may be obfuscated further by referencing seemingly valid image URLs, such as
<img src="http://[attacker]/picture.gif" width="0" height="0">
where [attacker] is a site controlled by the attacker, and by utilizing a redirect mechanism on http://[attacker]/picture.gif to http://[thirdparty]/action.
Cookies are not the only example involved in this kind of vulnerability. Web applications whose session information is entirely supplied by the browser are vulnerable too. This includes applications relying on HTTP authentication mechanisms alone, since the authentication information is known by the browser and is sent automatically upon each request. This DOES NOT include form-based authentication, which occurs just once and generates some form of session-related information (of course, in this case, such information is expressed simply as a cookie, and we fall back to one of the previous cases).
Sample scenario.
Let's suppose that the victim is logged on to a firewall web management application. To log in, a user has to authenticate himself; subsequently, session information is stored in a cookie.
Let's suppose our firewall web management application has a function that allows an authenticated user to delete a rule specified by its positional number, or all the rules of the configuration if the user enters * (quite a dangerous feature, but it will make the example more interesting). The delete page is shown next. Let's suppose that the form, for the sake of simplicity, issues a GET request, which will be of the form
https://[target]/fwmgt/delete?rule=1
(to delete rule number one)
https://[target]/fwmgt/delete?rule=*
(to delete all rules).
The example is purposely quite naive, but shows in a simple way the dangers of CSRF.
[Image: Session_Riding_Firewall_Management.gif - https://www.owasp.org/images/c/ca/Session_Riding_Firewall_Management.gif]
Therefore, if we enter the value * and press the Delete button, the following GET request is submitted.
https://www.company.example/fwmgt/delete?rule=*
with the effect of deleting all firewall rules (and ending up in a possibly inconvenient situation...).
[Image: Session_Riding_Firewall_Management_2.gif - https://www.owasp.org/images/f/f8/Session_Riding_Firewall_Management_2.gif]
Now, this is not the only possible scenario. The user might have accomplished the same results by manually submitting the URL https://[target]/fwmgt/delete?rule=*
or by following a link pointing, directly or via a redirection, to the above URL. Or, again, by accessing an HTML page with an embedded img tag pointing to the same URL.
In all of these cases, if the user is currently logged in the firewall management application, the request will succeed and will modify the configuration of the firewall.
One can imagine attacks targeting sensitive applications and making automatic auction bids, money transfers, orders, changing the configuration of critical software components, etc.
An interesting thing is that these vulnerabilities may be exercised behind a firewall; i.e., it is sufficient that the link being attacked be reachable by the victim (not directly by the attacker). In particular, it can be any Intranet web server; for example, the firewall management station mentioned before, which is unlikely to be exposed to the Internet. Imagine a CSRF attack targeting an application monitoring a nuclear power plant... Sounds far fetched? Probably, but it is a possibility.
Self-vulnerable applications, i.e., applications that are used both as attack vector and target (such as web mail applications), make things worse. If such an application is vulnerable, the user is obviously logged in when he reads a message containing a CSRF attack, which can target the web mail application and have it perform actions such as deleting messages, sending messages that appear to be sent by the user, etc.
Countermeasures.
The following countermeasures are divided among recommendations to users and to developers.
Users
Since CSRF vulnerabilities are reportedly widespread, it is recommended to follow best practices to mitigate risk. Some mitigating actions are:
Logoff immediately after using a web application
Do not allow your browser to save username/passwords, and do not allow sites to remember your login
Do not use the same browser to access sensitive applications and to surf the Internet freely; if you have to do both things on the same machine, do them with separate browsers.
Integrated HTML-enabled mail/browser, newsreader/browser environments pose additional risks since simply viewing a mail message or a news message might lead to the execution of an attack.
Developers
Add session-related information to the URL. What makes the attack possible is the fact that the session is uniquely identified by the cookie, which is automatically sent by the browser. Having other session-specific information generated at the URL level makes it difficult for the attacker to know the structure of the URLs to attack.
Other countermeasures, while they do not resolve the issue, contribute to making it harder to exploit.
Use POST instead of GET. While POST requests may be simulated by means of JavaScript, they make it more complex to mount an attack. The same is true with intermediate confirmation pages (such as "Are you sure you really want to do this?" type of pages). These can be bypassed by an attacker, although they will make the attacker's work a bit more complex. Therefore, do not rely solely on these measures to protect your application. Automatic logout mechanisms somewhat mitigate the exposure to these vulnerabilities, though it ultimately depends on the context (a user who works all day long on a vulnerable web banking application is obviously more at risk than a user who uses the same application occasionally).
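The "session-related information in the URL" countermeasure described above is commonly realized as an unpredictable per-session token checked on every state-changing request. A minimal sketch, assuming an HMAC over the session identifier with a server-side secret; the function names and key handling are illustrative, not a prescription from this guide:

```python
# Minimal sketch: derive and verify an unpredictable per-session token.
# SERVER_SECRET and the session id value are hypothetical.

import hashlib
import hmac
import os

SERVER_SECRET = os.urandom(32)  # kept server-side, never sent to the client

def csrf_token(session_id):
    """Derive an unpredictable token bound to the session identifier."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id, submitted):
    """Constant-time comparison of the submitted token with the expected one."""
    return hmac.compare_digest(csrf_token(session_id), submitted)

token = csrf_token("0000d8eyYq3L0z2fgq10m4v-rt4:-1")
print(verify_csrf_token("0000d8eyYq3L0z2fgq10m4v-rt4:-1", token))
```

Because the attacker cannot predict the token, he cannot construct a valid URL or form submission in advance, which is exactly the property the countermeasure relies on.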
Black Box testing and example
For black box testing, you need to know URLs in the restricted (authenticated) area. If you possess valid credentials, you can assume both roles, the attacker and the victim. In this case, you know the URLs to be tested just by browsing around the application.
Otherwise, if you don't have valid credentials available, you have to organize a real attack, and so induce a legitimate, logged-in user into following an appropriate link. This may involve a substantial level of social engineering.
Either way, a test case can be constructed as follows:
let u be the URL being tested; for example, u = http://www.example.com/action
build an HTML page containing the HTTP request referencing URL u (specifying all relevant parameters; in the case of HTTP GET this is straightforward, while for a POST request you need to resort to some JavaScript);
make sure that the valid user is logged on the application;
induce him into following the link pointing to the to-be-tested URL (social engineering involved if you cannot impersonate the user yourself);
observe the result, i.e. check if the web server executed the request.
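The attack page from the steps above can be generated mechanically. This sketch builds both variants (a hidden image for GET, an auto-submitting form for the JavaScript-assisted POST the text mentions); the target URL and parameters are placeholders from the example:

```python
def csrf_get_poc(url: str) -> str:
    """For a GET request, a hidden image is enough: the victim's browser
    fetches the URL and automatically attaches the session cookie."""
    return f'<html><body><img src="{url}" width="0" height="0"></body></html>'

def csrf_post_poc(url: str, params: dict) -> str:
    """For a POST request, an auto-submitting form (the JavaScript the
    text refers to) simulates the request."""
    fields = "".join(
        f'<input type="hidden" name="{k}" value="{v}">'
        for k, v in params.items()
    )
    return (
        f'<html><body><form id="f" action="{url}" method="POST">{fields}</form>'
        '<script>document.getElementById("f").submit();</script></body></html>'
    )

# Hypothetical state-changing action with one parameter
page = csrf_post_poc("http://www.example.com/action", {"amount": "1000"})
```

Serving such a page to a logged-in victim, then checking server-side whether the action was executed, completes the test case.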
Gray Box testing and example
Audit the application to ascertain if its session management is vulnerable. If session management relies only on client-side values (information available to the browser), then the application is vulnerable. By client-side values we mean cookies and HTTP authentication credentials (Basic Authentication and other forms of HTTP authentication; NOT form-based authentication, which is an application-level authentication). For an application not to be vulnerable, it must include session-related information in the URL, in a form that is unidentifiable or unpredictable by the user ([3] uses the term secret to refer to this piece of information).
Resources accessible via HTTP GET requests are easily vulnerable, but since POST requests can be automated via JavaScript they are vulnerable as well; therefore, the use of POST alone is not enough to prevent CSRF vulnerabilities.
References
Whitepapers
This issue seems to get rediscovered from time to time, under different names. A history of these vulnerabilities has been reconstructed in: http://www.webappsec.org/lists/websecurity/archive/2005-05/msg00003.html
Peter W: "Cross-Site Request Forgeries" - http://www.tux.org/~peterw/csrf.txt
Thomas Schreiber: "Session Riding" - http://www.securenet.de/papers/Session_Riding.pdf
Oldest known post - http://www.zope.org/Members/jim/ZopeSecurity/ClientSideTrojan
Cross-site Request Forgery FAQ - http://www.cgisecurity.com/articles/csrf-faq.shtml
Tools
Currently there are no automated tools that can be used to test for the presence of CSRF vulnerabilities. However, you may use your favorite spider/crawler tools to acquire knowledge about the application structure and to identify the URLs to test.
4.6 Authorization testing
Authorization is the concept of allowing access to resources only to those permitted to use them. Testing for Authorization means understanding how the authorization process works, and using that information to circumvent the authorization mechanism. Authorization is a process that comes after successful authentication, so the tester will verify this point after obtaining valid credentials, associated with a well-defined set of roles and privileges. During this kind of assessment, it should be verified whether it is possible to bypass the authorization schema, find a path traversal vulnerability, or find ways to escalate the privileges assigned to the tester.
4.6.1 Testing for Path Traversal (OWASP-AZ-001)
First, we test if it is possible to find a way to execute a path traversal attack and access reserved information.
4.6.2 Testing for bypassing authorization schema (OWASP-AZ-002)
This kind of test focuses on verifying how the authorization schema has been implemented for each role/privilege to get access to reserved functions/resources.
4.6.3 Testing for Privilege Escalation (OWASP-AZ-003)
During this phase, the tester should verify that it is not possible for a user to modify his or her privileges/roles inside the application in ways that could allow privilege escalation attacks.
4.6.1 Testing for path traversal (OWASP-AZ-001)
Brief Summary
Many web applications use and manage files as part of their daily operation. If input validation methods are not well designed or deployed, an attacker could exploit the system in order to read or write files that are not intended to be accessible. In particular situations, it could even be possible to execute arbitrary code or system commands.
Related Security Activities
Description of Path Traversal Vulnerabilities
See the OWASP article on Path Traversal Vulnerabilities.
See the OWASP article on Relative Path Traversal Vulnerabilities.
How to Avoid Path Traversal Vulnerabilities
See the OWASP Guide article on how to Avoid Path Traversal Vulnerabilities.
How to Review Code for Path Traversal Vulnerabilities
See the OWASP Code Review Guide article on how to Review Code for Path Traversal Vulnerabilities.
Description of the Issue
Traditionally, web servers and web applications implement authentication mechanisms to control access to files and resources. Web servers try to confine users' files inside a "root directory" or "web document root", which represents a physical directory on the file system; users must consider this directory the base of the hierarchical structure of the web application. Privileges are defined using Access Control Lists (ACLs), which identify which users or groups are supposed to be able to access, modify, or execute a specific file on the server. These mechanisms are designed to prevent malicious users from accessing sensitive files (for example, the common /etc/passwd file on a Unix-like platform) or executing system commands.
Many web applications use server-side scripts to include different kinds of files: it is quite common to use this method to manage graphics, templates, load static texts, and so on. Unfortunately, these applications expose security vulnerabilities if input parameters (i.e., form parameters, cookie values) are not correctly validated.
In web servers and web applications, this kind of problem arises in path traversal/file include attacks. By exploiting this kind of vulnerability, an attacker is able to read directories or files which he/she normally couldn't read, access data outside the web document root, or include scripts and other kinds of files from external websites.
For the purpose of the OWASP Testing Guide, we will consider only the security threats related to web applications, not to web servers (e.g., the infamous "%5c escape code" in the Microsoft IIS web server). Further reading suggestions are provided in the references section for interested readers.
This kind of attack is also known as the dot-dot-slash attack (../), directory traversal, directory climbing, or backtracking.
During an assessment, in order to discover path traversal and file include flaws, we need to perform two different stages:
(a) Input Vectors Enumeration (a systematic evaluation of each input vector)
(b) Testing Techniques (a methodical evaluation of each attack technique used by an attacker to exploit the vulnerability)
Black Box testing and example
(a) Input Vectors Enumeration
In order to determine which part of the application is vulnerable to input validation bypassing, the tester needs to enumerate all parts of the application that accept content from the user. This includes HTTP GET and POST queries and common options like file uploads and HTML forms.
Here are some examples of the checks to be performed at this stage:
Are there request parameters which could be used for file-related operations?
Are there unusual file extensions?
Are there interesting variable names?
http://example.com/getUserProfile.jsp?item=ikki.html
http://example.com/index.php?file=content
http://example.com/main.cgi?home=index.htm
Is it possible to identify cookies used by the web application for the dynamic generation of pages/templates?
Cookie: ID=d9ccd3f4f9f18cc1:TM=2166255468:LM=1162655568:S=3cFpqbJgMSSPKVMV:TEMPLATE=flower
Cookie: USER=1826cc8f:PSTYLE=GreenDotRed
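The checks above can be partially mechanized. This sketch flags query parameters whose name or value looks file-related; the name list is an assumption chosen to match the examples, not an exhaustive rule:

```python
import re
from urllib.parse import urlsplit, parse_qsl

# Illustrative list of parameter names that often drive file operations
SUSPICIOUS_NAMES = {"file", "item", "home", "page", "template", "doc", "path"}
# Values that end with an extension or contain a path separator
FILE_LIKE_VALUE = re.compile(r"\.\w{1,5}$|/")

def flag_candidates(url: str) -> list:
    """Return (name, value) pairs worth testing for path traversal."""
    params = parse_qsl(urlsplit(url).query)
    return [
        (name, value) for name, value in params
        if name.lower() in SUSPICIOUS_NAMES or FILE_LIKE_VALUE.search(value)
    ]
```

Run against the example URLs, this would flag item=ikki.html, file=content, and home=index.htm as candidates for stage (b).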
(b) Testing Techniques
The next stage of testing is analyzing the input validation functions present in the web application.
Using the previous example, the dynamic page called getUserProfile.jsp loads static information from a file and shows the content to users. An attacker could insert the malicious string "../../../../etc/passwd" to include the password file of a Linux/Unix system. Obviously, this kind of attack is possible only if the validation checkpoint fails; in addition, according to the filesystem privileges, the web application itself must be able to read the file.
To successfully test for this flaw, the tester needs to have knowledge of the system being tested and the location of the files being requested. There is no point requesting /etc/passwd from an IIS web server.
http://example.com/getUserProfile.jsp?item=../../../../etc/passwd
For the cookies example, we have:
Cookie: USER=1826cc8f:PSTYLE=../../../../etc/passwd
It is also possible to include files and scripts located on an external website.
http://example.com/index.php?file=http://www.owasp.org/malicioustxt
The following example will demonstrate how it is possible to show the source code of a CGI component, without using any path traversal chars.
http://example.com/main.cgi?home=main.cgi
The component called "main.cgi" is located in the same directory as the normal HTML static files used by the application. In some cases the tester needs to encode the requests using special characters (like the "." dot or the "%00" null byte) in order to bypass file extension controls or to prevent script execution.
It is a common mistake by developers not to expect every form of encoding, and therefore to validate only basic encoded content. If your test string is not successful at first, try another encoding scheme.
Each operating system uses different chars as path separator:
Unix-like OS:
root directory: "/"
directory separator: "/"
Windows OS:
root directory: "<drive letter>:\" (e.g., "C:\")
directory separator: "\" but also "/"
(Usually, on Windows, the directory traversal attack is limited to a single partition.)
Classic Mac OS:
root directory: ":"
directory separator: ":"
We should take into account the following character encodings:
URL encoding and double URL encoding
%2e%2e%2f represents ../
%2e%2e/ represents ../
..%2f represents ../
%2e%2e%5c represents ..\
%2e%2e\ represents ..\
..%5c represents ..\
%252e%252e%255c represents ..\
..%255c represents ..\ and so on.
Unicode/UTF-8 Encoding (it only works in systems that are able to accept overlong UTF-8 sequences)
..%c0%af represents ../
..%c1%9c represents ..\
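The encodings listed above can be generated systematically for a chosen payload. A minimal sketch, assuming a Unix-like target and the hypothetical helper name traversal_variants:

```python
def traversal_variants(depth: int = 4, target: str = "etc/passwd") -> list:
    """Return the same traversal payload under several encodings, to be
    tried in turn when the plain form is filtered."""
    plain = "../" * depth + target
    return [
        plain,                                    # no encoding
        plain.replace("../", "%2e%2e%2f"),        # URL-encoded
        plain.replace("../", "..%2f"),            # separator only encoded
        plain.replace("../", "%252e%252e%252f"),  # double URL-encoded
        plain.replace("../", "..%c0%af"),         # overlong UTF-8 slash
    ]
```

Each variant would be substituted into a flagged parameter (e.g., item= in the getUserProfile.jsp example) and the response inspected for file content.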
Gray Box testing and example
When the analysis is performed with a Gray Box approach, we have to follow the same methodology as in Black Box Testing. However, since we can review the source code, it is possible to search the input vectors (stage (a) of the testing) more easily and accurately. During a source code review, we can use simple tools (such as the grep command) to search for one or more common patterns within the application code: inclusion functions/methods, filesystem operations, and so on.
PHP: include(), include_once(), require(), require_once(), fopen(), readfile(), ...
JSP/Servlet: java.io.File(), java.io.FileReader(), ...
ASP: include file, include virtual, ...
Using online code search engines (e.g., Google Code Search, http://www.google.com/codesearch [1]; Koders, http://www.koders.com/ [2]), it may also be possible to find path traversal flaws in open source software published on the Internet.
For PHP, we can use:
lang:php (include|require)(_once)?\s*['"(]?\s*\$_(GET|POST|COOKIE)
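The same pattern (minus the code-search-specific lang:php prefix) can be applied locally with Python's re module during a source review; the sample PHP lines below are illustrative:

```python
import re

# The pattern from the code-search query above, used as a plain regex
PATTERN = re.compile(r"""(include|require)(_once)?\s*['"(]?\s*\$_(GET|POST|COOKIE)""")

# Hypothetical PHP lines: the first feeds user input straight into include()
vulnerable = 'include($_GET["page"]);'
safe = 'include("header.php");'
```

Running the pattern over each line of a codebase (or piping files through grep with the same expression) surfaces inclusion calls driven directly by user-controlled superglobals.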
Using the Gray Box Testing method, it is possible to discover vulnerabilities that are usually harder to discover, or even impossible to find during a standard Black Box assessment.
Some web applications generate dynamic pages using values and parameters stored in a database. It may be possible to insert specially crafted path traversal strings when the application adds data to the database. This kind of security problem is difficult to discover because the parameters inside the inclusion functions seem internal and "safe", when in fact they are not.
Additionally, when reviewing the source code, it is possible to analyze the functions that are supposed to handle invalid input: some developers try to change invalid input to make it valid instead of rejecting it, in order to avoid warnings and errors. These functions are usually prone to security flaws.
Consider a web application with these instructions:
filename = Request.QueryString("file");
Replace(filename, "/", "\");
Replace(filename, "..\", "");
Testing for the flaw is achieved by:
file=....//....//boot.ini
file=....\\....\\boot.ini
file= ..\..\boot.ini
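The reason these test strings work can be shown by reimplementing the flawed sanitization above in Python: the replacements run once, left to right, not recursively, so stripping "..\" from the middle of "....\\" leaves a fresh "..\" behind.

```python
def sanitize(filename: str) -> str:
    """Python rendering of the flawed filter above (note: "\\" in the
    source is a single backslash character)."""
    filename = filename.replace("/", "\\")   # Replace(filename, "/", "\")
    filename = filename.replace("..\\", "")  # Replace(filename, "..\", "")
    return filename

# "....//....//boot.ini" -> "....\\....\\boot.ini" -> "..\\..\\boot.ini"
result = sanitize("....//....//boot.ini")
```

A single-pass filter like this should be replaced by canonicalizing the path and verifying it stays inside the permitted directory, rather than trying to strip dangerous substrings.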
References
Whitepapers
Security Risks of - http://www.schneier.com/crypto-gram-0007.html [3]
phpBB Attachment Mod Directory Traversal HTTP POST Injection - http://archives.neohapsis.com/archives/fulldisclosure/2004-12/0290.html [4]
Tools
Web Proxy tools: Burp Suite (http://portswigger.net) [5], Paros (http://www.parosproxy.org/index.shtml) [6], WebScarab (http://www.owasp.org/index.php/OWASP_WebScarab_Project) [7]
Encoding/Decoding tools
String searcher "grep" - http://www.gnu.org/software/grep/
4.6.2 Testing for bypassing authorization schema (OWASP-AZ-002)
Brief Summary
This kind of test focuses on verifying how the authorization schema has been implemented for each role/privilege to get access to reserved functions/resources.
Description of the Issue
For every specific role the tester holds during the assessment, and for every function and request that the application executes during the post-authentication phase, it is necessary to verify:
Is it possible to access that resource even if the user is not authenticated?
Is it possible to access that resource after the log-out?
Is it possible to access functions and resources that should be accessible to a user that holds a different role/privilege?
Try to access the application as an administrative user and track all the administrative functions. Is it possible to access administrative functions even if the tester is logged in as a user with standard privileges?
Is it possible to use these functionalities for a user with a different role and for whom that action should be denied?
Black Box testing and example
Testing for Admin functionalities
For example, suppose that the 'AddUser.jsp' function is part of the administrative menu of the application, and it is possible to access it by requesting the following URL:
https://www.example.com/admin/addUser.jsp
Then, the following HTTP request is generated when calling the AddUser function:
POST /admin/addUser.jsp HTTP/1.1
Host: www.example.com
[other HTTP headers]
userID=fakeuser&role=3&group=grp001
What happens if a non-administrative user tries to execute that request? Will the user be created? If so, can the new user use her privileges?
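The check above amounts to replaying the admin request with a standard user's session. A sketch using Python's standard library (the URL, cookie value, and body come from the example and are hypothetical); the request is built but not sent here:

```python
import urllib.request

def build_admin_request(session_cookie: str) -> urllib.request.Request:
    """Build the replayed addUser request. Sending it with a standard
    user's cookie and getting 200 instead of 401/403 suggests the
    authorization schema can be bypassed; confirm by checking whether
    'fakeuser' was actually created."""
    return urllib.request.Request(
        "https://www.example.com/admin/addUser.jsp",
        data=b"userID=fakeuser&role=3&group=grp001",
        headers={
            "Cookie": session_cookie,  # cookie of the NON-admin user
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )

req = build_admin_request("SESSION=standard-user-session")
# urllib.request.urlopen(req) would send it during a live test
```

In practice an intercepting proxy (WebScarab, Burp) performs the same replay interactively.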
Testing for access to resources assigned to a different role
Analyze, for example, an application that uses a shared directory to store temporary PDF files for different users. Suppose that documentABC.pdf should be accessible only by the user test1 with roleA. Verify if user test2 with roleB can access that resource.
Result Expected:
Try to execute administrative functions or access administrative resources as a standard user.
References
Tools
OWASP WebScarab: https://www.owasp.org/index.php/OWASP_WebScarab_Project
4.6.3 Testing for Privilege Escalation (OWASP-AZ-003)
Brief Summary
This section describes the issue of escalating privileges from one stage to another. During this phase, the tester should verify that it is not possible for a user to modify his or her privileges/roles inside the application in ways that could allow privilege escalation attacks.
Description of the Issue
Privilege escalation occurs when a user gets access to more resources or functionality than they are normally allowed, and such elevation/changes should have been prevented by the application. This is usually caused by a flaw in the application. The result is that the application performs actions with more privileges than those intended by the developer or system administrator.
The degree of escalation depends on which privileges the attacker is authorized to possess, and which privileges can be obtained in a successful exploit. For example, a programming error that allows a user to gain extra privilege after successful authentication limits the degree of escalation, because the user is already authorized to hold some privilege. Likewise, a remote attacker gaining superuser privilege without any authentication presents a greater degree of escalation.
Usually, we refer to vertical escalation when it is possible to access resources granted to more privileged accounts (e.g., acquiring administrative privileges for the application), and to horizontal escalation when it is possible to access resources granted to a similarly configured account (e.g., in an online banking application, accessing information related to a different user).
Black Box testing and example
Testing for role/privilege manipulation
In every portion of the application where a user can create information in the database (e.g., making a payment, adding a contact, or sending a message), receive information (statement of account, order details, etc.), or delete information (drop users, messages, etc.), it is necessary to record that functionality. The tester should try to access such functions as another user in order to verify, for example, whether it is possible to access a function that should not be permitted by the user's role/privilege (but might be permitted as another user).
For example, the following HTTP POST allows the user that belongs to grp001 to access order #0001:
POST /user/viewOrder.jsp HTTP/1.1
Host: www.example.com
...
gruppoID=grp001&ordineID=0001
Verify if a user that does not belong to grp001 can modify the value of the parameters gruppoID and ordineID to gain access to that privileged data.
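The manipulation described above can be sketched as enumerating candidate parameter pairs to replay with the session of a user who does not belong to grp001. The values below are illustrative guesses based on the observed format, not real identifiers:

```python
from itertools import product

def candidate_bodies(groups, orders) -> list:
    """Build the POST bodies to replay against /user/viewOrder.jsp with
    another user's session; any body that returns order data the user
    should not see indicates horizontal privilege escalation."""
    return [f"gruppoID={g}&ordineID={o}" for g, o in product(groups, orders)]

bodies = candidate_bodies(["grp001", "grp002"], ["0001", "0002"])
```

Sequential-looking identifiers such as 0001 make this enumeration cheap, which is itself worth noting in the report.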
For example, the following server response shows a hidden field in the HTML returned to the user after a successful authentication.
HTTP/1.1 200 OK
Server: Netscape-Enterprise/6.0
Date: Wed, 1 Apr 2006 13:51:20 GMT
Set-Cookie: USER=aW78ryrGrTWs4MnOd32Fs51yDqp; path=/; domain=www.example.com
Set-Cookie: SESSION=k+KmKeHXTgDi1J5fT7Zz; path=/; domain= www.example.com
Cache-Control: no-cache
Pragma: No-cache
Content-length: 247
Content-Type: text/html
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Connection: close