OWASP Cheat Sheets

Martin Woschek, owasp@jesterweb.de

April 9, 2015

Contents

I Developer Cheat Sheets (Builder) 11

1 Authentication Cheat Sheet 12
  1.1 Introduction 12
  1.2 Authentication General Guidelines 12
  1.3 Use of authentication protocols that require no password 17
  1.4 Session Management General Guidelines 19
  1.5 Password Managers 19
  1.6 Authors and Primary Editors 19
  1.7 References 19

2 Choosing and Using Security Questions Cheat Sheet 20
  2.1 Introduction 20
  2.2 The Problem 20
  2.3 Choosing Security Questions and/or Identity Data 20
  2.4 Using Security Questions 23
  2.5 Related Articles 25
  2.6 Authors and Primary Editors 25
  2.7 References 25

3 Clickjacking Defense Cheat Sheet 26
  3.1 Introduction 26
  3.2 Defending with Content Security Policy frame-ancestors directive 26
  3.3 Defending with X-Frame-Options Response Headers 26
  3.4 Best-for-now Legacy Browser Frame Breaking Script 28
  3.5 window.confirm() Protection 29
  3.6 Non-Working Scripts 29
  3.7 Authors and Primary Editors 32
  3.8 References 32

4 C-Based Toolchain Hardening Cheat Sheet 34
  4.1 Introduction 34
  4.2 Actionable Items 34
  4.3 Build Configurations 34
  4.4 Library Integration 36
  4.5 Static Analysis 37
  4.6 Platform Security 38
  4.7 Authors and Editors 38
  4.8 References 38

5 Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet 40
  5.1 Introduction 40
  5.2 Prevention Measures That Do NOT Work 40
  5.3 General Recommendation: Synchronizer Token Pattern 41
  5.4 CSRF Prevention without a Synchronizer Token 44
  5.5 Client/User Prevention 45

[...]

  18.7 Authors and primary editors 124
  18.8 References 124

19 Session Management Cheat Sheet 126
  19.1 Introduction 126
  19.2 Session ID Properties 127
  19.3 Session Management Implementation 128
  19.4 Cookies 130
  19.5 Session ID Life Cycle 131
  19.6 Session Expiration 132
  19.7 Additional Client-Side Defenses for Session Management 134
  19.8 Session Attacks Detection 135
  19.9 Related Articles 137
  19.10 Authors and Primary Editors 138
  19.11 References 138

20 SQL Injection Prevention Cheat Sheet 139
  20.1 Introduction 139
  20.2 Primary Defenses 140
  20.3 Additional Defenses 145
  20.4 Related Articles 146
  20.5 Authors and Primary Editors 147
  20.6 References 147

21 Transport Layer Protection Cheat Sheet 149
  21.1 Introduction 149
  21.2 Providing Transport Layer Protection with SSL/TLS 149
  21.3 Providing Transport Layer Protection for Back End and Other Connections 161
  21.4 Tools 161
  21.5 Related Articles 161
  21.6 Authors and Primary Editors 163
  21.7 References 163

22 Unvalidated Redirects and Forwards Cheat Sheet 166
  22.1 Introduction 166
  22.2 Safe URL Redirects 166
  22.3 Dangerous URL Redirects 166
  22.4 Preventing Unvalidated Redirects and Forwards 168
  22.5 Related Articles 168
  22.6 Authors and Primary Editors 169
  22.7 References 169

23 User Privacy Protection Cheat Sheet 170
  23.1 Introduction 170
  23.2 Guidelines 170
  23.3 Authors and Primary Editors 173
  23.4 References 173

24 Web Service Security Cheat Sheet 175
  24.1 Introduction 175
  24.2 Transport Confidentiality 175
  24.3 Server Authentication 175
  24.4 User Authentication 175
  24.5 Transport Encoding 176
  24.6 Message Integrity 176
  24.7 Message Confidentiality 176
  24.8 Authorization 176
  24.9 Schema Validation 176
  24.10 Content Validation 177
  24.11 Output Encoding 177
  24.12 Virus Protection 177
  24.13 Message Size 177
  24.14 Availability 178
  24.15 Endpoint Security Profile 178
  24.16 Authors and Primary Editors 178
  24.17 References 178

25 XSS (Cross Site Scripting) Prevention Cheat Sheet 179
  25.1 Introduction 179
  25.2 XSS Prevention Rules 180
  25.3 XSS Prevention Rules Summary 186
  25.4 Output Encoding Rules Summary 188
  25.5 Related Articles 189
  25.6 Authors and Primary Editors 190
  25.7 References 190

II Assessment Cheat Sheets (Breaker) 191

26 Attack Surface Analysis Cheat Sheet 192
  26.1 What is Attack Surface Analysis and Why is it Important? 192
  26.2 Defining the Attack Surface of an Application 192
  26.3 Identifying and Mapping the Attack Surface 193
  26.4 Measuring and Assessing the Attack Surface 194
  26.5 Managing the Attack Surface 195
  26.6 Related Articles 196
  26.7 Authors and Primary Editors 196
  26.8 References 196

27 XSS Filter Evasion Cheat Sheet 197
  27.1 Introduction 197
  27.2 Tests 197
  27.3 Character Encoding and IP Obfuscation Calculators 219
  27.4 Authors and Primary Editors 219
  27.5 References 220

28 REST Assessment Cheat Sheet 221
  28.1 About RESTful Web Services 221
  28.2 Key relevant properties of RESTful web services 221
  28.3 The challenge of security testing RESTful web services 221
  28.4 How to pen test a RESTful web service? 222
  28.5 Related Resources 223
  28.6 Authors and Primary Editors 223
  28.7 References 223

III Mobile Cheat Sheets 224

29 IOS Developer Cheat Sheet 225
  29.1 Introduction 225
  29.2 Basics 225
  29.3 Remediations to OWASP Mobile Top 10 Risks 225
  29.4 Related Articles 229
  29.5 Authors and Primary Editors 229
  29.6 References 230

30 Mobile Jailbreaking Cheat Sheet 231
  30.1 What is "jailbreaking", "rooting" and "unlocking"? 231
  30.2 Why do they occur? 232
  30.3 What are the common tools used? 233
  30.4 Why can it be dangerous? 235
  30.5 Conclusion 238
  30.6 Authors and Primary Editors 238
  30.7 References 239

IV OpSec Cheat Sheets (Defender) 240

31 Virtual Patching Cheat Sheet 241
  31.1 Introduction 241
  31.2 Definition: Virtual Patching 241
  31.3 Why Not Just Fix the Code? 241
  31.4 Value of Virtual Patching 241
  31.5 Virtual Patching Tools 242
  31.6 A Virtual Patching Methodology 242
  31.7 Example Public Vulnerability 242
  31.8 Preparation Phase 243
  31.9 Identification Phase 243
  31.10 Analysis Phase 244
  31.11 Virtual Patch Creation Phase 245
  31.12 Implementation/Testing Phase 247
  31.13 Recovery/Follow-Up Phase 247
  31.14 Related Articles 248
  31.15 Authors and Primary Editors 248
  31.16 References 248

V Draft Cheat Sheets 249

32 OWASP Top Ten Cheat Sheet 251

33 Access Control Cheat Sheet 252
  33.1 Introduction 252
  33.2 Attacks on Access Control 254
  33.3 Access Control Issues 254
  33.4 Access Control Anti-Patterns 255
  33.5 Attacking Access Controls 256
  33.6 Testing for Broken Access Control 256
  33.7 Defenses Against Access Control Attacks 257
  33.8 Best Practices 257
  33.9 SQL Integrated Access Control 258
  33.10 Access Control Positive Patterns 259
  33.11 Data Contextual Access Control 259
  33.12 Authors and Primary Editors 259

These Cheat Sheets have been taken from the OWASP project on https://www.owasp.org. While this document is static, the online source is continuously improved and expanded. So please visit https://www.owasp.org if you have any doubt about the accuracy or currency of this PDF, or simply if this document is too old. All the articles are licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported [1]. I have slightly reformatted and/or resectioned them in this work (which of course is also CC BY-SA 3.0).

[1] http://creativecommons.org/licenses/by-sa/3.0/

Part I. Developer Cheat Sheets (Builder)

1. Authentication Cheat Sheet

Last revision (mm/dd/yy): 02/24/2015

1.1. Introduction

Authentication is the process of verifying that an individual or an entity is who it claims to be. Authentication is commonly performed by submitting a user name or ID and one or more items of private information that only a given user should know.

Session management is a process by which a server maintains the state of an entity interacting with it. This is required for a server to remember how to react to subsequent requests throughout a transaction. Sessions are maintained on the server by a session identifier which can be passed back and forth between the client and server when transmitting and receiving requests. Sessions should be unique per user and computationally very difficult to predict.

1.2. Authentication General Guidelines

1.2.1. User IDs

Make sure your usernames/user IDs are case insensitive; it would be very strange for user 'smith' and user 'Smith' to be different users, and it could result in serious confusion.

Email address as a User ID

Many sites use email addresses as a user ID, which is a good mechanism for ensuring a unique identifier for each user without adding the burden of remembering a new username. However, many web applications do not treat email addresses correctly due to common misconceptions about what constitutes a valid address.
Specifically, it is completely valid to have a mailbox address which:

• Is case sensitive in the local-part
• Has non-alphanumeric characters in the local-part (including + and @)
• Has zero or more labels (though zero is admittedly not going to occur)

The local-part is the part of the mailbox address to the left of the rightmost @ character. The domain is the part of the mailbox address to the right of the rightmost @ character and consists of zero or more labels joined by a period character. At the time of writing, RFC 5321 [2] is the current standard defining SMTP and what constitutes a valid mailbox address.

Validation

Many web applications contain computationally expensive and inaccurate regular expressions that attempt to validate email addresses. Recent changes to the landscape mean that the number of false negatives will increase, particularly due to:

[...]

• If the new password doesn't comply with the complexity policy, the error message should describe EVERY complexity rule that the new password does not comply with, not just the first rule it doesn't comply with.

Changing passwords should be EASY, not a hunt in the dark.

1.2.3. Implement Secure Password Recovery Mechanism

It is common for an application to have a mechanism that provides a means for a user to gain access to their account in the event they forget their password. Please see the Forgot Password Cheat Sheet on page 65 for details on this feature.

1.2.4. Store Passwords in a Secure Fashion

It is critical for an application to store passwords using the right cryptographic technique. Please see the Password Storage Cheat Sheet on page 98 for details on this feature.

1.2.5. Transmit Passwords Only Over TLS

See: Transport Layer Protection Cheat Sheet on page 149.

The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the "login landing page", must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session.

1.2.6. Require Re-authentication for Sensitive Features

In order to mitigate CSRF and session hijacking, it's important to require the current credentials for an account before updating sensitive account information such as the user's password or email, or before sensitive transactions, such as shipping a purchase to a new address. Without this countermeasure, an attacker may be able to execute sensitive transactions through a CSRF or XSS attack without needing to know the user's current credentials. Additionally, an attacker may get temporary physical access to a user's browser or steal their session ID to take over the user's session.

1.2.7. Utilize Multi-Factor Authentication

Multi-factor authentication (MFA) is using more than one authentication factor to log on or process a transaction:

• Something you know (account details or passwords)
• Something you have (tokens or mobile phones)
• Something you are (biometrics)

Authentication schemes such as One Time Passwords (OTP) implemented using a hardware token can also be key in fighting attacks such as CSRF and client-side malware. A number of hardware tokens suitable for MFA are available in the market that allow good integration with web applications. See [6].
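As a rough, self-contained illustration of the OTP idea mentioned above (not an implementation taken from the cheat sheet itself), the sketch below checks a time-based one-time password (TOTP, RFC 6238) using only the Python standard library. The helper names, the 30-second step, the 6-digit code length and the one-step clock-drift allowance are all assumptions made for this example.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, timestep: int = 30, digits: int = 6, skew: int = 0) -> str:
        """Compute an RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // timestep + skew
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify_totp(secret_b32: str, submitted: str) -> bool:
        """Accept the current code or its immediate neighbours to tolerate clock drift."""
        return any(hmac.compare_digest(totp(secret_b32, skew=s), submitted)
                   for s in (-1, 0, 1))

A real deployment would additionally need secure per-user secret provisioning and replay protection (never accepting the same code twice), which are out of scope for this sketch.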
1.2.7.1. SSL Client Authentication

SSL client authentication, also known as two-way SSL authentication, consists of both the browser and the server sending their respective SSL certificates during the TLS handshake process. Just as you can validate the authenticity of a server by using the certificate and asking a well-known Certificate Authority (CA) if the certificate is valid, the server can authenticate the user by receiving a certificate from the client and validating it against a third-party CA or its own CA. To do this, the server must provide the user with a certificate generated specifically for him, assigning values to the subject so that these can be used to determine what user the certificate should validate. The user installs the certificate on a browser and now uses it for the website.

It is a good idea to do this when:

• It is acceptable (or even preferred) that the user has access to the website from only a single computer/browser.
• The user is not easily scared by the process of installing SSL certificates on his browser, or there will be someone, probably from IT support, who will do this for the user.
• The website requires an extra step of security.
• It is also a good thing to use when the website is for an intranet of a company or organization.

It is generally not a good idea to use this method for widely and publicly available websites that will have an average user. For example, it wouldn't be a good idea to implement this for a website like Facebook. While this technique can prevent the user from having to type a password (thus protecting against an average keylogger stealing it), it is still considered a good idea to use both a password and SSL client authentication combined. For more information, see [4] or [5].

1.2.8. Authentication and Error Messages

Incorrectly implemented error messages in the case of authentication functionality can be used for the purposes of user ID and password enumeration. An application should respond (both HTTP and HTML) in a generic manner.

1.2.8.1. Authentication Responses

An application should respond with a generic error message regardless of whether the user ID or password was incorrect. It should also give no indication of the status of an existing account.

1.2.8.2. Incorrect Response Examples

• "Login for User foo: invalid password"
• "Login failed, invalid user ID"
• "Login failed; account disabled"
• "Login failed; this user is not active"

1.2.8.3. Correct Response Example

• "Login failed; Invalid userID or password"

The correct response does not indicate whether the user ID or the password was the incorrect parameter, and hence prevents an attacker from inferring a valid user ID.

1.2.8.4. Error Codes and URLs

The application may return a different HTTP error code depending on the authentication attempt response. It may respond with a 200 for a positive result and a 403 for a negative result. Even though a generic error page is shown to the user, the HTTP response code may differ, which can leak information about whether the account is valid or not.

1.2.9. Prevent Brute-Force Attacks

If an attacker is able to guess passwords without the account becoming disabled due to failed authentication attempts, the attacker has an opportunity to continue with a brute force attack until the account is compromised. Automating brute-force/password guessing attacks on web applications is a trivial challenge.
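As a rough illustration of the lockout-with-timed-reopen strategy described in the next paragraph, the sketch below counts recent failures per account and refuses further attempts for a fixed period once a threshold is reached. The threshold of 5 attempts, the 20-minute window, the function names and the in-memory store are assumptions made for this example; a production system would persist this state and combine it with the other measures discussed in this section.

    import time
    from collections import defaultdict

    MAX_FAILURES = 5            # assumed threshold for the example
    LOCKOUT_SECONDS = 20 * 60   # temporary lockout; the account reopens automatically

    _failures = defaultdict(list)   # username -> timestamps of recent failed logins

    def is_locked_out(username: str) -> bool:
        """True if the account has exceeded the failure threshold within the window."""
        now = time.time()
        recent = [t for t in _failures[username] if now - t < LOCKOUT_SECONDS]
        _failures[username] = recent
        return len(recent) >= MAX_FAILURES

    def record_failure(username: str) -> None:
        _failures[username].append(time.time())

    def record_success(username: str) -> None:
        _failures[username].clear()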
Password lockout mechanisms should be employed that lock out an account if more than a preset number of unsuccessful login attempts are made. Password lockout mechanisms have a logical weakness: an attacker who undertakes a large number of authentication attempts on known account names can lock out entire blocks of user accounts. Given that the intent of a password lockout system is to protect from brute-force attacks, a sensible strategy is to lock out accounts for a period of time (e.g., 20 minutes). This significantly slows down attackers, while allowing the accounts to reopen automatically for legitimate users.

Also, multi-factor authentication is a very powerful deterrent when trying to prevent brute force attacks, since the credentials are a moving target. When multi-factor authentication is implemented and active, account lockout may no longer be necessary.

1.3. Use of authentication protocols that require no password

While authentication through a user/password combination and multi-factor authentication is generally considered secure, there are use cases where it isn't considered the best option or even safe. An example of this is third-party applications that want to connect to the web application, whether from a mobile device, another website, a desktop application or other situations. When this happens, it is NOT considered safe to allow the third-party application to store the user/password combination, since this extends the attack surface into their hands, where it isn't in your control. For this, and other use cases, there are several authentication protocols that can protect you from exposing your users' data to attackers.

1.3.1. OAuth

Open Authorization (OAuth) is a protocol that allows an application to authenticate against a server as a user, without requiring passwords or any third-party server that acts as an identity provider. It uses a token generated by the server, and defines how the authorization flows occur, so that a client, such as a mobile application, can tell the server what user is using the service.

The recommendation is to use and implement OAuth 1.0a or OAuth 2.0, since the very first version (OAuth 1.0) has been found to be vulnerable to session fixation.

2. Choosing and Using Security Questions Cheat Sheet

Last revision (mm/dd/yy): 04/17/2014

2.1. Introduction

This cheat sheet provides some best practices for developers to follow when choosing and using security questions to implement a "forgot password" web application feature.

2.2. The Problem

There is no industry standard either for providing guidance to users or developers when using or implementing a Forgot Password feature. The result is that developers generally pick a set of dubious questions and implement them insecurely. They do so not only at a risk to their users, but also, because of potential liability issues, at a risk to their organization. Ideally, passwords would be dead, or at least less important in the sense that they make up only one of several multi-factor authentication mechanisms, but the truth is that we probably are stuck with passwords just like we are stuck with COBOL. So with that in mind, what can we do to make the Forgot Password solution as palatable as possible?

2.3. Choosing Security Questions and/or Identity Data

Most of us can instantly spot a bad "security question" when we see one. You know the ones we mean. Ones like "What is your favorite color?" are obviously bad.
But as the Good Security Questions [2] web site rightly points out, "there really are NO GOOD security questions; only fair or bad questions".

The reason that most organizations allow users to reset their own forgotten passwords is not because of security, but rather to reduce their own costs by reducing the volume of calls to their help desks. It's the classic convenience vs. security trade-off, and in this case, convenience (both to the organization in terms of reduced costs and to the user in terms of simpler, self-service) almost always wins out.

So given that the business aspect of lower cost generally wins out, what can we do to at least raise the bar a bit? Here are some suggestions. Note that we intentionally avoid recommending specific security questions. To do so would likely be counterproductive, because many developers would simply use those questions without much thinking and adversaries would immediately start harvesting that data from various social networks.

2.3.1. Desired Characteristics

Any security questions or identity information presented to users to reset forgotten passwords should ideally have the following four characteristics:

1. Memorable: If users can't remember their answers to their security questions, you have achieved nothing.
2. Consistent: The user's answers should not change over time. For instance, asking "What is the name of your significant other?" may have a different answer 5 years from now.
3. Nearly universal: The security questions should apply to as wide an audience as possible.
4. Safe: The answers to security questions should not be something that is easily guessed or researched (e.g., something that is a matter of public record).

2.3.2. Steps

2.3.2.1. Step 1) Decide on Identity Data vs. Canned Questions vs. User-Created Questions

Generally, a single HTML form should be used to collect all of the inputs to be used for later password resets.

If your organization has a business relationship with users, you probably have collected some sort of additional information from your users when they registered with your web site. Such information includes, but is not limited to:

• email address
• last name
• date of birth
• account number
• customer number
• last 4 digits of social security number
• zip code for address on file
• street number for address on file

For enhanced security, you may wish to consider asking the user for their email address first and then sending an email that takes them to a private page that requests the other 2 (or more) identity factors. That way the email itself isn't that useful, because they still have to answer a bunch of 'secret' questions after they get to the landing page.

On the other hand, if you host a web site that targets the general public, such as social networking sites, free email sites, news sites, photo sharing sites, etc., then you likely do not have this identity information and will need to use some sort of the ubiquitous "security questions". However, also be sure that you collect some means to send the password reset information to an out-of-band side-channel, such as a (different) email address, an SMS number, etc.

Believe it or not, there is a certain merit to allowing your users to select from a set of several "canned" questions.
We generally ask users to fill out the security questions as part of completing their initial user profile, and often that is the very time that the user is in a hurry; they just wish to register and get on with using your site. If we ask users to create their own question(s) instead, they then generally do so under some amount of duress, and thus may be more likely to come up with extremely poor questions.
40 5.2 Prevention Measures That Do NOT Work . . . . . . . . . . . . . . . . . . . . 40 5.3 General Recommendation: Synchronizer Token Pattern . . . . . . . . . . . 41 5.4 CSRF Prevention without a Synchronizer Token . . . . . . . . . . . . . . . 44 5.5 Client/User Prevention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 2 Contents 18.7 Authors and primary editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 18.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 19 Session Management Cheat Sheet 126 19.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 19.2 Session ID Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 19.3 Session Management Implementation . . . . . . . . . . . . . . . . . . . . . 128 19.4 Cookies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 19.5 Session ID Life Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 19.6 Session Expiration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 19.7 Additional Client-Side Defenses for Session Management . . . . . . . . . . 134 19.8 Session Attacks Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 19.9 Related Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 19.10 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 19.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 20 SQL Injection Prevention Cheat Sheet 139 20.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 20.2 Primary Defenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 20.3 Additional Defenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 20.4 Related Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 20.5 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 20.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 21 Transport Layer Protection Cheat Sheet 149 21.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 21.2 Providing Transport Layer Protection with SSL/TLS . . . . . . . . . . . . . 149 21.3 Providing Transport Layer Protection for Back End and Other Connections 161 21.4 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 21.5 Related Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 21.6 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 21.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 22 Unvalidated Redirects and Forwards Cheat Sheet 166 22.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 22.2 Safe URL Redirects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 22.3 Dangerous URL Redirects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 22.4 Preventing Unvalidated Redirects and Forwards . . . . . . . . . . . . . . . 168 22.5 Related Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 22.6 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 22.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . 169 23 User Privacy Protection Cheat Sheet 170 23.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 23.2 Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 23.3 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 23.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 24 Web Service Security Cheat Sheet 175 24.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 24.2 Transport Confidentiality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 24.3 Server Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 24.4 User Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 24.5 Transport Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 24.6 Message Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 5 Contents 24.7 Message Confidentiality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 24.8 Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 24.9 Schema Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 24.10 Content Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 24.11 Output Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 24.12 Virus Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 24.13 Message Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 24.14 Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 24.15 Endpoint Security Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 24.16 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 24.17 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 25 XSS (Cross Site Scripting) Prevention Cheat Sheet 179 25.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 25.2 XSS Prevention Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 25.3 XSS Prevention Rules Summary . . . . . . . . . . . . . . . . . . . . . . . . . 186 25.4 Output Encoding Rules Summary . . . . . . . . . . . . . . . . . . . . . . . . 188 25.5 Related Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 25.6 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 25.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 II Assessment Cheat Sheets (Breaker) 191 26 Attack Surface Analysis Cheat Sheet 192 26.1 What is Attack Surface Analysis and Why is it Important? . . . . . . . . . 192 26.2 Defining the Attack Surface of an Application . . . . . . . . . . . . . . . . . 192 26.3 Identifying and Mapping the Attack Surface . . . . . . . . . . . . . . . . . . 193 26.4 Measuring and Assessing the Attack Surface . . . . . . . . . . . . . . . . . 194 26.5 Managing the Attack Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 26.6 Related Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 26.7 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 26.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . 196 27 XSS Filter Evasion Cheat Sheet 197 27.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 27.2 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 27.3 Character Encoding and IP Obfuscation Calculators . . . . . . . . . . . . . 219 27.4 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 27.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 28 REST Assessment Cheat Sheet 221 28.1 About RESTful Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . 221 28.2 Key relevant properties of RESTful web services . . . . . . . . . . . . . . . . 221 28.3 The challenge of security testing RESTful web services . . . . . . . . . . . . 221 28.4 How to pen test a RESTful web service? . . . . . . . . . . . . . . . . . . . . 222 28.5 Related Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 28.6 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 28.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 III Mobile Cheat Sheets 224 29 IOS Developer Cheat Sheet 225 29.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 6 Contents 29.2 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 29.3 Remediation’s to OWASP Mobile Top 10 Risks . . . . . . . . . . . . . . . . . 225 29.4 Related Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 29.5 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 29.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230 30 Mobile Jailbreaking Cheat Sheet 231 30.1 What is "jailbreaking", "rooting" and "unlocking"? . . . . . . . . . . . . . . . 231 30.2 Why do they occur? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 30.3 What are the common tools used? . . . . . . . . . . . . . . . . . . . . . . . . 233 30.4 Why can it be dangerous? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 30.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 30.6 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 30.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239 IV OpSec Cheat Sheets (Defender) 240 31 Virtual Patching Cheat Sheet 241 31.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241 31.2 Definition: Virtual Patching . . . . . . . . . . . . . . . . . . . . . . . . . . . 241 31.3 Why Not Just Fix the Code? . . . . . . . . . . . . . . . . . . . . . . . . . . . 241 31.4 Value of Virtual Patching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241 31.5 Virtual Patching Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 31.6 A Virtual Patching Methodology . . . . . . . . . . . . . . . . . . . . . . . . . 242 31.7 Example Public Vulnerability . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 31.8 Preparation Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 31.9 Identification Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 31.10 Analysis Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 31.11 Virtual Patch Creation Phase . . . . . . . . . 
. . . . . . . . . . . . . . . . . . 245 31.12 Implementation/Testing Phase . . . . . . . . . . . . . . . . . . . . . . . . . 247 31.13 Recovery/Follow-Up Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 31.14 Related Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248 31.15 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 248 31.16 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248 V Draft Cheat Sheets 249 32 OWASP Top Ten Cheat Sheet 251 33 Access Control Cheat Sheet 252 33.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252 33.2 Attacks on Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 33.3 Access Control Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 33.4 Access Control Anti-Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 33.5 Attacking Access Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 33.6 Testing for Broken Access Control . . . . . . . . . . . . . . . . . . . . . . . . 256 33.7 Defenses Against Access Control Attacks . . . . . . . . . . . . . . . . . . . . 257 33.8 Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 33.9 SQL Integrated Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . 258 33.10 Access Control Positive Patterns . . . . . . . . . . . . . . . . . . . . . . . . . 259 33.11 Data Contextual Access Control . . . . . . . . . . . . . . . . . . . . . . . . . 259 33.12 Authors and Primary Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 7 Contents These Cheat Sheets have been taken from the owasp project on https://www.owasp. org. While this document is static, the online source is continuously improved and expanded. So please visit https://www.owasp.org if you have any doubt in the accuracy or actuality of this pdf or simply if this document is too old. All the articles are licenced under the Creative Commons Attribution-ShareAlike 3.0 Unported1. I have slightly reformatted and/or resectioned them in this work (which of course also is CC BY-SA 3.0). 1http://creativecommons.org/licenses/by-sa/3.0/ 10 Part I. Developer Cheat Sheets (Builder) 11 1. Authentication Cheat Sheet Last revision (mm/dd/yy): 02/24/2015 1.1. Introduction Authentication is the process of verification that an individual or an entity is who it claims to be. Authentication is commonly performed by submitting a user name or ID and one or more items of private information that only a given user should know. Session Management is a process by which a server maintains the state of an entity interacting with it. This is required for a server to remember how to react to sub- sequent requests throughout a transaction. Sessions are maintained on the server by a session identifier which can be passed back and forward between the client and server when transmitting and receiving requests. Sessions should be unique per user and computationally very difficult to predict. 1.2. Authentication General Guidelines 1.2.1. User IDs Make sure your usernames/userids are case insensitive. Regardless, it would be very strange for user ’smith’ and user ’Smith’ to be different users. Could result in serious confusion. 
Email address as a User ID Many sites use email addresses as a user id, which is a good mechanism for ensuring a unique identifier for each user without adding the burden of remembering a new username. However, many web applications do not treat email addresses correctly due to common misconceptions about what constitutes a valid address. Specifically, it is completely valid to have an mailbox address which: • Is case sensitive in the local-part • Has non-alphanumeric characters in the local-part (including + and @) • Has zero or more labels (though zero is admittedly not going to occur) The local-part is the part of the mailbox address to the left of the rightmost @ char- acter. The domain is the part of the mailbox address to the right of the rightmost @ character and consists of zero or more labels joined by a period character. At the time of writing, RFC 5321[2] is the current standard defining SMTP and what constitutes a valid mailbox address. Validation Many web applications contain computationally expensive and inaccurate regular expressions that attempt to validate email addresses. Recent changes to the landscape mean that the number of false-negatives will in- crease, particularly due to: 12 1. Authentication Cheat Sheet • If the new password doesn’t comply with the complexity policy, the error mes- sage should describe EVERY complexity rule that the new password does not comply with, not just the 1st rule it doesn’t comply with Changing passwords should be EASY, not a hunt in the dark. 1.2.3. Implement Secure Password Recovery Mechanism It is common for an application to have a mechanism that provides a means for a user to gain access to their account in the event they forget their password. Please see Forgot Password Cheat Sheet on page 65 for details on this feature. 1.2.4. Store Passwords in a Secure Fashion It is critical for a application to store a password using the right cryptographic tech- nique. Please see Password Storage Cheat Sheet on page 98 for details on this fea- ture. 1.2.5. Transmit Passwords Only Over TLS See: Transport Layer Protection Cheat Sheet on page 149 The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the "login landing page", must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to mod- ify the login form action, causing the user’s credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an at- tacker to view the unencrypted session ID and compromise the user’s authenticated session. 1.2.6. Require Re-authentication for Sensitive Features In order to mitigate CSRF and session hijacking, it’s important to require the current credentials for an account before updating sensitive account information such as the user’s password, user’s email, or before sensitive transactions, such as shipping a purchase to a new address. Without this countermeasure, an attacker may be able to execute sensitive transactions through a CSRF or XSS attack without needing to know the user’s current credentials. Additionally, an attacker may get temporary physical access to a user’s browser or steal their session ID to take over the user’s session. 1.2.7. 
Utilize Multi-Factor Authentication Multi-factor authentication (MFA) is using more than one authentication factor to logon or process a transaction: • Something you know (account details or passwords) • Something you have (tokens or mobile phones) • Something you are (biometrics) Authentication schemes such as One Time Passwords (OTP) implemented using a hardware token can also be key in fighting attacks such as CSRF and client-side malware. A number of hardware tokens suitable for MFA are available in the market that allow good integration with web applications. See [6]. 15 1. Authentication Cheat Sheet 1.2.7.1. SSL Client Authentication SSL Client Authentication, also known as two-way SSL authentication, consists of both, browser and server, sending their respective SSL certificates during the TLS handshake process. Just as you can validate the authenticity of a server by using the certificate and asking a well known Certificate Authority (CA) if the certificate is valid, the server can authenticate the user by receiving a certificate from the client and validating against a third party CA or its own CA. To do this, the server must provide the user with a certificate generated specifically for him, assigning values to the subject so that these can be used to determine what user the certificate should validate. The user installs the certificate on a browser and now uses it for the website. It is a good idea to do this when: • It is acceptable (or even preferred) that the user only has access to the website from only a single computer/browser. • The user is not easily scared by the process of installing SSL certificates on his browser or there will be someone, probably from IT support, that will do this for the user. • The website requires an extra step of security. • It is also a good thing to use when the website is for an intranet of a company or organization. It is generally not a good idea to use this method for widely and publicly available websites that will have an average user. For example, it wouldn’t be a good idea to implement this for a website like Facebook. While this technique can prevent the user from having to type a password (thus protecting against an average keylogger from stealing it), it is still considered a good idea to consider using both a password and SSL client authentication combined. For more information, see: [4] or [5]. 1.2.8. Authentication and Error Messages Incorrectly implemented error messages in the case of authentication functionality can be used for the purposes of user ID and password enumeration. An application should respond (both HTTP and HTML) in a generic manner. 1.2.8.1. Authentication Responses An application should respond with a generic error message regardless of whether the user ID or password was incorrect. It should also give no indication to the status of an existing account. 1.2.8.2. Incorrect Response Examples • "Login for User foo: invalid password" • "Login failed, invalid user ID" • "Login failed; account disabled" • "Login failed; this user is not active" 16 1. Authentication Cheat Sheet 1.2.8.3. Correct Response Example • "Login failed; Invalid userID or password" The correct response does not indicate if the user ID or password is the incorrect parameter and hence inferring a valid user ID. 1.2.8.4. Error Codes and URLs The application may return a different HTTP Error code depending on the authenti- cation attempt response. It may respond with a 200 for a positive result and a 403 for a negative result. 
Even though a generic error page is shown to a user, the HTTP response code may differ which can leak information about whether the account is valid or not. 1.2.9. Prevent Brute-Force Attacks If an attacker is able to guess passwords without the account becoming disabled due to failed authentication attempts, the attacker has an opportunity to continue with a brute force attack until the account is compromised. Automating brute- force/password guessing attacks on web applications is a trivial challenge. Pass- word lockout mechanisms should be employed that lock out an account if more than a preset number of unsuccessful login attempts are made. Password lockout mech- anisms have a logical weakness. An attacker that undertakes a large number of authentication attempts on known account names can produce a result that locks out entire blocks of user accounts. Given that the intent of a password lockout sys- tem is to protect from brute-force attacks, a sensible strategy is to lockout accounts for a period of time (e.g., 20 minutes). This significantly slows down attackers, while allowing the accounts to reopen automatically for legitimate users. Also, multi-factor authentication is a very powerful deterrent when trying to prevent brute force attacks since the credentials are a moving target. When multi-factor is implemented and active, account lockout may no longer be necessary. 1.3. Use of authentication protocols that require no password While authentication through a user/password combination and using multi-factor authentication is considered generally secure, there are use cases where it isn’t con- sidered the best option or even safe. An example of this are third party applications that desire connecting to the web application, either from a mobile device, another website, desktop or other situations. When this happens, it is NOT considered safe to allow the third party application to store the user/password combo, since then it extends the attack surface into their hands, where it isn’t in your control. For this, and other use cases, there are several authentication protocols that can protect you from exposing your users’ data to attackers. 1.3.1. OAuth Open Authorization (OAuth) is a protocol that allows an application to authenticate against a server as a user, without requiring passwords or any third party server that acts as an identity provider. It uses a token generated by the server, and provides how the authorization flows most occur, so that a client, such as a mobile application, can tell the server what user is using the service. The recommendation is to use and implement OAuth 1.0a or OAuth 2.0, since the very first version (OAuth1.0) has been found to be vulnerable to session fixation. 17 2. Choosing and Using Security Questions Cheat Sheet Last revision (mm/dd/yy): 04/17/2014 2.1. Introduction This cheat sheet provides some best practice for developers to follow when choos- ing and using security questions to implement a "forgot password" web application feature. 2.2. The Problem There is no industry standard either for providing guidance to users or developers when using or implementing a Forgot Password feature. The result is that developers generally pick a set of dubious questions and implement them insecurely. They do so, not only at the risk to their users, but also–because of potential liability issues– at the risk to their organization. 
Ideally, passwords would be dead, or at least less important in the sense that they make up only one of several multi-factor authentication mechanisms, but the truth is that we probably are stuck with passwords just like we are stuck with COBOL. So with that in mind, what can we do to make the Forgot Password solution as palatable as possible?

2.3. Choosing Security Questions and/or Identity Data

Most of us can instantly spot a bad "security question" when we see one. You know the ones we mean. Ones like "What is your favorite color?" are obviously bad. But as the Good Security Questions [2] web site rightly points out, "there really are NO GOOD security questions; only fair or bad questions".
The reason that most organizations allow users to reset their own forgotten passwords is not because of security, but rather to reduce their own costs by reducing the volume of calls to their help desks. It's the classic convenience vs. security trade-off, and in this case, convenience (both to the organization in terms of reduced costs and to the user in terms of simpler self-service) almost always wins out.
So given that the business aspect of lower cost generally wins out, what can we do to at least raise the bar a bit? Here are some suggestions. Note that we intentionally avoid recommending specific security questions. To do so would likely be counterproductive, because many developers would simply use those questions without much thought and adversaries would immediately start harvesting that data from various social networks.

2.3.1. Desired Characteristics

Any security questions or identity information presented to users to reset forgotten passwords should ideally have the following four characteristics:

1. Memorable: If users can't remember their answers to their security questions, you have achieved nothing.
2. Consistent: The user's answers should not change over time. For instance, asking "What is the name of your significant other?" may have a different answer 5 years from now.
3. Nearly universal: The security questions should apply to as wide an audience as possible.
4. Safe: The answers to security questions should not be something that is easily guessed or researched (e.g., something that is a matter of public record).

2.3.2. Steps

2.3.2.1. Step 1) Decide on Identity Data vs. Canned Questions vs. User-Created Questions

Generally, a single HTML form should be used to collect all of the inputs to be used for later password resets.
If your organization has a business relationship with users, you probably have collected some sort of additional information from your users when they registered with your web site. Such information includes, but is not limited to:

• email address
• last name
• date of birth
• account number
• customer number
• last 4 digits of social security number
• zip code for address on file
• street number for address on file

For enhanced security, you may wish to consider asking the user for their email address first and then sending an email that takes them to a private page that requests the other 2 (or more) identity factors. That way the email itself isn't that useful, because they still have to answer a bunch of 'secret' questions after they get to the landing page.
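As a rough illustration of that "email first, questions later" flow, the Java sketch below generates an unguessable, short-lived reset token and emails a link to the private landing page. It is not part of the original cheat sheet: the ResetTokenStore and MailSender interfaces, the URL, and the 30-minute expiry are assumptions chosen only to keep the example self-contained.

import java.security.SecureRandom;
import java.time.Duration;
import java.time.Instant;
import java.util.Base64;

interface ResetTokenStore { void save(String accountId, String token, Instant expiresAt); }
interface MailSender { void send(String to, String subject, String body); }

public class PasswordResetStarter {

    private static final SecureRandom RANDOM = new SecureRandom();

    private final ResetTokenStore tokenStore; // hypothetical persistence interface
    private final MailSender mailer;          // hypothetical mail interface

    public PasswordResetStarter(ResetTokenStore tokenStore, MailSender mailer) {
        this.tokenStore = tokenStore;
        this.mailer = mailer;
    }

    // Step 1 of the flow: email an unguessable, short-lived, single-use link.
    // The remaining identity factors / security questions are only requested
    // on the landing page, so the email alone is not enough to reset the password.
    public void beginReset(String accountId, String registeredEmail) {
        byte[] raw = new byte[32];                 // 256 bits of entropy from a CSPRNG
        RANDOM.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);

        tokenStore.save(accountId, token, Instant.now().plus(Duration.ofMinutes(30)));

        mailer.send(registeredEmail, "Password reset",
                "Visit https://example.com/reset/verify?token=" + token
                + " and answer your security questions to continue.");
    }
}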
On the other hand, if you host a web site that targets the general public, such as social networking sites, free email sites, news sites, photo sharing sites, etc., then you likely do not have this identity information and will need to use some sort of the ubiquitous "security questions". However, also be sure that you collect some means to send the password reset information to some out-of-band side-channel, such as a (different) email address, an SMS texting number, etc.
Believe it or not, there is a certain merit to allowing your users to select from a set of several "canned" questions. We generally ask users to fill out the security questions as part of completing their initial user profile, and often that is the very time that the user is in a hurry; they just wish to register and get on with using your site. If we ask users to create their own question(s) instead, they then generally do so under some amount of duress, and thus may be more likely to come up with extremely poor questions.
However, there is also some strong rationale for requiring users to create their own question(s), or at least one such question. The prevailing legal opinion seems to be that if we provide some sort of reasonable guidance to users in creating their own questions and then insist on them doing so, at least some of the potential liabilities are transferred from our organizations to the users. In such cases, if user accounts get hacked because of their weak security questions (e.g., "What is my favorite ice cream flavor?", etc.), then the thought is that they only have themselves to blame, and thus our organizations are less likely to get sued.
Since OWASP recommends in the Forgot Password Cheat Sheet on page 65 that multiple security questions should be posed to the user and successfully answered before allowing a password reset, a good practice might be to require the user to select 1 or 2 questions from a set of canned questions as well as to create (a different) one of their own, and then require that they answer one of their selected canned questions as well as their own question.

2.3.2.2. Step 2) Review Any Canned Questions with Your Legal Department or Privacy Officer

While most developers would generally first review any potential questions with whatever relevant business unit, it may not occur to them to review the questions with their legal department or chief privacy officer. However, this is advisable because there may be applicable laws or regulatory / compliance issues to which the questions must adhere. For example, in the telecommunications industry, the FCC's Customer Proprietary Network Information (CPNI) regulations prohibit asking customers security questions that involve "personal information", so questions such as "In what city were you born?" are generally not allowed.

2.3.2.3. Step 3) Insist on a Minimal Length for the Answers

Even if you pose decent security questions, because users generally dislike putting a whole lot of forethought into answering the questions, they often will just answer with something short. Answering with a short expletive is not uncommon, nor is answering with something like "xxx" or "1234". If you tell the user that they should answer with a phrase or sentence, and tell them that there is some minimal length to an acceptable answer (say 10 or 12 characters), you generally will get answers that are somewhat more resistant to guessing.

2.3.2.4.
Step 4) Consider How To Securely Store the Questions and Answers There are two aspects to this...storing the questions and storing the answers. Ob- viously, the questions must be presented to the user, so the options there are store them as plaintext or as reversible ciphertext. The answers technically do not need to be ever viewed by any human so they could be stored using a secure cryptographic hash (although in principle, I am aware of some help desks that utilize the both the questions and answers for password reset and they insist on being able to read the answers rather than having to type them in; YMMV). Either way, we would always recommend at least encrypting the answers rather than storing them as plaintext. This is especially true for answers to the "create your own question" type as users will sometimes pose a question that potentially has a sensitive answer (e.g., "What is my bank account # that I share with my wife?"). So the main question is whether or not you should store the questions as plaintext or reversible ciphertext. Admittedly, we are a bit biased, but for the "create your own question" types at least, we recommend that such questions be encrypted. This is because if they are encrypted, it makes it much less likely that your company will 22 2. Choosing and Using Security Questions Cheat Sheet • Display the security question(s) on a separate page only after your users have successfully authenticated with their usernames / passwords (rather than only after they have entered their username). In this manner, you at least do not allow an adversary to view and research the security questions unless they also know the user’s current password. • If you also use security questions to reset a user’s password, then you should use a different set of security questions for an additional means of authenticat- ing. • Security questions used for actual authentication purposes should regularly expire much like passwords. Periodically make the user choose new security questions and answers. • If you use answers to security questions as a subsequent authentication mech- anism (say to enter a more sensitive area of your web site), make sure that you keep the session idle time out very low...say less than 5 minutes or so, or that you also require the user to first re-authenticate with their password and then immediately after answer the security question(s). 2.5. Related Articles • Forgot Password Cheat Sheet on page 65 • Good Security Questions web site 2.6. Authors and Primary Editors • Kevin Wall - kevin.w.wall[at]gmail com 2.7. References 1. https://www.owasp.org/index.php/Choosing_and_Using_Security_ Questions_Cheat_Sheet 2. http://goodsecurityquestions.com/ 3. http://en.wikipedia.org/wiki/Customer_proprietary_network_ information 25 3. Clickjacking Defense Cheat Sheet Last revision (mm/dd/yy): 02/11/2015 3.1. Introduction This cheat sheet is focused on providing developer guidance on Clickjack/UI Redress [2] attack prevention. The most popular way to defend against Clickjacking is to include some sort of "frame-breaking" functionality which prevents other web pages from framing the site you wish to defend. This cheat sheet will discuss two methods of implementing frame-breaking: first is X-Frame-Options headers (used if the browser supports the functionality); and second is javascript frame-breaking code. 3.2. 
Defending with Content Security Policy frame-ancestors directive

The frame-ancestors directive can be used in a Content-Security-Policy HTTP response header to indicate whether or not a browser should be allowed to render a page in a <frame> or <iframe>. Sites can use this to avoid Clickjacking attacks by ensuring that their content is not embedded into other sites.
frame-ancestors allows a site to authorize multiple domains using the normal Content Security Policy semantics. See [19] for further details.

3.2.1. Limitations

• Browser support: frame-ancestors is not supported by all the major browsers yet.
• X-Frame-Options takes priority: Section 7.7.1 of the CSP Spec [18] says X-Frame-Options should be ignored if frame-ancestors is specified, but Chrome 40 & Firefox 35 ignore the frame-ancestors directive and follow the X-Frame-Options header instead.

3.3. Defending with X-Frame-Options Response Headers

The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a <frame> or <iframe>. Sites can use this to avoid Clickjacking attacks by ensuring that their content is not embedded into other sites.

3.3.1. X-Frame-Options Header Types

There are three possible values for the X-Frame-Options header:

• DENY, which prevents any domain from framing the content.
• SAMEORIGIN, which only allows the current site to frame the content.
• ALLOW-FROM uri, which permits the specified 'uri' to frame this page (e.g., ALLOW-FROM http://www.example.com). Check the Limitations section below: this option will fail open if the browser does not support it.

3.3.2. Browser Support

The following browsers support X-Frame-Options headers:

• Chrome: DENY/SAMEORIGIN supported since 4.1.249.1042 [3]; ALLOW-FROM not supported (bug reported) [4]
• Firefox (Gecko): DENY/SAMEORIGIN supported since 3.6.9 (1.9.2.9) [5]; ALLOW-FROM supported since 18.0 [6]
• Internet Explorer: DENY/SAMEORIGIN supported since 8.0 [7]; ALLOW-FROM supported since 9.0 [8]
• Opera: DENY/SAMEORIGIN supported since 10.50 [9]
• Safari: DENY/SAMEORIGIN supported since 4.0 [10]; ALLOW-FROM not supported (bug reported) [11]

See: [12], [13], [14]

3.3.3. Implementation

To implement this protection, you need to add the X-Frame-Options HTTP response header to any page that you want to protect from being clickjacked via framebusting. One way to do this is to add the HTTP response header manually to every page. A possibly simpler way is to implement a filter that automatically adds the header to every page.
OWASP has an article and some code [15] that provides all the details for implementing this in the Java EE environment. The SDL blog has posted an article [16] covering how to implement this in a .NET environment.

3.3.4. Common Defense Mistakes

Meta-tags that attempt to apply the X-Frame-Options directive DO NOT WORK. For example, <meta http-equiv="X-Frame-Options" content="deny"> will not work. You must apply the X-FRAME-OPTIONS directive as an HTTP response header as described above.

3.3.5. Limitations

• Per-page policy specification: The policy needs to be specified for every page, which can complicate deployment. Providing the ability to enforce it for the entire site, at login time for instance, could simplify adoption.
• Problems with multi-domain sites: The current implementation does not allow the webmaster to provide a whitelist of domains that are allowed to frame the page. While whitelisting can be dangerous, in some cases a webmaster might have no choice but to use more than one hostname.
• ALLOW-FROM browser support: The ALLOW-FROM option is a relatively recent addition (circa 2012) and may not be supported by all browsers yet.
BE CAREFUL ABOUT DEPENDING ON ALLOW-FROM. If you apply it and the browser does not support it, then you will have NO clickjacking defense in place. 27 3. Clickjacking Defense Cheat Sheet i f ( top . location != se l f . locaton ) { parent . location = se l f . location ; } Attacker top frame: <iframe src="attacker2 . html"> Attacker sub-frame: <iframe src="http ://www. victim .com"> 3.6.2. The onBeforeUnload Event A user can manually cancel any navigation request submitted by a framed page. To exploit this, the framing page registers an onBeforeUnload handler which is called whenever the framing page is about to be unloaded due to navigation. The handler function returns a string that becomes part of a prompt displayed to the user. Say the attacker wants to frame PayPal. He registers an unload handler function that returns the string "Do you want to exit PayPal?". When this string is displayed to the user is likely to cancel the navigation, defeating PayPal’s frame busting attempt. The attacker mounts this attack by registering an unload event on the top page using the following code: <script > window. onbeforeunload = function ( ) { return "Asking the user nicely " ; } </script > <iframe src="http ://www. paypal .com"> PayPal’s frame busting code will generate a BeforeUnload event activating our func- tion and prompting the user to cancel the navigation event. 3.6.3. No-Content Flushing While the previous attack requires user interaction, the same attack can be done without prompting the user. Most browsers (IE7, IE8, Google Chrome, and Firefox) enable an attacker to automatically cancel the incoming navigation request in an onBeforeUnload event handler by repeatedly submitting a navigation request to a site responding with \204 - No Content." Navigating to a No Content site is effectively a NOP, but flushes the request pipeline, thus canceling the original navigation request. Here is sample code to do this: var preventbust = 0 window. onbeforeunload = function ( ) { k i l lbust++ } set Interval ( function ( ) { i f ( k i l lbust > 0) { k i l lbust = 2; window. top . location = ’ http ://nocontent204 .com’ } } , 1) ; 30 3. Clickjacking Defense Cheat Sheet <iframe src="http ://www. victim .com"> 3.6.4. Exploiting XSS filters IE8 and Google Chrome introduced reflective XSS filters that help protect web pages from certain types of XSS attacks. Nava and Lindsay (at Blackhat) observed that these filters can be used to circumvent frame busting code. The IE8 XSS filter com- pares given request parameters to a set of regular expressions in order to look for obvious attempts at cross-site scripting. Using "induced false positives", the filter can be used to disable selected scripts. By matching the beginning of any script tag in the request parameters, the XSS filter will disable all inline scripts within the page, including frame busting scripts. External scripts can also be targeted by matching an external include, effectively disabling all external scripts. Since subsets of the JavaScript loaded is still functional (inline or external) and cookies are still available, this attack is effective for clickjacking. Victim frame busting code: <script > i f ( top != se l f ) { top . location = se l f . location ; } </script > Attacker: <iframe src="http ://www. 
victim .com/?v=<script > i f ’ ’ > The XSS filter will match that parameter "<script>if" to the beginning of the frame busting script on the victim and will consequently disable all inline scripts in the victim’s page, including the frame busting script. The XSSAuditor filter available for Google Chrome enables the same exploit. 3.6.5. Clobbering top.location Several modern browsers treat the location variable as a special immutable attribute across all contexts. However, this is not the case in IE7 and Safari 4.0.4 where the location variable can be redefined. IE7 Once the framing page redefines location, any frame busting code in a subframe that tries to read top.location will commit a security violation by trying to read a local variable in another domain. Similarly, any attempt to navigate by assigning top.location will fail. Victim frame busting code: i f ( top . location != se l f . location ) { top . location = se l f . location ; } 31 3. Clickjacking Defense Cheat Sheet Attacker: <script > var location = " clobbered " ; </script > <iframe src="http ://www. victim .com"> </iframe> Safari 4.0.4 We observed that although location is kept immutable in most circumstances, when a custom location setter is defined via defineSetter (through window) the object location becomes undefined. The framing page simply does: <script > window. defineSetter ( " location " , function ( ) { } ) ; </script > Now any attempt to read or navigate the top frame’s location will fail. 3.6.6. Restricted zones Most frame busting relies on JavaScript in the framed page to detect framing and bust itself out. If JavaScript is disabled in the context of the subframe, the frame busting code will not run. There are unfortunately several ways of restricting JavaScript in a subframe: In IE 8: <iframe src="http ://www. victim .com" security =" restr ic ted "></iframe> In Chrome: <iframe src="http ://www. victim .com" sandbox></iframe> In Firefox and IE: Activate designMode in parent page. 3.7. Authors and Primary Editors [none named] 3.8. References 1. https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet 2. https://www.owasp.org/index.php/Clickjacking 3. http://blog.chromium.org/2010/01/security-in-depth-new-security-features. html 4. https://code.google.com/p/chromium/issues/detail?id=129139 32 4. C-Based Toolchain Hardening Cheat Sheet integration or build server will use test configurations, and you will ship release builds. 1970’s K&R code and one size fits all flags are from a bygone era. Processes have evolved and matured to meet the challenges of a modern landscape, including threats. Because tools like Autconfig and Automake do not support the notion of build config- urations [4], you should prefer to work in an Integrated Develop Environments (IDE) or write your makefiles so the desired targets are supported. In addition, Autconfig and Automake often ignore user supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to again write a makefile from scratch rather than retrofitting existing auto tool files. 4.3.1. Debug Builds Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with third party libraries you program depends upon. To help with debugging and di- agnostics, you should define DEBUG and _DEBUG (if on a Windows platform) pre- processor macros and supply other ’debugging and diagnostic’ oriented flags to the compiler and linker. 
Additional preprocessor macros for selected libraries are offered in the full article [2]. You should use the following for GCC when building for debug: -O0 (or -O1) and -g3 -ggdb. No optimizations improve debuggability because optimizations often rear- range statements to improve instruction scheduling and remove unneeded code. You may need -O1 to ensure some analysis is performed. -g3 ensures maximum debug information is available, including symbolic constants and #defines. Asserts will help you write self debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be completely and full instrumented with asserts that: (1) validates and asserts all program state relevant to a function or a method; (2) validates and asserts all function parameters; and (3) validates and asserts all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures. Anywhere you have an if statement for validation, you should have an assert. Any- where you have an assert, you should have an if statement. They go hand-in-hand. Posix states if NDEBUG is not defined, then assert "shall write information about the particular call that failed on stderr and shall call abort" [5]. Calling abort during de- velopment is useless behavior, so you must supply your own assert that SIGTRAPs. A Unix and Linux example of a SIGTRAP based assert is provided in the full article [2]. Unlike other debugging and diagnostic methods - such as breakpoints and printf - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in. 4.3.2. Release Builds Release builds are diametrically opposed to debug configurations. In a release config- uration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define NDEBUG to remove the supplemental information and behavior. 35 4. C-Based Toolchain Hardening Cheat Sheet A release configuration should also use -O2/-O3/-Os and -g1/-g2. The optimizations will make it somewhat more difficult to make sense of a stack trace, but they should be few and far between. The -gN flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into you version control system along with the tagged build. NDEBUG will also remove asserts from your program by defining them to void since its not acceptable to crash via abort in production. You should not depend upon assert for crash report generation because those reports could contain sensitive in- formation and may end up on foreign systems, including for example, Windows Error Reporting [6]. If you want a crash dump, you should generate it yourself in a con- trolled manner while ensuring no sensitive information is written or leaked. Release builds should also curtail logging. 
If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and the relevant parameters. Remove all NSLog and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or maximum verbosity, you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.

4.3.3. Test Builds

A test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using -O2/-O3/-Os and -g1/-g2. You will run your suite of positive and negative tests against the test build.
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective-C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:

• Add -Dprotected=public -Dprivate=public to CFLAGS and CXXFLAGS
• Change __attribute__ ((visibility ("hidden"))) to __attribute__ ((visibility ("default")))

Many object-oriented purists oppose testing private interfaces, but this is not about object orientation. This (q.v.) is about building reliable and secure software.
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.

4.4. Library Integration

You must properly integrate and utilize libraries in your program. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you should be using, and correctly using the libraries. A well integrated library can complement your code, and a poorly written library can detract from your program. Because a stable library with required functionality can be elusive and it's tricky to integrate libraries, you should try to minimize dependencies and avoid third party libraries whenever possible.
Acceptance testing of libraries is practically non-existent. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of a lack of acceptance testing is Adobe's inclusion of a defective Sablotron library [7], which resulted in CVE-2012-1525 [8]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in libupnp. While it's popular to lay blame on others, the bottom line is that you chose the library, so you are responsible for it.
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured without SSLv2, SSLv3 and compression since they are defective. That means config should be executed with -no-comp -no-sslv2 and -no-sslv3.
As an additional example, using STLPort your de- bug configuration should also define _STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1, _STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1 because the library of- fers the additional diagnostics during development. Debug builds also present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as Debug Malloc Library (Dmalloc) during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8’s -fsanitize=memory. This is one area where one size clearly does not fit all. Using a library properly is always difficult, especially when there is no documenta- tion. Review any hardening documents available for the library, and be sure to visit the library’s documentation to ensure proper API usage. If required, you might have to review code or step library code under the debugger to ensure there are no bugs or undocumented features. 4.5. Static Analysis Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because -1 > 1 after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so -Wno-unused-parameter will probably be helpful with C++ code. You should consider a clean compile as a security gate. If you find its painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive for multiple compilers and platforms support since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules clean compile under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed. When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to CFLAGS for a program with C source files, and CXXFLAGS for a program with C++ source files. Objective C devel- opers should add their warnings to CFLAGS: -Wall -Wextra -Wconversion (or -Wsign- conversion), -Wcast-align, -Wformat=2 -Wformat-security, -fno-common, -Wmissing- prototypes, -Wmissing-declarations, -Wstrict-prototypes, -Wstrict-overflow, and -Wtrampolines. C++ presents additional opportunities under GCC, and the flags include - 37 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet Last revision (mm/dd/yy): 08/14/2014 5.1. Introduction Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious Web site, email, blog, instant message, or program causes a user’s Web browser to perform an unwanted action on a trusted site for which the user is currently authenticated. The impact of a successful cross-site request forgery attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or purchasing an item in the user’s context. 
In effect, CSRF attacks are used by an attacker to make a target system perform a function (funds Transfer, form submission etc.) via the target’s browser without knowledge of the target user, at least until the unauthorized function has been committed. Impacts of successful CSRF exploits vary greatly based on the role of the victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire Web application. The sites that are more likely to be attacked are community Websites (social networking, email) or sites that have high dollar value accounts associated with them (banks, stock brokerages, bill pay services). This attack can happen even if the user is logged into a Web site using strong encryption (HTTPS). Utilizing social engineering, an attacker will embed malicious HTML or JavaScript code into an email or Website to request a specific ’task url’. The task then executes with or without the user’s knowledge, either directly or by utilizing a Cross-site Scripting flaw (ex: Samy MySpace Worm). For more information on CSRF, please see the OWASP Cross-Site Request Forgery (CSRF) page [2]. 5.2. Prevention Measures That Do NOT Work 5.2.1. Using a Secret Cookie Remember that all cookies, even the secret ones, will be submitted with every re- quest. All authentication tokens will be submitted regardless of whether or not the end-user was tricked into submitting the request. Furthermore, session identifiers are simply used by the application container to associate the request with a specific session object. The session identifier does not verify that the end-user intended to submit the request. 5.2.2. Only Accepting POST Requests Applications can be developed to only accept POST requests for the execution of busi- ness logic. The misconception is that since the attacker cannot construct a malicious link, a CSRF attack cannot be executed. Unfortunately, this logic is incorrect. There 40 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet are numerous methods in which an attacker can trick a victim into submitting a forged POST request, such as a simple form hosted in an attacker’s Website with hidden values. This form can be triggered automatically by JavaScript or can be triggered by the victim who thinks the form will do something else. 5.2.3. Multi-Step Transactions Multi-Step transactions are not an adequate prevention of CSRF. As long as an at- tacker can predict or deduce each step of the completed transaction, then CSRF is possible. 5.2.4. URL Rewriting This might be seen as a useful CSRF prevention technique as the attacker can not guess the victim’s session ID. However, the user’s credential is exposed over the URL. 5.3. General Recommendation: Synchronizer Token Pattern In order to facilitate a "transparent but visible" CSRF solution, developers are encour- aged to adopt the Synchronizer Token Pattern [3]. The synchronizer token pattern requires the generating of random "challenge" tokens that are associated with the user’s current session. These challenge tokens are then inserted within the HTML forms and links associated with sensitive server-side operations. When the user wishes to invoke these sensitive operations, the HTTP request should include this challenge token. It is then the responsibility of the server application to verify the existence and correctness of this token. 
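To make the server-side half of this pattern concrete, here is a minimal, illustrative Java sketch of generating a per-session token and verifying it on state-changing requests. It is not the canonical OWASP implementation (projects such as OWASP CSRFGuard or framework-provided protections are usually preferable); the attribute and parameter names are arbitrary choices for the example.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();
    private static final String SESSION_KEY = "CSRFToken"; // illustrative names only
    private static final String PARAM_NAME  = "CSRFToken";

    private CsrfTokens() { }

    // Returns the per-session token, creating it on first use. Embed this value
    // in a hidden form field (or request header) for every sensitive operation.
    public static String tokenFor(HttpSession session) {
        synchronized (session) {
            String token = (String) session.getAttribute(SESSION_KEY);
            if (token == null) {
                byte[] raw = new byte[32];                    // 256 bits of randomness
                RANDOM.nextBytes(raw);
                token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
                session.setAttribute(SESSION_KEY, token);
            }
            return token;
        }
    }

    // Server-side check: reject the request unless the submitted token matches
    // the one stored in the session (compared in constant time).
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) {
            return false;
        }
        String expected  = (String) session.getAttribute(SESSION_KEY);
        String submitted = request.getParameter(PARAM_NAME);
        if (expected == null || submitted == null) {
            return false;
        }
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                submitted.getBytes(StandardCharsets.UTF_8));
    }
}

A filter or controller would call CsrfTokens.isValid(request) before executing any state-changing action, and abort, reset the token, and log the event on failure, as described below.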
By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as suc- cessful exploitation assumes the attacker knows the randomly generated token for the target victim’s session. This is analogous to the attacker being able to guess the target victim’s session identifier. The following synopsis describes a general approach to incorporate challenge tokens within the request. When a Web application formulates a request (by generating a link or form that causes a request when submitted or clicked by the user), the application should include a hidden input parameter with a common name such as "CSRFToken". The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness utilized in the data that is hashed to generate the random token. <form action="/ transfer .do" method="post"> <input type="hidden" name="CSRFToken" value="OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWE. . . wYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZ. . . MGYwMGEwOA=="> . . . </form> In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the 41 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet token in the request as compared to the token found in the session. If the token was not found within the request or the value provided does not match the value within the session, then the request should be aborted, token should be reset and the event logged as a potential CSRF attack in progress. To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and or value for each request. Implementing this ap- proach results in the generation of per-request tokens as opposed to per-session tokens. Note, however, that this may result in usability concerns. For example, the "Back" button browser capability is often hindered as the previous page may contain a token that is no longer valid. Interaction with this previous page will result in a CSRF false positive security event at the server. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as the use of SSLv3/TLS. 5.3.1. Disclosure of Token in URL Many implementations of this control include the challenge token in GET (URL) re- quests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked as a result of embedded links in the page or other general design patterns. These patterns are often implemented without knowl- edge of CSRF and an understanding of CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is being exposed for GET requests. 
CSRF tokens in GET requests are potentially leaked at several locations: browser history, HTTP log files, network appliances that make a point to log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (leaked CSRF token due to the Referer header being parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and they will be able to target this attack very effectively, since the Referer header tells them the site as well as the CSRF token. The attack could be run entirely from javascript, so that a simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). This attack scenario is easy to prevent, the referer will be omitted if the origin of the request is HTTPS. Therefore this attack does not affect web applications that are HTTPS only. The ideal solution is to only include the CSRF token in POST requests and modify server-side actions that have state changing affect to only respond to POST requests. This is in fact what the RFC 2616 [4] requires for GET requests. If sensitive server- side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests. In most JavaEE web applications, however, HTTP method scoping is rarely ever utilized when retrieving HTTP parameters from a request. Calls to "HttpServletRe- quest.getParameter" will return a parameter value regardless if it was a GET or POST. This is not to say HTTP method scoping cannot be enforced. It can be achieved if a developer explicitly overrides doPost() in the HttpServlet class or leverages framework specific capabilities such as the AbstractFormController class in Spring. For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off. 42 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet 5.4.2. Checking The Origin Header The Origin HTTP Header [8] standard was introduced as a method of defending against CSRF and other Cross-Domain attacks. Unlike the referer, the origin will be present in HTTP request that originates from an HTTPS url. If the origin header is present, then it should be checked for consistency. 5.4.3. Challenge-Response Challenge-Response is another defense option for CSRF. The following are some ex- amples of challenge-response options. • CAPTCHA • Re-Authentication (password) • One-time Token While challenge-response is a very strong defense to CSRF (assuming proper imple- mentation), it does impact user experience. For applications in need of high security, tokens (transparent) and challenge-response should be used on high risk functions. 5.5. Client/User Prevention Since CSRF vulnerabilities are reportedly widespread, it is recommended to follow best practices to mitigate risk. Some mitigating include: • Logoff immediately after using a Web application • Do not allow your browser to save username/passwords, and do not allow sites to "remember" your login • Do not use the same browser to access sensitive applications and to surf the Internet freely (tabbed browsing). • The use of plugins such as No-Script makes POST based CSRF vulnerabilities difficult to exploit. 
This is because JavaScript is used to automatically submit the form when the exploit is loaded. Without JavaScript the attacker would have to trick the user into submitting the form manually. Integrated HTML-enabled mail/browser and newsreader/browser environments pose additional risks since simply viewing a mail message or a news message might lead to the execution of an attack. 5.6. No Cross-Site Scripting (XSS) Vulnerabilities Cross-Site Scripting is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat token, Double-Submit cookie, referer and origin based CSRF defenses. This is because an XSS payload can simply read any page on the site using a XMLHttpRequest and obtain the generated token from the response, and include that token with a forged request. This technique is ex- actly how the MySpace (Samy) worm [9] defeated MySpace’s anti CSRF defenses in 2005, which enabled the worm to propagate. XSS cannot defeat challenge-response defenses such as Captcha, re-authentication or one-time passwords. It is impera- tive that no XSS vulnerabilities are present to ensure that CSRF defenses can’t be circumvented. Please see the OWASP XSS Prevention Cheat Sheet on page 179 for detailed guidance on how to prevent XSS flaws. 45 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet 5.7. Authors and Primary Editors • Paul Petefish - paulpetefish[at]solutionary.com • Eric Sheridan - eric.sheridan[at]owasp.org • Dave Wichers - dave.wichers[at]owasp.org 5.8. References 1. https://www.owasp.org/index.php/Cross-Site_Request_Forgery_ (CSRF)_Prevention_Cheat_Sheet 2. https://www.owasp.org/index.php/Cross-Site_Request_Forgery_ (CSRF) 3. http://www.corej2eepatterns.com/Design/PresoDesign.htm 4. http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 5. http://directwebremoting.org/ 6. http://en.wikipedia.org/wiki/Cryptographic_nonce 7. http://en.wikipedia.org/wiki/Claims-based_identity 8. https://wiki.mozilla.org/Security/Origin 9. http://en.wikipedia.org/wiki/Samy_(XSS) 46 6. Cryptographic Storage Cheat Sheet Last revision (mm/dd/yy): 03/10/2015 6.1. Introduction This article provides a simple model to follow when implementing solutions for data at rest. 6.1.1. Architectural Decision An architectural decision must be made to determine the appropriate method to pro- tect data at rest. There are such wide varieties of products, methods and mechanisms for cryptographic storage. This cheat sheet will only focus on low-level guidelines for developers and architects who are implementing cryptographic solutions. We will not address specific vendor solutions, nor will we address the design of cryptographic algorithms. 6.2. Providing Cryptographic Functionality 6.2.1. Secure Cryptographic Storage Design Rule - Only store sensitive data that you need Many eCommerce businesses utilize third party payment providers to store credit card information for recurring billing. This offloads the burden of keeping credit card numbers safe. Rule - Use strong approved Authenticated Encryption E.g. CCM [2] or GCM [3] are approved Authenticated Encryption [4] modes based on AES [5] algorithm. Rule - Use strong approved cryptographic algorithms Do not implement an existing cryptographic algorithm on your own, no matter how easy it appears. Instead, use widely accepted algorithms and widely accepted implementations. Only use approved public algorithms such as AES, RSA public key cryptography, and SHA-256 or better for hashing. 
Do not use weak algorithms, such as MD5 or SHA1. Avoid hashing for password storage, instead use PBKDF2, bcrypt or scrypt. Note that the classification of a "strong" cryptographic algorithm can change over time. See NIST approved algorithms [6] or ISO TR 14742 "Recommendations on Cryptographic Algorithms and their use" or Algorithms, key size and parameters report – 2014 [7] from European Union Agency for Network and Information Security. E.g. AES 128, RSA [8] 3072, SHA [9] 256. Ensure that the implementation has (at minimum) had some cryptography experts involved in its creation. If possible, use an implementation that is FIPS 140-2 certi- fied. 47 6. Cryptographic Storage Cheat Sheet Rule - Protect keys in a key vault Keys should remain in a protected key vault at all times. In particular, ensure that there is a gap between the threat vectors that have direct access to the data and the threat vectors that have direct access to the keys. This implies that keys should not be stored on the application or web server (assuming that application attackers are part of the relevant threat model). Rule - Document concrete procedures for managing keys through the lifecycle These procedures must be written down and the key custodians must be adequately trained. Rule - Build support for changing keys periodically Key rotation is a must as all good keys do come to an end either through expiration or revocation. So a developer will have to deal with rotating keys at some point – better to have a system in place now rather than scrambling later. (From Bil Cory as a starting point). Rule - Document concrete procedures to handle a key compromise Rule - Rekey data at least every one to three years Rekeying refers to the process of decrypting data and then re-encrypting it with a new key. Periodically rekeying data helps protect it from undetected compromises of older keys. The appropriate rekeying period depends on the security of the keys. Data protected by keys secured in dedicated hardware security modules might only need rekeying every three years. Data protected by keys that are split and stored on two application servers might need rekeying every year. Rule - Follow applicable regulations on use of cryptography Rule - Under PCI DSS requirement 3, you must protect cardholder data The Payment Card Industry (PCI) Data Security Standard (DSS) was developed to encourage and enhance cardholder data security and facilitate the broad adoption of consistent data security measures globally. The standard was introduced in 2005 and replaced in- dividual compliance standards from Visa, Mastercard, Amex, JCB and Diners. The current version of the standard is 2.0 and was initialized on January 1, 2011. PCI DSS requirement 3 covers secure storage of credit card data. This requirement covers several aspects of secure storage including the data you must never store but we are covering Cryptographic Storage which is covered in requirements 3.4, 3.5 and 3.6 as you can see below: 3.4 Render PAN (Primary Account Number), at minimum, unreadable anywhere it is stored Compliance with requirement 3.4 can be met by implementing any of the four types of secure storage described in the standard which includes encrypting and hashing data. These two approaches will often be the most popular choices from the list of options. The standard doesn’t refer to any specific algorithms but it mandates the use of Strong Cryptography. 
The glossary document from the PCI council defines Strong Cryptography as: Cryptography based on industry-tested and accepted algorithms, along with strong key lengths and proper key-management practices. Cryptography is a method to pro- tect data and includes both encryption (which is reversible) and hashing (which is not reversible, or "one way"). SHA-1 is an example of an industry-tested and accepted hashing algorithm. Examples of industry-tested and accepted standards and algo- rithms for encryption include AES (128 bits and higher), TDES (minimum double-length keys), RSA (1024 bits and higher), ECC (160 bits and higher), and ElGamal (1024 bits and higher). 50 6. Cryptographic Storage Cheat Sheet If you have implemented the second rule in this cheat sheet you will have imple- mented a strong cryptographic algorithm which is compliant with or stronger than the requirements of PCI DSS requirement 3.4. You need to ensure that you identify all locations that card data could be stored including logs and apply the appropriate level of protection. This could range from encrypting the data to replacing the card number in logs. This requirement can also be met by implementing disk encryption rather than file or column level encryption. The requirements for Strong Cryptography are the same for disk encryption and backup media. The card data should never be stored in the clear and by following the guidance in this cheat sheet you will be able to securely store your data in a manner which is compliant with PCI DSS requirement 3.4 3.5 Protect any keys used to secure cardholder data against disclosure and misuse As the requirement name above indicates, we are required to securely store the en- cryption keys themselves. This will mean implementing strong access control, audit- ing and logging for your keys. The keys must be stored in a location which is both secure and "away" from the encrypted data. This means key data shouldn’t be stored on web servers, database servers etc Access to the keys must be restricted to the smallest amount of users possible. This group of users will ideally be users who are highly trusted and trained to perform Key Custodian duties. There will obviously be a requirement for system/service accounts to access the key data to perform encryption/decryption of data. The keys themselves shouldn’t be stored in the clear but encrypted with a KEK (Key Encrypting Key). The KEK must not be stored in the same location as the encryption keys it is encrypting. 3.6 Fully document and implement all key-management processes and procedures for cryptographic keys used for encryption of cardholder data Requirement 3.6 mandates that key management processes within a PCI compliant company cover 8 specific key lifecycle steps: 3.6.1 Generation of strong cryptographic keys As we have previously described in this cheat sheet we need to use algorithms which offer high levels of data security. We must also generate strong keys so that the security of the data isn’t undermined by weak cryptographic keys. A strong key is generated by using a key length which is sufficient for your data security require- ments and compliant with the PCI DSS. The key size alone isn’t a measure of the strength of a key. The data used to generate the key must be sufficiently random ("sufficient" often being determined by your data security requirements) and the en- tropy of the key data itself must be high. 
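As a minimal Java (JCA) sketch of requirement 3.6.1, the example below generates a 256-bit AES data-encryption key from a cryptographically strong random source and stores it only in wrapped form under a separate key-encrypting key, echoing requirement 3.5. This is illustrative rather than a complete key-management solution: in practice the KEK would live in an HSM or key vault, and on older JDKs 256-bit AES may require the unlimited-strength policy files.

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyGenerationExample {

    public static void main(String[] args) throws Exception {
        // A CSPRNG-backed entropy source; getInstanceStrong() may block on some systems.
        SecureRandom random = SecureRandom.getInstanceStrong();

        // Generate a 256-bit AES data-encryption key (DEK).
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256, random);
        SecretKey dataKey = keyGen.generateKey();

        // Generate a separate key-encrypting key (KEK). In a real deployment this
        // key is created and held in an HSM or key vault, away from the data.
        KeyGenerator kekGen = KeyGenerator.getInstance("AES");
        kekGen.init(256, random);
        SecretKey kek = kekGen.generateKey();

        // Persist only the wrapped (encrypted) form of the data key.
        Cipher wrapper = Cipher.getInstance("AESWrap");
        wrapper.init(Cipher.WRAP_MODE, kek);
        byte[] wrappedDataKey = wrapper.wrap(dataKey);

        System.out.println("Wrapped DEK length: " + wrappedDataKey.length + " bytes");
    }
}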
3.6.2 Secure cryptographic key distribution The method used to distribute keys must be secure to prevent the theft of keys in transit. The use of a protocol such as Diffie Hellman can help secure the distribution of keys, the use of secure transport such as TLS and SSHv2 can also secure the keys in transit. Older protocols like SSLv3 should not be used. 3.6.3 Secure cryptographic key storage The secure storage of encryption keys including KEK’s has been touched on in our description of requirement 3.5 (see above). 3.6.4 Periodic cryptographic key changes The PCI DSS standard mandates that keys used for encryption must be rotated at least annually. The key rotation process must remove an old key from the encryp- tion/decryption process and replace it with a new key. All new data entering the 51 6. Cryptographic Storage Cheat Sheet system must encrypted with the new key. While it is recommended that existing data be rekeyed with the new key, as per the Rekey data at least every one to three years rule above, it is not clear that the PCI DSS requires this. 3.6.5 Retirement or replacement of keys as deemed necessary when the integrity of the key has been weakened or keys are suspected of being compromised The key management processes must cater for archived, retired or compromised keys. The process of securely storing and replacing these keys will more than likely be covered by your processes for requirements 3.6.2, 3.6.3 and 3.6.4 3.6.6 Split knowledge and establishment of dual control of cryptographic keys The requirement for split knowledge and/or dual control for key management pre- vents an individual user performing key management tasks such as key rotation or deletion. The system should require two individual users to perform an action (i.e. entering a value from their own OTP) which creates to separate values which are concatenated to create the final key data. 3.6.7 Prevention of unauthorized substitution of cryptographic keys The system put in place to comply with requirement 3.6.6 can go a long way to preventing unauthorised substitution of key data. In addition to the dual control process you should implement strong access control, auditing and logging for key data so that unauthorised access attempts are prevented and logged. 3.6.8 Requirement for cryptographic key custodians to sign a form stating that they understand and accept their key-custodian responsibilities To perform the strong key management functions we have seen in requirement 3.6 we must have highly trusted and trained key custodians who understand how to perform key management duties. The key custodians must also sign a form stating they understand the responsibilities that come with this role. 6.3. Related Articles OWASP - Testing for SSL-TLS [28], and OWASP Guide to Cryptography [29], OWASP – Application Security Verification Standard (ASVS) – Communication Security Veri- fication Requirements (V10) [30]. 6.4. Authors and Primary Editors • Kevin Kenan - kevin[at]k2dd.com • David Rook - david.a.rook[at]gmail.com • Kevin Wall - kevin.w.wall[at]gmail.com • Jim Manico - jim[at]owasp.org • Fred Donovan - fred.donovan(at)owasp.org 6.5. References 1. https://www.owasp.org/index.php/Cryptographic_Storage_Cheat_ Sheet 2. http://en.wikipedia.org/wiki/CCM_mode 3. http://en.wikipedia.org/wiki/GCM_mode 52 7. DOM based XSS Prevention Cheat Sheet Let’s look at the individual subcontexts of the execution context in turn. 7.1.1. 
RULE #1 - HTML Escape then JavaScript Escape Before Inserting Untrusted Data into HTML Subcontext within the Execution Context There are several methods and attributes which can be used to directly render HTML content within JavaScript. These methods constitute the HTML Subcontext within the Execution Context. If these methods are provided with untrusted input, then an XSS vulnerability could result. For example: Example Dangerous HTML Methods Attributes element . innerHTML = "<HTML> Tags and markup" ; element .outerHTML = "<HTML> Tags and markup" ; Methods document . write (" <HTML> Tags and markup" ) ; document . writeln (" <HTML> Tags and markup" ) ; Guideline To make dynamic updates to HTML in the DOM safe, we recommend a) HTML en- coding, and then b) JavaScript encoding all untrusted input, as shown in these examples: element . innerHTML = "<%=Encoder . encodeForJS ( Encoder .encodeForHTML( ↪→ untrustedData ) ) %>"; element .outerHTML = "<%=Encoder . encodeForJS ( Encoder .encodeForHTML( ↪→ untrustedData ) ) %>"; document . write ("<%=Encoder . encodeForJS ( Encoder .encodeForHTML( untrustedData ) ↪→ ) %>") ; document . writeln ("<%=Encoder . encodeForJS ( Encoder .encodeForHTML( ↪→ untrustedData ) ) %>") ; Note: The Encoder.encodeForHTML() and Encoder.encodeForJS() are just notional encoders. Various options for actual encoders are listed later in this document. 7.1.2. RULE #2 - JavaScript Escape Before Inserting Untrusted Data into HTML Attribute Subcontext within the Execution Context The HTML attribute *subcontext* within the *execution* context is divergent from the standard encoding rules. This is because the rule to HTML attribute encode in an HTML attribute rendering context is necessary in order to mitigate attacks which try to exit out of an HTML attributes or try to add additional attributes which could lead to XSS. When you are in a DOM execution context you only need to JavaScript encode HTML attributes which do not execute code (attributes other than event handler, CSS, and URL attributes). For example, the general rule is to HTML Attribute encode untrusted data (data from the database, HTTP request, user, back-end system, etc.) placed in an HTML Attribute. This is the appropriate step to take when outputting data in a rendering context, however using HTML Attribute encoding in an execution context will break the application display of data. 55 7. DOM based XSS Prevention Cheat Sheet SAFE but BROKEN example var x = document . createElement ( " input " ) ; x . setAttribute ( "name" , "company_name" ) ; // In the fol lowing l ine of code , companyName represents untrusted user ↪→ input // The Encoder . encodeForHTMLAttr ( ) i s unnecessary and causes double− ↪→ encoding x . setAttribute ( " value " , ’<%=Encoder . encodeForJS ( Encoder . encodeForHTMLAttr ( ↪→ companyName) ) %>’) ; var form1 = document . forms [ 0 ] ; form1 . appendChild ( x ) ; The problem is that if companyName had the value "Johnson & Johnson". What would be displayed in the input text field would be "Johnson &amp; Johnson". The appropriate encoding to use in the above case would be only JavaScript encoding to disallow an attacker from closing out the single quotes and in-lining code, or escaping to HTML and opening a new script tag. SAFE and FUNCTIONALLY CORRECT example var x = document . createElement ( " input " ) ; x . setAttribute ( "name" , "company_name" ) ; x . setAttribute ( " value " , ’<%=Encoder . encodeForJS (companyName) %>’) ; var form1 = document . forms [ 0 ] ; form1 . 
appendChild ( x ) ; It is important to note that when setting an HTML attribute which does not execute code, the value is set directly within the object attribute of the HTML element so there is no concerns with injecting up. 7.1.3. RULE #3 - Be Careful when Inserting Untrusted Data into the Event Handler and JavaScript code Subcontexts within an Execution Context Putting dynamic data within JavaScript code is especially dangerous because JavaScript encoding has different semantics for JavaScript encoded data when com- pared to other encodings. In many cases, JavaScript encoding does not stop attacks within an execution context. For example, a JavaScript encoded string will execute even though it is JavaScript encoded. Therefore, the primary recommendation is to avoid including untrusted data in this context. If you must, the following examples describe some approaches that do and do not work. var x = document . createElement ( " a " ) ; x . href ="#"; // In the l ine of code below , the encoded data // on the right ( the second argument to setAttribute ) // is an example of untrusted data that was properly // JavaScript encoded but s t i l l executes . x . setAttribute ( " onclick " , "\u0061\u006c\u0065\u0072\u0074\u0028\u0032\u0032 ↪→ \u0029" ) ; var y = document . createTextNode ( " Click To Test " ) ; x . appendChild ( y ) ; document .body . appendChild ( x ) ; The setAttribute(name_string,value_string) method is dangerous because it implicitly coerces the string_value into the DOM attribute datatype of name_string. In the case 56 7. DOM based XSS Prevention Cheat Sheet above, the attribute name is an JavaScript event handler, so the attribute value is im- plicitly converted to JavaScript code and evaluated. In the case above, JavaScript en- coding does not mitigate against DOM based XSS. Other JavaScript methods which take code as a string types will have a similar problem as outline above (setTimeout, setInterval, new Function, etc.). This is in stark contrast to JavaScript encoding in the event handler attribute of a HTML tag (HTML parser) where JavaScript encoding mitigates against XSS. <a id ="bb" href ="#" onclick="\u0061\u006c\u0065\u0072\u0074\u0028\u0031\ ↪→ u0029"> Test Me</a> An alternative to using Element.setAttribute(...) to set DOM attributes is to set the attribute directly. Directly setting event handler attributes will allow JavaScript en- coding to mitigate against DOM based XSS. Please note, it is always dangerous design to put untrusted data directly into a command execution context. <a id ="bb" href="#"> Test Me</a> //The fol lowing does NOT work because the event handler //is being set to a string . " a ler t (7 ) " is JavaScript encoded . document . getElementById ( " bb " ) . onclick = "\u0061\u006c\u0065\u0072\u0074\ ↪→ u0028\u0037\u0029" ; //The fol lowing does NOT work because the event handler is being set to a ↪→ string . document . getElementById ( " bb " ) . onmouseover = " t e s t I t " ; //The fol lowing does NOT work because of the //encoded " ( " and " ) " . " a ler t (77) " is JavaScript encoded . document . getElementById ( " bb " ) . onmouseover = \u0061\u006c\u0065\u0072\u0074\ ↪→ u0028\u0037\u0037\u0029; //The fol lowing does NOT work because of the encoded " ; " . //" t e s t I t ; t e s t I t " is JavaScript encoded . document . getElementById ( " bb " ) . 
onmouseover \u0074\u0065\u0073\u0074\u0049\ ↪→ u0074\u003b\u0074\u0065\u0073\u0074\u0049\u0074; //The fol lowing DOES WORK because the encoded value //is a val id variable name or function reference . " t e s t I t " is JavaScript ↪→ encoded document . getElementById ( " bb " ) . onmouseover = \u0074\u0065\u0073\u0074\u0049\ ↪→ u0074; function t e s t I t ( ) { a ler t ( " I was called . " ) ; } There are other places in JavaScript where JavaScript encoding is accepted as valid executable code. for ( var \u0062=0; \u0062 < 10; \u0062++) { \u0064\u006f\u0063\u0075\u006d\u0065\u006e\u0074 .\u0077\u0072\u0069\u0074\u0065\u006c\u006e ( "\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064" ) ; } \u0077\u0069\u006e\u0064\u006f\u0077 .\u0065\u0076\u0061\u006c \u0064\u006f\u0063\u0075\u006d\u0065\u006e\u0074 .\u0077\u0072\u0069\u0074\u0065(111111111) ; or var s = "\u0065\u0076\u0061\u006c " ; var t = "\u0061\u006c\u0065\u0072\u0074\u0028\u0031\u0031\u0029" ; window[ s ] ( t ) ; 57 7. DOM based XSS Prevention Cheat Sheet setTimeout ( ( function (param) { return function ( ) { customFunction (param) ; } } ) ("<%=Encoder . encodeForJS ( untrustedData ) %>") , y ) ; The other alternative is using N-levels of encoding. N-Levels of Encoding If your code looked like the following, you would need to only double JavaScript encode input data. setTimeout ( " customFunction(’<%=doubleJavaScriptEncodedData%>’, y ) " ) ; function customFunction ( firstName , lastName ) a ler t ( " Hello " + firstName + " " + lastNam ) ; } The doubleJavaScriptEncodedData has its first layer of JavaScript encoding reversed (upon execution) in the single quotes. Then the implicit eval() of setTimeout() reverses another layer of JavaScript encoding to pass the correct value to customFunction. The reason why you only need to double JavaScript encode is that the customFunc- tion function did not itself pass the input to another method which implicitly or explicitly called eval(). If "firstName" was passed to another JavaScript method which implicitly or explicitly called eval() then <%=doubleJavaScriptEncodedData%> above would need to be changed to <%=tripleJavaScriptEncodedData%>. An important implementation note is that if the JavaScript code tries to utilize the double or triple encoded data in string comparisons, the value may be interpreted as different values based on the number of evals() the data has passed through before being passed to the if comparison and the number of times the value was JavaScript encoded. If "A" is double JavaScript encoded then the following if check will return false. var x = "doubleJavaScriptEncodedA " ; //\u005c\u0075\u0030\u0030\u0034\u0031 i f ( x == "A" ) { a ler t ( " x is A" ) ; } e lse i f ( x == "\u0041" ) { a ler t ( " This is what pops " ) ; } This brings up an interesting design point. Ideally, the correct way to apply en- coding and avoid the problem stated above is to server-side encode for the output context where data is introduced into the application. Then client-side encode (using a JavaScript encoding library such as ESAPI4JS) for the individual subcontext (DOM methods) which untrusted data is passed to. ESAPI4JS [5] and jQuery Encoder [6] are two client side encoding libraries developed by Chris Schmidt. Here are some examples of how they are used: var input = "<%=Encoder . encodeForJS ( untrustedData ) %>"; //server−side ↪→ encoding window. location = ESAPI4JS.encodeForURL ( input ) ; //URL encoding is happening ↪→ in JavaScript document . 
writeln (ESAPI4JS.encodeForHTML( input ) ) ; //HTML encoding is ↪→ happening in JavaScript It has been well noted by the group that any kind of reliance on a JavaScript library for encoding would be problematic as the JavaScript library could be subverted by attackers. One option is to wait till ECMAScript 5 so the JavaScript library could 60 7. DOM based XSS Prevention Cheat Sheet support immutable properties. Another option provided by Gaz (Gareth) was to use a specific code construct to limit mutability with anonymous clousures. An example follows: function escapeHTML( str ) { str = str + " " ; var out = " " ; for ( var i =0; i <str . length ; i ++) { i f ( str [ i ] === ’ < ’ ) { out += ’& l t ; ’ ; } e lse i f ( str [ i ] === ’ > ’ ) { out += ’&gt ; ’ ; } e lse i f ( str [ i ] === " ’ " ) { out += ’&#39; ’; } e lse i f ( str [ i ] === ’ " ’ ) { out += ’&quot ; ’ ; } e lse { out += str [ i ] ; } } return out ; } Chris Schmidt has put together another implementation of a JavaScript encoder [7]. 7. Limit the usage of dynamic untrusted data to right side operations. And be aware of data which may be passed to the application which look like code (eg. location, eval()). (Achim) var x = "<%=properly encoded data for flow%>"; If you want to change different object attributes based on user input use a level of indirection. Instead of: window[ userData ] = "moreUserData " ; Do the following instead: i f ( userData===" location " ) { window. location = " stat ic/path/or/properly/url/encoded/value " ; } 8. When URL encoding in DOM be aware of character set issues as the character set in JavaScript DOM is not clearly defined (Mike Samuel). 9. Limit access to properties objects when using object[x] accessors. (Mike Samuel). In other words use a level of indirection between untrusted input and specified object properties. Here is an example of the problem when using map types: var myMapType = { } ; myMapType[<%=untrustedData%>] = "moreUntrustedData " ; Although the developer writing the code above was trying to add additional keyed elements to the myMapType object. This could be used by an attacker to subvert internal and external attributes of the myMapType object. 10. Run your JavaScript in a ECMAScript 5 canopy or sand box to make it harder for your JavaScript API to be compromised (Gareth Heyes and John Stevens). 61 7. DOM based XSS Prevention Cheat Sheet 11. Don’t eval() JSON to convert it to native JavaScript objects. Instead use JSON.toJSON() and JSON.parse() (Chris Schmidt). 7.3. Common Problems Associated with Mitigating DOM Based XSS 7.3.1. Complex Contexts In many cases the context isn’t always straightforward to discern. <a href =" javascript :myFunction(’<%=untrustedData%>’, ’ test ’ ) ;" > Click Me</a> . . . <script > Function myFunction ( url ,name) { window. location = url ; } </script > In the above example, untrusted data started in the rendering URL context (href attribute of an <a> tag) then changed to a JavaScript execution context (javascript: protocol handler) which passed the untrusted data to an execution URL subcontext (window.location of myFunction). Because the data was introduced in JavaScript code and passed to a URL subcontext the appropriate server-side encoding would be the following: <a href =" javascript :myFunction(’<%=Encoder . encodeForJS ( Encoder . encodeForURL ( untrustedData ) ) %>’, ’ test ’ ) ;" > Click Me</a> . . . 
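The Encoder.encodeForJS() and Encoder.encodeForURL() calls above are notional. As one possible server-side realization, the sketch below uses the OWASP Java Encoder (an assumption; any equivalent encoding library would do) to apply the same URL-then-JavaScript layering before the value is emitted into the javascript: handler:

import org.owasp.encoder.Encode;

public class LayeredEncodingExample {
    // Applies URL encoding for the innermost (URL) subcontext first,
    // then JavaScript encoding for the outer execution context, mirroring
    // Encoder.encodeForJS(Encoder.encodeForURL(untrustedData)) shown above.
    public static String encodeForJsUrlSubcontext(String untrustedData) {
        return Encode.forJavaScript(Encode.forUriComponent(untrustedData));
    }

    public static void main(String[] args) {
        String untrusted = "');alert(1);//";
        // Safe to place inside the quoted first argument of myFunction(...) above.
        System.out.println(encodeForJsUrlSubcontext(untrusted));
    }
}

If your project uses a different encoding library, substitute its corresponding methods; the layering order, not the specific API, is the point of the example.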
Or if you were using ECMAScript 5 with an immutable JavaScript client-side encod- ing libraries you could do the following: <!−−server side URL encoding has been removed . Now only JavaScript encoding ↪→ on server side . −−> <a href =" javascript :myFunction(’<%=Encoder . encodeForJS ( untrustedData ) %>’, ’ ↪→ test ’ ) ;" > Click Me</a> . . . <script > Function myFunction ( url ,name) { var encodedURL = ESAPI4JS.encodeForURL ( url ) ; //URL encoding using cl ient− ↪→ side scripts window. location = encodedURL; } </script > 7.3.2. Inconsistencies of Encoding Libraries There are a number of open source encoding libraries out there: 1. ESAPI [8] 2. Apache Commons String Utils 3. Jtidy 4. Your company’s custom implementation. 62 8. Forgot Password Cheat Sheet Last revision (mm/dd/yy): 11/19/2014 8.1. Introduction This article provides a simple model to follow when implementing a "forgot password" web application feature. 8.2. The Problem There is no industry standard for implementing a Forgot Password feature. The result is that you see applications forcing users to jump through myriad hoops involving emails, special URLs, temporary passwords, personal security questions, and so on. With some applications you can recover your existing password. In others you have to reset it to a new value. 8.3. Steps 8.3.1. Step 1) Gather Identity Data or Security Questions The first page of a secure Forgot Password feature asks the user for multiple pieces of hard data that should have been previously collected (generally when the user first registers). Steps for this are detailed in the identity section the Choosing and Using Security Questions Cheat Sheet on page 20. At a minimum, you should have collected some data that will allow you to send the password reset information to some out-of-band side-channel, such as a (possibly different) email address or an SMS text number, etc. to be used in Step 3. 8.3.2. Step 2) Verify Security Questions After the form on Step 1 is submitted, the application verifies that each piece of data is correct for the given username. If anything is incorrect, or if the username is not recognized, the second page displays a generic error message such as "Sorry, invalid data". If all submitted data is correct, Step 2 should display at least two of the user’s pre-established personal security questions, along with input fields for the answers. It’s important that the answer fields are part of a single HTML form. Do not provide a drop-down list for the user to select the questions he wants to answer. Avoid sending the username as a parameter (hidden or otherwise) when the form on this page is submitted. The username should be stored in the server-side session where it can be retrieved as needed. Because users’ security questions / answers generally contains much less entropy than a well-chosen password (how many likely answers are there to the typical "What’s your favorite sports team?" or "In what city where you born?" security ques- tions anyway?), make sure you limit the number of guesses attempted and if some threshold is exceeded for that user (say 3 to 5), lock out the user’s account for some reasonable duration (say at least 5 minutes) and then challenge the user with some 65 8. Forgot Password Cheat Sheet form of challenge token per standard multi-factor workflow; see #3, below) to miti- gate attempts by hackers to guess the questions and reset the user’s password. 
(It is not unreasonable to think that a user’s email account may have already been com- promised, so tokens that do not involve email, such as SMS or a mobile soft-token, are best.) 8.3.3. Step 3) Send a Token Over a Side-Channel After step 2, lock out the user’s account immediately. Then SMS or utilize some other multi-factor token challenge with a randomly-generated code having 8 or more char- acters. This introduces an "out-of-band" communication channel and adds defense- in-depth as it is another barrier for a hacker to overcome. If the bad guy has somehow managed to successfully get past steps 1 and 2, he is unlikely to have compromised the side-channel. It is also a good idea to have the random code which your system generates to only have a limited validity period, say no more than 20 minutes or so. That way if the user doesn’t get around to checking their email and their email ac- count is later compromised, the random token used to reset the password would no longer be valid if the user never reset their password and the "reset password" token was discovered by an attacker. Of course, by all means, once a user’s password has been reset, the randomly-generated token should no longer be valid. 8.3.4. Step 4) Allow user to change password in the existing session Step 4 requires input of the code sent in step 3 in the existing session where the challenge questions were answered in step 2, and allows the user to reset his pass- word. Display a simple HTML form with one input field for the code, one for the new password, and one to confirm the new password. Verify the correct code is provided and be sure to enforce all password complexity requirements that exist in other ar- eas of the application. As before, avoid sending the username as a parameter when the form is submitted. Finally, it’s critical to have a check to prevent a user from accessing this last step without first completing steps 1 and 2 correctly. Otherwise, a forced browsing [2] attack may be possible. 8.4. Authors and Primary Editors • Dave Ferguson - gmdavef[at]gmail.com • Jim Manico - jim[at]owasp.org • Kevin Wall - kevin.w.wall[at]gmail.com • Wesley Philip - wphilip[at]ca.ibm.com 8.5. References 1. https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet 2. https://www.owasp.org/index.php/Forced_browsing 66 9. HTML5 Security Cheat Sheet Last revision (mm/dd/yy): 04/7/2014 9.1. Introduction The following cheat sheet serves as a guide for implementing HTML 5 in a secure fashion. 9.2. Communication APIs 9.2.1. Web Messaging Web Messaging (also known as Cross Domain Messaging) provides a means of mes- saging between documents from different origins in a way that is generally safer than the multiple hacks used in the past to accomplish this task. However, there are still some recommendations to keep in mind: • When posting a message, explicitly state the expected origin as the second argu- ment to postMessage rather than * in order to prevent sending the message to an unknown origin after a redirect or some other means of the target window’s origin changing. • The receiving page should always: – Check the origin attribute of the sender to verify the data is originating from the expected location. – Perform input validation on the data attribute of the event to ensure that it’s in the desired format. • Don’t assume you have control over the data attribute. A single Cross Site Scripting [2] flaw in the sending page allows an attacker to send messages of any given format. 
• Both pages should only interpret the exchanged messages as data. Never eval- uate passed messages as code (e.g. via eval()) or insert it to a page DOM (e.g. via innerHTML), as that would create a DOM-based XSS vulnerability. For more information see DOM based XSS Prevention Cheat Sheet on page 54. • To assign the data value to an element, instead of using a insecure method like element.innerHTML = data;, use the safer option: element.textContent = data; • Check the origin properly exactly to match the FQDN(s) you expect. Note that the following code: if(message.orgin.indexOf(".owasp.org")!=-1) { /* ... */ } is very insecure and will not have the desired behavior as www.owasp.org.attacker.com will match. • If you need to embed external content/untrusted gadgets and allow user- controlled scripts (which is highly discouraged), consider using a JavaScript rewriting framework such as Google Caja [3] or check the information on sand- boxed frames [4]. 67 9. HTML5 Security Cheat Sheet 9.3.2. Client-side databases • On November 2010, the W3C announced Web SQL Database (relational SQL database) as a deprecated specification. A new standard Indexed Database API or IndexedDB (formerly WebSimpleDB) is actively developed, which provides key/value database storage and methods for performing advanced queries. • Underlying storage mechanisms may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it’s recommended not to store any sensitive information in local storage. • If utilized, WebDatabase content on the client side can be vulnerable to SQL injection and needs to have proper validation and parameterization. • Like Local Storage, a single Cross Site Scripting can be used to load malicious data into a web database as well. Don’t consider data in these to be trusted. 9.4. Geolocation • The Geolocation RFC recommends that the user agent ask the user’s permission before calculating location. Whether or how this decision is remembered varies from browser to browser. Some user agents require the user to visit the page again in order to turn off the ability to get the user’s location without asking, so for privacy reasons, it’s recommended to require user input before calling getCurrentPosition or watchPosition. 9.5. Web Workers • Web Workers are allowed to use XMLHttpRequest object to perform in-domain and Cross Origin Resource Sharing requests. See relevant section of this Cheat Sheet to ensure CORS security. • While Web Workers don’t have access to DOM of the calling page, malicious Web Workers can use excessive CPU for computation, leading to Denial of Ser- vice condition or abuse Cross Origin Resource Sharing for further exploitation. Ensure code in all Web Workers scripts is not malevolent. Don’t allow creating Web Worker scripts from user supplied input. • Validate messages exchanged with a Web Worker. Do not try to exchange snip- pets of Javascript for evaluation e.g. via eval() as that could introduce a DOM Based XSS [8] vulnerability. 9.6. Sandboxed frames • Use the sandbox attribute of an iframe for untrusted content. • The sandbox attribute of an iframe enables restrictions on content within a iframe. The following restrictions are active when the sandbox attribute is set: 1. All markup is treated as being from a unique origin. 2. All forms and scripts are disabled. 3. All links are prevented from targeting other browsing contexts. 4. 
All features that triggers automatically are blocked. 70 9. HTML5 Security Cheat Sheet 5. All plugins are disabled. It is possible to have a fine-grained control [9] over iframe capabilities using the value of the sandbox attribute. • In old versions of user agents where this feature is not supported, this attribute will be ignored. Use this feature as an additional layer of protection or check if the browser supports sandboxed frames and only show the untrusted content if supported. • Apart from this attribute, to prevent Clickjacking attacks and unsolicited fram- ing it is encouraged to use the header X-Frame-Options which supports the deny and same-origin values. Other solutions like framebusting if(window!== window.top) { window.top.location = location; } are not recommended. 9.7. Offline Applications • Whether the user agent requests permission to the user to store data for offline browsing and when this cache is deleted varies from one browser to the next. Cache poisoning is an issue if a user connects through insecure networks, so for privacy reasons it is encouraged to require user input before sending any manifest file. • Users should only cache trusted websites and clean the cache after browsing through open or insecure networks. 9.8. Progressive Enhancements and Graceful Degradation Risks • The best practice now is to determine the capabilities that a browser supports and augment with some type of substitute for capabilities that are not directly supported. This may mean an onion-like element, e.g. falling through to a Flash Player if the <video> tag is unsupported, or it may mean additional scripting code from various sources that should be code reviewed. 9.9. HTTP Headers to enhance security 9.9.1. X-Frame-Options • This header can be used to prevent ClickJacking in modern browsers. • Use the same-origin attribute to allow being framed from urls of the same origin or deny to block all. Example: X-Frame-Options: DENY • For more information on Clickjacking Defense please see the Clickjacking De- fense Cheat Sheet. 9.9.2. X-XSS-Protection • Enable XSS filter (only works for Reflected XSS). • Example: X-XSS-Protection: 1; mode=block 71 9. HTML5 Security Cheat Sheet 9.9.3. Strict Transport Security • Force every browser request to be sent over TLS/SSL (this can prevent SSL strip attacks). • Use includeSubDomains. • Example: Strict-Transport-Security: max-age=8640000; includeSubDomains 9.9.4. Content Security Policy • Policy to define a set of content restrictions for web resources which aims to mitigate web application vulnerabilities such as Cross Site Scripting. • Example: X-Content-Security-Policy: allow ’self’; img-src *; object-src me- dia.example.com; script-src js.example.com 9.9.5. Origin • Sent by CORS/WebSockets requests. • There is a proposal to use this header to mitigate CSRF attacks, but is not yet implemented by vendors for this purpose. 9.10. Authors and Primary Editors • Mark Roxberry mark.roxberry [at] owasp.org • Krzysztof Kotowicz krzysztof [at] kotowicz.net • Will Stranathan will [at] cltnc.us • Shreeraj Shah shreeraj.shah [at] blueinfy.net • Juan Galiana Lara jgaliana [at] owasp.org 9.11. References 1. https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet 2. https://www.owasp.org/index.php/Cross-site_Scripting_(XSS) 3. http://code.google.com/p/google-caja/ 4. https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet# Sandboxed_frames 5. https://www.owasp.org/index.php/Cross-Site_Request_Forgery_ (CSRF) 6. http://tools.ietf.org/html/rfc6455 7. 
https://www.owasp.org/index.php/Cross_Site_Scripting_Flaw 8. https://www.owasp.org/index.php/DOM_Based_XSS 9. http://www.whatwg.org/specs/web-apps/current-work/multipage/ the-iframe-element.html#attr-iframe-sandbox 72 11. JAAS Cheat Sheet Last revision (mm/dd/yy): 04/7/2014 11.1. Introduction 11.1.1. What is JAAS authentication The process of verifying the identity of a user or another system is authentication. JAAS, as an authentication framework manages the authenticated user’s identity and credentials from login to logout. The JAAS authentication lifecycle: 1. Create LoginContext 2. Read the configuration file for one or more LoginModules to initialize 3. Call LoginContext.initialize() for each LoginModule to initialize. 4. Call LoginContext.login() for each LoginModule 5. If login successful then call LoginContext.commit() else call LoginContext.abort() 11.1.2. Configuration file The JAAS configuration file contains a LoginModule stanza for each LoginModule available for logging on to the application. A stanza from a JAAS configuration file: Branches { USNavy.AppLoginModule required debug=true succeeded=true ; } Note the placement of the semicolons, terminating both LoginModule entries and stanzas. The word required indicates the LoginContext’s login() method must be successful when logging in the user. The LoginModule-specific values debug and succeeded are passed to the LoginModule. They are defined by the LoginModule and their usage is managed inside the LoginModule. Note, Options are Configured using key-value pairing such as debug=”true” and the key and value should be separated by a ’equals’ sign. 11.1.3. Main.java (The client) Execution syntax Java −Djava . security . auth . login . config==packageName/packageName. config packageName.Main Stanza1 Where: packageName is the directory containing the config f i l e . packageName. config spec i f ies the config f i l e in the Java package , ↪→ packageName 75 11. JAAS Cheat Sheet packageName.Main spec i f ies Main. java in the Java package , packageName Stanza1 is the name of the stanza Main ( ) should read from the config f i l e . • When executed, the 1st command line argument is the stanza from the config file. The Stanza names the LoginModule to be used. The 2nd argument is the CallbackHandler. • Create a new LoginContext with the arguments passed to Main.java. – loginContext = new LoginContext (args[0], new AppCallbackHandler()); • Call the LoginContext.Login Module – loginContext.login (); • The value in succeeded Option is returned from loginContext.login() • If the login was successful, a subject was created. 11.1.4. LoginModule.java A LoginModule must have the following authentication methods: • initialize() • login() • commit() • abort() • logout() initialize() In Main(), after the LoginContext reads the correct stanza from the config file, the LoginContext instantiates the LoginModule specified in the stanza. 
• initialize() method signature:
  – public void initialize(Subject subject, CallbackHandler callbackHandler, Map sharedState, Map options)
• The arguments above should be saved as follows:
  – this.subject = subject;
  – this.callbackHandler = callbackHandler;
  – this.sharedState = sharedState;
  – this.options = options;
• What the initialize() method does:
  – Builds a Subject object of the Subject class, contingent on a successful login()
  – Sets the CallbackHandler which interacts with the user to gather login information
  – If a LoginContext specifies 2 or more LoginModules, which is legal, they can share information via the sharedState map
  – Saves state information such as debug and succeeded in the options map

login()
Captures the user-supplied login information. The code snippet below declares an array of two callback objects which, when passed to the callbackHandler.handle method in the CallbackHandler implementation, will be loaded with the user name and password provided interactively by the user.

NameCallback nameCB = new NameCallback("Username");
PasswordCallback passwordCB = new PasswordCallback("Password", false);
Callback[] callbacks = new Callback[] { nameCB, passwordCB };
callbackHandler.handle(callbacks);

• Authenticates the user
• Retrieves the user-supplied information from the callback objects:
  – String ID = nameCallback.getName();
  – char[] tempPW = passwordCallback.getPassword();
• Compares ID and tempPW to values stored in a repository such as LDAP
• Sets the value of the variable succeeded and returns to Main()

commit()
Once the user's credentials are successfully verified during login(), the JAAS authentication framework associates the credentials, as needed, with the subject. There are two types of credentials, public and private. Public credentials include public keys; private credentials include passwords and private keys. Principals (i.e. identities the subject has other than their login name), such as an employee number or a membership ID in a user group, are added to the subject. Below is an example commit() method where, for each group the authenticated user has membership in, the group name is first added as a principal to the subject. The subject's username is then added to their public credentials.

Code snippet setting then adding any principals and a public credential to a subject:

public boolean commit() {
    if (userAuthenticated) {
        Set groups = UserService.findGroups(username);
        for (Iterator itr = groups.iterator(); itr.hasNext();) {
            String groupName = (String) itr.next();
            UserGroupPrincipal group = new UserGroupPrincipal(groupName);
            subject.getPrincipals().add(group);
        }
        UsernameCredential cred = new UsernameCredential(username);
        subject.getPublicCredentials().add(cred);
    }
    return userAuthenticated;
}

abort()
The abort() method is called when authentication doesn't succeed. Before the abort() method exits the LoginModule, care should be taken to reset state, including the user name and password input fields.
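As an illustration of that clean-up responsibility, here is a minimal, self-contained sketch of an abort() that scrubs credential state before returning. The class and field names (username, password, succeeded) are illustrative assumptions, not part of the JAAS API:

import javax.security.auth.login.LoginException;

public class ExampleLoginModuleAbort {
    private String username;
    private char[] password;
    private boolean succeeded;

    public boolean abort() throws LoginException {
        boolean hadSucceeded = succeeded;
        // Scrub credential state no matter how far authentication got.
        username = null;
        if (password != null) {
            java.util.Arrays.fill(password, ' ');  // overwrite the password characters
            password = null;
        }
        succeeded = false;
        // Per the LoginModule contract: return false if this module's own login()
        // failed (it should be ignored), true if it is undoing a successful login().
        return hadSucceeded;
    }
}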
These Cheat Sheets have been taken from the OWASP project on https://www.owasp.org. While this document is static, the online source is continuously improved and expanded. So please visit https://www.owasp.org if you have any doubt about the accuracy or currency of this PDF, or simply if this document is too old. All the articles are licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license [1]. I have slightly reformatted and/or resectioned them in this work (which of course is also CC BY-SA 3.0).

1. http://creativecommons.org/licenses/by-sa/3.0/

Part I. Developer Cheat Sheets (Builder)

1. Authentication Cheat Sheet
Last revision (mm/dd/yy): 02/24/2015

1.1. Introduction
Authentication is the process of verification that an individual or an entity is who it claims to be. Authentication is commonly performed by submitting a user name or ID and one or more items of private information that only a given user should know.
Session Management is a process by which a server maintains the state of an entity interacting with it. This is required for a server to remember how to react to subsequent requests throughout a transaction. Sessions are maintained on the server by a session identifier which can be passed back and forth between the client and server when transmitting and receiving requests. Sessions should be unique per user and computationally very difficult to predict.

1.2. Authentication General Guidelines

1.2.1. User IDs
Make sure your usernames/user IDs are case-insensitive. It would be very strange for user 'smith' and user 'Smith' to be different users, and treating them as distinct could result in serious confusion.

Email address as a User ID
Many sites use email addresses as a user ID, which is a good mechanism for ensuring a unique identifier for each user without adding the burden of remembering a new username. However, many web applications do not treat email addresses correctly due to common misconceptions about what constitutes a valid address. Specifically, it is completely valid to have a mailbox address which:
• Is case sensitive in the local-part
• Has non-alphanumeric characters in the local-part (including + and @)
• Has zero or more labels (though zero is admittedly not going to occur)
The local-part is the part of the mailbox address to the left of the rightmost @ character. The domain is the part of the mailbox address to the right of the rightmost @ character and consists of zero or more labels joined by a period character. At the time of writing, RFC 5321 [2] is the current standard defining SMTP and what constitutes a valid mailbox address.

Validation
Many web applications contain computationally expensive and inaccurate regular expressions that attempt to validate email addresses.
Recent changes to the landscape mean that the number of false-negatives will in- crease, particularly due to: 12 1. Authentication Cheat Sheet • If the new password doesn’t comply with the complexity policy, the error mes- sage should describe EVERY complexity rule that the new password does not comply with, not just the 1st rule it doesn’t comply with Changing passwords should be EASY, not a hunt in the dark. 1.2.3. Implement Secure Password Recovery Mechanism It is common for an application to have a mechanism that provides a means for a user to gain access to their account in the event they forget their password. Please see Forgot Password Cheat Sheet on page 65 for details on this feature. 1.2.4. Store Passwords in a Secure Fashion It is critical for a application to store a password using the right cryptographic tech- nique. Please see Password Storage Cheat Sheet on page 98 for details on this fea- ture. 1.2.5. Transmit Passwords Only Over TLS See: Transport Layer Protection Cheat Sheet on page 149 The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the "login landing page", must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to mod- ify the login form action, causing the user’s credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an at- tacker to view the unencrypted session ID and compromise the user’s authenticated session. 1.2.6. Require Re-authentication for Sensitive Features In order to mitigate CSRF and session hijacking, it’s important to require the current credentials for an account before updating sensitive account information such as the user’s password, user’s email, or before sensitive transactions, such as shipping a purchase to a new address. Without this countermeasure, an attacker may be able to execute sensitive transactions through a CSRF or XSS attack without needing to know the user’s current credentials. Additionally, an attacker may get temporary physical access to a user’s browser or steal their session ID to take over the user’s session. 1.2.7. Utilize Multi-Factor Authentication Multi-factor authentication (MFA) is using more than one authentication factor to logon or process a transaction: • Something you know (account details or passwords) • Something you have (tokens or mobile phones) • Something you are (biometrics) Authentication schemes such as One Time Passwords (OTP) implemented using a hardware token can also be key in fighting attacks such as CSRF and client-side malware. A number of hardware tokens suitable for MFA are available in the market that allow good integration with web applications. See [6]. 15 1. Authentication Cheat Sheet 1.2.7.1. SSL Client Authentication SSL Client Authentication, also known as two-way SSL authentication, consists of both, browser and server, sending their respective SSL certificates during the TLS handshake process. Just as you can validate the authenticity of a server by using the certificate and asking a well known Certificate Authority (CA) if the certificate is valid, the server can authenticate the user by receiving a certificate from the client and validating against a third party CA or its own CA. To do this, the server must provide the user with a certificate generated specifically for him, assigning values to the subject so that these can be used to determine what user the certificate should validate. 
The user installs the certificate on a browser and now uses it for the website. It is a good idea to do this when: • It is acceptable (or even preferred) that the user only has access to the website from only a single computer/browser. • The user is not easily scared by the process of installing SSL certificates on his browser or there will be someone, probably from IT support, that will do this for the user. • The website requires an extra step of security. • It is also a good thing to use when the website is for an intranet of a company or organization. It is generally not a good idea to use this method for widely and publicly available websites that will have an average user. For example, it wouldn’t be a good idea to implement this for a website like Facebook. While this technique can prevent the user from having to type a password (thus protecting against an average keylogger from stealing it), it is still considered a good idea to consider using both a password and SSL client authentication combined. For more information, see: [4] or [5]. 1.2.8. Authentication and Error Messages Incorrectly implemented error messages in the case of authentication functionality can be used for the purposes of user ID and password enumeration. An application should respond (both HTTP and HTML) in a generic manner. 1.2.8.1. Authentication Responses An application should respond with a generic error message regardless of whether the user ID or password was incorrect. It should also give no indication to the status of an existing account. 1.2.8.2. Incorrect Response Examples • "Login for User foo: invalid password" • "Login failed, invalid user ID" • "Login failed; account disabled" • "Login failed; this user is not active" 16 1. Authentication Cheat Sheet 1.2.8.3. Correct Response Example • "Login failed; Invalid userID or password" The correct response does not indicate if the user ID or password is the incorrect parameter and hence inferring a valid user ID. 1.2.8.4. Error Codes and URLs The application may return a different HTTP Error code depending on the authenti- cation attempt response. It may respond with a 200 for a positive result and a 403 for a negative result. Even though a generic error page is shown to a user, the HTTP response code may differ which can leak information about whether the account is valid or not. 1.2.9. Prevent Brute-Force Attacks If an attacker is able to guess passwords without the account becoming disabled due to failed authentication attempts, the attacker has an opportunity to continue with a brute force attack until the account is compromised. Automating brute- force/password guessing attacks on web applications is a trivial challenge. Pass- word lockout mechanisms should be employed that lock out an account if more than a preset number of unsuccessful login attempts are made. Password lockout mech- anisms have a logical weakness. An attacker that undertakes a large number of authentication attempts on known account names can produce a result that locks out entire blocks of user accounts. Given that the intent of a password lockout sys- tem is to protect from brute-force attacks, a sensible strategy is to lockout accounts for a period of time (e.g., 20 minutes). This significantly slows down attackers, while allowing the accounts to reopen automatically for legitimate users. Also, multi-factor authentication is a very powerful deterrent when trying to prevent brute force attacks since the credentials are a moving target. 
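To make the timed-lockout strategy described above concrete, here is a minimal in-memory sketch (illustrative only; the thresholds and class names are assumptions, and a real implementation would persist counters server-side alongside the account data):

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LoginThrottle {
    private static final int MAX_FAILURES = 5;                      // assumed threshold
    private static final Duration LOCKOUT = Duration.ofMinutes(20); // assumed lockout window

    private static final class State {
        int failures;
        Instant lockedUntil = Instant.EPOCH;
    }

    private final Map<String, State> states = new ConcurrentHashMap<>();

    // Returns true while the account is inside its lockout window.
    public boolean isLocked(String username) {
        State s = states.get(username);
        return s != null && Instant.now().isBefore(s.lockedUntil);
    }

    // Call on every failed authentication attempt.
    public void recordFailure(String username) {
        State s = states.computeIfAbsent(username, u -> new State());
        synchronized (s) {
            s.failures++;
            if (s.failures >= MAX_FAILURES) {
                s.lockedUntil = Instant.now().plus(LOCKOUT); // account reopens automatically
                s.failures = 0;
            }
        }
    }

    // Call on a successful login to reset the counter.
    public void recordSuccess(String username) {
        states.remove(username);
    }
}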
When multi-factor is implemented and active, account lockout may no longer be necessary. 1.3. Use of authentication protocols that require no password While authentication through a user/password combination and using multi-factor authentication is considered generally secure, there are use cases where it isn’t con- sidered the best option or even safe. An example of this are third party applications that desire connecting to the web application, either from a mobile device, another website, desktop or other situations. When this happens, it is NOT considered safe to allow the third party application to store the user/password combo, since then it extends the attack surface into their hands, where it isn’t in your control. For this, and other use cases, there are several authentication protocols that can protect you from exposing your users’ data to attackers. 1.3.1. OAuth Open Authorization (OAuth) is a protocol that allows an application to authenticate against a server as a user, without requiring passwords or any third party server that acts as an identity provider. It uses a token generated by the server, and provides how the authorization flows most occur, so that a client, such as a mobile application, can tell the server what user is using the service. The recommendation is to use and implement OAuth 1.0a or OAuth 2.0, since the very first version (OAuth1.0) has been found to be vulnerable to session fixation. 17 2. Choosing and Using Security Questions Cheat Sheet Last revision (mm/dd/yy): 04/17/2014 2.1. Introduction This cheat sheet provides some best practice for developers to follow when choos- ing and using security questions to implement a "forgot password" web application feature. 2.2. The Problem There is no industry standard either for providing guidance to users or developers when using or implementing a Forgot Password feature. The result is that developers generally pick a set of dubious questions and implement them insecurely. They do so, not only at the risk to their users, but also–because of potential liability issues– at the risk to their organization. Ideally, passwords would be dead, or at least less important in the sense that they make up only one of several multi-factor authenti- cation mechanisms, but the truth is that we probably are stuck with passwords just like we are stuck with Cobol. So with that in mind, what can we do to make the Forgot Password solution as palatable as possible? 2.3. Choosing Security Questions and/or Identity Data Most of us can instantly spot a bad "security question" when we see one. You know the ones we mean. Ones like "What is your favorite color?" are obviously bad. But as the Good Security Questions [2] web site rightly points out, "there really are NO GOOD security questions; only fair or bad questions". The reason that most organizations allow users to reset their own forgotten pass- words is not because of security, but rather to reduce their own costs by reducing their volume of calls to their help desks. It’s the classic convenience vs. security trade-off, and in this case, convenience (both to the organization in terms of reduced costs and to the user in terms of simpler, self-service) almost always wins out. So given that the business aspect of lower cost generally wins out, what can we do to at least raise the bar a bit? Here are some suggestions. Note that we intentionally avoid recommending specific security questions. 
To do so likely would be counterproductive because many de- velopers would simply use those questions without much thinking and adversaries would immediately start harvesting that data from various social networks. 2.3.1. Desired Characteristics Any security questions or identity information presented to users to reset forgotten passwords should ideally have the following four characteristics: 20 2. Choosing and Using Security Questions Cheat Sheet 1. Memorable: If users can’t remember their answers to their security questions, you have achieved nothing. 2. Consistent: The user’s answers should not change over time. For instance, asking "What is the name of your significant other?" may have a different answer 5 years from now. 3. Nearly universal: The security questions should apply to a wide an audience of possible. 4. Safe: The answers to security questions should not be something that is easily guessed, or research (e.g., something that is matter of public record). 2.3.2. Steps 2.3.2.1. Step 1) Decide on Identity Data vs Canned Questions vs. User-Created Questions Generally, a single HTML form should be used to collect all of the inputs to be used for later password resets. If your organization has a business relationship with users, you probably have col- lected some sort of additional information from your users when they registered with your web site. Such information includes, but is not limited to: • email address • last name • date of birth • account number • customer number • last 4 of social security number • zip code for address on file • street number for address on file For enhanced security, you may wish to consider asking the user for their email address first and then send an email that takes them to a private page that requests the other 2 (or more) identity factors. That way the email itself isn’t that useful because they still have to answer a bunch of ’secret’ questions after they get to the landing page. On the other hand, if you host a web site that targets the general public, such as social networking sites, free email sites, news sites, photo sharing sites, etc., then you likely to not have this identity information and will need to use some sort of the ubiquitous "security questions". However, also be sure that you collect some means to send the password reset information to some out-of-band side-channel, such as a (different) email address, an SMS texting number, etc. Believe it or not, there is a certain merit to allow your users to select from a set of several "canned" questions. We generally ask users to fill out the security questions as part of completing their initial user profile and often that is the very time that the user is in a hurry; they just wish to register and get about using your site. If we ask users to create their own question(s) instead, they then generally do so under some amount of duress, and thus may be more likely to come up with extremely poor questions. 21 2. Choosing and Using Security Questions Cheat Sheet However, there is also some strong rationale to requiring users to create their own question(s), or at least one such question. The prevailing legal opinion seems to be if we provide some sort of reasonable guidance to users in creating their own questions and then insist on them doing so, at least some of the potential liabilities are transferred from our organizations to the users. In such cases, if user accounts get hacked because of their weak security questions (e.g., "What is my favorite ice cream flavor?", etc.) 
then the thought is that they only have themselves to blame and thus our organizations are less likely to get sued. Since OWASP recommends in the Forgot Password Cheat Sheet on page 65 that multiple security questions should be posed to the user and successfully answered before allowing a password reset, a good practice might be to require the user to select 1 or 2 questions from a set of canned questions as well as to create (a different) one of their own and then require they answer one of their selected canned questions as well as their own question. 2.3.2.2. Step 2) Review Any Canned Questions with Your Legal Department or Privacy Officer While most developers would generally first review any potential questions with what- ever relevant business unit, it may not occur to them to review the questions with their legal department or chief privacy officer. However, this is advisable because their may be applicable laws or regulatory / compliance issues to which the ques- tions must adhere. For example, in the telecommunications industry, the FCC’s Customer Proprietary Network Information (CPNI) regulations prohibit asking cus- tomers security questions that involve "personal information", so questions such as "In what city were you born?" are generally not allowed. 2.3.2.3. Step 3) Insist on a Minimal Length for the Answers Even if you pose decent security questions, because users generally dislike putting a whole lot of forethought into answering the questions, they often will just answer with something short. Answering with a short expletive is not uncommon, nor is answering with something like "xxx" or "1234". If you tell the user that they should answer with a phrase or sentence and tell them that there is some minimal length to an acceptable answer (say 10 or 12 characters), you generally will get answers that are somewhat more resistant to guessing. 2.3.2.4. Step 4) Consider How To Securely Store the Questions and Answers There are two aspects to this...storing the questions and storing the answers. Ob- viously, the questions must be presented to the user, so the options there are store them as plaintext or as reversible ciphertext. The answers technically do not need to be ever viewed by any human so they could be stored using a secure cryptographic hash (although in principle, I am aware of some help desks that utilize the both the questions and answers for password reset and they insist on being able to read the answers rather than having to type them in; YMMV). Either way, we would always recommend at least encrypting the answers rather than storing them as plaintext. This is especially true for answers to the "create your own question" type as users will sometimes pose a question that potentially has a sensitive answer (e.g., "What is my bank account # that I share with my wife?"). So the main question is whether or not you should store the questions as plaintext or reversible ciphertext. Admittedly, we are a bit biased, but for the "create your own question" types at least, we recommend that such questions be encrypted. This is because if they are encrypted, it makes it much less likely that your company will 22 2. Choosing and Using Security Questions Cheat Sheet • Display the security question(s) on a separate page only after your users have successfully authenticated with their usernames / passwords (rather than only after they have entered their username). 
In this manner, you at least do not allow an adversary to view and research the security questions unless they also know the user’s current password. • If you also use security questions to reset a user’s password, then you should use a different set of security questions for an additional means of authenticat- ing. • Security questions used for actual authentication purposes should regularly expire much like passwords. Periodically make the user choose new security questions and answers. • If you use answers to security questions as a subsequent authentication mech- anism (say to enter a more sensitive area of your web site), make sure that you keep the session idle time out very low...say less than 5 minutes or so, or that you also require the user to first re-authenticate with their password and then immediately after answer the security question(s). 2.5. Related Articles • Forgot Password Cheat Sheet on page 65 • Good Security Questions web site 2.6. Authors and Primary Editors • Kevin Wall - kevin.w.wall[at]gmail com 2.7. References 1. https://www.owasp.org/index.php/Choosing_and_Using_Security_ Questions_Cheat_Sheet 2. http://goodsecurityquestions.com/ 3. http://en.wikipedia.org/wiki/Customer_proprietary_network_ information 25 3. Clickjacking Defense Cheat Sheet Last revision (mm/dd/yy): 02/11/2015 3.1. Introduction This cheat sheet is focused on providing developer guidance on Clickjack/UI Redress [2] attack prevention. The most popular way to defend against Clickjacking is to include some sort of "frame-breaking" functionality which prevents other web pages from framing the site you wish to defend. This cheat sheet will discuss two methods of implementing frame-breaking: first is X-Frame-Options headers (used if the browser supports the functionality); and second is javascript frame-breaking code. 3.2. Defending with Content Security Policy frame-ancestors directive The frame-ancestors directive can be used in a Content-Security-Policy HTTP re- sponse header to indicate whether or not a browser should be allowed to render a page in a <frame> or <iframe>. Sites can use this to avoid Clickjacking attacks, by ensuring that their content is not embedded into other sites. frame-ancestors allows a site to authorize multiple domains using the normal Con- tent Security Policy symantics. See [19] for further details 3.2.1. Limitations • Browser support: frame-ancestors is not supported by all the major browsers yet. • X-Frame-Options takes priority: Section 7.7.1 of the CSP Spec [18] says X- Frame-Options should be ignored if frame-ancestors is specified, but Chrome 40 & Firefox 35 ignore the frame-ancestors directive and follow the X-Frame- Options header instead. 3.3. Defending with X-Frame-Options Response Headers The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a <frame> or <iframe>. Sites can use this to avoid Clickjacking attacks, by ensuring that their content is not embedded into other sites. 3.3.1. X-Frame-Options Header Types There are three possible values for the X-Frame-Options header: • DENY, which prevents any domain from framing the content. 26 3. Clickjacking Defense Cheat Sheet • SAMEORIGIN, which only allows the current site to frame the content. • ALLOW-FROM uri, which permits the specified ’uri’ to frame this page. (e.g., ALLOW-FROM http://www.example.com) Check Limitations Below this will fail open if the browser does not support it. 3.3.2. 
Browser Support
The following browsers support X-Frame-Options headers (version in which DENY/SAMEORIGIN support was introduced, followed by ALLOW-FROM support):
• Chrome: DENY/SAMEORIGIN since 4.1.249.1042 [3]; ALLOW-FROM not supported / bug reported [4]
• Firefox (Gecko): DENY/SAMEORIGIN since 3.6.9 (1.9.2.9) [5]; ALLOW-FROM since 18.0 [6]
• Internet Explorer: DENY/SAMEORIGIN since 8.0 [7]; ALLOW-FROM since 9.0 [8]
• Opera: DENY/SAMEORIGIN since 10.50 [9]
• Safari: DENY/SAMEORIGIN since 4.0 [10]; ALLOW-FROM not supported / bug reported [11]
See: [12], [13], [14]
3.3.3. Implementation
To implement this protection, you need to add the X-Frame-Options HTTP response header to any page that you want to protect from being clickjacked via framebusting. One way to do this is to add the HTTP response header manually to every page. A possibly simpler way is to implement a filter that automatically adds the header to every page. OWASP has an article and some code [15] that provide all the details for implementing this in the Java EE environment. The SDL blog has posted an article [16] covering how to implement this in a .NET environment.
3.3.4. Common Defense Mistakes
Meta-tags that attempt to apply the X-Frame-Options directive DO NOT WORK. For example, <meta http-equiv="X-Frame-Options" content="deny"> will not work. You must apply the X-FRAME-OPTIONS directive as an HTTP response header as described above.
3.3.5. Limitations
• Per-page policy specification: The policy needs to be specified for every page, which can complicate deployment. Providing the ability to enforce it for the entire site, at login time for instance, could simplify adoption.
• Problems with multi-domain sites: The current implementation does not allow the webmaster to provide a whitelist of domains that are allowed to frame the page. While whitelisting can be dangerous, in some cases a webmaster might have no choice but to use more than one hostname.
• ALLOW-FROM browser support: The ALLOW-FROM option is a relatively recent addition (circa 2012) and may not be supported by all browsers yet. BE CAREFUL ABOUT DEPENDING ON ALLOW-FROM. If you apply it and the browser does not support it, then you will have NO clickjacking defense in place.

if (top.location != self.location) {
    parent.location = self.location;
}

Attacker top frame:
<iframe src="attacker2.html">
Attacker sub-frame:
<iframe src="http://www.victim.com">
3.6.2. The onBeforeUnload Event
A user can manually cancel any navigation request submitted by a framed page. To exploit this, the framing page registers an onBeforeUnload handler which is called whenever the framing page is about to be unloaded due to navigation. The handler function returns a string that becomes part of a prompt displayed to the user. Say the attacker wants to frame PayPal. He registers an unload handler function that returns the string "Do you want to exit PayPal?". When this string is displayed, the user is likely to cancel the navigation, defeating PayPal’s frame busting attempt. The attacker mounts this attack by registering an unload event on the top page using the following code:

<script>
    window.onbeforeunload = function() {
        return "Asking the user nicely";
    }
</script>
<iframe src="http://www.paypal.com">

PayPal’s frame busting code will generate a BeforeUnload event, activating our function and prompting the user to cancel the navigation event.
3.6.3. No-Content Flushing
While the previous attack requires user interaction, the same attack can be done without prompting the user.
Most browsers (IE7, IE8, Google Chrome, and Firefox) enable an attacker to automatically cancel the incoming navigation request in an onBeforeUnload event handler by repeatedly submitting a navigation request to a site responding with \204 - No Content." Navigating to a No Content site is effectively a NOP, but flushes the request pipeline, thus canceling the original navigation request. Here is sample code to do this: var preventbust = 0 window. onbeforeunload = function ( ) { k i l lbust++ } set Interval ( function ( ) { i f ( k i l lbust > 0) { k i l lbust = 2; window. top . location = ’ http ://nocontent204 .com’ } } , 1) ; 30 3. Clickjacking Defense Cheat Sheet <iframe src="http ://www. victim .com"> 3.6.4. Exploiting XSS filters IE8 and Google Chrome introduced reflective XSS filters that help protect web pages from certain types of XSS attacks. Nava and Lindsay (at Blackhat) observed that these filters can be used to circumvent frame busting code. The IE8 XSS filter com- pares given request parameters to a set of regular expressions in order to look for obvious attempts at cross-site scripting. Using "induced false positives", the filter can be used to disable selected scripts. By matching the beginning of any script tag in the request parameters, the XSS filter will disable all inline scripts within the page, including frame busting scripts. External scripts can also be targeted by matching an external include, effectively disabling all external scripts. Since subsets of the JavaScript loaded is still functional (inline or external) and cookies are still available, this attack is effective for clickjacking. Victim frame busting code: <script > i f ( top != se l f ) { top . location = se l f . location ; } </script > Attacker: <iframe src="http ://www. victim .com/?v=<script > i f ’ ’ > The XSS filter will match that parameter "<script>if" to the beginning of the frame busting script on the victim and will consequently disable all inline scripts in the victim’s page, including the frame busting script. The XSSAuditor filter available for Google Chrome enables the same exploit. 3.6.5. Clobbering top.location Several modern browsers treat the location variable as a special immutable attribute across all contexts. However, this is not the case in IE7 and Safari 4.0.4 where the location variable can be redefined. IE7 Once the framing page redefines location, any frame busting code in a subframe that tries to read top.location will commit a security violation by trying to read a local variable in another domain. Similarly, any attempt to navigate by assigning top.location will fail. Victim frame busting code: i f ( top . location != se l f . location ) { top . location = se l f . location ; } 31 3. Clickjacking Defense Cheat Sheet Attacker: <script > var location = " clobbered " ; </script > <iframe src="http ://www. victim .com"> </iframe> Safari 4.0.4 We observed that although location is kept immutable in most circumstances, when a custom location setter is defined via defineSetter (through window) the object location becomes undefined. The framing page simply does: <script > window. defineSetter ( " location " , function ( ) { } ) ; </script > Now any attempt to read or navigate the top frame’s location will fail. 3.6.6. Restricted zones Most frame busting relies on JavaScript in the framed page to detect framing and bust itself out. If JavaScript is disabled in the context of the subframe, the frame busting code will not run. 
There are unfortunately several ways of restricting JavaScript in a subframe: In IE 8: <iframe src="http ://www. victim .com" security =" restr ic ted "></iframe> In Chrome: <iframe src="http ://www. victim .com" sandbox></iframe> In Firefox and IE: Activate designMode in parent page. 3.7. Authors and Primary Editors [none named] 3.8. References 1. https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet 2. https://www.owasp.org/index.php/Clickjacking 3. http://blog.chromium.org/2010/01/security-in-depth-new-security-features. html 4. https://code.google.com/p/chromium/issues/detail?id=129139 32 4. C-Based Toolchain Hardening Cheat Sheet integration or build server will use test configurations, and you will ship release builds. 1970’s K&R code and one size fits all flags are from a bygone era. Processes have evolved and matured to meet the challenges of a modern landscape, including threats. Because tools like Autconfig and Automake do not support the notion of build config- urations [4], you should prefer to work in an Integrated Develop Environments (IDE) or write your makefiles so the desired targets are supported. In addition, Autconfig and Automake often ignore user supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to again write a makefile from scratch rather than retrofitting existing auto tool files. 4.3.1. Debug Builds Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with third party libraries you program depends upon. To help with debugging and di- agnostics, you should define DEBUG and _DEBUG (if on a Windows platform) pre- processor macros and supply other ’debugging and diagnostic’ oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the full article [2]. You should use the following for GCC when building for debug: -O0 (or -O1) and -g3 -ggdb. No optimizations improve debuggability because optimizations often rear- range statements to improve instruction scheduling and remove unneeded code. You may need -O1 to ensure some analysis is performed. -g3 ensures maximum debug information is available, including symbolic constants and #defines. Asserts will help you write self debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be completely and full instrumented with asserts that: (1) validates and asserts all program state relevant to a function or a method; (2) validates and asserts all function parameters; and (3) validates and asserts all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures. Anywhere you have an if statement for validation, you should have an assert. Any- where you have an assert, you should have an if statement. They go hand-in-hand. Posix states if NDEBUG is not defined, then assert "shall write information about the particular call that failed on stderr and shall call abort" [5]. Calling abort during de- velopment is useless behavior, so you must supply your own assert that SIGTRAPs. A Unix and Linux example of a SIGTRAP based assert is provided in the full article [2]. Unlike other debugging and diagnostic methods - such as breakpoints and printf - asserts stay in forever and become silent guardians. 
If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in. 4.3.2. Release Builds Release builds are diametrically opposed to debug configurations. In a release config- uration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define NDEBUG to remove the supplemental information and behavior. 35 4. C-Based Toolchain Hardening Cheat Sheet A release configuration should also use -O2/-O3/-Os and -g1/-g2. The optimizations will make it somewhat more difficult to make sense of a stack trace, but they should be few and far between. The -gN flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into you version control system along with the tagged build. NDEBUG will also remove asserts from your program by defining them to void since its not acceptable to crash via abort in production. You should not depend upon assert for crash report generation because those reports could contain sensitive in- formation and may end up on foreign systems, including for example, Windows Error Reporting [6]. If you want a crash dump, you should generate it yourself in a con- trolled manner while ensuring no sensitive information is written or leaked. Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and and relevant parameters. Remove all NSLog and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configura- tion includes a logging level of ten or maximum verbosity, you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production. 4.3.3. Test Builds A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using -O2/-O3/-Os and -g1/-g2. You will run your suite of positive and negative tests against the test build. You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions public (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should: • Add -Dprotected=public -Dprivate=public to CFLAGS and CXXFLAGS • Change __attribute__ ((visibility ("hidden"))) to __attribute__ ((visibility ("default"))) Many Object Oriented purist oppose testing private interfaces, but this is not about object oriented-ness. This (q.v.) is about building reliable and secure software. You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. 
Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that’s where you want to know how your library or program will fail in the field when under attack.
4.4. Library Integration
You must properly integrate and utilize libraries in your program. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you should be using, and correctly using the libraries. A well integrated library can complement your code, and a poorly written library can detract from your program. Because a stable library with required functionality can be elusive and it’s tricky to integrate libraries, you should try to minimize dependencies and avoid third party libraries whenever possible.
Acceptance testing of libraries is practically non-existent. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of lack of acceptance testing is Adobe’s inclusion of a defective Sablotron library [7], which resulted in CVE-2012-1525 [8]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in libupnp. While it’s popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured without SSLv2, SSLv3 and compression since they are defective. That means config should be executed with -no-comp -no-sslv2 and -no-sslv3. As an additional example, when using STLPort your debug configuration should also define _STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1, _STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1 because the library offers the additional diagnostics during development.
Debug builds also present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as Debug Malloc Library (Dmalloc) during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8’s -fsanitize=address. This is one area where one size clearly does not fit all.
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library’s documentation to ensure proper API usage. If required, you might have to review code or step through library code under the debugger to ensure there are no bugs or undocumented features.
4.5. Static Analysis
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because -1 > 1 after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff.
For example, interface programming is a popular C++ paradigm, so -Wno-unused-parameter will probably be helpful with C++ code. You should consider a clean compile as a security gate. If you find its painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive for multiple compilers and platforms support since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules clean compile under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed. When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to CFLAGS for a program with C source files, and CXXFLAGS for a program with C++ source files. Objective C devel- opers should add their warnings to CFLAGS: -Wall -Wextra -Wconversion (or -Wsign- conversion), -Wcast-align, -Wformat=2 -Wformat-security, -fno-common, -Wmissing- prototypes, -Wmissing-declarations, -Wstrict-prototypes, -Wstrict-overflow, and -Wtrampolines. C++ presents additional opportunities under GCC, and the flags include - 37 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet Last revision (mm/dd/yy): 08/14/2014 5.1. Introduction Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious Web site, email, blog, instant message, or program causes a user’s Web browser to perform an unwanted action on a trusted site for which the user is currently authenticated. The impact of a successful cross-site request forgery attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or purchasing an item in the user’s context. In effect, CSRF attacks are used by an attacker to make a target system perform a function (funds Transfer, form submission etc.) via the target’s browser without knowledge of the target user, at least until the unauthorized function has been committed. Impacts of successful CSRF exploits vary greatly based on the role of the victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire Web application. The sites that are more likely to be attacked are community Websites (social networking, email) or sites that have high dollar value accounts associated with them (banks, stock brokerages, bill pay services). This attack can happen even if the user is logged into a Web site using strong encryption (HTTPS). Utilizing social engineering, an attacker will embed malicious HTML or JavaScript code into an email or Website to request a specific ’task url’. The task then executes with or without the user’s knowledge, either directly or by utilizing a Cross-site Scripting flaw (ex: Samy MySpace Worm). For more information on CSRF, please see the OWASP Cross-Site Request Forgery (CSRF) page [2]. 5.2. Prevention Measures That Do NOT Work 5.2.1. Using a Secret Cookie Remember that all cookies, even the secret ones, will be submitted with every re- quest. All authentication tokens will be submitted regardless of whether or not the end-user was tricked into submitting the request. Furthermore, session identifiers are simply used by the application container to associate the request with a specific session object. 
The session identifier does not verify that the end-user intended to submit the request. 5.2.2. Only Accepting POST Requests Applications can be developed to only accept POST requests for the execution of busi- ness logic. The misconception is that since the attacker cannot construct a malicious link, a CSRF attack cannot be executed. Unfortunately, this logic is incorrect. There 40 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet are numerous methods in which an attacker can trick a victim into submitting a forged POST request, such as a simple form hosted in an attacker’s Website with hidden values. This form can be triggered automatically by JavaScript or can be triggered by the victim who thinks the form will do something else. 5.2.3. Multi-Step Transactions Multi-Step transactions are not an adequate prevention of CSRF. As long as an at- tacker can predict or deduce each step of the completed transaction, then CSRF is possible. 5.2.4. URL Rewriting This might be seen as a useful CSRF prevention technique as the attacker can not guess the victim’s session ID. However, the user’s credential is exposed over the URL. 5.3. General Recommendation: Synchronizer Token Pattern In order to facilitate a "transparent but visible" CSRF solution, developers are encour- aged to adopt the Synchronizer Token Pattern [3]. The synchronizer token pattern requires the generating of random "challenge" tokens that are associated with the user’s current session. These challenge tokens are then inserted within the HTML forms and links associated with sensitive server-side operations. When the user wishes to invoke these sensitive operations, the HTTP request should include this challenge token. It is then the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as suc- cessful exploitation assumes the attacker knows the randomly generated token for the target victim’s session. This is analogous to the attacker being able to guess the target victim’s session identifier. The following synopsis describes a general approach to incorporate challenge tokens within the request. When a Web application formulates a request (by generating a link or form that causes a request when submitted or clicked by the user), the application should include a hidden input parameter with a common name such as "CSRFToken". The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness utilized in the data that is hashed to generate the random token. <form action="/ transfer .do" method="post"> <input type="hidden" name="CSRFToken" value="OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWE. . . wYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZ. . . MGYwMGEwOA=="> . . . </form> In general, developers need only generate this token once for the current session. 
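As a rough illustration of this pattern, the sketch below generates a per-session token with java.security.SecureRandom and keeps it in the HttpSession; the attribute and parameter names ("CSRF_TOKEN", "CSRFToken") are illustrative choices, not a required convention:

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Hypothetical helper for the synchronizer token pattern.
public final class CsrfTokens {
    private static final String SESSION_KEY = "CSRF_TOKEN";    // illustrative attribute name
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Returns the session's token, generating it on first use. */
    public static String getToken(HttpSession session) {
        synchronized (session) {
            String token = (String) session.getAttribute(SESSION_KEY);
            if (token == null) {
                byte[] bytes = new byte[32];                    // 256 bits of randomness
                RANDOM.nextBytes(bytes);
                token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
                session.setAttribute(SESSION_KEY, token);
            }
            return token;
        }
    }

    /** Compares the submitted "CSRFToken" parameter against the session token. */
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) {
            return false;
        }
        String expected = (String) session.getAttribute(SESSION_KEY);
        String submitted = request.getParameter("CSRFToken");
        return expected != null && expected.equals(submitted);
    }
}

In practice the check would usually run in a filter placed in front of all state-changing requests, and a constant-time comparison (for example java.security.MessageDigest.isEqual) is preferable to a plain string comparison.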
After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the 41 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet token in the request as compared to the token found in the session. If the token was not found within the request or the value provided does not match the value within the session, then the request should be aborted, token should be reset and the event logged as a potential CSRF attack in progress. To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and or value for each request. Implementing this ap- proach results in the generation of per-request tokens as opposed to per-session tokens. Note, however, that this may result in usability concerns. For example, the "Back" button browser capability is often hindered as the previous page may contain a token that is no longer valid. Interaction with this previous page will result in a CSRF false positive security event at the server. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as the use of SSLv3/TLS. 5.3.1. Disclosure of Token in URL Many implementations of this control include the challenge token in GET (URL) re- quests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked as a result of embedded links in the page or other general design patterns. These patterns are often implemented without knowl- edge of CSRF and an understanding of CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is being exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, HTTP log files, network appliances that make a point to log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (leaked CSRF token due to the Referer header being parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and they will be able to target this attack very effectively, since the Referer header tells them the site as well as the CSRF token. The attack could be run entirely from javascript, so that a simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). This attack scenario is easy to prevent, the referer will be omitted if the origin of the request is HTTPS. Therefore this attack does not affect web applications that are HTTPS only. The ideal solution is to only include the CSRF token in POST requests and modify server-side actions that have state changing affect to only respond to POST requests. This is in fact what the RFC 2616 [4] requires for GET requests. If sensitive server- side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests. In most JavaEE web applications, however, HTTP method scoping is rarely ever utilized when retrieving HTTP parameters from a request. Calls to "HttpServletRe- quest.getParameter" will return a parameter value regardless if it was a GET or POST. This is not to say HTTP method scoping cannot be enforced. 
It can be achieved if a developer explicitly overrides doPost() in the HttpServlet class or leverages framework specific capabilities such as the AbstractFormController class in Spring. For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off. 42 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet 5.4.2. Checking The Origin Header The Origin HTTP Header [8] standard was introduced as a method of defending against CSRF and other Cross-Domain attacks. Unlike the referer, the origin will be present in HTTP request that originates from an HTTPS url. If the origin header is present, then it should be checked for consistency. 5.4.3. Challenge-Response Challenge-Response is another defense option for CSRF. The following are some ex- amples of challenge-response options. • CAPTCHA • Re-Authentication (password) • One-time Token While challenge-response is a very strong defense to CSRF (assuming proper imple- mentation), it does impact user experience. For applications in need of high security, tokens (transparent) and challenge-response should be used on high risk functions. 5.5. Client/User Prevention Since CSRF vulnerabilities are reportedly widespread, it is recommended to follow best practices to mitigate risk. Some mitigating include: • Logoff immediately after using a Web application • Do not allow your browser to save username/passwords, and do not allow sites to "remember" your login • Do not use the same browser to access sensitive applications and to surf the Internet freely (tabbed browsing). • The use of plugins such as No-Script makes POST based CSRF vulnerabilities difficult to exploit. This is because JavaScript is used to automatically submit the form when the exploit is loaded. Without JavaScript the attacker would have to trick the user into submitting the form manually. Integrated HTML-enabled mail/browser and newsreader/browser environments pose additional risks since simply viewing a mail message or a news message might lead to the execution of an attack. 5.6. No Cross-Site Scripting (XSS) Vulnerabilities Cross-Site Scripting is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat token, Double-Submit cookie, referer and origin based CSRF defenses. This is because an XSS payload can simply read any page on the site using a XMLHttpRequest and obtain the generated token from the response, and include that token with a forged request. This technique is ex- actly how the MySpace (Samy) worm [9] defeated MySpace’s anti CSRF defenses in 2005, which enabled the worm to propagate. XSS cannot defeat challenge-response defenses such as Captcha, re-authentication or one-time passwords. It is impera- tive that no XSS vulnerabilities are present to ensure that CSRF defenses can’t be circumvented. Please see the OWASP XSS Prevention Cheat Sheet on page 179 for detailed guidance on how to prevent XSS flaws. 45 5. Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet 5.7. Authors and Primary Editors • Paul Petefish - paulpetefish[at]solutionary.com • Eric Sheridan - eric.sheridan[at]owasp.org • Dave Wichers - dave.wichers[at]owasp.org 5.8. References 1. https://www.owasp.org/index.php/Cross-Site_Request_Forgery_ (CSRF)_Prevention_Cheat_Sheet 2. 
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_ (CSRF) 3. http://www.corej2eepatterns.com/Design/PresoDesign.htm 4. http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 5. http://directwebremoting.org/ 6. http://en.wikipedia.org/wiki/Cryptographic_nonce 7. http://en.wikipedia.org/wiki/Claims-based_identity 8. https://wiki.mozilla.org/Security/Origin 9. http://en.wikipedia.org/wiki/Samy_(XSS) 46 6. Cryptographic Storage Cheat Sheet Last revision (mm/dd/yy): 03/10/2015 6.1. Introduction This article provides a simple model to follow when implementing solutions for data at rest. 6.1.1. Architectural Decision An architectural decision must be made to determine the appropriate method to pro- tect data at rest. There are such wide varieties of products, methods and mechanisms for cryptographic storage. This cheat sheet will only focus on low-level guidelines for developers and architects who are implementing cryptographic solutions. We will not address specific vendor solutions, nor will we address the design of cryptographic algorithms. 6.2. Providing Cryptographic Functionality 6.2.1. Secure Cryptographic Storage Design Rule - Only store sensitive data that you need Many eCommerce businesses utilize third party payment providers to store credit card information for recurring billing. This offloads the burden of keeping credit card numbers safe. Rule - Use strong approved Authenticated Encryption E.g. CCM [2] or GCM [3] are approved Authenticated Encryption [4] modes based on AES [5] algorithm. Rule - Use strong approved cryptographic algorithms Do not implement an existing cryptographic algorithm on your own, no matter how easy it appears. Instead, use widely accepted algorithms and widely accepted implementations. Only use approved public algorithms such as AES, RSA public key cryptography, and SHA-256 or better for hashing. Do not use weak algorithms, such as MD5 or SHA1. Avoid hashing for password storage, instead use PBKDF2, bcrypt or scrypt. Note that the classification of a "strong" cryptographic algorithm can change over time. See NIST approved algorithms [6] or ISO TR 14742 "Recommendations on Cryptographic Algorithms and their use" or Algorithms, key size and parameters report – 2014 [7] from European Union Agency for Network and Information Security. E.g. AES 128, RSA [8] 3072, SHA [9] 256. Ensure that the implementation has (at minimum) had some cryptography experts involved in its creation. If possible, use an implementation that is FIPS 140-2 certi- fied. 47 6. Cryptographic Storage Cheat Sheet Rule - Protect keys in a key vault Keys should remain in a protected key vault at all times. In particular, ensure that there is a gap between the threat vectors that have direct access to the data and the threat vectors that have direct access to the keys. This implies that keys should not be stored on the application or web server (assuming that application attackers are part of the relevant threat model). Rule - Document concrete procedures for managing keys through the lifecycle These procedures must be written down and the key custodians must be adequately trained. Rule - Build support for changing keys periodically Key rotation is a must as all good keys do come to an end either through expiration or revocation. So a developer will have to deal with rotating keys at some point – better to have a system in place now rather than scrambling later. (From Bil Cory as a starting point). 
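One common way to make rotation practical is to tag every ciphertext with the identifier of the key that produced it, so older data can still be decrypted (and later rekeyed) after a new key is introduced. The sketch below illustrates the idea with AES-GCM from the standard javax.crypto API; the key-lookup map, record layout, and naming are assumptions for illustration only, not a prescribed format:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Map;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Hypothetical sketch: each stored record carries the version of the key that encrypted it.
public class VersionedEncryptor {
    private final Map<String, SecretKey> keysByVersion; // e.g. loaded from a key vault
    private final String currentVersion;                // version used for new data
    private final SecureRandom random = new SecureRandom();

    public VersionedEncryptor(Map<String, SecretKey> keysByVersion, String currentVersion) {
        this.keysByVersion = keysByVersion;
        this.currentVersion = currentVersion;
    }

    /** Encrypts with the current key and prefixes the key version and IV. */
    public byte[] encrypt(byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        random.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, keysByVersion.get(currentVersion),
                    new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] versionBytes = currentVersion.getBytes(StandardCharsets.UTF_8);
        // Record layout: [1-byte version length][version][12-byte IV][ciphertext + tag]
        byte[] out = new byte[1 + versionBytes.length + iv.length + ciphertext.length];
        out[0] = (byte) versionBytes.length;
        System.arraycopy(versionBytes, 0, out, 1, versionBytes.length);
        System.arraycopy(iv, 0, out, 1 + versionBytes.length, iv.length);
        System.arraycopy(ciphertext, 0, out, 1 + versionBytes.length + iv.length, ciphertext.length);
        return out;
    }

    /** Reads the key version from the stored record and decrypts with that key. */
    public byte[] decrypt(byte[] record) throws Exception {
        int versionLength = record[0] & 0xFF;
        String version = new String(record, 1, versionLength, StandardCharsets.UTF_8);
        GCMParameterSpec spec = new GCMParameterSpec(128, record, 1 + versionLength, 12);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, keysByVersion.get(version), spec);
        int offset = 1 + versionLength + 12;
        return cipher.doFinal(record, offset, record.length - offset);
    }
}

With this layout, rekeying amounts to decrypting each record with the key version recorded in it and re-encrypting the plaintext under the current version.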
Rule - Document concrete procedures to handle a key compromise Rule - Rekey data at least every one to three years Rekeying refers to the process of decrypting data and then re-encrypting it with a new key. Periodically rekeying data helps protect it from undetected compromises of older keys. The appropriate rekeying period depends on the security of the keys. Data protected by keys secured in dedicated hardware security modules might only need rekeying every three years. Data protected by keys that are split and stored on two application servers might need rekeying every year. Rule - Follow applicable regulations on use of cryptography Rule - Under PCI DSS requirement 3, you must protect cardholder data The Payment Card Industry (PCI) Data Security Standard (DSS) was developed to encourage and enhance cardholder data security and facilitate the broad adoption of consistent data security measures globally. The standard was introduced in 2005 and replaced in- dividual compliance standards from Visa, Mastercard, Amex, JCB and Diners. The current version of the standard is 2.0 and was initialized on January 1, 2011. PCI DSS requirement 3 covers secure storage of credit card data. This requirement covers several aspects of secure storage including the data you must never store but we are covering Cryptographic Storage which is covered in requirements 3.4, 3.5 and 3.6 as you can see below: 3.4 Render PAN (Primary Account Number), at minimum, unreadable anywhere it is stored Compliance with requirement 3.4 can be met by implementing any of the four types of secure storage described in the standard which includes encrypting and hashing data. These two approaches will often be the most popular choices from the list of options. The standard doesn’t refer to any specific algorithms but it mandates the use of Strong Cryptography. The glossary document from the PCI council defines Strong Cryptography as: Cryptography based on industry-tested and accepted algorithms, along with strong key lengths and proper key-management practices. Cryptography is a method to pro- tect data and includes both encryption (which is reversible) and hashing (which is not reversible, or "one way"). SHA-1 is an example of an industry-tested and accepted hashing algorithm. Examples of industry-tested and accepted standards and algo- rithms for encryption include AES (128 bits and higher), TDES (minimum double-length keys), RSA (1024 bits and higher), ECC (160 bits and higher), and ElGamal (1024 bits and higher). 50 6. Cryptographic Storage Cheat Sheet If you have implemented the second rule in this cheat sheet you will have imple- mented a strong cryptographic algorithm which is compliant with or stronger than the requirements of PCI DSS requirement 3.4. You need to ensure that you identify all locations that card data could be stored including logs and apply the appropriate level of protection. This could range from encrypting the data to replacing the card number in logs. This requirement can also be met by implementing disk encryption rather than file or column level encryption. The requirements for Strong Cryptography are the same for disk encryption and backup media. 
The card data should never be stored in the clear and by following the guidance in this cheat sheet you will be able to securely store your data in a manner which is compliant with PCI DSS requirement 3.4 3.5 Protect any keys used to secure cardholder data against disclosure and misuse As the requirement name above indicates, we are required to securely store the en- cryption keys themselves. This will mean implementing strong access control, audit- ing and logging for your keys. The keys must be stored in a location which is both secure and "away" from the encrypted data. This means key data shouldn’t be stored on web servers, database servers etc Access to the keys must be restricted to the smallest amount of users possible. This group of users will ideally be users who are highly trusted and trained to perform Key Custodian duties. There will obviously be a requirement for system/service accounts to access the key data to perform encryption/decryption of data. The keys themselves shouldn’t be stored in the clear but encrypted with a KEK (Key Encrypting Key). The KEK must not be stored in the same location as the encryption keys it is encrypting. 3.6 Fully document and implement all key-management processes and procedures for cryptographic keys used for encryption of cardholder data Requirement 3.6 mandates that key management processes within a PCI compliant company cover 8 specific key lifecycle steps: 3.6.1 Generation of strong cryptographic keys As we have previously described in this cheat sheet we need to use algorithms which offer high levels of data security. We must also generate strong keys so that the security of the data isn’t undermined by weak cryptographic keys. A strong key is generated by using a key length which is sufficient for your data security require- ments and compliant with the PCI DSS. The key size alone isn’t a measure of the strength of a key. The data used to generate the key must be sufficiently random ("sufficient" often being determined by your data security requirements) and the en- tropy of the key data itself must be high. 3.6.2 Secure cryptographic key distribution The method used to distribute keys must be secure to prevent the theft of keys in transit. The use of a protocol such as Diffie Hellman can help secure the distribution of keys, the use of secure transport such as TLS and SSHv2 can also secure the keys in transit. Older protocols like SSLv3 should not be used. 3.6.3 Secure cryptographic key storage The secure storage of encryption keys including KEK’s has been touched on in our description of requirement 3.5 (see above). 3.6.4 Periodic cryptographic key changes The PCI DSS standard mandates that keys used for encryption must be rotated at least annually. The key rotation process must remove an old key from the encryp- tion/decryption process and replace it with a new key. All new data entering the 51 6. Cryptographic Storage Cheat Sheet system must encrypted with the new key. While it is recommended that existing data be rekeyed with the new key, as per the Rekey data at least every one to three years rule above, it is not clear that the PCI DSS requires this. 3.6.5 Retirement or replacement of keys as deemed necessary when the integrity of the key has been weakened or keys are suspected of being compromised The key management processes must cater for archived, retired or compromised keys. 
The process of securely storing and replacing these keys will more than likely be covered by your processes for requirements 3.6.2, 3.6.3 and 3.6.4 3.6.6 Split knowledge and establishment of dual control of cryptographic keys The requirement for split knowledge and/or dual control for key management pre- vents an individual user performing key management tasks such as key rotation or deletion. The system should require two individual users to perform an action (i.e. entering a value from their own OTP) which creates to separate values which are concatenated to create the final key data. 3.6.7 Prevention of unauthorized substitution of cryptographic keys The system put in place to comply with requirement 3.6.6 can go a long way to preventing unauthorised substitution of key data. In addition to the dual control process you should implement strong access control, auditing and logging for key data so that unauthorised access attempts are prevented and logged. 3.6.8 Requirement for cryptographic key custodians to sign a form stating that they understand and accept their key-custodian responsibilities To perform the strong key management functions we have seen in requirement 3.6 we must have highly trusted and trained key custodians who understand how to perform key management duties. The key custodians must also sign a form stating they understand the responsibilities that come with this role. 6.3. Related Articles OWASP - Testing for SSL-TLS [28], and OWASP Guide to Cryptography [29], OWASP – Application Security Verification Standard (ASVS) – Communication Security Veri- fication Requirements (V10) [30]. 6.4. Authors and Primary Editors • Kevin Kenan - kevin[at]k2dd.com • David Rook - david.a.rook[at]gmail.com • Kevin Wall - kevin.w.wall[at]gmail.com • Jim Manico - jim[at]owasp.org • Fred Donovan - fred.donovan(at)owasp.org 6.5. References 1. https://www.owasp.org/index.php/Cryptographic_Storage_Cheat_ Sheet 2. http://en.wikipedia.org/wiki/CCM_mode 3. http://en.wikipedia.org/wiki/GCM_mode 52 7. DOM based XSS Prevention Cheat Sheet Let’s look at the individual subcontexts of the execution context in turn. 7.1.1. RULE #1 - HTML Escape then JavaScript Escape Before Inserting Untrusted Data into HTML Subcontext within the Execution Context There are several methods and attributes which can be used to directly render HTML content within JavaScript. These methods constitute the HTML Subcontext within the Execution Context. If these methods are provided with untrusted input, then an XSS vulnerability could result. For example: Example Dangerous HTML Methods Attributes element . innerHTML = "<HTML> Tags and markup" ; element .outerHTML = "<HTML> Tags and markup" ; Methods document . write (" <HTML> Tags and markup" ) ; document . writeln (" <HTML> Tags and markup" ) ; Guideline To make dynamic updates to HTML in the DOM safe, we recommend a) HTML en- coding, and then b) JavaScript encoding all untrusted input, as shown in these examples: element . innerHTML = "<%=Encoder . encodeForJS ( Encoder .encodeForHTML( ↪→ untrustedData ) ) %>"; element .outerHTML = "<%=Encoder . encodeForJS ( Encoder .encodeForHTML( ↪→ untrustedData ) ) %>"; document . write ("<%=Encoder . encodeForJS ( Encoder .encodeForHTML( untrustedData ) ↪→ ) %>") ; document . writeln ("<%=Encoder . encodeForJS ( Encoder .encodeForHTML( ↪→ untrustedData ) ) %>") ; Note: The Encoder.encodeForHTML() and Encoder.encodeForJS() are just notional encoders. Various options for actual encoders are listed later in this document. 7.1.2. 
RULE #2 - JavaScript Escape Before Inserting Untrusted Data into HTML Attribute Subcontext within the Execution Context The HTML attribute *subcontext* within the *execution* context is divergent from the standard encoding rules. This is because the rule to HTML attribute encode in an HTML attribute rendering context is necessary in order to mitigate attacks which try to exit out of an HTML attributes or try to add additional attributes which could lead to XSS. When you are in a DOM execution context you only need to JavaScript encode HTML attributes which do not execute code (attributes other than event handler, CSS, and URL attributes). For example, the general rule is to HTML Attribute encode untrusted data (data from the database, HTTP request, user, back-end system, etc.) placed in an HTML Attribute. This is the appropriate step to take when outputting data in a rendering context, however using HTML Attribute encoding in an execution context will break the application display of data. 55 7. DOM based XSS Prevention Cheat Sheet SAFE but BROKEN example var x = document . createElement ( " input " ) ; x . setAttribute ( "name" , "company_name" ) ; // In the fol lowing l ine of code , companyName represents untrusted user ↪→ input // The Encoder . encodeForHTMLAttr ( ) i s unnecessary and causes double− ↪→ encoding x . setAttribute ( " value " , ’<%=Encoder . encodeForJS ( Encoder . encodeForHTMLAttr ( ↪→ companyName) ) %>’) ; var form1 = document . forms [ 0 ] ; form1 . appendChild ( x ) ; The problem is that if companyName had the value "Johnson & Johnson". What would be displayed in the input text field would be "Johnson &amp; Johnson". The appropriate encoding to use in the above case would be only JavaScript encoding to disallow an attacker from closing out the single quotes and in-lining code, or escaping to HTML and opening a new script tag. SAFE and FUNCTIONALLY CORRECT example var x = document . createElement ( " input " ) ; x . setAttribute ( "name" , "company_name" ) ; x . setAttribute ( " value " , ’<%=Encoder . encodeForJS (companyName) %>’) ; var form1 = document . forms [ 0 ] ; form1 . appendChild ( x ) ; It is important to note that when setting an HTML attribute which does not execute code, the value is set directly within the object attribute of the HTML element so there is no concerns with injecting up. 7.1.3. RULE #3 - Be Careful when Inserting Untrusted Data into the Event Handler and JavaScript code Subcontexts within an Execution Context Putting dynamic data within JavaScript code is especially dangerous because JavaScript encoding has different semantics for JavaScript encoded data when com- pared to other encodings. In many cases, JavaScript encoding does not stop attacks within an execution context. For example, a JavaScript encoded string will execute even though it is JavaScript encoded. Therefore, the primary recommendation is to avoid including untrusted data in this context. If you must, the following examples describe some approaches that do and do not work. var x = document . createElement ( " a " ) ; x . href ="#"; // In the l ine of code below , the encoded data // on the right ( the second argument to setAttribute ) // is an example of untrusted data that was properly // JavaScript encoded but s t i l l executes . x . setAttribute ( " onclick " , "\u0061\u006c\u0065\u0072\u0074\u0028\u0032\u0032 ↪→ \u0029" ) ; var y = document . createTextNode ( " Click To Test " ) ; x . appendChild ( y ) ; document .body . 
appendChild ( x ) ; The setAttribute(name_string,value_string) method is dangerous because it implicitly coerces the string_value into the DOM attribute datatype of name_string. In the case 56 7. DOM based XSS Prevention Cheat Sheet above, the attribute name is an JavaScript event handler, so the attribute value is im- plicitly converted to JavaScript code and evaluated. In the case above, JavaScript en- coding does not mitigate against DOM based XSS. Other JavaScript methods which take code as a string types will have a similar problem as outline above (setTimeout, setInterval, new Function, etc.). This is in stark contrast to JavaScript encoding in the event handler attribute of a HTML tag (HTML parser) where JavaScript encoding mitigates against XSS. <a id ="bb" href ="#" onclick="\u0061\u006c\u0065\u0072\u0074\u0028\u0031\ ↪→ u0029"> Test Me</a> An alternative to using Element.setAttribute(...) to set DOM attributes is to set the attribute directly. Directly setting event handler attributes will allow JavaScript en- coding to mitigate against DOM based XSS. Please note, it is always dangerous design to put untrusted data directly into a command execution context. <a id ="bb" href="#"> Test Me</a> //The fol lowing does NOT work because the event handler //is being set to a string . " a ler t (7 ) " is JavaScript encoded . document . getElementById ( " bb " ) . onclick = "\u0061\u006c\u0065\u0072\u0074\ ↪→ u0028\u0037\u0029" ; //The fol lowing does NOT work because the event handler is being set to a ↪→ string . document . getElementById ( " bb " ) . onmouseover = " t e s t I t " ; //The fol lowing does NOT work because of the //encoded " ( " and " ) " . " a ler t (77) " is JavaScript encoded . document . getElementById ( " bb " ) . onmouseover = \u0061\u006c\u0065\u0072\u0074\ ↪→ u0028\u0037\u0037\u0029; //The fol lowing does NOT work because of the encoded " ; " . //" t e s t I t ; t e s t I t " is JavaScript encoded . document . getElementById ( " bb " ) . onmouseover \u0074\u0065\u0073\u0074\u0049\ ↪→ u0074\u003b\u0074\u0065\u0073\u0074\u0049\u0074; //The fol lowing DOES WORK because the encoded value //is a val id variable name or function reference . " t e s t I t " is JavaScript ↪→ encoded document . getElementById ( " bb " ) . onmouseover = \u0074\u0065\u0073\u0074\u0049\ ↪→ u0074; function t e s t I t ( ) { a ler t ( " I was called . " ) ; } There are other places in JavaScript where JavaScript encoding is accepted as valid executable code. for ( var \u0062=0; \u0062 < 10; \u0062++) { \u0064\u006f\u0063\u0075\u006d\u0065\u006e\u0074 .\u0077\u0072\u0069\u0074\u0065\u006c\u006e ( "\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064" ) ; } \u0077\u0069\u006e\u0064\u006f\u0077 .\u0065\u0076\u0061\u006c \u0064\u006f\u0063\u0075\u006d\u0065\u006e\u0074 .\u0077\u0072\u0069\u0074\u0065(111111111) ; or var s = "\u0065\u0076\u0061\u006c " ; var t = "\u0061\u006c\u0065\u0072\u0074\u0028\u0031\u0031\u0029" ; window[ s ] ( t ) ; 57 7. DOM based XSS Prevention Cheat Sheet setTimeout ( ( function (param) { return function ( ) { customFunction (param) ; } } ) ("<%=Encoder . encodeForJS ( untrustedData ) %>") , y ) ; The other alternative is using N-levels of encoding. N-Levels of Encoding If your code looked like the following, you would need to only double JavaScript encode input data. 
setTimeout("customFunction('<%=doubleJavaScriptEncodedData%>', y)");
function customFunction(firstName, lastName) {
    alert("Hello " + firstName + " " + lastName);
}
The doubleJavaScriptEncodedData has its first layer of JavaScript encoding reversed (upon execution) in the single quotes. Then the implicit eval() of setTimeout() reverses another layer of JavaScript encoding to pass the correct value to customFunction. The reason you only need to double JavaScript encode is that the customFunction function did not itself pass the input to another method which implicitly or explicitly called eval(). If "firstName" were passed to another JavaScript method which implicitly or explicitly called eval(), then <%=doubleJavaScriptEncodedData%> above would need to be changed to <%=tripleJavaScriptEncodedData%>.
An important implementation note is that if the JavaScript code tries to use the double or triple encoded data in string comparisons, the value may be interpreted as different values depending on the number of times the data has passed through eval() before reaching the comparison and the number of times the value was JavaScript encoded. If "A" is double JavaScript encoded then the following if check will return false.
var x = "doubleJavaScriptEncodedA"; //\u005c\u0075\u0030\u0030\u0034\u0031
if (x == "A") {
    alert("x is A");
} else if (x == "\u0041") {
    alert("This is what pops");
}
This brings up an interesting design point. Ideally, the correct way to apply encoding and avoid the problem stated above is to server-side encode for the output context where data is introduced into the application. Then client-side encode (using a JavaScript encoding library such as ESAPI4JS) for the individual subcontext (DOM methods) which untrusted data is passed to. ESAPI4JS [5] and jQuery Encoder [6] are two client-side encoding libraries developed by Chris Schmidt. Here are some examples of how they are used:
var input = "<%=Encoder.encodeForJS(untrustedData)%>"; // server-side encoding
window.location = ESAPI4JS.encodeForURL(input); // URL encoding is happening in JavaScript
document.writeln(ESAPI4JS.encodeForHTML(input)); // HTML encoding is happening in JavaScript
It has been well noted by the group that any kind of reliance on a JavaScript library for encoding would be problematic, as the JavaScript library could be subverted by attackers. One option is to wait until ECMAScript 5 so the JavaScript library could support immutable properties. Another option provided by Gaz (Gareth) was to use a specific code construct to limit mutability with anonymous closures. An example follows:
function escapeHTML(str) {
    str = str + "";
    var out = "";
    for (var i = 0; i < str.length; i++) {
        if (str[i] === '<') {
            out += '&lt;';
        } else if (str[i] === '>') {
            out += '&gt;';
        } else if (str[i] === "'") {
            out += '&#39;';
        } else if (str[i] === '"') {
            out += '&quot;';
        } else {
            out += str[i];
        }
    }
    return out;
}
Chris Schmidt has put together another implementation of a JavaScript encoder [7].
7. Limit the usage of dynamic untrusted data to right side operations. And be aware of data which may be passed to the application which looks like code (e.g. location, eval()). (Achim)
var x = "<%=properly encoded data for flow%>";
If you want to change different object attributes based on user input, use a level of indirection.
Instead of: window[ userData ] = "moreUserData " ; Do the following instead: i f ( userData===" location " ) { window. location = " stat ic/path/or/properly/url/encoded/value " ; } 8. When URL encoding in DOM be aware of character set issues as the character set in JavaScript DOM is not clearly defined (Mike Samuel). 9. Limit access to properties objects when using object[x] accessors. (Mike Samuel). In other words use a level of indirection between untrusted input and specified object properties. Here is an example of the problem when using map types: var myMapType = { } ; myMapType[<%=untrustedData%>] = "moreUntrustedData " ; Although the developer writing the code above was trying to add additional keyed elements to the myMapType object. This could be used by an attacker to subvert internal and external attributes of the myMapType object. 10. Run your JavaScript in a ECMAScript 5 canopy or sand box to make it harder for your JavaScript API to be compromised (Gareth Heyes and John Stevens). 61 7. DOM based XSS Prevention Cheat Sheet 11. Don’t eval() JSON to convert it to native JavaScript objects. Instead use JSON.toJSON() and JSON.parse() (Chris Schmidt). 7.3. Common Problems Associated with Mitigating DOM Based XSS 7.3.1. Complex Contexts In many cases the context isn’t always straightforward to discern. <a href =" javascript :myFunction(’<%=untrustedData%>’, ’ test ’ ) ;" > Click Me</a> . . . <script > Function myFunction ( url ,name) { window. location = url ; } </script > In the above example, untrusted data started in the rendering URL context (href attribute of an <a> tag) then changed to a JavaScript execution context (javascript: protocol handler) which passed the untrusted data to an execution URL subcontext (window.location of myFunction). Because the data was introduced in JavaScript code and passed to a URL subcontext the appropriate server-side encoding would be the following: <a href =" javascript :myFunction(’<%=Encoder . encodeForJS ( Encoder . encodeForURL ( untrustedData ) ) %>’, ’ test ’ ) ;" > Click Me</a> . . . Or if you were using ECMAScript 5 with an immutable JavaScript client-side encod- ing libraries you could do the following: <!−−server side URL encoding has been removed . Now only JavaScript encoding ↪→ on server side . −−> <a href =" javascript :myFunction(’<%=Encoder . encodeForJS ( untrustedData ) %>’, ’ ↪→ test ’ ) ;" > Click Me</a> . . . <script > Function myFunction ( url ,name) { var encodedURL = ESAPI4JS.encodeForURL ( url ) ; //URL encoding using cl ient− ↪→ side scripts window. location = encodedURL; } </script > 7.3.2. Inconsistencies of Encoding Libraries There are a number of open source encoding libraries out there: 1. ESAPI [8] 2. Apache Commons String Utils 3. Jtidy 4. Your company’s custom implementation. 62 8. Forgot Password Cheat Sheet Last revision (mm/dd/yy): 11/19/2014 8.1. Introduction This article provides a simple model to follow when implementing a "forgot password" web application feature. 8.2. The Problem There is no industry standard for implementing a Forgot Password feature. The result is that you see applications forcing users to jump through myriad hoops involving emails, special URLs, temporary passwords, personal security questions, and so on. With some applications you can recover your existing password. In others you have to reset it to a new value. 8.3. Steps 8.3.1. 
Step 1) Gather Identity Data or Security Questions The first page of a secure Forgot Password feature asks the user for multiple pieces of hard data that should have been previously collected (generally when the user first registers). Steps for this are detailed in the identity section of the Choosing and Using Security Questions Cheat Sheet on page 20. At a minimum, you should have collected some data that will allow you to send the password reset information to some out-of-band side-channel, such as a (possibly different) email address or an SMS number, etc., to be used in Step 3. 8.3.2. Step 2) Verify Security Questions After the form on Step 1 is submitted, the application verifies that each piece of data is correct for the given username. If anything is incorrect, or if the username is not recognized, the second page displays a generic error message such as "Sorry, invalid data". If all submitted data is correct, Step 2 should display at least two of the user’s pre-established personal security questions, along with input fields for the answers. It’s important that the answer fields are part of a single HTML form. Do not provide a drop-down list for the user to select the questions he wants to answer. Avoid sending the username as a parameter (hidden or otherwise) when the form on this page is submitted. The username should be stored in the server-side session where it can be retrieved as needed. Because users’ security questions and answers generally contain much less entropy than a well-chosen password (how many likely answers are there to the typical "What’s your favorite sports team?" or "In what city were you born?" security questions anyway?), make sure you limit the number of guesses attempted, and if some threshold is exceeded for that user (say 3 to 5), lock out the user’s account for some reasonable duration (say at least 5 minutes) and then challenge the user with some form of challenge token (per a standard multi-factor workflow; see Step 3, below) to mitigate attempts by hackers to guess the questions and reset the user’s password. (It is not unreasonable to think that a user’s email account may have already been compromised, so tokens that do not involve email, such as SMS or a mobile soft-token, are best.) 8.3.3. Step 3) Send a Token Over a Side-Channel After step 2, lock out the user’s account immediately. Then send an SMS or utilize some other multi-factor token challenge with a randomly-generated code having 8 or more characters. This introduces an "out-of-band" communication channel and adds defense-in-depth, as it is another barrier for a hacker to overcome. If the bad guy has somehow managed to successfully get past steps 1 and 2, he is unlikely to have compromised the side-channel. It is also a good idea to give the random code which your system generates a limited validity period, say no more than 20 minutes or so. That way, if the user doesn’t get around to checking their email and their email account is later compromised, the random token used to reset the password would no longer be valid if the user never reset their password and the "reset password" token was discovered by an attacker. Of course, once a user’s password has been reset, the randomly-generated token should no longer be valid. 8.3.4.
Step 4) Allow user to change password in the existing session Step 4 requires input of the code sent in step 3 in the existing session where the challenge questions were answered in step 2, and allows the user to reset his pass- word. Display a simple HTML form with one input field for the code, one for the new password, and one to confirm the new password. Verify the correct code is provided and be sure to enforce all password complexity requirements that exist in other ar- eas of the application. As before, avoid sending the username as a parameter when the form is submitted. Finally, it’s critical to have a check to prevent a user from accessing this last step without first completing steps 1 and 2 correctly. Otherwise, a forced browsing [2] attack may be possible. 8.4. Authors and Primary Editors • Dave Ferguson - gmdavef[at]gmail.com • Jim Manico - jim[at]owasp.org • Kevin Wall - kevin.w.wall[at]gmail.com • Wesley Philip - wphilip[at]ca.ibm.com 8.5. References 1. https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet 2. https://www.owasp.org/index.php/Forced_browsing 66 9. HTML5 Security Cheat Sheet Last revision (mm/dd/yy): 04/7/2014 9.1. Introduction The following cheat sheet serves as a guide for implementing HTML 5 in a secure fashion. 9.2. Communication APIs 9.2.1. Web Messaging Web Messaging (also known as Cross Domain Messaging) provides a means of mes- saging between documents from different origins in a way that is generally safer than the multiple hacks used in the past to accomplish this task. However, there are still some recommendations to keep in mind: • When posting a message, explicitly state the expected origin as the second argu- ment to postMessage rather than * in order to prevent sending the message to an unknown origin after a redirect or some other means of the target window’s origin changing. • The receiving page should always: – Check the origin attribute of the sender to verify the data is originating from the expected location. – Perform input validation on the data attribute of the event to ensure that it’s in the desired format. • Don’t assume you have control over the data attribute. A single Cross Site Scripting [2] flaw in the sending page allows an attacker to send messages of any given format. • Both pages should only interpret the exchanged messages as data. Never eval- uate passed messages as code (e.g. via eval()) or insert it to a page DOM (e.g. via innerHTML), as that would create a DOM-based XSS vulnerability. For more information see DOM based XSS Prevention Cheat Sheet on page 54. • To assign the data value to an element, instead of using a insecure method like element.innerHTML = data;, use the safer option: element.textContent = data; • Check the origin properly exactly to match the FQDN(s) you expect. Note that the following code: if(message.orgin.indexOf(".owasp.org")!=-1) { /* ... */ } is very insecure and will not have the desired behavior as www.owasp.org.attacker.com will match. • If you need to embed external content/untrusted gadgets and allow user- controlled scripts (which is highly discouraged), consider using a JavaScript rewriting framework such as Google Caja [3] or check the information on sand- boxed frames [4]. 67 9. HTML5 Security Cheat Sheet 9.3.2. Client-side databases • On November 2010, the W3C announced Web SQL Database (relational SQL database) as a deprecated specification. 
A new standard Indexed Database API or IndexedDB (formerly WebSimpleDB) is actively developed, which provides key/value database storage and methods for performing advanced queries. • Underlying storage mechanisms may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it’s recommended not to store any sensitive information in local storage. • If utilized, WebDatabase content on the client side can be vulnerable to SQL injection and needs to have proper validation and parameterization. • Like Local Storage, a single Cross Site Scripting can be used to load malicious data into a web database as well. Don’t consider data in these to be trusted. 9.4. Geolocation • The Geolocation RFC recommends that the user agent ask the user’s permission before calculating location. Whether or how this decision is remembered varies from browser to browser. Some user agents require the user to visit the page again in order to turn off the ability to get the user’s location without asking, so for privacy reasons, it’s recommended to require user input before calling getCurrentPosition or watchPosition. 9.5. Web Workers • Web Workers are allowed to use XMLHttpRequest object to perform in-domain and Cross Origin Resource Sharing requests. See relevant section of this Cheat Sheet to ensure CORS security. • While Web Workers don’t have access to DOM of the calling page, malicious Web Workers can use excessive CPU for computation, leading to Denial of Ser- vice condition or abuse Cross Origin Resource Sharing for further exploitation. Ensure code in all Web Workers scripts is not malevolent. Don’t allow creating Web Worker scripts from user supplied input. • Validate messages exchanged with a Web Worker. Do not try to exchange snip- pets of Javascript for evaluation e.g. via eval() as that could introduce a DOM Based XSS [8] vulnerability. 9.6. Sandboxed frames • Use the sandbox attribute of an iframe for untrusted content. • The sandbox attribute of an iframe enables restrictions on content within a iframe. The following restrictions are active when the sandbox attribute is set: 1. All markup is treated as being from a unique origin. 2. All forms and scripts are disabled. 3. All links are prevented from targeting other browsing contexts. 4. All features that triggers automatically are blocked. 70 9. HTML5 Security Cheat Sheet 5. All plugins are disabled. It is possible to have a fine-grained control [9] over iframe capabilities using the value of the sandbox attribute. • In old versions of user agents where this feature is not supported, this attribute will be ignored. Use this feature as an additional layer of protection or check if the browser supports sandboxed frames and only show the untrusted content if supported. • Apart from this attribute, to prevent Clickjacking attacks and unsolicited fram- ing it is encouraged to use the header X-Frame-Options which supports the deny and same-origin values. Other solutions like framebusting if(window!== window.top) { window.top.location = location; } are not recommended. 9.7. Offline Applications • Whether the user agent requests permission to the user to store data for offline browsing and when this cache is deleted varies from one browser to the next. Cache poisoning is an issue if a user connects through insecure networks, so for privacy reasons it is encouraged to require user input before sending any manifest file. 
• Users should only cache trusted websites and clean the cache after browsing through open or insecure networks. 9.8. Progressive Enhancements and Graceful Degradation Risks • The best practice now is to determine the capabilities that a browser supports and augment with some type of substitute for capabilities that are not directly supported. This may mean an onion-like element, e.g. falling through to a Flash Player if the <video> tag is unsupported, or it may mean additional scripting code from various sources that should be code reviewed. 9.9. HTTP Headers to enhance security 9.9.1. X-Frame-Options • This header can be used to prevent ClickJacking in modern browsers. • Use the same-origin attribute to allow being framed from urls of the same origin or deny to block all. Example: X-Frame-Options: DENY • For more information on Clickjacking Defense please see the Clickjacking De- fense Cheat Sheet. 9.9.2. X-XSS-Protection • Enable XSS filter (only works for Reflected XSS). • Example: X-XSS-Protection: 1; mode=block 71 9. HTML5 Security Cheat Sheet 9.9.3. Strict Transport Security • Force every browser request to be sent over TLS/SSL (this can prevent SSL strip attacks). • Use includeSubDomains. • Example: Strict-Transport-Security: max-age=8640000; includeSubDomains 9.9.4. Content Security Policy • Policy to define a set of content restrictions for web resources which aims to mitigate web application vulnerabilities such as Cross Site Scripting. • Example: X-Content-Security-Policy: allow ’self’; img-src *; object-src me- dia.example.com; script-src js.example.com 9.9.5. Origin • Sent by CORS/WebSockets requests. • There is a proposal to use this header to mitigate CSRF attacks, but is not yet implemented by vendors for this purpose. 9.10. Authors and Primary Editors • Mark Roxberry mark.roxberry [at] owasp.org • Krzysztof Kotowicz krzysztof [at] kotowicz.net • Will Stranathan will [at] cltnc.us • Shreeraj Shah shreeraj.shah [at] blueinfy.net • Juan Galiana Lara jgaliana [at] owasp.org 9.11. References 1. https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet 2. https://www.owasp.org/index.php/Cross-site_Scripting_(XSS) 3. http://code.google.com/p/google-caja/ 4. https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet# Sandboxed_frames 5. https://www.owasp.org/index.php/Cross-Site_Request_Forgery_ (CSRF) 6. http://tools.ietf.org/html/rfc6455 7. https://www.owasp.org/index.php/Cross_Site_Scripting_Flaw 8. https://www.owasp.org/index.php/DOM_Based_XSS 9. http://www.whatwg.org/specs/web-apps/current-work/multipage/ the-iframe-element.html#attr-iframe-sandbox 72 11. JAAS Cheat Sheet Last revision (mm/dd/yy): 04/7/2014 11.1. Introduction 11.1.1. What is JAAS authentication The process of verifying the identity of a user or another system is authentication. JAAS, as an authentication framework manages the authenticated user’s identity and credentials from login to logout. The JAAS authentication lifecycle: 1. Create LoginContext 2. Read the configuration file for one or more LoginModules to initialize 3. Call LoginContext.initialize() for each LoginModule to initialize. 4. Call LoginContext.login() for each LoginModule 5. If login successful then call LoginContext.commit() else call LoginContext.abort() 11.1.2. Configuration file The JAAS configuration file contains a LoginModule stanza for each LoginModule available for logging on to the application. 
A stanza from a JAAS configuration file: Branches { USNavy.AppLoginModule required debug=true succeeded=true ; } Note the placement of the semicolons, terminating both LoginModule entries and stanzas. The word required indicates the LoginContext’s login() method must be successful when logging in the user. The LoginModule-specific values debug and succeeded are passed to the LoginModule. They are defined by the LoginModule and their usage is managed inside the LoginModule. Note, Options are Configured using key-value pairing such as debug=”true” and the key and value should be separated by a ’equals’ sign. 11.1.3. Main.java (The client) Execution syntax Java −Djava . security . auth . login . config==packageName/packageName. config packageName.Main Stanza1 Where: packageName is the directory containing the config f i l e . packageName. config spec i f ies the config f i l e in the Java package , ↪→ packageName 75 11. JAAS Cheat Sheet packageName.Main spec i f ies Main. java in the Java package , packageName Stanza1 is the name of the stanza Main ( ) should read from the config f i l e . • When executed, the 1st command line argument is the stanza from the config file. The Stanza names the LoginModule to be used. The 2nd argument is the CallbackHandler. • Create a new LoginContext with the arguments passed to Main.java. – loginContext = new LoginContext (args[0], new AppCallbackHandler()); • Call the LoginContext.Login Module – loginContext.login (); • The value in succeeded Option is returned from loginContext.login() • If the login was successful, a subject was created. 11.1.4. LoginModule.java A LoginModule must have the following authentication methods: • initialize() • login() • commit() • abort() • logout() initialize() In Main(), after the LoginContext reads the correct stanza from the config file, the LoginContext instantiates the LoginModule specified in the stanza. • initialize() methods signature: – Public void initialize(Subject subject, CallbackHandler callbackHandler, Map sharedState, Map options) • The arguments above should be saved as follows: – this.subject = subject; – this.callbackHandler = callbackHandler; – this.sharedState = sharedState; – this.options = options; • What the initialize() method does: – Builds a subject object of the Subject class contingent on a successful lo- gin() – Sets the CallbackHandler which interacts with the user to gather login in- formation – If a LoginContext specifies 2 or more LoginModules, which is legal, they can share information via a sharedState map – Saves state information such as debug and succeeded in an options Map 76 11. JAAS Cheat Sheet login() Captures user supplied login information. The code snippet below declares an array of two callback objects which, when passed to the callbackHandler.handle method in the callbackHandler.java program, will be loaded with a user name and password provided interactively by the user. NameCallback nameCB = new NameCallback ( "Username " ) ; PasswordCallback passwordCB = new PasswordCallback ( " Password " , fa lse ) ; Callback [ ] callbacks = new Callback [ ] { nameCB, passwordCB } ; callbackHandler . 
handle(callbacks); • Authenticates the user • Retrieves the user supplied information from the callback objects: – String ID = nameCallback.getName(); – char[] tempPW = passwordCallback.getPassword(); • Compare ID and tempPW to values stored in a repository such as LDAP • Set the value of the variable succeeded and return to Main() commit() Once the user’s credentials are successfully verified during login(), the JAAS authentication framework associates the credentials, as needed, with the subject. There are two types of credentials, public and private. Public credentials include public keys; private credentials include passwords and private keys. Principals (i.e. identities the subject has other than their login name), such as an employee number or membership ID in a user group, are added to the subject. Below is an example commit() method where first, for each group the authenticated user has membership in, the group name is added as a principal to the subject. The subject’s username is then added to their public credentials. Code snippet setting and then adding principals and a public credential to a subject:
public boolean commit() {
    if (userAuthenticated) {
        // Add each group the user belongs to as a principal on the subject
        Set groups = UserService.findGroups(username);
        for (Iterator itr = groups.iterator(); itr.hasNext();) {
            String groupName = (String) itr.next();
            UserGroupPrincipal group = new UserGroupPrincipal(groupName);
            subject.getPrincipals().add(group);
        }
        // Add the username as a public credential
        UsernameCredential cred = new UsernameCredential(username);
        subject.getPublicCredentials().add(cred);
        return true;
    }
    return false;
}
abort() The abort() method is called when authentication doesn’t succeed. Before the abort() method exits the LoginModule, care should be taken to reset state, including the user name and password input fields. 12. Logging Cheat Sheet Last revision (mm/dd/yy): 07/13/2014 12.1. Introduction This cheat sheet is focused on providing developers with concentrated guidance on building application logging mechanisms, especially related to security logging. Many systems enable network device, operating system, web server, mail server and database server logging, but custom application event logging is often missing, disabled or poorly configured. Application logging provides much greater insight than infrastructure logging alone. Web application (e.g. web site or web service) logging is much more than having web server logs enabled (e.g. using Extended Log File Format). Application logging should be consistent within the application, consistent across an organization’s application portfolio, and use industry standards where relevant, so the logged event data can be consumed, correlated, analyzed and managed by a wide variety of systems. 12.2. Purpose Application logging should always be included for security events. Application logs are invaluable data for: • Identifying security incidents • Monitoring policy violations • Establishing baselines • Providing information about problems and unusual conditions • Contributing additional application-specific data for incident investigation which is lacking in other log sources • Helping defend against vulnerability identification and exploitation through attack detection Application logging might also be used to record other types of events, such as: • Security events • Business process monitoring e.g. sales process abandonment, transactions, connections • Audit trails e.g. data addition, modification and deletion, data exports • Performance monitoring e.g.
data load time, page timeouts • Compliance monitoring • Data for subsequent requests for information e.g. data subject access, freedom of information, litigation, police and other regulatory investigations 80 12. Logging Cheat Sheet • Legally sanctioned interception of data e.g application-layer wire-tapping • Other business-specific requirements Process monitoring, audit and transaction logs/trails etc are usually collected for different purposes than security event logging, and this often means they should be kept separate. The types of events and details collected will tend to be different. For example a PCIDSS audit log will contain a chronological record of activities to provide an independently verifiable trail that permits reconstruction, review and examination to determine the original sequence of attributable transactions. It is important not to log too much, or too little. Use knowledge of the intended purposes to guide what, when and how much. The remainder of this cheat sheet primarily discusses security event logging. 12.3. Design, implementation and testing 12.3.1. Event data sources The application itself has access to a wide range of information events that should be used to generate log entries. Thus, the primary event data source is the application code itself. The application has the most information about the user (e.g. identity, roles, permissions) and the context of the event (target, action, outcomes), and of- ten this data is not available to either infrastructure devices, or even closely-related applications. Other sources of information about application usage that could also be considered are: • Client software e.g. actions on desktop software and mobile devices in local logs or using messaging technologies, JavaScript exception handler via Ajax, web browser such as using Content Security Policy (CSP) reporting mechanism • Network firewalls • Network and host intrusion detection systems (NIDS and HIDS) • Closely-related applications e.g. filters built into web server software, web server URL redirects/rewrites to scripted custom error pages and handlers • Application firewalls e.g. filters, guards, XML gateways, database firewalls, web application firewalls (WAFs) • Database applications e.g. automatic audit trails, trigger-based actions • Reputation monitoring services e.g. uptime or malware monitoring • Other applications e.g. fraud monitoring, CRM • Operating system e.g. mobile platform The degree of confidence in the event information has to be considered when in- cluding event data from systems in a different trust zone. Data may be missing, modified, forged, replayed and could be malicious – it must always be treated as untrusted data. Consider how the source can be verified, and how integrity and non-repudiation can be enforced. 81 12. Logging Cheat Sheet 12.3.2. Where to record event data Applications commonly write event log data to the file system or a database (SQL or NoSQL). Applications installed on desktops and on mobile devices may use local storage and local databases. Your selected framework may limit the available choices. All types of applications may send event data to remote systems (instead of or as well as more local storage). This could be a centralized log collection and management system (e.g. SIEM or SEM) or another application elsewhere. Consider whether the application can simply send its event stream, unbuffered, to stdout, for management by the execution environment. 
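As an illustration, a minimal sketch of emitting one security event as a single structured line on stdout follows. The field names and JSON-style layout are assumptions for this example rather than a required format; a real deployment would follow the schema expected by its log collection tooling.
import java.time.Instant;

/**
 * Minimal sketch: emit one security event as a single structured line on
 * stdout so the execution environment (container runtime, syslog shipper,
 * SIEM agent) can collect it. Field names are illustrative, not a standard.
 */
public class SecurityEventLogger {

    // Strip CR, LF and double quotes so a malicious value cannot forge
    // extra log lines or break the record structure (log injection).
    private static String sanitize(String value) {
        return value == null ? "" : value.replaceAll("[\\r\\n\"]", "_");
    }

    public static void logEvent(String level, String eventType, String user,
                                String sourceIp, String outcome) {
        String line = String.format(
            "{\"ts\":\"%s\",\"level\":\"%s\",\"event\":\"%s\",\"user\":\"%s\",\"src\":\"%s\",\"outcome\":\"%s\"}",
            Instant.now(), sanitize(level), sanitize(eventType),
            sanitize(user), sanitize(sourceIp), sanitize(outcome));
        System.out.println(line); // one self-contained record per line
    }

    public static void main(String[] args) {
        logEvent("WARN", "authn_failure", "alice", "203.0.113.7", "invalid_password");
    }
}
Writing one self-contained record per line keeps the stream easy for a collector or SIEM agent to parse, whichever of the storage options discussed below is ultimately used.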
• When using the file system, it is preferable to use a separate partition than those used by the operating system, other application files and user generated content – For file-based logs, apply strict permissions concerning which users can access the directories, and the permissions of files within the directories – In web applications, the logs should not be exposed in web-accessible loca- tions, and if done so, should have restricted access and be configured with a plain text MIME type (not HTML) • When using a database, it is preferable to utilize a separate database account that is only used for writing log data and which has very restrictive database , table, function and command permissions • Use standard formats over secure protocols to record and send event data, or log files, to other systems e.g. Common Log File System (CLFS), Common Event Format (CEF) over syslog, possibly Common Event Expression (CEE) in future; standard formats facilitate integration with centralised logging services Consider separate files/tables for extended event information such as error stack traces or a record of HTTP request and response headers and bodies. 12.3.3. Which events to log The level and content of security monitoring, alerting and reporting needs to be set during the requirements and design stage of projects, and should be proportionate to the information security risks. This can then be used to define what should be logged. There is no one size fits all solution, and a blind checklist approach can lead to unnecessary "alarm fog" that means real problems go undetected. Where possible, always log: • Input validation failures e.g. protocol violations, unacceptable encodings, in- valid parameter names and values • Output validation failures e.g. database record set mismatch, invalid data en- coding • Authentication successes and failures • Authorization (access control) failures • Session management failures e.g. cookie session identification value modifica- tion • Application errors and system events e.g. syntax and runtime errors, connec- tivity problems, performance issues, third party service error messages, file system errors, file upload virus detection, configuration changes 82 12. Logging Cheat Sheet For more information on these, see the "other" related articles listed at the end, especially the comprehensive article by Anton Chuvakin and Gunnar Peterson. Note A: The "Interaction identifier" is a method of linking all (relevant) events for a single user interaction (e.g. desktop application form submission, web page re- quest, mobile app button click, web service call). The application knows all these events relate to the same interaction, and this should be recorded instead of los- ing the information and forcing subsequent correlation techniques to re-construct the separate events. For example a single SOAP request may have multiple input validation failures and they may span a small range of times. As another example, an output validation failure may occur much later than the input submission for a long-running "saga request" submitted by the application to a database server. Note B: Each organisation should ensure it has a consistent, and documented, ap- proach to classification of events (type, confidence, severity), the syntax of descrip- tions, and field lengths & data types including the format used for dates/times. 12.3.5. Data to exclude Never log data unless it is legally sanctioned. 
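Consistent classification of the event types listed above is easier when the taxonomy lives in one place. The sketch below is illustrative only: the type names are assumptions rather than an industry standard, and the logEvent helper it references is the stdout sketch shown earlier.
// Illustrative taxonomy of the event types listed above, kept in one place
// so every module classifies security events the same way.
public enum SecurityEventType {
    INPUT_VALIDATION_FAILURE,
    OUTPUT_VALIDATION_FAILURE,
    AUTHENTICATION_SUCCESS,
    AUTHENTICATION_FAILURE,
    AUTHORIZATION_FAILURE,
    SESSION_MANAGEMENT_FAILURE,
    APPLICATION_ERROR
}

// Example call site, assuming the SecurityEventLogger sketch shown earlier:
// SecurityEventLogger.logEvent("WARN", SecurityEventType.AUTHORIZATION_FAILURE.name(),
//         "bob", "198.51.100.20", "denied");
Keeping the names in an enumeration rather than free-form strings also makes it harder for a typo to split one event type into several during later analysis.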
For example intercepting some com- munications, monitoring employees, and collecting some data without consent may all be illegal. Never exclude any events from "known" users such as other internal systems, "trusted" third parties, search engine robots, uptime/process and other remote mon- itoring systems, pen testers, auditors. However, you may want to include a classifi- cation flag for each of these in the recorded data. The following should not usually be recorded directly in the logs, but instead should be removed, masked, sanitized, hashed or encrypted: • Application source code • Session identification values (consider replacing with a hashed value if needed to track session specific events) • Access tokens • Sensitive personal data and some forms of personally identifiable information (PII) • Authentication passwords • Database connection strings • Encryption keys • Bank account or payment card holder data • Data of a higher security classification than the logging system is allowed to store • Commercially-sensitive information • Information it is illegal to collect in the relevant jurisdiction • Information a user has opted out of collection, or not consented to e.g. use of do not track, or where consent to collect has expired Sometimes the following data can also exist, and whilst useful for subsequent inves- tigation, it may also need to be treated in some special manner before the event is recorded: 85 12. Logging Cheat Sheet • File paths • Database connection strings • Internal network names and addresses • Non sensitive personal data (e.g. personal names, telephone numbers, email addresses) In some systems, sanitization can be undertaken post log collection, and prior to log display. 12.3.6. Customizable logging It may be desirable to be able to alter the level of logging (type of events based on severity or threat level, amount of detail recorded). If this is implemented, ensure that: • The default level must provide sufficient detail for business needs • It should not be possible to completely inactivate application logging or logging of events that are necessary for compliance requirements • Alterations to the level/extent of logging must be intrinsic to the application (e.g. undertaken automatically by the application based on an approved algorithm) or follow change management processes (e.g. changes to configuration data, modification of source code) • The logging level must be verified periodically 12.3.7. Event collection If your development framework supports suitable logging mechanisms use, or build upon that. Otherwise, implement an application-wide log handler which can be called from other modules/components. Document the interface referencing the organisation-specific event classification and description syntax requirements. If possible create this log handler as a standard module that can is thoroughly tested, deployed in multiple application, and added to a list of approved & recommended modules. • Perform input validation on event data from other trust zones to ensure it is in the correct format (and consider alerting and not logging if there is an input validation failure) • Perform sanitization on all event data to prevent log injection attacks e.g. 
car- riage return (CR), line feed (LF) and delimiter characters (and optionally to re- move sensitive data) • Encode data correctly for the output (logged) format • If writing to databases, read, understand and apply the SQL injection cheat sheet • Ensure failures in the logging processes/systems do not prevent the application from otherwise running or allow information leakage • Synchronize time across all servers and devices [Note C] 86 12. Logging Cheat Sheet Note C: This is not always possible where the application is running on a device under some other party’s control (e.g. on an individual’s mobile phone, on a remote cus- tomer’s workstation which is on another corporate network). In these cases attempt to measure the time offset, or record a confidence level in the event time stamp. Where possible record data in a standard format, or at least ensure it can be export- ed/broadcast using an industry-standard format. In some cases, events may be relayed or collected together in intermediate points. In the latter some data may be aggregated or summarized before forwarding on to a central repository and analysis system. 12.3.8. Verification Logging functionality and systems must be included in code review, application test- ing and security verification processes: • Ensure the logging is working correctly and as specified • Check events are being classified consistently and the field names, types and lengths are correctly defined to an agreed standard • Ensure logging is implemented and enabled during application security, fuzz, penetration and performance testing • Test the mechanisms are not susceptible to injection attacks • Ensure there are no unwanted side-effects when logging occurs • Check the effect on the logging mechanisms when external network connectivity is lost (if this is usually required) • Ensure logging cannot be used to deplete system resources, for example by filling up disk space or exceeding database transaction log space, leading to denial of service • Test the effect on the application of logging failures such as simulated database connectivity loss, lack of file system space, missing write permissions to the file system, and runtime errors in the logging module itself • Verify access controls on the event log data • If log data is utilized in any action against users (e.g. blocking access, account lock-out), ensure this cannot be used to cause denial of service (DoS) of other users 12.4. Deployment and operation 12.4.1. Release • Provide security configuration information by adding details about the logging mechanisms to release documentation • Brief the application/process owner about the application logging mechanisms • Ensure the outputs of the monitoring (see below) are integrated with incident response processes 87 12. Logging Cheat Sheet 4. http://tools.ietf.org/html/rfc5424 5. http://cee.mitre.org/ 6. http://csrc.nist.gov/publications/nistpubs/800-92/SP800-92.pdf 7. https://www.pcisecuritystandards.org/security_standards/ documents.php 8. http://www.w3.org/TR/WD-logfile.html 9. http://arctecgroup.net/pdf/howtoapplogging.pdf 10. http://taosecurity.blogspot.co.uk/2009/08/build-visibility-in. html 11. http://www.arcsight.com/solutions/solutions-cef/ 12. http://www.clerkendweller.com/2010/8/17/Application-Security-Logging 13. http://msdn.microsoft.com/en-us/library/windows/desktop/ bb986747(v=vs.85).aspx 14. http://www.symantec.com/connect/articles/building-secure-applications-consistent-logging 90 13. 
.NET Security Cheat Sheet Last revision (mm/dd/yy): 03/29/2015 13.1. Introduction This page intends to provide quick basic .NET security tips for developers. 13.1.1. The .NET Framework The .NET Framework is Microsoft’s principal platform for enterprise development. It is the supporting API for ASP.NET, Windows Desktop applications, Windows Com- munication Foundation services, SharePoint, Visual Studio Tools for Office and other technologies. 13.1.2. Updating the Framework The .NET Framework is kept up-to-date by Microsoft with the Windows Update ser- vice. Developers do not normally need to run seperate updates to the Framework. Windows update can be accessed at Windows Update [2] or from the Windows Update program on a Windows computer. Individual frameworks can be kept up to date using NuGet [3]. As Visual Studio prompts for updates, build it into your lifecycle. Remember that third party libraries have to be updated separately and not all of them use Nuget. ELMAH for instance, requires a separate update effort. 13.2. .NET Framework Guidance The .NET Framework is the set of APIs that support an advanced type system, data, graphics, network, file handling and most of the rest of what is needed to write enterprise apps in the Microsoft ecosystem. It is a nearly ubiquitous library that is strong named and versioned at the assembly level. 13.2.1. Data Access • Use Parameterized SQL [4] commands for all data access, without exception. • Do not use SqlCommand [5] with a string parameter made up of a concatenated SQL String [6]. • Whitelist allowable values coming from the user. Use enums, TryParse [7] or lookup values to assure that the data coming from the user is as expected. – Enums are still vulnerable to unexpected values because .NET only val- idates a successful cast to the underlying data type, integer by default. Enum.IsDefined [25] can validate whether the input value is valid within the list of defined constants. 91 13. .NET Security Cheat Sheet • Apply the principle of least privilege when setting up the Database User in your database of choice. The database user should only be able to access items that make sense for the use case. • Use of the Entity Framework [8] is a very effective SQL injection [9] prevention mechanism. Remember that building your own ad hoc queries in EF is just as susceptible to SQLi as a plain SQL query. • When using SQL Server, prefer integrated authentication over SQL authentica- tion. 13.2.2. Encryption • Never, ever write your own encryption. • Use the Windows Data Protection API (DPAPI) [10] for secure local storage of sensitive data. • The standard .NET framework libraries only offer unauthenticated encryption implementations. Authenticated encryption modes such as AES-GCM based on the underlying newer, more modern Cryptography API: Next Generation are available via the CLRSecurity library [11]. • Use a strong hash algorithm. – In .NET 4.5 the strongest algorithm for password hashing is PBKDF2, im- plemented as System.Security.Cryptography.Rfc2898DeriveBytes [12]. – In .NET 4.5 the strongest hashing algorithm for general hashing require- ments is System.Security.Cryptography.SHA512 [13]. – When using a hashing function to hash non-unique inputs such as pass- words, use a salt value added to the original value before hashing. • Make sure your application or protocol can easily support a future change of cryptographic algorithms. • Use Nuget to keep all of your packages up to date. 
Watch the updates on your development setup, and plan updates to your applications accordingly. 13.2.3. General • Always check the MD5 hashes of the .NET Framework assemblies to prevent the possibility of rootkits in the framework. Altered assemblies are possible and simple to produce. Checking the MD5 hashes will prevent using altered assemblies on a server or client machine. See [14]. • Lock down the config file. – Remove all aspects of configuration that are not in use. – Encrypt sensitive parts of the web.config using aspnet_regiis -pe 13.3. ASP.NET Web Forms Guidance ASP.NET Web Forms is the original browser-based application development API for the .NET framework, and is still the most common enterprise platform for web appli- cation development. • Always use HTTPS [15]. 92 13. .NET Security Cheat Sheet • Use the ASP.NET Membership provider and role provider, but review the pass- word storage. The default storage hashes the password with a single iteration of SHA-1 which is rather weak. The ASP.NET MVC4 template uses ASP.NET Iden- tity [24] instead of ASP.NET Membership, and ASP.NET Identity uses PBKDF2 by default which is better. Review the OWASP Password Storage Cheat Sheet on page 98 for more information. • Explicitly authorize resource requests. • Leverage role based authorization using User.Identity.IsInRole. 13.4. ASP.NET MVC Guidance ASP.NET MVC (Model-View-Controller) is a contemporary web application framework that uses more standardized HTTP communication than the Web Forms postback model. • Always use HTTPS. • Use the Synchronizer token pattern. In Web Forms, this is handled by View- State, but in MVC you need to use ValidateAntiForgeryToken. • Remove the version header. MvcHandler . DisableMvcResponseHeader = true ; • Also remove the Server header. HttpContext . Current .Response . Headers .Remove ( " Server " ) ; • Decorate controller methods using PrincipalPermission to prevent unrestricted URL access. • Make use of IsLocalUrl() in logon methods. i f ( MembershipService . ValidateUser (model .UserName, model . Password ) ) { FormsService . SignIn (model .UserName, model .RememberMe) ; i f ( IsLocalUrl ( returnUrl ) ) { return Redirect ( returnUrl ) ; } e lse { return RedirectToAction ( " Index " , "Home" ) ; } } • Use the AntiForgeryToken on every form post to prevent CSRF attacks. In the HTML: <% using (Html .Form( "Form" , "Update " ) ) { %> <%= Html . AntiForgeryToken ( ) %> <% } %> and on the controller method: [ ValidateAntiForgeryToken ] public ViewResult Update ( ) { // gimmee da codez } • Maintain security testing and analysis on Web API services. They are hidden inside MEV sites, and are public parts of a site that will be found by an attacker. All of the MVC guidance and much of the WCF guidance applies to the Web API. 95 13. .NET Security Cheat Sheet 13.5. XAML Guidance • Work within the constraints of Internet Zone security for your application. • Use ClickOnce deployment. For enhanced permissions, use permission eleva- tion at runtime or trusted application deployment at install time. 13.6. Windows Forms Guidance • Use partial trust when possible. Partially trusted Windows applications reduce the attack surface of an application. Manage a list of what permissions your app must use, and what it may use, and then make the request for those per- missions declaratively at run time. • Use ClickOnce deployment. For enhanced permissions, use permission eleva- tion at runtime or trusted application deployment at install time. 13.7. 
WCF Guidance • Keep in mind that the only safe way to pass a request in RESTful services is via HTTP POST, with TLS enabled. GETs are visible in the querystring, and a lack of TLS means the body can be intercepted. • Avoid BasicHttpBinding. It has no default security configuration. • Use WSHttpBinding instead. Use at least two security modes for your bind- ing. Message security includes security provisions in the headers. Transport security means use of SSL. TransportWithMessageCredential combines the two. • Test your WCF implementation with a fuzzer like the Zed Attack Proxy. 13.8. Authors and Primary Editors • Bill Sempf - bill.sempf(at)owasp.org • Troy Hunt - troyhunt(at)hotmail.com • Jeremy Long - jeremy.long(at)owasp.org 13.9. References 1. https://www.owasp.org/index.php/.NET_Security_Cheat_Sheet 2. http://windowsupdate.microsoft.com/ 3. http://nuget.codeplex.com/wikipage?title=Getting% 20Started&referringTitle=Home 4. http://msdn.microsoft.com/en-us/library/ms175528(v=sql.105).aspx 5. http://msdn.microsoft.com/en-us/library/system.data.sqlclient. sqlcommand.aspx 6. http://msdn.microsoft.com/en-us/library/ms182310.aspx 7. http://msdn.microsoft.com/en-us/library/f02979c7.aspx 96 13. .NET Security Cheat Sheet 8. http://msdn.microsoft.com/en-us/data/ef.aspx 9. http://msdn.microsoft.com/en-us/library/ms161953(v=sql.105).aspx 10. http://msdn.microsoft.com/en-us/library/ms995355.aspx 11. https://clrsecurity.codeplex.com/ 12. http://msdn.microsoft.com/en-us/library/system.security. cryptography.rfc2898derivebytes(v=vs.110).aspx 13. http://msdn.microsoft.com/en-us/library/system.security. cryptography.sha512.aspx 14. https://www.owasp.org/index.php/File:Presentation_-_.NET_ Framework_Rootkits_-_Backdoors_Inside_Your_Framework.ppt 15. http://support.microsoft.com/kb/324069 16. http://msdn.microsoft.com/en-us/library/system.web. configuration.httpcookiessection.requiressl.aspx 17. http://msdn.microsoft.com/en-us/library/system.web. configuration.httpcookiessection.httponlycookies.aspx 18. http://msdn.microsoft.com/en-us/library/h0hfz6fc(v=VS.71).aspx 19. http://www.iis.net/configreference/system.webserver/tracing 20. http://msdn.microsoft.com/en-us/library/ms972969.aspx# securitybarriers_topic2 21. http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security 22. http://www.asp.net/whitepapers/request-validation 23. http://msdn.microsoft.com/en-us/library/system.uri. iswellformeduristring.aspx 24. http://www.asp.net/identity/overview/getting-started/ introduction-to-aspnet-identity 25. https://msdn.microsoft.com/en-us/library/system.enum.isdefined 97