When we met, we both worked at the same Fortune 500 security company. Unknown faces to each other in a large corporation, we reported to the same senior VP. That day, we both attended an all-day company technical exchange meeting. Chris was one of the project managers. His PowerPoint presentation contained the statement “Testers don’t do security testing.” It stopped me in my tracks.

As the QA manager responsible for all of the development groups in the room, and with my own QA presentation still looming, I saw red flags flying high. “But we are indeed conducting security testing!” I thought. “Who is this guy? I need to tell him about the type of security testing we do,” I grumbled inwardly as his presentation continued.

Was he referring to a recent security bug that slipped through into production? No, I didn’t recall any bugs slipping through at all. My mind was racing. “He must be referring to those ‘other’ testing groups in those ‘other’ companies,” I pacified myself.

After his presentation ended and the applause had subsided, I stood in line to talk to Chris and get clarification about the little bomb he’d dropped. I started to set him straight, telling him that we QA/testers were indeed doing just that, and explaining the type of security testing we did. I was proud of the memory leak detection, SQL injection and user roles/permissions tests we ran. Lately, we had also been testing for cross-site scripting and other emerging security threats.

With a slightly condescending smile, Chris countered that the security testing we were doing was just scratching the surface, and intoned, “A full life-cycle approach is the only way to achieve secure software.”

And that’s how I met Chris, the co-author (with Lucas Nelson and Dino Dai Zovi) of my latest book, “The Art of Software Security Testing” (Symantec Press, 2006). Yes, now I’m a convert: The full life-cycle approach to security testing is what Chris and I preach today.

It turns out that Chris Wysopal is one of the most recognized security researchers in the country. He’s now co-founder and CTO of Veracode, where he’s responsible for the software security analysis capabilities of Veracode’s technology.

Unfortunately, Chris is correct: Testers don’t do effective security testing. Many testers today still neglect basic security testing techniques, such as fuzzing, and companies still farm out security testing to expensive security researchers after the product has been released into production.
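
Fuzzing, in particular, requires no exotic tooling. Here is a minimal sketch of a naive random fuzzer in Python; the parse_record function and its input format are hypothetical stand-ins for whatever component you want to exercise:

    import random
    import string

    def parse_record(data: str) -> dict:
        """Hypothetical function under test; substitute your own parser."""
        name, _, age = data.partition(",")
        return {"name": name, "age": int(age)}

    def random_input(max_len: int = 64) -> str:
        # Mix printable characters with a few classic troublemakers.
        alphabet = string.printable + "\x00\xff'\"<>%"
        return "".join(random.choice(alphabet)
                       for _ in range(random.randint(0, max_len)))

    def fuzz(iterations: int = 10000) -> None:
        for i in range(iterations):
            data = random_input()
            try:
                parse_record(data)
            except ValueError:
                pass  # Graceful rejection of malformed input is fine.
            except Exception as exc:
                # Any *other* exception is a potential robustness/security bug.
                print(f"iteration {i}: {exc!r} on input {data!r}")

    if __name__ == "__main__":
        fuzz()

Even a crude loop like this regularly shakes out unhandled exceptions that a scripted, happy-path test suite never touches.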

Testers have abundant knowledge, techniques and technology at their disposal. For example, testers, more so than developers, get a view of the entire system. While a developer might focus on her component alone, testers get to test the system integration and often focus on end-to-end testing.

While a developer usually performs unit tests to determine whether a feature does what it’s supposed to do, testers also conduct “negative” testing: They probe how a feature behaves when it receives input it’s not supposed to receive.
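
To make the distinction concrete, here is a hedged sketch of a positive test next to its negative counterparts, written with pytest; the transfer_funds function is an invented example, not code from any real system:

    import pytest

    def transfer_funds(balance: float, amount: float) -> float:
        """Hypothetical business function: debit amount from balance."""
        if amount <= 0:
            raise ValueError("transfer amount must be positive")
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    def test_transfer_positive():
        # Positive test: the feature does what it is supposed to do.
        assert transfer_funds(100.0, 40.0) == 60.0

    @pytest.mark.parametrize("amount", [-50.0, 0.0, 1e12])
    def test_transfer_negative(amount):
        # Negative tests: inputs the feature must reject. A security
        # tester asks what happens with a negative amount, a classic
        # way to turn a debit into a credit.
        with pytest.raises(ValueError):
            transfer_funds(100.0, amount)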

Additionally, by redirecting testing tools and techniques toward an attacker’s perspective, backed up with threat modeling and secure development techniques, testers make a major contribution to product security. When I present the Secure Software Development Life Cycle at conferences, attendees love the concept, but complain about the additional time that security testing requires. Indeed, security testing does require more time, but security must parallel the software development life cycle to be effectively built into a product. The sooner a security defect is uncovered in the product life cycle, the cheaper it is to fix. Imagine the implications of a security breach from a bug that wasn’t uncovered before going live!

Adding Security Testing to the SDLC

In the Secure Software Development Lifecycle (SSDL), security issues are evaluated and addressed early in the system’s life cycle, during business analysis, throughout the requirements phase, and during the design and development of each software build. This early involvement allows the security team to provide a quality review of the security requirements specification, attack use cases and software design. The team will also gain a complete understanding of business needs and requirements, along with their associated risks. Finally, the team can design and architect the most appropriate system environment using secure development methods, threat modeling efforts and other techniques to generate a more secure design.

Early involvement is significant because the requirements and attack use cases form the foundation and reference point from which security requirements are defined, the system is designed and implemented, and test cases are written and executed, and by which success is measured. The security team also needs to review the system or application’s functional specification.

Once it’s been determined that a vulnerability has a high level of exploitability, the respective mitigation strategies must be evaluated and implemented.

In addition, a process must be in place that allows for deploying the application securely.

Secure deployment means that the software is installed with secure defaults: File permissions are set appropriately, and the application’s configuration settings default to secure values.
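
As one illustration, a deployment script can verify those defaults before go-live. The sketch below checks that a configuration file is not accessible to "other" users on a POSIX system; the path is an assumption for the example:

    import os
    import stat
    import sys

    # Hypothetical path; substitute your application's real config file.
    CONFIG_PATH = "/etc/myapp/app.conf"

    def is_world_accessible(path: str) -> bool:
        """Return True if 'other' users can read, write or execute the file."""
        mode = os.stat(path).st_mode
        return bool(mode & (stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH))

    if __name__ == "__main__":
        if is_world_accessible(CONFIG_PATH):
            print(f"insecure permissions on {CONFIG_PATH}; "
                  f"consider: chmod o-rwx {CONFIG_PATH}")
            sys.exit(1)
        print("configuration file permissions look sane")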

After the software has been deployed securely, its security must be maintained throughout its existence. An all-encompassing software patch management process needs to be in place. Emerging threats must be evaluated, and vulnerabilities prioritized and managed. Infrastructure security, such as firewall, DMZ and IDS management, is assumed to be in place. Backup/recoverability and availability plans must also be present. And the team’s roles and responsibilities in relation to security must be understood by all.

The SSDL outlines secure development processes. No matter how strong your firewall rule sets are, or how diligent your infrastructure patching mechanism is, if your Web application developers haven’t followed secure coding practices, attackers can walk right into your systems through port 80. To help make sure that won’t happen, let’s explore the phases of the SSDL.

Phase 1: Define Your Security Guidelines, Rules and Compliance Regulations

First, create a system-wide specification that defines the security requirements that apply to the system; it can be based on specific government regulations. One such regulation is the Sarbanes-Oxley Act of 2002, which contains specific security requirements. For example, Section 404 of SOX requires that various internal controls be in place to curtail fraud and abuse.

This can serve as a baseline for creating a company-wide security policy that covers this requirement. Role-based permission levels, access-level controls and password standards and controls are just some of the elements that need to be implemented and tested to meet the requirements of this specific SOX section.
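
As a hedged sketch, a test derived from that policy might assert that a low-privilege role can’t reach an admin-only function. The roles, the requires_role decorator and the close_quarter function below are invented for illustration, not part of any specific framework:

    from functools import wraps

    # Hypothetical role model; real systems would load this from their
    # access-control configuration.
    ROLE_RANK = {"clerk": 1, "auditor": 2, "admin": 3}

    def requires_role(minimum: str):
        """Decorator enforcing a minimum role on a business function."""
        def decorator(func):
            @wraps(func)
            def wrapper(user_role: str, *args, **kwargs):
                if ROLE_RANK[user_role] < ROLE_RANK[minimum]:
                    raise PermissionError(
                        f"{user_role} may not call {func.__name__}")
                return func(user_role, *args, **kwargs)
            return wrapper
        return decorator

    @requires_role("admin")
    def close_quarter(user_role: str) -> str:
        return "quarter closed"

    def test_clerk_cannot_close_quarter():
        # SOX-motivated negative test: a clerk must NOT close the books.
        try:
            close_quarter("clerk")
        except PermissionError:
            pass  # Expected: access denied.
        else:
            raise AssertionError("authorization control failed")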

The Open Web Application Security Project (OWASP) lists several security standards, such as ISO 17799, the International Standard for Information Security Management, a well-adopted and well-understood standard published by the International Organization for Standardization. However, it has rarely been applied specifically to the task of managing a secure Web site, even though information security management is unavoidable when you implement a secure Web application.

ISO 17799 does an excellent job of identifying policies and procedures you should consider, but it doesn’t explain how they should be implemented, nor does it give you the tools to implement them. It’s simply a guide to which policies and procedures deserve consideration; it doesn’t mandate that you implement them all.

OWASP recommends the Web Application Security Standards (WASS) project, which proposes a set of minimum requirements a Web application must exhibit if it processes credit card information. This project aims to develop specific, testable criteria that can stand alone or be integrated into existing security standards such as the Cardholder Information Security Program (CISP), which is vendor- and technology-neutral.

By testing against this standard, you should be able to determine that minimal security procedures and best practices have been followed in the development of a Web-based application.

Other company-wide security requirements could state, for example, “The system must comply with the HIPAA privacy and security regulations,” “The system will meet the FISMA standards,” “The system will be Basel II-compliant,” “The system must meet the Payment Card Industry Data Security Standard” or “We must abide by the Gramm-Leach-Bliley (Financial Modernization) Act,” to name just a few.

Phase 2: Document Security Requirements, Develop Attack Use Cases

A common mistake is to omit security requirements from any type of requirements documentation. Not only do security requirements aid in software design, implementation and test case development, they also can help determine technology choices and areas of risk.

The security engineer should insist that associated security requirements be described and documented along with each functional requirement. Each functional requirement description should contain a section that documents the specific security needs of that particular requirement that deviate from the system-wide security policy or specification.

Guidelines for requirement development and documentation must be defined at the project’s outset. In all but the smallest programs, careful analysis is required to ensure that the system is developed properly.

Attack use cases are one way to document security requirements. They can lead to more thorough secure system designs and test procedures. See the “Sample Security Requirements” sidebar for examples.

Defining a requirement’s specific quality measure helps reduce fuzzy requirements. For example, everyone would agree with a statement such as “The system must be highly secure,” but each person may have a different interpretation of what that means. Security requirements don’t endow the system with specific functions. Instead, they constrain or further define how the system handles its functions, including behavior that shouldn’t be allowed.

This is where the analysts should look at the system from an attacker’s point of view. Attack use cases can be developed that show behavioral flows that aren’t allowed or are unauthorized. They can help you understand and analyze security implications of pre- and post-conditions. “Includes” relationships can illustrate many protection mechanisms, such as the logon process. “Extends” relationships can illustrate many detection mechanisms, such as audit logging.

Attack use cases list ways in which the system could possibly be attacked.

Security defect prevention involves the use of techniques and processes that can help detect and avoid security errors before they propagate to later development phases. Defect prevention is most effective during the requirements phase, when the impact of a change required to fix a defect is low. If security is in everyone’s minds from the beginning of the development life cycle, they can help recognize omissions, discrepancies, ambiguities and other problems that may affect the project’s security.

Requirements traceability ensures that each security requirement is identified in such a way that it can be associated with all parts of the system where it is used. For any change to requirements, is it possible to identify all parts of the system where this change has an effect?

Traceability also lets you collect information about individual requirements and other parts of the system that could be affected by requirement changes, such as designs, code or tests. When informed of requirement changes, security testers should make sure that all affected areas are adjusted accordingly.
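
A traceability matrix doesn’t have to live in a heavyweight tool. As a hedged sketch, even a simple mapping from requirement IDs to the design elements, code modules and tests they touch makes impact analysis possible; the IDs and file names below are invented for illustration:

    # Hypothetical traceability matrix: requirement -> affected artifacts.
    TRACE = {
        "SEC-001 (encrypt sensitive data at rest)": {
            "design": ["storage-layer design doc, section 4"],
            "code": ["crypto/store.py"],
            "tests": ["test_storage_encryption.py"],
        },
        "SEC-002 (lock account after failed logins)": {
            "design": ["authentication flow diagram"],
            "code": ["auth/login.py"],
            "tests": ["test_login_lockout.py"],
        },
    }

    def impact_of(requirement: str) -> None:
        """List everything a change to this requirement may affect."""
        for kind, items in TRACE[requirement].items():
            for item in items:
                print(f"{requirement}: re-review {kind} -> {item}")

    if __name__ == "__main__":
        impact_of("SEC-002 (lock account after failed logins)")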

Phase 3: Perform Architectural and Design Reviews; Identify and Define Threat Models

Security practitioners need a solid understanding of the product’s architecture and design so that they can devise better and more complete security strategies, plans, designs, procedures and techniques. Early security team involvement can prevent insecure architectures and low-security designs, as well as help eliminate confusion about the application’s behavior later in the project life cycle. In addition, early involvement allows the security expert to learn which aspects of the application are the most critical and which are the highest-risk elements from a security perspective.

This knowledge enables security practitioners to focus on the most important parts of the application first and helps testers avoid over-testing low-risk areas and under-testing the high-risk ones.

Threat modeling identifies and defines the various methods of attack, so you can find security problems early, before they’re coded into the product. It helps determine the “highest-risk” parts of the application: those that need the most scrutiny throughout the software development effort. Another valuable benefit of the threat model is the sense of completeness it gives you. Saying “Every data input is contained within this drawing” is a powerful statement that can’t be made at any other point.
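
A lightweight way to capture the model’s output is a structured list of threats, each tied to an entry point and ranked by risk. The sketch below uses the STRIDE categories for illustration; the threats themselves are invented examples:

    from dataclasses import dataclass

    @dataclass
    class Threat:
        entry_point: str  # every data input should appear in the model
        category: str     # STRIDE: Spoofing, Tampering, Repudiation,
                          # Information disclosure, Denial of service,
                          # Elevation of privilege
        description: str
        likelihood: int   # 1 (low) .. 3 (high)
        impact: int       # 1 (low) .. 3 (high)

        @property
        def risk(self) -> int:
            return self.likelihood * self.impact

    # Invented threats for a hypothetical Web application.
    threats = [
        Threat("login form", "Spoofing", "credential stuffing", 3, 3),
        Threat("search box", "Tampering", "SQL injection via query", 2, 3),
        Threat("report export", "Information disclosure",
               "record IDs leak in URLs", 2, 2),
    ]

    # Highest-risk parts of the application get scrutinized first.
    for t in sorted(threats, key=lambda t: t.risk, reverse=True):
        print(f"risk {t.risk}: [{t.category}] {t.entry_point}: {t.description}")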

Phase 4: Use Secure Coding Guidelines; Differentiate Between Design and Implementation Vulnerabilities

To keep vulnerabilities out of software, the developer must understand how they sneak into programs in the first place and must be able to differentiate design vulnerabilities from implementation vulnerabilities.

A design vulnerability is a flaw in the design that precludes the program from operating securely no matter how perfectly it’s implemented by the coders. Implementation vulnerabilities are caused by security bugs in the actual coding of the software.
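
The distinction is easiest to see in code. Below is a sketch of a classic implementation vulnerability using Python’s standard sqlite3 module: the first query builds SQL from raw user input, while the fix binds the input as a parameter (the users table is hypothetical):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "x' OR '1'='1"  # attacker-controlled value

    # VULNERABLE: user input is concatenated into the SQL statement,
    # so the OR clause returns every row in the table.
    rows = conn.execute(
        f"SELECT secret FROM users WHERE name = '{user_input}'"
    ).fetchall()
    print("injected query returned:", rows)

    # FIXED: the value is bound as a parameter, never parsed as SQL.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print("parameterized query returned:", rows)

An implementation bug like this is locally fixable; a design vulnerability, by contrast, survives even a flawless implementation and requires rethinking the design itself.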

Static analysis tools can detect many implementation errors by scanning the source code or the binary executable. These tools are useful in finding issues such as buffer overflows, and their output can help developers learn to prevent the errors in the first place.

You can also send your code to a third party for defect analysis to validate your code security for compliance reasons or customer requirements. Because this usually doesn’t take place until the code has already been developed, initial standards should be devised and followed. The third party can then verify adherence and uncover other security issues.

Phase 5: The Colorful World of Black-, White- and Gray-Box Testing

Black-, white- and gray-box testing refer to the perspective of the tester when designing test cases—black from outside with no visibility into the application under test, white from inside with total source code visibility, and gray with access to source code and the ability to seed target data and build specific tests for specific results.

The test environment setup is an essential part of security test planning. Setup activities must be planned, tracked and managed, since material procurements may have long lead times. The test team must schedule and track environment setup activities; install test environment hardware, software and network resources; integrate and install test environment resources; obtain and refine test databases; and develop environment setup scripts and test bed scripts.

Additionally, setup includes developing security test scripts based on the attack use cases described in SSDL Phase 2, then executing and refining them. It also means conducting evaluation activities to avoid false positives and false negatives; documenting security problems via system problem reports; helping developers understand and replicate each issue; performing regression and other tests; and tracking problems to closure.
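
As a hedged illustration, one such script might automate a reflected cross-site scripting attack use case: submit a script payload and verify that the response doesn’t echo it back unescaped. The target URL and parameter name below are placeholders, and the sketch uses the third-party requests library:

    import requests  # third-party; pip install requests

    # Placeholder target; point this at the application under test.
    TARGET = "https://staging.example.com/search"
    PAYLOAD = "<script>alert('xss')</script>"

    def test_reflected_xss():
        resp = requests.get(TARGET, params={"q": PAYLOAD}, timeout=10)
        resp.raise_for_status()
        # If the raw payload comes back unescaped, the page is injectable.
        assert PAYLOAD not in resp.text, \
            "payload reflected unescaped: possible XSS"

    if __name__ == "__main__":
        test_reflected_xss()
        print("no unescaped reflection detected for this payload")

A passing run proves only that this particular payload was handled; real scripts iterate over a corpus of payloads and injection points.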

Phase 6: Determine the Exploitability of Your Vulnerabilities

Ideally, every vulnerability discovered in the testing phase of the SSDL can be easily fixed. But depending on the cause, whether a design or implementation error, the effort required to address it can vary widely.

A vulnerability’s exploitability is an important factor in gauging the risk it presents. You can use this information to prioritize the vulnerability’s remediation among other development requirements, such as implementing new features and addressing other security concerns.

Determining a vulnerability’s exploitability involves weighing five factors (a scoring sketch follows the list):

  • The access or positioning required by the attacker to attempt exploitation
  • The level of access or privilege yielded by successful exploitation
  • The time or work factor required to exploit the vulnerability
  • The exploit’s potential reliability
  • The repeatability of exploit attempts
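
These factors can be folded into a crude score for triage. The 1-to-3 scales and the straight sum below are invented for illustration; a real program would calibrate its own weights or adopt a published scheme such as CVSS:

    # Invented 1-3 scale per factor; higher means easier to exploit.
    FACTORS = ("access", "privilege_gained", "work_factor",
               "reliability", "repeatability")

    def exploitability(scores: dict) -> int:
        """Sum the five factor scores into a single triage number (5..15)."""
        return sum(scores[f] for f in FACTORS)

    vulns = {
        "SQL injection in search": dict(access=3, privilege_gained=3,
                                        work_factor=3, reliability=3,
                                        repeatability=3),
        "local race condition": dict(access=1, privilege_gained=2,
                                     work_factor=1, reliability=1,
                                     repeatability=1),
    }

    # Fix the most exploitable issues first.
    for name, s in sorted(vulns.items(),
                          key=lambda kv: exploitability(kv[1]),
                          reverse=True):
        print(f"{exploitability(s):>2}  {name}")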

Once exploitability has been determined, the risks posed by the vulnerabilities are compared against one another and prioritized; then, based on risk and priority, the vulnerabilities are addressed alongside other development tasks (such as new features). This is also the phase in which you can manage externally reported vulnerabilities.

Exploitability needs to be regularly reevaluated because it always gets easier over time: Crypto gets weaker, people figure out new techniques and so on.

Once the six phases of the Secure Software Development Lifecycle have been carried out, the secure defaults have been set and understood, and testers have verified the settings, the application is ready to be deployed.

SIDEBAR: SAMPLE SECURITY REQUIREMENTS

There are many examples of security requirements; a few are listed here. A tester who relies solely on requirements for testing, and who would otherwise miss security testing entirely, is now armed with this set of security requirements. From here, you can start developing the security test cases; a sketch of one such test follows the requirements below.

The application stores sensitive user information that must be protected for HIPAA compliance. Therefore, strong encryption must be used to protect all sensitive user information wherever it is stored.

The application transmits sensitive user information across potentially untrusted or unsecured networks. To protect the data, communication channels must be encrypted to prevent snooping, and mutual cryptographic authentication must be employed to prevent man-in-the-middle attacks.

The application sends private data over the network. Therefore, communication encryption is a requirement.

The application must remain available to legitimate users. Resource utilization by remote users must be monitored and limited to prevent or mitigate denial-of-service attacks.

The application supports multiple users with different levels of privilege. The application assigns each user to an appropriate privilege level and defines the actions each privilege level is authorized to perform. The various levels need to be defined and tested. Mitigations for authorization bypass attacks need to be defined.

The application takes user input and uses SQL. SQL injection mitigations are a requirement.
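
For instance, the transport-encryption requirement above translates almost directly into an automated check: The application must never serve content over plain HTTP. The hostname below is a placeholder, and the sketch uses the third-party requests library:

    import requests  # third-party; pip install requests

    # Placeholder host; substitute the application under test.
    HOST = "app.example.com"

    def test_http_is_not_served_in_the_clear():
        try:
            resp = requests.get(f"http://{HOST}/login",
                                allow_redirects=False, timeout=10)
        except requests.ConnectionError:
            return  # No HTTP listener at all: also acceptable.
        assert resp.status_code in (301, 302, 307, 308), \
            "application served content over plain HTTP"
        assert resp.headers.get("Location", "").startswith("https://"), \
            "HTTP redirect does not land on HTTPS"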

SIDEBAR: SECURITY PRACTICES TO APPLY THROUGHOUT THE SSDL

* Define security/software development roles and responsibilities.

* Understand the security regulations your system must abide by, as applicable.

* Request a security policy if none exists.

* Document security requirements and/or attack use cases.

* Develop and execute test cases for adherence to umbrella security regulations, if applicable. Develop and execute test cases for the security requirements/attack use cases described in this article.

* Request secure coding guidelines and train software developers and testers on them.

* Test for adherence to secure coding practices.

* Participate in threat modeling walkthroughs and prioritize security tests.

* Understand and practice secure deployment practices.

* Maintain a secure system by having a patch management process in place, including evaluating exploitability.

SIDEBAR: ROLES AND RESPONSIBILITIES

It’s often unclear whose responsibility security really is. Is it the sole domain of the infrastructure group that sets up and monitors the networks? Is it the architect’s responsibility to design security into the software? For effective security testing to take place, roles and responsibilities must be clarified. In the SSDL, security is the responsibility of many. It’s a mistake to rely on the infrastructure or network group to simply set up the IDSes and firewalls and run a few network tools for security. Instead, roles and responsibilities must be defined so that everyone understands who’s testing what, and so that an application testing team, for example, doesn’t assume that a network testing tool will also catch application vulnerabilities.

The program or product manager should write the security policies, basing them on the standards dictated by applicable regulations or on the best security practices discussed here. The product or project manager is also responsible for handling security certification processes if no dedicated security role exists.

Architects and developers are responsible for providing design and implementation details, determining and investigating threats, and performing code reviews. QA/testers drive critical analyses of the system, take part in threat-modeling efforts, determine and investigate threats, and build white- and black-box tests. Program managers manage the schedule and own individual documents and dates. Security process managers can oversee threat modeling, security assessments and secure coding training.


About the Author

Elfriede Dustin: I am a technical director at IDT (www.idtus.com), where I have worked since 2006. IDT is a leader in providing automated software testing solutions, called Automated Test and Re-Test (ATRT), to the DoD and its contractors. Software development is still an art, and that makes automated software testing a special challenge; at IDT, we strive to meet that challenge by producing an extensible automated testing framework.

I have a B.S. in Computer Science and more than 20 years of IT experience implementing effective testing strategies on both government and commercial programs. Among other efforts, I implemented automated testing methodologies and strategies as an internal SQA consultant at Symantec, served as assistant director for integrated testing on the IRS modernization effort, built test teams as QA director for BNA Software, and was QA manager for the Coast Guard MOISE program.

I am the author or co-author of six books on software testing: I wrote “Effective Software Testing,” was lead author of “Automated Software Testing” and “Quality Web Systems,” and co-authored “The Art of Software Security Testing” with Chris Wysopal, Lucas Nelson and Dino Dai Zovi (Symantec Press, November 2006). A full list of my books is available at http://www.amazon.com/author/elfriededustin.

My goal is to continue to advance automated software testing and to contribute to solving its challenges. For questions about automated software testing, ATRT or any of these books, contact me at edustin@idtus.com, follow me on Twitter @ElfriedeDustin or find me on LinkedIn.