By integrating security assurance into the software lifecycle, you can improve product quality—and avert disaster!

At the peak of the dot-com boom, my firm consulted extensively for large organizations concerned about the risks of Internet-based applications. My experience with one client in particular—we’ll call it Company X to protect its identity—remains the most palpable argument for integrating security throughout the software lifecycle.

Company X contracted with us to assess the security of its new consumer electronics retail portal. It had completed application development and the product had passed all critical QA tests. Our security assessment was part of a final “sanity check” prior to going live.

But we discovered a major security flaw within hours of starting the assessment. The application was passing critical data values, such as userIDs, from page to page using hidden form fields. An interloper could assume any Company X customer identity without authentication simply by changing the values in these fields and reposting a form. Worse, the consumer e-mail address was the user-identity key data element, making it child’s play to access a specific individual’s account.
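
To make the flaw concrete, here’s a minimal sketch (in Python, with the Web framework abstracted into plain dictionaries; all names are hypothetical) contrasting the broken pattern with the standard remedy: user identity must come from state the server controls, never from fields the client can edit.

  # Hypothetical sketch of the Company X flaw. "form" stands in for an
  # HTTP POST body; SESSIONS for server-side session state keyed by an
  # unguessable cookie value.
  SESSIONS = {"session-abc123": {"user_id": "alice@example.com"}}

  def load_account(user_id):
      # Placeholder for a real data-tier lookup.
      return {"user_id": user_id, "orders": []}

  def get_account_insecure(form):
      # BROKEN: trusts a hidden form field. An attacker can repost the
      # form with a victim's e-mail address and assume that identity.
      return load_account(form["userID"])

  def get_account_secure(session_cookie):
      # FIXED: identity comes from the server-side session, which the
      # client cannot tamper with.
      session = SESSIONS.get(session_cookie)
      if session is None:
          raise PermissionError("not authenticated")
      return load_account(session["user_id"])

  # The attack: nothing stops a client from rewriting the hidden field.
  print(get_account_insecure({"userID": "victim@example.com"}))
  # The fix: identity is bound to a server-issued session cookie.
  print(get_account_secure("session-abc123"))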

Preventable Losses

To security practitioners, the great irony is that such incidents are usually simple to prevent, yet they continue to grow in number and scope. As I write this article, RockYou.com faces its first class-action lawsuit stemming from the compromise of 32 million usernames and passwords [1]. The attacker used a SQL injection attack to gain access to user records, including passwords stored in an unencrypted format.
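
The RockYou compromise combined two preventable mistakes: SQL built by string concatenation and passwords stored in the clear. Here’s a minimal sketch of the standard countermeasures, using only Python’s standard library (sqlite3 stands in for whatever database is actually in play):

  import hashlib
  import hmac
  import os
  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE users (name TEXT, salt BLOB, pw_hash BLOB)")

  def store_user(name, password):
      # Never store plaintext: salt and hash the password instead.
      salt = os.urandom(16)
      pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
      # Parameterized query: the driver treats the values as inert data,
      # so input like "x'; DROP TABLE users;--" cannot alter the query.
      db.execute("INSERT INTO users VALUES (?, ?, ?)", (name, salt, pw_hash))

  def check_password(name, password):
      row = db.execute("SELECT salt, pw_hash FROM users WHERE name = ?",
                       (name,)).fetchone()
      if row is None:
          return False
      salt, expected = row
      candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
      # Constant-time comparison avoids leaking information via timing.
      return hmac.compare_digest(candidate, expected)

  store_user("alice", "correct horse battery staple")
  print(check_password("alice", "correct horse battery staple"))  # True
  print(check_password("alice", "x'; DROP TABLE users;--"))       # False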

These sorts of vulnerabilities can be easily identified, and disaster averted, if security quality assurance is effectively built into the software quality lifecycle. Indeed, an unfortunately frequent and harmful mistake is relegating application security to the status of “afterthought.” Failing to integrate security throughout the lifecycle is a top factor contributing to the identity thefts, site defacements and fraud we read about almost daily. Any important business function needs an owner: a team that takes responsibility for setting goals and managing activities to achieve them. But ownership of application security varies wildly across organizations; sometimes there’s no single responsible owner at all.

While software performance and resilience are core components of most software quality assurance programs, ownership of security remains scattered. The result: software security tasks are fragmented and rudderless, and disconnects between security requirements and testing assurances are almost certain. In many enterprises, the testers never see security test cases and often remain in the dark about who’s responsible for security in any capacity.

Application security is crucial to brand protection, loss prevention, regulatory compliance and customer satisfaction. Understanding the components of an application security assurance program helps us assess and improve security integration into the lifecycle. As you read the following sections, consider how your organization’s software quality program could be improved through closer alignment of security needs.

But be patient: Don’t try to integrate or even plan everything in one day. Weaving security into your software quality assurance programs will take effort and cooperation, and the investment will pay dividends in time.

Defining Your Security Requirements

Requirements for Web application security are usually easy to define and sort into categories. They’re generally driven by multiple organizations, each of which should be engaged early in the software requirements analysis process to ensure its needs are addressed.

Compliance Requirements
Major functional requirements and development standards are driven by regulatory and internal compliance demands. Prime examples include the Payment Card Industry Data Security Standard (PCI DSS) for organizations that handle credit card data and Sarbanes-Oxley regulations for certain financial management and reporting environments. Although detailed, these standards more often than not necessitate interpretation and distillation into system-specific functional requirements.

Compliance, especially with multiple standards, is an extensive topic that demands domain-specific expertise. Your best approach to ensure compliance needs are met is to review targeted application functionality with the organization’s chief compliance officer, audit team or legal department.

Organizational Security Standards
If your organization has an information security department, it’s likely that standards exist for security issues related to application architecture, data handling and coding practices. Examples of these standards include ISO/IEC 27002 and policies based on the Federal Information Security Management Act of 2002 (FISMA).

Requesting these standards from your organization’s information security officer and reviewing the proposed application functionality against them is usually an effective approach. In most cases, only part of overall security standards will apply specifically to software development and functional requirements.

Such standards are often closely related to compliance requirements, so it may make sense to combine the reviews of compliance and information security standards into a single workshop or discussion.

Functional Security Requirements
Defining requirements for an application’s security functionality involves combining proven effective but generic practices with product-specific needs.

Some common security functionality for Web applications includes user authentication, session management, role-based access control and sanitizing user input. You can get comprehensive documentation for common Web application security controls from the Open Web Application Security Project (http://www.owasp.org). Defining a baseline set of security functions is a useful first step to defining application-specific security functionality.
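
As a small illustration of the last of these (sanitizing user input), here’s a minimal, hypothetical allowlist validator; real applications would pair this with output encoding and their framework’s built-in controls:

  import re

  # Allowlist validation: accept only what a field should contain,
  # rather than trying to enumerate every dangerous character.
  USERNAME_RE = re.compile(r"[A-Za-z0-9_.-]{3,32}")

  def validate_username(value):
      if not USERNAME_RE.fullmatch(value):
          raise ValueError("invalid username")
      return value

  print(validate_username("alice_99"))               # accepted
  # validate_username("<script>alert(1)</script>")   # raises ValueError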

In addition to these basic best practices, the application’s purpose, audience and other characteristics may drive functional security requirements. For example, an online banking application will have specific requirements around multifactor user authentication imposed by the Federal Financial Institutions Examination Council (http://www.ffiec.gov). This functionality exceeds the single-factor user authentication commonly seen in less stringently controlled industries.

Probably the most convenient approach to nailing down the appropriate level of functional application requirements is to use a set of known practices (industry-specific as appropriate) as a jumping-off point to discuss additions, omissions and changes with the product management team. Combining your organization’s core security and compliance requirements with the OWASP technical practices often provides a strong starting point.

Requirements Signoff
If all parties have participated effectively, the application quality team should now be armed with a well-defined set of security quality requirements. These should include a proposed set of functional security requirements supported by coding and architectural standards.

As with any collaborative requirements definition process, the key stakeholders must sign off on the final requirements. The best practice here is simply to do what already works for your organization. For example, if the usual software requirements-approval process involves distributing the requirements, followed by a final sign-off workshop with all stakeholders, do that. If product management handles all approvals, stick to that approach.

Leveraging existing processes will ensure smooth security requirements definition and will garner participation—both key to successful implementation of any process change.

Designing Your Application Security

In most application development processes, requirements are handed off to technology development teams that design the overall architecture and functionality to address functional use cases. Security and compliance design requirements should be assigned technical owners, and design workstreams should be established to address them.

Application security design activities and artifacts typically fall into a few broad categories similar to those encountered in general application development activities. The collective deliverables compose a reasonably complete application security design appropriate for review, approval and development execution.

Application Security Architecture

Many parts of a Web application will require major security functions, such as authentication, authorization and data-privilege management. Often, these functions are closely related.

Developing an overall architecture of components delivering security services ensures consistent protection, faster unit-level development and easier management of changing requirements.

If security architecture is relatively new to your organization, it may help to develop a best-case generic architecture before diving into application-specific or vendor-specific technologies. For example, designing a generic Web access manager (WAM) architecture to address user access control can prevent constraints that could result from diving directly into a specific WAM product or technology.

Here are some examples of common Web application security architecture elements:

  • Web Access Management: Provides user authentication, functional access management, session management and in some cases user provisioning.
  • Accounting and Auditing: Offers unified security event capture, secure transmission and storage, integration with enterprise security event and information management solutions.
  • Web Interface Security: Addresses prevention of purposeful manipulation of user interfaces and related security elements (e.g., cross-site scripting or SQL injection) through the use of simple best-coding practices, unified request-handling logic or application-layer security gateways.
  • Application Intrusion Detection/Prevention: Leverages information from other elements of the application security architecture to detect and possibly prevent attempts to subvert application functionality.

Exhaustive educational materials on generic security architecture are readily available from numerous resources—again, refer to the OWASP site for guidance.

Functional Design

It’s usually wise to weave application security functionality into general functional application design. In agile, waterfall and most other development approaches, the concept of the use case or user story is virtually universal. This is often the correct workstream to address functional security design.

As functional requirements are translated into contextually specific use cases, be sure security requirements appear naturally in each context. For example, an application-wide security standard might dictate: “Each user-level content record will be accessible only by the creating user.” This might apply to multiple functional contexts, including the user, content and data-model contexts. The requirement’s applicability to the user context should prompt the designer to consider needed functionality while working on user-related application functionality.

Extending this example to hands-on development against a functional standard, a developer creating data models and screens to handle user content records might add an itemOwnerID column to a database to be populated upon record creation and used later for access control. Review of the requirement for owner-only record access should prompt this or a similar design choice.
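
Here’s a minimal sketch of that design choice, with a hypothetical schema that mirrors the itemOwnerID example; enforcing ownership in the query itself means a record belonging to someone else simply never comes back:

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE content (itemID INTEGER PRIMARY KEY, "
             "itemOwnerID TEXT, body TEXT)")
  db.execute("INSERT INTO content VALUES (1, 'alice', 'private note')")

  def get_record(item_id, session_user):
      # The "owner-only" standard is enforced at the data tier, not
      # just hidden in the user interface.
      row = db.execute(
          "SELECT body FROM content WHERE itemID = ? AND itemOwnerID = ?",
          (item_id, session_user)).fetchone()
      if row is None:
          raise PermissionError("no such record for this user")
      return row[0]

  print(get_record(1, "alice"))   # returns the note
  # get_record(1, "mallory")      # raises PermissionError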

Design Review and Signoff
Security architecture and functional designs require review and approval before development begins, to ensure all major areas have been addressed. At a minimum, qualified application security experts should assess any general or security-specific design decisions that could create major or hard-to-repair flaws in application security.

Again, if your organization has a working design review and sign-off process, by all means, go with it.

Developing Your Application Security

If security requirements have been properly distilled into a security architecture and functional design spec, security assurance will be running on autopilot at this point.

As developers focus on specific use cases, the design specifications, including security, should be clear. You can now leverage your existing process to ensure use cases or stories are completed to spec so security functionality can be implemented without a hitch.

Although security implementation should be happening inline at this point, it’s important to consider the following during development:

  • Security functionality should be addressed at the use-case or story level. Developers should never “go back” to security after developing core functionality; when time gets tight, corners get cut, and coding decisions get made that must be undone later to fit security in.
  • Security should be included in unit testing, including integration of security testing within automation tools. In addition to testing scenarios under expected user behavior, conduct tests that include unexpected behavior (a minimal example follows this list).
  • Spot-checking code to ensure adherence to coding standards is also important. No piece of code should ever be deployed without at least a cursory peer review (this also ensures segregation of responsibility, protecting against insertion of back-doors or other malicious functionality).
  • The developers should have immediate access to back-up resources. Not all developers have been exposed to security development techniques or approaches, and unfortunately design patterns are not well-defined. If you have application security experts in-house, allocate an appropriate amount of their time to mentoring others. Otherwise, retain outside expertise in advance. You don’t want developers skipping over security requirements simply because they don’t know what to do.
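
Here’s a minimal example of unit testing for unexpected behavior; the function under test is hypothetical, but the pattern (one test for the happy path, one sweeping malicious and malformed inputs) transfers directly:

  import unittest

  def parse_quantity(raw):
      # Hypothetical function under test: an order-quantity form field.
      value = int(raw)  # rejects non-numeric input such as "1; DROP TABLE"
      if not 1 <= value <= 100:
          raise ValueError("quantity out of range")
      return value

  class QuantitySecurityTests(unittest.TestCase):
      def test_expected_input(self):
          self.assertEqual(parse_quantity("3"), 3)

      def test_unexpected_input_is_rejected(self):
          # Malicious or malformed input must fail safely, never pass through.
          for bad in ["-1", "0", "101", "1e9", "1; DROP TABLE orders", ""]:
              with self.assertRaises(ValueError):
                  parse_quantity(bad)

  if __name__ == "__main__":
      unittest.main()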

A good resource that’s both educational and eye-opening for developers is the “CWE/SANS Top 25 Most Dangerous Programming Errors” (http://cwe.mitre.org/top25).

Testing for Security Quality Assurance

Now that the theme of integrating a security workstream into every software lifecycle phase is clear, let’s focus on security QA testing, which should be completed alongside other quality assurance testing.

QA test cases for new Web applications are based on the functional requirements for each application. For existing applications, QA test cases probably also include regression test cases to ensure that old bugs have not crept back into evolving code. In some cases QA teams may include resiliency testing, including scenarios the application may encounter outside its intended execution paths.

All these methods also can, and in most cases should, be used to test the software’s security capabilities. By extending the testing methodology, you can leverage existing processes and cultural acceptance. As you begin to integrate security QA testing into the larger testing program, also take the following into account:

  • Each security test case should include a set of tests for unexpected or malicious behavior. These will focus primarily on manipulation of data elements or injection of unexpected characters to force unanticipated, potentially exploitable behavior. (Here, too, the OWASP project is a good resource for tools and techniques; a minimal harness sketch follows this list.)
  • The QA project manager should allow the testers some reasonable amount of time per use case or per major testing area to “freestyle” against the application—that is, attempt to manipulate the application’s security outside of any test case. With the exception of the developers, the testers will probably be the most familiar with the user interface, making them the most likely to discover unexpected user-interaction vulnerabilities.
  • A closed set of test data should be provided to each tester for individual use, and each tester should use only one record per test case involving data manipulation. The tester should have access to raw database views for his or her dataset, enabling immediate assessment of the data-tier impact of injection activities.
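
A minimal harness for the first bullet might look like the following sketch. The endpoint (http://localhost:8000/search) and parameter name (q) are hypothetical; point it at a test instance of the application under test, never at production:

  import urllib.error
  import urllib.parse
  import urllib.request

  # Classic probe payloads for injection and scripting flaws.
  PAYLOADS = ["'", "\"", "' OR '1'='1", "<script>alert(1)</script>",
              "../../etc/passwd", "A" * 5000]

  def probe(base_url, param):
      for payload in PAYLOADS:
          url = base_url + "?" + urllib.parse.urlencode({param: payload})
          try:
              with urllib.request.urlopen(url, timeout=5) as resp:
                  status = resp.status
                  body = resp.read().decode(errors="replace")
          except urllib.error.HTTPError as err:
              status, body = err.code, ""
          # Server errors, or payloads echoed back unencoded, both
          # deserve a closer look by the tester.
          if status >= 500 or payload in body:
              print(f"FLAG: status {status} for payload {payload!r}")

  probe("http://localhost:8000/search", "q")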

Like the developers, the testers would benefit from familiarity with the “CWE/SANS Top 25 Most Dangerous Programming Errors” (http://cwe.mitre.org/top25), especially before they do any freestyle testing.

Performing Other Security Assurance Activities

In addition to security QA testing, other actions help ensure that a newly launched or updated application won’t expose an organization to undue risk. Information security, infrastructure management or even external consultants may carry out these activities, but the overall application security assurance owner should maintain responsibility for ensuring their completion. It’s best to have them performed in the order they appear below, because the later activities often depend on the results of the earlier ones.

Final Code and Deployment Review

When the application is packaged and staged in a predeployment state, the final code base and all associated resources should be reviewed. Commercial automation tools or home-grown scripts are frequently used to help perform these checks, which have averted disaster on many occasions.

Here are a few examples of “red flags” that should be assessed and addressed (check the OWASP site for more):

  • Extra copies of files (e.g., “login.bak,” “login.old.1”) that have not been cleaned up after development are an extremely frequent and embarrassing cause of compromise, often leaving comments and multiple iterations of code exposed for attacker analysis. In addition, many application platforms will default to serving files with unknown extensions as a plain text MIME type instead of attempting to compile, interpret or execute them. (A minimal scan for such leftovers appears after this list.)
  • Comments in HTML and development-time documentation blocks are often left in applications, where they are easily visible and can help move an attack forward. Remove them unless they’re essential, in which case they should be carefully examined for potentially dangerous content.
  • Malicious code such as fraudster malware can be hidden in many forms of multimedia content (e.g., video and Flash files) to infect users and leave your company potentially liable. All multimedia content should be scanned with appropriate tools to protect against damaging content.
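
A home-grown script for the first red flag can be as simple as this sketch; the extension list is hypothetical and should be extended to match your platform:

  import os

  # Extensions that typically indicate forgotten editor or backup copies.
  SUSPECT = (".bak", ".old", ".orig", ".tmp", ".swp", "~")

  def scan_staged_code(root):
      for dirpath, _dirnames, filenames in os.walk(root):
          for name in filenames:
              if name.endswith(SUSPECT) or ".old." in name:
                  print("red flag:", os.path.join(dirpath, name))

  scan_staged_code("/var/www/staging")  # hypothetical staging path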

Infrastructure Security Review
Many a well-secured Web application has been compromised due to a poorly configured infrastructure. My favorite was the application service provider running a “SAS-70 certified” financial application…but the audit scope didn’t include the operating system or network, which between them allowed anonymous FTP access to the application servers’ code base.

The infrastructure security review is another topic unto itself, and again plenty of free resources are available on the Internet. Generically, however, here are the major areas of concern:

  • Operating systems should run only the software and services they need, and all software should be kept up to date (a minimal port-check sketch follows this list).
  • Administrative, network, host-level and other access to server systems should be tightly controlled and closely monitored.
  • Application and database platforms have their own security configuration concerns, and should also be run in a hardened, closely controlled configuration.
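
For the first bullet, even a crude check adds value. This sketch flags listening TCP ports that aren’t on an approved list; the approved set is hypothetical, and a real review would lean on nmap or the platform’s own tooling:

  import socket

  APPROVED = {22, 443}  # hypothetical: SSH for admins, HTTPS for the app

  def check_ports(host, candidates=range(1, 1025)):
      for port in candidates:
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
              s.settimeout(0.2)
              if s.connect_ex((host, port)) == 0 and port not in APPROVED:
                  print("unexpected open port:", port)

  check_ports("127.0.0.1")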

At a minimum, software quality managers should familiarize themselves with the SANS top infrastructure vulnerabilities list.

Penetration Testing

Penetration testing, sometimes called “ethical hacking,” should be performed only after all other security QA has been completed. Although useful as a sanity check and to an extent as an assessment tool, penetration testing alone isn’t a comprehensive quality approach; it’s one piece in a larger toolkit.

The most frequent goal of penetration testing is simply to attempt application compromise using common attack methods. The most typical form is “black-box” penetration testing, in which the testers have no prior knowledge of the application or environment. This testing most closely simulates a real attack from an unknown threat agent. Other variants offer the testers varying levels of access to the application environment and in some cases application code/data to assess attack effects.

The decision about which penetration testing model to use is driven by the goals of the testing efforts. If the test is being used as a sanity check against common attack methods, a black-box penetration test may be most appropriate. If it is intended to provide a more in-depth software vulnerability assessment, providing the testers with access to software information makes sense. The latter approach, called “white-box” or “informed” testing, can provide extensive insight into security-related design flaws, though it takes more time and effort.

No matter which approach you take, penetration tests can easily turn into never-ending projects if poorly scoped. The testing boundaries must be clearly defined. In the case of a black-box penetration test, define the attacks that pose the greatest threats to the application’s confidentiality, integrity or availability. Have the penetration testers define the attack set that would pose those risks. If a white-box penetration test is being performed, work with the testers to assess which interface and code components are most exposed to compromise and determine the specific use cases to be tested.

Go/No-Go and Risk Acceptance

Some security bugs will probably still exist after completion of all development and testing. Attempting to address every possible vulnerability would be unreasonable for the vast majority of applications. As with general software quality, security bugs must be analyzed for severity, potential impact and mitigating factors. Armed with this information, technology and business managers can decide which bugs are worth delaying a release to fix, and which can be addressed post-release.

From a security perspective, there are many factors to consider when analyzing a specific vulnerability. If a technical situation is complex, seeking the advice of experienced security and risk analysts is clearly warranted. For better-known security issues, some of the major considerations when assessing a bug’s severity include:

  • Can the vulnerability be remotely exploited? If so, can it be exploited by anyone or does it require authentication or a specific level of authorization?
  • Does the vulnerability expose personal information of any type to unauthorized disclosure? If so, does the vulnerability expose just the current user’s data or multiple users’ data? Is such data regulated or protected by law (e.g., GLBA or HIPAA)?
  • How easy is the vulnerability to exploit? Do tools or techniques exist “in the wild” that would help an attacker execute an attack? Would an attacker need special knowledge of the system or environment?
  • Would the presence of the vulnerability or its exploit expose the organization to compliance failure, breach of contract, legal liability or significant brand damage?

Performing this analysis in a simple Excel spreadsheet (see sample at http://www.cloudpassage.com/articles/stp-0310) can provide tremendous insight into the security bugs that should be of most concern; the sketch below shows the same idea in miniature. Again, where questions or ambiguity remain, seek expertise. When significant security issues remain in software beyond the point of code-freeze, it is critical to complete risk-acceptance documentation for each vulnerability.
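
If a spreadsheet feels heavyweight, the same scoring can be expressed in a few lines. The factors mirror the bullets above; the weights are hypothetical and should be tuned with your own risk analysts:

  # Hypothetical weights for the considerations listed above.
  WEIGHTS = {"remote": 3, "no_auth_needed": 3, "exposes_pii": 4,
             "tools_in_wild": 2, "compliance_impact": 4}

  def severity(vuln):
      return sum(WEIGHTS[k] for k, present in vuln["factors"].items() if present)

  bugs = [
      {"id": "VULN-17", "factors": {"remote": True, "no_auth_needed": True,
          "exposes_pii": True, "tools_in_wild": True, "compliance_impact": True}},
      {"id": "VULN-22", "factors": {"remote": True, "no_auth_needed": False,
          "exposes_pii": False, "tools_in_wild": False, "compliance_impact": False}},
  ]

  # Triage: highest-severity bugs first.
  for bug in sorted(bugs, key=severity, reverse=True):
      print(bug["id"], "severity", severity(bug))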

A risk-acceptance document (see sample at http://www.cloudpassage.com/articles/stp-0310) provides senior management with all details needed to make a go/no-go decision relative to significant security risks posed by software vulnerabilities. Because senior executives can be held personally responsible for data compromises, these disclosures are crucial. At a minimum, a risk disclosure should describe the vulnerability, the potential impact to specific data or services, ease-of-exploit details and legal/compliance/brand risks to the organization. It is often appropriate to review all significant security bugs with senior management in a conference or workshop format to ensure that all concerns are communicated effectively and consistently to all stakeholders.

If senior management chooses to move forward without immediately addressing significant vulnerabilities, the decision should be documented and the vulnerabilities added to the high-priority queue for resolution as soon as possible after release. In addition, the application should be closely monitored for attempts to exploit the vulnerabilities.

Bottom line, while accepting risks and attempting to monitor for exploits is an option, it’s best to avoid going into production with serious Web application vulnerabilities if at all possible. In most cases it’s not a matter of “if” an attacker will wander by your application—it’s a matter of “when.”

The Upshot

Poorly protected Web applications contribute to the theft of hundreds of millions of identities every year [2], creating costly and treacherous conditions for businesses and consumers. The vast majority of these compromises can be traced directly to easily preventable issues in software and infrastructure. Quality assurance organizations can offer remarkable improvements in risk posture simply by extending current processes to include security and compliance.

Share this article with your colleagues, especially with technology management within your organization. Discuss with your security team how security gets addressed today. Identify areas where security assurance and quality assurance can reinforce each other.

This simple act of improvement could one day help your organization avoid becoming the next Company X.

Beware of Bots

Malicious botnets are massive collections of compromised computers (termed “zombies”) that serve the commands of fraudsters or other e-criminals without their owners’ knowledge. While most “zombified” computers are workstations, high-bandwidth Internet servers are often required to distribute and control the malicious software (known as “crimeware”).

Crimesters use insecure Web applications to compromise a company’s Web servers, turning them into distribution and command-and-control servers that effectively serve as the “brains” of malicious botnets.

Conficker, Kraken and Srizbi are examples of highly publicized and severely damaging botnets. Srizbi, named for the Srizbi trojan, has reportedly compromised some 450,000 machines and can send as many as 60 billion spam messages a day, according to some estimates. (For a more complete list and links to additional information, visit http://en.wikipedia.org/wiki/Botnet.)

[1] David Kravets, “Facebook App Maker Hit With Data-Breach Class Action,” Wired, Dec. 30, 2009. http://www.wired.com/threatlevel/2009/12/facebook-app-data-breach
[2] Wade Baker et al., “2009 Data Breach Investigations Report,” Verizon Business, April 2009. http://www.verizonbusiness.com/resources/security/reports/2009_databreach_rp.pdf


About the Author

Carson Sweet is CEO of CloudPassage, a software-as-a-service provider delivering security, performance and efficiency management for cloud-based enterprises. His 15-year information security career has included a broad range of management experience and hands-on technical roles. He holds multiple industry accreditations, including ISACA CISM and ISC2 CISSP. Sweet has been a senior strategy and technology consultant across a range of industries and public sectors. His clients include Goldman Sachs, JPMC, Bank of New York, ADP, CapitalOne, SmithKline Beecham, Becton Dickinson, US-DHS and US-DOE. He formerly served as CSO for GlobalNetXchange (now Agentrics) and CTO for the Investor Responsibility Research Center (now the RiskMetrics Group).