Before you dive into the world of SAMM 2.0, watch out for these common pitfalls.
Yan Kravchenko shares Part II in a series dedicated to the release of SAMM 2.0.
SAMM 2.0 Assessment
The release of SAMM 2.0 is widely anticipated and promises to provide a method for measuring and evolving Application Security Programs with a focus on Agile and DevOps methodologies. Naturally, organizations want to dive right in and start assessing their score against the latest SAMM. While the assessment process is self-explanatory and straightforward, several factors should be considered, including:
- Size of the organization
- Consistency of the SDLC practices
- Prior SAMM experience
While SAMM predominantly focuses on the assessment workflow, I wanted to offer some guidance to help organizations avoid common pitfalls, based on my experience performing these and other assessments.
How should you perform the assessment? Is it better to complete it once on behalf of the entire organization, as a representation of all development teams and efforts, or separately for individual divisions and business units? How do you demonstrate continuity between SAMM 1.x and 2.0? These and other questions are widespread and, while there is no right or wrong answer, I would like to share the approach and thought process for making these decisions.
Size of the Organization
Regardless of organization size, SAMM can be used to create a global scorecard showing the maturity level of the organization's Application Security program. As the size of the organization grows, however, creating a single SAMM scorecard requires taking increasingly larger liberties on the details. While the result will meet the overall objectives of the assessment, loss of detail may make it hard to move from answering ‘what’ to ‘why.’ Larger organizations should consider performing the evaluation at more granular levels and using collected data to create aggregate summaries.
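To illustrate the aggregation idea, granular per-team results can be rolled up into an organization-level scorecard by averaging each practice across teams. This is a minimal sketch: the team names, practice names, and scores below are hypothetical placeholders on SAMM's 0–3 maturity scale, not data from any real assessment.

```python
# Hypothetical per-team SAMM scores (practice -> maturity, 0-3 scale).
# All names and values are illustrative.
team_scores = {
    "payments": {"Threat Assessment": 2.0, "Secure Build": 1.5},
    "mobile":   {"Threat Assessment": 1.0, "Secure Build": 2.5},
    "platform": {"Threat Assessment": 1.5, "Secure Build": 2.0},
}

def aggregate(scores):
    """Average each practice across all teams into an org-level scorecard."""
    practices = next(iter(scores.values())).keys()
    return {
        p: round(sum(t[p] for t in scores.values()) / len(scores), 2)
        for p in practices
    }

org_scorecard = aggregate(team_scores)
print(org_scorecard)  # {'Threat Assessment': 1.5, 'Secure Build': 2.0}
```

Keeping the per-team data and computing the rollup on demand preserves the detail needed to answer 'why', while still producing the single scorecard executives expect.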
Small organizations with a few products may safely skip the rest of this article and proceed with the assessment – they are not likely to encounter the challenges faced by organizations with hundreds or thousands of individual applications and development teams. Larger organizations should consider the best level at which to perform the assessment. I have performed assessments at every level from division to individual application and have learned only one universal truth: it depends. The right approach is driven by the questions the SAMM assessment is meant to answer.
One question that generally helps to guide this consideration is, "If you were to gamify application security, what groups would be competing?" Using metrics to create ‘natural forces’ to encourage positive change can be a powerful tool and reduce the need for application security to be singularly responsible for pushing the program forward. Using numbers to represent application security can highlight teams or applications that may be trailing behind and hopefully compel the laggards to catch up to the others.
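A minimal sketch of such a 'leaderboard': given hypothetical overall maturity scores per team (again on a 0–3 scale, with illustrative names), sorting them in descending order makes the trailing teams visible at a glance.

```python
# Hypothetical overall maturity score per team; values are illustrative.
overall = {"payments": 2.1, "mobile": 1.2, "platform": 1.7}

# Rank teams from highest to lowest maturity to surface the laggards.
leaderboard = sorted(overall.items(), key=lambda kv: kv[1], reverse=True)
for rank, (team, score) in enumerate(leaderboard, start=1):
    print(f"{rank}. {team}: {score}")
```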
Large organizations often grow by acquisition, which significantly increases the likelihood of a wide range of SDLC practices. Another common source of inconsistency is differences between technology stacks. Discrepancies between SDLC practices drive the need for granular assessments that include every team. At the same time, organizations with more consistent technology stacks and practices may be able to use sampling to help substantiate their scores. In my practice, I have observed that even smaller organizations, whose practices are expected to be consistent between teams, often benefit from polling multiple teams.
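Where sampling is appropriate, a reproducible random sample keeps the selection defensible when the scores are later questioned. A minimal sketch, assuming a hypothetical inventory of forty teams and a roughly 20% sample:

```python
import random

# Hypothetical team inventory; in practice this would come from your
# application or team catalog.
teams = [f"team-{i:02d}" for i in range(1, 41)]

# A fixed seed makes the sample reproducible and auditable.
rng = random.Random(42)
sample = rng.sample(teams, k=8)  # roughly 20% of teams
print(sample)
```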
SAMM is a qualitative assessment and will always be subject to the interpretation of the responder. Therefore, an organization should consider normalizing the interpretation of different questions. Reading SAMM guidelines provided with each question will certainly improve consistency, but it may not be feasible to expect all responders to invest this much time voluntarily.
Before beginning the assessment, application security teams should review the questionnaires with Software Architects to identify opportunities to improve question interpretation. Replacing generic names of committees, teams, and policies is a helpful place to start, but further analysis may reveal other opportunities to make the questionnaire easier to answer. Exercise care to ensure the modifications remain consistent with the description of the associated Activity and Maturity level.
In all cases, training is appropriate prior to beginning the assessment, even if limited to a recorded webinar introducing the assessment, business functions, and security practices. However, including SAMM in the existing security training will further improve the quality of the responses and help with the acceptance and adoption of the application security roadmap.
After gathering the data, it's advisable to conduct a series of meetings to validate and, more importantly, gain context behind the different responses. While in larger organizations meeting with every responding team may not be practical, sampling the groups will provide enough information both to aggregate SAMM scores and to identify concrete steps for implementing SAMM guidance.
Qualitative assessments are prone to inaccuracies due to differences in interpretations of guidance, prior experience, and, frankly, time of day, but that should not minimize their importance.
Before beginning, the organization should develop a set of questions they would like the assessment to answer. These questions will help implement SAMM in the organization and determine the assessment structure.
Future posts will cover individual business functions and conclude with the various ways data can be analyzed and aggregated for executive briefings.