More often than not, quality assurance is treated as an implicit benefit of an agile process, with no specific techniques or practices applied. Integrating effective quality assurance can be one of the most challenging aspects of applying the Scrum framework.
Before considering how quality assurance factors can be applied to the Scrum framework, let’s start with a clear definition.
Quality assurance (QA) is a set of practices which collectively ensure that standards of quality are met (i.e. technical standards) and that specified requirements are delivered (customer or user needs or expectations). Standards of quality may include, for example, measures of correctness, integrity, maintainability or usability. Quality assurance does not mean software testing; testing is just one element of a quality assurance process. Effective QA means there are limited defects in the first place. The core elements of a QA process include standards, audits, solution governance, testing, defect management (analysis, tracking and resolution) and communication.
Scrum Concepts
The following points describe how QA should be considered in relation to some of the core Scrum concepts.
Developer. In Scrum terms this means any project contributor, such as Software Testers and UI Designers, in addition to Software Developers. This term is a frequent source of confusion and misunderstanding.
Scrum Ceremonies. QA should attend all meetings (Sprint Planning, Product Backlog Grooming, Stand-ups, Reviews and Retrospectives). A team-wide, test-first philosophy is often the driving factor behind high-performing Scrum teams. This concept is also referred to as acceptance-test-driven development.
Working Software. This means fully tested, quality software – not merely code complete.
Definition of Done. The DoD must include strict criteria for testing and quality.
Customer Satisfaction. Conditions of Satisfaction (or Acceptance Criteria) defined for each User Story drive the quality assurance process.
Scrum Artefacts
The following points describe how QA should be considered in relation to some of the core Scrum artefacts.
User Stories. The User Story is a concise description of a feature told from the perspective of the person who requires it. User Stories are best considered a reminder of a conversation and should be limited to the detail required for effective estimation and prioritisation.
The following (INVEST) factors should be considered with respect to the quality characteristics of a User Story:
Independent – can it be delivered in a single sprint, or do dependencies exist?
Negotiable – the user story is a reminder of a conversation, not a specification
Valuable – is there obvious intrinsic value to the user?
Estimable – can it be estimated (sufficient detail, available knowledge)?
Small – is it sized appropriately (to fit in a sprint)?
Testable – are there appropriate Conditions of Satisfaction – or Acceptance Criteria?
The Conditions of Satisfaction (CoS) are the most significant element for QA. Each condition is a specific, user-focused requirement for acceptance and should be written using the Given-When-Then format (a useful heuristic technique), as below.
Given [context] – e.g. I have created a Customer Record and Sales Invoice.
When [event] – e.g. I attempt to enter a Sales Payment above the Sales Invoice Total.
Then [outcome] – e.g. I am warned that the payment exceeds the outstanding amount.
The high-level CoS will drive the team discussion of the User Story (in a test-first approach). A "Design and Test" whiteboard approach can be useful in this respect, where the team considers both the design and the testing approach at the same time. Each high-level CoS will then be translated into lower-level (detailed) test cases (technical: unit and integration, or functional), as sketched below.
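As an illustration, the Given-When-Then condition above might translate into an automated acceptance test along the following lines. This is a minimal sketch only: the SalesLedger class, PaymentExceedsInvoiceError exception and their methods are hypothetical names invented for this example, with tiny in-file stand-ins so the sketch runs.

```python
import pytest


# Minimal in-file stand-ins so the sketch runs; in a real project these
# would live in the application code. All names here are invented.
class PaymentExceedsInvoiceError(Exception):
    pass


class SalesLedger:
    def create_customer(self, name: str) -> dict:
        return {"name": name}

    def create_invoice(self, customer: dict, total: float) -> dict:
        return {"customer": customer, "total": total, "paid": 0.0}

    def record_payment(self, invoice: dict, amount: float) -> None:
        if amount > invoice["total"] - invoice["paid"]:
            raise PaymentExceedsInvoiceError("payment exceeds outstanding amount")
        invoice["paid"] += amount


def test_payment_above_invoice_total_is_rejected():
    # Given: a Customer Record and a Sales Invoice exist
    ledger = SalesLedger()
    customer = ledger.create_customer(name="Acme Ltd")
    invoice = ledger.create_invoice(customer, total=100.00)

    # When: a Sales Payment above the Sales Invoice Total is entered
    # Then: the payment is rejected, warning that it exceeds the
    # outstanding amount
    with pytest.raises(PaymentExceedsInvoiceError):
        ledger.record_payment(invoice, amount=150.00)
```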
User Story Test Cases. The Conditions of Satisfaction (CoS) for a given User Story provide a high-level framework for the definition of specific test cases. Such test cases should be structured to address a small, focused end-to-end workflow (or scenario) to avoid interdependencies, and should enable collaborative and automated testing. Effective test cases provide a simple set of step-by-step test instructions (or steps). Note that the CoS cover the happy path scenarios only; test cases must also be developed that cover critical path, alternate path, negative path, technical, validation and edge case scenarios.
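Extending the same hypothetical example, parameterised test cases are one way to cover boundary, negative and validation scenarios alongside the happy path; the payment_accepted helper below is an invented stand-in for the real workflow under test.

```python
import pytest


def payment_accepted(invoice_total: float, payment: float) -> bool:
    """Hypothetical stand-in: a payment is accepted if it is positive
    and does not exceed the outstanding invoice total."""
    return 0 < payment <= invoice_total


@pytest.mark.parametrize(
    "invoice_total, payment, expected",
    [
        (100.00, 50.00, True),    # happy path: partial payment
        (100.00, 100.00, True),   # boundary: exact settlement
        (100.00, 100.01, False),  # negative path: overpayment
        (100.00, 0.00, False),    # validation: zero payment
        (100.00, -10.00, False),  # edge case: negative amount
    ],
)
def test_payment_scenarios(invoice_total, payment, expected):
    assert payment_accepted(invoice_total, payment) is expected
```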
Product Backlog. The Product Backlog is a single, ordered list of requirements (typically, but not exclusively, User Stories) used by Scrum teams. The list is constantly evolving, ordered by business value and estimated by the team – including QA.
The following factors should be considered with respect to the quality characteristics of the Product Backlog:
Prioritisation – are the Product Backlog Items (PBIs) accurately prioritised on a consistent scale based on business value, ROI, interdependencies and risk?
Estimation – are the PBIs consistently estimated (and benchmarked) by the team?
Well Groomed – is the backlog frequently groomed by the team?
Product Backlog grooming entails User Story discussion by the whole team (led by the Product Owner) with a view to revising and refining estimates and adding details such as new Conditions of Satisfaction. Regular sessions should be scheduled for this purpose; ideally 10% of the Sprint time should be dedicated to this task. In practice this is rarely the case; however, given the direct relationship between the completeness of the CoS and the efficiency of the quality assurance process, it is important that sufficient time is allowed for effective grooming.
Scrum Practices
The following points describe how QA should be considered in relation to some of the core Scrum practices.
In-Sprint Testing. User Story testing must be completed within the sprint – full stop. The Definition of Done must include this criterion. Testing a sprint behind the development is a common bad practice that directly violates the working software principle; it also builds up technical debt and defect debt to be resolved at a later time. Instead, the sprint backlog must be shaped to enable collaborative design, development and testing of each user story within the sprint. High-performing Scrum teams will also automate some of the tests within the sprint. The following simplified process describes how collaborative sprint development can be applied; a minimal code sketch follows the list.
Developers and Testers working collaboratively – Team Swarming:
Write the test case
Write the code
Test together (developers and testers)
Fix the bugs (developers) / Automate the test case (testers)
>> next user story
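By way of illustration, steps 1 and 2 of this loop might look as follows at the unit level. This is a sketch under stated assumptions: the discount_price function is hypothetical and exists only for this example.

```python
import pytest


# Step 1 – write the test case first; it fails (red) until the code
# below exists.
def test_discount_is_applied_to_price():
    assert discount_price(price=200.0, discount_pct=10) == pytest.approx(180.0)


# Step 2 – write the simplest code that makes the test pass (green);
# the pair then test together, fix bugs and automate the test case.
def discount_price(price: float, discount_pct: float) -> float:
    """Apply a percentage discount to a price."""
    return price * (1 - discount_pct / 100)
```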
Test Automation. The ability to automate the execution of test cases (i.e. unit tests and acceptance tests) is critical at scale. In order to quickly identify cases where the current sprint development breaks code or functionality developed in a previous sprint, it is necessary to execute the related tests efficiently. Test automation also supports evaluation of the impact of changes (what-if analysis), which can help with design decisions. Further QA-related benefits of test automation include the ability to apply load testing (e.g. multiple concurrent users) and performance testing. Unlike manual testing, which must be conducted in-sprint, test automation can be added outside of the sprint and be developed by a separate team. High-performing Scrum teams will automate critical tests within the sprint.
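As a simple illustration of the load testing benefit, the sketch below simulates multiple concurrent users using Python's standard library; submit_payment is a hypothetical stand-in for a real request against the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean


def submit_payment() -> float:
    """Hypothetical user action; returns elapsed time in seconds."""
    start = time.perf_counter()
    time.sleep(0.05)  # placeholder for a real API call
    return time.perf_counter() - start


def load_test(concurrent_users: int = 50, requests_per_user: int = 10) -> None:
    # Simulate many users acting at once and collect per-request latency.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(
            pool.map(lambda _: submit_payment(),
                     range(concurrent_users * requests_per_user))
        )
    print(f"requests: {len(latencies)}, "
          f"mean latency: {mean(latencies) * 1000:.1f} ms, "
          f"max latency: {max(latencies) * 1000:.1f} ms")


if __name__ == "__main__":
    load_test()
```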
Test data quality has a significant impact upon the effectiveness of testing. Random data simply doesn't work, and prefabricated test data is often written to expectations and so is not representative. Using production data (real data) for test purposes can violate data protection regulations and carries the risk of inadvertent communication to the customer, supplier etc., or of PII or financial information leakage. To mitigate this, an approach that combines real data with obfuscation (scrambling or scrubbing) techniques can produce large-scale, representative test data sets without the risk. This can take significant time to perfect; however, the return on this investment will be high.
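A minimal sketch of such an obfuscation approach is shown below; the record layout and PII field names are assumptions for illustration. Deterministic hashing means the same real value always maps to the same pseudonym, which preserves referential integrity across the data set.

```python
import hashlib


def pseudonym(value: str, field: str) -> str:
    """Deterministically scramble a PII value; same input, same output."""
    digest = hashlib.sha256(f"{field}:{value}".encode()).hexdigest()[:8]
    return f"{field}-{digest}"


def scrub(record: dict) -> dict:
    """Replace assumed PII fields; keep non-sensitive fields representative."""
    clean = dict(record)
    for field in ("name", "email", "phone"):  # assumed PII fields
        if field in clean:
            clean[field] = pseudonym(str(clean[field]), field)
    return clean


production_row = {"name": "Jane Doe", "email": "jane@example.com",
                  "invoice_total": 100.00}
# name and email are deterministically scrambled; invoice_total is unchanged
print(scrub(production_row))
```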
Note that, in addition to automated tests, a quality test plan still includes manual testing to cover exploratory and usability testing. Automated tests are systematic and are unable to gauge, for example, the quality of the user experience.
Regression Testing. The ability to run test cases developed in previous sprints (and perhaps by different teams) enables the impact of a current development to be understood immediately. This practice is referred to as regression testing. Regression testing effort grows over time; as with User Story test cases, regression testing should occur in-sprint and automation should be used wherever possible. It is good practice to reserve time at the end of the sprint to allow the team to complete regression testing; this should also include performance and security test cases. At scale, it is often necessary to be selective in terms of which test cases to focus upon. An effective but contentious practice is to reserve the final sprint for end-to-end regression testing – this is referred to as a hardening sprint; the key risk with this approach is that defect resolution from earlier sprints is deferred to this sprint. The hardening sprint should be driven from the regression test plan, not the product backlog, and be managed by the QA lead.
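One common way to be selective, shown here as an assumed pytest-based sketch reusing the hypothetical payment_accepted helper from earlier, is to tag regression tests with markers and run only the critical subset in-sprint.

```python
import pytest


def payment_accepted(invoice_total: float, payment: float) -> bool:
    """Hypothetical stand-in for the workflow under test (as earlier)."""
    return 0 < payment <= invoice_total


# Run only the critical regression subset in-sprint with:
#   pytest -m "regression and critical"
# (custom markers should be registered in pytest.ini to avoid warnings)
@pytest.mark.regression
@pytest.mark.critical
def test_overpayment_still_rejected():
    # critical regression from an earlier sprint: overpayment must be rejected
    assert not payment_accepted(invoice_total=100.00, payment=150.00)


@pytest.mark.regression
def test_near_total_payment_still_accepted():
    # lower-priority regression: boundary behaviour from an earlier sprint
    assert payment_accepted(invoice_total=100.00, payment=99.99)
```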
Post Release Support. Once the product is released – and despite the QA rigour applied – it is inevitable that production defects will arise. Such defects should be reviewed first by the Product Owner to determine priority and relevancy. Where defects are to be addressed immediately (show-stopper bugs etc.), the resolution work should replace similar-sized work in the current sprint. In every respect the defect should be treated in the same manner as a User Story. Defects can also be placed on the Product Backlog for grooming. Where the Scrum team is providing defect resolution, it is good practice to reserve some velocity for production defects; this proactive approach reduces the need to pull User Stories that may be in progress from the current sprint.