Every single product in the world needs quality assurance. Try to imagine a world without quality assurance (QA). Should we allow self-driving cars onto our roads without them being tested? Would we let people board airplanes without the slightest detail being examined? Test cases are defined and test criteria must be met, and this should happen for every product before it even reaches the market. It is no different at In The Pocket. We want to create digital products that make people happy and businesses grow, and that is only possible by making sure those products are thoroughly tested and the smallest details are examined. But you need dedicated people in your team to do that. You need a strong team of quality assurance engineers, acting as quality gatekeepers for all our products. All QA engineers at In The Pocket want the same thing: to deliver products of exquisite quality and make sure our clients are happy.
But to get to this stage, the QA team evolved a lot. At first, there wasn't even a dedicated QA team. Quality assurance was done by the developers themselves. And if anyone still thinks that's a good idea, let me interrupt you: it's not. Every developer is biased towards the code they write. All parents think their kids are the best, right? Fortunately, In The Pocket was aware of that, and rather quickly the first QA engineer started testing our products, created by ± 20 developers. Of course, that was a lot to manage and, more importantly, a lot to test.
So we had to evolve into a real QA team, with more people and clear processes. We had to become a team providing more transparency to our clients, but also to ourselves: what have we tested already, what do we need to test again, what is the status of a feature being shipped next week… And the evolution succeeded. We tried several tools and different ways of working. Now we are proud to say our QA team is grown up and fully functioning. But how exactly do we manage quality assurance at In The Pocket? Let's find out!
Handling autonomous teams
In The Pocket is divided into autonomous teams, each responsible for its own projects. Every team consists of a team lead, developers, a designer, a product manager and a QA engineer. Because of this structure, all QA engineers have their own projects to be responsible for. They know their projects from A to Z and know what is yet to come. It is their mission to keep the QA mindset in their team strong, so everyone working on those projects is aware that quality is extremely important from the beginning of a project until its end (if there ever is one).
But as quality assurance engineers, we cannot stay focused only on our own projects and our own way of working. To streamline things, we have a monthly meeting to discuss the next thing to work on, so we keep getting better at what we do. These monthly meetings shaped our main QA process into what it is today.
The QA process
The start of a brand new project
When a new project enters the team, the QA engineer is responsible for starting the QA process. The first step is fairly easy and straightforward: we create a test plan dashboard containing all major information about the QA process of this particular project. It is set up by the QA engineer, with input from the solution architect and the project manager. Having a dashboard is necessary to provide transparency towards clients, but also towards other members of the team: at least they now know what a QA engineer is doing. But what does it contain exactly?
- Project information: A summary of the project: what is its purpose and target audience? Is it something mobile/web/…? Where can you find more product information?
- People: Who works on the project? What are their roles and responsibilities?
- Tools: What tools are used? What is their purpose? Where to find the test cases in the test management tool? Do we use logging tools?
- Supported browsers/operating systems: One of the most important topics for a QA engineer: what devices do we test on and what are the lowest supported operating system versions? Android 5, 4? What browsers should we test the project with and which versions are the lowest supported?
- Environments: Is there a staging and preproduction environment? What versions are deployed on them and where are they located?
- Testing preconditions: Is there something we need before we can start testing? Are the acceptance criteria ready? Is the test environment stable and up and running?
- In scope/out of scope: What high-level features are in scope and the responsibility of the QA engineer? But also important: what is out of scope and should not be tested by the QA engineer?
- Test data: Is there any test data that should be used or is necessary for testing?
- Risks: Are there any risks that should be taken into account? Can OS updates cause any risks? Are there any merging scenarios that are important?
- Non-functional requirements: Are there any other requirements that should be met: performance, usability, reliability, security…
- Short description about the way-of-working: Some more information about QA topics, like: definition of done, agile testing, bug severity classification, test levels…
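The dashboard sections above can be thought of as one structured document. Here is a minimal sketch in Python of what such a test plan could look like as data, together with a small helper that uses it; all field names and values are illustrative assumptions, not our actual template:

```python
# Illustrative sketch of a test plan dashboard as structured data.
# Field names and values are hypothetical; they mirror the sections above.
test_plan = {
    "project_info": {
        "name": "Example App",                 # hypothetical project
        "platforms": ["android", "ios", "web"],
    },
    "supported_environments": {
        "android_min_version": 5,              # lowest supported OS version
        "browsers": {"chrome": 60},            # lowest supported browser versions
    },
    "environments": ["staging", "preproduction"],
    "scope": {
        "in_scope": ["login", "checkout"],
        "out_of_scope": ["third-party payment backend"],
    },
    "risks": ["OS updates", "data migration on upgrade"],
}

def is_supported(plan: dict, android_version: int) -> bool:
    """Check whether a device's Android version is within the supported range."""
    return android_version >= plan["supported_environments"]["android_min_version"]
```

Keeping the plan as structured data like this makes it easy to answer the "what do we test on?" question consistently across the team.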
Once all this information is bundled, there is a nice overview of everything related to QA in the project. Of course, this is a living document and everything is subject to change. But once this document is set up, we can continue and start on the real testing preparation.
The real testing preparation
All teams at In The Pocket, including the QA engineers, work in two-week sprints within the agile philosophy. Using Jira - a tool for managing project versions, features, bugs and more - we add new features, so-called "stories", to those sprints. The features are taken up by development, and it is the responsibility of the QA engineer to make sure these stories are thoroughly tested. This is done by preparing the stories in advance, before the sprint starts. A story cannot be planned if it isn't prepared by the QA engineer. But how?
- First, the QA engineer participates in "The 3 Amigos" meeting, where the product manager presents the requirements and acceptance criteria for review by the solution architect and the QA engineer. That way, we ensure a common understanding of what needs to be built and how we can test the story. Only when all 3 amigos approve can a story be planned in a sprint.
- Second, an individual backlog refinement needs to be done. The QA engineer goes over all stories that are ready for review by QA and creates high-level test cases for them using Adaptavist, a Jira plugin. With this plugin, it is very easy to add test cases from within stories and keep a good overview. We try to add only a good descriptive title without very detailed steps. Because of this, we do more out-of-the-box thinking during our tests and more exploratory testing in general, while still covering all necessary test cases. Also in this step, we determine which test case(s) we want to automate.
- Third, the QA engineer participates in estimation meetings. This is a final check that all test cases are in place, and if a story requires significant QA effort, that effort needs to be part of the estimate too.
Because of this preparation, proper planning can be done: QA effort is taken into account and development knows exactly what is going to be tested, which results in better code quality too.
The main testing part
Once those well-prepared tickets are planned in the next sprint, development moves each ticket to the "In Progress" state and starts implementing the feature. Because the test cases are now visible in the ticket itself, developers know how the new feature will be tested and already spend more time on testing themselves. This results in fewer reopened tickets and better code quality.
Once the developer is confident about the new feature, the ticket moves to the "In Test" state, which automatically assigns it to the QA engineer of the team. The main testing part has now started:
- First, using the Adaptavist plugin in Jira, a new "test run" can be created from within the ticket. It automatically selects all linked test cases, and you can change some information, like the mobile device/browser/OS/environment… you'll be testing on. Starting the test run opens a "test player" window where you can go through the test cases and mark them as succeeded or failed. When one fails, a new bug ticket needs to be created or the main ticket needs to be reopened:
- When a blocking or critical bug occurs in the new functionality, the story needs to be reopened. In this case, we cannot test the ticket any further.
- When a major or minor bug occurs in the new functionality, a new bug ticket is created and linked to the story. In this case, we can keep testing the story, so it stays "In Test". Bugs are reported using the "bug-first" approach, which means they are immediately added to the sprint and developers can start working on them right away. Only minor bugs are added to the backlog.
- Some test cases need to be automated. This also happens while the ticket is "In Test". This works well if there is a good ratio between the number of developers and QA engineers. If there isn't, and the pressure on the QA engineer is high, some testing work needs to be delegated and developers will have to automate the test cases instead. It is the responsibility of the QA engineer to check whether this is necessary.
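The triage rules above can be sketched as a small decision function. This is an illustrative Python sketch of the severity-based flow described in the list, not a real Jira integration; the function name and return values are assumptions:

```python
# Sketch of the bug triage rules described above.
# Severity names come from the text; the ticket actions are illustrative.
def triage(severity: str) -> dict:
    """Decide what happens to the story and the bug, based on bug severity."""
    if severity in ("blocking", "critical"):
        # The story itself is reopened; testing stops for this ticket.
        return {"story": "reopen", "bug_ticket": None}
    if severity == "major":
        # "Bug-first": a linked bug ticket goes straight into the sprint,
        # while the story stays "In Test".
        return {"story": "in_test", "bug_ticket": "sprint"}
    if severity == "minor":
        # Minor bugs are the only ones that land on the backlog.
        return {"story": "in_test", "bug_ticket": "backlog"}
    raise ValueError(f"unknown severity: {severity}")
```

Encoding the rules this explicitly is also a handy way to check that everyone on the team classifies bugs the same way.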
When all acceptance criteria are thoroughly tested, the test cases have succeeded and the required automated tests are in place, the ticket can be moved to the "Done" state. But this doesn't mean the job of the QA engineer is done at this stage.
Final QA checks
All stories are tested, the most important bugs are fixed and the others are transferred to the next version. Now it is time for the next stage in the project: releasing a new version.
In order to release a version, we follow a test script in Adaptavist, created especially for this occasion. This script covers regression test cases for existing functionality, test cases for new functionality and, of course, some update tests. These tests are project-specific and will differ between projects. For example, the update scenario of a web application is completely different from that of a native application. And then there are also mixed applications: projects using both native and web. In those cases, a clear testing process is necessary, and it should be clear to everyone on the team.
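A release script like the one described here is essentially a checklist grouped by test type. Here is a minimal Python sketch of that idea; the check names and the all-or-nothing "go" rule are assumptions for illustration, not our actual Adaptavist script:

```python
# Illustrative release checklist, grouped as in the script described above:
# regression tests, tests for new functionality, and update tests.
RELEASE_CHECKS = {
    "regression": ["login still works", "checkout still works"],
    "new_features": ["new search filter behaves as specified"],
    "update": ["app upgrades cleanly from the previous version"],
}

def release_go(results: dict) -> bool:
    """The final 'go': every check in every section must have passed."""
    return all(
        results.get(check, False)
        for checks in RELEASE_CHECKS.values()
        for check in checks
    )
```

Note the deliberately strict rule: a single failed or missing check blocks the release, which matches the idea of the QA engineer as the quality gatekeeper.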
When the release script succeeds, the QA engineer gives his or her final "go" and the application can be released. Afterwards, a sanity check ensures everything went as it should. All of this is well documented in Adaptavist, providing as much transparency as possible to our clients. All these Adaptavist test runs and bug metrics are bundled in a nice and clear QA report.
But the QA process is a never-ending story, and feedback keeps coming in. People will use the product, bugs will happen and crashes will occur. All of this needs to be monitored, and the next version is already being prepared, with bug fixes and new stories with test cases, making it an even better product, a product that makes our clients happy.