Read Seavus consultant Tibor Kladek's article on an alternative approach to testing – interactive automated testing.
With the introduction of the agile way of working across IT organizations and projects, software testing, as part of the SDLC, is often mentioned as part of development within the sprint, but it is rarely specified precisely how it should be addressed. Teams have found various ways to include testing within agile sprints, but what cannot be avoided is the utmost importance of test automation as the cornerstone of securing quality in a fast-paced agile environment.
One widely used practice is automating the regression test set, which grows with every sprint. Various tools are also introduced as part of continuous integration, making it possible to run unattended tests against nightly builds so they are ready for the testing activities of the day. It has to be said that this approach works well when the product under development is small to medium in complexity and size, but on larger projects there are a few drawbacks I have observed that need to be catered for. Here are some challenges that immediately come to mind:
- It is not always clear what these regression test sets cover: unit tests, functional tests, or both. In complex environments the functional test scope becomes far more complex than the unit tests themselves;
- A developer (or test developer) is usually involved in coding these tests, so valuable development time is invested in a side product (the automated test set) that is not really a deliverable to the business side;
- Regression test sets can grow considerably in size and become a maintenance burden. User stories become bigger and more expensive over time, as changes to the automated regression tests become more frequent and extensive;
- Manual testers have little involvement in maintaining or developing these automated tests, and as the regression test set grows, the tests tend to be only superficially understood; at the same time, manual testing is still needed;
- Since developers are the ones coding the automated tests, the "independent view" on testing is often missing, and the verification part remains less important than the validation.
This article tries to address these and a few other issues through a slightly different approach, which I would like to call "interactive test automation".
First of all, why do I call it interactive? The main goal of this approach is to involve testers in the automation effort as much as possible. The interactive part means that the target is for testers to simultaneously automate and test various progression and regression scenarios. They would interact closely with developers on automation bugs while at the same time testing the functionality of the deliverables. The first question that comes to mind is whether this is at all possible. But before we go into that, let us look at the benefits of such an approach. I can think of several immediately:
- The work of automating tests is split more evenly between development and test efforts;
- Developers would only need to develop and maintain components for test automation, which means less frequent and smaller changes;
- The independent view and focus on testing are retained, as the actual test scenario specification is left entirely to the testers within the team;
- Manual testers get a taste of automation, which helps them overcome the steep automation development learning curve more easily, as it gradually introduces them to the practice;
- Manual and automated testing would virtually blend into one practice.
To answer the question raised above about the viability of such an approach, we need to consider the possible design of an automation framework for such a solution. Namely, for testers to be able to use such a tool, they would need an interface that allows for "on-the-fly automation" using ready-made automation components. This means developers would have to make those components flexible enough to accept different inputs and modular enough to be used independently within various scenarios. Note that, depending on how the functional test is envisioned, the automation components mentioned above might, with slight adjustments for flexibility, originate from the unit tests that developers must write anyway.
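To make the component idea concrete, here is a minimal sketch of what such a developer-built component library might look like, in the style of keyword-driven testing. All names here (register_component, run_scenario, the example components) are hypothetical illustrations, not part of any specific framework: developers register small, parameterized components, and testers compose them into scenarios without writing new automation code.

```python
# Hypothetical sketch: developers expose reusable, parameterized test
# steps; testers compose them into scenarios on the fly.

COMPONENTS = {}

def register_component(name):
    """Decorator used by developers to expose a reusable test step."""
    def wrap(func):
        COMPONENTS[name] = func
        return func
    return wrap

@register_component("login")
def login(user, password):
    # In a real framework this would drive the application under test;
    # here we just return a trace of what happened.
    return f"logged in as {user}"

@register_component("create_order")
def create_order(item, quantity):
    return f"order: {quantity} x {item}"

def run_scenario(steps):
    """Execute a tester-authored scenario: a list of (component, kwargs)."""
    return [COMPONENTS[name](**kwargs) for name, kwargs in steps]

# A tester composes a scenario interactively, reusing the components:
scenario = [
    ("login", {"user": "alice", "password": "secret"}),
    ("create_order", {"item": "widget", "quantity": 2}),
]
print(run_scenario(scenario))
```

The point of the design is the split of responsibilities: the component bodies (which change when the product changes) belong to developers, while the scenario lists (which change when the test intent changes) belong to testers.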
Should such a tool (or framework) be flexible enough to support on-the-fly automation, testers would test the deliverables mainly through the automation interface. This in turn makes "interactive automation testing" possible. The scripts resulting from the actual functional test would immediately become part of the functional regression test set used in future sprints. Of course, there is always room for usability testing, which can verify that the product being developed also supports the user interface requirements from a human perspective.
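How a tester's interactive session could turn into a regression asset might look as follows. This is a hedged sketch under stated assumptions: the step format, file name, and the two inline components are illustrative, and a real framework would persist scenarios in whatever format its runner understands (JSON is used here only because it round-trips cleanly).

```python
# Hypothetical sketch: the steps a tester executed interactively are
# serialized verbatim into the regression set, then replayed unattended
# (e.g. against a nightly build) in a later sprint.
import json
import os
import tempfile

# Stand-ins for developer-built automation components:
COMPONENTS = {
    "login": lambda user: f"logged in as {user}",
    "checkout": lambda total: f"charged {total}",
}

def run(steps):
    return [COMPONENTS[name](**kwargs) for name, kwargs in steps]

# The tester's interactive session produced these steps (lists, not
# tuples, so the structure survives a JSON round trip unchanged):
session_steps = [
    ["login", {"user": "alice"}],
    ["checkout", {"total": 42}],
]

# Save the session into the regression set ...
path = os.path.join(tempfile.gettempdir(), "regression_checkout.json")
with open(path, "w") as f:
    json.dump(session_steps, f)

# ... and later, the unattended run replays it:
with open(path) as f:
    replayed = json.load(f)
print(run(replayed))
```

The attraction of this shape is that nothing needs to be "translated" into a regression test afterwards: the session record is the regression test.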
As for the maintenance of the regression test set, with this approach the maintenance effort is split between developers (for the automation components) and testers (for the actual test scenarios). On one hand, this enables greater support for the regression test pack within the team; on the other, it helps free up valuable developer time for future development.
Wrapping this up, it seems there is a lot to be gained with this automation approach in terms of the testing efficiency of agile teams. The main idea is that automation developers, instead of producing complete automated test scripts, focus on developing flexible automation components that testers then use (interactively) within the team to produce and maintain, on their own, an independent regression test set while testing the new functionality. This should help organize the automation development work better and make agile teams more flexible with respect to test automation.
Article written by Tibor Kladek for Seavus. © Seavus AB