A couple of weeks back, we were running a training session for a team. We talked at length about the benefits of team members who specialize in one discipline moving around to help other disciplines. As I was explaining the role test engineers play in this regard, Frank Vega (https://twitter.com/fxvega), who was helping me out with the training, put it much more succinctly: what we were talking about is the difference between testing and quality assurance. Let us draw that distinction more clearly.
Testing, as defined by Wikipedia (yes, the source of all truth), is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Testing carries the connotation that most or all of the code for the piece of software being developed is already written. The piece of software might mean the entire project in a waterfall world or just one story in an agile world. In essence, the act of testing can begin only after the code has been written. There are many techniques that make this process effective. Test case design techniques like pairwise testing and equivalence partitioning, along with automated tests at different levels, can make this activity a very profitable one. Many testers can even open up the code and write tests based on the code they see. White box testing is very effective for finding flaws and writing well-guided test cases.
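To make those test design techniques concrete, here is a minimal sketch of equivalence partitioning in Python. The validate_age function and its 18-120 valid range are hypothetical, invented purely for illustration: the idea is that one representative value per equivalence class, plus the boundaries between classes, covers the input space without testing every possible value.

```python
def validate_age(age: int) -> bool:
    """Hypothetical function under test: valid ages are 18-120 inclusive."""
    return 18 <= age <= 120

# The input space splits into three equivalence classes:
# below the range, inside the range, and above the range.
# One representative per class, plus the boundary values, is enough.
test_cases = [
    (5, False),    # class: below minimum
    (17, False),   # boundary: just below minimum
    (18, True),    # boundary: minimum
    (50, True),    # class: well inside the range
    (120, True),   # boundary: maximum
    (121, False),  # boundary: just above maximum
]

for age, expected in test_cases:
    assert validate_age(age) == expected, f"validate_age({age}) should be {expected}"
print("All equivalence class cases pass.")
```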
On the other hand, Wikipedia defines quality assurance as a way of preventing mistakes or defects in manufactured products and avoiding problems when delivering solutions or services to customers. Quality assurance makes sure that quality is built into every artifact and every part of the process. Quality assurance begins with the writing of requirements and does not end until the code ships. Reviewing and collaborating on requirements, code, test cases and automated tests are all activities that provide assurance, at multiple levels, that bugs are being avoided. This is not to say that the actual execution of test cases is not a great source of quality assurance, but we often waste our best quality assurance minds on pure testing activities rather than eliciting their help in other quality assurance activities.
While testing is a product-oriented activity, quality assurance is a process-oriented activity. Testing is meant to find defects so that the problems can be resolved, while quality assurance is meant to prevent defects so that the problems never arise.
Similar to the concepts mentioned in the last post on this blog, QA folk have a much larger role to play than just testing code for the product. Most of us are familiar with the exponential increase in the cost of a bug the later it is found in the development process. Getting your quality engineers to review and/or contribute to requirements can help better define stories and, more often than not, prevent bugs from being introduced before any code is written. Getting QAs to sit with engineers while they are writing code can help the developer understand the testing strategy and avoid writing the defects in the first place. Very often I have observed QAs act as the bridge between the business language of the product analysts and the technical jargon of the software engineers.
This is very different from the traditional perspective that "testers should test". How many times have we discovered that the requirements were written with assumptions that were invalid? Having a QA review them even before coding for a story started would not only have prevented the defect/rework from being introduced, but would also have prevented context switching for the devs and the analysts, who might have moved on to the next story by then. I am in no way suggesting that actually testing the coded piece of software is a useless activity. What I am suggesting is that if all your QA folks are doing is writing and executing test cases and automation, that is a waste of their talent.
There is still specialization to be carved out in the discipline of developing automated test suites. You still need the more technical type of tester who is great at developing test suites that test the product well and prevent regression defects from being introduced by future stories. These folks can still review requirements, open up the code to develop test strategies and, most importantly, fix the defects that they find without ever sending the story back to the engineer(s) who worked on it. Once again, this prevents context switching and keeps the flow of stories going. The test engineers in this case provide quality assurance by being a lot more involved in the coding of the product than in the writing of the requirements.
What about the other roles on the team and their contribution to quality assurance? Product analysts can and should help test stories as much as testers should help review and write requirements. Engineers should be involved in activities like code reviews, writing unit tests, developing other automated test suites and using their skills in the best possible ways to ensure that a quality product is being produced. The engineers sometimes have the best handle on assuring and testing quality as far as performance, deployment and security concerns go. They might be able to contribute to requirements, code and test creation, especially in these disciplines.
None of this is revolutionary; we have always asked our teams to produce the highest quality products possible. The problem is that we have tried to do this with a heavy emphasis on testing our products rather than performing quality assurance activities on them. I feel very lucky to work with some great quality assurance folks on my team, alongside product analysts and developers who do their part in ensuring that quality is built into the product rather than throwing things over the wall at the "tester" and making it their problem.
I totally support the notion of quality assurance over test execution: having someone on the team with a good understanding of what needs to be taking place to ensure something is built correctly and that all permutations of logic have been covered in the test framework; someone who can review the tests created by the team to ensure that they are suitable, maintain the expected coverage and are data agnostic.
However, I don't agree with your comments in the third paragraph from the end about having testers create automated regression suites. I believe this should be the responsibility of the development community, writing tests into their code as they go. Having a tester involved creates an accountability handover which wastes time, creates a gap for things to fall through and reduces visibility of progress and ownership. I blogged on the same topic a while ago: https://philagiledesign.wordpress.com ("mini-waterfall - look at your testing").
Phillip, first of all, thanks for your comment and for taking the time to read through and evaluate the post. I agree with you: there is no reason for a "tester" to have accountability for the tests. The writing of the tests could be done by anyone on the team; that person is putting on their testing hat at that time. In fact, once the test cases are in place, anyone on the team who has the bandwidth to automate them should, and hopefully does. This definitely avoids the mini-waterfall within a sprint (in fact, it leads to a great place where we don't even need sprint boundaries). I do think another team member should review to ensure that the automation covers the required test cases.
I do draw a distinction between unit tests and integration/system tests. Unit tests are written to make sure that future development does not change a unit of code's intended behaviour, while the other kinds are written to make sure that the product's behaviour does not change unless that is what was specified. Again, the ownership of these might lie with any team member (my last post was similar to yours - http://agileroundup.blogspot.com/2015/03/not-my-job-and-new-definition-of.html), as long as there is a review process built in to ensure the quality of these suites.
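To illustrate the distinction drawn in this reply, here is a minimal sketch using Python's built-in unittest module. The names (parse_amount, CheckoutService) are hypothetical, invented only to show the two levels: the unit test pins the intended behaviour of one unit of code, while the integration test pins the behaviour of the assembled product path.

```python
import unittest

def parse_amount(text: str) -> int:
    """Hypothetical unit: parses a price string like '$1,200' into cents."""
    return int(text.strip().lstrip("$").replace(",", "")) * 100

class CheckoutService:
    """Hypothetical component that wires parse_amount into a product workflow."""
    def total(self, prices: list[str]) -> int:
        return sum(parse_amount(p) for p in prices)

class UnitTests(unittest.TestCase):
    # Pins the intended behaviour of one unit of code, in isolation.
    def test_parse_amount_strips_symbols_and_separators(self):
        self.assertEqual(parse_amount("$1,200"), 120000)

class IntegrationTests(unittest.TestCase):
    # Pins the behaviour of the assembled product path, not one unit.
    def test_checkout_totals_multiple_prices(self):
        self.assertEqual(CheckoutService().total(["$10", "$5"]), 1500)

if __name__ == "__main__":
    unittest.main()
```

Under this split, the unit tests live next to the code and guard its intended behaviour against future changes, while the integration tests exercise the product through its public surface; either kind can be written by any team member, with a review step to keep the suite honest.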