There are many different styles of QAing a project, and I’ve found that much of it depends on who you are working with, what project you’re working on, and what is expected of the final outcome. I have worked hard at developing the Collaborative Testing environment at Zywave, but that doesn’t mean it’s the best method for everyone; it’s just another method that might work if you’re feeling stuck in the ways you’re currently testing.
So What Is Collaborative Testing?
At Zywave, we define Collaborative Testing as the QA team (often more specifically the functional testers) working side-by-side with the dev team to ensure the acceptance criteria are met, prior to their work reaching the QA environment, where it officially becomes ready to test. This isn’t an all-inclusive definition, nor is it one that has been officially decided on, but rather a statement about what it looks like.
So what does it look like?
When our project managers/directors/BAs give the team a story with acceptance criteria (and everything has been planned and decided on), the dev team begins building out the functionality. During this time, I often ask them to show me what they’re doing, walk me through their code (even though I don’t really understand it all), and show off the UI a little, so we can have an open discussion about their work. We have been able to avoid a lot of problematic situations because we cleared up questions prior to them completing their work.
I have found that people interpret stories slightly differently when simply reading them. For example, we’ve all seen the picture of the swing in the tree: what the BA planned, the designer created, the dev developed, and the QA tested. That image shows that each team member thought about the swing a little differently, and therefore possibly diverged from what the project manager was hoping for.
These are the things we attempt to avoid with Collaborative Testing. Working alongside the developer while they are writing the code ensures that we are on the same page with each other. Then, when questions arise, we can approach the BA and get further clarity on expectations. It saves a lot of the time and effort that is often wasted with over-the-wall or waterfall approaches.
What about bugs?
Bugs are always going to happen. It’s inevitable (a post to come perhaps). However, I have found that by working closely with devs during the development process, we are able to prevent a large handful of bugs from making their way to the QA environment. We do this a few different ways.
When I Drive
Sometimes the developer will call me over and ask me to sit at their computer and test their code locally while they go get a cup of coffee. Assuming I’ve already got a good understanding of the story requirements and have possibly written my test plan, I am able to get into their work and pull things apart. Just like I might have done (or still will do) when the code is in QA, I can test it hard on their computer and jot down a list of items for them to fix that I was able to catch. This, however, does not get the developer thinking like I do, but it does allow me to get into an in-depth test scenario and potentially reveal a big issue that I was expecting to see.
When Developers Drive
A similar style is for the developer to drive while I look over their shoulder and ask them to perform miscellaneous clicks and entries, ensuring the requirements are met. Sometimes we are able to get into deeper scenarios, but this method often stays a bit more at the surface of the functionality. It gets the developer into the mindset of the QA analyst, beginning to think about different workflows to test on their own before calling me over to collaborate next time.
I have found that many developers use one or both methods with me at different times, and the results of these conversations are great. I’ve seen devs testing their code in ways they normally wouldn’t. Or, when they call me over and I throw a wild scenario at them, they tell me they’ve already tested and verified it works, prove it, and then inform me that they knew I was going to ask that. We have begun to save everyone time, and developers are producing fewer bugs in their code up front.
What Are the Dilemmas?
There are a handful of things that can hold this process up, cause it to not work so well, or prevent Collaborative Testing from happening altogether. Some of these things can be worked on or fixed, and some will just never change.
We all know about the stereotypical developer, with their quiet personality, perhaps a bit shy, maybe staring at the floor when they talk to you, etc. Sometimes we gain this impression from experience, and other times from TV shows and movies like The IT Crowd, The Social Network, or Silicon Valley. In my experience, I have found a wide variety of developer styles:
- The stereotypical devs
- The very outgoing devs
- The confident or possibly cocky devs
- The “my code is perfect” devs
- The “love the feedback” devs (my favorite)
- and many more
Each dev works differently. It’s up to you as the tester to figure out your team members’ personalities and challenge them to try something new. At Zywave, our Agile approach allows us to try something for a short time, and if it works, we keep it. If it fails miserably, we evaluate why, and then either try again or drop it completely.
Collaborative Testing isn’t for everyone; no method is. If you are the type of tester who prefers working entirely alone, or are not bold enough to ask a developer to show you what they have, then you might not be interested in Collaborative Testing. Perhaps you’re stuck in your old ways and what you’ve been doing just works. Well, I challenge you to step out of your comfort zone, have a discussion with your development team, and attempt some of the methods described above. The effort will pay off when you realize the developers are fixing almost all of the bugs before their code even gets into your hands.
Remote Teams

This has been an interesting experience for me and a challenge to overcome. I’ve had the opportunity to work with outsourced teams, where the developers were in another location and I was in the office. We had to communicate entirely over chat programs and the occasional phone call, while still attempting to collaborate.
We have made this work by, first, explaining the concepts above and what I wanted to do with them; second, repeatedly asking them where they’re at in their work throughout the day, because it’s so easy to go a whole day without talking when there is no in-person interaction; and lastly, joining them in a screen-sharing program while they walk me through their work, the design, and the development of the UI. Then I get the opportunity to have them go through a few scenarios before promoting the work to QA. This falls under the developer-driving method above.
Company Policy or Structure
One of the hardest barriers to overcome, and perhaps one of the most common I’ve heard from talking with the local QA community, is an office structure set up to promote collaboration within departments instead of around the product or workflow. Meaning, the developers all sit together in one room, the QA in another, and so on. This is a difficult scenario, and one that may not work for this type of testing effort. However, if the company allows, you could go for a walk, visit their desks, and take that time to have them show you some things.
Another structure I’ve heard about is the over-the-wall approach some companies are so used to. With over-the-wall testing, code gets out of the developer’s hands entirely and into QA for testing. Then, when there is a bug, QA hands it back to the dev to fix. QA moves on, and the developer eventually gets to the problem, fixes it, and tosses it back over. This cycle repeats and essentially wastes a lot of time for both parties.
What are your thoughts on this? Have you tried something like this before? Is this just crazy talk? Let me know what you’re thinking, or if you have any other advice on this subject. I love learning additional ways to test with different people and products.