Last night I went to a QA event hosted by MKE SPIN, a local organization that puts together regular meetups on tech-related topics. This one was a Quality Assurance panel discussion with four panelists who have all been in the industry for a number of years, offering differing opinions on the direction of QA at their own companies and attempting to predict the direction of QA in the industry as a whole. There was some lively discussion, and I'll attempt to cover some of it here.
One panel member, from American Family Insurance, mentioned that he deals with a high volume of legacy product. His team has determined that automating all of their testing is the only way to ensure accuracy of functionality and longevity of the intended behavior. This sparked some good discussion on the importance of having testers who are not automating the process: testers who can think like a user, go through the workflow, and see other things that might be happening outside the realm of the automated test.
Another panel member argued that an automated test can only check what you tell it to, while someone exploring the product can provide so much value to the end user through what they find along the way. It was later revealed that the automate-everything guy was referring specifically to his legacy product, which never gets touched or changed; his tests just ensure the products still run and remain operational for the end user.
This led to a long night for me, thinking through the benefits of automation vs. exploratory testing. I am not an automation tester; I cannot write C#, Selenium scripts, or code in any other language or framework. I can barely read code, though I understand the logic and process behind it. I would consider myself a master exploratory tester, hands-on in the UI, acting as a hybrid BA/UXer as well. As I contemplated the benefits of automation, I kept coming back to this: in an agile, ever-changing environment, automation carries a serious amount of overhead that cannot be earned back in time savings over an explorer. A good exploratory tester can adapt quickly to new acceptance criteria, challenge the thought processes along the way, and get through testing faster than any automated test could.
However, in a scenario where the product's functionality will not be changing and the end-to-end testing effort is tedious and lengthy, a well-written automated test can provide substantial benefit, ensuring the functionality still works day after day without a UI tester checking it by hand.
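To make that concrete, a daily "is the legacy product still operational" check can be as simple as pinging a handful of key pages. This is only a minimal sketch of the idea; the URLs below are hypothetical placeholders, not anything from the panel.

```python
# Minimal daily smoke check for a stable legacy product: verify the
# app still responds on its key pages. URLs here are made-up examples.
import urllib.request

PAGES = [
    "https://legacy.example.com/login",
    "https://legacy.example.com/reports",
]

def page_is_up(url: str) -> bool:
    """Return True if the page responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        # DNS failure, timeout, connection refused, etc.
        return False

def run_smoke_suite(pages) -> dict:
    """Check every page and report the up/down status of each."""
    return {url: page_is_up(url) for url in pages}
```

Scheduled once a day, a report like this catches "the product quietly fell over" without anyone clicking through the UI, which is exactly the narrow job the automate-everything panelist described.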
Still deciding where I stand on all of this.
Test Cases Are Dead
One of the panel members was very adamant that test cases are a waste of time and effort, and shouldn't be used as the source of truth, or even spent time on, beyond a very basic "this is what I tested." I have gone back and forth on my opinion of test cases and their usefulness. A few QA testers at Zywave had a good discussion on the benefits of test cases as well, and here is what I wrote.
I agreed with much of what was said about test-case stupidity (that was pretty harsh, but I'm quoting), in that I spend so much time writing test cases to be the source of truth, when in reality I believe the production environment is the source of truth. We never push to production without it meeting all the acceptance criteria, which means that in its current implementation, it is the source of truth for intended functionality. When I pair test with my devs or ask them to go ham on the UI, I want them to be completely agnostic of any test cases, so that they don't follow step-by-step expectations. It helps them look for issues outside of any test case I could write. It also trains them to be better testers, ultimately putting myself out of a job! 😋 A developer of mine (among a handful of other devs I could point to) is living proof of someone who has developed a QA mindset thanks to the freedom and encouragement to think outside a test plan.
That being said, after testing my product for the last six months, there were a number of test cases another tester wrote that were extremely valuable for some far-out scenarios I may not have stumbled across just by exploring. So… still deciding where I stand.
I went on to discuss the usefulness of test cases with a few co-workers, and while most agree that the overhead is a waste of time, some use them for functionality accuracy and as a swarming opportunity with their developers. I don't love this answer, because I want my developers to learn to test outside the scope of my test plan.
Further discussion centered on where intended functionality lives: in the test plans, or, as I believe, in production. Here is how I feel:
I'm going to give test cases a 2–3 out of 10. I feel that intended functionality is found in production, not in a manually written test case. Another tester said it well: "it's more of a culture of good testing." If other products didn't have this culture established, then no amount of test cases would have made them any better. But if you spend a week exploring production, you know pretty quickly what the expected functionality is. I find it tedious and laborious to hand test cases off from team to team, when production is simply a URL away. And using test cases as the basis for test automation doesn't really make sense to me either…
Someone responded to my thoughts with an example of a product of ours that did not operate the way users wanted it to in production. Is that the source of truth or an edge case?
My response is that if a product is not doing what users want, then the users become the source of truth. No amount of test plans would have solved that problem.
All that to say, it was a good panel discussion and good follow-up today with my team.
Manual Testers and Code Learning
An audience member asked something like:
What do you think about manual testers who aren’t willing to learn code, will they have a short career?
The panel, and the industry, will always come to their own conclusions. One of the guys quickly ranted that "there are no manual testers, we're all just testers." I will go on about that next. But this guy's opinion was more that if you aren't willing to learn to code, it doesn't mean you won't be successful; it just requires you to be charming and good at flexing in other areas. He did mention that everyone should be able to read or understand code, but only for the benefit of conversations with your developers, and because it will naturally make you a better tester. Another panel member took the side of strict automation across the industry, arguing that everyone needs to move toward it and adapt to the coding way of life.
But, because you’re reading my blog, you get my opinions.
There is no level of current automated testing that can do what I can do. I will out-perform any automated test any day. Why? Because an automated test can only test the exact things you told it to. Even the most elaborate automated test will still only check the exact functionality you typed in. It will not think like a user, act like a user, have personal preferences or opinions, or approach your system with a good or bad attitude because of a meeting it had earlier that day. While there may be ways to simulate some of this, even things like distractions at your desk or elevator doors closing mid-use, no automated test can take into account all of the feelings, emotions, and personalities of your users.
Automated tests are also tedious and laborious to update. A small change in the code can mean half a day spent updating an automated test just to keep it running, while a good exploratory tester adapts to the code change almost instantly; the testing cycle is only as long as the code change itself.
Automated tests also cannot catch that thing over on the other side of the page that randomly appears, glitches, or jumps, because as far as the automated test is concerned, everything performed as it should.
Also, typical automation testers are focused on testing what was given to them, where an exploratory tester is often thinking about what wasn't given to them: what cracks need to be filled, or where the holes are in the acceptance criteria as written. This is a slightly unfair statement toward automation testers, as I work with some pretty outstanding ones who can think like an exploratory tester, but because the automation tester role is not far from that of a software developer, this is a typical developer mentality. Developers are not trained to think like an exploratory tester unless an exploratory tester trains them to.
So many more thoughts and points. But essentially I think this: I am currently blessed to work for a company (Zywave, based out of Milwaukee) that believes in the importance and value of a dedicated QA team split between Analysts and Engineers. It gives us all the freedom to excel in our areas of expertise, and it really drives our company's throughput. I regularly take on the role of BA or UXer in addition to my daily exploring, and this has proven itself time and again in products that are far superior to others.
No Such Thing as Manual Testers
This was a semantic argument that really didn't need to be driven as hard as it was. We can call it whatever we want: functional testers, manual testers, software testers, exploratory testers, etc. I think the point the panel member missed is that it's not so much a title as a style. He continued to drive home the idea that "there is no such thing as manual testers, we're all just testers. There isn't a Manual Manager, you don't manually manage things…" but I think where he got lost is that these are styles. So while we are all testers, sometimes we are performing manual testing, regression testing, or automated testing, just like a manager will perform manual intervention, micro-managing techniques, and automated managing.
There was a brief, unfortunate stretch where, whenever a question was asked, this panel member wasn't really getting to the heart of it, instead getting caught up on the title of a tester. It didn't seem to have much effect on the overall night, but it definitely could have swayed his emotions and answers, given how hung up he was on the wording.
My title is QA Analyst, but what does that mean day to day? It means I do countless activities on a daily basis that have nothing to do with writing a single line of automation code. At the end of the day, when someone asks me what I do at Zywave, I tell them my title, and I will often say I'm a functional tester or a manual tester. Yes, we're all just testers, but so are developers, and yet they have vast differentiation in what they do (back-end, front-end, etc.).
Migrating to Agile
I'm going slightly out of order relative to the panel discussion, but I wanted to touch on one final thing. A question was asked about the migration into Agile development and the hurdles encountered along the way. If you are not familiar with Agile and the transition out of Waterfall, this may not make sense to you.
A panel member said they were hired at their company just as it was going through this transition, and mentioned that it was slow and lengthy, but the benefits were substantial. They mentioned the downfalls of Waterfall: QA being separated from the development team, and the deliverable timeline constantly getting pushed back. A big hurdle in the transition is figuring out what deliverables the dev team can get to the QA team early in the sprint, so testing can begin early on.
Another practice that has developed is testing early and testing often. Shift-left testing is an example of this, where testing happens simultaneously with development. I have adopted pair/collaborative testing, where I sit next to my developers while they code to ensure that what they're doing is what I expect, based on our interpretations of the user story.
These agile processes prove beneficial because very little time is wasted on over-the-wall testing or going back to redevelop things. By testing alongside my developers while they code, we find the bugs immediately and don't have to come back to them later. Since this is how I spend most of my day, there would be no time to automate it; thus, another benefit of a good exploratory tester.
These are just one man's opinions. That doesn't mean they're right, or thorough, just personal. Feel free to hammer on me a little and start a lively discussion. I welcome it!
Remember to follow me on Instagram @Lifeofatester