As a QA professional, one of your biggest fears will always be allowing something big to get past you. You miss something that ends up with your company directors being dragged out of bed in the middle of the night, and your site needing a diversion to a shunt page, or honest-to-goodness downtime, to get fixed. And that inevitably leads to all those awkward questions about why the issue was missed in testing.
I don't have an immediate answer to those questions, and your mileage may vary depending on the scale of the issue and the reason behind it. But I'd strongly advise against saying 'Hey, it's not like QA put the bug in there in the first place!' and throwing your developers under the bus, unless you want to completely alienate yourself within your technology team.
In short, if you've missed a P1 shipstopper issue, you're in trouble, so prepare to suck it up. But how can you make sure that this never happens to you?
You can't. And that's okay.
We've all heard the trite saying about the one thing that you don't test being the one thing that will break production. But to me, that particular pearl of wisdom can be lumped into the same category as your keys always being in the last place you look. By definition, edge case issues occur in areas that you'd never dream of testing, or involve steps that you'd never believe that anyone would actually take.
So even if, for some insane reason, you've opted to attempt 100% test coverage and decided to test every user scenario you can think of, you cannot and will not cover everything. Getting to production with zero issues is an utterly impractical goal that will only lead to frustration and disappointment when it is inevitably not met.
Most folks in modern technology organisations understand that shorter SDLCs come with risks, so in these environments, QA's function is not to catch everything, but to keep the risk of issues making it to production to a minimum. Sadly, not all business stakeholders see things that way, and it's down to technology directors to make them understand that going faster comes at a price, and that, as per Brooks's Law, throwing more people at QA will only muddy the waters. So it's important that QA teams think carefully about their priorities when testing at speed.
Of course, it's vital that you ensure that requirements have been met, and that your software does what it has been designed to do (according to the precise needs of the business, not your interpretation or your development team's interpretation of those needs; if your sprint planning sessions are done right, you've already nailed this). But that isn't the be-all and end-all of your testing scope, and of course, just testing the happy path is a recipe for disaster.
As a QA professional you know what the software is supposed to do in detail, but end users haven't read the user stories, and they inevitably wander off and do stuff they're not supposed to. So once you've established the software's fitness for purpose, exploratory testing is necessary to ensure your technology director doesn't get that 3am phone call from the company president telling him that his latest release is causing satellites to drop out of the sky. But how far down the exploratory testing rabbit hole do you go?