As testers, we absolutely need to focus less on acceptance criteria. I recently watched/listened to Nicola Lindgren's chat with Ben Dowen (above), discussing testing beyond requirements. Needless to say, it struck so many chords with me. I was going to tweet about it, but then thought: I have too much to say about this. 👀👀
Firstly, I agree 100%: we need to be testing outside of the acceptance criteria, and we need to focus less on them. I would argue that at least 80% of bugs (if not more) are found outside of the acceptance criteria (Ben even put out a tweet asking for people's favourite bugs found outside of acceptance criteria). Why is that? It could be for many reasons…
One may be that developers also have access to those acceptance criteria. Any developer worth their salt will be developing the software explicitly against them, perhaps using practices like Acceptance Test Driven Development (ATDD). The chances of finding bugs against the acceptance criteria are therefore relatively slim.
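To make that concrete, here is a minimal, hypothetical ATDD-style sketch (the function and criteria are invented for illustration, not from the conversation): each check maps one-to-one onto a written acceptance criterion, so a developer coding against the same list will almost certainly make these pass before a tester ever sees the feature.

```python
# Hypothetical checkout function, written directly against the
# three acceptance criteria listed below it.

def apply_discount(total: float, code: str) -> float:
    """Apply the SAVE10 code: 10% off orders of 20.00 or more."""
    if code == "SAVE10" and total >= 20.0:
        return round(total * 0.9, 2)
    return total

# Criterion 1: SAVE10 gives 10% off orders of 20.00 or more.
assert apply_discount(20.00, "SAVE10") == 18.00
# Criterion 2: SAVE10 does nothing below the 20.00 threshold.
assert apply_discount(19.99, "SAVE10") == 19.99
# Criterion 3: unknown codes leave the total unchanged.
assert apply_discount(50.00, "BOGUS") == 50.00

print("all acceptance criteria pass")
```

If the developer and the tester both work from exactly this list, the tester is mostly re-running checks that already pass, which is why the interesting bugs tend to live elsewhere.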
Less is more
When it comes to acceptance criteria, less is more. The reason (and Ben mentioned this in the conversation above) is that they can be very restrictive in what you test. If people are presented with a long list of things to check, chances are they will focus on that long list. Give them a few key acceptance criteria instead, leaving room for exploring and actually testing the app, and by testing I mean forming hypotheses and observing the outcomes, and you may well find more than you bargained for.
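That "forming hypotheses" idea can be sketched in code. Below is an invented example (same toy discount function as a stand-in for any feature): none of these inputs appear in any acceptance criteria, but each one encodes a testable hypothesis about how the software might misbehave. A FAIL here isn't automatically a bug, it's a conversation to have with the team.

```python
# Toy feature under test (illustrative only).
def apply_discount(total: float, code: str) -> float:
    if code == "SAVE10" and total >= 20.0:
        return round(total * 0.9, 2)
    return total

# Exploratory hypotheses: questions the criteria never asked.
hypotheses = {
    "lower-case code is accepted": apply_discount(30.0, "save10") == 27.0,
    "negative total is rejected": apply_discount(-20.0, "SAVE10") == -20.0,
    "whitespace around the code is tolerated": apply_discount(30.0, " SAVE10 ") == 27.0,
}

for description, held in hypotheses.items():
    print(f"{'PASS' if held else 'FAIL'}: {description}")
```

Two of the three hypotheses fail for this toy function, and neither failure would ever surface from checking the written criteria alone. That is the 80% territory.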
In a previous role, we were keen on acceptance criteria and keen on writing test cases against them. That is fine; what we didn't do enough of at the time, however, was test outside of those acceptance criteria.
Diverse Testing Minds(et)
This is where it’s important to put on your testing mindset, or if you’re like me, you never take it off. Start thinking how the software might behave under certain circumstances, outside of the acceptance criteria.
It's also why diversity is so important in Tech, and in testing. There is so much value in having people from different backgrounds and different schools of thought throughout Tech, and for testers specifically, in trying to test things differently. Show me a team who are all the same, and chances are they will test very similarly. The impact is that the testing coverage may be narrower, not just in the hands-on testing of the application but throughout the SDLC: in refinement, in planning, the questions being asked are more likely to be the same or similar.
I love being the person in the room who thinks of something that others haven’t thought of. I love being the person that makes everyone go “Oh, I hadn’t thought about that”. So much so, that I go out of my way to be that person.
Emotions and acceptance criteria don’t mix well
We all have emotions, and using software can be very emotive. Just last night I was on an e-commerce website where the experience of purchasing something left me visibly frustrated. I was on the checkout page, I put in my debit card details and hit "Pay". Unfortunately for me, there was a timeout with the Stripe gateway and the order didn't go through. I tried contacting their customer services, but they only had a chatbot; after exhausting all the options open to me, I eventually managed to work it through to speak to someone. They said to take the issue up with the payment provider, which wasn't that helpful, but to be fair, they had no record of the transaction or the order.
I decided to order the items again, this time on a credit card in case the worst happened. I went through to checkout, clicked on Add New Card, and lo and behold, instead of letting me enter the new card details it started the payment process again, and I was presented with the Order Confirmation screen.
I felt a whole range of emotions throughout this experience, with frustration chief among them.
At ASOS, we talk about positive emotions as something our customers need to feel when purchasing and shopping with us; obviously, positive emotions are good. It got me thinking: how do we test for that?
Anyway, the point here is that I am yet to see emotions as part of acceptance criteria. We cannot, and should not, dictate how people should feel when using our application, which is another reason why we should be testing outside of the acceptance criteria.
Rejection criteria over acceptance criteria
In this post, Michael Bolton suggests that rather than acceptance tests, we should call them rejection tests: if one of these tests fails, we reject the story. With that in mind, perhaps we should do the same for acceptance criteria and start thinking of them as "rejection criteria". If one of them isn't met, the story is rejected. This only works if we don't just check the criteria, but perform real, knowledgeable testing around the feature and the software as a whole, applying our craft and getting the most out of it.
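The rejection mindset can be sketched in a few lines. This is an illustrative sketch, not a real tool (the function name and story IDs are invented): the story is rejected the moment any single criterion fails, rather than "accepted" because a checklist was ticked, and passing only earns it further testing.

```python
from typing import Callable

def review_story(story: str, criteria: dict[str, Callable[[], bool]]) -> bool:
    """Reject the story on the first criterion that fails."""
    for name, check in criteria.items():
        if not check():
            print(f"REJECTED {story}: failed '{name}'")
            return False
    print(f"{story} not rejected (yet) - now go and actually test it")
    return True

# One failing criterion is enough to send the story back.
review_story("STORY-42", {
    "discount applies at the threshold": lambda: True,
    "unknown code leaves the total unchanged": lambda: False,
})
```

Note the wording in the passing branch: clearing the rejection criteria is the floor, not the finish line.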
It isn’t just testers who need to hear this
A lot of the testers I talk to agree on this topic and are already aware of it. Having said that, we as a community need to get better at talking beyond our peers. How can we do this? Conferences, meetups, sharing in wider communities.
Until we start highlighting that testing is moving on as a craft, that we're not just checking against acceptance criteria, and that we're finding bugs of real value before our customers do, we'll keep having the same discussions.
I'm also not saying we should bin acceptance criteria completely. It comes down to time versus reward: ask yourself whether you're getting enough reward from the time you spend defining acceptance criteria and subsequently testing against them. Your time might be better spent on exploratory testing than on defining acceptance criteria to the nth degree.