“A majority resoundingly pointed to testing as the area most likely to cause delays.” That’s not just us; it’s from a GitLab report. And for new product and modernization initiatives, code testing is as important as the user testing we do. In the early stages, it lays the framework for future functional expansion. In later stages, a misstep in a release can bring the whole product crashing down.
Thankfully, automated testing has significantly increased the speed at which developers can test. No wonder it is seeing wider adoption, especially among teams using agile methodologies for product innovation. [As demonstrated in this report]
The cost of fixing a defect increases exponentially the further it travels through the release cycle. And we’re not just referring to the dollar value, but to the cost of a missed opportunity, an upset user, and the erosion of trust that you’ve painstakingly built. At no point can defects be allowed to escape to production.
So, our agile product pods live by:
Test early, test continuously, and test EVERYTHING. Even the smallest changes.
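To make that concrete, here’s a minimal, hypothetical sketch of what “test even the smallest changes” can look like in practice: a one-line behavioral change (clamping a discount percentage) ships with its own regression tests. The function, names, and values are illustrative, not from any client codebase.

```python
# Hypothetical example: even a one-line change to a discount
# calculation gets its own regression tests before merge.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamping percent to [0, 100]."""
    percent = max(0.0, min(100.0, percent))  # the "small change" under test
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_small_change():
    # Without these checks, a negative percent would silently
    # inflate the price instead of leaving it unchanged.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(100.0, -5) == 100.0   # clamped low
    assert apply_discount(100.0, 150) == 0.0    # clamped high
```

Run under any test runner (e.g. pytest), this suite fails the moment the clamping behavior regresses, which is the point: the smaller the change, the cheaper the test that guards it.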
Knight Capital saw the extreme fallout from an escaped bug: a defect in their trading platform caused a $440 million loss in just 30 minutes. In another case, a software bug in a New Jersey hospital's vaccine scheduling system caused thousands of duplicate appointments.
Your developers want to automate: but be nuanced about it
According to the same report as above, the top four priorities for engineering teams are:
Automated testing is central to being agile. It ensures quality without delaying the launch date. In effect, it:
Test automation simplifies software testing, but it isn’t a silver bullet that can be applied indiscriminately. There's a “right” approach to test automation. As in The Wizard of Oz, to get to the truth you have to pay attention to what’s behind the curtain. To do test automation right, you’ve got to know what, when, and how much to automate.
An IBM study partially answered this question by making the ideal conditions for automating software testing pretty clear. They are:
So, how do you decide when to automate testing, and what exactly to automate? It depends on the stage of the product itself.
If you remember our last post about product innovation in 2023, we made a strong case for adapting methodologies and processes to the lifecycle of the product. The same applies to your testing strategy.
If you won’t take our word for it, take ChatGPT’s:
Here’s how we adapted our testing strategy for our clients at each stage.
In the early stages of a product's lifecycle, change is the name of the game. From the primary vision to the architecture, everything changes rapidly and frequently. At this stage, you’re not trying to establish that the product CAN work; you’re testing to see which version will win you paying customers.
And while being embarrassed by your MVP is a given, that is hardly an excuse for shoddy or low-quality work. Early adopters, your innovation enthusiasts and friendlies, might forgive initial tech setbacks, but those setbacks hardly inspire confidence. Plus, the cost of an error in a heavily regulated industry like HealthTech or FinTech can be devastating, as the NHS found to its chagrin when one uncaught coding error resulted in the confidential health data of 150,000 patients being shared.
Here’s an example of how we used test automation to maintain quality and accelerate the launch for a HealthTech startup.
Case study: The automation story of an a16z-funded organization
We used an evolutionary testing strategy to enable continuous delivery for this early-stage healthcare startup. We focused first on the golden user path (also known as the steel thread or the key user journey) and unblocked critical issues for end users. Automating a percentage of the easier-to-set-up test cases allowed us to deliver the core features of the product on schedule. It also freed our QA (Quality Assurance) engineers to devote more time to exploratory testing of new features, be creative, and really pay attention to the complex elements of the roll-out.
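As an illustration of what “golden path first” automation can look like, here’s a hypothetical sketch in Python. `FakeHealthApp` and the journey steps (sign-up, then booking) are stand-ins we invented for this post; the real product and its flows are not shown here.

```python
# Hypothetical sketch: automate the golden user path before anything else.
# FakeHealthApp is an invented stand-in for a real app client.

class FakeHealthApp:
    def __init__(self):
        self.users = {}
        self.appointments = []

    def sign_up(self, email: str) -> bool:
        """Register a user; returns True on success."""
        self.users[email] = {"verified": True}
        return email in self.users

    def book_appointment(self, email: str, slot: str) -> int:
        """Book a slot for a known user; returns total bookings."""
        if email not in self.users:
            raise PermissionError("unknown user")
        self.appointments.append((email, slot))
        return len(self.appointments)

def test_golden_path():
    # The key journey end to end: sign up, then book an appointment.
    app = FakeHealthApp()
    assert app.sign_up("ada@example.com")
    assert app.book_appointment("ada@example.com", "2023-06-01T09:00") == 1
```

The design choice is the coverage order, not the code: one automated end-to-end check of the critical journey catches the failures users would actually hit, while everything peripheral stays manual until it stabilizes.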
Typically, after the MVP launch, the focus shifts to everything and anything that'll enable an organization to use product-led growth strategies to get more paying customers, while simultaneously improving what’s there to retain existing customers. Therefore, from an engineering perspective, it’s really time to:
At this stage, our testing focus shifts toward ensuring even faster, frictionless deployments, without breaking anything that’s going well. We:
The right balance of automation and manual testing at this stage helps you move forward quickly and efficiently. You still want to reserve manual testing for the most complex, super-nascent, and likely-to-change scenarios, while moving the more stable, constant elements to automated testing.
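One way to operationalize that split is a simple triage rule: scenarios that have been stable recently go to the automated suite, while fast-changing ones stay with manual and exploratory testing. The sketch below is a hypothetical illustration; the threshold, field names, and cases are assumptions, not our actual process.

```python
# Hypothetical triage sketch: route stable scenarios to automation,
# keep volatile ones with manual/exploratory testing.

def triage(test_cases):
    """Split cases by how often their feature changed recently."""
    automated, manual = [], []
    for case in test_cases:
        # Assumed rule of thumb: 2 or fewer changes last quarter = stable.
        if case["changes_last_quarter"] <= 2:
            automated.append(case["name"])
        else:
            manual.append(case["name"])
    return automated, manual

cases = [
    {"name": "login", "changes_last_quarter": 0},
    {"name": "checkout", "changes_last_quarter": 1},
    {"name": "new_recommendations", "changes_last_quarter": 7},
]
auto, man = triage(cases)
# login and checkout are stable enough to automate;
# new_recommendations stays with exploratory/manual testing.
```

The exact signal (change frequency, defect density, or churn in requirements) matters less than having an explicit, repeatable rule for when a scenario graduates into the automated suite.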
Case study: The automation story as deployed for a growth-stage product at a top e-commerce organization
For a multinational e-commerce company, our teams tested the feasibility of automation first and ended up automating about 70–80% of the use cases. They also dramatically expanded the regression suite, ensuring zero slippages in deployment timelines. The success of test automation here hinged on how quickly we could stabilize newly released functions and move them into the new testing pipeline. Expanding the set of structured test cases you know you can rely on is crucial to moving fast.
This resulted in quick, accurate test executions and earned the client’s appreciation for a high-quality product.
But even at this point, we kept some exploratory manual testing: it allowed us to catch stray issues, including occasional slip-ups caused by new roll-outs, and to build more test cases.
For the next stage of the product, having a diversity of reliable test cases is going to be crucial, and you want to prepare for as many scenarios as possible.
From a product priority standpoint, the focus shifts to accelerating new user adoption and increasing engagement for existing users. The trouble with performing QA at this scale is that even a bug affecting 1% of users can wreak havoc on your reputation, simply because of the volume you are operating at. Minimizing any fallout when the stakes are this high is a no-brainer.
At this stage, there’s no one-size-fits-all strategy and no single way to set up or carry out test automation. You're effectively running testing strategies for two product lifecycle stages in parallel.
Of course, your regression suites, existing test cases, and scenario list should be extensive, provide excellent coverage, and run like clockwork. But continued growth depends on new releases, expansions, innovation, and feature launches. So part of your product is still changing as fast as an early-stage product, only this time there’s more to lose if there’s an accidental slip-up.
One command disconnected all of Facebook’s data centers, and as a result, one of the world’s most cutting-edge tech organizations went down for almost a full day. The loss is estimated at over $60 million for that single day. At a time when device-level cookie tracking was already wreaking havoc on the company's ad revenue, this dealt a much bigger blow to Facebook and all its subsidiaries.
The lesson: even the regular, everyday scenarios sometimes need one-on-one attention. Who would have thought that a bug in a software audit tool could take down Meta? Even simple maintenance code should be tested before it is allowed to touch your product’s backbone. This is also an excellent warning to start testing for fringe cases, the unlikely scenarios, and to build reliable, automated guardrail testing for those as well.
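A hedged sketch of what such a guardrail might look like: an automated pre-flight check that refuses known-dangerous maintenance commands before they ever reach production infrastructure. The patterns and command strings here are invented for illustration; a real guardrail would be tied to your own tooling and change-management process.

```python
# Hypothetical guardrail sketch: routine maintenance commands pass
# through an automated pre-flight check before touching production.

# Invented examples of patterns too dangerous to run unreviewed.
DANGEROUS_PATTERNS = ("disconnect all", "drop database", "shutdown --all")

def guardrail_check(command: str) -> bool:
    """Return True if the command is safe to run, False if blocked."""
    lowered = command.lower()
    return not any(pattern in lowered for pattern in DANGEROUS_PATTERNS)

def test_guardrails():
    assert guardrail_check("restart web-cache-03")          # routine op passes
    assert not guardrail_check("DISCONNECT ALL backbone")   # blocked
```

The guardrail itself is tested, which is the Meta lesson in miniature: the tool that is supposed to protect production deserves the same automated coverage as the product it protects.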
So for our customers at this stage, we like to evaluate the reliability of specific test cases and do automated and manual vulnerability scanning. Plus, exploratory testing is always ongoing.
Manual and automated testing aren't either/or choices. You can’t rely 100% on either. But a blend of the two, designed to reinforce and back each other up, ensures that you make smaller tradeoffs, take on less risk, and still get to market on time.
If there’s a different testing strategy that gets you results and reduces your time to market, comment on our post on LinkedIn, and we’ll carry on the conversation.
©2023 Zemoso Technologies. All rights reserved.