Breaking the Time Barrier:

Test Automation and Its Impact

on Product Launch Cycles

Zemoso Engineering Studio

Tuesday, September 13, 2022

“A majority resoundingly pointed to testing as the area most likely to cause delays.” That’s not just us; that’s from a GitLab report. The truth is, for new product and modernization initiatives, the actual code testing is as important as the user testing we do. In the early stages, testing lays the framework for future functional expansion. In later stages, a misstep in a release can bring the whole product crashing down.

Thankfully, automated testing has significantly increased the speed at which developers can test. No wonder it is seeing wider adoption, especially in teams using agile methodologies for product innovation, as demonstrated in this report.

Test Automation to Accelerate Product Launches

The cost of fixing a defect increases exponentially the further it travels into the release cycle. And we’re not just referring to the dollar value: there’s also the cost of a missed opportunity, an upset user, and the erosion of trust you’ve painstakingly built. At no point can defects be allowed to escape to production.

So, our agile product pods live by: 

Test early, test continuously, and test EVERYTHING. Even the smallest changes.

Knight Capital saw the extreme fallout from an escaped bug: a defect in their trading platform resulted in a $440 million loss in just 30 minutes. Another time, a software bug in a New Jersey hospital’s vaccine scheduling system caused thousands of duplicate appointments.

Your developers want to automate: but be nuanced about it

According to the same GitLab report, the top four priorities for engineering teams are:

  1. Move to automated testing
  2. Improve test automation coverage
  3. Execute tests faster
  4. Reduce regression testing time

Why is test automation essential for product innovation?

Automating testing is central to being agile. It ensures quality without delaying the launch date. In effect, it: 

  1. Reduces time to market
  2. Ensures faster feedback cycles 
  3. Delivers accuracy and increased coverage
  4. Improves test suite reliability significantly
  5. Catches regression early

What, when, and how much to automate? 

Automated testing simplifies software testing, but it isn’t a silver bullet that can be applied indiscriminately. There’s a “right” approach to test automation. Like in The Wizard of Oz, to get to the truth you have to pay attention to what’s behind the curtain. To do test automation right, you’ve got to know what, when, and how much to automate.

Ideal use cases for automation testing

Learning from the best: Microsoft’s Bill Gates’ words of wisdom

An IBM study partially answered this question by making the ideal conditions for automating software testing pretty clear. They are:

  1. The automated test cases won't need frequent changes. 
  2. The test cases are easy to automate. The more complex the task, the more difficult it is to automate.
  3. The comparative cost of automating is lower than that of executing the test manually. And here, for any stage launch, we insist that you look at both the dollar cost and the opportunity cost of not automating and slowing the development cycle.
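The third criterion, comparative cost, lends itself to simple arithmetic. The sketch below is a minimal break-even estimate, with entirely hypothetical hour figures: plug in your own build, upkeep, and manual-execution costs for a given test case.

```python
import math

def automation_break_even(build_hours, upkeep_hours_per_run, manual_hours_per_run):
    """Number of runs after which automating a test case becomes
    cheaper than executing it manually; None if it never does."""
    saved_per_run = manual_hours_per_run - upkeep_hours_per_run
    if saved_per_run <= 0:
        return None  # manual execution stays cheaper: skip automation
    return math.ceil(build_hours / saved_per_run)

# A stable regression case: 8h to script, 0.25h upkeep per run,
# versus 0.5h to execute manually -> pays off after 32 runs.
print(automation_break_even(8, 0.25, 0.5))
```

If a case breaks even only after hundreds of runs, or never, that is a signal it belongs in the manual bucket, at least for now.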

So, how do you decide when to automate testing, and what exactly to automate? It depends on the stage of the product itself. 

Automation testing in product maturity context 

If you remember reading our last post about product innovation in 2023, we made a strong case in favor of adapting methodologies and processes to the lifecycle of the product. The same applies to your testing strategy as well. 

If you won’t take our word for it, take ChatGPT’s: 

ChatGPT said it best

Here’s how we adapted our testing strategy for our clients at each stage.

Testing automation for early-stage products

In the early stages of a product’s lifecycle, change is the name of the game. From the primary vision to the architecture, everything changes rapidly and frequently. At this stage, you’re not just trying to establish that the product CAN work; you’re testing to see which version will win you the paying customers.

And while being embarrassed by your MVP is a given, that’s hardly an excuse for shoddy or low-quality work. Early adopters, your innovation enthusiasts and friendlies, might be forgiving of initial tech setbacks, but those setbacks hardly inspire confidence. Plus, the cost of making an error in a heavily regulated industry like HealthTech or FinTech is devastating, as the NHS found out to its chagrin when a single uncaught coding error resulted in the confidential health data of 150,000 patients being shared.

Here’s an example of how we used test automation to maintain quality and accelerate the launch for a HealthTech startup. 

Case study: The automation story of an a16z-funded organization

We used an evolutionary testing strategy to enable continuous delivery for this early-stage healthcare startup. We focused on the golden user path (also known as the steel thread or the key user journey) first and unblocked critical issues for end-users. Automating a percentage of easier-to-setup test cases allowed us to deliver the core features of the product within the timeline. It also allowed our QA (Quality Assurance) engineers to devote more time to the exploratory testing of new features, be creative, and really pay attention to the complex elements of the roll-out. 

Golden Path or Steel Thread or Key User Journey for Test Automation
This is how evolutionary automation enabled us to adopt continuous delivery
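To make the golden-path idea concrete, here is a minimal sketch of what such a check might look like. The `FakeApp` stub and its sign-up/book/confirm flow are invented for illustration; in practice the same three steps would drive a real staging deployment through its key user journey.

```python
# Minimal golden-path ("steel thread") check for a hypothetical
# appointment-booking flow. FakeApp stands in for the real product API.

class FakeApp:
    def __init__(self):
        self.users, self.appointments = {}, []

    def sign_up(self, email):
        self.users[email] = {"email": email}
        return email in self.users

    def book(self, email, slot):
        if email not in self.users:
            raise ValueError("unknown user")
        self.appointments.append((email, slot))
        return len(self.appointments) - 1  # appointment id

    def confirm(self, appointment_id):
        return 0 <= appointment_id < len(self.appointments)

def test_golden_path():
    """One end-to-end pass through the key user journey."""
    app = FakeApp()
    assert app.sign_up("early@adopter.io")        # step 1: onboard
    appt = app.book("early@adopter.io", "09:00")  # step 2: core action
    assert app.confirm(appt)                      # step 3: outcome visible

test_golden_path()
print("golden path ok")
```

The point of automating this one journey first is that every deployment gets a fast, unambiguous signal that the path users actually pay for still works, while everything else stays open to exploratory testing.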

Testing automation for growth-stage products

Typically, after the MVP launch, the focus shifts to everything and anything that will enable an organization to use product-led growth strategies to win more paying customers, while simultaneously improving what’s there to retain existing customers. Therefore, from an engineering perspective, it’s really time to:

  1. Pay attention to every detail and bring engineering precision to stabilize the software even more
  2. Pressure-test to ensure that the product will perform incredibly well as you increase usage, users, and jobs to be done
  3. Add more functionalities and capabilities

At this stage, our testing focus shifts to ensuring even faster, frictionless deployments, without breaking anything that’s going well. We:

  1. Identify potential automation scenarios
  2. Create test cases and segregate them as “Automated” or “Manual”
  3. Create and run automation scripts on stable features
  4. Execute the regression suite before deployment
  5. Give the green signal faster to advance deployment to the next stage

The right balance of automation and manual testing at this stage helps you move forward quickly, and efficiently. The idea is that you still want to reserve manual testing for the most complex, super-nascent, and likely-to-change scenarios, but move the more stable, constant elements to automated testing. 

Case study: The automation story as deployed for a growth-stage product at a top e-commerce organization

For a multinational e-commerce company, our teams tested the feasibility of automation first and ended up automating about 70–80% of the use cases. They also exponentially expanded the regression suite, ensuring zero slippage in deployment timelines. The success of test automation hinged on how quickly we could stabilize the initially released functions and move them into this new testing pipeline. Expanding the set of structured test cases that you know you can rely on is crucial to moving fast.

The result: quick, accurate test executions, and a client who appreciated the high-quality product.

But at this point, we still kept some exploratory manual testing. It allowed us to catch stray issues, including occasional slip-ups caused by new roll-outs, and to build more test cases.

For the next stage of the product, having a diversity of reliable test cases is going to be crucial, and you want to prepare for as many scenarios as possible. 

Testing automation for scale-stage products

From a product priority standpoint, the focus shifts to accelerating new-user adoption and increasing engagement for existing users. The trouble with performing QA at this scale is that even a bug affecting 1% of users can wreak havoc on your reputation, simply because of the volume you are operating at. Minimizing fallout when the stakes are this high is a no-brainer.

At this stage, there’s no one-size-fits-all strategy and no single way to set up or carry out test automation. You’re running testing strategies for two product lifecycle stages in parallel.

Of course, your regression suites, existing test cases, and scenario list should be extensive, provide excellent coverage, and run like clockwork. But for continued growth, new releases, expansions, innovation, and feature launches are crucial. So part of your product is changing as fast as an early-stage product would, only this time there’s more to lose if there’s an accidental slip-up.

One command disconnected all of Facebook’s data centers, and as a result, one of the world’s most cutting-edge tech organizations went down for hours. The loss is estimated at over $60 million for that one day. At a time when device-level cookie tracking changes were already wreaking havoc on the company’s ad revenues, this delivered an even bigger blow to Facebook and all its subsidiaries.

The lesson: even the regular, everyday scenarios sometimes need one-on-one attention. Who would have thought that a bug in a software audit tool could take down Meta? Even simple maintenance code should be tested before it touches your product’s backbone. This is also an excellent warning to start testing for fringe cases and unlikely scenarios, and to build reliable, automated guardrail testing for those as well.
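A guardrail test can be as simple as a pre-flight check that refuses to apply a maintenance change whose blast radius is too large. The sketch below is purely hypothetical: the data shape and the 20% threshold are invented, and a real system would check capacity, not just node counts.

```python
# Hypothetical guardrail run before applying a maintenance command:
# reject any change that would take down too many backbone nodes at once.

def guardrail_ok(total_nodes, nodes_affected, max_fraction=0.2):
    """Allow maintenance only if it touches <= max_fraction of nodes."""
    if total_nodes <= 0:
        return False  # nothing to operate on: fail closed
    return nodes_affected / total_nodes <= max_fraction

# Routine maintenance on 3 of 100 routers passes the guardrail...
assert guardrail_ok(100, 3)
# ...but a command that would disconnect everything is blocked.
assert not guardrail_ok(100, 100)
print("guardrail checks passed")
```

The design choice worth noting is "fail closed": when the guardrail cannot evaluate the change, it blocks it, which is exactly the behavior you want in front of your product’s backbone.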

So for our customers at this stage, we like to evaluate the reliability of specific test cases and do automated and manual vulnerability scanning. Plus, exploratory testing is always ongoing.

The bottom line

Manual and automated testing aren’t either/or

Manual and automated testing aren’t either/or choices. You can’t rely 100% on either. But a blend of the two, designed to reinforce and back each other up, ensures that you make smaller tradeoffs, take less risk, and still get to market on time.

If there’s a different testing strategy that gets you results and reduces your time to market, comment on our post on LinkedIn, and we’ll carry on the conversation.

Got an idea?

Together, we’ll build it into a great product


Dallas, USA

London, UK

Waterloo, Canada

Hyderabad, India

©2024 Zemoso Technologies. All rights reserved.