ZEMOSO ENGINEERING STUDIO
January 20, 2023
6 min

Breaking the time barrier: Test Automation and its impact on product launch cycles

“A majority resoundingly pointed to testing as the area most likely to cause delays.” That’s not just us. That’s from a GitLab report. But the truth is, for new product and modernization initiatives, code testing is as important as the user testing we do. In the early stages, it lays the groundwork for future functional expansion. In later stages, a misstep in a release can bring the whole product crashing down. 

Thankfully, automated testing has significantly increased the speed at which developers can test. No wonder it is seeing near-universal adoption, especially in teams using agile methodologies for product innovation. [As demonstrated in this report]

Test Automation to Accelerate Product Launches
Source: Perfecto.io

The cost of fixing a defect increases exponentially the further it travels through the release cycle. And we’re not just referring to the dollar value, but to the cost of a missed opportunity, an upset user, and the erosion of trust that you’ve painstakingly built. At no point can defects be allowed to escape to production. 

So, our agile product pods live by: 

Test early, test continuously, and test EVERYTHING. Even the smallest changes. 

Knight Capital saw the extreme fallout from an escaped bug: a defect in its trading platform resulted in a $440 million loss in just 30 minutes. Another time, a software bug in a New Jersey hospital's vaccine scheduling system caused thousands of duplicate appointments.

Your developers want to automate: but be nuanced about it

According to the same GitLab report, the top four priorities for engineering teams are: 

   Move to automated testing

   Improve test automation coverage

   Execute tests faster

   Reduce regression testing time

Why is test automation essential for product innovation?

Automating testing is central to being agile. It ensures quality without delaying the launch date. In effect, it: 

   Reduces time to market

   Ensures faster feedback cycles 

   Delivers accuracy and increased coverage

   Improves test suite reliability significantly

   Catches regression early

What, when, and how much to automate? 

Automated testing simplifies software testing, but it isn’t a silver bullet that can be applied indiscriminately. There’s a “right” approach to test automation. Like in The Wizard of Oz, to get to the truth you have to pay attention to what’s behind the curtain. To do test automation right, you’ve got to know what, when, and how much to automate. 

Ideal use cases for automation testing

Learning from the best: Microsoft’s Bill Gates’ words of wisdom

An IBM study partially answered this question by making the ideal conditions for automating software testing pretty clear. They are:

1.   The automated test cases won't need frequent changes. 

2.   The test cases are easy to automate. The more complex the task, the more difficult it is to automate.

3.   The comparative cost of automating is lower than that of executing the test manually. And here, for a launch at any stage, we insist that you look at both the dollar cost and the opportunity cost of not automating and slowing down the development cycle. 
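The cost comparison in the third condition can be sketched as a simple break-even calculation. A minimal illustration with hypothetical figures (the numbers below are ours, not from the IBM study):

```python
def automation_break_even(setup_cost, maintenance_per_run, manual_cost_per_run, runs):
    """Return (automated, manual) total cost over `runs` executions of one test."""
    automated = setup_cost + maintenance_per_run * runs
    manual = manual_cost_per_run * runs
    return automated, manual

# Hypothetical figures: $800 to build the script, $2 of upkeep per run,
# vs. $25 of engineer time to execute the same test manually each time.
automated, manual = automation_break_even(800, 2, 25, runs=50)
print(automated, manual)  # 900 1250: automation pays off well before 50 runs
```

A test that runs on every deployment clears the break-even point quickly; a test that runs twice a year may never justify its setup cost, which is why condition 1 (infrequent change) matters so much.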

So, how do you decide when to automate testing, and what exactly to automate? It depends on the stage of the product itself. 

Automation testing in product maturity context 

If you remember reading our last post about product innovation in 2023, we made a strong case for adapting methodologies and processes to the lifecycle of the product. The same applies to your testing strategy. 

If you won’t take our word for it, take ChatGPT’s: 

ChatGPT said it best

Here’s how we adapted our testing strategy for our clients at each stage.

Testing automation for early-stage products

In the early stages of a product's lifecycle, change is the name of the game. From primary vision to architecture, everything changes rapidly and frequently. At this stage, you’re not just trying to establish that the product CAN work; you’re testing to see which version will get you a win with paying customers. 

And while being embarrassed by your MVP is a given, it’s hardly an excuse for shoddy or low-quality work. Early adopters are innovation enthusiasts, your friendlies, and they might be forgiving of initial tech setbacks, but those setbacks hardly inspire confidence. Plus, the cost of making an error in a heavily regulated industry like HealthTech or FinTech is devastating, as the NHS found out to its chagrin when one uncaught coding error resulted in the sharing of confidential health data of 150,000 patients. 

Here’s an example of how we used test automation to maintain quality and accelerate the launch for a HealthTech startup. 

Case study: The automation story of an a16z-funded organization

We used an evolutionary testing strategy to enable continuous delivery for this early-stage healthcare startup. We focused on the golden user path (also known as the steel thread or the key user journey) first and unblocked critical issues for end-users. Automating a percentage of easier-to-set-up test cases allowed us to deliver the core features of the product within the timeline. It also allowed our QA (Quality Assurance) engineers to devote more time to the exploratory testing of new features, be creative, and really pay attention to the complex elements of the roll-out. 

Golden Path or Steel Thread or Key User Journey for Test Automation
This is how evolutionary automation enabled us to adopt continuous delivery
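A golden-path check can start as small as a single end-to-end smoke test covering onboarding and the product's one critical action. A minimal sketch of the idea, using an invented in-memory FakeApp as a stand-in for a real client (the journey steps and names are illustrative, not the startup's actual suite):

```python
# FakeApp is a hypothetical stand-in for the product under test.
class FakeApp:
    def __init__(self):
        self.users, self.appointments = {}, []

    def sign_up(self, email):
        self.users[email] = {"verified": True}
        return email in self.users

    def book_appointment(self, email, slot):
        if email not in self.users:
            raise ValueError("unknown user")
        self.appointments.append((email, slot))
        return len(self.appointments)  # booking reference

def test_golden_path():
    app = FakeApp()
    assert app.sign_up("pat@example.com")                      # step 1: onboarding works
    ref = app.book_appointment("pat@example.com", "2023-02-01T09:00")
    assert ref == 1                                            # step 2: core action succeeds

test_golden_path()
print("golden path ok")
```

The point is not coverage breadth; it's that this one automated journey runs on every commit, so the path your users depend on is never the thing that breaks silently.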

Testing automation for growth-stage products

Typically, after the MVP launch, the focus shifts to everything and anything that will enable an organization to use product-led growth strategies to get more paying customers, while simultaneously improving what’s there to retain existing ones. From an engineering perspective, it’s time to:

   Pay attention to every detail and bring engineering precision to stabilize the software even more

   Pressure-test to ensure that the product will perform incredibly well as you increase usage, users, and jobs to be done

   Add more functionalities and capabilities


At this stage, our testing focus shifts to ensuring even faster, frictionless deployments without breaking anything that’s going well. We:

   Identify potential automation scenarios

   Create test cases and segregate them as “Automated” or “Manual”

   Create and run automation scripts on stable features

   Execute the regression suite before deployment 

   Give the green signal faster to advance deployment to the next stage
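The "segregate as Automated or Manual" step above can be made mechanical by scoring each test case against the stability and complexity criteria discussed earlier. A rough sketch; the thresholds, field names, and sample cases are illustrative, not a real rubric:

```python
# Classify a test case as "Automated" or "Manual" based on how often it
# changes and how hard it is to script. Thresholds are hypothetical.
def classify(case):
    stable = case["changes_per_quarter"] <= 1
    simple = case["complexity"] <= 3        # 1-5 scale, 5 = hardest to script
    return "Automated" if stable and simple else "Manual"

cases = [
    {"name": "login regression", "changes_per_quarter": 0, "complexity": 2},
    {"name": "new checkout flow", "changes_per_quarter": 5, "complexity": 4},
]
for case in cases:
    print(case["name"], "->", classify(case))
# login regression -> Automated
# new checkout flow -> Manual
```

Stable, simple cases go into the automated regression suite that gates every deployment; volatile or complex ones stay with QA engineers until the feature settles.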


The right balance of automated and manual testing at this stage helps you move forward quickly and efficiently. The idea is that you still reserve manual testing for the most complex, nascent, and likely-to-change scenarios, but move the more stable, constant elements to automated testing. 

Case study: The automation story as deployed for a growth-stage product at a top e-commerce organization

For a multinational e-commerce company, our teams tested the feasibility of automation first and ended up automating about 70–80% of the use cases. They also expanded the regression suite dramatically, ensuring zero slippage on deployment timelines. The pace of automation depended on how quickly we could stabilize the initially released functions and move them to the new testing pipeline. Expanding your set of structured test cases, ones you know you can rely on, is crucial to moving fast. 

This resulted in quick and accurate test executions and the client’s appreciation for the high-quality product. 

But at this point, we still kept some exploratory manual testing. It allowed us to catch stray challenges, even the occasional slip-up caused by new roll-outs, and to build more test cases. 

For the next stage of the product, having a diversity of reliable test cases is going to be crucial, and you want to prepare for as many scenarios as possible. 

Testing automation for scale-stage products

From a product priority standpoint, the focus shifts to accelerating new user adoption and increasing engagement for existing users. The trouble with performing QA at this scale is that even a bug that affects 1% of users can wreak havoc on your reputation, simply because of the volume you are operating at. Minimizing fallout when the stakes are this high is a no-brainer.

At this stage, there’s no one-size-fits-all strategy and no single way to set up or carry out test automation. You're running testing strategies for two product lifecycles in parallel at this point. 

Of course, your regression suites, existing test cases, and scenario list should be extensive, provide excellent coverage, and run like clockwork. But for continued growth, new releases, expansions, innovation, and feature launches are crucial. So part of your product is still changing as fast as an early-stage product would, only this time there’s more to lose if there’s an accidental slip-up. 

One command disconnected all of Facebook’s data centers, and as a result, one of the world’s most cutting-edge tech organizations went down for almost a full day. The loss is estimated at over $60 million for that one day. At a time when device-level cookie tracking changes were already wreaking havoc on the company's ad revenues, this delivered an even bigger blow to Facebook and all its subsidiaries. 

The lesson: even the regular, everyday scenarios sometimes need one-on-one attention. Who would have thought that a bug in a software audit tool could take down Meta? Even simple maintenance code should be tested before it touches your product’s backbone. This is also an excellent warning to start testing for fringe cases, the unlikely scenarios, and to build reliable, automated guardrail tests for those as well. 
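One shape such a guardrail can take is a pre-flight check that refuses any maintenance action whose blast radius covers the whole fleet. A toy sketch inspired by the outage above; the command model and the "never drain every region at once" rule are our invention, not Meta's actual tooling:

```python
# Guardrail: block maintenance commands that would take down every
# region in a single step. Region names are hypothetical.
ACTIVE_REGIONS = {"us-east", "us-west", "eu-central"}

def guardrail(command, targets):
    """Raise if a drain command covers all active regions at once."""
    if command == "drain" and set(targets) >= ACTIVE_REGIONS:
        raise RuntimeError("refusing to drain all regions in one step")
    return True

assert guardrail("drain", ["us-east"])          # routine maintenance passes
try:
    guardrail("drain", list(ACTIVE_REGIONS))    # the outage-style scenario
except RuntimeError as err:
    print("blocked:", err)
```

Checks like this belong in the same automated suite as everything else, so that "just a maintenance script" gets the same scrutiny as a feature release.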

So for our customers at this stage, we like to evaluate the reliability of specific test cases and do automated and manual vulnerability scanning. Plus, exploratory testing is always ongoing.

The bottom line

 Manual and automated tests aren’t either/or

Manual and automated testing aren't either/or choices. You can’t rely 100% on either. But a blend of the two, designed to reinforce and back each other up, ensures that you make smaller tradeoffs, take less risk, and still get to market on time. 

If there’s a particularly different testing strategy that gets you results and reduces your time to market, comment on our post on LinkedIn, and we’ll carry on the conversation.

ZEMOSO ENGINEERING STUDIO

Breaking the time barrier: Test Automation and its impact on product launch cycles

January 20, 2023
6 min

“A majority resoundingly pointed to testing as the area most likely to cause delays.” That’s not just us. That’s from a GitLab report. But the truth is, for new product and modernization initiatives — the actual code testing is as important as the user testing we do. In the early stages, this is laying the framework for future functional expansions. In later stages, missteps in releases can bring the whole product crashing down. 

Thankfully, automated testing has solved the speed at which developers can test significantly.  No wonder, it is seeing more universal adoption, especially in teams using agile methodologies for product innovation. [As demonstrated in this report]

Test Automation to Accelerate Product Launches
Source: Perfecto.io

The cost of fixing something, the further it goes into the release cycle, increases exponentially. And, we’re not referring to the $ value associated, but the cost of a missed opportunity, an upset user, and erosion of trust that you’ve painstakingly built. At no point can defects be allowed to escape to production. 

So, our agile product pods live by: 

Test early, test continuously and test EVERYTHING. Even the smallest changes. 

Knight Capital saw the extreme fall-out from an escaped bug: a bug in their trading platform resulted in $440 million in just 30 minutes. Another time, a software bug in a New Jersey hospital's vaccine scheduling system caused thousands of duplicate appointments

Your developers want to automate: but be nuanced about it

According to the same report as above, the top four priorities for engineering teams are: 

   Move to automated testing

   Improve test automation coverage

   Execute tests faster

   Reduce regression time testing

Why's test-automation essential for product innovation?

Automating testing is central to being agile. It ensures quality without delaying the launch date. In effect, it: 

   Reduces time to market

   Ensures faster feedback cycles 

   Delivers accuracy and increased coverage

   Improves test suite reliability significantly

   Catches regression early

What, when, and how much to automate? 

Automating testing simplifies software testing, but it isn’t a silver bullet that can be indiscriminately applied. There's a “right” approach to test automation. Like in Wizard of Oz, to get to the truth you have to pay attention to what’s behind the curtain. To do test automation right, you’ve got to know what, when and how much to automate. 

Ideal use cases for automation testing

Learning for the best, Microsoft’s Bill Gates’ words of wisdom

An IBM study partially answered this question by making the ideal conditions for automating software testing pretty clear. They are:

1.   The automated test cases won't need frequent changes. 

2.   The test cases are easy to automate. The more complex the task, the more difficult it is to automate.

3.   The comparative cost of automating is lower than that of executing the test manually. And here, for any stage launch, we insist that you look at $ and the opportunity cost of not automating, and slowing the development cycle. 

So, how do you decide when to automate testing, and what exactly to automate? It depends on the stage of the product itself. 

Automation testing in product maturity context 

If you remember reading our last post about product innovation in 2023, we made a strong case in favor of adapting methodologies and processes to the lifecycle of the product. The same applies to your testing strategy as well. 

If you won’t take our word for it, take ChatGPT’s: 

ChatGPT said it best

Here’s how we adapted our testing strategy for our clients at each stage.

Testing automation for early-stage products

In the early stages of a product's lifecycle, change is the name of the game. From primary vision to architecture, everything changes rapidly and frequently. At this stage, you’re not trying to establish that the product CAN work, but testing to see which version will get you a win with the paying customers.  

And while being embarrassed by your MVP is a given, it hardly is an excuse for shoddy or low-quality work. Even though early adopters are innovation enthusiasts, your friendlies, might be forgiving of initial tech setbacks, but it hardly inspires confidence. Plus, the cost of making an error in a heavily regulated industry like the HealthTech or FinTech industry is devastating. As NHS found out to their chagrin when one coding error that wasn’t caught resulted in them sharing confidential health data of 150,000 patients. 

Here’s an example of how we used test automation to maintain quality and accelerate the launch for a HealthTech startup. 

Case study: The automation story of an a16z-funded organization

We used an evolutionary testing strategy to enable continuous delivery for this early-stage healthcare startup. We focused on the golden user path (also known as the steel thread or the key user journey) first and unblocked critical issues for end-users. Automating a percentage of easier-to-setup test cases allowed us to deliver the core features of the product within the timeline. It also allowed our QA (Quality Assurance) engineers to devote more time to the exploratory testing of new features, be creative, and really pay attention to the complex elements of the roll-out. 

Golden Path or Steel Thread or Key User Journey for Test Automation
This is how evolutionary automation enabled us to adopt continuous delivery

Testing automation for growth-stage products

Typically, after the MVP launch, the focus shifts to everything and anything that'll enable an organization to use product-led growth strategies to get ‌more paying customers, while simultaneously improving what’s there to retain existing customers. Therefore, from an engineering perspective, it’s really time to:

   Pay attention to every detail and bring engineering precision to stabilize the software even more

   Pressure-test to ensure that the product will perform incredibly well as you increase usage, users, and jobs to be done

   Adding more functionalities and capabilities


Therefore, at this stage our testing focus pays more attention to ensuring even faster, frictionless deployments, without breaking anything that’s going well. We:

   Identify potential automation scenarios

   Create test cases and segregate them as “Automated” or “Manual”

   Create and run automation scripts on stable features

   Execute the regression suite before deployment 

   Give the green signal faster to advance deployment to the next stage


The right balance of automation and manual testing at this stage helps you move forward quickly, and efficiently. The idea is that you still want to reserve manual testing for the most complex, super-nascent, and likely-to-change scenarios, but move the more stable, constant elements to automated testing. 

Case study: The automation story as deployed for a growth-stage product at a top e-commerce organization

For a multinational e-commerce company, our teams tested the feasibility of automation first and ended up automating about 70–80% of the use cases. They also exponentially expanded the regression suite, ensuring zero slippages for ‌deployment timelines. For this, ‌test automation worked with how quickly we could stabilize the initially released functions and move them to this new testing pipeline. Expanding the application of structured test cases, that you know you can rely on, will be crucial to moving fast. 

This resulted in quick and accurate test executions and the client’s appreciation for the high-quality product. 

But at this point, we still kept some exploratory manual testing. The reason behind that is it allowed us to catch stray challenges, even occasional slip-ups caused due to new roll-outs and built more test cases. 

For the next stage of the product, having a diversity of reliable test cases is going to be crucial, and you want to prepare for as many scenarios as possible. 

Testing automation for scale-stage products

From a product priority standpoint, the focus shifts to accelerate new user adoption and increase engagement for existing users. The trouble with performing QA at this scale is that even a bug that affects 1% of users can wreak havoc on your reputation just because of the volume you are operating at. Minimizing any fallout when the stakes are this high is a no-brainer.

At this stage, there’s no one-size-fits-all strategy; no straight way to set up or carry out test automation. You're parallelly running testing strategies for two product lifecycles at this point. 

Of course, your regression suites, existing test cases, and scenario list should be extensive, provide excellent coverage, and run like clockwork. But, for continued growth, new releases, expansions, innovation, and feature launches are crucial. So you also have that part of your product that's changing as fast as an early-stage product, but only, this time, there’s more to lose if there’s an accidental slip-up. 

One command disconnected all of Facebook’s data centers, and as a result, one of ‌the world’s most cutting-edge tech organizations went down for almost a full day. Their loss is estimated to be over $60 million in that one day. At a time when device-level cookie tracking was already creating havoc on the company's ad revenues, this delivered a much bigger blow to Facebook and all its subsidiaries. 

The lesson, even the regular, everyday scenarios need one-on-one attention sometimes. Who would have thought that a bug in the software audit tool could take down Meta? And, even a simple maintenance code should be tested before letting it touch your product’s backbone. This is also an excellent warning to start testing for fringe cases, the unlikely scenarios, and build reliable, automated guardrail testing for those as well. 

So for our customers at this stage, we like to evaluate the reliability of specific test cases and do automated and manual vulnerability scanning. Plus, exploratory testing is always ongoing.

The bottom line

 Manual or automating tests aren’t either/or

Manual and automation testing aren't either/or choices. You can’t rely 100% on either. But a blend of the two designed to reinforce each other, and back each other up will ensure that you are making smaller tradeoffs, taking a lesser risk, and still getting to the market on time. 

If there’s a particularly different testing strategy that gets you results, and reduces your time to market, comment on our post on LinkedIn, and we’ll carry the conversation on.

Recent Publications
Actual access control without getting in the way of actual work: 2023
Actual access control without getting in the way of actual work: 2023
March 13, 2023
Product innovation for today and the future! It’s outcome-first, timeboxed, and accountable
Product innovation for today and the future! It’s outcome-first, timeboxed, and accountable
January 9, 2023
From "great potential" purgatory to "actual usage" reality: getting SDKs right in a product-led world
From "great potential" purgatory to "actual usage" reality: getting SDKs right in a product-led world
December 6, 2022
Why Realm trumps SQLite as database of choice for complex mobile apps — Part 2
Why Realm trumps SQLite as database of choice for complex mobile apps — Part 2
October 13, 2022
Testing what doesn’t exist with a Wizard of Oz twist
Testing what doesn’t exist with a Wizard of Oz twist
October 12, 2022
Docs, Guides, Resources: Getting developer microsites right in a product-led world
Docs, Guides, Resources: Getting developer microsites right in a product-led world
September 20, 2022
Beyond methodologies: Five engineering do's for an agile product build
Beyond methodologies: Five engineering do's for an agile product build
September 5, 2022
Zemoso’s next big move: Entering Europe with new offices open in London
Zemoso’s next big move: Entering Europe with new offices open in London
August 29, 2022
Actual access control without getting in the way of actual work: 2023
Actual access control without getting in the way of actual work: 2023
March 13, 2023
Product innovation for today and the future! It’s outcome-first, timeboxed, and accountable
Product innovation for today and the future! It’s outcome-first, timeboxed, and accountable
January 9, 2023
From "great potential" purgatory to "actual usage" reality: getting SDKs right in a product-led world
From "great potential" purgatory to "actual usage" reality: getting SDKs right in a product-led world
December 6, 2022
Why Realm trumps SQLite as database of choice for complex mobile apps — Part 2
Why Realm trumps SQLite as database of choice for complex mobile apps — Part 2
October 13, 2022
Testing what doesn’t exist with a Wizard of Oz twist
Testing what doesn’t exist with a Wizard of Oz twist
October 12, 2022
ZEMOSO ENGINEERING STUDIO
January 20, 2023
6 min

Breaking the time barrier: Test Automation and its impact on product launch cycles

“A majority resoundingly pointed to testing as the area most likely to cause delays.” That’s not just us. That’s from a GitLab report. But the truth is, for new product and modernization initiatives — the actual code testing is as important as the user testing we do. In the early stages, this is laying the framework for future functional expansions. In later stages, missteps in releases can bring the whole product crashing down. 

Thankfully, automated testing has solved the speed at which developers can test significantly.  No wonder, it is seeing more universal adoption, especially in teams using agile methodologies for product innovation. [As demonstrated in this report]

Test Automation to Accelerate Product Launches
Source: Perfecto.io

The cost of fixing something, the further it goes into the release cycle, increases exponentially. And, we’re not referring to the $ value associated, but the cost of a missed opportunity, an upset user, and erosion of trust that you’ve painstakingly built. At no point can defects be allowed to escape to production. 

So, our agile product pods live by: 

Test early, test continuously and test EVERYTHING. Even the smallest changes. 

Knight Capital saw the extreme fall-out from an escaped bug: a bug in their trading platform resulted in $440 million in just 30 minutes. Another time, a software bug in a New Jersey hospital's vaccine scheduling system caused thousands of duplicate appointments

Your developers want to automate: but be nuanced about it

According to the same report as above, the top four priorities for engineering teams are: 

   Move to automated testing

   Improve test automation coverage

   Execute tests faster

   Reduce regression time testing

Why's test-automation essential for product innovation?

Automating testing is central to being agile. It ensures quality without delaying the launch date. In effect, it: 

   Reduces time to market

   Ensures faster feedback cycles 

   Delivers accuracy and increased coverage

   Improves test suite reliability significantly

   Catches regression early

What, when, and how much to automate? 

Automating testing simplifies software testing, but it isn’t a silver bullet that can be indiscriminately applied. There's a “right” approach to test automation. Like in Wizard of Oz, to get to the truth you have to pay attention to what’s behind the curtain. To do test automation right, you’ve got to know what, when and how much to automate. 

Ideal use cases for automation testing

Learning for the best, Microsoft’s Bill Gates’ words of wisdom

An IBM study partially answered this question by making the ideal conditions for automating software testing pretty clear. They are:

1.   The automated test cases won't need frequent changes. 

2.   The test cases are easy to automate. The more complex the task, the more difficult it is to automate.

3.   The comparative cost of automating is lower than that of executing the test manually. And here, for any stage launch, we insist that you look at $ and the opportunity cost of not automating, and slowing the development cycle. 

So, how do you decide when to automate testing, and what exactly to automate? It depends on the stage of the product itself. 

Automation testing in product maturity context 

If you remember reading our last post about product innovation in 2023, we made a strong case in favor of adapting methodologies and processes to the lifecycle of the product. The same applies to your testing strategy as well. 

If you won’t take our word for it, take ChatGPT’s: 

ChatGPT said it best

Here’s how we adapted our testing strategy for our clients at each stage.

Testing automation for early-stage products

In the early stages of a product's lifecycle, change is the name of the game. From primary vision to architecture, everything changes rapidly and frequently. At this stage, you’re not trying to establish that the product CAN work, but testing to see which version will get you a win with the paying customers.  

And while being embarrassed by your MVP is a given, it hardly is an excuse for shoddy or low-quality work. Even though early adopters are innovation enthusiasts, your friendlies, might be forgiving of initial tech setbacks, but it hardly inspires confidence. Plus, the cost of making an error in a heavily regulated industry like the HealthTech or FinTech industry is devastating. As NHS found out to their chagrin when one coding error that wasn’t caught resulted in them sharing confidential health data of 150,000 patients. 

Here’s an example of how we used test automation to maintain quality and accelerate the launch for a HealthTech startup. 

Case study: The automation story of an a16z-funded organization

We used an evolutionary testing strategy to enable continuous delivery for this early-stage healthcare startup. We focused on the golden user path (also known as the steel thread or the key user journey) first and unblocked critical issues for end-users. Automating a percentage of easier-to-setup test cases allowed us to deliver the core features of the product within the timeline. It also allowed our QA (Quality Assurance) engineers to devote more time to the exploratory testing of new features, be creative, and really pay attention to the complex elements of the roll-out. 

Golden Path or Steel Thread or Key User Journey for Test Automation
This is how evolutionary automation enabled us to adopt continuous delivery

Testing automation for growth-stage products

Typically, after the MVP launch, the focus shifts to everything and anything that'll enable an organization to use product-led growth strategies to get ‌more paying customers, while simultaneously improving what’s there to retain existing customers. Therefore, from an engineering perspective, it’s really time to:

   Pay attention to every detail and bring engineering precision to stabilize the software even more

   Pressure-test to ensure that the product will perform incredibly well as you increase usage, users, and jobs to be done

   Adding more functionalities and capabilities


Therefore, at this stage our testing focus pays more attention to ensuring even faster, frictionless deployments, without breaking anything that’s going well. We:

   Identify potential automation scenarios

   Create test cases and segregate them as “Automated” or “Manual”

   Create and run automation scripts on stable features

   Execute the regression suite before deployment 

   Give the green signal faster to advance deployment to the next stage


The right balance of automation and manual testing at this stage helps you move forward quickly, and efficiently. The idea is that you still want to reserve manual testing for the most complex, super-nascent, and likely-to-change scenarios, but move the more stable, constant elements to automated testing. 

Case study: The automation story as deployed for a growth-stage product at a top e-commerce organization

For a multinational e-commerce company, our teams tested the feasibility of automation first and ended up automating about 70–80% of the use cases. They also exponentially expanded the regression suite, ensuring zero slippages for ‌deployment timelines. For this, ‌test automation worked with how quickly we could stabilize the initially released functions and move them to this new testing pipeline. Expanding the application of structured test cases, that you know you can rely on, will be crucial to moving fast. 

This resulted in quick and accurate test executions and the client’s appreciation for the high-quality product. 

But at this point, we still kept some exploratory manual testing. The reason behind that is it allowed us to catch stray challenges, even occasional slip-ups caused due to new roll-outs and built more test cases. 

For the next stage of the product, having a diversity of reliable test cases is going to be crucial, and you want to prepare for as many scenarios as possible. 

Testing automation for scale-stage products

From a product priority standpoint, the focus shifts to accelerate new user adoption and increase engagement for existing users. The trouble with performing QA at this scale is that even a bug that affects 1% of users can wreak havoc on your reputation just because of the volume you are operating at. Minimizing any fallout when the stakes are this high is a no-brainer.

At this stage, there's no one-size-fits-all strategy, and no single way to set up or carry out test automation. You're effectively running testing strategies for two product lifecycle stages in parallel. 

Of course, your regression suites, existing test cases, and scenario list should be extensive, provide excellent coverage, and run like clockwork. But for continued growth, new releases, expansions, innovation, and feature launches are crucial. So part of your product is still changing as fast as an early-stage product, only this time there's more to lose if something slips. 
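One way to keep those two tracks honest is a release gate that treats them differently. Here is a minimal sketch, assuming failures arrive already labeled by suite (the names and the split are illustrative assumptions): the mature regression suite blocks the release outright, while new-feature failures are surfaced for triage rather than silently ignored.

```python
# Sketch of a dual-track release gate: regression failures block the release,
# new-feature failures are reported for triage. All names are illustrative.

def release_gate(regression_failures, feature_failures):
    """Return (ok, report); ok is False if any stable regression test failed."""
    ok = len(regression_failures) == 0
    report = {
        "blocking": list(regression_failures),          # fix before releasing
        "triage_new_features": list(feature_failures),  # tracked, not blocking
    }
    return ok, report

ok, report = release_gate(
    regression_failures=[],
    feature_failures=["feature/new_widget::test_cold_start"],
)
# ok is True here: only regression failures stop the release,
# but the new-feature failure still lands in the triage report.
```

The design choice is the point: the stable suite is a hard gate, while the fast-moving suite gets visibility without letting its churn mask real regressions.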

One command disconnected all of Facebook's data centers, and as a result, one of the world's most cutting-edge tech organizations went down for hours. The loss was estimated at over $60 million for that one day. At a time when device-level cookie tracking changes were already wreaking havoc on the company's ad revenues, this delivered an even bigger blow to Facebook and all its subsidiaries. 

The lesson: even the regular, everyday scenarios need one-on-one attention sometimes. Who would have thought that a bug in an internal audit tool could take down Meta? Even simple maintenance code should be tested before it touches your product's backbone. It's also an excellent reminder to test for fringe cases and unlikely scenarios, and to build reliable, automated guardrails for those as well. 

So for our customers at this stage, we like to evaluate the reliability of specific test cases and do automated and manual vulnerability scanning. Plus, exploratory testing is always ongoing.

The bottom line

 Manual and automated testing aren't either/or

Manual and automated testing aren't either/or choices; you can't rely 100% on either. A blend of the two, designed to reinforce and back each other up, ensures you make smaller tradeoffs, take less risk, and still get to market on time. 

If there’s a particularly different testing strategy that gets you results, and reduces your time to market, comment on our post on LinkedIn, and we’ll carry the conversation on.



© 2023  Zemoso Technologies
Privacy Policy

Terms of Use