In 2021, the year the agile methodology turned 20, agile adoption among software development teams more than doubled from 2020, reaching 86%. But methodologies alone don't make a product agile, flexible, and scalable! Our product teams have been delivering zero-to-one lean acceleration, and 66% of the products we help launch or scale have seen success. Against an industry average of anywhere between 10% and 30%, we'd say that's pretty rad.
How do we define success?
● For an early-stage/stealth-mode product, we define it as a successful fundraise, early adoption, etc.
● For growth-stage companies, we define it as products where the cost of new customer acquisition keeps falling.
● For products at every stage, it's continuously nailing product-market fit and investing only in builds that can be monetized.
But agile methodologies are only half the battle! How do you set up a truly agile back-end, front-end, and product architecture that does three things:
● Gives you the ability to scale quickly with fast deployments
● Gives you the ability to hire easily for the skills you need as you scale
● Gives you the ability to outsource or bring functions in-house at the drop of a hat
We understand that, as the person responsible for a new product, you are battling two things:
● Ambiguity: when no one else has solved the problem, you don't know what you don't know until you encounter it
● Time: if you slow down, the problem statement (let alone your approach to the solution) will probably have changed on you
So what do the Zemoso teams do to help startups and enterprises with fantabulous new product ideas instill agility in the product itself? We asked our lead architects, engineers, and patent holders. This is what they said.
Roadmap - We combine an architecture sprint with our version of a Google Ventures Design Sprint. In our virtual room, we have a Miro board, fantabulous note-taking abilities, Otter to transcribe, Loom to record, and every key stakeholder from the client and Zemoso. We create the blueprint, set the direction, draft the roadmap, clarify priorities, and create a jobs-to-be-done list.
Don't reinvent the wheel - If it ain't broke, don't fix it! Open-source software, stable technology, third-party integrations: if it works the way we need it to, we'll adopt it to expedite production timelines. Not because it's easier for us (integrations are sometimes harder than ground-up builds), but because it saves you time (and money!). Plus, finding talent to bring those functions in-house will be smoother. So, unless our clients have an existing stack or a specific preference, we lean towards a tech stack you can stick with. We'll never, ever choose anything over functionality, though.
Continuous feedback loop - Early course-correction is our mantra. We build, deliver, and finalize incrementally. That means we seek our customers' feedback at each stage and incorporate it through iterations. This continuous feedback mechanism lets us fast-track development of the product with the features users actually want.
Parallel platform development - We segment the entire product development process into parts and figure out which development and deployment tracks can happen simultaneously. For example, design and architecture sprints run in parallel. In the initial stages, we use mock data for faster iterations, and Application Programming Interface (API) documentation via Swagger happens alongside the build.
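To make that concrete, here is a minimal, hypothetical sketch of how a front-end can iterate against mock data while the real API is still being built in parallel. The Order type, fetchOrders function, USE_MOCKS flag, and /api/orders endpoint are illustrative names, not from an actual client project.

```typescript
// A front-end data client that can run on mock data while the real API,
// documented in Swagger/OpenAPI, is still being built by another track.

export interface Order {
  id: string;
  amount: number;
  status: "pending" | "paid";
}

// Mock data the UI team can iterate against from day one.
const MOCK_ORDERS: Order[] = [
  { id: "ord-1", amount: 120, status: "paid" },
  { id: "ord-2", amount: 80, status: "pending" },
];

// Flip this flag (or drive it from an environment variable) once the real
// endpoint is live; no UI code needs to change.
const USE_MOCKS = true;

export async function fetchOrders(): Promise<Order[]> {
  if (USE_MOCKS) {
    // Simulate network latency so loading states get exercised too.
    return new Promise<Order[]>((resolve) =>
      setTimeout(() => resolve(MOCK_ORDERS), 300)
    );
  }
  const response = await fetch("/api/orders");
  if (!response.ok) throw new Error(`Failed to fetch orders: ${response.status}`);
  return (await response.json()) as Order[];
}
```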
For early-stage products, as discussed above, we're working with a certain amount of ambiguity. That means we expect lots of changes during the build, and our clients ask us to make sure they can keep moving fast as they drive product-led growth: roll out new features, announce new capabilities, and beat the competition on speed. To enable that, we tend to adopt the following:
Atomic design principles for the front-end: Inspired by Lego, Tangrams, and a million other building-block toys, we stick to atomic design principles. We break down your front-end and your user interface into smaller components and build bottom-up: atoms, then molecules, organisms, templates, and pages. This helps us break the code into manageable, easily updatable units, reuse modular page elements for consistency, and update the front-end at the drop of a hat without having to touch the back-end.
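As a rough illustration of that bottom-up composition, assuming a React + TypeScript front-end (the Label, TextInput, and SearchField components are made up for this sketch, not part of a real design system):

```typescript
import React from "react";

// Atom: a plain label.
const Label = ({ text }: { text: string }) => <label>{text}</label>;

// Atom: a plain text input.
const TextInput = ({
  value,
  onChange,
}: {
  value: string;
  onChange: (next: string) => void;
}) => <input value={value} onChange={(e) => onChange(e.target.value)} />;

// Molecule: atoms composed into a reusable search field.
export const SearchField = ({
  value,
  onChange,
}: {
  value: string;
  onChange: (next: string) => void;
}) => (
  <div>
    <Label text="Search" />
    <TextInput value={value} onChange={onChange} />
  </div>
);

// Organisms, templates, and pages compose these the same way, so a UI change
// stays inside the front-end and never forces a back-end change.
```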
Mapping layer (GraphQL): It's almost impossible to find a digital product that doesn't need some kind of API. That's why we use GraphQL, a query language for APIs, as an interface between the front-end and the back-end. Without it, deploying a new feature would mean modifying and re-routing the API to meet the requirements of that specific capability. With GraphQL acting as the back-end for your front-end, the modification can happen on the front-end alone. The layer requests only the information it needs, and investments in the build are made only after the capability is validated and can be monetized. Apart from faster processing times, this layer has an incredible impact on creating a properly validated product backlog.
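Here is a minimal sketch of what that back-end-for-the-front-end layer can look like, using the open-source graphql package for JavaScript/TypeScript. The Listing type and the hard-coded data are illustrative stand-ins for calls into real back-end services.

```typescript
import { graphql, buildSchema } from "graphql";

// The schema is the contract the front-end codes against.
const schema = buildSchema(`
  type Listing {
    id: ID!
    title: String!
    price: Float!
  }
  type Query {
    listings: [Listing!]!
  }
`);

// Root resolver: in a real build this would call existing REST services or
// databases; static data keeps the sketch self-contained and runnable.
const rootValue = {
  listings: async () => [
    { id: "1", title: "Two-bedroom flat", price: 250000 },
    { id: "2", title: "Studio apartment", price: 120000 },
  ],
};

// The front-end asks for exactly the fields it needs; nothing else is fetched.
async function main() {
  const result = await graphql({
    schema,
    source: "{ listings { id title } }",
    rootValue,
  });
  console.log(JSON.stringify(result.data, null, 2));
}

main().catch(console.error);
```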
Choosing the right architecture: The clearest sign that you've found the right architecture is that it "enables" your next phase of growth. Our team uses its own feasibility-effort grid to evaluate the best architecture to adopt: we look at what the product needs to accomplish in the short term, what scale would look like in the long run, and the time-effort-monetization trade-offs each option entails. As an example, for a very early-stage buy-now-pay-later platform whose front-end we were deploying, we used a layered architecture. The specialization and expertise required to maintain a layered architecture are easier to find, and we deployed the front-end faster. Without compromising on the "enabler" quality of the architecture, we saved time and cost and sped up validation cycles for the product.
In sharp contrast, we used a microservices approach for a more complex PropTech product. There, a single ML/AI model powered all the different use cases for different users, buyers, etc. Each use case warranted phased launches and needed to function, integrate, and be available for purchase independently. So, while the honest answer to "what architecture should we use?" is usually the evasive "it depends," we bring structure and an evaluation method to the decision that simply works.
Containerization: For containerization, we typically default to Docker, with Kubernetes to orchestrate the containers at scale. Both have their advantages, and the pro-and-con ranking we run for each case can get exhaustive. What we don't debate or compromise on is the need to containerize the application for an incredibly agile build. Why? When we work on a project with a client, we aren't playing tag team. We are racing against time with them, and that means everything happens in parallel. Both our client's internal developers and our teams must develop, deploy, and distribute applications at the same time. Containerization is a form of OS-level virtualization in which the kernel allows multiple isolated user-space instances, a.k.a. containers. A container encapsulates everything an application needs to run: its binaries, libraries, configuration files, and dependencies. Because it's isolated, it runs uniformly and consistently on different types of infrastructure. This way, a complex application can be broken into several smaller, manageable applications that can be worked on simultaneously.
CI/CD that actually works: The ultimate benefit of an intelligent CI/CD pipeline and platform boils down to this: faster delivery of quality code. And in today's world of product-led growth, that's not negotiable. Arguably, continuous integration (CI) has more history and might be marginally easier to set up than CD. We default to GitLab or GitHub Actions, unless the client has a preference for Jenkins, AWS, Azure, Bitbucket, etc. Either way, adopting CI is essential to meeting our value proposition of launching an MVP in under four months. We pivot faster, iterate at record speeds, and complete parallel builds without risking user experience or quality.
As far as CD goes: some refer to it as continuous delivery, others as continuous deployment. However you refer to it, both must be done. As a reminder, continuous delivery automatically builds, tests, and stages every change so it is always ready for release, while continuous deployment goes a step further and automatically releases every passing change to end users. And while speed is a given benefit of adopting CD, its true value lies in the amount of control it gives the engineering team over ensuring quality and minimizing disruption. It ensures that you're not overlooking something crucial to the delivery or deployment, or creating a security incident, and it can automatically roll back if needed.
So, our top ask of any CI/CD platform is that it integrates easily with the best monitoring and testing tools as we test, test, and test again: unit tests, integration tests, OSS security tests, regression tests, smoke tests, browser tests, and so on.
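The pipeline definition itself is platform-specific configuration (GitLab, GitHub Actions, Jenkins, and so on), so here is just the kind of gate it runs on every push: a minimal unit-test sketch using Node's built-in test runner. The priceWithTax function is an illustrative example, not from a real project.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// A small piece of business logic the pipeline must protect on every change.
export function priceWithTax(amount: number, taxRate: number): number {
  if (amount < 0 || taxRate < 0) throw new Error("Inputs must be non-negative");
  return Math.round(amount * (1 + taxRate) * 100) / 100;
}

// Unit tests like these run in CI before anything is delivered or deployed.
test("adds tax and rounds to two decimals", () => {
  assert.equal(priceWithTax(100, 0.18), 118);
  assert.equal(priceWithTax(19.99, 0.1), 21.99);
});

test("rejects negative inputs", () => {
  assert.throws(() => priceWithTax(-1, 0.1));
});
```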