How to Validate Your Innovation: Mastering Experiment Design

Satish Madhira
CEO and Co-founder

Wednesday, July 27, 2022


"A Long time ago in a galaxy far, far away"

.Star trek was the professed inspiration in 2012 for Google glass
Fig: Star trek was the professed inspiration in 2012 for Google glass.

Mother of All Evil: The Mom Test

We are all irrational. If you have ever had an idea, it is likely you caught the 'mom disease': your baby never makes a mistake. The 'Mom Test' is the first test you will naturally undertake when you get an idea.

What does the 'Mom Test' mean?

1. Pick five friends or colleagues ('friendlies') who 'know' stuff, or worse still, your mom

2. They say, 'This is awesome. If we get V1 of this, it WILL sell like free pizza to famine-starved billions'

Sometimes the Mom Test comes in the form of a survey or a focus group study with friendly customers, with questions designed to prove that the idea will work. It's a familiar feeling, since all of us have been there and done that: we have the 'mom disease' and excel at administering 'mom tests'. Now what?

Why do we need to test and who is this for?

Innovation is dirty. Nearly every firm has had its Google Glass moment. Google Glass has been successful in some niche segment(s).

Disruptive innovations like Google Glass are hard; they require massive leaps of faith and therefore an abundance of optimism. It is this optimism that leads us astray and into gargantuan execution plans without validation.

If you are in an innovation cycle, face uncertainty and high-impact assumptions, and need to make decisions: this session offers a framework to test your assumptions.

Where do we go from here?

We deal with assumptions the way scientists do. When scientists announce an invention, they state a hypothesis, show their experiment setup, the results, and what constitutes proof of the hypothesis. We design experiments.

We want to learn double the amount in half the time for assumptions that could bring down our initiative.

Technique: Crazy 6s

Crazy 6s is like Crazy Eights, except you use six boxes instead of eight. Crazy Eights is a technique used extensively in UX design sprints; I wrote about it in detail earlier here

Fig: Crazy six technique.

Here is what we did in the session:

Step 1: Map the problem you are solving and the value map for the customer:

The questions to draft the value map are simple:

1. Who is the customer?

2. What problem are you solving?

3. How will you acquire and retain customers?

4. How will you capture value for your company?

5. How will you solve it?

6. How could it go wrong? (Find assumptions using Crazy 6s)

We can map our value delivery using frameworks like the Business Model Canvas, the Lean Canvas, or similar. At the end of the day, these are just frameworks.

The customer, the problem, and the channels should be enough for most cases.

In my opinion, using a business plan as a surrogate for the customer will doom you before you start.
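The six value-map questions above can be captured as a plain record before you start hunting for assumptions. This is my own illustration, not part of the original session; the example answers below are made up, and the point is simply that every answer is itself an untested assumption:

```python
# A minimal sketch of a value map. The domain (a taxi app) and every
# answer below are hypothetical, for illustration only.
value_map = {
    "customer": "Urban commuters who hail taxis daily",
    "problem": "Hailing a cab is slow and unreliable",
    "acquisition_and_retention": "App-store search plus referral credits",
    "value_capture": "Commission per completed ride",
    "solution": "Mobile app that dispatches the nearest driver",
    "failure_modes": [],  # filled in later via Crazy 6s (Step 2)
}

# Every answer above is itself an assumption until tested.
untested = [key for key, answer in value_map.items()
            if answer and key != "failure_modes"]
print(untested)
```

Writing the answers down in one place makes it much harder to skip a question, which is exactly what Step 2 exploits.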

Step 2: Ideate assumptions:

What assumptions are we making that, if proven wrong, would cause our initiative to fail? Use Crazy 6s.

1. Take an A4 sheet

2. Fold it into six boxes

3. Within 60 seconds, jot down one assumption that you think could bring down your initiative

4. Repeat step 3 until there are no more boxes

5. Now pick the one big assumption

How do you decide which one is the most important assumption?

If you are doing this within your organization (say as a team exercise):

1. Write up one assumption on one sticky note

2. Now paste those sticky notes on the 2x2 impact/uncertainty matrix (see figure below).

Here is the impact/uncertainty matrix:

Fig: Impact/uncertainty matrix

Focus on the assumptions that land in the north-east quadrant (high impact, high uncertainty). Those are your riskiest assumptions.
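The quadrant filter can be sketched in a few lines. This is a rough illustration, assuming each sticky note is scored 1-5 on impact and uncertainty; the assumption texts, scores, and threshold below are all made up:

```python
# Hypothetical sticky notes, each scored 1-5 on impact and uncertainty.
assumptions = [
    {"note": "Customers will pay $29/month",     "impact": 5, "uncertainty": 5},
    {"note": "App-store search drives signups",  "impact": 5, "uncertainty": 4},
    {"note": "We can build the matching engine", "impact": 3, "uncertainty": 2},
    {"note": "Support costs stay under 10%",     "impact": 2, "uncertainty": 4},
]

HIGH = 4  # what counts as "high" on a 1-5 scale (arbitrary choice)

# North-east quadrant: high impact AND high uncertainty.
riskiest = [a for a in assumptions
            if a["impact"] >= HIGH and a["uncertainty"] >= HIGH]

# Test the worst offenders first; everything else can wait.
riskiest.sort(key=lambda a: a["impact"] * a["uncertainty"], reverse=True)
for a in riskiest:
    print(a["note"])
```

Everything outside that quadrant is either low-stakes (low impact) or already known (low uncertainty), so spending experiment budget there is waste.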

I like to think of these assumptions as cards in a house of cards:

Fig: Balanced pack of cards

Pull a card from the top of the stack and the stack might hold up. Pull a card from the bottom and your house of cards collapses.

The base is the customer and the channels. More often than not, these are the ones that fail.

The bigger you are, the more optimistic you will be. Your optimism about go-to-market capability and customer loyalty will be through the roof. That is where you might fail, since you will be sitting in an echo chamber with your customers, channels, and market.

Bottom line: find the assumptions that are going to have a very high impact.

Now write this assumption on the reverse side of your A4 sheet.

Step 3: Ideate tests:

1. We use a library of experiment patterns along the "Truth Curve" (hat tip: the book "Testing with Humans", link at the end of the article). Believability and effort form the axes of the Truth Curve

2. You could think of these experiments in terms of fidelity. Usually, the higher the fidelity, the greater the believability

3. For your reference, I sketched some examples of companies that have used these experiment patterns to test their business models

Fig: Experiment patterns with some examples

Use Crazy 6s to ideate. We don't test everything. This is not some statistical exercise. You won't have time for that; your innovation initiative will die before you do all of it.

So we hunker down and identify the one or two experiments that you can actually run.

The pattern library is an assistant, a framework to inspire you. Go crazy with experiments while brainstorming, while minimizing effort. Do not get pedantic about the means here.

The big idea is: how can we learn what breaks this business in half the time with half the effort?

Interesting examples:

1. Put up a payment page where the customer can enter their credit card, and when they click buy, do NOT fulfill. You are creating intentional friction to see if there is enough value for the user to take action.

2. Elliot Sussel was at TaxiMagic, which was founded in 2008. For a frame of reference, Uber was founded in 2009, and taxi apps were a dime a dozen at the time. His explanation of what should have been done (and NOT experimented with) was interesting.

3. Dropbox tested its hypothesis by creating an explainer video and posting it to Digg (landing page + promotional material). With about 75,000(?) signups for the beta (up from 5,000), they knew there was a need.

4. Zappos ran a Wizard of Oz experiment in its early days. They would go to local stores, take photos of shoes, and put them online. When customers paid, the same shoes were purchased from the local stores and shipped under the Zappos brand. From a customer-experience perspective, the customer did not care.

5. Kickstarter campaigns are, in some sense, pre-sell experiment patterns. They are powerful validation because someone is committing money upfront to your idea.

One thing to note here: you want to break the model. You are NOT looking to optimize, so sometimes adding friction makes sense. For example, if your landing page is rough, there is friction in leaving a credit card, AND YET people want your stuff, then there is a real problem you are solving.

So your experiment CAN have friction. This is contrarian thinking.

Step 4: Follow the experiment template.

This template has a simple checklist:

1. What hypothesis do you want to prove or disprove?

2. For each hypothesis, what metric will you use?

3. What will be the gating criteria (i.e., above or below x it's a pass or fail)?

4. Who is the target participant?

5. How will you recruit them, and how many do you need?

6. Which experiment are you running, and for how long?

7. Finally, list any qualitative learnings as well
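The checklist above can be sketched as a small record with a built-in gating check. This is my own mapping of the seven questions into code, not an official template; the field names and the example experiment are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """One experiment from the template checklist (field names are my own)."""
    hypothesis: str                # 1. what to prove/disprove
    metric: str                    # 2. what you will measure
    gate: float                    # 3. pass/fail threshold ...
    higher_is_pass: bool           #    ... and the direction of the gate
    participant: str               # 4. who the target participant is
    recruiting: str                # 5. how you recruit, and how many
    pattern: str                   # 6. which experiment, and ...
    duration_days: int             #    ... for how long
    notes: list = field(default_factory=list)  # 7. qualitative learnings

    def passed(self, observed: float) -> bool:
        """Apply the gating criteria to an observed metric value."""
        if self.higher_is_pass:
            return observed >= self.gate
        return observed <= self.gate

# Hypothetical example: the fake payment page from the list above.
exp = Experiment(
    hypothesis="Visitors will pay for faster pickups",
    metric="checkout conversion rate",
    gate=0.05, higher_is_pass=True,
    participant="Urban commuters, 25-45",
    recruiting="Paid search ads; need ~200 visitors",
    pattern="payment page without fulfilment",
    duration_days=14,
)
print(exp.passed(0.08))  # observed 8% conversion beats the 5% gate
```

Forcing the gate and its direction to be written down before the experiment runs is the whole point: it stops you from moving the goalposts after you see the data.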

Now go ahead and test. In some cases, you have to be careful about the cohorts you choose. Keep the rules simple and the experiment simple. The idea is just to prove or disprove with evidence.

Create a roadmap, drive the cultural change

Experimentation is a process. Create a living document of these risks and an execution roadmap for addressing them. Prioritize, start with the quicker wins, and move on.

Takeaways

1. Once I was forced to bring out the assumptions with Crazy 6s, it was hard for me to ignore them. The technique brought urgency to identifying those risks and assumptions quickly. Time-boxing shines a light on the elephant in the room.

2. Customer definition and segmentation are important; this technique assumes you have done all of that well. If you are using a Lean Canvas or Business Model Canvas, start from the right (customer segment) and come finally to the solution; otherwise you will end up identifying only execution assumptions and risks. Regardless of the experiment pattern, remember the Amazon way: work backwards!

3. Techniques like landing pages are fraught with issues. There are too many variables: are you testing the copy? Are you testing the call-to-action button? To confuse you further: CamelCasing in PPC could make a difference of over 2x (i.e., 200%), and every PPC enthusiast knows that little changes can create differences of an order of magnitude. So much for gating or threshold metrics/values.

4. I have started reading more on this topic. In my past startups, I tried many of these experiments in an ad hoc, almost intuitive way. Once some success came, a lot of hubris set in. This session gave me a great framework and was a wake-up call.

5. I must add that patterns like Concierge are slightly risky, because the customer experience might be better than with automation, so you might see inflated demand. If I have to choose, I will choose Wizard of Oz over Concierge. Watch for the variables. Once again (this is for my big-company innovation brethren): this is not about the process; you will kill yourself if you do not get this.

6. Giff and Elliott complemented each other. Fabulous work. Clap, clap, clap.

Resources


Testing with Humans is a great book: an easy read and extremely executable, with no fluff and no preaching. Giff's other book, Talking to Humans, is about customer discovery.
