How to Implement Hypothesis-Driven Development


Remember back to the time when we were in high school science class. Our teachers had a framework for helping us learn – an experimental approach based on the best available evidence at hand. We were asked to make observations about the world around us, then attempt to form an explanation or hypothesis to explain what we had observed. We then tested this hypothesis by predicting an outcome based on our theory that would be achieved in a controlled experiment – if the outcome was achieved, we had proven our theory to be correct.

We could then apply this learning to inform and test other hypotheses by constructing more sophisticated experiments, and tuning, evolving, or abandoning any hypothesis as we made further observations from the results we achieved.

Experimentation is the foundation of the scientific method, which is a systematic means of exploring the world around us. Although some experiments take place in laboratories, it is possible to perform an experiment anywhere, at any time, even in software development.

Practicing Hypothesis-Driven Development [1] is thinking about the development of new ideas, products, and services – even organizational change – as a series of experiments to determine whether an expected outcome will be achieved. The process is iterated upon until a desirable outcome is obtained or the idea is determined to be not viable.

We need to change our mindset to view our proposed solution to a problem statement as a hypothesis, especially in new product or service development – the market we are targeting, how a business model will work, how code will execute and even how the customer will use it.

We do not do projects anymore, only experiments. Customer discovery and Lean Startup strategies are designed to test assumptions about customers. Quality Assurance is testing system behavior against defined specifications. The experimental principle also applies in Test-Driven Development – we write the test first, then use the test to validate that our code is correct, and succeed if the code passes the test. Ultimately, product or service development is a process to test a hypothesis about system behavior in the environment or market it is developed for.

The key outcome of an experimental approach is measurable evidence and learning. Learning is the information we have gained from conducting the experiment. Did what we expect to occur actually happen? If not, what did and how does that inform what we should do next?

In order to learn we need to use the scientific method for investigating phenomena, acquiring new knowledge, and correcting and integrating previous knowledge back into our thinking.

As the software development industry continues to mature, we now have an opportunity to leverage improved capabilities such as Continuous Design and Delivery to maximize our potential to learn quickly what works and what does not. By taking an experimental approach to information discovery, we can more rapidly test our solutions against the problems we have identified in the products or services we are attempting to build, with the goal of optimizing our effectiveness at solving the right problems rather than simply becoming a feature factory that continually builds solutions.

The steps of the scientific method are to:

  • Make observations
  • Formulate a hypothesis
  • Design an experiment to test the hypothesis
  • State the indicators to evaluate if the experiment has succeeded
  • Conduct the experiment
  • Evaluate the results of the experiment
  • Accept or reject the hypothesis
  • If necessary, make and test a new hypothesis
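As a rough sketch, the steps above can be expressed as a small accept-or-reject loop. The `Experiment` class and the toy numbers here are hypothetical illustrations, not part of any real framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Experiment:
    """One iteration of the scientific method (illustrative only)."""
    hypothesis: str
    run: Callable[[], float]  # conducts the experiment, returns an observed value
    success_indicator: float  # stated BEFORE running, to reduce interpretation bias

def evaluate(exp: Experiment) -> bool:
    """Conduct the experiment, then accept or reject the hypothesis."""
    observed = exp.run()
    accepted = observed >= exp.success_indicator
    print(f"{exp.hypothesis!r}: observed={observed:.2f}, "
          f"needed={exp.success_indicator:.2f} -> {'accept' if accepted else 'reject'}")
    return accepted

# Usage: a toy experiment with a fixed observation.
exp = Experiment("Bigger images lift conversion",
                 run=lambda: 0.07, success_indicator=0.05)
evaluate(exp)  # accepts: 0.07 >= 0.05
```

The important property the sketch encodes is that the success indicator is fixed before `run()` is called, mirroring the requirement to state indicators before conducting the experiment.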

Using an experimentation approach to software development

We need to challenge the concept of having fixed requirements for a product or service. Requirements are valuable when teams execute a well-known or well-understood phase of an initiative and can leverage established practices to achieve the outcome. However, when you are in an exploratory, complex, and uncertain phase, you need hypotheses.

Handing teams a set of business requirements reinforces an order-taking approach and mindset that is flawed: the business does the thinking and ‘knows’ what is right, and the purpose of the development team is to implement what it is told.

But when operating in an area of uncertainty and complexity, all the members of the development team should be encouraged to think and share insights on the problem and potential solutions. A team simply taking orders from a business owner is not utilizing the full potential, experience, and competency that a cross-functional, multi-disciplined team offers.

Framing Hypotheses

The traditional user story framework is focused on capturing requirements for what we want to build and for whom, to enable the user to receive a specific benefit from the system.

As A… <role>

I Want… <goal/desire>

So That… <receive benefit>

Behaviour-Driven Development (BDD) and Feature Injection aim to improve the original framework by supporting communication and collaboration between developers, testers, and non-technical participants in a software project.

In Order To… <receive benefit>

As A… <role>

I Want… <goal/desire>

When viewing work as an experiment, the traditional story framework is insufficient. As in our high school science experiment, we need to define the steps we will take to achieve the desired outcome. We then need to state the specific indicators (or signals) we expect to observe that provide evidence that our hypothesis is valid. These need to be stated before conducting the test to reduce biased interpretation of the results.

If we observe signals that indicate our hypothesis is correct, we can be more confident that we are on the right path and can alter the user story framework to reflect this.

Therefore, a user story structure to support Hypothesis-Driven Development would be:


We believe < this capability >

What functionality will we develop to test our hypothesis? By defining a ‘test’ capability of the product or service we are attempting to build, we identify the functionality and the hypothesis we want to test.

Will result in < this outcome >

What is the expected outcome of our experiment? What is the specific result we expect to achieve by building the ‘test’ capability?

We will have confidence to proceed when < we see a measurable signal >

What signals will indicate that the capability we have built is effective? What key metrics (qualitative or quantitative) will we measure to provide evidence that our experiment has succeeded and give us enough confidence to move to the next stage?
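As a minimal illustration, the three-part story can be captured as a simple text template; the field names below are hypothetical, and the filled-in values come from the hotel-booking example later in this article:

```python
# A minimal template for the three-part hypothesis story; field names are illustrative.
HDD_STORY = (
    "We believe {capability}\n"
    "Will result in {outcome}\n"
    "We will have confidence to proceed when {signal}"
)

# Usage, filled in with the hotel-booking example:
story = HDD_STORY.format(
    capability="increasing the size of hotel images on the booking page",
    outcome="improved customer engagement and conversion",
    signal="we see a 5% increase in customers who review hotel images "
           "and then book within 48 hours",
)
print(story)
```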

The threshold you use for statistical significance will depend on your understanding of the business and context you are operating within. Not every company has the user sample size of Amazon or Google to run statistically significant experiments in a short period of time. Limits and controls need to be defined by your organization to determine acceptable evidence thresholds that will allow the team to advance to the next step.

For example, if you are building a rocket ship, you may want your experiments to have a high threshold for statistical significance. If you are deciding between two different flows intended to help increase user sign-up, you may be happy to tolerate a lower significance threshold.
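One common way to evaluate such a threshold, assuming a simple conversion-rate experiment with a control and a variant, is a two-proportion z-test. The sample sizes and conversion counts below are made up for illustration:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                  # two-sided normal tail probability
    return z, p_value

# Usage: control converts 200/5000, variant 260/5000 (hypothetical numbers).
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"z={z:.2f}, p={p:.4f}")  # reject at alpha=0.05 if p < 0.05
```

The alpha you compare `p` against is exactly the organizational threshold discussed above: stricter for rocket ships, looser for a sign-up flow tweak.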

The final step is to clearly and visibly state any assumptions made about our hypothesis, to create a feedback loop for the team to provide further input, debate, and understanding of the circumstances under which we are performing the test. Are the assumptions valid, and do they make sense from a technical and business perspective?

Hypotheses, when aligned to your MVP, can provide a testing mechanism for your product or service vision. They can test the most uncertain areas of your product or service, in order to gain information and improve confidence.

Examples of Hypothesis-Driven Development user stories are:

Business story:

We Believe That increasing the size of hotel images on the booking page

Will Result In improved customer engagement and conversion

We Will Have Confidence To Proceed When we see a 5% increase in customers who review hotel images and then proceed to book within 48 hours.

It is imperative to have effective monitoring and evaluation tools in place when using an experimental approach to software development in order to measure the impact of our efforts and provide a feedback loop to the team. Otherwise, we are essentially blind to the outcomes of our efforts.

In agile software development, we define working software as the primary measure of progress. By combining Continuous Delivery and Hypothesis-Driven Development we can now define working software and validated learning as the primary measures of progress.

Ideally, we should not say we are done until we have measured the value of what is being delivered – in other words, gathered data to validate our hypothesis.

One example of how to gather data is performing A/B testing to test a hypothesis and measure the change in customer behavior. Alternative testing options include customer surveys, paper prototypes, and user and/or guerrilla testing.
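For an A/B test to produce clean data, each user must see the same variant on every visit. One widely used technique for this is deterministic hash-based bucketing; the experiment name and user identifier below are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user: the same user + experiment
    always maps to the same variant, with no assignment storage needed."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The assignment is stable across calls, so behaviour can be measured consistently.
print(assign_variant("user-42", "hotel-image-size"))
```

Hashing on the experiment name as well as the user id means a user's bucket in one experiment does not determine their bucket in another.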

One example of a company we have worked with that uses Hypothesis-Driven Development is lastminute.com. The team formulated a hypothesis that customers are only willing to pay a maximum price for a hotel based on the time of day they book. Tom Klein, CEO and President of Sabre Holdings, shared the story of how they improved conversion by 400% within a week.

Combining practices such as Hypothesis-Driven Development and Continuous Delivery accelerates experimentation and amplifies validated learning. This gives us the opportunity to accelerate the rate at which we innovate while relentlessly reducing costs, leaving our competitors in the dust. Ultimately, we can achieve one-piece flow: atomic changes that enable us to identify causal relationships between the changes we make to our products and services and their impact on key metrics.

As Kent Beck said, “Test-Driven Development is a great excuse to think about the problem before you think about the solution”. Hypothesis-Driven Development is a great opportunity to test what you think the problem is before you work on the solution.

We also run a workshop to help teams implement Hypothesis-Driven Development. Get in touch to run it at your company.

[1] Hypothesis-Driven Development by Jeffrey L. Taylor





What is hypothesis-driven development?

Hypothesis-driven development (HDD), also known as hypothesis-driven product development, is an approach used in software development and product management.

HDD involves creating hypotheses about user behavior, needs, or desired outcomes, and then designing and implementing experiments to validate or invalidate those hypotheses.


Why use a hypothesis-driven approach?

How do you implement hypothesis-driven development?

With hypothesis-driven development, instead of making assumptions and building products or features based on those assumptions, teams should formulate hypotheses and conduct experiments to gather data and insights.

This method assists with making informed decisions and reduces the overall risk of building products that do not meet user needs or solve their problems.

At a high level, here’s a general approach to implementing HDD: 

  • Identify the problem or opportunity: Begin by identifying the problem or opportunity that you want to address with your product or feature.
  • Create a hypothesis: Clearly define a hypothesis that describes a specific user behavior, need, or outcome you believe will occur if you implement the solution.
  • Design an experiment: Determine the best way to test your hypothesis. This could involve creating a prototype, conducting user interviews, A/B testing, or other forms of user research.
  • Implement the experiment: Execute the experiment by building the necessary components or conducting the research activities.
  • Collect and analyze data: Gather data from the experiment and analyze the results to determine if the hypothesis is supported or not.
  • If the hypothesis is supported, you can move forward with further development. 
  • If the hypothesis is not supported, you may need to pivot, refine the hypothesis, or explore alternative solutions.
  • Rinse and repeat: Continuously repeat the process, iterating and refining your hypotheses and experiments to guide the development of your product or feature.

Hypothesis-driven development emphasizes a data-driven and iterative approach to product development, allowing teams to make more informed decisions, validate assumptions, and ultimately deliver products that better meet user needs.


What is hypothesis-driven development?


Uncertainty is one of the biggest challenges of modern product development. Most often, there are more question marks than answers available.


This fact forces us to work in an environment of ambiguity and unpredictability.

Instead of combatting this, we should embrace the circumstances and use tools and solutions that excel in ambiguity. One of these tools is a hypothesis-driven approach to development.

Hypothesis-driven development in a nutshell

As the name suggests, hypothesis-driven development is an approach that focuses development efforts around, you guessed it, hypotheses.

To make this more tangible, let’s compare it to two other common development approaches: feature-driven and outcome-driven.

In feature-driven development, we prioritize our work and effort based on specific features we planned and decided on upfront. The underlying goal here is predictability.

In outcome-driven development, the priorities are dictated not by specific features but by broader outcomes we want to achieve. This approach helps us maximize the value generated.

When it comes to hypothesis-driven development, the development effort is focused first and foremost on validating the most pressing hypotheses the team has. The goal is to maximize learning speed over all else.

Benefits of hypothesis-driven development

There are numerous benefits of a hypothesis-driven approach to development, but the main ones are continuous learning, an MVP mindset, and data-driven decision-making.

Continuous learning

Hypothesis-driven development maximizes the amount of knowledge the team acquires with each release.

After all, if all you do is test hypotheses, each test must bring you some insight:

Continuous Learning With Hypothesis Driven Development Cycle Image

Hypothesis-driven development centers the whole prioritization and development process around learning.

MVP mindset

Instead of designing specific features or focusing on big, multi-release outcomes, a hypothesis-driven approach forces you to focus on minimum viable solutions (MVPs).

After all, the primary thing you are aiming for is hypothesis validation, which often doesn’t require scalability, a perfect user experience, or fully fledged features.


By definition, hypothesis-driven development forces you to truly focus on MVPs and avoid overcomplicating.

Data-driven decision-making

In hypothesis-driven development, each release focuses on testing a particular assumption. That test then brings you new data points, which help you formulate and prioritize the next hypotheses.

That’s truly a data-driven development loop that leaves little room for HiPPOs (the highest-paid person’s opinion).

Guide to hypothesis-driven development

Let’s take a look at what hypothesis-driven development looks like in practice. On a high level, it consists of four steps:

  • Formulate a list of hypotheses and assumptions
  • Prioritize the list
  • Design an MVP
  • Test and repeat

1. Formulate hypotheses

The first step is to list all hypotheses you are interested in.

Everything you wish to know about your users and market, as well as things you believe you know but don’t have tangible evidence to support, is a form of a hypothesis.

At this stage, I’m not a big fan of robust hypotheses such as, “We believe that if <we do something> then <something will happen> because <some user action>.”

To have such robust hypotheses, you need a solid enough understanding of your users, and if you do have it, then odds are you don’t need hypothesis-driven development anymore.

Instead, I prefer simpler statements that are closer to assumptions than hypotheses, such as:

  • “Our users will love the feature X”
  • “The option to do X is very important for the student segment”
  • “Exam preparation is an important and underserved need that our users have”

2. Prioritize

The next step in hypothesis-driven development is to prioritize all the assumptions and hypotheses you have. This will create your product backlog.

There are various prioritization frameworks and approaches out there, so choose whichever you prefer. I personally prioritize assumptions based on two main criteria:

  • How much will we gain if we positively validate the hypothesis?
  • How much will we learn during the validation process?

Your priorities, however, might differ depending on your current context.

3. Design an MVP

Hypothesis-driven development is centered around the idea of MVPs — that is, the smallest possible releases that will help you gather enough information to validate whether a given hypothesis is true.

User experience, maintainability, and product excellence are secondary.

4. Test and repeat

The last step is to launch the MVP and validate whether the actual impact and consequent user behavior validate or invalidate the initial hypothesis.

Success isn’t measured by whether the hypothesis turned out to be accurate, but by how many new insights and learnings you captured during the process.

Based on the experiment, revisit your current list of assumptions, and, if needed, adjust the priority list.

Challenges of hypothesis-driven development

Although hypothesis-driven development comes with great benefits, it’s not all wine and roses.

Let’s take a look at a few core challenges that come with a hypothesis-focused approach.

Lack of robust product experience

Focusing on validating hypotheses and the underlying MVP mindset comes at a cost. A robust product experience and great UX require polish, optimization, and iteration, all of which cut against speed-focused hypothesis-driven development.

You can’t optimize for both learning and quality simultaneously.

Unfocused direction

Although hypothesis-driven development is great for gathering initial learnings, eventually, you need to start developing a focused and sustainable long-term product strategy. That’s where outcome-driven development shines.

There’s an infinite amount of explorations you can do, but at some point, you must flip the switch and narrow down your focus around particular outcomes.

Over-emphasis on MVPs

Teams that embrace a hypothesis-driven approach often fall into the trap of an “MVP only” approach. However, shipping an actual prototype is not the only way to validate an assumption or hypothesis.

You can utilize tools such as user interviews, usability tests, market research, or willingness to pay (WTP) experiments to validate most of your doubts.

There’s a thin line between being MVP-focused in development and overusing MVPs as a validation tool.

When to use hypothesis-driven development

As you’ve most likely noticed, hypothesis-driven development isn’t a multi-tool solution that can be used in every context.

On the contrary, its challenges make it an unsuitable development strategy for many companies.

As a rule of thumb, hypothesis-driven development works best in early-stage products with a high dose of ambiguity. Focusing on hypotheses helps bring enough clarity for the product team to understand where to even focus:

When To Use Hypothesis Driven Development Grid

But once you discover your product-market fit and have a solid idea for your long-term strategy, it’s often better to shift into more outcome-focused development. You should still optimize for learning, but it should no longer be the primary focus of your development effort.

While at it, you might also consider feature-driven development as a next step. However, that works only under particular circumstances where predictability is more important than the impact itself — for example, B2B companies delivering custom solutions for their clients or products focused on compliance.

Hypothesis-driven development can be a powerful learning-maximization tool. Its focus on MVP, continuous learning process, and inherent data-driven approach to decision-making are great tools for reducing uncertainty and discovering a path forward in ambiguous settings.

Honestly, the whole process doesn’t differ much from other development processes. The primary difference is that the backlog and priorities focus on hypotheses rather than features or outcomes.

Start by listing your assumptions, prioritizing them as you would any other backlog, and working your way top-to-bottom by shipping MVPs and adjusting priorities as you learn more about your market and users.

However, since hypothesis-driven development often lacks long-term cohesiveness, focus, and sustainable product experience, it’s rarely a good long-term approach to product development.

I tend to stick to outcome-driven and feature-driven approaches most of the time and resort to hypothesis-driven development when the ambiguity in a particular area is so high that it becomes challenging to plan sensibly.


Hypothesis-Driven Development (Practitioner’s Guide)

Table of Contents

  • What is hypothesis-driven development (HDD)?
  • How do you know if it’s working?
  • How do you apply HDD to ‘continuous design’?
  • How do you apply HDD to application development?
  • How do you apply HDD to continuous delivery?
  • How does HDD relate to agile, design thinking, Lean Startup, etc.?

Like agile, hypothesis-driven development (HDD) is more a point of view with various associated practices than it is a single, particular practice or process. That said, my goal here is for you to leave with a solid understanding of how to do HDD and a specific set of steps that work for you to get started.

After reading this guide and trying out the related practice you will be able to:

  • Diagnose when and where hypothesis-driven development (HDD) makes sense for your team
  • Apply techniques from HDD to your work in small, success-based batches across your product pipeline
  • Frame and enhance your existing practices (where applicable) with HDD

Does your product program feel like a Netflix show you’d binge watch? Is your team excited to see what happens when you release stuff? If so, congratulations- you’re already doing it and please hit me up on Twitter so we can talk about it! If not, don’t worry- that’s pretty normal, but HDD offers some awesome opportunities to work better.

Scientific-Method

Building on the scientific method, HDD is a take on how to integrate test-driven approaches across your product development activities- everything from creating a user persona to figuring out which integration tests to automate. Yeah- wow, right?! It is a great way to energize and focus your practice of agile and your work in general.

By product pipeline, I mean the set of processes you and your team undertake to go from a certain set of product priorities to released product. If you’re doing agile, then iteration (sprints) is a big part of making these work.

Product-Pipeline-Cowan.001

How do you know if it’s working?

It wouldn’t be very hypothesis-driven if I didn’t have an answer to that! In the diagram above, you’ll find metrics for each area. For your application of HDD to what we’ll call continuous design, your metric to improve is the ratio of all your release content to the release content that meets or exceeds your target metrics on user behavior. For example, if you developed a new, additional way for users to search for products and set the success threshold at it being used in >10% of user sessions, did that feature succeed or fail by that measure? For application development, the metric you’re working to improve is basically velocity, meaning story points or, generally, release content per sprint. For continuous delivery, it’s how often you can release. Hypothesis testing is, of course, central to HDD and to generally doing agile with any kind of focus on valuable outcomes, and I think it shares the metric on successful release content with continuous design.

[Formula image: the metric ‘F’, whose components are described below]

The first component is team cost, which you would sum up over whatever period you’re measuring. This includes ‘c$’, total compensation including loading (benefits, equipment, etc.), as well as ‘g’, the cost of the gear you use- that might be application infrastructure like AWS, GCP, etc., along with any other infrastructure you buy or share with other teams. For example, using a backend-as-a-service like Heroku or Firebase might push up your value for ‘g’ while deferring the cost of building your own app infrastructure.

The next component is release content, fe. If you’re already estimating story points somehow, you can use those. If you’re a NoEstimates crew, and, hey, I get it, then you’d need to do some kind of rough proportional sizing of your release content for the period in question. The next term, rf, is optional, but this is an estimate of the time you’re having to invest in rework, bug fixes, manual testing, manual deployment, and anything else that doesn’t go as planned.

The last term, sd, is one of the most critical and is an estimate of the proportion of your release content that’s successful relative to the success metrics you set for it. For example, if you developed a new, additional way for users to search for products and set the success threshold at it being used in >10% of user sessions, did that feature succeed or fail by that measure? Naturally, if you’re not doing this already it will require some work and some changed habits, but it’s hard to deliver value in agile if you don’t know what value means, defined against actual user behavior.
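The formula itself appears in an image that isn’t reproduced here, so the following is only a plausible reconstruction from the components just described- team cost (‘c$’ plus ‘g’) divided by the release content that actually met its success metrics, with an optional haircut for rework. Treat it as an illustration, not the author’s exact definition:

```python
def cost_per_successful_point(c_comp, g_gear, f_e, s_d, r_f=0.0):
    """Hypothetical reading of 'F': team cost per successful story point.

    c_comp: total loaded compensation for the period ('c$')
    g_gear: infrastructure/gear cost for the period ('g')
    f_e:    release content (e.g. story points) shipped ('fe')
    s_d:    proportion of release content meeting its success metrics ('sd')
    r_f:    optional share of effort lost to rework/bug fixes ('rf')
    """
    effective_points = f_e * (1.0 - r_f) * s_d
    if effective_points <= 0:
        raise ValueError("no successful release content this period")
    return (c_comp + g_gear) / effective_points

# Invented example month: $60k loaded compensation, $2k infra,
# 50 points shipped, 80% meeting success thresholds, 10% rework.
f = cost_per_successful_point(60_000, 2_000, 50, 0.8, 0.1)
```

As the text notes, the absolute number matters less than the trend: technical debt should show up as ‘F’ creeping up, while investment in automation and user testing should pull it down.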

Here’s how some of the key terms lay out in the product pipeline:

[Diagram: key terms laid out across the product pipeline]

The example here shows how a team might tabulate this for a given month:

[Example: a team’s monthly tabulation of ‘F’]

Is the punchline that you should be shooting for a cost of $1,742 per story point? No. First, this is for a single month and would only serve the purpose of the team setting a baseline for itself. Like any agile practice, the interesting part of this is seeing how your value for ‘F’ changes from period to period, using your team retrospectives to talk about how to improve it. Second, this is just a single team and the economic value (ex: revenue) related to a given story point will vary enormously from product to product. There’s a Google Sheets-based calculator that you can use here: Innovation Accounting with ‘F’ .

Like any metric, ‘F’ only matters if you find it workable to get in the habit of measuring it and paying attention to it. As a team, say, evaluates its progress on OKR (objectives and key results), ‘F’ offers a view on the health of the team’s collaboration together in the context of their product and organization. For example, if the team’s accruing technical debt, that will show up as a steady increase in ‘F’. If a team’s invested in test or deploy automation or started testing their release content with users more specifically, that should show up as a steady lowering of ‘F’.

In the next few sections, we’ll step through how to apply HDD to your product pipeline by area, starting with continuous design.

pipeline-continuous-design

It’s a mistake to ask your designer to explain every little thing they’re doing, but it’s also a mistake to decouple their work from your product’s economics. On the one hand, no one likes someone looking over their shoulder and you may not have the professional training to reasonably understand what they’re doing hour to hour, even day to day. On the other hand, it’s a mistake to charter a designer’s work without a testable definition of success and without collaborating around that.

Managing this is hard since most of us aren’t designers and because it takes a lot of work and attention to detail to work out what you really want to achieve with a given design.

Beginning with the End in Mind

The difference between art and design is intention- in design we always have one and, in practice, it should be testable. For this, I like the practice of customer experience (CX) mapping. CX mapping is a process for focusing the work of a team on outcomes–day to day, week to week, and quarter to quarter. It’s amenable to both qualitative and quantitative evidence but it is strictly focused on observed customer behaviors, as opposed to less direct, more lagging observations.

CX mapping works to define the CX in testable terms that are amenable to both qualitative and quantitative evidence. Specifically for each phase of a potential customer getting to behaviors that accrue to your product/market fit (customer funnel), it answers the following questions:

1. What do we mean by this phase of the customer funnel? 

What do we mean by, say, ‘Acquisition’ for this product or individual feature? How would we know it if we see it?

2. How do we observe this (in quantitative terms)? What’s the DV?

This comes next, after we answer the question “What does this mean?”. The goal is to come up with a single focal metric (maybe two), a ‘dependent variable’ (DV) that tells you how a customer has behaved in a given phase of the CX (ex: Acquisition, Onboarding, etc.).

3. What is the cut off for a transition?

Not super exciting, but extremely important in actual practice, the idea here is to establish the cutoff for deciding whether a user has progressed from one phase to the next or abandoned/churned.

4. What is our ‘Line in the Sand’ threshold?

Popularized by the book ‘Lean Analytics’, the idea here is that good metrics are ones that change a team’s behavior (decisions) and for that you need to establish a threshold in advance for decision making.

5. How might we test this? What new IVs are worth testing?

The ‘independent variables’ (IV’s) you might test are basically just ideas for improving the DV (#2 above).

6. What’s tricky? What do we need to watch out for?

Getting this working will take some tuning, but it’s infinitely doable and there aren’t a lot of good substitutes for focusing on what’s a win and what’s a waste of time.
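One way to make a CX-map phase concrete is to capture the six answers above as plain data. The phase, metric, and thresholds below are invented for illustration (loosely themed on the HVAC example that follows):

```python
# A sketch of one CX-map phase, mirroring the six questions above.
# All field names and values are illustrative, not a real schema.

cx_map_phase = {
    "phase": "Acquisition",
    # Q1: what do we mean by this phase?
    "definition": "a technician hears about the parts app and visits it",
    # Q2: how do we observe it? What's the DV?
    "dependent_variable": "share of visitors who create an account",
    # Q3: cutoff for deciding progressed vs. abandoned/churned
    "transition_cutoff": "account created within 7 days of first visit",
    # Q4: 'line in the sand' threshold, decided in advance
    "line_in_the_sand": 0.10,
    # Q5: candidate IVs - ideas for improving the DV
    "independent_variables": [
        "referral prompt after a completed repair",
        "QR code on replacement-part packaging",
    ],
    # Q6: what's tricky?
    "watch_out_for": "seasonal repair volume skewing the DV",
}
```

Writing each funnel phase down this way keeps the team honest about whether a given release actually moved a DV past its pre-agreed threshold.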

The image below shows a working CX map for a company (HVAC in a Hurry) that services commercial heating, ventilation, and air-conditioning systems. And this particular CX map is for the specific ‘job’/task/problem of how their field technicians get the replacement parts they need.

CX-Map-Full-HinH

For more on CX mapping, you can also check out its page- Tutorial: Customer Experience (CX) Mapping.

Unpacking Continuous Design for HDD

For unpacking the work of design/Continuous Design with HDD, I like to use the ‘double diamond’ framing of ‘right problem’ vs. ‘right solution’, which I first learned about in Donald Norman’s seminal book, ‘The Design of Everyday Things’.

I’ve organized the balance of this section around three big questions:

  • How do you test that you’ve found the ‘right problem’?
  • How do you test that you’ve found demand and have the ‘right solution’?
  • How do you test that you’ve designed the ‘right solution’?

hdd+design-thinking-UX

Let’s say it’s an internal project- a ‘digital transformation’ for an HVAC (heating, ventilation, and air conditioning) service company. The digital team thinks it would be cool to organize the documentation for all the different HVAC equipment the company’s technicians service. But, would it be?

The only way to find out is to go out and talk to these technicians! First, you need to test whether you’re talking to someone who is one of these technicians. For example, you might have a screening question like: ‘How many HVACs did you repair last week?’. If it’s <10, you might instead be talking to a handyman or a manager (or someone who’s not an HVAC tech at all).

Second, you need to ask non-leading questions. The evidentiary value of a specific answer to a general question is much higher than that of a specific answer to a specific question. Also, some questions are just leading. For example, if you ask such a subject ‘Would you use a documentation system if we built it?’, they’re going to say yes, just to avoid the awkwardness and sales pitch they expect if they say no.

How do you draft personas? Much more renowned designers than myself (Donald Norman among them) disagree with me about this, but personally I like to draft my personas while I’m creating my interview guide and before I do my first set of interviews. Whether you draft or interview first is also of secondary importance if you’re doing HDD- if you’re not iteratively interviewing and revising your material based on what you’ve found, it’s not going to be very functional anyway.

Really, the persona (and the jobs-to-be-done) is a means to an end- it should be answering some facet of the question ‘Who is our customer, and what’s important to them?’. It’s iterative, with a process that looks something like this:

personas-process-v3

How do you draft jobs-to-be-done? Personally- I like to work these in a similar fashion- draft, interview, revise, and then repeat, repeat, repeat.

You’ll use the same interview guide and subjects for these. The template is the same as the personas, but I maintain a separate (though related) tutorial for these–

  • A guide on creating Jobs-to-be-Done (JTBD)
  • A template for drafting jobs-to-be-done (JTBD)

How do you interview subjects? And, action! The #1 place I see teams struggle is at the beginning and it’s with the paradox that to get to a big market you need to nail a series of small markets. Sure, they might have heard something about segmentation in a marketing class, but here you need to apply that from the very beginning.

The fix is to create a screener for each persona. This is a factual question whose job is specifically and only to determine whether a given subject does or does not map to your target persona. For the HVAC in a Hurry technician persona (see above), you might have a screening question like: ‘How many HVACs did you repair last week?’. If it’s <10, you might instead be talking to a handyman or a manager (or someone who’s not an HVAC tech at all).

And this is the point where (if I’ve made them comfortable enough to be candid with me) teams will tell me, ‘But we want to go big- be the next Facebook.’ And then we talk about how just about all those success stories- products that have, for all intents and purposes, a universal user base- started out by killing it in small, specific segments and learning and growing from there.

Sorry for all that, reader, but I find all this so frequently at this point and it’s so crucial to what I think is a healthy practice of HDD it seemed necessary.

The key with the interview guide is to start with general questions where you’re testing for a specific answer and then progressively get into more specific questions. Here are some resources–

  • An example interview guide related to the previous tutorials
  • A general take on these interviews in the context of a larger customer discovery/design research program
  • A template for drafting an interview guide

To recap, what’s a ‘Right Problem’ hypothesis? The Right Problem (persona and PS/JTBD) hypothesis is the most fundamental, but the hardest to pin down. You should know what kind of shoes your customer wears and when and why they use your product. You should be able to apply factual screeners to identify subjects that map to your persona or personas.

You should know what people who look like/behave like your customer who don’t use your product are doing instead, particularly if you’re in an industry undergoing change. You should be analyzing your quantitative data with strong, specific, emphatic hypotheses.

If you make software for HVAC (heating, ventilation and air conditioning) technicians, you should have a decent idea of what you’re likely to hear if you ask such a person a question like ‘What are the top 5 hardest things about finishing an HVAC repair?’

In summary, HDD here looks something like this:

Persona-Hypothesis

01 IDEA : The working idea is that you know your customer and you’re solving a problem/doing a job (whatever term feels like it fits for you) that is important to them. If this isn’t the case, everything else you’re going to do isn’t going to matter.

Also, you know the top alternatives, which may or may not be what you see as your direct competitors. This is important as an input into focused testing demand to see if you have the Right Solution.

02 HYPOTHESIS : If you ask non-leading questions (like ‘What are the top 5 hardest things about finishing an HVAC repair?’), then you should generally hear relatively similar responses.

03 EXPERIMENTAL DESIGN : You’ll want an Interview Guide and, critically, a screener. This is a factual question you can use to make sure any given subject maps to your persona. With the HVAC repair example, this would be something like ‘How many HVAC repairs have you done in the last week?’ where you’re expecting an answer >5. This is important because if your screener isn’t tight enough, your interview responses may not converge.
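The screener logic described in this step is simple enough to state as code. The threshold is the one from the example above; the function name is made up:

```python
def passes_screener(repairs_last_week: int, minimum: int = 5) -> bool:
    # Factual screening question: 'How many HVAC repairs have you done
    # in the last week?'. Expecting an answer >5 for an active tech;
    # lower answers suggest a handyman, a manager, or a non-tech.
    return repairs_last_week > minimum

# An active technician passes; a manager who did two repairs does not.
tech_ok = passes_screener(12)
manager_ok = passes_screener(2)
```

The point of keeping the screener this mechanical is that persona fit becomes a factual yes/no, so a non-converging set of interview answers can be traced back to a loose screener rather than argued away.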

04 EXPERIMENTATION : Get out and interview some subjects- but with a screener and an interview guide. The resources above have more on this, but one key thing to remember is that the interview guide is a guide, not a questionnaire. Your job is to make the interaction as normal as possible and it’s perfectly OK to skip questions or change them. It’s also 1000% OK to revise your interview guide during the process.

05: PIVOT OR PERSEVERE : What did you learn? Was it consistent? Good results are: a) We didn’t know what was on their A-list and what alternatives they are using, but we do know. b) We knew what was on their A-list and what alternatives they are using- we were pretty much right (doesn’t happen as much as you’d think). c) Our interviews just didn’t work/converge. Let’s try this again with some changes (happens all the time to smart teams and is very healthy).

How do you test that you’ve found demand and have the ‘Right Solution’?

By this, I mean: How do you test whether you have demand for your proposition? How do you know whether it’s better enough at solving a problem (doing a job, etc.) than the current alternatives your target persona has available to them now?

If an existing team was going to pick one of these areas to start with, I’d pick this one. While they’ll waste time if they haven’t found the right problem to solve and, yes, usability does matter, in practice this area of HDD is a good forcing function for really finding out what the team knows vs. doesn’t. This is why I show it as a kind of fulcrum between Right Problem and Right Solution:

Right-Solution-VP

This is not about usability and it does not involve showing someone a prototype, asking them if they like it, and checking the box.

Lean Startup offers a body of practice that’s an excellent fit for this. However, it’s widely misused because it’s so much more fun to build stuff than to test whether or not anyone cares about your idea. Yeah, seriously- that is the central challenge of Lean Startup.

Here’s the exciting part: You can massively improve your odds of success. While Lean Startup does not claim to be able to take any idea and make it successful, it does claim to minimize waste- and that matters a lot. Let’s just say that a new product or feature has a 1 in 5 chance of being successful. Using Lean Startup, you can iterate through 5 ideas in the space it would take you to build 1 out (and hope for the best)- this makes the improbable probable, which is pretty much the most you can ask for in the innovation game.

Build, measure, learn, right? Kind of. I’ll harp on this since it’s important and a common failure mode related to Lean Startup: an MVP is not a 1.0. As the Lean Startup folks (and Eric Ries’ book) will tell you, the right order is learn, build, measure. Specifically–

Learn: Who your customer is and what matters to them (see Solving the Right Problem, above). If you don’t do this, you’ll be throwing darts with your eyes closed. Those darts are a lot cheaper than the darts you’d throw if you were building out the solution all the way (to strain the metaphor some), but far from free.

In particular, I see lots of teams run an MVP experiment and get confusing, inconsistent results. Most of the time, this is because they don’t have a screener and they’re putting the MVP in front of an audience that’s too wide ranging. A grandmother is going to respond differently than a millennial to the same thing.

Build: An experiment, not a real product, if at all possible (and it almost always is). Then consider MVP archetypes (see below) that will deliver the best results and try them out. You’ll likely have to iterate on the experiment itself some, particularly if it’s your first go.

Measure: Have metrics and link them to a kill decision. The Lean Startup term is ‘pivot or persevere’, which is great and makes perfect sense, but in practice the pivot/kill decisions are hard, and as you design your experiment you should really think about what metrics and thresholds are really going to convince you.

How do you code an MVP? You don’t. This MVP is a means to running an experiment to test motivation- so formulate your experiment first and then figure out an MVP that will get you the best results with the least amount of time and money. Since this is a practitioner’s guide: with regard to ‘time’, that’s both the time you’ll have to invest and how long the experiment will take to conclude. I’ve seen them both matter.

The most important first step is just to start with a simple hypothesis about your idea, and I like the form of ‘If we [do something] for [a specific customer/persona], then they will [respond in a specific, observable way that we can measure]’. For example, if you’re building an app for parents to manage allowances for their children, it would be something like ‘If we offer parents an app to manage their kids’ allowances, they will download it, try it, make a habit of using it, and pay for a subscription.’
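That template is easy to mechanize, which can help keep a team’s hypotheses uniform and comparable. A tiny, purely illustrative helper:

```python
def format_hypothesis(action: str, persona: str, observable_response: str) -> str:
    # Fills the 'If we [do something] for [persona], then they will
    # [respond in an observable way]' template from the text above.
    return (f"If we {action} for {persona}, "
            f"then they will {observable_response}.")

h = format_hypothesis(
    "offer an app to manage kids' allowances",
    "parents",
    "download it, try it, make a habit of using it, and pay for a subscription",
)
```

The discipline is in the last slot: the response has to be specific and observable, otherwise there’s nothing to measure against later.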

All that said, for getting started here are–

  • A guide on testing with Lean Startup
  • A template for creating motivation/demand experiments

To recap, what’s a Right Solution hypothesis for testing demand? The core hypothesis is that you have a value proposition that’s better enough than the target persona’s current alternatives that you’re going to acquire customers.

As you may notice, this creates a tight linkage with your testing from Solving the Right Problem. This is important because while testing value propositions with Lean Startup is way cheaper than building product, it still takes work and you can only run a finite set of tests. So, before you do this kind of testing I highly recommend you’ve iterated to validated learning on what you see below: a persona, one or more PS/JTBD, the alternatives they’re using, and a testable view of why your VP is going to displace those alternatives. With that, your odds of doing quality work in this area dramatically increase!

trent-value-proposition.001

What’s the testing, then? Well, it looks something like this:

[Diagram: demand-testing steps 01-05]

01 IDEA : Most practicing scientists will tell you that the best way to get a good experimental result is to start with a strong hypothesis. Validating that you have the Right Problem and know what alternatives you’re competing against is critical to making investments in this kind of testing yield valuable results.

With that, you have a nice clear view of what alternative you’re trying to see if you’re better than.

02 HYPOTHESIS : I like a cause and effect stated here, like: ‘If we [offer something to said persona], they will [react in some observable way].’ This really helps focus your work on the MVP.

03 EXPERIMENTAL DESIGN : The MVP is a means to enable an experiment. It’s important to have a clear, explicit declaration of that hypothesis and for the MVP to deliver a metric for which you will (in advance) decide on a fail threshold. Most teams find it easier to kill an idea decisively with a kill metric vs. a success metric, even though they’re literally different sides of the same threshold.

04 EXPERIMENTATION : It is OK to tweak the parameters some as you run the experiment. For example, if you’re running a Google AdWords test, feel free to try new and different keyword phrases.

05: PIVOT OR PERSEVERE : Did you end up above or below your fail threshold? If below, pivot and focus on something else. If above, great- what is the next step to scaling up this proposition?
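Steps 03-05 boil down to committing to a fail threshold in advance and then honoring it. A minimal sketch, with invented numbers:

```python
def pivot_or_persevere(observed: float, fail_threshold: float) -> str:
    # The threshold is agreed before the experiment runs; below it,
    # the idea is killed (pivot), at or above it, you scale (persevere).
    return "pivot" if observed < fail_threshold else "persevere"

# Invented example: we committed in advance to kill the idea if fewer
# than 3% of ad viewers clicked through to the landing page, and the
# experiment observed a 2.1% click-through rate.
decision = pivot_or_persevere(observed=0.021, fail_threshold=0.03)
```

Encoding the decision this way is the point of the ‘kill metric’: once the experiment concludes, the pivot/persevere call is already made, so there’s less room to rationalize a weak result.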

How does this relate to usability? What’s usability vs. motivation? You might reasonably wonder: If my MVP has something that’s hard to understand, won’t that affect the results? Yes, sure. Testing for usability and the related tasks of building stuff are much more fun and (short-term) gratifying. I can’t emphasize enough how much harder it is for most founders, etc. to push themselves to focus on motivation.

There’s certainly a relationship and, as we transition to the next section on usability, it seems like a good time to introduce the relationship between motivation and usability. My favorite tool for this is BJ Fogg’s Fogg Curve, which appears below. On the y-axis is motivation and on the x-axis is ‘ability’, the inverse of usability. A point in the upper left would be something like a cure for cancer- something you really want no matter how hard it is to deal with. On the bottom right would be something like checking Facebook- you may not be super motivated, but it’s so easy.

The punchline is that there’s certainly a relationship but beware that for most of us our natural bias is to neglect testing our hypotheses about motivation in favor of testing usability.

Fogg-Curve

How do you test that you’ve designed the ‘Right Solution’?

First and foremost, delivering great usability is a team sport. Without a strong, co-created narrative, your performance is going to be sub-par. This means your developers, testers, and analysts should be asking lots of hard, inconvenient (but relevant) questions about the user stories. For more on how these fit into an overall design program, let’s zoom out and we’ll again stand on the shoulders of Donald Norman.

Usability and User Cognition

To unpack usability in a coherent, testable fashion, I like to use Donald Norman’s 7-step model of user cognition:

user-cognition

The process starts with a Goal, and that goal interacts with an object in an environment, the ‘World’. With the concepts we’ve been using here, the Goal is equivalent to a job-to-be-done. The World is your application in whatever circumstances your customer will use it (in a cubicle, on a plane, etc.).

The Reflective layer is where the customer is making a decision about alternatives for their JTBD/PS. In his seminal book, The Design of Everyday Things, Donald Norman's example goal is continuing to read a book as the sun goes down. In the framings we've been using, we looked at understanding your customer's Goals/JTBD in 'How do you test that you've found the right problem?', and we looked at evaluating their alternatives relative to your own (proposition) in 'How do you test that you've found the right solution?'.

The Behavioral layer is where the user interacts with your application to get what they want, hopefully engaging with interface patterns they know so well they barely have to think about them. This is what we'll focus on in this section. Critical here is leading with strong narrative (user stories), pairing those with interface patterns your persona understands well, and then iterating through qualitative and quantitative testing.

The Visceral layer is the lower-level visual cues a user gets; in the design world this is a lot about good visual design, and even more about visual consistency. We're not going to look at that in depth here, but if you haven't already, make sure you have a working style guide to ensure consistency (see Creating a Style Guide).

How do you unpack the UX stack for testability? Back to our example company, HVAC in a Hurry, which services commercial heating, ventilation, and A/C systems: let's say we've arrived at the following tested learnings for Trent the Technician:

As we look at how we’ll iterate to the right solution in terms of usability, let’s say we arrive at the following user story we want to unpack (this would be one of many, even just for the PS/JTBD above):

As Trent the Technician, I know the part number and I want to find it on the system, so that I can find out its price and availability.

Let’s step through the 7 steps above in the context of HDD, with a particular focus on achieving strong usability.

1. Goal This is the PS/JTBD: getting replacement parts to a job site. An HDD-enabled team would have found this out by doing customer discovery interviews with subjects they've screened and validated as relevant to the target persona. They would have asked non-leading questions like 'What are the top five hardest things about finishing an HVAC repair?' and consistently heard that one such thing is sorting out replacement parts. This validates the hypothesis that this PS/JTBD matters.

2. Plan For the PS/JTBD/Goal, which alternative are they likely to select? Is our proposition enough better than the alternatives? This is where Lean Startup and demand/motivation testing are critical. This is where we focused in 'How do you test that you've found the right solution?', and the HVAC in a Hurry team might have run a series of MVPs both to understand how their subjects might interact with a solution (concierge MVP) and to see whether they're likely to engage (smoke test MVP).

3. Specify Our first step here is just to think through what the user expects to do and how we can make that as natural as possible. This is where drafting testable user stories, looking at comparables, and then pairing clickable prototypes with iterative usability testing are critical. Following that, make sure your analytics answer the same questions, but at scale and with the observations available to you.

4. Perform If you did a good job in Specify and there are no overt visual problems (like 'Can I click this part of the interface?'), you'll be fine here.

5. Perceive We're at the bottom of the stack and looping back up from World: is the feedback from your application readily apparent to the user? For example, if you turn a switch for a lightbulb, you know whether it worked or not. Is your user testing delivering similar clarity on user reactions?

6. Interpret Do they understand what they're seeing? Does it make sense relative to what they expected to happen? For example, if the user just clicked 'Save', do they know that whatever they wanted to save is saved and OK? Or not?

7. Compare Have you delivered your target VP? Did they get what they wanted relative to the Goal/PS/JTBD?

How do you draft relevant, focused, testable user stories? Without these, everything else is on a shaky foundation. Sometimes things will work out; other times they won't, and it won't be clear why or why not. Also, getting in the habit of pushing yourself on the relevance and testability of each little detail will make you a much better designer and a much better steward of where and why your team invests in building software.

To get started, here are:
  • A guide on creating user stories
  • A template for drafting user stories

How do you find the relevant patterns and apply them? Once you've got great narrative, it's time to put the best-understood, most expected, most relevant interface patterns in front of your user. Getting there is a process.

To get started, here is:
  • A guide on interface patterns and prototyping

How do you run qualitative user testing early and often? Once you've got something great to test, it's time to get that design in front of a user, give them a prompt, and see what happens; then rinse and repeat with your design.

To get started, here are:
  • A guide on qualitative usability testing
  • A template for testing your user stories

How do you focus your outcomes and instrument actionable observation? Once you release product (features, etc.) into the wild, it's important to make sure you're always closing the loop with analytics that are a regular part of your agile cadences. For example, in a high-functioning practice of HDD the team should be interested in, and regularly reviewing, focused analytics to see how they pair with the results of their qualitative usability testing.

To get started, here is:
  • A guide on quantitative usability testing with Google Analytics

To recap, what's a Right Solution hypothesis for usability? Essentially, the usability hypothesis is that you've arrived at a high-performing UI pattern that minimizes cognitive load and maximizes the user's ability to act on their motivation to connect with your proposition.

[Figure: the Right Solution hypothesis for usability]

01 IDEA: If you're writing good user stories, you already have your ideas expressed in the form of testable hypotheses. Stay focused and use these to anchor your testing. You're not trying to test which color of drop-down works best; you're testing which affordances best deliver on a given user story.

02 HYPOTHESIS: Basically, the hypothesis is that 'for [x] user story, this interface pattern will perform well, assuming we supply the relevant motivation and have the right assessments in place'.

03 EXPERIMENTAL DESIGN: Really, this means having tests set up that, beyond working, link user stories to prompts and narrative that supply motivation, and that have discernible assessments to help you make sure the subject didn't click in the wrong place by mistake.

04 EXPERIMENTATION: It is OK to iterate on your prototypes and even your test plan in between sessions, particularly at the exploratory stages.

05 PIVOT OR PERSEVERE: Did the patterns perform well, or is it worth reviewing patterns and comparables and giving it another go?
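The five steps above can be captured as a lightweight record the team fills in per user story. This is just an illustrative sketch; the field names and example values are my own, not part of any standard HDD tooling.

```python
from dataclasses import dataclass, field

@dataclass
class UsabilityHypothesis:
    """One usability hypothesis, tracked through the IDEA -> PIVOT/PERSEVERE loop."""
    user_story: str              # 01 IDEA: anchored to a specific user story
    interface_pattern: str       # 02 HYPOTHESIS: the pattern we expect to perform well
    prompt: str                  # 03 EXPERIMENTAL DESIGN: narrative supplying motivation
    success_criteria: str        # 03: how we know the subject succeeded on purpose
    session_notes: list = field(default_factory=list)  # 04 EXPERIMENTATION
    decision: str = "undecided"  # 05: eventually 'pivot' or 'persevere'

h = UsabilityHypothesis(
    user_story="As Trent, I know the part number and want to find it on the system.",
    interface_pattern="type-ahead search box on the landing screen",
    prompt="You need the price of part 1234 for a job you're on. Find it.",
    success_criteria="Subject reaches the part detail page without help.",
)
h.session_notes.append("Subject 1: found search immediately; reached detail page.")
```

Keeping the record this explicit makes the retrospective question concrete: did the pattern earn a 'persevere', or do we go back to comparables?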

There's a lot of great material and successful practice on the engineering-management side of application development. But should you pair program? Do estimates, or go NoEstimates? None of these is the right choice for every team all of the time; in that sense, HDD is the only way to reliably drive up your velocity, or f_e. What I love about agile is that fundamental to its design is the coupling of working out how to make your release content successful with figuring out how to make your team more successful.

What does HDD have to offer application development, then? First, I think it's useful to consider how well HDD integrates with agile and what existing agile habits you can borrow to improve your practice of HDD. For example, let's say your team is used to doing weekly retrospectives on its practice of agile. That's the obvious place to start introducing a retrospective on how your hypothesis testing went, and deciding what that should mean for the next sprint's backlog.

Second, let's look at the linkage from continuous design. Primarily, we're looking to move fewer designs into development by doing more disciplined experimentation before we invest in development. This leaves the developers room to do things better and keep the pipeline healthier (faster, and able to produce more content or story points per sprint). We'd do this by making sure we're dealing with a user that exists, a job/problem that exists for them, and only propositions that we've successfully tested with non-product MVPs.

But wait: what exactly does 'only propositions that we've successfully tested with non-product MVPs' mean? In practice, there's no such thing as fully validating a proposition. You're constantly looking at user behavior and deciding where you'd be best off improving. To create balance and consistency from sprint to sprint, I like to use a 'UX map'. You can read more about it at that link, but the basic idea is that for a given JTBD:VP pairing you map out the customer experience (CX) arc, broken into progressive stages that each have a description, a dependent variable you'll observe to assess success, and ideas on things (independent variables, or 'IVs') to test. For example, here's what such a UX map might look like for HVAC in a Hurry's work on the JTBD of 'getting replacement parts to a job site'.

[Figure: example UX map for HVAC in a Hurry]

From there, how can we use HDD to bring better, more testable design into the development process? One thing I like to do with user stories and HDD is to make a habit of pairing every single story with a simple analytical question that tells me whether the story is 'done' from the standpoint of creating the target user behavior. From there, I consider focal metrics. Here's what that might look like at HinH.

[Figure: user stories paired with analytical questions and focal metrics at HinH]
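One way to make that habit concrete is to keep each story with its 'done?' question and focal metric in one place. A toy sketch; the story text, questions, and metric names below are hypothetical:

```python
# Hypothetical pairing of user stories with a 'done?' analytics question
# and a focal metric, in the spirit of the HinH example.
story_analytics = [
    {
        "story": "As Trent, I want to find a part by number to check price and availability.",
        "question": "Of technicians who start a part search, how many reach a part detail page?",
        "focal_metric": "part_search -> part_detail conversion rate",
    },
    {
        "story": "As Trent, I want to order the part for delivery to the job site.",
        "question": "Of technicians who view a part, how many complete an order?",
        "focal_metric": "part_detail -> order_complete conversion rate",
    },
]

def is_story_done(observed_rate: float, target_rate: float) -> bool:
    """A story is 'done' only when it produces the target user behavior."""
    return observed_rate >= target_rate

print(is_story_done(observed_rate=0.42, target_rate=0.30))  # True
```

The point isn't the data structure; it's that no story enters the backlog without a question you could actually answer from analytics.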

For the last couple of decades, test and deploy/ops were often treated as a kind of stepchild to development: something that had to happen at the end and was the sole responsibility of an outside group of specialists. It didn't make sense then, and now an integral test capability is table stakes for getting to a continuous product pipeline, which is at the core of HDD itself.

A continuous pipeline means that you release a lot. Getting good at releasing relieves a lot of energy-draining stress on the product team and creates the opportunity for the rapid learning that HDD requires. Interestingly, research by outfits like DORA (now part of Google) and CircleCI shows that teams able to do this both release faster and encounter fewer bugs in production.

Amazon famously releases code every 11.6 seconds. What this means is that a developer can push a button to commit code, and everything from there to that code showing up in front of a customer is automated. How does that happen? For starters, there are two big (related) areas: test and deploy.

While there is some important plumbing that I'll cover in the next couple of sections, in practice most teams struggle with test coverage. What does that mean? In principle, it means that even though you can't test everything, you iterate toward test-automation coverage that catches most bugs before they end up in front of a user. For most teams, that means a 'pyramid' of tests like the one you see here, where the x-axis is the number of tests and the y-axis is their level of abstraction.

[Figure: the test pyramid]

The reason for the pyramid shape is that the tests are progressively more work to create and maintain, and each one provides less and less isolation about where a bug actually resides. In terms of iteration and retrospectives, this means you're always asking, 'What's the lowest-level test that could have caught this bug?'.

Unit tests isolate the operation of a single function and make sure it works as expected. Integration tests span two or more functions, and system tests, as you'd guess, more or less emulate the way a user or endpoint would interact with the system.
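As a minimal sketch of the bottom two layers, here's what that might look like with Python's built-in unittest; the functions under test are hypothetical stand-ins for HinH's part lookup:

```python
import unittest

# Hypothetical functions under test (stand-ins for a real part-lookup service).
def parse_part_number(raw: str) -> str:
    """Normalize a user-entered part number."""
    return raw.strip().upper()

def lookup_price(raw: str, catalog: dict) -> float:
    """Normalize the input, then look it up in a price catalog."""
    return catalog[parse_part_number(raw)]

class TestPartLookup(unittest.TestCase):
    def test_parse_part_number_unit(self):
        # Unit test: isolates a single function.
        self.assertEqual(parse_part_number("  ab-123 "), "AB-123")

    def test_lookup_price_integration(self):
        # Integration test: spans two functions working together.
        self.assertEqual(lookup_price(" ab-123 ", {"AB-123": 19.99}), 19.99)
```

Run with `python -m unittest`. A system test would sit one level higher again, driving the deployed search UI or API endpoint.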

Feature flags: These are a separate but somewhat complementary facility. The basic idea is that as you add new features, each one has a flag that can enable or disable it. Flags start out disabled, and you make sure they don't break anything. Then you can enable them for small sets of users and test whether a) the metrics look normal and nothing's broken and, closer to the core of HDD, b) users are actually interacting with the new feature.
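A minimal sketch of the idea, assuming an in-process flag table (real systems typically use a config service or a vendor product). Hashing the user ID gives each user a stable on/off decision while you ramp the rollout fraction:

```python
import hashlib

FLAGS = {
    # feature name -> fraction of users the feature is enabled for
    "new_part_search": 0.0,   # dark: shipped but disabled for everyone
    "delivery_button": 0.05,  # canary: enabled for roughly 5% of users
}

def is_enabled(feature: str, user_id: str) -> bool:
    """Deterministically bucket a user so they get a stable experience."""
    rollout = FLAGS.get(feature, 0.0)
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    return bucket < rollout

# The same user always sees the same state for a given flag.
assert is_enabled("delivery_button", "user-42") == is_enabled("delivery_button", "user-42")
assert not is_enabled("new_part_search", "user-42")  # 0% rollout: off for everyone
```

Because the bucketing is per-feature, you can also compare engagement metrics between the enabled and disabled cohorts, which is the HDD payoff.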

In the olden days (which is when I last did this kind of thing for work), if you wanted to update a web application, you had to log in to a server, upload the software, and then configure it, maybe with the help of some scripts. Very often, things didn't go according to plan, for the predictable reason that there was a lot of opportunity for variation between how the update was tested and the machine you were updating, not to mention how you were updating it.

Now computers do all that, but you still have to program them. As such, deployment has increasingly become a job where you're coding solutions on top of platforms like Kubernetes, Chef, and Terraform. The folks doing this work are (hopefully) working closely with developers. For example, rather than spending time and money writing documentation for an upgrade, the team would collaborate on code and configuration that runs on the kind of platform I mentioned earlier.

Pipeline Automation

Most teams with a continuous pipeline orchestrate something like what you see below with an application made for this, like Jenkins or CircleCI. The Manual Validation step you see is, of course, optional and not part of truly continuous delivery. In fact, if you automate only up to the point of a staging server or similar before you release, that's what's generally called continuous integration.

Finally, the two yellow items you see are where the team centralizes their code (version control) and the build that they’re taking from commit to deploy (artifact repository).

[Figure: a continuous delivery pipeline]
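Conceptually, the orchestration is a series of gates where a failure stops the pipeline. A toy sketch; the stage names are illustrative, not Jenkins or CircleCI configuration:

```python
# Toy continuous-delivery pipeline: each stage must pass before the next runs.
def run_pipeline(commit: str, stages) -> dict:
    results = {}
    for name, stage in stages:
        ok = stage(commit)
        results[name] = "passed" if ok else "failed"
        if not ok:
            break  # fail fast: later stages never run
    return results

stages = [
    ("unit_tests", lambda c: True),
    ("integration_tests", lambda c: True),
    ("build_artifact", lambda c: True),     # stored in the artifact repository
    ("deploy_staging", lambda c: True),     # continuous integration stops here
    ("deploy_production", lambda c: True),  # continuous deployment goes all the way
]

print(run_pipeline("abc123", stages))
```

The fail-fast behavior is the whole point: a cheap unit-test failure should prevent the expensive deploy stages from ever running.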

To recap, what’s the hypothesis?

Well, you can't test everything, but you can make sure you're testing what tends to affect your users, and likewise in the deployment process. I'd summarize this area of HDD as follows:

[Figure: the continuous delivery hypothesis]

01 IDEA: You can't test everything, and you can't foresee everything that might go wrong. This is important for the team to internalize. But you can iteratively, purposefully focus your test investments.

02 HYPOTHESIS: Relative to the test pyramid, you're looking to get to a place where you're finding issues with the least expensive, least complex test possible: not an integration test when a unit test could have caught the issue, and so forth.

03 EXPERIMENTAL DESIGN: As you run integrations and deployments, you see what happens! Most teams move from continuous integration (a deploy-ready system that's not actually in front of customers) to continuous deployment.

04 EXPERIMENTATION: In retrospectives, it's important to look at the test suite and ask what would have made the most sense and how the current processes were or weren't facilitating that.

05 PIVOT OR PERSEVERE: It takes work, but teams get there all the time, and research shows they end up both releasing more often and encountering fewer production bugs, believe it or not!

Topline, I would say HDD is a way to unify and focus your work across those disciplines. I've found that's a pretty big deal. While none of those practices is hard to understand, practice on the ground is patchy. Usually the problem is having the confidence that doing things well is going to be worthwhile, and knowing who should be participating when.

My hope is that with this guide and the supporting material (and of course the wider body of practice), teams will get in the habit of always having a set of hypotheses, and that this will improve their work and their confidence as a team.

Naturally, these various disciplines have a lot to do with each other, and I’ve summarized some of that here:

[Figure: how the HDD disciplines relate]

Mostly, I find practitioners learn about this through their work, but I’ll point out a few big points of intersection that I think are particularly notable:

  • Learn by Observing Humans We all tend to jump on solutions and over-invest in them when we should be observing our users, seeing how they behave, and then iterating. HDD helps reinforce problem-first diagnosis through its connections to relevant practice.
  • Focus on What Users Actually Do A lot of things might happen, more than we can deal with properly. The good news is that by observing what actually happens, you can make things a lot easier on yourself.
  • Move Fast, but Minimize Blast Radius Working across so many types of orgs at present (startups, corporations, a university), I can't overstate how important this is, and yet how big a shift it is for more traditional organizations. The idea of 'moving fast and breaking things' is terrifying to these places, and the reality is that with practice you can move fast and rarely break things, or only break them a tiny bit. Without this, you end up stuck waiting for someone else to create the perfect plan, or for that next super-important hire to fix everything (spoiler: the plan won't, and they don't).
  • Minimize Waste Succeeding at innovation is improbable, and yet it happens all the time. Practices like Lean Startup do not warrant that by following them you'll always succeed; they do promise that by minimizing waste you can test five ideas in the time/money/energy it would otherwise take to test one, making the improbable probable.

What I love about Hypothesis-Driven Development is that it solves a really hard problem of practice: all these behaviors are important, and yet you can't learn to practice them all immediately. HDD gives you a foundation where you can see what's similar across them and how your practice of one reinforces the others. It's also a good tool for deciding where you need to focus on any given project or team.

Copyright © 2022 Alex Cowan · All rights reserved.




How to Build a List of Hypotheses for Mobile App (Guide for Hypothesis-Driven Development)



Henn Akimov

Marketer, Startup Advisor


Ihor Polych

CEO at Devlight


"Most businesses die because they offer a product that consumers don't need." This is a famous saying of Eric Ries, the author of the Lean Startup methodology. So how can hypothesis-driven development help your project avoid this trap?

The answer is simple: it aims to research the demand for your future product before you build the mobile app. So it is worth starting the research by compiling a set of hypotheses about the needs of consumers, answering the question of what problems and difficulties your future product will help solve.

Forming hypotheses is a creative process, and it is difficult to follow a fixed procedure, but some rules still apply. In this article, we will describe an algorithm for creating a set of product hypotheses and then verifying them through user surveys.

What Is Hypothesis-Driven Development?

A hypothesis-based approach allows product developers to design, test, and refactor a product until it is acceptable to consumers. This methodology involves testing and refining the product based on consumer feedback to verify the assumptions made during the ideation process. The utilization of this approach helps to eliminate any uncertainties in the design phase and leads to a final product that is well-received by users.

Here are some examples of hypotheses for mobile app development from various segments:

  • The behavioral hypothesis describes user behavior under various conditions and what drives people to act in a certain manner;
  • The problem hypothesis covers the difficulties users encounter and why they regard those challenges as obstacles to their objectives;
  • The motivation hypothesis focuses on what users want and why they are currently ineffective at accomplishing their objectives;
  • The blocker hypothesis reveals the cause of the present ineffective behavior or difficulty.

Why Do We Use Hypothesis-Driven Development?

When developing a product, you define your hypotheses, find the fastest ways to test them, and use the results to change your strategy.


You have a lot of assumptions to begin with. You predict what users want, what they are looking for, what the design should be, what marketing strategy to use, what architecture will be most effective, and how to monetize the product.

Some of those hypotheses will need to be corrected, and you don't know which ones. CB Insights determined that a lack of market demand was one of the main causes of startup failure. Almost half of the failed projects had spent months, or even years, building a product.

The only way to test a list of hypotheses for a mobile app is to give the product to a potential customer as soon as possible. If you follow this methodology consistently, you will find that most hypotheses fail. You assume, fail, and go back to the beginning each time to test new hypotheses.


This approach is not an innovation in product development. When you write a book or essay, you spend a lot of time editing and revising. When you write code, you also redo it. Every creative endeavor requires a huge amount of trial and error.


In this world, the one who detects their own mistakes and corrects them faster becomes the winner. The most important thing is to determine which of your hypotheses are wrong with the help of feedback from real users. Thus, when you're building a product, writing code, or developing a marketing plan, always ask yourself two questions:

  • Which hypothesis in the project is the most doubtful?
  • What is the fastest way to check it?


What Does Hypothesis-Driven Development Look Like in Real Life?

Let's look at a simple example. Say we take a project approach (one that sets a task rather than putting forward hypotheses) to a service for selling goods, and we decide to add a delivery option. We hire delivery people and buy them branded clothes, bags, and possibly transport. The development team creates a page where you can enter the delivery address and the desired date. Then we write a service that transfers the order from the store to the delivery person, and an application for those delivery people. What happens in the negative scenario? That's right: we lose hundreds of thousands of dollars.

What if we had taken a hypothesis-driven approach? First of all, we would write hypotheses and confirm that customers actually need the delivery option. Then we would work out the optimal cost and delivery time to calculate the unit economics. Next, user surveys or interviews would give us an understanding of user needs. Then we would put a fake "delivery" button on the website or in the app to see how many clients try to use it.

Of course, this action cannot be used to calculate exact demand, because there are still dozens of ways to kill the conversion after the user clicks the button: complicated fill-out forms, poorly available delivery periods, high costs, etc. But at least we would understand how many people out of 10 thousand who saw the button tried to use it: three thousand or eight thousand. Then, to test the hypothesis in real-life conditions, we would use a ready-made B2B solution rather than develop our own delivery feature.
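The arithmetic behind that smoke test is simple; the numbers below just restate the example's low end:

```python
# Fake-button smoke test: a rough demand signal from click-through.
visitors = 10_000
clicks = 3_000  # the low end of the example's 'three or eight thousand'

click_through = clicks / visitors
print(f"{click_through:.0%} of visitors tried the delivery button")

# Treat this as an upper bound on demand: fill-out forms, delivery
# windows, and price can all cut real conversion well below it.
```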

Moreover, to save integration time, we would collect orders, put them in a database, and then pass them over to our manager, who would manually issue each delivery through the third-party service's web form. What would happen in the worst-case scenario? Nothing too serious: we wouldn't have wasted hundreds of thousands of dollars and many weeks on development.

To sum up, hypothesis-driven development aims to understand what product feature will bring the greatest value at the moment and to test this feature in the simplest possible way. To put it bluntly, try to refute each of your hypotheses as soon as possible. Proving to yourself that an idea is worthless without spending time on its development is morally difficult but very effective from the company's point of view.

A hypothesis-driven approach provides a structured way of consolidating ideas and building hypotheses based on objective criteria. It also provides a deep understanding of how to prioritize features against business goals and desired user outcomes.

How to Test the Hypothesis of Product Demand and Value Without Development

Starting development without testing the key hypotheses behind the new product is a widespread mistake. In this case, you are completely sure of your idea, see no point in testing it, and begin the development process immediately.

The second most common hypothesis-driven development mistake is to look for confirmation of a hypothesis instead of testing it. Often, demand or value testing becomes a formality: the decision is based not on the data received but on initial assumptions and the startup owners' prejudices. This cognitive distortion happens for several reasons:

  • Commitment to an idea blocks critical thinking (typical of startups);
  • The bureaucratic apparatus perceives hypothesis testing as just another step of the project development process, inevitably followed by implementation regardless of the results of the test (typical of corporations). Even if all the early tests show that the product in its current form does not stand a chance, it still goes into development.

The third mistake is testing unimportant things. Instead of testing key risks (demand and value), teams test secondary elements related to subjective perception (appearance, non-core functions, etc.). As a result, time is wasted, and the hypothesis-testing process itself is devalued.

Testing the Demand Hypothesis for a New Product

The demand hypothesis is one of the riskiest assumptions behind a new product. It assumes that the potential audience is interested in solving a certain problem. The demand hypothesis is also called the need hypothesis or the problem hypothesis.

To check demand, you need to study the target audience and its tasks, and sometimes to sell a product that has not yet been created:

  • The most common way to test demand is to create a landing page with a detailed description and illustrations of the product and show it to potential buyers;
  • In some cases, you don't need to create your own site; just place an ad on a platform that attracts the audience of potential customers for the product;
  • The demand for some products is difficult to check with a landing page or an announcement on social networks, especially if the sales process includes a long conversation, a call, and sometimes a meeting with the buyer. You can use targeted advertising and personal communication in such situations, again without yet creating an actual product;
  • If deciding to buy your product requires minimal experience interacting with it, you can offer customers a shell without the filling;
  • One of the easiest and most effective ways to test demand without development is to show users videos simulating how the product works. This way, you can demonstrate its capabilities, interface, design, and the situations where the product will be useful.

Testing the New Product Value Hypothesis

Once the demand hypotheses for the mobile app are validated and you know that the product solves the desired problem for potential buyers, the next key risk is value. The value hypothesis assumes that the product's intended implementation will bring customers real value. It usually means that the product will solve users' problems more effectively than the alternatives available on the market; otherwise, users will have no motivation to switch from one solution to another:

  • Allowing users to try something as close as possible to the future product is the most proper way to test a value hypothesis. This can be done with the help of third-party services that reproduce complex functions and automate the work without you writing your own code;
  • Alternatively, to check value hypotheses without development, you can reproduce the system's processes in manual mode;
  • The third method lies in value validation through a prototype. Usability testing of prototypes lets you watch the process of using the product, and subsequent interviews give a fairly accurate understanding of the presence or absence of value in the solution being studied.


How to Build and Test a List of Hypotheses for a Mobile App

The HADI (Hypothesis – Action – Data – Insights) methodology is the simplest algorithm for cyclical testing of ideas: from hypothesis through action to data and conclusions.

The hypothesis-driven development management cycle begins with formulating a hypothesis according to the "if/then" principle. In the second stage, you carry out the work needed to launch the experiment (Action), then collect data for a given period (Data), and at the end draw an unambiguous conclusion about whether the hypothesis was successful and what can be improved in the next cycle of hypothesis testing (Insights).
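The cycle is easy to keep as a running record per experiment. A sketch, with hypothetical example values borrowed from the delivery-button story:

```python
from dataclasses import dataclass

@dataclass
class HadiCycle:
    """One pass through Hypothesis -> Action -> Data -> Insights."""
    hypothesis: str  # stated as 'If ..., then ...'
    action: str      # the work done to launch the experiment
    data: str        # what was measured over the test period
    insights: str    # the unambiguous conclusion; feeds the next cycle

cycle = HadiCycle(
    hypothesis="If we add a delivery option, then at least 20% of buyers will use it.",
    action="Add a fake 'delivery' button and count clicks for two weeks.",
    data="30% of 10,000 visitors clicked the button.",
    insights="Demand confirmed; next cycle tests a manual (concierge) fulfilment flow.",
)
print(cycle.hypothesis)
```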


Step 1. Forming a Hypothesis

Here you formulate what you want to know. What problem are you trying to solve? Determine which product level you should test:

  • Value level — the problem your product is supposed to solve. Test whether it is worth solving;
  • Feature level — the functionality through which the user quickly realizes the value of your product;
  • Design level — the design and visualization. How does your functionality work in terms of user experience? Simply put, will people intuitively figure out how to manage, where to click, and what to do with your product?
  • Feasibility level — the technical implementation of everything you have created.

The hypothesis is based on the "If…, then…" principle.

You can also prioritize the hypotheses to be tested. Think about what might have the biggest impact on your users' needs and prioritize accordingly. You can also use the ICE Score framework, which includes these three elements:

  • Impact;
  • Confidence;
  • Ease of implementation.

ICE is calculated as follows:

ICE Score = Impact × Confidence × Ease

Of course, this is only one option; there are several existing formulas to choose from. However, the formula should stay the same for all the hypotheses you compare, and your ICE ratings should use the same range, whether 1 to 10, 1 to 100, or another scale (decide on it at the beginning).

Impact estimates how much an idea will positively affect the metric you’re trying to improve. To determine the impact, we ask the following questions: How effective will it be? How much will this affect the metrics (conversion, retention, LTV, MAU, DAU)?

Confidence shows how much you trust the impact estimates and ease of implementation. To determine this hypothesis-driven development metric, you need to answer the question: How confident are you that this feature will lead to the improvement described in Impact and be as easy to implement as described in Ease?

Ease of implementation is an estimate of how much effort and resources are required to implement this hypothesis. The easier the task, the higher the number. To determine the ease of implementation, you need to answer the question: How long will testing these hypotheses for mobile app development or developing this feature take? How many people will be involved? Consider the work of the development, design, and marketing departments.
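
Putting the three ratings together, the common multiplicative variant of the ICE Score can be sketched in a few lines of Python. The hypothesis names and ratings below are invented for illustration:

```python
# Minimal ICE-score sketch: each hypothesis gets Impact, Confidence and Ease
# ratings on the same agreed 1-10 scale, and ICE = Impact * Confidence * Ease.

def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Multiply the three 1-10 ratings into a single priority score."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("ratings must stay on the agreed 1-10 scale")
    return impact * confidence * ease

# (hypothesis, impact, confidence, ease) -- illustrative entries
hypotheses = [
    ("Push notifications raise retention", 8, 6, 7),
    ("Dark mode raises session length", 3, 5, 9),
]

# Highest ICE score first.
ranked = sorted(hypotheses, key=lambda h: ice_score(*h[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{ice_score(i, c, e):>4}  {name}")
```

The guard enforces the article's point that the rating range has to be fixed up front so that scores stay comparable.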

Step 2. Performing the Action

At the beginning of each cycle, we take several hypotheses and start testing them using the following methods:

  • A/B Testing or Split Testing

In such testing, the main thing is clearly defining the sample and its size, so that the results are as realistic and statistically significant as possible. We recommend conducting split testing with at least ten thousand monthly active users. If your audience is smaller, it is better to use other tools.
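
The article does not prescribe a specific statistical procedure; as one hedged illustration, a two-proportion z-test (standard library only) shows how to check whether a split-test difference is significant. All conversion numbers below are invented:

```python
# Back-of-the-envelope two-proportion z-test for an A/B result.
import math

def ab_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Return (is_significant, two-sided p-value) for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value < alpha, p_value

# 10,000 users per arm, in line with the rule of thumb above.
significant, p = ab_significant(conv_a=1000, n_a=10_000, conv_b=1100, n_b=10_000)
print(significant, round(p, 4))
```

With much smaller samples the same 1% absolute lift would not clear the significance bar, which is why the audience-size recommendation matters.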

  • Quantitative User Survey

Use services that make surveys easy to create and run, such as SurveyMonkey. They let you select the desired audience and ask them questions. With the free plan, you can create questionnaires of up to 10 questions, and the link to the questionnaire can be placed on your website or social networks.

  • Qualitative Research or Customer Development

This research type is a direct conversation with consumers or a certain group of potential product consumers. Such hypothesis-driven development interviews can be divided into two groups:

  • Usability – helps you understand whether users can use your product at all to solve their tasks and achieve their desired goals;
  • Discovery – delves into the state, problems, and perceptions of users in a certain group. In such interviews, we usually ask questions like "Who? How? Why? Where?"

How many such interviews do you need to test the hypothesis? We usually start with five. Then we continue until people stop giving new answers. You can stop as soon as the information starts to repeat itself. For hypotheses testing small product changes, 5-7 interviews may be enough. For the launch of a completely new product — 50-70 interviews.
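
The "stop when the answers start to repeat" rule can be sketched as a tiny saturation check; the answer stream and the minimum of five interviews below are illustrative:

```python
# Saturation sketch: keep interviewing until an answer repeats,
# but never stop before the minimum number of interviews.
def interviews_needed(answers, minimum=5):
    """Count interviews until no new answer appears (after `minimum`)."""
    seen = set()
    for count, answer in enumerate(answers, start=1):
        is_new = answer not in seen
        seen.add(answer)
        if count >= minimum and not is_new:
            return count
    return len(answers)

# Invented stream of dominant answers, one per interview.
stream = ["price", "speed", "trust", "price", "speed", "speed", "trust"]
print(interviews_needed(stream))
```

In practice the stream is qualitative, of course; the point is only that the stopping rule is "repetition after a floor", not a fixed quota.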

Step 3. Data Analytics

At this stage, we collect data from our research. You should have a backlog of your hypotheses for mobile app development prioritized according to certain criteria to help you at various stages of development. Approach all feature development from the perspective of hypotheses. A good indicator is when you have two states: one in which an experiment is being planned to test the hypothesis and another in which functional validation is ongoing and data is being measured. 

Then, when your experiment is over, you can mark the hypothesis as supported, refuted, or abandoned if you decide to call it quits based on the results. By always tackling the highest-risk hypotheses first, you make sure any necessary pivots happen as early as possible and avoid investing in needless work.

Step 4. Insights

This stage can also be called interpretation. First, analyze whether the hypotheses on your list for the mobile app were confirmed. Whether a hypothesis is confirmed or refuted, the process itself offers a chance to learn. Even if you cannot support the hypothesis, the result may offer insights you can use for a different one.

Now that some of your hypotheses have been supported, you can proceed to development. But testing must continue even after the product is released. Stay alert: certain aspects may need to change due to client needs, market trends, regional economics, and other factors.


List of Hypotheses for Mobile App: Example

ABC (name changed) is the largest provider of microcredits in Ukraine. They have no physical branches; their services are fully digital and offered online. ABC has thousands of satisfied customers and devoted staff members. The figures speak for themselves:

  • During the first half of 2021, ABC's net income was estimated at 44 million dollars (the highest among competitors);
  • The company has a net profit of $1.4 million;
  • ABC's team consists of more than 700 workers;
  • 1.8 million Ukrainians use the service regularly;
  • 6,000,000 loans have been issued through the service;
  • The total amount of money issued is $1 billion.

It is a sizable, contemporary, well-run business that turned to us for help with diversification and new directions of development. The customer's goals were growing the company, diversifying the product line, and breaking into a new market. Devlight used this data when forming the list of hypotheses for the mobile app.

Internal Discussions and Hypotheses Forming

First, we gained a profound knowledge of the ABC team’s technology, product, capabilities, vision, and passion through our meetings. We saw that we could accomplish our ultimate objective thanks to our significant experience working with neo-banks and our business knowledge. 

As of 2021, the Ukrainian market was focused solely on loans. Users voluntarily took out loans for various purposes, including small household and personal expenses, buying vehicles, and starting businesses. Loans were a common, familiar practice that clients fully understood.

However, the market's offerings fell short of users' needs. They were one-dimensional and impersonal. We concluded that this vulnerability was exactly where we could compete. Depending on their credit score, we could offer different consumers flexible credit limits, or high limits with a longer grace period. For instance, the market had nothing like a big credit limit of UAH 100,000 for 100 days. Thanks to ABC's significant experience in the credit industry, we could accomplish this relatively effortlessly.

One of the advantages of ABC's business model was its capacity to deal properly with credit scores and potential hazards. These advantages enabled us to formulate the premise of a flexible product credit engine, which we could then use to develop the product's key competitive advantage. This idea had to be tested on the primary target audience; it would serve as the foundation for our mobile app hypotheses.


Do you keep failing to form hypotheses for mobile app development? Devlight will be happy to point you in the right direction. Be sure to contact us!

Hypothesis-Driven Development: Summary

Do not worry that your hypotheses will be incorrect. Your objective is not to convince everyone that you are correct; your objective is to establish a prosperous business. The hypotheses are merely a tool to get you there, so the more of them you debunk, the better. Finally, keep in mind that hypothesis-driven development:

  • is a sequence of tests to support or refute a theory and determine value;
  • provides a quantifiable result and promotes ongoing learning;
  • enables the user — a critical stakeholder — to provide continuous feedback to comprehend the unknowns better;
  • enables us to comprehend the changing environment and gradually expose the value.

Apps developed from tested hypotheses have a large, positive impact on a company's business objectives. Using data closely tied to the company's goals guarantees that customers' needs are prioritized.

Hypothesis-Driven Development: FAQ

How to Correctly Formulate Hypotheses for a Mobile Application?

A correct hypothesis:

  • predicts the connection and result;
  • is brief and simple;
  • is formed without any ambiguity or presumptions;
  • contains measurable outcomes that can be tested;
  • is specific and pertinent to the research subject or issue.

“If these modifications are made to a particular independent variable, then we will notice a change in a specific dependent variable” can be the fundamental format. Here is an example of a basic hypothesis: “Food apps with vibrant designs are used more frequently than those made in a dull color palette.”

How to Build a List of Hypotheses for a Mobile App?

First, brainstorm different assumptions based on your product specifics, requests, or expected results. Then group the hypotheses by a fixed criterion: a common problem, the complexity of the experiment needed, or their overall time span.

Alternatively, you may group your findings after conducting the experiments and present the hypotheses by their relevance to the examined issue.

What Are the Benefits of Hypothesis-Driven Development?

Hypothesis-driven development is a methodology that involves creating a hypothesis, devising experiments to validate it, and utilizing data to steer product development decisions. The advantages of this approach are numerous:

Accelerated time-to-market: By gathering data and examining hypotheses, development teams can make informed decisions and improve the speed with which products are brought to market.

Enhanced product quality: Hypothesis-driven development helps teams identify and rectify potential issues early in the development process, resulting in higher-quality products.

Increased user satisfaction: By focusing on user needs and verifying hypotheses with real users, development teams can create products that better align with user preferences, leading to heightened user satisfaction.

Optimal resource utilization: Hypothesis-driven development enables teams to concentrate on the most promising ideas, resulting in better utilization of their time and resources.

Decreased risk: By evaluating hypotheses and gathering data, development teams can identify and address potential issues early, reducing the likelihood of launching a product that fails to meet user requirements or fails to achieve its goals. The list of hypotheses for the mobile app is a priceless repository for organizational data.


The 6 Steps that We Use for Hypothesis-Driven Development


One of product managers' greatest fears is creating an app that flops because it's based on untested assumptions. After successfully launching more than 20 products, we're convinced that we've found the right approach for hypothesis-driven development.

In this guide, I'll show you how we validated the hypotheses to ensure that the apps met the users' expectations and needs.

What is hypothesis-driven development?

Hypothesis-driven development is a prototyping methodology that allows product designers to develop, test, and rebuild a product until it is accepted by users. It is an iterative process that takes the assumptions defined during the project and attempts to validate them with user feedback.

What you assumed during the initial stage of development may not be valid for users. Even if assumptions are backed by historical data, user behavior can vary with specific audiences and other factors. Hypothesis-driven development removes these uncertainties as the project progresses.


Why we use hypothesis-driven development

For us, the hypothesis-driven approach provides a structured way to consolidate ideas and build hypotheses based on objective criteria. It’s also less costly to test the prototype before production.

Using this approach has reliably allowed us to identify what should be tested, how, and in which order. It gives us a deep understanding of how to prioritize features and how they connect to business goals and desired user outcomes.

We’re also able to track and compare the desired and real outcomes of developing the features. 

The process of Prototype Development that we use

Our success in building apps that are well-accepted by users is based on the Lean UX definition of a hypothesis. We believe the business outcome will be achieved if the user outcome is fulfilled for a particular feature.

Here’s the process flow:

How Might We technique → Dot voting (based on estimated/assumptive impact) → converting into a hypothesis → define testing methodology (research method + success/fail criteria) → impact effort scale for prioritizing → test, learn, repeat.

Once the hypothesis is proven right, the feature is escalated into the development track for UI design and development. 


Step 1: List Down Questions And Assumptions

Whether it’s the initial stage of the project or after the launch, there are always uncertainties or ideas to further improve the existing product. In order to move forward, you’ll need to turn the ideas into structured hypotheses where they can be tested prior to production.  

To start with, jot the ideas or assumptions down on paper or a sticky note. 

Then, you’ll want to widen the scope of the questions and assumptions into possible solutions. The How Might We (HMW) technique is handy in rephrasing the statements into questions that facilitate brainstorming.

For example, if you have a social media app with a low number of users, asking, “How might we increase the number of users for the app?” makes brainstorming easier. 

Step 2: Dot Vote to Prioritize Questions and Assumptions

Once you’ve got a list of questions, it’s time to decide which are potentially more impactful for the product. The Dot Vote method, where team members are given dots to place on the questions, helps prioritize the questions and assumptions. 

Our team uses this method when we're faced with many ideas and need to eliminate some of them. We start by grouping similar ideas and using 3-5 dots to vote. At the end of the process, we have preliminary data on the possible impact and our team's interest in developing certain features.

This method allows us to prioritize the statements derived from the HMW technique; only the top ones are converted into hypotheses.
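
A dot-vote tally is simple to mock up in code; the "How might we" questions and the votes below are invented, and only the top statements move on to the hypothesis stage:

```python
# Toy dot-vote tally: each team member distributes their dots across the
# "How might we ..." questions; the highest-voted questions move forward.
from collections import Counter

# One entry per dot placed (invented votes).
votes = [
    "HMW increase the number of users?",
    "HMW increase the number of users?",
    "HMW improve onboarding completion?",
    "HMW increase the number of users?",
    "HMW reduce churn in week one?",
    "HMW improve onboarding completion?",
]

tally = Counter(votes)
# Keep only the top statements for conversion into hypotheses.
top_two = [question for question, _ in tally.most_common(2)]
print(top_two)
```

The tally doubles as the "preliminary data on possible impact" the team refers to when deciding which statements to convert.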

Step 3: Develop Hypotheses from Questions

The questions lead to a brainstorming session where the answers become hypotheses for the product. The hypothesis is meant to create a framework that allows the questions and solutions to be defined clearly for validation.

Our team follows a specific format for forming hypotheses. We structure the statement as follows:

We believe we will achieve [business outcome],

If [the persona],

Solves their need in [user outcome] using [feature].

Here’s a hypothesis we’ve created:

We believe we will achieve DAU=100 if Mike (our proto persona) solves his need in recording and sharing videos instantaneously using our camera and cloud storage.
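
The Lean UX template can be captured as a tiny formatter; the function name is ours, and the sample values mirror the hypothesis above:

```python
# Fill the Lean UX hypothesis template with concrete values.
def lean_ux_hypothesis(business_outcome, persona, user_outcome, feature):
    return (
        f"We believe we will achieve {business_outcome}, "
        f"if {persona} solves their need in {user_outcome} using {feature}."
    )

print(lean_ux_hypothesis(
    business_outcome="DAU=100",
    persona="Mike (our proto persona)",
    user_outcome="recording and sharing videos instantaneously",
    feature="our camera and cloud storage",
))
```

Forcing every idea through the same four slots keeps the business outcome, persona, user outcome, and feature explicit and testable.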


Step 4: Test the Hypothesis with an Experiment

It’s crucial to validate each of the assumptions made on the product features. Based on the hypotheses, experiments in the form of interviews, surveys, usability testing, and so forth are created to determine if the assumptions are aligned with reality. 

Each method provides some level of confidence. Therefore, don't be 100% reliant on any single method, as each is based on a sample of users.

It’s important to choose a research method that allows validation to be done with minimal effort. Even though hypotheses validation provides a degree of confidence, not all assumptions can be tested and there could be a margin of error in data obtained as the test is conducted on a sample of people. 

The experiments are designed in such a way that feedback can be compared with the predicted outcome. Only validated hypotheses are brought forward for development.

Testing all the hypotheses can be tedious. To be more efficient, you can use the impact effort scale. This method allows you to focus on hypotheses that are potentially high value and easy to validate. 

You can also take on hypotheses that deliver high impact but require high effort. Ignore those that deliver low impact but require high effort, and keep hypotheses with low impact and low effort in the backlog.
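
One way to sketch the impact/effort scale is as four quadrants with a shared scoring threshold; the function name, the threshold, and the 1-10 scores are assumptions for illustration, following the standard impact/effort matrix:

```python
# Classify a hypothesis into an impact/effort quadrant.
def quadrant(impact: int, effort: int, threshold: int = 5) -> str:
    high_impact = impact > threshold
    high_effort = effort > threshold
    if high_impact and not high_effort:
        return "do first"   # high value, easy to validate
    if high_impact and high_effort:
        return "consider"   # high value, but expensive to test
    if not high_impact and not high_effort:
        return "backlog"    # cheap but low value
    return "ignore"         # low value and expensive

# Invented examples showing each quadrant.
assert quadrant(impact=9, effort=2) == "do first"
assert quadrant(impact=8, effort=8) == "consider"
assert quadrant(impact=2, effort=2) == "backlog"
assert quadrant(impact=2, effort=9) == "ignore"
```

The asserts double as a usage example: the highest-value, lowest-effort hypotheses are validated first, and low-value, high-effort ones are dropped.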

At Uptech, we assign each hypothesis clear testing criteria. We rate each hypothesis with a binary 'task success' and a subjective 'effort on task' scored from 1 to 10.

While conducting the test, we also collect qualitative data such as user feedback. We habitually segregate the feedback into pros, cons, and neutral with color-coded stickers (red for cons, green for pros, blue for neutral).

The best practice is to test each hypothesis on at least 5 users.

Step 5: Learn, Build (and Repeat)

The hypothesis-driven approach is not a one-shot process. Often, you'll find that some hypotheses prove false. Rather than be disheartened, use the data gathered to fine-tune the hypothesis and design a better experiment for the next phase.

Treat the entire cycle as a learning process where you’ll better understand the product and the customers. 

We found the process helpful when developing an MVP for Carbon Club, an environmental startup in the UK. The app allows users to donate to charity based on the carbon footprint they produce.

In order to calculate the carbon footprint, we weighed two options:

  • Connecting the app to the users’ bank account to monitor the carbon footprint based on purchases made.
  • Allowing users to take quizzes on their lifestyles.

Upon validation, we found that all of the users opted for the second option, as they were concerned about linking an unknown app to their bank account.

The result made us shelve the first assumption from our pre-Sprint research. It also saved our client $50,000 and a few months of work, since connecting the app to bank accounts would have required a huge effort.


Step 6: Implement Product and Maintain

Once you're confident that the remaining hypotheses are validated, it's time to develop the product. However, testing must continue even after the product is launched.

You should be on your toes as customers’ demands, market trends, local economics, and other conditions may require some features to evolve. 


Our takeaways for hypothesis-driven development

If there’s anything that you could pick from our experience, it’s these 5 points.

1. Should every idea go straight into the backlog? No, unless they are validated with substantial evidence. 

2. While it’s hard to define business outcomes with specific metrics and desired values, you should do it anyway. Try to be as specific as possible, and avoid general terms. Give your best effort and adjust as you receive new data.  

3. Get all product teams involved as the best ideas are born from collaboration.

4. Start with a plan consisting of two main parameters: criteria of success and research methods. Besides qualitative insights, you need to set objective criteria to determine if a test is successful. Use the Test Card to validate assumptions strategically.

5. The methodology recommended in this article works not only for products. At the end of 2019, we applied it to setting the company's strategic goals and ended up with robust results and an engaged, aligned team.

You'll have a better idea of which features would lead to a successful product with hypothesis-driven development. Rather than vague assumptions, the consolidated data from users will provide a clear direction for your development team. 

As for the hypotheses that don't make the cut, improvise, re-test, and leverage for future upgrades.

Keep failing with product launches? I'll be happy to point you in the right direction. Drop me a message here.


Hypothesis-Driven Development Can Revolutionize Agile

Agile’s flexible, well-structured philosophy makes it easier for developers to tackle projects in bite-sized chunks. But Agile, it turns out, is only half of the development coin. As University of Virginia Darden business professor Alex Cowan explained in a recent Zenhub webinar, while Agile and Scrum tell you how to build something, they don’t help you figure out what you’re going to build.

While building without a plan is excellent for freeform Lego, it’s not great for software development. As Alex noted, when developers “design things and hope for the best,” it usually doesn’t work out. After all, an excellent development process doesn’t count for much if you’re making something nobody wants or needs. That’s where Hypothesis-Driven Development comes in.

Alex explained that Hypothesis-Driven Development is the yin to the yang of Agile practices because it helps determine what you’re going to build. With Hypothesis-Driven Development, you first identify a possible need, then find out if it’s something your end-users actually need. This process remains useful throughout product development in five key areas: continuous design, application development, the delivery pipeline, deployment, and post-deployment.

Continuous design helps ensure success

The Agile process is iterative. You build, evaluate, test, build more, and rinse and repeat that process until you have something functional and useful. But if you don’t take an iterative approach to that iterative approach at the very beginning of the design process, your team can run into the same types of pitfalls they’re using Agile to avoid.

How to apply Hypothesis-Driven Design to continuous design

The core principle of Hypothesis-Driven Development is to link the right solution to a problem. Every project, after all, starts with an idea: I have a user with problem X that can be solved with solution Y. But while your end-users' problems and jobs stay very stable over time, how they do those jobs and solve those problems changes drastically.

Maybe there's already a solution out there, or other factors have changed and made the problem less pressing, with other issues now a higher priority. So even if problems are static, design can't be.

The key is not to assume you have the right solution to a problem. Question the assumption. And be willing to give up on an idea in favor of a better one.

“By discarding ideas earlier, you give yourself more chances to be successful with the same amount of time and energy.” – Alex Cowan

Focus your application development on what truly matters to the end-user

Hypothesis-Driven Development also streamlines application development, enabling developers to concentrate their efforts on fewer, better applications. By reducing the number of features in the development pipeline, it frees up time and energy to refine and focus on only those features that matter.

Ultimately, you probably end up with the same number of useful features. But focusing on the right ones from the start means features are that much better when they enter the delivery pipeline. This can save your team some soul-crushingly tedious work.

Streamline your delivery pipeline with automation

Unfortunately, bugs, glitches, and tech debt are facts of life, so we can't eliminate automated testing entirely. But although most developers love automation, they hate automating: building an automated test is one of those boring-but-necessary jobs we all procrastinate on as long as we possibly can, like folding laundry.

Hypothesis-Driven Development reduces that anxiety. It helps you home in on where automated tests are helpful, because the only thing worse than finishing a tedious job is finding out afterward that it was totally unnecessary. Hypothesis-Driven Development ensures you pick the right problems to automate testing for, which is critical to working efficiently in Agile and preserving your team's sanity.

Optimize deployment by weighing the value of new features

Hypothesis-Driven Development makes it easier to compare the cost of delivering a feature against its value at the deployment stage. The webinar maps that comparison with a cost-to-build-a-feature formula: F = ((C + g) / (fe × rf)) / Sd.

Obviously, the goal is for returns to exceed costs, and deployment is a critical part of the cost evaluation. Different deployment tools offer distinct advantages: more powerful and customizable tools require experienced teams, whereas back-end-as-a-service tools ease deployment but offer less customization. The Hypothesis-Driven Development comparison process can help you determine which tool fits your team's skills and provides the best return.

Figure out where to go next in post-deployment

Post-deployment, Hypothesis-Driven Development can help you determine whether your application meets end-user needs and to what degree, so when the next design phase rolls around, your team already knows what to tackle.

A Hypothesis-Driven Development approach starts by looking at how much people are using the application: how many different people, how often, how regularly, etc. From there, it focuses on asking questions about how useful it is. Do people stick with it, or are most uses short-term? Where does it work well? Where does it work not-so-well?

Questions and metrics must be clear and precise to be useful. “Do you like it?” probably won’t bring back any actionable answers. “How often do you use X feature, and what might make you more likely to use it more?” probably will.

This approach can take a team from a not-so-useful goal like “make the app better” to something actionable like “most technicians that aren’t using the app say it’s because the layout is too intimidating.” The first goal is an existential crisis; the second is an afternoon’s work.

Implement Hypothesis-Driven Development into Agile by using Hypothesis-Driven Development and Agile

Trying to implement Hypothesis-Driven Development all at once is a bit like trying to complete a project all in one go in Agile. It’s a contradiction in terms.

So, as Alex noted in the webinar, the best way to implement Hypothesis-Driven Development into Agile is by using these philosophies and Agile methods together. Look at your product pipeline. Sketch it out. Talk to your team about it. Pick one place to start and take it from there.

“What you’ll find is that, after you do that once or twice, it’ll breathe its own air,” Alex said. Refine and continually evaluate what’s working and what isn’t. Come up with new ideas and test them before implementing them more broadly.

Want to learn more? Watch the full webinar for plenty more insights.



Scrum and Hypothesis Driven Development

By Dave West

Scrum was built to better manage risk and deliver value by focusing on inspection and encouraging adaptation. It uses an empirical approach combined with self-organizing, empowered teams to work effectively on complex problems. After reading Jeff Gothelf's and Josh Seiden's book "Sense and Respond: How Successful Organizations Listen to Customers and Create New Products Continuously", I realized that the world is full of complex problems. This got me thinking about the relationship between Scrum and modern organizations as they pivot toward becoming able to 'sense and respond'. So I decided to ask Jeff Gothelf. Here is a condensed version of our conversation.


Sense & Respond was exactly this attempt to change the hearts and minds of managers, executives and aspiring managers. It makes the case that first and foremost, any business of scale or that seeks to scale is in the software business. We share a series of compelling case studies to illustrate how this is true across nearly every industry. We then move on to the second half of the book where we discuss how managing a software-based business is different. We cover culture, process, staffing, planning, budgeting and incentives. Change has to be holistic.

What you are describing is the challenge of ownership. Product Owner (PO) is the role in the Scrum Framework empowered to make decisions about what and when things are in the product. But disempowerment is a real problem in most organizations, with their POs not having the power to make decisions. Is this something you see when introducing the ideas of Sense and Respond?

There will always be situations where things simply have to get built; legal and compliance are two great examples. In these low-risk, low-uncertainty situations, a more straightforward execution is usually warranted. That said, just because a feature has to be included for compliance reasons doesn't mean there is only one way to implement it. Teams will often find there is actual flexibility in how these (actual) requirements can be implemented, with some approaches more successful and less distracting to the overall user experience than others. The discovery you would expend on these features is admittedly smaller, but it shouldn't be thrown out altogether, as these features still need to fit into a holistic workflow.


Why hypothesis-driven development is key to DevOps


Opensource.com

The definition of DevOps, offered by Donovan Brown, is "The union of people, process, and products to enable continuous delivery of value to our customers." It accentuates the importance of continuous delivery of value. Let's discuss how experimentation is at the heart of modern development practices.


Reflecting on the past

Before we get into hypothesis-driven development, let's quickly review how we deliver value using waterfall, agile, deployment rings, and feature flags.

In the days of waterfall, we had predictable and process-driven delivery. However, we only delivered value towards the end of the development lifecycle, often failing late as the solution drifted from the original requirements, or our killer features were outdated by the time we finally shipped.


Here, we have one release X and eight features, which are all deployed and exposed to the patiently waiting user. We are continuously delivering value—but with a typical release cadence of six months to two years, the value of the features declines as the world continues to move on. It worked well enough when there was time to plan and a lower expectation to react to more immediate needs.

The introduction of agile allowed us to create and respond to change so we could continuously deliver working software, sense, learn, and respond.


Now, we have three releases: X.1, X.2, and X.3. After the X.1 release, we improved feature 3 based on feedback and re-deployed it in release X.3. This is a simple example of delivering features more often, focused on working software, and responding to user feedback. We are on the path of continuous delivery, focused on our key stakeholders: our users.

Using deployment rings and/or feature flags, we can decouple release deployment and feature exposure, down to the individual user, to control the exposure—the blast radius—of features. We can conduct experiments; progressively expose, test, enable, and hide features; fine-tune releases; and continuously pivot on learnings and feedback.

When we add feature flags to the previous workflow, we can toggle features to be ON (enabled and exposed) or OFF (hidden).
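A feature-flag check of this kind can be sketched in a few lines. The flag store and feature names below are invented for illustration; real systems typically query a flag service rather than an in-memory dictionary:

```python
# Minimal feature-flag sketch: deployed features are only exposed when
# their flag is ON. FLAGS mirrors the example release (2, 4, and 8 OFF).
FLAGS = {
    "feature_1": True,
    "feature_2": False,
    "feature_3": True,
    "feature_4": False,
    "feature_8": False,
}

def is_enabled(feature: str) -> bool:
    """Unknown features default to hidden, the safe choice."""
    return FLAGS.get(feature, False)

visible = [name for name, on in sorted(FLAGS.items()) if on]
print(visible)  # ['feature_1', 'feature_3']
```

Flipping a flag changes exposure without redeploying, which is exactly the decoupling described above.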


Here, feature flags for features 2, 4, and 8 are OFF, which results in the user being exposed to fewer of the features. All features have been deployed but are not exposed (yet). We can fine-tune the features (value) of each release after deploying to production.

Ring-based deployment limits the impact (blast) on users while we gradually deploy and evaluate one or more features through observation. Rings allow us to deploy features progressively and have multiple releases (v1, v1.1, and v1.2) running in parallel.

Ring-based deployment

Exposing features in the canary and early-adopter rings enables us to evaluate features without the risk of an all-or-nothing big-bang deployment.
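One common way to implement ring assignment is to bucket each user by a stable hash of their id, so a given user always lands in the same ring while exposure grows from canary outward. A minimal sketch, where the ring names and percentages are assumptions rather than any specific product's behavior:

```python
import hashlib

# Rings with cumulative exposure ceilings: 1% canary, 10% early adopters,
# then everyone. Values are illustrative only.
RINGS = [("canary", 0.01), ("early_adopter", 0.10), ("everyone", 1.00)]

def ring_for(user_id: str) -> str:
    """Deterministically map a user id to a deployment ring."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    fraction = (int(digest, 16) % 10_000) / 10_000  # stable value in [0, 1)
    for name, ceiling in RINGS:
        if fraction < ceiling:
            return name
    return RINGS[-1][0]

for uid in ["alice", "bob", "carol"]:
    print(uid, ring_for(uid))
```

Because the hash is stable, a user never bounces between rings as the rollout widens.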

Feature flags decouple release deployment and feature exposure. You "flip the flag" to expose a new feature, perform an emergency rollback by resetting the flag, use rules to hide features, and allow users to toggle preview features.

Toggling feature flags on/off

When you combine deployment rings and feature flags, you can progressively deploy a release through rings and use feature flags to fine-tune the deployed release.

See deploying new releases: feature flags or rings, what's the cost of feature flags, and breaking down walls between people, process, and products for discussions on feature flags, deployment rings, and related topics.

Adding hypothesis-driven development to the mix

Hypothesis-driven development is based on a series of experiments to validate or disprove a hypothesis in a complex problem domain where we have unknown-unknowns. We want to find viable ideas or fail fast. Instead of developing a monolithic solution and performing a big-bang release, we iterate through hypotheses, evaluating how features perform and, most importantly, how and if customers use them.

Template: We believe {customer/business segment} wants {product/feature/service} because {value proposition}. Example: We believe that users want to be able to select different themes because it will result in improved user satisfaction. We expect 50% or more users to select a non-default theme and to see a 5% increase in user engagement.

Every experiment must be based on a hypothesis, have a measurable conclusion, and contribute to feature and overall product learning. For each experiment, consider these steps:

  • Observe your user
  • Define a hypothesis and an experiment to assess the hypothesis
  • Define clear success criteria (e.g., a 5% increase in user engagement)
  • Run the experiment
  • Evaluate the results and either accept or reject the hypothesis
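The final step can be sketched as a simple check of an experiment's measured result against its pre-defined success criterion. The engagement numbers here are invented for illustration:

```python
# Accept or reject a hypothesis against a success criterion defined
# before the experiment ran (e.g., a 5% increase in user engagement).
def evaluate(baseline: float, observed: float, required_lift: float) -> str:
    lift = (observed - baseline) / baseline
    return "accept" if lift >= required_lift else "reject"

# Baseline engagement 40%, observed 43% -> 7.5% lift, above the 5% bar.
print(evaluate(baseline=0.40, observed=0.43, required_lift=0.05))  # accept
```

Defining `required_lift` up front keeps the evaluation honest; it cannot be moved after the results are in.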

Let's have another look at our sample release with eight hypothetical features.


When we deploy each feature, we can observe user behavior and feedback, and prove or disprove the hypothesis that motivated the deployment. As you can see, the experiment fails for features 2 and 6, allowing us to fail-fast and remove them from the solution. We do not want to carry waste that is not delivering value or delighting our users! The experiment for feature 3 is inconclusive, so we adapt the feature, repeat the experiment, and perform A/B testing in Release X.2. Based on observations, we identify the variant feature 3.2 as the winner and re-deploy in release X.3. We only expose the features that passed the experiment and satisfy the users.

Hypothesis-driven development lights up progressive exposure

When we combine hypothesis-driven development with progressive exposure strategies, we can vertically slice our solution, incrementally delivering on our long-term vision. With each slice, we progressively expose experiments, enable features that delight our users and hide those that did not make the cut.

But there is more. When we embrace hypothesis-driven development, we can learn how technology works together, or not, and what our customers need and want. We also complement the test-driven development (TDD) principle. TDD encourages us to write the test first (hypothesis), then confirm our features are correct (experiment), and succeed or fail the test (evaluate). It is all about quality and delighting our users, as outlined in principles 1, 3, and 7 of the Agile Manifesto:

  • Our highest priority is to satisfy the customers through early and continuous delivery of value.
  • Deliver software often, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Working software is the primary measure of progress.

More importantly, we introduce a new mindset that breaks down the walls between development, business, and operations to view, design, develop, deliver, and observe our solution in an iterative series of experiments, adopting features based on scientific analysis, user behavior, and feedback in production. We can evolve our solutions in thin slices through observation and learning in production, a luxury that other engineering disciplines, such as aerospace or civil engineering, can only dream of.

The good news is that hypothesis-driven development supports the empirical process theory and its three pillars: Transparency, Inspection, and Adaptation.


But there is more. Based on lean principles, we must pivot or persevere after we measure and inspect the feedback. Using feature toggles in conjunction with hypothesis-driven development, we get the best of both worlds, as well as the ability to use A/B testing to make decisions on feedback, such as likes/dislikes and value/waste.

Hypothesis-driven development:

  • Is about a series of experiments to confirm or disprove a hypothesis. Identify value!
  • Delivers a measurable conclusion and enables continued learning.
  • Enables continuous feedback from the key stakeholder—the user—to understand the unknown-unknowns!
  • Enables us to understand the evolving landscape into which we progressively expose value.

Progressive exposure:

  • Is not an excuse to hide non-production-ready code. Always ship quality!
  • Is about deploying a release of features through rings in production. Limit blast radius!
  • Is about enabling or disabling features in production. Fine-tune release values!
  • Relies on circuit breakers to protect the infrastructure from implications of progressive exposure. Observe, sense, act!

What have you learned about progressive exposure strategies and hypothesis-driven development? We look forward to your candid feedback.


DEV Community


Alex Bunardzic

Posted on Sep 24, 2020

Hypothesis-Driven Development

“The only way it’s all going to go according to plan is if you don’t learn anything.” -Kent Beck

Note: This post was written with a nod to John Cuttler's innovative and ground-breaking work.

Experimentation is the foundation of the scientific method, which is a systematic means of exploring the world around us. But experimentation is not only reserved for the field of scientific research. It has its central place in the world of business too.

Most of us are by now familiar with the business methodology called Minimum Viable Product (MVP). An MVP is basically just an experiment. By building and launching MVPs, businesses engage in a systematic means of exploring their markets.

If we look at market leaders today, we learn that they're not doing projects anymore. The only thing they're doing is experiments. Customer discovery and Lean strategies are used to test assumptions about the markets. Such an approach is equivalent to Test-Driven Development (TDD), a process we are intimately familiar with. In TDD, we write the hypothesis first (the test). We then use that test to guide our implementation. Ultimately, product or service development is no different than TDD – we first write a hypothesis; that hypothesis guides our implementation, which serves as measurable validation of the hypothesis.

Information discovery

Back in the pre-agile days, requirements gathering was an important activity that always kicked off a project. A group of SMEs would be assigned to the project and tasked with gathering the requirements. After a prolonged period of upfront information discovery, the gathered requirements were reviewed and, if agreed upon, signed off and frozen. No more changes allowed!

Back then it seemed a perfectly reasonable thing to do. The fly in the ointment always kicked in once the build phase commenced. Sooner or later, as the project progressed, new information came to light. Suddenly, what we initially held as incontrovertible truth was challenged by the newly acquired information and evidence.

But the clincher was in the gated phases. Remember, once requirements get signed off, they get frozen. No more changes, no scope creep allowed. Which means, newly obtained market insights get willfully ignored.

Well, that's rather foolish neglect. More often than not, the newly emerging evidence could be of critical importance to the health of the business operation. Can we afford to ignore it? You bet we cannot! We have no recourse other than to embrace the change.

It is after a number of prominent fiascos in the industry that many software development projects switched to the agile approach. With agile, information discovery is partial. With agile we never claim that we have gathered the requirements, and are now ready to implement them. We keep discovering information and implementing it at the same time (we embrace the change). We do it in tiny steps, keeping our efforts interruptible and steerable at all times.

How to leverage the scientific method

Scientific method is empirical and consists of performing the following steps:

  • Step 1: make and record careful observations
  • Step 2: perform orientation with regards to observed evidence
  • Step 3: formulate a hypothesis, including measurable indicators for hypothesis evaluation
  • Step 4: design an experiment that will enable us to test the hypothesis
  • Step 5: conduct the experiment (i.e. release the partial implementation)
  • Step 6: collect the telemetry that results from running the experiment
  • Step 7: evaluate the results of the experiment
  • Step 8: accept or reject the hypothesis
  • Step 9: go to Step 1
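The loop over these steps might look like the following sketch, where the measured signal values stand in for real telemetry (steps 5 through 8); all numbers are invented:

```python
# Schematic empirical loop: run an experiment each round, collect a
# measured signal, and accept the hypothesis once the signal clears the
# threshold defined in step 3. Otherwise keep iterating, then abandon.
def empirical_loop(signals, threshold=0.05):
    for round_no, signal in enumerate(signals, start=1):
        if signal >= threshold:        # step 8: accept the hypothesis
            return ("accept", round_no)
    return ("reject", len(signals))    # abandoned after all rounds

print(empirical_loop([0.01, 0.03, 0.06]))  # ('accept', 3)
```

Each pass through the loop corresponds to releasing a partial implementation, observing, and deciding whether to tune, evolve, or abandon the hypothesis.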

How to formulate a hypothesis

When switching from projects to experiments, the traditional user story framework (As a / I want to / So that) proves insufficient. The traditional user story format does not expose the signals needed to evaluate outcomes. Instead, the old-school user story format focuses on outputs.

The problem with doing an experiment without first formulating a hypothesis is that there is a danger of introducing a bias when interpreting the results of an experiment. Defining the measurable signals that will enable us to corroborate our hypothesis must be done before we conduct the experiment. That way, we can remain completely impartial when interpreting the results of the experiment. We cannot be swayed by wishful thinking.

The best way to proceed with formulating a hypothesis is to use the following format:

We believe [this capability]
Will result in [this outcome]
We will have the confidence to proceed when [we see a measurable signal]
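This three-part format is easy to capture programmatically; here is a tiny helper, with field names of our own invention for illustration only:

```python
# Render the we-believe hypothesis template from its three parts.
def hypothesis(capability: str, outcome: str, signal: str) -> str:
    return (f"We believe {capability} "
            f"will result in {outcome}. "
            f"We will have confidence to proceed when {signal}.")

print(hypothesis("one-click checkout",
                 "higher order completion",
                 "completion rate rises by 5%"))
```

Forcing every idea through this template makes the measurable signal a required field rather than an afterthought.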

Working software is not a measure of progress

Output-based metrics and concepts (Definition of Done, acceptance criteria, burndown charts, and velocity) are good for detecting working software, but fail miserably when it comes to detecting whether working software adds value.

“Done” only matters if it adds value. Working software that doesn’t add value cannot be declared “done”.

The forgotten column

Technology-centric projects break activities down into four columns:

  • Backlog of ideas
  • In progress

The above structure is based on the strong belief that all software that works is valuable. That focus must now shift toward continuously delivering real value, something that serves customers. Agilists value outcomes (value to the customers) over features.

The new breakdown for hypothesis-driven development looks something like this:

All eyes must remain peeled on the Achieved desired outcome.

Top comments (4)


fransafu

A while ago I started to learn about the various kinds of Driven Development – testing, observability, etc. My point is: what do you think about all those kinds of "Driven Development"? In essence, are they part of the development process?

alexbunardzic

Hi Francisco,

Thank you for the brilliant question! I think that software development must be driven. There is more than one way to skin a cat, as the saying goes, so no surprise that we have at our disposal more than one way to drive software development. But without driving it, we are doomed to fall into the dreaded waterfall death march, where most software development initiatives never deliver on the expectations.

One may ask: how come we don't have X-Driven Development in other engineering disciplines, only in software? It's because other branches of engineering face much lower levels of uncertainty. When engineering physical objects or systems, it is much clearer what we are building and what the actual physical constraints are (for example, when building a bridge).

With software, we don't enjoy such privileges, and we therefore must proceed much more cautiously. Meaning, we must be driven by some discipline which will keep our development efforts on the straight and narrow.

Great conclusion and you're right. Software development has a high level of uncertainty.

Thanks for the reply, I'll stick with the main idea for my learning path :)

incrementis

Hello Alex Bunardzic,

Thank you for your article. I find it very helpful for getting an abstract and simple overview of what a hypothesis is and how to approach it. Instead of googling it, I checked Dev.to first and am glad I found your article.



Hypothesis-Driven Development

Hypothesis-Driven Development (HDD) is a software development approach rooted in the philosophy of systematically formulating and testing hypotheses to drive decision-making and improvements in a product or system. At its core, HDD seeks to align development efforts with the goal of discovering what resonates with users. This philosophy recognizes that assumptions about user behavior and preferences can often be flawed, and the best way to understand users is through experimentation and empirical evidence.

In the context of HDD, features and user stories are often framed as hypotheses. This means that instead of assuming a particular feature or enhancement will automatically improve the user experience, development teams express these elements as testable statements. For example, a hypothesis might propose that introducing a real-time chat feature will lead to increased user engagement by facilitating instant communication.

The Process

The process of Hypothesis-Driven Development involves a series of steps. Initially, development teams formulate clear and specific hypotheses based on the goals of the project and the anticipated impact on users. These hypotheses are not merely speculative ideas but are designed to be testable through concrete experiments.

Once hypotheses are established, the next step is to design and implement experiments within the software. This could involve introducing new features, modifying existing ones, or making adjustments to the user interface. Throughout this process, the emphasis is on collecting relevant data that can objectively measure the impact of the changes being tested.

Validating Hypotheses

The collected data is then rigorously analyzed to determine the validity of the hypotheses. This analytical phase is critical for extracting actionable insights and understanding how users respond to the implemented changes. If a hypothesis is validated, the development team considers how to build upon the success. Conversely, if a hypothesis is invalidated, adjustments are made based on the lessons learned from the experiment.
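The analytical phase often boils down to comparing a metric between control and variant. A hedged sketch using a two-proportion z-test with invented conversion numbers (a real analysis would also verify sample size, run length, and sample ratio in advance):

```python
import math

# Two-proportion z-test: did the variant's conversion rate differ from
# control's by more than chance would explain? Pure stdlib, data invented.
def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 200/1000 converted. Variant: 260/1000 converted.
z = two_proportion_z(conv_a=200, n_a=1000, conv_b=260, n_b=1000)
print(round(z, 2), "significant" if abs(z) > 1.96 else "not significant")
# 3.19 significant
```

With |z| above 1.96 (the 5% two-sided threshold), the team would treat the hypothesis as validated and consider how to build on the success.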

HDD embraces a cycle of continuous improvement. As new insights are gained and user preferences evolve, the development process remains flexible and adaptive. This iterative approach allows teams to respond to changing conditions and ensures that the software is consistently refined in ways that genuinely resonate with users. In essence, Hypothesis-Driven Development serves as a methodology that not only recognizes the complexity of user behavior but actively seeks to uncover what truly works through a structured and empirical approach.



Hypothesis-Driven Development


To deliver agile outcomes, you have to do more than implement an agile process – you have to create focus around what matters to your user and rigorously test how well what you're doing delivers on that focus. Driving to testable ideas (hypotheses) and maximizing the results of your experimentation is at the heart of a high-functioning practice of agile. This course shows you how to facilitate alignment and create a culture of experimentation across your product pipeline.

You'll understand how to answer these four big questions:

  1. How do we drive our agility with analytics?
  2. How do we create compelling propositions for our user?
  3. How do we achieve excellent usability?
  4. How do we release fast without creating disasters?

As a Project Management Institute (PMI®) Registered Education Provider, the University of Virginia Darden School of Business has been approved by PMI to issue 20 professional development units (PDUs) for this course, which focuses on core competencies recognized by PMI. (Provider #2122) This course is supported by the Batten Institute at UVA's Darden School of Business. The Batten Institute's mission is to improve the world through entrepreneurship and innovation: www.batteninstitute.org.


Split

Building a hypothesis-driven culture of experimentation

A hypothesis-driven culture reframes software development as a series of changes, outcomes, and measurable results.

Split automatically pairs feature flags with event data from a wide range of sources. This enables users to develop a set of dimensions, powering feature-level analysis and better-informed future experiments.


A hypothesis-driven approach to software development

Hypothesis-driven development changes the typical software development approach to focus first on a desired outcome and a hypothesis on how to reach it.

Using a series of experiments to validate or disprove the hypothesis leads a development team closer to achieving their desired outcome, such as solving a certain user issue.

Guide to implementing hypothesis-driven development


Achieving hypothesis-driven development with an experimentation platform

Split empowers teams to transition rapidly to a culture of experimentation, combining feature flags with existing data pipelines in a full-stack experimentation platform. Experiment with every new feature you build, automatically creating an A/B test and calculating the results.

Define and customize metrics to match your hypotheses. We’ll calculate all metrics for all of your experiments with built-in best practices that ensure your sample sizes, sample ratio, and review period are properly aligned to your experiment.

Make your software development process more innovative

Split brings together everything you need for hypothesis-driven development: feature flags, rich data sources, and automatic impact analysis.

DANIEL TENNER

DANIELTENNER.COM


How to evaluate and implement startup ideas using Hypothesis Driven Development

This article was originally published on swombat.com in January 2011.

So you've come up with an interesting idea. You think it might work. You've sketched it out using various tools like the Business Model Generation canvas, a business plan, an Excel financial model, etc. You're still positive on the idea and think it's probably worth giving it a shot.

One common approach is to visualise launch day and work back from there. Figure out what you need to launch some kind of initial product, and then start casting the spells and incantations required to get there (mostly in the form of code or application specifications).

A slightly better approach, which those who have tried the above method usually end up using next, after they built something that took them 6 months but was utterly useless to anyone, is to build the bare minimum that you need in order to get some users, any users, to use the system on a regular basis. This is better, and gets you feedback much more quickly than the previous method.

Even better is to aim not for something you can get people to use, but instead to build something you can charge for immediately, even if the price is low. This is often favoured by Lean Startup (http://en.wikipedia.org/wiki/Lean_Startup) aficionados who haven't quite taken the lean methodology the whole way yet.

What do all these methods have in common? They present the unfolding startup as a series of tasks to be completed to get somewhere.

Here’s a better approach.

Hypothesis-Driven Development

A startup idea is not a plan of action. A startup idea is a series of unchecked hypotheses. In essence, it is a series of questions that you haven’t completely answered yet. The process of progressing a startup from idea to functioning business is the process of answering these questions, of validating these hypotheses.

Let’s consider a theoretical startup to illustrate this. Let’s say we’re looking at building “Heroku for Django”. The initial three questions for most web startups will be in the form:

  • Can I actually build it?
  • Can I get people to know about it?
  • Can I make money from it?

Often, this is the order in which they will arise, if you have some experience of web startups but are fundamentally a builder type. Making money is the last concern. “If I can get lots of passionate users who are willing to pay something, then it will probably be alright.”

To apply Hypothesis-driven development properly, you will want to order your questions by priority before proceeding. This is especially essential once you break down the questions into sub-questions and end up with dozens upon dozens of questions.

The best way to prioritise the questions is by uncertainty. An initial order for these three questions might then be:

  • Can I make money from it?
  • Can I get people to know about it?
  • Can I build it?

Your own prioritisation may vary, but if you’re technical, “Can I build it?” will probably be last on the list. Of course you can build it. If you couldn’t, you would probably have discarded the idea before even getting to this stage.

Before trying to answer the questions, you first break them down into sub-questions (please note this breakdown is nowhere near exhaustive enough, it’s just an example):

  • Which cloud platform is best for this?
  • How many instances will I need at a minimum to run the platform?
  • How much will I be able to charge per user?
  • What proportion of paid vs free users will I have?
  • How well will users convert from free to paid?
  • What channels are there to get the message out?
  • How competitive are the AdWords for this?
  • Do I have enough contacts to get the initial, core users so the service will be useful to real users?
  • What are the hardest bits of technology I’ll need to put together?
  • Can the scope be cut down so that I have a chance of building a version 1 with extremely limited resources?
  • Which features can be put off until later?

You should keep expanding this list until you can start to see what the burning uncertainties are. These will be unique to your startup idea and to your skills and available resources. Two people evaluating the same idea will probably come up with different key questions. Once you’ve got those key questions (the ones which make you think “Hmm, I really don’t know this and it’s really important.”), shift those to the top.
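One way to surface those burning uncertainties is to score each sub-question by uncertainty and importance and sort by the product. A sketch using a few of the questions above, with invented 1-to-5 scores:

```python
# Rank sub-questions by uncertainty x importance (scores are made up).
# The highest product marks the "I really don't know this and it's
# really important" questions that should be answered first.
questions = [
    # (question, uncertainty 1-5, importance 1-5)
    ("Can the scope be cut down for a version 1?", 2, 4),
    ("How well will users convert from free to paid?", 5, 5),
    ("Which cloud platform is best for this?", 1, 2),
]

ranked = sorted(questions, key=lambda q: q[1] * q[2], reverse=True)
for question, uncertainty, importance in ranked:
    print(f"{uncertainty * importance:>2}  {question}")
```

Two founders will score the same questions differently, which is the point: the ranking encodes your own uncertainty, not some universal order.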

Then, start working through the questions, one by one or even in parallel. Most of the time, the answer will not be found in code, but in good old-fashioned research, planning, and the dreaded Excel spreadsheets. You don’t need to answer all these questions with 100% certainty, but you should be clearly aware of the limits of your answers, and when the answer is really critical, you should make an effort to answer it as fully as possible. You can’t know everything, but you gotta know what you don’t know and how much it can hurt you.

At some point, if the idea has answered enough questions, the next most important question will require you to build something – be it a paper prototype, a landing page, or something else. When it’s the most important question, do it, and do just enough to answer the question. Later, if your idea is really good, you will probably, at some point, start to do the really expensive stuff: building a real application.

At that point, with so many critical questions having been answered, the likelihood that you build something that nobody uses or pays for should be low.

Of course, you may succeed by shooting from the hip and just going for it without a second thought, but more experienced entrepreneurs will usually look before they leap.

Thoughts from 2017

When I wrote this article, Eric Ries had been blogging about his Lean Startup ideas for a little while, but the book was still 9 months away from publication. It is obvious that this approach is one of the core components of the Lean Startup Methodology. That said, it extracts one of the key points and condenses it in a more readable format, which I think is valuable to be able to point people to.

All of the ideas in this article are still very much current – even more so, perhaps. Lean Startup, or Hypothesis Driven Development, is still the best way to devise a plan for building a new company: find the most risky assumptions, test them and update them until they are no longer risks, then find the next most risky assumptions and repeat until you have built a functioning company.

Coursera for the Reliance Family


Hypothesis-Driven Development


Instructor: Alex Cowan


What you'll learn

How to drive valuable outcomes for your user and reduce waste for your team by diagnosing and prioritizing what you need to know about them

How to focus your practice of agile by pairing qualitative and quantitative analytics

How to do just enough research when you need it by running design sprints

How to accelerate value delivery by investing in your product pipeline

Skills you'll gain

  • Design and Product
  • Communication
  • Leadership and Management
  • Project Management
  • User Experience
  • Research and Design
  • Software Engineering

Details to know



There are 4 modules in this course

To deliver agile outcomes, you have to do more than implement agile processes – you have to create focus around what matters to your user and constantly test your ideas. This is easier said than done, but most of today's high-functioning innovators have a strong culture of experimentation.

In this course, you’ll learn how to identify the right questions at the right time, and pair them with the right methods to do just enough testing to make sure you minimize waste and maximize the outcomes you create with your user. This course is supported by the Batten Institute at UVA’s Darden School of Business. The Batten Institute’s mission is to improve the world through entrepreneurship and innovation: www.batteninstitute.org.

How Do We Know if We're Building for a User that Doesn't Exist?

How do you go from backlog grooming to blockbuster results with agile? Hypothesis-driven decisions. Specifically, you need to shift your teammates' focus from their natural tendency to concentrate on their own output to focusing on user outcomes. Easier said than done, but getting everyone excited about the results of an experiment is one of the most reliable ways to get there. This week, we'll focus on how to get started in a practical way.

What's included

22 videos 1 reading 1 quiz

22 videos • Total 88 minutes

  • Course Introduction • 4 minutes • Preview module
  • Hypotheses-Driven Development & Your Product Pipeline • 7 minutes
  • Introducing Example Company: HVAC in a Hurry • 1 minute
  • Driving Outcomes With Your Product Pipeline • 7 minutes
  • The Persona Hypothesis • 3 minutes
  • The JTBD Hypothesis • 3 minutes
  • The Demand Hypothesis • 2 minutes
  • The Usability Hypothesis • 2 minutes
  • The Collaboration Hypothesis • 2 minutes
  • The Functional Hypothesis • 2 minutes
  • Driving to Value with Your Persona & JTBD Hypothesis • 2 minutes
  • Example Personas and Jobs-to-be-Done • 4 minutes
  • Setting Up Interviews • 3 minutes
  • Prepping for Subject Interviews • 3 minutes
  • Conducting the Interview • 6 minutes
  • How Not to Interview • 6 minutes
  • Day in the Life • 4 minutes
  • You and Your Next Design Sprint • 4 minutes
  • The Practice of Time Boxing • 4 minutes
  • Overview of the Persona and JTBD Sprint • 2 minutes
  • How Do I Sell the Idea of a Design Sprint • 4 minutes
  • Your Persona & JTBD Hypotheses: What's Next For You? • 3 minutes

1 reading • Total 15 minutes

  • Course Overview & Requirements • 15 minutes

1 quiz • Total 20 minutes

  • Week 1 Quiz • 20 minutes

How Do We Reduce Waste & Increase Wins by Testing Our Propositions Before We Build Them?

Nothing will help a team deliver better outcomes like making sure they’re building something the user values. This might sound simple or obvious, but I think after this week it’s likely you’ll find opportunities to help improve your team’s focus by testing ideas more definitively before you invest in developing software. In this module, you’ll learn how to make concept testing an integral part of your product pipeline. We’ll continue to apply methods from Lean Startup, looking at how they pair with agile. We’ll look at how high-functioning teams design and run situation-appropriate experiments to test ideas, and how that works before the fact (when you’re testing an idea) and after the fact (when you’re testing the value of software you’ve released).

20 videos 1 quiz 1 discussion prompt

20 videos • Total 120 minutes

  • Creating More Wins • 5 minutes • Preview module
  • Describing the Customer Experience (CX) for Testability • 8 minutes
  • CX Mapping for Prioritization and Testing • 6 minutes
  • Testing Demand Hypotheses with MVP's • 4 minutes
  • Learning What's Valuable • 7 minutes
  • Introducing Enable Quiz • 1 minute
  • Business to Consumer Case Studies • 9 minutes
  • Business to Business Case Studies • 6 minutes
  • Using a Design Sprint to Test Your Demand Hypothesis • 3 minutes
  • Lean Startup and Learning from Practice • 0 minutes
  • Interview: Tristan Kromer on the Practice of Lean Startup • 6 minutes
  • Interview: David Bland on the Practice of Lean Startup • 5 minutes
  • Interview: Tristan Kromer on Creating a Culture of Experimentation Part 1 • 7 minutes
  • Interview: Tristan Kromer on Creating a Culture of Experimentation Part 2 • 6 minutes
  • Interview: David Bland on Creating a Culture of Experimentation: Part 1 • 4 minutes
  • Interview: David Bland on Creating a Culture of Experimentation: Part 2 • 9 minutes
  • Interview: David Bland on Marrying Agile to Lean Startup • 7 minutes
  • Interview: David Bland on Using Hypothesis with Agile • 5 minutes
  • Interview: Laura Klein on the Right Kind of Research • 10 minutes
  • Your Demand Hypotheses: What's next for you? • 3 minutes

1 quiz • Total 20 minutes

  • Week 2 Quiz • 20 minutes

1 discussion prompt • Total 15 minutes

  • Learnings from David, Tristan, and Laura • 15 minutes

How Do We Consistently Deliver Great Usability?

The best products are tested for usability early and often, avoiding the destructive stress and uncertainty of a "big unveil." In this module, you’ll learn how to diagnose, design, and execute phase-appropriate user testing. The tools you’ll learn to use here (a test plan template, prototyping tool, and test session infrastructure) are accessible and teachable to anyone on your team. And that’s a very good thing – often products are released with poor usability because there "wasn’t enough time" to test them. With these techniques, you’ll be able to test early and often, reinforcing your culture of experimentation.

19 videos 1 quiz 1 discussion prompt

19 videos • Total 90 minutes

  • The Always Test • 4 minutes • Preview module
  • A Test-Driven Approach to Usability • 5 minutes
  • The Inexact Science of Interface Design • 6 minutes
  • Diagnosing Usability with Donald Norman's 7 Steps Model • 8 minutes
  • Fixing Usability with Donald Norman's 7 Steps Model • 3 minutes
  • Applying the 7 Steps Model to Hypothesis-Driven Development • 3 minutes
  • Fixing the Visceral Layer • 4 minutes
  • Fixing the Behavioral Layer: The Importance of Comparables & Prototyping • 9 minutes
  • Prototyping With Balsamiq • 4 minutes
  • Usability Testing: Fun & Affordable • 2 minutes
  • The Right Testing at the Right Time • 2 minutes
  • A Test Plan Anyone Can Use • 6 minutes
  • Creating Good Test Items • 3 minutes
  • Running a Usability Design Sprint • 3 minutes
  • Running a Usability Design Sprint Skit • 5 minutes
  • Interview: Laura Klein on Qualitative vs. Quantitative Research • 4 minutes
  • Interview: Laura Klein on Lean UX in Enterprise IT • 5 minutes
  • Prioritizing User Outcomes with Story Mapping • 4 minutes
  • Your Usability Hypotheses: What's Next For You? • 3 minutes

1 quiz • Total 20 minutes

  • Week 3 Quiz • 20 minutes

1 discussion prompt • Total 15 minutes

  • How will these techniques help you? • 15 minutes

How Do We Invest to Move Fast?

You’ve learned how to test ideas and usability to reduce the amount of software your team needs to build and to focus its execution. Now you’re going to learn how high-functioning teams approach testing of the software itself. The practice of continuous delivery and the closely related DevOps movement are changing the way we build and release software. It wasn’t that long ago when 2–3 releases a year was considered standard. Now, Amazon, for example, releases code every 11.6 seconds. This week, we’ll look at the delivery pipeline and step through what successful practitioners do at each stage, and how you can diagnose and apply the practices that will improve your implementation of agile.

24 videos 1 quiz 1 peer review

24 videos • Total 128 minutes

  • Functional Hypotheses and Continuous Delivery • 6 minutes • Preview module
  • The Team that Releases Together • 4 minutes
  • Getting Started with Continuous Delivery • 3 minutes
  • Anders Wallgren on Getting Started • 4 minutes
  • The Test Pyramid • 6 minutes
  • The Commit & Small Tests Stage • 2 minutes
  • The Job of Version Control • 3 minutes
  • Medium Tests • 1 minute
  • Large Tests • 6 minutes
  • Creating Large/Behavioral Tests • 9 minutes
  • Anders Wallgren on Functional Testing • 9 minutes
  • Release Stage • 4 minutes
  • The Job of Deploying • 6 minutes
  • Anders Wallgren on Deployment • 2 minutes
  • Chris Kent on Developing with Continuous Delivery • 10 minutes
  • Chris Kent on Continuous Deployment • 11 minutes
  • Test-Driven General Management • 5 minutes
  • Narrative and the 'Happy Path' • 3 minutes
  • The Emergence of DevOps and the Ascent of Continuous Delivery • 4 minutes
  • Design for Deployability • 2 minutes
  • Anders Wallgren on Continuous Deployment • 3 minutes
  • Anders Wallgren on Creating a Friendly Environment for Continuous Deployment • 6 minutes
  • Your Functional Hypotheses: What's Next For You? • 2 minutes
  • Course Conclusion • 8 minutes

1 quiz • Total 20 minutes

  • Week 4 Quiz • 20 minutes

1 peer review • Total 90 minutes

  • Creating and Testing a Demand/Value Hypothesis • 90 minutes


A premier institution of higher education, The University of Virginia offers outstanding academics, world-class faculty, and an inspiring, supportive environment. Founded by Thomas Jefferson in 1819, the University is guided by his vision of discovery, innovation, and development of the full potential of students from all walks of life. Through these courses, global learners have an opportunity to study with renowned scholars and thought leaders.


Learner reviews

Showing 3 of 949

949 reviews

Reviewed on Nov 14, 2020

Great insights on how to test both the motivation and the usability! Gave me additional knowledge of Agile. It is better to take this course within the Specialization programme.

Reviewed on Sep 25, 2018

This course actually bring all that knowledge into light which has been taught in Course 1-3. all videos specially the interview are the essence of this course.

Reviewed on Mar 18, 2018

The course contains complete detailed information on Agile Testing. Users should make the best use of this course knowledge as now all the companies are now moving to Agile.

Recommended if you're interested in Computer Science

  • Managing an Agile Team – University of Virginia
  • Product Analytics and AI
  • Agile Meets Design Thinking
  • Agile Development (Specialization)


MarketSplash

How To Implement Swift Event-Driven Architecture

Swift's event-driven architecture is transforming how applications respond to user interactions and system events. In this article, we delve into its core concepts, benefits, and practical implementations, providing developers with the tools to create responsive and efficient apps.

💡 KEY INSIGHTS

  • In Swift's event-driven architecture, the use of closures and delegates enhances asynchronous programming and event handling.
  • Protocol-oriented programming in Swift allows for more flexible and reusable code components in event-driven models.
  • Swift's Combine framework streamlines the creation of complex event-processing chains, promoting reactive programming paradigms.
  • Effective management of memory leaks and retain cycles is crucial in Swift's event-driven architecture for optimal app performance.

Swift's event-driven architecture offers a fresh approach to responsive application design. By responding to events rather than following a linear flow, you can achieve more fluid interactions and maintainability. This architectural pattern is essential for anyone looking to elevate their Swift programming techniques.


Understanding Event-Driven Concepts


Event-driven architecture (EDA) is a design pattern that centers on the production, detection, consumption, and reaction to events. In EDA, Events are messages that denote a state change or an occurrence within a system. This approach is vastly different from traditional linear programming where tasks are executed sequentially.

In traditional procedural programming, operations are often carried out in a sequential manner. One function is called, executed, and completed before the next is called. In contrast, event-driven programming waits for external events, such as user input or system signals, before executing the relevant code. This makes applications more Reactive and adaptable to a user's actions.

There are three main components in an event-driven system:

  • Event Creators: These are the sources of events. They generate events based on certain conditions or triggers.
  • Event Channels: Once an event is created, it is sent via a channel. This is the medium through which events are passed.
  • Event Consumers: These are the entities waiting to react to a specific event.

For clarity, let's explore a basic event-driven concept in Swift:
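A minimal, self-contained sketch of the three components above (the AppEvent and EventChannel names are illustrative, not a framework API):

```swift
// A minimal event-driven sketch: creator, channel, consumer.

enum AppEvent {
    case userLoggedIn(name: String)
    case dataRefreshed
}

// Event channel: holds registered consumers and forwards events to them.
final class EventChannel {
    private var consumers: [(AppEvent) -> Void] = []

    func subscribe(_ consumer: @escaping (AppEvent) -> Void) {
        consumers.append(consumer)
    }

    // Event creators call this to emit an event.
    func publish(_ event: AppEvent) {
        consumers.forEach { $0(event) }
    }
}

let channel = EventChannel()

// Event consumer: reacts when a matching event arrives.
channel.subscribe { event in
    if case let .userLoggedIn(name) = event {
        print("Welcome, \(name)!")
    }
}

// Event creator: something happened, so publish an event.
channel.publish(.userLoggedIn(name: "Ada"))   // prints "Welcome, Ada!"
```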

Swift's concise syntax and powerful frameworks make it a perfect candidate for event-driven design. It not only makes code cleaner but also allows for more Fluid Interactions in applications. Swift's built-in event handling mechanisms, like the ones used for handling UI actions, make the development process more intuitive.

To benefit maximally from EDA, understanding its concepts is paramount. The next sections will dive deeper into practical applications of these principles in Swift.

Setting Up the Swift Environment

Before diving into event-driven development in Swift, it's essential to have a suitable environment in place. This ensures that you can compile, test, and debug your Swift applications seamlessly.

Prerequisites For Swift Development


Before setting up the Swift environment, ensure you have:

  • A Mac computer with macOS Catalina or later.
  • Adequate disk space for Xcode, the primary tool for Swift development.
  • A stable internet connection for downloading required software and packages.

Installing Xcode

Xcode is the official integrated development environment (IDE) for Swift. It offers a rich set of tools that simplify Swift development and provides an intuitive interface for building iOS, macOS, watchOS, and tvOS apps.

To install Xcode:

  • Open the App Store on your Mac.
  • Search for "Xcode".
  • Click on "Get" and then "Install".

Once installed, launch Xcode and complete the initial setup. It might prompt you to install additional components – proceed with these installations.


Configuring the Swift Playground

Swift Playground is a feature within Xcode that allows developers to write and test Swift code interactively. To set one up, choose File → New → Playground in Xcode, pick a template, and start typing Swift code – results appear as you write.

Swift Package Manager

Swift Package Manager is a tool for managing the distribution of Swift code. It's integrated with the Swift build system to automate the process of downloading, compiling, and linking dependencies.

To initiate a new package, run swift package init in an empty directory; it generates a Package.swift manifest along with source and test folders.

Setting up the right environment is the foundational step in Swift development. Once you have your tools and configurations in place, diving into event-driven design becomes significantly more accessible.

Designing With Event-Driven Patterns

Event-driven programming can be seen as an inversion of traditional programming logic. Instead of writing code that runs from start to finish, event-driven code runs in response to specific events. Properly using this pattern can lead to highly responsive and scalable applications.

A cornerstone of event-driven design is the Decoupling of components. In essence, one part of your application emits an event without knowing which other part of the application will react to it. This separation ensures that individual components can evolve independently.

Here's a simple Swift example:
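The sketch below shows decoupling in practice: OrderService emits an event without holding any reference to its subscribers (EventBus, OrderService, and the event name are illustrative, not a library API):

```swift
// Decoupling: the emitter knows nothing about its consumers.

final class EventBus {
    static let shared = EventBus()
    private var handlers: [String: [() -> Void]] = [:]

    func on(_ event: String, _ handler: @escaping () -> Void) {
        handlers[event, default: []].append(handler)
    }

    func emit(_ event: String) {
        handlers[event]?.forEach { $0() }
    }
}

// Component A: emits an event; it has no reference to any subscriber.
struct OrderService {
    func placeOrder() {
        // ... persist the order ...
        EventBus.shared.emit("orderPlaced")
    }
}

// Components B and C evolve independently; each just subscribes.
EventBus.shared.on("orderPlaced") { print("Analytics: order recorded") }
EventBus.shared.on("orderPlaced") { print("Email: confirmation sent") }

OrderService().placeOrder()   // both subscribers react
```

Because the only shared knowledge is the event name, either subscriber can be added, removed, or rewritten without touching OrderService.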

Event-driven design in Swift can leverage several patterns:

  • Observer Pattern: Allows an object to publish changes to its state for other objects to observe.
  • Command Pattern: Encapsulates a request as an object, allowing parameterization and execution of requests.
  • Mediator Pattern: Centralizes external communications to ensure components remain decoupled.

The Observer Pattern is one of the most common patterns in event-driven design. Here's how you can implement it in Swift:
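One hand-rolled version might look like this (WeatherStation, Display, and the protocol name are illustrative):

```swift
// Observer pattern: a subject notifies registered observers on change.

protocol Observer: AnyObject {
    func didChange(temperature: Double)
}

final class WeatherStation {
    private var observers: [Observer] = []

    var temperature: Double = 0 {
        didSet { observers.forEach { $0.didChange(temperature: temperature) } }
    }

    func attach(_ observer: Observer) { observers.append(observer) }
    func detach(_ observer: Observer) {
        observers.removeAll { $0 === observer }
    }
}

final class Display: Observer {
    func didChange(temperature: Double) {
        print("Now showing \(temperature)°")
    }
}

let station = WeatherStation()
let display = Display()
station.attach(display)
station.temperature = 21.5   // prints "Now showing 21.5°"
```

Note that this sketch holds observers strongly for brevity; production code typically holds them weakly, for the retain-cycle reasons discussed later in this article.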

Event-driven design patterns like these are instrumental in creating flexible, maintainable, and scalable Swift applications. Grasping their core principles and structures is pivotal to harnessing their full potential.

Swift's Event Handling Mechanisms

Swift provides a robust suite of mechanisms to handle events. By understanding these built-in tools, developers can effectively design and manage event-driven applications.

One of the primary mechanisms in Swift for broadcasting information across components is the NotificationCenter. It allows objects to register as observers for specific events and respond when those events are posted.
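A small sketch using Foundation's real NotificationCenter API (only the sessionExpired notification name is made up for illustration):

```swift
import Foundation

// Define a custom notification name.
extension Notification.Name {
    static let sessionExpired = Notification.Name("sessionExpired")
}

// Register an observer; the returned token is needed to unregister later.
let token = NotificationCenter.default.addObserver(
    forName: .sessionExpired,
    object: nil,
    queue: nil
) { _ in
    print("Session expired – show the login screen")
}

// Somewhere else entirely, post the event.
NotificationCenter.default.post(name: .sessionExpired, object: nil)

// Remove the observer when done to avoid dangling registrations.
NotificationCenter.default.removeObserver(token)
```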

Delegates are another common way of handling events in Swift, especially in UI components. They define a contract between objects to delegate responsibilities.
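A minimal delegation sketch (DownloadTask and its delegate protocol are illustrative names, not a system API):

```swift
// Delegation: a protocol defines the contract between two objects.

protocol DownloadTaskDelegate: AnyObject {
    func downloadDidFinish(data: String)
}

final class DownloadTask {
    weak var delegate: DownloadTaskDelegate?   // weak to avoid a retain cycle

    func start() {
        // ... real fetch work would happen here ...
        delegate?.downloadDidFinish(data: "payload")
    }
}

final class ViewController: DownloadTaskDelegate {
    func startDownload() {
        let task = DownloadTask()
        task.delegate = self
        task.start()
    }

    func downloadDidFinish(data: String) {
        print("Received: \(data)")
    }
}

ViewController().startDownload()   // prints "Received: payload"
```

Marking the delegate weak is the conventional choice: the task should never keep its owner alive.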

In the context of iOS development, Gesture Recognizers are used to detect and respond to specific user interactions like taps, swipes, or pinches.

Swift also supports event handling through Closures, which are self-contained blocks of code that can be passed around and used in your code.
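A closure-based handler can be as small as a single stored property (the Button type here is a stand-in, not UIKit's):

```swift
// A closure as an event handler.

final class Button {
    var onTap: (() -> Void)?

    func simulateTap() { onTap?() }
}

let button = Button()
button.onTap = { print("Button tapped") }
button.simulateTap()   // prints "Button tapped"
```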

Understanding and leveraging Swift's event handling mechanisms can make the process of designing event-driven applications both straightforward and efficient. These tools, when used aptly, can significantly boost an application's responsiveness and adaptability.

Examples of Swift Event-Driven Applications

The beauty of event-driven programming in Swift is its widespread applicability. From user interfaces to backend data processing, Swift's event-driven paradigm plays a crucial role. Let's explore some practical examples of how event-driven concepts are employed in real-world Swift applications.

Modern applications often require interactive UI elements that respond to user inputs. Buttons, sliders, and switches are examples of event-driven components.
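With UIKit, this wiring is done through the target-action mechanism (the SettingsViewController and its button are illustrative; addTarget is the real UIControl API, and this sketch needs an iOS target to run):

```swift
import UIKit

// Target-action wiring for a UIButton.
final class SettingsViewController: UIViewController {
    private let saveButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        saveButton.setTitle("Save", for: .normal)
        // Register saveTapped as the handler for the tap event.
        saveButton.addTarget(self, action: #selector(saveTapped), for: .touchUpInside)
        view.addSubview(saveButton)
    }

    // Fires only when the user taps – classic event-driven flow.
    @objc private func saveTapped() {
        print("Saving settings…")
    }
}
```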

In applications that require network communication, events play a role in handling Network Requests and their responses.

Many modern apps feature chat or messaging systems, which are inherently event-driven. When a new message arrives, the system responds by notifying the user.

By examining these practical examples, it's evident that event-driven programming in Swift is a fundamental concept applicable in a myriad of situations. It allows developers to create highly responsive, efficient, and user-friendly applications.

Best Practices in Swift Event-Driven Development

Employing event-driven architecture in Swift offers immense power, but with great power comes the need for responsibility. Following best practices ensures maintainability, clarity, and robustness in your event-driven applications.

Avoid using generic or vague event types. Instead, Strongly-Typed Events help in conveying the clear intention of the event and the kind of data it carries.
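An enum with associated values is the natural way to express this in Swift (PlayerEvent and its cases are illustrative):

```swift
// Strongly-typed events: an enum makes each event's payload explicit,
// and the compiler forces every handler to cover every case.

enum PlayerEvent {
    case started(trackID: Int)
    case paused(at: Double)
    case failed(message: String)
}

func describe(_ event: PlayerEvent) -> String {
    switch event {
    case .started(let trackID):
        return "Playing track \(trackID)"
    case .paused(let position):
        return "Paused at \(position)s"
    case .failed(let message):
        return "Playback failed: \(message)"
    }
}

print(describe(.started(trackID: 42)))   // prints "Playing track 42"
```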

When using observers, especially closures, it's essential to prevent Retain Cycles that could cause memory leaks.
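The standard defense is a weak capture of self inside the observer closure (ProfileViewModel is an illustrative name):

```swift
// Capturing self weakly in a stored closure prevents a retain cycle.

final class ProfileViewModel {
    var onUpdate: (() -> Void)?
    var name = "–"

    func bind() {
        // [weak self] breaks the self → onUpdate → self cycle.
        onUpdate = { [weak self] in
            guard let self = self else { return }
            print("Profile changed: \(self.name)")
        }
    }

    deinit { print("ProfileViewModel deallocated") }
}

var viewModel: ProfileViewModel? = ProfileViewModel()
viewModel?.bind()
viewModel = nil   // deinit runs – the closure did not keep self alive
```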

Given the decoupled nature of event-driven systems, clear documentation helps developers understand the events' lifecycle and their intended effects.

While events can trigger other events, excessive chaining can lead to convoluted logic that's hard to debug. Limit the Chaining of Events to ensure your application remains understandable and maintainable.

Incorporating these best practices into your Swift event-driven development process ensures that your applications remain efficient, maintainable, and less error-prone. A mindful approach to event handling will pave the way for scalable and robust applications.

Troubleshooting Common Issues

Working with event-driven architectures in Swift offers numerous benefits. However, like all programming paradigms, it comes with its own set of challenges. Let's delve into some common issues faced and how to address them.

One of the most frequent challenges is when expected events don't fire. This could be due to several reasons:

  • The event listener might not be correctly registered.
  • The event might not be emitted as anticipated.
  • There might be a mismatch between event types.

Closures can inadvertently create Retain Cycles, leading to memory leaks and unexpected behaviors.

In a multi-threaded environment, handling multiple events concurrently can lead to race conditions or deadlocks. It's essential to synchronize access to shared resources.
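One common way to serialize access is a private dispatch queue (the Counter type is illustrative; DispatchQueue is the real Foundation/libdispatch API):

```swift
import Foundation

// Serializing event handling with a private serial queue.

final class Counter {
    private var value = 0
    private let queue = DispatchQueue(label: "counter.events")

    // Events may arrive from any thread; the serial queue prevents races.
    func handleIncrementEvent() {
        queue.sync { value += 1 }
    }

    var current: Int { queue.sync { value } }
}

let counter = Counter()
DispatchQueue.concurrentPerform(iterations: 100) { _ in
    counter.handleIncrementEvent()
}
print(counter.current)   // 100 – no lost updates
```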

Events can sometimes fire in an order that's different from what a developer might expect. Proper logging and understanding the event lifecycle are crucial.

Navigating the intricacies of Swift's event-driven development requires vigilance and understanding. By recognizing common pitfalls and equipping yourself with solutions, you can create resilient and efficient event-driven applications.

Frequently Asked Questions

What is the difference between event-driven programming and procedural programming in Swift?

In event-driven programming, code execution is determined by external events, such as user interactions or system signals. The application responds to these events as they occur. In contrast, procedural programming follows a linear approach where tasks are executed sequentially from start to finish.

How can I avoid memory leaks when using closures in Swift's event-driven mechanisms?

Memory leaks with closures often occur due to retain cycles. To prevent them, use [weak self] or [unowned self] within your closure to capture self weakly. This ensures that self isn't strongly captured by the closure, preventing potential retain cycles.

How do I debug event-driven applications in Swift?

Debugging event-driven applications can be trickier than linear applications due to the unpredictable order of event execution. Key strategies include:

  • Use logging extensively to track the order of event firings and handler executions.
  • Use breakpoints in your event handlers.
  • Leverage tools like the Xcode debugger and Instruments to monitor app behavior and performance.

Are there any performance concerns with event-driven programming in Swift?

Event-driven programming can be very efficient, as it allows the system to remain idle when not processing events. However, if there are too many events or if event handlers perform intensive operations, it can slow down the application. It's essential to profile and optimize event-driven applications regularly to ensure smooth performance.

Can I mix event-driven programming with other paradigms in Swift?

Absolutely. In real-world applications, it's common to see a mix of paradigms. For instance, while the UI might be event-driven, the core logic or data processing might follow a procedural or object-oriented approach. Swift's flexibility allows developers to use the best paradigm for the task at hand.



IMAGES

  1. Hypothesis-driven Development

    how to implement hypothesis driven development

  2. The 6 Steps that We Use for Hypothesis-Driven Development

    how to implement hypothesis driven development

  3. How to Implement Hypothesis-Driven Development

    how to implement hypothesis driven development

  4. Data-driven hypothesis development

    how to implement hypothesis driven development

  5. Hypothesis Driven Developmenpt

    how to implement hypothesis driven development

  6. Data-driven hypothesis development

    how to implement hypothesis driven development

VIDEO

  1. Step10 Hypothesis Driven Design Cindy Alvarez

  2. Deloitte Problem Solving: Overview of Hypothesis Based Problem Solving

  3. Detecting the Unknown: Hypothesis-Driven Threat Hunting

  4. Day-2, Hypothesis Development and Testing

  5. 1.4.4 Development of working hypothesis

  6. Tiny Python Projects: Ch 1 Pt 8 (adding a main)

COMMENTS

  1. How to Implement Hypothesis-Driven Development

    Make observations. Formulate a hypothesis. Design an experiment to test the hypothesis. State the indicators to evaluate if the experiment has succeeded. Conduct the experiment. Evaluate the results of the experiment. Accept or reject the hypothesis. If necessary, make and test a new hypothesis.

  2. How to Implement Hypothesis-Driven Development

    Make observations. Formulate a hypothesis. Design an experiment to test the hypothesis. State the indicators to evaluate if the experiment has succeeded. Conduct the experiment. Evaluate the results of the experiment. Accept or reject the hypothesis. If necessary, make and test a new hypothesis.

  3. Hypothesis-driven development: Definition, why and implementation

    How do you implement hypothesis-driven development. At a high level, here's a general approach to implementing HDD: Identify the problem or opportunity: Begin by identifying the problem or opportunity that you want to address with your product or feature. Create a hypothesis: Clearly define a hypothesis that describes a specific user behavior, need, or outcome you believe will occur if you ...

  4. What is hypothesis-driven development?

    Hypothesis-driven development in a nutshell. As the name suggests, hypothesis-driven development is an approach that focuses development efforts around, you guessed it, hypotheses. To make this example more tangible, let's compare it to two other common development approaches: feature-driven and outcome-driven.

  5. Hypothesis-Driven Development (Practitioner's Guide)

    Like agile, hypothesis-driven development (HDD) is more a point of view with various associated practices than it is a single, particular practice or process. That said, my goal here for is you to leave with a solid understanding of how to do HDD and a specific set of steps that work for you to get started. After reading this guide and trying ...

  6. Guide for Hypothesis-Driven Development: How to Form a List of

    The hypothesis-driven development management cycle begins with formulating a hypothesis according to the "if" and "then" principles. In the second stage, it is necessary to carry out several works to launch the experiment (Action), then collect data for a given period (Data), and at the end, make an unambiguous conclusion about whether ...

  7. The 6 Steps that We Use for Hypothesis-Driven Development

    Hypothesis-driven development is a prototype methodology that allows product designers to develop, test, and rebuild a product until it's acceptable by the users. ... Step 6: Implement Product and Maintain. Once you've got the confidence that the remaining hypotheses are validated, it's time to develop the product. However, testing must ...

  8. Apply the Scientific Method to agile development

    How to leverage the scientific method. The scientific method is empirical and consists of the following steps: Step 1: Make and record careful observations. Step 2: Perform orientation with regard to observed evidence. Step 3: Formulate a hypothesis, including measurable indicators for hypothesis evaluation.

  9. Understanding Hypothesis-Driven Development in Software Development

    Hypothesis-Driven Development (HDD) is a systematic and iterative approach that leverages the scientific method to inform software development decisions. ... Implementing the Experiment. After the experiment design is finalized, the actual implementation takes place. This may involve making changes to the software, setting up the necessary data ...

  10. Hypothesis-Driven Development

    Course Introduction • 4 minutes • Preview module. Hypotheses-Driven Development & Your Product Pipeline • 7 minutes. Introducing Example Company: HVAC in a Hurry • 1 minute. Driving Outcomes With Your Product Pipeline • 7 minutes. The Persona Hypothesis • 3 minutes. The JTBD Hypothesis • 3 minutes.

  11. An Explanation of Hypothesis-Driven Development

    In this Scrum Tapas video, PST Martin Hinshelwood delves into the Lean idea of Hypothesis-driven Development and explains how it works when it comes to delivering value. (6:04 Minutes) In this Scrum Tapas video, PST Martin Hinshelwood delves into the Lean idea of Hypothesis-driven Development and explains how it works when it comes to ...

  12. Using Hypothesis-Driven Development
