A/B Tests: Should they always be MVPs?

As Product Managers, we’re always looking to validate our ideas as early as possible before investing more in them, and then to prove our features are providing value as intended, either by watching our users’ behaviour through metrics or by gathering qualitative feedback.

When solving small problems (or building small features), it might make sense to build the solution immediately if you have enough conviction from your problem and solution discovery process, and the feature is small enough that you can kill it later if it misses the mark entirely.

But for larger problems, which may require a larger feature set to solve, you may need more conviction before building the entire feature/solution set, and that’s a good thing; nobody wants to waste months building a feature nobody uses.

One strategy I’ve often used is to A/B test the smallest version of the feature. That might seem like a “duh, of course” moment, but the question I’ve always contemplated is, what is the smallest version of the feature?

Is it the MVP of the feature?

Or is it the smallest version of the feature that would justify further investment in the solution, even if that version can’t be regarded as an MVP?

In an ideal world, the smallest version of the feature is the MVP. But as I found, that’s not always the case, hence this post.

Before we explore why, let’s look at some definitions, so we’re both on the same page:

A Minimum Viable Product (MVP) is the smallest (aka minimum) version of something, such as a feature of the product, that solves a user’s problem (aka product) and helps achieve a business objective (aka viable). Another way I think about viable (or product) is that it’s a standalone feature.

An A/B test is a technique for determining whether one version or variation of something performs better than another, or whether users behave differently under a given condition than they do without it.
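In practice, the mechanics of an A/B test start with assigning each user to a variant. A common approach is to hash the user ID together with an experiment name, which gives every user a stable, repeatable assignment. Here is a minimal sketch; the function and experiment names are illustrative, not from any particular experimentation library:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the user ID together with the experiment name means a
    user always sees the same variant within one experiment, while
    assignments stay independent across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment:
variant = assign_variant("user-123", "simpler-checkout")
```

Users in `treatment` see the smallest version of the feature; users in `control` see the existing experience, and you compare their behaviour.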

With those definitions cleared up, let’s continue…

As we described above, an MVP is the smallest version of something that solves a user’s problem and reaches a business goal. But even though we call it the smallest version and scope it to be small, it can still be a large piece of work for the dev team. An example could be building a new checkout flow with a simpler authentication flow, which still requires a month or two to build and QA.

A decision like this becomes even harder to make if you have a small dev team, say two developers: putting one of them on this project cuts the capacity you have for other features and bugs by 50%.

And so, it’s in situations like this that I advocate for building incomplete versions of MVPs (call them non-MVPs, since they are not yet products) to build the conviction to say, “yes, we should invest a month or two into building out the MVP.”

Making such decisions can irk your users, as well as members of your team and company, so I’ve come up with some guidelines to ensure this is a path worth going down:

  1. The test duration is short (i.e., 1-2 weeks rather than a month or longer)
  2. The test won’t damage your company’s reputation
  3. It allows your user to reach their goal successfully, rather than incompletely
  4. Customer Service/Success is on board at a minimum (they’ll be handling any potential issues/complaints)
  5. The situation is recoverable
  6. You can get the data you need to decide whether or not to invest in building the MVP


#6 is very, very important. There is no use going down this path if you can’t prove it’s the right path.
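One concrete way to satisfy #6 is to decide up front how you’ll judge the result, for example with a simple two-proportion z-test comparing conversion between control and treatment. The sketch below assumes you’ve counted conversions and users per variant; the function name and numbers are purely illustrative:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: did variant B convert better than A?

    Returns the z statistic; |z| greater than roughly 1.96 corresponds
    to significance at the 95% level for a two-sided test.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: 10% vs 13% conversion over 1,000 users each.
z = two_proportion_z(100, 1000, 130, 1000)  # ~2.1, past the 1.96 threshold
```

If your expected traffic over a 1-2 week test can’t move the z statistic past the threshold for a difference you’d care about, you can’t prove anything, and this path isn’t worth taking.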


Hopefully, making such decisions is not the norm for you, but when you have to, I hope these guidelines help you as much as they’ve helped me!



