Mark Wheatley

How To Run a Fake Door Test




What is a Fake Door Test?


A fake door test, also known as a ‘painted door test’ or a ‘riskiest assumption test’ (RAT), is a popular lean startup technique used to validate product ideas before committing to development. These experiments save businesses time and money by helping them avoid building a product or feature that doesn’t have product-market fit, and they reduce uncertainty and provide actionable insight quickly. The ‘fake door’ is essentially an illusion of a functional product or service: you measure customer interactions with a call to action in order to gauge market interest.


In this article I’ll go through the steps needed to create a fake door test, and talk about some of the tests I’ve created and the product decisions they helped inform.  


The Fake Door Test

How To Run a Fake Door Test


1. Define and Document your Test Hypothesis

This is probably the most important step. You need to start by documenting the assumptions you are trying to prove. If you don’t do this, you are not optimising your ‘learning environment’ and may find yourself struggling to agree on your takeaways once the test is complete.


I always like to state the hypothesis in the simple format of: 

We believe that if {action} then {outcome}


For example: “We believe that if we offer an advert-free experience and offline functionality then at least 5% of our current user base will be willing to pay $5 a month to have it”.

I don’t believe it’s necessary to define a ‘null’ hypothesis to learn what you need from a fake door test (statisticians might disagree, but I think you’re just introducing unneeded complexity).


It’s good practice to document not just the test hypothesis, but also the details of how you are running the test. For example, you should define the test duration - you might want to decide to end the test after a certain number of days or once you have reached a certain number of CTA impressions (eyeballs) or CTA click-throughs. You should also define the target audience for the test - are you segmenting your test to only be shown to a specific demographic?
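One way to keep the hypothesis, run parameters and eventual observations together is to treat the test plan as structured data rather than free-form notes. The sketch below is my own illustration, not a standard template - all field names and example values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class FakeDoorTestPlan:
    """One document holding the hypothesis, the run parameters and, later, the results."""
    hypothesis: str            # "We believe that if {action} then {outcome}"
    target_audience: str       # who the CTA is shown to
    max_duration_days: int     # stop after this many days...
    max_impressions: int       # ...or once the CTA has been seen this many times
    success_ctr_pct: float     # click-through rate that would confirm the hypothesis
    observations: list = field(default_factory=list)  # filled in after the test

plan = FakeDoorTestPlan(
    hypothesis=("We believe that if we offer an advert-free experience "
                "then at least 5% of our user base will click the upgrade CTA"),
    target_audience="current free-tier users, all regions",
    max_duration_days=7,
    max_impressions=100_000,
    success_ctr_pct=5.0,
)

# After the test, observations live alongside the hypothesis they relate to.
plan.observations.append("Ended after 7 days; CTR fell short of the 5% target.")
```

Writing the stopping criteria down as explicit fields makes it much harder to quietly extend a test that isn’t delivering the numbers you hoped for.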


Once you have completed your test you can add your observations to the same document and you have a useful artefact that will help your learnings ‘live’ longer and create more value for your business.


2. Design the Test

First, create a compelling call-to-action (CTA). This should be a very brief statement that describes the value proposition you are testing in the most compelling way you can. You may actually want to create more than one call-to-action to test - it’s quite common to use the fake door test to not only understand if the value proposition is interesting to customers, but what language is most effective in getting customers to respond to it.


Once you’ve agreed on your CTA, design the landing page (the page the customer sees when they click on the CTA). The content on this page should explain the value proposition in more detail, and say that it is not quite ready but ‘coming soon’. 


You might want to provide the ability for people to provide their email address so they can be informed when the product is available. This gives you an interesting additional conversion point to measure. It also enables you to follow up with those people after the test to thank them for their interest, explain that you are conducting research and ask them if they’d like to be involved in beta-testing or an interview. If you are collecting email addresses remember to add a GDPR statement to your landing page.


The more professional and polished your CTA and landing page look, the more likely you are to get good data from your test. Remember not to ‘overload’ your landing page with information: stick to good principles of information design and a ‘less is more’ philosophy.


3. Run the Test and Analyse the Data

Running the test is the most straightforward step: just add the CTA to your app or website. If you have decided to run the test as an advert on a social media site such as Facebook, you will need to set up an advertiser account and create a campaign. You’ll also need to set up some tracking on your landing page. There are many free analytics plug-ins available that will do the job well, the most popular being Google Analytics.

Once your test is complete, it’s time to analyse the data. You’ll have already defined which data points you are most interested in. At the very minimum you will be looking at the click-through rate for your call to action. If you are collecting email addresses, you may also want to track ‘landing page visit to email submission’ as a conversion metric.
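Both metrics are simple ratios. A minimal sketch in Python, with hypothetical counts (the function names and numbers are mine, for illustration only):

```python
def click_through_rate(impressions: int, clicks: int) -> float:
    """CTA click-through rate, as a percentage of impressions."""
    if impressions == 0:
        raise ValueError("no impressions recorded")
    return 100.0 * clicks / impressions

def email_conversion_rate(landing_visits: int, submissions: int) -> float:
    """'Landing page visit to email submission' conversion, as a percentage."""
    if landing_visits == 0:
        raise ValueError("no landing-page visits recorded")
    return 100.0 * submissions / landing_visits

# Hypothetical results: 20,000 CTA impressions, 900 clicks, 120 email sign-ups.
ctr = click_through_rate(20_000, 900)      # 4.5
conv = email_conversion_rate(900, 120)     # roughly 13.3
meets_target = ctr >= 5.0                  # compare against the hypothesis threshold
```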

Once you have your data, you can compare the results to your initial hypothesis. Did the results compare favourably or unfavourably with your conversion estimate? If the numbers are in line with it or better, then you have some good evidence that the product or feature has strong customer demand. If, however, the conversion/click-through rate is lower, there are a couple of things that might be happening:


  1. Your test was flawed / poorly designed. There is some ‘friction’ in the messaging or user journey that means the value proposition was not properly communicated. If the flaw in the test is obvious after some analysis, you can revise the test and try again with a different cohort of customers. If you still have confidence in your idea but can’t see a problem with the test, it might be worth walking through the test in a couple of customer interviews to see if you can get some qualitative insight into where you are going wrong.


  2. Your product or feature idea does not work as a value proposition. You can conduct further research to see if iterations on the idea prove more attractive, or you can abandon the idea and move on to more promising ideas. Remember - this outcome isn’t a ‘failed’ test, it’s a good result as it means you have successfully avoided spending expensive development resources on a product or service that isn’t attractive to customers.


Fake Door Test: Steps

The Two Types of Fake Door Tests

Fake door tests normally fall into two categories: tests run using CTAs placed on your existing product with your own customers, and tests run as an advert on a site such as LinkedIn or Facebook that lets you segment your audience in detail to fit your needs. The former is common with businesses trying to test the demand for new product features. The latter is often used in startups, or in businesses where there is sensitivity about the impact on an established brand, as you can make up a fake brand to go with your ‘fake’ product. I’ll give an example of each below from my own experience.


1. Own brand / current product


A few years ago I was working with one of the world’s largest kitchen appliance manufacturers. They were keen to launch ‘premium’ recipes and sell them in their current free recipe app. I suggested a fake door test as a way of understanding the demand for this before they invested any time or money.


We decided we would run the test for a week and that an acceptable conversion or ‘sale’ rate would be 5%. We then added a panel halfway down the homepage of the app offering a recipe pack for sale. This showed some recipe images and the price of the pack. Any click on this CTA would be a very strong indicator of a likely sale.


After a week we had nearly 96k impressions/views of the fake recipe pack, and just over 3,700 click-throughs. This gave us a conversion rate of only 3.9%, short of our 5% target. We had two options: lower the price of the fake recipe pack and run the test again, or park the idea of premium recipes. In the end we decided we couldn’t lower the price and still make a significant margin, and so the concept was put on hold.
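As a sanity check, the arithmetic from this test is easy to reproduce (the counts below are rounded, as in the text above, so the result is approximate):

```python
impressions = 96_000   # "nearly 96k" (rounded)
clicks = 3_700         # "just over 3700" (rounded)
target_pct = 5.0       # the agreed acceptable 'sale' rate

ctr_pct = 100.0 * clicks / impressions
below_target = ctr_pct < target_pct   # True: roughly 3.9% vs the 5% target
print(round(ctr_pct, 1))              # 3.9
```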


2. Fake brand / LinkedIn advert


In this next example, I was working with a SaaS company that was looking to develop a new product to sell alongside their established offering. An initial round of research consisting mainly of customer interviews surfaced three potential problems that a new product could solve. We needed a way to determine which of these was the most compelling opportunity, so a fake door test made sense.

 

We created three adverts for a fake product (one for each of the three customer problems we had identified) and created a LinkedIn campaign to push people to a landing page which described the product idea in more detail.  We were able to target individuals in the ‘buying’ role using their job titles. 


It took a few days for our campaign budget to be used up, and at the end we had generated 23k impressions. Looking at the data we could see that one of the CTAs generated 150% more clicks than its nearest rival - a clear winner was found. 
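Picking the winner is just a matter of comparing clicks per variant. A sketch with hypothetical counts (the real figures weren’t published; these are chosen so the lift matches the 150% reported above):

```python
# Hypothetical click counts for the three CTA variants.
clicks = {"problem_a": 1_500, "problem_b": 600, "problem_c": 450}

# Rank the variants by click count, highest first.
ranked = sorted(clicks.items(), key=lambda kv: kv[1], reverse=True)
(winner, winner_clicks), (runner_up, runner_up_clicks) = ranked[0], ranked[1]

# Lift of the winner over its nearest rival, as a percentage.
lift_pct = 100.0 * (winner_clicks - runner_up_clicks) / runner_up_clicks
print(f"{winner}: {lift_pct:.0f}% more clicks than {runner_up}")
```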


Conclusion

It's crucial to remember that fake door tests are just one piece of the product development puzzle. Ideally they should be used in conjunction with other research methods and always conducted ethically, with respect for user experience and transparency. You also need to show restraint - the overuse of this type of experiment on your customers can damage the credibility of your brand. For that reason it’s best to limit these types of tests to strategically important initiatives.


Managed properly though, fake door tests are a powerful tool in your product development arsenal, offering a cost-effective and low-risk method to validate ideas before committing resources. By following the steps outlined in this article - defining a clear hypothesis, designing an engaging test, and carefully analysing the results - you can gain valuable insight into customer preferences and market demand. The insights you gain might just be the key to your next big success.







