How to pass the A/B test interview question in tech companies
Product analytics and data scientist interviews at tech companies often include an A/B testing component. Here is my framework for answering A/B testing interview questions. Please note that this is not necessarily a guide to designing a good A/B test; rather, it is a guide to help you convince an interviewer that you know how to design one.
A/B Test Interview Framework
Imagine that during the interview you get asked, “Walk me through how you would A/B test this new feature.” This framework will help you pass these types of questions.
Phase 1: Set the context for the experiment. Why do we want to A/B test, what is our goal, and what do we want to measure?
The first step is to clarify the purpose and value of the experiment with the interviewer. Is it even worth running an A/B test? Interviewers want to know that the candidate can tie experiments to business goals.
Specify exactly what the treatment is and what hypothesis you are testing. Too often I see candidates fail to spell out the treatment and the hypothesis they want to test. It’s important to state this explicitly for your interviewer.
After specifying the treatment and the hypothesis, you need to define the metrics that you will track and measure.
Success metrics: Identify at least 2-3 candidate success metrics. Then narrow it down to one and propose it to the interviewer to get their thoughts.
Guardrail metrics: Guardrail metrics are metrics that you do not want to harm. You don’t necessarily want to improve them, but you definitely don’t want to harm them. Come up with 2-4 of these.
Tracking metrics: Tracking metrics help explain the movement in the success metrics. Come up with 1-4 of these.
Phase 2: How do we design the experiment to measure what we want to measure?
Now that you have your treatment, hypothesis, and metrics, the next step is to determine the unit of randomization for the experiment, and when each unit will enter the experiment. You should pick a unit of randomization such that you can measure your success metrics, avoid interference and network effects, and account for the user experience.
As a simple example, let’s say you want to test a treatment that changes the color of the checkout button on an ecommerce website from blue to green. How would you randomize this? You could randomize at the user level and say that every person that visits your website will be randomized into the treatment or control group. Another way would be to randomize at the session level, or even at the checkout page level.
When each unit enters the experiment is also important. Using the example above, you could have a person enter the experiment as soon as they visit the website. However, many users never make it to the checkout page, so you will end up with a lot of users who never had a chance to see your treatment, which dilutes your experiment. In this case, it might make sense to have a person enter the experiment once they reach the checkout page. You want to choose your unit of randomization and the entry point such that you have minimal dilution. In a perfect world, every unit would have the chance to be exposed to your treatment.
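If the interviewer digs into the mechanics, it can also help to mention that assignment is typically deterministic, for example by hashing the randomization unit’s ID together with an experiment salt. Here is a minimal sketch of that idea; the function and the 50/50 split are my own illustration, not any particular company’s system.

```python
import hashlib

def assign_variant(unit_id: str, experiment_salt: str, treatment_pct: float = 0.5) -> str:
    """Deterministically bucket a randomization unit (e.g. a user or session ID)
    into treatment or control based on a hash of its ID and an experiment salt."""
    digest = hashlib.md5(f"{experiment_salt}:{unit_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # stable bucket in [0, 9999]
    return "treatment" if bucket < treatment_pct * 10_000 else "control"

# Example: the same user always lands in the same group for this experiment.
print(assign_variant("user_123", "checkout_button_color_v1"))
```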
The next step is to conduct a power analysis to determine the number of observations required and how long to run the experiment. You can either state that you would conduct a power analysis using an alpha of 0.05 and power of 80%, or ask the interviewer if the company has standards you should use.
I’m not going to go into how to calculate power here, but know that in any A/B test interview question, you will have to mention power. For some companies, and in junior roles, just mentioning this will be good enough. Other companies, especially for more senior roles, might ask you more specifics about how to calculate power.
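To give a concrete flavor of what that calculation can look like, here is a minimal sketch using statsmodels with the conventional alpha of 0.05 and 80% power. The baseline and target conversion rates are made-up numbers, not benchmarks.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed example: baseline checkout conversion of 10%, and we want to detect
# a lift to 11% (a 1 percentage point absolute increase).
effect_size = proportion_effectsize(0.10, 0.11)

analysis = NormalIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8,
                                    alternative="two-sided")
print(f"Approximate sample size per group: {n_per_group:,.0f}")
```

Dividing the required sample size per group by your expected daily traffic into the experiment gives a rough estimate of how long the test needs to run.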
Next, you need to determine which statistical test(s) you will use to analyze the results. Is a simple t-test sufficient, or do you need quasi-experimental techniques like difference-in-differences? Do you need heteroskedasticity-robust or clustered standard errors?
The t-test and z-test of proportions are two of the most common tests.
If your unit of randomization is larger than your unit of analysis (for example, randomizing by user but analyzing individual sessions or page views), you may need to adjust how you calculate your standard errors.
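To make the two common tests concrete, here is a minimal sketch of both on synthetic data using scipy and statsmodels; the numbers are placeholders, not real results.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)

# Continuous metric (e.g. revenue per user): Welch's t-test on the two groups.
control_revenue = rng.normal(loc=20.0, scale=5.0, size=5_000)
treatment_revenue = rng.normal(loc=20.5, scale=5.0, size=5_000)
t_stat, p_value = stats.ttest_ind(treatment_revenue, control_revenue, equal_var=False)
print(f"t-test: t={t_stat:.2f}, p={p_value:.4f}")

# Binary metric (e.g. converted or not): z-test of proportions.
conversions = np.array([550, 500])   # treatment, control conversions
sample_sizes = np.array([5_000, 5_000])
z_stat, p_value = proportions_ztest(conversions, sample_sizes)
print(f"z-test of proportions: z={z_stat:.2f}, p={p_value:.4f}")
```

If your randomization unit is larger than your analysis unit, one common option is a regression with standard errors clustered on the randomization unit; the difference-in-differences sketch further down shows what that clustering looks like.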
You might be thinking, “why would I need to use difference-in-differences in an A/B test?” In my experience, this comes up when doing geography-based randomization on a relatively small sample. Let’s say that you want to randomize by city in the state of California. Even though you are randomizing which cities go into the treatment and control groups, it’s likely that the two groups will have pre-existing differences. A common solution is to use difference-in-differences. I’m not saying this is right or wrong, but it’s a common solution that I have seen in tech companies.
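Here is a hedged sketch of what that analysis could look like: a simple two-period difference-in-differences regression on a made-up city-week panel, with standard errors clustered on city. The treatment effect is the coefficient on the treated × post interaction. The data and column names are my own illustration, not any specific company’s setup.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical city-week panel: `treated` marks treatment cities,
# `post` marks weeks after launch, and `metric` is the outcome.
cities, weeks = 40, 8
panel = pd.DataFrame({
    "city": np.repeat(range(cities), weeks),
    "week": np.tile(range(weeks), cities),
})
panel["treated"] = (panel["city"] < cities // 2).astype(int)
panel["post"] = (panel["week"] >= weeks // 2).astype(int)
panel["metric"] = (
    10 + 2 * panel["treated"]                   # pre-existing difference between groups
    + 1 * panel["post"]                         # common time trend
    + 0.5 * panel["treated"] * panel["post"]    # the treatment effect we want to recover
    + rng.normal(scale=1, size=len(panel))
)

# The coefficient on treated:post is the difference-in-differences estimate.
did = smf.ols("metric ~ treated * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["city"]}
)
print(f"DiD estimate: {did.params['treated:post']:.3f} "
      f"(p={did.pvalues['treated:post']:.3f})")
```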
Final considerations for the experiment design:
Are you testing multiple metrics? If so, account for that in your analysis. A really common academic answer is the Bonferroni correction. I've never seen anyone use it in real life though, because it is too conservative. A more common way is to control the False Discovery Rate; you can google this, or the book Trustworthy Online Controlled Experiments by Ron Kohavi discusses how to do this (note: this is an affiliate link). A quick sketch is included after this list.
Do any stakeholders need to be informed about the experiment?
Are there any novelty effects or change aversion that could impact interpretation?
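Here is the sketch promised in the multiple-metrics point above: a Benjamini-Hochberg False Discovery Rate correction using statsmodels. The p-values are made-up placeholders for your metric results.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values for several metrics from the same experiment.
p_values = [0.012, 0.045, 0.210, 0.003, 0.650]

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for raw, adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p={raw:.3f}  adjusted p={adj:.3f}  significant={sig}")
```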
Phase 3: The experiment is over. Now what?
After you “run” the A/B test, you now have some data. Consider what recommendations you can make from it. What insights can you derive that translate into actionable steps for the business? Speaking to this will earn you brownie points with the interviewer.
For example, can you think of some useful ways to segment your experiment data to determine whether there were heterogeneous treatment effects?
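For example, here is a minimal sketch of that kind of segmented read-out on made-up data (the column names are my own illustration); keep in mind that segment-level results are exploratory and run into the multiple-comparisons issue discussed earlier.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical experiment results: one row per user with group, segment, and metric.
df = pd.DataFrame({
    "group": rng.choice(["treatment", "control"], size=10_000),
    "platform": rng.choice(["ios", "android", "web"], size=10_000),
    "metric": rng.normal(loc=1.0, scale=0.5, size=10_000),
})

# Compare treatment vs. control within each segment to look for heterogeneous effects.
for platform, segment in df.groupby("platform"):
    treat = segment.loc[segment["group"] == "treatment", "metric"]
    control = segment.loc[segment["group"] == "control", "metric"]
    lift = treat.mean() - control.mean()
    _, p_value = stats.ttest_ind(treat, control, equal_var=False)
    print(f"{platform}: lift={lift:+.3f}, p={p_value:.3f}")
```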
Common follow-up questions, or “gotchas”
These are common questions that interviewers will ask to see if you really understand A/B testing.
Let’s say that you are mid-way through running your A/B test and the performance starts to get worse. It had a strong start but now your success metric is degrading. Why do you think this could be?
A common answer is the novelty effect.
Let’s say that your A/B test has concluded and your chosen p-value cutoff is 0.05. However, your success metric has a p-value of 0.06. What do you do?
Some options are: extend the experiment, or run the experiment again.
You can also say that you would discuss the risk of a false positive with your business stakeholders. It may be that the treatment doesn’t have much downside, so the company is OK with rolling out the feature, even if there is no true improvement. However, this is a discussion that needs to be had with all relevant stakeholders and as a data scientist or product analyst, you need to help quantify the risk of rolling out a false positive treatment.
Your success metric was stat sig positive, but one of your guardrail metrics was harmed. What do you do?
Investigate the cause of the guardrail metric dropping. Once the cause is identified, work with the product manager or business stakeholders to update the treatment such that hopefully the guardrail will not be harmed, and run the experiment again.
Alternatively, see if there is a segment of the population where the guardrail metric was not harmed. Release the treatment to only this population segment.
Your success metric ended up being stat sig negative. How would you diagnose this?
Final notes
I know this is really long but honestly, most of the steps I listed could be an entire blog post on their own. If you don't understand something, I encourage you to do some more research on it, or get the book that I linked above (I've read it three times through myself). Lastly, don't feel like you need to be an A/B testing expert to pass the interview. We hire folks who have no A/B testing experience but can demonstrate a framework for designing A/B tests such as the one I have just laid out. For more advice on how to pass product analytics interviews in tech, check out my other post here. Good luck!