In this interview, Ashit Kumar, User Growth Lead at Spotify, shares his winning tactics for connecting marketing data to experimentation success metrics.
Types of experiments for optimizing the conversion funnel at Spotify
D.D.: To get us started, what type of experiments are you conducting at Spotify with the goal of optimizing the conversion funnel?
A.K.: When it comes to our team, our remit is to optimize the user conversion flows. Since most of Spotify's conversions happen on the web, our team focuses on optimizing those flows, including:
Landing page optimization
One of the key areas of our optimization processes is landing page optimization. A massive chunk of traffic lands on our premium landing pages across different regions and countries. During the last few years, our strategy has been to work closely with the market teams and make sure that landing pages are optimized according to their own regions and their own additional nuances. We make sure that the landing page has all the right information that our subscribers or potential subscribers need.
Churn optimization
What we have also started focusing on more, as the overall subscriber pool shrinks in some of our more mature markets, is the cancellation flow.
For example, one of the past experiments was to add the benefits of the premium plan in the cancellation flow so that users are aware that they would lose these benefits if they canceled their premium plan.
Upsell optimization
In our mature markets, we also focus on making existing subscribers aware of what they gain with higher value plans and try to convert them to those plans.
Many of the Spotify plans map to a user's life cycle. You start as a student, convert to an individual plan, get the duo plan when you have a partner, and upgrade to the family plan when you have a family. These life stages need to be put in front of our users so that they are aware that these plans exist when they need them.
Key metrics for user growth optimization
D.D.: What metrics do you focus on when you assess the success of an experiment?
A.K.: We have a variety of metrics to focus on depending upon the premise of an experiment. Some of those are:
- Conversion rate (User level)
- Activity metrics (DAU / WAU / MAU)
- Cancellation rate
- LTV (lifetime value)
We have also started looking into LTV, or lifetime value, as an average per variant. Our subscribers are very different in their behavior. If you, for example, convert a subscriber on a trial plan, that does not mean the business will get the same amount of revenue as converting them to an individual plan without any trial. So we have started to take some of those nuances into account when we run experiments.
Depending upon the hypothesis of the experiment, we may choose one metric over another. We also have to make sure that we pick one or two good metrics as our primary metrics, and one or two others as guardrail metrics, mostly because when you are trying to get more conversions, you have to ensure that your experiment does not degrade the user experience.
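To make the primary/guardrail split concrete, here is a minimal sketch of what such an analysis could look like in Python with pandas. The column names (`variant`, `converted`, `cancelled`), the sample data, and the threshold are illustrative assumptions, not Spotify's actual setup.

```python
import pandas as pd

# Illustrative experiment export: one row per user (assumed schema).
df = pd.DataFrame({
    "variant":   ["control", "control", "treatment", "treatment", "treatment"],
    "converted": [0, 1, 1, 1, 0],   # primary metric: user-level conversion
    "cancelled": [0, 0, 0, 1, 0],   # guardrail metric: cancellation
})

summary = df.groupby("variant")[["converted", "cancelled"]].mean()
lift = summary.loc["treatment", "converted"] - summary.loc["control", "converted"]
guardrail_delta = summary.loc["treatment", "cancelled"] - summary.loc["control", "cancelled"]

print(f"Conversion lift: {lift:+.1%}")

# Flag the experiment if the guardrail moves in the wrong direction,
# e.g. more cancellations in treatment than in control.
if guardrail_delta > 0.01:  # 1 percentage point tolerance, an arbitrary example threshold
    print("Guardrail breached: cancellations increased in treatment")
```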
How to connect marketing data to experimentation success metrics
D.D.: How do you tie in marketing data to experimentation success metrics?
A.K.: Analyzing our experiment based upon marketing channels is incredibly important for our team, since we primarily focus on optimizing for new subscribers.
In order to bring marketing data into our analysis, we pass the marketing attributes to our experimentation datasets, and then, during analysis, we run a check on the primary marketing channels to see whether they underperform or overperform compared to the overall variant performance.
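As a rough illustration of that per-channel check (a sketch, not Spotify's actual pipeline; column names such as `marketing_channel` are assumed):

```python
import pandas as pd

# Assumed experiment export, enriched with a marketing attribute per user.
df = pd.DataFrame({
    "variant":           ["control", "treatment", "control", "treatment"],
    "marketing_channel": ["paid_search", "paid_search", "social", "social"],
    "converted":         [0, 1, 1, 0],
})

# Overall conversion rate per variant.
overall = df.groupby("variant")["converted"].mean()

# Conversion rate per variant within each marketing channel.
by_channel = df.groupby(["marketing_channel", "variant"])["converted"].mean()

# Compare each channel against the overall variant performance to spot
# channels that under- or over-perform relative to the aggregate result.
deltas = by_channel.sub(overall, level="variant")
print(deltas)
```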
We make sure that the definitions of those channels are always kept up to date via a consistent custom SQL UDF (user-defined function).
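The sketch below mimics what such a shared channel-definition function could look like; the real logic lives in a SQL UDF at Spotify, and the rules and channel names here are purely illustrative.

```python
def categorize_channel(utm_source: str, utm_medium: str) -> str:
    """Map raw UTM parameters to a marketing channel group.

    Illustrative rules only; in practice this logic is centralized
    (e.g. as a SQL UDF) so every team categorizes channels the same way.
    """
    source = (utm_source or "").lower()
    medium = (utm_medium or "").lower()

    if medium in {"cpc", "paid_search"}:
        return "paid_search"
    if medium in {"social", "paid_social"} or source in {"facebook", "instagram"}:
        return "social"
    if medium == "email":
        return "email"
    return "other"


print(categorize_channel("google", "cpc"))  # -> "paid_search"
```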
The importance of data quality
D.D.: We know that marketing data is needed for experiments with the goal of optimizing conversions. Why is the quality of marketing data critical?
A.K.: Data quality should be part of the overall experimentation or analysis lifecycle. The quality of marketing data is, as you may have guessed, extremely important for our tests. We set up unit tests in our pipelines to make sure that the data is automatically tested against different pre-set parameters.
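Here is a minimal sketch of the kind of automated check such a pipeline test might run; the columns, rules, and thresholds are assumptions for illustration, not Spotify's actual tests.

```python
import pandas as pd

def check_marketing_data(df: pd.DataFrame) -> None:
    """Fail loudly if the marketing dataset violates basic quality rules."""
    # Required columns must be present.
    required = {"user_id", "utm_source", "utm_medium", "event_date"}
    missing = required - set(df.columns)
    assert not missing, f"Missing columns: {missing}"

    # No duplicate users within a daily partition (illustrative rule).
    assert not df.duplicated(subset=["user_id", "event_date"]).any(), "Duplicate rows found"

    # UTM fields should not be largely empty.
    null_share = df["utm_source"].isna().mean()
    assert null_share < 0.05, f"utm_source is null for {null_share:.1%} of rows"

# Example: run the check on a tiny in-memory sample.
sample = pd.DataFrame({
    "user_id": [1, 2],
    "utm_source": ["google", "newsletter"],
    "utm_medium": ["cpc", "email"],
    "event_date": ["2024-01-01", "2024-01-01"],
})
check_marketing_data(sample)
```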
Our team is responsible for making sure that we have instrumentation in place for data collection and that we only collect data where we have explicit user consent. It is really important to ensure that the data we collect is reliable enough for us to use and our over 150 stakeholders across Spotify to make decisions on.
When you are optimizing your landing pages, a massive chunk of the traffic to those pages comes via marketing channels, which in turn means that for your experiments to work, they must be aligned with your overall marketing strategy.
Usually, different teams follow different naming conventions, which makes identification of those channels more difficult than it has to be.
Consistency is key when you want to analyze on the basis of the marketing channels since without it, you’d have to create really complicated business logic to categorize them.
2 key layers of data quality
D.D.: What aspects of data quality do you consider important for your experiments?
Consistency
A.K.: Besides accuracy, consistency is imperative when you are trying to categorize marketing data under different channel groups, for instance.
It is important for us that all of our marketing teams are aligned on using consistent UTM campaign parameters.
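One way to enforce that alignment is to validate campaign parameters against a shared pattern before they are used. The sketch below assumes a hypothetical naming convention (market_channel_campaignname, lowercase with underscores), not Spotify's actual one.

```python
import re

# Assumed convention: market_channel_campaignname, lowercase with underscores.
UTM_CAMPAIGN_PATTERN = re.compile(r"^[a-z]{2}_[a-z]+_[a-z0-9]+$")

def is_valid_utm_campaign(value: str) -> bool:
    """Check whether a utm_campaign value follows the shared naming convention."""
    return bool(UTM_CAMPAIGN_PATTERN.match(value))

print(is_valid_utm_campaign("de_social_summerpromo"))  # True
print(is_valid_utm_campaign("Summer Promo DE"))        # False
```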
Timeliness
Apart from that, we also set up alerts via PagerDuty, which inform us if there is any kind of lateness in those datasets, enabling early debugging and fixing of those problems.
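A data freshness check behind such an alert could look roughly like the sketch below; the four-hour threshold and the `trigger_pagerduty_alert` helper are hypothetical placeholders for whatever alerting integration is actually in place.

```python
from datetime import datetime, timedelta, timezone

def trigger_pagerduty_alert(message: str) -> None:
    """Hypothetical placeholder for a real PagerDuty (or similar) integration."""
    print(f"ALERT: {message}")

def check_dataset_freshness(latest_partition: datetime, max_lag_hours: int = 4) -> None:
    """Raise an alert if the newest partition of a dataset is older than expected."""
    lag = datetime.now(timezone.utc) - latest_partition
    if lag > timedelta(hours=max_lag_hours):
        trigger_pagerduty_alert(
            f"Marketing dataset is late: last partition is {lag} old "
            f"(threshold {max_lag_hours}h)"
        )

# Example: pretend the newest partition landed six hours ago.
check_dataset_freshness(datetime.now(timezone.utc) - timedelta(hours=6))
```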
Communicating marketing data insights
D.D.: An essential part of successful experiments is activating the insights. You play a key role at Spotify in enabling other teams to optimize the customer journey.
How do you use and communicate marketing data insights to ensure that data helps drive the right decisions?
A.K.: As a centralized optimization team, we have to make sure that we disseminate the learnings from experiments in the most democratized way possible, so our team maintains an open backlog that anyone can submit an idea to. Those ideas are then prioritized the same way as everyone else's.
Similarly, after we run experiments, all of the results, regardless of whether the experiments were successful or not, are pushed to our knowledge base and tagged with different meta tags describing what the experiment idea related to, for example: a specific page optimization, a region or group of markets, campaigns, the relevant stakeholder teams, etc. Using these meta tags, anyone looking at the knowledge base can easily filter and find the insights they are looking for.
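A very small sketch of how such tagged results could be filtered, assuming a simple in-memory representation (the tag names and structure are illustrative, not the actual knowledge base):

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentResult:
    name: str
    successful: bool
    tags: set[str] = field(default_factory=set)

knowledge_base = [
    ExperimentResult("Cancellation flow benefits", True, {"cancellation_flow", "de"}),
    ExperimentResult("Landing page hero copy", False, {"landing_page", "latam"}),
]

# Find every experiment, successful or not, tagged with a given topic.
landing_page_results = [r for r in knowledge_base if "landing_page" in r.tags]
print([r.name for r in landing_page_results])
```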
D.D.: What 3 tips would you share with fellow analysts to get more value from their marketing data and drive successful experiments?
A.K.: As someone once said, you can’t optimize something you cannot measure. Similarly, you can’t measure using any kind of data unless it is of high quality.
- Make sure your data is of the highest quality possible
- Automate all repetitive processes in your analysis/experimentation
- Make sure that the results of those experiments are accessible to anyone in your organization
About Ashit Kumar:
Ashit Kumar is a User Growth Lead at Spotify. In his day-to-day, he manages data instrumentation, workflows/pipelines, and insight generation for all A/B tests run within his team. While he is very hands-on with technical and marketing tools, he often works with business stakeholders all around Spotify to help them understand the value of experimentation and behavioral data. When he is not thinking about data, he is busy exploring emerging technologies and economic theories.