Ashit Kumar is a User Growth Lead at Spotify. In his day-to-day, he manages data instrumentation, workflows/pipelines, and insight generation for all A/B tests run within his team. While he is very hands-on with technical and marketing tools, he often works with business stakeholders all around Spotify to help them understand the value of experimentation and behavioral data. When he is not thinking about data, he is busy exploring emerging technologies and economic theories.
A.K.: When it comes to our team, our remit is to optimize user conversion flows. Since most of Spotify's conversions happen on the web, our team focuses on optimizing those web flows, a few of which I'll walk through here.
One of the key areas of our optimization processes is landing page optimization. A massive chunk of traffic lands on our premium landing pages across different regions and countries. During the last few years, our strategy has been to work closely with the market teams and make sure that landing pages are optimized according to their own regions and their own additional nuances. We make sure that the landing page has all the right information that our subscribers or potential subscribers need.
As the overall subscriber pool shrinks in some of our more mature markets, we have also started focusing more on the cancellation flow.
For example, one past experiment added the benefits of the Premium plan to the cancellation flow, so that users are aware of what they would lose if they canceled.
In our mature markets, we also focus on making existing subscribers aware of what they gain with higher value plans and try to convert them to those plans.
Many of the Spotify plans follow the user life cycle. You start as a student, you convert to an individual plan, you get the Duo plan when you have a partner, and you upgrade to the Family plan when you have a family. These life stages need to be put in front of our users so that they are aware these plans exist when they need them.
A.K.: We have a variety of metrics to focus on, depending on the premise of an experiment.
We have also started looking into LTV, or lifetime value, averaged per experiment variant. Our subscribers differ a lot in their behavior. If you, for example, convert a subscriber on a trial plan, that doesn't mean the business will get the same revenue as converting a subscriber to an individual plan without any trial. So we have started taking some of those nuances into account when we run experiments.
Depending on the hypothesis of the experiment, we may choose one metric over another. We also make sure to pick one or two good metrics as our primary metrics, and one or two others as guardrail metrics. Mostly because when you are trying to get more conversions, you have to ensure that your experiment does not degrade the user experience.
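As a rough sketch of that primary-plus-guardrail pattern (all counts, metric names, and the 1.96 threshold here are illustrative, not Spotify's actual data or decision rule), a two-proportion z-test can drive both checks:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # z-statistic for the difference in conversion rates (B minus A),
    # using the pooled standard error.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_b / n_b - conv_a / n_a) / se

# Primary metric: subscription conversions (hypothetical counts).
z_primary = two_proportion_z(1200, 50000, 1290, 50000)

# Guardrail metric: flow completion rate. The treatment must not
# significantly *decrease* it, so only the lower side matters.
z_guardrail = two_proportion_z(48000, 50000, 47850, 50000)

# Ship only if the primary metric wins and the guardrail holds.
ship = z_primary > 1.96 and z_guardrail > -1.96
```

The guardrail check is deliberately one-sided: a treatment that merely fails to improve the guardrail is fine, one that significantly hurts it is not.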
A.K.: Analyzing our experiments by marketing channel is incredibly important for our team, since we primarily focus on optimizing for new subscribers.
To incorporate marketing data into our analysis, we pass the marketing attributes into our experimentation datasets, and then, during analysis, we check the primary marketing channels to see whether any of them underperform or overperform compared to the overall variant performance.
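A minimal sketch of that per-channel check (the channel names, rows, and 10-point threshold are made up; in practice this would run over the joined experimentation datasets):

```python
from collections import defaultdict

# Hypothetical joined rows: (marketing_channel, variant, converted 0/1).
rows = [
    ("paid_search", "treatment", 1), ("paid_search", "treatment", 0),
    ("paid_social", "treatment", 0), ("paid_social", "treatment", 1),
    ("paid_search", "control", 0),   ("paid_social", "control", 1),
]

def conversion_by_channel(rows, variant):
    # channel -> conversion rate, for the chosen variant only.
    counts = defaultdict(lambda: [0, 0])  # channel -> [conversions, users]
    for channel, var, converted in rows:
        if var == variant:
            counts[channel][0] += converted
            counts[channel][1] += 1
    return {ch: conv / n for ch, (conv, n) in counts.items()}

per_channel = conversion_by_channel(rows, "treatment")
variant_rate = sum(c for _, v, c in rows if v == "treatment") / sum(
    1 for _, v, _ in rows if v == "treatment")

# Flag channels that deviate from the variant's overall rate.
outliers = {ch: r for ch, r in per_channel.items()
            if abs(r - variant_rate) > 0.10}  # illustrative threshold
```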
We make sure that the definitions of those channels are always kept up to date via a single, consistent SQL UDF (user-defined function).
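The point of that shared UDF is that every pipeline and analysis uses one definition, so channel labels never drift between teams. Their real implementation is a SQL UDF; this Python sketch with invented rules just illustrates the idea:

```python
def channel_group(utm_source, utm_medium):
    # One shared definition of marketing channel groups. The mapping
    # rules below are illustrative, not Spotify's actual logic.
    source = (utm_source or "").strip().lower()
    medium = (utm_medium or "").strip().lower()
    if medium in ("cpc", "ppc", "paid_search"):
        return "paid_search"
    if medium in ("paid_social", "paidsocial"):
        return "paid_social"
    if medium == "email":
        return "email"
    if not source and not medium:
        return "direct"
    return "other"
```

Centralizing this in one function (or UDF) means a new medium value only has to be classified once, instead of in every team's ad-hoc query.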
A.K.: Data quality should be part of the overall experimentation and analysis lifecycle. The quality of marketing data is, as you may have guessed, extremely important for our tests. We set up unit tests in our pipelines to make sure the data is automatically tested against pre-set parameters.
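A sketch of the kind of pre-set checks such a pipeline test might run (the field names and rules are hypothetical, not their actual schema):

```python
def run_quality_checks(rows, allowed_channels):
    # Returns a list of failure messages; an empty list means the
    # load passed every pre-set check.
    failures = []
    if not rows:
        failures.append("dataset is empty")
    for i, row in enumerate(rows):
        if not row.get("user_id"):
            failures.append(f"row {i}: missing user_id")
        if row.get("channel") not in allowed_channels:
            failures.append(f"row {i}: unknown channel {row.get('channel')!r}")
    return failures
```

Running checks like these on every load means a broken upstream feed fails loudly in the pipeline instead of silently skewing an experiment readout.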
Our team is responsible for making sure that we have instrumentation in place for data collection and that we only collect data where we have explicit user consent. It is really important that the data we collect is reliable enough for us, and for our more than 150 stakeholders across Spotify, to make decisions on.
When you are optimizing your landing pages, a massive chunk of the traffic coming to those landing pages is via marketing channels, which in turn means that for your experiments to work, they must be aligned with your overall marketing strategy.
Usually different teams follow different naming conventions, which makes identifying those channels more difficult than it has to be.
Consistency is key when you want to analyze by marketing channel, since without it you'd have to build really complicated business logic to categorize them.
A.K.: Besides accuracy, consistency is imperative when you are trying to categorize marketing data under different channel groups, for instance.
It is important for us that all of our marketing teams are aligned when it comes to making sure that the UTM campaign parameters they use are consistent.
Apart from that, we also set up alerts via PagerDuty that inform us if any of those datasets arrive late, which lets us debug and fix those problems early.
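A minimal sketch of the freshness check behind such an alert (the 6-hour SLA and function name are invented; in production a late result would open a PagerDuty incident rather than just return a flag):

```python
from datetime import datetime, timedelta, timezone

def dataset_is_late(last_partition_ts, sla_hours=6, now=None):
    # True when the newest partition is older than the SLA window.
    # The SLA value here is illustrative only.
    now = now or datetime.now(timezone.utc)
    return now - last_partition_ts > timedelta(hours=sla_hours)
```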
A.K.: As a centralized optimization team, we have to make sure that we disseminate the learnings from experiments in the most democratized way possible, so our team maintains an open backlog of requests that anyone can submit an idea to. Those ideas are then prioritized alongside everyone else's.
Similarly, after we run experiments, all of the results, regardless of whether the experiments were successful, are pushed to our Knowledge Base tagged with meta tags describing what the experiment related to: for example, a specific page optimization, a region or group of markets, a campaign, or the relevant stakeholder teams. Using these meta tags, anyone browsing the Knowledge Base can easily filter and find the insights they are looking for.
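That tag-based lookup can be sketched in a few lines (the experiment names and tags below are invented, not real Spotify entries):

```python
# Hypothetical Knowledge Base entries with their meta tags.
experiments = [
    {"name": "cancel-flow-benefits", "tags": {"cancellation_flow", "emea"}},
    {"name": "landing-hero-copy",    "tags": {"landing_page", "japan"}},
    {"name": "landing-price-test",   "tags": {"landing_page", "emea"}},
]

def find_insights(experiments, *tags):
    # Return the experiments that carry every requested meta tag.
    wanted = set(tags)
    return [e["name"] for e in experiments if wanted <= e["tags"]]
```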
A.K.: As someone once said, you can't optimize what you cannot measure. Similarly, you can't measure anything with your data unless that data is of high quality.