A/B testing is a valuable way to measure the true impact of recommendation modules in your emails. By creating two versions of an email and comparing their performance, you can determine which elements drive more engagement and conversion. This approach helps you make data-informed decisions about including modules like “Top selling,” “Contextual,” “Alternative,” or “Complementary” products.
How it works
An A/B test involves sending two variations of the same email to randomised segments of your audience. One version includes a specific recommendation module, such as “Top selling” or “Contextual”, while the other does not. After the campaign is sent, you compare the results based on performance metrics like click-through rate (CTR) and conversion rate (CVR).
For example, if you’re unsure whether to highlight “Alternative” or “Complementary” products, you can run a test that sends each version to one half of a split audience. This shows you which type of recommendation is more effective for that particular campaign or audience segment.
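To illustrate the splitting step, here is a minimal Python sketch of a randomised 50/50 split. This is purely illustrative (the platform performs this randomisation for you when you set up a split test); the function name, seed, and addresses are hypothetical.

```python
import random

def split_audience(recipients, seed=42):
    """Randomly split a recipient list into two equal-sized test groups.

    Illustrative only: email platforms normally handle this split
    automatically when you configure a split test.
    """
    rng = random.Random(seed)  # fixed seed keeps the example reproducible
    shuffled = recipients[:]   # copy so the original list is untouched
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    # Group A receives the version with the recommendation module,
    # group B receives the version without it.
    return shuffled[:mid], shuffled[mid:]

recipients = [f"user{i}@example.com" for i in range(10)]
group_a, group_b = split_audience(recipients)
```

Because the assignment is random, each group is a representative sample of the audience, so any difference in results can be attributed to the email variant rather than to who received it.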
How to run an A/B test with recommendation modules
- Create your original email using the desired recommendation module (e.g. “Top selling”).
- Duplicate the email and remove the connected recommendation module from the copy.
- Set up a split test by assigning the two versions to randomised segments of your audience.
- Send out the campaign and let the system distribute the emails evenly.
- Analyse the results after the send using email-level performance metrics such as CTR and CVR.
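To make the final analysis step concrete, here is a minimal sketch of how CTR and CVR can be computed and compared for the two variants. All figures are hypothetical, and the exact metric definitions (e.g. whether CVR is measured per click or per delivery) may differ from your platform's reporting.

```python
def ctr(clicks, delivered):
    # Click-through rate: clicks per delivered email
    return clicks / delivered

def cvr(conversions, clicks):
    # Conversion rate: conversions per click (one common definition)
    return conversions / clicks

# Hypothetical results for the two variants of the campaign
variant_a = {"delivered": 5000, "clicks": 400, "conversions": 48}  # with module
variant_b = {"delivered": 5000, "clicks": 310, "conversions": 31}  # without

for name, v in (("A (with module)", variant_a), ("B (without)", variant_b)):
    print(f"{name}: "
          f"CTR={ctr(v['clicks'], v['delivered']):.1%}, "
          f"CVR={cvr(v['conversions'], v['clicks']):.1%}")
```

If one variant shows consistently higher CTR and CVR across a sufficiently large audience, that is evidence the recommendation module is adding value; with small audiences, treat small differences with caution.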
This testing method gives you actionable insights into which recommendation strategies are most effective, helping you optimise future campaigns with confidence.