
    A/B testing best practices: how to create campaigns that convert

    Last updated: October 7, 2024

    While marketing decisions are becoming more grounded in observable customer data, it can still be difficult to predict exactly what copy or design will resonate with your audience. (Who hasn’t stressed out over a subject line?) But if you’re depending on email marketing for ad revenue and/or product sales, you can’t afford too many misfires.  

    With A/B testing, you can automatically test variations in subject lines, email copy, and other elements against a control. This allows you to isolate the email elements that drive conversions, apply them within the same campaign, and replicate them in future sends. Repeat over multiple trials and you can optimize your emails for maximum revenue, retention, and growth.

    In this post, we’ll walk through the benefits of A/B testing, then present best practices for building actionable, predictive A/B tests.  


    What is A/B testing in email marketing? 

    A/B testing is a user experience experiment that compares a change in copy, design or language against a control. When it’s used in email marketing, two emails are sent to the list: a test email and a control.   

    These two messages are identical save for the variable being tested (usually the subject line, content, or call to action). The test and control messages are each sent to a pre-set percentage of the list. The email service provider then automatically compares the results to determine a “winner” and sends the winning version to the rest of the list.
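    If it helps to see the mechanic end to end, here’s a minimal Python sketch of the flow described above. It’s purely illustrative: the send callable, the field names, and the 20% test fraction are hypothetical placeholders, not any particular email service provider’s API.

    ```python
    import random

    def run_ab_test(recipients, send, test_fraction=0.2, metric="unique_opens"):
        """Illustrative A/B flow: send the control (A) and the variant (B) to a
        small test slice, then send the winning version to everyone else."""
        random.shuffle(recipients)
        test_size = int(len(recipients) * test_fraction)
        test_group, remainder = recipients[:test_size], recipients[test_size:]

        # Split the test slice evenly between the control and the variant.
        half = test_size // 2
        results_a = send("control", test_group[:half])  # returns engagement stats
        results_b = send("variant", test_group[half:])  # after the test window

        # Pick the winner on the chosen metric (e.g., unique opens or clicks)
        # and send that version to the rest of the list.
        winner = "control" if results_a[metric] >= results_b[metric] else "variant"
        send(winner, remainder)
        return winner
    ```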

    What are the benefits of A/B testing?   

    Not only can you test your hunches on a live audience, but you can also automatically adjust your strategy in favor of the winning option. Say that you’re experimenting with wittier subject lines, but your audience doesn’t respond as well as you thought. If you’re not running an A/B test, your whole campaign might underperform, which can have negative ramifications for your business.  

    But if you run an A/B test by subject line, your email service provider will send the email with your experimental pithy subject line to a small percentage of your audience, and your control subject line to another portion of your list.  

    After a predetermined amount of time, the email service provider automatically designates the email with the better-performing subject line as the winner, then sends it to the rest of your audience. So the penalty for picking the wrong subject line or copy variation is lower.

    That doesn’t just protect your revenue and email metrics; it also encourages more creativity and brainstorming over time. A/B testing also provides an empirical record of what works and what doesn’t, which can help you justify marketing investments to leadership.

    What variables can I test?   

    You can test the following email elements via A/B testing:  

    • From name 
    • Subject line  
    • Email layout or design  
    • Email copy 
    • CTA button text, color or design  

    On Omeda, you can choose whether the winner is determined by the number of unique opens or the number of unique clicks each version generates.

    Best practices for A/B testing   

    1. Test one variable at a time

    If your emails are underperforming, you might be tempted to change everything all at once. But testing multiple variables at once makes it much more difficult to identify which individual element is actually driving any lift in performance.

    So instead of testing your subject line and your CTA buttons at once, consider testing multiple subject line variations in one test, then multiple CTA button variations in another test. (On Omeda, you can conduct an A/B test with up to 5 variations, also called splits.) This way, you can quickly isolate your most impactful tactics without complicating attribution later on.  
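    Under the hood, a multi-split test starts with an even random partition of the test audience. Here’s a minimal sketch of that assignment step, assuming your platform handles the sending and measurement (the cap of 5 mirrors Omeda’s limit; the function itself is generic):

    ```python
    import random

    def assign_splits(test_group, num_splits):
        """Randomly partition a test audience into near-equal splits."""
        if not 2 <= num_splits <= 5:
            raise ValueError("expected between 2 and 5 splits (Omeda allows up to 5)")
        shuffled = random.sample(test_group, len(test_group))
        # Recipient i lands in split i mod num_splits, so group sizes differ by at most 1.
        return [shuffled[i::num_splits] for i in range(num_splits)]
    ```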

    2. Test high-impact, low-effort elements first   

    Running multiple tests in a short period of time can be time-intensive and confusing. So as you get started, focus on elements that are critical to your email’s success, but can be added and removed easily within your email service provider. This includes:  

    • Subject line
    • Preview text
    • CTA button text

    This way, you can duplicate your control message within your ESP, then quickly change your test variable, instead of designing a whole new email from scratch.  

    3. Prioritize emails that generate revenue  

    As mentioned, even small improvements in conversion rates can net huge increases in revenue, retention and customer lifetime value over time. For that reason, focus your A/B testing on the messages most likely to drive conversions, like welcome and onboarding emails, your newsletter, or other high-priority promotional emails.   

    4. Run tests on a sizable audience   

    Imagine that you’re conducting a survey about your audience’s favorite candy. You’re expecting most of your audience to name classics like M&Ms and Reese’s, but then your first five respondents give you obscure choices like 100 Grand or Baby Ruth.   

    If you limited your survey to those five people, it wouldn’t reflect the true preferences of your population; your results would reflect little more than random chance. But if you asked 1,000 more people, your results would gradually converge toward the actual attitudes of your broader population.

    This example is obviously exaggerated, but it demonstrates the importance of sample size and statistical significance.

    In order to generate insights that actually predict success, you need to ensure that your results are statistically significant, i.e., that any differences in performance are unlikely to be explained by random chance alone.

    Generally speaking, the bigger your test population, the more likely that your findings will reach this threshold. The more people in your audience, the less impact any one outlier has on the average. 

    So you’re best served running your A/B test on a list size of 1,000 recipients or more (and if you want more specific guidance, use these resources to find your ideal test size and to determine whether your results are significant).  
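    If you’d rather sanity-check significance yourself than take a dashboard’s word for it, the standard tool for comparing two open or click rates is a two-proportion z-test. Here’s a self-contained sketch in plain Python; the sample numbers and the conventional 0.05 threshold are illustrative, not Omeda settings:

    ```python
    from math import erf, sqrt

    def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
        """Two-sided two-proportion z-test: is the difference in open rates
        between variants A and B likely to be more than random chance?"""
        p_a, p_b = opens_a / sent_a, opens_b / sent_b
        pooled = (opens_a + opens_b) / (sent_a + sent_b)
        se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Example: 120/500 unique opens for A vs. 150/500 for B.
    z, p = two_proportion_z_test(120, 500, 150, 500)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05, so the difference is significant
    ```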

    5. Give your audience enough time to respond 

    Once you’ve created your A/B test emails and designed your test, you need to decide its duration: how long should the test run before a winner is declared?

    For a test size of 1,000 people, we recommend waiting about 2 hours. This gives your audience more time to engage with each message, so it’s more likely that your sample size will be large and representative enough to generate actionable results. 

    6. Evaluate your results — and be patient  

    If your email service provider supports A/B testing, you’ll be able to track your results immediately and isolate the results for your control group and each of your splits.

    Here’s how it works on Omeda: when you review your email results, you’ll notice two deployments. The main deployment went to most of your audience; the sample/test deployment contained the A/B split and was sent to your test audience. The sample deployment has “S” appended to the end of its tracking ID and “-sample” appended to its deployment name. From there, you can compare performance across both versions and get one step closer to perfecting your email strategy.
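    When pulling results out of reports or exports, those suffixes make it easy to separate the sample deployment from the main send. Here’s a tiny illustrative helper; the identifiers in the example are made up, but the suffix rules are the ones described above:

    ```python
    def is_sample_deployment(tracking_id, deployment_name):
        """Flag an Omeda sample/test deployment: its tracking ID ends in "S"
        and its deployment name ends in "-sample"."""
        return tracking_id.endswith("S") and deployment_name.endswith("-sample")

    # Hypothetical identifiers, for illustration only:
    print(is_sample_deployment("OMD0012345S", "fall-promo-sample"))  # True
    print(is_sample_deployment("OMD0012345", "fall-promo"))          # False
    ```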
