A Split Test (known in Meta as an A/B Test) lets you compare the performance of your ads by changing a specific aspect, such as the creative, description or placements.
Here is how to run a Split Test with Smartly:
- At the top right of any page on the Smartly website, click the Home button.
- Click Ad Studies.
- At the top right, click Create Ad Study and select Split test.
- Enter the name, description and duration of your split test.
- Select the metric you want to measure in the study (CTR, CR or CPA).
- Select the level at which you want to run the study (Account, Campaign or Ad Set) and select which Accounts, Campaigns or Ad Sets belong to each study cell.
Considerations during a split test
To ensure that your split test is successful and meaningful, keep a few things in mind.
Make sure that each study cell spends equally. Spend affects your cost per action (CPA): if a study cell spends significantly more or less than the others, the results are not comparable, because the difference in CPA may be driven by the difference in spend rather than by the variable you are testing.
It is also best to avoid making changes during the test – unless, of course, the change is the one thing you want to test. For the same reason, avoid using optimization strategies, as they make changes automatically. In some cases changes are acceptable, but as a general rule they should be avoided.
Do's and don'ts during the ad study
This list depends on what the objective of the test is. Here are some general guidelines depending on whether it's a test related to audiences, creatives or optimization.
Audience test:
- Do: You can increase bids and budgets, and add or remove creatives, as long as the same changes are made across all study cells.
- Don't: Do not update the audience – an updated audience is effectively a new audience, and hence a new test.
Creative test:
- Do: You can increase bids and budgets as long as the same changes are made across all study cells.
- Don't: Do not update the creatives, as this changes the entire test setup.
Optimization or other tests:
Whether you are comparing two optimization goals, or your current way of managing bids and budgets against Smartly optimization features, or something else entirely, these tests should be evaluated case by case. Discuss do's and don'ts separately with your account manager.
Limitations during the split test
Once the split test has started, you can:
- Rename the ad study or ad study cells
- Add new campaigns or ad sets into the ad study cells
The following actions should not be possible according to Meta, but we have found that they can be done:
- Remove campaigns from the ad study cells
- Remove ad study cells
In theory, you could also add new ad study cells to the ad study, but you cannot change the percentages of existing cells, and the sum of percentages cannot exceed 100%.
Pay extra attention when adding or removing campaigns or ad sets from an active ad study: the audience split is lifted from removed campaigns, and their ads start showing to 100% of the targeted population. You can also extend the duration of the ad study by updating the end time, as long as the study has not yet ended.
However, while the ad study is running, you cannot:
- Change the percentages of the existing ad study cells
- Add or modify objectives in a lift study
End a split test
An ad study ends automatically when the end time is reached. If you want to extend the duration of your Facebook split test, you can edit the end time to some future date while the test is still running.
Note: A split test cannot be reactivated once it has ended.
To end an ad study immediately:
- In the main navigation, click Ad Studies.
- Find the split test you want to end.
- Under Actions, click the edit icon.
- Click End Ad study.
Allow the study to run until there are enough conversions
If Campaign A received 14 conversions and Campaign B only 10, Campaign A is not necessarily better: the difference might be due to random variation instead of an underlying true difference in performance. The only way to tell the two apart is to collect enough data.
When you create an ad study in Smartly, you will automatically see an estimate on how much data you are likely to need. When the ad study is running, the statistical significance calculator will indicate if the ad study can be stopped, or whether you still need more conversions for a statistically significant result.
Most people tend to underestimate the magnitude of random variation. A good rule of thumb is that a difference smaller than 2·√CV is not statistically significant (where CV is the number of conversions). For example, if one campaign receives 30 conversions and the other 37, the difference is not statistically significant (7 < 2·√30 ≈ 11). The second campaign would need at least 41 conversions before the difference would be even close to statistically significant – and that would require it to be almost 40% better than the first.
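The rule of thumb above can be sketched as a small helper. This is only an illustration of the 2·√CV heuristic, not the actual calculator Smartly uses; the function name is hypothetical, and it assumes CV is taken as the smaller of the two conversion counts, matching the article's worked example.

```python
import math

def rule_of_thumb_significant(conversions_a, conversions_b):
    """Heuristic check: a difference smaller than 2 * sqrt(CV) is
    unlikely to be statistically significant. CV is taken here as the
    smaller of the two conversion counts (an assumption, matching the
    article's example of comparing against 2 * sqrt(30))."""
    threshold = 2 * math.sqrt(min(conversions_a, conversions_b))
    return abs(conversions_a - conversions_b) >= threshold

# 30 vs 37 conversions: 7 < 2 * sqrt(30) ≈ 10.95 → not significant
rule_of_thumb_significant(30, 37)  # False

# 14 vs 10 conversions: 4 < 2 * sqrt(10) ≈ 6.32 → not significant
rule_of_thumb_significant(14, 10)  # False

# 30 vs 41 conversions: 11 ≥ 2 * sqrt(30) ≈ 10.95 → possibly significant
rule_of_thumb_significant(30, 41)  # True
```

This is only a screening heuristic; for a real decision, rely on the statistical significance calculator shown in Smartly while the ad study is running.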
Run the ad study at least one week or the duration of your sales cycle
Also, just because you have enough conversions does not mean that the test has been running long enough. You should run the study long enough for the ads to have an effect. As a rule of thumb, the ad study duration should be as long as your sales cycle. Performance can also differ between weekdays, especially between working days and the weekend. By running the ad study for at least one full week, you get a better idea of its performance.
FAQ
Does the ad set reach estimate reflect audience size with or without an ad study?
Without — the reach estimate always shows how big the audience would be without the ad study.
Will the cells have identical delivery?
Unfortunately, not always. Facebook splits the audience according to your split settings, but the ads within the campaigns still go through the auction like any other campaign. Therefore, through randomness, one split cell can get slightly more delivery than the other.
What happens to the audience when the ad study ends?
When the ad study ends, the audience split no longer applies and both campaigns are shown to the entire audience. Note that this may lead to campaigns having overlapping audiences. Typically, you want to pause one or both campaigns after the ad study ends.