- For more information on split tests, see Split Tests.
In split tests (also known as A/B tests), Meta splits your target audience into randomized, non-overlapping segments, so each user is only shown ads from one test cell. This lets you compare campaigns' performance against each other. After the test, you measure the results of each test cell using the standard Facebook conversion attribution model to see which test cell drove the cheapest conversions, the best CTR, or the highest ROAS.
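Meta performs this segmentation on its side, but a minimal sketch of what randomized, non-overlapping assignment means could look like the following (the hashing scheme, cell names, and function are illustrative assumptions, not Meta's actual implementation):

```python
import hashlib

# Illustrative only: Meta performs the actual split server-side.
CELLS = ["cell_a", "cell_b"]  # hypothetical test cell names

def assign_cell(user_id: str, study_id: str, cells=CELLS) -> str:
    """Hash the user together with the study so each user always
    lands in exactly one cell for the duration of the study."""
    digest = hashlib.sha256(f"{study_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(cells)
    return cells[bucket]

# Every user gets exactly one cell, so the cells never overlap.
print(assign_cell("user-123", "study-42"))  # e.g. "cell_b"
```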
Note: Split tests are created in the Ad Studies section, which you can reach from any Smartly page by clicking the Home button in the upper left-hand corner of your screen. To create a split test:
- Click the Home button and select "Ad Studies"
- On the right-hand side of the screen, click on "Create Ad Study"
- Click on "Split test"
Goal setting
Make sure you have a clear goal for your Facebook split test. The easiest way to do this is to formulate a clear question that you want the test to answer. For example, the following questions lead to two different tests:
- Should I run a mix of creative types: link ads, videos, and collection ads?
- Which creative type should I put more resources into when developing new creative concepts: link ads, videos, or collection ads?
Budgeting
When creating a split test, click "Advanced settings" to open the budgeting view. The tool calculates how many conversions and how large a budget you need, based on your CTR, conversion rate (CR), or CPA.
- Note that reliably detecting very small differences requires a large number of conversions and can therefore be costly compared to the potential improvement in performance.
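To illustrate why small differences are expensive to detect, here is a rough sample-size and budget calculation for comparing two conversion rates. The input figures, significance level, and power are assumptions for the example; Smartly's built-in calculator remains the authoritative tool:

```python
from scipy.stats import norm

# Assumed example inputs -- replace with your own figures.
cr_control = 0.020       # baseline conversion rate (CR) from clicks
relative_lift = 0.10     # smallest lift you want to detect (10 %)
cpc = 0.50               # average cost per click, in your currency
alpha, power = 0.05, 0.80

cr_test = cr_control * (1 + relative_lift)
z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)

# Clicks needed per cell for a two-proportion z-test.
variance = cr_control * (1 - cr_control) + cr_test * (1 - cr_test)
clicks_per_cell = (z_a + z_b) ** 2 * variance / (cr_control - cr_test) ** 2

conversions_per_cell = clicks_per_cell * cr_control
budget_per_cell = clicks_per_cell * cpc

print(f"~{clicks_per_cell:,.0f} clicks, "
      f"~{conversions_per_cell:,.0f} conversions, "
      f"~{budget_per_cell:,.0f} budget per cell")
```

With these example numbers, detecting a 10% relative lift already requires on the order of 80,000 clicks and 1,600 conversions per cell, which is why testing for small improvements can cost more than the improvement is worth.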
Important points to know before running a split test
Start by reading general Best practices for ad studies, including:
- Make sure that your conversion events are working properly
- Use newly created or cloned campaigns
- Make sure you do not have other campaigns overlapping with the test campaigns
- Don't use the same posts
- Set yourself a reminder before the test ends
- AA/BB testing: Don't do it!
Expect a hit in performance: Whenever you're doing Facebook split testing, the audience will be split, likely resulting in a dip in performance KPIs. Therefore, any comparisons should be made between the study cells and not against, for example, historical performance.
Ideally, there should be only one difference between study cells: This is the only way to interpret the results reliably. When the study cells differ in multiple ways (e.g. both creative types and bidding), you cannot know for sure how each of these differences affects the results.
For example, if you want to test whether manual bidding is better than automatic bidding, create two campaigns that are identical except that one uses manual bidding and the other uses automatic bidding. When you observe a difference, you will know exactly what caused it.
It is easy to set up campaigns like this in Smartly: first create one campaign, then clone it, and change just the one thing you want to test in the cloned campaign.
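Conceptually, the cloned campaign should end up differing from the control in exactly one field. A minimal sketch of that idea (the campaign fields are illustrative placeholders, not Smartly's or Meta's actual schema):

```python
import copy

# Illustrative campaign spec -- field names are assumptions for this sketch.
control_campaign = {
    "name": "Prospecting - control",
    "audience": "lookalike_1pct",
    "creatives": ["video_a", "link_ad_b"],
    "bidding": "automatic",
}

# Clone the control and change only the variable under test.
test_campaign = copy.deepcopy(control_campaign)
test_campaign["name"] = "Prospecting - test"
test_campaign["bidding"] = "manual"   # the single difference being tested

# Everything except the tested variable (and the name) stays identical.
assert {k: v for k, v in control_campaign.items() if k not in ("name", "bidding")} \
    == {k: v for k, v in test_campaign.items() if k not in ("name", "bidding")}
```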
Understand what kind of changes you can make during the test: The key is to treat all the study cells equally and not make changes that can skew the interpretation of results. For example, if you are running a test comparing two different target audiences, both the ad set targeting the control audience and the ad set targeting the test audience should contain exactly the same creatives. If you add creatives during the test, you should add the same creatives to both ad sets.