"Ad study" is the umbrella term for split tests and lift tests for Meta campaigns.
Ad studies allow you to test your campaign performance in a scientifically rigorous way. Keeping the test scientifically and statistically sound is handled automatically by Smartly — all you have to do is decide what to test.
Ad testing is based on Randomized Controlled Trials (RCTs): your target audience is randomly divided into two or more equal parts, which are treated with different ads (or not treated at all), and then we measure the differences in outcome.
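To make the idea concrete, the sketch below shows one common way a randomized split can be implemented: hashing a user ID together with a study ID and bucketing the result. This is purely a hypothetical illustration of the principle – Meta's actual splitting mechanism is internal to Meta and not exposed.

```python
# Hypothetical illustration of a randomized, deterministic audience split.
# This is NOT Meta's actual implementation - it only demonstrates the idea
# that every user lands in exactly one cell and that the assignment is
# effectively random with respect to anything you want to measure.
import hashlib

def assign_cell(user_id: str, study_id: str, cell_shares=(50, 50)) -> int:
    """Assign a user to a cell, proportionally to cell_shares (percentages)."""
    digest = hashlib.sha256(f"{study_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable pseudo-random value in 0..99
    threshold = 0
    for cell, share in enumerate(cell_shares):
        threshold += share
        if bucket < threshold:
            return cell
    return len(cell_shares) - 1

# The same user always lands in the same cell within one study,
# while a different study ID produces a completely different split.
print(assign_cell("user-123", "study-A"))
print(assign_cell("user-123", "study-B"))
```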
Depending on what you want to test, you can choose between the two following ad study types on Smartly:
- Split test (A/B test)
- Conversion Lift Test
  - Single-cell
  - Multi-cell
Note: A third type of ad study, the Brand Lift Study, is currently not available in Smartly. You can find more information in this article on Meta's Business Help Centre.
See Ad Studies in Smartly
- In the main navigation, click Ad Studies.
For concrete examples of which test you should use, see split testing case examples.
Split tests
With split tests, you can compare the performance of ad sets, campaigns or ad accounts against each other within a selected attribution model. This is done by randomly splitting the audience and only delivering one variation of your ads to each part of the audience.
A split test is your default choice for A/B testing. Applicable examples for testing would be questions such as:
- Should I show the price on a creative or not?
- Which call to action should I use on my creatives?
- How do changes in the conversion funnel on my website affect conversion rates?
Conversion Lift Tests
In lift tests, you compare the results of your ads against a situation where no ads are shown at all. The trick is that during a lift test, Facebook tracks even those users who are not shown any ads, which is not the case normally.
Use a lift test to determine whether your advertising has an overall effect – either throughout your whole funnel or within an individual funnel step (most often tested for retargeting).
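As a rough illustration of what a single-cell lift test measures, the snippet below computes incremental conversions and lift from hypothetical test and holdout numbers, assuming the two cells are equally sized. In practice, Meta produces the lift results and accounts for cell sizes and statistical significance for you.

```python
# Hypothetical single-cell lift calculation with equally sized cells.
# The real lift results come from Meta and account for cell sizes,
# reach, and statistical significance.
test_conversions = 1_200      # conversions among users who could see ads
holdout_conversions = 1_000   # conversions among users held out from ads

incremental_conversions = test_conversions - holdout_conversions
lift = incremental_conversions / holdout_conversions

print(incremental_conversions)   # 200 conversions attributed to the ads
print(f"{lift:.0%}")             # 20% lift
```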
Multi-cell lift tests
Multi-cell lift tests are a sub-type of lift tests. In a multi-cell lift test, you compare the incremental conversions caused by your ads between two alternatives. The difference to a split test is that you compare the results in terms of incrementality and not within a selected attribution model.
You could always use a multi-cell lift study for A/B testing since it measures incrementality – the true effect of your advertising.
Still, most often a split test is sufficient and, since it's easier and cheaper to run, recommended in most cases.
Regular split tests assume that lift is equal in all cells. This is not always the case: for example, if you test different delivery optimization features, one campaign might find people who would have converted anyway, and the other might convert non-customers to customers more effectively. Testing this requires a multi-cell lift test. Examples of such situations include:
- Bidding towards reach vs. purchase
- Using a significantly different target audience (e.g., Broad Audience vs. 1% Lookalike)
- Including or not including Audience Network as a placement in your campaigns
In each situation described above, we are reaching a very different group of people in the cells. Therefore, if there is a difference in attributed conversions, we cannot know if the difference is due to our advertising being effective, or if one of the audiences just managed to harvest attributed conversions from people who would have purchased anyway. This limitation can be avoided by carrying out Facebook ads testing with a multi-cell lift test instead.
See also: Introduction to Lift Tests
Facebook ads testing with ad studies in Smartly
You can find all ad studies created for your company in the Ad Studies section, accessed from any part of the Smartly website by clicking the Home button at the top left of the screen.
The Smartly tool offers flexible and powerful workflows for setting up many kinds of tests, whether it is split or lift testing or a combination of those, or whether it is testing ad sets, campaigns or ad accounts.
You can start creating an ad study and save it as a draft. You'll see drafts in the same listing where you see ad studies. Instead of starting from scratch, you can also clone an existing ad study or draft, to pre-fill all the relevant fields. This saves a lot of time in, for example, setting up lift test objectives.
Other value-adds include the Power Analysis tool, which helps you estimate the required duration or budget for your test. As the test is running, Smartly gives a recommendation on whether you can stop the test and which cell won on which metric, or whether you should continue the test to gather more data. The results are presented in an easy-to-understand fashion.
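To give a sense of what such a power analysis does, the sketch below uses a standard two-proportion sample-size approximation to estimate how many users each cell would need in order to detect a given difference in conversion rates. It is only an illustration, not the calculation Smartly's Power Analysis tool performs.

```python
# Illustrative sample-size estimate for a two-cell split test.
# This is a textbook two-proportion approximation, not Smartly's own formula.
from scipy.stats import norm

def users_per_cell(p_control: float, p_variant: float,
                   alpha: float = 0.05, power: float = 0.8) -> float:
    """Approximate users needed per cell to detect the difference between
    two conversion rates with a two-sided test at the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_control)
    return (z_alpha + z_beta) ** 2 * variance / effect ** 2

# Example: detecting a lift from a 2.0% to a 2.3% conversion rate
print(round(users_per_cell(0.020, 0.023)))   # roughly 37,000 users per cell
```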
Ad studies in Smartly vs. in Ads Manager
There is no difference. Smartly uses Meta's features to create the tests, so the methodology is fully aligned with Meta's. In other words, Meta ad testing with an ad study made in Smartly uses exactly the same testing process as one that is made on Ads Manager. Any ad study you create on Smartly exists also on Meta, and vice versa.
Import an ad study
- In the Ad Studies section, click on "Create Ad Study"
- Click on "Import by ID"
- Select the Ad Account the study is for
- Input the Meta Ad Study ID in the respective field
- Click on "Import"
Find an Ad Study's Meta ID
- If a Meta employee created the ad study for you, they can probably provide the ID you need.
- If you can see the ad study in Ads Manager's Test and Learn tool, you can find its ID by opening the ad study report and looking for the ID in the browser's address bar:
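If you prefer to verify an ID programmatically, you can also read the ad study object through the Meta Marketing API. The sketch below assumes a valid access token with the required permissions; the placeholder values, the API version, and the exact fields available should be checked against Meta's current Marketing API documentation.

```python
# Read basic fields of an ad study from the Meta Graph API to confirm the ID.
# AD_STUDY_ID and ACCESS_TOKEN are placeholders; verify the API version and
# field names against Meta's Marketing API documentation.
import requests

AD_STUDY_ID = "1234567890"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

response = requests.get(
    f"https://graph.facebook.com/v19.0/{AD_STUDY_ID}",
    params={"fields": "name,start_time,end_time", "access_token": ACCESS_TOKEN},
    timeout=30,
)
response.raise_for_status()
print(response.json())   # e.g. {"name": "...", "start_time": "...", "id": "..."}
```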
Ad studies in the Campaigns view
You can see your ad studies and combine them with many other dimensions in the Campaigns view. You can choose from dimensions such as the ad study's name, ID, start and end time, and type.
Please note that you can only see the "regular" Meta ad statistics for your ads here – not the lift test results. Lift results are a special kind of data, free from attribution models and windows. In other words, you can't really analyze your lift tests' results in Campaigns, just view them and their contents.
Limitations
Once an ad study has started, you can:
- Rename the ad study or ad study cells
- Add new ad accounts, campaigns, or ad sets into the ad study cells
- Remove items from the ad study cells
- Remove ad study cells
- Change the end time
  - When you click End Ad Study, we simply set the end time to the near future
  - When the ad study ends, audience splits are removed and results gathering stops
- Add new ad study cells (technically possible, but only if the sum of the existing cells' shares is less than 100%)
Pay extra attention when you are adding or removing campaigns or ad sets to/from an active ad study. Only add campaigns that haven't delivered yet. Adding new campaigns into a running ad study will also include those campaigns' results from the time before their audiences were split, thus contaminating the results.
You can also extend the duration of the ad study by updating the end time if the study has not yet ended.
However, while the ad study is running, you cannot:
- Edit the start time
- Change the percentages of existing cells
- Add or modify objectives in a lift study
Frequently Asked Questions (FAQ)
Why do I see multiple ad studies per row?
If you use an ad study dimension and see multiple ad studies per row, that means you have ads that have been part of multiple ad studies over time. The results of those ads are shown on a single row, which signifies that these statistics may have been generated during any of those ad studies' run times.
The statistics under ad study dimensions do not reflect the ad study duration, only the timeframe you selected in Campaigns. To analyze the results during the ad study's running dates, see the results view in the Ad Studies section.
Why can't I see my ad study?
Ad studies created in the API and in Ads Manager behave a bit differently in terms of visibility. Ads Manager creates them to be visible only to the user who created them, while Smartly connects them to the Business Manager. As a result, Ads Manager only shows ad studies connected to you, the user, while Smartly lets the whole business share the test results and learnings. We are working on making all ad studies visible in Smartly.
If you don't see an ad study in Smartly, don't worry! You can import it.
How is the audience split created?
When you create the ad study, Meta first splits the set of all users and then applies targeting on top of that. The split is different in each ad study, and it is currently not possible to use the exact same split in two different ad studies.
How does split work if my audiences are not identical?
If you have different targetings with some overlap and create a 50-50 split, the overlapping part of the audiences will be split perfectly 50-50, but you will also lose 50% of both non-overlapping audiences (see the worked example below). Note that if you wish to test only completely non-overlapping audiences, it is not necessary to run an ad study and you should not do so, as the feature will simply cut each specified targeting to 50%.
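As a worked example with hypothetical audience sizes, the calculation below shows how much of each targeting remains reachable after a 50-50 split.

```python
# Hypothetical audience sizes for two overlapping targetings in a 50-50 split.
audience_a = 1_000_000   # users matching targeting A
audience_b = 800_000     # users matching targeting B
overlap = 300_000        # users matching both targetings

# Each cell can only reach the users that fall into its own half of the
# population: half of the overlap plus half of its non-overlapping part.
reach_a = (audience_a - overlap) / 2 + overlap / 2   # 500,000
reach_b = (audience_b - overlap) / 2 + overlap / 2   # 400,000

print(reach_a, reach_b)   # each targeting ends up reaching ~50% of its users
```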
Does the ad set reach estimate reflect the audience size with or without the ad study?
Without. The reach estimate always shows how big the audience would be without the ad study.
What happens to the audience when the ad study ends?
When the ad study ends, the audience split no longer applies and both campaigns are shown to the entire audience. Note that this may lead to campaigns having overlapping audiences. Typically you want to pause one or both campaigns after the ad study ends.
Will the cells have identical delivery?
The ad study does not ensure even delivery. Meta will just split the audience, but the ads deliver normally through the ad auction process like every other campaign. Therefore, you need to ensure your cells deliver evenly in order to compare the results. Read more in Best practices for ad studies.