- In the navigation menu, go to Ad Studies.
- Click Create Ad Study > Split test.
For TikTok, campaign split tests are not yet supported. Instead, you can set up campaigns for brand lift tests inside our platform, and your TikTok representative will configure them as brand lift campaigns.
Step 1: Basics
First, set up the test structure and basic information: Name, Start Time, and End Time. Set the duration based on how long it typically takes to collect the required number of conversions.
You can edit the start time if the test has not yet started (useful for creating drafts), and the end time if the test has not yet ended.
Step 2: Power analysis
Next, estimate how much data you are likely to need to reach a statistically significant result for your split test. You can do this by entering some details into the estimation tool in the ad study creation dialog in Smartly.
The default settings are designed to work in most cases, but you can also enter your own.
- The Metric and Conversion goals define what you want to measure. They are used as the default settings when you view the results of the ad study and can be changed at any point. In most cases, the metric and event should be based on your business KPI (e.g., CPA for purchase). The selection here is also used as the default when estimating the statistical significance of the test once the results start coming in. The initially selected conversion goal is your primary account reporting goal; you can change it in Settings → Reporting.
- The Smallest interesting difference is the most important factor in calculating how much data you will need to collect. The smaller the differences you want to find, the more data you need in order to distinguish them from random variation. In most cases, values between 10% and 20% give the best compromise between the cost of the ad study and the value gained from learnings.
- The Confidence level defines how certain you want to be that the difference you find is a true difference, and not the result of random variation. Suppose there is no difference at all between the ad study cells and the confidence level is set to 95%. In that case, there is a 5% probability of a false positive, that is, of a statistically significant difference being detected anyway. A larger value means the outcome is more likely to be correct, but also that more data must be collected to reach a statistically significant result.
- Because randomness is at the core of statistical testing, it is impossible to predict precisely how much data is needed. Sometimes you get lucky and 300 conversions are enough, but in most cases you might need 800. The Statistical power lets you explore this uncertainty in advance. The default value of 80% means that by collecting the indicated number of conversions you will get a statistically significant result with 80% probability (assuming the difference is exactly as large as the smallest interesting difference); with 20% probability, you would need to collect more data. Note that the statistical power is only used to calculate the estimate, and does not affect the actual calculation of statistical significance at the end of the ad study.
- CPA can be entered to estimate the total cost of the ad study. If you have already run a campaign similar to the one you plan to test, the CPA from that campaign is usually a good estimate here. Remember to account for late conversions: with long attribution windows, it might take a while after the impression event for the conversion to happen.
The displayed number of required conversions (or clicks, in the case of CTR) is always the total number of conversions across all cells. If you add new cells, the estimate increases accordingly. The estimate assumes that the total budget for the ad study is split in proportion to the cell sizes.
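To make the estimate less abstract, here is a minimal sketch of the kind of two-sample power calculation that sits behind estimates like this, written in Python with statsmodels. It compares conversion rates between two equally sized cells; it is not Smartly's exact formula, and all input values (baseline_cvr, relative_diff, cpa) are made-up examples.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# All inputs below are hypothetical examples, not Smartly defaults.
baseline_cvr = 0.004   # assumed conversion rate per impression in the control cell
relative_diff = 0.15   # smallest interesting difference (15%)
alpha = 0.05           # 95% confidence level -> 5% false-positive probability
power = 0.80           # 80% statistical power
cpa = 30.0             # assumed cost per conversion, used only for the spend estimate

variant_cvr = baseline_cvr * (1 + relative_diff)

# Cohen's h effect size for comparing the two conversion rates.
effect_size = proportion_effectsize(variant_cvr, baseline_cvr)

# Impressions needed per cell for a two-sided, two-sample test of proportions.
impressions_per_cell = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=alpha,
    power=power,
    ratio=1.0,
    alternative="two-sided",
)

# Rough totals across two equally sized cells, using the baseline rate for both.
conversions_total = 2 * impressions_per_cell * baseline_cvr
print(f"~{impressions_per_cell:,.0f} impressions per cell")
print(f"~{conversions_total:,.0f} conversions in total, ~${conversions_total * cpa:,.0f} estimated spend")
```

Raising the confidence level or the statistical power, or lowering the smallest interesting difference, increases the required number of conversions in the same direction as described above.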
Step 3: Study cells
- Add cells for each variation you want to test. Each cell gets a unique, non-overlapping split of the target audience. You can define the size (% of the target audience) of each cell, but the sum must be less than or equal to 100%.
- Give your cells descriptive names. This makes the results easier to read, as they are shown per ad study cell.
- When the structure is set up, click on the Select button on each cell to add ad sets, campaigns, or even full accounts to each cell. Usually, you want to compare only one feature at a time (such as the effect of bidding type) while keeping everything else identical. The easiest way to achieve this is by creating one campaign or ad set first, then cloning it and modifying whatever feature you want to test in the new cell.
- You can add one or multiple campaigns, ad sets, or accounts to each cell. Accounts, campaigns, or ad sets in the same cell will use the same segment of the split audience, and thus their audiences may overlap within the cell.
- When you are done, click Publish to save the ad study. When your Facebook split test starts, the target audience of each campaign will be limited to the selected segment.
FAQ
Can I create an ad study with two different DPA campaigns?
Yes. This Facebook split testing setup is a good way to test differences in CTR (or conversions) between two Dynamic Image Templates, for example. Our tests with clients have shown that there can be major differences in performance between different image templates.
ℹ Note: If you create an ad study with DPA campaigns that have different dynamic targeting rules, people will not "jump" from one audience to the other. For example, if one audience includes people who have viewed a product within 1–7 days and the other within 8–14 days, a person in the first audience will not move to the second audience after 8 days have passed, because the Facebook audience split is still in effect.
How do you define "Smallest interesting difference"?
You could also say: "How big do you think the difference will be?". We ask this because it makes a big difference in the estimated test duration. Big differences are easy to find with little data, but small differences require lots of data to detect.
We define the smallest interesting difference as the total difference relative to the mean: if CPA is $20 in Cell A and $25 in Cell B, the relative difference is $5 / $22.5 ≈ 22.2%. This definition is used because usually, you do not know a priori which cell is better, so it makes sense to use a symmetric difference.
An alternative would be to define the smallest interesting difference relative to a known control cell. This makes sense when you compare a new setup against an old one. For example, if Cell A were the old setup used as the control, the relative difference could be defined as $5/$20 = 25%.
There is actually a simple mapping between these. If X is the smallest interesting difference according to the first definition and Y according to the second, then X = 2Y/(2+Y) and Y = 2X/(2-X). In other words, if you want to use a smallest interesting difference of 20% relative to the control cell (Y = 0.2), set the value to 18.2% in the selection.
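As a small illustration, here is a Python sketch that converts between the symmetric difference (relative to the mean) and the difference relative to a control cell. The function names are ours, and the numbers simply reproduce the worked example above.

```python
def symmetric_from_control(y: float) -> float:
    """Map a difference relative to the control cell (Y) to the
    symmetric difference relative to the mean (X): X = 2Y / (2 + Y)."""
    return 2 * y / (2 + y)

def control_from_symmetric(x: float) -> float:
    """Inverse mapping: Y = 2X / (2 - X)."""
    return 2 * x / (2 - x)

# Worked example from above: CPA of $20 in Cell A and $25 in Cell B.
cpa_a, cpa_b = 20.0, 25.0
x = (cpa_b - cpa_a) / ((cpa_a + cpa_b) / 2)
print(f"symmetric difference: {x:.1%}")                               # 22.2%
print(f"relative to control: {control_from_symmetric(x):.1%}")        # 25.0%
print(f"20% vs. control -> set {symmetric_from_control(0.20):.1%}")   # 18.2%
```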
I can't find an account, campaign, or ad set for a cell, but it is available. What should I do?
Smartly shows only 50 results matching your search for an object for an ad study cell. If more objects match the search term, the item you actually want may not be shown, and you won't be able to select it. We recommend using names that are as descriptive and unique as possible, but if this happens, you can use the Smartly ID of the object as a search term.
There are two ways to get the Smartly ID:
- Open the account, campaign, or ad set so that you can see its details, and check the URL in your browser address bar. Here are some examples of ID placement, depending on the object (and view) you open:
  - Account ID
  - Campaign ID
  - Ad Set ID
- In Campaigns, select the account, campaign, or ad set Smartly ID as a column. The Smartly ID for the account, campaign, or ad set is then displayed in the report.