“Half the money I spend on advertising is wasted; the trouble is I don't know which half.”
- John Wanamaker (1838-1922)
In theory, Wanamaker’s gripe still holds true; in practice, modern attribution questions are more complex.
A DTC brand came to us with a similar concern: “I am pretty sure most of my advertising is productive, but how can I be sure?”
A year ago, they were spending $80K/month on Facebook/Instagram ads and another $10K on Google Search (mostly branded keywords). Attribution was simple, even in a post-iOS 14 App Tracking Transparency landscape.
Fast forward to today: their budgets have increased five-fold, and their channel mix has expanded to include:
… and a significant increase in the Google Search, Shopping, and Facebook advertising budgets.
As their advertising investment scaled, their earned revenue followed. Their Marketing Efficiency Ratio (MER) remained healthy, leading the CEO to believe that, overall, the advertising is productive.
But how productive? Is this mix and budget allocation optimal? Should we be spending more on Pinterest? Are Discovery Ads actually making a difference, or are they just nice to have?
A century later, these are all different iterations of Wanamaker’s original gripe.
After researching a few attribution partners, the DTC brand concluded that most solutions seemed too expensive or incomplete. They weren’t interested in committing to a six-figure annual contract for a robust attribution solution when they were pretty sure that most of their advertising mix was already optimal.
Instead, they wanted to bootstrap a custom attribution model from the ground up: one whose methodology they could understand, believe in, and alter according to their own anecdotal-but-also-relevant theories about the consumer journey.
Disclaimer for the PPC folks: at some point as you read the following paragraphs, you might say to yourself, "It seems like they are just attempting to reinvent Google's data-driven attribution model." Yes, you are correct. This is the same methodology we applied here.
A key piece of any attribution project is to understand the concepts of brand lift and the halo effect. Brand Lift is the impact that advertising has on your bottom line that is not measured by other direct-response tools. For example, if a podcast advertisement influenced your decision to go directly to a brand’s website and purchase, traditional conversion tracking methods would not be able to attribute your purchase back to the podcast ad. Instead, advertisers rely on statistical models to prove a lift in site revenue (or another KPI).
A Halo Effect is a similar phenomenon, but instead of drawing a correlation between the advertising investment and site revenue, we would attempt to measure the potential impact that one advertising impression might have on the performance of another advertising impression.
Successful marketing strategies are built like dynasty NBA teams: no single piece is relied on to win multiple championships. Some ads exist to score the points; others should be relied on for assists or for playing defense against the competition.
If a user was served a YouTube ad and then performed a Google Search for our product category, what is the likelihood that they would convert compared to similar users that only performed that same Google Search (and were not previously exposed to our brand via YouTube advertising)?
If the YouTube impression proved to have a positive impact on the expected conversion rate of our Google Search traffic, then we can conclude that a halo effect exists and therefore justify the YouTube investment.
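The comparison above can be sketched as a simple cohort analysis. This is a minimal illustration, not the brand's actual model; the cohort data, field names, and conversion rates below are hypothetical.

```python
# Hypothetical halo-effect check: do searchers previously exposed to a
# YouTube ad convert at a higher rate than unexposed searchers?

def conversion_rate(users):
    """Share of users in a cohort who converted."""
    return sum(u["converted"] for u in users) / len(users)

def halo_lift(searchers):
    """Relative lift in conversion rate for YouTube-exposed searchers
    versus searchers who only performed the same Google Search."""
    exposed = [u for u in searchers if u["saw_youtube_ad"]]
    control = [u for u in searchers if not u["saw_youtube_ad"]]
    cr_exposed = conversion_rate(exposed)
    cr_control = conversion_rate(control)
    return (cr_exposed - cr_control) / cr_control

# Hypothetical cohorts of users who all performed the same search:
searchers = (
    [{"saw_youtube_ad": True, "converted": True}] * 6
    + [{"saw_youtube_ad": True, "converted": False}] * 94
    + [{"saw_youtube_ad": False, "converted": True}] * 4
    + [{"saw_youtube_ad": False, "converted": False}] * 96
)

print(f"Halo lift from YouTube exposure: {halo_lift(searchers):+.0%}")  # +50%
```

A positive lift here would support the halo-effect conclusion; in practice you would also want to confirm the cohorts are comparable and the difference is statistically significant.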
Both of these concepts rely on incrementality analysis: we seek to understand the incremental results of a given advertisement on either site revenue (brand lift) or another advertising message (halo effect).
For the purposes of this analysis, Google refers to Search and Shopping ads. YouTube, Display and Discovery are specified as YouTube, Display and Discovery, even though they also run through Google.
It's important to note that, because of manual data entry and limitations in the data the advertising channels share with us, we aren't able to match every purchaser back to the ads they viewed. The largest gaps are on Facebook and YouTube.
At the time of this analysis, GA4 had not yet been installed. Once we move to GA4, the data should be more accurate, but never perfect.
Since CTV data is not included in Google Analytics (and we can't get session data for these users), we cannot include it in the full analysis. Instead, for CTV, we looked at all users with CTV in their journey and gave full credit to CTV if the user also interacted with branded traffic, but split the credit if the user also interacted with other non-branded ads. So, if a user saw a CTV ad and then came to the site by searching for a branded keyword, we assume that the CTV ad is how they learned about the brand.
(This is the sort of anecdotal-but-also-relevant twist that we mentioned at the top…)
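That credit rule is simple enough to state as code. The sketch below is a hypothetical illustration of the logic described above; the channel names and journey representation are invented for the example.

```python
# Hypothetical CTV credit rule:
# - CTV + branded search only      -> CTV gets full credit
# - CTV + other non-branded ads    -> credit split evenly between CTV
#                                     and the non-branded channels

def ctv_credit(journey):
    """Return each channel's share of conversion credit for a journey,
    given as a list of channel names the user touched."""
    if "ctv" not in journey:
        return {}
    # Branded search is treated as downstream of the CTV impression,
    # so it earns no credit of its own.
    others = {ch for ch in journey if ch not in ("ctv", "branded_search")}
    if not others:
        return {"ctv": 1.0}
    channels = {"ctv"} | others
    share = 1.0 / len(channels)
    return {ch: share for ch in channels}

# Saw a CTV ad, then searched the brand name: CTV gets full credit.
print(ctv_credit(["ctv", "branded_search"]))  # {'ctv': 1.0}
# Also clicked a non-branded Pinterest ad: credit is split 50/50.
print(ctv_credit(["ctv", "pinterest", "branded_search"]))
```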
For Pinterest, we do have session data in Google Analytics; however, due to limitations of Pinterest's reporting, we can't match back view-throughs, which make up most of the conversions. Instead, we compared attribution models within Pinterest's platform and saw that conversion numbers dramatically increase as we give credit to people who view an ad and then purchase over a longer time period. When we give credit to Pinterest for 7 days after someone views an ad, instead of only 1 day after they view, conversions increase by 40 (75 compared with 35).
Overall, Pinterest is an exploration platform, where people go to learn new things, so it makes sense that the sales journey would be longer than on other channels.
(Once again, this was partially a judgment call, but was agreed upon by all parties and determined to be a more realistic view of the role Pinterest plays in the conversion process.)
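The window comparison above boils down to counting purchases that fall within N days of an ad view. In practice Pinterest's own reporting did this comparison for us; the sketch below just illustrates the mechanic with invented timestamps.

```python
# Hypothetical view-through window comparison: how many purchases land
# within a 1-day vs. a 7-day window after an ad view?
from datetime import datetime, timedelta

def view_through_conversions(events, window_days):
    """Count purchases that happened within `window_days` of an ad view.
    `events` is a list of (view_time, purchase_time) pairs."""
    window = timedelta(days=window_days)
    return sum(
        1
        for view, purchase in events
        if timedelta(0) <= purchase - view <= window
    )

# Invented (view_time, purchase_time) pairs:
events = [
    (datetime(2022, 3, 1), datetime(2022, 3, 1, 18)),  # same day
    (datetime(2022, 3, 1), datetime(2022, 3, 5)),      # 4 days later
    (datetime(2022, 3, 2), datetime(2022, 3, 10)),     # 8 days later
]

print(view_through_conversions(events, 1))  # 1 conversion in a 1-day window
print(view_through_conversions(events, 7))  # 2 conversions in a 7-day window
```

Widening the window from 1 to 7 days can only add conversions, never remove them, which is why the choice of window is a judgment call about how long Pinterest's influence plausibly lasts.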
Ultimately, we don't really know whether these people converted because of Pinterest, or whether they're simply the same people we are targeting on other channels. Therefore, we would like to run a matched market test, opting Pinterest out of a few markets, to determine its actual incremental impact.
If you’d like to see copies of the original datasets, see the links below:
Here is the doc with the raw data. The following tabs are most useful for the analysis:
Here is the summary doc with the incrementality results:
In total, this analysis took more than 100 working hours to compile, much of which included manually tagging ad interactions across hundreds of user profiles.
It’s important to recognize that this was never intended to be a working model that we can use for future ad campaigns. Instead, we used this to help establish a basic attribution framework and develop a few key understandings and hypotheses about the customer journey.
After the Pinterest matched market test, successful implementation of GA4, and other analyses, we will provide a recommendation for an automated attribution solution to use for future campaigns. That might be a cost-effective vendor that plugs directly into their Shopify store, or a custom solution built on cloud-based data infrastructure, if that proves more appropriate.
We'll get back to you within a day to schedule a quick strategy call. We can also communicate over email if that's easier for you.