One of the most common methods in-house marketing teams use to estimate the lift generated by a TV ad is the survey. This approach has long served as a reliable way for a company to measure the brand and conversion lift attributed to its marketing spend. The issue is that no matter how meticulously a survey is crafted, it is inherently prone to error—especially the “human” kind.
At Tatari, we solve this issue by removing the human element, leveraging data science and machine learning instead to offer a highly accurate mechanism for estimating TV lift. This is why a company may see different results between Tatari’s methodology and various survey models. In this article, we’ll discuss why these discrepancies occur.
How Tatari Does It
Simply put, Tatari utilizes 1-to-1 IP-level matching to measure the lift of a TV ad in both an immediate (viewers taking action within 5 minutes of an ad airing) and delayed (up to 30 days post-airing) fashion. These groups are then aggregated and compared to a brand’s standard baseline of website visitors, which allows us to accurately quantify the “total lift” of a TV ad.
There are some nuances between how we measure this on the linear vs. streaming side of the business. For additional info, we have a video that explains our linear methodologies, as well as one that gives a detailed look at how we measure incremental streaming lift.
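To make the spike-over-baseline idea concrete, here is a minimal sketch of how immediate lift could be estimated: count site visits in a short window after an airing and subtract the traffic you would expect from baseline alone. All function names, timestamps, and rates below are invented for illustration; Tatari’s production models are far more sophisticated than this.

```python
from datetime import datetime, timedelta

# Hypothetical spike-over-baseline sketch. Numbers are invented;
# this is NOT Tatari's actual implementation.

def immediate_lift(visit_timestamps, airing_time, baseline_rate_per_min,
                   window_minutes=5):
    """Count visits in the post-airing window, then subtract the
    expected baseline traffic to estimate incremental visits."""
    window_end = airing_time + timedelta(minutes=window_minutes)
    visits_in_window = sum(
        1 for t in visit_timestamps if airing_time <= t < window_end
    )
    expected_baseline = baseline_rate_per_min * window_minutes
    return max(visits_in_window - expected_baseline, 0.0)

# Example: an airing at 9:00 PM with a baseline of 0.8 visits/minute
airing = datetime(2023, 5, 1, 21, 0)
visits = [airing + timedelta(minutes=m, seconds=s)
          for m, s in [(0, 30), (1, 10), (1, 45), (2, 5), (3, 20),
                       (4, 50), (6, 0)]]  # the last visit falls outside the window
print(immediate_lift(visits, airing, baseline_rate_per_min=0.8))  # 2.0
```

The same idea extends to delayed lift by widening the window (up to 30 days in the text above) and comparing matched IPs against the baseline cohort rather than a simple per-minute rate.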
Surveys as an Alternative
A survey-based attribution approach starts with a brand asking a buyer “Where did you hear about us?” — the choices might be TV, radio, Facebook, a Google Search, or a multitude of other options. This is traditionally sent post-purchase so as to not interrupt the “add to cart” excitement; and can be delivered immediately after the payment is processed, or days/weeks later.
Responses are tabulated in the form of the same lower-funnel metrics that Tatari compiles, like CPA and ROAS.
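A survey tabulation of this kind is straightforward to sketch: bucket responses by channel, then divide each channel’s spend by its attributed conversions. The channel names, spend, and order value below are invented for illustration only.

```python
# Hypothetical tabulation of "Where did you hear about us?" responses
# into per-channel CPA and ROAS. All figures are invented.
responses = ["TV", "Facebook", "TV", "Google Search", "TV", "Facebook"]
spend = {"TV": 9_000, "Facebook": 4_000, "Google Search": 1_000}
avg_order_value = 150  # revenue credited per attributed conversion

# Count attributed conversions per channel
conversions = {}
for channel in responses:
    conversions[channel] = conversions.get(channel, 0) + 1

for channel, n in conversions.items():
    cpa = spend[channel] / n                      # cost per acquisition
    roas = (n * avg_order_value) / spend[channel]  # return on ad spend
    print(f"{channel}: CPA ${cpa:,.0f}, ROAS {roas:.2f}")
```

Note that the arithmetic itself is sound; the weakness discussed next is in the inputs, since every attributed conversion depends on a respondent’s memory and diligence.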
But who is to say that the survey-taker didn’t accidentally click the wrong box? Or worse, think that they heard about the brand from one channel when, in reality, it was another? What if they received the survey five days later and just filled it out aimlessly to take advantage of the all-too-familiar “enter to win” opportunity? This level of human dependency is a major flaw in the survey approach. So while survey results may at times resemble those that Tatari delivers, the discrepancies can also be significant.
The Biggest Miss: Delayed Conversions & Brand Building
One of the biggest factors when weighing surveys as attribution tools against Tatari’s approach comes in the form of delayed conversions, which are often misattributed with a survey-based approach.
This issue is known as the “indirect reporting effect”. In short, this is where an initial touchpoint (say, TV) is likely to get underreported, whereas the last touchpoint—such as an email that was triggered because a consumer saw the TV ad and visited the brand’s website—is likely to be over-attributed. Relying on a consumer’s memory, especially in a delayed setting, is unreliable. Consider a hypothetical example in which eight days pass between a consumer being served the initial advertisement and completing a survey. Even if the TV ad is what triggered purchase interest, the last touchpoint is likely to be credited as the purchase driver simply due to the time elapsed since initial exposure.
When it comes to brand building, TV has long reigned supreme, even for TV spots geared toward direct response. Hence, initial acquisition numbers following TV airings undercount acquisitions and overstate CPV and CPA. For example, if in week 1, TV spend is $100,000 and a client gains 100 new customers in that week, the client would seem to have a CPA of $1,000. However, given that TV programs are time-shifted and that some viewers take longer than a week to convert, that contemporaneous estimate misses those later conversions.
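The arithmetic behind this distortion is simple. The sketch below uses the $100,000 / 100-customer figures from the example above; the number of delayed conversions is an assumed figure purely for illustration.

```python
# Week-1 CPA vs. CPA after delayed conversions are credited back to
# the original TV spend. The delayed-customer count is hypothetical.
tv_spend = 100_000
week1_customers = 100

contemporaneous_cpa = tv_spend / week1_customers  # $1,000, as in the text

# Suppose (hypothetically) 60 more customers convert in later weeks,
# after time-shifted viewing or a slower consideration cycle.
delayed_customers = 60
total_customers = week1_customers + delayed_customers

true_cpa = tv_spend / total_customers  # $625
print(f"Week-1 CPA: ${contemporaneous_cpa:,.0f}")
print(f"CPA after delayed conversions: ${true_cpa:,.0f}")
```

The same spend looks far more efficient once delayed conversions are counted, which is exactly the benefit that week-1 snapshots (and memory-based surveys) fail to capture.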
Acquisition figures in the initial weeks following TV spend don’t fully capture the benefit of that spend. In fact, owing to brand awareness and improved perception, TV benefits can extend well into the future. This case study highlights a company that used survey responses for attribution: although TV initially appeared far less efficient according to the survey results, the two measurements were largely reconciled over time.
Learning from Client Experiences with Surveys
At Tatari, we’ve had several clients use surveys as a means of attribution, and some have gone so far as to halt TV campaigns owing to survey results. Why? Shortly after starting TV, these clients had incredible sales results and low TV CPVs. However, their survey results attributed very little of their success to TV. The surveys instead indicated unexpectedly strong results in other marketing channels. In one case, a client turned off TV because Facebook CPAs appeared to be well below those of TV. Alas, while TV was dark, their Facebook performance took a nosedive. They were back on TV a couple of weeks later and—presto—Facebook results improved.
Simply put, survey results often fail to correctly attribute credit for conversions. Do TV results look weak according to your survey, but sales numbers look great overall? You need to consider that your TV results might be better than they appear. By looking at all of your marketing data—with and without TV—it is possible to uncover the truth.