Tracking user behaviour online has never been easy, especially if you rely solely on Google Analytics. Although this approach worked well in the past, it now faces growing challenges, and tracking users online keeps getting more complex.
In addition, cookies are gradually losing their usefulness and provide less and less reliable data. As a result, we have fewer signals to track users online. All of these factors limit our digital analytics capabilities.
Tracking users at an individual level is the foundation of digital attribution - the conversion reports you see in Google Analytics and every other advertising platform you can think of. Digital attribution still has a very important place, and the purpose of this article is not just to point out its shortcomings. On the contrary, our goal is to make everyone aware of these shortcomings and to think about when this method is a reasonable option for evaluating campaigns and when it is not.
Last-Click Model in Google Analytics:
Google uses what is known as the Last non-direct click model. This assigns 100% of the conversion credit to the source of the last visit that was not direct.
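The rule itself is simple enough to sketch in a few lines. The following is an illustrative implementation, not Google's actual code; the channel labels are made up:

```python
def last_non_direct_click(touchpoints):
    """Return the channel credited with the conversion: the most
    recent touchpoint on the path that is not a direct visit.
    Falls back to 'direct' only if every visit was direct."""
    for channel in reversed(touchpoints):
        if channel != "direct":
            return channel
    return "direct"

# A user's path to conversion, oldest visit first
path = ["facebook / cpc", "direct", "google / organic", "direct"]
print(last_non_direct_click(path))  # google / organic
```

Note that the Facebook visit that started the journey receives no credit at all, which is exactly the bias discussed below.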
Digital attribution still has a very wide application. The main reason for this use is that it is largely free and provides a very detailed view of the performance of marketing activities. Thanks to Google Analytics, we can see what the most common customer paths to conversion are. By combining different models, we can then check for ourselves which channels contributed to revenue and conversions and at what stage of the buying process. This gives us a great tool for a daily overview of where and how we are using our investments effectively. However, this assumes that we are actually interested in analysing the data and not just content with what we see in the "source/medium" reports, where only the Last-Click model is commonly used.
The problem arises when you have a complex media mix that includes not only online advertising but also a lot of offline activity. Of course, you won't see these in the report. Similarly, if you are interested in incrementality, you will find it very difficult to evaluate it using attribution.
Even if you use all the available signals, they may not be sufficient: they can overlap, and the data is often incomplete. We explain why below.
In recent years, the data in Google Analytics has become increasingly inaccurate. Adblockers and issues with tracking users on different devices lead to incomplete conversion path data.
Apple with its ITP (Intelligent Tracking Prevention) system shortens the lifetime of cookies, distorting the user journey for longer periods of time. Further complicating matters is the new 2022 cookie regulation, which requires user consent to track their activity.
We anticipate that data will be even more limited in the future due to increasing measures to protect user privacy online.
Advertising platforms often favour last-click models that overvalue channels like Google search and undervalue channels like Facebook or YouTube, which often generate demand.
As an example, you may have started to heavily promote your products and your brand. You use several channels such as TV, OOH, YouTube, etc. and some of them may have already driven users to the site in the past.
Some people, after seeing your ad, decide to visit your site and do what many people do - they "Google" you. Because your market is highly competitive, you pay for brand ads to make sure you appear first in searches for your brand. So the user clicks on the ad, and you record a visit following a brand search. If they convert, the conversion is attributed to that search. The TV spot, OOH campaign or any other channel that actually created the demand for your product doesn't get a penny.
Instead, Google Analytics will tell you that branded search brought in 300k crowns per month. In reality, paying for branded search advertising may have added only around 30k crowns of truly incremental revenue (the part that would otherwise have gone to your competitors). This is an incrementality problem, and it gets worse the better known your brand is: the incremental effect of the paid ads is then low.
Digital attribution often assigns conversions immediately and based on short-term signals. This is a problem if you promote your brand and products through channels with long-term reach, whose effect may only show up after a longer period of time. It is quite possible that a TV ad you switched off a month ago is still generating revenue today - but you don't know it. Digital attribution simply does not take the long-term effect of advertising into account.
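This carry-over is often modelled with a so-called adstock transformation: a fraction of each period's advertising effect is carried into the next period. A minimal sketch, with an illustrative decay rate of 0.7:

```python
def geometric_adstock(spend, decay=0.7):
    """Carry a fraction `decay` of each period's advertising effect
    into the next period, so past spend keeps contributing."""
    carried = 0.0
    effect = []
    for s in spend:
        carried = s + decay * carried
        effect.append(carried)
    return effect

# A TV flight runs for three weeks, then is switched off
weekly_spend = [100, 100, 100, 0, 0, 0]
weekly_effect = geometric_adstock(weekly_spend)
```

Weeks four to six still show a positive effect even though spend is zero, which is exactly the delayed impact that click-based attribution cannot see.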
The delayed effect of advertising is not the only thing missing: digital attribution also fails to account for channel saturation. We explain the saturation effect with an example.
For example, if you started advertising on Facebook with a lower investment, you probably reached the most potential users. But as you gradually increased your investment, you started reaching users who were less likely to buy. This meant that, over time, one crown invested became less and less effective. This effect is called the investment saturation effect.
It simply isn't picked up by digital attribution and is hard to extract from the data. Taking this effect into account is important when optimising the budget, as it makes more sense to invest in more efficient channels.
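Saturation is commonly described with a concave response curve, for example a Hill function. The parameters below are illustrative only - in practice they would be estimated from data:

```python
def hill_saturation(spend, half_sat=100.0, shape=1.0):
    """Diminishing-returns response curve: grows quickly at low
    spend and flattens as spend rises. `half_sat` is the spend
    level that yields half of the maximum response."""
    return spend**shape / (spend**shape + half_sat**shape)

# Doubling spend from 100 to 200 does NOT double the response
low = hill_saturation(100)   # 0.5 of the maximum response
high = hill_saturation(200)  # about 0.67 of the maximum response
```

The first crowns invested buy far more response than the last ones, which is why optimising a budget purely on last-click numbers tends to over-fund already-saturated channels.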
Digital attribution often neglects external factors and seasonality, which can significantly impact marketing effectiveness. These factors can include changes in consumer behaviour, availability of products or services, media and communications, and competition. It is important to take these into account when evaluating your efforts.
Fortunately, there are alternatives that we can use. However, it is important to note that none of these methods are intended to replace digital attribution. These methods complement each other - the effectiveness of campaign evaluation is enhanced by using all of these available tools for the right purposes, because you are aware of their shortcomings. In other words, you get closer to the truth if you use all methods in combination.
"How did you hear about us" questionnaires: mandatory questionnaires before completing a purchase or other type of conversion.
Randomised experiments (A/B tests): Users are divided into random groups, with one group making a change and the other not. Experiments can also be conducted directly in platforms such as Google Ads and Facebook. You can test different campaign parameters.
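Evaluating such an experiment usually comes down to comparing conversion rates between the two groups. A minimal sketch using a two-proportion z-test (a standard textbook test, not a full experimentation framework; the numbers are invented):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing the conversion rate of a variant
    group (a) against a control group (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant converted 120/1000 users, control 90/1000
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=90, n_b=1000)
# |z| > 1.96 suggests a significant difference at the 5% level
```

Platform-native experiments in Google Ads or Facebook do equivalent comparisons for you, but the underlying logic is the same.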
GEO experiments: this method is mainly used when we can't use random experiments. The principle is to introduce changes for different geographic areas, where we randomly decide whether to run the campaign in them or not. We then compare the results and evaluate the effectiveness of the campaign. Methods such as Google's Causal Impact or Meta's GeoLift are available. It's important to note that the split doesn't necessarily have to be by geographic unit - it can also be by keyword or product.
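Tools like Causal Impact and GeoLift build a statistical counterfactual for the test regions, but the core idea can be illustrated with a simple difference-in-differences calculation (the revenue figures are invented):

```python
def diff_in_diff(test_pre, test_post, control_pre, control_post):
    """Difference-in-differences estimate of campaign lift: how much
    more the test regions grew than the control regions did."""
    return (test_post - test_pre) - (control_post - control_pre)

# Weekly revenue before/after launching the campaign in test regions
lift = diff_in_diff(test_pre=1000, test_post=1400,
                    control_pre=900, control_post=1000)
# lift == 300: growth beyond the market-wide trend seen in controls
```

The control regions absorb seasonality and market-wide trends, so the remaining difference is a cleaner estimate of the campaign's incremental effect than any attribution report.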
Marketing mix modelling (MMM) is a top-down model that doesn't need individual user data (so user-level measurement inaccuracies don't undermine it) and can account for all of the shortcomings above. Moreover, its models can be reliably combined with experimental results. Unfortunately, MMM does not offer as detailed a view as digital attribution: the results are usually at the channel level. For insight into the effectiveness of individual campaigns, it needs to be combined with experiments or attribution.
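At its core, an MMM is a regression of a business outcome on aggregated channel spend. The toy below fits a single-channel model with ordinary least squares on noiseless synthetic data; real MMMs use many channels plus the adstock and saturation transforms discussed earlier:

```python
def fit_simple_mmm(spend, revenue):
    """One-channel OLS: revenue = baseline + effect * spend.
    A toy sketch of the MMM idea, not a production model."""
    n = len(spend)
    mean_s = sum(spend) / n
    mean_r = sum(revenue) / n
    cov = sum((s - mean_s) * (r - mean_r) for s, r in zip(spend, revenue))
    var = sum((s - mean_s) ** 2 for s in spend)
    effect = cov / var                    # revenue per crown of spend
    baseline = mean_r - effect * mean_s   # non-media ("base") sales
    return baseline, effect

# Synthetic weekly data where revenue = 200 + 3 * spend
spend = [10, 20, 30, 40, 50]
revenue = [230, 260, 290, 320, 350]
baseline, effect = fit_simple_mmm(spend, revenue)
# baseline == 200, effect == 3
```

Note that the model needs only aggregated weekly totals - no cookies, no user-level paths - which is precisely why privacy changes do not degrade it.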
Recent events focused on user privacy online have intensified the debate among marketers about the biases and shortcomings of using Google Analytics to analyse marketing mix and investments. Over time, increasingly inaccurate tracking of web behaviour, and therefore an incomplete understanding of the customer journey, leads to less accurate conversion attribution and less effective evaluation of marketing activities.
Key issues include incomplete data, preference for 'click channels', short-term perception, failure to account for changes in channel effectiveness, external factors and seasonality. These issues can lead to misinterpretation of data and poor marketing decisions.
On the other hand, there are alternatives to digital attribution that, when combined, can provide a much more realistic and valuable view of the effectiveness of marketing investments. These include "How did you hear about us" questionnaires, randomised experiments (A/B tests), GEO experiments and marketing mix modelling (MMM). These methods can provide more accurate and useful data for evaluating and optimising marketing investments.
This article does not set out to discredit Google Analytics, which undoubtedly still has a place in marketing and web analytics and can be very useful. However, it is important for marketers to think critically about the biases and imperfect signals of digital attribution, and for all of us to take a more consistent approach to attribution and to evaluating marketing activities. Only then will we get a more accurate picture of what is truly incremental and what is not.