After serving as the in-house growth lead for several startups over a couple of years, Barbara shifted to consulting full-time and now works mostly with B2B companies, helping them with growth, measurement, and marketing:
"I help them understand why certain data points a certain way and how they can make decisions from that data."
She has worked with a variety of companies, but her ideal clients and projects involve companies that have already found product-market fit:
"They are usually the companies that are in the post product-market fit stage and they've already found a few acquisition channels that are working for them. I like helping them scale that or what I call, 'pouring gasoline into the fire.'"
Invest in measurement and marketing early
Barbara believes that even early-stage startups could definitely benefit from investing in measurement early:
"There will come a point in your company's journey when you decide that you need to scale more aggressively. Maybe because you have internal targets or your investors are pressuring you. If you don't invest in measurement, you'll need to start from scratch.
Because if you didn't do that early on, you don't really know what has worked up until this point so you'll have to figure it out before you scale."
Track the necessary minimum
Barbara believes in tracking the minimum metrics necessary:
"Obviously there's the big concern of data privacy. You also have the concern of page speeds if you're tracking things on the client side."
Aside from these concerns, having more metrics adds complexity to your measurement setup:
"And then a big aspect of measuring tools is their complexity. Every metric you track adds more complexity to how things are set up. If a metric stops working, you also need to have the team available to fix that."
The three types of metrics you should track
Reach
"Depending on the activity, it could be impressions for paid media or page views for a certain activity."
Engagement
"Let's say your activity has reached a certain number of eyeballs. But how many of these eyeballs were interested in what you had to offer?
In the case of paid media, it can be something like clicks. For other activities, it can be things like pages on sites, visiting multiple pages, or downloading assets."
Value
"You've reached an X number of people, and a Y number of people were interested. But what was the value of the people that were interested?
So that can be the lifetime value (LTV), or the predicted lead value that you calculated. In the case of ecommerce, it can be the cart revenue tied directly to the purchase."
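To make the three layers concrete, here's a minimal sketch of how reach, engagement, and value could be computed from raw event data. The event names and fields are illustrative assumptions, not something from the episode.

```python
from dataclasses import dataclass

# Hypothetical event record; field names are illustrative, not from the episode.
@dataclass
class Event:
    user_id: str
    name: str          # e.g. "impression", "page_view", "click", "purchase"
    value: float = 0.0 # revenue or predicted lead value, if any

def funnel_metrics(events: list[Event]) -> dict:
    """Summarize an activity as reach, engagement, and value."""
    reach = sum(1 for e in events if e.name in ("impression", "page_view"))
    engaged_users = {e.user_id for e in events if e.name in ("click", "asset_download")}
    value = sum(e.value for e in events if e.name == "purchase")
    return {"reach": reach, "engagement": len(engaged_users), "value": value}

events = [
    Event("u1", "impression"), Event("u1", "click"),
    Event("u2", "impression"), Event("u1", "purchase", value=49.0),
]
print(funnel_metrics(events))  # {'reach': 2, 'engagement': 1, 'value': 49.0}
```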
Enrich the data with segmentation
After tracking, Barbara looks at segmentation to answer questions such as:
- What segments show different numbers?
- Is there a certain marketing source that is responsible for higher-value users?
- Is there a certain landing page that is responsible for the most interest?
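As a rough illustration of that kind of segmentation, the sketch below groups hypothetical per-user records by marketing source and by landing page; the column names and numbers are made up, not from the episode.

```python
import pandas as pd

# Hypothetical per-user records; column names and values are illustrative.
users = pd.DataFrame([
    {"source": "google_ads", "landing_page": "/pricing", "lead_value": 120.0},
    {"source": "meta",       "landing_page": "/blog",    "lead_value": 35.0},
    {"source": "google_ads", "landing_page": "/pricing", "lead_value": 90.0},
    {"source": "organic",    "landing_page": "/home",    "lead_value": 60.0},
])

# Which marketing source is responsible for higher-value users?
print(users.groupby("source")["lead_value"].agg(["count", "mean"]))

# Which landing page drives the most interest (here: the most leads)?
print(users.groupby("landing_page")["lead_value"].count().sort_values(ascending=False))
```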
Predicting the dollar value of leads
While there's no single, universal formula for determining dollar value, companies can take a weighted approach within the context of their industry.
Barbara says there are two types of data sources you can use to estimate this value.
Product-based signals
These product-based signals are activities that tell you how likely a lead is to convert into a customer. You can look at things like:
- How did the lead use the product in the first 24 hours?
- Did they visit your pricing page?
- Depending on how you define activation, have they fully activated?
- Did they invite people to join their account?
For companies that don't use these signals, Barbara says they usually rely on the predicted likelihood of conversion together with LTV to calculate that dollar value.
Firmographic information
Another data source you can use to predict value is firmographic information: the kinds of attributes that can be used to categorize companies.
Companies usually get this type of information from the personal details provided by the lead:
"A user can create an account with your product, or maybe they registered for your webinar, or downloaded your paper. You now have their email address and the company that they're currently affiliated with. You can then use enrichment tools like Clearbit or Apollo to get more information on who that customer is."
You can then determine things like:
- Does this lead have a role in their company that fits the ideal customer profile (ICP)?
- Does the company they work for use a stack that's compatible with the stack of the ICP?
Depending on what type of company you are, you can lean on whichever data source is more important to you.
"How it works is you'll have these different data sources that you can use to predict the value. You can then use that value to understand if your campaigns are working, at least in the short term."
Dealing with data discrepancies
To illustrate the problem with data discrepancies, let's say that you've run a paid media campaign and set up conversion tracking within your paid media channel. This could look like:
- In Google Ads campaigns, you've set up a conversion tag that fires whenever the user completes a certain event in their lifecycle (e.g. first subscription or product sign-up)
- In another tool (e.g. Google Analytics, Mixpanel, Amplitude, or Segment), you're also tracking the same event
"When you look into your paid media reporting, the numbers from the tools don't match. You might see an entirely different number on the app platform versus their website analytics. And if you've implemented something like server-side tracking for these ad platforms, the discrepancy can be really huge."
Instead of forcing the numbers to match exactly or deciding which one to use, Barbara believes we should look at the different benefits that these data points offer:
"These data points all have different benefits. They tell you different stories because they were measured in different ways by the tools you used.
So it's not necessarily that one number is right and the other one is wrong. It's just that they were measured in different ways."
To deal with the discrepancies, Barbara implements what she calls triangulated attribution for her clients:
"Instead of just looking at one source, you're looking at different ways of measurement and how they all tell a different story."
You can use data at different levels to get a fuller picture:
- Zero-party data: asking the customer directly where they heard about you
- First-party data: website tracking using Snowplow or Segment
- Third-party data: ad platform reports
Barbara adds that you can also add another layer of data modeling and prediction to understand the correlation between marketing activities and conversions (e.g. marketing mix modeling and econometrics).
"All of these different ways of measurement will provide a unique insight."
The risk of not using multiple data sources
Barbara shares the experience of a client running ads on both Meta and Google. They were using Google Analytics to measure the customer acquisition cost (CAC) for both platforms.
"This is a very common use case. But the problem is that Meta and Google are two strategies that are served to users on different devices, and in different moments of their user journey."
Because Meta ads are more often served on mobile, the user is viewing the ad on their smartphone, at a moment when they're not yet thinking about purchasing your product.
Google, on the other hand, is intent-based: a user is most likely searching on their computer for a solution to their business problem, which also means they're more likely to convert.
"We have two factors here: you have cross-device and the lookback window. Both of those things are going to be very different for Google and Meta.
In the case of B2B SaaS, most conversions happen on a desktop computer because it's a business service. This means Google Analytics isn't able to attribute conversions from Meta as well as it can for Google Ads: the device changes, and because the user takes a long time to convert, the conversion can also fall outside the lookback window."
And because Google Analytics can't attribute the conversions to Meta, some companies might decide to focus solely on Google Ads instead:
"Their overall sales are going down, bringing fewer conversions than before. The issue was that they were looking at a source of data that couldn't measure both platforms in a comparable way.
What they thought was an apples to apples comparison was more like an apples to pizza comparison. It's so far off so they make the wrong decisions."
Marketing mix modeling
Marketing mix modeling (MMM) tries to find correlations between your different data sets.
"On one hand you have your ad platform data sets. So it's the data that says, 'I spent X and served this many impressions on this day.' On the other hand, you have your first party data. 'On this day I serve this many impressions. And on the other day, I had this many conversions.'
MMM creates a mathematical model that can calculate the correlation between the first data sources and the second data source."
For example, you run a holdout test by running a Meta campaign in Germany for one month and then turning it off:
"MMM would then calculate the correlation between the campaign's numbers that you saw in Germany and the overall sales numbers that you saw there before the period of the campaign, within the period, a bit after the period, and way after the period.
Based on these numbers, MMM will let you calculate what the ROI of the campaign was."
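A heavily simplified sketch of the underlying idea: regress daily conversions on daily spend per channel and read an implied return off the fitted coefficients. Real MMMs also account for adstock, saturation, and seasonality; neither the modeling details nor the numbers below come from the episode.

```python
import numpy as np

# Hypothetical daily data: spend per channel and total conversions (all values are made up).
meta_spend   = np.array([100, 120,   0,   0, 110,  90,   0], dtype=float)
google_spend = np.array([200, 180, 210, 190, 205, 195, 200], dtype=float)
conversions  = np.array([ 32,  35,  22,  20,  33,  30,  21], dtype=float)

# Design matrix: intercept (baseline conversions) plus one column per channel.
X = np.column_stack([np.ones_like(conversions), meta_spend, google_spend])
coef, *_ = np.linalg.lstsq(X, conversions, rcond=None)
baseline, meta_coef, google_coef = coef

# Implied incremental conversions per dollar of spend on each channel.
print(f"baseline/day: {baseline:.1f}, Meta: {meta_coef:.3f} conv/$, Google: {google_coef:.3f} conv/$")
```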
How tracking changed because of data privacy
As platforms have focused on data privacy, third-party tracking and cookies have been affected. This is also why MMM and server-side tracking have been gaining popularity:
"A lot of companies have been moving away from the client side, tag-based tracking to using server-side tracking for ad platform tracking, or different types of measurements like MMM with zero power data for more holistic measurement."
Final advice
Don't rely on just one data source for making budget decisions.
"Try to have a full picture, especially if you're running multiple channels at the same time by using different forms of attribution."
Do take everything with a grain of salt.
"All of these are predictions. None of these numbers are real. This isn't math so it can all be wrong. There are edge cases for everything."
Thanks for listening! If you found the episode useful, please spread the word on Twitter mentioning @userlist, or leave us a review on iTunes.