Why Your Marketing Data Doesn’t Match (and How to Fix It)

If you’ve worked with marketing data for any length of time, you’ve probably experienced the same moment. You open two reports that should show the same numbers. But they don’t. 

One says 124,312 impressions.

Another says 123,870.

And someone inevitably asks:

“So, which one is correct?”

Well, the uncomfortable truth is: they both might be.

Because in a very real sense, there is no such thing as universally “correct” when it comes to data.*

Different systems count things differently. Different methodologies group, attribute, and aggregate events in different ways. The same underlying activity can therefore produce different answers depending on the rules applied to it.

Often the issue isn't that one dataset is right and another wrong (although that can happen), but that they're measuring slightly different things.

Numbers only become meaningful once we decide how they should be defined, measured, and interpreted. In practice, this is what a single source of truth actually is. Not some objective, universal reality, but a specific view based on a shared framework for how those numbers are defined and reported.

And that, ultimately, is why data discrepancies occur.

As such, data discrepancies are a normal part of modern marketing analytics. They happen because data is collected, processed, and reported through many different systems, each with its own rules.

The key is understanding why differences appear, identifying the cause, and ensuring all your data follows the same rules.

If you want a quick way to diagnose these issues, we’ve put together a simple checklist you can use to identify the root cause of most discrepancies. You can download it here.

Below we’ll run through some of the most common reasons marketing data doesn’t match across platforms, exports, and reporting systems. In many cases, the cause is surprisingly simple.

*That doesn’t mean there isn’t such a thing as ‘incorrect’ though.

1. Start here: The simple issues most teams miss

Most discrepancies aren’t caused by complex data issues.

They’re caused by things that are surprisingly easy to overlook.

Manual data handling

If you are relying on spreadsheets, there is a good chance the discrepancy is being introduced there.

Once data is exported and moved into Excel or Google Sheets, even small changes can affect the numbers.

Date formats, filters, pivot tables, formulas referencing the wrong range. It doesn’t take much.

In these cases, the issue isn’t with the platform at all. It’s with what happened after the export.

If you suspect this is the issue, check:

  • Are formulas correct and referencing the right data?
  • Are filters applied consistently?
  • Are date formats aligned?
  • Are any rows duplicated or missing?

If spreadsheets are involved, start here.
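As a rough illustration, the checks above can be scripted before the data ever reaches a spreadsheet. This is a minimal sketch using hypothetical, made-up export rows; the column names and date formats are assumptions, not from any real platform:

```python
from collections import Counter
from datetime import datetime

# Hypothetical rows, standing in for what you'd read from a CSV export.
rows = [
    {"date": "2024-06-01", "campaign": "summer_sale", "impressions": 1200},
    {"date": "2024-06-02", "campaign": "summer_sale", "impressions": 1350},
    {"date": "2024-06-02", "campaign": "summer_sale", "impressions": 1350},  # duplicated row
    {"date": "06/03/2024", "campaign": "summer_sale", "impressions": 900},   # different date format
]

# 1. Flag fully duplicated rows (a common copy-paste artifact).
keys = [tuple(sorted(r.items())) for r in rows]
duplicates = [k for k, n in Counter(keys).items() if n > 1]

# 2. Flag dates that don't parse in the expected ISO format.
def is_iso_date(s):
    try:
        datetime.strptime(s, "%Y-%m-%d")
        return True
    except ValueError:
        return False

bad_dates = [r["date"] for r in rows if not is_iso_date(r["date"])]

print(f"{len(duplicates)} duplicated row(s); inconsistent dates: {bad_dates}")
```

Running a check like this on every export is far more reliable than eyeballing a sheet, and it catches exactly the two silent killers above: duplicated rows and mixed date formats.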

Date ranges and time definitions

Another common cause of data discrepancies is also the simplest.

You’re not actually looking at the same time period.

It sounds obvious, but it happens constantly. One report is looking at June 1–30, another is looking at the last 30 days, and someone else has selected May 31–June 29 without realizing it.

I’ve seen entire discussions about “bad data” end when someone notices the reports were showing different years.

Even when the dates look the same, subtle differences can still creep in. Inclusive vs exclusive ranges. UTC vs account timezone. These things shift numbers.

This is one of the reasons many teams centralize their data before reporting. Platforms like Adverity standardize timezones and reporting windows across sources so you’re not comparing slightly different definitions of time.

If you suspect this is the issue, check:

  • Are you using the exact same date range?
  • Are timezones aligned?
  • Are both reports treating start/end dates the same way? 

This is one of the first things to rule out.
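To make the timezone point concrete, here is a small sketch with made-up click timestamps showing how the same events land in different calendar days depending on the timezone used for bucketing:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Hypothetical click timestamps, stored in UTC.
clicks_utc = [
    datetime(2024, 6, 1, 23, 30, tzinfo=timezone.utc),  # late evening UTC
    datetime(2024, 6, 2, 0, 15, tzinfo=timezone.utc),
    datetime(2024, 6, 2, 10, 0, tzinfo=timezone.utc),
]

def daily_totals(events, tz):
    # Bucket each event into a calendar day in the given timezone.
    return Counter(e.astimezone(tz).date().isoformat() for e in events)

utc_totals = daily_totals(clicks_utc, timezone.utc)

# An account timezone two hours ahead of UTC shifts the late-night click
# into the next day, so the daily totals no longer match.
account_totals = daily_totals(clicks_utc, timezone(timedelta(hours=2)))

print("UTC:   ", dict(utc_totals))      # {'2024-06-01': 1, '2024-06-02': 2}
print("UTC+2: ", dict(account_totals))  # {'2024-06-02': 3}
```

Neither set of totals is wrong; they are just different definitions of "June 1" and "June 2".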

Authorizations and account levels

One common issue simply comes down to whether you actually have the right permissions or data access.

Most platforms have multiple account levels, roles, and permissions. It's very easy to pull data from a slightly different scope without realizing it.

Take LinkedIn as an example. You might have access to a Page, but not the associated ad account in Campaign Manager. Or you might be able to see campaign-level data, but not creative-level performance.

The same applies in Google Ads or Meta. Access can vary by account, campaign, or even billing level. In larger organizations, it’s also common to have multiple accounts split by region or product, and not everyone has access to all of them.

So two people can quite easily be looking at “the same report” but be pulling data from slightly different scopes.

If you suspect this is the issue, check:

  • Are you looking at the same accounts or properties?
  • Do you have full access to all required data?
  • Is the API connection authorized for all accounts? 

Missing access means missing data, and this comes up more often than people expect.

2. Are you measuring the same thing?

Once you’ve ruled out the simple issues, the next step is to check whether the numbers are actually comparable.

Because very often, they aren’t.

Currency conversions

Another obvious one, but one I've seen trip people up countless times.

When you're combining data across markets, currencies need to be standardized. But different systems handle that differently.

Some use the exchange rate at the time of the transaction. Others use daily averages. Others convert everything at reporting time.

Even small differences in exchange rates will show up once you aggregate spend. And if you’re spending any meaningful amount on your campaigns, those differences can quickly balloon.

If you suspect this is the issue, check:

  • Are you using the same reporting currency?
  • Are exchange rates applied in the same way?
  • Is conversion happening at the same stage? 
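Here is a quick sketch of why the conversion stage matters, using entirely made-up spend figures and exchange rates. Converting each day's spend at that day's rate gives a different total than converting the aggregate at a single reporting-time rate:

```python
# Hypothetical EUR spend per day, with made-up daily EUR->USD rates.
spend_eur = [("2024-06-01", 1000.0), ("2024-06-02", 1000.0), ("2024-06-03", 1000.0)]
daily_rates = {"2024-06-01": 1.08, "2024-06-02": 1.07, "2024-06-03": 1.09}

# Method A: convert each day's spend at that day's rate, then sum.
usd_per_day = sum(amount * daily_rates[day] for day, amount in spend_eur)

# Method B: sum first, then convert at a single reporting-time rate.
reporting_rate = 1.09
usd_at_reporting = sum(amount for _, amount in spend_eur) * reporting_rate

print(f"Per-day conversion:        ${usd_per_day:.2f}")       # $3240.00
print(f"Reporting-time conversion: ${usd_at_reporting:.2f}")  # $3270.00
```

A $30 gap on €3,000 of spend looks trivial, but scale it to real budgets across many markets and the discrepancy is anything but.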

Granularity and reporting breakdowns

This is where things start to get more nuanced.

Most platforms let you view data at different levels. Campaign vs ad vs post. Add breakdowns like device or geography, and the numbers shift again.

Even if the underlying activity is the same, the way the data is grouped changes how it is aggregated.

This is because the same interaction can be counted multiple times depending on how it’s broken down.

A single user might see multiple ads, or appear across different devices or regions. At one level that activity is counted once, at another it may be counted multiple times.

So two reports can be “right” and still not match.

If you suspect this is the issue, check:

  • Are you comparing the same level (campaign, ad, post)?
  • Are the same breakdowns applied?
  • Are you aggregating data in the same way? 

Different grouping leads to different totals.
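A tiny sketch makes this visible. Using a hypothetical impression log, the unique-user count at campaign level differs from the sum of a per-device breakdown, because a user who appears on two devices is deduplicated once overall but once per device in the breakdown:

```python
# Hypothetical impression log: (user_id, device).
impressions = [
    ("u1", "mobile"), ("u1", "desktop"),  # same user on two devices
    ("u2", "mobile"),
    ("u3", "desktop"), ("u3", "desktop"),
]

# Campaign level: each user counted once across everything.
campaign_unique = len({user for user, _ in impressions})

# Device breakdown: deduplicated within each device, so a multi-device
# user appears once per device they used.
by_device = {}
for user, device in impressions:
    by_device.setdefault(device, set()).add(user)
breakdown_total = sum(len(users) for users in by_device.values())

print(campaign_unique, breakdown_total)  # 3 vs 4: both correct, different rules
```

Both numbers are correct answers to slightly different questions, which is exactly why the reports don't match.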

Metrics that cannot be aggregated

Some metrics look simple, but behave very differently once you start combining them.

Common offenders include reach, unique users, frequency, CTR, and CPC.

And that's because these aren't simple counts; they rely on deduplication or calculation.

If you’re not familiar with the concept, it’s worth understanding what a non-aggregatable metric actually is and why they behave this way.

Summing reach across campaigns, for example, will almost always inflate the number. And averaging CTR will give you the wrong answer whenever the campaigns differ in volume.

This trips people up constantly, particularly with reach. Check out How to Report on Reach for a deeper look at the common pitfalls and how to handle them properly.

If you suspect this is the issue, check:

  • Are you summing non-aggregatable metrics like reach?
  • Are you averaging calculated metrics like CTR or CPC?
  • Are you comparing calculated values instead of raw data? 

If in doubt, go back to clicks, impressions, and spend.
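The CTR pitfall is easy to demonstrate with two hypothetical campaigns of very different sizes. Averaging their CTRs weights the tiny campaign equally with the large one; recomputing from raw clicks and impressions gives the true blended rate:

```python
# Hypothetical per-campaign stats: (clicks, impressions).
campaigns = [(10, 1000), (500, 10000)]  # CTRs: 1.0% and 5.0%

# Wrong: average the per-campaign CTRs (ignores campaign size).
ctrs = [clicks / imps for clicks, imps in campaigns]
naive_ctr = sum(ctrs) / len(ctrs)

# Right: go back to the raw counts and recompute.
total_clicks = sum(c for c, _ in campaigns)
total_imps = sum(i for _, i in campaigns)
true_ctr = total_clicks / total_imps

print(f"Averaged CTR: {naive_ctr:.2%}")  # 3.00%
print(f"True CTR:     {true_ctr:.2%}")   # 4.64%
```

The same logic applies to CPC and any other ratio metric: aggregate the numerator and denominator first, then divide.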

3. Are you even looking at the same data set?

At this point, the issue usually isn’t definitions.

It’s the data itself.

How data is retrieved and reported

Even within a single platform, there isn’t always one dataset.

Different reporting layers exist depending on how the data is retrieved. UI, exports, APIs, and warehouse tables don’t always match.

Google Analytics 4 is a good example of this. The reporting interface, Explorations, API, and BigQuery exports can all return slightly different results.

Exports can also be limited. Google Search Console, for example, only returns 1,000 rows, and query data is partially hidden for privacy reasons.

APIs introduce another layer of complexity. They can limit both what data is returned and how much can be retrieved at any given time (in fact, API quota limits are a topic in their own right).

What this means is that if one person is looking at data via an API and another is looking at an exported spreadsheet from the platform, there’s a good chance the numbers won’t match.

If you suspect this is the issue, check:

  • Are you comparing UI, export, and API data?
  • Are there any row limits, sampling, or thresholds?
  • Are both reports using the same dataset? 

Same platform does not mean the same data. This is where having a single, consistent data layer becomes important. Rather than relying on different reporting endpoints, tools like Adverity pull and standardize data from source APIs into one governed dataset.

Data refresh and overwrite rules

Another common cause is that the data has simply changed.

Marketing data isn’t static. Conversions come in late. Engagement accumulates. Platforms update historical data.

Most systems deal with this by refreshing recent data on a rolling basis.

One system might overwrite the last 7 days. Another the last 30. Another might not overwrite anything at all.

So even if everything else matches, one report may just be more up to date.

If you suspect this is the issue, check:

  • Are both systems refreshing data on the same schedule?
  • Are they overwriting the same time window?
  • Are attribution windows aligned? 

How to fix marketing data discrepancies

By this point, it should be clear that most data discrepancies aren’t caused by something being broken.

They’re caused by differences in definitions, methodologies, and data handling.

Different systems count things differently. Different reports structure data differently. Different pipelines refresh data at different times.

So when numbers don’t match, the goal isn’t to force them to.

It is to understand why they don’t.

In practice, this is what building a “single source of truth” really means. Not finding one perfect number, but agreeing on a consistent way of defining and reporting your data, and applying that framework across your organization.

In practice, that often means using a platform that enforces those definitions across all your data sources.

Once you do that, discrepancies become far less of a problem. Not because they disappear, but because you understand where they come from.

If you’re regularly running into mismatched numbers and want a faster way to diagnose the issue, you can use this checklist to identify the most common causes. 


Find out more about how Adverity can help you today.

Book a demo