Cross-Platform Performance Reporting

Redefining how advertisers trust, measure, and fund prospecting

Role

UX Researcher

Method

In-Depth Interview

Duration

3 months

Overview

In 2025, Google’s Retail Ads (RAX) team set an ambitious objective: generate $XXXM in incremental booked revenue from the Artemis advertiser cohort. To hit this goal, Google needed to improve its standing in upper-funnel prospecting campaigns. However, advertisers were increasingly relying on third-party attribution tools (e.g., Northbeam, TripleWhale) as their “source of truth” for performance data. These independent platforms offered neutrality, unified cross-channel insights, and speed, directly challenging Google Ads’ credibility in performance reporting.

As the Senior UX Researcher embedded in RAX, I led a strategic research initiative to diagnose this challenge. Our mission: understand how advertisers measure Google vs. Meta campaign performance across platforms, and reveal what it would take to restore trust in Google’s metrics. The research insights would inform Google’s Prospecting Goals measurement strategy and unlock revenue opportunities by recapturing prospecting budgets.

*(Due to confidentiality, specific metrics and examples are omitted.)*

The Challenge

Advertisers don’t make budget decisions inside a single platform.

They operate across ecosystems where:

  • Attribution is fragmented

  • Platforms over-claim success

  • Latency delays action

  • Revenue truth lives outside ad systems

In this environment, advertisers default to neutral arbiters—third-party tools that reconcile performance across platforms and align with backend sales data.

The result:
Google Ads data was directionally trusted, but rarely decisive.

When prospecting budgets were evaluated, Google was measured through efficiency lenses, while Meta was evaluated through acquisition narratives. This structural asymmetry limited Google’s growth—regardless of product capability.

What We Set Out to Understand

This work focused on understanding how advertisers form trust in performance data, how they reconcile discrepancies across platforms, and how prospecting success is ultimately judged. The goal was not to benchmark tools, but to surface the decision logic that governs real budget movement, especially when data conflicts and stakes are high.

Approach

I led and moderated in-depth interviews with Artemis advertisers actively investing across Google Ads, Meta, and third-party attribution platforms. Rather than documenting workflows, the research focused on mental models: how advertisers interpret data, where confidence breaks down, and what information they rely on when making irreversible budget decisions.

The work was conducted in close partnership with Product, UX, Engineering, and Legal, ensuring findings could translate directly into measurement strategy and roadmap decisions without compromising competitive or compliance constraints.




What We Learned

Advertisers do not rely on a single source of truth. Instead, trust is layered. Google’s native reporting is used for speed and day-to-day optimization, third-party tools are used to reconcile cross-platform performance and reduce bias, and backend sales data serves as the final authority. No system wins alone. Confidence emerges when these layers align.

Third-party tools are not viewed as competitors to platforms, but as infrastructure. They succeed not because they are more powerful, but because they are perceived as neutral. Their value lies in consolidating fragmented data, aligning with real revenue, and producing narratives advertisers can stand behind internally.

Although advertisers apply the same core metrics across platforms, they tolerate different outcomes. Google is expected to deliver efficiency due to explicit user intent, while Meta is allowed greater variance in pursuit of new customers. This expectation is reinforced by how performance is reported and explained, not by campaign mechanics alone.

Trust erodes when attribution feels inflated or delayed. Latency and opacity were not described as technical flaws, but as barriers to confident decision-making—particularly during seasonal peaks when speed and certainty matter most.

Why This Mattered

This research reframed measurement as a growth lever. The issue was not building better dashboards, but earning decisiveness at the moment budgets are set. Advertisers fund what they can explain. When reporting fails to support cross-platform narratives, investment follows the tools that do.

For Google to compete in prospecting, reporting needed to move beyond optimization and support justification. Transparency, attribution control, and alignment with external sources of truth became essential, not optional.

Impact on Strategy and Roadmap

The findings informed how Google approached prospecting measurement, attribution transparency, and reporting credibility. Rather than attempting to replace third-party tools, teams began designing for complementarity—positioning Google as best-in-class for execution and optimization, while integrating cleanly into advertisers’ broader measurement ecosystems.

This shift influenced roadmap prioritization, measurement strategy, and how performance is framed to advertisers. It helped align cross-functional teams around a shared understanding of where trust is won and lost, and how product decisions directly affect revenue flow.


Executive Takeaway

Growth does not stall when products underperform. It stalls when measurement loses authority. This work clarified that reclaiming prospecting investment requires designing for trust, not just accuracy. By grounding strategy in real advertiser decision logic, the team moved from debating attribution to designing credibility, unlocking both roadmap clarity and revenue opportunity.

*Due to confidentiality, detailed data, internal metrics, and research artifacts are not shown. I’m happy to walk through the work live.*


Interested in connecting?

Let’s talk projects, collaborations, or anything design & research!