From Match Rates to Moats: Programmatic Secrets + Why B2B Vendors Must Own Their Identity Layer
Rented Reach vs Owned Identity: The New Divide in B2B Programmatic
The industry says we solved cookies.
We did not.
We replaced observable identity with modelled confidence and hoped revenue would not notice. A game that was always going to demand prodigious levels of skill if we were to pull it off in B2B.
Programmatic has always been audience buying. Technologically advanced, scaled, maximised audience pools, with impression-by-impression bidding that delivers the opportunity to be incredibly precise. A buyer's paradise when executed to its maximum capability. A chance to cherry-pick exactly which audiences we want, when we want them, and to chase the promised land where there is no wastage. For all its failings, and programmatic has its share, the opportunities it offers are enormous in a world where signal exhaust is weakening everywhere else.
The world of identity has fragmented and changed remarkably in a short space of time, going from the pretence of certainty with cookies to universal ID replacements and systems built on proxies and models.
This is the reality of privacy today. Its direction of travel is clear - it will only get tougher. This is the world in which you operate, so maximising the opportunities it brings is key if you want to get ahead. I believe brands should focus on owning more of their identity, and play a keener role in ensuring the shaping, ownership and structure of their audiences is truly aligned to what they know they need, rather than renting and hoping.
Yet many of the B2B teams I observe are not maximising identity or architecting their own audiences - and those that are, are doing it half-baked.
The opportunity is to own the identity spine of your audiences. To own this critical layer and dictate it into media buying, controlling your own destiny rather than renting and hoping for the best.
So how the heck do we do that?
Most cookieless solutions today are probabilistic stitching engines which, when deployed into the buy-side layers, are wrapped in polished dashboards. They report match rates north of 85 percent. They surface intent surges at the account tier. They optimise bids dynamically. It is rented infrastructure built on you supplying one simple input - such as a TAL - into adtech layers. This is arguably lazy and uninvested, considering the shaping of the audiences is critical to the success of any campaign.
What most outsourced platforms cannot do, in most cases, is prove that the right members of a real buying committee were reachable at the moments that moved pipeline. They are not incentivised to do so. They want scale to earn money against small cuts of big budgets. They want to trade inputs for big scale. This helps their economics. This is why a realignment between expert B2B adtech and outcome-based models is overdue.
That gap between marketing teams' objectives and the realities of what lives behind the curtain is widening, driven by commercial misalignment. Aligning fully on outcomes-based buying would need the rest of what I am writing about today. Getting closer alignment on audience inputs and identity, however, is an easier win that brings brand and adtech closer.
I often say we are not in a post-cookie era. We are in a post-verifiability era, where confidence in single-sourced data vendors is increasingly low and increasingly questioned, and where the biggest and best players are stitching signals to improve their probabilities of success. This is how you can do that: own your audiences more closely rather than renting infrastructure, and align media buying with real business outcomes.
Reframe: this is not a targeting problem, but B2B needs far more accurate audiences than B2C
The dominant framing is technical.
Cookies expired.
We adopted new identifiers.
Problem contained. Move on.
That narrative ignores what actually changed.
Under third-party cookies, flawed as they were, you could observe persistence. A user returned. A device remained stable. Exposure paths were imperfect but inspectable.
Post-cookie, most identity is inferred and relies on daisy-chained identifiers.
Graphs extrapolate from hashed emails, IP ranges, device clusters, publisher logins and cooperative data pools. Confidence scores are generated through black-box logic. Resolution is probabilistic by design. They take these tight inputs, then stretch the relationships between them as far as they will go, to build unabridged scale at all costs.
In consumer advertising, probabilistic lift can be sufficient, because the cohort we are building is often created from simple inputs - men, under 40, middle or higher income and so on. Broad categories, with bigger data and bigger margins for error, because half the time the inferred customer isn't the buyer anyway. It can muddle along and work.
Imagine if this trick worked in B2B - if only it were this easy
In my years working in B2C programmatic, there was a dirty secret: one targeting trick that could markedly improve your odds of a last-touch attributed sale while reducing the bid price. Sounds like magic? The trick? Target the late night and early hours of 'social nights' like Fridays and Saturdays, and catch the inebriated impulse buy. The trick was helped economically by the programmatic standard of running extended office-hours targeting and then shutting down overnight. It worked particularly well because with less competition the bid price needed went down - the common thought was that only the fraud bots came out to play at night. The CPAs could be outrageously good by programmatic standards.
Such tricks don't work here in B2B. In enterprise B2B, where one deal can justify six months of spend, probabilistic ambiguity becomes a material risk. Enterprise B2B knows its customer: aiming for a high earner and hitting a mid earner can still work in B2C, but being that far wide of your mark here equates to total failure. And there is no late-night impulse buying to pad the stats either.
Because enterprise revenue is not driven by individuals.
It is driven by coordinated committees of stakeholders who play individual roles in procurement and who coordinate to form a consensus.
Structural shift: what actually changed beneath the surface - it's much deeper than cookies alone
Three deeper forces are reshaping B2B programmatic.
1. Signal half-life has collapsed - across cookies and other signals
Browser policies shortened identifier persistence. In simpler terms, browsers made it much harder for a cookie to stick around for long, so cookie shelf life collapsed in the later years of the cookie era.
Corporate VPN usage fragmented IP-level continuity. Mobile and desktop journeys rarely unify deterministically in enterprise environments.
Intent spikes now decay faster than most media teams model.
Very few B2B organisations quantify signal half-life empirically. They ingest third-party “in-market” flags as if they were stable states. They are not.
In enterprise SaaS, I routinely see initial research activity precede opportunity creation by 30 to 120 days. If your intent vendor refreshes weekly without modelling decay curves, you are often bidding against historical curiosity, not active deal velocity. Worse still, if an account just closed a piece of business, it could be as far 'out of the market' as it will ever be: totally disinterested in CRM solutions because it has a fresh 12-month contract with SalesForce sat there, ink still wet.
Cutting-edge teams model signal decay explicitly. It's critical to success in the signals era.
They map behavioural events to opportunity creation timestamps. They calculate probability curves over time. They bid aggressively inside the statistically validated window and throttle outside it.
This is not optimisation.
It is actuarial discipline applied to media.
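A minimal sketch of what that discipline can look like in practice, assuming an exponential decay model fitted from historical CRM lags. The lag data, the floor, and the function names are illustrative assumptions of mine, not any vendor's API:

```python
import math

# Hypothetical sketch: estimate a signal's half-life from historical lags
# (days between a behavioural event and opportunity creation), then derive
# a bid multiplier that throttles outside the validated window.
# All numbers are illustrative, not benchmarks.

def estimate_half_life(lags_days):
    """Fit an exponential decay to observed event-to-opportunity lags.
    Under an exponential model the mean lag equals 1/lambda, and the
    half-life is ln(2)/lambda."""
    mean_lag = sum(lags_days) / len(lags_days)
    lam = 1.0 / mean_lag
    return math.log(2) / lam

def bid_multiplier(days_since_signal, half_life, floor=0.2):
    """Scale bids by the remaining probability mass of the signal,
    with a floor so stale accounts are throttled, not abandoned."""
    decay = 0.5 ** (days_since_signal / half_life)
    return max(decay, floor)

# Illustrative lags pulled from closed-won CRM records
lags = [14, 30, 45, 60, 90, 21, 38]
hl = estimate_half_life(lags)
print(round(hl, 1))                       # estimated half-life in days
print(round(bid_multiplier(7, hl), 2))    # recent signal: bid near full
print(round(bid_multiplier(120, hl), 2))  # stale signal: floored
```

The point is not the specific curve shape, but that the bid window is derived from your own revenue history rather than a vendor's refresh cadence.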
2. Buying committees widened while identity resolution narrowed
Enterprise deals now involve 6 to 12 stakeholders as standard. In regulated sectors, 15 is common.
Most identity graphs still resolve around primary contacts or dominant device clusters.
The question is not whether the account is in the graph.
The question is committee coverage ratio.
You can model this mathematically.
Start with historical closed-won data.
For each account, extract:
Number of distinct stakeholders involved
Functional roles represented
Time between first engagement and close
Now compare that against media-reachable identities inside your graph during the same period.
If your reachable set consistently covers only 40 percent of historical stakeholder roles, your media strategy is structurally underpowered.
Few organisations run this analysis.
Fewer are comfortable with what it reveals.
Advanced teams treat buying group coverage as a quantifiable metric. They assign role-weighted reach targets. They design creative sequencing by functional persona. They optimise for stakeholder saturation, not impression volume.
This changes budget allocation dramatically.
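The committee coverage calculation above can be sketched in a few lines. The role names and weights here are assumptions for illustration; in practice both come from your CRM's closed-won history and your identity graph exports:

```python
# Hedged sketch of role-weighted buying committee coverage.
# Weights reflect how much each function historically mattered to
# closing - illustrative values, not benchmarks.
ROLE_WEIGHTS = {
    "economic_buyer": 0.30,
    "technical_evaluator": 0.25,
    "security_reviewer": 0.20,
    "end_user_champion": 0.15,
    "procurement": 0.10,
}

def committee_coverage(required_roles, reachable_roles):
    """Role-weighted share of the buying committee your media can reach."""
    covered = sum(ROLE_WEIGHTS[r] for r in required_roles if r in reachable_roles)
    total = sum(ROLE_WEIGHTS[r] for r in required_roles)
    return covered / total if total else 0.0

# One account: CRM says five roles were involved; the graph reaches three
required = {"economic_buyer", "technical_evaluator", "security_reviewer",
            "end_user_champion", "procurement"}
reachable = {"technical_evaluator", "end_user_champion", "procurement"}
print(round(committee_coverage(required, reachable), 2))  # 0.5
```

Run per account across a closed-won cohort, the distribution of this ratio tells you whether your media strategy is structurally underpowered, which is exactly the analysis few organisations are comfortable running.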
3. Platforms internalised identity power - but vendors should own their own identity contracts with the likes of LiveRamp
Walled gardens strengthened deterministic moats.
Open web programmatic became more dependent on third-party graphs and publisher-declared signals, many of which are horribly out of date.
Identity logic is now proprietary infrastructure. Companies like FunnelFuel win on their ability to match B2B audiences incredibly accurately, it becomes a core competency and differentiator
Many other vendors claim high accuracy but rarely allow independent reconciliation against CRM-level truth without restrictive NDAs and sampling constraints. In other words, they will parrot a big match rate, but they won't go to town to prove it.
The brutal reality is that commercial incentives favour inflated coverage narratives - a flaw in a market driven by proxies for success. The vendor with the biggest match rate often wins the campaign booking, even when that number is hugely unqualified and the bigger figure is their only perceived advantage. Even when the initial TAL was horribly unaddressable and a near-perfect match rate is nigh on impossible. The procurement energy, whether from an internal vendor marketing team or their agency of record, can be (not always, but sometimes or even often) focussed in the wrong direction: on match rates rather than real proxies for success. This is the fundamental flaw in relying on external partners - a brand should take more ownership of, and focus on, better criteria for judgement, rather than just looking for the biggest match rate number.
This shifts risk to the brand as buyer, because their agency conduits may be asking the wrong questions, and it means the buyer could be investing eight figures a year through a vendor selected on the wrong inputs. That directly correlates to whether paid media 'works for them', and it can make or break a CMO's career.
The more opaque the match logic, the harder it is to negotiate on quality rather than scale, and the less likely that media dollars invested will map to real success.
Sophisticated operators respond differently. They take ownership of their identity by working directly with identity vendors to shape their audiences upstream of activation. They structure these identity contracts with validation clauses. They focus on quality:
Payment tiers tied to verified account-level match rates against closed-won cohorts.
Quarterly reconciliation windows.
Right-to-audit provisions on methodology changes.
Clawbacks where statistically significant degradation is proven.
Most B2B teams accept CPM uplifts for cookieless inventory without renegotiating data risk.
That is commercially naive.
Measuring What Matters + Designing High-Value Action (HVA) models that withstand CFO scrutiny
If you want control back, you need internal proof of what actually correlates to revenue. This lays a foundation that can lead towards more outcomes-based buying models or, at the very least, greater alignment on what winning really looks like.
It's all easier said than done when the one part of B2B's digitalisation yet to happen at scale is the transaction, meaning the ultimate B2C source of truth - did I get sales? - is not on the table to track, score and attribute towards. Clicks and other poor but easily tracked proxies got measured instead, and we still see CTR goals in 2026. Poor proxies are plentiful and commonly include MQL velocity, CTR, last touch and modelled conversions.
Instead, the real starting point for determining B2B success and account progression should be high-value account actions. These are micro-conversions with proven value: steps that can be modelled from actual transactions and analysed at statistical scale to ensure they are meaningful.
Start with revenue and work backwards. The CRM holds the book on which accounts closed.
Analyse two years of closed-won opportunities.
Identify the behavioural sequences statistically linked to those accounts. (NB: this needs B2B account-level analytics like FunnelFuel Journey - this account-level lens is not in B2C solutions like GA4.) In our CRM we are looking for signals like:
Pipeline creation probability
Deal velocity
Average contract value
Then we map those account behaviours back in our analytics to find meaningful digital interactions. Examples often include:
Repeated return visits across multiple stakeholders - more visits from more users in tighter clusters.
Deep technical documentation engagement, which indicates integration research
Security or compliance page consumption.
Trial environment provisioning events.
Booking demos, configurators, and pricing interactions.
Build logistic or survival models that quantify these relationships at the account tier.
Document confidence intervals.
Model false positives.
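Before a full logistic or survival model, even a simple lift calculation makes the relationship tangible. This is a hedged stdlib sketch: the conversion counts are made up for illustration, and the log-scale confidence interval (the standard Katz method for a relative risk) is one reasonable choice among several:

```python
import math

# Illustrative sketch: how much more likely are accounts showing a set of
# behavioural markers to convert, with a 95% confidence interval?
# In practice the counts come from two years of closed-won CRM data
# joined to account-level analytics.

def lift_with_ci(conv_marked, n_marked, conv_base, n_base, z=1.96):
    """Relative risk of conversion for marked vs baseline accounts,
    with a CI computed on the log scale (Katz method)."""
    p1, p0 = conv_marked / n_marked, conv_base / n_base
    rr = p1 / p0
    se = math.sqrt((1 - p1) / conv_marked + (1 - p0) / conv_base)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical cohort: 43 of 200 marked accounts converted,
# versus 50 of 1000 baseline accounts.
rr, lo, hi = lift_with_ci(43, 200, 50, 1000)
print(round(rr, 1), round(lo, 1), round(hi, 1))  # 4.3 2.9 6.3
```

Documenting the interval, not just the point estimate, is what lets the number survive finance scrutiny.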
When a CFO sees that accounts exhibiting three specific behavioural markers convert at 4.3x baseline probability with a 95 percent confidence band, media becomes an investment instrument, not a marketing expense.
This is the spine.
Everything else should feed it.
Structuring identity vendor contracts in post-cookie reality
Most identity deals are volume-based.
Price per matched ID.
Price per resolved account.
Platform surcharge for cookieless coverage.
Very few are quality-adjusted.
A more defensible structure includes:
Baseline reconciliation against historical closed-won cohorts
Ongoing randomised control validation at account tier
Degradation triggers that adjust pricing if match quality drops
Explicit disclosure requirements when graph methodology materially changes
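A degradation trigger of the kind listed above has to define "statistically significant" up front. One plausible sketch, assuming quarterly reconciliation counts; the test choice (a one-sided two-proportion z-test) and the numbers are my illustrative assumptions, not contract language:

```python
import math

# Hedged sketch of a contract degradation trigger: compare this quarter's
# verified match rate against the contracted baseline, and only adjust
# pricing when the drop is statistically significant.

def degradation_significant(base_matched, base_total,
                            cur_matched, cur_total, z_crit=1.645):
    """One-sided two-proportion z-test: has match quality dropped
    significantly versus the baseline quarter?"""
    p1 = base_matched / base_total
    p2 = cur_matched / cur_total
    pooled = (base_matched + cur_matched) / (base_total + cur_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / base_total + 1 / cur_total))
    z = (p1 - p2) / se
    return z > z_crit

# Baseline quarter: 850 of 1000 accounts verified. This quarter: 780 of 1000.
print(degradation_significant(850, 1000, 780, 1000))  # True
```

Writing the trigger as an explicit calculation removes the ambiguity both sides would otherwise argue over at renewal.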
Identity vendors will resist.
That resistance reveals margin assumptions.
If identity is strategic infrastructure, it should be governed like infrastructure, not like display inventory.
LLM discovery further destabilises third-party identity
The next destabiliser is already visible.
LLM-driven discovery surfaces abstract user journeys. There is no tracking in them, and the nature of very long prompts and extreme personalisation makes this a world far more removed from Google's blunt one-to-three-keyword searches than many realise.
Enterprise buyers increasingly start research inside AI interfaces that summarise vendor landscapes without traditional clickstreams.
Referral data is obfuscated, if a referral even happens (how many people really check the sources?).
User agents are masked.
Publisher signals weaken.
This compresses observable top-of-funnel signals further.
If your model depends on third-party behavioural breadcrumbs across the open web, LLM interfaces will erode them.
First-party instrumentation becomes more critical.
Direct traffic spikes.
Branded search lift.
Content depth anomalies. Account-level views, even if imperfect - as they always will be.
These become proxy indicators of off-platform discovery.
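Detecting those proxy indicators can start very simply. A hedged sketch, flagging direct-traffic spikes with a rolling z-score against the prior window; the window, threshold, and traffic figures are illustrative assumptions:

```python
import statistics

# Hypothetical sketch: flag daily direct-traffic spikes as a proxy for
# off-platform (e.g. LLM-driven) discovery. A day is flagged when its
# visits sit more than z_threshold standard deviations above the mean
# of the preceding `window` days.

def spike_days(daily_visits, window=7, z_threshold=3.0):
    flagged = []
    for i in range(window, len(daily_visits)):
        prior = daily_visits[i - window:i]
        mu = statistics.mean(prior)
        sd = statistics.pstdev(prior)
        # Skip flat windows where a z-score is undefined
        if sd > 0 and (daily_visits[i] - mu) / sd > z_threshold:
            flagged.append(i)
    return flagged

# A stable baseline with one anomalous day
visits = [100, 104, 98, 101, 99, 103, 97, 240, 102, 100]
print(spike_days(visits))  # [7]
```

The same pattern applies to branded search volume or documentation depth; the interpretation layer is what you cannot outsource.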
You cannot outsource interpretation of these patterns to a graph vendor who cannot see them.
The economics of renting versus owning over three years
Renting identity looks cheaper in quarter one.
No data engineering headcount.
No modelling investment.
No political friction across teams.
But over a three-year horizon, the economics invert.
Consider:
Annual identity and intent vendor spend.
CPM premiums for cookieless inventory.
Wasted impressions due to stale or incomplete committee coverage.
Margin compression from bidding on decayed signals.
Now compare that with:
One data engineer.
One revenue analyst.
Warehouse compute.
Modelling tooling.
Owning the signal spine creates compounding advantages.
Better forecasting accuracy.
More efficient bid multipliers.
Negotiation leverage with vendors.
Stronger board-level credibility.
Rental spend scales linearly with impressions.
Owned signal architecture compounds with every deal closed.
The delta becomes material by year two.
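The rent-versus-own comparison above reduces to a back-of-envelope model. Every figure below is an illustrative assumption, not a benchmark; swap in your own vendor fees, headcount costs, and impression growth:

```python
# Hedged sketch: rental spend scales with impression volume, while owned
# signal architecture is an upfront build plus a roughly flat run cost.

def crossover_year(rent_annual, rent_growth, own_upfront, own_annual,
                   horizon=5):
    """First year in which cumulative owned cost undercuts cumulative
    rented cost, or None if it never does within the horizon."""
    rent_cum = 0.0
    for y in range(1, horizon + 1):
        rent_cum += rent_annual * (1 + rent_growth) ** (y - 1)
        own_cum = own_upfront + own_annual * y
        if own_cum < rent_cum:
            return y
    return None

# Illustrative inputs: $900k/yr in vendor fees, CPM premiums and waste
# growing 15%/yr, versus an $800k build plus $550k/yr run cost
# (engineer, analyst, compute, tooling).
print(crossover_year(900_000, 0.15, 800_000, 550_000))  # 2
```

With these assumed inputs the owned path undercuts renting in year two, which is the shape of the argument; the exercise is worth running with your own numbers before dismissing the build.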
Operator implications
Serious B2B teams should:
Audit buying committee coverage mathematically, not anecdotally.
Model signal half-life empirically and align bids to probability curves.
Build account-level HVA models validated against historical revenue.
Renegotiate identity contracts around quality, not volume.
Instrument first-party data aggressively in anticipation of LLM-driven signal abstraction.
Media platforms should consume validated models.
They should not define them.
Zoom out:
Identity in B2B is becoming a capital allocation question.
Teams renting opaque graphs will experience gradual performance erosion masked by dashboard optimism.
Teams building proprietary signal architecture will convert marketing from spend to statistical advantage.
Programmatic is not dying.
Blind trust in third-party identity is.
The advantage over the next cycle will belong to those who own their spine.

