ABM 2.0 Playbook: How the Best B2B Teams Will Win in 2025–2026
From match-rate theatre to Engaged Account Scoring—this is the operating system that turns ABM from “targeting” into pipeline velocity.
The Problem You’re Feeling (And Why It Won’t Fix Itself)
“We’re doing ABM—we’ve got a named account list and ads running to them. But conversions could be better. How do we refine it?”
I’ve heard this seven times this month from B2B CMOs. Technically, that setup is ABM. Practically, it’s ABM’s warm-up, not the game.
Jaleh Rezaei said it best:
“ABM is the teenage sex of B2B marketing - everyone talks about it; few actually do it; almost nobody is good at it.”
In 2025, the difference between doing ABM and doing it well comes down to two things:
How you measure success (account momentum, not vanity signals), and
How your org is built to sustain it (CoE + pods + GTM alignment).
This playbook gives you both—measurement + organisation—so you can stop chasing clicks and start compounding pipeline.
Part 1 - The Measurement Maturity Test
From match rates to Engaged Accounts
Most programs grade the warm-up: “90% match rate.” “3× reach.” “+X% HVAs.”
Great, but match rate ≠ reach ≠ pipeline. You can have strong identity resolution and still have no momentum.
Define “Engaged Account” (what you really want)
An Engaged Account is an account-level state, not a click. It’s where identity, attention, and behaviour converge inside a 30–45-day window:
Verified identity: IP + HEM + UID (not cookies)
ICP fit: firmo + techno + TAL hygiene + disqualifiers
Attention Index (AIx) ≥ threshold: a cross-channel score, normalised across attention metrics
HVA score ≥ threshold: 2+ meaningful on-site actions
Buying-group penetration: ≥2 roles or ≥3 users engaged
Recency: signal in last 14 days
That’s when you stop talking about matches and start managing momentum.
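The six criteria above can be sketched as a simple account-level predicate. This is an illustrative sketch, not a spec: the class, field names, and thresholds (AIx ≥ 50, HVA ≥ 10) are assumptions you'd tune per segment and ACV.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed thresholds -- calibrate per segment/ACV.
AIX_MIN = 50
HVA_MIN = 10

@dataclass
class AccountSignals:
    identity_verified: bool   # IP + HEM + UID resolved (not cookies)
    icp_fit: bool             # firmo + techno + TAL hygiene, no disqualifiers
    aix: float                # Attention Index, 0-100
    hva_score: float          # weighted sum of high-value on-site actions
    roles_engaged: int        # distinct buying-group roles seen
    users_engaged: int        # distinct users seen
    last_signal: date         # most recent signal of any kind

def is_engaged_account(a: AccountSignals, today: date) -> bool:
    """An account is Engaged only when all six criteria converge."""
    recent = (today - a.last_signal) <= timedelta(days=14)
    buying_group = a.roles_engaged >= 2 or a.users_engaged >= 3
    return (a.identity_verified and a.icp_fit
            and a.aix >= AIX_MIN and a.hva_score >= HVA_MIN
            and buying_group and recent)
```

The point of the predicate form: an Engaged Account is a state an account is in or out of, not a counter you increment per click.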
Attention metrics: the great normaliser
Clicks bias the score toward clickable channels and undervalue CTV, DOOH, podcast, and other non-clickable paid channels. Mature teams build an Attention Index (AIx) that normalises exposure quality across channels:
Viewable time, dwell, scroll depth
Video quartiles / completion
CTV/DOOH exposure logs + proximity + probabilistic models on vendor site visits
Post-exposure site behaviour (brand → direct → page depth)
Roll into a single 0–100 AIx so display, CTV, social, and DOOH can be compared without pretending CTR is the currency of B2B.
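A minimal version of that roll-up: scale each raw telemetry signal onto 0–100 against channel-appropriate bounds, then blend with weights. The bounds and weights below are placeholders, not recommendations; calibrate them from your own exposure data.

```python
def scale(value: float, lo: float, hi: float) -> float:
    """Clamp a raw telemetry value and map it onto a 0-100 scale."""
    if hi <= lo:
        return 0.0
    return max(0.0, min(100.0, 100.0 * (value - lo) / (hi - lo)))

def attention_index(dwell_s: float, scroll_pct: float,
                    video_completion: float, post_exposure_pages: int) -> float:
    """Blend normalised attention signals into a single 0-100 AIx.
    Weights and bounds are assumed values for illustration."""
    components = [
        (0.30, scale(dwell_s, 0, 60)),             # viewable/dwell time, seconds
        (0.20, scale(scroll_pct, 0, 100)),         # scroll depth
        (0.30, scale(video_completion, 0, 1.0)),   # video quartiles -> completion rate
        (0.20, scale(post_exposure_pages, 0, 5)),  # post-exposure site depth
    ]
    return sum(weight * score for weight, score in components)
```

Because every channel lands on the same 0–100 scale, a CTV completion and a display dwell become comparable without CTR ever entering the picture.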
HVAs are your currency
High Value Actions reveal buying intent at the account level.
The key part of that sentence is “at the account level.” Many vendors measure HVAs without linking them to accounts: they run media buying and measurement as two distinct silos, which does not work. A single unified solution needs to drive both the targeting and the reporting, providing a 360-degree feedback loop that includes HVAs.
Then weaponise HVAs: weight them, score them, decay them, and multiply by buying-group depth:
Heavy (10 pts): pricing page, API/spec docs, demo scheduler open
Medium (5 pts): case study reads, product tour progress, repeat sessions
Light (2 pts): calculator opens, long dwell, multi-page engaged visit
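The weight-score-decay mechanic can be sketched in a few lines. The action names mirror the tiers above; the 14-day half-life is an assumption, not a prescription.

```python
from datetime import date

# Weights taken from the Heavy/Medium/Light tiers above.
HVA_WEIGHTS = {
    "pricing_page": 10, "api_docs": 10, "demo_scheduler": 10,   # Heavy
    "case_study": 5, "product_tour": 5, "repeat_session": 5,    # Medium
    "calculator": 2, "long_dwell": 2, "multi_page_visit": 2,    # Light
}

HALF_LIFE_DAYS = 14  # assumed decay half-life -- tune to your sales cycle

def hva_score(events: list[tuple[str, date]], today: date) -> float:
    """Weighted HVA score with exponential recency decay.
    An action today counts in full; one a half-life ago counts half."""
    total = 0.0
    for action, when in events:
        age_days = (today - when).days
        total += HVA_WEIGHTS.get(action, 0) * 0.5 ** (age_days / HALF_LIFE_DAYS)
    return total
```

Decay matters: a pricing-page visit from last quarter should not look identical to one from yesterday.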
The Engaged Account Score (EAS)
Collapse Fit + Attention + Behaviour into one optimisation signal:
EAS = (0.35 × Fit) + (0.25 × Attention) + (0.35 × HVA_Score)
× (1 + 0.20 × BuyingGroupDepth) × RecencyMultiplier
Aware: EAS ≥ 35
Engaged: EAS ≥ 60
Sales-Ready: EAS ≥ 75
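The formula and thresholds above translate directly into code. This sketch assumes Fit, Attention, and HVA_Score each arrive pre-normalised to 0–100, and that BuyingGroupDepth and the recency multiplier come from your own pipeline.

```python
def engaged_account_score(fit: float, attention: float, hva_score: float,
                          buying_group_depth: int,
                          recency_multiplier: float) -> float:
    """EAS per the formula above: weighted base, scaled by
    buying-group depth and recency. Inputs assumed 0-100 normalised."""
    base = 0.35 * fit + 0.25 * attention + 0.35 * hva_score
    return base * (1 + 0.20 * buying_group_depth) * recency_multiplier

def stage(eas: float) -> str:
    """Map an EAS onto the Aware / Engaged / Sales-Ready bands."""
    if eas >= 75:
        return "Sales-Ready"
    if eas >= 60:
        return "Engaged"
    if eas >= 35:
        return "Aware"
    return "Unqualified"
```

For example, an account with Fit 80, Attention 70, HVA 60, two buying-group roles, and full recency scores well into Sales-Ready territory.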
This is what your CRM, CDP, and DSP should optimise toward—not CTR or impressions.
Kill “Match-Rate Theatre” And Replace It With Vendor Evaluation That Actually Predicts Pipeline
Advanced teams don’t let vendors perform match-rate theatre: bragging about identity precision without account-level behavioural lift. There is no clear way to validate those numbers, so they are often, ahem, modelled…
Match rate alone is simplistic and lacks the nuance needed to evaluate vendors. Instead, score partners against a seven-point matrix:
Identity Quality – Do they resolve corporate networks, WFH IPs, and Safari/iOS persistently (IP + HEM + UID), across your real TAL footprint? A true identity graph is required to power the collapsing open web, and most vendors have not built one
Attention Depth – Can they prove dwell, scroll, CTV exposure, and video quartiles—and connect them to accounts, from paid-media exposure through to on-site behaviour?
HVA Visibility – Can they attribute HVAs to accounts and support clean-room joins / log-level exports?
Latency – Can signals round-trip in <48 hours so media, creative, and sequencing react mid-flight, not after a QBR?
Reach Quality – Is reach truly within your TAL (and on premium, curated supply), or padded with adjacent lookalikes and junk inventory?
Experimentability – Will they run holdouts, split-graphs, geo/time tests to prove incremental lift?
Pricing Transparency – Clear CPM, data fees, platform rake. If you can’t trace the pound, you can’t judge efficiency.
Pass/Fail: If a vendor can’t help you prove account-level HVA lift within 45 days, they’re a presentation deck, not a partner.
Part 2 — The Organisational Maturity Test
What sets advanced programs apart in Q4 2025 as we head into 2026?
Even a perfect measurement layer fails without the organisational spine to drive execution internally.
1) A real Center of Excellence (CoE)
Only ~22% of orgs have one. The CoE is not branding; it’s the backbone:
Templating: TAL hygiene, EAS, HVA taxonomy, playbooks, QA
Governance: identity use, data rights, measurement standards
Enablement: training pods in signal literacy & orchestration
Iteration cadence: weekly—review leading/lagging indicators and update playbooks
2) Dedicated ABM practitioners or pods
~57% have dedicated ABM headcount; the rest treat ABM like a campaign.
Run small pods (Strategist + Analyst + SDR liaison, shared creative/media) measured on Engaged Account → Pipeline velocity, not MQLs.
3) Account-based strategy across GTM
ABM done right isn’t a marketing motion - it’s a GTM operating system:
The same account-level data and EAS drive sales prioritisation, marketing activation, and CS outreach.
Friction drops. Momentum compounds.
Part 3 — Put It All Together (The Four-Layer Lens)
1) Identity & Fit
Do we know who we’re targeting, with clean TAL data?
Do: ICP scoring, firmo + techno overlays, disqualifiers, deduplication, and sales-capacity-aligned list sizes. Qualify engagement with website signals: are the accounts showing in-market behaviour based on their interactions with the vendor’s site?
2) Attention & Behaviour
Can we measure how accounts engage?
Do: Unified AIx across display/CTV/DOOH/social, paired with weighted HVAs mapped to buying roles.
3) Measurement & Feedback
Can we see what’s working weekly (not quarterly)?
Do: EAS in dashboards shared with sales; track leading (under our control) + lagging (market response) indicators.
4) Org Design
Can we sustain it as we scale?
Do: CoE governance + dedicated pods + cross-functional planning cadence.
Outcome: you stop grading impressions and start optimising account progression velocity.
Part 4 — The Six Places ABM Breaks (and How to Fix Each)
1) Account List Building & Prioritisation
Bad: static sales wishlists; broad vertical targeting; no intent or research; too many accounts for sales capacity.
Good: dynamic, use-case clusters; Cluster ICP → Future Pipeline → Active Focus; weekly reprioritisation with sales.
How to fix: mine last 12–18 months of won/lost deals to define qualification + disqualification; align list size to buyers per account × sales hours available.
2) Account Research
Bad: unstandardised “random Googling,” AI scraping, siloed notes → account blindness.
Good: documented research questions (what/why/how/where to store), insider signals (initiatives, hiring, org shape), stored centrally (CRM).
How to fix: standardise the process first; use Clay/AI after you know what you need and how you’ll use it; produce account love letters and buyer-enablement content, not just “personalised” cold emails.
3) Buyer Engagement
Bad: linear drips, static content to 100s, automated outreach.
Good: human-first, multi-threaded engagement with marketing air cover; sales runs meaningful non-sales touchpoints, event bridges, and social commentary.
How to fix: co-create playbooks with sales; map pre/during/post event flows; build progressive profiling into webinars, roundtables, and research.
4) Measurement
Bad: MQLs and bookings only (no diagnosis).
Good: Leading indicators (actions you control) + lagging indicators (responses) + pipeline.
How to fix: start in a sheet; set weekly targets; run weekly planning/review with sales to adapt in real time.
5) Playbook Orchestration
Bad: static, quarterly plans designed up front.
Good: live, adaptive orchestration; content follows account reality; sprints with sales to focus on the hottest clusters/accounts.
How to fix: build a quarterly scaffold (research → event → content → outreach), then adjust weekly based on EAS and buying-group penetration.
6) Conversion Benchmarks
North Star: Account → Pipeline conversion.
Low-efficacy: <1%
High-efficacy: 5–15% (20%+ for best programs/clusters)
How to fix: if <1%, revisit account selection & research; if 1–5%, fix engagement orchestration; if >5%, double-down.
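The benchmark bands map cleanly onto a triage rule; a minimal sketch, using the thresholds above expressed as decimal rates:

```python
def triage(account_to_pipeline_rate: float) -> str:
    """Map Account -> Pipeline conversion onto the fix-it guidance above.
    Rate is a decimal, e.g. 0.03 for 3%."""
    if account_to_pipeline_rate < 0.01:
        return "Revisit account selection & research"
    if account_to_pipeline_rate < 0.05:
        return "Fix engagement orchestration"
    return "Double down"
```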
Implementation Checklist (copy/paste for your team)
Define Engaged Account & EAS thresholds for your motion
Build AIx v1: normalise attention across channels
Finalise HVA taxonomy + weights + decay + buying-group multiplier
Audit TAL hygiene; add disqualification rules; right-size list to capacity
Standardise account research (what/why/how/where); store in CRM
Co-create sales + marketing playbooks; add progressive profiling
Replace vanity metrics with leading/lagging/pipeline dashboard
Schedule a weekly planning + review with sales (non-negotiable)
Re-contract vendors on the 7-point Partner Evaluation Matrix
Add creative/sequence hooks that react to <48h signal changes
TL;DR (for your execs)
Engaged Accounts → Pipeline is the only true north.
EAS = Fit + Attention + HVAs, multiplied by buying-group depth and recency.
Kill match-rate theatre; demand account-level HVA lift in ≤45 days.
Build a CoE; fund ABM pods; treat account-based strategy as your GTM OS.
Review weekly, not quarterly. Optimise movement, not motion.
The Complete ABM Operating System (Templates & How-Tos)
Log in to members.theb2bstack.com to access the tools. Paid members get their Substack email verified within 24 hours, and access is then granted to the members hub.
1) Engaged Account Score (EAS) Toolkit
Editable calculator (with default weights, decay, and buying-group multiplier)
Threshold guidance for Aware / Engaged / Sales-Ready by segment/ACV
CRM & DSP optimisation tips (how to push EAS back into activation)
2) Attention Index (AIx) Builder
Channel-by-channel normalisation guide (display, video, CTV, DOOH, social)
Data sources & minimum viable telemetry (what to collect if you can’t collect it all)
How to use AIx in creative sequencing and budget routing
3) HVA Taxonomy + Scoring Model
100+ HVAs mapped to funnel stages & buying roles (weights + decay)
Account-level roll-up logic (how to avoid double-counting)
“HVA → action” recipes (what to do when X happens)
4) Partner Evaluation Matrix (7-point)
Scorecard + pass/fail rules
Example “45-day lift” test designs (holdouts, split-graphs, geo/time)
Data contract checklist (identity, latency, exports, pricing)
5) CoE Charter (Done-for-You Outline)
Roles, responsibilities, definitions, governance guardrails
Weekly & quarterly cadences; change-control & experiment registry
Enablement plan (signal literacy → pod playbooks)
Final Word
ABM isn’t a more targeted lead-gen tactic. It’s the account-based operating system for GTM. When you measure attention, behaviour, and fit at the account level—and structure your org to act weekly—you stop chasing attribution hacks and start compounding pipeline velocity.
Let’s build the system that makes “we’re doing ABM” mean something.

