Why Clicks Are a Lie: Building Goals-Based Optimisation for B2B
B2B typically lacks the conversion event which feeds adtech optimisation - here is how we can solve that from a product POV
Clicks are the currency of digital media - but in B2B, I would argue that they’re mostly a distraction.
For years, marketers have been trained to obsess over click-through rates, cost per click, and top-of-funnel "engagement." But here’s the problem: clicks don’t correlate with pipeline, revenue, or real intent, especially not in complex, multi-touch B2B journeys.
In fact, there’s data suggesting that higher click-through rates may inversely correlate with outcomes - meaning that optimising for clicks can actively work against your goals.
1. The Case Against Clicks in B2B
The traditional logic goes like this: if your ad gets clicked, it’s working. But in B2B - especially in account-based (ABM/ABA/ABX) strategies - that logic breaks down fast.
Let’s start with one of the biggest distortions: account-based campaigns.
When you run a Target Account List (TAL) through programmatic channels, your impressions per account are inherently low - often 1–5 impressions per user per campaign. Now let’s assume those impressions land against these users in environments where clicks are even a possibility - immediately discounting programmatic audio, digital out-of-home, CTV and other elite-tier addressable channels - which means a single click from an account can generate a 20–33% click-through rate. But that’s not a real metric - that’s statistical noise. It’s not predictive, it’s not scalable, and it certainly doesn’t reflect qualified buying behavior.
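To see why a handful of impressions can’t support a CTR read, here’s a minimal sketch using the Wilson score interval - standard statistics, nothing TAL-specific - applied to the “1 click in 5 impressions” scenario:

```python
import math

def wilson_interval(clicks, impressions, z=1.96):
    """95% Wilson score confidence interval for an observed CTR."""
    p = clicks / impressions
    denom = 1 + z**2 / impressions
    centre = (p + z**2 / (2 * impressions)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / impressions
                                   + z**2 / (4 * impressions**2))
    return centre - half, centre + half

# 1 click across 5 impressions: a "20% CTR" on paper
lo, hi = wilson_interval(1, 5)
print(f"true CTR plausibly anywhere from {lo:.1%} to {hi:.1%}")
```

The interval spans from a few percent to over 60% - the observed “20% CTR” carries essentially no information about the account’s real propensity to click, let alone to buy.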
To give this some product perspective: when I spent years building optimisation engine specs for Demand Side Platform (DSP) ops layers, we always worked to a MINIMUM of 100,000 impressions, 500–1k clicks, and therefore a CTR of around 0.5–1%. These numbers are skewed UP because I was building for a native advertising platform, where CTRs run ahead of banner display.
Our objective with these ML models was to estimate CTR with a low standard error - avoiding noise such as accidental clicks and low-impression, high-click samples, and avoiding data sets which would inherently cause false optimisation. We wanted statistical stability: when we were designing systems to predict the inventory that would give, say, a 10% CTR lift with >95% confidence (e.g. 0.5% to 0.55% CTR), we’d want around 1.2m impressions to run an A/B sample. 100k impressions would give us nothing more than a directional optimisation view. When we were building real-time CTR prediction, we’d want at least 10,000 labelled samples (impressions with click/no-click labels) to start training supervised learning models - with balanced datasets, because clicks are scarce at under half a percent - and frankly, for advanced modelling we’d be looking at millions of rows.
All this is to say - click optimisation and click based learning needs significant pools of data. The data simply isn’t there to optimise to clicks at an ABM/account-level.
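The impression numbers above can be sanity-checked with the textbook two-proportion sample-size formula. This is an approximation - the exact figure depends on your choice of power and test - but it lands in the same ballpark as the ~1.2m impressions quoted:

```python
import math

def sample_size_two_proportions(p1, p2, power=0.95):
    """Approximate per-arm sample size for a two-sided two-proportion
    z-test at alpha = 0.05. Standard formula; illustrative only."""
    z_alpha = 1.959964                      # two-sided 95%
    z_beta = {0.8: 0.841621, 0.9: 1.281552, 0.95: 1.644854}[power]
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# detect a 0.5% -> 0.55% CTR lift (a 10% relative lift)
n = sample_size_two_proportions(0.005, 0.0055)
print(f"{n:,} impressions per arm, {2 * n:,} total")
```

At 95% power this works out to roughly 540k impressions per arm - over a million total - which is why 100k impressions can only ever be directional.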
Now, this is a product view formed from trying to build optimisation layers in DSPs for click events. But in the complex world of B2B - where real attribution events like sales are essentially non-existent in the digital journey - does the click hold value for understanding which accounts are likely to take a solid ‘next action’ on their nurture journey?
Sadly it isn’t that easy, and in fact, high CTRs are often a red flag in B2B. After over a decade working in the space, I've seen this pattern repeatedly:
When click-through rates rise well above the average (e.g. 2%+), post-click engagement and on-site goal completions tend to collapse.
Ouch. Why?
Accidental clicks - especially on mobile or in-app inventory - where poor UX meets limited real estate and often cluttered layouts, leading to false positives. AKA ‘fat thumbs’.
Low-quality publisher environments generate more clicks than high-quality ones - inventory is inflated, placements are aggressive, and user intent is minimal. Cheaper inventory with more aggressive ad placements (compared to premium publishers) is the perfect ‘crack’ for DSP optimisation layers: cheaper CPM × higher click rate = optimising to these placements, even when the backend metrics suck.
Irrelevant audiences - clicks from non-decision-makers, students, or bots, often propagated by buying third-party audience pools which are inflated and un-auditable. These data sets are sold on cost-per-thousand (CPM) models, which by definition means the seller wants to make them as big as possible to generate more revenue. The brutal truth is that these inflated audience pools mean there are more matching IDs for the DSP to bid on and a greater chance of driving higher CTRs - which in turn makes it look like these data sets are performing when in fact they are not.
The end result? These environments often over-index on CTR but underperform against downstream metrics like form fills, pricing page views, or meaningful site engagement.
The Problem is Structural
Most DSPs - like The Trade Desk, DV360, and Adform - were built for FMCG and eCommerce, because those advertisers have near-limitless budgets when their sales metrics are being hit, and short funnel cycles and high-frequency purchases make CTR a semi-reasonable proxy for performance. The fast sales cycle always lent itself to third-party cookies with their short time-to-live, and the single decision-maker in B2C meant the cookie-based trading model worked.
Drive lots of clicks —> use post view cookies to attribute —> ‘generate’ lots of sales —> get a bigger budget —> rinse and repeat. The flywheel of adtech
But B2B is not that world.
In B2B:
Buying journeys are long - in enterprise contexts they are ~330 days or 11 months in 2025
Buyers are anonymous and numerous, and their careers demand a level of rigour that doesn’t go into a $20 FMCG product. The buying committee is now 6–10 stakeholders, averaging 7.4 but in some cases upwards of 20 people making decisions - with further friction from cross-team collaboration, global cultural differences, and the need for consensus at scale
Intent is slow and scattered; it oscillates around peaks and troughs in engagement, driven by internal procurement, board meetings, sign-offs, and stakeholder escalation as research gets passed upstream. Long periods of near dormancy are punctuated by spikes in engagement (or ‘surges’, in Bombora’s world).
Complexity is growing; according to AdWeek’s deep dive, 77% of B2B buyers now describe their purchase as ‘very complex or difficult’.
Touch points are increasing; when Covid first struck in March 2020, the average number of touch points for a major B2B transaction was 17. Fast forward through the digital journey B2B brands have been on in the five years since, and that number is now 60 - and in some cases (strategic deals, high-value deals, critical deals) it can exceed 100. As described by Acorn, the B2B journey is now ‘longer, larger, and way more layered’
And of all the stats, the most important may be this: 97% of buyers will visit the vendor website before they buy, according to Corporate Visions, linked above. So whilst ads and their clicks may not drive the purchase, the website - with its ability (hopefully) to capture first-party intent, segment it, and link it back to high-value actions - will nearly always capture the buyers. That potentially validates the idea of building audiences and measuring off of high-value website interactions instead of clicks.
Therefore, CTR as a core KPI penalises quality and over-rewards noise.
2. What Actually Matters: High-Value Goals
So we just learned that our 2025 B2B buyer has:
An average buying cycle length of 11 months
77% of them are complex multi-faceted journeys
6-10 stakeholders make up their committee
60+ touch points before they buy
Self serve research is driving the journey - 70% complete the journey before talking to sales and 81% have a vendor in mind
91% know the vendor before their first meeting
Buyers re-define their problem 3.1 times before they buy
And they research across 4-10+ sources - and often 7-10 more during evaluation
AI/LLM layers are hiding research from the usual intent breadcrumbs of content consumption, further mystifying the buying journey
So when I think about a product and GTM strategy, these points come to mind
Model for longitudinal intent: the signals we use need to be persistent and stitched across months and months, not single sessions
Account-based product logic: these sessions need to be linked to accounts, not users. In fact, the average US employee tenure - as we have talked about before - has dipped to just 18 months, so the likelihood of engaged committee members leaving the account before the purchase is very real. Behavioural ebbs and flows therefore need mapping to companies, not people
Each loop needs its own content architecture - analyst reports, case studies, tools etc which the media journey needs to support
Media attribution needs to live amongst this thinking - over longer flights there will be dips in attribution which could last for months at a time, so feedback loops need to reward persistence, not just the first or last touch
Measurement cadence needs to capture this sentiment, with a focus on what happens in areas the brand can control (their website), knowing that 97% of all buyers will go to the website before they buy
If you want to build pipeline, your media needs to optimize toward behavior that actually correlates with intent.
What Actually Matters: Measuring High-Value Actions, Not Vanity Engagement
If you’re serious about turning media into pipeline, you need to stop measuring surface-level signals - and start instrumenting for deliberate, strategic behavior. That means moving beyond CTR, bounce rates, or time-on-site averages, and building a measurement architecture that reflects how real B2B buyers behave.
Let’s call these what they are: High-Value Actions (HVAs). These are not just “micro-conversions” or “engagement proxies.” These are the moments of signal that suggest buying intent, internal momentum, or stakeholder alignment within an account.
Here’s what I believe you should be measuring, scoring, and optimising toward in 2025 — and why.
a. Visit to a Purchase-Intent Page (e.g. Contact, Pricing, Request Demo)
Why it matters: These are the rarest, clearest moments of hand-raising behavior and it would stand to reason that a good percentage of our 97% of future buyers who are on our website make it to these sorts of pages. They indicate that an account has moved from exploration to evaluation.
Product insight:
Not all visits are equal. Track scroll depth, form visibility, hover behavior, and time on screen to filter out noise. Really key pages like these warrant heatmaps and deeper exploration - the ‘no purchase on digital’ excuse doesn’t pass muster here - think like a Direct to Consumer (D2C) brand and get into the data
Consider time-of-day and geo signal - visits at 9:17 AM from a corporate IP in Munich tell a very different story than a midnight bounce from a café IP in Jakarta.
What to optimise for:
The depth and intensity of interaction on these pages — not just raw visits.
b. Sustained Attention on Solution-Defining Pages (>90s Dwell Time)
Examples: Product overview, integrations, industry-specific use cases, customer proof
Why it matters: This is where buyers educate themselves. Pages that explain “how it works,” “why us,” or “why now” are key in shaping internal consensus.
Product insight:
90 seconds is a meaningful threshold. It suggests a real evaluation window, not a passive tab open. A fat-thumbed, high-CTR click would have long since bounced if the session wasn’t deliberate - especially in B2B, where solutions pages only speak to those actively researching.
Combine dwell time + engagement signals (scroll, clicks, internal navigation) for confidence.
What to optimise for:
Optimise creative and media placements that attract this kind of interaction, then weight bids toward inventory that delivers deep site engagement.
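Combining dwell time with corroborating signals, as described above, can be sketched as a simple qualification rule. The thresholds here are illustrative (only the 90-second dwell figure comes from the text); the point is that no single noisy metric should qualify a visit on its own:

```python
def qualifies_as_hva(dwell_seconds, scroll_depth_pct, internal_clicks):
    """Label a solution-page session as a high-value action.

    Requires the 90s dwell threshold PLUS at least one corroborating
    signal (scroll depth or internal navigation) - assumed cutoffs,
    to be calibrated against your own analytics data.
    """
    dwell_ok = dwell_seconds >= 90
    corroborating = [
        scroll_depth_pct >= 50,   # assumed: read at least half the page
        internal_clicks >= 1,     # assumed: navigated somewhere else on-site
    ]
    return dwell_ok and any(corroborating)

print(qualifies_as_hva(120, 70, 0))  # True: long dwell + deep scroll
print(qualifies_as_hva(95, 10, 0))   # False: dwell alone is not enough
```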
c. Form Fill Events - Especially High-Intent Forms
Examples: “Book a Demo,” “Talk to Sales,” “Start Free Trial”
Not: “Download Whitepaper” (unless extremely targeted)
Why it matters: In the sea of anonymous behavior, forms are one of the few hard identifiers. A form fill is an intent payload - and a potential entry point into multi-threaded buying activity.
Product insight:
Track not just the submission, but time on form, field abandonment, and return-to-complete behavior.
Use funnel-step tracking to see where friction occurs (e.g. 60% drop-off at “Phone Number” field? Fix it.)
What to optimise for:
Campaigns that drive qualified form completions — not sheer volume. Filter out job seekers and junior researchers by mapping email + IP to seniority and function.
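The funnel-step tracking idea above - finding where friction occurs field by field - reduces to a small calculation over ordered step counts. The funnel numbers here are hypothetical:

```python
def field_dropoff(funnel_counts):
    """Per-step drop-off rates from ordered (field, users_reached) pairs."""
    rates = {}
    for (field, reached), (_, nxt) in zip(funnel_counts, funnel_counts[1:]):
        rates[field] = 1 - nxt / reached
    return rates

# hypothetical funnel for a "Book a Demo" form
funnel = [("email", 1000), ("company", 900), ("phone", 850), ("submitted", 340)]
drop = field_dropoff(funnel)
print(drop)  # the phone field loses 60% of users who reach it
```

In this made-up example the phone-number field is the clear friction point - exactly the kind of finding that should trigger a form redesign before more media budget is spent driving traffic into it.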
d. Return Visits Within a Short Window (3–7 Days) AKA surging accounts
Why it matters: B2B buyers don't binge; they loop. A return visit in a short window suggests internal interest and ongoing research, or a link shared internally to other committee members
Product insight:
Look for repeat visits to the same section (e.g. pricing page twice = evaluation mode).
Track whether the second visit comes from a different person at the same company - a signal of team-based engagement. This is easier said than done, but cross-device graphs and device-level signals like MAIDs give a basis for attempting it, along with looking for regional or overseas exploration from the same company - the committee may be geographically disparate
If relevant to ticket size, look for potential visits from a corporate HQ - a potential signal of escalation to decision. Most big purchases pass through HQ
What to optimise for:
Media that seeds first-touch awareness — and retargeting logic that encourages smart re-entry. Your DSP should reward the loop, not just the first interaction.
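The return-visit-window heuristic above is simple to operationalise once visits are resolved to accounts. A minimal sketch, assuming visit timestamps are already attributed to a company domain:

```python
from datetime import datetime, timedelta

def surging_accounts(visits, window_days=7):
    """Flag accounts with a return visit inside a short window.

    `visits` is a list of (account, timestamp) pairs; an account
    'surges' when any two visits fall within `window_days` of each other.
    """
    by_account = {}
    for account, ts in visits:
        by_account.setdefault(account, []).append(ts)
    surging = set()
    for account, stamps in by_account.items():
        stamps.sort()
        if any(b - a <= timedelta(days=window_days)
               for a, b in zip(stamps, stamps[1:])):
            surging.add(account)
    return surging

visits = [
    ("acme.com", datetime(2025, 3, 1)),
    ("acme.com", datetime(2025, 3, 4)),    # return within 3 days -> surge
    ("globex.com", datetime(2025, 3, 1)),
    ("globex.com", datetime(2025, 3, 20)), # too far apart
]
print(surging_accounts(visits))  # {'acme.com'}
```

The surging set is exactly what a retargeting segment or bid-boost rule should key off - rewarding the loop rather than the first interaction.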
e. Engagement with Technical, Integration, or Security Content
Why it matters: These pages aren’t flashy - but they’re mission critical. Visits here suggest IT involvement, security reviews, or technical validation is underway. This is another layer of the solution exploration, and when these technical stakeholders are in play, the decision timeline is tightening
Product insight:
Visits to /security, /integrations, /api-docs, or /compliance are often the clearest mid-funnel signals for enterprise buyers. These often precede procurement involvement - meaning budget conversations are near.
What to optimise for:
Tactics that warm up the IT audience. Intent on technical pages can be your earliest “green light” that an opportunity is real.
f. Multi-Page Session with Lateral Exploration
Why it matters: A single page view might be curiosity. But five or more in one session - especially across distinct areas (e.g. Solutions > Pricing > Team > Security > Demo) — suggests an informed internal conversation is underway.
Product insight:
Use journey maps to visualise common high-conversion paths. Optimise content layout accordingly.
Track exit points - do they bounce after hitting “Pricing”? That’s a friction signal. It also could be a signal to surround the account with media, serving creative optimised to any known pricing friction
What to optimise for:
Inventory and creative that initiates full-path exploration - not just single-page landings.
g. Behavior from New Stakeholders in the Same Account
Why it matters: B2B buying is team-based. Seeing new visitors from the same company within a short window is a signal of internal consensus-building.
Product insight:
Match IP, domain, or behavioural fingerprint to company entity (not just user).
Track time gaps between first and second visitor — a <3 day delay often maps to same-week internal referral.
What to optimise for:
Signals that indicate account-wide engagement velocity, not just user-level metrics.
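Detecting a new stakeholder from the same account inside the <3-day referral window can be sketched as follows. This assumes an upstream identity layer has already produced stable visitor IDs and account attribution, which is the hard part in practice:

```python
from datetime import datetime, timedelta

def internal_referral_signals(events, max_gap_days=3):
    """Flag accounts where a second, DISTINCT visitor appears shortly
    after the first - a proxy for a same-week internal referral.

    `events` is a list of (account, visitor_id, timestamp) tuples.
    """
    first_seen = {}  # account -> (visitor_id, timestamp) of earliest visit
    flagged = set()
    for account, visitor, ts in sorted(events, key=lambda e: e[2]):
        if account not in first_seen:
            first_seen[account] = (visitor, ts)
        else:
            v0, t0 = first_seen[account]
            if visitor != v0 and ts - t0 <= timedelta(days=max_gap_days):
                flagged.add(account)
    return flagged

events = [
    ("acme.com", "visitor-a", datetime(2025, 5, 1, 9, 0)),
    ("acme.com", "visitor-b", datetime(2025, 5, 2, 14, 0)),   # new stakeholder
    ("globex.com", "visitor-c", datetime(2025, 5, 1, 9, 0)),
    ("globex.com", "visitor-c", datetime(2025, 5, 2, 9, 0)),  # same person returning
]
print(internal_referral_signals(events))  # {'acme.com'}
```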
So as we start to navigate beyond CTR and think about these sorts of signals, critically:
They’re harder to fake
They correlate with revenue
They can be mapped to accounts, not just anonymous users
But Here's the Catch: You Must Build the Measurement Layer First
Before you buy a single impression, you need to fully instrument your site:
Shameless plug: use FunnelFuel Analytics (our analytics stack built to track the above goals) or GA4 / Adobe / open-source tools to track micro and macro conversions.
De-anonymize corporate traffic with tools like Leadfeeder, Lead Forensics, or similar resolution overlays.
Define what good engagement looks like - not just page views, but scroll depth, revisit frequency, time on intent pages.
This foundational measurement layer is your source of truth in partnership with CRM. Without it, you’re flying blind. With it, you can move beyond volume to velocity - and eventually to qualification.
3. The Product Fix: Goals-Based Optimisation
This is where product strategy and media execution collide. This is taking our journey of click → HVA and productising it for optimising media buying, globally, in near-realtime - aka the sexy product bit!
Most demand-side platforms (DSPs) aren’t built for B2B optimisation. Their models are trained on fast-funnel, user-level conversions - not account-based goals or form completions that happen 6 sessions later in a different part of the world. Adtech - and having spent my career in it - was not built for this use case.
So if you’re serious about B2B performance, your product roadmap needs to include the following - or a partner who can help you execute on it:
Step 1: Define and Categorise Website Goals AKA High Value Actions
Label key on-site behaviours (e.g. contact page views, form starts, scroll depth) inside your analytics platform, and segment the pages by funnel stage
Organise them into a goals hierarchy:
Core Goals (e.g. Form Submissions)
Engagement Goals (e.g. Pricing Page View, >60s Time on Page)
Exploration Goals (e.g. Industry Use Case Page)
Step 2: Score and Aggregate by Account
Assign point values to each goal and calculate a composite account score - I have tooling for this which is available to paid members
Use IP de-anonymisation and fingerprinting to match these behaviors to known accounts
Store and analyse these over time to understand account progression
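Steps 1 and 2 can be sketched together: a goals hierarchy expressed as weights, rolled up into a composite score per account. The weights here are illustrative placeholders - in practice they should be calibrated against CRM outcomes:

```python
# illustrative weights for the goals hierarchy above - assumed values,
# to be calibrated against pipeline data
GOAL_WEIGHTS = {
    "form_submission": 50,    # Core Goal
    "pricing_page_view": 20,  # Engagement Goal
    "dwell_over_60s": 10,     # Engagement Goal
    "use_case_page_view": 5,  # Exploration Goal
}

def account_scores(goal_events):
    """Aggregate goal completions into a composite score per account.

    `goal_events` is a list of (account, goal_name) pairs - e.g. the
    output of IP de-anonymisation on top of the analytics layer.
    Unknown goal names contribute zero rather than raising.
    """
    scores = {}
    for account, goal in goal_events:
        scores[account] = scores.get(account, 0) + GOAL_WEIGHTS.get(goal, 0)
    return scores

goal_events = [
    ("acme.com", "pricing_page_view"),
    ("acme.com", "form_submission"),
    ("globex.com", "use_case_page_view"),
]
print(account_scores(goal_events))  # {'acme.com': 70, 'globex.com': 5}
```

Stored over time, these scores are what reveal account progression - and they are the signal that gets fed back into the media engine in Step 3.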
Step 3: Feed Goals Back Into the Media Engine
Fire custom pixels or postbacks when goals are completed, fuelling the optimisation engines to work to your bespoke inputs
Push those signals into platforms like The Trade Desk via custom events or use intermediary layers built into the TradeDesk like FunnelFuel to manage the pipelines (targeting + reporting)
Build a proprietary optimisation layer — or use AI agents (e.g. agentic-style models) to model and feed goal-weighted conversions back into your bidding logic
Step 4: Capture the Invisible Clicks
Many high-value sessions won’t look like standard post-click activity - especially on Safari, iOS, and in cookieless environments
Your measurement layer must include dark session detection, anonymous goal captures, and browser-agnostic event tracking
This trains the optimisation loop on real behavior, not just what Chrome allows
Product Principle:
Don’t let your DSP dictate what performance looks like. Build a B2B-native optimisation engine, and teach your ad tech stack what matters.
4. Real-World Example: Optimising to Clicks vs. Optimising to Goals
Campaign A (Click-Optimised):
0.25% CTR
500 clicks
4 Contact Page views
→ Contact Rate = 0.8%
Campaign B (Goal-Optimised):
0.15% CTR
300 clicks
12 Contact Page views
→ Contact Rate = 4%
Even though Campaign B drove fewer clicks, it tripled the volume of high-value on-site actions. And the account match rate (based on IP) revealed a significantly stronger ICP overlap - particularly among enterprise tech buyers in the UK and DACH regions.
That’s the power of shifting from engagement volume to goal density.
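The arithmetic behind the comparison is worth making explicit - the goal-optimised campaign converts clicks to contact-page views at five times the rate, even on lower click volume:

```python
def contact_rate(clicks, contact_views):
    """Contact-page views as a share of clicks."""
    return contact_views / clicks

rate_a = contact_rate(500, 4)    # Campaign A: click-optimised
rate_b = contact_rate(300, 12)   # Campaign B: goal-optimised
print(f"A: {rate_a:.1%}, B: {rate_b:.1%}, lift: {rate_b / rate_a:.0f}x")
# A: 0.8%, B: 4.0%, lift: 5x
```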
5. Final Thought: Stop Optimising for the Metric That Lies
Clicks are easy. They’re cheap. They make reports look good.
But in B2B, they’re not predictive, they’re not accountable, and they’re not how pipeline is built.
If you want to win:
Build your measurement infrastructure before your media plan
Capture account-level goals, not just user clicks
Train your DSP to think like a B2B marketer
And most importantly - remember this:
Clicks don’t buy. Accounts do. Optimise accordingly.

