Appreciate this — and you’ve nailed a really important nuance.
Signal processing speed is the real constraint, not tooling adoption. A lot of teams mistake automation for leverage, when in reality they’ve just sped up the production of low-value signals.
Love the LinkedIn dwell time vs website depth example too. That kind of weighting forces you to ask what behaviour actually indicates cognitive engagement, not just “what’s easiest to measure.”
The output-metric trap (email CTR, last-touch clicks, etc.) is exactly why so many stacks feel busy but underpowered. Scoring infrastructure first → activation second is the inversion most teams still haven’t made.
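To make the inversion concrete, here's a minimal sketch of what "scoring infrastructure first" might look like — a weighted score that favours cognitive-engagement signals (dwell time, site depth) over output metrics (CTR, last-touch clicks). Every signal name and weight below is a hypothetical illustration, not anything from an actual stack:

```python
# Hypothetical signal weights -- names and values are illustrative only.
# The point: cognitive-engagement signals outweigh easy-to-measure output metrics.
SIGNAL_WEIGHTS = {
    "linkedin_dwell": 0.5,       # depth of attention on content
    "website_depth": 0.3,        # breadth of on-site exploration
    "email_ctr": 0.1,            # output metric: easy to measure, weak signal
    "last_touch_click": 0.1,     # output metric: attribution artefact
}

def engagement_score(signals: dict) -> float:
    """Weighted sum of normalised signal values (each assumed in 0..1)."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())

# A lead with high dwell and depth scores well even with mediocre CTR.
lead = {"linkedin_dwell": 0.8, "website_depth": 0.6, "email_ctr": 0.9}
print(round(engagement_score(lead), 2))  # 0.67
```

Activation then becomes a thresholding decision on top of this score, rather than the starting point.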
I'll always believe volume has a place, though — and if you actively learn from that volume, you can scale faster.