It’s a big and well-known pain point for ABM teams: building scoring models that sales actually uses. Most get ignored. Marketing tweaks the thresholds, tries different data sources, nothing moves. The problem isn’t the math. It’s that account-level scores don’t tell Sales who to call or why it matters now.
That’s why I built Multi-Badge Account Scoring — to score accounts from a different perspective, based on the “why now” instead of aggregate activity scores I never saw work well.
Then Anna Tsymbalist, Head of ABM at Influ2, came in full ‘hold my beer’ mode and explained how she killed the old account scoring entirely and rebuilt her program around contact-level signals, timing, and immediate action.
It confirmed my doubts and clarified what actually matters.
Traditional account scoring fails (and we’ve all lived it)
When Anna started telling the Shelf story, I typed in the chat: “I am having PTSD.” Anna couldn’t hold her laugh. Lisa Pansini, another attendee, replied: “SAME.” We all knew exactly where this was going.
At her previous company, Shelf, they built the classic scoring model — MQLs, thresholds, conversion rates per stage. The math looked clean on paper. Then the CEO wanted more pipeline, and the solution seemed simple: lower the MQL threshold.
Sales came back with feedback: “You’re bringing shit leads. I’m talking to students. This is a waste of time.”
So they raised the threshold back. Still no pipeline. Then, they tracked which channels converted to revenue and discovered that most closed deals came from Capterra search bids, so they threw money at Capterra.
Turns out Capterra leads aren’t endless. There’s only a certain number of people searching for your category at any given time. The bid got expensive; the well dried up, and pipeline didn’t move.
That’s when they admitted: scoring doesn’t tell you who to talk to or why now. It just gives you a number that sales doesn’t trust.
What works instead
Anna Tsymbalist rebuilt her ABM program with no scoring. Just buying groups defined by name in Salesforce. Not roles or titles in aggregate — actual people. Sarah from RevOps. Melissa from Marketing Ops. The VP who signs off.
Contacts were divided by persona. Not because there was different messaging for each (there was no capacity for that, a constraint most of us know well), but to control what content each group saw and manage exposure.
The system: if Sarah clicks an ad about competitor comparison, the SDR gets a Slack alert the same day. Not a score. A name and a topic.
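That routing logic is simple enough to sketch. Here's a minimal, hypothetical version: the signal names, fields, and `send` hook are my own illustration, not Influ2's implementation. In production, `send` would post to something like a Slack incoming webhook; injecting it here keeps the logic testable.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EngagementSignal:
    contact: str   # a named person, e.g. "Sarah from RevOps"
    account: str
    topic: str     # what they engaged with, e.g. "competitor comparison"
    day: str       # when it happened

def build_alert(s: EngagementSignal) -> str:
    # Not a score: a name, a topic, and a date the SDR can act on today.
    return (f"{s.contact} at {s.account} engaged with "
            f"'{s.topic}' on {s.day}. Reach out today.")

def route_signal(s: EngagementSignal, send: Callable[[str], None]) -> None:
    # `send` is pluggable: print for testing, a Slack webhook call in production.
    send(build_alert(s))
```

The point of the sketch: nothing in the alert is a number. The SDR sees a person and a reason.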
The “ad fatigue” myth died.
The old model was fear-based: rotate contacts in three-month batches to avoid overexposure. Run ads for 90 days, pull back, let them rest, move to the next batch.
Then the question came: what if there’s no such thing as ad fatigue?
Anna replaced batches with continuous nurture. Contacts not being actively prospected by SDRs still saw content, just less of it, and SDRs only reached out when they saw engagement.
It worked because timing mattered more than volume.
If a company just hired a new CMO who’s overhauling everything, the VP of Marketing isn’t shopping for new tools. If they just got acquired and leadership is cutting costs, your pitch lands flat no matter how good it is.
Timing is everything. The problem was never the content. It was being there when the pain showed up.
Contact-level intent builds the case for Sales
The entire point of ABM is giving Sales a reason they can’t ignore. Not a score. Not a hunch. A case so clear there’s no excuse not to act.
Contact-level intent makes that possible. When you know Sarah from RevOps searched for your competitor twice this week, read three migration case studies, and posted about gaps in her current tool, you’re not guessing. You have the story.
That’s not personalization. It’s relevance. And it’s irrefutable.
Before contact-level intent, Sales got “this account is surging” and shrugged. They didn’t know who was interested, what sparked it, or whether it mattered. Now the signal comes with a name, a topic, and a timeline. The case builds itself.
That’s the shift ABM spent years grasping for.
How this challenges Multi-Badge Account Scoring
The foundation of Multi-Badge Account Scoring is: build the case for Sales to chase an account. One badge at a time. One clear reason.
Anna’s approach proves it works — but at the contact level, not the account level.
Her “badges” aren’t mine. I defined Expansion, Growth-Spike, Tech-Refresh, Intent-Surge. She runs persona-based campaigns, tracks engagement cycles, and flags competitor research signals.
That makes me question: Was I too specific? Do we need more badges? Or does the framework need to flex based on the signals you have?
What I know: the foundation is right. Build the case at the contact level. Give Sales one irrefutable reason to act now. The rest? Still figuring it out. This is a work in progress, and I don’t think it will ever stop being one.
When I developed Multi-Badge Account Scoring, I didn’t know about contact-level intent. I was working with account-level signals and trying to make them useful. The model worked because it forced one badge at a time — one clear story instead of a vague score.
When Influ2 launched contact-level intent, it enriched the entire model. It let me go from “This account shows Tech-Refresh signals” to “Sarah Doe, the Director of RevOps at Acme, and two managers reporting to her commented on social platforms that they were researching ‘Competitor X vs. Your Tool’ and read three migration articles this week.”
The badge still fires. But now there’s a name, a topic, and a timeline.
Anna’s reality shows that the model needs to keep maturing. The foundation — build the case at contact-level — is solid. But the badges themselves? Maybe they need to be more adaptive; or there are more than four. Maybe the model shouldn’t prescribe them at all.
Contact-level intent doesn’t just make Multi-Badge Account Scoring better. It forces it to evolve.
What the infrastructure requires
Building this infrastructure means admitting your buying groups are probably wrong.
Most teams define personas once, maybe twice, and assume they’re good for a year. But contact data decays fast: roughly 30% of contacts change jobs. The VP you spent three months warming up left the company two weeks ago, and you’re showing ads to someone who doesn’t work there anymore.
So the first requirement isn’t tooling. It’s discipline. Someone has to own revalidating buying groups monthly. It’s unglamorous work, because no one celebrates “we updated Salesforce” (except ABMers), but if you skip it, everything downstream breaks.
The second requirement is speed. If Sarah searches your competitor on Tuesday, the SDR needs to know by Wednesday. Not next week when they review the dashboard, because by then, Sarah’s already had three sales calls and an opinion.
That means alerts. Slack, Salesforce, email — whatever gets eyes on it the same day. And it means SDRs who trust the signal enough to act on it, which requires a leadership team that has stopped measuring MQLs.
Most teams don’t have all of this. But you don’t need all of it. Start with 50 accounts and define the buying groups by name. Then track engagement at the contact level and act on the signals the same day.
Prove it works before you try to scale, because if you can’t make it work for 50, it won’t work for 500.
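The 50-account pilot boils down to three moves: name the buying group, track events per person, surface same-day signals. A minimal sketch, with hypothetical names and data of my own invention:

```python
from collections import defaultdict
from datetime import date

# Buying groups defined by name, not by aggregate title (hypothetical data).
buying_groups = {
    "Acme": ["Sarah Doe (RevOps)", "Melissa Lee (Marketing Ops)"],
}

# Engagement events tracked per contact, not per account.
events = defaultdict(list)

def track(account: str, contact: str, topic: str, when: date) -> None:
    # Only named buying-group members count; everyone else is noise.
    if contact in buying_groups.get(account, []):
        events[contact].append((topic, when))

def todays_signals(today: date) -> list[str]:
    # Same-day signals only: by next week the moment has passed.
    return [f"{contact}: {topic}"
            for contact, evs in events.items()
            for topic, when in evs
            if when == today]
```

At 50 accounts this fits in a spreadsheet; the structure, not the tooling, is what you’re proving out.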
A game changer
Contact-level intent changes everything because it gives you the name, the topic, the timing. It makes the case irrefutable.
It aligns with the foundation of the Multi-Badge Account Scoring model: build the case at the contact level and give Sales one clear reason to act.
Anna Tsymbalist’s approach showed me that the badges themselves need to flex. And one more thing became clear:
There is no such thing as ad fatigue. There’s bad timing. If you’re in front of the right person when the pain shows up, they’ll engage. If you’re not, no amount of “nurturing” fixes it.
Contact-level intent doesn’t just make the Multi-Badge Account Scoring model better. It forces the model to evolve. If you can see exactly who searched what and when, the “badge” becomes less important than the story you build from the signal.
Contact-level intent takeaways
Contact-level intent tracks engagement and buying signals at the individual person level, not the account level. Instead of “Account XYZ is surging,” you get “Sarah from RevOps at Account XYZ searched your competitor twice this week, read three migration case studies, posted about gaps in her current tool.” Contact-level intent gives you the name, topic, and timing—building an irrefutable case for sales to act now.
Contact-level intent shifts ABM from account-level scores to buying groups defined by name in Salesforce—real people (Sarah from RevOps, Melissa from Marketing Ops, the VP who signs off). When Sarah clicks an ad about competitor comparison, the SDR gets a Slack alert the same day with a name and topic, not a score. This builds the case for sales: when you know Sarah searched your competitor twice this week and read migration case studies, you’re not guessing—you have the story. That’s relevance, not personalization.
Is ad fatigue real? No. The old model was fear-based: rotate contacts in three-month batches to avoid overexposure. Run ads for 90 days, pull back, let them rest. But there’s no such thing as ad fatigue, only bad timing. If you’re in front of the right person when the pain shows up, they’ll engage. Continuous nurture works: contacts not actively prospected by SDRs still see content (just less), and you only reach out when you see engagement. Timing matters more than volume.
