Case Study
Fintech
How Nova Lending Built a Lead Scoring Model That Sales Actually Used
A commercial lending platform had a lead scoring model that had been built 18 months ago and never validated. Sales had stopped trusting it. Marketing kept sending scored leads. Nobody was measuring whether the scores predicted anything.
+44% conversion lift
22% shorter sales cycle
38% cost-per-loan reduction
We're a tech company helping B2B teams extract CRM data, find revenue leaks, and unlock growth. Our approach is simple: combine AI with strategy so you can focus on closing what matters most.
The situation
Nova Lending connected small business borrowers with commercial lenders for working capital and equipment financing. The marketing team ran paid acquisition across Google, LinkedIn, and industry directories, generating 800-1,200 leads per month. The scoring model had been built by a marketing ops consultant during a period of rapid growth and was based on firmographic criteria: business age, revenue range, industry, and a simplified form completion score.
By the time we joined, the scoring model had been running for 18 months without any validation against outcomes. The sales team's unofficial policy was to call everyone and use the score to prioritize only when the queue backed up. When asked directly, sales managers said they didn't trust the scores because "they didn't predict which leads would actually fund."
What we found
We pulled 18 months of lead data, including scores, sales activity, application status, and funding outcomes, and ran a correlation analysis between each scoring criterion and the ultimate outcome: a funded loan.
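The check itself doesn't require anything exotic. A minimal sketch of that kind of validation pass, assuming a flat CRM export with hypothetical column names (one column per criterion, plus a 0/1 funded flag), looks like this:

```python
# Minimal sketch of the validation pass described above. Column names
# (business_age_years, revenue_range_score, industry_score,
# form_completion_score, lead_score, funded) are hypothetical stand-ins
# for the fields in the CRM export; funded is a 0/1 outcome flag.
import pandas as pd
from scipy.stats import pointbiserialr

leads = pd.read_csv("leads_18_months.csv")

criteria = [
    "business_age_years",
    "revenue_range_score",
    "industry_score",
    "form_completion_score",
    "lead_score",  # the composite score itself
]

# Point-biserial correlation: binary outcome (funded) vs. each criterion.
for col in criteria:
    valid = leads[[col, "funded"]].dropna()
    r, p = pointbiserialr(valid["funded"], valid[col])
    print(f"{col:24s} r={r:+.3f}  p={p:.3f}  n={len(valid)}")
```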
The results invalidated three of the five scoring criteria. Business revenue range had essentially no correlation with funding probability. Industry had weak correlation. Form completion score had moderate correlation but was being weighted far too heavily relative to its predictive value.
Two factors that weren't in the scoring model had significant predictive value. Time-in-business over 4 years correlated with funding probability at 2.3x the rate of the existing business age criterion (which used a broader band). And the specific entry point into the funnel mattered enormously: leads from industry-specific directories funded at 3.8x the rate of leads from generic financial comparison sites, but both were being treated as equivalent in the model.
The source finding had immediate budget implications. Industry directory leads were being acquired at a higher CPL than comparison site leads, which had kept budget allocation weighted toward the cheaper source. On a cost-per-funded-loan basis, the economics were reversed.
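The arithmetic behind that reversal is simple: cost per funded loan is roughly cost per lead divided by funding rate, so a source with a higher CPL but a funding rate several times higher can still be the cheaper way to buy a funded loan. A sketch of that comparison, again using hypothetical column names:

```python
# Sketch of the per-source economics, with hypothetical columns: source_type,
# cpl (cost paid to acquire the lead), and funded (0/1 funded-loan outcome).
import pandas as pd

leads = pd.read_csv("leads_18_months.csv")

by_source = leads.groupby("source_type").agg(
    lead_count=("funded", "size"),
    funded_count=("funded", "sum"),
    avg_cpl=("cpl", "mean"),
)
by_source["funding_rate"] = by_source["funded_count"] / by_source["lead_count"]
# Cost per funded loan: what a source actually charges per loan that funds.
by_source["cost_per_funded_loan"] = by_source["avg_cpl"] / by_source["funding_rate"]

print(by_source.sort_values("cost_per_funded_loan"))
```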
What changed
The scoring model was rebuilt using only the two high-predictive criteria (time-in-business above 4 years, source type) alongside a simplified engagement score. Leads that met both primary criteria were flagged as high-priority regardless of other factors. The model was implemented with a monitoring framework to recalibrate every quarter as new funding data accumulated.
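An illustrative version of the rebuilt prioritization rule is below; the field names, source labels, and thresholds are hypothetical stand-ins, not Nova Lending's production logic. The quarterly recalibration sits outside this rule: as new funding data accumulates, the thresholds and source list get revisited against actual outcomes.

```python
# Illustrative version of the rebuilt prioritization rule. Field names,
# source labels, and thresholds are hypothetical, not production logic.
from dataclasses import dataclass

HIGH_PREDICTIVE_SOURCES = {"industry_directory"}

@dataclass
class Lead:
    time_in_business_years: float
    source_type: str
    engagement_score: float  # simplified 0-1 engagement signal

def priority(lead: Lead) -> str:
    meets_tenure = lead.time_in_business_years > 4
    meets_source = lead.source_type in HIGH_PREDICTIVE_SOURCES
    if meets_tenure and meets_source:
        # Both primary criteria met: high priority regardless of other factors.
        return "high"
    if meets_tenure or meets_source or lead.engagement_score >= 0.6:
        return "medium"
    return "low"

print(priority(Lead(6.0, "industry_directory", 0.2)))  # -> "high"
```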
Sales adopted the new model within 30 days because the rollout was built around a briefing with the sales team: here's what the data shows predicts funding, here's why we changed the weights, and here's what a high-priority lead actually looks like. That transparency produced the trust the previous model had never had.
Budget was reallocated toward industry directory channels. Lead volume from those channels increased, funded loan volume increased, and cost-per-funded-loan dropped 38%. Lead-to-application conversion increased 44% because reps were spending more time on the leads most likely to fund and less time on low-probability volume.
