RevOps

Why Your Lead Scoring Model Doesn't Work

Sales ignores your lead scores. MQLs don't convert. You've rebuilt the model twice. The problem isn't the model.

January 2026 · 11 min read

You spent weeks building a lead scoring model. Company size, industry, job title for fit. Page visits, email opens, content downloads for engagement. The logic made sense. Marketing and sales agreed on the thresholds. MQL at 75 points, SQL at 100.

Six months later, sales doesn't look at the scores. They work leads based on gut feel and how recently the lead came in. When you analyze MQL-to-opportunity conversion, there's no correlation between score and conversion rate. High scores convert at roughly the same rate as medium scores. Sometimes worse.

So you rebuild the model. Tweak the weights. Add new factors. Maybe try predictive scoring from your marketing automation platform. Same result.

The model isn't the problem. The data feeding the model is.

How Scoring Breaks Down

Lead scoring combines two types of signals: fit (who they are) and engagement (what they do). Both depend entirely on data quality.

Fit Scoring and Missing Data

Fit scores evaluate whether a lead matches your ideal customer profile. Typical factors:

  • Job title / seniority (+20 points for Director+)
  • Company size (+15 points for 100-1000 employees)
  • Industry (+10 points for target industries)
  • Location (+5 points for target regions)

Now check your database. What percentage of leads have values in each of these fields?

For most companies: job title is populated on maybe 70% of leads. Company size? 40%. Industry? 50%. These aren't bad databases. This is normal. Forms don't ask for this information because it kills conversion rates.

But here's what happens to scoring:

Lead A has perfect fit data. Director at a 500-person company in your target industry. Gets +45 fit points.

Lead B is also a Director at a 500-person company in your target industry. But those fields are blank because they came through a form that only asked for email. Gets +0 fit points.

Same lead. Same potential. Completely different scores. Your model isn't evaluating Lead B's fit at all because there's nothing to evaluate.
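
Here's that failure mode in miniature. The field names and weights below are illustrative, not anyone's production model; the point is that a blank field falls straight through to zero, so the model treats "unknown" exactly like "bad fit."

```python
# Minimal sketch of additive fit scoring. Field names and weights are
# illustrative; a real model lives in your MAP/CRM, not in a script.

FIT_RULES = {
    "job_title": lambda v: 20 if v and "director" in v.lower() else 0,  # simplified Director+ check
    "employees": lambda v: 15 if v and 100 <= v <= 1000 else 0,
    "industry":  lambda v: 10 if v in {"SaaS", "FinTech"} else 0,       # stand-in target industries
    "country":   lambda v: 5 if v in {"US", "UK", "DE"} else 0,         # stand-in target regions
}

def fit_score(lead: dict) -> int:
    # A missing field comes back as None, and every rule scores None as 0 --
    # "we don't know" and "bad fit" become indistinguishable.
    return sum(rule(lead.get(field)) for field, rule in FIT_RULES.items())

lead_a = {"job_title": "Director of Marketing", "employees": 500, "industry": "SaaS"}
lead_b = {"email": "same.person@example.com"}   # the form only asked for email

print(fit_score(lead_a))  # 45
print(fit_score(lead_b))  # 0 -- same buyer, nothing to evaluate
```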

Engagement Scoring and Fragmented Data

Engagement scores measure behavior: page visits, email interactions, content consumption, webinar attendance. These signals should indicate buying intent.

The data problems here are different:

Duplicate records split engagement. If a lead exists twice in your system (different email addresses, created from different sources), their engagement is split across records. One record shows 3 page visits and 2 email opens. The other shows 5 page visits and a webinar attendance. Neither record has the full picture. Neither scores correctly.

Anonymous activity isn't attributed. Someone visits your site 15 times before filling out a form. Most of that activity is tracked under a cookie but not yet tied to a person. When they finally convert, do those 15 visits get attributed to their lead record? Maybe. Depends on your tracking setup. Often, the lead record only shows activity after form submission.

Cross-device behavior is lost. They research on mobile during their commute, download content on their laptop at work, and request a demo from their iPad at home. Three devices, three cookies, one person. Your system might see three anonymous visitors instead of one engaged buyer.

The Inconsistency Problem

Even when data exists, inconsistency corrupts scoring.

Your model gives +20 points for "VP" or above. But your job title field has:

  • VP of Marketing
  • VP Marketing
  • Vice President, Marketing
  • Vice President of Marketing
  • Marketing VP
  • VP - Marketing

If your scoring rule checks for titles containing "VP," some of these match. If it checks for titles starting with "VP," others don't. If you built a list of exact matches, you missed variations you didn't think of.
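
A quick sketch of how those variations slip past simple matching rules. The three rules below are hypothetical, but the failure mode is the same in any rules engine: each rule matches a different subset of the same six people.

```python
# Three plausible "is this a VP?" rules applied to the same titles.
titles = ["VP of Marketing", "VP Marketing", "Vice President, Marketing",
          "Vice President of Marketing", "Marketing VP", "VP - Marketing"]

contains_vp = [t for t in titles if "VP" in t]
starts_with = [t for t in titles if t.startswith("VP")]
exact_list  = [t for t in titles if t in {"VP of Marketing", "VP Marketing"}]

print(len(contains_vp), len(starts_with), len(exact_list))
# 4 3 2 -- six identical buyers, three different answers
```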

Same problem with industry, company size ranges, location formats. Every inconsistency is a potential scoring error.

Diagnosing Your Scoring Problems

Before rebuilding anything, understand what's actually broken.

Score Distribution Analysis

Pull a distribution of your lead scores. What does it look like?

If most leads cluster at the same score: Your scoring model isn't differentiating. This often means most scoring fields are blank, so everyone gets similar (low) scores based only on a few fields that are actually populated.

If scores are evenly distributed but don't predict conversion: The scores are differentiating, but not on the right signals. Check whether the factors you're scoring on actually correlate with conversion.

If high scores don't convert and low scores do: Your scoring factors might be wrong, but more often the data in those factors is wrong or stale.
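
If you can export scores and outcomes to a flat file, the distribution check takes a few lines of pandas. The column names below ("score", "converted") are assumptions about your export, not a standard schema.

```python
# Sketch of the score distribution and conversion-by-band check.
import pandas as pd

leads = pd.read_csv("leads_export.csv")  # hypothetical lead export

# 1. Does the model differentiate at all?
print(leads["score"].describe())
print(leads["score"].value_counts(bins=10, sort=False))

# 2. Do higher score bands actually convert better?
leads["band"] = pd.qcut(leads["score"], q=5, duplicates="drop")
print(leads.groupby("band", observed=True)["converted"].mean())
```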

Field Completeness Audit

For every field in your scoring model, check:

  • What percentage of leads have a value?
  • What percentage of won opportunities had a value (at time of scoring)?
  • Is there a gap?

If company size is only populated on 35% of leads, but 80% of won opportunities had company size data, you have a gap. Your model is scoring most leads without the field that, judging by your won deals, actually matters, so those opportunities converted on signals the score never saw.
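
The audit itself is mechanical once you have an export. A sketch, using the same assumed column layout plus a boolean "won" flag:

```python
# Field-completeness audit: overall vs. won opportunities.
import pandas as pd

leads = pd.read_csv("leads_export.csv")  # hypothetical lead export
scoring_fields = ["job_title", "company_size", "industry", "country"]

for field in scoring_fields:
    overall = leads[field].notna().mean()
    won     = leads.loc[leads["won"], field].notna().mean()
    print(f"{field:15s}  all leads: {overall:5.0%}   won opps: {won:5.0%}")
```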

High-Score / Low-Convert Analysis

Pull the top 20% of leads by score that didn't convert and review them:

  • Was the firmographic data accurate?
  • Was the engagement data complete (or split across duplicates)?
  • Were they actually a good fit, or did wrong data inflate their score?

Similarly, look at low-score leads that did convert. What scoring data was missing that should have elevated them?
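
The review itself is manual, but pulling the two lists isn't. A sketch under the same assumed export:

```python
# Pull the two review lists for manual inspection.
import pandas as pd

leads = pd.read_csv("leads_export.csv")
leads["converted"] = leads["converted"].astype(bool)

cutoff = leads["score"].quantile(0.80)
high_no_convert = leads[(leads["score"] >= cutoff) & (~leads["converted"])]
low_did_convert = leads[(leads["score"] < leads["score"].median()) & (leads["converted"])]

high_no_convert.to_csv("review_high_score_no_convert.csv", index=False)
low_did_convert.to_csv("review_low_score_converted.csv", index=False)
```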

Fixing the Data

Your scoring model will only work if the underlying data is there and accurate.

Enrich Fit Data

If you're scoring on company size, industry, and seniority, you need those fields populated. Options:

Enrich on form submission. When a lead comes in, automatically enrich with firmographic data from a provider (Clearbit, ZoomInfo, Apollo, etc.). This happens before scoring, so the model has data to work with.

Progressive forms. Ask for more information on subsequent form submissions. First touch: just email. Second touch: company name. Third touch: job title. Build the profile over time.

Bulk enrichment. For existing leads, run a one-time enrichment to fill gaps. This immediately improves your scoring accuracy across the database.
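
Whichever route you take, the detail that matters for scoring is order of operations: enrichment has to finish before the score computes, and it should fill blanks rather than overwrite what the lead told you. A minimal sketch; lookup_firmographics() is a placeholder for whatever provider you use, and the real Clearbit/ZoomInfo/Apollo APIs and field names differ.

```python
# Sketch of enrich-before-scoring with a placeholder provider call.

def lookup_firmographics(email_domain: str) -> dict:
    # Hypothetical provider lookup -- replace with your vendor's SDK/API.
    return {"company_size": 500, "industry": "SaaS"}

def enrich_lead(lead: dict) -> dict:
    domain = lead["email"].split("@", 1)[1]
    firmographics = lookup_firmographics(domain)
    # Only fill blanks; never overwrite data the lead gave you directly.
    for field, value in firmographics.items():
        lead.setdefault(field, value)
    return lead

lead = enrich_lead({"email": "jane@example.com", "industry": "FinTech"})
print(lead)  # industry stays "FinTech"; company_size filled from the provider
```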

For detailed guidance: How to Enrich HubSpot Data | How to Enrich Salesforce Data

Standardize Scoring Fields

Inconsistent values mean inconsistent scores. Standardize the fields your model depends on:

Job titles: Create a title normalization map. "VP Marketing," "VP of Marketing," "Vice President Marketing" all become "VP Marketing." Apply via workflow or data cleaning. How to Standardize Job Titles

Industry: Use picklists, not free text. Map variations to canonical values.

Company size: Standardize on ranges (1-50, 51-200, 201-1000, 1000+) rather than exact numbers.
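
For the job-title map above, a minimal sketch. The handful of entries is illustrative; a real map covers the variations you actually see in your database.

```python
# Title normalization: strip punctuation, collapse whitespace, map to a
# canonical value, and fall back to the original if there's no match.
import re

TITLE_MAP = {
    "vp marketing": "VP Marketing",
    "vp of marketing": "VP Marketing",
    "vice president marketing": "VP Marketing",
    "vice president of marketing": "VP Marketing",
    "marketing vp": "VP Marketing",
}

def normalize_title(raw: str) -> str:
    key = re.sub(r"[^a-z ]", " ", raw.lower())   # drop punctuation
    key = re.sub(r"\s+", " ", key).strip()       # collapse whitespace
    return TITLE_MAP.get(key, raw)

for t in ["VP - Marketing", "Vice President, Marketing", "Marketing VP"]:
    print(f"{t!r:30} -> {normalize_title(t)!r}")
```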

Deduplicate to Consolidate Engagement

Engagement data split across duplicate records makes behavioral scoring useless. Merge duplicates so each person has a single record with complete engagement history.
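
The matching and merging itself is covered in the guides below. Once duplicates are matched to one person, the scoring-relevant step is simple: engagement has to be summed across the records before the behavioral score runs. A sketch with an assumed record layout, not a CRM schema:

```python
# Consolidate engagement across records that belong to the same person.
records = [
    {"person_id": "p1", "page_visits": 3, "email_opens": 2, "webinars": 0},
    {"person_id": "p1", "page_visits": 5, "email_opens": 0, "webinars": 1},
]

merged = {}
for rec in records:
    totals = merged.setdefault(rec["person_id"],
                               {"page_visits": 0, "email_opens": 0, "webinars": 0})
    for key in totals:
        totals[key] += rec[key]

print(merged["p1"])
# {'page_visits': 8, 'email_opens': 2, 'webinars': 1} -- the full picture
```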

For detailed guidance: Deduplicating Salesforce Contacts | Deduplicating HubSpot Contacts

Fix Tracking Gaps

Make sure pre-form engagement gets attributed to the lead record once they convert. Your marketing automation and CRM should handle this, but verify it's actually working:

  • Visit your site anonymously multiple times
  • Fill out a form
  • Check the lead record
  • Are those pre-form visits attributed?

If not, you're missing significant engagement signal.

Rebuilding the Model (After Fixing Data)

Once you have clean data, your model has a chance of working.

Score Only What You Can Measure

Don't include a factor in your scoring model unless you have data for it on most leads. A brilliant scoring factor that's only populated on 20% of records creates more noise than signal.

Start with factors where data completeness is >70%. Add others as you improve data quality.

Weight Based on Evidence

Don't guess at scoring weights. Analyze your won opportunities:

  • What percentage came from target industries?
  • What was the average engagement before conversion?
  • What seniority levels actually buy?

Weight factors based on how strongly they actually correlate with conversion, not how important you think they should be.
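
If you can export the lead pool with outcomes attached, the evidence check is a small script: conversion rate by factor value, compared to the baseline. The factor and column names below are assumptions about your export.

```python
# Conversion lift by factor value, as a sanity check on scoring weights.
import pandas as pd

leads = pd.read_csv("leads_export.csv")
baseline = leads["converted"].mean()

for factor in ["industry", "company_size", "seniority"]:
    lift = leads.groupby(factor, dropna=False)["converted"].mean() / baseline
    print(f"\n{factor} (conversion lift vs. baseline)")
    print(lift.sort_values(ascending=False).round(2))
```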

Test and Iterate

Once you have clean data and evidence-based weights, test the model:

  1. Score all leads in your current pipeline
  2. Wait 30-60 days
  3. Check correlation between score and conversion
  4. Adjust weights based on what you learn

This is an ongoing process. Scoring models need regular recalibration as your market, product, and buyer behavior evolve.
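
For step 3, a rank correlation is usually a more honest check than eyeballing averages. A minimal sketch, assuming you snapshotted scores when leads hit MQL rather than recomputing them later:

```python
# Score-vs-conversion check after the 30-60 day window.
import pandas as pd

leads = pd.read_csv("scored_leads_60_days.csv")  # hypothetical snapshot
corr = leads["score"].corr(leads["converted"].astype(int), method="spearman")
print(f"Spearman score-conversion correlation: {corr:.2f}")
# Near zero after a full cycle means the model still isn't predictive;
# rerun the band-level conversion check from the diagnosis section too.
```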

The Bottom Line

Lead scoring is a data problem disguised as a model problem.

If your scoring fields are 30-50% blank, the model can't work. If duplicate records split engagement data, behavioral scores are wrong. If field values are inconsistent, matching logic fails.

No amount of model tuning fixes these issues. You can rebuild the scoring logic ten times. Without clean, complete data, the output will still be noise.

Fix the data first. Then the model has a chance.

Common Questions

Why doesn't my lead scoring model predict conversions?

Lead scoring fails when the data it relies on is incomplete or inconsistent. If 40% of leads are missing firmographic data, those leads get incomplete scores. If engagement data is split across duplicates, behavioral scores are wrong. The model logic might be sound, but garbage in means garbage out.

Should I use demographic scoring, behavioral scoring, or both?

Both, but only if you have the data to support both. Demographic scoring tells you if someone could buy. Behavioral scoring tells you if they're interested now. You need both dimensions, but each is only as good as the underlying data.

How do I know if it's a data problem?

Check your highest and lowest scored leads. For high scores that didn't convert, was the data accurate? For low scores that did convert, were the fields populated at all? If you find lots of blanks or wrong data, you've found your problem.

Lead scores that don't predict anything useful?

Fix My Scoring Data

Related: Lead Routing Data Problems | Cleaning Salesforce Data | Cleaning HubSpot Data