Jason Horsley, LexisNexis

Predictive modeling allows companies to leverage data in order to forecast probable outcomes and trends.

In a very real sense, predictive models are a key element of strategy decisions across industries, and they only continue to strengthen with the advent of big data and ever-improving technology.

The collections market has gotten smarter about deploying these modeling scores, and has come to rely on them to set various business and operational strategies:

  • Prioritizing call campaigns
  • Assigning treatment strategies
  • Setting cost-per-account budgets

Because agencies are placing such great importance on these numbers, validating these scores has never been more important.

Like collecting or skip tracing, the practice of model-building is part science and part art. For that reason, not all model builders will produce the same outcome for a given problem. However, with the proper data, technology and domain expertise, a predictive model should become invaluable to any collection operation. While building such a model can be outsourced, agencies should take a second look at internal ownership of validating their models, ensuring that the models are still providing the expected business value to the agency.

Once a score has become a strategic linchpin, putting it on auto-pilot is a mistake. With the rapid evolution of both modeling techniques and market conditions, nurturing your score and how it is used is an essential component to any business that values continuous improvement.

No one decision maker has full control over consumer behavior, so it is important to recertify your predictive model to make sure the analytic solution still effectively measures the relevant consumer behaviors. Consumers may or may not change over time, but re-validating an existing model, or re-modeling altogether, ensures that the consumer, changed or not, is better understood. With that renewed understanding, consumers will shift along the propensity-to-pay continuum and your strategy will become more effective.

If your existing score has been on auto-pilot for at least a year, lumped in as part of a package, underutilized or relied upon heavily to drive strategy, then it is time to validate the score and restore your confidence in its value. Good scores are worth their cost many times over, while bad, outdated and misused scores are detrimental to your profitability.

Prior to testing, you will need to make several decisions to properly set up your test design.

Ask yourself these four questions:

  1. Am I testing a model that has just been developed or am I validating an existing model?
  2. Do I have performance data?
  3. Will I be comparing the performance of multiple models?
  4. Should I do a retrospective test or a real-time test?

Your answers will lead to:

  1. (Re)validation or (Re)development
  2. With or Without Performance Data
  3. Champion-Challenger or No Incumbent
  4. Retrospective or Real-Time

If you are testing a new model and have performance data for a retrospective test, but are not comparing to another model, then your test design will look much different from validating an existing model in a real-time test against another model.

Here are three scenarios:

Validation, With Performance Data, No Incumbent, Retrospective

  • The agency selects a group of accounts that have been worked in an active collections campaign.
  • The agency collects and appends the collection disposition for each account, for example whether the account holder repaid the debt and how much they paid.
  • Next, they send the account information, account holder information and collections disposition to a score provider.
  • The score provider validates the accounts against an existing scoring model, generates a new “custom” scoring model, or applies advanced analytics to produce an optimized strategy.
  • With both the performance data and the score, the score provider can conduct a complete analysis showing performance results such as score distribution tables, dollars-collected capture rates by score, unit paid rates by score and workflow strategies using score and placement balance.
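As an illustration, the dollars-collected capture-rate piece of that analysis can be sketched in a few lines. The record layout ("score", "dollars_collected") and the decile banding are assumptions made for this sketch, not any provider's actual format:

```python
# Sketch of a retrospective capture-rate analysis. Assumes each account
# record carries a hypothetical "score" (higher = more likely to pay)
# and "dollars_collected"; the 10-band split is illustrative.

def capture_rates_by_decile(accounts):
    """Rank accounts by score, split into deciles, and report the share
    of total dollars collected captured by each decile."""
    ranked = sorted(accounts, key=lambda a: a["score"], reverse=True)
    total = sum(a["dollars_collected"] for a in ranked) or 1
    n = len(ranked)
    report = []
    for d in range(10):
        chunk = ranked[d * n // 10 : (d + 1) * n // 10]
        collected = sum(a["dollars_collected"] for a in chunk)
        report.append({
            "decile": d + 1,
            "accounts": len(chunk),
            "pct_of_dollars": round(100 * collected / total, 1),
        })
    return report
```

A well-performing score concentrates dollars in the top deciles; a flat table across deciles is the warning sign that the score is no longer rank-ordering payers.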

Validation, Without Performance Data, No Incumbent, Retrospective

  • The agency selects a group of accounts that have been worked in an active collections campaign.
  • The agency forwards basic account information for this group of accounts to the score provider.
  • The score provider runs the accounts through the scoring model and appends assigned score.
  • Since no performance data has been provided, the score provider can only partially complete an analysis.
  • That partial analysis might include such things as score distribution, attribute distribution and regional analysis.
  • Additionally, the same analysis could be conducted on high-value sub-populations.
  • The agency can either forward performance data once they have reviewed the initial score analysis or they can conduct the remaining analysis independent from the score provider.
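The score-distribution piece of that partial analysis can be sketched as follows. The 300-850 range, the 50-point bins and the clamping rule are illustrative assumptions, not a specific model's scale:

```python
# Sketch of a score-only distribution table (no performance data needed).
# The score range and bin width here are hypothetical.

def score_distribution(scores, bin_width=50, lo=300, hi=850):
    """Count accounts per score bin and report each bin's share of the file."""
    bins = {}
    for s in scores:
        s = min(max(s, lo), hi - 1)  # clamp out-of-range scores into the edge bins
        edge = lo + ((s - lo) // bin_width) * bin_width
        bins[edge] = bins.get(edge, 0) + 1
    total = len(scores) or 1
    return [
        {"bin": f"{edge}-{edge + bin_width - 1}",
         "count": count,
         "pct": round(100 * count / total, 1)}
        for edge, count in sorted(bins.items())
    ]
```

Comparing this table against the distribution from the model's original development sample is one quick way to spot whether the scored population has drifted.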

Validation, Without Performance Data, Champion-Challenger, Real-time 

  • The agency selects a group of accounts that they plan to work.
  • The agency forwards basic account information for this group to the score providers (same accounts sent to each provider).
  • The score providers run the accounts through their respective scoring models and append assigned score.
  • When all the scores are returned, the agency begins working the accounts in accordance with the Champion-Challenger test plan (a plan to compare an existing strategy to a new strategy).
  • The test plan should include specific goals (e.g., outperform the existing score by increasing liquidation rate by X%), details of the new approach, sample size, timeline and success criteria.
  • Usually, the test is conducted on a small percentage of the agency’s portfolio.
  • This approach minimizes possible negative impacts and positions the agency to either conduct further tests, continue testing on a larger scale or commit to one of the strategies.
  • With a solid plan in place, the test will be easily executed and the results will be more informative.
  • The accounts themselves are worked without using the scores, so the observed results do not favor either model.
  • At the 30-, 60- and 90-day marks, the actual performance of the accounts is compared to the performance each score predicted.
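One way to sketch that checkpoint comparison: rank accounts by each provider's score and ask which score concentrated more of the dollars actually collected in its top decile. The field names ("champion_score", "challenger_score", "collected") and the top-decile metric are illustrative assumptions, not a prescribed methodology:

```python
# Sketch of a champion-challenger checkpoint comparison, run at the
# 30/60/90-day marks. Field names are hypothetical.

def top_decile_capture(accounts, score_field):
    """Share of all dollars collected that falls in the top score decile."""
    ranked = sorted(accounts, key=lambda a: a[score_field], reverse=True)
    cut = max(len(ranked) // 10, 1)
    total = sum(a["collected"] for a in ranked) or 1
    return sum(a["collected"] for a in ranked[:cut]) / total

def compare_scores(accounts):
    """Return each score's top-decile capture, so the agency can see which
    model better concentrated actual payments at a given checkpoint."""
    return {
        "champion": top_decile_capture(accounts, "champion_score"),
        "challenger": top_decile_capture(accounts, "challenger_score"),
    }
```

Whichever score captures the larger share of collected dollars in its top band at each checkpoint is the one doing a better job of rank-ordering payers, which feeds directly into the success criteria set out in the test plan.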

For many agencies, maintaining organizational confidence in their predictive score is a challenge. That difficulty often leads to blind use, half-hearted use or discontinued use. But this doesn’t have to be your story: while score testing may seem daunting, it can be done with relative ease with the right resources in place. Elite score providers, like LexisNexis Risk Solutions, can either be that resource or support agency resources to get the job done. Make today the day you commit to determining whether your score is an anchor dragging down your business or a propeller driving it forward.
