LLMs in Lending: They’re Not What You Think

Alright, I admit it, that’s a clickbait headline. And given my network, I’m sure many of you are thinking about Large Language Models (LLMs) the right way.

But I’ve talked with hundreds of credit and risk professionals recently, and one thing is clear: most people still imagine LLMs as chatbots making credit decisions. That’s not really how it’s going to work.

When people imagine using an LLM for lending, they often picture something like this:

    ChatGPT: What can I help with?

    Analyst Prompt: Given Mike Smith's financial attributes <upload> and our credit policy <upload>, tell me what credit card offers he's eligible for, including rates and limits, and rank them by the chance he'll say yes.

    ChatGPT: Certainly. Here's the ranked list:
      • Member Rewards Gold: 13.5% APR, $25,000 limit
      • Member Saver: 11.7% APR, $20,000 limit

Now, an LLM could pull that off. But here’s the problem:

  • How do you prove the model is following Regulation B and Fair Lending rules? You can’t see what biases it may have learned.
  • If you’re not hosting the LLM yourself, sending applicant PII to it is a straight-up privacy violation (think FCRA, Gramm-Leach-Bliley).
  • With today’s AI models, there’s no way you can effectively run this on thousands of applicants a day with low-latency responses.

So I understand why many lending professionals think LLMs are just hype for now, something that’s “years away” from being practical.

But that’s not how I see it.

I’ll be honest: I had that same skeptical reaction at first. Then we started using AI to help write code, and the lightbulb went on. That’s when I became an advocate for LLMs in lending.

Here’s the key: when an LLM like ChatGPT generates numbers or projections, it isn’t just predicting words. Behind the scenes it is typically writing and running code, usually Python, to calculate those results. LLMs are great at building syntax from examples, and code is just another kind of syntax.
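To make that concrete, here’s a rough sketch (my own illustration, not actual model output) of the kind of Python an LLM might write behind the scenes when asked to project a monthly payment:

    # Illustrative only: the sort of code a model might generate when asked to
    # project the payment on a $25,000 balance at 13.5% APR over 36 months.
    def monthly_payment(principal: float, apr: float, months: int) -> float:
        """Standard amortization formula for a fully amortizing loan."""
        r = apr / 12  # periodic (monthly) rate
        if r == 0:
            return principal / months
        return principal * r / (1 - (1 + r) ** -months)

    print(f"Projected payment: ${monthly_payment(25_000, 0.135, 36):,.2f}")

The model doesn’t “know” the answer; it writes the formula, runs it, and reports the result.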

Imagine using that power in lending, not to make the actual credit decision, but to build the logic, analysis, and automation that help us make better decisions.

Picture this instead:

    Lending AI: What can I help you with today?

    Risk Analyst: Show me our indirect auto lending performance. Break it down by credit tier and show conversion rates as well as expected ROA. Include competitive rates from BankRate.com for comparison.

    Lending AI: (Generates an interactive chart with drill-downs)

    Risk Analyst: Please update our rate table with new segments based on credit risk, vehicle LTV, vehicle risk rating, and competitive rates to optimize our portfolio return, and include a back-test on historical applications.

    Lending AI: Perfect, here is an updated risk- and performance-adjusted rate table, along with a comparison of projected returns against your historical policy.

Behind the scenes, the LLM is building executable logic in your risk platform that pulls in your historical loan and performance data and then optimizes your rate table, all driven by plain-English prompts. No PII goes to the LLM; it only generates the logic, which analyzes your data locally.
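To give a flavor of what that generated logic might look like, here’s a hedged sketch in Python; the column names (credit_tier, app_id, approved, booked, net_income, avg_balance) are placeholders I made up, not anything specific to a real platform:

    # Sketch of analysis logic an LLM could generate to run locally against
    # your own historical applications. Column names below are hypothetical.
    import pandas as pd

    def tier_performance(apps: pd.DataFrame) -> pd.DataFrame:
        """Conversion rate and a simple ROA proxy, broken down by credit tier."""
        summary = apps.groupby("credit_tier").agg(
            applications=("app_id", "count"),
            approved=("approved", "sum"),
            booked=("booked", "sum"),
            net_income=("net_income", "sum"),
            avg_balance=("avg_balance", "sum"),
        )
        summary["conversion_rate"] = summary["booked"] / summary["approved"]
        summary["expected_roa"] = summary["net_income"] / summary["avg_balance"]
        return summary

    # e.g. apps = pd.read_parquet("historical_applications.parquet")
    #      print(tier_performance(apps))

The specific code matters less than where it runs: the applicant data never leaves your environment.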

More than automated analysis tools.

And that’s just the start. This isn’t about replacing credit analysts’ insights with AI. It’s about giving them powerful, no-code tools to test scenarios, analyze performance, deploy automation, and validate compliance faster than ever.

It goes even further: once your policy is deployed, LLMs can take prompts directly from your applicants and dynamically reconfigure an offer, such as refinancing an existing vehicle or finding the best terms for an applicant’s budget.
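As a back-of-the-envelope example, “best terms for an applicant’s budget” is mostly the amortization formula inverted; the rate, terms, and $30,000 refinance amount below are invented purely for illustration:

    # Hypothetical sketch: find the shortest term whose payment fits the
    # applicant's stated monthly budget. All numbers are made up.
    def max_affordable_loan(budget: float, apr: float, months: int) -> float:
        """Largest principal whose fixed payment stays within the budget."""
        r = apr / 12
        if r == 0:
            return budget * months
        return budget * (1 - (1 + r) ** -months) / r

    def best_offer(budget: float, apr: float, amount: float = 30_000) -> dict:
        """Shortest term (lowest total interest) that still fits the budget."""
        for months in range(24, 84 + 1, 12):
            if max_affordable_loan(budget, apr, months) >= amount:
                return {"amount": amount, "term_months": months, "apr": apr}
        return {"amount": None, "term_months": None, "apr": apr}  # no fit

    print(best_offer(budget=550.0, apr=0.079))

The applicant never sees the code; they just describe what they need, and the deployed policy stays in control of what can actually be offered.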

In my next post, I’ll talk in more detail about how we can move beyond retrospective insights to real-time, optimized credit policies — decision engines you can build and deploy entirely through interactive prompts.

I’m excited my team gets to help create that future. If you’re exploring this space or curious how it fits into your lending strategy, let’s connect. I’d love to hear what you’re building.

Tom Tobin, CEO – Modelshop