Instant Income Verification Using Plaid

When evaluating a loan application, the most important thing a lender must do is verify the applicant’s income. By integrating Plaid’s bank transaction report into a credit model, lenders can verify employment status and income without a manual review step. The result is a frictionless consumer experience and improved conversion rates.

The ease with which Modelshop can integrate third party data and identify patterns such as income deposits helps lenders deliver an interactive origination experience and book better loans. Watch Modelshop in action as we step through the process of integrating Plaid and automating income verification using analytics.
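
To make the idea concrete, here is a minimal sketch of the kind of deposit-pattern logic described above. It assumes transactions have already been retrieved from Plaid; the field names, thresholds and sample data are illustrative assumptions, not Plaid's schema or Modelshop's implementation.

```python
from collections import defaultdict
from datetime import date

# Illustrative deposits as they might arrive in a bank transaction report;
# the field names are simplified assumptions, not Plaid's actual schema.
transactions = [
    {"date": date(2023, 1, 13), "amount": 2150.00, "name": "ACME CORP PAYROLL"},
    {"date": date(2023, 1, 27), "amount": 2150.00, "name": "ACME CORP PAYROLL"},
    {"date": date(2023, 2, 10), "amount": 2150.00, "name": "ACME CORP PAYROLL"},
    {"date": date(2023, 2, 3),  "amount": 84.12,   "name": "GROCERY STORE"},
]

def estimate_income(txns, min_occurrences=3, tolerance=0.1):
    """Group credits by counterparty and flag recurring, similar-sized deposits."""
    deposits = defaultdict(list)
    for t in txns:
        if t["amount"] > 0:
            deposits[t["name"]].append(t)

    income_streams = []
    for name, group in deposits.items():
        if len(group) < min_occurrences:
            continue
        amounts = [t["amount"] for t in group]
        mean = sum(amounts) / len(amounts)
        # Recurring income: amounts cluster tightly around their mean
        if all(abs(a - mean) / mean <= tolerance for a in amounts):
            group.sort(key=lambda t: t["date"])
            cadence_days = (group[-1]["date"] - group[0]["date"]).days / (len(group) - 1)
            income_streams.append({
                "source": name,
                "deposit": round(mean, 2),
                "annualized": round(mean * 365 / cadence_days, 2),
            })
    return income_streams

print(estimate_income(transactions))
```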

No code ontologies for serious business applications

There is a recent resurgence in no code software platforms that promise to build applications an order of magnitude faster than traditional software tools. This is not the first time this trend has swept through the industry. Teams frustrated with a highly repetitive and fragile software development life cycle are constantly looking to speed their time to market, reduce development costs and increase software quality. Today’s no code frameworks promise all three.

No code’s checkered past

There are no code skeptics who have lived through past attempts including 4GLs, CASE tools and business rules management systems. The challenge with replacing code with visual development tools became obvious once users attempted to build realistic applications. Those early attempts at no code simply were not flexible enough to handle the nuances of real-world business requirements. This was especially true for the 'unhappy path', which describes all the ways a process can go wrong. Non-developers may be surprised to learn that software teams often spend more time coding to handle unhappy paths than they do coding for expected behavior (the happy path). The proliferation of unhappy paths in any enterprise-class business process quickly explodes in complexity, and it is one of the main reasons no code tools fell short of expectations.

The ongoing challenge of computational complexity

So far, no code platforms have been suitable for simpler business processes that do not require significant logic or computation. Even then, applications developed using many no code solutions can struggle when handling exception behavior, or the unhappy paths. At the heart of this problem is the inability of most no code tools to handle computational sophistication. In enterprise applications, possible execution paths multiply and the business logic for handling edge cases grows. The workflow paradigm driving most no code tools means each path must be defined to handle this complexity. With thousands of possible permutations of unhappy paths, this becomes virtually impossible, and the result is a fragile or incomplete application experience. While there are a handful of emerging no code vendors who claim they can deliver enterprise-class software, most cannot support the computational depth needed to completely automate complex enterprise applications. Think about the logic complexity of a typical financial model used to calculate profit and loss. It involves thousands of calculations, multiple scenarios and forward-looking projections that are too complex to be created in most drag-and-drop no code tools. Financial calculations are just one example of the types of logic an enterprise-class application might need to handle, but there are countless computationally sophisticated business applications across industries including financial services, healthcare, retail and transportation.

Ontologies and model-first no code tools

There is a new approach to no code designed to handle computationally complex applications. Instead of following a traditional workflow/form development process, these tools first model ontologies that capture data relationships as well as the calculations and rules that define the business process. Ontologies are models where the logic that defines how a business process should behave is codified as a series of data relationships (directed acyclic graphs), calculations and business rules that govern the application's behavior. For example, an ontology that describes the interrelationships between a banking customer, their accounts, the transactions they conduct and the types of transactions other customers like them typically conduct can all be defined as part of the ontology of a fraud detection application. The example below is very simplistic compared to real ontologies, which often involve dozens of entities and thousands of logical relationships.

A simple banking fraud ontology
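
As a rough illustration of what such an ontology captures, the sketch below renders the entities, a derived calculation and a business rule in plain Python. A real ontology would be declared in the no code tool itself; the class names, peer comparison and rule here are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

# A hypothetical, heavily simplified rendering of the fraud ontology's
# entities and derived logic; not Modelshop's actual modeling syntax.

@dataclass
class Transaction:
    amount: float
    merchant_category: str

@dataclass
class Account:
    transactions: List[Transaction] = field(default_factory=list)

    @property
    def avg_transaction(self) -> float:
        # Derived calculation: rolls up the account's transaction history
        amounts = [t.amount for t in self.transactions]
        return sum(amounts) / len(amounts) if amounts else 0.0

@dataclass
class Customer:
    accounts: List[Account] = field(default_factory=list)
    peer_avg_transaction: float = 0.0  # from "customers like them"

    def is_suspicious(self, txn: Transaction, multiplier: float = 5.0) -> bool:
        # Business rule: flag spend far outside personal and peer norms
        personal = max(a.avg_transaction for a in self.accounts)
        baseline = max(personal, self.peer_avg_transaction)
        return txn.amount > multiplier * baseline

acct = Account([Transaction(42.0, "grocery"), Transaction(60.0, "fuel")])
cust = Customer(accounts=[acct], peer_avg_transaction=55.0)
print(cust.is_suspicious(Transaction(900.0, "electronics")))  # True
```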

By starting with an ontology, no code application designers can describe all the possible behavioral paths, including 'unhappy paths', and ensure they are handled effectively. This new breed of ontology modeling tool empowers application designers to manage hundreds of data relationships and thousands of calculations, rules and event triggers, all without the exploding complexity common in traditional workflow-first no code platforms. By focusing on getting the underlying business model right up front, the resulting business process and forms become simpler and more robust. Model-first applications can be deployed as run-time services that add intelligence to existing applications, or they can be connected to user interfaces to deliver computationally robust business applications. The resulting applications are both less fragile and more adaptive than traditional workflow-first no code applications.

No code ontologies are the precursor to true AI

This model-first approach to application development is the missing capability that caused prior generations of no code technologies to fail and that leaves current tools like Robotic Process Automation overly fragile. As our application modeling tools become more powerful and easier to use, a new generation of developers will emerge who have the right blend of business model understanding and technical creativity needed to build increasingly powerful and innovative software solutions, all without writing a single line of code. As this happens, our applications will be smarter, will evolve faster and will become the basis for adaptive, intelligent applications that deliver on the promise of artificial intelligence.

About Modelshop

Modelshop is a platform for creating model-driven applications without code. Modelshop has been used to create sophisticated analytic applications that make credit decisions, automate regulatory reporting, prevent fraud and optimize healthcare decisions.

Frictionless Lending with Credit Automation

The first blog in this series highlights how credit automation is changing the way lenders make credit decisions. In this edition, we’ll dive deeper and outline how automation is now infused throughout the modern credit application process. As a result, credit decision engines are evolving into ‘credit automation engines’ that can impact not only the lending decision, but all aspects of originating high-value loans.

The term ‘frictionless lending’ refers to making it easy and streamlined for borrowers to apply for and obtain loans. This process is important for several reasons:

  • Improved customer experience: Frictionless loan origination offers a better customer experience by reducing the time and effort required to apply for a loan. This can lead to increased customer satisfaction and loyalty.
  • Increased efficiency: By automating and simplifying the loan origination process, lenders can save time and resources, allowing them to process loan applications more quickly and efficiently.
  • Competitive advantage: In a crowded lending marketplace, offering a frictionless loan origination process can be a key differentiator for lenders looking to attract and retain customers.
  • Reduced risk: A frictionless loan origination process can help reduce the risk of fraud and errors, as automated systems can detect and flag potential issues more quickly and accurately than manual processes.

Lenders are pursuing frictionless lending as a top priority. According to “The Rise of Frictionless Lending: Trends and Opportunities” by Deloitte, a survey of lenders found that 30% had already implemented frictionless lending processes, and another 27% planned to do so within the next year. 

Frictionless lending is more than an improved UX

Millennials and Gen-Z think about financial products differently than Generation X and Baby Boomers. The younger generations grew up with instant access to data, unlimited choices in entertainment and an expectation that products are served up in an easy-to-consume way. Generations that never opened a map for directions are not prepared for a complicated or opaque loan application. They're not expecting to wait for an answer, and they're certainly not used to being rejected.

According to a study by Bank of America, 57% of Gen Zers prioritize a fast and easy loan application process over getting the best interest rate. Additionally, a survey by Experian found that 73% of Gen Zers would prefer to complete a loan application online, rather than in person or over the phone. These findings suggest that traditional loan application processes, which can be lengthy and complicated, may not be effective in attracting younger customers.

57% of Gen Zers prioritize a fast and easy loan application process over getting the best interest rate

Bank of America

Innovative lenders and FinTechs have already raised the bar on the user experience (UX) while applying for credit.  Many have also integrated alternative data sources to help reduce risk and deliver an easier application process.  These improvements alone will not be enough to remain competitive unless lenders are willing to go further and change the way they analyze risk and make credit decisions.

Using intelligence to reduce application friction

Delivering a frictionless lending experience requires the use of intelligence across the origination lifecycle. Intelligent origination solutions determine what information is required as the application progresses, clarify possible exceptions using intelligent chat, and enable the applicant to adjust loan structures and collateral choices in real-time.

As an example, if a credit report identifies a lien on an applicant's property during a loan application, a typical underwriting process would ask the applicant to explain the lien and provide additional documentation, and then make an expert decision on whether the lien will adversely impact the borrower's ability to repay the loan. Today, this type of exception handling is manual for most lenders.

In the frictionless origination world, this basic process will remain the same; it will just be instant. Predictive models will estimate the impact the lien will have on the probability of default, and the credit intelligence engine will be adaptive enough to ask for (or electronically retrieve) additional qualifying details that inform the risk model. This has to happen in seconds so that the applicant remains engaged in the process. Once an application enters a wait state, completion and conversion rates can drop precipitously.
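
A toy version of that decision flow might look like the following; the probability adjustment, thresholds and treatments are invented purely to illustrate the mechanics, not to represent a real underwriting model.

```python
# A toy sketch of automated lien handling. The probability adjustment,
# thresholds and treatments are invented purely to illustrate the flow.

def adjusted_default_probability(base_pd, lien_amount, property_value):
    # Scale the applicant's probability of default by lien severity
    severity = lien_amount / property_value
    if severity < 0.02:
        return base_pd  # immaterial lien, no extra friction needed
    return min(1.0, base_pd * (1 + 4 * severity))

def decide(base_pd, lien_amount, property_value, pd_cutoff=0.08):
    pd = adjusted_default_probability(base_pd, lien_amount, property_value)
    if pd <= pd_cutoff:
        return "approve"
    # Instead of declining, instantly request (or retrieve) qualifying detail
    return "request lien payoff statement"

print(decide(base_pd=0.05, lien_amount=3_000, property_value=400_000))   # approve
print(decide(base_pd=0.05, lien_amount=80_000, property_value=400_000))  # request ...
```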

Credit Intelligence Engines simplify decision automation for frictionless origination

There are already examples in the market of using data and analytics to streamline the process. For example, access to online banking transaction data through providers such as Plaid has enabled lenders to instantly verify an applicant's income. Advanced analytics, such as time-series analysis and unstructured text classification, have enabled income verification algorithms to expand beyond standard payroll patterns and provide instant income estimations for self-employed and gig-economy applicants. Improving the process is not just about alternative data – intelligent decisions require a combination of data and real-time analytics to replicate the intelligence of a human credit analyst in order to deliver a frictionless experience.
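
The classification piece can be as simple in spirit as the sketch below, though production systems would use trained text classifiers rather than keyword rules; the patterns here are illustrative assumptions only.

```python
import re

# A simplified sketch of classifying deposit descriptions into income
# types; production systems would use trained text classifiers rather
# than keyword rules, and these patterns are illustrative.
INCOME_PATTERNS = {
    "payroll":     re.compile(r"\b(payroll|direct dep|salary)\b", re.I),
    "gig_economy": re.compile(r"\b(uber|lyft|doordash|upwork)\b", re.I),
    "benefits":    re.compile(r"\b(ssa|treas|unemployment)\b", re.I),
}

def classify_deposit(description):
    for label, pattern in INCOME_PATTERNS.items():
        if pattern.search(description):
            return label
    return "other"

print(classify_deposit("ACME CORP PAYROLL"))      # payroll
print(classify_deposit("UBER BV WEEKLY PAYOUT"))  # gig_economy
```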

Adjusting lending terms in real-time

An advantage in attracting and converting applicants will be presenting the optimal loan configuration as the first, or even the only, option. This emerging generation of borrowers expects to be shown products that match their needs by retailers and streaming services. They will expect the same from lenders. The era of sitting in the finance office at a car dealership while the finance officer picks through a list of potential programs is ending.

Next generation lending models will be intelligent enough to predict the most likely loan structure (lowest payments vs. lowest lifetime cost vs. flexible payments) and can create a custom loan offer that maximizes the fit for the applicant, instantly. More educated borrowers will want to know their financing options before they walk into a dealership or put an offer on a home, and they’ll demand the ability to restructure the loan on the fly if their purchase decisions change.

This means two things from a technology perspective. First, predictive analytics, including both risk and propensity predictors, need to be built directly into decision and pricing models instead of being developed as offline static risk tables. Second, decision models will no longer be able to evaluate just a single loan configuration.

The time where 100% straight-through application processing is a competitive minimum requirement is rapidly approaching. 

Instead, they will need to explore a spectrum of options, typically in multiple dimensions, to find the best 'cost function' value. For example, loan pricing may be determined by the projected lifetime value of the loan, informed by default and churn predictors. The loan structure that is most likely to be accepted while maximizing profit to the lender and creating a satisfied customer is the loan that should be presented. To add to the complexity, that best solution may change on the fly as the borrower explores different purchase options, for example, upgrades during a vehicle purchase.

This 'nonlinear solver' technology is not new; it has been used extensively in operations research applications, such as figuring out how best to pack your Amazon package into a UPS truck, but it is relatively new to the lending arena. More importantly, the entire lending technology stack will need to evolve to support optimized product configurations, decisions and pricing in real-time in order for lenders to remain competitive.
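
A brute-force stand-in for that solver, searching a small grid of rate and term combinations, conveys the idea. The acceptance and default curves below are invented for illustration, and a production engine would use a real nonlinear optimizer rather than a grid search.

```python
# A toy grid search over loan structures. A production engine would use a
# proper nonlinear solver; the acceptance and default curves below are
# invented purely for illustration.

def acceptance_prob(rate, term_months):
    # Applicants accept cheaper, shorter offers more often (illustrative)
    return min(1.0, max(0.0, 1.0 - 6 * (rate - 0.05) - 0.002 * (term_months - 36)))

def default_prob(rate, term_months):
    # Risk rises with rate burden and loan duration (illustrative)
    return 0.02 + 0.5 * rate + 0.0005 * term_months

def expected_profit(principal, rate, term_months):
    interest = principal * rate * (term_months / 12)
    expected_loss = principal * default_prob(rate, term_months)
    return acceptance_prob(rate, term_months) * (interest - expected_loss)

# Explore a spectrum of rate/term options and present the best offer
candidates = [(r / 1000, t) for r in range(40, 181, 5) for t in (24, 36, 48, 60)]
best_rate, best_term = max(candidates, key=lambda c: expected_profit(10_000, *c))
print(f"offer: rate={best_rate:.3f}, term={best_term} months, "
      f"expected profit=${expected_profit(10_000, best_rate, best_term):,.2f}")
```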

Coming up next…

In my next post I will discuss the impact embedded financing and alternate lending products such as Buy Now Pay Later are having on credit intelligence engines.

AI is the convergence of codeless and LLMs

I’m fascinated with developments in codeless platforms and Large Language Models (LLMs) as they make inroads towards replacing traditional coding. These innovations have been evolving for decades, but as they converge, the role of software will be forever altered.

Codeless platforms are not new

The term codeless or no-code has entered a new hype cycle in recent years, but the concept is far from new. WordPress, which I'm using to create this blog, is a great example of a codeless website builder. Salesforce has been leading the no-code CRM revolution for decades, and AWS is a great practical application of a codeless solution for IT management. Each of these solutions has enabled less technical users to build working software solutions much more quickly than traditional coding techniques.

As it becomes easier to turn intention directly into working software, the role of developer as an intermediary between creatives and their solutions will disappear as completely as scribes and typists have disappeared from the creation of documents. The ability to turn ideas into working business solutions is now becoming as simple as writing a document or building a spreadsheet.  As that happens, being able to ‘code’ will be similar to being able to write English – most people will be able to do some form of it, but what will be most important is what you have to say.

What I find most fascinating are what I consider near-term misconceptions about how codeless will evolve. Codeless advocates are advertising the ability to create software solutions with zero language or syntax. I believe that is naïve – building a complex solution with no language interface would be like trying to write a legal brief with emojis. There is too much nuance in how you expect your unique solution to behave for someone to create a GUI that fully anticipates your needs. Language and syntax are what make humans unique in our ability to conceive of and communicate an unlimited spectrum of ideas, and I believe language, whether based on expressions or prose, whether written or spoken, must be a critical part of any future autonomous software solutions.

LLMs make codeless much more interesting

That is where large language models (LLMs) come in. While deep learning has been around for decades, recent breakthroughs bring us much closer to bridging the gap between knowledge, data, and natural language syntax. That is exciting. These developments are what will allow us to mature the vision of codeless software for more mainstream use. There will still be syntax required to fully articulate a solution, but very soon that syntax will be more efficient, more natural and easier to access by non-technologists.

The current state of LLMs being used to write code is a bit naïve. Our current coding frameworks are entirely too repetitive and pedantic. Code scaffolding has to be created across multiple levels of architecture to create even simple working software, and we have employed legions of coders to create and re-create that scaffolding. Early LLM code generation examples show how software development can be accelerated by allowing AI to gather scaffolding examples and rework them into working code. Impressive, but largely unnecessary. Codeless solutions are making that redundant scaffolding unnecessary by providing a higher level of tools to turn ideas directly into working software.

The magic happens when you leverage LLMs on top of codeless platforms.

What becomes very interesting will be solutions that layer LLMs on top of codeless platforms. As described above, point and click is not enough to create complex software solutions, but traditional logic syntax is not accessible to non-technologists. LLMs can resolve that by translating natural language into directives that can be implemented in codeless frameworks.

AWS – A practical example

As an example, think about AWS and how they've used codeless (including syntax, a.k.a. Chef recipes) to accelerate the building and maintenance of IT systems. This is powerful in itself, but while you can point and click together basic IT components, serious users of the platform need to understand simple scripting grammar and syntax to automate and scale their solutions. Adding LLM builders on top of that tool set will be tremendously powerful, allowing less technical staff to create their IT infrastructure with natural language commands (just like Star Trek). By comparison, asking LLMs to directly construct raw datacenters from the ground up would be a disaster; you would end up with an unmaintainable mash-up of scripts and configuration files. True synergy is bringing codeless and LLMs together.

This synergy will soon be coming to all businesses.  The intersection between advances in codeless frameworks and LLMs is where true AI is happening and it is poised to change software forever.

Lenders are Rethinking Credit Risk Models

In my last post, I outlined an evolution of credit origination due to a shift in demographics, the economy and technology. In part II of this series, I’ll dive deeper into why these changes are impacting how lenders think about credit risk models.

Risk models as a competitive advantage

Credit models for prime borrowers (FICO 660+) have historically been very straightforward, typically using a segmentation of applicants based on their credit score, a handful of financial indicators and some disqualifying knock-out rules (such as a bankruptcy). There was not a lot of motivation for prime lenders to innovate on credit risk models, and they focused instead on competing for the same consumers on price.

In non-prime credit markets, lenders are more motivated to look beyond an applicant’s traditional financial indicators to find that ‘needle in the haystack’, where someone with a non-ideal credit score represents an acceptable risk.

Changing demographics, generational attitudes and global economic events including COVID are forcing lenders across the credit spectrum to look for more creative ways to model risk while keeping a sharp eye on regulatory impact.

Lenders deploying next-generation risk models have seen revenues increase by 5 to 15 percent

McKinsey

Innovative lenders and FinTechs are applying emerging data and technology to create more targeted, real-time and accurate credit risk models. Lenders who do not evolve will find themselves losing the competition for the highest-value customers. According to McKinsey Insights, lenders deploying next-generation credit models have seen revenues increase 5 to 15 percent through higher acceptance rates, lower cost of acquisition and a better customer experience. An even bigger concern for lenders who do not evolve their credit risk models is that they will start to absorb higher-risk customers passed over by sharper credit instruments.

Today’s borrowers are evolving

There are multiple factors impacting the credit industry’s customer segments. As Millennials and Generation Z become the dominant consumers of credit, non-traditional attitudes towards employment and finance are making them less predictable using traditional models. Excessive student debt, delays buying cars and homes, less loyalty to financial institutions and economic impacts of the pandemic make it harder to classify applicants. As a result, traditional credit scores and underwriting models are becoming less effective.  

At the same time, an increasing population of immigrants do not have an established US credit history. These applicants can represent a great opportunity for lenders able to efficiently make underwriting decisions using non-traditional means. Emerging alternatives like NOVA Credit can help lenders access these populations, but not without rethinking their approach to risk modeling.

Finally, borrowers have increasing expectations for a frictionless credit origination experience. Groomed on instant access to information, the next generation of borrowers has little patience for a drawn-out application process where lenders request information that could be accessed electronically.

Credit scores and financial ratios are no longer enough

The heart of a credit decision is a calculation of the probability the credit will be repaid or the relationship will be profitable. Traditional risk models analyze historic credit performance across large populations and formulate risk ‘buckets’ that can be used to segment new applicants. These segmentations can be represented as simple thresholds, or they can be combined into non-linear models that capture interactions between financial attributes.

For example, someone's debt-to-income ratio and their revolving credit utilization are two predictors of credit risk that interact non-linearly. A higher credit line utilization is less concerning for applicants whose debt is a small percentage of their income – their use of credit is likely a convenience. Models like the FICO score combine multiple financial indicators in ways that are statistically predictive of risk.
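
In model form, that interaction might be captured with a term like the one in this sketch; the coefficients are invented for illustration and do not represent a calibrated scorecard.

```python
import math

# An illustrative logistic risk score with an interaction term between
# debt-to-income (DTI) and revolving utilization; coefficients are invented.
def probability_of_default(dti, utilization):
    z = (-4.0
         + 3.0 * dti
         + 1.5 * utilization
         + 4.0 * dti * utilization)  # utilization hurts more at high DTI
    return 1 / (1 + math.exp(-z))

# High utilization is far less concerning when debt is a small share of income
print(round(probability_of_default(dti=0.10, utilization=0.90), 3))  # ~0.12
print(round(probability_of_default(dti=0.45, utilization=0.90), 3))  # ~0.58
```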

The limitation of traditional credit models is they are relatively blunt instruments. Aggregated financial indicators hide details about an applicant’s financial situation that could be useful in assessing risk.  More importantly, deeper insights can identify deserving applicants whose financial indicators do not line up with the rest of the population.

As our evolving culture and economy introduce more variation in what is considered ‘standard behavior’, credit models must evolve past the traditional segmentation of applicants. Not only will next-generation credit models deliver better results for lenders, they will be more fair for applicants with non-traditional financial histories and more transparent to regulatory scrutiny.

Alternate data for better underwriting decisions

In the last decade, there has been an explosion of new data sources and aggregator services available to lenders that deliver insights into consumer behavior. Many of these data sources, such as banking transaction data, require opt-in from the consumer. Younger generations tend to be more willing to share data, having grown up in the ‘put it all out there’ culture of social media.

Using alternate data results in 20 to 40 percent efficiency gains.

McKinsey – Designing Next-Generation Credit Risk Models

Used responsibly, data such as utility bills, driving records and banking transactions provide a wealth of insight that can help lenders differentiate good and bad risk while underwriting credit.  In addition, use of this data can help create a more frictionless origination experience (the topic of my next blog) by allowing the lender to instantly verify applicant information such as income and bank balances. Lenders using alternate data to automate credit models have seen 20 to 40 percent efficiency gains as a result of eliminating manual verification steps and through an increase in straight-through processing.

Real-time cash flow underwriting models

A promising use of alternate data is the emergence of real-time cash flow underwriting models. Cash flow underwriting is the practice of looking at income and expense transactions to determine if an applicant’s free cash flow supports the loan or credit line. Subprime and commercial lenders have been doing cash flow analysis on borrowers for decades, but it has historically been a manual process (for example using a spreadsheet) and it caused delays in the underwriting process. For applicants with poor credit or for larger loans, waiting for this analysis was the only option and accepted as part of the process. 

With the availability of detailed banking data and more sophisticated credit decision engines, real-time cash flow underwriting models are becoming popular across credit segments. Cash flow models can assign risk probabilities to each aspect of an applicant’s financial life. Income streams can be assigned a probability of interruption. If income is interrupted, expenses can be scored for how essential or fungible they are, and debts can be ranked to predict which are likely not to be paid in the event of a cash crunch.

Cash flow models use multiple time adjusted predictions to project overall ROI on loans, enabling lenders to book more targeted and profitable loans.

These calculations can be used to simulate overall credit risk and expected profitability in milliseconds, helping lenders book more profitable loans. McKinsey reports that lenders can realize a 20-40 percent reduction in loss rates by using credit risk models that more precisely determine a customer's likelihood of default.
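
Stripped to its essentials, a cash flow underwriting calculation of this kind might look like the sketch below; every stream, probability and weight is fabricated to show the mechanics only.

```python
# A stylized cash flow underwriting calculation; every stream, probability
# and weight below is fabricated to show the mechanics only.
income_streams = [
    {"monthly": 4200, "p_interruption": 0.05},  # salaried job
    {"monthly": 900,  "p_interruption": 0.30},  # gig income
]
expenses = [
    {"monthly": 1600, "essential": 1.0},  # rent
    {"monthly": 450,  "essential": 0.9},  # utilities and groceries
    {"monthly": 350,  "essential": 0.2},  # discretionary spending
]

# Probability-weight each income stream, keep only committed expenses
expected_income = sum(s["monthly"] * (1 - s["p_interruption"]) for s in income_streams)
committed_expenses = sum(e["monthly"] * e["essential"] for e in expenses)
free_cash_flow = expected_income - committed_expenses

proposed_payment = 425
coverage = free_cash_flow / proposed_payment
print(f"expected free cash flow: ${free_cash_flow:,.2f}/mo, "
      f"payment coverage: {coverage:.1f}x")
```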

Challenges deploying next-generation credit models

Many lenders struggle to build and deploy next-generation models. Many organizations have a deeply ingrained culture of risk avoidance when it comes to credit models, and often a seemingly insignificant threshold change can take months to test, document and deploy. The thought of approaching credit risk from a completely new perspective can be paralyzing for some teams.

Risk aversion is not the only barrier to adopting alternate underwriting models. Many lenders have limited IT resources and rely heavily on third-party technologies to deploy credit decisions. Modern data sources can have sophisticated layouts (similar to the credit bureaus), and if a lender's current technology does not support a data vendor, the lender can't leverage it.

Taking an alternative approach to credit models can be paralyzing for risk averse teams.

Accessing the data is only half the battle. Modern credit models utilize more than simple attributes calculated from the data; they often create risk- or propensity-weighted projections using sophisticated calculations. Adding to the complexity, these calculations must run both while training and tuning models and in the production decision environment. Even for lenders with software development teams, creating solutions to automate these simulation models can be difficult.

Next-generation credit decision engines

An emerging class of next-generation credit decision engines is solving this problem using no-code technologies. Teams creating next-generation credit models and implementing real-time decision engines now have the power of no-code tools to more effectively create advanced logic that traditionally required custom software. Eliminating a custom code step not only accelerates a lender's path to creating a modern credit model, it significantly reduces the friction of updating credit models and allows lenders to be more competitive by reacting to market or economic changes.

Modern credit decision engines go beyond business rules and risk scoring. The scope of decisioning includes not only risk or fraud, but now includes application intent, pricing elasticity and conversion probability modeling. To support this role, next-generation platforms will provide a seamless data integration hub that will support plug-and-play connectors to a wide range of vendors as well as provide out-of-the-box logic and analytics that will allow credit issuers to leverage multiple alternate data sources in their decisioning.

Coming up next…

In my next post I will talk about the importance of frictionless credit origination and how modern credit decision engines have taken a much more important role in that experience.

Dawn of a new era in Credit Model Automation

Changes in demographics, the economy and technology are forcing credit providers to raise the bar on how they originate credit. To meet the challenge, a new generation of 'credit model automation engines' has emerged that reduces friction, embraces alternate data and empowers risk teams to book better loans.

In the coming weeks, I will be exploring the trends that are driving these changes, the potential implications on the credit industry, and the technology powering the next generation of origination intelligence.

Credit origination is undergoing revolutionary change

Automated credit decisioning has been around for decades. The first instant credit offers were credit cards in the mid-90s. Surprisingly, the industry's approach to credit origination decisions hasn't changed much in the last 30 years.

There are converging economic, cultural and competitive factors that have accelerated demand for alternative credit decisioning techniques.  The credit industry is now recognizing that status quo origination strategies will not be competitive for long.

As Generation X gives way to Millennials and Gen Z, credit providers are forced to operate on the younger generations’ terms. These changing expectations, combined with the end of historically low interest rates, are creating an environment that is not only ripe for innovation, but actively demands it.

Changing culture, demographics and economics

The shift in generations means that qualified applicants may not have traditional financial profiles. Employment irregularity due to COVID, gig economy careers and lifestyle choice is becoming more prevalent and may no longer correlate with credit risk. Successful immigrants may not have a traditional US credit history. Increasing generational wealth and first-generation higher education can create new opportunities for profitable financial relationships that may not exhibit traditional financial markers.

New sources of data are needed that can identify high-value segments without introducing unintentional bias.

Finding high-value customers who do not have a traditional financial profile requires next-generation decision engines that are flexible enough to go beyond traditional score and ratio decision models. New data sources are needed, along with new analytic techniques that can identify higher-value segments while not introducing unintentional bias. Next-generation 'intelligence engines' will incorporate dozens of alternate data sources and support multiple real-time statistical and predictive analytic techniques that will increase decision flexibility while reducing regulatory risk.

The demand for frictionless origination

The most pressing change in credit origination is that younger consumers will no longer wait for a credit decision. According to one survey, 40% of young consumers will abandon a loan application process because it is too lengthy. It is too easy to move to another credit provider at the first sign of friction in origination, and the threshold for inconvenience is dropping quickly. Something as simple as entering a date of birth and driver's license number is now considered a nuisance when a simple snap of your driver's license should suffice.

With open and flexible APIs, the next generation of credit intelligence engines will take part in the origination conversation. At each stage of the process, AI will customize the applicant's experience to minimize friction. To achieve this, intelligence engines must be able to call out to hundreds of alternative data sources in real-time and incorporate insights from those data sources in sub-second time frames.

Disintermediation of credit relationships

Traditional banks and lending institutions rely on brand and relationships to maintain their customer base. According to the Bank Administration Institute, younger generations are less loyal in their banking relationship and are increasingly using alternate financing and credit products. Buy-now, pay-later credit at the point of sale is a convenience that not only avoids an extra financing step, but helps reduce the number of accounts an individual has to manage in their lives. Financing through Amazon, Tesla or Carvana is easier and can garner more trust than obtaining an independent line of credit or loan through a financial institution.

Modern credit intelligence engines will require flexibility in how credit is structured.  They will need the ability to incorporate detailed information about the products being financed and use information about existing customer relationships to make smarter decisions. As non-traditional lenders integrate credit into their existing customer relationships, credit intelligence engines will need to be easier to implement and customize with industry specific AI. As more businesses become digital first, intelligence engines must integrate into existing technology stacks with minimum coding.

Increasing need for regulatory transparency

A conversation about next-generation credit origination is not complete without addressing concerns about regulatory impact. While there is understandable concern about using AI to make credit decisions, advanced risk simulations have the potential to reduce compliance risk.  Real-time risk modeling will be competitively unavoidable and next-generation credit intelligence engines will need to provide visibility into decision algorithms that will make it easier to justify regulatory compliance while empowering policy agility.

No-code AI technologies can trace the use of each data point through every interim calculation to ensure that no protected data is leveraged in a credit decision.  To ensure compliance, embedded statistical engines can continuously monitor the correlation between protected classifications and decision outcomes to quickly identify unintentional bias.
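
As a sketch of what that continuous monitoring can look like, the snippet below compares approval rates across groups using the common 'four-fifths' adverse impact rule of thumb; the decision records and threshold handling are illustrative only.

```python
# A minimal sketch of continuous bias monitoring: compare approval rates
# across groups using the "four-fifths" adverse impact rule of thumb.
# The decision records here are fabricated for illustration.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": True},
]

def approval_rates(records):
    rates = {}
    for group in {r["group"] for r in records}:
        outcomes = [r["approved"] for r in records if r["group"] == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

rates = approval_rates(decisions)
impact_ratio = min(rates.values()) / max(rates.values())
print(rates, f"adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # common regulatory rule of thumb
    print("flag for review: possible disparate impact")
```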

Coming up next…

All of these changes are converging to force many lenders to rethink their traditional policies, models and origination tools. In the coming weeks, I will drill into each of these trends and talk about how next-generation credit intelligence engines are helping lenders reduce risk and book better loans while delivering a better customer experience.

Next week’s topic will explore how lenders are using alternate data and non-conventional models to access a broader client base with less risk and drill into more detailed examples of cash flow based underwriting models.

An agile tool to fight fraud: no code AI

I had the privilege of working with a brilliant team of data scientists and engineers who put a big dent in credit card fraud in the early 90s. The product was called Falcon, and it was a very effective early application of what we're now calling artificial intelligence (AI). Using a combination of neural network predictive models, real-time profiling, business rules and real-time decision strategies, that small team was able to cut credit card fraud rates in half in a few years. Falcon was groundbreaking in its use of neural networks, but an arguably more critical feature was the real-time technology that allowed it to make fraud/not-fraud decisions in the few-millisecond window available while you paid your dinner tab.

Flash forward 30 years. Given our early success preventing fraud with AI, I would have thought we would have eliminated financial crime by now, but that clearly has not happened. Industry experts (and marketers) will tell you that the criminals are just getting smarter and that it's a never-ending technological arms race. There's some truth to that, but that's not the root cause of our slow progress. Our current anti-fraud solutions are plugging the obvious holes, successfully thwarting the dumb criminals, but not presenting much of a deterrent to more sophisticated ones. Unlike depictions in the movies, it doesn't take a black-hat hacker to successfully commit fraud today. In 2020, an estimated $42B[1] was lost to fraud. These losses have become so normalized that budgets simply account for them as operating expenses, and the cost is passed on to the consumer.

After 30 years of fighting crime with AI technology, why haven’t we made more progress? Every year we see a slew of new anti-fraud solution providers with creative new techniques to identify fraudsters. Meanwhile, financial services companies continue to struggle to get a handle on their losses. What’s really going on here?

Every organization’s fraud problem is unique

Financial services companies and fraud technology companies are approaching the fraud problem with the wrong mindset. The reason preventing crime is so hard is that every financial relationship is unique. There is no off-the-shelf anti-fraud platform or tool that can fully understand the relationship a financial services company has with their customers or the actions they have available to them to prevent crime.

For example, a large top-tier bank is likely to experience very different fraud attack patterns than a mid-sized credit union, and they each have different tools available to combat that fraud. The large bank will rely on the sheer volume of their client relationships to finely tune behavioral models that can detect outlier patterns quickly.  They are also more likely to implement technologies like automated text confirmations and machine learning to handle the sheer volume of alerts coming from their models and from risk scoring vendors.  A credit union, on the other hand, will leverage the depth of their member relationships to better identify high risk behavior and they will take a more personalized high-touch approach to treating alerts that better aligns with their member centric culture.  These organizations are different enough that they need different tools and strategies to effectively manage their unique fraud risks.

Why generalized fraud platforms aren’t the solution

One could argue that anti-fraud vendors could create multiple solutions, each tailored to a specific segment of the market, for example, one for big banks, and one for credit unions.  There are a few problems with this approach. First, it’s not economically viable for a software company to target just one market niche.  In addition to multiple banking tiers, there are many additional dimensions to fraud, including payment, merchant, deposit, lending and electronic transfer fraud.  There would have to be dozens of flavors of these platforms, each requiring a significant software development and go-to-market investment that would make it difficult for these vendors to be profitable.

Instead, a few dominant anti-crime platform vendors have created products that are 'configurable' to handle the nuances of each business or type of fraud. The problem with the large fraud platforms is that they really aren't configurable enough, and every implementation turns into a large, costly customization project that leaves the customer compromising on their requirements and stuck with a solution that is hard to modify once deployed. The result is a general disillusionment regarding the effectiveness of anti-fraud technology and a continual stream of new vendors trying to fill an unmet need.

Combating fraud is a dynamic process

The other problem with off-the-shelf anti-fraud platforms is that crime attack vectors aren’t static. The fraud fighting team at any financial institution is operating in a dynamic environment where new fraud patterns emerge and compensating strategies are deployed on a continuous basis. A commercial anti-fraud solution can’t anticipate these patterns ahead of time nor provide responsive counter-measures fast enough to prevent losses.  Despite claims, these platforms simply aren’t flexible enough to add new predictive data sources and create entirely new anti-fraud strategies on the fly.

I will use the analogy of your finance department. Every day, your finance team is dealing with new business requirements to work with operational data, do analysis and make financial decisions – creating new financial models in spreadsheets daily or weekly. Your fraud department needs to work in a similar way – constantly working with new data, creating new strategies and executing crime countermeasures in real-time. It would be hard to imagine a third-party vendor coming in with a 'finance' platform, customizing it once for a hefty consulting fee, then leaving the solution to run your finance department. A much more dynamic and agile solution is needed for finance, and the same is true for anti-fraud shops.

As a result, the industry has produced a lot of point solutions, risk scores and device monitoring technologies that each add value, but which are not assembled into effective anti-fraud solutions tailored for any company or financial product. The ultimate solution is left up to internal IT teams or one of the generalized anti-fraud platform vendors who promise flexibility and customization that they’re not able to deliver. Assembling all of these tools and technologies into an effective anti-crime solution is difficult for even the most advanced IT teams. It is almost impossible for small to mid-sized financial organizations. The emergence of a new class of technologies is starting to change that.

No code AI platforms offer an alternative

The emergence of no code AI platforms is beginning to impact multiple industries. No code AI provides the technologies required to create custom intelligent applications like fraud prevention in one integrated platform that can be deployed without requiring software development. Think of no code AI as a tool for fraud analysts in the same way a spreadsheet is a tool for financial analysts. Using a no code platform, fraud teams can assemble risk signals from multiple vendors, combine them with customer data, create profiles, scorecards and predictive analytics, automatically create and prioritize fraud alerts, drive automated counter-measures and create fraud analyst workbenches – all in one tool, without requiring code or custom implementation services. Depending on the complexity of an organization's anti-crime policies, a solution like this can be created with no code AI tools in weeks.
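
Under the hood, the kind of logic a fraud team assembles in such a tool can be as simple as a behavioral profile plus a blended alert score, as in this sketch; every weight and threshold is invented for illustration.

```python
# A compressed sketch of logic a fraud team might assemble in such a
# platform: a behavioral profile blended with vendor risk scores to
# prioritize alerts. All weights and thresholds are illustrative.

def update_profile(profile, txn, alpha=0.1):
    # Exponentially weighted average maintains a running spend baseline
    profile["avg_amount"] = (1 - alpha) * profile["avg_amount"] + alpha * txn["amount"]
    return profile

def alert_priority(txn, profile, vendor_scores):
    # Behavioral signal: how far this transaction sits above the baseline
    behavior = min(1.0, txn["amount"] / (10 * profile["avg_amount"] + 1))
    # Blend in third-party device and identity risk scores
    return 0.5 * behavior + 0.3 * vendor_scores["device"] + 0.2 * vendor_scores["identity"]

profile = {"avg_amount": 80.0}
txn = {"amount": 1200.0}
priority = alert_priority(txn, profile, {"device": 0.7, "identity": 0.2})
print(f"alert priority: {priority:.2f}")  # route high scores to analysts first
update_profile(profile, txn)
```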

Allowing teams to build and maintain their own custom anti-crime platforms really changes the game. Instead of long, expensive vendor implementations that become stale quickly, teams are now able to implement custom fraud prevention strategies and continually evolve them as attack patterns change. As new data or technologies emerge in the market that improve the efficiency of fraud counter-measures, they can be incorporated into custom fraud strategies in days or hours, without waiting for third-party vendor software releases or expensive consulting projects.

The emergence of a new class of fraud fighters

Using no code AI, even very small organizations who have minimal IT staff can work with fraud consulting companies to implement highly customized anti-fraud solutions at a very low cost. These boutique consulting companies are highly specialized, bringing unique insight into a particular class of fraud. They now have an opportunity to deliver more value to their clients using no code AI tools that are more effective and cost a fraction of implementing a big platform vendor solution.

No code AI has the potential to transform many industries, but one of the areas I’m most excited about is fraud prevention.  Having been around the fraud problem for almost three decades, it frustrates me to see organizations continue to struggle with this problem and to see how ineffective and expensive many of the vendor solutions are after working on them for so many years.  It’s not the vendor’s fault.  Like running a finance department, running a fraud shop is by its nature a ‘self-serve’ operation that needs flexible and agile tools to be effective. No code AI provides a transformational opportunity for financial services organizations to finally begin to win the war on fraud.

[1] PwC's Global Economic Crime and Fraud Survey 2020
About the author

Tom Tobin is the founder and CEO of Modelshop, a no code AI automation platform. Prior to founding Modelshop, Tom spent 30 years building automated AI solutions in the financial services industry, including leading the financial crime software division of Fiserv.

The Race to Digital Lending: Build vs. Buy Decision Models

Lenders know that they must digitize origination if they want to remain competitive. Completing an online application through funding in a single sitting is already available for most types of loans. Upper-spectrum lending with minimal collateral has been automated for decades. Lenders are now rapidly automating origination for a broader spectrum of risk, more sophisticated loan structures and complex collateral loans, including mortgages. Lending has become a data and technology arms race, which the pandemic's stay-at-home culture has helped accelerate.

While digital transformation has caused significant consolidation in marketing, social networking and retail, it seems that traditional lenders have been less adversely impacted so far. Customers seem to value established trust relationships in financial transactions more than a frictionless lending experience. That luxury will likely fade as the primary target for lending shifts to younger generations and non-traditional lenders offer credit services through their existing platforms (such as Amazon and Costco). Established lenders have a short window to transform their origination experience before they are left behind.

Many lenders are choosing to build a custom digital origination experience

A question facing lenders as they approach digital transformation is whether they should build a unique customer experience or leverage an off-the-shelf digital lending platform. The former has the potential to strengthen their digital brand and create competitive differentiation while the latter promises lower costs and a faster time to market.

An important trend in lending origination is to embed an interactive financing option directly in an existing consumer experience. Digital-first brands such as Carvana allow you to explore cars for sale online while being presented with pre-approved financing instantly for each car. Credit at the point of sale from lenders like CreditKey is becoming more common. I have been working with multiple mortgage lenders who are able to connect pre-approved financing with home listings. Each of these examples requires a flexible technical platform with custom integrations into point-of-sale systems at multiple points.

I am encountering more lenders who believe that building a custom platform makes sense. Whether they are making this decision based on a prior negative experience with off-the-shelf software or based on their optimism about creating a differentiated experience, more lenders seem to be striking out on their own path towards digital lending.

Creating credit origination models in custom code can introduce material risk

This build vs. buy tradeoff is not new, and most lenders realize that building a solution completely from scratch can be fraught with risk. Lenders are not experts at building software, and investing in internal development skills can be very expensive for what will be a limited-timeframe investment.

Instead, I am seeing lenders work with outsourced development teams experienced in building consumer-facing applications. The challenge is that the skills and technology needed to build a consumer lending experience are not the same as those needed to automate credit risk, collateral, and pricing decisions. Executed poorly, these next-generation origination platforms will result in the same opaque, fragile, and poorly managed decision logic that plagued custom origination solutions created in the '80s and '90s. The only difference is that lenders are writing these new platforms in Python and Ruby instead of COBOL.

No-code models provide a hybrid alternative

There is another choice. A new generation of no-code platforms provide an alternative for lenders looking to create a customized consumer experience powered by robust decision automation models ‘under the hood’.  No-code is a great choice for rapidly creating decision models that can be maintained by the business instead of by developers. This is especially important when development teams are hired to create a new lending solution but are not likely to be part of the business long-term. 

Putting critical credit logic in a no-code tool facilitates a clean change history for policies as well as a convenient tool for reviewing credit models during audits. By separating critical data, calculations and rules from other custom code, risk and marketing teams can more easily analyze the historic results needed to build projections and predictive analytics that can drive improved portfolio performance.

Lending automation involves much more than credit and pricing decisions

There are multiple points in lending origination where no-code models can help drive smarter and more transparent decisions. Credit risk and pricing decisions clearly benefit from automation, but the entire lending lifecycle including marketing, identity verification, income verification, collateral analysis, fraud prevention, loan structuring, fee calculations and truth in lending disclosures can be automated using models that put the logic and results in the hands of the business. 

Each of these decision areas involves data from vendors that inform decisions, such as the credit bureaus, banking transactions from vendors like Plaid, fraud scores and collateral reports. Effectively using these data sources can be complex and requires teams to create sophisticated calculations that extract insights critical to making an automated origination decision. Most no-code platforms include flexible data connectors as well as plug-ins for common data providers, which make the process of retrieving and interpreting alternative data sources easier. Combined with the flexibility to create custom calculations, rules and predictive analytics using that data, these connectors give lenders the tools they need to deliver a highly custom and frictionless origination experience.
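
A conventional way to structure those connectors in custom code looks something like the sketch below; the class and method names are hypothetical, not any vendor's actual API, and the responses are stubbed for illustration.

```python
from abc import ABC, abstractmethod

# A hypothetical plug-and-play connector pattern; the class and method
# names are invented for illustration, not any vendor's actual API.
class DataConnector(ABC):
    @abstractmethod
    def fetch(self, applicant_id: str) -> dict: ...

class BureauConnector(DataConnector):
    def fetch(self, applicant_id):
        # Would call a credit bureau's API; stubbed here
        return {"fico": 702, "open_liens": 0}

class BankTransactionsConnector(DataConnector):
    def fetch(self, applicant_id):
        # Would call an aggregator such as Plaid; stubbed here
        return {"verified_monthly_income": 5100}

def assemble_decision_inputs(applicant_id, connectors):
    # Merge every vendor's attributes into one decision-ready record
    inputs = {}
    for connector in connectors:
        inputs.update(connector.fetch(applicant_id))
    return inputs

print(assemble_decision_inputs("app-123", [BureauConnector(), BankTransactionsConnector()]))
```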

Continuous learning and point-of-sale choices will provide better outcomes

The bigger opportunity emerging from digital lending is the agility that allows lenders to rapidly evolve their lending policies. Connecting the no-code models that power origination to performance simulation and machine learning tools enables risk and marketing teams to rapidly learn from historic results and deploy improved decision models without a lengthy coding cycle. Continuous learning where teams are testing and promoting new credit risk, pricing and marketing models with a click will become the new normal for lenders.

The solutions emerging from this digital transformation will provide more than a frictionless experience for borrowers; they will open entirely new channels of credit for consumers. As lenders move towards instant decisions powered by intelligent and adaptive models, credit will become more integrated into everyday transactions, and it will become available to a wider spectrum of borrowers at more competitive rates. Point-of-sale channels such as retail, online auto dealers or even home listing sites will seamlessly integrate financing options into their buying process. Informed consumers will be able to understand their purchasing power and make more responsible financing decisions.

The digitization of lending is happening very quickly, and lenders who evolve have an opportunity to build larger portfolios of more profitable customers. Those who are unable to evolve will find it hard to participate in the next generation digital economy. 

No-code AI Models

(Part II)

There is a common perception that machine learning is the only technology behind AI. As described in Part I of this blog, this view misses critical capabilities that teams creating intelligent software must master. These include dimensional data modeling, business rules, projections and the ability to transform projections into optimal decisions in real-time.

Established AI innovators like Google and Amazon have developed proprietary frameworks that assemble these capabilities to rapidly deliver intelligent solutions.  Only small parts of these frameworks have been shared as open source tools. Similarly, innovative AI startup companies have created their own custom software, typically focused on very specific use-cases such as lending or marketing automation.

We are rapidly approaching a future where every business will need to incorporate AI into their products in order to compete. Fortunately, there are emerging trends in no-code modeling platforms that will make it easier for businesses to do just that.

AI powered products: now required

If you aren't convinced that AI is soon to become a required feature of every product, you are not alone. Artificial Intelligence is an overly vague term that has been the subject of substantial hype, leading to confusion and skepticism. In some ways, it reminds me of the hype around the internet in the late 90s. I have a clear memory of using early tools like MapQuest and bulletin boards thinking that the pundits were over-selling the potential impact, even though I worked in the software industry. It turned out that I was wrong.

Like the internet, AI will lead to a profound disruption of not only business, but of everyday life. The reason isn’t that AI delivers smarter answers, it’s that AI delivers instant answers.  Your next generation of customers will have grown up getting instant answers for everything: entertainment, shopping, banking. AI is less about having the absolutely smartest answer and more about removing humans from every process or decision standing between your customer and an instant answer to their current needs. Most businesses that can’t complete transactions without human intervention will become obsolete in less than 5 years.

Why is AI so hard?

AI is hard because all of the technology needed to automate intelligence lives in different tools today: databases and ETL for data modeling, spreadsheets or BI for analysis, data science platforms such as Python for machine learning, and decision automation that most likely lives in custom code. Moving data between these tools is hard enough, but transferring intelligence between them currently requires manual re-writes of the variables, logic and calculations across multiple systems.

This constant translation between siloed teams and tools is expensive and time consuming. A common estimate is that 70% of the time spent building AI solutions goes to extracting and cleaning data so it can be used by tools across the development process. Not only does this slow a team's ability to build AI solutions, it dramatically decreases agility when reacting to new opportunities. If halfway through a project an analyst realizes they need more data, or that they made an incorrect assumption, they often have to go back through an expensive process of re-requesting data extracts, which can delay progress by days or weeks.

The end result is that, despite tremendous investment, many established businesses have made limited progress toward AI-enabled solutions. In traditional organizations, data scientists, developers and business experts continue to work in silos, continue to use their own tools, and are slow to innovate. By comparison, emerging digital-first competitors have integrated these teams around end-to-end proprietary AI frameworks. Their teams all work in the same modeling tools with the same data structures and a common view of how they apply intelligence to deliver solutions.

No-code to the rescue?

A promising answer to these challenges is the no-code platform, which integrates and automates much of the tedious work around creating AI solutions. By leveraging a platform that automates data management, machine learning and solution deployment, subject matter experts without coding experience can build complete AI solutions.

Most technologists are skeptical of no-code tools, for valid reasons. Early attempts at no-code were not successful because they didn't have the flexibility needed to handle the nuances of building truly custom software. Pointing and clicking together a business application was too tedious and too restrictive compared to robust software languages with a full range of logic expressions.

There is one well-known exception: the spreadsheet. The spreadsheet is one of the earliest no-code solutions, and since its invention in the late '70s it has arguably been the most successful business tool ever. Unlike coding, the spreadsheet allows business experts to work with multiple data sources, create data relationships, build calculations, project future outcomes and make decisions, all without a formal coding language.

One feature of Excel that has contributed to its success is its declarative expression syntax. Excel users have the full power of an expression language without having to worry about the more tedious aspects of coding such as data management, storage, messaging or execution flow. A declarative calculation engine can compute very complex systems based simply on the business logic placed in each cell. If you've ever had to re-implement a complex spreadsheet as a software application, you know that a model built in a spreadsheet in a couple of days can take months to re-create in a programming language.
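To make the declarative idea concrete, here is a minimal sketch of such a calculation engine in Python. It is purely illustrative, not Modelshop's implementation: each 'cell' is defined as a formula over other cells, and the engine resolves evaluation order on its own, the way a spreadsheet does.

```python
# Minimal sketch of a declarative calculation engine (illustrative only).
# Each "cell" is a formula over other cells; the engine resolves
# evaluation order lazily, the way a spreadsheet does.

class Model:
    def __init__(self):
        self.formulas = {}  # cell name -> formula (a function of the model)
        self.cache = {}     # memoized results

    def define(self, name, formula):
        self.formulas[name] = formula

    def __getitem__(self, name):
        # Dependencies resolve themselves recursively, so the author
        # never has to specify execution order.
        if name not in self.cache:
            self.cache[name] = self.formulas[name](self)
        return self.cache[name]

m = Model()
# Declare the business logic the way a spreadsheet author would:
m.define("principal",    lambda m: 250_000)
m.define("annual_rate",  lambda m: 0.06)
m.define("term_months",  lambda m: 360)
m.define("monthly_rate", lambda m: m["annual_rate"] / 12)
m.define("payment",      lambda m: m["principal"] * m["monthly_rate"]
                                   / (1 - (1 + m["monthly_rate"]) ** -m["term_months"]))

print(round(m["payment"], 2))  # standard amortization formula: ~1498.88
```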

The future of No-code AI

There is a new breed of no-code solutions on the market that are much more capable than prior technologies. These tools promise a 10x acceleration in developing solutions compared to traditional coding. They have primarily focused on less complicated applications such as websites, UI forms and workflow automation, but early no-code AI platforms like Modelshop are maturing rapidly.

Excel's success as a no-code solution informs the path forward to the next generation of no-code AI modeling tools. No-code modeling platforms that free the user from data marshaling, storage and execution paths, while providing the full power of a declarative syntax, will evolve as the successors to Excel and become the primary tool for subject matter experts to create AI solutions. Unlike Excel, these next-generation no-code modeling tools will be shareable online, able to handle high-dimension data and large datasets, embed machine learning, and process real-time transactions.

No-code AI tools will deliver everything needed to build intelligent applications in one integrated platform:

Online, multi-user environment

Multi-dimensional data modeling

Plug-and-play data connectors

Declarative variables and business rules

Automated machine learning

One-click deployment as real-time APIs (see the sketch below)
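As an illustration of the last item in the list, here is a hypothetical sketch of what a generated real-time decision API might reduce to under the hood. FastAPI is used only as a stand-in; the endpoint, fields and thresholds are all invented for the example.

```python
# Hypothetical sketch of "deployment as a real-time API": a decision
# model wrapped in an HTTP endpoint. A no-code platform would generate
# the equivalent automatically; every name and threshold here is invented.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Application(BaseModel):
    income: float        # annual income
    debt: float          # annual debt obligations
    credit_score: int

@app.post("/decision")
def decide(application: Application):
    # Stand-in for a trained model plus business rules.
    dti = application.debt / max(application.income, 1.0)
    approved = application.credit_score >= 640 and dti < 0.43
    return {"approved": approved, "dti": round(dti, 3)}

# Run with: uvicorn decision_api:app --reload  (assuming this file is decision_api.py)
```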

Towards Citizen AI

In five years, the entire software industry will look very different than it does today, thanks to an emerging breed of no-code AI tools. Platform vendors will develop a common way to exchange not only data, but business logic and analytics between systems without requiring code. Intelligent solutions, armed with the ability to make decisions automatically, will begin to transact with each other, and commerce between organizations will simultaneously accelerate and become more efficient.

Most importantly, subject matter experts, whether risk analysts, financial analysts or healthcare workers, will be able to create custom AI solutions with no software development or data science background. This era of 'citizen AI' development will become a tremendous accelerator of innovation. When experts can turn an idea into an intelligent, automated service in hours instead of months or years, creativity will explode and we will see entirely new social and business ecosystems evolve in ways we can't currently imagine.

What is an AI model?

(Part I)

When I started a company called Modelshop, I got a lot of questions about the name. Models as in fashion? Model trains? I rationalized the confusion as coming from friends outside the IT industry; surely the concept of a model is less confusing in technology circles. Well, as it turns out, not so much. There is significant debate about the definition of models and their role in intelligent software solutions.

If you look up the term model in the dictionary, you get no fewer than 21 definitions, including nouns, verbs and adjectives. The definition closest to the models we leverage in AI is:

Model: 3. A system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs. also: A computer simulation based on such a system

Merriam-Webster

Even the dictionary is conflicted enough to add the ‘also’ part.  It turns out that this subtle clarification is an important part of how models add intelligence to AI solutions.

Models are more than Predictive Analytics

The most prevalent interpretation I hear is that a model is the same as a predictive analytic, for example a logistic regression or a neural network. While these are models, predictions are typically a small part of a model-driven AI solution. A predictive analytic is more or less a curve fitter: given historic data points, a predictive model 'draws a line' through the points so that new data points can be 'plotted' against it to predict outcomes. This is an over-simplification, of course. There are more sophisticated fitting techniques that classify and cluster points and fit equations across hundreds of dimensions in multiple stages (deep learning), but the theory is the same. For simplicity, I'm including techniques like unsupervised classification in this predictive category; crudely, they predict a class.
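To make the curve-fitter idea concrete, here is a minimal scikit-learn sketch on synthetic data: fit a logistic regression to historic points, then 'plot' a new point against the fitted curve to score it.

```python
# Illustrative only: a logistic regression "draws a line" (a decision
# boundary) through historic points so that new points can be scored.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic history: [income in $k, debt-to-income %] -> defaulted (1) or not (0)
X = np.array([[40, 45], [85, 20], [60, 35], [120, 15], [30, 50], [95, 25]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# "Plot" a new applicant against the fitted curve to predict the outcome.
new_applicant = np.array([[70, 30]])
print(model.predict_proba(new_applicant)[0][1])  # estimated default probability
```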

For AI to approve a loan, autonomously drive a car or conduct a human-like conversation as a bot, you need a lot more than predictors. The models that drive AI solutions turn out to be ensembles of predictive models, calculations, rules, decision trees and simulations that together project multiple outcomes and ultimately take action to achieve the desired goal.

As an example, if a customer calls to ask about refinancing a loan, an AI solution can not only predict the intent of the call from a chat-bot conversation, but also determine which programs the caller qualifies for, allow them to select terms, recalculate their payments and complete the refinance without human intervention.
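A toy sketch of that kind of ensemble follows; every formula and threshold is invented for illustration. The point is that the predictive score is just one input, combined with a payment calculation and business rules to reach a decision.

```python
# Toy decision "ensemble": a predictive score combined with a payment
# calculation and business rules. All numbers here are invented.

def default_probability(credit_score: int) -> float:
    # Stand-in for a trained predictive model.
    return max(0.01, min(0.99, (750 - credit_score) / 400))

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def refinance_decision(balance, credit_score, income, offered_rate, term_months):
    pd = default_probability(credit_score)                         # prediction
    payment = monthly_payment(balance, offered_rate, term_months)  # calculation
    affordable = payment < 0.35 * income / 12                      # business rule
    return {"eligible": pd < 0.20 and affordable,                  # decision
            "payment": round(payment, 2),
            "default_probability": round(pd, 2)}

print(refinance_decision(200_000, 720, 95_000, 0.055, 360))
```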

The Fog of Modeling Tools

When you look at everything that comprises an AI model, it becomes clear that the analytics industry is too focused on predictive modeling tools and techniques. There are a ton of data science frameworks and comparatively little innovation happening in decision automation. The result is that machine learning (the process of training a predictive analytic) ends up disconnected from the rest of the decision-making process: data scientists focus on their predictions, while the rest of the organization grapples with cleaning data, creating strategies, calculating projections, presenting options and making decisions using entirely different tools.

Let's talk about these other types of modeling tools, the ones that gather data, perform calculations, create projections and ultimately make decisions. These processes are often scattered across multiple teams and tools, making it difficult to move data and information from one modeling step to another. If you work in finance or risk, you think of models as financial models, created in spreadsheets. Technology teams work on data models, mapping and merging data to produce new data sources using SQL or ETL platforms. Product teams often think in terms of sales and pricing models, probably also running projections in spreadsheets, to the chagrin of CRM vendors.

All of these models are important, but like the blind men feeling the parts of an elephant, teams working with these independent models in their disconnected tools are often not seeing the entire picture. This is why progress on AI solutions is so slow and expensive for most organizations. Bringing all of these parts together into cohesive AI modeling tools that can begin to automate business and operational decisions using advanced analytics is an evolution still in its infancy.

Thank you, Dr. Fauci

The good news is that we are evolving. One positive sign is that the layperson now has a better idea of what we mean by a model, thanks to briefings during the early stages of the COVID-19 pandemic. While questions about model trains and fashion models have subsided, I still find myself in conversations where an informed technologist immediately jumps to predictive models and asks how Modelshop is different from other data science tools on the market.

The ensuing conversation about the different types of models can go one of two ways. Hard-core data scientists sometimes tune out, which can result in an awkward conversation; I suspect some would rather ignore the messy aspects of turning predictive models into production applications capable of making business decisions. More and more often, however, audiences are starting to see the potential of unifying all of these modeling tools into a single AI framework.

As we evolve past the honeymoon stage of AI, businesses are beginning to demand a return on their investment, and a new generation of technologists (both in IT and data science) is emerging with a business-first focus. We will continue to evolve, and we will begin to link together these different types of models across our organizations. The result will be an ability to rapidly create unified solutions that have the power to fully automate intelligent decisions and deliver on the promise of AI.

In the second blog of my two-part series, I will talk about a convergence that is happening between modeling tools and no-code technologies that is accelerating this unification of modeling and helping create a path towards ‘citizen AI’.
