How Financial Services can use Generative AI for faster Loan Decisioning

Solution architected and deployed on AWS

The modern financial landscape is driven by a seamless integration of technology and data. As the volume of unstructured data inside organizations grows, tools that can sift through it and derive actionable insights become essential. Generative AI is transforming how we access and interpret information, especially in the financial services sector.

The AWS architecture diagram describes a system that harnesses machine learning and AI, centered on a Generative AI model, to streamline and explain loan application decisions based on a customer's financial history and previous applications.

Source: AWS Machine Learning Blog

Key Components of the Architecture:

1. Employee Interaction & Web Application: The loan process commences with an institution's employee using an AWS-hosted front-end application. It serves as a portal for entering pertinent loan application details.

2. AWS Lambda: Central to the system, AWS Lambda processes data in real-time, negating the need for dedicated servers. It orchestrates the workflow, encompassing intent recognition via LangChain, assessing financial and credit scores, referencing the internal CRM system, evaluating against loan policies, and finally, notifying customers.
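The orchestration described above can be sketched as a single Lambda handler. This is a minimal, illustrative sketch: the step functions, field names (`customer_id`, `amount`), and the policy threshold are hypothetical stand-ins for the real LangChain, database, and policy integrations.

```python
import json

# Hypothetical stubs for each workflow step; in the real system these would
# call LangChain, the financial/credit database, and the CRM respectively.
def recognize_intent(text):
    return "loan_application" if "loan" in text.lower() else "other"

def fetch_credit_profile(customer_id):
    return {"customer_id": customer_id, "credit_score": 712}

def check_loan_policy(credit_score, amount):
    # Simplified policy check: score and amount thresholds are invented.
    return credit_score >= 650 and amount <= 50_000

def lambda_handler(event, context):
    """Entry point invoked by the front-end web application."""
    body = json.loads(event["body"])
    if recognize_intent(body["query"]) != "loan_application":
        return {"statusCode": 400,
                "body": json.dumps({"error": "unsupported intent"})}
    profile = fetch_credit_profile(body["customer_id"])
    approved = check_loan_policy(profile["credit_score"], body["amount"])
    return {"statusCode": 200, "body": json.dumps({"approved": approved})}
```

In production, each stub would be replaced by a call to the corresponding AWS service, and the final step would publish the outcome via SNS.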

3. Amazon DynamoDB: This NoSQL database service handles data storage. By archiving loan application data and enabling quick retrieval of financial and credit histories, it is central to the decision-making process.
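A sketch of how a loan application record might be shaped for DynamoDB. The table name, key schema, and attributes are assumptions for illustration; the Decimal conversion is needed because DynamoDB rejects Python floats.

```python
from decimal import Decimal

def build_application_item(application_id, customer_id, amount, status="PENDING"):
    """Shape a loan application record for a DynamoDB table.

    DynamoDB does not accept Python floats, so monetary amounts
    are stored as Decimal.
    """
    return {
        "application_id": application_id,  # assumed partition key
        "customer_id": customer_id,
        "amount": Decimal(str(amount)),
        "status": status,
    }

def save_application(table, item):
    """Persist the item; `table` is a boto3 DynamoDB Table resource,
    e.g. boto3.resource("dynamodb").Table("LoanApplications")."""
    table.put_item(Item=item)
```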

4. Decision Data Sources:

- Financial and Credit Database: A reservoir of financial data, this database sheds light on an applicant's fiscal behavior, central to assessing their credibility.

- Loan Policies: These standardized guidelines ensure that every loan decision aligns with the institution's predefined risk parameters.

5. Amazon SNS: Once a decision is reached, Amazon Simple Notification Service (SNS) promptly notifies the relevant parties, enabling efficient communication and rapid follow-up.
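The notification step could look like the sketch below. The message wording and topic are illustrative; `sns_client.publish` with `TopicArn`, `Subject`, and `Message` is the standard boto3 call.

```python
def build_decision_message(application_id, approved, reason):
    """Compose the notification payload sent when a decision is reached."""
    outcome = "approved" if approved else "declined"
    return {
        "Subject": f"Loan application {application_id} {outcome}",
        "Message": f"Application {application_id} was {outcome}. Reason: {reason}",
    }

def notify_decision(sns_client, topic_arn, application_id, approved, reason):
    """Publish the decision; sns_client is boto3.client('sns')."""
    msg = build_decision_message(application_id, approved, reason)
    return sns_client.publish(TopicArn=topic_arn, **msg)
```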

6. Amazon Kendra: Empowered by machine learning, Kendra delves into extensive data concerning past loan applications, delivering crucial insights that influence the loan verdict.
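Querying Kendra for relevant past applications can be sketched as follows. The index ID is an assumption; the `Query` call and the `ResultItems` / `DocumentExcerpt` response shape come from the Kendra API.

```python
def top_passages(kendra_response, limit=3):
    """Pull the highest-ranked excerpts out of a Kendra Query response."""
    items = kendra_response.get("ResultItems", [])
    return [item["DocumentExcerpt"]["Text"] for item in items[:limit]]

def search_past_applications(kendra_client, index_id, question):
    """kendra_client is boto3.client('kendra'); index_id identifies
    the Kendra index holding past loan applications."""
    resp = kendra_client.query(IndexId=index_id, QueryText=question)
    return top_passages(resp)
```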

7. Amazon SageMaker & LLM: At the core is the Generative AI model, hosted on Amazon SageMaker. Drawing on data from Kendra and the financial database, this Large Language Model estimates loan repayment probabilities based on historical patterns. More importantly, it explains the rationale behind every loan decision, promoting transparency and strengthening trust between institutions and clients.
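Combining the retrieved context with applicant data and invoking the SageMaker endpoint might look like the sketch below. The prompt template and endpoint name are assumptions; the `{"inputs": ...}` payload shape varies by model container, so check your model's documentation.

```python
import json

# Illustrative prompt template; the real wording would be tuned per model.
PROMPT_TEMPLATE = (
    "You are a loan decision assistant. Using only the context below, "
    "decide whether the application should be approved and explain why.\n\n"
    "Context:\n{context}\n\nApplication:\n{application}\n\nAnswer:"
)

def build_prompt(passages, application_summary):
    """Combine Kendra passages and applicant data into a single LLM prompt."""
    return PROMPT_TEMPLATE.format(
        context="\n".join(f"- {p}" for p in passages),
        application=application_summary,
    )

def invoke_llm(sm_runtime, endpoint_name, prompt):
    """sm_runtime is boto3.client('sagemaker-runtime')."""
    resp = sm_runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt}),
    )
    return json.loads(resp["Body"].read())
```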

By capitalizing on Generative AI's capabilities and AWS's robust service suite, financial enterprises can usher in an era of rapid, informed, and transparent loan decisions, anchoring them on empirical data.

Detailed Solution Overview:

The proposed solution leverages transformer models to provide curated answers to customer inquiries about internal documents, even if the model hasn't been trained on that specific data, a technique called zero-shot prompting. The benefits customers can anticipate include:

- Precise answers from existing internal documents.

- Rapid responses to intricate queries with the latest data using Large Language Models (LLMs).

- A centralized dashboard to search past queries.

- Time saved by reducing manual searches, thereby lessening stress.
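Zero-shot prompting, as used above, means the instruction and the document itself carry all the information the model needs, with no worked examples in the prompt. A minimal sketch of such a prompt:

```python
def zero_shot_prompt(document_text, question):
    """Ask a model to answer from a document it was never trained on:
    the prompt supplies the instruction and the context, with no
    in-prompt examples (hence 'zero-shot')."""
    return (
        "Answer the question using only the document below. "
        "If the answer is not in the document, say so.\n\n"
        f"Document:\n{document_text}\n\nQuestion: {question}\nAnswer:"
    )
```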

Retrieval Augmented Generation (RAG):

The RAG approach overcomes certain LLM query limitations. It extracts answers from the knowledge base and uses LLMs to condense the findings into clear responses. Implementing RAG with Amazon Kendra offers a solution to challenges like:

- Hallucinations: LLMs, trained on vast data, might produce inaccurate answers based on statistical probabilities.

- Multiple data repositories: Aggregating data from various sources manually can be cumbersome.

- Security: Even with Amazon Comprehend's filtering, there's a risk of exposing sensitive data, underscoring the need for stringent access controls.

- Data Relevance: LLMs are trained on data with a cutoff date and are costly to retrain frequently. The onus of keeping indexed document content current therefore lies with the organization.

- Cost: Running LLMs requires significant computational power, which can be expensive, especially for large-scale operations. However, AWS's pay-as-you-go model can alleviate some cost concerns.
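The RAG flow above can be reduced to a self-contained sketch. The toy word-overlap retriever and echoing stub LLM are placeholders: in the actual solution, retrieval is done by Amazon Kendra and generation by the SageMaker-hosted model.

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query.
    Stands in for Amazon Kendra in this sketch."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_answer(query, documents, llm):
    """Ground the LLM in retrieved passages so answers come from the
    knowledge base rather than the model's (possibly stale) training data."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm(prompt)

docs = [
    "Loan policy: applications above $50,000 require two approvals.",
    "Office hours are 9 to 5 on weekdays.",
]
# Stub LLM that echoes its grounded prompt; swap in a SageMaker endpoint call.
answer = rag_answer("What does the loan policy say about approvals?",
                    docs, llm=lambda p: p)
```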

Utilizing Amazon SageMaker JumpStart:

For transformer models, Amazon SageMaker JumpStart is invaluable. It provides an array of pre-configured machine learning models, encompassing text generation and question-answering models, facilitating easy deployment.
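With the SageMaker Python SDK, deploying a pre-configured JumpStart model is a few lines. The model ID and instance type below are illustrative choices from the JumpStart catalog, not prescribed by this solution.

```python
def deploy_jumpstart_model(model_id="huggingface-text2text-flan-t5-xl",
                           instance_type="ml.g5.2xlarge"):
    """Deploy a pre-trained JumpStart model to a real-time endpoint.

    Requires the `sagemaker` Python SDK and AWS credentials; the defaults
    here are example values, not recommendations.
    """
    from sagemaker.jumpstart.model import JumpStartModel
    model = JumpStartModel(model_id=model_id)
    predictor = model.deploy(initial_instance_count=1,
                             instance_type=instance_type)
    return predictor
```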

Integrating Security:

Prioritizing security, the solution adheres to the Security Pillar of the Well-Architected Framework. It uses Amazon Cognito for authentication, compatible with several third-party identity providers and frameworks like OAuth, OpenID Connect (OIDC), and Security Assertion Markup Language (SAML). This ensures user activities are traceable, enhancing accountability.
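For example, authenticating a user against a Cognito user pool via boto3 could be sketched as below. The client ID and flow are assumptions; `USER_PASSWORD_AUTH` must be explicitly enabled on the app client, and OIDC/SAML federation would typically be handled by the hosted UI rather than this direct call.

```python
def build_auth_request(client_id, username, password):
    """Parameters for Cognito's InitiateAuth call using the
    USER_PASSWORD_AUTH flow."""
    return {
        "ClientId": client_id,
        "AuthFlow": "USER_PASSWORD_AUTH",
        "AuthParameters": {"USERNAME": username, "PASSWORD": password},
    }

def authenticate(cognito_client, client_id, username, password):
    """cognito_client is boto3.client('cognito-idp');
    returns the ID token on success."""
    resp = cognito_client.initiate_auth(
        **build_auth_request(client_id, username, password))
    return resp["AuthenticationResult"]["IdToken"]
```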

Furthermore, the solution employs Amazon Comprehend's feature to detect personally identifiable information (PII), automatically redacting sensitive details like addresses, social security numbers, emails, etc. Any user-provided PII through queries is neither stored nor used by Amazon Kendra or fed into the LLM.
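The redaction step can be sketched with Comprehend's `detect_pii_entities` API, which returns entity types with character offsets. The sample text is invented; the offset-based replacement works backwards so earlier offsets remain valid.

```python
def redact(text, entities):
    """Replace each detected PII span with its entity type, working from
    the end of the string so earlier offsets stay valid."""
    for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[:e["BeginOffset"]] + f"[{e['Type']}]" + text[e["EndOffset"]:]
    return text

def redact_pii(comprehend_client, text):
    """comprehend_client is boto3.client('comprehend')."""
    resp = comprehend_client.detect_pii_entities(Text=text, LanguageCode="en")
    return redact(text, resp["Entities"])
```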

Conclusion:

The solution proposes a sophisticated method of leveraging AI to provide immediate, concise answers to customer queries related to internal documents. While it offers several advantages like rapid responses and reduced manual efforts, considerations related to security, data relevancy, and costs must be meticulously addressed. AWS tools like Amazon Kendra, SageMaker JumpStart, and Comprehend form the backbone of this solution, ensuring efficiency, security, and flexibility in its deployment.