Jan 21, 2026

15 Essential Questions to Ask AI Lending Vendors About Compliance

15 essential compliance questions to ask AI lending vendors about FDCPA controls, fair lending, data privacy, and audit readiness. CFPB and NCUA guidance included.

Regulators aren't asking whether you use AI anymore. They're asking how you govern it. When the CFPB, OCC, or state examiners review your AI vendor relationships, they expect you to demonstrate that you asked the right questions before deployment, not after an enforcement action.

This guide covers the 15 questions that matter most when evaluating AI lending vendors, organized around the compliance areas where examiner scrutiny runs deepest: data privacy, fair lending, collections controls, and audit readiness.

Why Compliance Questions Matter for AI Lending Vendors

When evaluating AI lending vendors, compliance with fair lending laws like ECOA and the Fair Housing Act matters as much as technical capability. Vendors need to demonstrate that their AI is transparent, fair, and auditable because regulators at the CFPB, OCC, FDIC, and NCUA are paying close attention to how lenders deploy automation.

Federal guidance reinforces this principle: NCUA's third-party risk framework emphasizes that outsourcing business functions amplifies inherent risks rather than eliminating them. Credit unions remain fully responsible for safeguarding member assets and ensuring sound operations regardless of third-party involvement.

Generic AI vendor questionnaires fall short here. They cover security basics but skip the lending-specific controls examiners actually look for: FDCPA compliance in collections, explainability for adverse action notices, contact frequency limits, and interaction-level audit trails.

The questions below are organized around the areas where AI compliance risk runs highest in consumer lending.

What to Include in an AI Vendor Questionnaire for Lenders

A thorough AI vendor questionnaire for lenders covers four core areas. Each one maps to specific regulatory expectations.

Data and Privacy Questions

Data and privacy questions address how borrower information is collected, stored, accessed, and kept separate between clients. These form the foundation for GLBA compliance and state privacy law requirements. In 2026, this also includes addressing consumer rights under laws like California's CCPA and Colorado's AI Act, which grant borrowers the right to opt out of certain automated decision-making.

Model Transparency and Fair Lending Questions

Model transparency questions cover explainability, bias testing, and model risk management documentation. Examiners focus heavily on this area during fair lending reviews, including whether your AI system qualifies as a "model" under regulatory guidance or operates as a simpler decision tool.

Governance and Regulatory Compliance Questions

Governance questions address the vendor's AI oversight framework, UDAAP monitoring practices, and ability to adapt when regulations change. This includes alignment with frameworks like the NIST AI Risk Management Framework and procedures for human review of AI decisions.

Operations and Integration Questions

Operations questions cover system compatibility, audit logging completeness, and the vendor's ability to support regulatory exams with documentation on demand.

Questions About Data Privacy and Borrower Information

Data privacy forms the foundation of any AI vendor evaluation. Regulators expect lenders to know exactly how borrower information flows through third-party systems.

1. How is borrower data collected, used, and stored

This question covers the full data lifecycle. You're looking for specifics here, not generalities.

  • Collection methods: What borrower data does the AI access, and through what channels?

  • Storage location: Where is data stored, and is it within the United States? While not strictly required under federal law, examiners increasingly expect U.S.-based data residency for sensitive financial information. Offshore data storage raises immediate red flags during examinations.

  • Encryption standards: What encryption protects data at rest and in transit?

  • Retention policies: How long is data kept, and what triggers deletion?

2. Who has access to borrower information and under what controls

Access controls determine who can see borrower data and under what circumstances. This includes the vendor's employees, subcontractors, and any third-party processors.

Ask about role-based permissions, access logging, and whether the vendor can demonstrate least-privilege principles in their admin interface during a demo.

With new state privacy laws in effect, also ask: "If a borrower exercises their right to opt out of automated processing under state law, does your system have a documented workflow to route their case to human review?"

3. How do you prevent borrower data from training models for other clients

Data segregation is a concern many lenders overlook until it becomes a problem. Understanding how vendors use customer data for model improvement is critical.

Ask for transparency about data usage practices. Some vendors maintain strict separation where your data never influences models serving other clients. Others may use aggregated, anonymized patterns across customers to improve platform performance. Either approach can work if it's clearly disclosed and contractually documented.

Look for explicit documentation of the vendor's data usage policy. Sophisticated vendors now use techniques like differential privacy or zero-party data approaches that allow model improvement without retaining specific borrower identities. Some describe their approach as a "clean room" environment where individual customer data remains isolated.

The key is transparency. If the vendor hedges on this point or can't clearly explain their data usage practices, that's worth exploring further before signing a contract.

Questions About Model Transparency and Fair Lending

Model transparency is where regulatory scrutiny runs deepest. Examiners expect lenders to explain AI decisions as if they made them directly, because legally they did.

4. How do you test models for disparate impact and fair lending risk

Bias testing isn't a one-time event. Examiners expect ongoing monitoring across protected classes, with documented methodologies and results. Best practices now include testing for "intersectional" disparate impact: examining outcomes not just for women or for Black borrowers separately, for example, but specifically for Black women, since outcomes can differ where protected classes intersect.

Ask how often testing occurs, what fairness metrics the vendor uses, and whether you can see actual test results for your specific model instance rather than just a description of the vendor's general process.
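
To make "fairness metrics" concrete, one common screen is the adverse impact ratio (the "four-fifths rule"), computed across intersectional groups. A minimal sketch follows; the column names and the 0.8 threshold are illustrative, and real fair lending testing layers in additional metrics and statistical controls:

```python
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_cols: list[str],
                          approved_col: str = "approved") -> pd.Series:
    """Approval rate of each (possibly intersectional) group divided by the
    highest group's approval rate. Ratios below ~0.8 are a common screening
    flag for potential disparate impact, not a legal conclusion."""
    rates = df.groupby(group_cols)[approved_col].mean()
    return rates / rates.max()

# Intersectional testing: group by race AND sex jointly, not separately.
# ratios = adverse_impact_ratios(decisions, ["race", "sex"])
# flagged = ratios[ratios < 0.8]
```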

5. Can you explain how AI decisions are made to borrowers and regulators

ECOA requires lenders to provide specific reasons when taking adverse action. If the AI recommends a decision, you still need to explain it in plain language to the borrower.

The CFPB has made clear that generic explanations like "credit score" or "internal risk rating" don't satisfy legal requirements. Per CFPB Circular 2023-03, adverse action notices must identify the specific factors that contributed to the decision. For instance, "inconsistent payment history in the last six months" rather than just "payment history."

Ask whether the vendor can generate the specific, actionable reasons required for adverse action notices and whether those explanations hold up to examiner scrutiny.
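
As an illustration of what "specific, actionable reasons" can look like mechanically, here is a hypothetical sketch that maps a model's per-decision feature contributions (for example, SHAP values, if the vendor exposes them) to notice-ready language. Every name here is illustrative, not a real vendor API:

```python
# Hypothetical mapping from model features to the specific, plain-language
# reasons CFPB Circular 2023-03 expects on adverse action notices.
REASON_TEXT = {
    "missed_payments_6mo": "Inconsistent payment history in the last six months",
    "utilization_ratio": "High balances relative to credit limits",
    "recent_inquiries": "Number of recent applications for credit",
}

def adverse_action_reasons(contributions: dict[str, float],
                           top_n: int = 4) -> list[str]:
    """Take the features that pushed the decision furthest toward denial
    (most negative contributions) and translate them to notice language."""
    most_negative = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    return [REASON_TEXT.get(name, name)
            for name, value in most_negative if value < 0]
```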

6. How do you document model validation for MRM requirements

Model risk management expectations from OCC and Federal Reserve guidance (SR 11-7) apply to AI vendors. This means independent validation, a maintained model inventory, and ongoing performance monitoring.

One key question: Does the vendor classify their system as a "model" that requires full MRM governance, or as a simpler "tool" that operates on business rules? The distinction is blurring. The CFPB has signaled that if a system uses complex algorithms to predict borrower behavior, it will likely be treated as a model regardless of what the vendor calls it.

Ask who conducts independent validation, how often it occurs, and what documentation the vendor can provide for your MRM files.

Questions About AI Governance and Regulatory Compliance

Weak vendor governance becomes your compliance problem. Examiners hold lenders accountable for the AI they deploy, regardless of who built it.

7. Do you have an AI governance framework aligned with federal guidance

A mature AI governance framework includes documented policies, assigned accountability, and alignment with interagency AI guidance.

  • Named accountability: Who owns AI governance at the vendor, by name and title?

  • Policy documentation: Can they share their AI ethics and governance policies?

  • Framework alignment: Do they follow NIST AI RMF, ISO 42001, or similar standards?

The NIST AI Risk Management Framework has become a common reference point for demonstrating structured AI governance. Ask whether the vendor has mapped their practices to this framework.

8. How do you monitor AI outputs for UDAAP risk

UDAAP (unfair, deceptive, or abusive acts or practices) is where AI can create problems quickly. Automated systems can produce patterns that look fine individually but create UDAAP exposure at scale.

Ask how the vendor detects problematic patterns before regulators do, and what monitoring runs continuously for UDAAP risk.

This includes monitoring for "model drift," the phenomenon where an AI model's accuracy or fairness degrades over time as economic conditions change or as the borrower population shifts. For vendors using generative AI or large language models in collections, also ask about "hallucination monitoring." These systems can occasionally generate responses that don't align with actual policy, such as accidentally promising debt forgiveness the lender never authorized.

Rule-based systems or tightly constrained AI architectures may have lower risk in this area, but documentation of controls remains important regardless of approach. Ask: "How do you detect when your model's performance begins to drift, and what triggers a review?"
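
One widely used drift screen a vendor might describe is the population stability index (PSI), which compares today's score distribution against the one observed at validation. A minimal sketch, assuming continuous model scores; the 0.1/0.25 thresholds in the comments are industry rules of thumb, not regulatory requirements:

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray, bins: int = 10) -> float:
    """PSI between validation-time scores and current scores.
    Rules of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 trigger review."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range scores
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c = np.histogram(current, bins=edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))
```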

9. How do you adapt to regulatory changes and new guidance

Regulations evolve. CFPB guidance on AI, state-level requirements, and examiner expectations shift over time.

Ask how quickly the vendor can implement new requirements when guidance changes. Even better, ask for examples of past adaptations. How did they respond to recent CFPB statements on AI?

Also ask about human oversight: "When an AI decision is disputed by a borrower, what is the process for a human agent to review the decision, explain it in detail, and if necessary override the system?" Regulators increasingly expect a clear path for human review of contested automated decisions. While not yet required under U.S. federal law, the EU AI Act mandates human oversight for high-risk AI systems, and this is becoming a de facto expectation for U.S. vendors serving global financial institutions.

Questions About Collections Compliance and FDCPA Controls

Collections is where AI compliance risk runs highest. Consumer-facing interactions trigger FDCPA, Regulation F, and state-specific requirements that generic AI tools weren't built to handle.

10. How do you enforce contact frequency limits and time-of-day restrictions

Regulation F's 7-in-7 rule limits call attempts to seven within seven consecutive days in connection with a particular debt. Time-of-day restrictions prohibit contact before 8 a.m. or after 9 p.m. in the borrower's time zone. State laws often add stricter limits.

Ask how the system prevents violations automatically, not through manual oversight, but through built-in controls that can't be overridden. These are sometimes called "hard guardrails."

For example: "Even if the AI predicts the best time to reach a borrower is 9:30 p.m., does the system have hard-coded rules that prevent any contact attempt outside legal hours?"

11. How are required disclosures like Mini-Miranda delivered

Debt collection communications require specific disclosures: the Mini-Miranda warning, validation notices, and debt collector identification. Requirements vary by channel and state.

Ask how the vendor ensures consistency across voice, text, and email, and how they document that disclosures were delivered. Look for timestamped records showing not just that a disclosure was sent, but that it was delivered and acknowledged by the borrower.
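
A sketch of the kind of timestamped record worth asking for, with illustrative fields covering the sent, delivered, and acknowledged states:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DisclosureEvent:
    """Evidence that a required disclosure was sent, delivered, and, where
    the channel supports it, acknowledged. Field names are illustrative."""
    borrower_id: str
    disclosure: str                           # e.g. "mini_miranda"
    channel: str                              # "voice" | "sms" | "email"
    sent_at: datetime
    delivered_at: datetime | None = None      # carrier or ESP receipt
    acknowledged_at: datetime | None = None   # e.g. verbal confirmation
```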

12. How do you handle cease and desist and dispute requests

When a borrower requests no further contact or disputes a debt, specific legal obligations kick in. The AI needs to recognize these requests, stop contact immediately, and route appropriately.

This includes recognizing natural language variations. A borrower might say "stop calling me at work" rather than using the formal phrase "cease and desist." Ask: "How does your natural language processing identify cease-and-desist requests even when borrowers don't use legal terminology?"

Ask about escalation workflows, documentation of these requests, and how the system prevents accidental contact after a cease-and-desist.
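
As a simplified illustration: production systems typically use a trained intent classifier rather than keyword rules, but the underlying idea of catching informal phrasing looks something like this (patterns illustrative):

```python
import re

# Illustrative patterns only; a production system would use a trained
# intent model, with phrases like these as labeled training examples.
CEASE_PATTERNS = [
    r"\bstop (calling|contacting|texting)\b",
    r"\bdo not (call|contact) me\b",
    r"\bcease and desist\b",
    r"\bdon'?t call me at work\b",
    r"\bleave me alone\b",
]

def is_cease_request(utterance: str) -> bool:
    """Flag informal cease-and-desist language for immediate suppression
    and routing to review, not just the exact legal phrase."""
    text = utterance.lower()
    return any(re.search(p, text) for p in CEASE_PATTERNS)
```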

Questions About Audit Logging and Regulatory Exam Readiness

Regulators expect lenders to produce documentation on demand. Vendors that can't support this create exam risk that falls entirely on you.

13. What audit trails do you maintain for AI interactions

Complete audit trails include timestamps, decision rationale, actions taken, and outcomes. Gaps in logs raise examiner concerns about what happened in the missing periods.

Ask whether logs capture everything, not just what the AI said, but what it did and why. Some vendors call this "decision lineage": the full path from data input, through the model version and logic used, to the final outcome.

For example: "Can you produce a report showing which specific data points and logic rules led to a particular borrower's outcome?"

14. Can you produce documentation and evidence packs for examiners

When examiners request records, turnaround time matters. Some vendors can produce evidence packs in hours; others take weeks.

During on-site examinations, examiners often issue "overnight requests" for documentation. A vendor that needs two weeks to produce records becomes a liability during a live exam when examiners expect responses by the next morning.

Ask about export formats, report generation capabilities, and realistic timelines for producing records during an exam. Look for vendors who can generate documentation (including bias audit reports and incident logs) quickly when examiners request it.

15. How do you log what was said, done, and why for each borrower interaction

This question gets at interaction-level detail: full transcripts, decision explanations, and system updates triggered by each conversation.

Platforms built for regulated lending log what was said, what was done, and why for every interaction. Salient, for example, is designed to create this documentation trail automatically: the kind of record examiners expect to see.

Questions About Integration with Lending Systems

AI vendors that require ripping out existing infrastructure aren't realistic for most lenders. Integration with current systems is a practical requirement, not a nice-to-have.

LMS and Loan Servicing System Compatibility

Ask about bi-directional integration with your loan management system. The AI needs to read borrower data and write actions back (promises, payment arrangements, due date changes) without manual intervention.

For example: "If a borrower makes a promise to pay, does the system automatically update our LMS in real-time, or does someone need to manually enter that information?"

Contact Center and CCaaS Integration

Seamless handoffs between AI and human agents require integration with your existing contact center platform. Ask about compatibility with your CCaaS provider and how transfers are handled.

Specifically: "When the AI transfers a borrower to a human agent, is the full conversation context passed along, or does the agent start from scratch?"

Payment Provider Connectivity

If the AI can take payments, it needs to connect with your payment processors. Ask which providers are supported and what guardrails exist around payment actions.

For instance: "What controls prevent the AI from processing a payment amount the borrower didn't explicitly authorize?"

What to Include in Contracts with AI Lending Vendors

Questionnaire answers mean nothing without contractual protections. Vendor promises become enforceable obligations only when they're written into the agreement.

NCUA guidance specifies minimum contractual elements for third-party relationships, including scope of services, performance standards, audit rights, data security provisions, and termination clauses. These requirements apply equally to AI lending vendors.

Data Ownership and Borrower Rights

Clarify who owns borrower data, what restrictions exist on vendor use, and what happens to data upon termination.

Be explicit about data usage policies. Document whether your borrower data can be used for model improvement, and if so, under what conditions (aggregated only, anonymized, opt-out available, etc.). Some lenders include language requiring the vendor to maintain a "clean room" environment for their data.

Compliance Warranties and Representations

Include vendor warranties about regulatory compliance and specify what happens when regulations change mid-contract.

With evolving state laws around AI, consider including warranties covering ECOA, FDCPA, CCPA, and emerging state AI laws like Colorado's AI Act.

Audit and Regulatory Examination Rights

Ensure your right to audit the vendor and their obligation to cooperate with regulatory exams are explicitly documented.

This should include access to model documentation, bias testing results, and where appropriate, the ability to review the logic and weights the vendor's models use to make decisions.

Termination Provisions and Data Return

Address data portability, transition assistance timelines, and whether data is returned or destroyed after termination.

How to Evaluate AI Lending Vendors the Way Regulators Do

Thinking like an examiner changes how you evaluate vendor responses. Examiners don't just ask whether controls exist. They ask for evidence that controls work.

Federal supervisory guidance reinforces that lenders cannot outsource accountability. NCUA examiners expect the same due diligence for AI vendors as any critical business function, with ongoing monitoring to ensure third parties meet their contractual obligations and regulatory requirements.

Most vendors will struggle with at least some of these questions. Not because they're hiding something, but because generic AI tools were never built for regulated lending environments.

The vendors that stand up to examiner scrutiny are those built specifically for lending supervision. Salient works directly with risk, compliance, and operations leaders to design pilots that teams would be comfortable showing to regulators, because that's exactly what examiners will ask to see.

Book a demo.

FAQs About AI Vendor Compliance Questions for Lenders

How do I evaluate AI lending vendors if my compliance team lacks AI expertise?

Partner with your vendor risk management team and consider engaging external consultants who specialize in AI governance for regulated industries. The questions in this guide provide a starting framework regardless of technical background.

Focus on outcomes rather than technical implementation. You don't need to understand the underlying code. You need to understand what the system does, how decisions are explained, and whether the vendor can demonstrate that their controls work.

Should I require SOC 2 certification from AI lending vendors?

SOC 2 Type II certification demonstrates that a vendor has established security controls, but it doesn't address lending-specific compliance requirements like fair lending or FDCPA. Treat it as a baseline, not a substitute for the compliance questions above.

How do state licensing requirements affect AI vendor selection?

Some states require entities engaged in debt collection or loan servicing activities to hold specific licenses. Confirm whether your AI vendor's activities trigger licensing obligations in states where you operate.

Can I use the same AI vendor questionnaire for different lending products?

A core questionnaire works across products, but product-specific questions help too. Auto lending has different disclosure requirements than personal loans, and BNPL has emerging state-level regulations that may not apply to traditional installment products.