
Responsible AI Due Diligence Tool

The AI application and infrastructure stack is evolving at exceptional speed. This framework is a v1 due-diligence tool developed with international input from GPs, LPs, operators, and domain experts. It is intentionally agile and iterative: future updates will refine the questions as technologies mature, new risks and opportunities emerge, and our collective understanding of AI’s impact on markets and societies deepens.

This is an evolving project. We welcome your suggestions on both content and format.

Developed by Reframe Venture and ImpactVC with our partner Project Liberty Institute, with support from Zendesk.

Last updated: December 2025

HOW TO USE

STRUCTURE



  1. Good Governance for AI Companies

Regulatory Awareness & Compliance Readiness

All stages:

  1. What existing and emerging AI regulations or compliance requirements are you aware of that could affect your business?

  2. How are you preparing for regulatory changes, and what would compliance look like at your current stage versus when you scale?

Strong responses
Weak responses
Why ask?

Accountability Structure

All stages:

  1. Who in your organisation owns AI safety decisions?

  2. How do they interact with product, legal, and business teams?

Strong responses
Weak responses
Why ask?

Transparency, Documentation, and Audit

Pre-Seed / Seed:

  1. Do you have a responsible AI policy in place? What does it cover?

  2. What steps are you taking to make your data and model processes transparent and well-documented?

  3. Do you have any form of model cards, data sheets, or documentation that describes what your AI does, how it was trained, and its limitations?

Series A+:

  1. Show us your data and model documentation system (model cards, data cards).

  2. How do you maintain audit trails for AI decisions that would support compliance or incident investigation?

Strong responses
Weak responses
Why ask?
  2. Data Input and Technical Foundation

Data Ownership & Access Risk

All stages:

  1. Walk us through your data sources.

  2. What do you own vs. license vs. scrape?

  3. What happens if access to these sources gets restricted or costs increase significantly?

  4. How representative are these sources of your target users?

Strong responses
Weak responses
Why ask?

Data Privacy & Security Risk

All stages:

  1. How do you handle sensitive data in your AI systems?

  2. What are your data retention, deletion, and access control policies?

  3. What's your plan if personal data gets leaked through model outputs or attacks?

  4. If your system is agentic, how does it manage and act on personal or sensitive data?

Strong responses
Weak responses
Why ask?

Model Dependencies

Pre-Seed / Seed:

  1. What third-party AI services/models are you using?

  2. How do you evaluate their reliability at the integration stage, and how do you plan to monitor changes in terms or performance?

  3. How easily can you switch between AI models? Can users choose their model?

Series A+:

  1. How do you evaluate and monitor the responsible AI practices of your AI suppliers and partners?

  2. What happens if a vendor fails your standards?

  3. How easily can you switch between AI models? Can users choose their model?

Strong responses
Weak responses
Why ask?

Compute Cost, Energy, and Water Use

Pre-Seed / Seed:

  1. How are you thinking about strategically balancing model quality with cost?

  2. How do you think about the trade-offs between model performance and efficiency?

  3. What's your current approach to measuring or estimating your AI system's energy and water consumption?

Series A+:

  1. How does energy efficiency factor into your model selection and infrastructure decisions? Is GPU access a concern?

  2. Show us your water, energy, and carbon monitoring infrastructure. What tools do you use and what metrics do you track?

  3. How do you benchmark your models' energy performance against alternatives, and how does this inform procurement or B2B sales conversations?

Strong responses
Weak responses
Why ask?
  3. Output, Performance, and Algorithm

AI Failure & Information Quality Management

All stages:

  1. What happens when your AI fails or produces incorrect information?

  2. If agentic, how does your system manage cascading failures? How will you navigate liability?

  3. How do users experience failures, and what safeguards prevent them from acting on wrong AI outputs?

Strong responses
Weak responses
Why ask?

Bias Detection and Mitigation

All stages:

  1. At what sensitive decision points could your AI-driven decisions harm users if biased?

  2. What user groups may be affected by bias in your AI?

  3. How did you test your AI for bias?

Strong responses
Weak responses
Why ask?

Pre-Deployment Evaluation & Testing

All stages:

  1. Walk us through your testing process before deploying AI changes.

  2. How do you validate that your AI system works correctly, safely, and without bias before real users interact with it?

  3. What scenarios do you test for?

  4. Do you have sandbox (test) environments or limited user testing to catch issues between development and full deployment?

Strong responses
Weak responses
Why ask?

Post-Deployment Monitoring & Incident Response

All stages:

  1. How do you monitor your AI system's performance and behavior after it's live with users?

  2. What happens when you discover the AI has made errors, caused problems, or acted in a biased manner, and how do you respond?

  3. How do feedback loops from monitoring influence your development process?

Strong responses
Weak responses
Why ask?
  4. Designing for Adoption

User Transparency and Trust

All stages:

  1. How do you communicate AI capabilities and limitations to users?

  2. How are you thinking about building trust amongst customers?

Strong responses
Weak responses
Why ask?

Human-AI Decision Boundaries

Pre-Seed / Seed:

  1. Where exactly does human judgment end and AI decision-making begin in your product?

  2. What decisions should never be fully automated, and what mitigation processes do you have to ensure any such delegation of decision-making to AI is done properly?

Series A+:

  1. Show us how you've systematically implemented human-in-the-loop design across your product, or explain how you avoided the need for it at risky automation decision points.

  2. What governance processes ensure human oversight boundaries are maintained as you scale?

  3. How do you train and support users in effective human-AI collaboration?

Strong responses
Weak responses
Why ask?

Human-AI Interaction

All stages:

  1. How is human-AI interaction designed in your product?

  2. Does your AI integration account for human bias, machine bias, and biases that can stem from human-AI interaction?

Strong responses
Weak responses
Why ask?

Stakeholder Risk & Misuse / Malicious Use Prevention

Pre-Seed / Seed:

  1. Who could be harmed if your AI makes systematic errors or is misused by bad actors?

  2. What unintended use cases could emerge that you haven't designed for?

  3. How would you detect these issues happening?

Series A+:

  1. What monitoring systems do you have to detect malicious use patterns, and how do you respond when harmful use cases emerge?

Strong responses
Weak responses
Why ask?

Explainability

Pre-Seed / Seed:

  1. What records do you keep about your AI decisions?

  2. If a customer asks 'why did the AI do X,' can you explain it?

  3. At which points in your product design are explanations of such decisions crucial?

Series A+:

  1. Demonstrate your systematic approach to AI explainability across different user types and use cases.

  2. How do you ensure explanations remain meaningful and actionable as your models become more complex?

  3. What infrastructure supports explainability at scale?

Strong responses
Weak responses
Why ask?

Churchill House, 137-139 Brent Street, London, England, NW4 4DJ


Get in touch: hello@reframeventure.com
