
AI Procurement Checklist: 47 Questions Before Buying AI Tools

A structured checklist of 47 questions for procurement teams evaluating AI vendors. Covers data handling, model transparency, compliance, support, and exit strategy.

By ThinkTech Research | Published April 1, 2026

You are evaluating an AI tool or vendor for your organization. What should you ask before signing a contract?

This checklist covers 47 questions across seven categories. It is designed for procurement teams, IT decision-makers, and anyone responsible for bringing AI tools into an organization. Each question has a brief explanation of why it matters.

Category 1: Data handling (questions 1-8)

How the vendor handles your data is the highest-risk area of any AI procurement.

1. **Where is our data stored?** Specific regions, data centers, and jurisdictions matter for compliance (GDPR, data residency laws).

2. **Is our data used to train models?** Many vendors use customer data to improve their models by default. Confirm whether you can opt out and whether the opt-out is retroactive.

3. **Who can access our data within the vendor organization?** Get specifics: engineering teams, support staff, contractors, subprocessors.

4. **What happens to our data if we cancel the contract?** Confirm deletion timelines and whether you get a data export.

5. **Does the vendor use subprocessors?** If yes, which ones, and do they meet the same data handling standards?

6. **Is data encrypted at rest and in transit?** This should be standard, but verify the specific encryption standards.

7. **Can we bring our own encryption keys?** Important for regulated industries. Customer-managed keys give you control over data access.

8. **How are data breaches handled?** Notification timelines, incident response process, and liability allocation.

Category 2: Model transparency (questions 9-16)

Understanding what is inside the model helps you assess risk.

9. **What training data was used?** The vendor may not share full details, but they should describe data sources, collection methods, and time periods.

10. **Has the model been tested for bias?** Ask for specific test results, not just a statement that testing occurred.

11. **What are the known limitations?** Every model has failure modes. A vendor that claims none is either uninformed or evasive.

12. **Can you explain individual predictions?** For high-stakes decisions (hiring, credit, healthcare), explainability is both an ethical and legal requirement.

13. **What benchmarks were used to evaluate the model?** Generic benchmarks (MMLU, HumanEval) may not reflect performance on your specific use case.

14. **How often is the model updated?** Understand the update cycle and whether updates can change behavior in production.

15. **Can we test the model on our own data before purchasing?** A vendor that resists evaluation on your data may be hiding poor performance.

16. **Is the model architecture documented?** You do not need trade secrets, but you need enough information for a risk assessment.
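When asking for "specific test results" on bias (question 10), it helps to know what a concrete metric looks like. One common screening check is the four-fifths (disparate impact) ratio: the lower group's selection rate divided by the higher group's, with values below 0.8 often flagged as adverse impact. A minimal sketch, using hypothetical outcome data:

```python
def selection_rate(outcomes):
    """Fraction of favorable outcomes (1 = favorable, 0 = unfavorable)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.

    Values below 0.8 commonly flag adverse impact (the "four-fifths rule").
    """
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical decision outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # selection rate 0.8
group_b = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]  # selection rate 0.5
print(disparate_impact(group_a, group_b))  # 0.625 -> below the 0.8 threshold
```

A vendor's bias report should include group-level numbers like these, not just a statement that testing occurred.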

Category 3: Compliance and legal (questions 17-24)

AI procurement creates legal obligations that traditional software procurement may not cover.

17. **Does the tool comply with the EU AI Act?** If you operate in the EU or serve EU residents, this is mandatory. Ask which risk category the tool falls into.

18. **Is there a Data Processing Agreement (DPA)?** Standard for GDPR compliance. Review it carefully; don't just confirm that it exists.

19. **Who owns the outputs?** AI-generated content, decisions, and analyses need clear IP ownership terms.

20. **What is the liability allocation for AI errors?** If the AI makes a wrong decision that harms someone, who is responsible?

21. **Has the vendor completed SOC 2 Type II certification?** Or equivalent security certifications relevant to your industry.

22. **Can the vendor provide an AI impact assessment?** Under the EU AI Act, high-risk systems require documented impact assessments.

23. **Are there regulatory reporting requirements?** Some jurisdictions require reporting when AI is used in specific contexts (hiring, lending, healthcare).

24. **What insurance does the vendor carry for AI-related claims?** Professional liability, errors and omissions, and cyber liability.

Category 4: Performance and reliability (questions 25-32)

AI tools fail differently from traditional software. Your SLA needs to reflect that.

25. **What is the uptime SLA?** Standard SLA terms, but confirm they apply to the AI model endpoint, not just the web interface.

26. **What happens during model downtime?** Does the system fall back to a non-AI method, or does it stop working entirely?

27. **What is the latency for AI predictions?** Matters for real-time applications. Get p50, p95, and p99 latency numbers.

28. **How does performance degrade under load?** AI inference can be computationally expensive. Ask about scaling behavior.

29. **What monitoring is available?** You should be able to track accuracy, latency, error rates, and usage from your own dashboard.

30. **Are there rate limits?** Per-minute, per-day, per-month. Understand how your usage patterns fit within limits.

31. **What is the model's accuracy on tasks similar to yours?** Generic accuracy numbers are less useful than performance on domain-specific evaluations.

32. **How do you handle model degradation over time?** Data drift can reduce model accuracy. Ask how the vendor detects and addresses this.
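The percentile targets in question 27 are easy to verify yourself once you log response times during a trial. A minimal sketch using only the standard library, with hypothetical latency samples:

```python
import statistics

def latency_percentiles(samples_ms):
    """Return p50, p95, and p99 from a list of latency samples in milliseconds."""
    # quantiles with n=100 yields the 1st..99th percentile cut points
    q = statistics.quantiles(sorted(samples_ms), n=100)
    return {"p50": q[49], "p95": q[94], "p99": q[98]}

# Hypothetical trial: 1,000 requests, mostly fast with a slow tail
samples = [20] * 900 + [80] * 90 + [400] * 10
print(latency_percentiles(samples))
```

Comparing numbers like these against the vendor's quoted p95/p99 is more informative than averages, because AI inference latency tends to have long tails.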

Category 5: Integration and support (questions 33-38)

How the tool fits into your existing systems affects total cost of ownership.

33. **What APIs and integration methods are available?** REST, GraphQL, SDK, webhook, batch processing.

34. **Is the API versioned?** Breaking changes without versioning cause production incidents.

35. **What is the support response time for critical issues?** AI-specific issues (model producing harmful outputs) may need faster response than standard software bugs.

36. **Is there a dedicated account or solutions engineer?** Important for complex deployments.

37. **What documentation is available?** API docs, integration guides, troubleshooting guides, change logs.

38. **Can we run the model on-premises or in our own cloud?** Matters for data residency, latency, and cost control.
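On question 34, version pinning is the practical safeguard: if the version is part of the request, a vendor's later rollout cannot silently change the responses you depend on. A minimal sketch, with a hypothetical vendor endpoint and path-based versioning (vendors may instead version via headers):

```python
import urllib.request

# Hypothetical base URL; pinning "v1" in the path locks your integration
# to that contract until you choose to migrate.
PINNED_BASE = "https://api.example-vendor.com/v1"

def build_predict_request(payload: bytes) -> urllib.request.Request:
    """Build a request against the pinned API version (endpoint is hypothetical)."""
    return urllib.request.Request(
        f"{PINNED_BASE}/predict",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_predict_request(b'{"input": "example"}')
print(req.full_url)  # https://api.example-vendor.com/v1/predict
```

Ask the vendor how long pinned versions are supported and how much notice precedes a deprecation.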

Category 6: Cost structure (questions 39-43)

AI pricing models are different from traditional SaaS.

39. **What is the pricing model?** Per-call, per-token, per-seat, per-outcome. Understand what drives cost.

40. **Are there hidden costs?** Fine-tuning, custom model training, premium support, data storage, overage fees.

41. **How does cost scale with usage?** Get pricing for 10x and 100x your current expected usage.

42. **Are there minimum commitments?** Annual contracts with minimum spend can lock you in.

43. **What is the total cost of ownership?** Include integration engineering, monitoring, compliance review, and staff training.
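The 10x/100x exercise in question 41 is worth doing with actual arithmetic before negotiating. A minimal sketch of a per-token cost model with a minimum commitment; all prices and volumes below are hypothetical placeholders, not any vendor's rates:

```python
def monthly_cost(calls_per_month, tokens_per_call, price_per_1k_tokens,
                 minimum_commit=0.0):
    """Estimate monthly spend under per-token pricing (all inputs hypothetical)."""
    usage = calls_per_month * tokens_per_call / 1000 * price_per_1k_tokens
    return max(usage, minimum_commit)

# Hypothetical baseline: 50k calls/month, 2k tokens/call, $0.01 per 1k tokens
for scale in (1, 10, 100):
    print(f"{scale}x usage: ${monthly_cost(50_000 * scale, 2_000, 0.01):,.2f}")
```

Running this for your own projected volumes makes it obvious whether linear per-token pricing, tiered discounts, or a flat commitment is cheaper at the scale you actually expect.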

Category 7: Exit strategy (questions 44-47)

Plan for the end of the relationship before it begins.

44. **What is the contract termination process?** Notice period, data export, transition support.

45. **Can we export our data and configuration?** You should be able to leave with everything you brought in, plus any data generated during use.

46. **Is there vendor lock-in through proprietary formats?** Data stored in proprietary formats that work only with this vendor is a migration risk.

47. **What is the migration path to an alternative?** If you need to switch vendors, how difficult is the transition?

How to use this checklist

This checklist is a starting point. Not every question applies to every procurement. Prioritize based on:

| Factor | Priority questions |
| --- | --- |
| Regulated industry (healthcare, finance) | 17-24 (compliance), 1-8 (data) |
| Consumer-facing product | 10-12 (bias, explainability), 5-6 (data handling) |
| Mission-critical system | 25-32 (reliability), 33-38 (support) |
| Cost-sensitive organization | 39-43 (cost), 44-47 (exit strategy) |
| First AI purchase | All categories, with emphasis on 9-16 (transparency) |

Score each question as: Satisfactory, Needs Improvement, or Unacceptable. Any "Unacceptable" answer in categories 1-3 should be a deal-breaker.
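The deal-breaker rule above is mechanical enough to encode, which is useful if you are comparing several vendors on a shared spreadsheet. A minimal sketch (the question numbering follows this checklist; the scoring labels are the three levels defined above):

```python
# Categories 1-3 (data handling, transparency, compliance) cover questions 1-24
DEAL_BREAKER_QUESTIONS = set(range(1, 25))

def evaluate(answers):
    """answers maps question number -> 'satisfactory' | 'needs_improvement' | 'unacceptable'.

    Any 'unacceptable' in categories 1-3 fails the vendor outright.
    """
    blockers = [q for q, score in answers.items()
                if score == "unacceptable" and q in DEAL_BREAKER_QUESTIONS]
    return {"pass": not blockers, "blockers": sorted(blockers)}

# Hypothetical vendor scorecard: question 3 fails in a deal-breaker category
print(evaluate({3: "unacceptable", 12: "satisfactory", 40: "unacceptable"}))
# {'pass': False, 'blockers': [3]}
```

Note that an "unacceptable" on question 40 (category 6) lowers the score but does not block the deal, matching the rule above.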
