AI Readiness Quiz: Interactive Self-Assessment for Business Leaders
TL;DR:
- This quiz uses scenario-based questions rather than yes/no checklists, which produces more honest answers and more useful readiness profiles
- Twelve questions across five dimensions, designed to be completed by a single business leader in 15 minutes
- Each scenario describes a realistic situation and asks how your organization would respond, which reveals actual readiness rather than aspirational readiness
- Your results map to specific next steps based on where your strengths and gaps are
Most AI readiness assessments ask abstract questions: “Do you have a data governance policy?” Answering “yes” to that question tells you almost nothing about whether the policy works, whether people follow it, or whether it covers AI-specific scenarios. This quiz takes a different approach. It presents realistic scenarios and asks how your organization would actually respond, which tends to produce more honest and more useful answers.
The quiz is designed for business leaders (not technical teams) because readiness decisions are organizational, not technical. A CTO can evaluate infrastructure readiness, but a business leader sees the full picture: whether the data, governance, people, technology, and strategic alignment are sufficient to make AI work as a business investment.
For the comprehensive checklist format, see our AI readiness checklist. For a quicker directional assessment, the AI readiness scorecard takes ten minutes. For the full evaluation with detailed scoring, use the AI readiness assessment template.
How to Take the Quiz
Read each scenario and select the response that most honestly describes how your organization would handle the situation. Don’t select the response you wish were true, or the response you’re working toward. Select the one that describes today’s reality.
Score each answer: (a) = 0 points, (b) = 1 point, (c) = 2 points.
Data Readiness
Scenario 1: A new AI tool needs access to your customer data to personalize its outputs. How quickly can you provide that data in a structured, machine-readable format?
(a) We’d need weeks to extract and clean the data from multiple disconnected systems, and we’re not sure we’d capture everything.
(b) We could pull the core data from our CRM within a few days, but supplementary data from other systems would take longer and might require manual formatting.
(c) Our customer data is in a centralized system with API access. We could provide a structured data feed within a day or two.
Scenario 2: Someone on your team asks, “How accurate is our product catalog data?” How does the organization respond?
(a) Nobody knows. We don’t measure data quality in any systematic way. The data is probably okay, but we haven’t checked.
(b) We know there are quality issues (duplicates, outdated records, inconsistent formatting), and we’ve done some cleanup, but we don’t have ongoing quality metrics.
(c) We measure data quality regularly. We know our completeness rate, accuracy (from sampling), and consistency across systems, and we have processes for maintaining quality over time.
Scenario 3: Your AI application needs historical data from the past three years. When you look at the data, you discover that the format changed twice during that period, one data source was migrated to a new system, and six months of records from a legacy system are in a different structure. How does your team handle this?
(a) This would be a major project. We don’t have the tools or processes to reconcile data across format changes and system migrations. Someone would need to figure it out manually.
(b) Our data team could handle the reconciliation, but it would take significant effort and the result might not be perfect. We’ve dealt with similar issues before but not specifically for AI applications.
(c) We have data transformation pipelines that can handle format changes and system migrations. Our data engineering team has reconciled similar issues before and has documented processes for it.
Governance Readiness
Scenario 4: Your AI system incorrectly flags a loyal customer as high-risk, triggering an automated response that restricts their account access. The customer contacts you, upset. What happens next?
(a) Whoever receives the complaint would try to figure out what happened, but there’s no defined process for AI-related errors. It would depend on which employee picks up the issue.
(b) The customer service team would escalate the issue, and someone would manually override the restriction. We’d know the AI made the error, but we don’t have a systematic process for investigating why or preventing recurrence.
(c) We have an incident response process for AI errors. The restriction would be reversed immediately, the error would be logged and investigated by the system owner, and the findings would feed into model improvement. The customer would receive an explanation and assurance of correction.
Scenario 5: Your legal team asks, “Which of our AI systems would be classified as high-risk under the EU AI Act?” How does the organization respond?
(a) We don’t have a clear inventory of our AI systems, and we haven’t evaluated any of them against the EU AI Act or other AI regulations.
(b) We know which AI tools we use, and we’re aware of the EU AI Act, but we haven’t done a formal classification or compliance assessment.
(c) We’ve inventoried our AI systems, classified them against the EU AI Act risk tiers, and identified specific compliance requirements for any high-risk systems. See our EU AI Act compliance checklist for this assessment framework.
Scenario 6: An employee discovers that your AI-powered hiring screening tool appears to score candidates from certain universities systematically lower than equally qualified candidates from other universities. What happens?
(a) The employee might mention it to their manager, but there’s no formal process for reporting or investigating potential AI bias. It might get addressed, or it might not.
(b) The issue would be raised with the HR and IT teams. We’d investigate manually, but we don’t have bias monitoring tools or a defined process for this type of investigation.
(c) The employee reports it through our AI incident process. The system owner investigates using performance data segmented by the relevant variable. If bias is confirmed, the system is suspended pending remediation, and affected candidates are re-evaluated.
Workforce Readiness
Scenario 7: You deploy an AI tool that drafts customer communications based on account data. After a month, you notice that the sales team uses the drafts without modification 95% of the time. Is this a good sign or a problem?
(a) That sounds like the tool is working well. We’d consider it a success.
(b) We’d want to check whether the drafts are actually good, or whether the team is just not reviewing them carefully. But we don’t have a process for evaluating this.
(c) This raises a concern about automation bias. We’d audit a sample of unmodified drafts for quality, check whether the high acceptance rate reflects genuine quality or insufficient review, and adjust the oversight process if needed.
Scenario 8: You announce that AI will be used to assist with project estimation, a task currently done by senior project managers. How does the team react?
(a) Significant resistance. The senior PMs see this as a threat to their expertise and autonomy. The announcement generates more anxiety than enthusiasm.
(b) Mixed reaction. Some team members are curious, others are wary. There’s a general willingness to try, but concerns about whether the AI can handle the nuances of their specific projects.
(c) Constructive engagement. The team asks practical questions: What data will the AI use? How will it handle unusual projects? Will they be able to override its estimates? They see it as a tool that could handle routine estimates and free them for more complex work.
Infrastructure Readiness
Scenario 9: A promising AI vendor says their product integrates with “any modern CRM and ERP system via standard APIs.” Your IT team evaluates the claim. What do they find?
(a) Our core systems don’t have usable APIs. Integration would require custom development, and our IT team doesn’t have the capacity for it on a reasonable timeline.
(b) Some systems have APIs, but they’re limited or outdated. Integration is possible but will require workarounds and ongoing maintenance. There will be some manual data transfer steps.
(c) Our systems have well-documented APIs that support the vendor’s integration requirements. The IT team estimates integration will take days to weeks, not months.
Scenario 10: Your AI application has been running for six months. How would you know if its accuracy has degraded?
(a) We wouldn’t, unless users started complaining about obviously wrong outputs. We don’t have monitoring for AI performance.
(b) We track some basic metrics (usage, error rates reported by users), but we don’t have automated performance monitoring that would catch gradual accuracy decline.
(c) We have monitoring dashboards that track model performance metrics continuously. Accuracy thresholds trigger alerts when performance drops below acceptable levels, and we have a process for investigating and remediating degradation.
Strategic Readiness
Scenario 11: Your CEO asks the leadership team to identify the three highest-value opportunities for AI in the business. What happens?
(a) The discussion is vague. People mention general areas (“marketing,” “operations,” “customer service”) but can’t identify specific processes, quantify potential value, or describe what the AI would actually do.
(b) The team identifies reasonable opportunities but struggles to quantify the value or assess feasibility. The discussion mixes high-value, high-difficulty applications with quick wins without distinguishing between them.
(c) The team identifies specific processes with quantified potential value, governance feasibility assessments, and preliminary data readiness evaluations. They can distinguish between quick wins and strategic bets.
Scenario 12: Your board asks: “How much have we budgeted for AI this year, and what does the budget cover?” What’s the answer?
(a) There’s no dedicated AI budget. Any AI spending would come from existing departmental budgets and would compete with other priorities.
(b) There’s some budget allocated for AI exploration (tool licenses, a pilot project), but it doesn’t cover ongoing operations, data preparation, governance, or workforce training.
(c) The AI budget covers tool licensing, data preparation, governance implementation, workforce training, ongoing monitoring, and operational costs. It’s structured to support production deployment, not just experimentation.
Scoring
Add your points across all 12 questions. Maximum score: 24.
| Score | Readiness Level | What It Means |
|---|---|---|
| 0-7 | Not Ready | Significant gaps across multiple dimensions. Foundational work is needed before AI investment will produce returns. |
| 8-13 | Getting Ready | Some foundations exist, but notable gaps remain. A comprehensive AI readiness assessment will identify specific priorities. |
| 14-18 | Pilot Ready | Your organization has the foundation for well-scoped AI pilots. Select use cases that align with your strongest dimensions. |
| 19-24 | Production Ready | Strong readiness across dimensions. Focus on deploying, measuring, and scaling AI applications. |
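If you’re collecting answers from several leaders (for example, via a shared form export) and want to tally results programmatically, the short Python sketch below reproduces the scoring rule and the bands from the table above. The point values and score bands come directly from this article; the function names and input format are illustrative, not part of any tool.

```python
# Minimal scoring sketch for the quiz: answers are the letters a/b/c for Q1-Q12.
POINTS = {"a": 0, "b": 1, "c": 2}

# Score bands from the table above: (lowest score, highest score, readiness level).
BANDS = [
    (0, 7, "Not Ready"),
    (8, 13, "Getting Ready"),
    (14, 18, "Pilot Ready"),
    (19, 24, "Production Ready"),
]

def total_score(answers):
    """Sum the 0/1/2 point values for a list of 12 answer letters."""
    if len(answers) != 12:
        raise ValueError("expected exactly 12 answers")
    return sum(POINTS[a.lower()] for a in answers)

def readiness_level(score):
    """Map a total score (0-24) to its readiness band."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError("score must be between 0 and 24")

# Example respondent: mostly (b) answers with a couple of (c)s and (a)s.
answers = ["b", "b", "c", "b", "a", "b", "c", "b", "b", "a", "b", "b"]
score = total_score(answers)
print(score, readiness_level(score))  # 12 Getting Ready
```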
Reading Your Results by Dimension
Your total score matters less than your dimensional pattern. Calculate your score for each section:
Data (Q1-Q3, max 6): Scores below 3 indicate data readiness gaps that will affect any AI application. Start with our data readiness for AI guide.
Governance (Q4-Q6, max 6): Scores below 3 indicate governance gaps that make production deployment risky. Start with the AI governance readiness guide.
Workforce (Q7-Q8, max 4): Scores below 2 indicate cultural or skills readiness issues. Start with building an AI-ready culture.
Infrastructure (Q9-Q10, max 4): Scores below 2 indicate integration or monitoring gaps. These are often the fastest to fix through targeted investment.
Strategy (Q11-Q12, max 4): Scores below 2 indicate that AI hasn’t progressed from interest to intent. Use the how to assess AI readiness guide to move from vague interest to specific planning.
The dimension with the lowest score is your binding constraint: the one gap that will limit AI success regardless of strength elsewhere.
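In the same spirit, the sketch below breaks a response into the five dimensional subtotals and flags the binding constraint. The question groupings and gap thresholds mirror the list above; normalising each subtotal by its maximum (so three-question and two-question dimensions are comparable) is a small interpretation choice layered on the “lowest score” rule, and the names are again illustrative.

```python
# Per-dimension breakdown, using the same 0/1/2 point scheme as the total score.
POINTS = {"a": 0, "b": 1, "c": 2}

# Each entry: dimension name, question indices (0-based), max points, gap threshold.
DIMENSIONS = [
    ("Data",           [0, 1, 2],  6, 3),
    ("Governance",     [3, 4, 5],  6, 3),
    ("Workforce",      [6, 7],     4, 2),
    ("Infrastructure", [8, 9],     4, 2),
    ("Strategy",       [10, 11],   4, 2),
]

def dimension_scores(answers):
    """Return {dimension: subtotal} for the five readiness dimensions."""
    return {
        name: sum(POINTS[answers[i].lower()] for i in idxs)
        for name, idxs, _max, _threshold in DIMENSIONS
    }

def binding_constraint(answers):
    """The lowest-scoring dimension, normalised by its maximum points."""
    scores = dimension_scores(answers)
    return min(DIMENSIONS, key=lambda d: scores[d[0]] / d[2])[0]

# Same example respondent as in the total-score sketch above.
answers = ["b", "b", "c", "b", "a", "b", "c", "b", "b", "a", "b", "b"]
scores = dimension_scores(answers)
for name, _idxs, max_pts, threshold in DIMENSIONS:
    flag = "  <- gap" if scores[name] < threshold else ""
    print(f"{name}: {scores[name]}/{max_pts}{flag}")
print("Binding constraint:", binding_constraint(answers))
```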
Frequently Asked Questions
Why are the governance questions weighted more heavily (three questions) than infrastructure (two)?
Because governance is the most common binding constraint on AI deployment. Seampoint’s research found a 76-point gap between technical AI capability and governance-safe delegation. Organizations are far more likely to be blocked by governance gaps than by infrastructure gaps. The question distribution reflects where readiness assessments most frequently reveal problems.
I answered honestly and scored very low. Should I be discouraged?
No. A low score means the assessment is working. It identified gaps that, if unaddressed, would lead to failed AI projects, wasted budget, and organizational frustration. Knowing your gaps before investing is worth far more than discovering them through project failure. Every gap has a remediation path. See the signs your company is not ready for AI article for specific remediation guidance.
Can I use this quiz with my leadership team?
That’s the ideal use case. Have each leadership team member complete the quiz independently, then compare scores. Differences in how leaders perceive the same organizational reality are as diagnostic as the scores themselves. A CEO who scores governance at 6 while the General Counsel scores it at 1 has revealed an important disconnect that needs resolution before AI deployment.
How is this different from the AI readiness checklist?
The AI readiness checklist asks direct diagnostic questions (“Do you have X?”). This quiz presents scenarios (“When Y happens, what does your organization do?”). Scenario-based questions produce more honest answers because they describe concrete situations rather than abstract capabilities. People overestimate their readiness when asked abstractly (“Do you have a data governance policy?”) but answer more accurately when presented with a specific situation that tests whether the policy actually works.