A Psychometric Design Blueprint for Measuring Real AI Skill
How do you actually measure AI capability? Most tools claim to assess AI skills; in reality, they measure familiarity, tool usage, or confidence. Very few measure judgement, decision-making, or cognitive capability when using AI. This is the gap the AI Capability Profile is designed to address. This article explains how to design a psychometrically grounded AI capability profile using the Mosaic Skills Framework and the AI Literacy Capability Framework, and sets out a clear methodology for building a scalable, valid, and commercially deployable AI diagnostic. Download the AI Capability Profile diagnostic below or book a consultation to implement this across your organisation.
What Is an AI Capability Profile?
An AI Capability Profile is a structured assessment of how effectively an individual uses AI systems in real-world contexts. It moves beyond:
- Tool familiarity
- Prompt tricks
- Self-reported confidence
Instead, it focuses on:
- Judgement under uncertainty
- Evaluation of AI outputs
- Decision-making with AI assistance
- Risk awareness and ethical reasoning
The Mosaic Skills Framework: The Foundation Layer
The AI Capability Profile is grounded in the Mosaic Skills Framework, which defines the underlying cognitive architecture of AI capability. The nine core pillars are:
- Analytical Reasoning
- Cognitive Flexibility
- Ethical Judgement
- Information Credibility
- AI Output Validation
- Structured Decision-Making
- Bias Recognition
- Learning Agility
- Attention Control
The AI Literacy Capability Framework: The Performance Layer
The AI Literacy Capability Framework defines eight observable capabilities:
- Understanding AI
- Prompting
- Evaluation
- Decision-making
- Ethical awareness
- Workflow use
- Credibility judgement
- Confidence
The two frameworks work together:
- Mosaic = underlying capability (the "why")
- AI Literacy = observable performance (the "what")
Step 1: Define the Construct Clearly
The first principle of psychometric design is construct clarity. For an AI Capability Profile, each dimension must answer three questions:
- What exactly are we measuring?
- What behaviours indicate this skill?
- What is explicitly excluded?
For example, for the capability of evaluating AI outputs:
- What we measure: the ability to critically evaluate AI-generated content
- Indicative behaviours: identifying inaccuracies, hallucinations, or weak reasoning; cross-checking against reliable sources
- Explicitly excluded: general intelligence, subject knowledge alone, and confidence in AI
Step 2: Choose the Right Measurement Model
Most AI assessments fail because they rely on self-report questionnaires. The AI Capability Profile instead uses scenario-based measurement, typically in the form of situational judgement tests (SJTs). Why?
- They simulate real decision contexts
- They capture judgement, not opinion
- They reduce social desirability bias
Responses are scored against indicators such as:
- Evidence-based reasoning
- Risk awareness
- Decision quality
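As a hedged sketch of how an SJT item and its scoring key might be represented in practice (the capability name, scenario text, response options, and 1–4 key below are all illustrative inventions, not content from the actual profile):

```python
from dataclasses import dataclass, field


@dataclass
class SJTItem:
    """One situational judgement test item with a keyed 1-4 scoring scale."""
    capability: str
    scenario: str
    options: dict = field(default_factory=dict)  # response text -> keyed score (1-4)


# Illustrative item for an "Evaluation" capability
item = SJTItem(
    capability="Evaluation",
    scenario=(
        "An AI assistant produces a confident summary of a report, "
        "citing a statistic you cannot find in the source. What do you do?"
    ),
    options={
        "Use the summary as-is; the AI is usually right": 1,
        "Soften the claim so it is harder to challenge": 2,
        "Ask the AI to confirm its own statistic": 3,
        "Check the statistic against the original source before using it": 4,
    },
)


def score_response(item: SJTItem, chosen: str) -> int:
    """Return the keyed score for the chosen response option."""
    return item.options[chosen]


print(score_response(item, "Ask the AI to confirm its own statistic"))  # 3
```

Keying responses to a behavioural scale like this, rather than marking one option "correct", is what lets an SJT capture graded judgement rather than binary knowledge.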
Step 3: Design for Reliability
A single question cannot measure a capability reliably. Each capability area should include:
- Multiple scenarios
- Different contexts
- Consistent scoring logic
This design supports:
- Internal consistency
- Stability of measurement
- Reduced noise
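Internal consistency has a standard estimator, Cronbach's alpha, which compares the variance of individual item scores with the variance of respondents' totals. A minimal, self-contained sketch (the score data are invented for illustration):

```python
from statistics import variance


def cronbach_alpha(item_scores):
    """Estimate internal consistency (Cronbach's alpha).

    item_scores: one list per item, each holding the same respondents'
    scores in the same order.
    """
    k = len(item_scores)
    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))


# Three scenarios targeting the same capability, five respondents (1-4 scale)
scores = [
    [3, 4, 2, 4, 3],
    [3, 4, 2, 3, 3],
    [2, 4, 3, 4, 3],
]
print(round(cronbach_alpha(scores), 2))  # 0.84
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency, which is why each capability needs multiple scenarios: alpha is undefined for a single item.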
Step 4: Build Validity Into the Design
Validity is not a single test; it is a framework. The AI Capability Profile supports:
- Content validity through mapping to the Mosaic and AI Literacy frameworks
- Construct validity through behavioural indicators
- Face validity through realistic scenarios
- Criterion validity through links to performance outcomes
Step 5: Control Bias and Ensure Accessibility
AI assessments must be inclusive. This means:
- Clear, simple language
- No cultural assumptions
- Neurodiversity-friendly design
- Avoiding overly technical prompts
Step 6: Design the Scoring Model
Scoring must be meaningful. The AI Capability Profile uses:
- Capability-level scores
- Overall profile
- Development bands (e.g. emerging, competent, advanced)
- Percentiles
- Benchmark comparisons
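A minimal sketch of such a scoring model (the band cut-offs and benchmark data below are illustrative assumptions, not validated norms):

```python
from bisect import bisect_right

# Illustrative development bands on a 1-4 capability scale
BANDS = [(0.0, "emerging"), (2.5, "competent"), (3.5, "advanced")]


def capability_score(scenario_scores):
    """Capability-level score: mean of its scenario scores (1-4 scale)."""
    return sum(scenario_scores) / len(scenario_scores)


def band(score):
    """Map a capability score to a development band via its cut-offs."""
    cutoffs = [c for c, _ in BANDS]
    return BANDS[bisect_right(cutoffs, score) - 1][1]


def percentile(score, benchmark):
    """Percentage of the benchmark group scoring at or below this score."""
    return 100 * sum(s <= score for s in benchmark) / len(benchmark)


evaluation = capability_score([3, 4, 3])              # ~3.33
print(band(evaluation))                               # competent
print(percentile(evaluation, [2.0, 2.5, 3.0, 3.8]))   # 75.0
```

The key design choice is that bands and percentiles are derived from the same underlying capability scores, so the profile stays internally consistent however it is reported.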
Step 7: Build the Interpretation Layer
Most tools fail here. The AI Capability Profile provides:
- A strengths summary
- Risk indicators
- Practical development recommendations
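One way to sketch this interpretation layer (the thresholds below are illustrative assumptions, not validated cut scores):

```python
def interpret(profile, strength_cutoff=3.2, risk_cutoff=2.0):
    """Turn capability scores (1-4 scale) into a simple structured summary.

    profile: mapping of capability name -> capability score.
    """
    strengths = [c for c, s in profile.items() if s >= strength_cutoff]
    risks = [c for c, s in profile.items() if s <= risk_cutoff]
    development = [c for c in profile if c not in strengths]
    return {
        "strengths": strengths,
        "risk_indicators": risks,
        "development_priorities": development,
    }


profile = {"Prompting": 3.5, "Evaluation": 2.8, "Credibility judgement": 1.9}
report = interpret(profile)
print(report["strengths"])        # ['Prompting']
print(report["risk_indicators"])  # ['Credibility judgement']
```

In a deployed tool the development priorities would then be mapped to concrete recommendations; the point here is that interpretation is rule-based and auditable, not left to the reader.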
Step 8: Integrate AI Responsibly
AI can enhance the system, but it must be controlled. In this design:
- AI may assist in generating scenarios
- AI may support feedback generation
- AI does NOT determine final scores
Where Most Vendors Get This Wrong
Most AI skill tools:
- Measure confidence, not capability
- Reward speed, not judgement
- Rely on self-report data
They rarely measure:
- Decision quality
- Risk awareness
- Cognitive capability
Commercial Applications
The AI Capability Profile can be deployed across:
- Corporate AI readiness audits
- School AI literacy programmes
- Individual skill development (via Mosaic)
How to Build This (Step-by-Step)
Step 1: Define 8 capability areas
Step 2: Write 3 scenarios per capability
Step 3: Create scoring logic (1–4 scale)
Step 4: Build using WordPress + a form plugin
Step 5: Connect to email report output
⚠️ Advanced versions may require automation tools such as Zapier or custom dashboards.
Limitations
This profile does not measure:
- Domain-specific expertise
- Technical AI development skills
- Long-term learning outcomes
Conclusion
The future of AI capability is not about tools; it is about how people think when using them. The AI Capability Profile provides a rigorous, scalable way to measure this. Download the diagnostic or book a consultation to implement it in your organisation.
AI Literacy Training Options
You can find our full AI Literacy Training and AI Skills Development programme here.
Working with Us
We help organisations evaluate validity, fairness, and candidate experience across AI-enabled recruitment processes and assessments. Typical corporate engagement areas include AI-enhanced assessment design (SJTs, simulations, structured interviews), validation strategy, bias and fairness monitoring/audits, and construct definitions.
Or contact Rob Williams Assessment Ltd at
E: rrussellwilliams@hotmail.co.uk
(C) 2026 Rob Williams Assessment Ltd. This article is educational and not legal advice. Always align to your local jurisdiction, counsel, and internal governance requirements.