AI Question Generator — GLA Curriculum Mapping
Artificial Intelligence · Educational Technology · Assessment
Prototype developed 2024–25 · Imperial College London, Faculty of Medicine
Overview
The Imperial Question Creator is an AI-powered tool that generates MS AKT-style (Applied Knowledge Test) multiple-choice questions mapped directly to Imperial College London's Graduate Learning Assessment (GLA) curriculum. Developed in response to a request from Professor Amir Sam and Teaching Fellow Sinthiya Sivarajah, the tool addresses a practical and time-consuming challenge facing medical educators: the rapid generation of high-quality, curriculum-aligned assessment questions at scale.
Rather than manually authoring questions to match specific curriculum objectives, educators can select a programme phase, clinical condition, and question type — and receive a fully-formed AKT-style clinical scenario with answer options, the correct answer, a clinical justification, and referenced medical literature, all in seconds.
Project at a Glance
| Status | Live Prototype — Academic Year 2024–25 |
| Curriculum | GLA (Graduate Learning Assessment), Imperial College London MBBS |
| Phases Covered | Phase 1c · Phase 3a · Phase 3b |
| Question Types | Diagnosis · Management · Medical & Lab Science · Ethics |
| Platform | gla-curriculum.vercel.app (login required) |
| Technology | Next.js · TypeScript · Claude AI (Anthropic) · Supabase · Vercel |
| Requested by | Professor Amir Sam · Sinthiya Sivarajah (Teaching Fellow) |
How It Works
The tool presents educators with a simple interface. They select a curriculum phase (Phase 1c, 3a or 3b — corresponding to different years of the Imperial MBBS programme), then choose a medical condition and the type of question they need: Diagnosis, Management, Medical and Lab Science, or Ethics. An optional clinical context field allows them to specify patient demographics, setting, or complexity level.
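As a rough illustration of these inputs, the educator's selection could be modelled as a small TypeScript type like the one below. The field and type names are assumptions made for illustration only, not the tool's actual data model.

```typescript
// Sketch of the selection parameters described above.
// All names are illustrative assumptions, not the prototype's real schema.
type Phase = "1c" | "3a" | "3b";

type QuestionType =
  | "Diagnosis"
  | "Management"
  | "Medical & Lab Science"
  | "Ethics";

interface QuestionRequest {
  phase: Phase;               // GLA curriculum phase
  condition: string;          // clinical condition chosen by the educator
  questionType: QuestionType; // one of the four supported question types
  clinicalContext?: string;   // optional: demographics, setting, complexity level
}
```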
Claude AI then generates a structured MS AKT-style question: a realistic clinical scenario, five answer options, the correct answer with a detailed clinical justification, and relevant medical references. Each question is tagged with its phase, question type, and topic, and carries a quality confidence score.
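Continuing from the request type sketched above, a minimal server-side generation call might look roughly like this, using Anthropic's official TypeScript SDK. The model name, prompt wording, and GeneratedQuestion shape are all assumptions for illustration; the prototype's actual prompts and schema are not reproduced here.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Assumed output shape mirroring the fields described above.
interface GeneratedQuestion {
  scenario: string;        // realistic clinical vignette
  options: string[];       // five answer options
  correctAnswer: string;
  justification: string;   // clinical reasoning for the correct answer
  references: string[];    // supporting medical literature
  phase: Phase;
  questionType: QuestionType;
  topic: string;
  confidenceScore: number; // quality confidence score, e.g. 0–1
}

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function generateQuestion(req: QuestionRequest): Promise<GeneratedQuestion> {
  const response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-latest", // model choice is an assumption
    max_tokens: 1500,
    system:
      "You write AKT-style single-best-answer questions aligned to the GLA curriculum. " +
      "Reply with one JSON object with keys: scenario, options (array of 5), correctAnswer, " +
      "justification, references, phase, questionType, topic, confidenceScore.",
    messages: [
      {
        role: "user",
        content:
          `Phase: ${req.phase}\n` +
          `Condition: ${req.condition}\n` +
          `Question type: ${req.questionType}\n` +
          `Clinical context: ${req.clinicalContext ?? "none specified"}`,
      },
    ],
  });

  // Concatenate the text blocks and parse the JSON payload.
  const text = response.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("");
  return JSON.parse(text) as GeneratedQuestion;
}
```

A production version would also validate the parsed JSON against a schema before persisting it (for example to Supabase) and surface the confidence score to reviewers.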
The output is immediately usable in assessments, question banks, or formative practice resources — and can be regenerated or adjusted on demand. The system is designed to support the rapid development of question banks aligned to a new or revised curriculum without placing additional burden on faculty time.
Context & Motivation
The GLA curriculum represents a significant revision to how Imperial structures its undergraduate medical assessment. Building a question bank to match a new curriculum framework is an enormous task — one traditionally dependent on senior academic time. This tool was developed as a proof-of-concept to explore whether AI could take on the first-pass generation work, leaving faculty to focus on review, refinement and quality assurance rather than blank-page authoring.
The request came directly from clinical academics within the Faculty of Medicine who were facing this challenge in their day-to-day work, making it a genuine faculty-initiated innovation rather than a top-down technology project.
Partners & Collaborators
Imperial College London, Faculty of Medicine: Adrian Cowell (Innovation Lead), Professor Amir Sam, Sinthiya Sivarajah (Teaching Fellow).
Next Steps
The prototype is live and in active use. Planned development includes expansion to cover all GLA curriculum topic areas across all phases, a question review and editing workflow for faculty, export to standard question bank formats (QTI/Moodle-compatible), and integration with institutional assessment systems.