Prompting Guide
Getting More from AI
A practical prompting guide for medicine and health academics, going well beyond the basics to what actually produces reliable, useful results in 2026.
What this isn’t: a repeat of how LLMs work or what ChatGPT is. Your colleagues have that covered brilliantly at the Faculty GenAI site. This guide starts where they leave off, focused on technique, craft, and real use cases for research, teaching and clinical education.
The AI Landscape in 2026
A lot has changed since the early days of “just ask ChatGPT”. Understanding the current environment helps you choose the right tool, set the right expectations, and get far better results.
Reasoning Models are Mainstream
Models like Claude Sonnet 4 and GPT-5 now reason before answering; they can plan, check their own logic, and work through multi-step problems. This changes how you prompt: you can ask for structured reasoning, not just answers.
Multiple Models, One Platform
Imperial’s dAIsy gives you access to Claude, GPT-5, and DeepSeek under one secure, data-protected login. Different models have different strengths. Knowing which to use when matters.
Prompts are Now Systems
The shift from “ask a question” to “design a workflow” is complete. Practitioners now think about system prompts, multi-turn conversations, personal agents and reusable templates, not single queries.
Multimodal is Standard
Text, images, PDFs and documents can all be fed into modern models. For medicine: annotating histology images, summarising papers, analysing data tables, all now possible within a single prompt session.
Hallucinations Haven’t Gone Away
Models are more accurate but still fabricate citations, statistics, and clinical details with great confidence. For academic and clinical work, verification remains non-negotiable, especially for anything patient-facing.
Data Protection Still Matters
General AI tools (ChatGPT free, Gemini) may use your inputs for training. Imperial’s dAIsy is configured so prompts are NOT used to train external models; use it for anything sensitive to your work.
The CRAFT Framework
Most mediocre AI outputs come from vague prompts. The CRAFT framework gives you five levers to pull; use all five and you’ll see a dramatic improvement in quality, consistency and usefulness.
Context
Tell the model who you are, what you’re working on, and any relevant background. The more specific the context, the more tailored the response.
Role
Assign the model a relevant expert persona. This “primes” it to draw on domain-specific knowledge and apply appropriate professional judgment.
Action
Be precise about what you want done. Use active, specific verbs. Break complex tasks into numbered steps. Avoid open-ended asks that force the model to guess.
Format
Specify exactly how you want the output structured. Length, headers, bullet points, tables, tone; the model will match your specification if you give one.
Test & Iterate
The first response is rarely the final answer. Strong prompt engineers treat the conversation as a loop: review, refine, and ask the model to improve its own output.
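Putting all five levers together, a prompt might look like this (the scenario details below are illustrative, not prescriptive — swap in your own context):

"Context: I'm a senior lecturer in medical education preparing a revision session for Year 3 students. Role: You are an experienced medical educator. Action: Create five short clinical vignettes on acute chest pain, each followed by one discussion question. Format: Number each vignette, keep stems under 100 words, and end each with a bullet list of key learning points. I will then review the set and ask you to refine the weakest vignette."

Note the final sentence: building the test-and-iterate step into the prompt itself signals that the first output is a draft, not the finished product.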
Techniques That Actually Move the Needle
These techniques consistently produce better results on complex tasks, particularly the kind of analytical, writing and reasoning work common in medicine and research.
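One such technique is asking for the reasoning before the answer, which plays to the strengths of current reasoning models. A minimal, illustrative instruction you can append to any analytical prompt:

"Before giving your final answer, work through the problem step by step: state your assumptions, consider at least two alternative interpretations, and explain why you rejected them. Then give your answer, with a note on how confident you are."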
For Medicine & Health Academics
The real value of AI isn’t writing generic text; it’s accelerating the high-effort tasks that consume your time. Here are the use cases where prompting makes the biggest difference in your context.
Literature Synthesis
Feed multiple abstracts or key sections into dAIsy and ask it to synthesise themes, identify gaps, or structure a narrative for your review; saving hours on first drafts.
Always verify: AI cannot access papers you don’t provide. It may generate plausible-sounding but fictitious citations. Use it for synthesis only on text you’ve given it.
Teaching Content Creation
Generate MCQ banks, case studies, learning objectives, lecture outlines and formative feedback, then refine with follow-up prompts. Use few-shot examples for consistent formatting.
Best for: rapid prototyping of materials, variation of scenarios, adapting content for different year groups.
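Few-shot prompting means showing the model one or two examples of exactly the format you want before asking it to produce more. A sketch of the pattern (the bracketed placeholders are yours to fill):

"Here is an example of an MCQ in the format I want: [PASTE ONE OF YOUR OWN QUESTIONS, INCLUDING STEM, OPTIONS, ANSWER AND EXPLANATION]. Now write [N] more questions on [TOPIC] in exactly this format."

One good example of your own work typically does more for formatting consistency than a paragraph of written instructions.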
Grant & Publication Writing
AI is excellent at restructuring arguments, tightening impact statements, and translating dense technical language into accessible lay summaries. It works best as an editor, not a first author.
Try: “You are an expert grant reviewer. Critique this impact statement for clarity, specificity and fundability.”
Clinical Reasoning Practice
Use AI to generate case vignettes with varying complexity, create worked examples of ABCDE assessment, or produce structured differential diagnoses. Always validate outputs against current guidelines.
Clinical outputs must always be reviewed by a qualified clinician before any patient-facing use.
Research Design Support
Brainstorm methodology options, stress-test your research questions, draft ethics application sections, or generate participant information sheets in plain English.
Note: if you are working with patient-identifiable or otherwise sensitive data, dAIsy is the only appropriate tool, and you should review your institution’s research ethics requirements first.
Student Feedback Drafting
Provide anonymised student work and a marking rubric. Ask AI to draft developmental feedback aligned to the criteria. Review and personalise before sending; this should augment your judgment, not replace it.
Always: remove any identifying information before pasting into any AI tool.
Making the Most of dAIsy
dAIsy is Imperial’s secure, multi-model AI platform. It’s not just a browser wrapper for ChatGPT; it has features that make a significant difference to how useful it is for academic work.
What dAIsy gives you that consumer tools don’t
- Data protection: prompts and files are not used to train external models
- Access to multiple models (Claude Sonnet 4, GPT-5, DeepSeek) under one Imperial login
- Personal Agents: essentially customised AI assistants with persistent instructions and uploaded documents
Access dAIsy (Imperial login required) →
Choosing the Right Model
- Claude Sonnet 4: Best for: nuanced writing, medical reasoning, long documents, balanced outputs
- GPT-5: Best for: coding, structured data tasks, when you want a second opinion
- DeepSeek: Best for: technical analysis, research-heavy tasks
Personal Agents
Create a custom agent with a system prompt that persists across every conversation. Examples:
- “Teaching assistant” pre-loaded with your module handbook
- “Grant reviewer” with your funding body’s criteria
- “Literature buddy” with your research focus
System Prompt for Your Agent
When creating a Personal Agent, your system prompt sets the persistent context. A good template:
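The sketch below is one illustrative shape for such a template; every bracketed detail is a placeholder to replace with your own specifics:

"You are my [role, e.g. 'teaching assistant for the Year 2 Cardiovascular module']. Context: I am a [your role] at [department/institution], and my priorities are [e.g. 'case-based learning, UK guidelines, plain English']. Always: [e.g. 'use UK spelling and terminology; flag anything that needs verification against current guidelines']. Never: [e.g. 'fabricate citations; include patient-identifiable details in examples']. When unsure, say so explicitly rather than guessing."

Because the system prompt persists across every conversation with that agent, a few minutes spent refining it pays off in every subsequent session.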
The Verification Layer
AI in 2026 is significantly more accurate than it was in 2023, but “significantly more accurate” still means it makes confident errors. In medicine and research, the stakes of unchecked errors are high.
Things AI Gets Wrong Most Often
- Citations and references: frequently fabricated with believable author names, journal titles and DOIs
- Specific statistics: incidence rates, drug doses, trial results, often slightly wrong
- Very recent guidelines: training data has a cutoff; NICE, BNF and SIGN updates may not be reflected
- Nuanced clinical judgment: AI gives answers, not the uncertainty that characterises good clinical practice
- Local policies and protocols: AI has no knowledge of your trust or department’s specific procedures
Always verify before publishing, teaching or using clinically: any statistics or numerical claims, any specific citations, any drug interactions or dosing information, any reference to current clinical guidelines.
Build Verification Into Your Prompts
Ask the model to flag its own uncertainty before you have to find it yourself.
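For example, an instruction you can append to almost any prompt (the wording is illustrative; adapt it to your task):

"At the end of your response, add a section titled 'Verify before use' listing: any statistics or figures I should check against a primary source; any citations or guideline references you are not certain exist; and any point where you made an assumption rather than working from the text I provided."

This doesn’t replace your own checking, but it gives you a shortlist of where to look first.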
The Four-Question Check
Before using any AI output, ask yourself:
- Have I checked every statistic against a primary source?
- Have I verified every citation actually exists?
- Is anything here patient-facing or used in assessment?
- Would I be comfortable putting my name on this without checking?
Prompt Template Library
Copy and adapt these templates. The brackets [ ] mark what you should customise: everything else is structured to get consistently good results.
Literature Synthesis
Research

You are a systematic review specialist in [your field]. I will paste [N] abstracts below. For each one:
1. Summarise the key finding in 1-2 sentences
2. Note the study design and sample size
3. Flag any methodological limitations
Then provide an overall synthesis paragraph identifying:
- Common themes across the papers
- Contradictions or gaps in the evidence
- The strongest conclusion the evidence supports
[PASTE ABSTRACTS HERE]
VERIFY: list any claims I should cross-check against the source papers.
Tip: paste up to 5-6 abstracts at once in dAIsy. For full papers, upload the PDFs via the attachment feature.
MCQ Generation
Teaching

You are an experienced medical educator writing formative assessment questions. Create [N] single-best-answer MCQs on the topic of [TOPIC].
Target audience: [Year group / level, e.g. "Year 3 MBBS students"]
For each question:
- Clinical stem (realistic scenario, 2-4 sentences)
- 5 answer options (A-E), one correct, four plausible distractors
- Correct answer
- Brief explanation (max 3 sentences) of why the answer is correct and the most tempting distractor is wrong
Difficulty: [foundation / intermediate / advanced]
Avoid: [any topics or drugs you want excluded]
Grant Impact Statement
Research

You are a senior research grant advisor with expertise in [NIHR / Wellcome / MRC / other]. Review the following draft impact statement for my [grant type] application. Critique it on:
1. Clarity: is the significance immediately obvious to a non-specialist panel member?
2. Specificity: are the claimed benefits concrete and measurable?
3. Feasibility: does the impact claim seem realistic given the project scope?
4. Patient/public benefit: is this made explicit?
Then rewrite the statement incorporating your suggestions. Keep to [word limit] words.
[PASTE YOUR DRAFT IMPACT STATEMENT]
Patient Information Leaflet (Plain English)
Research / Clinical Ed

You are a health literacy specialist. Rewrite the following clinical/research text as a patient information document.
Requirements:
- Reading age: no higher than 12 years (Flesch-Kincaid)
- Avoid jargon; explain any medical terms you must include
- Use short paragraphs (max 3 sentences)
- Include: What this is, Why it matters, What happens next
- Tone: warm, reassuring, informative
[PASTE SOURCE TEXT]
After completing the document, flag any section where simplification may have changed the clinical meaning; these will need clinical review before use.
Case Study Generator
Teaching

You are a clinical educator designing case-based learning for [YEAR / PROGRAMME]. Generate a case study on [CLINICAL SCENARIO / CONDITION].
Structure:
1. Presenting complaint and history (realistic, 150 words)
2. Examination findings (relevant positives and negatives)
3. Initial investigations with results
4. Three discussion questions progressing from recall to application
5. Key learning points (maximum 5 bullet points)
6. Suggested further reading topics (do not fabricate citations)
Complexity: [foundation / intermediate / advanced]
Avoid: any patient-identifiable details; keep demographics generic.
After generating, ask: “Now create an answer guide for facilitators for the three discussion questions.”
Where to Go Next
This guide focuses on practical technique. For everything else: the fundamentals, ethical frameworks, events and support, your colleagues and Imperial central have you covered.
dAIsy Platform
Imperial’s secure multi-model AI platform. Login with your Imperial credentials. Safe for research-adjacent work.
Faculty GenAI Site
Introduction to GenAI, training events, Coffee & Cake sessions, and tools overviews from your Faculty colleagues.
dAIsy Guidance (ICT)
Official how-to guides, use policy, training videos and the quick-start walkthrough from Imperial ICT.
AI & Education Hub
Imperial’s official staff guidance on GenAI in teaching, assessment design, and academic integrity.
Library AI Guidance
How to reference and acknowledge AI use in academic work. Harvard and Vancouver formats provided.