Physicians as Context Engineers in the Era of Generative AI
About This Resource
A provocative correspondence in Nature Medicine arguing that physicians must become 'context engineers' who actively design the conditions under which AI systems operate in clinical care. The authors identify four dimensions of clinical AI context (data, task, tool, and normative) and warn that passive adoption of vendor-defined tools risks deploying powerful but misaligned systems that amplify inequities and erode clinical judgment.
What You'll Learn
- Context engineering: deliberately designing how AI operates in clinical settings
- Four dimensions of clinical AI context: data, task, tool, and normative
- Why passive adoption of vendor AI tools risks misalignment with patient care
- How physicians can embed clinical knowledge and values into AI systems
- Practical examples in documentation assistance, decision support, and workflow automation
What to Read Next
Prompt Engineering for Clinicians
A practical framework (the CREF method) for writing effective clinical prompts.
Dos and Don'ts of Using LLMs in Medicine
Essential safety guidelines for using AI in clinical settings, covering HIPAA, accuracy, and ethical boundaries.
What About Patient Safety? What Every Clinician Needs to Know About AI Trust
The non-negotiable safety principles every clinician needs before integrating AI into their workflow.
How Do I Get Better Answers? The Art of Asking AI the Right Questions
A prompt engineering primer for clinicians: why framing matters, how to avoid sycophancy, and how to get genuinely useful AI output.
