This study evaluates how well anonymized artificial-intelligence (AI) tools perform on standardized pediatric case vignettes and whether showing AI suggestions can improve clinicians' answers. Approximately 30 board-certified or board-eligible pediatric specialists at a single hospital will complete a one-time session and are randomized to one of two groups:

- Group A (n≈15): physicians answer each vignette once.
- Group B (n≈15): physicians answer and rate their confidence (1-10), then review anonymized suggestions from five different AI tools (tool names not shown) and may keep or change their answer; changes and confidence are recorded.

Primary focus: measure AI performance (diagnostic accuracy, medication-dosing accuracy, and interpretation accuracy) overall and by difficulty tier, and record AI response time. Secondary focus: quantify how AI suggestions affect human performance (change in accuracy, direction of change, confidence shift, and time).

No patients or biospecimens are involved, and risks are minimal (time and possible discomfort with performance review). Findings may inform safe, evidence-based ways to use AI alongside clinicians in pediatrics.
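For illustration, the Group B secondary outcomes could be tabulated along these lines. This is a minimal sketch, not the study's analysis code; the record fields, scoring scheme, and function names are all assumptions, since the listing does not describe how responses are stored or scored.

```python
from dataclasses import dataclass

@dataclass
class GroupBResponse:
    """One Group B physician's record for a single vignette (hypothetical schema)."""
    initial_answer: str
    initial_confidence: int  # 1-10 rating before seeing AI suggestions
    final_answer: str        # kept or changed after reviewing anonymized AI output
    final_confidence: int    # 1-10 rating after review
    correct_answer: str

def summarize_ai_effect(responses: list[GroupBResponse]) -> dict:
    """Summarize the secondary outcomes: accuracy change, direction, confidence shift."""
    n = len(responses)
    pre_correct = sum(r.initial_answer == r.correct_answer for r in responses)
    post_correct = sum(r.final_answer == r.correct_answer for r in responses)
    # Direction of change: switches toward vs. away from the correct answer.
    improved = sum(
        r.initial_answer != r.correct_answer and r.final_answer == r.correct_answer
        for r in responses
    )
    worsened = sum(
        r.initial_answer == r.correct_answer and r.final_answer != r.correct_answer
        for r in responses
    )
    mean_shift = sum(r.final_confidence - r.initial_confidence for r in responses) / n
    return {
        "accuracy_change_pct": 100 * (post_correct - pre_correct) / n,
        "changed_to_correct": improved,
        "changed_to_incorrect": worsened,
        "mean_confidence_shift": mean_shift,
    }
```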
Outcome measures (timeframe for each: Day 1):
- AI Diagnostic Accuracy (%)
- AI Medication-Dosing Accuracy (%)
- AI Interpretation Accuracy (%)
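Each of these measures is a simple proportion of correctly answered vignette items, expressed as a percentage. A minimal sketch of that calculation, assuming item-level correct/incorrect scoring (the listing does not specify the scoring scheme, and the example data are hypothetical):

```python
def accuracy_pct(results: list[bool]) -> float:
    """Percent of vignette items scored correct (True)."""
    return 100 * sum(results) / len(results)

# Hypothetical example: one AI tool's diagnostic results across five vignettes.
print(accuracy_pct([True, True, False, True, False]))  # -> 60.0
```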