This study examines whether individual differences in how speakers respond to hearing versus physical sensation during speech can predict who benefits most from visual feedback during a speech task. Healthy adults will complete a series of tasks in which auditory feedback is altered in real time through headphones, with and without an added visual display of the speech signal. A computational model will be used to estimate how strongly each participant relies on hearing versus physical sensation when monitoring speech. The study will then test whether this individual profile predicts how much the visual display improves each participant's ability to respond to the altered feedback.
Mean difference in first-formant (F1) compensation magnitude between the auditory-visual and auditory-only feedback conditions of a standard adaptive auditory feedback task
Timeframe: During study visit (Day 1)