Visual information about speech content from the talker's mouth is often available before the corresponding auditory information from the talker's voice. This experiment examines perceptual responses to words with and without this visual head start.
Participants were presented with audio-only or audiovisual stimuli and asked to respond by typing what they heard into a text entry box. The stimuli varied along three factors, each with two levels:
If you want to see how I created this experiment and organised and analysed the data, you can watch my video tutorials on Gorilla Academy.
This is a replication of Karas et al. (2019).
Built with Experiment: Creative Commons Attribution (CC BY)
Built with Task Builder 1: Creative Commons Attribution (CC BY)
Built with Questionnaire Builder 1: Creative Commons Attribution (CC BY)
Fully open! Access by URL and searchable from the Open Materials search page.