Subscribe to Gorilla Grants
We regularly run grants to help researchers and lecturers get their projects off the ground. Sign up to get notified when new grants become available
Spotlight on...
I research all things related to how people process speech. My areas of focus include how visual cues, such as seeing the talker, affect speech recognition, and the cognitive demands associated with processing speech in adverse listening conditions.
The McGurk effect is a commonly cited audiovisual illusion in which discrepant auditory and visual syllables can lead to a fused percept (e.g., auditory “ba” paired with visual “ga” is often perceived as “da”; McGurk & MacDonald, 1976).
To experience the effect firsthand, watch this video. First listen to the video with your eyes closed, then watch it!
The McGurk effect is robust in pooled group data, but people differ in the extent to which they are susceptible to the McGurk effect — some individuals always report the auditory syllable (they are ‘immune’) and others always report the visual syllable. What accounts for the difference?
Despite its prevalence in the audiovisual speech perception literature, little is known about why people differ in their susceptibility to the effect. Previous research suggests that better lipreaders may be more susceptible (Strand, Cooperman, Rowe, & Simenstad, 2014), but other perceptual and cognitive correlates have not been identified. In our study, we asked whether McGurk susceptibility is related to six potential correlates: lipreading ability; the ability to extract information about place of articulation from the visual signal (a more fine-grained measure of lipreading ability); auditory perceptual gradiency (a measure of participants’ sensitivity to where an ambiguous syllable falls on a continuum ranging from “da” to “ta”); attentional control; processing speed; and working memory capacity. We implemented each of these tasks using Gorilla, and recruited 206 participants from Amazon Mechanical Turk.
All of our data, code for analyses, and materials can be accessed at https://osf.io/gz862/, and details regarding our pre-registered hypotheses, sample size, exclusion criteria, and analyses can be viewed at http://osf.io/us2xd/.
We found that better lipreaders tended to be more susceptible to the McGurk effect (consistent with the results of Strand et al., 2014), but did not find evidence that any other perceptual or cognitive abilities were associated with McGurk susceptibility.
These results suggest that a small amount of the variability in this classic audiovisual speech illusion is related to lipreading skill, and that other perceptual and cognitive traits that are commonly used in individual differences studies appear to be unrelated to susceptibility to the McGurk effect. The original paper on the McGurk effect has been cited over 6,000 times, so it is somewhat surprising that correlates of McGurk susceptibility have not been identified in the published literature. We suspect that this may be attributable to publication bias, which can make it difficult to publish null effects. This study is therefore an important contribution to the literature, and may be particularly useful for researchers attempting to identify perceptual and cognitive correlates of McGurk susceptibility, which, as it turns out, remain elusive.
Given that we study speech perception, we needed to ensure that participants could actually hear the auditory stimuli, so we required that they use headphones during the experiment. To check that they were actually following these instructions, we implemented a recently developed headphone check (Woods, Siegel, Traer, & McDermott, 2017). Participants are presented with six trials of three tones each, and on each trial, one of the tones is presented out of phase across the stereo channels. When listeners are wearing headphones, this tone sounds noticeably quieter than the other two, but the difference is extremely difficult to detect when the stimuli are played through loudspeakers. If participants incorrectly identified the quietest tone on at least two of the six trials (i.e., they could get one of the six wrong), they were not allowed to advance to the actual experiment.
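The pass/fail logic of that screening step can be sketched as follows. This is a minimal illustration of the scoring rule described above, not Gorilla's or Woods et al.'s actual implementation; the function name, data shapes, and threshold parameter are assumptions for the example.

```python
def passes_headphone_check(responses, answers, max_errors=1):
    """Return True if the participant may advance to the experiment.

    responses  -- per-trial index (0-2) of the tone the participant
                  judged quietest, one entry per trial
    answers    -- per-trial index of the antiphase (correct) tone
    max_errors -- participants with more errors than this are excluded
                  (the study allowed one wrong answer out of six)
    """
    if len(responses) != len(answers):
        raise ValueError("one response is required per trial")
    # Count trials where the participant missed the antiphase tone.
    errors = sum(r != a for r, a in zip(responses, answers))
    return errors <= max_errors


# One error out of six trials still passes; two errors fail.
print(passes_headphone_check([0, 1, 2, 0, 1, 1], [0, 1, 2, 0, 1, 2]))  # True
print(passes_headphone_check([0, 1, 2, 0, 2, 1], [0, 1, 2, 0, 1, 2]))  # False
```

Because the antiphase tone only sounds quieter over headphones, loudspeaker listeners perform near chance on this task and are very likely to exceed the error threshold.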
Yes, this study just came out in PLOS ONE.
I’m really excited about all of my research, but one line of work that I’m especially excited about right now concerns the amount of effort that listeners have to expend to recognize audiovisual speech. In a study that recently came out in Psychonomic Bulletin & Review, we showed that a modulating circle does not improve speech recognition accuracy, but it substantially reduces the effort required to recognize the speech (Strand, Brown, & Barbour, 2018).
As more methods are developed to ensure good quality data (and detect bots), conducting research online will give us the means to efficiently collect data for high-powered studies. I think online research will play an important role in replication, because not only will more studies be conducted to test whether effects can replicate in an online sample, but the rapid rate of data collection will also facilitate the replication of studies that would otherwise be time-consuming to conduct in the lab.
Most of our research is conducted on healthy, motivated, normal-hearing college students. Though I would argue that our findings are usually generalizable beyond this population, conducting research online allows us to test whether the effects we’re interested in are present in a more ecologically valid setting, and additionally allows us to achieve a larger sample size than is typical in studies conducted in the lab.
We wanted to collect data through Amazon Mechanical Turk to ensure a large and diverse sample, and we needed a flexible online platform to design our experiment. The experiment we conducted had several tasks that varied in the types of stimuli we presented (e.g., videos with audio, audio-only stimuli, text), and Gorilla could effectively present each of these stimulus types.
Not only was Gorilla’s interface intuitive, which enabled us to quickly and effectively design our study, but this platform also allowed us to collect data from 206 participants in just a few weeks for a relatively low cost.
Given that we assessed individual differences in participants’ abilities in six different tasks, the stand-out feature in Gorilla was its flexibility in allowing us to administer each of these tasks.
Julia Strand — my former undergraduate research mentor and current collaborator and dear friend — gets all the credit for instilling in me not only a fascination with speech perception, but also a love for research more generally. I worked as an undergraduate research assistant in her lab throughout my time at Carleton College, and was her lab manager for a little over a year after I graduated. Although I quickly developed an interest in spoken word recognition, what drew me in (and maintained my interest) was Julia’s enthusiasm toward research and doing good, open science. It’s contagious, and luckily I caught the bug early. I couldn’t be more grateful to her.
I used to bartend at a craft distillery in Northfield, Minnesota called Loon Liquors. The distillery is “grain to glass,” which means they receive grain from local farms and turn it into their product on-site (rather than receiving what’s known as “neutral grain spirit,” which is distilled by another company), affording them precise control over the quality and consistency of the product. As a result, the bartenders learn to carefully construct cocktails that perfectly complement the flavors in the spirits, and this has made me appreciate the art and science that goes into making a good drink. I really love bartending, and although I don’t have time to bartend in graduate school, I still make cocktails with my friends and teach them about the process.
I’d highly recommend the podcast The Black Goat with Simine Vazire, Sanjay Srivastava, and Alexa Tullett.
I loved reading The Seven Deadly Sins of Psychology by Chris Chambers. This book clearly and effectively lays out seven issues that are prevalent in psychological research and includes suggestions for how we can address these issues.