This interview was written in December 2018 about the work Eva Poort did during her PhD at University College London.
What do you work on?
My area of research concerns the bilingual mental lexicon — the mental dictionary of people who speak more than one language.
Specifically, I’m interested in how bilinguals process words that exist in multiple languages, which can have either the same meaning, like the word “wolf” in Dutch and English, or a different meaning, like the word “angel”, which in Dutch means “insect’s sting”.
For my research I mainly use lexical decision tasks (Is this an existing word or not?) and semantic relatedness tasks (Are these two words related to each other in meaning?).
“This has the potential to really change how we think about how bilinguals process words that exist in multiple languages.”
What did you do using Gorilla?
By now, I’ve run multiple studies on Gorilla. In the first one, I used a cross-lingual long-term priming paradigm. In the first task of the experiment, participants read sentences in Dutch that contained either a cognate, like “wolf”, or an interlingual homograph, like “angel”. To make sure that the participants were actually reading the sentences, I asked them to indicate for each sentence whether a subsequent probe was related in meaning to the whole sentence.
Later on during the experiment, they completed an English lexical decision task which included some of the same cognates and interlingual homographs as the first task. During this task, they were asked to indicate whether each stimulus was a real English word or not.
In the past (see Poort, Warren, & Rodd, 2016), I had found that participants responded more quickly to cognates they had seen earlier in the experiment, even though they had originally seen them in a Dutch context. In contrast, having seen an interlingual homograph in a Dutch sentence context slowed the participants down in the English task. Unfortunately, the Gorilla experiment didn’t replicate this effect.
With my latest experiment, however, I showed that the type of task you use influences the size of two well-known effects in the bilingual literature, the cognate facilitation effect and the interlingual homograph inhibition effect. In lexical decision tasks, bilinguals often process cognates more quickly than words that exist in only one of the languages they know (e.g. “carrot”, which only exists in English and not in Dutch), but they process interlingual homographs about as quickly as those control words. In my experiment I used a semantic relatedness task, in which participants saw pairs of words (e.g. “carrot” and “vegetable”) and were asked to indicate whether those words were related in meaning or not.
In this task, the participants did not respond more quickly (or more slowly) to the pairs that included a cognate than to pairs that did not, but they did respond more slowly to pairs that included an interlingual homograph. This difference in the size of these effects between lexical decision and semantic relatedness tasks has important implications for current theories of the bilingual mental lexicon.
What was your study protocol?
My experiments are always very complicated!
I’ll just describe the general protocol briefly. I use a Questionnaire for my demographics questions, then use Branches to filter out participants who don’t meet my requirements. I usually create my own tasks (I created both the lexical decision task and semantic relatedness task described above) but have also used the in-built Towers of Hanoi task (with the maximum duration feature). I also often use (chained) randomisers, but not really any of the other flow controllers (e.g. Counterbalance, Delay).
Did you include any special features in your study to ensure good quality data? If so, what did you do?
I usually do a combination of the following:
- Ask participants to fill in my demographics questionnaire at the start, so that I can use branches to filter out participants who do not meet the (stated) participation requirements.
- Include some vocabulary measures and exclude participants who do not meet some minimum score.
- Exclude participants who do not achieve at least 80% correct on the main tasks of interest.
- Exclude participants who take too long to complete priming experiments, to make sure the priming delay is about the same for each participant.
(For the avoidance of doubt, participants are always paid, but I exclude their data to ensure good data quality.)
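As a rough illustration, criteria like those above could be applied programmatically to a per-participant summary. This is a minimal sketch in Python with pandas; the column names, scores, and thresholds are hypothetical placeholders, not Gorilla’s actual export format or the exact cut-offs used in these studies:

```python
import pandas as pd

# Hypothetical per-participant summary. Column names are illustrative,
# not Gorilla's actual export format.
data = pd.DataFrame({
    "participant": ["p1", "p2", "p3", "p4"],
    "vocab_score": [38, 22, 35, 40],          # vocabulary measure
    "accuracy":    [0.92, 0.85, 0.74, 0.88],  # proportion correct on the main task
    "minutes":     [24, 61, 26, 28],          # total completion time
})

MIN_VOCAB = 30       # assumed minimum vocabulary score
MIN_ACCURACY = 0.80  # at least 80% correct on the main tasks of interest
MAX_MINUTES = 45     # assumed cap so the priming delay stays comparable

# A participant is kept only if they pass every criterion.
keep = (
    (data["vocab_score"] >= MIN_VOCAB)
    & (data["accuracy"] >= MIN_ACCURACY)
    & (data["minutes"] <= MAX_MINUTES)
)
included = data[keep]
print(included["participant"].tolist())  # → ['p1', 'p4']
```

Here p2 fails the vocabulary threshold and p3 the accuracy threshold, so only their data (not their payment) would be dropped from the analysis.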
“If I were starting my online research journey, I would definitely choose Gorilla.”
Has this study been published?
The study that used the cross-lingual long-term priming paradigm is Experiment 2 of this preprint:
The semantic relatedness experiment is available as a preprint here:
For you, what is the stand-out feature in Gorilla?
There are too many! My experiments often consist of different versions with participants completing only one of those versions, so the good randomisation functionality and spreadsheet manipulation feature have certainly saved my life. And I haven’t used them yet, but I really like the sound of the Quota nodes. That will definitely make it easier in future to recruit equal numbers of participants for the different versions in a multi-version experiment.
I’d also like to point out, although it isn’t a feature of Gorilla per se, the support offered by the team is phenomenal. Whenever I get stuck with anything, they’re more than happy to help and they’re always open to feature requests (I’m so excited to use the Quota nodes!).
What is the most exciting piece of work or research you’ve ever done?
I would say the semantic relatedness experiment that I described above. The results allowed me to draw a parallel with research conducted in the monolingual field and this has the potential to really change how we think about how bilinguals process words that exist in multiple languages (i.e. cognates and interlingual homographs).
How do you think online research is going to change your field?
Aside from the obvious advantages of being able to recruit greater numbers of participants, I think for my field specifically online research will make it much easier to study effects of language proficiency and language dominance.
What is the biggest advantage of online research methods?
Being able to reach more diverse populations and greater numbers of participants!
Why did you choose to use Gorilla?
For the very practical reason that UCL had just acquired a license for it and the software that I had been using (the Qualtrics Reaction Time Engine) no longer worked for me. That said, if I were starting my online research journey, I would definitely choose Gorilla, mainly for the ease of use and the wide range of features.
How did Gorilla make your life or research better, easier or faster?
What improvements would you like to see in Gorilla to make your research easier?
It’s been a while since I last used Gorilla, but back then even though I could download the data for all nodes of the same task in one go, it would still give me a different datafile for each node. My life would be even easier if these were merged automatically into a single file.
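In the meantime, merging the per-node files can be scripted in a few lines. This is a sketch in Python with pandas; the file names and columns are invented for the example and do not reflect Gorilla’s actual export naming scheme:

```python
from pathlib import Path

import pandas as pd

# Simulate per-node exports with two small CSV files.
# File and column names are illustrative, not Gorilla's actual scheme.
data_dir = Path("gorilla_exports")
data_dir.mkdir(exist_ok=True)
pd.DataFrame({"participant": ["p1", "p2"], "rt": [612, 587]}).to_csv(
    data_dir / "task_node_1.csv", index=False)
pd.DataFrame({"participant": ["p1", "p2"], "rt": [598, 640]}).to_csv(
    data_dir / "task_node_2.csv", index=False)

# Read every per-node file, tag each row with its node, and stack them.
frames = []
for csv_file in sorted(data_dir.glob("*.csv")):
    df = pd.read_csv(csv_file)
    df["source_node"] = csv_file.stem  # remember which node each row came from
    frames.append(df)

merged = pd.concat(frames, ignore_index=True)
merged.to_csv("merged_data.csv", index=False)
print(len(merged))  # → 4
```

Keeping a `source_node` column means nothing is lost in the merge: the single file can still be split back out by node if a later analysis needs it.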
What do you believe to be true that you cannot prove (yet)?
I still (perhaps foolishly) believe that the cross-lingual priming effect is real, even though in follow-up experiments I haven’t been able to convincingly replicate it.
Are there any online courses, podcasts, discussion groups or resources that you’d recommend to others?
Yes! Anyone doing anything with statistics should do Daniël Lakens’ Improving Your Statistical Inferences course on Coursera. And learn how to use R.
When you’re not working, what do you enjoy doing?
I’ve got too many hobbies, really. I love to read, watch TV, knit, sew, go for long hikes, and cook.