The lab experience
Traditionally, most behavioural research is done in laboratory settings. As a PhD student or a research assistant, you spend time designing flyers to hang up in your department, or use your university's participant pool to recruit people to take part in your experiment. You then painstakingly schedule individual experiment sessions for each of your participants and invite them to your university building to do some tasks on a computer for an hour or so, quite possibly in some tiny cubicle in a dingy basement.
Additionally, you have to spend that same time in that same dingy basement, usually not doing anything more productive than engaging in small talk about the weather, typing in your participants' IDs, and perhaps starting up a new experiment programme every now and then. And that's if you're lucky and your participant actually shows up.
I've done my fair share of lab-based data collection and I have to say, it really is exhausting. It's a massive time commitment from the researcher; you can easily spend months on end going through the recruitment-scheduling-testing cycle.
Apart from the researcher's time, lab-based data collection is also resource-intensive for your university. They need to provide enough rooms and equipment for all the active researchers to be able to run their experiments, and, as an experimenter, you often find yourself competing for those spaces with your colleagues. This competition can get particularly tiresome during prime "testing season", when, sadly, it is often the senior researchers and prestigious labs that win.
Lab-based research has another big problem: its participants. What population do you think is the easiest to recruit for experiments that are run in the dingy psychology department basement? That's right: people who are already physically in the department. It's no surprise that much of the classic literature in psychology is based on samples of university students. They are there, they are willing, you can bribe them with course credits; it's just so convenient!
The problem with this approach is that the typical undergraduate student tends to be quite different to the population that we as psychologists usually want to make inferences about. For starters, they might be particularly homogeneous when it comes to socioeconomic status, and often also race and gender. It's commendable when researchers try to widen their participant pool to community samples, e.g. by putting up their flyers somewhere other than the student union. However, when we're recruiting people for lab-based research, we will always automatically restrict our sample to those who are both physically able and willing to come to the university.
With the replication crisis looming over all our heads, the field has quite rightly recognised that we need to run our experiments on bigger and more diverse samples. Sadly, this is often not realistic with lab-based research, especially if you're the sole experimenter on your project. There are only so many hours in the day for testing!
So, what can be done about this?
Online research
The last few years have seen a change in where research and data collection happen. Internet browsers and people's own electronic devices are now powerful enough to display complex stimuli and to measure response times remotely with a great degree of accuracy. These technological advances have allowed researchers in the behavioural sciences to shift from lab-based to web-based research.
Recruitment platforms like MTurk and Prolific Academic allow researchers to advertise their experiments online and attract participants far away from the university campus, who can then do the tasks on their own devices and in their own time.
It really seems like a no-brainer:
- Running experiments online saves the researcher valuable time: there is no need to schedule individual sessions, you can easily test multiple people at the same time, and you can use the time it takes to collect the data in more productive ways.
- Online research saves university resources, as there is no need to provide as many rooms and computers specifically for data collection purposes. More room for coffee machines in psychology departments!
- Recruiting participants online allows us to diversify our participant pool and collect larger sample sizes to increase the power of our analyses.
The challenges of online research
It is worth noting that nothing is perfect. Like lab-based research, online research has its challenges. When I first started thinking about running my own experiments online, I admit that I had doubts. However, I have since found that many of my initial concerns either applied equally to research conducted in the lab or were avoidable by making sensible decisions at the experiment design stage.
Does the absence of an experimenter lead to lower-quality data?
By its nature, online research does not offer the same control over the experiment environment as the lab does. Without an experimenter there to check, you may ask yourself whether your participants are in a reasonably quiet environment, and whether they really are doing the task by themselves. Are they listening to distracting music or podcasts while they're supposed to be concentrating on your task? Are they checking their email? Are they having a conversation with someone else in the room at the same time? How can we be sure that online participants take the tasks seriously, and don't just, say, press buttons at random? It is good to think about these issues before starting an online experiment. Luckily, a lot of these worries can be resolved, and there are various ways of addressing these questions.
Can I really trust my participants?
Are my "participants" real people? The infiltration of Amazon's Mechanical Turk participant database with bots recently led to a big scandal; see Max Hui Bai's blog post about the issue.
So how can we tell real people from bots? Can we trust that online participants truly fit our participation criteria? What if people lied about, say, their age or other demographic details in order to be able to participate in our study? These are legitimate concerns, and they are important questions that need to be considered by anyone wanting to collect their data online.
The truth is, you can never be 100% sure that your online participants took your task as seriously as you would have liked them to. However, similar concerns apply to your participants in the lab. Can you always be 100% sure that the participants who step into your dingy testing basement are as old as they say they are, or that they fulfil your demographic inclusion criteria? Even with you watching over them, can you always be sure that they're concentrating on the task, rather than being lost in their own thoughts?
In one of my eye-tracking studies I used a remote tracker without a headrest, so I made sure to remind participants multiple times to remain in approximately the same position in front of the computer screen and not to move their head too much during the task. One participant went on to grab her water bottle from under the table and had a drink THREE TIMES during her session, even after I had reminded her of the instructions and re-calibrated the tracker twice. After the second time, I knew her data was going to be useless, so I just waited for the task to finish and sent her home. The point is: sometimes you have to deal with shoddy data, whether that's in the lab or online.
There are many ways to maximise the likelihood that your remotely collected data will be of good quality. Firstly, I'd recommend making good use of tools that will make your life much, much easier. Secondly, by making sensible adjustments to your tasks you can optimise your data quality and increase your chances of catching any bots in your sample.
The ultimate proof is in the quality of online behavioural research that we can see being published. Many scientists are taking their research online, collecting data faster and accelerating their research. You can read about some examples here.
10 tips to optimise the quality of the data you collect online:
1. Adapt your paradigm
While many paradigms can be taken online with no changes, some, particularly those with a memory assessment, might need some tweaking. When we collect our data remotely, we cannot monitor what the participant does during the task. This doesn't mean that data collection is impossible; it just means that we need to think carefully about how we can adapt the tasks we use.
For example, one of my experiments included a short-term memory paradigm. In the lab, I would have been able to prevent participants from writing down the information that they were supposed to hold in memory; this was of course impossible when I ran the task online. So instead, the encoding phase included a task element where participants needed to use their mouse to click on images on the screen. I could then check in my data whether people used the mouse to click. If they did, I inferred that they couldn't have taken notes with their hand during the encoding phase. This is of course quite a crude method, but it illustrates that we need to think creatively to make our tasks suitable for online use, if the task demands it.
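A minimal sketch of what that kind of post-hoc check might look like, assuming your platform exports an event log per participant. The event names, participant IDs and click threshold here are all invented for illustration; they are not any platform's actual API:

```python
# Hypothetical event logs from the encoding phase, keyed by participant ID.
# The event names and the click threshold are illustrative only.
encoding_events = {
    "p01": ["mouse_click", "mouse_click", "mouse_click"],
    "p02": [],  # no clicks recorded: can't rule out note-taking
}

def likely_hands_on_mouse(events, min_clicks=2):
    """Crude inference: enough recorded clicks during encoding suggest the
    participant's hand was on the mouse rather than holding a pen."""
    return events.count("mouse_click") >= min_clicks

# Flag participants whose encoding-phase data deserves a closer look.
flagged = sorted(pid for pid, events in encoding_events.items()
                 if not likely_hands_on_mouse(events))
print(flagged)
```

The point of a sketch like this is not the threshold itself, which you would tune during piloting, but that the check runs on data you already collect anyway.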
2. Use a good experiment platform
If you're like me and started your PhD without any programming experience, and are, frankly, a bit scared of learning how to deal with variables, functions and loops, then you'll save yourself a ton of time and frustration by finding a good experiment platform.
I started using Gorilla, which was super intuitive and allowed me to easily set up my experiments and tinker with them, all mostly without having to ask for help. This gave me more time for thinking about appropriate ways to adapt my tasks for online data collection.
3. Build in checks for data quality
One of the main concerns about online research is that your tasks will be completed by participants who aren't really paying attention. You shouldn't worry too much, though: there are a variety of ways to trip up bots and inattentive participants during your experiment, so that you can later discard their data. For example:
- You can set a maximum or minimum amount of time for individual tasks or questionnaires if you've got a reasonable idea of how long people should be spending on them. With Gorilla, you can check the time your participants spent reading your instruction screen, for example. This way, you can exclude data from participants who clearly didn't read the instructions properly. Or you could check your participants' overall time spent on a task, and discard those who spent an unreasonably long time completing a section, under the assumption that maybe they took a tea break that they weren't supposed to. Similarly, if you've got a good idea of how long participants should be spending on individual trials, you can use average response times as exclusion criteria.
- You can also include specific "attention check" trials within your experiment. These might be particularly easy questions, and you could use them to exclude anyone who got them wrong.
- You could also include some auditory filler trials to ensure people wear their headphones for the duration of the task, if you want them to be in a quiet environment. Instruct them to put on their headphones at the beginning and have them respond in some way to the audio to check that they were in fact listening.
- If you want to make sure that participants' demographic data are legitimate, you can ask the same question twice and check for any inconsistencies in responses.
Again, be creative and use the type of "bot catch" that will work best for your particular paradigm.
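In practice, checks like these usually end up as a simple filter over per-participant summaries at analysis time. Here is a minimal sketch in Python; the field names and the thresholds are entirely made up, and you would choose your own based on piloting:

```python
# Hypothetical per-participant summaries exported from the experiment platform.
participants = [
    {"id": "p01", "instructions_ms": 12_000, "total_min": 18, "attention_ok": True},
    {"id": "p02", "instructions_ms": 900,    "total_min": 15, "attention_ok": True},   # skimmed the instructions
    {"id": "p03", "instructions_ms": 10_000, "total_min": 95, "attention_ok": True},   # unscheduled tea break?
    {"id": "p04", "instructions_ms": 14_000, "total_min": 20, "attention_ok": False},  # failed the attention check
]

def passes_quality_checks(p, min_instr_ms=3_000, max_total_min=60):
    """Keep a participant only if they spent a plausible time on the
    instructions, finished within a reasonable window, and passed the
    attention-check trials."""
    return (p["instructions_ms"] >= min_instr_ms
            and p["total_min"] <= max_total_min
            and p["attention_ok"])

kept = [p["id"] for p in participants if passes_quality_checks(p)]
print(kept)
```

Writing the exclusion rules down as code, before you look at any results, also doubles as documentation of your criteria, which is handy if you preregister your study.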
4. Make your experiment exciting
To ensure that your participants happily complete your whole experiment, I strongly recommend making your task as short as possible, and as fun as possible. Part of this is making your experiment look nice and professional, which is pretty much a given if you're using Gorilla as your experiment platform.
I have found that participants really appreciate feedback about their performance; e.g. you could gamify your task by letting participants collect points for correct answers, or give them their overall score at the end. With Gorilla, for example, you can easily set up feedback zones that you can individualise with your own graphics. Click here to have a look at an example.
5. Pilot, pilot, pilot!
Figuring all of this out relies on trial and error. You will need to find the appropriate adjustments to your task that still ensure you get the kind of data that you want, and you will need to test whether your "attention checks" actually work. A colleague of mine found that even legitimate participants tended to fail her check trials, for some unknown reason. The way to iron out those kinks is to try out your experiment first.
Invest time in experiment design and do a lot (and I mean a lot!) of piloting before you go "live" with your experiment!
6. Data analysis
Once you've piloted your study with a small sample, set up your data analysis. This will make sure that you've got all the meta-data set up correctly for easy analysis. Excel Pivot Tables are super powerful and tremendously useful in many careers; make them your friend.
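If you do your analysis in code rather than in Excel, the same participant-by-condition summary a Pivot Table gives you can be built with pandas' pivot_table. A small sketch, with invented column names and numbers:

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial.
trials = pd.DataFrame({
    "participant": ["p01", "p01", "p02", "p02"],
    "condition":   ["A",   "B",   "A",   "B"],
    "rt_ms":       [520.0, 610.0, 480.0, 700.0],
})

# Mean response time per participant and condition: the pandas equivalent of
# an Excel Pivot Table with participants as rows, conditions as columns,
# and mean as the aggregation function.
summary = trials.pivot_table(values="rt_ms", index="participant",
                             columns="condition", aggfunc="mean")
print(summary)
```

Setting this up against your pilot data is exactly what catches meta-data problems (mislabelled conditions, missing participant IDs) before the real data arrive.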
7. Choose a fair and reliable platform for participant recruitment
During my PhD, I linked my Gorilla experiments to the Prolific Academic platform for participant recruitment. So far, Prolific has been spared any scandals about bots in their database, and my personal experience suggests that the people who are signed up as members are genuine and generally quite eager to perform experimental tasks diligently.
Members of Prolific provide their demographic information when they first sign up to the platform, so I was able to directly target only those who were eligible for my experiment, without having to worry about them lying just to be able to take part in my study.
Prolific's large database meant that I could collect data from over 100 people within a day.
8. Pay your participants fairly
It's important to ensure that your participants are rewarded appropriately for their time, not just for the obvious ethical reasons, but also because you are much more likely to get good-quality data from people who are satisfied and feel like their time is valuable to you.
I have paid my online participants at the same hourly rate that my university recommends for lab-based participants.
9. Run your experiment in batches
This is a major, major tip to avoid any data loss. Rather than setting your recruitment target to the maximum immediately, I recommend recruiting your participants in batches of, say, about 20 at a time.
It's also sensible to do a brief data quality check once you've run each batch (without peeking at statistics, of course!) so that you have a better overview of how many more datasets you still need to collect.
10. Final thoughts
I am by no means an expert in online research, but I hope that these tips will be helpful for anyone planning their first (or even their 100th) online study. For more information about all things online research, you can check out Jenni Rodd's fantastic article, and watch videos from the BeOnline conference.
By running three of the five experiments for my thesis online, I saved a lot of time and learnt a lot about appropriate experimental design. It meant that I was able to run more experiments than I had originally planned and investigate some interesting but tangential research questions. It also meant that I could be involved in the supervision of undergraduate students, who were able to easily set up their own experiments and collect their own data within the time frame of a typical undergraduate research project.
The future of behavioural research?
The future of behavioural science may well lie online. Online research gives us the ability to reach more diverse participants, groups that may have previously been particularly difficult to recruit, and to collect larger samples.
Asking people to do experiments in the comfort of their own home, on their own devices, gives us the opportunity to collect data that is not influenced by experimenter-participant interactions. In fact, one could even argue that this setting is more "natural" than the lab.
For online research to be successful, however, we need to be flexible and creative: our tasks will inevitably need to be adapted. It is important that we as a field find ways to adapt our standard paradigms and standardised testing batteries for use with online populations.
To ensure high data quality, we need to invest time and effort into experimental design, as well as show appreciation to our participants by paying them fairly.