The lab experience

Traditionally, most behavioural research is done in laboratory settings. As a PhD student or a research assistant, you spend time designing flyers to hang up in your department, or use your university's participant pool to recruit people to take part in your experiment. You then painstakingly schedule individual experiment sessions for each of your participants and invite them to your university building to do some tasks on a computer for an hour or so, quite possibly in some tiny cubicle in a dingy basement.

On top of that, you have to spend that same time in that same dingy basement, usually doing nothing more productive than making small talk about the weather, typing in participant IDs, and perhaps starting up a new experiment program every now and then. And that's if you're lucky and your participant actually shows up.

I've done my fair share of lab-based data collection and I have to say, it really is exhausting. It's a massive time commitment from the researcher; you can easily spend months on end going through the recruitment-scheduling-testing cycle.

Apart from the researcher's time, lab-based data collection is also resource-intensive for your university. They need to provide enough rooms and equipment for all the active researchers to be able to run their experiments, and – as an experimenter – you often find yourself competing for those spaces with your colleagues. This competition can get particularly tiresome during prime 'testing season', when, sadly, it is often the senior researchers and prestigious labs that win.

Lab-based research has another big problem: its participants. What population do you think is the easiest to recruit for experiments that are run in the dingy psychology department basement? That's right, people who are already physically in the department. It's no surprise that much of the classic literature in psychology is based on samples of university students. They are there, they are willing, you can bribe them with course credits – it's just so convenient!

The problem with this approach is clearly that the typical undergraduate student tends to be quite different to the population that we as psychologists usually want to make inferences about. For starters, they might be particularly homogeneous when it comes to socioeconomic status, and often also race and gender. It's commendable when researchers try to widen their participant pool to community samples, e.g. by putting up their flyers somewhere other than the student union. However, when we're recruiting people for lab-based research we will always automatically restrict our sample to those who are both physically able and willing to come to the university.

With the replication crisis looming over all our heads, the field has quite rightly recognised that we need to run our experiments on bigger and more diverse samples. Sadly, this is often not realistic with lab-based research – especially if you're the sole experimenter on your project. There are only so many hours in the day for testing!

So, what can be done about this?

Online research

The last few years have seen a change in where research and data collection happens. Internet browsers and people's own electronic devices are now powerful enough to display complex stimuli and can measure response times remotely with a great degree of accuracy. These technological advances have allowed researchers in the behavioural sciences to shift from lab-based to web-based research.

Recruitment platforms like MTurk and Prolific Academic allow researchers to advertise their experiments online and attract participants far away from the university campus, who can then do the tasks on their own device and in their own time.

It really seems like a no-brainer:

  • Running experiments online saves the researcher valuable time: there is no need to schedule individual sessions, you can easily test multiple people at the same time, and you can use the time it takes to collect the data in more productive ways.
  • Online research saves university resources, as there is no need to provide as many rooms and computers specifically for data collection purposes. More room for coffee machines in psychology departments!
  • Recruiting participants online allows us to diversify our participant pool and collect larger sample sizes to increase the power of our analyses.

The challenges of online research

It is worth noting that nothing is perfect. Like lab-based research, online research has its challenges. When I first started thinking about running my own experiments online, I admit that I had doubts. However, I have since found that many of my initial concerns either applied equally to research conducted in the lab or were avoidable by making sensible decisions at the experiment design stage.

Does the absence of an experimenter lead to lower-quality data?

By its nature, online research does not offer the same control over the experiment environment as the lab does. Without an experimenter there to check, you may ask yourself whether your participants are in a reasonably quiet environment, and whether they really are doing the task by themselves. Are they listening to distracting music or podcasts while they're supposed to be concentrating on your task? Are they checking their email? Are they having a conversation with someone else in the room at the same time? How can we be sure that online participants take the tasks seriously, and don't just, say, press buttons at random? It is good to think about these issues before starting an online experiment. Luckily, a lot of these worries can be resolved, and there are various ways of addressing these questions.

Can I really trust my participants?

Are my "participants" real people? The infiltration of Amazon's Mechanical Turk participant database with bots recently led to a big scandal; see Max Hui Bai's blog post about the issue.

So how can we tell real people from bots? Can we trust that online participants truly fit our participation criteria? What if people lied about, e.g., their age or other demographic details in order to be able to participate in our study? These are legitimate concerns, and they are important questions that need to be considered by anyone wanting to collect their data online.

The truth is, you can never be 100% sure that your online participants took your task as seriously as you would have liked them to. However, similar concerns apply to your participants in the lab. Can you always be 100% sure the participants who step into your dingy testing basement are as old as they say they are, or that they fulfil your demographic inclusion criteria? Even with you watching over them, can you always be sure that they're concentrating on the task, rather than being lost in their own thoughts?

In one of my eye-tracking studies I used a remote tracker without a headrest, so I made sure to remind participants multiple times to remain in approximately the same position in front of the computer screen and not to move their head too much during the task. One participant went on to grab her water bottle from under the table and had a drink THREE TIMES during her session, even after I had reminded her of the instructions and re-calibrated the tracker twice. After the second time, I knew her data was going to be useless, so I just waited for the task to finish and sent her home. The point is: sometimes, you have to deal with shoddy data, whether that's in the lab or online.

There are many ways to maximise the likelihood that your remotely collected data will be of good quality. Firstly, I'd recommend making good use of tools that will make your life much, much easier. Secondly, by making sensible adjustments to your tasks you can optimise your data quality and increase your chances of catching any bots in your sample.

The ultimate proof is in the quality of online behavioural research that we can see being published. Many scientists are taking their research online, collecting data faster and accelerating their research. You can read about some examples here.

10 tips to optimise the quality of the data you collect online:

1. Adapt your paradigm

While many paradigms can be taken online with no changes, some – particularly those with a memory assessment – might need some tweaking. When we collect our data remotely, we cannot monitor what the participant does during the task. This doesn't mean that data collection is impossible; it just means that we need to think carefully about how we can adapt the tasks we use.

For example, one of my experiments included a short-term memory paradigm. In the lab, I would have been able to prevent participants from writing down the information that they were supposed to hold in memory – this was of course impossible when I ran the task online. So instead, the encoding phase included a task element where participants needed to use their mouse to click on images on the screen. I could then check in my data whether people used the mouse to click. If they did use the mouse, I inferred that they couldn't have taken notes with their hand during the encoding phase. This is of course quite a crude method, but it illustrates that we need to think creatively to make our tasks suitable for online use, if the task demands it.
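In code, that crude check might look something like this – a minimal pure-Python sketch, where the record layout (participant, phase, clicks) is entirely made up for illustration and would need adapting to whatever your experiment platform actually exports:

```python
# Sketch of the crude "did they click?" exclusion described above.
# The field names here are hypothetical, not any platform's real export format.

def exclude_possible_note_takers(trials):
    """Drop all trials from participants who never clicked the mouse
    during the encoding phase (a free hand could have been taking notes)."""
    clicks = {}
    for t in trials:
        if t["phase"] == "encoding":
            clicks[t["participant"]] = clicks.get(t["participant"], 0) + t["clicks"]
    flagged = {p for p, n in clicks.items() if n == 0}
    return [t for t in trials if t["participant"] not in flagged]

# Tiny demonstration with made-up data:
trials = [
    {"participant": "p1", "phase": "encoding", "clicks": 3},
    {"participant": "p1", "phase": "recall",   "clicks": 0},
    {"participant": "p2", "phase": "encoding", "clicks": 0},
    {"participant": "p2", "phase": "recall",   "clicks": 0},
]
kept = exclude_possible_note_takers(trials)
# p2 never clicked during encoding, so all of p2's trials are dropped.
```

The inference is exactly as crude as described in the text – it only rules out note-taking with the mouse hand – but it shows how a task tweak at design time becomes a simple exclusion rule at analysis time.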

2. Use a good experiment platform

If you're like me and started your PhD without any programming experience, and are, frankly, a bit scared of learning how to deal with variables, functions and loops, then you'll save yourself a ton of time and frustration by finding a good experiment platform.

I started using Gorilla, which was super intuitive and allowed me to easily set up my experiments and tinker with them – all mostly without having to ask for help. This gave me more time for thinking about appropriate ways to adapt my tasks for online data collection.

3. Build in checks for data quality

One of the main concerns about online research is that your experiment will be completed by participants who aren't really paying attention. However, you shouldn't worry too much: there are a variety of ways to trip up bots and inattentive participants during your experiment, so that you can later discard their data. For example:

  • You can set a maximum or minimum amount of time for individual tasks or questionnaires if you've got a reasonable idea of how long people should be spending on them. With Gorilla, you can check the time your participants spent reading your instruction screen, for example. This way, you can exclude data from participants who clearly didn't read the instructions properly. Or you could check your participants' overall time spent on a task, and discard those who spent an unreasonably long time completing a section, under the assumption that maybe they took a tea break that they weren't supposed to. Similarly, if you've got a good idea of how long participants should be spending on individual trials, you can use average response times as exclusion criteria.
  • You can also include specific 'attention check' trials within your experiment. These might be particularly easy questions, and you could use these trials to exclude anyone who got them wrong.
  • You could also include some auditory filler trials to ensure people wear their headphones for the duration of the task if you wanted them to be in a quiet environment. Instruct them to put on their headphones at the beginning and have them respond in some way to the audio to check that they were in fact listening.
  • If you want to make sure that participants' demographic data are legitimate, you can ask the same question twice and check for any inconsistencies in responses.

Again, be creative and use the type of 'bot catch' that will work best for your particular paradigm.
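To make the timing-based exclusions concrete, here is a minimal sketch of how such checks might be combined into a single pass/fail decision per session. Every threshold and field name here is invented for illustration; in practice you would calibrate the cut-offs against your own pilot data:

```python
# Sketch of combined timing and attention-check exclusions.
# Thresholds and field names are illustrative assumptions only.

MIN_INSTRUCTION_SECONDS = 5   # nobody genuinely reads the instructions this fast
MAX_TASK_MINUTES = 30         # suspiciously long: perhaps an unsanctioned tea break

def passes_quality_checks(session):
    """Return True only if a session clears all three screening rules."""
    if session["instruction_seconds"] < MIN_INSTRUCTION_SECONDS:
        return False   # skimmed (or skipped) the instruction screen
    if session["task_minutes"] > MAX_TASK_MINUTES:
        return False   # long unexplained pause mid-task
    if not session["attention_checks_passed"]:
        return False   # failed a deliberately easy catch trial
    return True

sessions = [
    {"id": "a", "instruction_seconds": 42, "task_minutes": 12, "attention_checks_passed": True},
    {"id": "b", "instruction_seconds": 2,  "task_minutes": 10, "attention_checks_passed": True},
    {"id": "c", "instruction_seconds": 30, "task_minutes": 55, "attention_checks_passed": True},
]
kept = [s["id"] for s in sessions if passes_quality_checks(s)]
# only session "a" survives all three checks
```

Keeping the rules in one small function like this also makes it easy to report exactly how many participants each criterion excluded, which reviewers tend to ask about.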

4. Make your experiment exciting

To ensure that your participants happily complete your whole experiment, I strongly recommend making your task as short as possible, and as fun as possible. Part of this is making your experiment look nice and professional, which is pretty much a given if you're using Gorilla as your experiment platform.

I have found that participants really appreciate feedback about their performance; e.g. you could gamify your task by letting participants collect points for correct answers, or give them their overall score at the end. With Gorilla, for example, you can easily set up feedback zones that you can individualise with your own graphics: click here to have a look at an example.

5. Pilot, pilot, pilot!

Figuring all of this out relies on trial and error. You will need to find the appropriate adjustments to your task that still ensure you get the kind of data that you want, and you will need to test whether your 'attention checks' actually work. A colleague of mine found that even legitimate participants tended to fail her check trials for some unknown reason. The way to iron out those kinks is to try out your experiment first.

Invest time in experiment design and do a lot – and I mean a lot! – of piloting before you go 'live' with your experiment!

6. Set up your data analysis

Once you've piloted your study with a small sample, set up your data analysis. This will make sure that you've got all the metadata set up correctly for easy analysis. Excel pivot tables are super powerful and tremendously useful in many careers – make them your friend.
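If you would rather script the pivot than build it in a spreadsheet, the same idea – say, mean response time per participant and condition – takes only a few lines of Python. The column names below are hypothetical stand-ins for whatever your exported trial data actually contains:

```python
# A pure-Python sketch of the pivot-table idea: mean response time
# per (participant, condition) cell. Field names are hypothetical.
from collections import defaultdict

def pivot_mean_rt(trials):
    """Aggregate trial-level rows into a participant x condition table of mean RTs."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for t in trials:
        key = (t["participant"], t["condition"])
        sums[key] += t["rt_ms"]
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

trials = [
    {"participant": "p1", "condition": "congruent",   "rt_ms": 400},
    {"participant": "p1", "condition": "congruent",   "rt_ms": 440},
    {"participant": "p1", "condition": "incongruent", "rt_ms": 520},
]
table = pivot_mean_rt(trials)
# table[("p1", "congruent")] == 420.0
```

Running this on your pilot data is a quick way to confirm that every condition label, counterbalancing code and participant ID comes out of the platform exactly as your analysis expects – which is the whole point of setting the analysis up before full data collection.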

7. Choose a fair and reliable platform for participant recruitment

During my PhD, I linked my Gorilla experiments to the Prolific Academic platform for participant recruitment. So far, Prolific has been spared any scandals about bots in their database, and my personal experience suggests that the people who are signed up as members are genuine and generally quite eager to perform experimental tasks diligently.

Members of Prolific provide their demographic information when they first sign up to the platform, so I was able to directly target only those who were eligible for my experiment, without having to worry about them lying just to be able to take part in my study.

Prolific's large database meant that I could collect data from over 100 people within a day.

8. Pay your participants fairly

It's important to ensure that your participants are rewarded appropriately for their time – not just for the obvious ethical reasons, but also because you are much more likely to get good quality data from people who are satisfied and feel like their time is valuable to you.

I have paid my online participants at the same hourly rate that my university recommends for lab-based participants.

9. Run your experiment in batches

This is a major, major tip to avoid any data loss. Rather than setting your recruitment target to the maximum immediately, I recommend recruiting your participants in batches of, say, about 20 at a time.

It's also sensible to do a brief data quality check once you've run each batch (without peeking at statistics, of course!) so that you have a better overview of how many more datasets you still need to collect.
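A per-batch check can be as simple as counting usable datasets and working out what is left to collect. Here is a toy sketch, where the `usable` flag and the target of 100 are invented placeholders for whatever quality criteria and sample size you have actually preregistered:

```python
# Sketch of batch-wise recruitment tracking. The "usable" flag and
# the target N of 100 are illustrative assumptions.

TARGET_N = 100

def remaining_after_batches(batches, target=TARGET_N):
    """Count usable sessions across all batches so far and
    return how many more participants still need recruiting."""
    usable = sum(1 for batch in batches for session in batch if session["usable"])
    return max(target - usable, 0)

batches = [
    [{"usable": True}] * 18 + [{"usable": False}] * 2,  # batch 1: 18 of 20 usable
    [{"usable": True}] * 20,                            # batch 2: all 20 usable
]
remaining = remaining_after_batches(batches)
# 38 usable so far, so 62 participants still to recruit
```

Tracking it this way means an unexpectedly high exclusion rate in an early batch shows up immediately, while there is still budget and time to adjust, rather than after the full sample has been spent.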

10. Final thoughts

I am by no means an expert in online research, but I hope that these tips will be helpful for anyone planning their first (or even their 100th) online study. For more information about all things online research, you can check out Jenni Rodd's fantastic article, and watch videos from the BeOnline conference.

By running three of the five experiments in my thesis online, I saved a lot of time and learnt a lot about appropriate experimental design. It meant that I was able to run more experiments than I had originally planned and investigate some interesting but tangential research questions. It also meant that I could be involved in the supervision of undergraduate students, who were able to easily set up their own experiments and collect their own data within the time frame of a typical undergraduate research project.

 

The future of behavioural research?

The future of behavioural science may well lie online. Online research gives us the ability to reach more diverse participants, including groups that may have previously been particularly difficult to recruit, and to collect larger samples.

Asking people to do experiments in the comfort of their own home, on their own devices, gives us the opportunity to collect data that is not influenced by experimenter-participant interactions. In fact, one could even argue that this setting is more "natural" than the lab.

For online research to be successful, however, we need to be flexible and creative – our tasks will inevitably need to be adapted. It is important that we as a field find ways to adapt our standard paradigms and standardised testing batteries for use with online populations.

To ensure high data quality, we need to invest time and effort into experimental design, as well as show appreciation to our participants by paying them fairly.