Embracing online research during your PhD



The lab experience

Traditionally, most behavioural research is done in laboratory settings. As a PhD student or a research assistant, you spend time designing flyers to hang up in your department, or use your university’s participant pool to recruit people to take part in your experiment. You then painstakingly schedule individual experiment sessions for each of your participants and invite them to your university building to do some tasks on a computer for an hour or so, quite possibly in some tiny cubicle in a dingy basement.

You also have to spend that same hour in that same dingy basement, usually doing nothing more productive than making small talk about the weather, typing in participant IDs, and perhaps starting up a new experiment programme every now and then. And that’s if you’re lucky and your participant actually shows up.

I’ve done my fair share of lab-based data collection and I have to say, it really is exhausting. It’s a massive time commitment from the researcher; you can easily spend months on end going through the recruitment-scheduling-testing cycle.

Apart from the researcher’s time, lab-based data collection is also resource-intensive for your university. They need to provide enough rooms and equipment for all the active researchers to be able to run their experiments, and – as an experimenter – you often find yourself competing for those spaces with your colleagues. This competition can get particularly tiresome during prime ‘testing season’, when, sadly, it is often the senior researchers and prestigious labs that win.

Lab-based research has another big problem: its participants. What population do you think is the easiest to recruit for experiments that are run in the dingy psychology department basement? That’s right, people who are already physically in the department. It’s no surprise that much of the classic literature in psychology is based on samples of university students. They are there, they are willing, you can bribe them with course credits – it’s just so convenient!

The problem with this approach is clearly that the typical undergraduate student tends to be quite different to the population that we as psychologists usually want to make inferences about. For starters, they might be particularly homogeneous when it comes to socioeconomic status, and often also race and gender. It’s commendable when researchers try to widen their participant pool to community samples, e.g. by putting up their flyers somewhere other than the student union. However, when we’re recruiting people for lab-based research we will always automatically restrict our sample to those who are both physically able and willing to come to the university.

With the replication crisis looming over all our heads, the field has quite rightly recognised that we need to run our experiments on bigger and more diverse samples. Sadly, this is often not realistic with lab-based research – especially if you’re the sole experimenter on your project. There’s only so many hours in the day for testing!

So, what can be done about this?

Online research

The last few years have seen a change in where research and data collection happens. Internet browsers and people’s own electronic devices are now powerful enough to display complex stimuli and can measure response times remotely with a great degree of accuracy. These technological advances have allowed researchers in the behavioural sciences to shift from lab-based to web-based research.

Recruitment platforms like MTurk and Prolific Academic allow researchers to advertise their experiments online and attract participants far beyond the university campus, who can then do the tasks on their own devices and in their own time.

It really seems like a no-brainer:

  • Running experiments online saves the researcher valuable time: there is no need to schedule individual sessions, you can easily test multiple people at the same time, and you can spend the data-collection period doing more productive things.
  • Online research saves university resources as there is no need to provide as many rooms and computers specifically for data collection purposes. More room for coffee machines in psychology departments!
  • Recruiting participants online allows us to diversify our participant pool and collect larger sample sizes to increase the power of our analyses.

The challenges of online research

It is worth noting that nothing is perfect. Like lab-based research, online research has its challenges. When I first started thinking about running my own experiments online, I admit that I had doubts. However, I have since found that many of my initial concerns either equally applied to research conducted in the lab or were avoidable by making sensible decisions at the experiment design stage.

Does the absence of an experimenter lead to lower-quality data?

By its nature, online research does not offer the same control over the experiment environment as the lab does. Without an experimenter there to check, you may wonder whether your participants are in a reasonably quiet environment, and whether they really are doing the task by themselves. Are they listening to distracting music or podcasts while they’re supposed to be concentrating on your task? Are they checking their email? Are they having a conversation with someone else in the room at the same time? How can we be sure that online participants take the tasks seriously, and don’t just, say, press buttons at random? It is good to think about these issues before starting an online experiment; luckily, there are various ways of addressing them.

Can I really trust my participants?

Are my “participants” real people? The infiltration of Amazon’s Mechanical Turk participant database with bots recently led to a big scandal; see Max Hui Bai’s blog post about the issue.

So how can we tell real people from bots? Can we trust that online participants truly fit our participation criteria? What if people lied about their age or other demographic details in order to be eligible for our study? These are legitimate concerns, and they need to be considered by anyone wanting to collect data online.

The truth is, you can never be 100% sure that your online participants took your task as seriously as you would have liked. However, similar concerns apply to your participants in the lab. Can you always be 100% sure the participants who step into your dingy testing basement are as old as they say they are, or that they fulfil your demographic inclusion criteria? Even with you watching over them, can you always be sure that they’re concentrating on the task, rather than being lost in their own thoughts?

In one of my eye-tracking studies I used a remote tracker without a headrest, so I made sure to remind participants multiple times to remain in approximately the same position in front of the computer screen and not to move their head too much during the task. One participant went on to grab her water bottle from under the table and had a drink THREE TIMES during her session, even after I had reminded her of the instructions and re-calibrated the tracker twice. After the second time, I knew her data was going to be useless, so I just waited for the task to finish and sent her home. The point is: sometimes, you have to deal with shoddy data, whether that’s in the lab or online.

There are many ways to maximise the likelihood that your remotely collected data will be of good quality. Firstly, I’d recommend making good use of tools that will make your life much, much easier. Secondly, by making sensible adjustments to your tasks you can optimise your data quality and increase your chances of catching any bots in your sample.

The ultimate proof is in the quality of the online behavioural research now being published. Many scientists are taking their studies online, collecting data faster and accelerating their research.

10 tips to optimise the quality of the data you collect online:

  1. Adapt your paradigm

    While many paradigms can be taken online with no changes, some, particularly those with a memory component, might need tweaking. When we collect our data remotely, we cannot monitor what the participant does during the task. This doesn’t mean that data collection is impossible; it just means that we need to think carefully about how we can adapt the tasks we use.

    For example, one of my experiments included a short-term memory paradigm. In the lab, I would have been able to prevent participants from writing down the information that they were supposed to hold in memory – this was of course impossible when I ran the task online. So instead, the encoding phase included a task element where participants needed to use their mouse to click on images on the screen. I could then check in the data whether people had used the mouse to click. If they had, I inferred that they couldn’t also have been taking notes by hand during the encoding phase. This is of course quite a crude method, but it illustrates that we need to think creatively to make our tasks suitable for online use, if the task demands it.
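
    As a rough illustration, here’s a minimal sketch of what that kind of post-hoc check could look like in Python with pandas; the file and column names are hypothetical and would depend on how your experiment platform exports its data:

    ```python
    import pandas as pd

    # Hypothetical export: one row per encoding trial, with a count of the
    # mouse clicks recorded during that trial (column names are assumptions).
    df = pd.read_csv("encoding_trials.csv")

    # Total clicks per participant across all encoding trials.
    clicks = df.groupby("participant_id")["clicks_during_encoding"].sum()

    # Participants who never clicked may have had a hand free to take notes,
    # so flag them for exclusion - a deliberately crude heuristic.
    flagged = clicks[clicks == 0].index.tolist()
    print(f"Flagged {len(flagged)} participants: {flagged}")
    ```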

  2. Use a good experiment platform

    If you’re like me and started your PhD without any programming experience, and are, frankly, a bit scared of learning how to deal with variables, functions and loops, then you’ll save yourself a ton of time and frustration by finding a good experiment platform.

    I started using Gorilla, which was super intuitive and allowed me to easily set up my experiments and tinker with them – all mostly without having to ask for help. This gave me more time for thinking about appropriate ways to adapt my tasks for online data collection.

  3. Build in checks for data quality

    One of the main concerns about online research is that your tasks will be completed by participants who aren’t really paying attention. However, you shouldn’t worry too much: there are a variety of ways to trip up bots and inattentive participants during your experiment, so that you can later discard their data. For example:

    • You can set a maximum or minimum amount of time for individual tasks or questionnaires if you’ve got a reasonable idea of how long people should be spending on them. With Gorilla, you can check the time your participants spent reading your instruction screen, for example. This way, you can exclude data from participants who clearly didn’t read the instructions properly. Or you could check your participants’ overall time spent on a task, and discard those who spent an unreasonably long time completing a section, under the assumption that maybe they took a tea break that they weren’t supposed to. Similarly, if you’ve got a good idea of how long participants should be spending on individual trials, you can use average response times as exclusion criteria.
    • You can also include a specific ‘attention check’ trial within your experiment. These might be particularly easy questions, and you could use these trials to exclude anyone who got them wrong.
    • If you want participants to be in a quiet environment, you could include some auditory filler trials to ensure they wear their headphones for the duration of the task. Instruct them to put on their headphones at the beginning and have them respond in some way to the audio to check that they were in fact listening.
    • If you want to make sure that participants’ demographic data are legitimate, you can ask the same question twice and check for any inconsistencies in responses.

    Again, be creative and use the type of ‘bot catch’ that will work best for your particular paradigm.
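
    To make a few of these checks concrete, here is a minimal sketch in Python with pandas, assuming a per-participant summary table; every column name and threshold below is a placeholder you would calibrate against your own pilot data:

    ```python
    import pandas as pd

    # Hypothetical per-participant summary exported by your platform.
    df = pd.read_csv("participant_summary.csv")

    # Timing checks: implausibly fast or slow overall completion times.
    too_fast = df["total_task_seconds"] < 180
    too_slow = df["total_task_seconds"] > 3600

    # Instruction check: dismissed the instruction screen within seconds.
    skipped_instructions = df["instruction_screen_seconds"] < 5

    # Attention checks: failed one or more of the deliberately easy trials.
    failed_attention = df["attention_checks_correct"] < df["attention_checks_total"]

    # Consistency check: the same demographic question asked twice.
    inconsistent_age = df["age_first_ask"] != df["age_second_ask"]

    df["exclude"] = (too_fast | too_slow | skipped_instructions
                     | failed_attention | inconsistent_age)
    clean = df[~df["exclude"]]
    print(f"Kept {len(clean)} of {len(df)} participants")
    ```

    Whatever thresholds you settle on, it’s worth fixing and documenting them before you look at your main outcome measures, so your exclusions can’t be accused of being post hoc.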

  4. Make your experiment exciting

    To ensure that your participants happily complete your whole experiment, I strongly recommend making your task as short as possible, and as fun as possible. Part of this is making your experiment look nice and professional, which is pretty much a given if you’re using Gorilla as your experiment platform.

    I have found that participants really appreciate feedback about their performance; for example, you could gamify your task by letting participants collect points for correct answers, or give them their overall score at the end. With Gorilla, you can easily set up feedback components that you can individualise with your own graphics.

  5. Pilot, pilot, pilot!

    Figuring all of this out relies on trial and error. You will need to find the appropriate adjustments to your task that still ensure you get the kind of data that you want, and you will need to test whether your ‘attention checks’ actually work. A colleague of mine found that even legitimate participants tended to fail her check trials for some unknown reason. The way to iron out those kinks is to try out your experiment first.

    Invest time in experiment design and do a lot – and I mean a lot! – of piloting before you go ‘live’ with your experiment!

  6. Set up your data analysis early

    Once you’ve piloted your study with a small sample, set up your data analysis. Doing this early will confirm that you’ve got all the metadata configured correctly for easy analysis. Excel Pivot Tables are super powerful and tremendously useful in many careers; make them your friend.
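
    If you’d rather script this step, pandas offers a direct analogue of Excel Pivot Tables through its pivot_table function; here’s a minimal sketch with hypothetical column names:

    ```python
    import pandas as pd

    # Hypothetical trial-level export: one row per trial (assumed columns).
    trials = pd.read_csv("trial_data.csv")

    # Mean response time and accuracy per participant and condition -
    # the same summary you would otherwise build as an Excel Pivot Table.
    summary = trials.pivot_table(
        index="participant_id",
        columns="condition",
        values=["response_time_ms", "correct"],
        aggfunc="mean",
    )
    print(summary.head())
    ```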

  7. Choose a fair and reliable platform for participant recruitment

    During my PhD, I linked my Gorilla experiments to the Prolific Academic platform for participant recruitment. So far, Prolific has been spared any scandals about bots in their database, and my personal experience suggests that the people who are signed up as members are genuine and generally quite eager to perform experimental tasks diligently.

    Members of Prolific provide their demographic information when they first sign up to the platform, so I was able to directly target only those who were eligible for my experiment, without having to worry about them lying only to be able to take part in my study.

    Prolific’s large database meant that I could collect data from over 100 people within a day.

  8. Pay your participants fairly

    It’s important to ensure that your participants are rewarded appropriately for their time – not just for the obvious ethical reasons, but also because you are much more likely to get good quality data from people who are satisfied and feel like their time is valuable to you.

    I have paid my online participants at the same hourly rate that my university recommends for lab-based participants.

  9. Run your experiment in batches

    This is a major, major tip for avoiding data loss. Rather than setting your recruitment target to the maximum immediately, I recommend recruiting your participants in batches of, say, 20 at a time.

    It’s also sensible to do a brief data quality check once you’ve run each batch (without peeking at statistics of course!) so that you have a better overview of how many more datasets you still need to collect.
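
    As a minimal sketch of that between-batch bookkeeping, assuming an ‘exclude’ flag like the one computed in the tip 3 sketch (the file name and numbers below are placeholders):

    ```python
    import pandas as pd

    TARGET_N = 100  # placeholder: your planned final sample size

    # Hypothetical summary of the latest batch, with an "exclude" column
    # produced by quality checks like those sketched under tip 3.
    batch = pd.read_csv("batch_03_summary.csv")
    usable = batch[~batch["exclude"]]

    # Quality bookkeeping only - no condition means or tests yet!
    collected_so_far = 57  # placeholder: usable datasets from earlier batches
    still_needed = max(TARGET_N - collected_so_far - len(usable), 0)
    print(f"{len(usable)} usable in this batch; {still_needed} still to collect")
    ```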

  10. Final thoughts

    I am by no means an expert in online research, but I hope that these tips will be helpful for anyone planning their first (or even their 100th) online study. For more information about all things online research, you can check out Jenni Rodd’s fantastic article, and watch videos from the BeOnline conference.

    By running three of my five thesis experiments online, I saved a lot of time and learnt a lot about appropriate experimental design. It also meant that I was able to run more experiments than I had originally planned and investigate some interesting but tangential research questions. Finally, it allowed me to help supervise undergraduate students, who were able to easily set up their own experiments and collect their own data within the time frame of a typical undergraduate research project.

The future of behavioural research?

The future of behavioural science may well lie online. Online research gives us the ability to reach more diverse participants, including groups that may previously have been particularly difficult to recruit, and to collect larger samples.

Asking people to do experiments in the comfort of their own home, on their own devices, gives us the opportunity to collect data that is not influenced by experimenter-participant interactions. In fact, one could even argue that this setting is more “natural” than the lab.

For online research to be successful, however, we need to be flexible and creative – our tasks will inevitably need to be adapted. It is important that we as a field find ways to adapt our standard paradigms and standardised testing batteries for use with online populations.

To ensure high data quality, we need to invest time and effort into experimental design, as well as show appreciation to our participants by paying them fairly.
