Online vs lab-based behavioral research: Letting go of the illusion of control

Some researchers have resisted the move to online research over the last few years, but the COVID crisis has forced many to switch to online methods. However, one question keeps coming up: “Jo, is there a good way to monitor the participant environment when testing remotely?”

The fear of losing control of the testing environment when taking research online is real, so let’s address it.

But first, let’s look at the benefits of online and lab research to get on the same page. Then we’ll look into what’s possible in terms of environmental monitoring, and what I think may be a better approach.

Benefits of behavioral online research: Speed, reach, scale

  1. Online data collection can be completed at an incredible speed. The tools for online research are now so good that it can take only a few hours to create a study. Long gone are the days of painfully coding both the participant and server-side experience. Couple behavioral science software with any number of participant recruitment services and you can see the data come flying in.
  2. Go large scale and say goodbye to underpowered studies. As you no longer need to sit in the lab with each participant, multiple participants can complete your experiment simultaneously, leading to much larger samples. With experiment software, you can get data from thousands of participants in a day.
  3. Extend your reach and recruit the participants you need. Do you need a more diverse sample? Or a really specific group of participants? Integrate your experimentation platform with a recruitment service like Prolific or SONA and reach groups that you couldn’t have reached in the lab.

Benefits of behavioral lab research: Proxy for participant attention

So with all these benefits, why do we stay in the lab? Control!

As researchers, we like to feel like we’re in control in the lab. We want control over the environment because (1) we’re used to having it and (2) we use it as a proxy for participant attention.

We may fear that moving online removes this sense of control. It seems scary to have to trust that our participants will pay attention to the task we give them, especially when we’re not there to keep things on track. It’s scary to think about all the reasons why we may need to exclude participants and to come up with a list of pre-defined exclusion criteria.

But in reality, these are things we should be thinking about anyway. Perhaps we don’t have full control in the lab after all — perhaps the control is just an illusion. We had control of the environment, but we never had control of participants’ minds.

The illusion of control: The mind is free

When a participant comes into the lab, we can interact with them and watch them complete the task. We can make sure they are in a quiet, distraction-free room, sitting at a sensible workspace.

Yet we cannot control where their attention is focused. They may look like they are paying attention to the task, but perhaps they are daydreaming or just not taking it seriously, and we can often only see this in the data later on in the research process.

Online, we can ask participants to find a quiet space, but we can never be sure they have done so. Again, this is something we wouldn’t necessarily spot until we look at the data.

Environmental monitoring is problematic

Of course, we could, with the consent of our participants, interleave task trials with short bursts of recording the background audio (with the audio zone) and video of the home environment (from the webcam).

However:

  1. Your participants may not like this at all! They could rightly be worried about security, and the very act of asking for this sets up an antagonistic relationship with your participant. I’ve written before about the importance of making participants a research partner and treating them with respect.
  2. Your ethics committee might not like this at all! We’re now collecting personally identifying data, and data that isn’t necessarily relevant to the task. In terms of security, a good rule of thumb is to collect the minimum data possible; this goes against that rule of thumb.
  3. You will now have to watch and listen to all these files (it can’t be automated), and you might regret your choice, especially when there are better and more automated ways to achieve the same results. Read on!

Piloting and pre-registering are the way to go

So, distraction can happen in both lab research and online research… and we want to deal with the issue. The best way to do this is through strong piloting of your study and working out objective exclusion criteria based on data quality. From this, we can pre-register our criteria, strengthening the trust other scientists can have in our work.

You could pilot your study this way: once you’ve designed your participant experience in the testing platform, do some user testing. Get 10 participants to take part while you watch over Zoom. You’ll get incredible feedback about what’s clear and what’s confusing, and this will allow you to make your participant experience better. I know we love quantitative research, but qualitative research has its place, especially when it comes to user testing.

Next, collect a small set of data remotely, and use the performance data to identify objective quantitative exclusion criteria: time spent on the instructions, number of missed trials, maximum and minimum response thresholds. These allow you to objectively exclude trials and participants that behave differently, and which you can assume were distracted at that moment.

Finally, to ensure you aren’t cherry-picking the data, pre-register these objective criteria and then apply them rigorously.
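Once the criteria are pre-registered, applying them can be a small, fully automated script rather than a manual judgment call. Here is a minimal Python sketch; the field names and thresholds are hypothetical stand-ins for the values you would derive from your own pilot data.

```python
# Sketch: applying pre-registered exclusion criteria to task data.
# All field names and thresholds below are hypothetical examples.

CRITERIA = {
    "min_instruction_time": 10.0,  # seconds spent reading instructions
    "max_missed_trials": 5,        # trials with no response at all
    "min_rt": 0.2,                 # plausible fastest response time (s)
    "max_rt": 5.0,                 # plausible slowest response time (s)
}

def exclude_trials(trials, criteria=CRITERIA):
    """Drop individual trials with implausible response times."""
    return [t for t in trials
            if criteria["min_rt"] <= t["rt"] <= criteria["max_rt"]]

def include_participant(p, criteria=CRITERIA):
    """Apply participant-level criteria, fixed before analysis."""
    return (p["instruction_time"] >= criteria["min_instruction_time"]
            and p["missed_trials"] <= criteria["max_missed_trials"])

participants = [
    {"id": "p1", "instruction_time": 25.0, "missed_trials": 1,
     "trials": [{"rt": 0.45}, {"rt": 0.12}, {"rt": 0.80}]},
    {"id": "p2", "instruction_time": 3.0, "missed_trials": 9,
     "trials": [{"rt": 0.50}, {"rt": 0.55}]},
]

kept = [p for p in participants if include_participant(p)]
for p in kept:
    p["trials"] = exclude_trials(p["trials"])
```

Because the thresholds are fixed up front and the filter is deterministic, anyone re-running your analysis gets exactly the same inclusions and exclusions.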

Pre-registering elements of our study is something that does give us some control over our research – thinking about these things ahead of data collection and analysis is incredibly important. More insight into maintaining data quality when you can’t see your participants can be found in Jenni Rodd’s BeOnline 2020 lecture.

Level up: Gamify tasks to maximize data quality

I’ve written before about how to harness participant engagement and attention to maximize data quality when testing online. In a nutshell, you harness participant attention by making your task interesting and by engaging participants in your research question.

Top tips include making your participant a research partner and making your task fun. You can even consider gamification — it’s easier than you might think!

The pain of face-to-face testing can be over

Many types of behavioral science research involve working with one participant at a time and bringing them to the lab. Maybe you can book 2 participants per day, so to get a sample of 100 participants, that’s 50 days of testing – and that’s only if every person turns up. Add in weekends and no-shows, and you’re looking at around 2 months of data collection.

Instead, imagine putting your study online and collecting data from 500 participants in one hour. Even if you had to exclude, say, 10% due to poor data quality, that’s still 450 participants in one hour. The amount of time and stress saved is immense!
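The back-of-the-envelope arithmetic above can be made explicit in a few lines of Python. The figures are the illustrative ones from this post, not benchmarks:

```python
# Rough throughput comparison: lab vs online data collection.
# Figures are the illustrative ones used in the text above.

target_n = 100            # desired sample size for the lab study
lab_per_day = 2           # participant slots you can book per day
lab_days = target_n / lab_per_day
lab_weeks = lab_days / 5  # testing on weekdays only, ignoring no-shows

online_n = 500            # participants collected online in ~1 hour
exclusion_rate = 0.10     # assume 10% excluded for poor data quality
usable_n = online_n * (1 - exclusion_rate)

print(f"Lab: {lab_days:.0f} testing days (~{lab_weeks:.0f} working weeks)")
print(f"Online: {usable_n:.0f} usable participants in about an hour")
```

Even before accounting for no-shows, the lab route costs roughly ten working weeks for a sample the online route delivers in about an hour.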

Many PhD students are funded using public funds, so this time saving is also a cost saving, and it allows them to focus on better experiment design or on tasks that will benefit their future research objectives.

The flexibility embedded in online research also allows for a more representative sample. Face-to-face lab research will often miss participants who are unable to attend the lab during the working day. Going online allows people to complete your study at a time that suits them, meaning you can reach participants who would otherwise not have been included.

Unnatural behavior in artificial situations

From another perspective, maybe too much control over participants is a bad thing – we put participants in an artificial situation, one that may be very new to them, and then sit and watch them complete a task. This may mean we are no longer getting a measure of “natural” human behavior, but of how they respond in unfamiliar circumstances.

A participant completing a task online cannot have their behavior altered by our presence in the same way it could in the lab. In fact, in real life we rarely do one task in isolation – we often need to focus on one thing amid distractions, and therefore research completed by participants at home may be more reflective of a real-world situation.

Ultimately, we are interested in how humans behave in real life. Real life is messy! It’s noisy! And it’s often chaotic.

If you find an effect that works in a quiet and clean lab, what does that tell you about the real world? If you can find an effect that works in a messy and noisy situation, it’s far more likely to replicate in other real-world situations. So, lean into your lack of control over the testing environment; it might even make your research more robust.

Going back to the lab? Use what you learned

In the lab, we default to controlling the environment in an attempt to harness attention. When we take research online, we can’t control the environment, and so we’ve learned to better harness attention and objectively detect poor task attention.

Now it’s time to take these approaches back to the lab if you test onsite. Since data quality is driven by participant engagement and attention, you can simply use the same approaches that we use online:

  1. Make your participant a research partner, not a cog
  2. Make your task interesting and engaging
  3. Pre-register objective measures of poor data quality and use them to exclude trials and participants

That way you don’t have to control the environment to measure a proxy for attention. You’ve learned how to harness and assess participant attention directly — both online and in the lab.

Strong benefits are becoming more evident

As researchers, we were all looking forward to the time when we could go back onto campuses and into labs safely. Yet the illusion of control in face-to-face lab testing is being shattered, and the strong benefits of online research are becoming more evident.

Online research tools allow us to conduct research faster, at a larger scale, and with greater reach, which in turn gives us greater confidence in our results. Online research is here to stay.

Not already online? Why not? Users overwhelmingly report that it’s easier than they expected. We offer a best practice guide to online research as well as weekly onboarding webinars so that researchers can hit the ground running. See you there!

Jo Evershed

Jo is the CEO and co-founder of Cauldron and Gorilla. Her mission is to provide behavioural scientists with the tools needed to improve the scale and impact of the evidence-based interventions that benefit society.