Gamification in Behavioral Science: An Engaging Prospect for Online Research

The Covid-19 pandemic has forced most behavioral science researchers to transition from in-person and lab-based testing to online research. The sudden explosion in the quantity of online research studies has inevitably affected both participant recruitment and engagement. In this increasingly crowded market space, how can researchers maximize data quality and recruitment?

In the last decade, gamification has been increasingly employed as a tool to promote recruitment, engagement, and learning in a range of fields (e.g., marketing, education, health and productivity tools, and fundamental research). Here, we review the concept of gamification in behavioral science research, outline some of the different ways it is used, and discuss examples where gamification has been employed to great effect in online research.

What Is Gamification?

The gamification trend is believed to have started around 2010; however, the concept of embedding tasks within a game to increase learning and motivation has been around for hundreds of years (Zichermann & Cunningham, 2011). A game is a system in which players engage in a fictional challenge, defined by rules, interactivity, and feedback, which results in a quantifiable outcome often provoking an emotional reaction (Kankanhalli et al., 2012). Given this definition, we can define gamification as the use of these game elements and techniques in non-game contexts (Deterding et al., 2011; Kankanhalli et al., 2012).

A Taxonomy of Gamification

There are many potential ways to gamify a behavioral science task, each of which elicits differing levels of motivation and requires different levels of input from the experimenter. To help researchers decide which game elements are most appropriate in a particular context, Toda et al. (2019) eloquently summarize the different approaches to gamification, using a clear taxonomy of different game elements (e.g., points, badges, leader boards). The authors also suggest a distinction between extrinsic and intrinsic game elements. An extrinsic game element can be perceived clearly and objectively by the user, whereas an intrinsic element is presented more subtly, so that the user is unaware of perceiving it (Toda et al., 2019).

In their taxonomy, Toda et al. (2019) described 21 game elements and their synonyms. These elements were validated by two surveys with experts in the field of gamification in education. This resulted in a taxonomy of five gamification dimensions: Performance/measurement, Social, Ecological, Personal, and Fictional (see Figure 1). Whilst this taxonomy is highly useful for anyone wishing to develop a gamified task, it is worth remembering that it was based solely on expert opinion, not on data from actual users.


Figure 1: The taxonomy of gamification proposed by Toda et al. (2019). Image from Toda et al. (2019).


Performance/measurement

Performance/measurement elements are the most commonly used in gamification. They include rewarding performance by using points, levels, and achievements/badges. There are hundreds of examples where performance measurements have been used to turn different tasks into games, such as DataCamp (gamifying computer programming education), Habitica (gamifying habit formation), and Peloton (gamifying fitness), to name but a few. All of these examples use extrinsic features to motivate participants and provide feedback.

Social

Another common feature in gamification is to compare learning and performance with other users, i.e., the social dimension. The most common way to do this is through leader boards, making users either work hard to catch up with friends or work hard to stay on top and maintain their reputation. Depending on the task, there are plenty of opportunities for social interaction via either competition or cooperation. Towards the end of this article, we will describe the Hive, a fun and powerful game for exploring conformity and diversity across multiple individuals.

Ecological

The ecological dimension relates to the environment implemented in gamification. This dimension includes elements such as chance (manipulating the probability of winning or the size of the prize), time pressure, or rarity of prizes (e.g., the availability of certain Pokémon). Decades of research in the fields of neuroeconomics and value-based decision-making have provided robust neurocomputational models that describe these behaviors (Rangel et al., 2008). Crucially, the emerging field of computational psychiatry is linking these mechanisms to psychological and psychiatric phenotypes (Montague et al., 2012). Therefore, gamification could become a useful tool for clinical diagnostics and treatment. In the case studies section, we will discuss FunMaths, which has used gamification to help children who struggle with arithmetic and understanding number relations.

Personal

The personal dimension is related to the user of the environment; for example, is the game repetitive, or does it stay fresh and novel? Is it simple, or do the tasks challenge the user? Are there pleasing sensations for the user, such as vibrant colors or sounds, to improve their experience? These elements are implicitly rewarding to the user.

Fictional

Lastly, the fictional dimension aims to link the user experience with context through narrative and storytelling elements. In these games, the user is unaware they are learning a skill or performing a task. A number of educational board games using the fictional dimension have been developed over the years (for example, Playing CBT), and more recently, apps are being developed to extract behaviors through naturalistic gameplay (Sea Hero Quest) and even treat behavioral conditions like ADHD (NeuroRacer). We will later discuss Treasure Collector, a game for children that takes a basic psychological paradigm and turns it into an exciting adventure!

Does Gamification Improve Behavioral Science?

In typical settings, experimental tasks and questionnaires may be tedious for participants. They often use simplistic stimuli, are repetitive, and provide no feedback on personal performance or the performance of others (scoring poorly on four of the five gamification dimensions mentioned above). In any experiment or survey, participants' attention wanes over time, thereby increasing the error rate. This is amplified in remote and online testing, where there are likely to be any number of distractions that are not present in the lab. When the experimenter is not present, participants are more likely to drop out if they become bored, increasingly so when experiments require more than one session (Palan & Schitter, 2018). It has been suggested that engaging and rewarding participants through gamification can help solve these problems and therefore increase data quality by increasing attention and motivation.

Bailey et al. (2015) investigated the impact of gamification on survey responses. Here, the authors refer to 'soft gamification', where traditional survey responses were replaced with more interesting tools like dragging and selecting images. We consider this an increase in the personal dimension of Toda et al.'s (2019) taxonomy. Bailey et al. (2015) found that gamification led to richer responses (a significant increase in the number of words used in responding) and participants were engaged for longer. This is just one of many examples that show the benefits of gamifying research.

Looyestyn et al. (2017) conducted a systematic review of online studies employing gamification to investigate the effects of game-based environments on online research. To do so, they looked at different measures of engagement, e.g., the amount of time spent on the program and the number of visits. Taken together, their results suggest that gamification increases engagement in online programs and enhances other outcomes, such as learning and health behavior. However, the authors also suggest that the impact of gamification features reduces over time as the novelty of points, levelling up, and badges wears off.

Looyestyn et al. (2017) cite the example of the gamified app Foursquare, which was hugely successful upon release but failed to retain users after 6–12 months. I am sure we can all think of other examples of websites and apps that were hugely popular at first but failed to retain customers. This suggests that utilizing only performance/measurement elements in gamification will lead to initial spikes but fail to retain customers/participants. This may not be a problem for most behavioral science experiments, where long-term retention is not required; however, we suggest it is worth considering when developing longitudinal/multi-session games.

The systematic review by Looyestyn et al. (2017) provides the strongest evidence to date that gamification significantly increases participant engagement. However, it is also worth highlighting some limitations. First, the positive effect of gamification was not found for all measures of engagement, which casts some doubt on the generalizability of these results. Second, although they began with 1,017 online studies, only 15 studies remained for analysis after the exclusion criteria were applied. This small sample size, and large heterogeneity in terms of population, methods, and outcomes, meant these studies were not directly comparable, and thus it was not possible to conduct a meta-analysis.

To overcome this limitation, future research should provide more standardized testing, measures, and analysis methods in online research. There is some promising work in this direction. In a recent study, Chierchia et al. (2019) provided a battery of novel ability tests to investigate non-verbal abstract reasoning. The battery was validated on adolescents and adults who performed matrix reasoning by identifying relationships between shapes. While non-verbal ability tests are usually protected by copyright, Chierchia et al. (2019) made their battery open access for academic research.

Gamification Case Studies

Games That Improve Learning: FunMaths Gamifies Arithmetic Skills for Children

Dyscalculia is a developmental condition that affects the ability to acquire arithmetical skills, i.e., a 'dyslexia for numbers'. Individuals with dyscalculia lack an intuitive grasp of numbers and their relations. Reports suggest that around 5–7% of children may have developmental dyscalculia (a similar prevalence to developmental dyslexia), and it is estimated that low numeracy skills cost the UK £2.4 billion annually (Butterworth et al., 2011). Butterworth et al. (2011) further propose that bringing the lowest 19.4% of Americans to the minimum level of numeracy would lead to a 0.74% increase in GDP growth.

There are clear economic and social benefits to improving arithmetical skills in the general population. Professor Diana Laurillard (Professor of Learning with Digital Technologies at UCL Institute of Education) developed a series of math games to train math skills in dyscalculic children through simple manipulations of objects.

In one of the games (NumberBeads), children learn about addition, subtraction, and the number line by combining and segmenting strings of beads. Figure 2 shows an example where the target is a chain of two beads. In this instance, a knife is being used to slice the larger chain of beads into chains of two beads. Participants can combine and cut these up as they wish, and when this is done correctly the chain disappears in a pleasing puff of success!

The game continually adapts to the player's ability, building up their knowledge of the number line, fluency with the number line, and understanding of numerals (utilizing the performance and personal gamification dimensions). In an interview, Prof. Laurillard calls it a "constructionist" game, as children "are actually constructing the game themselves." She also suggests that similar games could be developed to train language skills in people with dyslexia. You can read the full interview here.


Figure 2: NumberBeads: use a knife to split beads to match the target.


Games such as these provide a high-quality educational resource at a very low cost. According to Prof. Laurillard, these games have tremendous value, because they provide individualised and enjoyable mathematics tuition to students both with and without dyscalculia. One player said, "I'd play it all day," while a teacher said, "I was absolutely astounded by the work they were doing with this. They were clearly seeing things in a different way." Students clearly enjoy playing these games, which continually stretch and extend them (the personal dimension of the gamification taxonomy).

Games to Increase Motivation: Treasure Collector Gamifies Executive Function Training in Children

We previously mentioned the challenges of attrition for online longitudinal research (Palan & Schitter, 2018). Professor Nikolaus Steinbeis, based at UCL, wanted children (7–10 years) to train for 10 minutes a day for 8 weeks in order to improve executive function. Such a task would have been impossible without gamification.

Children would be training on the Go/No-Go task, which tests attention and response inhibition by asking participants to respond to certain stimuli as fast as possible (Go trials) versus withholding a response to other stimuli (No-Go trials) (see an example here). To keep participants engaged, the classic Go/No-Go task was embedded into a larger narrative of being an explorer (an example of the fictional dimension of gamification).
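The core logic of a Go/No-Go trial is simple, which is part of what makes the task so easy to reskin. As a minimal sketch (the trial proportions and simulated response rates below are illustrative assumptions, not figures from the Treasure Collector study):

```python
import random

def run_gonogo_block(n_trials=20, p_go=0.75, seed=0):
    """Score a block of simulated Go/No-Go trials.

    Each trial is either 'go' (respond) or 'no-go' (withhold a response).
    Responses are simulated here; in a real task they would come from
    the participant's key presses or screen taps.
    """
    rng = random.Random(seed)
    results = {"hit": 0, "miss": 0, "false_alarm": 0, "correct_reject": 0}
    for _ in range(n_trials):
        trial = "go" if rng.random() < p_go else "no-go"
        # Simulated participant: responds on most Go trials,
        # occasionally fails to inhibit on No-Go trials.
        responded = rng.random() < (0.9 if trial == "go" else 0.2)
        if trial == "go":
            results["hit" if responded else "miss"] += 1
        else:
            results["false_alarm" if responded else "correct_reject"] += 1
    return results
```

The four outcome counts (hits, misses, false alarms, correct rejections) are the standard measures of attention and response inhibition, regardless of whether the stimuli are letters on a grey screen or gold in a dragon's lair.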

Figure 3 shows how participants chose their own avatar, which was integrated into the story. The Go/No-Go task was then reskinned in a variety of situations, including when to dig for treasure, when to steal gold from a dragon (Figure 3), or when to drive straight or swerve to avoid ice on the road. The narrative elements and varied gameplay increased compliance and helped deliver quality adaptive training for the research project.


Figure 3: Treasure Collector: executive function training tasks for children.


According to Prof. Bishop, while you can typically get an adult to do around 100 trials of a boring adaptive task, with kids, after three or four trials, they'll say, "Is there much more of this?" This is bad news if you want them to train every day for 8 weeks! And yet, with the Treasure Collector games, Prof. Steinbeis had students train for 10 minutes on the game, four times a week for 8 weeks. Overall, participants completed around 4,000 trials in total, and reported that they still enjoyed the game. It is clear that without gamification this study would have been impossible, and so by employing gamification, a range of developmental research questions becomes possible.

Games That Answer New Research Questions: The Hive and Multiplayer Games

The Hive (developed with Professor Daniel Richardson at UCL) is a research platform for studying how people think, feel, and behave together in groups (Bazazi et al., 2019; Neville et al., 2020). It works as an app that people can access with their smartphone. After logging in, the Hive environment displays a dot that can be dragged around. The coordinates of each dot are recorded, allowing experimenters to analyze trajectories and rest periods in a similar way to experiments utilizing eye or mouse tracking. Each participant sees their own dot, and other participants', moving on the central display. Then, they perform different tasks while monitoring other individuals' decisions, represented by the movements of the other dots (see Figure 4).
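Given timestamped coordinates of the kind the Hive records, trajectory length and rest periods can be summarised with a few lines of analysis. A hedged sketch (the sample format and the rest-speed threshold are illustrative assumptions, not the Hive's actual data schema):

```python
import math

def summarize_trajectory(samples, rest_speed=5.0):
    """Summarise a dot trajectory from (time_s, x, y) samples.

    Returns the total path length (in screen units) and the total
    time (s) spent 'resting', i.e. moving slower than rest_speed
    units per second between consecutive samples.
    """
    path_length = 0.0
    rest_time = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)  # straight-line step distance
        dt = t1 - t0
        path_length += dist
        if dt > 0 and dist / dt < rest_speed:
            rest_time += dt
    return {"path_length": path_length, "rest_time": rest_time}

# Example: the dot sits still for 1 s, then moves 50 units in 1 s.
summary = summarize_trajectory([(0, 0, 0), (1, 0, 0), (2, 30, 40)])
# → {'path_length': 50.0, 'rest_time': 1.0}
```

The same trajectory-and-dwell logic underlies mouse- and eye-tracking analyses, which is why the article draws that comparison.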


Figure 4: Schematic of the experimental set-up and minimal group assignment. Image from Neville et al. (2020).


One of the studies involving the Hive investigates the link between mimicry and self-categorization, attempting to answer the following question: Do we always do what others do and, if not, what are the factors that influence our decisions in a group? (Neville et al., 2020). The experiment has been conducted at multiple public events, such as at the Science Museum in London, with groups of between four and 12 people. Participants are assigned to one of two groups, i.e., red dots or blue dots. Then, they play a series of games involving moving their dots and looking at the choices of the other participants (belonging both to their own group and to the other one).

Overall, the results show that participants are influenced by the movements of the confederate dots that are the same color as their own. The authors conclude that mimicry is affected by in-group/out-group knowledge, i.e., knowledge of whether people belong to the same category as us. The Hive project allows one to study a fundamental question, namely: Do people make decisions differently when they think as individuals or as a crowd? Instead of paying participants to perform a long and boring experiment in a lab, the Hive allows researchers to investigate this issue anywhere, using people's smartphones, without any additional costs, whilst maintaining the precision of lab-based testing.

Will Gamification Influence My Findings?

Our literature review suggests that gamification is effective at increasing participant engagement and retention, and even at increasing data quality in both qualitative and quantitative experiments. However, Bailey et al. (2015) reported a concern that applying gaming mechanics to questions can change the character of the answers and lead to qualitatively different responses. Therefore, to what extent does gamification change behavior?

A common concern for researchers is whether gamification will fundamentally change the outcomes of the task or survey being administered. Will gamifying my research mean the findings are no longer valid? We can boil these questions down to 'do external rewards and motivators change behavior?' The answer to this final question is certainly 'yes'.

A fascinating series of studies by Manohar et al. (2015) investigated the effect of extrinsic reward on the speed-accuracy trade-off, which is supposed to be a fundamental law: as we move faster, we become less accurate. However, monetary incentives break this law: participants become both faster AND more accurate. This is just one of many examples showing that extrinsic reinforcers change behavior. That said, external reinforcers are often present in traditional lab-based behavioral economics studies. Total score bars are commonplace in value-based decision-making studies, yet researchers in the field still argue about the extent to which this biases behavior in line with the assumptions of prospect theory (Kahneman & Tversky, 1979).

Ryan and Deci (2000) distinguish between two different forms of motivation: intrinsic and extrinsic. Intrinsic motivation relates to the individual's satisfaction in performing an activity in and of itself, while extrinsic motivation occurs when the activity is performed to obtain a separate, tangible outcome, e.g., money as a reward. This dichotomy is mirrored by the intrinsic/extrinsic game elements noted in Toda et al.'s (2019) taxonomy. It is therefore likely that employing intrinsic vs extrinsic game elements will affect intrinsic and extrinsic motivation in distinct ways.

Mekler et al. (2017) investigated the effects of individual game elements on intrinsic and extrinsic motivation in an image annotation task. They found that gamification significantly improved extrinsic factors like performance, especially when using leader boards and points, but not intrinsic motivation or competence (the perceived extent of one's own actions as the cause of desired consequences in one's environment). This point was also raised by Looyestyn et al. (2017), who noted that the positive effects of gamification seemed to lessen over time: the performance/measurement dimension of gamification is only effective in the short run.

Looyestyn et al. (2017) suggest that, in order to be successful in the long term, gamified applications should focus on intrinsic, instead of extrinsic, motivation, i.e., focus on the personal and fictional dimensions of gamification. For future applications, it is crucial to design game environments that enhance users' intrinsic motivation to keep them engaged over time, potentially moving more towards games instead of gamification (see below).

Lastly, we wish to suggest the possibility that differences between traditional tasks and games may not be such a bad thing for gamified research. We mostly consider lab-based testing to be the 'ground truth' in psychology and behavioral science. However, lab conditions and tasks can actually be quite artificial. Psychological tasks are often reduced to their most basic elements so that scientists can make accurate inferences about the factors that influence behavior. However, it is often the case that lab-based findings are not effective at predicting behaviors outside the lab (Kingstone et al., 2008; Shamay-Tsoory & Mendelsohn, 2019).

Therefore, even if you do find different results between paradigms run in the lab and gamified versions of tasks, that does not mean that your game-based findings are inherently wrong or less valid. We do not yet know which of these is closer to the 'ground truth'. It may be that games, which are often more natural and more intrinsically motivating, are in fact more relevant to real-world decision-making.

Gamification vs Games in Research

There is a subtle, but important, distinction to be made between using gamification and games in research. Whilst gamification refers to adding game elements to existing tasks, it is also possible to create research games instead of gamifying existing research paradigms. Research games will be intrinsically motivating (and thus, hopefully, maintain engagement over time) and allow for the exploration of more naturalistic behaviors.

Typically, the objective of gamification is to increase motivation and engagement. This is often achieved by using extrinsic motivators such as points, badges, and leader boards (i.e., the performance/measurement dimension), but what is the point of points? We can imagine gamifying reading by stating that each page is a point, thus motivating someone to read more each day to earn more points. Helpfully, books already have points printed on each page (the page numbers), so you have a running total score, but that's probably not what motivates any of us to read.

This approach to gamification ignores the fact that the book itself is intrinsically motivating; to put it another way, a good book doesn't need gamifying. The objective of a game is pleasure or learning a new skill, and therefore the motivation to play it is often intrinsic (i.e., the personal and fictional dimensions). This intrinsic/extrinsic distinction in motivation changes the way behaviors are learned and reinforced.

Most research tasks are designed to test a very specific question, and as such they will only have a limited number of response options that can easily be categorized as correct or incorrect. However, games typically have a larger range of responses, which can lead to improvisation. They also offer the player the opportunity to explore a world, and learning is often implicit and directed by the player rather than by the experimenter. Compared to gamification, games often employ a more constructionist approach that leads to discovery learning.

The FunMaths game is one such example, as participants can achieve their goals in several different ways and are given the opportunity to explore different options. This is different to Treasure Collector, which uses game elements such as a story narrative to increase motivation and engagement for a single, simple task (the Go/No-Go task). Thus, one can argue that Treasure Collector is an example of gamification of the Go/No-Go task, whereas FunMaths is an educational game designed to improve learning. However, when gamification is done well, it should be near impossible to distinguish it from a game.

When using games to investigate naturalistic behaviors, researchers must contend with a wider array of behaviors; statistically, we could refer to this as a larger parameter space. Each decision is not made in isolation, and choices are likely to interact with one another, creating large, multi-factorial designs. Rich datasets like this are perfect for machine learning algorithms, which can help identify which combinations of behaviors best predict outcomes. However, generating meaningful inferences from potentially enormous matrices of behavior combinations requires an even larger number of datapoints, i.e., lots of participants.
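To make the idea concrete, here is a deliberately simple sketch of the kind of classifier that could map per-player behavior summaries onto an outcome label. This is a toy nearest-centroid model with invented feature vectors, not the pipeline of any study discussed here; real analyses would use far richer models and many more datapoints:

```python
import math

def nearest_centroid_fit(X, y):
    """Fit a nearest-centroid classifier: one mean feature vector per class.

    X: list of feature vectors (e.g. per-player behavior summaries),
    y: list of outcome labels to be predicted.
    """
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        # Column-wise mean of all feature vectors with this label.
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Predict the label whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda lab: math.dist(x, centroids[lab]))

# Hypothetical data: two behavioral features per player, two outcome groups.
centroids = nearest_centroid_fit(
    [[0, 0], [1, 1], [10, 10], [11, 11]],
    ["low", "low", "high", "high"],
)
nearest_centroid_predict(centroids, [2, 2])  # → "low"
```

The point of the sketch is the shape of the problem: many behavioral features per participant, an outcome to predict, and an appetite for data that grows with the size of the feature space.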

With traditional lab-based or online testing, this would increase participant costs hugely. However, we have already highlighted that games can be intrinsically motivating and genuinely enjoyable, thus significantly reducing participants' fees (maybe even removing them altogether). For each experiment, there will be a breakeven point where, if you want more than a certain number of participants, it becomes cheaper to invest in developing an exciting game than to use traditional behavioral science paradigms and pay participants for their time.
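The breakeven logic is simple arithmetic. With made-up figures (a hypothetical £20,000 game-development budget against £10 per paid participant; your real costs will differ), the crossover point is:

```python
def breakeven_participants(dev_cost, fee_per_participant):
    """Number of participants above which a game becomes the cheaper option.

    Paying participants costs n * fee_per_participant; an intrinsically
    motivating game costs dev_cost once, regardless of how many play it.
    """
    return dev_cost / fee_per_participant

# Hypothetical figures: a £20,000 game vs £10 per paid participant.
breakeven_participants(20_000, 10)  # → 2000.0
```

Beyond roughly that many participants, under these assumptions, the fixed development cost beats the linearly growing participant fees.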

In the case of Sea Hero Quest, the developers are reported to have recorded data from 4.3 million players, who have played for a total of over 117 years. Collecting 117 years' worth of data via a recruitment service such as Prolific (117 participants, 525,600 minutes each, at £7.50 per hour) would cost over £10.7 million. As tools for making games become cheaper and more accessible, and the need for larger samples gets stronger (i.e., reproducibility), games are going to be an important aspect of scaling up experimental, social, behavioral, and economic research.

Conclusions

Herein, we have reviewed the role of gamification in behavioral science. We have endeavored to define gamification and outline the different elements that can be considered when creating behavioral science games. We have also provided examples of three different behavioral science games (videos of these games and more can be found here). We propose that gamification will increase engagement and retention in online behavioral science studies. However, one must consider whether this will in some way affect the data being collected. Anecdotal evidence from researchers and participants suggests that the benefits of employing gamification and game-based learning far outweigh these concerns.

First published in the BEGuide 2021, which can be accessed here.

For information on how to gamify your research, watch the webinar "Gorilla Presents … Game Builder and Multiplayer".

Joshua Balsters

Dr Joshua Balsters is a psychologist and neuroscientist with a BSc in Psychology and a PhD in Cognitive Neuroscience from Royal Holloway, University of London. He currently works as client liaison at Gorilla, helping researchers get started with online research and supporting their projects.

Jo Evershed

Jo is the CEO and co-founder of Cauldron and Gorilla. Her mission is to provide behavioural scientists with the tools needed to improve the scale and impact of the evidence-based interventions that benefit society.