Do you know your participants? Each time a participant launches a study on Gorilla, we collect some basic information about the equipment they are using and their location. We have collated and analysed summary statistics on around 200,000 participants whose data was included in a data download after taking part in a study on the platform.

If you run studies online, or are planning to in the near future, this article will tell you, through descriptive data, about the equipment that your participants are likely to be using, and hint at the demographics you may have access to.

The results are also published in our paper: Realistic precision and accuracy of online experiment platforms, web browsers, and devices.

The vast majority of participants use a computer (desktop or laptop). Note this does not add to 100% due to some devices not enabling logging on their browsers.

Participants accessed Gorilla and completed experiments using over 1100 different devices, ranging from desktop computers to touch-screen MP3 players, and even 11 Xbox users. This shows the strength of Gorilla as a flexible platform: researchers do not have to adapt their tasks to work on a massive range of devices, because the task builder sorts this out for you.

Smartphones accounted for 20% of our users. The most popular smartphones among participants were iPhones, followed by the Samsung range and then by Huawei devices. We can compare this to information from StatCounter, which collects user data from over 10 billion page views every month. Relative to this broader market, Gorilla participants are much more likely to be using an Apple phone (54% vs 22%); this likely reflects the consumer markets participants are located in (more on this below).

The relatively small number of tablet users was dominated by iPads, followed by Samsung users. The others amounted to an almost negligible number. This is likely a biased sample, as it is so small.

The dominance of desktop/laptop may reflect researchers concerned with precision timing and consistency across participants using the Requirements feature in Gorilla to limit participants to computers (desktops and laptops); a screenshot of this can be seen below. Gorilla has built-in tools to limit participants by browser or device, which will help control your data quality.

The massive variety we see in users' devices really outlines the importance of (1) using the browser testing tools to check the appearance of your study on a range of devices, or (2) applying a requirement in order to limit your participants to a specific device or browser.



Note this does not add to 100% due to some devices not enabling logging on their browsers.

We can see a heterogeneity of browsers here. Chrome is by far the most common, which is in line with globally reported usage trends from StatCounter (64% used Chrome).

We now see that more participants access experiments using the Facebook app browser than using Internet Explorer. This likely reflects the fact that researchers commonly advertise their research in Facebook groups; participants are highly likely to click on those links in the mobile app (which uses this browser). The percentage of users on the Facebook browser (4%) is much higher than in the broader user stats linked to above.

This also outlines the decline of Internet Explorer, which is at 2% of our sample, with Edge ahead of it. Also, note that more participants are using the mobile version of Safari than the desktop version. WebKit is the engine behind mobile Safari, but it is also used on other devices (Kindles, for example, use WebKit).


Operating Systems

Predictably, Windows remains the dominant OS in use. Comparing this again to stats from StatCounter, we can see that, overall, our participants are more likely to use some form of laptop/desktop over a mobile device than the average world user, and are more likely to use macOS than the average world user. This probably reflects the requirement of being an online participant (easier on a personal computer), and the western device market (more Macs).


Recruitment Platforms

There are a variety of ways that you can recruit participants to take part in a study on Gorilla. The most straightforward is a simple link that can be shared however you please (online, on posters, via email). Many researchers are motivated to take their research online in order to use a recruitment service (e.g. Prolific, MTurk, Qualtrics Panels, SONA, etc.) to source participants.

You can see from the graph below that recruitment services account for 53% of participants, 43% come from simple links, and 3% are under a pilot recruitment policy. This policy is often used to pilot tasks before sending them out into the wilderness.

About 1% of participants use the more niche recruitment policies, for instance supervised recruitment, which creates a unique login code for each participant and is often used in classrooms.

Distribution of users recruited via MTurk vs Prolific

MTurk and Prolific are the most common recruitment platforms in use. Both have pre-qualified participants/workers who are paid to take part in research. Within the recruitment platform, these participants follow a link to a study hosted on Gorilla. Gorilla then hosts the experiments and captures the data. A completion verification process then allows the participant to collect their reimbursement.

Comparing the two popular recruitment platforms, we can see that the geographic distribution of participants recruited onto Gorilla is different. The majority of Prolific participants are within Europe, with almost a third in America, and a much smaller number in Australia, Africa, and Asia. MTurk participants are overwhelmingly likely to be from America, with a much higher number located in Africa or Asia relative to Prolific.

This data signifies the importance of choosing your recruitment platform carefully, depending on who you wish to be represented in your research. Both MTurk and Prolific allow you to specify your target location; however, within each platform this is likely to affect your uptake rate and potentially your total sample size.



The participants recruited on Gorilla are mainly based in Europe, followed by America, then Asia, and then Australia.


This is also reflected in the users by city timezone:

It’s worth noting that the ‘cities’ recorded in the timezone browser data are likely to be coarse; e.g. ‘London’ represents the timezone that covers the entirety of the United Kingdom as well as Portugal and the west coast of Africa. In the United States, Chicago, New York, and Los Angeles are likely to represent different times and cover large areas of the U.S.

This geographic distribution probably goes part of the way to explaining why our users' devices and brands depart so much from the broad internet user population. For instance, when we look at UK internet users, the most common mobile device is an Apple phone.

In order to access any given population, the Gorilla Experiment Builder allows you to define which locations you want to restrict your users to.

Screen Size

The 99th percentile of device screen dimensions. Kernel density plots and histograms with bin widths of 30 pixels are shown for each axis.

Through the use of our logging tools, Gorilla is able to detect the screen size participants are using. Of interest to researchers is the variance of screen dimensions we recorded. These ranged from 320×205 pixels to 2800×5982 pixels. The graph above had to be filtered to the 99th percentile to remove these extremes; otherwise it was very difficult to interpret.
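As a sketch of that filtering step (using made-up widths rather than the real dataset; this helper is illustrative, not Gorilla's actual analysis code):

```python
def percentile(values, pct):
    """Linear-interpolated percentile (0-100) of a list of numbers."""
    s = sorted(values)
    k = (len(s) - 1) * pct / 100
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical screen widths in pixels (not the real Gorilla data)
widths = [320, 1024, 1280, 1366, 1440, 1536, 1920, 2560, 2800, 5982]

cutoff = percentile(widths, 99)
filtered = [w for w in widths if w <= cutoff]  # drops the 5982px outlier
```

In practice you would do the same thing with `numpy.percentile` on each axis before plotting.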

The clustering of points along diagonal lines indicates common monitor aspect ratios; the most common by far (indicated by the lower blue line above) is the 16:9 aspect ratio.
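One way to see why those diagonals appear: every 16:9 screen, whatever its resolution, sits on the same line through the origin. A quick illustrative helper (ours, not from the article's analysis) reduces a resolution to its simplest ratio:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce a pixel resolution to its simplest ratio, e.g. 1920x1080 -> (16, 9)."""
    g = gcd(width, height)
    return (width // g, height // g)

# Very different resolutions land on the same 16:9 diagonal
aspect_ratio(1280, 720)   # (16, 9)
aspect_ratio(1920, 1080)  # (16, 9)
aspect_ratio(1024, 768)   # (4, 3) — an older, squarer monitor
```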

The orange/mobile points appear in two diagonal clusters; this represents participants in landscape vs portrait orientation.

The width of screens is relatively distinct for each device type, whereas the height seems to be more closely grouped. This likely reflects the mix of portrait and landscape use on mobiles and tablets, which is less common among computer users.

This variance outlines the importance of thinking about participant screen dimensions in your online research, especially if you are running tasks that would normally be done on a specified monitor in a lab setting. Gorilla allows for this by restricting by device type; computers will be much better for these types of research.

Browser Window Coverage

The browser window refers to the space in which the browser has to display content. This can vary a lot, as users may not put their browser in full screen, or maximise the window. To try and capture this variance, we calculated the percentage of each participant's screen that was covered by the browser window.
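The arithmetic is simply viewport area divided by screen area. A minimal sketch (the function name and example numbers are ours, not Gorilla's logging schema):

```python
def viewport_coverage(screen_w, screen_h, window_w, window_h):
    """Percentage of the screen area occupied by the browser viewport."""
    return 100 * (window_w * window_h) / (screen_w * screen_h)

# e.g. a 1600x900 browser window on a 1920x1080 screen covers ~69.4%
coverage = viewport_coverage(1920, 1080, 1600, 900)
```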

Kernel density estimation for each device's viewport coverage on the screen, binned with a width of 2%. The lines above the x-axis show individual participants' points for reference.

Rather reassuringly, the coverage was, on average, relatively high. Participants had a mean of 81% coverage, with a standard deviation of 11%. This means that most participants have a large amount of the screen covered by the Gorilla experiment.

Computers have much more variable coverage, as you can see from the tail in the graph above. The standard deviation for computers was 11% coverage, compared with 6% for mobiles and tablets. This is, of course, because these devices have a limited ability to shrink windows.

A note about graphs

If you’re into that kinda thing, the graphs were constructed in Python using:

  • Seaborn
  • Matplotlib
  • geopandas (for the map; a choropleth, if you’re being technical)
  • The screen size plot was made by adapting this code:

Alex Anwyl-Irvine

Alex is a part-time developer at Gorilla and a PhD student at the MRC Cognition and Brain Sciences Unit in Cambridge. He helps build concepts for new features, runs scientific validation on the platform, writes up results, and helps experimenters create some of the more complex online experiments.

Alex’s PhD, supervised by Dr Duncan Astle, is on the development of resilience in children, and how this can be modelled through brain scans and big-data methodologies.