This walkthrough offers guidance and advice on best practice for creating and launching an experiment in Gorilla.
Create the Components
The first thing to do is to create the components of your experiment: these are your individual Questionnaires and reaction time Tasks.
Even a simple experiment is likely to have three components:
As you create these components, use the preview functionality a lot! This will help you get everything working exactly as you want. In particular, make sure you've tested your experiment thoroughly across all the browsers and devices that you intend to support. This video will show you how to test different devices (iPad, iPhone etc).
Once the components of your experiment are working, make sure you preview each of them and inspect the metrics. Scrutinising them will help you be confident that Gorilla is collecting the data you need.
When you commit (save) versions in Gorilla, you can write a commit message. Use this to write a note to your future self about what is up and working and what is left to be done. This helps when you return to building the task or questionnaire after a break. Your future self will thank you!
Build your Experiment
Once you have created the individual components, put them together in the experiment tree! Simply add new nodes to your tree and link them together between the start and finish nodes.
Once you have created your experiment, you can experience the whole of it almost as a participant would by previewing your experiment. You can then download the data for each questionnaire/task individually at the end. We recommend you do this at least once before piloting your study!
After previewing your experiment, you may find you need to make changes to one of the Questionnaire or Task components. Once you have made these changes you’ll need to commit them and then update the corresponding Node in your experiment tree to the latest version.
Learn about the Experiment Tree here.
We strongly advocate the use of Checkpoint Nodes throughout your experiment. Place them at major steps in your experiment to monitor participant progress and aid in data analysis.
We recommend you place Checkpoint Nodes after consent and demographics questionnaires and at the beginning of new experimental branches from Branch or Randomiser Nodes. These nodes will be invaluable in assessing your participants' progress through your experiment, as well as in identifying any potential problems in your experimental design. They are also a great help when it comes to analysing your data, especially when using pivot tables in MS Excel.
If you are using Pay-per-Participant, Checkpoint nodes will allow you to clearly identify participants who have not sufficiently progressed through your experiment so you can reject them confidently. Conversely, they'll allow you to identify participants who have progressed through enough of the experiment to merit including them and collecting their data.
Test your experiment
Before you launch your experiment, it is prudent to do some piloting. We suggest using the Pilot Recruitment Policy. Because you are now running an experiment – for real – this process will consume tokens. When you purchase tokens you will receive 5% additional tokens for free which are intended to be used for this purpose.
The Pilot Recruitment policy requires participants to type in some text as an ID. When I’m testing I use names that help me remember what functionality I was testing. For example: Jo_Test_1 or Jo_Test_Branch_EnglishOnly.
The pilot ID can also be useful when you want feedback from your whole lab. You can send out the link and each person can use their name.
Remember to set a Recruitment Target.
Once all the data is in you can then download the metrics from the Data Tab and check that you have everything you need to run your analysis.
Trial your Experiment
Congratulations - you’re now ready to launch your experiment online! Maybe.
Let’s imagine you’re using Facebook to recruit participants. We’d recommend initially launching with just a few participants (5 to 10) to allow them to raise any questions.
You may want to include an additional feedback questionnaire at the end of your experiment: Check out our example one here. Once you've finished piloting and taken the feedback on board you can then remove this questionnaire from your Experiment Tree.
Once you've got the data, now is the time to run through how you are going to do your analysis. Always check:
Make further improvements by running through the analysis: this gives you an opportunity to add metadata and checkpoints and to make corrections - without using up a lot of tokens and losing potential participants (and their data!).
Launch your Experiment
Once you are happy that:
Congratulations, you’re now ready to launch your experiment online: for real this time!
Select a Recruitment Policy that meets your study's requirements. You may wish to crowdsource your participants by posting a Simple Link on Facebook or other social media channels. If you already have a list of participants, the Email ID or Email Shot recruitment policies may be right for you.
You can find a full list of available Recruitment Policies here.
Participant attrition - the loss of participants from your experiment by any means - is a factor to consider in any experiment, whether that is a lab study or an online study.
One of the great benefits of conducting online research is the increased 'Scale and Reach': that is, the wide availability of diverse participants and access to otherwise 'hard-to-reach' populations. The result: participant sample sizes in the 1000s, rather than the 10s or 100s, are now achievable!
However, the upshot of it being easier to join an experiment is that it is also easier to leave one. In other words, you should expect the attrition rate for an online experiment to be higher than for the same experiment conducted in a lab. For ethical reasons, participants have the right to withdraw at any time, and there is often no way of knowing why they stopped.
When you set a recruitment target, both your Complete and Live participants contribute towards it. So, if you have a recruitment target of 20, with 8 Completes and 12 Live, Gorilla will mark your experiment as FULL and prevent further participants from joining your study - because the Live participants might all still complete the experiment. However, some participants will leave your study without completing and will remain 'Live'. While they are 'Live', they each reserve a token. Consequently, it is important to regularly reject participants who have dropped out.
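The counting rule above can be sketched in a few lines (illustrative only; `experiment_full` is not a Gorilla API, just a model of the behaviour described):

```python
def experiment_full(target: int, complete: int, live: int) -> bool:
    """Both Complete and Live participants count towards the Recruitment
    Target, because every Live participant might still finish."""
    return complete + live >= target

# A target of 20 with 8 Completes and 12 Live participants is FULL,
# even though only 8 have actually finished.
print(experiment_full(target=20, complete=8, live=12))  # prints True
```

Rejecting a dropped-out participant reduces the Live count, reopening a place (and releasing its token) for a new participant.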
There are two ways to reject participants that have dropped out.
For a participant to be marked as Complete by Gorilla, they must have completed the last task and reached the 'Finish' Node in your Experiment Tree. Nevertheless, you may be willing to pay for the data if a participant has completed most - but not all - of the experiment.
Whether or not you include participants who have dropped out during your experiment is up to you. We suggest including a Checkpoint Node (i.e. Keep) at the point at which you are happy to purchase the data. A Checkpoint Node will allow you to identify the participants that you want to manually include on the Participants page. For more information, take a look at the Checkpoint Node documentation or watch this video walkthrough on YouTube: The Checkpoint Node.
Using Recruitment Services
Often the fastest way of getting participants for your study is to work with a Recruitment Service that provides them.
Recruitment Services take care of both finding participants and paying them, taking a commission for this work.
Another option is to use the 3rd Party Recruitment Policy with a Market Research agency.
Market Research agencies exist all over the world and in nearly every jurisdiction. They are more expensive than recruitment services like mTurk and Prolific, and their participants are used to questionnaires about products, but this can be a good option if you need participants fast.
There are a number of challenges to manage when using a recruitment service:
While a participant may be familiar with taking studies through a particular recruitment service they may not have taken a Gorilla Study before.
Unlike other platforms, Gorilla has been designed to reduce participant 'barriers to entry' and to protect participant anonymity: Gorilla does NOT require participants to sign up for a Gorilla account, nor are participants required to download anything to their computer in order to run your study.
Typically recruitment services will allow you to add a description and/or instructions for a participant which they will view before they click the link to your experiment and start taking part.
Thus, we highly recommend you inform your participants of these differences: particularly that they do NOT need to sign up for a Gorilla account in order to take part in your experiment. Indeed, mentioning these two factors can increase the uptake of your study!
You are using two paid-for services, both of which must protect you from overspending. Consequently, you’ll need to set the number of participants you want to recruit in both services. In Gorilla, you do this by setting the Recruitment Target to the total number of participants that you want to recruit in your study.
Problems can occur when the two different systems get ‘out of balance’. For example, Gorilla may think you have finished participant recruitment while the recruitment service may still send participants to Gorilla. The result is a frustrating experience for a participant as they will encounter a message saying 'this experiment is not currently available'.
How can this happen?
A recruitment service may have a way of keeping track of participant attrition that we don’t have in Gorilla (and vice-versa):
Continue reading below to learn how to avoid this and stay on top of participant attrition!
If you are on an Unlimited Gorilla licence or have an Unlimited Experiment Token, the easiest way to avoid unbalanced recruited participant numbers in the two systems is to set your Gorilla Recruitment Target to 'Unlimited'. This will allow the recruitment service to send the number of participants it considers to be correct.
If instead you are setting a specific Recruitment Target in Gorilla (in addition to the target set in the Recruitment Service), you will need to make sure you keep the number of recruited participants in both systems in check:
Another way you can alleviate this issue - if you do not wish to monitor your participants - is to over-allocate participants to your study:
So if you want 200 complete participants and you are expecting 30% attrition, then assign 300 tokens to your experiment.
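As a sanity check on the arithmetic (illustrative only, not a Gorilla feature): if a fraction of your starters drop out, the minimum number of starters you need is the desired completes divided by the completion rate, and rounding up to a round figure like 300 adds a safety margin.

```python
import math

def tokens_to_assign(completes_wanted: int, attrition_rate: float) -> int:
    """Minimum number of starters needed if a fraction `attrition_rate`
    of them drop out before completing, rounded up to a whole participant."""
    return math.ceil(completes_wanted / (1 - attrition_rate))

# 200 completes at 30% attrition needs at least 200 / 0.7 starters;
# assigning 300 tokens leaves headroom on top of this minimum.
print(tokens_to_assign(200, 0.30))  # prints 286
```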
You can also set a maximum time that a participant can take on a study. If they have not completed within this time, they are automatically rejected. If your study takes 20 minutes on average, you could set this to 30 minutes (rejecting slower participants), or to 2 or 24 hours if you want to be more generous.
Microsoft Azure guarantees that our servers will be working 99.95% of the time, but they can still go down. See the Server Downtime page for more details.
If you are paying a lot for your recruited participants or recruiting a hard-to-reach demographic, then low participant attrition may be a crucial factor in both your experimental design and your recruitment phase.
In these cases we highly recommend launching experiments in small enough batches that you can afford to lose every participant that is currently active.
To do this, set the Recruitment Target to 20 and once these are all complete, update the Recruitment Target to 40. Continue adding batches of 20 until you reach your total.
Consequently, don’t publish a study asking for 200 participants on your chosen recruitment service; publish it asking for 20. Once that data is in, release a further 20. This way you protect yourself from the cost of participant attrition.
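The batching schedule described above can be sketched as follows (illustrative only; in practice you update the Recruitment Target by hand in Gorilla and in the recruitment service):

```python
def batch_targets(total: int, batch_size: int):
    """Yield the successive Recruitment Target values to set,
    releasing one batch at a time until the overall total is reached."""
    target = 0
    while target < total:
        target = min(target + batch_size, total)
        yield target

# For 200 participants released in batches of 20: 20, 40, 60, ... 200.
print(list(batch_targets(200, 20)))
```

You only raise the target to the next value once the current batch is fully complete, so the most you can lose to attrition at any moment is one batch.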
Some participant recruitment services give you the option of limiting how many participants can take part simultaneously.
Ethics / IRB Approval
You may need ethical approval from your Institutional Review Board or Ethics Committee to run your experiment. If so, you will find a useful template here.