Experiments: From Creation to Launch


Overview


This walkthrough offers best-practice guidance for creating and launching an experiment in Gorilla.

Create the Components


The first thing to do is to create the components of your experiment: your individual Questionnaires and reaction-time Tasks.

Even a simple experiment is likely to have three components:

  1. A consent form
  2. A demographics questionnaire
  3. A task

As you create these components, use the preview functionality a lot! This will help you get each component working exactly as you want. In particular, make sure you've tested your experiment thoroughly across all the browsers and devices that you intend to support. This video will show you how to test different devices (iPad, iPhone, etc.).

Once the components of your experiment are working, preview each of them and look at the metrics. Scrutinising these will help you be confident that Gorilla is collecting the data you need.

When you commit (save) versions in Gorilla, you can write a commit message. Use this to write a note to your future self about what is working and what is left to be done. This helps when you return to building the task or questionnaire after a break. Your future self will thank you!

Learn about the Questionnaire Builder here.
Learn about the Task Builder here.
Learn about the Experiment Tree here.

Build your Experiment


Once you have created the individual components, put them together in the experiment tree! Simply add new nodes to your tree and link them together between the start and finish nodes.

Once you have created your experiment, you can experience the whole thing almost as a participant would by previewing it. You can then download the data for each questionnaire/task individually at the end. We recommend you do this at least once before piloting your study!

  • If you have Branch Nodes, make sure you test the experience for each possible response.
  • If you are using Randomiser Nodes, the preview tool will randomly assign you to a condition. However, the preview tool does not have any memory of previous previews. Consequently, if you’ve set up a Balanced Randomiser, you’ll experience it as a Random Randomiser.
  • If you are using Quota Nodes, the preview tool will not send you along the reject path, as the preview tool does not consume tokens, and so will not fill your Quotas.
  • If you are using an Order Node, the preview will only give you one of the possible orders!

After previewing your experiment, you may find you need to make changes to one of the Questionnaire or Task components. Once you have made these changes you’ll need to commit them and then update the corresponding Node in your experiment tree to the latest version.

Learn about the Experiment Tree here.

Checkpoint Nodes


We strongly advocate the use of Checkpoint Nodes throughout your experiment. Place them at major steps in your experiment to monitor participant progress and aid in data analysis.

We recommend you place Checkpoint Nodes after consent and demographics questionnaires, and at the beginning of new experimental branches from Branch or Randomiser Nodes. These nodes will be invaluable in assessing your participants' progress through your experiment, as well as in identifying any potential problems in your experimental design. They are also a great help when it comes to analysing your data, especially when using pivot tables in MS Excel.

If you are using Pay-per-Participant, Checkpoint Nodes will allow you to clearly identify participants who have not progressed sufficiently through your experiment, so you can reject them confidently. Conversely, they'll allow you to identify participants who have progressed through enough of the experiment to merit including them and collecting their data.
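
If you prefer scripting to Excel pivot tables, the same kind of progress summary is easy to produce in pandas. This is a minimal sketch assuming a hypothetical participants export with 'Participant Id' and 'Checkpoint' columns; check your own download for the real column names.

    # A minimal sketch: summarise how far participants progressed,
    # using hypothetical column names from a participants export.
    import pandas as pd

    df = pd.read_csv("participants_export.csv")

    # Count how many unique participants reached each checkpoint.
    progress = df.groupby("Checkpoint")["Participant Id"].nunique()
    print(progress)

A summary like this makes it easy to spot where participants are dropping out of your experiment.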

Test your experiment


Before you launch your experiment, it is prudent to do some piloting. We suggest using the Pilot Recruitment Policy. Because you are now running an experiment for real, this process will consume tokens. When you purchase tokens you will receive 5% additional tokens for free, which are intended to be used for this purpose.

The Pilot Recruitment Policy requires participants to type in some text as an ID. When I'm testing, I use names that help me remember what functionality I'm testing. For example: Jo_Test_1 or Jo_Test_Branch_EnglishOnly.

The pilot ID can also be useful when you want feedback from your whole lab. You can send out the link and each person can use their name.

Once all the data is in, you can download the metrics from the Data Tab and check that you have everything you need to run your analysis.

Learn more about the different Recruitment Policies here.
Learn more about your Metrics here.

Trial your Experiment


Congratulations - you're now ready to launch your experiment online! Maybe.

Let's imagine you're using Facebook to recruit participants. We'd recommend initially launching with just a few participants (5 to 10) to allow participants to raise any questions.

You may want to include an additional feedback questionnaire at the end of your experiment: Check out our example one here. Once you've finished piloting and taken the feedback on board you can then remove this questionnaire from your Experiment Tree.

Once you've got the data, now is the time to run through how you are going to do your analysis. Always check that:

  • Scoring is working as you want.
  • Metadata is coming through to make data analysis easy, e.g. to pivot the data.
  • Checkpoint Nodes are in all the right places.

Make further improvements by running through the analysis: this gives you an opportunity to add metadata and checkpoints and to make corrections, without using up a lot of tokens and losing potential participants (and their data!).
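
As a concrete illustration of the checks above, here is a minimal scripted sanity check on pilot data. All column names ('Response', 'Correct Answer', 'Score', 'Condition', and so on) are assumptions for the sketch rather than Gorilla's actual export headers; substitute your own.

    # Sanity-check pilot data: scoring and metadata (illustrative columns).
    import pandas as pd

    df = pd.read_csv("task_export.csv")

    # 1. Scoring: recompute correctness from the raw responses and
    #    compare against the score column the task produced.
    recomputed = (df["Response"] == df["Correct Answer"]).astype(int)
    print("Scoring mismatches:", (recomputed != df["Score"]).sum())

    # 2. Metadata: confirm the columns you need for pivoting are present.
    needed = ["Participant Id", "Condition", "Trial Number"]
    print("Missing columns:", [c for c in needed if c not in df.columns])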

Launch your Experiment


Once you are happy that:

  • Your experiment is working smoothly for real participants.
  • You are collecting all the data you need from real participants.

Congratulations, you're now ready to launch your experiment online: for real this time!

Select a Recruitment Policy that meets your study's requirements. You may wish to crowdsource your participants by posting a Simple Link on Facebook or other social media channels. If you already have a list of participants, the Email ID or Email Shot recruitment policies may be right for you.

Alternatively, many researchers choose to use Third Party Recruitment Services. You can find out more about using recruitment services here.

You can find a full list of available Recruitment Policies here.

Attrition


Participant attrition - the loss of participants from your experiment by any means - is a factor to consider in any experiment, whether that is a lab study or an online study.

One of the great benefits of conducting online research is the increased 'Scale and Reach': that is, the wide availability of diverse participants and access to otherwise 'hard-to-reach' populations. The result: participant sample sizes in the 1000s, rather than the 10s or 100s, are now achievable!

However, the upshot of it being easier to join an experiment is that it is also easier to leave one. In other words, you should expect the attrition rate for online experiments to be higher than for the same experiment conducted in a lab. For ethical reasons, participants have the right to withdraw at any time, and there is usually no way of knowing why they have stopped.

When you set a recruitment target, both your Complete and Live participants contribute towards that target. So, if you have a recruitment target of 20 with 8 Completes and 12 Live, Gorilla will mark your experiment as FULL and prevent further participants from joining your study, because all 12 Live participants might still complete the experiment. However, some participants will leave your study without completing and will remain 'Live'. While they are 'Live', they have a token reserved. Consequently, it is important to regularly reject participants who have dropped out.
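
To make that accounting concrete, here is a toy illustration (not Gorilla's internals):

    # Toy illustration of the recruitment-target accounting.
    target, complete, live = 20, 8, 12

    # Complete AND Live participants each hold a token, so both count
    # towards the target: 8 + 12 = 20 leaves no slots free.
    slots_free = target - (complete + live)
    print("Slots free:", slots_free)  # 0 -> experiment is marked FULL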

There are two ways to reject participants that have dropped out:

  • Experiment Time Limit: The easiest way to do this is by setting a Time Limit. This option can be found in the Requirements section of your experiment, under 'Recruitment'. Participants who take longer than the time limit will be automatically rejected. (Note: the Time Limit feature is not recommended for longitudinal studies.) If you're recruiting via Prolific, make sure that your Gorilla experiment Time Limit matches the time limit set in Prolific.
  • Manual Rejection: Alternatively, you can manually reject Live participants at any point to return the token. For experiments that are completed in one sitting, a good rule of thumb is to reject participants who started over 24 hours ago (see the sketch below).
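
If you want to script that 24-hour rule of thumb, a sketch along these lines will list candidates for manual rejection. The column names ('Status', 'Started', 'Participant Id') are assumptions for illustration; check your own participants export.

    # List Live participants who started more than 24 hours ago
    # (hypothetical 'Status' and 'Started' columns).
    import pandas as pd

    df = pd.read_csv("participants_export.csv", parse_dates=["Started"])

    cutoff = pd.Timestamp.now() - pd.Timedelta(hours=24)
    stale = df[(df["Status"] == "Live") & (df["Started"] < cutoff)]

    # Rejecting these participants returns their reserved tokens.
    print(stale[["Participant Id", "Started"]])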

For a participant to be marked as Complete by Gorilla, they must have completed the last task and reached the 'Finish' Node in your Experiment Tree. Nevertheless, you may be willing to pay for the data if a participant has completed most, but not all, of the experiment.

Whether or not you include participants who have dropped out during your experiment is up to you. We suggest including a Checkpoint Node (e.g. one labelled 'Keep') at the point at which you are happy to purchase the data. A Checkpoint Node will allow you to identify the participants that you want to manually include on the Participants page. For more information, take a look at the Checkpoint Node documentation or watch the video walkthrough on YouTube: The Checkpoint Node.

Using Recruitment Services


Often the fastest way of getting participants for your study is to work with a Recruitment Service that provides them.

Recruitment Services take care of both finding participants and paying them. For this work they will take a commission.

We highly recommend PROLIFIC.ac; they specialise in recruiting participants for behavioural scientists. A full list of other integrated recruitment services can be found here.

Another option is to use the 3rd Party Recruitment Policy with a Market Research agency.
Market Research agencies exist all over the world and in nearly every jurisdiction. They are more expensive than recruitment services like mTurk and Prolific, and their participants are used to questionnaires about products, but they can be a good option if you need participants fast.

There are a number of challenges to manage when using a recruitment service:

  1. Fully informing your participants how to interact with Gorilla and 'complete' your study.
  2. Setting participant recruitment numbers in both Gorilla and the recruitment service.
  3. Keeping on top of participant attrition.
  4. Taking account of server downtime by limiting live participants.

Informing your participants about Gorilla

While a participant may be familiar with taking studies through a particular recruitment service, they may not have taken a Gorilla study before.

Unlike other platforms, Gorilla has been designed to reduce participants' 'barriers to entry' and to protect participant anonymity: Gorilla does NOT require participants to sign up for a Gorilla account, nor are participants required to download anything to their computer in order to run your study.

Typically, recruitment services will allow you to add a description and/or instructions for participants, which they will view before they click the link to your experiment and start taking part.

Thus, we highly recommend you inform your participants of these differences: particularly that they do NOT need to sign up for a Gorilla account in order to take part in your experiment. Indeed, mentioning these two factors can increase the uptake of your study by participants!


Setting participant recruitment numbers

You are using two paid-for services, both of which must protect you from overspending. Consequently, you'll need to set the number of participants you want to recruit in both services. In Gorilla, you do this by setting the Recruitment Target to the total number of participants that you want to recruit in your study.

Problems can occur when the two systems get 'out of balance'. For example, Gorilla may think you have finished participant recruitment while the recruitment service is still sending participants to Gorilla. The result is a frustrating experience for those participants, as they will encounter a message saying 'this experiment is not currently available'.

How can this happen?
A recruitment service may have a way of keeping track of participant attrition that we don’t have in Gorilla (and vice-versa):

Scenario 1:

  1. A participant clicks on a link to your study.
  2. They click the Gorilla Start button. At this point, in Gorilla, the participant reserves a participant Token.
  3. The participant consents.
  4. They come to a screening questionnaire which they fail to pass.
  5. They are sent to a Gorilla Reject Node. At this point in Gorilla, typically the participant Token is returned to the pool, and the participant is not counted towards the 'Recruitment Target'.
  6. Depending upon how you have set up your Reject Node, the participant may or may not be redirected back to the recruitment service. Therefore this may (or may not) change how many participants the recruitment service believes it has recruited.

Scenario 2:

  1. A participant clicks on a link to your study.
  2. They click the Gorilla Start button. At this point, in Gorilla, the participant reserves a token.
  3. The participant consents.
  4. They start the task – but for whatever reason decide to drop out. The participant goes back to the recruitment service to 'manually drop out' of your experiment.
  5. At this point Gorilla may not be told by the recruitment service that the participant has withdrawn. As such, the participant's status will remain 'Live', their token will still be reserved, and you will need to manually reject this participant.

Continue reading below to learn how to avoid this and stay on top of participant attrition!


Keeping on top of participant attrition

If you are on an Unlimited Gorilla licence or have an Unlimited Experiment Token, the easiest way to avoid 'unbalanced' recruited participant numbers in the two systems is to set your Gorilla 'Recruitment Target' to 'Unlimited'. This will allow the recruitment service to send the number of participants it considers to be correct.

If instead you are setting a specific 'Recruitment Target' in Gorilla (in addition to the target set in the Recruitment Service), you will need to make sure you keep the number of recruited participants in both systems in sync:

  1. Be sure to read our How To: Participant Tokens Guide so you fully understand when and where tokens will be reserved, spent and returned in Gorilla.
  2. Be aware of how participants are counted as 'recruited' within your chosen Recruitment Service. For example, services commonly require participants to return to their site to be considered 'complete'.
  3. Keep a close eye on participants who have started your study – and reserved a token in Gorilla – but have since dropped out (their status remains 'Live').
  4. You may need to manually reject participants who appear to have dropped out in order to return their 'reserved' tokens to the pool.

Another way you can alleviate this issue - if you do not wish to monitor your participants - is to over-allocate participants to your study:
So if you want 200 complete participants and you are expecting 30% attrition, then assign 300 tokens to your experiment.
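
The arithmetic behind that rule of thumb: if a fraction a of recruits drop out, only (1 - a) of your tokens become completes, so you need roughly target / (1 - a) tokens. A quick sketch:

    # Tokens needed for a completion target under expected attrition.
    import math

    target = 200      # complete participants you want
    attrition = 0.30  # expected drop-out rate

    tokens = math.ceil(target / (1 - attrition))
    print(tokens)     # 286 -> rounding up to 300 adds a safety margin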


You can also set a maximum time that a participant can take on a study. If they have not completed within this time, they are automatically rejected. If your study takes 20 minutes on average, you could set this to 30 minutes (rejecting slow participants), or to 2 or even 24 hours if you want to be generous.


Throttling Live Participants

Microsoft Azure guarantees that our servers will be working 99.95% of the time, but they can still go down. See the Server Downtime page for more details.

If you are paying a lot for your recruited participants, or recruiting a hard-to-reach demographic, then low participant attrition may be a crucial factor in both your experimental design and your recruitment phase.

In these cases we highly recommend launching experiments in small enough batches that you can afford to lose every participant that is currently active.

To do this, set the Recruitment Target to 20 and, once these participants are all complete, update the Recruitment Target to 40. Continue adding batches of 20 until you reach your total.

Consequently, don't publish a study with 200 places on your chosen recruitment service; publish it with 20. Once that data is in, release a further 20. This way you can protect yourself from the cost of participant attrition.
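
A toy sketch of that batch schedule (using the example numbers above; raising the target is a manual step in Gorilla, not an API call):

    # Release participants in batches of 20 until the total is reached.
    total_target, batch_size = 200, 20

    released = 0
    while released < total_target:
        released = min(released + batch_size, total_target)
        print(f"Raise the Recruitment Target to {released}; "
              "wait for these to complete before continuing.")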

Some participant recruitment services give you the option of limiting how many participants can take part simultaneously.

Ethics / IRB Approval


You may need ethical approval from your Institutional Review Board or Ethics Committee to run your experiment. If so, you will find a useful template here.