Troubleshooting


Welcome to Troubleshooting


Explore the list of questions in the menu to the left. Here you can find solutions to commonly encountered problems.

If you have general questions about using Gorilla, try the How To Guides.

I Can’t Commit Questionnaires


This is often because two nodes (questionnaire widgets) have the same key. If this is the case, an error message like the one below will be displayed.

    Error Message
    ‘Two of your nodes have the same key:  response-2’

Each widget in the questionnaire has to have a unique key. You can leave them as they are when you add the widget (response-1, response-2, response-n), or you can rename them to be more meaningful.

I Can’t Edit A Task


There are generally three reasons why you might not be able to edit a task:

  1. You haven't pressed the Edit button
  2. Someone else has an open change on the same task (someone else is editing the task)
  3. You don't have edit rights (often the case in your Library or the Samples folder)

Details of what to do in each case:

  1. Once you have gone into a task, if you want to edit it, the first thing to do is press the red EDIT button near the top right, below the red PREVIEW button.
  2. If someone else has an open change, when you click EDIT, a modal dialogue will come up saying that you can't edit the task because someone else is. They will need to commit (or discard) their change, and then you will be able to EDIT the task.
  3. To check whether you have edit rights, click Settings > Collaborate near the top right of the page. Only someone with Admin rights can change the permissions. If you don't have edit rights, you can always Clone the task to create your own editable copy: Create New Task > From Existing > select the task.

I Can’t Upload A Spreadsheet


There are three common reasons why you might not be able to upload a spreadsheet:

  1. You have not pressed the Edit button. You can only upload spreadsheets to a task while you are in Edit mode.
  2. The file format is not supported. Try uploading the spreadsheet in a different file format (XLSX, CSV or ODS).
  3. The file is corrupted. If this is the case, try opening the original spreadsheet in your preferred package, re-saving it, and uploading again.

One of my Task Displays is not Appearing


'I've set up my display, but when I preview my task, this display is being skipped over and isn't shown.'

There are two common causes for this issue, and a final check if neither applies:

  1. You have set up the Display in the Task Structure, but you haven't entered the display name correctly into the spreadsheet.
  2. Your Display name contains spaces, special characters or punctuation.
  3. Tried the above but the Display is still not showing?

Here's what to do in each case:

1. You have set up the Display in the Task Structure, but you haven't entered the display name correctly into the spreadsheet.

Solution:

  1. Download the spreadsheet you are running your Task from.
  2. Highlight and Copy the name of the Display you are having problems with.
  3. Paste the name into the 'Display' Column of your spreadsheet & double check there are no trailing spaces at either end.
  4. Save your amended spreadsheet and upload it to Gorilla.

2. Your Display name contains spaces, special characters or punctuation.

Display names are case-sensitive, and whitespace counts. The most common reason for a Display not appearing is 'stray spaces'.

It's very easy to accidentally put an extra space into your Display names (or into the names entered in the spreadsheet). Most commonly this occurs at the beginning or end of the name when entering the display into the spreadsheet.

Solution:

  1. Remove all spaces from your Task's Display names. Instead, use camelCase or PascalCase for your display names.
  2. Remove all special characters and punctuation from your Task's Display names.
  3. Download your existing spreadsheet and replace your old display names with your new ones.
  4. Save your amended spreadsheet and upload it to Gorilla.
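As a quick way to audit your display names for the problems above, here is a minimal Python sketch. The helper is hypothetical, for illustration only; Gorilla itself provides no such function.

```python
import re

def check_display_names(names):
    """Flag display names likely to cause a Display to be skipped.

    Hypothetical helper for illustration only: it flags leading/trailing
    whitespace first, then any character that isn't a plain letter or digit.
    """
    problems = {}
    for name in names:
        if name != name.strip():
            problems[name] = "leading/trailing whitespace"
        elif re.search(r"[^A-Za-z0-9]", name):
            problems[name] = "space, punctuation or special character"
    return problems

# Only the padded and punctuated names are flagged.
print(check_display_names(["trialScreen", " Instructions", "break time!"]))
```

Run it over the contents of your spreadsheet's 'Display' column to spot stray spaces before uploading.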

3. My Display still isn't showing?

'I've removed all spaces & special characters from my display names and reuploaded a new spreadsheet
but it's still not showing this Display.'

Get in contact with us via our Support Contact Form. Provide us with a link to your task and we will happily take a look at your issue directly.

I'm not seeing what I expect when I Preview my Task


There are three common reasons why your task preview may be different from what you expected:

  1. I get a blank screen when previewing my Task, or part-way through previewing it

    1. You have included a 'blank' or 'empty' Screen in one of the Displays of your task.
    2. Identify where the blank screens are in your displays.
    3. Either: (1) delete the blank screens; (2) add a 'Timelimit' (Screen) Zone to the blank screen if you wish to keep it, so that Gorilla knows to move on from the screen automatically; or (3) add a 'Fixation' or 'Continue Button' Zone to your blank screen, so that the participant can move on from it.

  2. I get a blank screen and the error message 'No source for the media could be found! Try refreshing the page. If the problem persists, contact the researcher for the experiment'.

    1. A Stimulus is missing or misnamed.
    2. First check the Stimuli tab and double-check that all your stimuli are uploaded. If you have any doubts or cannot find a stimulus, upload your stimuli again and re-preview your task.
    3. Go through the stimulus file names in your Zones (or your Spreadsheet). Check that each file name is correct, with no spelling errors and no spaces before or after the file name. Make sure you have included the full file name (including the .mp3/.png/.jpg etc.). Correct any mistakes in the file names where they appear.

  3. When I preview my Task it doesn't look how I am expecting it to (but my displays are working as I expect them to).

    1. Your Spreadsheet is incorrect.
    • First check the Spreadsheet tab: select the spreadsheet you are trying to preview from the 'Show spreadsheet' drop-down menu. Does it look correct?
      • If in doubt: press the Edit button, make sure the correct spreadsheet is selected in the 'Show spreadsheet' drop-down menu, press the 'Upload Spreadsheet' button and upload the correct spreadsheet.
    2. You are using Manipulations and haven't selected the correct Spreadsheet.
    • When previewing an experiment, be sure to select the correct spreadsheet before pressing the 'Launch' button.
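The file-name checks for a missing or misnamed stimulus can be sketched as a small script. This is an illustration only; the helper and the file names are invented, and the matching rules (exact, case-sensitive, whitespace-sensitive) mirror how the troubleshooting steps above describe stimulus lookup.

```python
def find_stimulus_mismatches(spreadsheet_names, uploaded_files):
    """Report stimulus names from the spreadsheet that have no exact,
    case-sensitive match among the uploaded files.

    Hypothetical helper for illustration only.
    """
    uploaded = set(uploaded_files)
    lowered = {f.lower() for f in uploaded}
    mismatches = []
    for name in spreadsheet_names:
        if name in uploaded:
            continue  # exact match: fine
        if name.strip() in uploaded:
            mismatches.append((name, "extra whitespace around the name"))
        elif name.strip().lower() in lowered:
            mismatches.append((name, "case mismatch"))
        else:
            mismatches.append((name, "no uploaded file with this name"))
    return mismatches

print(find_stimulus_mismatches(
    ["face1.png", "face2.PNG", " tone.mp3"],
    ["face1.png", "face2.png", "tone.mp3"],
))
```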

My Video stops part-way through an Experiment or Preview


If your video stops playing part way through, this is typically because your video file is corrupted in some way.

On Chrome, an error message will appear when this occurs, and video play error information will appear in your metrics. This means that if you launch an experiment containing a corrupted video, only participants on Chrome will show any error in the data.

In this case, you need to use a different video file.

It doesn't work when I preview my experiment (but each element works fine)


There are two common reasons you might not be able to preview an experiment:

  1. You haven't connected the nodes together properly. To check, pick each node up and make sure the arrows stay connected.
  2. One of the nodes has never been committed. If this is the case, there is usually a V999... or V0 version of the node in the tree. Open the original element, commit it, and then update it in the tree.

Some of my participants are still live


As soon as a participant starts an experiment (clicks on the recruitment link or is sent a recruitment email) they are marked as 'Live'. They will remain 'Live' until they reach a finish node, at which point they are marked as 'Completed'. It is also at this point that they are automatically 'included' and their metrics become available in your data. For Pay-per-Participant users, this is also the point where a participant token will be consumed.

When a participant is Live and has only recently joined the experiment, it's reasonable to assume that they are still taking part and just need a little more time to finish. If a participant has been Live for a very long time, the most likely scenario is that, unfortunately, they have chosen to leave the experiment. This could have happened very early on: perhaps they chose not to consent. Or very late in the experiment: they may have been interrupted by something else and had to leave.

To help you work out where a participant is in an experiment, it's really helpful to include Checkpoint nodes. These are available in the experiment tree, and we recommend putting them between key components of your experiment. The most recent Checkpoint node that a participant passed through is recorded in the Participants tab of your experiment. This way you can work out how far through your experiment a Live participant is, and how likely it is that they are still working on it or have simply left. For Pay-per-Participant users, this allows you to make an informed choice on whether to include a Live participant's data (consuming a token) or reject them and have the token returned. You can then choose to include only those participants who have progressed far enough through your experiment that their data is worth including.

My Experiment is 'Full'


If your Experiment is ‘Full’ this means that your Recruitment Target has been reached. In other words, that number of participants have entered your Experiment. This means that some of your participants have not completed the Experiment and are still ‘Live’.

If your participants have been Live for a long time, you may need to manually reject them in order to allow more participants to enter the experiment and complete your recruitment target.

For information on how to do this, see the How To: Participant Tokens guide.

Media Error Messages


For a full guide to Media Error Messages check out our dedicated Help Page: Troubleshooting: Media Error Messages


There are three places where you may encounter a 'Media Error Message'.

  1. On your Task Stimuli page, when uploading an image, audio or video stimulus.
  2. When previewing your Task either in Task preview or Experiment Preview.
  3. As feedback from a participant who has taken part in your Live experiment.

Explanation:

Gorilla's Media Error messages have been added to help you create a task which will work consistently and robustly across all browsers and devices.

Media Error messages will be triggered if Gorilla notices that there is something wrong with your stimuli which could result in your task performing sub-optimally or, in the worst case, prevent your task from working at all.

There are many different Media Error messages which you may encounter; for a full explanation and solution for each one, please refer to our dedicated Media Error Messages help page. Below you will find only the most common solution for each of the three cases.

Solution:

  1. The most common Media Error message on the Stimuli page is one informing you that the stimulus you have just uploaded is not in a supported format.

When you preview your Task you may find that your task and stimuli seem to work just fine. However, depending upon the stimuli type you may find that performance will differ between different browsers and different devices.

To avoid the risk and disappointment of your task not displaying correctly for some participants, we highly recommend you convert your stimuli to one of the supported web-compatible file types.

If you are unsure which file types are supported, you can find out here: Supported File Types. Alternatively, you can find out which file types are supported by a particular Zone by reviewing the Zone's dedicated Tooling Reference Guide page.

  2. The most common Media Error message when previewing a Task is that the media cannot be found.
  • First, identify the name of the missing stimulus and check that you have uploaded it on your Stimuli page.
  • Check that the name of your stimulus in your Task Zone or Spreadsheet is identical (case-sensitive) to that found on your Stimuli page.
  • Check that there are no extra spaces or line breaks around your stimulus name within your task or spreadsheet. White space counts, and will register as a mismatch if the names are not identical.

If these steps do not fix your error please refer to our dedicated Media Error Messages help page.

  3. The third place you may encounter a Media Error message is as feedback from a participant who has taken part in your live experiment.
  • Check your metrics for this participant: any media error messages that occurred while the participant was undertaking your task will be written into your metrics.

Use our dedicated Media Error Messages help page to identify the message you received and find the appropriate solution.

The majority of reports from participants about Media Error messages can be avoided by previewing your task and experiment fully before releasing your experiment live. Once this is successful, a small pilot across all the browsers you intend to support is very highly recommended.

Piloting your full study across all browsers and devices you intend to support will usually pick up any additional media errors and allow you to prevent them from recurring when you come to launch your full experiment.

The message 'Please switch to landscape mode' won't go away


If your participant receives a 'Please switch to landscape mode' message when completing your task on mobile and this doesn't go away when the phone is rotated, there are two common explanations.

  1. Your participant has auto-rotate locked on their phone. This means the phone, and therefore Gorilla, will not switch to landscape mode when the participant turns their phone. To continue with the task, your participant needs to turn this off. We recommend that when recruiting via mobile, you ask participants to turn auto-rotate lock off before they begin the task.

  2. Your participant has opened the experiment link through an app, such as Facebook or Instagram. This may open the link inside the app rather than within a full browser, and this may not be able to pick up on phone rotation. We strongly recommend that when recruiting via mobile, you ask participants to copy-paste the link into their browser.

I see strange 'diamond' characters in my spreadsheet


As of August 2018, this problem should not occur because Gorilla is now able to interpret special characters in a non-universal format. However, if this does occur, follow the steps below.

This problem is typically encountered when uploading a spreadsheet for your Task in CSV format.

Explanation: This problem occurs because, in many programs like Excel, csv files are not saved using UTF-8 encoding by default. This means that these files don’t save special characters in a way that can be universally understood and recognised. When they are uploaded to Gorilla, because the special characters aren’t readable, they are instead replaced with the diamond with a question mark symbol which looks like this: �.

Solution: The way to resolve this is to save your file as an XLSX or ODS file, or as a 'CSV UTF-8 (Comma delimited)' file.

Steps:

  1. In your spreadsheet program, e.g. Excel, use 'Save As'.
  2. Under 'Save as Type', select a format that is UTF-8 compliant. In modern versions of Excel this is 'CSV UTF-8'; alternatively, choose 'Excel Workbook (.xlsx)' or 'OpenDocument Spreadsheet (.ods)'.
  3. Upload your reformatted spreadsheet to Gorilla and your special characters should now be viewable.

Older versions of Excel (2013 and earlier) don't directly offer a 'CSV UTF-8' file format. If you still wish to save the file as a CSV, follow the steps below:

  1. Under 'Save as Type' select the csv format.
  2. In the tools menu, select 'Web Options', then 'Web Encoding' and then select 'UTF-8'.

If you are using an older version of Excel and trying to re-save a CSV file with UTF-8 encoding, the above steps do not work reliably! In this case:

  1. Open your csv file in Notepad (NOT Notepad++) and use 'Save as'.
  2. Select 'All files' in the File Type drop down - this will keep the file as a csv.
  3. In the encoding drop down, select 'UTF-8'.

This will then convert your csv file to use UTF-8 encoding.
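If you would rather script the conversion than click through save dialogs, here is a minimal Python sketch of the same re-encoding. The cp1252 source encoding is an assumption (the usual Windows legacy default); adjust it to match your own file.

```python
import os
import tempfile

def resave_as_utf8(src_path, dest_path, source_encoding="cp1252"):
    """Rewrite a CSV using UTF-8 so special characters survive upload.

    source_encoding is an assumption: cp1252 is a common legacy default
    on Windows, but yours may differ.
    """
    with open(src_path, encoding=source_encoding, newline="") as src:
        text = src.read()
    with open(dest_path, "w", encoding="utf-8", newline="") as dest:
        dest.write(text)

# Demo: a cp1252-encoded CSV containing an accented character survives
# the round trip once re-saved as UTF-8.
with tempfile.TemporaryDirectory() as tmp:
    legacy = os.path.join(tmp, "stimuli.csv")
    fixed = os.path.join(tmp, "stimuli-utf8.csv")
    with open(legacy, "w", encoding="cp1252") as f:
        f.write("display,word\ntrial,café\n")
    resave_as_utf8(legacy, fixed)
    with open(fixed, encoding="utf-8") as f:
        fixed_text = f.read()
print(fixed_text)
```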

Warning: Be careful!

In most file explorers, you won’t get an indication of whether a csv has been saved as UTF-8 or not.

Further, the default for most spreadsheet programs is to save the csv as a normal (non-UTF-8) CSV, even if this is a previously UTF-8 compliant CSV that you’ve downloaded from Gorilla. Always make sure it is being saved in the UTF-8 compliant format if you are using special characters!

All my metrics are in one column


There are two main reasons for this issue:

  1. When re-saving a CSV metrics file within Excel, Excel can default to saving your file in Unicode Text format, which results in a loss of formatting.
  2. Your computer's 'locale' settings for reading CSV (Comma Separated Value) files are 'non-standard'. This means that Excel cannot work out where to separate the metrics columns correctly.

Here are the solutions for each case:

  1. When you save your metrics upon downloading your data, sometimes the default file type selected by your computer will be something other than CSV. Often it defaults to 'text file' instead. This can cause problems when you later open the file to view the data.
    This issue is easily fixed: re-download the data from your node(s) and check that the file type is one of those previously listed before saving your data.
  2. CSV (Comma Separated Value) files can use a variety of syntax for column separators and number formats, and the defaults vary across computers. While Gorilla can accept variations of CSV syntax when you upload a spreadsheet (via the 'encoding' and 'separator' settings), it has no control over your computer's locale settings. In this case, even though you are sure you have saved the file as a CSV, your local spreadsheet reader is not able to read the content correctly. Instead, download a CSV (Semicolon) or CSV (Tab) file.
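If you want to check which separator a downloaded file actually uses, Python's standard library can guess it from a sample. A minimal sketch (the sample data is invented):

```python
import csv
import io

# A metrics file that opens as a single column when the machine's locale
# expects commas but the file is semicolon-separated.
raw = "Trial;Response;RT\n1;left;432\n2;right;387\n"

# csv.Sniffer guesses the delimiter from a sample of the file.
dialect = csv.Sniffer().sniff(raw, delimiters=",;\t")
rows = list(csv.reader(io.StringIO(raw), dialect))
print(dialect.delimiter, rows[0])
```

Once you know the delimiter, you can pick the matching CSV (Semicolon) or CSV (Tab) download, or open the file with the right settings.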

For those who have downloaded the data file prior to August 2018:

The solution for this issue is to change your computer's local 'Delimiter' settings. Please refer to the instructions below, as appropriate for your operating system.

Windows-7

Step 1: Click the Start button, and then click Control Panel

Step 2: Click on 'Clock, Language and Region' menu section.


Step 3: In this sub-menu, click 'Region' to display the Region dialog box.


Step 4: Click the Additional Settings button to open the Customize Format dialog box.


Step 5: Change your settings to match those shown here:

Windows-10

The instructions for Windows 10 are identical to Windows 7, with one exception:

Step 1: Locate the Control Panel by typing 'Control Panel' into the Start Menu search bar.

A Participant's Reaction Times Look Odd


'One of my participants' reaction time metrics are much shorter/longer than I think should be possible'

Explanation:
In order to accurately measure and record a participant's reaction times, Gorilla must be the active window at all times while the participant is undertaking your Task. If the participant navigates away from Gorilla while undertaking your Task (by switching to a different tab or browser, or by opening and using a different programme), this can lead to inaccurate recording of their reaction times.

This behaviour will be clearly discernible to you in your metrics as a distinct set of reaction times: typically a reaction time much longer than usual followed by a series of shorter reaction times (See the image example below).

Participant behaviour such as this is an example of 'divided attention'. It indicates that the participant is 'distracted' and not paying full attention to your Task. As such, you will probably want to take this into account in your analysis, or else redesign your task to mitigate such behaviour (see our suggestions below).

You will likely only encounter this behaviour if your task has lots of time-limited screens back to back, i.e. screens which are set to auto-advance the participant after a set time and which do not require participant input/response in order to advance.

Example of typical 'distracted-participant' reaction time pattern:

Solution:
While you cannot force participants to stay focused on your task there are improvements and changes you can implement in your tasks to reduce the likelihood of such participant behaviour.

Here are some suggestions:

  • In your instructions before your task, ask participants to keep Gorilla as their active window. Explain to your participants how it may affect their recorded results if they switch tabs or open and use other programs while undertaking your task.
  • Avoid using multiple time-limited screens back to back (multiple screens which auto-advance after a set time). This has a tendency to discourage participants from engaging if they realise they do not need to interact with your task in order to complete it. Instead, split time-limited sections up into blocks of trials with break intervals in between. Make sure these break intervals are advanced manually (for example, with a continue button).
  • When using time limits on response screens (to auto-advance participants who answer too slowly), make sure you pilot different time limits. Setting a limit which is too short for the average participant can make the task too difficult, causing participants to give up. Again, split time-limited sections up into blocks of trials with manually advanced break intervals in between. This allows participants to 'catch their breath' if the task has become too difficult for them.
  • In your experiment tree use performance branching to reject participants early on who do not meet your threshold for accuracy on your task. You can find an example of performance branching here.
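When analysing your data, the 'distracted participant' pattern described above (one very long reaction time followed by implausibly short ones) can be flagged programmatically. A minimal Python sketch; the thresholds and data are assumptions for illustration, not Gorilla values:

```python
def flag_distracted_runs(reaction_times_ms, long_ms=5000, short_ms=200):
    """Return indices where a very long reaction time is immediately
    followed by an implausibly short one: the signature of a participant
    tabbing away and then racing through the backlog of screens.

    The thresholds are illustrative assumptions; tune them to your task.
    """
    flagged = []
    for i in range(len(reaction_times_ms) - 1):
        if reaction_times_ms[i] > long_ms and reaction_times_ms[i + 1] < short_ms:
            flagged.append(i)
    return flagged

# Trial 3 (index 2) looks like the participant switched away, then rushed.
print(flag_distracted_runs([640, 712, 18250, 45, 60, 655]))
```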

My metrics appear 'Out of order'


'Some metrics appear in a different order in my data between participants'

Explanation:
The most common occurrence of this issue is found in Questionnaire metrics.

Downloaded metrics are, among other things, ordered by UTC timestamp, which is the time the metric is received by our database. Because all of the responses in a Questionnaire are collected and uploaded simultaneously, individual responses can sometimes arrive at the database at slightly different times. This can result in one participant's responses appearing in a different order to another's. The same can occur for tasks, where metrics that are uploaded very close together can sometimes appear in a different order than expected.

Solution:
In both cases, there is no change necessary nor cause for concern.

If you wish, you can reorder your metrics within Excel based on Local Timestamp (the time the metric was initially recorded on the respondent's device) rather than UTC, as this may mitigate the issue.
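That reordering can also be done outside Excel. A minimal Python sketch using only the standard library; the column names and data here are illustrative, so check the headers in your own download:

```python
import csv
import io

# Two questionnaire responses whose rows arrived at the database out of
# order; sorting on the device-side timestamp restores the response order.
raw = ("Question Key,Response,Local Timestamp\n"
       "age,34,1534771205000\n"
       "consent,yes,1534771201000\n")

rows = list(csv.DictReader(io.StringIO(raw)))
rows.sort(key=lambda r: int(r["Local Timestamp"]))
print([r["Question Key"] for r in rows])
```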

Missing Metrics


There are a couple of reasons why it can look like you have missing metrics:

  1. Check you have 'Included' all participants on your participants page.
  2. Check you are downloading data from the right 'Version'.
  3. Are you sure the participant completed all parts of your Experiment?
  4. Did you use any Experiment 'Requirement' settings?
  5. Other issues?

  1. Have you checked to make sure all the necessary participants are 'included'?

When a participant is 'included' their data is added to your available metrics download. A participant is included automatically when they reach a finish node and Complete your experiment. However, by default, participants who are still 'live' i.e. still working through your experiment, are not included and their data won't appear in your download. If you want to include the data from participants who are still live, go to the Participants tab on your experiment, click the 'Actions' button on a participant and select 'Include.' Alternatively, you can use the 'Force Include All' option to include everyone. Note for Pay-per-Participant users: remember that including a participant consumes a participant token and this process is irreversible. Make sure you only include participants you really want the data for and purchase more tokens if you need to.

  2. Are you looking at the right version of your experiment's data?

Participant data is associated with a version of an experiment. If you made any changes to your experiment during data collection, your data will be split across the different versions of your experiment. For example, your overall participant count may be 40, but 20 of them were collected in version 2 of your experiment and another 20 in version 3. To gather the data for all forty participants, you would need to download the data from both version 2 and version 3. To change the version you are currently downloading from, go to the Data tab and select the appropriate version from the Version Picker. To see which versions of your experiment your participants saw, go to the Participants tab and review the contents of the Version column.
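If your data is split across versions, combining the downloads afterwards is straightforward, since the headers of the two files should match. A hedged Python sketch (the file contents and column names are invented for illustration):

```python
import csv
import io

# Two downloads of the same node, one per experiment version; the headers
# match, so the rows can simply be concatenated.
version_2 = "Participant Public ID,Score\nP01,12\nP02,9\n"
version_3 = "Participant Public ID,Score\nP03,15\n"

combined = (list(csv.DictReader(io.StringIO(version_2)))
            + list(csv.DictReader(io.StringIO(version_3))))
print(len(combined))
```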

  3. Are you sure the participant completed all parts of your experiment?

Not every participant will complete the whole of your experiment! At any time, a participant has the right to withdraw, and you will receive no notification of this other than a sudden stop in the metrics and the participant status remaining 'Live'. If a participant's data seems to stop part-way through a task, check to see if data from that participant appears in any later stages of your experiment: try to find that participant's unique Private or Public ID in later questionnaires/tasks. If you can't find them there, then they most likely decided to leave! All experiments will experience some form of attrition and, unfortunately, there is little that can be done to prevent this.

  4. Did you use any Experiment Requirement settings?

You can choose to restrict participation in your experiment via the requirements set on your experiment's recruitment page, i.e. restrictions based on device type, browser type, location or connection speed. Participants who enter your experiment but fail to meet your specified requirements will not see any of your experiment and will therefore have no metrics recorded.

If you wish to count the number of participants being rejected by your selected experiment requirements, place a Checkpoint node directly after the start node(s) of your experiment. For Pay-per-Participant account holders, we recommend using this method if you wish to determine which 'Live' participants to reject and which to include.

  5. Have you tried all the solutions described above?

If you have reviewed and tested all of the solutions listed on this page and they have not resolved your issue, contact us via our contact form and we'll look into it for you as a matter of priority.

Duplicate Metrics


As of October 2017, improvements to the system for recording metrics mean that duplicates are no longer loaded into your results. You should no longer see duplicate metrics appearing in your data download for participant data gathered from October 2017 onwards. Note that we still carry out the safety check described below to make sure that your collected data is stored successfully.

If you believe you are still experiencing this problem please get in touch with us via our contact form


When the browser is uploading metrics to the server, it expects to receive a message back saying the metric was uploaded successfully. If it doesn't receive this message, it will retry after a timeout as a failsafe. Sometimes the server is just being a bit slow, or the participant's internet connection is unreliable, and so while the first metric did get stored just fine, the browser thinks it might not have gotten there and tries again. This then results in two entries of the same metric. In these cases, we think it's most scientifically appropriate to simply disregard the later values.

The participant hasn't seen the trial twice; it is simply that the metrics have been uploaded to the server twice. The metrics are identical, so just delete one of the rows.
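Cleaning out such verbatim duplicate rows can also be scripted. A minimal Python sketch; the data and column names are illustrative:

```python
import csv
import io

# A metrics download where one row was uploaded twice; the repeat is
# identical, so keep the first copy of each row and drop the rest.
raw = ("Trial Number,Response,Reaction Time\n"
       "1,left,432\n"
       "1,left,432\n"
       "2,right,387\n")

seen, deduped = set(), []
for row in csv.reader(io.StringIO(raw)):
    key = tuple(row)
    if key not in seen:
        seen.add(key)
        deduped.append(row)
print(deduped)
```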

Repeated Trials


In October 2017, a rare edge case was found where a participant with a poor internet connection could refresh on the final screen of a task, while the next part of the experiment was loading, and receive the final screen again. This edge case has now been resolved and should no longer occur.

As of August 2017, we have improved how Gorilla records a participant's progress through a task. As a result, it should no longer be possible for participants to see a trial twice due to losing connection with the server (e.g. a poor internet connection), and the metrics data should no longer contain repeated trial metrics.

If you believe you have found repeated trials in your data where you are not expecting them, please get in touch with us via our contact form


For Data collected before August 2017:

If a participant's connection fails during the experiment, they can fail to synchronise their current progress through the task. When they then refresh the page, they go back to the last point at which their progress was synchronised, which may be earlier in the task (typically, the start of the last trial). This can lead to some trials appearing twice or metrics appearing to be out of order.

In this situation the participant has seen the trial twice: because the participant's connection failed, their progress wasn't saved to the server, and the server had no way to know where the participant was. In these cases, we think it is best to use the responses from the first exposure to the trial.
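Keeping only the first exposure to each trial can be sketched like this in Python; the field names and data are invented for illustration:

```python
# Rows from a pre-August-2017 task where a refresh repeated trial 8;
# keep only the first exposure to each trial number.
metrics = [
    {"trial": 7, "response": "left", "rt": 512},
    {"trial": 8, "response": "right", "rt": 498},
    {"trial": 8, "response": "left", "rt": 301},  # repeat after a refresh
]

first_exposure = {}
for row in metrics:
    # setdefault keeps the first row seen for each trial number.
    first_exposure.setdefault(row["trial"], row)
print(sorted(first_exposure))
```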

Other Issues


In the first instance, try using Gorilla in Chrome. We are committed to supporting all browsers, but some bugs may get through our testing.

If you find a bug (which then doesn't happen in Chrome), we would be immensely grateful if you could fill out the Support Contact Form with details of the bug and which browser you were using, as this will help us address the issue. We may not be able to fix it right away, so use Chrome in the meantime, but we will get it fixed as soon as possible.