
Eye Tracking in Gorilla

  • Overview
  • Setup
  • Metrics
  • Analysis
  • Analysis of Eye Tracking Data using R
  • Eye Tracking FAQs
Warning

You're viewing the support pages for our Legacy Tooling and, as such, the information may be outdated. Now is a great time to check out our new and improved tooling, and make the move to Questionnaire Builder 2 and Task Builder 2! Our updated onboarding workshop (live or on-demand) is a good place to start.

Overview


Welcome to the Eye Tracking in Gorilla page!

Navigate through the menu for information on how to understand your eye tracking data, prepare it for analysis, and find answers to the most commonly asked questions.


Setting up the Eye Tracking Zone


Detailed instructions on how to set up your Eye Tracking Zone are available on our Task Builder Zones guide page.


Metrics


Downloading your eye tracking data

When you download your data, you will (as standard on Gorilla) receive one data file which will contain all of your task metrics for all of your participants. This will contain summarised eye-tracking data. You will receive information on the absolute and relative time participants spent looking at each quadrant and each half of the screen. Screen quadrants are represented by the letters A, B, C and D (where A = top-left, B = top-right, C = bottom-left, and D = bottom-right). For many experiments, this will be the only eye tracking data you need.

If you would like to download the full coordinate data for the eye tracking zone, you will need to select this manually in the configuration settings, under Advanced Data Collection Settings. You will receive the full eye-tracking data in separate files, with one file per participant (all contained inside a zip file). You can also access these files via a unique URL for each participant, which will be contained in your main data file; when you preview a task, this is the only way to obtain your eye-tracking data. Eye-tracking files contain a lot of raw data, so the guidance below is provided to help you understand them:

  • The type column (column F) says what type of record each row is: a flag denoting the start or end of a screen, a zone, or a prediction. The screen start and end rows simply show you when each screen started and finished, and the zone rows give you the coordinates of each zone on that particular screen.
  • Each prediction row corresponds to a single eye-tracking sample for a given participant. The eye tracking runs as fast as it can, up to the refresh rate of the monitor (normally 60Hz), so under ideal conditions you should get about 60 samples per second.
  • For each prediction, we give you the raw x and y pixel coordinates (x_pred and y_pred), which is where the eye tracker thinks the participant is looking. This is in normal screen space (so 0,0 is the bottom left of the screen, +x is moving right, +y is moving up). For each zone, you also get zone_x, zone_y, zone_width and zone_height, which give you the bounds of that zone on the screen in the same coordinate space. You can use these to determine which zones the participant was looking at (see the sketch after this list).
  • We also give the exact same data in what we call 'normalised' space. The main issue with the raw data is that you cannot compare two participants who are using differently sized screens, so we also normalise coordinates into a unified space. The Gorilla layout engine lays everything out in a frame that is always in a 4:3 ratio, and then makes that frame as big as possible. The normalised coordinates are relative to this frame, where 0,0 is the bottom left of the frame and 1,1 is the top right of the frame. The normalised coordinates are comparable between different participants - 0.5,0.5 will always be the centre of the screen, regardless of how big the screen is.
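
As a concrete illustration of the raw coordinate space, here is a minimal R sketch that checks which samples fall inside one zone's bounds. It assumes the raw file for one participant has been read into 'data' with read.csv(), and that the zone of interest is named 'Zone2' (as in the worked example further down this page); adjust the zone name to match your own task.

# prediction rows and the zone's coordinate record
preds <- data[grepl('prediction', data$type), ]
zone  <- data[grepl('Zone2', data$zone_name), ][1, ]

# TRUE where the predicted gaze point lies inside the zone's bounds (pixel space)
in_zone <- preds$x_pred >= zone$zone_x &
           preds$x_pred <= zone$zone_x + zone$zone_width &
           preds$y_pred >= zone$zone_y &
           preds$y_pred <= zone$zone_y + zone$zone_height

# proportion of samples spent looking at the zone
mean(in_zone, na.rm = TRUE)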

A more detailed explanation of getting and processing the data can be found below.

Additionally, on our Data Analysis page we offer a detailed walkthrough of analysing your eye tracking data with R.

Getting the Data

This will depend on whether your Experiment is in active data collection (i.e. recruiting) or if you are testing a constituent Task in Preview.

Data in Preview:

  • At the end of previewing a task, or via the menu in the bottom right corner, you can download one spreadsheet.
  • This will contain the summary of the task data (by screen and trial), but not the eye tracking data.
  • To retrieve the eye tracking data, there is a URL for each collection segment and each calibration in the response column of rows with the ‘eye_tracking’ Zone Type (see the sketch at the end of this section for one way to extract and download these files).

Data in Experiment:

  • In the data tab of the Experiment, you have to generate your data through ‘Download Experiment Data’ or by clicking on the relevant task itself.
  • You will then be able to download a zip folder, which will contain the main experiment data, and separate eyetracking/calibration files for each participant.
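
If you have many participants, you can pull the raw-data URLs out of the main data file programmatically. Below is a minimal R sketch, assuming the main task data has been exported as CSV and that the Zone Type and response columns appear as 'Zone.Type' and 'Response' after read.csv(); the exact column names, the 'task-data.csv' file name and the downloaded file extension are assumptions to check against your own download.

# read the main task data file (hypothetical file name)
main <- read.csv('task-data.csv')

# keep the response cells for eye tracking zones -- these hold the file URLs
urls <- main$Response[main$Zone.Type == 'eye_tracking']
urls <- urls[!is.na(urls) & urls != '']

# download each file into the working directory (extension may differ)
for (i in seq_along(urls)) {
  download.file(urls[i], destfile = paste0('eyetracking-', i, '.xlsx'), mode = 'wb')
}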

Analysis

Once you have your data, this section describes the relevant columns you will need in order to analyse it.

The metrics output by Gorilla’s Eye Tracking Zone are in long form, with each row representing WebGazer’s prediction of where the participant is looking on the screen. These rows are denoted by ‘prediction’ in the ‘type’ column.

Prediction rows

For predictions, the key variables for each sample/row are:

  • ‘x_pred’ & ‘y_pred’
    • Predicted gaze location in pixels (these will vary between participants with different window and screen sizes).
  • ‘x_pred_normalised’ & ‘y_pred_normalised’
    • These are predictions normalised to the window size of the participant, so they can be used to compare between participants.
  • ‘convergence’
    • This is the mean convergence value (from the last 10 iterations) for fitting the facial model (clmtrackr). It represents the model’s confidence in finding a face (and therefore in accurately predicting eye movements).
    • A value under 0.5 indicates that the model has probably converged. Convergence is a very high bar when working with real data; ‘face_conf’ is usually a better indication.
  • 'face_conf'
    • The Support Vector Machine (SVM) classifier score for the face model fit. The SVM rates how strongly the image under the model resembles a face, from 0 (no fit) to 1 (perfect fit). Values over 0.5 are ideal.
  • 'time_stamp'
    • This indicates the absolute timestamp recorded for each prediction, without any adjustment for frame rendering. It represents the time in milliseconds between when the current metric was uploaded to the file and the start time of the screen.
    • A prediction is requested every 10ms (100Hz), but the recorded timestamp may be slightly different.
    • WebGazer.js does not provide a consistent sampling rate, as there is a slight, variable delay in generating predictions, depending on the power of the participant’s computer and browser. On an average set-up a sample is taken every 10ms, with around 3ms of variability, and occasionally (once every 100 or so samples) a longer interval of ~20ms.
    • In tools that require a fixed sampling rate, calculate an average interval (usually 10ms) and generate a dummy column incrementing each sample by this average (see the sketch after this list).
  • 'time_elapsed'
    • This indicates the time that has passed since the initialisation of the WebGazer module. It doesn't always coincide with the start of the screen (i.e. the time_stamp metric), but it is still a fixed point in the task history.
    • This metric indicates what data were used (and when, in the past, they were generated) to create the gaze prediction.
    • If you're just generating a heatmap, then time_stamp will do fine. If you're trying to delineate which quadrants or areas participants were looking at and for how long, you may want to make use of the time_elapsed data instead.
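
For toolboxes that expect a regular sampling rate, a dummy time column can be built as described above. Here is a minimal R sketch, assuming the prediction rows are already in a data frame called 'preds' and that a nominal 10ms interval is appropriate for your data ('time_fixed' is a made-up column name):

# optionally keep only samples where the face model fit was good
preds <- preds[preds$face_conf > 0.5, ]

# order samples within each screen and assign evenly spaced 10ms timestamps
preds <- preds[order(preds$screen_index, preds$time_stamp), ]
preds$time_fixed <- ave(preds$time_stamp, preds$screen_index,
                        FUN = function(t) (seq_along(t) - 1) * 10)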

Collection Screens and Zones

Metric files are also broken up into ‘collection screens’. These represent the screens being shown in Gorilla and act as timepoints for data collection (different trials, for example).

In the ‘type’ column, the beginning and end of these timepoints are denoted by ‘new collection screen’ and ‘End of Collection Screen’. On screens you are also able to set up content zones, the coordinates of which are recorded in the metrics before tracking samples are collected; these can represent the locations of items placed in the experiment builder. They have origin-point coordinates, plus a height and width. You can use these to calculate the occupancy of fixations in these zones.

There is also a column called ‘screen_index’, which gives a numerical index for each screen on each row; this can be used to filter predictions.

Calibration/Validation files

There is a separate file for each validation and calibration, for each participant. The format of these files differs somewhat from the eyetracking collection files.

The gaze predictions are not included here, as they cannot be made until the eye tracker has been trained/calibrated.

Rows containing ‘validation’ in the ‘type’ column have the calibration point coordinates in real and normalised format (columns: point_x, point_y, point_x_normalised, point_y_normalised). There is a row for each sample taken at each point.

Rows containing ‘accuracy’ in the ‘type’ column have the validation information.

The relevant (centroid) columns are listed below; a short sketch using them follows the list:

  • mean_centroid_x and mean_centroid_y
    • Both contain the X and Y coordinates of the average centroid based on validation predictions.
  • mean_centroid_x_normalised and mean_centroid_y_normalised
    • Same as above, but normalised to be comparable between participants.
  • SD_centroid_x and SD_centroid_y
    • The standard deviation of the validation data for each centroid.
  • SD_centroid_x_normalised and SD_centroid_y_normalised
    • Same as above, but normalised to be comparable between participants.
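
These columns can be used as a rough quality check before you analyse the main data. Below is a minimal R sketch, assuming one participant's calibration/validation file has been read into 'calib' with read.csv(); the column names follow the list above.

# keep the accuracy rows, which summarise the validation
acc <- calib[grepl('accuracy', calib$type), ]

# average spread of validation samples around each centroid, in normalised
# units -- smaller values suggest a more precise calibration
mean(acc$SD_centroid_x_normalised, na.rm = TRUE)
mean(acc$SD_centroid_y_normalised, na.rm = TRUE)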

Pointers for analysing data

For analysis toolboxes you need to gather the actual data files from your experiment; these are stored as URL links in the main experiment metric spreadsheets. You will need to download these, and then export them to CSV, making sure the timestamps are printed out in full.

You can use a combination of the ‘screen_index’ and ‘type’ columns to filter data into a format usable with most eyetracking analysis toolboxes.

Using your preferred data processing tool (R, Python, Matlab etc.), filter the data to rows containing ‘prediction’ and then use screen_index to separate each trial or timepoint of data capture.

The data produced by WebGazer and the Gorilla experiment builder work best for Area of Interest (AOI) style analyses. This is where we pool samples into different areas of the screen and use this as an index of attention (a short sketch is given below).
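
Here is a minimal R sketch of this kind of AOI analysis, using dplyr and assuming the prediction rows are in a data frame called 'preds'. It pools samples into the four screen quadrants (A-D, as in the summary metrics) using the normalised coordinates; for your own stimuli you would replace the quadrant boundaries with the zone coordinates recorded in the file.

library(dplyr)

aoi_summary <- preds %>%
  mutate(aoi = case_when(
    x_pred_normalised <  0.5 & y_pred_normalised >= 0.5 ~ 'A (top-left)',
    x_pred_normalised >= 0.5 & y_pred_normalised >= 0.5 ~ 'B (top-right)',
    x_pred_normalised <  0.5 & y_pred_normalised <  0.5 ~ 'C (bottom-left)',
    TRUE                                                 ~ 'D (bottom-right)'
  )) %>%
  group_by(screen_index, aoi) %>%
  summarise(n_samples = n(), .groups = 'drop') %>%
  group_by(screen_index) %>%
  mutate(proportion = n_samples / sum(n_samples)) %>%   # share of samples per AOI, per screen
  ungroup()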

Due to the predictive nature of the models used for webcam eye tracking, the estimates can jump around quite a bit; this makes standard fixation and saccade detection a challenge in many datasets.

Toolboxes for data analysis

  • R: ‘Saccades’
  • R: ‘Gazepath’
    • Toolbox for converting eyetracking data into Fixations and Saccades for analysis.
    • Simply needs X & Y, estimated distance (we suggest using a dummy variable) and a trial index.
  • R: ‘eyetrackingR’
    • Area of Interest (AOI) based tracking. Here you specify windows of interest and the toolbox analyses data based on whether the gaze falls within these AOIs or not.
    • Various tools available for: Window analysis (i.e. in a certain window of time where did people look), Growth Curve Analysis (i.e. modelling timecourse of attention towards targets), Cluster analysis (identify spatio-temporal clusters of fixations in your data)
  • R: Tutorial for reading in data
    • Note: you have to use the ‘add_aoi’ function to convert X,Y data into AOI data
  • Python: ‘Pygaze Analyzer’
    • Basic tool for visualising: Raw data, Fixation maps, Scanpaths, Heatmaps

Analysis of Eye Tracking Data using R

This guide contains code for using R to analyse your eye-tracking data with the Saccades package from GitHub. For information about getting and processing your eye tracking data, please consult the Metrics section of this page.


To use the script below, copy and paste everything in the box into the top left-hand section of your new RStudio script. Then follow the instructions written in the comments of the script itself. Comments are marked with hashtags (#).

library("devtools")
install_github("tmalsburg/saccades/saccades", dependencies=TRUE)
install.packages('tidyverse')
install.packages('jpeg')
library('saccades')
library('tidyverse')
library('ggplot2')
library('jpeg')

#Load in file -- this is a single trial of freeviewing 
data <- read.csv('Documents/puppy-1-2.csv')
#Drop rows that are not predictions 
preds <- data[grepl("prediction", data$type),]

#Make dataframe with just time, x,y and trial columns 
preds_minimal <- preds %>%
  select(time_stamp, x_pred_normalised, y_pred_normalised, screen_index)
preds_minimal <- preds_minimal %>%
  rename(time = time_stamp, x = x_pred_normalised, y = y_pred_normalised, trial = screen_index) 

#visualise trials -- note how noisy the predictions are 
#it is difficult to tell what is going on though without seeing the images 
ggplot(preds_minimal, aes(x, y)) +
  geom_point(size=0.2) +
  coord_fixed() +
  facet_wrap(~trial)

# let's align it with the stimuli we had placed
img <- readJPEG('Documents/puppy.jpg') # the image 

#but we need to align it with our eye coordinate space, fortunately we have that in our 'zone' rows
zone <- data[grepl("Zone2", data$zone_name),] # Zone2 was our image zone 

# we extract coordinate info 
orig_x <- zone$zone_x_normalised
orig_y <- zone$zone_y_normalised
width <- zone$zone_width_normalised
height <- zone$zone_height_normalised

# now we add this image using ggplot2 annotation raster with coordinates calculated for the image
m <- ggplot(preds_minimal, aes(x, y)) +
  annotation_raster(img, xmin=orig_x, xmax=orig_x+width, ymin=orig_y, ymax=orig_y+height) +
  geom_point()

# If you look at the image it makes a bit more sense now 

# put on some density plots for aid 
m + geom_density_2d(data=preds_minimal)


# But this is not all we can do, let's try extracting some fixation data! 
#Detect fixations 
fixations <- subset(detect.fixations(preds_minimal), event=="fixation")

#Visualise diagnostics for fixations -- again note the noise
diagnostic.plot(preds_minimal, fixations)


# plot the fixations onto our ggplot, with lines between them 
m + geom_point(data=fixations, colour="red") + geom_path(data=fixations, colour="red")


#as you can see, this is pretty rough and ready, but this hopefully gives you an idea of how you can visualise eye tracking data

# You could filter the data using a convergence threshold, or use this value to throw out trials
preds <- preds[preds$convergence <= 10, ] 

# After running the above line, re-create preds_minimal (the select/rename steps above)
# and then try all the plotting functions again and look at the difference -- you should
# be able to see more fixations on the image 


# But if the data are generally bad for a given participant, you may need to exclude them
# Unfortunately this is an unavoidable issue with online eye tracking data at this time

# The best way to increase data quality is to give clear instructions on how to set up the camera, and to repeat validation and calibration frequently

Pressing CTRL+ENTER will run the line of code you are currently on and move you onto the next line. Making your way through the code using CTRL+ENTER will allow you to see how the dataset gradually takes shape after every line of code. CTRL+ENTER will also run any highlighted code, so if you want to run the whole script together, highlight it all and press CTRL+ENTER. You can also press CTRL+ALT+R to run the entire script without highlighting anything.

We will be adding more guides for data transformation using R soon. For more information about Gorilla please consult our support page which contains guides on metrics.



FAQ


Where can I access the raw eye tracking data?

This is only available by enabling this feature in the zone's advanced options - see Metrics above.

I have enabled advanced options and still can't find the raw data

If you are piloting the task using preview, you will have to go into the summary metric file and find the links for raw data of each trial. If you are collecting data in a study, this data will be bundled with your download.

Can I test children?

You can. However, there are two main issues: 1) the calibration stage requires the participant to look at a series of coloured dots, which can be a challenge with young children, and 2) getting children to keep their head still is more difficult. If the child is old enough to follow the calibration it should work, but you will want to check your data carefully, and you may want to limit the time you use eye tracking for.

The example R script produces an error with my data

We provide R scripts as an example of how you might investigate raw data from the eye tracking zone. We are not able to provide support for running your analysis beyond Gorilla's platform. The example is intentionally minimal and focuses on one trial; it is not sufficient for a whole study. We suggest you look at the different packages described above and follow some tutorials on them before running your own analysis.

The calibrate button is greyed out in my task

The zone only allows you to calibrate the tracker once it has detected a face in the webcam. You may need to move hair off the eyes, come closer to the camera, or move around.

Can I detect fixations, saccades or blinks?

Yes and no, but mostly no. The nature of webgazer.js means that predictions will be a function of how well the eyes are detected and how good the calibration is. Inaccuracies can come from any number of sources (e.g. lighting, webcam, screen size, participant behaviour).

The poorer the predictions, the more random noise they include, and this stochasticity prevents standard approaches to detecting fixations, blinks and saccades. One option is to use spatio-temporal smoothing -- but you need to know how to implement this yourself.
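
As an illustration of the simplest kind of smoothing, here is a minimal R sketch that applies a rolling median over time to the normalised coordinates. It assumes one trial's prediction rows, ordered by time_stamp, are in 'preds', and it covers only the temporal half of a full spatio-temporal approach.

# rolling median with an odd window size (here 5 samples)
k <- 5
preds$x_smooth <- stats::runmed(preds$x_pred_normalised, k)
preds$y_smooth <- stats::runmed(preds$y_pred_normalised, k)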

In our experience less than 30% of your participants will give good enough data to detect these things.

You will get the best results by using a heatmap, or a percentage-occupancy-of-a-region type analysis. If you are interested in knowing more, have a look at this Twitter thread.
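
If you want a quick heatmap, a minimal ggplot2 sketch like the one below (assuming prediction rows in 'preds', in normalised coordinates) is usually enough to get a feel for the data.

library(ggplot2)

ggplot(preds, aes(x_pred_normalised, y_pred_normalised)) +
  geom_bin2d(bins = 30) +       # 2D histogram of gaze samples
  coord_fixed() +
  facet_wrap(~screen_index)     # one panel per collection screen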

What do the normalised predicted coordinates mean?

We've created a mock-up image which should make this clearer (note: image not to scale). To work out the normalised X, we need to take into account the white space on the side of the 4:3 area in which Gorilla studies are presented.

[Schematic showing a visual representation of the normalised x and y coordinates relative to the 4:3 frame]

Can you provide support with data analysis?

We’ve created some materials to help you analyse your eye tracking data, which you can find on this eye tracking support page. If you have a specific question about your data, you can get in touch with our support desk, but unfortunately we’re not able to provide extensive support for eye tracking data analysis. If you want to analyse the full coordinate eye tracking data, you should ensure you have the resources to conduct your analysis before you run your full experiment.

Are there any studies published using the eye tracking zone?

We are so far aware of three published studies using the Eye Tracking zone - please let us know if you have published or are writing up a manuscript!

Lira Calabrich, S., Oppenheim, G., & Jones, M. (2021). Episodic memory cues in the acquisition of novel visual-phonological associations: a webcam-based eyetracking study. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 43, pp. 2719-2725). https://escholarship.org/uc/item/76b3c54t

Greenaway, A. M., Nasuto, S., Ho, A., & Hwang, F. (2021). Is home-based webcam eye-tracking with older adults living with and without Alzheimer's disease feasible? Presented at ASSETS '21: The 23rd International ACM SIGACCESS Conference on Computers and Accessibility. https://doi.org/10.1145/3441852.3476565

Prystauka, Y., Altmann, G. T. M., & Rothman, J. (2023). Online eye tracking and real-time sentence processing: On opportunities and efficacy for capturing psycholinguistic effects of different magnitudes and diversity. Behavior Research Methods. https://doi.org/10.3758/s13428-023-02176-4