Transcript

Jo Evershed:

Hello everyone. We have 31 attendees. Hello. Hopefully you're beginning to be able to hear me. Thank you for joining us today. Now, first things first, what I'd like you to do is open up the chat and tell us where you're coming from. So, introduce yourself, say, "Hi, I'm blah, blah, blah, blah," from wherever you're from. I will have a go at doing that now so that you can see me. But open up the chat and type in who you are. So I'm Jo from Gorilla. I love evidence-based visions.

Jo Evershed:

Felix Trudeau from the University of Toronto. Hello, Felix. Hello, Pete. Hello, Karen. Hello, Jen. Hello, Jerry, Nick, and Gita and Mave, and Dan. I think we're up to about 91 people now. Yonas.

Jo Evershed:

Okay. Now, the next thing I want you to start answering in the chat is: what made you embrace online research? You're all here to hear about online research. Hey, Sam. Nice to see you. So what made you embrace online research? COVID, lots of COVID responses. Yes, and we hear from people the whole time, COVID was the push I needed to take my research online. I embraced it because of easier access to participants, but also COVID. Yes, it is so great to be able to collect data so much more quickly than having to test people face-to-face in the lab. High quality data is obviously the future for behavioral research.

Jo Evershed:

No more underpowered samples. Hopefully, that can start to be a thing of the past. Now, the next question I want you guys to answer in the chat — we've got 108 people here now, so that's fantastic — is: what do you see as the benefits of online research? Obviously COVID was the push, but what are you hoping to get from it? We've heard a little bit about more access to participants, but diverse samples, quicker, less costly, more varied samples, scalability — lovely answers, these. You can run a bit longer. Wider participation, and what's this going to do for your research? Is it going to make your research better, faster, easier? Cross-cultural studies, less costly. Thank you so much.

Jo Evershed:

Finished data collection in two weeks. Wow, that must've felt amazing. Now, so this is great. You're answering, what are the benefits of online research? Now, final question: what challenges to research do you face that you're hoping to learn about today? So this is a great question, a general question that you can put into the chat, and our panelists will be reading them and that will help them give you the best possible answers. What you can also do, if you've got specific questions, this is the time to open the Q&A panel, which you should have access to at the bottom. So if you've got a question, make it detailed so that the... Not an essay, obviously, but a detailed, specific question. So, yeah. Fruka's done a brilliant one: how reliable is online eye tracking? Fantastic, but if you can, instead of putting that in the chat, can you put that into the Q&A panel, which is a different panel — you should also be able to access it from the bottom.

Jo Evershed:

And then as our panelists are talking, they will start answering those questions. So I think we're up to 120 attendees. That's fantastic. Thank you so much for joining us today. Hi, I'm Jo Evershed. I'm the founder and CEO of Gorilla Experiment Builder and I'm your host today. I've been helping researchers to take their studies online since 2012. So I've been doing this for a while, over nine years. For the last two years, we've also brought researchers together for our online summer conference, BeOnline, which stands for Behavioral Science Online, where pioneering researchers from all over the world share insights into online methods. Papers have their place for recording what was done, but unfortunately they aren't a playbook for how to run research successfully. And this is why we run a methods conference. Don't miss it.

Jo Evershed:

I think my colleague, Ashley, is going to put a link to BeOnline into the chat now. And if you go to the BeOnline site and pre-register for the conference — when the tickets are available, it's all completely free — you'll find out about it and you'll be able to come and have several hours' worth of methods-related conference. And we might have more stuff on eye tracking then, because the world might've moved on in three months, who knows?

Jo Evershed:

But we can't wait a year to share methodological best practice. Life is just moving too fast. So we are now convening Gorilla Presents monthly, to help researchers take studies online and to learn from the best. Now we've got a poll coming up. Josh, can you share the poll? We've got this one final question for you now, and it's: how much experience do you have with Gorilla? If you can all answer that question, that would be great. Now, as you know, today's webinar is all about eye tracking and mouse tracking online.

Jo Evershed:

We know from listening to our users that eye tracking and mouse tracking are very popular, and we thought it would be a great opportunity to bring together this vibrant community to discuss all the highs and lows of moving eye tracking and mouse tracking research out of the lab. And we've convened this panel of experts here to help us. They've been running eye tracking and mouse tracking research online for the last little while, and they're going to discuss what worked, what was challenging, and what we still need in order to do top-quality eye tracking and mouse tracking research online. So please welcome Dr. Jens Madsen from CCNY, Simone Lira Calabrich from Bangor University, Professor Tom Armstrong from Whitman College, and Jonathan Tsay from UC Berkeley. And I'm now going to let each of them introduce themselves. So, Jens, over to you.

Dr Jens Madsen:

Yeah. I'm Jens Madsen, I'm a postdoc at the City College of New York and I have a pretty diverse background: I started in computer science, I did my PhD in machine learning, and now I'm in neural engineering. So we're doing quite a diverse set of recordings, all the way from neural responses to eye movements, heart rate, skin — you know, you name it, we record everything. And we actually started a project about online education. So, we were already doing webcam eye tracking before the pandemic happened, and then the pandemic happened and we were like, "Oh, this is great, you're coming to where we are already." So that was interesting. And yeah, we're doing quite a lot of research with webcam eye tracking, collecting over a thousand people's eye movements while they watch educational videos.

Jo Evershed:

Oh, that's awesome. Fantastic. So Simone, over to you. What are you up to?

Simone Lira Calabrich:

So I'm Simone, I'm a PhD student at Bangor University, which is in North Wales, and my supervisors are Dr. Manon Jones and Gary Oppenheim. And we are currently investigating how individuals with dyslexia acquire novel visual-phonological associations — or how they learn associations between letters and sounds — and how they do that as compared to typical readers. And we've been using paired-associate learning and the looking-at-nothing paradigm in our investigation. And this is actually the first time that I've been working with eye tracking research, and because of the pandemic, I had to immediately move to online eye tracking.

Jo Evershed:

Excellent. Now, Tom, over to you.

Prof Tom Armstrong:

I'm an Associate Professor at Whitman College and I'm an affective and clinical scientist. Affective in the sense that I study the emotional modulation of attention, and clinical in the sense that I study the emotional modulation of attention in the context of anxiety disorders and other mental illnesses. And I've been using eye tracking in that work for about 10 years now, with a particular focus on measuring stress with eye tracking. And then through the pandemic, I teamed up with Alex Anwyl-Irvine and Edwin Dalmaijer to create this mouse-based alternative to eye tracking that we could take online.

Jo Evershed:

Yeah, and we're going to hear more about (INAUDIBLE). Well, Tom's going to talk much more about that later, and then I've got exciting news at the end of today. And Jonathan, sorry, last but not least, over to you.

Jonathan Tsay:

Hello everyone. I'm Jonathan Tsay, but you can call me JT, like Justin Timberlake. I'm a third-year grad student at UC Berkeley and I study how we learn and acquire skilled movements. And hopefully we can apply what we learn here at Berkeley to rehabilitation and physical therapy.

Jo Evershed:

Excellent. Fantastic. Now, one other note to the attendees here today: our panelists have put their Twitter handles next to their names — I should probably put mine in there as well, one minute. So if you want to follow any of us so that you hear what we're up to, when we're up to it, and read their latest papers and get their latest advice, do do that. Now, let's get to the meat of it. Jens, how about we start with you giving your presentation about your research, and I'll come back to you in about five minutes and make sure you cover your hints and tips. So we want to know what you've done, what worked, what was challenging, what you might do differently in hindsight.

Dr Jens Madsen:

Yeah. So, we started this online webcam eye tracking quite a few years ago. So this is — I don't know how long Gorilla has had their implementation of WebGazer, but this is pre-COVID, as I've said. I can try to share my screen somehow.

Jo Evershed:

That'd be great.

Dr Jens Madsen:

This will stop other people from sharing their screen. Is that okay?

Jo Evershed:

Yeah. Yeah, yeah, do that.

Dr Jens Madsen:

That's great. So, just connect to here. Is it possible? I'm just going to go ahead and do this. So I think the reason why you contacted me is because I came up with this paper where we use eye tracking to improve and hopefully make online education better. So we both use professional eye tracking, which is where we are comfortable, and then we thought, if we can actually make this scale, we're going to use the webcam. And you can read more about it; I'm just going to give a quick spiel about what we actually did in this study, okay?

Dr Jens Madsen:

So, we saw, a couple of years ago, that online education was increasing rapidly, and we wanted to see what the challenges of online are compared to the classroom. Much like right now: I have no idea whether any of you that are listening to me are actually there. I don't know if you're listening, if you're paying attention to what I'm saying — I have absolutely no clue. I mean, I can see the panelists, but I just don't know about anybody else. Maybe I can see the chat, if people are actually interacting. But a teacher in a classroom can actually see that, right? You can see whether or not the students are falling asleep or whatever, and they can interact with the students and change and react accordingly if they're too boring. And so we wanted to develop tools that can measure the level of attention and engagement in an online setting. Essentially, we need a mechanism to measure and react to the level of attention of students and hopefully keep them engaged in the education.

Dr Jens Madsen:

And so, essentially, we did a very simple experiment. We basically measure people's eye movements while they watch short educational videos, and then we ask them a bunch of questions about the videos. And we wanted to see whether or not we could use eye tracking, both with a professional eye tracker and with the webcam, to predict the test scores and measure the level of attention, okay? And so, I developed my own platform — sorry, Gorilla — the platform is called Elicit, and basically we use this software called WebGazer, and WebGazer basically takes the pixels of your eyes. I just learned that you are disabling the mouse. We had problems with that, mouse movements and mouse clicks, because that's how the webcam eye tracking actually works.

Dr Jens Madsen:

You can get an idea — this is just me making instructional videos for my subjects, because I can tell you that calibrating this is going to be a nightmare for people. I had over 1,000 people go through this, and I sat there and talked to people about how to calibrate it, and I got a lot of mad, mad, mad responses. So be aware of that. And you can also get an idea of the quality of the eye tracking: spatially, it's jittering about, and it's all about... Key things are light — how much light there is on your face — the quality, how close you are to the webcam, and there are a couple of other things.

Dr Jens Madsen:

So I did two experiments, one in the classroom. So I literally had students coming in after their lab session and sitting there doing this online thing. And there I can go around and show people how to do it. Key things: reflections in people's glasses are a nightmare. If you have light in the background, nightmare. The problem with webcams is that they throttle the frame rate, so depending on the light, the frame rate will just drop or go up.
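
One practical way to spot the frame-rate throttling Jens describes is to look at the spacing of timestamps in the recorded gaze stream. A minimal sketch, assuming each prediction row carries a timestamp column such as time_elapsed in milliseconds (the file and column names here are assumptions, not the actual export format):

```python
import pandas as pd

gaze = pd.read_csv("eyetracking_predictions.csv")  # one gaze prediction per row (assumed)

def effective_hz(timestamps_ms):
    """Median-based effective sampling rate for one participant."""
    gaps = timestamps_ms.sort_values().diff().dropna()
    return 1000.0 / gaps.median() if len(gaps) else float("nan")

# Participants whose webcam was heavily throttled will show very low rates here.
rates = gaze.groupby("participant_id")["time_elapsed"].apply(effective_hz)
print(rates.describe())
```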

Dr Jens Madsen:

Another thing that will happen is that it changes the contrast. All of a sudden, the model will completely move, because it's found something interesting in the background, and there you lose the eye tracking completely. And there are many of those small, finicky things that can cause this to go wrong. So in this at-home experiment that I did, I recruited over 1,000 people from Prolific and Amazon Mechanical Turk. I can tell you that Prolific was a delight to work with. I ended up using a sort of instructional video, where I literally show people how to do it, because I got so many mad emails that I had to do a video about it. Yeah. I can talk about signal quality and all that later, but those were the practical tips that I can give about using this eye-tracking software.

Jo Evershed:

That's fantastic, Jens. Could you say a little bit more about what the content of the video was? Because that sounds like such a great idea, and it's actually something we heard from the guys last month talking about auditory receptors. She was like, "I had to show them a picture of where they should put their hands on the keyboard and then they got it." So this sounds the same: you can't just write it in text, but if you show somebody a video of, like, this... Was it literally that? Like, "Here's the video, this is what it looks like, this is what's going to happen." And then they get it, right?

Dr Jens Madsen:

Yeah. So I made a cartoon, I wrote instructions. I mean, for the first hundreds of people I went in batches of 20 — nobody got it, nobody got it, nobody got it. So it's just incremental: okay, they didn't understand that. Why didn't they understand that? I don't know. And I asked my colleagues — "I understand it." Because you're there, you're like, "Well, you can see what I mean." And so, I don't know how the calibration of Gorilla works, but we have to [crosstalk 00:16:04].

Jo Evershed:

Very similar to yours. We have our [crosstalk 00:16:04] in slightly different places.

Dr Jens Madsen:

Right, yeah. Essentially, you can imagine you have this wire frame that's fitting around your face, right? And that wire frame has to be there because it's essentially finding your eyes, and it takes those pixels of your eyes and then uses a model to predict where you're looking on the screen. Now, if this wire frame, as you saw in the image, is over here, you can move your eyes as much as you want, it's not going to happen, you know? It's important, and also that you don't move around, because of that wire frame. And at this point, I had a beard. That was a huge problem, because it didn't like the shape of my face, I guess. My beard was a problem. So I literally showed them a video of me going through it and showing them: "Oh, you see, now it's going wrong because the wire frame is over there. Now, I go back. Oh, this is working. Now, I turned off the light. You can see what happens. It's wrong," you know?

Dr Jens Madsen:

And also, it's just the human interaction with the subjects, because when I get these people from Prolific and Amazon Mechanical Turk, this is just text. I'm not a person. They don't really care. They're just like, "I want to make money, I want to make money." But then, if you see a person like, "This is my research, please do well. Come on guys, do it for me," you're like, "Okay," and then they actually... People even thanked me for participating. So that was really a nice experience.

Jo Evershed:

Oh, that's fantastic. So attendees, what I want you to type into the chat now is: in terms of top tips, what was the most valuable for you? Was it, do a video instruction, because then your participants will understand what they need to do? Was it, do a video instruction, because then they'll like you, and they'll want to do your experiment for you, you as the person? Or was it, make sure you don't have men with beards, or ask them to shave first?

Jo Evershed:

So into the chat now. Which do you like? Video instructions to get better data. Video instructions are great for people watching them. Video instructions, glasses and background, get better data. So you can see, Jens, everybody is learning a lot from what you've said already. That was tremendously helpful, particularly video instructions, for all of the reasons. Excellent. So, we're now going to go over to Simone. Simone, how do you want to share what you've been doing? Because you've been taking somewhat of a different approach to eye tracking.

Simone Lira Calabrich:

Yes, let me share my screen here now. Okay. So I'm assuming that you guys can see my screen.

Jo Evershed:

Yeah, we can see that. Yeah.

Simone Lira Calabrich:

Okay. So first of all, I'd like to thank you guys for inviting me to this webinar. I'll talk a little bit about my personal experience as a Gorilla user — it's also my first time doing eye tracking research. And I'll try to give you a couple of tips as well on what you could do to get high-quality data. There's going to be, I think, some overlap with what Jens just mentioned. Okay.

Simone Lira Calabrich:

So as I briefly mentioned in my introduction, in our lab we're investigating the different processes underpinning acquisition of novel letter-sound associations. And our aim with that is to better understand how we bind visual and phonological information together and what exactly makes this process so effortful for some readers.

Simone Lira Calabrich:

So, in Gorilla, we used a paired-associate learning paradigm in one of the tasks. As you can see in the demonstration, in each trial there were three shapes on the screen. Participants would first learn which words would go with which one of the shapes, and then they would be tested on their ability to recognize the pairs. After presenting the bindings, we play one of the three words from each trial. We then present a blank screen, and then we show the participants the three pairs again. And what we do is we track participants' looks during the blank screen presentation to see if they will visually revisit the screen locations that were previously occupied by the target.

Simone Lira Calabrich:

The rationale behind this is that sometimes, when we are trying to remember something, we might look at the spatial location where that piece of information was presented. We do that even if the spatial location is now empty, right? So this task that we administered in Gorilla is an attempt to replicate the findings from a previous, similar eye tracking study done by my supervisors, Jones and colleagues — a similar paradigm using paired-associate learning and looking-at-nothing as well, in typical and dyslexic readers.

Simone Lira Calabrich:

So, one of the things — and this has a lot to do with what Jens was mentioning before — one of the things that I would strongly suggest you check when you're pre-processing your eye tracking data in Gorilla is the face_conf values. The values in this column range from zero to one, and what they measure is how strongly the image under the model actually resembles a face. So one means that there was a perfect fit and zero means that there was no fit, as you can see here in the illustration. According to Gorilla's recommendation, values that are over 0.5 are ideal.

Simone Lira Calabrich:

And the reason why I think it's so important to check this carefully is because some of your participants might move their heads during the task, as Jens was mentioning before, or they might accidentally cover their faces if they're bored, or something like that; they might put their glasses on or take their glasses off during the experiment; there might be some changes in the lighting conditions as well. So a lot of things can happen mid-experiment, and then their faces will no longer be detected. So it's important that you exclude predictions that have a very low face_conf value. That's extremely important.
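
To make this kind of screening concrete, here is a minimal sketch assuming a long-format export with one gaze prediction per row and columns named face_conf and participant_id (the file name and exact column names are assumptions; check your own export):

```python
import pandas as pd

# Load the eye-tracking predictions exported from the experiment (assumed file name).
gaze = pd.read_csv("eyetracking_predictions.csv")

# Keep only predictions where the face model fit the frame reasonably well.
# 0.5 is the threshold mentioned in the talk; you may want to pilot other values.
FACE_CONF_THRESHOLD = 0.5
good = gaze[gaze["face_conf"] >= FACE_CONF_THRESHOLD].copy()

# Proportion of data surviving screening, per participant,
# so participants with mostly unusable frames can be flagged or excluded.
retained = good.groupby("participant_id").size() / gaze.groupby("participant_id").size()
print(retained.sort_values())
```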

Simone Lira Calabrich:

So one thing which we have been doing is we add a questionnaire at the beginning of the experiment, and then we ask participants about the conditions under which they will be doing the tasks. Some of the questions that I thought were relevant to eye tracking research are the ones that are highlighted here. So we ask them: in what kind of lighting will you be doing the tasks? Is it daylight? Are they going to be using artificial lighting? Are they going to be placing their laptops on their lap or on their desk? We cannot, unfortunately, force participants to place their laptops on the desk, which would be ideal, and some of them still end up placing their laptops on their laps. And we also ask them if they're going to be wearing glasses during the experiment, because we cannot always exclude participants who are wearing glasses.

Simone Lira Calabrich:

So what I do with this, based on participants' responses, is I try to generate some plots so that I can visually inspect what may be causing the poor face_conf values for some of the participants. So, overall, as you can see here, the mean value for all of the conditions was above the recommended threshold. But you can see also that the data quality was affected to some extent in some of the conditions. So, in this particular sample here, the model fit was equally fine for people wearing or not wearing glasses, but in one of the other pilots that we conducted, it was really, really poor for participants wearing glasses. So, you might think that it would be okay for you to exclude participants wearing glasses from your experiments. We cannot do that.
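
A hedged sketch of this per-condition check, assuming the setup questionnaire is stored one row per participant with hypothetical columns such as lighting, laptop_position and glasses, and merged onto the gaze predictions (again, names are illustrative, not the actual export):

```python
import pandas as pd
import matplotlib.pyplot as plt

gaze = pd.read_csv("eyetracking_predictions.csv")    # one prediction per row (assumed)
setup = pd.read_csv("setup_questionnaire.csv")       # one row per participant (assumed)
data = gaze.merge(setup, on="participant_id")

# Mean face_conf per participant, split by each self-reported setup condition.
for factor in ["lighting", "laptop_position", "glasses"]:
    means = (data.groupby(["participant_id", factor])["face_conf"]
                 .mean()
                 .reset_index())
    means.boxplot(column="face_conf", by=factor)
    plt.axhline(0.5, linestyle="--")   # recommended threshold from the talk
    plt.title(f"face_conf by {factor}")
    plt.suptitle("")
    plt.show()
```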

Simone Lira Calabrich:

The second plot suggests that natural daylight seems to be a bit better for the remote eye tracker. So what I've been trying to do is release the experiments in batches, and I try to schedule them to become available early in the morning so that I can recruit more people who are probably going to be doing the task during the day, and sometimes I just pause the experiments. Here, you can see as well that placing the computer on the lap is also not ideal, but honestly, I don't know how to convince participants not to do that. I try to ask them, I give visual instructions as well, but it doesn't always work.

Simone Lira Calabrich:

For the last one, you can see that in my experiment we have six blocks, and lots of... We have 216 trials in total across the blocks, so it's a very long experiment. And the impression that I get is that as people get tired over the course of the experiment, they start moving more, or they start touching their faces and doing things like that. So, the data quality will tend to decrease towards the end of the experiment. That's why it's important for you to counterbalance everything that you can and randomize everything. So, this is it for now. I would like to thank my supervisors as well. And I have a couple more tips which I might show you guys later if we have time. You are muted, Jo.

Jo Evershed:

Thank you so much, Simone. That was actually fantastic. So attendees, what I want you to answer there is: what, for you, was the most valuable thing that Simone said? Maybe it was face_conf, checking those numbers. Or it might've been the setup questions — just asking people what their setup is so that you can exclude participants if they've got a setup that you don't like. Or was it only running experiments in the morning, checking the integrity of face models? Or was it actually just seeing how each of those settings reduces the quality of the data — because I found that fascinating, seeing those plots where you can just see the quality of the data. Yes, the face_conf stuff is super important. Lighting wasn't important, whereas where the laptop was placed was. Yeah. So everybody's getting so much value from what you said, Simone. Thank you so much for that. So next, we're going to go to Tom Armstrong, who's going to talk to us, I think, about MouseView.

Prof Tom Armstrong:

All right. Let me get my screen share going here.

Prof Tom Armstrong:

Okay. So I'm going to be talking about a tool that I co-created with Alex Anwyl-Irvine and Edwin Dalmaijer that is an online alternative to eye tracking. And big thanks to Alex for developing the brilliant JavaScript to make this thing happen, and to Edwin for really guiding us in terms of how to mimic the visual system and bringing his expertise as a cognitive scientist to bear.

Prof Tom Armstrong:

So, as I mentioned before, I'm an affective and clinical scientist. In these areas, people often use passive viewing tasks to study the emotional modulation of attention, or, as it's often called, attentional bias. In these tasks, participants are asked to look at stimuli however they please. And these stimuli are typically presented in arrays of from two to as many as 16 stimuli. Some of them are neutral, and then some of the images are affective or emotionally [inaudible 00:27:39] charged.

Prof Tom Armstrong:

Here's some data from a task with just two images: a disgusting image paired with a neutral image, or a pleasant image paired with a neutral image. And I'll just give you a sense of some of the components of gaze that are modulated by emotion in these studies.

Prof Tom Armstrong:

And so, one thing we see is that at the beginning of the trial, people tend to orient towards any emotional or affective image. Margaret Bradley and Peter Lang have called this natural selective attention. And in general, when people talk about attentional bias for threat, or attentional bias for motivationally relevant stimuli, they're talking about this phenomenon. It's often measured with reaction time measures.

Prof Tom Armstrong:

What's more unique about eye tracking is this other component that I refer to as strategic gaze or voluntary gaze. And this plays out a little bit later in the trial, when participants kind of take control of the wheel with their eye movements. And here, you see a big difference according to whether people like a stimulus, whether they want what they see in the picture, or whether they are repulsed by it. So, you don't see valence differences with that first component, but here, in this more voluntary gaze, you see some really interesting effects.

Prof Tom Armstrong:

And so you can measure this with total dwell time during a trial. One of the great things about this measure is that, in comparison to those reaction time measures of attentional bias that have been pretty thoroughly critiqued, and also the eye tracking measure of that initial capture, this metric is very reliable. Also, it's valid, in the sense that, for example, if you look at how much people look away from something that's gross, that's going to correlate strongly with how gross they say the stimulus is. And the same thing for appetitive stimuli: how much people want to eat food that they see will correlate with how much they look at it.
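
As a rough illustration of a dwell-time measure like the one Tom describes, here is a sketch assuming gaze samples labelled with a normalised horizontal position and a simple left/right area-of-interest split; the column names and the assumed ~30 Hz sample duration are placeholders, not the authors' pipeline:

```python
import pandas as pd

samples = pd.read_csv("gaze_samples.csv")   # one gaze sample per row (assumed)

# Label each sample by which area of interest it falls in:
# a simple left/right split at the horizontal midpoint (x normalised to 0-1).
samples["aoi"] = samples["x_norm"].apply(lambda x: "left" if x < 0.5 else "right")

# Dwell time per trial and AOI = number of samples in the AOI times the sample duration.
SAMPLE_MS = 33.3   # assumed ~30 Hz webcam stream; use your measured rate instead
dwell = (samples.groupby(["participant_id", "trial", "aoi"])
                .size()
                .mul(SAMPLE_MS)
                .rename("dwell_ms")
                .reset_index())
print(dwell.head())
```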

Prof Tom Armstrong:

So, I've been doing this for about 10 years. Every study I do involves eye tracking, but it comes with some limitations. First, it's expensive. Edwin Dalmaijer has done a really amazing job democratizing eye tracking by developing a toolbox that wraps around cheap, commercial-grade eye trackers. But even with it now being possible to buy 10 eye trackers, for example, it's still hard to scale up the research — like what Jo was talking about earlier: no more underpowered research, more diverse samples. Well, it's hard to do that with the hardware.

Prof Tom Armstrong:

And then, as I learned about a year ago, it's not pandemic-proof. You've got to bring folks into the lab. You can't really do this online, although as we just heard, there are some pretty exciting options. And really, for me, webcam eye tracking is the holy grail. But in the meantime, I wanted to see if there were some other alternatives that would be ready to go out of the box for eye tracking researchers. And one tradition, it turns out, is this kind of mouse viewing, where the mouse controls a small aperture and allows you to sort of look through this little window and explore an image.

Prof Tom Armstrong:

Now, I thought this was a pretty novel idea. It turns out folks have been doing this for maybe 20 years, and they came up with some pretty clever terms, like fovea, for the way that it sort of mimics foveal vision. Also, there's been a lot of validation work showing that mouse viewing correlates a lot with regular viewing as measured by an eye tracker. So, what we were setting out to do was first to see if mouse viewing would work in affective and clinical science — to see if you'd get this sort of hot attention, as well as the cold attention that you see in just sort of browsing a webpage.

Prof Tom Armstrong:

And then, in particular, we wanted to create a tool, sort of in the spirit of Gorilla, that would be immediately accessible to researchers and that you could use without programming or other technical skills. So we did this in Gorilla, and we collected some data on Gorilla over Prolific. This is from a pilot study; we did our first study with 160 participants. And let me just show you what the task looks like. I'm going to zip ahead, because I'm a disgust researcher and you don't want to see what's on the first trial. At least you can see it blurred, but that's good enough. Okay. So you can see someone's moving a cursor-locked aperture, and there's this Gaussian filter used to blur the screen to mimic peripheral vision, and participants can explore the image with the mouse. Okay. We move on.

Prof Tom Armstrong:

Okay. So one of the great things about MouseView is that Alex has created it in a really flexible manner, where users can customize the overlay. So you can use the Gaussian blur, you can use a solid background, you can use different levels of opacity. You can also vary the size of the aperture — and this is something that we haven't really systematically varied yet. Right now, it's just set to mimic foveal vision, at about two degrees or so.
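
MouseView itself is JavaScript and runs in the browser, so the following is not its implementation — just a toy Python/Pillow sketch of the aperture-plus-blur idea described here: a blurred copy of the stimulus everywhere except a clear circular window centred on a given (mouse) position. File names and parameter values are placeholders.

```python
from PIL import Image, ImageDraw, ImageFilter

def mouseview_frame(image_path, cx, cy, radius=60, blur_sigma=10):
    """Return the image blurred everywhere except a clear circle at (cx, cy)."""
    sharp = Image.open(image_path).convert("RGB")
    blurred = sharp.filter(ImageFilter.GaussianBlur(blur_sigma))

    # Mask: white inside the aperture, black elsewhere.
    mask = Image.new("L", sharp.size, 0)
    ImageDraw.Draw(mask).ellipse(
        (cx - radius, cy - radius, cx + radius, cy + radius), fill=255
    )

    # Composite: sharp pixels show through inside the aperture, blur outside.
    return Image.composite(sharp, blurred, mask)

# Example: render one frame with the aperture at a hypothetical cursor position.
frame = mouseview_frame("stimulus.jpg", cx=320, cy=240)
frame.save("frame_preview.png")
```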

Prof Tom Armstrong:

So we've done this pilot study with about 160 people, and the first thing we wanted to see is: does the mouse scanning resemble gaze scanning? And Edwin did some really brilliant analyses to be able to answer this quantitatively and statistically. And we found that the two really converge — you can see it here in the scan paths. For example, if you look over on the right, disgust five: a really similar pattern of exploration. We blurred that so that you can't see the proprietary images.

Prof Tom Armstrong:

Now, the bigger question for me: does this capture hot attention? Does this capture the emotional modulation of attention that we see with eye tracking? Here on the left, you can see the eye tracking plot that I showed you before. Over here on the right is the MouseView plot. And in terms of that second component of gaze I talked about, the strategic gaze, we see that coming through in the MouseView data really nicely. Even some of these subtle effects — like the fact that people look more at unpleasant images the first time before they start avoiding them — so we have that approach and that avoidance in the strategic gaze.

Prof Tom Armstrong:

The one thing that's missing, maybe not surprisingly, is the more automatic capture of gaze at the beginning of the trial, because the mouse movements are more effortful, more voluntary. We've now done a couple more of these studies, and we've found that this dwell time index with the mouse viewing is very reliable in terms of internal consistency. Also, we're finding that it correlates very nicely with self-report ratings of images and with individual differences related to the images, like we see with eye gaze. So it seems like a pretty promising tool. And I can tell you more about it in a minute, but I just wanted to really quickly thank Gorilla — I'm excited about any announcement that might be coming — and my college for funding some of this validation research, and the members of my lab who are currently doing a within-person validation against eye tracking in person in the lab.

Jo Evershed:

Thank you so much, Tom. That was absolutely fascinating. A number of people have said in the chat, "I liked that." That was just absolutely fascinating, your research. I'm so impressed. What I'd love to hear from attendees: what do you think about MouseView? Doesn't that look tremendous? I'm so excited by it. Because there are limits to what eye tracking we can do with the webcam, right? I'm sure we can get two zones, maybe six, but what I think is really exciting about MouseView is that it allows you to do that much more detailed eye tracking-like research. It's a different methodology that's going to make stuff that otherwise wouldn't be possible to take online, possible. Tom, I'd never heard of this before. It sounds so exciting. It seems like such a reasonable way to investigate volitional attention in an online context. I think people have been really inspired by what you've said, Tom.

Jo Evershed:

And the exciting news for those of you listening today: MouseView is going to be a closed beta zone in Gorilla from next week. To get access to any closed beta zone, all you need to do is go to the support desk, fill out the form — "I want access to a closed beta zone," this one — and it gets applied instantly to your account. That's the case for eye tracking, and it'll be the case for MouseView. You don't need any coding to be able to use them. If they're in closed beta, it's just an indication from us that there isn't a lot of published research out there and we haven't validated it, so we say handle with care, right? Run your pilots, check your data, check it thoroughly, make more data quality checks than you would otherwise.

Jo Evershed:

With things like showing images, you can see that it's correct, right? And the data that you're collecting isn't complicated. So those are things that we don't need to have in closed beta. Until things have been published and validated, we keep the more technically complex things in closed beta. That's what that means.

Jo Evershed:

But yes, you can have access. So, MouseView is coming to Gorilla next week. And thank you to Tom and to Alex, who I think is on the call, and Edwin — they're all here today. If you're impressed by MouseView, can you type MouseView into the chat here, just so that Tom, and Edwin, and Alex get a little "whoop whoop" from you guys? Because they've put a massive amount of work into getting this done, and I think they deserve the equivalent of a little round of applause for that. Thank you so much. Now finally, over to Jonathan to talk about what you've been up to.

Jonathan Tsay:

Okay. Can you see my screen? Okay, perfect, perfect. So, my name is Jonathan, I go by JT, and I study how humans control and acquire skilled movement. So let me give you an example of this through this video.

Jonathan Tsay:

Okay, my talk's over. No, I was just kidding. So this happens every day: we adapt and adjust our movements to changes in the environment and the body. And this learning process requires multiple components. It's a lot of trial and error — your body just kind of figures it out — but it's also a lot of instruction, how the father here instructs the son to jump on this chair, and of course reward too, at the end, with the hug.

Jonathan Tsay:

And we study this in the lab by asking people to do something a little bit more mundane. So, typically, you're in this dark room, you're asked to hold this digitizing pen, you don't see your arm, and you're asked to reach to this blue target, controlling this red cursor on the screen. And we describe this as playing Fruit Ninja: you slice through the blue dot using your red cursor.

Jonathan Tsay:

On the right side, I'm going to show you some data. So, initially, when people reach to the target, controlling this red cursor — they can't see their hand — people are on target. On target means hand angle is zero, and the x-axis is time. So more reaches means you're moving across the x-axis. But then we add a perturbation: we introduce a 15 degree offset from the target. The cursor is always going to move 15 degrees away from the target. We tell you, we say, "Joe, this cursor has nothing to do with you. Ignore it. Just keep on reaching to the target." And you see here on the right, this is participants' data: people can't keep reaching through the target. They implicitly respond to this red cursor by moving in the opposite direction, and they drift off further and further, away to 20 degrees.
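
To make the perturbation concrete: in this kind of clamped-feedback design the cursor's direction is fixed relative to the target no matter where the hand actually goes, while its distance follows the hand. A toy sketch of that trial logic, with illustrative names (not the authors' actual code):

```python
import math

def clamped_cursor(target_angle_deg, hand_x, hand_y, clamp_deg=15.0):
    """Cursor moves at the hand's radial distance, but always 15 deg off the target."""
    radius = math.hypot(hand_x, hand_y)                 # how far the hand has moved
    cursor_angle = math.radians(target_angle_deg + clamp_deg)
    return radius * math.cos(cursor_angle), radius * math.sin(cursor_angle)

# Example: target straight ahead at 90 deg; wherever the hand actually is,
# the cursor is drawn 15 deg away from the target at the same distance.
print(clamped_cursor(90.0, hand_x=0.02, hand_y=0.10))
```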

Jonathan Tsay:

Eventually, they reach an asymptote, around 20 degrees, and when we turn off the feedback — when we turn off the cursor — people drift a little bit back to the target. And this whole process is implicit. Your actual hand is 20 degrees away from the target, but if I ask you where your hand is, you tell me your hand is around the target. This is where you feel your hand: you feel your hand to be at the target, while your hand is 20 degrees off the target. And this is how we study implicit motor learning in the lab. But because of the pandemic, we built a tool to test this online. And so, in a preprint released recently, we compared in-person data collected using this kind of sophisticated machinery, which typically costs around $10,000 to set up, with data collected online.

Jonathan Tsay:

And you can see on the bottom, this is the data we have in the lab. We just create different offsets away from the target, but nonetheless people drift further and further away from the target. And we have data from online, using this template we created to track your mouse movements as you reach to different targets. The behavior in person and online seems quite similar. But online research affords some great advantages — and I'm preaching to the choir here. For the in-lab results, it took around six months of in-person testing: come to the lab, collect your data. For the online results, we collected 1 to 20 people in a day, so that's a huge time-saver, and in terms of cost as well. And of course, we have a more diverse population. I just want to give a few tips before I sign off here.

Jonathan Tsay:

So, a few tips. First, instruction checks. For instance, in our study, we ask people to reach to the target and ignore the cursor feedback — just continue reaching. So an instruction check question we ask is: where are you going to reach? Option A, the target; option B, away from the target. And if you choose away from the target, then we say, "Sorry, that was the wrong answer, and please try again next time."

Jonathan Tsay:

Catch trials. So, for instance, sometimes we would say, "Don't reach to this target." The target presents itself, and we say, don't reach to the target, and if we see that participants continue to reach to the target, they might just not be paying attention, just swiping their hand towards the target. So we use some catch trials to filter out good and bad subjects. We also have baseline variability measures: reach to the target, and if we see you're reaching in an erratic way, then we typically say, "Okay, sorry, try again next time." And movement time is a great indicator, especially for mouse tracking.

Jonathan Tsay:

If, in the middle of the experiment, you go to the restroom and you come back — these are things that can be tracked using movement time. Typically it means someone might not be taking your experiment seriously, but not always. And Simone brought this up, but batching and iterating — getting feedback from a lay person to understand the instructions — has been huge for us. And last but not least, something that Tom brought up: sometimes when you see behavior that's different between in-lab and online — this is something we struggle with — is it reflective of something interesting that's different between online and in-person, or is it noise? That's something we're struggling with, but we came to the conclusion that sometimes it's just different. You're using a mouse versus a robot in the lab. So, that can be very different.
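
A hedged sketch of how screening rules like these might be applied to a trial-level data file; the column names (movement_time_ms, is_catch_trial, reached_target) and thresholds are assumptions for illustration, not the study's actual criteria:

```python
import pandas as pd

trials = pd.read_csv("reaching_trials.csv")   # one trial per row (assumed)

# Flag implausibly long movement times (e.g. a restroom break mid-trial).
too_slow = trials["movement_time_ms"] > 5000            # illustrative threshold

# Flag catch-trial failures: reaching on trials where the instruction was "don't reach".
catch_fail = trials["is_catch_trial"] & trials["reached_target"]

# Summarise per participant and flag anyone failing too many checks.
summary = (trials.assign(too_slow=too_slow, catch_fail=catch_fail)
                 .groupby("participant_id")[["too_slow", "catch_fail"]]
                 .mean())
excluded = summary[(summary["too_slow"] > 0.1) | (summary["catch_fail"] > 0.2)]
print(excluded)
```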

Jonathan Tsay:

What I'm excited about with this mouse tracking research, and how it relates to motor learning, is that typically motor learning patient research involves around 10 people, but now, because we can just send a link to these participants, they're able to access the link and do these experiments at home. We can access much larger patient populations that are typically, maybe logistically, hard to invite to the lab. And second, teaching — I'm not going to belabor this point. Third is public outreach. So, we put our experiment on the TestMyBrain website and people just try out the game and learn a little bit about their brain, and that's an easy way to collect data, but also for people to learn a little bit about themselves.

Jonathan Tsay:

Here are some open resources — you can take a screenshot. We share our template for how to implement these mouse tracking experiments online. It's also integrated with Gorilla. We have a manual to help you set it up, we have a paper, and here's a demo you can try out yourself. And last thing, I want to thank my team: Alan, who did a lot of the work coding up the experiment, my advisor, Rich Ivry, and Guy, and Ken Nakayama. They all worked collectively to really put this together, and we're really excited about where it's going. So thank you again for your time.

Jo Evershed:

Thank you so much, Jon, that was fantastic. Now, what I want to hear from the attendees: what did you like more from Jon there — the advantages of online research, the cost-saving, the time-saving — or were you more blown away by his tips for getting good quality mouse tracking data online? Tips, instruction check questions, tips, tips, more tips. And that girl who jumped onto that stool at the beginning — were you not just blown away by her? If you were blown away by her resilience in jumping up... They're entirely blown away in the chat. I thought she was tremendous. She must be about the same age as my son and he would not do that, for sure. That was something quite exciting. Jon, I want to ask a follow-up question. Your mouse tracking experiment — have you shared that in Gorilla Open Materials?

Jonathan Tsay:

Yeah, yeah. That is in Gorilla Open Materials.

Jo Evershed:

If you've got the link, do you want to drop that into the chat? Because then if anybody wants to do a replication or an extension, they can just clone the study and see how you've done it, see how you've implemented it. It's just a super easy way of sharing research and allowing people to build on the research that's gone before, without wasting time. Actually, make sure we've got a link to that as well, can you? So that when we send a follow-up email on Monday, we can make sure that everybody who's here today can get access to that. Oh, I think Josh has already shared it, Jon — you're off the hook. Excellent. We've now come to Q&A time. There are lots and lots of questions — a total of 32 questions, of which 16 have already been answered by you fine people as we go through. There are some more questions though. Edwin has a question: how do people deal with the huge attrition of participants in web-based eye tracking? Simone or Jens, can either of you speak to that one? How have you dealt with attrition?

Simone Lira Calabrich:

Yeah, it's a bit complicated, because my experiment is a very long one and participants end up getting tired and they quit the experiment in the middle of it. There is not much that we can do about it, other than just keep recruiting more participants. So we ran a power analysis, which suggested that we needed 70 participants for our study. So, our goal was to recruit 70 participants no matter what. So, if someone quits midway, we just reject that participant and we recruit an additional one as a substitute, as a replacement.

Dr Jens Madsen:

So I think, at least from my perspective, it's very different comparing stationary viewing — eye tracking of images — and, in my case, video. Video is moving constantly, right? So you can show an image, but they have to watch the whole video, and I have to synchronize it in time. And it also depends on the analysis method you use. In my case, I don't really look into spatial information; spatial information for me is irrelevant. I use correlation — how similar people's eye movements are across time — and I use other people as a reference. And in that sense the data can actually be very noisy: you can move around and it's quite robust in that sense. So it depends on the level of noise you induce in the system. In my case, because it was video, I put in auxiliary tasks — people had to look at dots, to see if they were actually there or not, things like that — just to control for those things, or else you're in big trouble because you have no clue what's happening.
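
As a rough illustration of the correlation-based idea Jens describes (each participant's gaze trace compared against the average of everyone else), here is a sketch assuming gaze positions have already been resampled onto a common time base; this is not his actual pipeline:

```python
import numpy as np

def intersubject_correlation(gaze):
    """gaze: array of shape (n_subjects, n_timepoints), e.g. vertical gaze position."""
    n = gaze.shape[0]
    isc = np.empty(n)
    for i in range(n):
        others = np.delete(gaze, i, axis=0).mean(axis=0)   # leave-one-out reference
        isc[i] = np.corrcoef(gaze[i], others)[0, 1]
    return isc

# Example with fake data: 5 subjects, 300 time points.
rng = np.random.default_rng(0)
print(intersubject_correlation(rng.standard_normal((5, 300))))
```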

Dr Jens Madsen:

And so having those extra things makes sure that they're there. And also, it turns out the attention span of an online user, at least for educational content, is around five, six minutes; after that they're gone. It doesn't matter — they're bored, they can't be bothered. And so my tasks were always around there. The videos that I showed were always five, six minutes long, three minutes long, and then some questions. But they couldn't be asked to sit still, because when you use WebGazer, you have to sit still. It depends on... You guys are using spatial tasks, right? So this would be a problem for you. For me it's fine because I use the time course. But for spatial people that's going to be an issue, because the whole thing is just going to shift. And how do you detect that, right? Do you have to somehow insert something like, now you look at this dot and now I can recalibrate my data, or something? I don't know how you guys are dealing with that. But yeah, those are the things that you need to worry about.

Simone Lira Calabrich:

Yeah. I was just going to add that, because we have a very long experiment with lots of trials, we can lose some data and it's still going to be fine, right? So I have 216 trials in my experiment. So it's not a six-minute-long one, it's a two-hour experiment. So, even if I do lose some data, relatively, it's still fine. I have enough power for that.

Dr Jens Madsen:

I mean, you still have the calibration? You do a calibration, right? And I'm assuming you do it once, right? And you have to sort of... Or do you do it multiple times?

Simone Lira Calabrich:

Multiple times. Yeah.

Dr Jens Madsen:

You have to do that.

Simone Lira Calabrich:

So we have six blocks.

Dr Jens Madsen:

Yeah, that makes sense.

Simone Lira Calabrich:

So we do it at the beginning of each block and also in the middle of each block, just to make sure that it's as accurate as possible.

Dr Jens Madsen:

You saw what I just did there, right? I readjusted myself, and this is something natural. It's like, I just need... Ah, yeah, that's better, you know? That's a problem.

Simone Lira Calabrich:

Exactly, yeah. So, yeah, that's why we do that multiple times.

Dr Jens Madsen:

And we do it even without knowing it.

Jo Evershed:

Simone, how often do you recalibrate?

Simone Lira Calabrich:

So, we have six blocks. So at the beginning of each block and in the middle of each block. So, every 18 trials.

Jo Evershed:

Okay, that makes sense. So, in previous lectures we've had about online methods, people have said a good length for an online experiment is around 20 minutes; much longer than that and people start to get tired. If you pay people better, you get better quality participants. So, that's another way that you can reduce attrition: double your fees and see what happens. People are willing to stick around longer if they're being paid well for their time.

Jo Evershed:

And then one of the researchers, Ralph Miller, from New York — he does long studies like Simone does online, and what he does is, about every 15 minutes, he puts in a five-minute break and he says, "Look, please go away, get up, walk around, do something else, stretch, maybe you need to go to the loo, maybe there's something you need to deal with, but you have to be back in five minutes." When you press next, that five minutes, I think, happens automatically. And that gives people the ability to go, "Oh, I really need to stretch and move," so that you can build an experience that is manageable for your participants.

Jo Evershed:

And so if you're struggling with attrition, the thing to do is to pilot different ideas until you find what works for your experiments. There aren't things that will work for everyone, but there are techniques and approaches that you can try out — sort of experimentation in real time — to find out what's going to work. And that can be really helpful too. Tom, there are quite a few questions about... Can you guys see the Q&As? If you pull up the Q&A panel, there were some nice ones about mouse tracking here that I think Tom might be able to answer. So one here: how viable is it to use mouse tracking in reading research, for example, asking participants to move the cursor as they read? And then similarly, Jens and Simone, there are questions about eye fixations and data quality. You can also type answers. So I think we'll run out of time if we try and cover all of those live, but maybe Jens and Simone, you can have a go at answering some of the ones that are more technical. But Tom, perhaps you could speak about mouse tracking, eye tracking, the crossover. You're muted. You're muted.

Prof Tom Armstrong:

There are so many, but let me try to do it justice. So, right now, I don't know what unique processes we get from MouseView. I'm thinking of it as being just a stand-in for that voluntary exploration that we see with the eye tracking. In terms of what that gets you beyond self-report — great question — there are some interesting ways in which self-report and eye tracking do diverge that we've found, that I can't do justice to right now. So I think that you often pick up things with self-report that you don't get with... I'm sorry, you get things with eye tracking that you don't get with self-report. For example, Edwin and I found that eye movement avoidance of disgusting stimuli doesn't habituate: people will say they're less disgusted, but then they'll continue to look away from things.

Prof Tom Armstrong:

And so sometimes there's more there than people can introspect on. About reading — Edwin took that question on. Left versus right mouse? Fascinating, I'm not sure. And then, importantly, touch screens — that is in the works. So maybe Alex can jump on that question; that's the next thing that he's working on, making this sort of work with touch screens. Right now it's just for desktop or laptop, in Chrome, Edge or Firefox.

Jo Evershed:

Anything that works in Gorilla probably might already work for touch. I don't know when, unfortunately, [inaudible 00:54:56] but I will make sure that that question gets asked next week, because by default, everything in Gorilla is touch compatible as well.

Prof Tom Armstrong:

Cool.

Jo Evershed:

I'm trying to pick out a good next question. What's the next one at the top? Can we learn something from online mouse tracking that we cannot learn from online eye tracking? Can anyone speak to that, or have you already?

Dr Jens Madsen:

What was the question? Sorry.

Jo Evershed:

Can we learn something from online mouse tracking that we cannot learn from online eye tracking? I think that there are different methods that answer different questions, right?

Dr Jens Madsen:

So there's certainly a correlation between where you look and where the mouse is, right? So this is clear. And also it depends on the task. In my case, with the video, you're not moving the mouse around to where you're looking, because you're watching a video; that's not a natural behavior. But if you [inaudible 00:55:57] of just using UI buttons and things like that, surely they're highly correlated. So, it very much depends on the task.
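
Purely as an illustration of how one might put a number on that task-dependence (again, not something shown in the webinar): a minimal sketch that correlates time-aligned gaze-x and mouse-x samples. The data format and values are assumptions; you would run it separately for each task type.

```typescript
// Hypothetical sketch: Pearson correlation between time-aligned gaze-x and
// mouse-x samples exported from an online session. Running this per task
// (passive video watching vs. button-driven UI) shows how strongly the two
// signals track each other in each case.

function pearson(a: number[], b: number[]): number {
  const n = Math.min(a.length, b.length);
  const meanA = a.slice(0, n).reduce((s, v) => s + v, 0) / n;
  const meanB = b.slice(0, n).reduce((s, v) => s + v, 0) / n;
  let cov = 0, varA = 0, varB = 0;
  for (let i = 0; i < n; i++) {
    const da = a[i] - meanA;
    const db = b[i] - meanB;
    cov += da * db;
    varA += da * da;
    varB += db * db;
  }
  return cov / Math.sqrt(varA * varB);
}

// Made-up samples purely for illustration.
const gazeX  = [120, 180, 240, 300, 320, 310];
const mouseX = [100, 170, 250, 290, 330, 315];
console.log(`gaze/mouse correlation: ${pearson(gazeX, mouseX).toFixed(2)}`);
```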

Jo Evershed:

That's really good. We are now five minutes to six, so I'm going to wrap this up. There are lots and lots more questions, but I don't think we can get through all of them today. Hopefully, we've managed to answer 24 questions, so I think we've done a really, really great job there. Actually, there's one more which I think Simone might be able to answer quickly: what's the relationship between the face config and the calibration accuracy measure? Did you look at both of those?

Simone Lira Calabrich:

No, I didn't actually investigate that, but I did similar plots for the calibration analysis in Gorilla as well. They were very similar to what I demonstrated to you guys. So, depending on whether participants were wearing glasses or not, there were some lower values for that. What I try to do is use the five-point calibration in Gorilla, and if calibration fails for even one of the points, the calibration has to be reattempted. So, I'm trying to be very strict in that sense. That's my default mode now. If it fails on just one of the points, I think it's best to recalibrate, which can be quite frustrating for some participants, but it ensures better data quality.
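
As an illustration only (not Gorilla's eye-tracking API or Simone's actual scripts): a minimal sketch of that strict policy, where the whole five-point calibration is repeated if any single point falls below an accuracy threshold. The runCalibration helper, threshold, and attempt limit are all assumptions.

```typescript
// Hypothetical sketch of a strict five-point calibration policy: accept the
// calibration only if every point passes, otherwise retry (up to a limit).

interface CalibrationResult {
  pointAccuracies: number[];  // one accuracy score per calibration point
}

// Stand-in for whatever your eye-tracking setup actually provides; here it
// just returns random per-point accuracies so the sketch is runnable.
async function runCalibration(points: number): Promise<CalibrationResult> {
  return { pointAccuracies: Array.from({ length: points }, () => Math.random()) };
}

async function calibrateStrictly(
  threshold = 0.8,
  maxAttempts = 3,
): Promise<CalibrationResult | null> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await runCalibration(5);  // five-point calibration
    if (result.pointAccuracies.every(acc => acc >= threshold)) {
      return result;                         // every point passed: accept
    }
    console.warn(`Calibration attempt ${attempt} failed; retrying…`);
  }
  return null;  // repeated failure: exclude or ask the participant to adjust setup
}
```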

Jo Evershed:

Yeah, that was great. Now, I have one last question for the panel, which is: what do you see the next year bringing to this area of research? And we're going to do this in reverse order, starting with JT.

Jonathan Tsay:

I'm going to say that, at least in my field, I'm most excited about larger-scale patient research; that's number one. Reaching individuals who are typically harder to reach. So, larger scale in that sense, but another is reaching, for instance, people without proprioception, people who don't have a sense of body awareness. I'm pretty sure most of you have never met someone like that, because, in my view, I think there are only about three such people described in the literature, and being able to work with these people remotely would be a great opportunity in the future.

Jo Evershed:

That's brilliant. Tom, how about for you? What does the next year hold?

Prof Tom Armstrong:

So, one, getting MouseView onto mobile devices to work with touch screens. Then just seeing the method get adopted by people in different areas, and seeing how a lot of these eye-tracking findings replicate. Also, hopefully, getting this into task zones with some different varieties of eye-tracking tasks. So, larger matrices, 16 [inaudible 00:58:54] and just incrementally working like that.

Jo Evershed:

I think that's what's always so exciting when you create a new method: you don't know how people are going to use it, and somebody's going to see it and go, "Ooh, I could do something that you'd never imagined," and suddenly a whole new area of research becomes possible. That's hugely exciting. Simone, how about you? What does the next year hold?

Simone Lira Calabrich:

I was just thinking perhaps the possibility of testing participants who are speakers of different languages. That would be really nice as well. With remote eye tracking, we can do that more easily. So hopefully…

Jo Evershed:

Hopefully, that will…

Simone Lira Calabrich:

…that's what's going to happen.

Jo Evershed:

And, Jens, finally to you.

Dr Jens Madsen:

We've been working in online education, measuring students' level of attention while they watch educational material. And what we're excited about is that we can actually reverse that process: we can have the browser measure the person's level of attention and adapt the educational content to it. So, if students are dropping out or not looking, we can actually intervene and make interventions so that hopefully we can improve online education. You're muted.
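
To make that adaptive loop concrete, here is a purely hypothetical sketch (not the panelists' actual system): an attention score is sampled in the browser and, if a rolling average drops below a threshold, the content reacts. The getAttentionScore placeholder, window size, and threshold are all assumptions.

```typescript
// Hypothetical sketch of adapting content to a browser-side attention
// estimate: sample a score, keep a rolling window, and intervene (pause the
// video, show a prompt, etc.) when the average drops too low.

const WINDOW_SIZE = 10;
const THRESHOLD = 0.4;
let recentScores: number[] = [];

function getAttentionScore(): number {
  // Placeholder: in practice this would come from webcam-based measurement.
  return Math.random();
}

function intervene(): void {
  // e.g. pause the lecture video and show a quick comprehension question
  console.log("Attention low: pausing video and showing a prompt.");
}

function onAttentionSample(): void {
  recentScores.push(getAttentionScore());
  if (recentScores.length > WINDOW_SIZE) recentScores.shift();  // rolling window

  const mean = recentScores.reduce((s, v) => s + v, 0) / recentScores.length;
  if (recentScores.length === WINDOW_SIZE && mean < THRESHOLD) {
    intervene();
    recentScores = [];  // reset after intervening
  }
}

setInterval(onAttentionSample, 1000);  // sample once per second
```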

Jo Evershed:

Sorry, Tom just dropped out, so I was just checking what happened there. The online education thing, I can see it being tremendous, and that's what everybody needs. If you had one tip for everybody watching today to improve online education, what would it be?

Dr Jens Madsen:

Keep it short, show your face, and skip the boring long PowerPoints.

Jo Evershed:

Excellent. All about human interaction, isn't it?

Dr Jens Madsen:

It's all about the interaction. If you can see a person's face, you're there.

Jo Evershed:

Yeah, yeah, yeah. So maybe, when you've got your students in your class, get them to turn their videos on, right? They'll feel like they're there together in a room.

Dr Jens Madsen:

It's so important.

Jo Evershed:

So important. Back to the participants: there were 150 of you for most of today. Thank you so much for joining our third Gorilla Presents webinar. Each month we'll be addressing a different topic in online behavioral research, so why not write in the chat with suggestions of what you'd like us to cover next? Yes, thank-you messages, please, to our amazing panelists and Tom as well. It's very difficult to judge how much value you've got out of this, but big thank-yous really help these guys know that you appreciated the wisdom they've shared with you today.

Jo Evershed:

There will be a survey; I think we email it to you straight after this, to help us make these sessions more useful. Please fill it out. It's tremendously useful to us and allows us to make each session better and better. You can see the value you've got out of this today, and by giving us feedback you help us make future sessions even better. So you're doing a solid for the whole research community.

Jo Evershed:

The next webinar is going to be about speech production experiments online, in late April; it's going to be the 29th of April, there you go. So if speech production experiments, where people talk and you're collecting their voice, are your bag, then make sure you sign up for that one as well. Thank you and good night. One final massive thank you to the panelists: thank you so much for giving your time to the research community today, and we'll chat in a minute in the next room.

Simone Lira Calabrich:

Thank you, everyone.

Dr Jens Madsen:

Yeah, thank you.

Jonathan Tsay:

Thank you.