This webinar is all about conducting eye and mouse tracking research online.
With the help of our experts:
- Dr Jens Madsen, City College of New York (twitter: @cogniemotion)
- Simone Lira Calabrich, Bangor University (twitter: @Simonecalabrich)
- Prof Tom Armstrong, Whitman College (twitter: @PEEP_lab)
- Jonathan Tsay, University of California, Berkeley (twitter: @tsay_jonathan)
Here are some of the recent articles, blogs, and talks they've published on today's topic:
Transcript
Jo Evershed:
Hello everyone. We have 31 attendees. Hello. Hopefully you're beginning to be able to hear me. Thank you for joining us today. Now, first things first, what I'd like you to do is open up the chat and tell us where you're coming from. So, introduce yourself, say, "Hi, I'm blah, blah, blah, blah," from wherever you're from. I will have a go at doing that now so that you can see me. But open up the chat and type in who you are. So I'm Jo from Gorilla. I love evidence-based visions.
Jo Evershed:
Felix Trudeau from the University of Toronto. Hello, Felix. Hello, Pete. Hello, Karen. Hello Jen. Hello Jerry, Nick, and Gita and Mave, and Dan. I think we're up to about 91 people now. Yonas.
Jo Evershed:
Okay. Now, the next thing I want you to start answering in the chat is, what made you embrace online research? You're all here to hear about online research. Hey, Sam. Nice to see you. So what made you embrace online research? COVID, lots of COVID responses. Yes, and we hear from people the whole time, COVID was the push I needed to take my research online. I embrace it because of easier access to participants, but also COVID. Yes, it is so great to be able to collect data so much more quickly than having to test people face-to-face in the lab. High quality data is obviously the future for behavioral research.
Jo Evershed:
No more underpowered samples. Hopefully, that can start to be a thing of the past. Now, the next question I want you guys to answer in the chat, we've got 108 people here now, so that's fantastic, is what do you see as the benefits of online research? Obviously COVID was the push, but what are you hoping to get from it? We've heard a little bit about more access to participants, but diverse samples, quicker, less costly, more varied samples, scalability, lovely answers, these. You can have a bit longer, wider participation time, and what's this going to do for your research? Is it going to make your research better, faster, easier? Cross-cultural studies, less costly. Thank you so much.
Jo Evershed:
Finished data collection in two weeks. Wow, that must've felt amazing. Now, so this is great. You're answering, what are the benefits of online research? Now, final question. What challenges to research do you face that you're hoping to learn about today? So this is a great question, a general question that you can put into the chat, and our panelists will be reading them and that will help them give you the best possible answers. What you can also do, if you've got specific questions, this is the time to open the Q&A panel, which you should have access to at the bottom. So if you've got a question, make it detailed so that the… Not an essay, obviously, but a detailed, specific question. So, yeah. Fruka's done a brilliant one: how reliable is online eye tracking? Fantastic, but if you can, instead of putting that in the chat, can you put that into the Q&A panel, which is a different panel, you should also be able to access it from the bottom.
Jo Evershed:
And then as our panelists are talking, they will start answering those questions. So I think we're up to 120 attendees. That's fantastic. Thank you so much for joining us today. Hi, I'm Jo Evershed. I'm the founder and CEO of Gorilla Experiment Builder and I'm your host today. I've been helping researchers to take their studies online since 2012. So I've been doing this for a while, over nine years. For the last two years, we've also brought researchers together for our online summer conference BeOnline, which stands for behavioral science online, where pioneering researchers from all over the world share insights into online methods. Papers have their place for recording what was done, but unfortunately they aren't a playbook for how to run research successfully. And this is why we run a methods conference. Don't miss it.
Jo Evershed:
I think my colleague, Ashley, is going to put a link to BeOnline into the chat now. And if you go to the BeOnline site and pre-register for the conference, when the tickets are available, it's all completely free, you'll find out about it and you'll be able to come and have several hours' worth of methods-related conference. And we might have more stuff on eye tracking then, because the world might've moved in three months, who knows?
Jo Evershed:
But we can't wait a year to share methodological best practice. Life is just moving too fast. So we are now convening monthly Gorilla Presents webinars to help researchers take studies online, to learn from the best. Now we've got a poll coming up. Josh, can you share the poll? We've got this one final question for you now, and it's how much experience do you have with Gorilla? If you can all answer that question, that would be great. Now, as you know, today's webinar is all about eye tracking and mouse tracking online.
Jo Evershed:
We know from listening to our users that eye tracking and mouse tracking are very popular, and we thought it would be a great opportunity to bring together this vibrant community to discuss all the highs and lows of moving eye tracking and mouse tracking research out of the lab. And we've convened this panel of experts here to help us. They've been running eye tracking and mouse tracking research online for the last little while and they're going to discuss what worked, what was challenging, and what we still need in order to do top-quality eye tracking and mouse tracking research online. So please welcome Dr. Jens Madsen from CCNY, Simone Lira Calabrich from Bangor University, Professor Tom Armstrong from Whitman College, and Jonathan Tsay from UC Berkeley. And I'm now going to let each of them introduce themselves. So, Jens, over to you.
Dr Jens Madsen:
Yeah. I'm Jens Madsen, I'm a postdoc at the City College of New York and I have a pretty diverse background. I started in computer science, I did my PhD in machine learning, and now I'm in neural engineering. So we're doing quite a diverse set of recordings, all the way from neural responses to eye movements, heart rate, skin, you know, you name it, we record everything. And we actually started a project about online education. So, we were already doing webcam eye tracking before the pandemic happened, and then the pandemic happened and we were like, "Oh, this is great, you're coming to where we are already." So that was interesting. And yeah, we're doing quite a lot of research with webcam eye tracking, collecting over a thousand people's eye movements when they watch educational videos.
Jo Evershed:
Oh, that's awesome. Fantastic. So Simone, over to you. What are you up to?
Simone Lira Calabrich:
So I'm Simone, I'm a PhD student at Bangor University, which is in North Wales, and my supervisors are Dr. Manon Jones and Gary Oppenheim. And we are currently investigating how individuals with dyslexia acquire novel visual-phonological associations, or how they learn associations between letters and sounds, and how they do that as compared to typical readers. And we've been using paired-associate learning and the looks-at-nothing paradigm in our investigation. And this is actually the first time that I've been working with eye tracking research, and because of the pandemic, I had to immediately move to online-based eye tracking.
Jo Evershed:
Excellent. Now, Tom, over to you.
Prof Tom Armstrong:
I'm an Associate Professor at Whitman College and I'm an affective and clinical scientist. And so affective in the sense that I study the emotional modulation of attention, and then clinical in the sense that I study the emotional modulation of attention in the context of anxiety disorders and other mental illnesses. And I've been using eye tracking in that work for about 10 years now, with a particular focus on measuring the stress with eye tracking. And then through the pandemic, I teamed up with Alex Anwyl-Irvine and Edwin Dalmaijer to create this mouse-based alternative to eye tracking that we could take online.
Jo Evershed:
Yeah, and we're going to have more about (INAUDIBLE). Well, Tom's going to talk much more about that later, and then I've got exciting news at the end of today. And Jonathan, sorry, last but not least, over to you.
Jonathan Tsay:
Hello everyone. I'm Jonathan Tsay, but you can call me JT, like Justin Timberlake. I'm a third year grad student at UC Berkeley and I study how we learn and acquire skilled movements. And hopefully we can apply what we learn here at Berkeley to rehabilitation and physical therapy.
Jo Evershed:
Excellent. Fantastic. Now, one other note to the attendees here today, our panelists have put their Twitter handles next to their names, I should probably put mine in there as well, one minute. So if you want to follow any of us so that you hear what we're up to, when we're up to it, and read their latest papers, and get their latest advice, do do that. Now, let's go to the meat of it. Jens, how about we start with you giving your presentation about your research and I'll come back to you in about five minutes and make sure you cover your hints and tips. So we want to know what you've done, what worked, what was challenging, what you might do differently in hindsight.
Dr Jens Madsen:
Yeah. So, we started this online webcam eye tracking quite a few years ago. So this is, I think, I don't know how long Gorilla has had their implementation of WebGazer, but this is pre-COVID, as I've said. I can try to share my screen somehow.
Jo Evershed:
That'd be great.
Dr Jens Madsen:
This will stop other people from sharing their screen. Is that okay?
Jo Evershed:
Yeah. Yeah, yeah, do that.
Dr Jens Madsen:
That's great. So, just connect to here. Is it possible? I'm just going to go ahead and do this. So I think the reason why you contacted me is because I came up with this paper where we use eye tracking to improve and hopefully make online education better. So we both used professional eye tracking, which is where we are comfortable, and then we thought, if we can actually make this scale, we're going to use the webcam. And then you can read more about it; I'm just going to give a quick spiel about what we actually did in this study, okay?
Dr Jens Madsen:
So, we saw, a couple of years ago, that online education was increasing rapidly, and we wanted to see sort of what are the challenges of online compared to the classroom? Much like right now, I have no idea whether or not any of you that's listening to me are actually there. I don't know if you're listening, if you're paying attention to what I'm saying, I have absolutely no clue about that. I mean, I can see the panelists, but I just don't know about anybody else. Maybe I can see the chat, if people actually are interacting. But a teacher in a classroom, they can actually see that, right? You can see whether or not the students are falling asleep or whatever, and they can interact with the students and change and react accordingly if they're too boring. And so we wanted to develop tools that can measure the level of attention and engagement in an online setting. And, essentially, we need a mechanism to measure and react to the level of attention of students and hopefully make them engaged in the education.
Dr Jens Madsen:
And so, essentially, we did a very simple experiment. We basically measure people's eye movements while they watch short educational videos, and then we asked them a bunch of questions about the videos. And so we wanted to see whether or not we could use eye tracking, both in a setting with a professional eye tracker but also with the webcam, to predict the test scores and measure the level of attention, okay? And so, I developed my own platform, sorry, Gorilla, but we used… The platform is called Elicit, and basically we use this software called WebGazer, and WebGazer is basically taking the pixels of your eyes. I just learned that you are disabling the mouse. We had problems with that, mouse movements and mouse clicks, because that's how WebGazer actually works.
Dr Jens Madsen:
You can get an idea, this is just me making instructional videos for my subjects, because I can tell you that calibrating this is going to be a nightmare for people. I had over 1000 people go through this, and I sat there and talked to people about how to calibrate this and I got a lot of mad, mad, mad responses. So be aware of that. And you can also get an idea of the quality of the eye tracking, so spatially, it's jittering about and it's all about… Key things are light, so how much light there is on your face, the quality, how close you are to the webcam, and there's a couple of other things.
Dr Jens Madsen:
So I did two experiments, one in the classroom. So I literally had students coming in after their lab session and sitting there doing this online thing. And there I can go around and show people how to do it. Key things are reflections in people's glasses, it's a nightmare. If you have light in the background, nightmare. The problem with webcams is that they throttle the frame rate, so depending on the light, the frame rate will just drop or go up.
Dr Jens Madsen:
Another thing that will happen is that it changes the contrast. All of a sudden, the face model will completely move, because it's found something interesting in the background, and there you lose the eye tracking completely. And there's many of those small, finicky things that can cause this to go wrong. So, in this at-home experiment that I did, I recruited over 1000 people from Prolific and Amazon Mechanical Turk. I can tell you that Prolific was a delight to work with, and that I ended up using a sort of instructional video, where I literally show people how to do it, because I got so many mad emails that I had to do a video about it. Yeah. I can talk about signal quality and all that later, but those were kind of the practical uses and practical tips that I can give about using this eye-tracking software.
Jo Evershed:
That's fantastic, Jens. Could you say a little bit more about what the content of the video was? Because that sounds like such a great idea, and it's actually something we heard from the guys last month talking about auditory receptors. She was like, I had to show them a picture of where they should put their hands on the keyboards and then they got it. So this sounds the same, you can't just write it in text, but if you show somebody a video of like, this… Was it literally that? Like, "Here's the video, this is what it looks like, this is what's going to happen." And then they get it, right?
Dr Jens Madsen:
Yeah. So I made a cartoon, I wrote instructions. I mean, for the first hundreds of people I went in batches of 20: nobody got it, nobody got it, nobody got it. So it's just incremental. Okay, they didn't understand that. Why didn't they understand that? I don't know. And I asked my colleagues, and they said, "I understand it." Because you're there. You're like, "Well, you can see what I mean." And so, I don't know how the calibration in Gorilla works, but we have to [crosstalk 00:16:04].
Jo Evershed:
Very similar to yours. We have our [crosstalk 00:16:04] in slightly different places.
Dr Jens Madsen:
Right, yeah. Essentially, I mean, you can imagine you have this wire frame that's fitting around your face, right? And that wire frame has to be there because it's essentially finding your eyes, and it takes those pixels of your eyes and then uses the model to predict where you're looking on the screen. Now, if this wire frame, as you saw in the image, is over here, you can move your eyes as much as you want, it's not going to happen, you know? It's important, and also that you don't move around, because that's the wire frame. And at this point, I had a beard. That was a huge problem because it didn't like the shape of my face, I guess. My beard was a problem. So I literally showed them a video of me going through it and showing them, "Oh, you see now it's going wrong because the wire frame is over there. Now, I go back. Oh, this is working. Now, I turned off the light. You can see what happens. It's wrong," you know?
Dr Jens Madsen:
And also, just the human interaction with the subjects, because when I get these people from Prolific and Amazon Mechanical Turk, this is just text. I'm not a person. They don't really care. They're just like, "I want to make money, I want to make money." But then, if you see a person saying, "This is my research, please do well. Come on guys, do it for me." You're like, "Okay," and then they actually… People thanked me even for participating. So that was really a nice experience.
Jo Evershed:
Oh, that's fantastic. So attendees, what I want you to type into the chat now, is in terms of top tips, what was the most valuable for you? Was it, do a video instruction, because then your participants will understand what they need to do? Was it, do a video instruction, because then they'll like you, and they'll want to do your experiment for you, you as the person? Or was it, make sure you don't have men with beards or ask them to shave first?
Jo Evershed:
So into the chat now. Which do you like? Video instructions to get better data. Video instructions are great to people watching them. Video instructions, glasses and background, get better data. So you can see, Jens, everybody is learning a lot from what you've said already. That was tremendously helpful, particularly video instructions, for all of the reasons. Excellent. So, we're now going to go over to Simone. Simone, how do you want to share what you've been doing? Because you've been taking a somewhat different approach to eye tracking.
Simone Lira Calabrich:
Yes, let me share my screen here now. Okay. So I'm assuming that you guys can see my screen.
Jo Evershed:
Yeah, we can see that. Yeah.
Simone Lira Calabrich:
Okay. So first of all, I'd like to thank you guys for inviting me to this webinar. So I'll talk a little bit about my personal experience as a Gorilla user, and also, it's my first time doing eye tracking research as well. And I'll try to give you a couple of tips as well on what you could do to get high-quality data. And there's going to be, I think, some overlap with what Jens just mentioned right now. Okay.
Simone Lira Calabrich:
So as I briefly mentioned in my introduction, in our lab we're investigating the different processes underpinning acquisition of novel letter-sound associations. And our aim with that is to better understand how we bind visual and phonological information together and what exactly makes this process so effortful for some readers.
Simone Lira Calabrich:
So, in Gorilla, we used a paired-associate learning paradigm in one of the tasks. So as you can see in the demonstration, in each trial there were three shapes on the screen. Participants would first learn which sort of words would go with which one of the shapes, and then they would be tested on their ability to recognize the pairs. After presenting the bindings, we play one of the three words from each trial. We then present a blank screen and then we show the participants the three pairs again. And what we do is we track participants' looks during the blank screen presentation to see if they will visually revisit the screen locations that were previously occupied by the target.
Simone Lira Calabrich:
The rationale behind this is that sometimes when we are trying to remember something, we might look at the spatial location where that information or that piece of information was presented. We do that even if the spatial location is now empty, right? So this task that we administered in Gorilla is an attempt to replicate the findings from a previous similar eye tracking study done by my supervisors, Jones and colleagues, and it's a similar paradigm using paired-associate learning and looking-at-nothing as well, in typical and dyslexic readers.
Simone Lira Calabrich:
So, one of the things, and this has a lot to do with what Jens was mentioning before, one of the things that I would strongly suggest that you check when you're pre-processing your eye tracking data in Gorilla, is to check the face_conf values. So the values in this column here, they range from zero to one. And what it measures is how strongly the image under the model actually resembles a face. So one means that there was a perfect fit and zero means that there was no fit, as you can see here in the illustration. According to Gorilla's recommendation, values that are over 0.5 are ideal.
Simone Lira Calabrich:
And the reason why I think it's so important to check this carefully is because some of your participants might move their heads during the task, as Jens was mentioning before, or they might accidentally cover their faces if they're bored, or something like that, they might put their glasses on or take their glasses off during the experiments, there might be some changes in the lighting conditions as well. So a lot of things can happen mid-experiment and then their faces will no longer be detected. But it's important that you exclude predictions that have a very low face_conf value. That's extremely important.
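A minimal sketch of that exclusion step (not from the webinar itself): assuming the Gorilla eye-tracking output has been exported to a CSV containing the face_conf column Simone describes, the filter could look like this; the file name and exact layout are illustrative assumptions.

```python
import pandas as pd

# Hypothetical export of one participant's webcam eye-tracking predictions.
# The only column assumed here is face_conf, the 0-1 face-model fit value.
samples = pd.read_csv("eyetracking_participant01.csv")

# Drop predictions where the face model fit is at or below the recommended
# 0.5 threshold; those gaze estimates are treated as unreliable.
reliable = samples[samples["face_conf"] > 0.5]

# Quick quality check: how much of the recording survives the filter?
retained = len(reliable) / len(samples)
print(f"Retained {retained:.1%} of samples after the face_conf filter")
```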
Simone Lira Calabrich:
So one thing which we have been doing is we add a question there at the beginning of the experiment, and then we ask participants about the conditions under which they will be doing the tasks. So some of the questions that I thought were relevant to eye-tracking research are the ones that are highlighted here. So we ask them, in what kind of lighting will they be doing the tasks? Is it daylight? Are they going to be using artificial lighting? Are they going to be placing their laptops on their lap or on their desks? We cannot, unfortunately, force participants to place their laptops on the desk, which would be ideal, and some of them still end up placing their laptops on their laps. And we also ask them if they're going to be wearing glasses during the experiments, because we cannot always exclude participants who are wearing glasses.
Simone Lira Calabrich:
So what I do with this, based on participants' responses, is I try to generate some plots so that I can visually inspect what may be causing the poor face_conf values for some of the participants. So, overall, as you can see here, the mean value for all of the conditions was above the recommended threshold. But you can see also that the data quality was affected to some extent in some of the conditions. So, in this particular sample here, the model fit was equally fine for people wearing or not wearing glasses, but in one of the other pilots that we conducted, it was really, really poor for participants wearing glasses. So, you might think that it would be okay for you to exclude participants wearing glasses from your experiments. We cannot do that.
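The plots Simone describes could be produced along these lines (an illustrative sketch only, assuming the questionnaire answers have already been joined onto the per-sample eye-tracking data; the column names are hypothetical, not Gorilla's actual output).

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical merged table: one row per eye-tracking sample, with each
# participant's self-reported setup (glasses, lighting, laptop placement).
data = pd.read_csv("eyetracking_with_setup_info.csv")

fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharey=True)
for ax, factor in zip(axes, ["glasses", "lighting", "laptop_position"]):
    # Mean face_conf for each level of the self-reported setup factor.
    data.groupby(factor)["face_conf"].mean().plot.bar(ax=ax)
    ax.axhline(0.5, linestyle="--")  # recommended minimum model fit
    ax.set_title(factor)
    ax.set_ylabel("mean face_conf")
fig.tight_layout()
plt.show()
```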
Simone Lira Calabrich:
The second plot suggests that natural daylight seems to be a bit better for the remote eye tracker. So what I've been trying to do is release the experiments in batches, and I try to schedule them to become available early in the morning so that I can try to recruit more people who are probably going to be doing the task during the day, and sometimes I just pause the experiments. Here, you can see as well that placing the computer on the lap is also not ideal, but honestly, I don't know how to convince participants not to do that. I try to ask them, I give visual instructions as well, but it doesn't always work.
Simone Lira Calabrich:
The last one, you can see that in my experiments we have six blocks and lots of trials. We have 216 trials in total across the blocks, so it's a very long experiment. And the impression that I get is that as people get tired over the course of the experiment, they start moving more or they start touching their faces and doing things like that. So, the data quality will tend to decrease towards the end of the experiment. So that's why it's important for you to counterbalance everything that you can and randomize everything. So, this is it for now. I would like to thank my supervisors as well. And I have a couple more tips which I might show you guys later if we have time. You are muted, Jo.
Jo Evershed:
Thank you so much, Simone. That was actually fantastic. So attendees, what I want you to answer there is, what for you was the most valuable thing that Simone said? Maybe it was face_conf, checking those numbers. Or it might've been the settings and questions, just asking people what their setup is so that you can exclude participants if they've got a setup that you don't like. Or was it, only run experiments in the morning, checking the integrity of face models? Or was it, actually, just seeing how each of those settings reduces the quality of the data? Because I found that fascinating, seeing those plots where you can just see the quality of the data. Yes, the face_conf stuff is super important. Lighting wasn't as important as where the laptop was placed. Yeah. So everybody's getting so much value from what you said, Simone. Thank you so much for that. So next, we're going to go to Tom Armstrong, who's going to talk to us, I think, about MouseView.
Prof Tom Armstrong:
All right. Let me get my screen share going here.
Prof Tom Armstrong:
Okay. So I'm going to be talking about a tool that I co-created with Alex Anwyl-Irvine and Edwin Dalmaijer, that is an online alternative to eye tracking. And big thanks to Alex for developing this brilliant JavaScript to make this thing happen, and to Edwin, for really guiding us in terms of how to mimic the visual system and bringing his expertise as a cognitive scientist to bear.
Prof Tom Armstrong:
So I mentioned before, I'm an affective and clinical scientist. And so in these areas, people often use passive viewing tasks to study the emotional modulation of attention, or as it's often called, attentional bias. And in these tasks, participants are asked to look at stimuli however they please. And these stimuli are typically presented in arrays of anywhere from two to as many as 16 stimuli. Some of them are neutral, and then some of the images are affective or emotionally [inaudible 00:27:39] charged.
Prof Tom Armstrong:
Here's some data from a task with just two images, a disgusting image paired with a neutral image, or a pleasant image paired with a neutral image. And I'll just give you a sense of some of the components of gaze that are modulated by emotion in these studies.
Prof Tom Armstrong:
And so, one thing we see is that at the beginning of the trial, people tend to orient towards any emotional or affective image. Margaret Bradley and Peter Lang have called this natural selective attention. And in general, when people talk about attentional bias for threat, or attentional bias for motivationally relevant stimuli, they're talking about this phenomenon. It's often measured with reaction time measures.
Prof Tom Armstrong:
What's more unique about eye tracking is this other component that I refer to as strategic gaze or voluntary gaze. And this plays out a little bit later in the trial, when participants kind of take control of the wheel with their eye movements. And here, you see a big difference according to whether people like a stimulus, whether they want what they see in the picture, or whether they are repulsed by it. And so, you don't see valence differences with that first component, but here in this more voluntary gaze, you see some really interesting effects.
Prof Tom Armstrong:
And so you can measure this with total dwell time during a trial. And one of the great things about this measure is that in comparison to those reaction time measures of attentional bias that have been pretty thoroughly critiqued, and also the eye tracking measure of that initial capture, this metric is very reliable. Also, it's valid, in the sense that, for example, if you look at how much people look away from something that's gross, that's going to correlate strongly with how gross they say the stimulus is. And the same thing for appetitive stimuli. So how much people want to eat food that they see will correlate with how much they look at it.
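To make the dwell time measure concrete, here is a small illustrative computation (not taken from Tom's materials): given gaze samples with a timestamp and a horizontal screen position, each sample's sampling interval is credited to whichever image region contains it. The sample values and region boundaries are invented for the example.

```python
import pandas as pd

# Hypothetical gaze samples: time in ms and normalized horizontal position
# (0 = left edge of the screen, 1 = right edge).
gaze = pd.DataFrame({
    "time_ms": [0, 20, 40, 60, 80, 100],
    "x":       [0.20, 0.22, 0.75, 0.78, 0.76, 0.25],
})

# Two side-by-side images, e.g. neutral on the left, affective on the right.
aois = {"left_image": (0.0, 0.5), "right_image": (0.5, 1.0)}

# Each sample contributes the time since the previous sample to the region
# it falls in; summing those intervals gives total dwell time per image.
gaze["dt"] = gaze["time_ms"].diff().fillna(0)
for name, (x_min, x_max) in aois.items():
    in_aoi = (gaze["x"] >= x_min) & (gaze["x"] < x_max)
    print(name, "dwell time:", gaze.loc[in_aoi, "dt"].sum(), "ms")
```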
Prof Tom Armstrong:
So, I've been doing this for about 10 years. Every study I do involves eye tracking, but it comes with some limitations. So, first, it's expensive. Edwin Dalmaijer has done a really amazing job democratizing eye tracking by developing a toolbox that wraps around cheap, commercial-grade eye trackers. But even with it being possible to now buy 10 eye trackers, for example, it's still hard to scale up the research, like what Jo was talking about earlier: no more underpowered research, more diverse samples. Well, it's hard to do that with the hardware.
Prof Tom Armstrong:
And then, as I learned about a year ago, it's not pandemic-proof. And so you've got to bring folks into the lab. You can't really do this online, although as we just heard, there are some pretty exciting options. And really for me, webcam eye tracking is a holy grail. But in the meantime, I wanted to see if there were some other alternatives that would be ready to go out of the box for eye-tracking researchers. And one tradition, it turns out, is mouse viewing, where the mouse controls a little aperture and allows you to sort of look through this little window and explore an image.
Prof Tom Armstrong:
Now, I thought this was a pretty novel idea. It turns out folks have been doing this for maybe 20 years. And they came up with some pretty clever terms like fovea, for the way that it sort of mimics foveal vision. Also, there's been a lot of validation work showing that mouse viewing correlates a lot with regular viewing as measured by an eye tracker. So, what we were setting out to do was first to see if mouse viewing would work in affective and clinical sciences, to see if you'd get this sort of hot attention, as well as the cold attention that you see in just sort of browsing a webpage.
Prof Tom Armstrong:
And then in particular, we wanted to create a tool, sort of in the spirit of Gorilla, that would be immediately accessible to researchers and that you could use without programming or technical skills. And so we actually used… We did this in Gorilla and we collected some data on Gorilla over Prolific, and we have data… This is from a pilot study. We did our first study with 160 participants. And let me just show you what the task looks like. I'm going to zip ahead, because I'm a disgust researcher and you don't want to see what's on the first trial. At least you can see it blurred, but that's good enough. Okay. So you can see someone's moving a cursor-locked aperture and there's this Gaussian filter used to blur the screen to mimic peripheral vision, and participants can explore the image with the mouse. Okay. We move on.
Prof Tom Armstrong:
Okay. So one of the great things about MouseView is that Alex has created it in a really flexible manner where users can customize the overlay. So you can use the Gaussian blur, you can use a solid background, you can use different levels of opacity. You can also vary the size of the aperture. And this is something that we haven't really systematically varied yet. Right now it's just sort of set to mimic foveal vision, to be about two degrees or so.
Prof Tom Armstrong:
So we've done this pilot study, about 160 people, and the first thing we wanted to see is, does the mouse scanning resemble gaze scanning? And Edwin did some really brilliant analyses to be able to sort of answer this quantitatively and statistically. And we found that the two really converge, you can see it here in the scan paths. Like, for example, if you look over on the right, disgust five, really similar pattern of exploration. We blurred that so that you can't see the proprietary images.
Prof Tom Armstrong:
Now the bigger question for me, does this capture hot attention? Does this capture the emotional modulation of attention that we see with eye tracking? And so here on the left, you can see the eye tracking plot that I showed you before. Over here on the right is the MouseView plot. And in terms of that second component of gaze I talked about, that strategic gaze, we see that coming through in the MouseView data really nicely. Even some of these subtle effects, like the fact that people look more at unpleasant images the first time before they start avoiding them, so we have that approach and that avoidance in the strategic gaze.
Prof Tom Armstrong:
The one thing that's missing, maybe not surprisingly, is this more automatic capture of gaze at the beginning of the trial, because the mouse movements are more effortful, more voluntary. We've now done a couple more of these studies and we've found that this dwell time index with the mouse viewing is very reliable in terms of internal consistency. Also, we're finding that it correlates very nicely with self-report ratings of images and individual differences related to images, like we see with eye gaze. So it seems like a pretty promising tool. And I can tell you more about it in a minute, but I just wanted to really quickly thank Gorilla. I'm excited about any announcement that might be coming, and my college for funding some of this validation research, and the members of my lab who are currently doing a within-person validation against eye tracking in person in the lab.
Jo Evershed:
Thank you so much, Tom. That was absolutely fascinating. A number of people have said in the chat that they liked that. That was just absolutely fascinating, your research. I'm so impressed. What I'd love to hear from attendees is, what do you think about MouseView? Doesn't that look tremendous? I'm so excited by what that… Because there are limits to what eye tracking we can do with the webcam, right? I'm sure we can get two zones, maybe six, but what I think is really exciting about MouseView is it allows you to do that much more detailed eye tracking-like research. It's a different methodology that's going to make stuff that otherwise wouldn't be possible to take online, possible. Tom, I'd never heard of this before. It sounds so exciting. It seems like such a reasonable way to investigate volitional attention in an online context. I think people have been really inspired by what you've said, Tom.
Jo Evershed:
And the exciting news for those of you listening today: MouseView is going to be a closed beta zone from next week in Gorilla. To get access to any closed beta zone, all you need to do is go to the support desk, fill out the form, "I want access to a closed beta zone," this one, and it gets applied instantly to your account. That's the case for eye tracking, and it'll be the case for MouseView. They'll be able to be used without… You don't need any coding to be able to use them. If they're in closed beta, it's just an indication from us that there isn't a lot of published research out there, we haven't validated it, so we say handle with care, right? Like run your pilots, check your data, check it thoroughly, make additional data quality checks beyond what you would otherwise.
Jo Evershed:
With things like showing images, you can see that it's correct, right? And the data that you're collecting isn't complicated. So those we don't need to have in closed beta. Until things have been published and been validated, we keep things in closed beta where they're more technically complex. That's what that means.
Jo Evershed:
But yes, you can have access. So, MouseView is coming to Gorilla next week. And thank you to Tom and to Alex, who I think is on the call, and Edwin. They're all here today. If you're impressed by MouseView, can you type MouseView into the chat here, just so that Tom, and Edwin, and Alex will get a little whoop whoop from you guys. Because they've put a massive amount of work into getting this done and I think they deserve the equivalent of a little round of applause for that. Thank you so much. Now finally, over to Jonathan to talk about what you've been up to.
Jonathan Tsay:
Okay. Can you see my screen? Okay, perfect, perfect. So, my name is Jonathan, I go by JT, and I study how humans control and acquire skilled movement. So let me give you an example of this through this video.
Jonathan Tsay:
Okay, my talk's over. No, I was just kidding. So this happens every day, how we adapt and adjust our movements to changes in the environment and the body. And this process, this learning process, requires multiple components. It's a lot of trial and error. Your body just kind of figures it out, but it's also a lot of instruction, how the father here instructs the son to jump on this chair, and of course reward too at the end, with the hug.
Jonathan Tsay:
And we studied this in the lab by asking people to do something a little bit more mundane. So, typically, you're in this dark room, you're asked to hold this digitizing pen, you don't see your arm, and you're asked to reach to this blue target, controlling this red cursor on the screen. And we described this as playing Fruit Ninja: you slice through the blue dot using your red cursor.
Jonathan Tsay:
On the right side, I'm going to show you some data. So, initially, when people reach to the target, controlling this red cursor, they can't see their hand, people are on target. On target means hand angle is zero, and the x-axis is time. So more reaches means you're moving across the x-axis. But then we add a perturbation. So we introduce a 15 degree offset from the target. The cursor is always going to move 15 degrees away from the target. We tell you, we say, "Jo, this cursor has nothing to do with you. Ignore it. Just keep on reaching to the target." And so you see here on the right, this is participants' data. People can't keep reaching through the target. They implicitly respond to this red cursor by moving in the opposite direction, and they drift off further and further away, to 20 degrees.
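For readers who want the perturbation logic spelled out, here is a minimal sketch of how clamped cursor feedback of this kind could be computed (an illustration consistent with Jonathan's description, not code from his experiment; the function and parameter names are invented).

```python
import math

def clamped_cursor(hand_x, hand_y, target_angle_deg, clamp_deg=15.0):
    """Cursor position for error-clamped feedback.

    The cursor ignores the hand's direction entirely: it travels along a path
    offset clamp_deg from the target direction, while matching the hand's
    radial distance so it still feels yoked to the movement.
    """
    radius = math.hypot(hand_x, hand_y)              # distance the hand has moved
    cursor_angle = math.radians(target_angle_deg + clamp_deg)
    return radius * math.cos(cursor_angle), radius * math.sin(cursor_angle)

# Example: target straight ahead at 90 degrees. Wherever the hand actually
# goes, the cursor is drawn 15 degrees off the target at the same distance.
print(clamped_cursor(hand_x=0.0, hand_y=8.0, target_angle_deg=90.0))
```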
Jonathan Tsay:
Eventually, they reach an asymptote, around 20 degrees, and when we turn off the feedback, when we turn off the cursor, people drift a little bit back to the target. And this whole process is implicit. Your actual hand is 20 degrees away from the target, but if I ask you where your hand is, you tell me your hand is around the target. This is where you feel your hand: you feel your hand to be at the target, while your hand is 20 degrees off the target. And this is how we study implicit motor learning in the lab. But because of the pandemic, we built a tool to test this online. And so, in a preprint released recently, we compared in-person data using this kind of sophisticated machinery that typically costs around $10,000 to set up.
Jonathan Tsay:
And you can see on the bottom, this is the data we have in the lab. We just create different offsets away from the target, but nonetheless people drift further and further away from the target. And we have data from online using this model, this template we created, to track your mouse movements and your reaching to different targets. The behavior in person and online seems quite similar. But compared to in person, online research affords some great advantages, and I'm preaching to the choir here. For the in-lab results, it took around six months of in-person testing, just coming to the lab and collecting data. For the online results, we collected 1 to 20 people in a day, so that's a huge time-saver, and in terms of cost as well. And of course, we have a more diverse population. I just want to give a few tips before I sign off here.
Jonathan Tsay:
So a few tips are instruction checks. So, for instance, in our study, we ask people to reach to the target and ignore the cursor feedback, just continue reaching. So, an instruction check question we ask is, where are you going to reach? Option A, the target; option B, away from the target. And if you choose away from the target, then we say, "Sorry, that was the wrong answer and please try again next time."
Jonathan Tsay:
Catch trials. So, for instance, sometimes we would say, "Don't reach to this target." The target presents itself, and we say, don't reach to the target, and if we see that participants continue to reach to the target, they might just not be paying attention and just swiping their hand towards the target. So we use some catch trials to filter out good and bad subjects. We have baseline variability measures. So reach to the target, and if we see you're reaching in an erratic way, then we typically say, "Okay, sorry, try again next time." Again, movement time is a great indicator, especially for mouse tracking.
Jonathan Tsay:
If, in the middle of the experiment, you go to the restroom and you come back, these are things that can be tracked using movement time, which typically means someone might not be taking your experiment seriously, but not always. And Simone brought this up, but batching and iterating, getting feedback from a lay person to understand instructions, is huge for us. And last but not least, something that Tom brought up: sometimes when you see behavior that's different between in-lab and online, and this is something we struggle with, is it reflective of something that's interesting that's different between online and in-person, or is it noise? So that's something we're struggling with, but we came to the conclusion that sometimes it's just different. You're using a mouse versus a robot in the lab. So, that can be very different.
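A rough sketch of how trial-level screening along these lines could be scripted (illustrative only; the trial table, column names, and the 2000 ms cutoff are assumptions, not values from Jonathan's study).

```python
import pandas as pd

# Hypothetical per-trial table: movement time in ms, whether the trial was a
# "don't reach" catch trial, and whether the participant reached anyway.
trials = pd.read_csv("reaching_trials.csv")

MAX_MOVEMENT_TIME_MS = 2000  # illustrative cutoff for a single reach

too_slow = trials["movement_time_ms"] > MAX_MOVEMENT_TIME_MS
failed_catch = trials["is_catch_trial"] & trials["reached_target"]

print(f"Slow trials: {too_slow.mean():.1%}, failed catch trials: {failed_catch.mean():.1%}")

# Keep trials that pass both checks; a participant failing too many could be
# excluded from the dataset entirely.
clean_trials = trials[~too_slow & ~failed_catch]
```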
Jonathan Tsay:
What I'm excited about with this mouse tracking research, and how it relates to motor learning, is that typically motor learning patient research is around 10 people, but now, because we can just send a link to these participants, they're able to access the link and do these experiments at home. We can access a much larger group of patient populations that are typically, maybe logistically, hard to invite to the lab. And second, teaching. I'm not going to belabor this point. Third is public outreach. So, we put our experiment on this TestMyBrain website and people just try out the game and learn a little bit about their brain, and that's an easy way to collect data, but also for people to learn a little bit about themselves.
Jonathan Tsay:
Here are some open resources, you can take a screenshot. We share our template for how to implement these mouse tracking experiments online. It's also integrated with Gorilla. We have a manual to help you set it up, we have a paper, and here's a demo you can try out yourself. And last thing, I want to thank my team. So, Alan, who did a lot of the work coding up the experiment, my advisor, Rich Ivry, and Guy, and Ken Nakayama. They all worked collectively to really put this together and we're really excited about where it's going. So thank you again for your time.
Jo Evershed:
Thank you so much, Jon, that was fantastic. Now, what I want to hear from the attendees: what did you like more from Jon there, the advantages of online research, the cost-saving, the time-saving, or were you more blown away by his tips for getting good quality mouse tracking data online? Tips, instruction check questions, tips, tips, more tips. And that girl who jumped onto that stool at the beginning, were you not just blown away by her? If you were blown away by her resilience in jumping up… They're entirely blown away in the chat. I thought she was tremendous. She must be about the same age as my son and he would not do that for sure. That was something quite exciting. Jon, I want to ask a follow-up question. Your mouse tracking experiment, have you shared that in Gorilla Open Materials?
Jonathan Tsay:
Yeah, yeah. That is in Gorilla Open Materials.
Jo Evershed:
If you've got the link, do you want to dump that into the chat? Because then if anybody wants to do a replication or an extension, they can just clone the study and see how you've done it, see how you've implemented it. It's just a super easy way of sharing research and allowing people to build on the research that's gone before, without wasting time. Actually, make sure we've got a link to that as well, can you? So that when we send a follow-up email on Monday, we can make sure that everybody who's here today can get access to that. Oh, I think Josh has already shared it, Jon. You're off the hook. Excellent. We've now come to Q&A time. There are lots and lots of questions. There are a total of 32 questions, of which 16 have already been answered by you fine people as we go through. There are some more questions though. Edwin has got a question: how do people deal with the huge attrition of participants in web-based eye tracking? Simone or Jens, can either of you speak on that one? How have you dealt with attrition?
Simone Lira Calabrich:
Yeah, it's a bit complicated because my experiment is a very long one and participants end up getting tired and they quit the experiment in the middle of it. There is not much that we can do about it, but just keep recruiting more participants. So we ran a power analysis, which suggested that we needed 70 participants for our study. So, our goal was to recruit 70 participants no matter what. So, if someone quits midway, we just reject the participant and we just recruit an additional one as a substitute, as a replacement.
Dr Jens Madsen:
So I think, at least from my perspective, it's very different comparing stationary viewing, like eye tracking of images, and then, in my case, video. So video is moving constantly, right? And so, you can show an image, but they have to watch the whole video, and I have to synchronize it in time. And it also depends on the analysis method you do. In my case, I don't really look into the spatial information. Spatial information for me is irrelevant. I use the correlation, so how similar people's eye movements are across time. I use other people as a reference. And in that sense the data can actually be very noisy. You can actually move around and it's quite robust in that sense. And so it depends on the level of noise you induce in the system. In my case, because it was video, I put in auxiliary tasks, like having people look at dots to see if they were actually there or not, things like that, just to control for those things, or else you're in big trouble because you have no clue what's happening.
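The correlation-over-time idea Jens mentions can be sketched as follows (an illustrative implementation of inter-subject correlation under stated assumptions, not his actual pipeline): assuming each participant's gaze trace has already been resampled onto a common time base, each person is correlated with the average of everyone else.

```python
import numpy as np

def intersubject_correlation(gaze_traces: np.ndarray) -> np.ndarray:
    """Correlate each participant's gaze time course with the rest of the group.

    gaze_traces has shape (n_participants, n_timepoints), e.g. vertical gaze
    position resampled onto the video's time base. Each participant is
    compared against the mean trace of all other participants, so their own
    data never inflates the reference.
    """
    n_participants = gaze_traces.shape[0]
    isc = np.empty(n_participants)
    for i in range(n_participants):
        reference = np.delete(gaze_traces, i, axis=0).mean(axis=0)
        isc[i] = np.corrcoef(gaze_traces[i], reference)[0, 1]
    return isc

# Low scores flag viewers whose gaze never follows the video the way the
# rest of the group's does, e.g. poor calibration or inattention.
```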
Dr Jens Madsen:
And so having those extra things to make sure that they're there. And also, it turns out the attention span of an online user, at least with educational content, is around five, six minutes; after that they're gone. You can't, it doesn't matter. They're bored, they couldn't be bothered. And so my tasks were always around there. The videos that I showed were always five, six minutes long, three minutes long, and then some questions. But they couldn't be asked to sit still, because when you use WebGazer, you have to sit still. It depends on… You guys are using spatial tasks, right? So this would be a problem for you. For me it's fine because I use the temporal course. But for spatial people that's going to be an issue, because the whole thing is just going to shift. And how do you detect that, right? Do you have to somehow either insert some things like, now you look at this dot and now I can recalibrate my data, or something? I don't know how you guys are dealing with that. But yeah, those are the things that you need to worry about.
Simone Lira Calabrich:
Yeah. I was just going to add something, that because we have a very long experiment with lots of trials, we can lose some data and it's still going to be fine, right? So I have 216 trials in my experiments. So it's not a six-minute-long one, it's a two-hour experiment. So, even if I do lose some data, relatively, it's still fine. I have enough power for that.
Dr Jens Madsen:
I mean, you still have the calibration? You do a calibration, right? And I'm assuming you do it once, right? And you have to sort of… Or you do it multiple times?
Simone Lira Calabrich:
Multiple times. Yeah.
Dr Jens Madsen:
You have to do that.
Simone Lira Calabrich:
So we have six blocks.
Dr Jens Madsen:
Yeah, that makes sense.
Simone Lira Calabrich:
So we do it at the beginning of each block and also in the middle of each block as well, just to make sure that it's as accurate as possible.
Dr Jens Madsen:
You saw what I just did there, right? I readjusted myself and this is something natural. It's like, I just need… Ah, yeah, that's better, you know? That's a problem.
Simone Lira Calabrich:
Exactly, yeah. So, yeah, that's why we do that multiple times.
Dr Jens Madsen:
And we do it even without knowing it.
Jo Evershed:
Simone, how often do you recalibrate?
Simone Lira Calabrich:
So, we have six blocks. So at the beginning of each block and in the middle of each block. So, every 18 trials.
Jo Evershed:
Okay, that makes sense. So in previous lectures we've had about online methods, people have said a good length for an online experiment is around 20 minutes; much longer than that and people start to get tired. If you pay people better, you get better quality participants. So, that's another way that you can reduce attrition: double your fees and see what happens. People are willing to stick around longer if they're being paid well for their time.
Jo Evershed:
And then one of the researchers, Ralph Miller, from New York, he does long studies like Simone does online, and what he does is about every 15 minutes, he puts in a five minute break and he says, "Look, please go away, get up, walk around, do something else, stretch, maybe you need to go to the loo, maybe there's something you need to deal with, but you have to be back in five minutes." When you press next, that five minute break, I think, happens automatically. And that gives people that ability to go, "Oh, I really need to stretch and move." So that you can build in an experience that is manageable for your participants.
Jo Evershed:
And so if you're struggling with attrition, the thing to do is to pilot different ideas until you find what works for your experiments. There aren't things that will work for everyone, but there are techniques and approaches that you can try out, sort of experimentation in real time, to find out what's going to work. And that can be really helpful too. Tom, there are quite a few questions about… Can you guys see the Q&As? If you pull up the Q&A panel, there were some nice ones about mouse tracking here that I think Tom might be able to answer. So one here: how viable is it to use mouse tracking in reading research, for example, asking participants to move the cursor as they read? And then similarly, Jens and Simone, there are questions about eye fixations and data quality. You can also type answers. So I think we'll run out of time if we try and cover all of those live, but maybe Jens and Simone, you can have a go at answering some of the ones that are more technical. But Tom, perhaps you could speak about mouse tracking, eye tracking, the crossover. You're muted. You're muted.
Prof Tom Armstrong:
There are so many, but let me try to do it justice. So, I mean, right now, I think, I don't know what unique processes we get from MouseView. I'm thinking of it as being just a stand-in for that voluntary exploration that we see with the eye tracking. In terms of what that gets you beyond, great question, about beyond just self-report, there are some interesting ways in which self-report and eye tracking do diverge that we've found, that I can't do justice to right now. So I think that you often pick up things with self-report that you don't get with… I'm sorry, you get things with eye tracking that you don't get with self-report. For example, Edwin and I found that eye movement avoidance of disgusting stimuli doesn't habituate, whereas people will say they're less disgusted, but then they'll continue to look away from things.
Prof Tom Armstrong:
And so sometimes there's more than what people can introspect on. About reading, Edwin took that question on. Left versus right mouse? Fascinating, I'm not sure. And then importantly, the touch screens, that is in the works. So maybe Alex can jump on that question. That's the next thing that he's working on, making this sort of work with touch screens. Right now it's just for desktop, laptop, Chrome, Edge or Firefox.
Jo Evershed:
Anything that works in Gorilla probably already works for touch. I don't know when, unfortunately, [inaudible 00:54:56] but I will make sure that that question gets asked next week, because by default, everything in Gorilla is touch compatible as well.
Prof Tom Armstrong:
Cool.
Jo Evershed:
I'm trying to pick out a good next question. What's the next one at the top? Can we learn something from online mouse tracking that we cannot learn from online eye tracking? Can anyone speak to that or have you already?
Dr Jens Madsen:
What was the question? Sorry.
Jo EverĀshed:
Can we learn something from online mouse tracking that we cannot learn from online eye tracking? I think that there are different methods that answer different questions, right?
Dr Jens Madsen:
So there's certainly a correlation between where you look and where the mouse is, right? So this is clear. And also it depends on the task. In my case, with the video, you're not moving around the mouse where you're looking, because you're watching a video, that's not a natural behavior. But if you [inaudible 00:55:57] of just using UI buttons and things like that, surely they're highly correlated. So, it very much depends on the task.
Jo Evershed:
That's really good. We are now five minutes to six. So, I'm going to wrap this up. There are lots and lots more questions, but I don't think we can get through all of them today. Hopefully, we've managed to answer 24 questions, so I think we've done a really, really great job there. Actually, there's one more which I think Simone might be able to answer quickly. What's the relationship between face_conf and the calibration accuracy measure? Did you look at both of those?
Simone Lira Calabrich:
No, I didn't actually investigate that, but what I did was I did similar plots for the calibration analysis as well in Gorilla. They were very similar to what I demonstrated to you guys. So, depending on whether participants were wearing glasses or not, there were some lower values for that. What I try to do, I strive to use the five-point calibration in Gorilla, and if calibration fails for at least one of the points, the calibration has to be reattempted. So, I'm trying to be very strict in that sense. That's my default mode now. So, if it fails on just one of the points, I think it's just best to try to recalibrate, which can be quite frustrating for some participants, but that will ensure that we have better data quality.
Jo Evershed:
Yeah, that was great. Now, I have one last question for the panel, which is: what do you see the next year bringing to this area of research? And we're going to do this in reverse order, so starting with JT.
Jonathan Tsay:
I'm going to say that, at least in my field, I'm most excited about larger scale patient research, and that's number one, reaching individuals who are typically harder to reach. So, larger scale in that sense, but another is reaching, for instance, people without proprioception, who, for instance, don't have a sense of body awareness. I'm pretty sure most of you have never met someone like that because, in my view, I think there's only three people in the world known in the literature, and kind of being able to work with these people remotely would be a great opportunity in the future.
Jo Evershed:
That's brilliant. Tom, how about for you? What does the next year hold?
Prof Tom Armstrong:
So, one, getting MouseView onto mobile devices to work with touch screens. Then just seeing the method get adopted by people in different areas, and seeing how a lot of these eye tracking findings replicate. Also, hopefully, getting this into task zones with some different varieties of eye tracking tasks. So, larger matrices, 16 [inaudible 00:58:54], and just incrementally working like that.
Jo Evershed:
I think that's always so exciting when you create a new method: you don't know how people are going to use it, and somebody's going to see that and go, "Ooh, I could do something that you'd never imagined," and suddenly a whole new area of research becomes possible. That's hugely exciting. Simone, how about you? What does the next year hold?
Simone Lira Calabrich:
I was just thinking perhaps the possibility of testing participants who are speakers of different languages. That would be really nice as well. So, with remote eye tracking, we can do that more easily. So hopefully…
Jo Evershed:
Hopefully, that will.
Simone Lira Calabrich:
… that's what's going to happen.
Jo EverĀshed:
And, Jens, finally to you.
Dr Jens Madsen:
We were working in online education and we were measuring the level of attention of students when they watch this educational material. And what we're excited about is that we can actually reverse that process, so we can have the person in the browser measure the level of attention and we can adapt the educational content to the level of attention. So, if students are dropping out or not looking, we can actually intervene and make interventions so that hopefully we can improve online education. You're muted.
Jo EverĀshed:
Sorry. Tom just dropped out. So I was just checking what happened there. The online education thing, I can see it being tremendous and that's what everybody needs. If you had one tip for everybody watching today to improve online education, what would it be?
Dr Jens Madsen:
Keep it short, show your face, and skip the boring long PowerPoints.
Jo EverĀshed:
Excellent. All about human interaction, isn't it?
Dr Jens Madsen:
It's all about the interaction. If you can see a person's face, you're there.
Jo Evershed:
Yeah, yeah, yeah. So, maybe it's, when you've got your students in your class, get them to turn their videos on, right? They'll feel like they're there together in a room.
Dr Jens Madsen:
It's so important.
Jo Evershed:
So important. Back to the participants: there were 150 of you for most of today. Thank you so much for joining our third Gorilla Presents webinar. Each month we'll be addressing a different topic in online behavioral research. So, why not write in the chat with suggestions of what you'd like us to cover next? Yes, thank you messages, please, through to our amazing panelists and Tom as well. It's very difficult to judge how much value you've got out of this from here, but big thank yous really help these guys know that you really appreciated the wisdom they've shared with you today.
Jo Evershed:
There will be a survey. I think we email you with a survey straight after this to help us make these sessions more useful. Please fill this out. It's tremendously useful to us and it allows us to make each session better and better. You guys can see the value that you've got out of this today. By giving us feedback, we can make future sessions even better. So you're doing a solid for the whole research community.
Jo Evershed:
The next webinar is going to be about speech production experiments online. It is going to be in late April. So, if speech production experiments, where people talk… It's going to be the 29th of April, there you go. Where people talk and you're collecting their voice, if that's your bag, then make sure you sign up for that one as well. Thank you and good night. One final massive thank you to the panelists. Thank you so much for giving your time to the research community today, and we'll chat in a minute in the next room.
Simone Lira Calabrich:
Thank you, everyone.
Dr Jens Madsen:
Yeah, thank you.
Jonathan Tsay:
Thank you.