Gorilla Presents… Eye and Mouse Tracking Research Online (Webinar)

This webinar is all about conducting eye and mouse tracking research online.

 


Transcript

Jo Evershed:

Hello everyone. We have 31 attendees. Hello. Hopefully you’re beginning to be able to hear me. Thank you for joining us today. Now, first things first: I’d like you to open up the chat and tell us where you’re coming from. So, introduce yourself: say, “Hi, I’m blah, blah, blah, blah,” from wherever you’re from. I will have a go at doing that now so that you can see me. But open up the chat and type in who you are. So I’m Jo from Gorilla. I love evidence-based visions.

Jo Evershed:

Felix Trudeau from the University of Toronto. Hello, Felix. Hello, Pete. Hello, Karen. Hello Jen. Hello Jerry, Nick, and Gita and Mave, and Dan. I think we’re up to about 91 people now. Yonas.

Jo Evershed:

Okay. Now, the next thing I want you to start answering in the chat is: what made you embrace online research? You’re all here to hear about online research. Hey, Sam. Nice to see you. So what made you embrace online research? COVID, lots of COVID responses. Yes, and we hear from people the whole time, COVID was the push I needed to take my research online. I embrace it because of easier access to participants, but also COVID. Yes, it is so great to be able to collect data so much more quickly than having to test people face-to-face in the lab. High quality data is obviously the future for behavioral research.

Jo Evershed:

No more underpowered samples. Hopefully, that can start to be a thing of the past. Now, the next question I want you guys to answer in the chat (we’ve got 108 people here now, so that’s fantastic) is: what do you see as the benefits of online research? Obviously COVID was the push, but what are you hoping to get from it? We’ve heard a little bit about more access to participants, but diverse samples, quicker, less costly, more varied samples, scalability, lovely answers, these. You can be a bit more longer. Wider participation time, and what’s this going to do for your research? Is it going to make your research better, faster, easier? Cross-cultural studies, less costly. Thank you so much.

Jo Evershed:

Finished data collection in two weeks. Wow, that must’ve felt amazing. Now, so this is great. You’re answering, what are the benefits of online research? Now, final question. What challenges to research do you face that you’re hoping to learn about today? So this is a great question, a general question that you can put into the chat, and our panelists will be reading them and that will help them give you the best possible answers. What you can also do, if you’ve got specific questions, this is the time to open the Q&A panel, which you should have access to at the bottom. So if you’ve got a question, make it detailed so that the… Not an essay, obviously, but a detailed, specific question. So, yeah. Fruka’s done a brilliant one: how reliable is online eye tracking? Fantastic, but if you can, instead of putting that in the chat, can you put that into the Q&A panel, which is a different panel; you should also be able to access it from the bottom.

Jo Evershed:

And then as our panelists are talking, they will start answering those questions. So I think we’re up to 120 attendees. That’s fantastic. Thank you so much for joining us today. Hi, I’m Jo Evershed. I’m the founder and CEO of Gorilla Experiment Builder and I’m your host today. I’ve been helping researchers to take their studies online since 2012. So I’ve been doing this for a while, over nine years. For the last two years, we’ve also brought researchers together for our online summer conference BeOnline, which stands for behavioral science online, where pioneering researchers from all over the world share insights into online methods. Papers have their place for recording what was done, but unfortunately they aren’t a playbook for how to run research successfully. And this is why we run a methods conference. Don’t miss it.

Jo Evershed:

I think my colleague, Ashley, is going to put a link to BeOnline into the chat now. And if you go to the BeOnline site and pre-register for the conference, when the tickets are available, it’s all completely free, you’ll find out about it and you’ll be able to come and have several hours’ worth of methods-related conference. And we might have more stuff on eye tracking then, because the world might’ve moved on in three months, who knows?

Jo Evershed:

But we can’t wait a year to share methodological best practice. Life is just moving too fast. So we are now convening Gorilla Presents monthly to help researchers take studies online, to learn from the best. Now we’ve got a poll coming up. Josh, can you share the poll? We’ve got this one final question for you now, and it’s how much experience do you have with Gorilla? If you can all answer that question, that would be great. Now, as you know, today’s webinar is all about eye tracking and mouse tracking online.

Jo Evershed:

We know from listening to our users that eye tracking and mouse tracking is very popular, and we thought it would be a great opportunity to bring together this vibrant community to discuss all the highs and lows of moving eye tracking and mouse tracking research out of the lab. And we’ve convened this panel of experts here to help us; they’ve been running eye tracking and mouse tracking research online for the last little while and they’re going to discuss what worked, what was challenging, and what we still need in order to do top quality eye tracking and mouse tracking research online. So please welcome Dr. Jens Madsen from CCNY, Simone Lira Calabrich from Bangor University, Professor Tom Armstrong from Whitman College, and Jonathan Tsay from UC Berkeley. And I’m now going to let each of them introduce themselves. So, Jens, over to you.

Dr Jens Madsen:

Yeah. I’m Jens Madsen, I’m a postdoc at the City College of New York and I have a pretty diverse background: I have a degree in computer science, I did my PhD in machine learning, and now I’m in neural engineering. So we’re doing quite a diverse set of recordings, all the way from neural responses to eye movements, heart rate, skin, you know, you name it, we record everything. And we actually started a project about online education. So, we were already doing webcam eye tracking before the pandemic happened, and then the pandemic happened and we were like, “Oh, this is great, you’re coming to where we are already.” So that was interesting. And yeah, we’re doing quite a lot of research with webcam eye tracking, collecting over a thousand people’s eye movements when they watch educational videos.

Jo Evershed:

Oh, that’s awesome. Fantastic. So Simone, over to you. What are you up to?

Simone Lira Calabrich:

So I’m Simone, I’m a PhD student at Bangor University, which is in North Wales, and my supervisors are Dr. Manon Jones and Gary Oppenheim. And we are currently investigating how individuals with dyslexia acquire novel visual-phonological associations, or how they learn associations between letters and sounds, and how they do that as compared to typical readers. And we’ve been using paired-associate learning and the looking-at-nothing paradigm in our investigation. And this is actually the first time that I’ve been working with eye tracking research, and because of the pandemic, I had to immediately move to online eye tracking.

Jo Evershed:

Excellent. Now, Tom, over to you.

Prof Tom Armstrong:

I’m an Associate Professor at Whitman College and I’m an affective and clinical scientist. Affective in the sense that I study the emotional modulation of attention, and clinical in the sense that I study the emotional modulation of attention in the context of anxiety disorders and other mental illnesses. And I’ve been using eye tracking in that work for about 10 years now, with a particular focus on measuring stress with eye tracking. And then through the pandemic, I teamed up with Alex Anwyl-Irvine and Edwin Dalmaijer to create this mouse-based alternative to eye tracking that we could take online.

Jo Evershed:

Yeah, and we’re going to hear more about (INAUDIBLE). Well, Tom’s going to talk much more about that later, and then I’ve got exciting news at the end of today. And Jonathan, sorry, last but not least, over to you.

Jonathan Tsay:

Hello everyone. I’m Jonathan Tsay, but you can call me JT, like Justin Timberlake. I’m a third-year grad student at UC Berkeley and I study how we learn and acquire skilled movements. And hopefully we can apply what we learn here at Berkeley to rehabilitation and physical therapy.

Jo Evershed:

Excellent. Fantastic. Now, one other note to the attendees here today: our panelists have put their Twitter handles next to their names, and I should probably put mine in there as well, one minute. So if you want to follow any of us so that you hear what we’re up to, when we’re up to it, and read their latest papers, and get their latest advice, do do that. Now, let’s get to the meat of it. Jens, how about we start with you giving your presentation about your research, and I’ll come back to you in about five minutes and make sure you cover your hints and tips. So we want to know what you’ve done, what worked, what was challenging, and what you might do differently in hindsight.

Dr Jens Madsen:

Yeah. So, we started this online webcam eye tracking quite a few years ago. So this is, I think, I don’t know how long Gorilla has had their implementation of WebGazer, but this is pre-COVID, as I’ve said. I can try to share my screen somehow.

Jo Evershed:

That’d be great.

Dr Jens Madsen:

This will stop other people from sharing their screen. Is that okay?

Jo Evershed:

Yeah. Yeah, yeah, do that.

Dr Jens Madsen:

That’s great. So, just connect to here. Is it possible? I’m just going to go ahead and do this. So I think the reason why you contacted me is because I came up with this paper where we use eye tracking to improve and hopefully make online education better. So we both use professional eye tracking, which is where we are comfortable, and then we thought, if we can actually make this scale, we’re going to use the webcam. And then we can read more about it; I’m just going to give a quick spiel about what we actually did in this study, okay?

Dr Jens Madsen:

So, we saw, a couple of years ago, that online education was increasing rapidly, and we wanted to see what are the challenges of online compared to the classroom. Much like right now: I have no idea whether or not any of you that’s listening to me are actually there. I don’t know if you’re listening, if you’re paying attention to what I’m saying; I have absolutely no clue. I mean, I can see the panelists, but I just don’t know anybody else. Maybe I can see the chat, if people actually are interacting. But a teacher in a classroom, they can actually see that, right? You can see whether or not the students are falling asleep or whatever, and they can interact with the students and change and react accordingly if they’re too boring. And so we wanted to develop tools that can measure the level of attention and engagement in an online setting. And, essentially, we need a mechanism to measure and react to the level of attention of students and hopefully make them engaged in the education.

Dr Jens Madsen:

And so, essentially, we did a very simple experiment. We basically measure people’s eye movements while they watch short educational videos, and then we ask them a bunch of questions about the videos. And so we wanted to see whether or not we could use eye tracking, both in a setting of a professional eye tracker, but also the webcam, to predict the test scores and measure the level of attention, okay? And so, I developed my own platform, sorry, Gorilla, but we used… The platform is called Elicit, and basically we use this software called WebGazer, and WebGazer is basically taking the pixels of your eyes. I just learned that you are disabling the mouse. We had problems with that, mouse movements and mouse clicks, because that’s how the webcam eye tracking actually works.

Dr Jens Madsen:

You can get an idea from this; this is just me making instructional videos for my subjects, because I can tell you that calibrating this is going to be a nightmare for people. I had over 1000 people go through this, and I sat there and talked to people about how to calibrate this and I got a lot of mad, mad, mad responses. So be aware of that. And you can also get an idea of the quality of the eye tracking: spatially, it’s jittering about, and it’s all about… Key things are light, so how much light there is on your face, the quality, how close you are to the webcam, and there’s a couple of other things.

Dr Jens Madsen:

So I did two experiments, one in the classroom. So I literally had students coming in after their lab session and sitting there doing this online thing. And there I could go around and show people how to do it. A key thing is reflections in people’s glasses; it’s a nightmare. If you have light in the background, nightmare. The problem with webcams is that they throttle the frame rate, so depending on the light, the frame rate will just drop or go up.

Dr Jens Madsen:

Another thing that will happen is that it changes the contrast. All of a sudden, the tracking will completely move, because it’s found something interesting in the background, and there you lose the eye tracking completely. And there’s many of those small, finicky things that can cause this to go wrong. So in this at-home experiment that I did, I recruited over 1000 people from Prolific and Amazon Mechanical Turk. I can tell you that Prolific was a delight to work with. I ended up using a sort of instructional video, where I literally show people how to do it, because I got so many mad emails that I had to do a video about it. Yeah. I can talk about signal quality and all that later, but those were kind of the practical uses and practical tips that I can give about using this eye-tracking software.
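(A minimal sketch of how one might screen webcam recordings for the frame-rate throttling Jens describes here: estimate each participant’s effective sampling rate from the sample timestamps and flag recordings that dropped too low. This is not Jens’ pipeline; the file name, the column names and the 15 Hz cut-off are assumptions to adapt to your own export.)

```python
import pandas as pd

def effective_sampling_rate(gaze: pd.DataFrame) -> pd.Series:
    """Median frames per second for each participant, estimated from timestamps."""
    def fps(group: pd.DataFrame) -> float:
        dt = group["time_ms"].sort_values().diff().dropna()  # ms between consecutive samples
        return 1000.0 / dt.median() if len(dt) else float("nan")
    return gaze.groupby("participant_id").apply(fps)

gaze = pd.read_csv("webcam_gaze.csv")        # hypothetical per-sample export
rates = effective_sampling_rate(gaze)
too_slow = rates[rates < 15]                 # 15 Hz cut-off is an assumption
print(f"{len(too_slow)} participants fall below 15 Hz")
```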

Jo Evershed:

That’s fantastic, Jens. Could you say a little bit more about what the content of the video was? Because that sounds like such a great idea, and it’s actually something we heard from the guys last month talking about auditory receptors. She was like, I had to show them a picture of where they should put their hands on the keyboard and then they got it. So this sounds the same: you can’t just write it in text, but if you show somebody a video of, like, this… Was it literally that? Like, “Here’s the video, this is what it looks like, this is what’s going to happen.” And then they get it, right?

Dr Jens Madsen:

Yeah. So I made a cartoon, I wrote instructions. I mean, for the first hundreds of people I went in batches of 20: nobody got it, nobody got it, nobody got it. So it’s just incremental. Okay, they didn’t understand that. Why didn’t they understand that? I don’t know. And I asked my colleagues, and they’d say, “I understand it.” Because you’re there. You’re like, “Well, you can see what I mean.” And so, I don’t know how the calibration of Gorilla works, but we have to [crosstalk 00:16:04].

Jo Evershed:

Very similar to yours. We have our [crosstalk 00:16:04] in slightly different places.

Dr Jens Madsen:

Right, yeah. Essentially, I mean, you can imagine you have this wireframe that’s fitting around your face, right? And that wireframe has to be there because it’s essentially finding your eyes, and it takes those pixels of your eyes and then uses the model to predict where you’re looking on the screen. Now, if this wireframe, as you saw in the image, is over here, you can move your eyes as much as you want, it’s not going to happen, you know? It’s important that it’s there, and also that you don’t move around, because that’s the wireframe. And at this point, I had a beard. That was a huge problem because it didn’t like the shape of my face, I guess. My beard was a problem. So I literally showed them a video of me going through it and showing them, “Oh, you see, now it’s going wrong because the wireframe is over there. Now, I go back. Oh, this is working. Now, I turned off the light. You can see what happens. It’s wrong,” you know?

Dr Jens Madsen:

And there’s also just the human interaction with the subjects, because when I get these people from Prolific and Amazon Mechanical Turk, this is just text. I’m not a person. They don’t really care. They’re just like, “I want to make money, I want to make money.” But then, if you see a person like, “This is my research, please do well. Come on guys, do it for me.” You’re like, “Okay,” and then they actually… People even thanked me for participating. So that was really a nice experience.

Jo Evershed:

Oh, that’s fantastic. So attendees, what I want you to type into the chat now is, in terms of top tips, what was the most valuable for you? Was it: do a video instruction, because then your participants will understand what they need to do? Was it: do a video instruction, because then they’ll like you, and they’ll want to do your experiment for you, you as the person? Or was it: make sure you don’t have men with beards, or ask them to shave first?

Jo Evershed:

So into the chat now. Which do you like? Video instructions to get better data. Video instructions are great to people watching them. Video instructions, glasses and background, get better data. So you can see, Jens, everybody is learning a lot from what you’ve said already. That was tremendously helpful, particularly video instructions, for all of the reasons. Excellent. So, we’re now going to go over to Simone. Simone, how do you want to share what you’ve been doing? Because you’ve been taking a somewhat different approach to eye tracking.

Simone Lira Calabrich:

Yes, let me share my screen here now. Okay. So I’m assuming that you guys can see my screen.

Jo Evershed:

Yeah, we can see that. Yeah.

Simone Lira Calabrich:

Okay. So first of all, I’d like to thank you guys for inviting me to this webinar. So I’ll talk a little bit about my personal experience as a Gorilla user. It’s also my first time doing eye tracking research. And I’ll try to give you a couple of tips as well on what you could do to get high-quality data. And there’s going to be, I think, some overlap with what Jens just mentioned. Okay.

Simone Lira Calabrich:

So as I briefly mentioned in my introduction, in our lab we’re investigating the different processes underpinning acquisition of novel letter-sound associations. And our aim with that is to better understand how we bind visual and phonological information together and what exactly makes this process so effortful for some readers.

Simone Lira Calabrich:

So, in Gorilla, we used a paired-associate learning paradigm in one of the tasks. So as you can see in the demonstration, in each trial there were three shapes on the screen. Participants would first learn which sort of words would go with which one of the shapes, and then they would be tested on their ability to recognize the pairs. After presenting the bindings, we play one of the three words from each trial. We then present a blank screen and then we show the participants the three pairs again. And what we do is we track participants’ looks during the blank screen presentation to see if they will visually revisit the screen locations that were previously occupied by the target.

Simone Lira Calabrich:

The rationale behind this is that sometimes when we are trying to remember something, we might look at the spatial location where that information, or that piece of information, was presented. We do that even if the spatial location is now empty, right? So this task that we administered in Gorilla is an attempt to replicate the findings from a previous, similar eye tracking study done by my supervisors, Jones and colleagues, using a similar paradigm with paired-associate learning and looking-at-nothing as well, in typical and dyslexic readers.

Simone Lira Calabrich:

So, one of the things, and this has a lot to do with what Jens was mentioning before, one of the things that I would strongly suggest that you check when you’re pre-processing your eye tracking data in Gorilla is the face_conf values. So the values in this column here range from zero to one. And what it measures is how strongly the image under the model actually resembles a face. So one means that there was a perfect fit and zero means that there was no fit, as you can see here in the illustration. According to Gorilla’s recommendation, values that are over 0.5 are ideal.

Simone Lira Calabrich:

And the reason why I think it’s so important to check this carefully is because some of your participants might move their heads during the task, as Jens was mentioning before, or they might accidentally cover their faces if they’re bored or something like that, they might put their glasses on or take their glasses off during the experiment, and there might be some changes in the lighting conditions as well. So a lot of things can happen mid-experiment and then their faces will no longer be detected. So it’s important that you exclude predictions that have a very low face_conf value. That’s extremely important.
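(A minimal sketch of the face_conf filter Simone recommends, assuming a Gorilla-style per-sample export with a face_conf column; the file name is a placeholder, and the 0.5 cut-off follows the Gorilla recommendation she mentions.)

```python
import pandas as pd

samples = pd.read_csv("eyetracking_collection.csv")   # hypothetical Gorilla export

# Keep only predictions where the face model fit is acceptable
# (0.5 is the threshold recommended by Gorilla, as mentioned above).
good = samples[samples["face_conf"] >= 0.5]

kept = len(good) / len(samples)
print(f"Kept {kept:.1%} of gaze samples after the face_conf filter")
```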

Simone Lira Calabrich:

So one thing which we have been doing is adding some questions at the beginning of the experiment, where we ask participants about the conditions under which they will be doing the tasks. Some of the questions that I thought were relevant to eye-tracking research are the ones that are highlighted here. So we ask them: in what kind of lighting will you be doing the tasks? Is it daylight? Are they going to be using artificial lighting? Are they going to be placing their laptops on their lap or on their desk? We cannot, unfortunately, force participants to place their laptops on the desk, which would be ideal, and some of them still end up placing their laptops on their laps. And we also ask them if they’re going to be wearing glasses during the experiment, because we cannot always exclude participants who are wearing glasses.

Simone Lira Calabrich:

So what I do with this is, based on participants’ responses, I try to generate some plots so that I can visually inspect what may be causing the poor face_conf values for some of the participants. So, overall, as you can see here, the mean value for all of the conditions was above the recommended threshold. But you can also see that the data quality was affected to some extent in some of the conditions. So, in this particular sample here, the model fit was equally fine for people wearing or not wearing glasses, but in one of the other pilots that we conducted, it was really, really poor for participants wearing glasses. So, you might think that it would be okay for you to exclude participants wearing glasses from your experiments. We cannot do that.
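(A minimal sketch, not Simone’s code, of the kind of per-condition inspection she describes: mean face_conf broken down by the self-reported setup questions. The column names “lighting”, “glasses” and “placement” are hypothetical and would come from your own pre-task questionnaire.)

```python
import pandas as pd
import matplotlib.pyplot as plt

# Gaze samples joined to each participant's questionnaire answers (hypothetical file).
samples = pd.read_csv("eyetracking_with_setup.csv")

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax, condition in zip(axes, ["lighting", "glasses", "placement"]):
    samples.groupby(condition)["face_conf"].mean().plot(kind="bar", ax=ax)
    ax.set_title(f"Mean face_conf by {condition}")
    ax.axhline(0.5, linestyle="--")   # recommended threshold
plt.tight_layout()
plt.show()
```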

Simone Lira Calabrich:

The second plot suggests that natural daylight seems to be a bit better for the remote eye tracker. So what I’ve been trying to do is release the experiments in batches, and I try to schedule them to become available early in the morning so that I can try to recruit more people who are probably going to be doing the task during the day, and sometimes I just pause the experiments. Here, you can see as well that placing the computer on the lap is also not ideal, but honestly, I don’t know how to convince participants not to do that. I try to ask them, I give visual instructions as well, but it doesn’t always work.

Simone Lira Calabrich:

For the last one, you can see that in my experiment we have six blocks, with 216 trials across the blocks, so it’s a very long experiment. And the impression that I get is that as people get tired over the course of the experiment, they start moving more or they start touching their faces and doing things like that. So, the data quality will tend to decrease towards the end of the experiment. So that’s why it’s important for you to counterbalance everything that you can and randomize everything. So, this is it for now. I would like to thank my supervisors as well. And I have a couple more tips which I might show you guys later if we have time. You are muted, Jo.

Jo Evershed:

Thank you so much, Simone. That was actually fantastic. So attendees, what I want you to answer there is: what for you was the most valuable thing that Simone said? Maybe it was face_conf, checking those numbers. Or it might’ve been the settings and questions, just asking people what their setup is so that you can exclude participants if they’ve got a setup that you don’t like. Or was it only running experiments in the morning, checking the integrity of face models? Or was it actually just seeing how each of those settings reduces the quality of the data? Because I found that fascinating, seeing those plots where you can just see the quality of the data. Yes, the face_conf stuff is super important. Lighting wasn’t important, whereas where the laptop was placed was. Yeah. So everybody’s getting so much value from what you said, Simone. Thank you so much for that. So next, we’re going to go to Tom Armstrong, who’s going to talk to us, I think, about MouseView.

Prof Tom Armstrong:

All right. Let me get my screen share going here.

Prof Tom Armstrong:

Okay. So I’m going to be talking about a tool that I co-created with Alex Anwyl-Irvine and Edwin Dalmaijer that is an online alternative to eye tracking. And big thanks to Alex for developing this brilliant JavaScript to make this thing happen, and to Edwin for really guiding us in terms of how to mimic the visual system and bringing his expertise as a cognitive scientist to bear.

Prof Tom Armstrong:

So I mentioned before, I’m an affective and clinical scientist. And so in these areas, people often use passive viewing tasks to study the emotional modulation of attention, or, as it’s often called, attentional bias. And in these tasks, participants are asked to look at stimuli however they please. And these stimuli are typically presented in arrays of from two to as many as 16 stimuli. Some of them are neutral, and then some of the images are affective or emotionally [inaudible 00:27:39] charged.

Prof Tom Armstrong:

Here’s some data from a task with just two images: a disgusting image paired with a neutral image, or a pleasant image paired with a neutral image. And I’ll just give you a sense of some of the components of gaze that are modulated by emotion in these studies.

Prof Tom Armstrong:

And so, one thing we see is that at the beginning of the trial, people tend to orient towards any emotional or affective image. Margaret Bradley and Peter Lang have called this natural selective attention. And in general, when people talk about attentional bias for threat, or attentional bias for motivationally relevant stimuli, they’re talking about this phenomenon. It’s often measured with reaction time measures.

Prof Tom Armstrong:

What’s more unique about eye tracking is this other component that I refer to as strategic gaze or voluntary gaze. And this plays out a little bit later in the trial, when participants kind of take control of the wheel with their eye movements. And here, you see a big difference according to whether people like a stimulus, whether they want what they see in the picture, or whether they are repulsed by it. And so, you don’t see valence differences with that first component, but here in this more voluntary gaze, you see some really interesting effects.

Prof Tom Armstrong:

And so you can measure this with total dwell time during a trial. And one of the great things about this measure is that in comparison to those reaction time measures of attentional bias, which have been pretty thoroughly critiqued, and also the eye tracking measure of that initial capture, this metric is very reliable. It’s also valid, in the sense that, for example, if you look at how much people look away from something that’s gross, that’s going to correlate strongly with how gross they say the stimulus is. And the same thing for appetitive stimuli: how much people want to eat food that they see will correlate with how much they look at it.
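(A minimal sketch of the total dwell time measure Tom describes, for a two-image display: sum the time represented by gaze samples falling in each half of the screen. The column names and the half-screen areas of interest are assumptions, not Tom’s analysis code.)

```python
import pandas as pd

def dwell_times(trial: pd.DataFrame) -> pd.Series:
    """Milliseconds of gaze falling on the left vs. right half of the screen."""
    trial = trial.sort_values("time_ms")
    dt = trial["time_ms"].diff().fillna(0)            # time represented by each sample
    left = dt[trial["x_norm"] < 0.5].sum()            # x_norm: 0 = left edge, 1 = right edge
    right = dt[trial["x_norm"] >= 0.5].sum()
    return pd.Series({"dwell_left_ms": left, "dwell_right_ms": right})

gaze = pd.read_csv("passive_viewing_gaze.csv")        # hypothetical export
per_trial = gaze.groupby(["participant_id", "trial"]).apply(dwell_times)
print(per_trial.head())
```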

Prof Tom Armstrong:

So, I’ve been doing this for about 10 years. Every study I do involves eye tracking, but it comes with some limitations. So, first, it’s expensive. Edwin Dalmaijer has done a really amazing job democratizing eye tracking by developing a toolbox that wraps around cheap, commercial-grade eye trackers. But even with it being possible to now buy 10 eye trackers, for example, it’s still hard to scale up the research, like what Jo was talking about earlier: no more underpowered research, more diverse samples. Well, it’s hard to do that with the hardware.

Prof Tom Armstrong:

And then, as I learned about a year ago, it’s not pandemic-proof. And so you’ve got to bring folks into the lab. You can’t really do this online, although, as we just heard, there are some pretty exciting options. And really, for me, webcam eye tracking is a holy grail. But in the meantime, I wanted to see if there were some other alternatives that would be ready to go out of the box for eye-tracking researchers. And one tradition, it turns out, is using mouse viewing, where the mouse controls a small aperture and allows you to sort of look through this little window and explore an image.

Prof Tom Armstrong:

Now, I thought this was a pretty novel idea. It turns out folks have been doing this for maybe 20 years. And they came up with some pretty clever terms, like fovea, for the way that it sort of mimics foveal vision. Also, there’s been a lot of validation work showing that mouse viewing correlates a lot with regular viewing as measured by an eye tracker. So, what we were setting out to do was first to see if mouse viewing would work in affective and clinical sciences, to see if you’d get this sort of hot attention, as well as the cold attention that you see in just sort of browsing a webpage.

Prof Tom Armstrong:

And then, in particular, we wanted to create a tool, sort of in the spirit of Gorilla, that would be immediately accessible to researchers and that you could use without programming or having technical skills. And so we actually used… We did this in Gorilla and we collected some data on Gorilla over Prolific, and we have data… This is from a pilot study. We did our first study with 160 participants. And let me just show you what the task looks like. I’m going to zip ahead, because I’m a disgust researcher and you don’t want to see what’s on the first trial. At least you can see it blurred, but that’s good enough. Okay. So you can see someone’s moving a cursor-locked aperture and there’s this Gaussian filter used to blur the screen to mimic peripheral vision, and participants can explore the image with the mouse. Okay. We move on.

Prof Tom Armstrong:

Okay. So one of the great things about MouseView is that Alex has created it in a really flexible manner where users can customize the overlay. So you can use the Gaussian blur, you can use a solid background, you can use different levels of opacity. You can also vary the size of the aperture. And this is something that we haven’t really systematically varied yet. Right now it’s just sort of set to mimic foveal vision, to be about two degrees or so.

Prof Tom Armstrong:

So we’ve done this pilot study with about 160 people, and the first thing we wanted to see is: does the mouse scanning resemble gaze scanning? And Edwin did some really brilliant analyses to be able to sort of answer this quantitatively and statistically. And we found that the two really converge; you can see it here in the scan paths. For example, if you look over at the right, disgust five, there’s a really similar pattern of exploration. We blurred that so that you can’t see the proprietary IAPS images.

Prof Tom Armstrong:

Now the bigger question for me: does this capture hot attention? Does this capture the emotional modulation of attention that we see with eye tracking? And so here on the left, you can see the eye tracking plot that I showed you before. Over here on the right is the MouseView plot. And in terms of that second component of gaze I talked about, that strategic gaze, we see that coming through in the MouseView data really nicely. Even some of these subtle effects, like the fact that people look more at unpleasant images the first time before they start avoiding them, so we have that approach and that avoidance in the strategic gaze.

Prof Tom Armstrong:

The one thing that’s missing, maybe not surprisingly, is this more automatic capture of gaze at the beginning of the trial, because the mouse movements are more effortful, more voluntary. We’ve now done a couple more of these studies and we’ve found that this dwell time index with the mouse viewing is very reliable in terms of internal consistency. Also, we’re finding that it correlates very nicely with self-report ratings of images and individual differences related to images, like we see with eye gaze. So it seems like a pretty promising tool. And I can tell you more about it in a minute, but I just wanted to really quickly thank Gorilla (I’m excited about any announcement that might be coming), and my college for funding some of this validation research, and the members of my lab who are currently doing a within-person validation against eye tracking in person in the lab.

Jo Evershed:

Thank you so much, Tom. That was absolutely fascinating. A number of people have said in the chat that they liked that. That was just absolutely fascinating, your research. I’m so impressed. What I’d love to hear from attendees is: what do you think about MouseView? Doesn’t that look tremendous? I’m so excited by that. Because there are limits to what eye tracking we can do with the webcam, right? I’m sure we can get two zones, maybe six, but what I think is really exciting about MouseView is it allows you to do that much more detailed eye tracking-like research. It’s a different methodology that’s going to make stuff that otherwise wouldn’t be possible to take online, possible. Tom, I’d never heard of this before. It sounds so exciting. It seems like such a reasonable way to investigate volitional attention in an online context. I think people have been really inspired by what you’ve said, Tom.

Jo Evershed:

And the exciting news for those of you listening today: MouseView is going to be a closed beta zone from next week in Gorilla. To get access to any closed beta zone, all you need to do is go to the support desk, fill out the form, “I want access to a closed beta zone,” this one, and it gets applied instantly to your account. That’s already the case for eye tracking, and it’ll be the case for MouseView. They can be used without… You don’t need any coding to be able to use them. If they’re in closed beta, it’s just an indication from us that there isn’t a lot of published research out there and we haven’t validated it, so we say handle with care, right? Like, run your pilots, check your data, check it thoroughly, and do additional data quality checks beyond what you would otherwise.

Jo Evershed:

With things like showing images, you can see that it’s correct, right? And the data that you’re collecting isn’t complicated. So those are things that we don’t need to have in closed beta. Until things have been published and validated, we keep things in closed beta where they’re more technically complex. That’s what that means.

Jo Evershed:

But yes, you can have access. So, MouseView is coming to Gorilla next week. And thank you to Tom and to Alex, who I think is on the call, and Edwin. They’re all here today. If you’re impressed by MouseView, can you type MouseView into the chat here, just so that Tom, and Edwin, and Alex get a little whoop whoop from you guys. Because they’ve put a massive amount of work into getting this done and I think they deserve the equivalent of a little round of applause for that. Thank you so much. Now finally, over to Jonathan to talk about what you’ve been up to.

Jonathan Tsay:

Okay. Can you see my screen? Okay, perfect, perfect. So, my name is Jonathan, I go by JT, and I study how humans control and acquire skilled movement. So let me give you an example of this through this video.

Jonathan Tsay:

Okay, my talk’s over. No, I was just kidding. So this happens every day: how we adapt and adjust our movements to changes in the environment and the body. And this process, this learning process, requires multiple components. It’s a lot of trial and error. Your body just kind of figures it out, but it’s also a lot of instruction, how the father here instructs the son to jump on this chair, and of course reward too at the end, with the hug.

Jonathan Tsay:

And we studied this in the lab by asking people to do something a little bit more mundane. So, typically, you’re in this dark room, you’re asked to hold this digitizing pen, you don’t see your arm, and you’re asked to reach to this blue target, controlling this red cursor on the screen. And we described this as playing Fruit Ninja: you slice through the blue dot using your red cursor.

Jonathan Tsay:

On the right side, I’m going to show you some data. So, initially, when people reach to the target, controlling this red cursor (they can’t see their hand), people are on target. On target means hand angle is zero, and the x-axis is time. So more reaches means you’re moving across the x-axis. But then we add a perturbation. So we introduce a 15-degree offset from the target: the cursor is always going to move 15 degrees away from the target. We tell you, we say, “Jo, this cursor has nothing to do with you. Ignore it. Just keep on reaching to the target.” And so you see here on the right, this is participants’ data. People can’t keep reaching through the target. They implicitly respond to this red cursor by moving in the opposite direction, and they drift off further and further away, to 20 degrees.
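(A minimal sketch of the hand-angle measure JT describes, where zero degrees means reaching straight at the target; the variable names and the worked example are illustrative, not the authors’ code.)

```python
import numpy as np

def hand_angle(endpoint_xy, target_xy, start_xy):
    """Signed angle in degrees between the reach endpoint and the target,
    both measured from the start position (0 = straight at the target)."""
    reach = np.asarray(endpoint_xy) - np.asarray(start_xy)
    target = np.asarray(target_xy) - np.asarray(start_xy)
    ang = np.degrees(np.arctan2(reach[1], reach[0]) - np.arctan2(target[1], target[0]))
    return (ang + 180) % 360 - 180    # wrap into [-180, 180)

# Worked example: a reach ending 15 degrees counter-clockwise of the target.
start, target = (0.0, 0.0), (0.0, 10.0)
endpoint = (10 * np.sin(np.radians(-15)), 10 * np.cos(np.radians(-15)))
print(round(hand_angle(endpoint, target, start), 1))   # 15.0
```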

Jonathan Tsay:

Eventually, they reach an asymptote, around 20 degrees, and when we turn off the feedback, when we turn off the cursor, people drift a little bit back to the target. And this whole process is implicit. If I ask you where your hand is (your actual hand is 20 degrees away from the target), you tell me your hand is around the target. This is where you feel your hand. You feel your hand to be at the target, but your hand is 20 degrees off the target. And this is how we study implicit motor learning in the lab. But because of the pandemic, we built a tool to test this online. And so, in a preprint released recently, we compared in-person data using this kind of sophisticated machinery that typically costs around $10,000 to set up.

Jonathan Tsay:

And you can see on the bottom, this is the data we have in the lab. We just create different offsets away from the target, but nonetheless people drift further and further away from the target. And we have data from online, using this model, this template we created, to track your mouse movements and your reaching to different targets. The behavior in person and online seems quite similar. But compared to in-person, online research affords some great advantages, and I’m preaching to the choir here. For the in-lab results, it took around six months of in-person testing, people coming to the lab, to collect the data. For the online results, we collected 1 to 20 people in a day, and so that’s a huge time-saver, and in terms of cost as well. And of course, we have a more diverse population. I just want to give a few tips before I sign off here.

Jonathan Tsay:

So, a few tips. First, instruction checks. For instance, in our study, we ask people to reach to the target and ignore the cursor feedback, just continue reaching. So, an instruction check question we ask is: where are you going to reach? Option A, the target; option B, away from the target. And if you choose away from the target, then we say, “Sorry, that was the wrong answer, and please try again next time.”

Jonathan Tsay:

Catch trials. So, for instance, sometimes we would say, “Don’t reach to this target.” The target presents itself, and we say, don’t reach to the target, and if we see that participants continue to reach to the target, they might just not be paying attention and just swiping their hand towards the target. So we use some catch trials to filter out good and bad subjects. We have baseline variability measures. So, reach to the target, and if we see you’re reaching in an erratic way, then we typically say, “Okay, sorry, try again next time.” Again, movement time is a great indicator, especially for mouse tracking.

Jonathan Tsay:

If, in the middle of the experiment, you go to the restroom and you come back, these are things that can be tracked using movement time, which typically means someone might not be taking your experiment seriously, but not always. And Simone brought this up, but batching and iterating, getting feedback from a lay person to understand the instructions, is huge for us. And last but not least, something that Tom brought up: sometimes when you see behavior that’s different between in-lab and online, and this is something we struggle with, is it reflective of something interesting that’s different between online and in-person, or is it noise? That’s something we’re struggling with, but we came to the conclusion that sometimes it’s just different. You’re using a mouse versus a robot in the lab. So, that can be very different.
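(A minimal sketch of the kind of screening JT lists: movement-time outliers and noisy baseline reaches. The thresholds and column names are placeholders, not the authors’ actual criteria.)

```python
import pandas as pd

trials = pd.read_csv("reaching_trials.csv")           # hypothetical per-trial export

# 1. Movement time: drop trials that were impossibly fast or took so long
#    that the participant probably stepped away mid-trial.
mt_ok = trials["movement_time_ms"].between(100, 2000)

# 2. Baseline variability: exclude participants whose no-feedback baseline
#    reaches are erratic.
baseline = trials[trials["block"] == "baseline"]
noise = baseline.groupby("participant_id")["hand_angle_deg"].std()
bad_participants = set(noise[noise > 10].index)       # 10 deg SD is an assumption

clean = trials[mt_ok & ~trials["participant_id"].isin(bad_participants)]
print(f"Kept {len(clean)} of {len(trials)} trials; "
      f"excluded {len(bad_participants)} participants for noisy baselines")
```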

Jonathan Tsay:

What I’m excited about with this mouse tracking research, and how it relates to motor learning, is that typically motor learning patient research is around 10 people, but now, because we can just send a link to these participants, they’re able to access the link and do these experiments at home. We can access much larger patient populations that are typically, maybe logistically, hard to invite to the lab. And second, teaching. I’m not going to belabor this point. Third is public outreach. So, we put our experiment on this TestMyBrain website and people just try out the game and learn a little bit about their brain, and that’s an easy way to collect data, but also for people to learn a little bit about themselves.

Jonathan Tsay:

Here are some open resources; you can take a screenshot. We share our template for how to implement these mouse tracking experiments online. It’s also integrated with Gorilla. We have a manual to help you set it up, we have a paper, and here’s a demo you can try out yourself. And last thing, I want to thank my team: Alan, who did a lot of the work coding up the experiment, my advisor, Rich Ivry, and Guy, and Ken Nakayama. They all worked collectively to really put this together and we’re really excited about where it’s going. So thank you again for your time.

Jo Evershed:

Thank you so much, Jon, that was fantastic. Now, what I want to hear from the attendees: what did you like more from Jon there, the advantages of online research, the cost-saving, the time-saving, or were you more blown away by his tips for getting good quality mouse tracking data online? Tips, instruction check questions, tips, tips, more tips. And that girl who jumped onto that stool at the beginning, were you not just blown away by her? If you were blown away by her resilience in jumping up… They’re entirely blown away in the chat. I thought she was tremendous. She must be about the same age as my son and he would not do that, for sure. That was something quite exciting. Jon, I want to ask a follow-up question. Your mouse tracking experiment, have you shared that in Gorilla Open Materials?

Jonathan Tsay:

Yeah, yeah. That is in Gorilla Open Materials.

Jo Evershed:

If you’ve got the link, do you want to dump that into the chat? Because then if anybody wants to do a replication or an extension, they can just clone the study and see how you’ve done it, see how you’ve implemented it. It’s just a super easy way of sharing research and allowing people to build on the research that’s gone before, without wasting time. Actually, make sure we’ve got a link to that as well, can you? So that when we send a follow-up email on Monday, we can make sure that everybody who’s here today can get access to that. Oh, I think Josh has already shared it, Jon. You’re off the hook. Excellent. We’ve now come to Q&A time. There are lots and lots of questions: a total of 32 questions, of which 16 have already been answered by you fine people as we go through. There are some more questions though. Edwin has got a question: how do people deal with the huge attrition of participants in web-based eye tracking? Simone or Jens, can either of you speak to that one? How have you dealt with attrition?

Simone Lira Calabrich:

Yeah, it’s a bit complicated, because my experiment is a very long one and participants end up getting tired and they quit the experiment in the middle of it. There is not much that we can do about it, but just keep recruiting more participants. So we ran a power analysis, which suggested that we needed 70 participants for our study. So, our goal was to recruit 70 participants no matter what. So, if someone quits midway, we just reject the participant and we just recruit an additional one as a substitute, as a replacement.

Dr Jens Madsen:

So I think, at least from my perspective, it’s very different comparing stationary viewing, like eye tracking of images, and, in my case, video. So video is moving constantly, right? And so, you can show an image, but they have to watch the whole video, and I have to synchronize it in time. And it also depends on the analysis method you do. In my case, I don’t really look into the spatial information. Spatial information for me is irrelevant. I use the correlation, so how similar people’s eye movements are across time. I use other people as a reference. And in that sense the data can actually be very noisy. You can actually move around and it’s quite robust in that sense. And so it depends on the level of noise you induce in the system. In my case, because it was video, I put in auxiliary tasks, like having people look at dots, to see if they were actually there or not, or things like that, just to control for those things, or else you’re in big trouble because you have no clue what’s happening.

Dr Jens Madsen:

And so having those extra things to make sure that they’re there. And also, it turns out the attention span of an online user, at least with educational content, is around five, six minutes; after that they’re gone. It doesn’t matter. They’re bored, they couldn’t be bothered. And so my tasks were always around there. The videos that I showed were always five, six minutes long, three minutes long, and then some questions. But they couldn’t be asked to sit still, because when you use WebGazer, you have to sit still. It depends on… You guys are using spatial tasks, right? So this would be a problem for you. For me it’s fine because I use the temporal course. But for spatial people that’s going to be an issue because the whole thing is just going to shift. And how do you detect that, right? Do you have to somehow insert some things like, now you look at this dot and now I can recalibrate my data, or something? I don’t know how you guys are dealing with that. But yeah, those are the things that you need to worry about.
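(A minimal sketch of the temporal measure Jens describes, inter-subject correlation of gaze: how similar one viewer’s gaze trace is, over time, to the average of everyone else’s. It assumes gaze has already been resampled to a common time base; the array layout is an assumption, and this is not the published analysis code.)

```python
import numpy as np

def gaze_isc(gaze: np.ndarray) -> np.ndarray:
    """gaze: (n_subjects, n_timepoints, 2) array of x/y positions on a common
    time base. Returns one value per subject: the mean correlation of that
    subject's x and y traces with the leave-one-out group average."""
    n = gaze.shape[0]
    isc = np.zeros(n)
    for i in range(n):
        others = np.delete(gaze, i, axis=0).mean(axis=0)      # group average without subject i
        rx = np.corrcoef(gaze[i, :, 0], others[:, 0])[0, 1]
        ry = np.corrcoef(gaze[i, :, 1], others[:, 1])[0, 1]
        isc[i] = (rx + ry) / 2
    return isc

# Example with random data: 5 "viewers", 300 time points each.
rng = np.random.default_rng(0)
print(gaze_isc(rng.standard_normal((5, 300, 2))).round(2))
```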

Simone Lira Calabrich:

Yeah. I was just going to add something: because we have a very long experiment with lots of trials, we can lose some data and it’s still going to be fine, right? So I have 216 trials in my experiment. So it’s not a six-minute-long one, it’s a two-hour experiment. So, even if I do lose some data, relatively, it’s still fine. I have enough power for that.

Dr Jens Madsen:

I mean, you still have the calibration? You do a calibration, right? And I’m assuming you do it once, right? And you have to sort of… Or you do it multiple times?

Simone Lira Calabrich:

Multiple times. Yeah.

Dr Jens Madsen:

You have to do that.

Simone Lira Calabrich:

So we have six blocks.

Dr Jens Madsen:

Yeah, that makes sense.

Simone Lira Calabrich:

So we do it at the beginning of each block and also in the middle of each block as well, just to make sure that it’s as accurate as possible.

Dr Jens Madsen:

You saw what I just did there, right? I readjusted myself and this is something natural. It’s like, I just need… Ah, yeah, that’s better, you know? That’s a problem.

Simone Lira Calabrich:

Exactly, yeah. So, yeah, that’s why we do that multiple times.

Dr Jens Madsen:

And we do it even without knowing it.

Jo Evershed:

Simone, how often do you recalibrate?

Simone Lira Calabrich:

So, we have six blocks. So at the beginning of each block and in the middle of each block. So, every 18 trials.

Jo Evershed:

Okay, that makes sense. So in previous lectures we’ve had about online methods, people have said a good length for an online experiment is around 20 minutes; much longer than that and people start to get tired. If you pay people better, you get better quality participants. So, that’s another way that you can reduce attrition: double your fees and see what happens. People are willing to stick around longer if they’re being paid well for their time.

Jo Evershed:

And then one of the researchers, Ralph Miller, from New York, he does long studies like Simone does online, and what he does is, about every 15 minutes, he puts in a five-minute break and he says, “Look, please go away, get up, walk around, do something else, stretch, maybe you need to go to the loo, maybe there’s something you need to deal with, but you have to be back in five minutes.” When you press next, that five-minute break, I think, happens automatically. And that gives people the ability to go, “Oh, I really need to stretch and move.” So you can build in an experience that is manageable for your participants.

Jo Evershed:

And so if you’re struggling with attrition, the thing to do is to pilot different ideas until you find what works for your experiments. There aren’t things that will work for everyone, but there are techniques and approaches that you can try out, sort of experimentation in real time, to find out what’s going to work. And that can be really helpful too. Tom, there are quite a few questions about… Can you guys see the Q&As? If you pull up the Q&A panel, there are some nice ones about mouse tracking here that I think Tom might be able to answer. So one here: how viable is it to use mouse tracking in reading research, for example, asking participants to move the cursor as they read? And then similarly, Jens and Simone, there are questions about eye fixations and data quality. You can also type answers. So I think we’ll run out of time if we try and cover all of those live, but maybe Jens and Simone, you can have a go at answering some of the ones that are more technical. But Tom, perhaps you could speak about mouse tracking, eye tracking, the crossover. You’re muted. You’re muted.

Prof Tom Armstrong:

There are so many, but let me try to do them justice. So, I mean, right now, I think, I don’t know what unique processes we get from MouseView. I’m thinking of it as being just a stand-in for that voluntary exploration that we see with the eye tracking. In terms of what that gets you beyond, great question, about beyond just self-report: there are some interesting ways in which self-report and eye tracking do diverge that we’ve found, that I can’t do justice to right now. So I think that you often pick up things with self-report that you don’t get with… I’m sorry, you get things with eye tracking that you don’t get with self-report. For example, Edwin and I found that eye movement avoidance of disgusting stimuli doesn’t habituate, whereas people will say they’re less disgusted, but then they’ll continue to look away from things.

Prof Tom Armstrong:

And so sometimes there’s more there than people can introspect on. About reading, Edwin took that question on. Left versus right mouse? Fascinating, I’m not sure. And then, importantly, touch screens: that is in the works. So maybe Alex can jump on that question. That’s the next thing that he’s working on, making this sort of work with touch screens. Right now it’s just for desktop or laptop, on Chrome, Edge or Firefox.

Jo Evershed:

Anything that works in Gorilla probably already works for touch. I don’t know when, unfortunately, [inaudible 00:54:56] but I will make sure that that question gets asked next week, because by default, everything in Gorilla is touch-compatible as well.

Prof Tom Armstrong:

Cool.

Jo Evershed:

I’m trying to pick out a good next question. What’s the next one at the top? Can we learn something from online mouse tracking that we cannot learn from online eye tracking? Can anyone speak to that, or have you already?

Dr Jens Madsen:

What was the question? Sorry.

Jo Evershed:

Can we learn something from online mouse tracking that we cannot learn from online eye tracking? I think that there are different methods that answer different questions, right?

Dr Jens Madsen:

So there’s certainly a correlation between where you look and where the mouse is, right? So this is clear. And also it depends on the task. In my case, with the video, you’re not moving the mouse around to where you’re looking, because you’re watching a video; that’s not a natural behavior. But if you [inaudible 00:55:57] of just using UI buttons and things like that, surely they’re highly correlated. So, it very much depends on the task.

Jo Evershed:

That’s really good. We are now five minutes to six, so I’m going to wrap this up. There are lots and lots more questions, but I don’t think we can get through all of them today. Hopefully, we’ve managed to answer 24 questions, so I think we’ve done a really, really great job there. Actually, there’s one more which I think Simone might be able to answer quickly: what’s the relationship between face_conf and the calibration accuracy measure? Did you look at both of those?

Simone Lira Calabrich:

No, I didn’t actually investigate that, but what I did was similar plots for the calibration analysis as well in Gorilla. They were very similar to what I demonstrated to you guys. So, depending on whether participants were wearing glasses or not, there were some lower values for that. What I try to do is, I strive to use the five-point calibration in Gorilla, and if calibration fails for at least one of the points, the calibration has to be reattempted. So, I’m trying to be very strict in that sense. That’s my default mode now. So, if it fails on just one of the points, I think it’s just best to try to recalibrate, which can be quite frustrating for some participants, but that will ensure that we have better data quality.

Jo Evershed:

Yeah, that was great. Now, I have one last question for the panel, which is: what do you see the next year bringing to this area of research? And we’re going to do this in reverse order, so starting with JT.

Jonathan Tsay:

I’m going to say that, at least in my field, I’m most excited about larger scale patient research; that’s number one. Reaching individuals who are typically harder to reach. So, larger scale in that sense, but another is reaching, for instance, people without proprioception, people who don’t have a sense of body awareness. I’m pretty sure most of you have never met someone like that, because, in my view, I think there are only three people in the world that are known in the literature, and being able to work with these people remotely would be a great opportunity in the future.

Jo Evershed:

That’s brilliant. Tom, how about for you? What does the next year hold?

Prof Tom Armstrong:

So, one, getting MouseView onto mobile devices to work with touch screens. Then just seeing the method get adopted by people in different areas, and seeing how a lot of these eye tracking findings replicate. Also, hopefully, getting this into task zones with some different varieties of eye tracking tasks. So, larger matrices, 16 [inaudible 00:58:54], and just incrementally working like that.

Jo Evershed:

I think that’s always what’s so exciting when you create a new method: you don’t know how people are going to use it, and somebody’s going to see that and go, “Ooh, I could do something that you’d never imagined,” and suddenly a whole new area of research becomes possible. That’s hugely exciting. Simone, how about you? What does the next year hold?

Simone Lira Calabrich:

I was just thinking perhaps of the possibility of testing participants who are speakers of different languages. That would be really nice as well. So, with remote eye tracking, we can do that more easily. So hopefully…

Jo Evershed:

Hopefully, that will…

Simone Lira Calabrich:

…that’s what’s going to happen.

Jo Evershed:

And, Jens, finally to you.

Dr Jens Madsen:

We were working in online education and we were measuring the level of attention of students when they watch this educational material. And what we’re excited about is that we can actually reverse that process, so we can have the person in the browser, measure the level of attention, and adapt the educational content to the level of attention. So, if students are dropping out or not looking, we can actually intervene and make interventions so that hopefully we can improve online education. You’re muted.

Jo Evershed:

Sorry, Tom just dropped out, so I was just checking what happened there. The online education thing, I can see it being tremendous, and that’s what everybody needs. If you had one tip for everybody watching today to improve online education, what would it be?

Dr Jens Madsen:

Keep it short, show your face, and skip the boring long PowerPoints.

Jo Evershed:

Excellent. All about human interaction, isn’t it?

Dr Jens Madsen:

It’s all about the interaction. If you can see a person’s face, you’re there.

Jo Evershed:

Yeah, yeah, yeah. So, maybe it’s, when you’ve got your students in your class, get them to turn their videos on, right? They’ll feel like they’re there together in a room.

Dr Jens Madsen:

It’s so important.

Jo Evershed:

So important. Back to the participants: there were 150 of you for most of today. Thank you so much for joining our third Gorilla Presents webinar. Each month we’ll be addressing a different topic in online behavioral research. So, why not write in the chat with suggestions of what you’d like us to cover next? Yes, thank you messages, please, through to our amazing panelists and Tom as well. It’s very difficult to judge how much value you’ve got out of this from here, but big thank yous really help these guys know that you really appreciated the wisdom that they’ve shared with you today.

Jo Evershed:

There will be a survey. I think we email you with a survey straight after this to help us make these sessions more useful. Please fill it out. It’s tremendously useful to us and it allows us to make each session better and better. You guys can see the value that you’ve got out of this today; by giving us feedback, we can make future sessions even better. So you’re doing a solid for the whole research community.

Jo Evershed:

The next webinar is going to be about speech production experiments online. It is going to be in late April. So, if speech production experiments, where people talk… It’s going to be the 29th of April, there you go. Where people talk and you’re collecting their voice, if that’s your bag, then make sure you sign up for that one as well. Thank you and good night. One final massive thank you to the panelists. Thank you so much for giving your time to the research community today and we’ll chat in a minute in the next room.

Simone Lira Calabrich:

Thank you, everyone.

Dr Jens Madsen:

Yeah, thank you.

Jonathan Tsay:

Thank you.