Gorilla Presents… Eye and Mouse Tracking Research Online (Webinar)

This webinar is all about conducting eye and mouse tracking research online.

 



Transcript

Jo Evershed:

Hello everyone. We have 31 attendees. Hello. Hopefully you're beginning to be able to hear me. Thank you for joining us today. Now, first things first, what I'd like you to do is open up the chat and tell us where you're coming from. So, introduce yourself, say, "Hi, I'm blah, blah, blah, blah," from wherever you're from. I will have a go at doing that now so that you can see me. But open up the chat and type in who you are. So I'm Jo from Gorilla. I love evidence-based visions.

Jo Evershed:

Felix Trudeau from the University of Toronto. Hello, Felix. Hello, Pete. Hello, Karen. Hello, Jen. Hello, Jerry, Nick, and Gita, and Mave, and Dan. I think we're up to about 91 people now. Yonas.

Jo Evershed:

Okay. Now, the next thing I want you to start answering in the chat is, what made you embrace online research? You're all here to hear about online research. Hey, Sam. Nice to see you. So what made you embrace online research? COVID, lots of COVID responses. Yes, and we hear from people the whole time: COVID was the push I needed to take my research online. I embrace it because of easier access to participants, but also COVID. Yes, it is so great to be able to collect data so much more quickly than having to test people face-to-face in the lab. High-quality data is obviously the future for behavioral research.

Jo Evershed:

No more underpowered samples. Hopefully, that can start to be a thing of the past. Now, the next question I want you guys to answer in the chat, and we've got 108 people here now, so that's fantastic, is what do you see as the benefits of online research? Obviously COVID was the push, but what are you hoping to get from it? We've heard a little bit about more access to participants, but diverse samples, quicker, less costly, more varied samples, scalability, lovely answers, these. You can run a bit longer, wider participation. And what's this going to do for your research? Is it going to make your research better, faster, easier? Cross-cultural studies, less costly. Thank you so much.

Jo Evershed:

Finished data collection in two weeks. Wow, that must've felt amazing. Now, so this is great. You're answering, what are the benefits of online research? Now, final question. What challenges to research do you face that you're hoping to learn about today? So this is a great question, a general question that you can put into the chat, and our panelists will be reading them and that will help them give you the best possible answers. What you can also do, if you've got specific questions, this is the time to open the Q&A panel, which you should have access to at the bottom. So if you've got a question, make it detailed so that the… Not an essay, obviously, but a detailed, specific question. So, yeah. Fruka's done a brilliant one: how reliable is online eye tracking? Fantastic, but if you can, instead of putting that in the chat, can you put that into the Q&A panel, which is a different panel; you should also be able to access it from the bottom.

Jo Evershed:

And then as our panelists are talking, they will start answering those questions. So I think we're up to 120 attendees. That's fantastic. Thank you so much for joining us today. Hi, I'm Jo Evershed. I'm the founder and CEO of Gorilla Experiment Builder and I'm your host today. I've been helping researchers to take their studies online since 2012. So I've been doing this for a while, over nine years. For the last two years, we've also brought researchers together for our online summer conference BeOnline, which stands for Behavioral Science Online, where pioneering researchers from all over the world share insights into online methods. Papers have their place for recording what was done, but unfortunately they aren't a playbook for how to run research successfully. And this is why we run a methods conference. Don't miss it.

Jo Evershed:

I think my colleague, Ashley, is going to put a link to BeOnline into the chat now. And if you go to the BeOnline site and pre-register for the conference, when the tickets are available, it's all completely free, you'll find out about it and you'll be able to come and have several hours' worth of methods-related conference. And we might have more stuff on eye tracking then, because the world might've moved on in three months, who knows?

Jo Evershed:

But we can't wait a year to share methodological best practice. Life is just moving too fast. So we are now convening monthly Gorilla Presents webinars to help researchers take studies online, to learn from the best. Now we've got a poll coming up. Josh, can you share the poll? We've got this one final question for you now, and it's: how much experience do you have with Gorilla? If you can all answer that question, that would be great. Now, as you know, today's webinar is all about eye tracking and mouse tracking online.

Jo Evershed:

We know from listening to our users that eye tracking and mouse tracking are very popular, and we thought it would be a great opportunity to bring together this vibrant community to discuss all the highs and lows of moving eye tracking and mouse tracking research out of the lab. And we've convened this panel of experts here to help us; they've been running eye tracking and mouse tracking research online for the last little while and they're going to discuss what worked, what was challenging, and what we still need in order to do top-quality eye tracking and mouse tracking research online. So please welcome Dr. Jens Madsen from CCNY, Simone Lira Calabrich from Bangor University, Professor Tom Armstrong from Whitman College, and Jonathan Tsay from UC Berkeley. And I'm now going to let each of them introduce themselves. So, Jens, over to you.

Dr Jens Madsen:

Yeah. I'm Jens Madsen, I'm a postdoc at the City College of New York and have a pretty diverse background. I have a background in computer science, I did my PhD in machine learning, and now I'm in neural engineering. So we're doing quite a diverse set of recordings, all the way from neural responses to eye movements, heart rate, skin, you know, you name it, we record everything. And we actually started a project about online education. So, we were already doing webcam eye tracking before the pandemic happened, and then the pandemic happened and we were like, "Oh, this is great, you're coming to where we are already." So that was interesting. And yeah, we're doing quite a lot of research with webcam eye tracking, collecting over a thousand people's eye movements while they watch educational videos.

Jo Evershed:

Oh, that's awesome. Fantastic. So Simone, over to you. What are you up to?

Simone Lira Calabrich:

So I'm Simone, I'm a PhD student at Bangor University, which is in North Wales, and my supervisors are Dr. Manon Jones and Gary Oppenheim. And we are currently investigating how individuals with dyslexia acquire novel visual-phonological associations, or how they learn associations between letters and sounds, and how they do that as compared to typical readers. And we've been using paired-associate learning and the looking-at-nothing paradigm in our investigation. And this is actually the first time that I've been working with eye tracking research, and because of the pandemic, I had to immediately move to online eye tracking.

Jo Evershed:

Excellent. Now, Tom, over to you.

Prof Tom Armstrong:

I'm an Associate Professor at Whitman College and I'm an affective and clinical scientist. Affective in the sense that I study the emotional modulation of attention, and clinical in the sense that I study the emotional modulation of attention in the context of anxiety disorders and other mental illnesses. And I've been using eye tracking in that work for about 10 years now, with a particular focus on measuring stress with eye tracking. And then through the pandemic, I teamed up with Alex Anwyl-Irvine and Edwin Dalmaijer to create this mouse-based alternative to eye tracking that we could take online.

Jo Evershed:

Yeah, and we're going to hear more about (INAUDIBLE). Well, Tom's going to talk much more about that later, and then I've got exciting news at the end of today. And Jonathan, sorry, last but not least, over to you.

Jonathan Tsay:

Hello everyone. I'm Jonathan Tsay, but you can call me JT, like Justin Timberlake. I'm a third-year grad student at UC Berkeley and I study how we learn and acquire skilled movements. And hopefully we can apply what we learn here at Berkeley to rehabilitation and physical therapy.

Jo Evershed:

Excellent. Fantastic. Now, one other note to the attendees here today: our panelists have put their Twitter handles next to their names. I should probably put mine in there as well, one minute. So if you want to follow any of us so that you hear what we're up to, when we're up to it, and read their latest papers and get their latest advice, do that. Now, let's get to the meat of it. Jens, how about we start with you giving your presentation about your research, and I'll come back to you in about five minutes and make sure you cover your hints and tips. So we want to know what you've done, what worked, what was challenging, and what you might do differently in hindsight.

Dr Jens Madsen:

Yeah. So, we started this online webcam eye tracking quite a few years ago. So this is, I think, I don't know how long Gorilla has had their implementation of WebGazer, but this is pre-COVID, as I've said. I can try to share my screen somehow.

Jo Evershed:

That'd be great.

Dr Jens Madsen:

This will stop other people from sharing their screen. Is that okay?

Jo Evershed:

Yeah. Yeah, yeah, do that.

Dr Jens Madsen:

That's great. So, just connect to here. Is it possible? I'm just going to go ahead and do this. So I think the reason why you contacted me is because I came up with this paper where we use eye tracking to improve and hopefully make online education better. So we both use professional eye tracking, which is where we are comfortable, and then we thought, if we can actually make this scale, we're going to use the webcam. And you can read more about it; I'm just going to give a quick spiel about what we actually did in this study, okay?

Dr Jens Madsen:

So, we saw, a couple of years ago, that online education was increasing rapidly, and we wanted to see, what are the challenges of online compared to the classroom? Much like right now, I have no idea whether or not any of you listening to me are actually there. I don't know if you're listening, if you're paying attention to what I'm saying; I have absolutely no clue. I mean, I can see the panelists, but I just don't know about anybody else. Maybe I can see the chat, if people actually are interacting. But a teacher in a classroom can actually see that, right? You can see whether or not the students are falling asleep or whatever, and they can interact with the students and change and react accordingly if they're too boring. And so we wanted to develop tools that can measure the level of attention and engagement in an online setting. And, essentially, we need a mechanism to measure and react to the level of attention of students and hopefully make them engaged in the education.

Dr Jens Madsen:

And so, essentially, we did a very simple experiment. We basically measured people's eye movements while they watched short educational videos, and then we asked them a bunch of questions about the videos. And so we wanted to see whether or not we could use eye tracking, both in the setting of a professional eye tracker but also with the webcam, to predict the test scores and measure the level of attention, okay? And so, I developed my own platform, sorry, Gorilla. The platform is called Elicit, and basically we use this software called WebGazer, and WebGazer is basically taking the pixels of your eyes. I just learned that you are disabling the mouse. We had problems with that, mouse movements and mouse clicks, because that's how the webcam eye tracking actually works.

Dr Jens Madsen:

You can get an idea, this is just me making instructional videos for my subjects, because I can tell you that calibrating this is going to be a nightmare for people. I had over 1000 people go through this, and I sat there and talked to people about how to calibrate it, and I got a lot of mad, mad, mad responses. So be aware of that. And you can also get an idea of the quality of the eye tracking; spatially, it's jittering about. Key things are light, so how much light there is on your face, the quality, how close you are to the webcam, and there's a couple of other things.

Dr Jens Madsen:

So I did two experiments, one in the classroom. So I literally had students coming in after their lab session and sitting there doing this online thing. And there I can go around and show people how to do it. Key things: reflections in people's glasses are a nightmare. If you have light in the background, nightmare. The problem with webcams is that they throttle the frame rate, so depending on the light, the frame rate will just drop or go up.
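As a practical aside on the frame-rate throttling Jens describes, one rough way to screen recordings is to estimate each participant's effective sampling rate from the gaze timestamps and flag recordings that dropped too low. This is a minimal sketch in Python; the 20 Hz cut-off and the millisecond timestamps are assumptions for illustration, not part of Jens's pipeline.

```python
import numpy as np

def effective_sampling_rate(timestamps_ms):
    """Estimate the effective frame rate (Hz) of a webcam gaze recording
    from its sample timestamps (in milliseconds)."""
    dt = np.diff(np.sort(np.asarray(timestamps_ms, dtype=float)))
    dt = dt[dt > 0]                   # drop duplicate timestamps
    return 1000.0 / np.median(dt)     # median is robust to occasional dropped frames

# Example with made-up timestamps: uneven sampling between roughly 12 and 40 Hz.
rng = np.random.default_rng(0)
timestamps = np.cumsum(rng.uniform(25, 80, size=500))
rate = effective_sampling_rate(timestamps)
print(f"Effective rate: {rate:.1f} Hz" + (" - inspect or exclude" if rate < 20 else ""))
```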

Dr Jens Madsen:

Another thing that will happen is that it changes the contrast. All of a sudden, the model will completely move, because it's found something interesting in the background, and there you lose the eye tracking completely. And there are many of those small, finicky things that can cause this to go wrong. So, in this at-home experiment that I did, I recruited over 1000 people from Prolific and Amazon Mechanical Turk. I can tell you that Prolific was a delight to work with. I ended up using a sort of instructional video, where I literally show people how to do it, because I got so many mad emails that I had to do a video about it. Yeah. I can talk about signal quality and all that later, but those were the practical tips that I can give about using this eye-tracking software.

Jo Evershed:

That's fantastic, Jens. Could you say a little bit more about what the content of the video was? Because that sounds like such a great idea, and it's actually something we heard from the guys last month talking about auditory receptors. She was like, I had to show them a picture of where they should put their hands on the keyboard and then they got it. So this sounds the same: you can't just write it in text, but if you show somebody a video of like, this… Was it literally that? Like, "Here's the video, this is what it looks like, this is what's going to happen." And then they get it, right?

Dr Jens Madsen:

Yeah. So I made a cartoon, I wrote instructions. I mean, for the first hundreds of people I went in batches of 20: nobody got it, nobody got it, nobody got it. So it's just incremental. Okay, they didn't understand that. Why didn't they understand that? I don't know. And I asked my colleagues, and they said, "I understand it." Because you're there. You're like, "Well, you can see what I mean." And so, I don't know how the calibration of Gorilla works, but we have to [crosstalk 00:16:04].

Jo Evershed:

Very similar to yours. We have our [crosstalk 00:16:04] in slightly different places.

Dr Jens Madsen:

Right, yeah. Essentially, I mean, you can imagine you have this wireframe that's fitting around your face, right? And that wireframe has to be there because it's essentially finding your eyes, and it takes those pixels of your eyes and then uses the model to predict where you're looking on the screen. Now, if this wireframe, as you saw in the image, is over here, you can move your eyes as much as you want, it's not going to happen, you know? It's important, and also that you don't move around, because that's the wireframe. And at this point, I had a beard. That was a huge problem because it didn't like the shape of my face, I guess. My beard was a problem. So I literally showed them a video of me going through it and showing them, "Oh, you see now it's going wrong because the wireframe is over there. Now, I go back. Oh, this is working. Now, I turned off the light. You can see what happens. It's wrong," you know?

Dr Jens Madsen:

And also, there's just the human interaction with the subjects, because when I get these people from Prolific and Amazon Mechanical Turk, this is just text. I'm not a person. They don't really care. They're just like, "I want to make money, I want to make money." But then, if they see a person like, "This is my research, please do well. Come on guys, do it for me." You're like, "Okay," and then they actually… People thanked me even for participating. So that was really a nice experience.

Jo Evershed:

Oh, that's fantastic. So attendees, what I want you to type into the chat now is, in terms of top tips, what was the most valuable for you? Was it, do a video instruction, because then your participants will understand what they need to do? Was it, do a video instruction, because then they'll like you, and they'll want to do your experiment for you, you as the person? Or was it, make sure you don't have men with beards, or ask them to shave first?

Jo Evershed:

So into the chat now. Which do you like? Video instructions to get better data. Video instructions are great, people watching them. Video instructions, glasses and background, get better data. So you can see, Jens, everybody is learning a lot from what you've said already. That was tremendously helpful, particularly video instructions, for all of the reasons. Excellent. So, we're now going to go over to Simone. Simone, how do you want to share what you've been doing? Because you've been taking a somewhat different approach to eye tracking.

Simone Lira Calabrich:

Yes, let me share my screen here now. Okay. So I'm assuming that you guys can see my screen.

Jo Evershed:

Yeah, we can see that. Yeah.

Simone Lira Calabrich:

Okay. So first of all, I'd like to thank you guys for inviting me to this webinar. So I'll talk a little bit about my personal experience as a Gorilla user. And also, it's my first time doing eye tracking research as well. And I'll try to give you a couple of tips as well on what you could do to get high-quality data. And there's going to be, I think, some overlap with what Jens just mentioned right now. Okay.

Simone Lira Calabrich:

So as I briefly mentioned in my introduction, in our lab we're investigating the different processes underpinning acquisition of novel letter-sound associations. And our aim with that is to better understand how we bind visual and phonological information together and what exactly makes this process so effortful for some readers.

Simone Lira Calabrich:

So, in Gorilla, we used a paired-associate learning paradigm in one of the tasks. So as you can see in the demonstration, in each trial there were three shapes on the screen. Participants would first learn which sort of words would go with which one of the shapes, and then they would be tested on their ability to recognize the pairs. After presenting the bindings, we play one of the three words from each trial. We then present a blank screen, and then we show the participants the three pairs again. And what we do is we track participants' looks during the blank screen presentation to see if they will visually revisit the screen locations that were previously occupied by the target.

Simone Lira Calabrich:

The rationale behind this is that sometimes when we are trying to remember something, we might look at the spatial location where that information, or that piece of information, was presented. We do that even if the spatial location is now empty, right? So this task that we administered in Gorilla is an attempt to replicate the findings from a previous, similar eye tracking study done by my supervisors, Jones and colleagues, using a similar paradigm with paired-associate learning and looking-at-nothing as well, in typical and dyslexic readers.

Simone Lira Calabrich:

So, one of the things, and this has a lot to do with what Jens was mentioning before, that I would strongly suggest you check when you're pre-processing your eye tracking data in Gorilla, is the face_conf values. So the values in this column here range from zero to one. And what it measures is how strongly the image under the model actually resembles a face. So one means that there was a perfect fit and zero means that there was no fit, as you can see here in the illustration. According to Gorilla's recommendation, values that are over 0.5 are ideal.

Simone Lira Calabrich:

And the reason why I think it's so important to check this carefully is because some of your participants might move their heads during the task, as Jens was mentioning before, or they might accidentally cover their faces if they're bored, or something like that; they might put their glasses on or take their glasses off during the experiment; there might be some changes in the lighting conditions as well. So a lot of things can happen mid-experiment and then their faces will no longer be detected. So it's important that you exclude predictions that have a very low face_conf value. That's extremely important.

Simone Lira Calabrich:

So one thing which we have been doing is we add some questions at the beginning of the experiment, and then we ask participants about the conditions under which they will be doing the tasks. So some of the questions that I thought were relevant to eye-tracking research are the ones that are highlighted here. So we ask them, in what kind of lighting will you be doing the tasks? Is it daylight? Are they going to be using artificial lighting? Are they going to be placing their laptops on their lap or on their desk? We cannot, unfortunately, force participants to place their laptops on the desk, which would be ideal, and some of them still end up placing their laptops on their laps. And we also ask them if they're going to be wearing glasses during the experiment, because we cannot always exclude participants who are wearing glasses.

Simone Lira Calabrich:

So what I do with this, based on participants' responses, is I try to generate some plots so that I can visually inspect what may be causing the poor face_conf values for some of the participants. So, overall, as you can see here, the mean value for all of the conditions was above the recommended threshold. But you can see also that the data quality was affected to some extent in some of the conditions. So, in this particular sample here, the model fit was equally fine for people wearing or not wearing glasses, but in one of the other pilots that we conducted, it was really, really poor for participants wearing glasses. So, you might think that it would be okay for you to exclude participants wearing glasses from your experiments. We cannot do that.

Simone Lira Calabrich:

The second plot suggests that natural daylight seems to be a bit better for the remote eye tracker. So what I've been trying to do is release the experiments in batches, and I try to schedule them to become available early in the morning so that I can try to recruit more people who are probably going to be doing the task during the day, and sometimes I just pause the experiments. Here, you can see as well that placing the computer on the lap is also not ideal, but honestly, I don't know how to convince participants not to do that. I try to ask them, I give visual instructions as well, but it doesn't always work.

Simone Lira Calabrich:

In the last one, you can see that in my experiment we have six blocks, with 216 trials in total, so it's a very long experiment. And the impression that I get is that as people get tired over the course of the experiment, they start moving more, or they start touching their faces and doing things like that. So, the data quality will tend to decrease towards the end of the experiment. So that's why it's important for you to counterbalance everything that you can and randomize everything. So, this is it for now. I would like to thank my supervisors as well. And I have a couple more tips which I might show you guys later if we have time. You are muted, Jo.

Jo Evershed:

Thank you so much, Simone. That was actually fantastic. So attendees, what I want you to answer there is, what for you was the most valuable thing that Simone said? Maybe it was face_conf, checking those numbers. Or it might've been the setup questions, just asking people what their setup is so that you can exclude participants if they've got a setup that you don't like. Or was it only running experiments in the morning, checking the integrity of face models. Or was it actually just seeing how each of those settings reduces the quality of the data, because I found that fascinating, seeing those plots where you can just see the quality of the data. Yes, the face_conf stuff is super important. Lighting wasn't important, whereas where the laptop was placed was. Yeah. So everybody's getting so much value from what you said, Simone. Thank you so much for that. So next, we're going to go to Tom Armstrong, who's going to talk to us, I think, about MouseView.

Prof Tom Armstrong:

All right. Let me get my screen share going here.

Prof Tom Armstrong:

Okay. So I’m going to be talk­ing about a tool that I co-cre­at­ed with Alex Anwyle-Irvine and Edwin Dal­mai­jer, that is a online alter­na­tive to eye track­ing. And big thanks to Alex for devel­op­ing this bril­liant JavaScript to make this thing hap­pen, and for Edwin, for real­ly guid­ing us in terms of how to mimic the visu­al sys­tem and bring­ing his exper­tise as a cog­ni­tive sci­en­tist to bear.

Prof Tom Armstrong:

So I mentioned before, I'm an affective and clinical scientist. And so in these areas, people often use passive viewing tasks to study the emotional modulation of attention, or, as it's often called, attentional bias. And in these tasks, participants are asked to look at stimuli however they please. And these stimuli are typically presented in arrays of from two to as many as 16 stimuli. Some of them are neutral, and then some of the images are affective or emotionally [inaudible 00:27:39] charged.

Prof Tom Armstrong:

Here’s some data from a task with just two images, a dis­gust­ing image paired with a neu­tral image, or a pleas­ant image paired with a neu­tral image. And I’ll just give you a sense of some of the com­po­nents of gaze that are mod­u­lat­ed by emo­tion in these studies.

Prof Tom Armstrong:

And so, one thing we see is that at the beginning of the trial, people tend to orient towards any emotional or affective image. Margaret Bradley and Peter Lang have called this natural selective attention. And in general, when people talk about attentional bias for threat, or attentional bias for motivationally relevant stimuli, they're talking about this phenomenon. It's often measured with reaction time measures.

Prof Tom Armstrong:

What’s more unique about eye track­ing is this other com­po­nent that I refer to as strate­gic gaze or vol­un­tary gaze. And this plays out a lit­tle bit later in the trial when par­tic­i­pants kind of take con­trol of the wheel with their eye move­ments. And here, you see a big dif­fer­ence accord­ing to whether peo­ple like a stim­u­lus, whether they want what they see in the pic­ture, or whether they are repulsed by it. And so, you don’t see a valence dif­fer­ences with that first com­po­nent, but here in this more vol­un­tary gaze, you see some real­ly inter­est­ing effects.

Prof Tom Armstrong:

And so you can measure this with total dwell time during a trial. And one of the great things about this measure is that in comparison to those reaction time measures of attentional bias that have been pretty thoroughly critiqued, and also the eye tracking measure of that initial capture, this metric is very reliable. Also, it's valid, in the sense that, for example, if you look at how much people look away from something that's gross, that's going to correlate strongly with how gross they say the stimulus is. And the same thing for appetitive stimuli: how much people want to eat food that they see will correlate with how much they look at it.
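For readers new to the measure, total dwell time is just the summed looking time inside each image's region over the trial. Here is a minimal sketch, assuming gaze (or cursor) samples with timestamps in milliseconds and a two-image layout split at the screen midline; the column names and values are illustrative.

```python
import pandas as pd

# Samples from one two-image trial: t in ms, x normalised 0-1 (illustrative values).
trial = pd.DataFrame({
    "t": [0, 50, 100, 150, 200, 250],
    "x": [0.20, 0.22, 0.25, 0.70, 0.75, 0.78],
})

# Left half of the screen = image A, right half = image B.
trial["image"] = trial["x"].apply(lambda x: "image_A" if x < 0.5 else "image_B")
trial["dt"] = trial["t"].diff().shift(-1).fillna(0)   # time until the next sample

dwell = trial.groupby("image")["dt"].sum()            # total dwell time per image (ms)
print(dwell)
print("Proportion of dwell on image A:", dwell.get("image_A", 0) / dwell.sum())
```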

Prof Tom Armstrong:

So, I’ve been doing this for about 10 years. Every study I do involves eye track­ing, but it comes with some lim­i­ta­tions. So, first it’s expen­sive. Edwin Dal­mai­jer has done a real­ly amaz­ing job democ­ra­tiz­ing eye track­ing by devel­op­ing a tool­box that wraps around cheap, com­mer­cial grade, eye track­ers. But even with it being pos­si­ble to now buy 10 eye track­ers, for exam­ple, it’s still hard to scale up the research, like what Jo was talk­ing about ear­li­er, how no more under­pow­ered research, more diverse sam­ples. Well, it’s hard to do that with the hardware.

Prof Tom Armstrong:

And then, as I learned about a year ago, it's not pandemic-proof. And so you've got to bring folks into the lab. You can't really do this online, although as we just heard, there are some pretty exciting options. And really, for me, webcam eye tracking is the holy grail. But in the meantime, I wanted to see if there were some other alternatives that would be ready to go out of the box for eye-tracking researchers. And one tradition, it turns out, is mouse viewing, where the mouse controls a small aperture and allows you to sort of look through this little window and explore an image.

Prof Tom Armstrong:

Now, I thought this was a pretty novel idea. It turns out folks have been doing this for maybe 20 years. And they came up with some pretty clever terms, like fovea, for the way that it sort of mimics foveal vision. Also, there's been a lot of validation work showing that mouse viewing correlates a lot with regular viewing as measured by an eye tracker. So, what we were setting out to do was first to see if mouse viewing would work in affective and clinical science, to see if you'd get this sort of hot attention, as well as the cold attention that you see in just sort of browsing a webpage.

Prof Tom Armstrong:

And then, in particular, we wanted to create a tool, sort of in the spirit of Gorilla, that would be immediately accessible to researchers and that you could use without programming or having technical skills. And so we actually used… We did this in Gorilla and we collected some data on Gorilla over Prolific, and we have data… This is from a pilot study. We did our first study with 160 participants. And let me just show you what the task looks like. I'm going to zip ahead, because I'm a disgust researcher and you don't want to see what's on the first trial. At least you can see it blurred, but that's good enough. Okay. So you can see someone's moving a cursor-locked aperture, and there's this Gaussian filter used to blur the screen to mimic peripheral vision, and participants can explore the image with the mouse. Okay. We move on.

Prof Tom Armstrong:

Okay. So one of the great things about MouseView is that Alex has created it in a really flexible manner where users can customize the overlay. So you can use the Gaussian blur, you can use a solid background, you can use different levels of opacity. You can also vary the size of the aperture. And this is something that we haven't really systematically varied yet. Right now it's just sort of set to mimic foveal vision, to be about two degrees or so.

Prof Tom Armstrong:

So we’ve done this pilot study, about 160 peo­ple, and the first thing we want­ed to see is, does the mouse scan­ning resem­ble gaze scan­ning? And Edwin did some real­ly bril­liant analy­ses to be able to sort of answer this quan­ti­ta­tive­ly and sta­tis­ti­cal­ly. And we found that the two real­ly con­verge, you can see it here in the scan pass. Like for exam­ple, if you look over the right, dis­gust five, real­ly sim­i­lar pat­tern of explo­ration. We blurred that so that you can’t see the pro­pri­etary IX images.

Prof Tom Armstrong:

Now, the bigger question for me: does this capture hot attention? Does this capture the emotional modulation of attention that we see with eye tracking? And so here on the left, you can see the eye tracking plot that I showed you before. Over here on the right is the MouseView plot. And in terms of that second component of gaze I talked about, that strategic gaze, we see that coming through in the MouseView data really nicely. Even some of these subtle effects, like the fact that people look more at unpleasant images the first time before they start avoiding them, so we have that approach and that avoidance in the strategic gaze.

Prof Tom Armstrong:

The one thing that’s miss­ing, maybe not sur­pris­ing­ly, is this more auto­mat­ic cap­ture of gaze at the begin­ning of the trial because the mouse move­ments are more effort­ful, more vol­un­tary. We’ve now done a cou­ple more of these stud­ies and we’ve found that this dwell time index with the mouse view­ing is very reli­able in terms of inter­nal con­sis­ten­cy. Also, we’re find­ing that it cor­re­lates very nice­ly with self-report rat­ings of images and indi­vid­ual dif­fer­ences relat­ed to images like we see with eye gaze. So it seems like a pret­ty promis­ing tool. And I can tell you more about it in a minute, but I just want­ed to real­ly quick­ly thank Gorilla. I’m excit­ed about any announce­ment that might be com­ing, and my col­lege for fund­ing some of this val­i­da­tion research, and the mem­bers of my lab who are cur­rent­ly doing a with­in-per­son val­i­da­tion against eye-track­ing in-per­son in the lab.

Jo Evershed:

Thank you so much, Tom. That was absolutely fascinating. A number of people have said in the chat, "I liked that." That was just absolutely fascinating, your research. I'm so impressed. What I'd love to hear from attendees is, what do you think about MouseView? Doesn't that look tremendous? I'm so excited by that. Because there are limits to what eye tracking we can do with the webcam, right? I'm sure we can get two zones, maybe six, but what I think is really exciting about MouseView is it allows you to do that much more detailed eye tracking-like research. It's a different methodology that's going to make stuff that otherwise wouldn't be possible to take online, possible. Tom, I'd never heard of this before. It sounds so exciting. It seems like such a reasonable way to investigate volitional attention in an online context. I think people have been really inspired by what you've said, Tom.

Jo Evershed:

And the exciting news for those of you listening today: MouseView is going to be a closed beta zone from next week in Gorilla. To get access to any closed beta zone, all you need to do is go to the support desk, fill out the form, "I want access to a closed beta zone," this one, and it gets applied instantly to your account. That's the case for eye tracking, and it'll be the case for MouseView. They can be used without… You don't need any coding to be able to use them. If they're in closed beta, it's just an indication from us that there isn't a lot of published research out there, we haven't validated it, so we say handle with care, right? Like run your pilots, check your data, check it thoroughly, make additional data quality checks beyond what you would otherwise.

Jo Evershed:

With things like showing images, you can see that it's correct, right? And the data that you're collecting isn't complicated. So those are zones that we don't need to have in closed beta. Until things have been published and been validated, we keep things in closed beta where they're more technically complex. That's what that means.

Jo Evershed:

But yes, you can have access. So, MouseView is coming to Gorilla next week. And thank you to Tom and to Alex, who I think is on the call, and Edwin. They're all here today. If you're impressed by MouseView, can you type MouseView into the chat here, just so that Tom, and Edwin, and Alex get a little whoop whoop from you guys. Because they've put a massive amount of work into getting this done and I think they deserve the equivalent of a little round of applause for that. Thank you so much. Now finally, over to Jonathan to talk about what you've been up to.

Jonathan Tsay:

Okay. Can you see my screen? Okay, perfect, perfect. So, my name is Jonathan, I go by JT, and I study how humans control and acquire skilled movement. So let me give you an example of this through this video.

Jonathan Tsay:

Okay, my talk's over. No, I was just kidding. So this happens every day: we adapt and adjust our movements to changes in the environment and the body. And this learning process requires multiple components. It's a lot of trial and error; your body just kind of figures it out. But it's also a lot of instruction, how the father here instructs the son to jump on this chair, and of course reward too, at the end, with the hug.

Jonathan Tsay:

And we study this in the lab by asking people to do something a little bit more mundane. So, typically, you're in this dark room, you're asked to hold this digitizing pen, you don't see your arm, and you're asked to reach to this blue target, controlling this red cursor on the screen. And we describe this as playing Fruit Ninja: you slice through the blue dot using your red cursor.

Jonathan Tsay:

On the right side, I'm going to show you some data. So, initially, when people reach to the target, controlling this red cursor, they can't see their hand, and people are on target. On target means hand angle is zero, and the x-axis is time. So more reaches means you're moving across the x-axis. But then we add a perturbation. So we introduce a 15-degree offset from the target. The cursor is always going to move 15 degrees away from the target. We tell you, we say, "Jo, this cursor has nothing to do with you. Ignore it. Just keep on reaching to the target." And so you see here on the right, this is participants' data. People can't keep reaching to the target. They implicitly respond to this red cursor by moving in the opposite direction, and they drift off further and further away, to 20 degrees.
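To make the perturbation concrete, here is a minimal sketch of the two quantities involved: a cursor that is always displaced a fixed 15 degrees from the target direction regardless of where the hand goes, and the hand angle (reach direction relative to the target) that is plotted on the y-axis. This is not JT's actual template (that is shared in Gorilla Open Materials); the function names and geometry here are an illustration only.

```python
import numpy as np

def hand_angle(hand_xy, start_xy, target_xy):
    """Signed angle (degrees) between the reach direction and the target direction."""
    reach = np.asarray(hand_xy, float) - np.asarray(start_xy, float)
    to_target = np.asarray(target_xy, float) - np.asarray(start_xy, float)
    ang = np.degrees(np.arctan2(reach[1], reach[0]) - np.arctan2(to_target[1], to_target[0]))
    return (ang + 180) % 360 - 180        # wrap to [-180, 180)

def clamped_cursor(start_xy, target_xy, radius, clamp_deg=15.0):
    """Cursor endpoint that is always clamp_deg away from the target direction,
    no matter where the hand actually goes (the 'ignore it' feedback)."""
    to_target = np.asarray(target_xy, float) - np.asarray(start_xy, float)
    theta = np.arctan2(to_target[1], to_target[0]) + np.radians(clamp_deg)
    return np.asarray(start_xy, float) + radius * np.array([np.cos(theta), np.sin(theta)])

# Example: target straight ahead; a hand that has drifted 20 degrees off the target,
# as in the adaptation curves described above.
start, target = (0.0, 0.0), (0.0, 10.0)
hand = (10 * np.sin(np.radians(20)), 10 * np.cos(np.radians(20)))
print(round(hand_angle(hand, start, target), 1))   # -20.0
print(clamped_cursor(start, target, radius=10.0))  # always 15 degrees off the target
```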

Jonathan Tsay:

Eventually, they reach an asymptote around 20 degrees, and when we turn off the feedback, when we turn off the cursor, people drift a little bit back to the target. And this whole process is implicit. Your actual hand is 20 degrees away from the target, but if I ask you where your hand is, you tell me your hand is around the target. This is where you feel your hand. You feel your hand to be at the target, yet your hand is 20 degrees off the target. And this is how we study implicit motor learning in the lab. But because of the pandemic, we built a tool to test this online. And so, in a paper preprint recently released, we compared in-person data collected using this kind of sophisticated machinery, which typically costs around $10,000 to set up, with data collected online.

Jonathan Tsay:

And you can see on the bottom, this is the data we have in the lab. We just create different offsets away from the target, but nonetheless people drift further and further away from the target. And we have data from online using this model, this template we created to track your mouse movements and your reaching to different targets. The behavior in person and online seems quite similar. But compared to in person, online research affords some great advantages, and I'm preaching to the choir here. For the in-lab results, it took around six months of in-person testing, people coming to the lab so we could collect their data. For the online results, we collected 1 to 20 people in a day, and so that's a huge time-saver, and in terms of cost as well. And of course, we have a more diverse population. I just want to give a few tips before I sign off here.

Jonathan Tsay:

So a few tips. First, instruction checks. So, for instance, in our study, we ask people to reach to the target and ignore the cursor feedback, just continue reaching. So, an instruction check question we ask is: where are you going to reach? Option A, the target; option B, away from the target. And if you choose away from the target, then we say, "Sorry, that was the wrong answer, and please try again next time."

Jonathan Tsay:

Catch trials. So, for instance, sometimes we would say, "Don't reach to this target." The target presents itself, and we say, don't reach to the target, and if we see that participants continue to reach to the target, they might just not be paying attention and just swiping their hand towards the target. So we use some catch trials to filter out good and bad subjects. We also have baseline variability measures. So, reach to the target, and if we see you're reaching in an erratic way, then we typically say, "Okay, sorry, try again next time." And again, movement time is a great indicator, especially for mouse tracking.
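A minimal sketch of what those checks can look like at analysis time, applied to a per-trial table. The column names, block label, and cut-offs (2000 ms movement time, the baseline spread) are illustrative assumptions, not the actual criteria used in JT's study.

```python
import pandas as pd

# A per-trial table in the rough shape of an online reaching data set (illustrative
# values and column names; real criteria would be piloted or preregistered).
trials = pd.DataFrame({
    "block":            ["baseline"] * 3 + ["rotation"] * 3,
    "movement_time_ms": [420, 380, 2600, 450, 510, 395],
    "is_catch":         [False, False, False, False, True, False],
    "responded":        [True, True, True, True, True, True],
    "hand_angle_deg":   [1.2, -0.8, 0.5, 6.0, 11.0, 14.5],
})

too_slow     = trials["movement_time_ms"] > 2000            # e.g. a mid-experiment break
failed_catch = trials["is_catch"] & trials["responded"]     # reached when told not to reach

baseline_sd = trials.loc[trials["block"] == "baseline", "hand_angle_deg"].std()

clean = trials[~too_slow & ~failed_catch]
print(f"Kept {len(clean)}/{len(trials)} trials; {int(too_slow.sum())} too slow, "
      f"{int(failed_catch.sum())} failed catch; baseline SD = {baseline_sd:.1f} deg")
```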

Jonathan Tsay:

If, in the middle of the experiment, you go to the restroom and you come back, these are things that can be tracked using movement time, which typically indicates someone might not be taking your experiment seriously, but not always. And Simone brought this up, but batching and iterating, getting feedback from a lay person to check they understand the instructions, is huge for us. And last but not least, something that Tom brought up: sometimes when you see behavior that's different between in-lab and online, and this is something we struggle with, is it reflective of something interesting that's different between online and in-person, or is it noise? So that's something we're struggling with, but we came to the conclusion that sometimes it's just different. You're using a mouse versus a robot in the lab. So, that can be very different.

Jonathan Tsay:

What I’m excit­ed about this mouse track­ing research and how it relates to motor learn­ing, is typ­i­cal­ly motor learn­ing patient research is around 10 peo­ple, but now, because we can just send a link to these par­tic­i­pants, they’re able to access the link and do these exper­i­ments at home. We can access some huge, larg­er group of patient pop­u­la­tions that typ­i­cal­ly, maybe logis­ti­cal­ly, hard to invite to the lab. And sec­ond, teach­ing. I’m not going to bela­bor this point. Third is pub­lic out­reach. So, we put our exper­i­ment on this Test­My­Brain web­site and peo­ple just try out the game and learn a lit­tle bit about their brain, and that’s an easy way to col­lect data, but also for peo­ple to learn a lit­tle bit about themselves.

Jonathan Tsay:

Here’s some open resources, you can take a screen­shot, we share our tem­plate, how to imple­ment these mouse track­ing exper­i­ments online. It’s also inte­grat­ed with Gorilla. We have a man­u­al to help you set it up, we have a paper, and here’s a demo, you can try it out your­self. And last thing, I want to thank my team. So, Alan, who did a lot of the work cod­ing up the exper­i­ment, my advi­sor, Rich Ivry, and Guy, and Ken Nakaya­ma. They all worked col­lec­tive­ly real­ly put this togeth­er and we’re real­ly excit­ed about where it’s going. So thank you again for your time.

Jo Evershed:

Thank you so much, Jon, that was fantastic. Now, what I want to hear from the attendees: what did you like more from Jon there, the advantages of online research, the cost-saving, the time-saving, or were you more blown away by his tips for getting good quality mouse tracking data online? Tips, instruction check questions, tips, tips, more tips. And that girl who jumped onto that stool at the beginning, were you not just blown away by her? If you were blown away by her resilience in jumping up… They're entirely blown away in the chat. I thought she was tremendous. She must be about the same age as my son and he would not do that, for sure. That was something quite exciting. Jon, I want to ask a follow-up question. Your mouse tracking experiment, have you shared that in Gorilla Open Materials?

Jonathan Tsay:

Yeah, yeah. That is in Gorilla Open Materials.

Jo Evershed:

If you've got the link, do you want to dump that into the chat? Because then if anybody wants to do a replication or an extension, they can just clone the study and see how you've done it, see how you've implemented it. It's just a super easy way of sharing research and allowing people to build on the research that's gone before, without wasting time. Actually, make sure we've got a link to that as well, can you? So that when we send a follow-up email on Monday, we can make sure that everybody who's here today can get access to that. Oh, I think Josh has already shared it, Jon. You're off the hook. Excellent. We've now come to Q&A time. There are lots and lots of questions. There are a total of 32 questions, of which 16 have already been answered by you fine people as we go through. There are some more questions though. Edwin has got a question: how do people deal with the huge attrition of participants in web-based eye tracking? Simone or Jens, can either of you speak to that one? How have you dealt with attrition?

Simone Lira Calabrich:

Yeah, it's a bit complicated, because my experiment is a very long one and participants end up getting tired and they quit the experiment in the middle of it. There is not much that we can do about it but just keep recruiting more participants. So we ran a power analysis, which suggested that we needed 70 participants for our study. So, our goal was to recruit 70 participants no matter what. So, if someone quits midway, we just reject the participant and we just recruit an additional one as a substitute, as a replacement.

Dr Jens Madsen:

So I think, at least from my perspective, it's very different comparing stationary viewing, eye tracking of images, and, in my case, video. Video is moving constantly, right? And so, you can show an image, but they have to watch the whole video, and I have to synchronize it in time. And it also depends on the analysis method you use. In my case, I don't really look into the spatial information. Spatial information for me is irrelevant. I use correlation: how similar people's eye movements are across time. I use other people as a reference. And in that sense the data can actually be very noisy. You can actually move around and it's quite robust in that sense. And so it depends on the level of noise you induce in the system. In my case, because it was video, I put in auxiliary tasks, like having people look at dots to see if they were actually there or not, things like that, just to control for those things, or else you're in big trouble because you have no clue what's happening.

Dr Jens Madsen:

And so having those extra things to make sure that they're there. And also, it turns out the attention span of an online user, at least with educational content, is around five, six minutes; after that they're gone. It doesn't matter, they're bored, they couldn't be bothered. And so my tasks were always around there. The videos that I showed were always five, six minutes long, three minutes long, and then some questions. But they couldn't be asked to sit still, because when you use WebGazer, you have to sit still. It depends on… You guys are using spatial tasks, right? So this would be a problem for you. For me it's fine because I use the temporal course. But for spatial people that's going to be an issue, because the whole thing is just going to shift. And how do you detect that, right? Do you have to somehow insert something like, now you look at this dot and now I can recalibrate my data, or something? I don't know how you guys are dealing with that. But yeah, those are the things that you need to worry about.

Simone Lira Calabrich:

Yeah. I was just going to add something: because we have a very long experiment with lots of trials, we can lose some data and it's still going to be fine, right? So I have 216 trials in my experiment. So it's not a six-minute-long one, it's a two-hour experiment. So, even if I do lose some data, relatively, it's still fine. I have enough power for that.

Dr Jens Madsen:

I mean, you still have the calibration? You do a calibration, right? And I'm assuming you do it once, right? And you have to sort of… Or you do it multiple times?

Simone Lira Calabrich:

Multiple times. Yeah.

Dr Jens Madsen:

You have to do that.

Simone Lira Calabrich:

So we have six blocks.

Dr Jens Madsen:

Yeah, that makes sense.

Simone Lira Calabrich:

So we do it at the beginning of each block and also in the middle of each block as well, just to make sure that it's as accurate as possible.

Dr Jens Madsen:

You saw what I just did there, right? I readjusted myself and this is something natural. It's like, I just need… Ah, yeah, that's better, you know? That's a problem.

Simone Lira Calabrich:

Exactly, yeah. So, yeah, that's why we do that multiple times.

Dr Jens Madsen:

And we do it even without knowing it.

Jo Evershed:

Simone, how often do you recalibrate?

Simone Lira Calabrich:

So, we have six blocks. So at the beginning of each block and in the middle of each block. So, every 18 trials.

Jo Evershed:

Okay, that makes sense. So, in previous lectures we've had about online methods, people have said a good length for an online experiment is around 20 minutes; much longer than that and people start to get tired. If you pay people better, you get better quality participants. So, that's another way that you can reduce attrition: double your fees and see what happens. People are willing to stick around longer if they're being paid well for their time.

Jo Evershed:

And then one of the researchers, Ralph Miller, from New York, he does long studies like Simone does online, and what he does is about every 15 minutes, he puts in a five-minute break and he says, "Look, please go away, get up, walk around, do something else, stretch, maybe you need to go to the loo, maybe there's something you need to deal with, but you have to be back in five minutes." When you press next, that five-minute break, I think, happens automatically. And that gives people that ability to go, "Oh, I really need to stretch and move," so that you can build in an experience that is manageable for your participants.

Jo Evershed:

And so if you're struggling with attrition, the thing to do is to pilot different ideas until you find what works for your experiments. There aren't things that will work for everyone, but there are techniques and approaches that you can try out, sort of experimentation in real time, to find out what's going to work. And that can be really helpful too. Tom, there are quite a few questions about… Can you guys see the Q&As? If you pull up the Q&A panel, there were some nice ones about mouse tracking here that I think Tom might be able to answer. So one here: how viable is it to use mouse tracking in reading research, for example, asking participants to move the cursor as they read? And then similarly, Jens and Simone, there are questions about eye fixations and data quality. You can also type answers. So I think we'll run out of time if we try and cover all of those live, but maybe Jens and Simone, you can have a go at answering some of the ones that are more technical. But Tom, perhaps you could speak about mouse tracking, eye tracking, the crossover. You're muted. You're muted.

Prof Tom Armstrong:

There are so many, but let me try to do it justice. So, I mean, right now, I don't know what unique processes we get from MouseView. I'm thinking of it as being just a stand-in for that voluntary exploration that we see with eye tracking. In terms of what that gets you beyond, great question, beyond just self-report, there are some interesting ways in which self-report and eye tracking do diverge that we've found, that I can't do justice to right now. So I think that you often pick up things with self-report that you don't get with… I'm sorry, you get things with eye tracking that you don't get with self-report. For example, Edwin and I found that eye movement avoidance of disgusting stimuli doesn't habituate, whereas people will say they're less disgusted, but then they'll continue to look away from things.

Prof Tom Armstrong:

And so sometimes, there's more or less things people can introspect on. About reading, Edwin took that question on. Left versus right mouse? Fascinating, I'm not sure. And then, importantly, touch screens: that is in the works. So maybe Alex can jump on that question. That's the next thing that he's working on, making this sort of work with touch screens. Right now it's just for desktop or laptop, on Chrome, Edge or Firefox.

Jo Evershed:

Anything that works in Gorilla probably might already work for touch. I don't know when, unfortunately, [inaudible 00:54:56] but I will make sure that that question gets asked next week, because by default, everything in Gorilla is touch compatible as well.

Prof Tom Armstrong:

Cool.

Jo Evershed:

I'm trying to pick out a good next question. What's the next one at the top? Can we learn something from online mouse tracking that we cannot learn from online eye tracking? Can anyone speak to that, or have you already?

Dr Jens Madsen:

What was the question? Sorry.

Jo Evershed:

Can we learn something from online mouse tracking that we cannot learn from online eye tracking? I think that there are different methods that answer different questions, right?

Dr Jens Madsen:

So there’s cer­tain­ly a cor­re­la­tion between where you look and where the mouse is, right? So this is clear. And also it depends on the task. In my case, with the video, you’re not mov­ing around the mouse where you’re look­ing, because you’re watch­ing a video, that’s not a nat­ur­al behav­ior. But if you [inaudi­ble 00:55:57] of just using UI but­tons and things like that, sure­ly they’re high­ly cor­re­lat­ed. So, it very much depends on the task.

Jo Evershed:

That's really good. We are now five minutes to six, so I'm going to wrap this up. There are lots and lots more questions, but I don't think we can get through all of them today. Hopefully, we've managed to answer 24 questions, so I think we've done a really, really great job there. Actually, there's one more which I think Simone might be able to answer quickly: what's the relationship between face_conf and the calibration accuracy measure? Did you look at both of those?

Simone Lira Calabrich:

No, I didn't actually investigate that, but what I did was make similar plots for the calibration analysis in Gorilla as well. They were very similar to what I demonstrated to you guys. So, depending on whether participants were wearing glasses or not, there were some lower values for that. What I try to do, I strive to use the five-point calibration in Gorilla, and if calibration fails for at least one of the points, the calibration has to be reattempted. So, I'm trying to be very strict in that sense. That's my default mode now. So, if it fails on just one of the points, I think it's just best to try to recalibrate, which can be quite frustrating for some participants, but that will ensure that we have better data quality.

Jo Evershed:

Yeah, that was great. Now, I have one last question for the panel, which is: what do you see the next year bringing to this area of research? And we're going to do this in reverse order, so starting with JT.

Jonathan Tsay:

I’m going to say that, at least in my field, I’m most excit­ed about larg­er scale patient research and that’s num­ber one. Reach­ing indi­vid­u­als who are typ­i­cal­ly hard­er to reach. So, larg­er scale in that sense, but anoth­er is reach­ing, for instance, peo­ple with­out pro­pri­o­cep­tion. For instance, you don’t have a sense of body aware­ness. I’m pret­ty sure most of you have never met some­one like that because in my view, I think there’s only three peo­ple in the world that have learned the lit­er­a­ture, and kind of being able to work with these peo­ple remote­ly would be a great oppor­tu­ni­ty in the future.

Jo Evershed:

That's brilliant. Tom, how about for you? What does the next year hold?

Prof Tom Armstrong:

So, one, getting MouseView onto mobile devices to work with touch screens. Then just seeing the method get adopted by people in different areas, and seeing how a lot of these eye tracking findings replicate. Also, hopefully, getting this into task zones with some different varieties of eye tracking tasks. So, larger matrices, 16 [inaudible 00:58:54], and just incrementally working like that.

Jo Evershed:

I think that's always so exciting when you create a new method: you don't know how people are going to use it, and somebody's going to see that and go, "Ooh, I could do something that you'd never imagined," and suddenly a whole new area of research becomes possible. That's hugely exciting. Simone, how about you? What does the next year hold?

Simone Lira Calabrich:

I was just thinking perhaps of the possibility of testing participants who are speakers of different languages. That would be really nice as well. So, with remote eye tracking, we can do that more easily. So hopefully…

Jo Evershed:

Hopefully, that will.

Simone Lira Calabrich:

… that's what's going to happen.

Jo Evershed:

And, Jens, finally to you.

Dr Jens Madsen:

We've been working in online education, measuring the level of attention of students when they watch this educational material. And what we're excited about is that we can actually reverse that process: we can have the person in the browser, measure the level of attention, and adapt the educational content to the level of attention. So, if students are dropping out or not looking, we can actually intervene and make interventions so that hopefully we can improve online education. You're muted.

Jo Evershed:

Sorry, Tom just dropped out, so I was just checking what happened there. The online education thing, I can see it being tremendous, and that's what everybody needs. If you had one tip for everybody watching today to improve online education, what would it be?

Dr Jens Madsen:

Keep it short, show your face, and skip the boring long PowerPoints.

Jo Evershed:

Excellent. All about human interaction, isn't it?

Dr Jens Madsen:

It’s all about the inter­ac­tion. If you can see a per­son­’s face, you’re there.

Jo Evershed:

Yeah, yeah, yeah. So, maybe when you've got your students in your class, get them to turn their videos on, right? They'll feel like they're there together in a room.

Dr Jens Madsen:

It’s so important.

Jo Evershed:

So important. Back to the participants: there were 150 of you for most of today. Thank you so much for joining our third Gorilla Presents webinar. Each month we'll be addressing a different topic in online behavioral research. So, why not write in the chat with suggestions of what you'd like us to cover next? Yes, thank you messages, please, to our amazing panelists and Tom as well. It's very difficult to judge how much value you've got out of this from here, but big thank yous really help these guys know that you really appreciated the wisdom they've shared with you today.

Jo Evershed:

There will be a survey. I think we email you with a survey straight after this to help us make these sessions more useful. Please fill it out. It's tremendously useful to us and it allows us to make each session better and better. You guys can see the value that you've got out of this today; by giving us feedback, we can make future sessions even better. So you're doing a solid for the whole research community.

Jo Evershed:

The next webinar is going to be about speech production experiments online. It is going to be in late April. So, if speech production experiments, where people talk… It's going to be the 29th of April, there you go. Where people talk and you're collecting their voice, if that's your bag, then make sure you sign up for that one as well. Thank you and good night. One final massive thank you to the panelists. Thank you so much for giving your time to the research community today, and we'll chat in a minute in the next room.

Simone Lira Calabrich:

Thank you, everyone.

Dr Jens Madsen:

Yeah, thank you.

Jonathan Tsay:

Thank you.