Video game user interface interactions
In February 2020, I started a Mitacs Accelerate internship with BioWare. In case you’re not a gamer, BioWare is a video game company behind many popular titles, such as the Mass Effect series, Anthem, Dragon Age, and classics like Baldur’s Gate. My project involves measuring how users interact with user interfaces, using eye and mouse tracking during use. The goal is to extract key performance indicators: values that tell you something about the user’s experience with the UI. I use an online service called Labvanced to build the task and collect the data. Below is an example of the kind of data I collect.
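To give a flavour of what a key performance indicator might look like, here is a minimal sketch that computes two simple indicators from (timestamp, x, y) cursor samples: path efficiency (how direct the movement was) and time to reach a target. The sample format, function names, and the 20-pixel target radius are illustrative assumptions, not the actual Labvanced output or the indicators used in the project.

```python
import math

def path_length(samples):
    """Total distance travelled by the cursor, in pixels."""
    return sum(math.dist(a[1:], b[1:]) for a, b in zip(samples, samples[1:]))

def path_efficiency(samples):
    """Straight-line distance / travelled distance (1.0 = perfectly direct)."""
    straight = math.dist(samples[0][1:], samples[-1][1:])
    travelled = path_length(samples)
    return straight / travelled if travelled else 1.0

def time_to_target(samples, target, radius=20.0):
    """Seconds until the cursor first enters the target region, or None."""
    t0 = samples[0][0]
    for t, x, y in samples:
        if math.dist((x, y), target) <= radius:
            return t - t0
    return None

# One hypothetical trial of (t, x, y) samples moving toward a target
trial = [(0.00, 0, 0), (0.05, 50, 40), (0.10, 120, 90), (0.15, 200, 150)]
efficiency = path_efficiency(trial)          # near 1.0 for a direct movement
arrival = time_to_target(trial, (200, 150))  # 0.15 s for this trial
```

In practice, indicators like these would be aggregated across many trials and UI elements before saying anything meaningful about the design.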
Decision making modeled as a reinforcement learning agent versus humans
I’m really interested in foraging. I find it fascinating that an animal can develop an optimal strategy when foraging for food in the bush, or that when we go to the supermarket to grab some tomatoes, we don’t spend six hours rifling through the bin to find the perfect tomato. We can model these kinds of behaviours with reinforcement learning, built on machine learning frameworks such as TensorFlow and JAX. I am working with Nathan Wispinski to create models of decision making in RL agents so that we can directly compare them to human participants. Due to the pandemic, we have been recruiting participants through MTurk, again using Labvanced as the backbone of our data collection. We hope to demonstrate human-level control of behaviour in the RL agents (rather than the typical mantra of outperforming humans), because we believe this is a useful framework for better understanding human foraging behaviour. See the video below for an example of what a typical foraging trial looks like for human participants.
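As a toy illustration of foraging framed as a reinforcement learning problem, the sketch below trains a tabular Q-learning agent on a patch-leaving task: each step, the agent either stays in a depleting patch and harvests a shrinking reward, or pays a travel cost to move to a fresh patch. All parameters (depletion rate, travel cost, learning rate) are made up for illustration; this is not the model used in the project, which builds on frameworks like TensorFlow and JAX rather than a hand-rolled table.

```python
import random

DECAY, TRAVEL_COST = 0.8, 2.0      # patch depletion rate, cost of leaving
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration
MAX_T = 10                          # discretized time-in-patch states

def harvest(t):
    """Reward for staying after t harvests: starts at 10, depletes geometrically."""
    return 10.0 * (DECAY ** t)

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    # q[t][0] = value of staying in state t, q[t][1] = value of leaving
    q = [[0.0, 0.0] for _ in range(MAX_T + 1)]
    for _ in range(episodes):
        t = 0
        for _ in range(50):  # bounded episode length
            if rng.random() < EPS:
                a = rng.randrange(2)                      # explore
            else:
                a = 0 if q[t][0] >= q[t][1] else 1        # exploit
            if a == 0:                                    # stay and harvest
                r, t2 = harvest(t), min(t + 1, MAX_T)
            else:                                         # leave: travel cost, fresh patch
                r, t2 = -TRAVEL_COST, 0
            q[t][a] += ALPHA * (r + GAMMA * max(q[t2]) - q[t][a])
            t = t2
    return q

q = train()
policy = ["stay" if q[t][0] >= q[t][1] else "leave" for t in range(MAX_T + 1)]
```

The interesting comparison with humans is not whether the agent learns to leave (it does, once the patch depletes enough), but how its leaving times line up with what people actually do in the task.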
Delayed reaching retinotopy in EEG
A really cool paper demonstrated that performing a delayed action results in the re-recruitment of early visual cortical areas at the time of execution. Typically, dorsal visual information is thought to be lost within a matter of seconds, but this paper showed that even after a delay of ~18 s (a delay necessary because they were measuring the BOLD response), early visual cortical areas (the same areas that encode crucial information for grasping behaviours) were reactivated. Among these early cortical areas is V1, whose activity we can record using EEG. We designed a task that presents a stimulus in one of four quadrants of a screen; the stimulus then either disappears and reappears several seconds later, or remains visible. The participant’s goal is to reach out and touch the screen where the target was. We record hand movements using OptiTrack cameras, gaze using a Tobii eye tracker, and EEG using a 256-channel EGI net. This project was also done with Nathan Wispinski.
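The core of the design is a fully crossed trial structure: target quadrant crossed with memory condition (the target disappears and must be reached from memory, or stays visible). A minimal sketch of how such a trial list might be generated, with condition names and repetition counts as illustrative placeholders:

```python
import itertools
import random

QUADRANTS = ["top-left", "top-right", "bottom-left", "bottom-right"]
CONDITIONS = ["delayed", "visible"]  # delayed: target vanishes, reach from memory

def build_trials(reps=10, seed=1):
    """Fully crossed, shuffled trial list: quadrant x condition x reps."""
    trials = [
        {"quadrant": q, "condition": c}
        for q, c in itertools.product(QUADRANTS, CONDITIONS)
    ] * reps
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trials()
# 4 quadrants x 2 conditions x 10 reps = 80 trials, in random order
```

Crossing quadrant with condition this way is what lets the retinotopic (quadrant-specific) EEG signal be compared between memory-guided and visually guided reaches.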