People with neurological damage can have great difficulty navigating in space, even in places they have visited many times before. Even so, some landmarks seem to be more memorable than others.
Neuroscientist Shayna Rosenbaum and electrical engineer Matthew Kyan are collaborators at York University’s Vision: Science to Applications program, and together they are developing tools to help people navigate more independently.
“Interdisciplinary collaboration has really allowed us to expand the type of research that we can do and the ways in which we can actually help neurological patients,” says Rosenbaum.
Using computational modeling, Rosenbaum and Kyan identify the landmarks that are most memorable to patients, then increase how salient, or noticeable, those landmarks are in multimedia training exercises. Remarkably, the ways in which users respond to the tools also shed light on the memory and perception processes happening in the brain.
“It’s a very interesting project that looks at trying to find a mapping between the activity that’s going on in the memory parts of the brain based on the kind of visual stimulus that you get exposed to,” says Kyan.
“A lot of this has been driven by this resurgence of deep learning and machine learning that’s going on currently, and we think that there are a number of models that may have the ability to give us some insight as to what’s happening in memory.”
Both Rosenbaum and Kyan stress that collaboration has been key to helping patients. Rosenbaum brings expertise in neuroscience and an understanding of patients' needs, while Kyan knows how to build the multimedia tools and works with vision and perception every day. That synergy allows them to attack problems from multiple perspectives and create robust solutions.
“It allows us to elevate our research,” says Rosenbaum, “to apply it in a way that is meaningful to patient populations, to industry, real-world applications.”