We’re Not Seeing Eye to Eye with This New Normal

For all the good that tools like Zoom and Skype have done, we miss something fundamental when using them. Can it be replicated?


Video conferencing took off during the pandemic as people around the world were self-isolating and working from home. But there are important visual cues we are used to getting in person that get lost on Zoom or Skype, and according to vision scientist Niko Troje, they affect what he calls “people perception.”

Over the past decade, Troje has been studying facial recognition and the way people move. That motion carries information about emotion, intention, and personality. The human visual system is highly sensitive to these cues, but when interactions are mediated by a camera, viewers can’t extract the same depth of meaning.

“There’s many problems with systems such as Zoom or Skype that we know today,” says Troje, professor of biology at the Vision: Science to Applications (VISTA) program at York University.

“The main problem is that we are losing what I call directionality; so, if someone happens to look into the camera, the person on the other end feels being looked at and there’s no way to escape that gaze.

“Or if someone is not looking into the camera, for instance because they are looking at the screen where the other person is, we feel being looked at our chin or neck or something, but we can’t catch the other person’s gaze.”

In normal in-person interactions, we catch and break eye contact all the time. Even knowing this, building a virtual system that could simulate it poses many challenges. The cameras we use sit in a fixed position on our devices, and it wouldn’t be practical to move them around just to catch a user’s gaze.

As a first step toward solving this problem, Troje is researching how people appear from viewpoints slightly offset from the standard one captured by a fixed camera.

“We have a demo system, which functions beautifully but it’s based on computer graphics,” adds Troje. “So the person I’m talking to, I see represented as an avatar and the other person sees me represented as an avatar.”

Combining facial recognition, an understanding of biological motion, and the ability to shift to a different viewpoint enables a more natural experience. In the future, he hopes to be able to integrate this proof of concept back into a more photorealistic representation of each user, instead of using simplified computer graphics.

Virtual interactions have kept people connected during an unprecedented time, and it’s likely that many will continue using this technology even as public health measures are relaxed. Being able to replicate our in-person interactions more closely will help build a richer experience no matter how far apart we are.


Niko Troje directs the BioMotion Lab at York University in Toronto, Canada. He received his PhD in Biology from the Albert Ludwigs University in Freiburg, Germany, in 1994, and subsequently moved to the Max Planck Institute for Biological Cybernetics in Tübingen.

From 1997 until 1999, he spent two years as a Visiting Professor at Queen’s University in Kingston, Ontario. In 1999, he received a Young Researchers Award from the Volkswagen Foundation and moved to Ruhr University, where he founded and directed the BioMotionLab at the Department of Psychology.

In 2003 he re-joined Queen’s University and in 2018 he moved to York University in Toronto. At York University he is affiliated with the Centre for Vision Research, the Department of Biology and the Graduate Programs in Biology, Psychology, and Electrical Engineering & Computer Science. Troje has received numerous awards and recognitions for his work, among others the Feodor-Lynen Fellowship of the Alexander von Humboldt Foundation, the E.W.R. Steacie Memorial Fellowship from NSERC, and most recently, the lifetime achievement Humboldt Research Award. His main research interest is focused on questions concerning the nature of perceptual representations. How can a stream of noisy nerve cell excitations possibly be turned into the coherent and predictable perception of a “reality”?


Research2Reality is a groundbreaking initiative that shines a spotlight on world-class scientists engaged in innovative and leading edge research in Canada. Our video series is continually updated to celebrate the success of researchers who are establishing the new frontiers of science and to share the impact of their discoveries with the public.