Virtual reality (VR) is becoming accessible to wider audiences, and creators are looking for ways to make it fully immersive. That starts with a better understanding of how human vision works, so that researchers can define how people perceive the real world as events unfold before their eyes.
“In vision research, we use pictures a lot in order to study the visual system,” says Niko Troje, professor of biology at the Vision: Science to Applications (VISTA) program at York University.
“We present them on screens or on larger projection surfaces, and we assume that, because the stimulation of our eyes under those conditions is similar to what we are experiencing in the real world, what we are studying can be generalized to a real world that opens in front of our open eyes.”
But lately, researchers in the vision science field have been challenging that assumption. In particular, VR makes it clear that projections on a flat screen are not a complete substitute for vision in the real world.
In these virtual worlds, each of the user’s eyes sees a slightly different image, and the perspective of the scene responds to the movement and position of their head, attempting to simulate vision in the real world. This visual space is fundamentally different from what we are used to seeing in pictures and movies, where a single camera does not respond to its audience.
With the help of VISTA research funds, Troje has developed a new tool that allows him to carry out a side-by-side comparison between what people see in an open space in front of them and a pictorial space such as a standard picture or movie. The tool is called the Alberti Frame, a reference to the Renaissance architect and mathematician Leon Battista Alberti, who believed that a painting should recreate exactly the view seen through an empty frame. You can think of it as a camera that can be placed anywhere inside virtual reality to take a 2D picture or video.
Now imagine that you are looking at a mountain in VR, and you take out your camera to freeze an Alberti Frame. You could return to that exact location and hold up the image, and if you closed one eye it would be indistinguishable from the rest of the scene. That screen would be like an open window.
But open both eyes and start moving around, and the mismatch becomes clear. Two phenomena that give us a sense of depth come into play: stereopsis, because each of your eyes is in a slightly different position and sees the scene from a slightly different perspective; and active motion parallax, because what you see should change as you move your head.
Normally these phenomena are entangled, but researchers can separate them in VR. For example, both eyes can be shown the same image to eliminate stereopsis, or the visual scene can be decoupled from the user’s movements. In other words, the Alberti Frame can act like a screen in terms of one of the depth cues, but like a window in terms of the other.
“Our Alberti Frame now has not just two states, but it’s got four states,” says Troje. “So two additional ones in which it either behaves as a window in terms of stereopsis, but like a picture in terms of motion parallax, or the other way around.”
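The four states Troje describes form a simple 2×2 design: stereopsis on or off, motion parallax on or off. A minimal sketch of that idea, assuming a single horizontal axis for clarity (the names `FrameState` and `eye_positions`, and the interpupillary-distance value, are illustrative, not Troje's actual implementation):

```python
from dataclasses import dataclass
from itertools import product

IPD = 0.064  # assumed typical interpupillary distance, in metres


@dataclass(frozen=True)
class FrameState:
    stereopsis: bool       # does each eye get its own viewpoint?
    motion_parallax: bool  # does the viewpoint follow the head?


def eye_positions(state: FrameState, head_x: float, capture_x: float = 0.0):
    """Return (left_eye_x, right_eye_x) camera positions along one axis.

    A 'window' cue uses live geometry; a 'picture' cue freezes it:
    - no motion parallax -> the camera stays at the capture position
    - no stereopsis      -> both eyes share one (cyclopean) camera
    """
    centre = head_x if state.motion_parallax else capture_x
    offset = IPD / 2 if state.stereopsis else 0.0
    return (centre - offset, centre + offset)


# Enumerate the four states: fully window-like, fully picture-like,
# and the two mixed cases that VR makes possible.
for stereo, parallax in product([True, False], repeat=2):
    state = FrameState(stereo, parallax)
    print(state, eye_positions(state, head_x=0.10))
```

With both cues off, the frame behaves like a flat photograph; with both on, it behaves like an open window; the two mixed states are the novel conditions the Alberti Frame lets researchers test.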
Although we know that both phenomena play important roles in vision, VR presents a new opportunity to study the effect of each one separately, and understand how each contributes to the virtual experience. This work will help creators build a more convincing illusion of reality in VR.