Engineers are developing virtual reality (VR) headsets that can adapt how they display images to account for factors like eyesight and age that affect how we actually see.
Current VR headsets can’t account for differences in vision, which can cause headaches and nausea.
“Every person needs a different optical mode to get the best possible experience in VR,” says Gordon Wetzstein, assistant professor of electrical engineering at Stanford University and senior author of the study published in the Proceedings of the National Academy of Sciences.
Though the work is still in its prototype stage, the research shows how VR headsets could one day offer the sort of personalization that users have come to expect from other technologies.
“We hope our research findings will guide these developments in the industry,” Wetzstein says.
Why headsets cause headaches
The problem that the researchers set out to solve is that the display screens on VR headsets don’t let our eyes focus naturally. In real life, once our eyes focus on a point everything else blurs into the background.
VR makes focusing more difficult because the display is fixed at a certain point relative to our eyes. This eyestrain can cause discomfort or headaches.
“Over a 30- to 40-minute period, your eyes may start hurting, you might have a headache,” says Nitish Padmanaban, a PhD student in electrical engineering at Stanford and member of the research team. “You might not know exactly why something is wrong, but you’ll feel it.
“We think that’s going to be a negative thing for people as they start to have longer and better VR content.”
Younger and older viewers
Importantly, the effects of visual conflicts in VR may affect younger and older people differently. For example, people over the age of 45 commonly experience presbyopia—difficulty focusing on objects close up.
Younger people don’t generally have presbyopia but they may have vision issues that require them to wear glasses. In either case, current VR headsets don’t take these vision difficulties into account.
“One insight in our paper is to consider age as a factor, rather than focusing only on young users, and to show that the best solution for older users is likely different than for younger users,” says Emily Cooper, a research assistant professor at Dartmouth College.
The solution: adaptive focus
The researchers are testing hardware and software fixes designed to change the focal plane of a VR display. They call this technology an adaptive focus display.
The group tested two different hardware options. One relies on focus-tunable liquid lenses: twisting a dial squeezes the liquid lenses inside the headset, changing where the screen appears to be in focus even though the lens itself remains in place.
The other option involves mechanically moving the display screen back and forth, like adjusting a pair of binoculars. The system also incorporates eye-tracking technology to determine where on the screen the user is looking.
In conjunction with the eye-tracking technology, software ascertains where the person is trying to look and controls the hardware to deliver the most comfortable visual display. The software can account for whether a person is nearsighted or farsighted but cannot yet correct for another vision issue called astigmatism.
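The control logic described above can be sketched in code. This is a hypothetical illustration, not the Stanford team's actual software: the function name, parameters, and lens range are assumptions made only to show how a gazed-at depth from the eye tracker, plus a user's spherical prescription, could be converted into a setting for a focus-tunable lens.

```python
def target_focus_diopters(gaze_depth_m: float,
                          prescription_diopters: float = 0.0,
                          lens_range: tuple = (-6.0, 3.0)) -> float:
    """Hypothetical adaptive-focus step: map the depth of the virtual
    object the user is looking at to a power for a tunable lens.

    gaze_depth_m: distance (meters) to the scene point the eye tracker
        reports the user is fixating on.
    prescription_diopters: user's spherical correction (negative for
        nearsightedness, positive for farsightedness), folded in so the
        user would not need glasses inside the headset.
    lens_range: assumed min/max power the tunable lens can deliver.
    """
    # Optical power (in diopters) needed to place the focal plane at
    # the gazed-at depth; clamp very small depths to avoid div-by-zero.
    demand = 1.0 / max(gaze_depth_m, 0.1)
    # Offset by the user's prescription so their eye focuses correctly.
    demand += prescription_diopters
    # Clamp to what the hardware can actually achieve.
    lo, hi = lens_range
    return min(max(demand, lo), hi)
```

For example, a user with no correction looking at a virtual object 1 m away would need a 1-diopter setting, while a 2-diopter nearsighted user looking at the same object would need the lens to contribute nothing extra. Astigmatism would need a cylindrical correction that a simple spherical offset like this cannot express, consistent with the limitation noted above.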
With these displays, VR users would not need glasses or contacts to have a good visual experience.
“It’s important because people who are nearsighted, farsighted, or presbyopic—these three groups alone—they account for more than 50 percent of the US population,” says Robert Konrad, one of the researchers and a PhD candidate in electrical engineering at Stanford. “The point is that we can essentially try to tune this in to every individual person to give each person the best experience.”
The research was supported in part by the National Science Foundation, a Terman Faculty Fellowship, and grants from Okawa Research, Intel Corporation, and Samsung.
Source: Vignesh Ramachandran for Stanford University