Do You See What I See?

Every phone camera captures colour a little bit differently, which can be trouble if you're using one to get a diagnosis.


With the rise of smartphones, most adults carry a camera with them wherever they go. Most consumers are happy as long as these cameras capture everyday moments as snappy, bright images that can be shared on social media; colour accuracy might not even register as an important feature.

Michael Brown, professor of electrical engineering and computer science at York University, wants to turn smartphone cameras into readily accessible diagnostic tools for remote medicine. But the images they capture are already heavily processed to make them consumer-friendly straight out of the box, no matter how many #nofilter hashtags get applied to them.

“Usually when we think about cameras, we think that we push a button and it records the scene, or the light, or the environment. But actually there’s a lot of things going on,” explains Brown.

“The world really doesn’t have any colours in it. Colour is a perceptual thing. The world has electromagnetic radiation.”
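In the standard image-formation model (a textbook simplification, not something specific to Brown’s work), each colour channel of a camera records the light’s spectrum weighted by that channel’s sensitivity:

$$\rho_c = \int E(\lambda)\, S_c(\lambda)\, d\lambda, \qquad c \in \{R, G, B\},$$

where $E(\lambda)$ is the spectral power distribution of the light reaching the sensor and $S_c(\lambda)$ is the channel’s spectral sensitivity. Because every sensor has its own sensitivity curves, two cameras can record different RGB numbers from exactly the same physical spectrum.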

Brown, also a researcher at York University’s Vision: Science to Applications (VISTA) program, wants to extract more from the cameras on our smartphones. The exact processes that happen between the moment that light hits a camera sensor and the time that an image gets captured and saved are important to understand from a diagnostic perspective. Even if you and a friend photograph the exact same object at the same time, the colours can look slightly different depending on which phones you’re using.
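To make that concrete, here is a minimal sketch of the kind of in-camera processing involved: white balance, a colour correction matrix, and a tone curve. The gains and matrices below are invented for illustration; real phones apply proprietary, scene-dependent tuning on top of these basic steps.

```python
import numpy as np

def process_raw(raw_rgb, wb_gains, ccm, gamma=2.2):
    """Toy pipeline: raw_rgb is an HxWx3 array of linear sensor values in [0, 1]."""
    img = raw_rgb * wb_gains        # white balance: per-channel gains
    img = img @ ccm.T               # colour correction: device RGB -> display RGB
    img = np.clip(img, 0.0, 1.0)
    return img ** (1.0 / gamma)     # tone curve / gamma for display

# One "pixel" of linear sensor data, processed by two hypothetical phones
# with different (made-up) tuning:
raw = np.array([[[0.20, 0.35, 0.15]]])

phone_a = process_raw(raw, np.array([2.0, 1.0, 1.6]),
                      np.array([[ 1.6, -0.4, -0.2],
                                [-0.3,  1.5, -0.2],
                                [-0.1, -0.5,  1.6]]))
phone_b = process_raw(raw, np.array([1.8, 1.0, 1.9]),
                      np.array([[ 1.4, -0.2, -0.2],
                                [-0.2,  1.3, -0.1],
                                [ 0.0, -0.4,  1.4]]))

print(phone_a.ravel(), phone_b.ravel())  # same scene, different saved colours
```

Because each manufacturer picks its own gains, matrices, and tone curves to make photos look pleasing, the same raw sensor reading comes out as a different saved colour on each phone.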

“That’s a problem, because let’s say now I want to use my phone and an app to do something like skin cancer detection. Now it’s a real issue if every phone is doing its own sort of processing to make the colours look better,” adds Brown.

“So in this case, we really need the phone to act as an instrument or a scientific imaging device, and that’s what I’m interested in doing.”

If images could be standardized, they would be much more useful in remote medicine. Being able to send an image to a doctor would open up new opportunities for mobile devices to bring healthcare to locations without a nearby specialist’s office, such as rural areas or developing countries.
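One common route to that kind of standardization (a generic calibration approach, not necessarily the method Brown’s group uses) is to photograph a colour chart with known reference values and fit a correction that maps the phone’s measured colours onto shared targets. A minimal least-squares sketch, with made-up chart readings:

```python
import numpy as np

# Measured patch colours from this phone and the chart's known reference
# values (both linear RGB; the numbers are made up for illustration).
measured = np.array([[0.31, 0.20, 0.15], [0.40, 0.36, 0.30],
                     [0.18, 0.19, 0.33], [0.13, 0.29, 0.12],
                     [0.26, 0.23, 0.41], [0.31, 0.50, 0.46]])
reference = np.array([[0.43, 0.30, 0.29], [0.55, 0.58, 0.54],
                      [0.28, 0.36, 0.64], [0.19, 0.57, 0.26],
                      [0.45, 0.48, 0.75], [0.40, 0.74, 0.67]])

# Least-squares fit of a 3x3 matrix M so that measured @ M ~= reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def standardize(img_rgb):
    """Map an HxWx3 linear image from this phone into the shared colour space."""
    return np.clip(img_rgb @ M, 0.0, 1.0)
```

Once every phone is mapped through its own fitted matrix, images from different devices land in the same colour space and can be meaningfully compared, which is what a diagnostic app would need.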

Even cheap phones have camera technology very similar to that of higher-end phones, says Brown, so they could be just as useful as devices for remote diagnostics.

“I see this as an area where there’s a lot of opportunity but there’s not a lot of research going into it,” says Brown. “And I think we really have the opportunity as researchers here at York to actually help steer the next generation of cameras and camera apps for purposes that are beyond photography.”


Michael S. Brown is a professor and Canada Research Chair in Computer Vision at York University in Toronto. Before joining York in 2016, he spent 13 years in Asia, working at the Hong Kong University of Science and Technology, Nanyang Technological University, and National University of Singapore. Dr. Brown’s research is focused on computer vision, image processing, and graphics. Dr. Brown routinely serves as an area chair for the major computer vision conferences (CVPR, ICCV, ECCV, and ACCV) and served as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) from 2011 to 2016. He is currently an associate editor for the International Journal of Computer Vision. 

