In a recent episode of Black Mirror, a detective solves a crime by using a memory machine to tap into witnesses’ brains, recreating what they saw on the night in question. This type of technology – similar to, but more advanced than, what we have now – is the premise behind most Black Mirror episodes.
In this particular case, however, reality may not be very far behind.
Last month, researchers from the University of Toronto Scarborough described a technique to translate brain signals into images, effectively allowing someone else to see what you saw. The work was published in the journal eNeuro.
Led by Professor Adrian Nestor, the team used electroencephalography, or EEG, to record the brain’s electrical signals when a subject looked at images of faces. The image of the face could then be digitally recreated by a computer algorithm using only the EEG signals.
But as Nestor explains, the Black Mirror version of the technique glosses over one very critical fact: every brain is unique. Two people can look at the same scene and remember it completely differently.
“Some people may remember different details or may be influenced by what they feel and personal biases,” explains Nestor. “We’re not reconstructing the images people see but rather their perception of what they saw.”
This is one of the reasons eyewitness accounts are among the least reliable parts of a criminal investigation.
It’s also the reason the computer algorithms in Nestor’s study had to spend hours with each subject, establishing a mapping between what the subject saw and their brain signals. Only then could that same algorithm recreate an image given a new set of brain signals. With algorithm improvements, this learning phase could be shortened, but it would never disappear completely.
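The calibration step described above can be sketched, very loosely, as a regression problem. Assuming that each trial's EEG recording and the viewed face image are each summarized as a feature vector (an assumption made purely for illustration; the study's actual pipeline is more involved), the learning phase fits a mapping from EEG features to image features, and reconstruction applies that mapping to new signals. A minimal synthetic sketch in Python:

```python
import numpy as np

# Hypothetical sketch of the "learning phase": fit a linear (ridge) mapping
# from EEG feature vectors to image feature vectors, then reconstruct image
# features from a new, unseen EEG trial. All sizes and data are synthetic.

rng = np.random.default_rng(0)
n_trials, n_eeg, n_img = 200, 64, 16   # illustrative sizes, not from the study

# Synthetic "recording session": image features drive EEG features through an
# unknown linear system plus noise (a stand-in for real brain signals).
true_map = rng.normal(size=(n_img, n_eeg))
img_feats = rng.normal(size=(n_trials, n_img))
eeg_feats = img_feats @ true_map + 0.1 * rng.normal(size=(n_trials, n_eeg))

# Calibration: solve for W so that eeg_feats @ W ~ img_feats (ridge regression).
lam = 1.0
W = np.linalg.solve(eeg_feats.T @ eeg_feats + lam * np.eye(n_eeg),
                    eeg_feats.T @ img_feats)

# Reconstruction: apply the learned mapping to a held-out trial.
new_img = rng.normal(size=(1, n_img))
new_eeg = new_img @ true_map + 0.1 * rng.normal(size=(1, n_eeg))
reconstructed = new_eeg @ W

# Correlation between reconstructed and true image features on the new trial.
corr = np.corrcoef(reconstructed.ravel(), new_img.ravel())[0, 1]
```

The hours of per-subject calibration correspond to collecting enough (image, EEG) pairs to fit the mapping reliably; without that data, `W` cannot be estimated, which is why the learning phase can shrink but never vanish.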
“It doesn’t matter how the tech evolves; it’s an essential stage,” says Nestor.
Despite this inherent subjectivity, Nestor says the technique could still be useful during police investigations, because it eliminates the need for witnesses to describe what they saw – to a sketch artist, for example.
“Verbalizing visual information is difficult. We are eliminating one translation step.”
Nestor’s previous work reconstructed images from functional magnetic resonance imaging (fMRI) of the brain. Although fMRI is more precise, MRI machines are huge and not exactly portable. EEG, on the other hand, is little more than a hairnet of electrodes and can easily be taken into the field.
“We want something that can function reliably and robustly in the real world,” says Nestor.
His team is now looking to improve the method both on the hardware side, to make it more user-friendly, and on the software side, improving the algorithms and filtering out noise from the signals.
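To give one concrete example of the kind of noise filtering involved (my illustration, not a description of the team's actual software): EEG recordings are routinely contaminated by 60 Hz interference from mains power, which can be suppressed with a notch filter. A minimal sketch in Python, using a crude Fourier-domain notch:

```python
import numpy as np

# Hypothetical denoising step: remove 60 Hz mains interference from a
# synthetic EEG-like trace by zeroing that frequency in the Fourier domain.
# Real pipelines use proper notch/band-pass filters and artifact rejection.

fs = 256                      # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)   # two seconds of signal

brain = np.sin(2 * np.pi * 10 * t)          # a 10 Hz "alpha-like" component
mains = 0.8 * np.sin(2 * np.pi * 60 * t)    # 60 Hz line-noise contamination
raw = brain + mains

spectrum = np.fft.rfft(raw)
freqs = np.fft.rfftfreq(raw.size, 1 / fs)

# Zero a narrow band around 60 Hz (a crude notch filter).
spectrum[np.abs(freqs - 60) < 1.0] = 0
cleaned = np.fft.irfft(spectrum, n=raw.size)

# The cleaned trace should be much closer to the underlying 10 Hz component.
err_raw = np.max(np.abs(raw - brain))
err_clean = np.max(np.abs(cleaned - brain))
```

Steps like this matter because the brain's electrical signals at the scalp are tiny compared with ambient electrical noise, especially outside a shielded lab.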
Once optimized, the technique could have a wide range of applications. In addition to its potential as a crime-solving aid, it could be used to communicate with people who cannot speak or use sign language. Simply by visualizing what they want to say, non-verbal people could have their thoughts translated into images or words that others can understand.
Nestor is also fascinated by the relationship between the real world and people’s perception of it. His ongoing work is revealing differences in perception with age as well as with various brain impairments. Conditions such as borderline personality disorder and schizophrenia can result in inaccurate perception of facial expressions. The technique could show psychologists what their patients are seeing, helping them diagnose and treat more effectively.
Measuring your brainwaves may soon become standard practice in many situations – though it’s unlikely the process will look exactly like the Black Mirror universal memory machine.