The race is on to create technology that would allow autonomous cars to recognize objects and react better than a human driver, but this only considers what is in view of the vehicle. Is it possible to peek around corners ahead of time to make even smarter and faster decisions?
Recent developments by an international team of researchers at Stanford University could make this a reality, improbable as it seems. Pairing lasers with a computational system that recovers visual information about objects not yet in view, the team hopes the technique will one day save lives.
“There is this preconceived notion that you can’t image objects that aren’t already directly visible to the camera,” says Canadian lead author Dr Matthew O’Toole. “We have found ways to get around these types of limiting situations.”
Their technique is related to the LIDAR system, which creates a 3D map of the surrounding environment by analyzing the flight path of photons from lasers. Depending on how long it takes for the photons to be reflected back at the vehicle, LIDAR can tell the proximity of other objects and direct the vehicle accordingly.
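As a rough illustration of the time-of-flight idea (a sketch, not the team's code), the core LIDAR calculation converts a photon's round-trip flight time into a distance:

```python
# Illustrative sketch of the basic LIDAR time-of-flight calculation:
# a photon's round-trip travel time gives the distance to a surface.

C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflecting surface, given the round-trip flight time."""
    return C * t_seconds / 2.0  # the photon travels out and back, so halve it

# A photon returning after roughly 66.7 nanoseconds implies a surface ~10 m away.
print(round(distance_from_round_trip(66.7e-9), 2))
```

Because light covers about 30 cm per nanosecond, picosecond-scale timing is what makes centimetre-level maps possible.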
The team went one step further. Their laboratory tests included a T-shaped wall with a gap at the joint separating the stem and the perpendicular wall. On one side of the stem’s partition there is an occluded object; on the other, an angled picosecond laser firing pulses at the perpendicular wall.
When a laser pulse hits the wall, its light scatters in all directions; some photons reach the hidden object, bounce back to the wall, and finally return toward the source. The team was primarily interested not in the light that bounced straight off the wall to the detector, but in these indirect photons scattered by the hidden object. Reconstructing these data would give them clues about the object’s shape and form.
“We are looking for the second, and third and fourth bounces – they encode the objects that are hidden,” said O’Toole.
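Since the laser and detector target the same spot on the wall, a third-bounce photon's extra delay encodes how far the hidden object sits from that spot. A minimal sketch of the geometry, with hypothetical numbers:

```python
# Hypothetical sketch of why later bounces encode hidden geometry
# (a simplification, not the paper's model). A "third-bounce" photon
# travels laser -> wall spot -> hidden object -> wall spot -> detector.

C = 299_792_458.0  # speed of light in m/s

def hidden_distance(total_time: float, wall_distance: float) -> float:
    """Distance from the wall spot to the hidden object, from total flight time."""
    # Total path: 2 * wall_distance (out to the wall and back)
    #           + 2 * r        (wall spot to hidden object and back)
    total_path = C * total_time
    return (total_path - 2.0 * wall_distance) / 2.0

# With the wall 1 m away and a photon arriving ~16.7 ns after the pulse,
# the hidden object sits ~1.5 m from the wall spot.
print(round(hidden_distance(16.7e-9, 1.0), 2))
```

The delay alone only pins the object to a sphere around the wall spot; combining delays from many wall spots is what narrows down its actual position.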
Upon return, the traces of light are extremely faint. Enter the single-photon avalanche diode (SPAD): a hyper-sensitive photon detector coupled with the laser and pointed at the same spot on the wall.
SPAD indicates the return of photons through a voltage peak – a single photon is enough to trigger an ‘avalanche’ of current.
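To make this concrete, here is a hypothetical sketch (not the team's pipeline) of how repeated pulses turn individual SPAD detections into a timing histogram, with the strong direct wall bounce and a faint later peak from the hidden object:

```python
# Hypothetical sketch: a SPAD reports single-photon arrival times; repeating
# the pulse many times builds a histogram whose peaks mark the bounce paths.
import random

random.seed(0)
BIN_WIDTH = 4e-12  # 4-picosecond timing bins (an assumed resolution)
histogram = {}

def record_arrival(t_seconds: float) -> None:
    """Add one detected photon to the timing histogram."""
    bin_index = int(t_seconds / BIN_WIDTH)
    histogram[bin_index] = histogram.get(bin_index, 0) + 1

# Simulate many pulses: most photons return near 6.67 ns (direct wall
# bounce), a faint minority near 16.7 ns (via the hidden object).
for _ in range(10_000):
    if random.random() < 0.95:
        record_arrival(random.gauss(6.67e-9, 20e-12))
    else:
        record_arrival(random.gauss(16.7e-9, 20e-12))

peak = max(histogram, key=histogram.get)
print(f"strongest return near {peak * BIN_WIDTH * 1e9:.2f} ns")
```

The faint second peak is exactly the signal the team mines for information about the hidden object, which is why so many pulses are needed.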
To gather enough data about the hidden object, the laser fires pulse after pulse while the detector accumulates photon arrivals. In the idealized, dark conditions of the lab, the initial scan takes between 7 and 70 minutes, depending on the reflectivity of the object.
After the scan has finished collecting data, the algorithm analyzes the flight paths and timing of the photons and recreates the object’s shape and form in high resolution. Incredibly, the algorithm can crunch all of this data in under a second on a regular laptop, whereas earlier techniques of this kind took several hours.
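The team's actual algorithm is far more sophisticated, but a toy backprojection conveys the principle: each measured flight time places the hidden object on a sphere around a wall spot, and intersecting those spheres across many spots localizes it. Everything below (the points, distances, and voting scheme) is illustrative:

```python
# Toy backprojection sketch (not the team's reconstruction algorithm):
# each wall spot's measured distance r says the hidden object lies on a
# sphere of radius r around that spot; intersecting those spheres across
# many spots "votes" for the object's true position.
import math

def backproject(measurements, candidates, tolerance=0.02):
    """Return the candidate 3D point consistent with the most measurements.

    measurements: list of ((x, y, z) wall spot, distance r to hidden object)
    candidates:   list of (x, y, z) points inside the hidden volume
    """
    scores = []
    for p in candidates:
        votes = sum(1 for spot, r in measurements
                    if abs(math.dist(p, spot) - r) < tolerance)
        scores.append((votes, p))
    return max(scores)[1]  # candidate with the most votes

# Hidden object at (0.5, 0.0, 1.0); three wall spots with ideal distances.
true_point = (0.5, 0.0, 1.0)
spots = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
measurements = [(s, math.dist(true_point, s)) for s in spots]
candidates = [(x / 10, y / 10, z / 10)
              for x in range(11) for y in range(11) for z in range(11)]
print(backproject(measurements, candidates))  # -> (0.5, 0.0, 1.0)
```

Brute-force voting over a voxel grid like this is slow; the real achievement was a transform that makes the equivalent reconstruction fast enough to run on a laptop.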
Integration with autonomous vehicles is still way off
Some obvious hurdles still need to be overcome. Despite the algorithm’s unprecedented processing speed, the initial scan takes far too long to be of use in a live driving scenario anytime soon.
Then there are real-world variables to consider, like the myriad irregular surfaces that photons will bounce off, as well as interference from daylight, which would demand a more powerful laser (yet one that doesn’t blind people).
“The biggest challenge is the amount of signal lost when light bounces around multiple times,” says O’Toole. “This problem is compounded by the fact that a moving car would need to measure this signal under bright sunlight, at fast rates, and from long range.”
Pseudo-psychic powers in autonomous vehicles may be a way off, but other avenues of interest could see integration with this tech sooner. The authors suggest that robotic assistants in hospitals and hotels could use some navigational foresight to traverse the corridors of busy buildings safely.