In recent years, automobiles have been equipped with multiple sensors and cameras that provide front, rear, and side information. This has led to an increase in the sophistication of assisted-driving technologies, up to and including driverless operation. In addition, image sensing with cameras has grown in importance because of its use in remote inspections of infrastructure, such as those conducted by drones and robots. In these applications, beyond capturing a two-dimensional image, it is also necessary to understand an object's dynamic three-dimensional parameters, such as its shape, movement, and distance from the camera.
Various methods for measuring the distance to an object have been proposed: stereo cameras (detect distance through triangulation between two viewpoints), infrared distance sensors (project an infrared pattern and detect distance from shifts in that pattern), ultrasonic distance sensors (emit ultrasonic waves and detect distance from the arrival times of the reflected waves), millimeter-wave radar (emit millimeter waves and detect the reflected signal), LiDAR (emit laser light and detect the reflected signal), SfM (Structure from Motion) technology (calculate distance from multiple images taken from multiple viewpoints as a camera moves around the object), and others.
Stereo cameras must maintain a separation of approximately 30 cm between their two lenses to achieve high distance accuracy, so miniaturization is inherently difficult. Infrared and ultrasonic sensors measure distance by projecting an infrared light pattern onto the object or emitting ultrasonic waves toward it, respectively, so measuring objects at distances greater than 10 m is difficult. Millimeter-wave radar and LiDAR equipment is costly, in addition to being difficult to miniaturize. SfM technology measures the distance to an object from multiple images taken by a moving camera; however, it is difficult to measure the distance to a moving object with high accuracy.
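The baseline trade-off mentioned above can be made concrete: stereo triangulation recovers depth Z from pixel disparity d via Z = f·B/d, where f is the focal length in pixels and B is the baseline between the lenses, so for a fixed disparity resolution, a shorter baseline yields coarser depth resolution. A minimal sketch with illustrative numbers only (not tied to any particular product):

```python
# Stereo triangulation sketch: depth from disparity, Z = f * B / d.
# The focal length and baseline values below are illustrative assumptions.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to the object in meters, from disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With a 1000 px focal length and a 0.30 m baseline, a 30 px disparity
# corresponds to a 10 m distance.
print(stereo_depth(1000.0, 0.30, 30.0))  # prints 10.0

# Halving the baseline halves the disparity produced by the same object,
# so depth estimates become correspondingly less precise.
print(stereo_depth(1000.0, 0.15, 30.0))  # prints 5.0
```

This is why shrinking a stereo rig below roughly 30 cm of lens separation degrades its distance accuracy.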
Thus, conventional distance sensors each have their own advantages and disadvantages, and obtaining high accuracy in a compact, low-cost package is difficult.
Toshiba's proprietary imaging technique uses a combination of color filters and image processing to obtain both a color image and a high-precision depth map from a single monocular camera image. Attaching a proprietary color aperture filter, consisting of blue and yellow filters, to the lens aperture produces a combination of blur and color shift that depends on the distance to the object. The distance is then detected for each pixel by analyzing the blur and color deviation within a single photographic image. Because the filter transmits green light, which contributes most to overall image brightness, degradation of the captured image quality is also suppressed.
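The per-pixel analysis can be illustrated with a deliberately simplified toy model. The sketch below is not Toshiba's proprietary algorithm; it only assumes, for illustration, that a two-color aperture displaces the blue channel horizontally relative to the yellow channel by an amount related to defocus, and estimates that shift along one image row by brute-force matching:

```python
import numpy as np

# Toy model of depth-from-color-shift (hypothetical, for illustration only).
# Assumption: the color aperture shifts the blue channel relative to the
# yellow channel, so the per-pixel shift serves as a depth cue.

def channel_shift(yellow_row: np.ndarray, blue_row: np.ndarray, max_shift: int = 5) -> int:
    """Estimate the integer shift (in pixels) of the blue channel relative
    to the yellow channel along one row, by minimizing the sum of squared
    differences over candidate shifts."""
    best, best_err = 0, np.inf
    n = len(yellow_row)
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -s), min(n, n - s)  # overlap valid for this shift
        err = np.sum((yellow_row[lo:hi] - blue_row[lo + s:hi + s]) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

# Synthetic check: the blue channel is the yellow channel shifted by 3 pixels.
rng = np.random.default_rng(0)
yellow = rng.random(200)
blue = np.roll(yellow, 3)
print(channel_shift(yellow, blue))  # prints 3
```

In a real system the shift-to-distance mapping would come from calibration, and the actual analysis would also exploit the distance-dependent blur; this sketch only shows why a per-pixel color deviation can encode depth.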
In tests using a commercial camera, Toshiba confirmed that the distance accuracy obtained from a single image taken with a monocular camera is comparable to that of a stereo camera with its lenses 35 cm apart. Because the method requires only a lens device and image processing, it makes an inexpensive image sensor possible.
Toshiba plans to further miniaturize the camera and speed up image processing to achieve early practical application of this technology.