A new 3D imaging chip is small enough to fit into a smartphone. Soon, consumers will be able to take a 3D image of an object with their smartphone and have it replicated on a 3D printer.
3D printing requires a digital model capturing the height, width, and depth of the object to be replicated. Today, 3D-scanning an object requires a relatively large system. Researchers at the California Institute of Technology (Caltech) are working on a tiny chip that would bring 3D imaging to smartphones.
The technology is based on a cheap, compact, yet highly accurate new device known as a nanophotonic coherent imager (NCI). Using an inexpensive silicon chip less than a square millimeter in size, the NCI provides the highest depth-measurement accuracy of any such nanophotonic 3D imaging device.
“Each pixel on the chip is an independent interferometer (an instrument that uses the interference of light waves to make precise measurements) which detects the phase and frequency of the signal in addition to the intensity,” says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering in the Division of Engineering and Applied Science.
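The article does not spell out why measuring phase matters, but in interferometric ranging a phase shift in the reflected light maps directly to a sub-wavelength depth difference. The following is a minimal illustrative sketch of that standard relation, not the NCI's actual signal processing; the 1550 nm wavelength is an assumed example value, not taken from the paper:

```python
import math

def phase_to_depth(phase_shift_rad, wavelength_m):
    """Convert an interferometric phase shift to a depth difference.

    Round-trip geometry: the reflected beam travels the extra depth
    twice, so depth = phase * wavelength / (4 * pi).
    """
    return phase_shift_rad * wavelength_m / (4 * math.pi)

# Example: a pi/2 phase shift at an assumed 1550 nm laser wavelength
# corresponds to a depth step of about 194 nanometers.
print(phase_to_depth(math.pi / 2, 1550e-9))
```

This is why a phase-sensitive pixel can resolve surface relief far finer than the laser wavelength itself, consistent with the micron-level resolution reported later in the article.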
The new chip utilizes an established detection and ranging technology called LIDAR, in which a target object is illuminated with scanning laser beams. The light that reflects off the object is then analyzed based on the wavelength of the laser light used, and the LIDAR can gather information about the object’s size and its distance from the laser to create an image of its surroundings.
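The basic distance measurement behind any LIDAR can be stated in one line: light travels out to the target and back, so the range is half the round-trip path. A hedged sketch of that textbook relation (not the NCI's coherent detection chain) looks like this:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_time_s):
    """Distance to a target from a laser signal's round-trip time.

    The light covers the distance twice (out and back), hence the /2.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2

# A reflection arriving about 3.336 nanoseconds after emission puts
# the target at roughly half a meter, the scanning distance quoted
# later in the article for the penny demonstration.
print(lidar_distance(3.336e-9))
```

In practice a coherent imager like the NCI infers this delay from the phase and frequency of the returned light rather than timing a pulse directly, which is what enables its much finer depth accuracy.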
“By having an array of tiny LIDARs on our coherent imager, we can simultaneously image different parts of an object or a scene without the need for any mechanical movements within the imager,” Hajimiri says.
The incorporation of coherent light not only allows 3-D imaging with the highest level of depth-measurement accuracy ever achieved in silicon photonics, but also makes it possible to build the device at a very small size.
“By coupling, confining, and processing the reflected light in small pipes on a silicon chip, we were able to scale each LIDAR element down to just a couple of hundred microns in size, small enough that we can form an array of 16 of these coherent detectors on an active area of 300 microns by 300 microns,” Hajimiri says.
The first proof of concept of the NCI has only 16 coherent pixels, meaning that the 3-D images it produces can contain only 16 pixels at any given instant. However, the researchers also developed a method for imaging larger objects by first imaging a four-pixel-by-four-pixel section, then moving the object in four-pixel increments to image the next section. With this method, the team used the device to scan and create a 3-D image of the “hills and valleys” on the front face of a U.S. penny, with micron-level resolution, from half a meter away. See image above.
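The tiling procedure described above, stepping a 4x4-pixel sensor across the object in four-pixel increments and assembling the sections, can be sketched as follows. This is pure illustration: the `scan_tile` callback and the grid sizes are hypothetical stand-ins, not the researchers' actual scanning code:

```python
def stitch_scan(scan_tile, tiles_x, tiles_y, tile=4):
    """Assemble a full depth map from 4x4-pixel tile scans.

    scan_tile(tx, ty) is a hypothetical stand-in for one NCI capture:
    it returns a tile x tile grid of depth values for the section at
    tile offset (tx, ty).
    """
    height, width = tiles_y * tile, tiles_x * tile
    depth_map = [[0.0] * width for _ in range(height)]
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            section = scan_tile(tx, ty)  # one 16-pixel capture
            for r in range(tile):
                for c in range(tile):
                    depth_map[ty * tile + r][tx * tile + c] = section[r][c]
    return depth_map

# Toy usage: a fake scanner whose values encode the tile coordinates,
# stitched into an 8-row by 12-column depth map.
fake = lambda tx, ty: [[10 * ty + tx] * 4 for _ in range(4)]
full = stitch_scan(fake, tiles_x=3, tiles_y=2)
print(len(full), len(full[0]))
```

The key point is that each capture is independent, so the tiny sensor trades acquisition time for image size, the same trade the article describes for the penny scan.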
In the future, Hajimiri says, the current array of 16 pixels could be easily scaled up to hundreds of thousands. By creating such vast arrays of these tiny LIDARs, the imager could one day serve a broad range of applications: from very precise 3-D scanning and printing, to helping driverless cars avoid collisions, to improving motion sensitivity in superfine human-machine interfaces, where the slightest movements of a patient’s eyes and the most minute changes in a patient’s heartbeat can be detected on the fly.
“The small size and high quality of this new chip-based imager will result in significant cost reductions, which will enable thousands of new uses for such systems by incorporating them into personal devices such as smartphones,” he says.
The paper is titled “Nanophotonic coherent imager.” In addition to Hajimiri, the Caltech coauthors include Firooz Aflatouni, a former postdoctoral scholar and current assistant professor at the University of Pennsylvania; graduate student Behrooz Abiri; and Angad Rekhi (BS ’14). This work was partially funded by the Caltech Innovation Initiative.
The research has been published in the February 2015 issue of Optics Express.