Researchers develop a camera the size of a salt grain
Researchers from Princeton University and the University of Washington have developed a camera the size of a coarse grain of salt. Cameras this small typically produce poor picture quality, but this group of researchers has figured out a way to output sharp, full-color images comparable to those from conventional cameras 500,000 times the size.
The camera leverages imaging hardware and computational processing to produce stunning results compared to previous state-of-the-art equipment. The primary innovation is a technology called a “metasurface.”
In traditional cameras, a series of curved lenses focuses light rays into an image. A metasurface, which can be manufactured much like an integrated circuit, is only half a millimeter wide and is packed with 1.6 million cylindrical posts. These tiny columns are roughly the size of the human immunodeficiency virus.
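As a rough sanity check on those figures (assuming a square aperture and a uniform grid, neither of which is stated in the study), packing 1.6 million posts onto a half-millimeter surface implies a center-to-center spacing of a few hundred nanometers:

```python
import math

aperture_um = 500.0    # metasurface width: half a millimeter
num_posts = 1_600_000  # cylindrical posts packed onto the surface

# Assuming a roughly square grid: posts per side and center-to-center pitch
posts_per_side = math.sqrt(num_posts)            # ~1,265 posts per side
pitch_nm = aperture_um / posts_per_side * 1000   # ~395 nm between posts
print(f"~{posts_per_side:.0f} posts per side, ~{pitch_nm:.0f} nm pitch")
```

That sub-wavelength spacing is what lets the posts shape visible light (wavelengths of roughly 400-700 nm) rather than simply scattering it.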
“Each post has a unique geometry, and functions like an optical antenna,” notes Phys.org. “Varying the design of each post is necessary to correctly shape the entire optical wavefront.”
Machine learning-based algorithms process the signals produced by the posts' interactions with light, outputting higher-quality images with the widest field of view of any comparable metasurface camera engineered so far.
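The study's learned reconstruction is specific to this metasurface, but the core idea, computationally undoing a known optical blur, can be illustrated with classical Wiener deconvolution. This is a stand-in for illustration, not the authors' algorithm:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Recover an image from a blurred capture given the optic's
    point-spread function (PSF), via a damped inverse filter."""
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)  # suppress noise blow-up
    return np.real(np.fft.ifft2(B * G))

# Toy demo: blur a synthetic scene with a small Gaussian PSF, then recover it.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(psf, s=scene.shape)))
restored = wiener_deconvolve(blurred, psf)
```

The real system goes further: the metasurface and the reconstruction network were designed together, so the optics produce a blur the software is especially good at inverting.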
Additionally, previous cameras of this type required pure laser light and other laboratory conditions to produce an image. Because its optical surface is integrated with the signal processing algorithms, this device can capture pictures with natural light, making it more practical. The researchers envision it being used in non-invasive medical procedures and as compact sensors for small robots.
The scientists compared pictures captured with their tech against previous methods, and the results were night and day (image above). They also pitted it against a traditional camera with a compound optic of six refractive lenses, and aside from blurring around the edges, the images were comparable.
“It’s been a challenge to design and configure these little microstructures to do what you want,” said Princeton Ph.D. student Ethan Tseng, who co-led the study published in Nature Communications. “For this specific task of capturing large field of view RGB images, it’s challenging because there are millions of these little microstructures, and it’s not clear how to design them in an optimal way.”
To find workable post configurations, the team designed a computer simulation to test different nano-antenna setups. However, modeling all 1.6 million posts at once would consume "massive" amounts of memory and time, so they scaled the simulation down to a size that still adequately approximates the metasurface's image-rendering capabilities.
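A toy back-of-envelope shows why downsampling helps: memory for a simulated optical field grows with the square of the grid's side length, so shrinking the grid 10x cuts memory roughly 100x. This assumes just one complex sample per post; a real wave-optics solver needs many samples per post, per wavelength, plus gradients, so actual costs are far higher:

```python
import numpy as np

def field_memory_mb(posts_per_side, dtype=np.complex128):
    """Memory (MB) to store one complex field sample per post
    on a square posts_per_side x posts_per_side grid."""
    return posts_per_side ** 2 * np.dtype(dtype).itemsize / 1e6

full = 1265          # ~1.6 million posts on the full metasurface
proxy = full // 10   # a 10x-downsampled proxy surface

print(f"full grid:  {field_memory_mb(full):.1f} MB per field copy")
print(f"proxy grid: {field_memory_mb(proxy):.2f} MB per field copy")
```

Quadratic scaling like this is why the researchers optimized post designs on a reduced model and then validated the results against the full metasurface.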
The team’s next goal is to add more computational capabilities to the tech. Optimizing image quality is a no-brainer, but they also want to incorporate object detection and other sensing abilities to make the camera viable for medical and commercial use.
As previously mentioned, endoscopy and robotics are just a couple of practical applications for metasurfaces. An arguably more exciting use would be to eliminate the camera bump on smartphones.
“We could turn individual surfaces into cameras that have ultra-high resolution, so you wouldn’t need three cameras on the back of your phone anymore, but the whole back of your phone would become one giant camera,” said Felix Heide, the study’s senior author and an assistant professor of computer science at Princeton. “We can think of completely different ways to build devices in the future.”