Suhit Gupta
03-25-2008, 06:30 PM
<div class='os_post_top_link'><a href='http://www.physorg.com/news125159442.html' target='_blank'>http://www.physorg.com/news125159442.html</a></div>

<p><em>"The camera you own has one main lens and produces a flat, two-dimensional photograph, whether you hold it in your hand or view it on your computer screen. On the other hand, a camera with two lenses (or two cameras placed apart from each other) can take more interesting 3-D photos. But what if your digital camera saw the world through thousands of tiny lenses, each a miniature camera unto itself? You'd get a 2-D photo, but you'd also get something potentially more valuable: an electronic "depth map" containing the distance from the camera to every object in the picture, a kind of super 3-D. Stanford electronics researchers, led by electrical engineering Professor Abbas El Gamal, are developing such a camera, built around their "multi-aperture image sensor." They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than pixels in standard digital cameras. They've grouped the pixels in arrays of 256 pixels each, and they're preparing to place a tiny lens atop each array."</em></p>

<p><img alt="" border="0" src="http://images.thoughtsmedia.com/resizer/thumbs/size/500/dht/auto/1206118564.usr14.jpg" /></p>

<p>This camera reminds me of the holographic cameras from Star Trek that can capture an entire scene in 3D with a single shot. It will be interesting to see how they seamlessly stitch all the smaller groups of pixels together into one large scene. One thing I wish the article had talked more about is how lighting and shadows would be used to help build out the 3D scene, since shading cues are a pretty traditional way of converting 2D images into 3D. Anyway, at nearly 140MP, this could give those massive camera backs a run for their money.</p>
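<p>For the curious, here is a minimal sketch of how a depth map could be pulled out of two overlapping sub-aperture views using plain block-matching stereo and the standard depth = focal length &times; baseline / disparity relation. This is only an illustration under assumed values: the block size, maximum disparity, focal length in pixels, and baseline between neighboring micro-lenses are placeholders, and the Stanford group's actual reconstruction algorithm is not described in the article.</p>

<pre>
import numpy as np

def depth_from_disparity(left, right, block=8, max_disp=16,
                         focal_px=500.0, baseline_mm=1.0):
    """Coarse depth map from two overlapping sub-aperture images.

    left, right: 2-D grayscale float arrays of the same scene seen from
    two slightly offset apertures. Returns one depth value (in mm) per
    block, with 0 where no usable match was found. All parameters are
    illustrative placeholders, not values from the Stanford sensor.
    """
    h, w = left.shape
    depth = np.zeros((h // block, w // block), dtype=np.float64)

    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = left[y:y + block, x:x + block]
            best_disp, best_err = 0, np.inf
            # Slide the patch horizontally across the other view and keep
            # the offset with the smallest sum of absolute differences.
            for d in range(max_disp):
                if x + d + block > w:
                    break
                cand = right[y:y + block, x + d:x + d + block]
                err = np.abs(patch - cand).sum()
                if err < best_err:
                    best_err, best_disp = err, d
            if best_disp > 0:
                # Standard stereo relation: depth = f * B / disparity,
                # so nearby objects show larger shifts between views.
                depth[by, bx] = focal_px * baseline_mm / best_disp
    return depth
</pre>

<p>With thousands of micro-lens arrays on the sensor, each small region of the scene is seen by several neighboring apertures, so a matching step like the one above (or something far more sophisticated) could in principle run between every pair of overlapping views and be merged into the full-resolution 2D image plus its depth map.</p>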