Project

Time-folded optics

Copyright

MIT Media Lab, Camera Culture group

What are the applications of this technology? 

Broadly speaking, time-folding can impact any imaging system that optically captures depth or time information. We demonstrated that time-folding can reduce the length of a lens system by an order of magnitude, which is useful for building smaller camera lenses or more compact optics for mapping satellites. We also demonstrated that it is possible to image at two different focal lengths in the same acquisition. This can be useful in applications where both a wide-angle view and a zoomed view are captured at the same time, without loss of resolution or the need to change lenses. For example, optical coherence tomography (OCT) systems, interferometric imaging systems, and photocytometry systems can benefit from this. In our final demonstration, we showed multispectral ultrafast imaging, which has applications in fluorescence lifetime imaging and enables multispectral imaging without having to change filters. Beyond our demonstrations, the technique can be applied to other types of imaging, such as time-resolved ellipsometric or focal-stack imaging. One of the most common uses of time-of-flight sensors today is LIDAR, which appears in many robotics and navigation applications, including autonomous cars. Time-folded optics could find a use in those applications as well.

What is the cost of the system? 

The cost of this system depends on the hardware specifications. Our technique is a general new perspective that can be applied to a variety of systems with different costs. The optics itself can be as cheap as a few tens of dollars, but the imaging sensor is the main cost component. If you want an ultrafast system with high time resolution, the setup can cost as much as $500K; if you use an electronic single-photon avalanche diode (SPAD) array camera, the cost can be around $50K; and if you use a continuous-wave time-of-flight camera, the sensor cost can be as low as $50–500.

What is time-folding?

Time-folding is the act of folding the spatial optical path into time in order to encode desired information from the scene, or a desired functionality of the optics, into the time of acquisition. One can time-fold distances, wavelengths, or polarizations, or even time-fold an entire transfer function by placing optics inside a cavity. Time-folding makes the optics time-dependent, so that at different times the optics has a different response function or modulation transfer function (MTF). As our study proposes for the first time, such a conversion can be done with a variety of cavities; the Fabry-Perot cavity is the most basic way to do time-folding. During each round trip in the cavity, some of the light escapes the cavity and is captured on the sensor. The sensor is able to distinguish the outputs of different round trips and thus recover the original information.
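
To make the idea concrete, here is a minimal sketch in Python of how a Fabry-Perot cavity maps extra path length to arrival time: light that leaves after k round trips arrives 2kL/c later and is attenuated on every bounce, which is what a time-resolved sensor can separate. The cavity length and mirror reflectivities are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch: a Fabry-Perot cavity folds extra path length into arrival time.
# Cavity length and mirror reflectivities are illustrative assumptions,
# not the parameters used in the paper.

C = 3e8  # speed of light, m/s

def round_trip_outputs(cavity_length_m=0.025, r1=0.9, r2=0.9, n_trips=5):
    """Return (extra delay in ps, relative intensity) for each cavity exit.

    Light escaping after k round trips has travelled an extra 2 * k * cavity_length,
    so it arrives 2 * k * L / c later; a simplified loss model attenuates it by the
    product of the mirror reflectivities on every round trip.
    """
    outputs = []
    for k in range(n_trips):
        extra_path_m = 2 * k * cavity_length_m   # folded path length
        delay_ps = extra_path_m / C * 1e12       # extra arrival time
        intensity = (1 - r2) * (r1 * r2) ** k    # fraction leaking out toward the sensor
        outputs.append((delay_ps, intensity))
    return outputs

if __name__ == "__main__":
    for k, (t_ps, i_rel) in enumerate(round_trip_outputs()):
        print(f"round trip {k}: +{t_ps:7.1f} ps, relative intensity {i_rel:.4f}")
```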

What's new about time-folding? 

Time-folding is a new way of designing photography or imaging optics by leveraging the duality between time and space. The technique enables unconventional arrangements of optics (for example, placing a lens 10 times closer to the sensor than its physical focal length) that provide a desired functionality at a specific time.
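
As a back-of-the-envelope illustration of that claim (with made-up numbers and a simplified geometry, not the paper's configuration): if the lens-to-sensor spacing is folded inside a cavity, the light can accumulate the full focal distance over several round trips even though the physical spacing is only a fraction of the focal length.

```python
# Back-of-the-envelope sketch of the "lens placed ~10x closer than its focal length" idea.
# All numbers are illustrative assumptions, not the configuration used in the paper.

focal_length_mm = 500.0      # lens focal length
physical_spacing_mm = 50.0   # actual lens-to-sensor distance (10x shorter)
cavity_length_mm = 45.0      # cavity folded into that spacing

def effective_path_mm(n_round_trips: int) -> float:
    """Optical path accumulated after n round trips inside the cavity."""
    return physical_spacing_mm + 2 * n_round_trips * cavity_length_mm

# Find the round trip at which the folded path first reaches the focal distance.
n = 0
while effective_path_mm(n) < focal_length_mm:
    n += 1
print(f"focal distance reached after {n} round trips "
      f"({effective_path_mm(n):.0f} mm of folded path in a {physical_spacing_mm:.0f} mm gap)")
```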

What is an ultrafast camera? 

A typical cell phone captures video at a frame rate of 30 frames per second (fps). The high-speed cameras that can capture phenomena such as a bullet firing have frame rates ranging from 500 to 25,000 fps. The ultrafast cameras we are talking about have frame rates on the order of 1,000,000,000,000 fps (one trillion fps). At this speed it is possible to image light as it propagates through space! These cameras work by illuminating the scene with a very short laser flash followed by a precisely timed camera capture, and they are becoming more and more commercially available. Most concepts introduced in our study can also be used by continuous-wave depth cameras or time-of-flight cameras, which are commercially used in game interfaces, autonomous cars, and mapping equipment.
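
The frame rates quoted above translate directly into how far light moves within a single frame, which is a quick way to see why only a trillion-fps camera can watch light propagate. A short calculation:

```python
# How far light travels during one frame at the frame rates quoted above.
# At ~1 trillion fps a frame spans well under a millimetre of light travel,
# which is why light-in-flight imaging becomes possible.

C = 3e8  # speed of light, m/s

frame_rates_fps = {
    "cell phone video": 30,
    "high-speed camera (upper end)": 25_000,
    "ultrafast camera": 1_000_000_000_000,
}

for name, fps in frame_rates_fps.items():
    distance_per_frame_m = C / fps
    print(f"{name:>30}: {fps:>16,} fps -> light travels {distance_per_frame_m:.2e} m per frame")
```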

What was the genesis of this technique?

Time-of-flight imaging has conventionally been used for imaging fast phenomena (ultrafast imaging) and imaging in complex geometries (such as around corners and through diffusers). There has been a lot of research on how to better resolve the scene using time information or computational methods, but there was no notion of what could be done with the imaging optics themselves in the time dimension.

How does this work relate to the "optical brush" and "reading through closed books" from the Camera Culture research group? 

In principle we use a similar ultrafast imaging sensor to extract the signal we are looking for. By inspecting the temporal profile of our signal, we are able to infer properties of the scene, or achieve functionalities from our optics, that are impossible to retrieve using only spatial information. The previous works each focused on one specific challenging application, such as reading through a closed book or imaging around a corner, whereas this work is at a more fundamental level and can impact the design of optics for any time-resolved system.

What are the limitations of the current first demonstration? 

The optical alignment of the cavity is sensitive: if the optical components are not coaxial, the signal can escape the cavity. In the future we envision optical components designed for this use. For example, a single piece of glass can be coated on both sides, creating a high-accuracy cavity; this would constrain the reflective components and prevent them from going out of alignment. The loss of signal at higher round trips is another drawback. If your application requires 10 or more round trips in the cavity, the signal level is going to be reduced notably after that many round trips, since cavities can be lossy. Having said that, there are plenty of possibilities even with the first few round trips, as demonstrated in the paper.
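
To give a feel for that signal drop-off, here is a hypothetical loss model: a single lumped per-round-trip efficiency stands in for mirror reflectivity and all other cavity losses, and the values are illustrative assumptions rather than measurements from our setup.

```python
# Hypothetical loss model for the remark about 10 or more round trips.
# A single lumped per-round-trip efficiency stands in for mirror reflectivity
# and all other cavity losses; the values below are illustrative assumptions.

def remaining_signal(per_trip_efficiency: float, n_round_trips: int) -> float:
    """Fraction of the original signal left after n round trips in a lossy cavity."""
    return per_trip_efficiency ** n_round_trips

for eta in (0.9, 0.8):
    fractions = ", ".join(f"{n} trips: {remaining_signal(eta, n):.2f}" for n in (1, 3, 5, 10))
    print(f"per-round-trip efficiency {eta:.1f} -> {fractions}")
```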

What's next for you and your colleagues as you move forward with this technology? Are there any plans to enhance the technique, or other avenues or applications you'd like to explore?

There are endless possibilities with time-folding; for us it is as if we have found a new way to think about optical imaging at a very fundamental level. There are four core paths lying ahead of time-folding technology:

1. Designing imaging optics with radically better capabilities. Examples in this category are ultracompact optics and SNR enhancement by time-folding large-aperture, long-focal-length lenses.

2. Realizing higher-order time-folding by improving the cavity quality factor or reducing its loss. Example projects in this direction include improving the cavity element materials or using polarization to reduce the loss of each round trip.

3. Considering new categories of cavities to realize more complex functionalities or to enable time-folding for non-time-sensitive sensors. For example, one can use evanescent cavities, ring cavities, or nonlinear cavities to acquire new types of information about the scene.

4. Using phase instead of time to realize the same functionalities and architectures. An example in this direction is combining a specific type of cavity with interferometric imaging to encode desired information into phase.

Can a normal camera be enhanced by time-folding? 

Not with the architecture presented in this study. A normal camera doesn't have the speed or depth sensitivity to resolve each round trip, so the output would be an integration of all round trips. But this is a great challenge to think about, and it might be addressable with computational techniques or other types of cavities.

For starters, how is this going to impact our lives?

Time-folding impacts the design of optics in time- or depth-sensitive cameras such as those used in autonomous navigation, aerial mapping, commercial gaming consoles, VR/AR headsets, and many other applications that rely on time-of-flight, depth, or ultrafast cameras. Time-folded optics can compress the size of those optics, enable capturing color as well as depth information, or even enhance SNR in a given form factor.

Could you talk a little about any medical device or technology applications?

We have demonstrated multispectral imaging using time-folding, which can be used for fluorescence lifetime imaging. Another potential application is improving the optics of optical coherence tomography (OCT) systems with time-folding. There is a wide range of possibilities to leverage time-folding to improve magnification or enable nonconventional functionality.