A femtosecond is one millionth of one billionth of a second.

To put that in context, a femtosecond is to a second as a second is to roughly 31.7 million years. Put another way, 200 femtoseconds is the time it takes for the fastest chemical reactions (like the reaction of pigments in the eye to light stimulus) to occur, and 300 femtoseconds is the complete duration of a single vibration of an iodine molecule.
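
For the skeptical, that analogy checks out in a couple of lines of Python:

```python
# One second contains 1e15 femtoseconds, so the question is:
# how long is 1e15 seconds?
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 seconds
print(1e15 / SECONDS_PER_YEAR / 1e6)    # ~31.7 (million years)
```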

That's fast.

Now a group of researchers at the MIT Media Lab has spent a modest $500 to create a "nano-camera" that operates at the speed of light and could well change the way 3D scanners work in the future.

The camera was demonstrated at Siggraph Asia in Hong Kong and could ultimately change how medical imaging, collision-avoidance systems in cars, and 3D object scanning work.

Based on what's referred to as "Time of Flight" technology, the camera locates objects by measuring how long it takes a light signal to reflect off a surface and return to a sensor.

The real trick, according to Achuta Kadambi, an MIT graduate student and co-author of the research, is that this new camera works on translucent objects.

"Using the current state of the art, you cannot capture translucent objects in 3D," Kadambi said. "That's because the light that bounces off the transparent object and the background smear into one pixel on the camera. Using our technique, you can generate 3D models of translucent or near-transparent objects."

A conventional Time of Flight camera works by firing a coherent light signal at an object and recording how long it takes to return. Knowing the speed of light, the camera can calculate the distance the signal has traveled and thereby gauge the depth of each point on the surface from which it was reflected.
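
The arithmetic behind that is simple enough to sketch; the ten-nanosecond reading below is an illustrative value, not a figure from the MIT camera:

```python
C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(t_seconds):
    # The signal travels out to the surface and back, so depth is
    # half the total distance light covers in the measured time.
    return C * t_seconds / 2.0

# A reflection arriving 10 nanoseconds after emission implies a
# surface roughly 1.5 meters away.
print(depth_from_round_trip(10e-9))  # ~1.499 m
```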

Here's the problem with that system: semitransparent surfaces, edges, or motion of the camera or the target object can all produce multiple reflections, which smear together and return to the camera as a jumbled signal that defies accurate measurement.
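
To see why that breaks the measurement, consider a toy model in which a weak reflection from a translucent pane and a stronger one from the wall behind it land on the same pixel. Every number here (amplitudes, distances, pulse width) is invented for illustration:

```python
import numpy as np

C = 299_792_458.0
fs = 50e9                        # hypothetical 50 GHz sampling rate
t = np.arange(0, 40e-9, 1 / fs)  # 40 ns observation window

def pulse(t, delay, width=0.5e-9):
    return np.exp(-((t - delay) / width) ** 2)

d1, d2 = 1.0, 2.5  # meters: translucent pane, then the wall behind it
measured = 0.4 * pulse(t, 2 * d1 / C) + 0.6 * pulse(t, 2 * d2 / C)

# A naive single-return estimator (here, the energy centroid) lands
# between the two true surfaces and reports neither depth correctly.
t_est = np.sum(t * measured) / np.sum(measured)
print(C * t_est / 2)  # ~1.9 m: neither 1.0 m nor 2.5 m
```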

The new MIT device defeats that problem with an encoding technique borrowed from telecommunications, which it uses to calculate the distance a signal has traveled.

Ramesh Raskar, the leader of the Camera Culture group at the Media Lab, developed the method alongside Kadambi, Refael Whyte, Ayush Bhandari, and Christopher Barsi.

"We use a new method that allows us to encode information in time," Raskar says. "When the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal."

The nano-camera builds on 2011 work in which Raskar's team created a trillion-frame-per-second camera capable of capturing a single pulse of light. That earlier camera worked by blasting a given scene or object with a femtosecond-long pulse of light while speedy but expensive laboratory optical equipment took an image each time. The system cost some $500,000 to build.

This new "nano-camera" instead uses a continuous-wave signal that oscillates at nanosecond periods, which allows the use of off-the-shelf light-emitting diodes to achieve near-femtophotography speeds, but it costs just $500 to build.
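
In a continuous-wave system like this, distance comes from the phase shift of the modulated light rather than from a raw arrival time. A minimal sketch of that conversion, with an illustrative modulation frequency rather than the camera's actual parameters:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase(phase_shift_rad, f_mod_hz):
    # The round trip delays the wave by phase/(2*pi) modulation
    # periods; depth is half the distance light covers in that time.
    delay = phase_shift_rad / (2 * math.pi * f_mod_hz)
    return C * delay / 2.0

# A 50 MHz modulation (20 ns period, i.e. nanosecond-scale) and a
# quarter-cycle phase shift put the surface about 0.75 m away.
print(distance_from_phase(math.pi / 2, 50e6))  # ~0.75 m
```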

"We are able to unmix the light paths, and therefore visualize light moving across the scene," Kadambi said. "We're able to get similar results to the $500,000 camera, albeit of slightly lower quality, for just $500."