Three Little Sweeps Turn a 2D Image into a 3D Model?
The video you're about to watch will blow your mind into another dimension.
No, really. Imagine spline modeling without splines. Strike that. Imagine it without the modeler.
A group of computer science researchers is working on a new interactive 3D modeling technique that can read information from 2D photos and, by tracing three strokes over the image, turn the pictured objects into 3D models.
Called the '3-Sweep' technique, the software appears to use just three mouse strokes to define the three dimensions of a given object. Judging from what you see in the video, the first two sweeps define the object's profile and the third traces the object's main axis, snapping the resulting shape to the outlines in the photo.
If this is the real thing, and some initial research indicates that it might be the genuine goods, the 3D output of the startling process can then be rotated, adjusted and otherwise manipulated.
In what appears to be a collaboration between scientists at Tsinghua University and Tel-Aviv University, the work is credited to Tao Chen, Zhe Zhu, Ariel Shamir, Shi-Min Hu and Daniel Cohen-Or.
Of course, the proof will be in the pudding, and a paper on the project, entitled "3-Sweep: Extracting Editable Objects from a Single Photo," has been submitted for presentation at SIGGRAPH Asia in November of this year.
If this is indeed a real interface for extrapolating 3D models from nothing more than a reasonably sharp 2D image, the gentlemen who worked on the project will only need to answer one question after they present their paper: "Where would you like us to park this truck filled with money, guys?"
It does seem legit, and if it isn't, someone spent a good deal of time and effort to make it appear real. Here's a link to an explanation of the software on Ariel Shamir's home page and a summary of the project:
We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts— human assistance implicitly segments a complex object into its components, and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. We show several examples and present a user study illustrating the usefulness of our technique.
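To make the abstract's idea of "each stroke defines one dimension of the component" a bit more concrete: the core geometric primitive behind this kind of extraction is a generalized cylinder, i.e. a cross-section profile swept along a main axis. The sketch below is not the authors' algorithm (which also snaps to image outlines and infers geometric constraints); it is only a minimal NumPy illustration of the sweep step itself, with an assumed straight vertical axis and circular cross-sections. The function name `sweep_profile` and all parameters are hypothetical.

```python
import numpy as np

def sweep_profile(radii, axis_points, segments=16):
    """Sweep a circular cross-section along an axis polyline.

    radii[i] is the cross-section radius at axis_points[i];
    axis_points is an (n, 3) array of points along the main axis.
    Returns an (n * segments, 3) array of surface vertices.
    NOTE: this toy version assumes a vertical axis and places each
    ring in the xy-plane; a real implementation would orient each
    ring by the local axis tangent.
    """
    radii = np.asarray(radii, dtype=float)
    axis_points = np.asarray(axis_points, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    rings = []
    for r, p in zip(radii, axis_points):
        ring = np.stack([p[0] + r * np.cos(angles),
                         p[1] + r * np.sin(angles),
                         np.full(segments, p[2])], axis=1)
        rings.append(ring)
    return np.vstack(rings)

# A cone-like component: the radius tapers from 1.0 to 0.2
# over 5 rings along a straight 2-unit axis.
axis = np.stack([np.zeros(5), np.zeros(5), np.linspace(0.0, 2.0, 5)], axis=1)
verts = sweep_profile(np.linspace(1.0, 0.2, 5), axis)
print(verts.shape)  # (80, 3)
```

In the actual interface, the first two strokes would fix the profile (the `radii` at the start of the sweep) and the third stroke would supply `axis_points`, with the radii re-fitted to the object's silhouette in the photograph.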
Let's just hope the work they've done here doesn't go the way of Canoma...