Hovering iPad

"Three-dimensional objects can be generated based on two-dimensional objects. A first user input identifying a 2D object presented in a user interface can be detected, and a second user input including a 3D gesture input that includes a movement in proximity to a surface can be detected. A 3D object can be generated based on the 2D object according to the first and second user inputs, and the 3D object can be presented in the user interface."

That simple declaration opens the document granting Patent No. 8,514,221 to Apple, and it could signal a sea change in the way 3D modelers go about their business.

Back in July of 2011, the US Patent & Trademark Office received a patent application from Apple outlining the concepts behind an advanced 3D gesturing system aimed at CAD applications for product and game developers as well as for consumers.

Apple says the displays on next-generation iPads and other iOS devices will allow consumers to use 3D gesturing.

In short, 3D gesturing would let users control color and texture, rotate objects to view their designs from various perspectives, and perhaps even mold virtual designs for eventual output to 3D printers via CAD software. Apple also envisions CAD features that let the user apply color and texture to the virtual surfaces of objects: slide bars would control the red, green, and blue color mix, while 3D gesture inputs would control hue, saturation, and brightness.
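To make that split concrete, here is a minimal Swift sketch of such a control scheme, assuming on-screen sliders feed the RGB mix while a hovering gesture nudges the HSB channels. The gesture deltas are hypothetical; nothing here is a real hover-sensing iOS API.

```swift
import UIKit

/// Sketch of the patent's color controls: sliders drive the RGB mix,
/// a hovering 3D gesture nudges hue, saturation, and brightness.
/// `applyHoverGesture` takes hypothetical deltas derived from finger
/// movement above the display; that plumbing is not a real iOS API.
struct SurfaceColorModel {
    // Slider-driven channels, each 0...1.
    var red: CGFloat = 0.5
    var green: CGFloat = 0.5
    var blue: CGFloat = 0.5

    // Accumulated offsets from 3D gestures.
    private var hueShift: CGFloat = 0
    private var saturationShift: CGFloat = 0
    private var brightnessShift: CGFloat = 0

    /// Fold one hovering-gesture sample into the HSB offsets.
    mutating func applyHoverGesture(hue dh: CGFloat,
                                    saturation ds: CGFloat,
                                    brightness db: CGFloat) {
        hueShift += dh
        saturationShift += ds
        brightnessShift += db
    }

    /// Resolve the slider RGB mix plus the gesture HSB offsets into
    /// the color painted onto the virtual surface.
    var resolvedColor: UIColor {
        let base = UIColor(red: red, green: green, blue: blue, alpha: 1)
        var h: CGFloat = 0, s: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
        base.getHue(&h, saturation: &s, brightness: &b, alpha: &a)
        var hue = (h + hueShift).truncatingRemainder(dividingBy: 1)
        if hue < 0 { hue += 1 }   // keep the hue wheel within 0...1
        return UIColor(hue: hue,
                       saturation: min(max(s + saturationShift, 0), 1),
                       brightness: min(max(b + brightnessShift, 0), 1),
                       alpha: a)
    }
}
```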

This technology, which uses gesture inputs to render complex 3D objects, could be used to generate figures for video games, objects for the consumer market, and perhaps even 3D maps.

Aimed at providing an intuitive way to generate and modify 3D objects based on 2D objects in photographs, the technology would translate a user's finger and hand movements into 3D gesture inputs.

Possible implementations might allow the CAD program to interpret a multi-touch input in which the user lifts the fingers off the surface to "extrude" a formerly 2D object into a 3D one.
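SceneKit can already perform the geometric half of that trick. The Swift sketch below extrudes a traced outline by a hypothetical `hoverHeight` value standing in for the finger-to-screen distance a proximity sensor array would report; no public iOS API exposes such a distance today, though SCNShape itself is a real SceneKit type.

```swift
import SceneKit
import UIKit

/// Sketch of the "lift to extrude" gesture: a flat outline traced on
/// screen becomes a solid as the fingers rise above the glass.
/// `hoverHeight` is a hypothetical stand-in for a proximity-sensor
/// reading of finger height above the display.
func extrudedNode(from outline: UIBezierPath, hoverHeight: CGFloat) -> SCNNode {
    // Map an assumed 0...100-point hover range onto an extrusion
    // depth of 0...2 scene units.
    let depth = max(0, min(hoverHeight, 100)) / 100 * 2.0

    // SceneKit turns any Bézier outline into an extruded solid.
    let shape = SCNShape(path: outline, extrusionDepth: depth)
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.systemBlue
    shape.materials = [material]

    return SCNNode(geometry: shape)
}
```

Calling this repeatedly as the hover reading changes would make the shape appear to rise out of the screen under the user's fingertips.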

The new 3D GUI for iOS mobile devices works with proximity sensor arrays and responds to hovering gestures. It grew out of work on advanced 3D GUIs that inventors Nicholas King and Todd Benjamin began at Apple back in 2009; that earlier project centered on head-tracking technology and folded neatly into the current one.

Since modern mobile devices carry a range of onboard sensors that determine the device's orientation in space, a graphics processor inside such a device might, at least in Apple's vision of the world, automatically calculate and display a perspective projection of the 3D display environment based on that data, without any direct physical interaction via proximity sensor arrays.
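That half of the idea works on shipping hardware. Here is a hedged Swift sketch that uses CoreMotion's device attitude, a real iOS API, to drive a perspective transform on a layer; the mapping constants are illustrative, not values from the patent.

```swift
import CoreMotion
import UIKit

/// Sketch of orientation-driven perspective: CoreMotion's attitude
/// feeds a CATransform3D so a layer appears to tilt with the device,
/// no proximity sensing required.
final class PerspectiveController {
    private let motion = CMMotionManager()

    func start(updating layer: CALayer) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let attitude = data?.attitude else { return }

            // m34 adds the depth foreshortening of a perspective
            // projection; -1/500 is an arbitrary illustrative focal term.
            var t = CATransform3DIdentity
            t.m34 = -1.0 / 500.0

            // Rotate the layer opposite the device rotation so its
            // contents appear fixed in space behind the glass.
            t = CATransform3DRotate(t, CGFloat(-attitude.pitch), 1, 0, 0)
            t = CATransform3DRotate(t, CGFloat(-attitude.roll), 0, 1, 0)
            layer.transform = t
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```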

iOS developer Ben Hopkins, born and raised in the UK and now a South Florida resident, created an app called "HoloToy" that demonstrates some of the features laid out in Apple's patent application and gives a general idea of what a 3D GUI might look like in practice.

HoloToy uses the accelerometer, gyroscope, and head tracking to alter the perspective of a 3D view, giving the illusion of real 3D space existing inside the device. It bundles multiple interactive mini-apps, including a 3D photo editor.