Tuesday 15 February 2011

Seeing is Believing

Autodesk Labs’ Photofly has been around for a while now and, whilst perhaps not strictly within the realm of interaction design, it is a project I still find particularly intriguing. The free technology demonstrator attempts to automatically translate photographs, taken on a garden-variety digital camera, into a basic 3d model.

At the moment the software is limited to creating a point cloud (of the sort familiar to anyone using laser scanners for facade surveys), but this data can be imported into AutoCAD and used as an outline from which to build more tangible solid models. There is, though, no reason the point cloud could not be further refined to produce mesh data that could be fed directly into a 3d CAD model, and indeed Autodesk have announced that a future release will feature exactly that functionality. Geotagging images to help the computer interpret their position and scale would seem another logical step, further reducing the need for manual input; a rough sketch of that idea follows below.
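To give a flavour of how little is involved on the geotagging side, here is a minimal Python sketch that reads the GPS latitude and longitude a camera embeds in a photograph’s EXIF data. It assumes the Pillow imaging library, and photo.jpg is a placeholder filename; Photofly’s own pipeline is not public, so this only illustrates the kind of data such a step would draw on, not Autodesk’s method.

```python
from PIL import Image               # Pillow imaging library (assumed installed)
from PIL.ExifTags import GPSTAGS    # maps numeric GPS tag ids to readable names

def _to_degrees(dms):
    """Convert an EXIF (degrees, minutes, seconds) triple to decimal degrees."""
    def as_float(r):
        try:
            return float(r)         # newer Pillow returns IFDRational values
        except TypeError:
            num, den = r            # older versions use (numerator, denominator)
            return num / den
    d, m, s = (as_float(v) for v in dms)
    return d + m / 60.0 + s / 3600.0

def photo_latlon(path):
    """Return (lat, lon) in decimal degrees from a JPEG's EXIF, or None."""
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)       # 34853 is the standard EXIF GPSInfo tag
    if not gps_raw:
        return None                 # the camera recorded no position
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None
    lat = _to_degrees(gps["GPSLatitude"])
    lon = _to_degrees(gps["GPSLongitude"])
    if gps.get("GPSLatitudeRef") == "S":   # southern hemisphere is negative
        lat = -lat
    if gps.get("GPSLongitudeRef") == "W":  # west of Greenwich is negative
        lon = -lon
    return lat, lon

print(photo_latlon("photo.jpg"))    # hypothetical geotagged photograph
```

With a position like this attached to each photograph, a reconstruction tool would have a real-world anchor and an approximate camera baseline against which to scale the model, rather than relying on the user to key in reference dimensions by hand.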

We’re used to seeing augmented reality technologies merge the virtual world with the physical, but feeding the real world back into the machine has traditionally been a laborious process. Having spent days of my life building site models from photographs, I am definitely excited by this project as a first step towards making the computer see and understand the world the way I do, and by how even this rudimentary understanding can aid architectural practice.

1 comment:

  1. If you like this, you will love this:

    http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html
