At the moment the software is limited to creating a point cloud (of the sort familiar to anyone using laser scanners for facade surveys), but this data can be imported into AutoCAD and used as an outline for more tangible solid models. There is, though, no reason the point cloud could not be further refined into mesh data that could be fed directly into a 3D CAD model, and indeed Autodesk have announced that a future release will feature exactly that functionality. Geotagging images to help the computer interpret their position and scale would seem another logical step, further reducing the need for manual input.
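To give a sense of how geotags could reduce manual input: most camera phones already embed GPS coordinates in a photo's EXIF data, which software could read to place each camera in the real world. Below is a minimal sketch, assuming Python with the Pillow imaging library; the filename site_photo.jpg and the read_gps/to_degrees helpers are purely illustrative, not part of the software described above.

```python
# Sketch: read GPS geotags from a photo's EXIF data (assumes Pillow is installed).
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_gps(path):
    """Return the image's GPS EXIF fields as a named dictionary, or None."""
    exif = Image.open(path)._getexif() or {}
    # Locate the nested GPSInfo block among the EXIF tags.
    gps_raw = next((v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), None)
    if gps_raw is None:
        return None
    return {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

def to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    d, m, s = (float(x) for x in dms)
    deg = d + m / 60.0 + s / 3600.0
    return -deg if ref in ("S", "W") else deg

gps = read_gps("site_photo.jpg")  # hypothetical filename
if gps:
    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    print(f"Camera position: {lat:.6f}, {lon:.6f}")
```

Coordinates like these fix position but not orientation or scale on their own; with several geotagged photos of the same facade, though, the baselines between camera positions would let the reconstruction be scaled to real-world units automatically.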
We’re used to seeing augmented reality technologies merge the virtual world with the physical, but feeding the real world back into the machine has traditionally been a laborious process. Having spent days of my life building site models from photographs, I am definitely excited about this project as a first step towards making the computer see and understand the world the way I do, and about how even this rudimentary understanding can aid architectural practice.
Like this? You will love this:
http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html