Kinedioscope

by convivialstudio


A kinedioscope (animated ("kínēma") dimensional ("diastáseis") view ("scope")) is a technique I invented to create animated depth effects on static photographs.

The effect is created by reverse-engineering photogrammetry technology.

"Photogrammetry is the science of making measurements from photographs, especially for recovering the exact positions of surface points."

In photogrammetry, multiple photographs are used to compute a 3D model. To create this model, an algorithm determines each photograph's camera location, view direction and focal length. The kinedioscope uses this information to align the photograph perfectly with the perspective of the 3D model. Once the 3D model and the camera view of the photograph match, it is possible to create depth and masking effects.
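
To make the alignment concrete, here is a minimal sketch of how a virtual camera could be matched to one photogrammetry camera in openFrameworks. The variable values and the sensor size are illustrative assumptions, not the exact code from the app:

ofVec3f cameraPosition(1.2, 0.4, 3.0);   // camera location recovered by photogrammetry
ofVec3f viewDirection(0, 0, -1);         // view direction recovered by photogrammetry
float focalLengthMm = 35.0;              // focal length recovered by photogrammetry
float sensorHeightMm = 24.0;             // assumed full-frame sensor height

ofCamera cam;
cam.setPosition(cameraPosition);
cam.lookAt(cameraPosition + viewDirection, ofVec3f(0, 1, 0));
// vertical field of view derived from the focal length and sensor height
cam.setFov(ofRadToDeg(2.0f * atanf(sensorHeightMm / (2.0f * focalLengthMm))));
// drawing the mesh between cam.begin() and cam.end() now matches the photograph's perspective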

In this Instructable I am using photographs and 3D models from two different photo shoots: the first is an editorial fashion shoot and the second is an artwork about Muir Woods made at Autodesk Pier 9.

Photogrammetry


For the photogrammetry part you can find numerous resources about the process and camera settings.

I enjoyed this guide https://www.instructables.com/id/3d-Scan-Anything-Using-Just-a-Camera/

Recap


Initially I tried to do the photogrammetry 3D reconstruction with VisualSFM, which gave a decent point cloud, and then did the point-cloud-to-mesh reconstruction in MeshLab. This workflow is free; however, the final result is not very accurate and needs to be reworked in 3D modelling software. PCL might be a better option for the point-cloud-to-mesh reconstruction.

ReMake was suggested as well, but unfortunately it crashed all the time.

The third piece of software I tried was Autodesk ReCap. ReCap was initially a tool for registering point clouds from LiDAR scans, but it now has many new features: one is photogrammetry with cloud-based 3D reconstruction, and there is also a feature to create a mesh from the point cloud. The final meshes from the photogrammetry are great quality, and it is an advantage not to have to compute the reconstruction yourself, as it is all done in the cloud.

File Preparation


Once the 3D reconstruction is finished you can access the "A360" folder and download the .rcp file from there.

This file contains all the information about the camera positions.

Software


You can download the source code and example on GitHub:

https://github.com/paul-ferragut/augmentedPhoto

The software that creates the effects and "augments" the photograph is made in openFrameworks; if you would like to use it, you will need to be familiar with openFrameworks and C++ coding. The light effects we have been using come from an addon by Yasuhiro Hoshino, https://github.com/yasuhirohoshino/oF-MultiShadowE... Some of the code in the example was inspired by ofxBundle by Patricio Gonzalez Vivo.

In order to test the software with your own 3D model and pictures, you will first need to set up openFrameworks, then download the app, then modify the file structure. Your data folder should contain your 3D mesh as a .ply, your set of photographs from the photogrammetry, and the .rcp file (note that you might need to change the .rcp file encoding to UTF-8). Refer to the example for the file organisation.
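
As an illustration, a data folder might look like this (the folder and file names below are placeholders; the example in the repository is the reference for the exact layout):

data/
  myFolder/
    mesh.ply          // 3D mesh exported from ReCap
    project.rcp       // ReCap project file, re-saved as UTF-8 if needed
    IMG_0001.jpg      // photographs used for the photogrammetry
    IMG_0002.jpg
    ...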

You will need to change one line of code in ofApp.cpp to specify the name of your folder:

folderName = "myFolder";

This example is in openFrameworks, but I would love to see a three.js version; if you would like to collaborate on that, get in touch!

Extra: Laser Scanning


In order to get a more accurate 3D model, we experimented with merging the model from the photogrammetry with a laser scan. Faro laser scanners are simple to use and can be rented for the day; I was lucky to be able to borrow one from Autodesk Pier 9.

Extra: Projection Mapping

Muir Woods - Augmented photography (video)
Vertigo (1958) Muir Woods Sequence (video)

This project explores the medium of photography, so it felt natural to use a printed photograph and overlay the animated effects with projection mapping. I decided to use Muir Woods, near San Francisco, California, as the subject. Other sources of inspiration for this work were the series Twin Peaks and Hitchcock's depiction of Muir Woods in Vertigo. A walnut frame was built at Autodesk Pier 9.