Invoked Computing: Spatial audio and video AR invoked through miming. #AR
Developed by Alvaro Cassinelli and Alexis Zerroug at the University of Tokyo, Ishikawa Komuro Lab
Multi-modal augmented reality system: in addition to the usual camera–projector pair, it uses a parametric speaker to augment objects with sound. The system is mounted on a pan-and-tilt steering base.
The application running here is a video player projected on a board and controlled by the finger. The board can be moved anywhere and the system will follow it. The parametric speaker projects the sound directly onto the board, as if the sound were generated by the board itself.
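The board-following behaviour described above can be sketched as a simple proportional servo loop: the tracker reports the marker's offset from the image center, and the pan/tilt angles are nudged to reduce that error. This is a hypothetical illustration; the struct, gain, and angle range are assumptions, not the project's actual code.

```cpp
#include <algorithm>

// Hypothetical pan/tilt following loop: given the tracked board's
// offset from the image center (in pixels), apply a proportional
// correction to the servo angles so the camera, projector and
// parametric speaker stay aimed at the board.
struct PanTilt {
    double panDeg  = 90.0;  // current servo angles, 0..180 degrees
    double tiltDeg = 90.0;
    double gain    = 0.05;  // degrees of correction per pixel of error

    // errX/errY: marker position minus image center, in pixels
    void follow(double errX, double errY) {
        panDeg  = std::clamp(panDeg  + gain * errX, 0.0, 180.0);
        tiltDeg = std::clamp(tiltDeg - gain * errY, 0.0, 180.0);
    }
};
```

In practice the corrected angles would be sent over serial to the Arduino driving the servomotors each frame.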
Hardware: Mac mini, tiny projector, Point Grey camera, IR illumination, parametric speakers, Arduino and servomotors
Software: openFrameworks, ARToolKit, OpenCV, Arduino
Demonstration at Laval Virtual 2011.
(via YouTube)