Day #14: Technical Setup Details

Written by Moco.
_world.setup()

On the technical side, our setup consists of three main parts: tracking, visual output and sound.

For tracking the users, we decided to go with Microsoft’s Kinect SDK, which gives us real-time 3D skeleton data around which we build each user’s avatar. Using openFrameworks with ofxKinectNui, we built a standalone Kinect application running on a dedicated Windows machine. Its job is to process the Kinect data, apply various transformations to the resulting skeleton coordinates to align them with our projected virtual space, and send the skeleton data via OSC over a wired network to the core application.
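
As a rough sketch of what the sender side could look like with ofxOsc – the host, port, OSC address and data layout here are placeholders, not the installation’s actual code – the transformed joint positions can be packed into one message per frame:

```cpp
#include "ofMain.h"
#include "ofxOsc.h"
#include <vector>

// Illustrative sketch only: host, port, address and message layout
// are assumptions, not the project's actual code.
class SkeletonSender {
public:
    void setup() {
        // core application listening on the wired network (placeholder address)
        sender.setup("192.168.0.10", 9000);
    }

    // joints: skeleton coordinates already transformed into the
    // projected virtual space, one entry per tracked joint
    void send(int userId, const std::vector<ofVec3f>& joints) {
        ofxOscMessage m;
        m.setAddress("/skeleton");
        m.addIntArg(userId);
        for (int i = 0; i < (int)joints.size(); ++i) {
            m.addFloatArg(joints[i].x);
            m.addFloatArg(joints[i].y);
            m.addFloatArg(joints[i].z);
        }
        sender.sendMessage(m);
    }

private:
    ofxOscSender sender;
};
```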

The core application was also built with openFrameworks and runs on a MacBook Pro, where it receives the tracked users’ skeleton data and uses it to create the avatars. To give our virtual space – and the objects within it – believable behavior, we use the Bullet Physics Library as a physics engine to simulate and define the entire virtual environment. The incoming data is translated into a representation of the skeleton inside the physics simulation, which handles interaction and collision detection between the avatars and the fragments left behind by previous users.
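
A minimal sketch of that translation step, assuming each joint is represented by a kinematic Bullet body whose transform gets overwritten from the incoming OSC data (function and variable names are illustrative, not the project’s code):

```cpp
#include "ofxOsc.h"
#include "btBulletDynamicsCommon.h"
#include <vector>

// Illustrative sketch: the joint bodies are kinematic, so we set their
// transforms from the tracking data and let Bullet resolve collisions
// with the fragment bodies already present in the world.
void updateAvatarFromOsc(ofxOscReceiver& receiver,
                         std::vector<btRigidBody*>& jointBodies) {
    while (receiver.hasWaitingMessages()) {
        ofxOscMessage m;
        receiver.getNextMessage(&m);
        if (m.getAddress() != "/skeleton") continue;

        // arg 0 is the user id, followed by x/y/z triples per joint
        int numJoints = (m.getNumArgs() - 1) / 3;
        for (int i = 0; i < numJoints && i < (int)jointBodies.size(); ++i) {
            btTransform t;
            t.setIdentity();
            t.setOrigin(btVector3(m.getArgAsFloat(1 + i * 3),
                                  m.getArgAsFloat(2 + i * 3),
                                  m.getArgAsFloat(3 + i * 3)));
            jointBodies[i]->getMotionState()->setWorldTransform(t);
            jointBodies[i]->setActivationState(DISABLE_DEACTIVATION);
        }
    }
}
```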

Using the information from the physics engine, the core program takes care of the logic behind avatar freezing, transforming live avatars into fragments, and moving the newly created fragments to the edge of the interaction area.
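
As a very rough sketch of the idea – the states, thresholds and the freeze condition here are assumptions for illustration, not the installation’s actual rules – the life cycle can be thought of as a small state machine per avatar:

```cpp
// Rough sketch of the avatar life cycle; the freeze condition and the
// numbers are placeholder assumptions.
enum AvatarState { AVATAR_LIVE, AVATAR_FROZEN, AVATAR_FRAGMENT };

struct Avatar {
    AvatarState state;
    float stillTime;   // seconds without significant movement
};

void updateAvatarState(Avatar& a, float movement, float dt) {
    switch (a.state) {
        case AVATAR_LIVE:
            // e.g. freeze once the tracked user has held still long enough
            a.stillTime = (movement < 0.01f) ? a.stillTime + dt : 0.0f;
            if (a.stillTime > 2.0f) a.state = AVATAR_FROZEN;
            break;
        case AVATAR_FROZEN:
            // hand the frozen shape over to the physics world
            // as a passive fragment body
            a.state = AVATAR_FRAGMENT;
            break;
        case AVATAR_FRAGMENT:
            // fragments get moved toward the edge of the interaction
            // area, e.g. by a gentle outward push applied each frame
            break;
    }
}
```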

Based on the position and movement of these simulated physics objects, the visual layer is generated live and rendered via OpenGL. To keep the handling of the core application’s visual output flexible, we use the Syphon framework to send it to MadMapper, which in turn generates the final output for the projectors. This allows for a nicely edge-blended distribution of the image across multiple projectors. In this case we used a Matrox DualHead to get the desired resolution with the widescreen aspect ratio, but the setup can be extended to as many projectors as we can get without major changes to the system.
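
Handing the rendered frame over to Syphon only takes a few lines with ofxSyphon; a minimal sketch, assuming the scene is drawn into an FBO first (the server name and resolution are placeholders):

```cpp
#include "ofMain.h"
#include "ofxSyphon.h"

// Minimal sketch: render into an FBO, then publish its texture so
// MadMapper can pick the frame up as a Syphon source.
class ofApp : public ofBaseApp {
public:
    void setup() {
        fbo.allocate(1920, 1080, GL_RGBA);    // placeholder resolution
        syphonServer.setName("WorldOutput");  // name that shows up in MadMapper
    }

    void draw() {
        fbo.begin();
        ofClear(0, 0, 0, 255);
        // ... draw the physics-driven visual layer here ...
        fbo.end();

        fbo.draw(0, 0);                                  // local preview
        syphonServer.publishTexture(&fbo.getTexture());  // frame for Syphon clients
    }

private:
    ofxSyphonServer syphonServer;
    ofFbo fbo;
};
```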

In parallel, the information from the physics simulation as well as the occurrence of various events – like a new person entering or leaving the interactive area, or an avatar freezing into a fragment – is used to trigger sounds. To avoid a repetitive soundscape, the audio is generated live on a second MacBook Pro running Native Instruments’ Reaktor. We transmit the relevant information from the core application via OSC messages over the wired network to a custom patch inside Reaktor that adds variation to the sounds. So while the basic tone of each individual sound stays the same, its parameters can vary within defined ranges, which means no two sounds are ever exactly alike.
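
A sketch of what such an event message could look like on the sending side – the OSC address scheme and arguments are illustrative assumptions, and the actual variation happens in the Reaktor patch on the receiving end:

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

// Illustrative sketch of an event message sent to the sound machine;
// the address scheme and arguments are assumptions.
void sendSoundEvent(ofxOscSender& soundSender,
                    const std::string& eventName,   // e.g. "enter", "leave", "freeze"
                    int userId,
                    const ofVec3f& pos) {
    ofxOscMessage m;
    m.setAddress("/event/" + eventName);
    m.addIntArg(userId);
    m.addFloatArg(pos.x);   // position in the interaction area,
    m.addFloatArg(pos.y);   // usable e.g. for placing the sound in the stereo field
    soundSender.sendMessage(m);
}
```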