Thursday, April 7, 2011

My 491 thesis

Information on the Project : Reflection

For the last 6 months I've been working on a project for my directed studies class at Berklee College of Music under the guidance of Dr. Richard Boulanger. This started out as a projection mapping exercise: a test to see whether I could build software that could do some complex mapping with the help of MaxMSP/Jitter.

As I said, the main focus was the ability to map videos or animations onto 3D objects, as seen in the following videos that inspired me, as well as the released MaxMSP patch "VPT 4.0 by hc gilje".

The initial development is a lot simpler than the complex graphics shown in the videos above, but since the same system handles both simple and complex visuals, applying the more elaborate effects later on is just a matter of graphic design.

This picture shows the first step of this evolving project. It covers 3 main aspects:

Video Capturing: taking input either from live video or a streaming pre-recorded clip.

3D Generation: letting the user load 3D objects and then apply a texture to them.

3D Processing and Spatial Mapping: this part of the program lets you take any incoming input and place it in three-dimensional space.
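The spatial mapping idea can be sketched outside of Max in a few lines of Python (purely illustrative; the actual system is a MaxMSP/Jitter patch, and the function and parameter names here are hypothetical): a video surface is treated as a unit quad whose corners get rotated and translated into 3D space.

```python
import math

def place_quad(position, yaw_degrees):
    """Place a unit quad (the video surface) in 3D space.

    Returns the four corner coordinates after rotating the quad
    about the Y axis and translating it to `position`.
    """
    corners = [(-0.5, -0.5, 0.0), (0.5, -0.5, 0.0),
               (0.5, 0.5, 0.0), (-0.5, 0.5, 0.0)]
    yaw = math.radians(yaw_degrees)
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    px, py, pz = position
    placed = []
    for x, y, z in corners:
        # Rotate about the Y axis, then translate to the target position.
        rx = x * cos_y + z * sin_y
        rz = -x * sin_y + z * cos_y
        placed.append((rx + px, y + py, rz + pz))
    return placed
```

In the Jitter world the same job is done by the OpenGL objects' position and rotation attributes; this just shows the math behind "locating" an input in 3D.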

In January 2011 I decided to rework my project as part of my directed studies class, as mentioned above, to include audio/visual generation rather than just processing existing media, so that the system would respond dynamically in interaction with the music.

The next step was to create some visual-generation modules, as well as live-input processing plugins that I would need further down the road for analysis and interaction.
I ended up with 4 video-generation and 2 analysis plugins.


Asteroids: Inspired by the Jitter Cookbook tutorials. This module takes real-time streaming audio and generates a shape in 3D space according to amplitude and timbral changes.
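The mapping behind Asteroids can be sketched in Python (just an illustration of the idea; the module itself is a Jitter patch, and `amplitude`/`brightness` are stand-ins for whatever audio features drive it): louder audio grows the shape, timbral change deforms its outline.

```python
import math

def asteroid_shape(amplitude, brightness, n_points=12):
    """Generate a deformed ring of 3D points from two audio features.

    `amplitude` (0..1) scales the overall radius; `brightness` (0..1,
    a stand-in for timbral change such as spectral centroid) controls
    how jagged the outline becomes.
    """
    points = []
    for i in range(n_points):
        angle = 2.0 * math.pi * i / n_points
        # Louder audio -> bigger shape; brighter timbre -> spikier edges.
        radius = amplitude * (1.0 + brightness * math.sin(5 * angle))
        points.append((radius * math.cos(angle),
                       radius * math.sin(angle),
                       0.0))
    return points
```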

Grids: This module creates a retracting grid that responds to transient information from a streaming audio signal. The positioning of the grid also tracks live video motion to determine focus points and blind spots.
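A crude version of the transient detection that drives the grid can be sketched like this (a simplified stand-in for the analysis done in the actual Max patch): compare the energy of successive audio frames and flag the frames where energy jumps sharply.

```python
def detect_transients(samples, frame_size=256, threshold=2.0):
    """Flag frames whose energy jumps sharply above the previous frame.

    Splits `samples` into non-overlapping frames, computes each frame's
    mean-square energy, and reports the indices of frames whose energy
    exceeds `threshold` times the previous frame's energy.
    """
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    energies = [sum(s * s for s in f) / frame_size for f in frames]
    transients = []
    for i in range(1, len(energies)):
        # The tiny constant keeps silence from triggering on noise.
        if energies[i] > threshold * energies[i - 1] + 1e-9:
            transients.append(i)
    return transients
```

Each reported index would then fire the grid's retraction.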

This module generates a stream of particles across the screen. These particles wait for a vector impulse, which is then transferred as energy to the particles in a specific direction. The vector impulse is created by the vector tracking module, which I'm going to analyze in further detail in this document.
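The impulse-to-particle transfer can be sketched in Python (hypothetical names, purely to illustrate the mechanism; the real module lives in Jitter): the tracker's vector is added to every particle's velocity, and each video frame advances the particles along their velocities.

```python
def apply_impulse(particles, impulse, strength=1.0):
    """Transfer a vector impulse to every particle's velocity.

    `particles` is a list of dicts with 'pos' and 'vel' 3-vectors;
    `impulse` is the (dx, dy, dz) direction reported by the tracker.
    """
    for p in particles:
        p["vel"] = tuple(v + strength * d
                         for v, d in zip(p["vel"], impulse))

def step(particles, dt=1.0 / 30.0):
    """Advance every particle by its velocity over one video frame."""
    for p in particles:
        p["pos"] = tuple(x + v * dt for x, v in zip(p["pos"], p["vel"]))
```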