This project explored using sensor data from an array of Kinect2 depth cameras to build a platform that in-house developers could create content for using CSS, JavaScript, and HTML. The Kinect SDK let us detect positional data and gestures, but it is slow to acquire people who are not facing the sensor. By using multiple sensors communicating over a network to a host, we were able to recognize targets faster and track more people than the Kinect2's native limit of six.
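The fusion step on the host can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: it assumes each sensor has already streamed its tracked bodies to the host (e.g. over WebSockets) with positions transformed into a shared world coordinate frame, and that two observations within a small radius of each other are the same person.

```javascript
// Merge body lists from several sensors, treating bodies whose world
// positions fall within `radius` metres of each other as one person.
// `sensorFrames`, `facing`, and the coordinate fields are illustrative
// names, not part of the Kinect SDK.
function mergeBodies(sensorFrames, radius = 0.3) {
  const merged = [];
  for (const frame of sensorFrames) {
    for (const body of frame.bodies) {
      const match = merged.find(
        (m) => Math.hypot(m.x - body.x, m.z - body.z) < radius
      );
      if (match) {
        // Prefer the observation from a sensor the person is facing,
        // since the SDK tracks front-facing bodies more reliably.
        if (body.facing && !match.facing) Object.assign(match, body);
      } else {
        merged.push({ ...body });
      }
    }
  }
  return merged;
}

// Two sensors each see the same person plus one unique person each:
// four observations collapse into three tracked people.
const frames = [
  { sensor: "A", bodies: [{ id: 1, x: 0.0, z: 2.0, facing: true },
                          { id: 2, x: 1.5, z: 2.2, facing: false }] },
  { sensor: "B", bodies: [{ id: 7, x: 0.1, z: 2.1, facing: false },
                          { id: 8, x: -2.0, z: 3.0, facing: true }] },
];
console.log(mergeBodies(frames).length); // 3
```

Because each sensor caps out at six tracked bodies, merging frames from N sensors this way raises the ceiling toward 6×N while also covering people a single sensor would acquire slowly.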
The prototype shown below leveraged a canvas animation originally made for the MullenLowe website.