So, this has been a long time coming. I bought a Kinect over break and wanted to do a little project to mess around with it. AirSample is that little project.
Originally, I was planning to build a synthesizer that generated square waves whose frequency depended on the position of each hand. I actually wrote some new classes for a C# audio library to make it work. It was okay, but the audio ended up crackling badly and the response time was terrible. C# is just not well suited to dynamic audio synthesis like that.
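The basic idea was simple: map hand height to a pitch and synthesize the waveform on the fly. Here's a rough sketch of that mapping in Python (the original was C#; the function names, frequency range, and buffer size are all illustrative assumptions, not the project's actual code):

```python
# Sketch of the square-wave synth idea: map a normalized hand height in
# [0, 1] to a frequency, then fill an audio buffer one sample at a time.
SAMPLE_RATE = 44100

def hand_to_frequency(y, f_min=110.0, f_max=880.0):
    """Map normalized hand height y (0 = bottom, 1 = top) to a frequency in Hz."""
    return f_min + (f_max - f_min) * y

def square_wave_buffer(freq, n_samples, phase=0.0):
    """Generate n_samples of a square wave at freq Hz.

    Returns the sample list and the running phase, so the next buffer can
    continue where this one left off without a click at the seam.
    """
    samples = []
    step = freq / SAMPLE_RATE  # phase increment per sample
    for _ in range(n_samples):
        samples.append(1.0 if phase % 1.0 < 0.5 else -1.0)
        phase += step
    return samples, phase

buf, next_phase = square_wave_buffer(hand_to_frequency(0.5), 512)
```

Filling buffers like this has to keep up with the sound card in real time, which is exactly where the garbage-collected runtime fell down for me: any pause means an underrun, and an underrun means crackle.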
After that, I decided to just use samples. I found some samples online from an ARP Solina and ran with that. The system works with an arbitrary number of samples (right now they're hard-coded, but I plan on changing it so it knows what samples to load from an XML file soon). You can move each hand upward to make it louder, or downward to make it softer. Each hand is independent. There is a dead zone at the bottom of the tracking area (about 1/4 of the total area) which results in silence. The notes are displayed on the screen, as are the positions of each hand. You pick a note by moving a hand into the dead zone, sliding it to the correct x-coordinate, then moving it upward out of the dead zone.
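To make the mapping concrete, here's a minimal sketch of the volume and note-selection logic described above. The 1/4 dead-zone fraction comes from the post; everything else (names, linear volume curve, even bucketing of notes across the x-axis) is an assumption for illustration, not the actual C# implementation:

```python
# Bottom quarter of the tracking area is silent.
DEAD_ZONE = 0.25

def hand_volume(y):
    """Map normalized hand height y (0 = bottom, 1 = top) to volume in [0, 1].

    Inside the dead zone the hand is silent; above it, volume rises
    linearly to full at the top of the tracking area.
    """
    if y <= DEAD_ZONE:
        return 0.0
    return (y - DEAD_ZONE) / (1.0 - DEAD_ZONE)

def pick_note(x, notes):
    """Choose a note by the hand's normalized x position (0 = left, 1 = right).

    The x-axis is split into equal-width buckets, one per note.
    """
    index = min(int(x * len(notes)), len(notes) - 1)
    return notes[index]
```

With this scheme, a hand parked in the dead zone can slide freely along the x-axis to re-select its note without making any sound, which is why note changes happen down there.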
I’m thinking about porting this to Processing using the OpenNI and NITE Kinect drivers so I can make it cross-platform. For now, I just wanted to get started with Microsoft’s SDK.
I’ll be posting the code soon, when it’s in a more usable state.