It’s been a long time since my last post, and a lot of water has flowed under the boat since then. Here are some videos from various sessions on the canal towpath.
In September I returned to Dingle in west Kerry, Ireland, for my brother’s wedding. While there I popped into Mazz O’Flaherty’s record shop to say hello; one thing led to another, and two days later I found myself doing a session in the tiny shop for her YouTube page. We had great fun; the craic was mighty, as they say in Dingle. I’ve embedded the video here, but please visit her site www.sessionsfromtheshop.com for fantastic live music across all genres.
Thanks to Mazz and to Brenda O’Shea, who worked the camera.
I am a real fan of Tim Thompson and his Kinect-enabled musical interface. Originally called “Multi Multi Touch Touch”, it has now completely morphed into the “Space Palette”. This film shows a 15-minute performance by Tim and goes on to reveal a lot of detail about how it works in the Q&A that follows.
I’ve already written about this several times, and it’s fascinating to watch the progress not only of the technology but also of Tim’s skill in using it. If you haven’t seen this before, prepare to be amazed.
Another addition since I last checked on its progress is the control of visuals. These were produced by another of Tim’s inventions, the “Loopy Cam”.
Please see the comments from Tim below; it’s even more awesome than I thought. Tim also sent a picture of the latest Space Palette design.
One of my concerns about the rise and rise of technology is that we all become physically isolated, communicating only via devices. The concept of Openarch seemed at first to amplify this fear; however, I’ve recently moved to a new town, to a house that is almost totally soundproof, and I have no need to leave it except to stock up on supplies. I know no one, I meet very few people, and I love it. My question would be: if technology is leading to personal isolation and this is good, why bother with the technology?
Anyway, blah, blah, blah: watch the film, it’s fascinating, and we’ll all be living like this soon if the power holds up. More info at thinkbig.
Good news this week: South Downs Special School in Eastbourne has been successful with their Youth Music grant application. I will be session leader on the project, which starts in February 2012 and runs for 36 weeks. The project will use technology to enable disabled children to have a positive and creative impact on their environment.
The school already use Soundbeam and have numerous portable interactive whiteboards available. I will be using Eyecon/Unlocking Music software to make areas of the school interactive, where movement produces sound and images. I also hope to incorporate the Kinect sensor into the installation and am researching applications that could be used. Please let me know if you are working on anything that could be useful.
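For anyone curious what “making areas of the school interactive” looks like in practice, here is a minimal sketch of the zone-trigger idea that Eyecon-style systems use, written with OpenCV and python-osc rather than Eyecon itself; the zone coordinates, OSC address, and port are made-up values for illustration only.

```python
# A minimal sketch of the zone-trigger idea: watch one rectangular area
# of the camera view and send an OSC message when motion appears in it.
# This illustrates the concept, not Eyecon's actual behaviour; the zone,
# OSC address, and port below are all hypothetical.
import cv2
from pythonosc.udp_client import SimpleUDPClient

ZONE = (100, 100, 200, 200)                  # x, y, width, height of the active area
client = SimpleUDPClient("127.0.0.1", 9000)  # whatever sampler/synth is listening

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x, y, w, h = ZONE
    # Difference against the previous frame, inside the zone only
    diff = cv2.absdiff(gray[y:y+h, x:x+w], prev[y:y+h, x:x+w])
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 500:         # enough changed pixels: movement
        client.send_message("/zone/1/trigger", 1)
    prev = gray
    cv2.imshow("camera", frame)
    if cv2.waitKey(30) == 27:                # Esc to quit
        break
```

In a real installation you would define several zones, each mapped to a different sound or image, so that moving through the room plays the space.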
An interesting aspect of the project is having similarly aged pupils from mainstream schools act as buddies to the children with disabilities. More news as we move into the planning phase next month.
The Brighton Mini Maker Faire was held in the Dome at the Brighton Pavilion. It was quite a small event but had some great and weird inventions on show: lots of Arduino hacks controlling music, lighting, drawing robots, and much more. The standouts for me, however, were these two inventions: a massive electronic music machine and a Kinect-controlled, hand-tracking font designer. The Lanes were also weird, but what’s new there?
I am a great fan of dance tech and have been using some of its software and techniques in my work with people with disabilities. Kinect brings a new dimension to this kind of work: the third dimension. This project, called “Versus”, is very impressive and demonstrates some of the possibilities this new technology can open up. Produced by the duo inout, the video is six minutes long but worth watching in its entirety.
Are DJs musicians? I’m on the fence here because I have DJ friends and have even been given scratching lessons by one of them. However, after seeing this video of Marc Romboy playing Smithson Martin’s Emulator, I may change my musical direction.
The Emulator creates a touch-screen user interface for Traktor and at the same time brings visuals into a DJ set. No more staring at your Mac; now you stare at a large screen that your audience can see too.
I have just come across a Korean company called Everyware; they have some totally beautiful videos that use Kinect. This is art and science in harmony. I’m including a few of their videos here; there are more on their site.

The first one is Turn (digital pottery).

How about breaking some glass and getting rid of tension in the classroom? Then the weirdly named KlankKlankKlank is just the ticket.

Finally, let’s do some tie-dye printing using Kinect.

To say this is amazing is an understatement, and the educational possibilities seem endless. Everyware is where it’s at.
A conversation with Ben Kuperberg
It’s not going to be a very technical interview, because I am not a programmer; what I do is design interactive spaces, using programs from other people. So, just a bit about yourself to start with: what gets you up in the morning, what makes you tick? What are your first thoughts when you wake up?
I like waking up with the thought that I’m going to play music, because I’m a musician, and at the moment I really like finding new ways to use the Kinect to interact; music and Kinect work well together.
I’m guessing you were already doing this sort of thing before Kinect came around. What were you doing then, and what did you use for it?
Before the Kinect? I studied animation at film school to make Pixar-like movies, and then I began working on new technologies, like multi-touch devices; I built some tables, and then I found people who were really interested in this technology. They worked in a studio which built educational games. We worked a lot on combining new technologies and serious games, and they gave me the chance to work with multiple sensors, like Arduino and an EEG headset which analyses brainwaves; basically it’s like a mind-control tool, ha-ha.
So I tried to study as many sensors as I was able to get; so yes, it’s mainly multi-touch tables and electronic sensors like Arduino.
So did you ever use cameras at all? Personally, I’ve used something called Eyecon for about seven years; it’s like a stage-management system that works on a camera view, and you can assign areas of the view to trigger activities like sound, MIDI files, or DMX.
I was very taken with your Mappinect because it does a similar thing, but yours is 3D, and that’s what I always wanted! In fact, I was talking to the developer of Eyecon earlier, and he’s working on Kinect as well now. Did you ever use cameras in a similar way?
Yes, with augmented reality; we use a lot of that type of product. I’ve worked on that quite a lot actually, using essentially Flash and Processing. And the first tables I made were based on the Reactable; we continue to build that kind of table, able to recognise specific objects.
I’ve also used cameras to do head tracking, and a lot of augmented reality.
You know when you get an idea and you want to do something, do you have a set way of taking it to a finished project? For example, if I’m going to write a song, I might start with a guitar, or I may have a loop in Reason and play along like that, or if I have words floating round my head I try to make them into a melody. How does it work with a software development project? What starts it all off? Are you playing around with a program when you realise, “Oh, I can make it do this; I remember wanting to do this years ago, and now I can do it!”? Is that what you do? How do you do it?
First of all, when I have an idea I check the internet for all the papers, videos, and blog posts I can find, to see if it already exists and how it’s made. I really hate duplication.
Ah, as we say in English, there’s no reason to reinvent the wheel.
Yes, we say the same thing in French, ha-ha. And so, if I have an idea and the software I find doesn’t match my needs… With Mappinect, I only found one piece of software that basically did what I wanted, but it was not open source and was a long way from release. So I woke up one morning and said, “I want to do this; I want a big piece of software that lets me send what I want, the way I want it.” So I tried to imagine the basic features, with the main concept of keeping the mapping outside the software, in a configuration file. I have a whiteboard near me at all times, to write down every idea I get. So when I write the software I write the basic part, the main concept, and when I try it I think, “OK, I also want to do that,” so I write it down, and when I wake up the next morning it’s there in front of me and I have all of that to do in the day. So every day new features are being added.
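To make the “mapping outside the software” idea concrete, here is a purely hypothetical example of what such a configuration file might look like, with a few lines of Python reading it; neither the keys nor the format are Mappinect’s actual configuration.

```python
# Purely illustrative: a made-up mapping kept outside the program in a
# configuration file, in the spirit Ben describes. The joint names, OSC
# addresses, and the schema itself are all hypothetical.
import json

CONFIG = """
{
  "mappings": [
    {"joint": "right_hand", "axis": "y", "osc": "/filter/cutoff", "range": [0.0, 1.0]},
    {"joint": "left_hand",  "axis": "x", "osc": "/synth/pitch",   "range": [48, 72]}
  ]
}
"""

# The program only knows how to read mappings; changing what a gesture
# controls means editing the file, not the code.
for m in json.loads(CONFIG)["mappings"]:
    print(f'{m["joint"]}.{m["axis"]} -> {m["osc"]} over {m["range"]}')
```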
I see. I think it looks amazing, especially your latest video; I love the way you can attach to anything and use almost any gesture as a controller. You can do it behind your back, any way you want, and bring in all these things.
It seems to have endless possibilities; how do you manage these? It’s like when you’re writing a song and you keep thinking of new verses to add, and it goes on forever; the song never finishes.
Do you set yourself a plan of what the first stage will be when it’s finished, with any other features added in the next version?
Yes; before releasing any videos or software, I wanted this to be really complete first time, with all the features you can see (and there are many others I couldn’t show, because I was really tired!). I did get a little scared at one point, because I really didn’t see the end of this software; I couldn’t imagine what I would or could do with it. But then I got it clear in my head and said, “OK, we’ll start with the cool features, not all of them.” So the first thing to sort was a user-friendly interface. You’ll see when you try it that it’s really complicated, and I want this software to be usable not only by developers but by everybody. I want everybody to be able to do simple, fun stuff with it, and developers to be able to see whatever they want behind it; maybe it just provides a good starting point for new programs (because it already has many features that you don’t have to rewrite) and can be used for whatever you want: controlling lights, music, pretty much whatever you want.
And yes, I have many new features in my head coming next. One of them is gesture recognition; I would like to do that. There’s a really cool video of gesture recognition in Unity, and I would like to implement something similar for triggering. So the concept will remain the same, but with many more possibilities. I would also like a two-way connection over OSC, so I can get moving 3D points. For instance, in the last video there are some fixed points that I attach to a chair; I would like to be able to tell the software that these points are moving in time, so I can map them to a dog or something. With another tracking algorithm I can say, “OK, I want this point, and this point is moving, so follow it,” and the distance filters will be moving too, not just the skeleton.
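For readers unfamiliar with OSC, here is a minimal sketch of the kind of two-way link Ben describes, sending tracked 3D points out and listening for updated anchor positions coming back. It uses python-osc rather than his Processing code, and every address and port here is made up for illustration.

```python
# Minimal two-way OSC sketch: stream tracked 3D points out, and listen
# for updated anchor positions coming back. Uses python-osc; the
# addresses (/skeleton/hand, /anchor/moved) and ports are hypothetical.
import threading
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import ThreadingOSCUDPServer

client = SimpleUDPClient("127.0.0.1", 9000)   # outgoing: tracker -> sound engine

def on_anchor_moved(address, x, y, z):
    # The remote end tells us an anchor point (say, one fixed to a chair,
    # or a moving target) has a new position, so filters can follow it.
    print(f"{address} moved to ({x}, {y}, {z})")

dispatcher = Dispatcher()
dispatcher.map("/anchor/moved", on_anchor_moved)

server = ThreadingOSCUDPServer(("127.0.0.1", 9001), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Somewhere in the tracking loop: send one skeleton joint as 3D floats.
client.send_message("/skeleton/hand", [0.12, 1.05, 2.30])
```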
On the gesture recognition thing, I saw a video today by a Japanese developer. What he’s doing is recognising activities. He’s designed an app where you say you want to record an activity; so he’ll, for instance, run on the spot and name that as running; he did one for skipping, another for walking.
I found this really interesting because I’ve worked a lot with people with profound disabilities. Someone may have a movement they can make that wouldn’t be a normal move-the-cursor-around-the-screen movement; but if you can recognise that movement and map it to a cursor, then even though they can’t do the normal movements associated with it, the movements they can do can be interpreted by the computer and move the cursor. What was also really interesting to me was the very simple interface, which allowed you to replay the activities: you could put it into a mode where the guy would start walking and the computer would recognise it and say the word “Walking”. I thought that was nice.
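As a rough illustration of the record-an-activity-then-recognise-it idea (not the Japanese developer’s actual code, whose method I don’t know), here is a template-matching sketch in Python, assuming pose frames arrive as flat joint-coordinate vectors:

```python
# Rough sketch of "record an activity, then recognise it": store a named
# template of pose frames, then label a live window by nearest template.
# Pose frames are assumed to arrive as flat joint-coordinate vectors;
# everything here is illustrative, not any particular product's API.
import numpy as np

templates = {}  # name -> (frames, joints) array recorded in "teach" mode

def record(name, frames):
    templates[name] = np.asarray(frames, dtype=float)

def recognise(window):
    """Return the template whose average frame-to-frame distance
    to the live window is smallest."""
    window = np.asarray(window, dtype=float)
    best, best_score = None, float("inf")
    for name, tpl in templates.items():
        n = min(len(tpl), len(window))
        score = np.linalg.norm(tpl[:n] - window[:n], axis=1).mean()
        if score < best_score:
            best, best_score = name, score
    return best

# "Teach" mode: the user runs on the spot, skips, walks...
record("running", np.random.rand(30, 45))   # stand-in for captured frames
record("walking", np.random.rand(30, 45))

# Live mode: label the most recent 30 frames; the label (or the raw
# score) can then be mapped onto something the user controls, such as
# a cursor axis or a sound trigger.
print(recognise(np.random.rand(30, 45)))
```

The point for accessibility work is that the template can be whatever movement the person can actually make; the computer only cares that the live window matches it.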
I’ve been thinking about the educational possibilities of your software, and the fact that you don’t need great motor control to use it (maybe to do a full performance you do, yes). Imagine somebody coming into a room and standing or sitting down in front of a Kinect, with someone else operating the controls for them; they can see all this happening projected onto a screen. Maybe you could change your scales-type controllers into a visual they would like, so for a child it could be something they squash or stretch, a toy or an animal, maybe a snake or something, I don’t know. Especially with you being able to also use it with Flash, perhaps when they move their hands a small video could play of someone squashing a cartoon frog or something.
Actually, the starting point of that software is a project that I will show in May 2012. I have a music project: I’m studying orchestral conducting, and I have a project to invite another orchestra to play with us, but I need money for that, ha-ha. I wanted the final show of this meeting of the two orchestras to include new music technology.
Right, an interactive show?
Yes, I’d always wanted software like that, and I decided to do it because I knew that if I did it at the studio where I work (I’m a partner in a studio called the Curious Project; it’s pretty new, we’re only three months old), and if I came with a live performance, a real one-man show with a Kinect, with all the sound effects made interactively, I could get financial help from the studio. So I decided to do it really quickly to show them how it could work. Then I decided to release it as open source software, because pretty much everything we do in this studio is based on open source software, so it’s my turn to give something back.
So you didn’t use the Microsoft SDK at all?
No, it’s currently running on OpenNI. But I am hosting Nicolas Burrus; he was there at the beginning of the Kinect, and he wrote the calibration system that matches the Kinect’s RGB and depth cameras, and he’s currently working on multiple Kinects and sign language recognition; so yeah, he’s the man! I’m hosting him and another friend to work hard over three days to get the Microsoft SDK into the Mappinect toolkit, so people will be able to choose which SDK they want, because both have a lot to offer.
You say it’s using the OpenNI drivers; do you mean the PrimeSense drivers?
Right, what I’ve got is a Windows 7 laptop and an XP laptop. The Windows 7 one has the Kinect SDK drivers on it, and that’s the one I tried using your software on. Would it be better to put it on my XP laptop with the OpenNI drivers?
Well, I’m only working on Windows 7, but I think as long as you have OpenNI it should work. In fact, you can install both on the same computer and just change the drivers when you load the program.
Do I need Processing?
No, no, no. If you want to view the sources, then yes, but you can use it and change the mappings without Processing; that is the main goal of the software. You don’t need to be a programmer to change the program!
Aha, that might be why nothing seemed to work when I clicked on the application. Do I need to run it in a command box?
No, I had this problem too; I think it’s because one file is missing. I really did the comments on the video fast, and I don’t really know if I included everything you need to get it working. So actually I’m going to take today and tomorrow to explain how this works, because it is a bit complicated for the moment. I want this to be really clean and understandable, so I’m hoping to post news every day on the Google Code page, and maybe on my blog, but I will inform you and keep in touch.
Great, I’m really looking forward to using it! So thank you very much for answering my plea for a 3D instrument, because you seem to be there! I’ve watched your recent video about four times now; I think I put a comment along the lines of “Oh my God, wow”.
Oh yeah, I saw that, thanks.
I know it’s not very constructive.
No, no, it’s motivating! Ha-ha.
I’m really motivated to get your program working now!
If you can’t open the software with OpenNI installed, write me an email or call me, and I can find out what’s not working, so that other people won’t get frustrated.
Well, I never get frustrated, to be honest, because I never expect these things to work for me.
That’s a good outlook, because a lot of people expect too much from open source developers.