Live performance

What will a concert look like in 100 years? I have no clue, but it will probably involve instruments that haven’t been invented yet, performance methods not yet imagined, and music that sounds vastly different from today’s.

Next month I’m bringing a group of students to perform live at the eTech Ohio convention, in what I can best describe as an experiment to see what works well in this area. In my opinion, one of the big values of electronic music is its creative possibilities – and that will greatly influence how we present the music.

Let’s contemplate some of the reasons we do what we do now.


Figure one: the standard orchestra

This was the original “big” idea: a group that could provide enough versatility to play any kind of music. We can cover a wide gamut of amplitudes, frequencies, and timbres with this group. The catch? You need about 80-100 highly trained players to pull it off.

Here’s an evolution of that idea: the Broadway pit orchestra.

This is a hybrid acoustic/amplified/electric ensemble designed for the same purpose as a full orchestra, with the added pressures of space constraints and smaller player budgets. Most would agree a good pit can provide a similar range of sonic ideas, though it is highly tailored to the particular show.

Recent shows incorporate lots of electronics to this end. Look at the ridiculous technical specs for John Adams’ I Was Looking at the Ceiling and Then I Saw the Sky. Those are some serious tech requirements for “classical” music!

Which brings me to my solution for eTech.

We’re going to have a small ensemble of 5-7 students. One will operate the computer and the Akai APC40, while another two play a 49- and 25-key Axiom keyboard/percussion pad combo. Another will add layers on an Akai EWI USB wind controller, while two more add live vocals. There may be others using the Alesis Pad controller, or even a Wiimote if we’re feeling gutsy and reckless. We’ll play about 6-8 sessions’ worth of music (enough to fill 45 minutes while people come in for the keynote). The songs will be run out of Ableton Live sets and will include a fair amount of improvisation and flexibility. If you asked us to play a song twice, it wouldn’t sound exactly the same both times, but it would be recognizably similar.

And that’s the plan. Check back in early Feb. to see if it works out!

The Soundboard App

Here’s a little how-to for a project I started doing this year with my advanced classes. The full project is a bit more involved than just the patch, but the soundboard is the most interesting part.

Soundboard apps are among the more common gimmick apps on the iPhone, so most students already understand what one is supposed to do: you hit a button, it plays a sound. Simple, right?


Figure one: the thing that makes the noise

THE THING THAT MAKES THE NOISE
In PD, “tilde” objects – the ones whose names end in a ~ – are the objects that deal with sound. In our soundboard patch, only two objects handle audio directly: tabplay~ (which plays the contents of a sound buffer as an audio signal) and dac~ (the final output to the speakers). The button you see above tabplay~ is called a “bang,” and it does exactly what you’d expect: click it and the sound plays.
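In the square-bracket shorthand people often use to write out PD patches as text, the whole playback chain looks like this (mysound is just a placeholder name for the array the sound lives in – more on arrays in a moment). Note that the single audio outlet of tabplay~ feeds both inlets of dac~, so the sound comes out of both the left and right speakers:

[bang]
|
[tabplay~ mysound]
|  \
[dac~   ]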


Figure two: the array, where sounds live

WHERE THE SOUND LIVES
In PD, sound lives in buffers called “arrays”. You have to both make the array and tell PD to read the audio file into it. Getting the sound into the array so that it’s there every time you load the patch is kind of dodgy – you have to hard-code it, meaning you tell PD to look in a specific location for the sound. I’ve noticed that some versions of PD hate filenames and folders with spaces in them, so try to avoid those if possible. Also, the sound must be in an uncompressed format like wav or aif.


Figure three: the sound loading routine – the “read” message is the only part of this patch that feels like actual programming.

You must get the read message box exactly right or your sound will not load. It goes like this: read is the command; “-resize” tells the array to resize itself to the length of the sound file; then comes the file path; and finally, the name of the array the sound is going to land in. Make sense? Good.
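So a finished message might look like this (the path and array name here are made-up examples – substitute your own, and remember: no spaces):

read -resize /Users/me/sounds/laser.wav laser1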

Soundfiler takes the read message in as a command and makes it happen. Its outlet just reports how many samples were read, so you don’t need to connect it to anything.
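If you’d rather see the whole thing as text: a PD patch is really just a plain text file, so here is a minimal one-button sketch of everything described above. This isn’t our exact class patch – the file path and the array name laser1 are placeholders, and I’m assuming a loadbang so the read message fires automatically when the patch opens. Save it as soundboard.pd and open it in PD:

#N canvas 100 100 520 360 12;
#X obj 30 40 bng 15 250 50 0 empty empty empty 17 7 0 10 -262144 -1 -1;
#X obj 30 90 tabplay~ laser1;
#X obj 30 140 dac~;
#X obj 250 40 loadbang;
#X msg 250 90 read -resize /Users/me/sounds/laser.wav laser1;
#X obj 250 140 soundfiler;
#N canvas 0 0 450 300 (subpatch) 0;
#X array laser1 44100 float 2;
#X coords 0 1 44100 -1 200 100 1;
#X restore 250 190 graph;
#X text 55 40 click to play;
#X text 320 40 fires the read message on open;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 1 0 2 1;
#X connect 3 0 4 0;
#X connect 4 0 5 0;

One gotcha: remember to turn on DSP (“compute audio”) before clicking the bang, or you won’t hear anything.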

Assuming you got all of that to work, you should now have a working soundboard!

NEXT STEPS
Making the soundboard useful is another task. In lieu of hooking an interactive video game into the patch (a holy-grail goal of mine for this project), I opted for an alternative: live, digital foley.

Here’s how it works. I use a 2-3 minute video clip from a video game. The students make the sounds that are needed for that clip (jumps, doors opening, lasers, etc.), then they make a PD soundboard that uses those sounds.

Then we use JACK to route the PD audio back into GarageBand, where it can be recorded. I play the clip on my projector so the whole class can see it at once, and they “perform” their sounds from the patch into GarageBand. Then they take a low-res version of the video and re-sync it to the recorded audio so they can export it as a finished project. A possible extension would be writing some BGM for it as well, but that might make the project take a bit too long.