Balancing Act

Technical skill vs. Creative practice.

Where does one end and the other begin?

To be musically creative, one must acquire a modicum of skill.  To become skillful, one must have a passion for contributing to one's art.

In most music classes (especially performance-based ones) the problem is framed as such: “How can I integrate the creative process into my technical-skill driven class?”  In my classes, I tend to approach this problem from an opposite view: “How can I integrate musical skills into a highly creative and personal project?”

It’s a strange balancing act.  On one hand, you can have a highly skilled group of musicians that absolutely rely on someone telling them what/how to play, and on the other a group who has the greatest concepts and designs but little technical know-how for putting together a polished final product.

The question is this: why are these two separate?  Why can’t we cut back on performing and spend some time in the lab?  Why does a choir class have to only explore the performing of pre-written music?  Why does a music tech class have to produce music with no intended audience or performance goal?

I’d like to see a new generation of lean, efficient music classes that buck the performance/non-performance divide.  Picture something like a band class that writes its own music.  Or a guitar class that performs in duets and trios.  Or a music tech class that doubles as a pool of electronic performers.  Heck, even a music history class that wrote its own program notes would be a fresh change of pace!

Recently, I asked a few students who each wanted to record a rap song they had written to instead team up as co-producers on a single track, with each taking a verse.  Make the beats, write the lyrics, record it – the whole shebang.  They have worked independently in my lab for a few weeks now and are starting to show some sweet results on both the production and the lyrics.  Technical skill + creativity at work to create something totally unique to the high school experience.  That’s what music class is for, friends.

Next time you sit and plan a lesson, think of one thing that is under your “we could never do that” list, and give it a try.  You might be surprised at what happens.

How to: ADT

ADT stands for Automatic Double Tracking, and it’s one of the many “secret sauces” of a great vocal track.  I’d like to demystify this sound, and show you how to get it.

***Oldschool method***

By the way, just because it’s the old-school method doesn’t mean you shouldn’t do it.  This method sounds great and is fairly easy to achieve.  Just record your vocals once, then make a new track and record them again.  Even the best singers can’t robotically sing the exact same way twice, so there will be differences between the two tracks.  Play them back simultaneously and you’ll hear a fat, thick vocal that may remind you of an old Beatles track.  Well done (though strictly speaking, this is not ADT, since nothing about it was ‘automatic’).

The ‘automatic’ part actually comes courtesy of John Lennon.  He HATED double tracking, since he found it difficult to sing a part perfectly twice while staying on pitch and in rhythm.  He asked his engineers to come up with a way to automate the process using the effects gear of the day.

Basically, the original vocal would be recorded onto a regular tape machine and a modified tape machine simultaneously.  The second machine had a variable-speed motor that subtly drifted in speed, so the combined playback had the natural imperfections of a true double-tracked recording.

***Shiny New Digital Method***

Nowadays, ADT plugins are available to buy for most DAW programs.  If you feel like rolling your own though, here’s a starting point:

Step one: make two copies of your vocal track

Step two: add a sample delay to the second track, and randomly automate the delay time between 0 and 200 milliseconds

Step three: apply pitch correction to the first vocal track (don’t apply it to the second)

Now try playing it back.  You’ll get something very close to real double tracking without having to call the singer back into the studio, and you’ll be able to explain to your friends what a clever mix engineer you are to boot!
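If your DAW can’t automate a sample delay, the recipe above can also be sketched offline.  This is a minimal illustration, not a production plugin: it models only step two (the randomly drifting delay on the copied track), skips the pitch-correction step, and the function name `adt` and its parameters are my own invention:

```python
import numpy as np

def adt(vocal, sr, max_delay_ms=200.0, seg_seconds=0.5, seed=0):
    """Fake a double by mixing the vocal with a copy whose delay time
    drifts randomly between 0 and max_delay_ms (step two above)."""
    rng = np.random.default_rng(seed)
    n = len(vocal)
    # Pick a random delay (in samples) every seg_seconds, then
    # interpolate between those breakpoints so the delay drifts smoothly.
    n_points = int(n / (sr * seg_seconds)) + 2
    breakpoints = rng.uniform(0, max_delay_ms / 1000.0 * sr, n_points)
    delay = np.interp(np.arange(n),
                      np.linspace(0, n - 1, n_points), breakpoints)
    # Read the copy at fractional positions (linear interpolation).
    pos = np.clip(np.arange(n) - delay, 0, n - 1)
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    frac = pos - i0
    double = (1 - frac) * vocal[i0] + frac * vocal[i1]
    # Sum the original and the drifting copy at equal levels.
    return 0.5 * (vocal + double)
```

In practice, shorter maximum delays (20–50 ms) tend to read as “doubling”; up near 200 ms the copy starts to sound like a distinct slapback echo, so tune to taste.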

Apparently not crazy

YES!  Audio prototyping builds of games exist!  Now I just need to get a copy working on a school computer without any of the ultra-violence.  Sounds like a summer project to me!

This particular one works with a special demo build of Quake 3 Arena that sends OSC messages over a network connection to PD.  I’m sure there are other implementations out there too.  This would enable students to program action sounds like footsteps and weapon fire, but it would also enable musical ‘scene changes,’ such as creepier music upon entering a dungeon.  I’M SO GLAD THIS EXISTS!!!

Audio Prototyping: Am I crazy?

Here I go again – another career choice in audio to explore!  This time audio prototyping.  I’ll post about it when I’m further along with the idea, and I’m not sure it will even get off the ground this semester.

Project: design a PD patch that gives a video game its sound & music

Method: Provide a simple game (space invaders?) that sends OSC hooks to a program like PD.  Teach the basic sampling stuff for PD to respond to these hooks.  Maybe teach some reactive music playback while we’re at it.
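To make those OSC hooks concrete, here’s a minimal sketch of what one looks like on the wire.  The address `/player/footstep` is purely hypothetical (I don’t know what addresses a given game build would actually send); the packet format follows the OSC 1.0 spec using only the Python standard library:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Build a minimal OSC message carrying float arguments, like the
    kind a game could fire at PD over UDP when an event happens."""
    type_tags = "," + "f" * len(args)  # e.g. ",f" for one float
    payload = osc_pad(address.encode()) + osc_pad(type_tags.encode())
    for a in args:
        payload += struct.pack(">f", a)  # OSC floats are big-endian
    return payload

# A hypothetical game event: one footstep at full volume.
packet = osc_message("/player/footstep", 1.0)
```

On the PD side, a vanilla [netreceive -u -b] feeding [oscparse] should decode packets like this one (assuming Pd vanilla 0.46 or later).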

I’ll let you know how/if this one turns into a real project.


I’m going to take today’s post to talk about one of my music tech epiphanies.  Back in 2002 I discovered a game called Rez for the Dreamcast.  It wasn’t a normal game.

I’ll spare you the plotline; suffice it to say it has a sci-fi theme.  The story isn’t why I liked the game – it was the amazing sound design.  Rez was built around the concept of synaesthesia, in which one sense is experienced in combination with another.  Some people are born with this (a person might “hear colors” or “see noises,” etc.)

Rez is a shooting game, so you naturally aim for baddies and shoot.  Every time you shoot or score a hit, a slice of the music is played.  It’s not arbitrary – your gun might be the hi hat, the first enemies a synth stab, bigger enemies might be a bass/kick combo.  All the while a “four-on-the-floor” pulse is rocking the controller as if it were some sort of sub-bass generator.

More importantly, Rez doesn’t try to hide the fact that it’s a game, and the audio is being generated based on user input.  This is how all games work, but Rez doesn’t strive for realism.  It basically feels as if you’re remixing music by shooting bad guys.  It’s a pretty cool experience, and I recommend anyone with a Dreamcast, PS2 or Xbox 360 (live) give it a try.