Thoughts on “we all hit play”

Back in June, Joel Zimmerman (a.k.a. Deadmau5) posted a very candid article on the state of live performance in electronic dance music. It included lots of juicy details on how to pull off a “flawless” EDM performance, such as this nugget about timecode:

Somewhere in that mess is a computer, running ableton live… and its spewing out premixed (to a degree) stems of my original producitons, and then a SMPTE feed to front of house (so tell the light / video systems) where im at in the performance… so that all the visuals line up nicely and all the light cues are on and stuff. Now, while thats all goin on… theres a good chunk of Midi data spitting out as well to a handful of synths and crap that are / were used in the actual produciton… which i can tweak *live* and whatnot… but doesnt give me alot of “lookit me im jimi hendrix check out this solo” stuff, because im constrained to work on a set timeline because of the SMPTE.

I’ve played backup for more traditional groups in large arenas, and without bashing any group in particular too hard, I’ll just say that this is quite commonplace. Every candid aside was scripted, even down to a “from the heart” prayer (it was a religious band). The higher dollar the event, the more failsafe and disaster-proof the show needs to be. Setting every tiny part of the show to timecode ensures stability, and allows for tweaks to be made to perfect a show during its run.

But what about getting out of this box? A great aspect of EDM is that improvisation can happen with any number of safety nets – backing loops, in-ear cues, scale locking, live quantization; all of these can be used in a live show, even a heavily scripted mainstream house show.

For example, the group EOTO performs with gear similar to other shows’, but uses it in a 100% live fashion. I would wager the big disadvantage is a general lack of failsafes, but they play much smaller, lower-dollar shows. It’s their niche, you could say. Just check out their wiring diagram:

[EOTO wiring diagram]

And of course a video of EOTO performing live:

It’s different from normal EDM – like a jam-band version of it. This type of group reminds me of a fully realized, 21st-century version of FSOL. (You’ve never heard of FSOL? Go educate yourself.) Those guys improvised their music, but were limited to in-studio productions because of 1990s-style gear.

Another good example of at least not being cynical about live performance would be The Glitch Mob. These guys may very well be dealing with extremely simplified “stems” of their music, but they certainly look like they’re doing a lot on stage. From a 2010 interview with Electronic Musician, they talk in good detail about the technological limitations of recreating a sophisticated EDM song live:

Almost every melody that you hear on the album has been sampled note for note [for the live show]. That was the only way that we could get the actual sounds of the record to translate live. There’s no way that we could essentially load up the plug-ins that we use, stack 20 plug-ins and play live; it would kill the computer.
Boreta: It’s also just the way we make music. If you wanted to actually play the synths live—maybe if the technology was there it might be better—but our sounds are processed over and over and over again, from the first phase to the mix phase. And when we mixed the album, we bounced down everything to audio because we have multiple sessions of hundreds of UAD plug-ins. Each sound would have to go through about 15 to 20 UAD plug-ins. That’s what brought us back down to audio was really technological limitations.

Notice, though, that they can play to a SMPTE track but still add plenty of live elements by resampling their own material. It probably takes a lot of work, and even more rehearsal, to make it look effortless, but it’s been part of their schtick from the beginning:

But notice they do significantly less when their hands are less visible:

I think there’s a place for both styles, but it’s limiting to concede to playing electronic music in a totally linear way. This is a constraint we broke free from years ago; not everything has to be a jam session, but denying the possibilities of variable performance sets the genre back into the 20th century.

Cool HS Media program in CA

Everyone who teaches a school media program needs to check out Western DCC’s Facebook Page.  They’ve been posting TV show intro remakes, and they’re very well done, not to mention that this is a very clever and original idea for a project.

As a side note, I’ve noticed that unlike schools in the “Music Tech Belt” from Illinois to the east coast, schools out on the west coast have been adding project-based classes out of the film and media tradition, with audio playing a supporting role.  With Hollywood so close, this approach might make more sense on that side of the continent.

How to avoid overbuying in your A/V studio

As a teacher, I’m constantly assaulted with ads and promotional materials for “pro-quality” studio gear for schools.  Companies assume that most teachers aren’t gearheads and don’t pay much attention to the equipment they’re buying, so long as it works well.  This gives them a huge opening for overselling, potentially causing you to exhaust your budget on items you don’t need, or that could have been much cheaper.

Too many schools overbuy, and I think a big contributor to our school’s successful program is my absolute resistance to the idea.  Here’s an example my students like to make fun of: at a convention performance, the kids got a chance to walk around the trade show.  They saw this “AMAZING” video production hardware that cost around $15,000.  I’ll bet a lot of your schools own one.  They came to me and asked, “Wait – how much is our stuff? Because this thing’s reel looked exactly like what we produce in our tiny little studio.”

So without further ado, here’s how to do what a Tricaster does for about 1/4 the price:

(disclaimer: our announcements are pre-taped, so some of this might not apply to your situation)

The Tricaster is generally used for mixing multiple camera sources and performing a chroma key (green or blue screen) to replace the background.  Additionally, it can do overlays of pictures and text.  You know, your basic Steve Brule stuff.

So first off, let’s start by remembering that everything the Tricaster can do, your computer can probably also do “in post.”  We started from this premise when I first took over our announcements.  Our first signal flow looked like this:

Sort of, at least.  I’m pretty sure I ran the camera through something like a Pinnacle Hollywood converter to make it a FireWire source for the iMac, but the principle is at least right.  We filmed against a green screen, then did the chroma key and overlays in iMovie (which requires two renders, since iMovie doesn’t support more than one video “track” at a time).  It was inefficient, but it was cheap (and it worked).
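If you’re curious what the chroma-key step actually does under the hood, here’s a minimal sketch in Python with NumPy.  This is not our iMovie workflow, just an illustration of the idea: any pixel where green sufficiently dominates the other channels gets replaced by the background.  The function name and threshold are my own for the example.

```python
import numpy as np

def chroma_key(fg, bg, threshold=60):
    """Composite fg over bg wherever fg is 'green enough'.

    fg, bg: uint8 arrays of shape (H, W, 3), RGB order.
    threshold: how far green must exceed red AND blue
               for a pixel to count as green screen.
    """
    fg16 = fg.astype(np.int16)  # widen to avoid uint8 overflow
    r, g, b = fg16[..., 0], fg16[..., 1], fg16[..., 2]
    # A pixel is "screen" if green dominates both other channels.
    mask = (g - r > threshold) & (g - b > threshold)
    # Where the mask is true, take the background pixel instead.
    out = np.where(mask[..., None], bg, fg)
    return out.astype(np.uint8)

# Tiny demo frame: top row is pure green screen, bottom row is "anchor".
fg = np.array([[[0, 255, 0], [0, 255, 0]],
               [[200, 50, 50], [200, 50, 50]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 30, dtype=np.uint8)  # dark gray background

result = chroma_key(fg, bg)
print(result[0, 0])  # green pixels: background shows through
print(result[1, 0])  # anchor pixels survive untouched
```

Real keyers do much more (soft edges, spill suppression), but this hard-mask version is the core of what the software is doing every time it renders.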

Year two, we basically did the same thing, but added some extensions.  I took the shotgun mic off the camera and ran it to a MacGyver-ed mic boom we built, so the audio would be much clearer and require less boosting in post.  We also found an old piece of kit that came with the camera to do the digitizing, which made importing from tape and other sources a little easier:

By this point, I was sending home a laptop with a student every night so they could do the editing as homework.  They would deliver the laptop back to me early the next morning, and we’d show the announcements to the school.  I didn’t like this setup, because sometimes the students either a) couldn’t do this or b) didn’t do a good enough job, and there was no time to fix problems.

So year three was all about cutting back on the amount of post-production needed.  That meant eliminating one of the two renders.  Since we can’t avoid the final render, I decided it was time to get a video mixer capable of chroma key.  The cheapest machine of this type on the market to this day is the Roland V-4.  It costs $1,000 and is really designed for VJs, or news anchors who like their transitions synced to a song or something.

This saves us time by recording the background along with the anchors, and still gives our editors (who work the following class period) the freedom to choose music and pictures.

The fatal flaw with both this setup and the Tricaster is the lack of true HD support.  Everything that runs in and out of the V-4 is SD quality, so even if we upgraded to an awesome camera, the footage would still get downconverted to SD.  Not that it’s a pressing issue right now, since our closed-circuit system would do this anyway.

But let’s say we upgraded cameras tomorrow.  How would we do this?  HD video mixers are extremely expensive, and are not something I’m interested in buying.  I’ve thought about this, and for now the best solution seems to be going back to year one’s signal flow and switching to something like BoinxTV for the chroma key and overlays.  Three years ago, our computer wasn’t powerful enough to run Boinx without hiccups, but by the time we get a new camera, we’ll definitely have the horsepower to handle it.

So, for a studio that started with a Streamgenie and DVD-RAM for recording daily announcements, we’ve done a lot of upgrading with very little money.  Don’t let any salesman tell you that high-quality school video is only doable for over $10,000 – always start simple, buy equipment that matches your ability level, and you won’t fall into the “overbuying” trap.