Did you miss it?

Time for a slightly non-music-related post.

Did you notice the absence of Microsoft-related stuff at CES this year? Where is their phone? Where is their tablet? Where is the newness?

Did you notice the absence of Intel at CES? Where are the low power mobile chips? Why did MS announce an ARM version of Windows?

Why isn’t this a bigger deal? The computing world has changed and no one noticed. A perspective often lost amid the Apple vs. Google noise…

  • PCs aren’t the hotness. It’s all about smartphones and tablets now.
  • All the major phones use WebKit for the browser engine. Not Internet Explorer, and not Firefox, but WebKit (the engine Safari and Chrome run on), an open source project with nightly builds.
  • All the major phone OSes run some form of Linux or Unix with open-source roots: iOS, Android, Palm webOS, and even some of the smaller players.
  • All smartphones and mobile tablets run on some sort of ARM RISC-based chip. Intel’s mobile chip, the Atom, totally stinks, and no one likes it.
  • It is less important than ever which operating system you use, so long as it has a standards-compliant browser for the Internet. Only specific industries require heavy use of native software.

Now email this back in time to yourself in 1998 and see if you would have believed it. No Windows needed.

The implications of an iPad dock synth.

Most people performing with a synth today have two options: use a dedicated synthesizer, or use a controller paired with a laptop computer.

Most people who choose the latter pair their keyboard controller (usually a USB device that doesn’t make sounds on its own) with a full-featured program on the computer. There is a big problem with this: these apps are designed for music editing, mixing, and other decidedly “non-performance” tasks. Apps like MainStage attempt to give performers a setup that makes more sense, but in my experience MainStage has a somewhat steep learning curve for what it does, and is still only available as a component of a DAW package. In short, it still does not bridge the gap between a proper performance keyboard and a computer editing-style setup that’s been tricked into performing.

The Akai SynthStation49 is a great attempt at bridging this gap through the use of the iPad. It is basically an MPK49 with an iPad dock built in. There’s some software bundled with it, but you can use it with any app that supports MIDI over the camera connector.

The device really doesn’t bridge the gap on its own; the iPad itself does. Why would an underpowered gadget like the iPad outclass a full computer, especially when a keyboard obviates the touch interface?

Answer: less is more.

The perfect app isn’t here yet. But there are lots of apps that have the right idea. Take a look at stuff like ThumbJam, Bebot, NLog Pro, and the Korg iMS-20. Some of these apps have light recording and sequencing features, but they’re really all instrument. They are the natural extension of the advanced knobs and settings you’d normally find on a really expensive synth like a Nord Lead or a Fantom, but they are cheap apps on a moderately powerful mobile computer.

It’s really the limitations of the App Store approval process and the hardware itself that make these types of apps the default, and I’m of the opinion that this isn’t a bad thing. There are a lot of musicians who couldn’t care less about plugins, return tracks, and arming tracks and just want to make some interesting sounds to play live. These apps bring that idea to the mainstream.

With the inclusion of Core MIDI in iOS 4.2 and the ability of the Camera Connection Kit to accept MIDI messages from any USB keyboard, the iPad is very much able to perform in a live environment in place of a standalone synthesizer.

My dream looking at the year ahead: a big music software company releasing a plugin-style version of features from its full app on the iPad. Picture the Simpler and Impulse instruments from Ableton or Subtractor from Reason in app form and you’ll see what I mean.

It’s very much possible that Apple will pass over a straight clone of GarageBand in favor of something more MainStage-like for the iPad. Imagine an EXS24 sampler on an iPad. The possibilities are very exciting.

Analogue Monitoring

Every year I like to basically unhook everything at my teacher station in the lab and at the studio desk. Since our equipment is somewhat patched together, I can adapt the setup to our current needs without “uninstalling” anything. Unlike some other teachers, I would honestly rather have a few cables hanging out and be able to change things than only rewire when equipment gets upgraded.

This summer’s goals are to set up the teacher station in the lab as a massive “beat laboratory” MIDI setup (more on this in a later post) and to solve some signal flow issues in the studio.

The Issue
When we record a band, musicians often complain about a slight delay in the headphones. This started after we expanded our interface to 16 inputs, and hasn’t magically gone away yet.

The Cause
Delay in the monitor headphones is caused by a too-long signal path that involves the computer. Here’s the history lesson behind this:

In the dark ages, when all was analogue and tape, this wasn’t an issue. The mics went into the mixer, and the mixer split the incoming (post-fader) signal between the monitors and the tape machine. Modern digital boards still do this. Since there is no “conversion” stage (the signal remains an electrical signal all the way back to the headphones), the sound is in the ears instantly.

Delay is caused when the signal gets converted from analogue to digital, usually by an interface box, and then again when the computer sends audio out. Most setups with interfaces will default to hearing what’s on the computer (it is basically a sound card, after all).
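
To put rough numbers on it, here’s a quick back-of-the-envelope calculation in Python (the buffer size and converter figures below are assumptions for illustration, not measurements from our rig):

    # Ballpark round-trip monitoring latency through a computer.
    # Assumed values: 44.1 kHz sample rate, a 256-sample buffer, and
    # roughly 1.5 ms total for the A/D and D/A converter stages.
    sample_rate = 44100    # samples per second
    buffer_size = 256      # samples the interface collects before handing off
    converter_ms = 1.5     # combined A/D + D/A conversion time, in ms

    buffer_ms = buffer_size / sample_rate * 1000  # one buffer's worth of delay
    round_trip_ms = 2 * buffer_ms + converter_ms  # input buffer + output buffer + conversion

    print(f"one buffer: {buffer_ms:.1f} ms")      # ~5.8 ms
    print(f"round trip: {round_trip_ms:.1f} ms")  # ~13 ms, enough to throw off a player

Crank the buffer up to 1024 samples (often necessary with 16 simultaneous inputs) and the round trip balloons to nearly 50 ms, which is exactly when musicians start complaining.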

So how do we fix this? I want my monitoring back!

Solutions
Our digital mixer (a pair of Alesis io26 interfaces) is supposed to do hardware monitoring. If I can get this to work, it can send the electrical signal straight back to the headphones without involving the computer, so we can monitor without delay. From a control panel on the computer, I can set whether this is active or not.

The tricky part is actually doing this in practice. A program like Logic (or Pro Tools or Cool Edit ’96 or whatever) will usually monitor the inputs for you, sending the audio back out to the interface. This still results in a delay, as the un-delayed hardware signal is now being mixed with the delayed signal from the computer. You actually have to either mute the incoming tracks in the software, or use the mix knob on the interface to favor direct monitoring. Confused? There’s more!

This leaves out the fact that lots of musicians and producers like to rely on software-based effects, which can only be heard through (you guessed it) the computer! So, want to hear that Auto-Tune on the vocalist? It will have a bit of delay. Want to hear that sweet AmpliTube effect on the raw guitar? Delay. A super good computer can overcome this, especially if you’re using a Lightpipe interface and a tiny buffer size, but when you have 16 simultaneous inputs this starts to break down, and delay (or worse) creeps back in.

Solved?
The best solution we’ve had to date is having players record their parts individually, which sidesteps the monitoring issue entirely, but this isn’t ideal. Many bands want to record as they rehearse, all playing together. So that’s a basic overview of one of the pitfalls of recording audio digitally. I’ll update later with our solution to this problem!

Power Nerd mini-tutorial: S/PDIF audio

Do you know what S/PDIF stands for? Yes? Then you probably don’t need this tutorial. This is for everyone else.

Sony and Philips, a long long time ago, came up with a cool way of transmitting CD-quality digital audio called S/PDIF (Sony/Philips Digital Interface Format). You sometimes see it on the back of older DVD players or receivers, and on USB audio interfaces for the computer.

Why should I care?
Well, I guess you could go pretty far and never use S/PDIF, but because I just finished troubleshooting a system that used it, you get to read about it!

OK, name one advantage over optical audio
Optical audio is certainly cool, but the fine folks at Sony and Philips thought they could do it an easier way. S/PDIF just uses a plain-Jane regular RCA-style cable. (Strictly speaking, the same S/PDIF data stream can also travel over TOSLINK optical; the coaxial flavor is what we’re dealing with here.) The cable carries an encoded stream of pulses (think of the noises an old modem makes, but different), which gets decoded on the other end. The efficiency of S/PDIF is that it can send multiple channels, stereo or even compressed 5.1 surround (Dolby Digital or DTS), over a single cable rather than over many.
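
For the curious, the encoding scheme S/PDIF uses is called biphase-mark coding: the line level flips at the start of every bit, and flips again mid-bit for a 1, so the receiver can recover both the clock and the data from a single wire. Here’s a toy encoder in Python (purely illustrative; a real S/PDIF stream also adds preambles, subframes, and channel-status bits that this skips):

    def biphase_mark_encode(bits, level=0):
        """Toy biphase-mark encoder: two half-bit line levels per input bit.
        The line always transitions at the start of a bit cell; a 1 adds an
        extra transition mid-cell, so the clock rides along with the data."""
        halves = []
        for bit in bits:
            level ^= 1            # mandatory transition at the cell boundary
            halves.append(level)
            if bit:
                level ^= 1        # extra mid-cell transition encodes a 1
            halves.append(level)
        return halves

    print(biphase_mark_encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 0, 1, 0, 1]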

Speaking of the one cable…can’t you just use a regular old RCA? The pros say no, and in most cases they’re right. S/PDIF expects 75-ohm coaxial cable, and an ordinary analog RCA audio cable isn’t built to that spec, so the digital signal can degrade, especially over longer runs. Video cables (composite or component) are 75-ohm coax, which is why they can do the trick in a pinch.

Now when you see a S/PDIF jack, you will know what to do with it!

Tutorial: Creating a Breakbeat-style song in Ableton Live

The Breakbeat style
Breakbeat is a form of electronic music that gained popularity in the mid-1990s. Its main characteristics include liberal use of a sampled beat, usually derived from the drum break of a ’70s funk song. This beat is cut into many permutations and combined with a wide array of other samples, guitar clips, and original synth parts. This lesson plan includes some listening examples at the end.

Here’s a video version of the lesson that follows:

Step one: Crate Digging
The first thing any good breakbeat artist would do is go digging for samples. I have my students pick 2-3 songs from which they will take the solo drum breaks. I provide a large selection of funk tracks from my personal library. One place to start when finding these might be a CD from the popular funk collection “Ultimate Breaks and Beats,” which was compiled mainly for the purpose of finding good drum breaks to sample for early hip-hop music. A nice side effect of this part of the lesson is that students get what might be their only direct contact with funk music of that era.

Step two: Isolating and retooling the beats
Using Ableton Live, drag one of the tracks into a clip slot. I’m going to use an Amy Winehouse track with a great drum intro for my break. Live tries to figure out the correct beats per minute, but isn’t always right. We’ll have to use Ableton Live’s warp markers to demarcate four bars of this beat. You can take the time to mark the beginning as measure one, but it really doesn’t matter which measure number the beat starts on. I’ve turned on “Loop” and set the loop length to four bars.

After you have the loop running well, take the global tempo of the song (in the upper left corner) and crank it up to the 170s or 180s; this is the standard tempo range for the style. Also, in Clip View, take the transposition (Transp) up to +3 or +4 and boost the volume. Adding an effect like a compressor or limiter might be good too.
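
If you’re curious what those two moves actually do to the audio, the math is simple. A quick sketch (the original tempo here is a made-up example value):

    # What "crank the tempo" and "Transp +3" work out to numerically.
    original_bpm = 100.0   # assume Live detected the break at 100 BPM
    target_bpm = 175.0     # our new global tempo
    semitones = 3          # the Transp setting in Clip View

    stretch = target_bpm / original_bpm   # how much faster the loop plays
    pitch_ratio = 2 ** (semitones / 12)   # each semitone is a factor of 2**(1/12)

    print(f"time-stretch: {stretch:.2f}x")      # 1.75x
    print(f"pitch ratio:  {pitch_ratio:.3f}x")  # ~1.189x higher in frequency

Because the clip is warped, Live applies these independently in its default warp modes: changing the tempo doesn’t drag the pitch along, and transposing doesn’t change the speed.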

By now, your funk drum break should sound much more techno-like.

Step three: Divide the beat into distinct levels of complexity, with different effects on an A/B crossfade

Ableton Live’s Session View uses a column for each instrument track, and each column is divided into rows called “scenes”. I like to think of each scene as a different section of the song, and I usually arrange from top to bottom in order of complexity.

Duplicate the drum track you already made, and assign the first track to A and the second track to B. I have my students use the MIDI button in the upper right to map the mod wheel on their keyboards to the crossfader (located underneath the rightmost “Master” track). This will allow easy switching between these two tracks for varied drum beats.
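
Under the hood, the mod wheel is just MIDI continuous controller #1 sending values from 0 to 127, and Live scales that range onto the crossfader’s travel. A sketch of the scaling (illustrative only; Live handles this mapping for you):

    def cc_to_crossfade(cc_value):
        """Map a 7-bit MIDI CC value (0-127) onto crossfader travel,
        where -1.0 is hard left (track A) and +1.0 is hard right (track B)."""
        return (cc_value / 127) * 2 - 1

    print(cc_to_crossfade(0))    # -1.0   -> all drum track A
    print(cc_to_crossfade(64))   # ~0.008 -> roughly centered
    print(cc_to_crossfade(127))  #  1.0   -> all drum track B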

I like one of the drum tracks to sound fat and boomy, and the other to sound dry, chopped, and gated. Some fat and boomy effects include delay, compression, and reverb. Some dry-sounding effects include a high-pass filter, gate, Beat Repeat, bitcrusher, and flanger.

Now it is the students’ job to come up with some interesting variations for each track. I require at least three different reworkings of the clip per track, whether that’s a re-chopped version of the clip or one with effects on it.

Step four: Add guitar licks
Now that the drums are pumpin’, let’s find a good guitar lick. I recommend finding some really obscure isolated guitar part, and juicing it with effects into a third track. The main idea is to get the guitar to sound more like a synth part by the time you’re done with it. I like using grungy sounding licks from bands like Nirvana, The White Stripes, or any other heavy guitar lick you can find.

I would suggest two tracks of guitar: one for a repeating rhythm, and another for interesting “one-shots”.

Step five: Add synth bass and supporting kick bass
Tracks five and six will be for synthetic instruments, mainly a bass or pad for underlying harmonic structure, and an electronic kit to provide extra punch and drive to the drum track.

I let students be more individual with this part, as the basic song already works for performance, and these are really the sounds that will define their taste for this song.

One neat trick for getting a great-sounding techno bass is to have one track with a dry kick drum part, and another track with an extremely low bass note run through a limiter to push the volume level up. This will get that club-style thumping sound so familiar in all kinds of electronic music.
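
If you want a picture of what that low note is before the limiter squashes it, here’s a sketch that renders one (the 50 Hz frequency and half-second length are arbitrary picks, and NumPy is assumed):

    import numpy as np

    def sub_bass_note(freq=50.0, dur=0.5, sr=44100):
        """Render a plain sine 'sub' note with a quick decay. On its own it's
        barely audible on small speakers; a limiter pushing its level up is
        what produces the club-style thump layered under the dry kick."""
        t = np.arange(int(dur * sr)) / sr
        return np.sin(2 * np.pi * freq * t) * np.exp(-4 * t)

    note = sub_bass_note()         # 50 Hz, half a second at 44.1 kHz
    print(f"{len(note)} samples")  # 22050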

Step six: Season to taste and perform

Ableton Live can be used as a sequencer, but for this project I find it more exciting and effective to have the students perform their songs. Hit the global record button at the top, and start live-triggering your clips. Use the crossfader we assigned to the Mod wheel to flip between A and B drum parts, and go through the different sections of your song.

During this phase of the project, I tell students to be doubly sure that none of the level meters are peaking into the red. If the mix gets too quiet after turning tracks down, they can add a compressor to the master track, or simply spend a little time balancing the levels of all the tracks to a comfortable listening volume.

I set a time requirement of about 2-3 minutes for this project, and most students will have gotten through all of their material and ideas by that time.

It’s also not a bad idea to listen to some Breakbeat artists while doing this project. I’d suggest Roni Size/Reprazent, The Prodigy (older stuff), Photek, and the Breakbeat Massive series.

Voila!
You have successfully recreated the Breakbeat style, and are ready to bust out the glowsticks and rave until the sun comes up. Please add any thoughts/comments below.