Initial Thoughts on GarageBand 10.0.3 for Yosemite

Short version: It’s no WatchBand, but it’ll do.


First of all, I had some good conversations with folks about GarageBand.  I’ve been very vocal about my dislike for last year’s version, and some took it as a generic “OMG I hate GarbageBand,” as if I wouldn’t stoop to using it or something.

Let’s be very clear here.  When I started teaching in 2004, GarageBand had just been released, and basically no one taught Music Technology.  The software was esoteric and hard for outsiders like me, a typical educator, to understand.  I didn’t know why you’d need things like track effects or bus groups because I had no idea what any of it meant.  GarageBand represented an actual accessible starting point for those of us who wanted to make music on the computer but had no good entry point.

So fast forward 10 years and an entire career built in part on GarageBand.  Last year, after GarageBand 10 was released, I installed it and started upgrading my projects for my beginner class.  Immediately I started noticing things were missing.  My projects were going to have to change big time to keep using GarageBand.

Of course, I could have just stayed on the old version – lots of smart people I know did exactly that.  But why build a curriculum around a product I’m not excited about?  One that I’m wary of updating?  Staying put was not an option either – I can’t passionately get kids excited about making music on a product I’m not terribly sure has a future.  Even though I have the old version, students who like the course and buy a new Mac will get stuck on the new one.  It’s Final Cut Pro X all over again (disclaimer: I liked Final Cut Pro X, but I totally understand how professional video people felt when it was released with a huge chunk of its feature set missing).

The key here is communication – Apple is great at running developer and public beta versions of its important software like OS X and iOS.  So why not just call something totally unfinished like FCPX 10.0 and GarageBand 10.0 what they are?  Unfinished!  Taking the features out without communicating that they’re coming back sends the message that “you don’t need those anymore.”  Sometimes this is necessary, but with professional tools this needs to be clear as a bell.

GarageBand +0.3

So on to yesterday’s version.  One thing is clear to me: Apple is pretty confident that the main user base for this app is guitarists.  Podcasting support is pretty much out (including that excellent Ducking function – giving meaning to autocorrect users everywhere).  Same as last year, the look is decidedly Logic-Lite.  The wood panels are still there, but they’re much slimmer.  Column panels snap out from the left side, just like Logic has done since it went single-window.  As for me, I miss the animations, but for people who had trouble considering GarageBand a serious-enough tool for music making, maybe these changes are welcome.  It’s not like my software of choice looks any friendlier, and kids seem to like that just fine too.

One thing that still bothers me about this new version is the perceived lag during recording.  When you hit record and speak into the microphone, the waveforms should appear more or less in real time on the screen.  In this version (still) the waveforms redraw every couple of seconds, which implies that the computer is having trouble keeping up with this simple task.  It doesn’t give me confidence that it won’t crash under a heavy load, and it certainly doesn’t compare well to other DAW apps in this regard.  Whether there is actual audio lag or not, this is a big concern, I think – the whole argument for choosing a Mac over a PC for audio production hinges on the thesis that audio handling has less latency and more stability on Mac OS X than on Windows.  It certainly still does, but the vendor’s own bundled app lagging like this out of the box might make people think otherwise.

Here’s a comparison.  Notice how much smoother recording appears in the old version (top):

So here’s hoping they continue improving GarageBand.  Whatever the internal reason for the ground-up rewrite, I’m hoping they recognize the thousands of educators now relying on this software to be easy, foolproof and friendly.  It’s not like there is really a competing product – the alternative to a GarageBand-based Music Tech course is simply not having one at all.  We’re relying on you, Apple, to keep this movement alive.

Okay, onto the good stuff.

3 Cool New Things about GarageBand 10.0.3:

1. Track FX are back.  Now they’re called Plugins, and they look like this:

[Two screenshots of the new Plugins panel]

They’re located in the Smart Controls and have to be revealed with the “i” button.  Not a terrible system, and I’m digging the inclusion of the core set of Logic effects (the old GB Track Reverb was its own thing, for instance).  This is a big deal – last year’s version implied that this type of detail was out of the question, and it’s nice to see it back.  On the other hand, seeing this also implies that GarageBand is on a different track than iWork – feature parity with the iOS version is apparently not a priority here.

2. Automation Drawing

With the pencil tool, the way the good Lord intended.  This pairs well with last year’s inclusion of the ability to copy and paste automation with regions.

[Screenshot of automation drawing]


3. Serious Amp Designers

Just look at the new Bass Amp Designer: [screenshot]

Paired with the other Amp Designer, you’ve got serious tools for guitarists.  I’m pretty sure there was a version of Logic that charged an upgrade fee just to use these plugins.  Way cool, way pro.

In conclusion, I’m fully expecting the newly-unified Apple Inc. to keep making incremental improvements to GarageBand and to let this new version flourish into a great reason to have a Mac lab in your music department.  Maybe someday I’ll move my beginners back to GarageBand too – the projects still work, mostly.  For most folks, GarageBand 10.0.3 will be just fine software on which to build a curriculum.  For me, though – I’m waiting until it’s airtight again to switch back.


Music Research: Drum and Bass beat detection at McGill

Who says music researchers are stodgy?  Some interesting new research going on at McGill that could someday work to improve beat detection in all kinds of music software:

An essential first step in understanding how various producers uniquely use percussion, melody, and harmony in their tracks is downbeat detection (to find the first beat of every measure). We’ve developed a style-specific method of downbeat detection catered to Hardcore, Jungle, and Drum and Bass (HJDB) by combining multiple forms of metrical information: low-frequency onset detection; beat tracking; and a regression model (SVR) trained on the timbre and sequence order of breakbeats. In a recent evaluation using 206 HJDB tracks, we demonstrate superior accuracy of our style-specific method over four general downbeat detection algorithms (including two commercial algorithms).

Read the rest at Breakscience.
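The core idea in the quote – find which beat position within the bar the low-frequency energy lines up with – can be sketched in a toy form.  This is a hypothetical `pick_downbeat_phase` helper of my own, not the researchers’ actual system (which also uses beat tracking and a trained SVR model); it assumes you already have a low-frequency onset-strength envelope and a beat period in frames:

```python
import numpy as np

def pick_downbeat_phase(low_onset, beat_period, beats_per_bar=4):
    """Score each candidate downbeat phase (0..beats_per_bar-1) and return
    the one whose bar-aligned positions carry the most low-frequency energy."""
    bar_period = beat_period * beats_per_bar
    scores = []
    for phase in range(beats_per_bar):
        # frames where the downbeat would fall if the bar started at this phase
        positions = np.arange(phase * beat_period, len(low_onset), bar_period, dtype=int)
        scores.append(low_onset[positions].sum())
    return int(np.argmax(scores))

# Toy envelope: a strong low-frequency hit every 32 frames, starting at frame 16.
# With an 8-frame beat and 4 beats per bar, that puts the downbeat on phase 2.
env = np.zeros(128)
env[16::32] = 1.0
print(pick_downbeat_phase(env, beat_period=8))  # → 2
```

In real HJDB tracks the kick pattern is far messier than this toy envelope, which is exactly why the researchers add timbre and sequence-order features on top of the energy heuristic.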

Ableton Push First Prototypes

Great interview with the Ableton Push co-creator, Jesse Terry:

That’s right, I used Lego and sugru (a silicon putty). We attached Lego pieces to MIDI buttons with LEDs, connected to a Livid Brain. So, there were many burnt fingers and burnt Star Wars pieces along the way. My wife would always hear me digging away in the Lego bin and she’d wonder if I was actually working up here! The Lego prototype made it very easy to test out ergonomic setups as we could move the buttons around. We tried all kinds of different layouts and, we were able to user test the entire thing and learn to play it before we had a hardware version to play with. I’ve been playing this Push Lego layout on plywood for 2 years now.

Read the rest for a great view from people who are trying to redefine the idea of a Musical Instrument.

How the Drumsynth works in Ableton Suite 9

Great interview at Sonic Bloom with Christian Kleine, creator of the DrumSynth Max for Live device.  Lots of things I had no idea about, such as:

There are currently 13 different devices. The important (kick/snare/hi-hat/tom) as well as the exotic (Kplus and FM) are all represented. “Kplus” is called this way because here the “Karplus-Strong” synthesis is used – David Zicarelli (CEO of Cycling ’74, the creators of Max) jokingly said that this was also the best and only purpose of this synthetic form… Basically, most important for me, next to the sound, was a very dynamic control using velocity data – therefore, this section is quite extensive in every device and almost every parameter can be played dynamically.

Great stuff.  Be sure to check out the screenshot of the inside of DrumSynth, especially if you’ve never seen an intimidating Max Patch before.
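Since the quote name-drops Karplus-Strong, here’s a minimal sketch of that synthesis idea – a burst of noise cycled through a short delay line with an averaging filter, which decays into a plucked-string tone.  This is the textbook algorithm only, not Kleine’s actual Max patch:

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, decay=0.996):
    """Karplus-Strong plucked string: the delay-line length sets the pitch,
    and the averaging feedback progressively filters the noise into a tone."""
    n_samples = int(sr * duration)
    delay = int(sr / freq)                     # delay length in samples = one period
    buf = np.random.uniform(-1, 1, delay)      # initial excitation: white noise burst
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = buf[i % delay]
        nxt = buf[(i + 1) % delay]
        # feed back the damped average of adjacent samples (a gentle low-pass)
        buf[i % delay] = decay * 0.5 * (out[i] + nxt)
    return out

tone = karplus_strong(220.0, 1.0)  # one second of A3
```

Raising `decay` toward 1.0 lengthens the sustain; shortening the delay line raises the pitch – which is why the “Kplus” device can cover string-like sounds with so few controls.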

Reason 7 and Komplete 9 Announced in same week

Big week in the “too big to be a plugin, but too small to be a DAW” market segment.

Anyone out there using Reason by itself still?  Ah well – the MIDI out feature will be cool to people who are into that sort of thing.  I’ve never been a huge fan of USING either Reason or Komplete, but the sounds both packages can make are formidable.

Reason 7, Komplete 9

Via Resident Advisor, because it’s an awesome site.

▶ Push Envy – Hacking the Novation Launchpad to do Scales

After rewatching the Ableton Preview Event video, in which Dennis DeSantis gives a great demo of the Push’s scale mapping capabilities, I decided to see if I could mimic this on my Launchpad using Max for Live.

With the help of some of the Novation tutorials and a little inspiration from the repository I figured out how to place a diatonic major or minor scale on my User 1 mode. The grid goes stepwise on the X axis and in diatonic 4ths on the Y axis. The lighted pads represent the tonic note. The layout is exactly like the one shown in all the push videos, although the lights aren’t accurate when the key is changed. Like I said, it’s a hack. And a dream.

I’ll post more details on how I did this later if anyone is interested. In short, it isn’t pretty, and it isn’t a workflow I’d probably use in practice. But it did let me make this cool jam:

What kills me is how fast it is to play scales. Just by learning a simple finger pattern I can play multiple octaves of scales or modes. I can also do sequences (of the music theory variety) and chord blocks very easily. I can’t wait to try this kind of thing on a device that’s actually designed for it!
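For the curious, the grid layout described above – stepwise on X, diatonic 4ths on Y, where a diatonic 4th is three scale steps – reduces to simple arithmetic.  This is a hypothetical `pad_to_midi` helper sketching just the note math, not my Max for Live patch or the LED handling:

```python
# Major scale intervals in semitones from the tonic.
MAJOR = [0, 2, 4, 5, 7, 9, 11]

def pad_to_midi(x, y, root=60, scale=MAJOR):
    """Map a pad at column x, row y to a MIDI note:
    one scale step per column, a diatonic 4th (3 scale steps) per row."""
    degree = x + 3 * y
    octave, step = divmod(degree, len(scale))
    return root + 12 * octave + scale[step]

def is_tonic(x, y):
    """Pads that land on the tonic (for lighting them up)."""
    return (x + 3 * y) % len(MAJOR) == 0

# Bottom row of an 8-wide grid: C major, stepwise from middle C.
print([pad_to_midi(x, 0) for x in range(8)])  # → [60, 62, 64, 65, 67, 69, 71, 72]
```

Swapping in a natural-minor interval list gives the minor layout, and because the math is all relative to `root`, transposing the whole grid is a one-argument change – though, as noted above, keeping the tonic lights accurate after a key change is where my hack falls down.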

Audio to MIDI in Ableton Live 9 vs. Melodyne

Oliver Chesler has an interesting look at the Audio to MIDI feature in Live 9.  He compares it to Melodyne, actually:

Whatever you think of the results as much as I love Melodyne and use it it’s not a feature built into Live therefor one step away from instant. I also don’t think you can Audio to Drums like you can in Live. The real killer feature for Audio to Midi is my own whistling or humming to create parts and ideas.

Of course, it can’t affect the audio signal with the MIDI analysis like Melodyne can, so it seems more of a songwriting tool to me.  Either way, he made a cool video demonstrating the difference between the two:

Via Wire to the Ear.