I received my Push 3 from Ableton (standalone edition) about two weeks ago and I have had some time to explore it and think about the workflow changes, hardware features and how physical hardware can act as a Trojan horse for software, blurring the already blurry line between computer and … gear? machine?
The first Push arrived about 10 years before this newest Push was announced. Then as now, the potential for shrinking Ableton Live into dedicated hardware was clearly visible, but less clear was the need for standalone hardware. After all, a standalone Push would be very expensive, and could not replicate the breadth of utility a laptop provides.
But maybe it is time to think about what we have lost to convergence. Just as we reflect on the dominance of platforms across our laptops, phones, TVs, and even wristwatches, we are starting to see that the charm and excitement of discovery, ownership, and identity can be lost when platforms and functions are unified. Ableton Live running on a Mac or PC is not as distraction-free as using something like an OP-1 field or a modular synth. My first impression of the Push 3 was clear as soon as I realized, 20 minutes into making a beat, that I had forgotten to open up Live on my computer but somehow my work had been saved (with a cool “band name generator” filename to boot).
“If you are serious about software, make your own hardware” goes the Alan Kay adage, and Ableton seems to be very serious about both in the Push 3.
I’d like to use the device longer term to really develop final thoughts on it, because it’s an ambitious step forward on an already ambitious product. It is deserving of careful thought and criticism.
Ableton has recently previewed Live 11, a significant update to the DAW software that has increased its influence in educational markets since I first wrote about version 9 back in 2013. Back then I wrote:
Live is here to stay, and is the ultimate standard in electronic music making. “Everything you need to make great music can come from us”.
If Ableton’s major releases oscillate between updates for the professional market and updates for the prosumer/beginner market, version 11 would be on schedule to appeal to the more professional market that Live 9 was focused on. Live 10, with Capture (a method for remembering casually produced MIDI), Wavetable (a very easy to use synth), pictorial devices like Echo (a very easy to use tape delay), and the improvements to visualization and step sequencing on Push 2, seemed focused on broadening the base. New users could approach these features as a way to jump into music making more quickly. Live 11 includes a couple of features in that vein, which I detail below, but the bulk of the updates are squarely focused on the professional market. Some features, like Comping, feel long overdue and were a standout missing feature for users coming from other software like Logic Pro. Other new features, like the global use of MPE in MIDI clips and software instruments, are more focused on those with the hardware resources to employ them (MPE also enables new expressions on Push controllers, but is best experienced on third-party controllers like the Roli Seaboard). Rather than deep-diving all of the new features, I’d like to focus on a few that I think will make a real difference for music educators using Ableton Live.
“Finally.” In version 11, Ableton Live lets the user choose a global scale for the project. This feature has actually been available on Push for the last couple of years, and scale settings from Push have been saved inside of Live projects since version 9.5. But now there is a proper interface for viewing and folding by scales on screen, and the scale menu matches the one available on Push. This is a huge benefit for anyone making tonal music, but especially for those unable to do complex transpositions or key signature troubleshooting. Simply press the “Scale” button next to “Fold” and Live will only show in-key notes.
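Conceptually, folding to a scale is just a filter on pitch classes. Here is a rough sketch of that idea in Python; the names and the scale representation are my own for illustration, not Live’s internal model or API.

```python
# Semitone offsets from the root for a major scale (illustrative).
MAJOR = {0, 2, 4, 5, 7, 9, 11}

def in_key(pitch, root=0, scale=MAJOR):
    """Return True if a MIDI pitch belongs to the scale on the given root."""
    return (pitch - root) % 12 in scale

def fold(pitches, root=0, scale=MAJOR):
    """Keep only in-key pitches, much as the Scale/Fold view hides the rest."""
    return [p for p in pitches if in_key(p, root, scale)]

# In C major, F# (66) is out of key and gets folded away:
print(fold([60, 64, 66, 67]))  # [60, 64, 67]
```

The same predicate could, in principle, drive the in-key clip highlighting I wish for below.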
Minor quibble: it would be really nice if this data could somehow also highlight in-key clips in the browser. Maybe some kind of Logic-esque metadata, or even better: key analysis of incoming clips could take advantage of this.
Fun detail: You can now set the MIDI piano roll to show Sharps, Flats or both note names for accidentals. It still defaults to “Sharps only” which I find charming.
I love getting lots of mileage out of short, simple MIDI sequences. Live now allows probability to be adjusted per-note in the MIDI editor. This is a great way to humanize drum patterns and liven up bass sequences, and I expect it to become a key feature of writing in Live. Set up an interesting and complex MIDI clip, record its output to an adjacent Audio track and let the machine jam until it produces the best pattern. Note probability is an excellent “happy accident” feature and I’m really glad to see it this accessible and upfront in the UI.
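Under the hood the idea is simple: each note stores a probability, and every pass through the loop the sequencer rolls the dice. A minimal sketch in Python (the data layout and names are my own, not Live’s):

```python
import random

def render_pass(notes, rng=random.random):
    """Return the notes that survive one pass of the loop."""
    return [n for n in notes if rng() <= n["prob"]]

# A hi-hat pattern where every other hit is a "maybe":
hihat = [
    {"step": 0, "pitch": 42, "prob": 1.0},  # always plays
    {"step": 1, "pitch": 42, "prob": 0.3},  # ghost note, ~30% of passes
    {"step": 2, "pitch": 42, "prob": 1.0},
    {"step": 3, "pitch": 42, "prob": 0.6},
]

for bar in range(4):  # every bar comes out a little different
    print("bar", bar, [n["step"] for n in render_pass(hihat)])
```

Record a few minutes of that output to an audio track and you have a stack of variations to choose from.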
On the surface, this is a feature catching up to other DAWs that have had take management built in for years. The idea is simple: record over the same spot several times, and each pass of the loop adds the recording to a new take lane. You can then quickly composite these recordings into a perfectly spliced take. It’s a method borrowed from tape-based recording, and a really good example of how computers have sped up the studio process.
Of course, Ableton could have added this feature long ago. Instead of simply adding a lookalike, they opted to give take comping in Live 11 a bit of Ableton-y quirk. Arbitrary recordings can be comped together after the fact, making takes a very quick editor for chopping up and scrambling samples. MIDI clips can be comped together, as can video clips (!). I’m excited to see the off-kilter creative possibilities of this implementation down the road.
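Whatever the source material, a comp boils down to a simple data structure: an ordered list of slices, each saying which take to pull from and where. A hypothetical sketch, using strings as stand-ins for recorded material:

```python
# Stand-ins for three recorded takes of the same region.
takes = {
    "take1": "AAAAAAAA",
    "take2": "BBBBBBBB",
    "take3": "CCCCCCCC",
}

# A comp: (take name, start, end) for each spliced region.
comp_edits = [("take2", 0, 3), ("take1", 3, 6), ("take3", 6, 8)]

def render_comp(takes, edits):
    """Splice the chosen region of each take into one continuous result."""
    return "".join(takes[name][start:end] for name, start, end in edits)

print(render_comp(takes, comp_edits))  # BBBAAACC
```

Once the takes are arbitrary clips rather than loop passes, the same structure becomes a scrambler: the "takes" can be unrelated samples, and the comp is a cut-up.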
Video in particular is interesting to me. The feature is still very much in beta (Live segfaulted when I tried to demo on the Electronic Music School stream), but if the feature works as advertised it would seem that multi-camera clips could be spliced together very easily here. While Live is not intended to be a video editing app, it can be used as a utility app for quick retiming and now for quick synced camera cutting. Will I use Live to cut an interview? Not sure about that, but I might use it to create interesting background visuals for my next DJ set, which brings us to…
Video in Session View
Another “Finally!” Before, add-on tools made this possible, but usually placed significant demands on the machine. With modern codec support and much improved video acceleration on the horizon for laptops (thank you, Apple M1), I can see Session View video being an easy visual upgrade for lots of laptop musicians. Simply add your clips, trigger the scene, and the video changes. Hook up a projector and throw that popup window onto a second screen (and, for the first time in your life, be thankful that the window pops up without standard UI controls).
Even as an “under the hood” release, there is a lot to be excited about in Live 11. Years ago I wrote about Live 9 being a similar release, laying the groundwork for huge leaps forward. We have seen Ableton’s influence in education bloom in recent years. Professional-level academics now have tools for making experimental and procedural music baked right into the same app DJs and producers are using to perform on stage. Pro-level changes that trickle down to smaller-scale uses will end up exposing new audiences to things like MPE and Max programming. When I wrote the review of version 9, VJ Manzo and I were still writing Interactive Composition. Now there are high schools using it – we never envisioned that book for high schools, but the inclusion of advanced tools made it possible.
So if Live 11 looks uninteresting at first blush because the features may only interest pro-level users, consider that for many young musicians this will be their first DAW. Things like MPE and note probability may end up just being expected features for them, and will again change the way the average person makes music.
I’m excited to share a long-burning project of mine that I’ve recently completed. Chippy is a chiptune synthesizer, written as a Max for Live device. It has lots of cool features.
16-voice polyphony
5 preset waveforms
1 custom, drawable wavetable
a low-pass filter to tame bit-rate destruction
I’ll probably add some features along the way, but for now this is what I will be using in my class for the chiptune assignment. I’ve posted the device on maxforlive.com for all to enjoy.
Readers will recognize this as an extended version of the device built in the Chiptune chapter of Interactive Composition. At one point in the chapter I deliberately decide not to go down the path of writing a polyphonic version of Chippy – polyphony in Max is not a task for beginners or most intermediate users of Ableton or Max for Live. Describing the process would almost warrant its own chapter, so I decided to skip it.
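To give a taste of why polyphony is hard: it is mostly bookkeeping. A synth has to track which voices are free, which held pitch owns which voice, and what to do when all 16 voices are busy (commonly, steal the oldest note). Since Max is a graphical environment, here is that logic sketched in Python instead; it is my own illustrative allocator, not the patch inside Chippy.

```python
class VoicePool:
    """Minimal voice allocator with oldest-note stealing."""

    def __init__(self, n_voices=16):
        self.free = list(range(n_voices))
        self.active = {}   # pitch -> voice index
        self.order = []    # pitches in note-on order, oldest first

    def note_on(self, pitch):
        if not self.free:                        # all voices busy: steal oldest
            oldest = self.order.pop(0)
            self.free.append(self.active.pop(oldest))
        voice = self.free.pop(0)
        self.active[pitch] = voice
        self.order.append(pitch)
        return voice

    def note_off(self, pitch):
        if pitch in self.active:                 # release the voice
            self.order.remove(pitch)
            self.free.append(self.active.pop(pitch))

pool = VoicePool(n_voices=2)
print(pool.note_on(60), pool.note_on(64))  # two voices in use
print(pool.note_on(67))  # a third note steals the voice that held 60
```

Every edge case here (stuck notes, retriggered pitches, stealing policy) is something a beginner would have to get right in Max as well, which is why it would have warranted a chapter of its own.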
Enjoy Chippy, and I appreciate any feedback you might have!
I had a great time visiting sunny San Antonio last week in the dead of winter (read: 70 degrees). This year I was fortunate enough not only to give two workshops, but also to speak on behalf of the organization as the recipient of this year’s teacher of the year award.
For those interested, here were my remarks upon receiving the award:
I am humbled and honored to receive the Mike Kovins TI:ME Teacher of the Year award. TI:ME is an organization I respect greatly and I am sincerely grateful for this recognition. I truly have no words.
And now for some words.
For those who don’t know me very well, here is my story. It’s a bit unconventional, but I imagine many in this room have taken a path unlike our peers in the music education world.
At one point I aspired to be a great band director. In fact, my interest in music technology began in part by writing marching band arrangements. When I actually started teaching, however, I realized this way of life wasn’t for me. I became disillusioned by the fact that no matter what I did with my band, it had probably been done before. In that world, you can have success, you can have fun, you can build a program, you can make a living…but I didn’t feel like I could do anything truly new. I’m not a follower – it’s very difficult for me to do something that has already been done better by someone else. Realizing this, I made plans to leave the teaching profession shortly after I had gotten started.
In 2006, everything changed. I received a phone call: through some strange turn of events, I would be teaching a class at the high school that had been fortuitously titled Music Technology. I say titled rather than planned because that was the extent of the planning – no meetings had taken place, no equipment had been purchased. There was only a class title and a room. All it took was that title.
What they didn’t know was that since high school I lived and breathed electronic music in my spare time. I spent years absorbing the music of Aphex Twin, the Prodigy, DJ Shadow, and early Daft Punk. It wasn’t possible to buy equipment yet, but I devoured all the music I could – teaching a class on the topic of electronic music was like a dream come true.
So on the first day of Music Tech class, what I found surprised me. I had three classes of seniors who needed my class to graduate – they were the type of people who waited three and a half years to get their fine arts credit. How do you teach music to kids like this, and make them enjoy it? So there is the secret to my success – at the very core of all my curriculum decisions is the memory of a room full of eighteen and possibly nineteen year olds staring at a recently transferred elementary teacher wondering what in the world the class could possibly offer to them. It turned out great because it simply had to be.
I won’t go on with specifics about my classes – I have a session on that Saturday. Suffice to say this core belief has enabled my program to grow to a full course load that trains over 350 students per year to be creative with music using the latest tools available.
I’ll use this opportunity to address those in the room – TI:ME – people I see as the chief innovators of music education. Keep using your work to spread the idea that music learning is a personal endeavor. Never be tempted to compete with each other, or compare to each other – everyone’s method can and should be different. The freedom to experiment with teaching methods and curriculum is what makes the arts the last bastion of individuality and culture in your school – don’t forget that, and don’t let it go. Let your instincts and your interests guide what you do, not what someone else says will get you recognition or a high rating.
I’d like to thank Barbara Freedman for her tireless work in nominating me for this award. She’s been a great mentor and friend over the years, and I’m proud to share this honor with someone as successful and influential as she.
I’d like to thank VJ Manzo, the smartest man alive, for also nominating me but also for approaching me to work on our new book, Interactive Composition. For a guy like that to take a chance on a slob like me means the world, and I hope you have as great a time working with me as I do with you.
And finally I’d like to thank my children Annelise and Ethan, who told me I wasn’t allowed to go on this trip and who I miss very much right now, and especially my wife Jennifer. There’s not a thing I’ve done that she hasn’t been there to see me through – I wouldn’t be anywhere at all without her support and her guidance.
I’ll do my best to live up to the expectations that come with receiving this award. Thank you!
Part of what I do is produce awesome videos for our school and the community. This time, I’m doing a music video for my daughter’s Girl Scout troop – a parody of Shake it Off but about cookies. Without going into tons of detail about the entire process, I want to call attention to a technique I’ve read and watched a lot about but hadn’t gotten a chance to try until now – using slow motion footage for a music video.
Spoiler: Here is the finished product:
So here’s the idea: a normal video is shot at 24 or 30 frames per second, and played back at the same speed.
Slow motion video is shot at 60 or more frames per second, and played back at 24 or 30 frames per second. If something’s happening at regular speed and shot in a high frame rate, it will look slow when you play it back.
Thus, if something happens faster than normal speed and is shot at a high frame rate, it will look normal-ish when you play it back. On a video shoot, the audio is usually played back at regular speed, and the singer just sings along with the track to keep things in sync. We then throw away the audio from the shoot and use the perfectly synced motions of the singer in our video. In this case, we’re having the singer lip sync to a sped-up version of the song, then slowing the footage down later so that rapid motion plays back at regular speed with the original track.
Why do this? Because it looks awesome! It takes away the awkwardness of the singer just staring at you singing and really sells the action. It makes the whole thing feel a bit bigger than life.
Slow motion, Fast music
To do this, I took the song to be used for lip-syncing and made a 160% speed version using Ableton Live. My camera, the Canon XA25, converts 60fps footage to 24fps. Knowing these facts, I’ll need to speed the resulting video up to 150% in Final Cut Pro to get something that lip-syncs with the “correct” original song.
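The relationship between these numbers can be sketched as a tiny calculation. This is my own illustrative formula, not anything from Ableton or Final Cut; the idealized math lands a touch above the round figure quoted above, and in practice you nudge the speed in the editor until the sync looks right, so a nearby round number works fine.

```python
def sync_speed(capture_fps, conform_fps, song_speed):
    """Playback speed needed so conformed slow-motion footage
    lip-syncs with the original-tempo song.

    capture_fps : frame rate the camera records at (e.g. 60)
    conform_fps : frame rate the footage is conformed to (e.g. 24)
    song_speed  : speed of the track played on set (1.6 = 160%)
    """
    slowdown = conform_fps / capture_fps   # conforming 60 -> 24 plays at 40%
    return 1.0 / (song_speed * slowdown)   # factor that restores real time

# Shooting at 60fps, conforming to 24fps, singing along at 160% speed:
print(sync_speed(60, 24, 1.6))  # roughly 1.56, i.e. ~156% playback
```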
Here’s a video showing the difference between the three versions of the speed:
I’ll update this post with the full video when it’s finished. Expect cookie-themed costumes.