
Assign knobs to individual note volumes/velocities?

I'm wondering if this idea is possible. I'd like to have a setup where I can choose several (maybe 4-8) notes that would be held in a chord and sent to a synth. And then I'd like to be able to assign knobs on an external controller to individually change the volumes or velocities of each note while the notes are being held together. The objective is to be able to bring individual voices of a chord in and out gradually (or at any given rate).

To be more specific, I'd like to send the chord voicings to AudioKit Synth One (IAA) because it can take Scala files through its Tuneup feature, and I'll be using non-12TET.

I suppose I still have a lot to learn about iOS music and synthesis in general, so is this even possible with a polysynth? I.e. can a synth handle velocities that change differently between different incoming notes? If so, can I execute this in iOS?

Any advice is greatly appreciated.

Comments

  • edited July 2020

    What you are describing, in MIDI terms, is MPE (MIDI Polyphonic Expression), or polyphonic aftertouch.
    Velocity modulation on a sustained note is a no-go, because velocity only sets a note's initial volume. Aftertouch is what you want to modulate (see the sketch below).

    There aren't many hardware MPE controllers, much less ones with knobs that can be routed to individual voices.
    If AudioKit Synth One supports MPE, you may be able to build a custom controller (Drambo, Lemur, MIDI Designer, etc.).
    Alternatively, you could definitely build something by dispatching MIDI across several channels, then using multiple instances of an AUv3 synth in AUM, apeMatrix, or Audiobus to act as a single polyphonic instrument. A custom controller would be routed to the aftertouch, or the VCA, of each instance.
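    To make the velocity-versus-aftertouch point concrete, here is a minimal sketch of the MIDI messages involved, written in Python with the mido library purely for illustration (the thread is about iOS, so treat this as pseudocode for whatever host you use; the port name is hypothetical, and the receiving synth must be set to map poly aftertouch to its amp/VCA):

    ```python
    # Velocity is baked in at note-on; per-voice fades on held notes
    # use polyphonic aftertouch (mido calls it "polytouch").
    import mido

    out = mido.open_output('Synth One')   # hypothetical port name

    chord = [60, 64, 67, 71]              # C, E, G, B

    # Velocity can only be chosen once, when each note starts.
    for note in chord:
        out.send(mido.Message('note_on', note=note, velocity=100, channel=0))

    def set_voice_level(note, level):
        # level 0-127; one (note, pressure) pair per message, so each
        # held voice can be faded independently of the others.
        out.send(mido.Message('polytouch', note=note, value=level, channel=0))

    set_voice_level(64, 20)    # pull the E back...
    set_voice_level(71, 110)   # ...while bringing the B forward
    ```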

  • Another method, similar to @aleyas' suggestion.

    The Polythemus app can split incoming chords onto separate MIDI channels. You could set up AUM with one synth instance per MIDI note/channel and then map your controller's faders to the corresponding channel faders in AUM (see the sketch below).
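    For what it's worth, the channel-rotation idea behind Polythemus (and the Mozaic scripts mentioned in the next comment) reduces to a small dispatch loop. A sketch in Python with mido, with hypothetical port names, just to show the logic rather than any app's actual implementation:

    ```python
    # Round-robin note-to-channel dispatch: each incoming note gets its own
    # MIDI channel, so per-channel faders (e.g. in AUM) become per-voice volumes.
    import mido

    NUM_CHANNELS = 8          # one synth instance listening per channel
    next_channel = 0          # round-robin pointer
    note_to_channel = {}      # where each currently held note was sent

    inp = mido.open_input('Keyboard')          # hypothetical port name
    out = mido.open_output('AUM Destination')  # hypothetical port name

    for msg in inp:
        if msg.type == 'note_on' and msg.velocity > 0:
            ch = next_channel
            next_channel = (next_channel + 1) % NUM_CHANNELS
            note_to_channel[msg.note] = ch
            out.send(msg.copy(channel=ch))
        elif msg.type in ('note_off', 'note_on'):   # note_on vel 0 = note off
            ch = note_to_channel.pop(msg.note, 0)
            out.send(msg.copy(channel=ch))
    ```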

  • I'm not sure if doing this with an IAA is possible, but you could definitely use multiple instances of an AU synth, map knobs to the volume faders for each channel in AUM, and then use Polythemus, or the Mozaic script I wrote, Poly-It-Up, or Tim's Roll-a-Poly script (two other variations on the same idea):

    https://patchstorage.com/poly-it-up-channel-rotator/
    https://patchstorage.com/roll-a-poly/

    And if you'd like to do this with synths that don't accept Scala files, you might be interested in my other script, Microtonal Maker:
    https://patchstorage.com/microtonal-maker/

  • @Skyblazer Rad, I will definitely check out your Microtonal Maker script - I wasn't aware that such a thing existed. If it made non-12TET possible on [insert preferred AUv3 synth here], it would make this endeavor easier. The main reason I've been using Synth One is the aforementioned integration, via Tuneup, with Wilsonic and Scala files from elsewhere, and because I'd gotten the impression that AudioKit Digital D1 (which also has Tuneup) doesn't really allow as many instances as I'd need, even as an AUv3.

    I guess another reason I was thinking of trying this "fade-in/fade-out" concept with a single instance of a synth is to be able to easily modulate parameters such as filter cutoff and resonance, and to have those parameters affect the chord as a whole as it changes and as the voices interact with one another, resultant overtones and all, rather than having multiple instances of a parameter each modulated and affecting just one voice. So I'm probably thinking out loud here and realizing the best way would be to bus all the instances to a single channel and do all the effects there (via an effect slot rather than in-synth).

    Anyway, thanks for the suggestions, and thanks also to @aleyas and @Jocphone for the info on aftertouch and Polythemus.

  • edited July 2020

    I wrote a script a couple of days ago that does just this, using polyphonic aftertouch and iSEM, which lets you modulate volume, filters, LFO, etc. It's not something I feel is fit for general release, but it can be done.

  • @TheOriginalPaulB said:
    I wrote a script a couple of days ago that does just this, using polyphonic aftertouch and iSEM, which lets you modulate volume, filters, LFO, etc. It's not something I feel is fit for general release, but it can be done.

    Tease 😁

  • edited August 2020

    Sorry, but the interface is too clunky. It dynamically assigns played notes to sliders that then generate poly aftertouch for those notes, but the sliders fill in first-available order, not pitch order (roughly the logic sketched below). If there were a piano-keyboard display where sliding vertically on a key did the same thing as a slider, I could have highlighted the keys corresponding to the notes played, and touching and sliding them would have generated the poly aftertouch. I'd have been quite happy to release that one. I did ask the dev for such an interface ages ago, but it was dismissed.
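    The script itself isn't public, but the behaviour described above (each note grabs the first free slider, and each slider emits poly aftertouch for its note) comes down to a small allocation table. A rough reconstruction in Python with mido, with a hypothetical port name, offered as an assumption about the logic rather than the actual Mozaic code:

    ```python
    # First-available slider assignment: each played note claims the lowest
    # free slider, and moving that slider sends poly aftertouch for its note.
    import mido

    NUM_SLIDERS = 8
    slider_note = [None] * NUM_SLIDERS    # which note each slider controls

    out = mido.open_output('iSEM')        # hypothetical port name

    def note_on(note):
        for i, held in enumerate(slider_note):
            if held is None:              # first free slot, not pitch order
                slider_note[i] = note
                return i
        return None                       # no free slider for this note

    def note_off(note):
        for i, held in enumerate(slider_note):
            if held == note:
                slider_note[i] = None

    def slider_moved(i, value):
        # value 0-127 from the UI; only sends if the slider owns a note
        if slider_note[i] is not None:
            out.send(mido.Message('polytouch', note=slider_note[i], value=value))
    ```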
