Poly Aftertouch fudge - a suggestion for midi devs

I don’t know if this would appeal to anyone else, but I was thinking that it should be possible to have an app that shows incoming midi notes on a piano display and allows a vertical sliding touch on a played note to generate poly aftertouch messages to be sent to the same midi output channel as the incoming notes. That would give poly aftertouch capability where the keyboard hardware doesn’t have it.
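The core of the idea, turning a vertical slide on a displayed key into pressure values echoed on the incoming channel, could be sketched roughly like this (a Python sketch; the function names and coordinate conventions are illustrative, not from any particular app):

```python
def poly_aftertouch(channel: int, note: int, pressure: int) -> bytes:
    """Build a MIDI Polyphonic Key Pressure message (status 0xA0 | channel)."""
    return bytes([0xA0 | (channel & 0x0F), note & 0x7F, pressure & 0x7F])

def touch_to_pressure(y: float, key_top: float, key_height: float) -> int:
    """Map a touch's vertical position within a drawn key to 0..127.

    A touch at the bottom of the key gives 0, at the top gives 127;
    positions outside the key are clamped.
    """
    frac = 1.0 - (y - key_top) / key_height
    frac = min(1.0, max(0.0, frac))
    return round(frac * 127)
```

So a slide event at the top of middle C on channel 1 would become something like `poly_aftertouch(0, 60, touch_to_pressure(y, key_top, key_height))`, sent to the same output as the incoming notes.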

Comments

  • Poly aftertouch is a performance mode that gives the player a wider range of expression while playing.
    If you don't play such hardware, any kind of MIDI controller data can be added later on the timeline, even on multiple controller tracks.

    It would be incredibly difficult to add such controllers 'live' while notes are coming in.
    At least in theory, 10 different expressions are possible simultaneously on a poly-pressure-sensitive keyboard.

  • Strictly speaking, Channel Pressure is a single pressure-based expression per MIDI channel (usually, but not always, a binary on/off value), while Poly Aftertouch is the ability to apply pressure (with 128 possible values) individually to each note: for instance, the right-hand melody notes can apply pressure while the supporting chords played by the left hand do not (just one example; there are many ways Poly Aftertouch can play out).
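    The distinction shows up directly in the message bytes. A small illustrative comparison (channel 0, middle C; nothing here is app-specific):

```python
channel, note = 0, 60

# Channel Pressure: one value for the whole channel (status 0xD0 | n),
# two bytes, no note number at all.
channel_pressure = bytes([0xD0 | channel, 64])

# Polyphonic Key Pressure: a value per note (status 0xA0 | n),
# three bytes, with the note number in the middle.
poly_pressure = bytes([0xA0 | channel, note, 64])
```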

    On a well-specified hardware synth, you can have 10 notes of Poly Aftertouch expression, mod wheel expression, pitch bend, expression pedal and even wind controller all happening at the same time. But considering the average Homo sapiens finds it difficult to pat their head and rub their belly at the same time, there are very few keyboard players (if any) who could play all that data meaningfully at one time. :)

    More seriously, there's a great utility called mxfConvert that allows you to convert any input MIDI signal to a new output signal. The MIDI values that can be converted are Note, (poly) Aftertouch, (cc) Controller, Program Change, Channel Pressure, Pitchbend, Song Select, Start, Continue, Stop and Song Position Pointer. Whilst this is all very useful, it's still not possible to convert events post-creation if the keyboard in question doesn't transmit note-based expression values.

    I don't have enough deep SysEx/MIDI knowledge to know whether it's possible to apply note based expression post-performance, and by post-performance, I include 'live' with enough delay (5-15 ms should be enough) to make the second stream of new data possible. The fact that I've never seen anything on the desktop that matches this workflow leads me to believe that it's unlikely, but that may simply be because nobody has tried. Never say never! ;)

    MPE already creates a huge amount of real-time data, so I don't believe that data bandwidth would be an issue (within reason). And funnily enough, what many people think of as MPE is simply Poly Aftertouch with pitch bend - Animoog being a prime example of this, I believe.

  • Poly aftertouch is syntactically similar to Note on and Note off messages.

    1000nnnn 0kkkkkkk 0vvvvvvv Note Off event.
    (kkkkkkk) is the key (note) number. (vvvvvvv) is the velocity.

    1001nnnn 0kkkkkkk 0vvvvvvv Note On event.
    (kkkkkkk) is the key (note) number. (vvvvvvv) is the velocity.

    1010nnnn 0kkkkkkk 0vvvvvvv Polyphonic Key Pressure (Aftertouch).
    (kkkkkkk) is the key (note) number. (vvvvvvv) is the pressure value.

    It gets transmitted whenever the value of (vvvvvvv) changes.

    It would not pose too much of a technical problem to insert such messages into a live data stream. These are held notes: the note on has been handled already, and we're just waiting for the note off message. As was mentioned, it's a performance mode, which is why I'm thinking of a way to insert it during, rather than after, the performance in cases where there is no keyboard capable of generating it. Most of the time it would be applied to one or two notes of a chord. If I can free up a hand for activating pitch bend or a mod wheel, I can select a couple of notes on an iPad screen and slide my fingers up and down. It's a better live performance solution than what most people have access to at the moment. Think of it as dynamically assigned, per-note mod wheels.
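    The merging described here might be sketched as follows: track held notes from the pass-through stream, and only generate poly aftertouch for notes that are still sounding. This is a minimal Python sketch; the callback names and the `out` list (a stand-in for a real MIDI output port) are illustrative:

```python
held = set()  # (channel, note) pairs currently held
out = []      # stand-in for the real MIDI output port

def forward(msg: bytes):
    out.append(msg)

def on_midi_in(msg: bytes):
    """Pass incoming messages through, tracking which notes are held."""
    status, ch = msg[0] & 0xF0, msg[0] & 0x0F
    if status == 0x90 and msg[2] > 0:                         # Note On
        held.add((ch, msg[1]))
    elif status == 0x80 or (status == 0x90 and msg[2] == 0):  # Note Off
        held.discard((ch, msg[1]))
    forward(msg)  # the live stream is passed through unchanged

def on_touch_slide(channel: int, note: int, pressure: int):
    """Inject poly aftertouch, but only for notes still sounding."""
    if (channel, note) in held:
        forward(bytes([0xA0 | channel, note & 0x7F, pressure & 0x7F]))
```

    After a Note On for C4 comes in, a slide gesture on that key merges an `0xA0` message into the stream; once the Note Off arrives, further slides on that key are ignored.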

    MPE is something different. It uses a midi channel per note to keep channel aftertouch and pitch bend note specific. I don’t have any hardware keyboards that support that, either.
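    The channel-per-note idea can be sketched very roughly like this (simplified: one lower zone with master channel 0 and member channels 1-15, no voice stealing; all names are illustrative):

```python
MEMBER_CHANNELS = list(range(1, 16))  # lower zone: master ch 0, members 1..15

free = MEMBER_CHANNELS.copy()
assigned = {}  # note -> member channel

def note_on(note: int) -> int:
    """Assign each new note its own member channel, so Channel Pressure
    and Pitch Bend sent on that channel affect only this note."""
    ch = free.pop(0)
    assigned[note] = ch
    return ch

def note_off(note: int):
    """Release the note's channel back into the free pool."""
    free.append(assigned.pop(note))
```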

  • edited April 2019

    @brambos
    Will Mozaic allow me to put this functionality together? I’d need some dynamic way to indicate active notes on screen, but the template itself could be static.

  • edited April 2019

    @TheOriginalPaulB said:
    MPE is something different. It uses a midi channel per note to keep channel aftertouch and pitch bend note specific. I don’t have any hardware keyboards that support that, either.

    I understand exactly how MPE is different and was simply explaining a common misunderstanding in the iOS community, e.g. that apps like Animoog are MPE. It was nothing more than an afterthought.

  • @jonmoore thanks for supporting what I always said, i.e. that MPE is basically bullshit because MIDI already supported one per-note controller (PolyPressure / Polyphonic AT), and it would've been much easier to just add another such message instead of inventing a whole new standard with confusing and over-complicated specs, destroying the channel concept while at it. Oh, and as you say, most people already struggle with ONE dimensional control per-note, so I guess in the end, nothing really had to be done. PP was enough :)

  • Yeah, but I thought I’d explain for the benefit of the folks you referred to who do misunderstand...

  • @TheOriginalPaulB said:
    I don’t know if this would appeal to anyone else, but I was thinking that it should be possible to have an app that shows incoming midi notes on a piano display and allows a vertical sliding touch on a played note to generate poly aftertouch messages to be sent to the same midi output channel as the incoming notes. That would give poly aftertouch capability where the keyboard hardware doesn’t have it.

    No reason why this couldn't be done. Poly-pressure (Ax yy zz) is independent of the regular notes you trigger (9x yy zz). They both reference notes (the yy bytes), but independently: you could have poly-pressure messages cycling constantly in the background for every note, to be heard when you trigger the notes with the 9x messages.

    All you need is a way to generate the poly pressure and a way to see the notes triggered, but they don't need to be synchronized to work (so they could be separate apps)
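    The "cycling in the background" point can be illustrated with a small generator that sweeps pressure values for a note regardless of whether it is currently sounding; a receiving synth only applies them while the note is held. A Python sketch (the function and its parameters are illustrative):

```python
import math

def pressure_cycle(note: int, channel: int = 0, steps: int = 8):
    """Yield poly-pressure messages sweeping 0..127 and back for one note.

    The messages reference the note number independently of any Note On,
    so they can be generated by a separate app from the one triggering notes.
    """
    for i in range(steps):
        level = round(63.5 * (1 - math.cos(2 * math.pi * i / steps)))
        yield bytes([0xA0 | (channel & 0x0F), note & 0x7F, level & 0x7F])
```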

    Wondering what you're trying to send the messages to? Sadly, there doesn't seem to be much in the way of apps that respond to poly-pressure.

  • @aplourde
    iSEM, Animoog, Model15, Kauldron. Pretty sure there are others. Also, I’m thinking of sending the output back out of the iPad and over to my PC, where I have the likes of Arminator 2 and CS-80V to play with. As you said, I just need to see which notes are triggered and be able to generate the varying poly pressure values.

  • Wow, didn't realize iSEM had poly-pressure! Gonna have to look into that as it's one of my favorites....

  • @SevenSystems said:
    @jonmoore thanks for supporting what I always said, i.e. that MPE is basically bullshit because MIDI already supported one per-note controller (PolyPressure / Polyphonic AT), and it would've been much easier to just add another such message instead of inventing a whole new standard with confusing and over-complicated specs, destroying the channel concept while at it. Oh, and as you say, most people already struggle with ONE dimensional control per-note, so I guess in the end, nothing really had to be done. PP was enough :)

    So you call my Haken Continuum, and the amount of expression that's possible with it, "bullshit"?
    That makes me feel very disappointed. I really thought you were a cool developer, but a cool developer would see this as a challenge and try to get it done, instead of calling it bullshit that nobody needs. There are so many new, very interesting controllers with a huge amount of expression, so hopefully one day a "cool" developer will come along, see that there IS a need for a tool capable of recording and editing this huge amount of data, and take up the challenge to build something for the iPad.
    I use Xequence on my iPad, and with V2, which I use regularly (in tandem with AUM), I still hoped there was a chance to see MPE editing in Xequence 2 one day; but now I'm considering deleting this bullshit from my iPad and waiting until Bitwig for iPad arrives. Bitwig and Cubase both have excellent MPE editing, so I'll keep on using my MacBook Pro with AUM on the iPad. The only problem is that the space in my bed (which I also often have to share with a beautiful girl and a cat) is limited. Yes, I love to wake up in the night, tweak some sounds and record some tracks (which may sound like bullshit to your ears, 'cause there is too much expression in them).
    There are so many high-quality synths for iPad, many of them with MPE support. It's sad that there is no sequencer that isn't light years behind Bitwig & Co.
    Skål.

  • If you read carefully, you’ll see that he was not saying that expressiveness is bullshit, but that the method chosen to enable it is, because there was already a mechanism that could have been extended with very little change to the way MIDI already worked, preserving the feature of having 16 discrete channels. All MPE allows, over and above the capabilities of standard MIDI, is poly pitch bend, which is great, but was already possible and had been implemented by several developers without changing the MIDI standard. Regardless, if market forces dictate, I’m sure MPE support will find its way into Xequence at some point.

  • edited July 2020

    @Ploe sorry, I know my post was a bit controversial and I wouldn't phrase it the same way anymore nowadays (note that my post is very old). As @TheOriginalPaulB pointed out, my rant mostly was about the technical implementation of per-note expression in MPE, which I still find needlessly complicated and limited.

    I apologize for the "bullshit" wording in that post and would definitely be happy to see you holding on to Xequence if you'd like to see MPE editing support on iOS in the future! 🙂

  • @SevenSystems said:
    @Ploe sorry, I know my post was a bit controversial and I wouldn't phrase it the same way anymore nowadays (note that my post is very old). As @TheOriginalPaulB pointed out, my rant mostly was about the technical implementation of per-note expression in MPE, which I still find needlessly complicated and limited.

    I apologize for the "bullshit" wording in that post and would definitely be happy to see you holding on to Xequence if you'd like to see MPE editing support on iOS in the future! 🙂

    All good. Of course I will. ☺️
    AUM and Xequence 2 are a very nice combination. It would be even better if … 😂
