
Controlling multiple MPE synths in AUM

I can't wrap my mind around it... say I have 3 channels in AUM:
CH1 Model 15
CH2 Animoog
CH3 Volt
All MPE synths, all in MPE mode.
How can I control them with a single hardware MPE controller (e.g. Lightpad Blocks) without hearing them all sounding at the same time?

It's clear to me how to do it with non-MPE synths: I can work with filters and assign a single MIDI channel to each synth, then select the corresponding MIDI channel on the controller keyboard.
But as I understand it, MPE consumes all 16 channels, at least in the mode my controller is configured for right now... So is there a way to select a "focus" synth/channel I want to control?

I tried some configurations with MIDI filters and the Digitakt, "converting" channels 1-16 to a single channel, which somehow works, but I'm losing all the MPE functionality...

Any ideas? Thanks a lot.
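The per-channel filtering described for non-MPE synths boils down to inspecting the low nibble of each MIDI status byte. Here is a minimal illustrative sketch in Python (a toy model, not AUM's or any app's actual API):

```python
def channel_filter(message, allowed_channel):
    """Pass a MIDI channel-voice message only if it is on allowed_channel (1-16).

    `message` is a tuple of raw MIDI bytes; for channel-voice messages the
    low nibble of the status byte carries the channel (0-15 on the wire,
    shown to users as 1-16).
    """
    status = message[0]
    if status < 0x80 or status >= 0xF0:      # not a channel-voice message
        return message                        # let system messages through
    if (status & 0x0F) == allowed_channel - 1:
        return message
    return None                               # drop messages on other channels

# Note On, channel 2 (status 0x91), middle C, velocity 100:
note_on_ch2 = (0x91, 60, 100)
print(channel_filter(note_on_ch2, 2))   # the synth listening on channel 2 hears it
print(channel_filter(note_on_ch2, 1))   # the synth listening on channel 1 does not
```

This is exactly what breaks down with MPE: an MPE synth needs the whole 16-channel stream, so a filter like this strips the per-note channels away.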

Comments

  • I believe you can mute channels by dragging left on the bottom icon of each column

  • Thanks,

    but muting channels is not the solution I'm looking for. In a live situation, if
    synth A is being played by a sequencer and I want to play synth B live through my MPE controller, as it is right now the played notes would be sent to synth A AND B.

  • wimwim
    edited April 2019

    Two taps will disable or enable midi control for a synth. Not ideal, but pretty easy really.

    I wonder if midi connection changes can be midi learned in the upcoming release?

  • True, but if I disable MIDI, in the scenario described above, the sequencer would no longer be able to control synth A either :)
    My goal is to choose the synth I want to control on the fly, in a live situation, without touching the screen (which is possible with non-MPE channels, simply by changing the MIDI channel on the keyboard).

  • wimwim
    edited April 2019

    @AlaErt said:
    True, but if I disable MIDI, in the scenario described above, the sequencer would no longer be able to control synth A either :)

    Not if you only disable the MPE keyboard, not the sequencer. Each has its own checkbox.

    My goal is to choose the synth I want to control on the fly, in a live situation, without touching the screen (which is possible with non-MPE channels, simply by changing the MIDI channel on the keyboard).

    Yes, you would have to touch the screen. It’s possible that this will be midi learnable in the new release though. Maybe one of the beta testers can confirm.

  • Thanks for your help. I think I'm still going to use the Digitakt to change the channels - that's working great. For now without MPE. :)

  • In iOS, midi connections are done via ‘ports’, each app often having an input port and an output port. Each port can have 16 channels, which you can often select to limit the scope of a midi stream to one app, instrument or sound. This is what you are doing with your hardware in non-MPE mode. It has a single midi output port and you have several apps connected to that one port responding to different midi channels.
    In MPE mode, you need one port PER app, instrument or sound as MPE uses all 16 channels of any midi port. Not such a problem within iOS, as each app usually has a dedicated input port which can be selected in the sending app, but in the case of your hardware, it only has one output port, so it can only produce one unique MPE data stream. It’s your hardware that is the problem.
    In the early days of midi, there were a few keyboards that could only use midi channel 1. This is similar, MPE is new, and there aren’t many options for hardware with multiple output ports. Not sure how you’d connect multiple output ports to an iOS device, for that matter.
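The port-versus-channel distinction above can be made concrete with a toy model: a plain multitimbral setup shares one port's 16 channels, while an MPE zone claims the master channel plus every member channel of that port. Illustrative Python only, not any real Core MIDI API:

```python
class MidiPort:
    """Toy model: one MIDI port carries channels 1-16."""
    def __init__(self):
        self.free = set(range(1, 17))

    def claim(self, channels):
        if not set(channels) <= self.free:
            raise ValueError("channels already in use on this port")
        self.free -= set(channels)

# Non-MPE: three synths can share one port on channels 1, 2 and 3.
port = MidiPort()
for ch in (1, 2, 3):
    port.claim({ch})

# MPE: one zone takes master channel 1 plus member channels 2-16,
# leaving nothing on the port for a second MPE instrument.
mpe_port = MidiPort()
mpe_port.claim(range(1, 17))
print(len(mpe_port.free))   # 0 channels left: a single hardware port is spent
```

That is why a hardware controller with one output port can only feed one MPE instrument, while iOS apps get around it by each exposing their own virtual port.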

  • Thanks mate for this long and good explanation. Somehow I now realize that my controller doesn't do much more for me in MPE mode than it does in standard MIDI mode. I simply map some "gestures" to certain synth parameters (for example glide to cutoff) and that's enough for me. Switching instruments on the fly is much more important in a live set than a lot of MPE expression performance. So I'll stick to the standard MIDI mode with one channel per instrument :)

  • tjatja
    edited April 2019

    I do not fully understand MPE.

    On a hardware controller you can map all three dimensions to some effect, right?
    On an iPad controller, that could only be two dimensions.
    So, we are automatically at a loss with using iPads as controllers, it seems.

    Which are good iOS MPE controllers?

    So far I've read about Gestrument, ThumbJam and GeoShred.
    Any more, beside those that are part of a Synth like Animoog?

  • I found this list from ROLI:

    NOISE
    Seaboard 5D
    GeoShred
    GarageBand iOS
    AniMoog
    Audio Damage Quanta
    Geo Synthesizer
    iFretless Bass
    Minimoog Model D
    Moog Model 15
    Open Labs Stagelight
    PPG Infinite, Phonem, WaveGenerator and WaveMapper
    SampleWiz
    SpaceCraft Granular Synth
    SpringSound
    SynthMaster Player
    Tardigrain
    ThumbJam
    VOLT

    I own most of them, apart from three of the PPG apps, SampleWiz and SpringSound.

    Which would you recommend as general MPE controller for (other) MPE Synths?

  • MPE is broken by design. It, as others have pointed out, destroys the channel mechanism, while at the same time not bringing anything fundamentally new to the table which couldn't have been done with a simple additional PolyPressure message or two. I must wonder how this could have been officially adopted by MIDI.

  • @SevenSystems said:
    MPE is broken by design. It, as others have pointed out, destroys the channel mechanism, while at the same time not bringing anything fundamentally new to the table which couldn't have been done with a simple additional PolyPressure message or two. I must wonder how this could have been officially adopted by MIDI.

    And well, since mpe uses one midi-channel per note it is by design limited to 16 voices of polyphony or something.
    The mpe supporters defend the design by saying 'you've only got 10 fingers so it doesn't matter' but what about sounds with long release-times etc. etc. etc.

    So yeah, in practice mpe is mostly suitable for 'solo sounds' and I may be cynical but i have a feeling that it's the envy of string wankers that sparked the whole mpe thing...

  • @Samu said:

    @SevenSystems said:
    MPE is broken by design. It, as others have pointed out, destroys the channel mechanism, while at the same time not bringing anything fundamentally new to the table which couldn't have been done with a simple additional PolyPressure message or two. I must wonder how this could have been officially adopted by MIDI.

    And well, since mpe uses one midi-channel per note it is by design limited to 16 voices of polyphony or something.
    The mpe supporters defend the design by saying 'you've only got 10 fingers so it doesn't matter' but what about sounds with long release-times etc. etc. etc.

    So yeah, in practice mpe is mostly suitable for 'solo sounds' and I may be cynical but i have a feeling that it's the envy of string wankers that sparked the whole mpe thing...

    Haha, yes, good point about the 16 channel limit too.

    Yeah really no idea how it all came to be... probably one guy (was it that Linnstrument thing?) wanted to be able to record their multiple controllers per note in an existing DAW without any changes, so came up with that channel hack.

    That's OK for a personal environment and a little playing around, but IMO not something that should be adopted as a standard. Especially since on the whole, MIDI is a really well-designed and efficient protocol. MPE is a bit of an oddball in there.

  • I believe the creator of Mujician, Cantor and Geo Synth was the one who came up with the multi-channel hack in order to pitch bend each note independently via midi. It wasn’t an attempt to convert channel pressure to poly pressure at all. That’s a side effect. MPE midi is the only way to do pitch bend per note, as there is no support for it in the standard midi spec.
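The per-note pitch bend trick described here can be sketched in a few lines: because Pitch Bend is a channel-wide message in MIDI 1.0, giving each held note its own channel makes an ordinary bend message affect just that note. Illustrative Python with hand-rolled message builders, not any particular library:

```python
def note_on(channel, note, velocity):
    """Raw MIDI Note On bytes; channel is 1-16."""
    return (0x90 | (channel - 1), note, velocity)

def pitch_bend(channel, value):
    """Raw MIDI Pitch Bend bytes; value is 0-16383, 8192 = centre."""
    return (0xE0 | (channel - 1), value & 0x7F, value >> 7)

# Two notes, each on its own member channel (channel 1 stays global):
messages = [
    note_on(2, 60, 100),    # note A on channel 2
    note_on(3, 64, 100),    # note B on channel 3
    pitch_bend(2, 12288),   # bends ONLY note A; note B is untouched
]
```

With both notes on one shared channel, that same Pitch Bend message would bend them both, which is the limitation the multi-channel hack works around.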

  • @TheOriginalPaulB said:
    I believe the creator of Mujician, Cantor and Geo Synth was the one who came up with the multi-channel hack in order to pitch bend each note independently via midi. It wasn’t an attempt to convert channel pressure to poly pressure at all. That’s a side effect. MPE midi is the only way to do pitch bend per note, as there is no support for it in the standard midi spec.

    Yes, but my point is that it would have been better to extend the MIDI spec with an actual per-note pitch bend message instead of throwing the rest of the spec out of the window. ;) Heck, even a custom SysEx message would've been better than this MPE contraption!

  • @SevenSystems said:

    @TheOriginalPaulB said:
    I believe the creator of Mujician, Cantor and Geo Synth was the one who came up with the multi-channel hack in order to pitch bend each note independently via midi. It wasn’t an attempt to convert channel pressure to poly pressure at all. That’s a side effect. MPE midi is the only way to do pitch bend per note, as there is no support for it in the standard midi spec.

    Yes, but my point is that it would have been better to extend the MIDI spec with an actual per-note pitch bend message instead of throwing the rest of the spec out of the window. ;) Heck, even a custom SysEx message would've been better than this MPE contraption!

    True, but I suppose pragmatism goes a long way. Changing/extending an official spec via the governing committee easily takes over half a decade, and then another half-decade before some main players start adopting it. MPE as it is could be introduced immediately, with backwards compatibility and no new technology requirements. I think it’s quite clever actually (if not a little technically contrived).

  • @brambos said:

    @SevenSystems said:

    @TheOriginalPaulB said:
    I believe the creator of Mujician, Cantor and Geo Synth was the one who came up with the multi-channel hack in order to pitch bend each note independently via midi. It wasn’t an attempt to convert channel pressure to poly pressure at all. That’s a side effect. MPE midi is the only way to do pitch bend per note, as there is no support for it in the standard midi spec.

    Yes, but my point is that it would have been better to extend the MIDI spec with an actual per-note pitch bend message instead of throwing the rest of the spec out of the window. ;) Heck, even a custom SysEx message would've been better than this MPE contraption!

    True, but I suppose pragmatism goes a long way. Changing/extending an official spec via the governing committee easily takes over half a decade, and then another half-decade before some main players start adopting it. MPE as it is could be introduced immediately, with backwards compatibility and no new technology requirements. I think it’s quite clever actually (if not a little technically contrived).

    Yes, as I said... it’s OK as a personal proof-of-concept for a pet project. But that the aforementioned standards body, when approached, then simply accepts this hack as a “standard” instead of saying “OK... nooooow we’ll do it RIGHT” is unexpected :)

    And causes customer support nightmares as can be seen here (“why can’t I control multiple MPE synths through a single (virtual) cable?”) and downright impossibilities. If you came out with a new Ruismaker synth and told your customers “sorry guys, Ruismaker Ultimate requires all 16 channels”, you’d be in for some questions :)

  • @Samu said:

    @SevenSystems said:
    MPE is broken by design. It, as others have pointed out, destroys the channel mechanism, while at the same time not bringing anything fundamentally new to the table which couldn't have been done with a simple additional PolyPressure message or two. I must wonder how this could have been officially adopted by MIDI.

    And well, since mpe uses one midi-channel per note it is by design limited to 16 voices of polyphony or something.
    The mpe supporters defend the design by saying 'you've only got 10 fingers so it doesn't matter' but what about sounds with long release-times etc. etc. etc.

    So yeah, in practice mpe is mostly suitable for 'solo sounds' and I may be cynical but i have a feeling that it's the envy of string wankers that sparked the whole mpe thing...

    You are wrong about polyphony. The maximum number of MIDI note channels is one thing; a synth's maximum voice count is another.
    While MPE will limit you to 16 playable notes, it won't stop the receiving synth from handling all the voices its own polyphony can manage, since by definition there is no per-note modulation during the release phase (after the note-off message).

  • @mschenkel.it said:

    @Samu said:

    @SevenSystems said:
    MPE is broken by design. It, as others have pointed out, destroys the channel mechanism, while at the same time not bringing anything fundamentally new to the table which couldn't have been done with a simple additional PolyPressure message or two. I must wonder how this could have been officially adopted by MIDI.

    And well, since mpe uses one midi-channel per note it is by design limited to 16 voices of polyphony or something.
    The mpe supporters defend the design by saying 'you've only got 10 fingers so it doesn't matter' but what about sounds with long release-times etc. etc. etc.

    So yeah, in practice mpe is mostly suitable for 'solo sounds' and I may be cynical but i have a feeling that it's the envy of string wankers that sparked the whole mpe thing...

    You are wrong about polyphony. The maximum number of MIDI note channels is one thing; a synth's maximum voice count is another.
    While MPE will limit you to 16 playable notes, it won't stop the receiving synth from handling all the voices its own polyphony can manage, since by definition there is no per-note modulation during the release phase (after the note-off message).

    And if I read the standards specification correctly, the model is to start applying modulations on MPE channels to multiple simultaneous notes should you run out of channels. Not sure how that works out in practice, but it sounds like a sensible solution.

    Anyway, as a drummer I'm a numpty when it comes to playing keys, so I can only go by what other people told me :D

  • The more I think about it MPE seems to be designed for controlling plug-ins running on a computer or another polyphonic mono-timbral source. Then again I don't own any MPE controllers so I couldn't care less...

  • edited April 2019

    I’ve been using Geoshred and iFretless bass to play MPE quite a bit recently on iPad and Mac.

    As you’ll probably know Ableton Live does not really support MPE. You have to set up a whole bunch of tracks - one for each channel - and then route them to another track that hosts the instrument. (There is a utility called MPEutil that uses a python script to generate these for you)

    Anyhow - by having the tracks split up per channel like this I’ve noticed that the normal operating mode of GeoShred and iFretless bass is NOT one channel per note and rotating round them.

    Instead it is one channel per string. Channel 1 is used for general control messages. Then any notes you play on the highest string come out on Channel 2, the next on Channel 3, etc. So you do not actually need all 16 channels. Mostly you need 7 or 8.

    This “one channel per string” MPE means that you can use guitar articulations like hammer on and pull off - and the synth can respond properly (Effectively each channel is a mono synth). IFretless calls this MPE4.

    iFretless also can generate “one channel per note” MPE which it calls MPE3.

    I totally agree with you guys that Per Note MPE should really be handled by a polyphonic MIDI message within the current system. But the Per String method really needs to use separate MIDI channels to function as far as I can see.

    PS. I have no idea what Roli keyboards would generate - I am assuming it’s per note MPE.
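That "one channel per string" layout can be sketched as a fixed mapping: channel 1 for global messages, then one channel per string, highest string first, so a six-string part needs only channels 2-7 (matching the 7-or-8 figure above). Illustrative Python; the exact mapping is an assumption:

```python
GLOBAL_CHANNEL = 1          # general control messages

def string_channel(string_index):
    """Map string 0 (highest) through 5 (lowest) to channels 2-7."""
    if not 0 <= string_index <= 5:
        raise ValueError("six-string layout only")
    return 2 + string_index

# Hammer-ons and bends on one string stay on one channel, so the
# receiving synth can treat each channel as a mono voice:
print(string_channel(0))    # highest string -> channel 2
print(string_channel(5))    # lowest string  -> channel 7
```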

  • @ricksteruk said:
    This “one channel per string” MPE means that you can use guitar articulations like hammer on and pull off - and the synth can respond properly (Effectively each channel is a mono synth). IFretless calls this MPE4.

    That's a really clever implementation of the standard. I like that! :)

  • @TheOriginalPaulB said:
    I believe the creator of Mujician, Cantor and Geo Synth was the one who came up with the multi-channel hack in order to pitch bend each note independently via midi. It wasn’t an attempt to convert channel pressure to poly pressure at all. That’s a side effect. MPE midi is the only way to do pitch bend per note, as there is no support for it in the standard midi spec.

    @TheOriginalPaulB

    You're taking me back in time talking about Cantor and Mujician. I had both apps back in the day, what, 6+ years ago. They were great for the time. Rob Fielding was the dev; he was doing cutting-edge stuff with MPE-style control and note expression. Shame he faded out of the music app game.

  • The 'one MIDI channel per string' approach is how the guitar modes in the Oberheim Matrix 1000, Casio VZ-8m, and Roland GR-50 did it in the 1980s. Hammer-ons, individual string bends, etc. Of course you needed a guitar with a special pickup, but I'm glad these 'alternative MIDI controllers' like MPE are finally getting some attention.

  • Actually MPE gives per-note control on 15 channels; the remaining channel is reserved as the global channel.
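In message terms, that layout is one master channel (commonly channel 1 for the lower zone) plus 15 member channels handed out per note. A toy round-robin allocator as an illustration only (real allocators also skip member channels that are still in use):

```python
MASTER = 1
MEMBERS = list(range(2, 17))    # 15 per-note member channels

class NoteAllocator:
    def __init__(self):
        self._next = 0
        self.active = {}         # note number -> member channel

    def note_on(self, note):
        channel = MEMBERS[self._next % len(MEMBERS)]
        self._next += 1
        self.active[note] = channel
        return channel

    def note_off(self, note):
        return self.active.pop(note)

alloc = NoteAllocator()
print(alloc.note_on(60))    # first note lands on channel 2
print(alloc.note_on(64))    # second note lands on channel 3
print(len(MEMBERS))         # at most 15 notes can be held at once
```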

  • edited April 2019

    @brambos said:

    @ricksteruk said:
    This “one channel per string” MPE means that you can use guitar articulations like hammer on and pull off - and the synth can respond properly (Effectively each channel is a mono synth). IFretless calls this MPE4.

    That's a really clever implementation of the standard. I like that! :)

    Though a few alarm bells should ring as soon as you say "I like this implementation of the standard" 😂

    Oh, and happy polyphonic Easter! ;)

  • edited April 2019

    @ricksteruk said:
    I totally agree with you guys that Per Note MPE should really be handled by a polyphonic MIDI message within the current system. But the Per String method really needs to use separate MIDI channels to function as far as I can see.

    But even this could be done better with a special MIDI message that just takes an arbitrary "group index" instead of a note number. Or just an additional "prefix" message that contains the group number for the next message. All this could've been done with standard existing stuff / SysEx. It's just that channels are pretty much the foundation of MIDI and killing this means that everything's royally messed up. Especially with hardware where cost is actually a factor (16-port MIDI interface vs. 16 MIDI Thrus :))
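Purely as a thought experiment matching the suggestion above: a per-note bend could be wrapped in SysEx so that it names the note (or an arbitrary group index) instead of consuming a channel. Nothing like this exists in the MIDI 1.0 spec; the framing below uses the non-commercial manufacturer ID 0x7D and a made-up sub-ID, and is entirely hypothetical:

```python
def per_note_bend_sysex(note, value):
    """Hypothetical SysEx: bend one note. value is 0-16383, 8192 = centre."""
    return (
        0xF0,            # SysEx start
        0x7D,            # non-commercial/educational manufacturer ID
        0x01,            # made-up sub-ID: "per-note pitch bend"
        note & 0x7F,     # the note this bend applies to
        value & 0x7F,    # bend LSB
        value >> 7,      # bend MSB
        0xF7,            # SysEx end
    )

msg = per_note_bend_sysex(60, 8192)
# Well-formed framing, names the note directly, and no channel is consumed.
```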

  • edited April 2019

    @SevenSystems said:

    @brambos said:

    @ricksteruk said:
    This “one channel per string” MPE means that you can use guitar articulations like hammer on and pull off - and the synth can respond properly (Effectively each channel is a mono synth). IFretless calls this MPE4.

    That's a really clever implementation of the standard. I like that! :)

    Though a few alarm bells should ring as soon as you say "I like this implementation of the standard" 😂

    :D well played

  • I have taken this conversation to the polyexpression forum, to get another view of the problem. I know nothing about MIDI, and reading you experts I feel like I made the wrong move buying a Linnstrument...

    Well, to be honest, I don't feel so.

    I am very happy because of all the things I can do with it. I've never thought of controlling two or three sound sources with it, just like I didn't think about playing two or three basses at a time. I don't care if I can edit notes or automation lanes in an MPE DAW, because I'd rather practice until it sounds how I want it to sound, and I don't need to reach for a knob, button or wheel, just as I didn't reach for the tone knob on a bass. Actually, with the Linnstrument I don't have to reach for a knob at all, as I have per-note modulation right at my fingertips.

    MPE might not be perfect (there seems to be consensus on that), but it has a long history behind it and it's a step in the right direction, beyond simple on/off switches.

    So thank you, Roger Linn, Madrona Labs, Moog, Kai Aras, Audio Damage (and soon AudioKit team) and many others for introducing me to electronic music.

    And who am I to say things? Nobody, absolutely nobody.

  • iFretless can do channel per note, channel per string, single channel mono legato and single channel polyphonic. GeoShred has a quite comprehensive system of configurable midi setting presets which basically allow you to do anything. Geo Synth just allows you to specify the range of midi channels it uses, as does ThumbJam, I believe.
