
Intelligent splits

Hi there everyone,

This is Tim, I’m new to the forum, although I’ve been lurking for years now :)

I have a question about keyboard splits. Conventional splits divide the keyboard up into fixed regions, but that isn’t always what you want, at least for live use.

What I’d like is the ability to have an “intelligent split”. For example, if I play a chord, split it such that the bass note goes to one synth, and the remainder of the chord goes to another.

Similarly at the top: play a right-hand chord such that the top note goes to one sound (the lead sound, for example) and the remainder goes to another (a pad sound perhaps). Add a sostenuto pedal to that and I’m thinking the chord could be maintained while I play a lead line over the top.
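To make this concrete, here’s roughly the behaviour I’m after, written as Python-style pseudocode (not tied to any particular app; bass_dest, lead_dest and pad_dest are just stand-ins for wherever each part would be routed):

```python
# Illustration only - no particular app or MIDI library implied.
def route_chord(notes, bass_dest, lead_dest, pad_dest):
    """notes: MIDI note numbers of the chord currently held."""
    ordered = sorted(notes)
    bass_dest.note_on(ordered[0])        # lowest note -> bass synth
    if len(ordered) > 1:
        lead_dest.note_on(ordered[-1])   # top note -> lead sound
    for n in ordered[1:-1]:
        pad_dest.note_on(n)              # inner notes -> pad sound

# e.g. route_chord([48, 60, 64, 67], bass_synth, lead_synth, pad_synth)
```

The point is that there is no fixed split point at all; the routing depends purely on where each note sits within the chord being played.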

Can this be done in any of our wonderful array of apps? I currently use iMIDIPatchbay for live setup (thank you Johannes), but I’d love to have more flexibility.

Thanks,

Tim

Comments

  • I am just brainstorming here...

    there is a new app that should be out soon called Polythemus. It splits multiple notes out to individual channels.

    If you do a regular split for high/low sections, then send each out to an instance of Polythemus (it’s an AU), it should do what you describe: set one Polythemus to separate the bass from the rest and send out on 2 channels, and set the other to separate the top note from the rest and send out on 2 more channels (rough sketch at the end of this post).

    Since the app isn’t out yet, I don’t know if this will actually work. But from what I remember from checking out the User Manual, it seems highly likely that you could do this.

    Here is the link to the manual if you would like to investigate further...

    http://www.amssoftware.org/manual/PolythemusManual.pdf
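    Something like this is the routing I have in mind. It’s pure guesswork until the app ships, written as Python-style pseudocode with made-up channel numbers:

    ```python
    # Guesswork from the manual: a fixed keyboard split feeds two note
    # splitters, and each splitter fans its zone out onto two MIDI channels.
    SPLIT_POINT = 60  # middle C, arbitrary

    def split_and_fan_out(held_notes):
        """Return {midi_channel: [notes]} for the chord currently held."""
        low_zone = sorted(n for n in held_notes if n < SPLIT_POINT)
        high_zone = sorted(n for n in held_notes if n >= SPLIT_POINT)
        routing = {1: [], 2: [], 3: [], 4: []}
        if low_zone:                     # splitter A: bass note vs. the rest
            routing[1] = [low_zone[0]]
            routing[2] = low_zone[1:]
        if high_zone:                    # splitter B: top note vs. the rest
            routing[3] = [high_zone[-1]]
            routing[4] = high_zone[:-1]
        return routing
    ```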

  • There is a fundamental problem with implementing this: when a human plays a chord, the notes aren’t pressed at exactly the same millisecond, so one of them will land first. They also won’t be triggered in the same order each time. During that window of slop, the computer has to guess: is this first note the low note? The high note? A middle note?

    You could do it after the fact in a DAW very well, or live with a look-ahead, but that would add latency.
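    A look-ahead would basically mean buffering note-ons for a few milliseconds before deciding, something like this rough sketch (the names and the 30 ms figure are made up; the window is exactly the latency you’d be paying):

    ```python
    import time

    WINDOW_MS = 30      # guessed gather window; this is the added latency
    _pending = []       # (arrival time, note) pairs waiting to be routed

    def on_note_on(note):
        _pending.append((time.monotonic(), note))

    def flush_if_ready(assign_chord):
        """Call regularly; once the window has elapsed, route the buffered chord."""
        if not _pending:
            return
        if time.monotonic() - _pending[0][0] >= WINDOW_MS / 1000.0:
            notes = [n for _, n in _pending]
            _pending.clear()
            assign_chord(notes)   # now the lowest/highest notes are known
    ```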

  • Excited about this one. It’s like voice allocation on the Analog Four, or using the Polymer app on OS X.

  • Thanks @CracklePot, that looks like a good find. I’m very interested to see how that works out; it sounds like it almost does what I described. Do you know the dev by any chance? I just wondered how you found out about it.

    @Processaurus, you have a good point: the order in which you play the notes will be critical. I don’t want to sort it out after the event; I’m thinking of live use rather than recorded. It might take some getting used to, ensuring the chords are played in the right order.

    I have a Roli Seaboard which already splits everything onto separate channels, so I was hoping that might be able to do it, but the channel allocation seems fairly random; it’s certainly not predictable as far as I can see.

  • The developer is @midiSequencer

    He is on the forum here quite often.

    His brand new app Photon has a couple of threads here. He mentioned his upcoming app Polythemus in those threads. Photon is really cool. You should check that one out, too.

  • Polythemus is ready for launch - I just need to do a video.

    Its most common use will be turning iVCS3 into a poly, but yes, it’s a voice allocation program designed to assign note-ons to different voices; you can then attach synths or MIDI output devices to those voices.

  • Thanks @midiSequencer. I’ve read the manual, looking forward to it.

  • @midiSequencer, please forgive my poor understanding, but could your app allow me to route a split keyboard, with bass in the left hand and piano in the right, to two separate tracks in a DAW? That would be a great help to me. Thanks!

  • Not exactly: it’s related to timing (when you press the key) rather than position (MIDI note value), just like a synth’s voice allocation rather than a key split.

    I think @blueveek’s Key Zone would be a better candidate, if it could be adapted to have multiple outputs and route each range to a separate one. If not, I could adapt mine...

  • Thanks @midiSequencer, maybe another member has a solution for me. Good luck with the app!

  • @LinearLineman This is what you're looking for, probably: http://audioveek.com/key-zone/

  • Thank you @blueveek. What a great set of tools there. I will give it a try.

  • I think @LinearLineman wants different key ranges going to different outputs too?

  • I submitted to Apple earlier today :) @CracklePot, I’ll add you to my beta.

    Reading the requirements, I think it should fit the bill if you consider sustained notes: they are held in the voice (until you run out of voices, at which point voice stealing takes place). I have keepHigh and keepLow modes of voice stealing to let you maintain drones/sustained notes at either the highest or lowest end (there’s a rough sketch at the bottom of this post).

    I only have 8 voices though - so no 10 finger chords!

    It does need a host that supports multiple output channels (only AUM at the moment I think, but apeMatrix & Audiobus are in development)
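    In pseudocode, the allocation works roughly like this (a simplified sketch, not the shipping code):

    ```python
    NUM_VOICES = 8

    class VoiceAllocator:
        def __init__(self, mode="keepLow"):
            self.mode = mode              # "keepLow" or "keepHigh"
            self.voices = {}              # voice index -> note number

        def note_on(self, note):
            for v in range(NUM_VOICES):   # use a free voice if there is one
                if v not in self.voices:
                    self.voices[v] = note
                    return v
            # All voices busy: steal one. keepLow protects the lowest-held
            # notes, so the highest-pitched voice is stolen (and vice versa).
            if self.mode == "keepLow":
                victim = max(self.voices, key=self.voices.get)
            else:
                victim = min(self.voices, key=self.voices.get)
            self.voices[victim] = note
            return victim

        def note_off(self, note):
            for v, n in list(self.voices.items()):
                if n == note:
                    del self.voices[v]
                    return v
    ```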
