Audiobus: Use your music apps together.


Hypnopad’s iPad only studio project

For several years my studio has been laptop/Ableton based, with multiple iPads performing auxiliary jobs. As a side project I’ve tried to see if I could reverse-engineer my setup/workflow and apply it to an all-iPad rig. I’ve learned a lot in the process, but I can’t say I’ve been very successful (yet). It’s been a series of frustrations and almost-theres.
I want to thank @McD @aleyas @bcrichards for their ingenious creations and many others who have helped along the way.
Instead of giving up or shelving this project, I want to document some of the challenges I’ve encountered, both to inform fellow readers who might want to go down this path and to get more input and creativity from this great community. I’m hoping this pipe dream is still possible!


Comments

  • edited May 2020

    thing is, iOS is moving FAST, with stuff coming out monthly that changes the game (multi-out, dram... Nembrini, DDMF, Mozaic, etc.). There's no static scene, it's constantly changing, so the advice needs to be updated weekly... not to rain on your parade B)

  • I saw a video of a Jacob Collier Logic session with something like 165 tracks. Even with AUM and Drambo, which combined are way more efficient than Cubasis, I top out at about 12 tracks plus effects on a 2018 iPad Pro. Instruments must use disk streaming rather than RAM, plus low-CPU apps, to make the iPad feasible for serious production. We’re just not there yet.

  • @noob said:
    thing is, iOS is moving FAST, with stuff coming out monthly that changes the game (multi out, dram... etc.). There's no static scene, it's constantly changing... and I feel that the move from a desktop DAW to iOS boosted me creatively

    That gives me hope. Actually a large missing piece is hopefully about to drop with the next Drambo update.

  • edited May 2020

    @ion677 said:
    I saw a video of a Jacob Collier Logic session with something like 165 tracks. Even with AUM and Drambo, which combined are way more efficient than Cubasis, I top out at about 12 tracks plus effects on a 2018 iPad Pro. Instruments must use disk streaming rather than RAM, plus low-CPU apps, to make the iPad feasible for serious production. We’re just not there yet.

    I’m not having any issues with RAM or the CPU. On the laptop and Ableton I’m using specialized Max4Live devices and intricate routings and layers, not tons of tracks/clips or CPU-hungry synths. The tools just aren’t quite there yet. Apps that let you make your own devices (Mozaic/Drambo/MidiDesignerPro) are my main hope.

  • Here we go.
    The setup:
    Audio from a Roland pad is routed into Impaktor-type modules in Drambo, with various tunings triggered by scene pads. The Roland HandSonic sends MIDI notes to round-robin devices, which in turn are sent to various synths/samplers. It also sends signals to change scenes in Drambo. All housed in AUM.

    The goals:
    1) MIDI (foot pedal) control over scene changes (Impaktor tunings) while also playing a bass line via melodic round robins.
    2) MIDI switching of different round-robin devices: the ability to change bass/melody lines on the fly.
    3) Round-robin devices that let me input any note as a note value (i.e. A2/D4/A2/G#5) and do not let MIDI note input transpose the round robin.

    The (corresponding) issues:
    1) Drambo as of yet does not have MIDI-mappable pads. Hopefully this is coming soon. I’m hoping that with AUM I can route one controller to two round robins’ MIDI accordingly: one to a bass line and another to control chord changes.
    2) I think I can MIDI-map two bypass buttons in AUM to switch out round-robin devices.
    3) The big one. I have three options here that the community has graciously supplied. Unfortunately, none are working out as well as I would like while composing.
    The first one, a round-robin Flexi for Drambo by @bcrichards, uses a sequencer to change samples. If you use the same sample but tune each copy differently, you kind of get a melodic round robin. The problem is workflow: it’s very tedious to tune by cents, by ear, with a knob. What would normally take me under a minute now takes a lot longer. Add the fact that the input note value transposes the whole thing. Less than ideal.

    The second one is an arp-based round robin in Drambo controlled by ChordPolyPad, from @aleyas. Extremely clever routing of CV and MIDI between tracks and apps. Unfortunately, the note sequences are limited to going up or down a scale (because that’s what arps do). I need to be able to input any note anywhere in the sequence, or even have the same note repeat within the sequence.

    The third one is a Mozaic round-robin script by @McD. It actually comes the closest to what I want, but the workflow is a deal breaker for me. Inputting numbers in multiple groupings in the code and trying to remember how they interact is too abstract an exercise when all I want is to input a short chain of note values. If the UI were tweaked to expose simple note values, it would be much easier to use.

    4) I’ve been having an issue while using the CV quantizer in Drambo. I can’t figure out if it’s Drambo itself or my pads/interface/cable/static electricity/moon phase: I get occasional ghost echoes, low in velocity, that trigger the lowest note in the note range right after a played note. I change something and it goes away, but eventually it comes back, whether within a minute or after a few days. A complete mystery.
    Anyway, that’s what is happening this week. Anything people want to contribute would be highly appreciated.
    Thanks again for all involved.
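Goal 3 above (a fixed melodic round robin that incoming notes trigger but never transpose) can be sketched in a few lines of Python. This is not code for any of the apps mentioned, just a hypothetical model of the behavior; the note-name parser assumes the common convention where C-1 is MIDI note 0, so A2 = 45.

```python
# Hypothetical sketch of the "melodic round robin" from goal 3:
# a fixed pattern of note names that advances one step per incoming
# trigger, discarding the pitch of the trigger note (no transposition).

NOTE_NAMES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_to_midi(name):
    """Convert a note name like 'G#5' to a MIDI note number (C-1 = 0)."""
    pitch, octave = name[:-1], int(name[-1])
    return NOTE_NAMES[pitch] + (octave + 1) * 12

class RoundRobin:
    def __init__(self, pattern):
        self.notes = [note_to_midi(n) for n in pattern]
        self.step = 0

    def trigger(self, incoming_note, velocity):
        """Any incoming note advances the pattern; its pitch is ignored."""
        out = (self.notes[self.step], velocity)
        self.step = (self.step + 1) % len(self.notes)
        return out

rr = RoundRobin(["A2", "D4", "A2", "G#5"])
for hit in [36, 36, 36, 36, 36]:   # five hits on the same pad note
    print(rr.trigger(hit, 100))    # cycles A2, D4, A2, G#5, then wraps to A2
```

Every hit, whatever its pitch, plays the next note of the programmed pattern, which is what the Flexi and arp approaches above fall short of: they either transpose with the input or constrain the pattern to a scale.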

  • @ion677 said:
    I saw a video of a Jacob Collier Logic session with something like 165 tracks. Even with AUM and Drambo, which combined are way more efficient than Cubasis, I top out at about 12 tracks plus effects on a 2018 iPad Pro. Instruments must use disk streaming rather than RAM, plus low-CPU apps, to make the iPad feasible for serious production. We’re just not there yet.

    Actually, we can already do that. It's just not yet efficient.
    MultiTrack DAW allows 8 tracks to be loaded per effect chain. So if you wanted to go all Jacob Collier, you could. You'd just spend a ton of time printing audio to file.

    There's also a MIDI app that not many people know about, even though they already have it. I'm working on a video about it. The app isn't straightforward either, but it allows for a ton of optimization.

  • edited May 2020

    Do you have any opinions about approaching the task with miRack? I didn't try it earlier because I was waiting for the AUv3 update, but I've just got a proof of concept running smoothly with one voice for your #3 requirement:

    A midi trigger module advances a sequencer by 1 step per hit. The sequencer has an easy 'write' function, and also a sequence reset input.

    More info on the modules: the MIDI trigger module gives you 16 trigger outputs, each trigger corresponding to a learnable MIDI note input, for example C2-D#3. Potentially, each trigger could clock its own sequencer or be assigned to reset a specific sequence. The Write-32 sequencer is a 3-channel sequencer with lengths of 1-32 steps. Writing and modifying a sequence is immediate and hassle-free.

    I think with some crossfaders or OR/XOR logic I should be able to change between multiple sequences on a single pad, or change between instrument timbres/outputs. Transposing should also be simple.

    One caveat of using miRack for the time being: no MIDI out. So you'd be stuck using internal sounds.
    I believe a MIDI out update is in the works, however, as is AUv3.

    edit: oh yeah, I think 8 custom parameters are available to be assigned MIDI control in miRack... or that might be in the next update too.


  • @aleyas This looks very interesting. I’ve actually had this app for a while but never really messed with it. I’m totally pragmatic about this project: whatever app gets the job done is fine by me.
    If we can get multiple MIDI inputs to multiple sequencers within just one instance, I guess it doesn’t need to be AUv3.
    As far as no MIDI out goes, I can live with that, assuming it has sample-player modules and I can use my own samples. The Audible Instruments modal synth sounds/looks interesting too. I really like the reset function. From your picture it looks like you can assign a note to it? Maybe have one note routed to reset all the sequences at once?
    Let me know the easiest way to share this patch and feel free to post it on patchstorage if you want.
    Wow, another interesting rabbit hole to go down.
    Thanks so much for this!

  • edited May 2020

    Yeah, on that trigger module you can select midi input source and give it a channel filter too. In the picture, I had C#2 from my V-Synth assigned to reset. One trigger could certainly be routed to reset all sequences simultaneously.

    Good news, I quickly loaded up a sampler module. Importing a wav from my audioshare folder was quick and easy. Tested it with the sequencer, advanced with individual hit triggers - perfect.

    This patch looks promising. I'll set up for 4 channels with a global reset trigger for 2 samplers and 2 synth voices. If that works as expected, then I'll try to address your #2 requirement for switching sequences / devices. After I'm done with work for the day I'll upload here with dropbox.

  • You’re awesome! Looking forward to it. Thanks.

  • edited May 2020

    Alright! Got a 4 channel / 4 voice set up.

    A midi in module is being used to write note data to any / all of the sequencers.
    -Select 'run 1-3' on the Seq module to toggle between play mode and write mode.
    -In write mode simply play a series of notes from your controller, and hit 'run 1-3' again to exit 'write'.

    Take note: 'monitor' wasn't working for me, I'm not sure why. When you input from a controller, it may be useful to have that input trigger another sound module so you can hear the sequence you are writing. I put a secondary midi-in and modal synth module at the top right of the rack just for auditioning purposes. You can totally delete those.

    A midi trigger module is routing individual note hits to each of the sequencers' clock inputs.
    -C2,D2,E2,F2 are currently used for sequencers 1-4.
    -C3 is routed to 'reset' all sequences*

    *Note: 'Reset' actually triggers the first step of the sequence. So if you use reset, the next time you play a note on that sequencer, it would be the 2nd step. May want to offset your sequence by 1 step, so that for an 8 note sequence it begins on 2 and ends on 1.

    Everything is color coded, so you can easily follow the signal flow for each voice.
    The samplers are being triggered by their trigger inputs, not their gate inputs, so each step makes them play from beginning to end. Place an amp envelope after them to tame it.

    Each sequencer has 3 channels. I think with a switch and some logic gates we could alternate between sequences by midi. I'll play with it. Otherwise, you could just connect more instruments to those unused channel 2 & 3 CV/Gate outs. Then 1 trigger could advance up to 3 instruments simultaneously. Polyphony-ahoy.

    https://www.dropbox.com/s/1mz1c1wgfp8imkb/4 seq manual trig 1.1.zip?dl=0

    I tried to upload on patch storage, but it wouldn't allow me to submit.

    Let me know how it works for you. If there are any mirack / modular wizards reading, feel free to modify if you think this setup can be more efficient, or you got good logic skills.
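For readers following along, here is a rough Python model of how the patch above routes pad hits. It is not miRack code, just a sketch; note numbers assume C-1 = 0 (so C2 = 36 and C3 = 48), and the module behavior follows the post, including the quirk where 'reset' itself fires step 1.

```python
# Hypothetical model of the 4-sequencer miRack patch described above.

class StepSeq:
    """One Seq-32-style channel: each clock trigger plays the current step."""
    def __init__(self, steps):
        self.steps = list(steps)
        self.pos = 0

    def clock(self):
        out = self.steps[self.pos]
        self.pos = (self.pos + 1) % len(self.steps)
        return out

    def reset(self):
        # Per the post, 'reset' fires step 1 immediately, so the next
        # clock hit after a reset plays step 2.
        self.pos = 0
        return self.clock()

    def write(self, steps):
        # 'write' mode: replace the stored sequence with freshly played notes.
        self.steps = list(steps)
        self.pos = 0

TRIGGERS = {36: 0, 38: 1, 40: 2, 41: 3}  # C2, D2, E2, F2 -> sequencers 1-4
RESET_NOTE = 48                          # C3 resets all sequencers

def handle_hit(note, seqs):
    """Route one pad hit to the matching sequencer (or reset them all)."""
    if note == RESET_NOTE:
        return [s.reset() for s in seqs]
    if note in TRIGGERS:
        return seqs[TRIGGERS[note]].clock()
    return None  # unmapped notes are ignored

seqs = [StepSeq([60, 62, 64]) for _ in range(4)]
print(handle_hit(36, seqs))  # step 1 of sequencer 1
print(handle_hit(36, seqs))  # step 2
print(handle_hit(48, seqs))  # resets (and plays) step 1 on all four
```

The reset quirk is why the post suggests offsetting an 8-note sequence to begin on step 2 and end on step 1.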

  • edited May 2020

    Got it up and running. I know what I’m going to be doing this weekend! I’ll maybe make a short demo video once I get something interesting happening.
    I was thinking about adding more sequencers. An easy way would be a second instance in AUM, just using different input notes, as opposed to doing it all in one patch; it would save a lot of programming. What do you think? Is that not a good idea?
    (edit) I now think that might needlessly add to the CPU hit, as opposed to just one instance?

  • Totally for more sequencers. Just used 4 this time to test the concept.
    For the time being miRack is IAA, so only 1 instance available. The AUv3 update is just around the corner though, gonna be really powerful once that hits.

    @hypnopad said:
    An easy way is to have a second instance in AUM and just use different input notes. That as opposed to doing it all in one patch- saves a lot of programming

    Could you explain this thought a bit more? Not sure I understand. Like, having different input sources to enter notes on different instances?

  • Also, would love to hear a little demo if you get some good stuff from this patch later!

  • Oh I forgot it’s not AUv3 yet.
    @aleyas said: Could you explain this thought a bit more? Not sure I understand. Like, having different input sources to enter notes on different instances?

    Same MIDI source, just different notes. I was just thinking of a way to save additional programming, but the more I think about it, it seems to just introduce more issues: a possible CPU hit, having two miRack projects to save, etc.
    I guess it might be a good option if you wanted to use more than one controller, unless you can already have multiple input sources in one patch.
    Sounds like adding more sequencers to the one patch is the way better/simpler option.

  • edited May 2020

    edit: I was confused, and just got what you meant finally. Just ignore this post!

    How do you intend to enter note data to the sequencers? Guessing with just a normal midi keyboard?

    @hypnopad said:
    Same MIDI source, just different notes. I was just thinking of a way to save additional programming, but the more I think about it, it seems to just introduce more issues: a possible CPU hit, having two miRack projects to save, etc.

    Unless I'm misunderstanding, this is how I have it setup now. 1 controller writes sequence data, the other (pad controller) jams out on those sequences.

    The midi-1 module is routed to the 'write' input of all 4 sequencers. It's quick to enter data on each sequence, cause you don't need to fiddle with different channel settings or controllers. I'll record a quick workflow video to demonstrate what I mean.
    Also, the midi-1 module can be fed by a completely separate source from the trigger module. Multiple midi in sources are totally possible.

    If you wished though, you could create as many midi-1 modules as you want. Each one could receive from different midi sources, or from a common source.

    (Also to test multiple midi input - I created 4 different midi-trigger modules. Each module receiving from a different source, and driving 1 of the 4 sequences. Not really necessary though, as one trigger module already gives you 16 outputs from learnable midi inputs. Just a test.)

    (further unrelated - I've thought of a workaround to give you velocity input for each of your drum pads, will post later if you're interested)

  • edited May 2020

    Oh I feel like an idiot. I think I got you now. It's late here now and my brain isn't working!

    Yeah, when the AUv3 update drops you could make multiple instances. Midi-trigger module is basically like a note filter. So you could just copy the project to as many instances as you want, and your pads could be sent to any of them. Easy peasy. That would actually be ideal cause you'd be able to utilize your effects app collection on each instance.

    Feel free to ignore my last post entirely! I thought you were talking about the way to write note data to the sequencer.

  • @noob said:
    thing is, iOS is moving FAST, with stuff coming out monthly that changes the game (multi-out, dram... Nembrini, DDMF, Mozaic, etc.). There's no static scene, it's constantly changing, so the advice needs to be updated weekly... not to rain on your parade B)

    Things are moving super fast (part of what makes this platform so addictive), and I always wonder if I am maybe missing a trick, so threads like this are much appreciated. I don’t see this as an attempt to give a conclusive summation of the state of things iOS so much as a peek behind just one of the many possible curtains.

  • @aleyas It’s all good. You just saved me some extra typing! Sometimes texting is not the best way of communicating complex ideas. Yeah, one advantage of multiple instances would be separate effect chains on each channel in AUM.
    Yes, I would love some velocity. On my laptop rig I have the option of it being on, or setting a static amount. Both are useful: sometimes you want expressiveness, and other times you just want everything at a certain level regardless of your input.

  • edited May 2020

    Cool, I remember in the mozaic thread the question of preserving velocity came up.

    Once miRack gets the AUv3 update it'll be cake. But until then we can use Drambo to filter your pad midi outputs, and route them to individual midi-1 modules, which have a velocity CV out among other things. Maybe mfxConvert would work too. Will post back tomorrow with progress.

    Hopefully it'll be a temporary workaround till the next mirack update.

    @AudioGus said: I don’t see this as an attempt to give a conclusive summation of the state of things iOS so much as a peek behind just one of the many possible curtains.

    Welcome to our little curtained room! I really like the endless ways we musicians here use our various iOS tools.

  • @aleyas said:
    Cool, I remember in the mozaic thread the question of preserving velocity came up.

    Once miRack gets the AUv3 update it'll be cake. But until then we can use Drambo to filter your pad midi outputs, and route them to individual midi-1 modules, which have a velocity CV out among other things. Maybe mfxConvert would work too. Will post back tomorrow with progress.

    Hopefully it'll be a temporary workaround till the next mirack update.

    Wouldn’t you know! Drambo to the rescue!

  • edited May 2020

    Made all four devices sample players. Going to try for more polyphony with the extra channels. I’m also figuring out a good sample-management workflow; most of my one-shot samples live on my laptop.
    I’m monitoring with a synth on another track in AUM, and adding audio effects. @aleyas Could you tell me how to add an ADSR to the sample player? I can’t seem to figure out that routing. I’m slowly figuring this beast out.

  • I thought it was going to be straightforward, but I was getting very strange behavior from certain envelope generators not retriggering. Odd.

    Anyway, you want to keep the audio from the sample player routed to the Mix-8 input.
    Route the Seq-32 gate output to: (1) the sample trigger input, and (2) the envelope trigger input.*
    Route the ENV output to the 'level' input on the Mix-8 mixer. That's our VCA.
    (*Alternatively, you could feed 'env trig in' from the same trigger that feeds that voice's Seq-32 'clock in'.)

    As for envelope generators, the Complex DAHD by Hysthi worked best for me. Having a variable hold time is good if the sample/envelope is triggered by a trigger instead of a gate.
    BogAudio's Shaper was good too; it can act as both amp and envelope. Very short decay times were weird, though.

    Still working on that velocity routing. Been a long day!
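The DAHD shape mentioned above is easy to model. Here is a minimal sketch in Python (hypothetical, not the Hysthi module's actual code): amplitude is 0 during the delay, ramps to 1 over the attack, stays at 1 for the hold time (which stands in for gate length when the envelope is fired by a bare trigger), then ramps back down over the decay.

```python
def dahd(t, delay, attack, hold, decay):
    """Amplitude (0..1) of a Delay-Attack-Hold-Decay envelope at time t (seconds)."""
    if t < delay:
        return 0.0            # waiting
    t -= delay
    if t < attack:
        return t / attack     # rising ramp
    t -= attack
    if t < hold:
        return 1.0            # held at full level
    t -= hold
    if t < decay:
        return 1.0 - t / decay  # falling ramp
    return 0.0                # done

# Velocity routed to the decay stage, as described in the later posts
# (the scaling constants here are arbitrary, for illustration only):
velocity = 100
env_decay = 0.05 + (velocity / 127) * 0.45
level = dahd(0.25, delay=0.0, attack=0.01, hold=0.1, decay=env_decay)
```

With a zero-length gate (a bare trigger), the hold stage is the only thing keeping the sample audible, which is why a variable hold time matters here.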

  • edited May 2020

    I just realized that the (+) and (-) buttons at the top of the sample player module cycle through all the samples in a folder. A trigger into one of those inputs can change the loaded sample!

  • @aleyas said:
    I just realized that the (+) and (-) buttons at the top of the sample player module cycle through all the samples in a folder. A trigger into one of those inputs can change the loaded sample!

    Good to know.
    Thanks for the marked-up diagrams; they help a lot! I’m coming from working with the signal/trigger flow of audio/instrument/MIDI effect racks in Ableton. This modular stuff is incredible, but the signal flow is still somewhat confusing for me.

  • We want something like "TriggerTune" in Max for Live with Simpler.

    @hypnopad describes on another thread:

    You just manually fill in the piano roll. No keyboard input or other MIDI device before it to program it. It then just receives a note from your controller pad and loops through the pattern you’ve programmed, one note at a time. Just to be clear, that is two separate devices in an Ableton rack: a TriggerTune round-robin Max for Live device and a Simpler instrument.

    Now a bored developer (or 5) will read that and start working. If you build it, we will buy. There are a lot of people who just want to tap and create. And everyone else can just put "taps" in the DAWs and sequencing apps.

    Drambo developers should just upload patches for download, because we get it... really. But we just want to "tap" and not read instruction sheets like some 20-page Ikea document.

    Users just want to use things and people. That's why we're not called Do'ers. Get Use'd to it.

  • edited May 2020

    @McD Nice save from the other thread I kind of hijacked.😄
    For a long time I thought I should learn Max or Reaktor and make my own stuff. I soon learned that so many cool people had built so many cool things that I’d rather be a user than a doer and concentrate on other tasks/levels. Those levels now include building things with AUM/Drambo/miRack/MidiDesigner. Anything more “granular” may never happen.
    TriggerTune and Step Melody are the two laptop devices that I’ve been asking the iOS community to help me find/build/emulate for the iPad for months now. I appreciate all the help I’ve had so far; the amount of help/collaboration here is truly humbling.


    TriggerTune Max for Live device feeding MIDI into a Simpler instrument in Ableton Live.

  • edited May 2020

    Alright, I've got velocity working successfully. I also optimized the patch slightly, as I realized that the DAHD envelope is also an amplifier.

    I've got 2 methods for routing velocity. One is using mfxConvert, which is the quicker of the two. The other is a simple patch in Drambo. Both methods achieve the same outcome.

    Here's the Drambo patch (I recommend mfxConvert if you've got it, but this will do fine too).
    (Track inputs are listening on channel 1, remapping to channels 2-13 for C2-B2, twelve notes. Change as needed.)
    https://www.dropbox.com/s/saem3l1lpbyebhv/Velocity filter.drproject?dl=0

    What I'm doing is converting the notes from my controller to different midi channels. For example:
    From C4, Channel 1 - To C4, Channel 2
    From D4, Channel 1- To D4, Channel 3
    From E4, Channel 1- To E4, Channel 4
    etc, for as many sequences / pads you'll be using.

    That allows each note to get its own Midi-1 module. So instead of using the Trigger-16 module, I'm using multiple Midi-1 modules, as those have gate as well as velocity CV outputs.
    The Midi-1 gate outputs get routed to the Seq-32 'gate' and write 'clock' inputs. (You could route the gate/retrig output to trigger the envelope too, or just do that from the Seq-32 gate out.)

    I've been enjoying routing velocity to the decay and attack stages on the DAHD.
    You could also mult the velocity to an amplifier (to scale it), and send it to several destinations at once.

    Also, I made a dedicated write module, and dedicated reset module.
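The note-to-channel remapping described above amounts to a tiny lookup. A hedged sketch in Python (not Drambo or mfxConvert code; it assumes C2 = MIDI note 36 and 1-based channel numbers, per the post):

```python
# Sketch of the per-note channel remapping: C2..B2 arriving on channel 1
# are moved to channels 2..13, one channel per note, with velocity intact.

BASE_NOTE = 36  # C2, assuming the C-1 = 0 numbering convention

def remap(note, velocity, channel):
    """Remap C2..B2 on channel 1 to channels 2..13, keeping note and velocity."""
    if channel == 1 and BASE_NOTE <= note <= BASE_NOTE + 11:
        return (note, velocity, 2 + (note - BASE_NOTE))
    return (note, velocity, channel)  # everything else passes through

print(remap(36, 100, 1))  # C2 -> channel 2
print(remap(47, 64, 1))   # B2 -> channel 13
```

Each remapped channel can then feed its own Midi-1 module, which is what exposes a per-pad velocity CV output in the patch.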


  • edited May 2020

    Also, if you like, I can upload the miRack patch, or an AUM project with Drambo/mfxConvert doing the channel remapping.

    And it looks like the AUv3 update is hitting this week as well. After that, using AUM's built-in note-range slider and channel filter would be sufficient, so the method I just posted wouldn't be necessary anymore.
