Audiobus: Use your music apps together.


Nanostudio 2.0.1 update available.


Comments

  • @EyeOhEss said:
    Ah, this sounds good! :)

    I think I misunderstood before.
    So if I understand correctly, in NS2 you insert MIDI AUs on the MIDI instrument track, like in an audio effect AU bin? (I find this implementation vastly preferable to the AUM method of MIDI AUs being on a separate channel/fader.) But can any MIDI AU on any track still receive MIDI from any other MIDI AU on any other track? And selectively from its own MIDI AU bin? And receive combinations of MIDI AUs located on various other tracks?

    Let me show you two examples. In both cases there are two plugins in the project - Rozetta Arp and Rozetta Collider. The Arp's notes are played by one Obsidian, the Collider's notes by another Obsidian. Both the Arp and the Collider are then also routed into a third track, where they play Model D.

    1/ First method - both the Arp and Collider tracks are set as "child tracks" of the track with Model D, and "Send MIDI to parent" is enabled on them. (Btw, in this case the audio generated by both Obsidians is also routed to the Model D track; this is the default NS2 behaviour, which can of course be disabled per track.)



    2/ Second method - both the Arp and Collider tracks have "MIDI SEND" activated, which sends the output of their MIDI FX chain to the track with Model D. In this case it doesn't matter where in the group hierarchy those tracks are. You can add any number of sends to a track, so the Arp track can be sent to any number of other tracks.


  • edited May 2019

    Things start to get even more interesting when you bring the MIDI Tools "Bus" plugin into the game, which basically allows you to route MIDI from one of its instances to another instance.

    With this plugin you can put, let's say, [Rozetta Bassline 1] -> [MIDI Tools Bus] -> [Rozetta Bassline 2] into the MIDI FX slot on track 1, and then just [MIDI Tools Bus] on track 2. With this configuration, track 1 will play both Bassline 1 and Bassline 2 notes, but track 2 will play just Bassline 1's notes.
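    To make that routing concrete, here's a toy Python sketch of the configuration above (all names are made up for illustration - this is not the real MIDI Tools API, just the note flow):

```python
# Toy model of the MIDI Tools "Bus" trick: a Bus instance placed mid-chain
# taps the note stream, and Bus instances on other tracks replay it.

class Bus:
    channel = []  # notes captured on the shared bus channel

def play_track(chain, notes):
    """Run a list of MIDI FX over a note stream, in chain order."""
    out = list(notes)
    for fx in chain:
        out = fx(out)
    return out

def bassline1(notes): return notes + ["B1-note"]
def bassline2(notes): return notes + ["B2-note"]

def bus_send(notes):
    Bus.channel = list(notes)  # tap the stream at this point in the chain
    return notes               # ...and keep passing it down the track

def bus_receive(notes):
    return notes + Bus.channel  # replay whatever the sender captured

# Track 1: [Bassline 1] -> [Bus] -> [Bassline 2]
track1 = play_track([bassline1, bus_send, bassline2], [])
# Track 2: just [Bus]
track2 = play_track([bus_receive], [])

print(track1)  # ['B1-note', 'B2-note'] -> plays both basslines
print(track2)  # ['B1-note']            -> only Bassline 1's notes
```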

    So basically you can do the same things as in AUM - it's just that AUM has two different track types, "MIDI" and "AUDIO", while in NS it's just one general-purpose track which handles both audio AND MIDI in more or less the same way. (This is, btw, preparation for the audio tracks implementation, where you will be able to have both audio and MIDI on the same track.)

    One last note: if we are talking about the simplest use case - just routing Rozetta Arp to, let's say, Model D - in AUM you need to put Rozetta Arp on one MIDI track, Model D on another audio track, and then configure the Arp -> Model D routing in AUM's MIDI matrix.

    In NS you just create a track with Model D and put Rozetta Arp as a MIDI FX on that track. Done.

  • The user and all related content has been deleted.
  • edited May 2019

    It sounds like the functionality I was looking for isn’t built in and requires the Midi Tools Bus au?

    Which exactly? I'm not sure I understand :-) You mean RECORDING the output of MIDI FX back into a sequencer track? I thought you were asking about routing capabilities, not recording :-)

    In that case, yes - for now, recording AU MIDI FX output is not a native feature in NS2 (although it's definitely on the todo list), but it's easily doable, as you mentioned, with the MIDI Tools Route plugin (see my video on the previous page). It's actually so simple that I doubt it could be made any simpler once it's supported as a native feature :lol:

  • The user and all related content has been deleted.
  • edited May 2019

    I mean - if one track has 4 MIDI AUs on it, I want to be able to select any one of those MIDI AUs to also send MIDI to any MIDI AU instance on another track. Is that possible?

    Ah, OK. No, this is not possible - you have to split them across 4 tracks, or use MIDI Tools Bus (one instance after every plugin, with the "PRE" setting described below).

  • edited May 2019

    This is the setup using the MIDI Bus - you choose "PRE" mode, which stops propagation of the previous AU plugin's output down the chain in the current track and sends it to the selected bus channel instead; you then add a Bus instance on the other channel where you want to receive that AU's MIDI stream.

    MIDI Tools is a set of VERY handy plugins, and it's cheap - if you don't have it yet, I suggest you purchase it ;-)


  • The user and all related content has been deleted.
  • I see a future whereby NS2 could be the master recorder for AUM/ApeMatrix/AB3/etc. projects;

    Being able to record multiple MIDI and audio tracks (!) into NS2 to bridge modular and linear would be awesome.

    You would be able to 'perform' a sequence via, say, AUM into NS2 and record the MIDI and Audio. From there you can switch your brain into arrange mode and use the benefits of a linear sequencer to mix it into a finished track.

    Audio tracks can't come soon enough :-)

    The way I'm mucking about with iOS today means I have lots of little bits created on the iPhone or iPad which then get assembled in Logic. It would be nice if the projects stayed on iOS for longer.

  • The user and all related content has been deleted.
  • @syrupcore said:
    Yep. Same for real-time quantize. My hope is that these will eventually end up under (easily accessible) MIDI FX. I wanna record free as a bird and then tighten it up a bit without committing to it. For now, I duplicate the clip, mute the duplicate and quantize the original. Works fine but non-destructive real-time would be friendly like.

    I’ve wondered if that was the plan for quantize originally. When you pick a MIDI FX to load, there are both internal and external banks to choose from, yet NS2 doesn’t have any internal MIDI FX. That always makes me scratch my head.

  • edited May 2019

    @syrupcore
    @legsmechanical

    There is one major issue which limits a "quantisation/groove" MIDI plugin - in half of the cases you need to shift a note "backward" in time... and that is not possible in a plugin, because a plugin receives notes only at the moment they are played.

    Quantisation/groove must be solved at the "sequencer level": that code needs access to the whole sequence to be able to shift notes not just "later in time" but also "sooner in time" relative to their real position.

    At least until Mr. Borstel implements the TimeTravelApi (tm) :-))
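    A few lines of Python can sketch the difference (purely illustrative - times in beats, quantizing to a straight 1/4-beat grid):

```python
# A sequencer-level quantizer sees the WHOLE sequence, so it can move a
# note earlier as well as later. A real-time MIDI plugin only sees each
# note at the moment it plays, so by the time it receives the late note
# at beat 1.10, beat 1.00 has already passed.

def quantize(note_times, grid=0.25):
    """Snap each note time (in beats) to the nearest grid step."""
    return [round(t / grid) * grid for t in note_times]

played = [0.02, 0.98, 1.10, 2.25]
print(quantize(played))  # [0.0, 1.0, 1.0, 2.25]
# 0.98 -> 1.0 is a shift LATER (a plugin could do this by delaying);
# 1.10 -> 1.0 is a shift EARLIER (impossible in a real-time stream).
```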

  • edited May 2019

    @klownshed said:
    Being able to record multiple MIDI and audio tracks (!) into NS2 to bridge modular and linear would be awesome.

    With MIDI this is already possible - up to 16 MIDI tracks simultaneously.

    With audio - OK, obviously only after audio tracks are added :-)))

  • @Telstar5 said:

    @espiegel123 said:

    @Telstar5 said:
    .

    What is it about Auria Pro and MIDI AUs that you are asking about? It wasn't clear to me.

    Wondering if you can do the following with Auria Pro also:

    unlimited levels of groups (i.e. you can group tracks, then group groups of tracks, then group groups of groups, ...). Grouping works simply by selecting the tracks you want to group and then performing the "group" action, OR by dragging one track over another track
    unlimited number of sends (any track can be sent to any number of other tracks)
    for each send you can define the amount (level), PRE or POST fader, invert the phase and set the panning - so it's not just a simple "send amount" knob like in most DAWs
    a track sends its audio by default to its parent track, but you can disable this
    a track can send its output directly to a HW output instead of to its parent track

    MIDI

    unlimited number of sends (same as with audio)
    by default a track does not send MIDI to its parent track, but you can enable it
    by default every track receives ALL MIDI from ALL inputs when it is selected
    you can switch that to "receive always", choose just one MIDI input, select which MIDI channels to receive, and adjust the velocity curve
    Generally, concept of "track" in NS is derived

    It’s hard to compare the two apps since their workflows are conceptually different in many aspects, but - although I’m not skilled at NS2 - I’ll try to answer with what I believe could be comparable in Auria Pro.

    -Tracks groups
    In Auria Pro individual faders can be grouped together, so that adjusting one will adjust the rest of the group and their respective mute and solo buttons will be grouped as well.
    Additionally, AP contains eight separate subgroups which can be used for advanced routing and bus processing. It works by re-routing the channel’s output to a specific subgroup, instead of directly to the master channel, allowing selected channels to be summed and processed together first before being bussed to the master.
    -Sends
    Any channel output can be assigned to up to 32 buses, which in turn can be assigned to another channel’s input. So a single channel can have multiple bus output destinations at once. In addition the Aux Returns are considered buses, so any aux channel can be returned to an individual stereo channel in the mixer.
    -Auxes
    AP has six stereo auxiliaries, with sends available on every channel and subgroup. By default these are post-fader sends, but can also be switched to pre-fader mode.
    -MIDI
    Any track can receive and record MIDI from AU/IAA instruments loaded in the track or any other track, a MIDI plugin loaded in the same track or loaded in any other track, a virtual MIDI port, the screen keyboard or a hardware device. The track’s active input is selected in the channel strip.
    Similar to the audio channel strip, every MIDI track has a MIDI channel strip where virtually any MIDI parameter can be adjusted non destructively in realtime.

    Hope this helps.

  • edited May 2019

    @legsmechanical said:

    @dendy said:
    @syrupcore
    @legsmechanical

    There is one major issue which limits a "quantisation/groove" MIDI plugin - in half of the cases you need to shift a note "backward" in time... and that is not possible in a plugin, because a plugin receives notes only at the moment they are played.

    Quantisation/groove must be solved at the "sequencer level": that code needs access to the whole sequence to be able to shift notes not just "later in time" but also "sooner in time" relative to their real position.

    At least until Mr. Borstel implements the TimeTravelApi (tm) :-))

    That’s a really good point, and one of the reasons that the Pyramid and RS7000/RM1x are still go-tos for me when it comes to real-time MIDI manipulation, in spite of all the cool MIDI AUs. Those hardware sequencers work on the sequence data, not just the real-time MIDI stream like AUs do.

    That’s what I was hoping the “internal” MIDI FX might do (if they are ever implemented).
    (And why it would make sense to distinguish between internal and external MIDI FX, as the app does now.)

  • Wow, thanks @Rodolfo! That certainly makes sense, although looking at this thread, NS2 DOES in fact look more streamlined. I know there’s a “tips and tricks” section on the Blip Interactive forum, but someone needs to make a compendium of all these techniques in one place. Maybe @dendy as well as some others, I dunno.

  • edited May 2019

    @legsmechanical
    That’s a really good point, and one of the reasons that the Pyramid and RS7000/RM1x

    RM1x!!! The holy grail of MIDI sequencing. I deeply wish and dream of @jimpavloff making an iOS remake of the RM1x :-)

    @Telstar5
    but someone needs to make a compendium of all these techniques in one place. Maybe @dendy as well as some others, I dunno.

    Yeah, I feel like this is inevitable. There is a lot of information circulating on the forums, but it often gets lost in discussions...

  • edited May 2019

    @legsmechanical: Yeah, there’s a Squarp Pyramid in my future... perfect except for the 96 PPQ resolution. Is there even a desktop sequencer that’s as good as the Pyramid?

  • @dendy it depends. For example, if NS2 was connected to Xequence, it would receive timestamped MIDI events 300 ms in advance (with default settings), so shifting events backwards in time up to ~ 300 ms would work then. However that assumes that NS2 chooses to actually receive them in advance too (forgot the API call), and in case there's anything in between, all of that would have to pass the events through immediately too and not do some timed sending of their own.

    I guess though that the majority of apps do not implement such buffering.
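    The constraint can be sketched like this (illustrative Python only - the 300 ms figure is just the default mentioned above, not a property of any particular API):

```python
# With lookahead delivery, an event scheduled for time t arrives at
# t - LOOKAHEAD, so a processor can move it EARLIER by up to LOOKAHEAD
# seconds and the shifted time is still in the future on arrival.

LOOKAHEAD = 0.300  # seconds of advance delivery (default mentioned above)

def backward_shift_ok(shift_seconds):
    """Can an event be moved earlier by shift_seconds and still play?"""
    return shift_seconds <= LOOKAHEAD

print(backward_shift_ok(0.120))  # True  -- fits within the lookahead
print(backward_shift_ok(0.450))  # False -- would land in the past
```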

  • @dendy said:
    @syrupcore
    @legsmechanical

    There is one major issue which limits a "quantisation/groove" MIDI plugin - in half of the cases you need to shift a note "backward" in time... and that is not possible in a plugin, because a plugin receives notes only at the moment they are played.

    Quantisation/groove must be solved at the "sequencer level": that code needs access to the whole sequence to be able to shift notes not just "later in time" but also "sooner in time" relative to their real position.

    At least until Mr. Borstel implements the TimeTravelApi (tm) :-))

    Think it would just have to compute a playback buffer long enough to accommodate any offsets/computation time, no? Not that that's trivial by any means.

  • edited May 2019

    @syrupcore said:
    Think it would just have to compute a playback buffer long enough to accommodate any offsets/computation time, no? Not that that's trivial by any means.

    And then you delete a note in the sequencer during playback - a note which was already added to that buffer and sent to the plugin for processing :)))

    You need to think in 4 dimensions, Marty :)

    I know these are extreme cases, but I see a lot of dirty work needed there, and I suspect there are some deeper issues in host->plugin->host communication which would make this impossible.

    @SevenSystems was talking about a 300 ms buffer, but I don't think this is enough for a groove quantise plugin - "backward in time" note shifts can be bigger, especially at faster BPMs (at least I think so; I didn't calculate the theoretically biggest possible "back in time" shift).

    Anyway, interesting discussion - I would love to see more experienced devs involved in this topic :)
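    For what it's worth, here's a quick back-of-envelope for the straight-grid case (a sketch under stated assumptions, not anyone's actual implementation; groove templates may shift further). The worst-case backward shift is half a grid step, and in milliseconds it actually gets bigger as the tempo gets slower:

```python
# Worst-case backward shift for straight quantization to a 1/16 grid:
# a note can be at most half a grid step away from its target.

def max_backward_shift_ms(bpm, grid_division=16):
    beat_ms = 60_000 / bpm                  # one quarter note, in ms
    step_ms = beat_ms * 4 / grid_division   # one grid step (1/16 here)
    return step_ms / 2                      # half a step: the worst case

print(max_backward_shift_ms(120))  # 62.5  -> well under 300 ms
print(max_backward_shift_ms(60))   # 125.0 -> slower tempo, bigger shift
```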

  • @dendy the buffer can be as long as you like... but of course that will at some point become noticeable when you do edits during playback, as these will take longer and longer to actually be reflected in the playback.

  • edited May 2019

    @dendy said:

    @syrupcore said:
    Think it would just have to compute a playback buffer long enough to accommodate any offsets/computation time, no? Not that that's trivial by any means.

    And then you delete a note in the sequencer during playback - a note which was already added to that buffer and sent to the plugin for processing :)))

    You need to think in 4 dimensions, Marty :)

    I know these are extreme cases, but I see a lot of dirty work needed there, and I suspect there are some deeper issues in host->plugin->host communication which would make this impossible.

    @SevenSystems was talking about a 300 ms buffer, but I don't think this is enough for a groove quantise plugin - "backward in time" note shifts can be bigger, especially at faster BPMs (at least I think so; I didn't calculate the theoretically biggest possible "back in time" shift).

    Anyway, interesting discussion - I would love to see more experienced devs involved in this topic :)

    Logic has had non-destructive groove quantising from the beginning. I'm pretty sure Cubase was first with groove quantising, but Logic had it available non-destructively nearly 30 years ago.

    Amongst Logic's other non-destructive parameters, it also has a 'delay' option allowing you to delay any object forwards or backwards by reasonably large amounts. So Logic allowed you to time travel in the early 1990s.

    If NS2 or AUv3 doesn't allow MIDI plugins at least the same buffer the audio engine is using, then perhaps this is a case for an internal solution. As has been mentioned before in the thread, the internal MIDI FX tab already exists but is empty. NS2 plug-ins or features won't have the same restrictions as an AUv3, surely?

    As for the 'logic' of notes in bar 1 that may occur before the start of the bar, decisions can be made: either not quantising the first time round, or not playing a note that should occur before the start of the bar. Any other edge cases may require some thought, but they weren't insurmountable 30 years ago, so they shouldn't be an issue in 2019.

  • @klownshed said:

    @dendy said:

    @syrupcore said:
    Think it would just have to compute a playback buffer long enough to accommodate any offsets/computation time, no? Not that that's trivial by any means.

    And then you delete a note in the sequencer during playback - a note which was already added to that buffer and sent to the plugin for processing :)))

    You need to think in 4 dimensions, Marty :)

    I know these are extreme cases, but I see a lot of dirty work needed there, and I suspect there are some deeper issues in host->plugin->host communication which would make this impossible.

    @SevenSystems was talking about a 300 ms buffer, but I don't think this is enough for a groove quantise plugin - "backward in time" note shifts can be bigger, especially at faster BPMs (at least I think so; I didn't calculate the theoretically biggest possible "back in time" shift).

    Anyway, interesting discussion - I would love to see more experienced devs involved in this topic :)

    Logic has had non-destructive groove quantising from the beginning. I'm pretty sure Cubase was first with groove quantising, but Logic had it available non-destructively nearly 30 years ago.

    Amongst Logic's other non-destructive parameters, it also has a 'delay' option allowing you to delay any object forwards or backwards by reasonably large amounts. So Logic allowed you to time travel in the early 1990s.

    If NS2 or AUv3 doesn't allow MIDI plugins at least the same buffer the audio engine is using, then perhaps this is a case for an internal solution. As has been mentioned before in the thread, the internal MIDI FX tab already exists but is empty. NS2 plug-ins or features won't have the same restrictions as an AUv3, surely?

    As for the 'logic' of notes in bar 1 that may occur before the start of the bar, decisions can be made: either not quantising the first time round, or not playing a note that should occur before the start of the bar. Any other edge cases may require some thought, but they weren't insurmountable 30 years ago, so they shouldn't be an issue in 2019.

    For the record -- Xequence has both (non-destructive delay and swing per track), so if you're looking for that right now, you can also easily use Xequence to sequence NS2 via virtual MIDI.

  • I wish there were more tutorials on NS2. I know many of you started with the original, but for a noob who just got it and has no clue, there is not a lot out there to get you going. It seems that even the manual assumes you are familiar with the terminology and procedures. If you have never used a recording device or program and want to get started, it is daunting. There seem to be some more advanced tutorials out there, but nothing for the absolute beginner, and all of you seem very advanced. I do not comprehend half of what is being discussed, but I read it anyway in hopes that I will understand something.

  • edited May 2019
    The user and all related content has been deleted.
  • edited May 2019

    @ralis said:
    I wish there were more tutorials on NS2. I know many of you started with the original, but for a noob who just got it and has no clue, there is not a lot out there to get you going. It seems that even the manual assumes you are familiar with the terminology and procedures. If you have never used a recording device or program and want to get started, it is daunting. There seem to be some more advanced tutorials out there, but nothing for the absolute beginner, and all of you seem very advanced. I do not comprehend half of what is being discussed, but I read it anyway in hopes that I will understand something.

    As a first step, you should go through this series of videos; it will help you get a pretty good basic overview.

    SoundTestRoom Setting Up & Quick Start Guide

    As a starting point you can watch this one - really just a quick overview of the basic NS2 structure.

    PlatinumAudioLab video tutorials

    Now you should go through all of these tutorials - each one is aimed at one particular part of the app or a specific workflow...

    Send FX

    Bus groups & send FX chains

    Automate instruments & FXs

    Sidechain compression

    Basics of Obsidian synth / sampler

    Basics of Slate drum sampler

    Quick workflow tips

    Sampling / audio recording

    Obsidian - Filters

    Obsidian - FM oscillator

    Obsidian - Physical modelling tricks

    Manual

    After those video tutorials, I STRONGLY suggest going through the manual - a lot of details are explained there.

    https://www.blipinteractive.co.uk/nanostudio2/user-manual/index.html
    (the same manual is also available inside the app)

    Nanostudio forums (and especially tips&tricks section)

    Here you can find some advanced tips & tricks:
    https://www.blipinteractive.co.uk/community/index.php?p=/discussion/366/tips-tricks

  • Excellent job @dendy, appreciate that list.

  • The only thing wrong with the Platinum Audio videos is that he hasn't made more of them.

    This hackattack video on Obsidian is also highly recommended:

    For me Obsidian has become my workhorse synth. I use other synths if I want something a bit special, but Obsidian is my default.

    The only thing that I don't like about Obsidian is that it doesn't have more than 24 modulation slots. Which is a little first world problems, but it does seem kind of arbitrary.

  • edited May 2019

    The only thing that I don't like about Obsidian is that it doesn't have more than 24 modulation slots. Which is a little first world problems, but it does seem kind of arbitrary.

    Yes - sometimes I reach the limit too... I'm ready to open this topic in discussions with Matt after the iPhone version and audio tracks ;-)

    Btw, did you know that you can save a lot of mod matrix slots used by MACRO knobs by not assigning them directly as a mod source, BUT assigning them as a "multiplier" in another existing mod matrix slot? For many use cases it works - it's a pity I was not aware of this method when I was working on the factory patches...

    (For example, if you want to control the Filter Env amount with a macro knob, simply locate the "Filter Env > Filter Cutoff" modulation slot, open it and assign the macro knob as a "multiplier" in this existing slot - the result is the same as creating a NEW slot and selecting the macro as SOURCE and Filter Env Level as target.)
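    The arithmetic behind the trick (made-up numbers - Obsidian's exact internal scaling is an assumption here) is just that the three factors can be grouped either way:

```python
filter_env = 0.8   # Filter Env output at some instant
env_amount = 0.5   # depth of the existing Filter Env -> Cutoff slot
macro      = 0.6   # macro knob position, 0..1

# One slot used: macro assigned as "multiplier" on the existing slot
one_slot = filter_env * env_amount * macro

# Two slots: an extra slot scales the env level by the macro first
two_slots = (filter_env * macro) * env_amount

print(one_slot == two_slots)  # True -- same modulation, one slot saved
```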
