Drambo standalone audio not accessible in Audiobus or AUM
Am I missing something? I have a nice little multitimbral synth I created in Drambo. There are a few AUv3 effects (MIDI and audio) used for some patches, so I have to run it standalone. I want to be able to drive the Drambo synth as an external device via MIDI from within another DAW (in this case NS2).
Everything worked and sounded great until I wanted to mix down the DAW output with the Drambo synth. I opened up AB to find that Drambo standalone doesn't show up as either an audio input or an audio output, so I have no way to mix the two outputs into one recording.
I find it rather odd that standalone Drambo's audio is not exposed outside of itself. Is it intentional? Is there a simple workaround? How do I access the audio output of Drambo standalone?
Hopefully I’m just missing something stupid. Thanks.
Drambo supports neither IAA nor Audiobus, only AUv3. So you have two options:
1. Use Drambo as the main AU host, with your synth patch as e.g. one of its tracks.
2. Move the AUs used in Drambo over to your host, and keep only Drambo's internal modules when running it as an AU.
That still won't solve the problem of rendering the project in NS2 with audio from Drambo...
Perfect example of the curse of 'Franken DAW'
Wow. This is the Audiobus forum. It doesn't support Audiobus? Why? Can someone explain the reasoning behind it? Is it a technical limitation or a conscious decision?
Audiobus is built on top of IAA, a technology which Apple deprecated a few years ago (meaning they will no longer develop it or fix bugs in it), encouraging developers to move on to AUv3.
(But as long as people keep using IAA there's no real motivation for developers to make the move until their apps 'break').
Adding support for a deprecated standard is a risky move, as there's no guarantee it will keep working, and it causes headaches for developers since it sometimes requires iOS-version-specific quirks to even work...
As @skrat mentioned it's a good idea to do as much as possible within a host like NS2.
Currently iPadOS has some limitations, such as an AUv3 not being able to host other AUv3s.
This limitation doesn't exist on macOS, and I would not be surprised if hosting becomes possible when iPadOS 17 rolls out next year.
It's always possible to sequence using Drambo, do an export, import the audio files into Slate pads in NS2, and sequence them that way.
@giku_beepstreet is the developer behind Drambo, and maybe he can shed some light on the decisions that have been made...
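For anyone curious what the "move to AUv3" looks like from a host's point of view, here's a rough Swift sketch (a minimal illustration, not how NS2 or AUM actually do it) that queries the installed AUv3 instruments with AVAudioUnitComponentManager and instantiates one out-of-process:

```swift
import AVFoundation

// A wildcard description: componentSubType/Manufacturer of 0 match anything,
// so this finds every installed Audio Unit instrument ('aumu').
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: 0,
    componentManufacturer: 0,
    componentFlags: 0,
    componentFlagsMask: 0
)

let instruments = AVAudioUnitComponentManager.shared().components(matching: desc)
for component in instruments {
    // Each AVAudioUnitComponent describes one installed plug-in.
    print(component.name, "by", component.manufacturerName)
}

// Instantiate the first match asynchronously. .loadOutOfProcess runs the
// plug-in in its own process, so a crashing plug-in can't take the host down.
if let first = instruments.first {
    AVAudioUnit.instantiate(with: first.audioComponentDescription,
                            options: .loadOutOfProcess) { audioUnit, error in
        if let au = audioUnit {
            print("Loaded:", au.name)
        } else {
            print("Failed:", error?.localizedDescription ?? "unknown error")
        }
    }
}
```

This is the path a modern host goes down instead of the old IAA node-publishing dance; the instantiated AVAudioUnit can then be attached to an AVAudioEngine and wired into the graph like any other node.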
Though we got the SunVox metamodules in AUv3. And then metamodules in metamodules ☺
Mix it down to audio.
Nope. You're just making a reasonable assumption that there would be a way to do it. Unfortunately, due to IAA being deprecated, there's a gap in interoperability. That's just the way it's going to be. Fewer and fewer developers will invest the time to support functionality that Apple can break at any time. The limitation of AUs not being able to host AUs is unlikely to change either, so there are likely to always be roadblocks such as this.
The easiest thing to do sometimes is to have a DAW that you leverage for mixing, and dump as much as possible to audio tracks and mix there.
Yeah, but the SunVox metamodules are not really regular 'plug-ins' by definition.
It's more like 'adding one project's folder into another project's folder'; it's practically just a 'text document' and does nada without being interpreted and processed by SunVox.
Thanks all for the explanations. Makes sense. It does appear that the major obstacle to leaving IAA behind is auv3 hosting other auv3s. @samu and @wim seem to have different opinions on whether or when that will happen. Until it does, IAA is still needed in my view. I appreciate that mixing down to audio files and then mixing the audio files does work, it’s just a bit cumbersome.
If you look at this comment by the Synthmaster devs, you will realize that devs can only continue to maintain IAA by building their apps with really outdated copies of the Apple libraries. But they need the newest libraries for full compatibility with M1 iPads. So, IAA really is on the way out.
@Samu: Always hoping for more features + eternal optimist
@Wim: Too easily satisfied with status quo + eternal pessimist (I prefer "realist" lol.)
I agree with @wim; mix down to audio.
If it were me, in drambo stand-alone, I’d record everything in some Loopy Pro AU donuts, save em, then open the Loopy Pro AU session in NS2 (I’m not a big Nano user, but I think that would work, wouldn’t it?)
NS2 can host both the Drambo AUv3 and the AUv3 audio effects on the same track. So I would try to either separate them in the synth patches, saving the effect presets under the same names as the patches, or use Drambo's internal effects instead (a lot of typical effect modules are on board now). That's basically what @skrat suggested. And whatever is hosted on that track can also be mixed down to audio inside NS2.
@Edward_Alexander's idea using Loopy Pro sounds like a good idea too if you don't mind the effort. I just would prefer an in-the-box solution because it allows me to tweak the synth sound or change what the synth plays when required during composition.
I’m using LP in Drambo to store synced audio clips from other tracks, kind of like Ableton’s Session view. But when I export stems for editing I will just use Drambo’s own stem recording and then drop them into a DAW
Surely somebody is working on closing that gap? Another form of IAA? Auv4? IOS itself as a mega-host? To me, exporting and importing audio files from one standalone to another is certainly a viable workaround (and thanks to all for the various ways of doing it). But it’s still a workaround. Just feels kludgey to me. A buzz kill in the creative process.
Most likely it will be some kind of new 'audio driver' (similar to SoundFlower on the Mac), but as DriverKit will only be available for M1-based or newer iDevices, it will take some time to arrive, unless Apple stops it! (You know they don't like people 'stealing' music by doing lossless recording from the Music app to other places on mobiles, due to licensing agreements.)
iPadOS 17 is the next big update, and if developers are 'convincing enough' and feed Apple with their requests we might get something, but as iOS music app users are a very marginal category of the overall user base, I don't have high hopes.
Another work-around is to see if it's possible to make the Drambo projects without the AUv3's (ie. use the AUv3's in the host to sequence Drambo and load the effects in the host as well once the audio leaves Drambo).
iOS is full of 'workarounds', and they can sometimes tax the patience and kill the flow completely.
@Samu I don't think that audio loopback will happen on iOS soon; even on macOS you can't do it without 3rd-party apps as far as I know.
I'm aware of the need for 3rd-party tools for loopback, and it's NOT about Apple not being capable of doing it, but about the licensing agreements with the damned entertainment companies that are paranoid about their precious content being pirated.
iPadOS 16's DriverKit would at least make it possible to create a custom audio driver for M1+ iPads.
Whether Apple would approve a driver that does loopback is another discussion.
My UR-242 has a checkbox for LoopBack in the dspMixFX app, but it's greyed out when connected to an iPad.
For the Audient iD4mk2 I can see and select the 'Loop Back Port', but there's no iOS app to control the level of loopback.
I would not be surprised at all if Rogue Amoeba is already experimenting with a custom driver for iPad, as this would allow them to bring Audio Hijack over to the iPad, which is requested a LOT.
Time will tell what happens...
Can we get Apple to un-deprecate IAA? (That's kind of a rhetorical question, since nobody can get Apple to do anything.) There is obviously still a legitimate use case for it, and that would likely be the simplest solution. Then they could put it back into the new libraries and everybody could stop building against really old ones. E.g. AUM and every DAW I know of must be building with old libraries. What prompted Apple to deprecate it? It's widely used existing functionality being removed for no apparent compelling reason. I thought that was a big no-no in software development. I know I'm just venting here. Forgive me. I feel better now... Grrr.