Buffer Frame Size and Audio Sessions
So... I know if I use Audiobus to open a whole session (including AUM), I can use Audiobus settings to change the latency/"buffer frame size"... and that works... but I recently came up with a couple of questions/issues regarding this:
Is there a "native" buffer size somehow? Watching a CPU meter, I notice that if I change the value while the session is open, 128 frames gives me the best CPU performance... any other value increases the CPU... so does that mean Audiobus is sort of "translating" the frame rate for the hosted apps?
Using a convolution IR app, "Thafknar", I notice that if I change the buffer size of the session, I'm 90% sure that the timbre of the audio signal changes... so why would this happen? Do certain apps not support certain frame rates? Or... actually, now I'm guessing that, for IR apps in particular, it's necessary to create a specific IR for a specific session frame rate?
Comments
The lower the buffer size, the less time DSP apps have to process the audio before sending it out. Higher buffer sizes make it easier for computationally intensive processes (like convolution reverbs) to get their work done. The buffer size won't change the timbre, so, no, you don't need IRs customized for the buffer size. If the buffer is inadequate, you usually get crackly artifacts in the audio.
The audio buffer size set by Audiobus or AUM or other hosts is shared by all audio apps that are running.
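To put rough numbers on that trade-off, here is a quick sketch (assuming a 48 kHz session; the figures scale with sample rate): each render callback gets frames / sample_rate seconds to do its work before the output glitches.

```c
/* Rough numbers for the deadline described above (assumed 48 kHz session):
 * each render callback must finish within frames / sample_rate seconds,
 * or the audio glitches. Bigger buffers mean a longer deadline. */
#include <stdio.h>

int main(void)
{
    const double fs = 48000.0;
    const int sizes[] = {64, 128, 256, 512};
    for (int i = 0; i < 4; i++)
        printf("%4d frames -> %.2f ms to render each buffer\n",
               sizes[i], 1000.0 * sizes[i] / fs);
    return 0;
}
```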
Thanks for the replies @espiegel123. I understand what you’ve written above, but in practice (with a CPU meter) my results are non-intuitive and don’t fit the theory. What I’m saying is that my CPU is LOWEST at 128… lower than at 64 frames, sure, but also lower than at 256 and at higher frame rates (in fact, the higher the frame rate, the higher the CPU, which is obviously against the theory). I’m trying to figure out why that is.
And again, maybe my wording wasn’t clear enough, but I’m speaking from actual testing… I’m definitely hearing changes in timbre… though only with the IR app Thafknar… so yeah, why would that be? The sound is slightly more “phasy” at 64 frames…
Again, I’m wondering if this has to do with some sort of “default” frame rate being used (maybe only per session?)… or the default frame rates selected in individual apps?
What sort of IR are you running with a buffer size of 64 -- that is pretty extreme for an IR. Do you hear differences in the output when using larger buffers? Most convolution apps probably assume a larger buffer than that is being used.
Re CPU%: CPU percentages are truly meaningless until the CPU is pushed hard, because the OS does things like throttling (lowering the CPU speed) to extend battery life. See:
https://wiki.audiob.us/doku.php?id=cpu_load_comparisons_and_testing&s[]=throttling
It would be pretty surprising if, when the CPU is pushed (approaching 90% baseline), you got better performance with a small buffer. If you do get such a result, I'd be very interested in the particulars.
I think that using different buffer lengths for different processing components would be so difficult as to be impossible. All elements in the chain have to use the same buffer length.
The problem with a convolution IR is that, to produce one output sample, it needs to multiply each point in the IR with one point in the input stream, going back in time as far as the length of the IR. It needs to carry a history, which may be several buffers long, in order to do this. So something odd may be happening when you change the buffer size on the fly. It might be different if you remove thafknar and readd it after changing the buffer length.
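For illustration, here's a minimal direct-form sketch of that bookkeeping (hypothetical code, not Thafknar's actual implementation; IR_LEN is an assumed length):

```c
/* Minimal sketch (hypothetical; not Thafknar's actual code) of direct-form
 * convolution. Each output sample is a dot product of the IR with the most
 * recent IR_LEN input samples, so the convolver must carry a history of the
 * last IR_LEN-1 samples across buffer boundaries. */
#include <string.h>

#define IR_LEN 2048  /* assumed IR length in samples */

typedef struct {
    float ir[IR_LEN];           /* impulse response */
    float history[IR_LEN - 1];  /* tail of previous input buffers */
} Convolver;

void convolve_buffer(Convolver *c, const float *in, float *out, int frames)
{
    const int h = IR_LEN - 1;
    for (int n = 0; n < frames; n++) {
        float acc = 0.0f;
        for (int k = 0; k < IR_LEN; k++) {
            int idx = n - k;  /* reach back in time as far as the IR is long */
            acc += c->ir[k] * (idx >= 0 ? in[idx] : c->history[h + idx]);
        }
        out[n] = acc;
    }
    /* Keep the last IR_LEN-1 input samples for the next callback; with a
     * 64-frame buffer this history spans many buffers, which is why changing
     * the buffer size on the fly can leave the state in an odd place. */
    if (frames >= h) {
        memcpy(c->history, in + frames - h, h * sizeof(float));
    } else {
        memmove(c->history, c->history + frames, (h - frames) * sizeof(float));
        memcpy(c->history + h - frames, in, frames * sizeof(float));
    }
}
```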
Regarding your observation of higher CPU for larger buffers (lower frame rate, not higher): your device could be switching to the slower, low-power cores when you increase the buffer length. These CPU figures can be untrustworthy.
OK, awesome, thanks for the info and resources @espiegel123 and @uncledave. Good to know about this "throttling" business... I was racking my brains about it for quite some time. I'm pretty sure this is the case, then. I just purchased an iPad Air 4 for audio work and have been testing it... so it's true I am currently not pushing the limits at all (quite the opposite, thankfully)... except the latency limits. I'm using my iOS audio system for real-time guitar performance and looping, so having a low-latency system is really important... I often record very rhythm-heavy guitar loops on the fly.
I'm using a very short mic'd guitar IR (size is on par with a cabinet-type IR)
I think I do hear a bit of a difference with larger buffers as well, but seemingly more subtle high frequency content
Ok thanks for this insight, it makes sense that this might be the case... I'll try some tests this week and post the results back here
So after testing... the situation seems unchanged. I tried removing and re-adding Thafknar (also tried saving a 64-frame session, then closing and reopening all apps) and the 64 frame rate is still giving "phasey" audio. Not sure why that is. It's not a huge deal for me, 128 is fast enough for my needs, but it's good to know the limitations of my main apps.
Here is a test that you could do, and perhaps @polaron_de can provide some insight:
1. Record a piece of audio to feed to THAFKNAR.
2. If you're using AUM, load that audio into a file player that is followed by THAFKNAR.
3. Set the buffer to 64.
4. Record-enable that AUM channel and record to capture the result.
5. Set the buffer to 256.
6. Record again.
7. Post the results -- and the IR if you can -- in case it relates to something particular to the IR.
... and the 64 frame rate is still giving "phasey" audio. Not sure why that is. It's not a huge deal for me, 128 is fast enough for my needs, but it's good to know the limitations of my main apps.
The "phasey" sound points to some delay in the processing; with high-frequency content, just a couple of samples are enough to make it audible.
But it's not easy to name the exact source. IMHO it's iOS itself, with its aforementioned throttling for energy efficiency. TBH I don't have a clue how they manage digital audio under such circumstances, as most results are surprisingly good or even great.
My suspicion is based on the experience that I once wasn't able to "phase out" a signal roundtrip to external gear with its inverted source, a procedure I've done dozens of times with various DAW setups.
(The source is duplicated to a temporary (inverted) channel, which is delayed incrementally until the mixed return signal from outside vanishes. The number of inserted samples then gives the exact duration of the roundtrip.)
The (subjective) impression during this test was that the whole data stream was constantly moving... but in fact you can record a return signal without phasing artifacts.
This left me clueless and I threw in the towel.
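For reference, here's a minimal offline sketch of that roundtrip measurement (a hypothetical helper; it assumes the source and the recorded return are already captured at identical levels into float arrays):

```c
/* Hypothetical offline sketch of the procedure above: slide an inverted
 * copy of the source against the recorded return and look for the offset
 * where the mix nulls out. The offset with the minimum residual energy is
 * the roundtrip in samples. Assumes both signals are at identical levels. */
#include <float.h>
#include <stdio.h>

int find_roundtrip_samples(const float *src, const float *ret,
                           int len, int max_delay)
{
    int best = 0;
    double best_energy = DBL_MAX;
    for (int d = 0; d <= max_delay; d++) {
        double e = 0.0;
        for (int n = 0; n + d < len; n++) {
            double r = (double)ret[n + d] - (double)src[n];  /* return + inverted source */
            e += r * r;
        }
        e /= (double)(len - d);  /* normalize so overlap length doesn't bias the result */
        if (e < best_energy) { best_energy = e; best = d; }
    }
    return best;  /* near-zero residual energy means near-perfect cancellation */
}
```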
Good idea, I’ll try and get this one this week sometime
Interesting idea to cancel the signals out like that… as a means of comparison
Yes, the method is frequently applied to compare two files, which is easier because they usually have a common start point (or you can shift one until the start points match).
The tricky part is to have both levels absolutely identical, otherwise it will never zero.
Extended use: to check the reliability of high-track-count recording, you use (e.g.) 16 copies of a single source and 16 inverted copies of the same file and record all 32 channels.
The mixdown would result in perfect silence if everything went well.
Any signal indicates the position of an error at first glance; if you scale the output, even the tiniest deviation becomes visible.
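A minimal sketch of that check (hypothetical helper, assuming the recorded channels are available as float arrays):

```c
/* Sketch of the multi-track reliability check described above: sum N normal
 * and N inverted copies per sample; the result should be perfect silence.
 * Scaling the residual up makes even the tiniest deviation visible. */
#include <math.h>
#include <stdio.h>

void mixdown_check(const float *const *tracks, const float *const *inverted,
                   int n_pairs, int len, float scale)
{
    for (int n = 0; n < len; n++) {
        float sum = 0.0f;
        for (int i = 0; i < n_pairs; i++)
            sum += tracks[i][n] + inverted[i][n];
        float residual = sum * scale;  /* e.g. scale = 1000 to expose tiny errors */
        if (fabsf(residual) > 1e-6f)
            printf("deviation at sample %d: %g\n", n, residual);
    }
}
```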
ah cool, interesting! Definitely pushing the limits of software
If the buffer size is too small, then Thafknar lacks the time to process the impulse response completely. It aborts the processing prematurely and shows a warning in this case, but you see this only if Thafknar's view is open.
Ok great, thanks for the response, and awesome app by the way!
So theoretically, if I understand it correctly, the shorter the IR file, the less processing time is needed. Therefore I’ll test out the new “range” feature in Thafknar: if I make the range of the IR short enough, I’ll see if I can get through the 64-frame buffer setting without error (and consequently without audible differences compared to the larger buffer settings). From there I’ll try to determine if it’s even possible for me to run my setup at 64 frames, given the shortest IR that will still give me the results I want.
I think you are right, and it is not necessarily an overload symptom. A smaller buffer size reduces the frequency resolution of the FFT that is used for a speedy convolution operation, and this does indeed have a slight effect on the timbre.
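If that is the mechanism, a back-of-envelope sketch shows how much the resolution moves (this assumes the partition size tracks the host buffer and the FFT is twice the block length, a common zero-latency partitioned-convolution layout; neither detail is confirmed for Thafknar):

```c
/* Back-of-envelope sketch, assuming the convolver partitions the IR into
 * blocks matching the host buffer and uses an FFT of twice the block length
 * (a common zero-latency partitioned-convolution layout; not confirmed for
 * Thafknar). Bin spacing is fs / fft_len. */
#include <stdio.h>

int main(void)
{
    const double fs = 48000.0;
    const int buffers[] = {64, 128, 256};
    for (int i = 0; i < 3; i++) {
        int fft_len = 2 * buffers[i];
        printf("buffer %3d -> FFT %4d -> bin spacing %6.1f Hz\n",
               buffers[i], fft_len, fs / fft_len);
    }
    return 0;
}
```

Under those assumptions, a 64-frame buffer gives 375 Hz per bin versus 93.75 Hz at 256 frames, which would be consistent with the subtle timbre differences reported above.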
This thread was really useful; I’m just getting into processing with IRs. Good ears and strong will, @thenonanonymous!
That's really interesting for me. I'm fighting with unacceptable recording/tracking latency in GarageBand on iPad 10.2.
I'm a total iOS/iPad newbie, so please let me ask: Is using GarageBand within Audiobus a way to reduce overall audio latency? Will GarageBand, inserted into Audiobus, follow its audio settings? In other words, if I set 64 samples in Audiobus, will GarageBand accept that, or will it run on its default 256 samples?
According to how iOS works, GarageBand has to follow the audio setting that is set at the time it is loaded into the host. I can't prove that to be the case, but I did set the buffers to 64 in Audiobus and then loaded GarageBand. The buffer setting in Audiobus didn't change after loading. Since everything needs to run at the same buffer settings and GarageBand sounds as it should, I have to assume that GarageBand is complying.
IMPORTANT: GarageBand must not already be running before you start Audiobus, or the buffer will be forced to 256. It's a good idea to check Audiobus before and after to be sure the buffer setting is what you expect. Sometimes iOS leaves behind "ghost" processes that affect audio and MIDI settings even after apps are closed. If this is the case, a reboot of the device should clear it out.
Sorry, I didn't take any actual measurements to prove the host buffer settings hold, but it seems likely that they do.
@wim Thank you, it sounds promising! I will check it tomorrow and I hope you are right :-)
Unfortunately I can't confirm that. I bought Audiobus yesterday and made several tests, even with a restarted iPad. I followed the rules you wrote, but GarageBand is definitely adding noticeable latency.
My test scenario:
iPad 10.2, Lightning USB adapter, Focusrite Solo 2nd gen, direct monitoring off, electric guitar plugged in.
1. Reboot the iPad, start Audiobus, set its audio buffer size to 64.
2. Engage the input (1st plus button) and engage the output (3rd plus button) as direct to the Focusrite.
3. Play the guitar - latency is OK/acceptable.
4. Erase the 3rd button and add GarageBand there. Tap into GarageBand and arm an empty track for recording/monitoring.
5. Play the guitar - latency is significantly increased, and probably the same as GarageBand opened directly (without Audiobus).
@filo01 : latency does not happen only because of the main buffer size. Plug-ins and apps may have additional buffers.
GarageBand’s optimizations seem based on maximizing track/effect playback rather than minimizing latency (which increases CPU demand).
Also, I believe that by sending the signal to GarageBand from Audiobus (rather than direct to GB), you are adding another set of buffers between the input and GarageBand.
@espiegel123 Interesting, thank you. I have a personal workaround for GB latency, but I really want to be sure that I'm not doing anything wrong or missing something. And I don't understand why GB on iOS doesn't have a dedicated audio settings option like other "normal" apps.
Sorry it didn't work out @filo01. GarageBand is frustratingly opaque. It's not possible to know for sure where the added latency is coming from.
@wim I agree with you about the frustration. I thought I had finally found an amazing piece of software (and a reason to buy an iPad), but for me, with this latency issue, GB is just very good for making backing tracks and for limited guitar recordings.
@wim @espiegel123 That leads me to a last question: my daughter's iPad 10.2 has 3GB RAM and an A13 Bionic CPU. Is there a chance that an iPad Pro with an M2 chip and 8GB RAM would process the audio stream faster, with lower latency?
Straight-through latency? Such as just to record a guitar track? Unlikely since you can't change the buffer settings in GB.
A faster iPad lets you use lower buffers with more apps before getting audio underruns (crackles). If you can't lower the buffers, then the path from the output to your ears is unaffected.
Everything gets processed in buffers. As long as the buffers can be processed in time to avoid underruns, the latency is the same on an iPad Air 2 as on an M2 iPad Pro. The difference is that the Air 2 will buckle under the load of low buffers far more easily than the M2 will.
On the other hand, multi-thread audio capable hosts and devices can do some things in parallel that otherwise would need to be done serially. I have no idea how or if that translates into reducing latency for record monitoring. I also don't know if GarageBand is multi-thread capable. Cubasis and AudioEvolution Mobile DAWs are multi-threaded. I don't know about others. There's a thread tracking that here somewhere.
All is not lost if GarageBand isn't doing it for you. There are other capable DAWs, and alternatives to DAWs, that have more flexibility for adjusting latency.
fwiw, I firmly believe even an old iPad like my Air 2 is the perfect recording device for guitar. Latency is far better than what I ever achieved on computers. No fan noise. No electrical noise if powering from battery. Tons of fantastic software like Loopy Pro to make recording easy. Absolutely stellar amp modeling (I'm a big Nembrini fan). Portability to the max. Everything I could ever want except some magic to make my chops halfway decent.
btw, to put this into the realm of physical effect:
I think the latency at 48kHz / 64 frames is 1.333 milliseconds; at 256 frames it is 5.333 milliseconds. If (and that's a BIG if) my calculations are correct, that difference is about what one would perceive when moving four more feet away from a sound source. This is of course exclusive of a possible ripple effect if FX or audio routing add more buffers into the equation.
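A quick sketch to check those figures (assuming 48 kHz and sound travelling roughly 1125 feet per second):

```c
/* Quick check of the figures above: one buffer of latency at 48 kHz, and
 * the difference expressed as acoustic distance at roughly 1125 ft/s. */
#include <stdio.h>

int main(void)
{
    const double fs = 48000.0, feet_per_ms = 1.125;  /* ~343 m/s */
    double ms64 = 1000.0 * 64 / fs;    /* 1.333 ms */
    double ms256 = 1000.0 * 256 / fs;  /* 5.333 ms */
    printf("64 frames: %.3f ms, 256 frames: %.3f ms\n", ms64, ms256);
    printf("difference: %.3f ms ~= %.1f feet of air\n",
           ms256 - ms64, (ms256 - ms64) * feet_per_ms);
    return 0;
}
```

That works out to about 4 ms, or roughly four and a half feet of air, close to the figure above.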
People have widely differing ability to tolerate latency for sure, so in no way am I minimizing the problem. I'm just adding some context.
@wim Thank you for your explanation. The mentioned 1.33 ms for 64 samples/48 kHz is definitely not the total round-trip latency. I have a pretty powerful RME Babyface Pro FS at my home i7 PC, and the round trip at 64 samples/48 kHz is about 3.1 ms (the first interface mentioned above is the Focusrite Solo 2nd gen).
I would be really happy to get overall iPad latency with GarageBand armed under 9-10 ms. Still noticeable to my ears/fingers, but definitely acceptable.
Yeh, sub 10ms is generally what I can tolerate as well.
I honestly don't care about the numbers even a little. I know when I struggle with latency and when I don't.
It is a free app, and pretty amazing given that it is free. It is designed for ease of use and smooth playback, not for people who care about latency.
If minimal latency is important to you, you might want to consider a different app.
@espiegel123 Yes, I'm aware of that, but features like Drummer or the Smart piano/bass/guitar are amazing and, I guess, unique. These things tie me to GB more and more.