Audiobus: Use your music apps together.
What is Audiobus? — Audiobus is an award-winning music app for iPhone and iPad which lets you use your other music apps together. Chain effects on your favourite synth, run the output of apps or Audio Units into an app like GarageBand or Loopy, or select a different audio interface output for each app. Route MIDI between apps — drive a synth from a MIDI sequencer, or add an arpeggiator to your MIDI keyboard — or sync with your external MIDI gear. And control your entire setup from a MIDI controller.
Audiobus is the app that makes the rest of your setup better.
Comments
From other users here: https://forum.audiob.us/discussion/36296/gospel-musicians-launches-impulsation-on-sale-for-6-99/p4
Our AUv3 IMPULSation convolution reverb was able to achieve 12-14 instances at 20% CPU before crashing, and chances are the crash was due to the RAM limitations of AUv3 rather than CPU.
PS: You can fully manage the Impulses through our browser and import entire folders of impulses.
Yeah IMPULSation rocks on my 1st gen iPad Pro .. Convolution Pro is unusable due to crackling issues.
Ok. Major revision of my opinion here. After doing some more testing of all the convolution apps that I have with mixes that aren’t dense and have instruments panned away from the center, I realized that all the other convolution apps/plugins are really working in dual mono: the left input channel only gets processed through the left IR channel, and the right input channel through the right IR channel. In a lot of cases that works out fine. But if you have a detailed stereo source that doesn’t have everything towards the center, the imaging is unnatural. A piano hard-panned left should have reverberation in both the left and the right channels.
Of the four apps and plugins I tried on iOS, only Convolutor Pro was in true full stereo (rather than dual mono).
As a result one instance of it is essentially the equivalent of two instances of the others. So, when stereo imaging is important, Convolutor Pro gives a much better result. It is processor intensive and so needs larger buffers than the others to reduce CPU load. But it is worth it when imaging details are important.
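To make the dual-mono behavior described above concrete, here is a minimal toy sketch (plain Python, made-up numbers, not any plugin's actual code): each input channel is convolved only with its own IR channel, so a hard-panned source never excites the opposite output.

```python
def convolve(x, h):
    """Direct-form convolution of two 1-D float lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def dual_mono_reverb(in_l, in_r, ir_l, ir_r):
    """Dual mono: the left input sees only the left IR channel,
    the right input only the right IR channel (two convolutions)."""
    return convolve(in_l, ir_l), convolve(in_r, ir_r)

# A piano hard-panned left: the right input channel is silent.
in_l, in_r = [1.0, 0.5, 0.25], [0.0, 0.0, 0.0]
ir_l, ir_r = [0.6, 0.3, 0.1], [0.5, 0.4, 0.2]  # toy IR channels

out_l, out_r = dual_mono_reverb(in_l, in_r, ir_l, ir_r)
# out_r stays all zero: no reverberation ever reaches the right channel.
```

That all-zero right output is exactly the unnatural imaging described above: the room "answers" only on the side the source is panned to.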
Does impulsation do full stereo processing? It seems like FantasyVerb may be doing a sort of dual mono: if something is hard-panned reverberation only is heard from the channel it is panned to. Or am I mistaken?
We discussed it, and the CPU issues held us back from that: as you correctly figured out, one would need to double the amount of work.
Wait a sec. Aren't most stereo impulse responses created from a mono signal source???
@rs2000 all my IRs are mono 🤔
From the AppStore description:
Most of my IRs are stereo but they're created using a mono signal.
Which means that if you have a piano panned to the left, you could simply make a mono signal from it by copying the left channel to the right channel and feeding the convolutor with it.
I could imagine creating "true stereo" IRs by firing two separate dirac-like audio impulses into the room to capture left source and stereo response plus right source and stereo response but how would you go about defining what's left and what's right?
Every sound source meant to be processed by the convolutor is different, so there's no general answer to this.
@david_2017: You're most likely working with guitar amp or cabinet responses, right? 😉
AltiVerb (which in my opinion is the standard setter) provides mono-to-stereo and stereo-to-stereo IRs. Each has its own use. I converted a few of my AltiVerb IRs to wav files for my personal use on iOS. When using a stereo-to-stereo IR with a source where there are instruments with distinct positioning, the difference between true stereo and dual mono becomes apparent.
In my opinion, Jens could do himself a favor by more clearly describing this distinction between dual-mono and true stereo (in my opinion the current description is weak in meaningful detail). A few well-chosen audio examples would go a long way to demonstrating this. I think in most casual tests, dual mono can be quite satisfactory but in some important contexts it is unsatisfactory.
That's true, he doesn't really describe what his plugin does better, just 'full convolution' - whatever he means by that.
But tell me, how does AltiVerb realize stereo-to-stereo? Are they using two different stereo impulse responses simultaneously, one for each source position? That would explain the instruments with distinct positioning.
In my opinion, that App Store text you showed is an example of a poor description as it is vague on specifics and sounds like bad-mouthing the competition without specifics. If he simply said: "Unlike other iOS convolution plugins, this plugin does full stereo processing of each input channel like high-end hardware and desktop software. In true stereo, when using stereo-to-stereo impulse responses, you will hear reverberant sound from both left and right channels. Most iOS convolution reverbs sacrifice this realism to reduce CPU load.
This realism comes at a cost which discriminating users will find worthwhile. It may require using larger buffers in your host than with other reverbs that perform dual-mono rather than true stereo processing."
If someone reads that, it'll tell them exactly what he is talking about. And people for whom such things make a difference will realize what it has to offer. (The current text just sounds like bragging in my opinion -- particularly in light of a past app store description of another of his plugins where he blamed AUM for something that was his lack of understanding).
How does AltiVerb create their multichannel IRs (they have not just stereo-to-stereo but also surround IRs)? They play their source sweep from two speakers at the source end. Btw, there is a nice video on their web site about how to make a mono-to-stereo IR in AltiVerb: https://www.audioease.com/altiverb/sampling.php
Here is the text from their support page about the mono-to-stereo and stereo-to-stereo IRs:
And that's exactly why I asked.
There's no point in using the same stereo IR for the left and right channels independently. You could just downmix the input signal to mono and feed it to the convolutor; the pure effected signal mixed with the original stereo source would be the same, right?
No. That is not correct. The left input channel instrument's reverberation will be different in the left channel and the right channel. And the same is true for the right channel. There is a kind of rich interaction and phase cancellation that is different from doing true stereo. A lot of the time one can get away with downmixing, but sometimes the difference between stereo-to-stereo and mono-to-stereo is clearly audible. That is why AltiVerb provides both separate mono-to-stereo and stereo-to-stereo IRs.
If you take a source that has some hard-panned instruments and compare the result of downmixing the reverb input and running it mono-to-stereo against running the stereo input through a stereo-to-stereo IR from the same location, you will notice a distinct difference in realism. This is particularly true when you are using an IR on the master bus as glue that puts your mix in a room.
I think we're both saying the same thing. I don't question the difference between using one stereo IR for mono-to-stereo and two separate IRs recorded on-location with the reference signal at different locations, used for stereo-to-stereo.
What I've questioned is using two exactly identical IRs in a stereo-to-stereo convolution setup.
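The linearity argument behind that question can be checked numerically. This is a toy sketch with made-up numbers, not any plugin's code: because convolution is linear, convolving each input channel with the *same* IR and summing is identical to downmixing first and convolving once.

```python
def convolve(x, h):
    """Direct-form convolution of two 1-D float lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def add(a, b):
    """Sample-wise sum of two equal-length signals."""
    return [p + q for p, q in zip(a, b)]

in_l, in_r = [1.0, 0.0, -0.5], [0.2, 0.7, 0.1]
ir = [0.6, 0.3, 0.1]  # one IR channel, reused for both inputs

# Path A: convolve each input with the SAME IR, then sum the wet signals.
path_a = add(convolve(in_l, ir), convolve(in_r, ir))
# Path B: downmix to mono first, then convolve once.
path_b = convolve(add(in_l, in_r), ir)
# Linearity of convolution: both paths produce identical wet signals.
```

So using two *identical* IRs buys nothing over a mono downmix; the interesting case is when the two IRs genuinely differ.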
@rs2000 wrote:
What I've questioned is using two exactly identical IRs in a stereo-to-stereo convolution setup.
@rs2000 : Maybe I am confused about what you are saying. Are you talking about creating the IRs or applying the IR to create reverb?
It's basically about how Convolutor PE applies impulse responses to audio files.
My understanding of "Full convolution" and "True stereo" as used in the App Store description is that different stereo IRs are used for the left and right channels, and I wonder whether that's actually the case. The CPU appetite suggests it, but the results don't.
I see no reason to doubt that Jens is doing what he says: processing the left channel through the left and right channels of the IR and doing the same for the right input channel and combining the results. Btw, true stereo doesn't use two different stereo IRs. Stereo-to-stereo convolution involves running the two inputs independently through the same stereo IR. This will only be meaningful if that IR was created using stereo input.
Now, how big of a difference that will make depends on a few things:
As someone mentioned, an awful lot of the stereo IRs floating around are probably mono-to-stereo IRs. I have no idea about the quality of the IRs that Jens has built into Convolutor PE. I would love to be able to try it with some known stereo-to-stereo IRs (like the ones that I have). But I see no reason to doubt that he is doing what he says. As you say, the CPU use suggests that he is doing what he says.
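One plausible reading of "processing the left channel through the left and right channels of the IR" (a toy sketch with made-up numbers, not Convolutor PE's actual code) is four convolutions against a single stereo IR:

```python
def convolve(x, h):
    """Direct-form convolution of two 1-D float lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def add(a, b):
    """Sample-wise sum of two equal-length signals."""
    return [p + q for p, q in zip(a, b)]

def stereo_single_ir(in_l, in_r, ir_l, ir_r):
    """Each input channel runs through BOTH channels of one stereo IR;
    the per-output-channel results are summed (four convolutions)."""
    out_l = add(convolve(in_l, ir_l), convolve(in_r, ir_l))
    out_r = add(convolve(in_l, ir_r), convolve(in_r, ir_r))
    return out_l, out_r

# Hard-panned-left source: reverberation now reaches BOTH outputs.
in_l, in_r = [1.0, 0.5], [0.0, 0.0]
ir_l, ir_r = [0.6, 0.3], [0.4, 0.2]  # toy stereo IR
out_l, out_r = stereo_single_ir(in_l, in_r, ir_l, ir_r)
```

Note this scheme doubles the convolution count versus dual mono, which would match the reported CPU appetite.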
@StudioES The text in your screen shot says that it's not a single stereo IR but rather two stereo IRs, one recorded with only the left speaker emitting the reference signal and the second one with only the right speaker.
How would you be able to model two different microphone positions in a room otherwise?
Here's a nice explanation of using IRs in true stereo convolution.
https://www.avosound.com/en/tutorials/create-impulse-responses/convolution-reverb-for-mono-and-stereo
I believe that the information on the AltiVerb site is correct. There is a reason that their work has the stature that it does. Basically, you set up the two speakers that you want as your sources sufficiently far apart to represent the input soundstage that you want -- and it seems that by trial-and-error they have come up with a good sense of placement. And you set up your stereo mics in an area whose acoustics you want to capture. Record your sweep and deconvolve (or use a clapper or starter pistol if you must -- but they explain why they think a sweep is preferable). The result is a single stereo IR file.
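The reason a clapper or starter pistol works at all is that convolution with a unit impulse is the identity: what the mics record is (ideally) the room's impulse response itself. A sweep plus deconvolution recovers the same IR with far better signal-to-noise. A toy numeric check of the impulse case (made-up numbers):

```python
def convolve(x, h):
    """Direct-form convolution of two 1-D float lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

room_ir = [0.8, 0.4, 0.2, 0.1, 0.05]  # toy room response (one channel)
dirac = [1.0]                          # ideal impulse (clap / pistol shot)

recorded = convolve(dirac, room_ir)
# The recording of a perfect impulse IS the impulse response.
```

In practice a clap is not a perfect impulse and carries little energy, which is exactly why AudioEase prefers the sweep-and-deconvolve method.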
@espiegel123 Sorry to disagree (I'm not into arguing, I'd just like to fully understand the differences). Here's another short essay that I find clearer than the info on the audioease site, copied from the Liquidsonics page for Reverberate:
@rs2000 : having read a bit more, I think there are basically two different approaches that get called 'true stereo'. One is the approach AudioEase describes as stereo-to-stereo, which uses one stereo impulse response file; the other is the one described by LiquidSonics, which uses two different stereo impulse response files. In both cases, both the left and right inputs are processed through a stereo IR. In one case, the same IR is used twice.
Even though the same stereo IR is used for the left and right inputs in the AudioEase method, the result is different from using a summed input.
I don't know enough to know how significant the differences are or under what conditions they will be significant. Both of these flavors of "true stereo" will sometimes be noticeably different from mono-to-stereo.
I had an exchange with JAX recently, and my impression was that for the pro version he might be using the LiquidSonics approach, or maybe that would be an option.
EDIT ADDED: I exchanged email with the AltiVerb folks, and the source of my confusion was the ambiguously worded text rs2000 quoted up-thread. They do stereo the same way that LiquidSonics does. They record one stereo impulse response playing the sweep through the left speaker in the setup. Then they record a stereo impulse response for the right-speaker sweep. And when applying the IRs, the left input gets convolved with the L and R channels of the "left speaker IR" and the right input channel gets convolved with the L and R channels of the "right speaker IR".
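The scheme described in that edit can be sketched as follows (a toy model with made-up numbers, assuming four convolutions as described, not AltiVerb's or LiquidSonics' actual code):

```python
def convolve(x, h):
    """Direct-form convolution of two 1-D float lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def add(a, b):
    """Sample-wise sum of two equal-length signals."""
    return [p + q for p, q in zip(a, b)]

def true_stereo(in_l, in_r, ir_left_src, ir_right_src):
    """ir_left_src  = (L, R) stereo IR recorded with only the LEFT speaker,
       ir_right_src = (L, R) stereo IR recorded with only the RIGHT speaker.
       Each input is convolved with both channels of ITS OWN source IR;
       four convolutions total, hence roughly double the CPU of dual mono."""
    out_l = add(convolve(in_l, ir_left_src[0]), convolve(in_r, ir_right_src[0]))
    out_r = add(convolve(in_l, ir_left_src[1]), convolve(in_r, ir_right_src[1]))
    return out_l, out_r

# Hard-panned-left source: the room still answers in BOTH channels,
# because the left-speaker IR itself is stereo.
in_l, in_r = [1.0, 0.5], [0.0, 0.0]
irL = ([0.7, 0.2], [0.3, 0.15])  # toy left-speaker stereo IR
irR = ([0.3, 0.1], [0.6, 0.25])  # toy right-speaker stereo IR
out_l, out_r = true_stereo(in_l, in_r, irL, irR)
```

Because irL and irR genuinely differ, this is not equivalent to downmixing, which resolves the linearity objection raised earlier in the thread.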