Convolution Pro by Jens Guell

Comments

  • @espiegel123 Yes, that's perfectly possible.
    Not that I would always insist on true stereo to stereo (in some cases it might even be counter-productive), it's just good to know which is which.
    Let's hope for Jens writing our dream convolutor one day, the PE version looks very promising already 😎👏

  • @rs2000 said:
    @espiegel123 Yes, that's perfectly possible.
    Not that I would always insist on true stereo to stereo (in some cases it might even be counter-productive), it's just good to know which is which.
    Let's hope for Jens writing our dream convolutor one day, the PE version looks very promising already 😎👏

    +1

  • @Gravitas said:

    @rs2000 said:
    @espiegel123 Yes, that's perfectly possible.
    Not that I would always insist on true stereo to stereo (in some cases it might even be counter-productive), it's just good to know which is which.
    Let's hope for Jens writing our dream convolutor one day, the PE version looks very promising already 😎👏

    +1

    For anyone hoping to see the Pro version, I recommend both rating the PE version and writing a nice review. I've updated my review and rating now that I better understand what is going on with it and what the other convolution apps are doing.

    I think one problem facing a pro convolution app is that an awful lot of the free IRs floating around seem to be mono-to-stereo and don't put a true-stereo reverb to the best use.

  • @espiegel123 said:

    @Gravitas said:

    @rs2000 said:
    @espiegel123 Yes, that's perfectly possible.
    Not that I would always insist on true stereo to stereo (in some cases it might even be counter-productive), it's just good to know which is which.
    Let's hope for Jens writing our dream convolutor one day, the PE version looks very promising already 😎👏

    +1

    For anyone hoping to see the Pro version, I recommend both rating the PE version and writing a nice review. I've updated my review and rating now that I better understand what is going on with it and what the other convolution apps are doing.

    I think one problem facing a pro convolution app is that an awful lot of the free IRs floating around seem to be mono-to-stereo and don't put a true-stereo reverb to the best use.

    I agree.

    I had found the same.

  • edited January 2020

    @espiegel123 said:
    @rs2000 : having read a bit more, I think there are basically two different approaches that get called 'true stereo'. One approach is the one that AudioEase describes as stereo-to-stereo and uses one stereo impulse response file and the one described by Liquisonics that uses different two stereo impulse response files. In both cases, both the left and right inputs are processed through a stereo IR. In one case, the same IR is used twice.

    Even though the same stereo IR is used for the left and right inputs in the AudioEase method, the result is different from using a summed input.

    I don't know enough to know how significant the differences are or under what conditions they will be significant. Both of these flavors of "true stereo" will sometimes be noticeably different from mono-to-stereo.

    I had an exchange with JAX recently and my impression was that for the pro version he might be using the LiquidSonics approach, or maybe that would be an option.

    After thinking about the information that you posted, I peeked into Altiverb's IR folders and it looks like the stereo-to-stereo IR folders do have 2 pairs of IRs. I have written to their support to get clarification about how to record impulses for stereo-to-stereo IRs. I haven't heard back yet. I anticipate that the answer is that you set things up as we said (a pair of stereo speakers on the stage to play the impulse and stereo mics at the place where you want to capture the reverberation) and then do separate recordings of the sweep from the left speaker and the right speaker, rather than what I said. Which would mean that when they apply them, they use separate IRs for the left and right channels.

    Working under that assumption, I am going to do some tests and see if I can re-create a couple of these IR pairs for testing.

    The Rooms developer has pointed out to me that if one has a proper set of stereo IRs that one can do true stereo reverb by using two instances of Rooms. Set the first instance to Mono-Left operation and use one of the true-stereo IRs. Have another instance of Rooms using Mono-Right operation and the other IR from the IR pair.

    CORRECTION: To set up Rooms to do full stereo, have one stereo instance using the "left" stereo IR (of a stereo IR pair) and another instance using the "right" stereo IR. Feed the "left" instance with left-channel audio (by doing a mono-to-stereo conversion of the left channel input). Do the same for instance 2, feeding it the right channel of your source (again doing a mono-to-stereo conversion of the right input). When set up like this, you will probably need to increase your buffer sizes, though your mileage may vary.
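The convolution topologies discussed in this thread (mono-to-stereo, parallel stereo, and four-path true stereo) can be sketched in a few lines of Python. This is a minimal illustration, not code from Convolutor or any other app; the IR names (`ir_ll` for "left input to left output", etc.) are invented for the example.

```python
def convolve(x, h):
    """Direct-form convolution of signal x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def mono_to_stereo(in_l, in_r, ir_l, ir_r):
    """Sum the input to mono, then convolve once per output channel."""
    mono = [l + r for l, r in zip(in_l, in_r)]
    return convolve(mono, ir_l), convolve(mono, ir_r)

def parallel_stereo(in_l, in_r, ir_l, ir_r):
    """Each input channel through its own IR channel; no cross-feed."""
    return convolve(in_l, ir_l), convolve(in_r, ir_r)

def true_stereo(in_l, in_r, ir_ll, ir_lr, ir_rl, ir_rr):
    """Four convolution paths: each input feeds both outputs."""
    out_l = [a + b for a, b in zip(convolve(in_l, ir_ll), convolve(in_r, ir_rl))]
    out_r = [a + b for a, b in zip(convolve(in_l, ir_lr), convolve(in_r, ir_rr))]
    return out_l, out_r
```

With a hard-left input, mono-to-stereo and true stereo both put energy in the right output; parallel stereo leaves it silent, which is exactly the difference the listening tests in this thread are probing.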

  • @espiegel123 Good work. I was actually curious about something along those lines: using a regular version twice.

  • @espiegel123 said:

    @espiegel123 said:
    @rs2000 : having read a bit more, I think there are basically two different approaches that get called 'true stereo'. One is the approach that AudioEase describes as stereo-to-stereo, which uses one stereo impulse response file; the other is the one described by LiquidSonics, which uses two different stereo impulse response files. In both cases, both the left and right inputs are processed through a stereo IR. In one case, the same IR is used twice.

    Even though the same stereo IR is used for the left and right inputs in the AudioEase method, the result is different from using a summed input.

    I don't know enough to know how significant the differences are or under what conditions they will be significant. Both of these flavors of "true stereo" will sometimes be noticeably different from mono-to-stereo.

    I had an exchange with JAX recently and my impression was that for the pro version he might be using the LiquidSonics approach, or maybe that would be an option.

    After thinking about the information that you posted, I peeked into Altiverb's IR folders and it looks like the stereo-to-stereo IR folders do have 2 pairs of IRs. I have written to their support to get clarification about how to record impulses for stereo-to-stereo IRs. I haven't heard back yet. I anticipate that the answer is that you set things up as we said (a pair of stereo speakers on the stage to play the impulse and stereo mics at the place where you want to capture the reverberation) and then do separate recordings of the sweep from the left speaker and the right speaker, rather than what I said. Which would mean that when they apply them, they use separate IRs for the left and right channels.

    Working under that assumption, I am going to do some tests and see if I can re-create a couple of these IR pairs for testing.

    The Rooms developer has pointed out to me that if one has a proper set of stereo IRs that one can do true stereo reverb by using two instances of Rooms. Set the first instance to Mono-Left operation and use one of the true-stereo IRs. Have another instance of Rooms using Mono-Right operation and the other IR from the IR pair.

    Looking forward to your results.

  • Here’s a question that I have, though… Didn’t James claim that this PE version was less computationally demanding compared to an algorithm-only approach?

  • @espiegel123 Exactly, that's how I understood stereo-to-stereo in the first place and I'd be surprised if AudioEase did it differently.

  • If someone has a nice sounding dry source file with instruments placed far left and far right (which will expose the unreality of dual-mono), please share and I'll use them in the tests. I can't share the IRs that I am using since they are licensed.

  • @espiegel123

    Wouldn't Jens Guell have some for you to do some real world tests?

  • @Gravitas said:
    @espiegel123

    Wouldn't Jens Guell have some for you to do some real world tests?

    Have IRs, or source files to apply them to?

  • @Gravitas said:
    @espiegel123

    Wouldn't Jens Guell have some for you to do some real world tests?

    I've done a channel test session in AUM and it doesn't look like his app uses different stereo IRs for each input channel. I believe his take on "true stereo" is using a single stereo IR for each preset and processing the left channel with the left channel of the impulse response and the right channel with the right channel of the impulse response, which is basically "parallel stereo" processing but not true stereo.
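The channel test described in this comment can be automated: feed a probe signal into the left input only and check whether the right output carries any energy. A minimal Python sketch of the idea, with toy processors standing in for a real plugin (none of this is Convolutor's actual code):

```python
def probe_is_true_stereo(process, n=64):
    """process(in_l, in_r) -> (out_l, out_r). Probe with a left-only impulse
    and report whether any signal leaks into the right output."""
    in_l = [1.0] + [0.0] * (n - 1)   # unit impulse, left channel only
    in_r = [0.0] * n
    _, out_r = process(in_l, in_r)
    return any(abs(s) > 1e-9 for s in out_r)

def parallel_proc(in_l, in_r):
    # channels processed independently (simple gain as a stand-in "reverb")
    return [0.7 * s for s in in_l], [0.7 * s for s in in_r]

def cross_proc(in_l, in_r):
    # each output gets a share of both inputs (true-stereo behaviour)
    out_l = [0.7 * l + 0.3 * r for l, r in zip(in_l, in_r)]
    out_r = [0.3 * l + 0.7 * r for l, r in zip(in_l, in_r)]
    return out_l, out_r
```

A parallel-stereo processor leaves the right output silent under this probe; a true-stereo one does not.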

  • @rs2000 said:

    @Gravitas said:
    @espiegel123

    Wouldn't Jens Guell have some for you to do some real world tests?

    I've done a channel test session in AUM and it doesn't look like his app uses different stereo IRs for each input channel. I believe his take on "true stereo" is using a single stereo IR for each preset and processing the left channel with the left channel of the impulse response and the right channel with the right channel of the impulse response, which is basically "parallel stereo" processing but not true stereo.

    I am not sure, but I think he may have switched to parallel processing in the second PE build he posted, because of the comments here about the crackles and CPU load. I think that is what the App Store description is saying.

  • @espiegel123 said:

    @rs2000 said:

    @Gravitas said:
    @espiegel123

    Wouldn't Jens Guell have some for you to do some real world tests?

    I've done a channel test session in AUM and it doesn't look like his app uses different stereo IRs for each input channel. I believe his take on "true stereo" is using a single stereo IR for each preset and processing the left channel with the left channel of the impulse response and the right channel with the right channel of the impulse response, which is basically "parallel stereo" processing but not true stereo.

    I am not sure, but I think he may have switched to parallel processing in the second PE build he posted, because of the comments here about the crackles and CPU load. I think that is what the App Store description is saying.

    I've only now updated to the latest version, and in both versions the channels were processed separately.
    CPU load is the same, by the way: just as much crackling with a 256-sample buffer size in AUM on an iPad 6, with a 4.00s IR loaded into a single instance of Convolutor PE.

  • @espiegel123 said:

    @Gravitas said:
    @espiegel123

    Wouldn't Jens Guell have some for you to do some real world tests?

    Have IRs, or source files to apply them to?

    Both, now that you've mentioned it.

  • @Gravitas @rs2000 @audiobussy

    I've created a new topic for continued discussion about this so that we can talk about other IR apps than Convolutor without disrupting the topic:

    https://forum.audiob.us/discussion/36516/convoluted-convo#latest

    I've uploaded a zip with a piano and bass file with extreme panning, processed through Altiverb using a full-stereo setting, and another plugin on iOS using a pair of IRs created from the Altiverb IR (for my private use) to try to accomplish full stereo. There is a third example processed through an iOS plugin that uses the dual-mono method.

  • edited January 2020

    Reading the news at https://midi.digitster.com/ indicates that Jens might not release the pro version.

    If you’ve been following the thread, and are considering purchasing, you should contact Jens and let him know.

  • @frond said:
    Reading the news at https://midi.digitster.com/ indicates that Jens might not release the pro version.

    If you’ve been following the thread, and are considering purchasing, you should contact Jens and let him know.

    Also it would probably help a lot to leave positive reviews on the App Store for the free version.

  • @richardyot said:

    @frond said:
    Reading the news at https://midi.digitster.com/ indicates that Jens might not release the pro version.

    If you’ve been following the thread, and are considering purchasing, you should contact Jens and let him know.

    Also it would probably help a lot to leave positive reviews on the App Store for the free version.

    Ahhh....

    I read the updated news.
    It will be a pity if they do that.

    I’ll email them as soon
    as I get the chance.

    Are they on the AB
    forum do you know?

  • No. He got some negative feedback and left in a huff. The tone of his posts does suggest some sort of personality disorder that gets triggered by criticism.

  • @TheOriginalPaulB said:
    No. He got some negative feedback and left in a huff. The tone of his posts does suggest some sort of personality disorder that gets triggered by criticism.

    Sounds like he hasn’t
    had much encouragement or
    he’s a total prima donna.
    I‘m used to both.

    Okay, I’ll see if I can find
    his email and I’ll email him
    to find out what’s what.

    Thanks.

  • @TheOriginalPaulB said:
    No. He got some negative feedback and left in a huff. The tone of his posts does suggest some sort of personality disorder that gets triggered by criticism.

    I would characterize it differently than leaving due to criticism. He made some questionable statements about some technical issues in strong language and when another developer tried (with no animus) to clarify the issue, Jens responded in a less than respectful way without checking out the merits of what the other dev was explaining. At least that was the impression I had.

  • @espiegel123 said:

    @TheOriginalPaulB said:
    No. He got some negative feedback and left in a huff. The tone of his posts does suggest some sort of personality disorder that gets triggered by criticism.

    I would characterize it differently than leaving due to criticism. He made some questionable statements about some technical issues in strong language and when another developer tried (with no animus) to clarify the issue, Jens responded in a less than respectful way without checking out the merits of what the other dev was explaining. At least that was the impression I had.

    +1👍

  • @espiegel123 said:

    @TheOriginalPaulB said:
    No. He got some negative feedback and left in a huff. The tone of his posts does suggest some sort of personality disorder that gets triggered by criticism.

    I would characterize it differently than leaving due to criticism. He made some questionable statements about some technical issues in strong language and when another developer tried (with no animus) to clarify the issue, Jens responded in a less than respectful way without checking out the merits of what the other dev was explaining. At least that was the impression I had.

    I'd suggest just ignoring that. Developers are a rare species, and it's not like we have too many to choose from.
    Jens is definitely interested in building high-quality apps and, like every iOS developer, he has been struggling with the many pitfalls that come with iOS development, and is proud to have gotten past many of them.

    If you knew what kind of people were involved in building your car, chances are you'd never have purchased it ;)

  • @rs2000 said:

    @espiegel123 said:

    @TheOriginalPaulB said:
    No. He got some negative feedback and left in a huff. The tone of his posts does suggest some sort of personality disorder that gets triggered by criticism.

    I would characterize it differently than leaving due to criticism. He made some questionable statements about some technical issues in strong language and when another developer tried (with no animus) to clarify the issue, Jens responded in a less than respectful way without checking out the merits of what the other dev was explaining. At least that was the impression I had.

    I'd suggest just ignoring that. Developers are a rare species, and it's not like we have too many to choose from.
    Jens is definitely interested in building high-quality apps and, like every iOS developer, he has been struggling with the many pitfalls that come with iOS development, and is proud to have gotten past many of them.

    If you knew what kind of people were involved in building your car, chances are you'd never have purchased it ;)

    +1

    I know this as a musician/composer/teacher/mentor.

  • @espiegel123 said:

    @TheOriginalPaulB said:
    No. He got some negative feedback and left in a huff. The tone of his posts does suggest some sort of personality disorder that gets triggered by criticism.

    I would characterize it differently than leaving due to criticism. He made some questionable statements about some technical issues in strong language and when another developer tried (with no animus) to clarify the issue, Jens responded in a less than respectful way without checking out the merits of what the other dev was explaining. At least that was the impression I had.

    Yeah, pretty much. :)

  • “We have been searching intensively for performance bugs in our JAX Convolutor engine for some days now. But there are none!

    We made several advanced tests with circular buffers and optimized fixed internal buffer sizes, which introduces latency (test-wise). With most AUv3 host applications, the lower you set the rendering buffer size, the higher the extreme CPU peaks become, regardless of what you do in the code. Disproportionately so. This behavior is anything but normal. A fixed circular buffer should keep the performance nearly constant regardless of which host buffer size is selected!

    Our convolution engine (the exact same codebase) uses a maximum of 3% CPU on a quite old 2012 Mac Mini with a 128-second stereo IR. THREE PERCENT! On iOS, with the same configuration, it is around 20 to 40 percent on the latest iPad Pro (buffer size 1024), up to 90% with 256 samples in AUM...

    So either iOS is buggy as hell, or there is a serious audio-system misconception/misconfiguration on Apple's mobile devices. Multi-threading and multi-core support seem to be completely missing with AUv3. This phenomenon grows exponentially with higher performance demands.

    However, we reduced the CPU hit as much as possible with the current implementation of the JAX Convolutors. The latest update will come soon. More is just not possible, unless something fundamental changes on iOS/iPadOS for audio processing. We tested on the latest devices, and it seems there is no POWER at all in these CPUs; it is just ridiculous. It feels like developing on a single-core 1990s desktop computer.

    Apple's high-performance Accelerate framework is said to be hardware-optimized, and we additionally used NEON optimizations extensively all over the code. It is useless if the system (obviously) throttles the audio performance artificially. This is not just a suspicion. This is a fact.”

    From Jens’s site. Having used the latest iPad Pro since November 2018, I can only agree with Jens’s assessment.

    How can such powerful machines be so poor at rendering audio?

    Do engineers at Apple not care at all about audio performance?
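The "fixed internal buffer size, which introduces latency" idea from the quoted post can be sketched as a FIFO that decouples the host's buffer size from the processing block size. This is a speculative illustration of the general technique, not JAX's actual engine code; the one-block latency is inherent to the approach.

```python
BLOCK = 4  # fixed internal block size (tiny, for illustration)

class FixedBlockProcessor:
    """Accumulates host audio, processes it in fixed-size blocks, and
    outputs it BLOCK samples late, so per-block cost stays constant
    no matter what buffer size the host uses."""

    def __init__(self, block=BLOCK):
        self.block = block
        self.in_fifo = []
        # Priming the output FIFO with `block` zeros is what creates
        # the fixed latency.
        self.out_fifo = [0.0] * block

    def _process_block(self, samples):
        # Stand-in for the real per-block work (e.g. one FFT partition
        # of a convolution). Here: pass-through.
        return list(samples)

    def render(self, host_buffer):
        """Called by the host with an arbitrary buffer size."""
        self.in_fifo.extend(host_buffer)
        while len(self.in_fifo) >= self.block:
            chunk, self.in_fifo = (self.in_fifo[:self.block],
                                   self.in_fifo[self.block:])
            self.out_fifo.extend(self._process_block(chunk))
        n = len(host_buffer)
        out, self.out_fifo = self.out_fifo[:n], self.out_fifo[n:]
        return out
```

The trade-off: the internal work happens in constant-size chunks regardless of the host buffer size, but every sample comes out BLOCK samples late. Whether this would actually flatten the CPU spikes described in the quote on iOS is exactly what the post disputes.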
