Convolution Pro by Jens Guell


Comments

  • edited January 2020

    From other users here: https://forum.audiob.us/discussion/36296/gospel-musicians-launches-impulsation-on-sale-for-6-99/p4

    Our AUv3 IMPULSation convolution Reverb was able to achieve 12-14 instances at 20% CPU, before crashing and chances are the crash was due to the Ram limitations of AUv3 and not CPU.

    PS: You can fully manage the Impulses through our browser and import entire folders of impulses.

  • @GospelMusicians said:
    From other users here: https://forum.audiob.us/discussion/36296/gospel-musicians-launches-impulsation-on-sale-for-6-99/p4

    Our AUv3 IMPULSation convolution Reverb was able to achieve 12-14 instances at 20% CPU, before crashing and chances are the crash was due to the Ram limitations of AUv3 and not CPU.

    PS: You can fully manage the Impulses through our browser and import entire folders of impulses.

    Yeah IMPULSation rocks on my 1st gen iPad Pro .. Convolution Pro is unusable due to crackling issues.

  • Ok. Major revision of my opinion here. After doing some more testing of all the convolution apps that I have with mixes that aren’t dense and have instruments panned away from the center, I realized that all the other convolution apps/plugins are really working in dual mono: the left input only gets processed through the left IR channel and the right input channel through the right IR channel. In a lot of cases that works out fine. But if you have a detailed stereo source that doesn’t have everything towards the center, the imaging is unnatural. A piano hard-panned left should have reverberation in both the left and the right channels.

    Of the four apps and plugins I tried on iOS, only convolutor was in true full stereo (rather than dual mono).

    As a result one instance of it is essentially the equivalent of two instances of the others. So, when stereo imaging is important, Convolutor Pro gives a much better result. It is processor intensive and so needs larger buffers than the others to reduce CPU load. But it is worth it when imaging details are important.
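    To make the dual-mono vs. true-stereo distinction concrete, here is a rough numpy sketch (plain offline convolution for clarity; a real plugin would use partitioned FFT convolution, and all names here are made up):

```python
import numpy as np

def dual_mono(in_l, in_r, ir_l, ir_r):
    # Each input channel is convolved only with its own IR channel,
    # so a source that exists only on the left never reaches the right.
    return np.convolve(in_l, ir_l), np.convolve(in_r, ir_r)

def true_stereo(in_l, in_r, ir_ll, ir_lr, ir_rl, ir_rr):
    # Four convolutions: each input channel has its own *stereo* room
    # response, so a hard-panned piano still reverberates on both sides.
    out_l = np.convolve(in_l, ir_ll) + np.convolve(in_r, ir_rl)
    out_r = np.convolve(in_l, ir_lr) + np.convolve(in_r, ir_rr)
    return out_l, out_r

# Hard-panned source: signal on the left input only.
src = np.array([1.0, 0.5, 0.25])
silence = np.zeros_like(src)
ir = np.array([0.6, 0.3, 0.1])  # toy IR, same for every channel here

_, right_dm = dual_mono(src, silence, ir, ir)
_, right_ts = true_stereo(src, silence, ir, ir, ir, ir)
```

    With dual mono the right output stays completely silent for the hard-panned source, while true stereo puts reverberation on both sides; that is the unnatural imaging described above.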

  • @GospelMusicians said:
    From other users here: https://forum.audiob.us/discussion/36296/gospel-musicians-launches-impulsation-on-sale-for-6-99/p4

    Our AUv3 IMPULSation convolution Reverb was able to achieve 12-14 instances at 20% CPU, before crashing and chances are the crash was due to the Ram limitations of AUv3 and not CPU.

    PS: You can fully manage the Impulses through our browser and import entire folders of impulses.

    Does IMPULSation do full stereo processing? It seems like FantasyVerb may be doing a sort of dual mono: if something is hard-panned, reverberation is only heard from the channel it is panned to. Or am I mistaken?

  • @espiegel123 said:

    @GospelMusicians said:
    From other users here: https://forum.audiob.us/discussion/36296/gospel-musicians-launches-impulsation-on-sale-for-6-99/p4

    Our AUv3 IMPULSation convolution Reverb was able to achieve 12-14 instances at 20% CPU, before crashing and chances are the crash was due to the Ram limitations of AUv3 and not CPU.

    PS: You can fully manage the Impulses through our browser and import entire folders of impulses.

    Does impulsation do full stereo processing? It seems like FantasyVerb may be doing a sort of dual mono: if something is hard-panned reverberation only is heard from the channel it is panned to. Or am I mistaken?

    We discussed it, but the CPU issues kept us back from that; as you correctly figured out, one would need to double the amount of work.

  • @espiegel123 said:
    Ok. Major revision of my opinion here. After doing some more testing of all the convolution apps that I have with mixes that aren’t dense and have instruments panned away from the center, I realized that all the other convolution apps/plugins are really working in dual mono: the left input it only gets processed through the left IR channel and the right input channel through the right IR channel. In a lot of cases that works out fine. But if you have a detailed stereo source that doesn’t have everything towards the center, the imaging is unnatural. A piano hard-panned left should have reverberation for both the left and the right.

    Of the four apps and plugins I tried on iOS, only convolutor was in true full stereo (rather than dual mono).

    As a result one instance of it is essentially the equivalent of two instances of the others. So, when stereo imaging is important, Convolutor Pro gives a much better result. It is processor intensive and so needs larger buffers than the others to reduce CPU load. But it is worth it when imaging details are important.

    Wait a sec. Aren't most stereo impulse responses created from a mono signal source???

  • @rs2000 all my IRs are mono 🤔

  • @espiegel123 said:
    Ok. Major revision of my opinion here. After doing some more testing of all the convolution apps that I have with mixes that aren’t dense and have instruments panned away from the center, I realized that all the other convolution apps/plugins are really working in dual mono: the left input it only gets processed through the left IR channel and the right input channel through the right IR channel. In a lot of cases that works out fine. But if you have a detailed stereo source that doesn’t have everything towards the center, the imaging is unnatural. A piano hard-panned left should have reverberation for both the left and the right.

    Of the four apps and plugins I tried on iOS, only convolutor was in true full stereo (rather than dual mono).

    As a result one instance of it is essentially the equivalent of two instances of the others. So, when stereo imaging is important, Convolutor Pro gives a much better result. It is processor intensive and so needs larger buffers than the others to reduce CPU load. But it is worth it when imaging details are important.

    From the AppStore description:

  • edited January 2020

    @david_2017 said:
    @rs2000 all my IRs are mono 🤔

    Most of my IRs are stereo but they're created using a mono signal.
    Which means that if you have a piano panned to the left, you could simply make a mono signal from it by copying the left channel to the right channel and feeding the convolutor with it.
    I could imagine creating "true stereo" IRs by firing two separate dirac-like audio impulses into the room to capture left source and stereo response plus right source and stereo response but how would you go about defining what's left and what's right?
    Every sound source meant to be processed by the convolutor is different, so there's no general answer to this.

    @david_2017: You're most likely working with guitar amp or cabinet responses, right? 😉

  • edited January 2020
    The user and all related content has been deleted.
  • @rs2000 said:

    @david_2017 said:
    @rs2000 all my IRs are mono 🤔

    Most of my IRs are stereo but they're created using a mono signal.
    Which means that if you have a piano panned to the left, you could simply make a mono signal from it by copying the left channel to the right channel and feeding the convolutor with it.
    I could imagine creating "true stereo" IRs by firing a dirac-like audio impulse into the room to capture left source and stereo response plus right source and stereo response but how would you go about defining what's left and what's right?
    Every sound source meant to be processed by the convolutor is different, so there's no general answer to this.

    AltiVerb (which in my opinion is the standard setter) provides mono-to-stereo and stereo-to-stereo IRs. Each has its own use. I converted a few of my AltiVerb IRs to wav files for my personal use on iOS. When using a stereo-to-stereo IR with a source where there are instruments with distinct positioning, the difference between the true stereo and dual mono becomes apparent.

    In my opinion, Jens could do himself a favor by more clearly describing this distinction between dual-mono and true stereo (in my opinion the current description is weak in meaningful detail). A few well-chosen audio examples would go a long way to demonstrating this. I think in most casual tests, dual mono can be quite satisfactory but in some important contexts it is unsatisfactory.

  • @espiegel123 said:

    @rs2000 said:

    @david_2017 said:
    @rs2000 all my IRs are mono 🤔

    Most of my IRs are stereo but they're created using a mono signal.
    Which means that if you have a piano panned to the left, you could simply make a mono signal from it by copying the left channel to the right channel and feeding the convolutor with it.
    I could imagine creating "true stereo" IRs by firing a dirac-like audio impulse into the room to capture left source and stereo response plus right source and stereo response but how would you go about defining what's left and what's right?
    Every sound source meant to be processed by the convolutor is different, so there's no general answer to this.

    AltiVerb (which in my opinion is the standard setter) provides mono-to-stereo and stereo-to-stereo IRs. Each has its own use. I converted a few of my ALtiVerb IRs to wav files for my personal use on iOS. When using a stereo-stereo IR with a source where there are instruments with distinct positioning, the difference between the true stereo and dual mono becomes apparent.

    In my opinion, Jens could do himself a favor by more clearly describing this distinction between dual-mono and true stereo (in my opinion the current description is weak in meaningful detail). A few well-chosen audio examples would go a long way to demonstrating this. I think in most casual tests, dual mono can be quite satisfactory but in some important contexts it is unsatisfactory.

    That's true, he doesn't really describe what his plugin does better, just 'full convolution' - whatever he means by that.
    But tell me, how does Altiverb realize stereo-to-stereo? Are they using two different stereo impulse responses simultaneously, one for each source position? That would explain the instruments with distinct positioning.

  • @Faland said:

    @espiegel123 said:
    Ok. Major revision of my opinion here. After doing some more testing of all the convolution apps that I have with mixes that aren’t dense and have instruments panned away from the center, I realized that all the other convolution apps/plugins are really working in dual mono: the left input it only gets processed through the left IR channel and the right input channel through the right IR channel. In a lot of cases that works out fine. But if you have a detailed stereo source that doesn’t have everything towards the center, the imaging is unnatural. A piano hard-panned left should have reverberation for both the left and the right.

    Of the four apps and plugins I tried on iOS, only convolutor was in true full stereo (rather than dual mono).

    As a result one instance of it is essentially the equivalent of two instances of the others. So, when stereo imaging is important, Convolutor Pro gives a much better result. It is processor intensive and so needs larger buffers than the others to reduce CPU load. But it is worth it when imaging details are important.

    From the AppStore description:

    In my opinion, that App Store text you showed is an example of a poor description as it is vague on specifics and sounds like bad-mouthing the competition without specifics. If he simply said: "Unlike other iOS convolution plugins, this plugin does full stereo processing of each input channel like high-end hardware and desktop software. In true stereo, when using stereo-to-stereo impulse responses, you will hear reverberant sound from both left and right channels. Most iOS convolution reverbs sacrifice this realism to reduce CPU load.

    This realism comes at a cost which discriminating users will find worthwhile. It may require using larger buffers in your host than with other reverbs that perform dual-mono rather than true stereo processing."

    If someone reads that, it'll tell them exactly what he is talking about. And people for whom such things make a difference will realize what it has to offer. (The current text just sounds like bragging in my opinion -- particularly in light of a past app store description of another of his plugins where he blamed AUM for something that was his lack of understanding).

  • @rs2000 said:

    @espiegel123 said:

    @rs2000 said:

    @david_2017 said:
    @rs2000 all my IRs are mono 🤔

    Most of my IRs are stereo but they're created using a mono signal.
    Which means that if you have a piano panned to the left, you could simply make a mono signal from it by copying the left channel to the right channel and feeding the convolutor with it.
    I could imagine creating "true stereo" IRs by firing a dirac-like audio impulse into the room to capture left source and stereo response plus right source and stereo response but how would you go about defining what's left and what's right?
    Every sound source meant to be processed by the convolutor is different, so there's no general answer to this.

    AltiVerb (which in my opinion is the standard setter) provides mono-to-stereo and stereo-to-stereo IRs. Each has its own use. I converted a few of my ALtiVerb IRs to wav files for my personal use on iOS. When using a stereo-stereo IR with a source where there are instruments with distinct positioning, the difference between the true stereo and dual mono becomes apparent.

    In my opinion, Jens could do himself a favor by more clearly describing this distinction between dual-mono and true stereo (in my opinion the current description is weak in meaningful detail). A few well-chosen audio examples would go a long way to demonstrating this. I think in most casual tests, dual mono can be quite satisfactory but in some important contexts it is unsatisfactory.

    That's true, he doesn't really describe what his plugin does better, just 'full convolution' - whatever he means by that.
    But tell me, how does Altiverb realize Stereo-to-stereo, are they using two different stereo impulse responses simultaneously, one for each source position? That would explain the instruments with distinct positioning.

    How does AltiVerb create their multichannel IRs (they have not just stereo-to-stereo but also surround IRs)? They play their source sweep from two speakers at the source end. Btw, there is a nice video on their web site about how to make a mono-to-stereo IR in Altiverb: https://www.audioease.com/altiverb/sampling.php

    Here is the text from their support page about the mono-to-stereo and stereo-to-stereo

    What is the difference between the mono to stereo and the stereo to stereo options in a stereo out Altiverb ?
    
    Best way to understand this is to go back with me to our sampling session when we recorded the room's acoustics for the IR in Altiverb.
    
    mono to stereo
    We put a single speaker in the center of the stage and two mics in the audience.
    
    This results in a mono to stereo IR.
    When you playback your audio through this IR in Altiverb, it will sound like your audio comes out of that single speaker from the center of the stage (mono!) picked up with two microphones (stereo!).
    This IR gives a stereo reverb, however the input is (mixed to) mono, as there is only a single source on stage.
    
    When you want to process a vocal (mono) or direct recorded electrical guitar or something where the panning is not important the mono to stereo IR will do just fine (pro: costs half of the processing of a stereo-to-stereo IR, so it is more efficient in cpu and memory).
    
    stereo to stereo
    back to recording the room
    this time we put two speakers on stage, one on the left and one on the right side of the stage and we record again with the two mics in the audience.
    
    When you playback your stereo audio through this IR in Altiverb your stereo signal comes out of these two speakers.
    Easy to picture here that your original panning will be maintained using a stereo to stereo IR.
    
    So basically it comes down to this:
    is your input stereo or is it panned in stereo ?
    use a stereo - stereo IR, else you will lose the stereo information in the reverb (although the reverb is stereo).
    
    Sometimes even when using stereo panned sources a mono to stereo IR can be preferred as you do not want the panning to be reflected in the reverb levels. But this is a mixing decision which depends on taste mostly (and music style) and I dare not advise on that :-)
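    Their mono-to-stereo case maps onto something like this (a hypothetical numpy sketch; two convolutions instead of the four a stereo-to-stereo IR needs, which is the "half of the processing" they mention):

```python
import numpy as np

def mono_to_stereo(in_l, in_r, ir_l, ir_r):
    # "One speaker on stage, two mics": downmix the input to mono first,
    # then convolve with the two mic channels of the stereo IR. Only two
    # convolutions, so roughly half the CPU and memory of stereo-to-stereo,
    # but the input panning is discarded before the room is applied.
    mono = 0.5 * (in_l + in_r)
    return np.convolve(mono, ir_l), np.convolve(mono, ir_r)
```

    Feed it a hard-panned source and both reverb channels still light up, but a hard-left and a hard-right source give exactly the same reverb through this path, which is precisely the information a stereo-to-stereo IR preserves.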
    
  • edited January 2020

    @espiegel123 said:

    @rs2000 said:

    @espiegel123 said:

    @rs2000 said:

    @david_2017 said:
    @rs2000 all my IRs are mono 🤔

    Most of my IRs are stereo but they're created using a mono signal.
    Which means that if you have a piano panned to the left, you could simply make a mono signal from it by copying the left channel to the right channel and feeding the convolutor with it.
    I could imagine creating "true stereo" IRs by firing a dirac-like audio impulse into the room to capture left source and stereo response plus right source and stereo response but how would you go about defining what's left and what's right?
    Every sound source meant to be processed by the convolutor is different, so there's no general answer to this.

    AltiVerb (which in my opinion is the standard setter) provides mono-to-stereo and stereo-to-stereo IRs. Each has its own use. I converted a few of my ALtiVerb IRs to wav files for my personal use on iOS. When using a stereo-stereo IR with a source where there are instruments with distinct positioning, the difference between the true stereo and dual mono becomes apparent.

    In my opinion, Jens could do himself a favor by more clearly describing this distinction between dual-mono and true stereo (in my opinion the current description is weak in meaningful detail). A few well-chosen audio examples would go a long way to demonstrating this. I think in most casual tests, dual mono can be quite satisfactory but in some important contexts it is unsatisfactory.

    That's true, he doesn't really describe what his plugin does better, just 'full convolution' - whatever he means by that.
    But tell me, how does Altiverb realize Stereo-to-stereo, are they using two different stereo impulse responses simultaneously, one for each source position? That would explain the instruments with distinct positioning.

    How does AltiVerb create there multichannel (they have not just stereo to stereo but also surround IRs)? They play their source sweep from two speakers at the source end. Btw, there is a nice video on their web site about how to make a mono-to-stereo IR in Altiverb: https://www.audioease.com/altiverb/sampling.php

    Here is the text from their support page about the mono-to-stereo and stereo-to-stereo

    What is the difference between the mono to stereo and the stereo to stereo options in a stereo out Altiverb ?
    
    Best way to understand this is to go back with me to our sampling session when we recorded to room's acoustics for the IR in Altiverb.
    
    mono to stereo
    We put a single speaker in the center of the stage and two mics in the audience.
    
    This results in a mono to stereo IR.
    When you playback your audio through this IR in Altiverb, it will sound like your audio comes out of that single speaker from the center of the stage (mono!) picked up with two microphones (stereo!).
    This IR gives a stereo reverb, however the input is (mixed to) mono, as there is only a single source on stage.
    
    When you want to process a vocal (mono) or direct recorded electrical guitar or something where the panning is not important the mono to stereo IR will do just fine (pro: costs half of the processing then a stereo-to-stereo IR, so it is more efficient in cpu and memory).
    
    stereo to stereo
    back to recording the room
    this time we put two speakers on stage, one on the left and one on the right side of the stage and we record again with the two mics in the audience.
    
    When you playback your stereo audio through this IR in Altiverb your stereo signal comes out of these two speakers.
    Easy to picture here that your original panning will be maintained using a stereo to stereo IR.
    
    So basically it comes down to this:
    is your input stereo or is it panned in stereo ?
    use a stereo - stereo IR, else you will loose the stereo information in the reverb (although the reverb is stereo).
    
    Sometimes even when using stereo panned sources a mono to stereo IR can be preferred as you do not want the panning to be reflected in the reverb levels. But this is a mixing decision which depends on taste mostly (and music style) and I dare not to advice on that :-)
    

    And that's exactly why I asked.
    There's no point in using the same stereo IR for the left and right channels independently; you could just downmix the input signal to mono and feed it to the convolutor, and the pure effected signal mixed with the original stereo source would be the same, right?

  • @rs2000 said:

    @espiegel123 said:

    @rs2000 said:

    @espiegel123 said:

    @rs2000 said:

    @david_2017 said:
    @rs2000 all my IRs are mono 🤔

    Most of my IRs are stereo but they're created using a mono signal.
    Which means that if you have a piano panned to the left, you could simply make a mono signal from it by copying the left channel to the right channel and feeding the convolutor with it.
    I could imagine creating "true stereo" IRs by firing a dirac-like audio impulse into the room to capture left source and stereo response plus right source and stereo response but how would you go about defining what's left and what's right?
    Every sound source meant to be processed by the convolutor is different, so there's no general answer to this.

    AltiVerb (which in my opinion is the standard setter) provides mono-to-stereo and stereo-to-stereo IRs. Each has its own use. I converted a few of my ALtiVerb IRs to wav files for my personal use on iOS. When using a stereo-stereo IR with a source where there are instruments with distinct positioning, the difference between the true stereo and dual mono becomes apparent.

    In my opinion, Jens could do himself a favor by more clearly describing this distinction between dual-mono and true stereo (in my opinion the current description is weak in meaningful detail). A few well-chosen audio examples would go a long way to demonstrating this. I think in most casual tests, dual mono can be quite satisfactory but in some important contexts it is unsatisfactory.

    That's true, he doesn't really describe what his plugin does better, just 'full convolution' - whatever he means by that.
    But tell me, how does Altiverb realize Stereo-to-stereo, are they using two different stereo impulse responses simultaneously, one for each source position? That would explain the instruments with distinct positioning.

    How does AltiVerb create there multichannel (they have not just stereo to stereo but also surround IRs)? They play their source sweep from two speakers at the source end. Btw, there is a nice video on their web site about how to make a mono-to-stereo IR in Altiverb: https://www.audioease.com/altiverb/sampling.php

    Here is the text from their support page about the mono-to-stereo and stereo-to-stereo

    What is the difference between the mono to stereo and the stereo to stereo options in a stereo out Altiverb ?
    
    Best way to understand this is to go back with me to our sampling session when we recorded to room's acoustics for the IR in Altiverb.
    
    mono to stereo
    We put a single speaker in the center of the stage and two mics in the audience.
    
    This results in a mono to stereo IR.
    When you playback your audio through this IR in Altiverb, it will sound like your audio comes out of that single speaker from the center of the stage (mono!) picked up with two microphones (stereo!).
    This IR gives a stereo reverb, however the input is (mixed to) mono, as there is only a single source on stage.
    
    When you want to process a vocal (mono) or direct recorded electrical guitar or something where the panning is not important the mono to stereo IR will do just fine (pro: costs half of the processing then a stereo-to-stereo IR, so it is more efficient in cpu and memory).
    
    stereo to stereo
    back to recording the room
    this time we put two speakers on stage, one on the left and one on the right side of the stage and we record again with the two mics in the audience.
    
    When you playback your stereo audio through this IR in Altiverb your stereo signal comes out of these two speakers.
    Easy to picture here that your original panning will be maintained using a stereo to stereo IR.
    
    So basically it comes down to this:
    is your input stereo or is it panned in stereo ?
    use a stereo - stereo IR, else you will loose the stereo information in the reverb (although the reverb is stereo).
    
    Sometimes even when using stereo panned sources a mono to stereo IR can be preferred as you do not want the panning to be reflected in the reverb levels. But this is a mixing decision which depends on taste mostly (and music style) and I dare not to advice on that :-)
    

    And that's exactly why I asked.
    There's no point in using the same stereo IR for the left and right channels independently, you could just downmix the input signal to mono and feed it the convolutor, the effected signal would be the same, right?

    No, that is not correct. The left input channel instrument's reverberation will be different in the left channel and the right channel, and the same is true for the right channel. There is a kind of rich interaction and phase cancellation that is different from doing true stereo. A lot of the time one can get away with the downmixing, but sometimes the difference between stereo-to-stereo and mono-to-stereo is clearly audible. That is why AltiVerb provides separate mono-to-stereo and stereo-to-stereo IRs.

    If you take a source that has some hard-panned instruments and compare the results of downmixing the reverb input and running mono-to-stereo with running the stereo input through a stereo-to-stereo IR from the same location, you will notice a distinct difference in realism. This is particularly true when you are using an IR on the master bus as glue that puts your mix in a room.
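    A quick way to convince yourself (a hypothetical numpy experiment, with decaying noise standing in for real IRs):

```python
import numpy as np

rng = np.random.default_rng(0)
decay = np.exp(-np.arange(64) / 16.0)

# Four hypothetical IR sub-channels: speaker A (stage left) and speaker B
# (stage right), each captured by the left and right mics.
ir_al, ir_ar, ir_bl, ir_br = (rng.standard_normal(64) * decay for _ in range(4))

# A piano hard-panned left: signal on the left input channel only.
piano = rng.standard_normal(32)
in_l, in_r = piano, np.zeros(32)

# Stereo-to-stereo: the left input plays through speaker A's stereo response.
s2s_l = np.convolve(in_l, ir_al) + np.convolve(in_r, ir_bl)
s2s_r = np.convolve(in_l, ir_ar) + np.convolve(in_r, ir_br)

# Downmix + mono-to-stereo through a centre-stage IR (approximated here,
# purely for illustration, by averaging the A and B responses).
mono = 0.5 * (in_l + in_r)
m2s_l = np.convolve(mono, 0.5 * (ir_al + ir_bl))
m2s_r = np.convolve(mono, 0.5 * (ir_ar + ir_br))
```

    The two reverb tails come out measurably different: stereo-to-stereo keeps the piano on the left side of the room, while the downmixed version centres it.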

  • edited January 2020

    @espiegel123 said:

    @rs2000 said:

    @espiegel123 said:

    @rs2000 said:

    @espiegel123 said:

    @rs2000 said:

    @david_2017 said:
    @rs2000 all my IRs are mono 🤔

    Most of my IRs are stereo but they're created using a mono signal.
    Which means that if you have a piano panned to the left, you could simply make a mono signal from it by copying the left channel to the right channel and feeding the convolutor with it.
    I could imagine creating "true stereo" IRs by firing a dirac-like audio impulse into the room to capture left source and stereo response plus right source and stereo response but how would you go about defining what's left and what's right?
    Every sound source meant to be processed by the convolutor is different, so there's no general answer to this.

    AltiVerb (which in my opinion is the standard setter) provides mono-to-stereo and stereo-to-stereo IRs. Each has its own use. I converted a few of my ALtiVerb IRs to wav files for my personal use on iOS. When using a stereo-stereo IR with a source where there are instruments with distinct positioning, the difference between the true stereo and dual mono becomes apparent.

    In my opinion, Jens could do himself a favor by more clearly describing this distinction between dual-mono and true stereo (in my opinion the current description is weak in meaningful detail). A few well-chosen audio examples would go a long way to demonstrating this. I think in most casual tests, dual mono can be quite satisfactory but in some important contexts it is unsatisfactory.

    That's true, he doesn't really describe what his plugin does better, just 'full convolution' - whatever he means by that.
    But tell me, how does Altiverb realize Stereo-to-stereo, are they using two different stereo impulse responses simultaneously, one for each source position? That would explain the instruments with distinct positioning.

    How does AltiVerb create there multichannel (they have not just stereo to stereo but also surround IRs)? They play their source sweep from two speakers at the source end. Btw, there is a nice video on their web site about how to make a mono-to-stereo IR in Altiverb: https://www.audioease.com/altiverb/sampling.php

    Here is the text from their support page about the mono-to-stereo and stereo-to-stereo

    What is the difference between the mono to stereo and the stereo to stereo options in a stereo out Altiverb ?
    
    Best way to understand this is to go back with me to our sampling session when we recorded to room's acoustics for the IR in Altiverb.
    
    mono to stereo
    We put a single speaker in the center of the stage and two mics in the audience.
    
    This results in a mono to stereo IR.
    When you playback your audio through this IR in Altiverb, it will sound like your audio comes out of that single speaker from the center of the stage (mono!) picked up with two microphones (stereo!).
    This IR gives a stereo reverb, however the input is (mixed to) mono, as there is only a single source on stage.
    
    When you want to process a vocal (mono) or direct recorded electrical guitar or something where the panning is not important the mono to stereo IR will do just fine (pro: costs half of the processing then a stereo-to-stereo IR, so it is more efficient in cpu and memory).
    
    stereo to stereo
    back to recording the room
    this time we put two speakers on stage, one on the left and one on the right side of the stage and we record again with the two mics in the audience.
    
    When you playback your stereo audio through this IR in Altiverb your stereo signal comes out of these two speakers.
    Easy to picture here that your original panning will be maintained using a stereo to stereo IR.
    
    So basically it comes down to this:
    is your input stereo or is it panned in stereo ?
    use a stereo - stereo IR, else you will loose the stereo information in the reverb (although the reverb is stereo).
    
    Sometimes even when using stereo panned sources a mono to stereo IR can be preferred as you do not want the panning to be reflected in the reverb levels. But this is a mixing decision which depends on taste mostly (and music style) and I dare not to advice on that :-)
    

    And that's exactly why I asked.
    There's no point in using the same stereo IR for the left and right channels independently, you could just downmix the input signal to mono and feed it the convolutor, the effected signal would be the same, right?

    No. That is not correct. The left input channel instrument's reverberation will be different in the left channel and the right channel. And the same is true for the right channel. There is kind of rich interaction and phase-cancellation that is different from doing true stereo. A lot of time one can get away with the down mixing but sometimes the difference between stereo-to-stereo and mono-to-stereo. That is why AltiVerb provides both separate mono-to-stereo and stereo-to-stereo IRs.

    If you take a source that has some hard-panned instruments and compare the result of downmixing the reverb input and running mono-to-stereo against running the stereo input through a stereo-to-stereo IR from the same location, you will notice a distinct difference in realism. This is particularly true when you are using an IR on the master bus as glue that puts your mix in a room.

    I think we're both saying the same. I don't question the difference between using one stereo IR for mono-to-stereo and two separate IRs recorded on-location with the reference signal at different locations, used for stereo-to-stereo.
    What I've questioned is using two exactly identical IRs in a stereo-to-stereo convolution setup.
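    For what it's worth, this specific point can be checked numerically: convolution is linear, so running both input channels through one identical IR channel and summing the outputs gives exactly the same result as convolving the downmix once. "True stereo" only differs when the two inputs see different IRs. A quick numpy check with synthetic signals (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
left  = rng.standard_normal(256)
right = rng.standard_normal(256)
ir    = rng.standard_normal(64)    # one IR channel, reused for both inputs

# Convolve each input with the *same* IR and sum the results...
separate  = np.convolve(left, ir) + np.convolve(right, ir)
# ...versus downmixing (summing) first and convolving once.
downmixed = np.convolve(left + right, ir)

# Convolution is linear, so the two match to within float rounding.
assert np.allclose(separate, downmixed)
```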

  • @rs2000 wrote:

    I think we're both saying the same. I don't question the difference between using one stereo IR for mono-to-stereo and two separate IRs recorded on-location with the reference signal at different locations.

    What I've questioned is using two exactly identical IRs in a stereo-to-stereo convolution setup.

    @rs2000 : Maybe I am confused about what you are saying. Are you talking about creating the IRs or applying the IR to create reverb?

  • @espiegel123 said:
    @rs2000 wrote:

    I think we're both saying the same. I don't question the difference between using one stereo IR for mono-to-stereo and two separate IRs recorded on-location with the reference signal at different locations.

    What I've questioned is using two exactly identical IRs in a stereo-to-stereo convolution setup.

    @rs2000 : Maybe I am confused about what you are saying. Are you talking about creating the IRs or applying the IR to create reverb?

    It's basically about how Convolutor PE applies impulse responses to audio files.
    My understanding of "Full convolution" and "True stereo" as used in the App Store description is that different stereo IRs are applied to the left and right channels, and I wonder whether that's actually the case. The CPU appetite suggests it, but the results don't.

  • @rs2000 said:

    @espiegel123 said:
    @rs2000 wrote:

    I think we're both saying the same. I don't question the difference between using one stereo IR for mono-to-stereo and two separate IRs recorded on-location with the reference signal at different locations.

    What I've questioned is using two exactly identical IRs in a stereo-to-stereo convolution setup.

    @rs2000 : Maybe I am confused about what you are saying. Are you talking about creating the IRs or applying the IR to create reverb?

    It's basically about how Convolutor PE applies impulse responses to audio files.
    My understanding of "Full convolution" and "True stereo" as used in the App Store description is that different stereo IRs are applied to the left and right channels, and I wonder whether that's actually the case. The CPU appetite suggests it, but the results don't.

    I see no reason to doubt that Jens is doing what he says: processing the left channel through the left and right channels of the IR and doing the same for the right input channel and combining the results. Btw, true stereo doesn't use two different stereo IRs. Stereo-to-stereo convolution involves running the two inputs independently through the same stereo IR. This will only be meaningful if that IR was created using stereo input.

    Now, how big of a difference that will make depends on a few things:

    • whether the stereo input files have significant meaningful differences
    • whether the IR is actually a stereo-to-stereo IR

    As someone mentioned, an awful lot of the stereo IRs floating around are probably mono-to-stereo IRs. I have no idea about the quality of the IRs that Jens has built into Convolutor PE. I would love to be able to try it with some known stereo-to-stereo IRs (like the ones that I have). But I see no reason to doubt that he is doing what he says. As you say, the CPU use suggests that he is doing what he says.

  • @StudioES The text in your screen shot says that it's not a single stereo IR but rather two stereo IRs, one recorded with only the left speaker emitting the reference signal and the second one with only the right speaker.
    How would you be able to model two different microphone positions in a room otherwise?

  • edited January 2020

    @StudioES said:
    After creating the IR files, just mix them down to a single file.

    Here's a nice explanation of using IRs in true stereo convolution.
    https://www.avosound.com/en/tutorials/create-impulse-responses/convolution-reverb-for-mono-and-stereo

  • @rs2000 said:

    @StudioES said:
    After creating the IR files, just mix them down to a single file.

    Here's a nice explanation why that doesn't make sense.
    https://www.avosound.com/en/tutorials/create-impulse-responses/convolution-reverb-for-mono-and-stereo

    I believe that the information on the AltiVerb site is correct. There is a reason their work has the stature that it does. Basically, you set up the two speakers that you want as your sources sufficiently far apart to represent the input soundstage that you want -- and it seems that by trial and error they have come up with a good sense of placement. And you set up your stereo mics in an area whose acoustics you want to capture. Record your sweep and deconvolve (or use a clapper or starter pistol if you must -- but they explain why they think a sweep is preferable). The result is a single stereo IR file.
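    The "record your sweep and deconvolve" step can be sketched too. Below is a bare-bones frequency-domain deconvolution (a hypothetical helper of my own, not AltiVerb's actual method -- commercial tools reportedly use more refined techniques such as Farina's amplitude-compensated inverse sweep):

```python
import numpy as np

def deconvolve_ir(recorded, excitation, eps=1e-8):
    """Recover an impulse response from a recorded test signal.

    Divides the spectrum of the room recording by the spectrum of the dry
    excitation (sweep, noise, ...). `eps` regularises near-silent bins so
    the division stays stable where the excitation has little energy.
    """
    n = len(recorded) + len(excitation) - 1
    R = np.fft.rfft(recorded, n)
    S = np.fft.rfft(excitation, n)
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)
```

    Any sufficiently broadband excitation works with this sketch; sweeps are preferred in practice for their noise and distortion rejection.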

  • @espiegel123 Sorry to disagree (I'm not into arguing, I'd just like to fully understand the differences). Here's another short essay that I find clearer than the info on the audioease site, copied from the Liquidsonics page for Reverberate:

    Reverberate supports true stereo processing; this is an area that can sometimes be a little confusing and this page provides a brief summary of the modes of operation supported by Reverberate. Three topologies for convolution are provided by Reverberate and are available for use within each of the two impulse response convolution units. The IR units are termed IR1 and IR2 and each of these can load two stereo impulse responses when in true stereo mode (termed IR1-A, IR1-B, IR2-A and IR2-B).
    Modulation of the outputs from the true stereo convolution units within the mixer means two true stereo impulse responses (using a total of four stereo impulse response files) can be modulated for highly dynamic and rich reverb effects. The convolution units operate in any of the following modes.
    
    Parallel Stereo
    The left input channel is convolved with the left impulse response file channel and the right input channel is convolved with the right impulse response file channel. This is the typical configuration for stereo convolution reverbs when used with stereo impulse responses, although when input audio is panned left or right, using Mono to Stereo may provide more intuitive results.
    
    True Stereo
    The left input channel is convolved with the left and right impulse response file channels from IR1-A and the right input channel is convolved with the left and right impulse response file channel from IR1-B. The two output convolutions’ respective left and right components are then summed into a single stereo output. This configuration is necessary to take full advantage of true stereo impulse responses. True stereo impulse responses are required to be provided as two separate stereo files and loaded into IR1-A and IR1-B (or IR2-A and IR2-B). This configuration is typically found in high-end algorithmic reverbs.
    
    Mono to Stereo
    The left and right input channels are mixed to mono and then independently convolved with the left and right impulse response file channels. When using a single stereo impulse response file, this is useful when input audio 
    
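    The "True Stereo" topology quoted above can be written out in a few lines. A numpy sketch, with array shapes and names of my own choosing:

```python
import numpy as np

def true_stereo(left, right, ir_a, ir_b):
    """'True stereo' convolution with two stereo IRs, as Reverberate describes.

    ir_a: stereo IR of shape (n, 2), captured with only the left speaker
    driven; ir_b: same shape, captured with only the right speaker driven.
    Each input channel is convolved with both channels of its own IR, and
    the two stereo results are summed -- four convolutions in total.
    """
    out_l = np.convolve(left, ir_a[:, 0]) + np.convolve(right, ir_b[:, 0])
    out_r = np.convolve(left, ir_a[:, 1]) + np.convolve(right, ir_b[:, 1])
    return out_l, out_r
```

    Note that a source hard-panned left still produces reverberation in both output channels, which is exactly the behaviour the earlier posts found missing in dual-mono processing.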
  • edited January 2020

    @rs2000 : having read a bit more, I think there are basically two different approaches that get called 'true stereo': the one that AudioEase describes as stereo-to-stereo, which uses one stereo impulse response file, and the one described by Liquidsonics, which uses two different stereo impulse response files. In both cases, both the left and right inputs are processed through a stereo IR; in the AudioEase case, the same IR is used twice.

    Even though the same stereo IR is used for the left and right inputs in the AudioEase method, the result is different from using a summed input.

    I don't know enough to say how significant the differences are or under what conditions they will matter. Both of these flavors of "true stereo" will sometimes be noticeably different from mono-to-stereo.

    I had an exchange with JAX recently and my impression was that for the pro version he might be using the liquidsonics approach or maybe that would be an option.

    EDIT ADDED: I exchanged email with the AltiVerb folks, and the source of my confusion was the ambiguously worded text rs2000 quoted up-thread. They do stereo the same way that Liquidsonics does. They record one stereo impulse response while playing the sweep through the left speaker in the setup, then record a second stereo impulse response for the right-speaker sweep. When applying the IR, the left input gets convolved with the L and R channels of the "left speaker IR", and the right input channel gets convolved with the L and R channels of the "right speaker IR".
