Audiobus: Use your music apps together.


Binaural Location by Ngo Minh Ngoc


Comments

  • @Blue_Mangoo any chance of more individual orbit parameters exposed for individual LFOs? You can set up some nice automations with the current controls, but some additional controls would open up more possibilities :)

    e.g. orbit left and orbit right, so you can assign different LFOs to each orbit.

    Great app though, very useful for sound design :)

  • @Blue_Mangoo said:
    Here’s a video that wins the “most creative” prize for demonstrating Binaural Location:

    That’s a Killer Demo for a Killer App! Nicely done, very creative video! 😊👍
    When Aparillo comes in, the sound adventure starts, very expressive!
    Love the Tabla player... 😎

    Nice!

  • @Blue_Mangoo said:
    Here’s a video that wins the “most creative” prize for demonstrating Binaural Location:

    That was great! Also a nice demo of Aparillo.

  • @Blue_Mangoo : I love this plugin, but I am not hearing a strong front-to-back distinction. Is that because my over-50-year-old ears don’t hear some frequencies critical to that discrimination?

    It would probably be useful if the plugin’s UI had a front/back indicator. It took watching your video to realize that I had them reversed.

  • @espiegel123 said:
    @Blue_Mangoo : I love this plugin, but I am not hearing a strong front-to-back distinction. Is that because my over-50-year-old ears don’t hear some frequencies critical to that discrimination?

    It would probably be useful if the plugin’s UI had a front/back indicator. It took watching your video to realize that I had them reversed.

    Front-back discrimination is a problem with spatialization software in general. We may make some adjustments and improvements in the coming weeks. It depends on your ears, of course, but also on the amount of high-frequency content in the input signal. If, for example, you take a synth that has a lowpass filter on it and put that straight into Binaural Location, you will have great difficulty distinguishing front and back. That's why in my own demo videos I was using voice to demonstrate. You need something that sounds natural and has as much high-frequency sound as real physical objects would have. Otherwise, if you start out with a fake sound, it's very hard to make it sound like it has a real location.

  • You’d probably have to have the software equivalent of a Jecklin disk to introduce an artificial front-back difference in the wider HRTF.

  • wim
    edited June 2019

    I assume the parameters are exposed as AU parameters, so something like Mozaic could be used to do programmatic patterns without having to bloat the app.

  • @wim There are 2 distances and angles as AU parameters - it's simple math to convert Cartesian coordinates (x/y) to the angular notation (dist/angle), so there is no limit on programming movement patterns.

    So one could do a lot more than Lissajous figures based on sines with different frequencies and phase offsets by using the math and I/O functions provided by Mozaic.

    • For instance, using the tilt sensors to move around the sound sources.

    • One could do a free path editor/sequencer like in Animoog: two knobs for each speaker to move them around in x/y, and a pad button to store/append the current position to the position sequence. Then define the replay speed.

    • Or let the user draw on the XY pad and sample/replay these positions.

    But currently there is a problem with experimenting with these ideas: @Blue_Mangoo Sending a CC to these parameters crashes the plugin, but IIRC this was already reported.
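    The Cartesian-to-angular conversion and the sine-based Lissajous idea above can be sketched like this (a Python illustration only; Mozaic has its own scripting language, and the function names here are hypothetical):

```python
import math

def cartesian_to_polar(x, y):
    """Convert an x/y position to the (distance, angle) form the
    plugin's AU parameters use; angle is in degrees."""
    distance = math.hypot(x, y)
    angle = math.degrees(math.atan2(y, x))
    return distance, angle

def lissajous_position(t, fx=0.25, fy=0.35, phase=math.pi / 2, radius=1.0):
    """A Lissajous-style orbit: sines with different frequencies and a
    phase offset, converted to (distance, angle) for the plugin."""
    x = radius * math.sin(2 * math.pi * fx * t)
    y = radius * math.sin(2 * math.pi * fy * t + phase)
    return cartesian_to_polar(x, y)
```

    Feeding a pattern like this to the two speaker positions at a modest control rate would give the kind of programmatic movement described above.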

  • @_ki said:
    @wim There are 2 distances and angles as AU parameters - it's simple math to convert Cartesian coordinates (x/y) to the angular notation (dist/angle), so there is no limit on programming movement patterns.

    So one could do a lot more than Lissajous figures based on sines with different frequencies and phase offsets by using the math and I/O functions provided by Mozaic.

    • For instance, using the tilt sensors to move around the sound sources.

    • One could do a free path editor/sequencer like in Animoog: two knobs for each speaker to move them around in x/y, and a pad button to store/append the current position to the position sequence. Then define the replay speed.

    • Or let the user draw on the XY pad and sample/replay these positions.

    But currently there is a problem with experimenting with these ideas: @Blue_Mangoo Sending a CC to these parameters crashes the plugin, but IIRC this was already reported.

    AU automation and copy/paste isn't too much of a faff... but I love the idea of tilt-sensor sound placement... (TC-11...)
    I guess it probably occurs in one or two VR games these days?

  • @RockySmalls said:
    AU automation and copy/paste isn't too much of a faff... but I love the idea of tilt-sensor sound placement... (TC-11...)
    I guess it probably occurs in one or two VR games these days?

    whoops! here's the update... time to get super-stereo....

  • @RockySmalls said:
    whoops! here's the update... time to get super-stereo....

    visible in Cubasis, visible in apeMatrix... good to go. Thanks for a speedy recovery, Mr @Blue_Mangoo!
    time to make like Laika

  • @wim said:
    I assume the parameters are exposed as AU parameters, so something like Mozaic could be used to do programmatic patterns without having to bloat the app.

    How satisfactory the results are may depend on whether an app interpolates/slews when receiving MIDI CC values, given MIDI's low resolution. There are cases with some synths, for example, where you hear a stairstepped response to MIDI CCs when controlling some parameters.
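    The stairstepping described here comes from 7-bit CC resolution; a quick Python sketch (purely illustrative, not any app's actual code) shows a smooth ramp collapsing to 128 discrete levels:

```python
def quantize_to_cc(value):
    """Quantize a normalized parameter (0.0-1.0) to 7-bit MIDI CC
    resolution, as happens when the parameter is driven by plain CC."""
    cc = round(value * 127)   # discrete CC value, 0..127
    return cc / 127           # back to normalized, now stairstepped

# A smooth 1001-point ramp collapses to 128 discrete levels; on a
# sensitive parameter those jumps can be audible unless the receiver
# interpolates/slews between them.
ramp = [i / 1000 for i in range(1001)]
stepped = [quantize_to_cc(v) for v in ramp]
distinct_levels = len(set(stepped))  # 128
```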

  • _ki
    edited June 2019

    @Blue_Mangoo Still no luck with AU parameter automation in AUM; sending a Rozeta LFO to angle or distance via CC still crashes the updated v1.0.1 AU.

    @espiegel123 If the CC steps are audible, one can at least experiment with the apeMatrix built-in LFOs; IIRC these use the full AU parameter resolution.

  • @_ki said:
    @Blue_Mangoo Still no luck with AU parameter automation in AUM, sending a Rozeta LFO to angle or distance via CC still crashes the updated v1.0.1 AU.

    We are working on fixing that today at the office.

    @espiegel123 If the CC steps are audible, one can at least experiment with the apeMatrix built-in LFOs; IIRC these use the full AU parameter resolution.

    If it's not smooth enough, let us know and we will make it smoother.

  • @_ki said:
    @Blue_Mangoo Still no luck with AU parameter automation in AUM, sending a Rozeta LFO to angle or distance via CC still crashes the updated v1.0.1 AU.

    @espiegel123 If the CC steps are audible, one can at least experiment with the apeMatrix built-in LFOs; IIRC these use the full AU parameter resolution.

    AU parameters sent from the host aren’t limited to MIDI resolution?

  • @espiegel123 said:

    AU parameters sent from the host aren’t limited to MIDI resolution?

    I think that’s true ... probably depending on the host though. I know Mozaic’s auxiliary User AU parameters are high resolution.

  • @espiegel123 said:

    AU parameters sent from the host aren’t limited to MIDI resolution?

    They are limited to MIDI resolution when they are controlled via MIDI. But we can still put a smoothing filter on it so they don't click unpleasantly when transitioning between that limited set of 128 values.
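    A smoothing filter like the one described is commonly a one-pole lowpass on the parameter value; here is a generic Python sketch of that approach (an assumption for illustration, not the plugin's actual implementation):

```python
class ParameterSmoother:
    """One-pole lowpass smoother: each step moves the current value a
    fixed fraction of the way toward the latest stairstepped target,
    turning abrupt 7-bit CC jumps into gradual ramps."""

    def __init__(self, coeff=0.01):
        self.coeff = coeff  # 0 < coeff <= 1; smaller = smoother/slower
        self.value = 0.0

    def process(self, target):
        self.value += self.coeff * (target - self.value)
        return self.value

# A CC jump from 0.0 to 1.0 is approached gradually instead of clicking.
smoother = ParameterSmoother(coeff=0.01)
trajectory = [smoother.process(1.0) for _ in range(500)]
```

    Run per audio buffer (or per sample), this removes the audible clicks between adjacent CC steps at the cost of a short lag set by the coefficient.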

  • @Blue_Mangoo said:
    They are limited to MIDI resolution when they are controlled via MIDI. But we can still put a smoothing filter on it so they don't click unpleasantly when transitioning between that limited set of 128 values.

    Thanks for the clarification.

  • @Blue_Mangoo Thanks for your work - I'll post some sample scripts that others can refine. I already have four other Mozaic scripts in my working pipeline that I need to do tests/videos/demos for, so I don't have time to go into depth with any of these new ideas.

  • @Blue_Mangoo : Binaural Location is having some problems rendering in Auria Pro. I just popped into the App Store to give it a good rating and noticed a comment about problems in Auria Pro. I am running into two problems when doing a mixdown in Auria Pro with Binaural Location on a track:

    *) Auria Pro seems to get stuck at the end of its render cycle and the mix dialog never goes away
    *) in the mixdown file, the effect is not present

    It is possible that Auria Pro is to blame, but with the one other plugin this happened with, the plugin itself was apparently the culprit. I don't think any special steps are needed to reproduce it, but if you have any trouble reproducing the issue, let me know. This is on the current OS version on a 6th-generation iPad.

  • @espiegel123 said:
    @Blue_Mangoo : Binaural Location is having some problems rendering in Auria Pro. I just popped into the App Store to give it a good rating and noticed a comment about problems in Auria Pro. I am running into two problems when doing mixdown in Auria Pro when Binaural Location is on a track

    *) Auria Pro seems to get stuck at the end of its render cycle and the mix dialog never goes away
    *) in the mixdown file, the effect is not present

    It is possible that Auria Pro is to blame, but with the one other plugin this happened with, the plugin itself was apparently the culprit. I don't think any special steps are needed to reproduce it, but if you have any trouble reproducing the issue, let me know. This is on the current OS version on an iPad gen 6.

    Thanks for reporting this.

    Version 1.0 had many bugs. 1.0.1 is out now, and 1.0.2 is coming soon. After that release we will test this bug report and fix it if it still doesn't work.

  • @Blue_Mangoo I did a quick test with CC automation - with v1.0.2 (and the just-updated v1.0.3) I could feed Rozeta LFOs to both angles and distances to drive the speaker locations. But both versions crash if the user clicks on the orbit knob during CC automation.

  • @Blue_Mangoo : don't know if you have seen it, but there have been reports that the latest Binaural Location is causing mixdown issues in both Cubasis and NS2.

  • buying a new iPad for using this app, haha

  • edited December 2021

    Is there a plugin in Ableton or elsewhere equivalent to Binaural Location?

    And is there a chance that @Blue_Mangoo will release a desktop version of Binaural Location?
