
miRack coming to iOS.


Comments

  • Yeah, multichannel audio would be ideal for this, fingers crossed :)

  • edited September 2019

    Considering that iOS Core Audio supports multichannel I/O, I think it's probably just a matter of adding more output connectors to the "out" module and enabling them if multiple output streams are available (and eventually adding an "IN" module).

    The idea that this could attract real modular users to the iOS community sounds very plausible.
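
    For context, here's a minimal Swift sketch of the kind of channel query this would involve. It uses the standard AVAudioSession API, but it's only an illustration, not miRack's actual code:

        import AVFoundation

        // Ask the session for every output channel the current route offers.
        // The session may grant fewer channels than requested.
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback, mode: .default)
            try session.setActive(true)
            let available = session.maximumOutputNumberOfChannels
            if available > 2 {
                try session.setPreferredOutputNumberOfChannels(available)
            }
            print("channels granted:", session.outputNumberOfChannels)
        } catch {
            print("audio session setup failed:", error)
        }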

  • Awesome. Glad it's got good licensing, would hate to see such an effort get shut down for a dumb reason.

  • Would love to see Audiobus, or AUv3. Must play well with others!

  • @mifki said:
    It won't be free, but it will be priced lower than similar apps, since I'm only developing the engine and optimising some of the modules. You can always donate to module authors :)

    Audio recording is the most requested feature, so it will be added (either via a recorder module or via Audiobus support) in one of the first updates. The plan now is to release the first version with the current features as soon as there are no major issues. I found what's causing the problem with audio on 2018 iPad Pro devices and hopefully fixed it, so we're getting close.

    I'll certainly keep adding modules. Again, the plan is to release with the current set and add more in regular updates.

    @ipadmusic Awesome! May I repost?

    Great news! Could you share what the problem with audio on the 2018 iPad Pro was, so we all know and other developers can benefit from your experience? Also, were you able to use multithreading?

  • edited September 2019

    @dendy said:
    Considering that iOS Core Audio supports multichannel I/O, I think it's probably just a matter of adding more output connectors to the "out" module and enabling them if multiple output streams are available (and eventually adding an "IN" module).

    The idea that this could attract real modular users to the iOS community sounds very plausible.

    Hm, I guess just adding more outputs doesn't necessarily mean more complex patches and higher CPU usage (yes, again, I'm mostly worried about possible performance issues). In that case it may happen soon.

    @vov said:
    Great news! Could you share what the problem with audio on the 2018 iPad Pro was, so we all know and other developers can benefit from your experience? Also, were you able to use multithreading?

    The problem was that newer devices have a 48 kHz hardware sample rate, and when I set the preferred buffer duration to, say, 1024 samples, the hardware buffer is set to that duration; but due to rounding, my callback was getting alternating 940/941-sample buffers. I didn't expect the buffer size to vary.
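
    To make the rounding concrete, here's a sketch of that setup in Swift; the arithmetic in the comments is one plausible reading of the numbers above, not confirmed miRack code:

        import AVFoundation

        // App-side format: 44.1 kHz, with a "1024-sample" buffer expressed
        // as a duration (AVAudioSession only accepts durations, not frames).
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setPreferredSampleRate(44_100)
            try session.setPreferredIOBufferDuration(1_024.0 / 44_100.0)
            try session.setActive(true)
        } catch {
            print("session setup failed:", error)
        }

        // On 48 kHz hardware the buffer is sized in hardware frames. If the
        // hardware ends up with 1024-frame buffers, a 44.1 kHz callback
        // receives 1024 * 44100 / 48000 ≈ 940.8 frames per cycle, i.e.
        // alternating 940- and 941-frame buffers, so a render callback must
        // honour the frame count it's given rather than assume a fixed size.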

    Currently, miRack always works at 44.1 kHz, because some modules (most notably Audible Instruments) are hardcoded to a certain sample rate which can't be changed without recompilation. In VCV Rack this is solved by resampling the inputs and outputs of such modules, but that's a waste of resources on mobile. At some point I will add support for switching between the most commonly used sample rates.

    Yes, the audio engine is multi-threaded.

  • @mifki said:

    @dendy said:
    Considering that iOS Core Audio supports multichannel I/O, I think it's probably just a matter of adding more output connectors to the "out" module and enabling them if multiple output streams are available (and eventually adding an "IN" module).

    The idea that this could attract real modular users to the iOS community sounds very plausible.

    Hm, I guess just adding more outputs doesn't necessarily mean more complex patches and higher CPU usage (yes, again, I'm mostly worried about possible performance issues). In that case it may happen soon.

    @vov said:
    Great news! Could you share what the problem with audio on the 2018 iPad Pro was, so we all know and other developers can benefit from your experience? Also, were you able to use multithreading?

    The problem was that newer devices have a 48 kHz hardware sample rate, and when I set the preferred buffer duration to, say, 1024 samples, the hardware buffer is set to that duration; but due to rounding, my callback was getting alternating 940/941-sample buffers. I didn't expect the buffer size to vary.

    Currently, miRack always works at 44.1 kHz, because some modules (most notably Audible Instruments) are hardcoded to a certain sample rate which can't be changed without recompilation. In VCV Rack this is solved by resampling the inputs and outputs of such modules, but that's a waste of resources on mobile. At some point I will add support for switching between the most commonly used sample rates.

    Yes, the audio engine is multi-threaded.

    Thank you; I'll wait for your app.

  • edited September 2019

    @mifki said:
    The problem was that newer devices have a 48 kHz hardware sample rate, and when I set the preferred buffer duration to, say, 1024 samples, the hardware buffer is set to that duration; but due to rounding, my callback was getting alternating 940/941-sample buffers. I didn't expect the buffer size to vary.

    The sample rate may even change in real time during playback (it happens on all models with a headphone jack released after the iPhone 6S: when you connect headphones with a microphone, the device switches to 44.1 kHz; otherwise it runs at 48 kHz).

    Also, with Audiobus you need to be prepared for sample rate changes in real time during playback.
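
    A minimal Swift sketch of watching for those mid-session rate flips (this is the standard AVAudioSession route-change notification, shown for illustration):

        import AVFoundation

        // Re-read the hardware sample rate whenever the route changes,
        // e.g. when a headset with a microphone is plugged in.
        let token = NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil, queue: .main
        ) { _ in
            let rate = AVAudioSession.sharedInstance().sampleRate
            print("route changed, hardware rate is now \(rate) Hz")
            // ...rebuild buffers/engine here if the rate moved...
        }
        // Keep `token` alive for as long as you want to observe.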

  • @mifki said:

    @dendy said:
    Considering that iOS Core Audio supports multichannel I/O, I think it's probably just a matter of adding more output connectors to the "out" module and enabling them if multiple output streams are available (and eventually adding an "IN" module).

    The idea that this could attract real modular users to the iOS community sounds very plausible.

    Hm, I guess just adding more outputs doesn't necessarily mean more complex patches and higher CPU usage (yes, again, I'm mostly worried about possible performance issues). In that case it may happen soon.

    @vov said:
    Great news! Could you share what the problem with audio on the 2018 iPad Pro was, so we all know and other developers can benefit from your experience? Also, were you able to use multithreading?

    The problem was that newer devices have a 48 kHz hardware sample rate, and when I set the preferred buffer duration to, say, 1024 samples, the hardware buffer is set to that duration; but due to rounding, my callback was getting alternating 940/941-sample buffers. I didn't expect the buffer size to vary.

    Currently, miRack always works at 44.1 kHz, because some modules (most notably Audible Instruments) are hardcoded to a certain sample rate which can't be changed without recompilation. In VCV Rack this is solved by resampling the inputs and outputs of such modules, but that's a waste of resources on mobile. At some point I will add support for switching between the most commonly used sample rates.

    Yes, the audio engine is multi-threaded.

    Do you resample on output from 44.1 kHz to the device's output sample rate?

  • Very eagerly following this thread. I buy this the second it pops up in the App Store.

    Big props for developing this.

  • This is handled by the OS (the app's output is resampled to whatever the hardware sample rate is). As I said, once I find a way to change the hardcoded sample rate of some modules in real time, it will be possible to switch the app sample rate (either manually or automatically, to match the hardware rate).
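
    As an illustration of that hand-off, a Swift sketch with AVAudioEngine (an assumption about the general mechanism, not miRack's code): the app renders at a fixed 44.1 kHz and the system's output unit converts to whatever rate the hardware runs at.

        import AVFoundation

        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        // The app's fixed rendering format: 44.1 kHz stereo.
        let appFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100,
                                      channels: 2)!
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: appFormat)
        // mainMixerNode -> outputNode is connected implicitly; the output
        // unit resamples to the device rate (e.g. 48 kHz) as needed.
        do { try engine.start() } catch { print("engine failed:", error) }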

  • @mifki said:
    This is handled by the OS (the app's output is resampled to whatever the hardware sample rate is). As I said, once I find a way to change the hardcoded sample rate of some modules in real time, it will be possible to switch the app sample rate (either manually or automatically, to match the hardware rate).

    Have you confirmed that the resampling happens correctly on the 48 kHz hardware? There have been quite a few apps whose developers failed to handle the issue correctly, though in the last few months most of those apps have addressed the problem.

  • I think so; after fixing the original buffer-length problem I mentioned above, I got the same output from my iPad at 44.1 kHz and my iPhone XR at 48 kHz (that was just for testing; the public version won't support iPhone).

    If other testers who have devices with different hw sample rates have any issues, please let me know.

  • @mifki said:
    I think so; after fixing the original buffer-length problem I mentioned above, I got the same output from my iPad at 44.1 kHz and my iPhone XR at 48 kHz (that was just for testing; the public version won't support iPhone).

    If other testers who have devices with different hw sample rates have any issues, please let me know.

    Great news! Thanks.

  • @rinzai said:

    @dendy said:

    @JohnnyGoodyear said:
    I got your boffinhood right here baby...

    What the fuck is with her hand?!! 😱

    That's not even a proper archery position...
    Is she shooting sideways or just using an invisible sling? So many questions.

    Some people are born with hyperextended elbows and other joints. https://www.danceworkshop.com.au/hyperextension-the-good-the-bad-and-the-ugly/

  • @mifki said:
    I think so; after fixing the original buffer-length problem I mentioned above, I got the same output from my iPad at 44.1 kHz and my iPhone XR at 48 kHz (that was just for testing; the public version won't support iPhone).

    If other testers who have devices with different hw sample rates have any issues, please let me know.

    Are you accepting beta testers?

  • @reasOne and others - sorry, I think I have enough beta testers for now; there's just not that much functionality in the app at the moment for me to really need that many people. I'll keep in mind everyone who contacted me, though, especially when I add more module packs and all the modules need testing.

  • @mifki When trying Core MIDI, I can’t see AUM's output on the same device, but I can see other MIDI outputs like Fugue Machine and Audiobus.

  • @auxmux said:
    @mifki When trying Core MIDI, I can’t see AUM's output on the same device, but I can see other MIDI outputs like Fugue Machine and Audiobus.

    Do you see AUM's MIDI out port in other apps?

  • @espiegel123 said:

    @auxmux said:
    @mifki When trying Core MIDI, I can’t see AUM's output on the same device, but I can see other MIDI outputs like Fugue Machine and Audiobus.

    Do you see AUM's MIDI out port in other apps?

    Good call, I don’t. I guess because I do everything IN AUM, it doesn’t show up in Audiobus or Fugue Machine as an input. It shows up in Xequence, but maybe that requires special coding?

  • Hi @mifki welcome to the forum!

    Regarding the third-party, open-source modules: are you able to include features and bug fixes being added to their current, post-v1 versions? Or do these come from the state they were in at 0.6?

  • @auxmux said:

    @espiegel123 said:

    @auxmux said:
    @mifki When trying Core MIDI, I can’t see AUM's output on the same device, but I can see other MIDI outputs like Fugue Machine and Audiobus.

    Do you see AUM's MIDI out port in other apps?

    Good call, I don’t. I guess because I do everything IN AUM, it doesn’t show up in Audiobus or Fugue Machine as an input. It shows up in Xequence, but maybe that requires special coding?

    In Xequence, you are seeing AUM as a destination, not a source. I don't think AUM exposes itself as a MIDI source, but from AUM you can route to other apps' MIDI in.
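
    A quick Swift sketch of that distinction: Core MIDI keeps separate lists of sources and destinations, so an app can appear in one and not the other ("PortLister" is just a placeholder name):

        import CoreMIDI

        var client = MIDIClientRef()
        MIDIClientCreate("PortLister" as CFString, nil, nil, &client)

        // Read an endpoint's display name, falling back to "?".
        func displayName(of endpoint: MIDIEndpointRef) -> String {
            var name: Unmanaged<CFString>?
            MIDIObjectGetStringProperty(endpoint, kMIDIPropertyDisplayName, &name)
            return (name?.takeRetainedValue() as String?) ?? "?"
        }

        // Sources are what you receive *from*; destinations are what you
        // send *to*. AUM showing up only as a destination matches the above.
        for i in 0..<MIDIGetNumberOfSources() {
            print("source:", displayName(of: MIDIGetSource(i)))
        }
        for i in 0..<MIDIGetNumberOfDestinations() {
            print("destination:", displayName(of: MIDIGetDestination(i)))
        }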

  • edited September 2019

    @auxmux said:
    @mifki When trying Core MIDI, I can’t see AUM's output on the same device, but I can see other MIDI outputs like Fugue Machine and Audiobus.

    Ideally, miRack would register its MIDI ports as virtual ports in iOS. It would then be possible to choose miRack as a MIDI output in AUM (or another DAW) directly.

    For now you need to use something like Midiflow, or another third-party app, to create virtual MIDI in/out ports and use them for routing: (sequencer) -> (Midiflow) -> (miRack).
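
    For reference, a minimal Swift sketch of the virtual-port idea (the client and port names are hypothetical; a real app would do this at launch):

        import CoreMIDI

        // Publishing a virtual destination makes the app selectable as a
        // MIDI output in AUM or any other host, with no Midiflow hop needed.
        var client = MIDIClientRef()
        MIDIClientCreate("VirtualPortSketch" as CFString, nil, nil, &client)

        var inputPort = MIDIEndpointRef()
        MIDIDestinationCreateWithBlock(client, "miRack In" as CFString,
                                       &inputPort) { packetList, _ in
            // MIDI sent to "miRack In" by other apps arrives here.
            print("received \(packetList.pointee.numPackets) packet(s)")
        }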

  • @mifki Will additional modules added after launch be available via IAP in the App Store, given how App Store payment methods work?

    I tried Navichord MIDI into miRack; it works fine, no problem.

  • Sure, more modules will be added. The first batch is just a semi-random selection of good modules from the VCV Rack 0.6 era to get started.

  • @espiegel123 said:

    @auxmux said:

    @espiegel123 said:

    @auxmux said:
    @mifki When trying Core MIDI, I can’t see AUM's output on the same device, but I can see other MIDI outputs like Fugue Machine and Audiobus.

    Do you see AUM's MIDI out port in other apps?

    Good call, I don’t. I guess because I do everything IN AUM, it doesn’t show up in Audiobus or Fugue Machine as an input. It shows up in Xequence, but maybe that requires special coding?

    In Xequence, you are seeing AUM as a destination, not a source. I don't think AUM exposes itself as a MIDI source, but from AUM you can route to other apps' MIDI in.

    It can do both. I went to a track and added it as a source.

    @dendy's solution of virtual MIDI ports could work. I might just use another iPad via Bluetooth MIDI to get around this.

  • Also, @mifki: is multi-touch an option? I keep trying to use it by accident. Maybe an option to toggle it on/off, like the reset and drag buttons on top?

  • edited September 2019

    OK... I played with VCV Rack for two or three months about a year ago, so I'm certainly familiar with this, but honestly... I don't get the enthusiasm.

    I mean, we already have a plethora of FX, instruments, MIDI tools and toys, sound generators, etc. that we can chain up in nearly infinite configurations using AB3/AUM/apeMatrix.

    Add to that, we have apps like zMors Modular, Audulus, and a couple of others I seem to recall.

    Again, from the video demo, this looks very close to the VCV Rack I've already spent a few months with. How is this iOS version all that and a bag of chips?

    Don't get me wrong, I'll almost certainly get it too. But beyond audio-nerd, erector-set-style tinkering, I don't get what miRack will bring to the table that we don't already have in abundance. I could easily be wrong here, so please tell me what I'm missing.

  • edited September 2019

    It's mostly about the UI... it looks and feels (and of course also sounds) like a real modular.

    Regarding me:

    • AUM/apeMatrix: totally not for me; I'm not enjoying them in any way. I don't like the UI. It's also a very different thing from VCV/miRack; it's more like a virtual studio with synths, FX, and a patchbay. Not true modular.
    • Audulus: the UI is too "strange" for me. I tried it, but somehow it didn't click with me.

    I haven't tried zMors Modular. Maybe I'd enjoy it too, no idea.

    For me, VCV/miRack is the closest match to the "perfect software modular" of all time, which is the Nord Modular G2.
