
4pockets multitrack invalidated inside of AUM? iOS 13.6.1

Hello

Has anyone had similar issues loading multitrack within AUM?

I have tried uninstalling and reinstalling MultiTrack, and hard-resetting my phone after quitting all apps, but I still keep getting this even with a fresh AUM session.

Any ideas on how to fix this? My buffer size is 2048.

Edit: when I set the AUM buffer to 1024 it seems to work. Does anyone have any ideas on how to make MultiTrack function properly with a 2048 buffer?

Comments

  • wim
    edited September 2020

    Try setting the buffer size lower. Many apps can't handle 2048.
    What iOS version? [edit ... nvm - not relevant]

  • Yep. That's it. Set buffers to anything below 2048.

  • What are the consequences of a buffer size of 2048 when it is allowed? Fewer application interrupts to process buffers and lower CPU usage reported as a result? Any clues appreciated.

    I understand the rationale behind very small buffers to lower latency.

    And I get the tradeoff of only using larger packets to avoid data loss. For example, if Bluetooth allowed us to set small buffers for headphones, maybe we'd like them for audio applications because of the improved latency. But large buffers just make for fewer transfers, so the chips can be easier to design without dropping any packets. If packets are dropped or corrupted, you can re-transmit and still have some time to avoid inserting silence or repeating stale data.

  • wim
    edited September 2020

    @McD said:
    What are the consequences of a buffer size of 2048 when it is allowed? Fewer application interrupts to process buffers and lower CPU usage reported as a result? Any clues appreciated.

    I understand the rationale behind very small buffers to lower latency.

    And I get the tradeoff of only using larger packets to avoid data loss. For example, if Bluetooth allowed us to set small buffers for headphones, maybe we'd like them for audio applications because of the improved latency. But large buffers just make for fewer transfers, so the chips can be easier to design without dropping any packets. If packets are dropped or corrupted, you can re-transmit and still have some time to avoid inserting silence or repeating stale data.

    (Caveat: I'm going to vastly oversimplify some things here and will probably get some particulars and numbers wrong. But I think this is directionally correct for the most part.)

    It's not about data transfer rate, and it's not about data loss as you've described it. It's about the processing overhead from working in smaller chunks. There's computing overhead for the transfer between the host and the app. It's also more efficient for the app to work in bigger chunks.

    In general, the lower the buffer size, the higher the CPU overhead. If the overhead gets too high, so that the app can't finish its work before its data is needed, dropouts occur. (So, in one way it is about data loss, but data loss due to timing, not in the sense of radio-transmission data loss due to interference.)

    It's easier to picture if you take it to extremes. Let's take an FX app, since it has more handoffs to do, and say our buffer size is one. For every sample, the host has to pass that data to the app, the app has to read it and process it, and then it has to pass it back to the host. All of that has to happen in the timeframe of one sample. At 44.1 kHz, that's 44,100 times per second. If it doesn't all happen within that 1/44,100th of a second, there will be a dropout.

    Now, if the buffer is 1024 samples, the two-way handoff only has to happen about 43 times per second. Handoff overhead is drastically reduced. Additionally, the hosted app can work more efficiently on samples in bulk than it can one sample at a time.

    So, 2048 buffers can be good for squeezing out more DSP power, but at the cost of lots of latency.
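
    To put rough numbers on that, here's an illustrative sketch (Swift, assuming a 44.1 kHz sample rate) of how the callback rate and per-buffer latency trade off as the buffer size grows:

        import Foundation

        // Illustrative only: render callbacks per second and per-buffer latency
        // at an assumed 44.1 kHz sample rate for a few common buffer sizes.
        let sampleRate = 44_100.0

        for frames in [64.0, 256.0, 1_024.0, 2_048.0] {
            let callbacksPerSecond = sampleRate / frames        // handoffs per second
            let bufferLatencyMs = frames / sampleRate * 1_000   // time one buffer represents
            print(String(format: "%4.0f frames: %6.1f callbacks/s, %5.1f ms per buffer",
                         frames, callbacksPerSecond, bufferLatencyMs))
        }

    At 1024 frames that's roughly 43 callbacks per second and about 23 ms per buffer; at 2048 the callback rate halves to about 21 per second, but each buffer now represents about 46 ms of audio.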

    The problem with 2048 is that iOS programmers sometimes overlook the possibility of people setting it that high. There's actually a parameter for maximum buffer size that AUv3 programmers have to set. If they overlook the fact that some hosts can go as high as 2048, their plugins will crash at that buffer setting.
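
    (Presumably the parameter in question is AUAudioUnit's maximumFramesToRender. Here's a minimal host-side sketch of how that limit gets communicated; the component description below is a hypothetical placeholder, not a real plugin. A plugin that sizes its internal buffers from this value when render resources are allocated won't be caught out by a 2048-frame host.)

        import AVFoundation
        import AudioToolbox

        // Hypothetical effect plugin; the subtype/manufacturer codes are placeholders.
        let desc = AudioComponentDescription(
            componentType: kAudioUnitType_Effect,
            componentSubType: 0x64656d6f,        // 'demo' (placeholder)
            componentManufacturer: 0x64656d6f,   // placeholder
            componentFlags: 0,
            componentFlagsMask: 0
        )

        AUAudioUnit.instantiate(with: desc, options: []) { unit, error in
            guard let unit = unit else { return }
            // Tell the plugin the largest slice it may ever be asked to render...
            unit.maximumFramesToRender = 2048
            // ...before it allocates its internal buffers.
            try? unit.allocateRenderResources()
        }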

  • McD
    edited September 2020

    @wim said:
    The problem with 2048 is that iOS programmers sometimes overlook the possibility of people setting it that high. There's actually a parameter for maximum buffer size that AUv3 programmers have to set. If they overlook the fact that some hosts can go as high as 2048, their plugins will crash at that buffer setting.

    Interesting. I dug a little deeper into the Bluetooth specs and discovered there's a perfectly good low-latency audio codec (aptX Low Latency), but Qualcomm invented it and wants to charge hardware makers $1 per device (which seems reasonable), and no OS vendor has licensed it. Probably because the market of people who care is too small.

    Audio transmission latency

    The amount of latency (lag) in the audio depends on many factors: the size of the buffers in the audio library, in the Bluetooth stack, and in the playback device itself, plus the algorithmic delay of the codec.

    The delay of simple codecs like SBC, aptX, and aptX HD is quite small, about 3-6 ms, which can be neglected, but complex codecs such as AAC and LDAC can add a noticeable delay. The algorithmic delay of AAC at 44.1 kHz is 60 ms; LDAC is about 30 ms (based on a rough analysis of the source code; I could be wrong, but not by much).

    The total delay is highly dependent on the playback device, its chipset, and its buffer. During testing I got a spread of 150 to 250 ms on different devices (with the SBC codec). If we assume that devices supporting the additional aptX, AAC, and LDAC codecs use better-quality components and a small buffer size, we get the following typical latency values:

    SBC: 150-250 ms
    aptX: 130-180 ms
    AAC: 190-240 ms
    LDAC: 160-210 ms

    Let me remind you: aptX Low Latency is not supported in operating systems, which is why a lower delay can be obtained only with a transmitter + receiver or transmitter + headphone/speaker bundle, and all devices must support this codec.

    I experimented with a pair of these:

    https://forum.audiob.us/uploads/editor/8f/feo3lg0wd3hp.png

    But they are mono only and the signal is pretty weak in the headphones; still, the latency was acceptable. A rough sum of where that latency comes from is sketched below.
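
    Pulling the figures above together, a rough back-of-the-envelope sum (every value below is an assumption drawn from the numbers quoted above, not a measurement) shows why Bluetooth monitoring ends up well past 200 ms:

        import Foundation

        // Rough illustration only; all component values are assumptions
        // taken from the figures quoted above.
        let appBufferMs      = 2_048.0 / 44_100.0 * 1_000  // ~46 ms at a 2048-frame buffer
        let codecDelayMs     = 60.0                        // AAC algorithmic delay (quoted above)
        let stackAndDeviceMs = 130.0                        // assumed Bluetooth stack + sink buffering

        let totalMs = appBufferMs + codecDelayMs + stackAndDeviceMs
        print("Estimated end-to-end latency: \(Int(totalMs)) ms")   // ~236 ms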

  • The problem with wireless transmission is not only the bandwidth (and the necessary compression); those can and will improve over time. It's that it's not sample-accurate. MIDI is much less data and far more usable, but still far from accurate. The protocol was made for the consumer market.
