Cubasis 3.2 - Multicore-rendering & Latency Settings
We've received several questions regarding the performance optimizations in Cubasis 3.2.
Below you'll find further information about the new multicore rendering and how to configure the latency settings, provided by our lead engineer, Alex.
Multicore-rendering has been implemented in Cubasis 3.1 for Android and in Cubasis 3.2 for iOS.
This means that on devices with more than two CPU cores*, multiple tracks are rendered simultaneously on multiple cores during playback and mixdown.
On devices with two CPU cores*, multi-core rendering is disabled in order to keep one core available for all the non-audio work (UI, touch, the operating system…).
Note that a project must contain at least 2 tracks in order for multi-core rendering to kick in.
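The rules above can be sketched as a toy rendering loop. This is an illustrative Python model, not Cubasis code (Cubasis is not written in Python, and Python threads don't deliver true CPU parallelism); the names `render_track` and `render_cycle` are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def render_track(track, frames):
    # Hypothetical per-track DSP: produce one silent buffer per track.
    return [0.0] * frames

def render_cycle(tracks, frames, cpu_cores):
    # More than two cores and at least two tracks: render tracks in
    # parallel, keeping one core free for non-audio work (UI, touch, the OS).
    if cpu_cores > 2 and len(tracks) >= 2:
        with ThreadPoolExecutor(max_workers=cpu_cores - 1) as pool:
            buffers = list(pool.map(lambda t: render_track(t, frames), tracks))
    else:
        # Two-core devices (or single-track projects): render serially.
        buffers = [render_track(t, frames) for t in tracks]
    # Mix the per-track buffers down to one output buffer.
    return [sum(samples) for samples in zip(*buffers)]
```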
Latency setup options
In Setup, under Audio, “Audio Engine Latency” must be enabled for Cubasis to perform multi-core rendering. Note that this option is only available on devices with more than two CPU cores*. In most cases, the sweet spot for rendering performance is Audio Engine Latency set to twice the Device Latency (on iOS), or 16–32 ms (on Android).
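The sweet-spot rule of thumb above can be written down as a small helper (a hypothetical function for illustration, not a Cubasis API):

```python
def recommended_engine_latency_ms(platform, device_latency_ms=0.0):
    # Rule of thumb: iOS -> twice the device latency;
    # Android -> 16-32 ms (the midpoint is used here).
    if platform == "ios":
        return 2.0 * device_latency_ms
    return 24.0  # Android: anywhere between 16 and 32 ms works in most cases

# e.g. with a 6 ms device latency on iOS, try a 12 ms engine latency
```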
However, this introduces additional latency to monitoring and live keyboard input, since the engine uses additional internal buffers to render into, which also prevents drop-outs.
Multi-core rendering yields the most performance benefit when playing projects with many tracks and effects on devices with many CPU cores, where Audio Engine Latency is enabled.
On some devices (possibly those with 3 cores) and in certain situations (monitoring or when using specific AU plug-ins), it might be beneficial to turn off Audio Engine Latency.
The DSP level in the Inspector’s “System Info” tab measures how long a rendering cycle takes, divided by the buffer duration (the time available to perform the rendering).
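As a quick worked example of that ratio (a hypothetical helper, not the actual meter code):

```python
def dsp_level_percent(render_time_ms, buffer_frames, sample_rate_hz):
    # Buffer duration = frames / sample rate;
    # DSP level = render time / buffer duration, as a percentage.
    buffer_duration_ms = 1000.0 * buffer_frames / sample_rate_hz
    return 100.0 * render_time_ms / buffer_duration_ms

# A 256-frame buffer at 48 kHz lasts about 5.33 ms, so a cycle that
# takes 4 ms to render shows a DSP level of about 75%.
```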
With Audio Engine Latency disabled, rendering is performed on the system’s single ultra-high-priority audio thread, which means that a DSP peak of 100% always results in a drop-out (crackling).
When Audio Engine Latency is enabled, rendering is performed in engine threads, and a short peak of 100% doesn’t always mean there is a drop-out, because the engine’s buffers may have absorbed it. A drop-out only occurs if DSP stays at 100% for longer than the “Audio Engine Latency” setting. Note that the DSP usage might be higher than with Audio Engine Latency disabled. This is normal, because engine threads don’t get the same priority as the system’s audio thread; it isn’t a problem, because multi-core rendering more than makes up for it.
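That buffering behaviour can be modelled with a toy simulation (an illustrative sketch of the idea described above, not the actual engine): each cycle that overruns the buffer duration drains the slack provided by the engine buffers, and a drop-out happens once that slack is exhausted.

```python
def simulate_dropout(render_times_ms, buffer_ms, engine_latency_ms):
    # Hypothetical model: the engine buffers provide engine_latency_ms of
    # slack. Overruns (render time > buffer duration) drain the slack;
    # fast cycles refill it, capped at the configured engine latency.
    slack_ms = engine_latency_ms
    for t in render_times_ms:
        slack_ms -= (t - buffer_ms)
        slack_ms = min(slack_ms, engine_latency_ms)
        if slack_ms < 0:
            return True   # buffers exhausted -> audible drop-out
    return False
```

With this model, a brief 100% peak survives intact, while a sustained one longer than the engine latency produces a drop-out.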
*Note that on some devices with more than 2 cores, only 2 cores are considered by Cubasis because the other ones are energy efficient (lower performance) cores and might be unsuited for real-time audio rendering.