Sample Rates, Bit Depths, Dithering and Band Limiting.

I'm saving this to sic on all the 'experienced sound engineers' who thumb their noses at 16 bit, 44.1 kHz audio chains.

Knock yourselves out...


Comments

  • A classic :) Note that it's true only when no further processing is wanted on the signal. It should be kept high-resolution while doing any math on it.

  • Lol! I've used all the analog test gear he used! A great explanation.

  • @j_liljedahl Agreed, that's why I said audio chains. If an app does internal processing it can upscale the sample to whatever it wants to get the necessary headroom, then spit out a 16 bit, 44.1 kHz output stream when it's done. For buffer transfer, it's plenty good enough.
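
For anyone curious what that "upscale internally, hand off 16-bit" idea looks like in code, here's a minimal sketch in plain Python/NumPy. It isn't what Audiobus or any particular app actually does, just the general shape of it: the incoming 16-bit buffer is converted to float, processed with plenty of headroom, then clipped and requantized to 16-bit for the outgoing stream.

```python
import numpy as np

def process_buffer(pcm16: np.ndarray, gain_db: float) -> np.ndarray:
    """Hypothetical effect: take a 16-bit buffer, work in float, hand back 16-bit."""
    # Upscale: int16 -> float32 in roughly [-1.0, 1.0); float has ample headroom.
    x = pcm16.astype(np.float32) / 32768.0
    # Do the actual DSP in float, where intermediate peaks over 0 dBFS are harmless.
    x *= 10.0 ** (gain_db / 20.0)
    # Downscale: clamp to the 16-bit range and requantize for the outgoing stream.
    x = np.clip(x, -1.0, 1.0 - 1.0 / 32768.0)
    return np.round(x * 32768.0).astype(np.int16)

# Example: one second of a 1 kHz sine at 44.1 kHz, trimmed by 6 dB.
sr = 44100
t = np.arange(sr) / sr
buf = np.round(0.5 * np.sin(2 * np.pi * 1000 * t) * 32767).astype(np.int16)
out = process_buffer(buf, gain_db=-6.0)
```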

  • Nice. Now I don't feel bad for not being able to tell the difference between 44.1 kHz audio and 96 kHz.

  • edited April 2014

    Great video

    It's true that 16 bit linear PCM audio does not quite cover the entire theoretical dynamic range of the human ear in ideal conditions. Also, there are (and always will be) reasons to use more than 16 bits in recording and production.

    Professionals use 24 bit samples in recording and production for headroom, noise floor, and convenience reasons.

    These are quotes from the same well-spoken gentleman:
    http://people.xiph.org/~xiphmont/demo/neil-young.html

    The Hydrogenaudio forums have had this issue well covered for years.

    From Roger Nichols (I'm sure you've heard of him):

    http://www.soundonsound.com/sos/may06/articles/rogernichols_0506.htm

    Audiobus would only benefit from the option of 24-bit, but oh well, ob-la-di, ob-la-da.

    Edit: man, you guys are quick; by the time I posted this, two others had already replied. I've got to start refreshing before posting.
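
For reference, here's the usual arithmetic behind the dynamic-range figures quoted above (a rough rule of thumb, not taken from the linked articles): quantization SNR for a full-scale sine is about 6.02 dB per bit plus 1.76 dB, so 16-bit lands near 98 dB, a little short of the roughly 120 dB sometimes cited for the ear under ideal conditions, while 24-bit comfortably exceeds it.

```python
# Rule-of-thumb quantization SNR for a full-scale sine wave: 6.02 * bits + 1.76 dB.
for bits in (8, 16, 24):
    print(f"{bits:2d}-bit: ~{6.02 * bits + 1.76:.0f} dB of dynamic range")
# Prints roughly 50 dB, 98 dB, and 146 dB respectively.
```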

  • edited April 2014

    That's exactly the argument you'll get into with those trying to get by on super-cheap setups, when they don't focus on what audio processing will do to their recordings. I'd rather have an information-dense source when processing than begin with a CD-quality recording.

    That said, 24-bit/96k is as high as I'll go with it.

  • I refer you to my original post. :)

  • edited April 2014

    I'll refer you to an article from the same Monty previously linked above, and this very quote:

    "When does 24 bit matter?

    Professionals use 24 bit samples in recording and production [14] for headroom, noise floor, and convenience reasons.

    16 bits is enough to span the real hearing range with room to spare. It does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording-- risking clipping if you guess too high and adding noise if you guess too low-- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.

    An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits."

    So the question is, are we producing music here, or are we talking about a final product?

    People always seem to miss that quote while attempting to bludgeon others.

  • edited April 2014

    @AQ808 said:

    The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording-- risking clipping if you guess too high and adding noise if you guess too low-- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.

    So this is saying that if you're more precise and on it, you can use 16. If you're a bit slapdash, prone to making mistakes, and lazy, you should use 24, right? ;)

  • edited April 2014

    I think that's exactly what he's saying. Or, being slightly more charitable, it's handy, but not essential.

  • edited April 2014

    It has to do with the rise of a waveform; low frequencies benefit more from 24-bit due to this (less rise in a cycle).

    @j_liljedahl said:

    A classic :) Note that it's true only when no further processing is wanted on the signal. It should be kept high-resolution while doing any math on it..

    Got to be true!

    @AQ808 said:

    So the question is, are we producing music here, or are we talking about a final product?

    Bingo.

  • Well, if that were true, then why this quote?

    "An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits."

    That seems to squash 16-bit for app chains of effects.

  • Less rise in a cycle?? Over time, yes, but otherwise the rise is related to amplitude.

  • edited April 2014

    That's during processing, not in between the processors. Audiobus is in between; it's purely a pipeline.

  • @funkjunky27

    Just got done with a jam session and saw your point above...

    I guess I should have put "over time" or "along the x axis" or something; the Roger Nichols link goes into detail. What's your take on the bit depth topic? I can respect your experience.

  • Thanks for the vote of confidence @WMWM, however I'm just a bedroom musician... at best. I added my comment since I'm fairly technical (about 30 years in satellite comms). It's certainly obvious to me that going from 8-bit to 16-bit makes a significant difference in quality, so I have to believe that bit depth needs to be considered, but like anything subject to the laws of physics, there are limits to what we can discern. I get that electronics may benefit from more overhead, but as others have mentioned, once it comes to the point of delivery, 16-bit is more than sufficient. It's that buffer that needs to be taken into account. If your apps/gear/workflow require some overhead, then I think 24 bits makes sense, but for what I'm doing, 16 bits is enough.

  • edited April 2014

    Do you master your creations (or have it done), and is that at 16 bits going in? Don't get me wrong, I'm not trolling here, just seeking information.

    Here's my situation, if I could possibly get some advice: the iPad is used for sound generation, sent out through an interface, into external tube gear (sorry, it's all I have), then into a DAC, out to be recorded on a hard drive. Would I benefit from higher bit depth on the recorder? Would I benefit from higher bit depth from the iPad out? (Yes, my audio interface is capable of 24-bit.)

    Any good advice would be appreciated.

  • For anything except the final distribution of music (ready and mastered), 24 bits is better. This includes the pipeline between apps in Audiobus. But it would depend on the apps actually handling 24 bits all the way and never crushing it down to fewer bits. This is not about the internal representation in effect apps (which should be 32-bit floating point for any quality DSP), but what happens in between.

  • Exactly.

  • edited April 2014

    @j_liljedahl Why does it matter in between, then? If the audio is properly encoded in 16 bits, it has been clearly demonstrated that it is non-destructive. If all that happens is the data gets picked up by the next stage of the chain, then that stage can process it in whatever resolution it needs. Any noise floor issues are negligible, since the 16-bit pipeline is not part of any iterative process.
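
As an aside, here's one way to put rough numbers on that back-and-forth. This is a purely illustrative NumPy sketch, not how Audiobus actually moves buffers: it compares a chain whose buffers stay in float between hops against one that is requantized to 16-bit at every hand-off, with the same stand-in per-hop processing applied to both, and measures only the error the 16-bit hand-offs add.

```python
import numpy as np

def requantize_16bit(x: np.ndarray) -> np.ndarray:
    """One 16-bit hand-off: round the float signal to the nearest 16-bit step."""
    return np.round(np.clip(x, -1.0, 1.0 - 1.0 / 32768.0) * 32768.0) / 32768.0

rng = np.random.default_rng(0)
sr = 44100
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 440 * t)          # one second of a 440 Hz sine

for hops in (1, 10, 100):
    gains = rng.uniform(0.95, 1.05, size=hops)       # stand-in for per-hop processing
    float_chain = signal.copy()                      # buffers kept in float between hops
    int16_chain = signal.copy()                      # buffers requantized at every hop
    for g in gains:
        float_chain = float_chain * g
        int16_chain = requantize_16bit(int16_chain * g)
    noise = int16_chain - float_chain                # error due only to 16-bit hand-offs
    rms_db = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
    print(f"{hops:3d} hops: added noise ~{rms_db:.0f} dBFS RMS")
```

In this toy setup a single hand-off should land somewhere around -100 dBFS, with the floor creeping upward as the hops multiply; whether that ever matters in practice is exactly what's being argued above.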

  • edited April 2014

    It's not destructive when output as a final product. That's a totally different scenario from its travel up to that point. And it isn't that we need "24-bit processing", as already mentioned, but that we would want 24-bit (depth) audio to be processed by 32-bit (floating point) digital signal processing and, finally, at the last stage post-mastering, finished as 16-bit (depth) audio.

    But to even start that you have to have 24-bit (depth) recorded sources.
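
Since the thread title also mentions dithering, here's a minimal sketch of that last stage: reducing a float (or 24-bit) mix to 16-bit with TPDF dither added before rounding, so the quantization error becomes a steady, benign noise floor rather than distortion. Purely illustrative Python/NumPy; real mastering tools typically add noise shaping on top of this.

```python
import numpy as np

def to_int16_with_tpdf_dither(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Reduce a float mix in the range [-1, 1) to 16-bit PCM with TPDF dither."""
    lsb = 1.0 / 32768.0                              # one 16-bit step, in float terms
    # TPDF dither: the sum of two independent uniform noises, spanning +/- 1 LSB.
    dither = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    y = np.clip(x + dither * lsb, -1.0, 1.0 - lsb)
    return np.round(y * 32768.0).astype(np.int16)

# Example: a quiet fade-out, the kind of material where undithered truncation
# is most likely to turn into audible grit instead of smooth noise.
rng = np.random.default_rng(1)
sr = 44100
t = np.arange(sr) / sr
fade = np.sin(2 * np.pi * 440 * t) * np.linspace(0.01, 0.0, sr)
pcm16 = to_int16_with_tpdf_dither(fade, rng)
```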

  • I was asking @j_liljedahl, and I never mentioned '24 bit processing'.

  • edited April 2014

    I was clarifying for anyone's benefit. Something tells me you won't be happy with his answer.

  • A very clear and well presented video, very educational.

  • @PaulB Regarding bits, the noise level is negligible for direct listening. But any processing might amplify or be affected by this noise. How much depends entirely on the kind of processing taking place. Compare with adjusting levels/curves in a low-bit-depth image file: it will show artifacts, which can be dithered away by introducing noise.

    Also, if apps don't output normalized (full-level) signals, the noise floor comes even closer. Then there is also the sample rate: if one wants to do pitching/stretching and such, it's better to work with higher rates in the material. (That's not currently something that would be done on realtime streams in Audiobus, however.)
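
A rough way to see the non-normalized-signal point in numbers, using the same 6-dB-per-bit rule of thumb as earlier (nothing Audiobus-specific): if an app's output only peaks at, say, -24 dBFS, the signal sits 24 dB closer to the 16-bit quantization floor, and any later gain boost brings that floor up along with it.

```python
# Distance between a 16-bit stream's signal peak and its quantization noise floor.
FULL_SCALE_SNR_DB = 6.02 * 16 + 1.76        # ~98 dB for a full-scale sine
for peak_dbfs in (0, -12, -24, -40):
    headroom_to_noise = FULL_SCALE_SNR_DB + peak_dbfs
    print(f"peak {peak_dbfs:4d} dBFS -> ~{headroom_to_noise:.0f} dB above the 16-bit noise floor")
```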

  • Tracking very dynamic instruments (acoustic guitar and percussive instruments only, practically speaking) with 16 bits when 24 is available is silly. You're trying to capture magic—why would you take a chance of an overload? If you're the producer, performer and engineer of a given recording I would hardly call it 'lazy' to opt for safety (particularly if you are improvising and setting a level is difficult). For everything else... including post processing a few tracks via AB chains... meh. "1000s of operations"? C'mon. 24 bit by default would just amount to a lot of overhead on our already stressed devices.

    Since people do track through Audiobus, I reckon an optional 24 bit mode wouldn't do the world any harm and could actually do some good. Whining about its absence or really even pining for its existence is pretty goofy though. And goofy turns to empty if you're tracking synths or electric guitar or have a 16 bit only app anywhere in the chain. Obviously, plenty of 'pro' stuff has been done with 16 bit signal chains.

    Considering that there are only so many hours in a day and a similarly finite amount of hours available for AB development, there's a forum full of ideas here that would add more overall value to AB than 24 bit support. But yeah, I reckon there's a case to be made (tracking via AB) to include 24 bit support in that backlog of ideas. Of course, this isn't as simple as adding 24 bit support to AB because this isn't a single app on an island — every AB compatible app would need to support it.

    Personally, I don't track acoustic guitar or drums/percussion through AB. I record at 24 bit into Auria and process recorded tracks with AB as needed after. Most everything else is easy to set a level for with 16 bits.

  • edited April 2014

    I have a question: is the audio coming out of the iPad, Audiobus or not, manipulated, or is it bit-streamed? For example, the Windows mixer vs. kernel streaming situation.

    This audio is coming out of the dock connector of the iPad and into an audio interface.

    So I've accepted the fact that AB does not plan 24-bit depth for its release tomorrow (maybe someday when iPads mature, hopefully); I am outputting 16/44.1 and then interpolating to 24/96 on the receiving end.

  • "Whining about its absence or really even pining for its existence is pretty goofy though."

    Only if you don't have 24-bit sources, that is. If you do have them, you'd be out of your head not to want them treated properly.

    Auria and its internal ecosystem have that in mind, and IAA is supposedly capable of providing it.

    And that is the competition on iOS by the way.

    That said, the argument you're presenting is regarding personal preference, and has nothing to do with the technical subject of the thread.

  • edited April 2014

    Can anyone answer whether the audio coming out of the dock connector is bit-exact, or is there internal mixing going on before output?

  • edited April 2014

    As far as I know, and you should really ask someone like Rim on the Auria forums if you want to be sure, you only get issues out of the iPad speaker itself, because it is doing something to the mix there. Headphones and interfaces would bypass this.

    There is apparently a bug in iOS 7.1 which squashes 96k audio if you have the built-in speaker or mic active, which Rim addressed with a fix in Auria.

    He'd probably have the best answer to your two-interfaces question as well.
