Why does the mastering engineer set the ceiling and not the codec?

Very technical topic, beware!

This is something I've wondered ever since the whole "Ceiling" thing came about, when music increasingly got distributed via lossy formats, streaming etc.

Why the heck is the mastering engineer supposed to set the "Ceiling" to prevent any codecs (well, encoders) further down the line from doing stupid stuff, thereby needlessly reducing the output format's dynamic range (by a tiny amount, I know, but still!)?

Shouldn't each individual codec know best how it works internally, and thus do the necessary level reduction itself, always assuming that the input is normalized to 0 dBFS?

Or am I again totally incompetent?

Comments

  • edited November 2022

    Because the final ceiling usually goes hand in hand with the overall volume/loudness of the track, which is what the mastering engineer is doing the heavy lifting on.

    Also, the only way the codec can prevent overs when the file peaks at 0dBFS is to lower the level or dynamic range somehow, and a lot of people aren't happy leaving that up to the codec to decide.

  • Interesting, thanks... so, as I suspected, I'm looking at it too much from a technical point of view rather than an artistic one.

  • I think setting the ceiling one or two dB lower than full scale started in the '80s, when compact discs appeared. The format itself was fine, but many consumer CD players had awful converters or other components, so they sounded pretty bad when the true peak was over 0dBFS.

  • edited November 2022

    There are still many places where you can run into clipping issues at 0dBFS, even with streaming services. There's a reason that all of them recommend masters no louder than -1dBTP (True Peak) instead of -1dBFS.

    Whether or not that clipping is audible is a whole other story, which is one reason we don't hear issues even though most masters sent for streaming are probably -0.3 to -0.5dBFS. 🤷🏼‍♂️

    I still do all my client masters to -0.3dBFS unless they specifically tell me they want something different. Most online aggregators don't let you send multiple versions of the same track for different outlets, so one "traditional" master for everything usually works fine.
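
    To make the True Peak vs. sample peak distinction concrete: a file whose samples never exceed 0dBFS can still reconstruct to a waveform that does. Here's a rough Python sketch of a true-peak check via oversampling (illustrative only, not a production meter; assumes NumPy/SciPy):

        import numpy as np
        from scipy.signal import resample_poly

        # A sine at fs/4, phase-shifted so the samples straddle the crest.
        n = np.arange(4096)
        x = np.sin(2 * np.pi * n / 4 + np.pi / 4)
        x /= np.max(np.abs(x))                    # sample peak is now exactly 0 dBFS

        # 4x oversampling approximates the reconstructed (analog) waveform.
        x4 = resample_poly(x, 4, 1)
        tp_db = 20 * np.log10(np.max(np.abs(x4)))
        print(f"true peak = {tp_db:+.2f} dBTP")   # comes out around +3 dBTP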

  • It's also about the fact that reducing the volume of a digital signal (i.e. inside the codec) can't be done without adding more quantization error.
    That might not be very relevant for a loud master at 24-bit resolution or more, but if the volume can already be reduced in the mastering process, why leave it up to the codec - which, in a live stream, doesn't even know in advance by how much to reduce the volume.

    On top of that, lossy audio compression then usually adds even more quantization error (now in the frequency domain) to reduce the data rate.
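
    A tiny numerical illustration of that re-quantization cost (a toy, assuming plain NumPy, and no dither, which a real mastering chain would add):

        import numpy as np

        rng = np.random.default_rng(0)
        ideal = rng.uniform(-1.0, 1.0, 100_000)   # stand-in for audio, float64 "ideal" signal

        def store(x, bits):
            # Round to a fixed-point word of the given length (no dither).
            q = 2.0 ** (bits - 1)
            return np.round(x * q) / q

        for bits in (16, 24):
            y = store(store(ideal, bits) * 0.5, bits)   # store, turn down 6 dB, store again
            err = y - ideal * 0.5                       # deviation from the ideal result
            rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
            print(f"{bits}-bit: error floor around {rms_db:.0f} dBFS")

    At 16 bits the error floor lands around -100dBFS; at 24 bits it sits roughly 48dB lower still, far below anything audible.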

  • @rs2000 said:
    It's also about the fact that reducing the volume of a digital signal (i.e. inside the codec) can't be done without adding more quantization error.
    That might not be very relevant for a loud master at 24-bit resolution or more, but if the volume can already be reduced in the mastering process, why leave it up to the codec - which, in a live stream, doesn't even know in advance by how much to reduce the volume.

    On top of that, lossy audio compression then usually adds even more quantization error (now in the frequency domain) to reduce the data rate.

    This... even pulling the DAW fader down and up introduces quant errors.

    Cheers

  • @zedzdeadbaby said:
    This... even pulling the DAW fader down and up introduces quant errors.

    Can any human actually hear these "errors"?

    Or is it like driving down the street in your car and time is going infinitesimally slower for you in the car than for the people walking on the footpath?

  • @Simon said:

    @zedzdeadbaby said:
    This... even pulling the DAW fader down and up introduces quant errors.

    Can any human actually hear these "errors"?

    Or is it like driving down the street in your car and time is going infinitesimally slower for you in the car than for the people walking on the footpath?

    Since DAWs work with 32- or 64-bit floating point precision, it's likely more like a slow car, not a lightspeed racing machine 😄

  • @rs2000 said:
    Since DAWs work with 32- or 64-bit floating point precision, it's likely more like a slow car, not a lightspeed racing machine 😄

    But can you actually hear the errors?

  • @Simon said:

    @rs2000 said:
    Since DAWs work with 32- or 64-bit floating point precision, it's likely more like a slow car, not a lightspeed racing machine 😄

    But can you actually hear the errors?

    I can't, that's what I meant to say 😉

  • @rs2000 said:

    @Simon said:
    But can you actually hear the errors?

    I can't, that's what I meant to say 😉

    OK, thanks. Good to know.

  • edited November 2022

    The "codec scaling down the amplitude and thus introducing additional quantization errors" argument also only holds when the master is delivered as 16-bit WAV. Much less so already at 24-bit, and pretty much irrelevant in 32-bit float I'd say.

    I still personally think that the mastering engineer's job should focus on the artistic part (dynamics, perceived loudness (as measured by their trusty ole' LUFS meter), EQ, stereo image etc.), while mere technicalities like a ceiling to avoid conversion artefacts should be handled automatically downstream - which isn't really a problem if the master is delivered in a 32 or 64 bit floating point format.

    But I also understand the reasoning that ONE engineer might want precise control over EVERYTHING, including the FINAL loudness, down to a tenth of a dB.

    Good discussion to have! 👍
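
    To put a number on the floating-point case from the fader discussion above, here's a toy check (assuming NumPy) of turning a signal down 12dB and straight back up, in 32-bit float versus 16-bit fixed point:

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(-1.0, 1.0, 100_000).astype(np.float32)
        g = np.float32(10 ** (-12 / 20))          # a -12 dB fader move

        # 32-bit float: down and straight back up again.
        y_float = (x * g) / g
        print("float32 max error:", np.max(np.abs(y_float - x)))   # ~1e-7

        # 16-bit fixed point: re-quantize after each move.
        q = 2.0 ** 15
        down = np.round(x * g * q) / q
        y_int = np.round(down / g * q) / q
        print("16-bit max error:", np.max(np.abs(y_int - x)))      # hundreds of times bigger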

  • Some of it is practical too: how many aggregators actually accept 32-bit float masters? You're lucky if you can find one that lets you upload 24-bit, sadly.

    FWIW I can't believe anyone can hear quantization errors in a file in normal use cases. The noise floor of even the best playback systems is probably way higher. Sometimes the theory of digital audio gets in the way of the practical side of audio engineering. 🙃

  • I'm making my songs in Dolby Atmos currently, and I have to get them down to -18LUFS.

    My approach is to not use a limiter; instead I (painstakingly) bring each bed track or object track down in level independently to achieve a balanced mix.

    I prefer this to slapping a limiter on the final groups or channels etc.; a limiter would only kick in above a certain level and let most quiet passages pass as is (or at least very little-ly smallened).

    Fortunately I have enough pain to stake
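
    A toy contrast of those two approaches - a static level trim versus a limiter-style gain computer (a rough sketch assuming NumPy; a real limiter adds attack/release smoothing and look-ahead that this leaves out):

        import numpy as np

        def trim(x, gain_db):
            # Static trim: every sample, loud or quiet, is scaled the same way.
            return x * 10 ** (gain_db / 20)

        def toy_limiter(x, ceiling=0.5):
            # Per-sample gain computer: unity gain below the ceiling,
            # attenuation above it, so quiet passages pass untouched.
            g = np.minimum(1.0, ceiling / np.maximum(np.abs(x), 1e-12))
            return x * g

        quiet, loud = 0.1 * np.ones(4), 0.9 * np.ones(4)
        print(toy_limiter(quiet))   # unchanged: 0.1
        print(toy_limiter(loud))    # held at the ceiling: 0.5
        print(trim(quiet, -6.0))    # static trim turns the quiet bits down too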

  • Is there a quick intro to this topic somewhere? I think I understand, but the dynamic range bit was new to me. Do streaming services measure LUFS now, or...?
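
    For reference, integrated loudness (LUFS) is defined in ITU-R BS.1770, and you can measure it yourself in Python with e.g. the pyloudnorm package - a minimal sketch, assuming it's installed (the file name is just a placeholder):

        import soundfile as sf            # pip install soundfile pyloudnorm
        import pyloudnorm as pyln

        data, rate = sf.read("master.wav")            # hypothetical input file
        meter = pyln.Meter(rate)                      # BS.1770 loudness meter
        loudness = meter.integrated_loudness(data)
        print(f"integrated loudness: {loudness:.1f} LUFS")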
