Excellent video explaining audio bit depth, noise floor and dithering

This video explains perfectly something that is very difficult to explain using just words... bit depth, how it affects the noise floor, and the use of dithering.

Comments

  • Very well explained and demonstrated - just keep in mind that it's about the final track, not the recording part of the job.
    He generated the 8bit demo version from a 24bit file at a 'healthy' level - which can still deliver a dynamic range of about 48dB, not too bad.
    And of course those 2 files will cancel out (if one of them is polarity-inverted) because all the 'defining' top bits are identical.
    It may be somewhat puzzling at first, but consider your own tracks' content below -50dB ;)

    Good reminder of the dithering aspect at the very final stage.
    I usually ignore dithering because on my sources I don't hear any difference at all, but from a technical pov he's absolutely right.

    For the recording process things often look a bit different.
    It's also about noise in the first place - the quantization noise generated by the digitizing chip.
    With current converters the rule of thumb is to use 24bit on acoustic sources (microphone recordings) because it provides more 'usable' range.
    It leaves headroom at the top (no clipping) and also room at the low-volume end, which can be 'amplified' digitally.
    Each 6dB increment simply shifts the signal one bit to the left, with no influence on signal integrity (an 18dB gain shifts 3 bits, which still leaves 21 usable bits).

    With noisy sources like guitar amps or the analog outputs of synth modules and fx units there's nothing to be gained from 24 bits, because the noise floor is rarely below -85dB - well covered by 16 bits (96dB theoretical range).
    On a desktop DAW that's of little concern as there's plenty of disk space, but on mobile devices roughly 30% less storage may matter. (The sketch below illustrates the 6dB-per-bit rule and the null test mentioned above.)
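    To make those numbers concrete, here's a minimal sketch (Python/NumPy, my own toy quantizer - nothing from the video): quantize a signal to a given bit depth, null it against the original, and watch the residue drop by roughly 6dB per extra bit.

    ```python
    # Toy illustration of the 6dB-per-bit rule and the null test.
    # Assumes float samples in [-1, 1); 'quantize' is a hypothetical helper, not a real API.
    import numpy as np

    def quantize(x, bits):
        """Round float samples onto a 'bits'-deep grid (no dither)."""
        levels = 2 ** (bits - 1)
        return np.round(x * levels) / levels

    sr = 48_000
    t = np.arange(sr) / sr
    x = 0.5 * np.sin(2 * np.pi * 440 * t)         # a 'healthy' -6 dBFS sine

    for bits in (8, 16, 24):
        residue = x - quantize(x, bits)            # what a null test leaves over
        rms_db = 20 * np.log10(np.sqrt(np.mean(residue ** 2)))
        print(f"{bits:2d} bit: residue is about {rms_db:.1f} dBFS")  # drops ~6dB per extra bit
    ```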

  • Very interesting.

    Any way to invert the polarity of audio on iOS?

  • I don't understand a word of https://www.bhphotovideo.com/explora/audio/tips-and-solutions/polarity-vs-phase-whats-the-difference

    But it seems that Polarity and Phase are different things.

    And what was done in the video seems to be Phase inversion, not Polarity inversion.
    Or is it?

    Anyway, Cubasis has an internal PhaseInverter which seems to do exactly what was done in the video.

  • Judging by the content of the article I would conclude it's polarity inversion, and the phase inverter in Cubasis is labeled incorrectly.

  • @tja said:
    Very interesting.

    Any way to invert the polarity of audio on iOS?

    For example, the "Invert Phase" insert FX node in AUM - it's in the "Stereo Processing" category.

  • @j_liljedahl said:

    @tja said:
    Very interesting.

    Any way to invert the polarity of audio on iOS?

    For example, the "Invert Phase" insert FX node in AUM - it's in the "Stereo Processing" category.

    Ah, great.
    Thanks!

    That could also mean that Phase Inversion is the right term :)

  • There's also an invert phase switch on every channel in Auria.

  • Thanks, @richardyot

    I also see this as layer FX on pads in BeatMaker 3

    Cannot find this in NanoStudio 2

  • @tja said:
    Thanks, @richardyot

    I also see this as layer FX on pads in BeatMaker 3

    Cannot find this in NanoStudio 2

    Without audio tracks, there's not much point :)

    Maybe once NS2 can do audio...

  • @denx said:
    Judging by the content of the article I would conclude it's polarity inversion, and the phase inverter in Cubasis is labeled incorrectly.

    'Phase invert' is a common term in daily use, though it's technically nonsense.
    Phase only applies to periodic waveforms (like sines), visualized on a circular timeline because each period is identical - hence the measure in degrees.

    Practically all musical signals are non-periodic waves - there is no 'phase' in them.

    But polarity inversion applies to arbitrary signals: if you duplicate some audio onto another track and invert its polarity, the two channels sum to zero and cancel each other out.

    So whenever talk or writing is about phase inversion, polarity inversion is the actual subject (a tiny sketch of that cancellation follows below).
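    Here's a throwaway illustration (Python/NumPy sketch, nothing app-specific) of exactly that: duplicate an arbitrary, non-periodic signal, flip the sign of every sample, and the sum is exact digital silence.

    ```python
    # Polarity inversion is just a per-sample sign flip, so a track plus its
    # inverted copy cancels completely - no periodicity or 'phase' required.
    import numpy as np

    rng = np.random.default_rng(1)
    track = rng.uniform(-1, 1, size=48_000)    # arbitrary non-periodic 'audio'

    inverted = -track                          # polarity inversion
    mix = track + inverted                     # sum both 'channels'

    print(np.max(np.abs(mix)))                 # 0.0 - total cancellation
    ```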

  • @Telefunky said:

    @denx said:
    Judging by the content of the article I would conclude it's polarity inversion, and the phase inverter in Cubasis is labeled incorrectly.

    'Phase invert' is a common term in daily use, though it's technically nonsense.
    Phase only applies to periodic waveforms (like sines), visualized on a circular timeline because each period is identical - hence the measure in degrees.

    Practically all musical signals are non-periodic waves - there is no 'phase' in them.

    But polarity inversion applies to arbitrary signals: if you duplicate some audio onto another track and invert its polarity, the two channels sum to zero and cancel each other out.

    So whenever talk or writing is about phase inversion, polarity inversion is the actual subject.

    That's true. I used the term "Phase Invert" in AUM because that seems to be the common term for it.

  • @Fissura said:
    ... the whole point is that the human ear simply cannot perceive many sounds. It is most important.

    Sorry, but that's completely wrong - and it misses the intention of the video.
    Which is to stop the nonsense about bit depth defining the 'sound', and to explain what dithering is about.

    You can't tell the difference between 16 and 24 bit because the difference sits way too low in level to be perceived at all. A bit in digital audio represents nothing but a loudness difference of 6dB.
    16 bits cover a range of 16 x 6dB; if you consider the last bit unreliable, whatever differs between the two representations starts somewhere around -90dB.
    The pre-condition in this video was that both signals were derived from the same 24 bit source.

    The relevant aspect for iOS use is that you don't lose relevant information if:
    - you copy audio (the standard is 16bit afaik)
    - you store a file recorded at 24bit as a 16bit version to save storage (see the small dithering sketch at the end of this post).

    But it doesn't take golden ears to tell the difference in sound between different types of converters - though take that with a grain of salt.
    The analog path of the circuit is never identical, and neither is the digital clock design.
    So it can never be a 100% objective observation.

    The whole misunderstanding is based on those infamous sine diagrams digitized as a sequence of stair steps, which suggest that the 'step detail' is the crucial part.
    It is not, because those 'steps' are NEVER played back - instead they feed a reconstruction function which does all the smoothing to get back a proper analog signal.

    Another neglected point is the fidelity of the conversion in relation to time.
    2 devices with 100% identical noise and frequency specs can sound very different due to different phase accuracy (as it's called). That figure is rarely published in specs btw.

    Bottom line: the human ear is a really great sound sensor ;)
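    As a companion to the 24bit-to-16bit point above, here's a hedged little sketch (Python/NumPy; the helper name is mine, not any app's API) of what dithering buys you at that step: adding roughly one LSB of triangular (TPDF) noise before truncation turns the discarded low bits into benign hiss instead of distortion correlated with the signal.

    ```python
    # Reduce float audio (think of it as the 24bit mix) to 16bit, with and
    # without TPDF dither, and compare how a very quiet tone survives.
    import numpy as np

    def to_int16(x, dither, rng=np.random.default_rng(2)):
        """Convert float samples in [-1, 1) to int16, optionally with ~1 LSB TPDF dither."""
        scale = 2 ** 15
        if dither:
            # Triangular dither = sum of two uniform noises of +/- 0.5 LSB each.
            x = x + (rng.uniform(-0.5, 0.5, x.shape) +
                     rng.uniform(-0.5, 0.5, x.shape)) / scale
        return np.clip(np.round(x * scale), -scale, scale - 1).astype(np.int16)

    sr = 48_000
    t = np.arange(sr) / sr
    quiet = 10 ** (-90 / 20) * np.sin(2 * np.pi * 440 * t)  # a -90 dBFS sine near the 16bit floor

    plain = to_int16(quiet, dither=False)
    dithered = to_int16(quiet, dither=True)

    # Without dither the tone collapses onto a few hard-quantized levels (distortion);
    # with dither it spreads across more levels and decorrelates from the signal.
    print("levels without dither:", len(np.unique(plain)))
    print("levels with dither:   ", len(np.unique(dithered)))
    ```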

  • Only the average young human ear is able to sense the full spectrum.
    That range definition exists first of all to have 'everything possible' covered.

    An octave division like 78, 156, 312, 625, 1250, 2500, 5000, 10000, 20000 Hz is easier to understand (I just divided downwards by 2, starting at 20kHz - the one-liner at the end of this post reproduces the list).

    'Music' happens in the first 6 octaves; the 7th and 8th are mostly overtones, and above that range you find what's called 'air'.
    There's almost no 'content' in the top octave from 10-20kHz, which is why some audio compression algorithms simply cut it off: that bit of sparkle would need the same bandwidth as everything below it (mathematically).

    Each of us regular folks has some individual hearing damage - some more, some less.
    But the brain is able to compensate for a lot of this, IF it's given the opportunity to 'learn'.
    Which means feeding sound to it AND contemplating that sound.

    We get used to our personal sound environment rather quickly - at least I don't know anyone who ever complained about the living room's acoustic features ;)
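    (A throwaway one-liner, just to reproduce the octave list above by repeated halving from 20kHz:)

    ```python
    # Halve 20 kHz eight times to get the nine octave boundaries listed above.
    freqs = [20_000 / 2 ** n for n in range(8, -1, -1)]
    print([round(f) for f in freqs])
    # -> [78, 156, 312, 625, 1250, 2500, 5000, 10000, 20000]
    ```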

  • @Telefunky said:
    We get used to our personal sound environment rather quickly - at least I don't know anyone who ever complained about the living room's acoustic features ;)

    I did. We got rid of the carpets and curtains, and replaced them with wooden floors and blinds. Horrible treble assault on my ears. I literally could no longer listen to music in that room. It's perfect for singing practice though, all those reflections mean you can hear yourself perfectly.
