24 bit mastering on iOS, what sources?

Comments

  • @brambos said:

    @Ocsprey said:
    Apologies for meandering

    I get the feeling Apple wants to keep it as transparent as possible so we don't have to know. And I guess the only way to really know these things is to measure it, since processor-load is heavily dependent on the CPU/FPU that's used and the way conversions are handled by the system layers and the apps. E.g. on some processors you can move between integer and floating point at hardly any penalty, whereas on others it costs a lot of cycles to do so. I always used to think that fixed point arithmetic was faster than floating point, but this too is not as black and white as it used to be. etc. etc.

    Right - it's the most beneficial approach - a good balance between proprietary control, user access and the like - but once in a while someone should at least grumble about interoperability and protocols, the time and creativity wasted, and every crackle in my ears. There are counter-arguments, but here we are!

  • @Ocsprey said:

    <...>

    Apologies for meandering

    96kHz, 32 bit ;-)

    I presume you understand that you only need twice the highest frequency you want to capture. 44kHz is enough for 22kHz content, which is already more than you can hear, so 96kHz just means you are preparing to transport frequencies up to 48kHz!

    So I am just asking: have you ever done a frequency analysis of your source material, and have you ever found anything above 22kHz?

    If not, you are just wasting CPU and space ;-)
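
    Just to make the numbers concrete, here is a rough Swift sketch (my own illustration, not anything from the apps discussed) of the Nyquist relationship:

    ```swift
    // Nyquist: a sample rate fs can only represent content up to fs / 2.
    let sampleRates: [Double] = [44_100, 48_000, 96_000]
    for fs in sampleRates {
        print("\(Int(fs)) Hz sampling -> content up to \(Int(fs / 2)) Hz")
    }
    // 44100 Hz already covers 22050 Hz, above typical hearing;
    // 96000 Hz only helps if the source really contains content up to 48000 Hz.
    ```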

  • @AudioGus said:
    Pretty hardcore for a hobby! ;)

    Just some basics from school ;-)
    Ah, and yes I also work with computers.

  • then your school finished too early I suppose... ;)
    The 'twice the highest frequency' rule is only one side of the coin - the other is labeled 'aliasing'.
    With 44.1 and 48k sampling rates the processing creates artifacts in ultrasound that 'flip back' into the audible domain.
    With any sampling frequency above 70 kHz such aliasing is (due to the math) always beyond 20 kHz, and thus the sound is generally perceived as better, more clear, transparent etc.
    Its extremely inharmonic content makes aliasing stand out more than its level may suggest.
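
    A tiny Swift sketch of the fold-back itself (my own toy formula; the 30 kHz component is just an example of a distortion product):

    ```swift
    // Fold a frequency back into the representable band [0, fs/2].
    func aliasFrequency(_ f: Double, sampleRate fs: Double) -> Double {
        var folded = f.truncatingRemainder(dividingBy: fs)
        if folded > fs / 2 { folded = fs - folded }
        return folded
    }

    print(aliasFrequency(30_000, sampleRate: 44_100))  // 14100.0 - folded into the audible band
    print(aliasFrequency(30_000, sampleRate: 96_000))  // 30000.0 - stays ultrasonic
    ```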

  • If not, you are just wasting CPU and space ;-)

    I potentially agree, but neither of those are factors, so I'm okay with that - that's kinda it. If a normal MBP has 40gb of throughput... I remember around '05 putting in a 40gb-capable packet analyzer or something; it was like 350k, and now we have it at home. It should be no problem to record as many channels as you like at whatever kHz.

    On the other hand, I potentially disagree, because there is much we don't know about sound, including how our own brain processes it. Once you get past the organ of Corti the electro-chemical mystery begins, and most naturally occurring 'things' like sound tend to eventually unravel as affecting us in some way. I think it's still worthwhile to call into question not the math - which clearly demonstrates such frequencies are unheard - but our as yet incomplete understanding of our own processing of sound. There's a popular saying about language and context - that every statement always already exists - and so too the entirety of frequency; we may still discover nature's sweetest melodies are those unheard.

    tja
    edited May 2017

    @Telefunky said:
    then your school finished too early I suppose... ;)
    The 'twice the highest frequency' rule is only one side of the coin - the other is labeled 'aliasing'.
    With 44.1 and 48k sampling rates the processing creates artifacts in ultrasound that 'flip back' into the audible domain.
    With any sampling frequency above 70 kHz such aliasing is (due to the math) always beyond 20 kHz, and thus the sound is generally perceived as better, more clear, transparent etc.
    Its extremely inharmonic content makes aliasing stand out more than its level may suggest.

    That's new to me and beyond my knowledge - very interesting!
    Do you have some links about this topic?

    So, I would need to shift my wishes to 24 bit, 96 kHz :blush:

  • edited May 2017
    The user and all related content has been deleted.
  • I stand corrected ;-)
    Esp. those aliases were new to me....

  • At the point where your source enters the system at 16 bits, up-rezzing to 24 bits is lossless.

    Once inside the system, especially when mixing, the extra headroom from mixing at 24 bits rather than 16 bits gives a greater freedom from distortions due to arithmetic rounding errors.

    E.g. adding together 48 channels whose LSBs (at 24 bits) are greater than 10 will yield a result differing by 1 from the same operation done in 16 bits. Assuming random distributions, the errors - which show up as distortion - could differ by 15 on average, which is actually quite audible and nasty. I have had to deal with this with up to 96 16-bit sources in my DSP work.

    Down-rezzing back to 16 bits at the point where the final result is served back to the outside world does not introduce errors, in the sense that it has already been proven that 16 bit faithfully reproduces waveforms hi-cut at 20K.

    There are of course plenty of conversations about whether or not this brick-wall filtering can or cannot be heard. My opinion is that what is heard in those tests are flaws in the filters and jitter in the bit clock (where it was shown long ago that sub-nanosecond jitter can audibly degrade the noise floor).
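
    Here is a rough Swift sketch of the kind of accumulation I mean (made-up random samples, nothing from an actual project):

    ```swift
    // Sum 48 channels at full 24-bit resolution vs. after truncating each
    // sample to 16 bits; the difference is the accumulated rounding error.
    let channels = 48
    var exactSum = 0
    var truncatedSum = 0

    for _ in 0..<channels {
        let sample24 = Int.random(in: -(1 << 23)..<(1 << 23))  // a signed 24-bit sample
        exactSum += sample24
        truncatedSum += (sample24 >> 8) << 8                   // low 8 bits dropped, as at 16 bits
    }

    // The per-channel truncation errors all have the same sign, so they add up
    // rather than cancel - that accumulated difference is what shows up as distortion.
    print("accumulated error in 24-bit LSBs: \(exactSum - truncatedSum)")
    ```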

  • edited May 2017

    here's the analog counterpart of that 'effect' (quoting self)

    @Telefunky said:
    depends on content: it's a tough task to digitize precisely beyond bit #20, which already requires sophisticated hardware/clock and a flawless power supply.
    There are reports that those 'spoilt bits' in fact sum up significantly in today's loudness-oriented productions.

    @dwarman nice to read a real world proof, I never use that amount of channels

  • @dwarman said:
    At the point where your source enters the system at 16 bits, up-rezzing to 24 bits is lossless.

    Once inside the system, especially when mixing, the extra headroom from mixing at 24 bits rather than 16 bits gives a greater freedom from distortions due to arithmetic rounding errors.

    I can't speak for all developers, but internally all my sound engines run at (at least) 32 bits. So the 16/24 bit distinction only comes into play when getting sounds in and out of the app, and is not part of the equation when mixing voices/channels, etc. I am almost certain most iOS DAWs also mix at much higher resolutions than 24 bits internally.
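
    Roughly the pattern I mean, as a simplified sketch (not my actual engine code, just the general idea): mix at float precision, quantize only at the output.

    ```swift
    // Mix any number of voices at 32-bit float precision and quantize to
    // 16-bit integers only at the output boundary.
    // Assumes all voices have the same frame count.
    func mixToInt16(voices: [[Float]]) -> [Int16] {
        let frames = voices.first?.count ?? 0
        var bus = [Float](repeating: 0, count: frames)
        for voice in voices {
            for i in 0..<frames { bus[i] += voice[i] }   // full float precision while mixing
        }
        return bus.map { (sample: Float) -> Int16 in
            let clipped = max(-1.0, min(1.0, sample))    // keep within the 16-bit range
            return Int16(clipped * 32767)
        }
    }
    ```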

  • @dwarman said:
    At the point where your source enters the system at 16 bits, up-rezzing to 24 bits is lossless.

    Once inside the system, especially when mixing, the extra headroom from mixing at 24 bits rather than 16 bits gives a greater freedom from distortions due to arithmetic rounding errors.

    You obviously know what you're talking about, especially with regard to sound.

    But "up-rezzing" data from 16 to 24 bit is surely not the same as starting with 24 bit!

    It would only be the same if your 24 bit data contains the same information as the 16 bit data, without the additional nuances that are possible with 24 bit - meaning it was already blown up and not really carrying data in those extra bits.

    Please see my simple example above, which shows this in an easy way.

    And about bits and data (outside of sound) I am quite sure.

  • edited May 2017

    @tja said:

    @dwarman said:
    At the point where your source enters the system at 16 bits, up-rezzing to 24 bits is lossless.

    Once inside the system, especially when mixing, the extra headroom from mixing at 24 bits rather than 16 bits gives a greater freedom from distortions due to arithmetic rounding errors.

    You obviously know what you're talking about, especially with regard to sound.

    But "up-rezzing" data from 16 to 24 bit is surely not the same as starting with 24 bit!

    It would only be the same if your 24 bit data contains the same information as the 16 bit data, without the additional nuances that are possible with 24 bit - meaning it was already blown up and not really carrying data in those extra bits.

    Please see my simple example above, which shows this in an easy way.

    And about bits and data (outside of sound) I am quite sure.

    Only in theory. If the data in your 24-bit stream doesn't make full use of the resolution there will be very little difference between the 24 bit and 16 bit version. In other words: these extra 8 bits of resolution are not used meaningfully. And - as stated a couple of times in this thread already - if the data was generated digitally in an iPad synth this will most likely be the case.

    Look at the image below:

    The resolution of this image is much higher than is required by the data it contains, and it could have been encoded in a much lower resolution without losing fidelity (both in terms of spatial resolution and in the fact that three color channels are used to encode a single grayscale range). A similar analogy can be made when it comes to headroom and dynamic range in audio signals.

  • edited May 2017

    Focusing too much on Bits and Hz leads nowhere. There are amazing sounding 16-bit devices and 'shty sounding' 24-bit devices. I mean does a 12-bit Akai S950 sound like 'sht'? I doubt that very, very much...

    Personally I'm semi-addicted to 'chip-music' (SID 6581/8580, NES, 2-op FM etc.).
    The Yamaha FM-Essentials app has a next to perfect emulation of the 12-bit DAC found on the TX81z...
    (I can't hear any difference between my real TX81z and the FM Essentials app).

    So instead of focusing on bit-depths and sample-rates, like some photographers focus on 'ISO noise', it might be better to look at the bigger picture and ask ourselves...
    ...What do we really want to accomplish?

  • ^ the voice of reason :D

    wim
    edited May 2017

    Every bit of modern music is based on distortion. The rock'n'roll revolution would never have occurred if amps hadn't failed to reproduce the input signal properly. The character of virtually every sought-after vintage compressor, mixing desk, microphone etc. is based on these failures.

    Hell, every instrument, and even nature itself, is perceived with some kind of interference (albeit non-digital). There is no such thing as pristine sound.

    Interesting thread for sure, and I'm not disputing anyone's points or downplaying your own preferences, but it does seem pointless to me to obsess about such perfection in a medium that is virtually driven by imperfection.

  • @brambos said:

    already blown up and not really containing data in those 24 bits.

    Only in theory. If the data in your 24-bit stream doesn't make full use of the resolution there will be very little difference between the 24 bit and 16 bit version. In other words: these extra 8 bits of resolution are not used meaningfully.

    That's exactly what I wrote - it's only valid when there is in fact enough data to fill the bits and it is not just already blown up.

    And - as stated a couple of times in this thread already - if the data was generated digitally in an iPad synth this will most likely be the case.

    This is new to me, or I misunderstood.

    The iPad may produce 24 bit, but this is not "full quality"?
    Where did you get this from, any link?

  • edited May 2017

    @tja said:

    @brambos said:

    already blown up and not really containing data in those 24 bits.

    Only in theory. If the data in your 24-bit stream doesn't make full use of the resolution there will be very little difference between the 24 bit and 16 bit version. In other words: these extra 8 bits of resolution are not used meaningfully.

    That's exactly what I wrote - it's only valid when there is in fact enough data to fill the bits and it is not just already blown up.

    And - as stated a couple of times in this thread already - if the data was generated digitally in an iPad synth this will most likely be the case.

    This is new to me, or I misunderstood.

    The iPad may produce 24 bit, but this is not "full quality"?
    Where did you get this from, any link?

    That's not how it works... it is full quality, but the detail of the dynamic range in a fully digitally generated signal is so 'perfect' that it already exceeds human perception without needing 24 bits. E.g. there is no audible noise floor 'eating up bits' that needs to be compensated for, etc.

    But either way, as @Samu already said: in the end it's all about the ears, not the theory.

  • edited May 2017

    brambos probably did get it elsewhere, but dwarman in a post above wrote about exactly that aspect of the topic.
    That 'tiny difference' starts to kick in around -90 dB levels and below.
    To lift that part of the signal above the perception boundary you'll have to apply at least 110 dB of gain.
    Audio algorithms are a tricky thing in certain domains: just picture a fast compressor like the 1176 model, with attack times in the sub-millisecond range.
    How is it supposed to deal with a bass sound of 50Hz, where a single cycle of the waveform takes 20ms? Not that easy - calculation errors are almost guaranteed ;)

    As mentioned above, in heavily distorted (or compressed) sounds it's plain nonsense to use 24-bit input, because digital processing will introduce larger calculation errors than the gain in recording precision provides.
    And the more channels you have in your mix, the more such small errors sum up.
    That's why it sometimes helps to deliberately cut off bits (which are internally replaced by zeroes).
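
    A tiny illustration of what 'cutting off bits' means (my own example values, nothing app-specific):

    ```swift
    // Zero the lowest bits of a 24-bit sample so they no longer carry noise.
    func truncate(_ sample24: Int32, keepBits: Int) -> Int32 {
        let drop = Int32(24 - keepBits)
        return (sample24 >> drop) << drop   // dropped bits come back as zeroes
    }

    print(String(truncate(0x123456, keepBits: 16), radix: 16))  // "123400"
    ```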

  • I am not talking theory here, I am talking practice.

    The issue is in the internal processing arithmetic, which is where the 24 bit (or better, 32 bit float) are needed. Take a random set of 16 bit samples, scale each by some fraction (and immediately you have lost some resolution and accuracy), then add the results together. Do the same with a 24 bit sample set, and additionally down-rez the final result to 16 bits. You will get different results. Not all differences will be inaudible (thus generating bitch reports), and any such differences will screw up automated testing. It is however true that for a single audio file, there is no advantage to being 24 bit over 16 bit if the processing hardware is properly designed. But it might save a bit or two of rounding errors if you are going to present such a file as one of many similar files for mixing and mastering together.

    Did you notice I code these things for a living? The WiiU audio engine, for example, is 16 bits at the periphery but 24 bits up-rezzed internally. My first hack was 16 bits internally, in my naive days. I do not speak from theory, I speak from practice and experience. I don't understand the math of the theory anyway, but I do know my way around an FFT and an oscilloscope, and my ears are near golden. The signal path will in general include several processing steps, each one of which is a potential source of distortion inversely proportional to the bit rez and proportional to the number of channels involved - up to 96 in that case. Rounding errors are a bitch. And the closer to the noise floor you have to work, the worse they are. I had one case where it made the difference between a scale value of 1 or 0: the filter converged to high DC instead of 0 V.
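
    Here is roughly that experiment as a small Swift sketch (my re-creation with arbitrary gain and random samples, not the original test):

    ```swift
    // Scale 96 random 16-bit samples by a fraction and mix them: once with the
    // intermediate results rounded back to integers, and once keeping full
    // precision until a single final rounding.
    let gain = 0.37
    let samples = (0..<96).map { _ in Int.random(in: -32768..<32768) }

    let roundedEachStep = samples.reduce(0) { $0 + Int((Double($1) * gain).rounded()) }
    let roundedOnce = Int(samples.reduce(0.0) { $0 + Double($1) * gain }.rounded())

    print("difference caused by per-step rounding: \(roundedEachStep - roundedOnce)")
    // The two paths typically disagree by a few LSBs - the kind of rounding
    // error that grows with the number of channels and processing steps.
    ```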

    tja
    edited May 2017

    I was wondering why I remembered that 44.1 kHz @ 24 bit was the best solution for mastering, before producing the final output at 44.1 kHz @ 16 bit....

    I found one of those older articles, which seems very interesting:

    https://people.xiph.org/~xiphmont/demo/neil-young.html

    Still, there is no mention of alias problems below 70 kHz, but I surely do not understand everything :smiley:

    Maybe you're interested in reading it ....

  • edited May 2017
    The user and all related content has been deleted.