Video: why you need a 96 kHz sample rate to clearly record frequencies up to 20 kHz

Recently I've been enjoying making tutorial videos that answer questions about signal processing. I made this video explaining why sample rates higher than 48 kHz can sound considerably better to the ear.

It's long, heavy, and complicated, but it makes an argument that may make a difference for people who are serious about sound quality.


Comments

  • Leaders teach. Well done

  • These videos are great.

    Adding this to the wiki's forum gold.

  • Thanks!
    Educational and 'easy enough' to understand!

    Being a tracker junkie (Renoise and SunVox etc.) I'm quite familiar with how different interpolation methods (None, Linear, Bicubic, Sinc etc.) affect the sound when re-pitching (i.e. adding or removing samples) during playback, and with the different 'creative' use cases for each of them, but I never bothered to learn the math behind them...

    I almost have a 'fetish' for the rougher methods such as dropping or holding the last sample, since the Amiga's Paula worked like that, creating nice aliasing overtones when pitching the samples :)
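
    For readers following along, a minimal sketch (hypothetical Swift, with a made-up 1 kHz test tone) of the difference between "hold the last sample" repitching and linear interpolation; the held version produces the stair-steps that create those aliasing overtones:

    ```swift
    import Foundation

    // Sketch only: repitch a buffer by a ratio using either "hold" (zero-order hold,
    // Amiga Paula style) or linear interpolation between neighbouring samples.
    func repitch(_ input: [Float], ratio: Double, linear: Bool) -> [Float] {
        let outCount = Int(Double(input.count) / ratio)
        var output = [Float](repeating: 0, count: outCount)
        for n in 0..<outCount {
            let pos = Double(n) * ratio                    // fractional read position
            let i = Int(pos)
            let frac = Float(pos - Double(i))
            let a = input[min(i, input.count - 1)]
            let b = input[min(i + 1, input.count - 1)]
            // "Hold" just repeats the previous sample; linear crossfades to the next one.
            output[n] = linear ? a + (b - a) * frac : a
        }
        return output
    }

    // One second of a 1 kHz sine at 44.1 kHz, pitched up a fifth (ratio 1.5).
    let sr = 44_100.0
    let tone = (0..<44_100).map { Float(sin(2.0 * .pi * 1_000.0 * Double($0) / sr)) }
    let held = repitch(tone, ratio: 1.5, linear: false)   // stair-steps, audible aliasing overtones
    let smooth = repitch(tone, ratio: 1.5, linear: true)  // much less aliasing
    ```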

  • Wow.
    I'm impressed!

    Going to also watch the quoted video.

    This made me check your apps again: I already own 6 of them, and maybe I will add some more.
    But I will surely watch any new app and video from Blue Mangoo!

    One point of feedback, though, @Blue_Mangoo:
    Please refrain from drinking, but especially from eating, while creating a video ;)

    BTW, this thread should be moved to the knowledge base category!
    Maybe @Michael?
    Do we have more moderators?

  • @tja said:
    Wow.
    I'm impressed!

    Going to also watch the quoted video.

    This made me check your apps again: I already own 6 of them, and maybe I will add some more.
    But I will surely watch any new app and video from Blue Mangoo!

    One point of feedback, though, @Blue_Mangoo:
    Please refrain from drinking, but especially from eating, while creating a video ;)

    BTW, this thread should be moved to the knowledge base category!
    Maybe @Michael?
    Do we have more moderators?

    tja: the wiki is where such things are now kept. The knowledge base category was intended as a temporary place to put things until the wiki was up and running. Feel free to add this to the wiki.

    I have added this thread to the 'Forum Gold' page on the wiki, which is a place where you can easily add threads and posts that should be a part of the wiki.

  • @tja said:
    Wow.
    I'm impressed!

    >

    One point of feedback, though, @Blue_Mangoo:
    Please refrain from drinking, but especially from eating, while creating a video ;)

    Why, Brad Pitt ate in every scene of Ocean's Eleven and that movie turned out fine

  • tja
    edited June 2019

    @oat_phipps said:
    Why, Brad Pitt ate in every scene of Ocean's Eleven and that movie turned out fine

    They did scale down the eating sounds :D ;)

  • On topic, we all know that there are also strong arguments against 192 kHz, especially those about artifacts that may fold back into the audible range.

    It would be interesting to produce audio at 16-bit/44.1 kHz or 24-bit/48 kHz, and then at 24-bit/192 kHz, and mix each down to 16-bit/44.1 kHz for the final product.
    Those final products should then be ABX-compared: a double-blind test with many people, not just one person.

    But can this be done on an iPad?
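
    It could at least be prototyped in code; a hedged sketch (hypothetical Swift, file names invented) of a single ABX trial is below. Over many trials and many listeners, a score near 50% means the two mixdowns are not reliably distinguishable.

    ```swift
    import Foundation

    // Sketch of one ABX trial: the listener hears A, B and a randomly chosen X,
    // then says whether X was A or B. Actual playback is left out of the sketch.
    struct ABXTrial {
        let a: String   // e.g. path to the 24-bit/192 kHz mixdown (hypothetical file)
        let b: String   // e.g. path to the 16-bit/44.1 kHz mixdown (hypothetical file)
        func run(listenerSaysXWasA: (String) -> Bool) -> Bool {
            let xIsA = Bool.random()
            let x = xIsA ? a : b
            return listenerSaysXWasA(x) == xIsA
        }
    }

    let trials = 20
    var correct = 0
    for _ in 0..<trials {
        let trial = ABXTrial(a: "mix_192k.wav", b: "mix_441k.wav")
        // A listener who cannot hear a difference is effectively guessing:
        correct += trial.run(listenerSaysXWasA: { _ in Bool.random() }) ? 1 : 0
    }
    print("\(correct)/\(trials) correct")
    ```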

  • AUM offers up to 96 kHz, but this does not seem to work on my iPad Pro 9.7

  • @tja said:
    AUM offers up to 96 kHz, but this does not seem to work on my iPad Pro 9.7

    Higher sample-rates are selectable when the connected audio interface supports them...
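
    For what it's worth, on iOS an app can only ask for a higher rate; a rough sketch (using AVAudioSession, with the 96 kHz figure just as an example) of how that request works and why it quietly falls back to 44.1/48 kHz on the built-in codec:

    ```swift
    import AVFoundation

    // Hedged sketch: request 96 kHz from the audio session. iOS treats this as a
    // preference; the granted rate depends on the current output route, so with a
    // capable USB interface connected, session.sampleRate may become 96000,
    // while the built-in iPad codec typically stays at 44.1 or 48 kHz.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.setPreferredSampleRate(96_000)
        try session.setActive(true)
        print("Requested 96 kHz, hardware is running at \(session.sampleRate) Hz")
    } catch {
        print("Audio session error: \(error)")
    }
    ```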

  • @tja said:
    On topic, we all know that there are also strong arguments against 192 kHz, especially those about artifacts that may fold back into the audible range.

    It would be interesting to produce audio at 16-bit/44.1 kHz or 24-bit/48 kHz, and then at 24-bit/192 kHz, and mix each down to 16-bit/44.1 kHz for the final product.
    Those final products should then be ABX-compared: a double-blind test with many people, not just one person.

    But can this be done on an iPad?

    Yeah. The whole thing is kind of wasted if in the end you are going to release it at 44.1 kHz on YouTube or Spotify anyway. This video is more of a prayer for the future than something that is actually practical in today’s world of streaming services.

  • @Samu said:

    @tja said:
    AUM offers up to 96 kHz, but this does not seem to work on my iPad Pro 9.7

    Higher sample-rates are selectable when the connected audio interface supports them...

    I have no audio interface, as in "there is no spoon" :)

    After some tests and hard reboot, I'm quite sure that the iPad Pro 9.7 can only work at 44.1 or 48 kHz natively.
    Will also check the iPad Pro 12.9 2nd gen and the iPhone 8.
    Just to know what's possible - without audio interfaces, which I do not have any use for (getting audio in or out, I suppose).

    Thanks for the information with the audio interface!

  • @Blue_Mangoo said:

    @tja said:
    On topic, we all know that there are also strong arguments against 192 kHz, especially those about artifacts that may fold back into the audible range.

    It would be interesting to produce audio at 16-bit/44.1 kHz or 24-bit/48 kHz, and then at 24-bit/192 kHz, and mix each down to 16-bit/44.1 kHz for the final product.
    Those final products should then be ABX-compared: a double-blind test with many people, not just one person.

    But can this be done on an iPad?

    Yeah. The whole thing is kind of wasted if in the end you are going to release it at 44.1 kHz on YouTube or Spotify anyway. This video is more of a prayer for the future than something that is actually practical in today’s world of streaming services.

    I think this could still be valuable for mixing and mastering, even if it ends up at 16-bit/44.1 kHz.
    Would really like to test such stuff....

  • @Samu said:

    @tja said:
    AUM offers up to 96 kHz, but this does not seem to work on my iPad Pro 9.7

    Higher sample-rates are selectable when the connected audio interface supports them...

    I run at 96 kHz using a sound card on iPad. It’s worth noting that some apps unfortunately don’t run correctly at higher sample rates on iOS; for example, AD Quanta, Zeeon effects and Sunrizer effects pitch incorrectly at 96 kHz.

  • tja
    edited June 2019

    @Samu Or are you saying that I should just connect an audio interface and could then produce audio at 24-bit/96 kHz internally, on the iPad itself?

    I mean, without actually sending the audio anywhere through the audio interface?
    In this case, I should finally get an audio interface.

    In the end, I just want to produce the audio files on the iPad....

  • @tja said:
    @Samu Or are you saying that I should just connect an audio interface and could then produce audio at 24-bit/96 kHz internally, on the iPad itself?

    I mean, without actually sending the audio anywhere through the audio interface?
    In this case, I should finally get an audio interface.

    In the end, I just want to produce the audio files on the iPad....

    Yes, if you have an audio interface that handles 96k and the device is running at 96k, apps that can handle 96k audio will generate 96k audio.

    Keep in mind that CPU use will go up dramatically since synth apps will need to generate/process twice as many samples per second.

  • For most external recordings on consumer audio interfaces, 16-bit/44.1 kHz is more than enough...
    But indeed, internal virtual instruments can benefit from higher resolutions (and the result can be noticeable on true 24/96 interfaces).

  • I remember that on one of my 'earlier' audio interfaces (the ESI U2A and U24) the AD/DA chip did internal oversampling (4x on the input and a bit more on the output) to lessen the aliasing when frequencies got higher.

    Regardless, I do miss sampling on the Amiga with variable sample rates and overdriving the 8-bit ADs to the max :)
    (It's almost like running the audio signal through a S&H module clocked with a square wave oscillator before sampling the sound).

  • Thank you @Blue_Mangoo for this vid. I posed this question on YouTube, but thought it might be helpful to raise it here as well:

    All of the samples in my sample library are, at best, 48k (24-bit). Assuming I have a 96k session with guitar, synth, bass and vocals, and I’m using some of those lower-res samples, how will they sound?

    Does anyone actually have a sample library chock full of 96 or 192k samples? Will my lower res samples work without issue in my 96 or 192k session in Auria Pro? :smile: :neutral: :smiley:

  • @Korakios said:
    For most external recordings on consumer audio interfaces, 16-bit/44.1 kHz is more than enough...
    But indeed, internal virtual instruments can benefit from higher resolutions (and the result can be noticeable on true 24/96 interfaces).

    If you actually want to record sounds clearly all the way up to 20 kHz, 44.1 is not enough. The video explains why.

    But if you just want subjectively good sound and don’t actually care about the 20 kHz range, then 44.1 might be enough. It depends on your ears, your speakers, your music, etc.
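
    To put rough numbers on that (a sketch, not taken from the video): the anti-aliasing and reconstruction filters only have the space between 20 kHz and the Nyquist frequency to roll off completely, and that space is tiny at 44.1 kHz.

    ```swift
    import Foundation

    // How much room is left above 20 kHz for the converter's filters to do their work?
    func transitionRoom(sampleRate: Double, highestWantedHz: Double = 20_000) -> (hz: Double, octaves: Double) {
        let nyquist = sampleRate / 2
        return (nyquist - highestWantedHz, log2(nyquist / highestWantedHz))
    }

    let r441 = transitionRoom(sampleRate: 44_100)   // ~2 kHz, about 0.14 octave
    let r96  = transitionRoom(sampleRate: 96_000)   // 28 kHz, about 1.26 octaves
    print(r441, r96)
    ```

    A filter that is flat at 20 kHz but strongly attenuating by 22.05 kHz has to be extremely steep; that narrow transition band is what the extra sample rate buys room for.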

  • edited June 2019

    @eustressor said:
    Thank you @Blue_Mangoo for this vid. I posed this question on YouTube, but thought it might be helpful to raise it here as well:

    All of the samples in my sample library are, at best, 48k (24-bit). Assuming I have a 96k session with guitar, synth, bass and vocals, and I’m using some of those lower-res samples, how will they sound?

    Does anyone actually have a sample library chock full of 96 or 192k samples? Will my lower res samples work without issue in my 96 or 192k session in Auria Pro? :smile: :neutral: :smiley:

    Lower sample rate samples should work fine in higher sample rate projects. But unless there are other sounds in the project that actually do produce real 96 or 192 kHz audio, mixing at a higher sample rate won’t help much.

    Another thing to note: if your final product is going to be downsampled to 44.1 or 48 kHz, then 96 kHz sample libraries will not help at all.

    Also, I haven’t seen 96 kHz sample libraries on iOS, and I hardly see them for the desktop.

    It wasn’t discussed much in the video, but if you are running dynamics processors such as compressors, saturators, dynamic EQs, and limiters, those benefit from higher sample rates regardless of the sample rate of the incoming audio, and they still benefit from the higher internal sample rate even if you downsample to 44.1 kHz when you release the final product.

    We made this video because we’re interested in the topic, not because it’s really practical in 2019 to release music to end users at 96 kHz.

    However, it’s essential that a quorum of people in the music business understand what Claude Shannon was actually saying about recording frequencies up to the Nyquist frequency (half the sampling rate) in order for these things to change in the future. As long as everybody believes that Shannon’s theorem 1 actually guarantees clear digital-to-analog conversion up to 1/2 of the sampling rate in real-world hardware, they will keep thinking that 44.1 and 48 kHz sample rates can actually give us clear sound up to 20 kHz.
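
    To illustrate the dynamics/saturation point with a crude sketch (hypothetical Swift, not how any particular plugin does it): the nonlinearity is applied at twice the rate, then filtered and decimated, so most of the harmonics it creates above the original Nyquist get removed instead of folding back as aliasing.

    ```swift
    import Foundation

    // Toy 2x-oversampled saturator. A real plugin would use proper halfband or
    // polyphase filters; the 3-tap smoother here is only there to show the structure.
    func saturateOversampled2x(_ input: [Float], drive: Float) -> [Float] {
        func lowpass(_ x: [Float]) -> [Float] {
            guard x.count > 2 else { return x }
            var y = x
            for n in 1..<(x.count - 1) {
                y[n] = 0.25 * x[n - 1] + 0.5 * x[n] + 0.25 * x[n + 1]
            }
            return y
        }
        // 1. Upsample 2x: insert zeros, then smooth (the gain of 2 makes up for the zeros).
        var up = [Float](repeating: 0, count: input.count * 2)
        for i in 0..<input.count { up[2 * i] = input[i] * 2 }
        var signal = lowpass(up)
        // 2. Apply the nonlinearity at the higher internal rate.
        signal = signal.map { tanhf($0 * drive) }
        // 3. Filter out harmonics above the original Nyquist, then drop every other sample.
        signal = lowpass(signal)
        return stride(from: 0, to: signal.count, by: 2).map { signal[$0] }
    }
    ```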

  • @Blue_Mangoo said:

    @Korakios said:
    For most external recordings on consumer audio interfaces, 16-bit/44.1 kHz is more than enough...
    But indeed, internal virtual instruments can benefit from higher resolutions (and the result can be noticeable on true 24/96 interfaces).

    If you actually want to record sounds clearly all the way up to 20 kHz, 44.1 is not enough. The video explains why.

    But if you just want subjectively good sound and don’t actually care about the 20 kHz range, then 44.1 might be enough. It depends on your ears, your speakers, your music, etc.

    I agree, but only for decent audio interfaces. Cheap ones may let you record at up to 96 kHz, but it's just placebo and nothing is actually recorded above 18 kHz, especially when using entry-level mics, preamps etc....

  • edited June 2019

    @Blue_Mangoo said:

    @eustressor said:
    Thank you @Blue_Mangoo for this vid. I posed this question on YouTube, but thought it might be helpful to raise it here as well:

    All of the samples in my sample library are, at best, 48k (24-bit). Assuming I have a 96k session with guitar, synth, bass and vocals, and I’m using some of those lower-res samples, how will they sound?

    Does anyone actually have a sample library chock full of 96 or 192k samples? Will my lower res samples work without issue in my 96 or 192k session in Auria Pro? :smile: :neutral: :smiley:

    Lower sample rate samples should work fine in higher sample rate projects. But unless there are other sounds in the project that actually do produce real 96 or 192 kHz audio, mixing at a higher sample rate won’t help much.

    Another thing to note: if your final product is going to be downsampled to 44.1 or 48 kHz, then 96 kHz sample libraries will not help at all.

    Also, I haven’t seen 96 kHz sample libraries on iOS, and I hardly see them for the desktop.

    It wasn’t discussed much in the video, but if you are running dynamics processors such as compressors, saturators, dynamic EQs, and limiters, those benefit from higher sample rates regardless of the sample rate of the incoming audio, and they still benefit from the higher internal sample rate even if you downsample to 44.1 kHz when you release the final product.

    We made this video because we’re interested in the topic, not because it’s really practical in 2019 to release music to end users at 96 kHz.

    However, it’s essential that a quorum of people in the music business understand what Claude Shannon was actually saying about recording frequencies up to the Nyquist frequency (half the sampling rate) in order for these things to change in the future. As long as everybody believes that Shannon’s theorem 1 actually guarantees clear digital-to-analog conversion up to 1/2 of the sampling rate in real-world hardware, they will keep thinking that 44.1 and 48 kHz sample rates can actually give us clear sound up to 20 kHz.

    Thank you for that info. I do have a Focusrite audio interface that supports 96/192 for recording at 96 or better, and realize it all does get downsampled to 48/44.1 in mixdown. I just wondered how lower rate “prefab” audio content from my sample library would fare if I started recording at 96k.

    Thank you for the clarification(s)! This is quite an informative thread. The more I learn, the more I realize how much I don’t know :smile:

  • I've subjectively noticed that the biggest difference in audio quality between 44.1 and 96 is in generated sounds that have lots of harmonics, like soft synths, distortion plugins, or amp modelers. That will change on a plugin-by-plugin basis, though, depending on the different ways the programmer may have designed their processor to handle generating harmonics and to avoid aliasing errors when you try to generate a harmonic that is outside the range of what the sampling frequency can describe.

    A lot of soft synthesizers, for instance, won't compute a square wave (or other buzzy waveform) directly, but rather separate sine waves, at decreasing volumes, for each harmonic, up to, say, 32 harmonics.

    One place I rarely see demand for high sample rates, where it would make sense, is in samplers and sample libraries: all kinds of destructive squishing of the audio is sure to happen when you play back a digital sample at rates other than what it was recorded at. It would make sense to have as much extra information in the audio file as possible, to get better resampling. Especially when playing a sample at a lower pitch, you are taking all this ultrasonic sound that we can't hear and bringing it down into the range of human hearing.
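
    As a small illustration of the additive approach mentioned above (a sketch with invented names, not any particular synth's code), building a "square" wave from separate sine harmonics and simply stopping before Nyquist is one way to avoid those aliasing errors:

    ```swift
    import Foundation

    // Additive "square" wave: odd harmonics at amplitude 1/harmonic, stopping below Nyquist.
    func bandlimitedSquare(frequency: Double, sampleRate: Double, length: Int) -> [Float] {
        var out = [Float](repeating: 0, count: length)
        var harmonic = 1.0
        while frequency * harmonic < sampleRate / 2 {
            for n in 0..<length {
                let phase = 2.0 * Double.pi * frequency * harmonic * Double(n) / sampleRate
                out[n] += Float(sin(phase) / harmonic)
            }
            harmonic += 2   // square waves contain only odd harmonics
        }
        return out
    }

    // A 2 kHz "square" keeps harmonics at 2, 6, 10, 14, 18 and 22 kHz at a 44.1 kHz rate,
    // but all the way up to 46 kHz at 96 kHz, so the top end is less compromised.
    let square44 = bandlimitedSquare(frequency: 2_000, sampleRate: 44_100, length: 1_024)
    let square96 = bandlimitedSquare(frequency: 2_000, sampleRate: 96_000, length: 1_024)
    ```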

  • @Processaurus said:
    I've subjectively noticed that the biggest difference in audio quality between 44.1 and 96 is in generated sounds that have lots of harmonics, like soft synths, distortion plugins, or amp modelers. That will change on a plugin-by-plugin basis, though, depending on the different ways the programmer may have designed their processor to handle generating harmonics and to avoid aliasing errors when you try to generate a harmonic that is outside the range of what the sampling frequency can describe.

    A lot of soft synthesizers, for instance, won't compute a square wave (or other buzzy waveform) directly, but rather separate sine waves, at decreasing volumes, for each harmonic, up to, say, 32 harmonics.

    One place I rarely see demand for high sample rates, where it would make sense, is in samplers and sample libraries: all kinds of destructive squishing of the audio is sure to happen when you play back a digital sample at rates other than what it was recorded at. It would make sense to have as much extra information in the audio file as possible, to get better resampling. Especially when playing a sample at a lower pitch, you are taking all this ultrasonic sound that we can't hear and bringing it down into the range of human hearing.

    As a sometime graphic designer, I can relate to your comment regarding samples at high rates - don’t sell me the JPG, give me access to the original PSD file.

    Could be the reason your video prompted me to think, “what about my 44.1 sample library” in the first place :smile:

  • edited August 2019

    @Blue_Mangoo said:

    Yeah. The whole thing is kind of wasted if in the end you are going to release it at 44.1 kHz on YouTube or Spotify anyway. This video is more of a prayer for the future than something that is actually practical in today’s world of streaming services.

    This does not make sense.
    What does make sense is keeping all of the sonic info at maximum quality throughout a project, so that when mixing and mastering, all of the detail is available to be EQed, and there is double the info for reverbs / FX.
    Then, at the mastering stage, converting to 44.1k / MP3s.

    The analogy can be looked at through the prism of film and video.
    Cinematographers shoot at the highest quality available, so there is much more info for processing color, detail and depth, adding FX, etc.
    Just because a project may end up on YouTube, or in a low-res, crappy video format, does not mean one should shoot the film in a crappy, low-quality video format.

    If one records at 96 kHz, there is much more harmonic detail (and extended HF info): e.g. acoustic guitars, piano, cymbals, mandolin and violin all sound much better at 96 kHz, and it is all there when you want to do crucial EQing at the mastering stage.
    (I've been recording at 96 kHz for more than 2 decades.)

    It's all about keeping the quality at maximum detail until it finally has to be reduced.

  • @Mayo said:

    @Blue_Mangoo said:

    Yeah. The whole thing is kind of wasted if in the end you are going to release it at 44.1 kHz on YouTube or Spotify anyway. This video is more of a prayer for the future than something that is actually practical in today’s world of streaming services.

    This does not make sense.
    What does make sense is keeping all of the sonic info at maximum quality throughout a project, so that when mixing and mastering, all of the detail is available to be EQed, and there is double the info for reverbs / FX.
    Then, at the mastering stage, converting to 44.1k / MP3s.

    The analogy can be looked at through the prism of film and video.
    Cinematographers shoot at the highest quality available, so there is much more info for processing color, detail and depth, adding FX, etc.
    Just because a project may end up on YouTube, or in a low-res, crappy video format, does not mean one should shoot the film in a crappy, low-quality video format.

    If one records at 96 kHz, there is much more harmonic detail (and extended HF info): e.g. acoustic guitars, piano, cymbals, mandolin and violin all sound much better at 96 kHz, and it is all there when you want to do crucial EQing at the mastering stage.
    (I've been recording at 96 kHz for more than 2 decades.)

    It's all about keeping the quality at maximum detail until it finally has to be reduced.

    I agree with this 100%. I didn’t think of it because I was thinking in terms of the arguments put forth in the video. Your point is another important way of looking at it.

    For other readers following along:

    Digital filter curves differ significantly from their analog counterparts above sampleRate/4, and the analog shapes are more musically useful. So if we want analog-like EQ curves in the mastering stage, then we ought to keep the signal at at least 96 kHz till the very end of mastering, even if we know we have to go down to 44.1 for the final output.

    On the other hand, if we go with the analogy to graphics editing, I often find that because some graphic designs look different when scaled down to final export size, I need to zoom out the editor when designing in order to have a picture of the final output right during the editing phase. Affinity Designer has automatically downsampled preview modes so that you can see individual pixels even when you zoom in. The audio analogy to that is a DAW that runs internally at a high sample rate but downsamples everything that goes to the monitor speakers, so that it maintains all the detail for editing but always gives your ears a realistic preview of what it’s actually going to sound like when exported.
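
    A small numeric sketch of that "curves differ above sampleRate/4" point (assuming the EQ is built with the usual bilinear transform; the function name is made up): the transform squeezes the whole analog frequency axis into the range below Nyquist, so responses get progressively more warped as they approach it.

    ```swift
    import Foundation

    // Bilinear-transform frequency warping: an analog response feature at fAnalog lands at
    // this digital frequency (without prewarping). Even when the centre frequency is
    // prewarped to land exactly, the shape around it stays squashed near Nyquist.
    func warpedDigitalFrequency(fAnalog: Double, sampleRate fs: Double) -> Double {
        return (fs / Double.pi) * atan(Double.pi * fAnalog / fs)
    }

    for f in [5_000.0, 10_000.0, 15_000.0, 20_000.0] {
        let at44 = warpedDigitalFrequency(fAnalog: f, sampleRate: 44_100)
        let at96 = warpedDigitalFrequency(fAnalog: f, sampleRate: 96_000)
        print("\(Int(f)) Hz analog -> \(Int(at44)) Hz at 44.1 kHz, \(Int(at96)) Hz at 96 kHz")
    }
    ```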

  • @Blue_Mangoo said:

    For other readers following along:

    Digital filter curves differ significantly from their analog counterparts above sampleRate/4, and the analog shapes are more musically useful. So if we want analog-like EQ curves in the mastering stage, then we ought to keep the signal at at least 96 kHz till the very end of mastering, even if we know we have to go down to 44.1 for the final output.

    On the other hand, if we go with the analogy to graphics editing, I often find that because some graphic designs look different when scaled down to final export size, I need to zoom out the editor when designing in order to have a picture of the final output right during the editing phase. Affinity Designer has automatically downsampled preview modes so that you can see individual pixels even when you zoom in. The audio analogy to that is a DAW that runs internally at a high sample rate but downsamples everything that goes to the monitor speakers, so that it maintains all the detail for editing but always gives your ears a realistic preview of what it’s actually going to sound like when exported.

    Exactly!
    Hence why most top mastering guys upsample a 44.1k song to 96k, and then have much better tools to EQ / compress with.

    Like your pixel preview analogy, they use Sonnox software to instantly hear what a low-res file will sound like, and better inform the EQ and compression/limiting process by hearing exactly what the degradation does to the EQ (etc.) decisions.

  • @Blue_Mangoo said:
    On the other hand, if we go with the analogy to graphics editing, I often find that because some graphic designs look different when scaled down to final export size, I need to zoom out the editor when designing in order to have a picture of the final output right during the editing phase. Affinity Designer has automatically downsampled preview modes so that you can see individual pixels even when you zoom in. The audio analogy to that is a DAW that runs internally at a high sample rate but downsamples everything that goes to the monitor speakers, so that it maintains all the detail for editing but always gives your ears a realistic preview of what it’s actually going to sound like when exported.

  • I recently checked a lot of digital photographs when looking for a new camera.
    There's an amazing amount of detail and precision in current mid-price gear, achieved by powerful processors within the cam that correct any lens flaws 'on the fly' while shooting a pic.
    Current $500 cams are roughly comparable to $5k film setups (cam + lens) of the past.

    Yet I preferred a lot of shots someone did with (quality) lenses from the analog age mounted on a modern digital cam body.
    The 'composition' (or call it the mixdown) was much more pleasing to the eye.
    Those shots were not graphically 'lo-fi' in any way, but very close to what might be considered a perfect capture of the subject; yet they lacked this almost surreal presentation of detail modern cams deliver.

    In the end it's the composition that counts, for both viewers and listeners.
    The world isn't only sharp, but also has soft edges ;)
