Sample rates on iPad Air 3 and similar devices - stuck or not

Comments

  • One thing really isn't clear to me: up until a certain generation of devices you could choose any sample rate you wanted, even without an interface connected. For example, on my 2017 iPad Pro there is no restriction, and the same goes for older phones.

    From what I can tell, the restriction only applies to devices that don't have a headphone socket. The question that I just can't figure out is why?

    Why have Apple locked those devices to 48k? Is there any technical reason or is it just to add insult to injury, by first taking away the headphone jack and then locking the sample rate to really piss us off?
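
    For reference, this is roughly how an app asks for a different rate and then sees what the hardware actually grants - a minimal Swift sketch using AVAudioSession (the category and the 44.1k request are just examples, and it has to run on a real device):

    ```swift
    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback)
        // Ask for 44.1 kHz; on rate-locked devices the request is simply ignored.
        try session.setPreferredSampleRate(44_100)
        try session.setActive(true)
        print("Preferred: \(session.preferredSampleRate) Hz")
        print("Actual:    \(session.sampleRate) Hz") // stays 48000 on locked devices
    } catch {
        print("Audio session error: \(error)")
    }
    ```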

  • I then created a 96k 32-bit float project in MultiTrack DAW, drag-and-dropped the Audacity file into a track and mixed down / exported again - file attached.

    It is still a 96k file and contains all frequencies!

    So, Auria Pro and MultiTrack DAW may be the only DAWs that allow for this ...
    I will try with nanostudio 2
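
    If anyone wants to double-check a mixdown without opening it in another app, a small Swift sketch reading the file header with AVAudioFile (the path here is just a placeholder; this only shows the stored sample rate, not the spectrum):

    ```swift
    import AVFoundation

    // Placeholder path - point this at the exported mixdown.
    let url = URL(fileURLWithPath: "/path/to/mixdown.wav")
    do {
        let file = try AVAudioFile(forReading: url)
        print("Sample rate: \(file.fileFormat.sampleRate) Hz")
        print("Channels:    \(file.fileFormat.channelCount)")
        print("Frames:      \(file.length)")
    } catch {
        print("Could not open file: \(error)")
    }
    ```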

  • @richardyot said:
    One thing really isn't clear to me: up until a certain generation of devices you could choose any sample rate you wanted, even without an interface connected. For example, on my 2017 iPad Pro there is no restriction, and the same goes for older phones.

    Yes, this is what I was wondering about, and whether the whole thing is even required at all.

    From what I can tell, the restriction only applies to devices that don't have a headphone socket. The question that I just can't figure out is why?

    Not true, my iPad Air 3 has a headphone jack and is locked at 48k

    Why have Apple locked those devices to 48k? Is there any technical reason or is it just to add insult to injury, by first taking away the headphone jack and then locking the sample rate to really piss us off?

    Just Apple?

  • @tja said:
    Not true, my iPad Air 3 has a headphone jack and is locked at 48k

    Ah OK, so it's not directly related to the headphone jack, but it seems to have happened at around the same time as the jack was removed in the phones and later the iPads.

  • tja
    edited January 2022

    I created a nanostudio 2 project, added an Obsidian track, loaded the file as a patch and mixed down the project to 96k and 32-bit float.

    But I got "output file is silent" ... this is surely due to me doing something wrong.
    Most probably, I need to play the sample somehow ...

  • @tja said:
    I changed Audacity to 96k, 32-bit float, with conversions set to Best Quality and no dither (not sure why it offers dither here, that would depend on whether you want to reduce the bit depth)

    I needed to restart Audacity to get a 96k project when using the Generate / Tone menu, and created a 10-second project with 4 mono tracks: sine waves at 40k, 32k, 24k and 16k

    I then exported the tracks as single 32-bit float WAV files - just to have them - and as a mixdown of all 4 tracks.

    [...]

    Not sure why it only goes down to -90dB on a 32-bit float ....
    And why the 24k spike looks so different than the others.

    The difference in the 24k frequency is most likely just down to the binning that happens because of the window size they are using to generate the spectra.

    -90dB is pretty much zero volume, so unless there is some technical thing you are doing like looking for noise floor or something like that, it's easier to make your graphs assume a min of -90dB. A 32-bit float is effectively around the bit depth of a 21-bit or 24-bit int (or fixed point depending on how you are doing things). They have the ability to deal with a higher volume level but they don't really get more resolution for the minimum. They are also much nicer to work with algorithm wise.
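
    Rough numbers behind that, as a quick Swift sketch (the -144 dB figure simply follows from Float32's 24-bit significand; it's back-of-envelope, not from the post above):

    ```swift
    import Foundation

    // -90 dBFS as a linear amplitude, and the quantisation floor implied by
    // a 32-bit float's 24-bit significand.
    let minus90dB = pow(10.0, -90.0 / 20.0)        // ≈ 3.2e-5 of full scale
    let floatStep = pow(2.0, -24.0)                // smallest relative step near full scale
    let floatFloorDB = 20.0 * log10(floatStep)     // ≈ -144.5 dB relative to the signal

    print(String(format: "-90 dBFS ≈ %.1e of full scale", minus90dB))
    print(String(format: "Float32 quantisation ≈ %.1f dB below the signal", floatFloorDB))
    ```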

  • Thanks for the explanation, @NeonSilicon

  • tja
    edited January 2022

    nanostudio 2 did render the whole thing as a stereo file, not sure how to change that ... file attached.
    I was quite helpless to create this in nanostudio 2 :smile:

    But anyways, it also contains all the spikes - even if they look totally different.

  • @tja said:

    @Telefunky said:
    Except that 90% of current quality synth/fx engines already did use higher internal sample rates. ;)

    Yes.
    And that means that you cut out aliasing and other artefacts between the plugins - which is what you want when creating music, but which stops you from analysing sound and the effects of aliasing.

    Why should an iDevice not allow sound creation and analysis as is possible on a desktop?

    And it is really hard to tell the difference in a final mix. o:)

    This has absolutely nothing to do with the creation of music.
    This is just a topic around sound processing on iDevices.

    One thing to note about synths and effects using upsampled audio is that many don't and there isn't any reason that they should. You really only need to upsample if you are doing any non-linear processing that can add higher harmonics that would cause aliasing. For a synth, using a higher sample rate and downsampling is most likely not going to be the best path. You are almost always going to be better off using a band limited algorithm that eliminates any aliasing before it can happen.
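
    As a concrete illustration of the band-limited approach - an additive sawtooth that simply never generates harmonics above Nyquist. This is just one of several band-limiting techniques (BLEP/BLIT are more common in practice) and only a sketch:

    ```swift
    import Foundation

    // Band-limited sawtooth via its Fourier series: sum sin(2πkft)/k for all
    // harmonics k whose frequency k·f stays below Nyquist, so nothing can alias.
    func bandLimitedSaw(frequency: Double, sampleRate: Double, frames: Int) -> [Double] {
        let harmonics = Int((sampleRate / 2.0) / frequency)
        var out = [Double](repeating: 0.0, count: frames)
        guard harmonics >= 1 else { return out }
        for n in 0..<frames {
            let t = Double(n) / sampleRate
            var s = 0.0
            for k in 1...harmonics {
                s += sin(2.0 * .pi * Double(k) * frequency * t) / Double(k)
            }
            out[n] = (2.0 / .pi) * s
        }
        return out
    }

    // A 5 kHz saw at 48 kHz keeps only harmonics 1...4.
    let saw = bandLimitedSaw(frequency: 5_000, sampleRate: 48_000, frames: 480)
    ```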

  • @richardyot said:
    One thing really isn't clear to me: up until a certain generation of devices you could choose any sample rate you wanted, even without an interface connected. For example, on my 2017 iPad Pro there is no restriction, and the same goes for older phones.

    From what I can tell, the restriction only applies to devices that don't have a headphone socket. The question that I just can't figure out is why?

    Why have Apple locked those devices to 48k? Is there any technical reason or is it just to add insult to injury, by first taking away the headphone jack and then locking the sample rate to really piss us off?

    I don't know the answer to why they have been locked down, but it could be something as simple as they went to a fixed sample rate DAC on the newer iPads for the internal speakers and so they locked the sample rate to the DAC so there wouldn't be any question of needing to do SRC on the output or the mics.

  • I finally tried Cubasis 3; it cannot change the project to 96k and instead offers to convert the file to 48k, which I declined.

    Then I did create a mixdown - file attached.

    Cubasis also created a stereo file, which was my mistake, I think - but anyways, it could only create a 48k file!

    So, as expected, Cubasis cannot be used to handle content of a different sample rate properly - MultiTrack DAW and Auria Pro can, and nanostudio 2 too ... even if it is a different situation without audio tracks.

  • @tja said:
    nanostudio 2 did render the whole thing as a stereo file, not sure how to change that ... file attached.
    I was quite helpless to create this in nanostudio 2 :smile:

    But anyways, it also contains all the spikes - even if they look totally different.

    Yeah, but it also contains a spike it shouldn't contain -- the one at 8 kHz. There's your aliasing :)
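
    For anyone following along, 8 kHz is exactly where a 40 kHz tone folds to if some stage of the chain ran at 48 kHz (an assumption - the render itself was set to 96k). A quick Swift sketch of the folding:

    ```swift
    import Foundation

    // Fold a frequency back into the 0...fs/2 range, as aliasing would.
    func aliasFrequency(_ f: Double, sampleRate fs: Double) -> Double {
        let folded = f.truncatingRemainder(dividingBy: fs)
        return folded > fs / 2 ? fs - folded : folded
    }

    for f in [40_000.0, 32_000.0, 24_000.0, 16_000.0] {
        print("\(Int(f)) Hz -> \(Int(aliasFrequency(f, sampleRate: 48_000))) Hz")
    }
    // 40 kHz folds to 8 kHz, 32 kHz to 16 kHz, 24 kHz sits right at Nyquist.
    ```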

  • @krassmann said:

    @NeonSilicon said:

    @tja said:

    @NeonSilicon said:
    I have the M1 Pro. It does appear to be fixed at 48kHz without an interface attached.

    I don't have either Nanostudio 2 or MultiTrack DAW so I can't test those.

    Auria Pro?

    Yeah, I've got that one. I'll give it a go.

    Would also be cool to test that with Cubasis 3. Many people (including me) are using it for mastering. I think this would be a totally cool topic for a video on YT to compare the popular iOS AU hosts in this regard. A collab between you guys and @jakoB_haQ would be awesome.

    I'd give it a try in Cubasis 3, but I don't have that one. I would be surprised though if they did any sort of SRC to mix down a file to a sample rate that was different from what the thread was actually running at. I'd expect that they would use the manual rendering thread I mentioned in a link above or they'd use whatever they are doing in their own internal threading to run at the sample rate the project was set to.

  • tja
    edited January 2022

    @SevenSystems said:

    Yeah, but it also contains a spike it shouldn't contain -- the one at 8 kHz. There's your aliasing :)

    Ohhhhh

    As I was so hasty in creating this, I did not even notice! :smiley:

    Many thanks.

    So, our list of "capable" DAWs is reduced to Auria Pro and MultiTrack DAW.

    I will use Auria Pro for mixing and mastering from now on :)
    I should do everything in it ...

  • tja
    edited January 2022

    @NeonSilicon said:

    I'd give it a try in Cubasis 3, but I don't have that one. I would be surprised though if they did any sort of SRC to mix down a file to a sample rate that was different from what the thread was actually running at. I'd expect that they would use the manual rendering thread I mentioned in a link above or they'd use whatever they are doing in their own internal threading to run at the sample rate the project was set to.

    Please see above.

    For some years, I did buy just everything ... so I have many Apps I rarely use. But I am in the process of stripping down ;-)

  • @Samu said:
    Ideally a DAW should dither / resample the output to match the playback device...
    ...back in the days of 8-bit color displays it was still possible to process an image with a higher bit depth, we just could not see all the colors.

    I think what led to this 'hardware-locked sample rate apps chaos' has to do with the early apps targeting a specific hardware sample rate; when the hardware sample rate got changed, we ran into issues...

    This was mainly done to optimize the performance of an app and retain the quality as early Core Audio libraries frankly sucked at sample-rate conversion.

    So in a DAW such as Cubasis we should be able to set the sample-rate at which everything is processed and the output is then dithered/downsampled to match the playback device...

    One thing to remember is that in iOS and to a large degree in macOS, the DAW doesn't have control of the audio thread. On iOS in particular the DAW doesn't really control the thread that the AU's are processing on. The plugins are totally separate applications and everything is being handed off to them using IPC. For the most part, the DAW is only asking the OS to do something and the OS gets to say yes or no and then it orchestrates the whole rendering chain. That's why Apple has the API in AVAudioSession and AVAudioEngine for doing the manual rendering.
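
    As a rough illustration of that manual rendering path (a minimal sketch only - a real host would schedule its actual source material and write each rendered block to a file):

    ```swift
    import AVFoundation

    // Offline (manual) rendering at 96 kHz, independent of what the hardware runs at.
    func renderOffline96k() throws {
        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        engine.attach(player)

        let format = AVAudioFormat(standardFormatWithSampleRate: 96_000, channels: 2)!
        engine.connect(player, to: engine.mainMixerNode, format: format)

        try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
        try engine.start()
        player.play() // nothing scheduled here, so this sketch just renders silence

        let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                      frameCapacity: engine.manualRenderingMaximumFrameCount)!
        for _ in 0..<10 {
            let status = try engine.renderOffline(buffer.frameCapacity, to: buffer)
            guard status == .success else { break }
            // ... append `buffer` to an AVAudioFile created at 96 kHz ...
        }
        engine.stop()
    }
    ```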

  • @tja said:

    Please see above.

    For some years, I did buy just everything ... so I have many Apps I rarely use. But I am in the process of stripping down ;-)

    Oops! I hadn't seen your Cubasis results yet. So, it looks like they downsample the input file and then run at the sample rate the audio thread is set to.

  • tja
    edited January 2022

    @NeonSilicon said:

    Oops! I hadn't seen your Cubasis results yet. So, it looks like they downsample the input file and then run at the sample rate the audio thread is set to.

    Cubasis asked whether to downsample the file to 48k when I imported it, but I declined the conversion!
    So internally it is probably still 96k ... but as you cannot export at 96k, it gets cut out at mixdown.

  • @tja said:

    Cubasis asked whether to downsample the file to 48k when I imported it, but I declined the conversion!
    So internally it is probably still 96k ... but as you cannot export at 96k, it gets cut out at mixdown.

    I'm assuming that they left the file at 96k but did SRC on the audio from the file at input and ran the project at 48k. I'd have to run a debug session in Cubasis to confirm that though.
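
    If that's what they do, the import-time conversion would look roughly like this - a sketch with AVAudioConverter, not anything confirmed about Cubasis' internals:

    ```swift
    import AVFoundation

    // Convert one block of 96 kHz audio to 48 kHz (what an import-time SRC would do).
    let inFormat  = AVAudioFormat(standardFormatWithSampleRate: 96_000, channels: 2)!
    let outFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
    let converter = AVAudioConverter(from: inFormat, to: outFormat)!

    // Stand-in source block; a host would read this from the imported file.
    let inBuffer = AVAudioPCMBuffer(pcmFormat: inFormat, frameCapacity: 4096)!
    inBuffer.frameLength = 4096

    let outBuffer = AVAudioPCMBuffer(pcmFormat: outFormat, frameCapacity: 2048)!
    var fed = false
    var error: NSError?
    let status = converter.convert(to: outBuffer, error: &error) { _, inputStatus in
        if fed { inputStatus.pointee = .endOfStream; return nil }
        fed = true
        inputStatus.pointee = .haveData
        return inBuffer
    }
    print(status.rawValue, outBuffer.frameLength) // roughly half the input frames
    ```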

  • Telefunky
    edited January 2022

    @tja said:

    @Telefunky said:
    And it is really hard to tell the difference in a final mix. o:)

    This has absolutely nothing to do with the creation of music.
    This is just a topic around sound processing on iDevices.

    Sorry, I should have quoted...

    @tja said:
    But a DAW that could run at 44.1k, 48k, 88.2k, 96k, 192k or even higher, accepting, handling and exporting such content, without needing an attached interface, would simply be fantastic for all people creating music on iDevices that are hardware locked to 48k ... hint... hint 😅

    which was the reason for slightly extending the perspective, but just ignore it in this context.

    On the other hand it is related to the "locking to sample rate" by Apple, as they have to consider a general audience.
    AFAIK their audio codec chips are custom designed, with no general specs available.
    These chips vary with the hardware, and there may be consequences that affect some devices and not others.

    Frankly said: Apple sacrificed a feature that only 1 out of 1000 customers even knows about, and only 1 out of 100 from this group considers it relevant.
    Apple knows that almost every member of the latter group owns an external audio interface, so they just spare the effort.
    They are neither stupid nor naive ... and only in it for the money anyway... :*

    Now back to tech stuff .

  • AudioGus
    edited January 2022

    @Telefunky said:

    Frankly said: Apple sacrificed a feature that only 1 out of 1000 customers even knows about, and only 1 out of 100 from this group considers it relevant.

    I just wish 1 out of 1 devs would fix their plugin to work at both 44 and 48 (or maybe it is the hosts' fault?) ... le sigh

  • tja
    edited January 2022

    I tested sample rates on my iPad Pro 12.9 2nd gen. and it seems that it can use either 44.1k or 48k, but no other rates - from what was possible in AUM and Cubasis 3.

    So, it is not fully locked, but still restricted.
    But at least, it can handle both CD and DVD / streaming resolutions - great!

    All still without an interface attached.

  • @AudioGus said:

    I just wish 1 out of 1 devs would fix their plugin to work at both 44 and 48 (or maybe it is the hosts' fault?) ... le sigh

    Which plugin shows problems? And in which host?

  • edited January 2022

    MultiTrack DAW is a really good example:
    one of the first apps considered a "DAW", supported from the first to the most recent iDevice on both the phone and tablet platforms.
    When I checked the 96kHz capability on the iPad 1, sample rates above 48kHz were marked in red as "unsuggested", yet still usable (leaving the decision up to the user).
    Btw, its mix engine is not standard 32-bit float, but something more sophisticated.
    (I don't remember the specs, but it was once a topic in their forum.)
    MTD has supported multichannel input from day one, working with almost any interface on the market.

  • tja
    edited January 2022

    I tried to use auGEN X within an Auria Pro project at 96k to create sound at 32k, but it refused and restricted the output to 20.48k ... even though the session was shown as 96k:

    Shouldn't that be possible, @auDSPr?

    Any other plugin or app that can produce sound higher than 22.5kHz?
    I will seek out those mentioned above.

  • This is especially strange, as the iPad is locked to 48k, which means that audio could be generated at up to 24k - if the apparent project rate of 96k somehow does not apply, @auDSPr

  • Have it do a sawtooth at the highest frequency it can and see what it produces.

    I would expect that some synths don't produce any waveforms with content over about 20kHz, especially those that are using some form of band limited synthesis.

  • @AudioGus said:

    I just wish 1 out of 1 devs would fix their plugin to work at both 44 and 48 (or maybe it is the hosts' fault?) ... le sigh

    I'd expect most plugins work fine at any sample rate 44.1 and above.

    BTW, I had issues in some of my plugins running at sample rates below 44.1kHz. They would run there, but they weren't stable. I didn't expect any DAW to be running anything down there, but discovered that GarageBand does have tracks, some samples and smart instruments that run at 22.05kHz on iOS.

  • AudioGus
    edited January 2022

    @tja said:

    Which plugin shows problems? And in which host?

    Egoist in NS2 has problems. I should give it a more clinical try again in BM3 and Cubasis.

    The VST has problems too with 44/48 so I just imagined it is a Sugar Bytes thing.

  • tja
    edited January 2022

    @AudioGus said:

    @tja said:

    @AudioGus said:

    @Telefunky said:

    @tja said:

    @Telefunky said:
    And it is really hard to tell the difference in a final mix. o:)

    This has absolutely nothing to do with the creation of music.
    This is just a topic around sound processing on iDevices.

    Sorry, I should have quoted...

    @tja said:
    But a DAW that could run at 44.1k, 48k, 88.2k, 96k, 192k or even higher, accepting, handling and exporting such content, without needing an attached interface, would simply be fantastic for all people creating music on iDevices that are hardware locked to 48k ... hint... hint 😅

    which was the reason for slightly extending the perspective, but just ignore it in this context.

    On the other hand it is related to the „locking to samplerate“ by Apple, as they have to consider a general audience.
    Afaik their audio codec chips are custom designed with no general specs available.
    These chips vary with hardware and there maybe whatever consequences affecting some and some not.

    Frankly said: Apple sacrificed a feature that only 1 out of 1000 customers even knows about, and only 1 out of 100 from this group considers it relevant.

    I just wish 1 out of 1 devs would fix their plugin to work at both 44 and 48 (or maybe it is the hosts faults?) ... le sigh

    Which plugin shows problems? And in which host?

    Egoist in NS2 has problems. I should give it a more clinical try again in BM3 and Cubasis.

    The VST has problems too with 44/48 so I just imagined it is a Sugar Bytes thing.

    I have DrumComputer, Factory, Aparillo, Unique, Cyclop, Thesys and Turnado, but not Egoist :-D
    May try later with one of them.
