Audiobus: Use your music apps together.


Recommendations for sample rate and resolution?

I'm still quite a noob when it comes to mixing, and I wonder which sample rate and resolution I should choose. For now I'm using 44.1 kHz/16 bit because that seems the most compatible across all apps. On the other hand, I'd like to have some headroom for mixing. Which settings do you recommend?

I guess, as always in life, there is no single right answer to this question, so let me describe my setup and workflows. I have an iConnectivity Audio 2+ interface connected to my iPad Pro and my MacBook. The MacBook is not used much for music, mainly for browsing samples with Loopcloud and, down the road, mastering (I got tons of IK Multimedia plugins when I got into the 25th anniversary sale). I have a pair of IKM iLoud Micro monitors.

I mostly do electronic music: techno, organic house, dub reggae. I also record some audio: percussion and vocal harmonies, the latter often with auto-tune or a vocoder.

A typical workflow is that I create ideas in Korg Gadget or AUM, print them to audio and import them into Cubasis 3. I record audio in CB3, sometimes also adding some MIDI tracks there. Recently Loopy Pro came into the mix, sometimes replacing CB3, sometimes as an additional step in the ideation phase, recording Gadget and AUM into Loopy. On the MacBook I have Ableton Live Lite, but I don't use that for recording or creating stuff.

Thanks in advance for your recommendations.

Comments

  • edited January 18

    First of all, these days I don't mix or master; I never reach that stage with my own music, as I'm bored with it before I even get there. The first stage of music production is the one I like most.

    But there is one thing you can avoid by keeping the rate at 44.1 kHz and the bit depth at 16 bit: you don't have to dither or convert the sample rate at some point if you want the music file to be playable for people who know nothing about music files and codecs.

    I never understood the different methods for dithering, so I avoided it altogether.

    Modern music file players can handle all kinds of formats, so maybe it is of no concern anymore.

    If dithering is no concern, and file size (free disk space), file transfer speed and CPU power are no problem, you can go up in bit depth and sample rate. I am not sure if "extra" headroom is something you actually use in a mix. Maybe only in a very busy mix with a lot of channels, or with really quiet or dynamic (live) recordings, or noisy preamps or converters? You need to keep some headroom for the mastering process.

  • edited January 18

    You may track digital sources from the CPU (FX and virtual instruments) at any resolution.
    There is a tiny loss of detail in (final) reverb decays at 16 bit, but that only affects levels below -90 dBFS.
    The same applies to dithering, which shapes the final bits... and that's why its "effect" is almost non-existent to the listener.

    You may assume digital processors generate a (technically) perfect waveform, so all bits are valid.

    This is different with the analog inputs (mic, line, instrument) of your interface: the signal has to be converted to digital... and that's a rather unreliable process compared to the "plain calculation" case above.
    There are physical limitations from the sample clock, temperature drift and circuit design that make the lower bits deviate from the exact value, which adds noise to the signal.

    A 16-bit converter can mathematically handle a 96 dB dynamic range, but the final bit is always considered unreliable, so 90 dB would be an excellent range.
    The same calculation with a 24-bit converter results in a (whopping) 23 x 6 = 138 dB range.
    This is impossible with regular technology; 20 to 22 bits are a more realistic figure, so let's assume a 126 dB range is covered reliably.

    If you leave some headroom (e.g. 12 dB) when recording, at 16 bit your effective dynamic range drops to 78 dB (13 bits), while at 24 bit there would still be a 108 dB range (18 bits, assuming the lower boundary).
    But don't forget circuit noise: if I fully crank my Audient ID22 mic channel, the meter shows a noise floor of -90 dB. :o
    It's still tracked at 24 bit (for headroom), but the signal itself could never be "better" than 16 bit... and that interface is considered one of the best in the sub-$1k range...

    PS: rule of thumb: 1 bit represents 6 dB (a doubling) of level; it's a logarithmic scale.
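The 6 dB-per-bit rule of thumb above can be turned into a quick back-of-the-envelope calculation. A minimal sketch (the function name is mine; 6.0 dB/bit is the usual approximation of the exact 20*log10(2) ≈ 6.02 dB, and the "unreliable bits" margin mirrors the converter caveats described in the comment):

```python
def dynamic_range_db(bits: int, unreliable_bits: int = 0, headroom_db: float = 0.0) -> float:
    """Rough dynamic range: ~6 dB per usable bit, minus recording headroom.

    Each extra bit doubles the number of representable levels, and a
    doubling of amplitude is ~6 dB, so N usable bits give roughly N * 6 dB.
    """
    return (bits - unreliable_bits) * 6.0 - headroom_db

print(dynamic_range_db(16))                                       # 96.0 on paper
print(dynamic_range_db(16, unreliable_bits=1))                    # 90.0 (last bit unreliable)
print(dynamic_range_db(16, unreliable_bits=1, headroom_db=12.0))  # 78.0 (13 usable bits)
print(dynamic_range_db(24, unreliable_bits=4, headroom_db=12.0))  # 108.0 (18 usable bits)
```

These reproduce the figures in the comment: 90 dB for an excellent 16-bit converter, 78 dB after 12 dB of recording headroom, and 108 dB at 24 bit assuming only 20 reliable bits.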

  • 24/44.1 is a pretty safe bet these days and what I use for 95% of my own music making. Sometimes 24/48 as well; just be aware that you'll likely have to sample-rate convert at the end for compatibility with most apps and devices. Some people think this sounds better than just working at 44.1 from the start, though with modern gear I haven't really noticed much sonic difference.

  • @Telefunky said:

    But don't forget circuit noise: if I fully crank my Audient ID22 mic channel, the meter shows a noise floor of -90 dB. :o

    Is that -90 dB with an open circuit, or terminated by a 150 ohm dummy resistor?
    If it's open circuit, it doesn't represent the true amp noise floor...


    (Old classic by Julian Krause)

  • edited January 18

    Terminated by a 260 ohm resistor, and (to be precise) fully cranked means "just below the point where noise kicks in", which is probably about 55-57 dB of effective gain, one tick mark below the end of the dial path.
    The true gain figures are unknown to me (the specs say 60 dB max) and I have no gear to measure precisely.
    But the low-noise performance is very similar to my Telefunken V676a when both signals are leveled identically.
    The ID22 is sensitive to ground issues (as is any unit powered by a two-pole PSU).
    Mine is connected to a grounded rack device via an unused output, giving a 3-6 dB improvement. The exact performance depends on the local electricity.

    Just in case someone looks up the specs of these vintage devices: figures back then were power-based dB/mW (iirc), which can't be converted to dB/V units. I was once mightily impressed by the 120 dB SNR of the V76 tube (!) preamp... but now have no idea how that translates into reality... and lack the $2k to check it out :|

  • Thanks @Telefunky for the details...

    I'm currently looking to get an extra audio interface and I'm leaning towards the ID4 Mk2...
    ...but will the USB-C port on the iPads provide enough juice to crank the headphone amp to the max, given that it depends on the power it gets from the USB-C port?
    (No matter what, it has to be connected directly to a USB-C port to negotiate the power for the headphone amp; on regular USB 2/3 it falls back to less juice.)

    The other candidate is the SSL2 but it's 'huge'.

  • Imho the ID4 is the most iOS-convenient item of the product line.
    It's so simple that the lack of a control panel doesn't matter, and it has the excellent preamp stage.
    USB-C should provide enough juice for the headphone out... but I have never tried one myself.

    tja
    edited January 18

    Many devices are stuck at 48 kHz nowadays if no interface is attached.

    Default:
    So my recommendation would be 24 bit at 48 kHz as a sane default, then dither down to 16/44.1k or 16/48k for the final master(s) if you need 16-bit masters.
    This is also the default of Loopy Pro.

    32 bit floating / fixed:
    Not all hosts can work with 32-bit "fixed point" audio exports/imports, but many work internally with 32-bit "floating point".
    There is no real standard for the 32-bit floating point format, so it is better left for internal use or backups, if floating point can be used for exports at all. Otherwise you may experience problems between hosts or platforms.
    With "32 bit" in this post, I refer to 32-bit fixed point for a project in a host and for the file format exported from the host, which is a higher bit depth than 24 bit.
    For exports, 24-bit files are the safest bet anyway.

    AAC output (for streaming services):
    If you need AAC output, you should stay at 24 bit before compression, or even use 32 bit instead of 24 if you use recorded audio and are OK with 32-bit files! But 24 bit suffices for really most things.

    This is because AAC files are 32 bit anyway, so in such cases it is better not to dither down to 16 bit first, as demonstrated by @Blue_Mangoo.

    MP3 output:
    A similar argument holds for MP3 output: MP3s have an effective dynamic range of about 20 bits and are better compressed from 24-bit files than from 16-bit ones. This is another argument for using at least 24 bit as the default.
    I did not test / compare compressing from 32 bit to MP3, which is an interesting question.

    Recorded audio:
    If your audio files are digitally created and not recorded, there may be no point in using 32 bit. I would still use 24 bit in this case, but you could argue for staying at 16 bit right from the beginning (for a pure CD or DVD project).
    If you have lots of recorded audio, you could run the project at 32 bit, compress to AAC from there, and only dither down to 24 or 16 bit if required for a master (CD, DVD, archives, ... but don't use dither when you compress lossily to AAC or MP3). Still, 24 bit may suffice.

    Oversampling:
    I would also use oversampling in any capable app/effect; @Blue_Mangoo also demonstrated this to be the better option compared to running the whole project at a higher sample rate.

    In summary:

    Use either 24/48k, or 32/48k if you work with recorded audio and like to use 32-bit fixed point files (or floating point, if there is no compatibility problem).
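The "dither down to 16 bit" step in the recommendation above can be sketched per sample. This is a minimal illustration of TPDF (triangular) dither, assuming float samples in [-1.0, 1.0]; the function name and details are my own, not how any particular host implements it:

```python
import random

def dither_to_16bit(sample: float, rng: random.Random) -> int:
    """Quantize a float sample in [-1.0, 1.0] to a 16-bit integer,
    adding triangular (TPDF) dither of about +/- 1 LSB before rounding.

    The dither randomizes the rounding error so quantization noise stays
    decorrelated from the signal instead of turning into distortion.
    """
    scale = 32767  # largest positive 16-bit value
    # Sum of two uniform [-0.5, 0.5] values -> triangular distribution.
    dither = rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)
    value = int(round(sample * scale + dither))
    return max(-32768, min(32767, value))  # clamp to the 16-bit range

rng = random.Random(0)
samples = [0.5, -0.25, 0.000013]  # the last one is below 1 LSB
print([dither_to_16bit(s, rng) for s in samples])
```

The point of the dither is visible on the last sample: without it, a signal below one LSB would always round to zero; with it, the sample sometimes lands on -1, 0 or 1, preserving the average level as noise rather than silence.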

  • I use 24 bit for the reasons mentioned above. I use 48 kHz for the reasons Tja mentions: I have a few boxes, like the OP-Z, which are fixed at 48, so it's easier to just make everything match.

  • edited January 18

    So, summarizing: when you record live instruments or voices, you have more dynamic range (more steps between total silence and clipping level), so 24 bit could have an advantage later in the mixing stage.

    When you stay in the box on iOS or a desktop/laptop, there is no real improvement, and if system resources are limited, as on an iPad, it's better to stay close to the native bit depth and sample rate of the device (or target device).

    tja
    edited January 18

    I checked some i*OS hosts.

    Internally, most will use 32 bit float. But we are interested in the possible export formats:

    AUM and MultiTrack DAW can record / export in 16, 24 or 32 bit floating point format.

    Cubasis 3, BeatMaker 3 and Loopy Pro offer 16, 24 and "32 bit"... not sure if this means fixed or floating point.

    I could not see a setting within Auria Pro or nanostudio 2, but they offer a selection when mixing down:

    Auria Pro 16, 24 and 32 bit.
    And nanostudio 2 even offers four WAV options: 16, 24, 32 and 32 float! No other host I checked offers this!

    Audio Evolution Mobile and GarageBand offer 16 or 24 bit.

    I may check the exported 32 bit files from AUM, MultiTrack DAW, Cubasis 3, BeatMaker 3, Loopy Pro, Auria Pro and nanostudio 2 ...just to see if they are all compatible and if they are fixed or floating point.

  • Glad you brought this up @krassmann its been on my mind as of late.

    Big thanks to @tja for that very detailed and informative answer. Cheers.

    tja
    edited January 18

    @raabje said:

    So, summarizing: when you record live instruments or voices, you have more dynamic range (more steps between total silence and clipping level), so 24 bit could have an advantage later in the mixing stage.

    When you stay in the box on iOS or a desktop/laptop, there is no real improvement, and if system resources are limited, as on an iPad, it's better to stay close to the native bit depth and sample rate of the device (or target device).

    In general, yes.

    But 24 bit has a big advantage over 16 bit in many situations, and 24 bit is also a better base for lossy compression.

    Also, 32-bit float has advantages over 24-bit fixed, for example when "too much gain" was applied to a file and you want to recover from it.

    You can often recover a 32-bit float file where a 24-bit file cannot be fixed anymore.

    I found an example for this: https://www.sounddevices.com/sample-32-bit-float-and-24-bit-fixed-wav-files/

    And 24 bit gives you a much better recording than 16 bit when you recorded too quietly, which otherwise reduces the quality greatly.

    So: 32 > 24 > 16.

    The more bits, the better, but also larger files and probably less compatibility.
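The recovery difference described above is easy to demonstrate with a toy model, assuming Python floats stand in for 32-bit float samples and a clipping quantizer stands in for 24-bit fixed point (the helper name is mine):

```python
def to_24bit_fixed(sample: float) -> int:
    """Quantize to 24-bit fixed point; anything beyond full scale clips."""
    full_scale = 2**23 - 1
    return max(-(2**23), min(full_scale, int(round(sample * full_scale))))

# A waveform that was pushed 6 dB (2x) over full scale by mistake:
overdriven = [0.5 * 2.0, -0.9 * 2.0, 0.2 * 2.0]  # peaks at 1.8

# Fixed point clips at full scale -- the -0.9 sample is destroyed:
fixed = [to_24bit_fixed(s) for s in overdriven]
recovered_fixed = [s / (2**23 - 1) / 2.0 for s in fixed]

# Float simply stores values above 1.0, so dividing the gain back out
# restores the original samples:
recovered_float = [s / 2.0 for s in overdriven]

print(recovered_float)   # intact: 0.5, -0.9, 0.2
print(recovered_fixed)   # the second sample comes back near -0.5, not -0.9
```

This is the same effect the Sound Devices sample files linked above show with real recordings: the float file survives an absurd gain setting, while the fixed-point file is permanently flattened at the clip level.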

    tja
    edited January 18

    @Poppadocrock said:
    Big thanks to @tja for that very detailed and informative answer. Cheers.

    A pleasure 😅🤗
    I edited it several times, as re-reading made it clear that I needed to add or reformulate things.

    I like technical sound stuff and apps around sound more than actually creating music 😅😂

  • The de facto standard for Core Audio is 32-bit float. So no matter what you record at, if you process the audio through a typical chain on iOS or macOS, it's going to be converted to 32-bit float. That doesn't mean you should record at 32-bit float, as that will depend on other factors. But trying to record at a given bit depth to avoid conversions is not likely to do any good.

    Actually, in most of my AUs the first thing I do is convert the input buffers to 64-bit floats. There are certain situations where I'll leave it at 32 bit to save memory. But for the most part, I'm going to push you up to 64-bit float for various algorithmic-stability reasons.

    The sample rate depends on the situation. For recording things for processing, 48k is going to be good enough for almost everything. But if I were recording to archive for the future, I'd go as high as my converters and storage allowed. There are mathematical and perceptual reasons for this; it's not just because you can. The main point is that downconversion/resampling is very good and isn't going to hurt, so if you can preserve the content for future use, there isn't a reason not to.

  • I created 32 float and fixed files from nanostudio 2, but they look the same in AudioShare, same file type, same size.

    Tomorrow, I will check those files in Audacity, which may give more information.

    Is there no i*OS App that gives more details about audio files?!? 😳

    The Files App does not even show the file name endings, all audio files look the same.
    Apple is so.... strange...

  • @tja said:
    I created 32 float and fixed files from nanostudio 2, but they look the same in AudioShare, same file type, same size.

    Tomorrow, I will check those files in Audacity, which may give more information.

    Is there no i*OS App that gives more details about audio files?!? 😳

    The Files App does not even show the file name endings, all audio files look the same.
    Apple is so.... strange...

    I hope that it's safe to assume that anything doing a 32-bit float format is using IEEE format for the files.

    Sometimes the apple audio file icons give hints as to the format. I haven't looked on the iPad, maybe they do there too?

  • Good timing for the question: as recently as two days ago I did a refresher on the topic myself, mostly because I wanted to make sure I wouldn't get sync issues from having different settings in different apps when producing a tune in a third app.

    I found these three articles informative:

    https://decibelpeak.com/44-1-khz-or-48-khz/
    https://www.mixinglessons.com/sample-rate/
    https://www.mixinglessons.com/bit-depth/

    Basically, they came to the conclusion that "most people" should record at 24/48 and be prepared to downsample to 44.1 when needed.

    As I was doing my reading with my iPad next to me, it all sounded good. However, I then connected my interface and noticed that it forces me to use 44.1 in my current setup (I used to have a different setup that included a Mac in the chain too; pretty sure that went "higher"). With no options, I therefore find myself currently using 24/44.1. After reading those articles, though, I'm not overly concerned about it. It sounds good. It probably is good.

    As my iPad is a 1st-gen iPad Pro 12.9" I no longer have plenty of CPU headroom (apps use more these days than they did when I bought it), and as increasing all these settings comes at the cost of more storage and more CPU cycles, I have no intention of pushing the limits on that front for now (though it is time to upgrade my iPad, for sure).

  • @NeonSilicon said:

    @tja said:
    I created 32 float and fixed files from nanostudio 2, but they look the same in AudioShare, same file type, same size.

    Tomorrow, I will check those files in Audacity, which may give more information.

    Is there no i*OS App that gives more details about audio files?!? 😳

    The Files App does not even show the file name endings, all audio files look the same.
    Apple is so.... strange...

    I hope that it's safe to assume that anything doing a 32-bit float format is using IEEE format for the files.

    Sometimes the apple audio file icons give hints as to the format. I haven't looked on the iPad, maybe they do there too?

    The icons in the Files App look exactly the same.

    If nanostudio 2 offers two variants of 32 bit file exports, I am just assuming that those are two different outputs.

    And in general there are 32 bit integer files, 32 bit floating point and 32 bit fixed point files.
    The size will always be about the same.

    I will report back tomorrow 😅

  • @tja said:

    @NeonSilicon said:

    @tja said:
    I created 32 float and fixed files from nanostudio 2, but they look the same in AudioShare, same file type, same size.

    Tomorrow, I will check those files in Audacity, which may give more information.

    Is there no i*OS App that gives more details about audio files?!? 😳

    The Files App does not even show the file name endings, all audio files look the same.
    Apple is so.... strange...

    I hope that it's safe to assume that anything doing a 32-bit float format is using IEEE format for the files.

    Sometimes the apple audio file icons give hints as to the format. I haven't looked on the iPad, maybe they do there too?

    The icons in the Files App look exactly the same.

    If nanostudio 2 offers two variants of 32 bit file exports, I am just assuming that those are two different outputs.

    And in general there are 32 bit integer files, 32 bit floating point and 32 bit fixed point files.
    The size will always be about the same.

    I will report back tomorrow 😅

    I just checked on my Mac and it looks like all of the useful icon info has been removed in the move to make everything flat and ugly.

    The 32-bit float format from AUM is a standard WAV format and opens without issue in Audacity. I don't know anything about Nanostudio.

    I haven't checked with GB, but I assume it is still limited to 44.1 kHz even though they did add 24-bit support.

  • Great posts!!! Thank you so much. It seems reasonable to switch to 24 bit @ 48 kHz. Usually my tracks end up in a lossy format, and as I do record audio I think it is a good choice.

  • I checked the two 32-bit files from nanostudio in Audacity on my Mac: there is no difference.

    But Audacity does not show as much information as I saw in some codec software on Windows.

    I will ask in the nanostudio forum.
