Normalizing multisamples (SynthJacker feature related)

If and when you create sampled instruments, do you normalize the individual multisamples? And if you do, is that in relation to each other, or individually?

For a long time SynthJacker has had the option to normalize the sliced samples to a user-specified level, up to 0 dB. It is done individually for each sample, after slicing and trimming.
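
To make the current behaviour concrete, it's per-sample peak normalization, roughly like the Swift sketch below (a simplified illustration, not the shipping code):

```swift
import Foundation

// Per-sample peak normalization: each slice is scaled so that its own
// peak hits the target level in dBFS (e.g. -1.0 or 0.0).
// A minimal sketch over raw Float samples, not the actual SynthJacker code.
func normalizePeak(_ samples: [Float], toDBFS target: Float) -> [Float] {
    let peak = samples.map { abs($0) }.max() ?? 0
    guard peak > 0 else { return samples }                      // silent slice: leave as-is
    let targetLinear = Float(pow(10.0, Double(target) / 20.0))  // dBFS -> linear amplitude
    return samples.map { $0 * (targetLinear / peak) }
}
```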

I was revising the app for an update, and for various reasons I got to thinking that this may not make a lot of sense after all, or at least that the results might not be very good.

So I'm thinking of removing the whole normalization feature from SynthJacker, to better concentrate on other things. If it were gone in the next version, would you miss it?

Comments

  • The only way that makes sense to me is to normalize before slicing and trimming. That is the only way I would use normalization, so if you left the current method out I wouldn’t miss it at all.

  • Thanks. So it might make sense if the normalization was applied to the original long recording before the other operations.

    Maybe it's better to just adjust the input level before the actual recording (in the case of external instruments) ... but for Audio Units, I don't know if there is anything that could be done. Maybe if I add insert effects to the recording chain then you could put some sort of gain AU as the first effect.

  • Hi @coniferprod - I'm not familiar with your app but I do a fair bit of sampling. My €0.02 is that if you're going to normalise every sample, then you're also probably going to need some key scaling - don't know if you have that already.

    For example, if the samples are all at the same peak level, I'll typically want a 6dB difference between the lowest and highest notes (low being louder), assuming the instrument being sampled doesn't already do that naturally, like, say, a piano does - there's a rough sketch of the idea at the end of this comment.

    That being said, I try to avoid normalising samples if I can, and get the recording levels right at the time of sampling - and that's probably not going to be 0dB - more like -1dB max. It also complicates any de-noising as obviously normalisation will potentially change the noise floor level - typically increasing it.

    @coniferprod said: Thanks. So it might make sense if the normalization was applied to the original long recording before the other operations.

    Or this - I'll do that if I'm not happy with the peak level, and I'll apply it to the whole pass, 10 notes or whatever. But... I'd do noise reduction (if needed) after normalising.

    I guess ultimately it's all about capturing the natural dynamic range of the thing you're sampling. Once you've captured your instrument, it's easier to take that dynamic range away (with a compressor or whatever) than put it back :smile:
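
    To illustrate the key-scaling idea mentioned above - tilting levels so the low notes come out louder once everything sits at the same peak - here's a rough Swift sketch. The 6dB span and the linear-in-dB taper are just the example figures from this comment, not a universal rule.

    ```swift
    import Foundation

    // Spread a level tilt across the key range, louder at the bottom.
    // Linear in dB between lowKey and highKey; the 6 dB span is only
    // the illustrative figure from the comment above.
    func keyScalingGainDB(note: Int, lowKey: Int = 21, highKey: Int = 108,
                          spanDB: Double = 6.0) -> Double {
        let position = Double(note - lowKey) / Double(highKey - lowKey)  // 0...1 across the range
        return -spanDB * position   // 0 dB at lowKey, -spanDB at highKey
    }

    let middleC = keyScalingGainDB(note: 60)   // ≈ -2.7 dB on an 88-key range
    ```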

  • Thanks for the input @moodscaper, appreciated. I'm all the more certain that the normalization currently done in SynthJacker isn't worth keeping as it is.

    Since the general idea of SynthJacker is to sample the source instrument at multiple MIDI velocities, generate a sample for each note/velocity combination, and also output an SFZ instrument description, there will be natural amplitude differences between the different velocities of the same note - differences which the current normalization process may even wipe out.

    BTW, Moodscaper looks like a really nice app, I don't know how I've missed it -- going to get it and dive in!

  • @coniferprod said:
    If and when you create sampled instruments, do you normalize the individual multisamples? And if you do, is that in relation to each other, or individually?

    [...] So I'm thinking of removing the whole normalization feature from SynthJacker, to better concentrate on other things. If it were gone in the next version, would you miss it?

    I think normalizing individual slices would be problematic. You want the equivalent of normalizing the file pre-slicing but that also might mean needing a lot more free space to make a normalized file.

    What you could do is scan the original file for the peak value and determine the gain needed. And as you create slices, add that amount of gain. This will preserve the dynamic range and keep the noise floor constant.
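
    A minimal Swift sketch of that approach, assuming the recording and slices are available as in-memory Float buffers (a real implementation would stream the file rather than hold it all):

    ```swift
    import Foundation

    // One gain for the whole recording: scan the original file's peak once,
    // then apply that same gain to every slice. Relative levels between
    // velocities survive, and the noise floor stays constant across slices.
    func uniformGain(recording: [Float], targetDBFS: Float) -> Float {
        let peak = recording.map { abs($0) }.max() ?? 0
        guard peak > 0 else { return 1 }                            // silence: unity gain
        return Float(pow(10.0, Double(targetDBFS) / 20.0)) / peak   // dBFS -> linear, over peak
    }

    func applyGain(_ gain: Float, toSlices slices: [[Float]]) -> [[Float]] {
        slices.map { slice in slice.map { $0 * gain } }
    }
    ```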

  • @espiegel123 said:
    I think normalizing individual slices would be problematic. You want the equivalent of normalizing the file pre-slicing but that also might mean needing a lot more free space to make a normalized file.

    Thanks for your thoughts on this. You are spot on, of course; for very long runs it would definitely be a problem. Some folks have put SynthJacker through its paces by (re-)sampling gargantuan piano libraries. That is something I don't really recommend, but in any case, that already results in problems due to the way large files are currently handled.

    It's more a memory issue than disk space, I think. As I understand it, iOS/iPadOS is more tuned to low power usage than heavy disk I/O anyway. This could maybe be optimized by working more with the raw PCM data instead of temporary files, but that really only helps with the trimming of the slices.

  • Normalizing multisamples does make a lot of sense IMHO, but the question is how you normalize.
    The ultimate aim is to have every sample play at the same subjective level, and that's the challenge: simple normalization by sample values won't work. Frequency-dependent weighting, plus an RMS level calculated over a sensible portion of the beginning of each sample, looks like the minimum needed to get a usable result.
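
    For example, something like the following Swift sketch: match slices by the RMS of an initial window instead of by peak. The 300ms window and the shared reference level are illustrative assumptions, and the frequency-dependent weighting (e.g. A-weighting) is left out - it would filter the window before the RMS is taken.

    ```swift
    import Foundation

    // RMS of the first `windowSeconds` of a sample - a crude stand-in for
    // subjective level, without the frequency weighting mentioned above.
    func attackRMS(_ samples: [Float], sampleRate: Double,
                   windowSeconds: Double = 0.3) -> Float {
        let count = min(samples.count, Int(sampleRate * windowSeconds))
        guard count > 0 else { return 0 }
        let sumOfSquares = samples.prefix(count).reduce(Float(0)) { $0 + $1 * $1 }
        return (sumOfSquares / Float(count)).squareRoot()
    }

    // Gain that brings one slice's attack RMS to a shared reference RMS.
    func matchGain(slice: [Float], sampleRate: Double, referenceRMS: Float) -> Float {
        let rms = attackRMS(slice, sampleRate: sampleRate)
        return rms > 0 ? referenceRMS / rms : 1
    }
    ```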
