Video: why you need a 96 kHz sample rate to clearly record frequencies up to 20 kHz


Comments

  • edited August 2019

    @Mayo said:

    @Blue_Mangoo said:

    Yeah. The whole thing is kind of wasted if in the end you are going to release it at 44.1 kHz on YouTube or Spotify anyway. This video is more of a prayer for the future than something that is actually practical in today's world of streaming services.

    This does not make sense.
    What does make sense is keeping all of the sonic info at maximum quality throughout a project, so that when mixing and mastering, all of the detail is available for EQing, and there is double the information for reverbs/FX.
    Then, at the mastering stage, converting to 44.1 kHz / MP3s.

    The analogy can be seen through the prism of film and video.
    Cinematographers shoot at the highest quality available, so there is much more information with which to process colour, detail, depth, add FX, etc.
    Just because a project may end up on YouTube, or in a low-res video format, does not mean one should shoot the film in a crappy low-quality format.

    If one records at 96 kHz, there is much more harmonic detail (and extended LF info); acoustic guitars, piano, cymbals, mandolin, and violin all sound much better at 96 kHz, and it is all there when you want to do crucial EQing at the mastering stage.
    (I've been recording at 96 kHz for more than two decades.)

    It's all about keeping the quality at maximum detail until the final downconversion.

    That's how I see it as well. +1

    However, one shouldn't start spending their food money on expensive top-shelf mics, preamps, and software if they cannot hear/enjoy the benefits anyway. I guess the true path is somewhere in the middle unless you're actually making proper money with this stuff.

  • edited August 2019

    @supadom said:

    @Mayo said: […]

    That's how I see it as well. +1

    I guess the true path is somewhere in the middle unless you're actually making proper money with this stuff.

    Yep! There are definitely some activities that benefit from a higher sample rate/bit depth, but I wouldn't worry about it too much when making music on iOS.

  • edited August 2019

    @Blue_Mangoo (or anyone else who can answer), I have a few questions regarding this video, as I still don't understand the benefits of 96 kHz:

    1. You demonstrated the creation of frequencies above the ~20 kHz spectrum by inserting a silence at a random place in the waveform. Although this is a possible way to produce such high frequencies, it's not a very realistic use case. You mention that such things are very common in the sound-processing chain, but I wonder: when and how exactly? I can imagine some distortion algorithms making such "brutal" changes to the waveform, but is it also the case for e.g. compressors, filters, or EQs? I can't really imagine how such high frequencies could be produced there. Of course, synths are different beasts; I completely understand that they can produce much higher frequencies.

    2. Even if there is processing that creates >20 kHz frequencies, shouldn't the developer of the plugin use oversampling for that? Isn't this kind of a standard approach? I know that in many (maybe mostly desktop?) plugins there is usually some "hi-quality" or "eco" mode that turns oversampling on/off, and for offline rendering most plugins use oversampling. Isn't this a way to address the issue, so that you as a user don't need to use a sample rate higher than 44.1 kHz?

    3. Even if such frequencies are produced and "lost in processing", does it mean that the resulting sound is inherently "bad" or "inferior"? From what I've seen when playing around with exporting at different sample rates (or with those eco/high-quality modes), the output is not really better or richer, just different. This is quite obvious with synths and high-frequency sounds. In another discussion on this topic, dendy provided good sound examples where the difference was really obvious, but the point is that you couldn't really tell which one was better, just very different.

    4. Is it possible that this affects lower frequencies in any way? I know many people involved in electronic music production accuse digital sound of being "weak on lows" or "thin", while usually no one complains about imprecise high frequencies. Many people also say that using higher sample rates and bit depths helps a lot, but personally I've found no real scientific basis for such statements, and in general it sounds to me like there should be no relation.

    Thanks for sharing your wisdom! :smile:

  • @skrat said:
    @Blue_Mangoo (or anyone else who can answer), I have a few questions regarding this video, as I still don't understand the benefits of 96 kHz:

    1. You demonstrated the creation of frequencies above the ~20 kHz spectrum by inserting a silence at a random place in the waveform. Although this is a possible way to produce such high frequencies, it's not a very realistic use case. You mention that such things are very common in the sound-processing chain, but I wonder: when and how exactly? I can imagine some distortion algorithms making such "brutal" changes to the waveform, but is it also the case for e.g. compressors, filters, or EQs? I can't really imagine how such high frequencies could be produced there. Of course, synths are different beasts; I completely understand that they can produce much higher frequencies.

    It is unusual for a plugin to make a hard cut like I did in that part of the video, but any plugin that automatically adjusts the gain usually creates harmonics above the frequency of the input. Examples include saturators, compressors, limiters, distortion pedals, amp sims, and transient shapers. Harmonics come at integer multiples of the input frequency (×2, ×3, ×4, ×5...), so when a plugin creates them, the lowest new harmonic is at twice the input frequency. Imagine a gentle saturation that creates just one harmonic above the input: if the input frequency is 15 kHz, that harmonic is at 30 kHz, which is beyond the Nyquist limit (24 kHz) of a 48 kHz sample rate. A saturator that creates only the first harmonic is a very gentle saturation; most of them go much farther than that. I demonstrate how this happens in this video:
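
    As a standalone illustration of that mechanism, here is a minimal sketch (assuming Python with NumPy and SciPy; this is illustrative, not the video's code): saturate a 15 kHz tone with tanh directly at 44.1 kHz and again inside a 4x-oversampled section, then compare how much alias energy lands back in the audible band.

    ```python
    import numpy as np
    from scipy import signal

    fs = 44_100
    t = np.arange(fs) / fs                      # one second of audio
    x = 0.9 * np.sin(2 * np.pi * 15_000 * t)    # 15 kHz test tone

    def worst_alias_db(y):
        """Loudest spectral bin away from the 15 kHz fundamental, in dB."""
        spec = np.abs(np.fft.rfft(y * np.hanning(len(y)))) / len(y)
        freqs = np.fft.rfftfreq(len(y), 1 / fs)
        mask = np.abs(freqs - 15_000) > 300     # ignore the tone itself
        return 20 * np.log10(spec[mask].max() + 1e-12)

    # Direct: tanh() creates odd harmonics at 45, 75, 105... kHz, all above
    # Nyquist (22.05 kHz), so they fold back into the audible band as aliases.
    direct = np.tanh(2.0 * x)

    # Oversampled: the same nonlinearity runs at 176.4 kHz, where the
    # low-order harmonics fit below Nyquist; the decimation filter then
    # removes everything above the audible band before returning to 44.1 kHz.
    up = signal.resample_poly(x, 4, 1)
    oversampled = signal.resample_poly(np.tanh(2.0 * up), 1, 4)

    print(f"worst in-band alias, direct at 44.1 kHz: {worst_alias_db(direct):6.1f} dB")
    print(f"worst in-band alias, 4x oversampled:     {worst_alias_db(oversampled):6.1f} dB")
    ```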

    2. Even if there is processing that creates >20 kHz frequencies, shouldn't the developer of the plugin use oversampling for that? Isn't this kind of a standard approach? I know that in many (maybe mostly desktop?) plugins there is usually some "hi-quality" or "eco" mode that turns oversampling on/off, and for offline rendering most plugins use oversampling. Isn't this a way to address the issue, so that you as a user don't need to use a sample rate higher than 44.1 kHz?

    In the video I linked above you will see that it can take up to 20,000× oversampling to completely eliminate aliasing. No plugin does that; they typically do 2× or 4× only. Some go as high as 16× or 32×, but that's unusual. Oversampling helps a little, but in many cases it's not a solution.

    Also, oversampling can be done in two ways. If done using FIR filters, it implies a trade-off between adding delay to your signal chain and losing some of your high-frequency content. If you expect your plugins to oversample from 44.1 to 88.2 kHz and still keep all the sound above 20 kHz, you will need to add 3 or 4 ms of delay for each plugin in the chain. If you use three plugins in series, you lose the ability to process in real time.

    Alternatively, if the plugins use IIR filters for oversampling, delay won't be an issue, but the filters will cause ringing near the Nyquist frequency. The amount of ringing depends on the same trade-offs mentioned above for FIR filters.

    To put it simply: upsampling and downsampling are not simple operations. They use some of the most complex filtering schemes found anywhere in audio signal processing, and they distort the signal significantly. If you can avoid them, you should. Regardless of the type of filtering, it would be horrible to have to chain up three or four plugins, all running at 44.1 kHz and all oversampling.

    If you run at 96 kHz, however, oversampling filters will still mangle the signal, but all of their mangling happens between 30 kHz and 48 kHz, and we aren't going to hear the distortion if it stays in that frequency range. If you run your DAW at 96 kHz and use IIR filters for oversampling, you will find it almost impossible to even detect the effects of the oversampling filters, because the distortion happens at very high frequencies and the delay is minuscule.
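
    To put a number on the FIR trade-off above, here is a rough back-of-envelope sketch (assuming Python with SciPy; the 21 kHz passband edge and the 100 dB stopband target are illustrative assumptions, not figures from this post), estimating the taps and latency of a linear-phase 2x oversampling filter:

    ```python
    from scipy import signal

    fs_up = 88_200          # rate inside a 2x-oversampled section
    pass_edge = 21_000      # assumed: keep content up to 21 kHz
    stop_edge = 22_050      # reject images above the original Nyquist
    atten_db = 100          # assumed stopband attenuation target

    # Kaiser-window design estimate for that transition width
    width = (stop_edge - pass_edge) / (fs_up / 2)   # normalized, 1 = Nyquist
    numtaps, beta = signal.kaiserord(atten_db, width)

    # A linear-phase FIR delays the signal by (numtaps - 1) / 2 samples
    delay_ms = (numtaps - 1) / 2 / fs_up * 1000
    print(f"~{numtaps} taps, ~{delay_ms:.1f} ms latency per oversampling stage")
    ```

    With these assumptions the estimate lands around 3 ms, in line with the figure above; relaxing the passband edge or the attenuation target shrinks both numbers.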

    3. Even if such frequencies are produced and "lost in processing", does it mean that the resulting sound is inherently "bad" or "inferior"? From what I've seen when playing around with exporting at different sample rates (or with those eco/high-quality modes), the output is not really better or richer, just different. This is quite obvious with synths and high-frequency sounds. In another discussion on this topic, dendy provided good sound examples where the difference was really obvious, but the point is that you couldn't really tell which one was better, just very different.

    If they truly get lost, then it doesn't matter. But, as demonstrated in the video I linked above, many of the aliasing artefacts do not get lost in processing; instead they push your noise floor up significantly, and once that happens there is no filter that can clean it up again.

    4. Is it possible that this affects lower frequencies in any way? I know many people involved in electronic music production accuse digital sound of being "weak on lows" or "thin", while usually no one complains about imprecise high frequencies. Many people also say that using higher sample rates and bit depths helps a lot, but personally I've found no real scientific basis for such statements, and in general it sounds to me like there should be no relation.

    This is a question I am personally very interested in. I also have the feeling that analog synths are warmer and digital ones sound a bit thin. I agree with you that there is no scientific basis for digital synths lacking bass, because if that were true you could just use an EQ to boost the bass. I think what is actually happening is that the digital ones have too much treble.

    My guess is that there are several reasons for it:

    First, people are hearing some aliasing, and it's perceived as a harshness in the high frequencies that makes the sound feel "thin". It's described as thin because, if it weren't so harsh in the high frequencies, we could turn the volume up louder, making it sound bigger; but because it bothers our ears, we have to keep the volume low.

    Second, digital filters sound different from their analog counterparts as they approach sampleRate/2. If you sample at 44.1 kHz, the digital filters are very unlike the analog ones from about 11 kHz on up; in general, the digital filters cut much deeper than the analog ones in this range. So in theory they shouldn't sound thinner, but I suspect that because the cut is too deep, people feel the sound lacks clarity, so they run digital synths with a 25% higher filter cutoff than they would use on the same synth if it were analog. This issue is easily solved by running at 96 kHz: at 96 kHz the digital filters are similar to the analog ones up to about 24 kHz.
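
    A quick way to see that divergence for yourself is to compare a one-pole analog lowpass with its bilinear-transform digital version (a sketch assuming Python with SciPy; the 8 kHz cutoff is an arbitrary example):

    ```python
    import numpy as np
    from scipy import signal

    fs = 44_100
    fc = 8_000                                  # example cutoff
    wc = 2 * np.pi * fc

    # One-pole analog lowpass H(s) = wc / (s + wc) and its digital version
    b_a, a_a = [wc], [1.0, wc]
    b_d, a_d = signal.bilinear(b_a, a_a, fs)

    f = np.array([11_000.0, 16_000.0, 20_000.0])
    _, h_analog = signal.freqs(b_a, a_a, worN=2 * np.pi * f)
    _, h_digital = signal.freqz(b_d, a_d, worN=f, fs=fs)

    for fi, ha, hd in zip(f, h_analog, h_digital):
        print(f"{fi / 1000:5.1f} kHz: analog {20 * np.log10(abs(ha)):6.1f} dB, "
              f"digital {20 * np.log10(abs(hd)):6.1f} dB")
    ```

    By 20 kHz the digital version cuts noticeably deeper than the analog prototype, because the bilinear transform squeezes the entire analog frequency axis into the band below Nyquist.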

    The last issue is that analog synths play through amps and speakers. If you have ever tried running a digital synth through a guitar amp simulator, you'll hear that it sounds significantly fatter. This should be obvious in retrospect, but people still use digital synths directly, with no amp and cabinet simulator, compare them against analog ones played through an amp and speaker cab, and then wonder why the digital synths don't live up to expectations. I agree that many digital synths have problems with aliasing and filter design, but at least give them a level playing field before comparing them.

    Personally, I find most software amp sims are also not very good quality, so once again I recommend running them at the highest sample rate available. 192 kHz is advisable for guitar amp sims, because they saturate heavily and therefore alias heavily. If you run them at 192 kHz and don't push the gain hard (use a clean amp setting and no distortion pedals), they should be fine.

    Finally, guitar amps have very little high-frequency output above 4 kHz due to the weak HF response of the speakers. If you want to fatten up a synth without losing all the high frequencies, I recommend putting the following effects after the synth (a sketch of the EQ stage follows the list):

    1. A good saturator plugin.
    2. A very dense reverb with a very short decay time; less than half a second should be good. Set the predelay to zero, and take the high-cut frequency down low: cut at about 2 kHz. You don't want the synth to sound echoey; you just want it to sound like it played through a real speaker cabinet.
    3. EQ: all synths have filters, but many of them don't have a built-in parametric EQ. Adjusting the EQ properly can do wonders for a digital synth. To make it sound more analog, use a mid boost with a wide Q; the boost frequency depends on the synth tone you are looking for, but it's usually between 350 and 1500 Hz. A little boost around 60 Hz also helps, but go gently on it, because too much bass doesn't make it sound vintage and analog. A high-pass filter at 40 Hz also helps: vintage analog synths usually don't have subwoofers, so you don't want that rumble even if you are making bass patches. Finally, a high-shelf filter at about 2 kHz, cutting the high frequencies by 5-10 dB, makes it sound like a keyboard amp speaker without completely destroying all your high-frequency tone the way a guitar amp sim would.
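
    For anyone who wants to experiment with that EQ stage offline, here is a hypothetical sketch (assuming Python with SciPy; the 700 Hz centre, gains, and Q values are example settings within the ranges suggested above) built from standard RBJ-cookbook biquads:

    ```python
    import numpy as np
    from scipy import signal

    fs = 96_000  # per the advice above, run at a high sample rate

    def peaking(f0, gain_db, q):
        """RBJ peaking-EQ biquad coefficients (b, a)."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
        a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
        return np.array(b) / a[0], np.array(a) / a[0]

    def high_shelf(f0, gain_db, s=1.0):
        """RBJ high-shelf biquad coefficients (b, a)."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / s - 1) + 2)
        cw, sq = np.cos(w0), 2 * np.sqrt(A) * alpha
        b = [A * ((A + 1) + (A - 1) * cw + sq),
             -2 * A * ((A - 1) + (A + 1) * cw),
             A * ((A + 1) + (A - 1) * cw - sq)]
        a = [(A + 1) - (A - 1) * cw + sq,
             2 * ((A - 1) - (A + 1) * cw),
             (A + 1) - (A - 1) * cw - sq]
        return np.array(b) / a[0], np.array(a) / a[0]

    def analogify(x):
        """Apply the EQ chain from step 3 above to a mono signal x."""
        b, a = signal.butter(2, 40, btype="highpass", fs=fs)  # rumble cut
        y = signal.lfilter(b, a, x)
        for stage in (peaking(700, 3, 0.7),    # wide mid boost (example values)
                      peaking(60, 1.5, 1.0),   # gentle low boost
                      high_shelf(2_000, -6)):  # "speaker cabinet" treble cut
            y = signal.lfilter(*stage, y)
        return y
    ```
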
  • We are very lucky to have you sharing your knowledge here @Blue_Mangoo
    :)

  • @Blue_Mangoo said: […]

    😲 Wow, this is really an amazing amount (and quality) of information! Thanks a lot, you answered everything I was curious about. I think everyone should experiment on their own to see (and hear) the possible differences and find out which plugins do harm and which handle this well. Now I'll be more paranoid when making music... :sweat_smile:

  • @skrat said: […]

    Now I'll be more paranoid when making music... :sweat_smile:

    A certain amount of paranoia is justified. In the end, your ears must be the guide, but it's challenging, because most plugins affect the sound in positive and negative ways at the same time. I guess most decisions we make in life are like that. :)

  • edited August 2019

    @skrat

    I was working on adjusting anti-aliasing filters in an oversampler this morning, so while I had that code handy I made a video demonstrating the kind of distortion you get from upsampling and downsampling audio:

  • Would love to see the original video at the beginning of this thread; can it be re-uploaded? Thanks!

  • @sm606 said:
    Would love to see the original video at the beginning of this thread; can it be re-uploaded? Thanks!

    Oh, indeed, @Blue_Mangoo

    What happened?
    Can we get this video back, pleaseeeee?

  • Totally unrelated: I just got a YouTube ad that is 70 minutes long!

    Are they totally crazy now?

  • edited January 2021
    The user and all related content has been deleted.
  • @Max23 said:

    @Mayo said: Exactly!
    Hence why most top mastering guys upsample a 44.1 kHz song to 96 kHz, and have much better tools to EQ / compress with.

    this makes no sense
    if you upsample a 44.1 kHz file to 96 kHz you gain nothing
    it's just a 44.1 file in drag. ;)

    it's like when you export an MP3 as a WAV:
    nothing happens. it's the same file as before.
    it's just a pig with lipstick. ;)

    If you up-sample and then down-sample and do nothing in between, you are correct. However, some DSP processes have artifacts related to the sampling rate. When these are applied at a very high sampling rate, the artifacts are well above the range of human hearing; when they are applied at lower sample rates, they may affect audible frequencies.

    To what extent these artifacts are noticeable will depend on the processes, the playback equipment, and the listener's ears.
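
    A simple null test backs up the "you gain nothing" half of this (a sketch assuming Python with SciPy): resample 44.1 kHz audio up to 96 kHz and straight back down with nothing in between, and the residual sits far below audibility, so any benefit has to come from what is done at the higher rate.

    ```python
    # Null test: 44.1 kHz -> 96 kHz -> 44.1 kHz with no processing in between.
    import numpy as np
    from scipy import signal

    fs = 44_100
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz tone, one second

    # 96000 / 44100 reduces to 320 / 147, so lengths round-trip exactly
    up = signal.resample_poly(x, 320, 147)
    back = signal.resample_poly(up, 147, 320)

    # Ignore the edges, where the polyphase filters have warm-up transients
    mid = slice(1_000, fs - 1_000)
    residual_db = 20 * np.log10(np.max(np.abs(back[mid] - x[mid])))
    print(f"round-trip residual: {residual_db:.0f} dB")  # far below audibility
    ```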

  • edited January 2021
    The user and all related content has been deleted.
  • edited January 2021
    The user and all related content has been deleted.
  • edited January 2021

    Dan Worrall's take, in "Samplerates: the higher the better, right?", is that higher is not necessarily better.

    His conclusions:

    • Higher sample rates have no benefit for audible content below 20 kHz
    • And can result in lower quality due to intermodulation
    • Higher sample rates only benefit non-linear processes (e.g., saturation)
    • But in those cases it is better to oversample each plugin individually
    • However, aliasing is usually quite subtle and difficult to hear
    • So don't stress about it too much
  • @mojozart said:
    Dan Worrall's take, in "Samplerates: the higher the better, right?", is that higher is not necessarily better.

    It is worth mentioning that where the difference in audio quality is subtle in the original raw audio, those differences may disappear once lossy codecs are applied (as is the case with YouTube, SoundCloud, and most streaming services).

    Worrall doesn't claim that oversampling is never worthwhile. His point is that in some cases it is counter-productive and often provides no significant improvement; he doesn't say it never has a benefit, and he mentions cases where it does. In some of those cases the benefit isn't worth the CPU hit, which is different from the claim, implied by someone else, that it is never beneficial.

  • edited January 2021

    Well, as long as it's 128-bit audio, everything should be just fine 🤷‍♂️

  • @mojozart said:
    Dan Worrall's take, in "Samplerates: the higher the better, right?", is that higher is not necessarily better. […]

    I really would like to recommend the @Blue_Mangoo video again!

  • Does anyone remember the essential aspects of the video? I recently watched a couple of digital audio introductory videos by Monty Montgomery published by Xiph.org that eventually led me to Dan Worrall’s, and now I’m curious about the argument for higher sample rates.

  • @Mayo said: […]

    I record a lot of songs with live guitar, bass, vocals, etc., mainly at 24-bit/96 kHz. At the end of the day, when mixing down for upload as MP3/M4A... yeah, 24/96 might not make sense, but at this point it's mainly for my listening pleasure and being able to listen to a 96 kHz mixdown through an interface that supports it.

  • edited January 2021
    The user and all related content has been deleted.
  • The user and all related content has been deleted.
  • @Max23 I was joking. 16-bit is more than enough for me. 24-bit might give you slightly more headroom when you're mixing that new orchestral main theme for Nolan's next movie. 🤷‍♂️
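
    For reference on the headroom point, the usual back-of-envelope figure is that an ideal N-bit converter gives about 6.02 × N + 1.76 dB of dynamic range (sine-wave SNR); a hypothetical one-liner (Python):

    ```python
    # Ideal dynamic range of an N-bit converter: about 6.02*N + 1.76 dB
    for bits in (16, 24):
        print(f"{bits}-bit: ~{6.02 * bits + 1.76:.0f} dB")   # 16: ~98, 24: ~146
    ```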

  • The user and all related content has been deleted.
  • And soon we'll see that not even 192 kHz is enough when composing music for bats and cats...

    The 16- to 24-bit jump was a huge one, but I've not seen the need to go above 44.1 kHz with oversampling.

    My super ESI U2A did 64× oversampling at the AD stage, and there never was much energy above 17 kHz. Then again, not many microphones capture super-high frequencies with enough energy above the noise floor. Self-resonating filters could go quite high, but the levels also went up; to avoid distortion, the levels had to be turned down, and whoops, the super-high-frequency content also got turned down below the hearing threshold.

    Theoretical limits and practical use cases are sometimes quite far apart from each other...

    I'm more allergic to noise; even when it is around -90 dB it annoys me. Maybe my ears are too sensitive or something...

    Cheers!

  • edited January 2021

    @Max23 said:
    in case nobody noticed:
    lol, the video at the beginning of this thread was deleted,
    so somebody changed his mind :)

    I would be curious to hear the reason.

    I cannot believe that a simple change of mind caused this.

    The video was very interesting and dove into lots of detail to explain the situation.
    I recommended the video in some other threads.

    Sadly, I did not download it.
    But that idea only ever comes too late :'(

    I wrote a PM to @Blue_Mangoo in case he does not check the regular notifications.

  • edited January 2021

    Why you don't need 24-bit / 192 kHz listening formats:
    https://youtube.com/watch?v=cIQ9IXSUzuM&feature=emb_logo

  • @BladeRunner said:
    Why you don't need 24-bit / 192 kHz listening formats:
    https://youtube.com/watch?v=cIQ9IXSUzuM&feature=emb_logo

    This is about which settings to run your DAW at while working on the audio.
    It is not about listening formats.
