SynthJacker

Comments

  • @InfoCheck said:
    Don’t even know if this is possible or not, but it’d be nice if SynthJacker could automatically create sustain markers for the SFZ files it outputs as that seems like a very tedious manual task.

    Much like I just asked wim: by sustain markers, are you referring to sample loop points? Or something else?

  • @wim I agree about some of the excess silence that comes out in the samples. That seems like something a programmer could detect and delete: audio below some small threshold. When I load some instruments into AudioLayer, the initial silence often needs trimming, and that puts you back into "why aren't these delivered to me cleaned up?" I suspect the slicing is based purely on timing details from the MIDI stream, and @coniferprod assumes that's close enough. Well, MIDI timing and audio recording can slowly drift out of sync on sample jobs that take 10 minutes or more, so I get a lot of extra unwanted silence.

    Probably a big ask, but a tool to batch-trim silence would be appreciated, if it exists in Auditor or some other sample editing app.

    @coniferprod said:

    @wim said:
    The only thing I can think of would be auto-trim of silence from the beginning and end of BYOF samples.

    All samples... detect silence and capture the true transient ramp of the sample. The endings I may not care about, unless a sample goes silent and a timing misalignment snips off the next transient. My loops are so long that I have never had that problem. With some instruments I have to make loop points in AudioLayer, and at that point I usually just use the original app.

    Sorry, can you elaborate? If you bring in a long recording from elsewhere, and you used a SynthJacker-generated MIDI sequence, then the same Trim Start/End/Both setting applies to all the slices that are made from that long recording.

    Yes. It looks like the MIDI events are the authority used for timing details. It works pretty well, but probably fails with really long audio files.

    Maybe you mean the initial lag that can be present when recording IAA instruments, before the first note? I fiddled with that but it wasn't worth the effort to be honest. Easier to just chop it in an audio editor than try to adjust all sorts of nudge/pre-roll settings.

    Easier for whom? Sorry to be rude, but please consider checking for silence, enabled by a user setting. Huge benefit for me.

    Of course, it would be fabulous if there was some magic for setting sustain loop points, but I can't even imagine an app being able to do that effectively.

    Are sustain loop points different than "regular" loop points, that is, sections of the sample that repeat while the note is held down, snapped to zero-crossings? Or is it the same thing?

    I guess I want great sustain loops to make do with shorter samples that sound real when looping... in general they don't, to my ear, but I should give this more thought and try looped samples in some projects to run more AudioLayer instances. Harry demos a Cubasis project with something like 10 instances using his VSCL orchestra sounds. Very impressive for a cheap orchestrator solution, but it sounds less real. Upping the "real" and doing lots of parallel work is where the handmade stuff really shines. Especially importing into NS2. It scales way beyond any AUv3 DAW approach.

    Good to see @wim pop in with enhancements. He's very productive and would know what to focus on for doing real work.

  • @coniferprod said:

    @wim said:
    The only thing I can think of would be auto-trim of silence from the beginning and end of BYOF samples.

    Sorry, can you elaborate? If you bring in a long recording from elsewhere, and you used a SynthJacker-generated MIDI sequence, then the same Trim Start/End/Both setting applies to all the slices that are made from that long recording.

    Maybe you mean the initial lag that can be present when recording IAA instruments, before the first note? I fiddled with that but it wasn't worth the effort to be honest. Easier to just chop it in an audio editor than try to adjust all sorts of nudge/pre-roll settings.

    Yes, that's what I meant. I'm not sure why it's difficult to trim all the zero-level waveform from the beginning or end of a sample, but if you say it is, then I'm not in a place to question that. B)

    It's easy enough to do manually as you say, but it's something I mentioned because it would save time, and more importantly because it tripped me up in the beginning until I understood that it needed to be done. My imports were all screwed up and at first I blamed SynthJacker. It finally dawned on me and then things went fine. This is a detail that I always need to caution new users about, and it isn't that easy to explain to a beginner.

    Of course, it would be fabulous if there was some magic for setting sustain loop points, but I can't even imagine an app being able to do that effectively.

    Are sustain loop points different than "regular" loop points, that is, sections of the sample that repeat while the note is held down, snapped to zero-crossings? Or is it the same thing?

    No, it's the same thing, I think. What I mean is finding a section of each sample that sounds like it should when looping while a key is held down. That is challenging to do manually, even with an editor such as Auditor that has some good looping tools. It's hard for me to imagine an app that could do that effectively and have it sound good. Samples are just too different in their dynamics. But, perhaps it's possible.

    Cheers, and thanks for this really great tool and your continued interest in improving it.

  • @coniferprod said:

    @InfoCheck said:
    Don’t even know if this is possible or not, but it’d be nice if SynthJacker could automatically create sustain markers for the SFZ files it outputs as that seems like a very tedious manual task.

    Much like I just asked wim: by sustain markers, are you referring to sample loop points? Or something else?

    Yes: creating loop points so that when you press the key, playback goes through the attack and decay portion first, then loops the sample between the sustain loop points, and finally goes through the release phase when you release the key.

  • edited March 2020

    If you do get excess trailing silence in the sample files after slicing, you should definitely check the noise floor setting. SynthJacker works backwards from the end of the file and discards all trailing samples that are below the noise floor. The same applies to trimming from the front; the first sample that is above the noise floor becomes the first in the file when you select to trim from start. (Sample = a data point in the file, not the actual sample file. Yes, it’s confusing.)

    If SynthJacker did transient detection, there would also have to be a similar setting, so in that sense it's no different from the current timing-based operation. Just plan ahead and check your results, and if necessary, do it again. And obviously, if you're trying to sample a piano, the same settings will not apply to the low and high keys, so it's best to create different sequences for the low, mid, and high keys. Then you just change the trim settings between runs, because they are global, not part of the sequence.
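
    In code terms, the trim pass described above might look roughly like this. A minimal Swift sketch, assuming one channel of audio already decoded into a Float array; it illustrates the idea, not SynthJacker's actual implementation:

    ```swift
    // Trim samples below a noise floor from the start and/or end of a slice.
    // `samples` holds one channel as floats in -1...1; `noiseFloor` is a
    // linear amplitude threshold (e.g. 0.001, roughly -60 dB).
    func trimmed(_ samples: [Float], noiseFloor: Float,
                 fromStart: Bool, fromEnd: Bool) -> [Float] {
        var start = 0
        var end = samples.count

        if fromStart {
            // The first sample above the noise floor becomes the first in the file.
            start = samples.firstIndex { abs($0) > noiseFloor } ?? samples.count
        }
        if fromEnd {
            // Work backwards from the end, discarding everything below the floor.
            if let last = samples.lastIndex(where: { abs($0) > noiseFloor }) {
                end = last + 1
            } else {
                end = start  // the whole slice is below the floor
            }
        }
        return start < end ? Array(samples[start..<end]) : []
    }
    ```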

    Hope this helps.

  • Those are great tips, since you know how the trimming is managed internally. Do you give any special consideration to ensuring samples start and stop on a zero-crossing point, to prevent unintended clicks? With a high threshold, for example, would the slice start right at that threshold? Or do you add a touch of ramp-up and fade-out to ensure zeros at the beginning and end?

  • @McD said:
    Do you give any special consideration to ensuring samples start and stop on a zero-crossing point, to prevent unintended clicks?

    No, there is no ”snap to zero crossing” or similar — would that be useful in trimming? If there were loop detection, it would be very useful, if not essential.

    With a high threshold, for example, would the slice start right at that threshold? Or do you add a touch of ramp-up and fade-out to ensure zeros at the beginning and end?

    Also, no fade-in or fade-out is applied when trimming. It's quite simple, really. But at least what I sample tends to have quite sharp transients at the start, so maybe I've just been lucky. If I found out there was a click, I'd just take it to an audio editor and trim some more, or let the sampler handle it.
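
    For anyone wondering what a zero-crossing snap would involve if it were added, the gist is something like this. A hypothetical Swift sketch (again assuming mono float samples), not something SJ currently does:

    ```swift
    // Move a trim point to the nearest zero crossing so a cut doesn't
    // land mid-waveform and produce a click. A zero crossing is where
    // two consecutive samples change sign.
    func nearestZeroCrossing(in samples: [Float], to index: Int) -> Int {
        func isCrossing(_ i: Int) -> Bool {
            i > 0 && i < samples.count && (samples[i - 1] < 0) != (samples[i] < 0)
        }
        // Search outwards from the requested index in both directions.
        for offset in 0..<samples.count {
            if isCrossing(index - offset) { return index - offset }
            if isCrossing(index + offset) { return index + offset }
        }
        return index  // no crossing found (e.g. constant DC offset); leave as-is
    }
    ```

    A fade would be the complementary approach: instead of moving the cut, ramp a few milliseconds of gain in and out around it.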

  • @coniferprod said:

    @McD said:
    Do you give any special consideration to ensuring samples start and stop on a zero-crossing point, to prevent unintended clicks?

    No, there is no ”snap to zero crossing” or similar — would that be useful in trimming? If there were loop detection, it would be very useful, if not essential.

    With a high threshold, for example, would the slice start right at that threshold? Or do you add a touch of ramp-up and fade-out to ensure zeros at the beginning and end?

    Also, no fade-in or fade-out is applied when trimming. It's quite simple, really. But at least what I sample tends to have quite sharp transients at the start, so maybe I've just been lucky. If I found out there was a click, I'd just take it to an audio editor and trim some more, or let the sampler handle it.

    I'm no expert, so someone might weigh in... if it isn't broke, don't fix it. I think you should really work the NanoStudio 2 users, since AL has changed its workflow... there may be corner cases worth working there too. Importing into Auria Pro should also be emphasized, since its internal Lyra sampler is a disk streamer that can run the biggest samples available, like the 6 GB "Piano in 162" (don't get it... it's really two pianos, one close- and one far-mic'ed, and in many ways not as good as Salamander; there's a 24-bit Salamander you can load that's nicer than the standard 16-bit one AP has).

    But let's show the power of the SynthJacker-to-NS2 hand-off with some details on the process and demos of the results. @TheAudioDabbler did a video on the process already, I think. He didn't quite explain every little detail, so I didn't follow, but I'm not up to speed with NS2 yet, so I have to start just by learning its UI. @rs2000 and @MrBlaschke have made and shared some excellent NS instruments that were probably made with SJ.

  • edited March 2020

    @McD said:
    @rs2000 and @MrBlaschke have made and shared some excellent NS instruments that were probably made with SJ.

    No SynthJacker used because I'm old enough to have my own tool chain on the Mac for automated sampling and editing. Would be a nightmare to do on iOS.

    SynthJacker certainly does a great job in the sampling too; it's just that most of the work I see is outside the actual sampling process itself. Especially on iOS, I always try to make the instruments as small as possible while sounding as good as possible. That involves a lot of trial and error with sample mapping, velocity layer choices, modulations etc., especially with natural instruments.

  • Nope also. No SynthJacker used for that. I've used it for other instruments, but chopping and preparing for the NS2 instrument(s) was done with Auditor and tests in AudioLayer: manual work.

    My results with the actual SJ trimming were always good. Like @coniferprod says, no fuzz around it, just clean cuts. Which is exactly what I would prefer in that situation.

    I think you can say that a fully automated process of sampled-instrument creation is not possible. Nearly every sample needs trimming and fine detail work done by hand afterwards. So SJ is of very good use, and that is exactly what it does well.

    Anything that might be added, like fades and such, should be optional. It can be supportive but also destructive, and would still need manual consideration afterwards for finely detailed instruments.

  • @MrBlaschke said:
    I think you can say that a fully automated process of sampled-instrument creation is not possible. Nearly every sample needs trimming and fine detail work done by hand afterwards.

    Exactly. Or, it may be technically possible, but not feasible.

    I do urge people to always check the results, and although SJ is called an autosampler, that really refers to the bulk of the sampling work, not to the necessary fine-tuning.

  • @coniferprod said:

    @MrBlaschke said:
    I think you can say that a fully automated process of sampled-instrument creation is not possible. Nearly every sample needs trimming and fine detail work done by hand afterwards.

    Exactly. Or, it may be technically possible, but not feasible.

    I do urge people to always check the results, and although SJ is called an autosampler, that really refers to the bulk of the sampling work, not to the necessary fine-tuning.

    I'm on your side here, and that is what you implemented very well. The biggest major step was the BYOF implementation. I mean, that worked way before AudioLayer implemented its own functionality recently. And I would not even trust that one blindly!

  • I think you should consider an update that slices up audio based on detecting and deleting silence. No correlation to MIDI events at all: just BYOAF something, and it detects and slices it into waves for importing. That would make the app useful even to AudioLayer users who use its new features for MIDI-based sampling but would like to import random audio slices into a sample playback target.

    It would be nice to hear what the forum thinks the right naming strategy would be:

    Assign slices to sequential notes (60.wav, 61.wav, etc.), or some other idea... being useful in most apps, faster, would be the goal. A rough sketch of the idea is below.
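
    Something like this could be the core of it: scan for runs of silence below a noise floor, close a slice whenever the gap is long enough, and name slices after sequential MIDI note numbers. A Swift sketch of the idea only; all the parameter names and defaults here are made up for illustration:

    ```swift
    // Slice an audio buffer wherever silence longer than `minGapSeconds`
    // occurs, naming the slices after sequential MIDI notes: 60.wav, 61.wav...
    struct Slice { let range: Range<Int>; let fileName: String }

    func sliceOnSilence(_ samples: [Float], sampleRate: Double,
                        noiseFloor: Float = 0.001, minGapSeconds: Double = 0.25,
                        startNote: Int = 60) -> [Slice] {
        let minGap = Int(minGapSeconds * sampleRate)
        var slices: [Slice] = []
        var sliceStart: Int? = nil
        var quietRun = 0
        var note = startNote

        for (i, s) in samples.enumerated() {
            if abs(s) > noiseFloor {
                if sliceStart == nil { sliceStart = i }  // a new slice begins
                quietRun = 0
            } else if let start = sliceStart {
                quietRun += 1
                if quietRun >= minGap {                  // gap long enough: close slice
                    slices.append(Slice(range: start..<(i - quietRun + 1),
                                        fileName: "\(note).wav"))
                    note += 1
                    sliceStart = nil
                    quietRun = 0
                }
            }
        }
        if let start = sliceStart {                      // trailing slice with no gap after it
            slices.append(Slice(range: start..<samples.count, fileName: "\(note).wav"))
        }
        return slices
    }
    ```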

  • Yes, I saw in the AudioLayer release notes that there was some new functionality in this area. Haven’t checked that out yet, mainly because I couldn’t find anything about it in the AL manual, so I left it for another time. But I guess someone may have done / will do a study of it in some AudioLayer related thread.

  • wim
    edited March 2020

    @coniferprod said:
    Yes, I saw in the AudioLayer release notes that there was some new functionality in this area. Haven’t checked that out yet, mainly because I couldn’t find anything about it in the AL manual, so I left it for another time. But I guess someone may have done / will do a study of it in some AudioLayer related thread.

    It is similar functionality to SynthJacker, but built into the app. You set the range of notes, number of steps between notes, number of velocity layers, and duration of hold and release. Then you point it to an app, which it triggers and captures the output from, then builds the zones. It has a few bugs right now, but generally worked in my tests in AUM. There’s a video of it in operation in a related AudioLayer thread. (https://forum.audiob.us/discussion/comment/770981/#Comment_770981)

  • @wim said:
    It is similar functionality to SynthJacker, but built into the app. You set the range of notes, number of steps between notes, number of velocity layers, and duration of hold and release. Then you point it to an app, which it triggers and captures the output from, then builds the zones. It has a few bugs right now, but generally worked in my tests in AUM. There’s a video of it in operation in a related AudioLayer thread. (https://forum.audiob.us/discussion/comment/770981/#Comment_770981)

    Thanks for the pointer! AudioLayer seems to be coming at this from a slightly different angle, being an instrument Audio Unit. I guess it was inevitable / obvious that AudioLayer would be getting something like this.

  • wim
    edited March 2020

    There are still situations where it won’t work. Specifically, some apps don’t expose a virtual port for AUM to route to, so you can’t send the MIDI there. SynthJacker to the rescue. And if one just wants to have a single workflow that works for everything, SynthJacker makes total sense. Plus, when I sample something for AudioLayer, I also want to do it for other apps like NS2 Obsidian. It can all be done smoothly and without fuss. SynthJacker is a great tool. B)

  • @wim said:
    There are still situations where it won’t work. Specifically, some apps don’t expose a virtual port for AUM to route to, so you can’t send the MIDI there. SynthJacker to the rescue. And if one just wants to have a single workflow that works for everything, SynthJacker makes total sense. Plus, when I sample something for AudioLayer, I also want to do it for other apps like NS2 Obsidian. It can all be done smoothly and without fuss. SynthJacker is a great tool. B)

    Thanks wim, appreciated! That's actually an important point: how easy or difficult it is to get samples for any use, including all the capable apps on iOS/iPadOS (like AL, NS2, Auria Pro) and also desktop samplers like NI KONTAKT, which I think doesn't have auto-sampling. HALion has something like this, and MainStage has the old Redmatica Autosampler. And of course there is SampleRobot. Ableton Live may have something by way of Max for Live.

    Originally I wanted SynthJacker to just sample my hardware synths, and I didn't even seriously consider supporting instrument Audio Units, because I thought they could just be used as is. But apparently when you use enough of them, you run out of CPU, and it's more efficient to just use samples (although you may lose some of the playability, depending on how you sample, and how you use the results in your sampler).

    There is still a lot I would like to add to SynthJacker, and I still think it is feasible to develop it further. (To all who are interested: see a little earlier in this thread for some of the future development ideas I have considered.) To have AudioLayer do the auto-sampling for you requires AudioLayer (duh!), and I don't know (yet) how easy it is to fish out those samples and use them in some other app.

    It’s easy to get at the samples created by AudioLayer’s auto-sample. But then they would most likely need to be batch-renamed for a different target app, so it's not so simple without resorting to Python or other approaches (a sketch of the idea follows below). So, unless one only wants to use AudioLayer, SynthJacker is more practical IMO.
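
    For what it's worth, the batch rename doesn't have to mean Python; a few lines of Swift script will do. A hypothetical example that maps files named after MIDI note numbers (60.wav) to note names (Piano_C3.wav); the folder path and naming scheme are made up:

    ```swift
    import Foundation

    // Rename "60.wav"-style files to "Piano_C3.wav"-style names.
    // MIDI 60 is called C3 here; some apps call it C4, so adjust the offset.
    let noteNames = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    let folder = URL(fileURLWithPath: "/path/to/samples")  // adjust to taste
    let fm = FileManager.default

    for file in try fm.contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)
        where file.pathExtension == "wav" {
        guard let midi = Int(file.deletingPathExtension().lastPathComponent) else { continue }
        let newName = "Piano_\(noteNames[midi % 12])\(midi / 12 - 2).wav"
        try fm.moveItem(at: file, to: folder.appendingPathComponent(newName))
    }
    ```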

    BTW, I’m less likely to sample AUs; only if they’re particularly heavy, and only as a last resort. IAA apps, on the other hand, are a great opportunity, as using them is generally a pain.

  • I won't delete SynthJacker either! There are strict memory limits for AUs, so AL has to be tested in that case (which is very time consuming). SJ is standalone and subject to different memory limits, I guess. So heavy sampling jobs will always be an SJ job at my place. "Quick and dirty" tests are easily done inside AL.

  • @MrBlaschke said:
    I won't delete SynthJacker either!

    And if we did, it wouldn't matter. The point of any app is to get new users who pay for it. I can see why any developer would appreciate subscriptions or constantly selling significant updates. But we get great apps at budget prices and a steady supply of updates with more features. It's rare that an update causes a lot of problems.

    I'd like to see SynthJacker pivot and keep increasing in value to me as it secures new sales. To do that it has to find a unique need and fill it.

    If you have any ideas, throw 'em out there.

  • edited March 2020

    Ideas

    1. If SynthJacker could take care of converting the sampled files into BeatMaker, NanoStudio 2, Logic, Ableton, and AudioLayer formats when we export them, that would be amazing. SynthJacker could then be used by other types of producers, not just iOS.

    2. Make sampling using just the headphone jack user-friendly. By that I mean: if someone forgets their Lightning-to-USB cable, they can use their wired 3.5 mm cable only, and the phone asks them to play certain notes, with visual cues on which key to press.

    3. Auto noise detection meter before sampling (similar to the AudioShare meter when recording).

    4. Noise reduction for headphone jack sampling.

  • Ditto on #1 of what @Samflash3 said. That would be huge. Would pay an IAP for that. I have sampler instruments in other formats that are not EXS or SFZ.

  • Hi folks, as is the tradition, the SynthJacker Summer Solstice Sale is on! Half price on the App Store through June 21. If you already have it, thanks for your support, and tell a friend!

    Had a crazy busy spring, but slowly getting back to SJ development. It’s been a while. (But WWDC first.)

    Suggestions, requests: discuss them here. Thanks again, keep on (multi-)sampling!

  • Hello again, the new SynthJacker version 0.8.2 is rolling out to the App Store now. Here's what's happened with this one:

    • Fixed a problem with hardware sample rates. Previously SynthJacker would try to set the hardware sample rate to 44.1 kHz. On newer devices the sample rate is fixed at 48 kHz. This could result in samples being out of tune. (See the sketch after this list.)
    • New sample post-processing engine with better visual feedback.
    • Added input level meter.
    • Note sequence name now follows the selected factory preset on instrument Audio Units (when applicable).
    • Added option to clean up temporary files (in Settings menu).
    • Added alert for missing microphone permission, in case it was turned off after it was initially granted.
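
    To illustrate the sample rate point (not SJ's actual code, just the underlying iOS behavior): you can ask for a preferred rate, but you must read back what the hardware actually gave you and record at that rate, otherwise the samples end up mislabeled and out of tune.

    ```swift
    import AVFoundation

    // Ask for 44.1 kHz, then check what the hardware actually provides.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    try session.setPreferredSampleRate(44_100)
    try session.setActive(true)

    // On newer devices this reports 48000 regardless of the preference,
    // so the recorder must use this value, not the one it asked for.
    print("Hardware sample rate: \(session.sampleRate) Hz")
    ```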

    As always, download at https://apps.apple.com/us/app/id1445018791.

    Any questions, comments etc. feel free to discuss here, or send feedback from inside the app. SynthJacker keeps a log of what it has been doing, so in tricky problem cases I may need to ask you to send the log file over for analysis.

    Thanks to everyone who helped in testing and troubleshooting! Hope you enjoy making sample-based instruments.

  • Thank you, @coniferprod. And congrats on the release. 🎉

  • Thanks @syrupcore ! It's been a while... Forgot to mention that this is obviously a free update to existing users.

    Anyone who's considering buying: well, there may be some kind of pattern to our sales... wink wink.

  • Congrats on the update - glad you got that sample rate issue fixed 👍

  • @coniferprod said:

    Audiobus would need a wizard to point out how you can drive the synth. I’ve done the mental exercise twice, and didn’t come out of it any wiser, sorry.

    Really? How do you drive an external hardware synth? You throw MIDI at it via the correct MIDI port and channel. Synth apps are no different, even standalone. They have their own MIDI ports which they expose to the system, and all the user has to do is select them in the controller app (and/or select the controller app’s output port from the synth app). Then SynthJacker could do its MIDI magic. This bit doesn’t need to involve Audiobus at all.

    Audiobus would only be required to capture the audio output. It would take the place of the hardware input. Provided the synth app is sitting in the Audiobus input slot, the audio it produces becomes available first to the AB fx slot, then to the AB output slot, which is where SynthJacker would need to be sitting to capture it.

    Audiobus is a virtual cable. SJ needs to drive the synth app directly, like it does with hardware.

  • It's not about the MIDI port or channel. It's not even about capturing the output. It's about sequencing the notes. In a scenario like this (with Audiobus), who is controlling the transport?

    SynthJacker uses a MIDI sequencer to drive the external synth. It basically plays a MIDI file generated on the fly into the selected port.

    With instrument Audio Units, currently SJ plays the notes individually using a thread. That is because I couldn't get Apple's AVAudioSequencer to work with instrument Audio Units. Now it seems that I may have gotten it to work, and that may well be because Apple has actually fixed it in the last year or so, for all I know (because I'm not doing anything different now than what I did with iOS 12.4). So if that turns out well, instrument Audio Units will also be driven using a sequencer, i.e. prescripted notes, not notes emitted live.
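
    For the curious, the basic AVAudioSequencer setup looks like this. A minimal sketch only: AVAudioUnitSampler stands in for a third-party instrument Audio Unit, and the MIDI file path is a placeholder:

    ```swift
    import AVFoundation

    // Attach an instrument node to an engine, load a MIDI file,
    // and let the sequencer drive the instrument with prescripted notes.
    let engine = AVAudioEngine()
    let instrument = AVAudioUnitSampler()  // stand-in for an instrument AU
    engine.attach(instrument)
    engine.connect(instrument, to: engine.mainMixerNode, format: nil)
    try engine.start()

    let sequencer = AVAudioSequencer(audioEngine: engine)
    try sequencer.load(from: URL(fileURLWithPath: "/path/to/sequence.mid"), options: [])
    // Route every track in the MIDI file to the instrument node.
    for track in sequencer.tracks { track.destinationAudioUnit = instrument }
    sequencer.prepareToPlay()
    try sequencer.start()
    ```

    Whether this works reliably with third-party instrument Audio Units is exactly the open question mentioned above.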

    I made the decision some time ago not to support IAA directly. For that, SynthJacker has the BYOAF (Bring Your Own Audio File) feature, which lets you bring in any recorded audio file for slicing into notes, and if the timings match the note sequence, the results will be the same as if you "jacked" live.

    And because I couldn't figure out how SJ would fit in the Audiobus scenario, it doesn't support Audiobus directly either, but through BYOAF. You can use any MIDI player in Audiobus to play back a note sequence exported from SJ, and record the result in Audiobus, then bring the recording over and have it sliced. That, to me, is a good enough workaround for not having direct Audiobus support.

    If you can point me to some other app that both supports Audiobus and does what SJ needs to do in terms of transport (starting and stopping the sequencer) and notes (playing them either live or from a MIDI file created in memory), then I can study it and try to learn more. Especially if you are a developer and know what it takes to host Audio Units and drive them with a sequencer. It could be that I'm just not smart enough for Audiobus.
