Audiobus: Use your music apps together.


SynthJacker


Comments

  • @coniferprod said:
    I looked at SynthJacker's device and iOS version stats for 2019, both for the whole year and for Q4 only. There is a sharp tilt towards iOS 13 in the last quarter, so that now almost 70% of active users are on iOS 13. I'm thinking it's time to take full advantage of the new iOS 13 features, so the next version of SynthJacker will most likely require iOS 13. What are your thoughts?

    What new iOS 13 features?

  • @wim said:
    What new iOS 13 features?

    Technically, frameworks that require at least iOS 13. First Combine, to better handle async operations (maybe also BackgroundTasks to let a long sampling job go on in the background). Maybe SwiftUI when it matures (big change, not yet). Switching to Core Data for the sequence preset database would allow it to be synced to iCloud with CloudKit. There are also AVFoundation framework improvements.

  • Nothin' in there I'm interested in. I don't use the internal AU hosting, only BYOF. Don't care about iCloud sync even a little. The UI doesn't need to be fancy for something like this. Frankly, I don't see anything there worth de-supporting older versions except maybe the long-sampling job in the background thing, though if it were me I would just let it run in the foreground - overnight if needed.

    Just my two cents. B)

  • edited January 2020

    Thanks for sharing, I appreciate it. Which iOS version are you on?

    Of course, a new version that requires iOS 13 does not kill the previous one.

    Not that SJ is anywhere near feature complete yet; moving forward makes it easier to bring new useful stuff.

  • @wim said:
    Nothin' in there I'm interested in. I don't use the internal AU hosting, only BYOF. Don't care about iCloud sync even a little. The UI doesn't need to be fancy for something like this. Frankly, I don't see anything there worth de-supporting older versions except maybe the long-sampling job in the background thing, though if it were me I would just let it run in the foreground - overnight if needed.

    Just my two cents. B)

    I'm on iOS 13, latest version. So no worries for me anyway. B)

  • I made a video about how to use SynthJacker's BYOAF (Bring Your Own Audio File) feature. It's up on YouTube now, hope you like it!

  • Thanks for your awesome app! Great video!

  • SynthJacker Leap Day Flash Sale! Approx. 29% off on the App Store until end of February. Jack before you leap!

  • @coniferprod said:
    SynthJacker Leap Day Flash Sale! Approx. 29% off on the App Store until end of February. Jack before you leap!

    Dude, you’re awesome. It’s almost like you read my post from yesterday about buying the app.

  • Get it! A really awesome app for sampling❤️❤️❤️

  • edited February 2020

    @Tones4Christ said:
    Get it! A really awesome app for sampling❤️❤️❤️

    Yeah. Just got it. Planning on turning my tiny 64GB iPhone SE into a powerful sampling device.
    Combined with Koala Sampler and NanoStudio, I'm set.

  • @coniferprod - Been testing SynthJacker out and it's been fantastic. But do you have any advice for working with Bluetooth MIDI interfaces? I've been trying to make an (almost) wireless sampling kit with an Apogee guitar input and a Yamaha UD-BT01 module, but I can't seem to send/receive MIDI via SynthJacker. I did make a MIDI file of all the scales I want to capture and played it back via Xequence. Or is there anything I'm missing in SynthJacker's settings?

  • wim
    edited March 2020

    No, you have it right. SynthJacker doesn't support Core MIDI to apps or to Bluetooth, only to hardware. Using Xequence for the playback is the best solution, and it works well.

  • wim
    edited March 2020

    SynthJacker has the great bring-your-own-file functionality, if you didn't already know. It'll create the MIDI file for you, then auto-slice the recording into samples, with an SFZ file to boot. The nice thing is you can reuse that MIDI file with any source you want to sample, using X2 or any other MIDI file playback app.
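
For anyone who hasn't met SFZ: it's a plain-text sample-mapping format. Here's a hand-written sketch of the kind of mapping such a file contains (the file names, key ranges, and values are invented for illustration, not SynthJacker's actual output):

```
// one region per sliced sample; the key range, root key, and velocity
// range tell the player how to map and transpose each file
<region> sample=note_060.wav lokey=58 hikey=61 pitch_keycenter=60 lovel=1 hivel=127
<region> sample=note_064.wav lokey=62 hikey=65 pitch_keycenter=64 lovel=1 hivel=127
```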

  • @wim said:
    SynthJacker has the great bring-your-own-file functionality, if you didn't already know. It'll create the MIDI file for you, then auto-slice the recording into samples, with an SFZ file to boot. The nice thing is you can reuse that MIDI file with any source you want to sample, using X2 or any other MIDI file playback app.

    I've been seeing a lot of .sfz and ..exs lately. Maybe you could help answer this. Does Nanostudio 2 Obsidian support them, or do I need to get Audiolayer to use them?

  • wim
    edited March 2020

    NS2 does not. However, SynthJacker can make an Obsidian patch in a flash. Maybe even a SamFlash. Only a limited number of samples compared to many SFZ instruments, though.

    Zenbeats supports SFZ import to some extent. Auria Pro supports SFZ all the way to Sunday.

  • @wim said:
    NS2 does not. However, SynthJacker can make an Obsidian patch in a flash.

    NS2's Obsidian has 3 oscillators that can be loaded with samples and mapped across note ranges. For typical ROMpler-style playback, SynthJacker should create a set of samples (24 is ideal) at one velocity into a selected folder. Do that 2 more times with 2 more velocities into 2 more folders.

    File naming matters less than keeping the 72 total samples isolated in their 3 folders.

    In NS2 you import each folder into an oscillator and define the note zoning and the velocity mapping for each layer.

    This gives you a 3-layer sampled instrument, as one possible use of NS2 for AudioLayer-like capability.

    The primary advantage of NS2 is the efficiency of the Obsidian sampler code, so you can run more parallel "v-instruments" in a project than you'd ever get to work with AudioLayer instances in any DAW. Just the voice of experience... for the quality of a single instrument and more features, go AudioLayer. For parallel instruments in a MIDI-based project, go NS2.

    SynthJacker is ideal for processing existing AUv3 apps and external hardware into V-instrument sample sets.

    NOTE: A good audio interface will let you cable a loop-back from audio OUT into audio IN, so you can SynthJack IAA apps too, with great low-noise sample sets even with the D-to-A and A-to-D conversions in the middle. It's also useful for treating an iPhone as a sound module and SynthJacking its instruments back into your iPad. Desktops are also possible with cabling, and maybe even with IDAM over USB. I never tried that one. Once I get my NS2 efforts off the planning stage I'll probably go after all those sounds on my Mac too.
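
The zoning arithmetic in the 3-layer workflow above can be sketched roughly. The 24-sample and 3-velocity-layer counts come from the post; the piano note range (21 to 108) and the even splits are illustrative assumptions, not anything Obsidian or SynthJacker prescribes:

```python
# Sketch of mapping 24 samples across a key range and splitting
# velocity into 3 layers, as in the 3-oscillator Obsidian workflow.
# Note range 21..108 and the even splits are assumptions for
# illustration, not SynthJacker's actual behavior.

def key_zones(num_samples=24, lo=21, hi=108):
    """Divide the key range [lo, hi] into contiguous zones, one per sample."""
    span = hi - lo + 1
    return [
        (lo + i * span // num_samples,
         lo + (i + 1) * span // num_samples - 1)
        for i in range(num_samples)
    ]

def velocity_layers(num_layers=3):
    """Split MIDI velocity 1..127 into contiguous layers."""
    edges = [1 + i * 127 // num_layers for i in range(num_layers)] + [127]
    return [
        (edges[i] + (1 if i else 0), edges[i + 1])
        for i in range(num_layers)
    ]

zones = key_zones()
assert len(zones) == 24                                  # one zone per sample
assert zones[0] == (21, 23) and zones[-1] == (105, 108)  # contiguous coverage
assert velocity_layers() == [(1, 43), (44, 85), (86, 127)]
```

Each of the three folders described above would hold one velocity layer, with its 24 samples spread across zones like these.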

  • @wim said:
    Maybe even a SamFlash.

    Haha, good one

    Zenbeats supports SFZ import to some extent. Auria Pro supports SFZ all the way to Sunday.

    I never actually got Zenbeats or Auria. Is it worth the unlock if I already have Beatmaker, Nanostudio, Blocs, Gadget, Beathawk, and Garageband?

  • wim
    edited March 2020

    @Samflash3 said:
    I never actually got Zenbeats or Auria. Is it worth the unlock if I already have Beatmaker, Nanostudio, Blocs, Gadget, Beathawk, and Garageband?

    I can’t answer that. Unless there’s something specific you’re missing from one of those that you really need and Zenbeats has, then the answer is probably no.

    For me the biggest draw with Zenbeats is the combination of clip-looping and timeline workflows, somewhat like Ableton Live. It’s also nice that it’s universal and cross-platform, though I rarely use it on anything but iOS.

  • @McD said:
    insert brilliant advice from McD here...

    Thanks for the reply. I'm slowly moving a lot of the things I like from my PC to my iPhone, iPad, and cloud storage for easy accessibility.

    Assuming I'm using Xequence for the MIDI and Audiobus 3, could I technically route the audio to AudioShare and record it that way? Then all I'd have to do is send that recording to SynthJacker for slicing?

  • @wim said:

    @Samflash3 said:
    I never actually got Zenbeats or Auria. Is it worth the unlock if I already have Beatmaker, Nanostudio, Blocs, Gadget, Beathawk, and Garageband?

    I can’t answer that. Unless there’s something specific you’re missing from one of those that you really need and Zenbeats has, then the answer is probably no.

    For me the biggest draw with Zenbeats is the combination of clip-looping and timeline workflows, somewhat like Ableton Live. It’s also nice that it’s universal and cross-platform, though I rarely use it on anything but iOS.

    I guess it's just the FOMO speaking to me. I keep thinking, "Roland is going to do something exclusive on Zenbeats...", but I think I'm better off waiting and watching.

  • wim
    edited March 2020

    @Samflash3 said:

    @McD said:
    insert brilliant advice from McD here...

    Thanks for the reply. I'm slowly moving a lot of the things I like from my PC to my iPhone, iPad, and cloud storage for easy accessibility.

    Assuming I'm using Xequence for the MIDI and Audiobus 3, could I technically route the audio to AudioShare and record it that way? Then all I'd have to do is send that recording to SynthJacker for slicing?

    Yes. But you will need to trim the silence from the beginning of the file with AudioShare’s editor, or you’ll get wonky results.

  • @wim said:

    @Samflash3 said:

    @McD said:
    insert brilliant advice from McD here...

    Thanks for the reply. I'm slowly moving a lot of the things I like from my PC to my iPhone, iPad, and cloud storage for easy accessibility.

    Assuming I'm using Xequence for the MIDI and Audiobus 3, could I technically route the audio to AudioShare and record it that way? Then all I'd have to do is send that recording to SynthJacker for slicing?

    Yes. But you will need to trim the silence from the beginning of the file with AudioShare’s editor, or you’ll get wonky results.

    Very true. Thanks for the tip.
    Here's the last question. Some synths have audio that is tempo-based. Should I set it to a lower tempo (say 70), or is 110 the default? I find 110 is always the setting in apps like GarageBand.

  • @Samflash3 said:
    Here's the last question. Some synths have audio that is tempo-based. Should I set it to a lower tempo (say 70), or is 110 the default? I find 110 is always the setting in apps like GarageBand.

    Great question... when you feed a sample that gets mapped to multiple keys, that sample gets pitch-shifted. Along with the pitch, anything like an LFO effect also gets shifted, and for my money that destroys the realism of the new instrument.

    One note of a saxophone with vibrato will sound OK, but the other notes close to it will sound faux and cheesy.

    The best strategy for me is to sample steady-state oscillations, or clones of a real instrument that plays pure tones without any LFO FX in the mix.

    Then you can insert LFOs using the playback engine, which is a synth that can do modulations and implement fresh LFOs without any shift in the timebase of the underlying samples.

    When I search for a source cello I'll go with a non-vibrato one, and I would do the same with any sampled synth target. Sampling something like iWaveStation, which can generate massive sweeps, just means making really long sampled recordings; SynthJacker's note length has been bumped to 20 seconds. The resulting sample sets approach 1 GB in total size and need disk streaming, which AudioLayer and NS2 seem to manage. @ScottVanZandt does orchestral pieces in NS2 using 20-second samples to avoid looping, which can also ruin the illusion of a live recording.
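
The shift described above falls out of basic resampling math: transposing a sample by n semitones changes playback speed, and therefore any baked-in modulation rate, by a factor of 2^(n/12). A small illustrative sketch (the function name and the 5 Hz vibrato figure are mine, purely for illustration):

```python
# Why baked-in LFOs drift across a key zone: transposing a sample by
# n semitones via resampling multiplies playback speed, and therefore
# any recorded modulation rate, by 2 ** (n / 12).

def playback_ratio(semitones: float) -> float:
    """Resampling speed ratio for a pitch shift of the given semitones."""
    return 2.0 ** (semitones / 12.0)

# A 5 Hz vibrato baked into the sample, played 4 semitones above the
# sample's root key, comes out at roughly 6.3 Hz:
assert abs(playback_ratio(12) - 2.0) < 1e-9   # one octave doubles speed
assert round(5.0 * playback_ratio(4), 2) == 6.3
```

A fresh LFO applied in the playback engine avoids this entirely, because it runs at a fixed rate regardless of which key triggers the sample.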

  • @Samflash3 said:

    @wim said:

    @Samflash3 said:

    @McD said:
    insert brilliant advice from McD here...

    Thanks for the reply. I'm slowly moving a lot of the things I like from my PC to my iPhone, iPad, and cloud storage for easy accessibility.

    Assuming I'm using Xequence for the MIDI and Audiobus 3, could I technically route the audio to AudioShare and record it that way? Then all I'd have to do is send that recording to SynthJacker for slicing?

    Yes. But you will need to trim the silence from the beginning of the file with AudioShare’s editor, or you’ll get wonky results.

    Very true. Thanks for the tip.
    Here's the last question. Some synths have audio that is tempo-based. Should I set it to a lower tempo (say 70), or is 110 the default? I find 110 is always the setting in apps like GarageBand.

    It’s best to avoid anything like that when you sample synths if you can. Turn off such FX and modulations if you can or they can sound weird as heck when you use them in a sampler. Unless you use one sample for each note, every sample is going to be sped up or slowed down to make the pitch change from note to note. Those modulations are going to sound different on every sample. Even if that’s not noticeable, the modulations aren’t going to have any relation to the tempo in your DAW.

    It's probably fine for stuff like vibrato, but echoes and LFOs may end up giving unpredictable results.

    The MIDI file SynthJacker produces has a set tempo. I forget what it is, but I think it will import with that tempo in X2. Changing the playback tempo will screw up the slicing part, I think.

  • @wim said:
    The MIDI file SynthJacker produces has a set tempo. I forget what it is, but I think it will import with that tempo in X2. Changing the playback tempo will screw up the slicing part, I think.

    It's 60 BPM, because that way 1 beat = 1 second. Makes it easy/possible to calculate the note offsets when slicing.

    So if you do generate a MIDI file to be played back in a DAW, just import the MIDI file, and if your DAW asks to set the tempo from what it finds in the file, then let it. Then the timings will be correct.
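
The 60 BPM arithmetic above is simple enough to show directly (the function name is mine, not SynthJacker's actual code):

```python
# Why 60 BPM makes slicing trivial: seconds per beat = 60 / BPM, which
# is exactly 1 at 60 BPM, so a note starting on beat N begins exactly
# N seconds into the recording.

def slice_offset_seconds(start_beat: float, bpm: float = 60.0) -> float:
    """Offset in seconds of a note that starts on the given beat."""
    return start_beat * (60.0 / bpm)

assert slice_offset_seconds(5) == 5.0             # 60 BPM: beat 5 = 5 s
assert slice_offset_seconds(5, bpm=120.0) == 2.5  # faster tempo shifts it
```

Which is also why letting the DAW take the tempo from the imported MIDI file keeps the slice timings correct.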

  • @wim said:
    No, you have it right. SynthJacker doesn't support Core MIDI to apps or to Bluetooth, only to hardware.

    Actually, that's not true. I'm using it over Bluetooth without problems. The only caveat is that you need to make the connection with a different app (I usually use AUM), which can then be closed; the Bluetooth MIDI device remains available in SynthJacker. I'm using a Yamaha MD-BT01 interface.

  • @Keyb said:

    @wim said:
    No, you have it right. SynthJacker doesn't support Core MIDI to apps or to Bluetooth, only to hardware.

    Actually, that's not true. I'm using it over Bluetooth without problems. The only caveat is that you need to make the connection with a different app (I usually use AUM), which can then be closed; the Bluetooth MIDI device remains available in SynthJacker. I'm using a Yamaha MD-BT01 interface.

    Nice! Thanks for the correction!

  • edited March 2020

    @Keyb said:

    @wim said:
    No, you have it right. SynthJacker doesn't support Core MIDI to apps or to Bluetooth, only to hardware.

    Actually, that's not true. I'm using it over Bluetooth without problems. The only caveat is that you need to make the connection with a different app (I usually use AUM), which can then be closed; the Bluetooth MIDI device remains available in SynthJacker. I'm using a Yamaha MD-BT01 interface.

    That's so weird. I'm using the same Bluetooth module, and I tried connecting via midimitr and even Audiobus, but it didn't work.

    Sadly, I don't have AUM.

  • SynthJacker can use any Core MIDI output port; it just needs to be available somehow. Sadly, I don't have any Bluetooth MIDI device to test with...
