Tutorial/Guide: Creating samples for a manually configured instrument

I tried to make my own (sample-based) instruments in many different ways. My last attempt was the app "SynthJacker" from "Conifer Productions Oy" - sadly, it never worked out for me. The app always crashed after sampling the notes. It didn't matter whether I let the app crop or not, normalize or not, or which AUv3 synth I tried - I could not get a single file to save. I really tried. I contacted them via Twitter but have had no response so far. That is why I came up with my own way. It is, I think, a straightforward approach that only needs apps most people already have anyway. So I thought: why not share the process?

To be clear - I like the idea behind SynthJacker and don't want to "hate" on it - but as I said, it never worked for me, whatever I tried.

Step 01: The sampling itself

Sampling itself can be done in many ways. What matters for this workflow is only the generated file. If that file looks like I describe later on, the rest of the tutorial should run smoothly - well, I hope. There is always something that can happen.

I am using the following apps for this process: AUM, Xequence, AudioShare, and whatever (software) synth is to be sampled as the instrument itself.
As the target for my instrument I use AudioLayer from VirSyn.

AUM: AppStore

AudioBus: AppStore

Xequence: AppStore

AudioShare: AppStore

AudioLayer: AppStore

bs-16i: AppStore

I am comparing some steps to the aforementioned "SynthJacker", as that was the result I originally wanted to achieve.

Xequence:

In Xequence I have set up the notes I want to sample like this:

I set up 6 "streams" because I want to sample at 6 different velocities.
The tracks are named accordingly and labeled with their configured velocity.

I want to sample every 3rd note, starting from C3 (including C3 itself).

This is an important step in the configuration for later on! We will come back to this. The main point: we have to sample a number of notes that is evenly divisible by 2 - or only a single note. And this is one of the main limitations: we have to sample each octave separately. Maybe in time, when the workflow changes, this will be different - but for now it is like this.

What is also important is the placement of the notes and the spaces between them. The actual note can be as long or short as you want your final sample to be - that doesn't matter. But you have to leave some space after each note for the fade-out, and the next note has to begin on the next full bar! We want clean, silent areas between the notes - that makes it much easier to split the individual samples later on.
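For reference, the note grid described above can be sketched in a few lines of Python. This is just a hypothetical helper, not part of the workflow; it assumes the convention where MIDI note 48 = C3 (some apps use 60 = C3) and the 3-semitone interval starting on C used in this example:

```python
# Sketch of the note grid: every 3rd semitone within one octave,
# starting at C3 (assumed here to be MIDI note 48).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def notes_for_octave(start_midi=48, interval=3):
    """Return the MIDI notes sampled within one octave (12 semitones)."""
    return list(range(start_midi, start_midi + 12, interval))

def note_name(midi):
    # Octave numbering per the C3 = 48 convention.
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

notes = notes_for_octave()
assert len(notes) % 2 == 0  # the workflow needs an even count (or a single note)
print([note_name(n) for n in notes])  # ['C3', 'D#3', 'F#3', 'A3']
```

With a 3-semitone interval you get 4 notes per octave - an even number, which satisfies the limitation mentioned above.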

The final step is to configure the binding of the note output/Xequence instrument to AUM - like this

You can also configure Ableton Link if you want to. As I mentioned: you can use whatever software combination you prefer - these apps are only examples, and I use them because I have them available :)

AUM

In AUM we set up our "instrument". This can be an external hardware device/synth or any AUv3. You can configure whatever you like - even using IAA or combinations with Audiobus. Be creative, or do whatever is needed.

As an example, I chose bs-16i for sampling with a velocity-sensitive instrument (a piano).

This instrument has to receive the "AUM Destination" that we feed from Xequence - that's why we configured AUM as the target inside Xequence.

Also make sure that you set the instrument channel to "REC" at the bottom of the channel strip. We definitely want the file with the played notes!

When this is all done, start the selected note section in Xequence and listen to check that everything is connected correctly. If so, you should hear the notes you want to record. After that you can begin to record the octave.

Recording that octave

Make sure that Xequence is stopped and the playback cursor is at the beginning of the note stream. In AUM, check again that the instrument is armed for recording.

Press the record button in AUM, switch to Xequence and hit the PLAY button.
The notes should be played and happily recorded by AUM.

After the last note of the chosen octave and velocity has faded out - when there is really nothing left to hear - you can stop both processes. Press stop in AUM and also in Xequence.

AudioShare

Verify the recorded file inside AudioShare: open the app and browse to the AUM folder that contains your project data. There you should find a file named after the channel number, the BPM, and a running number.

You should be able to see the number of notes you sampled - 4 in my case.
Rename that file according to the following scheme:
"instrumentname"_"velocityvalue".wav

So for example:
pianoexample_100.wav

Important is the UNDERSCORE between the name and the velocity value.
Do NOT include the note name in the filename itself, as in
"testname-c3_100.wav"
I know the files are named that way in my screenshots, but AudioLayer - our target in this example - interprets this as wrong information and auto-stretches the imported samples incorrectly! I just didn't want to redo all the screenshots - sorry for that!

If you need to write down the note/octave, put it in the folder name, for example.

Then tap "Tools" in the top right, choose "Trim & Fade", and in the bottom area tap "SNAP". Check "Snap to beat" and enter the tempo you used when recording your samples - you can easily read that value in the top left corner of AUM.

The beginnings of the individual samples should be perfectly aligned with the grid you see.

This might look like an awful lot of work, but it really isn't. The main work is done before you even begin: you have to define your notes inside your sequencer. I'll try to provide a simple starting file for this write-up to speed things up. The sampling process itself should take less than 5 minutes once you get used to it.
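If you ever want to do the slicing off-device instead of in AudioShare, the bar-aligned layout makes it easy to script. Below is a minimal sketch, assuming a 4/4 meter, a known BPM, and a hypothetical convention that each note occupies a fixed number of bars (as configured in Xequence above) - not part of the actual workflow, just an illustration:

```python
# Split a bar-aligned recording into one WAV per note.
# Assumptions: 4/4 meter, constant BPM, each note spans `bars_per_note` bars.
import wave

def split_on_bars(path, bpm, beats_per_bar=4, bars_per_note=2):
    """Write one WAV per note; returns the number of files written."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_bar = int(params.framerate * 60 / bpm * beats_per_bar)
        chunk = frames_per_bar * bars_per_note
        i = 0
        while True:
            data = src.readframes(chunk)
            if not data:
                break
            # e.g. "rec.wav" -> "rec_00.wav", "rec_01.wav", ...
            with wave.open(f"{path.rsplit('.', 1)[0]}_{i:02d}.wav", "wb") as dst:
                dst.setparams(params)
                dst.writeframes(data)
            i += 1
        return i
```

With the example above (4 notes, 2 bars each at the project tempo), this would produce 4 single-note files next to the source file - the same result as the manual AudioShare slicing in the next step.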

Comments

  • edited April 25

    Step 02: The slicing

    We can now start to extract the individual samples. As everything is nicely aligned to the grid, this is only a matter of minutes. Select the samples one by one, starting at the left and working towards the right.
    Important: do not jump around - left, right, middle! Keep it straightforward - one after another, from left to right.

    Just select each sample, tap "Save" in the top right corner, leave the name as it is, and hit "Enter" to save that sample. Repeat for every sample.

    You should end up with - in this case - 4 more files in addition to your source file, which still contains all 4 samples in one batch.

    Now create a folder and call it, for example, "step1", and while you are at it, create another one called "step2". We need 2 folders for this example. If you keep track of your files in a different way, use your own method - I will show it using 2 separate folders.

    Copy the newly created single-sample files to the folder „step1“ - like this

    That should be it for the sampling and organization of files for this octave. As I mentioned before, the steps and apps are interchangeable as long as the produced files are the same. All in all, you should have spent only a few minutes to create these files so far.

  • edited April 25

    Step 03: The naming convention

    To import the files into our target - AudioLayer - and let it auto-map the samples, we need to rename the files.

    I created a "Workflow" for Apple's Shortcuts app to do that automatically - so you do not have to fiddle with this on your own. In the best case you have ready-sliced samples for all your octaves and then let the workflow - well, I'll call it a script from now on, although it is not really a script - rename them for a convenient import.

    To get started, open the Workflow app. If you have previously uninstalled it from your device, you can get it here from the AppStore.

    The workflow itself should already be installed via this link. It is called "Rename Octaves".

    You can start it by tapping its "icon plate", or by tapping the "three dots" circle on its plate and hitting the "play" button at the top of the screen.

    Your files can live on your device, in your iCloud storage, or in an accessible Dropbox structure - the workflow can handle all three.

    When the workflow runs, you are presented with a file selection dialogue. Navigate to the folder you need - we choose the "step1" folder we created earlier inside AudioShare.

    Then tap "Select" - the second item from the top right. If there is no "Select", press "Cancel" and restart the workflow. Sometimes the file selector does not show that entry - that is not the fault of the workflow itself!

    Then select all files - remember, these are only the sliced samples and NOT the single file containing all the samples in one stream!

    After you have selected them all, hit "Open" at the top right.

    The workflow starts working and asks you a few questions. I decided to keep some manual steps to keep the number of problematic areas as low as possible.

    If all went well, you first get this confirmation dialogue. Basically it means some small internal checks passed - the number of selected files is OK (remember that I told you to have a number of files that is evenly divisible by 2, or just 1 single file). If you want to know what happens in detail, browse through the workflow itself. Proceed with a tap on "OK".

    Next we need to enter the octave number.

    I could have parsed that from the filename, but that would have meant even more naming conventions. So even if you did not note the octave inside the filename, we can proceed here.

    Next on the agenda is the target folder. As before, browse to the folder we called "step2" earlier. If you set it up like I described above, it is just below your "step1" folder - so no big browsing necessary.

    Click on the „Add“ button at the top right and wait a second.

    If all went well the workflow should close itself and go back to its main screen.
    You can now leave that app and verify the result in AudioShare, like this

    When you open the folder "step2" inside your AudioShare structure, you should see the correctly named files with the naming convention we need for the automated import into AudioLayer:
    "name-velocity-note.wav"

    For this automated naming process it is important to slice the single notes in the correct order inside AudioShare - as I noted above in that specific section.
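    The renaming the shortcut performs can be approximated in Python like this. This is a sketch, not the actual shortcut: it assumes the slices sort in left-to-right order, that they follow a hypothetical "instrument_velocity_nn.wav" pattern, and that each octave starts on C with a 3-semitone interval as in this example:

```python
# Sketch of the "Rename Octaves" logic: map ordered slice filenames
# onto the "name-velocity-note.wav" scheme AudioLayer expects.
# The input filename pattern "instrument_velocity_nn.wav" is assumed.
import os

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def rename_slices(files, octave, interval=3):
    """Return a mapping from old to new filenames for one octave."""
    if len(files) != 1 and len(files) % 2 != 0:
        raise ValueError("need a single file or an even number of files")
    mapping = {}
    for i, f in enumerate(sorted(files)):
        base = os.path.splitext(os.path.basename(f))[0]
        instrument, velocity = base.split("_")[:2]
        note = NOTE_NAMES[(i * interval) % 12] + str(octave)
        mapping[f] = f"{instrument}-{velocity}-{note}.wav"
    return mapping
```

    For four slices named "pianoexample_100_01.wav" through "pianoexample_100_04.wav" and octave 3, this yields "pianoexample-100-C3.wav", "pianoexample-100-D#3.wav", "pianoexample-100-F#3.wav", and "pianoexample-100-A3.wav" - which is why the left-to-right slicing order matters.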

  • edited April 25

    Step 04: Importing to AudioLayer

    After all this is done, we import our samples to build the final instrument.
    Open AudioLayer and create a new/blank instrument.

    By tapping "+ Instrument" at the bottom left you can enter a name for your new instrument.

    After pressing "New" in the top right you are presented with a blank canvas - tap anywhere inside this field to bring up the import dialogue.

    Here you choose the middle option - "Select sample" - and point the file dialogue to the folder we named "step2".

    Select all the samples you want to place inside your new instrument

    Then press the "Open" button in the top right.
    After that you'll get a dialogue that offers to auto-map the samples.

    Choose the middle option to map the samples by filename.
    Then watch the AudioLayer magic happen: right in front of you, your samples move to their specified destinations - hopefully.

    In the end it should look like this

    Important: press the "disk" icon in the top middle, where your filename is located. The current state is NOT saved until you do this!

    Then go ahead and play your newly created instrument.
    This example is not really fancy because we have a small number of samples and only 1 velocity, but if you prepare more note samples and different velocities the same way and store them inside your "step2" folder, you will have a fully mapped multi-velocity instrument when you import them all.

  • edited April 25

    Step 05: Finish and stuff

    Well, that looks massive, I know - but it really isn't.

    This is my personal workflow which, after following several discussions here in the forum, I wanted to share. I think it can help people understand the process and maybe get some use out of it.

    This grew out of my attempt to work through this process with SynthJacker, which never got me to the finish line. Maybe one day, as the versions iterate.

    One important thing for me was that all of this can be done on iOS only. I have my MacBook Air here, no problem, but I barely switch it on since I moved my "non-musical" workflow almost completely to iOS.

    This is all up for discussion and collaboration.
    I am open to ideas and everyone can participate.

    I hope I jotted this down well enough for some interested people... if not, I now have a personal documentation of the process, so nothing is really wasted.

    Now go and #roast me, or not :)

  • edited April 25

    Here is the workflow: iCloud Shortcut

    Simple Xequence octave from the tutorial: Dropbox

    And the Links again:

    AUM: AppStore
    AudioBus: AppStore
    Xequence: AppStore
    AudioShare: AppStore
    AudioLayer: AppStore
    bs-16i: AppStore

    Workflow App from Apple: AppStore

  • Most Epic Post.
    Thank you for this.
    B)

  • Thank you 😊👍

  • McDMcD
    edited April 26

    HUMOR ALERT - DO NOT TAKE OFFENSE

    This is good, but can you make a video of this showing your hands on the little buttons?

    Better yet. Fly to where I live and use my iPad to do it?

    Better yet bring your iPad because I don't own these Apps.

    In fact I don't own an iPad but I saw one at the Mall.

    OK. I saw one on TV.

    So... bring the iPad.

    One quick question: What exactly is a sampler? No one seems to agree.
    Does it need an iPhone or iPad? Because they sell them at the Mall in the
    candy store.

    OK. HUMOR OFF. Beautiful job. We should put a link in the Knowledge Base
    under "How Can I Sample My Stuff?"

  • Damn, what a great post! Thanks!

  • @MrBlaschke said:
    I tried to make my own (sample based) instruments using many different ways. My last way was using the app „SynthJacker“ from „Conifer Productions Oy“ - sadly it never worked out for me. The app always crashes after sampling the notes. It doesn’t matter if i let the app crop or not crop, normalize or not normalize. Didn’t matter which AUv3 Synth i tried - i could not get one file to save. I really tried. I contacted them via Twitter but had no reaction until now.

    Hi, sorry to hear about that -- I would like to work with you to find out what is causing the crashes, as that is definitely not a common experience (ask @McD for example).

    I didn't see anything on Twitter, but it might be best anyway if you send feedback by e-mail to synthjacker (at) coniferproductions (dot) com, and include any symptoms and other information (you can also do it straight from the bottom of Settings menu of SynthJacker).

    It will be interesting to dig into the process you describe and see if SynthJacker would (as I theorise) be able to streamline it somewhat. Then again, your method may well produce better results.

  • @coniferprod said:

    @MrBlaschke said:
    I tried to make my own (sample based) instruments using many different ways. My last way was using the app "SynthJacker" [...]

    ask @McD for example.

    We have been in contact to share ideas.

    I suspect the failures to auto-sample happen when the whole recording process exceeds 13 minutes or so. I elected to reduce either the number of notes sampled or the number of distinct velocities (i.e. layers) to stay productive and fill my sampler with solid instruments from my AUs.

    If a sample run crashed, I'd reduce the settings until it would run to a useful file of samples. Crash? Drop some setting value to reduce the run time. I'm probably still making sets that have more samples than I really need. I tend to drop the really high velocities because I just don't bang the keys that hard, so having the sampler play a velocity-80 sample at 100 works for me. There's also complex gain staging at play, and loud samples can show up as distorted and useless results. I love the nuance of the 20-40 velocity pianos in the $30-50 apps. That's what I paid to hear when I play alone with headphones, so cloning that tone and light touch makes a Moonlight Sonata or Clair de Lune shine. Banging out Jerry Lee Lewis reduces most pianos to whichever has the most highs.

    I SHOULD HAVE MADE IT CLEAR: my instruments tend towards 100 samples total, so most have 4 layers of 26 notes. Pushing for more just got frustrating.

    I use 5-second samples with 2-second decays and 1-second gaps = 8 seconds per sample.

    I like to sample from A0 to C6 using an interval of 3 (half steps), which adds up to 26 samples. 26 × 8 seconds = 208 seconds, or about 3 1/2 minutes per layer.
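    The run-time arithmetic above can be sanity-checked with a tiny helper (using only the values quoted in this comment; the function name is made up for illustration):

```python
# Per-sample time = note length + decay + gap, times the number of notes.
def layer_seconds(n_notes, note_s=5, decay_s=2, gap_s=1):
    return n_notes * (note_s + decay_s + gap_s)

print(layer_seconds(26))  # 208 seconds, i.e. about 3.5 minutes per layer
```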

    Hand slicing 200 or so samples would drive me crazy on a touch screen.

    It's possible there are other variables of concern, but I'd start here: change one thing at a time (to reduce recording time) until you get a valid folder of samples using SynthJacker. Load the results into AudioLayer. If you need more layers, do another run with some extra velocity settings, import those into the first run's instrument, and use a group "grab" of a layer to adjust the velocity high and low boundaries across the 88 keys.

    It would be fun to make some sample sets that approximate the 400-800 samples of the best pianos.

    For any developer, fixing bugs is easiest when the bug can be reproduced. So defining the failure cases is the right way to start a process that fixes the right problem.

  • Well, if the total length of the autosampling time is way over 10 minutes, that translates to a lot of disk space being required, plus a proportional, additional amount for the post-processing steps. Most of that is temporary space which will be reclaimed either by iOS or by SynthJacker itself when it does a cold start.

    It may be that some more robust checking of failures and/or a pre-flight check of recording time vs. free space would be required.

    I'll try to work this out when I get more details. I just checked Crashlytics, and didn't even see any crashes in the last few days, but that may be a reporting delay. Anyway, I don't want to hijack this very informative thread with SynthJacker problems more than I've already done.

  • Thanks all for the nice comments.

    @coniferprod @McD
    Yes, we had contact about this. I did send you details via PM to get that working :smile:
    Thank you for coming back to my cry for help.

  • Nice work

  • Nice tutorial, and kudos to @coniferprod for following up with users on the forum to work out what's going on with SynthJacker so it will work more predictably with longer sample times.

  • Just wanted to let you all know that the root cause of the problems @MrBlaschke was initially having with SynthJacker has been resolved in the 0.5.4 update. Thanks for working with me with that one!
