The End of Times: 9, Ghosts of a Green Meadow

9th draft is done. One to go.
That one changed a lot because I wanted to include that Soundpaint flute. Originally the part with the flute was quite a bit different, more "violent".

Comments

  • edited November 24

    I can't believe no one is commenting on these tracks of yours... Too busy eating stuffed turkey, maybe?
    Wondering how this one sounded with the more violent flute!

    You're doing an incredible job with this series, Joseph. I'm not sure what you're referring to, but this End Of Times series strongly resonates with me and reminds me of the terrible environmental challenges awaiting humanity over the next 50 years.
    I really love your creations. Keep them coming!

  • Love the dark atmosphere. The soundtrack to a terrifying movie, or indeed the End Of Times. 😮

  • Thank you so much @JanKun, @richardyot.

    I was thinking of a more violent flute, but I've only had Soundpaint for a couple of days; I need to understand how it works and what I can do with it, so in that recording the flute is pretty basic.

    What I'm referring to would take a book to describe, and I'm terrible at writing books; instead I can manage to make some music, in the hope someone will get it one way or another.

  • edited November 24

    @JanKun said:
    I can't believe no one is commenting on these tracks of yours...

    Maybe if I was making 4/4 110bpm I - V - vi - IV intro-verse-chorus-verse-chorus-bridge-chorus-outro with sounds people are used to hearing, people would love it more?
    I don't know.

  • edited November 24

    @jo92346 said:

    @JanKun said:
    I can't believe no one is commenting on these tracks of yours...

    Maybe if I was making 4/4 110bpm I - V - vi - IV intro-verse-chorus-verse-chorus-bridge-chorus-outro with sounds people are used to hearing, people would love it more?
    I don't know.

    Actually I think there’s very little of that on the forum; the genres that people post here are wide-ranging, and mostly instrumental. There’s EDM, ambient, jazz, the occasional hip-hop-ish beat, some orchestral, etc.

    And maybe that’s why sometimes stuff falls below the radar: birds of a feather flock together, so if your stuff is unusual and not easily categorised then there might not be enough like-minded people to comment.

    But anyway I wouldn’t take forum attention as meaningful in any way; I’ve seen amazing tracks that get zero comments a fair few times over the years. Sometimes it’s just random.

  • @richardyot said:
    But anyway I wouldn’t take forum attention as meaningful in any way

    I don't :-)

  • @richardyot said:

    @jo92346 said:

    @JanKun said:
    I can't believe no one is commenting on these tracks of yours...

    Maybe if I was making 4/4 110bpm I - V - vi - IV intro-verse-chorus-verse-chorus-bridge-chorus-outro with sounds people are used to hearing, people would love it more?
    I don't know.

    Actually I think there’s very little of that on the forum; the genres that people post here are wide-ranging, and mostly instrumental. There’s EDM, ambient, jazz, the occasional hip-hop-ish beat, some orchestral, etc.

    And maybe that’s why sometimes stuff falls below the radar: birds of a feather flock together, so if your stuff is unusual and not easily categorised then there might not be enough like-minded people to comment.

    But anyway I wouldn’t take forum attention as meaningful in any way; I’ve seen amazing tracks that get zero comments a fair few times over the years. Sometimes it’s just random.

    I agree with Richard! The range of music that can be heard in the creation sections is pretty wide, which is a good thing, and, thank God, the quality of music cannot be judged by the number of people commenting on a forum. But I am surprised that your latest End Of Times didn't receive more attention, because it sounds really great. There are plenty of great producers on the forum, but to me you're probably the one producing the most organic soundscapes I've heard. So far, everything I've heard from you is really cohesive, dense, immersive, and very evocative. And the fact that, to achieve this, you are manipulating a bunch of samples on your iPad in NS2 and AudioLayer makes this even more intriguing, and could potentially interest a lot of iOS music producers...

    I like all kinds of music, which also includes the kind of song structure you described. Despite having been used millions of times, it's nonetheless one of the most efficient musical structures. And it is actually difficult to create a good, memorable and original song with this structure. I also believe that your great sound design and production skills applied to this kind of song could very probably sound extremely good and unique!

    Keep the good work coming!

  • It's actually easier to do it on the iPad in NS2 and AudioLayer (and now Neon audio editor for the flute, which I can't play on the iPad):
    just the usability with pencil/finger is a HUGE improvement over the mouse-and-keyboard combo. If only for drawing the CC curves, it's a monumental improvement.

    For the lack of a big screen, I'll have to buy the iPad Pro someday. I tried connecting the iPad to my monitor, but first, it sucks, and second, it totally defeats the purpose of a small portable music station.
    I admit I don't use the mini Akai here; I will on the go, so I keep using my piano.

    I was sometimes a bit limited in the number of tracks before getting artefacts and glitches, but I can now work around the missing freeze option in NS2 by exporting the audio and reimporting it in Neon, so that is not an issue anymore. Anyway, the lack of processing power/storage forces me to streamline my work, optimise, and find new creative shortcuts.

  • @jo92346 I would be curious to hear more about your process, if you have time to explain it in more depth.

  • edited November 25

    @richardyot :smile:
    If you're asking about inspiration, I just couldn't really explain it in an efficient way that would fit here.
    BUT, technically it's very simple and straightforward, really nothing fancy at all. I suppose most composers work more or less the same way. Only the first step is sometimes harder: transcribing the music in my head.

    So I have the music in my head: I write it down as fast as possible before it vanishes or before I'm banging my head on the walls (you know, like when you have a silly song that won't leave you and really drives you crazy? Well, it can get much, much worse with me when I have new music in my head). Paper and pen, or iPad and pencil: I take note of the kind of sound/instrument, textures, etc. It's the hardest part when that music is like a dream you can remember but can't really remember. When I'm lucky, I can write down the basic score. Sometimes I end up with just the harmony and vague feelings, so I try to imagine the basic orchestration that could go with that. Or I sit at the piano and play to find something that will fill the holes.

    I select the sounds I need, and create the patches in Obsidian for sounds with sweeps or CC-controlled layers, and in AudioLayer for percussive sounds / velocity-controlled layers. It took time at first, but my patch library is quickly getting bigger.

    Then I just start playing what I wrote, or should I say freely interpreting what I wrote, because more ideas can come whenever they want to come.

    When all the tracks are recorded, I overdub or replay what needs to be corrected, and then I add the reverb, make a basic mix and a rough master. In case of audio glitches because too much is happening at the same time, I rewrite or adjust the patches (for example, instead of 3 layers crossfading with a CC, if the blend is always the same, I bake it into a new sample and create a new patch with only one layer). But in the end, the real problem is the reverb. That thing is killing the iPad so much that I tried to dust off my old Lexicon and use it. It proved to be super impractical: exporting each track through the audio out, through the Lexicon, back into a new recording in Neon audio editor, and then back again into NS2.

    That is gonna be the basis for what is, for me, the real work on that piece.

    Sometimes I play it and add, correct or remove things in real time or with the piano roll; sometimes I play it on my big sound system, sit, and write down corrections that I will later make in NS2. That is where I spend most of my time on a piece: finely adjusting things like velocity and note start, which apparently I'm the only one to hear. I can get really anal about that stuff. I keep all the intermediate versions: most of the time between 20 and 40 versions...

    Then a couple of days later, I play it and realize I'm not happy and need to adjust more things. That is the trap I keep falling into.

    Posting on YouTube as soon as I'm ready to work on the next thing helps a little, and eventually the final publishing will make the piece uneditable.
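The layer-baking trick described in the workflow above (rendering a fixed CC crossfade between sample layers into one new sample, so the patch only needs a single layer) can be sketched roughly like this. This is a minimal illustration, not NS2 or Obsidian code; the equal-power blend law and the NumPy rendering are assumptions standing in for whatever tool actually does the bounce:

```python
import numpy as np

def bake_crossfade(layers, cc_curve):
    """Render layers blended by a fixed CC curve into a single sample.

    layers:   list of equal-length mono float arrays (the patch layers)
    cc_curve: per-sample control values in [0, 1], same length as each layer
    """
    layers = np.asarray(layers, dtype=float)      # shape (n_layers, n_samples)
    cc = np.asarray(cc_curve, dtype=float)
    n_layers, n_samples = layers.shape
    # Map the CC value to a fractional position across the layer stack.
    pos = cc * (n_layers - 1)
    lo = np.clip(np.floor(pos).astype(int), 0, n_layers - 2)
    frac = pos - lo
    # Equal-power crossfade between the two adjacent layers.
    gain_hi = np.sin(frac * np.pi / 2)
    gain_lo = np.cos(frac * np.pi / 2)
    idx = np.arange(n_samples)
    return gain_lo * layers[lo, idx] + gain_hi * layers[lo + 1, idx]
```

Played back through a one-layer patch, the baked file costs the sampler a single voice instead of three.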

  • @jo92346 thanks for the walkthrough. So are you creating sample instruments in Obsidian using your own samples?

    And for the reverb, can you not send multiple tracks to an AUX send in NS2? I would have thought that would save a lot of CPU processing.

  • My

    I use my samples, Pianobook samples, Labs samples (there's no straightforward way to do it with Labs; I do it as if I was recording a real instrument), and very few paid Kontakt libraries. I work on each sample individually to save processing power: I mostly use the software to set start and end, denoise, bias, some effects, and sometimes even envelopes. If I do it at the sample level, Obsidian and AudioLayer don't have to do it. Maybe it's a placebo effect, but I enjoy doing that anyway.

    I do exactly that in NS2 with the reverb, but I wish there was a simple way to use the Lexicon. Maybe with a big external audio interface that would be possible, but again, that defeats the “simple system” thing.
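The per-sample preparation described above (trimming and levelling each file offline so Obsidian and AudioLayer have less to do at play time) might look something like this minimal sketch; the -60 dB silence threshold and 0.9 peak target are illustrative assumptions, not the actual settings used:

```python
import numpy as np

def trim_and_normalize(signal, threshold_db=-60.0, peak=0.9):
    """Cut leading/trailing silence and normalize a mono sample.

    Doing this once, offline, means the sampler never has to skip
    dead air or compensate for level differences while playing.
    """
    signal = np.asarray(signal, dtype=float)
    threshold = 10 ** (threshold_db / 20)          # dB -> linear amplitude
    loud = np.flatnonzero(np.abs(signal) > threshold)
    if loud.size == 0:
        return signal[:0]                          # all silence
    trimmed = signal[loud[0]:loud[-1] + 1]
    return trimmed * (peak / np.max(np.abs(trimmed)))
```

Run once per file at import time, every patch layer then starts exactly on the transient and sits at a predictable level.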

  • @jo92346 said:
    My

    I use my samples, Pianobook samples, Labs samples (there's no straightforward way to do it with Labs; I do it as if I was recording a real instrument), and very few paid Kontakt libraries. I work on each sample individually to save processing power: I mostly use the software to set start and end, denoise, bias, some effects, and sometimes even envelopes. If I do it at the sample level, Obsidian and AudioLayer don't have to do it. Maybe it's a placebo effect, but I enjoy doing that anyway.

    I do exactly that in NS2 with the reverb, but I wish there was a simple way to use the Lexicon. Maybe with a big external audio interface that would be possible, but again, that defeats the “simple system” thing.

    Thanks! I also make a fair few samples for Obsidian (and Slate), mainly because I really like working with the NS2 sequencer. Obsidian is pretty good for sample mangling as well.

  • Yes it is. I really love Obsidian. I wouldn't be unhappy if it had 2 more layers and oscillators, though... Working with samples and oscillators is a bit like the old days, when you had short samples for the transients and attacks, a short loop for the harmonics, and the rest of the sound was hybrid synthesis.

  • Thank you for sharing! Very instructive and inspiring! It seems I should reopen NS2. I remember the sequencer and MIDI were very smooth.

  • NS2 is just perfect for me: the MIDI editing is robust, the CC matrix is great, the export goes up to 96 kHz / 32-bit, Obsidian seems simple but the controller matrix is insane (you can control absolutely any parameter with a chain of controls), and it can run the few plugins I like. It is also super inexpensive.

    However, it lacks audio tracks, and some people really need that.

  • Only the first step is sometimes harder: transcribing the music in my head.

    So I have the music in my head: I write it down as fast as possible before it vanishes
    Then I just start playing what I wrote, or should I say freely interpreting what I wrote, because more ideas can come whenever they want to come.

    Thanks for explaining how you work. I've been listening to all of these now, and I'm impressed with your orchestrations. Reading through your method, it's amazing you seem to work so quickly. So are you first writing in notation, and then playing this into MIDI timelines? I'm not sure how much time you actually spent on these. For me it would take months.

  • edited November 27

    @Stochastically: Yes, that is what I do. I started The End of Times project when I posted Misery on Nov 12th.
    I have a lot of free time; I can work on my music 13 hours a day. I just can't stop.
