Audiobus: Use your music apps together.


With half an eye on the idiocy of my species

Out of the Omicron frying pan into the potential nuclear fire:

Comments

  • True insanity... all of the above and the ever-increasing climate crisis.

  • @Svetlovska do you have a track FX breakdown? Nice tonal landscape.

  • Interesting, there’s a lightness to this in the way its rhythms sit against the creep of the tones 👍👍

  • edited February 2022

    Hi, @Krupa, thank you. Lightness? I’ve obviously slipped up… :)

    @audiblevideo: thank you also. FX breakdown? Strap in. It’s quite a list…

    The whole track is built off a rhythmic sort-of ‘envelope following’ of a Borderlands loop I made external to AUM, then tracked with the rather wonderful Objeq physical modelling app. That gave the erratic ‘drum beat.’

    I then made multiple varispeeded loops of the same Borderlands loop, used A2M to derive MIDI sequences from them, captured the sequences in instances of Atom, and used those, with tweaked tempo and probabilities, to drive Phosphor 3 and Model 15 for bass pad and pulse.

    More percussive textures from my fave technique of Fractal Beat driven by Rozeta Particles with Zero Reverb on 100% mix, and from Skiid. Skiid is great, but the lack of probability settings within it leads to too much predictability, so I used Glitchcore to mix that up, and reverb to smooth out the gaps and clicks.

    On several channels I used a sample-and-hold LFO from Art Kern’s MIDILFOs to randomly switch out audio sources, with those channels bussed to partner channels hosting reverb. This prevents sounds from just stopping dead when the signal flow is interrupted by the LFO.

    I also used GateLab and Filterstep to chop up and modify several elements, including a field recording of a dot matrix printer.

    Then, shock horror, I actually ‘played’ Animoog Z. Well, I say played. Mashed the keyboard, locked to the scale that A2M had identified for me.

    Same trick using ScaleBud 2 for Quanta.

    My favourite ‘mastering’: Bark Filter triple band with compression switched on, and the FAC Bandit exciter.

    The usual shedload of ‘verbs…

    Finally, screen captured a performance of all this riding faders and mutes and manually triggering the Talker phrase. This quick and dirty way lets me capture reverb tails when I stop the AUM transport (oh how I wish there was a way for AUM to continue to record audio with the Transport stopped.)

    Then over to AudioStretch to extract the audio from the video.

    Job done.

    My, er, ‘method’ therefore embraces large elements of chance (the phasing audio loops, Atom probabilities, GateLab and Filterstep free-running on infinite, and so on), anchored by one or two predictable periodic elements, e.g. the Objeq ‘drums’, and an interactive building-out by me from a simple audio base, adding self-performing elements until it, er, sounds right. Then I stop and try to mix and capture the result.

    This, or some version of this, sometimes also incorporating auto-generative MIDI tools like Euclidean, Cykle, Zoa and so on, is how I usually do things. I never plan or record extended chord sequences myself, frankly because I do not have the knowledge or skills to work within a conventional song structure.

    This is both a blessing and a curse to me. If I did understand chord progressions, maybe the work would improve. Perhaps Scaler 2 can save me… :)
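The sample-and-hold switching described above (a clocked random value deciding whether a source is audible, with reverb on a partner channel carrying the tails) can be sketched outside any particular app. This is a minimal Python illustration of the general technique, assuming a simple clock-and-threshold model; it is not how MIDILFOs actually implements it:

```python
import random

def sample_and_hold_gate(num_steps, hold_steps, threshold=0.5, seed=None):
    """Clocked sample-and-hold: draw a random value every `hold_steps`
    ticks and hold it. The source is 'audible' (True) while the held
    value is above `threshold`, mimicking an LFO toggling a mute."""
    rng = random.Random(seed)
    gates = []
    held = 0.0
    for step in range(num_steps):
        if step % hold_steps == 0:        # clock edge: sample a new value
            held = rng.random()
        gates.append(held >= threshold)   # hold it until the next edge
    return gates

# Each True/False run lasts a whole hold period, so sources switch in
# and out at clocked intervals rather than flickering every tick.
pattern = sample_and_hold_gate(16, 4, seed=1)
```

The point of the hold is exactly the predictable-versus-random balance described above: the *choice* is random, but it changes only on clock edges.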

  • @Svetlovska said:
    Finally, screen captured a performance of all this riding faders and mutes and manually triggering the Talker phrase. This quick and dirty way lets me capture reverb tails when I stop the AUM transport (oh how I wish there was a way for AUM to continue to record audio with the Transport stopped.)

    Me too.

    I'm going to try one of these:
    Koala
    Loopy Pro
    Cubasis 3

    as the recording mechanism and just chop the silences in AudioShare at the last step. The winner will be the one that gets to AudioShare the fastest... but now that SoundCloud stopped supporting uploads from AudioShare, maybe I'll rethink my whole packaging step and look into "Neon". Hmm... does Neon sit in an FX slot in AUM? I need to work on loop editing.
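The "chop the silences at the last step" part is simple wherever it happens. A rough Python sketch of stripping leading and trailing near-silent samples from a mono buffer; the list-of-floats format and the threshold value are assumptions for illustration, not anything AudioShare exposes:

```python
def trim_silence(samples, threshold=0.001):
    """Strip leading/trailing samples whose magnitude is below
    `threshold`, keeping everything in between (including quiet
    reverb tails that stay louder than the threshold)."""
    start = 0
    end = len(samples)
    while start < end and abs(samples[start]) < threshold:
        start += 1
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    return samples[start:end]

audio = [0.0, 0.0, 0.2, -0.5, 0.05, 0.0]
trimmed = trim_silence(audio)   # -> [0.2, -0.5, 0.05]
```

Note that only the *outer* silence goes; gaps in the middle of the take survive, which is usually what you want for a performance capture.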

  • Sorry... great track @Svetlovska. It really illuminates your title in sonic terms.

  • @McD : hey, thanks for the listen and the comment. Never need to apologise for those… :)

    Yes, I feel your pain.

    I used to painstakingly set up every track in AUM to record simultaneously in Cubasis or another DAW, but it’s a very tedious process, takes me out of the moment, seems limited to 8 simultaneous tracks, and, because of the way I work, adding and deleting AUM channels in a very free-form way, no ‘template’-based solution really works for me.

    The audio capture would need to be in AUM itself - an option to have an audio-only ‘final mix’ out, independent of the transport, that you could manually toggle on and off would be the dream (along with fader and button automation, obvs), if @j_liljedahl fancies a challenge… :)

  • it’s not the ideal solution, but sometimes I just flick a sound source (synth, file player etc) to the left while the timeline plays on…

  • McD
    edited February 2022

    Neon looks like the best choice… you can use it in any AUM FX slot and hit the independent record controls. I’ll look for a way to have larger record buttons, maybe with Mozaic scripts. It has a lot fewer steps for playback versus AUM’s recording. It has a start button right next to the record dot. It’s currently $12.99, so I waited and missed the $10 sale last week. But it looks like the right tool for this use case, in addition to being a clip launcher, loop editor and more…

    Extra cool: using Warp, it can change BPM without changing pitch, unlike the file player. That will make the Session Band BPM less important in these collage-style creations, and it works on the iPhone too, for waiting-room sessions.

    Stereo display too, whereas Koala seemed to be mono only. It can load AUv3s in standalone. I’m pumped to explore and play.

  • edited February 2022

    @Krupa : Hm. Too many here for a manual flip, but that does get me thinking. I could bind all the channel outs to a MIDI note, say the bottom-most from AUM’s integral keyboard, bring it up and hit it to switch them all out at the same time… That might work… thanks for the suggestion, I’ll have to give it a try.

    @McD : I await your explorations with interest. :)
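The "one MIDI note flips every channel out" idea boils down to a single note-on message that all the mute bindings listen for. A minimal sketch of building the raw bytes; the channel and note number here are arbitrary placeholders for illustration, not AUM's actual mapping:

```python
def note_on(channel, note, velocity=127):
    """Build a raw 3-byte MIDI note-on message: status byte
    0x90 + channel (0-15), then note number and velocity (0-127)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# One keypress -> one message; every channel output bound to this
# note toggles at the same instant, so nothing drifts out of step.
msg = note_on(0, 0)   # e.g. the bottom-most key, note 0, on channel 1
```

The design point is that the simultaneity comes for free: all bindings react to the same message, so there is no per-channel timing to manage.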

  • Oh great, I worried it might be too obvious. I reckon your thought about automating it might work nicely; maybe even moves on the faders would be possible, a Mozaic script even…

    Neon might be the other answer, though I’m not sure that it’s 24-bit, if that’s an important factor…

  • @Krupa : does it sound like audio fidelity matters to me? :) I spend ages making tape wobbles, fluff on the needle, blown speaker noises heard through thick walls, in a rainstorm, whilst being interrupted at random intervals by something roaring in a cavernous basement about a mile away. So I can probably live with 16 bit… ;)

  • edited February 2022

    This is great! 👏👏
    I’m very impressed after reading the description of how you made it. Wow. In a way you’re sort of “coding” music, which I find extremely cool and intriguing. This kind of approach is like making some beautiful music from the “other part of the brain”, if that makes any sense…
    I do apply some elements of randomness, or letting the machine do its thing, but ultimately my “harmony” side of the brain is the boss. Whereas your approach is sort of the opposite… Am I getting into some bullshit theory here? 😅
    Excuse me in advance if I’m making the wrong assumptions for this next part… It seems to me like you’re overly worried about not knowing musical theory, scales and whatever.

    Then, shock horror, I actually ‘played’ Animoog Z. Well, I say played. Mashed the keyboard, locked to the scale that A2M had identified for me

    or maybe you’re just making a point about limiting the ”human harmony” factor, which is actually very cool. I suck at scales and theory; I just play by ear. Been doing so for decades. In my very personal opinion, adding “wrong notes” that sound good to you might make these pieces even better.
    Anyway, I’m probably overthinking! Great stuff, congrats!

  • 😁 Fair point, I didnae wish ta presume though 😄

    I’m mostly happy with 16-bit too; hell, I’ve just got a SID-based Euro module, so probably way less than that. Though sometimes the headroom of 24-bit in the recording can help with those massive dynamic shifts, I’m led to believe.

  • Magnificent! I listened to this over and over. After the 2nd or 3rd time, I stopped trying to analyze it and just went with it.

  • @Paulieworld said:
    Magnificent! I listened to this over and over. After the 2nd or 3rd time, I stopped trying to analyze it and just went with it.

    @Svetlovska productions display a mastery of the possibilities of iOS. It's very moving in ways that are not typical of music, but a bit more like a theatre of sound.

  • edited February 2022

    @tahiche : thank you for those kind words, the listen, and the theory. It’s a method born of necessity due to a lack of conventional musical skills, but I do like the way it has evolved over the last couple of years, and I think within the narrow genre parameters I choose/must operate in, I can discern some forward motion. It will be a chilly day in Abaddon before I produce a catchy 3-minute pop choon though, for sure. I fear I am cursed always to be a distant stranger to both harmony and melody…

    @Paulieworld : yes, I strongly encourage multiple listens. Tell all your friends! (Seriously, though: thank you very much.)

    @McD : I kind of like that phrase ‘theatre of sound’. :) Also flattered that you have proclaimed that there is such a thing as a ‘Svetlovska production’. Puts me on a par with George Lucas’s THX, or IMAX, or something. Also, ‘very moving in ways that are not typical of music’ sounds like my mission statement. (Whilst being sufficiently broad in definition to leave room for subjectivity. After all, a dodgy curry might also be said to be capable of being ‘very moving’, but not in ways you’d necessarily want to listen to! ;)

    Trust me, this is the best I can manage. For it to be in any way typical of music, I’d have to know some first…

    Thanks all for the great feedback. Careful - you might encourage me! :)

  • @Svetlovska Thank you for that VERY detailed breakdown. You have given me several ideas.

  • @audiblevideo : be sure to let us hear them when ready! :)

  • Art is in the ear of the be-listener. And I be listening. Keep making these tone poems.

    I hear art. It challenges me to accept that sound is more than music just as the visual arts are more than oil on canvas.
