Mozaic: a sneak-peek at my new project

Comments

  • @brambos said:

    @rs2000 said:

    @brambos said:

    @rs2000 said:
    About a year ago, I suggested to Nic (the MidiFire and StreamByter developer) that he add UI controls for manipulating variables in MIDI scripts live.
    I'm somewhat surprised, but I definitely welcome Bram now having his own go at it, complete with a more "human-friendly" coding language and a MIDI clock engine. :+1:

    I’m not including a ‘MIDI clock engine’ as such. But you can respond to clock messages (if the host passes them on to plugins in the first place - which not many do, as far as I can tell). I’m not even sure how useful MIDI clock is in an AUv3 chain - that’s a task for the host in normal cases. But it’s just MIDI, so if you can receive it you can handle it.

    OK, you're right - it's more like a clock generator for use in a script that runs in sync with the host transport and BPM.
    I'm just wondering what the most accessible syntax could be for building arps, note doublers/shufflers/repeaters, and simple, controllable generative sequencers - somehow you have to refer to a timing reference in order to send MIDI messages at the correct time.

    I have several methods for that:

    The programmable Metronome generates a pulse at a sync rate of 1-384 PPQN (still debating the max rate). The pulse generates an event that you can assign a subscript to:

        @OnLoad
          SetMetroPPQN 4 // generate 16th note pulses
          SetMetroSwing 0 // you can optionally add a swing to the metronome
        @End
        
        @OnPulse
          // create your Arp here... this event will be triggered every 16th step in sync with the host
        @End
    

    In a similar way, you can set up a sample-accurate timer which can run independently of the host's tempo (you define the timer interval in milliseconds and it will stay constant even if the host's tempo changes).
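
    A minimal sketch of what that could look like, in the style of the metronome example above (the SetTimerInterval, @OnTimer and SendMIDICC names here are guesses for illustration, not confirmed pre-release syntax):

        @OnLoad
          SetTimerInterval 250 // fire every 250 milliseconds
        @End

        @OnTimer
          // runs every 250 ms, sample-accurately, even if the host tempo changes
          SendMIDICC 0, 1, (Random 0, 127) // e.g. wiggle the mod wheel
        @End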

    That last bit = 😍

  • @brambos I love the mutate feature in your apps and there's really no good AU MIDI FX for adding some generative variety to sequences. Might you consider a preset that gets the kids at the back of the room started?

  • @lukesleepwalker said:
    @brambos I love the mutate feature in your apps and there's really no good AU MIDI FX for adding some generative variety to sequences. Might you consider a preset that gets the kids at the back of the room started?

    I could imagine a nice script that "listens" to your melody and/or chords for a while; then, when you hold one of the pads, it adds rhythmic variation, another pad will play inversions, a third pad will add "blue notes", and the fourth pad will add short flams here and there to spice up the melody ;)
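
    A rough skeleton of that idea, in the style of the earlier snippets (the @OnPadDown event and LastPad variable are assumptions for illustration, not confirmed syntax):

        @OnPadDown
          if LastPad = 0
            VaryRhythm = 1 // nudge the captured note timings
          else if LastPad = 1
            PlayInversions = 1 // re-voice the captured chords
          else if LastPad = 2
            AddBlueNotes = 1 // sprinkle in flattened thirds and fifths
          else
            AddFlams = 1 // occasionally add short flams
          endif
        @End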

  • Nice! Looking forward to this!

  • @brambos said:

    @MonkeyDrummer said:

    @aplourde said:
    @brambos

    And with built in LFOs the $64K question is how slow do they go?! That picture shows "Super Slow LFO" and the LFO speed is, I'm presuming, 0-127 * 6000 (or 0-1 * 6000). So what does that represent: 0-6000 seconds?! Hoping @MonkeyDrummer and I will have reason to celebrate!

    That’s getting close, but you still don’t need stop motion photography to see it move.

    I’ll make it a point to allow you to create a 24 hour LFO.

    That will be good for a six pack!

  • Trying to get my head around all the use cases for this but I have faith in @brambos and will be eagerly following along.

  • :) Nice

    On your Medium page you already mentioned nested loops and arrays - are user-defined functions also in the scope of the language?

  • @_ki said:
    :) Nice

    On your Medium page you already mentioned nested loops and arrays - are user-defined functions also in the scope of the language?

    No user functions for now, but lots of different events to tap into.

    Also compound conditional statements:

    if (HostRunning and HostBeat = 0) or HostTempo < 80
      SetKnobValue 4, 64 // set knob 4 to its midpoint
    else if HostBar > 0
      FlashPad (Random 0, 3) // flash a random pad (0-3)
    endif
    

    So plenty of possibilities for structured code.
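
    For example, combining these conditionals with the nested loops and arrays mentioned on the Medium page might look something like this (a sketch - the for/endfor loop and Notes array syntax are assumptions, not confirmed):

        @OnPulse
          for i = 0 to 3
            if Notes[i] > 0
              SendMIDINoteOn 0, Notes[i], 100 // channel, note, velocity (assumed argument order)
            endif
          endfor
        @End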

  • Case statements?

  • I will post the programming guide when it’s done.

  • Very awesome!!! I always wanted to build a Lego-like, building-blocks approach to MIDI tooling... something like Scratch (https://scratch.mit.edu), but for MIDI composition and routing.

  • Not sure if inventing a whole new programming language, along with all the "rat tail" it involves, is/was worth it. Why not just embed a WKWebView and let users use a widespread language, i.e. JavaScript (along with an already existing, extremely optimized JIT etc.)? Just a thought :) I'm normally no "standard(s) guy", but developing a whole language, including an interpreter etc., for such a specialized one-time use would've been overkill for me.

  • @SevenSystems said:
    Not sure if inventing a whole new programming language, along with all the "rat tail" it involves, is/was worth it. Why not just embed a WKWebView and let users use a widespread language, i.e. JavaScript (along with an already existing, extremely optimized JIT etc.)? Just a thought :) I'm normally no "standard(s) guy", but developing a whole language, including an interpreter etc., for such a specialized one-time use would've been overkill for me.

    Valid questions, but I thought my strategy through before I started ;)

    1. The virtual machine runs on the audio thread (because I want all events to be sample-accurate). Third-party interpreters can't run on realtime threads because they are unsafe: they're full of locks, dynamic memory allocations, etc. So there was a clear need to "roll my own".

    2. Because I build my own, I can optimize the living shit out of my interpreter. I'm applying DSP-grade optimizations in my parser and interpreter, and I can selectively choose where to make compromises in order to speed up the engine.*

    3. I really didn't want to use JavaScript because this is meant to be a gentle entry point for non-coders. JavaScript, with its object-oriented foundation and beginner-unfriendly aspects like case sensitivity, did not meet my needs. I want to keep the core language structure simple, so people with no prior experience don't need to learn dozens of coding patterns before they can get a little bit productive.

    4. Building the virtual machine is the fun part. I love me a challenge, and designing my own language with interpreter is the best fun I've had in ages - I even enjoy writing the manual! B)

    *) For example: because this is a music application, I can prioritize seamless playback and graceful, best-guess error handling over “breaking and informing” when exceptions happen. Lots of subtle music-specific design decisions to be made.

  • @brambos said:

    @SevenSystems said:
    Not sure if inventing a whole new programming language, along with all the "rat tail" it involves, is/was worth it. Why not just embed a WKWebView and let users use a widespread language, i.e. JavaScript (along with an already existing, extremely optimized JIT etc.)? Just a thought :) I'm normally no "standard(s) guy", but developing a whole language, including an interpreter etc., for such a specialized one-time use would've been overkill for me.

    Valid questions, but I thought my strategy through before I started ;)

    1. The virtual machine runs on the audio thread (because I want all events to be sample-accurate). Third-party interpreters can't run on realtime threads because they are unsafe: they're full of locks, dynamic memory allocations, etc. So there was a clear need to "roll my own".

    2. Because I build my own, I can optimize the living shit out of my interpreter. I'm applying DSP-grade optimizations in my parser and interpreter, and I can selectively choose where to make compromises in order to speed up the engine.*

    3. I really didn't want to use JavaScript because this is meant to be a gentle entry point for non-coders. JavaScript, with its object-oriented foundation and beginner-unfriendly aspects like case sensitivity, did not meet my needs. I want to keep the core language structure simple, so people with no prior experience don't need to learn dozens of coding patterns before they can get a little bit productive.

    4. Building the virtual machine is the fun part. I love me a challenge, and designing my own language with interpreter is the best fun I've had in ages - I even enjoy writing the manual! B)

    *) For example: because this is a music application, I can prioritize seamless playback and graceful, best-guess error handling over “breaking and informing” when exceptions happen. Lots of subtle music-specific design decisions to be made.

    OK, mostly valid points - sorry, I've been talking a bit out of my backside ;) I'm a bit quick sometimes...

    I agree with everything except maybe point 3) - JavaScript can really be pretty beginner-friendly, as you could provide a "runtime library" that simplifies creating the event handlers and other stuff from your examples with a very simple syntax. But yeah, the realtime aspect probably is very important, and if you're having fun while at it, then it's all good :)

  • I hope the straightforward, efficient design elements found in other brambos apps will also translate well to the language created for Mozaic.

  • This looks excellent, and is something I have been trying to find: a flexible MIDI controller app. It seems to have the right balance of sliders and pads. I wonder if envelopes might find their way into a future version?

  • @craftycurate said:
    This looks excellent, and is something I have been trying to find: a flexible MIDI controller app. It seems to have the right balance of sliders and pads. I wonder if envelopes might find their way into a future version?

    With both constant and BPM-dependent timers, I don't see any reason why that shouldn't be possible with Mozaic from day one. You could also do a lot with Lemur, but that's a standalone app with no AUv3 support.
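
    A sketch of how an envelope could be built on top of the millisecond timer described earlier (SetTimerInterval, @OnTimer and SendMIDICC are assumed names, as in the earlier sketches):

        @OnLoad
          SetTimerInterval 10 // 10 ms envelope resolution
          Level = 0
        @End

        @OnTimer
          if Level < 127
            Level = Level + 1 // simple linear attack ramp, roughly 1.3 s from 0 to 127
            SendMIDICC 0, 74, Level // e.g. sweep filter cutoff (CC 74)
          endif
        @End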

  • Definitely not going to buy this...I don't need more coding in my life...
    Who am I kidding LOL, gimmmeeeee :D

  • @brambos said:

    @rs2000 said:
    Scripting AUv3 audio plugins would be something :smiley:
    Something like MobMuPlat but with AUv3 support.

    I'm not convinced of an urgent need for something like that right now. The market for synths and effects is getting pretty saturated already.

    And I vividly remember the early 2000s, when the Windows VST market was flooded with vanilla subtractive SynthEdit synths. shudder

    That's not a monster I want to create :D

    Ahhhhhh good times good times

  • @brambos said:

    Valid questions, but I have thought my strategy through before I started ;)

    1. The virtual machine runs on the audio-thread (because I want all events to be sample-accurate). Third party interpreters can't run on realtime threads because they are unsafe. They're full of locks, dynamic memory allocations etc. So there was a clear need to "roll my own".

    2. Because I build-my own, I can optimize the living shit out of my interpreter. I'm applying DSP-grade optimizations in my parser and interpreter and I can selectively choose where to make compromises in order to speed up the engine.*

    3. I really didn't want to use JavaScript because this is meant to be a gentle entry for non-coders. JavaScript with its object oriented foundation and beginner-unfriendly aspects like case sensitivity did not meet my needs. I want to keep the core language structure simple, so people with no prior experience don't need to learn dozens of coding patterns before they can get a little bit productive.

    4. Building the virtual machine is the fun part. I love me a challenge, and designing my own language with interpreter is the best fun I've had in ages - I even enjoy writing the manual! B)

    *) for example; because this is a music application I can prioritize seamless playback and gracefully handling errors with best guesses over “breaking and informing” when exceptions happen. Lots of subtle music-specific design decisions to be made.

    As previously indicated, I'll be all over Mozaic on release. But I do have a concern about attempting to solve everything on the audio thread and, you've guessed it, it's parallel processing.

    Having used Max on the desktop for many years, I'm aware how easy it is to create a system that's inflexible in the modern parallel-processing world. I continue to use Max within Ableton Live, as Max locks only the channels with IO requirements to the M4L device in play to a single thread; Ableton is still able to provide its multithreaded goodness to the rest of the set.

    I acknowledge that everything is locked to the audio thread in iOS at the moment, but I'd expect that situation to be solved at some point over the next few years. I'm hoping that Mozaic is flexible enough to adapt to those changes (if they come to pass).

  • @jonmoore said:
    I acknowledge that everything is locked to the audio thread in iOS at the moment, but I'd expect that situation to be solved at some point over the next few years. I'm hoping that Mozaic is flexible enough to adapt to those changes (if they come to pass).

    I'm building it 100% according to how Apple dictates stuff to be implemented for Audio Units. If they're going to change their internal architecture that's something they'll need to sort out. In AUv3 MIDI, all MIDI is handled on the audio thread. There is nothing optional for developers about that.

    I'm not worried about it, so users shouldn't be either :)

  • @brambos said:

    I'm building it 100% according to how Apple dictates stuff to be implemented for Audio Units. If they're going to change their internal architecture that's something they'll need to sort out. In AUv3 MIDI, all MIDI is handled on the audio thread. There is nothing optional for developers about that.

    I'm not worried about it, so users shouldn't be either :)

    It's fine for you not to be worried about it. But as a user, I believe it's an important consideration that everything isn't restricted to a single thread (especially for automation/MIDI CCs, where sample accuracy isn't a real consideration outside of iOS). You mentioned in our other discussion on threading that iOS DAWs shouldn't be judged as DAWs with mobility, since that need is already fulfilled by laptops. But Apple markets iPad Pros as laptop replacements, and many people now carry only a laptop or a tablet, not both. Everything being tied to a single real-time thread is a consideration in that context.

    I applaud you for your vision of iOS as a modular playground and have already indicated that I'll be a customer on day one, but condescending 'mother knows best' statements I can do without.

  • @jonmoore said:

    @brambos said:

    I'm building it 100% according to how Apple dictates stuff to be implemented for Audio Units. If they're going to change their internal architecture that's something they'll need to sort out. In AUv3 MIDI, all MIDI is handled on the audio thread. There is nothing optional for developers about that.

    I'm not worried about it, so users shouldn't be either :)

    It's fine for you not to be worried about it. But as a user, I believe it's an important consideration that everything isn't restricted to a single thread (especially for automation/MIDI CCs, where sample accuracy isn't a real consideration outside of iOS). You mentioned in our other discussion on threading that iOS DAWs shouldn't be judged as DAWs with mobility, since that need is already fulfilled by laptops. But Apple markets iPad Pros as laptop replacements, and many people now carry only a laptop or a tablet, not both. Everything being tied to a single real-time thread is a consideration in that context.

    I applaud you for your vision of iOS as a modular playground and have already indicated that I'll be a customer on day one, but condescending 'mother knows best' statements I can do without.

    It's not meant to be condescending - apologies if it came across that way (my excuse: English is not my native tongue). It's just a simple observation that there is no other way to develop AUv3 on iOS than the way I'm doing it with Mozaic. If you have other information, I'm more than a little bit interested to hear it.

    But if you're concerned about Mozaic's "single realtime thread implementation", you should be worried about every AUv3 plugin out there on iOS. This is how AUs work on iOS; it's not simply an arbitrary decision made by me.

    Hence my "you shouldn't worry about it" remark. If Apple breaks Mozaic, they break the entire AU ecosystem on the platform. I estimate the chance of that happening is very low.

  • I have a couple of friends who work for Apple and, hopefully without the risk of sending Apple's NDA police for a visit to their cubicle, they've hinted to me that multithreading real-time audio on iOS is a major Apple priority. It wasn't by coincidence that I posted that Apple recruitment advertisement the other day.

    https://www.linkedin.com/jobs/view/audio-real-time-embedded-systems-engineer-at-apple-967774184/

    One of my bugbears with iOS is that sometimes mission-critical apps get dropped by their devs because Apple changes something. The customer is left in a place where they either have to find another solution or wait until the developer creates a new app that works under Apple's new application frameworks.

    I'm not asking that you peer into your crystal ball, simply that you develop with possible futures in mind. You're asking for a major commitment from your customers (that they learn a new scripting language or, in the case of many, take their first baby steps into the wider world of programming). On that basis, I'm hoping that you're building some form of future-proofing into Mozaic that allows for rewrites/refactoring should Apple get the ball rolling with multithreaded real-time audio within 3 years (it's more than reasonable that customers should feel their purchase is good for 3 years of ongoing support).

  • wimwim:

    I didn’t read @brambos’ post as condescending in the least. It was factual, simple, and to the point. I appreciated it. It was a concise answer to an equally valid and well-put opinion.

    Developers are constrained to the platform they develop for.

  • @jonmoore it might be a good idea to share your concerns about multithreading audio in iOS with Apple, as it seems from the developers' feedback that their hands are tied in this regard and Apple holds the cards for change.

    It doesn’t seem like the multi-core aspects of the iPad Pros can be leveraged for musicians, so letting Apple know there are some who would like them to change that could be an effective way to facilitate that pro functionality.

  • @jonmoore said:
    I have a couple of friends who work for Apple and, hopefully without the risk of sending Apple's NDA police for a visit to their cubicle, they've hinted to me that multithreading real-time audio on iOS is a major Apple priority. It wasn't by coincidence that I posted that Apple recruitment advertisement the other day.

    https://www.linkedin.com/jobs/view/audio-real-time-embedded-systems-engineer-at-apple-967774184/

    One of my bugbears with iOS is that sometimes mission-critical apps get dropped by their devs because Apple changes something. The customer is left in a place where they either have to find another solution or wait until the developer creates a new app that works under Apple's new application frameworks.

    I'm not asking that you peer into your crystal ball, simply that you develop with possible futures in mind. You're asking for a major commitment from your customers (that they learn a new scripting language or, in the case of many, take their first baby steps into the wider world of programming). On that basis, I'm hoping that you're building some form of future-proofing into Mozaic that allows for rewrites/refactoring should Apple get the ball rolling with multithreaded real-time audio within 3 years (it's more than reasonable that customers should feel their purchase is good for 3 years of ongoing support).

    Great, cool information B). But I'm afraid it's not something I can act upon. Apple will have to sort that out transparently and invisibly for developers. Multi-threaded plugins do not and cannot exist on iOS in the way AUv3 works, so the threading will need to be managed by the DAWs or deep inside the CoreAudio framework.

    So, again: I'm not worried about this. I'm certain Apple will sort it out for us painlessly, or they will destroy the entire AUv3 ecosystem on iOS and we'll have a much bigger problem than the time you and I invest in Mozaic.

    Mozaic is like any other AUv3 MIDI plugin on iOS. Either they all work, or they all stop working :)

  • @brambos said:

    Great, cool information B). But I'm afraid it's not something I can act upon. Apple will have to sort that out transparently and invisibly for developers. Multi-threaded plugins do not and cannot exist on iOS in the way AUv3 works, so the threading will need to be managed by the DAWs or deep inside the CoreAudio framework.

    So, again: I'm not worried about this. I'm certain Apple will sort it out for us painlessly, or they will destroy the entire AUv3 ecosystem on iOS and we'll have a much bigger problem than the time you and I invest in Mozaic.

    Mozaic is like any other AUv3 MIDI plugin on iOS. Either they all work, or they all stop working :)

    Much like on the desktop, I'd expect that DAWs will be the first aspect of iOS audio able to make use of multithreading.

    @wim It's probably a cultural/language thing. After I raised what you acknowledge to be a valid opinion, I found the "I'm not worried about it, so users shouldn't be either" to be, at best, dismissive.

    Anyway, not to dwell: I've already stated multiple times that I'm excited by Mozaic and that I'll be a day-one customer. @brambos also responded with grace to any perceived tone-of-voice criticisms I subsequently brought up, so hopefully there's no need for any other commentary. :)

  • @jonmoore said:

    @brambos said:

    Great, cool information B). But I'm afraid it's not something I can act upon. Apple will have to sort that out transparently and invisibly for developers. Multi-threaded plugins do not and cannot exist on iOS in the way AUv3 works, so the threading will need to be managed by the DAWs or deep inside the CoreAudio framework.

    So, again: I'm not worried about this. I'm certain Apple will sort it out for us painlessly, or they will destroy the entire AUv3 ecosystem on iOS and we'll have a much bigger problem than the time you and I invest in Mozaic.

    Mozaic is like any other AUv3 MIDI plugin on iOS. Either they all work, or they all stop working :)

    Much like on the desktop, I'd expect that DAWs will be the first aspect of iOS audio able to make use of multithreading.

    @wim It's probably a cultural/language thing. After I raised what you acknowledge to be a valid opinion, I found the "I'm not worried about it, so users shouldn't be either" to be, at best, dismissive.

    Anyway, not to dwell: I've already stated multiple times that I'm excited by Mozaic and that I'll be a day-one customer. @brambos also responded with grace to any perceived tone-of-voice criticisms I subsequently brought up, so hopefully there's no need for any other commentary. :)

    The bluntness of the Dutch style of communication is legendary all over the world. To describe it as "direct" is probably an understatement. I try to be conscious of it when speaking to an international audience, but sometimes a hint of it may trickle through in how I articulate things ;)

    http://www.bbc.com/travel/story/20180131-where-dutch-directness-comes-from

  • I remember going through the interview tapes of Python creator Guido van Rossum on the Museum of Computing YouTube channel, and Guido mentions that his directness has been a major cause of problems over the years. Plus, I've worked with a few Dutch folk in my time, so I should know better. :)
