Comments
That last bit = 😍
@brambos I love the mutate feature in your apps and there's really no good AU MIDI FX for adding some generative variety to sequences. Might you consider a preset that gets the kids at the back of the room started?
I could imagine a nice script that "listens" to your melody and/or chords for a while. Then, when you hold one of the pads, it adds rhythmic variation; another pad plays inversions; a third pad adds "blue notes"; and a fourth pad adds short flams here and there to spice up the melody.
Nice! looking forward to this!
That will be good for a six pack!
Trying to get my head around all the use cases for this but I have faith in @brambos and will be eagerly following along.
Nice
On your Medium page you already mentioned nested loops and arrays. Are user-defined functions also in the scope of the language?
No user functions for now, but lots of different events to tap into.
There are also compound conditional statements.
So plenty of possibilities for structured code.
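Mozaic's actual syntax hadn't been published at this point in the thread, so as a purely hypothetical illustration (sketched in Python, not Mozaic), a compound conditional inside a MIDI event handler might look like this:

```python
# Hypothetical sketch (Python, NOT Mozaic syntax): a compound
# conditional inside a MIDI note handler.
def on_midi_note(status, note, velocity):
    NOTE_ON = 0x90
    # Compound condition: both clauses must hold.
    # Mask off the channel nibble to compare the message type.
    if (status & 0xF0) == NOTE_ON and velocity > 0:
        return "note-on"
    # A note-on with velocity 0 is treated as note-off by MIDI convention.
    return "note-off"

print(on_midi_note(0x90, 60, 100))  # note-on
print(on_midi_note(0x90, 60, 0))    # note-off
```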
Case statements?
I will post the programming guide when it’s done.
Very awesome!!! I always wanted to build a Lego-like building-blocks approach to MIDI tooling... something like Scratch (https://scratch.mit.edu), but for MIDI composition and routing.
I'm not sure if inventing a whole new programming language, along with all the baggage that entails, is/was worth it. Why not just embed a WKWebView and let users use a widespread language, i.e. JavaScript (along with an already existing, extremely optimized JIT, etc.)? Just a thought. I'm normally no "standards guy", but developing a whole language, including an interpreter, for such a specialized one-time use would have been overkill for me.
Valid questions, but I thought my strategy through before I started:
1) The virtual machine runs on the audio thread (because I want all events to be sample-accurate). Third-party interpreters can't run on realtime threads because they are unsafe: they're full of locks, dynamic memory allocations, etc. So there was a clear need to roll my own.
2) Because I built my own, I can optimize the living shit out of my interpreter. I'm applying DSP-grade optimizations in my parser and interpreter, and I can selectively choose where to make compromises in order to speed up the engine.*
3) I really didn't want to use JavaScript, because this is meant to be a gentle entry point for non-coders. JavaScript, with its object-oriented foundation and beginner-unfriendly aspects like case sensitivity, did not meet my needs. I want to keep the core language structure simple, so people with no prior experience don't need to learn dozens of coding patterns before they can get a little bit productive.
4) Building the virtual machine is the fun part. I love me a challenge, and designing my own language and interpreter is the best fun I've had in ages; I even enjoy writing the manual!
*) For example: because this is a music application, I can prioritize seamless playback and graceful error handling with best guesses over "breaking and informing" when exceptions happen. There are lots of subtle music-specific design decisions to be made.
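As a hypothetical illustration of that "best guess" philosophy (sketched in Python; this is not Mozaic's documented behaviour), a music runtime might clamp out-of-range values so playback never stops, rather than raising an exception:

```python
# Hypothetical illustration (not actual Mozaic behaviour) of
# "best guess" error handling for a music runtime: out-of-range
# values are coerced into a valid range so playback never stops,
# instead of raising an exception.
def safe_midi_byte(value):
    # MIDI data bytes must be 0..127; clamp rather than throw.
    return max(0, min(127, int(value)))

print(safe_midi_byte(200))  # 127, not an exception
print(safe_midi_byte(-5))   # 0
```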
OK, mostly valid points. Sorry, I've been talking a bit out of my backside; I'm a bit quick sometimes...
I agree with everything, except maybe point 3). JavaScript can really be pretty beginner-friendly, as you could provide a "runtime library" that simplifies creating the event handlers and other stuff from your examples with a very simple syntax. But yeah, the realtime aspect probably is very important, and if you're having fun while at it, then it's all good.
I hope the straightforward efficient design elements found in other brambos apps will also translate well to the language created for Mozaic.
This looks excellent and something I have been trying to find - a flexible MIDI controller app. It seems to have the right balance of sliders and pads. I wonder if Envelopes might find their way into a future version?
With both constant and BPM-dependent timers available, I don't see any reason why that shouldn't be possible in Mozaic from day one. You could also do a lot with Lemur, but that's a standalone app with no AUv3 support.
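A BPM-dependent timer boils down to simple arithmetic. A hedged sketch (Python; `timer_interval_seconds` is a hypothetical helper, not Mozaic's API) of deriving a timer interval that could step an envelope:

```python
# Hypothetical helper (not Mozaic's API): derive a bpm-dependent
# timer interval, e.g. to step an envelope in time with the host.
def timer_interval_seconds(bpm, division=4):
    # One beat (quarter note) lasts 60/bpm seconds; divide it
    # further for sixteenth-note steps, etc.
    beat = 60.0 / bpm
    return beat / division

print(timer_interval_seconds(120, division=1))  # 0.5 s per beat
```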
Definitely not going to buy this...I don't need more coding in my life...
Who am I kidding LOL, gimmmeeeee
Ahhhhhh good times good times
As previously indicated, I'll be all over Mozaic on release. But I do have a concern about attempting to solve everything on the audio thread, and (you've guessed it) it's parallel processing.
Having used Max on the desktop for many years, I'm aware how easy it is to create a system that's inflexible in the modern parallel-processing world. I continue to use Max within Ableton Live because Max only locks to a single thread those channels with I/O requirements tied to the M4L device in play; Ableton is still able to provide its multithreaded goodness to the rest of the set.
I acknowledge that everything is locked to the audio thread on iOS at the moment, but I'd expect that situation to be solved at some point over the next few years. I'm hoping that Mozaic is flexible enough to adapt to those changes (if they come to pass).
I'm building it 100% according to how Apple dictates stuff to be implemented for Audio Units. If they're going to change their internal architecture that's something they'll need to sort out. In AUv3 MIDI, all MIDI is handled on the audio thread. There is nothing optional for developers about that.
I'm not worried about it, so users shouldn't be either
It's fine for you not to be worried about it. But as a user, I believe it's an important consideration that everything isn't restricted to a single thread (especially for automation/MIDI CCs, where sample accuracy isn't a real consideration outside of iOS). You mentioned in our other discussion on threading that iOS DAWs shouldn't be judged as "DAWs with mobility", since that need is already fulfilled by laptops. But Apple markets iPad Pros as laptop replacements, and many people now carry only one of either a laptop or a tablet. Everything being tied to a single real-time thread is a consideration in that context.
I applaud you for your vision of iOS as a modular playground and have already indicated that I'll be a customer on day one, but condescending 'mother knows best' statements I can do without.
It's not meant to be condescending - apologies if it came across that way (my excuse: English is not my native tongue). It's just a simple observation that there is no other way to develop AUv3 on iOS than the way I'm doing with Mozaic. If you have other information, I'm more than a little bit interested to hear it.
But if you're concerned about Mozaic's "single realtime thread implementation" you should be worried about every AUv3 plugin out there on iOS. This is how AUs work on iOS, it's not simply an arbitrary decision made by me.
Hence my "you shouldn't worry about it" remark. If Apple breaks Mozaic, they break the entire AU ecosystem on the platform. I estimate the chance of that happening is very low.
I have a couple of friends that work for Apple and (hopefully) without the risk of sending Apple's NDA police for a visit to their cubicle, they've hinted to me that multithreading real-time audio on iOS is a major Apple priority. It wasn't by coincidence that I posted that Apple recruitment advertisement the other day.
https://www.linkedin.com/jobs/view/audio-real-time-embedded-systems-engineer-at-apple-967774184/
One of my bugbears with iOS is that sometimes mission-critical apps get dropped by their devs because Apple changes something. The customer is left in a place where they either have to find another solution or wait until the developer creates a new app that works under Apple's new application frameworks.
I'm not asking that you peer into your crystal ball, simply that you develop with possible futures in mind. You're asking for a major commitment from your customers (that they learn a new scripting language or, in the case of many, take their first baby steps into the wider world of programming). On that basis, I'm hoping that you're building some form of future-proofing into Mozaic that allows for rewrites/refactoring should Apple get the ball rolling with multithreaded real-time audio within 3 years (it's more than reasonable that customers should feel their purchase is good for 3 years of ongoing support).
I didn’t read @brambos post as condescending in the least. It was factual, simple, and to the point. I appreciated it. It was a concise answer to an equally valid and well put opinion.
Developers are constrained to the platform they develop for.
@jonmoore it might be a good idea to share your concerns about multithreading audio on iOS with Apple, as it seems from the developers' feedback that their hands are tied in this regard and Apple holds the cards to change things.
It doesn't seem like the multi-core capabilities of the iPad Pros can be leveraged for musicians, so letting Apple know there are some who would like them to change that could be an effective way to facilitate that pro functionality.
Great, cool information. But I'm afraid it's not something I can act upon. Apple will have to sort that out transparently and invisibly for developers. Multithreaded plugins do not and cannot exist on iOS in the way AUv3 works, so the threading will need to be managed by the DAWs or deep inside the CoreAudio framework.
So, again: I'm not worried about this. I'm certain Apple will sort it out for us painlessly, or they will destroy the entire AUv3 ecosystem on iOS and we'll have a much bigger problem than the time you and I invest in Mozaic.
Mozaic is like any other AUv3 MIDI plugin on iOS. Either they all work, or they all stop working
Much like on the desktop, I'd expect that DAWs will be the first aspect of iOS audio able to make use of multithreading.
@wim It's probably a cultural/language thing. After I raised what you acknowledge to be a valid opinion, I found "I'm not worried about it, so users shouldn't be either" to be, at best, dismissive of a valid opinion.
Anyway, not to dwell: I've already stated multiple times that I'm excited by Mozaic and that I'll be a day-one customer. @brambos also responded with grace to the tone-of-voice criticisms I subsequently brought up, so hopefully there's no need for any further commentary.
The bluntness of the Dutch style of communication is legendary all over the world; to describe it as "direct" is probably an understatement. I try to be conscious of it when speaking to an international audience, but sometimes a hint of it may trickle through in how I articulate things.
http://www.bbc.com/travel/story/20180131-where-dutch-directness-comes-from
I remember going through the interview tapes of Python creator Guido van Rossum on the Museum of Computing YouTube channel, and Guido mentions that his directness has been a major cause of problems over the years. Plus, I've worked with a few Dutch folk in my time, so I should know better.