AUx Sequencer desire (or other AUx desires)

Now, I don't know the AUx spec in depth, or what is truly possible with it. I do know, however, that in my experience and for my uses it is far more stable and convenient than using standalone or IAA apps.

I am talking about the simpler 'analog'-style sequencers.

This has me thinking how much I would love to have an AUx sequencer in my AUM setup. Yes, I can work around the need for one, but it's so much easier to just open an AUx within AUM while playing. Add to that the fact that I've rarely had any issues with AUxs bringing my setup crashing down.

What do you peeps think? Would anyone else see a use for an AUx sequencer?


Comments

  • edited December 2016

    I would like to see more MIDI generators/processors available as AUx: MIDI FX, if you like, that can be loaded onto a particular channel for things like this. They could also include a quantizer/humanizer, tempo doubler/halver, MIDI delay (works like a delay, but by adding MIDI notes with reducing or increasing velocity)... you know the sort of stuff...

    I'm not sure if MIDI is even part of AUx, as the devs only talk about AUx parameter automation.

    Perhaps this stuff will be available in AB3, which to me would move it back near the top of the list of essential apps for iOS music.

  • edited December 2016

    Yeah, and more guitar-like FX in AUx format too! Need more distortions etc.

  • I love the idea. In fact, so much so that I looked into the possibilities of it months ago. It turns out (after double-checking with Apple's CoreAudio engineers) that an AUv3 plugin cannot output MIDI.

    Now, it has crossed my mind to design an open-source extension to the AUv3 protocol (AUv3+?) that allows hosts to get MIDI commands from a plugin, the same way they get audio data from it (via a renderBlock system).

    It's technically not impossible, but it would require Jonatan Liljedahl to also implement it into AUM. And proper documentation and sample code so any other dev can also do it.

    It's probably a lot of work for all dev stakeholders, and it would be like doing Apple's job except not getting paid for it (because users would never fully understand the complexity and effort that goes into something so seemingly simple and not be willing to pay much for the feature).

    So I've put it in the mental fridge for now as a thought exercise. Unless @j_liljedahl is also up for it ;)
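
    For the curious, a very rough sketch of what such a hypothetical "AUv3+" hook could look like on the plugin side, mirroring the render-block idea. The block type and names below are made up purely for illustration; nothing like this exists in the AUv3 API as it stands:

        import AudioToolbox

        // Hypothetical host-installed callback, analogous to the audio render block:
        // the plugin calls it from its render cycle to hand MIDI bytes back to the host.
        typealias HypotheticalMIDIOutputBlock =
            (_ sampleTime: AUEventSampleTime, _ cable: UInt8, _ bytes: [UInt8]) -> Void

        final class SketchSequencerKernel {
            // Set by the host if it supports the (imaginary) extension; nil otherwise.
            var midiOutputBlock: HypotheticalMIDIOutputBlock?

            // Emit a note-on on MIDI channel 1 at the given sample time.
            func emitNoteOn(note: UInt8, velocity: UInt8, at sampleTime: AUEventSampleTime) {
                midiOutputBlock?(sampleTime, 0, [0x90, note, velocity])
            }
        }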

  • @Fruitbat1919 said:
    Yeah, and more guitar-like FX in AUx format too! Need more distortions etc.

    I think what would be really nice would be an AUx that was itself a host for AUx... seems crazy, but if you think about an FX chain you may have multiple AUxs, and most hosts are either limited in the number you can use or the list of loaded AUxs becomes too long and unwieldy... being able to load a 'chunk' of that FX chain into a container would help keep the UI less cluttered while still enabling us to throw as many AUxs into the chain as we want.

  • @brambos said:
    I love the idea. In fact, so much so that I looked into the possibilities of it months ago. It turns out (after double-checking with Apple's CoreAudio engineers) that an AUv3 plugin cannot output MIDI.

    Now, it has crossed my mind to design an open-source extension to the AUv3 protocol (AUv3+?) that allows hosts to get MIDI commands from a plugin, the same way they get audio data from it (via a renderBlock system).

    It's technically not impossible, but it would require Jonatan Liljedahl to also implement it into AUM. And proper documentation and sample code so any other dev can also do it.

    It's probably a lot of work for all dev stakeholders, and it would be like doing Apple's job except not getting paid for it (because users would never fully understand the complexity and effort that goes into something so seemingly simple and not be willing to pay much for the feature).

    So I've put it in the mental fridge for now as a thought exercise. Unless @j_liljedahl is also up for it ;)

    Thank you so much for your reply. It really is helpful having a dev who can explain some of the intricacies of AUx design to us. Yeah, I can see how people's perceptions of the work/costs involved are a sticking point with certain projects. I wonder if this is something we should discuss with the AudioBus team for their intended implementation of MIDI within Audiobus 3?

  • @Fruitbat1919 said:

    @brambos said:
    I love the idea. In fact, so much so that I looked into the possibilities of it months ago. It turns out (after double-checking with Apple's CoreAudio engineers) that an AUv3 plugin cannot output MIDI.

    Now, it has crossed my mind to design an open-source extension to the AUv3 protocol (AUv3+?) that allows hosts to get MIDI commands from a plugin, the same way they get audio data from it (via a renderBlock system).

    It's technically not impossible, but it would require Jonatan Liljedahl to also implement it into AUM. And proper documentation and sample code so any other dev can also do it.

    It's probably a lot of work for all dev stakeholders, and it would be like doing Apple's job except not getting paid for it (because users would never fully understand the complexity and effort that goes into something so seemingly simple and not be willing to pay much for the feature).

    So I've put it in the mental fridge for now as a thought exercise. Unless @j_liljedahl is also up for it ;)

    Thank you so much for your reply. It really is helpful having a dev who can explain some of the intricacies of AUx design to us. Yeah, I can see how people's perceptions of the work/costs involved are a sticking point with certain projects. I wonder if this is something we should discuss with the AudioBus team for their intended implementation of MIDI within Audiobus 3?

    Yeah thanks Bram, I always enjoy your insights into the guts of these things ;)

  • An AU sequencer is the one missing element of an AU-geared setup in AUM for me. Modstep can sort of serve a similar purpose, but I love the AUM GUI and workflow.

  • edited December 2016

    @Fruitbat1919 said:
    Thank you so much for your reply. It really is helpful having a dev who can explain some of the intricacies of AUx design to us. Yeah, I can see how people's perceptions of the work/costs involved are a sticking point with certain projects. I wonder if this is something we should discuss with the AudioBus team for their intended implementation of MIDI within Audiobus 3?

    It would certainly be a unique selling point for AB3, and something that fits their plans to make MIDI routing part of the AB3 foundations (at least looking at it from an outside perspective). Until Apple spoil the party again and pull a stunt like they did with IAA, blocking third-party functionality.

    So I could understand if the AB guys don't want to get too close to Apple's off-limits OS functionality again (or simply don't deem it a commercially viable feature).

  • @Sebastian

    Hi, I wonder if you would like to comment on this thread, Sebastian? I know you guys have a lot going on, but even if you think there may be something in it for an alternative solution, it's good for us to tell you our ideas and desires :)

  • @AndyPlankton said:

    @Fruitbat1919 said:

    @brambos said:
    I love the idea. In fact, so much so that I looked into the possibilities of it months ago. It turns out (after double-checking with Apple's CoreAudio engineers) that an AUv3 plugin cannot output MIDI.

    Now, it has crossed my mind to design an open-source extension to the AUv3 protocol (AUv3+?) that allows hosts to get MIDI commands from a plugin, the same way they get audio data from it (via a renderBlock system).

    It's technically not impossible, but it would require Jonatan Liljedahl to also implement it into AUM. And proper documentation and sample code so any other dev can also do it.

    It's probably a lot of work for all dev stakeholders, and it would be like doing Apple's job except not getting paid for it (because users would never fully understand the complexity and effort that goes into something so seemingly simple and not be willing to pay much for the feature).

    So I've put it in the mental fridge for now as a thought exercise. Unless @j_liljedahl is also up for it ;)

    Thank you so much for your reply. It really is helpful having a dev who can explain some of the intricacies of AUx design to us. Yeah, I can see how people's perceptions of the work/costs involved are a sticking point with certain projects. I wonder if this is something we should discuss with the AudioBus team for their intended implementation of MIDI within Audiobus 3?

    Yeah thanks Bram, I always enjoy your insights into the guts of these things ;)

    Yeah. I think that's why the AudioBus forums work so well. There is always someone around here who knows the stuff that helps us all :)

  • Well, GarageBand was supposed to be that idea, right?

  • @Fruitbat1919 said:
    … I am talking about the simpler 'analog'-style sequencers. …

    @brambos said:
    … an AUv3 plugin cannot output MIDI.
    … mental fridge …

    In that case, what is the frequency response lower limit of audio within AUv3 and Core Audio in general? Or more precisely, can it handle and process DC? If an audio signal is considered to be so low in frequency that it is a DC static level, then that's all that is required to construct an analogue sequencer that outputs a stepped series of DC values. If CoreAudio still thinks that is audio, then good, and a receptacle app at the other end could be 'looking out for' a specific 'audio' connection that is actually DC, and treat it as if it were a control voltage (except that it is a value, in a computer). An interesting aspect of this is that it pretty much narrows you down to only being able to make analogue sequencers with it. If it turns out to be possible to route DC within CoreAudio, that is.
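
    To make the idea concrete, here's a minimal sketch under those assumptions. The step values, step length and the "±1.0 full scale stands in for ±10 V" mapping are all invented for illustration: a sequencer that simply fills its output buffer with a constant value per step, which a receiving app could treat as a control voltage.

        import Foundation

        // Illustrative "analogue" step sequencer: each step is a static DC level,
        // expressed as a fraction of full scale (pretending ±1.0 corresponds to ±10 V).
        struct DCStepSequencer {
            var steps: [Float] = [0.1, 0.3, 0.5, 0.2]   // hypothetical CV levels
            var samplesPerStep = 11025                  // ~0.25 s per step at 44.1 kHz
            private var position = 0

            // Fill one render buffer with the current step's DC value.
            mutating func render(into buffer: UnsafeMutablePointer<Float>, frameCount: Int) {
                for frame in 0..<frameCount {
                    let stepIndex = (position / samplesPerStep) % steps.count
                    buffer[frame] = steps[stepIndex]
                    position += 1
                }
            }
        }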

  • @brambos said:
    It turns out (after double-checking with Apple's CoreAudio engineers) that an AUv3 plugin cannot output MIDI.

    So if that is the case, how does AU parameter recording/automation recording work?

  • @stormywaterz said:
    Well GarageBand was supposed to be that idea right?

    No, I'm thinking more along the lines of the excellent MidiSequencer.

  • @u0421793 said:
    In that case, what is the frequency response lower limit of audio within AUv3 and Core Audio in general? Or more precisely, can it handle and process DC? If an audio signal is considered to be so low in frequency that it is a DC static level, then that's all that is required to construct an analogue sequencer that outputs a stepped series of DC values. If CoreAudio still thinks that is audio, then good, and a receptacle app at the other end could be 'looking out for' a specific 'audio' connection that is actually DC, and treat it as if it were a control voltage (except that it is a value, in a computer). An interesting aspect of this is that it pretty much narrows you down to only being able to make analogue sequencers with it. If it turns out to be possible to route DC within CoreAudio, that is.

    Now THAT is creative thinking. I love it.

    But you'd need to come up with a way to encode the signal in an audio stream of any rate. Because CoreAudio sessions dictate the sample rate depending on what is already going on in the system and what hardware is connected for input/output.

    But you could probably encode simple MIDI commands with timestamps in any audio stream - provided you're sure the audio stream isn't used for playback. I'd prefer a method that does not involve Fourier transforms to get the data out of the stream, because that would likely introduce latency.
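
    As a purely illustrative toy (not an agreed format), the encoding could be as dumb as writing each MIDI event into the stream as a short fixed-length burst of samples: a sentinel marker, the frame offset, then the raw bytes scaled into the ±1.0 range. The receiver just scans for the marker, so no Fourier transform is needed:

        // Toy codec: one MIDI event per 8-sample burst.
        // Layout: [marker, frameOffset/bufferSize, byte0/255, byte1/255, byte2/255, 0, 0, 0]
        struct SampleStreamMIDICodec {
            static let marker: Float = -0.999   // sentinel value the sender never uses otherwise

            static func encode(event bytes: [UInt8], frameOffset: Int, bufferSize: Int,
                               into out: inout [Float]) {
                out.append(marker)
                out.append(Float(frameOffset) / Float(bufferSize))
                for i in 0..<3 {
                    out.append(i < bytes.count ? Float(bytes[i]) / 255.0 : 0)
                }
                out.append(contentsOf: [0, 0, 0])   // pad to a fixed burst length
            }

            static func decode(_ samples: [Float], bufferSize: Int) -> [(offset: Int, bytes: [UInt8])] {
                var events: [(offset: Int, bytes: [UInt8])] = []
                var i = 0
                while i + 7 < samples.count {
                    if samples[i] == marker {
                        let offset = Int(samples[i + 1] * Float(bufferSize))
                        let bytes = (2...4).map { UInt8((samples[i + $0] * 255).rounded()) }
                        events.append((offset: offset, bytes: bytes))
                        i += 8
                    } else {
                        i += 1
                    }
                }
                return events
            }
        }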

  • I'd really like to see this too so I hope it's possible one day.

  • edited December 2016

    @gsm909 said:

    @brambos said:
    It turns out (after double-checking with Apple's CoreAudio engineers) that an AUv3 plugin cannot output MIDI.

    So if that is the case, how does AU parameter recording/automation recording work?

    It doesn't ;-) Because as far as I know there is no host that implements it at this point in time (hence all my plugins also listen to MIDI CC events mapped to AU parameters with the same address).

    Theoretically it should work as follows:

    • The AU parameter value is exposed to both the host and the plugin; for example, a value representing "Cutoff".
    • Both put a "listener" onto this value, which will notify you if the other has changed it.
    • So if the cutoff knob is twiddled on the plugin, the host will get a notification. Conversely, if the host sends a new value to the cutoff parameter, the plugin will get a notification to update its state and GUI.
    • That's it! Easy peasy. So it should be fairly easy for hosts to implement parameter automation. My plugins are ready for it ;-) (A rough sketch of the wiring follows below.)
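
    For anyone who wants to see the plumbing, a rough host-side sketch using the public AUParameterTree observer API. The parameter name, address and values are made up, and in reality the host reads the tree from the plugin's AUAudioUnit rather than building it itself:

        import AudioToolbox

        let cutoffAddress: AUParameterAddress = 0

        // The plugin exposes a parameter tree like this; the host normally gets it
        // from audioUnit.parameterTree instead of constructing it.
        let cutoff = AUParameterTree.createParameter(
            withIdentifier: "cutoff", name: "Cutoff", address: cutoffAddress,
            min: 20, max: 20_000, unit: .hertz, unitName: nil,
            flags: [.flag_IsReadable, .flag_IsWritable],
            valueStrings: nil, dependentParameters: nil)
        let tree = AUParameterTree.createTree(withChildren: [cutoff])

        // 1) The host adds a listener; it fires when the value changes on the plugin
        //    side (knob twiddled in the plugin GUI), which is what automation
        //    recording would capture.
        let hostToken = tree.token(byAddingParameterObserver: { address, value in
            print("record automation: parameter \(address) -> \(value)")
        })

        // 2) On playback the host writes values back; passing its own token as the
        //    originator keeps it from being re-notified about its own change.
        tree.parameter(withAddress: cutoffAddress)?.setValue(880, originator: hostToken)
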
  • edited December 2016

    @u0421793 said:

    @Fruitbat1919 said:
    … I am talking about the simpler 'analog'-style sequencers. …

    An interesting aspect of this is that it pretty much narrows you down to only being able to make analogue sequencers with it. If it turns out to be possible to route DC within CoreAudio, that is.

    Thinking more about this... if someone were to make a CoreAudio CODEC class for this, it would allow every plugin to send and receive CV/GATE/SyncPulse data over an audio stream, effectively enabling the AUv3 system to become a huge modular synth.

    Power-user stuff, for sure, but an interesting thought! Especially because each individual Audio Unit would only have to do very little processing and use almost no resources.

    I love this idea!
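
    To illustrate what such a shared convention might look like (the channel layout and scaling below are invented for the sake of the sketch, not any kind of standard): one audio channel carries pitch CV, another carries gate/sync, and every module agrees on the same volts-to-sample scaling.

        import Foundation

        // Invented convention: a stereo "CV bus" with pitch on the left channel and
        // gate on the right. Full scale ±1.0 stands in for ±10 V, i.e. 0.1 per volt.
        enum CVBusChannel: Int {
            case pitch = 0   // 0.1 per volt, 1 V per octave
            case gate  = 1   // held above 0.5 while a note is on; 1.0 spikes for sync
        }

        // Decode a pitch-CV sample to a frequency, assuming 0 V maps to C0 (~16.35 Hz).
        func frequency(fromPitchCV sample: Float) -> Float {
            let volts = sample * 10.0
            return 16.35 * powf(2.0, volts)
        }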

  • @brambos said:

    @u0421793 said:
    In that case, what is the frequency response lower limit of audio within AUv3 and Core Audio in general? Or more precisely, can it handle and process DC? If an audio signal is considered to be so low in frequency that it is a DC static level, then that's all that is required to construct an analogue sequencer that outputs a stepped series of DC values. If CoreAudio still thinks that is audio, then good, and a receptacle app at the other end could be 'looking out for' a specific 'audio' connection that is actually DC, and treat it as if it were a control voltage (except that it is a value, in a computer). An interesting aspect of this is that it pretty much narrows you down to only being able to make analogue sequencers with it. If it turns out to be possible to route DC within CoreAudio, that is.

    Now THAT is creative thinking. I love it.

    But you'd need to come up with a way to encode the signal in an audio stream of any rate. Because CoreAudio sessions dictate the sample rate depending on what is already going on in the system and what hardware is connected for input/output.

    But you could probably encode simple MIDI commands with timestamps in any audio stream - provided you're sure the audio stream isn't used for playback. I'd prefer a method that does not involve Fourier transforms to get the data out of the stream, because that would likely introduce latency.

    Fascinating stuff. I pulled these links out of my bookmarks... might be a good source of inspiration.

    https://chirp.io/
    https://github.com/korginc/volcasample

  • I'm guessing that latency would partly be down to having to wait for a whole wave cycle in order to detect a frequency, so rather than a low frequency, a high frequency would be better, as the time for a full cycle is shorter.

    Man, I love this place... wish I didn't have work to do today, I'd be all over this more than I am at the moment :)

  • Perhaps the iVCS3 dev would be interested in this too, what with the recent AU update and all!

  • @AndyPlankton said:
    Perhaps the iVCS3 dev would be interested in this too, what with the recent AU update and all!

    Well, provided we have a CODEC we all use/agree on for the signal transmission, it will be very easy for any dev to make modules compatible with such a modular system. I'm estimating you'd be able to make something fairly slick and polished (like a MIDI-controllable LFO generator) in a weekend. And it would all run in AUM without any modifications (with state saving to save your modular synth design).

    Niiiiice!

  • This is all making me drool a bit....sorry but these ideas are all so sexy B)

  • edited December 2016

    @brambos said:

    @AndyPlankton said:
    Perhaps the iVCS3 dev would be interested in this too, what with the recent AU update and all!

    Well, provided we have a CODEC we all use/agree on for the signal transmission, it will be very easy for any dev to make modules compatible with such a modular system. I'm estimating you'd be able to make something fairly slick and polished (like a MIDI-controllable LFO generator) in a weekend. And it would all run in AUM without any modifications (with state saving to save your modular synth design).

    Niiiiice!

    Viva la AUv3 revolution !!!!

  • @brambos said:

    @AndyPlankton said:
    Perhaps the iVCS3 dev would be interested in this too, what with the recent AU update and all!

    Well, provided we have a CODEC we all use/agree on for the signal transmission, it will be very easy for any dev to make modules compatible with such a modular system. I'm estimating you'd be able to make something fairly slick and polished (like a MIDI-controllable LFO generator) in a weekend. And it would all run in AUM without any modifications (with state saving to save your modular synth design).

    Niiiiice!

    Man....what you just said just struck me properly..........Modular with memory B)

  • @brambos said:
    But you'd need to come up with a way to encode the signal in an audio stream of any rate.


    @brambos said:
    I'd prefer a method that does not involve Fourier transforms to get the data out of the stream, because that would likely introduce latency.


    @brambos said:
    ...it would allow every plugin to send and receive CV/GATE/SyncPulse data over an audio stream, effectively enabling the AUv3 system to become a huge modular synth.


    Would a simple zero-crossing on a basic waveform get the job done?

    (sample_rate / (window_size / zero_crossings)) / 2
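
    (For illustration, roughly what I mean in code, assuming a clean single-frequency signal in the window:)

        // Count sign changes over a window and divide by two, since a full cycle
        // crosses zero twice: (sample_rate / (window_size / zero_crossings)) / 2
        func estimatedFrequency(window: [Float], sampleRate: Float) -> Float {
            guard window.count > 1 else { return 0 }
            var zeroCrossings: Float = 0
            for i in 1..<window.count where (window[i - 1] < 0) != (window[i] < 0) {
                zeroCrossings += 1
            }
            return (sampleRate / (Float(window.count) / zeroCrossings)) / 2
        }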

  • @nrgb said:
    Would a simple zero-crossing on a basic waveform get the job done?
    (sample_rate / (window_size / zero_crossings)) / 2

    I assume something like that could work given a high enough sample rate.

  • edited December 2016

    @brambos said:
    I assume something like that could work given a high enough sample rate.

    If the goal was to simply emulate control voltage (which is usually a range of about 10v ?) wouldn't a fairly modest sample rate work?

    A rate of 22050 khz would give 10000 khz of usable audio.
    1000 khz == 1v

    Does that make sense?

  • @nrgb said:

    @brambos said:
    I assume something like that could work given a high enough sample rate.

    If the goal was to simply emulate control voltage (which is usually a range of about 10v ?) wouldn't a fairly modest sample rate work?

    A rate of 22050 khz would give 10000 khz of usable audio.
    1000 khz == 1v

    Minor correction: you probably mean 22.05 kHz, which gives ~10.0 kHz of usable audio.
    That's 1000 Hz per volt. Should be enough, but I always like to have a fair margin to account for inaccuracies. But I must admit I haven't done the maths yet ;-)
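
    Spelled out, the mapping being assumed here, with the caveat that 1000 Hz per volt is just the convention we're sketching, not anything standardised:

        // Assumed convention: 1 V is carried as a 1000 Hz tone, leaving headroom below
        // the ~10 kHz of usable bandwidth in a 22.05 kHz stream.
        let hertzPerVolt: Float = 1000

        func volts(fromFrequency hz: Float) -> Float { hz / hertzPerVolt }
        func frequency(fromVolts v: Float) -> Float { v * hertzPerVolt }

        // e.g. a 3.5 V control value would be transmitted as a 3500 Hz tone.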

  • @brambos said:
    Minor correction: you probably mean 22.05 kHz, which gives ~10.0 kHz of usable audio.

    Yep. :p
    Serves me right for trying to internets and watch TV at the same time...
