Dawless toe dipping

Comments

  • @wim said:

    @gregsmith said:
    What about ApeMatrix? Is it a viable alternative to AUM in this situation? I don’t have it.

    It says it does ‘precise midi clock in/out’

    Tempo and Start/Stop only. No timeline positioning.

    Ah I see

  • Just realised I’ve been so focussed on freezing tracks to reduce CPU that I forgot about increasing the buffer. Increased it to 2048 and CPU is now 30-40%, so no problem at all 🤦‍♂️

  • @GovernorSilver : thanks! Yeah, that doesn’t bother me too much, as my stuff is never live, and I’m fine to go back in and delete blank space after the fact, but good to know.

  • edited November 2019

    @SevenSystems said:

    @wim said:
    You're totally right @SevenSystems.

    On the other hand, I've been listening to host developers resisting MIDI Clock for 20 years now, not wanting to support it because it's inadequate and antiquated. Meanwhile, MIDI remains the only viable standard that works with everything, and hardware has just rolled along with it. They've proclaimed its death imminent since the last century, but nothing has come along to replace it.

    It's never going to change. I give up.

    But! If some brilliant developer, who has a standalone app that's best in class for midi sequencing and two outstanding AU midi control surface apps, decided to port a subset of his outstanding piano roll/timeline midi sequencing app to AU midi, maybe I could someday come back to the quest for a modular "best in class" approach. ;)

    Yes, I understand ;) Not to get too philosophical or off-topic now, but I want to say something more generic here:

    A protocol that deals with solving some kind of problem cannot really be "outdated" unless the problem drastically changes. There's a reason why MIDI is still around in its absolutely original 1983 form almost 40 years later -- it is well-designed and the problems it tries to solve are essentially unchanged. There is no need for something more "modern" because MIDI is not "antiquated".

    Yes, maybe the nowadays XML crowd, for whom storing the number 1234 takes 500 kilobytes of bracket bullshit, would say that MIDI's enormously optimized use of bandwidth is "antiquated". But even then, sending the same information over MIDI rather than over, say, XML, is still 1000 times more efficient, and if you send 1000s of notes or controllers at the same time, you would notice even with today's hardware.

    With that off the table: MIDI sync is also quite well designed, and well-specified! (something that cannot always be said about Ableton Link, for example). The MIDI spec is extremely clear and goes into insane detail for the tiniest things, because it was designed in a different time -- when things (specifications, software, cars, etc.) were built for the future, not for the shareholders! (do I sound like anticapitalista now? I amn't!)

    The same can be said, for example, of the C programming language. It's also essentially unchanged since around 1980 or earlier.

    I mean, it's just ridiculous how a "modern" programming language like Swift has now seen like, what, 5 iterations in 5 years? That's just insane. It doesn't show that the language is "modern" or "current" -- it just means that no thinking-ahead has been put into it, and its designers keep adding and changing stuff because they didn't have a coherent vision from the start.

    Wow, that was a lovely post! :D

    MIDI clock was not designed for synchronizing audio timelines; it was designed for sequencers, where 24 PPQN is enough resolution (not many need durations smaller than 1/64th notes!). For audio, however, this slow clock must be "multiplied" to reach the actual sample rate, i.e. corresponding to 1/44100th notes at a 44.1kHz sample rate. This must be done with some smart filtering to keep the clock stable and jitter-free, adjusting to changes slowly and spreading out the error. It's doable in theory but not an easy task to get good results. It has to be done "in advance", not only reacting to the actual MIDI clock ticks coming in but also predicting when they are supposed to come in the future, and then compensating for the error between prediction and reality, again spreading it out to avoid sudden changes. This is the reason so few audio DAWs support MIDI clock slaving.

    MIDI clock has several shortcomings when it comes to this purpose:

    • Very low time resolution, as mentioned above.
    • Tempo is not communicated directly. You need to actually count the MIDI ticks for a while to calculate the tempo.
    • Clock ticks are not "numbered"; all ticks look exactly the same. If one tick is lost for some reason, you're out of sync forever.

    Both IAA/AU host sync and Ableton Link solve all this, by providing exact (64-bit floating point) time information and the current tempo for each and every audio buffer. Host sync also has absolute time position information, which Link currently lacks.
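A minimal sketch of the receiver-side problem described above, with hypothetical names: derive the tempo by timing the incoming 0xF8 ticks, smooth the estimate to spread the error out, and "multiply" the clock up to sample-rate resolution.

```swift
import Foundation

// Illustrative only: estimates tempo from incoming MIDI clock ticks
// (24 PPQN) with simple exponential smoothing to tame jitter.
final class MidiClockTempoEstimator {
    private var lastTickTime: TimeInterval?
    private var smoothedInterval: TimeInterval?
    private let smoothing = 0.1   // small = slow, stable adjustment

    // Call once per 0xF8 clock byte, with the tick's arrival time.
    func onClockTick(at time: TimeInterval) {
        defer { lastTickTime = time }
        guard let last = lastTickTime else { return }
        let interval = time - last
        if let current = smoothedInterval {
            // Spread the error out instead of jumping to the new value.
            smoothedInterval = current + smoothing * (interval - current)
        } else {
            smoothedInterval = interval
        }
    }

    // 24 ticks per quarter note, so BPM = 60 / (24 * tick interval).
    var bpm: Double? {
        smoothedInterval.map { 60.0 / (24.0 * $0) }
    }

    // The "clock multiplication": audio samples between ticks, e.g.
    // 918.75 samples per tick at 120 BPM and 44.1 kHz.
    func samplesPerTick(sampleRate: Double) -> Double? {
        smoothedInterval.map { $0 * sampleRate }
    }
}
```

Note this covers only the reactive half; a real implementation would also have to predict future tick times and slew playback toward them, as the post describes.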

  • edited November 2019

    @gregsmith said:

    @wim said:

    @gregsmith said:
    What about ApeMatrix? Is it a viable alternative to AUM in this situation? I don’t have it.

    It says it does ‘precise midi clock in/out’

    Tempo and Start/Stop only. No timeline positioning.

    Ah I see

    Running a few tests with Apematrix I do notice it seems to heed my loop points set in Xequence though, even though it does visualize this in a funny way: I've created a "tune" in Xequence with 8 full bars. I loop bars 5-8. Apematrix visualizes this by saying it is starting every "cycle" on bar 2 and then counts up to 5 (so not 1-4 and not 5-8).

    Digging through old apps I also found Genome, which is a midi-only sequencer. I never really made friends with it at the time, but now I noticed it has sync both in and out listed under its features, apart from being "Ableton-esque" in its way of handling "clips" or "scenes". Initial tests suggest that Genome and Apematrix are worth testing further.

  • @j_liljedahl said:

    MIDI clock was not designed for synchronizing audio timelines; it was designed for sequencers, where 24 PPQN is enough resolution (not many need durations smaller than 1/64th notes!). For audio, however, this slow clock must be "multiplied" to reach the actual sample rate, i.e. corresponding to 1/44100th notes at a 44.1kHz sample rate. This must be done with some smart filtering to keep the clock stable and jitter-free, adjusting to changes slowly and spreading out the error. It's doable in theory but not an easy task to get good results. It has to be done "in advance", not only reacting to the actual MIDI clock ticks coming in but also predicting when they are supposed to come in the future, and then compensating for the error between prediction and reality, again spreading it out to avoid sudden changes. This is the reason so few audio DAWs support MIDI clock slaving.

    MIDI clock has several shortcomings when it comes to this purpose:

    • Very low time resolution, as mentioned above.
    • Tempo is not communicated directly. You need to actually count the MIDI ticks for a while to calculate the tempo.
    • Clock ticks are not "numbered"; all ticks look exactly the same. If one tick is lost for some reason, you're out of sync forever.

    Both IAA/AU host sync and Ableton Link solve all this, by providing exact (64-bit floating point) time information and the current tempo for each and every audio buffer. Host sync also has absolute time position information, which Link currently lacks.

    So, for my benefit: would it be fair to say implementing external sync in an app that "only" handles midi should be a lot simpler than implementing external sync in an app that hosts audio files? And any app that tries to do both still will need to adhere to the clock-multiplier you describe? I'm also assuming an audio app then tries to re-calculate things to 1/48000th notes for 48 kHz sample rates...and that this is what is causing confusion now with the later versions of iOS...

    Another work habit of mine is trying to "solve the right problem", not just solving "a problem", but I also geekily enjoy learning what the actual problem is.

  • @j_liljedahl said:

    @SevenSystems said:

    @wim said:
    You're totally right @SevenSystems.

    On the other hand, I've been listening to host developers resisting MIDI Clock for 20 years now, not wanting to support it because it's inadequate and antiquated. Meanwhile, MIDI remains the only viable standard that works with everything, and hardware has just rolled along with it. They've proclaimed its death imminent since the last century, but nothing has come along to replace it.

    It's never going to change. I give up.

    But! If some brilliant developer, who has a standalone app that's best in class for midi sequencing and two outstanding AU midi control surface apps, decided to port a subset of his outstanding piano roll/timeline midi sequencing app to AU midi, maybe I could someday come back to the quest for a modular "best in class" approach. ;)

    Yes, I understand ;) Not to get too philosophical or off-topic now, but I want to say something more generic here:

    A protocol that deals with solving some kind of problem cannot really be "outdated" unless the problem drastically changes. There's a reason why MIDI is still around in its absolutely original 1983 form almost 40 years later -- it is well-designed and the problems it tries to solve are essentially unchanged. There is no need for something more "modern" because MIDI is not "antiquated".

    Yes, maybe the nowadays XML crowd, for whom storing the number 1234 takes 500 kilobytes of bracket bullshit, would say that MIDI's enormously optimized use of bandwidth is "antiquated". But even then, sending the same information over MIDI rather than over, say, XML, is still 1000 times more efficient, and if you send 1000s of notes or controllers at the same time, you would notice even with today's hardware.

    With that off the table: MIDI sync is also quite well designed, and well-specified! (something that cannot always be said about Ableton Link, for example). The MIDI spec is extremely clear and goes into insane detail for the tiniest things, because it was designed in a different time -- when things (specifications, software, cars, etc.) were built for the future, not for the shareholders! (do I sound like anticapitalista now? I amn't!)

    The same can be said, for example, of the C programming language. It's also essentially unchanged since around 1980 or earlier.

    I mean, it's just ridiculous how a "modern" programming language like Swift has now seen like, what, 5 iterations in 5 years? That's just insane. It doesn't show that the language is "modern" or "current" -- it just means that no thinking-ahead has been put into it, and its designers keep adding and changing stuff because they didn't have a coherent vision from the start.

    Wow, that was a lovely post! :D

    MIDI clock was not designed for synchronizing audio timelines; it was designed for sequencers, where 24 PPQN is enough resolution (not many need durations smaller than 1/64th notes!). For audio, however, this slow clock must be "multiplied" to reach the actual sample rate, i.e. corresponding to 1/44100th notes at a 44.1kHz sample rate. This must be done with some smart filtering to keep the clock stable and jitter-free, adjusting to changes slowly and spreading out the error. It's doable in theory but not an easy task to get good results. It has to be done "in advance", not only reacting to the actual MIDI clock ticks coming in but also predicting when they are supposed to come in the future, and then compensating for the error between prediction and reality, again spreading it out to avoid sudden changes. This is the reason so few audio DAWs support MIDI clock slaving.

    MIDI clock has several shortcomings when it comes to this purpose:

    • Very low time resolution, as mentioned above.
    • Tempo is not communicated directly. You need to actually count the MIDI ticks for a while to calculate the tempo.
    • Clock ticks are not "numbered"; all ticks look exactly the same. If one tick is lost for some reason, you're out of sync forever.

    Both IAA/AU host sync and Ableton Link solve all this, by providing exact (64-bit floating point) time information and the current tempo for each and every audio buffer. Host sync also has absolute time position information, which Link currently lacks.

    Thanks for the detailed post and sorry for my philosophical digression 🙂

    Regarding your list:

    1 - Low time resolution: at least between apps, when timestamps are available, couldn't the smoothing be skipped completely as long as the sender uses packet lists with future timestamps, providing the next few hundred milliseconds of clock ticks in advance? (For instance, Xequence will do this for at least 100 ms, adjustable in its settings; see the sketch after this list.) Also, there will by definition be no jitter.

    2 - Again, while we are talking "between apps", it would be enough to use the time between any two adjacent clock ticks (and their timestamps) to calculate the correct tempo at that time.

    3 - In an ideal (or even "adequate", in my book 😉) world, MIDI connections don't lose packets (even in my old hardware days, I can't remember a single time a MIDI packet was lost over those pesky 31,250 baud 5-pin links! But admittedly that was with the most expensive MIDI interface available 😂). But even then: what about just sending a "synchronization SPP" once per bar? That would deal with any lost clock ticks. Kinda like a keyframe in a video format.

    But as soon as hardware is involved and / or no future packets with timestamps are available, all your points are of course valid. Maybe "MIDI clock sender manufacturers" could at least agree on sending a "sync SPP" once per bar to eliminate problem 3.
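For point 1, a rough sketch of what a sender like Xequence is described as doing: batching clock ticks into a CoreMIDI packet list with future timestamps, so the receiver gets the next stretch of ticks ahead of time. The function name and parameters are assumptions for illustration; the port and destination are set up elsewhere.

```swift
import CoreMIDI

// Illustrative sender: schedule `count` MIDI clock ticks (0xF8) in one
// packet list, each stamped with a future host time. CoreMIDI delivers
// them at their timestamps, so the receiver sees essentially jitter-free
// ticks and can skip the smoothing entirely.
func sendClockTicksAhead(outPort: MIDIPortRef,
                         dest: MIDIEndpointRef,
                         startHostTime: MIDITimeStamp,
                         hostTicksPerMidiTick: MIDITimeStamp,
                         count: Int) {
    let bufferSize = 4096
    let buffer = UnsafeMutableRawPointer.allocate(
        byteCount: bufferSize,
        alignment: MemoryLayout<MIDIPacketList>.alignment)
    defer { buffer.deallocate() }

    let packetList = buffer.bindMemory(to: MIDIPacketList.self, capacity: 1)
    var packet = MIDIPacketListInit(packetList)
    var clockByte: UInt8 = 0xF8

    for i in 0..<count {
        // Each tick gets its own future timestamp.
        let when = startHostTime + MIDITimeStamp(i) * hostTicksPerMidiTick
        packet = MIDIPacketListAdd(packetList, bufferSize, packet,
                                   when, 1, &clockByte)
    }
    MIDISend(outPort, dest, packetList)
}
```

For scale: at 120 BPM, 24 PPQN means 48 ticks per second, so the 100 ms lookahead quoted above is only about 5 ticks per batch.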

  • edited November 2019

    Sounds to my ears like using a standalone midi sequencer with its own clock only makes sense if your interest is pure midi sequencing. This setup is fundamentally flawed if you want to have audio aligned with the midi events.

    A standalone midi sequencer really should slave to an audio host if it has any intention of being used alongside audio tracks for full arrangements.

  • How about a Sync AU which you run in any hosts you wish to keep in sync and song position alignment? One instance could be set to be the ‘master’, controlled by its host, which would then rebroadcast the host sync data to the other instances of the Sync AU, all set to be ‘slaves’, which can then communicate the data to their own hosts.

    The caveats would be a) how to send realtime data to an AU hosted in another app, and b) can an AU feed that data back to its host app? I have no idea, since I’m not an iOS music app dev, but if those two things are possible, any AU host could be the ‘master’ without changes, and only those that want to allow ‘slaving’ would need development to support following the incoming sync data.

  • @j_liljedahl, once again, thanks for the detailed and clear explanation.

    I'd still like to know, since you do sync to Link tempo: could SPP in conjunction with Link not be used to track song position? I know it sounds weird, but basically ignoring the clock and paying attention only to the SPP? I'm sure it would need to be quantized to some division of the Link metronome to keep from getting off time, but it seems to me like it should be possible. After all, you do have positioning by finger swipe on the transport bar already.

  • @TheOriginalPaulB said:
    How about a Sync AU which you run in any hosts you wish to keep in sync and song position alignment? One instance could be set to be the ‘master’, controlled by its host, which would then rebroadcast the host sync data to the other instances of the Sync AU, all set to be ‘slaves’, which can then communicate the data to their own hosts.

    The caveats would be a) how to send realtime data to an AU hosted in another app, and b) can an AU feed that data back to its host app? I have no idea, since I’m not an iOS music app dev, but if those two things are possible, any AU host could be the ‘master’ without changes, and only those that want to allow ‘slaving’ would need development to support following the incoming sync data.

    If I understand what you're saying correctly, unfortunately not.

    AUs can't communicate with each other across hosts. They also have no mechanism to change the tempo of the host. Lastly, there's no common time coordination between hosts that would make that information relevant, other than MIDI Clock and Link, so we're back to where we started.

    Cool idea though.

  • @wim said:

    @TheOriginalPaulB said:
    How about a Sync AU which you run in any hosts you wish to keep in sync and song position alignment? One instance could be set to be the ‘master’, controlled by its host, which would then rebroadcast the host sync data to the other instances of the Sync AU, all set to be ‘slaves’, which can then communicate the data to their own hosts.

    The caveats would be a) how to send realtime data to an AU hosted in another app, and b) can an AU feed that data back to its host app? I have no idea, since I’m not an iOS music app dev, but if those two things are possible, any AU host could be the ‘master’ without changes, and only those that want to allow ‘slaving’ would need development to support following the incoming sync data.

    If I understand what you're saying correctly, unfortunately not.

    AUs can't communicate with each other across hosts. They also have no mechanism to change the tempo of the host. Lastly, there's no common time coordination between hosts that would make that information relevant, other than MIDI Clock and Link, so we're back to where we started.

    Cool idea though.

    Lol. Always enjoyed a bit of fantasy...

  • @wim said:

    @TheOriginalPaulB said:
    How about a Sync AU which you run in any hosts you wish to keep in sync and song position alignment? One instance could be set to be the ‘master’, controlled by its host, which would then rebroadcast the host sync data to the other instances of the Sync AU, all set to be ‘slaves’, which can then communicate the data to their own hosts.

    The caveats would be a) how to send realtime data to an AU hosted in another app, and b) can an AU feed that data back to its host app? I have no idea, since I’m not an iOS music app dev, but if those two things are possible, any AU host could be the ‘master’ without changes, and only those that want to allow ‘slaving’ would need development to support following the incoming sync data.

    If I understand what you're saying correctly, unfortunately not.

    AUs can't communicate with each other across hosts. They also have no mechanism to change the tempo of the host. Lastly, there's no common time coordination between hosts that would make that information relevant, other than MIDI Clock and Link, so we're back to where we started.

    Cool idea though.

    While I agree with you when it comes to AUs from different developers, it seems that @Blue_Mangoo may have found a way with Visual Mixer. All you need to do is place one instance on each track/channel in AUM and all channels show up in one instance... there seems to be some communication going on...

  • edited November 2019

    @mjcouche said:

    @wim said:

    @TheOriginalPaulB said:
    How about a Sync AU which you run in any hosts you wish to keep in sync and song position alignment? One instance could be set to be the ‘master’, controlled by its host, which would then rebroadcast the host sync data to the other instances of the Sync AU, all set to be ‘slaves’, which can then communicate the data to their own hosts.

    The caveats would be a) how to send realtime data to an AU hosted in another app, and b) can an AU feed that data back to its host app? I have no idea, since I’m not an iOS music app dev, but if those two things are possible, any AU host could be the ‘master’ without changes, and only those that want to allow ‘slaving’ would need development to support following the incoming sync data.

    If I understand what you're saying correctly, unfortunately not.

    AUs can't communicate with each other across hosts. They also have no mechanism to change the tempo of the host. Lastly, there's no common time coordination between hosts that would make that information relevant, other than MIDI Clock and Link, so we're back to where we started.

    Cool idea though.

    While I agree with you when it comes to AUs from different developers, it seems that @Blue_Mangoo may have found a way with Visual Mixer. All you need to do is place one instance on each track/channel in AUM and all channels show up in one instance... there seems to be some communication going on...

    But they can’t communicate with other AUs. They are sandboxed. They can access the same sandboxed storage areas, etc., as they’re multiple instances of the same app. They have no access to any other AUs, though, and are totally unaware that other AUs even exist.

  • @gregsmith said:
    I’m thinking about having a go at this, mostly because the DAWs I have are all missing features such as AU MIDI, sidechain, etc. Not being tied to a particular DAW means I can hopefully figure out my own ways of achieving things depending on what each song needs.

    Host

    So I have AB, but I gather AUM can record a session when finished, so it sounds like a better option.

    Piano Roll

    Probably Atom? Or are there better options now?

    Sequencer stuff

    This is the bit I don’t get. If I have several Atom rolls set up, how do I tell one to start when another finishes? Or do I have the wrong idea?

    Also automation - can atom handle this or do I need something else?

    That would be enough to get going I guess. How are people getting on with this type of workflow?

    Signal Chain: route incoming audio into AUM (incoming signals/apps)

    into

    AB (benefit of AB Remote for something like Loopy or other AB apps)

    Piano Roll (several options, but you may want to just look at Gadget midi out control for ease initially)

    Are you trying to "do music a certain way" or do you have a need to do music this way?

    I find that trying to do things a certain way without a need is often a dead end. Just me though.

  • @RUST( i )K

    The whole reason for doing this initially was to use FAC Envolver to control cutoff or gain or whatever on a synth using the audio envelope from another track, as I saw an interview with Jon Hopkins where he does something similar on desktop. No universal iOS DAW could do this.

    Now that I’m into Xequence, AB and AUM, I can’t imagine going back to a traditional DAW.

  • @mjcouche said:

    @wim said:

    @TheOriginalPaulB said:
    How about a Sync AU which you run in any hosts you wish to keep in sync and song position alignment? One instance could be set to be the ‘master’, controlled by its host, which would then rebroadcast the host sync data to the other instances of the Sync AU, all set to be ‘slaves’, which can then communicate the data to their own hosts.

    The caveats would be a) how to send realtime data to an AU hosted in another app, and b) can an AU feed that data back to its host app? I have no idea, since I’m not an iOS music app dev, but if those two things are possible, any AU host could be the ‘master’ without changes, and only those that want to allow ‘slaving’ would need development to support following the incoming sync data.

    If I understand what you're saying correctly, unfortunately not.

    AUs can't communicate with each other across hosts. They also have no mechanism to change the tempo of the host. Lastly, there's no common time coordination between hosts that would make that information relevant, other than MIDI Clock and Link, so we're back to where we started.

    Cool idea though.

    While I agree with you when it comes to AUs from different developers, it seems that @Blue_Mangoo may have found a way with Visual Mixer. All you need to do is place one instance on each track/channel in AUM and all channels show up in one instance... there seems to be some communication going on...

    They can communicate between instances of themselves, but not with other apps. They also can't communicate even with themselves across different hosts, which was what would be required in order to have something that could get other hosts to sync up. Great idea, but can't work, unfortunately.

  • @gregsmith said:
    I’m thinking about having a go at this, mostly because the DAWs I have are all missing features such as AU MIDI, sidechain, etc. Not being tied to a particular DAW means I can hopefully figure out my own ways of achieving things depending on what each song needs.

    Host

    So I have AB, but I gather AUM can record a session when finished, so it sounds like a better option.

    Piano Roll

    Probably Atom? Or are there better options now?

    Sequencer stuff

    This is the bit I don’t get. If I have several Atom rolls set up, how do I tell one to start when another finishes? Or do I have the wrong idea?

    Also automation - can atom handle this or do I need something else?

    That would be enough to get going I guess. How are people getting on with this type of workflow?

    Here is a clearer version of the iPad vs iPhone pic I posted, to illustrate my point and method.

  • @wim said:

    @mjcouche said:

    @wim said:

    @TheOriginalPaulB said:
    How about a Sync AU which you run in any hosts you wish to keep in sync and song position alignment? One instance could be set to be the ‘master’, controlled by its host, which would then rebroadcast the host sync data to the other instances of the Sync AU, all set to be ‘slaves’, which can then communicate the data to their own hosts.

    The caveats would be a) how to send realtime data to an AU hosted in another app, and b) can an AU feed that data back to its host app? I have no idea, since I’m not an iOS music app dev, but if those two things are possible, any AU host could be the ‘master’ without changes, and only those that want to allow ‘slaving’ would need development to support following the incoming sync data.

    If I understand what you're saying correctly, unfortunately not.

    AUs can't communicate with each other across hosts. They also have no mechanism to change the tempo of the host. Lastly, there's no common time coordination between hosts that would make that information relevant, other than MIDI Clock and Link, so we're back to where we started.

    Cool idea though.

    While I agree with you when it comes to AUs from different developers, it seems that @Blue_Mangoo may have found a way with Visual Mixer. All you need to do is place one instance on each track/channel in AUM and all channels show up in one instance... there seems to be some communication going on...

    They can communicate between instances of themselves, but not with other apps. They also can't communicate even with themselves across different hosts, which was what would be required in order to have something that could get other hosts to sync up. Great idea, but can't work, unfortunately.

    Indeed, and even the communication-between-instances is undocumented and unsupported. In other words: Apple don't guarantee that this behavior will keep working and they may take it away at any point in time, without giving reason or warning.

    It isn't even unlikely to happen in the near future, since giving each instance its own 'system sandbox' is much safer than the current setup: one instance of a plugin crashing will then no longer take down all other instances of that same plugin.

  • This is mainly a workflow problem I think.

    Being dawless for me means not using synced audio tracks, but midi sequencing, jamming, triggering/sequencing samples, and sequencing automation, while also doing bits as live as possible and adding live hardware too. You can do a lot with that and can design a track that way, but if you want to add synced audio tracks as well, then you may as well ask yourself why you aren't just using a DAW, since you're basically building a kind of Frankenstein-DAW. The problem is that the iPad format is not the best for this level of detail.

    I'm not saying I don't want to see that workflow more easily achievable in AUM etc.; I think we will. I'm looking forward to SunVox AU; for me this is the missing piece: a modular DAW in AU format. I've been testing with MTR but I'm starting to feel like I'm overcomplicating the dawless act; it gets too fiddly and not fun. The SunVox GUI is great for mobile as it's zoomable etc.

    As soon as I start to work in a DAW type of way on my iPad, I slow down and get frustrated.
    And then I realise I don't have to work this way and I'm forcing it. Audio editing on iPad is terrible compared to desktop and always will be, I think. Fine for a sketch, but not for a precision workflow.

    I record my dawless sequences and sketches into Ableton Live and use that as my DAW for constructing a more complex composition. It works perfectly, no issues, synced with Link. The iPad with Audio Units is a practically perfect dawless sketchpad and sequencer, and Ableton is (for me) a (close to) perfect DAW. :)

  • @j_liljedahl said:

    1) I could add tempo as a MIDI controllable parameter in AUM. The plugin could then send MIDI cc to control the tempo.

    Hi Jonatan, yes please add this! :)
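As a hypothetical sketch of the idea (this is not AUM's actual API; the parameter did not exist at the time of the post), the mapping itself is trivial: scale the 7-bit CC value onto a tempo range.

```swift
// Hypothetical: map an incoming MIDI CC value (0...127) onto a BPM
// range, the way a tempo parameter exposed to MIDI control might work.
func tempo(fromCC value: UInt8,
           minBPM: Double = 40, maxBPM: Double = 240) -> Double {
    let normalized = Double(min(value, 127)) / 127.0
    return minBPM + normalized * (maxBPM - minBPM)
}

// tempo(fromCC: 64) ≈ 140.8 BPM with the assumed 40...240 range
```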

  • @wim said:
    @j_liljedahl, once again, thanks for the detailed and clear explanation.

    I'd still like to know, since you do sync to Link tempo: could SPP in conjunction with Link not be used to track song position? I know it sounds weird, but basically ignoring the clock and paying attention only to the SPP? I'm sure it would need to be quantized to some division of the Link metronome to keep from getting off time, but it seems to me like it should be possible. After all, you do have positioning by finger swipe on the transport bar already.

    Sure, using SPP in combination with Link should be possible. One could then make it so that an SPP message means "at the next Link sync quantum, jump to this position".

    But it would be much better if Link added this, so it could use the same mechanism for communication instead of having to set up both Link and a CoreMIDI virtual endpoint connection etc.
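A sketch of the scheme as described, with illustrative names: Song Position Pointer counts 16th notes (6 MIDI clocks each), so the target position in quarter-note beats is spp / 4, and the jump is deferred to the next Link quantum boundary.

```swift
// SPP counts 16th notes, so convert to quarter-note beats.
func beatsFromSPP(_ spp: UInt16) -> Double {
    Double(spp) / 4.0
}

// Defer the jump to the next quantum boundary of the shared Link
// timeline (a quantum is typically 4 beats, i.e. one bar of 4/4).
func nextQuantumBoundary(afterBeat currentBeat: Double,
                         quantum: Double = 4.0) -> Double {
    (currentBeat / quantum).rounded(.up) * quantum
}

// Example: on receiving SPP = 32 while the Link beat is 13.7,
// the target position is beat 8, applied at Link beat 16.
```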

  • @hellquist said:

    @j_liljedahl said:

    MIDI clock was not designed for synchronizing audio timelines; it was designed for sequencers, where 24 PPQN is enough resolution (not many need durations smaller than 1/64th notes!). For audio, however, this slow clock must be "multiplied" to reach the actual sample rate, i.e. corresponding to 1/44100th notes at a 44.1kHz sample rate. This must be done with some smart filtering to keep the clock stable and jitter-free, adjusting to changes slowly and spreading out the error. It's doable in theory but not an easy task to get good results. It has to be done "in advance", not only reacting to the actual MIDI clock ticks coming in but also predicting when they are supposed to come in the future, and then compensating for the error between prediction and reality, again spreading it out to avoid sudden changes. This is the reason so few audio DAWs support MIDI clock slaving.

    MIDI clock has several shortcomings when it comes to this purpose:

    • Very low time resolution, as mentioned above.
    • Tempo is not communicated directly. You need to actually count the MIDI ticks for a while to calculate the tempo.
    • Clock ticks are not "numbered"; all ticks look exactly the same. If one tick is lost for some reason, you're out of sync forever.

    Both IAA/AU host sync and Ableton Link solve all this, by providing exact (64-bit floating point) time information and the current tempo for each and every audio buffer. Host sync also has absolute time position information, which Link currently lacks.

    So, for my benefit: would it be fair to say implementing external sync in an app that "only" handles midi should be a lot simpler than implementing external sync in an app that hosts audio files? And any app that tries to do both still will need to adhere to the clock-multiplier you describe? I'm also assuming an audio app then tries to re-calculate things to 1/48000th notes for 48 kHz sample rates...and that this is what is causing confusion now with the later versions of iOS...

    Yes, correct.

  • @j_liljedahl said:

    @wim said:
    @j_liljedahl, once again, thanks for the detailed and clear explanation.

    I'd still like to know, since you do sync to Link tempo: could SPP in conjunction with Link not be used to track song position? I know it sounds weird, but basically ignoring the clock and paying attention only to the SPP? I'm sure it would need to be quantized to some division of the Link metronome to keep from getting off time, but it seems to me like it should be possible. After all, you do have positioning by finger swipe on the transport bar already.

    Sure, using SPP in combination with Link should be possible. One could then make it so that an SPP message means "at the next Link sync quantum, jump to this position".

    But it would be much better if Link added this, so it could use the same mechanism for communication instead of having to set up both Link and a CoreMIDI virtual endpoint connection etc.

    Cool, thanks for at least thinking it over. Definitely, it would be better if it existed in Link. It seems very odd that it doesn’t, considering how long MIDI has had it, and considering they set out to do something better than it to begin with.

    AUM would be ahead of other hosts in this respect if it did get SPP+Link in the meantime, though. Just sayin’. ;)

  • edited December 2019

    In case anyone cares, here’s my first dawless release

  • @SevenSystems said:

    And as mentioned earlier, Xequence 2 has a double life anyway and actually IS already very similar to NS2, but the audio features are cleanly separated from the sequencer and hidden behind a switch (in code), so nobody has to be afraid of interference there -- as long as that switch is disabled, Xequence stays true to its pure and focused sequencer personality ;)

    (when I make music nowadays, I make it completely "in the box" in Xequence 2, including all sounds, effects and mastering. Not even AUs. The modular synth can really do anything anyway if I'm not too lazy to wire something up, and as all FX modules are available both as inserts AND as modules inside the modular, I don't think there's a lot of stuff that can't be done inside that box :)) But for my conservative 4/4 EDM, I don't need anything fancy anyway :D )

    This is already implemented?!?
    :o

    And can be activated by a switch / toggle?

    Is that already available in the betas?

  • @gregsmith said:
    In case anyone cares, here’s my first dawless release

    First reaction.. kept increasing the volume because I wanted to feel that low-end.. and that intro.. had to rewind it after the 1st minute because I needed to experience it again.. killer.. that building pulse was just screaming to end.. I would love to hear this on a serious sound system because my Sony 7506 headphones aren't quite cutting it.. the tones here are really something.. 👍 I can't get enough of the low ones.. great sound choices and arrangement.. love it!

  • @royor said:

    @gregsmith said:
    In case anyone cares, here’s my first dawless release

    First reaction.. kept increasing the volume because I wanted to feel that low-end.. and that intro.. had to rewind it after the 1st minute because I needed to experience it again.. killer.. that building pulse was just screaming to end.. I would love to hear this on a serious sound system because my Sony 7506 headphones aren't quite cutting it.. the tones here are really something.. 👍 I can't get enough of the low ones.. great sound choices and arrangement.. love it!

    Thanks man! 👍

  • @gregsmith said:
    In case anyone cares, here’s my first dawless release

    Cool, nice, well done! It sounds great!
    I have to admit that I actually found the intro a bit too long, but that might be me; I'm in a phase right now where I try to cut down my own intros to avoid making my tunes too long. Great pump/beat when it gets going, with good width and still some dynamic depth left.

    It sounds like there is some side-chaining going on there too; how did you achieve that? Side-chaining (or even dance stuff) isn't really something I'm trying to do myself, but I've seen comments elsewhere about it being tricky to achieve between apps, so I'm curious about it more from a music-creation technical kind of perspective. :)

  • @hellquist said:

    @gregsmith said:
    In case anyone cares, here’s my first dawless release

    Cool, nice, well done! It sounds great!
    I have to admit that I actually found the intro a bit too long, but that might be me; I'm in a phase right now where I try to cut down my own intros to avoid making my tunes too long. Great pump/beat when it gets going, with good width and still some dynamic depth left.

    It sounds like there is some side-chaining going on there too; how did you achieve that? Side-chaining (or even dance stuff) isn't really something I'm trying to do myself, but I've seen comments elsewhere about it being tricky to achieve between apps, so I'm curious about it more from a music-creation technical kind of perspective. :)

    You’re right about the side chaining - I did it using Bleass sidekick. I triggered it with an otherwise silent midi drum track in Xequence 2 which was routed straight to AUM and into sidekick from there. That allowed me to have a Xequence track that could include some, all, or none of the drum hits. Not exactly the same as routing the actual drum audio through a compressor, but close enough for what I needed.
