Is Swift enough for developing a MIDI-based app?

I know similar questions have been asked before. But I was wondering about some specifics and the current state of Swift.

I'm not a programmer; I have some basic experience with Python and Haskell. I have some ideas for a MIDI-centric app — as any sort of musical live coding is impossible right now on iOS, I'd like to create something that's close to live coding at least in spirit and flow: an interface for applying 'functions' to outgoing MIDI messages. Sometimes I really like to write my music rather than point&click it :D It'll still be touch-based rather than code-based, but visually more text-centric.

Is Swift enough for something like that? Should I also learn Obj-C (is it worth learning it at all at this point)? C? Or just immerse myself in C++?
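
Roughly, the kind of thing I have in mind, just as a sketch: pure transformations over note events that could later be hooked up to Core MIDI. The NoteEvent type and the transform names below are made up purely for illustration, and nothing here sends any MIDI yet.

    import Foundation

    // Hypothetical event type, just to illustrate applying functions to
    // outgoing MIDI. Pure value transformations, no Core MIDI involved.
    struct NoteEvent {
        var note: UInt8      // 0-127
        var velocity: UInt8  // 0-127
        var beat: Double     // position in beats
    }

    typealias MIDITransform = ([NoteEvent]) -> [NoteEvent]

    // Transpose every note by a number of semitones.
    func transpose(_ semitones: Int) -> MIDITransform {
        return { events in
            events.map { e in
                var e = e
                e.note = UInt8(clamping: Int(e.note) + semitones)
                return e
            }
        }
    }

    // Add a simple echo: repeat each note later and quieter.
    func echo(delayBeats: Double, decay: Double) -> MIDITransform {
        return { events in
            events + events.map { e in
                var e = e
                e.beat += delayBeats
                e.velocity = UInt8(clamping: Int(Double(e.velocity) * decay))
                return e
            }
        }
    }

    // Compose transforms left to right, like a little function pipeline.
    func pipeline(_ transforms: MIDITransform...) -> MIDITransform {
        return { events in
            transforms.reduce(events) { acc, transform in transform(acc) }
        }
    }

    let phrase = [NoteEvent(note: 60, velocity: 100, beat: 0),
                  NoteEvent(note: 64, velocity: 90, beat: 1)]
    let processed = pipeline(transpose(7), echo(delayBeats: 0.5, decay: 0.6))(phrase)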


Comments

  • It’s a good question.

  • You could just try it out yourself: write a small app that emits MIDI notes in a fixed sequence, with two controls: start point and time between notes.
    Play an audio loop or a metronome in another app.
    Adjust your app’s MIDI notes so they exactly hit the background beat.

    Now do whatever comes to mind on your iPad and listen to your sequence... do your notes run off the audio grid, do they drift, or do they remain stable?
    (Instead of the controls you could also choose a fixed pattern and an audio file with an identical pattern, adjusting the audio file’s start point.) A rough sketch of such a test app follows.
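
    Something along these lines could serve as that test app. Just a sketch: it uses the older MIDISourceCreate / MIDIPacketList API, only sends note-ons stamped "now", and skips all error handling.

        import Foundation
        import CoreMIDI

        // Minimal test sequencer: publishes a virtual MIDI source and fires a
        // fixed note pattern at an adjustable interval. Point a synth or
        // metronome app at "TestSeq Out" and listen for drift.
        final class TestSequencer {
            private var client = MIDIClientRef()
            private var source = MIDIEndpointRef()
            private var timer: DispatchSourceTimer?
            private let notes: [UInt8] = [60, 62, 64, 65]   // the fixed sequence
            private var index = 0

            var interval: TimeInterval = 0.5                // "time between notes" control

            init() {
                MIDIClientCreate("TestSeq" as CFString, nil, nil, &client)
                MIDISourceCreate(client, "TestSeq Out" as CFString, &source)
            }

            func start(after startOffset: TimeInterval) {   // "start point" control
                let t = DispatchSource.makeTimerSource(queue: .global(qos: .userInteractive))
                t.schedule(deadline: .now() + startOffset, repeating: interval, leeway: .nanoseconds(0))
                t.setEventHandler { [weak self] in self?.sendNextNote() }
                t.resume()
                timer = t
            }

            private func sendNextNote() {
                let noteOn: [UInt8] = [0x90, notes[index % notes.count], 100]
                index += 1
                var packetList = MIDIPacketList()
                let packet = MIDIPacketListInit(&packetList)
                _ = MIDIPacketListAdd(&packetList, MemoryLayout<MIDIPacketList>.size,
                                      packet, 0, noteOn.count, noteOn)
                MIDIReceived(source, &packetList)           // timestamp 0 = "now"
                // A real test would also send note-offs and timestamp events ahead of time.
            }
        }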

  • Any code running on a realtime thread must be C or C++, and it must be realtime-safe. This includes MIDI, especially AUv3 MIDI.

  • @j_liljedahl said:
    Any code running on a realtime thread must be C or C++, and it must be realtime-safe. This includes MIDI, especially AUv3 MIDI.

    Got it

  • You should still be able to do this in the style of Mozaic, where instead of a custom scripting language you’re using Swift. And then the C/C++ real-time thread is just a shim that’s marshaling data and commands to and from the Swift thread. This will surely introduce latency, jitter, and a bunch of other issues I’m not thinking of at the moment, but it should at least allow you to experiment with functional reactive programming in Swift. I’d encourage looking at Mozaic and its design limitations first, to get a sense of what likely is or isn’t possible if you swap out its scripting layer for something more powerful in Swift.

    Edit: another design consideration with this approach is whether you need truly real-time responsiveness from the Swift end, or if you’re willing to batch up timestamped messages and commands. Also, it’s possible that Mozaic isn’t actually marshaling data and commands to another thread, but instead is “compiling” the scripting language down to something that is run entirely on the real-time thread, without any cross-thread communication.
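
    For what it’s worth, the batching idea might look something like this on the Swift side. Purely a sketch: TimedMIDIMessage and MIDIEmitter are made-up names, and the consumer behind MIDIEmitter would still be the C/C++ shim described above.

        import Foundation

        // Sketch of "batch up timestamped messages": the Swift layer decides
        // what to play a little ahead of time; a lower-level layer emits the
        // batch with accurate host-time stamps. All names are hypothetical.
        struct TimedMIDIMessage {
            var hostTime: UInt64      // mach host time at which to emit
            var bytes: [UInt8]        // raw MIDI bytes, e.g. [0x90, 60, 100]
        }

        protocol MIDIEmitter {
            // Implemented by the realtime layer; it should copy the batch into
            // preallocated storage rather than holding on to Swift objects.
            func enqueue(_ batch: [TimedMIDIMessage])
        }

        final class ControlLayer {
            let emitter: MIDIEmitter
            let lookAhead: UInt64     // look-ahead, in host-time ticks

            init(emitter: MIDIEmitter, lookAhead: UInt64) {
                self.emitter = emitter
                self.lookAhead = lookAhead
            }

            // Called periodically (timer, display link, ...) well before the
            // events are due, so jitter on this thread never reaches the output.
            func tick(pattern: [UInt8]) {
                let base = mach_absolute_time() + lookAhead
                let step: UInt64 = 24_000_000          // arbitrary spacing for the sketch
                let batch = pattern.enumerated().map { i, note in
                    TimedMIDIMessage(hostTime: base + UInt64(i) * step,
                                     bytes: [0x90, note, 100])
                }
                emitter.enqueue(batch)
            }
        }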

  • Since timing is an issue in iOS MIDI, a good start could be the sync engine repo provided by the very owner of this forum:
    https://github.com/michaeltyson/TheSpectacularSyncEngine

    If this isn't a good demo of realtime-safe MIDI code then I don't know what is 😉

  • @Michael
    Since
    https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine
    is retired now but people are still having various issues with AudioKit apps, I wonder which one would be the better choice today?

  • Just offer @j_liljedahl a few bucks to write it for you.
    I hear he’s got plenty of time on his hands and running out of things to code :p

  • In my humble opinion / limited experience you can do a reasonable amount with Core MIDI and Swift. FWIW, all the MIDI stuff in my app touchscaper (including the sequencer) is in Swift.

    Sounds like what you want to do will entail a lot of tinkering with MIDI packets. Gene De Lisa has done a lot of investigative work on using Swift with Core MIDI, as there is, to the best of my knowledge, zero material about it in any Apple tech resource.

    If this looks a bit scary, then you're right, it is, a bit. You kinda get used to it eventually :smile:

    rockhoppertech.com/blog/core-midi-midipacket-midipacketlist-and-builders-2/
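
    For anyone curious, the basic packet-building dance from that post looks roughly like this in Swift. A sketch only: it assumes a client already created with MIDIClientCreate, creates the output port inline for brevity, and does no error checking.

        import Foundation
        import CoreMIDI

        // Wrap raw MIDI bytes in a MIDIPacketList and send them to the first
        // destination. In a real app you'd create the output port once and
        // let the user pick the destination.
        func sendNoteOn(client: MIDIClientRef, note: UInt8, velocity: UInt8) {
            var outPort = MIDIPortRef()
            MIDIOutputPortCreate(client, "Out" as CFString, &outPort)

            guard MIDIGetNumberOfDestinations() > 0 else { return }
            let destination = MIDIGetDestination(0)

            let bytes: [UInt8] = [0x90, note, velocity]     // note-on, channel 1
            var packetList = MIDIPacketList()
            let packet = MIDIPacketListInit(&packetList)
            _ = MIDIPacketListAdd(&packetList, MemoryLayout<MIDIPacketList>.size,
                                  packet, mach_absolute_time(), bytes.count, bytes)
            MIDISend(outPort, destination, &packetList)
        }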

  • It's been a long time since I've done a MIDI application with macOS (AFAIK, iOS is still basically the same in this area) but I'm with @moodscaper.

    My MIDI app for OS X was a live performance app that would have a lot of overlap with a live coding type scenario. The entire thing was written in very much non-realtime safe Objective-C. It could handle things like Wacom tablets and game controllers generating breath and pressure controls with no hiccups.

    A little care with Swift and I don't think you will have an issue.

    If you are doing MIDI code in an AUv3 setting, though, as @j_liljedahl says, that's a very different thing. If you get anywhere near the audio threads or the AU implementation, Swift is basically not going to work in any of the callbacks.

  • @rs2000 said:
    @Michael
    Since
    https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine
    is retired now but people are still having various issues with AudioKit apps, I wonder which one would be the better choice today?

    I don’t know! I’m personally using TAAE2, which I keep up to date. The only reason it’s “retired” is because I didn’t want to spend as much time as I was supporting it 😁

  • Swift is not ready for realtime yet though; it’s extraordinarily difficult to write anything that doesn’t violate one or more of the realtime coding principles. Real-time parts should really be written in C or C++. Unless you’re developing just a toy app and it doesn’t really matter if it glitches 😄

  • C/C++ it is then

  • @Michael said:

    @rs2000 said:
    @Michael
    Since
    https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine
    is retired now but people are still having various issues with AudioKit apps, I wonder which one would be the better choice today?

    I don’t know! I’m personally using TAAE2, which I keep up to date. The only reason it’s “retired” is because I didn’t want to spend as much time as I was supporting it 😁

    Perfect, thanks! 👏🏼

  • @AlmostAnonymous said:
    Just offer @j_liljedahl a few bucks to write it for you.
    I hear he’s got plenty of time on his hands and running out of things to code :p

    :p

  • @Michael said:
    Swift is not ready for realtime yet though; it’s extraordinarily difficult to write anything that doesn’t violate one or more of the realtime coding principles. Real-time parts should really be written in C or C++. Unless you’re developing just a toy app and it doesn’t really matter if it glitches 😄

    What if you’re specifically writing a glitch plugin? Would programming it in Swift just make it a generative glitch app?

  • @AlmostAnonymous said:

    @Michael said:
    Swift is not ready for realtime yet though; it’s extraordinarily difficult to write anything that doesn’t violate one or more of the realtime coding principles. Real-time parts should really be written in C or C++. Unless you’re developing just a toy app and it doesn’t really matter if it glitches 😄

    What if you’re specifically writing a glitch plugin? Would programming it in Swift just make it a generative glitch app?

    Haha, perfect! As a plus it'll affect any other audio apps you're currently using, to spread the glitchy goodness!

  • I need more glitchy goodness. MOAR!

  • @Michael said:

    @AlmostAnonymous said:

    @Michael said:
    Swift is not ready for realtime yet though; it’s extraordinarily difficult to write anything that doesn’t violate one or more of the realtime coding principles. Real-time parts should really be written in C or C++. Unless you’re developing just a toy app and it doesn’t really matter if it glitches 😄

    What if you’re specifically writing a glitch plugin? Would programming it in Swift just make it a generative glitch app?

    "spread the glitchy goodness!"

    sounds like musical herpes

  • My ex, a software architect, tried extremely hard to convince me I don’t need to know C/C++ to program with audio… I’m still not coding anything, I literally just ended up confused.

  • @tehskwrl said:
    My ex, a software architect, tried extremely hard to convince me I don’t need to know C/C++ to program with audio… I’m still not coding anything, I literally just ended up confused.

    It depends on what you are trying to do with the audio, and on the system you are developing the audio application for. To do DSP for live audio on iOS or macOS you are going to have to use a language that lets you meet the timing constraints of the (soft) realtime threads you are working in. There are lots of languages that can meet the requirements, but C++ is pretty much the easiest. It's also important to note that neither C nor C++ guarantees that what you do will be correct on the realtime threads; it's just that in those languages it's fairly easy to restrict yourself to operations with guaranteed runtime bounds. The same can't be said for most other languages and their libraries. Swift is actually fairly close to being usable, but there are still some things, even in the low-level libraries that interface with the underlying C libraries, that don't have guaranteed upper bounds on runtime, and that makes them basically unusable for RT work.

    The other big thing is that most of the DSP/audio code out there in the world for learning purposes is written in C and C++. That includes all the open source projects that are good for learning from too. It's really useful to learn C and at least the foundations of C++ to do audio work.

    But, you can do realtime audio work in a language that's as high level as something like Faust. You can even compile Faust into VST's and AU's that you can use in your DAW. It's a bit of a pain for iOS but it is still doable if you have Xcode on a Mac. (You could do this on an iPhone too if Apple didn't block the possibility of running compilers or JIT on iOS.) If you are doing your DSP on a Mac, Windows, Linux, or BSD box, then something like Faust is a really good way to get to program your own audio processing tools in a domain specific language that makes everything much more doable.

    To bring this around to the original question, there are live coding setups based around Haskell for doing music performance involving DSP. Haskell is about as far from C/C++ as you are going to get (with maybe the exception of Prolog).

    So, your ex was basically correct. But, it does depend on what and where you are trying to do the audio programming.

  • TLDR; it depends :wink:

    You can write "audio apps" in Swift that are also AUv3 compatible. Could you write a MIDI sequencer in Swift? Yes, you can. Could you write a shimmer reverb AUv3 in Swift? No. You could not. Or rather, you should not.

    But I think the OP question was about MIDI, not audio processing, right? I have various MIDI projects planned and I also have a few working prototypes that are 100% Swift.

  • @moodscaper said:
    But I think the OP question was about MIDI, not audio processing, right? I have various MIDI projects planned and I also have a few working prototypes that are 100% Swift.

    Are you writing Swift code on the audio thread? (for AUv3 apps you'd have to). If so, how can you be sure the code is realtime-safe? Or are you using another mechanism to do the scheduling (I read CADisplayLink works well)?
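
    For reference, the CADisplayLink approach usually looks something like this: wake up once per frame and hand off anything due before the next frame, stamped with its exact time. Just a sketch; the send closure stands in for whatever packet-sending code you already have.

        import Foundation
        import QuartzCore

        // Sketch of display-link-driven scheduling: the callback runs at UI
        // rate, but events are stamped for their exact time, so frame jitter
        // doesn't reach the MIDI output. send is a placeholder closure.
        final class DisplayLinkScheduler: NSObject {
            private var link: CADisplayLink?
            private var pending: [(time: CFTimeInterval, bytes: [UInt8])] = []
            private let send: (_ bytes: [UInt8], _ hostTime: UInt64) -> Void

            init(send: @escaping (_ bytes: [UInt8], _ hostTime: UInt64) -> Void) {
                self.send = send
                super.init()
            }

            func start() {
                let link = CADisplayLink(target: self, selector: #selector(tick))
                link.add(to: .main, forMode: .common)
                self.link = link
            }

            func schedule(bytes: [UInt8], at time: CFTimeInterval) {
                pending.append((time: time, bytes: bytes))
            }

            @objc private func tick(_ link: CADisplayLink) {
                // Dispatch everything due before the next frame, stamped for
                // its exact time rather than "whenever this callback woke up".
                let horizon = link.targetTimestamp
                let due = pending.filter { $0.time < horizon }
                pending.removeAll { $0.time < horizon }
                for event in due {
                    let delay = max(0, event.time - CACurrentMediaTime())
                    send(event.bytes, mach_absolute_time() + hostTicks(fromSeconds: delay))
                }
            }

            private func hostTicks(fromSeconds seconds: CFTimeInterval) -> UInt64 {
                var info = mach_timebase_info_data_t()
                mach_timebase_info(&info)
                return UInt64(seconds * 1_000_000_000) * UInt64(info.denom) / UInt64(info.numer)
            }
        }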

  • @NeonSilicon said:
    ... Haskell is about as far from C/C++ as you are going to get (with maybe the exception of Prolog).

    Wow, you even know what Prolog is :+1:
    (probably the only language that lacks an automatic syntax colorizer in the usual editor suspects) :o

  • I wish I could just use Tidalcycles on iOS but that’s not gonna happen :D

  • @NeonSilicon said:

    But, you can do realtime audio work in a language that's as high level as something like Faust. You can even compile Faust into VST's and AU's that you can use in your DAW. It's a bit of a pain for iOS but it is still doable if you have Xcode on a Mac. (You could do this on an iPhone too if Apple didn't block the possibility of running compilers or JIT on iOS.) If you are doing your DSP on a Mac, Windows, Linux, or BSD box, then something like Faust is a really good way to get to program your own audio processing tools in a domain specific language that makes everything much more doable.

    Faust always compiles to realtime safe C++.
    I use it in all my AU3FX and AUFX apps.

  • @Telefunky said:

    @NeonSilicon said:
    ... Haskell is about as far from C/C++ as you are going to get (with maybe the exception of Prolog).

    Wow, you even know what Prolog is :+1:
    (probably the only language that lacks an automatic syntax colorizer in the usual editor suspects) :o

    I had a job once where we worked in Prolog. It was fun to use and a bit mind bending to think in.

    @j_liljedahl said:

    @NeonSilicon said:
    [...]

    Faust always compiles to realtime safe C++.
    I use it in all my AU3FX and AUFX apps.

    Faust will compile to WASM, Rust, C, and even directly to LLVM IR now. A DSL using C as an intermediate step used to be pretty typical. It seems that's being replaced with WASM and LLVM these days.

    I've been playing with Faust for use with little DSP boards like Daisy. The Faust -> C/C++ path is really useful for this.

  • @moodscaper said:
    TLDR; it depends :wink:

    You can write "audio apps" in Swift that are also AUv3 compatible. Could you write a MIDI sequencer in Swift? Yes, you can. Could you write a shimmer reverb AUv3 in Swift? No. You could not. Or rather, you should not.

    But I think the OP question was about MIDI, not audio processing, right? I have various MIDI projects planned and I also have a few working prototypes that are 100% Swift.

    Yeah, I still agree with you on this. In most MIDI generation applications you aren't going to be touching any realtime processing. Swift should be fine for these and you aren't going to disrupt the audio thread doing it this way.

    There are a couple of new methods in Core MIDI with iOS 14, MIDIDestinationCreateWithProtocol and MIDIInputPortCreateWithProtocol, that involve a MIDIReceiveBlock that is run in a Core MIDI thread. The docs say that this thread is "high priority," but they don't describe the restrictions on use beyond that. I'd do some pretty heavy testing with this path before using it from Swift. But even there, it's pretty easy to write a couple of very lightweight C functions to call from the Swift block to do any safe RT work you need. I've tested this even in the AUv3 audio processing callback and it works fine.
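
    A receive port on that path might look roughly like this in Swift. A sketch with no error handling, assuming an existing client and a source endpoint to connect; the word unpacking only covers MIDI 1.0-protocol channel-voice messages, and it relies on the unsafeSequence() helper from the iOS 14 CoreMIDI Swift overlay.

        import CoreMIDI

        // The receive block runs on Core MIDI's "high priority" thread, so it
        // only unpacks the words and hands them to handle, a stand-in for your
        // own processing (ideally pushed to another thread or a queue).
        func makeInputPort(client: MIDIClientRef,
                           source: MIDIEndpointRef,
                           handle: @escaping (_ status: UInt8, _ data1: UInt8, _ data2: UInt8) -> Void) -> MIDIPortRef {
            var port = MIDIPortRef()
            MIDIInputPortCreateWithProtocol(client, "In" as CFString, ._1_0, &port) { eventList, _ in
                for packet in eventList.unsafeSequence() {
                    // MIDI 1.0-protocol channel-voice messages arrive as one
                    // 32-bit UMP word: type/group, status, data1, data2.
                    let word = packet.pointee.words.0
                    handle(UInt8((word >> 16) & 0xFF),   // status, e.g. 0x90 = note on
                           UInt8((word >> 8) & 0xFF),    // data 1 (note number)
                           UInt8(word & 0xFF))           // data 2 (velocity)
                }
            }
            MIDIPortConnectSource(port, source, nil)
            return port
        }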

  • @NeonSilicon said:

    @Telefunky said:

    @NeonSilicon said:
    ... Haskell is about as far from C/C++ as you are going to get (with maybe the exception of Prolog).

    Wow, you even know what Prolog is :+1:
    (probably the only language that lacks an automatic syntax colorizer in the usual editor suspects) :o

    I had a job once where we worked in Prolog. It was fun to use and a bit mind bending to think in.

    @j_liljedahl said:

    @NeonSilicon said:
    [...]

    Faust always compiles to realtime safe C++.
    I use it in all my AU3FX and AUFX apps.

    Faust will compile to WASM, Rust, C, and even directly to LLVM IR now. A DSL using C as an intermediate step used to be pretty typical. It seems that's being replaced with WASM and LLVM these days.

    Oh, I see! I haven’t been following the development of Faust the last years.

  • @NeonSilicon said:

    @moodscaper said:
    TLDR; it depends :wink:

    You can write "audio apps" in Swift that are also AUv3 compatible. Could you write a MIDI sequencer in Swift? Yes, you can. Could you write a shimmer reverb AUv3 in Swift? No. You could not. Or rather, you should not.

    But I think the OP question was about MIDI, not audio processing, right? I have various MIDI projects planned and I also have a few working prototypes that are 100% Swift.

    Yeah, I still agree with you on this. In most MIDI generation applications you aren't going to be touching any realtime processing. Swift should be fine for these and you aren't going to disrupt the audio thread doing it this way.

    There are a couple of new methods in Core MIDI with iOS 14, MIDIDestinationCreateWithProtocol and MIDIInputPortCreateWithProtocol, that involve a MIDIReceiveBlock that is run in a Core MIDI thread. The docs say that this thread is "high priority," but they don't describe the restrictions on use beyond that. I'd do some pretty heavy testing with this path before using it from Swift. But even there, it's pretty easy to write a couple of very lightweight C functions to call from the Swift block to do any safe RT work you need. I've tested this even in the AUv3 audio processing callback and it works fine.

    Yes, CoreMIDI threads are “high priority” but not the audio thread. So blocking there might not have the same catastrophic consequences as in the audio thread, but it all adds up (between apps) and can lead to late events / bad timing.

    AUv3 MIDI is in the audio thread. Any time spent there eats into the shared buffer duration of the current render cycle. So if you want to use Swift or Obj-C, push the events to a realtime-safe queue (such as Michael’s TPCircularBuffer) and dispatch them from the C renderBlock.
