Is Swift enough for developing MIDI-based apps?


Comments

  • edited July 2022

    @NeonSilicon said:

    @realdawei said:
    The emphasis on SwiftUI at WWDC was kind of blunt. I wouldn’t ignore it.

    Write a Metal-based spectrum visualizer in SwiftUI.

    The emphasis on SwiftUI at WWDC in the non-technical talks is basically Apple trying to convince devs to use SwiftUI to write iPhone apps instead of React Native or Dart/Flutter. I'm not going to use either of those two to write a Metal-based app either. At least at this point, SwiftUI is not appropriate for entire classes of applications. You choose the tools to do the job. You can write what are basically forms-based, web-like apps in SwiftUI. You can't write a 3D game in it.

    Not saying it’s imminent, I think they are telegraphing the future. Seems like a warning shot.

  • @realdawei said:

    [...]

    Not saying it’s imminent, I think they are telegraphing the future. Seems like a warning shot.

    The warning shot will be the actual deprecation mark in the docs. After deprecation, it will be a decade before they kill UIKit. How long ago did they deprecate IAA? And IAA is a much less important dev tool than UIKit; UIKit underpins every Apple application on iOS. Apple will clearly and distinctly mark UIKit as deprecated long before there is any reason to worry about UIKit going away. I'd give it a 50/50 shot that I'll be dead before that happens. Meanwhile, I absolutely can't use SwiftUI for the dev I do, because it is not capable enough at this point.

  • To try to make the real-world state of AUv3 dev on iOS clear, here are the two things you must implement as far as the UI is concerned:

    https://developer.apple.com/documentation/audiotoolbox/auaudiounitfactory?language=objc
    https://developer.apple.com/documentation/coreaudiokit/auviewcontroller?language=objc

    The first is a protocol that defines how the AUv3 instance will be created. The second is a subclass of UIViewController on iOS or NSViewController on macOS; these are UIKit and AppKit classes, respectively. The AUAudioUnitFactory protocol is almost always implemented in the derived AUViewController subclass. Note that I linked the Objective-C docs; there is a good reason for that. These are defined in Objective-C, and the Swift versions are bridges.
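
    In Swift, the minimal shape of those two pieces looks roughly like this (a sketch only; MyAudioUnit stands in for your own AUAudioUnit subclass):

        import AudioToolbox
        import CoreAudioKit

        // Sketch: an AUViewController subclass (a UIViewController on iOS,
        // an NSViewController on macOS) that also adopts AUAudioUnitFactory.
        public class MyAudioUnitViewController: AUViewController, AUAudioUnitFactory {
            var audioUnit: AUAudioUnit?

            // The host calls this, via the extension machinery, to create the AU instance.
            public func createAudioUnit(with componentDescription: AudioComponentDescription) throws -> AUAudioUnit {
                let au = try MyAudioUnit(componentDescription: componentDescription, options: [])
                audioUnit = au
                return au
            }
        }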

    This is the real-world state of development for audio on iOS and macOS. There is zero chance of this changing until you hear of AUv4.

  • @NeonSilicon said:

    [...]

    Jesus ok.

  • edited July 2022

    @NeonSilicon said:

    @wahnfrieden said:

    @NeonSilicon said:
    https://developer.apple.com/documentation/uikit/app_and_environment/building_a_desktop-class_ipad_app
    This would be the most important new direction for iPadOS. All of the new navigation and "multitasking" features for iPad are UIKit.

    This is what I referred to as essentially UIKit reaching SwiftUI parity on new platform behaviors/functions

    I don't follow what you mean. These are things that are new to iPadOS that are only possible using UIKit. This is UIKit well ahead of SwiftUI.

    Sorry I don't see the nav/multitasking functionality that SwiftUI lacks, can you give a couple specifics? In terms of the end-user behavior/experience not the specific APIs which are of course different

  • Hello everyone.

    I have just finished the prototype of my own MIDI sequencer. On my computer it works with webmidijs, so it's JavaScript. I love the workflow I came up with, and would love to build a full-fledged iOS app.
    I'm in no rush; I know this will probably take me years.
    But I don't really know where to start, and I don't even own a Mac. I use Linux. Is there a way for me to develop the thing on Linux?
    Can anyone give me any advice that gets me closer to my goal? Thank you!

  • @wahnfrieden said:

    [...]

    Sorry I don't see the nav/multitasking functionality that SwiftUI lacks, can you give a couple specifics? In terms of the end-user behavior/experience not the specific APIs which are of course different

    These are new features being added to iPadOS 16 to support the "pro" app usage that everyone has been calling for. The APIs do define what the functionality is going to be, but this WWDC 2022 session is a starting point: https://developer.apple.com/videos/play/wwdc2022/10069/

    If you browse through the UIKit and related docs with the beta markings turned on, you'll see that there has been a ton of stuff added to UIKit (and AppKit). These are new features being added, not things to bring UIKit up to parity with SwiftUI. Right now, there are a large number of major features of iOS and macOS that aren't available directly in SwiftUI. For example, and most importantly for this discussion, writing an AUv3.

  • @octaviu5 said:

    [...]

    There is some dev work that can be done on Windows and Linux, especially if you use some of the third-party dev tools, but to release an iOS program to the App Store you'll have to build on a Mac. For on-device testing and using the simulators, I think you also must have a Mac.

  • edited July 2022

    @NeonSilicon said:

    [...]

    These are new features being added to iPadOS 16 to support the "pro" app usage that everyone has been calling for. The APIs do define what the functionality is going to be, but this WWDC 2022 session is a starting point: https://developer.apple.com/videos/play/wwdc2022/10069/

    I'm still digging in, but many of the beta markings for UIKit I saw, and the topics of this video, are I believe also on offer in SwiftUI. All this multi-window navigation, command, bar button, and search stuff is in SwiftUI as well.

    AUv3 is low priority, so I'm not surprised to see SwiftUI not focusing on audio applications. I suspect we'll see some interesting audio API development with realityOS forthcoming, or in an iteration after the launch.

    There'll certainly be a long tail of applications, and some forever applications, where it makes sense to drop down to layers like Metal directly for performance and control reasons. It does look like SwiftUI will stop offering the ability to "drop down" to those layers as UIKit stops being used to back as many of the new iOS/iPadOS functions, which is scary for a developer used to being able to control things very flexibly via UIKit etc.

  • @wahnfrieden said:

    [...]

    I'm still digging in, but many of the beta markings for UIKit I saw, and the topics of this video, are I believe also on offer in SwiftUI. All this multi-window navigation, command, bar button, and search stuff is in SwiftUI as well.

    I don't see the new features in SwiftUI. I also can't find any sessions on them for SwiftUI from the 2022 WWDC. There were at least three sessions for UIKit and desktop-class apps on iPadOS.

    They definitely have added some new features to SwiftUI for iPadOS 16, but they mainly look to be enhancements to toolbars and buttons and access to share sheets and related things. Most of the navigation additions I see are macOS only.

    The main documentation link for the changes is here, https://developer.apple.com/xcode/swiftui/

    One of the interesting additions is updates to using SwiftUI within UIKit apps. They've added the ability to use SwiftUI to construct cells for things like table views in UIKit. That's actually a pretty slick use of SwiftUI.
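
    As I understand the new API, it lets you do something roughly like this (a quick sketch with placeholder content):

        import SwiftUI
        import UIKit

        // Sketch: using iOS 16's UIHostingConfiguration to build a collection view
        // cell's content with SwiftUI from inside a UIKit data source.
        func configure(_ cell: UICollectionViewCell, with title: String) {
            cell.contentConfiguration = UIHostingConfiguration {
                HStack {
                    Image(systemName: "waveform")
                    Text(title)
                    Spacer()
                }
            }
        }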

    AUv3 is low priority, so I'm not surprised to see SwiftUI not focusing on audio applications. I suspect we'll see some interesting audio API development with realityOS forthcoming, or in an iteration after the launch.

    I don't think AUv3 is low priority to Apple. It is used all over the place within iOS and macOS. They actually can't let it fall out of sync or you won't get any audio out of anything on iOS.

    You can already get a feel for what's coming in AR/VR by looking into the current docs for RealityKit, https://developer.apple.com/documentation/realitykit?changes=latest_minor

    It's interesting to note that RealityKit is neither SwiftUI nor UIKit. It has its own architecture based on an entity-component-system model.

    There'll certainly be a long tail of applications, and some forever applications, where it makes sense to drop down to layers like Metal directly for performance and control reasons. It does look like SwiftUI will stop offering the ability to "drop down" to those layers as UIKit stops being used to back as many of the new iOS/iPadOS functions, which is scary for a developer used to being able to control things very flexibly via UIKit etc.

    SwiftUI isn't built on top of UIKit now. It's built on the C-based Foundation stuff that UIKit and AppKit also use. There is support in SwiftUI for using UIKit inside a SwiftUI-based app for those things that SwiftUI can't do, like MapKit, MetalKit, and WebKit. At this point, Apple is adding more support for using UIKit from SwiftUI.
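
    For those cases the standard pattern is a representable wrapper, roughly like this (a quick sketch wrapping WKWebView):

        import SwiftUI
        import WebKit

        // Sketch: hosting a WebKit (UIKit-world) view inside a SwiftUI hierarchy.
        struct WebView: UIViewRepresentable {
            let url: URL

            func makeUIView(context: Context) -> WKWebView {
                WKWebView()
            }

            func updateUIView(_ webView: WKWebView, context: Context) {
                webView.load(URLRequest(url: url))
            }
        }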

    Why would Apple remove developer functionality that they use in all of their own apps?

  • To sum it up, you don't need Swift if you want to develop music apps for iOS. You'll need Obj-C for audio (AUv3) and proper MIDI realtime stuff.

    Am I correct?

  • @Max_Free said:
    To sum it up, you don't need Swift if you want to develop music apps for iOS. You'll need Obj-C for audio (AUv3) and proper MIDI realtime stuff.

    Am I correct?

    Mostly. You can do almost everything from Objective-C if you want to; there is no requirement to use Swift. The one huge caveat is that the runtime environment for Objective-C is also not safe for realtime dev. You have to take care not to do anything in any RT thread, and the audio thread in particular, that ties you to the Objective-C runtime. So you can't use any Objective-C classes or (ARC-managed) memory in the audio thread. The easiest way to do this is to use Objective-C's/Objective-C++'s ability to interface directly with C or C++ and then write all of the RT-related code in C or C++.

  • edited July 2022

    I may be wrong, but it seems to me that the main conclusion of this thread so far is wrong, the conclusion being that Swift shouldn't be used for real-time audio.

    In the process of creating modular synthesis components for VisualSwift I've ended up creating 3 audio engines:

    1) based on C functions that get called from Objective-C
    2) GPU based
    3) 100% Swift

    The 100% Swift version is my latest and I think the one I will end up including with VisualSwift. It is working very well, not crashing and giving the same performance that I'd be able to achieve with C or even assembler ( in the past I've created a compiler for an audio DSL in a windows app called SynthMaker, it generated assembler from a high level language )

    I've done extensive profiling with Instruments, every little change I've profiled to find out the implications on performance and came to many conclusions (most of them obvious in hindsight).

    A few of the findings were:

    a) a call made to a Swift static function seems to have a negligible effect on performance; my Swift audio engine runs one sample at a time, so there are many function calls happening, yet they never appear in the top of the list of bottlenecks.

    b) don't use class variables (you can, but it has an impact on performance); instead use pointers. For example, inside a class you can store a pointer to an array of floats as var ptrToFloats: UnsafeMutablePointer<Float>? and an index into that float array as var myVarBufferIndex: Int?; you can then use .pointee to read the value in that memory location or to change it. I find that this gives the exact same performance as C or assembler (see the sketch after this list).

    c) you can pass Swift closures around and call them very fast as long as they don't use self inside; you can capture the pointers I mentioned before as closure-captured variables so you get access to the data. Actually, you can use self with [weak self], but it has some impact on performance.

    d) this is very important: use SIMD instructions. They are supported inside Swift in a friendly way and there is no need to go deeper into C or assembler to access them. I use a lot of simd_float16, simd_int16 and SIMDMask<SIMD16>; they're for operating on 16 floats, 16 ints, and 16 bools at the same time. You can do all kinds of operations on them. As the audio code is mainly based on SIMD instructions, I think there is very little room for the compiler to behave differently whether going from Swift to assembler or from C to assembler.
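
    A rough sketch of what I mean by (b) and (d), with made-up names, would be something like:

        import simd

        // The buffer is allocated once, outside the audio thread; the render code
        // only touches raw pointers and SIMD values, so no ARC traffic or allocation
        // happens per sample block.
        final class VoiceState {
            let ptrToFloats: UnsafeMutablePointer<Float>   // allocated up front

            init(capacity: Int) {
                ptrToFloats = UnsafeMutablePointer<Float>.allocate(capacity: capacity)
                ptrToFloats.initialize(repeating: 0, count: capacity)
            }

            deinit { ptrToFloats.deallocate() }
        }

        // Process 16 samples at a time with simd_float16; called from the render loop.
        @inline(__always)
        func applyGain(_ ptr: UnsafeMutablePointer<Float>, count: Int, gain: Float) {
            var i = 0
            while i + 16 <= count {
                var block = simd_float16(repeating: 0)
                for j in 0..<16 { block[j] = (ptr + i + j).pointee }   // read via .pointee
                block *= simd_float16(repeating: gain)                 // 16 multiplies at once
                for j in 0..<16 { (ptr + i + j).pointee = block[j] }   // write back
                i += 16
            }
        }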

    I'm not finished with optimisations, but so far I get a PolyBLEP square oscillator to run 16 voices over a 30-second period taking 130ms of processing time in total, all in Swift.

    Staying in Swift is great: you can go a lot further, with higher productivity, concentrating more on the application itself. If you write code that compiles to the same assembler you would get from C, then I see no difference. I think the issues mentioned with ARC, allocation of memory inside the audio thread, etc. are not a language issue; they can be avoided in Swift using unsafe pointers and so on.

    The GPU audio experience was very interesting: I was able to have many hundreds of voices running, and it looked very promising, but now I don't recommend it, as the GPU is not made for audio. For example, when you run an audio compute shader on the GPU and your app goes to the background, iOS shuts down the GPU processing, as it assumes you're using it for visuals. Setting up the command buffer is also clearly not made for audio; it takes a bit too long, especially if you want to wait for it to compute (if you don't, you're then creating a lag of one buffer).

    The GPU has one very interesting advantage: it compiles the code at run time, which allowed me to create a component for users to type in their own custom code.

    The SwiftUI vs UIKit discussion is very interesting; in my day job I've been doing SwiftUI since the day it came out. VisualSwift is mostly implemented in SwiftUI. I think it could maybe be 100% implemented in SwiftUI (well, except when you want to host an AudioUnit's view controller), which I tried in initial versions, but I found it a lot easier to sometimes go down to UIKit, partly because I had more experience with it and partly because SwiftUI wasn't mature enough at the time. I also don't think that Apple will deprecate UIKit; I avoid it, but I'm really glad it exists.

  • @Jorge said:
    I may be wrong, but it seems to me that the main conclusion of this thread so far is wrong, the conclusion being that Swift shouldn't be used for real-time audio.

    [...]

    The last time I looked carefully, which was when I was doing this silly demo,

    https://github.com/NeonSilicon/Demo_Volume_AUv3

    there were still calls within Swift that operate on UnsafePointer and UnsafeMutablePointer that are not guaranteed not to make copies of the data. There were discussions on the Swift development forum about the issues. That's why I did the Swift-callable C functions to prep the UnsafePointers before handing them off to the Accelerate libraries. There were, and probably still are, efforts to build an RT-safe subset of Swift, but the indication from the Apple audio team at this time is still that this is something you shouldn't trust.

    The problem isn't the speed of Swift-compiled executables; Swift is plenty fast to do audio in. The issue is that Swift doesn't have a set of structures and methods that are guaranteed and documented to be RT-safe.

  • edited July 2022

    @NeonSilicon said:

    [...]

    Thanks for the info. I don't want to find out at the end of my efforts that I'm on the wrong path; it would be great to know the definitive answer to this. Maybe it's a question of implementing it, stress-testing it, and if it never crashes then it's all good. For example, in general it's not safe to access a float variable from different threads, as such accesses are not atomic, but I think that depends very much on the device, and that on Apple's CPUs it's fine to do so. A language like Faust that is made for many different types of devices needs to worry about it, but maybe on iPadOS we don't. Maybe it's a case where, even if it's not documented anywhere that it's safe to do so, it still is.

    EDIT: This Swift Forums thread about using Swift for real-time audio looks very interesting: https://forums.swift.org/t/realtime-threads-with-swift/40562/44
    There is a very interesting video in it where Taylor Holliday explores at which point Swift becomes unsafe for real time. My conclusion is that it's possible to write realtime-safe Swift, although it's also possible not to. In the example in that video, Swift becomes unsafe when he creates an array of floats inside the audio thread. He says that Objective-C is also unsafe, although it has a very clear subset (i.e. C) that makes it easier to separate out the realtime-safe part of Objective-C. He's developing a Swift function attribute (@realtime) that checks the generated IR to find out whether a function is realtime-safe.
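
    To illustrate the kind of thing that video flags, here is my own simplified comparison:

        // Not realtime-safe: allocates a Swift Array on every render call, which can
        // hit the heap allocator (and potentially locks) in the audio thread.
        func renderUnsafe(frameCount: Int) -> [Float] {
            var samples = [Float](repeating: 0, count: frameCount)
            for i in 0..<frameCount { samples[i] = 0.5 }
            return samples
        }

        // Safer pattern: write into a buffer that was allocated ahead of time,
        // so the render call itself does no allocation.
        func renderSafe(into buffer: UnsafeMutablePointer<Float>, frameCount: Int) {
            for i in 0..<frameCount { (buffer + i).pointee = 0.5 }
        }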

  • @Jorge said:

    [...]

    Thanks for the link to the thread. It's not all that hard to write non-RT-safe code in C, and it's pretty easy in C++, so having a function attribute like that in Swift would be pretty slick.

    One of my concerns with using Swift in the audio thread is that Swift is still in development, although the big changes are slowing down. I still don't feel comfortable with the idea of using Swift functions and structures without them being officially marked as safe, because even with solid testing an update might break something I've done.

    BTW, the vDSP and BLAS calls in Accelerate look to me to actually be the C functions, with the Swift bridge going straight to C. I don't think there is any Swift involved in the library. These should be safe. I've done some experimenting where I wrote some C support functions that dealt with initial memory setup and then used only Accelerate functions for the audio processing, and this worked without issue using Swift directly for the AU kernel.
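
    Roughly the kind of thing I mean (a simplified sketch; the buffers are assumed to have been allocated and set up ahead of time):

        import Accelerate

        // Sketch: calling a vDSP routine directly from Swift on pre-allocated buffers.
        // vDSP_vsmul is the plain C function; the Swift call is a direct bridge to it.
        func scaleBuffer(_ input: UnsafePointer<Float>,
                         _ output: UnsafeMutablePointer<Float>,
                         frameCount: Int,
                         gain: Float) {
            var g = gain
            vDSP_vsmul(input, 1, &g, output, 1, vDSP_Length(frameCount))
        }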

    I'm going to go check out that thread on swift.org now.

  • @rs2000 said:
    @Michael
    Since
    https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine
    is retired now but people are still having various issues with AudioKit apps, I wonder which one would be the better choice today?

    Honestly, it may sound a bit harsh, but I would encourage everyone to keep their hands away from AudioKit and instead go with something that is rock-solid by design from the bottom up and doesn't teach devs bad habits from the beginning: even the now-discontinued Amazing Audio Engine, or better, the still-supported and actively developed JUCE (which has a plethora of tutorials on YouTube).

    AudioKit was one big mistake in iOS app development; it spawned into existence a lot of unreliable, unstable, poorly coded plugins. The vast majority of iOS plugins with which I had troubles were built with AudioKit, and I don't think that's a coincidence. Even their flagship Synth One still has, years after it was reported, a completely wrong implementation of env/LFO > filter modulation (and this bug was then copied into a few more synths built with AK).

  • edited July 2022

    @NeonSilicon said:

    You can already get a feel for what's coming in AR/VR by looking into the current docs for RealityKit, https://developer.apple.com/documentation/realitykit?changes=latest_minor

    Yep, been tracking this closely; I even happen to know someone on that team at Apple. Pretty exciting to see next year what they'll have been working on for a solid 2+ years without us seeing it, since it was absent from this year's WWDC.

    It's interesting to note that RealityKit is neither SwiftUI nor UIKit. It has its own architecture based on an entity-component-system model.

    There's a lot they're signaling with RealityKit, but I expect some more radical development from realityOS's announcement as a new platform/OS that builds on these libs but integrates them and adds to them in new ways. Specifically, we have yet to see what Apple will do between SwiftUI and realityOS (which is more or less confirmed if you look beyond Apple PR), and I don't think RealityKit is the whole story there.

    SwiftUI isn't built on top of UIKit now. It's built on the C-based Foundation stuff that UIKit and AppKit also use. There is support in SwiftUI for using UIKit inside a SwiftUI-based app for those things that SwiftUI can't do, like MapKit, MetalKit, and WebKit. At this point, Apple is adding more support for using UIKit from SwiftUI.

    SwiftUI absolutely uses a bunch of shared UIKit components... you can easily go look inside your views (a random article on how to do this: https://betterprogramming.pub/how-to-access-the-uikit-components-under-swiftui-objects-4a808568014a). The same goes for SwiftUI reusing AppKit or AppKit internals for many things on macOS. It sounds like you haven't looked 'under the hood' much beyond Apple's official words.

    BTW, the desktop-class UIKit materials you shared: all that stuff is in SwiftUI too; they did not launch substantial new platform capabilities exclusively on UIKit. The reason that article only talks about UIKit is that there are separate docs for SwiftUI that weren't written in the same way / with the same headline.

  • edited July 2022

    @dendy said:

    [...]

    I understand AudioKit's reputation as described. I see their devs or others make kinda vague statements about many of these earlier issues having been solved, and that a bunch of the popular apps like L7 were built on earlier revisions and haven't transitioned to new components/architectures that address fundamental issues... but I haven't been able to track down many specifics to verify how much has been corrected, or to know whether there are new demos/apps that make use of the supposedly more stable current platform. I would love to believe it, there's a lot given up with JUCE or the unmaintained one. I need to choose one for a new project...

    update: I contacted AudioKit directly

  • @wahnfrieden said:

    [...]

    SwiftUI absolutely uses a bunch of shared UIKit components... you can easily go look inside your views (a random article on how to do this: https://betterprogramming.pub/how-to-access-the-uikit-components-under-swiftui-objects-4a808568014a). The same goes for SwiftUI reusing AppKit or AppKit internals for many things on macOS. It sounds like you haven't looked 'under the hood' much beyond Apple's official words.

    BTW, the desktop-class UIKit materials you shared: all that stuff is in SwiftUI too; they did not launch substantial new platform capabilities exclusively on UIKit. The reason that article only talks about UIKit is that there are separate docs for SwiftUI that weren't written in the same way / with the same headline.

    I did actually do quite a bit of profiling of SwiftUI apps when I was learning to use it. I know that some of the capability is based on UIKit/AppKit. I also know that a whole bunch of it isn't. The main important things I saw that came from UIKit were the base windowing, events, and gesture recognition. My assumption and hope is that everything in the View hierarchy that is currently based on UIKit implementations is replaced as soon as possible. The SwiftUI-Introspect package from the article is interesting, but there is no way I would use any of that in a production application.

    I've gone through the "What's New in SwiftUI", the two "SwiftUI on iPad", and the "SwiftUI Navigation Cookbook" sessions for WWDC 2022 and I don't see the pieces I'm talking about. But let's just assume I'm wrong and that support for the new multitasking and multi-screen features has been added to SwiftUI. I fully expect everything that is required to be added to SwiftUI anyway. The point I was trying to make was that UIKit has had all of this work done on it to support all of the latest iPad capabilities. Why would they spend thousands and thousands of hours adding any of this to a framework that they were going to kill?

  • @NeonSilicon said:

    @Jorge said:

    [...]

    I'm going to go check out that thread on swift.org now.

    I did go read through the thread and watched the video on the proposed idea. It's an interesting idea, but it seems to be a long way off even if it would be possible to catch all of the potential calls that would break RT safety.

    The bigger takeaway I had was that Apple hasn't provided any way to interact with the OS workgroup frameworks through Swift, and that the workgroup threading stuff in Audio Workgroups is all C-based. There was even a mention from one of the Swift people that trying to compile any of the workgroup code within Swift would throw up an error. I'd take that as a signal that you still shouldn't try to do any RT processing in Swift.

  • edited July 2022

    @NeonSilicon said:
    Why would they spend thousands and thousands of hours adding any of this to a framework that they were going to kill?

    It seems simple enough to me: SwiftUI isn't ready enough for everyone, including Apple's internal teams, to suddenly switch over to it to gain access to new platform capabilities, and their platform evolution can't afford to wait on SwiftUI's maturity. That's part of what makes multi-year transitions like their (now explicit) SwiftUI vision so expensive.

    I think they recoup some of this double effort by having some of the new SwiftUI capabilities reuse the same UIKit internals... and when the internals are completely new and SwiftUI-native, they're not getting backported to UIKit.

  • @wahnfrieden said:

    [...]

    Hmm, maybe. That's not the use of resources that I'd choose if I were planning a full transition. The development of SwiftUI has been going on for four or five years now, depending on how long they worked on it before release. I'd expect all development on it to be pure SwiftUI at this point.

  • edited July 2022

    So then what do you think it means that they are in fact still using many UIKit/AppKit classes underneath? Where do you think that leads in 3-5 years?

    I expect UIKit to last under the hood for at least as long as Apple is still committed enough to UIKit to extend it with strategic new platform capabilities, ones needed by their existing UIKit properties or needing mass adoption without waiting on the dev community to move fully to SwiftUI. They're going for breadth on SwiftUI support to make it quickly mature enough for devs; I'm not surprised they look for opportunities for simple reuse of well-known components where those serve the requirements. Apple does operate with constrained engineering/product/design resources.

    This kind of long-term migration strategy is known as the "strangler pattern" in engineering: move everything to the new interface so that everything keeps running, while you then swap out the underlying components with your new architecture piece by piece. It allows early adoption and iterative value delivery. Apple can just look pretty slow-moving if you scrutinize individual components, like why on earth List is still backed by the ancient UITableViewController (or similar), but when you're just a team within a company you have to weigh rewriting that full UITableViewController component, and for what value exactly, against launching something new like charts.
