Still no multi-core support for audio processing on iOS?

vov
edited September 2019 in General App Discussion

I know there was a thread about this, but I wonder if anything's changed since then.
I read that all the audio processing of all the music apps at any given moment is locked to a single core. As I understand it, that's the main reason why all the modern iPads, with their powerful processors, underperform Intel-based systems by so much.
So I wonder whether anyone has been able to implement multi-core processing for audio, especially in a host app.
And why is it so much harder than on Intel-based platforms? Is there an architectural difference in Apple's processors that makes it impossible?


Comments

  • I'm not sure where this comes from, but it is confused at the very least. The realtime audio render thread works the same way in macOS and iOS. At least, it works the same way on iOS now as it did when I was working on OS X. I don't think that it has changed on macOS. A quick search indicates to me that it is still working the same way.

    Importantly, this thread has very high priority and is basically the reason that audio works so well on iOS and macOS.

    The next point is that as a developer, you can make as many audio processing threads as you want. The only limitation is that you have to feed the bits of data to the rendering thread in a lock free manner. I launch multiple threads on AU's that I write.

    Another point is that the UI definitely runs on multiple threads. A typical AU of mine will have six to eight threads running at any given time.
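
A note for readers following the technical point above: the usual way to "feed the bits of data to the rendering thread in a lock-free manner" is a single-producer/single-consumer FIFO. The sketch below is illustrative only; the class name and the sample-by-sample interface are made up for brevity, and real code moves whole buffers.

```cpp
// Minimal single-producer/single-consumer FIFO: a worker thread pushes
// samples, the realtime render callback pops them. No locks, no
// allocation, no blocking on either side; capacity is fixed up front.
#include <atomic>
#include <cstddef>
#include <vector>

class SpscSampleFifo {
public:
    explicit SpscSampleFifo(std::size_t capacity)
        : buffer_(capacity), capacity_(capacity) {}

    // Producer (worker) thread only.
    bool push(float sample) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % capacity_;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                        // full: caller retries later
        buffer_[head] = sample;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Consumer (realtime render) thread only.
    bool pop(float& sample) {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                        // empty: caller outputs silence
        sample = buffer_[tail];
        tail_.store((tail + 1) % capacity_, std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buffer_;
    const std::size_t capacity_;
    std::atomic<std::size_t> head_{0};           // written only by the producer
    std::atomic<std::size_t> tail_{0};           // written only by the consumer
};
```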

  • @NeonSilicon said:
    I'm not sure where this comes from, but it is confused at the very least. The realtime audio render thread works the same way in macOS and iOS. At least, it works the same way on iOS now as it did when I was working on OS X. I don't think that it has changed on macOS. A quick search indicates to me that it is still working the same way.

    Importantly, this thread has very high priority and is basically the reason that audio works so well on iOS and macOS.

    The next point is that as a developer, you can make as many audio processing threads as you want. The only limitation is that you have to feed the bits of data to the rendering thread in a lock free manner. I launch multiple threads on AU's that I write.

    Another point is that the UI definitely runs on multiple threads. A typical AU of mine will have six to eight threads running at any given time.

    It came from this thread.

    https://forum.audiob.us/discussion/31881/apps-that-use-ipad-multicore-support

    Are you saying that on macOS all the audio processing runs on one core?

  • McD
    edited September 2019

    Too long... don't read.

    There have been discussions of this topic on the forum before. The question assumes some benefits from multi-core hardware that don't really apply to "realtime" processing.

    When CPUs started to hit the limits of the physics of how closely components can be packed on silicon chips, system designers started making chips with multiple cores to allow the system to run many tasks in parallel. Spreading the logic of an application over these cores requires some tradeoffs.

    Ideally, a multi-processor (or multi-threaded) application has "batch features".
    It's like the checkout lines at the supermarket: more throughput by adding more checkouts in service. When the workload on the system is low, shut down idle lanes.

    So, music is rarely rendered in parallel batches without extreme attention to synchronization of the audio events. It could be done, but it would likely be done in a really sophisticated DAW, for example, and we have enough trouble getting that level of engineering focused on iOS as it is. Too many pros will buy the more capable Macs that can run 1000 tracks at a time, so the $500 DAW engineering teams are busy competing in that space for the pro dollars.

    It's possible Apple could invest in putting the Logic team on creating an amazing iOS DAW and telling the iOS engineers what they need to change to allow for perfect synchronization, but
    the ROI seems hard to justify unless someone creates an Android product that we would all rather use over the rock-solid platform we have now for mobile (low-cost) use cases.

    Now, a real developer might step in and add real insights here.
    SUMMARY: even with 16 cores, you will likely only benefit from the clock rate of one of them for a music app. You want that one core to be all yours and locked down.

    For mobile devices, Apple is looking at managing system power and heat by shutting down cores and dropping the clock rate, both of which could potentially create bugs for realtime use cases where you'd want iOS to ensure your core is not throttled or dropped if the app is idle for a short period of time and misses processing a realtime input.

    I look forward to revisiting the conversation rather than finding the old thread and starting it again. Forum practice lately has been to restart a thread from 2017 labeled "Help me pick the right iPhone".

  • @McD said:
    Too long... don't read.

    So, you say macOS uses multiple cores for audio processing after all. The problem is that I max out my latest iPad Pro with only about 5 well-processed tracks, not hundreds, without any other apps running. I tried different DAWs for that. So if multi-core audio processing were in place on iOS, it would give me some breathing room.

  • @vov said:

    Are you saying that on macOS all the audio processing runs on one core?

    No, I'm saying that on iOS the audio processing doesn't have to run on only one core. The realtime thread that an AU lives on works the same in both settings. But, in either OS, the AU developer can launch and use more threads than the given thread that the audio callback is called on (the realtime audio thread).

    The base system on the two OSes is somewhat different because of the limitations of memory and energy, and the control over the audio system that is needed for a phone to work. I haven't checked into how much (if any) iPadOS changes this for iPads. The main point I want to make is that it is definitely possible to use multicore support for audio in iOS. It often isn't needed though.
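
As an aside on how such extra audio threads are usually run: on Apple platforms, a worker that feeds the render thread is typically promoted to the same "time constraint" (realtime) scheduling class, via the Mach thread-policy API. A hedged sketch; the period/computation split shown here is illustrative and would really be derived from the host's buffer duration.

```cpp
// Promote the calling thread to Mach "time constraint" (realtime)
// scheduling so it can keep up with the render thread it feeds.
#include <cstdint>
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <mach/thread_policy.h>
#include <pthread.h>

static bool promoteToTimeConstraint(double periodSeconds) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    // Mach ticks per second: 1e9 ns * denom / numer.
    const double ticksPerSecond = 1e9 * static_cast<double>(tb.denom) / tb.numer;
    const uint32_t period = static_cast<uint32_t>(periodSeconds * ticksPerSecond);

    thread_time_constraint_policy_data_t policy;
    policy.period      = period;         // one render interval, in mach ticks
    policy.computation = period / 2;     // expected work per interval (illustrative)
    policy.constraint  = period;         // hard bound on when the work must finish
    policy.preemptible = 1;

    const kern_return_t kr = thread_policy_set(
        pthread_mach_thread_np(pthread_self()),
        THREAD_TIME_CONSTRAINT_POLICY,
        reinterpret_cast<thread_policy_t>(&policy),
        THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    return kr == KERN_SUCCESS;
}
```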

  • vov
    edited September 2019

    @NeonSilicon said:

    @vov said:

    Are you saying that on macOS all the audio processing runs on one core?

    No, I'm saying that on iOS the audio processing doesn't have to run on only one core. The realtime thread that an AU lives on works the same in both settings. But, in either OS, the AU developer can launch and use more threads than the given thread that the audio callback is called on (the realtime audio thread).

    The base system on the two OSes is somewhat different because of the limitations of memory and energy, and the control over the audio system that is needed for a phone to work. I haven't checked into how much (if any) iPadOS changes this for iPads. The main point I want to make is that it is definitely possible to use multicore support for audio in iOS. It often isn't needed though.

    That's about how I understand it. But I wonder why it hasn't been implemented on iOS yet, while AUv3s have been in place for quite a while. Multi-core audio processing is badly needed to utilize their capabilities. There must be some serious obstacles.

  • dendy
    edited September 2019

    @NeonSilicon
    But, in either OS, the AU developer can launch and use more threads than the given thread that the audio callback is called on (the realtime audio thread).

    Of course, for non-realtime code like UI and other stuff like that. Actually, you cannot NOT use other threads, because it is impossible to do some operations (UI-related, storage-related, networking: all kinds of operations which can potentially be thread-locking) in the realtime audio thread; you have to run them on other threads.

    But for realtime audio (DSP code) you have to use the realtime audio thread, which is just one, shared by all running apps (plugins, host, other standalone apps).
    So it means ALL DSP code (which is usually 95% of the CPU demands of an audio app) runs on the same single thread all the time (because you need to be sample-accurate with all DSP audio calculations).

  • @vov said:
    So, you say macOS uses multiple cores for audio processing after all.

    Yes. macOS has support for programming multi-threaded apps. iOS does not have that system design.

    I'll repeat the critical requirement: the app developer has to own synchronization to use more than one core in perfect sync. Apple's operating system has to provide "locks" and "timers" to allow that to happen. iOS doesn't have those features (yet?). A developer will likely weigh in here with specifics.

    Creating, maintaining and supporting apps of this level of complexity requires Apple and the developer to do the required work for synchronizing realtime event processing and audio track alignment.

    iPad iOS does not and will not have the features of state-of-the-art desktop operating systems.

    Maybe a mobile Linux will emerge that changes the game. It's likely the only way out of Apple's walled garden. Android probably won't, because it's provided on too many device types to address this narrow use case. But a realtime mobile Linux for audio applications might happen, since Linux development is done by programmers who work without the usual ROI factors that drive Apple, Google, or MS. They write code because they want to be able to use it for their work.

  • @dendy said:

    @NeonSilicon
    But, in either OS, the AU developer can launch and use more threads than the given thread that the audio callback is called on (the realtime audio thread).

    Of course, for non-realtime code like UI and other stuff like that. But for realtime audio (DSP code) you have to use the realtime audio thread, which is just one, shared by all running apps (plugins, host, other standalone apps).
    So it means ALL DSP code (which is usually 95% of the CPU demands of an audio app) runs on the same single thread all the time (because you need to be sample-accurate with all DSP audio calculations).

    As far as I know, audio processing can be spread across the cores on other systems.

  • @dendy said:

    @NeonSilicon
    But, in either OS, the AU developer can launch and use more threads than the given thread that the audio callback is called on (the realtime audio thread).

    Of course, for non-realtime code like UI and other stuff like that. But for realtime audio (DSP code) you have to use the realtime audio thread, which is just one, shared by all running apps (plugins, host, other standalone apps).
    So it means ALL DSP code (which is usually 95% of the CPU demands of an audio app) runs on the same single thread all the time (because you need to be sample-accurate with all DSP audio calculations).

    No, you don't. I run DSP code on other threads that I launch. I can also definitely say that there are situations where the UI of an AU can take more processing than the audio. I have one in particular, written just for myself, where I had to move the display to Metal because the UI would totally kill my iPhone 6. The DSP code on that one did absolutely nothing in comparison.

  • @McD said:

    @vov said:
    So, you say macOS uses multiple cores for audio processing after all.

    Yes. macOS has support for programming multi-threaded apps. iOS does not have that system design.

    I'll repeat the critical requirement: the app developer has to own synchronization to use more than one core in perfect sync. Apple's operating system has to provide "locks" and "timers" to allow that to happen. iOS doesn't have those features (yet?). A developer will likely weigh in here with specifics.

    Creating, maintaining and supporting apps of this level of complexity requires Apple and the developer to do the required work for synchronizing realtime event processing and audio track alignment.

    iPad iOS does not and will not have the features of state-of-the-art desktop operating systems.

    Maybe a mobile Linux will emerge that changes the game. It's likely the only way out of Apple's walled garden. Android probably won't, because it's provided on too many device types to address this narrow use case. But a realtime mobile Linux for audio applications might happen, since Linux development is done by programmers who work without the usual ROI factors that drive Apple, Google, or MS. They write code because they want to be able to use it for their work.

    So you're saying the main reason is that iOS, unlike other systems, doesn't provide what you called "locks and timers"? That's why there's no multi-core audio processing on iOS? That's why developers can't implement it?

  • @vov said:

    So you're saying the main reason is that iOS, unlike other systems, doesn't provide what you called "locks and timers"? That's why there's no multi-core audio processing on iOS? That's why developers can't implement it?

    No, there are locks and timers in iOS. I use them in my UI's and control code. You don't want to use any locks in DSP code though.

  • By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.
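
To illustrate the "no locks in DSP code" point: the usual alternative is to publish control values through atomics, so the render code reads them without ever being able to block. A rough C++ sketch; the parameter struct and the trivial one-pole filter are invented for the example, not taken from any particular plug-in.

```cpp
#include <atomic>
#include <cmath>

// Control values written by the UI/control thread, read by the render callback.
struct FilterParams {
    std::atomic<float> cutoffHz{1000.0f};
};

struct OnePoleState {
    float z = 0.0f;   // filter memory
};

// UI / control thread: no mutex, just an atomic store.
void setCutoff(FilterParams& params, float hz) {
    params.cutoffHz.store(hz, std::memory_order_relaxed);
}

// Realtime render callback: atomic load, then pure arithmetic.
// No locks, no allocation, no system calls.
void renderBlock(const FilterParams& params, OnePoleState& state,
                 float* inOut, int frames, float sampleRate) {
    constexpr float kTwoPi = 6.2831853f;
    const float cutoff = params.cutoffHz.load(std::memory_order_relaxed);
    const float g = 1.0f - std::exp(-kTwoPi * cutoff / sampleRate);  // one-pole coefficient
    for (int i = 0; i < frames; ++i) {
        state.z += g * (inOut[i] - state.z);   // simple low-pass, processed in place
        inOut[i] = state.z;
    }
}
```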

  • vov
    edited September 2019

    @NeonSilicon said:
    By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.

    Ok, there’s some conflicting information. But the question is why Windows for example can utilize multicore audio processing, but iOS can’t. At least I don’t see any evidence of that.

  • edited September 2019

    @NeonSilicon
    I run DSP code on other threads that I launch.

    Hm, that's weird... What kind of magic do you use, then, to ensure all calculations are sample-accurate? To my knowledge, all sample-accurate calculations need to be done in the main realtime audio thread, which is just one... In the end, all your audio calculations need to end up in that audio thread, which is the only one called by the iOS core at exactly the same moment every buffer cycle.

    At least that's how I understand it, based on this article: https://atastypixel.com/blog/four-common-mistakes-in-audio-development/

  • @vov said:

    @NeonSilicon said:
    By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.

    Ok, there’s some conflicting information. But the question is why Windows for example can utilize multicore audio processing, but iOS can’t. At least I don’t see any evidence of that.

    I think you are misunderstanding some key points that @NeonSilicon made and mis-summarizing them.

    He has tried to point out to you that:

    • iOS audio apps can use multiple processes/cores
    • the audio render thread itself -- the one that handles the actual playback is single-threaded.
    • if an app is careful, it can create threads that do some audio processing (ahead of time) in threads other than the render thread. When that happens, they send the result to the render thread.

    I think I have that straight.

    Because of the nature of mobile computing and running on so little power, iOS (and probably other mobile OSs, but I don't really know much about them) has to manage CPU speed differently than on desktops and laptops. I think that this necessitates some differences in how realtime processes are treated.
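
To make the third point in that summary concrete: the render-thread end of the handoff is usually nothing more than popping whatever the worker managed to prepare and substituting silence when it hasn't kept up, so the realtime thread never waits on anything. A tiny sketch, reusing the hypothetical SpscSampleFifo from earlier in the thread:

```cpp
// Render callback (realtime thread): consume pre-rendered samples.
// It never blocks on the worker; an underrun becomes silence, not a stall.
void renderCallback(SpscSampleFifo& fifo, float* out, int frames) {
    for (int i = 0; i < frames; ++i) {
        if (!fifo.pop(out[i]))
            out[i] = 0.0f;    // worker fell behind: output silence for this sample
    }
}
```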

  • vov
    edited September 2019

    @espiegel123 said:

    @vov said:

    @NeonSilicon said:
    By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.

    Ok, there’s some conflicting information. But the question is why Windows for example can utilize multicore audio processing, but iOS can’t. At least I don’t see any evidence of that.

    I think you are misunderstanding some key points that @NeonSilicon made and mis-summarizing them.

    He has tried to point out to you that:

    • iOS audio apps can use multiple processes/cores
    • the audio render thread itself -- the one that handles the actual playback is single-threaded.
    • if an app is careful, it can create threads that do some audio processing (ahead of time) in threads other than the render thread. When that happens, they send the result to the render thread.

    I think I have that straight.

    Because of the nature of mobile computing and running on so little power, iOS (and probably other mobile OSs, but I don't really know much about them) has to manage CPU speed differently than on desktops and laptops. I think that this necessitates some differences in how realtime processes are treated.

    Interesting. Could you give some examples of apps utilizing multi-core audio processing on iOS?
    I don't know of any, to be honest.
    As for power consumption, my iPad Pro loses its charge faster than my old tablet/laptop transformer.

  • @vov said:
    So you're saying the main reason is that iOS, unlike other systems, doesn't provide what you called "locks and timers"? That's why there's no multi-core audio processing on iOS? That's why developers can't implement it?

    I have been corrected. iOS has "locks and timers". Now I should back away and let developers explain how synchronizing realtime threads for audio applications is complex and prone to bugs in the real world.

    Remember the 10 check stands? What if you put 5 buyers in 5 queues and you want all of them to reach the checkout and be serviced with 10-second precision? To do that, the store has to provide a way for the 5-person shopping team (your app) to alert the clerks to take these customers out of order and service them in parallel. If iOS has such a feature for the developer to invoke, then you might get multi-core audio support, but I'm sure this is just an analogy. The details get a lot more confusing and are above my pay grade. Great multi-threading developers are in high demand because of the complexity of these designs.

    I'm trying to translate the specifics into concepts that show it's a service iOS was not architected to provide, because coordination of core processing is not required to run a lot of apps in parallel. It's just this realtime audio sync requirement that makes the difference between OS X and iOS.

    For audio, it's generally better to keep buying faster clocks for your CPUs and ensure precise control over event timing.

  • @espiegel123 said:

    @vov said:

    @NeonSilicon said:
    By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.

    Ok, there’s some conflicting information. But the question is why Windows for example can utilize multicore audio processing, but iOS can’t. At least I don’t see any evidence of that.

    I think you are misunderstanding some key points that @NeonSilicon made and mis-summarizing them.

    He has tried to point out to you that:

    • iOS audio apps can use multiple processes/cores
    • the audio render thread itself -- the one that handles the actual playback is single-threaded.
    • if an app is careful, it can create threads that do some audio processing (ahead of time) in threads other than the render thread. When that happens, they send the result to the render thread.

    I think I have that straight.

    Because of the nature of mobile computing and running on so little power, iOS (and probably other mobile OSs, but I don't really know much about them) has to manage CPU speed differently than on desktops and laptops. I think that this necessitates some differences in how realtime processes are treated.

    That is a good summary, I think.
    At least, that was my interpretation of the discussion.

  • vov
    edited September 2019

    @McD
    So you think there are some features missing in iOS after all?
    I wonder what it is. How deep is the underlying problem?
    Multicore for audio processing seems to be pretty standard on other systems.
    And I would also like to know if anyone has actually tried to implement multi-core audio processing on iOS?

  • @espiegel123 said:

    @vov said:

    @NeonSilicon said:
    By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.

    Ok, there’s some conflicting information. But the question is why Windows for example can utilize multicore audio processing, but iOS can’t. At least I don’t see any evidence of that.

    I think you are misunderstanding some key points that @NeonSilicon made and mis-summarizing them.

    He has tried to point out to you that:

    • iOS audio apps can use multiple processes/cores
    • the audio render thread itself -- the one that handles the actual playback is single-threaded.
    • if an app is careful, it can create threads that do some audio processing (ahead of time) in threads other than the render thread. When that happens, they send the result to the render thread.

    I think I have that straight.

    Because of the nature of mobile computing and running on so little power, iOS (and probably other mobile OSs, but I don't really know much about them) has to manage CPU speed differently than on desktops and laptops. I think that this necessitates some differences in how realtime processes are treated.

    That's close enough. The only thing I would add is that I can also add latency to the output audio. This is actually pretty typical and there are methods to inform the host how much latency I've added. There are DSP techniques that will add latency. For example processing that depends on Fourier transforms. Doing spectral processing off of the main thread would be a fairly typical thing to do. You have to be really careful about how you synchronize the buffers and never lock the realtime thread, but there are standard techniques to do this.
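
A concrete way to see where that latency comes from, assuming simple block-based (for example, FFT-sized) processing: the unit has to collect a whole block before a worker can transform it, so what the render callback emits is always one block behind, and that one block is what gets reported to the host (in an AUv3, through the audio unit's latency property). A minimal C++ sketch with the actual spectral work left out:

```cpp
#include <vector>

// Pure one-block delay line standing in for block-based processing.
class BlockDelayStage {
public:
    explicit BlockDelayStage(int blockSize)
        : blockSize_(blockSize), input_(blockSize, 0.0f), output_(blockSize, 0.0f) {}

    // Called from the render thread, one sample at a time for clarity.
    float process(float in) {
        input_[fill_] = in;
        const float delayed = output_[fill_];   // result from the previous block
        if (++fill_ == blockSize_) {
            // A real plug-in would hand the freshly collected block to a
            // worker/FFT here and fetch the block processed last cycle;
            // this sketch just passes it through, which shows the delay.
            input_.swap(output_);
            fill_ = 0;
        }
        return delayed;
    }

    // What the unit would report to the host as added latency.
    double latencySeconds(double sampleRate) const {
        return static_cast<double>(blockSize_) / sampleRate;
    }

private:
    int blockSize_;
    int fill_ = 0;
    std::vector<float> input_, output_;
};
```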

  • @NeonSilicon said:

    @espiegel123 said:

    @vov said:

    @NeonSilicon said:
    By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.

    Ok, there’s some conflicting information. But the question is why Windows for example can utilize multicore audio processing, but iOS can’t. At least I don’t see any evidence of that.

    I think you are misunderstanding some key points that @NeonSilicon made and mis-summarizing them.

    He has tried to point out to you that:

    • iOS audio apps can use multiple processes/cores
    • the audio render thread itself -- the one that handles the actual playback is single-threaded.
    • if an app is careful, it can create threads that do some audio processing (ahead of time) in threads other than the render thread. When that happens, they send the result to the render thread.

    I think I have that straight.

    Because of the nature of mobile computing and running on so little power, iOS (and probably other mobile OSs, but I don't really know much about them) has to manage CPU speed differently than on desktops and laptops. I think that this necessitates some differences in how realtime processes are treated.

    That's close enough. The only thing I would add is that I can also add latency to the output audio. This is actually pretty typical and there are methods to inform the host how much latency I've added. There are DSP techniques that will add latency. For example processing that depends on Fourier transforms. Doing spectral processing off of the main thread would be a fairly typical thing to do. You have to be really careful about how you synchronize the buffers and never lock the realtime thread, but there are standard techniques to do this.

    So did you actually use multicore audio processing?

  • @vov said:
    @McD
    So you think there are some features missing in iOS after all?
    I wonder what it is. How deep is the underlying problem?
    Multicore for audio processing seems to be pretty standard on other systems.
    And I would also like to know if anyone has actually tried to implement multi-core audio processing on iOS?

    I wouldn't say there are any missing features from iOS. It has different goals than macOS though. For example, memory is handled differently. The virtual memory system is optimized differently between the two. It needs to be for optimizing resource usage.

    This is true for many of the systems that are similar but optimized differently between iOS and macOS. It's true for other systems too. If I write an application to run on an embedded system where I know exactly how all the resources are being used and I'm the only thing that is running, I get to do all sorts of things that I can't do on a multi-application OS.

    Personally, my mind is completely blown by what we can get away with on iOS and the sophistication of the audio apps that are available is pretty damn stunning.

  • @vov said:

    @NeonSilicon said:

    @espiegel123 said:

    @vov said:

    @NeonSilicon said:
    By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.

    Ok, there’s some conflicting information. But the question is why Windows for example can utilize multicore audio processing, but iOS can’t. At least I don’t see any evidence of that.

    I think you are misunderstanding some key points that @NeonSilicon made and mis-summarizing them.

    He has tried to point out to you that:

    • iOS audio apps can use multiple processes/cores
    • the audio render thread itself -- the one that handles the actual playback is single-threaded.
    • if an app is careful, it can create threads that do some audio processing (ahead of time) in threads other than the render thread. When that happens, they send the result to the render thread.

    I think I have that straight.

    Because of the nature of mobile computing and running on so little power, iOS (and probably other mobile OSs, but I don't really know much about them) has to manage CPU speed differently than on desktops and laptops. I think that this necessitates some differences in how realtime processes are treated.

    That's close enough. The only thing I would add is that I can also add latency to the output audio. This is actually pretty typical and there are methods to inform the host how much latency I've added. There are DSP techniques that will add latency. For example processing that depends on Fourier transforms. Doing spectral processing off of the main thread would be a fairly typical thing to do. You have to be really careful about how you synchronize the buffers and never lock the realtime thread, but there are standard techniques to do this.

    So did you actually use multicore audio processing?

    Yes. I'm sure many other audio developers on iOS do too.

  • @NeonSilicon said:

    @vov said:

    @NeonSilicon said:

    @espiegel123 said:

    @vov said:

    @NeonSilicon said:
    By not use them in DSP code, I mean in any situation on any platform. There are a whole raft of things you can't do in a realtime setting. These are true on any system.

    Ok, there’s some conflicting information. But the question is why Windows for example can utilize multicore audio processing, but iOS can’t. At least I don’t see any evidence of that.

    I think you are misunderstanding some key points that @NeonSilicon made and mis-summarizing them.

    He has tried to point out to you that:

    • iOS audio apps can use multiple processes/cores
    • the audio render thread itself -- the one that handles the actual playback is single-threaded.
    • if an app is careful, it can create threads that do some audio processing (ahead of time) in threads other than the render thread. When that happens, they send the result to the render thread.

    I think I have that straight.

    Because of the nature of mobile computing and running on so little power, iOS (and probably other mobile OSs, but I don't really know much about them) has to manage CPU speed differently than on desktops and laptops. I think that this necessitates some differences in how realtime processes are treated.

    That's close enough. The only thing I would add is that I can also add latency to the output audio. This is actually pretty typical and there are methods to inform the host how much latency I've added. There are DSP techniques that will add latency. For example processing that depends on Fourier transforms. Doing spectral processing off of the main thread would be a fairly typical thing to do. You have to be really careful about how you synchronize the buffers and never lock the realtime thread, but there are standard techniques to do this.

    So did you actually use multicore audio processing?

    Yes. I'm sure many other audio developers on iOS do too.

    Could you name the app?
    Also, I don't think memory allocation is a problem for me; I just can't get enough processing power from my iPad Pro 2018 to run a project that would be pretty standard on Win/Mac.
    I can't get even close to that. I think it's because there's no proper multi-core audio processing to run enough AUv3s.

  • AudioGus
    edited September 2019

    Hmm, big thread here already (ie. TLDR) but I asked Michael this about a week ago and here was his reply at the top of this page...

    https://forum.audiob.us/discussion/34613/when-will-we-all-be-on-the-same-page-re-app-prices/p2

    @Audogus said:
    Any thoughts / comments on the multithreading single core / multicore situation? Rumour has it that the multicore scores are irrelevant to audio app users. Thoughts? Why? Hard? Heat? IOS?

    @Michael said... Yeah, that's generally true. Audio rendering – as in, the chain of processing that ends with sound coming out of a speaker somewhere – is typically one thread only, so only on one core. The audio system has a single high-priority thread which does the work of producing audio for each source and mixing it together, and it has a hard deadline for each render interval. Splitting out the work to multiple threads isn't feasible because the system just isn't designed to do that without running the risk of breaching those deadlines. It may not always be that way, especially for rendering across multiple apps in parallel, as far as I'm aware. But it's not that way right now.

  • vov
    edited September 2019

    @AudioGus said:
    Hmm, big thread here already (ie. TLDR) but I asked Michael this just the other day and here was his reply at the top of this page...

    https://forum.audiob.us/discussion/34613/when-will-we-all-be-on-the-same-page-re-app-prices/p2

    @Audogus said:
    Any thoughts / comments on the multithreading single core / multicore situation? Rumour has it that the multicore scores are irrelevant to audio app users. Thoughts? Why? Hard? Heat? IOS?

    @Michael said... Yeah, that's generally true. Audio rendering – as in, the chain of processing that ends with sound coming out of a speaker somewhere – is typically one thread only, so only on one core. The audio system has a single high-priority thread which does the work of producing audio for each source and mixing it together, and it has a hard deadline for each render interval. Splitting out the work to multiple threads isn't feasible because the system just isn't designed to do that without running the risk of breaching those deadlines. It may not always be that way, especially for rendering across multiple apps in parallel, as far as I'm aware. But it's not that way right now.

    Why is it standard practice on other systems? And it does provide more performance capabilities. It's so weird. There must be a technological limitation, right?

  • McD
    edited September 2019

    @vov said:
    @McD
    So you think there are some features missing in iOS after all?

    There are features missing in everything. That's why all software gets updated.

    I wonder what it is. How deep is the underlying problem?

    Only about 2 inches really.

    Multicore for audio processing seems to be pretty standard on other systems.

    There is no other product like the iPad with better features for making music in realtime.

    And I would also like to know if anyone has actually tried to implement multi-core audio processing on iOS?

    I'm sure they have. Maybe they already have. Maybe they will speak up and find you as another customer. I'm assuming you're not going to be developing anytime soon... but if you are, please make something to suit your needs.

    I love my iPad(s), my iPhone(s) and my MacBook. They are all stable but different. Pick the right tool for the job. Or buy another tool. iPad is not broken. Not to say that they won't change something to make it better but different.

  • I don't know - actually, I don't even have experience with audio on other platforms, so I'm probably the wrong person to be talking about it =)

  • @McD said:

    @vov said:
    @McD
    So you think there are some features missing in iOS after all?

    There are features missing in everything. That's why all software gets updated.

    I don’t see much progress in this area unfortunately.

    I wonder what it is. How deep is the underlying problem?

    Only about 2 inches really.

    I still wonder where the problem is: the processors' architecture, iOS, ...?

    Multicore for audio processing seems to be pretty standard on other systems.

    There is no other product like the iPad with better features for making music in realtime.

    It’s a very questionable statement, but it’s a matter of style and taste. Some need only their own voice.

    And I would also like to know if anyone has actually tried to implement multi-core audio processing on iOS?

    I'm sure they have. Maybe they already have. Maybe they will speak up and find you as another customer. I'm assuming you're not going to be developing anytime soon... but if you are, please make something to suit your needs.

    I love my iPad(s), my iPhone(s) and my MacBook. They are all stable but different. Pick the right tool for the job. Or buy another tool. iPad is not broken. Not to say that they won't change something to make it better but different.

    No, I’m not into software development, just trying to understand how long this situation will persist, as it’s been quite a while.
