
Convolution Pro by Jens Guell

Comments

  • @Charlesalbert said:
    ...
    Our convolution engine (the exact same codebase) uses, on a quite old 2012 Mac Mini with a 128-second stereo IR, a maximum of 3% CPU. THREE PERCENT! On iOS, with the same configuration on a latest iPad Pro, it's around 20 to 40 percent (buffer size 1024), up to 90% with 256 samples in AUM...

    This is the most important statement. It starts to make sense once you try to understand why audio glitches appear at a certain buffer size and why cpu load is no indicator of glitches/crackles.
    First, CPU load is usually measured in very coarse time intervals. In AUM it seems to be 1000 milliseconds.
    A 256-sample buffer, on the other hand, has to be rendered in under 6 milliseconds, so if even one of the roughly 172 buffers per second is handled by the CPU too late, you already get crackles, even though the CPU load indicator does not yet indicate an overload (because the average load over all 172 buffers is well below 100%).
    An efficient algorithm not only reduces the average cpu load but also evens out potential peaks in cpu load by doing calculations in a way that causes cpu load to be as steady as possible. The current algorithm is certainly optimized for one objective but, as it seems, not for loading as many instances as possible at low audio buffer sizes.
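The numbers in this explanation can be checked with a few lines (a minimal sketch; the 44.1 kHz sample rate is an assumption, as the post doesn't state one):

```python
# Minimal sketch of the point above: per-buffer deadlines vs. averaged
# CPU load. Assumes a 44.1 kHz sample rate (not stated in the post).
SAMPLE_RATE = 44_100
BUFFER = 256

deadline_ms = BUFFER / SAMPLE_RATE * 1000        # time budget per render callback
buffers_per_second = SAMPLE_RATE / BUFFER

print(f"deadline per buffer: {deadline_ms:.2f} ms")      # ~5.80 ms
print(f"buffers per second:  {buffers_per_second:.0f}")  # ~172

# One late buffer is an audible crackle, yet the averaged load stays low:
# 171 buffers at 2 ms each plus a single 8 ms spike (a missed deadline).
render_times_ms = [2.0] * 171 + [8.0]
avg_load = sum(render_times_ms) / (len(render_times_ms) * deadline_ms)
print(f"average load over 1 s: {avg_load:.0%}")  # ~35%, yet it glitched
```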

    So iOS is either buggy as hell or there is a serious audio system misconception/misconfiguration on Apple's mobile devices. Multi-threading and multi-core support seem to be completely missing with AUv3. This phenomenon grows exponentially with higher performance demands.

    Granted, it's an OS that's clearly not optimized for realtime audio processing, but synth developers have faced more challenging problems years ago when we only had single-core CPUs and yet they've managed to build polyphonic synthesizers, reverbs and (short-sample) convolutors on CPUs that were considerably weaker than a single cpu core today.

    However, we reduced the CPU hit as much as possible with the current implementation of the JAX Convolutors. The latest update will come soon. More is just not possible, unless something fundamental changes on iOS/iPadOS for audio processing. We tested on the latest devices, and it seems there is no POWER at all in those CPUs; it is just ridiculous. It feels like building on a single-core desktop computer from 1990.

    If that were true then apps like Gadget, NS2, BM3, Caustic, GR16 and many more wouldn't exist. And they run quite well at a 256-sample buffer size (on current iDevices).

    How could such powerful machines be so poor at rendering audio?

    Do engineers at Apple not care at all about audio performance?

    See above. Sometimes it takes patience and dedication to find a solution.
    But that is what makes a product great in the end.

  • @rs2000

    ‘...Sometimes it takes patience and dedication to find a solution.
    But that is what makes a product great in the end.’...

    This +1

  • Patience may be the key requirement here...

  • @TheOriginalPaulB said:
    Patience may be the key requirement here...

    And pride.

    He released an app that he thought was almost perfect and it wasn't.
    It's the one problem with perfectionists, me being one of them.

    He'll get over it, if it's that, but in the interim the conversation about convolution reverbs and the technology is quite inspirational.

    Once he gathers his strength, gets over his own self-criticism and takes a good look at the problems generated by his interpretation of Apple's coding environment and the limitations imposed by Apple, he'll be cool.

    The app has potential; stumbling at the first hurdle is to be expected.

    If it's impossible right now then at the very least we would have all learnt something.

  • i know Jens only from email correspondence.

    he's a person of passion and perfection.

    he's acknowledged, to a person he doesn't even know (me), that he is deficient with regard to communication.

    that he's not good at communicating on a forum (partly language barrier).

    that he's terrible at marketing.

    that creating audio apps is not financially rewarding, but he continues creating out of passion for music, and a 'certain strong craziness'.

    imo this says a lot about his character.

    i suspect that some of his frustrations come from apple's lack of documentation, and negative feedback on app store (that would be disheartening). to receive negative feedback on apps that you've released for free would be maddening.

    i enjoy his apparent bluntness, passion, frenzied responses, eccentricities. i believe it comes from a good place.

    he always fixes issues when they arise, sometimes with customised work-arounds in assembler (how many developers do that?).

    sometimes people need encouragement. ([email protected])

    i feel fortunate to have his apps on ios. they are unique.

  • @frond said:
    i know Jens only from email correspondence.

    ...

    I initially thought as much.

    It would be great to communicate with him.

    Thank you.

    How could such powerful machines be so poor at rendering audio?

    Do engineers at Apple not care at all about audio performance?

    I read Jens' rant and I came to the conclusion that he doesn't really understand the problems that he's facing, TBH. I'm sure there are problems with Apple's stuff and the OS, like there are with any system, but developer ignorance (this stuff is hard - I'm not knocking him) is typically the reason for poorly performing code.

    The more you lower the rendering buffer size in most AUv3 host applications, the higher the extreme CPU peaks become, regardless of what you do in the code. Disproportionately so. This behavior is anything but normal. A fixed circular buffer should keep the performance nearly constant regardless of which host buffer size is selected!

    This is simply not true, particularly for the kind of FFT type thing he's doing here. This is going to have a huge effect on caching - and if you don't know that, you're going to struggle to get the performance you're looking for.
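To make the buffer-size dependence concrete, here is a rough cost model for uniformly partitioned FFT convolution, the standard technique for long IRs. The function name and constants are illustrative, not the plugin's actual code; the point is that halving the host buffer roughly doubles the spectral-multiply work per second for a long IR:

```python
import math

# Rough cost model for uniformly partitioned FFT convolution with host
# buffer n and IR length L samples: each render callback does one
# FFT/IFFT pair (~2n log2(2n) ops) plus L/n spectral multiply-accumulates
# of size n (~L ops total). Callbacks per second = fs / n, so:
#   cost/sec ≈ fs*L/n + fs*log2(2n)  — dominated by fs*L/n for long IRs.
def ops_per_second(fs, ir_seconds, n):
    L = fs * ir_seconds
    per_block = 2 * n * math.log2(2 * n) + L  # FFT/IFFT + spectral MACs
    return (fs / n) * per_block

fs = 44_100
for n in (1024, 512, 256):
    print(n, f"{ops_per_second(fs, 128, n):.3e}")
```

For a 128-second IR, going from a 512- to a 256-sample buffer almost exactly doubles the modeled cost, which is consistent with the disproportionate CPU peaks reported at small buffers.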

    Our convolution engine (the exact same codebase) uses, on a quite old 2012 Mac Mini with a 128-second stereo IR, a maximum of 3% CPU. THREE PERCENT! On iOS, with the same configuration on a latest iPad Pro, it's around 20 to 40 percent (buffer size 1024), up to 90% with 256 samples in AUM...

    He's comparing a machine running on mains power to a battery device. In addition, the battery device uses an ARM chip, whereas the Mac uses an Intel chip. Intel desktop processors are heavily optimized for speed over power consumption, even in something like a Mini. ARM chips are optimized for power consumption (which is why they are in things like phones) rather than performance. So Intel chips will always do better at stuff like FFT. In addition, an iPad, like any battery-powered device (your MacBook Pro on battery, for example), will throttle the CPU to preserve battery and to prevent it melting down. You want it to do this - trust me, you do. One final point - the Mini probably has more memory as well, and (this is a guess, but I suspect it's right) its CPU/memory architecture better utilizes memory through things like caching.

    So iOS is either buggy as hell or there is a serious audio system misconception/misconfiguration on Apple's mobile devices. Multi-threading and multi-core support seem to be completely missing with AUv3. This phenomenon grows exponentially with higher performance demands.

    Multi-threading and multi-core are two completely different things, and a developer using them interchangeably is a bad sign. (Using threads to improve performance doesn't actually work well unless you pay attention to the number of cores you have and use a well-designed thread pool; done poorly, it can reduce performance.) Multi-threading for realtime code is almost always a bad idea, unless you are very careful/skillful or organize your code in such a way that things can work independently. And there are all kinds of traps here (for example, spinlocks work really well until you push the CPU, when suddenly they destroy your performance). That said, there are ways that Apple could make multi-core stuff work in plugins, but they'd have to give control over this to the host (e.g. if I'm a host, I may not want you using two cores, as I'm running two plugin chains in parallel, which is a much better use of resources).
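As a purely structural sketch of the "pay attention to the number of cores" point (Python's GIL means this won't show a real speed-up, and `render_partition` is a hypothetical stand-in for DSP work, not anything from the app):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# The pattern from the post above: size the worker pool to the core
# count instead of spawning one thread per task. Oversubscribing
# CPU-bound threads adds context-switch overhead without throughput.
def render_partition(i):
    # hypothetical stand-in for one chunk of DSP work
    return sum(x * x for x in range(1_000))

workers = os.cpu_count() or 1
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(render_partition, range(8)))

print(len(results))  # one result per task, 8
```

In real realtime audio code even this is too risky inside the render callback; worker threads, if used at all, hand results across lock-free queues rather than blocking the audio thread.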

    Basically this stuff is complex, and bad mouthing Apple is not a great look TBH.

    Apple's high-performance Accelerate framework is said to be hardware-optimized, and we additionally used NEON optimizations extensively all over the code.

    If you use NEON optimizations in the wrong way you will not get speed-ups. Getting the kinds of speed-ups you want out of a CPU (Intel or AMD) is hard: it requires a lot of thought about data organization and an understanding of how your CPU and cache work, and is in some ways a black art (high-performance code is hard). If you're not getting significant improvements, you're probably doing something wrong somewhere, because other people DO get significant improvements. Accelerate helps, yes - but it can only do so much.

    Also, annoyingly, fast code written for an Intel chip may be dog slow on an ARM chip (and vice versa). They work differently, and that matters if you must squeeze out every ounce of performance.
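One concrete data-layout example behind both points: SIMD libraries (Accelerate's vDSP, NEON intrinsics) generally want planar, "structure of arrays" buffers, while audio often arrives interleaved L/R. This is a generic sketch of the idea, not the app's code:

```python
# Generic sketch: deinterleave stereo samples into planar buffers once
# per callback, so the hot DSP loops run over contiguous, cache-friendly
# memory on either Intel or ARM. (vDSP even ships dedicated routines
# for this; plain Python slicing stands in for them here.)
interleaved = [0.1, -0.1, 0.2, -0.2, 0.3, -0.3]  # L, R, L, R, ...
left = interleaved[0::2]
right = interleaved[1::2]

print(left)   # [0.1, 0.2, 0.3]
print(right)  # [-0.1, -0.2, -0.3]
```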

    It is useless if the system (obviously) throttles the audio performance artificially. This is not just a suspicion. This is a fact.

    If you're running very demanding CPU bound code, such as an FFT, you are going to heat up the CPU. So unless you're okay with your iPad melting -> CPU throttling is a fact of life. In other news - gravity is really annoying for plane designers.

    I get this stuff is hard, and frustrating. And that's fine. But still.

  • @cian said:

    I read Jens' rant and I came to the conclusion that he doesn't really understand the problems that he's facing, TBH. ...

    I get this stuff is hard, and frustrating. And that's fine. But still.

    It’s these sorts of posts which make me appreciate even more the challenges of developers. My brief forays into coding have always quickly revealed my limited insights into what I’m doing as I anticipate things should work one way and when I run the code it does something completely unexpected.

    As for Jens, my experiences communicating with him haven’t been enjoyable at all and it seems like he’d benefit from someone else handling the public communication for him. While ultimately it doesn’t really matter if my communication with him is good or not because I have nothing of significance to offer him, his prickly demeanor with other developers and Apple would mean he’d have a more difficult and longer road to gaining insights into how he can improve his apps when he runs into roadblocks.

    I’m certainly the pot calling the kettle black when it comes to getting in my own way and burning bridges with people who are in a position to help me move forward. I hope Jens is able and willing to manage some of his impulses to lash out when things don’t go his way, as this increased flexibility will be much more productive and satisfying.

  • Well, if you have a look at the video published today on discchord, the CPU seems to have been considerably improved...

  • @cuscolima said:
    Well, if you have a look at the video published today on discchord, the CPU seems to have been considerably improved...

    Which buffer size??

    @cian : thanks for sharing your thoughts. I think a lot of the time when someone says something passionately, people reflexively assume that the degree of passion confers credibility.

    @cuscolima : with large buffers the version in the app store works well.

  • @espiegel123 said:
    @cian : thanks for sharing your thoughts. I think a lot of the time when someone says something passionately, people reflexively assume that the degree of passion confers credibility.

    @cuscolima : with large buffers the version in the app store works well.

    With a buffer of 1024 and a 4 s IR I get something like 20% on my DSP indicator in AUM (iPad Pro 2017). The video does not show the version that is currently available on the App Store, which is limited to 16-second IRs. Here it goes up to something like 120 seconds with 11% DSP...

    And you can see that he is importing IRs so I assume that it is the pro version.

  • @cuscolima said:

    @espiegel123 said:
    @cian : thanks for sharing your thoughts. I think a lot of the time when someone says something passionately, people reflexively assume that the degree of passion confers credibility.

    @cuscolima : with large buffers the version in the app store works well.

    With a buffer of 1024 and a 4 s IR I get something like 20% on my DSP indicator in AUM (iPad Pro 2017). The video does not show the version that is currently available on the App Store, which is limited to 16-second IRs. Here it goes up to something like 120 seconds with 11% DSP...

    And you can see that he is importing IRs so I assume that it is the pro version.

    Cool!

  • How many of us have made logical assumptions about the behavior of IOS devices and
    learned the hard way that berating Apple doesn't lead to a solution?

    AUv3 means you can run multiple instances of an AUv3 synth or FX app, so, logically I'd like
    to use iSymphonic and set up a simple string quartet: violin, viola, cello, arco bass. Damn you Apple and/or Crudebyte. The 4th instance crashes.

    Calmer, more knowledgeable developers explain the issues to me, and it puts the problem back on my shoulders to find a solution. Still, Apple and Crudebyte get the refund requests as more of us learn that iOS music production is a constant series of trade-offs to reach our visions of a finished work.

    I'm glad Jens is working on this extreme corner case with massive IRs, filters and complex computational systems. Someone has to find the limits. I will continue to buy every JAX app to support his efforts. No personal communication required... let the man do the really hard work.

    There are many developers making iOS products that I feel this way about: @j_liljedahl, Christian Siedschlag of DDMF, and @brambos, for example: world-class programmers that make iOS apps.

  • @McD not to hijack but iSymphonic has 16 channels in one instance. I can imagine a few different ways to route this using only one instance and 4 channels. Solo cello is worth it.

  • @mjcouche said:
    @McD not to hijack but iSymphonic has 16 channels in one instance. I can imagine a few different ways to route this using only one instance and 4 channels. Solo cello is worth it.

    Hijack away. I love the Crudebyte sounds but tend to rarely use them due to a lot of issues:
    extremely slow load times, terrible UI for selecting a specific preset (I even tried programming MIDI controller knobs to Bank Select/PC's and gave up). I have similar feelings about using the sounds in BeatHawk, Noise, etc. I have decided that sampling into the 3 layers of NS2 is probably the right trade off for me of scaling in a DAW and quality of sounds.

    I'm glad Jens is working on this extreme corner case with massive IRs, filters and complex computational systems. Someone has to find the limits. I will continue to buy every JAX app to support his efforts. No personal communication required... let the man do the really hard work.

    Maybe so, but when a developer blames other people for their own lack of understanding I'm not super impressed.

  • @McD said:
    ... I have decided that sampling into the 3 layers of NS2 is probably the right trade off for me of scaling in a DAW and quality of sounds.

    Welcome to the club! Add a few sampled reverb tails from different instruments at different pitches and you have an outstanding reverb nobody else owns :D

  • edited January 2020

    Guys, I've just loaded 8 instances of Convolutor PE on 8 channels in AUM with Xynthesizr running over IAA and MIDI sync. AUM buffer set to 256 samples.
    You won't believe it.
    I got no crackles!!!
    What gives?
    As if more Convolutor PE instances would require less CPU than a single instance.

    Weird, isn't it?

    Edit:
    It only works when I do the following:
    First, set AUM's buffer to 1024.
    Then add the Convolutor PE instances, one after another.
    Then reduce the AUM audio buffer to 512, then to 256 samples.

    When I save the project, close and restart AUM and load the project again, CPU load goes above 100% and I get crackles again.

  • @rs2000 said:
    Guys, I've just loaded 8 instances of Convolutor PE on 8 channels in AUM with Xynthesizr running over IAA and MIDI sync. AUM buffer set to 256 samples.
    You won't believe it.
    I got no crackles!!!
    What gives?
    As if more Convolutor PE instances would require less CPU than a single instance.

    Weird, isn't it?

    That is weird.
    Do you have any theories?

    Try it with two in place rather than eight.
    See if that works because if so
    then it would provide the needed stability until Jens Guell tracks down the bugs.

  • @McD said:

    ... I have decided that sampling into the 3 layers of NS2 is probably the right trade off for me of scaling in a DAW and quality of sounds.

    How is that going? Is it as time consuming as Scott has said? I'm considering this but I definitely need audio tracks as well.

  • edited January 2020

    @Gravitas said:

    @rs2000 said:
    Guys, I've just loaded 8 instances of Convolutor PE on 8 channels in AUM with Xynthesizr running over IAA and MIDI sync. AUM buffer set to 256 samples.
    You won't believe it.
    I got no crackles!!!
    What gives?
    As if more Convolutor PE instances would require less CPU than a single instance.

    Weird, isn't it?

    That is weird.
    Do you have any theories?

    Try it with two in place rather than eight.
    See if that works because if so
    then it would provide the needed stability until Jens Guell tracks down the bugs.

    No, I cannot yet recognize any logic behind this behavior.
    Tried with two instances the way I've done it before, and it eats 40..48% CPU @256.
    Saved and re-loaded the AUM session, now it will grab between 60 and 80% CPU @256.

    Maybe @j_liljedahl or @brambos have seen something like this before?

  • @rs2000 said:

    ...

    No, I cannot yet recognize any logic behind this behavior.
    Tried with two instances the way I've done it before, and it eats 40..48% CPU @256.
    Saved and re-loaded the AUM session, now it will grab between 60 and 80% CPU @256.

    Maybe @j_liljedahl or @brambos have seen something like this before?

    My best guess:

    • 2 instances cause the cpu cores to throttle down to low power mode because full cpu power isn’t required
    • More instances will cause the cpu to go into full power mode, causing overall cpu load percentage to go down
  • edited January 2020

    @brambos said:

    ...

    My best guess:

    • 2 instances cause the cpu cores to throttle down to low power mode because full cpu power isn’t required
    • More instances will cause the cpu to go into full power mode, causing overall cpu load percentage to go down

    Do you know if there's a certain time interval that the CPU power management watches when making power decisions? Will a "spiky" use of CPU resources have different consequences than a more evenly distributed CPU load (analyzed in windows much smaller than the typical one-second CPU load measurements)?

    I have found that opening Atom's window when running Convolutor PE in AUM sometimes allows me to run with a 512 buffer on my iPad gen 6 without crackles.

  • @brambos

    Will running it with one instance and another CPU consuming app force the CPU to run at full load?

    Forgive my layman’s language here.

  • @Gravitas said:
    @brambos

    Will running it with one instance and another CPU consuming app force the CPU to run at full load?

    Forgive my layman’s language here.

    Nothing is more enigmatic than iOS cpu core management. In theory it should, but behavior seems to be different between different devices, iOS versions, etc. Just try it and see if it works for you :)

  • @mjcouche said:
    How is that going? Is it as time consuming as Scott has said? I'm considering this but I definitely need audio tracks as well.

    (Hijack continues.)

    It's a pain and takes too much work. I was waiting for SynthJacker to support longer sample times and that has happened but I haven't tested it yet.

    I also got sidetracked learning Mozaic scripting. That's a really fun set of challenges for a scripting addict.

    I tend to prefer good puzzles or idle sketching over the hard work of making finished goods and building a sample library for personal use is really hard work.

    If each of us who wants a ROM-pler-like DAW without AUs made 1-2 great Obsidian instruments, that would be nice. Share the pains and enjoy the gains.

    Two nice NS2 instruments went up for the holidays, so let's see what 2020 brings while we wait for audio tracks.

  • @brambos said:

    @Gravitas said:
    @brambos

    Will running it with one instance and another CPU consuming app force the CPU to run at full load?

    Forgive my layman’s language here.

    Nothing is more enigmatic than iOS cpu core management. In theory it should, but behavior seems to be different between different devices, iOS versions, etc. Just try it and see if it works for you :)

    Okay good to know.

    I’ve always found the CPU and RAM management on iOS to be strange, so I’ll file this away in the box of random things belonging to iOS, along with battery charging.

    Oh @McD, it isn’t hijacking. To get a convincing mockup of orchestral sounds one needs one or several decent reverbs, hence part of my interest in the reverbs that iOS has to offer.

    I must say I’m not impressed with iSymphonic at all. The VSCO free library is more impressive. Once combined, it is acceptable enough for a school classroom. As you were.

  • @brambos said:

    ...

    Nothing is more enigmatic than iOS cpu core management. In theory it should, but behavior seems to be different between different devices, iOS versions, etc. Just try it and see if it works for you :)

    Haha, and now watch these iPhone XR and KQ Dixie threads, recent iDevices seem to suffer from this phenomenon even more.
    On the positive side, if I have to insert a b*ttload of plugins into my project to keep the poor CPU from getting bored to death, then so be it; I just have to be aware :D
